Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-intel-next-2020-04-17' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

UAPI Changes:

- drm/i915/perf: introduce global sseu pinning
Allow userspace to request, at perf/OA open, the full SSEU configuration
of the system in order to benchmark 3D workloads, at the cost of not
being able to run media workloads. (Lionel)

Userspace changes: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4021

- drm/i915/perf: add new open param to configure polling of OA buffer
Let the application choose how often the OA buffer should be checked on
the CPU side for data availability, trading off CPU overhead against
the realtime nature of the data.

Userspace changes: https://patchwork.freedesktop.org/series/74655/

(i915 perf recorder is a tool to capture i915 perf data for viewing
in GPUVis.)

- drm/i915/perf: remove generated code
Removal of the machine-generated perf/OA test configurations from i915.
These were used by Mesa v17.1-18.0 and soon replaced by userspace-supplied
OA configurations. Removing the configs causes affected Mesa versions to
fall back to earlier kernel behaviour (potentially missing metrics).
(Lionel)

Cross-subsystem Changes:

- Backmerge of drm-next

- Includes tag 'topic/phy-compliance-2020-04-08' from
git://anongit.freedesktop.org/drm/drm-misc

Driver Changes:

- Fix for GitLab issue #27: Support 5k tiled dual DP display on SKL (Ville)
- Fix https://github.com/thesofproject/linux/issues/1719: Broken audio after
S3 resume on JSL platforms. (Kai)
- Add new Tigerlake PCI IDs (Swathi D.)
- Add missing Tigerlake W/As (Matt R.)
- Extend Wa_2006604312 to EHL (Matt A)
- Add DPCD link_rate quirk for Apple 15" MBP 2017 (v3) (Mario)
- Make Wa_14010229206 apply to all Tigerlake steppings (Swathi D.)
- Extend hotplug detect retry on TypeC connectors to 5 seconds (Imre)
- Yield the timeslice if caught waiting on a user semaphore (Chris)
- Limit the residual W/A batch to Haswell due to instability on IVB/BYT (Chris)
- TBT AUX should use TC power well ops on Tigerlake (Matt R)
- Update PMINTRMSK holding fw to make it effective for RPS (Francisco, Chris)
- Add YUV444 packed format support for skl+ (Stanislav)
- Invalidate OA TLB when closing perf stream to avoid corruption (Umesh)
- HDCP: fix Ri prime check done during link check (Oliver)
- Rearm heartbeat on sysfs interval change (Chris)
- Fix crtc nv12 etc. plane bitmasks for DPMS off (Ville)
- Treat idling as a RPS downclock event (Chris)
- Leave rps->cur_freq on unpark (Chris)
- Ignore short pulse when EDP panel powered off (Anshuman)
- Keep the engine awake until the next jiffie, to avoid ping-pong on
moderate load (Chris)
- Select the deepest available parking mode for rc6 on IVB (Chris)
- Optimizations to direct submission execlist path (Chris)
- Avoid NULL pointer dereference at intel_read_infoframe() (Chris)
- Fix mode private_flags comparison at atomic_check (Uma, Ville)
- Use forced codec wake on all gen9+ platforms (Kai)
- Schedule oa_config after modifying the contexts (Chris, Lionel)
- Explicitly reset both reg and context runtime on GPU reset (Chris)
- Don't enable DDI IO power on a TypeC port in TBT mode (Imre)
- Fixes to TGL, ICL and EHL vswing tables (Jose)
- Fill all the unused space in the GGTT (Chris, Imre)
- Ignore readonly failures when updating relocs (Chris)
- Attempt to find free space earlier for non-pinned VMAs (Chris)
- Only wait for GPU activity before unbinding a GGTT fence (Chris)
- Avoid data loss on small userspace perf OA polling (Ashutosh)
- Watch out for unevictable nodes during eviction (Matt A)
- Reinforce the barrier after GTT updates for Ironlake (Chris)

- Convert various parts of driver to use drm_device based logging (Wambui, Jani)
- Avoid dereferencing already closed context for engine (Chris)
- Enable non-contiguous pipe fusing (Anshuman)
- Add HW readout of Gamma LUT on ICL (Swati S.)
- Use explicit flag to mark unreachable intel_context (Chris)
- Cancel a hung context if already closed (Chris)
- Add DP VSC/HDR SDP data structures and write routines (Gwan-gyeong)
- Report context-is-closed prior to pinning at execbuf (Chris)
- Mark timeline->cacheline as destroyed after rcu grace period (Chris)
- Avoid live-lock with i915_vma_parked() (Chris)
- Avoid gem_context->mutex for simple vma lookup (Chris)
- Rely on direct submission to the queue (Chris)
- Configure DSI transcoder to operate in TE GATE command mode (Vandita)
- Add DSI vblank calculation for command mode (Vandita)
- Disable periodic command mode if programmed by GOP (Vandita)
- Use private flags to indicate TE in cmd mode (Vandita)
- Make fences a nice-to-have for FBC on GEN9+ (Jose)
- Fix work queuing issue with mixed virtual engine/physical engine
submissions (Chris)
- Drop final few uses of drm_i915_private.engine (Chris)
- Return early after MISSING_CASE for write_dp_sdp (Chris)
- Include port sync state in the state dump (Ville)
- ELSP workaround switching back to a completed context (Chris)
- Include priority info in trace_ports (Chris)
- Allow for different modes of interruptible i915_active_wait (Chris)
- Split eb_vma into its own allocation (Chris)
- Don't read perf head/tail pointers outside critical section (Lionel)
- Pause CS flow before execlists reset (Chris)
- Make fence revocation unequivocal (Chris)
- Drop cached obj->bind_count (Chris)
- Peek at the next submission for error interrupts (Chris)
- Utilize rcu iteration of context engines (Chris)
- Keep a per-engine request pool for power management ops (Chris)
- Refactor port sync code into normal modeset flow (Ville)
- Check current i915_vma.pin_count status first on unbind (Chris)
- Free request pool from virtual engines (Chris)
- Flush all the reloc_gpu batch (Chris)
- Make exclusive awaits on i915_active optional and allow async waits (Chris)
- Wait until the context is finally retired before releasing engines (Chris)

- Prefer '%ps' for printing function symbol names (Chris)
- Allow setting generic data pointer on intel GT debugfs (Andi)
- Constify DP link computation code more (Ville)
- Simplify MST master transcoder computation (Ville)
- Move TRANS_DDI_FUNC_CTL2 programming where it belongs (Ville)
- Move icl_get_trans_port_sync_config() into the DDI code (Ville)
- Add definitions for VRR registers and bits (Aditya)
- Refactor hardware fence code (Chris)
- Start passing latency as parameter to WM calculation (Stanislav)
- Kernel selftest and debug tracing improvements (Matt A, Chris, Mika)
- Fixes to CI found corner cases and lockdep splats (Chris)
- Overall fixes and refactoring to GEM code (Chris)
- Overall fixes and refactoring to display code (Ville)
- GuC/HuC code improvements (Daniele, Michal Wa)
- Static code checker fixes (Nathan, Ville, Colin, Chris)
- Fix spelling mistake (Chen)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200417111548.GA15033@jlahtine-desk.ger.corp.intel.com

+5330 -4318
+3 -3
Documentation/gpu/i915.rst
··· 391 391 GTT Fences and Swizzling 392 392 ------------------------ 393 393 394 - .. kernel-doc:: drivers/gpu/drm/i915/i915_gem_fence_reg.c 394 + .. kernel-doc:: drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c 395 395 :internal: 396 396 397 397 Global GTT Fence Handling 398 398 ~~~~~~~~~~~~~~~~~~~~~~~~~ 399 399 400 - .. kernel-doc:: drivers/gpu/drm/i915/i915_gem_fence_reg.c 400 + .. kernel-doc:: drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c 401 401 :doc: fence register handling 402 402 403 403 Hardware Tiling and Swizzling Details 404 404 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 405 405 406 - .. kernel-doc:: drivers/gpu/drm/i915/i915_gem_fence_reg.c 406 + .. kernel-doc:: drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c 407 407 :doc: tiling swizzling details 408 408 409 409 Object Tiling IOCTLs
+3 -1
drivers/char/agp/intel-gtt.c
··· 846 846 unsigned int flags) 847 847 { 848 848 intel_private.driver->write_entry(addr, pg, flags); 849 + readl(intel_private.gtt + pg); 849 850 if (intel_private.driver->chipset_flush) 850 851 intel_private.driver->chipset_flush(); 851 852 } ··· 872 871 j++; 873 872 } 874 873 } 875 - wmb(); 874 + readl(intel_private.gtt + j - 1); 876 875 if (intel_private.driver->chipset_flush) 877 876 intel_private.driver->chipset_flush(); 878 877 } ··· 1106 1105 1107 1106 static void i9xx_chipset_flush(void) 1108 1107 { 1108 + wmb(); 1109 1109 if (intel_private.i9xx_flush_page) 1110 1110 writel(1, intel_private.i9xx_flush_page); 1111 1111 }
+96
drivers/gpu/drm/drm_dp_helper.c
··· 1238 1238 { OUI(0x00, 0x00, 0x00), DEVICE_ID('C', 'H', '7', '5', '1', '1'), false, BIT(DP_DPCD_QUIRK_NO_SINK_COUNT) }, 1239 1239 /* Synaptics DP1.4 MST hubs can support DSC without virtual DPCD */ 1240 1240 { OUI(0x90, 0xCC, 0x24), DEVICE_ID_ANY, true, BIT(DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) }, 1241 + /* Apple MacBookPro 2017 15 inch eDP Retina panel reports too low DP_MAX_LINK_RATE */ 1242 + { OUI(0x00, 0x10, 0xfa), DEVICE_ID(101, 68, 21, 101, 98, 97), false, BIT(DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS) }, 1241 1243 }; 1242 1244 1243 1245 #undef OUI ··· 1535 1533 return num_bpc; 1536 1534 } 1537 1535 EXPORT_SYMBOL(drm_dp_dsc_sink_supported_input_bpcs); 1536 + 1537 + /** 1538 + * drm_dp_get_phy_test_pattern() - get the requested pattern from the sink. 1539 + * @aux: DisplayPort AUX channel 1540 + * @data: DP phy compliance test parameters. 1541 + * 1542 + * Returns 0 on success or a negative error code on failure. 1543 + */ 1544 + int drm_dp_get_phy_test_pattern(struct drm_dp_aux *aux, 1545 + struct drm_dp_phy_test_params *data) 1546 + { 1547 + int err; 1548 + u8 rate, lanes; 1549 + 1550 + err = drm_dp_dpcd_readb(aux, DP_TEST_LINK_RATE, &rate); 1551 + if (err < 0) 1552 + return err; 1553 + data->link_rate = drm_dp_bw_code_to_link_rate(rate); 1554 + 1555 + err = drm_dp_dpcd_readb(aux, DP_TEST_LANE_COUNT, &lanes); 1556 + if (err < 0) 1557 + return err; 1558 + data->num_lanes = lanes & DP_MAX_LANE_COUNT_MASK; 1559 + 1560 + if (lanes & DP_ENHANCED_FRAME_CAP) 1561 + data->enhanced_frame_cap = true; 1562 + 1563 + err = drm_dp_dpcd_readb(aux, DP_PHY_TEST_PATTERN, &data->phy_pattern); 1564 + if (err < 0) 1565 + return err; 1566 + 1567 + switch (data->phy_pattern) { 1568 + case DP_PHY_TEST_PATTERN_80BIT_CUSTOM: 1569 + err = drm_dp_dpcd_read(aux, DP_TEST_80BIT_CUSTOM_PATTERN_7_0, 1570 + &data->custom80, sizeof(data->custom80)); 1571 + if (err < 0) 1572 + return err; 1573 + 1574 + break; 1575 + case DP_PHY_TEST_PATTERN_CP2520: 1576 + err = drm_dp_dpcd_read(aux, 
DP_TEST_HBR2_SCRAMBLER_RESET, 1577 + &data->hbr2_reset, 1578 + sizeof(data->hbr2_reset)); 1579 + if (err < 0) 1580 + return err; 1581 + } 1582 + 1583 + return 0; 1584 + } 1585 + EXPORT_SYMBOL(drm_dp_get_phy_test_pattern); 1586 + 1587 + /** 1588 + * drm_dp_set_phy_test_pattern() - set the pattern to the sink. 1589 + * @aux: DisplayPort AUX channel 1590 + * @data: DP phy compliance test parameters. 1591 + * 1592 + * Returns 0 on success or a negative error code on failure. 1593 + */ 1594 + int drm_dp_set_phy_test_pattern(struct drm_dp_aux *aux, 1595 + struct drm_dp_phy_test_params *data, u8 dp_rev) 1596 + { 1597 + int err, i; 1598 + u8 link_config[2]; 1599 + u8 test_pattern; 1600 + 1601 + link_config[0] = drm_dp_link_rate_to_bw_code(data->link_rate); 1602 + link_config[1] = data->num_lanes; 1603 + if (data->enhanced_frame_cap) 1604 + link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 1605 + err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, link_config, 2); 1606 + if (err < 0) 1607 + return err; 1608 + 1609 + test_pattern = data->phy_pattern; 1610 + if (dp_rev < 0x12) { 1611 + test_pattern = (test_pattern << 2) & 1612 + DP_LINK_QUAL_PATTERN_11_MASK; 1613 + err = drm_dp_dpcd_writeb(aux, DP_TRAINING_PATTERN_SET, 1614 + test_pattern); 1615 + if (err < 0) 1616 + return err; 1617 + } else { 1618 + for (i = 0; i < data->num_lanes; i++) { 1619 + err = drm_dp_dpcd_writeb(aux, 1620 + DP_LINK_QUAL_LANE0_SET + i, 1621 + test_pattern); 1622 + if (err < 0) 1623 + return err; 1624 + } 1625 + } 1626 + 1627 + return 0; 1628 + } 1629 + EXPORT_SYMBOL(drm_dp_set_phy_test_pattern);
+5 -18
drivers/gpu/drm/i915/Makefile
··· 89 89 gt/intel_engine_pool.o \ 90 90 gt/intel_engine_user.o \ 91 91 gt/intel_ggtt.o \ 92 + gt/intel_ggtt_fencing.o \ 92 93 gt/intel_gt.o \ 93 94 gt/intel_gt_irq.o \ 94 95 gt/intel_gt_pm.o \ ··· 151 150 i915_buddy.o \ 152 151 i915_cmd_parser.o \ 153 152 i915_gem_evict.o \ 154 - i915_gem_fence_reg.o \ 155 153 i915_gem_gtt.o \ 156 154 i915_gem.o \ 157 155 i915_globals.o \ ··· 164 164 165 165 # general-purpose microcontroller (GuC) support 166 166 i915-y += gt/uc/intel_uc.o \ 167 + gt/uc/intel_uc_debugfs.o \ 167 168 gt/uc/intel_uc_fw.o \ 168 169 gt/uc/intel_guc.o \ 169 170 gt/uc/intel_guc_ads.o \ 170 171 gt/uc/intel_guc_ct.o \ 172 + gt/uc/intel_guc_debugfs.o \ 171 173 gt/uc/intel_guc_fw.o \ 172 174 gt/uc/intel_guc_log.o \ 175 + gt/uc/intel_guc_log_debugfs.o \ 173 176 gt/uc/intel_guc_submission.o \ 174 177 gt/uc/intel_huc.o \ 178 + gt/uc/intel_huc_debugfs.o \ 175 179 gt/uc/intel_huc_fw.o 176 180 177 181 # modesetting core code ··· 244 240 display/vlv_dsi.o \ 245 241 display/vlv_dsi_pll.o 246 242 247 - # perf code 248 - i915-y += \ 249 - oa/i915_oa_hsw.o \ 250 - oa/i915_oa_bdw.o \ 251 - oa/i915_oa_chv.o \ 252 - oa/i915_oa_sklgt2.o \ 253 - oa/i915_oa_sklgt3.o \ 254 - oa/i915_oa_sklgt4.o \ 255 - oa/i915_oa_bxt.o \ 256 - oa/i915_oa_kblgt2.o \ 257 - oa/i915_oa_kblgt3.o \ 258 - oa/i915_oa_glk.o \ 259 - oa/i915_oa_cflgt2.o \ 260 - oa/i915_oa_cflgt3.o \ 261 - oa/i915_oa_cnl.o \ 262 - oa/i915_oa_icl.o \ 263 - oa/i915_oa_tgl.o 264 243 i915-y += i915_perf.o 265 244 266 245 # Post-mortem debug and GPU hang state capture
+147 -20
drivers/gpu/drm/i915/display/icl_dsi.c
··· 186 186 static int dsi_send_pkt_payld(struct intel_dsi_host *host, 187 187 struct mipi_dsi_packet pkt) 188 188 { 189 + struct intel_dsi *intel_dsi = host->intel_dsi; 190 + struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev); 191 + 189 192 /* payload queue can accept *256 bytes*, check limit */ 190 193 if (pkt.payload_length > MAX_PLOAD_CREDIT * 4) { 191 - DRM_ERROR("payload size exceeds max queue limit\n"); 194 + drm_err(&i915->drm, "payload size exceeds max queue limit\n"); 192 195 return -1; 193 196 } 194 197 195 198 /* load data into command payload queue */ 196 199 if (!add_payld_to_queue(host, pkt.payload, 197 200 pkt.payload_length)) { 198 - DRM_ERROR("adding payload to queue failed\n"); 201 + drm_err(&i915->drm, "adding payload to queue failed\n"); 199 202 return -1; 200 203 } 201 204 ··· 747 744 tmp |= VIDEO_MODE_SYNC_PULSE; 748 745 break; 749 746 } 747 + } else { 748 + /* 749 + * FIXME: Retrieve this info from VBT. 750 + * As per the spec when dsi transcoder is operating 751 + * in TE GATE mode, TE comes from GPIO 752 + * which is UTIL PIN for DSI 0. 753 + * Also this GPIO would not be used for other 754 + * purposes is an assumption. 
755 + */ 756 + tmp &= ~OP_MODE_MASK; 757 + tmp |= CMD_MODE_TE_GATE; 758 + tmp |= TE_SOURCE_GPIO; 750 759 } 751 760 752 761 intel_de_write(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans), tmp); ··· 852 837 } 853 838 854 839 hactive = adjusted_mode->crtc_hdisplay; 855 - htotal = DIV_ROUND_UP(adjusted_mode->crtc_htotal * mul, div); 840 + 841 + if (is_vid_mode(intel_dsi)) 842 + htotal = DIV_ROUND_UP(adjusted_mode->crtc_htotal * mul, div); 843 + else 844 + htotal = DIV_ROUND_UP((hactive + 160) * mul, div); 845 + 856 846 hsync_start = DIV_ROUND_UP(adjusted_mode->crtc_hsync_start * mul, div); 857 847 hsync_end = DIV_ROUND_UP(adjusted_mode->crtc_hsync_end * mul, div); 858 848 hsync_size = hsync_end - hsync_start; 859 849 hback_porch = (adjusted_mode->crtc_htotal - 860 850 adjusted_mode->crtc_hsync_end); 861 851 vactive = adjusted_mode->crtc_vdisplay; 862 - vtotal = adjusted_mode->crtc_vtotal; 852 + 853 + if (is_vid_mode(intel_dsi)) { 854 + vtotal = adjusted_mode->crtc_vtotal; 855 + } else { 856 + int bpp, line_time_us, byte_clk_period_ns; 857 + 858 + if (crtc_state->dsc.compression_enable) 859 + bpp = crtc_state->dsc.compressed_bpp; 860 + else 861 + bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format); 862 + 863 + byte_clk_period_ns = 1000000 / afe_clk(encoder, crtc_state); 864 + line_time_us = (htotal * (bpp / 8) * byte_clk_period_ns) / (1000 * intel_dsi->lane_count); 865 + vtotal = vactive + DIV_ROUND_UP(400, line_time_us); 866 + } 863 867 vsync_start = adjusted_mode->crtc_vsync_start; 864 868 vsync_end = adjusted_mode->crtc_vsync_end; 865 869 vsync_shift = hsync_start - htotal / 2; ··· 907 873 } 908 874 909 875 /* TRANS_HSYNC register to be programmed only for video mode */ 910 - if (intel_dsi->operation_mode == INTEL_DSI_VIDEO_MODE) { 876 + if (is_vid_mode(intel_dsi)) { 911 877 if (intel_dsi->video_mode_format == 912 878 VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE) { 913 879 /* BSPEC: hsync size should be atleast 16 pixels */ ··· 950 916 if (vsync_start < vactive) 951 917 
drm_err(&dev_priv->drm, "vsync_start less than vactive\n"); 952 918 953 - /* program TRANS_VSYNC register */ 954 - for_each_dsi_port(port, intel_dsi->ports) { 955 - dsi_trans = dsi_port_to_transcoder(port); 956 - intel_de_write(dev_priv, VSYNC(dsi_trans), 957 - (vsync_start - 1) | ((vsync_end - 1) << 16)); 919 + /* program TRANS_VSYNC register for video mode only */ 920 + if (is_vid_mode(intel_dsi)) { 921 + for_each_dsi_port(port, intel_dsi->ports) { 922 + dsi_trans = dsi_port_to_transcoder(port); 923 + intel_de_write(dev_priv, VSYNC(dsi_trans), 924 + (vsync_start - 1) | ((vsync_end - 1) << 16)); 925 + } 958 926 } 959 927 960 928 /* 961 - * FIXME: It has to be programmed only for interlaced 929 + * FIXME: It has to be programmed only for video modes and interlaced 962 930 * modes. Put the check condition here once interlaced 963 931 * info available as described above. 964 932 * program TRANS_VSYNCSHIFT register 965 933 */ 966 - for_each_dsi_port(port, intel_dsi->ports) { 967 - dsi_trans = dsi_port_to_transcoder(port); 968 - intel_de_write(dev_priv, VSYNCSHIFT(dsi_trans), vsync_shift); 934 + if (is_vid_mode(intel_dsi)) { 935 + for_each_dsi_port(port, intel_dsi->ports) { 936 + dsi_trans = dsi_port_to_transcoder(port); 937 + intel_de_write(dev_priv, VSYNCSHIFT(dsi_trans), 938 + vsync_shift); 939 + } 969 940 } 970 941 971 942 /* program TRANS_VBLANK register, should be same as vtotal programmed */ ··· 1055 1016 } 1056 1017 } 1057 1018 1019 + static void gen11_dsi_config_util_pin(struct intel_encoder *encoder, 1020 + bool enable) 1021 + { 1022 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1023 + struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); 1024 + u32 tmp; 1025 + 1026 + /* 1027 + * used as TE i/p for DSI0, 1028 + * for dual link/DSI1 TE is from slave DSI1 1029 + * through GPIO. 
1030 + */ 1031 + if (is_vid_mode(intel_dsi) || (intel_dsi->ports & BIT(PORT_B))) 1032 + return; 1033 + 1034 + tmp = intel_de_read(dev_priv, UTIL_PIN_CTL); 1035 + 1036 + if (enable) { 1037 + tmp |= UTIL_PIN_DIRECTION_INPUT; 1038 + tmp |= UTIL_PIN_ENABLE; 1039 + } else { 1040 + tmp &= ~UTIL_PIN_ENABLE; 1041 + } 1042 + intel_de_write(dev_priv, UTIL_PIN_CTL, tmp); 1043 + } 1044 + 1058 1045 static void 1059 1046 gen11_dsi_enable_port_and_phy(struct intel_encoder *encoder, 1060 1047 const struct intel_crtc_state *crtc_state) ··· 1101 1036 1102 1037 /* setup D-PHY timings */ 1103 1038 gen11_dsi_setup_dphy_timings(encoder, crtc_state); 1039 + 1040 + /* Since transcoder is configured to take events from GPIO */ 1041 + gen11_dsi_config_util_pin(encoder, true); 1104 1042 1105 1043 /* step 4h: setup DSI protocol timeouts */ 1106 1044 gen11_dsi_setup_timeouts(encoder, crtc_state); ··· 1156 1088 wait_for_cmds_dispatched_to_panel(encoder); 1157 1089 } 1158 1090 1159 - static void gen11_dsi_pre_pll_enable(struct intel_encoder *encoder, 1091 + static void gen11_dsi_pre_pll_enable(struct intel_atomic_state *state, 1092 + struct intel_encoder *encoder, 1160 1093 const struct intel_crtc_state *crtc_state, 1161 1094 const struct drm_connector_state *conn_state) 1162 1095 { ··· 1168 1099 gen11_dsi_program_esc_clk_div(encoder, crtc_state); 1169 1100 } 1170 1101 1171 - static void gen11_dsi_pre_enable(struct intel_encoder *encoder, 1102 + static void gen11_dsi_pre_enable(struct intel_atomic_state *state, 1103 + struct intel_encoder *encoder, 1172 1104 const struct intel_crtc_state *pipe_config, 1173 1105 const struct drm_connector_state *conn_state) 1174 1106 { ··· 1188 1118 gen11_dsi_set_transcoder_timings(encoder, pipe_config); 1189 1119 } 1190 1120 1191 - static void gen11_dsi_enable(struct intel_encoder *encoder, 1121 + static void gen11_dsi_enable(struct intel_atomic_state *state, 1122 + struct intel_encoder *encoder, 1192 1123 const struct intel_crtc_state *crtc_state, 1193 1124 
const struct drm_connector_state *conn_state) 1194 1125 { ··· 1250 1179 enum port port; 1251 1180 enum transcoder dsi_trans; 1252 1181 u32 tmp; 1182 + 1183 + /* disable periodic update mode */ 1184 + if (is_cmd_mode(intel_dsi)) { 1185 + for_each_dsi_port(port, intel_dsi->ports) { 1186 + tmp = intel_de_read(dev_priv, DSI_CMD_FRMCTL(port)); 1187 + tmp &= ~DSI_PERIODIC_FRAME_UPDATE_ENABLE; 1188 + intel_de_write(dev_priv, DSI_CMD_FRMCTL(port), tmp); 1189 + } 1190 + } 1253 1191 1254 1192 /* put dsi link in ULPS */ 1255 1193 for_each_dsi_port(port, intel_dsi->ports) { ··· 1344 1264 } 1345 1265 } 1346 1266 1347 - static void gen11_dsi_disable(struct intel_encoder *encoder, 1267 + static void gen11_dsi_disable(struct intel_atomic_state *state, 1268 + struct intel_encoder *encoder, 1348 1269 const struct intel_crtc_state *old_crtc_state, 1349 1270 const struct drm_connector_state *old_conn_state) 1350 1271 { ··· 1367 1286 /* step3: disable port */ 1368 1287 gen11_dsi_disable_port(encoder); 1369 1288 1289 + gen11_dsi_config_util_pin(encoder, false); 1290 + 1370 1291 /* step4: disable IO power */ 1371 1292 gen11_dsi_disable_io_power(encoder); 1372 1293 } 1373 1294 1374 - static void gen11_dsi_post_disable(struct intel_encoder *encoder, 1295 + static void gen11_dsi_post_disable(struct intel_atomic_state *state, 1296 + struct intel_encoder *encoder, 1375 1297 const struct intel_crtc_state *old_crtc_state, 1376 1298 const struct drm_connector_state *old_conn_state) 1377 1299 { ··· 1431 1347 adjusted_mode->crtc_vblank_end = adjusted_mode->crtc_vtotal; 1432 1348 } 1433 1349 1350 + static bool gen11_dsi_is_periodic_cmd_mode(struct intel_dsi *intel_dsi) 1351 + { 1352 + struct drm_device *dev = intel_dsi->base.base.dev; 1353 + struct drm_i915_private *dev_priv = to_i915(dev); 1354 + enum transcoder dsi_trans; 1355 + u32 val; 1356 + 1357 + if (intel_dsi->ports == BIT(PORT_B)) 1358 + dsi_trans = TRANSCODER_DSI_1; 1359 + else 1360 + dsi_trans = TRANSCODER_DSI_0; 1361 + 1362 + val = 
intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)); 1363 + return (val & DSI_PERIODIC_FRAME_UPDATE_ENABLE); 1364 + } 1365 + 1434 1366 static void gen11_dsi_get_config(struct intel_encoder *encoder, 1435 1367 struct intel_crtc_state *pipe_config) 1436 1368 { ··· 1467 1367 gen11_dsi_get_timings(encoder, pipe_config); 1468 1368 pipe_config->output_types |= BIT(INTEL_OUTPUT_DSI); 1469 1369 pipe_config->pipe_bpp = bdw_get_pipemisc_bpp(crtc); 1370 + 1371 + if (gen11_dsi_is_periodic_cmd_mode(intel_dsi)) 1372 + pipe_config->hw.adjusted_mode.private_flags |= 1373 + I915_MODE_FLAG_DSI_PERIODIC_CMD_MODE; 1470 1374 } 1471 1375 1472 1376 static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder, ··· 1521 1417 struct intel_crtc_state *pipe_config, 1522 1418 struct drm_connector_state *conn_state) 1523 1419 { 1420 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1524 1421 struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi, 1525 1422 base); 1526 1423 struct intel_connector *intel_connector = intel_dsi->attached_connector; ··· 1551 1446 pipe_config->clock_set = true; 1552 1447 1553 1448 if (gen11_dsi_dsc_compute_config(encoder, pipe_config)) 1554 - DRM_DEBUG_KMS("Attempting to use DSC failed\n"); 1449 + drm_dbg_kms(&i915->drm, "Attempting to use DSC failed\n"); 1555 1450 1556 1451 pipe_config->port_clock = afe_clk(encoder, pipe_config) / 5; 1452 + 1453 + /* We would not operate in periodic command mode */ 1454 + pipe_config->hw.adjusted_mode.private_flags &= 1455 + ~I915_MODE_FLAG_DSI_PERIODIC_CMD_MODE; 1456 + 1457 + /* 1458 + * In case of TE GATE cmd mode, we 1459 + * receive TE from the slave if 1460 + * dual link is enabled 1461 + */ 1462 + if (is_cmd_mode(intel_dsi)) { 1463 + if (intel_dsi->ports == (BIT(PORT_B) | BIT(PORT_A))) 1464 + pipe_config->hw.adjusted_mode.private_flags |= 1465 + I915_MODE_FLAG_DSI_USE_TE1 | 1466 + I915_MODE_FLAG_DSI_USE_TE0; 1467 + else if (intel_dsi->ports == BIT(PORT_B)) 1468 + 
pipe_config->hw.adjusted_mode.private_flags |= 1469 + I915_MODE_FLAG_DSI_USE_TE1; 1470 + else 1471 + pipe_config->hw.adjusted_mode.private_flags |= 1472 + I915_MODE_FLAG_DSI_USE_TE0; 1473 + } 1557 1474 1558 1475 return 0; 1559 1476 }
+15 -6
drivers/gpu/drm/i915/display/intel_atomic_plane.c
··· 264 264 plane_state->hw.color_range = from_plane_state->uapi.color_range; 265 265 } 266 266 267 + void intel_plane_set_invisible(struct intel_crtc_state *crtc_state, 268 + struct intel_plane_state *plane_state) 269 + { 270 + struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane); 271 + 272 + crtc_state->active_planes &= ~BIT(plane->id); 273 + crtc_state->nv12_planes &= ~BIT(plane->id); 274 + crtc_state->c8_planes &= ~BIT(plane->id); 275 + crtc_state->data_rate[plane->id] = 0; 276 + crtc_state->min_cdclk[plane->id] = 0; 277 + 278 + plane_state->uapi.visible = false; 279 + } 280 + 267 281 int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state, 268 282 struct intel_crtc_state *new_crtc_state, 269 283 const struct intel_plane_state *old_plane_state, ··· 287 273 const struct drm_framebuffer *fb = new_plane_state->hw.fb; 288 274 int ret; 289 275 290 - new_crtc_state->active_planes &= ~BIT(plane->id); 291 - new_crtc_state->nv12_planes &= ~BIT(plane->id); 292 - new_crtc_state->c8_planes &= ~BIT(plane->id); 293 - new_crtc_state->data_rate[plane->id] = 0; 294 - new_crtc_state->min_cdclk[plane->id] = 0; 295 - new_plane_state->uapi.visible = false; 276 + intel_plane_set_invisible(new_crtc_state, new_plane_state); 296 277 297 278 if (!new_plane_state->hw.crtc && !old_plane_state->hw.crtc) 298 279 return 0;
+2
drivers/gpu/drm/i915/display/intel_atomic_plane.h
··· 52 52 int intel_plane_calc_min_cdclk(struct intel_atomic_state *state, 53 53 struct intel_plane *plane, 54 54 bool *need_cdclk_calc); 55 + void intel_plane_set_invisible(struct intel_crtc_state *crtc_state, 56 + struct intel_plane_state *plane_state); 55 57 56 58 #endif /* __INTEL_ATOMIC_PLANE_H__ */
+10 -8
drivers/gpu/drm/i915/display/intel_audio.c
··· 252 252 i = ARRAY_SIZE(hdmi_audio_clock); 253 253 254 254 if (i == ARRAY_SIZE(hdmi_audio_clock)) { 255 - DRM_DEBUG_KMS("HDMI audio pixel clock setting for %d not found, falling back to defaults\n", 256 - adjusted_mode->crtc_clock); 255 + drm_dbg_kms(&dev_priv->drm, 256 + "HDMI audio pixel clock setting for %d not found, falling back to defaults\n", 257 + adjusted_mode->crtc_clock); 257 258 i = 1; 258 259 } 259 260 260 - DRM_DEBUG_KMS("Configuring HDMI audio for pixel clock %d (0x%08x)\n", 261 - hdmi_audio_clock[i].clock, 262 - hdmi_audio_clock[i].config); 261 + drm_dbg_kms(&dev_priv->drm, 262 + "Configuring HDMI audio for pixel clock %d (0x%08x)\n", 263 + hdmi_audio_clock[i].clock, 264 + hdmi_audio_clock[i].config); 263 265 264 266 return hdmi_audio_clock[i].config; 265 267 } ··· 893 891 ret = intel_display_power_get(dev_priv, POWER_DOMAIN_AUDIO); 894 892 895 893 if (dev_priv->audio_power_refcount++ == 0) { 896 - if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv)) { 894 + if (INTEL_GEN(dev_priv) >= 9) { 897 895 intel_de_write(dev_priv, AUD_FREQ_CNTRL, 898 896 dev_priv->audio_freq_cntrl); 899 897 drm_dbg_kms(&dev_priv->drm, ··· 933 931 unsigned long cookie; 934 932 u32 tmp; 935 933 936 - if (!IS_GEN(dev_priv, 9)) 934 + if (INTEL_GEN(dev_priv) < 9) 937 935 return; 938 936 939 937 cookie = i915_audio_component_get_power(kdev); ··· 1175 1173 return; 1176 1174 } 1177 1175 1178 - if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv)) { 1176 + if (INTEL_GEN(dev_priv) >= 9) { 1179 1177 dev_priv->audio_freq_cntrl = intel_de_read(dev_priv, 1180 1178 AUD_FREQ_CNTRL); 1181 1179 drm_dbg_kms(&dev_priv->drm,
+5 -4
drivers/gpu/drm/i915/display/intel_bw.c
··· 338 338 const struct intel_crtc_state *crtc_state) 339 339 { 340 340 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 341 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 341 342 342 343 bw_state->data_rate[crtc->pipe] = 343 344 intel_bw_crtc_data_rate(crtc_state); 344 345 bw_state->num_active_planes[crtc->pipe] = 345 346 intel_bw_crtc_num_active_planes(crtc_state); 346 347 347 - DRM_DEBUG_KMS("pipe %c data rate %u num active planes %u\n", 348 - pipe_name(crtc->pipe), 349 - bw_state->data_rate[crtc->pipe], 350 - bw_state->num_active_planes[crtc->pipe]); 348 + drm_dbg_kms(&i915->drm, "pipe %c data rate %u num active planes %u\n", 349 + pipe_name(crtc->pipe), 350 + bw_state->data_rate[crtc->pipe], 351 + bw_state->num_active_planes[crtc->pipe]); 351 352 } 352 353 353 354 static unsigned int intel_bw_num_active_planes(struct drm_i915_private *dev_priv,
+103 -18
drivers/gpu/drm/i915/display/intel_color.c
··· 460 460 entry->blue = intel_color_lut_pack(REG_FIELD_GET(PREC_PALETTE_BLUE_MASK, val), 10); 461 461 } 462 462 463 + static void icl_lut_multi_seg_pack(struct drm_color_lut *entry, u32 ldw, u32 udw) 464 + { 465 + entry->red = REG_FIELD_GET(PAL_PREC_MULTI_SEG_RED_UDW_MASK, udw) << 6 | 466 + REG_FIELD_GET(PAL_PREC_MULTI_SEG_RED_LDW_MASK, ldw); 467 + entry->green = REG_FIELD_GET(PAL_PREC_MULTI_SEG_GREEN_UDW_MASK, udw) << 6 | 468 + REG_FIELD_GET(PAL_PREC_MULTI_SEG_GREEN_LDW_MASK, ldw); 469 + entry->blue = REG_FIELD_GET(PAL_PREC_MULTI_SEG_BLUE_UDW_MASK, udw) << 6 | 470 + REG_FIELD_GET(PAL_PREC_MULTI_SEG_BLUE_LDW_MASK, ldw); 471 + } 472 + 463 473 static void i9xx_color_commit(const struct intel_crtc_state *crtc_state) 464 474 { 465 475 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); ··· 903 893 struct intel_dsb *dsb = intel_dsb_get(crtc); 904 894 enum pipe pipe = crtc->pipe; 905 895 906 - /* Fixme: LUT entries are 16 bit only, so we can prog 0xFFFF max */ 896 + /* FIXME LUT entries are 16 bit only, so we can prog 0xFFFF max */ 907 897 intel_dsb_reg_write(dsb, PREC_PAL_GC_MAX(pipe, 0), color->red); 908 898 intel_dsb_reg_write(dsb, PREC_PAL_GC_MAX(pipe, 1), color->green); 909 899 intel_dsb_reg_write(dsb, PREC_PAL_GC_MAX(pipe, 2), color->blue); ··· 1640 1630 } 1641 1631 } 1642 1632 1633 + static int icl_gamma_precision(const struct intel_crtc_state *crtc_state) 1634 + { 1635 + if ((crtc_state->gamma_mode & POST_CSC_GAMMA_ENABLE) == 0) 1636 + return 0; 1637 + 1638 + switch (crtc_state->gamma_mode & GAMMA_MODE_MODE_MASK) { 1639 + case GAMMA_MODE_MODE_8BIT: 1640 + return 8; 1641 + case GAMMA_MODE_MODE_10BIT: 1642 + return 10; 1643 + case GAMMA_MODE_MODE_12BIT_MULTI_SEGMENTED: 1644 + return 16; 1645 + default: 1646 + MISSING_CASE(crtc_state->gamma_mode); 1647 + return 0; 1648 + } 1649 + } 1650 + 1643 1651 int intel_color_get_gamma_bit_precision(const struct intel_crtc_state *crtc_state) 1644 1652 { 1645 1653 struct intel_crtc *crtc = 
to_intel_crtc(crtc_state->uapi.crtc); ··· 1669 1641 else 1670 1642 return i9xx_gamma_precision(crtc_state); 1671 1643 } else { 1672 - if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) 1644 + if (INTEL_GEN(dev_priv) >= 11) 1645 + return icl_gamma_precision(crtc_state); 1646 + else if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) 1673 1647 return glk_gamma_precision(crtc_state); 1674 1648 else if (IS_IRONLAKE(dev_priv)) 1675 1649 return ilk_gamma_precision(crtc_state); ··· 1688 1658 ((abs((long)lut2->green - lut1->green)) <= err); 1689 1659 } 1690 1660 1691 - static bool intel_color_lut_entry_equal(struct drm_color_lut *lut1, 1692 - struct drm_color_lut *lut2, 1693 - int lut_size, u32 err) 1661 + static bool intel_color_lut_entries_equal(struct drm_color_lut *lut1, 1662 + struct drm_color_lut *lut2, 1663 + int lut_size, u32 err) 1694 1664 { 1695 1665 int i; 1696 1666 ··· 1720 1690 lut_size2 = drm_color_lut_size(blob2); 1721 1691 1722 1692 /* check sw and hw lut size */ 1723 - switch (gamma_mode) { 1724 - case GAMMA_MODE_MODE_8BIT: 1725 - case GAMMA_MODE_MODE_10BIT: 1726 - if (lut_size1 != lut_size2) 1727 - return false; 1728 - break; 1729 - default: 1730 - MISSING_CASE(gamma_mode); 1731 - return false; 1732 - } 1693 + if (lut_size1 != lut_size2) 1694 + return false; 1733 1695 1734 1696 lut1 = blob1->data; 1735 1697 lut2 = blob2->data; ··· 1729 1707 err = 0xffff >> bit_precision; 1730 1708 1731 1709 /* check sw and hw lut entry to be equal */ 1732 - switch (gamma_mode) { 1710 + switch (gamma_mode & GAMMA_MODE_MODE_MASK) { 1733 1711 case GAMMA_MODE_MODE_8BIT: 1734 1712 case GAMMA_MODE_MODE_10BIT: 1735 - if (!intel_color_lut_entry_equal(lut1, lut2, 1736 - lut_size2, err)) 1713 + if (!intel_color_lut_entries_equal(lut1, lut2, 1714 + lut_size2, err)) 1715 + return false; 1716 + break; 1717 + case GAMMA_MODE_MODE_12BIT_MULTI_SEGMENTED: 1718 + if (!intel_color_lut_entries_equal(lut1, lut2, 1719 + 9, err)) 1737 1720 return false; 1738 1721 break; 1739 1722 
default: ··· 1973 1946 crtc_state->hw.gamma_lut = glk_read_lut_10(crtc, PAL_PREC_INDEX_VALUE(0)); 1974 1947 } 1975 1948 1949 + static struct drm_property_blob * 1950 + icl_read_lut_multi_segment(struct intel_crtc *crtc) 1951 + { 1952 + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1953 + int i, lut_size = INTEL_INFO(dev_priv)->color.gamma_lut_size; 1954 + enum pipe pipe = crtc->pipe; 1955 + struct drm_property_blob *blob; 1956 + struct drm_color_lut *lut; 1957 + 1958 + blob = drm_property_create_blob(&dev_priv->drm, 1959 + sizeof(struct drm_color_lut) * lut_size, 1960 + NULL); 1961 + if (IS_ERR(blob)) 1962 + return NULL; 1963 + 1964 + lut = blob->data; 1965 + 1966 + intel_de_write(dev_priv, PREC_PAL_MULTI_SEG_INDEX(pipe), 1967 + PAL_PREC_AUTO_INCREMENT); 1968 + 1969 + for (i = 0; i < 9; i++) { 1970 + u32 ldw = intel_de_read(dev_priv, PREC_PAL_MULTI_SEG_DATA(pipe)); 1971 + u32 udw = intel_de_read(dev_priv, PREC_PAL_MULTI_SEG_DATA(pipe)); 1972 + 1973 + icl_lut_multi_seg_pack(&lut[i], ldw, udw); 1974 + } 1975 + 1976 + intel_de_write(dev_priv, PREC_PAL_MULTI_SEG_INDEX(pipe), 0); 1977 + 1978 + /* 1979 + * FIXME readouts from PAL_PREC_DATA register aren't giving 1980 + * correct values in the case of fine and coarse segments. 1981 + * Restricting readouts only for super fine segment as of now. 
1982 + */ 1983 + 1984 + return blob; 1985 + } 1986 + 1987 + static void icl_read_luts(struct intel_crtc_state *crtc_state) 1988 + { 1989 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1990 + 1991 + if ((crtc_state->gamma_mode & POST_CSC_GAMMA_ENABLE) == 0) 1992 + return; 1993 + 1994 + switch (crtc_state->gamma_mode & GAMMA_MODE_MODE_MASK) { 1995 + case GAMMA_MODE_MODE_8BIT: 1996 + crtc_state->hw.gamma_lut = ilk_read_lut_8(crtc); 1997 + break; 1998 + case GAMMA_MODE_MODE_12BIT_MULTI_SEGMENTED: 1999 + crtc_state->hw.gamma_lut = icl_read_lut_multi_segment(crtc); 2000 + break; 2001 + default: 2002 + crtc_state->hw.gamma_lut = glk_read_lut_10(crtc, PAL_PREC_INDEX_VALUE(0)); 2003 + } 2004 + } 2005 + 1976 2006 void intel_color_init(struct intel_crtc *crtc) 1977 2007 { 1978 2008 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); ··· 2073 1989 2074 1990 if (INTEL_GEN(dev_priv) >= 11) { 2075 1991 dev_priv->display.load_luts = icl_load_luts; 1992 + dev_priv->display.read_luts = icl_read_luts; 2076 1993 } else if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) { 2077 1994 dev_priv->display.load_luts = glk_load_luts; 2078 1995 dev_priv->display.read_luts = glk_read_luts;
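The new icl_lut_multi_seg_pack() above rebuilds each 16-bit drm_color_lut channel from the two PAL_PREC_MULTI_SEG_DATA dwords: the ten high bits come from the UDW field shifted left by six, OR'd with the six low bits from the LDW field. A minimal user-space sketch of that packing, with field_get() as a hypothetical stand-in for the kernel's REG_FIELD_GET():

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's REG_FIELD_GET(): extract a
 * contiguous bit field given its mask, by dividing out the mask's
 * lowest set bit (equivalent to shifting right by the field offset). */
static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) / (mask & -mask);
}

/* Pack one 16-bit colour channel from the two multi-segment palette
 * dwords: the upper dword supplies bits 15:6, the lower dword bits 5:0,
 * mirroring the udw << 6 | ldw combination in icl_lut_multi_seg_pack(). */
static uint16_t multi_seg_channel(uint32_t ldw_field, uint32_t udw_field)
{
	return (uint16_t)(udw_field << 6 | ldw_field);
}
```

With all ten UDW bits and all six LDW bits set, the channel saturates at 0xffff, which matches the 16-bit precision icl_gamma_precision() reports for the 12-bit multi-segmented mode.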
+1 -1
drivers/gpu/drm/i915/display/intel_connector.c
··· 290 290 return; 291 291 break; 292 292 default: 293 - DRM_DEBUG_KMS("Colorspace property not supported\n"); 293 + MISSING_CASE(connector->connector_type); 294 294 return; 295 295 } 296 296
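The intel_connector.c hunk swaps an ad-hoc debug message for MISSING_CASE(), which loudly flags an unhandled switch value together with the expression that produced it. A simplified user-space sketch of the pattern — the macro and the connector-type mapping below are illustrative stand-ins, not the kernel definitions:

```c
#include <assert.h>
#include <stdio.h>

static int missing_case_hit;

/* Simplified stand-in for the kernel's MISSING_CASE(): report a switch
 * value the driver did not expect, naming the expression that held it. */
#define MISSING_CASE(x) \
	(missing_case_hit++, \
	 fprintf(stderr, "Missing case (%s == %d)\n", #x, (int)(x)))

/* Hypothetical mapping from connector type to a colorspace property id;
 * the default arm demonstrates the MISSING_CASE usage from the diff. */
static int colorspace_for_connector(int connector_type)
{
	switch (connector_type) {
	case 1: return 10;	/* placeholder: HDMI-style mapping */
	case 2: return 20;	/* placeholder: DP-style mapping */
	default:
		MISSING_CASE(connector_type);
		return -1;
	}
}
```

Unlike a plain debug print, the kernel macro warns with a backtrace, so an unexpected connector type shows up in logs rather than being silently ignored.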
+23 -13
drivers/gpu/drm/i915/display/intel_crt.c
··· 203 203 intel_de_write(dev_priv, crt->adpa_reg, adpa); 204 204 } 205 205 206 - static void intel_disable_crt(struct intel_encoder *encoder, 206 + static void intel_disable_crt(struct intel_atomic_state *state, 207 + struct intel_encoder *encoder, 207 208 const struct intel_crtc_state *old_crtc_state, 208 209 const struct drm_connector_state *old_conn_state) 209 210 { 210 211 intel_crt_set_dpms(encoder, old_crtc_state, DRM_MODE_DPMS_OFF); 211 212 } 212 213 213 - static void pch_disable_crt(struct intel_encoder *encoder, 214 + static void pch_disable_crt(struct intel_atomic_state *state, 215 + struct intel_encoder *encoder, 214 216 const struct intel_crtc_state *old_crtc_state, 215 217 const struct drm_connector_state *old_conn_state) 216 218 { 217 219 } 218 220 219 - static void pch_post_disable_crt(struct intel_encoder *encoder, 221 + static void pch_post_disable_crt(struct intel_atomic_state *state, 222 + struct intel_encoder *encoder, 220 223 const struct intel_crtc_state *old_crtc_state, 221 224 const struct drm_connector_state *old_conn_state) 222 225 { 223 - intel_disable_crt(encoder, old_crtc_state, old_conn_state); 226 + intel_disable_crt(state, encoder, old_crtc_state, old_conn_state); 224 227 } 225 228 226 - static void hsw_disable_crt(struct intel_encoder *encoder, 229 + static void hsw_disable_crt(struct intel_atomic_state *state, 230 + struct intel_encoder *encoder, 227 231 const struct intel_crtc_state *old_crtc_state, 228 232 const struct drm_connector_state *old_conn_state) 229 233 { ··· 238 234 intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false); 239 235 } 240 236 241 - static void hsw_post_disable_crt(struct intel_encoder *encoder, 237 + static void hsw_post_disable_crt(struct intel_atomic_state *state, 238 + struct intel_encoder *encoder, 242 239 const struct intel_crtc_state *old_crtc_state, 243 240 const struct drm_connector_state *old_conn_state) 244 241 { ··· 255 250 256 251 intel_ddi_disable_pipe_clock(old_crtc_state); 257 252 
258 - pch_post_disable_crt(encoder, old_crtc_state, old_conn_state); 253 + pch_post_disable_crt(state, encoder, old_crtc_state, old_conn_state); 259 254 260 255 lpt_disable_pch_transcoder(dev_priv); 261 256 lpt_disable_iclkip(dev_priv); 262 257 263 - intel_ddi_fdi_post_disable(encoder, old_crtc_state, old_conn_state); 258 + intel_ddi_fdi_post_disable(state, encoder, old_crtc_state, old_conn_state); 264 259 265 260 drm_WARN_ON(&dev_priv->drm, !old_crtc_state->has_pch_encoder); 266 261 267 262 intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, true); 268 263 } 269 264 270 - static void hsw_pre_pll_enable_crt(struct intel_encoder *encoder, 265 + static void hsw_pre_pll_enable_crt(struct intel_atomic_state *state, 266 + struct intel_encoder *encoder, 271 267 const struct intel_crtc_state *crtc_state, 272 268 const struct drm_connector_state *conn_state) 273 269 { ··· 279 273 intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false); 280 274 } 281 275 282 - static void hsw_pre_enable_crt(struct intel_encoder *encoder, 276 + static void hsw_pre_enable_crt(struct intel_atomic_state *state, 277 + struct intel_encoder *encoder, 283 278 const struct intel_crtc_state *crtc_state, 284 279 const struct drm_connector_state *conn_state) 285 280 { ··· 297 290 intel_ddi_enable_pipe_clock(crtc_state); 298 291 } 299 292 300 - static void hsw_enable_crt(struct intel_encoder *encoder, 293 + static void hsw_enable_crt(struct intel_atomic_state *state, 294 + struct intel_encoder *encoder, 301 295 const struct intel_crtc_state *crtc_state, 302 296 const struct drm_connector_state *conn_state) 303 297 { ··· 322 314 intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, true); 323 315 } 324 316 325 - static void intel_enable_crt(struct intel_encoder *encoder, 317 + static void intel_enable_crt(struct intel_atomic_state *state, 318 + struct intel_encoder *encoder, 326 319 const struct intel_crtc_state *crtc_state, 327 320 const struct drm_connector_state *conn_state) 328 321 { ··· 
603 594 edid = drm_get_edid(connector, i2c); 604 595 605 596 if (!edid && !intel_gmbus_is_forced_bit(i2c)) { 606 - DRM_DEBUG_KMS("CRT GMBUS EDID read failed, retry using GPIO bit-banging\n"); 597 + drm_dbg_kms(connector->dev, 598 + "CRT GMBUS EDID read failed, retry using GPIO bit-banging\n"); 607 599 intel_gmbus_force_bit(i2c, true); 608 600 edid = drm_get_edid(connector, i2c); 609 601 intel_gmbus_force_bit(i2c, false);
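The EDID change above keeps the existing fallback shape while moving the message to device-aware logging: read over GMBUS first, and only if that yields nothing force the bus into GPIO bit-banging and retry once. A hedged sketch of that retry pattern, with the reader callbacks as hypothetical stand-ins for drm_get_edid() over each transport:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef const char *(*edid_read_fn)(void);

/* Try the fast GMBUS path first; on failure, fall back to the slower
 * GPIO bit-banging transport and retry once, as intel_crt.c does. */
static const char *read_edid_with_fallback(edid_read_fn gmbus_read,
					   edid_read_fn bitbang_read)
{
	const char *edid = gmbus_read();

	if (!edid)		/* GMBUS failed: force bit-banging and retry */
		edid = bitbang_read();

	return edid;
}

/* Stub transports for illustration. */
static const char *failing_read(void) { return NULL; }
static const char *working_read(void) { return "edid-block"; }
```

The design point is that the fallback is attempted exactly once and the bus is restored afterwards (intel_gmbus_force_bit(i2c, false) in the real code), so a flaky GMBUS engine does not permanently degrade every later transfer.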
+380 -142
drivers/gpu/drm/i915/display/intel_ddi.c
··· 568 568 { 0x6, 0x7F, 0x35, 0x00, 0x0A }, /* 600 850 3.0 */ 569 569 }; 570 570 571 - static const struct cnl_ddi_buf_trans ehl_combo_phy_ddi_translations_hbr2_hbr3[] = { 571 + static const struct cnl_ddi_buf_trans ehl_combo_phy_ddi_translations_dp[] = { 572 572 /* NT mV Trans mV db */ 573 573 { 0xA, 0x33, 0x3F, 0x00, 0x00 }, /* 350 350 0.0 */ 574 574 { 0xA, 0x47, 0x36, 0x00, 0x09 }, /* 350 500 3.1 */ ··· 583 583 }; 584 584 585 585 struct icl_mg_phy_ddi_buf_trans { 586 - u32 cri_txdeemph_override_5_0; 587 586 u32 cri_txdeemph_override_11_6; 587 + u32 cri_txdeemph_override_5_0; 588 588 u32 cri_txdeemph_override_17_12; 589 589 }; 590 590 591 - static const struct icl_mg_phy_ddi_buf_trans icl_mg_phy_ddi_translations[] = { 591 + static const struct icl_mg_phy_ddi_buf_trans icl_mg_phy_ddi_translations_rbr_hbr[] = { 592 592 /* Voltage swing pre-emphasis */ 593 - { 0x0, 0x1B, 0x00 }, /* 0 0 */ 594 - { 0x0, 0x23, 0x08 }, /* 0 1 */ 595 - { 0x0, 0x2D, 0x12 }, /* 0 2 */ 596 - { 0x0, 0x00, 0x00 }, /* 0 3 */ 597 - { 0x0, 0x23, 0x00 }, /* 1 0 */ 598 - { 0x0, 0x2B, 0x09 }, /* 1 1 */ 599 - { 0x0, 0x2E, 0x11 }, /* 1 2 */ 600 - { 0x0, 0x2F, 0x00 }, /* 2 0 */ 601 - { 0x0, 0x33, 0x0C }, /* 2 1 */ 602 - { 0x0, 0x00, 0x00 }, /* 3 0 */ 593 + { 0x18, 0x00, 0x00 }, /* 0 0 */ 594 + { 0x1D, 0x00, 0x05 }, /* 0 1 */ 595 + { 0x24, 0x00, 0x0C }, /* 0 2 */ 596 + { 0x2B, 0x00, 0x14 }, /* 0 3 */ 597 + { 0x21, 0x00, 0x00 }, /* 1 0 */ 598 + { 0x2B, 0x00, 0x08 }, /* 1 1 */ 599 + { 0x30, 0x00, 0x0F }, /* 1 2 */ 600 + { 0x31, 0x00, 0x03 }, /* 2 0 */ 601 + { 0x34, 0x00, 0x0B }, /* 2 1 */ 602 + { 0x3F, 0x00, 0x00 }, /* 3 0 */ 603 + }; 604 + 605 + static const struct icl_mg_phy_ddi_buf_trans icl_mg_phy_ddi_translations_hbr2_hbr3[] = { 606 + /* Voltage swing pre-emphasis */ 607 + { 0x18, 0x00, 0x00 }, /* 0 0 */ 608 + { 0x1D, 0x00, 0x05 }, /* 0 1 */ 609 + { 0x24, 0x00, 0x0C }, /* 0 2 */ 610 + { 0x2B, 0x00, 0x14 }, /* 0 3 */ 611 + { 0x26, 0x00, 0x00 }, /* 1 0 */ 612 + { 0x2C, 0x00, 0x07 }, /* 1 1 */ 613 + { 
0x33, 0x00, 0x0C }, /* 1 2 */ 614 + { 0x2E, 0x00, 0x00 }, /* 2 0 */ 615 + { 0x36, 0x00, 0x09 }, /* 2 1 */ 616 + { 0x3F, 0x00, 0x00 }, /* 3 0 */ 617 + }; 618 + 619 + static const struct icl_mg_phy_ddi_buf_trans icl_mg_phy_ddi_translations_hdmi[] = { 620 + /* HDMI Preset VS Pre-emph */ 621 + { 0x1A, 0x0, 0x0 }, /* 1 400mV 0dB */ 622 + { 0x20, 0x0, 0x0 }, /* 2 500mV 0dB */ 623 + { 0x29, 0x0, 0x0 }, /* 3 650mV 0dB */ 624 + { 0x32, 0x0, 0x0 }, /* 4 800mV 0dB */ 625 + { 0x3F, 0x0, 0x0 }, /* 5 1000mV 0dB */ 626 + { 0x3A, 0x0, 0x5 }, /* 6 Full -1.5 dB */ 627 + { 0x39, 0x0, 0x6 }, /* 7 Full -1.8 dB */ 628 + { 0x38, 0x0, 0x7 }, /* 8 Full -2 dB */ 629 + { 0x37, 0x0, 0x8 }, /* 9 Full -2.5 dB */ 630 + { 0x36, 0x0, 0x9 }, /* 10 Full -3 dB */ 603 631 }; 604 632 605 633 struct tgl_dkl_phy_ddi_buf_trans { ··· 971 943 return icl_combo_phy_ddi_translations_dp_hbr2; 972 944 } 973 945 946 + static const struct icl_mg_phy_ddi_buf_trans * 947 + icl_get_mg_buf_trans(struct drm_i915_private *dev_priv, int type, int rate, 948 + int *n_entries) 949 + { 950 + if (type == INTEL_OUTPUT_HDMI) { 951 + *n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations_hdmi); 952 + return icl_mg_phy_ddi_translations_hdmi; 953 + } else if (rate > 270000) { 954 + *n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations_hbr2_hbr3); 955 + return icl_mg_phy_ddi_translations_hbr2_hbr3; 956 + } 957 + 958 + *n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations_rbr_hbr); 959 + return icl_mg_phy_ddi_translations_rbr_hbr; 960 + } 961 + 974 962 static const struct cnl_ddi_buf_trans * 975 963 ehl_get_combo_buf_trans(struct drm_i915_private *dev_priv, int type, int rate, 976 964 int *n_entries) 977 965 { 978 - if (type != INTEL_OUTPUT_HDMI && type != INTEL_OUTPUT_EDP && 979 - rate > 270000) { 980 - *n_entries = ARRAY_SIZE(ehl_combo_phy_ddi_translations_hbr2_hbr3); 981 - return ehl_combo_phy_ddi_translations_hbr2_hbr3; 966 + if (type != INTEL_OUTPUT_HDMI && type != INTEL_OUTPUT_EDP) { 967 + *n_entries = 
ARRAY_SIZE(ehl_combo_phy_ddi_translations_dp); 968 + return ehl_combo_phy_ddi_translations_dp; 982 969 } 983 970 984 971 return icl_get_combo_buf_trans(dev_priv, type, rate, n_entries); ··· 1032 989 icl_get_combo_buf_trans(dev_priv, INTEL_OUTPUT_HDMI, 1033 990 0, &n_entries); 1034 991 else 1035 - n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations); 992 + icl_get_mg_buf_trans(dev_priv, INTEL_OUTPUT_HDMI, 0, 993 + &n_entries); 1036 994 default_entry = n_entries - 1; 1037 995 } else if (IS_CANNONLAKE(dev_priv)) { 1038 996 cnl_get_buf_trans_hdmi(dev_priv, &n_entries); ··· 1147 1103 if (intel_de_read(dev_priv, reg) & DDI_BUF_IS_IDLE) 1148 1104 return; 1149 1105 } 1150 - DRM_ERROR("Timeout waiting for DDI BUF %c idle bit\n", port_name(port)); 1106 + drm_err(&dev_priv->drm, "Timeout waiting for DDI BUF %c idle bit\n", 1107 + port_name(port)); 1151 1108 } 1152 1109 1153 1110 static u32 hsw_pll_to_ddi_pll_sel(const struct intel_shared_dpll *pll) ··· 1295 1250 1296 1251 temp = intel_de_read(dev_priv, DP_TP_STATUS(PORT_E)); 1297 1252 if (temp & DP_TP_STATUS_AUTOTRAIN_DONE) { 1298 - DRM_DEBUG_KMS("FDI link training done on step %d\n", i); 1253 + drm_dbg_kms(&dev_priv->drm, 1254 + "FDI link training done on step %d\n", i); 1299 1255 break; 1300 1256 } 1301 1257 ··· 1305 1259 * Results in less fireworks from the state checker. 1306 1260 */ 1307 1261 if (i == ARRAY_SIZE(hsw_ddi_translations_fdi) * 2 - 1) { 1308 - DRM_ERROR("FDI link training failed!\n"); 1262 + drm_err(&dev_priv->drm, "FDI link training failed!\n"); 1309 1263 break; 1310 1264 } 1311 1265 ··· 1497 1451 intel_de_write(dev_priv, TRANS_MSA_MISC(cpu_transcoder), temp); 1498 1452 } 1499 1453 1454 + static u32 bdw_trans_port_sync_master_select(enum transcoder master_transcoder) 1455 + { 1456 + if (master_transcoder == TRANSCODER_EDP) 1457 + return 0; 1458 + else 1459 + return master_transcoder + 1; 1460 + } 1461 + 1500 1462 /* 1501 1463 * Returns the TRANS_DDI_FUNC_CTL value based on CRTC state. 
1502 1464 * ··· 1605 1551 temp |= DDI_PORT_WIDTH(crtc_state->lane_count); 1606 1552 } 1607 1553 1554 + if (IS_GEN_RANGE(dev_priv, 8, 10) && 1555 + crtc_state->master_transcoder != INVALID_TRANSCODER) { 1556 + u8 master_select = 1557 + bdw_trans_port_sync_master_select(crtc_state->master_transcoder); 1558 + 1559 + temp |= TRANS_DDI_PORT_SYNC_ENABLE | 1560 + TRANS_DDI_PORT_SYNC_MASTER_SELECT(master_select); 1561 + } 1562 + 1608 1563 return temp; 1609 1564 } 1610 1565 ··· 1622 1559 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1623 1560 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1624 1561 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 1625 - u32 temp; 1562 + u32 ctl; 1626 1563 1627 - temp = intel_ddi_transcoder_func_reg_val_get(crtc_state); 1564 + if (INTEL_GEN(dev_priv) >= 11) { 1565 + enum transcoder master_transcoder = crtc_state->master_transcoder; 1566 + u32 ctl2 = 0; 1567 + 1568 + if (master_transcoder != INVALID_TRANSCODER) { 1569 + u8 master_select = 1570 + bdw_trans_port_sync_master_select(master_transcoder); 1571 + 1572 + ctl2 |= PORT_SYNC_MODE_ENABLE | 1573 + PORT_SYNC_MODE_MASTER_SELECT(master_select); 1574 + } 1575 + 1576 + intel_de_write(dev_priv, 1577 + TRANS_DDI_FUNC_CTL2(cpu_transcoder), ctl2); 1578 + } 1579 + 1580 + ctl = intel_ddi_transcoder_func_reg_val_get(crtc_state); 1628 1581 if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) 1629 - temp |= TRANS_DDI_DP_VC_PAYLOAD_ALLOC; 1630 - intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), temp); 1582 + ctl |= TRANS_DDI_DP_VC_PAYLOAD_ALLOC; 1583 + intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), ctl); 1631 1584 } 1632 1585 1633 1586 /* ··· 1656 1577 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1657 1578 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1658 1579 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 1659 - u32 temp; 1580 + u32 ctl; 1660 1581 1661 - temp = 
intel_ddi_transcoder_func_reg_val_get(crtc_state); 1662 - temp &= ~TRANS_DDI_FUNC_ENABLE; 1663 - intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), temp); 1582 + ctl = intel_ddi_transcoder_func_reg_val_get(crtc_state); 1583 + ctl &= ~TRANS_DDI_FUNC_ENABLE; 1584 + intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), ctl); 1664 1585 } 1665 1586 1666 1587 void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state) ··· 1668 1589 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1669 1590 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1670 1591 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 1671 - u32 val; 1592 + u32 ctl; 1672 1593 1673 - val = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)); 1674 - val &= ~TRANS_DDI_FUNC_ENABLE; 1594 + if (INTEL_GEN(dev_priv) >= 11) 1595 + intel_de_write(dev_priv, 1596 + TRANS_DDI_FUNC_CTL2(cpu_transcoder), 0); 1597 + 1598 + ctl = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)); 1599 + 1600 + ctl &= ~TRANS_DDI_FUNC_ENABLE; 1601 + 1602 + if (IS_GEN_RANGE(dev_priv, 8, 10)) 1603 + ctl &= ~(TRANS_DDI_PORT_SYNC_ENABLE | 1604 + TRANS_DDI_PORT_SYNC_MASTER_SELECT_MASK); 1675 1605 1676 1606 if (INTEL_GEN(dev_priv) >= 12) { 1677 1607 if (!intel_dp_mst_is_master_trans(crtc_state)) { 1678 - val &= ~(TGL_TRANS_DDI_PORT_MASK | 1608 + ctl &= ~(TGL_TRANS_DDI_PORT_MASK | 1679 1609 TRANS_DDI_MODE_SELECT_MASK); 1680 1610 } 1681 1611 } else { 1682 - val &= ~(TRANS_DDI_PORT_MASK | TRANS_DDI_MODE_SELECT_MASK); 1612 + ctl &= ~(TRANS_DDI_PORT_MASK | TRANS_DDI_MODE_SELECT_MASK); 1683 1613 } 1684 - intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), val); 1614 + 1615 + intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), ctl); 1685 1616 1686 1617 if (dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME && 1687 1618 intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) { 1688 - DRM_DEBUG_KMS("Quirk Increase DDI disabled time\n"); 1619 + 
drm_dbg_kms(&dev_priv->drm, 1620 + "Quirk Increase DDI disabled time\n"); 1689 1621 /* Quirk time at 100ms for reliable operation */ 1690 1622 msleep(100); 1691 1623 } ··· 1757 1667 goto out; 1758 1668 } 1759 1669 1760 - if (HAS_TRANSCODER_EDP(dev_priv) && port == PORT_A) 1670 + if (HAS_TRANSCODER(dev_priv, TRANSCODER_EDP) && port == PORT_A) 1761 1671 cpu_transcoder = TRANSCODER_EDP; 1762 1672 else 1763 1673 cpu_transcoder = (enum transcoder) pipe; ··· 1819 1729 if (!(tmp & DDI_BUF_CTL_ENABLE)) 1820 1730 goto out; 1821 1731 1822 - if (HAS_TRANSCODER_EDP(dev_priv) && port == PORT_A) { 1732 + if (HAS_TRANSCODER(dev_priv, TRANSCODER_EDP) && port == PORT_A) { 1823 1733 tmp = intel_de_read(dev_priv, 1824 1734 TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)); 1825 1735 ··· 1877 1787 } 1878 1788 1879 1789 if (!*pipe_mask) 1880 - DRM_DEBUG_KMS("No pipe for [ENCODER:%d:%s] found\n", 1881 - encoder->base.base.id, encoder->base.name); 1790 + drm_dbg_kms(&dev_priv->drm, 1791 + "No pipe for [ENCODER:%d:%s] found\n", 1792 + encoder->base.base.id, encoder->base.name); 1882 1793 1883 1794 if (!mst_pipe_mask && hweight8(*pipe_mask) > 1) { 1884 - DRM_DEBUG_KMS("Multiple pipes for [ENCODER:%d:%s] (pipe_mask %02x)\n", 1885 - encoder->base.base.id, encoder->base.name, 1886 - *pipe_mask); 1795 + drm_dbg_kms(&dev_priv->drm, 1796 + "Multiple pipes for [ENCODER:%d:%s] (pipe_mask %02x)\n", 1797 + encoder->base.base.id, encoder->base.name, 1798 + *pipe_mask); 1887 1799 *pipe_mask = BIT(ffs(*pipe_mask) - 1); 1888 1800 } 1889 1801 1890 1802 if (mst_pipe_mask && mst_pipe_mask != *pipe_mask) 1891 - DRM_DEBUG_KMS("Conflicting MST and non-MST state for [ENCODER:%d:%s] (pipe_mask %02x mst_pipe_mask %02x)\n", 1892 - encoder->base.base.id, encoder->base.name, 1893 - *pipe_mask, mst_pipe_mask); 1803 + drm_dbg_kms(&dev_priv->drm, 1804 + "Conflicting MST and non-MST state for [ENCODER:%d:%s] (pipe_mask %02x mst_pipe_mask %02x)\n", 1805 + encoder->base.base.id, encoder->base.name, 1806 + *pipe_mask, mst_pipe_mask); 
1894 1807 else 1895 1808 *is_dp_mst = mst_pipe_mask; 1896 1809 ··· 1903 1810 if ((tmp & (BXT_PHY_CMNLANE_POWERDOWN_ACK | 1904 1811 BXT_PHY_LANE_POWERDOWN_ACK | 1905 1812 BXT_PHY_LANE_ENABLED)) != BXT_PHY_LANE_ENABLED) 1906 - DRM_ERROR("[ENCODER:%d:%s] enabled but PHY powered down? " 1907 - "(PHY_CTL %08x)\n", encoder->base.base.id, 1908 - encoder->base.name, tmp); 1813 + drm_err(&dev_priv->drm, 1814 + "[ENCODER:%d:%s] enabled but PHY powered down? (PHY_CTL %08x)\n", 1815 + encoder->base.base.id, encoder->base.name, tmp); 1909 1816 } 1910 1817 1911 1818 intel_display_power_put(dev_priv, encoder->power_domain, wakeref); ··· 2071 1978 2072 1979 /* Make sure that the requested I_boost is valid */ 2073 1980 if (iboost && iboost != 0x1 && iboost != 0x3 && iboost != 0x7) { 2074 - DRM_ERROR("Invalid I_boost value %u\n", iboost); 1981 + drm_err(&dev_priv->drm, "Invalid I_boost value %u\n", iboost); 2075 1982 return; 2076 1983 } 2077 1984 ··· 2130 2037 icl_get_combo_buf_trans(dev_priv, encoder->type, 2131 2038 intel_dp->link_rate, &n_entries); 2132 2039 else 2133 - n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations); 2040 + icl_get_mg_buf_trans(dev_priv, encoder->type, 2041 + intel_dp->link_rate, &n_entries); 2134 2042 } else if (IS_CANNONLAKE(dev_priv)) { 2135 2043 if (encoder->type == INTEL_OUTPUT_EDP) 2136 2044 cnl_get_buf_trans_edp(dev_priv, &n_entries); ··· 2331 2237 return; 2332 2238 2333 2239 if (level >= n_entries) { 2334 - DRM_DEBUG_KMS("DDI translation not found for level %d. Using %d instead.", level, n_entries - 1); 2240 + drm_dbg_kms(&dev_priv->drm, 2241 + "DDI translation not found for level %d. 
Using %d instead.", 2242 + level, n_entries - 1); 2335 2243 level = n_entries - 1; 2336 2244 } 2337 2245 ··· 2446 2350 } 2447 2351 2448 2352 static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder, 2449 - int link_clock, 2450 - u32 level) 2353 + int link_clock, u32 level, 2354 + enum intel_output_type type) 2451 2355 { 2452 2356 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2453 2357 enum tc_port tc_port = intel_port_to_tc(dev_priv, encoder->port); 2454 2358 const struct icl_mg_phy_ddi_buf_trans *ddi_translations; 2455 2359 u32 n_entries, val; 2456 - int ln; 2360 + int ln, rate = 0; 2457 2361 2458 - n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations); 2459 - ddi_translations = icl_mg_phy_ddi_translations; 2362 + if (type != INTEL_OUTPUT_HDMI) { 2363 + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2364 + 2365 + rate = intel_dp->link_rate; 2366 + } 2367 + 2368 + ddi_translations = icl_get_mg_buf_trans(dev_priv, type, rate, 2369 + &n_entries); 2460 2370 /* The table does not have values for level 3 and level 9. */ 2461 2371 if (level >= n_entries || level == 3 || level == 9) { 2462 - DRM_DEBUG_KMS("DDI translation not found for level %d. Using %d instead.", 2463 - level, n_entries - 2); 2372 + drm_dbg_kms(&dev_priv->drm, 2373 + "DDI translation not found for level %d. 
Using %d instead.", 2374 + level, n_entries - 2); 2464 2375 level = n_entries - 2; 2465 2376 } 2466 2377 ··· 2586 2483 if (intel_phy_is_combo(dev_priv, phy)) 2587 2484 icl_combo_phy_ddi_vswing_sequence(encoder, level, type); 2588 2485 else 2589 - icl_mg_phy_ddi_vswing_sequence(encoder, link_clock, level); 2486 + icl_mg_phy_ddi_vswing_sequence(encoder, link_clock, level, 2487 + type); 2590 2488 } 2591 2489 2592 2490 static void ··· 2802 2698 if (drm_WARN_ON(&dev_priv->drm, ddi_clk_needed)) 2803 2699 continue; 2804 2700 2805 - DRM_NOTE("PHY %c is disabled/in DSI mode with an ungated DDI clock, gate it\n", 2806 - phy_name(phy)); 2701 + drm_notice(&dev_priv->drm, 2702 + "PHY %c is disabled/in DSI mode with an ungated DDI clock, gate it\n", 2703 + phy_name(phy)); 2807 2704 val |= icl_dpclka_cfgcr0_clk_off(dev_priv, phy); 2808 2705 intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val); 2809 2706 } ··· 3041 2936 static void intel_dp_sink_set_fec_ready(struct intel_dp *intel_dp, 3042 2937 const struct intel_crtc_state *crtc_state) 3043 2938 { 2939 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 2940 + 3044 2941 if (!crtc_state->fec_enable) 3045 2942 return; 3046 2943 3047 2944 if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_FEC_CONFIGURATION, DP_FEC_READY) <= 0) 3048 - DRM_DEBUG_KMS("Failed to set FEC_READY in the sink\n"); 2945 + drm_dbg_kms(&i915->drm, 2946 + "Failed to set FEC_READY in the sink\n"); 3049 2947 } 3050 2948 3051 2949 static void intel_ddi_enable_fec(struct intel_encoder *encoder, ··· 3068 2960 3069 2961 if (intel_de_wait_for_set(dev_priv, intel_dp->regs.dp_tp_status, 3070 2962 DP_TP_STATUS_FEC_ENABLE_LIVE, 1)) 3071 - DRM_ERROR("Timed out waiting for FEC Enable Status\n"); 2963 + drm_err(&dev_priv->drm, 2964 + "Timed out waiting for FEC Enable Status\n"); 3072 2965 } 3073 2966 3074 2967 static void intel_ddi_disable_fec_state(struct intel_encoder *encoder, ··· 3089 2980 intel_de_posting_read(dev_priv, intel_dp->regs.dp_tp_ctl); 3090 2981 } 3091 2982 3092 - 
static void tgl_ddi_pre_enable_dp(struct intel_encoder *encoder, 2983 + static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state, 2984 + struct intel_encoder *encoder, 3093 2985 const struct intel_crtc_state *crtc_state, 3094 2986 const struct drm_connector_state *conn_state) 3095 2987 { ··· 3230 3120 intel_dsc_enable(encoder, crtc_state); 3231 3121 } 3232 3122 3233 - static void hsw_ddi_pre_enable_dp(struct intel_encoder *encoder, 3123 + static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state, 3124 + struct intel_encoder *encoder, 3234 3125 const struct intel_crtc_state *crtc_state, 3235 3126 const struct drm_connector_state *conn_state) 3236 3127 { ··· 3304 3193 intel_dsc_enable(encoder, crtc_state); 3305 3194 } 3306 3195 3307 - static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder, 3196 + static void intel_ddi_pre_enable_dp(struct intel_atomic_state *state, 3197 + struct intel_encoder *encoder, 3308 3198 const struct intel_crtc_state *crtc_state, 3309 3199 const struct drm_connector_state *conn_state) 3310 3200 { 3311 3201 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 3312 3202 3313 3203 if (INTEL_GEN(dev_priv) >= 12) 3314 - tgl_ddi_pre_enable_dp(encoder, crtc_state, conn_state); 3204 + tgl_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); 3315 3205 else 3316 - hsw_ddi_pre_enable_dp(encoder, crtc_state, conn_state); 3206 + hsw_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); 3317 3207 3318 3208 /* MST will call a setting of MSA after an allocating of Virtual Channel 3319 3209 * from MST encoder pre_enable callback. 
··· 3326 3214 } 3327 3215 } 3328 3216 3329 - static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder, 3217 + static void intel_ddi_pre_enable_hdmi(struct intel_atomic_state *state, 3218 + struct intel_encoder *encoder, 3330 3219 const struct intel_crtc_state *crtc_state, 3331 3220 const struct drm_connector_state *conn_state) 3332 3221 { ··· 3367 3254 crtc_state, conn_state); 3368 3255 } 3369 3256 3370 - static void intel_ddi_pre_enable(struct intel_encoder *encoder, 3257 + static void intel_ddi_pre_enable(struct intel_atomic_state *state, 3258 + struct intel_encoder *encoder, 3371 3259 const struct intel_crtc_state *crtc_state, 3372 3260 const struct drm_connector_state *conn_state) 3373 3261 { ··· 3397 3283 intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, true); 3398 3284 3399 3285 if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) { 3400 - intel_ddi_pre_enable_hdmi(encoder, crtc_state, conn_state); 3286 + intel_ddi_pre_enable_hdmi(state, encoder, crtc_state, 3287 + conn_state); 3401 3288 } else { 3402 3289 struct intel_lspcon *lspcon = 3403 3290 enc_to_intel_lspcon(encoder); 3404 3291 3405 - intel_ddi_pre_enable_dp(encoder, crtc_state, conn_state); 3292 + intel_ddi_pre_enable_dp(state, encoder, crtc_state, 3293 + conn_state); 3406 3294 if (lspcon->active) { 3407 3295 struct intel_digital_port *dig_port = 3408 3296 enc_to_dig_port(encoder); ··· 3447 3331 intel_wait_ddi_buf_idle(dev_priv, port); 3448 3332 } 3449 3333 3450 - static void intel_ddi_post_disable_dp(struct intel_encoder *encoder, 3334 + static void intel_ddi_post_disable_dp(struct intel_atomic_state *state, 3335 + struct intel_encoder *encoder, 3451 3336 const struct intel_crtc_state *old_crtc_state, 3452 3337 const struct drm_connector_state *old_conn_state) 3453 3338 { ··· 3504 3387 intel_ddi_clk_disable(encoder); 3505 3388 } 3506 3389 3507 - static void intel_ddi_post_disable_hdmi(struct intel_encoder *encoder, 3390 + static void intel_ddi_post_disable_hdmi(struct 
intel_atomic_state *state, 3391 + struct intel_encoder *encoder, 3508 3392 const struct intel_crtc_state *old_crtc_state, 3509 3393 const struct drm_connector_state *old_conn_state) 3510 3394 { ··· 3528 3410 intel_dp_dual_mode_set_tmds_output(intel_hdmi, false); 3529 3411 } 3530 3412 3531 - static void icl_disable_transcoder_port_sync(const struct intel_crtc_state *old_crtc_state) 3532 - { 3533 - struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc); 3534 - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 3535 - 3536 - if (old_crtc_state->master_transcoder == INVALID_TRANSCODER) 3537 - return; 3538 - 3539 - DRM_DEBUG_KMS("Disabling Transcoder Port Sync on Slave Transcoder %s\n", 3540 - transcoder_name(old_crtc_state->cpu_transcoder)); 3541 - 3542 - intel_de_write(dev_priv, 3543 - TRANS_DDI_FUNC_CTL2(old_crtc_state->cpu_transcoder), 0); 3544 - } 3545 - 3546 - static void intel_ddi_post_disable(struct intel_encoder *encoder, 3413 + static void intel_ddi_post_disable(struct intel_atomic_state *state, 3414 + struct intel_encoder *encoder, 3547 3415 const struct intel_crtc_state *old_crtc_state, 3548 3416 const struct drm_connector_state *old_conn_state) 3549 3417 { ··· 3542 3438 intel_crtc_vblank_off(old_crtc_state); 3543 3439 3544 3440 intel_disable_pipe(old_crtc_state); 3545 - 3546 - if (INTEL_GEN(dev_priv) >= 11) 3547 - icl_disable_transcoder_port_sync(old_crtc_state); 3548 3441 3549 3442 intel_ddi_disable_transcoder_func(old_crtc_state); 3550 3443 ··· 3567 3466 */ 3568 3467 3569 3468 if (intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_HDMI)) 3570 - intel_ddi_post_disable_hdmi(encoder, 3571 - old_crtc_state, old_conn_state); 3469 + intel_ddi_post_disable_hdmi(state, encoder, old_crtc_state, 3470 + old_conn_state); 3572 3471 else 3573 - intel_ddi_post_disable_dp(encoder, 3574 - old_crtc_state, old_conn_state); 3472 + intel_ddi_post_disable_dp(state, encoder, old_crtc_state, 3473 + old_conn_state); 3575 3474 3576 3475 if (INTEL_GEN(dev_priv) 
>= 11) 3577 3476 icl_unmap_plls_to_ports(encoder); ··· 3584 3483 intel_tc_port_put_link(dig_port); 3585 3484 } 3586 3485 3587 - void intel_ddi_fdi_post_disable(struct intel_encoder *encoder, 3486 + void intel_ddi_fdi_post_disable(struct intel_atomic_state *state, 3487 + struct intel_encoder *encoder, 3588 3488 const struct intel_crtc_state *old_crtc_state, 3589 3489 const struct drm_connector_state *old_conn_state) 3590 3490 { ··· 3619 3517 intel_de_write(dev_priv, FDI_RX_CTL(PIPE_A), val); 3620 3518 } 3621 3519 3622 - static void intel_enable_ddi_dp(struct intel_encoder *encoder, 3520 + static void trans_port_sync_stop_link_train(struct intel_atomic_state *state, 3521 + struct intel_encoder *encoder, 3522 + const struct intel_crtc_state *crtc_state) 3523 + { 3524 + const struct drm_connector_state *conn_state; 3525 + struct drm_connector *conn; 3526 + int i; 3527 + 3528 + if (!crtc_state->sync_mode_slaves_mask) 3529 + return; 3530 + 3531 + for_each_new_connector_in_state(&state->base, conn, conn_state, i) { 3532 + struct intel_encoder *slave_encoder = 3533 + to_intel_encoder(conn_state->best_encoder); 3534 + struct intel_crtc *slave_crtc = to_intel_crtc(conn_state->crtc); 3535 + const struct intel_crtc_state *slave_crtc_state; 3536 + 3537 + if (!slave_crtc) 3538 + continue; 3539 + 3540 + slave_crtc_state = 3541 + intel_atomic_get_new_crtc_state(state, slave_crtc); 3542 + 3543 + if (slave_crtc_state->master_transcoder != 3544 + crtc_state->cpu_transcoder) 3545 + continue; 3546 + 3547 + intel_dp_stop_link_train(enc_to_intel_dp(slave_encoder)); 3548 + } 3549 + 3550 + usleep_range(200, 400); 3551 + 3552 + intel_dp_stop_link_train(enc_to_intel_dp(encoder)); 3553 + } 3554 + 3555 + static void intel_enable_ddi_dp(struct intel_atomic_state *state, 3556 + struct intel_encoder *encoder, 3623 3557 const struct intel_crtc_state *crtc_state, 3624 3558 const struct drm_connector_state *conn_state) 3625 3559 { ··· 3674 3536 3675 3537 if (crtc_state->has_audio) 3676 3538 
 	intel_audio_codec_enable(encoder, crtc_state, conn_state);
+
+	trans_port_sync_stop_link_train(state, encoder, crtc_state);
 }
 
 static i915_reg_t
···
 	return CHICKEN_TRANS(trans[port]);
 }
 
-static void intel_enable_ddi_hdmi(struct intel_encoder *encoder,
+static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state)
 {
···
 	if (!intel_hdmi_handle_sink_scrambling(encoder, connector,
 					       crtc_state->hdmi_high_tmds_clock_ratio,
 					       crtc_state->hdmi_scrambling))
-		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Failed to configure sink "
-			      "scrambling/TMDS bit clock ratio\n",
-			      connector->base.id, connector->name);
+		drm_dbg_kms(&dev_priv->drm,
+			    "[CONNECTOR:%d:%s] Failed to configure sink scrambling/TMDS bit clock ratio\n",
+			    connector->base.id, connector->name);
 
 	/* Display WA #1143: skl,kbl,cfl */
 	if (IS_GEN9_BC(dev_priv)) {
···
 	intel_audio_codec_enable(encoder, crtc_state, conn_state);
 }
 
-static void intel_enable_ddi(struct intel_encoder *encoder,
+static void intel_enable_ddi(struct intel_atomic_state *state,
+			     struct intel_encoder *encoder,
 			     const struct intel_crtc_state *crtc_state,
 			     const struct drm_connector_state *conn_state)
 {
···
 	intel_crtc_vblank_on(crtc_state);
 
 	if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
-		intel_enable_ddi_hdmi(encoder, crtc_state, conn_state);
+		intel_enable_ddi_hdmi(state, encoder, crtc_state, conn_state);
 	else
-		intel_enable_ddi_dp(encoder, crtc_state, conn_state);
+		intel_enable_ddi_dp(state, encoder, crtc_state, conn_state);
 
 	/* Enable hdcp if it's desired */
 	if (conn_state->content_protection ==
···
 				  (u8)conn_state->hdcp_content_type);
 }
 
-static void intel_disable_ddi_dp(struct intel_encoder *encoder,
+static void intel_disable_ddi_dp(struct intel_atomic_state *state,
+				 struct intel_encoder *encoder,
 				 const struct intel_crtc_state *old_crtc_state,
 				 const struct drm_connector_state *old_conn_state)
 {
···
 					     false);
 }
 
-static void intel_disable_ddi_hdmi(struct intel_encoder *encoder,
+static void intel_disable_ddi_hdmi(struct intel_atomic_state *state,
+				   struct intel_encoder *encoder,
 				   const struct intel_crtc_state *old_crtc_state,
 				   const struct drm_connector_state *old_conn_state)
 {
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
 	struct drm_connector *connector = old_conn_state->connector;
 
 	if (old_crtc_state->has_audio)
···
 
 	if (!intel_hdmi_handle_sink_scrambling(encoder, connector,
 					       false, false))
-		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Failed to reset sink scrambling/TMDS bit clock ratio\n",
-			      connector->base.id, connector->name);
+		drm_dbg_kms(&i915->drm,
+			    "[CONNECTOR:%d:%s] Failed to reset sink scrambling/TMDS bit clock ratio\n",
+			    connector->base.id, connector->name);
 }
 
-static void intel_disable_ddi(struct intel_encoder *encoder,
+static void intel_disable_ddi(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *old_crtc_state,
 			      const struct drm_connector_state *old_conn_state)
 {
 	intel_hdcp_disable(to_intel_connector(old_conn_state->connector));
 
 	if (intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_HDMI))
-		intel_disable_ddi_hdmi(encoder, old_crtc_state, old_conn_state);
+		intel_disable_ddi_hdmi(state, encoder, old_crtc_state,
+				       old_conn_state);
 	else
-		intel_disable_ddi_dp(encoder, old_crtc_state, old_conn_state);
+		intel_disable_ddi_dp(state, encoder, old_crtc_state,
+				     old_conn_state);
 }
 
-static void intel_ddi_update_pipe_dp(struct intel_encoder *encoder,
+static void intel_ddi_update_pipe_dp(struct intel_atomic_state *state,
+				     struct intel_encoder *encoder,
 				     const struct intel_crtc_state *crtc_state,
 				     const struct drm_connector_state *conn_state)
 {
···
 	intel_psr_update(intel_dp, crtc_state);
 	intel_edp_drrs_enable(intel_dp, crtc_state);
 
-	intel_panel_update_backlight(encoder, crtc_state, conn_state);
+	intel_panel_update_backlight(state, encoder, crtc_state, conn_state);
 }
 
-static void intel_ddi_update_pipe(struct intel_encoder *encoder,
+static void intel_ddi_update_pipe(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state)
 {
 
 	if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
-		intel_ddi_update_pipe_dp(encoder, crtc_state, conn_state);
+		intel_ddi_update_pipe_dp(state, encoder, crtc_state,
+					 conn_state);
 
-	intel_hdcp_update_pipe(encoder, crtc_state, conn_state);
+	intel_hdcp_update_pipe(state, encoder, crtc_state, conn_state);
 }
 
 static void
···
 }
 
 static void
-intel_ddi_pre_pll_enable(struct intel_encoder *encoder,
+intel_ddi_pre_pll_enable(struct intel_atomic_state *state,
+			 struct intel_encoder *encoder,
 			 const struct intel_crtc_state *crtc_state,
 			 const struct drm_connector_state *conn_state)
 {
···
 		crtc_state->min_voltage_level = 2;
 }
 
+static enum transcoder
+bdw_transcoder_master_readout(struct drm_i915_private *dev_priv,
+			      enum transcoder cpu_transcoder)
+{
+	u32 master_select;
+
+	if (INTEL_GEN(dev_priv) >= 11) {
+		u32 ctl2 = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL2(cpu_transcoder));
+
+		if ((ctl2 & PORT_SYNC_MODE_ENABLE) == 0)
+			return INVALID_TRANSCODER;
+
+		master_select = REG_FIELD_GET(PORT_SYNC_MODE_MASTER_SELECT_MASK, ctl2);
+	} else {
+		u32 ctl = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder));
+
+		if ((ctl & TRANS_DDI_PORT_SYNC_ENABLE) == 0)
+			return INVALID_TRANSCODER;
+
+		master_select = REG_FIELD_GET(TRANS_DDI_PORT_SYNC_MASTER_SELECT_MASK, ctl);
+	}
+
+	if (master_select == 0)
+		return TRANSCODER_EDP;
+	else
+		return master_select - 1;
+}
+
+static void bdw_get_trans_port_sync_config(struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
+	u32 transcoders = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+		BIT(TRANSCODER_C) | BIT(TRANSCODER_D);
+	enum transcoder cpu_transcoder;
+
+	crtc_state->master_transcoder =
+		bdw_transcoder_master_readout(dev_priv, crtc_state->cpu_transcoder);
+
+	for_each_cpu_transcoder_masked(dev_priv, cpu_transcoder, transcoders) {
+		enum intel_display_power_domain power_domain;
+		intel_wakeref_t trans_wakeref;
+
+		power_domain = POWER_DOMAIN_TRANSCODER(cpu_transcoder);
+		trans_wakeref = intel_display_power_get_if_enabled(dev_priv,
+								   power_domain);
+
+		if (!trans_wakeref)
+			continue;
+
+		if (bdw_transcoder_master_readout(dev_priv, cpu_transcoder) ==
+		    crtc_state->cpu_transcoder)
+			crtc_state->sync_mode_slaves_mask |= BIT(cpu_transcoder);
+
+		intel_display_power_put(dev_priv, power_domain, trans_wakeref);
+	}
+
+	drm_WARN_ON(&dev_priv->drm,
+		    crtc_state->master_transcoder != INVALID_TRANSCODER &&
+		    crtc_state->sync_mode_slaves_mask);
+}
+
 void intel_ddi_get_config(struct intel_encoder *encoder,
 			  struct intel_crtc_state *pipe_config)
 {
···
 		pipe_config->fec_enable =
 			intel_de_read(dev_priv, dp_tp_ctl) & DP_TP_CTL_FEC_ENABLE;
 
-		DRM_DEBUG_KMS("[ENCODER:%d:%s] Fec status: %u\n",
-			      encoder->base.base.id, encoder->base.name,
-			      pipe_config->fec_enable);
+		drm_dbg_kms(&dev_priv->drm,
+			    "[ENCODER:%d:%s] Fec status: %u\n",
+			    encoder->base.base.id, encoder->base.name,
+			    pipe_config->fec_enable);
 		}
 
 		break;
···
 		 * up by the BIOS, and thus we can't get the mode at module
 		 * load.
 		 */
-		DRM_DEBUG_KMS("pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
-			      pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
+		drm_dbg_kms(&dev_priv->drm,
+			    "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+			    pipe_config->pipe_bpp, dev_priv->vbt.edp.bpp);
 		dev_priv->vbt.edp.bpp = pipe_config->pipe_bpp;
 	}
 
···
 	intel_read_infoframe(encoder, pipe_config,
 			     HDMI_INFOFRAME_TYPE_DRM,
 			     &pipe_config->infoframes.drm);
+
+	if (INTEL_GEN(dev_priv) >= 8)
+		bdw_get_trans_port_sync_config(pipe_config);
 }
 
 static enum intel_output_type
···
 	enum port port = encoder->port;
 	int ret;
 
-	if (HAS_TRANSCODER_EDP(dev_priv) && port == PORT_A)
+	if (HAS_TRANSCODER(dev_priv, TRANSCODER_EDP) && port == PORT_A)
 		pipe_config->cpu_transcoder = TRANSCODER_EDP;
 
 	if (intel_crtc_has_type(pipe_config, INTEL_OUTPUT_HDMI)) {
···
 	u8 transcoders = 0;
 	int i;
 
-	if (INTEL_GEN(dev_priv) < 11)
+	/*
+	 * We don't enable port sync on BDW due to missing w/as and
+	 * due to not having adjusted the modeset sequence appropriately.
+	 */
+	if (INTEL_GEN(dev_priv) < 9)
 		return 0;
 
 	if (!intel_crtc_has_type(ref_crtc_state, INTEL_OUTPUT_DP))
···
 			   struct intel_crtc_state *crtc_state,
 			   struct drm_connector_state *conn_state)
 {
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
 	struct drm_connector *connector = conn_state->connector;
 	u8 port_sync_transcoders = 0;
 
-	DRM_DEBUG_KMS("[ENCODER:%d:%s] [CRTC:%d:%s]",
-		      encoder->base.base.id, encoder->base.name,
-		      crtc_state->uapi.crtc->base.id, crtc_state->uapi.crtc->name);
+	drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] [CRTC:%d:%s]",
+		    encoder->base.base.id, encoder->base.name,
+		    crtc_state->uapi.crtc->base.id, crtc_state->uapi.crtc->name);
 
 	if (connector->has_tile)
 		port_sync_transcoders = intel_ddi_port_sync_transcoders(crtc_state,
···
 
 	ret = drm_scdc_readb(adapter, SCDC_TMDS_CONFIG, &config);
 	if (ret < 0) {
-		DRM_ERROR("Failed to read TMDS config: %d\n", ret);
+		drm_err(&dev_priv->drm, "Failed to read TMDS config: %d\n",
+			ret);
 		return 0;
 	}
 
···
 
 static enum intel_hotplug_state
 intel_ddi_hotplug(struct intel_encoder *encoder,
-		  struct intel_connector *connector,
-		  bool irq_received)
+		  struct intel_connector *connector)
 {
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
 	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
+	enum phy phy = intel_port_to_phy(i915, encoder->port);
+	bool is_tc = intel_phy_is_tc(i915, phy);
 	struct drm_modeset_acquire_ctx ctx;
 	enum intel_hotplug_state state;
 	int ret;
 
-	state = intel_encoder_hotplug(encoder, connector, irq_received);
+	state = intel_encoder_hotplug(encoder, connector);
 
 	drm_modeset_acquire_init(&ctx, 0);
 
···
 	 * valid EDID. To solve this schedule another detection cycle if this
 	 * time around we didn't detect any change in the sink's connection
 	 * status.
+	 *
+	 * Type-c connectors which get their HPD signal deasserted then
+	 * reasserted, without unplugging/replugging the sink from the
+	 * connector, introduce a delay until the AUX channel communication
+	 * becomes functional. Retry the detection for 5 seconds on type-c
+	 * connectors to account for this delay.
 	 */
-	if (state == INTEL_HOTPLUG_UNCHANGED && irq_received &&
+	if (state == INTEL_HOTPLUG_UNCHANGED &&
+	    connector->hotplug_retries < (is_tc ? 5 : 1) &&
 	    !dig_port->dp.is_mst)
 		state = INTEL_HOTPLUG_RETRY;
 
···
 	 * so we use the proper lane count for our calculations.
 	 */
 	if (intel_ddi_a_force_4_lanes(intel_dport)) {
-		DRM_DEBUG_KMS("Forcing DDI_A_4_LANES for port A\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Forcing DDI_A_4_LANES for port A\n");
 		intel_dport->saved_port_bits |= DDI_A_4_LANES;
 		max_lanes = 4;
 	}
···
 		init_dp = true;
 		init_lspcon = true;
 		init_hdmi = false;
-		DRM_DEBUG_KMS("VBT says port %c has lspcon\n", port_name(port));
+		drm_dbg_kms(&dev_priv->drm, "VBT says port %c has lspcon\n",
+			    port_name(port));
 	}
 
 	if (!init_dp && !init_hdmi) {
-		DRM_DEBUG_KMS("VBT says port %c is not DVI/HDMI/DP compatible, respect it\n",
-			      port_name(port));
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT says port %c is not DVI/HDMI/DP compatible, respect it\n",
+			    port_name(port));
 		return;
 	}
 
···
 	if (init_lspcon) {
 		if (lspcon_init(intel_dig_port))
 			/* TODO: handle hdmi info frame part */
-			DRM_DEBUG_KMS("LSPCON init success on port %c\n",
-				      port_name(port));
+			drm_dbg_kms(&dev_priv->drm,
+				    "LSPCON init success on port %c\n",
+				    port_name(port));
 		else
 			/*
 			 * LSPCON init faied, but DP init was success, so
 			 * lets try to drive as DP++ port.
 			 */
-			DRM_ERROR("LSPCON init failed on port %c\n",
+			drm_err(&dev_priv->drm,
+				"LSPCON init failed on port %c\n",
 				  port_name(port));
 	}
 
drivers/gpu/drm/i915/display/intel_ddi.h (+2 -1)
···
 struct intel_dpll_hw_state;
 struct intel_encoder;
 
-void intel_ddi_fdi_post_disable(struct intel_encoder *intel_encoder,
+void intel_ddi_fdi_post_disable(struct intel_atomic_state *state,
+				struct intel_encoder *intel_encoder,
 				const struct intel_crtc_state *old_crtc_state,
 				const struct drm_connector_state *old_conn_state);
 void hsw_fdi_link_train(struct intel_encoder *encoder,
drivers/gpu/drm/i915/display/intel_display.c (+142 -336)
···
 		       intel_de_read(dev_priv, CLKGATE_DIS_PSL(pipe)) & ~(DUPS1_GATING_DIS | DUPS2_GATING_DIS));
 }
 
-/* Wa_2006604312:icl */
+/* Wa_2006604312:icl,ehl */
 static void
 icl_wa_scalerclkgating(struct drm_i915_private *dev_priv, enum pipe pipe,
 		       bool enable)
···
 	return drm_atomic_crtc_needs_modeset(&state->uapi);
 }
 
-bool
-is_trans_port_sync_mode(const struct intel_crtc_state *crtc_state)
-{
-	return (crtc_state->master_transcoder != INVALID_TRANSCODER ||
-		crtc_state->sync_mode_slaves_mask);
-}
-
 static bool
 is_trans_port_sync_slave(const struct intel_crtc_state *crtc_state)
 {
 	return crtc_state->master_transcoder != INVALID_TRANSCODER;
+}
+
+static bool
+is_trans_port_sync_master(const struct intel_crtc_state *crtc_state)
+{
+	return crtc_state->sync_mode_slaves_mask != 0;
+}
+
+bool
+is_trans_port_sync_mode(const struct intel_crtc_state *crtc_state)
+{
+	return is_trans_port_sync_master(crtc_state) ||
+	       is_trans_port_sync_slave(crtc_state);
 }
 
 /*
···
 	return clock->dot / 5;
 }
 
-#define INTELPllInvalid(s)   do { /* DRM_DEBUG(s); */ return false; } while (0)
-
 /*
  * Returns whether the given set of divisors are valid for a given refclk with
  * the given connectors.
  */
-static bool intel_PLL_is_valid(struct drm_i915_private *dev_priv,
+static bool intel_pll_is_valid(struct drm_i915_private *dev_priv,
 			       const struct intel_limit *limit,
 			       const struct dpll *clock)
 {
-	if (clock->n < limit->n.min || limit->n.max < clock->n)
-		INTELPllInvalid("n out of range\n");
-	if (clock->p1 < limit->p1.min || limit->p1.max < clock->p1)
-		INTELPllInvalid("p1 out of range\n");
-	if (clock->m2 < limit->m2.min || limit->m2.max < clock->m2)
-		INTELPllInvalid("m2 out of range\n");
-	if (clock->m1 < limit->m1.min || limit->m1.max < clock->m1)
-		INTELPllInvalid("m1 out of range\n");
+	if (clock->n < limit->n.min || limit->n.max < clock->n)
+		return false;
+	if (clock->p1 < limit->p1.min || limit->p1.max < clock->p1)
+		return false;
+	if (clock->m2 < limit->m2.min || limit->m2.max < clock->m2)
+		return false;
+	if (clock->m1 < limit->m1.min || limit->m1.max < clock->m1)
+		return false;
 
 	if (!IS_PINEVIEW(dev_priv) && !IS_VALLEYVIEW(dev_priv) &&
 	    !IS_CHERRYVIEW(dev_priv) && !IS_GEN9_LP(dev_priv))
 		if (clock->m1 <= clock->m2)
-			INTELPllInvalid("m1 <= m2\n");
+			return false;
 
 	if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv) &&
 	    !IS_GEN9_LP(dev_priv)) {
 		if (clock->p < limit->p.min || limit->p.max < clock->p)
-			INTELPllInvalid("p out of range\n");
+			return false;
 		if (clock->m < limit->m.min || limit->m.max < clock->m)
-			INTELPllInvalid("m out of range\n");
+			return false;
 	}
 
 	if (clock->vco < limit->vco.min || limit->vco.max < clock->vco)
-		INTELPllInvalid("vco out of range\n");
+		return false;
 	/* XXX: We may need to be checking "Dot clock" depending on the multiplier,
 	 * connector, etc., rather than just a single range.
 	 */
 	if (clock->dot < limit->dot.min || limit->dot.max < clock->dot)
-		INTELPllInvalid("dot out of range\n");
+		return false;
 
 	return true;
 }
···
 			int this_err;
 
 			i9xx_calc_dpll_params(refclk, &clock);
-			if (!intel_PLL_is_valid(to_i915(dev),
+			if (!intel_pll_is_valid(to_i915(dev),
 						limit,
 						&clock))
 				continue;
···
 			int this_err;
 
 			pnv_calc_dpll_params(refclk, &clock);
-			if (!intel_PLL_is_valid(to_i915(dev),
+			if (!intel_pll_is_valid(to_i915(dev),
 						limit,
 						&clock))
 				continue;
···
 			int this_err;
 
 			i9xx_calc_dpll_params(refclk, &clock);
-			if (!intel_PLL_is_valid(to_i915(dev),
+			if (!intel_pll_is_valid(to_i915(dev),
 						limit,
 						&clock))
 				continue;
···
 
 				vlv_calc_dpll_params(refclk, &clock);
 
-				if (!intel_PLL_is_valid(to_i915(dev),
+				if (!intel_pll_is_valid(to_i915(dev),
 							limit,
 							&clock))
 					continue;
···
 
 			chv_calc_dpll_params(refclk, &clock);
 
-			if (!intel_PLL_is_valid(to_i915(dev), limit, &clock))
+			if (!intel_pll_is_valid(to_i915(dev), limit, &clock))
 				continue;
 
 			if (!vlv_PLL_is_optimal(dev, target, &clock, best_clock,
···
 static int
 intel_fb_check_ccs_xy(struct drm_framebuffer *fb, int ccs_plane, int x, int y)
 {
+	struct drm_i915_private *i915 = to_i915(fb->dev);
 	struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
 	int main_plane;
 	int hsub, vsub;
···
 	 * x/y offsets must match between CCS and the main surface.
 	 */
 	if (main_x != ccs_x || main_y != ccs_y) {
-		DRM_DEBUG_KMS("Bad CCS x/y (main %d,%d ccs %d,%d) full (main %d,%d ccs %d,%d)\n",
+		drm_dbg_kms(&i915->drm,
+			      "Bad CCS x/y (main %d,%d ccs %d,%d) full (main %d,%d ccs %d,%d)\n",
 			      main_x, main_y,
 			      ccs_x, ccs_y,
 			      intel_fb->normal[main_plane].x,
···
 		return DRM_FORMAT_RGB565;
 	case PLANE_CTL_FORMAT_NV12:
 		return DRM_FORMAT_NV12;
+	case PLANE_CTL_FORMAT_XYUV:
+		return DRM_FORMAT_XYUV8888;
 	case PLANE_CTL_FORMAT_P010:
 		return DRM_FORMAT_P010;
 	case PLANE_CTL_FORMAT_P012:
···
 	case DRM_FORMAT_XRGB16161616F:
 	case DRM_FORMAT_ARGB16161616F:
 		return PLANE_CTL_FORMAT_XRGB_16161616F;
+	case DRM_FORMAT_XYUV8888:
+		return PLANE_CTL_FORMAT_XYUV;
 	case DRM_FORMAT_YUYV:
 		return PLANE_CTL_FORMAT_YUV422 | PLANE_CTL_YUV422_YUYV;
 	case DRM_FORMAT_YVYU:
···
 	 */
 	tmp |= PIXEL_ROUNDING_TRUNC_FB_PASSTHRU;
 	intel_de_write(dev_priv, PIPE_CHICKEN(pipe), tmp);
-}
-
-static void icl_enable_trans_port_sync(const struct intel_crtc_state *crtc_state)
-{
-	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
-	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	u32 trans_ddi_func_ctl2_val;
-	u8 master_select;
-
-	/*
-	 * Configure the master select and enable Transcoder Port Sync for
-	 * Slave CRTCs transcoder.
-	 */
-	if (crtc_state->master_transcoder == INVALID_TRANSCODER)
-		return;
-
-	if (crtc_state->master_transcoder == TRANSCODER_EDP)
-		master_select = 0;
-	else
-		master_select = crtc_state->master_transcoder + 1;
-
-	/* Set the master select bits for Tranascoder Port Sync */
-	trans_ddi_func_ctl2_val = (PORT_SYNC_MODE_MASTER_SELECT(master_select) &
-				   PORT_SYNC_MODE_MASTER_SELECT_MASK) <<
-		PORT_SYNC_MODE_MASTER_SELECT_SHIFT;
-	/* Enable Transcoder Port Sync */
-	trans_ddi_func_ctl2_val |= PORT_SYNC_MODE_ENABLE;
-
-	intel_de_write(dev_priv,
-		       TRANS_DDI_FUNC_CTL2(crtc_state->cpu_transcoder),
-		       trans_ddi_func_ctl2_val);
 }
 
 static void intel_fdi_normal_train(struct intel_crtc *crtc)
···
 	case DRM_FORMAT_UYVY:
 	case DRM_FORMAT_VYUY:
 	case DRM_FORMAT_NV12:
+	case DRM_FORMAT_XYUV8888:
 	case DRM_FORMAT_P010:
 	case DRM_FORMAT_P012:
 	case DRM_FORMAT_P016:
···
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
 
-	/* Wa_2006604312:icl */
-	if (crtc_state->scaler_state.scaler_users > 0 && IS_ICELAKE(dev_priv))
+	/* Wa_2006604312:icl,ehl */
+	if (crtc_state->scaler_state.scaler_users > 0 && IS_GEN(dev_priv, 11))
 		return true;
 
 	return false;
···
 	    needs_nv12_wa(new_crtc_state))
 		skl_wa_827(dev_priv, pipe, true);
 
-	/* Wa_2006604312:icl */
+	/* Wa_2006604312:icl,ehl */
 	if (!needs_scalerclk_wa(old_crtc_state) &&
 	    needs_scalerclk_wa(new_crtc_state))
 		icl_wa_scalerclkgating(dev_priv, pipe, true);
···
 			continue;
 
 		if (encoder->pre_pll_enable)
-			encoder->pre_pll_enable(encoder, crtc_state, conn_state);
+			encoder->pre_pll_enable(state, encoder,
+						crtc_state, conn_state);
 	}
 }
···
 			continue;
 
 		if (encoder->pre_enable)
-			encoder->pre_enable(encoder, crtc_state, conn_state);
+			encoder->pre_enable(state, encoder,
+					    crtc_state, conn_state);
 	}
 }
 
···
 			continue;
 
 		if (encoder->enable)
-			encoder->enable(encoder, crtc_state, conn_state);
+			encoder->enable(state, encoder,
+					crtc_state, conn_state);
 		intel_opregion_notify_encoder(encoder, true);
 	}
 }
···
 
 		intel_opregion_notify_encoder(encoder, false);
 		if (encoder->disable)
-			encoder->disable(encoder, old_crtc_state, old_conn_state);
+			encoder->disable(state, encoder,
+					 old_crtc_state, old_conn_state);
 	}
 }
···
 			continue;
 
 		if (encoder->post_disable)
-			encoder->post_disable(encoder, old_crtc_state, old_conn_state);
+			encoder->post_disable(state, encoder,
+					      old_crtc_state, old_conn_state);
 	}
 }
···
 			continue;
 
 		if (encoder->post_pll_disable)
-			encoder->post_pll_disable(encoder, old_crtc_state, old_conn_state);
+			encoder->post_pll_disable(state, encoder,
+						  old_crtc_state, old_conn_state);
 	}
 }
···
 			continue;
 
 		if (encoder->update_pipe)
-			encoder->update_pipe(encoder, crtc_state, conn_state);
+			encoder->update_pipe(state, encoder,
+					     crtc_state, conn_state);
 	}
 }
···
 
 	if (!transcoder_is_dsi(cpu_transcoder))
 		intel_set_pipe_timings(new_crtc_state);
-
-	if (INTEL_GEN(dev_priv) >= 11)
-		icl_enable_trans_port_sync(new_crtc_state);
 
 	intel_set_pipe_src_size(new_crtc_state);
···
 	pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
 	pipe_config->cpu_transcoder = (enum transcoder) crtc->pipe;
 	pipe_config->shared_dpll = NULL;
-	pipe_config->master_transcoder = INVALID_TRANSCODER;
 
 	ret = false;
···
 
 	pipe_config->cpu_transcoder = (enum transcoder) crtc->pipe;
 	pipe_config->shared_dpll = NULL;
-	pipe_config->master_transcoder = INVALID_TRANSCODER;
 
 	ret = false;
 	tmp = intel_de_read(dev_priv, PIPECONF(crtc->pipe));
···
 		panel_transcoder_mask |=
 			BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1);
 
-	if (HAS_TRANSCODER_EDP(dev_priv))
+	if (HAS_TRANSCODER(dev_priv, TRANSCODER_EDP))
 		panel_transcoder_mask |= BIT(TRANSCODER_EDP);
 
 	/*
···
 	}
 }
 
-static enum transcoder transcoder_master_readout(struct drm_i915_private *dev_priv,
-						 enum transcoder cpu_transcoder)
-{
-	u32 trans_port_sync, master_select;
-
-	trans_port_sync = intel_de_read(dev_priv,
-					TRANS_DDI_FUNC_CTL2(cpu_transcoder));
-
-	if ((trans_port_sync & PORT_SYNC_MODE_ENABLE) == 0)
-		return INVALID_TRANSCODER;
-
-	master_select = trans_port_sync &
-			PORT_SYNC_MODE_MASTER_SELECT_MASK;
-	if (master_select == 0)
-		return TRANSCODER_EDP;
-	else
-		return master_select - 1;
-}
-
-static void icl_get_trans_port_sync_config(struct intel_crtc_state *crtc_state)
-{
-	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
-	u32 transcoders;
-	enum transcoder cpu_transcoder;
-
-	crtc_state->master_transcoder = transcoder_master_readout(dev_priv,
-								  crtc_state->cpu_transcoder);
-
-	transcoders = BIT(TRANSCODER_A) |
-		      BIT(TRANSCODER_B) |
-		      BIT(TRANSCODER_C) |
-		      BIT(TRANSCODER_D);
-	for_each_cpu_transcoder_masked(dev_priv, cpu_transcoder, transcoders) {
-		enum intel_display_power_domain power_domain;
-		intel_wakeref_t trans_wakeref;
-
-		power_domain = POWER_DOMAIN_TRANSCODER(cpu_transcoder);
-		trans_wakeref = intel_display_power_get_if_enabled(dev_priv,
-								   power_domain);
-
-		if (!trans_wakeref)
-			continue;
-
-		if (transcoder_master_readout(dev_priv, cpu_transcoder) ==
-		    crtc_state->cpu_transcoder)
-			crtc_state->sync_mode_slaves_mask |= BIT(cpu_transcoder);
-
-		intel_display_power_put(dev_priv, power_domain, trans_wakeref);
-	}
-
-	drm_WARN_ON(&dev_priv->drm,
-		    crtc_state->master_transcoder != INVALID_TRANSCODER &&
-		    crtc_state->sync_mode_slaves_mask);
-}
-
 static bool hsw_get_pipe_config(struct intel_crtc *crtc,
 				struct intel_crtc_state *pipe_config)
 {
···
 	} else {
 		pipe_config->pixel_multiplier = 1;
 	}
-
-	if (INTEL_GEN(dev_priv) >= 11 &&
-	    !transcoder_is_dsi(pipe_config->cpu_transcoder))
-		icl_get_trans_port_sync_config(pipe_config);
 
 out:
 	for_each_power_domain(power_domain, power_domain_mask)
···
 	 * only combine the results from all planes in the current place?
 	 */
 	if (!is_crtc_enabled) {
-		plane_state->uapi.visible = visible = false;
-		crtc_state->active_planes &= ~BIT(plane->id);
-		crtc_state->data_rate[plane->id] = 0;
-		crtc_state->min_cdclk[plane->id] = 0;
+		intel_plane_set_invisible(crtc_state, plane_state);
+		visible = false;
 	}
 
 	if (!was_visible && !visible)
···
 	return 0;
 }
 
-static void intel_dump_crtc_timings(const struct drm_display_mode *mode)
+static void intel_dump_crtc_timings(struct drm_i915_private *i915,
+				    const struct drm_display_mode *mode)
 {
-	DRM_DEBUG_KMS("crtc timings: %d %d %d %d %d %d %d %d %d, "
-		      "type: 0x%x flags: 0x%x\n",
-		      mode->crtc_clock,
-		      mode->crtc_hdisplay, mode->crtc_hsync_start,
-		      mode->crtc_hsync_end, mode->crtc_htotal,
-		      mode->crtc_vdisplay, mode->crtc_vsync_start,
-		      mode->crtc_vsync_end, mode->crtc_vtotal,
-		      mode->type, mode->flags);
+	drm_dbg_kms(&i915->drm, "crtc timings: %d %d %d %d %d %d %d %d %d, "
+		    "type: 0x%x flags: 0x%x\n",
+		    mode->crtc_clock,
+		    mode->crtc_hdisplay, mode->crtc_hsync_start,
+		    mode->crtc_hsync_end, mode->crtc_htotal,
+		    mode->crtc_vdisplay, mode->crtc_vsync_start,
+		    mode->crtc_vsync_end, mode->crtc_vtotal,
+		    mode->type, mode->flags);
 }
 
 static inline void
···
 		    transcoder_name(pipe_config->cpu_transcoder),
 		    pipe_config->pipe_bpp, pipe_config->dither);
 
+	drm_dbg_kms(&dev_priv->drm,
+		    "port sync: master transcoder: %s, slave transcoder bitmask = 0x%x\n",
+		    transcoder_name(pipe_config->master_transcoder),
+		    pipe_config->sync_mode_slaves_mask);
+
 	if (pipe_config->has_pch_encoder)
 		intel_dump_m_n_config(pipe_config, "fdi",
 				      pipe_config->fdi_lanes,
···
 	drm_mode_debug_printmodeline(&pipe_config->hw.mode);
 	drm_dbg_kms(&dev_priv->drm, "adjusted mode:\n");
 	drm_mode_debug_printmodeline(&pipe_config->hw.adjusted_mode);
-	intel_dump_crtc_timings(&pipe_config->hw.adjusted_mode);
+	intel_dump_crtc_timings(dev_priv, &pipe_config->hw.adjusted_mode);
 	drm_dbg_kms(&dev_priv->drm,
 		    "port clock: %d, pipe src size: %dx%d, pixel rate %d\n",
 		    pipe_config->port_clock,
···
 }
 
 static void commit_pipe_config(struct intel_atomic_state *state,
-			       struct intel_crtc_state *old_crtc_state,
-			       struct intel_crtc_state *new_crtc_state)
+			       struct intel_crtc *crtc)
 {
-	struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_crtc_state *old_crtc_state =
+		intel_atomic_get_old_crtc_state(state, crtc);
+	const struct intel_crtc_state *new_crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
 	bool modeset = needs_modeset(new_crtc_state);
 
 	/*
···
 		dev_priv->display.atomic_update_watermarks(state, crtc);
 }
 
-static void intel_update_crtc(struct intel_crtc *crtc,
-			      struct intel_atomic_state *state,
-			      struct intel_crtc_state *old_crtc_state,
-			      struct intel_crtc_state *new_crtc_state)
+static void intel_enable_crtc(struct intel_atomic_state *state,
+			      struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_crtc_state *new_crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
+
+	if (!needs_modeset(new_crtc_state))
+		return;
+
+	intel_crtc_update_active_timings(new_crtc_state);
+
+	dev_priv->display.crtc_enable(state, crtc);
+
+	/* vblanks work again, re-enable pipe CRC. */
+	intel_crtc_enable_pipe_crc(crtc);
+}
+
+static void intel_update_crtc(struct intel_atomic_state *state,
+			      struct intel_crtc *crtc)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_crtc_state *old_crtc_state =
+		intel_atomic_get_old_crtc_state(state, crtc);
+	struct intel_crtc_state *new_crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
 	bool modeset = needs_modeset(new_crtc_state);
 
-	if (modeset) {
-		intel_crtc_update_active_timings(new_crtc_state);
-
-		dev_priv->display.crtc_enable(state, crtc);
-
-		/* vblanks work again, re-enable pipe CRC. */
-		intel_crtc_enable_pipe_crc(crtc);
-	} else {
+	if (!modeset) {
 		if (new_crtc_state->preload_luts &&
 		    (new_crtc_state->uapi.color_mgmt_changed ||
 		     new_crtc_state->update_pipe))
···
 	/* Perform vblank evasion around commit operation */
 	intel_pipe_update_start(new_crtc_state);
 
-	commit_pipe_config(state, old_crtc_state, new_crtc_state);
+	commit_pipe_config(state, crtc);
 
 	if (INTEL_GEN(dev_priv) >= 9)
 		skl_update_planes_on_crtc(state, crtc);
···
 		intel_crtc_arm_fifo_underrun(crtc, new_crtc_state);
 }
 
-static struct intel_crtc *intel_get_slave_crtc(const struct intel_crtc_state *new_crtc_state)
-{
-	struct drm_i915_private *dev_priv = to_i915(new_crtc_state->uapi.crtc->dev);
-	enum transcoder slave_transcoder;
-
-	drm_WARN_ON(&dev_priv->drm,
-		    !is_power_of_2(new_crtc_state->sync_mode_slaves_mask));
-
-	slave_transcoder = ffs(new_crtc_state->sync_mode_slaves_mask) - 1;
-	return intel_get_crtc_for_pipe(dev_priv,
-				       (enum pipe)slave_transcoder);
-}
 
 static void intel_old_crtc_state_disables(struct intel_atomic_state *state,
 					  struct intel_crtc_state *old_crtc_state,
···
 
 static void intel_commit_modeset_enables(struct intel_atomic_state *state)
 {
+	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc *crtc;
-	struct intel_crtc_state *old_crtc_state, *new_crtc_state;
 	int i;
 
-	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
 		if (!new_crtc_state->hw.active)
 			continue;
 
-		intel_update_crtc(crtc, state, old_crtc_state,
-				  new_crtc_state);
+		intel_enable_crtc(state, crtc);
+		intel_update_crtc(state, crtc);
 	}
-}
-
-static void intel_crtc_enable_trans_port_sync(struct intel_crtc *crtc,
-					      struct intel_atomic_state *state,
-					      struct intel_crtc_state *new_crtc_state)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-
-	intel_crtc_update_active_timings(new_crtc_state);
-	dev_priv->display.crtc_enable(state, crtc);
-	intel_crtc_enable_pipe_crc(crtc);
-}
-
-static void intel_set_dp_tp_ctl_normal(struct intel_crtc *crtc,
-				       struct intel_atomic_state *state)
-{
-	struct drm_connector *uninitialized_var(conn);
-	struct drm_connector_state *conn_state;
-	struct intel_dp *intel_dp;
-	int i;
-
-	for_each_new_connector_in_state(&state->base, conn, conn_state, i) {
-		if (conn_state->crtc == &crtc->base)
-			break;
-	}
-	intel_dp = intel_attached_dp(to_intel_connector(conn));
-	intel_dp_stop_link_train(intel_dp);
-}
-
-/*
- * TODO: This is only called from port sync and it is identical to what will be
- * executed again in intel_update_crtc()
over port sync pipes 15147 - */ 15148 - static void intel_post_crtc_enable_updates(struct intel_crtc *crtc, 15149 - struct intel_atomic_state *state) 15150 - { 15151 - struct intel_crtc_state *new_crtc_state = 15152 - intel_atomic_get_new_crtc_state(state, crtc); 15153 - struct intel_crtc_state *old_crtc_state = 15154 - intel_atomic_get_old_crtc_state(state, crtc); 15155 - bool modeset = needs_modeset(new_crtc_state); 15156 - 15157 - if (new_crtc_state->update_pipe && !new_crtc_state->enable_fbc) 15158 - intel_fbc_disable(crtc); 15159 - else 15160 - intel_fbc_enable(state, crtc); 15161 - 15162 - /* Perform vblank evasion around commit operation */ 15163 - intel_pipe_update_start(new_crtc_state); 15164 - commit_pipe_config(state, old_crtc_state, new_crtc_state); 15165 - skl_update_planes_on_crtc(state, crtc); 15166 - intel_pipe_update_end(new_crtc_state); 15167 - 15168 - /* 15169 - * We usually enable FIFO underrun interrupts as part of the 15170 - * CRTC enable sequence during modesets. But when we inherit a 15171 - * valid pipe configuration from the BIOS we need to take care 15172 - * of enabling them on the CRTC's first fastset. 
15173 - */ 15174 - if (new_crtc_state->update_pipe && !modeset && 15175 - old_crtc_state->hw.mode.private_flags & I915_MODE_FLAG_INHERITED) 15176 - intel_crtc_arm_fifo_underrun(crtc, new_crtc_state); 15177 - } 15178 - 15179 - static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc, 15180 - struct intel_atomic_state *state, 15181 - struct intel_crtc_state *old_crtc_state, 15182 - struct intel_crtc_state *new_crtc_state) 15183 - { 15184 - struct drm_i915_private *i915 = to_i915(crtc->base.dev); 15185 - struct intel_crtc *slave_crtc = intel_get_slave_crtc(new_crtc_state); 15186 - struct intel_crtc_state *new_slave_crtc_state = 15187 - intel_atomic_get_new_crtc_state(state, slave_crtc); 15188 - struct intel_crtc_state *old_slave_crtc_state = 15189 - intel_atomic_get_old_crtc_state(state, slave_crtc); 15190 - 15191 - drm_WARN_ON(&i915->drm, !slave_crtc || !new_slave_crtc_state || 15192 - !old_slave_crtc_state); 15193 - 15194 - drm_dbg_kms(&i915->drm, 15195 - "Updating Transcoder Port Sync Master CRTC = %d %s and Slave CRTC %d %s\n", 15196 - crtc->base.base.id, crtc->base.name, 15197 - slave_crtc->base.base.id, slave_crtc->base.name); 15198 - 15199 - /* Enable seq for slave with with DP_TP_CTL left Idle until the 15200 - * master is ready 15201 - */ 15202 - intel_crtc_enable_trans_port_sync(slave_crtc, 15203 - state, 15204 - new_slave_crtc_state); 15205 - 15206 - /* Enable seq for master with with DP_TP_CTL left Idle */ 15207 - intel_crtc_enable_trans_port_sync(crtc, 15208 - state, 15209 - new_crtc_state); 15210 - 15211 - /* Set Slave's DP_TP_CTL to Normal */ 15212 - intel_set_dp_tp_ctl_normal(slave_crtc, 15213 - state); 15214 - 15215 - /* Set Master's DP_TP_CTL To Normal */ 15216 - usleep_range(200, 400); 15217 - intel_set_dp_tp_ctl_normal(crtc, 15218 - state); 15219 - 15220 - /* Now do the post crtc enable for all master and slaves */ 15221 - intel_post_crtc_enable_updates(slave_crtc, 15222 - state); 15223 - intel_post_crtc_enable_updates(crtc, 15224 - 
state); 15225 15185 } 15226 15186 15227 15187 static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state) ··· 15185 15365 entries[pipe] = new_crtc_state->wm.skl.ddb; 15186 15366 update_pipes &= ~BIT(pipe); 15187 15367 15188 - intel_update_crtc(crtc, state, old_crtc_state, 15189 - new_crtc_state); 15368 + intel_update_crtc(state, crtc); 15190 15369 15191 15370 /* 15192 15371 * If this is an already active pipe, it's DDB changed, ··· 15200 15381 } 15201 15382 } 15202 15383 15384 + update_pipes = modeset_pipes; 15385 + 15203 15386 /* 15204 15387 * Enable all pipes that needs a modeset and do not depends on other 15205 15388 * pipes 15206 15389 */ 15207 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 15208 - new_crtc_state, i) { 15390 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 15209 15391 enum pipe pipe = crtc->pipe; 15210 15392 15211 15393 if ((modeset_pipes & BIT(pipe)) == 0) 15212 15394 continue; 15213 15395 15214 15396 if (intel_dp_mst_is_slave_trans(new_crtc_state) || 15215 - is_trans_port_sync_slave(new_crtc_state)) 15397 + is_trans_port_sync_master(new_crtc_state)) 15216 15398 continue; 15217 15399 15218 - drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb, 15219 - entries, I915_MAX_PIPES, pipe)); 15220 - 15221 - entries[pipe] = new_crtc_state->wm.skl.ddb; 15222 15400 modeset_pipes &= ~BIT(pipe); 15223 15401 15224 - if (is_trans_port_sync_mode(new_crtc_state)) { 15225 - struct intel_crtc *slave_crtc; 15226 - 15227 - intel_update_trans_port_sync_crtcs(crtc, state, 15228 - old_crtc_state, 15229 - new_crtc_state); 15230 - 15231 - slave_crtc = intel_get_slave_crtc(new_crtc_state); 15232 - /* TODO: update entries[] of slave */ 15233 - modeset_pipes &= ~BIT(slave_crtc->pipe); 15234 - 15235 - } else { 15236 - intel_update_crtc(crtc, state, old_crtc_state, 15237 - new_crtc_state); 15238 - } 15402 + intel_enable_crtc(state, crtc); 15239 15403 } 15240 15404 15241 15405 /* 15242 - * 
Finally enable all pipes that needs a modeset and depends on 15243 - * other pipes, right now it is only MST slaves as both port sync slave 15244 - * and master are enabled together 15406 + * Then we enable all remaining pipes that depend on other 15407 + * pipes: MST slaves and port sync masters. 15245 15408 */ 15246 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 15247 - new_crtc_state, i) { 15409 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 15248 15410 enum pipe pipe = crtc->pipe; 15249 15411 15250 15412 if ((modeset_pipes & BIT(pipe)) == 0) 15251 15413 continue; 15252 15414 15415 + modeset_pipes &= ~BIT(pipe); 15416 + 15417 + intel_enable_crtc(state, crtc); 15418 + } 15419 + 15420 + /* 15421 + * Finally we do the plane updates/etc. for all pipes that got enabled. 15422 + */ 15423 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 15424 + enum pipe pipe = crtc->pipe; 15425 + 15426 + if ((update_pipes & BIT(pipe)) == 0) 15427 + continue; 15428 + 15253 15429 drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb, 15254 15430 entries, I915_MAX_PIPES, pipe)); 15255 15431 15256 15432 entries[pipe] = new_crtc_state->wm.skl.ddb; 15257 - modeset_pipes &= ~BIT(pipe); 15433 + update_pipes &= ~BIT(pipe); 15258 15434 15259 - intel_update_crtc(crtc, state, old_crtc_state, new_crtc_state); 15435 + intel_update_crtc(state, crtc); 15260 15436 } 15261 15437 15262 15438 drm_WARN_ON(&dev_priv->drm, modeset_pipes); 15263 - 15439 + drm_WARN_ON(&dev_priv->drm, update_pipes); 15264 15440 } 15265 15441 15266 15442 static void intel_atomic_helper_free_state(struct drm_i915_private *dev_priv) ··· 18075 18261 best_encoder = connector->base.state->best_encoder; 18076 18262 connector->base.state->best_encoder = &encoder->base; 18077 18263 18264 + /* FIXME NULL atomic state passed! 
*/ 18078 18265 if (encoder->disable) 18079 - encoder->disable(encoder, crtc_state, 18266 + encoder->disable(NULL, encoder, crtc_state, 18080 18267 connector->base.state); 18081 18268 if (encoder->post_disable) 18082 - encoder->post_disable(encoder, crtc_state, 18269 + encoder->post_disable(NULL, encoder, crtc_state, 18083 18270 connector->base.state); 18084 18271 18085 18272 connector->base.state->best_encoder = best_encoder; ··· 18617 18802 18618 18803 #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) 18619 18804 18620 - static bool 18621 - has_transcoder(struct drm_i915_private *dev_priv, enum transcoder cpu_transcoder) 18622 - { 18623 - if (cpu_transcoder == TRANSCODER_EDP) 18624 - return HAS_TRANSCODER_EDP(dev_priv); 18625 - else 18626 - return INTEL_INFO(dev_priv)->pipe_mask & BIT(cpu_transcoder); 18627 - } 18628 - 18629 18805 struct intel_display_error_state { 18630 18806 18631 18807 u32 power_well_driver; ··· 18725 18919 for (i = 0; i < ARRAY_SIZE(error->transcoder); i++) { 18726 18920 enum transcoder cpu_transcoder = transcoders[i]; 18727 18921 18728 - if (!has_transcoder(dev_priv, cpu_transcoder)) 18922 + if (!HAS_TRANSCODER(dev_priv, cpu_transcoder)) 18729 18923 continue; 18730 18924 18731 18925 error->transcoder[i].available = true;
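The commit_pipe_config()/intel_update_crtc() rework above replaces explicit old/new crtc_state parameters with lookups keyed by (state, crtc), so helpers derive both states themselves. A minimal standalone sketch of that pattern (all struct and function names here are illustrative, not the real i915 API):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PIPES 4

struct crtc { int pipe; };
struct crtc_state { int active; };

/* The atomic state owns both snapshots, indexed by pipe. */
struct atomic_state {
	struct crtc_state old_state[MAX_PIPES];
	struct crtc_state new_state[MAX_PIPES];
};

static const struct crtc_state *
get_old_crtc_state(const struct atomic_state *state, const struct crtc *crtc)
{
	return &state->old_state[crtc->pipe];
}

static const struct crtc_state *
get_new_crtc_state(const struct atomic_state *state, const struct crtc *crtc)
{
	return &state->new_state[crtc->pipe];
}

/* Helpers shrink to (state, crtc) and look the rest up internally. */
static int crtc_needs_modeset(const struct atomic_state *state,
			      const struct crtc *crtc)
{
	return get_old_crtc_state(state, crtc)->active !=
	       get_new_crtc_state(state, crtc)->active;
}
```

The payoff, as in the diff, is that call sites no longer have to thread two state pointers through every helper.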
drivers/gpu/drm/i915/display/intel_display.h (+6 -2)
···
 	for_each_pipe(__dev_priv, __p) \
 		for_each_if((__mask) & BIT(__p))
 
-#define for_each_cpu_transcoder_masked(__dev_priv, __t, __mask) \
+#define for_each_cpu_transcoder(__dev_priv, __t) \
 	for ((__t) = 0; (__t) < I915_MAX_TRANSCODERS; (__t)++) \
-		for_each_if ((__mask) & (1 << (__t)))
+		for_each_if (INTEL_INFO(__dev_priv)->cpu_transcoder_mask & BIT(__t))
+
+#define for_each_cpu_transcoder_masked(__dev_priv, __t, __mask) \
+	for_each_cpu_transcoder(__dev_priv, __t) \
+		for_each_if ((__mask) & BIT(__t))
 
 #define for_each_universal_plane(__dev_priv, __pipe, __p) \
 	for ((__p) = 0; \
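The new for_each_cpu_transcoder() above is built by nesting DRM's for_each_if() conditional, and the masked variant simply layers a second filter on top of it. A standalone sketch of the same layering (MAX_TRANSCODERS and the hw_mask parameter are stand-ins for I915_MAX_TRANSCODERS and INTEL_INFO()->cpu_transcoder_mask):

```c
#include <assert.h>

/* drm_util.h idiom: skip the loop body when the condition is false,
 * without breaking dangling-else parsing. */
#define for_each_if(condition) if (!(condition)) {} else

#define MAX_TRANSCODERS 8

/* Walk only the transcoders the device actually has. */
#define for_each_transcoder(t, hw_mask) \
	for ((t) = 0; (t) < MAX_TRANSCODERS; (t)++) \
		for_each_if((hw_mask) & (1u << (t)))

/* Additionally filter by a caller-supplied mask, by nesting. */
#define for_each_transcoder_masked(t, hw_mask, mask) \
	for_each_transcoder(t, hw_mask) \
		for_each_if((mask) & (1u << (t)))
```

With hw_mask 0x0b (transcoders 0, 1, 3 present) and a caller mask of 0x03, the masked loop visits only transcoders 0 and 1, matching how the diff makes the masked iterator inherit the hardware-capability check.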
drivers/gpu/drm/i915/display/intel_display_debugfs.c (+11 -1)
···
 			   intel_dp->compliance.test_data.vdisplay);
 		seq_printf(m, "bpc: %u\n",
 			   intel_dp->compliance.test_data.bpc);
+	} else if (intel_dp->compliance.test_type ==
+		   DP_TEST_LINK_PHY_TEST_PATTERN) {
+		seq_printf(m, "pattern: %d\n",
+			   intel_dp->compliance.test_data.phytest.phy_pattern);
+		seq_printf(m, "Number of lanes: %d\n",
+			   intel_dp->compliance.test_data.phytest.num_lanes);
+		seq_printf(m, "Link Rate: %d\n",
+			   intel_dp->compliance.test_data.phytest.link_rate);
+		seq_printf(m, "level: %02x\n",
+			   intel_dp->train_set[0]);
 		}
 	} else
 		seq_puts(m, "0");
···
 
 	if (encoder && connector->status == connector_status_connected) {
 		intel_dp = enc_to_intel_dp(encoder);
-		seq_printf(m, "%02lx", intel_dp->compliance.test_type);
+		seq_printf(m, "%02lx\n", intel_dp->compliance.test_type);
 	} else
 		seq_puts(m, "0");
 }
drivers/gpu/drm/i915/display/intel_display_power.c (+22 -14)
···
 static void print_power_domains(struct i915_power_domains *power_domains,
 				const char *prefix, u64 mask)
 {
+	struct drm_i915_private *i915 = container_of(power_domains,
+						     struct drm_i915_private,
+						     power_domains);
 	enum intel_display_power_domain domain;
 
-	DRM_DEBUG_DRIVER("%s (%lu):\n", prefix, hweight64(mask));
+	drm_dbg(&i915->drm, "%s (%lu):\n", prefix, hweight64(mask));
 	for_each_power_domain(domain, mask)
-		DRM_DEBUG_DRIVER("%s use_count %d\n",
-				 intel_display_power_domain_str(domain),
-				 power_domains->domain_use_count[domain]);
+		drm_dbg(&i915->drm, "%s use_count %d\n",
+			intel_display_power_domain_str(domain),
+			power_domains->domain_use_count[domain]);
 }
 
 static void
 print_async_put_domains_state(struct i915_power_domains *power_domains)
 {
-	DRM_DEBUG_DRIVER("async_put_wakeref %u\n",
-			 power_domains->async_put_wakeref);
+	struct drm_i915_private *i915 = container_of(power_domains,
+						     struct drm_i915_private,
+						     power_domains);
+
+	drm_dbg(&i915->drm, "async_put_wakeref %u\n",
+		power_domains->async_put_wakeref);
 
 	print_power_domains(power_domains, "async_put_domains[0]",
 			    power_domains->async_put_domains[0]);
···
 	{
 		.name = "AUX D TBT1",
 		.domains = TGL_AUX_D_TBT1_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	{
 		.name = "AUX E TBT2",
 		.domains = TGL_AUX_E_TBT2_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	{
 		.name = "AUX F TBT3",
 		.domains = TGL_AUX_F_TBT3_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	{
 		.name = "AUX G TBT4",
 		.domains = TGL_AUX_G_TBT4_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	{
 		.name = "AUX H TBT5",
 		.domains = TGL_AUX_H_TBT5_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	{
 		.name = "AUX I TBT6",
 		.domains = TGL_AUX_I_TBT6_IO_POWER_DOMAINS,
-		.ops = &hsw_power_well_ops,
+		.ops = &icl_tc_phy_aux_power_well_ops,
 		.id = DISP_PW_ID_NONE,
 		{
 			.hsw.regs = &icl_aux_power_well_regs,
···
 	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
 		 "Invalid number of dbuf slices requested\n");
 
-	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
+	drm_dbg_kms(&dev_priv->drm, "Updating dbuf slices to 0x%x\n",
+		    req_slices);
 
 	/*
 	 * Might be running this in parallel to gen9_dc_off_power_well_enable
···
 	const struct buddy_page_mask *table;
 	int i;
 
-	if (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_A0))
+	if (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_B0))
 		/* Wa_1409767108: tgl */
 		table = wa_1409767108_buddy_page_masks;
 	else
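print_power_domains() above recovers the drm_i915_private from the embedded i915_power_domains via container_of(), so the logging can become device-aware drm_dbg(). A minimal standalone model of that trick (the struct names are illustrative, not the real i915 types):

```c
#include <assert.h>
#include <stddef.h>

/* Same definition the kernel uses, modulo type checking: step back from
 * the member's address by its offset within the containing struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct power_domains {
	unsigned int async_put_wakeref;
};

/* The "device" struct embeds power_domains by value, which is what
 * makes the pointer arithmetic valid. */
struct device_priv {
	int id;
	struct power_domains power_domains;
};
```

The design point is that callers keep passing only the embedded struct, yet every function can still reach the owning device for logging.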
drivers/gpu/drm/i915/display/intel_display_types.h (+30 -9)
··· 132 132 u16 cloneable; 133 133 u8 pipe_mask; 134 134 enum intel_hotplug_state (*hotplug)(struct intel_encoder *encoder, 135 - struct intel_connector *connector, 136 - bool irq_received); 135 + struct intel_connector *connector); 137 136 enum intel_output_type (*compute_output_type)(struct intel_encoder *, 138 137 struct intel_crtc_state *, 139 138 struct drm_connector_state *); ··· 145 146 void (*update_prepare)(struct intel_atomic_state *, 146 147 struct intel_encoder *, 147 148 struct intel_crtc *); 148 - void (*pre_pll_enable)(struct intel_encoder *, 149 + void (*pre_pll_enable)(struct intel_atomic_state *, 150 + struct intel_encoder *, 149 151 const struct intel_crtc_state *, 150 152 const struct drm_connector_state *); 151 - void (*pre_enable)(struct intel_encoder *, 153 + void (*pre_enable)(struct intel_atomic_state *, 154 + struct intel_encoder *, 152 155 const struct intel_crtc_state *, 153 156 const struct drm_connector_state *); 154 - void (*enable)(struct intel_encoder *, 157 + void (*enable)(struct intel_atomic_state *, 158 + struct intel_encoder *, 155 159 const struct intel_crtc_state *, 156 160 const struct drm_connector_state *); 157 161 void (*update_complete)(struct intel_atomic_state *, 158 162 struct intel_encoder *, 159 163 struct intel_crtc *); 160 - void (*disable)(struct intel_encoder *, 164 + void (*disable)(struct intel_atomic_state *, 165 + struct intel_encoder *, 161 166 const struct intel_crtc_state *, 162 167 const struct drm_connector_state *); 163 - void (*post_disable)(struct intel_encoder *, 168 + void (*post_disable)(struct intel_atomic_state *, 169 + struct intel_encoder *, 164 170 const struct intel_crtc_state *, 165 171 const struct drm_connector_state *); 166 - void (*post_pll_disable)(struct intel_encoder *, 172 + void (*post_pll_disable)(struct intel_atomic_state *, 173 + struct intel_encoder *, 167 174 const struct intel_crtc_state *, 168 175 const struct drm_connector_state *); 169 - void (*update_pipe)(struct 
intel_encoder *, 176 + void (*update_pipe)(struct intel_atomic_state *, 177 + struct intel_encoder *, 170 178 const struct intel_crtc_state *, 171 179 const struct drm_connector_state *); 172 180 /* Read out the current hw state of this connector, returning true if ··· 431 425 struct edid *edid; 432 426 struct edid *detect_edid; 433 427 428 + /* Number of times hotplug detection was tried after an HPD interrupt */ 429 + int hotplug_retries; 430 + 434 431 /* since POLL and HPD connectors may use the same HPD line keep the native 435 432 state of connector->polled in case hotplug storm detection changes it */ 436 433 u8 polled; ··· 649 640 #define I915_MODE_FLAG_GET_SCANLINE_FROM_TIMESTAMP (1<<1) 650 641 /* Flag to use the scanline counter instead of the pixel counter */ 651 642 #define I915_MODE_FLAG_USE_SCANLINE_COUNTER (1<<2) 643 + /* 644 + * TE0 or TE1 flag is set if the crtc has a DSI encoder which 645 + * is operating in command mode. 646 + * Flag to use TE from DSI0 instead of VBI in command mode 647 + */ 648 + #define I915_MODE_FLAG_DSI_USE_TE0 (1<<3) 649 + /* Flag to use TE from DSI1 instead of VBI in command mode */ 650 + #define I915_MODE_FLAG_DSI_USE_TE1 (1<<4) 651 + /* Flag to indicate mipi dsi periodic command mode where we do not get TE */ 652 + #define I915_MODE_FLAG_DSI_PERIODIC_CMD_MODE (1<<5) 652 653 653 654 struct intel_wm_level { 654 655 bool enable; ··· 1034 1015 union hdmi_infoframe spd; 1035 1016 union hdmi_infoframe hdmi; 1036 1017 union hdmi_infoframe drm; 1018 + struct drm_dp_vsc_sdp vsc; 1037 1019 } infoframes; 1038 1020 1039 1021 /* HDMI scrambling status */ ··· 1258 1238 u8 video_pattern; 1259 1239 u16 hdisplay, vdisplay; 1260 1240 u8 bpc; 1241 + struct drm_dp_phy_test_params phytest; 1261 1242 }; 1262 1243 1263 1244 struct intel_dp_compliance {
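The intel_encoder hook changes above thread a leading struct intel_atomic_state * through every callback so each hook receives the commit context explicitly. A toy model of such a vtable signature change (all names are illustrative, not i915's):

```c
#include <assert.h>
#include <stddef.h>

struct atomic_state { int commit_seq; };
struct encoder;

/* Every hook now takes the atomic state as its first argument. */
struct encoder_funcs {
	void (*enable)(struct atomic_state *state, struct encoder *encoder);
};

struct encoder {
	const struct encoder_funcs *funcs;
	int last_commit;
};

static void encoder_enable(struct atomic_state *state,
			   struct encoder *encoder)
{
	/* The hook can consult the whole commit, not just its own state. */
	encoder->last_commit = state->commit_seq;
}

static const struct encoder_funcs toy_funcs = {
	.enable = encoder_enable,
};
```

Passing the state explicitly is also what makes a NULL-state call site stand out, as the FIXME added in the intel_display.c hunk above notes.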
drivers/gpu/drm/i915/display/intel_dp.c (+782 -148)
··· 164 164 }; 165 165 int i, max_rate; 166 166 167 + if (drm_dp_has_quirk(&intel_dp->desc, 0, 168 + DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS)) { 169 + /* Needed, e.g., for Apple MBP 2017, 15 inch eDP Retina panel */ 170 + static const int quirk_rates[] = { 162000, 270000, 324000 }; 171 + 172 + memcpy(intel_dp->sink_rates, quirk_rates, sizeof(quirk_rates)); 173 + intel_dp->num_sink_rates = ARRAY_SIZE(quirk_rates); 174 + 175 + return; 176 + } 177 + 167 178 max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]); 168 179 169 180 for (i = 0; i < ARRAY_SIZE(dp_rates); i++) { ··· 463 452 int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp, 464 453 int link_rate, u8 lane_count) 465 454 { 455 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 466 456 int index; 467 457 468 458 index = intel_dp_rate_index(intel_dp->common_rates, ··· 474 462 !intel_dp_can_link_train_fallback_for_edp(intel_dp, 475 463 intel_dp->common_rates[index - 1], 476 464 lane_count)) { 477 - DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n"); 465 + drm_dbg_kms(&i915->drm, 466 + "Retrying Link training for eDP with same parameters\n"); 478 467 return 0; 479 468 } 480 469 intel_dp->max_link_rate = intel_dp->common_rates[index - 1]; ··· 485 472 !intel_dp_can_link_train_fallback_for_edp(intel_dp, 486 473 intel_dp_max_common_rate(intel_dp), 487 474 lane_count >> 1)) { 488 - DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n"); 475 + drm_dbg_kms(&i915->drm, 476 + "Retrying Link training for eDP with same parameters\n"); 489 477 return 0; 490 478 } 491 479 intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp); 492 480 intel_dp->max_link_lane_count = lane_count >> 1; 493 481 } else { 494 - DRM_ERROR("Link Training Unsuccessful\n"); 482 + drm_err(&i915->drm, "Link Training Unsuccessful\n"); 495 483 return -1; 496 484 } 497 485 ··· 567 553 static u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp, 568 554 int 
mode_clock, int mode_hdisplay) 569 555 { 556 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 570 557 u8 min_slice_count, i; 571 558 int max_slice_width; 572 559 ··· 580 565 581 566 max_slice_width = drm_dp_dsc_sink_max_slice_width(intel_dp->dsc_dpcd); 582 567 if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) { 583 - DRM_DEBUG_KMS("Unsupported slice width %d by DP DSC Sink device\n", 584 - max_slice_width); 568 + drm_dbg_kms(&i915->drm, 569 + "Unsupported slice width %d by DP DSC Sink device\n", 570 + max_slice_width); 585 571 return 0; 586 572 } 587 573 /* Also take into account max slice width */ ··· 600 584 return valid_dsc_slicecount[i]; 601 585 } 602 586 603 - DRM_DEBUG_KMS("Unsupported Slice Count %d\n", min_slice_count); 587 + drm_dbg_kms(&i915->drm, "Unsupported Slice Count %d\n", 588 + min_slice_count); 604 589 return 0; 605 590 } 606 591 ··· 1849 1832 1850 1833 static void intel_dp_print_rates(struct intel_dp *intel_dp) 1851 1834 { 1835 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1852 1836 char str[128]; /* FIXME: too big for stack? 
*/ 1853 1837 1854 1838 if (!drm_debug_enabled(DRM_UT_KMS)) ··· 1857 1839 1858 1840 snprintf_int_array(str, sizeof(str), 1859 1841 intel_dp->source_rates, intel_dp->num_source_rates); 1860 - DRM_DEBUG_KMS("source rates: %s\n", str); 1842 + drm_dbg_kms(&i915->drm, "source rates: %s\n", str); 1861 1843 1862 1844 snprintf_int_array(str, sizeof(str), 1863 1845 intel_dp->sink_rates, intel_dp->num_sink_rates); 1864 - DRM_DEBUG_KMS("sink rates: %s\n", str); 1846 + drm_dbg_kms(&i915->drm, "sink rates: %s\n", str); 1865 1847 1866 1848 snprintf_int_array(str, sizeof(str), 1867 1849 intel_dp->common_rates, intel_dp->num_common_rates); 1868 - DRM_DEBUG_KMS("common rates: %s\n", str); 1850 + drm_dbg_kms(&i915->drm, "common rates: %s\n", str); 1869 1851 } 1870 1852 1871 1853 int ··· 1972 1954 struct intel_crtc_state *pipe_config, 1973 1955 struct link_config_limits *limits) 1974 1956 { 1957 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1958 + 1975 1959 /* For DP Compliance we override the computed bpp for the pipe */ 1976 1960 if (intel_dp->compliance.test_data.bpc != 0) { 1977 1961 int bpp = 3 * intel_dp->compliance.test_data.bpc; ··· 1981 1961 limits->min_bpp = limits->max_bpp = bpp; 1982 1962 pipe_config->dither_force_disable = bpp == 6 * 3; 1983 1963 1984 - DRM_DEBUG_KMS("Setting pipe_bpp to %d\n", bpp); 1964 + drm_dbg_kms(&i915->drm, "Setting pipe_bpp to %d\n", bpp); 1985 1965 } 1986 1966 1987 1967 /* Use values requested by Compliance Test Request */ ··· 2075 2055 static int intel_dp_dsc_compute_params(struct intel_encoder *encoder, 2076 2056 struct intel_crtc_state *crtc_state) 2077 2057 { 2058 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2078 2059 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2079 2060 struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config; 2080 2061 u8 line_buf_depth; ··· 2110 2089 2111 2090 line_buf_depth = drm_dp_dsc_sink_line_buf_depth(intel_dp->dsc_dpcd); 2112 2091 if (!line_buf_depth) { 2113 - DRM_DEBUG_KMS("DSC 
Sink Line Buffer Depth invalid\n"); 2092 + drm_dbg_kms(&i915->drm, 2093 + "DSC Sink Line Buffer Depth invalid\n"); 2114 2094 return -EINVAL; 2115 2095 } 2116 2096 ··· 2136 2114 { 2137 2115 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 2138 2116 struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 2139 - struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 2117 + const struct drm_display_mode *adjusted_mode = 2118 + &pipe_config->hw.adjusted_mode; 2140 2119 u8 dsc_max_bpc; 2141 2120 int pipe_bpp; 2142 2121 int ret; ··· 2252 2229 struct intel_crtc_state *pipe_config, 2253 2230 struct drm_connector_state *conn_state) 2254 2231 { 2255 - struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 2232 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2233 + const struct drm_display_mode *adjusted_mode = 2234 + &pipe_config->hw.adjusted_mode; 2256 2235 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2257 2236 struct link_config_limits limits; 2258 2237 int common_len; ··· 2289 2264 2290 2265 intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits); 2291 2266 2292 - DRM_DEBUG_KMS("DP link computation with max lane count %i " 2293 - "max rate %d max bpp %d pixel clock %iKHz\n", 2294 - limits.max_lane_count, 2295 - intel_dp->common_rates[limits.max_clock], 2296 - limits.max_bpp, adjusted_mode->crtc_clock); 2267 + drm_dbg_kms(&i915->drm, "DP link computation with max lane count %i " 2268 + "max rate %d max bpp %d pixel clock %iKHz\n", 2269 + limits.max_lane_count, 2270 + intel_dp->common_rates[limits.max_clock], 2271 + limits.max_bpp, adjusted_mode->crtc_clock); 2297 2272 2298 2273 /* 2299 2274 * Optimize for slow and wide. 
This is the place to add alternative ··· 2302 2277 ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits); 2303 2278 2304 2279 /* enable compression if the mode doesn't fit available BW */ 2305 - DRM_DEBUG_KMS("Force DSC en = %d\n", intel_dp->force_dsc_en); 2280 + drm_dbg_kms(&i915->drm, "Force DSC en = %d\n", intel_dp->force_dsc_en); 2306 2281 if (ret || intel_dp->force_dsc_en) { 2307 2282 ret = intel_dp_dsc_compute_config(intel_dp, pipe_config, 2308 2283 conn_state, &limits); ··· 2311 2286 } 2312 2287 2313 2288 if (pipe_config->dsc.compression_enable) { 2314 - DRM_DEBUG_KMS("DP lane count %d clock %d Input bpp %d Compressed bpp %d\n", 2315 - pipe_config->lane_count, pipe_config->port_clock, 2316 - pipe_config->pipe_bpp, 2317 - pipe_config->dsc.compressed_bpp); 2289 + drm_dbg_kms(&i915->drm, 2290 + "DP lane count %d clock %d Input bpp %d Compressed bpp %d\n", 2291 + pipe_config->lane_count, pipe_config->port_clock, 2292 + pipe_config->pipe_bpp, 2293 + pipe_config->dsc.compressed_bpp); 2318 2294 2319 - DRM_DEBUG_KMS("DP link rate required %i available %i\n", 2320 - intel_dp_link_required(adjusted_mode->crtc_clock, 2321 - pipe_config->dsc.compressed_bpp), 2322 - intel_dp_max_data_rate(pipe_config->port_clock, 2323 - pipe_config->lane_count)); 2295 + drm_dbg_kms(&i915->drm, 2296 + "DP link rate required %i available %i\n", 2297 + intel_dp_link_required(adjusted_mode->crtc_clock, 2298 + pipe_config->dsc.compressed_bpp), 2299 + intel_dp_max_data_rate(pipe_config->port_clock, 2300 + pipe_config->lane_count)); 2324 2301 } else { 2325 - DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n", 2326 - pipe_config->lane_count, pipe_config->port_clock, 2327 - pipe_config->pipe_bpp); 2302 + drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp %d\n", 2303 + pipe_config->lane_count, pipe_config->port_clock, 2304 + pipe_config->pipe_bpp); 2328 2305 2329 - DRM_DEBUG_KMS("DP link rate required %i available %i\n", 2330 - 
intel_dp_link_required(adjusted_mode->crtc_clock, 2331 - pipe_config->pipe_bpp), 2332 - intel_dp_max_data_rate(pipe_config->port_clock, 2333 - pipe_config->lane_count)); 2306 + drm_dbg_kms(&i915->drm, 2307 + "DP link rate required %i available %i\n", 2308 + intel_dp_link_required(adjusted_mode->crtc_clock, 2309 + pipe_config->pipe_bpp), 2310 + intel_dp_max_data_rate(pipe_config->port_clock, 2311 + pipe_config->lane_count)); 2334 2312 } 2335 2313 return 0; 2336 2314 } ··· 2343 2315 struct drm_connector *connector, 2344 2316 struct intel_crtc_state *crtc_state) 2345 2317 { 2318 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 2346 2319 const struct drm_display_info *info = &connector->display_info; 2347 2320 const struct drm_display_mode *adjusted_mode = 2348 2321 &crtc_state->hw.adjusted_mode; ··· 2360 2331 /* YCBCR 420 output conversion needs a scaler */ 2361 2332 ret = skl_update_scaler_crtc(crtc_state); 2362 2333 if (ret) { 2363 - DRM_DEBUG_KMS("Scaler allocation for output failed\n"); 2334 + drm_dbg_kms(&i915->drm, 2335 + "Scaler allocation for output failed\n"); 2364 2336 return ret; 2365 2337 } 2366 2338 ··· 2412 2382 return false; 2413 2383 2414 2384 return true; 2385 + } 2386 + 2387 + static void intel_dp_compute_vsc_colorimetry(const struct intel_crtc_state *crtc_state, 2388 + const struct drm_connector_state *conn_state, 2389 + struct drm_dp_vsc_sdp *vsc) 2390 + { 2391 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 2392 + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 2393 + 2394 + /* 2395 + * Prepare VSC Header for SU as per DP 1.4 spec, Table 2-118 2396 + * VSC SDP supporting 3D stereo, PSR2, and Pixel Encoding/ 2397 + * Colorimetry Format indication. 
+	 */
+	vsc->revision = 0x5;
+	vsc->length = 0x13;
+
+	/* DP 1.4a spec, Table 2-120 */
+	switch (crtc_state->output_format) {
+	case INTEL_OUTPUT_FORMAT_YCBCR444:
+		vsc->pixelformat = DP_PIXELFORMAT_YUV444;
+		break;
+	case INTEL_OUTPUT_FORMAT_YCBCR420:
+		vsc->pixelformat = DP_PIXELFORMAT_YUV420;
+		break;
+	case INTEL_OUTPUT_FORMAT_RGB:
+	default:
+		vsc->pixelformat = DP_PIXELFORMAT_RGB;
+	}
+
+	switch (conn_state->colorspace) {
+	case DRM_MODE_COLORIMETRY_BT709_YCC:
+		vsc->colorimetry = DP_COLORIMETRY_BT709_YCC;
+		break;
+	case DRM_MODE_COLORIMETRY_XVYCC_601:
+		vsc->colorimetry = DP_COLORIMETRY_XVYCC_601;
+		break;
+	case DRM_MODE_COLORIMETRY_XVYCC_709:
+		vsc->colorimetry = DP_COLORIMETRY_XVYCC_709;
+		break;
+	case DRM_MODE_COLORIMETRY_SYCC_601:
+		vsc->colorimetry = DP_COLORIMETRY_SYCC_601;
+		break;
+	case DRM_MODE_COLORIMETRY_OPYCC_601:
+		vsc->colorimetry = DP_COLORIMETRY_OPYCC_601;
+		break;
+	case DRM_MODE_COLORIMETRY_BT2020_CYCC:
+		vsc->colorimetry = DP_COLORIMETRY_BT2020_CYCC;
+		break;
+	case DRM_MODE_COLORIMETRY_BT2020_RGB:
+		vsc->colorimetry = DP_COLORIMETRY_BT2020_RGB;
+		break;
+	case DRM_MODE_COLORIMETRY_BT2020_YCC:
+		vsc->colorimetry = DP_COLORIMETRY_BT2020_YCC;
+		break;
+	case DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65:
+	case DRM_MODE_COLORIMETRY_DCI_P3_RGB_THEATER:
+		vsc->colorimetry = DP_COLORIMETRY_DCI_P3_RGB;
+		break;
+	default:
+		/*
+		 * RGB->YCBCR color conversion uses the BT.709
+		 * color space.
+		 */
+		if (crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
+			vsc->colorimetry = DP_COLORIMETRY_BT709_YCC;
+		else
+			vsc->colorimetry = DP_COLORIMETRY_DEFAULT;
+		break;
+	}
+
+	vsc->bpc = crtc_state->pipe_bpp / 3;
+
+	/* only RGB pixelformat supports 6 bpc */
+	drm_WARN_ON(&dev_priv->drm,
+		    vsc->bpc == 6 && vsc->pixelformat != DP_PIXELFORMAT_RGB);
+
+	/* all YCbCr are always limited range */
+	vsc->dynamic_range = DP_DYNAMIC_RANGE_CTA;
+	vsc->content_type = DP_CONTENT_TYPE_NOT_DEFINED;
+}
+
+static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
+				     struct intel_crtc_state *crtc_state,
+				     const struct drm_connector_state *conn_state)
+{
+	struct drm_dp_vsc_sdp *vsc = &crtc_state->infoframes.vsc;
+
+	/* When PSR is enabled, VSC SDP is handled by PSR routine */
+	if (intel_psr_enabled(intel_dp))
+		return;
+
+	if (!intel_dp_needs_vsc_sdp(crtc_state, conn_state))
+		return;
+
+	crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
+	vsc->sdp_type = DP_SDP_VSC;
+	intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
+					 &crtc_state->infoframes.vsc);
+}
+
+static void
+intel_dp_compute_hdr_metadata_infoframe_sdp(struct intel_dp *intel_dp,
+					    struct intel_crtc_state *crtc_state,
+					    const struct drm_connector_state *conn_state)
+{
+	int ret;
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	struct hdmi_drm_infoframe *drm_infoframe = &crtc_state->infoframes.drm.drm;
+
+	if (!conn_state->hdr_output_metadata)
+		return;
+
+	ret = drm_hdmi_infoframe_set_hdr_metadata(drm_infoframe, conn_state);
+
+	if (ret) {
+		drm_dbg_kms(&dev_priv->drm, "couldn't set HDR metadata in infoframe\n");
+		return;
+	}
+
+	crtc_state->infoframes.enable |=
+		intel_hdmi_infoframe_enable(HDMI_PACKET_TYPE_GAMUT_METADATA);
 }
 
 int
···
 	intel_dp_set_clock(encoder, pipe_config);
 
 	intel_psr_compute_config(intel_dp, pipe_config);
+	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
+	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
 
 	return 0;
 }
···
 
 static void wait_panel_on(struct intel_dp *intel_dp)
 {
-	DRM_DEBUG_KMS("Wait for panel power on\n");
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+	drm_dbg_kms(&i915->drm, "Wait for panel power on\n");
 	wait_panel_status(intel_dp, IDLE_ON_MASK, IDLE_ON_VALUE);
 }
 
 static void wait_panel_off(struct intel_dp *intel_dp)
 {
-	DRM_DEBUG_KMS("Wait for panel power off time\n");
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+	drm_dbg_kms(&i915->drm, "Wait for panel power off time\n");
 	wait_panel_status(intel_dp, IDLE_OFF_MASK, IDLE_OFF_VALUE);
 }
 
 static void wait_panel_power_cycle(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	ktime_t panel_power_on_time;
 	s64 panel_power_off_duration;
 
-	DRM_DEBUG_KMS("Wait for panel power cycle\n");
+	drm_dbg_kms(&i915->drm, "Wait for panel power cycle\n");
 
 	/* take the difference of current time and panel power off time
 	 * and then make panel wait for t11_t12 if needed.
 	 */
···
 			  const struct drm_connector_state *conn_state)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(conn_state->best_encoder));
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	if (!intel_dp_is_edp(intel_dp))
 		return;
 
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&i915->drm, "\n");
 
 	intel_panel_enable_backlight(crtc_state, conn_state);
 	_intel_edp_backlight_on(intel_dp);
···
 void intel_edp_backlight_off(const struct drm_connector_state *old_conn_state)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(old_conn_state->best_encoder));
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	if (!intel_dp_is_edp(intel_dp))
 		return;
 
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&i915->drm, "\n");
 
 	_intel_edp_backlight_off(intel_dp);
 	intel_panel_disable_backlight(old_conn_state);
···
 static void intel_edp_backlight_power(struct intel_connector *connector,
 				      bool enable)
 {
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	intel_wakeref_t wakeref;
 	bool is_enabled;
···
 	if (is_enabled == enable)
 		return;
 
-	DRM_DEBUG_KMS("panel power control backlight %s\n",
-		      enable ? "enable" : "disable");
+	drm_dbg_kms(&i915->drm, "panel power control backlight %s\n",
+		    enable ? "enable" : "disable");
 
 	if (enable)
 		_intel_edp_backlight_on(intel_dp);
···
 				const struct intel_crtc_state *crtc_state,
 				bool enable)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	int ret;
 
 	if (!crtc_state->dsc.compression_enable)
···
 	ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_DSC_ENABLE,
 				 enable ? DP_DECOMPRESSION_EN : 0);
 	if (ret < 0)
-		DRM_DEBUG_KMS("Failed to %s sink decompression state\n",
-			      enable ? "enable" : "disable");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to %s sink decompression state\n",
+			    enable ? "enable" : "disable");
 }
 
 /* If the sink supports it, try to set the power state appropriately */
 void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	int ret, i;
 
 	/* Should have a valid DPCD by this point */
···
 	}
 
 	if (ret != 1)
-		DRM_DEBUG_KMS("failed to %s sink power state\n",
-			      mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
+		drm_dbg_kms(&i915->drm, "failed to %s sink power state\n",
+			    mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
 }
 
 static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv,
···
 	}
 }
 
-static void intel_disable_dp(struct intel_encoder *encoder,
+static void intel_disable_dp(struct intel_atomic_state *state,
+			     struct intel_encoder *encoder,
 			     const struct intel_crtc_state *old_crtc_state,
 			     const struct drm_connector_state *old_conn_state)
 {
···
 	intel_edp_panel_off(intel_dp);
 }
 
-static void g4x_disable_dp(struct intel_encoder *encoder,
+static void g4x_disable_dp(struct intel_atomic_state *state,
+			   struct intel_encoder *encoder,
 			   const struct intel_crtc_state *old_crtc_state,
 			   const struct drm_connector_state *old_conn_state)
 {
-	intel_disable_dp(encoder, old_crtc_state, old_conn_state);
+	intel_disable_dp(state, encoder, old_crtc_state, old_conn_state);
 }
 
-static void vlv_disable_dp(struct intel_encoder *encoder,
+static void vlv_disable_dp(struct intel_atomic_state *state,
+			   struct intel_encoder *encoder,
 			   const struct intel_crtc_state *old_crtc_state,
 			   const struct drm_connector_state *old_conn_state)
 {
-	intel_disable_dp(encoder, old_crtc_state, old_conn_state);
+	intel_disable_dp(state, encoder, old_crtc_state, old_conn_state);
 }
 
-static void g4x_post_disable_dp(struct intel_encoder *encoder,
+static void g4x_post_disable_dp(struct intel_atomic_state *state,
+				struct intel_encoder *encoder,
 				const struct intel_crtc_state *old_crtc_state,
 				const struct drm_connector_state *old_conn_state)
 {
···
 	ilk_edp_pll_off(intel_dp, old_crtc_state);
 }
 
-static void vlv_post_disable_dp(struct intel_encoder *encoder,
+static void vlv_post_disable_dp(struct intel_atomic_state *state,
+				struct intel_encoder *encoder,
 				const struct intel_crtc_state *old_crtc_state,
 				const struct drm_connector_state *old_conn_state)
 {
 	intel_dp_link_down(encoder, old_crtc_state);
 }
 
-static void chv_post_disable_dp(struct intel_encoder *encoder,
+static void chv_post_disable_dp(struct intel_atomic_state *state,
+				struct intel_encoder *encoder,
 				const struct intel_crtc_state *old_crtc_state,
 				const struct drm_connector_state *old_conn_state)
 {
···
 	intel_de_posting_read(dev_priv, intel_dp->output_reg);
 }
 
-static void intel_enable_dp(struct intel_encoder *encoder,
+static void intel_enable_dp(struct intel_atomic_state *state,
+			    struct intel_encoder *encoder,
 			    const struct intel_crtc_state *pipe_config,
 			    const struct drm_connector_state *conn_state)
 {
···
 	}
 }
 
-static void g4x_enable_dp(struct intel_encoder *encoder,
+static void g4x_enable_dp(struct intel_atomic_state *state,
+			  struct intel_encoder *encoder,
 			  const struct intel_crtc_state *pipe_config,
 			  const struct drm_connector_state *conn_state)
 {
-	intel_enable_dp(encoder, pipe_config, conn_state);
+	intel_enable_dp(state, encoder, pipe_config, conn_state);
 	intel_edp_backlight_on(pipe_config, conn_state);
 }
 
-static void vlv_enable_dp(struct intel_encoder *encoder,
+static void vlv_enable_dp(struct intel_atomic_state *state,
+			  struct intel_encoder *encoder,
 			  const struct intel_crtc_state *pipe_config,
 			  const struct drm_connector_state *conn_state)
 {
 	intel_edp_backlight_on(pipe_config, conn_state);
 }
 
-static void g4x_pre_enable_dp(struct intel_encoder *encoder,
+static void g4x_pre_enable_dp(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *pipe_config,
 			      const struct drm_connector_state *conn_state)
 {
···
 	intel_dp_init_panel_power_sequencer_registers(intel_dp, true);
 }
 
-static void vlv_pre_enable_dp(struct intel_encoder *encoder,
+static void vlv_pre_enable_dp(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *pipe_config,
 			      const struct drm_connector_state *conn_state)
 {
 	vlv_phy_pre_encoder_enable(encoder, pipe_config);
 
-	intel_enable_dp(encoder, pipe_config, conn_state);
+	intel_enable_dp(state, encoder, pipe_config, conn_state);
 }
 
-static void vlv_dp_pre_pll_enable(struct intel_encoder *encoder,
+static void vlv_dp_pre_pll_enable(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *pipe_config,
 				  const struct drm_connector_state *conn_state)
 {
···
 	vlv_phy_pre_pll_enable(encoder, pipe_config);
 }
 
-static void chv_pre_enable_dp(struct intel_encoder *encoder,
+static void chv_pre_enable_dp(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *pipe_config,
 			      const struct drm_connector_state *conn_state)
 {
 	chv_phy_pre_encoder_enable(encoder, pipe_config);
 
-	intel_enable_dp(encoder, pipe_config, conn_state);
+	intel_enable_dp(state, encoder, pipe_config, conn_state);
 
 	/* Second common lane will stay alive on its own now */
 	chv_phy_release_cl2_override(encoder);
 }
 
-static void chv_dp_pre_pll_enable(struct intel_encoder *encoder,
+static void chv_dp_pre_pll_enable(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *pipe_config,
 				  const struct drm_connector_state *conn_state)
 {
···
 	chv_phy_pre_pll_enable(encoder, pipe_config);
 }
 
-static void chv_dp_post_pll_disable(struct intel_encoder *encoder,
+static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
+				    struct intel_encoder *encoder,
 				    const struct intel_crtc_state *old_crtc_state,
 				    const struct drm_connector_state *old_conn_state)
 {
···
 static void
 intel_dp_extended_receiver_capabilities(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 dpcd_ext[6];
 
 	/*
···
 
 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_DP13_DPCD_REV,
 			     &dpcd_ext, sizeof(dpcd_ext)) != sizeof(dpcd_ext)) {
-		DRM_ERROR("DPCD failed read at extended capabilities\n");
+		drm_err(&i915->drm,
+			"DPCD failed read at extended capabilities\n");
 		return;
 	}
 
 	if (intel_dp->dpcd[DP_DPCD_REV] > dpcd_ext[DP_DPCD_REV]) {
-		DRM_DEBUG_KMS("DPCD extended DPCD rev less than base DPCD rev\n");
+		drm_dbg_kms(&i915->drm,
+			    "DPCD extended DPCD rev less than base DPCD rev\n");
 		return;
 	}
 
 	if (!memcmp(intel_dp->dpcd, dpcd_ext, sizeof(dpcd_ext)))
 		return;
 
-	DRM_DEBUG_KMS("Base DPCD: %*ph\n",
-		      (int)sizeof(intel_dp->dpcd), intel_dp->dpcd);
+	drm_dbg_kms(&i915->drm, "Base DPCD: %*ph\n",
+		    (int)sizeof(intel_dp->dpcd), intel_dp->dpcd);
 
 	memcpy(intel_dp->dpcd, dpcd_ext, sizeof(dpcd_ext));
 }
···
 bool
 intel_dp_read_dpcd(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
 	if (drm_dp_dpcd_read(&intel_dp->aux, 0x000, intel_dp->dpcd,
 			     sizeof(intel_dp->dpcd)) < 0)
 		return false; /* aux transfer failed */
 
 	intel_dp_extended_receiver_capabilities(intel_dp);
 
-	DRM_DEBUG_KMS("DPCD: %*ph\n", (int) sizeof(intel_dp->dpcd), intel_dp->dpcd);
+	drm_dbg_kms(&i915->drm, "DPCD: %*ph\n", (int)sizeof(intel_dp->dpcd),
+		    intel_dp->dpcd);
 
 	return intel_dp->dpcd[DP_DPCD_REV] != 0;
 }
···
 
 static void intel_dp_get_dsc_sink_cap(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
 	/*
 	 * Clear the cached register set to avoid using stale values
 	 * for the sinks that do not support DSC.
···
 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_DSC_SUPPORT,
 			     intel_dp->dsc_dpcd,
 			     sizeof(intel_dp->dsc_dpcd)) < 0)
-		DRM_ERROR("Failed to read DPCD register 0x%x\n",
-			  DP_DSC_SUPPORT);
+		drm_err(&i915->drm,
+			"Failed to read DPCD register 0x%x\n",
+			DP_DSC_SUPPORT);
 
-	DRM_DEBUG_KMS("DSC DPCD: %*ph\n",
-		      (int)sizeof(intel_dp->dsc_dpcd),
-		      intel_dp->dsc_dpcd);
+	drm_dbg_kms(&i915->drm, "DSC DPCD: %*ph\n",
+		    (int)sizeof(intel_dp->dsc_dpcd),
+		    intel_dp->dsc_dpcd);
 
 	/* FEC is supported only on DP 1.4 */
 	if (!intel_dp_is_edp(intel_dp) &&
 	    drm_dp_dpcd_readb(&intel_dp->aux, DP_FEC_CAPABILITY,
 			      &intel_dp->fec_capable) < 0)
-		DRM_ERROR("Failed to read FEC DPCD register\n");
+		drm_err(&i915->drm,
+			"Failed to read FEC DPCD register\n");
 
-	DRM_DEBUG_KMS("FEC CAPABILITY: %x\n", intel_dp->fec_capable);
+	drm_dbg_kms(&i915->drm, "FEC CAPABILITY: %x\n",
+		    intel_dp->fec_capable);
 	}
 }
···
 static void
 intel_dp_configure_mst(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	struct intel_encoder *encoder =
 		&dp_to_dig_port(intel_dp)->base;
 	bool sink_can_mst = intel_dp_sink_can_mst(intel_dp);
 
-	DRM_DEBUG_KMS("[ENCODER:%d:%s] MST support: port: %s, sink: %s, modparam: %s\n",
-		      encoder->base.base.id, encoder->base.name,
-		      yesno(intel_dp->can_mst), yesno(sink_can_mst),
-		      yesno(i915_modparams.enable_dp_mst));
+	drm_dbg_kms(&i915->drm,
+		    "[ENCODER:%d:%s] MST support: port: %s, sink: %s, modparam: %s\n",
+		    encoder->base.base.id, encoder->base.name,
+		    yesno(intel_dp->can_mst), yesno(sink_can_mst),
+		    yesno(i915_modparams.enable_dp_mst));
 
 	if (!intel_dp->can_mst)
 		return;
···
 	}
 
 	return false;
+}
+
+static ssize_t intel_dp_vsc_sdp_pack(const struct drm_dp_vsc_sdp *vsc,
+				     struct dp_sdp *sdp, size_t size)
+{
+	size_t length = sizeof(struct dp_sdp);
+
+	if (size < length)
+		return -ENOSPC;
+
+	memset(sdp, 0, size);
+
+	/*
+	 * Prepare VSC Header for SU as per DP 1.4a spec, Table 2-119
+	 * VSC SDP Header Bytes
+	 */
+	sdp->sdp_header.HB0 = 0; /* Secondary-Data Packet ID = 0 */
+	sdp->sdp_header.HB1 = vsc->sdp_type; /* Secondary-data Packet Type */
+	sdp->sdp_header.HB2 = vsc->revision; /* Revision Number */
+	sdp->sdp_header.HB3 = vsc->length; /* Number of Valid Data Bytes */
+
+	/* VSC SDP Payload for DB16 through DB18 */
+	/* Pixel Encoding and Colorimetry Formats */
+	sdp->db[16] = (vsc->pixelformat & 0xf) << 4; /* DB16[7:4] */
+	sdp->db[16] |= vsc->colorimetry & 0xf; /* DB16[3:0] */
+
+	switch (vsc->bpc) {
+	case 6:
+		/* 6bpc: 0x0 */
+		break;
+	case 8:
+		sdp->db[17] = 0x1; /* DB17[3:0] */
+		break;
+	case 10:
+		sdp->db[17] = 0x2;
+		break;
+	case 12:
+		sdp->db[17] = 0x3;
+		break;
+	case 16:
+		sdp->db[17] = 0x4;
+		break;
+	default:
+		MISSING_CASE(vsc->bpc);
+		break;
+	}
+	/* Dynamic Range and Component Bit Depth */
+	if (vsc->dynamic_range == DP_DYNAMIC_RANGE_CTA)
+		sdp->db[17] |= 0x80; /* DB17[7] */
+
+	/* Content Type */
+	sdp->db[18] = vsc->content_type & 0x7;
+
+	return length;
+}
+
+static ssize_t
+intel_dp_hdr_metadata_infoframe_sdp_pack(const struct hdmi_drm_infoframe *drm_infoframe,
+					 struct dp_sdp *sdp,
+					 size_t size)
+{
+	size_t length = sizeof(struct dp_sdp);
+	const int infoframe_size = HDMI_INFOFRAME_HEADER_SIZE + HDMI_DRM_INFOFRAME_SIZE;
+	unsigned char buf[HDMI_INFOFRAME_HEADER_SIZE + HDMI_DRM_INFOFRAME_SIZE];
+	ssize_t len;
+
+	if (size < length)
+		return -ENOSPC;
+
+	memset(sdp, 0, size);
+
+	len = hdmi_drm_infoframe_pack_only(drm_infoframe, buf, sizeof(buf));
+	if (len < 0) {
+		DRM_DEBUG_KMS("buffer size is smaller than hdr metadata infoframe\n");
+		return -ENOSPC;
+	}
+
+	if (len != infoframe_size) {
+		DRM_DEBUG_KMS("wrong static hdr metadata size\n");
+		return -ENOSPC;
+	}
+
+	/*
+	 * Set up the infoframe sdp packet for HDR static metadata.
+	 * Prepare VSC Header for SU as per DP 1.4a spec,
+	 * Table 2-100 and Table 2-101
+	 */
+
+	/* Secondary-Data Packet ID, 00h for non-Audio INFOFRAME */
+	sdp->sdp_header.HB0 = 0;
+	/*
+	 * Packet Type 80h + Non-audio INFOFRAME Type value
+	 * HDMI_INFOFRAME_TYPE_DRM: 0x87
+	 * - 80h + Non-audio INFOFRAME Type value
+	 * - InfoFrame Type: 0x07
+	 * [CTA-861-G Table-42 Dynamic Range and Mastering InfoFrame]
+	 */
+	sdp->sdp_header.HB1 = drm_infoframe->type;
+	/*
+	 * Least Significant Eight Bits of (Data Byte Count - 1)
+	 * infoframe_size - 1
+	 */
+	sdp->sdp_header.HB2 = 0x1D;
+	/* INFOFRAME SDP Version Number */
+	sdp->sdp_header.HB3 = (0x13 << 2);
+	/* CTA Header Byte 2 (INFOFRAME Version Number) */
+	sdp->db[0] = drm_infoframe->version;
+	/* CTA Header Byte 3 (Length of INFOFRAME): HDMI_DRM_INFOFRAME_SIZE */
+	sdp->db[1] = drm_infoframe->length;
+	/*
+	 * Copy HDMI_DRM_INFOFRAME_SIZE size from a buffer after
+	 * HDMI_INFOFRAME_HEADER_SIZE
+	 */
+	BUILD_BUG_ON(sizeof(sdp->db) < HDMI_DRM_INFOFRAME_SIZE + 2);
+	memcpy(&sdp->db[2], &buf[HDMI_INFOFRAME_HEADER_SIZE],
+	       HDMI_DRM_INFOFRAME_SIZE);
+
+	/*
+	 * Size of DP infoframe sdp packet for HDR static metadata consists of
+	 * - DP SDP Header(struct dp_sdp_header): 4 bytes
+	 * - Two Data Blocks: 2 bytes
+	 *    CTA Header Byte2 (INFOFRAME Version Number)
+	 *    CTA Header Byte3 (Length of INFOFRAME)
+	 * - HDMI_DRM_INFOFRAME_SIZE: 26 bytes
+	 *
+	 * Prior to GEN11, GMP register size is identical to DP HDR static
+	 * metadata infoframe size. GEN11+ is larger than that size, and
+	 * write_infoframe will pad the rest of the size.
+	 */
+	return sizeof(struct dp_sdp_header) + 2 + HDMI_DRM_INFOFRAME_SIZE;
+}
+
+static void intel_write_dp_sdp(struct intel_encoder *encoder,
+			       const struct intel_crtc_state *crtc_state,
+			       unsigned int type)
+{
+	struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct dp_sdp sdp = {};
+	ssize_t len;
+
+	if ((crtc_state->infoframes.enable &
+	     intel_hdmi_infoframe_enable(type)) == 0)
+		return;
+
+	switch (type) {
+	case DP_SDP_VSC:
+		len = intel_dp_vsc_sdp_pack(&crtc_state->infoframes.vsc, &sdp,
+					    sizeof(sdp));
+		break;
+	case HDMI_PACKET_TYPE_GAMUT_METADATA:
+		len = intel_dp_hdr_metadata_infoframe_sdp_pack(&crtc_state->infoframes.drm.drm,
+							       &sdp, sizeof(sdp));
+		break;
+	default:
+		MISSING_CASE(type);
+		return;
+	}
+
+	if (drm_WARN_ON(&dev_priv->drm, len < 0))
+		return;
+
+	intel_dig_port->write_infoframe(encoder, crtc_state, type, &sdp, len);
+}
+
+void intel_dp_set_infoframes(struct intel_encoder *encoder,
+			     bool enable,
+			     const struct intel_crtc_state *crtc_state,
+			     const struct drm_connector_state *conn_state)
+{
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+	i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
+	u32 dip_enable = VIDEO_DIP_ENABLE_AVI_HSW | VIDEO_DIP_ENABLE_GCP_HSW |
+			 VIDEO_DIP_ENABLE_VS_HSW | VIDEO_DIP_ENABLE_GMP_HSW |
+			 VIDEO_DIP_ENABLE_SPD_HSW | VIDEO_DIP_ENABLE_DRM_GLK;
+	u32 val = intel_de_read(dev_priv, reg);
+
+	/* TODO: Add DSC case (DIP_ENABLE_PPS) */
+	/* When PSR is enabled, this routine doesn't disable VSC DIP */
+	if (intel_psr_enabled(intel_dp))
+		val &= ~dip_enable;
+	else
+		val &= ~(dip_enable | VIDEO_DIP_ENABLE_VSC_HSW);
+
+	if (!enable) {
+		intel_de_write(dev_priv, reg, val);
+		intel_de_posting_read(dev_priv, reg);
+		return;
+	}
+
+	intel_de_write(dev_priv, reg, val);
+	intel_de_posting_read(dev_priv, reg);
+
+	/* When PSR is enabled, VSC SDP is handled by PSR routine */
+	if (!intel_psr_enabled(intel_dp))
+		intel_write_dp_sdp(encoder, crtc_state, DP_SDP_VSC);
+
+	intel_write_dp_sdp(encoder, crtc_state, HDMI_PACKET_TYPE_GAMUT_METADATA);
 }
 
 static void
···
 			    const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	struct dp_sdp infoframe_sdp = {};
 	struct hdmi_drm_infoframe drm_infoframe = {};
···
 
 	ret = drm_hdmi_infoframe_set_hdr_metadata(&drm_infoframe, conn_state);
 	if (ret) {
-		DRM_DEBUG_KMS("couldn't set HDR metadata in infoframe\n");
+		drm_dbg_kms(&i915->drm,
+			    "couldn't set HDR metadata in infoframe\n");
 		return;
 	}
 
 	len = hdmi_drm_infoframe_pack_only(&drm_infoframe, buf, sizeof(buf));
 	if (len < 0) {
-		DRM_DEBUG_KMS("buffer size is smaller than hdr metadata infoframe\n");
+		drm_dbg_kms(&i915->drm,
+			    "buffer size is smaller than hdr metadata infoframe\n");
 		return;
 	}
 
 	if (len != infoframe_size) {
-		DRM_DEBUG_KMS("wrong static hdr metadata size\n");
+		drm_dbg_kms(&i915->drm, "wrong static hdr metadata size\n");
 		return;
 	}
···
 
 static u8 intel_dp_autotest_link_training(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	int status = 0;
 	int test_link_rate;
 	u8 test_lane_count, test_link_bw;
···
 			   &test_lane_count);
 
 	if (status <= 0) {
-		DRM_DEBUG_KMS("Lane count read failed\n");
+		drm_dbg_kms(&i915->drm, "Lane count read failed\n");
 		return DP_TEST_NAK;
 	}
 	test_lane_count &= DP_MAX_LANE_COUNT_MASK;
···
 	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_LINK_RATE,
 				   &test_link_bw);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("Link Rate read failed\n");
+		drm_dbg_kms(&i915->drm, "Link Rate read failed\n");
 		return DP_TEST_NAK;
 	}
 	test_link_rate = drm_dp_bw_code_to_link_rate(test_link_bw);
···
 
 static u8 intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 test_pattern;
 	u8 test_misc;
 	__be16 h_width, v_height;
···
 	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_PATTERN,
 				   &test_pattern);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("Test pattern read failed\n");
+		drm_dbg_kms(&i915->drm, "Test pattern read failed\n");
 		return DP_TEST_NAK;
 	}
 	if (test_pattern != DP_COLOR_RAMP)
···
 	status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_H_WIDTH_HI,
 				  &h_width, 2);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("H Width read failed\n");
+		drm_dbg_kms(&i915->drm, "H Width read failed\n");
 		return DP_TEST_NAK;
 	}
 
 	status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_V_HEIGHT_HI,
 				  &v_height, 2);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("V Height read failed\n");
+		drm_dbg_kms(&i915->drm, "V Height read failed\n");
 		return DP_TEST_NAK;
 	}
 
 	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_MISC0,
 				   &test_misc);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("TEST MISC read failed\n");
+		drm_dbg_kms(&i915->drm, "TEST MISC read failed\n");
 		return DP_TEST_NAK;
 	}
 	if ((test_misc & DP_TEST_COLOR_FORMAT_MASK) != DP_COLOR_FORMAT_RGB)
···
 
 static u8 intel_dp_autotest_edid(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 test_result = DP_TEST_ACK;
 	struct intel_connector *intel_connector = intel_dp->attached_connector;
 	struct drm_connector *connector = &intel_connector->base;
···
 	 */
 	if (intel_dp->aux.i2c_nack_count > 0 ||
 	    intel_dp->aux.i2c_defer_count > 0)
-		DRM_DEBUG_KMS("EDID read had %d NACKs, %d DEFERs\n",
-			      intel_dp->aux.i2c_nack_count,
-			      intel_dp->aux.i2c_defer_count);
+		drm_dbg_kms(&i915->drm,
+			    "EDID read had %d NACKs, %d DEFERs\n",
+			    intel_dp->aux.i2c_nack_count,
+			    intel_dp->aux.i2c_defer_count);
 		intel_dp->compliance.test_data.edid = INTEL_DP_RESOLUTION_FAILSAFE;
 	} else {
 		struct edid *block = intel_connector->detect_edid;
···
 
 		if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_EDID_CHECKSUM,
 				       block->checksum) <= 0)
-			DRM_DEBUG_KMS("Failed to write EDID checksum\n");
+			drm_dbg_kms(&i915->drm,
+				    "Failed to write EDID checksum\n");
 
 		test_result = DP_TEST_ACK | DP_TEST_EDID_CHECKSUM_WRITE;
 		intel_dp->compliance.test_data.edid = INTEL_DP_RESOLUTION_PREFERRED;
···
 	return test_result;
 }
 
+static u8 intel_dp_prepare_phytest(struct intel_dp *intel_dp)
+{
+	struct drm_dp_phy_test_params *data =
+		&intel_dp->compliance.test_data.phytest;
+
+	if (drm_dp_get_phy_test_pattern(&intel_dp->aux, data)) {
+		DRM_DEBUG_KMS("DP Phy Test pattern AUX read failure\n");
+		return DP_TEST_NAK;
+	}
+
+	/*
+	 * link_mst is set to false to avoid executing mst related code
+	 * during compliance testing.
+	 */
+	intel_dp->link_mst = false;
+
+	return DP_TEST_ACK;
+}
+
+static void intel_dp_phy_pattern_update(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv =
+			to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct drm_dp_phy_test_params *data =
+		&intel_dp->compliance.test_data.phytest;
+	struct intel_crtc *crtc = to_intel_crtc(intel_dig_port->base.base.crtc);
+	enum pipe pipe = crtc->pipe;
+	u32 pattern_val;
+
+	switch (data->phy_pattern) {
+	case DP_PHY_TEST_PATTERN_NONE:
+		DRM_DEBUG_KMS("Disable Phy Test Pattern\n");
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe), 0x0);
+		break;
+	case DP_PHY_TEST_PATTERN_D10_2:
+		DRM_DEBUG_KMS("Set D10.2 Phy Test Pattern\n");
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
+			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_D10_2);
+		break;
+	case DP_PHY_TEST_PATTERN_ERROR_COUNT:
+		DRM_DEBUG_KMS("Set Error Count Phy Test Pattern\n");
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
+			       DDI_DP_COMP_CTL_ENABLE |
+			       DDI_DP_COMP_CTL_SCRAMBLED_0);
+		break;
+	case DP_PHY_TEST_PATTERN_PRBS7:
+		DRM_DEBUG_KMS("Set PRBS7 Phy Test Pattern\n");
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
+			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_PRBS7);
+		break;
+	case DP_PHY_TEST_PATTERN_80BIT_CUSTOM:
+		/*
+		 * FIXME: Ideally pattern should come from DPCD 0x250. As
+		 * current firmware of DPR-100 could not set it, so hardcoding
+		 * now for compliance test.
+		 */
+		DRM_DEBUG_KMS("Set 80Bit Custom Phy Test Pattern 0x3e0f83e0 0x0f83e0f8 0x0000f83e\n");
+		pattern_val = 0x3e0f83e0;
+		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 0), pattern_val);
+		pattern_val = 0x0f83e0f8;
+		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 1), pattern_val);
+		pattern_val = 0x0000f83e;
+		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 2), pattern_val);
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
+			       DDI_DP_COMP_CTL_ENABLE |
+			       DDI_DP_COMP_CTL_CUSTOM80);
+		break;
+	case DP_PHY_TEST_PATTERN_CP2520:
+		/*
+		 * FIXME: Ideally pattern should come from DPCD 0x24A. As
+		 * current firmware of DPR-100 could not set it, so hardcoding
+		 * now for compliance test.
+		 */
+		DRM_DEBUG_KMS("Set HBR2 compliance Phy Test Pattern\n");
+		pattern_val = 0xFB;
+		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
+			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_HBR2 |
+			       pattern_val);
+		break;
+	default:
+		WARN(1, "Invalid Phy Test Pattern\n");
+	}
+}
+
+static void
+intel_dp_autotest_phy_ddi_disable(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct drm_device *dev = intel_dig_port->base.base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_crtc *crtc = to_intel_crtc(intel_dig_port->base.base.crtc);
+	enum pipe pipe = crtc->pipe;
+	u32 trans_ddi_func_ctl_value, trans_conf_value, dp_tp_ctl_value;
+
+	trans_ddi_func_ctl_value = intel_de_read(dev_priv,
+						 TRANS_DDI_FUNC_CTL(pipe));
+	trans_conf_value = intel_de_read(dev_priv, PIPECONF(pipe));
+	dp_tp_ctl_value = intel_de_read(dev_priv, TGL_DP_TP_CTL(pipe));
+
+	trans_ddi_func_ctl_value &= ~(TRANS_DDI_FUNC_ENABLE |
+				      TGL_TRANS_DDI_PORT_MASK);
+	trans_conf_value &= ~PIPECONF_ENABLE;
+	dp_tp_ctl_value &= ~DP_TP_CTL_ENABLE;
+
+	intel_de_write(dev_priv, PIPECONF(pipe), trans_conf_value);
+	intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(pipe),
+		       trans_ddi_func_ctl_value);
+	intel_de_write(dev_priv, TGL_DP_TP_CTL(pipe), dp_tp_ctl_value);
+}
+
+static void
+intel_dp_autotest_phy_ddi_enable(struct intel_dp *intel_dp, uint8_t lane_cnt)
+{
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct drm_device *dev = intel_dig_port->base.base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	enum port port = intel_dig_port->base.port;
+	struct intel_crtc *crtc = to_intel_crtc(intel_dig_port->base.base.crtc);
+	enum pipe pipe = crtc->pipe;
+	u32 trans_ddi_func_ctl_value, trans_conf_value, dp_tp_ctl_value;
+
+	trans_ddi_func_ctl_value = intel_de_read(dev_priv,
+						 TRANS_DDI_FUNC_CTL(pipe));
+	trans_conf_value = intel_de_read(dev_priv, PIPECONF(pipe));
+	dp_tp_ctl_value = intel_de_read(dev_priv, TGL_DP_TP_CTL(pipe));
+
+	trans_ddi_func_ctl_value |= TRANS_DDI_FUNC_ENABLE |
+				    TGL_TRANS_DDI_SELECT_PORT(port);
+	trans_conf_value |= PIPECONF_ENABLE;
+	dp_tp_ctl_value |= DP_TP_CTL_ENABLE;
+
+	intel_de_write(dev_priv, PIPECONF(pipe), trans_conf_value);
+	intel_de_write(dev_priv, TGL_DP_TP_CTL(pipe), dp_tp_ctl_value);
+	intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(pipe),
+		       trans_ddi_func_ctl_value);
+}
+
+void intel_dp_process_phy_request(struct intel_dp *intel_dp)
+{
+	struct drm_dp_phy_test_params *data =
+		&intel_dp->compliance.test_data.phytest;
+	u8 link_status[DP_LINK_STATUS_SIZE];
+
+	if (!intel_dp_get_link_status(intel_dp, link_status)) {
+		DRM_DEBUG_KMS("failed to get link status\n");
+		return;
+	}
+
+	/* retrieve vswing & pre-emphasis setting */
+	intel_dp_get_adjust_train(intel_dp, link_status);
+
+	intel_dp_autotest_phy_ddi_disable(intel_dp);
+
+	intel_dp_set_signal_levels(intel_dp);
+
+	intel_dp_phy_pattern_update(intel_dp);
+
+	intel_dp_autotest_phy_ddi_enable(intel_dp, data->num_lanes);
+
+	drm_dp_set_phy_test_pattern(&intel_dp->aux, data,
+				    link_status[DP_DPCD_REV]);
+}
+
 static u8 intel_dp_autotest_phy_pattern(struct intel_dp *intel_dp)
 {
 	u8 test_result = DP_TEST_NAK;
+
+	test_result = intel_dp_prepare_phytest(intel_dp);
+	if (test_result != DP_TEST_ACK)
+		DRM_ERROR("Phy test preparation failed\n");
+
+	intel_dp_process_phy_request(intel_dp);
+
 	return test_result;
 }
 
 static void intel_dp_handle_test_request(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 response = DP_TEST_NAK;
 	u8 request = 0;
 	int status;
 
 	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_REQUEST, &request);
 	if (status <= 0) {
-		DRM_DEBUG_KMS("Could not read test request from sink\n");
+		drm_dbg_kms(&i915->drm,
+			    "Could not read test request from sink\n");
 		goto update_status;
 	}
 
 	switch (request) {
 	case DP_TEST_LINK_TRAINING:
-		DRM_DEBUG_KMS("LINK_TRAINING test requested\n");
+		drm_dbg_kms(&i915->drm, "LINK_TRAINING test requested\n");
 		response = intel_dp_autotest_link_training(intel_dp);
 		break;
 	case DP_TEST_LINK_VIDEO_PATTERN:
-		DRM_DEBUG_KMS("TEST_PATTERN test requested\n");
+		drm_dbg_kms(&i915->drm, "TEST_PATTERN test requested\n");
 		response = intel_dp_autotest_video_pattern(intel_dp);
 		break;
 	case DP_TEST_LINK_EDID_READ:
-		DRM_DEBUG_KMS("EDID test requested\n");
+		drm_dbg_kms(&i915->drm, "EDID test requested\n");
 		response = intel_dp_autotest_edid(intel_dp);
 		break;
 	case DP_TEST_LINK_PHY_TEST_PATTERN:
-		DRM_DEBUG_KMS("PHY_PATTERN test requested\n");
+		drm_dbg_kms(&i915->drm, "PHY_PATTERN test requested\n");
 		response = intel_dp_autotest_phy_pattern(intel_dp);
 		break;
 	default:
-		DRM_DEBUG_KMS("Invalid test request '%02x'\n", request);
+		drm_dbg_kms(&i915->drm, "Invalid test request '%02x'\n",
+			    request);
 		break;
 	}
···
 update_status:
 	status = drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_RESPONSE, response);
 	if (status <= 0)
-		DRM_DEBUG_KMS("Could not write test response to sink\n");
+		drm_dbg_kms(&i915->drm,
+			    "Could not write test response to sink\n");
 }
 
 static int
 intel_dp_check_mst_status(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	bool bret;
 
 	if (intel_dp->is_mst) {
···
 		/* check link status - esi[10] = 0x200c */
 		if (intel_dp->active_mst_links > 0 &&
 		    !drm_dp_channel_eq_ok(&esi[10], intel_dp->lane_count)) {
-			DRM_DEBUG_KMS("channel EQ not ok, retraining\n");
+			drm_dbg_kms(&i915->drm,
+				    "channel EQ not ok, retraining\n");
 			intel_dp_start_link_train(intel_dp);
 			intel_dp_stop_link_train(intel_dp);
 		}
 
-		DRM_DEBUG_KMS("got esi %3ph\n", esi);
+		drm_dbg_kms(&i915->drm, "got esi %3ph\n", esi);
 		ret = drm_dp_mst_hpd_irq(&intel_dp->mst_mgr, esi, &handled);
 
 		if (handled) {
···
 
 			bret = intel_dp_get_sink_irq_esi(intel_dp, esi);
 			if (bret == true) {
-				DRM_DEBUG_KMS("got esi2 %3ph\n", esi);
+				drm_dbg_kms(&i915->drm,
+					    "got esi2 %3ph\n", esi);
 				goto go_again;
 			}
 		} else
···
 
 		return ret;
 	} else {
-
DRM_DEBUG_KMS("failed to get ESI - device may have failed\n"); 5101 + drm_dbg_kms(&i915->drm, 5102 + "failed to get ESI - device may have failed\n"); 5680 5103 intel_dp->is_mst = false; 5681 5104 drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, 5682 5105 intel_dp->is_mst); ··· 5799 5220 */ 5800 5221 static enum intel_hotplug_state 5801 5222 intel_dp_hotplug(struct intel_encoder *encoder, 5802 - struct intel_connector *connector, 5803 - bool irq_received) 5223 + struct intel_connector *connector) 5804 5224 { 5805 5225 struct drm_modeset_acquire_ctx ctx; 5806 5226 enum intel_hotplug_state state; 5807 5227 int ret; 5808 5228 5809 - state = intel_encoder_hotplug(encoder, connector, irq_received); 5229 + state = intel_encoder_hotplug(encoder, connector); 5810 5230 5811 5231 drm_modeset_acquire_init(&ctx, 0); 5812 5232 ··· 5829 5251 * Keeping it consistent with intel_ddi_hotplug() and 5830 5252 * intel_hdmi_hotplug(). 5831 5253 */ 5832 - if (state == INTEL_HOTPLUG_UNCHANGED && irq_received) 5254 + if (state == INTEL_HOTPLUG_UNCHANGED && !connector->hotplug_retries) 5833 5255 state = INTEL_HOTPLUG_RETRY; 5834 5256 5835 5257 return state; ··· 5837 5259 5838 5260 static void intel_dp_check_service_irq(struct intel_dp *intel_dp) 5839 5261 { 5262 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 5840 5263 u8 val; 5841 5264 5842 5265 if (intel_dp->dpcd[DP_DPCD_REV] < 0x11) ··· 5856 5277 intel_hdcp_handle_cp_irq(intel_dp->attached_connector); 5857 5278 5858 5279 if (val & DP_SINK_SPECIFIC_IRQ) 5859 - DRM_DEBUG_DRIVER("Sink specific irq unhandled\n"); 5280 + drm_dbg_kms(&i915->drm, "Sink specific irq unhandled\n"); 5860 5281 } 5861 5282 5862 5283 /* ··· 5923 5344 static enum drm_connector_status 5924 5345 intel_dp_detect_dpcd(struct intel_dp *intel_dp) 5925 5346 { 5347 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 5926 5348 struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp); 5927 5349 u8 *dpcd = intel_dp->dpcd; 5928 5350 u8 type; ··· 5971 5391 } 5972 5392 
5973 5393 /* Anything else is out of spec, warn and ignore */ 5974 - DRM_DEBUG_KMS("Broken DP branch device, ignoring\n"); 5394 + drm_dbg_kms(&i915->drm, "Broken DP branch device, ignoring\n"); 5975 5395 return connector_status_disconnected; 5976 5396 } 5977 5397 ··· 6443 5863 static int 6444 5864 intel_dp_connector_register(struct drm_connector *connector) 6445 5865 { 5866 + struct drm_i915_private *i915 = to_i915(connector->dev); 6446 5867 struct intel_dp *intel_dp = intel_attached_dp(to_intel_connector(connector)); 6447 5868 int ret; 6448 5869 ··· 6453 5872 6454 5873 intel_connector_debugfs_add(connector); 6455 5874 6456 - DRM_DEBUG_KMS("registering %s bus for %s\n", 6457 - intel_dp->aux.name, connector->kdev->kobj.name); 5875 + drm_dbg_kms(&i915->drm, "registering %s bus for %s\n", 5876 + intel_dp->aux.name, connector->kdev->kobj.name); 6458 5877 6459 5878 intel_dp->aux.dev = connector->kdev; 6460 5879 ret = drm_dp_aux_register(&intel_dp->aux); ··· 6540 5959 int intel_dp_hdcp_write_an_aksv(struct intel_digital_port *intel_dig_port, 6541 5960 u8 *an) 6542 5961 { 5962 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6543 5963 struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(&intel_dig_port->base.base)); 6544 5964 static const struct drm_dp_aux_msg msg = { 6545 5965 .request = DP_AUX_NATIVE_WRITE, ··· 6555 5973 dpcd_ret = drm_dp_dpcd_write(&intel_dig_port->dp.aux, DP_AUX_HDCP_AN, 6556 5974 an, DRM_HDCP_AN_LEN); 6557 5975 if (dpcd_ret != DRM_HDCP_AN_LEN) { 6558 - DRM_DEBUG_KMS("Failed to write An over DP/AUX (%zd)\n", 6559 - dpcd_ret); 5976 + drm_dbg_kms(&i915->drm, 5977 + "Failed to write An over DP/AUX (%zd)\n", 5978 + dpcd_ret); 6560 5979 return dpcd_ret >= 0 ? 
-EIO : dpcd_ret; 6561 5980 } 6562 5981 ··· 6573 5990 rxbuf, sizeof(rxbuf), 6574 5991 DP_AUX_CH_CTL_AUX_AKSV_SELECT); 6575 5992 if (ret < 0) { 6576 - DRM_DEBUG_KMS("Write Aksv over DP/AUX failed (%d)\n", ret); 5993 + drm_dbg_kms(&i915->drm, 5994 + "Write Aksv over DP/AUX failed (%d)\n", ret); 6577 5995 return ret; 6578 5996 } else if (ret == 0) { 6579 - DRM_DEBUG_KMS("Aksv write over DP/AUX was empty\n"); 5997 + drm_dbg_kms(&i915->drm, "Aksv write over DP/AUX was empty\n"); 6580 5998 return -EIO; 6581 5999 } 6582 6000 6583 6001 reply = (rxbuf[0] >> 4) & DP_AUX_NATIVE_REPLY_MASK; 6584 6002 if (reply != DP_AUX_NATIVE_REPLY_ACK) { 6585 - DRM_DEBUG_KMS("Aksv write: no DP_AUX_NATIVE_REPLY_ACK %x\n", 6586 - reply); 6003 + drm_dbg_kms(&i915->drm, 6004 + "Aksv write: no DP_AUX_NATIVE_REPLY_ACK %x\n", 6005 + reply); 6587 6006 return -EIO; 6588 6007 } 6589 6008 return 0; ··· 6594 6009 static int intel_dp_hdcp_read_bksv(struct intel_digital_port *intel_dig_port, 6595 6010 u8 *bksv) 6596 6011 { 6012 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6597 6013 ssize_t ret; 6014 + 6598 6015 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_BKSV, bksv, 6599 6016 DRM_HDCP_KSV_LEN); 6600 6017 if (ret != DRM_HDCP_KSV_LEN) { 6601 - DRM_DEBUG_KMS("Read Bksv from DP/AUX failed (%zd)\n", ret); 6018 + drm_dbg_kms(&i915->drm, 6019 + "Read Bksv from DP/AUX failed (%zd)\n", ret); 6602 6020 return ret >= 0 ? -EIO : ret; 6603 6021 } 6604 6022 return 0; ··· 6610 6022 static int intel_dp_hdcp_read_bstatus(struct intel_digital_port *intel_dig_port, 6611 6023 u8 *bstatus) 6612 6024 { 6025 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6613 6026 ssize_t ret; 6027 + 6614 6028 /* 6615 6029 * For some reason the HDMI and DP HDCP specs call this register 6616 6030 * definition by different names. 
In the HDMI spec, it's called BSTATUS, ··· 6621 6031 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_BINFO, 6622 6032 bstatus, DRM_HDCP_BSTATUS_LEN); 6623 6033 if (ret != DRM_HDCP_BSTATUS_LEN) { 6624 - DRM_DEBUG_KMS("Read bstatus from DP/AUX failed (%zd)\n", ret); 6034 + drm_dbg_kms(&i915->drm, 6035 + "Read bstatus from DP/AUX failed (%zd)\n", ret); 6625 6036 return ret >= 0 ? -EIO : ret; 6626 6037 } 6627 6038 return 0; ··· 6632 6041 int intel_dp_hdcp_read_bcaps(struct intel_digital_port *intel_dig_port, 6633 6042 u8 *bcaps) 6634 6043 { 6044 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6635 6045 ssize_t ret; 6636 6046 6637 6047 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_BCAPS, 6638 6048 bcaps, 1); 6639 6049 if (ret != 1) { 6640 - DRM_DEBUG_KMS("Read bcaps from DP/AUX failed (%zd)\n", ret); 6050 + drm_dbg_kms(&i915->drm, 6051 + "Read bcaps from DP/AUX failed (%zd)\n", ret); 6641 6052 return ret >= 0 ? -EIO : ret; 6642 6053 } 6643 6054 ··· 6665 6072 int intel_dp_hdcp_read_ri_prime(struct intel_digital_port *intel_dig_port, 6666 6073 u8 *ri_prime) 6667 6074 { 6075 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6668 6076 ssize_t ret; 6077 + 6669 6078 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_RI_PRIME, 6670 6079 ri_prime, DRM_HDCP_RI_LEN); 6671 6080 if (ret != DRM_HDCP_RI_LEN) { 6672 - DRM_DEBUG_KMS("Read Ri' from DP/AUX failed (%zd)\n", ret); 6081 + drm_dbg_kms(&i915->drm, "Read Ri' from DP/AUX failed (%zd)\n", 6082 + ret); 6673 6083 return ret >= 0 ? 
-EIO : ret; 6674 6084 } 6675 6085 return 0; ··· 6682 6086 int intel_dp_hdcp_read_ksv_ready(struct intel_digital_port *intel_dig_port, 6683 6087 bool *ksv_ready) 6684 6088 { 6089 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6685 6090 ssize_t ret; 6686 6091 u8 bstatus; 6092 + 6687 6093 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_BSTATUS, 6688 6094 &bstatus, 1); 6689 6095 if (ret != 1) { 6690 - DRM_DEBUG_KMS("Read bstatus from DP/AUX failed (%zd)\n", ret); 6096 + drm_dbg_kms(&i915->drm, 6097 + "Read bstatus from DP/AUX failed (%zd)\n", ret); 6691 6098 return ret >= 0 ? -EIO : ret; 6692 6099 } 6693 6100 *ksv_ready = bstatus & DP_BSTATUS_READY; ··· 6701 6102 int intel_dp_hdcp_read_ksv_fifo(struct intel_digital_port *intel_dig_port, 6702 6103 int num_downstream, u8 *ksv_fifo) 6703 6104 { 6105 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6704 6106 ssize_t ret; 6705 6107 int i; 6706 6108 ··· 6713 6113 ksv_fifo + i * DRM_HDCP_KSV_LEN, 6714 6114 len); 6715 6115 if (ret != len) { 6716 - DRM_DEBUG_KMS("Read ksv[%d] from DP/AUX failed (%zd)\n", 6717 - i, ret); 6116 + drm_dbg_kms(&i915->drm, 6117 + "Read ksv[%d] from DP/AUX failed (%zd)\n", 6118 + i, ret); 6718 6119 return ret >= 0 ? -EIO : ret; 6719 6120 } 6720 6121 } ··· 6726 6125 int intel_dp_hdcp_read_v_prime_part(struct intel_digital_port *intel_dig_port, 6727 6126 int i, u32 *part) 6728 6127 { 6128 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6729 6129 ssize_t ret; 6730 6130 6731 6131 if (i >= DRM_HDCP_V_PRIME_NUM_PARTS) ··· 6736 6134 DP_AUX_HDCP_V_PRIME(i), part, 6737 6135 DRM_HDCP_V_PRIME_PART_LEN); 6738 6136 if (ret != DRM_HDCP_V_PRIME_PART_LEN) { 6739 - DRM_DEBUG_KMS("Read v'[%d] from DP/AUX failed (%zd)\n", i, ret); 6137 + drm_dbg_kms(&i915->drm, 6138 + "Read v'[%d] from DP/AUX failed (%zd)\n", i, ret); 6740 6139 return ret >= 0 ? 
-EIO : ret; 6741 6140 } 6742 6141 return 0; ··· 6754 6151 static 6755 6152 bool intel_dp_hdcp_check_link(struct intel_digital_port *intel_dig_port) 6756 6153 { 6154 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6757 6155 ssize_t ret; 6758 6156 u8 bstatus; 6759 6157 6760 6158 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, DP_AUX_HDCP_BSTATUS, 6761 6159 &bstatus, 1); 6762 6160 if (ret != 1) { 6763 - DRM_DEBUG_KMS("Read bstatus from DP/AUX failed (%zd)\n", ret); 6161 + drm_dbg_kms(&i915->drm, 6162 + "Read bstatus from DP/AUX failed (%zd)\n", ret); 6764 6163 return false; 6765 6164 } 6766 6165 ··· 6837 6232 int intel_dp_hdcp2_read_rx_status(struct intel_digital_port *intel_dig_port, 6838 6233 u8 *rx_status) 6839 6234 { 6235 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6840 6236 ssize_t ret; 6841 6237 6842 6238 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, 6843 6239 DP_HDCP_2_2_REG_RXSTATUS_OFFSET, rx_status, 6844 6240 HDCP_2_2_DP_RXSTATUS_LEN); 6845 6241 if (ret != HDCP_2_2_DP_RXSTATUS_LEN) { 6846 - DRM_DEBUG_KMS("Read bstatus from DP/AUX failed (%zd)\n", ret); 6242 + drm_dbg_kms(&i915->drm, 6243 + "Read bstatus from DP/AUX failed (%zd)\n", ret); 6847 6244 return ret >= 0 ? 
-EIO : ret; 6848 6245 } 6849 6246 ··· 6889 6282 intel_dp_hdcp2_wait_for_msg(struct intel_digital_port *intel_dig_port, 6890 6283 const struct hdcp2_dp_msg_data *hdcp2_msg_data) 6891 6284 { 6285 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 6892 6286 struct intel_dp *dp = &intel_dig_port->dp; 6893 6287 struct intel_hdcp *hdcp = &dp->attached_connector->hdcp; 6894 6288 u8 msg_id = hdcp2_msg_data->msg_id; ··· 6921 6313 } 6922 6314 6923 6315 if (ret) 6924 - DRM_DEBUG_KMS("msg_id %d, ret %d, timeout(mSec): %d\n", 6925 - hdcp2_msg_data->msg_id, ret, timeout); 6316 + drm_dbg_kms(&i915->drm, 6317 + "msg_id %d, ret %d, timeout(mSec): %d\n", 6318 + hdcp2_msg_data->msg_id, ret, timeout); 6926 6319 6927 6320 return ret; 6928 6321 } ··· 7009 6400 int intel_dp_hdcp2_read_msg(struct intel_digital_port *intel_dig_port, 7010 6401 u8 msg_id, void *buf, size_t size) 7011 6402 { 6403 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 7012 6404 unsigned int offset; 7013 6405 u8 *byte = buf; 7014 6406 ssize_t ret, bytes_to_recv, len; ··· 7043 6433 ret = drm_dp_dpcd_read(&intel_dig_port->dp.aux, offset, 7044 6434 (void *)byte, len); 7045 6435 if (ret < 0) { 7046 - DRM_DEBUG_KMS("msg_id %d, ret %zd\n", msg_id, ret); 6436 + drm_dbg_kms(&i915->drm, "msg_id %d, ret %zd\n", 6437 + msg_id, ret); 7047 6438 return ret; 7048 6439 } 7049 6440 ··· 7335 6724 if (ret) 7336 6725 return ret; 7337 6726 7338 - if (INTEL_GEN(dev_priv) < 11) 6727 + /* 6728 + * We don't enable port sync on BDW due to missing w/as and 6729 + * due to not having adjusted the modeset sequence appropriately. 
6730 + */ 6731 + if (INTEL_GEN(dev_priv) < 9) 7339 6732 return 0; 7340 6733 7341 6734 if (!intel_connector_needs_modeset(state, conn)) ··· 7378 6763 .destroy = intel_dp_encoder_destroy, 7379 6764 }; 7380 6765 6766 + static bool intel_edp_have_power(struct intel_dp *intel_dp) 6767 + { 6768 + intel_wakeref_t wakeref; 6769 + bool have_power = false; 6770 + 6771 + with_pps_lock(intel_dp, wakeref) { 6772 + have_power = edp_have_panel_power(intel_dp) && 6773 + edp_have_panel_vdd(intel_dp); 6774 + } 6775 + 6776 + return have_power; 6777 + } 6778 + 7381 6779 enum irqreturn 7382 6780 intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd) 7383 6781 { 6782 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 7384 6783 struct intel_dp *intel_dp = &intel_dig_port->dp; 7385 6784 7386 - if (long_hpd && intel_dig_port->base.type == INTEL_OUTPUT_EDP) { 6785 + if (intel_dig_port->base.type == INTEL_OUTPUT_EDP && 6786 + (long_hpd || !intel_edp_have_power(intel_dp))) { 7387 6787 /* 7388 - * vdd off can generate a long pulse on eDP which 6788 + * vdd off can generate a long/short pulse on eDP which 7389 6789 * would require vdd on to handle it, and thus we 7390 6790 * would end up in an endless cycle of 7391 - * "vdd off -> long hpd -> vdd on -> detect -> vdd off -> ..." 6791 + * "vdd off -> long/short hpd -> vdd on -> detect -> vdd off -> ..." 7392 6792 */ 7393 - DRM_DEBUG_KMS("ignoring long hpd on eDP [ENCODER:%d:%s]\n", 7394 - intel_dig_port->base.base.base.id, 7395 - intel_dig_port->base.base.name); 6793 + drm_dbg_kms(&i915->drm, 6794 + "ignoring %s hpd on eDP [ENCODER:%d:%s]\n", 6795 + long_hpd ? "long" : "short", 6796 + intel_dig_port->base.base.base.id, 6797 + intel_dig_port->base.base.name); 7396 6798 return IRQ_HANDLED; 7397 6799 } 7398 6800 7399 - DRM_DEBUG_KMS("got hpd irq on [ENCODER:%d:%s] - %s\n", 7400 - intel_dig_port->base.base.base.id, 7401 - intel_dig_port->base.base.name, 7402 - long_hpd ? 
"long" : "short"); 6801 + drm_dbg_kms(&i915->drm, "got hpd irq on [ENCODER:%d:%s] - %s\n", 6802 + intel_dig_port->base.base.base.id, 6803 + intel_dig_port->base.base.name, 6804 + long_hpd ? "long" : "short"); 7403 6805 7404 6806 if (long_hpd) { 7405 6807 intel_dp->reset_link_params = true; ··· 7429 6797 * If we were in MST mode, and device is not 7430 6798 * there, get out of MST mode 7431 6799 */ 7432 - DRM_DEBUG_KMS("MST device may have disappeared %d vs %d\n", 7433 - intel_dp->is_mst, intel_dp->mst_mgr.mst_state); 6800 + drm_dbg_kms(&i915->drm, 6801 + "MST device may have disappeared %d vs %d\n", 6802 + intel_dp->is_mst, 6803 + intel_dp->mst_mgr.mst_state); 7434 6804 intel_dp->is_mst = false; 7435 6805 drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, 7436 6806 intel_dp->is_mst);
drivers/gpu/drm/i915/display/intel_dp.h (+4)
···
 void intel_dp_hdr_metadata_enable(struct intel_dp *intel_dp,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state);
+void intel_dp_set_infoframes(struct intel_encoder *encoder, bool enable,
+			     const struct intel_crtc_state *crtc_state,
+			     const struct drm_connector_state *conn_state);
 bool intel_digital_port_connected(struct intel_encoder *encoder);
+void intel_dp_process_phy_request(struct intel_dp *intel_dp);
 
 static inline unsigned int intel_dp_unused_lane_mask(int lane_count)
 {
drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c (+50 -34)
···
 
 static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 reg_val = 0;
 
 	/* Early return when display use other mechanism to enable backlight. */
···
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_DISPLAY_CONTROL_REGISTER,
 			      &reg_val) < 0) {
-		DRM_DEBUG_KMS("Failed to read DPCD register 0x%x\n",
-			      DP_EDP_DISPLAY_CONTROL_REGISTER);
+		drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
+			    DP_EDP_DISPLAY_CONTROL_REGISTER);
 		return;
 	}
 	if (enable)
···
 
 	if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_EDP_DISPLAY_CONTROL_REGISTER,
 			       reg_val) != 1) {
-		DRM_DEBUG_KMS("Failed to %s aux backlight\n",
-			      enable ? "enable" : "disable");
+		drm_dbg_kms(&i915->drm, "Failed to %s aux backlight\n",
+			    enable ? "enable" : "disable");
 	}
 }
 
···
 static u32 intel_dp_aux_get_backlight(struct intel_connector *connector)
 {
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 read_val[2] = { 0x0 };
 	u8 mode_reg;
 	u16 level = 0;
···
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER,
 			      &mode_reg) != 1) {
-		DRM_DEBUG_KMS("Failed to read the DPCD register 0x%x\n",
-			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
+		drm_dbg_kms(&i915->drm,
+			    "Failed to read the DPCD register 0x%x\n",
+			    DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
 		return 0;
 	}
 
···
 
 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
 			     &read_val, sizeof(read_val)) < 0) {
-		DRM_DEBUG_KMS("Failed to read DPCD register 0x%x\n",
-			      DP_EDP_BACKLIGHT_BRIGHTNESS_MSB);
+		drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
+			    DP_EDP_BACKLIGHT_BRIGHTNESS_MSB);
 		return 0;
 	}
 	level = read_val[0];
···
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 vals[2] = { 0x0 };
 
 	vals[0] = level;
···
 	}
 	if (drm_dp_dpcd_write(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
 			      vals, sizeof(vals)) < 0) {
-		DRM_DEBUG_KMS("Failed to write aux backlight level\n");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to write aux backlight level\n");
 		return;
 	}
 }
···
 
 	freq = dev_priv->vbt.backlight.pwm_freq_hz;
 	if (!freq) {
-		DRM_DEBUG_KMS("Use panel default backlight frequency\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Use panel default backlight frequency\n");
 		return false;
 	}
 
···
 	fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
 
 	if (fxp_min > fxp_actual || fxp_actual > fxp_max) {
-		DRM_DEBUG_KMS("Actual frequency out of range\n");
+		drm_dbg_kms(&dev_priv->drm, "Actual frequency out of range\n");
 		return false;
 	}
 
 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
 			       DP_EDP_BACKLIGHT_FREQ_SET, (u8) f) < 0) {
-		DRM_DEBUG_KMS("Failed to write aux backlight freq\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Failed to write aux backlight freq\n");
 		return false;
 	}
 	return true;
···
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	struct intel_panel *panel = &connector->panel;
 	u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf) != 1) {
-		DRM_DEBUG_KMS("Failed to read DPCD register 0x%x\n",
-			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
+		drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
+			    DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
 		return;
 	}
 
···
 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
 			       DP_EDP_PWMGEN_BIT_COUNT,
 			       panel->backlight.pwmgen_bit_count) < 0)
-		DRM_DEBUG_KMS("Failed to write aux pwmgen bit count\n");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to write aux pwmgen bit count\n");
 
 		break;
 
···
 	if (new_dpcd_buf != dpcd_buf) {
 		if (drm_dp_dpcd_writeb(&intel_dp->aux,
 				       DP_EDP_BACKLIGHT_MODE_SET_REGISTER, new_dpcd_buf) < 0) {
-			DRM_DEBUG_KMS("Failed to write aux backlight mode\n");
+			drm_dbg_kms(&i915->drm,
+				    "Failed to write aux backlight mode\n");
 		}
 	}
 
···
 	 * minimum value will applied automatically. So no need to check that.
 	 */
 	freq = i915->vbt.backlight.pwm_freq_hz;
-	DRM_DEBUG_KMS("VBT defined backlight frequency %u Hz\n", freq);
+	drm_dbg_kms(&i915->drm, "VBT defined backlight frequency %u Hz\n",
+		    freq);
 	if (!freq) {
-		DRM_DEBUG_KMS("Use panel default backlight frequency\n");
+		drm_dbg_kms(&i915->drm,
+			    "Use panel default backlight frequency\n");
 		return max_backlight;
 	}
 
···
 	 */
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN, &pn_min) != 1) {
-		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap min\n");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to read pwmgen bit count cap min\n");
 		return max_backlight;
 	}
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX, &pn_max) != 1) {
-		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap max\n");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to read pwmgen bit count cap max\n");
 		return max_backlight;
 	}
 	pn_min &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
···
 	fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
 	fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
 	if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max) {
-		DRM_DEBUG_KMS("VBT defined backlight frequency out of range\n");
+		drm_dbg_kms(&i915->drm,
+			    "VBT defined backlight frequency out of range\n");
 		return max_backlight;
 	}
 
···
 		break;
 	}
 
-	DRM_DEBUG_KMS("Using eDP pwmgen bit count of %d\n", pn);
+	drm_dbg_kms(&i915->drm, "Using eDP pwmgen bit count of %d\n", pn);
 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
 			       DP_EDP_PWMGEN_BIT_COUNT, pn) < 0) {
-		DRM_DEBUG_KMS("Failed to write aux pwmgen bit count\n");
+		drm_dbg_kms(&i915->drm,
+			    "Failed to write aux pwmgen bit count\n");
 		return max_backlight;
 	}
 	panel->backlight.pwmgen_bit_count = pn;
···
 intel_dp_aux_display_control_capable(struct intel_connector *connector)
 {
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	/* Check the eDP Display control capabilities registers to determine if
 	 * the panel can support backlight control over the aux channel
···
 	if (intel_dp->edp_dpcd[1] & DP_EDP_TCON_BACKLIGHT_ADJUSTMENT_CAP &&
 	    (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_AUX_SET_CAP) &&
 	    !(intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_PWM_PIN_CAP)) {
-		DRM_DEBUG_KMS("AUX Backlight Control Supported!\n");
+		drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n");
 		return true;
 	}
 	return false;
···
 {
 	struct intel_panel *panel = &intel_connector->panel;
 	struct intel_dp *intel_dp = enc_to_intel_dp(intel_connector->encoder);
-	struct drm_device *dev = intel_connector->base.dev;
-	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	if (i915_modparams.enable_dpcd_backlight == 0 ||
 	    !intel_dp_aux_display_control_capable(intel_connector))
···
 	 * There are a lot of machines that don't advertise the backlight
 	 * control interface to use properly in their VBIOS, :\
 	 */
-	if (dev_priv->vbt.backlight.type !=
+	if (i915->vbt.backlight.type !=
 	    INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE &&
 	    !drm_dp_has_quirk(&intel_dp->desc, intel_dp->edid_quirks,
 			      DP_QUIRK_FORCE_DPCD_BACKLIGHT)) {
-		DRM_DEV_INFO(dev->dev,
-			     "Panel advertises DPCD backlight support, but "
-			     "VBT disagrees. If your backlight controls "
-			     "don't work try booting with "
-			     "i915.enable_dpcd_backlight=1. If your machine "
-			     "needs this, please file a _new_ bug report on "
-			     "drm/i915, see " FDO_BUG_URL " for details.\n");
+		drm_info(&i915->drm,
+			 "Panel advertises DPCD backlight support, but "
+			 "VBT disagrees. If your backlight controls "
+			 "don't work try booting with "
+			 "i915.enable_dpcd_backlight=1. If your machine "
+			 "needs this, please file a _new_ bug report on "
+			 "drm/i915, see " FDO_BUG_URL " for details.\n");
 		return -ENODEV;
 	}
 
drivers/gpu/drm/i915/display/intel_dp_mst.c (+85 -68)
···
 	struct intel_dp *intel_dp = &intel_mst->primary->dp;
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
-	void *port = connector->port;
 	bool constant_n = drm_dp_has_quirk(&intel_dp->desc, 0,
 					   DP_DPCD_QUIRK_CONSTANT_N);
 	int bpp, slots = -EINVAL;
···
 					   false);
 
 	slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
-					      port, crtc_state->pbn, 0);
+					      connector->port,
+					      crtc_state->pbn, 0);
 	if (slots == -EDEADLK)
 		return slots;
 	if (slots >= 0)
···
 	}
 
 	if (slots < 0) {
-		DRM_DEBUG_KMS("failed finding vcpi slots:%d\n", slots);
+		drm_dbg_kms(&i915->drm, "failed finding vcpi slots:%d\n",
+			    slots);
 		return slots;
 	}
 
···
 	return 0;
 }
 
-/*
- * Iterate over all connectors and return the smallest transcoder in the MST
- * stream
- */
-static enum transcoder
-intel_dp_mst_master_trans_compute(struct intel_atomic_state *state,
-				  struct intel_dp *mst_port)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	struct intel_digital_connector_state *conn_state;
-	struct intel_connector *connector;
-	enum pipe ret = I915_MAX_PIPES;
-	int i;
-
-	if (INTEL_GEN(dev_priv) < 12)
-		return INVALID_TRANSCODER;
-
-	for_each_new_intel_connector_in_state(state, connector, conn_state, i) {
-		struct intel_crtc_state *crtc_state;
-		struct intel_crtc *crtc;
-
-		if (connector->mst_port != mst_port || !conn_state->base.crtc)
-			continue;
-
-		crtc = to_intel_crtc(conn_state->base.crtc);
-		crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
-		if (!crtc_state->uapi.active)
-			continue;
-
-		/*
-		 * Using crtc->pipe because crtc_state->cpu_transcoder is
-		 * computed, so others CRTCs could have non-computed
-		 * cpu_transcoder
-		 */
-		if (crtc->pipe < ret)
-			ret = crtc->pipe;
-	}
-
-	if (ret == I915_MAX_PIPES)
-		return INVALID_TRANSCODER;
-
-	/* Simple cast works because TGL don't have a eDP transcoder */
-	return (enum transcoder)ret;
-}
-
 static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
 				       struct intel_crtc_state *pipe_config,
 				       struct drm_connector_state *conn_state)
 {
-	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
 	struct intel_dp *intel_dp = &intel_mst->primary->dp;
···
 		to_intel_digital_connector_state(conn_state);
 	const struct drm_display_mode *adjusted_mode =
 		&pipe_config->hw.adjusted_mode;
-	void *port = connector->port;
 	struct link_config_limits limits;
 	int ret;
 
···
 
 	if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO)
 		pipe_config->has_audio =
-			drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, port);
+			drm_dp_mst_port_has_audio(&intel_dp->mst_mgr,
+						  connector->port);
 	else
 		pipe_config->has_audio =
 			intel_conn_state->force_audio == HDMI_AUDIO_ON;
···
 
 	intel_ddi_compute_min_voltage_level(dev_priv, pipe_config);
 
-	pipe_config->mst_master_transcoder = intel_dp_mst_master_trans_compute(state, intel_dp);
+	return 0;
+}
+
+/*
+ * Iterate over all connectors and return a mask of
+ * all CPU transcoders streaming over the same DP link.
+ */
+static unsigned int
+intel_dp_mst_transcoder_mask(struct intel_atomic_state *state,
+			     struct intel_dp *mst_port)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_digital_connector_state *conn_state;
+	struct intel_connector *connector;
+	u8 transcoders = 0;
+	int i;
+
+	if (INTEL_GEN(dev_priv) < 12)
+		return 0;
+
+	for_each_new_intel_connector_in_state(state, connector, conn_state, i) {
+		const struct intel_crtc_state *crtc_state;
+		struct intel_crtc *crtc;
+
+		if (connector->mst_port != mst_port || !conn_state->base.crtc)
+			continue;
+
+		crtc = to_intel_crtc(conn_state->base.crtc);
+		crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
+
+		if (!crtc_state->hw.active)
+			continue;
+
+		transcoders |= BIT(crtc_state->cpu_transcoder);
+	}
+
+	return transcoders;
+}
+
+static int intel_dp_mst_compute_config_late(struct intel_encoder *encoder,
+					    struct intel_crtc_state *crtc_state,
+					    struct drm_connector_state *conn_state)
+{
+	struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state);
+	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
+	struct intel_dp *intel_dp = &intel_mst->primary->dp;
+
+	/* lowest numbered transcoder will be designated master */
+	crtc_state->mst_master_transcoder =
+		ffs(intel_dp_mst_transcoder_mask(state, intel_dp)) - 1;
 
 	return 0;
 }
···
 	return ret;
 }
 
-static void intel_mst_disable_dp(struct intel_encoder *encoder,
+static void intel_mst_disable_dp(struct intel_atomic_state *state,
+				 struct intel_encoder *encoder,
 				 const struct intel_crtc_state *old_crtc_state,
 				 const struct drm_connector_state *old_conn_state)
 {
···
 	struct intel_dp *intel_dp = &intel_dig_port->dp;
 	struct intel_connector *connector =
 		to_intel_connector(old_conn_state->connector);
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	int ret;
 
-	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
+	drm_dbg_kms(&i915->drm, "active links %d\n",
+		    intel_dp->active_mst_links);
 
 	drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port);
 
 	ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr);
 	if (ret) {
-		DRM_DEBUG_KMS("failed to update payload %d\n", ret);
+		drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret);
 	}
 	if (old_crtc_state->has_audio)
 		intel_audio_codec_disable(encoder,
 					  old_crtc_state, old_conn_state);
 }
 
-static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
+static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
+				      struct intel_encoder *encoder,
 				      const struct intel_crtc_state *old_crtc_state,
 				      const struct drm_connector_state *old_conn_state)
 {
···
 
 	if (intel_de_wait_for_set(dev_priv, intel_dp->regs.dp_tp_status,
 				  DP_TP_STATUS_ACT_SENT, 1))
-		DRM_ERROR("Timed out waiting for ACT sent when disabling\n");
+		drm_err(&dev_priv->drm,
+			"Timed out waiting for ACT sent when disabling\n");
 	drm_dp_check_act_status(&intel_dp->mst_mgr);
 
 	drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port);
···
 
 	intel_mst->connector = NULL;
 	if (last_mst_stream)
-		intel_dig_port->base.post_disable(&intel_dig_port->base,
+		intel_dig_port->base.post_disable(state, &intel_dig_port->base,
 						  old_crtc_state, NULL);
 
-	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
+	drm_dbg_kms(&dev_priv->drm, "active links %d\n",
+		    intel_dp->active_mst_links);
 }
 
-static void intel_mst_pre_pll_enable_dp(struct intel_encoder *encoder,
+ static void intel_mst_pre_pll_enable_dp(struct intel_atomic_state *state, 413 + struct intel_encoder *encoder, 422 414 const struct intel_crtc_state *pipe_config, 423 415 const struct drm_connector_state *conn_state) 424 416 { ··· 429 417 struct intel_dp *intel_dp = &intel_dig_port->dp; 430 418 431 419 if (intel_dp->active_mst_links == 0) 432 - intel_dig_port->base.pre_pll_enable(&intel_dig_port->base, 420 + intel_dig_port->base.pre_pll_enable(state, &intel_dig_port->base, 433 421 pipe_config, NULL); 434 422 } 435 423 436 - static void intel_mst_pre_enable_dp(struct intel_encoder *encoder, 424 + static void intel_mst_pre_enable_dp(struct intel_atomic_state *state, 425 + struct intel_encoder *encoder, 437 426 const struct intel_crtc_state *pipe_config, 438 427 const struct drm_connector_state *conn_state) 439 428 { ··· 458 445 INTEL_GEN(dev_priv) >= 12 && first_mst_stream && 459 446 !intel_dp_mst_is_master_trans(pipe_config)); 460 447 461 - DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links); 448 + drm_dbg_kms(&dev_priv->drm, "active links %d\n", 449 + intel_dp->active_mst_links); 462 450 463 451 if (first_mst_stream) 464 452 intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON); ··· 467 453 drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true); 468 454 469 455 if (first_mst_stream) 470 - intel_dig_port->base.pre_enable(&intel_dig_port->base, 456 + intel_dig_port->base.pre_enable(state, &intel_dig_port->base, 471 457 pipe_config, NULL); 472 458 473 459 ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr, ··· 475 461 pipe_config->pbn, 476 462 pipe_config->dp_m_n.tu); 477 463 if (!ret) 478 - DRM_ERROR("failed to allocate vcpi\n"); 464 + drm_err(&dev_priv->drm, "failed to allocate vcpi\n"); 479 465 480 466 intel_dp->active_mst_links++; 481 467 temp = intel_de_read(dev_priv, intel_dp->regs.dp_tp_status); ··· 498 484 intel_dp_set_m_n(pipe_config, M1_N1); 499 485 } 500 486 501 - static void intel_mst_enable_dp(struct intel_encoder *encoder, 487 + 
static void intel_mst_enable_dp(struct intel_atomic_state *state, 488 + struct intel_encoder *encoder, 502 489 const struct intel_crtc_state *pipe_config, 503 490 const struct drm_connector_state *conn_state) 504 491 { ··· 514 499 515 500 intel_crtc_vblank_on(pipe_config); 516 501 517 - DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links); 502 + drm_dbg_kms(&dev_priv->drm, "active links %d\n", 503 + intel_dp->active_mst_links); 518 504 519 505 if (intel_de_wait_for_set(dev_priv, intel_dp->regs.dp_tp_status, 520 506 DP_TP_STATUS_ACT_SENT, 1)) 521 - DRM_ERROR("Timed out waiting for ACT sent\n"); 507 + drm_err(&dev_priv->drm, "Timed out waiting for ACT sent\n"); 522 508 523 509 drm_dp_check_act_status(&intel_dp->mst_mgr); 524 510 ··· 802 786 intel_encoder->pipe_mask = ~0; 803 787 804 788 intel_encoder->compute_config = intel_dp_mst_compute_config; 789 + intel_encoder->compute_config_late = intel_dp_mst_compute_config_late; 805 790 intel_encoder->disable = intel_mst_disable_dp; 806 791 intel_encoder->post_disable = intel_mst_post_disable_dp; 807 792 intel_encoder->pre_pll_enable = intel_mst_pre_pll_enable_dp;
+5 -4
drivers/gpu/drm/i915/display/intel_dsi.c
···
 int intel_dsi_get_modes(struct drm_connector *connector)
 {
+	struct drm_i915_private *i915 = to_i915(connector->dev);
 	struct intel_connector *intel_connector = to_intel_connector(connector);
 	struct drm_display_mode *mode;

-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&i915->drm, "\n");

 	if (!intel_connector->panel.fixed_mode) {
-		DRM_DEBUG_KMS("no fixed mode\n");
+		drm_dbg_kms(&i915->drm, "no fixed mode\n");
 		return 0;
 	}

 	mode = drm_mode_duplicate(connector->dev,
 				  intel_connector->panel.fixed_mode);
 	if (!mode) {
-		DRM_DEBUG_KMS("drm_mode_duplicate failed\n");
+		drm_dbg_kms(&i915->drm, "drm_mode_duplicate failed\n");
 		return 0;
 	}
···
 	const struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;

-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");

 	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
 		return MODE_NO_DBLESCAN;
+5 -6
drivers/gpu/drm/i915/display/intel_dsi_vbt.c
···
 static const u8 *mipi_exec_i2c(struct intel_dsi *intel_dsi, const u8 *data)
 {
-	struct drm_device *drm_dev = intel_dsi->base.base.dev;
-	struct device *dev = &drm_dev->pdev->dev;
+	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
 	struct i2c_adapter *adapter;
 	struct i2c_msg msg;
 	int ret;
···
 	adapter = i2c_get_adapter(intel_dsi->i2c_bus_num);
 	if (!adapter) {
-		DRM_DEV_ERROR(dev, "Cannot find a valid i2c bus for xfer\n");
+		drm_err(&i915->drm, "Cannot find a valid i2c bus for xfer\n");
 		goto err_bus;
 	}
···
 	ret = i2c_transfer(adapter, &msg, 1);
 	if (ret < 0)
-		DRM_DEV_ERROR(dev,
-			      "Failed to xfer payload of size (%u) to reg (%u)\n",
-			      payload_size, reg_offset);
+		drm_err(&i915->drm,
+			"Failed to xfer payload of size (%u) to reg (%u)\n",
+			payload_size, reg_offset);

 	kfree(payload_data);
 err_alloc:
+6 -3
drivers/gpu/drm/i915/display/intel_dvo.c
···
 	pipe_config->hw.adjusted_mode.crtc_clock = pipe_config->port_clock;
 }

-static void intel_disable_dvo(struct intel_encoder *encoder,
+static void intel_disable_dvo(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *old_crtc_state,
 			      const struct drm_connector_state *old_conn_state)
 {
···
 	intel_de_read(dev_priv, dvo_reg);
 }

-static void intel_enable_dvo(struct intel_encoder *encoder,
+static void intel_enable_dvo(struct intel_atomic_state *state,
+			     struct intel_encoder *encoder,
 			     const struct intel_crtc_state *pipe_config,
 			     const struct drm_connector_state *conn_state)
 {
···
 	return 0;
 }

-static void intel_dvo_pre_enable(struct intel_encoder *encoder,
+static void intel_dvo_pre_enable(struct intel_atomic_state *state,
+				 struct intel_encoder *encoder,
 				 const struct intel_crtc_state *pipe_config,
 				 const struct drm_connector_state *conn_state)
 {
+65 -19
drivers/gpu/drm/i915/display/intel_fbc.c
···
 	/* Wait for compressing bit to clear */
 	if (intel_de_wait_for_clear(dev_priv, FBC_STATUS,
 				    FBC_STAT_COMPRESSING, 10)) {
-		DRM_DEBUG_KMS("FBC idle timed out\n");
+		drm_dbg_kms(&dev_priv->drm, "FBC idle timed out\n");
 		return;
 	}
 }
···
 	if (!ret)
 		goto err_llb;
 	else if (ret > 1) {
-		DRM_INFO("Reducing the compressed framebuffer size. This may lead to less power savings than a non-reduced-size. Try to increase stolen memory size if available in BIOS.\n");
+		drm_info(&dev_priv->drm,
+			 "Reducing the compressed framebuffer size. This may lead to less power savings than a non-reduced-size. Try to increase stolen memory size if available in BIOS.\n");
 	}
···
 			       dev_priv->dsm.start + compressed_llb->start);
 	}

-	DRM_DEBUG_KMS("reserved %llu bytes of contiguous stolen space for FBC, threshold: %d\n",
-		      fbc->compressed_fb.size, fbc->threshold);
+	drm_dbg_kms(&dev_priv->drm,
+		    "reserved %llu bytes of contiguous stolen space for FBC, threshold: %d\n",
+		    fbc->compressed_fb.size, fbc->threshold);

 	return 0;
···
 	i915_gem_stolen_remove_node(dev_priv, &fbc->compressed_fb);
 err_llb:
 	if (drm_mm_initialized(&dev_priv->mm.stolen))
-		pr_info_once("drm: not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n", size);
+		drm_info_once(&dev_priv->drm, "not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n", size);
 	return -ENOSPC;
 }
···
 		}
 	}

+static bool rotation_is_valid(struct drm_i915_private *dev_priv,
+			      u32 pixel_format, unsigned int rotation)
+{
+	if (INTEL_GEN(dev_priv) >= 9 && pixel_format == DRM_FORMAT_RGB565 &&
+	    drm_rotation_90_or_270(rotation))
+		return false;
+	else if (INTEL_GEN(dev_priv) <= 4 && !IS_G4X(dev_priv) &&
+		 rotation != DRM_MODE_ROTATE_0)
+		return false;
+
+	return true;
+}
+
 /*
  * For some reason, the hardware tracking starts looking at whatever we
  * programmed as the display plane base address register. It does not look at
···
 		effective_h += fbc->state_cache.plane.adjusted_y;

 	return effective_w <= max_w && effective_h <= max_h;
+}
+
+static bool tiling_is_valid(struct drm_i915_private *dev_priv,
+			    uint64_t modifier)
+{
+	switch (modifier) {
+	case DRM_FORMAT_MOD_LINEAR:
+		if (INTEL_GEN(dev_priv) >= 9)
+			return true;
+		return false;
+	case I915_FORMAT_MOD_X_TILED:
+	case I915_FORMAT_MOD_Y_TILED:
+		return true;
+	default:
+		return false;
+	}
 }

 static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
···
 	cache->fb.format = fb->format;
 	cache->fb.stride = fb->pitches[0];
+	cache->fb.modifier = fb->modifier;

 	drm_WARN_ON(&dev_priv->drm, plane_state->flags & PLANE_HAS_FENCE &&
 		    !plane_state->vma->fence);
···
 		return false;
 	}

-	/* The use of a CPU fence is mandatory in order to detect writes
-	 * by the CPU to the scanout and trigger updates to the FBC.
+	/* The use of a CPU fence is one of two ways to detect writes by the
+	 * CPU to the scanout and trigger updates to the FBC.
+	 *
+	 * The other method is by software tracking (see
+	 * intel_fbc_invalidate/flush()), it will manually notify FBC and nuke
+	 * the current compressed buffer and recompress it.
 	 *
 	 * Note that is possible for a tiled surface to be unmappable (and
-	 * so have no fence associated with it) due to aperture constaints
+	 * so have no fence associated with it) due to aperture constraints
 	 * at the time of pinning.
 	 *
 	 * FIXME with 90/270 degree rotation we should use the fence on
 	 * the normal GTT view (the rotated view doesn't even have a
 	 * fence). Would need changes to the FBC fence Y offset as well.
-	 * For now this will effecively disable FBC with 90/270 degree
+	 * For now this will effectively disable FBC with 90/270 degree
 	 * rotation.
 	 */
-	if (cache->fence_id < 0) {
+	if (INTEL_GEN(dev_priv) < 9 && cache->fence_id < 0) {
 		fbc->no_fbc_reason = "framebuffer not tiled or fenced";
 		return false;
 	}
-	if (INTEL_GEN(dev_priv) <= 4 && !IS_G4X(dev_priv) &&
-	    cache->plane.rotation != DRM_MODE_ROTATE_0) {
+
+	if (!rotation_is_valid(dev_priv, cache->fb.format->format,
+			       cache->plane.rotation)) {
 		fbc->no_fbc_reason = "rotation unsupported";
+		return false;
+	}
+
+	if (!tiling_is_valid(dev_priv, cache->fb.modifier)) {
+		fbc->no_fbc_reason = "tiling unsupported";
 		return false;
 	}
···
 	drm_WARN_ON(&dev_priv->drm, !fbc->crtc);
 	drm_WARN_ON(&dev_priv->drm, fbc->active);

-	DRM_DEBUG_KMS("Disabling FBC on pipe %c\n", pipe_name(crtc->pipe));
+	drm_dbg_kms(&dev_priv->drm, "Disabling FBC on pipe %c\n",
+		    pipe_name(crtc->pipe));

 	__intel_fbc_cleanup_cfb(dev_priv);
···
 	else
 		cache->gen9_wa_cfb_stride = 0;

-	DRM_DEBUG_KMS("Enabling FBC on pipe %c\n", pipe_name(crtc->pipe));
+	drm_dbg_kms(&dev_priv->drm, "Enabling FBC on pipe %c\n",
+		    pipe_name(crtc->pipe));
 	fbc->no_fbc_reason = "FBC enabled but not active yet\n";

 	fbc->crtc = crtc;
···
 	if (fbc->underrun_detected || !fbc->crtc)
 		goto out;

-	DRM_DEBUG_KMS("Disabling FBC due to FIFO underrun.\n");
+	drm_dbg_kms(&dev_priv->drm, "Disabling FBC due to FIFO underrun.\n");
 	fbc->underrun_detected = true;

 	intel_fbc_deactivate(dev_priv, "FIFO underrun");
···
 		return ret;

 	if (dev_priv->fbc.underrun_detected) {
-		DRM_DEBUG_KMS("Re-allowing FBC after fifo underrun\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Re-allowing FBC after fifo underrun\n");
 		dev_priv->fbc.no_fbc_reason = "FIFO underrun cleared";
 	}
···
 	/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
 	if (intel_vtd_active() &&
 	    (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv))) {
-		DRM_INFO("Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
+		drm_info(&dev_priv->drm,
+			 "Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
 		return true;
 	}
···
 		mkwrite_device_info(dev_priv)->display.has_fbc = false;

 	i915_modparams.enable_fbc = intel_sanitize_fbc_option(dev_priv);
-	DRM_DEBUG_KMS("Sanitized enable_fbc value: %d\n",
-		      i915_modparams.enable_fbc);
+	drm_dbg_kms(&dev_priv->drm, "Sanitized enable_fbc value: %d\n",
+		    i915_modparams.enable_fbc);

 	if (!HAS_FBC(dev_priv)) {
 		fbc->no_fbc_reason = "unsupported by this chipset";
+55 -41
drivers/gpu/drm/i915/display/intel_fbdev.c
···
 	if (IS_ERR(obj))
 		obj = i915_gem_object_create_shmem(dev_priv, size);
 	if (IS_ERR(obj)) {
-		DRM_ERROR("failed to allocate framebuffer\n");
+		drm_err(&dev_priv->drm, "failed to allocate framebuffer\n");
 		return PTR_ERR(obj);
 	}
···
 	if (intel_fb &&
 	    (sizes->fb_width > intel_fb->base.width ||
 	     sizes->fb_height > intel_fb->base.height)) {
-		DRM_DEBUG_KMS("BIOS fb too small (%dx%d), we require (%dx%d),"
-			      " releasing it\n",
-			      intel_fb->base.width, intel_fb->base.height,
-			      sizes->fb_width, sizes->fb_height);
+		drm_dbg_kms(&dev_priv->drm,
+			    "BIOS fb too small (%dx%d), we require (%dx%d),"
+			    " releasing it\n",
+			    intel_fb->base.width, intel_fb->base.height,
+			    sizes->fb_width, sizes->fb_height);
 		drm_framebuffer_put(&intel_fb->base);
 		intel_fb = ifbdev->fb = NULL;
 	}
 	if (!intel_fb || drm_WARN_ON(dev, !intel_fb_obj(&intel_fb->base))) {
-		DRM_DEBUG_KMS("no BIOS fb, allocating a new one\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "no BIOS fb, allocating a new one\n");
 		ret = intelfb_alloc(helper, sizes);
 		if (ret)
 			return ret;
 		intel_fb = ifbdev->fb;
 	} else {
-		DRM_DEBUG_KMS("re-using BIOS fb\n");
+		drm_dbg_kms(&dev_priv->drm, "re-using BIOS fb\n");
 		prealloc = true;
 		sizes->fb_width = intel_fb->base.width;
 		sizes->fb_height = intel_fb->base.height;
···
 	info = drm_fb_helper_alloc_fbi(helper);
 	if (IS_ERR(info)) {
-		DRM_ERROR("Failed to allocate fb_info\n");
+		drm_err(&dev_priv->drm, "Failed to allocate fb_info\n");
 		ret = PTR_ERR(info);
 		goto out_unpin;
 	}
···
 	vaddr = i915_vma_pin_iomap(vma);
 	if (IS_ERR(vaddr)) {
-		DRM_ERROR("Failed to remap framebuffer into virtual memory\n");
+		drm_err(&dev_priv->drm,
+			"Failed to remap framebuffer into virtual memory\n");
 		ret = PTR_ERR(vaddr);
 		goto out_unpin;
 	}
···
 	/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */

-	DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x\n",
-		      ifbdev->fb->base.width, ifbdev->fb->base.height,
-		      i915_ggtt_offset(vma));
+	drm_dbg_kms(&dev_priv->drm, "allocated %dx%d fb: 0x%08x\n",
+		    ifbdev->fb->base.width, ifbdev->fb->base.height,
+		    i915_ggtt_offset(vma));
 	ifbdev->vma = vma;
 	ifbdev->vma_flags = flags;
···
 static bool intel_fbdev_init_bios(struct drm_device *dev,
 				  struct intel_fbdev *ifbdev)
 {
+	struct drm_i915_private *i915 = to_i915(dev);
 	struct intel_framebuffer *fb = NULL;
 	struct drm_crtc *crtc;
 	struct intel_crtc *intel_crtc;
···
 		intel_crtc = to_intel_crtc(crtc);

 		if (!crtc->state->active || !obj) {
-			DRM_DEBUG_KMS("pipe %c not active or no fb, skipping\n",
-				      pipe_name(intel_crtc->pipe));
+			drm_dbg_kms(&i915->drm,
+				    "pipe %c not active or no fb, skipping\n",
+				    pipe_name(intel_crtc->pipe));
 			continue;
 		}

 		if (obj->base.size > max_size) {
-			DRM_DEBUG_KMS("found possible fb from plane %c\n",
-				      pipe_name(intel_crtc->pipe));
+			drm_dbg_kms(&i915->drm,
+				    "found possible fb from plane %c\n",
+				    pipe_name(intel_crtc->pipe));
 			fb = to_intel_framebuffer(crtc->primary->state->fb);
 			max_size = obj->base.size;
 		}
 	}

 	if (!fb) {
-		DRM_DEBUG_KMS("no active fbs found, not using BIOS config\n");
+		drm_dbg_kms(&i915->drm,
+			    "no active fbs found, not using BIOS config\n");
 		goto out;
 	}
···
 		intel_crtc = to_intel_crtc(crtc);

 		if (!crtc->state->active) {
-			DRM_DEBUG_KMS("pipe %c not active, skipping\n",
-				      pipe_name(intel_crtc->pipe));
+			drm_dbg_kms(&i915->drm,
+				    "pipe %c not active, skipping\n",
+				    pipe_name(intel_crtc->pipe));
 			continue;
 		}

-		DRM_DEBUG_KMS("checking plane %c for BIOS fb\n",
-			      pipe_name(intel_crtc->pipe));
+		drm_dbg_kms(&i915->drm, "checking plane %c for BIOS fb\n",
+			    pipe_name(intel_crtc->pipe));

 		/*
 		 * See if the plane fb we found above will fit on this
···
 		cur_size = crtc->state->adjusted_mode.crtc_hdisplay;
 		cur_size = cur_size * fb->base.format->cpp[0];
 		if (fb->base.pitches[0] < cur_size) {
-			DRM_DEBUG_KMS("fb not wide enough for plane %c (%d vs %d)\n",
-				      pipe_name(intel_crtc->pipe),
-				      cur_size, fb->base.pitches[0]);
+			drm_dbg_kms(&i915->drm,
+				    "fb not wide enough for plane %c (%d vs %d)\n",
+				    pipe_name(intel_crtc->pipe),
+				    cur_size, fb->base.pitches[0]);
 			fb = NULL;
 			break;
 		}
···
 		cur_size = crtc->state->adjusted_mode.crtc_vdisplay;
 		cur_size = intel_fb_align_height(&fb->base, 0, cur_size);
 		cur_size *= fb->base.pitches[0];
-		DRM_DEBUG_KMS("pipe %c area: %dx%d, bpp: %d, size: %d\n",
-			      pipe_name(intel_crtc->pipe),
-			      crtc->state->adjusted_mode.crtc_hdisplay,
-			      crtc->state->adjusted_mode.crtc_vdisplay,
-			      fb->base.format->cpp[0] * 8,
-			      cur_size);
+		drm_dbg_kms(&i915->drm,
+			    "pipe %c area: %dx%d, bpp: %d, size: %d\n",
+			    pipe_name(intel_crtc->pipe),
+			    crtc->state->adjusted_mode.crtc_hdisplay,
+			    crtc->state->adjusted_mode.crtc_vdisplay,
+			    fb->base.format->cpp[0] * 8,
+			    cur_size);

 		if (cur_size > max_size) {
-			DRM_DEBUG_KMS("fb not big enough for plane %c (%d vs %d)\n",
-				      pipe_name(intel_crtc->pipe),
-				      cur_size, max_size);
+			drm_dbg_kms(&i915->drm,
+				    "fb not big enough for plane %c (%d vs %d)\n",
+				    pipe_name(intel_crtc->pipe),
+				    cur_size, max_size);
 			fb = NULL;
 			break;
 		}

-		DRM_DEBUG_KMS("fb big enough for plane %c (%d >= %d)\n",
-			      pipe_name(intel_crtc->pipe),
-			      max_size, cur_size);
+		drm_dbg_kms(&i915->drm,
+			    "fb big enough for plane %c (%d >= %d)\n",
+			    pipe_name(intel_crtc->pipe),
+			    max_size, cur_size);
 	}

 	if (!fb) {
-		DRM_DEBUG_KMS("BIOS fb not suitable for all pipes, not using\n");
+		drm_dbg_kms(&i915->drm,
+			    "BIOS fb not suitable for all pipes, not using\n");
 		goto out;
 	}
···
 	}

-	DRM_DEBUG_KMS("using BIOS fb for initial console\n");
+	drm_dbg_kms(&i915->drm, "using BIOS fb for initial console\n");
 	return true;

 out:
···
 * processing, fbdev will perform a full connector reprobe if a hotplug event
 * was received while HPD was suspended.
 */
-static void intel_fbdev_hpd_set_suspend(struct intel_fbdev *ifbdev, int state)
+static void intel_fbdev_hpd_set_suspend(struct drm_i915_private *i915, int state)
 {
+	struct intel_fbdev *ifbdev = i915->fbdev;
 	bool send_hpd = false;

 	mutex_lock(&ifbdev->hpd_lock);
···
 	mutex_unlock(&ifbdev->hpd_lock);

 	if (send_hpd) {
-		DRM_DEBUG_KMS("Handling delayed fbcon HPD event\n");
+		drm_dbg_kms(&i915->drm, "Handling delayed fbcon HPD event\n");
 		drm_fb_helper_hotplug_event(&ifbdev->helper);
 	}
 }
···
 	drm_fb_helper_set_suspend(&ifbdev->helper, state);
 	console_unlock();

-	intel_fbdev_hpd_set_suspend(ifbdev, state);
+	intel_fbdev_hpd_set_suspend(dev_priv, state);
 }

 void intel_fbdev_output_poll_changed(struct drm_device *dev)
+3 -2
drivers/gpu/drm/i915/display/intel_global_state.c
···
 intel_atomic_get_global_obj_state(struct intel_atomic_state *state,
 				  struct intel_global_obj *obj)
 {
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
 	int index, num_objs, i;
 	size_t size;
 	struct __intel_global_objs_state *arr;
···
 	state->num_global_objs = num_objs;

-	DRM_DEBUG_ATOMIC("Added new global object %p state %p to %p\n",
-			 obj, obj_state, state);
+	drm_dbg_atomic(&i915->drm, "Added new global object %p state %p to %p\n",
+		       obj, obj_state, state);

 	return obj_state;
 }
+4 -2
drivers/gpu/drm/i915/display/intel_hdcp.c
···
 int hdcp2_propagate_stream_management_info(struct intel_connector *connector)
 {
 	struct intel_digital_port *intel_dig_port = intel_attached_dig_port(connector);
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	union {
 		struct hdcp2_rep_stream_manage stream_manage;
···
 	hdcp->seq_num_m++;

 	if (hdcp->seq_num_m > HDCP_2_2_SEQ_NUM_MAX) {
-		DRM_DEBUG_KMS("seq_num_m roll over.\n");
+		drm_dbg_kms(&i915->drm, "seq_num_m roll over.\n");
 		return -1;
 	}
···
 	return ret;
 }

-void intel_hdcp_update_pipe(struct intel_encoder *encoder,
+void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+			    struct intel_encoder *encoder,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state)
 {
+3 -1
drivers/gpu/drm/i915/display/intel_hdcp.h
···
 struct drm_connector;
 struct drm_connector_state;
 struct drm_i915_private;
+struct intel_atomic_state;
 struct intel_connector;
 struct intel_crtc_state;
 struct intel_encoder;
···
 int intel_hdcp_enable(struct intel_connector *connector,
 		      enum transcoder cpu_transcoder, u8 content_type);
 int intel_hdcp_disable(struct intel_connector *connector);
-void intel_hdcp_update_pipe(struct intel_encoder *encoder,
+void intel_hdcp_update_pipe(struct intel_atomic_state *state,
+			    struct intel_encoder *encoder,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state);
 bool is_hdcp_supported(struct drm_i915_private *dev_priv, enum port port);
+163 -93
drivers/gpu/drm/i915/display/intel_hdmi.c
···
 	/* see comment above for the reason for this offset */
 	ret = hdmi_infoframe_unpack(frame, buffer + 1, sizeof(buffer) - 1);
 	if (ret) {
-		DRM_DEBUG_KMS("Failed to unpack infoframe type 0x%02x\n", type);
+		drm_dbg_kms(encoder->base.dev,
+			    "Failed to unpack infoframe type 0x%02x\n", type);
 		return;
 	}

 	if (frame->any.type != type)
-		DRM_DEBUG_KMS("Found the wrong infoframe type 0x%x (expected 0x%02x)\n",
-			      frame->any.type, type);
+		drm_dbg_kms(encoder->base.dev,
+			    "Found the wrong infoframe type 0x%x (expected 0x%02x)\n",
+			    frame->any.type, type);
 }

 static bool
···
 	ret = drm_hdmi_infoframe_set_hdr_metadata(frame, conn_state);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("couldn't set HDR metadata in infoframe\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "couldn't set HDR metadata in infoframe\n");
 		return false;
 	}
···
 	if (!(val & VIDEO_DIP_ENABLE))
 		return;
 	if (port != (val & VIDEO_DIP_PORT_MASK)) {
-		DRM_DEBUG_KMS("video DIP still enabled on port %c\n",
-			      (val & VIDEO_DIP_PORT_MASK) >> 29);
+		drm_dbg_kms(&dev_priv->drm,
+			    "video DIP still enabled on port %c\n",
+			    (val & VIDEO_DIP_PORT_MASK) >> 29);
 		return;
 	}
 	val &= ~(VIDEO_DIP_ENABLE | VIDEO_DIP_ENABLE_AVI |
···
 	if (port != (val & VIDEO_DIP_PORT_MASK)) {
 		if (val & VIDEO_DIP_ENABLE) {
-			DRM_DEBUG_KMS("video DIP already enabled on port %c\n",
-				      (val & VIDEO_DIP_PORT_MASK) >> 29);
+			drm_dbg_kms(&dev_priv->drm,
+				    "video DIP already enabled on port %c\n",
+				    (val & VIDEO_DIP_PORT_MASK) >> 29);
 			return;
 		}
 		val &= ~VIDEO_DIP_PORT_MASK;
···
 	if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI)
 		return;

-	DRM_DEBUG_KMS("%s DP dual mode adaptor TMDS output\n",
-		      enable ? "Enabling" : "Disabling");
+	drm_dbg_kms(&dev_priv->drm, "%s DP dual mode adaptor TMDS output\n",
+		    enable ? "Enabling" : "Disabling");

 	drm_dp_dual_mode_set_tmds_output(hdmi->dp_dual_mode.type,
 					 adapter, enable);
···
 	ret = intel_hdmi_hdcp_write(intel_dig_port, DRM_HDCP_DDC_AN, an,
 				    DRM_HDCP_AN_LEN);
 	if (ret) {
-		DRM_DEBUG_KMS("Write An over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Write An over DDC failed (%d)\n",
+			    ret);
 		return ret;
 	}

 	ret = intel_gmbus_output_aksv(adapter);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("Failed to output aksv (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Failed to output aksv (%d)\n", ret);
 		return ret;
 	}
 	return 0;
···
 static int intel_hdmi_hdcp_read_bksv(struct intel_digital_port *intel_dig_port,
 				     u8 *bksv)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
+
 	int ret;
 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_BKSV, bksv,
 				   DRM_HDCP_KSV_LEN);
 	if (ret)
-		DRM_DEBUG_KMS("Read Bksv over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Read Bksv over DDC failed (%d)\n",
+			    ret);
 	return ret;
 }
···
 int intel_hdmi_hdcp_read_bstatus(struct intel_digital_port *intel_dig_port,
 				 u8 *bstatus)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
+
 	int ret;
 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_BSTATUS,
 				   bstatus, DRM_HDCP_BSTATUS_LEN);
 	if (ret)
-		DRM_DEBUG_KMS("Read bstatus over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Read bstatus over DDC failed (%d)\n",
+			    ret);
 	return ret;
 }
···
 int intel_hdmi_hdcp_repeater_present(struct intel_digital_port *intel_dig_port,
 				     bool *repeater_present)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
 	int ret;
 	u8 val;

 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_BCAPS, &val, 1);
 	if (ret) {
-		DRM_DEBUG_KMS("Read bcaps over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Read bcaps over DDC failed (%d)\n",
+			    ret);
 		return ret;
 	}
 	*repeater_present = val & DRM_HDCP_DDC_BCAPS_REPEATER_PRESENT;
···
 int intel_hdmi_hdcp_read_ri_prime(struct intel_digital_port *intel_dig_port,
 				  u8 *ri_prime)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
+
 	int ret;
 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_RI_PRIME,
 				   ri_prime, DRM_HDCP_RI_LEN);
 	if (ret)
-		DRM_DEBUG_KMS("Read Ri' over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Read Ri' over DDC failed (%d)\n",
+			    ret);
 	return ret;
 }
···
 int intel_hdmi_hdcp_read_ksv_ready(struct intel_digital_port *intel_dig_port,
 				   bool *ksv_ready)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
 	int ret;
 	u8 val;

 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_BCAPS, &val, 1);
 	if (ret) {
-		DRM_DEBUG_KMS("Read bcaps over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm, "Read bcaps over DDC failed (%d)\n",
+			    ret);
 		return ret;
 	}
 	*ksv_ready = val & DRM_HDCP_DDC_BCAPS_KSV_FIFO_READY;
···
 int intel_hdmi_hdcp_read_ksv_fifo(struct intel_digital_port *intel_dig_port,
 				  int num_downstream, u8 *ksv_fifo)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
 	int ret;
 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_KSV_FIFO,
 				   ksv_fifo, num_downstream * DRM_HDCP_KSV_LEN);
 	if (ret) {
-		DRM_DEBUG_KMS("Read ksv fifo over DDC failed (%d)\n", ret);
+		drm_dbg_kms(&i915->drm,
+			    "Read ksv fifo over DDC failed (%d)\n", ret);
 		return ret;
 	}
 	return 0;
···
 int intel_hdmi_hdcp_read_v_prime_part(struct intel_digital_port *intel_dig_port,
 				      int i, u32 *part)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
 	int ret;

 	if (i >= DRM_HDCP_V_PRIME_NUM_PARTS)
···
 	ret = intel_hdmi_hdcp_read(intel_dig_port, DRM_HDCP_DDC_V_PRIME(i),
 				   part, DRM_HDCP_V_PRIME_PART_LEN);
 	if (ret)
-		DRM_DEBUG_KMS("Read V'[%d] over DDC failed (%d)\n", i, ret);
+		drm_dbg_kms(&i915->drm, "Read V'[%d] over DDC failed (%d)\n",
+			    i, ret);
 	return ret;
 }
···
 	ret = intel_ddi_toggle_hdcp_signalling(&intel_dig_port->base, false);
 	if (ret) {
-		DRM_ERROR("Disable HDCP signalling failed (%d)\n", ret);
+		drm_err(&dev_priv->drm,
+			"Disable HDCP signalling failed (%d)\n", ret);
 		return ret;
 	}
 	ret = intel_ddi_toggle_hdcp_signalling(&intel_dig_port->base, true);
 	if (ret) {
-		DRM_ERROR("Enable HDCP signalling failed (%d)\n", ret);
+		drm_err(&dev_priv->drm,
+			"Enable HDCP signalling failed (%d)\n", ret);
 		return ret;
 	}
···
 	ret = intel_ddi_toggle_hdcp_signalling(&intel_dig_port->base, enable);
 	if (ret) {
-		DRM_ERROR("%s HDCP signalling failed (%d)\n",
-			  enable ? "Enable" : "Disable", ret);
+		drm_err(&dev_priv->drm, "%s HDCP signalling failed (%d)\n",
+			enable ? "Enable" : "Disable", ret);
 		return ret;
 	}
···
 	intel_de_write(i915, HDCP_RPRIME(i915, cpu_transcoder, port), ri.reg);

 	/* Wait for Ri prime match */
-	if (wait_for(intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, port)) &
+	if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, port)) &
+		      (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
 		     (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
-		DRM_ERROR("Ri' mismatch detected, link check failed (%x)\n",
-			  intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, port)));
+		drm_err(&i915->drm,
+			"Ri' mismatch detected, link check failed (%x)\n",
+			intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder,
+							port)));
 		return false;
 	}
 	return true;
···
 }

 static inline
-int hdcp2_detect_msg_availability(struct intel_digital_port *intel_digital_port,
+int hdcp2_detect_msg_availability(struct intel_digital_port *intel_dig_port,
 				  u8 msg_id, bool *msg_ready,
 				  ssize_t *msg_sz)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
 	u8 rx_status[HDCP_2_2_HDMI_RXSTATUS_LEN];
 	int ret;

-	ret = intel_hdmi_hdcp2_read_rx_status(intel_digital_port, rx_status);
+	ret = intel_hdmi_hdcp2_read_rx_status(intel_dig_port, rx_status);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("rx_status read failed. Err %d\n", ret);
+		drm_dbg_kms(&i915->drm, "rx_status read failed. Err %d\n", ret);
Err %d\n", 1602 + ret); 1629 1603 return ret; 1630 1604 } 1631 1605 ··· 1647 1617 intel_hdmi_hdcp2_wait_for_msg(struct intel_digital_port *intel_dig_port, 1648 1618 u8 msg_id, bool paired) 1649 1619 { 1620 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 1650 1621 bool msg_ready = false; 1651 1622 int timeout, ret; 1652 1623 ssize_t msg_sz = 0; ··· 1662 1631 !ret && msg_ready && msg_sz, timeout * 1000, 1663 1632 1000, 5 * 1000); 1664 1633 if (ret) 1665 - DRM_DEBUG_KMS("msg_id: %d, ret: %d, timeout: %d\n", 1666 - msg_id, ret, timeout); 1634 + drm_dbg_kms(&i915->drm, "msg_id: %d, ret: %d, timeout: %d\n", 1635 + msg_id, ret, timeout); 1667 1636 1668 1637 return ret ? ret : msg_sz; 1669 1638 } ··· 1682 1651 int intel_hdmi_hdcp2_read_msg(struct intel_digital_port *intel_dig_port, 1683 1652 u8 msg_id, void *buf, size_t size) 1684 1653 { 1654 + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev); 1685 1655 struct intel_hdmi *hdmi = &intel_dig_port->hdmi; 1686 1656 struct intel_hdcp *hdcp = &hdmi->attached_connector->hdcp; 1687 1657 unsigned int offset; ··· 1698 1666 * available buffer. 
1699 1667 */ 1700 1668 if (ret > size) { 1701 - DRM_DEBUG_KMS("msg_sz(%zd) is more than exp size(%zu)\n", 1702 - ret, size); 1669 + drm_dbg_kms(&i915->drm, 1670 + "msg_sz(%zd) is more than exp size(%zu)\n", 1671 + ret, size); 1703 1672 return -1; 1704 1673 } 1705 1674 1706 1675 offset = HDCP_2_2_HDMI_REG_RD_MSG_OFFSET; 1707 1676 ret = intel_hdmi_hdcp_read(intel_dig_port, offset, buf, ret); 1708 1677 if (ret) 1709 - DRM_DEBUG_KMS("Failed to read msg_id: %d(%zd)\n", msg_id, ret); 1678 + drm_dbg_kms(&i915->drm, "Failed to read msg_id: %d(%zd)\n", 1679 + msg_id, ret); 1710 1680 1711 1681 return ret; 1712 1682 } ··· 1904 1870 const struct intel_crtc_state *pipe_config, 1905 1871 const struct drm_connector_state *conn_state) 1906 1872 { 1873 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1907 1874 struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); 1908 1875 1909 - drm_WARN_ON(encoder->base.dev, !pipe_config->has_hdmi_sink); 1910 - DRM_DEBUG_DRIVER("Enabling HDMI audio on pipe %c\n", 1911 - pipe_name(crtc->pipe)); 1876 + drm_WARN_ON(&i915->drm, !pipe_config->has_hdmi_sink); 1877 + drm_dbg_kms(&i915->drm, "Enabling HDMI audio on pipe %c\n", 1878 + pipe_name(crtc->pipe)); 1912 1879 intel_audio_codec_enable(encoder, pipe_config, conn_state); 1913 1880 } 1914 1881 1915 - static void g4x_enable_hdmi(struct intel_encoder *encoder, 1882 + static void g4x_enable_hdmi(struct intel_atomic_state *state, 1883 + struct intel_encoder *encoder, 1916 1884 const struct intel_crtc_state *pipe_config, 1917 1885 const struct drm_connector_state *conn_state) 1918 1886 { ··· 1936 1900 intel_enable_hdmi_audio(encoder, pipe_config, conn_state); 1937 1901 } 1938 1902 1939 - static void ibx_enable_hdmi(struct intel_encoder *encoder, 1903 + static void ibx_enable_hdmi(struct intel_atomic_state *state, 1904 + struct intel_encoder *encoder, 1940 1905 const struct intel_crtc_state *pipe_config, 1941 1906 const struct drm_connector_state *conn_state) 1942 1907 { ··· 1988 
1951 intel_enable_hdmi_audio(encoder, pipe_config, conn_state); 1989 1952 } 1990 1953 1991 - static void cpt_enable_hdmi(struct intel_encoder *encoder, 1954 + static void cpt_enable_hdmi(struct intel_atomic_state *state, 1955 + struct intel_encoder *encoder, 1992 1956 const struct intel_crtc_state *pipe_config, 1993 1957 const struct drm_connector_state *conn_state) 1994 1958 { ··· 2042 2004 intel_enable_hdmi_audio(encoder, pipe_config, conn_state); 2043 2005 } 2044 2006 2045 - static void vlv_enable_hdmi(struct intel_encoder *encoder, 2007 + static void vlv_enable_hdmi(struct intel_atomic_state *state, 2008 + struct intel_encoder *encoder, 2046 2009 const struct intel_crtc_state *pipe_config, 2047 2010 const struct drm_connector_state *conn_state) 2048 2011 { 2049 2012 } 2050 2013 2051 - static void intel_disable_hdmi(struct intel_encoder *encoder, 2014 + static void intel_disable_hdmi(struct intel_atomic_state *state, 2015 + struct intel_encoder *encoder, 2052 2016 const struct intel_crtc_state *old_crtc_state, 2053 2017 const struct drm_connector_state *old_conn_state) 2054 2018 { ··· 2108 2068 intel_dp_dual_mode_set_tmds_output(intel_hdmi, false); 2109 2069 } 2110 2070 2111 - static void g4x_disable_hdmi(struct intel_encoder *encoder, 2071 + static void g4x_disable_hdmi(struct intel_atomic_state *state, 2072 + struct intel_encoder *encoder, 2112 2073 const struct intel_crtc_state *old_crtc_state, 2113 2074 const struct drm_connector_state *old_conn_state) 2114 2075 { ··· 2117 2076 intel_audio_codec_disable(encoder, 2118 2077 old_crtc_state, old_conn_state); 2119 2078 2120 - intel_disable_hdmi(encoder, old_crtc_state, old_conn_state); 2079 + intel_disable_hdmi(state, encoder, old_crtc_state, old_conn_state); 2121 2080 } 2122 2081 2123 - static void pch_disable_hdmi(struct intel_encoder *encoder, 2082 + static void pch_disable_hdmi(struct intel_atomic_state *state, 2083 + struct intel_encoder *encoder, 2124 2084 const struct intel_crtc_state *old_crtc_state, 2125 
2085 const struct drm_connector_state *old_conn_state) 2126 2086 { ··· 2130 2088 old_crtc_state, old_conn_state); 2131 2089 } 2132 2090 2133 - static void pch_post_disable_hdmi(struct intel_encoder *encoder, 2091 + static void pch_post_disable_hdmi(struct intel_atomic_state *state, 2092 + struct intel_encoder *encoder, 2134 2093 const struct intel_crtc_state *old_crtc_state, 2135 2094 const struct drm_connector_state *old_conn_state) 2136 2095 { 2137 - intel_disable_hdmi(encoder, old_crtc_state, old_conn_state); 2096 + intel_disable_hdmi(state, encoder, old_crtc_state, old_conn_state); 2138 2097 } 2139 2098 2140 2099 static int intel_hdmi_source_max_tmds_clock(struct intel_encoder *encoder) ··· 2332 2289 intel_hdmi_ycbcr420_config(struct drm_connector *connector, 2333 2290 struct intel_crtc_state *config) 2334 2291 { 2292 + struct drm_i915_private *i915 = to_i915(connector->dev); 2335 2293 struct intel_crtc *intel_crtc = to_intel_crtc(config->uapi.crtc); 2336 2294 2337 2295 if (!connector->ycbcr_420_allowed) { 2338 - DRM_ERROR("Platform doesn't support YCBCR420 output\n"); 2296 + drm_err(&i915->drm, 2297 + "Platform doesn't support YCBCR420 output\n"); 2339 2298 return false; 2340 2299 } 2341 2300 ··· 2345 2300 2346 2301 /* YCBCR 420 output conversion needs a scaler */ 2347 2302 if (skl_update_scaler_crtc(config)) { 2348 - DRM_DEBUG_KMS("Scaler allocation for output failed\n"); 2303 + drm_dbg_kms(&i915->drm, 2304 + "Scaler allocation for output failed\n"); 2349 2305 return false; 2350 2306 } 2351 2307 ··· 2387 2341 static int intel_hdmi_compute_clock(struct intel_encoder *encoder, 2388 2342 struct intel_crtc_state *crtc_state) 2389 2343 { 2344 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2390 2345 struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder); 2391 2346 const struct drm_display_mode *adjusted_mode = 2392 2347 &crtc_state->hw.adjusted_mode; ··· 2412 2365 if (crtc_state->pipe_bpp > bpc * 3) 2413 2366 crtc_state->pipe_bpp = bpc * 3; 2414 
2367 2415 - DRM_DEBUG_KMS("picking %d bpc for HDMI output (pipe bpp: %d)\n", 2416 - bpc, crtc_state->pipe_bpp); 2368 + drm_dbg_kms(&i915->drm, 2369 + "picking %d bpc for HDMI output (pipe bpp: %d)\n", 2370 + bpc, crtc_state->pipe_bpp); 2417 2371 2418 2372 if (hdmi_port_clock_valid(intel_hdmi, crtc_state->port_clock, 2419 2373 false, crtc_state->has_hdmi_sink) != MODE_OK) { 2420 - DRM_DEBUG_KMS("unsupported HDMI clock (%d kHz), rejecting mode\n", 2421 - crtc_state->port_clock); 2374 + drm_dbg_kms(&i915->drm, 2375 + "unsupported HDMI clock (%d kHz), rejecting mode\n", 2376 + crtc_state->port_clock); 2422 2377 return -EINVAL; 2423 2378 } 2424 2379 ··· 2483 2434 2484 2435 if (drm_mode_is_420_only(&connector->display_info, adjusted_mode)) { 2485 2436 if (!intel_hdmi_ycbcr420_config(connector, pipe_config)) { 2486 - DRM_ERROR("Can't support YCBCR420 output\n"); 2437 + drm_err(&dev_priv->drm, 2438 + "Can't support YCBCR420 output\n"); 2487 2439 return -EINVAL; 2488 2440 } 2489 2441 } ··· 2524 2474 } 2525 2475 } 2526 2476 2527 - intel_hdmi_compute_gcp_infoframe(encoder, pipe_config, conn_state); 2477 + intel_hdmi_compute_gcp_infoframe(encoder, pipe_config, 2478 + conn_state); 2528 2479 2529 2480 if (!intel_hdmi_compute_avi_infoframe(encoder, pipe_config, conn_state)) { 2530 - DRM_DEBUG_KMS("bad AVI infoframe\n"); 2481 + drm_dbg_kms(&dev_priv->drm, "bad AVI infoframe\n"); 2531 2482 return -EINVAL; 2532 2483 } 2533 2484 2534 2485 if (!intel_hdmi_compute_spd_infoframe(encoder, pipe_config, conn_state)) { 2535 - DRM_DEBUG_KMS("bad SPD infoframe\n"); 2486 + drm_dbg_kms(&dev_priv->drm, "bad SPD infoframe\n"); 2536 2487 return -EINVAL; 2537 2488 } 2538 2489 2539 2490 if (!intel_hdmi_compute_hdmi_infoframe(encoder, pipe_config, conn_state)) { 2540 - DRM_DEBUG_KMS("bad HDMI infoframe\n"); 2491 + drm_dbg_kms(&dev_priv->drm, "bad HDMI infoframe\n"); 2541 2492 return -EINVAL; 2542 2493 } 2543 2494 2544 2495 if (!intel_hdmi_compute_drm_infoframe(encoder, pipe_config, conn_state)) { 
2545 - DRM_DEBUG_KMS("bad DRM infoframe\n"); 2496 + drm_dbg_kms(&dev_priv->drm, "bad DRM infoframe\n"); 2546 2497 return -EINVAL; 2547 2498 } 2548 2499 ··· 2593 2542 */ 2594 2543 if (has_edid && !connector->override_edid && 2595 2544 intel_bios_is_port_dp_dual_mode(dev_priv, port)) { 2596 - DRM_DEBUG_KMS("Assuming DP dual mode adaptor presence based on VBT\n"); 2545 + drm_dbg_kms(&dev_priv->drm, 2546 + "Assuming DP dual mode adaptor presence based on VBT\n"); 2597 2547 type = DRM_DP_DUAL_MODE_TYPE1_DVI; 2598 2548 } else { 2599 2549 type = DRM_DP_DUAL_MODE_NONE; ··· 2608 2556 hdmi->dp_dual_mode.max_tmds_clock = 2609 2557 drm_dp_dual_mode_max_tmds_clock(type, adapter); 2610 2558 2611 - DRM_DEBUG_KMS("DP dual mode adaptor (%s) detected (max TMDS clock: %d kHz)\n", 2612 - drm_dp_get_dual_mode_type_name(type), 2613 - hdmi->dp_dual_mode.max_tmds_clock); 2559 + drm_dbg_kms(&dev_priv->drm, 2560 + "DP dual mode adaptor (%s) detected (max TMDS clock: %d kHz)\n", 2561 + drm_dp_get_dual_mode_type_name(type), 2562 + hdmi->dp_dual_mode.max_tmds_clock); 2614 2563 } 2615 2564 2616 2565 static bool ··· 2631 2578 edid = drm_get_edid(connector, i2c); 2632 2579 2633 2580 if (!edid && !intel_gmbus_is_forced_bit(i2c)) { 2634 - DRM_DEBUG_KMS("HDMI GMBUS EDID read failed, retry using GPIO bit-banging\n"); 2581 + drm_dbg_kms(&dev_priv->drm, 2582 + "HDMI GMBUS EDID read failed, retry using GPIO bit-banging\n"); 2635 2583 intel_gmbus_force_bit(i2c, true); 2636 2584 edid = drm_get_edid(connector, i2c); 2637 2585 intel_gmbus_force_bit(i2c, false); ··· 2664 2610 struct intel_encoder *encoder = &hdmi_to_dig_port(intel_hdmi)->base; 2665 2611 intel_wakeref_t wakeref; 2666 2612 2667 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", 2668 - connector->base.id, connector->name); 2613 + drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n", 2614 + connector->base.id, connector->name); 2669 2615 2670 2616 wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS); 2671 2617 ··· 2696 2642 static void 2697 2643 
intel_hdmi_force(struct drm_connector *connector) 2698 2644 { 2699 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", 2700 - connector->base.id, connector->name); 2645 + struct drm_i915_private *i915 = to_i915(connector->dev); 2646 + 2647 + drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s]\n", 2648 + connector->base.id, connector->name); 2701 2649 2702 2650 intel_hdmi_unset_edid(connector); 2703 2651 ··· 2720 2664 return intel_connector_update_modes(connector, edid); 2721 2665 } 2722 2666 2723 - static void intel_hdmi_pre_enable(struct intel_encoder *encoder, 2667 + static void intel_hdmi_pre_enable(struct intel_atomic_state *state, 2668 + struct intel_encoder *encoder, 2724 2669 const struct intel_crtc_state *pipe_config, 2725 2670 const struct drm_connector_state *conn_state) 2726 2671 { ··· 2735 2678 pipe_config, conn_state); 2736 2679 } 2737 2680 2738 - static void vlv_hdmi_pre_enable(struct intel_encoder *encoder, 2681 + static void vlv_hdmi_pre_enable(struct intel_atomic_state *state, 2682 + struct intel_encoder *encoder, 2739 2683 const struct intel_crtc_state *pipe_config, 2740 2684 const struct drm_connector_state *conn_state) 2741 2685 { ··· 2753 2695 pipe_config->has_infoframe, 2754 2696 pipe_config, conn_state); 2755 2697 2756 - g4x_enable_hdmi(encoder, pipe_config, conn_state); 2698 + g4x_enable_hdmi(state, encoder, pipe_config, conn_state); 2757 2699 2758 2700 vlv_wait_port_ready(dev_priv, dport, 0x0); 2759 2701 } 2760 2702 2761 - static void vlv_hdmi_pre_pll_enable(struct intel_encoder *encoder, 2703 + static void vlv_hdmi_pre_pll_enable(struct intel_atomic_state *state, 2704 + struct intel_encoder *encoder, 2762 2705 const struct intel_crtc_state *pipe_config, 2763 2706 const struct drm_connector_state *conn_state) 2764 2707 { ··· 2768 2709 vlv_phy_pre_pll_enable(encoder, pipe_config); 2769 2710 } 2770 2711 2771 - static void chv_hdmi_pre_pll_enable(struct intel_encoder *encoder, 2712 + static void chv_hdmi_pre_pll_enable(struct intel_atomic_state *state, 2713 + 
struct intel_encoder *encoder, 2772 2714 const struct intel_crtc_state *pipe_config, 2773 2715 const struct drm_connector_state *conn_state) 2774 2716 { ··· 2778 2718 chv_phy_pre_pll_enable(encoder, pipe_config); 2779 2719 } 2780 2720 2781 - static void chv_hdmi_post_pll_disable(struct intel_encoder *encoder, 2721 + static void chv_hdmi_post_pll_disable(struct intel_atomic_state *state, 2722 + struct intel_encoder *encoder, 2782 2723 const struct intel_crtc_state *old_crtc_state, 2783 2724 const struct drm_connector_state *old_conn_state) 2784 2725 { 2785 2726 chv_phy_post_pll_disable(encoder, old_crtc_state); 2786 2727 } 2787 2728 2788 - static void vlv_hdmi_post_disable(struct intel_encoder *encoder, 2729 + static void vlv_hdmi_post_disable(struct intel_atomic_state *state, 2730 + struct intel_encoder *encoder, 2789 2731 const struct intel_crtc_state *old_crtc_state, 2790 2732 const struct drm_connector_state *old_conn_state) 2791 2733 { ··· 2795 2733 vlv_phy_reset_lanes(encoder, old_crtc_state); 2796 2734 } 2797 2735 2798 - static void chv_hdmi_post_disable(struct intel_encoder *encoder, 2736 + static void chv_hdmi_post_disable(struct intel_atomic_state *state, 2737 + struct intel_encoder *encoder, 2799 2738 const struct intel_crtc_state *old_crtc_state, 2800 2739 const struct drm_connector_state *old_conn_state) 2801 2740 { ··· 2811 2748 vlv_dpio_put(dev_priv); 2812 2749 } 2813 2750 2814 - static void chv_hdmi_pre_enable(struct intel_encoder *encoder, 2751 + static void chv_hdmi_pre_enable(struct intel_atomic_state *state, 2752 + struct intel_encoder *encoder, 2815 2753 const struct intel_crtc_state *pipe_config, 2816 2754 const struct drm_connector_state *conn_state) 2817 2755 { ··· 2830 2766 pipe_config->has_infoframe, 2831 2767 pipe_config, conn_state); 2832 2768 2833 - g4x_enable_hdmi(encoder, pipe_config, conn_state); 2769 + g4x_enable_hdmi(state, encoder, pipe_config, conn_state); 2834 2770 2835 2771 vlv_wait_port_ready(dev_priv, dport, 0x0); 2836 2772 
··· 2849 2785 2850 2786 static void intel_hdmi_create_i2c_symlink(struct drm_connector *connector) 2851 2787 { 2788 + struct drm_i915_private *i915 = to_i915(connector->dev); 2852 2789 struct i2c_adapter *adapter = intel_hdmi_get_i2c_adapter(connector); 2853 2790 struct kobject *i2c_kobj = &adapter->dev.kobj; 2854 2791 struct kobject *connector_kobj = &connector->kdev->kobj; ··· 2857 2792 2858 2793 ret = sysfs_create_link(connector_kobj, i2c_kobj, i2c_kobj->name); 2859 2794 if (ret) 2860 - DRM_ERROR("Failed to create i2c symlink (%d)\n", ret); 2795 + drm_err(&i915->drm, "Failed to create i2c symlink (%d)\n", ret); 2861 2796 } 2862 2797 2863 2798 static void intel_hdmi_remove_i2c_symlink(struct drm_connector *connector) ··· 2986 2921 if (!sink_scrambling->supported) 2987 2922 return true; 2988 2923 2989 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] scrambling=%s, TMDS bit clock ratio=1/%d\n", 2990 - connector->base.id, connector->name, 2991 - yesno(scrambling), high_tmds_clock_ratio ? 40 : 10); 2924 + drm_dbg_kms(&dev_priv->drm, 2925 + "[CONNECTOR:%d:%s] scrambling=%s, TMDS bit clock ratio=1/%d\n", 2926 + connector->base.id, connector->name, 2927 + yesno(scrambling), high_tmds_clock_ratio ? 
40 : 10); 2992 2928 2993 2929 /* Set TMDS bit clock ratio to 1/40 or 1/10, and enable/disable scrambling */ 2994 2930 return drm_scdc_set_high_tmds_clock_ratio(adapter, ··· 3131 3065 3132 3066 ddc_pin = intel_bios_alternate_ddc_pin(encoder); 3133 3067 if (ddc_pin) { 3134 - DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (VBT)\n", 3135 - ddc_pin, port_name(port)); 3068 + drm_dbg_kms(&dev_priv->drm, 3069 + "Using DDC pin 0x%x for port %c (VBT)\n", 3070 + ddc_pin, port_name(port)); 3136 3071 return ddc_pin; 3137 3072 } 3138 3073 ··· 3150 3083 else 3151 3084 ddc_pin = g4x_port_to_ddc_pin(dev_priv, port); 3152 3085 3153 - DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (platform default)\n", 3154 - ddc_pin, port_name(port)); 3086 + drm_dbg_kms(&dev_priv->drm, 3087 + "Using DDC pin 0x%x for port %c (platform default)\n", 3088 + ddc_pin, port_name(port)); 3155 3089 3156 3090 return ddc_pin; 3157 3091 } ··· 3209 3141 enum port port = intel_encoder->port; 3210 3142 struct cec_connector_info conn_info; 3211 3143 3212 - DRM_DEBUG_KMS("Adding HDMI connector on [ENCODER:%d:%s]\n", 3213 - intel_encoder->base.base.id, intel_encoder->base.name); 3144 + drm_dbg_kms(&dev_priv->drm, 3145 + "Adding HDMI connector on [ENCODER:%d:%s]\n", 3146 + intel_encoder->base.base.id, intel_encoder->base.name); 3214 3147 3215 3148 if (INTEL_GEN(dev_priv) < 12 && drm_WARN_ON(dev, port == PORT_A)) 3216 3149 return; ··· 3255 3186 int ret = intel_hdcp_init(intel_connector, 3256 3187 &intel_hdmi_hdcp_shim); 3257 3188 if (ret) 3258 - DRM_DEBUG_KMS("HDCP init failed, skipping.\n"); 3189 + drm_dbg_kms(&dev_priv->drm, 3190 + "HDCP init failed, skipping.\n"); 3259 3191 } 3260 3192 3261 3193 /* For G4X desktop chip, PEG_BAND_GAP_DATA 3:0 must first be written ··· 3275 3205 cec_notifier_conn_register(dev->dev, port_identifier(port), 3276 3206 &conn_info); 3277 3207 if (!intel_hdmi->cec_notifier) 3278 - DRM_DEBUG_KMS("CEC notifier get failed\n"); 3208 + drm_dbg_kms(&dev_priv->drm, "CEC notifier get failed\n"); 3279 
3209 } 3280 3210 3281 3211 static enum intel_hotplug_state 3282 3212 intel_hdmi_hotplug(struct intel_encoder *encoder, 3283 - struct intel_connector *connector, bool irq_received) 3213 + struct intel_connector *connector) 3284 3214 { 3285 3215 enum intel_hotplug_state state; 3286 3216 3287 - state = intel_encoder_hotplug(encoder, connector, irq_received); 3217 + state = intel_encoder_hotplug(encoder, connector); 3288 3218 3289 3219 /* 3290 3220 * On many platforms the HDMI live state signal is known to be ··· 3298 3228 * time around we didn't detect any change in the sink's connection 3299 3229 * status. 3300 3230 */ 3301 - if (state == INTEL_HOTPLUG_UNCHANGED && irq_received) 3231 + if (state == INTEL_HOTPLUG_UNCHANGED && !connector->hotplug_retries) 3302 3232 state = INTEL_HOTPLUG_RETRY; 3303 3233 3304 3234 return state;
+11 -7
drivers/gpu/drm/i915/display/intel_hotplug.c
···
 
 enum intel_hotplug_state
 intel_encoder_hotplug(struct intel_encoder *encoder,
-		      struct intel_connector *connector,
-		      bool irq_received)
+		      struct intel_connector *connector)
 {
 	struct drm_device *dev = connector->base.dev;
 	enum drm_connector_status old_status;
···
 			struct intel_encoder *encoder =
 				intel_attached_encoder(connector);
 
-			drm_dbg_kms(&dev_priv->drm,
-				    "Connector %s (pin %i) received hotplug event.\n",
-				    connector->base.name, pin);
+			if (hpd_event_bits & hpd_bit)
+				connector->hotplug_retries = 0;
+			else
+				connector->hotplug_retries++;
 
-			switch (encoder->hotplug(encoder, connector,
-						 hpd_event_bits & hpd_bit)) {
+			drm_dbg_kms(&dev_priv->drm,
+				    "Connector %s (pin %i) received hotplug event. (retry %d)\n",
+				    connector->base.name, pin,
+				    connector->hotplug_retries);
+
+			switch (encoder->hotplug(encoder, connector)) {
 			case INTEL_HOTPLUG_UNCHANGED:
 				break;
 			case INTEL_HOTPLUG_CHANGED:
+1 -2
drivers/gpu/drm/i915/display/intel_hotplug.h
···
 
 void intel_hpd_poll_init(struct drm_i915_private *dev_priv);
 enum intel_hotplug_state intel_encoder_hotplug(struct intel_encoder *encoder,
-					       struct intel_connector *connector,
-					       bool irq_received);
+					       struct intel_connector *connector);
 void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 			   u32 pin_mask, u32 long_mask);
 void intel_hpd_init(struct drm_i915_private *dev_priv);
+14 -8
drivers/gpu/drm/i915/display/intel_lvds.c
···
 		       REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, pps->divider) |
 		       REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(pps->t4, 1000) + 1));
 }
 
-static void intel_pre_enable_lvds(struct intel_encoder *encoder,
+static void intel_pre_enable_lvds(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *pipe_config,
 				  const struct drm_connector_state *conn_state)
 {
···
 /*
  * Sets the power state for the panel.
  */
-static void intel_enable_lvds(struct intel_encoder *encoder,
+static void intel_enable_lvds(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *pipe_config,
 			      const struct drm_connector_state *conn_state)
 {
···
 	intel_panel_enable_backlight(pipe_config, conn_state);
 }
 
-static void intel_disable_lvds(struct intel_encoder *encoder,
+static void intel_disable_lvds(struct intel_atomic_state *state,
+			       struct intel_encoder *encoder,
 			       const struct intel_crtc_state *old_crtc_state,
 			       const struct drm_connector_state *old_conn_state)
 {
···
 	intel_de_posting_read(dev_priv, lvds_encoder->reg);
 }
 
-static void gmch_disable_lvds(struct intel_encoder *encoder,
+static void gmch_disable_lvds(struct intel_atomic_state *state,
+			      struct intel_encoder *encoder,
 			      const struct intel_crtc_state *old_crtc_state,
 			      const struct drm_connector_state *old_conn_state)
 
 {
 	intel_panel_disable_backlight(old_conn_state);
 
-	intel_disable_lvds(encoder, old_crtc_state, old_conn_state);
+	intel_disable_lvds(state, encoder, old_crtc_state, old_conn_state);
 }
 
-static void pch_disable_lvds(struct intel_encoder *encoder,
+static void pch_disable_lvds(struct intel_atomic_state *state,
+			     struct intel_encoder *encoder,
 			     const struct intel_crtc_state *old_crtc_state,
 			     const struct drm_connector_state *old_conn_state)
 {
 	intel_panel_disable_backlight(old_conn_state);
 }
 
-static void pch_post_disable_lvds(struct intel_encoder *encoder,
+static void pch_post_disable_lvds(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *old_crtc_state,
 				  const struct drm_connector_state *old_conn_state)
 {
-	intel_disable_lvds(encoder, old_crtc_state, old_conn_state);
+	intel_disable_lvds(state, encoder, old_crtc_state, old_conn_state);
 }
 
 static enum drm_mode_status
+1 -1
drivers/gpu/drm/i915/display/intel_overlay.c
···
 	if (!HAS_OVERLAY(dev_priv))
 		return;
 
-	engine = dev_priv->engine[RCS0];
+	engine = dev_priv->gt.engine[RCS0];
 	if (!engine || !engine->kernel_context)
 		return;
 
+13 -9
drivers/gpu/drm/i915/display/intel_panel.c
···
 intel_panel_actually_set_backlight(const struct drm_connector_state *conn_state, u32 level)
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_panel *panel = &connector->panel;
 
-	DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level);
+	drm_dbg_kms(&i915->drm, "set backlight PWM = %d\n", level);
 
 	level = intel_panel_compute_brightness(connector, level);
 	panel->backlight.set(conn_state, level);
···
 	 * another client is not activated.
 	 */
 	if (dev_priv->drm.switch_power_state == DRM_SWITCH_POWER_CHANGING) {
-		drm_dbg(&dev_priv->drm,
-			"Skipping backlight disable on vga switch\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Skipping backlight disable on vga switch\n");
 		return;
 	}
···
 
 	mutex_unlock(&dev_priv->backlight_lock);
 
-	drm_dbg(&dev_priv->drm, "get backlight PWM = %d\n", val);
+	drm_dbg_kms(&dev_priv->drm, "get backlight PWM = %d\n", val);
 	return val;
 }
···
 
 int intel_backlight_device_register(struct intel_connector *connector)
 {
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_panel *panel = &connector->panel;
 	struct backlight_properties props;
···
 					  &intel_backlight_device_ops, &props);
 
 	if (IS_ERR(panel->backlight.device)) {
-		DRM_ERROR("Failed to register backlight: %ld\n",
-			  PTR_ERR(panel->backlight.device));
+		drm_err(&i915->drm, "Failed to register backlight: %ld\n",
+			PTR_ERR(panel->backlight.device));
 		panel->backlight.device = NULL;
 		return -ENODEV;
 	}
 
-	DRM_DEBUG_KMS("Connector %s backlight sysfs interface registered\n",
-		      connector->base.name);
+	drm_dbg_kms(&i915->drm,
+		    "Connector %s backlight sysfs interface registered\n",
+		    connector->base.name);
 
 	return 0;
 }
···
 	return 0;
 }
 
-void intel_panel_update_backlight(struct intel_encoder *encoder,
+void intel_panel_update_backlight(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state)
 {
+2 -1
drivers/gpu/drm/i915/display/intel_panel.h
···
 			       enum pipe pipe);
 void intel_panel_enable_backlight(const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state);
-void intel_panel_update_backlight(struct intel_encoder *encoder,
+void intel_panel_update_backlight(struct intel_atomic_state *state,
+				  struct intel_encoder *encoder,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct drm_connector_state *conn_state);
 void intel_panel_disable_backlight(const struct drm_connector_state *old_conn_state);
+26 -21
drivers/gpu/drm/i915/display/intel_psr.c
···
 	intel_de_write(dev_priv, imr_reg, val);
 }
 
-static void psr_event_print(u32 val, bool psr2_enabled)
+static void psr_event_print(struct drm_i915_private *i915,
+			    u32 val, bool psr2_enabled)
 {
-	DRM_DEBUG_KMS("PSR exit events: 0x%x\n", val);
+	drm_dbg_kms(&i915->drm, "PSR exit events: 0x%x\n", val);
 	if (val & PSR_EVENT_PSR2_WD_TIMER_EXPIRE)
-		DRM_DEBUG_KMS("\tPSR2 watchdog timer expired\n");
+		drm_dbg_kms(&i915->drm, "\tPSR2 watchdog timer expired\n");
 	if ((val & PSR_EVENT_PSR2_DISABLED) && psr2_enabled)
-		DRM_DEBUG_KMS("\tPSR2 disabled\n");
+		drm_dbg_kms(&i915->drm, "\tPSR2 disabled\n");
 	if (val & PSR_EVENT_SU_DIRTY_FIFO_UNDERRUN)
-		DRM_DEBUG_KMS("\tSU dirty FIFO underrun\n");
+		drm_dbg_kms(&i915->drm, "\tSU dirty FIFO underrun\n");
 	if (val & PSR_EVENT_SU_CRC_FIFO_UNDERRUN)
-		DRM_DEBUG_KMS("\tSU CRC FIFO underrun\n");
+		drm_dbg_kms(&i915->drm, "\tSU CRC FIFO underrun\n");
 	if (val & PSR_EVENT_GRAPHICS_RESET)
-		DRM_DEBUG_KMS("\tGraphics reset\n");
+		drm_dbg_kms(&i915->drm, "\tGraphics reset\n");
 	if (val & PSR_EVENT_PCH_INTERRUPT)
-		DRM_DEBUG_KMS("\tPCH interrupt\n");
+		drm_dbg_kms(&i915->drm, "\tPCH interrupt\n");
 	if (val & PSR_EVENT_MEMORY_UP)
-		DRM_DEBUG_KMS("\tMemory up\n");
+		drm_dbg_kms(&i915->drm, "\tMemory up\n");
 	if (val & PSR_EVENT_FRONT_BUFFER_MODIFY)
-		DRM_DEBUG_KMS("\tFront buffer modification\n");
+		drm_dbg_kms(&i915->drm, "\tFront buffer modification\n");
 	if (val & PSR_EVENT_WD_TIMER_EXPIRE)
-		DRM_DEBUG_KMS("\tPSR watchdog timer expired\n");
+		drm_dbg_kms(&i915->drm, "\tPSR watchdog timer expired\n");
 	if (val & PSR_EVENT_PIPE_REGISTERS_UPDATE)
-		DRM_DEBUG_KMS("\tPIPE registers updated\n");
+		drm_dbg_kms(&i915->drm, "\tPIPE registers updated\n");
 	if (val & PSR_EVENT_REGISTER_UPDATE)
-		DRM_DEBUG_KMS("\tRegister updated\n");
+		drm_dbg_kms(&i915->drm, "\tRegister updated\n");
 	if (val & PSR_EVENT_HDCP_ENABLE)
-		DRM_DEBUG_KMS("\tHDCP enabled\n");
+		drm_dbg_kms(&i915->drm, "\tHDCP enabled\n");
 	if (val & PSR_EVENT_KVMR_SESSION_ENABLE)
-		DRM_DEBUG_KMS("\tKVMR session enabled\n");
+		drm_dbg_kms(&i915->drm, "\tKVMR session enabled\n");
 	if (val & PSR_EVENT_VBI_ENABLE)
-		DRM_DEBUG_KMS("\tVBI enabled\n");
+		drm_dbg_kms(&i915->drm, "\tVBI enabled\n");
 	if (val & PSR_EVENT_LPSP_MODE_EXIT)
-		DRM_DEBUG_KMS("\tLPSP mode exited\n");
+		drm_dbg_kms(&i915->drm, "\tLPSP mode exited\n");
 	if ((val & PSR_EVENT_PSR_DISABLE) && !psr2_enabled)
-		DRM_DEBUG_KMS("\tPSR disabled\n");
+		drm_dbg_kms(&i915->drm, "\tPSR disabled\n");
 }
 
 void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
···
 
 		intel_de_write(dev_priv, PSR_EVENT(cpu_transcoder),
 			       val);
-		psr_event_print(val, psr2_enabled);
+		psr_event_print(dev_priv, val, psr2_enabled);
 	}
 }
···
 
 static u8 intel_dp_get_sink_sync_latency(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 val = 8; /* assume the worst if we can't read the value */
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			      DP_SYNCHRONIZATION_LATENCY_IN_SINK, &val) == 1)
 		val &= DP_MAX_RESYNC_FRAME_COUNT_MASK;
 	else
-		DRM_DEBUG_KMS("Unable to get sink synchronization latency, assuming 8 frames\n");
+		drm_dbg_kms(&i915->drm,
+			    "Unable to get sink synchronization latency, assuming 8 frames\n");
 	return val;
 }
 
 static u16 intel_dp_get_su_x_granulartiy(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u16 val;
 	ssize_t r;
···
 
 	r = drm_dp_dpcd_read(&intel_dp->aux, DP_PSR2_SU_X_GRANULARITY, &val, 2);
 	if (r
!= 2) 280 - DRM_DEBUG_KMS("Unable to read DP_PSR2_SU_X_GRANULARITY\n"); 276 + drm_dbg_kms(&i915->drm, 277 + "Unable to read DP_PSR2_SU_X_GRANULARITY\n"); 281 278 282 279 /* 283 280 * Spec says that if the value read is 0 the default granularity should
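The psr_event_print() conversion above is an if-chain that decodes a register bitmask into one debug line per set bit. The same decode can be written table-driven; this is a standalone sketch with hypothetical event bits, not the kernel's PSR_EVENT_* definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical event bits standing in for the PSR_EVENT_* fields. */
#define EVT_WD_TIMER  (1u << 0)
#define EVT_GFX_RESET (1u << 1)
#define EVT_MEMORY_UP (1u << 2)

static const struct {
	uint32_t bit;
	const char *name;
} evt_names[] = {
	{ EVT_WD_TIMER,  "watchdog timer expired" },
	{ EVT_GFX_RESET, "graphics reset" },
	{ EVT_MEMORY_UP, "memory up" },
};

/* Print one line per set bit, as the if-chain in psr_event_print() does,
 * and return how many known events were decoded. */
static int decode_events(uint32_t val)
{
	int n = 0;

	printf("exit events: 0x%x\n", val);
	for (size_t i = 0; i < sizeof(evt_names) / sizeof(evt_names[0]); i++) {
		if (val & evt_names[i].bit) {
			printf("\t%s\n", evt_names[i].name);
			n++;
		}
	}
	return n;
}
```

The table form trades the explicit per-bit conditions for a single loop; either way unknown bits are silently ignored.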
+13 -9
drivers/gpu/drm/i915/display/intel_sdvo.c
··· 1430 1430 #undef UPDATE_PROPERTY 1431 1431 } 1432 1432 1433 - static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder, 1433 + static void intel_sdvo_pre_enable(struct intel_atomic_state *state, 1434 + struct intel_encoder *intel_encoder, 1434 1435 const struct intel_crtc_state *crtc_state, 1435 1436 const struct drm_connector_state *conn_state) 1436 1437 { ··· 1728 1727 SDVO_AUDIO_PRESENCE_DETECT); 1729 1728 } 1730 1729 1731 - static void intel_disable_sdvo(struct intel_encoder *encoder, 1730 + static void intel_disable_sdvo(struct intel_atomic_state *state, 1731 + struct intel_encoder *encoder, 1732 1732 const struct intel_crtc_state *old_crtc_state, 1733 1733 const struct drm_connector_state *conn_state) 1734 1734 { ··· 1777 1775 } 1778 1776 } 1779 1777 1780 - static void pch_disable_sdvo(struct intel_encoder *encoder, 1778 + static void pch_disable_sdvo(struct intel_atomic_state *state, 1779 + struct intel_encoder *encoder, 1781 1780 const struct intel_crtc_state *old_crtc_state, 1782 1781 const struct drm_connector_state *old_conn_state) 1783 1782 { 1784 1783 } 1785 1784 1786 - static void pch_post_disable_sdvo(struct intel_encoder *encoder, 1785 + static void pch_post_disable_sdvo(struct intel_atomic_state *state, 1786 + struct intel_encoder *encoder, 1787 1787 const struct intel_crtc_state *old_crtc_state, 1788 1788 const struct drm_connector_state *old_conn_state) 1789 1789 { 1790 - intel_disable_sdvo(encoder, old_crtc_state, old_conn_state); 1790 + intel_disable_sdvo(state, encoder, old_crtc_state, old_conn_state); 1791 1791 } 1792 1792 1793 - static void intel_enable_sdvo(struct intel_encoder *encoder, 1793 + static void intel_enable_sdvo(struct intel_atomic_state *state, 1794 + struct intel_encoder *encoder, 1794 1795 const struct intel_crtc_state *pipe_config, 1795 1796 const struct drm_connector_state *conn_state) 1796 1797 { ··· 1939 1934 1940 1935 static enum intel_hotplug_state 1941 1936 intel_sdvo_hotplug(struct intel_encoder 
*encoder, 1942 - struct intel_connector *connector, 1943 - bool irq_received) 1937 + struct intel_connector *connector) 1944 1938 { 1945 1939 intel_sdvo_enable_hotplug(encoder); 1946 1940 1947 - return intel_encoder_hotplug(encoder, connector, irq_received); 1941 + return intel_encoder_hotplug(encoder, connector); 1948 1942 } 1949 1943 1950 1944 static bool
+20 -5
drivers/gpu/drm/i915/display/intel_sprite.c
··· 2503 2503 DRM_FORMAT_YVYU, 2504 2504 DRM_FORMAT_UYVY, 2505 2505 DRM_FORMAT_VYUY, 2506 + DRM_FORMAT_XYUV8888, 2506 2507 }; 2507 2508 2508 2509 static const u32 skl_planar_formats[] = { ··· 2522 2521 DRM_FORMAT_UYVY, 2523 2522 DRM_FORMAT_VYUY, 2524 2523 DRM_FORMAT_NV12, 2524 + DRM_FORMAT_XYUV8888, 2525 2525 }; 2526 2526 2527 2527 static const u32 glk_planar_formats[] = { ··· 2541 2539 DRM_FORMAT_UYVY, 2542 2540 DRM_FORMAT_VYUY, 2543 2541 DRM_FORMAT_NV12, 2542 + DRM_FORMAT_XYUV8888, 2544 2543 DRM_FORMAT_P010, 2545 2544 DRM_FORMAT_P012, 2546 2545 DRM_FORMAT_P016, ··· 2565 2562 DRM_FORMAT_Y210, 2566 2563 DRM_FORMAT_Y212, 2567 2564 DRM_FORMAT_Y216, 2565 + DRM_FORMAT_XYUV8888, 2568 2566 DRM_FORMAT_XVYU2101010, 2569 2567 DRM_FORMAT_XVYU12_16161616, 2570 2568 DRM_FORMAT_XVYU16161616, ··· 2593 2589 DRM_FORMAT_Y210, 2594 2590 DRM_FORMAT_Y212, 2595 2591 DRM_FORMAT_Y216, 2592 + DRM_FORMAT_XYUV8888, 2596 2593 DRM_FORMAT_XVYU2101010, 2597 2594 DRM_FORMAT_XVYU12_16161616, 2598 2595 DRM_FORMAT_XVYU16161616, ··· 2625 2620 DRM_FORMAT_Y210, 2626 2621 DRM_FORMAT_Y212, 2627 2622 DRM_FORMAT_Y216, 2623 + DRM_FORMAT_XYUV8888, 2628 2624 DRM_FORMAT_XVYU2101010, 2629 2625 DRM_FORMAT_XVYU12_16161616, 2630 2626 DRM_FORMAT_XVYU16161616, ··· 2796 2790 case DRM_FORMAT_UYVY: 2797 2791 case DRM_FORMAT_VYUY: 2798 2792 case DRM_FORMAT_NV12: 2793 + case DRM_FORMAT_XYUV8888: 2799 2794 case DRM_FORMAT_P010: 2800 2795 case DRM_FORMAT_P012: 2801 2796 case DRM_FORMAT_P016: ··· 2824 2817 } 2825 2818 } 2826 2819 2827 - static bool gen12_plane_supports_mc_ccs(enum plane_id plane_id) 2820 + static bool gen12_plane_supports_mc_ccs(struct drm_i915_private *dev_priv, 2821 + enum plane_id plane_id) 2828 2822 { 2823 + /* Wa_14010477008:tgl[a0..c0] */ 2824 + if (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_C0)) 2825 + return false; 2826 + 2829 2827 return plane_id < PLANE_SPRITE4; 2830 2828 } 2831 2829 2832 2830 static bool gen12_plane_format_mod_supported(struct drm_plane *_plane, 2833 2831 u32 format, u64 
modifier) 2834 2832 { 2833 + struct drm_i915_private *dev_priv = to_i915(_plane->dev); 2835 2834 struct intel_plane *plane = to_intel_plane(_plane); 2836 2835 2837 2836 switch (modifier) { 2838 2837 case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS: 2839 - if (!gen12_plane_supports_mc_ccs(plane->id)) 2838 + if (!gen12_plane_supports_mc_ccs(dev_priv, plane->id)) 2840 2839 return false; 2841 2840 /* fall through */ 2842 2841 case DRM_FORMAT_MOD_LINEAR: ··· 2867 2854 case DRM_FORMAT_UYVY: 2868 2855 case DRM_FORMAT_VYUY: 2869 2856 case DRM_FORMAT_NV12: 2857 + case DRM_FORMAT_XYUV8888: 2870 2858 case DRM_FORMAT_P010: 2871 2859 case DRM_FORMAT_P012: 2872 2860 case DRM_FORMAT_P016: ··· 3012 2998 } 3013 2999 } 3014 3000 3015 - static const u64 *gen12_get_plane_modifiers(enum plane_id plane_id) 3001 + static const u64 *gen12_get_plane_modifiers(struct drm_i915_private *dev_priv, 3002 + enum plane_id plane_id) 3016 3003 { 3017 - if (gen12_plane_supports_mc_ccs(plane_id)) 3004 + if (gen12_plane_supports_mc_ccs(dev_priv, plane_id)) 3018 3005 return gen12_plane_format_modifiers_mc_ccs; 3019 3006 else 3020 3007 return gen12_plane_format_modifiers_rc_ccs; ··· 3085 3070 3086 3071 plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, plane_id); 3087 3072 if (INTEL_GEN(dev_priv) >= 12) { 3088 - modifiers = gen12_get_plane_modifiers(plane_id); 3073 + modifiers = gen12_get_plane_modifiers(dev_priv, plane_id); 3089 3074 plane_funcs = &gen12_plane_funcs; 3090 3075 } else { 3091 3076 if (plane->has_ccs)
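The gen12_plane_supports_mc_ccs() hunk above implements Wa_14010477008 by gating the MC CCS modifier on a Tigerlake stepping range (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_C0)). A standalone sketch of that inclusive-range workaround gate, with illustrative names and revision values:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the TGL_REVID_* stepping values the workaround brackets. */
enum { REVID_A0 = 0, REVID_B0 = 1, REVID_C0 = 2, REVID_D0 = 3 };

/* Inclusive range test, mirroring how IS_TGL_REVID() brackets a W/A. */
static bool revid_in_range(int rev, int since, int until)
{
	return rev >= since && rev <= until;
}

/* Mirror of the patched gen12_plane_supports_mc_ccs(): affected steppings
 * lose the multi-colour CCS modifier entirely; on later steppings only
 * the plane id decides. first_bad_plane stands in for PLANE_SPRITE4. */
static bool plane_supports_mc_ccs(int rev, int plane_id, int first_bad_plane)
{
	if (revid_in_range(rev, REVID_A0, REVID_C0))	/* Wa_14010477008 */
		return false;

	return plane_id < first_bad_plane;
}
```

Threading dev_priv into the helper (and into gen12_get_plane_modifiers()) is what makes the stepping check possible: the modifier list now depends on the device, not just the plane id.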
+27 -20
drivers/gpu/drm/i915/display/intel_tc.c
··· 152 152 static void tc_port_fixup_legacy_flag(struct intel_digital_port *dig_port, 153 153 u32 live_status_mask) 154 154 { 155 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 155 156 u32 valid_hpd_mask; 156 157 157 158 if (dig_port->tc_legacy_port) ··· 165 164 return; 166 165 167 166 /* If live status mismatches the VBT flag, trust the live status. */ 168 - DRM_ERROR("Port %s: live status %08x mismatches the legacy port flag, fix flag\n", 169 - dig_port->tc_port_name, live_status_mask); 167 + drm_err(&i915->drm, 168 + "Port %s: live status %08x mismatches the legacy port flag, fix flag\n", 169 + dig_port->tc_port_name, live_status_mask); 170 170 171 171 dig_port->tc_legacy_port = !dig_port->tc_legacy_port; 172 172 } ··· 235 233 if (val == 0xffffffff) { 236 234 drm_dbg_kms(&i915->drm, 237 235 "Port %s: PHY in TCCOLD, can't set safe-mode to %s\n", 238 - dig_port->tc_port_name, 239 - enableddisabled(enable)); 236 + dig_port->tc_port_name, enableddisabled(enable)); 240 237 241 238 return false; 242 239 } ··· 287 286 static void icl_tc_phy_connect(struct intel_digital_port *dig_port, 288 287 int required_lanes) 289 288 { 289 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 290 290 int max_lanes; 291 291 292 292 if (!icl_tc_phy_status_complete(dig_port)) { 293 - DRM_DEBUG_KMS("Port %s: PHY not ready\n", 294 - dig_port->tc_port_name); 293 + drm_dbg_kms(&i915->drm, "Port %s: PHY not ready\n", 294 + dig_port->tc_port_name); 295 295 goto out_set_tbt_alt_mode; 296 296 } 297 297 ··· 313 311 * became disconnected. Not necessary for legacy mode. 
314 312 */ 315 313 if (!(tc_port_live_status_mask(dig_port) & BIT(TC_PORT_DP_ALT))) { 316 - DRM_DEBUG_KMS("Port %s: PHY sudden disconnect\n", 317 - dig_port->tc_port_name); 314 + drm_dbg_kms(&i915->drm, "Port %s: PHY sudden disconnect\n", 315 + dig_port->tc_port_name); 318 316 goto out_set_safe_mode; 319 317 } 320 318 321 319 if (max_lanes < required_lanes) { 322 - DRM_DEBUG_KMS("Port %s: PHY max lanes %d < required lanes %d\n", 323 - dig_port->tc_port_name, 324 - max_lanes, required_lanes); 320 + drm_dbg_kms(&i915->drm, 321 + "Port %s: PHY max lanes %d < required lanes %d\n", 322 + dig_port->tc_port_name, 323 + max_lanes, required_lanes); 325 324 goto out_set_safe_mode; 326 325 } 327 326 ··· 360 357 361 358 static bool icl_tc_phy_is_connected(struct intel_digital_port *dig_port) 362 359 { 360 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 361 + 363 362 if (!icl_tc_phy_status_complete(dig_port)) { 364 - DRM_DEBUG_KMS("Port %s: PHY status not complete\n", 365 - dig_port->tc_port_name); 363 + drm_dbg_kms(&i915->drm, "Port %s: PHY status not complete\n", 364 + dig_port->tc_port_name); 366 365 return dig_port->tc_mode == TC_PORT_TBT_ALT; 367 366 } 368 367 369 368 if (icl_tc_phy_is_in_safe_mode(dig_port)) { 370 - DRM_DEBUG_KMS("Port %s: PHY still in safe mode\n", 371 - dig_port->tc_port_name); 369 + drm_dbg_kms(&i915->drm, "Port %s: PHY still in safe mode\n", 370 + dig_port->tc_port_name); 372 371 373 372 return false; 374 373 } ··· 443 438 444 439 void intel_tc_port_sanitize(struct intel_digital_port *dig_port) 445 440 { 441 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 446 442 struct intel_encoder *encoder = &dig_port->base; 447 443 int active_links = 0; 448 444 ··· 457 451 458 452 if (active_links) { 459 453 if (!icl_tc_phy_is_connected(dig_port)) 460 - DRM_DEBUG_KMS("Port %s: PHY disconnected with %d active link(s)\n", 461 - dig_port->tc_port_name, active_links); 454 + drm_dbg_kms(&i915->drm, 455 + "Port %s: PHY 
disconnected with %d active link(s)\n", 456 + dig_port->tc_port_name, active_links); 462 457 intel_tc_port_link_init_refcount(dig_port, active_links); 463 458 464 459 goto out; ··· 469 462 icl_tc_phy_connect(dig_port, 1); 470 463 471 464 out: 472 - DRM_DEBUG_KMS("Port %s: sanitize mode (%s)\n", 473 - dig_port->tc_port_name, 474 - tc_port_mode_name(dig_port->tc_mode)); 465 + drm_dbg_kms(&i915->drm, "Port %s: sanitize mode (%s)\n", 466 + dig_port->tc_port_name, 467 + tc_port_mode_name(dig_port->tc_mode)); 475 468 476 469 mutex_unlock(&dig_port->tc_lock); 477 470 }
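The intel_tc.c changes above follow one pattern: replace the global DRM_DEBUG_KMS()/DRM_ERROR() macros with drm_dbg_kms()/drm_err(), which take the owning drm_device so every message is attributable to a device. A minimal standalone sketch of that device-aware logging shape (struct and helper names are hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-in for the per-device context drm_dbg_kms() takes;
 * the real call is drm_dbg_kms(&i915->drm, fmt, ...). */
struct dev_ctx {
	const char *name;
};

/* Device-aware debug print: unlike a global DRM_DEBUG_KMS()-style macro,
 * every message identifies which device emitted it. */
static const char *dev_dbg_kms(const struct dev_ctx *dev, const char *msg)
{
	static char out[128];

	snprintf(out, sizeof(out), "[%s] %s", dev->name, msg);
	return out;
}
```

On multi-GPU systems this is the point of the conversion: two otherwise identical "Port %s: PHY not ready" lines become distinguishable by device.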
+9 -6
drivers/gpu/drm/i915/display/intel_tv.c
··· 914 914 } 915 915 916 916 static void 917 - intel_enable_tv(struct intel_encoder *encoder, 917 + intel_enable_tv(struct intel_atomic_state *state, 918 + struct intel_encoder *encoder, 918 919 const struct intel_crtc_state *pipe_config, 919 920 const struct drm_connector_state *conn_state) 920 921 { ··· 931 930 } 932 931 933 932 static void 934 - intel_disable_tv(struct intel_encoder *encoder, 933 + intel_disable_tv(struct intel_atomic_state *state, 934 + struct intel_encoder *encoder, 935 935 const struct intel_crtc_state *old_crtc_state, 936 936 const struct drm_connector_state *old_conn_state) 937 937 { ··· 1416 1414 (color_conversion->bv << 16) | color_conversion->av); 1417 1415 } 1418 1416 1419 - static void intel_tv_pre_enable(struct intel_encoder *encoder, 1417 + static void intel_tv_pre_enable(struct intel_atomic_state *state, 1418 + struct intel_encoder *encoder, 1420 1419 const struct intel_crtc_state *pipe_config, 1421 1420 const struct drm_connector_state *conn_state) 1422 1421 { ··· 1701 1698 struct drm_modeset_acquire_ctx *ctx, 1702 1699 bool force) 1703 1700 { 1701 + struct drm_i915_private *i915 = to_i915(connector->dev); 1704 1702 struct intel_tv *intel_tv = intel_attached_tv(to_intel_connector(connector)); 1705 1703 enum drm_connector_status status; 1706 1704 int type; 1707 1705 1708 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] force=%d\n", 1709 - connector->base.id, connector->name, 1710 - force); 1706 + drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] force=%d\n", 1707 + connector->base.id, connector->name, force); 1711 1708 1712 1709 if (force) { 1713 1710 struct intel_load_detect_pipe tmp;
+10 -5
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 759 759 * DSI port enable has to be done before pipe and plane enable, so we do it in 760 760 * the pre_enable hook instead of the enable hook. 761 761 */ 762 - static void intel_dsi_pre_enable(struct intel_encoder *encoder, 762 + static void intel_dsi_pre_enable(struct intel_atomic_state *state, 763 + struct intel_encoder *encoder, 763 764 const struct intel_crtc_state *pipe_config, 764 765 const struct drm_connector_state *conn_state) 765 766 { ··· 859 858 intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON); 860 859 } 861 860 862 - static void bxt_dsi_enable(struct intel_encoder *encoder, 861 + static void bxt_dsi_enable(struct intel_atomic_state *state, 862 + struct intel_encoder *encoder, 863 863 const struct intel_crtc_state *crtc_state, 864 864 const struct drm_connector_state *conn_state) 865 865 { ··· 873 871 * DSI port disable has to be done after pipe and plane disable, so we do it in 874 872 * the post_disable hook. 875 873 */ 876 - static void intel_dsi_disable(struct intel_encoder *encoder, 874 + static void intel_dsi_disable(struct intel_atomic_state *state, 875 + struct intel_encoder *encoder, 877 876 const struct intel_crtc_state *old_crtc_state, 878 877 const struct drm_connector_state *old_conn_state) 879 878 { 879 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 880 880 struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); 881 881 enum port port; 882 882 883 - DRM_DEBUG_KMS("\n"); 883 + drm_dbg_kms(&i915->drm, "\n"); 884 884 885 885 intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_OFF); 886 886 intel_panel_disable_backlight(old_conn_state); ··· 910 906 vlv_dsi_clear_device_ready(encoder); 911 907 } 912 908 913 - static void intel_dsi_post_disable(struct intel_encoder *encoder, 909 + static void intel_dsi_post_disable(struct intel_atomic_state *state, 910 + struct intel_encoder *encoder, 914 911 const struct intel_crtc_state *old_crtc_state, 915 912 const struct drm_connector_state *old_conn_state) 916 913 {
+42 -43
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 570 570 engines->ctx = i915_gem_context_get(ctx); 571 571 572 572 for_each_gem_engine(ce, engines, it) { 573 - struct dma_fence *fence; 574 - int err = 0; 573 + int err; 575 574 576 575 /* serialises with execbuf */ 577 576 set_bit(CONTEXT_CLOSED_BIT, &ce->flags); 578 577 if (!intel_context_pin_if_active(ce)) 579 578 continue; 580 579 581 - fence = i915_active_fence_get(&ce->timeline->last_request); 582 - if (fence) { 583 - err = i915_sw_fence_await_dma_fence(&engines->fence, 584 - fence, 0, 585 - GFP_KERNEL); 586 - dma_fence_put(fence); 587 - } 580 + /* Wait until context is finally scheduled out and retired */ 581 + err = i915_sw_fence_await_active(&engines->fence, 582 + &ce->active, 583 + I915_ACTIVE_AWAIT_BARRIER); 588 584 intel_context_unpin(ce); 589 - if (err < 0) 585 + if (err) 590 586 goto kill; 591 587 } 592 588 ··· 753 757 return ERR_PTR(err); 754 758 } 755 759 760 + static inline struct i915_gem_engines * 761 + __context_engines_await(const struct i915_gem_context *ctx) 762 + { 763 + struct i915_gem_engines *engines; 764 + 765 + rcu_read_lock(); 766 + do { 767 + engines = rcu_dereference(ctx->engines); 768 + GEM_BUG_ON(!engines); 769 + 770 + if (unlikely(!i915_sw_fence_await(&engines->fence))) 771 + continue; 772 + 773 + if (likely(engines == rcu_access_pointer(ctx->engines))) 774 + break; 775 + 776 + i915_sw_fence_complete(&engines->fence); 777 + } while (1); 778 + rcu_read_unlock(); 779 + 780 + return engines; 781 + } 782 + 756 783 static int 757 784 context_apply_all(struct i915_gem_context *ctx, 758 785 int (*fn)(struct intel_context *ce, void *data), 759 786 void *data) 760 787 { 761 788 struct i915_gem_engines_iter it; 789 + struct i915_gem_engines *e; 762 790 struct intel_context *ce; 763 791 int err = 0; 764 792 765 - for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { 793 + e = __context_engines_await(ctx); 794 + for_each_gem_engine(ce, e, it) { 766 795 err = fn(ce, data); 767 796 if (err) 768 797 break; 769 798 } 770 - 
i915_gem_context_unlock_engines(ctx); 799 + i915_sw_fence_complete(&e->fence); 771 800 772 801 return err; 773 802 } ··· 807 786 static struct i915_address_space * 808 787 __set_ppgtt(struct i915_gem_context *ctx, struct i915_address_space *vm) 809 788 { 810 - struct i915_address_space *old = i915_gem_context_vm(ctx); 789 + struct i915_address_space *old; 811 790 791 + old = rcu_replace_pointer(ctx->vm, 792 + i915_vm_open(vm), 793 + lockdep_is_held(&ctx->mutex)); 812 794 GEM_BUG_ON(old && i915_vm_is_4lvl(vm) != i915_vm_is_4lvl(old)); 813 795 814 - rcu_assign_pointer(ctx->vm, i915_vm_open(vm)); 815 796 context_apply_all(ctx, __apply_ppgtt, vm); 816 797 817 798 return old; ··· 1090 1067 1091 1068 i915_active_fini(&cb->base); 1092 1069 kfree(cb); 1093 - } 1094 - 1095 - static inline struct i915_gem_engines * 1096 - __context_engines_await(const struct i915_gem_context *ctx) 1097 - { 1098 - struct i915_gem_engines *engines; 1099 - 1100 - rcu_read_lock(); 1101 - do { 1102 - engines = rcu_dereference(ctx->engines); 1103 - if (unlikely(!engines)) 1104 - break; 1105 - 1106 - if (unlikely(!i915_sw_fence_await(&engines->fence))) 1107 - continue; 1108 - 1109 - if (likely(engines == rcu_access_pointer(ctx->engines))) 1110 - break; 1111 - 1112 - i915_sw_fence_complete(&engines->fence); 1113 - } while (1); 1114 - rcu_read_unlock(); 1115 - 1116 - return engines; 1117 1070 } 1118 1071 1119 1072 I915_SELFTEST_DECLARE(static intel_engine_mask_t context_barrier_inject_fault); ··· 1400 1401 return 0; 1401 1402 } 1402 1403 1403 - static int 1404 - user_to_context_sseu(struct drm_i915_private *i915, 1405 - const struct drm_i915_gem_context_param_sseu *user, 1406 - struct intel_sseu *context) 1404 + int 1405 + i915_gem_user_to_context_sseu(struct drm_i915_private *i915, 1406 + const struct drm_i915_gem_context_param_sseu *user, 1407 + struct intel_sseu *context) 1407 1408 { 1408 1409 const struct sseu_dev_info *device = &RUNTIME_INFO(i915)->sseu; 1409 1410 ··· 1538 1539 goto out_ce; 1539 
1540 } 1540 1541 1541 - ret = user_to_context_sseu(i915, &user_sseu, &sseu); 1542 + ret = i915_gem_user_to_context_sseu(i915, &user_sseu, &sseu); 1542 1543 if (ret) 1543 1544 goto out_ce; 1544 1545
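The __context_engines_await() loop added above uses a classic lockless pattern: read the engines pointer under RCU, take a reference (the sw_fence await), then re-read the pointer; if it changed in between, the reference was taken on a stale object, so drop it and retry. A single-threaded standalone model of that retry shape (names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

struct engines {
	int awaits;		/* stands in for the sw_fence await count */
};

static struct engines *current_engines;	/* stands in for ctx->engines */

static bool fence_await(struct engines *e)
{
	e->awaits++;
	return true;
}

static void fence_complete(struct engines *e)
{
	e->awaits--;
}

static struct engines *engines_await(void)
{
	struct engines *e;

	do {
		e = current_engines;
		if (!fence_await(e))
			continue;
		/* Re-check: only a pointer that stayed stable across the
		 * await is safe to hand back to the caller. */
		if (e == current_engines)
			break;
		fence_complete(e);
	} while (1);

	return e;
}

/* Demo: the engines returned to the caller holds exactly one await. */
static int demo_awaits_held(void)
{
	struct engines a = { 0 };
	struct engines *e;
	int held;

	current_engines = &a;
	e = engines_await();
	held = e->awaits;
	fence_complete(e);
	return held;
}
```

In context_apply_all() this is what lets the engines array be iterated without holding ctx->mutex: the await pins the array against teardown, and i915_sw_fence_complete() releases it afterwards.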
+4
drivers/gpu/drm/i915/gem/i915_gem_context.h
···
  struct i915_lut_handle *i915_lut_handle_alloc(void);
  void i915_lut_handle_free(struct i915_lut_handle *lut);

+ int i915_gem_user_to_context_sseu(struct drm_i915_private *i915,
+ 				  const struct drm_i915_gem_context_param_sseu *user,
+ 				  struct intel_sseu *context);
+
  #endif /* !__I915_GEM_CONTEXT_H__ */
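Exporting i915_gem_user_to_context_sseu() lets the perf code validate a userspace SSEU request against device capabilities (this is what the global sseu pinning uAPI needs). The essential check is that the request enables nothing beyond what the hardware reports; a heavily simplified standalone sketch, with illustrative structs rather than the real per-slice descriptors:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative device/user SSEU descriptors; the real ones carry
 * per-slice subslice arrays and gen-specific rules. */
struct sseu {
	uint32_t slice_mask;
	uint32_t subslice_mask;
};

/* A user request is valid only if it is a non-empty subset of what the
 * device exposes: no slice or subslice outside the hardware masks. */
static bool sseu_request_valid(const struct sseu *dev, const struct sseu *req)
{
	if (req->slice_mask & ~dev->slice_mask)
		return false;
	if (req->subslice_mask & ~dev->subslice_mask)
		return false;

	return req->slice_mask && req->subslice_mask;
}
```

The real helper additionally enforces EU min/max counts and rejects runtime reconfiguration where the hardware cannot do it; this sketch only shows the subset-mask core.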
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_domain.c
···
  	struct i915_vma *vma;

  	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
- 	if (!atomic_read(&obj->bind_count))
+ 	if (list_empty(&obj->vma.list))
  		return;

  	mutex_lock(&i915->ggtt.vm.mutex);
+227 -150
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 40 40 u32 handle; 41 41 }; 42 42 43 + struct eb_vma_array { 44 + struct kref kref; 45 + struct eb_vma vma[]; 46 + }; 47 + 43 48 enum { 44 49 FORCE_CPU_RELOC = 1, 45 50 FORCE_GTT_RELOC, ··· 57 52 #define __EXEC_OBJECT_NEEDS_MAP BIT(29) 58 53 #define __EXEC_OBJECT_NEEDS_BIAS BIT(28) 59 54 #define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 28) /* all of the above */ 60 - #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE) 61 55 62 56 #define __EXEC_HAS_RELOC BIT(31) 63 57 #define __EXEC_INTERNAL_FLAGS (~0u << 31) ··· 287 283 */ 288 284 int lut_size; 289 285 struct hlist_head *buckets; /** ht for relocation handles */ 286 + struct eb_vma_array *array; 290 287 }; 291 288 292 289 static inline bool eb_use_cmdparser(const struct i915_execbuffer *eb) ··· 297 292 eb->args->batch_len); 298 293 } 299 294 295 + static struct eb_vma_array *eb_vma_array_create(unsigned int count) 296 + { 297 + struct eb_vma_array *arr; 298 + 299 + arr = kvmalloc(struct_size(arr, vma, count), GFP_KERNEL | __GFP_NOWARN); 300 + if (!arr) 301 + return NULL; 302 + 303 + kref_init(&arr->kref); 304 + arr->vma[0].vma = NULL; 305 + 306 + return arr; 307 + } 308 + 309 + static inline void eb_unreserve_vma(struct eb_vma *ev) 310 + { 311 + struct i915_vma *vma = ev->vma; 312 + 313 + if (unlikely(ev->flags & __EXEC_OBJECT_HAS_FENCE)) 314 + __i915_vma_unpin_fence(vma); 315 + 316 + if (ev->flags & __EXEC_OBJECT_HAS_PIN) 317 + __i915_vma_unpin(vma); 318 + 319 + ev->flags &= ~(__EXEC_OBJECT_HAS_PIN | 320 + __EXEC_OBJECT_HAS_FENCE); 321 + } 322 + 323 + static void eb_vma_array_destroy(struct kref *kref) 324 + { 325 + struct eb_vma_array *arr = container_of(kref, typeof(*arr), kref); 326 + struct eb_vma *ev = arr->vma; 327 + 328 + while (ev->vma) { 329 + eb_unreserve_vma(ev); 330 + i915_vma_put(ev->vma); 331 + ev++; 332 + } 333 + 334 + kvfree(arr); 335 + } 336 + 337 + static void eb_vma_array_put(struct eb_vma_array *arr) 338 + { 339 + kref_put(&arr->kref, eb_vma_array_destroy); 340 + } 
341 + 300 342 static int eb_create(struct i915_execbuffer *eb) 301 343 { 344 + /* Allocate an extra slot for use by the command parser + sentinel */ 345 + eb->array = eb_vma_array_create(eb->buffer_count + 2); 346 + if (!eb->array) 347 + return -ENOMEM; 348 + 349 + eb->vma = eb->array->vma; 350 + 302 351 if (!(eb->args->flags & I915_EXEC_HANDLE_LUT)) { 303 352 unsigned int size = 1 + ilog2(eb->buffer_count); 304 353 ··· 386 327 break; 387 328 } while (--size); 388 329 389 - if (unlikely(!size)) 330 + if (unlikely(!size)) { 331 + eb_vma_array_put(eb->array); 390 332 return -ENOMEM; 333 + } 391 334 392 335 eb->lut_size = size; 393 336 } else { ··· 429 368 return false; 430 369 } 431 370 371 + static u64 eb_pin_flags(const struct drm_i915_gem_exec_object2 *entry, 372 + unsigned int exec_flags) 373 + { 374 + u64 pin_flags = 0; 375 + 376 + if (exec_flags & EXEC_OBJECT_NEEDS_GTT) 377 + pin_flags |= PIN_GLOBAL; 378 + 379 + /* 380 + * Wa32bitGeneralStateOffset & Wa32bitInstructionBaseOffset, 381 + * limit address to the first 4GBs for unflagged objects. 
382 + */ 383 + if (!(exec_flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS)) 384 + pin_flags |= PIN_ZONE_4G; 385 + 386 + if (exec_flags & __EXEC_OBJECT_NEEDS_MAP) 387 + pin_flags |= PIN_MAPPABLE; 388 + 389 + if (exec_flags & EXEC_OBJECT_PINNED) 390 + pin_flags |= entry->offset | PIN_OFFSET_FIXED; 391 + else if (exec_flags & __EXEC_OBJECT_NEEDS_BIAS) 392 + pin_flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS; 393 + 394 + return pin_flags; 395 + } 396 + 432 397 static inline bool 433 398 eb_pin_vma(struct i915_execbuffer *eb, 434 399 const struct drm_i915_gem_exec_object2 *entry, ··· 472 385 if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_GTT)) 473 386 pin_flags |= PIN_GLOBAL; 474 387 475 - if (unlikely(i915_vma_pin(vma, 0, 0, pin_flags))) 476 - return false; 388 + /* Attempt to reuse the current location if available */ 389 + if (unlikely(i915_vma_pin(vma, 0, 0, pin_flags))) { 390 + if (entry->flags & EXEC_OBJECT_PINNED) 391 + return false; 392 + 393 + /* Failing that pick any _free_ space if suitable */ 394 + if (unlikely(i915_vma_pin(vma, 395 + entry->pad_to_size, 396 + entry->alignment, 397 + eb_pin_flags(entry, ev->flags) | 398 + PIN_USER | PIN_NOEVICT))) 399 + return false; 400 + } 477 401 478 402 if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) { 479 403 if (unlikely(i915_vma_pin_fence(vma))) { ··· 498 400 499 401 ev->flags |= __EXEC_OBJECT_HAS_PIN; 500 402 return !eb_vma_misplaced(entry, vma, ev->flags); 501 - } 502 - 503 - static inline void __eb_unreserve_vma(struct i915_vma *vma, unsigned int flags) 504 - { 505 - GEM_BUG_ON(!(flags & __EXEC_OBJECT_HAS_PIN)); 506 - 507 - if (unlikely(flags & __EXEC_OBJECT_HAS_FENCE)) 508 - __i915_vma_unpin_fence(vma); 509 - 510 - __i915_vma_unpin(vma); 511 - } 512 - 513 - static inline void 514 - eb_unreserve_vma(struct eb_vma *ev) 515 - { 516 - if (!(ev->flags & __EXEC_OBJECT_HAS_PIN)) 517 - return; 518 - 519 - __eb_unreserve_vma(ev->vma, ev->flags); 520 - ev->flags &= ~__EXEC_OBJECT_RESERVED; 521 403 } 522 404 523 405 static int ··· 559 
481 560 482 GEM_BUG_ON(i915_vma_is_closed(vma)); 561 483 562 - ev->vma = i915_vma_get(vma); 484 + ev->vma = vma; 563 485 ev->exec = entry; 564 486 ev->flags = entry->flags; 565 487 ··· 625 547 u64 pin_flags) 626 548 { 627 549 struct drm_i915_gem_exec_object2 *entry = ev->exec; 628 - unsigned int exec_flags = ev->flags; 629 550 struct i915_vma *vma = ev->vma; 630 551 int err; 631 - 632 - if (exec_flags & EXEC_OBJECT_NEEDS_GTT) 633 - pin_flags |= PIN_GLOBAL; 634 - 635 - /* 636 - * Wa32bitGeneralStateOffset & Wa32bitInstructionBaseOffset, 637 - * limit address to the first 4GBs for unflagged objects. 638 - */ 639 - if (!(exec_flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS)) 640 - pin_flags |= PIN_ZONE_4G; 641 - 642 - if (exec_flags & __EXEC_OBJECT_NEEDS_MAP) 643 - pin_flags |= PIN_MAPPABLE; 644 - 645 - if (exec_flags & EXEC_OBJECT_PINNED) 646 - pin_flags |= entry->offset | PIN_OFFSET_FIXED; 647 - else if (exec_flags & __EXEC_OBJECT_NEEDS_BIAS) 648 - pin_flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS; 649 552 650 553 if (drm_mm_node_allocated(&vma->node) && 651 554 eb_vma_misplaced(entry, vma, ev->flags)) { ··· 637 578 638 579 err = i915_vma_pin(vma, 639 580 entry->pad_to_size, entry->alignment, 640 - pin_flags); 581 + eb_pin_flags(entry, ev->flags) | pin_flags); 641 582 if (err) 642 583 return err; 643 584 ··· 646 587 eb->args->flags |= __EXEC_HAS_RELOC; 647 588 } 648 589 649 - if (unlikely(exec_flags & EXEC_OBJECT_NEEDS_FENCE)) { 590 + if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) { 650 591 err = i915_vma_pin_fence(vma); 651 592 if (unlikely(err)) { 652 593 i915_vma_unpin(vma); ··· 654 595 } 655 596 656 597 if (vma->fence) 657 - exec_flags |= __EXEC_OBJECT_HAS_FENCE; 598 + ev->flags |= __EXEC_OBJECT_HAS_FENCE; 658 599 } 659 600 660 - ev->flags = exec_flags | __EXEC_OBJECT_HAS_PIN; 601 + ev->flags |= __EXEC_OBJECT_HAS_PIN; 661 602 GEM_BUG_ON(eb_vma_misplaced(entry, vma, ev->flags)); 662 603 663 604 return 0; ··· 787 728 return 0; 788 729 } 789 730 790 - static int 
- eb_lookup_vmas(struct i915_execbuffer *eb)
+ static int __eb_add_lut(struct i915_execbuffer *eb,
+                         u32 handle, struct i915_vma *vma)
  {
-         struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
-         struct drm_i915_gem_object *obj;
-         unsigned int i, batch;
+         struct i915_gem_context *ctx = eb->gem_context;
+         struct i915_lut_handle *lut;
          int err;

-         if (unlikely(i915_gem_context_is_closed(eb->gem_context)))
-                 return -ENOENT;
+         lut = i915_lut_handle_alloc();
+         if (unlikely(!lut))
+                 return -ENOMEM;
+
+         i915_vma_get(vma);
+         if (!atomic_fetch_inc(&vma->open_count))
+                 i915_vma_reopen(vma);
+         lut->handle = handle;
+         lut->ctx = ctx;
+
+         /* Check that the context hasn't been closed in the meantime */
+         err = -EINTR;
+         if (!mutex_lock_interruptible(&ctx->mutex)) {
+                 err = -ENOENT;
+                 if (likely(!i915_gem_context_is_closed(ctx)))
+                         err = radix_tree_insert(&ctx->handles_vma, handle, vma);
+                 if (err == 0) { /* And nor has this handle */
+                         struct drm_i915_gem_object *obj = vma->obj;
+
+                         i915_gem_object_lock(obj);
+                         if (idr_find(&eb->file->object_idr, handle) == obj) {
+                                 list_add(&lut->obj_link, &obj->lut_list);
+                         } else {
+                                 radix_tree_delete(&ctx->handles_vma, handle);
+                                 err = -ENOENT;
+                         }
+                         i915_gem_object_unlock(obj);
+                 }
+                 mutex_unlock(&ctx->mutex);
+         }
+         if (unlikely(err))
+                 goto err;
+
+         return 0;
+
+ err:
+         atomic_dec(&vma->open_count);
+         i915_vma_put(vma);
+         i915_lut_handle_free(lut);
+         return err;
+ }
+
+ static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
+ {
+         do {
+                 struct drm_i915_gem_object *obj;
+                 struct i915_vma *vma;
+                 int err;
+
+                 rcu_read_lock();
+                 vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle);
+                 if (likely(vma))
+                         vma = i915_vma_tryget(vma);
+                 rcu_read_unlock();
+                 if (likely(vma))
+                         return vma;
+
+                 obj = i915_gem_object_lookup(eb->file, handle);
+                 if (unlikely(!obj))
+                         return ERR_PTR(-ENOENT);
+
+                 vma = i915_vma_instance(obj, eb->context->vm, NULL);
+                 if (IS_ERR(vma)) {
+                         i915_gem_object_put(obj);
+                         return vma;
+                 }
+
+                 err = __eb_add_lut(eb, handle, vma);
+                 if (likely(!err))
+                         return vma;
+
+                 i915_gem_object_put(obj);
+                 if (err != -EEXIST)
+                         return ERR_PTR(err);
+         } while (1);
+ }
+
+ static int eb_lookup_vmas(struct i915_execbuffer *eb)
+ {
+         unsigned int batch = eb_batch_index(eb);
+         unsigned int i;
+         int err = 0;

          INIT_LIST_HEAD(&eb->relocs);
          INIT_LIST_HEAD(&eb->unbound);

-         batch = eb_batch_index(eb);
-
          for (i = 0; i < eb->buffer_count; i++) {
-                 u32 handle = eb->exec[i].handle;
-                 struct i915_lut_handle *lut;
                  struct i915_vma *vma;

-                 vma = radix_tree_lookup(handles_vma, handle);
-                 if (likely(vma))
-                         goto add_vma;
-
-                 obj = i915_gem_object_lookup(eb->file, handle);
-                 if (unlikely(!obj)) {
-                         err = -ENOENT;
-                         goto err_vma;
-                 }
-
-                 vma = i915_vma_instance(obj, eb->context->vm, NULL);
+                 vma = eb_lookup_vma(eb, eb->exec[i].handle);
                  if (IS_ERR(vma)) {
                          err = PTR_ERR(vma);
-                         goto err_obj;
+                         break;
                  }

-                 lut = i915_lut_handle_alloc();
-                 if (unlikely(!lut)) {
-                         err = -ENOMEM;
-                         goto err_obj;
-                 }
-
-                 err = radix_tree_insert(handles_vma, handle, vma);
-                 if (unlikely(err)) {
-                         i915_lut_handle_free(lut);
-                         goto err_obj;
-                 }
-
-                 /* transfer ref to lut */
-                 if (!atomic_fetch_inc(&vma->open_count))
-                         i915_vma_reopen(vma);
-                 lut->handle = handle;
-                 lut->ctx = eb->gem_context;
-
-                 i915_gem_object_lock(obj);
-                 list_add(&lut->obj_link, &obj->lut_list);
-                 i915_gem_object_unlock(obj);
-
- add_vma:
                  err = eb_validate_vma(eb, &eb->exec[i], vma);
-                 if (unlikely(err))
-                         goto err_vma;
+                 if (unlikely(err)) {
+                         i915_vma_put(vma);
+                         break;
+                 }

                  eb_add_vma(eb, i, batch, vma);
          }

-         return 0;
-
- err_obj:
-         i915_gem_object_put(obj);
- err_vma:
          eb->vma[i].vma = NULL;
          return err;
  }
···
          }
  }

- static void eb_release_vmas(const struct i915_execbuffer *eb)
- {
-         const unsigned int count = eb->buffer_count;
-         unsigned int i;
-
-         for (i = 0; i < count; i++) {
-                 struct eb_vma *ev = &eb->vma[i];
-                 struct i915_vma *vma = ev->vma;
-
-                 if (!vma)
-                         break;
-
-                 eb->vma[i].vma = NULL;
-
-                 if (ev->flags & __EXEC_OBJECT_HAS_PIN)
-                         __eb_unreserve_vma(vma, ev->flags);
-
-                 i915_vma_put(vma);
-         }
- }
-
  static void eb_destroy(const struct i915_execbuffer *eb)
  {
          GEM_BUG_ON(eb->reloc_cache.rq);
+
+         if (eb->array)
+                 eb_vma_array_put(eb->array);

          if (eb->lut_size > 0)
                  kfree(eb->buckets);
···
          return cmd;
  }

+ static inline bool use_reloc_gpu(struct i915_vma *vma)
+ {
+         if (DBG_FORCE_RELOC == FORCE_GPU_RELOC)
+                 return true;
+
+         if (DBG_FORCE_RELOC)
+                 return false;
+
+         return !dma_resv_test_signaled_rcu(vma->resv, true);
+ }
+
  static u64
  relocate_entry(struct i915_vma *vma,
                 const struct drm_i915_gem_relocation_entry *reloc,
···
          bool wide = eb->reloc_cache.use_64bit_reloc;
          void *vaddr;

-         if (!eb->reloc_cache.vaddr &&
-             (DBG_FORCE_RELOC == FORCE_GPU_RELOC ||
-              !dma_resv_test_signaled_rcu(vma->resv, true))) {
+         if (!eb->reloc_cache.vaddr && use_reloc_gpu(vma)) {
                  const unsigned int gen = eb->reloc_cache.gen;
                  unsigned int len;
                  u32 *batch;
···
  {
  #define N_RELOC(x) ((x) / sizeof(struct drm_i915_gem_relocation_entry))
          struct drm_i915_gem_relocation_entry stack[N_RELOC(512)];
-         struct drm_i915_gem_relocation_entry __user *urelocs;
          const struct drm_i915_gem_exec_object2 *entry = ev->exec;
-         unsigned int remain;
+         struct drm_i915_gem_relocation_entry __user *urelocs =
+                 u64_to_user_ptr(entry->relocs_ptr);
+         unsigned long remain = entry->relocation_count;

-         urelocs = u64_to_user_ptr(entry->relocs_ptr);
-         remain = entry->relocation_count;
          if (unlikely(remain > N_RELOC(ULONG_MAX)))
                  return -EINVAL;

···
           * to read. However, if the array is not writable the user loses
           * the updated relocation values.
           */
-         if (unlikely(!access_ok(urelocs, remain*sizeof(*urelocs))))
+         if (unlikely(!access_ok(urelocs, remain * sizeof(*urelocs))))
                  return -EFAULT;

          do {
                  struct drm_i915_gem_relocation_entry *r = stack;
                  unsigned int count =
-                         min_t(unsigned int, remain, ARRAY_SIZE(stack));
+                         min_t(unsigned long, remain, ARRAY_SIZE(stack));
                  unsigned int copied;

                  /*
···
  {
          int err;

-         mutex_lock(&eb->gem_context->mutex);
          err = eb_lookup_vmas(eb);
-         mutex_unlock(&eb->gem_context->mutex);
          if (err)
                  return err;

···
                  err = i915_vma_move_to_active(vma, eb->request, flags);

                  i915_vma_unlock(vma);
-
-                 __eb_unreserve_vma(vma, flags);
-                 i915_vma_put(vma);
-
-                 ev->vma = NULL;
+                 eb_unreserve_vma(ev);
          }
          ww_acquire_fini(&acquire);

+         eb_vma_array_put(fetch_and_zero(&eb->array));
+
          if (unlikely(err))
                  goto err_skip;
-
-         eb->exec = NULL;

          /* Unconditionally flush any chipset caches (for streaming writes). */
          intel_gt_chipset_flush(eb->engine->gt);
···
          dma_resv_add_excl_fence(shadow->resv, &pw->base.dma);
          dma_resv_unlock(shadow->resv);

-         dma_fence_work_commit(&pw->base);
+         dma_fence_work_commit_imm(&pw->base);
          return 0;

  err_batch_unlock:
···
          eb->vma[eb->buffer_count].vma = i915_vma_get(shadow);
          eb->vma[eb->buffer_count].flags = __EXEC_OBJECT_HAS_PIN;
          eb->batch = &eb->vma[eb->buffer_count++];
+         eb->vma[eb->buffer_count].vma = NULL;

          eb->trampoline = trampoline;
          eb->batch_start_offset = 0;
···
                  __i915_request_skip(rq);
          }

-         local_bh_disable();
          __i915_request_queue(rq, &attr);
-         local_bh_enable(); /* Kick the execlists tasklet if just scheduled */

          /* Try to clean up the client's timeline after submitting the request */
          if (prev)
···
                  args->flags |= __EXEC_HAS_RELOC;

          eb.exec = exec;
-         eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1);
-         eb.vma[0].vma = NULL;

          eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
          reloc_cache_init(&eb.reloc_cache, eb.i915);
···
          if (batch->private)
                  intel_engine_pool_put(batch->private);
  err_vma:
-         if (eb.exec)
-                 eb_release_vmas(&eb);
          if (eb.trampoline)
                  i915_vma_unpin(eb.trampoline);
          eb_unpin_engine(&eb);
···

  static size_t eb_element_size(void)
  {
-         return sizeof(struct drm_i915_gem_exec_object2) + sizeof(struct eb_vma);
+         return sizeof(struct drm_i915_gem_exec_object2);
  }

  static bool check_buffer_count(size_t count)
···
          /* Copy in the exec list from userland */
          exec_list = kvmalloc_array(count, sizeof(*exec_list),
                                     __GFP_NOWARN | GFP_KERNEL);
-         exec2_list = kvmalloc_array(count + 1, eb_element_size(),
+         exec2_list = kvmalloc_array(count, eb_element_size(),
                                      __GFP_NOWARN | GFP_KERNEL);
          if (exec_list == NULL || exec2_list == NULL) {
                  drm_dbg(&i915->drm,
···
          if (err)
                  return err;

-         /* Allocate an extra slot for use by the command parser */
-         exec2_list = kvmalloc_array(count + 1, eb_element_size(),
+         exec2_list = kvmalloc_array(count, eb_element_size(),
                                      __GFP_NOWARN | GFP_KERNEL);
          if (exec2_list == NULL) {
                  drm_dbg(&i915->drm, "Failed to allocate exec list for %zd buffers\n",
-1
drivers/gpu/drm/i915/gem/i915_gem_object.c
···
  	}
  	obj->mmo.offsets = RB_ROOT;

- 	GEM_BUG_ON(atomic_read(&obj->bind_count));
  	GEM_BUG_ON(obj->userfault_count);
  	GEM_BUG_ON(!list_empty(&obj->lut_list));

-3
drivers/gpu/drm/i915/gem/i915_gem_object_types.h
···
  #define TILING_MASK (FENCE_MINIMUM_STRIDE - 1)
  #define STRIDE_MASK (~TILING_MASK)

- 	/** Count of VMA actually bound by this object */
- 	atomic_t bind_count;
-
  	struct {
  		/*
  		 * Protects the pages and their use. Do not use directly, but
-2
drivers/gpu/drm/i915/gem/i915_gem_pages.c
···
  	if (i915_gem_object_has_pinned_pages(obj))
  		return -EBUSY;

- 	GEM_BUG_ON(atomic_read(&obj->bind_count));
-
  	/* May be called by shrinker from within get_pages() (on another bo) */
  	mutex_lock(&obj->mm.lock);
  	if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
+2 -16
drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
···
  		return false;

  	/*
- 	 * Only report true if by unbinding the object and putting its pages
- 	 * we can actually make forward progress towards freeing physical
- 	 * pages.
- 	 *
- 	 * If the pages are pinned for any other reason than being bound
- 	 * to the GPU, simply unbinding from the GPU is not going to succeed
- 	 * in releasing our pin count on the pages themselves.
- 	 */
- 	if (atomic_read(&obj->mm.pages_pin_count) > atomic_read(&obj->bind_count))
- 		return false;
-
- 	/*
  	 * We can only return physical pages to the system if we can either
  	 * discard the contents (because the user has marked them as being
  	 * purgeable) or if we can move their contents out to swap.
···
  		flags = 0;
  		if (shrink & I915_SHRINK_ACTIVE)
  			flags = I915_GEM_OBJECT_UNBIND_ACTIVE;
+ 		if (!(shrink & I915_SHRINK_BOUND))
+ 			flags = I915_GEM_OBJECT_UNBIND_TEST;

  		if (i915_gem_object_unbind(obj, flags) == 0)
  			__i915_gem_object_put_pages(obj);
···

  			if (!(shrink & I915_SHRINK_ACTIVE) &&
  			    i915_gem_object_is_framebuffer(obj))
- 				continue;
-
- 			if (!(shrink & I915_SHRINK_BOUND) &&
- 			    atomic_read(&obj->bind_count))
  				continue;

  			if (!can_release_pages(obj))
+2 -2
drivers/gpu/drm/i915/gem/i915_gem_stolen.c
···
  	mutex_init(&i915->mm.stolen_lock);

  	if (intel_vgpu_active(i915)) {
- 		dev_notice(i915->drm.dev,
+ 		drm_notice(&i915->drm,
  			   "%s, disabling use of stolen memory\n",
  			   "iGVT-g active");
  		return 0;
  	}

  	if (intel_vtd_active() && INTEL_GEN(i915) < 8) {
- 		dev_notice(i915->drm.dev,
+ 		drm_notice(&i915->drm,
  			   "%s, disabling use of stolen memory\n",
  			   "DMAR active");
  		return 0;
+1 -2
drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
···
  }

  static const struct drm_i915_gem_object_ops huge_ops = {
- 	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
- 		 I915_GEM_OBJECT_IS_SHRINKABLE,
+ 	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
  	.get_pages = huge_get_pages,
  	.put_pages = huge_put_pages,
  };
+1 -1
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
···
  		goto out;
  	}

- 	rq = igt_request_alloc(ctx, i915->engine[RCS0]);
+ 	rq = igt_request_alloc(ctx, i915->gt.engine[RCS0]);
  	if (IS_ERR(rq)) {
  		pr_err("Request allocation failed!\n");
  		goto out;
-4
drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
···
  	if (err)
  		goto out_unmap;

- 	GEM_BUG_ON(mmo->mmap_type == I915_MMAP_TYPE_GTT &&
- 		   !atomic_read(&obj->bind_count));
-
  	err = check_present(addr, obj->base.size);
  	if (err) {
  		pr_err("%s: was not present\n", obj->mm.region->name);
···
  		pr_err("Failed to unbind object!\n");
  		goto out_unmap;
  	}
- 	GEM_BUG_ON(atomic_read(&obj->bind_count));

  	if (type != I915_MMAP_TYPE_GTT) {
  		__i915_gem_object_put_pages(obj);
+1 -1
drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
···
  {
  	struct drm_i915_private *i915 = arg;
  	struct drm_i915_gem_object *obj;
- 	int err = -ENOMEM;
+ 	int err;

  	/* Basic test to ensure we can create an object */

+1 -1
drivers/gpu/drm/i915/gt/debugfs_engines.c
···
  		{ "engines", &engines_fops },
  	};

- 	debugfs_gt_register_files(gt, root, files, ARRAY_SIZE(files));
+ 	intel_gt_debugfs_register_files(root, files, ARRAY_SIZE(files), gt);
  }
+9 -6
drivers/gpu/drm/i915/gt/debugfs_gt.c
···
  #include "debugfs_engines.h"
  #include "debugfs_gt.h"
  #include "debugfs_gt_pm.h"
+ #include "uc/intel_uc_debugfs.h"
  #include "i915_drv.h"

  void debugfs_gt_register(struct intel_gt *gt)
···

  	debugfs_engines_register(gt, root);
  	debugfs_gt_pm_register(gt, root);
+
+ 	intel_uc_debugfs_register(&gt->uc, root);
  }

- void debugfs_gt_register_files(struct intel_gt *gt,
- 			       struct dentry *root,
- 			       const struct debugfs_gt_file *files,
- 			       unsigned long count)
+ void intel_gt_debugfs_register_files(struct dentry *root,
+ 				     const struct debugfs_gt_file *files,
+ 				     unsigned long count, void *data)
  {
  	while (count--) {
- 		if (!files->eval || files->eval(gt))
+ 		umode_t mode = files->fops->write ? 0644 : 0444;
+ 		if (!files->eval || files->eval(data))
  			debugfs_create_file(files->name,
- 					    0444, root, gt,
+ 					    mode, root, data,
  					    files->fops);

  		files++;
+4 -5
drivers/gpu/drm/i915/gt/debugfs_gt.h
···
  struct debugfs_gt_file {
  	const char *name;
  	const struct file_operations *fops;
- 	bool (*eval)(const struct intel_gt *gt);
+ 	bool (*eval)(void *data);
  };

- void debugfs_gt_register_files(struct intel_gt *gt,
- 			       struct dentry *root,
- 			       const struct debugfs_gt_file *files,
- 			       unsigned long count);
+ void intel_gt_debugfs_register_files(struct dentry *root,
+ 				     const struct debugfs_gt_file *files,
+ 				     unsigned long count, void *data);

  #endif /* DEBUGFS_GT_H */
+7 -3
drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
···
  	return 0;
  }

- static bool llc_eval(const struct intel_gt *gt)
+ static bool llc_eval(void *data)
  {
+ 	struct intel_gt *gt = data;
+
  	return HAS_LLC(gt->i915);
  }

···
  	return 0;
  }

- static bool rps_eval(const struct intel_gt *gt)
+ static bool rps_eval(void *data)
  {
+ 	struct intel_gt *gt = data;
+
  	return HAS_RPS(gt->i915);
  }

···
  		{ "rps_boost", &rps_boost_fops, rps_eval },
  	};

- 	debugfs_gt_register_files(gt, root, files, ARRAY_SIZE(files));
+ 	intel_gt_debugfs_register_files(root, files, ARRAY_SIZE(files), gt);
  }
+3 -3
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
···
  	if (!--b->irq_enabled)
  		irq_disable(engine);

- 	b->irq_armed = false;
+ 	WRITE_ONCE(b->irq_armed, false);
  	intel_gt_pm_put_async(engine->gt);
  }

···
  	struct intel_breadcrumbs *b = &engine->breadcrumbs;
  	unsigned long flags;

- 	if (!b->irq_armed)
+ 	if (!READ_ONCE(b->irq_armed))
  		return;

  	spin_lock_irqsave(&b->irq_lock, flags);
···
  	 * which we can add a new waiter and avoid the cost of re-enabling
  	 * the irq.
  	 */
- 	b->irq_armed = true;
+ 	WRITE_ONCE(b->irq_armed, true);

  	/*
  	 * Since we are waiting on a request, the GPU should be busy
+5
drivers/gpu/drm/i915/gt/intel_context.c
···
  		goto out_release;
  	}

+ 	if (unlikely(intel_context_is_closed(ce))) {
+ 		err = -ENOENT;
+ 		goto out_unlock;
+ 	}
+
  	if (likely(!atomic_add_unless(&ce->pin_count, 1, 0))) {
  		err = intel_context_active_acquire(ce);
  		if (unlikely(err))
+2
drivers/gpu/drm/i915/gt/intel_engine.h
···
  int intel_engines_init_mmio(struct intel_gt *gt);
  int intel_engines_init(struct intel_gt *gt);

+ void intel_engine_free_request_pool(struct intel_engine_cs *engine);
+
  void intel_engines_release(struct intel_gt *gt);
  void intel_engines_free(struct intel_gt *gt);

+73 -52
drivers/gpu/drm/i915/gt/intel_engine_cs.c
···
  	gt->engine_class[info->class][info->instance] = engine;
  	gt->engine[id] = engine;

- 	i915->engine[id] = engine;
-
  	return 0;
  }

···
  		engine->release = NULL;

  		memset(&engine->reset, 0, sizeof(engine->reset));
-
- 		gt->i915->engine[id] = NULL;
  	}
+ }
+
+ void intel_engine_free_request_pool(struct intel_engine_cs *engine)
+ {
+ 	if (!engine->request_pool)
+ 		return;
+
+ 	kmem_cache_free(i915_request_slab_cache(), engine->request_pool);
  }

  void intel_engines_free(struct intel_gt *gt)
···
  	struct intel_engine_cs *engine;
  	enum intel_engine_id id;

+ 	/* Free the requests! dma-resv keeps fences around for an eternity */
+ 	rcu_barrier();
+
  	for_each_engine(engine, gt, id) {
+ 		intel_engine_free_request_pool(engine);
  		kfree(engine);
  		gt->engine[id] = NULL;
  	}
···
  			 name);
  }

+ static struct intel_timeline *get_timeline(struct i915_request *rq)
+ {
+ 	struct intel_timeline *tl;
+
+ 	/*
+ 	 * Even though we are holding the engine->active.lock here, there
+ 	 * is no control over the submission queue per-se and we are
+ 	 * inspecting the active state at a random point in time, with an
+ 	 * unknown queue. Play safe and make sure the timeline remains valid.
+ 	 * (Only being used for pretty printing, one extra kref shouldn't
+ 	 * cause a camel stampede!)
+ 	 */
+ 	rcu_read_lock();
+ 	tl = rcu_dereference(rq->timeline);
+ 	if (!kref_get_unless_zero(&tl->kref))
+ 		tl = NULL;
+ 	rcu_read_unlock();
+
+ 	return tl;
+ }
+
+ static int print_ring(char *buf, int sz, struct i915_request *rq)
+ {
+ 	int len = 0;
+
+ 	if (!i915_request_signaled(rq)) {
+ 		struct intel_timeline *tl = get_timeline(rq);
+
+ 		len = scnprintf(buf, sz,
+ 				"ring:{start:%08x, hwsp:%08x, seqno:%08x, runtime:%llums}, ",
+ 				i915_ggtt_offset(rq->ring->vma),
+ 				tl ? tl->hwsp_offset : 0,
+ 				hwsp_seqno(rq),
+ 				DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
+ 						      1000 * 1000));
+
+ 		if (tl)
+ 			intel_timeline_put(tl);
+ 	}
+
+ 	return len;
+ }
+
  static void hexdump(struct drm_printer *m, const void *buf, size_t len)
  {
  	const size_t rowsize = 8 * sizeof(u32);
···
  	}
  }

- static struct intel_timeline *get_timeline(struct i915_request *rq)
- {
- 	struct intel_timeline *tl;
-
- 	/*
- 	 * Even though we are holding the engine->active.lock here, there
- 	 * is no control over the submission queue per-se and we are
- 	 * inspecting the active state at a random point in time, with an
- 	 * unknown queue. Play safe and make sure the timeline remains valid.
- 	 * (Only being used for pretty printing, one extra kref shouldn't
- 	 * cause a camel stampede!)
- 	 */
- 	rcu_read_lock();
- 	tl = rcu_dereference(rq->timeline);
- 	if (!kref_get_unless_zero(&tl->kref))
- 		tl = NULL;
- 	rcu_read_unlock();
-
- 	return tl;
- }
-
  static const char *repr_timer(const struct timer_list *t)
  {
  	if (!READ_ONCE(t->expires))
···

  	if (engine->id == RENDER_CLASS && IS_GEN_RANGE(dev_priv, 4, 7))
  		drm_printf(m, "\tCCID: 0x%08x\n", ENGINE_READ(engine, CCID));
+ 	if (HAS_EXECLISTS(dev_priv)) {
+ 		drm_printf(m, "\tEL_STAT_HI: 0x%08x\n",
+ 			   ENGINE_READ(engine, RING_EXECLIST_STATUS_HI));
+ 		drm_printf(m, "\tEL_STAT_LO: 0x%08x\n",
+ 			   ENGINE_READ(engine, RING_EXECLIST_STATUS_LO));
+ 	}
  	drm_printf(m, "\tRING_START: 0x%08x\n",
  		   ENGINE_READ(engine, RING_START));
  	drm_printf(m, "\tRING_HEAD: 0x%08x\n",
···
  			int len;

  			len = scnprintf(hdr, sizeof(hdr),
- 					"\t\tActive[%d]: ",
- 					(int)(port - execlists->active));
- 			if (!i915_request_signaled(rq)) {
- 				struct intel_timeline *tl = get_timeline(rq);
-
- 				len += scnprintf(hdr + len, sizeof(hdr) - len,
- 						 "ring:{start:%08x, hwsp:%08x, seqno:%08x, runtime:%llums}, ",
- 						 i915_ggtt_offset(rq->ring->vma),
- 						 tl ? tl->hwsp_offset : 0,
- 						 hwsp_seqno(rq),
- 						 DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
- 								       1000 * 1000));
-
- 				if (tl)
- 					intel_timeline_put(tl);
- 			}
+ 					"\t\tActive[%d]: ccid:%08x, ",
+ 					(int)(port - execlists->active),
+ 					upper_32_bits(rq->context->lrc_desc));
+ 			len += print_ring(hdr + len, sizeof(hdr) - len, rq);
  			scnprintf(hdr + len, sizeof(hdr) - len, "rq: ");
  			print_request(m, rq, hdr);
  		}
  		for (port = execlists->pending; (rq = *port); port++) {
- 			struct intel_timeline *tl = get_timeline(rq);
- 			char hdr[80];
+ 			char hdr[160];
+ 			int len;

- 			snprintf(hdr, sizeof(hdr),
- 				 "\t\tPending[%d] ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
- 				 (int)(port - execlists->pending),
- 				 i915_ggtt_offset(rq->ring->vma),
- 				 tl ? tl->hwsp_offset : 0,
- 				 hwsp_seqno(rq));
+ 			len = scnprintf(hdr, sizeof(hdr),
+ 					"\t\tPending[%d]: ccid:%08x, ",
+ 					(int)(port - execlists->pending),
+ 					upper_32_bits(rq->context->lrc_desc));
+ 			len += print_ring(hdr + len, sizeof(hdr) - len, rq);
+ 			scnprintf(hdr + len, sizeof(hdr) - len, "rq: ");
  			print_request(m, rq, hdr);
-
- 			if (tl)
- 				intel_timeline_put(tl);
  		}
  		rcu_read_unlock();
  		execlists_active_unlock_bh(execlists);
+1 -1
drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
···
  	delay = msecs_to_jiffies_timeout(delay);
  	if (delay >= HZ)
  		delay = round_jiffies_up_relative(delay);
- 	schedule_delayed_work(&engine->heartbeat.work, delay);
+ 	mod_delayed_work(system_wq, &engine->heartbeat.work, delay);

  	return true;
  }
+1 -1
drivers/gpu/drm/i915/gt/intel_engine_pm.c
···
  	 * Ergo, if we put ourselves on the timelines.active_list
  	 * (se intel_timeline_enter()) before we increment the
  	 * engine->wakeref.count, we may see the request completion and retire
- 	 * it causing an undeflow of the engine->wakeref.
+ 	 * it causing an underflow of the engine->wakeref.
  	 */
  	flags = __timeline_mark_lock(ce);
  	GEM_BUG_ON(atomic_read(&ce->timeline->active_count) < 0);
+6
drivers/gpu/drm/i915/gt/intel_engine_pm.h
···
  	intel_wakeref_put_async(&engine->wakeref);
  }

+ static inline void intel_engine_pm_put_delay(struct intel_engine_cs *engine,
+ 					     unsigned long delay)
+ {
+ 	intel_wakeref_put_delay(&engine->wakeref, delay);
+ }
+
  static inline void intel_engine_pm_flush(struct intel_engine_cs *engine)
  {
  	intel_wakeref_unlock_wait(&engine->wakeref);
+12
drivers/gpu/drm/i915/gt/intel_engine_types.h
···
  	struct i915_priolist default_priolist;

  	/**
+ 	 * @yield: CCID at the time of the last semaphore-wait interrupt.
+ 	 *
+ 	 * Instead of leaving a semaphore busy-spinning on an engine, we would
+ 	 * like to switch to another ready context, i.e. yielding the semaphore
+ 	 * timeslice.
+ 	 */
+ 	u32 yield;
+
+ 	/**
  	 * @error_interrupt: CS Master EIR
  	 *
  	 * The CS generates an interrupt when it detects an error. We capture
···
  		struct list_head requests;
  		struct list_head hold; /* ready requests, but on hold */
  	} active;
+
+ 	/* keep a request in reserve for a [pm] barrier under oom */
+ 	struct i915_request *request_pool;

  	struct llist_head barrier_tasks;

+31 -21
drivers/gpu/drm/i915/gt/intel_ggtt.c
···
  				 ggtt->mappable_end);
  	}

- 	i915_ggtt_init_fences(ggtt);
+ 	intel_ggtt_init_fences(ggtt);

  	return 0;
  }
···
   */
  void i915_ggtt_driver_release(struct drm_i915_private *i915)
  {
+ 	struct i915_ggtt *ggtt = &i915->ggtt;
  	struct pagevec *pvec;

- 	fini_aliasing_ppgtt(&i915->ggtt);
+ 	fini_aliasing_ppgtt(ggtt);

- 	ggtt_cleanup_hw(&i915->ggtt);
+ 	intel_ggtt_fini_fences(ggtt);
+ 	ggtt_cleanup_hw(ggtt);

  	pvec = &i915->mm.wc_stash.pvec;
  	if (pvec->nr) {
···
  	else
  		ggtt->gsm = ioremap_wc(phys_addr, size);
  	if (!ggtt->gsm) {
- 		DRM_ERROR("Failed to map the ggtt page table\n");
+ 		drm_err(&i915->drm, "Failed to map the ggtt page table\n");
  		return -ENOMEM;
  	}

  	ret = setup_scratch_page(&ggtt->vm, GFP_DMA32);
  	if (ret) {
- 		DRM_ERROR("Scratch setup failed\n");
+ 		drm_err(&i915->drm, "Scratch setup failed\n");
  		/* iounmap will also get called at remove, but meh */
  		iounmap(ggtt->gsm);
  		return ret;
···
  	if (!err)
  		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(39));
  	if (err)
- 		DRM_ERROR("Can't set DMA mask/consistent mask (%d)\n", err);
+ 		drm_err(&i915->drm,
+ 			"Can't set DMA mask/consistent mask (%d)\n", err);

  	pci_read_config_word(pdev, SNB_GMCH_CTRL, &snb_gmch_ctl);
  	if (IS_CHERRYVIEW(i915))
···
  	 * just a coarse sanity check.
  	 */
  	if (ggtt->mappable_end < (64<<20) || ggtt->mappable_end > (512<<20)) {
- 		DRM_ERROR("Unknown GMADR size (%pa)\n", &ggtt->mappable_end);
+ 		drm_err(&i915->drm, "Unknown GMADR size (%pa)\n",
+ 			&ggtt->mappable_end);
  		return -ENXIO;
  	}

···
  	if (!err)
  		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(40));
  	if (err)
- 		DRM_ERROR("Can't set DMA mask/consistent mask (%d)\n", err);
+ 		drm_err(&i915->drm,
+ 			"Can't set DMA mask/consistent mask (%d)\n", err);
  	pci_read_config_word(pdev, SNB_GMCH_CTRL, &snb_gmch_ctl);

  	size = gen6_get_total_gtt_size(snb_gmch_ctl);
···

  	ret = intel_gmch_probe(i915->bridge_dev, i915->drm.pdev, NULL);
  	if (!ret) {
- 		DRM_ERROR("failed to set up gmch\n");
+ 		drm_err(&i915->drm, "failed to set up gmch\n");
  		return -EIO;
  	}

···
  	ggtt->vm.vma_ops.clear_pages = clear_pages;

  	if (unlikely(ggtt->do_idle_maps))
- 		dev_notice(i915->drm.dev,
+ 		drm_notice(&i915->drm,
  			   "Applying Ironlake quirks for intel_iommu\n");

  	return 0;
···
  		return ret;

  	if ((ggtt->vm.total - 1) >> 32) {
- 		DRM_ERROR("We never expected a Global GTT with more than 32bits"
- 			  " of address space! Found %lldM!\n",
- 			  ggtt->vm.total >> 20);
+ 		drm_err(&i915->drm,
+ 			"We never expected a Global GTT with more than 32bits"
+ 			" of address space! Found %lldM!\n",
+ 			ggtt->vm.total >> 20);
  		ggtt->vm.total = 1ULL << 32;
  		ggtt->mappable_end =
  			min_t(u64, ggtt->mappable_end, ggtt->vm.total);
  	}

  	if (ggtt->mappable_end > ggtt->vm.total) {
- 		DRM_ERROR("mappable aperture extends past end of GGTT,"
- 			  " aperture=%pa, total=%llx\n",
- 			  &ggtt->mappable_end, ggtt->vm.total);
+ 		drm_err(&i915->drm,
+ 			"mappable aperture extends past end of GGTT,"
+ 			" aperture=%pa, total=%llx\n",
+ 			&ggtt->mappable_end, ggtt->vm.total);
  		ggtt->mappable_end = ggtt->vm.total;
  	}

  	/* GMADR is the PCI mmio aperture into the global GTT. */
- 	DRM_DEBUG_DRIVER("GGTT size = %lluM\n", ggtt->vm.total >> 20);
- 	DRM_DEBUG_DRIVER("GMADR size = %lluM\n", (u64)ggtt->mappable_end >> 20);
- 	DRM_DEBUG_DRIVER("DSM size = %lluM\n",
- 			 (u64)resource_size(&intel_graphics_stolen_res) >> 20);
+ 	drm_dbg(&i915->drm, "GGTT size = %lluM\n", ggtt->vm.total >> 20);
+ 	drm_dbg(&i915->drm, "GMADR size = %lluM\n",
+ 		(u64)ggtt->mappable_end >> 20);
+ 	drm_dbg(&i915->drm, "DSM size = %lluM\n",
+ 		(u64)resource_size(&intel_graphics_stolen_res) >> 20);

  	return 0;
  }
···
  		return ret;

  	if (intel_vtd_active())
- 		dev_info(i915->drm.dev, "VT-d active for gfx access\n");
+ 		drm_info(&i915->drm, "VT-d active for gfx access\n");

  	return 0;
  }
···

  	if (INTEL_GEN(ggtt->vm.i915) >= 8)
  		setup_private_pat(ggtt->vm.gt->uncore);
+
+ 	intel_ggtt_restore_fences(ggtt);
  }

  static struct scatterlist *
+1 -2
drivers/gpu/drm/i915/gt/intel_gt.c
···
  {
  	__intel_gt_disable(gt);

- 	intel_uc_fini_hw(&gt->uc);
- 	intel_uc_fini(&gt->uc);
+ 	intel_uc_driver_remove(&gt->uc);

  	intel_engines_release(gt);
  }
+13 -2
drivers/gpu/drm/i915/gt/intel_gt_irq.c
···
  		}
  	}

+ 	if (iir & GT_WAIT_SEMAPHORE_INTERRUPT) {
+ 		WRITE_ONCE(engine->execlists.yield,
+ 			   ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI));
+ 		ENGINE_TRACE(engine, "semaphore yield: %08x\n",
+ 			     engine->execlists.yield);
+ 		if (del_timer(&engine->execlists.timer))
+ 			tasklet = true;
+ 	}
+
  	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
  		tasklet = true;

···
  	const u32 irqs =
  		GT_CS_MASTER_ERROR_INTERRUPT |
  		GT_RENDER_USER_INTERRUPT |
- 		GT_CONTEXT_SWITCH_INTERRUPT;
+ 		GT_CONTEXT_SWITCH_INTERRUPT |
+ 		GT_WAIT_SEMAPHORE_INTERRUPT;
  	struct intel_uncore *uncore = gt->uncore;
  	const u32 dmask = irqs << 16 | irqs;
  	const u32 smask = irqs << 16;
···
  	const u32 irqs =
  		GT_CS_MASTER_ERROR_INTERRUPT |
  		GT_RENDER_USER_INTERRUPT |
- 		GT_CONTEXT_SWITCH_INTERRUPT;
+ 		GT_CONTEXT_SWITCH_INTERRUPT |
+ 		GT_WAIT_SEMAPHORE_INTERRUPT;
  	const u32 gt_interrupts[] = {
  		irqs << GEN8_RCS_IRQ_SHIFT | irqs << GEN8_BCS_IRQ_SHIFT,
  		irqs << GEN8_VCS0_IRQ_SHIFT | irqs << GEN8_VCS1_IRQ_SHIFT,
+3 -2
drivers/gpu/drm/i915/gt/intel_gt_pm.c
···
  	/* Only when the HW is re-initialised, can we replay the requests */
  	err = intel_gt_init_hw(gt);
  	if (err) {
- 		dev_err(gt->i915->drm.dev,
+ 		drm_err(&gt->i915->drm,
  			"Failed to initialize GPU, declaring it wedged!\n");
  		goto err_wedged;
  	}
···

  		intel_engine_pm_put(engine);
  		if (err) {
- 			dev_err(gt->i915->drm.dev,
+ 			drm_err(&gt->i915->drm,
  				"Failed to restart %s (%d)\n",
  				engine->name, err);
  			goto err_wedged;
···
  {
  	GT_TRACE(gt, "\n");
  	intel_gt_init_swizzling(gt);
+ 	intel_ggtt_restore_fences(gt->ggtt);

  	return intel_uc_runtime_resume(&gt->uc);
  }
+1 -1
drivers/gpu/drm/i915/gt/intel_gt_requests.c
···
  	for_each_engine(engine, gt, id) {
  		intel_engine_flush_submission(engine);
  		active |= flush_work(&engine->retire_work);
- 		active |= flush_work(&engine->wakeref.work);
+ 		active |= flush_delayed_work(&engine->wakeref.work);
  	}

  	return active;
+3 -2
drivers/gpu/drm/i915/gt/intel_gtt.h
···
  #include <drm/drm_mm.h>

  #include "gt/intel_reset.h"
- #include "i915_gem_fence_reg.h"
  #include "i915_selftest.h"
  #include "i915_vma_types.h"

···

  #define GEN8_PDE_IPS_64K BIT(11)
  #define GEN8_PDE_PS_2M   BIT(7)
+
+ struct i915_fence_reg;

  #define for_each_sgt_daddr(__dp, __iter, __sgt) \
  	__for_each_sgt_daddr(__dp, __iter, __sgt, I915_GTT_PAGE_SIZE)
···
  	u32 pin_bias;

  	unsigned int num_fences;
- 	struct i915_fence_reg fence_regs[I915_MAX_NUM_FENCES];
+ 	struct i915_fence_reg *fence_regs;
  	struct list_head fence_list;

  	/**
+199 -53
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 238 238 const struct intel_engine_cs *engine, 239 239 u32 head); 240 240 241 + static u32 intel_context_get_runtime(const struct intel_context *ce) 242 + { 243 + /* 244 + * We can use either ppHWSP[16] which is recorded before the context 245 + * switch (and so excludes the cost of context switches) or use the 246 + * value from the context image itself, which is saved/restored earlier 247 + * and so includes the cost of the save. 248 + */ 249 + return READ_ONCE(ce->lrc_reg_state[CTX_TIMESTAMP]); 250 + } 251 + 241 252 static void mark_eio(struct i915_request *rq) 242 253 { 243 254 if (i915_request_completed(rq)) ··· 1165 1154 engine->context_size - PAGE_SIZE); 1166 1155 1167 1156 execlists_init_reg_state(regs, ce, engine, ce->ring, false); 1157 + ce->runtime.last = intel_context_get_runtime(ce); 1168 1158 } 1169 1159 1170 1160 static void reset_active(struct i915_request *rq, ··· 1205 1193 1206 1194 /* We've switched away, so this should be a no-op, but intent matters */ 1207 1195 ce->lrc_desc |= CTX_DESC_FORCE_RESTORE; 1208 - } 1209 - 1210 - static u32 intel_context_get_runtime(const struct intel_context *ce) 1211 - { 1212 - /* 1213 - * We can use either ppHWSP[16] which is recorded before the context 1214 - * switch (and so excludes the cost of context switches) or use the 1215 - * value from the context image itself, which is saved/restored earlier 1216 - * and so includes the cost of the save. 1217 - */ 1218 - return READ_ONCE(ce->lrc_reg_state[CTX_TIMESTAMP]); 1219 1196 } 1220 1197 1221 1198 static void st_update_runtime_underflow(struct intel_context *ce, s32 dt) ··· 1416 1415 } 1417 1416 } 1418 1417 1418 + static __maybe_unused char * 1419 + dump_port(char *buf, int buflen, const char *prefix, struct i915_request *rq) 1420 + { 1421 + if (!rq) 1422 + return ""; 1423 + 1424 + snprintf(buf, buflen, "%s%llx:%lld%s prio %d", 1425 + prefix, 1426 + rq->fence.context, rq->fence.seqno, 1427 + i915_request_completed(rq) ? "!" : 1428 + i915_request_started(rq) ? 
"*" : 1429 + "", 1430 + rq_prio(rq)); 1431 + 1432 + return buf; 1433 + } 1434 + 1419 1435 static __maybe_unused void 1420 1436 trace_ports(const struct intel_engine_execlists *execlists, 1421 1437 const char *msg, ··· 1440 1422 { 1441 1423 const struct intel_engine_cs *engine = 1442 1424 container_of(execlists, typeof(*engine), execlists); 1425 + char __maybe_unused p0[40], p1[40]; 1443 1426 1444 1427 if (!ports[0]) 1445 1428 return; 1446 1429 1447 - ENGINE_TRACE(engine, "%s { %llx:%lld%s, %llx:%lld }\n", msg, 1448 - ports[0]->fence.context, 1449 - ports[0]->fence.seqno, 1450 - i915_request_completed(ports[0]) ? "!" : 1451 - i915_request_started(ports[0]) ? "*" : 1452 - "", 1453 - ports[1] ? ports[1]->fence.context : 0, 1454 - ports[1] ? ports[1]->fence.seqno : 0); 1430 + ENGINE_TRACE(engine, "%s { %s%s }\n", msg, 1431 + dump_port(p0, sizeof(p0), "", ports[0]), 1432 + dump_port(p1, sizeof(p1), ", ", ports[1])); 1455 1433 } 1456 1434 1457 1435 static inline bool ··· 1768 1754 } 1769 1755 1770 1756 static bool 1771 - need_timeslice(struct intel_engine_cs *engine, const struct i915_request *rq) 1757 + need_timeslice(const struct intel_engine_cs *engine, 1758 + const struct i915_request *rq) 1772 1759 { 1773 1760 int hint; 1774 1761 ··· 1781 1766 hint = max(hint, rq_prio(list_next_entry(rq, sched.link))); 1782 1767 1783 1768 return hint >= effective_prio(rq); 1769 + } 1770 + 1771 + static bool 1772 + timeslice_yield(const struct intel_engine_execlists *el, 1773 + const struct i915_request *rq) 1774 + { 1775 + /* 1776 + * Once bitten, forever smitten! 1777 + * 1778 + * If the active context ever busy-waited on a semaphore, 1779 + * it will be treated as a hog until the end of its timeslice (i.e. 1780 + * until it is scheduled out and replaced by a new submission, 1781 + * possibly even its own lite-restore). 
The HW only sends an interrupt 1782 + * on the first miss, and we do not know if that semaphore has been 1783 + * signaled, or even if it is now stuck on another semaphore. Play 1784 + * safe, yield if it might be stuck -- it will be given a fresh 1785 + * timeslice in the near future. 1786 + */ 1787 + return upper_32_bits(rq->context->lrc_desc) == READ_ONCE(el->yield); 1788 + } 1789 + 1790 + static bool 1791 + timeslice_expired(const struct intel_engine_execlists *el, 1792 + const struct i915_request *rq) 1793 + { 1794 + return timer_expired(&el->timer) || timeslice_yield(el, rq); 1784 1795 } 1785 1796 1786 1797 static int ··· 1824 1783 return READ_ONCE(engine->props.timeslice_duration_ms); 1825 1784 } 1826 1785 1827 - static unsigned long 1828 - active_timeslice(const struct intel_engine_cs *engine) 1786 + static unsigned long active_timeslice(const struct intel_engine_cs *engine) 1829 1787 { 1830 1788 const struct intel_engine_execlists *execlists = &engine->execlists; 1831 1789 const struct i915_request *rq = *execlists->active; ··· 1840 1800 1841 1801 static void set_timeslice(struct intel_engine_cs *engine) 1842 1802 { 1803 + unsigned long duration; 1804 + 1843 1805 if (!intel_engine_has_timeslices(engine)) 1844 1806 return; 1845 1807 1846 - set_timer_ms(&engine->execlists.timer, active_timeslice(engine)); 1808 + duration = active_timeslice(engine); 1809 + ENGINE_TRACE(engine, "bump timeslicing, interval:%lu", duration); 1810 + 1811 + set_timer_ms(&engine->execlists.timer, duration); 1847 1812 } 1848 1813 1849 1814 static void start_timeslice(struct intel_engine_cs *engine) 1850 1815 { 1851 1816 struct intel_engine_execlists *execlists = &engine->execlists; 1852 - int prio = queue_prio(execlists); 1817 + const int prio = queue_prio(execlists); 1818 + unsigned long duration; 1819 + 1820 + if (!intel_engine_has_timeslices(engine)) 1821 + return; 1853 1822 1854 1823 WRITE_ONCE(execlists->switch_priority_hint, prio); 1855 1824 if (prio == INT_MIN) ··· 1867 1818 if
(timer_pending(&execlists->timer)) 1868 1819 return; 1869 1820 1870 - set_timer_ms(&execlists->timer, timeslice(engine)); 1821 + duration = timeslice(engine); 1822 + ENGINE_TRACE(engine, 1823 + "start timeslicing, prio:%d, interval:%lu", 1824 + prio, duration); 1825 + 1826 + set_timer_ms(&execlists->timer, duration); 1871 1827 } 1872 1828 1873 1829 static void record_preemption(struct intel_engine_execlists *execlists) ··· 1969 1915 * of trouble. 1970 1916 */ 1971 1917 active = READ_ONCE(execlists->active); 1972 - while ((last = *active) && i915_request_completed(last)) 1973 - active++; 1974 1918 1975 - if (last) { 1919 + /* 1920 + * In theory we can skip over completed contexts that have not 1921 + * yet been processed by events (as those events are in flight): 1922 + * 1923 + * while ((last = *active) && i915_request_completed(last)) 1924 + * active++; 1925 + * 1926 + * However, the GPU cannot handle this as it will ultimately 1927 + * find itself trying to jump back into a context it has just 1928 + * completed and barf. 
1929 + */ 1930 + 1931 + if ((last = *active)) { 1976 1932 if (need_preempt(engine, last, rb)) { 1933 + if (i915_request_completed(last)) { 1934 + tasklet_hi_schedule(&execlists->tasklet); 1935 + return; 1936 + } 1937 + 1977 1938 ENGINE_TRACE(engine, 1978 1939 "preempting last=%llx:%lld, prio=%d, hint=%d\n", 1979 1940 last->fence.context, ··· 2015 1946 2016 1947 last = NULL; 2017 1948 } else if (need_timeslice(engine, last) && 2018 - timer_expired(&engine->execlists.timer)) { 1949 + timeslice_expired(execlists, last)) { 1950 + if (i915_request_completed(last)) { 1951 + tasklet_hi_schedule(&execlists->tasklet); 1952 + return; 1953 + } 1954 + 2019 1955 ENGINE_TRACE(engine, 2020 - "expired last=%llx:%lld, prio=%d, hint=%d\n", 1956 + "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n", 2021 1957 last->fence.context, 2022 1958 last->fence.seqno, 2023 1959 last->sched.attr.priority, 2024 - execlists->queue_priority_hint); 1960 + execlists->queue_priority_hint, 1961 + yesno(timeslice_yield(execlists, last))); 2025 1962 2026 1963 ring_set_paused(engine, 1); 2027 1964 defer_active(engine); ··· 2288 2213 } 2289 2214 clear_ports(port + 1, last_port - port); 2290 2215 2216 + WRITE_ONCE(execlists->yield, -1); 2291 2217 execlists_submit_ports(engine); 2292 2218 set_preempt_timeout(engine, *active); 2293 2219 } else { ··· 2384 2308 return *csb & (GEN8_CTX_STATUS_IDLE_ACTIVE | GEN8_CTX_STATUS_PREEMPTED); 2385 2309 } 2386 2310 2311 + static inline void flush_hwsp(const struct i915_request *rq) 2312 + { 2313 + mb(); 2314 + clflush((void *)READ_ONCE(rq->hwsp_seqno)); 2315 + mb(); 2316 + } 2317 + 2387 2318 static void process_csb(struct intel_engine_cs *engine) 2388 2319 { 2389 2320 struct intel_engine_execlists * const execlists = &engine->execlists; ··· 2467 2384 if (promote) { 2468 2385 struct i915_request * const *old = execlists->active; 2469 2386 2470 - GEM_BUG_ON(!assert_pending_valid(execlists, "promote")); 2471 - 2472 2387 ring_set_paused(engine, 0); 2473 2388 2474 2389 /* 
Point active to the new ELSP; prevent overwriting */ ··· 2479 2398 execlists_schedule_out(*old++); 2480 2399 2481 2400 /* switch pending to inflight */ 2401 + GEM_BUG_ON(!assert_pending_valid(execlists, "promote")); 2482 2402 memcpy(execlists->inflight, 2483 2403 execlists->pending, 2484 2404 execlists_num_ports(execlists) * ··· 2501 2419 * user interrupt and CSB is processed. 2502 2420 */ 2503 2421 if (GEM_SHOW_DEBUG() && 2504 - !i915_request_completed(*execlists->active) && 2505 - !reset_in_progress(execlists)) { 2506 - struct i915_request *rq __maybe_unused = 2507 - *execlists->active; 2422 + !i915_request_completed(*execlists->active)) { 2423 + struct i915_request *rq = *execlists->active; 2508 2424 const u32 *regs __maybe_unused = 2509 2425 rq->context->lrc_reg_state; 2426 + 2427 + /* 2428 + * Flush the breadcrumb before crying foul. 2429 + * 2430 + * Since we have hit this on icl and seen the 2431 + * breadcrumb advance as we print out the debug 2432 + * info (so the problem corrected itself without 2433 + * lasting damage), and we know that icl suffers 2434 + * from missing global observation points in 2435 + * execlists, presume that affects even more 2436 + * coherency. 2437 + */ 2438 + flush_hwsp(rq); 2510 2439 2511 2440 ENGINE_TRACE(engine, 2512 2441 "ring:{start:0x%08x, head:%04x, tail:%04x, ctl:%08x, mode:%08x}\n", ··· 2539 2446 regs[CTX_RING_HEAD], 2540 2447 regs[CTX_RING_TAIL]); 2541 2448 2542 - GEM_BUG_ON("context completed before request"); 2449 + /* Still? Declare it caput! 
*/ 2450 + if (!i915_request_completed(rq) && 2451 + !reset_in_progress(execlists)) 2452 + GEM_BUG_ON("context completed before request"); 2543 2453 } 2544 2454 2545 2455 execlists_schedule_out(*execlists->active++); ··· 2832 2736 return NULL; 2833 2737 } 2834 2738 2739 + static struct i915_request * 2740 + active_context(struct intel_engine_cs *engine, u32 ccid) 2741 + { 2742 + const struct intel_engine_execlists * const el = &engine->execlists; 2743 + struct i915_request * const *port, *rq; 2744 + 2745 + /* 2746 + * Use the most recent result from process_csb(), but just in case 2747 + * we trigger an error (via interrupt) before the first CS event has 2748 + * been written, peek at the next submission. 2749 + */ 2750 + 2751 + for (port = el->active; (rq = *port); port++) { 2752 + if (upper_32_bits(rq->context->lrc_desc) == ccid) { 2753 + ENGINE_TRACE(engine, 2754 + "ccid found at active:%zd\n", 2755 + port - el->active); 2756 + return rq; 2757 + } 2758 + } 2759 + 2760 + for (port = el->pending; (rq = *port); port++) { 2761 + if (upper_32_bits(rq->context->lrc_desc) == ccid) { 2762 + ENGINE_TRACE(engine, 2763 + "ccid found at pending:%zd\n", 2764 + port - el->pending); 2765 + return rq; 2766 + } 2767 + } 2768 + 2769 + ENGINE_TRACE(engine, "ccid:%x not found\n", ccid); 2770 + return NULL; 2771 + } 2772 + 2773 + static u32 active_ccid(struct intel_engine_cs *engine) 2774 + { 2775 + return ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI); 2776 + } 2777 + 2835 2778 static bool execlists_capture(struct intel_engine_cs *engine) 2836 2779 { 2837 2780 struct execlists_capture *cap; ··· 2888 2753 return true; 2889 2754 2890 2755 spin_lock_irq(&engine->active.lock); 2891 - cap->rq = execlists_active(&engine->execlists); 2756 + cap->rq = active_context(engine, active_ccid(engine)); 2892 2757 if (cap->rq) { 2893 2758 cap->rq = active_request(cap->rq->context->timeline, cap->rq); 2894 2759 cap->rq = i915_request_get_rcu(cap->rq); ··· 3036 2901 if (reset_in_progress(execlists)) 
3037 2902 return; /* defer until we restart the engine following reset */ 3038 2903 3039 - if (execlists->tasklet.func == execlists_submission_tasklet) 3040 - __execlists_submission_tasklet(engine); 3041 - else 3042 - tasklet_hi_schedule(&execlists->tasklet); 2904 + /* Hopefully we clear execlists->pending[] to let us through */ 2905 + if (READ_ONCE(execlists->pending[0]) && 2906 + tasklet_trylock(&execlists->tasklet)) { 2907 + process_csb(engine); 2908 + tasklet_unlock(&execlists->tasklet); 2909 + } 2910 + 2911 + __execlists_submission_tasklet(engine); 3043 2912 } 3044 2913 3045 2914 static void submit_queue(struct intel_engine_cs *engine, ··· 3129 2990 vaddr += engine->context_size; 3130 2991 3131 2992 if (memchr_inv(vaddr, CONTEXT_REDZONE, I915_GTT_PAGE_SIZE)) 3132 - dev_err_once(engine->i915->drm.dev, 2993 + drm_err_once(&engine->i915->drm, 3133 2994 "%s context redzone overwritten!\n", 3134 2995 engine->name); 3135 2996 } ··· 3581 3442 3582 3443 ret = lrc_setup_wa_ctx(engine); 3583 3444 if (ret) { 3584 - DRM_DEBUG_DRIVER("Failed to setup context WA page: %d\n", ret); 3445 + drm_dbg(&engine->i915->drm, 3446 + "Failed to setup context WA page: %d\n", ret); 3585 3447 return ret; 3586 3448 } 3587 3449 ··· 3625 3485 3626 3486 status = ENGINE_READ(engine, RING_ESR); 3627 3487 if (unlikely(status)) { 3628 - dev_err(engine->i915->drm.dev, 3488 + drm_err(&engine->i915->drm, 3629 3489 "engine '%s' resumed still in error: %08x\n", 3630 3490 engine->name, status); 3631 3491 __intel_gt_reset(engine->gt, engine->mask); ··· 3689 3549 bool unexpected = false; 3690 3550 3691 3551 if (ENGINE_READ_FW(engine, RING_MI_MODE) & STOP_RING) { 3692 - DRM_DEBUG_DRIVER("STOP_RING still set in RING_MI_MODE\n"); 3552 + drm_dbg(&engine->i915->drm, 3553 + "STOP_RING still set in RING_MI_MODE\n"); 3693 3554 unexpected = true; 3694 3555 } 3695 3556 ··· 3750 3609 * 3751 3610 * FIXME: Wa for more modern gens needs to be validated 3752 3611 */ 3612 + ring_set_paused(engine, 1); 3753 3613 
intel_engine_stop_cs(engine); 3754 3614 } 3755 3615 ··· 4591 4449 engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift; 4592 4450 engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift; 4593 4451 engine->irq_keep_mask |= GT_CS_MASTER_ERROR_INTERRUPT << shift; 4452 + engine->irq_keep_mask |= GT_WAIT_SEMAPHORE_INTERRUPT << shift; 4594 4453 } 4595 4454 4596 4455 static void rcs_submission_override(struct intel_engine_cs *engine) ··· 4636 4493 * because we only expect rare glitches but nothing 4637 4494 * critical to prevent us from using GPU 4638 4495 */ 4639 - DRM_ERROR("WA batch buffer initialization failed\n"); 4496 + drm_err(&i915->drm, "WA batch buffer initialization failed\n"); 4640 4497 4641 4498 if (HAS_LOGICAL_RING_ELSQ(i915)) { 4642 4499 execlists->submit_reg = uncore->regs + ··· 4718 4575 regs[CTX_CONTEXT_CONTROL] = ctl; 4719 4576 4720 4577 regs[CTX_RING_CTL] = RING_CTL_SIZE(ring->size) | RING_VALID; 4578 + regs[CTX_TIMESTAMP] = 0; 4721 4579 } 4722 4580 4723 4581 static void init_wa_bb_reg_state(u32 * const regs, ··· 4812 4668 vaddr = i915_gem_object_pin_map(ctx_obj, I915_MAP_WB); 4813 4669 if (IS_ERR(vaddr)) { 4814 4670 ret = PTR_ERR(vaddr); 4815 - DRM_DEBUG_DRIVER("Could not map object pages! (%d)\n", ret); 4671 + drm_dbg(&engine->i915->drm, 4672 + "Could not map object pages! 
(%d)\n", ret); 4816 4673 return ret; 4817 4674 } 4818 4675 ··· 4906 4761 4907 4762 ret = populate_lr_context(ce, ctx_obj, engine, ring); 4908 4763 if (ret) { 4909 - DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret); 4764 + drm_dbg(&engine->i915->drm, 4765 + "Failed to populate LRC: %d\n", ret); 4910 4766 goto error_ring_free; 4911 4767 } 4912 4768 ··· 4959 4813 if (ve->context.state) 4960 4814 __execlists_context_fini(&ve->context); 4961 4815 intel_context_fini(&ve->context); 4816 + 4817 + intel_engine_free_request_pool(&ve->base); 4962 4818 4963 4819 kfree(ve->bonds); 4964 4820 kfree(ve); ··· 5142 4994 submit_engine: 5143 4995 GEM_BUG_ON(RB_EMPTY_NODE(&node->rb)); 5144 4996 node->prio = prio; 5145 - if (first && prio > sibling->execlists.queue_priority_hint) { 5146 - sibling->execlists.queue_priority_hint = prio; 4997 + if (first && prio > sibling->execlists.queue_priority_hint) 5147 4998 tasklet_hi_schedule(&sibling->execlists.tasklet); 5148 - } 5149 4999 5150 5000 spin_unlock(&sibling->active.lock); 5151 5001 }
+22 -17
drivers/gpu/drm/i915/gt/intel_rc6.c
··· 246 246 ret = sandybridge_pcode_read(i915, GEN6_PCODE_READ_RC6VIDS, 247 247 &rc6vids, NULL); 248 248 if (IS_GEN(i915, 6) && ret) { 249 - DRM_DEBUG_DRIVER("Couldn't check for BIOS workaround\n"); 249 + drm_dbg(&i915->drm, "Couldn't check for BIOS workaround\n"); 250 250 } else if (IS_GEN(i915, 6) && 251 251 (GEN6_DECODE_RC6_VID(rc6vids & 0xff) < 450)) { 252 - DRM_DEBUG_DRIVER("You should update your BIOS. Correcting minimum rc6 voltage (%dmV->%dmV)\n", 253 - GEN6_DECODE_RC6_VID(rc6vids & 0xff), 450); 252 + drm_dbg(&i915->drm, 253 + "You should update your BIOS. Correcting minimum rc6 voltage (%dmV->%dmV)\n", 254 + GEN6_DECODE_RC6_VID(rc6vids & 0xff), 450); 254 255 rc6vids &= 0xffff00; 255 256 rc6vids |= GEN6_ENCODE_RC6_VID(450); 256 257 ret = sandybridge_pcode_write(i915, GEN6_PCODE_WRITE_RC6VIDS, rc6vids); 257 258 if (ret) 258 - DRM_ERROR("Couldn't fix incorrect rc6 voltage\n"); 259 + drm_err(&i915->drm, 260 + "Couldn't fix incorrect rc6 voltage\n"); 259 261 } 260 262 } 261 263 ··· 265 263 static int chv_rc6_init(struct intel_rc6 *rc6) 266 264 { 267 265 struct intel_uncore *uncore = rc6_to_uncore(rc6); 266 + struct drm_i915_private *i915 = rc6_to_i915(rc6); 268 267 resource_size_t pctx_paddr, paddr; 269 268 resource_size_t pctx_size = 32 * SZ_1K; 270 269 u32 pcbr; 271 270 272 271 pcbr = intel_uncore_read(uncore, VLV_PCBR); 273 272 if ((pcbr >> VLV_PCBR_ADDR_SHIFT) == 0) { 274 - DRM_DEBUG_DRIVER("BIOS didn't set up PCBR, fixing up\n"); 275 - paddr = rc6_to_i915(rc6)->dsm.end + 1 - pctx_size; 273 + drm_dbg(&i915->drm, "BIOS didn't set up PCBR, fixing up\n"); 274 + paddr = i915->dsm.end + 1 - pctx_size; 276 275 GEM_BUG_ON(paddr > U32_MAX); 277 276 278 277 pctx_paddr = (paddr & ~4095); ··· 307 304 goto out; 308 305 } 309 306 310 - DRM_DEBUG_DRIVER("BIOS didn't set up PCBR, fixing up\n"); 307 + drm_dbg(&i915->drm, "BIOS didn't set up PCBR, fixing up\n"); 311 308 312 309 /* 313 310 * From the Gunit register HAS: ··· 319 316 */ 320 317 pctx = 
i915_gem_object_create_stolen(i915, pctx_size); 321 318 if (IS_ERR(pctx)) { 322 - DRM_DEBUG("not enough stolen space for PCTX, disabling\n"); 319 + drm_dbg(&i915->drm, 320 + "not enough stolen space for PCTX, disabling\n"); 323 321 return PTR_ERR(pctx); 324 322 } 325 323 ··· 402 398 rc_sw_target = intel_uncore_read(uncore, GEN6_RC_STATE); 403 399 rc_sw_target &= RC_SW_TARGET_STATE_MASK; 404 400 rc_sw_target >>= RC_SW_TARGET_STATE_SHIFT; 405 - DRM_DEBUG_DRIVER("BIOS enabled RC states: " 401 + drm_dbg(&i915->drm, "BIOS enabled RC states: " 406 402 "HW_CTRL %s HW_RC6 %s SW_TARGET_STATE %x\n", 407 403 onoff(rc_ctl & GEN6_RC_CTL_HW_ENABLE), 408 404 onoff(rc_ctl & GEN6_RC_CTL_RC6_ENABLE), 409 405 rc_sw_target); 410 406 411 407 if (!(intel_uncore_read(uncore, RC6_LOCATION) & RC6_CTX_IN_DRAM)) { 412 - DRM_DEBUG_DRIVER("RC6 Base location not set properly.\n"); 408 + drm_dbg(&i915->drm, "RC6 Base location not set properly.\n"); 413 409 enable_rc6 = false; 414 410 } 415 411 ··· 421 417 intel_uncore_read(uncore, RC6_CTX_BASE) & RC6_CTX_BASE_MASK; 422 418 if (!(rc6_ctx_base >= i915->dsm_reserved.start && 423 419 rc6_ctx_base + PAGE_SIZE < i915->dsm_reserved.end)) { 424 - DRM_DEBUG_DRIVER("RC6 Base address not as expected.\n"); 420 + drm_dbg(&i915->drm, "RC6 Base address not as expected.\n"); 425 421 enable_rc6 = false; 426 422 } 427 423 ··· 429 425 (intel_uncore_read(uncore, PWRCTX_MAXCNT_VCSUNIT0) & IDLE_TIME_MASK) > 1 && 430 426 (intel_uncore_read(uncore, PWRCTX_MAXCNT_BCSUNIT) & IDLE_TIME_MASK) > 1 && 431 427 (intel_uncore_read(uncore, PWRCTX_MAXCNT_VECSUNIT) & IDLE_TIME_MASK) > 1)) { 432 - DRM_DEBUG_DRIVER("Engine Idle wait time not set properly.\n"); 428 + drm_dbg(&i915->drm, 429 + "Engine Idle wait time not set properly.\n"); 433 430 enable_rc6 = false; 434 431 } 435 432 436 433 if (!intel_uncore_read(uncore, GEN8_PUSHBUS_CONTROL) || 437 434 !intel_uncore_read(uncore, GEN8_PUSHBUS_ENABLE) || 438 435 !intel_uncore_read(uncore, GEN8_PUSHBUS_SHIFT)) { 439 - 
DRM_DEBUG_DRIVER("Pushbus not setup properly.\n"); 436 + drm_dbg(&i915->drm, "Pushbus not setup properly.\n"); 440 437 enable_rc6 = false; 441 438 } 442 439 443 440 if (!intel_uncore_read(uncore, GEN6_GFXPAUSE)) { 444 - DRM_DEBUG_DRIVER("GFX pause not setup properly.\n"); 441 + drm_dbg(&i915->drm, "GFX pause not setup properly.\n"); 445 442 enable_rc6 = false; 446 443 } 447 444 448 445 if (!intel_uncore_read(uncore, GEN8_MISC_CTRL0)) { 449 - DRM_DEBUG_DRIVER("GPM control not setup properly.\n"); 446 + drm_dbg(&i915->drm, "GPM control not setup properly.\n"); 450 447 enable_rc6 = false; 451 448 } 452 449 ··· 468 463 return false; 469 464 470 465 if (IS_GEN9_LP(i915) && !bxt_check_bios_rc6_setup(rc6)) { 471 - dev_notice(i915->drm.dev, 466 + drm_notice(&i915->drm, 472 467 "RC6 and powersaving disabled by BIOS\n"); 473 468 return false; 474 469 } ··· 500 495 if (intel_uncore_read(rc6_to_uncore(rc6), GEN8_RC6_CTX_INFO)) 501 496 return false; 502 497 503 - dev_notice(i915->drm.dev, 498 + drm_notice(&i915->drm, 504 499 "RC6 context corruption, disabling runtime power management\n"); 505 500 return true; 506 501 }
+1 -1
drivers/gpu/drm/i915/gt/intel_renderstate.c
··· 102 102 } 103 103 104 104 if (rodata->reloc[reloc_index] != -1) { 105 - DRM_ERROR("only %d relocs resolved\n", reloc_index); 105 + drm_err(&i915->drm, "only %d relocs resolved\n", reloc_index); 106 106 goto err; 107 107 } 108 108
+8 -8
drivers/gpu/drm/i915/gt/intel_reset.c
··· 109 109 goto out; 110 110 } 111 111 112 - dev_notice(ctx->i915->drm.dev, 112 + drm_notice(&ctx->i915->drm, 113 113 "%s context reset due to GPU hang\n", 114 114 ctx->name); 115 115 ··· 755 755 for_each_engine(engine, gt, id) 756 756 __intel_engine_reset(engine, stalled_mask & engine->mask); 757 757 758 - i915_gem_restore_fences(gt->ggtt); 758 + intel_ggtt_restore_fences(gt->ggtt); 759 759 760 760 return err; 761 761 } ··· 1031 1031 goto unlock; 1032 1032 1033 1033 if (reason) 1034 - dev_notice(gt->i915->drm.dev, 1034 + drm_notice(&gt->i915->drm, 1035 1035 "Resetting chip for %s\n", reason); 1036 1036 atomic_inc(&gt->i915->gpu_error.reset_count); 1037 1037 ··· 1039 1039 1040 1040 if (!intel_has_gpu_reset(gt)) { 1041 1041 if (i915_modparams.reset) 1042 - dev_err(gt->i915->drm.dev, "GPU reset not supported\n"); 1042 + drm_err(&gt->i915->drm, "GPU reset not supported\n"); 1043 1043 else 1044 1044 drm_dbg(&gt->i915->drm, "GPU reset disabled\n"); 1045 1045 goto error; ··· 1049 1049 intel_runtime_pm_disable_interrupts(gt->i915); 1050 1050 1051 1051 if (do_reset(gt, stalled_mask)) { 1052 - dev_err(gt->i915->drm.dev, "Failed to reset chip\n"); 1052 + drm_err(&gt->i915->drm, "Failed to reset chip\n"); 1053 1053 goto taint; 1054 1054 } 1055 1055 ··· 1111 1111 /** 1112 1112 * intel_engine_reset - reset GPU engine to recover from a hang 1113 1113 * @engine: engine to reset 1114 - * @msg: reason for GPU reset; or NULL for no dev_notice() 1114 + * @msg: reason for GPU reset; or NULL for no drm_notice() 1115 1115 * 1116 1116 * Reset a specific GPU engine. Useful if a hang is detected. 1117 1117 * Returns zero on successful reset or otherwise an error code. 
··· 1136 1136 reset_prepare_engine(engine); 1137 1137 1138 1138 if (msg) 1139 - dev_notice(engine->i915->drm.dev, 1139 + drm_notice(&engine->i915->drm, 1140 1140 "Resetting %s for %s\n", engine->name, msg); 1141 1141 atomic_inc(&engine->i915->gpu_error.reset_engine_count[engine->uabi_class]); 1142 1142 ··· 1381 1381 { 1382 1382 struct intel_wedge_me *w = container_of(work, typeof(*w), work.work); 1383 1383 1384 - dev_err(w->gt->i915->drm.dev, 1384 + drm_err(&w->gt->i915->drm, 1385 1385 "%s timed out, cancelling all in-flight rendering.\n", 1386 1386 w->name); 1387 1387 intel_gt_set_wedged(w->gt);
+3 -2
drivers/gpu/drm/i915/gt/intel_ring.h
··· 88 88 static inline void 89 89 assert_ring_tail_valid(const struct intel_ring *ring, unsigned int tail) 90 90 { 91 + unsigned int head = READ_ONCE(ring->head); 92 + 91 93 GEM_BUG_ON(!intel_ring_offset_valid(ring, tail)); 92 94 93 95 /* ··· 107 105 * into the same cacheline as ring->head. 108 106 */ 109 107 #define cacheline(a) round_down(a, CACHELINE_BYTES) 110 - GEM_BUG_ON(cacheline(tail) == cacheline(ring->head) && 111 - tail < ring->head); 108 + GEM_BUG_ON(cacheline(tail) == cacheline(head) && tail < head); 112 109 #undef cacheline 113 110 } 114 111
+18 -15
drivers/gpu/drm/i915/gt/intel_ring_submission.c
··· 577 577 RING_INSTPM(engine->mmio_base), 578 578 INSTPM_SYNC_FLUSH, 0, 579 579 1000)) 580 - DRM_ERROR("%s: wait for SyncFlush to complete for TLB invalidation timed out\n", 581 - engine->name); 580 + drm_err(&dev_priv->drm, 581 + "%s: wait for SyncFlush to complete for TLB invalidation timed out\n", 582 + engine->name); 582 583 } 583 584 584 585 static void ring_setup_status_page(struct intel_engine_cs *engine) ··· 602 601 MODE_IDLE, 603 602 MODE_IDLE, 604 603 1000)) { 605 - DRM_ERROR("%s : timed out trying to stop ring\n", 606 - engine->name); 604 + drm_err(&dev_priv->drm, 605 + "%s : timed out trying to stop ring\n", 606 + engine->name); 607 607 608 608 /* 609 609 * Sometimes we observe that the idle flag is not ··· 663 661 /* WaClearRingBufHeadRegAtInit:ctg,elk */ 664 662 if (!stop_ring(engine)) { 665 663 /* G45 ring initialization often fails to reset head to zero */ 666 - DRM_DEBUG_DRIVER("%s head not reset to zero " 664 + drm_dbg(&dev_priv->drm, "%s head not reset to zero " 665 + "ctl %08x head %08x tail %08x start %08x\n", 666 + engine->name, 667 + ENGINE_READ(engine, RING_CTL), 668 + ENGINE_READ(engine, RING_HEAD), 669 + ENGINE_READ(engine, RING_TAIL), 670 + ENGINE_READ(engine, RING_START)); 671 + 672 + if (!stop_ring(engine)) { 673 + drm_err(&dev_priv->drm, 674 + "failed to set %s head to zero " 667 675 "ctl %08x head %08x tail %08x start %08x\n", 668 676 engine->name, 669 677 ENGINE_READ(engine, RING_CTL), 670 678 ENGINE_READ(engine, RING_HEAD), 671 679 ENGINE_READ(engine, RING_TAIL), 672 680 ENGINE_READ(engine, RING_START)); 673 - 674 - if (!stop_ring(engine)) { 675 - DRM_ERROR("failed to set %s head to zero " 676 - "ctl %08x head %08x tail %08x start %08x\n", 677 - engine->name, 678 - ENGINE_READ(engine, RING_CTL), 679 - ENGINE_READ(engine, RING_HEAD), 680 - ENGINE_READ(engine, RING_TAIL), 681 - ENGINE_READ(engine, RING_START)); 682 681 ret = -EIO; 683 682 goto out; 684 683 } ··· 722 719 RING_CTL(engine->mmio_base), 723 720 RING_VALID, RING_VALID, 
724 721 50)) { 725 - DRM_ERROR("%s initialization failed " 722 + drm_err(&dev_priv->drm, "%s initialization failed " 726 723 "ctl %08x (valid? %d) head %08x [%08x] tail %08x [%08x] start %08x [expected %08x]\n", 727 724 engine->name, 728 725 ENGINE_READ(engine, RING_CTL),
+54 -51
drivers/gpu/drm/i915/gt/intel_rps.c
··· 81 81 events = (GEN6_PM_RP_UP_THRESHOLD | 82 82 GEN6_PM_RP_DOWN_THRESHOLD | 83 83 GEN6_PM_RP_DOWN_TIMEOUT); 84 - 85 84 WRITE_ONCE(rps->pm_events, events); 85 + 86 86 spin_lock_irq(&gt->irq_lock); 87 87 gen6_gt_pm_enable_irq(gt, rps->pm_events); 88 88 spin_unlock_irq(&gt->irq_lock); 89 89 90 - set(gt->uncore, GEN6_PMINTRMSK, rps_pm_mask(rps, rps->cur_freq)); 90 + intel_uncore_write(gt->uncore, 91 + GEN6_PMINTRMSK, rps_pm_mask(rps, rps->last_freq)); 91 92 } 92 93 93 94 static void gen6_rps_reset_interrupts(struct intel_rps *rps) ··· 121 120 struct intel_gt *gt = rps_to_gt(rps); 122 121 123 122 WRITE_ONCE(rps->pm_events, 0); 124 - set(gt->uncore, GEN6_PMINTRMSK, rps_pm_sanitize_mask(rps, ~0u)); 123 + 124 + intel_uncore_write(gt->uncore, 125 + GEN6_PMINTRMSK, rps_pm_sanitize_mask(rps, ~0u)); 125 126 126 127 spin_lock_irq(&gt->irq_lock); 127 128 gen6_gt_pm_disable_irq(gt, GEN6_PM_RPS_EVENTS); ··· 186 183 fmin = (rgvmodectl & MEMMODE_FMIN_MASK); 187 184 fstart = (rgvmodectl & MEMMODE_FSTART_MASK) >> 188 185 MEMMODE_FSTART_SHIFT; 189 - DRM_DEBUG_DRIVER("fmax: %d, fmin: %d, fstart: %d\n", 190 - fmax, fmin, fstart); 186 + drm_dbg(&i915->drm, "fmax: %d, fmin: %d, fstart: %d\n", 187 + fmax, fmin, fstart); 191 188 192 189 rps->min_freq = fmax; 190 + rps->efficient_freq = fstart; 193 191 rps->max_freq = fmin; 194 - 195 - rps->idle_freq = rps->min_freq; 196 - rps->cur_freq = rps->idle_freq; 197 192 } 198 193 199 194 static unsigned long ··· 454 453 455 454 if (wait_for_atomic((intel_uncore_read(uncore, MEMSWCTL) & 456 455 MEMCTL_CMD_STS) == 0, 10)) 457 - DRM_ERROR("stuck trying to change perf mode\n"); 456 + drm_err(&uncore->i915->drm, 457 + "stuck trying to change perf mode\n"); 458 458 mdelay(1); 459 459 460 460 gen5_rps_set(rps, rps->cur_freq); ··· 714 712 715 713 void intel_rps_unpark(struct intel_rps *rps) 716 714 { 717 - u8 freq; 718 - 719 715 if (!rps->enabled) 720 716 return; 721 717 ··· 725 725 726 726 WRITE_ONCE(rps->active, true); 727 727 728 - freq = 
max(rps->cur_freq, rps->efficient_freq), 729 - freq = clamp(freq, rps->min_freq_softlimit, rps->max_freq_softlimit); 730 - intel_rps_set(rps, freq); 728 + intel_rps_set(rps, 729 + clamp(rps->cur_freq, 730 + rps->min_freq_softlimit, 731 + rps->max_freq_softlimit)); 731 732 732 733 rps->last_adj = 0; 733 734 ··· 894 893 895 894 static bool rps_reset(struct intel_rps *rps) 896 895 { 896 + struct drm_i915_private *i915 = rps_to_i915(rps); 897 897 /* force a reset */ 898 898 rps->power.mode = -1; 899 899 rps->last_freq = -1; 900 900 901 901 if (rps_set(rps, rps->min_freq, true)) { 902 - DRM_ERROR("Failed to reset RPS to initial values\n"); 902 + drm_err(&i915->drm, "Failed to reset RPS to initial values\n"); 903 903 return false; 904 904 } 905 905 ··· 1051 1049 drm_WARN_ONCE(&i915->drm, (val & GPLLENABLE) == 0, 1052 1050 "GPLL not enabled\n"); 1053 1051 1054 - DRM_DEBUG_DRIVER("GPLL enabled? %s\n", yesno(val & GPLLENABLE)); 1055 - DRM_DEBUG_DRIVER("GPU status: 0x%08x\n", val); 1052 + drm_dbg(&i915->drm, "GPLL enabled? %s\n", yesno(val & GPLLENABLE)); 1053 + drm_dbg(&i915->drm, "GPU status: 0x%08x\n", val); 1056 1054 1057 1055 return rps_reset(rps); 1058 1056 } ··· 1149 1147 drm_WARN_ONCE(&i915->drm, (val & GPLLENABLE) == 0, 1150 1148 "GPLL not enabled\n"); 1151 1149 1152 - DRM_DEBUG_DRIVER("GPLL enabled? %s\n", yesno(val & GPLLENABLE)); 1153 - DRM_DEBUG_DRIVER("GPU status: 0x%08x\n", val); 1150 + drm_dbg(&i915->drm, "GPLL enabled? 
%s\n", yesno(val & GPLLENABLE)); 1151 + drm_dbg(&i915->drm, "GPU status: 0x%08x\n", val); 1154 1152 1155 1153 return rps_reset(rps); 1156 1154 } ··· 1307 1305 CCK_GPLL_CLOCK_CONTROL, 1308 1306 i915->czclk_freq); 1309 1307 1310 - DRM_DEBUG_DRIVER("GPLL reference freq: %d kHz\n", rps->gpll_ref_freq); 1308 + drm_dbg(&i915->drm, "GPLL reference freq: %d kHz\n", 1309 + rps->gpll_ref_freq); 1311 1310 } 1312 1311 1313 1312 static void vlv_rps_init(struct intel_rps *rps) ··· 1336 1333 i915->mem_freq = 1333; 1337 1334 break; 1338 1335 } 1339 - DRM_DEBUG_DRIVER("DDR speed: %d MHz\n", i915->mem_freq); 1336 + drm_dbg(&i915->drm, "DDR speed: %d MHz\n", i915->mem_freq); 1340 1337 1341 1338 rps->max_freq = vlv_rps_max_freq(rps); 1342 1339 rps->rp0_freq = rps->max_freq; 1343 - DRM_DEBUG_DRIVER("max GPU freq: %d MHz (%u)\n", 1344 - intel_gpu_freq(rps, rps->max_freq), 1345 - rps->max_freq); 1340 + drm_dbg(&i915->drm, "max GPU freq: %d MHz (%u)\n", 1341 + intel_gpu_freq(rps, rps->max_freq), rps->max_freq); 1346 1342 1347 1343 rps->efficient_freq = vlv_rps_rpe_freq(rps); 1348 - DRM_DEBUG_DRIVER("RPe GPU freq: %d MHz (%u)\n", 1349 - intel_gpu_freq(rps, rps->efficient_freq), 1350 - rps->efficient_freq); 1344 + drm_dbg(&i915->drm, "RPe GPU freq: %d MHz (%u)\n", 1345 + intel_gpu_freq(rps, rps->efficient_freq), rps->efficient_freq); 1351 1346 1352 1347 rps->rp1_freq = vlv_rps_guar_freq(rps); 1353 - DRM_DEBUG_DRIVER("RP1(Guar Freq) GPU freq: %d MHz (%u)\n", 1354 - intel_gpu_freq(rps, rps->rp1_freq), 1355 - rps->rp1_freq); 1348 + drm_dbg(&i915->drm, "RP1(Guar Freq) GPU freq: %d MHz (%u)\n", 1349 + intel_gpu_freq(rps, rps->rp1_freq), rps->rp1_freq); 1356 1350 1357 1351 rps->min_freq = vlv_rps_min_freq(rps); 1358 - DRM_DEBUG_DRIVER("min GPU freq: %d MHz (%u)\n", 1359 - intel_gpu_freq(rps, rps->min_freq), 1360 - rps->min_freq); 1352 + drm_dbg(&i915->drm, "min GPU freq: %d MHz (%u)\n", 1353 + intel_gpu_freq(rps, rps->min_freq), rps->min_freq); 1361 1354 1362 1355 vlv_iosf_sb_put(i915, 1363 1356 
BIT(VLV_IOSF_SB_PUNIT) | ··· 1383 1384 i915->mem_freq = 1600; 1384 1385 break; 1385 1386 } 1386 - DRM_DEBUG_DRIVER("DDR speed: %d MHz\n", i915->mem_freq); 1387 + drm_dbg(&i915->drm, "DDR speed: %d MHz\n", i915->mem_freq); 1387 1388 1388 1389 rps->max_freq = chv_rps_max_freq(rps); 1389 1390 rps->rp0_freq = rps->max_freq; 1390 - DRM_DEBUG_DRIVER("max GPU freq: %d MHz (%u)\n", 1391 - intel_gpu_freq(rps, rps->max_freq), 1392 - rps->max_freq); 1391 + drm_dbg(&i915->drm, "max GPU freq: %d MHz (%u)\n", 1392 + intel_gpu_freq(rps, rps->max_freq), rps->max_freq); 1393 1393 1394 1394 rps->efficient_freq = chv_rps_rpe_freq(rps); 1395 - DRM_DEBUG_DRIVER("RPe GPU freq: %d MHz (%u)\n", 1396 - intel_gpu_freq(rps, rps->efficient_freq), 1397 - rps->efficient_freq); 1395 + drm_dbg(&i915->drm, "RPe GPU freq: %d MHz (%u)\n", 1396 + intel_gpu_freq(rps, rps->efficient_freq), rps->efficient_freq); 1398 1397 1399 1398 rps->rp1_freq = chv_rps_guar_freq(rps); 1400 - DRM_DEBUG_DRIVER("RP1(Guar) GPU freq: %d MHz (%u)\n", 1401 - intel_gpu_freq(rps, rps->rp1_freq), 1402 - rps->rp1_freq); 1399 + drm_dbg(&i915->drm, "RP1(Guar) GPU freq: %d MHz (%u)\n", 1400 + intel_gpu_freq(rps, rps->rp1_freq), rps->rp1_freq); 1403 1401 1404 1402 rps->min_freq = chv_rps_min_freq(rps); 1405 - DRM_DEBUG_DRIVER("min GPU freq: %d MHz (%u)\n", 1406 - intel_gpu_freq(rps, rps->min_freq), 1407 - rps->min_freq); 1403 + drm_dbg(&i915->drm, "min GPU freq: %d MHz (%u)\n", 1404 + intel_gpu_freq(rps, rps->min_freq), rps->min_freq); 1408 1405 1409 1406 vlv_iosf_sb_put(i915, 1410 1407 BIT(VLV_IOSF_SB_PUNIT) | ··· 1463 1468 { 1464 1469 struct intel_rps *rps = container_of(work, typeof(*rps), work); 1465 1470 struct intel_gt *gt = rps_to_gt(rps); 1471 + struct drm_i915_private *i915 = rps_to_i915(rps); 1466 1472 bool client_boost = false; 1467 1473 int new_freq, adj, min, max; 1468 1474 u32 pm_iir = 0; ··· 1539 1543 new_freq = clamp_t(int, new_freq, min, max); 1540 1544 1541 1545 if (intel_rps_set(rps, new_freq)) { 1542 - 
DRM_DEBUG_DRIVER("Failed to set new GPU frequency\n"); 1546 + drm_dbg(&i915->drm, "Failed to set new GPU frequency\n"); 1543 1547 rps->last_adj = 0; 1544 1548 } 1545 1549 ··· 1661 1665 sandybridge_pcode_read(i915, GEN6_READ_OC_PARAMS, 1662 1666 &params, NULL); 1663 1667 if (params & BIT(31)) { /* OC supported */ 1664 - DRM_DEBUG_DRIVER("Overclocking supported, max: %dMHz, overclock: %dMHz\n", 1665 - (rps->max_freq & 0xff) * 50, 1666 - (params & 0xff) * 50); 1668 + drm_dbg(&i915->drm, 1669 + "Overclocking supported, max: %dMHz, overclock: %dMHz\n", 1670 + (rps->max_freq & 0xff) * 50, 1671 + (params & 0xff) * 50); 1667 1672 rps->max_freq = params & 0xff; 1668 1673 } 1669 1674 } ··· 1672 1675 /* Finally allow us to boost to max by default */ 1673 1676 rps->boost_freq = rps->max_freq; 1674 1677 rps->idle_freq = rps->min_freq; 1675 - rps->cur_freq = rps->idle_freq; 1678 + 1679 + /* Start in the middle, from here we will autotune based on workload */ 1680 + rps->cur_freq = rps->efficient_freq; 1676 1681 1677 1682 rps->pm_intrmsk_mbz = 0; 1678 1683 ··· 1926 1927 return ret; 1927 1928 } 1928 1929 EXPORT_SYMBOL_GPL(i915_gpu_turbo_disable); 1930 + 1931 + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) 1932 + #include "selftest_rps.c" 1933 + #endif
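The overclocking probe in this hunk decodes the GEN6_READ_OC_PARAMS word: bit 31 flags OC support and the low byte is the maximum ratio in 50 MHz units. A standalone sketch of just that decode (the helper names are illustrative, not the i915 API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative decode of the OC parameter word read above:
 * bit 31 set => overclocking supported; low byte = max ratio,
 * where one ratio step is 50 MHz. Not the real i915 API. */
static bool oc_supported(uint32_t params)
{
	return params & (1u << 31);
}

static unsigned int oc_max_mhz(uint32_t params)
{
	return (params & 0xff) * 50; /* ratio -> MHz */
}
```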
+7 -26
drivers/gpu/drm/i915/gt/intel_sseu.c
··· 65 65 { 66 66 const struct sseu_dev_info *sseu = &RUNTIME_INFO(i915)->sseu; 67 67 bool subslice_pg = sseu->has_subslice_pg; 68 - struct intel_sseu ctx_sseu; 69 68 u8 slices, subslices; 70 69 u32 rpcs = 0; 71 70 ··· 77 78 78 79 /* 79 80 * If i915/perf is active, we want a stable powergating configuration 80 - * on the system. 81 - * 82 - * We could choose full enablement, but on ICL we know there are use 83 - * cases which disable slices for functional, apart for performance 84 - * reasons. So in this case we select a known stable subset. 81 + * on the system. Use the configuration pinned by i915/perf. 85 82 */ 86 - if (!i915->perf.exclusive_stream) { 87 - ctx_sseu = *req_sseu; 88 - } else { 89 - ctx_sseu = intel_sseu_from_device_info(sseu); 83 + if (i915->perf.exclusive_stream) 84 + req_sseu = &i915->perf.sseu; 90 85 91 - if (IS_GEN(i915, 11)) { 92 - /* 93 - * We only need subslice count so it doesn't matter 94 - * which ones we select - just turn off low bits in the 95 - * amount of half of all available subslices per slice. 
96 - */ 97 - ctx_sseu.subslice_mask = 98 - ~(~0 << (hweight8(ctx_sseu.subslice_mask) / 2)); 99 - ctx_sseu.slice_mask = 0x1; 100 - } 101 - } 102 - 103 - slices = hweight8(ctx_sseu.slice_mask); 104 - subslices = hweight8(ctx_sseu.subslice_mask); 86 + slices = hweight8(req_sseu->slice_mask); 87 + subslices = hweight8(req_sseu->subslice_mask); 105 88 106 89 /* 107 90 * Since the SScount bitfield in GEN8_R_PWR_CLK_STATE is only three bits ··· 156 175 if (sseu->has_eu_pg) { 157 176 u32 val; 158 177 159 - val = ctx_sseu.min_eus_per_subslice << GEN8_RPCS_EU_MIN_SHIFT; 178 + val = req_sseu->min_eus_per_subslice << GEN8_RPCS_EU_MIN_SHIFT; 160 179 GEM_BUG_ON(val & ~GEN8_RPCS_EU_MIN_MASK); 161 180 val &= GEN8_RPCS_EU_MIN_MASK; 162 181 163 182 rpcs |= val; 164 183 165 - val = ctx_sseu.max_eus_per_subslice << GEN8_RPCS_EU_MAX_SHIFT; 184 + val = req_sseu->max_eus_per_subslice << GEN8_RPCS_EU_MAX_SHIFT; 166 185 GEM_BUG_ON(val & ~GEN8_RPCS_EU_MAX_MASK); 167 186 val &= GEN8_RPCS_EU_MAX_MASK; 168 187
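The Gen11 branch removed by this hunk pinned a stable configuration by keeping half of the available subslices: `~(~0 << (hweight8(mask) / 2))` builds a mask with that many low bits set. The bit trick in isolation, with `hweight8()` modelled by a plain population count (this is a sketch of the removed logic, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Population count standing in for the kernel's hweight8(). */
static unsigned int hweight8(uint8_t v)
{
	unsigned int n = 0;

	while (v) {
		n += v & 1;
		v >>= 1;
	}
	return n;
}

/* Keep half of the enabled subslices: a mask whose low
 * hweight8(mask)/2 bits are set, as in the removed Gen11 path. */
static uint8_t half_subslice_mask(uint8_t mask)
{
	return (uint8_t)~(~0u << (hweight8(mask) / 2));
}
```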
+10 -2
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 119 119 spin_unlock_irqrestore(&gt->hwsp_lock, flags); 120 120 } 121 121 122 + static void __rcu_cacheline_free(struct rcu_head *rcu) 123 + { 124 + struct intel_timeline_cacheline *cl = 125 + container_of(rcu, typeof(*cl), rcu); 126 + 127 + i915_active_fini(&cl->active); 128 + kfree(cl); 129 + } 130 + 122 131 static void __idle_cacheline_free(struct intel_timeline_cacheline *cl) 123 132 { 124 133 GEM_BUG_ON(!i915_active_is_idle(&cl->active)); ··· 136 127 i915_vma_put(cl->hwsp->vma); 137 128 __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS)); 138 129 139 - i915_active_fini(&cl->active); 140 - kfree_rcu(cl, rcu); 130 + call_rcu(&cl->rcu, __rcu_cacheline_free); 141 131 } 142 132 143 133 __i915_active_call
+12 -9
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 837 837 intel_uncore_read(&i915->uncore, GEN10_MIRROR_FUSE3) & 838 838 GEN10_L3BANK_MASK; 839 839 840 - DRM_DEBUG_DRIVER("L3 fuse = %x\n", l3_fuse); 840 + drm_dbg(&i915->drm, "L3 fuse = %x\n", l3_fuse); 841 841 l3_en = ~(l3_fuse << GEN10_L3BANK_PAIR_COUNT | l3_fuse); 842 842 } else { 843 843 l3_en = ~0; ··· 846 846 slice = fls(sseu->slice_mask) - 1; 847 847 subslice = fls(l3_en & intel_sseu_get_subslices(sseu, slice)); 848 848 if (!subslice) { 849 - DRM_WARN("No common index found between subslice mask %x and L3 bank mask %x!\n", 849 + drm_warn(&i915->drm, 850 + "No common index found between subslice mask %x and L3 bank mask %x!\n", 850 851 intel_sseu_get_subslices(sseu, slice), l3_en); 851 852 subslice = fls(l3_en); 852 853 drm_WARN_ON(&i915->drm, !subslice); ··· 862 861 mcr_mask = GEN8_MCR_SLICE_MASK | GEN8_MCR_SUBSLICE_MASK; 863 862 } 864 863 865 - DRM_DEBUG_DRIVER("MCR slice/subslice = %x\n", mcr); 864 + drm_dbg(&i915->drm, "MCR slice/subslice = %x\n", mcr); 866 865 867 866 wa_write_masked_or(wal, GEN8_MCR_SELECTOR, mcr_mask, mcr); 868 867 } ··· 943 942 static void 944 943 tgl_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal) 945 944 { 945 + wa_init_mcr(i915, wal); 946 + 946 947 /* Wa_1409420604:tgl */ 947 948 if (IS_TGL_REVID(i915, TGL_REVID_A0, TGL_REVID_A0)) 948 949 wa_write_or(wal, ··· 1382 1379 GEN7_FF_THREAD_MODE, 1383 1380 GEN12_FF_TESSELATION_DOP_GATE_DISABLE); 1384 1381 1385 - /* 1386 - * Wa_1409085225:tgl 1387 - * Wa_14010229206:tgl 1388 - */ 1389 - wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH); 1390 - 1391 1382 /* Wa_1408615072:tgl */ 1392 1383 wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2, 1393 1384 VSUNIT_CLKGATE_DIS_TGL); ··· 1399 1402 wa_masked_en(wal, 1400 1403 GEN9_CS_DEBUG_MODE1, 1401 1404 FF_DOP_CLOCK_GATE_DISABLE); 1405 + 1406 + /* 1407 + * Wa_1409085225:tgl 1408 + * Wa_14010229206:tgl 1409 + */ 1410 + wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH); 1402 1411 } 1403 1412 1404 1413 
if (IS_GEN(i915, 11)) {
+2
drivers/gpu/drm/i915/gt/selftest_gt_pm.c
··· 7 7 8 8 #include "selftest_llc.h" 9 9 #include "selftest_rc6.h" 10 + #include "selftest_rps.h" 10 11 11 12 static int live_gt_resume(void *arg) 12 13 { ··· 53 52 { 54 53 static const struct i915_subtest tests[] = { 55 54 SUBTEST(live_rc6_manual), 55 + SUBTEST(live_rps_interrupt), 56 56 SUBTEST(live_gt_resume), 57 57 }; 58 58
+74 -45
drivers/gpu/drm/i915/gt/selftest_lrc.c
··· 68 68 engine->props.heartbeat_interval_ms = saved; 69 69 } 70 70 71 + static bool is_active(struct i915_request *rq) 72 + { 73 + if (i915_request_is_active(rq)) 74 + return true; 75 + 76 + if (i915_request_on_hold(rq)) 77 + return true; 78 + 79 + if (i915_request_started(rq)) 80 + return true; 81 + 82 + return false; 83 + } 84 + 71 85 static int wait_for_submit(struct intel_engine_cs *engine, 72 86 struct i915_request *rq, 73 87 unsigned long timeout) 74 88 { 75 89 timeout += jiffies; 76 90 do { 77 - cond_resched(); 91 + bool done = time_after(jiffies, timeout); 92 + 93 + if (i915_request_completed(rq)) /* that was quick! */ 94 + return 0; 95 + 96 + /* Wait until the HW has acknowleged the submission (or err) */ 78 97 intel_engine_flush_submission(engine); 79 - 80 - if (READ_ONCE(engine->execlists.pending[0])) 81 - continue; 82 - 83 - if (i915_request_is_active(rq)) 98 + if (!READ_ONCE(engine->execlists.pending[0]) && is_active(rq)) 84 99 return 0; 85 100 86 - if (i915_request_started(rq)) /* that was quick! 
*/ 87 - return 0; 88 - } while (time_before(jiffies, timeout)); 101 + if (done) 102 + return -ETIME; 89 103 90 - return -ETIME; 104 + cond_resched(); 105 + } while (1); 91 106 } 92 107 93 108 static int wait_for_reset(struct intel_engine_cs *engine, ··· 649 634 error_repr(p->error[i])); 650 635 651 636 if (!i915_request_started(client[i])) { 652 - pr_debug("%s: %s request not stated!\n", 653 - engine->name, 654 - error_repr(p->error[i])); 637 + pr_err("%s: %s request not started!\n", 638 + engine->name, 639 + error_repr(p->error[i])); 655 640 err = -ETIME; 656 641 goto out; 657 642 } ··· 659 644 /* Kick the tasklet to process the error */ 660 645 intel_engine_flush_submission(engine); 661 646 if (client[i]->fence.error != p->error[i]) { 662 - pr_err("%s: %s request completed with wrong error code: %d\n", 647 + pr_err("%s: %s request (%s) with wrong error code: %d\n", 663 648 engine->name, 664 649 error_repr(p->error[i]), 650 + i915_request_completed(client[i]) ? "completed" : "running", 665 651 client[i]->fence.error); 666 652 err = -EINVAL; 667 653 goto out; ··· 945 929 goto err; 946 930 } 947 931 948 - cs = intel_ring_begin(rq, 10); 932 + cs = intel_ring_begin(rq, 14); 949 933 if (IS_ERR(cs)) { 950 934 err = PTR_ERR(cs); 951 935 goto err; ··· 957 941 *cs++ = MI_SEMAPHORE_WAIT | 958 942 MI_SEMAPHORE_GLOBAL_GTT | 959 943 MI_SEMAPHORE_POLL | 960 - MI_SEMAPHORE_SAD_NEQ_SDD; 961 - *cs++ = 0; 944 + MI_SEMAPHORE_SAD_GTE_SDD; 945 + *cs++ = idx; 962 946 *cs++ = offset; 963 947 *cs++ = 0; 964 948 ··· 966 950 *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base)); 967 951 *cs++ = offset + idx * sizeof(u32); 968 952 *cs++ = 0; 953 + 954 + *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; 955 + *cs++ = offset; 956 + *cs++ = 0; 957 + *cs++ = idx + 1; 969 958 970 959 intel_ring_advance(rq, cs); 971 960 ··· 1005 984 1006 985 for_each_engine(engine, gt, id) { 1007 986 enum { A1, A2, B1 }; 1008 - enum { X = 1, Y, Z }; 987 + enum { X = 1, Z, Y }; 1009 988 struct 
i915_request *rq[3] = {}; 1010 989 struct intel_context *ce; 1011 990 unsigned long heartbeat; ··· 1038 1017 goto err; 1039 1018 } 1040 1019 1041 - rq[0] = create_rewinder(ce, NULL, slot, 1); 1020 + rq[0] = create_rewinder(ce, NULL, slot, X); 1042 1021 if (IS_ERR(rq[0])) { 1043 1022 intel_context_put(ce); 1044 1023 goto err; 1045 1024 } 1046 1025 1047 - rq[1] = create_rewinder(ce, NULL, slot, 2); 1026 + rq[1] = create_rewinder(ce, NULL, slot, Y); 1048 1027 intel_context_put(ce); 1049 1028 if (IS_ERR(rq[1])) 1050 1029 goto err; ··· 1062 1041 goto err; 1063 1042 } 1064 1043 1065 - rq[2] = create_rewinder(ce, rq[0], slot, 3); 1044 + rq[2] = create_rewinder(ce, rq[0], slot, Z); 1066 1045 intel_context_put(ce); 1067 1046 if (IS_ERR(rq[2])) 1068 1047 goto err; ··· 1073 1052 engine->name); 1074 1053 goto err; 1075 1054 } 1076 - GEM_BUG_ON(!timer_pending(&engine->execlists.timer)); 1077 1055 1078 1056 /* ELSP[] = { { A:rq1, A:rq2 }, { B:rq1 } } */ 1079 - GEM_BUG_ON(!i915_request_is_active(rq[A1])); 1080 - GEM_BUG_ON(!i915_request_is_active(rq[A2])); 1081 - GEM_BUG_ON(!i915_request_is_active(rq[B1])); 1082 - 1083 - /* Wait for the timeslice to kick in */ 1084 - del_timer(&engine->execlists.timer); 1085 - tasklet_hi_schedule(&engine->execlists.tasklet); 1086 - intel_engine_flush_submission(engine); 1087 - 1057 + if (i915_request_is_active(rq[A2])) { /* semaphore yielded! 
*/ 1058 + /* Wait for the timeslice to kick in */ 1059 + del_timer(&engine->execlists.timer); 1060 + tasklet_hi_schedule(&engine->execlists.tasklet); 1061 + intel_engine_flush_submission(engine); 1062 + } 1088 1063 /* -> ELSP[] = { { A:rq1 }, { B:rq1 } } */ 1089 1064 GEM_BUG_ON(!i915_request_is_active(rq[A1])); 1090 1065 GEM_BUG_ON(!i915_request_is_active(rq[B1])); ··· 1245 1228 if (err) 1246 1229 goto err_rq; 1247 1230 1248 - intel_engine_flush_submission(engine); 1231 + /* Wait until we ack the release_queue and start timeslicing */ 1232 + do { 1233 + cond_resched(); 1234 + intel_engine_flush_submission(engine); 1235 + } while (READ_ONCE(engine->execlists.pending[0])); 1236 + 1249 1237 if (!READ_ONCE(engine->execlists.timer.expires) && 1238 + execlists_active(&engine->execlists) == rq && 1250 1239 !i915_request_completed(rq)) { 1251 1240 struct drm_printer p = 1252 1241 drm_info_printer(gt->i915->drm.dev); ··· 2053 2030 if (!IS_ACTIVE(CONFIG_DRM_I915_PREEMPT_TIMEOUT)) 2054 2031 return 0; 2055 2032 2033 + if (!intel_has_reset_engine(arg->engine->gt)) 2034 + return 0; 2035 + 2056 2036 GEM_TRACE("%s(%s)\n", __func__, arg->engine->name); 2057 2037 rq = spinner_create_request(&arg->a.spin, 2058 2038 arg->a.ctx, arg->engine, ··· 2656 2630 if (IS_ERR(rq)) 2657 2631 goto err_obj; 2658 2632 2659 - rq->batch = vma; 2633 + rq->batch = i915_vma_get(vma); 2660 2634 i915_request_get(rq); 2661 2635 2662 2636 i915_vma_lock(vma); ··· 2680 2654 return 0; 2681 2655 2682 2656 err_rq: 2657 + i915_vma_put(rq->batch); 2683 2658 i915_request_put(rq); 2684 2659 err_obj: 2685 2660 i915_gem_object_put(obj); ··· 2777 2750 err = -ETIME; 2778 2751 } 2779 2752 2753 + i915_vma_put(rq->batch); 2780 2754 i915_request_put(rq); 2781 2755 rq = n; 2782 2756 } ··· 5183 5155 A[0][x], B[0][x], B[1][x], 5184 5156 poison, lrc[dw + 1]); 5185 5157 err = -EINVAL; 5186 - break; 5187 5158 } 5188 5159 } 5189 5160 dw += 2; ··· 5321 5294 0xffffffff, 5322 5295 0xffff0000, 5323 5296 }; 5297 + int err = 0; 5324 5298 
5325 5299 /* 5326 5300 * Our goal is try and verify that per-context state cannot be ··· 5332 5304 */ 5333 5305 5334 5306 for_each_engine(engine, gt, id) { 5335 - int err = 0; 5336 5307 int i; 5337 5308 5338 5309 /* Just don't even ask */ ··· 5342 5315 intel_engine_pm_get(engine); 5343 5316 if (engine->pinned_default_state) { 5344 5317 for (i = 0; i < ARRAY_SIZE(poison); i++) { 5345 - err = __lrc_isolation(engine, poison[i]); 5346 - if (err) 5347 - break; 5318 + int result; 5348 5319 5349 - err = __lrc_isolation(engine, ~poison[i]); 5350 - if (err) 5351 - break; 5320 + result = __lrc_isolation(engine, poison[i]); 5321 + if (result && !err) 5322 + err = result; 5323 + 5324 + result = __lrc_isolation(engine, ~poison[i]); 5325 + if (result && !err) 5326 + err = result; 5352 5327 } 5353 5328 } 5354 5329 intel_engine_pm_put(engine); 5355 - if (igt_flush_test(gt->i915)) 5330 + if (igt_flush_test(gt->i915)) { 5356 5331 err = -EIO; 5357 - if (err) 5358 - return err; 5332 + break; 5333 + } 5359 5334 } 5360 5335 5361 - return 0; 5336 + return err; 5362 5337 } 5363 5338 5364 5339 static void garbage_reset(struct intel_engine_cs *engine,
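The reworked wait_for_submit() above latches the deadline *before* testing the condition, so the request state gets one final check even if the loop was preempted past the timeout. The shape of that loop, reduced to a generic sketch (predicate and iteration bound invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_ETIME (-62) /* mirrors -ETIME for this sketch */

/* Poll cond(data) up to max_polls times. The deadline is sampled
 * before the check, so the condition is tested one last time even
 * after the budget is exhausted -- the same ordering as the
 * rewritten wait_for_submit(). */
static int wait_for(bool (*cond)(void *data), void *data, int max_polls)
{
	int polls = 0;

	do {
		bool done = polls++ >= max_polls;

		if (cond(data))
			return 0;
		if (done)
			return SKETCH_ETIME;
	} while (1);
}

static bool countdown(void *data)
{
	int *n = data;

	return --(*n) <= 0;
}
```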
+44 -1
drivers/gpu/drm/i915/gt/selftest_rc6.c
··· 12 12 13 13 #include "selftests/i915_random.h" 14 14 15 + static u64 energy_uJ(struct intel_rc6 *rc6) 16 + { 17 + unsigned long long power; 18 + u32 units; 19 + 20 + if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &power)) 21 + return 0; 22 + 23 + units = (power & 0x1f00) >> 8; 24 + 25 + if (rdmsrl_safe(MSR_PP1_ENERGY_STATUS, &power)) 26 + return 0; 27 + 28 + return (1000000 * power) >> units; /* convert to uJ */ 29 + } 30 + 15 31 static u64 rc6_residency(struct intel_rc6 *rc6) 16 32 { 17 33 u64 result; ··· 47 31 { 48 32 struct intel_gt *gt = arg; 49 33 struct intel_rc6 *rc6 = &gt->rc6; 34 + u64 rc0_power, rc6_power; 50 35 intel_wakeref_t wakeref; 36 + ktime_t dt; 51 37 u64 res[2]; 52 38 int err = 0; 53 39 ··· 72 54 msleep(1); /* wakeup is not immediate, takes about 100us on icl */ 73 55 74 56 res[0] = rc6_residency(rc6); 57 + 58 + dt = ktime_get(); 59 + rc0_power = energy_uJ(rc6); 75 60 msleep(250); 61 + rc0_power = energy_uJ(rc6) - rc0_power; 62 + dt = ktime_sub(ktime_get(), dt); 76 63 res[1] = rc6_residency(rc6); 77 64 if ((res[1] - res[0]) >> 10) { 78 65 pr_err("RC6 residency increased by %lldus while disabled for 250ms!\n", ··· 86 63 goto out_unlock; 87 64 } 88 65 66 + rc0_power = div64_u64(NSEC_PER_SEC * rc0_power, ktime_to_ns(dt)); 67 + if (!rc0_power) { 68 + pr_err("No power measured while in RC0\n"); 69 + err = -EINVAL; 70 + goto out_unlock; 71 + } 72 + 89 73 /* Manually enter RC6 */ 90 74 intel_rc6_park(rc6); 91 75 92 76 res[0] = rc6_residency(rc6); 77 + intel_uncore_forcewake_flush(rc6_to_uncore(rc6), FORCEWAKE_ALL); 78 + dt = ktime_get(); 79 + rc6_power = energy_uJ(rc6); 93 80 msleep(100); 81 + rc6_power = energy_uJ(rc6) - rc6_power; 82 + dt = ktime_sub(ktime_get(), dt); 94 83 res[1] = rc6_residency(rc6); 95 - 96 84 if (res[1] == res[0]) { 97 85 pr_err("Did not enter RC6! 
RC6_STATE=%08x, RC6_CONTROL=%08x, residency=%lld\n", 98 86 intel_uncore_read_fw(gt->uncore, GEN6_RC_STATE), 99 87 intel_uncore_read_fw(gt->uncore, GEN6_RC_CONTROL), 100 88 res[0]); 101 89 err = -EINVAL; 90 + } 91 + 92 + rc6_power = div64_u64(NSEC_PER_SEC * rc6_power, ktime_to_ns(dt)); 93 + pr_info("GPU consumed %llduW in RC0 and %llduW in RC6\n", 94 + rc0_power, rc6_power); 95 + if (2 * rc6_power > rc0_power) { 96 + pr_err("GPU leaked energy while in RC6!\n"); 97 + err = -EINVAL; 98 + goto out_unlock; 102 99 } 103 100 104 101 /* Restore what should have been the original state! */
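The new energy_uJ() helper above converts the raw MSR_PP1_ENERGY_STATUS counter using the energy-unit exponent in bits 12:8 of MSR_RAPL_POWER_UNIT: each count is 1/2^units joules. The arithmetic as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* The energy-status-unit exponent lives in bits 12:8 of
 * MSR_RAPL_POWER_UNIT; one energy count is 1/2^units joules. */
static unsigned int rapl_energy_units(uint64_t power_unit_msr)
{
	return (power_unit_msr & 0x1f00) >> 8;
}

/* Convert a raw energy counter to microjoules, as energy_uJ() does:
 * scale to uJ first, then divide by 2^units via the shift. */
static uint64_t rapl_counter_to_uJ(uint64_t counter, unsigned int units)
{
	return (1000000ull * counter) >> units;
}
```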
+223
drivers/gpu/drm/i915/gt/selftest_rps.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright © 2020 Intel Corporation 4 + */ 5 + 6 + #include "intel_engine_pm.h" 7 + #include "intel_gt_pm.h" 8 + #include "intel_rc6.h" 9 + #include "selftest_rps.h" 10 + #include "selftests/igt_flush_test.h" 11 + #include "selftests/igt_spinner.h" 12 + 13 + static void dummy_rps_work(struct work_struct *wrk) 14 + { 15 + } 16 + 17 + static int __rps_up_interrupt(struct intel_rps *rps, 18 + struct intel_engine_cs *engine, 19 + struct igt_spinner *spin) 20 + { 21 + struct intel_uncore *uncore = engine->uncore; 22 + struct i915_request *rq; 23 + u32 timeout; 24 + 25 + if (!intel_engine_can_store_dword(engine)) 26 + return 0; 27 + 28 + intel_gt_pm_wait_for_idle(engine->gt); 29 + GEM_BUG_ON(rps->active); 30 + 31 + rps->pm_iir = 0; 32 + rps->cur_freq = rps->min_freq; 33 + 34 + rq = igt_spinner_create_request(spin, engine->kernel_context, MI_NOOP); 35 + if (IS_ERR(rq)) 36 + return PTR_ERR(rq); 37 + 38 + i915_request_get(rq); 39 + i915_request_add(rq); 40 + 41 + if (!igt_wait_for_spinner(spin, rq)) { 42 + pr_err("%s: RPS spinner did not start\n", 43 + engine->name); 44 + i915_request_put(rq); 45 + intel_gt_set_wedged(engine->gt); 46 + return -EIO; 47 + } 48 + 49 + if (!rps->active) { 50 + pr_err("%s: RPS not enabled on starting spinner\n", 51 + engine->name); 52 + igt_spinner_end(spin); 53 + i915_request_put(rq); 54 + return -EINVAL; 55 + } 56 + 57 + if (!(rps->pm_events & GEN6_PM_RP_UP_THRESHOLD)) { 58 + pr_err("%s: RPS did not register UP interrupt\n", 59 + engine->name); 60 + i915_request_put(rq); 61 + return -EINVAL; 62 + } 63 + 64 + if (rps->last_freq != rps->min_freq) { 65 + pr_err("%s: RPS did not program min frequency\n", 66 + engine->name); 67 + i915_request_put(rq); 68 + return -EINVAL; 69 + } 70 + 71 + timeout = intel_uncore_read(uncore, GEN6_RP_UP_EI); 72 + timeout = GT_PM_INTERVAL_TO_US(engine->i915, timeout); 73 + 74 + usleep_range(2 * timeout, 3 * timeout); 75 + GEM_BUG_ON(i915_request_completed(rq)); 76 + 
77 + igt_spinner_end(spin); 78 + i915_request_put(rq); 79 + 80 + if (rps->cur_freq != rps->min_freq) { 81 + pr_err("%s: Frequency unexpectedly changed [up], now %d!\n", 82 + engine->name, intel_rps_read_actual_frequency(rps)); 83 + return -EINVAL; 84 + } 85 + 86 + if (!(rps->pm_iir & GEN6_PM_RP_UP_THRESHOLD)) { 87 + pr_err("%s: UP interrupt not recorded for spinner, pm_iir:%x, prev_up:%x, up_threshold:%x, up_ei:%x\n", 88 + engine->name, rps->pm_iir, 89 + intel_uncore_read(uncore, GEN6_RP_PREV_UP), 90 + intel_uncore_read(uncore, GEN6_RP_UP_THRESHOLD), 91 + intel_uncore_read(uncore, GEN6_RP_UP_EI)); 92 + return -EINVAL; 93 + } 94 + 95 + intel_gt_pm_wait_for_idle(engine->gt); 96 + return 0; 97 + } 98 + 99 + static int __rps_down_interrupt(struct intel_rps *rps, 100 + struct intel_engine_cs *engine) 101 + { 102 + struct intel_uncore *uncore = engine->uncore; 103 + u32 timeout; 104 + 105 + mutex_lock(&rps->lock); 106 + GEM_BUG_ON(!rps->active); 107 + intel_rps_set(rps, rps->max_freq); 108 + mutex_unlock(&rps->lock); 109 + 110 + if (!(rps->pm_events & GEN6_PM_RP_DOWN_THRESHOLD)) { 111 + pr_err("%s: RPS did not register DOWN interrupt\n", 112 + engine->name); 113 + return -EINVAL; 114 + } 115 + 116 + if (rps->last_freq != rps->max_freq) { 117 + pr_err("%s: RPS did not program max frequency\n", 118 + engine->name); 119 + return -EINVAL; 120 + } 121 + 122 + timeout = intel_uncore_read(uncore, GEN6_RP_DOWN_EI); 123 + timeout = GT_PM_INTERVAL_TO_US(engine->i915, timeout); 124 + 125 + /* Flush any previous EI */ 126 + usleep_range(timeout, 2 * timeout); 127 + 128 + /* Reset the interrupt status */ 129 + rps_disable_interrupts(rps); 130 + GEM_BUG_ON(rps->pm_iir); 131 + rps_enable_interrupts(rps); 132 + 133 + /* And then wait for the timeout, for real this time */ 134 + usleep_range(2 * timeout, 3 * timeout); 135 + 136 + if (rps->cur_freq != rps->max_freq) { 137 + pr_err("%s: Frequency unexpectedly changed [down], now %d!\n", 138 + engine->name, 139 + 
intel_rps_read_actual_frequency(rps)); 140 + return -EINVAL; 141 + } 142 + 143 + if (!(rps->pm_iir & (GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT))) { 144 + pr_err("%s: DOWN interrupt not recorded for idle, pm_iir:%x, prev_down:%x, down_threshold:%x, down_ei:%x [prev_up:%x, up_threshold:%x, up_ei:%x]\n", 145 + engine->name, rps->pm_iir, 146 + intel_uncore_read(uncore, GEN6_RP_PREV_DOWN), 147 + intel_uncore_read(uncore, GEN6_RP_DOWN_THRESHOLD), 148 + intel_uncore_read(uncore, GEN6_RP_DOWN_EI), 149 + intel_uncore_read(uncore, GEN6_RP_PREV_UP), 150 + intel_uncore_read(uncore, GEN6_RP_UP_THRESHOLD), 151 + intel_uncore_read(uncore, GEN6_RP_UP_EI)); 152 + return -EINVAL; 153 + } 154 + 155 + return 0; 156 + } 157 + 158 + int live_rps_interrupt(void *arg) 159 + { 160 + struct intel_gt *gt = arg; 161 + struct intel_rps *rps = &gt->rps; 162 + void (*saved_work)(struct work_struct *wrk); 163 + struct intel_engine_cs *engine; 164 + enum intel_engine_id id; 165 + struct igt_spinner spin; 166 + u32 pm_events; 167 + int err = 0; 168 + 169 + /* 170 + * First, let's check whether or not we are receiving interrupts. 171 + */ 172 + 173 + if (!rps->enabled || rps->max_freq <= rps->min_freq) 174 + return 0; 175 + 176 + intel_gt_pm_get(gt); 177 + pm_events = rps->pm_events; 178 + intel_gt_pm_put(gt); 179 + if (!pm_events) { 180 + pr_err("No RPS PM events registered, but RPS is enabled?\n"); 181 + return -ENODEV; 182 + } 183 + 184 + if (igt_spinner_init(&spin, gt)) 185 + return -ENOMEM; 186 + 187 + intel_gt_pm_wait_for_idle(gt); 188 + saved_work = rps->work.func; 189 + rps->work.func = dummy_rps_work; 190 + 191 + for_each_engine(engine, gt, id) { 192 + /* Keep the engine busy with a spinner; expect an UP! 
*/ 193 + if (pm_events & GEN6_PM_RP_UP_THRESHOLD) { 194 + err = __rps_up_interrupt(rps, engine, &spin); 195 + if (err) 196 + goto out; 197 + } 198 + 199 + /* Keep the engine awake but idle and check for DOWN */ 200 + if (pm_events & GEN6_PM_RP_DOWN_THRESHOLD) { 201 + intel_engine_pm_get(engine); 202 + intel_rc6_disable(&gt->rc6); 203 + 204 + err = __rps_down_interrupt(rps, engine); 205 + 206 + intel_rc6_enable(&gt->rc6); 207 + intel_engine_pm_put(engine); 208 + if (err) 209 + goto out; 210 + } 211 + } 212 + 213 + out: 214 + if (igt_flush_test(gt->i915)) 215 + err = -EIO; 216 + 217 + igt_spinner_fini(&spin); 218 + 219 + intel_gt_pm_wait_for_idle(gt); 220 + rps->work.func = saved_work; 221 + 222 + return err; 223 + }
+11
drivers/gpu/drm/i915/gt/selftest_rps.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright © 2020 Intel Corporation 4 + */ 5 + 6 + #ifndef SELFTEST_RPS_H 7 + #define SELFTEST_RPS_H 8 + 9 + int live_rps_interrupt(void *arg); 10 + 11 + #endif /* SELFTEST_RPS_H */
+45 -1
drivers/gpu/drm/i915/gt/uc/intel_guc.c
··· 169 169 { 170 170 struct drm_i915_private *i915 = guc_to_gt(guc)->i915; 171 171 172 - intel_guc_fw_init_early(guc); 172 + intel_uc_fw_init_early(&guc->fw, INTEL_UC_FW_TYPE_GUC); 173 173 intel_guc_ct_init_early(&guc->ct); 174 174 intel_guc_log_init_early(&guc->log); 175 175 intel_guc_submission_init_early(guc); ··· 722 722 *out_vaddr = vaddr; 723 723 724 724 return 0; 725 + } 726 + 727 + /** 728 + * intel_guc_load_status - dump information about GuC load status 729 + * @guc: the GuC 730 + * @p: the &drm_printer 731 + * 732 + * Pretty printer for GuC load status. 733 + */ 734 + void intel_guc_load_status(struct intel_guc *guc, struct drm_printer *p) 735 + { 736 + struct intel_gt *gt = guc_to_gt(guc); 737 + struct intel_uncore *uncore = gt->uncore; 738 + intel_wakeref_t wakeref; 739 + 740 + if (!intel_guc_is_supported(guc)) { 741 + drm_printf(p, "GuC not supported\n"); 742 + return; 743 + } 744 + 745 + if (!intel_guc_is_wanted(guc)) { 746 + drm_printf(p, "GuC disabled\n"); 747 + return; 748 + } 749 + 750 + intel_uc_fw_dump(&guc->fw, p); 751 + 752 + with_intel_runtime_pm(uncore->rpm, wakeref) { 753 + u32 status = intel_uncore_read(uncore, GUC_STATUS); 754 + u32 i; 755 + 756 + drm_printf(p, "\nGuC status 0x%08x:\n", status); 757 + drm_printf(p, "\tBootrom status = 0x%x\n", 758 + (status & GS_BOOTROM_MASK) >> GS_BOOTROM_SHIFT); 759 + drm_printf(p, "\tuKernel status = 0x%x\n", 760 + (status & GS_UKERNEL_MASK) >> GS_UKERNEL_SHIFT); 761 + drm_printf(p, "\tMIA Core status = 0x%x\n", 762 + (status & GS_MIA_MASK) >> GS_MIA_SHIFT); 763 + drm_puts(p, "\nScratch registers:\n"); 764 + for (i = 0; i < 16; i++) { 765 + drm_printf(p, "\t%2d: \t0x%x\n", 766 + i, intel_uncore_read(uncore, SOFT_SCRATCH(i))); 767 + } 768 + } 725 769 }
+7
drivers/gpu/drm/i915/gt/uc/intel_guc.h
··· 74 74 struct mutex send_mutex; 75 75 }; 76 76 77 + static inline struct intel_guc *log_to_guc(struct intel_guc_log *log) 78 + { 79 + return container_of(log, struct intel_guc, log); 80 + } 81 + 77 82 static 78 83 inline int intel_guc_send(struct intel_guc *guc, const u32 *action, u32 len) 79 84 { ··· 194 189 195 190 int intel_guc_reset_engine(struct intel_guc *guc, 196 191 struct intel_engine_cs *engine); 192 + 193 + void intel_guc_load_status(struct intel_guc *guc, struct drm_printer *p); 197 194 198 195 #endif
+42
drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright © 2020 Intel Corporation 4 + */ 5 + 6 + #include <drm/drm_print.h> 7 + 8 + #include "gt/debugfs_gt.h" 9 + #include "intel_guc.h" 10 + #include "intel_guc_debugfs.h" 11 + #include "intel_guc_log_debugfs.h" 12 + 13 + static int guc_info_show(struct seq_file *m, void *data) 14 + { 15 + struct intel_guc *guc = m->private; 16 + struct drm_printer p = drm_seq_file_printer(m); 17 + 18 + if (!intel_guc_is_supported(guc)) 19 + return -ENODEV; 20 + 21 + intel_guc_load_status(guc, &p); 22 + drm_puts(&p, "\n"); 23 + intel_guc_log_info(&guc->log, &p); 24 + 25 + /* Add more as required ... */ 26 + 27 + return 0; 28 + } 29 + DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_info); 30 + 31 + void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root) 32 + { 33 + static const struct debugfs_gt_file files[] = { 34 + { "guc_info", &guc_info_fops, NULL }, 35 + }; 36 + 37 + if (!intel_guc_is_supported(guc)) 38 + return; 39 + 40 + intel_gt_debugfs_register_files(root, files, ARRAY_SIZE(files), guc); 41 + intel_guc_log_debugfs_register(&guc->log, root); 42 + }
+14
drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright © 2020 Intel Corporation 4 + */ 5 + 6 + #ifndef DEBUGFS_GUC_H 7 + #define DEBUGFS_GUC_H 8 + 9 + struct intel_guc; 10 + struct dentry; 11 + 12 + void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root); 13 + 14 + #endif /* DEBUGFS_GUC_H */
-14
drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
··· 13 13 #include "intel_guc_fw.h" 14 14 #include "i915_drv.h" 15 15 16 - /** 17 - * intel_guc_fw_init_early() - initializes GuC firmware struct 18 - * @guc: intel_guc struct 19 - * 20 - * On platforms with GuC selects firmware for uploading 21 - */ 22 - void intel_guc_fw_init_early(struct intel_guc *guc) 23 - { 24 - struct drm_i915_private *i915 = guc_to_gt(guc)->i915; 25 - 26 - intel_uc_fw_init_early(&guc->fw, INTEL_UC_FW_TYPE_GUC, HAS_GT_UC(i915), 27 - INTEL_INFO(i915)->platform, INTEL_REVID(i915)); 28 - } 29 - 30 16 static void guc_prepare_xfer(struct intel_uncore *uncore) 31 17 { 32 18 u32 shim_flags = GUC_DISABLE_SRAM_INIT_TO_ZEROES |
-1
drivers/gpu/drm/i915/gt/uc/intel_guc_fw.h
··· 8 8 9 9 struct intel_guc; 10 10 11 - void intel_guc_fw_init_early(struct intel_guc *guc); 12 11 int intel_guc_fw_upload(struct intel_guc *guc); 13 12 14 13 #endif
+92 -5
drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
··· 55 55 return intel_guc_send(guc, action, ARRAY_SIZE(action)); 56 56 } 57 57 58 - static inline struct intel_guc *log_to_guc(struct intel_guc_log *log) 59 - { 60 - return container_of(log, struct intel_guc, log); 61 - } 62 - 63 58 static void guc_log_enable_flush_events(struct intel_guc_log *log) 64 59 { 65 60 intel_guc_enable_msg(log_to_guc(log), ··· 666 671 void intel_guc_log_handle_flush_event(struct intel_guc_log *log) 667 672 { 668 673 queue_work(system_highpri_wq, &log->relay.flush_work); 674 + } 675 + 676 + static const char * 677 + stringify_guc_log_type(enum guc_log_buffer_type type) 678 + { 679 + switch (type) { 680 + case GUC_ISR_LOG_BUFFER: 681 + return "ISR"; 682 + case GUC_DPC_LOG_BUFFER: 683 + return "DPC"; 684 + case GUC_CRASH_DUMP_LOG_BUFFER: 685 + return "CRASH"; 686 + default: 687 + MISSING_CASE(type); 688 + } 689 + 690 + return ""; 691 + } 692 + 693 + /** 694 + * intel_guc_log_info - dump information about GuC log relay 695 + * @log: the GuC log 696 + * @p: the &drm_printer 697 + * 698 + * Pretty printer for GuC log info 699 + */ 700 + void intel_guc_log_info(struct intel_guc_log *log, struct drm_printer *p) 701 + { 702 + enum guc_log_buffer_type type; 703 + 704 + if (!intel_guc_log_relay_created(log)) { 705 + drm_puts(p, "GuC log relay not created\n"); 706 + return; 707 + } 708 + 709 + drm_puts(p, "GuC logging stats:\n"); 710 + 711 + drm_printf(p, "\tRelay full count: %u\n", log->relay.full_count); 712 + 713 + for (type = GUC_ISR_LOG_BUFFER; type < GUC_MAX_LOG_BUFFER; type++) { 714 + drm_printf(p, "\t%s:\tflush count %10u, overflow count %10u\n", 715 + stringify_guc_log_type(type), 716 + log->stats[type].flush, 717 + log->stats[type].sampled_overflow); 718 + } 719 + } 720 + 721 + /** 722 + * intel_guc_log_dump - dump the contents of the GuC log 723 + * @log: the GuC log 724 + * @p: the &drm_printer 725 + * @dump_load_err: dump the log saved on GuC load error 726 + * 727 + * Pretty printer for the GuC log 728 + */ 729 + int 
intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p, 730 + bool dump_load_err) 731 + { 732 + struct intel_guc *guc = log_to_guc(log); 733 + struct intel_uc *uc = container_of(guc, struct intel_uc, guc); 734 + struct drm_i915_gem_object *obj = NULL; 735 + u32 *map; 736 + int i = 0; 737 + 738 + if (!intel_guc_is_supported(guc)) 739 + return -ENODEV; 740 + 741 + if (dump_load_err) 742 + obj = uc->load_err_log; 743 + else if (guc->log.vma) 744 + obj = guc->log.vma->obj; 745 + 746 + if (!obj) 747 + return 0; 748 + 749 + map = i915_gem_object_pin_map(obj, I915_MAP_WC); 750 + if (IS_ERR(map)) { 751 + DRM_DEBUG("Failed to pin object\n"); 752 + drm_puts(p, "(log data unaccessible)\n"); 753 + return PTR_ERR(map); 754 + } 755 + 756 + for (i = 0; i < obj->base.size / sizeof(u32); i += 4) 757 + drm_printf(p, "0x%08x 0x%08x 0x%08x 0x%08x\n", 758 + *(map + i), *(map + i + 1), 759 + *(map + i + 2), *(map + i + 3)); 760 + 761 + drm_puts(p, "\n"); 762 + 763 + i915_gem_object_unpin_map(obj); 764 + 765 + return 0; 669 766 }
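intel_guc_log_dump() above walks the pinned log object four dwords at a time; the loop stride of 4 pairs with the four `%08x` conversions per printed row. A minimal userspace rendering of one such row (buffer contents invented for the sketch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Format one four-dword row in the same "0x%08x 0x%08x 0x%08x 0x%08x"
 * shape the dump loop above emits via drm_printf(). */
static void format_log_row(char *out, size_t len, const uint32_t *map)
{
	snprintf(out, len, "0x%08x 0x%08x 0x%08x 0x%08x",
		 map[0], map[1], map[2], map[3]);
}
```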
+4
drivers/gpu/drm/i915/gt/uc/intel_guc_log.h
··· 79 79 return log->level; 80 80 } 81 81 82 + void intel_guc_log_info(struct intel_guc_log *log, struct drm_printer *p); 83 + int intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p, 84 + bool dump_load_err); 85 + 82 86 #endif
+124
drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.c
···
+ // SPDX-License-Identifier: MIT
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #include <linux/fs.h>
+ #include <drm/drm_print.h>
+
+ #include "gt/debugfs_gt.h"
+ #include "intel_guc.h"
+ #include "intel_guc_log.h"
+ #include "intel_guc_log_debugfs.h"
+
+ static int guc_log_dump_show(struct seq_file *m, void *data)
+ {
+ 	struct drm_printer p = drm_seq_file_printer(m);
+
+ 	return intel_guc_log_dump(m->private, &p, false);
+ }
+ DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_log_dump);
+
+ static int guc_load_err_log_dump_show(struct seq_file *m, void *data)
+ {
+ 	struct drm_printer p = drm_seq_file_printer(m);
+
+ 	return intel_guc_log_dump(m->private, &p, true);
+ }
+ DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_load_err_log_dump);
+
+ static int guc_log_level_get(void *data, u64 *val)
+ {
+ 	struct intel_guc_log *log = data;
+
+ 	if (!intel_guc_is_used(log_to_guc(log)))
+ 		return -ENODEV;
+
+ 	*val = intel_guc_log_get_level(log);
+
+ 	return 0;
+ }
+
+ static int guc_log_level_set(void *data, u64 val)
+ {
+ 	struct intel_guc_log *log = data;
+
+ 	if (!intel_guc_is_used(log_to_guc(log)))
+ 		return -ENODEV;
+
+ 	return intel_guc_log_set_level(log, val);
+ }
+
+ DEFINE_SIMPLE_ATTRIBUTE(guc_log_level_fops,
+ 			guc_log_level_get, guc_log_level_set,
+ 			"%lld\n");
+
+ static int guc_log_relay_open(struct inode *inode, struct file *file)
+ {
+ 	struct intel_guc_log *log = inode->i_private;
+
+ 	if (!intel_guc_is_ready(log_to_guc(log)))
+ 		return -ENODEV;
+
+ 	file->private_data = log;
+
+ 	return intel_guc_log_relay_open(log);
+ }
+
+ static ssize_t
+ guc_log_relay_write(struct file *filp,
+ 		    const char __user *ubuf,
+ 		    size_t cnt,
+ 		    loff_t *ppos)
+ {
+ 	struct intel_guc_log *log = filp->private_data;
+ 	int val;
+ 	int ret;
+
+ 	ret = kstrtoint_from_user(ubuf, cnt, 0, &val);
+ 	if (ret < 0)
+ 		return ret;
+
+ 	/*
+ 	 * Enable and start the guc log relay on value of 1.
+ 	 * Flush log relay for any other value.
+ 	 */
+ 	if (val == 1)
+ 		ret = intel_guc_log_relay_start(log);
+ 	else
+ 		intel_guc_log_relay_flush(log);
+
+ 	return ret ?: cnt;
+ }
+
+ static int guc_log_relay_release(struct inode *inode, struct file *file)
+ {
+ 	struct intel_guc_log *log = inode->i_private;
+
+ 	intel_guc_log_relay_close(log);
+ 	return 0;
+ }
+
+ static const struct file_operations guc_log_relay_fops = {
+ 	.owner = THIS_MODULE,
+ 	.open = guc_log_relay_open,
+ 	.write = guc_log_relay_write,
+ 	.release = guc_log_relay_release,
+ };
+
+ void intel_guc_log_debugfs_register(struct intel_guc_log *log,
+ 				    struct dentry *root)
+ {
+ 	static const struct debugfs_gt_file files[] = {
+ 		{ "guc_log_dump", &guc_log_dump_fops, NULL },
+ 		{ "guc_load_err_log_dump", &guc_load_err_log_dump_fops, NULL },
+ 		{ "guc_log_level", &guc_log_level_fops, NULL },
+ 		{ "guc_log_relay", &guc_log_relay_fops, NULL },
+ 	};
+
+ 	if (!intel_guc_is_supported(log_to_guc(log)))
+ 		return;
+
+ 	intel_gt_debugfs_register_files(root, files, ARRAY_SIZE(files), log);
+ }
+15
drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.h
···
+ /* SPDX-License-Identifier: MIT */
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #ifndef DEBUGFS_GUC_LOG_H
+ #define DEBUGFS_GUC_LOG_H
+
+ struct intel_guc_log;
+ struct dentry;
+
+ void intel_guc_log_debugfs_register(struct intel_guc_log *log,
+ 				    struct dentry *root);
+
+ #endif /* DEBUGFS_GUC_LOG_H */
+48 -5
drivers/gpu/drm/i915/gt/uc/intel_huc.c
···
  {
  	struct drm_i915_private *i915 = huc_to_gt(huc)->i915;

- 	intel_huc_fw_init_early(huc);
+ 	intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC);

  	if (INTEL_GEN(i915) >= 11) {
  		huc->status.reg = GEN11_HUC_KERNEL_LOAD_INFO;
···
   * This function reads status register to verify if HuC
   * firmware was successfully loaded.
   *
-  * Returns: 1 if HuC firmware is loaded and verified,
-  * 0 if HuC firmware is not loaded and -ENODEV if HuC
-  * is not present on this platform.
+  * Returns:
+  *  * -ENODEV if HuC is not present on this platform,
+  *  * -EOPNOTSUPP if HuC firmware is disabled,
+  *  * -ENOPKG if HuC firmware was not installed,
+  *  * -ENOEXEC if HuC firmware is invalid or mismatched,
+  *  * 0 if HuC firmware is not running,
+  *  * 1 if HuC firmware is authenticated and running.
   */
  int intel_huc_check_status(struct intel_huc *huc)
  {
···
  	intel_wakeref_t wakeref;
  	u32 status = 0;

- 	if (!intel_huc_is_supported(huc))
+ 	switch (__intel_uc_fw_status(&huc->fw)) {
+ 	case INTEL_UC_FIRMWARE_NOT_SUPPORTED:
  		return -ENODEV;
+ 	case INTEL_UC_FIRMWARE_DISABLED:
+ 		return -EOPNOTSUPP;
+ 	case INTEL_UC_FIRMWARE_MISSING:
+ 		return -ENOPKG;
+ 	case INTEL_UC_FIRMWARE_ERROR:
+ 		return -ENOEXEC;
+ 	default:
+ 		break;
+ 	}

  	with_intel_runtime_pm(gt->uncore->rpm, wakeref)
  		status = intel_uncore_read(gt->uncore, huc->status.reg);

  	return (status & huc->status.mask) == huc->status.value;
+ }
+
+ /**
+  * intel_huc_load_status - dump information about HuC load status
+  * @huc: the HuC
+  * @p: the &drm_printer
+  *
+  * Pretty printer for HuC load status.
+  */
+ void intel_huc_load_status(struct intel_huc *huc, struct drm_printer *p)
+ {
+ 	struct intel_gt *gt = huc_to_gt(huc);
+ 	intel_wakeref_t wakeref;
+
+ 	if (!intel_huc_is_supported(huc)) {
+ 		drm_printf(p, "HuC not supported\n");
+ 		return;
+ 	}
+
+ 	if (!intel_huc_is_wanted(huc)) {
+ 		drm_printf(p, "HuC disabled\n");
+ 		return;
+ 	}
+
+ 	intel_uc_fw_dump(&huc->fw, p);
+
+ 	with_intel_runtime_pm(gt->uncore->rpm, wakeref)
+ 		drm_printf(p, "HuC status: 0x%08x\n",
+ 			   intel_uncore_read(gt->uncore, huc->status.reg));
  }
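The reworked intel_huc_check_status() above replaces a single is-supported check with a switch over the uC firmware state machine, so userspace can distinguish "no hardware" from "firmware disabled/missing/broken". A minimal userspace sketch of that state-to-errno mapping (the enum and function names here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical mirror of the uC firmware states consulted by the check */
enum uc_fw_status {
	UC_FIRMWARE_NOT_SUPPORTED,
	UC_FIRMWARE_DISABLED,
	UC_FIRMWARE_MISSING,
	UC_FIRMWARE_ERROR,
	UC_FIRMWARE_RUNNING,
};

/* Map a firmware state to the errno the status query would report;
 * 0 means "no early error, fall through to reading the HW status reg". */
static int huc_status_to_errno(enum uc_fw_status status)
{
	switch (status) {
	case UC_FIRMWARE_NOT_SUPPORTED:
		return -ENODEV;
	case UC_FIRMWARE_DISABLED:
		return -EOPNOTSUPP;
	case UC_FIRMWARE_MISSING:
		return -ENOPKG;
	case UC_FIRMWARE_ERROR:
		return -ENOEXEC;
	default:
		return 0;
	}
}
```

The point of the design is that each failure mode gets a distinct, documented errno, matching the new kernel-doc list in the hunk above.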
+2
drivers/gpu/drm/i915/gt/uc/intel_huc.h
···
  	return intel_uc_fw_is_running(&huc->fw);
  }

+ void intel_huc_load_status(struct intel_huc *huc, struct drm_printer *p);
+
  #endif
+36
drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.c
···
+ // SPDX-License-Identifier: MIT
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #include <drm/drm_print.h>
+
+ #include "gt/debugfs_gt.h"
+ #include "intel_huc.h"
+ #include "intel_huc_debugfs.h"
+
+ static int huc_info_show(struct seq_file *m, void *data)
+ {
+ 	struct intel_huc *huc = m->private;
+ 	struct drm_printer p = drm_seq_file_printer(m);
+
+ 	if (!intel_huc_is_supported(huc))
+ 		return -ENODEV;
+
+ 	intel_huc_load_status(huc, &p);
+
+ 	return 0;
+ }
+ DEFINE_GT_DEBUGFS_ATTRIBUTE(huc_info);
+
+ void intel_huc_debugfs_register(struct intel_huc *huc, struct dentry *root)
+ {
+ 	static const struct debugfs_gt_file files[] = {
+ 		{ "huc_info", &huc_info_fops, NULL },
+ 	};
+
+ 	if (!intel_huc_is_supported(huc))
+ 		return;
+
+ 	intel_gt_debugfs_register_files(root, files, ARRAY_SIZE(files), huc);
+ }
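The new per-component debugfs files all use the same table-driven registration pattern: a static array of name/fops pairs handed to intel_gt_debugfs_register_files() together with the private-data pointer (the log, the HuC, ...). A small self-contained userspace sketch of the idea, with all names hypothetical:

```c
#include <stddef.h>

/* Hypothetical stand-in for the debugfs_gt_file name/fops pair */
struct fake_file {
	const char *name;
	int (*show)(void *data);
};

static int files_registered;

/* Stands in for debugfs_create_file(): here we only count registrations */
static void fake_create_file(const struct fake_file *f, void *data)
{
	(void)f;
	(void)data;
	files_registered++;
}

/* Register a whole table in one loop, as intel_gt_debugfs_register_files() does */
static void fake_register_files(const struct fake_file *files, size_t count,
				void *data)
{
	size_t i;

	for (i = 0; i < count; i++)
		fake_create_file(&files[i], data);
}

static int huc_info_show(void *data)
{
	(void)data;
	return 0;
}

/* Mirrors the shape of intel_huc_debugfs_register(): one static table, one call */
static int demo_register(void)
{
	static const struct fake_file files[] = {
		{ "huc_info", huc_info_show },
	};

	fake_register_files(files, sizeof(files) / sizeof(files[0]), NULL);
	return files_registered;
}
```

The table keeps each component's file list in one place, so adding a file is a one-line change rather than another debugfs_create_file() call site.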
+14
drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.h
···
+ /* SPDX-License-Identifier: MIT */
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #ifndef DEBUGFS_HUC_H
+ #define DEBUGFS_HUC_H
+
+ struct intel_huc;
+ struct dentry;
+
+ void intel_huc_debugfs_register(struct intel_huc *huc, struct dentry *root);
+
+ #endif /* DEBUGFS_HUC_H */
-17
drivers/gpu/drm/i915/gt/uc/intel_huc_fw.c
···
  #include "i915_drv.h"

  /**
-  * intel_huc_fw_init_early() - initializes HuC firmware struct
-  * @huc: intel_huc struct
-  *
-  * On platforms with HuC selects firmware for uploading
-  */
- void intel_huc_fw_init_early(struct intel_huc *huc)
- {
- 	struct intel_gt *gt = huc_to_gt(huc);
- 	struct intel_uc *uc = &gt->uc;
- 	struct drm_i915_private *i915 = gt->i915;
-
- 	intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC,
- 			       intel_uc_wants_guc(uc),
- 			       INTEL_INFO(i915)->platform, INTEL_REVID(i915));
- }
-
- /**
   * intel_huc_fw_upload() - load HuC uCode to device
   * @huc: intel_huc structure
   *
-1
drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h
···

  struct intel_huc;

- void intel_huc_fw_init_early(struct intel_huc *huc);
  int intel_huc_fw_upload(struct intel_huc *huc);

  #endif
+20 -15
drivers/gpu/drm/i915/gt/uc/intel_uc.c
···
  {
  	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;

- 	DRM_DEV_DEBUG_DRIVER(i915->drm.dev,
- 			     "enable_guc=%d (guc:%s submission:%s huc:%s)\n",
- 			     i915_modparams.enable_guc,
- 			     yesno(intel_uc_wants_guc(uc)),
- 			     yesno(intel_uc_wants_guc_submission(uc)),
- 			     yesno(intel_uc_wants_huc(uc)));
+ 	drm_dbg(&i915->drm,
+ 		"enable_guc=%d (guc:%s submission:%s huc:%s)\n",
+ 		i915_modparams.enable_guc,
+ 		yesno(intel_uc_wants_guc(uc)),
+ 		yesno(intel_uc_wants_guc_submission(uc)),
+ 		yesno(intel_uc_wants_huc(uc)));

  	if (i915_modparams.enable_guc == -1)
  		return;
···
  	}

  	if (!intel_uc_supports_guc(uc))
- 		dev_info(i915->drm.dev,
+ 		drm_info(&i915->drm,
  			 "Incompatible option enable_guc=%d - %s\n",
  			 i915_modparams.enable_guc, "GuC is not supported!");

  	if (i915_modparams.enable_guc & ENABLE_GUC_LOAD_HUC &&
  	    !intel_uc_supports_huc(uc))
- 		dev_info(i915->drm.dev,
+ 		drm_info(&i915->drm,
  			 "Incompatible option enable_guc=%d - %s\n",
  			 i915_modparams.enable_guc, "HuC is not supported!");

  	if (i915_modparams.enable_guc & ENABLE_GUC_SUBMISSION &&
  	    !intel_uc_supports_guc_submission(uc))
- 		dev_info(i915->drm.dev,
+ 		drm_info(&i915->drm,
  			 "Incompatible option enable_guc=%d - %s\n",
  			 i915_modparams.enable_guc, "GuC submission is N/A");

  	if (i915_modparams.enable_guc & ~(ENABLE_GUC_SUBMISSION |
  					  ENABLE_GUC_LOAD_HUC))
- 		dev_info(i915->drm.dev,
+ 		drm_info(&i915->drm,
  			 "Incompatible option enable_guc=%d - %s\n",
  			 i915_modparams.enable_guc, "undocumented flag");
  }
···
  	if (log)
  		i915_gem_object_put(log);
+ }
+
+ void intel_uc_driver_remove(struct intel_uc *uc)
+ {
+ 	intel_uc_fini_hw(uc);
+ 	intel_uc_fini(uc);
+ 	__uc_free_load_err_log(uc);
  }

  static inline bool guc_communication_enabled(struct intel_guc *guc)
···
  {
  	intel_huc_fini(&uc->huc);
  	intel_guc_fini(&uc->guc);
-
- 	__uc_free_load_err_log(uc);
  }

  static int __uc_sanitize(struct intel_uc *uc)
···
  	if (intel_uc_uses_guc_submission(uc))
  		intel_guc_submission_enable(guc);

- 	dev_info(i915->drm.dev, "%s firmware %s version %u.%u %s:%s\n",
+ 	drm_info(&i915->drm, "%s firmware %s version %u.%u %s:%s\n",
  		 intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_GUC), guc->fw.path,
  		 guc->fw.major_ver_found, guc->fw.minor_ver_found,
  		 "submission",
  		 enableddisabled(intel_uc_uses_guc_submission(uc)));

  	if (intel_uc_uses_huc(uc)) {
- 		dev_info(i915->drm.dev, "%s firmware %s version %u.%u %s:%s\n",
+ 		drm_info(&i915->drm, "%s firmware %s version %u.%u %s:%s\n",
  			 intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_HUC),
  			 huc->fw.path,
  			 huc->fw.major_ver_found, huc->fw.minor_ver_found,
···
  	__uc_sanitize(uc);

  	if (!ret) {
- 		dev_notice(i915->drm.dev, "GuC is uninitialized\n");
+ 		drm_notice(&i915->drm, "GuC is uninitialized\n");
  		/* We want to run without GuC submission */
  		return 0;
  	}
+1
drivers/gpu/drm/i915/gt/uc/intel_uc.h
···

  void intel_uc_init_early(struct intel_uc *uc);
  void intel_uc_driver_late_release(struct intel_uc *uc);
+ void intel_uc_driver_remove(struct intel_uc *uc);
  void intel_uc_init_mmio(struct intel_uc *uc);
  void intel_uc_reset_prepare(struct intel_uc *uc);
  void intel_uc_suspend(struct intel_uc *uc);
+30
drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.c
···
+ // SPDX-License-Identifier: MIT
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #include <linux/debugfs.h>
+
+ #include "intel_guc_debugfs.h"
+ #include "intel_huc_debugfs.h"
+ #include "intel_uc.h"
+ #include "intel_uc_debugfs.h"
+
+ void intel_uc_debugfs_register(struct intel_uc *uc, struct dentry *gt_root)
+ {
+ 	struct dentry *root;
+
+ 	if (!gt_root)
+ 		return;
+
+ 	/* GuC and HuC go always in pair, no need to check both */
+ 	if (!intel_uc_supports_guc(uc))
+ 		return;
+
+ 	root = debugfs_create_dir("uc", gt_root);
+ 	if (IS_ERR(root))
+ 		return;
+
+ 	intel_guc_debugfs_register(&uc->guc, root);
+ 	intel_huc_debugfs_register(&uc->huc, root);
+ }
+14
drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.h
···
+ /* SPDX-License-Identifier: MIT */
+ /*
+  * Copyright © 2020 Intel Corporation
+  */
+
+ #ifndef DEBUGFS_UC_H
+ #define DEBUGFS_UC_H
+
+ struct intel_uc;
+ struct dentry;
+
+ void intel_uc_debugfs_register(struct intel_uc *uc, struct dentry *gt_root);
+
+ #endif /* DEBUGFS_UC_H */
+32 -26
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
···
  #include "intel_uc_fw_abi.h"
  #include "i915_drv.h"

+ static inline struct intel_gt *
+ ____uc_fw_to_gt(struct intel_uc_fw *uc_fw, enum intel_uc_fw_type type)
+ {
+ 	if (type == INTEL_UC_FW_TYPE_GUC)
+ 		return container_of(uc_fw, struct intel_gt, uc.guc.fw);
+
+ 	GEM_BUG_ON(type != INTEL_UC_FW_TYPE_HUC);
+ 	return container_of(uc_fw, struct intel_gt, uc.huc.fw);
+ }
+
  static inline struct intel_gt *__uc_fw_to_gt(struct intel_uc_fw *uc_fw)
  {
  	GEM_BUG_ON(uc_fw->status == INTEL_UC_FIRMWARE_UNINITIALIZED);
- 	if (uc_fw->type == INTEL_UC_FW_TYPE_GUC)
- 		return container_of(uc_fw, struct intel_gt, uc.guc.fw);
-
- 	GEM_BUG_ON(uc_fw->type != INTEL_UC_FW_TYPE_HUC);
- 	return container_of(uc_fw, struct intel_gt, uc.huc.fw);
+ 	return ____uc_fw_to_gt(uc_fw, uc_fw->type);
  }

  #ifdef CONFIG_DRM_I915_DEBUG_GUC
···
  			       enum intel_uc_fw_status status)
  {
  	uc_fw->__status = status;
- 	DRM_DEV_DEBUG_DRIVER(__uc_fw_to_gt(uc_fw)->i915->drm.dev,
- 			     "%s firmware -> %s\n",
- 			     intel_uc_fw_type_repr(uc_fw->type),
- 			     status == INTEL_UC_FIRMWARE_SELECTED ?
- 			     uc_fw->path : intel_uc_fw_status_repr(status));
+ 	drm_dbg(&__uc_fw_to_gt(uc_fw)->i915->drm,
+ 		"%s firmware -> %s\n",
+ 		intel_uc_fw_type_repr(uc_fw->type),
+ 		status == INTEL_UC_FIRMWARE_SELECTED ?
+ 		uc_fw->path : intel_uc_fw_status_repr(status));
  }
  #endif
···
   * intel_uc_fw_init_early - initialize the uC object and select the firmware
   * @uc_fw: uC firmware
   * @type: type of uC
-  * @supported: is uC support possible
-  * @platform: platform identifier
-  * @rev: hardware revision
   *
   * Initialize the state of our uC object and relevant tracking and select the
   * firmware to fetch and load.
   */
  void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw,
- 			    enum intel_uc_fw_type type, bool supported,
- 			    enum intel_platform platform, u8 rev)
+ 			    enum intel_uc_fw_type type)
  {
+ 	struct drm_i915_private *i915 = ____uc_fw_to_gt(uc_fw, type)->i915;
+
  	/*
  	 * we use FIRMWARE_UNINITIALIZED to detect checks against uc_fw->status
  	 * before we're looked at the HW caps to see if we have uc support
···
  	uc_fw->type = type;

- 	if (supported) {
- 		__uc_fw_auto_select(uc_fw, platform, rev);
+ 	if (HAS_GT_UC(i915)) {
+ 		__uc_fw_auto_select(uc_fw,
+ 				    INTEL_INFO(i915)->platform,
+ 				    INTEL_REVID(i915));
  		__uc_fw_user_override(uc_fw);
  	}
···
  	/* Check the size of the blob before examining buffer contents */
  	if (unlikely(fw->size < sizeof(struct uc_css_header))) {
- 		dev_warn(dev, "%s firmware %s: invalid size: %zu < %zu\n",
+ 		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu < %zu\n",
  			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			 fw->size, sizeof(struct uc_css_header));
  		err = -ENODATA;
···
  	size = (css->header_size_dw - css->key_size_dw - css->modulus_size_dw -
  		css->exponent_size_dw) * sizeof(u32);
  	if (unlikely(size != sizeof(struct uc_css_header))) {
- 		dev_warn(dev,
+ 		drm_warn(&i915->drm,
  			 "%s firmware %s: unexpected header size: %zu != %zu\n",
  			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			 fw->size, sizeof(struct uc_css_header));
···
  	/* now RSA */
  	if (unlikely(css->key_size_dw != UOS_RSA_SCRATCH_COUNT)) {
- 		dev_warn(dev, "%s firmware %s: unexpected key size: %u != %u\n",
+ 		drm_warn(&i915->drm, "%s firmware %s: unexpected key size: %u != %u\n",
  			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			 css->key_size_dw, UOS_RSA_SCRATCH_COUNT);
  		err = -EPROTO;
···
  	/* At least, it should have header, uCode and RSA. Size of all three. */
  	size = sizeof(struct uc_css_header) + uc_fw->ucode_size + uc_fw->rsa_size;
  	if (unlikely(fw->size < size)) {
- 		dev_warn(dev, "%s firmware %s: invalid size: %zu < %zu\n",
+ 		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu < %zu\n",
  			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			 fw->size, size);
  		err = -ENOEXEC;
···
  	/* Sanity check whether this fw is not larger than whole WOPCM memory */
  	size = __intel_uc_fw_get_upload_size(uc_fw);
  	if (unlikely(size >= i915->wopcm.size)) {
- 		dev_warn(dev, "%s firmware %s: invalid size: %zu > %zu\n",
+ 		drm_warn(&i915->drm, "%s firmware %s: invalid size: %zu > %zu\n",
  			 intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			 size, (size_t)i915->wopcm.size);
  		err = -E2BIG;
···
  	if (uc_fw->major_ver_found != uc_fw->major_ver_wanted ||
  	    uc_fw->minor_ver_found < uc_fw->minor_ver_wanted) {
- 		dev_notice(dev, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
+ 		drm_notice(&i915->drm, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
  			   intel_uc_fw_type_repr(uc_fw->type), uc_fw->path,
  			   uc_fw->major_ver_found, uc_fw->minor_ver_found,
  			   uc_fw->major_ver_wanted, uc_fw->minor_ver_wanted);
···
  			 INTEL_UC_FIRMWARE_MISSING :
  			 INTEL_UC_FIRMWARE_ERROR);

- 	dev_notice(dev, "%s firmware %s: fetch failed with error %d\n",
+ 	drm_notice(&i915->drm, "%s firmware %s: fetch failed with error %d\n",
  		   intel_uc_fw_type_repr(uc_fw->type), uc_fw->path, err);
- 	dev_info(dev, "%s firmware(s) can be downloaded from %s\n",
+ 	drm_info(&i915->drm, "%s firmware(s) can be downloaded from %s\n",
  		 intel_uc_fw_type_repr(uc_fw->type), INTEL_UC_FIRMWARE_URL);

  	release_firmware(fw);	/* OK even if fw is NULL */
···
  	/* Wait for DMA to finish */
  	ret = intel_wait_for_register_fw(uncore, DMA_CTRL, START_DMA, 0, 100);
  	if (ret)
- 		dev_err(gt->i915->drm.dev, "DMA for %s fw failed, DMA_CTRL=%u\n",
+ 		drm_err(&gt->i915->drm, "DMA for %s fw failed, DMA_CTRL=%u\n",
  			intel_uc_fw_type_repr(uc_fw->type),
  			intel_uncore_read_fw(uncore, DMA_CTRL));

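The new ____uc_fw_to_gt() helper relies on container_of() to walk from the embedded intel_uc_fw member back to the enclosing intel_gt, picking the GuC or HuC member by type, so it can be used before uc_fw->type is initialized. A self-contained userspace sketch of the same trick with simplified stand-in structs (all names here are illustrative):

```c
#include <stddef.h>

/* Simplified stand-ins for the embedded-member layout behind ____uc_fw_to_gt() */
struct uc_fw {
	int status;
};

struct gt {
	int id;
	struct {
		struct uc_fw guc_fw;
		struct uc_fw huc_fw;
	} uc;
};

/* container_of: recover the outer struct from a pointer to an embedded member
 * by subtracting the member's offset within the outer type. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Select the right embedded member by an explicit type argument,
 * as the kernel helper does with INTEL_UC_FW_TYPE_GUC/HUC. */
static struct gt *fw_to_gt(struct uc_fw *fw, int is_guc)
{
	if (is_guc)
		return container_of(fw, struct gt, uc.guc_fw);
	return container_of(fw, struct gt, uc.huc_fw);
}

static int demo_container_of(void)
{
	struct gt g = { .id = 7 };

	/* Both embedded members must resolve back to the same outer struct */
	return fw_to_gt(&g.uc.guc_fw, 1) == &g &&
	       fw_to_gt(&g.uc.huc_fw, 0)->id == 7;
}
```

Passing the type explicitly is what lets intel_uc_fw_init_early() reach the i915 device (for HAS_GT_UC and revision) before the uc_fw struct itself is filled in.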
+1 -2
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
···
  }

  void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw,
- 			    enum intel_uc_fw_type type, bool supported,
- 			    enum intel_platform platform, u8 rev);
+ 			    enum intel_uc_fw_type type);
  int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw);
  void intel_uc_fw_cleanup_fetch(struct intel_uc_fw *uc_fw);
  int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 offset, u32 dma_flags);
+1 -1
drivers/gpu/drm/i915/gvt/aperture_gm.c
···
   */

  #include "i915_drv.h"
- #include "i915_gem_fence_reg.h"
+ #include "gt/intel_ggtt_fencing.h"
  #include "gvt.h"

  static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
+124 -15
drivers/gpu/drm/i915/i915_active.c
···
  		return err;
  }

- int i915_active_wait(struct i915_active *ref)
+ int __i915_active_wait(struct i915_active *ref, int state)
  {
  	int err;
···
  	if (err)
  		return err;

- 	if (wait_var_event_interruptible(ref, i915_active_is_idle(ref)))
+ 	if (!i915_active_is_idle(ref) &&
+ 	    ___wait_var_event(ref, i915_active_is_idle(ref),
+ 			      state, 0, 0, schedule()))
  		return -EINTR;

  	flush_work(&ref->work);
···
  	return 0;
  }

+ struct wait_barrier {
+ 	struct wait_queue_entry base;
+ 	struct i915_active *ref;
+ };
+
+ static int
+ barrier_wake(wait_queue_entry_t *wq, unsigned int mode, int flags, void *key)
+ {
+ 	struct wait_barrier *wb = container_of(wq, typeof(*wb), base);
+
+ 	if (i915_active_is_idle(wb->ref)) {
+ 		list_del(&wq->entry);
+ 		i915_sw_fence_complete(wq->private);
+ 		kfree(wq);
+ 	}
+
+ 	return 0;
+ }
+
+ static int __await_barrier(struct i915_active *ref, struct i915_sw_fence *fence)
+ {
+ 	struct wait_barrier *wb;
+
+ 	wb = kmalloc(sizeof(*wb), GFP_KERNEL);
+ 	if (unlikely(!wb))
+ 		return -ENOMEM;
+
+ 	GEM_BUG_ON(i915_active_is_idle(ref));
+ 	if (!i915_sw_fence_await(fence)) {
+ 		kfree(wb);
+ 		return -EINVAL;
+ 	}
+
+ 	wb->base.flags = 0;
+ 	wb->base.func = barrier_wake;
+ 	wb->base.private = fence;
+ 	wb->ref = ref;
+
+ 	add_wait_queue(__var_waitqueue(ref), &wb->base);
+ 	return 0;
+ }
+
  static int await_active(struct i915_active *ref,
  			unsigned int flags,
  			int (*fn)(void *arg, struct dma_fence *fence),
- 			void *arg)
+ 			void *arg, struct i915_sw_fence *barrier)
  {
  	int err = 0;

- 	/* We must always wait for the exclusive fence! */
- 	if (rcu_access_pointer(ref->excl.fence)) {
+ 	if (!i915_active_acquire_if_busy(ref))
+ 		return 0;
+
+ 	if (flags & I915_ACTIVE_AWAIT_EXCL &&
+ 	    rcu_access_pointer(ref->excl.fence)) {
  		err = __await_active(&ref->excl, fn, arg);
  		if (err)
- 			return err;
+ 			goto out;
  	}

- 	if (flags & I915_ACTIVE_AWAIT_ALL && i915_active_acquire_if_busy(ref)) {
+ 	if (flags & I915_ACTIVE_AWAIT_ACTIVE) {
  		struct active_node *it, *n;

  		rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
  			err = __await_active(&it->base, fn, arg);
  			if (err)
- 				break;
+ 				goto out;
  		}
- 		i915_active_release(ref);
- 		if (err)
- 			return err;
  	}

- 	return 0;
+ 	if (flags & I915_ACTIVE_AWAIT_BARRIER) {
+ 		err = flush_lazy_signals(ref);
+ 		if (err)
+ 			goto out;
+
+ 		err = __await_barrier(ref, barrier);
+ 		if (err)
+ 			goto out;
+ 	}
+
+ out:
+ 	i915_active_release(ref);
+ 	return err;
  }

  static int rq_await_fence(void *arg, struct dma_fence *fence)
···
  			      struct i915_active *ref,
  			      unsigned int flags)
  {
- 	return await_active(ref, flags, rq_await_fence, rq);
+ 	return await_active(ref, flags, rq_await_fence, rq, &rq->submit);
  }

  static int sw_await_fence(void *arg, struct dma_fence *fence)
···
  			       struct i915_active *ref,
  			       unsigned int flags)
  {
- 	return await_active(ref, flags, sw_await_fence, fence);
+ 	return await_active(ref, flags, sw_await_fence, fence, fence);
  }

  #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
···
  	GEM_BUG_ON(!intel_engine_pm_is_awake(engine));
  	llist_add(barrier_to_ll(node), &engine->barrier_tasks);
- 	intel_engine_pm_put(engine);
+ 	intel_engine_pm_put_delay(engine, 1);
  	}
  }
···
  void i915_active_noop(struct dma_fence *fence, struct dma_fence_cb *cb)
  {
  	active_fence_cb(fence, cb);
+ }
+
+ struct auto_active {
+ 	struct i915_active base;
+ 	struct kref ref;
+ };
+
+ struct i915_active *i915_active_get(struct i915_active *ref)
+ {
+ 	struct auto_active *aa = container_of(ref, typeof(*aa), base);
+
+ 	kref_get(&aa->ref);
+ 	return &aa->base;
+ }
+
+ static void auto_release(struct kref *ref)
+ {
+ 	struct auto_active *aa = container_of(ref, typeof(*aa), ref);
+
+ 	i915_active_fini(&aa->base);
+ 	kfree(aa);
+ }
+
+ void i915_active_put(struct i915_active *ref)
+ {
+ 	struct auto_active *aa = container_of(ref, typeof(*aa), base);
+
+ 	kref_put(&aa->ref, auto_release);
+ }
+
+ static int auto_active(struct i915_active *ref)
+ {
+ 	i915_active_get(ref);
+ 	return 0;
+ }
+
+ static void auto_retire(struct i915_active *ref)
+ {
+ 	i915_active_put(ref);
+ }
+
+ struct i915_active *i915_active_create(void)
+ {
+ 	struct auto_active *aa;
+
+ 	aa = kmalloc(sizeof(*aa), GFP_KERNEL);
+ 	if (!aa)
+ 		return NULL;
+
+ 	kref_init(&aa->ref);
+ 	i915_active_init(&aa->base, auto_active, auto_retire);
+
+ 	return &aa->base;
  }

  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
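The auto_active helpers above bolt a struct kref onto an i915_active so that i915_active_create() hands out a self-managing object: i915_active_get() takes a reference, and the last i915_active_put() finalizes and frees the wrapper. A userspace sketch of that create/get/put lifecycle, using a plain counter in place of struct kref (all names here are stand-ins, not the kernel API):

```c
#include <stdlib.h>

/* Stand-in for the embedded i915_active base object */
struct active {
	int live;
};

/* Refcounted wrapper, mirroring struct auto_active { base; kref } */
struct auto_active {
	struct active base;	/* must stay first so the cast below is valid */
	int refcount;
};

static int finalized;	/* counts "last put" finalizations for the demo */

static struct active *active_create(void)
{
	struct auto_active *aa = malloc(sizeof(*aa));

	if (!aa)
		return NULL;
	aa->refcount = 1;	/* creator holds the first reference */
	aa->base.live = 1;
	return &aa->base;
}

static struct active *active_get(struct active *ref)
{
	struct auto_active *aa = (struct auto_active *)ref;

	aa->refcount++;
	return ref;
}

static void active_put(struct active *ref)
{
	struct auto_active *aa = (struct auto_active *)ref;

	if (--aa->refcount == 0) {
		finalized++;	/* stands in for i915_active_fini() */
		free(aa);
	}
}

static int demo_lifecycle(void)
{
	struct active *a = active_create();

	active_get(a);		/* second reference */
	active_put(a);		/* drops back to 1: still alive */
	if (finalized != 0)
		return -1;
	active_put(a);		/* last put finalizes and frees */
	return finalized;
}
```

The payoff is ownership without an explicit destructor call site: whichever of the creator or the async waiter drops its reference last triggers the cleanup.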
+12 -2
drivers/gpu/drm/i915/i915_active.h
···
  	return rcu_access_pointer(ref->excl.fence);
  }

- int i915_active_wait(struct i915_active *ref);
+ int __i915_active_wait(struct i915_active *ref, int state);
+ static inline int i915_active_wait(struct i915_active *ref)
+ {
+ 	return __i915_active_wait(ref, TASK_INTERRUPTIBLE);
+ }

  int i915_sw_fence_await_active(struct i915_sw_fence *fence,
  			       struct i915_active *ref,
···
  int i915_request_await_active(struct i915_request *rq,
  			      struct i915_active *ref,
  			      unsigned int flags);
- #define I915_ACTIVE_AWAIT_ALL BIT(0)
+ #define I915_ACTIVE_AWAIT_EXCL BIT(0)
+ #define I915_ACTIVE_AWAIT_ACTIVE BIT(1)
+ #define I915_ACTIVE_AWAIT_BARRIER BIT(2)

  int i915_active_acquire(struct i915_active *ref);
  bool i915_active_acquire_if_busy(struct i915_active *ref);
···
  void i915_active_print(struct i915_active *ref, struct drm_printer *m);
  void i915_active_unlock_wait(struct i915_active *ref);
+
+ struct i915_active *i915_active_create(void);
+ struct i915_active *i915_active_get(struct i915_active *ref);
+ void i915_active_put(struct i915_active *ref);

  #endif /* _I915_ACTIVE_H_ */
+3 -295
drivers/gpu/drm/i915/i915_debugfs.c
···
  #include "gt/intel_reset.h"
  #include "gt/intel_rc6.h"
  #include "gt/intel_rps.h"
- #include "gt/uc/intel_guc_submission.h"

  #include "i915_debugfs.h"
  #include "i915_debugfs_params.h"
···
  struct file_stats {
  	struct i915_address_space *vm;
  	unsigned long count;
- 	u64 total, unbound;
+ 	u64 total;
  	u64 active, inactive;
  	u64 closed;
  };
···
  	stats->count++;
  	stats->total += obj->base.size;
- 	if (!atomic_read(&obj->bind_count))
- 		stats->unbound += obj->base.size;

  	spin_lock(&obj->vma.lock);
  	if (!stats->vm) {
···
  #define print_file_stats(m, name, stats) do { \
  	if (stats.count) \
- 		seq_printf(m, "%s: %lu objects, %llu bytes (%llu active, %llu inactive, %llu unbound, %llu closed)\n", \
+ 		seq_printf(m, "%s: %lu objects, %llu bytes (%llu active, %llu inactive, %llu closed)\n", \
  			   name, \
  			   stats.count, \
  			   stats.total, \
  			   stats.active, \
  			   stats.inactive, \
- 			   stats.unbound, \
  			   stats.closed); \
  } while (0)
···
  	if (!error)
  		return 0;

- 	DRM_DEBUG_DRIVER("Resetting error state\n");
+ 	drm_dbg(&error->i915->drm, "Resetting error state\n");
  	i915_reset_error_state(error->i915);

  	return cnt;
···
  	return 0;
  }
-
- static int i915_huc_load_status_info(struct seq_file *m, void *data)
- {
- 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
- 	intel_wakeref_t wakeref;
- 	struct drm_printer p;
-
- 	if (!HAS_GT_UC(dev_priv))
- 		return -ENODEV;
-
- 	p = drm_seq_file_printer(m);
- 	intel_uc_fw_dump(&dev_priv->gt.uc.huc.fw, &p);
-
- 	with_intel_runtime_pm(&dev_priv->runtime_pm, wakeref)
- 		seq_printf(m, "\nHuC status 0x%08x:\n", I915_READ(HUC_STATUS2));
-
- 	return 0;
- }
-
- static int i915_guc_load_status_info(struct seq_file *m, void *data)
- {
- 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
- 	intel_wakeref_t wakeref;
- 	struct drm_printer p;
-
- 	if (!HAS_GT_UC(dev_priv))
- 		return -ENODEV;
-
- 	p = drm_seq_file_printer(m);
- 	intel_uc_fw_dump(&dev_priv->gt.uc.guc.fw, &p);
-
- 	with_intel_runtime_pm(&dev_priv->runtime_pm, wakeref) {
- 		u32 tmp = I915_READ(GUC_STATUS);
- 		u32 i;
-
- 		seq_printf(m, "\nGuC status 0x%08x:\n", tmp);
- 		seq_printf(m, "\tBootrom status = 0x%x\n",
- 			   (tmp & GS_BOOTROM_MASK) >> GS_BOOTROM_SHIFT);
- 		seq_printf(m, "\tuKernel status = 0x%x\n",
- 			   (tmp & GS_UKERNEL_MASK) >> GS_UKERNEL_SHIFT);
- 		seq_printf(m, "\tMIA Core status = 0x%x\n",
- 			   (tmp & GS_MIA_MASK) >> GS_MIA_SHIFT);
- 		seq_puts(m, "\nScratch registers:\n");
- 		for (i = 0; i < 16; i++) {
- 			seq_printf(m, "\t%2d: \t0x%x\n",
- 				   i, I915_READ(SOFT_SCRATCH(i)));
- 		}
- 	}
-
- 	return 0;
- }
-
- static const char *
- stringify_guc_log_type(enum guc_log_buffer_type type)
- {
- 	switch (type) {
- 	case GUC_ISR_LOG_BUFFER:
- 		return "ISR";
- 	case GUC_DPC_LOG_BUFFER:
- 		return "DPC";
- 	case GUC_CRASH_DUMP_LOG_BUFFER:
- 		return "CRASH";
- 	default:
- 		MISSING_CASE(type);
- 	}
-
- 	return "";
- }
-
- static void i915_guc_log_info(struct seq_file *m, struct intel_guc_log *log)
- {
- 	enum guc_log_buffer_type type;
-
- 	if (!intel_guc_log_relay_created(log)) {
- 		seq_puts(m, "GuC log relay not created\n");
- 		return;
- 	}
-
- 	seq_puts(m, "GuC logging stats:\n");
-
- 	seq_printf(m, "\tRelay full count: %u\n",
- 		   log->relay.full_count);
-
- 	for (type = GUC_ISR_LOG_BUFFER; type < GUC_MAX_LOG_BUFFER; type++) {
- 		seq_printf(m, "\t%s:\tflush count %10u, overflow count %10u\n",
- 			   stringify_guc_log_type(type),
- 			   log->stats[type].flush,
- 			   log->stats[type].sampled_overflow);
- 	}
- }
-
- static int i915_guc_info(struct seq_file *m, void *data)
- {
- 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
- 	struct intel_uc *uc = &dev_priv->gt.uc;
-
- 	if (!intel_uc_uses_guc(uc))
- 		return -ENODEV;
-
- 	i915_guc_log_info(m, &uc->guc.log);
-
- 	/* Add more as required ... */
-
- 	return 0;
- }
-
- static int i915_guc_stage_pool(struct seq_file *m, void *data)
- {
- 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
- 	struct intel_uc *uc = &dev_priv->gt.uc;
- 	struct guc_stage_desc *desc = uc->guc.stage_desc_pool_vaddr;
- 	int index;
-
- 	if (!intel_uc_uses_guc_submission(uc))
- 		return -ENODEV;
-
- 	for (index = 0; index < GUC_MAX_STAGE_DESCRIPTORS; index++, desc++) {
- 		struct intel_engine_cs *engine;
-
- 		if (!(desc->attribute & GUC_STAGE_DESC_ATTR_ACTIVE))
- 			continue;
-
- 		seq_printf(m, "GuC stage descriptor %u:\n", index);
- 		seq_printf(m, "\tIndex: %u\n", desc->stage_id);
- 		seq_printf(m, "\tAttribute: 0x%x\n", desc->attribute);
- 		seq_printf(m, "\tPriority: %d\n", desc->priority);
- 		seq_printf(m, "\tDoorbell id: %d\n", desc->db_id);
- 		seq_printf(m, "\tEngines used: 0x%x\n",
- 			   desc->engines_used);
- 		seq_printf(m, "\tDoorbell trigger phy: 0x%llx, cpu: 0x%llx, uK: 0x%x\n",
- 			   desc->db_trigger_phy,
- 			   desc->db_trigger_cpu,
- 			   desc->db_trigger_uk);
- 		seq_printf(m, "\tProcess descriptor: 0x%x\n",
- 			   desc->process_desc);
- 		seq_printf(m, "\tWorkqueue address: 0x%x, size: 0x%x\n",
- 			   desc->wq_addr, desc->wq_size);
- 		seq_putc(m, '\n');
-
- 		for_each_uabi_engine(engine, dev_priv) {
- 			u32 guc_engine_id = engine->guc_id;
- 			struct guc_execlist_context *lrc =
- 				&desc->lrc[guc_engine_id];
-
- 			seq_printf(m, "\t%s LRC:\n", engine->name);
- 			seq_printf(m, "\t\tContext desc: 0x%x\n",
- 				   lrc->context_desc);
- 			seq_printf(m, "\t\tContext id: 0x%x\n", lrc->context_id);
- 			seq_printf(m, "\t\tLRCA: 0x%x\n", lrc->ring_lrca);
- 			seq_printf(m, "\t\tRing begin: 0x%x\n", lrc->ring_begin);
- 			seq_printf(m, "\t\tRing end: 0x%x\n", lrc->ring_end);
- 			seq_putc(m, '\n');
- 		}
- 	}
-
- 	return 0;
- }
-
- static int i915_guc_log_dump(struct seq_file *m, void *data)
- {
- 	struct drm_info_node *node = m->private;
- 	struct drm_i915_private *dev_priv = node_to_i915(node);
- 	bool dump_load_err = !!node->info_ent->data;
- 	struct drm_i915_gem_object *obj = NULL;
- 	u32 *log;
- 	int i = 0;
-
- 	if (!HAS_GT_UC(dev_priv))
- 		return -ENODEV;
-
- 	if (dump_load_err)
- 		obj = dev_priv->gt.uc.load_err_log;
- 	else if (dev_priv->gt.uc.guc.log.vma)
- 		obj = dev_priv->gt.uc.guc.log.vma->obj;
-
- 	if (!obj)
- 		return 0;
-
- 	log = i915_gem_object_pin_map(obj, I915_MAP_WC);
- 	if (IS_ERR(log)) {
- 		DRM_DEBUG("Failed to pin object\n");
- 		seq_puts(m, "(log data unaccessible)\n");
- 		return PTR_ERR(log);
- 	}
-
- 	for (i = 0; i < obj->base.size / sizeof(u32); i += 4)
- 		seq_printf(m, "0x%08x 0x%08x 0x%08x 0x%08x\n",
- 			   *(log + i), *(log + i + 1),
- 			   *(log + i + 2), *(log + i + 3));
-
- 	seq_putc(m, '\n');
-
- 	i915_gem_object_unpin_map(obj);
-
- 	return 0;
- }
-
- static int i915_guc_log_level_get(void *data, u64 *val)
- {
- 	struct drm_i915_private *dev_priv = data;
- 	struct intel_uc *uc = &dev_priv->gt.uc;
-
- 	if (!intel_uc_uses_guc(uc))
- 		return -ENODEV;
-
- 	*val = intel_guc_log_get_level(&uc->guc.log);
-
- 	return 0;
- }
-
- static int i915_guc_log_level_set(void *data, u64 val)
- {
- 	struct drm_i915_private *dev_priv = data;
- 	struct intel_uc *uc = &dev_priv->gt.uc;
-
- 	if (!intel_uc_uses_guc(uc))
- 		return -ENODEV;
-
- 	return intel_guc_log_set_level(&uc->guc.log, val);
- }
-
- DEFINE_SIMPLE_ATTRIBUTE(i915_guc_log_level_fops,
- 			i915_guc_log_level_get, i915_guc_log_level_set,
- 			"%lld\n");
-
- static int i915_guc_log_relay_open(struct inode *inode, struct file *file)
- {
- 	struct drm_i915_private *i915 = inode->i_private;
- 	struct intel_guc *guc = &i915->gt.uc.guc;
- 	struct intel_guc_log *log = &guc->log;
-
- 	if (!intel_guc_is_ready(guc))
- 		return -ENODEV;
-
- 	file->private_data = log;
-
- 	return intel_guc_log_relay_open(log);
- }
-
- static ssize_t
- i915_guc_log_relay_write(struct file *filp,
- 			 const char __user *ubuf,
- 			 size_t cnt,
- 			 loff_t *ppos)
- {
- 	struct intel_guc_log *log = filp->private_data;
- 	int val;
- 	int ret;
-
- 	ret = kstrtoint_from_user(ubuf, cnt, 0, &val);
- 	if (ret < 0)
- 		return ret;
-
- 	/*
- 	 * Enable and start the guc log relay on value of 1.
- 	 * Flush log relay for any other value.
- 	 */
- 	if (val == 1)
- 		ret = intel_guc_log_relay_start(log);
- 	else
- 		intel_guc_log_relay_flush(log);
-
- 	return ret ?: cnt;
- }
-
- static int i915_guc_log_relay_release(struct inode *inode, struct file *file)
- {
- 	struct drm_i915_private *i915 = inode->i_private;
- 	struct intel_guc *guc = &i915->gt.uc.guc;
-
- 	intel_guc_log_relay_close(&guc->log);
- 	return 0;
- }
-
- static const struct file_operations i915_guc_log_relay_fops = {
- 	.owner = THIS_MODULE,
- 	.open = i915_guc_log_relay_open,
- 	.write = i915_guc_log_relay_write,
- 	.release = i915_guc_log_relay_release,
- };

  static int i915_runtime_pm_status(struct seq_file *m, void *unused)
  {
···
  	{"i915_gem_objects", i915_gem_object_info, 0},
  	{"i915_gem_fence_regs", i915_gem_fence_regs_info, 0},
  	{"i915_gem_interrupt", i915_interrupt_info, 0},
- 	{"i915_guc_info", i915_guc_info, 0},
- 	{"i915_guc_load_status", i915_guc_load_status_info, 0},
- 	{"i915_guc_log_dump", i915_guc_log_dump, 0},
- 	{"i915_guc_load_err_log_dump", i915_guc_log_dump, 0, (void *)1},
- 	{"i915_guc_stage_pool", i915_guc_stage_pool, 0},
- 	{"i915_huc_load_status", i915_huc_load_status_info, 0},
  	{"i915_frequency_info", i915_frequency_info, 0},
  	{"i915_ring_freq_table", i915_ring_freq_table, 0},
  	{"i915_context_status", i915_context_status, 0},
···
  	{"i915_error_state", &i915_error_state_fops},
  	{"i915_gpu_info", &i915_gpu_info_fops},
  #endif
- 	{"i915_guc_log_level", &i915_guc_log_level_fops},
- 	{"i915_guc_log_relay", &i915_guc_log_relay_fops},
  };

  int i915_debugfs_register(struct drm_i915_private *dev_priv)
drivers/gpu/drm/i915/i915_drv.c (-4)
@@ -1286 +1286 @@
 		drm_err(&dev_priv->drm, "failed to re-enable GGTT\n");
 
 	i915_ggtt_resume(&dev_priv->ggtt);
-	i915_gem_restore_fences(&dev_priv->ggtt);
 
 	intel_csr_ucode_resume(dev_priv);
 
@@ -1603 +1604 @@
 
 	intel_gt_runtime_resume(&dev_priv->gt);
 
-	i915_gem_restore_fences(&dev_priv->ggtt);
-
 	enable_rpm_wakeref_asserts(rpm);
 
 	return ret;
@@ -1682 +1685 @@
 	 * we can do is to hope that things will still work (and disable RPM).
 	 */
 	intel_gt_runtime_resume(&dev_priv->gt);
-	i915_gem_restore_fences(&dev_priv->ggtt);
 
 	/*
 	 * On VLV/CHV display interrupts are part of the display
drivers/gpu/drm/i915/i915_drv.h (+7 -6)
@@ -92 +92 @@
 #include "intel_wopcm.h"
 
 #include "i915_gem.h"
-#include "i915_gem_fence_reg.h"
 #include "i915_gem_gtt.h"
 #include "i915_gpu_error.h"
 #include "i915_perf_types.h"
@@ -108 +109 @@
 
 #define DRIVER_NAME		"i915"
 #define DRIVER_DESC		"Intel Graphics"
-#define DRIVER_DATE		"20200313"
-#define DRIVER_TIMESTAMP	1584144591
+#define DRIVER_DATE		"20200417"
+#define DRIVER_TIMESTAMP	1587105300
 
 struct drm_i915_gem_object;
 
@@ -416 +417 @@
 	struct {
 		const struct drm_format_info *format;
 		unsigned int stride;
+		u64 modifier;
 	} fb;
 	u16 gen9_wa_cfb_stride;
 	s8 fence_id;
@@ -540 +540 @@
 	u32 saveSWF0[16];
 	u32 saveSWF1[16];
 	u32 saveSWF3[3];
-	u64 saveFENCE[I915_MAX_NUM_FENCES];
 	u32 savePCH_PORT_HOTPLUG;
 	u16 saveGCDGMBUS;
 };
@@ -884 +885 @@
 
 	struct pci_dev *bridge_dev;
 
-	struct intel_engine_cs *engine[I915_NUM_ENGINES];
 	struct rb_root uabi_engines;
 
 	struct resource mch_res;
@@ -1505 +1507 @@
 	(IS_ICELAKE(p) && IS_REVID(p, since, until))
 
 #define TGL_REVID_A0		0x0
+#define TGL_REVID_B0		0x1
+#define TGL_REVID_C0		0x2
 
 #define IS_TGL_REVID(p, since, until) \
 	(IS_TIGERLAKE(p) && IS_REVID(p, since, until))
@@ -1604 +1604 @@
 #define HAS_DDI(dev_priv)		 (INTEL_INFO(dev_priv)->display.has_ddi)
 #define HAS_FPGA_DBG_UNCLAIMED(dev_priv) (INTEL_INFO(dev_priv)->has_fpga_dbg)
 #define HAS_PSR(dev_priv)		 (INTEL_INFO(dev_priv)->display.has_psr)
-#define HAS_TRANSCODER_EDP(dev_priv)	 (INTEL_INFO(dev_priv)->trans_offsets[TRANSCODER_EDP] != 0)
+#define HAS_TRANSCODER(dev_priv, trans)	 ((INTEL_INFO(dev_priv)->cpu_transcoder_mask & BIT(trans)) != 0)
 
 #define HAS_RC6(dev_priv)		 (INTEL_INFO(dev_priv)->has_rc6)
 #define HAS_RC6p(dev_priv)		 (INTEL_INFO(dev_priv)->has_rc6p)
@@ -1738 +1738 @@
 					   unsigned long flags);
#define I915_GEM_OBJECT_UNBIND_ACTIVE BIT(0)
#define I915_GEM_OBJECT_UNBIND_BARRIER BIT(1)
+#define I915_GEM_OBJECT_UNBIND_TEST BIT(2)
 
 void i915_gem_runtime_suspend(struct drm_i915_private *dev_priv);
 
drivers/gpu/drm/i915/i915_gem.c (+12 -10)
@@ -118 +118 @@
 	struct i915_vma *vma;
 	int ret;
 
-	if (!atomic_read(&obj->bind_count))
+	if (list_empty(&obj->vma.list))
 		return 0;
 
 	/*
@@ -140 +140 @@
 		list_move_tail(&vma->obj_link, &still_in_list);
 		if (!i915_vma_is_bound(vma, I915_VMA_BIND_MASK))
 			continue;
+
+		if (flags & I915_GEM_OBJECT_UNBIND_TEST) {
+			ret = -EBUSY;
+			break;
+		}
 
 		ret = -EAGAIN;
 		if (!i915_vm_tryopen(vm))
@@ -998 +993 @@
 		return ERR_PTR(ret);
 	}
 
-	if (vma->fence && !i915_gem_object_is_tiled(obj)) {
-		mutex_lock(&ggtt->vm.mutex);
-		ret = i915_vma_revoke_fence(vma);
-		mutex_unlock(&ggtt->vm.mutex);
-		if (ret)
-			return ERR_PTR(ret);
-	}
-
 	ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
 	if (ret)
 		return ERR_PTR(ret);
+
+	if (vma->fence && !i915_gem_object_is_tiled(obj)) {
+		mutex_lock(&ggtt->vm.mutex);
+		i915_vma_revoke_fence(vma);
+		mutex_unlock(&ggtt->vm.mutex);
+	}
 
 	ret = i915_vma_wait_for_bind(vma);
 	if (ret) {
@@ -1159 +1156 @@
 		/* Minimal basic recovery for KMS */
 		ret = i915_ggtt_enable_hw(dev_priv);
 		i915_ggtt_resume(&dev_priv->ggtt);
-		i915_gem_restore_fences(&dev_priv->ggtt);
 		intel_init_clock_gating(dev_priv);
 	}
 
drivers/gpu/drm/i915/i915_gem_evict.c (+6 -1)
@@ -228 +228 @@
 
 	while (ret == 0 && (node = drm_mm_scan_color_evict(&scan))) {
 		vma = container_of(node, struct i915_vma, node);
-		ret = __i915_vma_unbind(vma);
+
+		/* If we find any non-objects (!vma), we cannot evict them */
+		if (vma->node.color != I915_COLOR_UNEVICTABLE)
+			ret = __i915_vma_unbind(vma);
+		else
+			ret = -ENOSPC; /* XXX search failed, try again? */
 	}
 
 	return ret;
drivers/gpu/drm/i915/i915_gem_fence_reg.c → drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c (+84 -86)
@@ -68 +68 @@
 	return fence->ggtt->vm.gt->uncore;
 }
 
-static void i965_write_fence_reg(struct i915_fence_reg *fence,
-				 struct i915_vma *vma)
+static void i965_write_fence_reg(struct i915_fence_reg *fence)
 {
 	i915_reg_t fence_reg_lo, fence_reg_hi;
 	int fence_pitch_shift;
@@ -86 +87 @@
 	}
 
 	val = 0;
-	if (vma) {
-		unsigned int stride = i915_gem_object_get_stride(vma->obj);
+	if (fence->tiling) {
+		unsigned int stride = fence->stride;
 
-		GEM_BUG_ON(!i915_vma_is_map_and_fenceable(vma));
-		GEM_BUG_ON(!IS_ALIGNED(vma->node.start, I965_FENCE_PAGE));
-		GEM_BUG_ON(!IS_ALIGNED(vma->fence_size, I965_FENCE_PAGE));
 		GEM_BUG_ON(!IS_ALIGNED(stride, 128));
 
-		val = (vma->node.start + vma->fence_size - I965_FENCE_PAGE) << 32;
-		val |= vma->node.start;
+		val = fence->start + fence->size - I965_FENCE_PAGE;
+		val <<= 32;
+		val |= fence->start;
 		val |= (u64)((stride / 128) - 1) << fence_pitch_shift;
-		if (i915_gem_object_get_tiling(vma->obj) == I915_TILING_Y)
+		if (fence->tiling == I915_TILING_Y)
 			val |= BIT(I965_FENCE_TILING_Y_SHIFT);
 		val |= I965_FENCE_REG_VALID;
 	}
@@ -122 +125 @@
 	}
 }
 
-static void i915_write_fence_reg(struct i915_fence_reg *fence,
-				 struct i915_vma *vma)
+static void i915_write_fence_reg(struct i915_fence_reg *fence)
 {
 	u32 val;
 
 	val = 0;
-	if (vma) {
-		unsigned int tiling = i915_gem_object_get_tiling(vma->obj);
+	if (fence->tiling) {
+		unsigned int stride = fence->stride;
+		unsigned int tiling = fence->tiling;
 		bool is_y_tiled = tiling == I915_TILING_Y;
-		unsigned int stride = i915_gem_object_get_stride(vma->obj);
-
-		GEM_BUG_ON(!i915_vma_is_map_and_fenceable(vma));
-		GEM_BUG_ON(vma->node.start & ~I915_FENCE_START_MASK);
-		GEM_BUG_ON(!is_power_of_2(vma->fence_size));
-		GEM_BUG_ON(!IS_ALIGNED(vma->node.start, vma->fence_size));
 
 		if (is_y_tiled && HAS_128_BYTE_Y_TILING(fence_to_i915(fence)))
 			stride /= 128;
 		else
 			stride /= 512;
 		GEM_BUG_ON(!is_power_of_2(stride));
 
-		val = vma->node.start;
+		val = fence->start;
 		if (is_y_tiled)
 			val |= BIT(I830_FENCE_TILING_Y_SHIFT);
-		val |= I915_FENCE_SIZE_BITS(vma->fence_size);
+		val |= I915_FENCE_SIZE_BITS(fence->size);
 		val |= ilog2(stride) << I830_FENCE_PITCH_SHIFT;
 
 		val |= I830_FENCE_REG_VALID;
@@ -156 +165 @@
 	}
 }
 
-static void i830_write_fence_reg(struct i915_fence_reg *fence,
-				 struct i915_vma *vma)
+static void i830_write_fence_reg(struct i915_fence_reg *fence)
 {
 	u32 val;
 
 	val = 0;
-	if (vma) {
-		unsigned int stride = i915_gem_object_get_stride(vma->obj);
+	if (fence->tiling) {
+		unsigned int stride = fence->stride;
 
-		GEM_BUG_ON(!i915_vma_is_map_and_fenceable(vma));
-		GEM_BUG_ON(vma->node.start & ~I830_FENCE_START_MASK);
-		GEM_BUG_ON(!is_power_of_2(vma->fence_size));
-		GEM_BUG_ON(!is_power_of_2(stride / 128));
-		GEM_BUG_ON(!IS_ALIGNED(vma->node.start, vma->fence_size));
-
-		val = vma->node.start;
-		if (i915_gem_object_get_tiling(vma->obj) == I915_TILING_Y)
+		val = fence->start;
+		if (fence->tiling == I915_TILING_Y)
 			val |= BIT(I830_FENCE_TILING_Y_SHIFT);
-		val |= I830_FENCE_SIZE_BITS(vma->fence_size);
+		val |= I830_FENCE_SIZE_BITS(fence->size);
 		val |= ilog2(stride / 128) << I830_FENCE_PITCH_SHIFT;
 		val |= I830_FENCE_REG_VALID;
 	}
@@ -181 +197 @@
 	}
 }
 
-static void fence_write(struct i915_fence_reg *fence,
-			struct i915_vma *vma)
+static void fence_write(struct i915_fence_reg *fence)
 {
 	struct drm_i915_private *i915 = fence_to_i915(fence);
 
@@ -192 +209 @@
 	 */
 
 	if (IS_GEN(i915, 2))
-		i830_write_fence_reg(fence, vma);
+		i830_write_fence_reg(fence);
 	else if (IS_GEN(i915, 3))
-		i915_write_fence_reg(fence, vma);
+		i915_write_fence_reg(fence);
 	else
-		i965_write_fence_reg(fence, vma);
+		i965_write_fence_reg(fence);
 
 	/*
 	 * Access through the fenced region afterwards is
 	 * ordered by the posting reads whilst writing the registers.
 	 */
+}
 
-	fence->dirty = false;
+static bool gpu_uses_fence_registers(struct i915_fence_reg *fence)
+{
+	return INTEL_GEN(fence_to_i915(fence)) < 4;
 }
 
 static int fence_update(struct i915_fence_reg *fence,
@@ -218 +232 @@
 	struct i915_vma *old;
 	int ret;
 
+	fence->tiling = 0;
 	if (vma) {
+		GEM_BUG_ON(!i915_gem_object_get_stride(vma->obj) ||
+			   !i915_gem_object_get_tiling(vma->obj));
+
 		if (!i915_vma_is_map_and_fenceable(vma))
 			return -EINVAL;
 
-		if (drm_WARN(&uncore->i915->drm,
-			     !i915_gem_object_get_stride(vma->obj) ||
-			     !i915_gem_object_get_tiling(vma->obj),
-			     "bogus fence setup with stride: 0x%x, tiling mode: %i\n",
-			     i915_gem_object_get_stride(vma->obj),
-			     i915_gem_object_get_tiling(vma->obj)))
-			return -EINVAL;
+		if (gpu_uses_fence_registers(fence)) {
+			/* implicit 'unfenced' GPU blits */
+			ret = i915_vma_sync(vma);
+			if (ret)
+				return ret;
+		}
 
-		ret = i915_vma_sync(vma);
-		if (ret)
-			return ret;
+		fence->start = vma->node.start;
+		fence->size = vma->fence_size;
+		fence->stride = i915_gem_object_get_stride(vma->obj);
+		fence->tiling = i915_gem_object_get_tiling(vma->obj);
 	}
+	WRITE_ONCE(fence->dirty, false);
 
 	old = xchg(&fence->vma, NULL);
 	if (old) {
 		/* XXX Ideally we would move the waiting to outside the mutex */
-		ret = i915_vma_sync(old);
+		ret = i915_active_wait(&fence->active);
 		if (ret) {
 			fence->vma = old;
 			return ret;
@@ -267 +276 @@
 	/*
 	 * We only need to update the register itself if the device is awake.
 	 * If the device is currently powered down, we will defer the write
-	 * to the runtime resume, see i915_gem_restore_fences().
+	 * to the runtime resume, see intel_ggtt_restore_fences().
 	 *
 	 * This only works for removing the fence register, on acquisition
 	 * the caller must hold the rpm wakeref. The fence register must
@@ -281 +290 @@
 	}
 
 	WRITE_ONCE(fence->vma, vma);
-	fence_write(fence, vma);
+	fence_write(fence);
 
 	if (vma) {
 		vma->fence = fence;
@@ -298 +307 @@
 *
 * This function force-removes any fence from the given object, which is useful
 * if the kernel wants to do untiled GTT access.
- *
- * Returns:
- *
- * 0 on success, negative error code on failure.
 */
-int i915_vma_revoke_fence(struct i915_vma *vma)
+void i915_vma_revoke_fence(struct i915_vma *vma)
 {
 	struct i915_fence_reg *fence = vma->fence;
+	intel_wakeref_t wakeref;
 
 	lockdep_assert_held(&vma->vm->mutex);
 	if (!fence)
-		return 0;
+		return;
 
-	if (atomic_read(&fence->pin_count))
-		return -EBUSY;
+	GEM_BUG_ON(fence->vma != vma);
+	GEM_BUG_ON(!i915_active_is_idle(&fence->active));
+	GEM_BUG_ON(atomic_read(&fence->pin_count));
 
-	return fence_update(fence, NULL);
+	fence->tiling = 0;
+	WRITE_ONCE(fence->vma, NULL);
+	vma->fence = NULL;
+
+	with_intel_runtime_pm_if_in_use(fence_to_uncore(fence)->rpm, wakeref)
+		fence_write(fence);
 }
 
 static struct i915_fence_reg *fence_find(struct i915_ggtt *ggtt)
@@ -481 +487 @@
 }
 
 /**
- * i915_gem_restore_fences - restore fence state
+ * intel_ggtt_restore_fences - restore fence state
  * @ggtt: Global GTT
  *
  * Restore the hw fence state to match the software tracking again, to be called
  * after a gpu reset and on resume. Note that on runtime suspend we only cancel
  * the fences, to be reacquired by the user later.
  */
-void i915_gem_restore_fences(struct i915_ggtt *ggtt)
+void intel_ggtt_restore_fences(struct i915_ggtt *ggtt)
 {
 	int i;
 
-	rcu_read_lock(); /* keep obj alive as we dereference */
-	for (i = 0; i < ggtt->num_fences; i++) {
-		struct i915_fence_reg *reg = &ggtt->fence_regs[i];
-		struct i915_vma *vma = READ_ONCE(reg->vma);
-
-		GEM_BUG_ON(vma && vma->fence != reg);
-
-		/*
-		 * Commit delayed tiling changes if we have an object still
-		 * attached to the fence, otherwise just clear the fence.
-		 */
-		if (vma && !i915_gem_object_is_tiled(vma->obj))
-			vma = NULL;
-
-		fence_write(reg, vma);
-	}
-	rcu_read_unlock();
+	for (i = 0; i < ggtt->num_fences; i++)
+		fence_write(&ggtt->fence_regs[i]);
 }
 
@@ -725 +746 @@
 * bit 17 of its physical address and therefore being interpreted differently
 * by the GPU.
 */
-static void i915_gem_swizzle_page(struct page *page)
+static void swizzle_page(struct page *page)
 {
 	char temp[64];
 	char *vaddr;
@@ -770 +791 @@
 	for_each_sgt_page(page, sgt_iter, pages) {
 		char new_bit_17 = page_to_phys(page) >> 17;
 		if ((new_bit_17 & 0x1) != (test_bit(i, obj->bit_17) != 0)) {
-			i915_gem_swizzle_page(page);
+			swizzle_page(page);
 			set_page_dirty(page);
 		}
 		i++;
@@ -815 +836 @@
 	}
 }
 
-void i915_ggtt_init_fences(struct i915_ggtt *ggtt)
+void intel_ggtt_init_fences(struct i915_ggtt *ggtt)
 {
 	struct drm_i915_private *i915 = ggtt->vm.i915;
 	struct intel_uncore *uncore = ggtt->vm.gt->uncore;
@@ -843 +864 @@
 	if (intel_vgpu_active(i915))
 		num_fences = intel_uncore_read(uncore,
 					       vgtif_reg(avail_rs.fence_num));
+	ggtt->fence_regs = kcalloc(num_fences,
+				   sizeof(*ggtt->fence_regs),
+				   GFP_KERNEL);
+	if (!ggtt->fence_regs)
+		num_fences = 0;
 
 	/* Initialize fence registers to zero */
 	for (i = 0; i < num_fences; i++) {
 		struct i915_fence_reg *fence = &ggtt->fence_regs[i];
 
+		i915_active_init(&fence->active, NULL, NULL);
 		fence->ggtt = ggtt;
 		fence->id = i;
 		list_add_tail(&fence->link, &ggtt->fence_list);
 	}
 	ggtt->num_fences = num_fences;
 
-	i915_gem_restore_fences(ggtt);
+	intel_ggtt_restore_fences(ggtt);
+}
+
+void intel_ggtt_fini_fences(struct i915_ggtt *ggtt)
+{
+	int i;
+
+	for (i = 0; i < ggtt->num_fences; i++) {
+		struct i915_fence_reg *fence = &ggtt->fence_regs[i];
+
+		i915_active_fini(&fence->active);
+	}
+
+	kfree(ggtt->fence_regs);
 }
 
 void intel_gt_init_swizzling(struct intel_gt *gt)
drivers/gpu/drm/i915/i915_gem_fence_reg.h → drivers/gpu/drm/i915/gt/intel_ggtt_fencing.h (+12 -5)
@@ -22 +22 @@
 *
 */
 
-#ifndef __I915_FENCE_REG_H__
-#define __I915_FENCE_REG_H__
+#ifndef __INTEL_GGTT_FENCING_H__
+#define __INTEL_GGTT_FENCING_H__
 
 #include <linux/list.h>
 #include <linux/types.h>
+
+#include "i915_active.h"
 
 struct drm_i915_gem_object;
 struct i915_ggtt;
@@ -43 +41 @@
 	struct i915_ggtt *ggtt;
 	struct i915_vma *vma;
 	atomic_t pin_count;
+	struct i915_active active;
 	int id;
 	/**
 	 * Whether the tiling parameters for the currently
@@ -54 +51 @@
 	 * command (such as BLT on gen2/3), as a "fence".
 	 */
 	bool dirty;
+	u32 start;
+	u32 size;
+	u32 tiling;
+	u32 stride;
 };
 
-/* i915_gem_fence_reg.c */
 struct i915_fence_reg *i915_reserve_fence(struct i915_ggtt *ggtt);
 void i915_unreserve_fence(struct i915_fence_reg *fence);
 
-void i915_gem_restore_fences(struct i915_ggtt *ggtt);
+void intel_ggtt_restore_fences(struct i915_ggtt *ggtt);
 
 void i915_gem_object_do_bit_17_swizzle(struct drm_i915_gem_object *obj,
				       struct sg_table *pages);
 void i915_gem_object_save_bit_17_swizzle(struct drm_i915_gem_object *obj,
					 struct sg_table *pages);
 
-void i915_ggtt_init_fences(struct i915_ggtt *ggtt);
+void intel_ggtt_init_fences(struct i915_ggtt *ggtt);
+void intel_ggtt_fini_fences(struct i915_ggtt *ggtt);
 
 void intel_gt_init_swizzling(struct intel_gt *gt);
 
drivers/gpu/drm/i915/i915_gpu_error.c (+1 -1)
@@ -1858 +1858 @@
 		return;
 
 	i915 = error->i915;
-	dev_info(i915->drm.dev, "%s\n", error_msg(error));
+	drm_info(&i915->drm, "%s\n", error_msg(error));
 
 	if (error->simulated ||
 	    cmpxchg(&i915->gpu_error.first_error, NULL, error))
drivers/gpu/drm/i915/i915_irq.c (+4 -4)
@@ -3658 +3658 @@
 		intel_uncore_write16(&dev_priv->uncore, GEN2_IIR, iir);
 
 		if (iir & I915_USER_INTERRUPT)
-			intel_engine_signal_breadcrumbs(dev_priv->engine[RCS0]);
+			intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]);
 
 		if (iir & I915_MASTER_ERROR_INTERRUPT)
 			i8xx_error_irq_handler(dev_priv, eir, eir_stuck);
@@ -3763 +3763 @@
 		I915_WRITE(GEN2_IIR, iir);
 
 		if (iir & I915_USER_INTERRUPT)
-			intel_engine_signal_breadcrumbs(dev_priv->engine[RCS0]);
+			intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]);
 
 		if (iir & I915_MASTER_ERROR_INTERRUPT)
 			i9xx_error_irq_handler(dev_priv, eir, eir_stuck);
@@ -3905 +3905 @@
 		I915_WRITE(GEN2_IIR, iir);
 
 		if (iir & I915_USER_INTERRUPT)
-			intel_engine_signal_breadcrumbs(dev_priv->engine[RCS0]);
+			intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]);
 
 		if (iir & I915_BSD_USER_INTERRUPT)
-			intel_engine_signal_breadcrumbs(dev_priv->engine[VCS0]);
+			intel_engine_signal_breadcrumbs(dev_priv->gt.engine[VCS0]);
 
 		if (iir & I915_MASTER_ERROR_INTERRUPT)
 			i9xx_error_irq_handler(dev_priv, eir, eir_stuck);
drivers/gpu/drm/i915/i915_pci.c (+22 -1)
@@ -160 +160 @@
 	GEN(2), \
 	.is_mobile = 1, \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
 	.display.has_overlay = 1, \
 	.display.cursor_needs_physical = 1, \
 	.display.overlay_needs_physical = 1, \
@@ -180 +179 @@
 #define I845_FEATURES \
 	GEN(2), \
 	.pipe_mask = BIT(PIPE_A), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A), \
 	.display.has_overlay = 1, \
 	.display.overlay_needs_physical = 1, \
 	.display.has_gmch = 1, \
@@ -220 +218 @@
 #define GEN3_FEATURES \
 	GEN(3), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
 	.display.has_gmch = 1, \
 	.gpu_reset_clobbers_display = true, \
 	.engine_mask = BIT(RCS0), \
@@ -306 +303 @@
 #define GEN4_FEATURES \
 	GEN(4), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
 	.display.has_hotplug = 1, \
 	.display.has_gmch = 1, \
 	.gpu_reset_clobbers_display = true, \
@@ -358 +354 @@
 #define GEN5_FEATURES \
 	GEN(5), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
 	.display.has_hotplug = 1, \
 	.engine_mask = BIT(RCS0) | BIT(VCS0), \
 	.has_snoop = true, \
@@ -386 +381 @@
 #define GEN6_FEATURES \
 	GEN(6), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
 	.display.has_hotplug = 1, \
 	.display.has_fbc = 1, \
 	.engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0), \
@@ -436 +430 @@
 #define GEN7_FEATURES \
 	GEN(7), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C), \
 	.display.has_hotplug = 1, \
 	.display.has_fbc = 1, \
 	.engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0), \
@@ -489 +482 @@
 	PLATFORM(INTEL_IVYBRIDGE),
 	.gt = 2,
 	.pipe_mask = 0, /* legal, last one wins */
+	.cpu_transcoder_mask = 0,
 	.has_l3_dpf = 1,
 };
 
@@ -498 +490 @@
 	GEN(7),
 	.is_lp = 1,
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B),
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B),
 	.has_runtime_pm = 1,
 	.has_rc6 = 1,
 	.has_rps = true,
@@ -520 +511 @@
 #define G75_FEATURES  \
	GEN7_FEATURES, \
	.engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP), \
	.display.has_ddi = 1, \
	.has_fpga_dbg = 1, \
	.display.has_psr = 1, \
@@ -592 +581 @@
 	PLATFORM(INTEL_CHERRYVIEW),
 	GEN(8),
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C),
 	.display.has_hotplug = 1,
 	.is_lp = 1,
 	.engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0),
@@ -668 +656 @@
 	.display.has_hotplug = 1, \
 	.engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0), \
 	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) | \
+		BIT(TRANSCODER_DSI_A) | BIT(TRANSCODER_DSI_C), \
 	.has_64bit_reloc = 1, \
 	.display.has_ddi = 1, \
 	.has_fpga_dbg = 1, \
@@ -774 +759 @@
 #define GEN11_FEATURES \
 	GEN10_FEATURES, \
 	GEN11_DEFAULT_PAGE_SIZES, \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) | \
+		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1), \
 	.pipe_offsets = { \
 		[TRANSCODER_A] = PIPE_A_OFFSET, \
 		[TRANSCODER_B] = PIPE_B_OFFSET, \
@@ -817 +799 @@
 #define GEN12_FEATURES \
 	GEN11_FEATURES, \
 	GEN(12), \
+	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D), \
+	.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+		BIT(TRANSCODER_C) | BIT(TRANSCODER_D) | \
+		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1), \
 	.pipe_offsets = { \
 		[TRANSCODER_A] = PIPE_A_OFFSET, \
 		[TRANSCODER_B] = PIPE_B_OFFSET, \
@@ -844 +822 @@
 static const struct intel_device_info tgl_info = {
 	GEN12_FEATURES,
 	PLATFORM(INTEL_TIGERLAKE),
-	.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
 	.display.has_modular_fia = 1,
 	.engine_mask =
		BIT(RCS0) | BIT(BCS0) | BIT(VECS0) | BIT(VCS0) | BIT(VCS2),
drivers/gpu/drm/i915/i915_perf.c (+288 -279)
@@ -204 +204 @@
 
 #include "i915_drv.h"
 #include "i915_perf.h"
-#include "oa/i915_oa_hsw.h"
-#include "oa/i915_oa_bdw.h"
-#include "oa/i915_oa_chv.h"
-#include "oa/i915_oa_sklgt2.h"
-#include "oa/i915_oa_sklgt3.h"
-#include "oa/i915_oa_sklgt4.h"
-#include "oa/i915_oa_bxt.h"
-#include "oa/i915_oa_kblgt2.h"
-#include "oa/i915_oa_kblgt3.h"
-#include "oa/i915_oa_glk.h"
-#include "oa/i915_oa_cflgt2.h"
-#include "oa/i915_oa_cflgt3.h"
-#include "oa/i915_oa_cnl.h"
-#include "oa/i915_oa_icl.h"
-#include "oa/i915_oa_tgl.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
 * is currently generally designed assuming the largest 16M size is used such
@@ -223 +238 @@
 *
 * Although this can be observed explicitly while copying reports to userspace
 * by checking for a zeroed report-id field in tail reports, we want to account
-* for this earlier, as part of the oa_buffer_check to avoid lots of redundant
-* read() attempts.
+* for this earlier, as part of the oa_buffer_check_unlocked to avoid lots of
+* redundant read() attempts.
 *
-* In effect we define a tail pointer for reading that lags the real tail
-* pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which gives enough
-* time for the corresponding reports to become visible to the CPU.
-*
-* To manage this we actually track two tail pointers:
-* 1) An 'aging' tail with an associated timestamp that is tracked until we
-*    can trust the corresponding data is visible to the CPU; at which point
-*    it is considered 'aged'.
-* 2) An 'aged' tail that can be used for read()ing.
-*
-* The two separate pointers let us decouple read()s from tail pointer aging.
-*
-* The tail pointers are checked and updated at a limited rate within a hrtimer
-* callback (the same callback that is used for delivering EPOLLIN events)
-*
-* Initially the tails are marked invalid with %INVALID_TAIL_PTR which
-* indicates that an updated tail pointer is needed.
+* We workaround this issue in oa_buffer_check_unlocked() by reading the reports
+* in the OA buffer, starting from the tail reported by the HW until we find a
+* report with its first 2 dwords not 0 meaning its previous report is
+* completely in memory and ready to be read. Those dwords are also set to 0
+* once read and the whole buffer is cleared upon OA buffer initialization. The
+* first dword is the reason for this report while the second is the timestamp,
+* making the chances of having those 2 fields at 0 fairly unlikely. A more
+* detailed explanation is available in oa_buffer_check_unlocked().
 *
 * Most of the implementation details for this workaround are in
 * oa_buffer_check_unlocked() and _append_oa_reports()
@@ -248 +272 @@
#define OA_TAIL_MARGIN_NSEC	100000ULL
#define INVALID_TAIL_PTR	0xffffffff
 
-/* frequency for checking whether the OA unit has written new reports to the
- * circular OA buffer...
+/* The default frequency for checking whether the OA unit has written new
+ * reports to the circular OA buffer...
 */
-#define POLL_FREQUENCY 200
-#define POLL_PERIOD (NSEC_PER_SEC / POLL_FREQUENCY)
+#define DEFAULT_POLL_FREQUENCY_HZ 200
+#define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)
 
 /* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
 static u32 i915_perf_stream_paranoid = true;
@@ -335 +359 @@
 * @oa_periodic: Whether to enable periodic OA unit sampling
 * @oa_period_exponent: The OA unit sampling period is derived from this
 * @engine: The engine (typically rcs0) being monitored by the OA unit
+* @has_sseu: Whether @sseu was specified by userspace
+* @sseu: internal SSEU configuration computed either from the userspace
+*        specified configuration in the opening parameters or a default value
+*        (see get_default_sseu_config())
+* @poll_oa_period: The period in nanoseconds at which the CPU will check for OA
+*                  data availability
 *
 * As read_properties_unlocked() enumerates and validates the properties given
 * to open a stream of metrics the configuration is built up in the structure
@@ -360 +378 @@
 	int oa_period_exponent;
 
 	struct intel_engine_cs *engine;
+
+	bool has_sseu;
+	struct intel_sseu sseu;
+
+	u64 poll_oa_period;
 };
 
 struct i915_oa_config_bo {
@@ -396 +409 @@
 	struct i915_oa_config *oa_config;
 
 	rcu_read_lock();
-	if (metrics_set == 1)
-		oa_config = &perf->test_config;
-	else
-		oa_config = idr_find(&perf->metrics_idr, metrics_set);
+	oa_config = idr_find(&perf->metrics_idr, metrics_set);
 	if (oa_config)
 		oa_config = i915_oa_config_get(oa_config);
 	rcu_read_unlock();
@@ -449 +465 @@
 * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
 *
 * Besides returning true when there is data available to read() this function
-* also has the side effect of updating the oa_buffer.tails[], .aging_timestamp
-* and .aged_tail_idx state used for reading.
+* also updates the tail, aging_tail and aging_timestamp in the oa_buffer
+* object.
 *
 * Note: It's safe to read OA config state here unlocked, assuming that this is
 * only called while the stream is enabled, while the global OA configuration
@@ -460 +476 @@
 */
static bool oa_buffer_check_unlocked(struct i915_perf_stream *stream)
{
+	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
 	int report_size = stream->oa_buffer.format_size;
 	unsigned long flags;
-	unsigned int aged_idx;
-	u32 head, hw_tail, aged_tail, aging_tail;
+	bool pollin;
+	u32 hw_tail;
 	u64 now;
 
 	/* We have to consider the (unlikely) possibility that read() errors
-	 * could result in an OA buffer reset which might reset the head,
-	 * tails[] and aged_tail state.
+	 * could result in an OA buffer reset which might reset the head and
+	 * tail state.
 	 */
 	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
-
-	/* NB: The head we observe here might effectively be a little out of
-	 * date (between head and tails[aged_idx].offset if there is currently
-	 * a read() in progress.
-	 */
-	head = stream->oa_buffer.head;
-
-	aged_idx = stream->oa_buffer.aged_tail_idx;
-	aged_tail = stream->oa_buffer.tails[aged_idx].offset;
-	aging_tail = stream->oa_buffer.tails[!aged_idx].offset;
 
 	hw_tail = stream->perf->ops.oa_hw_tail_read(stream);
 
@@ -482 +507 @@
 
 	now = ktime_get_mono_fast_ns();
 
-	/* Update the aged tail
-	 *
-	 * Flip the tail pointer available for read()s once the aging tail is
-	 * old enough to trust that the corresponding data will be visible to
-	 * the CPU...
-	 *
-	 * Do this before updating the aging pointer in case we may be able to
-	 * immediately start aging a new pointer too (if new data has become
-	 * available) without needing to wait for a later hrtimer callback.
-	 */
-	if (aging_tail != INVALID_TAIL_PTR &&
-	    ((now - stream->oa_buffer.aging_timestamp) >
-	     OA_TAIL_MARGIN_NSEC)) {
-
-		aged_idx ^= 1;
-		stream->oa_buffer.aged_tail_idx = aged_idx;
-
-		aged_tail = aging_tail;
-
-		/* Mark that we need a new pointer to start aging... */
-		stream->oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
-		aging_tail = INVALID_TAIL_PTR;
-	}
-
-	/* Update the aging tail
-	 *
-	 * We throttle aging tail updates until we have a new tail that
-	 * represents >= one report more data than is already available for
-	 * reading. This ensures there will be enough data for a successful
-	 * read once this new pointer has aged and ensures we will give the new
-	 * pointer time to age.
516 - */ 517 - if (aging_tail == INVALID_TAIL_PTR && 518 - (aged_tail == INVALID_TAIL_PTR || 519 - OA_TAKEN(hw_tail, aged_tail) >= report_size)) { 520 - struct i915_vma *vma = stream->oa_buffer.vma; 521 - u32 gtt_offset = i915_ggtt_offset(vma); 522 - 523 - /* Be paranoid and do a bounds check on the pointer read back 524 - * from hardware, just in case some spurious hardware condition 525 - * could put the tail out of bounds... 510 + if (hw_tail == stream->oa_buffer.aging_tail && 511 + (now - stream->oa_buffer.aging_timestamp) > OA_TAIL_MARGIN_NSEC) { 512 + /* If the HW tail hasn't moved since the last check and the HW 513 + * tail has been aging for long enough, declare it the new 514 + * tail. 526 515 */ 527 - if (hw_tail >= gtt_offset && 528 - hw_tail < (gtt_offset + OA_BUFFER_SIZE)) { 529 - stream->oa_buffer.tails[!aged_idx].offset = 530 - aging_tail = hw_tail; 531 - stream->oa_buffer.aging_timestamp = now; 532 - } else { 533 - drm_err(&stream->perf->i915->drm, 534 - "Ignoring spurious out of range OA buffer tail pointer = %x\n", 535 - hw_tail); 516 + stream->oa_buffer.tail = stream->oa_buffer.aging_tail; 517 + } else { 518 + u32 head, tail, aged_tail; 519 + 520 + /* NB: The head we observe here might effectively be a little 521 + * out of date. If a read() is in progress, the head could be 522 + * anywhere between this head and stream->oa_buffer.tail. 523 + */ 524 + head = stream->oa_buffer.head - gtt_offset; 525 + aged_tail = stream->oa_buffer.tail - gtt_offset; 526 + 527 + hw_tail -= gtt_offset; 528 + tail = hw_tail; 529 + 530 + /* Walk the stream backward until we find a report with dword 0 531 + * & 1 not at 0. Since the circular buffer pointers progress by 532 + * increments of 64 bytes and reports can be up to 256 533 + * bytes long, we can't tell whether a report has fully landed 534 + * in memory before the first 2 dwords of the following report 535 + * have effectively landed. 
536 + * 537 + * This is assuming that the writes of the OA unit land in 538 + * memory in the order they were written to. 539 + * If not : (╯°□°)╯︵ ┻━┻ 540 + */ 541 + while (OA_TAKEN(tail, aged_tail) >= report_size) { 542 + u32 *report32 = (void *)(stream->oa_buffer.vaddr + tail); 543 + 544 + if (report32[0] != 0 || report32[1] != 0) 545 + break; 546 + 547 + tail = (tail - report_size) & (OA_BUFFER_SIZE - 1); 536 548 } 549 + 550 + if (OA_TAKEN(hw_tail, tail) > report_size && 551 + __ratelimit(&stream->perf->tail_pointer_race)) 552 + DRM_NOTE("unlanded report(s) head=0x%x " 553 + "tail=0x%x hw_tail=0x%x\n", 554 + head, tail, hw_tail); 555 + 556 + stream->oa_buffer.tail = gtt_offset + tail; 557 + stream->oa_buffer.aging_tail = gtt_offset + hw_tail; 558 + stream->oa_buffer.aging_timestamp = now; 537 559 } 560 + 561 + pollin = OA_TAKEN(stream->oa_buffer.tail - gtt_offset, 562 + stream->oa_buffer.head - gtt_offset) >= report_size; 538 563 539 564 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 540 565 541 - return aged_tail == INVALID_TAIL_PTR ? 
542 - false : OA_TAKEN(aged_tail, head) >= report_size; 566 + return pollin; 543 567 } 544 568 545 569 /** ··· 656 682 u32 mask = (OA_BUFFER_SIZE - 1); 657 683 size_t start_offset = *offset; 658 684 unsigned long flags; 659 - unsigned int aged_tail_idx; 660 685 u32 head, tail; 661 686 u32 taken; 662 687 int ret = 0; ··· 666 693 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 667 694 668 695 head = stream->oa_buffer.head; 669 - aged_tail_idx = stream->oa_buffer.aged_tail_idx; 670 - tail = stream->oa_buffer.tails[aged_tail_idx].offset; 696 + tail = stream->oa_buffer.tail; 671 697 672 698 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 673 - 674 - /* 675 - * An invalid tail pointer here means we're still waiting for the poll 676 - * hrtimer callback to give us a pointer 677 - */ 678 - if (tail == INVALID_TAIL_PTR) 679 - return -EAGAIN; 680 699 681 700 /* 682 701 * NB: oa_buffer.head/tail include the gtt_offset which we don't want ··· 803 838 } 804 839 805 840 /* 806 - * The above reason field sanity check is based on 807 - * the assumption that the OA buffer is initially 808 - * zeroed and we reset the field after copying so the 809 - * check is still meaningful once old reports start 810 - * being overwritten. 841 + * Clear out the first 2 dwords as a means to detect unlanded 842 + * reports. 
811 843 */ 812 844 report32[0] = 0; 845 + report32[1] = 0; 813 846 } 814 847 815 848 if (start_offset != *offset) { ··· 948 985 u32 mask = (OA_BUFFER_SIZE - 1); 949 986 size_t start_offset = *offset; 950 987 unsigned long flags; 951 - unsigned int aged_tail_idx; 952 988 u32 head, tail; 953 989 u32 taken; 954 990 int ret = 0; ··· 958 996 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 959 997 960 998 head = stream->oa_buffer.head; 961 - aged_tail_idx = stream->oa_buffer.aged_tail_idx; 962 - tail = stream->oa_buffer.tails[aged_tail_idx].offset; 999 + tail = stream->oa_buffer.tail; 963 1000 964 1001 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 965 1002 - 966 - /* An invalid tail pointer here means we're still waiting for the poll 967 - * hrtimer callback to give us a pointer 968 - */ 969 - if (tail == INVALID_TAIL_PTR) 970 - return -EAGAIN; 971 1002 972 1003 /* NB: oa_buffer.head/tail include the gtt_offset which we don't want 973 1004 * while indexing relative to oa_buf_base. ··· 1019 1064 if (ret) 1020 1065 break; 1021 1066 1022 - /* The above report-id field sanity check is based on 1023 - * the assumption that the OA buffer is initially 1024 - * zeroed and we reset the field after copying so the 1025 - * check is still meaningful once old reports start 1026 - * being overwritten. 1067 + /* Clear out the first 2 dwords as a means to detect unlanded 1068 + * reports. 1027 1069 */ 1028 1070 report32[0] = 0; 1071 + report32[1] = 0; 1029 1072 } 1030 1073 1031 1074 if (start_offset != *offset) { ··· 1402 1449 gtt_offset | OABUFFER_SIZE_16M); 1403 1450 1404 1451 /* Mark that we need updated tail pointers to read from... */ 1405 - stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; 1406 - stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; 1452 + stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1453 + stream->oa_buffer.tail = gtt_offset; 1407 1454 1408 1455 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1409 1456 ··· 1425 1472 * memory... 
1426 1473 */ 1427 1474 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1428 - 1429 - stream->pollin = false; 1430 1475 } 1431 1476 1432 1477 static void gen8_init_oa_buffer(struct i915_perf_stream *stream) ··· 1454 1503 intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK); 1455 1504 1456 1505 /* Mark that we need updated tail pointers to read from... */ 1457 - stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; 1458 - stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; 1506 + stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1507 + stream->oa_buffer.tail = gtt_offset; 1459 1508 1460 1509 /* 1461 1510 * Reset state used to recognise context switches, affecting which ··· 1479 1528 * memory... 1480 1529 */ 1481 1530 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1482 - 1483 - stream->pollin = false; 1484 1531 } 1485 1532 1486 1533 static void gen12_init_oa_buffer(struct i915_perf_stream *stream) ··· 1508 1559 gtt_offset & GEN12_OAG_OATAILPTR_MASK); 1509 1560 1510 1561 /* Mark that we need updated tail pointers to read from... 
*/ 1511 - stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; 1512 - stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; 1562 + stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1563 + stream->oa_buffer.tail = gtt_offset; 1513 1564 1514 1565 /* 1515 1566 * Reset state used to recognise context switches, affecting which ··· 1534 1585 */ 1535 1586 memset(stream->oa_buffer.vaddr, 0, 1536 1587 stream->oa_buffer.vma->size); 1537 - 1538 - stream->pollin = false; 1539 1588 } 1540 1589 1541 1590 static int alloc_oa_buffer(struct i915_perf_stream *stream) ··· 1919 1972 return i915_vma_get(oa_bo->vma); 1920 1973 } 1921 1974 1922 - static struct i915_request * 1975 + static int 1923 1976 emit_oa_config(struct i915_perf_stream *stream, 1924 1977 struct i915_oa_config *oa_config, 1925 - struct intel_context *ce) 1978 + struct intel_context *ce, 1979 + struct i915_active *active) 1926 1980 { 1927 1981 struct i915_request *rq; 1928 1982 struct i915_vma *vma; ··· 1931 1983 1932 1984 vma = get_oa_vma(stream, oa_config); 1933 1985 if (IS_ERR(vma)) 1934 - return ERR_CAST(vma); 1986 + return PTR_ERR(vma); 1935 1987 1936 1988 err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); 1937 1989 if (err) ··· 1943 1995 if (IS_ERR(rq)) { 1944 1996 err = PTR_ERR(rq); 1945 1997 goto err_vma_unpin; 1998 + } 1999 + 2000 + if (!IS_ERR_OR_NULL(active)) { 2001 + /* After all individual context modifications */ 2002 + err = i915_request_await_active(rq, active, 2003 + I915_ACTIVE_AWAIT_ACTIVE); 2004 + if (err) 2005 + goto err_add_request; 2006 + 2007 + err = i915_active_add_request(active, rq); 2008 + if (err) 2009 + goto err_add_request; 1946 2010 } 1947 2011 1948 2012 i915_vma_lock(vma); ··· 1971 2011 if (err) 1972 2012 goto err_add_request; 1973 2013 1974 - i915_request_get(rq); 1975 2014 err_add_request: 1976 2015 i915_request_add(rq); 1977 2016 err_vma_unpin: 1978 2017 i915_vma_unpin(vma); 1979 2018 err_vma_put: 1980 2019 i915_vma_put(vma); 1981 - return err ? 
ERR_PTR(err) : rq; 2020 + return err; 1982 2021 } 1983 2022 1984 2023 static struct intel_context *oa_context(struct i915_perf_stream *stream) ··· 1985 2026 return stream->pinned_ctx ?: stream->engine->kernel_context; 1986 2027 } 1987 2028 1988 - static struct i915_request * 1989 - hsw_enable_metric_set(struct i915_perf_stream *stream) 2029 + static int 2030 + hsw_enable_metric_set(struct i915_perf_stream *stream, 2031 + struct i915_active *active) 1990 2032 { 1991 2033 struct intel_uncore *uncore = stream->uncore; 1992 2034 ··· 2006 2046 intel_uncore_rmw(uncore, GEN6_UCGCTL1, 2007 2047 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); 2008 2048 2009 - return emit_oa_config(stream, stream->oa_config, oa_context(stream)); 2049 + return emit_oa_config(stream, 2050 + stream->oa_config, oa_context(stream), 2051 + active); 2010 2052 } 2011 2053 2012 2054 static void hsw_disable_metric_set(struct i915_perf_stream *stream) ··· 2078 2116 for (i = 0; i < ARRAY_SIZE(flex_regs); i++) 2079 2117 reg_state[ctx_flexeu0 + i * 2 + 1] = 2080 2118 oa_config_flex_reg(stream->oa_config, flex_regs[i]); 2081 - 2082 - reg_state[CTX_R_PWR_CLK_STATE] = 2083 - intel_sseu_make_rpcs(ce->engine->i915, &ce->sseu); 2084 2119 } 2085 2120 2086 2121 struct flex { ··· 2155 2196 return err; 2156 2197 } 2157 2198 2158 - static int gen8_modify_self(struct intel_context *ce, 2159 - const struct flex *flex, unsigned int count) 2199 + static int 2200 + gen8_modify_self(struct intel_context *ce, 2201 + const struct flex *flex, unsigned int count, 2202 + struct i915_active *active) 2160 2203 { 2161 2204 struct i915_request *rq; 2162 2205 int err; ··· 2169 2208 if (IS_ERR(rq)) 2170 2209 return PTR_ERR(rq); 2171 2210 2172 - err = gen8_load_flex(rq, ce, flex, count); 2211 + if (!IS_ERR_OR_NULL(active)) { 2212 + err = i915_active_add_request(active, rq); 2213 + if (err) 2214 + goto err_add_request; 2215 + } 2173 2216 2217 + err = gen8_load_flex(rq, ce, flex, count); 2218 + if (err) 2219 + goto err_add_request; 2220 + 2221 + 
err_add_request: 2174 2222 i915_request_add(rq); 2175 2223 return err; 2176 2224 } ··· 2213 2243 return err; 2214 2244 } 2215 2245 2216 - static int gen12_configure_oar_context(struct i915_perf_stream *stream, bool enable) 2246 + static int gen12_configure_oar_context(struct i915_perf_stream *stream, 2247 + struct i915_active *active) 2217 2248 { 2218 2249 int err; 2219 2250 struct intel_context *ce = stream->pinned_ctx; ··· 2223 2252 { 2224 2253 GEN8_OACTXCONTROL, 2225 2254 stream->perf->ctx_oactxctrl_offset + 1, 2226 - enable ? GEN8_OA_COUNTER_RESUME : 0, 2255 + active ? GEN8_OA_COUNTER_RESUME : 0, 2227 2256 }, 2228 2257 }; 2229 2258 /* Offsets in regs_lri are not used since this configuration is only ··· 2235 2264 GEN12_OAR_OACONTROL, 2236 2265 GEN12_OAR_OACONTROL_OFFSET + 1, 2237 2266 (format << GEN12_OAR_OACONTROL_COUNTER_FORMAT_SHIFT) | 2238 - (enable ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) 2267 + (active ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) 2239 2268 }, 2240 2269 { 2241 2270 RING_CONTEXT_CONTROL(ce->engine->mmio_base), 2242 2271 CTX_CONTEXT_CONTROL, 2243 2272 _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE, 2244 - enable ? 2273 + active ? 2245 2274 GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE : 2246 2275 0) 2247 2276 }, ··· 2258 2287 return err; 2259 2288 2260 2289 /* Apply regs_lri using LRI with pinned context */ 2261 - return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri)); 2290 + return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), active); 2262 2291 } 2263 2292 2264 2293 /* ··· 2286 2315 * Note: it's only the RCS/Render context that has any OA state. 
2287 2316 * Note: the first flex register passed must always be R_PWR_CLK_STATE 2288 2317 */ 2289 - static int oa_configure_all_contexts(struct i915_perf_stream *stream, 2290 - struct flex *regs, 2291 - size_t num_regs) 2318 + static int 2319 + oa_configure_all_contexts(struct i915_perf_stream *stream, 2320 + struct flex *regs, 2321 + size_t num_regs, 2322 + struct i915_active *active) 2292 2323 { 2293 2324 struct drm_i915_private *i915 = stream->perf->i915; 2294 2325 struct intel_engine_cs *engine; ··· 2347 2374 2348 2375 regs[0].value = intel_sseu_make_rpcs(i915, &ce->sseu); 2349 2376 2350 - err = gen8_modify_self(ce, regs, num_regs); 2377 + err = gen8_modify_self(ce, regs, num_regs, active); 2351 2378 if (err) 2352 2379 return err; 2353 2380 } ··· 2355 2382 return 0; 2356 2383 } 2357 2384 2358 - static int gen12_configure_all_contexts(struct i915_perf_stream *stream, 2359 - const struct i915_oa_config *oa_config) 2385 + static int 2386 + gen12_configure_all_contexts(struct i915_perf_stream *stream, 2387 + const struct i915_oa_config *oa_config, 2388 + struct i915_active *active) 2360 2389 { 2361 2390 struct flex regs[] = { 2362 2391 { ··· 2367 2392 }, 2368 2393 }; 2369 2394 2370 - return oa_configure_all_contexts(stream, regs, ARRAY_SIZE(regs)); 2395 + return oa_configure_all_contexts(stream, 2396 + regs, ARRAY_SIZE(regs), 2397 + active); 2371 2398 } 2372 2399 2373 - static int lrc_configure_all_contexts(struct i915_perf_stream *stream, 2374 - const struct i915_oa_config *oa_config) 2400 + static int 2401 + lrc_configure_all_contexts(struct i915_perf_stream *stream, 2402 + const struct i915_oa_config *oa_config, 2403 + struct i915_active *active) 2375 2404 { 2376 2405 /* The MMIO offsets for Flex EU registers aren't contiguous */ 2377 2406 const u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; ··· 2408 2429 for (i = 2; i < ARRAY_SIZE(regs); i++) 2409 2430 regs[i].value = oa_config_flex_reg(oa_config, regs[i].reg); 2410 2431 2411 - return 
oa_configure_all_contexts(stream, regs, ARRAY_SIZE(regs)); 2432 + return oa_configure_all_contexts(stream, 2433 + regs, ARRAY_SIZE(regs), 2434 + active); 2412 2435 } 2413 2436 2414 - static struct i915_request * 2415 - gen8_enable_metric_set(struct i915_perf_stream *stream) 2437 + static int 2438 + gen8_enable_metric_set(struct i915_perf_stream *stream, 2439 + struct i915_active *active) 2416 2440 { 2417 2441 struct intel_uncore *uncore = stream->uncore; 2418 2442 struct i915_oa_config *oa_config = stream->oa_config; ··· 2455 2473 * to make sure all slices/subslices are ON before writing to NOA 2456 2474 * registers. 2457 2475 */ 2458 - ret = lrc_configure_all_contexts(stream, oa_config); 2476 + ret = lrc_configure_all_contexts(stream, oa_config, active); 2459 2477 if (ret) 2460 - return ERR_PTR(ret); 2478 + return ret; 2461 2479 2462 - return emit_oa_config(stream, oa_config, oa_context(stream)); 2480 + return emit_oa_config(stream, 2481 + stream->oa_config, oa_context(stream), 2482 + active); 2463 2483 } 2464 2484 2465 2485 static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream) ··· 2471 2487 0 : GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS); 2472 2488 } 2473 2489 2474 - static struct i915_request * 2475 - gen12_enable_metric_set(struct i915_perf_stream *stream) 2490 + static int 2491 + gen12_enable_metric_set(struct i915_perf_stream *stream, 2492 + struct i915_active *active) 2476 2493 { 2477 2494 struct intel_uncore *uncore = stream->uncore; 2478 2495 struct i915_oa_config *oa_config = stream->oa_config; ··· 2502 2517 * to make sure all slices/subslices are ON before writing to NOA 2503 2518 * registers. 2504 2519 */ 2505 - ret = gen12_configure_all_contexts(stream, oa_config); 2520 + ret = gen12_configure_all_contexts(stream, oa_config, active); 2506 2521 if (ret) 2507 - return ERR_PTR(ret); 2522 + return ret; 2508 2523 2509 2524 /* 2510 2525 * For Gen12, performance counters are context ··· 2512 2527 * requested this. 
2513 2528 */ 2514 2529 if (stream->ctx) { 2515 - ret = gen12_configure_oar_context(stream, true); 2530 + ret = gen12_configure_oar_context(stream, active); 2516 2531 if (ret) 2517 - return ERR_PTR(ret); 2532 + return ret; 2518 2533 } 2519 2534 2520 - return emit_oa_config(stream, oa_config, oa_context(stream)); 2535 + return emit_oa_config(stream, 2536 + stream->oa_config, oa_context(stream), 2537 + active); 2521 2538 } 2522 2539 2523 2540 static void gen8_disable_metric_set(struct i915_perf_stream *stream) ··· 2527 2540 struct intel_uncore *uncore = stream->uncore; 2528 2541 2529 2542 /* Reset all contexts' slices/subslices configurations. */ 2530 - lrc_configure_all_contexts(stream, NULL); 2543 + lrc_configure_all_contexts(stream, NULL, NULL); 2531 2544 2532 2545 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); 2533 2546 } ··· 2537 2550 struct intel_uncore *uncore = stream->uncore; 2538 2551 2539 2552 /* Reset all contexts' slices/subslices configurations. */ 2540 - lrc_configure_all_contexts(stream, NULL); 2553 + lrc_configure_all_contexts(stream, NULL, NULL); 2541 2554 2542 2555 /* Make sure we disable noa to save power. */ 2543 2556 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); ··· 2548 2561 struct intel_uncore *uncore = stream->uncore; 2549 2562 2550 2563 /* Reset all contexts' slices/subslices configurations. */ 2551 - gen12_configure_all_contexts(stream, NULL); 2564 + gen12_configure_all_contexts(stream, NULL, NULL); 2552 2565 2553 2566 /* disable the context save/restore or OAR counters */ 2554 2567 if (stream->ctx) 2555 - gen12_configure_oar_context(stream, false); 2568 + gen12_configure_oar_context(stream, NULL); 2556 2569 2557 2570 /* Make sure we disable noa to save power. 
*/ 2558 2571 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); ··· 2644 2657 */ 2645 2658 static void i915_oa_stream_enable(struct i915_perf_stream *stream) 2646 2659 { 2660 + stream->pollin = false; 2661 + 2647 2662 stream->perf->ops.oa_enable(stream); 2648 2663 2649 2664 if (stream->periodic) 2650 2665 hrtimer_start(&stream->poll_check_timer, 2651 - ns_to_ktime(POLL_PERIOD), 2666 + ns_to_ktime(stream->poll_oa_period), 2652 2667 HRTIMER_MODE_REL_PINNED); 2653 2668 } 2654 2669 ··· 2726 2737 2727 2738 static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream) 2728 2739 { 2729 - struct i915_request *rq; 2740 + struct i915_active *active; 2741 + int err; 2730 2742 2731 - rq = stream->perf->ops.enable_metric_set(stream); 2732 - if (IS_ERR(rq)) 2733 - return PTR_ERR(rq); 2743 + active = i915_active_create(); 2744 + if (!active) 2745 + return -ENOMEM; 2734 2746 2735 - i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT); 2736 - i915_request_put(rq); 2747 + err = stream->perf->ops.enable_metric_set(stream, active); 2748 + if (err == 0) 2749 + __i915_active_wait(active, TASK_UNINTERRUPTIBLE); 2737 2750 2738 - return 0; 2751 + i915_active_put(active); 2752 + return err; 2753 + } 2754 + 2755 + static void 2756 + get_default_sseu_config(struct intel_sseu *out_sseu, 2757 + struct intel_engine_cs *engine) 2758 + { 2759 + const struct sseu_dev_info *devinfo_sseu = 2760 + &RUNTIME_INFO(engine->i915)->sseu; 2761 + 2762 + *out_sseu = intel_sseu_from_device_info(devinfo_sseu); 2763 + 2764 + if (IS_GEN(engine->i915, 11)) { 2765 + /* 2766 + * We only need subslice count so it doesn't matter which ones 2767 + * we select - just keep the low bits amounting to half of 2768 + * all available subslices per slice. 
2769 + */ 2770 + out_sseu->subslice_mask = 2771 + ~(~0 << (hweight8(out_sseu->subslice_mask) / 2)); 2772 + out_sseu->slice_mask = 0x1; 2773 + } 2774 + } 2775 + 2776 + static int 2777 + get_sseu_config(struct intel_sseu *out_sseu, 2778 + struct intel_engine_cs *engine, 2779 + const struct drm_i915_gem_context_param_sseu *drm_sseu) 2780 + { 2781 + if (drm_sseu->engine.engine_class != engine->uabi_class || 2782 + drm_sseu->engine.engine_instance != engine->uabi_instance) 2783 + return -EINVAL; 2784 + 2785 + return i915_gem_user_to_context_sseu(engine->i915, drm_sseu, out_sseu); 2739 2786 } 2740 2787 2741 2788 /** ··· 2906 2881 goto err_oa_buf_alloc; 2907 2882 2908 2883 stream->ops = &i915_oa_stream_ops; 2884 + 2885 + perf->sseu = props->sseu; 2909 2886 WRITE_ONCE(perf->exclusive_stream, stream); 2910 2887 2911 2888 ret = i915_perf_stream_enable_sync(stream); ··· 2959 2932 2960 2933 /* perf.exclusive_stream serialised by lrc_configure_all_contexts() */ 2961 2934 stream = READ_ONCE(engine->i915->perf.exclusive_stream); 2962 - /* 2963 - * For gen12, only CTX_R_PWR_CLK_STATE needs update, but the caller 2964 - * is already doing that, so nothing to be done for gen12 here. 2965 - */ 2966 2935 if (stream && INTEL_GEN(stream->perf->i915) < 12) 2967 2936 gen8_update_reg_state_unlocked(ce, stream); 2968 2937 } ··· 3049 3026 wake_up(&stream->poll_wq); 3050 3027 } 3051 3028 3052 - hrtimer_forward_now(hrtimer, ns_to_ktime(POLL_PERIOD)); 3029 + hrtimer_forward_now(hrtimer, 3030 + ns_to_ktime(stream->poll_oa_period)); 3053 3031 3054 3032 return HRTIMER_RESTART; 3055 3033 } ··· 3181 3157 return -EINVAL; 3182 3158 3183 3159 if (config != stream->oa_config) { 3184 - struct i915_request *rq; 3160 + int err; 3185 3161 3186 3162 /* 3187 3163 * If OA is bound to a specific context, emit the ··· 3192 3168 * When set globally, we use a low priority kernel context, 3193 3169 * so it will effectively take effect when idle. 
3194 3170 */ 3195 - rq = emit_oa_config(stream, config, oa_context(stream)); 3196 - if (!IS_ERR(rq)) { 3171 + err = emit_oa_config(stream, config, oa_context(stream), NULL); 3172 + if (!err) 3197 3173 config = xchg(&stream->oa_config, config); 3198 - i915_request_put(rq); 3199 - } else { 3200 - ret = PTR_ERR(rq); 3201 - } 3174 + else 3175 + ret = err; 3202 3176 } 3203 3177 3204 3178 i915_oa_config_put(config); ··· 3409 3387 privileged_op = true; 3410 3388 } 3411 3389 3390 + /* 3391 + * Asking for SSEU configuration is a privileged operation. 3392 + */ 3393 + if (props->has_sseu) 3394 + privileged_op = true; 3395 + else 3396 + get_default_sseu_config(&props->sseu, props->engine); 3397 + 3412 3398 /* Similar to perf's kernel.perf_paranoid_cpu sysctl option 3413 3399 * we check a dev.i915.perf_stream_paranoid sysctl option 3414 3400 * to determine if it's ok to access system wide OA counters ··· 3437 3407 3438 3408 stream->perf = perf; 3439 3409 stream->ctx = specific_ctx; 3410 + stream->poll_oa_period = props->poll_oa_period; 3440 3411 3441 3412 ret = i915_oa_stream_init(stream, param, props); 3442 3413 if (ret) ··· 3513 3482 { 3514 3483 u64 __user *uprop = uprops; 3515 3484 u32 i; 3485 + int ret; 3516 3486 3517 3487 memset(props, 0, sizeof(struct perf_open_properties)); 3488 + props->poll_oa_period = DEFAULT_POLL_PERIOD_NS; 3518 3489 3519 3490 if (!n_props) { 3520 3491 DRM_DEBUG("No i915 perf properties given\n"); ··· 3546 3513 for (i = 0; i < n_props; i++) { 3547 3514 u64 oa_period, oa_freq_hz; 3548 3515 u64 id, value; 3549 - int ret; 3550 3516 3551 3517 ret = get_user(id, uprop); 3552 3518 if (ret) ··· 3631 3599 case DRM_I915_PERF_PROP_HOLD_PREEMPTION: 3632 3600 props->hold_preemption = !!value; 3633 3601 break; 3602 + case DRM_I915_PERF_PROP_GLOBAL_SSEU: { 3603 + struct drm_i915_gem_context_param_sseu user_sseu; 3604 + 3605 + if (copy_from_user(&user_sseu, 3606 + u64_to_user_ptr(value), 3607 + sizeof(user_sseu))) { 3608 + DRM_DEBUG("Unable to copy global sseu 
parameter\n"); 3609 + return -EFAULT; 3610 + } 3611 + 3612 + ret = get_sseu_config(&props->sseu, props->engine, &user_sseu); 3613 + if (ret) { 3614 + DRM_DEBUG("Invalid SSEU configuration\n"); 3615 + return ret; 3616 + } 3617 + props->has_sseu = true; 3618 + break; 3619 + } 3620 + case DRM_I915_PERF_PROP_POLL_OA_PERIOD: 3621 + if (value < 100000 /* 100us */) { 3622 + DRM_DEBUG("OA availability timer too small (%lluns < 100us)\n", 3623 + value); 3624 + return -EINVAL; 3625 + } 3626 + props->poll_oa_period = value; 3627 + break; 3634 3628 case DRM_I915_PERF_PROP_MAX: 3635 3629 MISSING_CASE(id); 3636 3630 return -EINVAL; ··· 3739 3681 void i915_perf_register(struct drm_i915_private *i915) 3740 3682 { 3741 3683 struct i915_perf *perf = &i915->perf; 3742 - int ret; 3743 3684 3744 3685 if (!perf->i915) 3745 3686 return; ··· 3752 3695 perf->metrics_kobj = 3753 3696 kobject_create_and_add("metrics", 3754 3697 &i915->drm.primary->kdev->kobj); 3755 - if (!perf->metrics_kobj) 3756 - goto exit; 3757 3698 3758 - sysfs_attr_init(&perf->test_config.sysfs_metric_id.attr); 3759 - 3760 - if (IS_TIGERLAKE(i915)) { 3761 - i915_perf_load_test_config_tgl(i915); 3762 - } else if (INTEL_GEN(i915) >= 11) { 3763 - i915_perf_load_test_config_icl(i915); 3764 - } else if (IS_CANNONLAKE(i915)) { 3765 - i915_perf_load_test_config_cnl(i915); 3766 - } else if (IS_COFFEELAKE(i915)) { 3767 - if (IS_CFL_GT2(i915)) 3768 - i915_perf_load_test_config_cflgt2(i915); 3769 - if (IS_CFL_GT3(i915)) 3770 - i915_perf_load_test_config_cflgt3(i915); 3771 - } else if (IS_GEMINILAKE(i915)) { 3772 - i915_perf_load_test_config_glk(i915); 3773 - } else if (IS_KABYLAKE(i915)) { 3774 - if (IS_KBL_GT2(i915)) 3775 - i915_perf_load_test_config_kblgt2(i915); 3776 - else if (IS_KBL_GT3(i915)) 3777 - i915_perf_load_test_config_kblgt3(i915); 3778 - } else if (IS_BROXTON(i915)) { 3779 - i915_perf_load_test_config_bxt(i915); 3780 - } else if (IS_SKYLAKE(i915)) { 3781 - if (IS_SKL_GT2(i915)) 3782 - 
i915_perf_load_test_config_sklgt2(i915); 3783 - else if (IS_SKL_GT3(i915)) 3784 - i915_perf_load_test_config_sklgt3(i915); 3785 - else if (IS_SKL_GT4(i915)) 3786 - i915_perf_load_test_config_sklgt4(i915); 3787 - } else if (IS_CHERRYVIEW(i915)) { 3788 - i915_perf_load_test_config_chv(i915); 3789 - } else if (IS_BROADWELL(i915)) { 3790 - i915_perf_load_test_config_bdw(i915); 3791 - } else if (IS_HASWELL(i915)) { 3792 - i915_perf_load_test_config_hsw(i915); 3793 - } 3794 - 3795 - if (perf->test_config.id == 0) 3796 - goto sysfs_error; 3797 - 3798 - ret = sysfs_create_group(perf->metrics_kobj, 3799 - &perf->test_config.sysfs_metric); 3800 - if (ret) 3801 - goto sysfs_error; 3802 - 3803 - perf->test_config.perf = perf; 3804 - kref_init(&perf->test_config.ref); 3805 - 3806 - goto exit; 3807 - 3808 - sysfs_error: 3809 - kobject_put(perf->metrics_kobj); 3810 - perf->metrics_kobj = NULL; 3811 - 3812 - exit: 3813 3699 mutex_unlock(&perf->lock); 3814 3700 } 3815 3701 ··· 3771 3771 3772 3772 if (!perf->metrics_kobj) 3773 3773 return; 3774 - 3775 - sysfs_remove_group(perf->metrics_kobj, 3776 - &perf->test_config.sysfs_metric); 3777 3774 3778 3775 kobject_put(perf->metrics_kobj); 3779 3776 perf->metrics_kobj = NULL; ··· 4370 4373 ratelimit_set_flags(&perf->spurious_report_rs, 4371 4374 RATELIMIT_MSG_ON_RELEASE); 4372 4375 4376 + ratelimit_state_init(&perf->tail_pointer_race, 4377 + 5 * HZ, 10); 4378 + ratelimit_set_flags(&perf->tail_pointer_race, 4379 + RATELIMIT_MSG_ON_RELEASE); 4380 + 4373 4381 atomic64_set(&perf->noa_programming_delay, 4374 4382 500 * 1000 /* 500us */); 4375 4383 ··· 4435 4433 * preemption on a particular context so that performance data is 4436 4434 * accessible from a delta of MI_RPC reports without looking at the 4437 4435 * OA buffer. 4436 + * 4437 + * 4: Add DRM_I915_PERF_PROP_GLOBAL_SSEU to limit what contexts can 4438 + * be run for the duration of the performance recording based on 4439 + * their SSEU configuration. 
4440 + * 4441 + * 5: Add DRM_I915_PERF_PROP_POLL_OA_PERIOD parameter that controls the 4442 + * interval for the hrtimer used to check for OA data. 4438 4443 */ 4439 - return 3; 4444 + return 5; 4440 4445 } 4441 4446 4442 4447 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+29 -17
drivers/gpu/drm/i915/i915_perf_types.h
··· 16 16 #include <linux/uuid.h> 17 17 #include <linux/wait.h> 18 18 19 + #include "gt/intel_sseu.h" 19 20 #include "i915_reg.h" 20 21 #include "intel_wakeref.h" 21 22 22 23 struct drm_i915_private; 23 24 struct file; 25 + struct i915_active; 24 26 struct i915_gem_context; 25 27 struct i915_perf; 26 28 struct i915_vma; ··· 274 272 spinlock_t ptr_lock; 275 273 276 274 /** 277 - * @tails: One 'aging' tail pointer and one 'aged' tail pointer ready to 278 - * used for reading. 279 - * 280 - * Initial values of 0xffffffff are invalid and imply that an 281 - * update is required (and should be ignored by an attempted 282 - * read) 275 + * @aging_tail: The last HW tail reported by HW. The data 276 + * might not have made it to memory yet though. 283 277 */ 284 - struct { 285 - u32 offset; 286 - } tails[2]; 287 - 288 - /** 289 - * @aged_tail_idx: Index for the aged tail ready to read() data up to. 290 - */ 291 - unsigned int aged_tail_idx; 278 + u32 aging_tail; 292 279 293 280 /** 294 281 * @aging_timestamp: A monotonic timestamp for when the current aging tail pointer ··· 293 302 * OA buffer data to userspace. 294 303 */ 295 304 u32 head; 305 + 306 + /** 307 + * @tail: The last verified tail that can be read by userspace. 308 + */ 309 + u32 tail; 296 310 } oa_buffer; 297 311 298 312 /** ··· 305 309 * reprogrammed. 306 310 */ 307 311 struct i915_vma *noa_wait; 312 + 313 + /** 314 + * @poll_oa_period: The period in nanoseconds at which the OA 315 + * buffer should be checked for available data. 316 + */ 317 + u64 poll_oa_period; 308 318 }; 309 319 310 320 /** ··· 341 339 * counter reports being sampled. May apply system constraints such as 342 340 * disabling EU clock gating as required. 
343 341 */ 344 - struct i915_request * 345 - (*enable_metric_set)(struct i915_perf_stream *stream); 342 + int (*enable_metric_set)(struct i915_perf_stream *stream, 343 + struct i915_active *active); 346 344 347 345 /** 348 346 * @disable_metric_set: Remove system constraints associated with using ··· 410 408 struct i915_perf_stream *exclusive_stream; 411 409 412 410 /** 411 + * @sseu: sseu configuration selected to run while perf is active, 412 + * applies to all contexts. 413 + */ 414 + struct intel_sseu sseu; 415 + 416 + /** 413 417 * For rate limiting any notifications of spurious 414 418 * invalid OA reports 415 419 */ 416 420 struct ratelimit_state spurious_report_rs; 417 421 418 - struct i915_oa_config test_config; 422 + /** 423 + * For rate limiting any notifications of tail pointer 424 + * race. 425 + */ 426 + struct ratelimit_state tail_pointer_race; 419 427 420 428 u32 gen7_latched_oastatus1; 421 429 u32 ctx_oactxctrl_offset;
+2 -2
drivers/gpu/drm/i915/i915_pmu.c
··· 1115 1115 int ret = -ENOMEM; 1116 1116 1117 1117 if (INTEL_GEN(i915) <= 2) { 1118 - dev_info(i915->drm.dev, "PMU not supported for this GPU."); 1118 + drm_info(&i915->drm, "PMU not supported for this GPU."); 1119 1119 return; 1120 1120 } 1121 1121 ··· 1178 1178 if (!is_igp(i915)) 1179 1179 kfree(pmu->name); 1180 1180 err: 1181 - dev_notice(i915->drm.dev, "Failed to register PMU!\n"); 1181 + drm_notice(&i915->drm, "Failed to register PMU!\n"); 1182 1182 } 1183 1183 1184 1184 void i915_pmu_unregister(struct drm_i915_private *i915)
+123 -7
drivers/gpu/drm/i915/i915_reg.h
··· 3094 3094 #define GT_BSD_CS_ERROR_INTERRUPT (1 << 15) 3095 3095 #define GT_BSD_USER_INTERRUPT (1 << 12) 3096 3096 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1 (1 << 11) /* hsw+; rsvd on snb, ivb, vlv */ 3097 + #define GT_WAIT_SEMAPHORE_INTERRUPT REG_BIT(11) /* bdw+ */ 3097 3098 #define GT_CONTEXT_SWITCH_INTERRUPT (1 << 8) 3098 3099 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT (1 << 5) /* !snb */ 3099 3100 #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT (1 << 4) ··· 4324 4323 #define EXITLINE_ENABLE REG_BIT(31) 4325 4324 #define EXITLINE_MASK REG_GENMASK(12, 0) 4326 4325 #define EXITLINE_SHIFT 0 4326 + 4327 + /* VRR registers */ 4328 + #define _TRANS_VRR_CTL_A 0x60420 4329 + #define _TRANS_VRR_CTL_B 0x61420 4330 + #define _TRANS_VRR_CTL_C 0x62420 4331 + #define _TRANS_VRR_CTL_D 0x63420 4332 + #define TRANS_VRR_CTL(trans) _MMIO_TRANS2(trans, _TRANS_VRR_CTL_A) 4333 + #define VRR_CTL_VRR_ENABLE REG_BIT(31) 4334 + #define VRR_CTL_IGN_MAX_SHIFT REG_BIT(30) 4335 + #define VRR_CTL_FLIP_LINE_EN REG_BIT(29) 4336 + #define VRR_CTL_LINE_COUNT_MASK REG_GENMASK(10, 3) 4337 + #define VRR_CTL_SW_FULLLINE_COUNT REG_BIT(0) 4338 + 4339 + #define _TRANS_VRR_VMAX_A 0x60424 4340 + #define _TRANS_VRR_VMAX_B 0x61424 4341 + #define _TRANS_VRR_VMAX_C 0x62424 4342 + #define _TRANS_VRR_VMAX_D 0x63424 4343 + #define TRANS_VRR_VMAX(trans) _MMIO_TRANS2(trans, _TRANS_VRR_VMAX_A) 4344 + #define VRR_VMAX_MASK REG_GENMASK(19, 0) 4345 + 4346 + #define _TRANS_VRR_VMIN_A 0x60434 4347 + #define _TRANS_VRR_VMIN_B 0x61434 4348 + #define _TRANS_VRR_VMIN_C 0x62434 4349 + #define _TRANS_VRR_VMIN_D 0x63434 4350 + #define TRANS_VRR_VMIN(trans) _MMIO_TRANS2(trans, _TRANS_VRR_VMIN_A) 4351 + #define VRR_VMIN_MASK REG_GENMASK(15, 0) 4352 + 4353 + #define _TRANS_VRR_VMAXSHIFT_A 0x60428 4354 + #define _TRANS_VRR_VMAXSHIFT_B 0x61428 4355 + #define _TRANS_VRR_VMAXSHIFT_C 0x62428 4356 + #define _TRANS_VRR_VMAXSHIFT_D 0x63428 4357 + #define TRANS_VRR_VMAXSHIFT(trans) _MMIO_TRANS2(trans, \ 4358 + _TRANS_VRR_VMAXSHIFT_A) 
4359 + #define VRR_VMAXSHIFT_DEC_MASK REG_GENMASK(29, 16) 4360 + #define VRR_VMAXSHIFT_DEC REG_BIT(16) 4361 + #define VRR_VMAXSHIFT_INC_MASK REG_GENMASK(12, 0) 4362 + 4363 + #define _TRANS_VRR_STATUS_A 0x6042C 4364 + #define _TRANS_VRR_STATUS_B 0x6142C 4365 + #define _TRANS_VRR_STATUS_C 0x6242C 4366 + #define _TRANS_VRR_STATUS_D 0x6342C 4367 + #define TRANS_VRR_STATUS(trans) _MMIO_TRANS2(trans, _TRANS_VRR_STATUS_A) 4368 + #define VRR_STATUS_VMAX_REACHED REG_BIT(31) 4369 + #define VRR_STATUS_NOFLIP_TILL_BNDR REG_BIT(30) 4370 + #define VRR_STATUS_FLIP_BEF_BNDR REG_BIT(29) 4371 + #define VRR_STATUS_NO_FLIP_FRAME REG_BIT(28) 4372 + #define VRR_STATUS_VRR_EN_LIVE REG_BIT(27) 4373 + #define VRR_STATUS_FLIPS_SERVICED REG_BIT(26) 4374 + #define VRR_STATUS_VBLANK_MASK REG_GENMASK(22, 20) 4375 + #define STATUS_FSM_IDLE REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 0) 4376 + #define STATUS_FSM_WAIT_TILL_FDB REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 1) 4377 + #define STATUS_FSM_WAIT_TILL_FS REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 2) 4378 + #define STATUS_FSM_WAIT_TILL_FLIP REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 3) 4379 + #define STATUS_FSM_PIPELINE_FILL REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 4) 4380 + #define STATUS_FSM_ACTIVE REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 5) 4381 + #define STATUS_FSM_LEGACY_VBLANK REG_FIELD_PREP(VRR_STATUS_VBLANK_MASK, 6) 4382 + 4383 + #define _TRANS_VRR_VTOTAL_PREV_A 0x60480 4384 + #define _TRANS_VRR_VTOTAL_PREV_B 0x61480 4385 + #define _TRANS_VRR_VTOTAL_PREV_C 0x62480 4386 + #define _TRANS_VRR_VTOTAL_PREV_D 0x63480 4387 + #define TRANS_VRR_VTOTAL_PREV(trans) _MMIO_TRANS2(trans, \ 4388 + _TRANS_VRR_VTOTAL_PREV_A) 4389 + #define VRR_VTOTAL_FLIP_BEFR_BNDR REG_BIT(31) 4390 + #define VRR_VTOTAL_FLIP_AFTER_BNDR REG_BIT(30) 4391 + #define VRR_VTOTAL_FLIP_AFTER_DBLBUF REG_BIT(29) 4392 + #define VRR_VTOTAL_PREV_FRAME_MASK REG_GENMASK(19, 0) 4393 + 4394 + #define _TRANS_VRR_FLIPLINE_A 0x60438 4395 + #define _TRANS_VRR_FLIPLINE_B 0x61438 4396 + #define _TRANS_VRR_FLIPLINE_C 0x62438 4397 + #define _TRANS_VRR_FLIPLINE_D 0x63438 4398 + #define TRANS_VRR_FLIPLINE(trans) _MMIO_TRANS2(trans, \ 4399 + _TRANS_VRR_FLIPLINE_A) 4400 + #define VRR_FLIPLINE_MASK REG_GENMASK(19, 0) 4401 + 4402 + #define _TRANS_VRR_STATUS2_A 0x6043C 4403 + #define _TRANS_VRR_STATUS2_B 0x6143C 4404 + #define _TRANS_VRR_STATUS2_C 0x6243C 4405 + #define _TRANS_VRR_STATUS2_D 0x6343C 4406 + #define TRANS_VRR_STATUS2(trans) _MMIO_TRANS2(trans, _TRANS_VRR_STATUS2_A) 4407 + #define VRR_STATUS2_VERT_LN_CNT_MASK REG_GENMASK(19, 0) 4408 + 4409 + #define _TRANS_PUSH_A 0x60A70 4410 + #define _TRANS_PUSH_B 0x61A70 4411 + #define _TRANS_PUSH_C 0x62A70 4412 + #define _TRANS_PUSH_D 0x63A70 4413 + #define TRANS_PUSH(trans) _MMIO_TRANS2(trans, _TRANS_PUSH_A) 4414 + #define TRANS_PUSH_EN REG_BIT(31) 4415 + #define TRANS_PUSH_SEND REG_BIT(30) 4327 4416 4328 4417 /* 4329 4418 * HSW+ eDP PSR registers
··· 6855 6764 #define PLANE_CTL_FORMAT_P012 (5 << 24) 6856 6765 #define PLANE_CTL_FORMAT_XRGB_16161616F (6 << 24) 6857 6766 #define PLANE_CTL_FORMAT_P016 (7 << 24) 6858 - #define PLANE_CTL_FORMAT_AYUV (8 << 24) 6767 + #define PLANE_CTL_FORMAT_XYUV (8 << 24) 6859 6768 #define PLANE_CTL_FORMAT_INDEXED (12 << 24) 6860 6769 #define PLANE_CTL_FORMAT_RGB_565 (14 << 24) 6861 6770 #define ICL_PLANE_CTL_FORMAT_MASK (0x1f << 23) ··· 9791 9700 #define TRANS_DDI_BPC_10 (1 << 20) 9792 9701 #define TRANS_DDI_BPC_6 (2 << 20) 9793 9702 #define TRANS_DDI_BPC_12 (3 << 20) 9703 + #define TRANS_DDI_PORT_SYNC_MASTER_SELECT_MASK REG_GENMASK(19, 18) /* bdw-cnl */ 9704 + #define TRANS_DDI_PORT_SYNC_MASTER_SELECT(x) REG_FIELD_PREP(TRANS_DDI_PORT_SYNC_MASTER_SELECT_MASK, (x)) 9794 9705 #define TRANS_DDI_PVSYNC (1 << 17) 9795 9706 #define TRANS_DDI_PHSYNC (1 << 16) 9707 + #define TRANS_DDI_PORT_SYNC_ENABLE REG_BIT(15) /* bdw-cnl */ 9796 9708 #define TRANS_DDI_EDP_INPUT_MASK (7 << 12) 9797 9709 #define TRANS_DDI_EDP_INPUT_A_ON (0 << 12) 9798 9710 #define TRANS_DDI_EDP_INPUT_A_ONOFF (4 << 12) ···
9822 9728 #define _TRANS_DDI_FUNC_CTL2_EDP 0x6f404 9823 9729 #define _TRANS_DDI_FUNC_CTL2_DSI0 0x6b404 9824 9730 #define _TRANS_DDI_FUNC_CTL2_DSI1 0x6bc04 9825 - #define TRANS_DDI_FUNC_CTL2(tran) _MMIO_TRANS2(tran, \ 9826 - _TRANS_DDI_FUNC_CTL2_A) 9827 - #define PORT_SYNC_MODE_ENABLE (1 << 4) 9828 - #define PORT_SYNC_MODE_MASTER_SELECT(x) ((x) << 0) 9829 - #define PORT_SYNC_MODE_MASTER_SELECT_MASK (0x7 << 0) 9830 - #define PORT_SYNC_MODE_MASTER_SELECT_SHIFT 0 9731 + #define TRANS_DDI_FUNC_CTL2(tran) _MMIO_TRANS2(tran, _TRANS_DDI_FUNC_CTL2_A) 9732 + #define PORT_SYNC_MODE_ENABLE REG_BIT(4) 9733 + #define PORT_SYNC_MODE_MASTER_SELECT_MASK REG_GENMASK(2, 0) 9734 + #define PORT_SYNC_MODE_MASTER_SELECT(x) REG_FIELD_PREP(PORT_SYNC_MODE_MASTER_SELECT_MASK, (x)) 9831 9735 9832 9736 /* DisplayPort Transport Control */ 9833 9737 #define _DP_TP_CTL_A 0x64040 ··· 9885 9793 #define DDI_BUF_TRANS_LO(port, i) _MMIO(_PORT(port, _DDI_BUF_TRANS_A, _DDI_BUF_TRANS_B) + (i) * 8) 9886 9794 #define DDI_BUF_BALANCE_LEG_ENABLE (1 << 31) 9887 9795 #define DDI_BUF_TRANS_HI(port, i) _MMIO(_PORT(port, _DDI_BUF_TRANS_A, _DDI_BUF_TRANS_B) + (i) * 8 + 4) 9796 + 9797 + /* DDI DP Compliance Control */ 9798 + #define _DDI_DP_COMP_CTL_A 0x605F0 9799 + #define _DDI_DP_COMP_CTL_B 0x615F0 9800 + #define DDI_DP_COMP_CTL(pipe) _MMIO_PIPE(pipe, _DDI_DP_COMP_CTL_A, _DDI_DP_COMP_CTL_B) 9801 + #define DDI_DP_COMP_CTL_ENABLE (1 << 31) 9802 + #define DDI_DP_COMP_CTL_D10_2 (0 << 28) 9803 + #define DDI_DP_COMP_CTL_SCRAMBLED_0 (1 << 28) 9804 + #define DDI_DP_COMP_CTL_PRBS7 (2 << 28) 9805 + #define DDI_DP_COMP_CTL_CUSTOM80 (3 << 28) 9806 + #define DDI_DP_COMP_CTL_HBR2 (4 << 28) 9807 + #define DDI_DP_COMP_CTL_SCRAMBLED_1 (5 << 28) 9808 + #define DDI_DP_COMP_CTL_HBR2_RESET (0xFC << 0) 9809 + 9810 + /* DDI DP Compliance Pattern */ 9811 + #define _DDI_DP_COMP_PAT_A 0x605F4 9812 + #define _DDI_DP_COMP_PAT_B 0x615F4 9813 + #define DDI_DP_COMP_PAT(pipe, i) _MMIO(_PIPE(pipe, _DDI_DP_COMP_PAT_A, _DDI_DP_COMP_PAT_B) + (i) * 4) 9888 9814 9889 9815 /* Sideband Interface (SBI) is programmed indirectly, via 9890 9816 * SBI_ADDR, which contains the register offset; and SBI_DATA,
··· 10851 10741 10852 10742 #define _PAL_PREC_MULTI_SEG_DATA_A 0x4A40C 10853 10743 #define _PAL_PREC_MULTI_SEG_DATA_B 0x4AC0C 10744 + #define PAL_PREC_MULTI_SEG_RED_LDW_MASK REG_GENMASK(29, 24) 10745 + #define PAL_PREC_MULTI_SEG_RED_UDW_MASK REG_GENMASK(29, 20) 10746 + #define PAL_PREC_MULTI_SEG_GREEN_LDW_MASK REG_GENMASK(19, 14) 10747 + #define PAL_PREC_MULTI_SEG_GREEN_UDW_MASK REG_GENMASK(19, 10) 10748 + #define PAL_PREC_MULTI_SEG_BLUE_LDW_MASK REG_GENMASK(9, 4) 10749 + #define PAL_PREC_MULTI_SEG_BLUE_UDW_MASK REG_GENMASK(9, 0) 10854 10750 10855 10751 #define PREC_PAL_MULTI_SEG_INDEX(pipe) _MMIO_PIPE(pipe, \ 10856 10752 _PAL_PREC_MULTI_SEG_INDEX_A, \
+22 -7
drivers/gpu/drm/i915/i915_request.c
··· 101 101 timeout); 102 102 } 103 103 104 + struct kmem_cache *i915_request_slab_cache(void) 105 + { 106 + return global.slab_requests; 107 + } 108 + 104 109 static void i915_fence_release(struct dma_fence *fence) 105 110 { 106 111 struct i915_request *rq = to_request(fence); ··· 119 114 */ 120 115 i915_sw_fence_fini(&rq->submit); 121 116 i915_sw_fence_fini(&rq->semaphore); 117 + 118 + /* Keep one request on each engine for reserved use under mempressure */ 119 + if (!cmpxchg(&rq->engine->request_pool, NULL, rq)) 120 + return; 122 121 123 122 kmem_cache_free(global.slab_requests, rq); 124 123 } ··· 638 629 } 639 630 640 631 static noinline struct i915_request * 641 - request_alloc_slow(struct intel_timeline *tl, gfp_t gfp) 632 + request_alloc_slow(struct intel_timeline *tl, 633 + struct i915_request **rsvd, 634 + gfp_t gfp) 642 635 { 643 636 struct i915_request *rq; 644 637 645 - if (list_empty(&tl->requests)) 646 - goto out; 638 + /* If we cannot wait, dip into our reserves */ 639 + if (!gfpflags_allow_blocking(gfp)) { 640 + rq = xchg(rsvd, NULL); 641 + if (!rq) /* Use the normal failure path for one final WARN */ 642 + goto out; 647 643 648 - if (!gfpflags_allow_blocking(gfp)) 644 + return rq; 645 + } 646 + 647 + if (list_empty(&tl->requests)) 649 648 goto out; 650 649 651 650 /* Move our oldest request to the slab-cache (if not in use!) */ ··· 738 721 rq = kmem_cache_alloc(global.slab_requests, 739 722 gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN); 740 723 if (unlikely(!rq)) { 741 - rq = request_alloc_slow(tl, gfp); 724 + rq = request_alloc_slow(tl, &ce->engine->request_pool, gfp); 742 725 if (!rq) { 743 726 ret = -ENOMEM; 744 727 goto err_unreserve; ··· 1461 1444 if (list_empty(&rq->sched.signalers_list)) 1462 1445 attr.priority |= I915_PRIORITY_WAIT; 1463 1446 1464 - local_bh_disable(); 1465 1447 __i915_request_queue(rq, &attr); 1466 - local_bh_enable(); /* Kick the execlists tasklet if just scheduled */ 1467 1448 1468 1449 mutex_unlock(&tl->mutex); 1469 1450 }
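The i915_request.c hunk above parks one freed request per engine (`engine->request_pool`, filled with `cmpxchg()` in `i915_fence_release()`, drained with `xchg()` in `request_alloc_slow()`) so that an allocation in atomic context has a last-resort object. A minimal sketch of the same one-slot reserve using C11 atomics; `struct request` and the function names here are illustrative stand-ins, not the i915 API:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for struct i915_request. */
struct request { int id; };

/* One-slot reserve, analogous to engine->request_pool. */
static _Atomic(struct request *) reserve = NULL;

/* On free: park the object in the empty reserve slot instead of
 * returning it to the allocator (mirrors the cmpxchg() in
 * i915_fence_release()). Returns 1 if parked, 0 if the slot was
 * already occupied and the caller should free the object normally. */
static int park_in_reserve(struct request *rq)
{
	struct request *expected = NULL;

	return atomic_compare_exchange_strong(&reserve, &expected, rq);
}

/* On allocation failure when blocking is not allowed: take whatever
 * is parked (mirrors the xchg() in request_alloc_slow()). May return
 * NULL if the reserve was already consumed. */
static struct request *take_reserve(void)
{
	return atomic_exchange(&reserve, NULL);
}
```

Both operations are single atomic instructions, so the reserve can be filled from the free path and drained from the allocation path without any lock.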
+2
drivers/gpu/drm/i915/i915_request.h
··· 300 300 return fence->ops == &i915_fence_ops; 301 301 } 302 302 303 + struct kmem_cache *i915_request_slab_cache(void); 304 + 303 305 struct i915_request * __must_check 304 306 __i915_request_create(struct intel_context *ce, gfp_t gfp); 305 307 struct i915_request * __must_check
+10
drivers/gpu/drm/i915/i915_scheduler.c
··· 209 209 if (!inflight) 210 210 goto unlock; 211 211 212 + ENGINE_TRACE(engine, 213 + "bumping queue-priority-hint:%d for rq:%llx:%lld, inflight:%llx:%lld prio %d\n", 214 + prio, 215 + rq->fence.context, rq->fence.seqno, 216 + inflight->fence.context, inflight->fence.seqno, 217 + inflight->sched.attr.priority); 212 218 engine->execlists.queue_priority_hint = prio; 213 219 214 220 /* ··· 470 464 if (!dep) 471 465 return -ENOMEM; 472 466 467 + local_bh_disable(); 468 + 473 469 if (!__i915_sched_node_add_dependency(node, signal, dep, 474 470 I915_DEPENDENCY_EXTERNAL | 475 471 I915_DEPENDENCY_ALLOC)) 476 472 i915_dependency_free(dep); 473 + 474 + local_bh_enable(); /* kick submission tasklet */ 477 475 478 476 return 0; 479 477 }
+1 -1
drivers/gpu/drm/i915/i915_sw_fence.c
··· 421 421 if (!fence) 422 422 return; 423 423 424 - pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%pS)\n", 424 + pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%ps)\n", 425 425 cb->dma->ops->get_driver_name(cb->dma), 426 426 cb->dma->ops->get_timeline_name(cb->dma), 427 427 cb->dma->seqno,
+4 -1
drivers/gpu/drm/i915/i915_sw_fence_work.c
··· 38 38 39 39 if (!f->dma.error) { 40 40 dma_fence_get(&f->dma); 41 - queue_work(system_unbound_wq, &f->work); 41 + if (test_bit(DMA_FENCE_WORK_IMM, &f->dma.flags)) 42 + fence_work(&f->work); 43 + else 44 + queue_work(system_unbound_wq, &f->work); 42 45 } else { 43 46 fence_complete(f); 44 47 }
+23
drivers/gpu/drm/i915/i915_sw_fence_work.h
··· 32 32 const struct dma_fence_work_ops *ops; 33 33 }; 34 34 35 + enum { 36 + DMA_FENCE_WORK_IMM = DMA_FENCE_FLAG_USER_BITS, 37 + }; 38 + 35 39 void dma_fence_work_init(struct dma_fence_work *f, 36 40 const struct dma_fence_work_ops *ops); 37 41 int dma_fence_work_chain(struct dma_fence_work *f, struct dma_fence *signal); ··· 43 39 static inline void dma_fence_work_commit(struct dma_fence_work *f) 44 40 { 45 41 i915_sw_fence_commit(&f->chain); 42 + } 43 + 44 + /** 45 + * dma_fence_work_commit_imm: Commit the fence, and if possible execute locally. 46 + * @f: the fenced worker 47 + * 48 + * Instead of always scheduling a worker to execute the callback (see 49 + * dma_fence_work_commit()), we try to execute the callback immediately in 50 + * the local context. It is required that the fence be committed before it 51 + * is published, and that no other threads try to tamper with the number 52 + * of asynchronous waits on the fence (or else the callback will be 53 + * executed in the wrong context, i.e. not the callers). 54 + */ 55 + static inline void dma_fence_work_commit_imm(struct dma_fence_work *f) 56 + { 57 + if (atomic_read(&f->chain.pending) <= 1) 58 + __set_bit(DMA_FENCE_WORK_IMM, &f->dma.flags); 59 + 60 + dma_fence_work_commit(f); 46 61 } 47 62 48 63 #endif /* I915_SW_FENCE_WORK_H */
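The `dma_fence_work_commit_imm()` kerneldoc above encodes a simple rule: if the committing thread holds the only pending count on the chain, the callback may run in the caller's context; otherwise it must be deferred to a worker. A rough model of that decision; all names are invented for illustration and none of this is the i915 API:

```c
#include <stdatomic.h>

/* Illustrative model of the commit-immediate decision. "pending"
 * counts outstanding waits on the fence, the committer included. */
struct fence_work {
	atomic_int pending;
	void (*fn)(struct fence_work *);
	int ran_inline;	/* callback executed in the caller's context */
	int deferred;	/* callback handed off to a worker instead */
};

static void commit_imm(struct fence_work *w)
{
	if (atomic_load(&w->pending) <= 1) {
		/* No other waiter can fire the callback: run it inline
		 * and save the context switch to a workqueue. */
		w->ran_inline = 1;
		w->fn(w);
	} else {
		/* Other waits outstanding: defer, as a plain commit
		 * would, so the callback runs in worker context. */
		w->deferred = 1;
	}
}

/* Trivial callback for demonstration. */
static void noop_cb(struct fence_work *w) { (void)w; }
```

As the kerneldoc warns, the optimization is only sound when the fence is committed before it is published, so no third party can race the pending count.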
+2 -2
drivers/gpu/drm/i915/i915_switcheroo.c
··· 20 20 } 21 21 22 22 if (state == VGA_SWITCHEROO_ON) { 23 - pr_info("switched on\n"); 23 + drm_info(&i915->drm, "switched on\n"); 24 24 i915->drm.switch_power_state = DRM_SWITCH_POWER_CHANGING; 25 25 /* i915 resume handler doesn't set to D0 */ 26 26 pci_set_power_state(pdev, PCI_D0); 27 27 i915_resume_switcheroo(i915); 28 28 i915->drm.switch_power_state = DRM_SWITCH_POWER_ON; 29 29 } else { 30 - pr_info("switched off\n"); 30 + drm_info(&i915->drm, "switched off\n"); 31 31 i915->drm.switch_power_state = DRM_SWITCH_POWER_CHANGING; 32 32 i915_suspend_switcheroo(i915, pmm); 33 33 i915->drm.switch_power_state = DRM_SWITCH_POWER_OFF;
+2 -1
drivers/gpu/drm/i915/i915_utils.c
··· 101 101 */ 102 102 barrier(); 103 103 104 - mod_timer(t, jiffies + timeout); 104 + /* Keep t->expires = 0 reserved to indicate a canceled timer. */ 105 + mod_timer(t, jiffies + timeout ?: 1); 105 106 }
+42 -51
drivers/gpu/drm/i915/i915_vma.c
··· 608 608 return true; 609 609 } 610 610 611 - static void assert_bind_count(const struct drm_i915_gem_object *obj) 612 - { 613 - /* 614 - * Combine the assertion that the object is bound and that we have 615 - * pinned its pages. But we should never have bound the object 616 - * more than we have pinned its pages. (For complete accuracy, we 617 - * assume that no else is pinning the pages, but as a rough assertion 618 - * that we will not run into problems later, this will do!) 619 - */ 620 - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)); 621 - } 622 - 623 611 /** 624 612 * i915_vma_insert - finds a slot for the vma in its address space 625 613 * @vma: the vma ··· 726 738 GEM_BUG_ON(!drm_mm_node_allocated(&vma->node)); 727 739 GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, color)); 728 740 729 - if (vma->obj) { 730 - struct drm_i915_gem_object *obj = vma->obj; 731 - 732 - atomic_inc(&obj->bind_count); 733 - assert_bind_count(obj); 734 - } 735 741 list_add_tail(&vma->vm_link, &vma->vm->bound_list); 736 742 737 743 return 0; ··· 743 761 * it to be reaped by the shrinker. 744 762 */ 745 763 list_del(&vma->vm_link); 746 - if (vma->obj) { 747 - struct drm_i915_gem_object *obj = vma->obj; 748 - 749 - assert_bind_count(obj); 750 - atomic_dec(&obj->bind_count); 751 - } 752 764 } 753 765 754 766 static bool try_qad_pin(struct i915_vma *vma, unsigned int flags) ··· 889 913 if (flags & PIN_GLOBAL) 890 914 wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm); 891 915 892 - /* No more allocations allowed once we hold vm->mutex */ 893 - err = mutex_lock_interruptible(&vma->vm->mutex); 916 + /* 917 + * Differentiate between user/kernel vma inside the aliasing-ppgtt. 918 + * 919 + * We conflate the Global GTT with the user's vma when using the 920 + * aliasing-ppgtt, but it is still vitally important to try and 921 + * keep the use cases distinct. For example, userptr objects are 922 + * not allowed inside the Global GTT as that will cause lock 923 + * inversions when we have to evict them the mmu_notifier callbacks - 924 + * but they are allowed to be part of the user ppGTT which can never 925 + * be mapped. As such we try to give the distinct users of the same 926 + * mutex, distinct lockclasses [equivalent to how we keep i915_ggtt 927 + * and i915_ppgtt separate]. 928 + * 929 + * NB this may cause us to mask real lock inversions -- while the 930 + * code is safe today, lockdep may not be able to spot future 931 + * transgressions. 932 + */ 933 + err = mutex_lock_interruptible_nested(&vma->vm->mutex, 934 + !(flags & PIN_GLOBAL)); 894 935 if (err) 895 936 goto err_fence; 937 + 938 + /* No more allocations allowed now we hold vm->mutex */ 896 939 897 940 if (unlikely(i915_vma_is_closed(vma))) { 898 941 err = -ENOENT; ··· 975 980 mutex_unlock(&vma->vm->mutex); 976 981 err_fence: 977 982 if (work) 978 - dma_fence_work_commit(&work->base); 983 + dma_fence_work_commit_imm(&work->base); 979 984 if (wakeref) 980 985 intel_runtime_pm_put(&vma->vm->i915->runtime_pm, wakeref); 981 986 err_pages:
··· 1167 1172 GEM_BUG_ON(!i915_vma_is_pinned(vma)); 1168 1173 1169 1174 /* Wait for the vma to be bound before we start! */ 1170 - err = i915_request_await_active(rq, &vma->active, 0); 1175 + err = i915_request_await_active(rq, &vma->active, 1176 + I915_ACTIVE_AWAIT_EXCL); 1171 1177 if (err) 1172 1178 return err; 1173 1179 ··· 1209 1213 dma_resv_add_shared_fence(vma->resv, &rq->fence); 1210 1214 obj->write_domain = 0; 1211 1215 } 1216 + 1217 + if (flags & EXEC_OBJECT_NEEDS_FENCE && vma->fence) 1218 + i915_active_add_request(&vma->fence->active, rq); 1219 + 1212 1220 obj->read_domains |= I915_GEM_GPU_DOMAINS; 1213 1221 obj->mm.dirty = true; 1214 1222 ··· 1225 1225 int ret; 1226 1226 1227 1227 lockdep_assert_held(&vma->vm->mutex); 1228 - 1229 - /* 1230 - * First wait upon any activity as retiring the request may 1231 - * have side-effects such as unpinning or even unbinding this vma. 1232 - * 1233 - * XXX Actually waiting under the vm->mutex is a hinderance and 1234 - * should be pipelined wherever possible. In cases where that is 1235 - * unavoidable, we should lift the wait to before the mutex. 1236 - */ 1237 - ret = i915_vma_sync(vma); 1238 - if (ret) 1239 - return ret; 1240 1228 1241 1229 if (i915_vma_is_pinned(vma)) { 1242 1230 vma_print_allocator(vma, "is pinned"); ··· 1247 1259 GEM_BUG_ON(i915_vma_is_active(vma)); 1248 1260 1249 1261 if (i915_vma_is_map_and_fenceable(vma)) { 1262 + /* Force a pagefault for domain tracking on next user access */ 1263 + i915_vma_revoke_mmap(vma); 1264 + 1250 1265 /* 1251 1266 * Check that we have flushed all writes through the GGTT 1252 1267 * before the unbind, other due to non-strict nature of those ··· 1266 1275 i915_vma_flush_writes(vma); 1267 1276 1268 1277 /* release the fence reg _after_ flushing */ 1269 - ret = i915_vma_revoke_fence(vma); 1270 - if (ret) 1271 - return ret; 1272 - 1273 - /* Force a pagefault for domain tracking on next user access */ 1274 - i915_vma_revoke_mmap(vma); 1278 + i915_vma_revoke_fence(vma); 1275 1279 1276 1280 __i915_vma_iounmap(vma); 1277 1281 clear_bit(I915_VMA_CAN_FENCE_BIT, __i915_vma_flags(vma));
··· 1297 1311 if (!drm_mm_node_allocated(&vma->node)) 1298 1312 return 0; 1299 1313 1300 - if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND)) 1301 - /* XXX not always required: nop_clear_range */ 1302 - wakeref = intel_runtime_pm_get(&vm->i915->runtime_pm); 1303 - 1304 1314 /* Optimistic wait before taking the mutex */ 1305 1315 err = i915_vma_sync(vma); 1306 1316 if (err) 1307 1317 goto out_rpm; 1308 1318 1309 - err = mutex_lock_interruptible(&vm->mutex); 1319 + if (i915_vma_is_pinned(vma)) { 1320 + vma_print_allocator(vma, "is pinned"); 1321 + return -EAGAIN; 1322 + } 1323 + 1324 + if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND)) 1325 + /* XXX not always required: nop_clear_range */ 1326 + wakeref = intel_runtime_pm_get(&vm->i915->runtime_pm); 1327 + 1328 + err = mutex_lock_interruptible_nested(&vma->vm->mutex, !wakeref); 1310 1329 if (err) 1311 1330 goto out_rpm: 1312 1331
+2 -2
drivers/gpu/drm/i915/i915_vma.h
··· 30 30 31 31 #include <drm/drm_mm.h> 32 32 33 + #include "gt/intel_ggtt_fencing.h" 33 34 #include "gem/i915_gem_object.h" 34 35 35 36 #include "i915_gem_gtt.h" 36 - #include "i915_gem_fence_reg.h" 37 37 38 38 #include "i915_active.h" 39 39 #include "i915_request.h" ··· 326 326 * True if the vma has a fence, false otherwise. 327 327 */ 328 328 int __must_check i915_vma_pin_fence(struct i915_vma *vma); 329 - int __must_check i915_vma_revoke_fence(struct i915_vma *vma); 329 + void i915_vma_revoke_fence(struct i915_vma *vma); 330 330 331 331 int __i915_vma_pin_fence(struct i915_vma *vma); 332 332
+18 -21
drivers/gpu/drm/i915/intel_device_info.c
··· 980 980 drm_info(&dev_priv->drm, 981 981 "Display fused off, disabling\n"); 982 982 info->pipe_mask = 0; 983 + info->cpu_transcoder_mask = 0; 983 984 } else if (fuse_strap & IVB_PIPE_C_DISABLE) { 984 985 drm_info(&dev_priv->drm, "PipeC fused off\n"); 985 986 info->pipe_mask &= ~BIT(PIPE_C); 987 + info->cpu_transcoder_mask &= ~BIT(TRANSCODER_C); 986 988 } 987 989 } else if (HAS_DISPLAY(dev_priv) && INTEL_GEN(dev_priv) >= 9) { 988 990 u32 dfsm = I915_READ(SKL_DFSM); 989 - u8 enabled_mask = info->pipe_mask; 990 991 991 - if (dfsm & SKL_DFSM_PIPE_A_DISABLE) 992 - enabled_mask &= ~BIT(PIPE_A); 993 - if (dfsm & SKL_DFSM_PIPE_B_DISABLE) 994 - enabled_mask &= ~BIT(PIPE_B); 995 - if (dfsm & SKL_DFSM_PIPE_C_DISABLE) 996 - enabled_mask &= ~BIT(PIPE_C); 992 + if (dfsm & SKL_DFSM_PIPE_A_DISABLE) { 993 + info->pipe_mask &= ~BIT(PIPE_A); 994 + info->cpu_transcoder_mask &= ~BIT(TRANSCODER_A); 995 + } 996 + if (dfsm & SKL_DFSM_PIPE_B_DISABLE) { 997 + info->pipe_mask &= ~BIT(PIPE_B); 998 + info->cpu_transcoder_mask &= ~BIT(TRANSCODER_B); 999 + } 1000 + if (dfsm & SKL_DFSM_PIPE_C_DISABLE) { 1001 + info->pipe_mask &= ~BIT(PIPE_C); 1002 + info->cpu_transcoder_mask &= ~BIT(TRANSCODER_C); 1003 + } 997 1004 if (INTEL_GEN(dev_priv) >= 12 && 998 - (dfsm & TGL_DFSM_PIPE_D_DISABLE)) 999 - enabled_mask &= ~BIT(PIPE_D); 1000 - 1001 - /* 1002 - * At least one pipe should be enabled and if there are 1003 - * disabled pipes, they should be the last ones, with no holes 1004 - * in the mask. 1005 - */ 1006 - if (enabled_mask == 0 || !is_power_of_2(enabled_mask + 1)) 1007 - drm_err(&dev_priv->drm, 1008 - "invalid pipe fuse configuration: enabled_mask=0x%x\n", 1009 - enabled_mask); 1010 - else 1011 - info->pipe_mask = enabled_mask; 1005 + (dfsm & TGL_DFSM_PIPE_D_DISABLE)) { 1006 + info->pipe_mask &= ~BIT(PIPE_D); 1007 + info->cpu_transcoder_mask &= ~BIT(TRANSCODER_D); 1008 + } 1012 1009 1013 1010 if (dfsm & SKL_DFSM_DISPLAY_HDCP_DISABLE) 1014 1011 info->display.has_hdcp = 0;
+1
drivers/gpu/drm/i915/intel_device_info.h
··· 168 168 u32 display_mmio_offset; 169 169 170 170 u8 pipe_mask; 171 + u8 cpu_transcoder_mask; 171 172 172 173 #define DEFINE_FLAG(name) u8 name:1 173 174 DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG);
+1 -2
drivers/gpu/drm/i915/intel_dram.c
··· 495 495 else 496 496 i915->edram_size_mb = gen9_edram_size_mb(i915, edram_cap); 497 497 498 - dev_info(i915->drm.dev, 499 - "Found %uMB of eDRAM\n", i915->edram_size_mb); 498 + drm_info(&i915->drm, "Found %uMB of eDRAM\n", i915->edram_size_mb); 500 499 }
+8 -4
drivers/gpu/drm/i915/intel_pm.c
··· 4016 4016 int color_plane); 4017 4017 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state, 4018 4018 int level, 4019 + unsigned int latency, 4019 4020 const struct skl_wm_params *wp, 4020 4021 const struct skl_wm_level *result_prev, 4021 4022 struct skl_wm_level *result /* out */); ··· 4039 4038 drm_WARN_ON(&dev_priv->drm, ret); 4040 4039 4041 4040 for (level = 0; level <= max_level; level++) { 4042 - skl_compute_plane_wm(crtc_state, level, &wp, &wm, &wm); 4041 + unsigned int latency = dev_priv->wm.skl_latency[level]; 4042 + 4043 + skl_compute_plane_wm(crtc_state, level, latency, &wp, &wm, &wm); 4043 4044 if (wm.min_ddb_alloc == U16_MAX) 4044 4045 break; 4045 4046 ··· 4975 4972 4976 4973 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state, 4977 4974 int level, 4975 + unsigned int latency, 4978 4976 const struct skl_wm_params *wp, 4979 4977 const struct skl_wm_level *result_prev, 4980 4978 struct skl_wm_level *result /* out */) 4981 4979 { 4982 4980 struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); 4983 - u32 latency = dev_priv->wm.skl_latency[level]; 4984 4981 uint_fixed_16_16_t method1, method2; 4985 4982 uint_fixed_16_16_t selected_result; 4986 4983 u32 res_blocks, res_lines, min_ddb_alloc = 0; ··· 5109 5106 5110 5107 for (level = 0; level <= max_level; level++) { 5111 5108 struct skl_wm_level *result = &levels[level]; 5109 + unsigned int latency = dev_priv->wm.skl_latency[level]; 5112 5110 5113 - skl_compute_plane_wm(crtc_state, level, wm_params, 5114 - result_prev, result); 5111 + skl_compute_plane_wm(crtc_state, level, latency, 5112 + wm_params, result_prev, result); 5115 5113 5116 5114 result_prev = result; 5117 5115 }
+23 -1
drivers/gpu/drm/i915/intel_uncore.c
··· 665 665 mmio_debug_resume(uncore->debug); 666 666 667 667 if (check_for_unclaimed_mmio(uncore)) 668 - dev_info(uncore->i915->drm.dev, 668 + drm_info(&uncore->i915->drm, 669 669 "Invalid mmio detected during user access\n"); 670 670 spin_unlock(&uncore->debug->lock); 671 671 ··· 732 732 spin_lock_irqsave(&uncore->lock, irqflags); 733 733 __intel_uncore_forcewake_put(uncore, fw_domains); 734 734 spin_unlock_irqrestore(&uncore->lock, irqflags); 735 + } 736 + 737 + /** 738 + * intel_uncore_forcewake_flush - flush the delayed release 739 + * @uncore: the intel_uncore structure 740 + * @fw_domains: forcewake domains to flush 741 + */ 742 + void intel_uncore_forcewake_flush(struct intel_uncore *uncore, 743 + enum forcewake_domains fw_domains) 744 + { 745 + struct intel_uncore_forcewake_domain *domain; 746 + unsigned int tmp; 747 + 748 + if (!uncore->funcs.force_wake_put) 749 + return; 750 + 751 + fw_domains &= uncore->fw_domains; 752 + for_each_fw_domain_masked(domain, fw_domains, uncore, tmp) { 753 + WRITE_ONCE(domain->active, false); 754 + if (hrtimer_cancel(&domain->timer)) 755 + intel_uncore_fw_release_timer(&domain->timer); 756 + } 735 757 } 736 758 737 759 /**
+5 -1
drivers/gpu/drm/i915/intel_uncore.h
··· 209 209 enum forcewake_domains domains); 210 210 void intel_uncore_forcewake_put(struct intel_uncore *uncore, 211 211 enum forcewake_domains domains); 212 - /* Like above but the caller must manage the uncore.lock itself. 212 + void intel_uncore_forcewake_flush(struct intel_uncore *uncore, 213 + enum forcewake_domains fw_domains); 214 + 215 + /* 216 + * Like above but the caller must manage the uncore.lock itself. 213 217 * Must be used with I915_READ_FW and friends. 214 218 */ 215 219 void intel_uncore_forcewake_get__locked(struct intel_uncore *uncore,
+7 -5
drivers/gpu/drm/i915/intel_wakeref.c
··· 70 70 71 71 void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags) 72 72 { 73 - INTEL_WAKEREF_BUG_ON(work_pending(&wf->work)); 73 + INTEL_WAKEREF_BUG_ON(delayed_work_pending(&wf->work)); 74 74 75 75 /* Assume we are not in process context and so cannot sleep. */ 76 76 if (flags & INTEL_WAKEREF_PUT_ASYNC || !mutex_trylock(&wf->mutex)) { 77 - schedule_work(&wf->work); 77 + mod_delayed_work(system_wq, &wf->work, 78 + FIELD_GET(INTEL_WAKEREF_PUT_DELAY, flags)); 78 79 return; 79 80 } 80 81 ··· 84 83 85 84 static void __intel_wakeref_put_work(struct work_struct *wrk) 86 85 { 87 - struct intel_wakeref *wf = container_of(wrk, typeof(*wf), work); 86 + struct intel_wakeref *wf = container_of(wrk, typeof(*wf), work.work); 88 87 89 88 if (atomic_add_unless(&wf->count, -1, 1)) 90 89 return; ··· 105 104 atomic_set(&wf->count, 0); 106 105 wf->wakeref = 0; 107 106 108 - INIT_WORK(&wf->work, __intel_wakeref_put_work); 109 - lockdep_init_map(&wf->work.lockdep_map, "wakeref.work", &key->work, 0); 107 + INIT_DELAYED_WORK(&wf->work, __intel_wakeref_put_work); 108 + lockdep_init_map(&wf->work.work.lockdep_map, 109 + "wakeref.work", &key->work, 0); 110 110 } 111 111 112 112 int intel_wakeref_wait_for_idle(struct intel_wakeref *wf)
+19 -3
drivers/gpu/drm/i915/intel_wakeref.h
··· 8 8 #define INTEL_WAKEREF_H 9 9 10 10 #include <linux/atomic.h> 11 + #include <linux/bitfield.h> 11 12 #include <linux/bits.h> 12 13 #include <linux/lockdep.h> 13 14 #include <linux/mutex.h> ··· 42 41 struct intel_runtime_pm *rpm; 43 42 const struct intel_wakeref_ops *ops; 44 43 45 - struct work_struct work; 44 + struct delayed_work work; 46 45 }; 47 46 48 47 struct intel_wakeref_lockclass { ··· 118 117 return atomic_inc_not_zero(&wf->count); 119 118 } 120 119 120 + enum { 121 + INTEL_WAKEREF_PUT_ASYNC_BIT = 0, 122 + __INTEL_WAKEREF_PUT_LAST_BIT__ 123 + }; 124 + 121 125 /** 122 126 * intel_wakeref_put_flags: Release the wakeref 123 127 * @wf: the wakeref ··· 140 134 */ 141 135 static inline void 142 136 __intel_wakeref_put(struct intel_wakeref *wf, unsigned long flags) 143 - #define INTEL_WAKEREF_PUT_ASYNC BIT(0) 137 + #define INTEL_WAKEREF_PUT_ASYNC BIT(INTEL_WAKEREF_PUT_ASYNC_BIT) 138 + #define INTEL_WAKEREF_PUT_DELAY \ 139 + GENMASK(BITS_PER_LONG - 1, __INTEL_WAKEREF_PUT_LAST_BIT__) 144 140 { 145 141 INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0); 146 142 if (unlikely(!atomic_add_unless(&wf->count, -1, 1))) ··· 160 152 intel_wakeref_put_async(struct intel_wakeref *wf) 161 153 { 162 154 __intel_wakeref_put(wf, INTEL_WAKEREF_PUT_ASYNC); 155 + } 156 + 157 + static inline void 158 + intel_wakeref_put_delay(struct intel_wakeref *wf, unsigned long delay) 159 + { 160 + __intel_wakeref_put(wf, 161 + INTEL_WAKEREF_PUT_ASYNC | 162 + FIELD_PREP(INTEL_WAKEREF_PUT_DELAY, delay)); 163 163 } 164 164 165 165 /** ··· 210 194 { 211 195 mutex_lock(&wf->mutex); 212 196 mutex_unlock(&wf->mutex); 213 - flush_work(&wf->work); 197 + flush_delayed_work(&wf->work); 214 198 } 215 199 216 200 /**
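The intel_wakeref.h hunk above threads a delay through the existing flags word: bit 0 remains `INTEL_WAKEREF_PUT_ASYNC` and all remaining bits carry the delay via `FIELD_PREP()`, recovered on the put side with `FIELD_GET()`. A hand-rolled sketch of the same packing, with the masks written out instead of using the kernel's `GENMASK()`/`FIELD_PREP()` helpers; the constant names are illustrative, not the kernel's:

```c
/* Bit 0: the async flag; the remaining upper bits: the delay value. */
#define PUT_ASYNC	(1UL << 0)
#define PUT_DELAY_SHIFT	1

/* Pack: roughly INTEL_WAKEREF_PUT_ASYNC |
 * FIELD_PREP(INTEL_WAKEREF_PUT_DELAY, delay). */
static unsigned long pack_put_flags(unsigned long delay)
{
	return PUT_ASYNC | (delay << PUT_DELAY_SHIFT);
}

/* Unpack: roughly FIELD_GET(INTEL_WAKEREF_PUT_DELAY, flags). */
static unsigned long unpack_put_delay(unsigned long flags)
{
	return flags >> PUT_DELAY_SHIFT;
}
```

Packing the delay into the flags argument lets `intel_wakeref_put_delay()` reuse `__intel_wakeref_put()` unchanged, at the cost of capping the representable delay at one bit fewer than the word size.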
+10 -12
drivers/gpu/drm/i915/intel_wopcm.c
···
 	else
 		wopcm->size = GEN9_WOPCM_SIZE;
 
-	DRM_DEV_DEBUG_DRIVER(i915->drm.dev, "WOPCM: %uK\n", wopcm->size / 1024);
+	drm_dbg(&i915->drm, "WOPCM: %uK\n", wopcm->size / 1024);
 }
 
 static inline u32 context_reserved_size(struct drm_i915_private *i915)
···
 	offset = guc_wopcm_base + GEN9_GUC_WOPCM_OFFSET;
 	if (offset > guc_wopcm_size ||
 	    (guc_wopcm_size - offset) < sizeof(u32)) {
-		dev_err(i915->drm.dev,
+		drm_err(&i915->drm,
 			"WOPCM: invalid GuC region size: %uK < %uK\n",
 			guc_wopcm_size / SZ_1K,
 			(u32)(offset + sizeof(u32)) / SZ_1K);
···
 	 * firmware uploading would fail.
 	 */
 	if (huc_fw_size > guc_wopcm_size - GUC_WOPCM_RESERVED) {
-		dev_err(i915->drm.dev, "WOPCM: no space for %s: %uK < %uK\n",
+		drm_err(&i915->drm, "WOPCM: no space for %s: %uK < %uK\n",
 			intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_HUC),
 			(guc_wopcm_size - GUC_WOPCM_RESERVED) / SZ_1K,
 			huc_fw_size / 1024);
···
 	size = wopcm_size - ctx_rsvd;
 	if (unlikely(range_overflows(guc_wopcm_base, guc_wopcm_size, size))) {
-		dev_err(i915->drm.dev,
+		drm_err(&i915->drm,
 			"WOPCM: invalid GuC region layout: %uK + %uK > %uK\n",
 			guc_wopcm_base / SZ_1K, guc_wopcm_size / SZ_1K,
 			size / SZ_1K);
···
 	size = guc_fw_size + GUC_WOPCM_RESERVED + GUC_WOPCM_STACK_RESERVED;
 	if (unlikely(guc_wopcm_size < size)) {
-		dev_err(i915->drm.dev, "WOPCM: no space for %s: %uK < %uK\n",
+		drm_err(&i915->drm, "WOPCM: no space for %s: %uK < %uK\n",
 			intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_GUC),
 			guc_wopcm_size / SZ_1K, size / SZ_1K);
 		return false;
···
 	size = huc_fw_size + WOPCM_RESERVED_SIZE;
 	if (unlikely(guc_wopcm_base < size)) {
-		dev_err(i915->drm.dev, "WOPCM: no space for %s: %uK < %uK\n",
+		drm_err(&i915->drm, "WOPCM: no space for %s: %uK < %uK\n",
 			intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_HUC),
 			guc_wopcm_base / SZ_1K, size / SZ_1K);
 		return false;
···
 		return;
 
 	if (__wopcm_regs_locked(gt->uncore, &guc_wopcm_base, &guc_wopcm_size)) {
-		DRM_DEV_DEBUG_DRIVER(i915->drm.dev,
-				     "GuC WOPCM is already locked [%uK, %uK)\n",
-				     guc_wopcm_base / SZ_1K,
-				     guc_wopcm_size / SZ_1K);
+		drm_dbg(&i915->drm, "GuC WOPCM is already locked [%uK, %uK)\n",
+			guc_wopcm_base / SZ_1K, guc_wopcm_size / SZ_1K);
 		goto check;
 	}
···
 	guc_wopcm_size = wopcm->size - ctx_rsvd - guc_wopcm_base;
 	guc_wopcm_size &= GUC_WOPCM_SIZE_MASK;
 
-	DRM_DEV_DEBUG_DRIVER(i915->drm.dev, "Calculated GuC WOPCM [%uK, %uK)\n",
-			     guc_wopcm_base / SZ_1K, guc_wopcm_size / SZ_1K);
+	drm_dbg(&i915->drm, "Calculated GuC WOPCM [%uK, %uK)\n",
+		guc_wopcm_base / SZ_1K, guc_wopcm_size / SZ_1K);
 
 check:
 	if (__check_layout(i915, wopcm->size, guc_wopcm_base, guc_wopcm_size,
-90
drivers/gpu/drm/i915/oa/i915_oa_bdw.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_bdw.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x000000a0 },
-	{ _MMIO(0x9888), 0x198b0000 },
-	{ _MMIO(0x9888), 0x078b0066 },
-	{ _MMIO(0x9888), 0x118b0000 },
-	{ _MMIO(0x9888), 0x258b0000 },
-	{ _MMIO(0x9888), 0x21850008 },
-	{ _MMIO(0x9888), 0x0d834000 },
-	{ _MMIO(0x9888), 0x07844000 },
-	{ _MMIO(0x9888), 0x17804000 },
-	{ _MMIO(0x9888), 0x21800000 },
-	{ _MMIO(0x9888), 0x4f800000 },
-	{ _MMIO(0x9888), 0x41800000 },
-	{ _MMIO(0x9888), 0x31800000 },
-	{ _MMIO(0x9840), 0x00000080 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_bdw(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"d6de6f55-e526-4f79-a6a6-d7315c09044e",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "d6de6f55-e526-4f79-a6a6-d7315c09044e";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_bdw.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_BDW_H__
-#define __I915_OA_BDW_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_bdw(struct drm_i915_private *dev_priv);
-
-#endif
-88
drivers/gpu/drm/i915/oa/i915_oa_bxt.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_bxt.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x00000080 },
-	{ _MMIO(0x9888), 0x19800000 },
-	{ _MMIO(0x9888), 0x07800063 },
-	{ _MMIO(0x9888), 0x11800000 },
-	{ _MMIO(0x9888), 0x23810008 },
-	{ _MMIO(0x9888), 0x1d950400 },
-	{ _MMIO(0x9888), 0x0f922000 },
-	{ _MMIO(0x9888), 0x1f908000 },
-	{ _MMIO(0x9888), 0x37900000 },
-	{ _MMIO(0x9888), 0x55900000 },
-	{ _MMIO(0x9888), 0x47900000 },
-	{ _MMIO(0x9888), 0x33900000 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_bxt(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"5ee72f5c-092f-421e-8b70-225f7c3e9612",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "5ee72f5c-092f-421e-8b70-225f7c3e9612";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_bxt.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_BXT_H__
-#define __I915_OA_BXT_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_bxt(struct drm_i915_private *dev_priv);
-
-#endif
-89
drivers/gpu/drm/i915/oa/i915_oa_cflgt2.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_cflgt2.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x00000080 },
-	{ _MMIO(0x9888), 0x11810000 },
-	{ _MMIO(0x9888), 0x07810013 },
-	{ _MMIO(0x9888), 0x1f810000 },
-	{ _MMIO(0x9888), 0x1d810000 },
-	{ _MMIO(0x9888), 0x1b930040 },
-	{ _MMIO(0x9888), 0x07e54000 },
-	{ _MMIO(0x9888), 0x1f908000 },
-	{ _MMIO(0x9888), 0x11900000 },
-	{ _MMIO(0x9888), 0x37900000 },
-	{ _MMIO(0x9888), 0x53900000 },
-	{ _MMIO(0x9888), 0x45900000 },
-	{ _MMIO(0x9888), 0x33900000 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_cflgt2(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"74fb4902-d3d3-4237-9e90-cbdc68d0a446",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "74fb4902-d3d3-4237-9e90-cbdc68d0a446";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_cflgt2.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_CFLGT2_H__
-#define __I915_OA_CFLGT2_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_cflgt2(struct drm_i915_private *dev_priv);
-
-#endif
-89
drivers/gpu/drm/i915/oa/i915_oa_cflgt3.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_cflgt3.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x00000080 },
-	{ _MMIO(0x9888), 0x11810000 },
-	{ _MMIO(0x9888), 0x07810013 },
-	{ _MMIO(0x9888), 0x1f810000 },
-	{ _MMIO(0x9888), 0x1d810000 },
-	{ _MMIO(0x9888), 0x1b930040 },
-	{ _MMIO(0x9888), 0x07e54000 },
-	{ _MMIO(0x9888), 0x1f908000 },
-	{ _MMIO(0x9888), 0x11900000 },
-	{ _MMIO(0x9888), 0x37900000 },
-	{ _MMIO(0x9888), 0x53900000 },
-	{ _MMIO(0x9888), 0x45900000 },
-	{ _MMIO(0x9888), 0x33900000 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_cflgt3(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"577e8e2c-3fa0-4875-8743-3538d585e3b0",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "577e8e2c-3fa0-4875-8743-3538d585e3b0";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_cflgt3.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_CFLGT3_H__
-#define __I915_OA_CFLGT3_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_cflgt3(struct drm_i915_private *dev_priv);
-
-#endif
-89
drivers/gpu/drm/i915/oa/i915_oa_chv.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_chv.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x000000a0 },
-	{ _MMIO(0x9888), 0x59800000 },
-	{ _MMIO(0x9888), 0x59800001 },
-	{ _MMIO(0x9888), 0x338b0000 },
-	{ _MMIO(0x9888), 0x258b0066 },
-	{ _MMIO(0x9888), 0x058b0000 },
-	{ _MMIO(0x9888), 0x038b0000 },
-	{ _MMIO(0x9888), 0x03844000 },
-	{ _MMIO(0x9888), 0x47800080 },
-	{ _MMIO(0x9888), 0x57800000 },
-	{ _MMIO(0x1823a4), 0x00000000 },
-	{ _MMIO(0x9888), 0x59800000 },
-	{ _MMIO(0x9840), 0x00000080 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_chv(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"4a534b07-cba3-414d-8d60-874830e883aa",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "4a534b07-cba3-414d-8d60-874830e883aa";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_chv.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_CHV_H__
-#define __I915_OA_CHV_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_chv(struct drm_i915_private *dev_priv);
-
-#endif
-101
drivers/gpu/drm/i915/oa/i915_oa_cnl.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_cnl.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x0000ffff },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x0000ffff },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x0000ffff },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0xd04), 0x00000200 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x17060000 },
-	{ _MMIO(0x9840), 0x00000000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x13034000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x07060066 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x05060000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x0f080040 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x07091000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x0f041000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x1d004000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x35000000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x49000000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x3d000000 },
-	{ _MMIO(0x9884), 0x00000007 },
-	{ _MMIO(0x9888), 0x31000000 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_cnl(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"db41edd4-d8e7-4730-ad11-b9a2d6833503",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "db41edd4-d8e7-4730-ad11-b9a2d6833503";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_cnl.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_CNL_H__
-#define __I915_OA_CNL_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_cnl(struct drm_i915_private *dev_priv);
-
-#endif
-88
drivers/gpu/drm/i915/oa/i915_oa_glk.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_glk.h"
-
-static const struct i915_oa_reg b_counter_config_test_oa[] = {
-	{ _MMIO(0x2740), 0x00000000 },
-	{ _MMIO(0x2744), 0x00800000 },
-	{ _MMIO(0x2714), 0xf0800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-	{ _MMIO(0x2724), 0xf0800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2770), 0x00000004 },
-	{ _MMIO(0x2774), 0x00000000 },
-	{ _MMIO(0x2778), 0x00000003 },
-	{ _MMIO(0x277c), 0x00000000 },
-	{ _MMIO(0x2780), 0x00000007 },
-	{ _MMIO(0x2784), 0x00000000 },
-	{ _MMIO(0x2788), 0x00100002 },
-	{ _MMIO(0x278c), 0x0000fff7 },
-	{ _MMIO(0x2790), 0x00100002 },
-	{ _MMIO(0x2794), 0x0000ffcf },
-	{ _MMIO(0x2798), 0x00100082 },
-	{ _MMIO(0x279c), 0x0000ffef },
-	{ _MMIO(0x27a0), 0x001000c2 },
-	{ _MMIO(0x27a4), 0x0000ffe7 },
-	{ _MMIO(0x27a8), 0x00100001 },
-	{ _MMIO(0x27ac), 0x0000ffe7 },
-};
-
-static const struct i915_oa_reg flex_eu_config_test_oa[] = {
-};
-
-static const struct i915_oa_reg mux_config_test_oa[] = {
-	{ _MMIO(0x9840), 0x00000080 },
-	{ _MMIO(0x9888), 0x19800000 },
-	{ _MMIO(0x9888), 0x07800063 },
-	{ _MMIO(0x9888), 0x11800000 },
-	{ _MMIO(0x9888), 0x23810008 },
-	{ _MMIO(0x9888), 0x1d950400 },
-	{ _MMIO(0x9888), 0x0f922000 },
-	{ _MMIO(0x9888), 0x1f908000 },
-	{ _MMIO(0x9888), 0x37900000 },
-	{ _MMIO(0x9888), 0x55900000 },
-	{ _MMIO(0x9888), 0x47900000 },
-	{ _MMIO(0x9888), 0x33900000 },
-};
-
-static ssize_t
-show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_glk(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"dd3fd789-e783-4204-8cd0-b671bbccb0cf",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "dd3fd789-e783-4204-8cd0-b671bbccb0cf";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_glk.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#ifndef __I915_OA_GLK_H__
-#define __I915_OA_GLK_H__
-
-struct drm_i915_private;
-
-void i915_perf_load_test_config_glk(struct drm_i915_private *dev_priv);
-
-#endif
-118
drivers/gpu/drm/i915/oa/i915_oa_hsw.c
···
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2018-2019 Intel Corporation
- *
- * Autogenerated file by GPU Top : https://github.com/rib/gputop
- * DO NOT EDIT manually!
- */
-
-#include <linux/sysfs.h>
-
-#include "i915_drv.h"
-#include "i915_oa_hsw.h"
-
-static const struct i915_oa_reg b_counter_config_render_basic[] = {
-	{ _MMIO(0x2724), 0x00800000 },
-	{ _MMIO(0x2720), 0x00000000 },
-	{ _MMIO(0x2714), 0x00800000 },
-	{ _MMIO(0x2710), 0x00000000 },
-};
-
-static const struct i915_oa_reg flex_eu_config_render_basic[] = {
-};
-
-static const struct i915_oa_reg mux_config_render_basic[] = {
-	{ _MMIO(0x9840), 0x00000080 },
-	{ _MMIO(0x253a4), 0x01600000 },
-	{ _MMIO(0x25440), 0x00100000 },
-	{ _MMIO(0x25128), 0x00000000 },
-	{ _MMIO(0x2691c), 0x00000800 },
-	{ _MMIO(0x26aa0), 0x01500000 },
-	{ _MMIO(0x26b9c), 0x00006000 },
-	{ _MMIO(0x2791c), 0x00000800 },
-	{ _MMIO(0x27aa0), 0x01500000 },
-	{ _MMIO(0x27b9c), 0x00006000 },
-	{ _MMIO(0x2641c), 0x00000400 },
-	{ _MMIO(0x25380), 0x00000010 },
-	{ _MMIO(0x2538c), 0x00000000 },
-	{ _MMIO(0x25384), 0x0800aaaa },
-	{ _MMIO(0x25400), 0x00000004 },
-	{ _MMIO(0x2540c), 0x06029000 },
-	{ _MMIO(0x25410), 0x00000002 },
-	{ _MMIO(0x25404), 0x5c30ffff },
-	{ _MMIO(0x25100), 0x00000016 },
-	{ _MMIO(0x25110), 0x00000400 },
-	{ _MMIO(0x25104), 0x00000000 },
-	{ _MMIO(0x26804), 0x00001211 },
-	{ _MMIO(0x26884), 0x00000100 },
-	{ _MMIO(0x26900), 0x00000002 },
-	{ _MMIO(0x26908), 0x00700000 },
-	{ _MMIO(0x26904), 0x00000000 },
-	{ _MMIO(0x26984), 0x00001022 },
-	{ _MMIO(0x26a04), 0x00000011 },
-	{ _MMIO(0x26a80), 0x00000006 },
-	{ _MMIO(0x26a88), 0x00000c02 },
-	{ _MMIO(0x26a84), 0x00000000 },
-	{ _MMIO(0x26b04), 0x00001000 },
-	{ _MMIO(0x26b80), 0x00000002 },
-	{ _MMIO(0x26b8c), 0x00000007 },
-	{ _MMIO(0x26b84), 0x00000000 },
-	{ _MMIO(0x27804), 0x00004844 },
-	{ _MMIO(0x27884), 0x00000400 },
-	{ _MMIO(0x27900), 0x00000002 },
-	{ _MMIO(0x27908), 0x0e000000 },
-	{ _MMIO(0x27904), 0x00000000 },
-	{ _MMIO(0x27984), 0x00004088 },
-	{ _MMIO(0x27a04), 0x00000044 },
-	{ _MMIO(0x27a80), 0x00000006 },
-	{ _MMIO(0x27a88), 0x00018040 },
-	{ _MMIO(0x27a84), 0x00000000 },
-	{ _MMIO(0x27b04), 0x00004000 },
-	{ _MMIO(0x27b80), 0x00000002 },
-	{ _MMIO(0x27b8c), 0x000000e0 },
-	{ _MMIO(0x27b84), 0x00000000 },
-	{ _MMIO(0x26104), 0x00002222 },
-	{ _MMIO(0x26184), 0x0c006666 },
-	{ _MMIO(0x26284), 0x04000000 },
-	{ _MMIO(0x26304), 0x04000000 },
-	{ _MMIO(0x26400), 0x00000002 },
-	{ _MMIO(0x26410), 0x000000a0 },
-	{ _MMIO(0x26404), 0x00000000 },
-	{ _MMIO(0x25420), 0x04108020 },
-	{ _MMIO(0x25424), 0x1284a420 },
-	{ _MMIO(0x2541c), 0x00000000 },
-	{ _MMIO(0x25428), 0x00042049 },
-};
-
-static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "1\n");
-}
-
-void
-i915_perf_load_test_config_hsw(struct drm_i915_private *dev_priv)
-{
-	strlcpy(dev_priv->perf.test_config.uuid,
-		"403d8832-1a27-4aa6-a64e-f5389ce7b212",
-		sizeof(dev_priv->perf.test_config.uuid));
-	dev_priv->perf.test_config.id = 1;
-
-	dev_priv->perf.test_config.mux_regs = mux_config_render_basic;
-	dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_render_basic);
-
-	dev_priv->perf.test_config.b_counter_regs = b_counter_config_render_basic;
-	dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_render_basic);
-
-	dev_priv->perf.test_config.flex_regs = flex_eu_config_render_basic;
-	dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_render_basic);
-
-	dev_priv->perf.test_config.sysfs_metric.name = "403d8832-1a27-4aa6-a64e-f5389ce7b212";
-	dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-	dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-	dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-	dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-	dev_priv->perf.test_config.sysfs_metric_id.show = show_render_basic_id;
-}
-16
drivers/gpu/drm/i915/oa/i915_oa_hsw.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_HSW_H__
- #define __I915_OA_HSW_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_hsw(struct drm_i915_private *dev_priv);
-
- #endif
-98
drivers/gpu/drm/i915/oa/i915_oa_icl.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_icl.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x0000ffff },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x0000ffff },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x0000ffff },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0xd04), 0x00000200 },
-         { _MMIO(0x9840), 0x00000000 },
-         { _MMIO(0x9884), 0x00000000 },
-         { _MMIO(0x9888), 0x10060000 },
-         { _MMIO(0x9888), 0x22060000 },
-         { _MMIO(0x9888), 0x16060000 },
-         { _MMIO(0x9888), 0x24060000 },
-         { _MMIO(0x9888), 0x18060000 },
-         { _MMIO(0x9888), 0x1a060000 },
-         { _MMIO(0x9888), 0x12060000 },
-         { _MMIO(0x9888), 0x14060000 },
-         { _MMIO(0x9888), 0x10060000 },
-         { _MMIO(0x9888), 0x22060000 },
-         { _MMIO(0x9884), 0x00000003 },
-         { _MMIO(0x9888), 0x16130000 },
-         { _MMIO(0x9888), 0x24000001 },
-         { _MMIO(0x9888), 0x0e130056 },
-         { _MMIO(0x9888), 0x10130000 },
-         { _MMIO(0x9888), 0x1a130000 },
-         { _MMIO(0x9888), 0x541f0001 },
-         { _MMIO(0x9888), 0x181f0000 },
-         { _MMIO(0x9888), 0x4c1f0000 },
-         { _MMIO(0x9888), 0x301f0000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_icl(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "a291665e-244b-4b76-9b9a-01de9d3c8068",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "a291665e-244b-4b76-9b9a-01de9d3c8068";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_icl.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_ICL_H__
- #define __I915_OA_ICL_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_icl(struct drm_i915_private *dev_priv);
-
- #endif
-89
drivers/gpu/drm/i915/oa/i915_oa_kblgt2.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_kblgt2.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2744), 0x00800000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x00000000 },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x00000000 },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x00000000 },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x9840), 0x00000080 },
-         { _MMIO(0x9888), 0x11810000 },
-         { _MMIO(0x9888), 0x07810013 },
-         { _MMIO(0x9888), 0x1f810000 },
-         { _MMIO(0x9888), 0x1d810000 },
-         { _MMIO(0x9888), 0x1b930040 },
-         { _MMIO(0x9888), 0x07e54000 },
-         { _MMIO(0x9888), 0x1f908000 },
-         { _MMIO(0x9888), 0x11900000 },
-         { _MMIO(0x9888), 0x37900000 },
-         { _MMIO(0x9888), 0x53900000 },
-         { _MMIO(0x9888), 0x45900000 },
-         { _MMIO(0x9888), 0x33900000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_kblgt2(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "baa3c7e4-52b6-4b85-801e-465a94b746dd",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "baa3c7e4-52b6-4b85-801e-465a94b746dd";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_kblgt2.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_KBLGT2_H__
- #define __I915_OA_KBLGT2_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_kblgt2(struct drm_i915_private *dev_priv);
-
- #endif
-89
drivers/gpu/drm/i915/oa/i915_oa_kblgt3.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_kblgt3.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2744), 0x00800000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x00000000 },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x00000000 },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x00000000 },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x9840), 0x00000080 },
-         { _MMIO(0x9888), 0x11810000 },
-         { _MMIO(0x9888), 0x07810013 },
-         { _MMIO(0x9888), 0x1f810000 },
-         { _MMIO(0x9888), 0x1d810000 },
-         { _MMIO(0x9888), 0x1b930040 },
-         { _MMIO(0x9888), 0x07e54000 },
-         { _MMIO(0x9888), 0x1f908000 },
-         { _MMIO(0x9888), 0x11900000 },
-         { _MMIO(0x9888), 0x37900000 },
-         { _MMIO(0x9888), 0x53900000 },
-         { _MMIO(0x9888), 0x45900000 },
-         { _MMIO(0x9888), 0x33900000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_kblgt3(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "f1792f32-6db2-4b50-b4b2-557128f1688d",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "f1792f32-6db2-4b50-b4b2-557128f1688d";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_kblgt3.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_KBLGT3_H__
- #define __I915_OA_KBLGT3_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_kblgt3(struct drm_i915_private *dev_priv);
-
- #endif
-88
drivers/gpu/drm/i915/oa/i915_oa_sklgt2.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_sklgt2.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x00000000 },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x00000000 },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x00000000 },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x9840), 0x00000080 },
-         { _MMIO(0x9888), 0x11810000 },
-         { _MMIO(0x9888), 0x07810016 },
-         { _MMIO(0x9888), 0x1f810000 },
-         { _MMIO(0x9888), 0x1d810000 },
-         { _MMIO(0x9888), 0x1b930040 },
-         { _MMIO(0x9888), 0x07e54000 },
-         { _MMIO(0x9888), 0x1f908000 },
-         { _MMIO(0x9888), 0x11900000 },
-         { _MMIO(0x9888), 0x37900000 },
-         { _MMIO(0x9888), 0x53900000 },
-         { _MMIO(0x9888), 0x45900000 },
-         { _MMIO(0x9888), 0x33900000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_sklgt2(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "1651949f-0ac0-4cb1-a06f-dafd74a407d1",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "1651949f-0ac0-4cb1-a06f-dafd74a407d1";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_sklgt2.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_SKLGT2_H__
- #define __I915_OA_SKLGT2_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_sklgt2(struct drm_i915_private *dev_priv);
-
- #endif
-89
drivers/gpu/drm/i915/oa/i915_oa_sklgt3.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_sklgt3.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2744), 0x00800000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x00000000 },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x00000000 },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x00000000 },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x9840), 0x00000080 },
-         { _MMIO(0x9888), 0x11810000 },
-         { _MMIO(0x9888), 0x07810013 },
-         { _MMIO(0x9888), 0x1f810000 },
-         { _MMIO(0x9888), 0x1d810000 },
-         { _MMIO(0x9888), 0x1b930040 },
-         { _MMIO(0x9888), 0x07e54000 },
-         { _MMIO(0x9888), 0x1f908000 },
-         { _MMIO(0x9888), 0x11900000 },
-         { _MMIO(0x9888), 0x37900000 },
-         { _MMIO(0x9888), 0x53900000 },
-         { _MMIO(0x9888), 0x45900000 },
-         { _MMIO(0x9888), 0x33900000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_sklgt3(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "2b985803-d3c9-4629-8a4f-634bfecba0e8",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "2b985803-d3c9-4629-8a4f-634bfecba0e8";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_sklgt3.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_SKLGT3_H__
- #define __I915_OA_SKLGT3_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_sklgt3(struct drm_i915_private *dev_priv);
-
- #endif
-89
drivers/gpu/drm/i915/oa/i915_oa_sklgt4.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_sklgt4.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0x2740), 0x00000000 },
-         { _MMIO(0x2744), 0x00800000 },
-         { _MMIO(0x2714), 0xf0800000 },
-         { _MMIO(0x2710), 0x00000000 },
-         { _MMIO(0x2724), 0xf0800000 },
-         { _MMIO(0x2720), 0x00000000 },
-         { _MMIO(0x2770), 0x00000004 },
-         { _MMIO(0x2774), 0x00000000 },
-         { _MMIO(0x2778), 0x00000003 },
-         { _MMIO(0x277c), 0x00000000 },
-         { _MMIO(0x2780), 0x00000007 },
-         { _MMIO(0x2784), 0x00000000 },
-         { _MMIO(0x2788), 0x00100002 },
-         { _MMIO(0x278c), 0x0000fff7 },
-         { _MMIO(0x2790), 0x00100002 },
-         { _MMIO(0x2794), 0x0000ffcf },
-         { _MMIO(0x2798), 0x00100082 },
-         { _MMIO(0x279c), 0x0000ffef },
-         { _MMIO(0x27a0), 0x001000c2 },
-         { _MMIO(0x27a4), 0x0000ffe7 },
-         { _MMIO(0x27a8), 0x00100001 },
-         { _MMIO(0x27ac), 0x0000ffe7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x9840), 0x00000080 },
-         { _MMIO(0x9888), 0x11810000 },
-         { _MMIO(0x9888), 0x07810013 },
-         { _MMIO(0x9888), 0x1f810000 },
-         { _MMIO(0x9888), 0x1d810000 },
-         { _MMIO(0x9888), 0x1b930040 },
-         { _MMIO(0x9888), 0x07e54000 },
-         { _MMIO(0x9888), 0x1f908000 },
-         { _MMIO(0x9888), 0x11900000 },
-         { _MMIO(0x9888), 0x37900000 },
-         { _MMIO(0x9888), 0x53900000 },
-         { _MMIO(0x9888), 0x45900000 },
-         { _MMIO(0x9888), 0x33900000 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_sklgt4(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "882fa433-1f4a-4a67-a962-c741888fe5f5",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "882fa433-1f4a-4a67-a962-c741888fe5f5";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_sklgt4.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018-2019 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_SKLGT4_H__
- #define __I915_OA_SKLGT4_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_sklgt4(struct drm_i915_private *dev_priv);
-
- #endif
-121
drivers/gpu/drm/i915/oa/i915_oa_tgl.c
- // SPDX-License-Identifier: MIT
- /*
-  * Copyright © 2018 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #include <linux/sysfs.h>
-
- #include "i915_drv.h"
- #include "i915_oa_tgl.h"
-
- static const struct i915_oa_reg b_counter_config_test_oa[] = {
-         { _MMIO(0xD920), 0x00000000 },
-         { _MMIO(0xD900), 0x00000000 },
-         { _MMIO(0xD904), 0xF0800000 },
-         { _MMIO(0xD910), 0x00000000 },
-         { _MMIO(0xD914), 0xF0800000 },
-         { _MMIO(0xDC40), 0x00FF0000 },
-         { _MMIO(0xD940), 0x00000004 },
-         { _MMIO(0xD944), 0x0000FFFF },
-         { _MMIO(0xDC00), 0x00000004 },
-         { _MMIO(0xDC04), 0x0000FFFF },
-         { _MMIO(0xD948), 0x00000003 },
-         { _MMIO(0xD94C), 0x0000FFFF },
-         { _MMIO(0xDC08), 0x00000003 },
-         { _MMIO(0xDC0C), 0x0000FFFF },
-         { _MMIO(0xD950), 0x00000007 },
-         { _MMIO(0xD954), 0x0000FFFF },
-         { _MMIO(0xDC10), 0x00000007 },
-         { _MMIO(0xDC14), 0x0000FFFF },
-         { _MMIO(0xD958), 0x00100002 },
-         { _MMIO(0xD95C), 0x0000FFF7 },
-         { _MMIO(0xDC18), 0x00100002 },
-         { _MMIO(0xDC1C), 0x0000FFF7 },
-         { _MMIO(0xD960), 0x00100002 },
-         { _MMIO(0xD964), 0x0000FFCF },
-         { _MMIO(0xDC20), 0x00100002 },
-         { _MMIO(0xDC24), 0x0000FFCF },
-         { _MMIO(0xD968), 0x00100082 },
-         { _MMIO(0xD96C), 0x0000FFEF },
-         { _MMIO(0xDC28), 0x00100082 },
-         { _MMIO(0xDC2C), 0x0000FFEF },
-         { _MMIO(0xD970), 0x001000C2 },
-         { _MMIO(0xD974), 0x0000FFE7 },
-         { _MMIO(0xDC30), 0x001000C2 },
-         { _MMIO(0xDC34), 0x0000FFE7 },
-         { _MMIO(0xD978), 0x00100001 },
-         { _MMIO(0xD97C), 0x0000FFE7 },
-         { _MMIO(0xDC38), 0x00100001 },
-         { _MMIO(0xDC3C), 0x0000FFE7 },
- };
-
- static const struct i915_oa_reg flex_eu_config_test_oa[] = {
- };
-
- static const struct i915_oa_reg mux_config_test_oa[] = {
-         { _MMIO(0x0D04), 0x00000200 },
-         { _MMIO(0x9840), 0x00000000 },
-         { _MMIO(0x9884), 0x00000000 },
-         { _MMIO(0x9888), 0x280E0000 },
-         { _MMIO(0x9888), 0x1E0E0147 },
-         { _MMIO(0x9888), 0x180E0000 },
-         { _MMIO(0x9888), 0x160E0000 },
-         { _MMIO(0x9888), 0x1E0F1000 },
-         { _MMIO(0x9888), 0x1E104000 },
-         { _MMIO(0x9888), 0x2E020100 },
-         { _MMIO(0x9888), 0x2C030004 },
-         { _MMIO(0x9888), 0x38003000 },
-         { _MMIO(0x9888), 0x1E0A8000 },
-         { _MMIO(0x9884), 0x00000003 },
-         { _MMIO(0x9888), 0x49110000 },
-         { _MMIO(0x9888), 0x5D101400 },
-         { _MMIO(0x9888), 0x1D140020 },
-         { _MMIO(0x9888), 0x1D1103A3 },
-         { _MMIO(0x9888), 0x01110000 },
-         { _MMIO(0x9888), 0x61111000 },
-         { _MMIO(0x9888), 0x1F128000 },
-         { _MMIO(0x9888), 0x17100000 },
-         { _MMIO(0x9888), 0x55100630 },
-         { _MMIO(0x9888), 0x57100000 },
-         { _MMIO(0x9888), 0x31100000 },
-         { _MMIO(0x9884), 0x00000003 },
-         { _MMIO(0x9888), 0x65100002 },
-         { _MMIO(0x9884), 0x00000000 },
-         { _MMIO(0x9888), 0x42000001 },
- };
-
- static ssize_t
- show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
- {
-         return sprintf(buf, "1\n");
- }
-
- void
- i915_perf_load_test_config_tgl(struct drm_i915_private *dev_priv)
- {
-         strlcpy(dev_priv->perf.test_config.uuid,
-                 "80a833f0-2504-4321-8894-e9277844ce7b",
-                 sizeof(dev_priv->perf.test_config.uuid));
-         dev_priv->perf.test_config.id = 1;
-
-         dev_priv->perf.test_config.mux_regs = mux_config_test_oa;
-         dev_priv->perf.test_config.mux_regs_len = ARRAY_SIZE(mux_config_test_oa);
-
-         dev_priv->perf.test_config.b_counter_regs = b_counter_config_test_oa;
-         dev_priv->perf.test_config.b_counter_regs_len = ARRAY_SIZE(b_counter_config_test_oa);
-
-         dev_priv->perf.test_config.flex_regs = flex_eu_config_test_oa;
-         dev_priv->perf.test_config.flex_regs_len = ARRAY_SIZE(flex_eu_config_test_oa);
-
-         dev_priv->perf.test_config.sysfs_metric.name = "80a833f0-2504-4321-8894-e9277844ce7b";
-         dev_priv->perf.test_config.sysfs_metric.attrs = dev_priv->perf.test_config.attrs;
-
-         dev_priv->perf.test_config.attrs[0] = &dev_priv->perf.test_config.sysfs_metric_id.attr;
-
-         dev_priv->perf.test_config.sysfs_metric_id.attr.name = "id";
-         dev_priv->perf.test_config.sysfs_metric_id.attr.mode = 0444;
-         dev_priv->perf.test_config.sysfs_metric_id.show = show_test_oa_id;
- }
-16
drivers/gpu/drm/i915/oa/i915_oa_tgl.h
- /* SPDX-License-Identifier: MIT */
- /*
-  * Copyright © 2018 Intel Corporation
-  *
-  * Autogenerated file by GPU Top : https://github.com/rib/gputop
-  * DO NOT EDIT manually!
-  */
-
- #ifndef __I915_OA_TGL_H__
- #define __I915_OA_TGL_H__
-
- struct drm_i915_private;
-
- void i915_perf_load_test_config_tgl(struct drm_i915_private *dev_priv);
-
- #endif
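All of the files removed above are instances of one template: a platform suffix, a UUID, and two register tables dropped into identical boilerplate, which is why they could be machine-generated by GPU Top in the first place and why deleting them loses no hand-written logic. As a rough illustration only (a hypothetical sketch in Python, not GPU Top's actual generator, and every identifier below is invented for the example), such a generator boils down to string templating over register tables:

```python
# Hypothetical sketch of a generator producing files shaped like the
# removed i915_oa_*.c boilerplate. Not GPU Top's real code; the function
# and parameter names here are made up for illustration.

def emit_oa_config(platform: str, uuid: str, b_counter, mux) -> str:
    """Render one i915_oa_<platform>.c-style file from register tables."""

    def table(name, regs):
        # Each table row becomes "{ _MMIO(addr), value },".
        rows = "".join(f"\t{{ _MMIO({addr:#x}), {val:#010x} }},\n"
                       for addr, val in regs)
        return (f"static const struct i915_oa_reg {name}[] = {{\n"
                f"{rows}}};\n")

    return (
        table("b_counter_config_test_oa", b_counter) +
        "\n" +
        table("mux_config_test_oa", mux) +
        "\n"
        "void\n"
        f"i915_perf_load_test_config_{platform}(struct drm_i915_private *dev_priv)\n"
        "{\n"
        "\tstrlcpy(dev_priv->perf.test_config.uuid,\n"
        f"\t\t\"{uuid}\",\n"
        "\t\tsizeof(dev_priv->perf.test_config.uuid));\n"
        "\tdev_priv->perf.test_config.id = 1;\n"
        "}\n"
    )

src = emit_oa_config("hsw", "403d8832-1a27-4aa6-a64e-f5389ce7b212",
                     [(0x2724, 0x00800000)], [(0x9840, 0x00000080)])
```

The real configurations now come from userspace through the add-config interface instead of being baked into the kernel at build time.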
+6 -6
drivers/gpu/drm/i915/selftests/i915_active.c
···
          if (IS_ERR(active))
                  return PTR_ERR(active);

-         i915_active_wait(&active->base);
+         __i915_active_wait(&active->base, TASK_UNINTERRUPTIBLE);
          if (!READ_ONCE(active->retired)) {
                  struct drm_printer p = drm_err_printer(__func__);
···
          }

          i915_active_release(&active->base);
+         if (err)
+                 goto out;

-         if (err == 0)
-                 err = i915_active_wait(&active->base);
-
-         if (err == 0 && !READ_ONCE(active->retired)) {
+         __i915_active_wait(&active->base, TASK_UNINTERRUPTIBLE);
+         if (!READ_ONCE(active->retired)) {
                  pr_err("i915_active not retired after flushing barriers!\n");
                  err = -EINVAL;
          }
···

  void i915_active_print(struct i915_active *ref, struct drm_printer *m)
  {
-         drm_printf(m, "active %pS:%pS\n", ref->active, ref->retire);
+         drm_printf(m, "active %ps:%ps\n", ref->active, ref->retire);
          drm_printf(m, "\tcount: %d\n", atomic_read(&ref->count));
          drm_printf(m, "\tpreallocated barriers? %s\n",
                     yesno(!llist_empty(&ref->preallocated_barriers)));
-2
drivers/gpu/drm/i915/selftests/i915_gem.c
···
           */
          with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
                  i915_ggtt_resume(&i915->ggtt);
-                 i915_gem_restore_fences(&i915->ggtt);
-
                  i915_gem_resume(i915);
          }
  }
+1 -25
drivers/gpu/drm/i915/selftests/i915_gem_evict.c
···

  static int populate_ggtt(struct i915_ggtt *ggtt, struct list_head *objects)
  {
-         unsigned long unbound, bound, count;
          struct drm_i915_gem_object *obj;
+         unsigned long count;

          count = 0;
          do {
···
          } while (1);
          pr_debug("Filled GGTT with %lu pages [%llu total]\n",
                   count, ggtt->vm.total / PAGE_SIZE);
-
-         bound = 0;
-         unbound = 0;
-         list_for_each_entry(obj, objects, st_link) {
-                 GEM_BUG_ON(!obj->mm.quirked);
-
-                 if (atomic_read(&obj->bind_count))
-                         bound++;
-                 else
-                         unbound++;
-         }
-         GEM_BUG_ON(bound + unbound != count);
-
-         if (unbound) {
-                 pr_err("%s: Found %lu objects unbound, expected %u!\n",
-                        __func__, unbound, 0);
-                 return -EINVAL;
-         }
-
-         if (bound != count) {
-                 pr_err("%s: Found %lu objects bound, expected %lu!\n",
-                        __func__, bound, count);
-                 return -EINVAL;
-         }

          if (list_empty(&ggtt->vm.bound_list)) {
                  pr_err("No objects on the GGTT inactive list!\n");
-1
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
···
  {
          struct drm_i915_gem_object *obj = vma->obj;

-         atomic_inc(&obj->bind_count); /* track for eviction later */
          __i915_gem_object_pin_pages(obj);

          GEM_BUG_ON(vma->pages);
+96 -4
drivers/gpu/drm/i915/selftests/i915_perf.c
···
  #include "igt_flush_test.h"
  #include "lib_sw_fence.h"

+ #define TEST_OA_CONFIG_UUID "12345678-1234-1234-1234-1234567890ab"
+
+ static int
+ alloc_empty_config(struct i915_perf *perf)
+ {
+         struct i915_oa_config *oa_config;
+
+         oa_config = kzalloc(sizeof(*oa_config), GFP_KERNEL);
+         if (!oa_config)
+                 return -ENOMEM;
+
+         oa_config->perf = perf;
+         kref_init(&oa_config->ref);
+
+         strlcpy(oa_config->uuid, TEST_OA_CONFIG_UUID, sizeof(oa_config->uuid));
+
+         mutex_lock(&perf->metrics_lock);
+
+         oa_config->id = idr_alloc(&perf->metrics_idr, oa_config, 2, 0, GFP_KERNEL);
+         if (oa_config->id < 0) {
+                 mutex_unlock(&perf->metrics_lock);
+                 i915_oa_config_put(oa_config);
+                 return -ENOMEM;
+         }
+
+         mutex_unlock(&perf->metrics_lock);
+
+         return 0;
+ }
+
+ static void
+ destroy_empty_config(struct i915_perf *perf)
+ {
+         struct i915_oa_config *oa_config = NULL, *tmp;
+         int id;
+
+         mutex_lock(&perf->metrics_lock);
+
+         idr_for_each_entry(&perf->metrics_idr, tmp, id) {
+                 if (!strcmp(tmp->uuid, TEST_OA_CONFIG_UUID)) {
+                         oa_config = tmp;
+                         break;
+                 }
+         }
+
+         if (oa_config)
+                 idr_remove(&perf->metrics_idr, oa_config->id);
+
+         mutex_unlock(&perf->metrics_lock);
+
+         if (oa_config)
+                 i915_oa_config_put(oa_config);
+ }
+
+ static struct i915_oa_config *
+ get_empty_config(struct i915_perf *perf)
+ {
+         struct i915_oa_config *oa_config = NULL, *tmp;
+         int id;
+
+         mutex_lock(&perf->metrics_lock);
+
+         idr_for_each_entry(&perf->metrics_idr, tmp, id) {
+                 if (!strcmp(tmp->uuid, TEST_OA_CONFIG_UUID)) {
+                         oa_config = i915_oa_config_get(tmp);
+                         break;
+                 }
+         }
+
+         mutex_unlock(&perf->metrics_lock);
+
+         return oa_config;
+ }
+
  static struct i915_perf_stream *
  test_stream(struct i915_perf *perf)
  {
          struct drm_i915_perf_open_param param = {};
+         struct i915_oa_config *oa_config = get_empty_config(perf);
          struct perf_open_properties props = {
                  .engine = intel_engine_lookup_user(perf->i915,
                                                     I915_ENGINE_CLASS_RENDER,
···
                  .sample_flags = SAMPLE_OA_REPORT,
                  .oa_format = IS_GEN(perf->i915, 12) ?
                  I915_OA_FORMAT_A32u40_A4u32_B8_C8 : I915_OA_FORMAT_C4_B8,
-                 .metrics_set = 1,
          };
          struct i915_perf_stream *stream;

-         stream = kzalloc(sizeof(*stream), GFP_KERNEL);
-         if (!stream)
+         if (!oa_config)
                  return NULL;
+
+         props.metrics_set = oa_config->id;
+
+         stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+         if (!stream) {
+                 i915_oa_config_put(oa_config);
+                 return NULL;
+         }

          stream->perf = perf;

···
                  stream = NULL;
          }
          mutex_unlock(&perf->lock);
+
+         i915_oa_config_put(oa_config);

          return stream;
  }
···
                  SUBTEST(live_noa_delay),
          };
          struct i915_perf *perf = &i915->perf;
+         int err;

          if (!perf->metrics_kobj || !perf->ops.enable_metric_set)
                  return 0;
···
          if (intel_gt_is_wedged(&i915->gt))
                  return 0;

-         return i915_subtests(tests, i915);
+         err = alloc_empty_config(&i915->perf);
+         if (err)
+                 return err;
+
+         err = i915_subtests(tests, i915);
+
+         destroy_empty_config(&i915->perf);
+
+         return err;
  }
drivers/gpu/drm/i915/selftests/i915_request.c (+11, -5)
···
 #include "gem/selftests/mock_context.h"
 
 #include "gt/intel_engine_pm.h"
+#include "gt/intel_engine_user.h"
 #include "gt/intel_gt.h"
 
 #include "i915_random.h"
···
 	return count;
 }
 
+static struct intel_engine_cs *rcs0(struct drm_i915_private *i915)
+{
+	return intel_engine_lookup_user(i915, I915_ENGINE_CLASS_RENDER, 0);
+}
+
 static int igt_add_request(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
···
 
 	/* Basic preliminary test to create a request and let it loose! */
 
-	request = mock_request(i915->engine[RCS0]->kernel_context, HZ / 10);
+	request = mock_request(rcs0(i915)->kernel_context, HZ / 10);
 	if (!request)
 		return -ENOMEM;
···
 
 	/* Submit a request, then wait upon it */
 
-	request = mock_request(i915->engine[RCS0]->kernel_context, T);
+	request = mock_request(rcs0(i915)->kernel_context, T);
 	if (!request)
 		return -ENOMEM;
···
 
 	/* Submit a request, treat it as a fence and wait upon it */
 
-	request = mock_request(i915->engine[RCS0]->kernel_context, T);
+	request = mock_request(rcs0(i915)->kernel_context, T);
 	if (!request)
 		return -ENOMEM;
···
 {
 	struct drm_i915_private *i915 = arg;
 	struct smoketest t = {
-		.engine = i915->engine[RCS0],
+		.engine = rcs0(i915),
 		.ncontexts = 1024,
 		.max_batch = 1024,
 		.request_alloc = __mock_request_alloc
···
 	struct igt_live_test t;
 	unsigned int idx;
 
-	snprintf(name, sizeof(name), "%pS", fn);
+	snprintf(name, sizeof(name), "%ps", fn);
 	err = igt_live_test_begin(&t, i915, __func__, name);
 	if (err)
 		break;
drivers/gpu/drm/i915/selftests/intel_memory_region.c (+4, -1)
···
 	void *addr;
 
 	obj = i915_gem_object_create_region(mr, size, 0);
-	if (IS_ERR(obj))
+	if (IS_ERR(obj)) {
+		if (PTR_ERR(obj) == -ENOSPC) /* Stolen memory */
+			return ERR_PTR(-ENODEV);
 		return obj;
+	}
 
 	addr = i915_gem_object_pin_map(obj, type);
 	if (IS_ERR(addr)) {
drivers/gpu/drm/i915/selftests/mock_gem_device.c (+3, -3)
···
 
 	mkwrite_device_info(i915)->engine_mask = BIT(0);
 
-	i915->engine[RCS0] = mock_engine(i915, "mock", RCS0);
-	if (!i915->engine[RCS0])
+	i915->gt.engine[RCS0] = mock_engine(i915, "mock", RCS0);
+	if (!i915->gt.engine[RCS0])
 		goto err_unlock;
 
-	if (mock_engine_init(i915->engine[RCS0]))
+	if (mock_engine_init(i915->gt.engine[RCS0]))
 		goto err_context;
 
 	__clear_bit(I915_WEDGED, &i915->gt.reset.flags);
include/drm/drm_dp_helper.h (+169, -1)
···
 # define DP_TEST_CRC_SUPPORTED			(1 << 5)
 # define DP_TEST_COUNT_MASK			0xf
 
-#define DP_TEST_PHY_PATTERN			0x248
+#define DP_PHY_TEST_PATTERN			0x248
+# define DP_PHY_TEST_PATTERN_SEL_MASK		0x7
+# define DP_PHY_TEST_PATTERN_NONE		0x0
+# define DP_PHY_TEST_PATTERN_D10_2		0x1
+# define DP_PHY_TEST_PATTERN_ERROR_COUNT	0x2
+# define DP_PHY_TEST_PATTERN_PRBS7		0x3
+# define DP_PHY_TEST_PATTERN_80BIT_CUSTOM	0x4
+# define DP_PHY_TEST_PATTERN_CP2520		0x5
+
+#define DP_TEST_HBR2_SCRAMBLER_RESET		0x24A
 #define DP_TEST_80BIT_CUSTOM_PATTERN_7_0	0x250
 #define DP_TEST_80BIT_CUSTOM_PATTERN_15_8	0x251
 #define DP_TEST_80BIT_CUSTOM_PATTERN_23_16	0x252
···
 #define EDP_VSC_PSR_UPDATE_RFB		(1<<1)
 #define EDP_VSC_PSR_CRC_VALUES_VALID	(1<<2)
 
+/**
+ * enum dp_pixelformat - drm DP Pixel encoding formats
+ *
+ * This enum is used to indicate DP VSC SDP Pixel encoding formats.
+ * It is based on DP 1.4 spec [Table 2-117: VSC SDP Payload for DB16 through
+ * DB18]
+ *
+ * @DP_PIXELFORMAT_RGB: RGB pixel encoding format
+ * @DP_PIXELFORMAT_YUV444: YCbCr 4:4:4 pixel encoding format
+ * @DP_PIXELFORMAT_YUV422: YCbCr 4:2:2 pixel encoding format
+ * @DP_PIXELFORMAT_YUV420: YCbCr 4:2:0 pixel encoding format
+ * @DP_PIXELFORMAT_Y_ONLY: Y Only pixel encoding format
+ * @DP_PIXELFORMAT_RAW: RAW pixel encoding format
+ * @DP_PIXELFORMAT_RESERVED: Reserved pixel encoding format
+ */
+enum dp_pixelformat {
+	DP_PIXELFORMAT_RGB = 0,
+	DP_PIXELFORMAT_YUV444 = 0x1,
+	DP_PIXELFORMAT_YUV422 = 0x2,
+	DP_PIXELFORMAT_YUV420 = 0x3,
+	DP_PIXELFORMAT_Y_ONLY = 0x4,
+	DP_PIXELFORMAT_RAW = 0x5,
+	DP_PIXELFORMAT_RESERVED = 0x6,
+};
+
+/**
+ * enum dp_colorimetry - drm DP Colorimetry formats
+ *
+ * This enum is used to indicate DP VSC SDP Colorimetry formats.
+ * It is based on DP 1.4 spec [Table 2-117: VSC SDP Payload for DB16 through
+ * DB18] and enum member names follow the DRM_MODE_COLORIMETRY definitions.
+ *
+ * @DP_COLORIMETRY_DEFAULT: sRGB (IEC 61966-2-1) or
+ *			    ITU-R BT.601 colorimetry format
+ * @DP_COLORIMETRY_RGB_WIDE_FIXED: RGB wide gamut fixed point colorimetry format
+ * @DP_COLORIMETRY_BT709_YCC: ITU-R BT.709 colorimetry format
+ * @DP_COLORIMETRY_RGB_WIDE_FLOAT: RGB wide gamut floating point
+ *				   (scRGB (IEC 61966-2-2)) colorimetry format
+ * @DP_COLORIMETRY_XVYCC_601: xvYCC601 colorimetry format
+ * @DP_COLORIMETRY_OPRGB: OpRGB colorimetry format
+ * @DP_COLORIMETRY_XVYCC_709: xvYCC709 colorimetry format
+ * @DP_COLORIMETRY_DCI_P3_RGB: DCI-P3 (SMPTE RP 431-2) colorimetry format
+ * @DP_COLORIMETRY_SYCC_601: sYCC601 colorimetry format
+ * @DP_COLORIMETRY_RGB_CUSTOM: RGB Custom Color Profile colorimetry format
+ * @DP_COLORIMETRY_OPYCC_601: opYCC601 colorimetry format
+ * @DP_COLORIMETRY_BT2020_RGB: ITU-R BT.2020 R' G' B' colorimetry format
+ * @DP_COLORIMETRY_BT2020_CYCC: ITU-R BT.2020 Y'c C'bc C'rc colorimetry format
+ * @DP_COLORIMETRY_BT2020_YCC: ITU-R BT.2020 Y' C'b C'r colorimetry format
+ */
+enum dp_colorimetry {
+	DP_COLORIMETRY_DEFAULT = 0,
+	DP_COLORIMETRY_RGB_WIDE_FIXED = 0x1,
+	DP_COLORIMETRY_BT709_YCC = 0x1,
+	DP_COLORIMETRY_RGB_WIDE_FLOAT = 0x2,
+	DP_COLORIMETRY_XVYCC_601 = 0x2,
+	DP_COLORIMETRY_OPRGB = 0x3,
+	DP_COLORIMETRY_XVYCC_709 = 0x3,
+	DP_COLORIMETRY_DCI_P3_RGB = 0x4,
+	DP_COLORIMETRY_SYCC_601 = 0x4,
+	DP_COLORIMETRY_RGB_CUSTOM = 0x5,
+	DP_COLORIMETRY_OPYCC_601 = 0x5,
+	DP_COLORIMETRY_BT2020_RGB = 0x6,
+	DP_COLORIMETRY_BT2020_CYCC = 0x6,
+	DP_COLORIMETRY_BT2020_YCC = 0x7,
+};
+
+/**
+ * enum dp_dynamic_range - drm DP Dynamic Range
+ *
+ * This enum is used to indicate DP VSC SDP Dynamic Range.
+ * It is based on DP 1.4 spec [Table 2-117: VSC SDP Payload for DB16 through
+ * DB18]
+ *
+ * @DP_DYNAMIC_RANGE_VESA: VESA range
+ * @DP_DYNAMIC_RANGE_CTA: CTA range
+ */
+enum dp_dynamic_range {
+	DP_DYNAMIC_RANGE_VESA = 0,
+	DP_DYNAMIC_RANGE_CTA = 1,
+};
+
+/**
+ * enum dp_content_type - drm DP Content Type
+ *
+ * This enum is used to indicate DP VSC SDP Content Types.
+ * It is based on DP 1.4 spec [Table 2-117: VSC SDP Payload for DB16 through
+ * DB18]
+ * CTA-861-G defines content types and expected processing by a sink device
+ *
+ * @DP_CONTENT_TYPE_NOT_DEFINED: Not defined type
+ * @DP_CONTENT_TYPE_GRAPHICS: Graphics type
+ * @DP_CONTENT_TYPE_PHOTO: Photo type
+ * @DP_CONTENT_TYPE_VIDEO: Video type
+ * @DP_CONTENT_TYPE_GAME: Game type
+ */
+enum dp_content_type {
+	DP_CONTENT_TYPE_NOT_DEFINED = 0x00,
+	DP_CONTENT_TYPE_GRAPHICS = 0x01,
+	DP_CONTENT_TYPE_PHOTO = 0x02,
+	DP_CONTENT_TYPE_VIDEO = 0x03,
+	DP_CONTENT_TYPE_GAME = 0x04,
+};
+
+/**
+ * struct drm_dp_vsc_sdp - drm DP VSC SDP
+ *
+ * This structure represents a DP VSC SDP.
+ * It is based on DP 1.4 spec [Table 2-116: VSC SDP Header Bytes] and
+ * [Table 2-117: VSC SDP Payload for DB16 through DB18]
+ *
+ * @sdp_type: secondary-data packet type
+ * @revision: revision number
+ * @length: number of valid data bytes
+ * @pixelformat: pixel encoding format
+ * @colorimetry: colorimetry format
+ * @bpc: bits per color
+ * @dynamic_range: dynamic range information
+ * @content_type: CTA-861-G defines content types and expected processing by a sink device
+ */
+struct drm_dp_vsc_sdp {
+	unsigned char sdp_type;
+	unsigned char revision;
+	unsigned char length;
+	enum dp_pixelformat pixelformat;
+	enum dp_colorimetry colorimetry;
+	int bpc;
+	enum dp_dynamic_range dynamic_range;
+	enum dp_content_type content_type;
+};
+
 int drm_dp_psr_setup_time(const u8 psr_cap[EDP_PSR_RECEIVER_CAP_SIZE]);
 
 static inline int
···
 	 * capabilities advertised.
 	 */
 	DP_QUIRK_FORCE_DPCD_BACKLIGHT,
+	/**
+	 * @DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS:
+	 *
+	 * The device supports a link rate of 3.24 Gbps (multiplier 0xc) despite
+	 * the DP_MAX_LINK_RATE register reporting a lower max multiplier.
+	 */
+	DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS,
 };
 
 /**
···
 
 #endif
 
+/**
+ * struct drm_dp_phy_test_params - DP Phy Compliance parameters
+ * @link_rate: Requested Link rate from DPCD 0x219
+ * @num_lanes: Number of lanes requested by sink through DPCD 0x220
+ * @phy_pattern: DP Phy test pattern from DPCD 0x248
+ * @hbr2_reset: DP HBR2_COMPLIANCE_SCRAMBLER_RESET from DPCD 0x24A and 0x24B
+ * @custom80: DP Test_80BIT_CUSTOM_PATTERN from DPCDs 0x250 through 0x259
+ * @enhanced_frame_cap: flag for enhanced frame capability.
+ */
+struct drm_dp_phy_test_params {
+	int link_rate;
+	u8 num_lanes;
+	u8 phy_pattern;
+	u8 hbr2_reset[2];
+	u8 custom80[10];
+	bool enhanced_frame_cap;
+};
+
+int drm_dp_get_phy_test_pattern(struct drm_dp_aux *aux,
+				struct drm_dp_phy_test_params *data);
+int drm_dp_set_phy_test_pattern(struct drm_dp_aux *aux,
+				struct drm_dp_phy_test_params *data, u8 dp_rev);
 #endif /* _DRM_DP_HELPER_H_ */
include/drm/i915_pciids.h (+6, -2)
···
 
 /* TGL */
 #define INTEL_TGL_12_IDS(info) \
-	INTEL_VGA_DEVICE(0x9A49, info), \
 	INTEL_VGA_DEVICE(0x9A40, info), \
+	INTEL_VGA_DEVICE(0x9A49, info), \
 	INTEL_VGA_DEVICE(0x9A59, info), \
 	INTEL_VGA_DEVICE(0x9A60, info), \
 	INTEL_VGA_DEVICE(0x9A68, info), \
 	INTEL_VGA_DEVICE(0x9A70, info), \
-	INTEL_VGA_DEVICE(0x9A78, info)
+	INTEL_VGA_DEVICE(0x9A78, info), \
+	INTEL_VGA_DEVICE(0x9AC0, info), \
+	INTEL_VGA_DEVICE(0x9AC9, info), \
+	INTEL_VGA_DEVICE(0x9AD9, info), \
+	INTEL_VGA_DEVICE(0x9AF8, info)
 
 #endif /* _I915_PCIIDS_H */
include/uapi/drm/i915_drm.h (+24)
···
 	 */
 	DRM_I915_PERF_PROP_HOLD_PREEMPTION,
 
+	/**
+	 * Specifying this pins all contexts to the specified SSEU power
+	 * configuration for the duration of the recording.
+	 *
+	 * This parameter's value is a pointer to a struct
+	 * drm_i915_gem_context_param_sseu.
+	 *
+	 * This property is available in perf revision 4.
+	 */
+	DRM_I915_PERF_PROP_GLOBAL_SSEU,
+
+	/**
+	 * This optional parameter specifies the timer interval in nanoseconds
+	 * at which the i915 driver will check the OA buffer for available data.
+	 * Minimum allowed value is 100 microseconds. A default value is used by
+	 * the driver if this parameter is not specified. Note that larger timer
+	 * values will reduce cpu consumption during OA perf captures. However,
+	 * excessively large values would potentially result in OA buffer
+	 * overwrites as captures reach end of the OA buffer.
+	 *
+	 * This property is available in perf revision 5.
+	 */
+	DRM_I915_PERF_PROP_POLL_OA_PERIOD,
+
 	DRM_I915_PERF_PROP_MAX /* non-ABI */
 };