Merge tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
"I missed the drm-rust fixes tree for last week, so this catches up on
that, along with amdgpu, and then some misc fixes across a few
drivers. I hadn't got an xe pull by the time I sent this, I suspect
one will arrive 10 mins after, but I don't think there is anything
that can't wait for next week.

Things seem to have picked up a little with people coming back from
holidays.

MAINTAINERS:
- Fix Nova GPU driver git links
- Fix typo in TYR driver entry preventing correct behavior of
scripts/get_maintainer.pl
- Exclude TYR driver from DRM MISC

nova-core:
- Correctly select RUST_FW_LOADER_ABSTRACTIONS to prevent build
errors
- Regenerate nova-core bindgen bindings with '--explicit-padding' to
avoid uninitialized bytes
- Fix length of received GSP messages due to a miscalculated message
  payload size
- Regenerate bindings to derive MaybeZeroable
- Use a bindings alias to derive the firmware version

exynos:
- hdmi: replace system_wq with system_percpu_wq

pl111:
- Fix error handling in probe

mediatek/atomic/tidss:
- Fix tidss in another way and revert reordering of pre-enable and
post-disable operations, as it breaks other bridge drivers

nouveau:
- Fix regression from fwsec s/r fix

pci/vga:
- Fix multiple GPUs being reported as 'boot_display'

fb-helper:
- Fix vblank timeout during suspend/reset

amdgpu:
- Clang fixes
- Navi1x PCIe DPM fixes
- Ring reset fixes
- ISP suspend fix
- Analog DC fixes
- VPE fixes
- Mode1 reset fix

radeon:
- Variable sized array fix"

* tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel: (32 commits)
Reapply "Revert "drm/amd: Skip power ungate during suspend for VPE""
drm/amd/display: Check NULL before calling dac_load_detection
drm/amd/pm: Disable MMIO access during SMU Mode 1 reset
drm/exynos: hdmi: replace use of system_wq with system_percpu_wq
drm/fb-helper: Fix vblank timeout during suspend/reset
PCI/VGA: Don't assume the only VGA device on a system is `boot_vga`
drm/amdgpu: Fix query for VPE block_type and ip_count
drm/amd/display: Add missing encoder setup to DACnEncoderControl
drm/amd/display: Correct color depth for SelectCRTC_Source
drm/amd/amdgpu: Fix SMU warning during isp suspend-resume
drm/amdgpu: always backup and reemit fences
drm/amdgpu: don't reemit ring contents more than once
drm/amd/pm: force send pcie parmater on navi1x
drm/amd/pm: fix wrong pcie parameter on navi1x
drm/radeon: Remove __counted_by from ClockInfoArray.clockInfo[]
drm/amd/display: Reduce number of arguments of dcn30's CalculateWatermarksAndDRAMSpeedChangeSupport()
drm/amd/display: Reduce number of arguments of dcn30's CalculatePrefetchSchedule()
drm/amd/display: Apply e4479aecf658 to dml
nouveau: don't attempt fwsec on sb on newer platforms
drm/tidss: Fix enable/disable order
...

+4 -3
MAINTAINERS
···
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 W:	https://rust-for-linux.com/tyr-gpu-driver
-W	https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
+W:	https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
 B:	https://gitlab.freedesktop.org/panfrost/linux/-/issues
 T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git
 F:	Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
···
 Q:	https://patchwork.freedesktop.org/project/nouveau/
 B:	https://gitlab.freedesktop.org/drm/nova/-/issues
 C:	irc://irc.oftc.net/nouveau
-T:	git https://gitlab.freedesktop.org/drm/nova.git nova-next
+T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
 F:	Documentation/gpu/nova/
 F:	drivers/gpu/nova-core/
···
 Q:	https://patchwork.freedesktop.org/project/nouveau/
 B:	https://gitlab.freedesktop.org/drm/nova/-/issues
 C:	irc://irc.oftc.net/nouveau
-T:	git https://gitlab.freedesktop.org/drm/nova.git nova-next
+T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
 F:	Documentation/gpu/nova/
 F:	drivers/gpu/drm/nova/
 F:	include/uapi/drm/nova_drm.h
···
 X:	drivers/gpu/drm/nova/
 X:	drivers/gpu/drm/radeon/
 X:	drivers/gpu/drm/tegra/
+X:	drivers/gpu/drm/tyr/
 X:	drivers/gpu/drm/xe/
 
 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 		    (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
 		     adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA))
 			continue;
-		/* skip CG for VCE/UVD/VPE, it's handled specially */
+		/* skip CG for VCE/UVD, it's handled specially */
 		if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD &&
 		    adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE &&
 		    adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN &&
-		    adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VPE &&
 		    adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG &&
 		    adev->ip_blocks[i].version->funcs->set_powergating_state) {
 			/* enable powergating to save power */
···
 
 	if (ret)
 		goto mode1_reset_failed;
+
+	/* enable mmio access after mode 1 reset completed */
+	adev->no_hw_access = false;
 
 	amdgpu_device_load_pci_state(adev->pdev);
 	ret = amdgpu_psp_wait_for_bootloader(adev);
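The second hunk is the Mode 1 reset fix: MMIO access is blocked while the SMU
performs the reset and must be re-enabled once it completes. A minimal
stand-alone sketch of that gating pattern (hypothetical names and types; the
real flag is the adev->no_hw_access field shown above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dev_ctx {
	bool no_hw_access;	/* set while a mode 1 reset is in flight */
	uint32_t fake_reg;	/* stands in for a real MMIO register */
};

static void reg_write(struct dev_ctx *d, uint32_t val)
{
	if (d->no_hw_access)	/* drop accesses during the reset */
		return;
	d->fake_reg = val;
}

static void mode1_reset(struct dev_ctx *d)
{
	d->no_hw_access = true;		/* disable MMIO access during reset */
	/* ... the hardware reset would happen here ... */
	d->no_hw_access = false;	/* re-enable access after completion */
}

int main(void)
{
	struct dev_ctx d = { .no_hw_access = false };

	mode1_reset(&d);
	reg_write(&d, 0x1234);	/* lands, since the reset has completed */
	printf("reg = 0x%x\n", (unsigned int)d.fake_reg);
	return 0;
}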
+31 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
···
 	return seq;
 }
 
+static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af)
+{
+	af->fence_wptr_start = af->ring->wptr;
+}
+
+static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af)
+{
+	af->fence_wptr_end = af->ring->wptr;
+}
+
 /**
  * amdgpu_fence_emit - emit a fence on the requested ring
  *
···
 			       &ring->fence_drv.lock,
 			       adev->fence_context + ring->idx, seq);
 
+	amdgpu_fence_save_fence_wptr_start(af);
 	amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
 			       seq, flags | AMDGPU_FENCE_FLAG_INT);
+	amdgpu_fence_save_fence_wptr_end(af);
 	amdgpu_fence_save_wptr(af);
 	pm_runtime_get_noresume(adev_to_drm(adev)->dev);
 	ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
···
 	struct amdgpu_ring *ring = af->ring;
 	unsigned long flags;
 	u32 seq, last_seq;
+	bool reemitted = false;
 
 	last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
 	seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
···
 		if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) {
 			fence = container_of(unprocessed, struct amdgpu_fence, base);
 
-			if (fence == af)
+			if (fence->reemitted > 1)
+				reemitted = true;
+			else if (fence == af)
 				dma_fence_set_error(&fence->base, -ETIME);
 			else if (fence->context == af->context)
 				dma_fence_set_error(&fence->base, -ECANCELED);
···
 		rcu_read_unlock();
 	} while (last_seq != seq);
 	spin_unlock_irqrestore(&ring->fence_drv.lock, flags);
-	/* signal the guilty fence */
-	amdgpu_fence_write(ring, (u32)af->base.seqno);
-	amdgpu_fence_process(ring);
+
+	if (reemitted) {
+		/* if we've already reemitted once then just cancel everything */
+		amdgpu_fence_driver_force_completion(af->ring);
+		af->ring->ring_backup_entries_to_copy = 0;
+	}
 }
 
 void amdgpu_fence_save_wptr(struct amdgpu_fence *af)
···
 			/* save everything if the ring is not guilty, otherwise
 			 * just save the content from other contexts.
 			 */
-			if (!guilty_fence || (fence->context != guilty_fence->context))
+			if (!fence->reemitted &&
+			    (!guilty_fence || (fence->context != guilty_fence->context))) {
 				amdgpu_ring_backup_unprocessed_command(ring, wptr,
 								       fence->wptr);
+			} else if (!fence->reemitted) {
+				/* always save the fence */
+				amdgpu_ring_backup_unprocessed_command(ring,
+								       fence->fence_wptr_start,
+								       fence->fence_wptr_end);
+			}
 			wptr = fence->wptr;
+			fence->reemitted++;
 		}
 		rcu_read_unlock();
 	} while (last_seq != seq);
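The fence hunks above implement a "reemit at most once" policy for ring
resets: commands from the guilty context are skipped, nothing is ever
reemitted twice, and a second reset force-completes the whole ring instead.
A toy user-space model of just the skip/once decision (hypothetical types;
it deliberately omits the fence-only backup branch and the driver locking):

#include <stdbool.h>
#include <stdio.h>

struct toy_fence {
	int context;		/* submission context of this fence */
	unsigned int reemitted;	/* how many times it was already reemitted */
};

static bool should_backup(const struct toy_fence *f, int guilty_context)
{
	if (f->reemitted)	/* never requeue the same commands twice */
		return false;
	return f->context != guilty_context; /* skip the guilty context */
}

int main(void)
{
	struct toy_fence fences[] = { {1, 0}, {2, 0}, {2, 1}, {3, 2} };
	unsigned int i;

	for (i = 0; i < sizeof(fences) / sizeof(fences[0]); i++)
		printf("fence ctx=%d reemitted=%u -> backup=%d\n",
		       fences[i].context, fences[i].reemitted,
		       should_backup(&fences[i], /*guilty=*/2));
	return 0;
}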
+24
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.c
···
 }
 EXPORT_SYMBOL(isp_kernel_buffer_free);
 
+static int isp_resume(struct amdgpu_ip_block *ip_block)
+{
+	struct amdgpu_device *adev = ip_block->adev;
+	struct amdgpu_isp *isp = &adev->isp;
+
+	if (isp->funcs->hw_resume)
+		return isp->funcs->hw_resume(isp);
+
+	return -ENODEV;
+}
+
+static int isp_suspend(struct amdgpu_ip_block *ip_block)
+{
+	struct amdgpu_device *adev = ip_block->adev;
+	struct amdgpu_isp *isp = &adev->isp;
+
+	if (isp->funcs->hw_suspend)
+		return isp->funcs->hw_suspend(isp);
+
+	return -ENODEV;
+}
+
 static const struct amd_ip_funcs isp_ip_funcs = {
 	.name = "isp_ip",
 	.early_init = isp_early_init,
 	.hw_init = isp_hw_init,
 	.hw_fini = isp_hw_fini,
 	.is_idle = isp_is_idle,
+	.suspend = isp_suspend,
+	.resume = isp_resume,
 	.set_clockgating_state = isp_set_clockgating_state,
 	.set_powergating_state = isp_set_powergating_state,
 };
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.h
···
 struct isp_funcs {
 	int (*hw_init)(struct amdgpu_isp *isp);
 	int (*hw_fini)(struct amdgpu_isp *isp);
+	int (*hw_suspend)(struct amdgpu_isp *isp);
+	int (*hw_resume)(struct amdgpu_isp *isp);
 };
 
 struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···
 		type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
 			AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
 		break;
+	case AMDGPU_HW_IP_VPE:
+		type = AMD_IP_BLOCK_TYPE_VPE;
+		break;
 	default:
 		type = AMD_IP_BLOCK_TYPE_NUM;
 		break;
···
 		break;
 	case AMD_IP_BLOCK_TYPE_UVD:
 		count = adev->uvd.num_uvd_inst;
+		break;
+	case AMD_IP_BLOCK_TYPE_VPE:
+		count = adev->vpe.num_instances;
 		break;
 	/* For all other IP block types not listed in the switch statement
 	 * the ip status is valid here and the instance count is one.
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···
 	struct amdgpu_ring *ring;
 	ktime_t start_timestamp;
 
-	/* wptr for the fence for resets */
+	/* wptr for the total submission for resets */
 	u64 wptr;
 	/* fence context for resets */
 	u64 context;
+	/* has this fence been reemitted */
+	unsigned int reemitted;
+	/* wptr for the fence for the submission */
+	u64 fence_wptr_start;
+	u64 fence_wptr_end;
 };
 
 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
···
  */
 
 #include <linux/gpio/machine.h>
+#include <linux/pm_runtime.h>
 #include "amdgpu.h"
 #include "isp_v4_1_1.h"
···
 		return -ENODEV;
 	}
 
+	/* The devices will be managed by the pm ops from the parent */
+	dev_pm_syscore_device(dev, true);
+
 exit:
 	/* Continue to add */
 	return 0;
···
 		drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret);
 		return -ENODEV;
 	}
+	dev_pm_syscore_device(dev, false);
 
 exit:
 	/* Continue to remove */
 	return 0;
+}
+
+static int isp_suspend_device(struct device *dev, void *data)
+{
+	return pm_runtime_force_suspend(dev);
+}
+
+static int isp_resume_device(struct device *dev, void *data)
+{
+	return pm_runtime_force_resume(dev);
+}
+
+static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp)
+{
+	int r;
+
+	r = device_for_each_child(isp->parent, NULL,
+				  isp_suspend_device);
+	if (r)
+		dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r);
+
+	return r;
+}
+
+static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp)
+{
+	int r;
+
+	r = device_for_each_child(isp->parent, NULL,
+				  isp_resume_device);
+	if (r)
+		dev_err(isp->parent, "failed to resume hw device (%d)\n", r);
+
+	return r;
 }
 
 static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp)
···
 static const struct isp_funcs isp_v4_1_1_funcs = {
 	.hw_init = isp_v4_1_1_hw_init,
 	.hw_fini = isp_v4_1_1_hw_fini,
+	.hw_suspend = isp_v4_1_1_hw_suspend,
+	.hw_resume = isp_v4_1_1_hw_resume,
 };
 
 void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
+2 -2
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
···
 			return BP_RESULT_FAILURE;
 
 		return bp->cmd_tbl.dac1_encoder_control(
-			bp, cntl->action == ENCODER_CONTROL_ENABLE,
+			bp, cntl->action,
 			cntl->pixel_clock, ATOM_DAC1_PS2);
 	} else if (cntl->engine_id == ENGINE_ID_DACB) {
 		if (!bp->cmd_tbl.dac2_encoder_control)
 			return BP_RESULT_FAILURE;
 
 		return bp->cmd_tbl.dac2_encoder_control(
-			bp, cntl->action == ENCODER_CONTROL_ENABLE,
+			bp, cntl->action,
 			cntl->pixel_clock, ATOM_DAC1_PS2);
 	}
 
+35 -9
drivers/gpu/drm/amd/display/dc/bios/command_table.c
···
 			&params.ucEncodeMode))
 		return BP_RESULT_BADINPUT;
 
-	params.ucDstBpc = bp_params->bit_depth;
+	switch (bp_params->color_depth) {
+	case COLOR_DEPTH_UNDEFINED:
+		params.ucDstBpc = PANEL_BPC_UNDEFINE;
+		break;
+	case COLOR_DEPTH_666:
+		params.ucDstBpc = PANEL_6BIT_PER_COLOR;
+		break;
+	default:
+	case COLOR_DEPTH_888:
+		params.ucDstBpc = PANEL_8BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_101010:
+		params.ucDstBpc = PANEL_10BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_121212:
+		params.ucDstBpc = PANEL_12BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_141414:
+		dm_error("14-bit color not supported by SelectCRTC_Source v3\n");
+		break;
+	case COLOR_DEPTH_161616:
+		params.ucDstBpc = PANEL_16BIT_PER_COLOR;
+		break;
+	}
 
 	if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params))
 		result = BP_RESULT_OK;
···
 
 static enum bp_result dac1_encoder_control_v1(
 	struct bios_parser *bp,
-	bool enable,
+	enum bp_encoder_control_action action,
 	uint32_t pixel_clock,
 	uint8_t dac_standard);
 static enum bp_result dac2_encoder_control_v1(
 	struct bios_parser *bp,
-	bool enable,
+	enum bp_encoder_control_action action,
 	uint32_t pixel_clock,
 	uint8_t dac_standard);
···
 
 static void dac_encoder_control_prepare_params(
 	DAC_ENCODER_CONTROL_PS_ALLOCATION *params,
-	bool enable,
+	enum bp_encoder_control_action action,
 	uint32_t pixel_clock,
 	uint8_t dac_standard)
 {
 	params->ucDacStandard = dac_standard;
-	if (enable)
+	if (action == ENCODER_CONTROL_SETUP ||
+	    action == ENCODER_CONTROL_INIT)
+		params->ucAction = ATOM_ENCODER_INIT;
+	else if (action == ENCODER_CONTROL_ENABLE)
 		params->ucAction = ATOM_ENABLE;
 	else
 		params->ucAction = ATOM_DISABLE;
···
 
 static enum bp_result dac1_encoder_control_v1(
 	struct bios_parser *bp,
-	bool enable,
+	enum bp_encoder_control_action action,
 	uint32_t pixel_clock,
 	uint8_t dac_standard)
 {
···
 
 	dac_encoder_control_prepare_params(
 		&params,
-		enable,
+		action,
 		pixel_clock,
 		dac_standard);
···
 
 static enum bp_result dac2_encoder_control_v1(
 	struct bios_parser *bp,
-	bool enable,
+	enum bp_encoder_control_action action,
 	uint32_t pixel_clock,
 	uint8_t dac_standard)
 {
···
 
 	dac_encoder_control_prepare_params(
 		&params,
-		enable,
+		action,
 		pixel_clock,
 		dac_standard);
+2 -2
drivers/gpu/drm/amd/display/dc/bios/command_table.h
···
 		struct bp_crtc_source_select *bp_params);
 	enum bp_result (*dac1_encoder_control)(
 		struct bios_parser *bp,
-		bool enable,
+		enum bp_encoder_control_action action,
 		uint32_t pixel_clock,
 		uint8_t dac_standard);
 	enum bp_result (*dac2_encoder_control)(
 		struct bios_parser *bp,
-		bool enable,
+		enum bp_encoder_control_action action,
 		uint32_t pixel_clock,
 		uint8_t dac_standard);
 	enum bp_result (*dac1_output_control)(
+5 -1
drivers/gpu/drm/amd/display/dc/dml/Makefile
···
 
 ifneq ($(CONFIG_FRAME_WARN),0)
 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
-frame_warn_limit := 3072
+ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
+frame_warn_limit := 4096
+else
+frame_warn_limit := 3072
+endif
 else
 frame_warn_limit := 2048
 endif
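This limit feeds the compiler's -Wframe-larger-than= warning; KASAN/KCSAN
instrumentation inflates stack frames, and clang under COMPILE_TEST needs
still more headroom, hence the new 4096 tier. A small compile-only C example
showing how the warning trips once a function's frame exceeds the limit
(build with: cc -c -Wframe-larger-than=2048 frame.c):

/* frame.c - the ~4 KiB stack buffer pushes the frame past 2048 bytes,
 * so the compiler warns "the frame size of ... is larger than 2048 bytes".
 */
void big_frame(char *out)
{
	char buf[4096];	/* large stack buffer -> large stack frame */
	int i;

	for (i = 0; i < 4096; i++)
		buf[i] = (char)i;
	for (i = 0; i < 4096; i++)
		out[i] = buf[i];
}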
+139 -406
drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
···
 static unsigned int dscComputeDelay(
 		enum output_format_class pixelFormat,
 		enum output_encoder_class Output);
-// Super monster function with some 45 argument
 static bool CalculatePrefetchSchedule(
 		struct display_mode_lib *mode_lib,
-		double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
-		double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
+		unsigned int k,
 		Pipe *myPipe,
 		unsigned int DSCDelay,
-		double DPPCLKDelaySubtotalPlusCNVCFormater,
-		double DPPCLKDelaySCL,
-		double DPPCLKDelaySCLLBOnly,
-		double DPPCLKDelayCNVCCursor,
-		double DISPCLKDelaySubtotal,
 		unsigned int DPP_RECOUT_WIDTH,
-		enum output_format_class OutputFormat,
-		unsigned int MaxInterDCNTileRepeaters,
 		unsigned int VStartup,
 		unsigned int MaxVStartup,
-		unsigned int GPUVMPageTableLevels,
-		bool GPUVMEnable,
-		bool HostVMEnable,
-		unsigned int HostVMMaxNonCachedPageTableLevels,
-		double HostVMMinPageSize,
-		bool DynamicMetadataEnable,
-		bool DynamicMetadataVMEnabled,
-		int DynamicMetadataLinesBeforeActiveRequired,
-		unsigned int DynamicMetadataTransmittedBytes,
 		double UrgentLatency,
 		double UrgentExtraLatency,
 		double TCalc,
···
 		unsigned int MaxNumSwathY,
 		double PrefetchSourceLinesC,
 		unsigned int SwathWidthC,
-		int BytePerPixelC,
 		double VInitPreFillC,
 		unsigned int MaxNumSwathC,
 		long swath_width_luma_ub,
···
 		unsigned int SwathHeightY,
 		unsigned int SwathHeightC,
 		double TWait,
-		bool ProgressiveToInterlaceUnitInOPP,
-		double *DSTXAfterScaler,
-		double *DSTYAfterScaler,
 		double *DestinationLinesForPrefetch,
 		double *PrefetchBandwidth,
 		double *DestinationLinesToRequestVMInVBlank,
···
 		double *VRatioPrefetchC,
 		double *RequiredPrefetchPixDataBWLuma,
 		double *RequiredPrefetchPixDataBWChroma,
-		bool *NotEnoughTimeForDynamicMetadata,
-		double *Tno_bw,
-		double *prefetch_vmrow_bw,
-		double *Tdmdl_vm,
-		double *Tdmdl,
-		unsigned int *VUpdateOffsetPix,
-		double *VUpdateWidthPix,
-		double *VReadyOffsetPix);
+		bool *NotEnoughTimeForDynamicMetadata);
 static double RoundToDFSGranularityUp(double Clock, double VCOSpeed);
 static double RoundToDFSGranularityDown(double Clock, double VCOSpeed);
 static void CalculateDCCConfiguration(
···
 static void CalculateWatermarksAndDRAMSpeedChangeSupport(
 		struct display_mode_lib *mode_lib,
 		unsigned int PrefetchMode,
-		unsigned int NumberOfActivePlanes,
-		unsigned int MaxLineBufferLines,
-		unsigned int LineBufferSize,
-		unsigned int DPPOutputBufferPixels,
-		unsigned int DETBufferSizeInKByte,
-		unsigned int WritebackInterfaceBufferSize,
 		double DCFCLK,
 		double ReturnBW,
-		bool GPUVMEnable,
-		unsigned int dpte_group_bytes[],
-		unsigned int MetaChunkSize,
 		double UrgentLatency,
 		double ExtraLatency,
-		double WritebackLatency,
-		double WritebackChunkSize,
 		double SOCCLK,
-		double DRAMClockChangeLatency,
-		double SRExitTime,
-		double SREnterPlusExitTime,
 		double DCFCLKDeepSleep,
 		unsigned int DPPPerPlane[],
-		bool DCCEnable[],
 		double DPPCLK[],
 		unsigned int DETBufferSizeY[],
 		unsigned int DETBufferSizeC[],
 		unsigned int SwathHeightY[],
 		unsigned int SwathHeightC[],
-		unsigned int LBBitPerPixel[],
 		double SwathWidthY[],
 		double SwathWidthC[],
-		double HRatio[],
-		double HRatioChroma[],
-		unsigned int vtaps[],
-		unsigned int VTAPsChroma[],
-		double VRatio[],
-		double VRatioChroma[],
-		unsigned int HTotal[],
-		double PixelClock[],
-		unsigned int BlendingAndTiming[],
 		double BytePerPixelDETY[],
 		double BytePerPixelDETC[],
-		double DSTXAfterScaler[],
-		double DSTYAfterScaler[],
-		bool WritebackEnable[],
-		enum source_format_class WritebackPixelFormat[],
-		double WritebackDestinationWidth[],
-		double WritebackDestinationHeight[],
-		double WritebackSourceHeight[],
-		enum clock_change_support *DRAMClockChangeSupport,
-		double *UrgentWatermark,
-		double *WritebackUrgentWatermark,
-		double *DRAMClockChangeWatermark,
-		double *WritebackDRAMClockChangeWatermark,
-		double *StutterExitWatermark,
-		double *StutterEnterPlusExitWatermark,
-		double *MinActiveDRAMClockChangeLatencySupported);
+		enum clock_change_support *DRAMClockChangeSupport);
 static void CalculateDCFCLKDeepSleep(
 		struct display_mode_lib *mode_lib,
 		unsigned int NumberOfActivePlanes,
···
 
 static bool CalculatePrefetchSchedule(
 		struct display_mode_lib *mode_lib,
-		double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
-		double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
+		unsigned int k,
 		Pipe *myPipe,
 		unsigned int DSCDelay,
-		double DPPCLKDelaySubtotalPlusCNVCFormater,
-		double DPPCLKDelaySCL,
-		double DPPCLKDelaySCLLBOnly,
-		double DPPCLKDelayCNVCCursor,
-		double DISPCLKDelaySubtotal,
 		unsigned int DPP_RECOUT_WIDTH,
-		enum output_format_class OutputFormat,
-		unsigned int MaxInterDCNTileRepeaters,
 		unsigned int VStartup,
 		unsigned int MaxVStartup,
-		unsigned int GPUVMPageTableLevels,
-		bool GPUVMEnable,
-		bool HostVMEnable,
-		unsigned int HostVMMaxNonCachedPageTableLevels,
-		double HostVMMinPageSize,
-		bool DynamicMetadataEnable,
-		bool DynamicMetadataVMEnabled,
-		int DynamicMetadataLinesBeforeActiveRequired,
-		unsigned int DynamicMetadataTransmittedBytes,
 		double UrgentLatency,
 		double UrgentExtraLatency,
 		double TCalc,
···
 		unsigned int MaxNumSwathY,
 		double PrefetchSourceLinesC,
 		unsigned int SwathWidthC,
-		int BytePerPixelC,
 		double VInitPreFillC,
 		unsigned int MaxNumSwathC,
 		long swath_width_luma_ub,
···
 		unsigned int SwathHeightY,
 		unsigned int SwathHeightC,
 		double TWait,
-		bool ProgressiveToInterlaceUnitInOPP,
-		double *DSTXAfterScaler,
-		double *DSTYAfterScaler,
 		double *DestinationLinesForPrefetch,
 		double *PrefetchBandwidth,
 		double *DestinationLinesToRequestVMInVBlank,
···
 		double *VRatioPrefetchC,
 		double *RequiredPrefetchPixDataBWLuma,
 		double *RequiredPrefetchPixDataBWChroma,
-		bool *NotEnoughTimeForDynamicMetadata,
-		double *Tno_bw,
-		double *prefetch_vmrow_bw,
-		double *Tdmdl_vm,
-		double *Tdmdl,
-		unsigned int *VUpdateOffsetPix,
-		double *VUpdateWidthPix,
-		double *VReadyOffsetPix)
+		bool *NotEnoughTimeForDynamicMetadata)
 {
+	struct vba_vars_st *v = &mode_lib->vba;
+	double DPPCLKDelaySubtotalPlusCNVCFormater = v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater;
 	bool MyError = false;
 	unsigned int DPPCycles = 0, DISPCLKCycles = 0;
 	double DSTTotalPixelsAfterScaler = 0;
···
 	double Tdmec = 0;
 	double Tdmsks = 0;
 
-	if (GPUVMEnable == true && HostVMEnable == true) {
-		HostVMInefficiencyFactor = PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly;
-		HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels;
+	if (v->GPUVMEnable == true && v->HostVMEnable == true) {
+		HostVMInefficiencyFactor = v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly;
+		HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels;
 	} else {
 		HostVMInefficiencyFactor = 1;
 		HostVMDynamicLevelsTrips = 0;
 	}
 
 	CalculateDynamicMetadataParameters(
-			MaxInterDCNTileRepeaters,
+			v->MaxInterDCNTileRepeaters,
 			myPipe->DPPCLK,
 			myPipe->DISPCLK,
 			myPipe->DCFCLKDeepSleep,
 			myPipe->PixelClock,
 			myPipe->HTotal,
 			myPipe->VBlank,
-			DynamicMetadataTransmittedBytes,
-			DynamicMetadataLinesBeforeActiveRequired,
+			v->DynamicMetadataTransmittedBytes[k],
+			v->DynamicMetadataLinesBeforeActiveRequired[k],
 			myPipe->InterlaceEnable,
-			ProgressiveToInterlaceUnitInOPP,
+			v->ProgressiveToInterlaceUnitInOPP,
 			&Tsetup,
 			&Tdmbf,
 			&Tdmec,
···
 
 	LineTime = myPipe->HTotal / myPipe->PixelClock;
 	trip_to_mem = UrgentLatency;
-	Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1);
+	Tvm_trips = UrgentExtraLatency + trip_to_mem * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1);
 
-	if (DynamicMetadataVMEnabled == true && GPUVMEnable == true) {
-		*Tdmdl = TWait + Tvm_trips + trip_to_mem;
+	if (v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true) {
+		v->Tdmdl[k] = TWait + Tvm_trips + trip_to_mem;
 	} else {
-		*Tdmdl = TWait + UrgentExtraLatency;
+		v->Tdmdl[k] = TWait + UrgentExtraLatency;
 	}
 
-	if (DynamicMetadataEnable == true) {
-		if (VStartup * LineTime < Tsetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) {
+	if (v->DynamicMetadataEnable[k] == true) {
+		if (VStartup * LineTime < Tsetup + v->Tdmdl[k] + Tdmbf + Tdmec + Tdmsks) {
 			*NotEnoughTimeForDynamicMetadata = true;
 		} else {
 			*NotEnoughTimeForDynamicMetadata = false;
···
 			dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf);
 			dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec);
 			dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks);
-			dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl);
+			dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]);
 		}
 	} else {
 		*NotEnoughTimeForDynamicMetadata = false;
 	}
 
-	*Tdmdl_vm = (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true && GPUVMEnable == true ? TWait + Tvm_trips : 0);
+	v->Tdmdl_vm[k] = (v->DynamicMetadataEnable[k] == true && v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true ? TWait + Tvm_trips : 0);
 
 	if (myPipe->ScalerEnabled)
-		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL;
+		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCL;
 	else
-		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly;
+		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCLLBOnly;
 
-	DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor;
+	DPPCycles = DPPCycles + myPipe->NumberOfCursors * v->DPPCLKDelayCNVCCursor;
 
-	DISPCLKCycles = DISPCLKDelaySubtotal;
+	DISPCLKCycles = v->DISPCLKDelaySubtotal;
 
 	if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0)
 		return true;
 
-	*DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK
+	v->DSTXAfterScaler[k] = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK
 			+ DSCDelay;
 
-	*DSTXAfterScaler = *DSTXAfterScaler + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH;
+	v->DSTXAfterScaler[k] = v->DSTXAfterScaler[k] + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH;
 
-	if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && ProgressiveToInterlaceUnitInOPP))
-		*DSTYAfterScaler = 1;
+	if (v->OutputFormat[k] == dm_420 || (myPipe->InterlaceEnable && v->ProgressiveToInterlaceUnitInOPP))
+		v->DSTYAfterScaler[k] = 1;
 	else
-		*DSTYAfterScaler = 0;
+		v->DSTYAfterScaler[k] = 0;
 
-	DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler;
-	*DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1);
-	*DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal));
+	DSTTotalPixelsAfterScaler = v->DSTYAfterScaler[k] * myPipe->HTotal + v->DSTXAfterScaler[k];
+	v->DSTYAfterScaler[k] = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1);
+	v->DSTXAfterScaler[k] = DSTTotalPixelsAfterScaler - ((double) (v->DSTYAfterScaler[k] * myPipe->HTotal));
 
 	MyError = false;
···
 	Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime;
 	Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime;
 
-	if (GPUVMEnable) {
-		if (GPUVMPageTableLevels >= 3) {
-			*Tno_bw = UrgentExtraLatency + trip_to_mem * ((GPUVMPageTableLevels - 2) - 1);
+	if (v->GPUVMEnable) {
+		if (v->GPUVMMaxPageTableLevels >= 3) {
+			v->Tno_bw[k] = UrgentExtraLatency + trip_to_mem * ((v->GPUVMMaxPageTableLevels - 2) - 1);
 		} else
-			*Tno_bw = 0;
+			v->Tno_bw[k] = 0;
 	} else if (!myPipe->DCCEnable)
-		*Tno_bw = LineTime;
+		v->Tno_bw[k] = LineTime;
 	else
-		*Tno_bw = LineTime / 4;
+		v->Tno_bw[k] = LineTime / 4;
 
-	dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime
-			- (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal);
+	dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, v->Tdmdl[k])) / LineTime
+			- (v->DSTYAfterScaler[k] + v->DSTXAfterScaler[k] / myPipe->HTotal);
 	dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
 
 	Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC);
 	Tsw_oto = Lsw_oto * LineTime;
 
-	prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) / Tsw_oto;
+	prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) / Tsw_oto;
 
-	if (GPUVMEnable == true) {
-		Tvm_oto = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto,
+	if (v->GPUVMEnable == true) {
+		Tvm_oto = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto,
 				Tvm_trips,
 				LineTime / 4.0);
 	} else
 		Tvm_oto = LineTime / 4.0;
 
-	if ((GPUVMEnable == true || myPipe->DCCEnable == true)) {
+	if ((v->GPUVMEnable == true || myPipe->DCCEnable == true)) {
 		Tr0_oto = dml_max3(
 				(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto,
 				LineTime - Tvm_oto, LineTime / 4);
···
 	dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf);
 	dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec);
 	dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks);
-	dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", *Tdmdl_vm);
-	dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl);
-	dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", *DSTXAfterScaler);
-	dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)*DSTYAfterScaler);
+	dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", v->Tdmdl_vm[k]);
+	dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]);
+	dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", v->DSTXAfterScaler[k]);
+	dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)v->DSTYAfterScaler[k]);
 
 	*PrefetchBandwidth = 0;
 	*DestinationLinesToRequestVMInVBlank = 0;
···
 		double PrefetchBandwidth3 = 0;
 		double PrefetchBandwidth4 = 0;
 
-		if (Tpre_rounded - *Tno_bw > 0)
+		if (Tpre_rounded - v->Tno_bw[k] > 0)
 			PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte
 					+ 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor
 					+ PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY
-					+ PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC)
-					/ (Tpre_rounded - *Tno_bw);
+					+ PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k])
+					/ (Tpre_rounded - v->Tno_bw[k]);
 		else
 			PrefetchBandwidth1 = 0;
 
-		if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw) > 0) {
-			PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw);
+		if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]) > 0) {
+			PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]);
 		}
 
-		if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0)
+		if (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded > 0)
 			PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame *
 					HostVMInefficiencyFactor + PrefetchSourceLinesY *
 					swath_width_luma_ub * BytePerPixelY +
 					PrefetchSourceLinesC * swath_width_chroma_ub *
-					BytePerPixelC) /
-					(Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded);
+					v->BytePerPixelC[k]) /
+					(Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded);
 		else
 			PrefetchBandwidth2 = 0;
 
···
 			PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow *
 					HostVMInefficiencyFactor + PrefetchSourceLinesY *
 					swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC *
-					swath_width_chroma_ub * BytePerPixelC) / (Tpre_rounded -
+					swath_width_chroma_ub * v->BytePerPixelC[k]) / (Tpre_rounded -
 					Tvm_trips_rounded);
 		else
 			PrefetchBandwidth3 = 0;
···
 		}
 
 		if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0)
-			PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC)
+			PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k])
 					/ (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded);
 		else
 			PrefetchBandwidth4 = 0;
···
 			bool Case3OK;
 
 			if (PrefetchBandwidth1 > 0) {
-				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1
+				if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1
 						>= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth1 >= Tr0_trips_rounded) {
 					Case1OK = true;
 				} else {
···
 			}
 
 			if (PrefetchBandwidth2 > 0) {
-				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2
+				if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2
 						>= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) {
 					Case2OK = true;
 				} else {
···
 			}
 
 			if (PrefetchBandwidth3 > 0) {
-				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3
+				if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3
 						< Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) {
 					Case3OK = true;
 				} else {
···
 		dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ);
 
 		if (prefetch_bw_equ > 0) {
-			if (GPUVMEnable) {
-				Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4);
+			if (v->GPUVMEnable) {
+				Tvm_equ = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4);
 			} else {
 				Tvm_equ = LineTime / 4;
 			}
 
-			if ((GPUVMEnable || myPipe->DCCEnable)) {
+			if ((v->GPUVMEnable || myPipe->DCCEnable)) {
 				Tr0_equ = dml_max4(
 						(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ,
 						Tr0_trips,
···
 		}
 
 		*RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime;
-		*RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * BytePerPixelC * swath_width_chroma_ub / LineTime;
+		*RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * v->BytePerPixelC[k] * swath_width_chroma_ub / LineTime;
 	} else {
 		MyError = true;
 		dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__);
···
 		dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank);
 		dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank);
 		dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime);
-		dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime);
+		dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime);
 		dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n");
-		dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup);
+		dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup);
 		dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow);
 
 	} else {
···
 		dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__);
 	}
 
-	*prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw);
+	v->prefetch_vmrow_bw[k] = dml_max(prefetch_vm_bw, prefetch_row_bw);
 }
 
 	if (MyError) {
···
 
 			v->ErrorResult[k] = CalculatePrefetchSchedule(
 					mode_lib,
-					v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
-					v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
+					k,
 					&myPipe,
 					v->DSCDelay[k],
-					v->DPPCLKDelaySubtotal
-							+ v->DPPCLKDelayCNVCFormater,
-					v->DPPCLKDelaySCL,
-					v->DPPCLKDelaySCLLBOnly,
-					v->DPPCLKDelayCNVCCursor,
-					v->DISPCLKDelaySubtotal,
 					(unsigned int) (v->SwathWidthY[k] / v->HRatio[k]),
-					v->OutputFormat[k],
-					v->MaxInterDCNTileRepeaters,
 					dml_min(v->VStartupLines, v->MaxVStartupLines[k]),
 					v->MaxVStartupLines[k],
-					v->GPUVMMaxPageTableLevels,
-					v->GPUVMEnable,
-					v->HostVMEnable,
-					v->HostVMMaxNonCachedPageTableLevels,
-					v->HostVMMinPageSize,
-					v->DynamicMetadataEnable[k],
-					v->DynamicMetadataVMEnabled,
-					v->DynamicMetadataLinesBeforeActiveRequired[k],
-					v->DynamicMetadataTransmittedBytes[k],
 					v->UrgentLatency,
 					v->UrgentExtraLatency,
 					v->TCalc,
···
 					v->MaxNumSwathY[k],
 					v->PrefetchSourceLinesC[k],
 					v->SwathWidthC[k],
-					v->BytePerPixelC[k],
 					v->VInitPreFillC[k],
 					v->MaxNumSwathC[k],
 					v->swath_width_luma_ub[k],
···
 					v->SwathHeightY[k],
 					v->SwathHeightC[k],
 					TWait,
-					v->ProgressiveToInterlaceUnitInOPP,
-					&v->DSTXAfterScaler[k],
-					&v->DSTYAfterScaler[k],
 					&v->DestinationLinesForPrefetch[k],
 					&v->PrefetchBandwidth[k],
 					&v->DestinationLinesToRequestVMInVBlank[k],
···
 					&v->VRatioPrefetchC[k],
 					&v->RequiredPrefetchPixDataBWLuma[k],
 					&v->RequiredPrefetchPixDataBWChroma[k],
-					&v->NotEnoughTimeForDynamicMetadata[k],
-					&v->Tno_bw[k],
-					&v->prefetch_vmrow_bw[k],
-					&v->Tdmdl_vm[k],
-					&v->Tdmdl[k],
-					&v->VUpdateOffsetPix[k],
-					&v->VUpdateWidthPix[k],
-					&v->VReadyOffsetPix[k]);
+					&v->NotEnoughTimeForDynamicMetadata[k]);
 			if (v->BlendingAndTiming[k] == k) {
 				double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK);
 				v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k];
···
 	CalculateWatermarksAndDRAMSpeedChangeSupport(
 			mode_lib,
 			PrefetchMode,
-			v->NumberOfActivePlanes,
-			v->MaxLineBufferLines,
-			v->LineBufferSize,
-			v->DPPOutputBufferPixels,
-			v->DETBufferSizeInKByte[0],
-			v->WritebackInterfaceBufferSize,
 			v->DCFCLK,
 			v->ReturnBW,
-			v->GPUVMEnable,
-			v->dpte_group_bytes,
-			v->MetaChunkSize,
 			v->UrgentLatency,
 			v->UrgentExtraLatency,
-			v->WritebackLatency,
-			v->WritebackChunkSize,
 			v->SOCCLK,
-			v->FinalDRAMClockChangeLatency,
-			v->SRExitTime,
-			v->SREnterPlusExitTime,
 			v->DCFCLKDeepSleep,
 			v->DPPPerPlane,
-			v->DCCEnable,
 			v->DPPCLK,
 			v->DETBufferSizeY,
 			v->DETBufferSizeC,
 			v->SwathHeightY,
 			v->SwathHeightC,
-			v->LBBitPerPixel,
 			v->SwathWidthY,
 			v->SwathWidthC,
-			v->HRatio,
-			v->HRatioChroma,
-			v->vtaps,
-			v->VTAPsChroma,
-			v->VRatio,
-			v->VRatioChroma,
-			v->HTotal,
-			v->PixelClock,
-			v->BlendingAndTiming,
 			v->BytePerPixelDETY,
 			v->BytePerPixelDETC,
-			v->DSTXAfterScaler,
-			v->DSTYAfterScaler,
-			v->WritebackEnable,
-			v->WritebackPixelFormat,
-			v->WritebackDestinationWidth,
-			v->WritebackDestinationHeight,
-			v->WritebackSourceHeight,
-			&DRAMClockChangeSupport,
-			&v->UrgentWatermark,
-			&v->WritebackUrgentWatermark,
-			&v->DRAMClockChangeWatermark,
-			&v->WritebackDRAMClockChangeWatermark,
-			&v->StutterExitWatermark,
-			&v->StutterEnterPlusExitWatermark,
-			&v->MinActiveDRAMClockChangeLatencySupported);
+			&DRAMClockChangeSupport);
 
 	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
 		if (v->WritebackEnable[k] == true) {
···
 
 					v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule(
 							mode_lib,
-							v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData,
-							v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly,
+							k,
 							&myPipe,
 							v->DSCDelayPerState[i][k],
-							v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater,
-							v->DPPCLKDelaySCL,
-							v->DPPCLKDelaySCLLBOnly,
-							v->DPPCLKDelayCNVCCursor,
-							v->DISPCLKDelaySubtotal,
 							v->SwathWidthYThisState[k] / v->HRatio[k],
-							v->OutputFormat[k],
-							v->MaxInterDCNTileRepeaters,
 							dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]),
 							v->MaximumVStartup[i][j][k],
-							v->GPUVMMaxPageTableLevels,
-							v->GPUVMEnable,
-							v->HostVMEnable,
-							v->HostVMMaxNonCachedPageTableLevels,
-							v->HostVMMinPageSize,
-							v->DynamicMetadataEnable[k],
-							v->DynamicMetadataVMEnabled,
-							v->DynamicMetadataLinesBeforeActiveRequired[k],
-							v->DynamicMetadataTransmittedBytes[k],
 							v->UrgLatency[i],
 							v->ExtraLatency,
 							v->TimeCalc,
···
 							v->MaxNumSwY[k],
 							v->PrefetchLinesC[i][j][k],
 							v->SwathWidthCThisState[k],
-							v->BytePerPixelC[k],
 							v->PrefillC[k],
 							v->MaxNumSwC[k],
 							v->swath_width_luma_ub_this_state[k],
···
 							v->SwathHeightYThisState[k],
 							v->SwathHeightCThisState[k],
 							v->TWait,
-							v->ProgressiveToInterlaceUnitInOPP,
-							&v->DSTXAfterScaler[k],
-							&v->DSTYAfterScaler[k],
 							&v->LineTimesForPrefetch[k],
 							&v->PrefetchBW[k],
 							&v->LinesForMetaPTE[k],
···
 							&v->VRatioPreC[i][j][k],
 							&v->RequiredPrefetchPixelDataBWLuma[i][j][k],
 							&v->RequiredPrefetchPixelDataBWChroma[i][j][k],
-							&v->NoTimeForDynamicMetadata[i][j][k],
-							&v->Tno_bw[k],
-							&v->prefetch_vmrow_bw[k],
-							&v->Tdmdl_vm[k],
-							&v->Tdmdl[k],
-							&v->VUpdateOffsetPix[k],
-							&v->VUpdateWidthPix[k],
-							&v->VReadyOffsetPix[k]);
+							&v->NoTimeForDynamicMetadata[i][j][k]);
 				}
 
 				for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
···
 				CalculateWatermarksAndDRAMSpeedChangeSupport(
 						mode_lib,
 						v->PrefetchModePerState[i][j],
-						v->NumberOfActivePlanes,
-						v->MaxLineBufferLines,
-						v->LineBufferSize,
-						v->DPPOutputBufferPixels,
-						v->DETBufferSizeInKByte[0],
-						v->WritebackInterfaceBufferSize,
 						v->DCFCLKState[i][j],
 						v->ReturnBWPerState[i][j],
-						v->GPUVMEnable,
-						v->dpte_group_bytes,
-						v->MetaChunkSize,
 						v->UrgLatency[i],
 						v->ExtraLatency,
-						v->WritebackLatency,
-						v->WritebackChunkSize,
 						v->SOCCLKPerState[i],
-						v->FinalDRAMClockChangeLatency,
-						v->SRExitTime,
-						v->SREnterPlusExitTime,
 						v->ProjectedDCFCLKDeepSleep[i][j],
 						v->NoOfDPPThisState,
-						v->DCCEnable,
 						v->RequiredDPPCLKThisState,
 						v->DETBufferSizeYThisState,
 						v->DETBufferSizeCThisState,
 						v->SwathHeightYThisState,
 						v->SwathHeightCThisState,
-						v->LBBitPerPixel,
 						v->SwathWidthYThisState,
 						v->SwathWidthCThisState,
-						v->HRatio,
-						v->HRatioChroma,
-						v->vtaps,
-						v->VTAPsChroma,
-						v->VRatio,
-						v->VRatioChroma,
-						v->HTotal,
-						v->PixelClock,
-						v->BlendingAndTiming,
 						v->BytePerPixelInDETY,
 						v->BytePerPixelInDETC,
-						v->DSTXAfterScaler,
-						v->DSTYAfterScaler,
-						v->WritebackEnable,
-						v->WritebackPixelFormat,
-						v->WritebackDestinationWidth,
-						v->WritebackDestinationHeight,
-						v->WritebackSourceHeight,
-						&v->DRAMClockChangeSupport[i][j],
-						&v->UrgentWatermark,
-						&v->WritebackUrgentWatermark,
-						&v->DRAMClockChangeWatermark,
-						&v->WritebackDRAMClockChangeWatermark,
-						&v->StutterExitWatermark,
-						&v->StutterEnterPlusExitWatermark,
-						&v->MinActiveDRAMClockChangeLatencySupported);
+						&v->DRAMClockChangeSupport[i][j]);
 			}
 		}
···
 static void CalculateWatermarksAndDRAMSpeedChangeSupport(
 		struct display_mode_lib *mode_lib,
 		unsigned int PrefetchMode,
-		unsigned int NumberOfActivePlanes,
-		unsigned int MaxLineBufferLines,
-		unsigned int LineBufferSize,
-		unsigned int DPPOutputBufferPixels,
-		unsigned int DETBufferSizeInKByte,
-		unsigned int WritebackInterfaceBufferSize,
 		double DCFCLK,
 		double ReturnBW,
-		bool GPUVMEnable,
-		unsigned int dpte_group_bytes[],
-		unsigned int MetaChunkSize,
 		double UrgentLatency,
 		double ExtraLatency,
-		double WritebackLatency,
-		double WritebackChunkSize,
 		double SOCCLK,
-		double DRAMClockChangeLatency,
-		double SRExitTime,
-		double SREnterPlusExitTime,
 		double DCFCLKDeepSleep,
 		unsigned int DPPPerPlane[],
-		bool DCCEnable[],
 		double DPPCLK[],
 		unsigned int DETBufferSizeY[],
 		unsigned int DETBufferSizeC[],
 		unsigned int SwathHeightY[],
 		unsigned int SwathHeightC[],
-		unsigned int LBBitPerPixel[],
 		double SwathWidthY[],
 		double SwathWidthC[],
-		double HRatio[],
-		double HRatioChroma[],
-		unsigned int vtaps[],
-		unsigned int VTAPsChroma[],
-		double VRatio[],
-		double VRatioChroma[],
-		unsigned int HTotal[],
-		double PixelClock[],
-		unsigned int BlendingAndTiming[],
 		double BytePerPixelDETY[],
 		double BytePerPixelDETC[],
-		double DSTXAfterScaler[],
-		double DSTYAfterScaler[],
-		bool WritebackEnable[],
-		enum source_format_class WritebackPixelFormat[],
-		double WritebackDestinationWidth[],
-		double WritebackDestinationHeight[],
-		double WritebackSourceHeight[],
-		enum clock_change_support *DRAMClockChangeSupport,
-		double *UrgentWatermark,
-		double *WritebackUrgentWatermark,
-		double *DRAMClockChangeWatermark,
-		double *WritebackDRAMClockChangeWatermark,
-		double *StutterExitWatermark,
-		double *StutterEnterPlusExitWatermark,
-		double *MinActiveDRAMClockChangeLatencySupported)
+		enum clock_change_support *DRAMClockChangeSupport)
 {
+	struct vba_vars_st *v = &mode_lib->vba;
 	double EffectiveLBLatencyHidingY = 0;
 	double EffectiveLBLatencyHidingC = 0;
 	double LinesInDETY[DC__NUM_DPP__MAX] = { 0 };
···
 	double WritebackDRAMClockChangeLatencyHiding = 0;
 	unsigned int k, j;
 
-	mode_lib->vba.TotalActiveDPP = 0;
-	mode_lib->vba.TotalDCCActiveDPP = 0;
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
-		mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + DPPPerPlane[k];
-		if (DCCEnable[k] == true) {
-			mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP + DPPPerPlane[k];
+	v->TotalActiveDPP = 0;
+	v->TotalDCCActiveDPP = 0;
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+		v->TotalActiveDPP = v->TotalActiveDPP + DPPPerPlane[k];
+		if (v->DCCEnable[k] == true) {
+			v->TotalDCCActiveDPP = v->TotalDCCActiveDPP + DPPPerPlane[k];
 		}
 	}
 
-	*UrgentWatermark = UrgentLatency + ExtraLatency;
+	v->UrgentWatermark = UrgentLatency + ExtraLatency;
 
-	*DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark;
+	v->DRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->UrgentWatermark;
 
-	mode_lib->vba.TotalActiveWriteback = 0;
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
-		if (WritebackEnable[k] == true) {
-			mode_lib->vba.TotalActiveWriteback = mode_lib->vba.TotalActiveWriteback + 1;
+	v->TotalActiveWriteback = 0;
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+		if (v->WritebackEnable[k] == true) {
+			v->TotalActiveWriteback = v->TotalActiveWriteback + 1;
 		}
 	}
 
-	if (mode_lib->vba.TotalActiveWriteback <= 1) {
-		*WritebackUrgentWatermark = WritebackLatency;
+	if (v->TotalActiveWriteback <= 1) {
+		v->WritebackUrgentWatermark = v->WritebackLatency;
 	} else {
-		*WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+		v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
 	}
 
-	if (mode_lib->vba.TotalActiveWriteback <= 1) {
-		*WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency;
+	if (v->TotalActiveWriteback <= 1) {
+		v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency;
 	} else {
-		*WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+		v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
 	}
 
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
 
-		mode_lib->vba.LBLatencyHidingSourceLinesY = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1);
+		v->LBLatencyHidingSourceLinesY = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1);
 
-		mode_lib->vba.LBLatencyHidingSourceLinesC = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1);
+		v->LBLatencyHidingSourceLinesC = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1);
 
-		EffectiveLBLatencyHidingY = mode_lib->vba.LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]);
+		EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]);
 
-		EffectiveLBLatencyHidingC = mode_lib->vba.LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]);
+		EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]);
 
 		LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k];
 		LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]);
-		FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k];
+		FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k];
 		if (BytePerPixelDETC[k] > 0) {
-			LinesInDETC = mode_lib->vba.DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
+			LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
 			LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]);
-			FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k];
+			FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k];
 		} else {
 			LinesInDETC = 0;
 			FullDETBufferingTimeC = 999999;
 		}
 
-		ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark;
+		ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark;
 
-		if (NumberOfActivePlanes > 1) {
-			ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k];
+		if (v->NumberOfActivePlanes > 1) {
+			ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k];
 		}
 
 		if (BytePerPixelDETC[k] > 0) {
-			ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark;
+			ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark;
 
-			if (NumberOfActivePlanes > 1) {
-				ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k];
+			if (v->NumberOfActivePlanes > 1) {
+				ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k];
 			}
-			mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC);
+			v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC);
 		} else {
-			mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY;
+			v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY;
 		}
 
-		if (WritebackEnable[k] == true) {
+		if (v->WritebackEnable[k] == true) {
 
-			WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4);
-			if (WritebackPixelFormat[k] == dm_444_64) {
+			WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4);
+			if (v->WritebackPixelFormat[k] == dm_444_64) {
 				WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2;
 			}
-			if (mode_lib->vba.WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) {
+			if (v->WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) {
 				WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2;
 			}
-			WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - mode_lib->vba.WritebackDRAMClockChangeWatermark;
-			mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin);
+			WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark;
+			v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(v->ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin);
 		}
 	}
 
-	mode_lib->vba.MinActiveDRAMClockChangeMargin = 999999;
+	v->MinActiveDRAMClockChangeMargin = 999999;
 	PlaneWithMinActiveDRAMClockChangeMargin = 0;
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
-		if (mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < mode_lib->vba.MinActiveDRAMClockChangeMargin) {
-			mode_lib->vba.MinActiveDRAMClockChangeMargin = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k];
-			if (BlendingAndTiming[k] == k) {
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+		if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) {
+			v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k];
+			if (v->BlendingAndTiming[k] == k) {
 				PlaneWithMinActiveDRAMClockChangeMargin = k;
 			} else {
-				for (j = 0; j < NumberOfActivePlanes; ++j) {
-					if (BlendingAndTiming[k] == j) {
+				for (j = 0; j < v->NumberOfActivePlanes; ++j) {
+					if (v->BlendingAndTiming[k] == j) {
 						PlaneWithMinActiveDRAMClockChangeMargin = j;
 					}
 				}
···
 		}
 	}
 
-	*MinActiveDRAMClockChangeLatencySupported = mode_lib->vba.MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency;
+	v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->FinalDRAMClockChangeLatency;
 
 	SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999;
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
-		if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) {
-			SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k];
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+		if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) {
+			SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k];
 		}
 	}
 
-	mode_lib->vba.TotalNumberOfActiveOTG = 0;
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
-		if (BlendingAndTiming[k] == k) {
-			mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1;
+	v->TotalNumberOfActiveOTG = 0;
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
+		if (v->BlendingAndTiming[k] == k) {
+			v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1;
 		}
 	}
 
-	if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) {
+	if (v->MinActiveDRAMClockChangeMargin > 0) {
 		*DRAMClockChangeSupport = dm_dram_clock_change_vactive;
-	} else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) {
+	} else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) {
 		*DRAMClockChangeSupport = dm_dram_clock_change_vblank;
 	} else {
 		*DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
 	}
 
 	FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0];
-	for (k = 0; k < NumberOfActivePlanes; ++k) {
+	for (k = 0; k < v->NumberOfActivePlanes; ++k) {
 		if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) {
 			FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k];
-			TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k];
+			TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k];
 		}
 	}
 
-	*StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
-	*StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane);
+	v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;
+	v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane);
 
 }
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5362 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5363 + if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5364 + SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k]; 5098 5365 } 5099 5366 } 5100 5367 5101 - mode_lib->vba.TotalNumberOfActiveOTG = 0; 5102 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5103 - if (BlendingAndTiming[k] == k) { 5104 - mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1; 5368 + v->TotalNumberOfActiveOTG = 0; 5369 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5370 + if (v->BlendingAndTiming[k] == k) { 5371 + v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1; 5105 5372 } 5106 5373 } 5107 5374 5108 - if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) { 5375 + if (v->MinActiveDRAMClockChangeMargin > 0) { 5109 5376 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5110 - } else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5377 + } else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5111 5378 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5112 5379 } else { 5113 5380 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5114 5381 } 5115 5382 5116 5383 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0]; 5117 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5384 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5118 5385 if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) { 5119 5386 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k]; 5120 - TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5387 + TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5121 5388 } 5122 5389 } 5123 5390 5124 - *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5125 - *StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5391 + v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5392 + v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5126 5393 5127 5394 } 5128 5395
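The two dcn30 "Reduce number of arguments" patches above follow a recurring DML cleanup pattern: rather than threading dozens of per-plane scalars and output pointers through the call, the function takes the shared mode_lib->vba state (the local `v`) and reads and writes it directly. A minimal standalone C sketch of that pattern, with illustrative names that only stand in for the real display_mode_vba structures:

    #include <stdio.h>

    struct vba_vars {                   /* stand-in for the shared DML state */
            double HTotal[2], PixelClock[2], VRatio[2];
            int NumberOfActivePlanes;
    };

    /* line time in microseconds for plane k; every input comes from *v */
    static double line_time_us(const struct vba_vars *v, int k)
    {
            return v->HTotal[k] / v->PixelClock[k];
    }

    int main(void)
    {
            struct vba_vars v = {
                    .HTotal = { 2200.0, 2200.0 },
                    .PixelClock = { 148.5, 148.5 },     /* MHz */
                    .VRatio = { 1.0, 1.0 },
                    .NumberOfActivePlanes = 2,
            };

            for (int k = 0; k < v.NumberOfActivePlanes; k++)
                    printf("plane %d: line time %.3f us\n",
                           k, line_time_us(&v, k));
            return 0;
    }

The payoff is visible in the hunks above: fewer stack arguments per call and a single source of truth for each value.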
+1 -27
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1610 1610 struct dc_bios *bios = link->ctx->dc_bios; 1611 1611 struct bp_crtc_source_select crtc_source_select = {0}; 1612 1612 enum engine_id engine_id = link->link_enc->preferred_engine; 1613 - uint8_t bit_depth; 1614 1613 1615 1614 if (dc_is_rgb_signal(pipe_ctx->stream->signal)) 1616 1615 engine_id = link->link_enc->analog_engine; 1617 1616 1618 - switch (pipe_ctx->stream->timing.display_color_depth) { 1619 - case COLOR_DEPTH_UNDEFINED: 1620 - bit_depth = 0; 1621 - break; 1622 - case COLOR_DEPTH_666: 1623 - bit_depth = 6; 1624 - break; 1625 - default: 1626 - case COLOR_DEPTH_888: 1627 - bit_depth = 8; 1628 - break; 1629 - case COLOR_DEPTH_101010: 1630 - bit_depth = 10; 1631 - break; 1632 - case COLOR_DEPTH_121212: 1633 - bit_depth = 12; 1634 - break; 1635 - case COLOR_DEPTH_141414: 1636 - bit_depth = 14; 1637 - break; 1638 - case COLOR_DEPTH_161616: 1639 - bit_depth = 16; 1640 - break; 1641 - } 1642 - 1643 1617 crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst; 1644 - crtc_source_select.bit_depth = bit_depth; 1618 + crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth; 1645 1619 crtc_source_select.engine_id = engine_id; 1646 1620 crtc_source_select.sink_signal = pipe_ctx->stream->signal; 1647 1621
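The dce110 change above stops collapsing the color depth to a raw bit count at the call site and hands the dc_color_depth enum down instead, so the enum-to-bits translation can live in one place next to the BIOS command code. A hedged sketch of what such a centralized mapping looks like, using stand-in names rather than the real dc_color_depth values:

    #include <stdio.h>

    enum color_depth_sketch {           /* illustrative, not the real enum */
            DEPTH_UNDEFINED, DEPTH_666, DEPTH_888, DEPTH_101010,
            DEPTH_121212, DEPTH_141414, DEPTH_161616,
    };

    /* one translation point instead of a copy in every caller */
    static unsigned int depth_to_bits(enum color_depth_sketch d)
    {
            switch (d) {
            case DEPTH_666:         return 6;
            case DEPTH_101010:      return 10;
            case DEPTH_121212:      return 12;
            case DEPTH_141414:      return 14;
            case DEPTH_161616:      return 16;
            case DEPTH_UNDEFINED:   return 0;
            case DEPTH_888:
            default:                return 8;
            }
    }

    int main(void)
    {
            printf("%u bpc\n", depth_to_bits(DEPTH_101010));    /* 10 bpc */
            return 0;
    }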
+1 -1
drivers/gpu/drm/amd/display/include/bios_parser_types.h
··· 136 136 enum engine_id engine_id; 137 137 enum controller_id controller_id; 138 138 enum signal_type sink_signal; 139 - uint8_t bit_depth; 139 + enum dc_color_depth color_depth; 140 140 }; 141 141 142 142 struct bp_transmitter_control {
+15 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 2455 2455 } 2456 2456 2457 2457 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2458 - if (pptable->PcieGenSpeed[i] > pcie_gen_cap || 2459 - pptable->PcieLaneCount[i] > pcie_width_cap) { 2460 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2461 - pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2462 - pcie_gen_cap : pptable->PcieGenSpeed[i]; 2463 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2464 - pptable->PcieLaneCount[i] > pcie_width_cap ? 2465 - pcie_width_cap : pptable->PcieLaneCount[i]; 2466 - smu_pcie_arg = i << 16; 2467 - smu_pcie_arg |= pcie_gen_cap << 8; 2468 - smu_pcie_arg |= pcie_width_cap; 2469 - ret = smu_cmn_send_smc_msg_with_param(smu, 2470 - SMU_MSG_OverridePcieParameters, 2471 - smu_pcie_arg, 2472 - NULL); 2473 - if (ret) 2474 - break; 2475 - } 2458 + dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2459 + pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2460 + pcie_gen_cap : pptable->PcieGenSpeed[i]; 2461 + dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2462 + pptable->PcieLaneCount[i] > pcie_width_cap ? 2463 + pcie_width_cap : pptable->PcieLaneCount[i]; 2464 + smu_pcie_arg = i << 16; 2465 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_gen[i] << 8; 2466 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_lane[i]; 2467 + ret = smu_cmn_send_smc_msg_with_param(smu, 2468 + SMU_MSG_OverridePcieParameters, 2469 + smu_pcie_arg, 2470 + NULL); 2471 + if (ret) 2472 + return ret; 2476 2473 } 2477 2474 2478 2475 return ret;
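The navi1x hunk above fixes two problems at once: the override message is now sent for every link level instead of only the levels that exceeded the platform caps, and it carries the clamped table values rather than the bare caps. A standalone C sketch of the clamp-and-pack step, using the same bit layout as the hunk (link level in bits [23:16], gen speed in [15:8], lane count in [7:0]):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t pack_pcie_override(unsigned int level, uint8_t table_gen,
                                       uint8_t table_lane, uint8_t gen_cap,
                                       uint8_t lane_cap)
    {
            /* never advertise more than the platform can actually do */
            uint8_t gen = table_gen > gen_cap ? gen_cap : table_gen;
            uint8_t lane = table_lane > lane_cap ? lane_cap : table_lane;

            return (level << 16) | ((uint32_t)gen << 8) | lane;
    }

    int main(void)
    {
            /* e.g. the pptable asks for gen4 x16 but the slot caps at gen3 x8 */
            printf("0x%06x\n", (unsigned int)pack_pcie_override(1, 4, 16, 3, 8));
            return 0;                                           /* 0x010308 */
    }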
+6 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2923 2923 break; 2924 2924 } 2925 2925 2926 - if (!ret) 2926 + if (!ret) { 2927 + /* disable mmio access while doing mode 1 reset */ 2928 + smu->adev->no_hw_access = true; 2929 + /* ensure no_hw_access is globally visible before any MMIO */ 2930 + smp_mb(); 2927 2931 msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS); 2932 + } 2928 2933 2929 2934 return ret; 2930 2935 }
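Setting adev->no_hw_access before the long reset sleep only works because amdgpu's register accessors consult that flag and back off; the smp_mb() makes the store visible to concurrent readers before any of them issues an MMIO. A minimal userspace sketch of that guard, with a fake device struct standing in for the real amdgpu accessors:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct dev_sketch {
            uint32_t mmio[4];       /* stand-in for the real BAR mapping */
            bool no_hw_access;
    };

    static uint32_t reg_read(struct dev_sketch *d, unsigned int off)
    {
            if (d->no_hw_access)    /* chip is in mode 1 reset: stay away */
                    return 0;
            return d->mmio[off];
    }

    int main(void)
    {
            struct dev_sketch d = { .mmio = { 0xdead, 0, 0, 0 } };

            d.no_hw_access = true;  /* as set right before the msleep() */
            printf("0x%x\n", (unsigned int)reg_read(&d, 0));    /* 0x0 */
            return 0;
    }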
+7 -2
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2143 2143 2144 2144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset); 2145 2145 if (!ret) { 2146 - if (amdgpu_emu_mode == 1) 2146 + if (amdgpu_emu_mode == 1) { 2147 2147 msleep(50000); 2148 - else 2148 + } else { 2149 + /* disable mmio access while doing mode 1 reset */ 2150 + smu->adev->no_hw_access = true; 2151 + /* ensure no_hw_access is globally visible before any MMIO */ 2152 + smp_mb(); 2149 2153 msleep(1000); 2154 + } 2150 2155 } 2151 2156 2152 2157 return ret; }
+99 -23
drivers/gpu/drm/drm_atomic_helper.c
··· 1162 1162 new_state->self_refresh_active; 1163 1163 } 1164 1164 1165 - static void 1166 - encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state) 1165 + /** 1166 + * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder 1167 + * @dev: DRM device 1168 + * @state: the driver state object 1169 + * 1170 + * Loops over all connectors in the current state and if the CRTC needs 1171 + * it, disables the bridge chain all the way, then disables the encoder 1172 + * afterwards. 1173 + */ 1174 + void 1175 + drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 1176 + struct drm_atomic_state *state) 1167 1177 { 1168 1178 struct drm_connector *connector; 1169 1179 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1239 1229 } 1240 1230 } 1241 1231 } 1232 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable); 1242 1233 1243 - static void 1244 - crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1234 + /** 1235 + * drm_atomic_helper_commit_crtc_disable - disable CRTCs 1236 + * @dev: DRM device 1237 + * @state: the driver state object 1238 + * 1239 + * Loops over all CRTCs in the current state and if the CRTC needs 1240 + * it, disables it. 1241 + */ 1242 + void 1243 + drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1245 1244 { 1246 1245 struct drm_crtc *crtc; 1247 1246 struct drm_crtc_state *old_crtc_state, *new_crtc_state; ··· 1301 1282 drm_crtc_vblank_put(crtc); 1302 1283 } 1303 1284 } 1285 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable); 1304 1286 1305 - static void 1306 - encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1287 + /** 1288 + * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges 1289 + * @dev: DRM device 1290 + * @state: the driver state object 1291 + * 1292 + * Loops over all connectors in the current state and if the CRTC needs 1293 + * it, post-disables all encoder bridges. 1294 + */ 1295 + void 1296 + drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1307 1297 { 1308 1298 struct drm_connector *connector; 1309 1299 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1363 1335 drm_bridge_put(bridge); 1364 1336 } 1365 1337 } 1338 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable); 1366 1339 1367 1340 static void 1368 1341 disable_outputs(struct drm_device *dev, struct drm_atomic_state *state) 1369 1342 { 1370 - encoder_bridge_disable(dev, state); 1343 + drm_atomic_helper_commit_encoder_bridge_disable(dev, state); 1371 1344 1372 - crtc_disable(dev, state); 1345 + drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state); 1373 1346 1374 - encoder_bridge_post_disable(dev, state); 1347 + drm_atomic_helper_commit_crtc_disable(dev, state); 1375 1348 } 1376 1349 1377 1350 /** ··· 1475 1446 } 1476 1447 EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants); 1477 1448 1478 - static void 1479 - crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1449 + /** 1450 + * drm_atomic_helper_commit_crtc_set_mode - set the new mode 1451 + * @dev: DRM device 1452 + * @state: the driver state object 1453 + * 1454 + * Loops over all CRTCs in the current state and if the mode has 1455 + * changed, changes the mode of the CRTC, then calls down the bridge 1456 + * chain and changes the mode in all bridges as well.
1457 + */ 1458 + void 1459 + drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1480 1460 { 1481 1461 struct drm_crtc *crtc; 1482 1462 struct drm_crtc_state *new_crtc_state; ··· 1546 1508 drm_bridge_put(bridge); 1547 1509 } 1548 1510 } 1511 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode); 1549 1512 1550 1513 /** 1551 1514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs ··· 1570 1531 drm_atomic_helper_update_legacy_modeset_state(dev, state); 1571 1532 drm_atomic_helper_calc_timestamping_constants(state); 1572 1533 1573 - crtc_set_mode(dev, state); 1534 + drm_atomic_helper_commit_crtc_set_mode(dev, state); 1574 1535 } 1575 1536 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables); 1576 1537 1577 - static void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1578 - struct drm_atomic_state *state) 1538 + /** 1539 + * drm_atomic_helper_commit_writebacks - issue writebacks 1540 + * @dev: DRM device 1541 + * @state: atomic state object being committed 1542 + * 1543 + * This loops over the connectors, checks if the new state requires 1544 + * a writeback job to be issued and in that case issues an atomic 1545 + * commit on each connector. 1546 + */ 1547 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1548 + struct drm_atomic_state *state) 1579 1549 { 1580 1550 struct drm_connector *connector; 1581 1551 struct drm_connector_state *new_conn_state; ··· 1603 1555 } 1604 1556 } 1605 1557 } 1558 + EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks); 1606 1559 1607 - static void 1608 - encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1560 + /** 1561 + * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges 1562 + * @dev: DRM device 1563 + * @state: atomic state object being committed 1564 + * 1565 + * This loops over the connectors and if the CRTC needs it, pre-enables 1566 + * the entire bridge chain. 1567 + */ 1568 + void 1569 + drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1609 1570 { 1610 1571 struct drm_connector *connector; 1611 1572 struct drm_connector_state *new_conn_state; ··· 1645 1588 drm_bridge_put(bridge); 1646 1589 } 1647 1590 } 1591 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable); 1648 1592 1649 - static void 1650 - crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1593 + /** 1594 + * drm_atomic_helper_commit_crtc_enable - enables the CRTCs 1595 + * @dev: DRM device 1596 + * @state: atomic state object being committed 1597 + * 1598 + * This loops over CRTCs in the new state, and if the CRTC needs 1599 + * it, enables it. 1600 + */ 1601 + void 1602 + drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1651 1603 { 1652 1604 struct drm_crtc *crtc; 1653 1605 struct drm_crtc_state *old_crtc_state; ··· 1685 1619 } 1686 1620 } 1687 1621 } 1622 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable); 1688 1623 1689 - static void 1690 - encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1624 + /** 1625 + * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges 1626 + * @dev: DRM device 1627 + * @state: atomic state object being committed 1628 + * 1629 + * This loops over all connectors in the new state, and if the CRTC needs 1630 + * it, enables the entire bridge chain.
1631 + */ 1632 + void 1633 + drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1691 1634 { 1692 1635 struct drm_connector *connector; 1693 1636 struct drm_connector_state *new_conn_state; ··· 1739 1664 drm_bridge_put(bridge); 1740 1665 } 1741 1666 } 1667 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable); 1742 1668 1743 1669 /** 1744 1670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs ··· 1758 1682 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 1759 1683 struct drm_atomic_state *state) 1760 1684 { 1761 - encoder_bridge_pre_enable(dev, state); 1685 + drm_atomic_helper_commit_crtc_enable(dev, state); 1762 1686 1763 - crtc_enable(dev, state); 1687 + drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state); 1764 1688 1765 - encoder_bridge_enable(dev, state); 1689 + drm_atomic_helper_commit_encoder_bridge_enable(dev, state); 1766 1690 1767 1691 drm_atomic_helper_commit_writebacks(dev, state); 1768 1692 }
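With the individual phase helpers exported above, a driver whose bridges must come up before the CRTC no longer has to live with drm_atomic_helper_commit_modeset_enables()'s fixed ordering. A sketch of such an open-coded commit tail built from the new exports (compare the tidss hunk further below, which is the in-tree user; this fragment is illustrative, not a drop-in implementation):

    #include <drm/drm_atomic_helper.h>

    static void example_commit_tail(struct drm_atomic_state *state)
    {
            struct drm_device *dev = state->dev;

            drm_atomic_helper_commit_modeset_disables(dev, state);
            drm_atomic_helper_commit_planes(dev, state,
                                            DRM_PLANE_COMMIT_ACTIVE_ONLY);

            /* bridges first, then the CRTC, then the rest of the chain */
            drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);
            drm_atomic_helper_commit_crtc_enable(dev, state);
            drm_atomic_helper_commit_encoder_bridge_enable(dev, state);
            drm_atomic_helper_commit_writebacks(dev, state);

            drm_atomic_helper_commit_hw_done(state);
            drm_atomic_helper_wait_for_flip_done(dev, state);
            drm_atomic_helper_cleanup_planes(dev, state);
    }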
+10
drivers/gpu/drm/drm_fb_helper.c
··· 366 366 { 367 367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work); 368 368 369 + if (helper->info->state != FBINFO_STATE_RUNNING) 370 + return; 371 + 369 372 drm_fb_helper_fb_dirty(helper); 370 373 } 371 374 ··· 734 731 if (suspend) { 735 732 if (fb_helper->info->state != FBINFO_STATE_RUNNING) 736 733 return; 734 + 735 + /* 736 + * Cancel pending damage work. During GPU reset, VBlank 737 + * interrupts are disabled and drm_fb_helper_fb_dirty() 738 + * would wait for VBlank timeout otherwise. 739 + */ 740 + cancel_work_sync(&fb_helper->damage_work); 737 741 738 742 console_lock(); 739 743
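The fb-helper fix is an instance of a general suspend rule: deferred work that touches the device has to be cancelled (or flushed) before the device stops servicing it, otherwise the worker can run mid-suspend and block, here on a vblank wait that can no longer complete; the worker also rechecks the fbdev state so late-queued work becomes a no-op. A kernel-style sketch of the shape, with an illustrative struct rather than the real drm_fb_helper:

    #include <linux/workqueue.h>

    struct helper_sketch {
            struct work_struct damage_work; /* worker may wait for vblank */
    };

    static void helper_suspend(struct helper_sketch *h)
    {
            /* run-to-completion barrier: afterwards no worker touches HW */
            cancel_work_sync(&h->damage_work);

            /* ... now safe to power down or reset the display pipe ... */
    }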
+1 -1
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 1692 1692 { 1693 1693 struct hdmi_context *hdata = arg; 1694 1694 1695 - mod_delayed_work(system_wq, &hdata->hotplug_work, 1695 + mod_delayed_work(system_percpu_wq, &hdata->hotplug_work, 1696 1696 msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); 1697 1697 1698 1698 return IRQ_HANDLED;
-6
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1002 1002 return PTR_ERR(dsi->next_bridge); 1003 1003 } 1004 1004 1005 - /* 1006 - * set flag to request the DSI host bridge be pre-enabled before device bridge 1007 - * in the chain, so the DSI host is ready when the device bridge is pre-enabled 1008 - */ 1009 - dsi->next_bridge->pre_enable_prev_first = true; 1010 - 1011 1005 drm_bridge_add(&dsi->bridge); 1012 1006 1013 1007 ret = component_add(host->dev, &mtk_dsi_component_ops);
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ad102.c
··· 30 30 31 31 .booter.ctor = ga102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 337 337 } 338 338 339 339 int 340 - nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 340 + nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp) 341 341 { 342 342 return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb", 343 343 NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB); 344 - } 345 - 346 - void 347 - nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 348 - { 349 - nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 350 344 } 351 345 352 346 int
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga100.c
··· 47 47 48 48 .booter.ctor = tu102_gsp_booter_ctor, 49 49 50 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 51 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 52 + 50 53 .dtor = r535_gsp_dtor, 51 54 .oneinit = tu102_gsp_oneinit, 52 55 .init = tu102_gsp_init,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga102.c
··· 158 158 159 159 .booter.ctor = ga102_gsp_booter_ctor, 160 160 161 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 162 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 163 + 161 164 .dtor = r535_gsp_dtor, 162 165 .oneinit = tu102_gsp_oneinit, 163 166 .init = tu102_gsp_init,
+21 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
··· 7 7 8 8 int nvkm_gsp_fwsec_frts(struct nvkm_gsp *); 9 9 10 - int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 11 10 int nvkm_gsp_fwsec_sb(struct nvkm_gsp *); 12 - void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 11 + int nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp); 13 12 14 13 struct nvkm_gsp_fwif { 15 14 int version; ··· 51 52 struct nvkm_falcon *, struct nvkm_falcon_fw *); 52 53 } booter; 53 54 55 + struct { 56 + int (*ctor)(struct nvkm_gsp *); 57 + void (*dtor)(struct nvkm_gsp *); 58 + } fwsec_sb; 59 + 54 60 void (*dtor)(struct nvkm_gsp *); 55 61 int (*oneinit)(struct nvkm_gsp *); 56 62 int (*init)(struct nvkm_gsp *); ··· 71 67 extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec; 72 68 int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *, 73 69 struct nvkm_falcon *, struct nvkm_falcon_fw *); 70 + int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 71 + void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 74 72 int tu102_gsp_oneinit(struct nvkm_gsp *); 75 73 int tu102_gsp_init(struct nvkm_gsp *); 76 74 int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); ··· 96 90 97 91 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 98 92 struct nvkm_gsp **); 93 + 94 + static inline int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 95 + { 96 + if (gsp->func->fwsec_sb.ctor) 97 + return gsp->func->fwsec_sb.ctor(gsp); 98 + return 0; 99 + } 100 + 101 + static inline void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 102 + { 103 + if (gsp->func->fwsec_sb.dtor) 104 + gsp->func->fwsec_sb.dtor(gsp); 105 + } 99 106 100 107 extern const struct nvkm_gsp_func gv100_gsp; 101 108 #endif
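The priv.h hunk wraps the new per-chip fwsec_sb hooks in NULL-tolerant inline helpers, so implementations that predate the feature simply leave the function pointers unset. The same optional-hook pattern as a standalone C sketch with illustrative names:

    #include <stdio.h>

    struct ops_sketch {
            int (*ctor)(void *);
            void (*dtor)(void *);
    };

    static int ctor_call(const struct ops_sketch *ops, void *obj)
    {
            return ops->ctor ? ops->ctor(obj) : 0;  /* unset hook: no-op */
    }

    static void dtor_call(const struct ops_sketch *ops, void *obj)
    {
            if (ops->dtor)
                    ops->dtor(obj);
    }

    static int real_ctor(void *obj) { (void)obj; return 0; }

    int main(void)
    {
            struct ops_sketch with = { .ctor = real_ctor }, without = { 0 };

            printf("%d %d\n", ctor_call(&with, NULL), ctor_call(&without, NULL));
            dtor_call(&without, NULL);              /* safe no-op */
            return 0;
    }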
+15
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
··· 30 30 #include <nvfw/fw.h> 31 31 #include <nvfw/hs.h> 32 32 33 + int 34 + tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 35 + { 36 + return nvkm_gsp_fwsec_sb_init(gsp); 37 + } 38 + 39 + void 40 + tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 41 + { 42 + nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 43 + } 44 + 33 45 static int 34 46 tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1) 35 47 { ··· 381 369 .sig_section = ".fwsignature_tu10x", 382 370 383 371 .booter.ctor = tu102_gsp_booter_ctor, 372 + 373 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 374 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 384 375 385 376 .dtor = r535_gsp_dtor, 386 377 .oneinit = tu102_gsp_oneinit,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu116.c
··· 30 30 31 31 .booter.ctor = tu102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -1
drivers/gpu/drm/pl111/pl111_drv.c
··· 295 295 variant->name, priv); 296 296 if (ret != 0) { 297 297 dev_err(dev, "%s failed irq %d\n", __func__, ret); 298 - return ret; 298 + goto dev_put; 299 299 } 300 300 301 301 ret = pl111_modeset_init(drm);
+1 -1
drivers/gpu/drm/radeon/pptable.h
··· 450 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 451 UCHAR ucEntrySize; 452 452 453 - UCHAR clockInfo[] __counted_by(ucNumEntries); 453 + UCHAR clockInfo[] /*__counted_by(ucNumEntries)*/; 454 454 }ClockInfoArray; 455 455 456 456 typedef struct _NonClockInfoArray{
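Presumably the annotation had to go because clockInfo[] holds ucNumEntries records of ucEntrySize bytes each, so the byte count of the flexible array is ucNumEntries * ucEntrySize rather than ucNumEntries, and __counted_by(ucNumEntries) would teach the bounds checker the wrong size. A standalone sketch of the stride-based indexing such a layout requires (illustrative struct, not the real ATOM definitions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct clock_info_array_sketch {
            unsigned char num_entries;
            unsigned char entry_size;       /* bytes per record */
            unsigned char clock_info[];     /* num_entries * entry_size bytes */
    };

    static const unsigned char *entry(const struct clock_info_array_sketch *a,
                                      unsigned int i)
    {
            return &a->clock_info[i * a->entry_size];
    }

    int main(void)
    {
            struct clock_info_array_sketch *a = malloc(sizeof(*a) + 2 * 4);

            a->num_entries = 2;
            a->entry_size = 4;
            memset(a->clock_info, 0xab, 2 * 4);
            printf("entry 1 starts at byte %td\n", entry(a, 1) - a->clock_info);
            free(a);
            return 0;                       /* prints: entry 1 starts at byte 4 */
    }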
+27 -3
drivers/gpu/drm/tidss/tidss_kms.c
··· 26 26 27 27 tidss_runtime_get(tidss); 28 28 29 - drm_atomic_helper_commit_modeset_disables(ddev, old_state); 30 - drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY); 31 - drm_atomic_helper_commit_modeset_enables(ddev, old_state); 29 + /* 30 + * TI's OLDI and DSI encoders need to be set up before the crtc is 31 + * enabled. Thus drm_atomic_helper_commit_modeset_enables() and 32 + * drm_atomic_helper_commit_modeset_disables() cannot be used here, as 33 + * they enable the crtc before bridges' pre-enable, and disable the crtc 34 + * after bridges' post-disable. 35 + * 36 + * Open code the functions here and first call the bridges' pre-enables, 37 + * then crtc enable, then bridges' enable (and vice versa for 38 + * disable). 39 + */ 40 + 41 + drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state); 42 + drm_atomic_helper_commit_crtc_disable(ddev, old_state); 43 + drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state); 44 + 45 + drm_atomic_helper_update_legacy_modeset_state(ddev, old_state); 46 + drm_atomic_helper_calc_timestamping_constants(old_state); 47 + drm_atomic_helper_commit_crtc_set_mode(ddev, old_state); 48 + 49 + drm_atomic_helper_commit_planes(ddev, old_state, 50 + DRM_PLANE_COMMIT_ACTIVE_ONLY); 51 + 52 + drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state); 53 + drm_atomic_helper_commit_crtc_enable(ddev, old_state); 54 + drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state); 55 + drm_atomic_helper_commit_writebacks(ddev, old_state); 32 56 33 57 drm_atomic_helper_commit_hw_done(old_state); 34 58 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+1 -1
drivers/gpu/nova-core/Kconfig
··· 3 3 depends on 64BIT 4 4 depends on PCI 5 5 depends on RUST 6 - depends on RUST_FW_LOADER_ABSTRACTIONS 6 + select RUST_FW_LOADER_ABSTRACTIONS 7 7 select AUXILIARY_BUS 8 8 default n 9 9 help
+8 -6
drivers/gpu/nova-core/gsp/cmdq.rs
··· 588 588 header.length(), 589 589 ); 590 590 591 + let payload_length = header.payload_length(); 592 + 591 593 // Check that the driver read area is large enough for the message. 592 - if slice_1.len() + slice_2.len() < header.length() { 594 + if slice_1.len() + slice_2.len() < payload_length { 593 595 return Err(EIO); 594 596 } 595 597 596 598 // Cut the message slices down to the actual length of the message. 597 - let (slice_1, slice_2) = if slice_1.len() > header.length() { 598 - // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`. 599 - (slice_1.split_at(header.length()).0, &slice_2[0..0]) 599 + let (slice_1, slice_2) = if slice_1.len() > payload_length { 600 + // PANIC: we checked above that `slice_1` is at least as long as `payload_length`. 601 + (slice_1.split_at(payload_length).0, &slice_2[0..0]) 600 602 } else { 601 603 ( 602 604 slice_1, 603 605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as 604 - // large as `msg_header.length()`. 605 - slice_2.split_at(header.length() - slice_1.len()).0, 606 + // large as `payload_length`. 607 + slice_2.split_at(payload_length - slice_1.len()).0, 606 608 ) 607 609 }; 608 610
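The cmdq.rs fix matters because a GSP message can wrap around the ring, arriving as two slices, and both the bounds check and the trim have to use the payload size (the headers are consumed separately). The same trim logic as a standalone C sketch:

    #include <stddef.h>
    #include <stdio.h>

    struct span { const unsigned char *p; size_t len; };

    /* trim two wrap-around slices down to exactly `payload` bytes */
    static int trim_to_payload(struct span *s1, struct span *s2, size_t payload)
    {
            if (s1->len + s2->len < payload)
                    return -1;              /* read area too small: EIO */
            if (s1->len >= payload) {
                    s1->len = payload;
                    s2->len = 0;
            } else {
                    s2->len = payload - s1->len;
            }
            return 0;
    }

    int main(void)
    {
            unsigned char buf[16] = { 0 };
            struct span a = { buf, 10 }, b = { buf + 10, 6 };

            if (!trim_to_payload(&a, &b, 12))
                    printf("slices: %zu + %zu\n", a.len, b.len); /* 10 + 2 */
            return 0;
    }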
+38 -40
drivers/gpu/nova-core/gsp/fw.rs
··· 141 141 // are valid. 142 142 unsafe impl FromBytes for GspFwWprMeta {} 143 143 144 - type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1; 145 - type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 144 + type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1; 145 + type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 146 146 147 147 impl GspFwWprMeta { 148 148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the ··· 150 150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self { 151 151 Self(bindings::GspFwWprMeta { 152 152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified. 153 - magic: r570_144::GSP_FW_WPR_META_MAGIC as u64, 154 - revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION), 153 + magic: bindings::GSP_FW_WPR_META_MAGIC as u64, 154 + revision: u64::from(bindings::GSP_FW_WPR_META_REVISION), 155 155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(), 156 156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size), 157 157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(), ··· 315 315 #[repr(u32)] 316 316 pub(crate) enum SeqBufOpcode { 317 317 // Core operation opcodes 318 - CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 - CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 - CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 - CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 318 + CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 + CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 + CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 + CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 322 322 323 323 // Delay opcode 324 - DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 324 + DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 325 325 326 326 // Register operation opcodes 327 - RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 - RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 - RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 - RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 327 + RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 + RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 + RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 + RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 331 331 } 332 332 333 333 impl fmt::Display for SeqBufOpcode { ··· 351 351 352 352 fn try_from(value: u32) -> Result<SeqBufOpcode> { 353 353 match value { 354 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 354 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 355 355 Ok(SeqBufOpcode::CoreReset) 356 356 } 357 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 357 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 358 358 Ok(SeqBufOpcode::CoreResume) 359 359 } 360 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 360 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 361 361 Ok(SeqBufOpcode::CoreStart) 362 362 } 363 - 
r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 363 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 364 364 Ok(SeqBufOpcode::CoreWaitForHalt) 365 365 } 366 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 366 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 368 368 Ok(SeqBufOpcode::RegModify) 369 369 } 370 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 370 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 373 373 _ => Err(EINVAL), 374 374 } 375 375 } ··· 385 385 /// Wrapper for GSP sequencer register write payload. 386 386 #[repr(transparent)] 387 387 #[derive(Copy, Clone)] 388 - pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 388 + pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 389 389 390 390 impl RegWritePayload { 391 391 /// Returns the register address. ··· 408 408 /// Wrapper for GSP sequencer register modify payload. 409 409 #[repr(transparent)] 410 410 #[derive(Copy, Clone)] 411 - pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 411 + pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 412 412 413 413 impl RegModifyPayload { 414 414 /// Returns the register address. ··· 436 436 /// Wrapper for GSP sequencer register poll payload. 437 437 #[repr(transparent)] 438 438 #[derive(Copy, Clone)] 439 - pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 439 + pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 440 440 441 441 impl RegPollPayload { 442 442 /// Returns the register address. ··· 469 469 /// Wrapper for GSP sequencer delay payload. 470 470 #[repr(transparent)] 471 471 #[derive(Copy, Clone)] 472 - pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 472 + pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 473 473 474 474 impl DelayUsPayload { 475 475 /// Returns the delay value in microseconds. ··· 487 487 /// Wrapper for GSP sequencer register store payload. 488 488 #[repr(transparent)] 489 489 #[derive(Copy, Clone)] 490 - pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 490 + pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 491 491 492 492 impl RegStorePayload { 493 493 /// Returns the register address. ··· 510 510 511 511 /// Wrapper for GSP sequencer buffer command. 512 512 #[repr(transparent)] 513 - pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD); 513 + pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD); 514 514 515 515 impl SequencerBufferCmd { 516 516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid. ··· 612 612 613 613 /// Wrapper for GSP run CPU sequencer RPC. 
614 614 #[repr(transparent)] 615 - pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00); 615 + pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00); 616 616 617 617 impl RunCpuSequencer { 618 618 /// Returns the command index. ··· 797 797 } 798 798 } 799 799 800 - // SAFETY: We can't derive the Zeroable trait for this binding because the 801 - // procedural macro doesn't support the syntax used by bindgen to create the 802 - // __IncompleteArrayField types. So instead we implement it here, which is safe 803 - // because these are explicitly padded structures only containing types for 804 - // which any bit pattern, including all zeros, is valid. 805 - unsafe impl Zeroable for bindings::rpc_message_header_v {} 806 - 807 800 /// GSP Message Element. 808 801 /// 809 802 /// This is essentially a message header expected to be followed by the message data. ··· 846 853 self.inner.checkSum = checksum; 847 854 } 848 855 849 - /// Returns the total length of the message. 856 + /// Returns the length of the message's payload. 857 + pub(crate) fn payload_length(&self) -> usize { 858 + // `rpc.length` includes the length of the RPC message header. 859 + num::u32_as_usize(self.inner.rpc.length) 860 + .saturating_sub(size_of::<bindings::rpc_message_header_v>()) 861 + } 862 + 863 + /// Returns the total length of the message, message and RPC headers included. 850 864 pub(crate) fn length(&self) -> usize { 851 - // `rpc.length` includes the length of the GspRpcHeader but not the message header. 852 - size_of::<Self>() - size_of::<bindings::rpc_message_header_v>() 853 - + num::u32_as_usize(self.inner.rpc.length) 865 + size_of::<Self>() + self.payload_length() 854 866 } 855 867 856 868 // Returns the sequence number of the message.
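The corrected length()/payload_length() split encodes the convention spelled out in the comment: the RPC header's length field covers the RPC header itself plus the payload, so the payload size is that value minus the header size, saturating at zero for malformed input. As a standalone C sketch with a stand-in header type (not the real bindgen struct):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct rpc_header_sketch { uint32_t function; uint32_t length; };

    static size_t payload_length(const struct rpc_header_sketch *h)
    {
            size_t hdr = sizeof(*h);

            return h->length > hdr ? h->length - hdr : 0;   /* saturating */
    }

    int main(void)
    {
            struct rpc_header_sketch h = { .function = 1, .length = 40 };

            printf("payload: %zu bytes\n", payload_length(&h));     /* 32 */
            return 0;
    }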
+7 -4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
··· 24 24 unreachable_pub, 25 25 unsafe_op_in_unsafe_fn 26 26 )] 27 - use kernel::{ 28 - ffi, 29 - prelude::Zeroable, // 30 - }; 27 + use kernel::ffi; 28 + use pin_init::MaybeZeroable; 29 + 31 30 include!("r570_144/bindings.rs"); 31 + 32 + // SAFETY: This type has a size of zero, so its inclusion into another type should not affect that 33 + // type's ability to implement `Zeroable`. 34 + unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
+59 -46
drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
··· 320 320 pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131; 321 321 pub type _bindgen_ty_3 = ffi::c_uint; 322 322 #[repr(C)] 323 - #[derive(Debug, Default, Copy, Clone)] 323 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 324 324 pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS { 325 325 pub totalVFs: u32_, 326 326 pub firstVfOffset: u32_, 327 327 pub vfFeatureMask: u32_, 328 + pub __bindgen_padding_0: [u8; 4usize], 328 329 pub FirstVFBar0Address: u64_, 329 330 pub FirstVFBar1Address: u64_, 330 331 pub FirstVFBar2Address: u64_, ··· 341 340 pub bClientRmAllocatedCtxBuffer: u8_, 342 341 pub bNonPowerOf2ChannelCountSupported: u8_, 343 342 pub bVfResizableBAR1Supported: u8_, 343 + pub __bindgen_padding_1: [u8; 7usize], 344 344 } 345 345 #[repr(C)] 346 - #[derive(Debug, Default, Copy, Clone)] 346 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 347 347 pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS { 348 348 pub BoardID: u32_, 349 349 pub chipSKU: [ffi::c_char; 9usize], 350 350 pub chipSKUMod: [ffi::c_char; 5usize], 351 + pub __bindgen_padding_0: [u8; 2usize], 351 352 pub skuConfigVersion: u32_, 352 353 pub project: [ffi::c_char; 5usize], 353 354 pub projectSKU: [ffi::c_char; 5usize], 354 355 pub CDP: [ffi::c_char; 6usize], 355 356 pub projectSKUMod: [ffi::c_char; 2usize], 357 + pub __bindgen_padding_1: [u8; 2usize], 356 358 pub businessCycle: u32_, 357 359 } 358 360 pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize]; 359 361 #[repr(C)] 360 - #[derive(Debug, Default, Copy, Clone)] 362 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 361 363 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO { 362 364 pub base: u64_, 363 365 pub limit: u64_, ··· 372 368 pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG, 373 369 } 374 370 #[repr(C)] 375 - #[derive(Debug, Default, Copy, Clone)] 371 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 376 372 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS { 377 373 pub numFBRegions: u32_, 374 + pub __bindgen_padding_0: [u8; 4usize], 378 375 pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize], 379 376 } 380 377 #[repr(C)] 381 - #[derive(Debug, Copy, Clone)] 378 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 382 379 pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS { 383 380 pub index: u32_, 384 381 pub flags: u32_, ··· 396 391 } 397 392 } 398 393 #[repr(C)] 399 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 394 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 400 395 pub struct DOD_METHOD_DATA { 401 396 pub status: u32_, 402 397 pub acpiIdListLen: u32_, 403 398 pub acpiIdList: [u32_; 16usize], 404 399 } 405 400 #[repr(C)] 406 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 401 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 407 402 pub struct JT_METHOD_DATA { 408 403 pub status: u32_, 409 404 pub jtCaps: u32_, ··· 412 407 pub __bindgen_padding_0: u8, 413 408 } 414 409 #[repr(C)] 415 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 410 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 416 411 pub struct MUX_METHOD_DATA_ELEMENT { 417 412 pub acpiId: u32_, 418 413 pub mode: u32_, 419 414 pub status: u32_, 420 415 } 421 416 #[repr(C)] 422 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 417 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 423 418 pub struct MUX_METHOD_DATA { 424 419 pub tableLen: u32_, 425 420 pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize], ··· 427 422 pub acpiIdMuxStateTable: 
[MUX_METHOD_DATA_ELEMENT; 16usize], 428 423 } 429 424 #[repr(C)] 430 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 425 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 431 426 pub struct CAPS_METHOD_DATA { 432 427 pub status: u32_, 433 428 pub optimusCaps: u32_, 434 429 } 435 430 #[repr(C)] 436 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 431 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 437 432 pub struct ACPI_METHOD_DATA { 438 433 pub bValid: u8_, 439 434 pub __bindgen_padding_0: [u8; 3usize], ··· 443 438 pub capsMethodData: CAPS_METHOD_DATA, 444 439 } 445 440 #[repr(C)] 446 - #[derive(Debug, Default, Copy, Clone)] 441 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 447 442 pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS { 448 443 pub headIndex: u32_, 449 444 pub maxHResolution: u32_, 450 445 pub maxVResolution: u32_, 451 446 } 452 447 #[repr(C)] 453 - #[derive(Debug, Default, Copy, Clone)] 448 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 454 449 pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS { 455 450 pub numHeads: u32_, 456 451 pub maxNumHeads: u32_, 457 452 } 458 453 #[repr(C)] 459 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 454 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 460 455 pub struct BUSINFO { 461 456 pub deviceID: u16_, 462 457 pub vendorID: u16_, ··· 466 461 pub __bindgen_padding_0: u8, 467 462 } 468 463 #[repr(C)] 469 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 464 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 470 465 pub struct GSP_VF_INFO { 471 466 pub totalVFs: u32_, 472 467 pub firstVFOffset: u32_, ··· 479 474 pub __bindgen_padding_0: [u8; 5usize], 480 475 } 481 476 #[repr(C)] 482 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 477 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 483 478 pub struct GSP_PCIE_CONFIG_REG { 484 479 pub linkCap: u32_, 485 480 } 486 481 #[repr(C)] 487 - #[derive(Debug, Default, Copy, Clone)] 482 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 488 483 pub struct EcidManufacturingInfo { 489 484 pub ecidLow: u32_, 490 485 pub ecidHigh: u32_, 491 486 pub ecidExtended: u32_, 492 487 } 493 488 #[repr(C)] 494 - #[derive(Debug, Default, Copy, Clone)] 489 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 495 490 pub struct FW_WPR_LAYOUT_OFFSET { 496 491 pub nonWprHeapOffset: u64_, 497 492 pub frtsOffset: u64_, 498 493 } 499 494 #[repr(C)] 500 - #[derive(Debug, Copy, Clone)] 495 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 501 496 pub struct GspStaticConfigInfo_t { 502 497 pub grCapsBits: [u8_; 23usize], 498 + pub __bindgen_padding_0: u8, 503 499 pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS, 504 500 pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, 501 + pub __bindgen_padding_1: [u8; 4usize], 505 502 pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS, 506 503 pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS, 507 504 pub sriovMaxGfid: u32_, 508 505 pub engineCaps: [u32_; 3usize], 509 506 pub poisonFuseEnabled: u8_, 507 + pub __bindgen_padding_2: [u8; 7usize], 510 508 pub fb_length: u64_, 511 509 pub fbio_mask: u64_, 512 510 pub fb_bus_width: u32_, ··· 535 527 pub bIsMigSupported: u8_, 536 528 pub RTD3GC6TotalBoardPower: u16_, 537 529 pub RTD3GC6PerstDelay: u16_, 530 + pub __bindgen_padding_3: [u8; 2usize], 538 531 pub bar1PdeBase: u64_, 539 532 pub bar2PdeBase: u64_, 540 533 pub bVbiosValid: u8_, 534 + pub __bindgen_padding_4: [u8; 3usize], 541 535 pub vbiosSubVendor: u32_, 542 536 pub vbiosSubDevice: u32_, 543 537 
pub bPageRetirementSupported: u8_, 544 538 pub bSplitVasBetweenServerClientRm: u8_, 545 539 pub bClRootportNeedsNosnoopWAR: u8_, 540 + pub __bindgen_padding_5: u8, 546 541 pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS, 547 542 pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS, 543 + pub __bindgen_padding_6: [u8; 4usize], 548 544 pub displaylessMaxPixels: u64_, 549 545 pub hInternalClient: u32_, 550 546 pub hInternalDevice: u32_, ··· 570 558 } 571 559 } 572 560 #[repr(C)] 573 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 561 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 574 562 pub struct GspSystemInfo { 575 563 pub gpuPhysAddr: u64_, 576 564 pub gpuPhysFbAddr: u64_, ··· 627 615 pub hostPageSize: u64_, 628 616 } 629 617 #[repr(C)] 630 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 618 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 631 619 pub struct MESSAGE_QUEUE_INIT_ARGUMENTS { 632 620 pub sharedMemPhysAddr: u64_, 633 621 pub pageTableEntryCount: u32_, ··· 636 624 pub statQueueOffset: u64_, 637 625 } 638 626 #[repr(C)] 639 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 627 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 640 628 pub struct GSP_SR_INIT_ARGUMENTS { 641 629 pub oldLevel: u32_, 642 630 pub flags: u32_, ··· 644 632 pub __bindgen_padding_0: [u8; 3usize], 645 633 } 646 634 #[repr(C)] 647 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 635 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 648 636 pub struct GSP_ARGUMENTS_CACHED { 649 637 pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS, 650 638 pub srInitArguments: GSP_SR_INIT_ARGUMENTS, ··· 654 642 pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1, 655 643 } 656 644 #[repr(C)] 657 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 645 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 658 646 pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 { 659 647 pub pa: u64_, 660 648 pub size: u64_, 661 649 } 662 650 #[repr(C)] 663 - #[derive(Copy, Clone, Zeroable)] 651 + #[derive(Copy, Clone, MaybeZeroable)] 664 652 pub union rpc_message_rpc_union_field_v03_00 { 665 653 pub spare: u32_, 666 654 pub cpuRmGfid: u32_, ··· 676 664 } 677 665 pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00; 678 666 #[repr(C)] 667 + #[derive(MaybeZeroable)] 679 668 pub struct rpc_message_header_v03_00 { 680 669 pub header_version: u32_, 681 670 pub signature: u32_, ··· 699 686 } 700 687 pub type rpc_message_header_v = rpc_message_header_v03_00; 701 688 #[repr(C)] 702 - #[derive(Copy, Clone, Zeroable)] 689 + #[derive(Copy, Clone, MaybeZeroable)] 703 690 pub struct GspFwWprMeta { 704 691 pub magic: u64_, 705 692 pub revision: u64_, ··· 734 721 pub verified: u64_, 735 722 } 736 723 #[repr(C)] 737 - #[derive(Copy, Clone, Zeroable)] 724 + #[derive(Copy, Clone, MaybeZeroable)] 738 725 pub union GspFwWprMeta__bindgen_ty_1 { 739 726 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1, 740 727 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2, 741 728 } 742 729 #[repr(C)] 743 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 730 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 744 731 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 { 745 732 pub sysmemAddrOfSignature: u64_, 746 733 pub sizeOfSignature: u64_, 747 734 } 748 735 #[repr(C)] 749 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 736 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 750 737 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 
{ 751 738 pub gspFwHeapFreeListWprOffset: u32_, 752 739 pub unused0: u32_, ··· 762 749 } 763 750 } 764 751 #[repr(C)] 765 - #[derive(Copy, Clone, Zeroable)] 752 + #[derive(Copy, Clone, MaybeZeroable)] 766 753 pub union GspFwWprMeta__bindgen_ty_2 { 767 754 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1, 768 755 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2, 769 756 } 770 757 #[repr(C)] 771 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 758 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 772 759 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 { 773 760 pub partitionRpcAddr: u64_, 774 761 pub partitionRpcRequestOffset: u16_, ··· 780 767 pub lsUcodeVersion: u32_, 781 768 } 782 769 #[repr(C)] 783 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 770 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 784 771 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 { 785 772 pub partitionRpcPadding: [u32_; 4usize], 786 773 pub sysmemAddrOfCrashReportQueue: u64_, ··· 815 802 pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2; 816 803 pub type LibosMemoryRegionLoc = ffi::c_uint; 817 804 #[repr(C)] 818 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 805 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 819 806 pub struct LibosMemoryRegionInitArgument { 820 807 pub id8: LibosAddress, 821 808 pub pa: LibosAddress, ··· 825 812 pub __bindgen_padding_0: [u8; 6usize], 826 813 } 827 814 #[repr(C)] 828 - #[derive(Debug, Default, Copy, Clone)] 815 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 829 816 pub struct PACKED_REGISTRY_ENTRY { 830 817 pub nameOffset: u32_, 831 818 pub type_: u8_, ··· 834 821 pub length: u32_, 835 822 } 836 823 #[repr(C)] 837 - #[derive(Debug, Default)] 824 + #[derive(Debug, Default, MaybeZeroable)] 838 825 pub struct PACKED_REGISTRY_TABLE { 839 826 pub size: u32_, 840 827 pub numEntries: u32_, 841 828 pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>, 842 829 } 843 830 #[repr(C)] 844 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 831 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 845 832 pub struct msgqTxHeader { 846 833 pub version: u32_, 847 834 pub size: u32_, ··· 853 840 pub entryOff: u32_, 854 841 } 855 842 #[repr(C)] 856 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 843 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 857 844 pub struct msgqRxHeader { 858 845 pub readPtr: u32_, 859 846 } 860 847 #[repr(C)] 861 848 #[repr(align(8))] 862 - #[derive(Zeroable)] 849 + #[derive(MaybeZeroable)] 863 850 pub struct GSP_MSG_QUEUE_ELEMENT { 864 851 pub authTagBuffer: [u8_; 16usize], 865 852 pub aadBuffer: [u8_; 16usize], ··· 879 866 } 880 867 } 881 868 #[repr(C)] 882 - #[derive(Debug, Default)] 869 + #[derive(Debug, Default, MaybeZeroable)] 883 870 pub struct rpc_run_cpu_sequencer_v17_00 { 884 871 pub bufferSizeDWord: u32_, 885 872 pub cmdIndex: u32_, ··· 897 884 pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8; 898 885 pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint; 899 886 #[repr(C)] 900 - #[derive(Debug, Default, Copy, Clone)] 887 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 901 888 pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE { 902 889 pub addr: u32_, 903 890 pub val: u32_, 904 891 } 905 892 #[repr(C)] 906 - #[derive(Debug, Default, Copy, Clone)] 893 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 907 894 pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY { 908 895 pub addr: u32_, 909 896 pub mask: u32_, 910 897 pub 
val: u32_, 911 898 } 912 899 #[repr(C)] 913 - #[derive(Debug, Default, Copy, Clone)] 900 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 914 901 pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL { 915 902 pub addr: u32_, 916 903 pub mask: u32_, ··· 919 906 pub error: u32_, 920 907 } 921 908 #[repr(C)] 922 - #[derive(Debug, Default, Copy, Clone)] 909 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 923 910 pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US { 924 911 pub val: u32_, 925 912 } 926 913 #[repr(C)] 927 - #[derive(Debug, Default, Copy, Clone)] 914 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 928 915 pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE { 929 916 pub addr: u32_, 930 917 pub index: u32_, 931 918 } 932 919 #[repr(C)] 933 - #[derive(Copy, Clone)] 920 + #[derive(Copy, Clone, MaybeZeroable)] 934 921 pub struct GSP_SEQUENCER_BUFFER_CMD { 935 922 pub opCode: GSP_SEQ_BUF_OPCODE, 936 923 pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1, 937 924 } 938 925 #[repr(C)] 939 - #[derive(Copy, Clone)] 926 + #[derive(Copy, Clone, MaybeZeroable)] 940 927 pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 { 941 928 pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE, 942 929 pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,
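Regenerating the bindings with --explicit-padding is what makes deriving MaybeZeroable sound here: once compiler padding becomes named fields, initializing or zeroing the struct covers every byte, and no uninitialized stack data can leak into buffers shared with the GSP. A standalone C sketch of the underlying hazard:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct wire_msg {
            uint8_t  kind;
            uint8_t  pad[3];        /* named padding: initialized like any field */
            uint32_t value;
    };

    int main(void)
    {
            /* designated initializer zeroes pad[] along with the rest */
            struct wire_msg m = { .kind = 2, .value = 7 };
            unsigned char raw[sizeof(m)];

            memcpy(raw, &m, sizeof(m));
            for (size_t i = 0; i < sizeof(raw); i++)
                    printf("%02x ", raw[i]);    /* no garbage bytes */
            printf("\n");
            return 0;
    }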
-7
drivers/pci/vgaarb.c
··· 652 652 return true; 653 653 } 654 654 655 - /* 656 - * Vgadev has neither IO nor MEM enabled. If we haven't found any 657 - * other VGA devices, it is the best candidate so far. 658 - */ 659 - if (!boot_vga) 660 - return true; 661 - 662 655 return false; 663 656 } 664 657
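The deleted fallback is what could flag more than one GPU as the boot display: every device evaluated while no candidate was recorded won the comparison by default. After this change a device only displaces the current candidate for a positive reason, such as actually having I/O and memory decoding enabled. A hedged sketch of the resulting predicate shape, with illustrative fields rather than the real vgaarb state:

    #include <stdbool.h>
    #include <stdio.h>

    struct vga_dev_sketch { bool io_mem_enabled; };

    static bool is_better_boot_candidate(const struct vga_dev_sketch *dev)
    {
            if (dev->io_mem_enabled)
                    return true;
            /* formerly: "if (!boot_vga) return true;" -- removed above */
            return false;
    }

    int main(void)
    {
            struct vga_dev_sketch idle = { .io_mem_enabled = false };

            printf("%d\n", is_better_boot_candidate(&idle));    /* 0 */
            return 0;
    }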
+22
include/drm/drm_atomic_helper.h
··· 60 60 int drm_atomic_helper_check_planes(struct drm_device *dev, 61 61 struct drm_atomic_state *state); 62 62 int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 63 + void drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 64 + struct drm_atomic_state *state); 65 + void drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, 66 + struct drm_atomic_state *state); 67 + void drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, 68 + struct drm_atomic_state *state); 63 69 int drm_atomic_helper_check(struct drm_device *dev, 64 70 struct drm_atomic_state *state); 65 71 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state); ··· 95 89 void 96 90 drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state); 97 91 92 + void drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, 93 + struct drm_atomic_state *state); 94 + 98 95 void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev, 99 96 struct drm_atomic_state *state); 97 + 98 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 99 + struct drm_atomic_state *state); 100 + 101 + void drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, 102 + struct drm_atomic_state *state); 103 + 104 + void drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, 105 + struct drm_atomic_state *state); 106 + 107 + void drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, 108 + struct drm_atomic_state *state); 109 + 100 110 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 101 111 struct drm_atomic_state *old_state); 102 112
+66 -183
include/drm/drm_bridge.h
··· 176 176 /** 177 177 * @disable: 178 178 * 179 - * The @disable callback should disable the bridge. 179 + * This callback should disable the bridge. It is called right before 180 + * the preceding element in the display pipe is disabled. If the 181 + * preceding element is a bridge this means it's called before that 182 + * bridge's @disable vfunc. If the preceding element is a &drm_encoder 183 + * it's called right before the &drm_encoder_helper_funcs.disable, 184 + * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms 185 + * hook. 180 186 * 181 187 * The bridge can assume that the display pipe (i.e. clocks and timing 182 188 * signals) feeding it is still running when this callback is called. 183 - * 184 - * 185 - * If the preceding element is a &drm_bridge, then this is called before 186 - * that bridge is disabled via one of: 187 - * 188 - * - &drm_bridge_funcs.disable 189 - * - &drm_bridge_funcs.atomic_disable 190 - * 191 - * If the preceding element of the bridge is a display controller, then 192 - * this callback is called before the encoder is disabled via one of: 193 - * 194 - * - &drm_encoder_helper_funcs.atomic_disable 195 - * - &drm_encoder_helper_funcs.prepare 196 - * - &drm_encoder_helper_funcs.disable 197 - * - &drm_encoder_helper_funcs.dpms 198 - * 199 - * and the CRTC is disabled via one of: 200 - * 201 - * - &drm_crtc_helper_funcs.prepare 202 - * - &drm_crtc_helper_funcs.atomic_disable 203 - * - &drm_crtc_helper_funcs.disable 204 - * - &drm_crtc_helper_funcs.dpms. 205 189 * 206 190 * The @disable callback is optional. 207 191 * ··· 199 215 /** 200 216 * @post_disable: 201 217 * 218 + * This callback should disable the bridge. It is called right after the 219 + * preceding element in the display pipe is disabled. If the preceding 220 + * element is a bridge this means it's called after that bridge's 221 + * @post_disable function. If the preceding element is a &drm_encoder 222 + * it's called right after the encoder's 223 + * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare 224 + * or &drm_encoder_helper_funcs.dpms hook. 225 + * 202 226 * The bridge must assume that the display pipe (i.e. clocks and timing 203 - * signals) feeding this bridge is no longer running when the 204 - * @post_disable is called. 205 - * 206 - * This callback should perform all the actions required by the hardware 207 - * after it has stopped receiving signals from the preceding element. 208 - * 209 - * If the preceding element is a &drm_bridge, then this is called after 210 - * that bridge is post-disabled (unless marked otherwise by the 211 - * @pre_enable_prev_first flag) via one of: 212 - * 213 - * - &drm_bridge_funcs.post_disable 214 - * - &drm_bridge_funcs.atomic_post_disable 215 - * 216 - * If the preceding element of the bridge is a display controller, then 217 - * this callback is called after the encoder is disabled via one of: 218 - * 219 - * - &drm_encoder_helper_funcs.atomic_disable 220 - * - &drm_encoder_helper_funcs.prepare 221 - * - &drm_encoder_helper_funcs.disable 222 - * - &drm_encoder_helper_funcs.dpms 223 - * 224 - * and the CRTC is disabled via one of: 225 - * 226 - * - &drm_crtc_helper_funcs.prepare 227 - * - &drm_crtc_helper_funcs.atomic_disable 228 - * - &drm_crtc_helper_funcs.disable 229 - * - &drm_crtc_helper_funcs.dpms 227 + * signals) feeding it is no longer running when this callback is 228 + * called. 230 229 * 231 230 * The @post_disable callback is optional. 
232 231 * ··· 252 285 /** 253 286 * @pre_enable: 254 287 * 288 + * This callback should enable the bridge. It is called right before 289 + * the preceding element in the display pipe is enabled. If the 290 + * preceding element is a bridge this means it's called before that 291 + * bridge's @pre_enable function. If the preceding element is a 292 + * &drm_encoder it's called right before the encoder's 293 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 294 + * &drm_encoder_helper_funcs.dpms hook. 295 + * 255 296 * The display pipe (i.e. clocks and timing signals) feeding this bridge 256 - * will not yet be running when the @pre_enable is called. 257 - * 258 - * This callback should perform all the necessary actions to prepare the 259 - * bridge to accept signals from the preceding element. 260 - * 261 - * If the preceding element is a &drm_bridge, then this is called before 262 - * that bridge is pre-enabled (unless marked otherwise by 263 - * @pre_enable_prev_first flag) via one of: 264 - * 265 - * - &drm_bridge_funcs.pre_enable 266 - * - &drm_bridge_funcs.atomic_pre_enable 267 - * 268 - * If the preceding element of the bridge is a display controller, then 269 - * this callback is called before the CRTC is enabled via one of: 270 - * 271 - * - &drm_crtc_helper_funcs.atomic_enable 272 - * - &drm_crtc_helper_funcs.commit 273 - * 274 - * and the encoder is enabled via one of: 275 - * 276 - * - &drm_encoder_helper_funcs.atomic_enable 277 - * - &drm_encoder_helper_funcs.enable 278 - * - &drm_encoder_helper_funcs.commit 297 + * will not yet be running when this callback is called. The bridge must 298 + * not enable the display link feeding the next bridge in the chain (if 299 + * there is one) when this callback is called. 279 300 * 280 301 * The @pre_enable callback is optional. 281 302 * ··· 277 322 /** 278 323 * @enable: 279 324 * 280 - * The @enable callback should enable the bridge. 325 + * This callback should enable the bridge. It is called right after 326 + * the preceding element in the display pipe is enabled. If the 327 + * preceding element is a bridge this means it's called after that 328 + * bridge's @enable function. If the preceding element is a 329 + * &drm_encoder it's called right after the encoder's 330 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 331 + * &drm_encoder_helper_funcs.dpms hook. 281 332 * 282 333 * The bridge can assume that the display pipe (i.e. clocks and timing 283 334 * signals) feeding it is running when this callback is called. This 284 335 * callback must enable the display link feeding the next bridge in the 285 336 * chain if there is one. 286 - * 287 - * If the preceding element is a &drm_bridge, then this is called after 288 - * that bridge is enabled via one of: 289 - * 290 - * - &drm_bridge_funcs.enable 291 - * - &drm_bridge_funcs.atomic_enable 292 - * 293 - * If the preceding element of the bridge is a display controller, then 294 - * this callback is called after the CRTC is enabled via one of: 295 - * 296 - * - &drm_crtc_helper_funcs.atomic_enable 297 - * - &drm_crtc_helper_funcs.commit 298 - * 299 - * and the encoder is enabled via one of: 300 - * 301 - * - &drm_encoder_helper_funcs.atomic_enable 302 - * - &drm_encoder_helper_funcs.enable 303 - * - drm_encoder_helper_funcs.commit 304 337 * 305 338 * The @enable callback is optional. 306 339 * ··· 302 359 /** 303 360 * @atomic_pre_enable: 304 361 * 362 + * This callback should enable the bridge. 
It is called right before 363 + * the preceding element in the display pipe is enabled. If the 364 + * preceding element is a bridge this means it's called before that 365 + * bridge's @atomic_pre_enable or @pre_enable function. If the preceding 366 + * element is a &drm_encoder it's called right before the encoder's 367 + * &drm_encoder_helper_funcs.atomic_enable hook. 368 + * 305 369 * The display pipe (i.e. clocks and timing signals) feeding this bridge 306 - * will not yet be running when the @atomic_pre_enable is called. 307 - * 308 - * This callback should perform all the necessary actions to prepare the 309 - * bridge to accept signals from the preceding element. 310 - * 311 - * If the preceding element is a &drm_bridge, then this is called before 312 - * that bridge is pre-enabled (unless marked otherwise by 313 - * @pre_enable_prev_first flag) via one of: 314 - * 315 - * - &drm_bridge_funcs.pre_enable 316 - * - &drm_bridge_funcs.atomic_pre_enable 317 - * 318 - * If the preceding element of the bridge is a display controller, then 319 - * this callback is called before the CRTC is enabled via one of: 320 - * 321 - * - &drm_crtc_helper_funcs.atomic_enable 322 - * - &drm_crtc_helper_funcs.commit 323 - * 324 - * and the encoder is enabled via one of: 325 - * 326 - * - &drm_encoder_helper_funcs.atomic_enable 327 - * - &drm_encoder_helper_funcs.enable 328 - * - &drm_encoder_helper_funcs.commit 370 + * will not yet be running when this callback is called. The bridge must 371 + * not enable the display link feeding the next bridge in the chain (if 372 + * there is one) when this callback is called. 329 373 * 330 374 * The @atomic_pre_enable callback is optional. 331 375 */ ··· 322 392 /** 323 393 * @atomic_enable: 324 394 * 325 - * The @atomic_enable callback should enable the bridge. 395 + * This callback should enable the bridge. It is called right after 396 + * the preceding element in the display pipe is enabled. If the 397 + * preceding element is a bridge this means it's called after that 398 + * bridge's @atomic_enable or @enable function. If the preceding element 399 + * is a &drm_encoder it's called right after the encoder's 400 + * &drm_encoder_helper_funcs.atomic_enable hook. 326 401 * 327 402 * The bridge can assume that the display pipe (i.e. clocks and timing 328 403 * signals) feeding it is running when this callback is called. This 329 404 * callback must enable the display link feeding the next bridge in the 330 405 * chain if there is one. 331 - * 332 - * If the preceding element is a &drm_bridge, then this is called after 333 - * that bridge is enabled via one of: 334 - * 335 - * - &drm_bridge_funcs.enable 336 - * - &drm_bridge_funcs.atomic_enable 337 - * 338 - * If the preceding element of the bridge is a display controller, then 339 - * this callback is called after the CRTC is enabled via one of: 340 - * 341 - * - &drm_crtc_helper_funcs.atomic_enable 342 - * - &drm_crtc_helper_funcs.commit 343 - * 344 - * and the encoder is enabled via one of: 345 - * 346 - * - &drm_encoder_helper_funcs.atomic_enable 347 - * - &drm_encoder_helper_funcs.enable 348 - * - drm_encoder_helper_funcs.commit 349 406 * 350 407 * The @atomic_enable callback is optional. 351 408 */ ··· 341 424 /** 342 425 * @atomic_disable: 343 426 * 344 - * The @atomic_disable callback should disable the bridge. 427 + * This callback should disable the bridge. It is called right before 428 + * the preceding element in the display pipe is disabled. 
If the 429 + * preceding element is a bridge this means it's called before that 430 + * bridge's @atomic_disable or @disable vfunc. If the preceding element 431 + * is a &drm_encoder it's called right before the 432 + * &drm_encoder_helper_funcs.atomic_disable hook. 345 433 * 346 434 * The bridge can assume that the display pipe (i.e. clocks and timing 347 435 * signals) feeding it is still running when this callback is called. 348 - * 349 - * If the preceding element is a &drm_bridge, then this is called before 350 - * that bridge is disabled via one of: 351 - * 352 - * - &drm_bridge_funcs.disable 353 - * - &drm_bridge_funcs.atomic_disable 354 - * 355 - * If the preceding element of the bridge is a display controller, then 356 - * this callback is called before the encoder is disabled via one of: 357 - * 358 - * - &drm_encoder_helper_funcs.atomic_disable 359 - * - &drm_encoder_helper_funcs.prepare 360 - * - &drm_encoder_helper_funcs.disable 361 - * - &drm_encoder_helper_funcs.dpms 362 - * 363 - * and the CRTC is disabled via one of: 364 - * 365 - * - &drm_crtc_helper_funcs.prepare 366 - * - &drm_crtc_helper_funcs.atomic_disable 367 - * - &drm_crtc_helper_funcs.disable 368 - * - &drm_crtc_helper_funcs.dpms. 369 436 * 370 437 * The @atomic_disable callback is optional. 371 438 */ ··· 359 458 /** 360 459 * @atomic_post_disable: 361 460 * 461 + * This callback should disable the bridge. It is called right after the 462 + * preceding element in the display pipe is disabled. If the preceding 463 + * element is a bridge this means it's called after that bridge's 464 + * @atomic_post_disable or @post_disable function. If the preceding 465 + * element is a &drm_encoder it's called right after the encoder's 466 + * &drm_encoder_helper_funcs.atomic_disable hook. 467 + * 362 468 * The bridge must assume that the display pipe (i.e. clocks and timing 363 - * signals) feeding this bridge is no longer running when the 364 - * @atomic_post_disable is called. 365 - * 366 - * This callback should perform all the actions required by the hardware 367 - * after it has stopped receiving signals from the preceding element. 368 - * 369 - * If the preceding element is a &drm_bridge, then this is called after 370 - * that bridge is post-disabled (unless marked otherwise by the 371 - * @pre_enable_prev_first flag) via one of: 372 - * 373 - * - &drm_bridge_funcs.post_disable 374 - * - &drm_bridge_funcs.atomic_post_disable 375 - * 376 - * If the preceding element of the bridge is a display controller, then 377 - * this callback is called after the encoder is disabled via one of: 378 - * 379 - * - &drm_encoder_helper_funcs.atomic_disable 380 - * - &drm_encoder_helper_funcs.prepare 381 - * - &drm_encoder_helper_funcs.disable 382 - * - &drm_encoder_helper_funcs.dpms 383 - * 384 - * and the CRTC is disabled via one of: 385 - * 386 - * - &drm_crtc_helper_funcs.prepare 387 - * - &drm_crtc_helper_funcs.atomic_disable 388 - * - &drm_crtc_helper_funcs.disable 389 - * - &drm_crtc_helper_funcs.dpms 469 + * signals) feeding it is no longer running when this callback is 470 + * called. 390 471 * 391 472 * The @atomic_post_disable callback is optional. 392 473 */
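Taken together, the rewritten kernel-doc pins the ordering contract down: @pre_enable/@atomic_pre_enable run before the preceding element is enabled and must not yet drive the link to the next bridge, @enable/@atomic_enable run after it and must start that link, @disable/@atomic_disable run while the pipe is still live, and @post_disable/@atomic_post_disable run once it has stopped. Here is a minimal sketch of a bridge honouring that contract; it assumes the current drm_atomic_state-based callback signatures, and every my_bridge_*() helper is a hypothetical stand-in for real hardware programming.

#include <drm/drm_atomic_state_helper.h>
#include <drm/drm_bridge.h>

/* Hypothetical hardware accessors, stubbed out for the sketch. */
static void my_bridge_power_on(struct drm_bridge *bridge) { }
static void my_bridge_start_video(struct drm_bridge *bridge) { }
static void my_bridge_stop_video(struct drm_bridge *bridge) { }
static void my_bridge_power_off(struct drm_bridge *bridge) { }

static void my_bridge_atomic_pre_enable(struct drm_bridge *bridge,
					struct drm_atomic_state *state)
{
	/* Pipe not running yet; power up, but keep the output link off. */
	my_bridge_power_on(bridge);
}

static void my_bridge_atomic_enable(struct drm_bridge *bridge,
				    struct drm_atomic_state *state)
{
	/* Pipe is running; start driving the link to the next bridge. */
	my_bridge_start_video(bridge);
}

static void my_bridge_atomic_disable(struct drm_bridge *bridge,
				     struct drm_atomic_state *state)
{
	/* Pipe is still running; stop the output link first. */
	my_bridge_stop_video(bridge);
}

static void my_bridge_atomic_post_disable(struct drm_bridge *bridge,
					  struct drm_atomic_state *state)
{
	/* Pipe has stopped; safe to gate clocks and power down. */
	my_bridge_power_off(bridge);
}

static const struct drm_bridge_funcs my_bridge_funcs = {
	.atomic_reset = drm_atomic_helper_bridge_reset,
	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
	.atomic_pre_enable = my_bridge_atomic_pre_enable,
	.atomic_enable = my_bridge_atomic_enable,
	.atomic_disable = my_bridge_atomic_disable,
	.atomic_post_disable = my_bridge_atomic_post_disable,
};

In a bridge chain this means the helpers run @pre_enable and @disable from the connector side towards the encoder, and @enable and @post_disable in the opposite direction, which is exactly the before/after relationship the comments above spell out.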