Merge tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
"I missed the drm-rust fixes tree for last week, so this catches up on
that, along with amdgpu, and then some misc fixes across a few
drivers. I hadn't got an xe pull by the time I sent this, I suspect
one will arrive 10 mins after, but I don't think there is anything
that can't wait for next week.

Things seem to have picked up a little with people coming back from
the holidays.

MAINTAINERS:
- Fix Nova GPU driver git links
- Fix typo in TYR driver entry preventing correct behavior of
scripts/get_maintainer.pl
- Exclude TYR driver from DRM MISC

nova-core:
- Correctly select RUST_FW_LOADER_ABSTRACTIONS to prevent build
errors
- Regenerate nova-core bindgen bindings with '--explicit-padding' to
avoid uninitialized bytes
- Fix length of received GSP messages due to a miscalculated message
  payload size
- Regenerate bindings to derive MaybeZeroable
- Use a bindings alias to derive the firmware version

exynos:
- hdmi: replace system_wq with system_percpu_wq

pl111:
- Fix error handling in probe

mediatek/atomic/tidss:
- Fix tidss in another way and revert reordering of pre-enable and
post-disable operations, as it breaks other bridge drivers

nouveau:
- Fix regression from the fwsec suspend/resume fix

pci/vga:
- Fix multiple GPUs being reported as 'boot_display'

fb-helper:
- Fix vblank timeout during suspend/reset

amdgpu:
- Clang fixes
- Navi1x PCIe DPM fixes
- Ring reset fixes
- ISP suspend fix
- Analog DC fixes
- VPE fixes
- Mode1 reset fix

radeon:
- Variable sized array fix"

* tag 'drm-fixes-2026-01-09' of https://gitlab.freedesktop.org/drm/kernel: (32 commits)
Reapply "Revert "drm/amd: Skip power ungate during suspend for VPE""
drm/amd/display: Check NULL before calling dac_load_detection
drm/amd/pm: Disable MMIO access during SMU Mode 1 reset
drm/exynos: hdmi: replace use of system_wq with system_percpu_wq
drm/fb-helper: Fix vblank timeout during suspend/reset
PCI/VGA: Don't assume the only VGA device on a system is `boot_vga`
drm/amdgpu: Fix query for VPE block_type and ip_count
drm/amd/display: Add missing encoder setup to DACnEncoderControl
drm/amd/display: Correct color depth for SelectCRTC_Source
drm/amd/amdgpu: Fix SMU warning during isp suspend-resume
drm/amdgpu: always backup and reemit fences
drm/amdgpu: don't reemit ring contents more than once
drm/amd/pm: force send pcie parmater on navi1x
drm/amd/pm: fix wrong pcie parameter on navi1x
drm/radeon: Remove __counted_by from ClockInfoArray.clockInfo[]
drm/amd/display: Reduce number of arguments of dcn30's CalculateWatermarksAndDRAMSpeedChangeSupport()
drm/amd/display: Reduce number of arguments of dcn30's CalculatePrefetchSchedule()
drm/amd/display: Apply e4479aecf658 to dml
nouveau: don't attempt fwsec on sb on newer platforms
drm/tidss: Fix enable/disable order
...

+4 -3
MAINTAINERS
··· 2159 L: dri-devel@lists.freedesktop.org
2160 S: Supported
2161 W: https://rust-for-linux.com/tyr-gpu-driver
2162 - W https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
2163 B: https://gitlab.freedesktop.org/panfrost/linux/-/issues
2164 T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
2165 F: Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
··· 8068 Q: https://patchwork.freedesktop.org/project/nouveau/
8069 B: https://gitlab.freedesktop.org/drm/nova/-/issues
8070 C: irc://irc.oftc.net/nouveau
8071 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next
8072 F: Documentation/gpu/nova/
8073 F: drivers/gpu/nova-core/
8074
··· 8080 Q: https://patchwork.freedesktop.org/project/nouveau/
8081 B: https://gitlab.freedesktop.org/drm/nova/-/issues
8082 C: irc://irc.oftc.net/nouveau
8083 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next
8084 F: Documentation/gpu/nova/
8085 F: drivers/gpu/drm/nova/
8086 F: include/uapi/drm/nova_drm.h
··· 8358 X: drivers/gpu/drm/nova/
8359 X: drivers/gpu/drm/radeon/
8360 X: drivers/gpu/drm/tegra/
8361 X: drivers/gpu/drm/xe/
8362
8363 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]

··· 2159 L: dri-devel@lists.freedesktop.org
2160 S: Supported
2161 W: https://rust-for-linux.com/tyr-gpu-driver
2162 + W: https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
2163 B: https://gitlab.freedesktop.org/panfrost/linux/-/issues
2164 T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
2165 F: Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
··· 8068 Q: https://patchwork.freedesktop.org/project/nouveau/
8069 B: https://gitlab.freedesktop.org/drm/nova/-/issues
8070 C: irc://irc.oftc.net/nouveau
8071 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
8072 F: Documentation/gpu/nova/
8073 F: drivers/gpu/nova-core/
8074
··· 8080 Q: https://patchwork.freedesktop.org/project/nouveau/
8081 B: https://gitlab.freedesktop.org/drm/nova/-/issues
8082 C: irc://irc.oftc.net/nouveau
8083 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
8084 F: Documentation/gpu/nova/
8085 F: drivers/gpu/drm/nova/
8086 F: include/uapi/drm/nova_drm.h
··· 8358 X: drivers/gpu/drm/nova/
8359 X: drivers/gpu/drm/radeon/
8360 X: drivers/gpu/drm/tegra/
8361 + X: drivers/gpu/drm/tyr/
8362 X: drivers/gpu/drm/xe/
8363
8364 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3445 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
3446 adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA))
3447 continue;
3448 - /* skip CG for VCE/UVD/VPE, it's handled specially */
3449 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD &&
3450 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE &&
3451 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN &&
3452 - adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VPE &&
3453 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG &&
3454 adev->ip_blocks[i].version->funcs->set_powergating_state) {
3455 /* enable powergating to save power */
··· 5865
5866 if (ret)
5867 goto mode1_reset_failed;
5868
5869 amdgpu_device_load_pci_state(adev->pdev);
5870 ret = amdgpu_psp_wait_for_bootloader(adev);

··· 3445 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
3446 adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA))
3447 continue;
3448 + /* skip CG for VCE/UVD, it's handled specially */
3449 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD &&
3450 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE &&
3451 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN &&
3452 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG &&
3453 adev->ip_blocks[i].version->funcs->set_powergating_state) {
3454 /* enable powergating to save power */
··· 5866
5867 if (ret)
5868 goto mode1_reset_failed;
5869 +
5870 + /* enable mmio access after mode 1 reset completed */
5871 + adev->no_hw_access = false;
5872
5873 amdgpu_device_load_pci_state(adev->pdev);
5874 ret = amdgpu_psp_wait_for_bootloader(adev);
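The second hunk above re-enables register access once the Mode 1 reset has
completed, matching the power-management side that blocks MMIO before the SMU
is asked to reset. Below is a minimal, self-contained sketch of that guard
pattern; the struct and function names (dev_ctx, dev_rreg, smu_mode1_reset)
are made up for illustration and simplified compared to the real amdgpu code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified model of the "no MMIO during Mode 1 reset" flag. */
struct dev_ctx {
	bool no_hw_access;      /* true while the SMU owns the hardware */
	uint32_t fake_reg;      /* stand-in for an MMIO register */
};

static uint32_t dev_rreg(struct dev_ctx *ctx)
{
	if (ctx->no_hw_access)  /* register accesses are skipped during reset */
		return 0;
	return ctx->fake_reg;
}

static int smu_mode1_reset(struct dev_ctx *ctx)
{
	ctx->no_hw_access = true;   /* block MMIO before sending the reset message */
	/* ... ask the SMU to perform the Mode 1 reset ... */
	ctx->fake_reg = 0;          /* hardware state is gone after the reset */
	return 0;
}

int main(void)
{
	struct dev_ctx ctx = { .fake_reg = 0xdeadbeef };

	if (smu_mode1_reset(&ctx))
		return 1;

	/* re-enable MMIO only after the reset has completed (the fix above) */
	ctx.no_hw_access = false;

	printf("register after reset: 0x%x\n", (unsigned)dev_rreg(&ctx));
	return 0;
}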
+31 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 89 return seq; 90 } 91 92 /** 93 * amdgpu_fence_emit - emit a fence on the requested ring 94 * ··· 126 &ring->fence_drv.lock, 127 adev->fence_context + ring->idx, seq); 128 129 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, 130 seq, flags | AMDGPU_FENCE_FLAG_INT); 131 amdgpu_fence_save_wptr(af); 132 pm_runtime_get_noresume(adev_to_drm(adev)->dev); 133 ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; ··· 721 struct amdgpu_ring *ring = af->ring; 722 unsigned long flags; 723 u32 seq, last_seq; 724 725 last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; 726 seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; ··· 739 if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { 740 fence = container_of(unprocessed, struct amdgpu_fence, base); 741 742 - if (fence == af) 743 dma_fence_set_error(&fence->base, -ETIME); 744 else if (fence->context == af->context) 745 dma_fence_set_error(&fence->base, -ECANCELED); ··· 749 rcu_read_unlock(); 750 } while (last_seq != seq); 751 spin_unlock_irqrestore(&ring->fence_drv.lock, flags); 752 - /* signal the guilty fence */ 753 - amdgpu_fence_write(ring, (u32)af->base.seqno); 754 - amdgpu_fence_process(ring); 755 } 756 757 void amdgpu_fence_save_wptr(struct amdgpu_fence *af) ··· 802 /* save everything if the ring is not guilty, otherwise 803 * just save the content from other contexts. 804 */ 805 - if (!guilty_fence || (fence->context != guilty_fence->context)) 806 amdgpu_ring_backup_unprocessed_command(ring, wptr, 807 fence->wptr); 808 wptr = fence->wptr; 809 } 810 rcu_read_unlock(); 811 } while (last_seq != seq);
··· 89 return seq; 90 } 91 92 + static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af) 93 + { 94 + af->fence_wptr_start = af->ring->wptr; 95 + } 96 + 97 + static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af) 98 + { 99 + af->fence_wptr_end = af->ring->wptr; 100 + } 101 + 102 /** 103 * amdgpu_fence_emit - emit a fence on the requested ring 104 * ··· 116 &ring->fence_drv.lock, 117 adev->fence_context + ring->idx, seq); 118 119 + amdgpu_fence_save_fence_wptr_start(af); 120 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, 121 seq, flags | AMDGPU_FENCE_FLAG_INT); 122 + amdgpu_fence_save_fence_wptr_end(af); 123 amdgpu_fence_save_wptr(af); 124 pm_runtime_get_noresume(adev_to_drm(adev)->dev); 125 ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; ··· 709 struct amdgpu_ring *ring = af->ring; 710 unsigned long flags; 711 u32 seq, last_seq; 712 + bool reemitted = false; 713 714 last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; 715 seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; ··· 726 if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { 727 fence = container_of(unprocessed, struct amdgpu_fence, base); 728 729 + if (fence->reemitted > 1) 730 + reemitted = true; 731 + else if (fence == af) 732 dma_fence_set_error(&fence->base, -ETIME); 733 else if (fence->context == af->context) 734 dma_fence_set_error(&fence->base, -ECANCELED); ··· 734 rcu_read_unlock(); 735 } while (last_seq != seq); 736 spin_unlock_irqrestore(&ring->fence_drv.lock, flags); 737 + 738 + if (reemitted) { 739 + /* if we've already reemitted once then just cancel everything */ 740 + amdgpu_fence_driver_force_completion(af->ring); 741 + af->ring->ring_backup_entries_to_copy = 0; 742 + } 743 } 744 745 void amdgpu_fence_save_wptr(struct amdgpu_fence *af) ··· 784 /* save everything if the ring is not guilty, otherwise 785 * just save the content from other contexts. 786 */ 787 + if (!fence->reemitted && 788 + (!guilty_fence || (fence->context != guilty_fence->context))) { 789 amdgpu_ring_backup_unprocessed_command(ring, wptr, 790 fence->wptr); 791 + } else if (!fence->reemitted) { 792 + /* always save the fence */ 793 + amdgpu_ring_backup_unprocessed_command(ring, 794 + fence->fence_wptr_start, 795 + fence->fence_wptr_end); 796 + } 797 wptr = fence->wptr; 798 + fence->reemitted++; 799 } 800 rcu_read_unlock(); 801 } while (last_seq != seq);
+24
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.c
··· 318 }
319 EXPORT_SYMBOL(isp_kernel_buffer_free);
320
321 static const struct amd_ip_funcs isp_ip_funcs = {
322 .name = "isp_ip",
323 .early_init = isp_early_init,
324 .hw_init = isp_hw_init,
325 .hw_fini = isp_hw_fini,
326 .is_idle = isp_is_idle,
327 .set_clockgating_state = isp_set_clockgating_state,
328 .set_powergating_state = isp_set_powergating_state,
329 };

··· 318 }
319 EXPORT_SYMBOL(isp_kernel_buffer_free);
320
321 + static int isp_resume(struct amdgpu_ip_block *ip_block)
322 + {
323 + struct amdgpu_device *adev = ip_block->adev;
324 + struct amdgpu_isp *isp = &adev->isp;
325 +
326 + if (isp->funcs->hw_resume)
327 + return isp->funcs->hw_resume(isp);
328 +
329 + return -ENODEV;
330 + }
331 +
332 + static int isp_suspend(struct amdgpu_ip_block *ip_block)
333 + {
334 + struct amdgpu_device *adev = ip_block->adev;
335 + struct amdgpu_isp *isp = &adev->isp;
336 +
337 + if (isp->funcs->hw_suspend)
338 + return isp->funcs->hw_suspend(isp);
339 +
340 + return -ENODEV;
341 + }
342 +
343 static const struct amd_ip_funcs isp_ip_funcs = {
344 .name = "isp_ip",
345 .early_init = isp_early_init,
346 .hw_init = isp_hw_init,
347 .hw_fini = isp_hw_fini,
348 .is_idle = isp_is_idle,
349 + .suspend = isp_suspend,
350 + .resume = isp_resume,
351 .set_clockgating_state = isp_set_clockgating_state,
352 .set_powergating_state = isp_set_powergating_state,
353 };
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.h
··· 38 struct isp_funcs {
39 int (*hw_init)(struct amdgpu_isp *isp);
40 int (*hw_fini)(struct amdgpu_isp *isp);
41 };
42
43 struct amdgpu_isp {

··· 38 struct isp_funcs {
39 int (*hw_init)(struct amdgpu_isp *isp);
40 int (*hw_fini)(struct amdgpu_isp *isp);
41 + int (*hw_suspend)(struct amdgpu_isp *isp);
42 + int (*hw_resume)(struct amdgpu_isp *isp);
43 };
44
45 struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
203 break;
204 default:
205 type = AMD_IP_BLOCK_TYPE_NUM;
206 break;
··· 723 break;
724 case AMD_IP_BLOCK_TYPE_UVD:
725 count = adev->uvd.num_uvd_inst;
726 break;
727 /* For all other IP block types not listed in the switch statement
728 * the ip status is valid here and the instance count is one.

··· 201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
203 break;
204 + case AMDGPU_HW_IP_VPE:
205 + type = AMD_IP_BLOCK_TYPE_VPE;
206 + break;
207 default:
208 type = AMD_IP_BLOCK_TYPE_NUM;
209 break;
··· 720 break;
721 case AMD_IP_BLOCK_TYPE_UVD:
722 count = adev->uvd.num_uvd_inst;
723 + break;
724 + case AMD_IP_BLOCK_TYPE_VPE:
725 + count = adev->vpe.num_instances;
726 break;
727 /* For all other IP block types not listed in the switch statement
728 * the ip status is valid here and the instance count is one.
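With the VPE case wired into the HW_IP query above, userspace can ask how many
VPE instances are present instead of getting back an invalid type or a zero
count. A rough sketch of that query using libdrm's amdgpu wrappers follows; it
assumes a render node at /dev/dri/renderD128 and a libdrm new enough to define
AMDGPU_HW_IP_VPE, and the error handling is illustrative only.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

int main(void)
{
	uint32_t major, minor, count = 0;
	amdgpu_device_handle dev;
	int fd = open("/dev/dri/renderD128", O_RDWR); /* assumed render node */

	if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
		return 1;

	/* AMDGPU_HW_IP_VPE now maps to AMD_IP_BLOCK_TYPE_VPE in the kernel,
	 * so the count reflects adev->vpe.num_instances. */
	if (!amdgpu_query_hw_ip_count(dev, AMDGPU_HW_IP_VPE, &count))
		printf("VPE instances: %u\n", count);

	amdgpu_device_deinitialize(dev);
	return 0;
}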
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 144 struct amdgpu_ring *ring;
145 ktime_t start_timestamp;
146
147 - /* wptr for the fence for resets */
148 u64 wptr;
149 /* fence context for resets */
150 u64 context;
151 };
152
153 extern const struct drm_sched_backend_ops amdgpu_sched_ops;

··· 144 struct amdgpu_ring *ring;
145 ktime_t start_timestamp;
146
147 + /* wptr for the total submission for resets */
148 u64 wptr;
149 /* fence context for resets */
150 u64 context;
151 + /* has this fence been reemitted */
152 + unsigned int reemitted;
153 + /* wptr for the fence for the submission */
154 + u64 fence_wptr_start;
155 + u64 fence_wptr_end;
156 };
157
158 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
··· 26 */ 27 28 #include <linux/gpio/machine.h> 29 #include "amdgpu.h" 30 #include "isp_v4_1_1.h" 31 ··· 146 return -ENODEV; 147 } 148 149 exit: 150 /* Continue to add */ 151 return 0; ··· 181 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret); 182 return -ENODEV; 183 } 184 185 exit: 186 /* Continue to remove */ 187 return 0; 188 } 189 190 static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp) ··· 408 static const struct isp_funcs isp_v4_1_1_funcs = { 409 .hw_init = isp_v4_1_1_hw_init, 410 .hw_fini = isp_v4_1_1_hw_fini, 411 }; 412 413 void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
··· 26 */ 27 28 #include <linux/gpio/machine.h> 29 + #include <linux/pm_runtime.h> 30 #include "amdgpu.h" 31 #include "isp_v4_1_1.h" 32 ··· 145 return -ENODEV; 146 } 147 148 + /* The devices will be managed by the pm ops from the parent */ 149 + dev_pm_syscore_device(dev, true); 150 + 151 exit: 152 /* Continue to add */ 153 return 0; ··· 177 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret); 178 return -ENODEV; 179 } 180 + dev_pm_syscore_device(dev, false); 181 182 exit: 183 /* Continue to remove */ 184 return 0; 185 + } 186 + 187 + static int isp_suspend_device(struct device *dev, void *data) 188 + { 189 + return pm_runtime_force_suspend(dev); 190 + } 191 + 192 + static int isp_resume_device(struct device *dev, void *data) 193 + { 194 + return pm_runtime_force_resume(dev); 195 + } 196 + 197 + static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp) 198 + { 199 + int r; 200 + 201 + r = device_for_each_child(isp->parent, NULL, 202 + isp_suspend_device); 203 + if (r) 204 + dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r); 205 + 206 + return r; 207 + } 208 + 209 + static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp) 210 + { 211 + int r; 212 + 213 + r = device_for_each_child(isp->parent, NULL, 214 + isp_resume_device); 215 + if (r) 216 + dev_err(isp->parent, "failed to resume hw device (%d)\n", r); 217 + 218 + return r; 219 } 220 221 static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp) ··· 369 static const struct isp_funcs isp_v4_1_1_funcs = { 370 .hw_init = isp_v4_1_1_hw_init, 371 .hw_fini = isp_v4_1_1_hw_fini, 372 + .hw_suspend = isp_v4_1_1_hw_suspend, 373 + .hw_resume = isp_v4_1_1_hw_resume, 374 }; 375 376 void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
+2 -2
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
··· 763 return BP_RESULT_FAILURE;
764
765 return bp->cmd_tbl.dac1_encoder_control(
766 - bp, cntl->action == ENCODER_CONTROL_ENABLE,
767 cntl->pixel_clock, ATOM_DAC1_PS2);
768 } else if (cntl->engine_id == ENGINE_ID_DACB) {
769 if (!bp->cmd_tbl.dac2_encoder_control)
770 return BP_RESULT_FAILURE;
771
772 return bp->cmd_tbl.dac2_encoder_control(
773 - bp, cntl->action == ENCODER_CONTROL_ENABLE,
774 cntl->pixel_clock, ATOM_DAC1_PS2);
775 }
776

··· 763 return BP_RESULT_FAILURE;
764
765 return bp->cmd_tbl.dac1_encoder_control(
766 + bp, cntl->action,
767 cntl->pixel_clock, ATOM_DAC1_PS2);
768 } else if (cntl->engine_id == ENGINE_ID_DACB) {
769 if (!bp->cmd_tbl.dac2_encoder_control)
770 return BP_RESULT_FAILURE;
771
772 return bp->cmd_tbl.dac2_encoder_control(
773 + bp, cntl->action,
774 cntl->pixel_clock, ATOM_DAC1_PS2);
775 }
776
+35 -9
drivers/gpu/drm/amd/display/dc/bios/command_table.c
··· 1797 &params.ucEncodeMode)) 1798 return BP_RESULT_BADINPUT; 1799 1800 - params.ucDstBpc = bp_params->bit_depth; 1801 1802 if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params)) 1803 result = BP_RESULT_OK; ··· 1838 1839 static enum bp_result dac1_encoder_control_v1( 1840 struct bios_parser *bp, 1841 - bool enable, 1842 uint32_t pixel_clock, 1843 uint8_t dac_standard); 1844 static enum bp_result dac2_encoder_control_v1( 1845 struct bios_parser *bp, 1846 - bool enable, 1847 uint32_t pixel_clock, 1848 uint8_t dac_standard); 1849 ··· 1869 1870 static void dac_encoder_control_prepare_params( 1871 DAC_ENCODER_CONTROL_PS_ALLOCATION *params, 1872 - bool enable, 1873 uint32_t pixel_clock, 1874 uint8_t dac_standard) 1875 { 1876 params->ucDacStandard = dac_standard; 1877 - if (enable) 1878 params->ucAction = ATOM_ENABLE; 1879 else 1880 params->ucAction = ATOM_DISABLE; ··· 1890 1891 static enum bp_result dac1_encoder_control_v1( 1892 struct bios_parser *bp, 1893 - bool enable, 1894 uint32_t pixel_clock, 1895 uint8_t dac_standard) 1896 { ··· 1899 1900 dac_encoder_control_prepare_params( 1901 &params, 1902 - enable, 1903 pixel_clock, 1904 dac_standard); 1905 ··· 1911 1912 static enum bp_result dac2_encoder_control_v1( 1913 struct bios_parser *bp, 1914 - bool enable, 1915 uint32_t pixel_clock, 1916 uint8_t dac_standard) 1917 { ··· 1920 1921 dac_encoder_control_prepare_params( 1922 &params, 1923 - enable, 1924 pixel_clock, 1925 dac_standard); 1926
··· 1797 &params.ucEncodeMode)) 1798 return BP_RESULT_BADINPUT; 1799 1800 + switch (bp_params->color_depth) { 1801 + case COLOR_DEPTH_UNDEFINED: 1802 + params.ucDstBpc = PANEL_BPC_UNDEFINE; 1803 + break; 1804 + case COLOR_DEPTH_666: 1805 + params.ucDstBpc = PANEL_6BIT_PER_COLOR; 1806 + break; 1807 + default: 1808 + case COLOR_DEPTH_888: 1809 + params.ucDstBpc = PANEL_8BIT_PER_COLOR; 1810 + break; 1811 + case COLOR_DEPTH_101010: 1812 + params.ucDstBpc = PANEL_10BIT_PER_COLOR; 1813 + break; 1814 + case COLOR_DEPTH_121212: 1815 + params.ucDstBpc = PANEL_12BIT_PER_COLOR; 1816 + break; 1817 + case COLOR_DEPTH_141414: 1818 + dm_error("14-bit color not supported by SelectCRTC_Source v3\n"); 1819 + break; 1820 + case COLOR_DEPTH_161616: 1821 + params.ucDstBpc = PANEL_16BIT_PER_COLOR; 1822 + break; 1823 + } 1824 1825 if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params)) 1826 result = BP_RESULT_OK; ··· 1815 1816 static enum bp_result dac1_encoder_control_v1( 1817 struct bios_parser *bp, 1818 + enum bp_encoder_control_action action, 1819 uint32_t pixel_clock, 1820 uint8_t dac_standard); 1821 static enum bp_result dac2_encoder_control_v1( 1822 struct bios_parser *bp, 1823 + enum bp_encoder_control_action action, 1824 uint32_t pixel_clock, 1825 uint8_t dac_standard); 1826 ··· 1846 1847 static void dac_encoder_control_prepare_params( 1848 DAC_ENCODER_CONTROL_PS_ALLOCATION *params, 1849 + enum bp_encoder_control_action action, 1850 uint32_t pixel_clock, 1851 uint8_t dac_standard) 1852 { 1853 params->ucDacStandard = dac_standard; 1854 + if (action == ENCODER_CONTROL_SETUP || 1855 + action == ENCODER_CONTROL_INIT) 1856 + params->ucAction = ATOM_ENCODER_INIT; 1857 + else if (action == ENCODER_CONTROL_ENABLE) 1858 params->ucAction = ATOM_ENABLE; 1859 else 1860 params->ucAction = ATOM_DISABLE; ··· 1864 1865 static enum bp_result dac1_encoder_control_v1( 1866 struct bios_parser *bp, 1867 + enum bp_encoder_control_action action, 1868 uint32_t pixel_clock, 1869 uint8_t dac_standard) 1870 { ··· 1873 1874 dac_encoder_control_prepare_params( 1875 &params, 1876 + action, 1877 pixel_clock, 1878 dac_standard); 1879 ··· 1885 1886 static enum bp_result dac2_encoder_control_v1( 1887 struct bios_parser *bp, 1888 + enum bp_encoder_control_action action, 1889 uint32_t pixel_clock, 1890 uint8_t dac_standard) 1891 { ··· 1894 1895 dac_encoder_control_prepare_params( 1896 &params, 1897 + action, 1898 pixel_clock, 1899 dac_standard); 1900
+2 -2
drivers/gpu/drm/amd/display/dc/bios/command_table.h
··· 57 struct bp_crtc_source_select *bp_params);
58 enum bp_result (*dac1_encoder_control)(
59 struct bios_parser *bp,
60 - bool enable,
61 uint32_t pixel_clock,
62 uint8_t dac_standard);
63 enum bp_result (*dac2_encoder_control)(
64 struct bios_parser *bp,
65 - bool enable,
66 uint32_t pixel_clock,
67 uint8_t dac_standard);
68 enum bp_result (*dac1_output_control)(

··· 57 struct bp_crtc_source_select *bp_params);
58 enum bp_result (*dac1_encoder_control)(
59 struct bios_parser *bp,
60 + enum bp_encoder_control_action action,
61 uint32_t pixel_clock,
62 uint8_t dac_standard);
63 enum bp_result (*dac2_encoder_control)(
64 struct bios_parser *bp,
65 + enum bp_encoder_control_action action,
66 uint32_t pixel_clock,
67 uint8_t dac_standard);
68 enum bp_result (*dac1_output_control)(
+5 -1
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 30
31 ifneq ($(CONFIG_FRAME_WARN),0)
32 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
33 - frame_warn_limit := 3072
34 else
35 frame_warn_limit := 2048
36 endif

··· 30
31 ifneq ($(CONFIG_FRAME_WARN),0)
32 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
33 + ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
34 + frame_warn_limit := 4096
35 + else
36 + frame_warn_limit := 3072
37 + endif
38 else
39 frame_warn_limit := 2048
40 endif
+139 -406
drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
··· 77 static unsigned int dscComputeDelay( 78 enum output_format_class pixelFormat, 79 enum output_encoder_class Output); 80 - // Super monster function with some 45 argument 81 static bool CalculatePrefetchSchedule( 82 struct display_mode_lib *mode_lib, 83 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 84 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 85 Pipe *myPipe, 86 unsigned int DSCDelay, 87 - double DPPCLKDelaySubtotalPlusCNVCFormater, 88 - double DPPCLKDelaySCL, 89 - double DPPCLKDelaySCLLBOnly, 90 - double DPPCLKDelayCNVCCursor, 91 - double DISPCLKDelaySubtotal, 92 unsigned int DPP_RECOUT_WIDTH, 93 - enum output_format_class OutputFormat, 94 - unsigned int MaxInterDCNTileRepeaters, 95 unsigned int VStartup, 96 unsigned int MaxVStartup, 97 - unsigned int GPUVMPageTableLevels, 98 - bool GPUVMEnable, 99 - bool HostVMEnable, 100 - unsigned int HostVMMaxNonCachedPageTableLevels, 101 - double HostVMMinPageSize, 102 - bool DynamicMetadataEnable, 103 - bool DynamicMetadataVMEnabled, 104 - int DynamicMetadataLinesBeforeActiveRequired, 105 - unsigned int DynamicMetadataTransmittedBytes, 106 double UrgentLatency, 107 double UrgentExtraLatency, 108 double TCalc, ··· 98 unsigned int MaxNumSwathY, 99 double PrefetchSourceLinesC, 100 unsigned int SwathWidthC, 101 - int BytePerPixelC, 102 double VInitPreFillC, 103 unsigned int MaxNumSwathC, 104 long swath_width_luma_ub, ··· 105 unsigned int SwathHeightY, 106 unsigned int SwathHeightC, 107 double TWait, 108 - bool ProgressiveToInterlaceUnitInOPP, 109 - double *DSTXAfterScaler, 110 - double *DSTYAfterScaler, 111 double *DestinationLinesForPrefetch, 112 double *PrefetchBandwidth, 113 double *DestinationLinesToRequestVMInVBlank, ··· 113 double *VRatioPrefetchC, 114 double *RequiredPrefetchPixDataBWLuma, 115 double *RequiredPrefetchPixDataBWChroma, 116 - bool *NotEnoughTimeForDynamicMetadata, 117 - double *Tno_bw, 118 - double *prefetch_vmrow_bw, 119 - double *Tdmdl_vm, 120 - double *Tdmdl, 121 - unsigned int *VUpdateOffsetPix, 122 - double *VUpdateWidthPix, 123 - double *VReadyOffsetPix); 124 static double RoundToDFSGranularityUp(double Clock, double VCOSpeed); 125 static double RoundToDFSGranularityDown(double Clock, double VCOSpeed); 126 static void CalculateDCCConfiguration( ··· 265 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 266 struct display_mode_lib *mode_lib, 267 unsigned int PrefetchMode, 268 - unsigned int NumberOfActivePlanes, 269 - unsigned int MaxLineBufferLines, 270 - unsigned int LineBufferSize, 271 - unsigned int DPPOutputBufferPixels, 272 - unsigned int DETBufferSizeInKByte, 273 - unsigned int WritebackInterfaceBufferSize, 274 double DCFCLK, 275 double ReturnBW, 276 - bool GPUVMEnable, 277 - unsigned int dpte_group_bytes[], 278 - unsigned int MetaChunkSize, 279 double UrgentLatency, 280 double ExtraLatency, 281 - double WritebackLatency, 282 - double WritebackChunkSize, 283 double SOCCLK, 284 - double DRAMClockChangeLatency, 285 - double SRExitTime, 286 - double SREnterPlusExitTime, 287 double DCFCLKDeepSleep, 288 unsigned int DPPPerPlane[], 289 - bool DCCEnable[], 290 double DPPCLK[], 291 unsigned int DETBufferSizeY[], 292 unsigned int DETBufferSizeC[], 293 unsigned int SwathHeightY[], 294 unsigned int SwathHeightC[], 295 - unsigned int LBBitPerPixel[], 296 double SwathWidthY[], 297 double SwathWidthC[], 298 - double HRatio[], 299 - double HRatioChroma[], 300 - unsigned int vtaps[], 301 - unsigned int VTAPsChroma[], 302 - double VRatio[], 303 
- double VRatioChroma[], 304 - unsigned int HTotal[], 305 - double PixelClock[], 306 - unsigned int BlendingAndTiming[], 307 double BytePerPixelDETY[], 308 double BytePerPixelDETC[], 309 - double DSTXAfterScaler[], 310 - double DSTYAfterScaler[], 311 - bool WritebackEnable[], 312 - enum source_format_class WritebackPixelFormat[], 313 - double WritebackDestinationWidth[], 314 - double WritebackDestinationHeight[], 315 - double WritebackSourceHeight[], 316 - enum clock_change_support *DRAMClockChangeSupport, 317 - double *UrgentWatermark, 318 - double *WritebackUrgentWatermark, 319 - double *DRAMClockChangeWatermark, 320 - double *WritebackDRAMClockChangeWatermark, 321 - double *StutterExitWatermark, 322 - double *StutterEnterPlusExitWatermark, 323 - double *MinActiveDRAMClockChangeLatencySupported); 324 static void CalculateDCFCLKDeepSleep( 325 struct display_mode_lib *mode_lib, 326 unsigned int NumberOfActivePlanes, ··· 742 743 static bool CalculatePrefetchSchedule( 744 struct display_mode_lib *mode_lib, 745 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 746 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 747 Pipe *myPipe, 748 unsigned int DSCDelay, 749 - double DPPCLKDelaySubtotalPlusCNVCFormater, 750 - double DPPCLKDelaySCL, 751 - double DPPCLKDelaySCLLBOnly, 752 - double DPPCLKDelayCNVCCursor, 753 - double DISPCLKDelaySubtotal, 754 unsigned int DPP_RECOUT_WIDTH, 755 - enum output_format_class OutputFormat, 756 - unsigned int MaxInterDCNTileRepeaters, 757 unsigned int VStartup, 758 unsigned int MaxVStartup, 759 - unsigned int GPUVMPageTableLevels, 760 - bool GPUVMEnable, 761 - bool HostVMEnable, 762 - unsigned int HostVMMaxNonCachedPageTableLevels, 763 - double HostVMMinPageSize, 764 - bool DynamicMetadataEnable, 765 - bool DynamicMetadataVMEnabled, 766 - int DynamicMetadataLinesBeforeActiveRequired, 767 - unsigned int DynamicMetadataTransmittedBytes, 768 double UrgentLatency, 769 double UrgentExtraLatency, 770 double TCalc, ··· 761 unsigned int MaxNumSwathY, 762 double PrefetchSourceLinesC, 763 unsigned int SwathWidthC, 764 - int BytePerPixelC, 765 double VInitPreFillC, 766 unsigned int MaxNumSwathC, 767 long swath_width_luma_ub, ··· 768 unsigned int SwathHeightY, 769 unsigned int SwathHeightC, 770 double TWait, 771 - bool ProgressiveToInterlaceUnitInOPP, 772 - double *DSTXAfterScaler, 773 - double *DSTYAfterScaler, 774 double *DestinationLinesForPrefetch, 775 double *PrefetchBandwidth, 776 double *DestinationLinesToRequestVMInVBlank, ··· 776 double *VRatioPrefetchC, 777 double *RequiredPrefetchPixDataBWLuma, 778 double *RequiredPrefetchPixDataBWChroma, 779 - bool *NotEnoughTimeForDynamicMetadata, 780 - double *Tno_bw, 781 - double *prefetch_vmrow_bw, 782 - double *Tdmdl_vm, 783 - double *Tdmdl, 784 - unsigned int *VUpdateOffsetPix, 785 - double *VUpdateWidthPix, 786 - double *VReadyOffsetPix) 787 { 788 bool MyError = false; 789 unsigned int DPPCycles = 0, DISPCLKCycles = 0; 790 double DSTTotalPixelsAfterScaler = 0; ··· 811 double Tdmec = 0; 812 double Tdmsks = 0; 813 814 - if (GPUVMEnable == true && HostVMEnable == true) { 815 - HostVMInefficiencyFactor = PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 816 - HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels; 817 } else { 818 HostVMInefficiencyFactor = 1; 819 HostVMDynamicLevelsTrips = 0; 820 } 821 822 CalculateDynamicMetadataParameters( 823 - 
MaxInterDCNTileRepeaters, 824 myPipe->DPPCLK, 825 myPipe->DISPCLK, 826 myPipe->DCFCLKDeepSleep, 827 myPipe->PixelClock, 828 myPipe->HTotal, 829 myPipe->VBlank, 830 - DynamicMetadataTransmittedBytes, 831 - DynamicMetadataLinesBeforeActiveRequired, 832 myPipe->InterlaceEnable, 833 - ProgressiveToInterlaceUnitInOPP, 834 &Tsetup, 835 &Tdmbf, 836 &Tdmec, ··· 838 839 LineTime = myPipe->HTotal / myPipe->PixelClock; 840 trip_to_mem = UrgentLatency; 841 - Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 842 843 - if (DynamicMetadataVMEnabled == true && GPUVMEnable == true) { 844 - *Tdmdl = TWait + Tvm_trips + trip_to_mem; 845 } else { 846 - *Tdmdl = TWait + UrgentExtraLatency; 847 } 848 849 - if (DynamicMetadataEnable == true) { 850 - if (VStartup * LineTime < Tsetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) { 851 *NotEnoughTimeForDynamicMetadata = true; 852 } else { 853 *NotEnoughTimeForDynamicMetadata = false; ··· 855 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 856 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 857 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 858 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 859 } 860 } else { 861 *NotEnoughTimeForDynamicMetadata = false; 862 } 863 864 - *Tdmdl_vm = (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true && GPUVMEnable == true ? TWait + Tvm_trips : 0); 865 866 if (myPipe->ScalerEnabled) 867 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL; 868 else 869 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly; 870 871 - DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor; 872 873 - DISPCLKCycles = DISPCLKDelaySubtotal; 874 875 if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0) 876 return true; 877 878 - *DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 879 + DSCDelay; 880 881 - *DSTXAfterScaler = *DSTXAfterScaler + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 882 883 - if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && ProgressiveToInterlaceUnitInOPP)) 884 - *DSTYAfterScaler = 1; 885 else 886 - *DSTYAfterScaler = 0; 887 888 - DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler; 889 - *DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 890 - *DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal)); 891 892 MyError = false; 893 ··· 896 Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime; 897 Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime; 898 899 - if (GPUVMEnable) { 900 - if (GPUVMPageTableLevels >= 3) { 901 - *Tno_bw = UrgentExtraLatency + trip_to_mem * ((GPUVMPageTableLevels - 2) - 1); 902 } else 903 - *Tno_bw = 0; 904 } else if (!myPipe->DCCEnable) 905 - *Tno_bw = LineTime; 906 else 907 - *Tno_bw = LineTime / 4; 908 909 - dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime 910 - - (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal); 911 dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH 912 913 Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC); 914 Tsw_oto = Lsw_oto * LineTime; 915 916 - 
prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) / Tsw_oto; 917 918 - if (GPUVMEnable == true) { 919 - Tvm_oto = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 920 Tvm_trips, 921 LineTime / 4.0); 922 } else 923 Tvm_oto = LineTime / 4.0; 924 925 - if ((GPUVMEnable == true || myPipe->DCCEnable == true)) { 926 Tr0_oto = dml_max3( 927 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto, 928 LineTime - Tvm_oto, LineTime / 4); ··· 948 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 949 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 950 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 951 - dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", *Tdmdl_vm); 952 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 953 - dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", *DSTXAfterScaler); 954 - dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)*DSTYAfterScaler); 955 956 *PrefetchBandwidth = 0; 957 *DestinationLinesToRequestVMInVBlank = 0; ··· 965 double PrefetchBandwidth3 = 0; 966 double PrefetchBandwidth4 = 0; 967 968 - if (Tpre_rounded - *Tno_bw > 0) 969 PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte 970 + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor 971 + PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY 972 - + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 973 - / (Tpre_rounded - *Tno_bw); 974 else 975 PrefetchBandwidth1 = 0; 976 977 - if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw) > 0) { 978 - PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw); 979 } 980 981 - if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0) 982 PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame * 983 HostVMInefficiencyFactor + PrefetchSourceLinesY * 984 swath_width_luma_ub * BytePerPixelY + 985 PrefetchSourceLinesC * swath_width_chroma_ub * 986 - BytePerPixelC) / 987 - (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded); 988 else 989 PrefetchBandwidth2 = 0; 990 ··· 992 PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * 993 HostVMInefficiencyFactor + PrefetchSourceLinesY * 994 swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * 995 - swath_width_chroma_ub * BytePerPixelC) / (Tpre_rounded - 996 Tvm_trips_rounded); 997 else 998 PrefetchBandwidth3 = 0; ··· 1002 } 1003 1004 if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0) 1005 - PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 1006 / (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded); 1007 else 1008 PrefetchBandwidth4 = 0; ··· 1013 bool Case3OK; 1014 1015 if (PrefetchBandwidth1 > 0) { 1016 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1017 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / 
PrefetchBandwidth1 >= Tr0_trips_rounded) { 1018 Case1OK = true; 1019 } else { ··· 1024 } 1025 1026 if (PrefetchBandwidth2 > 0) { 1027 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1028 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) { 1029 Case2OK = true; 1030 } else { ··· 1035 } 1036 1037 if (PrefetchBandwidth3 > 0) { 1038 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1039 < Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) { 1040 Case3OK = true; 1041 } else { ··· 1058 dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ); 1059 1060 if (prefetch_bw_equ > 0) { 1061 - if (GPUVMEnable) { 1062 - Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4); 1063 } else { 1064 Tvm_equ = LineTime / 4; 1065 } 1066 1067 - if ((GPUVMEnable || myPipe->DCCEnable)) { 1068 Tr0_equ = dml_max4( 1069 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ, 1070 Tr0_trips, ··· 1133 } 1134 1135 *RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime; 1136 - *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * BytePerPixelC * swath_width_chroma_ub / LineTime; 1137 } else { 1138 MyError = true; 1139 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); ··· 1149 dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1150 dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1151 dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime); 1152 - dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime); 1153 dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n"); 1154 - dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1155 dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow); 1156 1157 } else { ··· 1182 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); 1183 } 1184 1185 - *prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw); 1186 } 1187 1188 if (MyError) { ··· 2343 2344 v->ErrorResult[k] = CalculatePrefetchSchedule( 2345 mode_lib, 2346 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 2347 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 2348 &myPipe, 2349 v->DSCDelay[k], 2350 - v->DPPCLKDelaySubtotal 2351 - + v->DPPCLKDelayCNVCFormater, 2352 - v->DPPCLKDelaySCL, 2353 - v->DPPCLKDelaySCLLBOnly, 2354 - v->DPPCLKDelayCNVCCursor, 2355 - v->DISPCLKDelaySubtotal, 2356 (unsigned int) (v->SwathWidthY[k] / v->HRatio[k]), 2357 - v->OutputFormat[k], 2358 - v->MaxInterDCNTileRepeaters, 2359 
dml_min(v->VStartupLines, v->MaxVStartupLines[k]), 2360 v->MaxVStartupLines[k], 2361 - v->GPUVMMaxPageTableLevels, 2362 - v->GPUVMEnable, 2363 - v->HostVMEnable, 2364 - v->HostVMMaxNonCachedPageTableLevels, 2365 - v->HostVMMinPageSize, 2366 - v->DynamicMetadataEnable[k], 2367 - v->DynamicMetadataVMEnabled, 2368 - v->DynamicMetadataLinesBeforeActiveRequired[k], 2369 - v->DynamicMetadataTransmittedBytes[k], 2370 v->UrgentLatency, 2371 v->UrgentExtraLatency, 2372 v->TCalc, ··· 2362 v->MaxNumSwathY[k], 2363 v->PrefetchSourceLinesC[k], 2364 v->SwathWidthC[k], 2365 - v->BytePerPixelC[k], 2366 v->VInitPreFillC[k], 2367 v->MaxNumSwathC[k], 2368 v->swath_width_luma_ub[k], ··· 2369 v->SwathHeightY[k], 2370 v->SwathHeightC[k], 2371 TWait, 2372 - v->ProgressiveToInterlaceUnitInOPP, 2373 - &v->DSTXAfterScaler[k], 2374 - &v->DSTYAfterScaler[k], 2375 &v->DestinationLinesForPrefetch[k], 2376 &v->PrefetchBandwidth[k], 2377 &v->DestinationLinesToRequestVMInVBlank[k], ··· 2377 &v->VRatioPrefetchC[k], 2378 &v->RequiredPrefetchPixDataBWLuma[k], 2379 &v->RequiredPrefetchPixDataBWChroma[k], 2380 - &v->NotEnoughTimeForDynamicMetadata[k], 2381 - &v->Tno_bw[k], 2382 - &v->prefetch_vmrow_bw[k], 2383 - &v->Tdmdl_vm[k], 2384 - &v->Tdmdl[k], 2385 - &v->VUpdateOffsetPix[k], 2386 - &v->VUpdateWidthPix[k], 2387 - &v->VReadyOffsetPix[k]); 2388 if (v->BlendingAndTiming[k] == k) { 2389 double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK); 2390 v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k]; ··· 2607 CalculateWatermarksAndDRAMSpeedChangeSupport( 2608 mode_lib, 2609 PrefetchMode, 2610 - v->NumberOfActivePlanes, 2611 - v->MaxLineBufferLines, 2612 - v->LineBufferSize, 2613 - v->DPPOutputBufferPixels, 2614 - v->DETBufferSizeInKByte[0], 2615 - v->WritebackInterfaceBufferSize, 2616 v->DCFCLK, 2617 v->ReturnBW, 2618 - v->GPUVMEnable, 2619 - v->dpte_group_bytes, 2620 - v->MetaChunkSize, 2621 v->UrgentLatency, 2622 v->UrgentExtraLatency, 2623 - v->WritebackLatency, 2624 - v->WritebackChunkSize, 2625 v->SOCCLK, 2626 - v->FinalDRAMClockChangeLatency, 2627 - v->SRExitTime, 2628 - v->SREnterPlusExitTime, 2629 v->DCFCLKDeepSleep, 2630 v->DPPPerPlane, 2631 - v->DCCEnable, 2632 v->DPPCLK, 2633 v->DETBufferSizeY, 2634 v->DETBufferSizeC, 2635 v->SwathHeightY, 2636 v->SwathHeightC, 2637 - v->LBBitPerPixel, 2638 v->SwathWidthY, 2639 v->SwathWidthC, 2640 - v->HRatio, 2641 - v->HRatioChroma, 2642 - v->vtaps, 2643 - v->VTAPsChroma, 2644 - v->VRatio, 2645 - v->VRatioChroma, 2646 - v->HTotal, 2647 - v->PixelClock, 2648 - v->BlendingAndTiming, 2649 v->BytePerPixelDETY, 2650 v->BytePerPixelDETC, 2651 - v->DSTXAfterScaler, 2652 - v->DSTYAfterScaler, 2653 - v->WritebackEnable, 2654 - v->WritebackPixelFormat, 2655 - v->WritebackDestinationWidth, 2656 - v->WritebackDestinationHeight, 2657 - v->WritebackSourceHeight, 2658 - &DRAMClockChangeSupport, 2659 - &v->UrgentWatermark, 2660 - &v->WritebackUrgentWatermark, 2661 - &v->DRAMClockChangeWatermark, 2662 - &v->WritebackDRAMClockChangeWatermark, 2663 - &v->StutterExitWatermark, 2664 - &v->StutterEnterPlusExitWatermark, 2665 - &v->MinActiveDRAMClockChangeLatencySupported); 2666 2667 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 2668 if (v->WritebackEnable[k] == true) { ··· 4608 4609 v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule( 4610 mode_lib, 4611 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 4612 - 
v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 4613 &myPipe, 4614 v->DSCDelayPerState[i][k], 4615 - v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater, 4616 - v->DPPCLKDelaySCL, 4617 - v->DPPCLKDelaySCLLBOnly, 4618 - v->DPPCLKDelayCNVCCursor, 4619 - v->DISPCLKDelaySubtotal, 4620 v->SwathWidthYThisState[k] / v->HRatio[k], 4621 - v->OutputFormat[k], 4622 - v->MaxInterDCNTileRepeaters, 4623 dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]), 4624 v->MaximumVStartup[i][j][k], 4625 - v->GPUVMMaxPageTableLevels, 4626 - v->GPUVMEnable, 4627 - v->HostVMEnable, 4628 - v->HostVMMaxNonCachedPageTableLevels, 4629 - v->HostVMMinPageSize, 4630 - v->DynamicMetadataEnable[k], 4631 - v->DynamicMetadataVMEnabled, 4632 - v->DynamicMetadataLinesBeforeActiveRequired[k], 4633 - v->DynamicMetadataTransmittedBytes[k], 4634 v->UrgLatency[i], 4635 v->ExtraLatency, 4636 v->TimeCalc, ··· 4627 v->MaxNumSwY[k], 4628 v->PrefetchLinesC[i][j][k], 4629 v->SwathWidthCThisState[k], 4630 - v->BytePerPixelC[k], 4631 v->PrefillC[k], 4632 v->MaxNumSwC[k], 4633 v->swath_width_luma_ub_this_state[k], ··· 4634 v->SwathHeightYThisState[k], 4635 v->SwathHeightCThisState[k], 4636 v->TWait, 4637 - v->ProgressiveToInterlaceUnitInOPP, 4638 - &v->DSTXAfterScaler[k], 4639 - &v->DSTYAfterScaler[k], 4640 &v->LineTimesForPrefetch[k], 4641 &v->PrefetchBW[k], 4642 &v->LinesForMetaPTE[k], ··· 4642 &v->VRatioPreC[i][j][k], 4643 &v->RequiredPrefetchPixelDataBWLuma[i][j][k], 4644 &v->RequiredPrefetchPixelDataBWChroma[i][j][k], 4645 - &v->NoTimeForDynamicMetadata[i][j][k], 4646 - &v->Tno_bw[k], 4647 - &v->prefetch_vmrow_bw[k], 4648 - &v->Tdmdl_vm[k], 4649 - &v->Tdmdl[k], 4650 - &v->VUpdateOffsetPix[k], 4651 - &v->VUpdateWidthPix[k], 4652 - &v->VReadyOffsetPix[k]); 4653 } 4654 4655 for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) { ··· 4817 CalculateWatermarksAndDRAMSpeedChangeSupport( 4818 mode_lib, 4819 v->PrefetchModePerState[i][j], 4820 - v->NumberOfActivePlanes, 4821 - v->MaxLineBufferLines, 4822 - v->LineBufferSize, 4823 - v->DPPOutputBufferPixels, 4824 - v->DETBufferSizeInKByte[0], 4825 - v->WritebackInterfaceBufferSize, 4826 v->DCFCLKState[i][j], 4827 v->ReturnBWPerState[i][j], 4828 - v->GPUVMEnable, 4829 - v->dpte_group_bytes, 4830 - v->MetaChunkSize, 4831 v->UrgLatency[i], 4832 v->ExtraLatency, 4833 - v->WritebackLatency, 4834 - v->WritebackChunkSize, 4835 v->SOCCLKPerState[i], 4836 - v->FinalDRAMClockChangeLatency, 4837 - v->SRExitTime, 4838 - v->SREnterPlusExitTime, 4839 v->ProjectedDCFCLKDeepSleep[i][j], 4840 v->NoOfDPPThisState, 4841 - v->DCCEnable, 4842 v->RequiredDPPCLKThisState, 4843 v->DETBufferSizeYThisState, 4844 v->DETBufferSizeCThisState, 4845 v->SwathHeightYThisState, 4846 v->SwathHeightCThisState, 4847 - v->LBBitPerPixel, 4848 v->SwathWidthYThisState, 4849 v->SwathWidthCThisState, 4850 - v->HRatio, 4851 - v->HRatioChroma, 4852 - v->vtaps, 4853 - v->VTAPsChroma, 4854 - v->VRatio, 4855 - v->VRatioChroma, 4856 - v->HTotal, 4857 - v->PixelClock, 4858 - v->BlendingAndTiming, 4859 v->BytePerPixelInDETY, 4860 v->BytePerPixelInDETC, 4861 - v->DSTXAfterScaler, 4862 - v->DSTYAfterScaler, 4863 - v->WritebackEnable, 4864 - v->WritebackPixelFormat, 4865 - v->WritebackDestinationWidth, 4866 - v->WritebackDestinationHeight, 4867 - v->WritebackSourceHeight, 4868 - &v->DRAMClockChangeSupport[i][j], 4869 - &v->UrgentWatermark, 4870 - &v->WritebackUrgentWatermark, 4871 - &v->DRAMClockChangeWatermark, 4872 - &v->WritebackDRAMClockChangeWatermark, 4873 - &v->StutterExitWatermark, 4874 - 
&v->StutterEnterPlusExitWatermark, 4875 - &v->MinActiveDRAMClockChangeLatencySupported); 4876 } 4877 } 4878 ··· 4950 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 4951 struct display_mode_lib *mode_lib, 4952 unsigned int PrefetchMode, 4953 - unsigned int NumberOfActivePlanes, 4954 - unsigned int MaxLineBufferLines, 4955 - unsigned int LineBufferSize, 4956 - unsigned int DPPOutputBufferPixels, 4957 - unsigned int DETBufferSizeInKByte, 4958 - unsigned int WritebackInterfaceBufferSize, 4959 double DCFCLK, 4960 double ReturnBW, 4961 - bool GPUVMEnable, 4962 - unsigned int dpte_group_bytes[], 4963 - unsigned int MetaChunkSize, 4964 double UrgentLatency, 4965 double ExtraLatency, 4966 - double WritebackLatency, 4967 - double WritebackChunkSize, 4968 double SOCCLK, 4969 - double DRAMClockChangeLatency, 4970 - double SRExitTime, 4971 - double SREnterPlusExitTime, 4972 double DCFCLKDeepSleep, 4973 unsigned int DPPPerPlane[], 4974 - bool DCCEnable[], 4975 double DPPCLK[], 4976 unsigned int DETBufferSizeY[], 4977 unsigned int DETBufferSizeC[], 4978 unsigned int SwathHeightY[], 4979 unsigned int SwathHeightC[], 4980 - unsigned int LBBitPerPixel[], 4981 double SwathWidthY[], 4982 double SwathWidthC[], 4983 - double HRatio[], 4984 - double HRatioChroma[], 4985 - unsigned int vtaps[], 4986 - unsigned int VTAPsChroma[], 4987 - double VRatio[], 4988 - double VRatioChroma[], 4989 - unsigned int HTotal[], 4990 - double PixelClock[], 4991 - unsigned int BlendingAndTiming[], 4992 double BytePerPixelDETY[], 4993 double BytePerPixelDETC[], 4994 - double DSTXAfterScaler[], 4995 - double DSTYAfterScaler[], 4996 - bool WritebackEnable[], 4997 - enum source_format_class WritebackPixelFormat[], 4998 - double WritebackDestinationWidth[], 4999 - double WritebackDestinationHeight[], 5000 - double WritebackSourceHeight[], 5001 - enum clock_change_support *DRAMClockChangeSupport, 5002 - double *UrgentWatermark, 5003 - double *WritebackUrgentWatermark, 5004 - double *DRAMClockChangeWatermark, 5005 - double *WritebackDRAMClockChangeWatermark, 5006 - double *StutterExitWatermark, 5007 - double *StutterEnterPlusExitWatermark, 5008 - double *MinActiveDRAMClockChangeLatencySupported) 5009 { 5010 double EffectiveLBLatencyHidingY = 0; 5011 double EffectiveLBLatencyHidingC = 0; 5012 double LinesInDETY[DC__NUM_DPP__MAX] = { 0 }; ··· 4987 double WritebackDRAMClockChangeLatencyHiding = 0; 4988 unsigned int k, j; 4989 4990 - mode_lib->vba.TotalActiveDPP = 0; 4991 - mode_lib->vba.TotalDCCActiveDPP = 0; 4992 - for (k = 0; k < NumberOfActivePlanes; ++k) { 4993 - mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + DPPPerPlane[k]; 4994 - if (DCCEnable[k] == true) { 4995 - mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP + DPPPerPlane[k]; 4996 } 4997 } 4998 4999 - *UrgentWatermark = UrgentLatency + ExtraLatency; 5000 5001 - *DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark; 5002 5003 - mode_lib->vba.TotalActiveWriteback = 0; 5004 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5005 - if (WritebackEnable[k] == true) { 5006 - mode_lib->vba.TotalActiveWriteback = mode_lib->vba.TotalActiveWriteback + 1; 5007 } 5008 } 5009 5010 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5011 - *WritebackUrgentWatermark = WritebackLatency; 5012 } else { 5013 - *WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5014 } 5015 5016 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5017 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency; 
5018 } else { 5019 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5020 } 5021 5022 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5023 5024 - mode_lib->vba.LBLatencyHidingSourceLinesY = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1); 5025 5026 - mode_lib->vba.LBLatencyHidingSourceLinesC = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1); 5027 5028 - EffectiveLBLatencyHidingY = mode_lib->vba.LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]); 5029 5030 - EffectiveLBLatencyHidingC = mode_lib->vba.LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]); 5031 5032 LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k]; 5033 LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]); 5034 - FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5035 if (BytePerPixelDETC[k] > 0) { 5036 - LinesInDETC = mode_lib->vba.DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5037 LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]); 5038 - FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k]; 5039 } else { 5040 LinesInDETC = 0; 5041 FullDETBufferingTimeC = 999999; 5042 } 5043 5044 - ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5045 5046 - if (NumberOfActivePlanes > 1) { 5047 - ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k]; 5048 } 5049 5050 if (BytePerPixelDETC[k] > 0) { 5051 - ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5052 5053 - if (NumberOfActivePlanes > 1) { 5054 - ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k]; 5055 } 5056 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5057 } else { 5058 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5059 } 5060 5061 - if (WritebackEnable[k] == true) { 5062 5063 - WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4); 5064 - if (WritebackPixelFormat[k] == dm_444_64) { 5065 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2; 5066 } 5067 - if (mode_lib->vba.WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5068 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2; 5069 } 5070 - WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - mode_lib->vba.WritebackDRAMClockChangeWatermark; 5071 - 
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5072 } 5073 } 5074 5075 - mode_lib->vba.MinActiveDRAMClockChangeMargin = 999999; 5076 PlaneWithMinActiveDRAMClockChangeMargin = 0; 5077 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5078 - if (mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < mode_lib->vba.MinActiveDRAMClockChangeMargin) { 5079 - mode_lib->vba.MinActiveDRAMClockChangeMargin = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5080 - if (BlendingAndTiming[k] == k) { 5081 PlaneWithMinActiveDRAMClockChangeMargin = k; 5082 } else { 5083 - for (j = 0; j < NumberOfActivePlanes; ++j) { 5084 - if (BlendingAndTiming[k] == j) { 5085 PlaneWithMinActiveDRAMClockChangeMargin = j; 5086 } 5087 } ··· 5089 } 5090 } 5091 5092 - *MinActiveDRAMClockChangeLatencySupported = mode_lib->vba.MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency; 5093 5094 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999; 5095 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5096 - if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5097 - SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5098 } 5099 } 5100 5101 - mode_lib->vba.TotalNumberOfActiveOTG = 0; 5102 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5103 - if (BlendingAndTiming[k] == k) { 5104 - mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1; 5105 } 5106 } 5107 5108 - if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) { 5109 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5110 - } else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5111 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5112 } else { 5113 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5114 } 5115 5116 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0]; 5117 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5118 if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) { 5119 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k]; 5120 - TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5121 } 5122 } 5123 5124 - *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5125 - *StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5126 5127 } 5128
··· 77 static unsigned int dscComputeDelay( 78 enum output_format_class pixelFormat, 79 enum output_encoder_class Output); 80 static bool CalculatePrefetchSchedule( 81 struct display_mode_lib *mode_lib, 82 + unsigned int k, 83 Pipe *myPipe, 84 unsigned int DSCDelay, 85 unsigned int DPP_RECOUT_WIDTH, 86 unsigned int VStartup, 87 unsigned int MaxVStartup, 88 double UrgentLatency, 89 double UrgentExtraLatency, 90 double TCalc, ··· 116 unsigned int MaxNumSwathY, 117 double PrefetchSourceLinesC, 118 unsigned int SwathWidthC, 119 double VInitPreFillC, 120 unsigned int MaxNumSwathC, 121 long swath_width_luma_ub, ··· 124 unsigned int SwathHeightY, 125 unsigned int SwathHeightC, 126 double TWait, 127 double *DestinationLinesForPrefetch, 128 double *PrefetchBandwidth, 129 double *DestinationLinesToRequestVMInVBlank, ··· 135 double *VRatioPrefetchC, 136 double *RequiredPrefetchPixDataBWLuma, 137 double *RequiredPrefetchPixDataBWChroma, 138 + bool *NotEnoughTimeForDynamicMetadata); 139 static double RoundToDFSGranularityUp(double Clock, double VCOSpeed); 140 static double RoundToDFSGranularityDown(double Clock, double VCOSpeed); 141 static void CalculateDCCConfiguration( ··· 294 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 295 struct display_mode_lib *mode_lib, 296 unsigned int PrefetchMode, 297 double DCFCLK, 298 double ReturnBW, 299 double UrgentLatency, 300 double ExtraLatency, 301 double SOCCLK, 302 double DCFCLKDeepSleep, 303 unsigned int DPPPerPlane[], 304 double DPPCLK[], 305 unsigned int DETBufferSizeY[], 306 unsigned int DETBufferSizeC[], 307 unsigned int SwathHeightY[], 308 unsigned int SwathHeightC[], 309 double SwathWidthY[], 310 double SwathWidthC[], 311 double BytePerPixelDETY[], 312 double BytePerPixelDETC[], 313 + enum clock_change_support *DRAMClockChangeSupport); 314 static void CalculateDCFCLKDeepSleep( 315 struct display_mode_lib *mode_lib, 316 unsigned int NumberOfActivePlanes, ··· 810 811 static bool CalculatePrefetchSchedule( 812 struct display_mode_lib *mode_lib, 813 + unsigned int k, 814 Pipe *myPipe, 815 unsigned int DSCDelay, 816 unsigned int DPP_RECOUT_WIDTH, 817 unsigned int VStartup, 818 unsigned int MaxVStartup, 819 double UrgentLatency, 820 double UrgentExtraLatency, 821 double TCalc, ··· 846 unsigned int MaxNumSwathY, 847 double PrefetchSourceLinesC, 848 unsigned int SwathWidthC, 849 double VInitPreFillC, 850 unsigned int MaxNumSwathC, 851 long swath_width_luma_ub, ··· 854 unsigned int SwathHeightY, 855 unsigned int SwathHeightC, 856 double TWait, 857 double *DestinationLinesForPrefetch, 858 double *PrefetchBandwidth, 859 double *DestinationLinesToRequestVMInVBlank, ··· 865 double *VRatioPrefetchC, 866 double *RequiredPrefetchPixDataBWLuma, 867 double *RequiredPrefetchPixDataBWChroma, 868 + bool *NotEnoughTimeForDynamicMetadata) 869 { 870 + struct vba_vars_st *v = &mode_lib->vba; 871 + double DPPCLKDelaySubtotalPlusCNVCFormater = v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater; 872 bool MyError = false; 873 unsigned int DPPCycles = 0, DISPCLKCycles = 0; 874 double DSTTotalPixelsAfterScaler = 0; ··· 905 double Tdmec = 0; 906 double Tdmsks = 0; 907 908 + if (v->GPUVMEnable == true && v->HostVMEnable == true) { 909 + HostVMInefficiencyFactor = v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 910 + HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels; 911 } else { 912 HostVMInefficiencyFactor = 1; 913 HostVMDynamicLevelsTrips = 0; 914 } 
915 916 CalculateDynamicMetadataParameters( 917 + v->MaxInterDCNTileRepeaters, 918 myPipe->DPPCLK, 919 myPipe->DISPCLK, 920 myPipe->DCFCLKDeepSleep, 921 myPipe->PixelClock, 922 myPipe->HTotal, 923 myPipe->VBlank, 924 + v->DynamicMetadataTransmittedBytes[k], 925 + v->DynamicMetadataLinesBeforeActiveRequired[k], 926 myPipe->InterlaceEnable, 927 + v->ProgressiveToInterlaceUnitInOPP, 928 &Tsetup, 929 &Tdmbf, 930 &Tdmec, ··· 932 933 LineTime = myPipe->HTotal / myPipe->PixelClock; 934 trip_to_mem = UrgentLatency; 935 + Tvm_trips = UrgentExtraLatency + trip_to_mem * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 936 937 + if (v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true) { 938 + v->Tdmdl[k] = TWait + Tvm_trips + trip_to_mem; 939 } else { 940 + v->Tdmdl[k] = TWait + UrgentExtraLatency; 941 } 942 943 + if (v->DynamicMetadataEnable[k] == true) { 944 + if (VStartup * LineTime < Tsetup + v->Tdmdl[k] + Tdmbf + Tdmec + Tdmsks) { 945 *NotEnoughTimeForDynamicMetadata = true; 946 } else { 947 *NotEnoughTimeForDynamicMetadata = false; ··· 949 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 950 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 951 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 952 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 953 } 954 } else { 955 *NotEnoughTimeForDynamicMetadata = false; 956 } 957 958 + v->Tdmdl_vm[k] = (v->DynamicMetadataEnable[k] == true && v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true ? TWait + Tvm_trips : 0); 959 960 if (myPipe->ScalerEnabled) 961 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCL; 962 else 963 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCLLBOnly; 964 965 + DPPCycles = DPPCycles + myPipe->NumberOfCursors * v->DPPCLKDelayCNVCCursor; 966 967 + DISPCLKCycles = v->DISPCLKDelaySubtotal; 968 969 if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0) 970 return true; 971 972 + v->DSTXAfterScaler[k] = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 973 + DSCDelay; 974 975 + v->DSTXAfterScaler[k] = v->DSTXAfterScaler[k] + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 976 977 + if (v->OutputFormat[k] == dm_420 || (myPipe->InterlaceEnable && v->ProgressiveToInterlaceUnitInOPP)) 978 + v->DSTYAfterScaler[k] = 1; 979 else 980 + v->DSTYAfterScaler[k] = 0; 981 982 + DSTTotalPixelsAfterScaler = v->DSTYAfterScaler[k] * myPipe->HTotal + v->DSTXAfterScaler[k]; 983 + v->DSTYAfterScaler[k] = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 984 + v->DSTXAfterScaler[k] = DSTTotalPixelsAfterScaler - ((double) (v->DSTYAfterScaler[k] * myPipe->HTotal)); 985 986 MyError = false; 987 ··· 990 Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime; 991 Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime; 992 993 + if (v->GPUVMEnable) { 994 + if (v->GPUVMMaxPageTableLevels >= 3) { 995 + v->Tno_bw[k] = UrgentExtraLatency + trip_to_mem * ((v->GPUVMMaxPageTableLevels - 2) - 1); 996 } else 997 + v->Tno_bw[k] = 0; 998 } else if (!myPipe->DCCEnable) 999 + v->Tno_bw[k] = LineTime; 1000 else 1001 + v->Tno_bw[k] = LineTime / 4; 1002 1003 + dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, v->Tdmdl[k])) / LineTime 1004 + - (v->DSTYAfterScaler[k] + v->DSTXAfterScaler[k] / 
myPipe->HTotal); 1005 dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH 1006 1007 Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC); 1008 Tsw_oto = Lsw_oto * LineTime; 1009 1010 + prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) / Tsw_oto; 1011 1012 + if (v->GPUVMEnable == true) { 1013 + Tvm_oto = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 1014 Tvm_trips, 1015 LineTime / 4.0); 1016 } else 1017 Tvm_oto = LineTime / 4.0; 1018 1019 + if ((v->GPUVMEnable == true || myPipe->DCCEnable == true)) { 1020 Tr0_oto = dml_max3( 1021 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto, 1022 LineTime - Tvm_oto, LineTime / 4); ··· 1042 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 1043 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 1044 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 1045 + dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", v->Tdmdl_vm[k]); 1046 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 1047 + dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", v->DSTXAfterScaler[k]); 1048 + dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)v->DSTYAfterScaler[k]); 1049 1050 *PrefetchBandwidth = 0; 1051 *DestinationLinesToRequestVMInVBlank = 0; ··· 1059 double PrefetchBandwidth3 = 0; 1060 double PrefetchBandwidth4 = 0; 1061 1062 + if (Tpre_rounded - v->Tno_bw[k] > 0) 1063 PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte 1064 + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor 1065 + PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY 1066 + + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) 1067 + / (Tpre_rounded - v->Tno_bw[k]); 1068 else 1069 PrefetchBandwidth1 = 0; 1070 1071 + if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]) > 0) { 1072 + PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]); 1073 } 1074 1075 + if (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded > 0) 1076 PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame * 1077 HostVMInefficiencyFactor + PrefetchSourceLinesY * 1078 swath_width_luma_ub * BytePerPixelY + 1079 PrefetchSourceLinesC * swath_width_chroma_ub * 1080 + v->BytePerPixelC[k]) / 1081 + (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded); 1082 else 1083 PrefetchBandwidth2 = 0; 1084 ··· 1086 PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * 1087 HostVMInefficiencyFactor + PrefetchSourceLinesY * 1088 swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * 1089 + swath_width_chroma_ub * v->BytePerPixelC[k]) / (Tpre_rounded - 1090 Tvm_trips_rounded); 1091 else 1092 PrefetchBandwidth3 = 0; ··· 1096 } 1097 1098 if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0) 1099 + PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * 
swath_width_chroma_ub * v->BytePerPixelC[k]) 1100 / (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded); 1101 else 1102 PrefetchBandwidth4 = 0; ··· 1107 bool Case3OK; 1108 1109 if (PrefetchBandwidth1 > 0) { 1110 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1111 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth1 >= Tr0_trips_rounded) { 1112 Case1OK = true; 1113 } else { ··· 1118 } 1119 1120 if (PrefetchBandwidth2 > 0) { 1121 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1122 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) { 1123 Case2OK = true; 1124 } else { ··· 1129 } 1130 1131 if (PrefetchBandwidth3 > 0) { 1132 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1133 < Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) { 1134 Case3OK = true; 1135 } else { ··· 1152 dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ); 1153 1154 if (prefetch_bw_equ > 0) { 1155 + if (v->GPUVMEnable) { 1156 + Tvm_equ = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4); 1157 } else { 1158 Tvm_equ = LineTime / 4; 1159 } 1160 1161 + if ((v->GPUVMEnable || myPipe->DCCEnable)) { 1162 Tr0_equ = dml_max4( 1163 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ, 1164 Tr0_trips, ··· 1227 } 1228 1229 *RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime; 1230 + *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * v->BytePerPixelC[k] * swath_width_chroma_ub / LineTime; 1231 } else { 1232 MyError = true; 1233 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); ··· 1243 dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1244 dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1245 dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime); 1246 + dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime); 1247 dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n"); 1248 + dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1249 dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow); 1250 1251 } else { ··· 1276 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); 1277 } 1278 1279 + v->prefetch_vmrow_bw[k] = dml_max(prefetch_vm_bw, prefetch_row_bw); 1280 } 1281 1282 if (MyError) { ··· 2437 2438 v->ErrorResult[k] = CalculatePrefetchSchedule( 2439 mode_lib, 2440 + k, 2441 &myPipe, 2442 v->DSCDelay[k], 2443 (unsigned int) (v->SwathWidthY[k] / 
v->HRatio[k]), 2444 dml_min(v->VStartupLines, v->MaxVStartupLines[k]), 2445 v->MaxVStartupLines[k], 2446 v->UrgentLatency, 2447 v->UrgentExtraLatency, 2448 v->TCalc, ··· 2474 v->MaxNumSwathY[k], 2475 v->PrefetchSourceLinesC[k], 2476 v->SwathWidthC[k], 2477 v->VInitPreFillC[k], 2478 v->MaxNumSwathC[k], 2479 v->swath_width_luma_ub[k], ··· 2482 v->SwathHeightY[k], 2483 v->SwathHeightC[k], 2484 TWait, 2485 &v->DestinationLinesForPrefetch[k], 2486 &v->PrefetchBandwidth[k], 2487 &v->DestinationLinesToRequestVMInVBlank[k], ··· 2493 &v->VRatioPrefetchC[k], 2494 &v->RequiredPrefetchPixDataBWLuma[k], 2495 &v->RequiredPrefetchPixDataBWChroma[k], 2496 + &v->NotEnoughTimeForDynamicMetadata[k]); 2497 if (v->BlendingAndTiming[k] == k) { 2498 double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK); 2499 v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k]; ··· 2730 CalculateWatermarksAndDRAMSpeedChangeSupport( 2731 mode_lib, 2732 PrefetchMode, 2733 v->DCFCLK, 2734 v->ReturnBW, 2735 v->UrgentLatency, 2736 v->UrgentExtraLatency, 2737 v->SOCCLK, 2738 v->DCFCLKDeepSleep, 2739 v->DPPPerPlane, 2740 v->DPPCLK, 2741 v->DETBufferSizeY, 2742 v->DETBufferSizeC, 2743 v->SwathHeightY, 2744 v->SwathHeightC, 2745 v->SwathWidthY, 2746 v->SwathWidthC, 2747 v->BytePerPixelDETY, 2748 v->BytePerPixelDETC, 2749 + &DRAMClockChangeSupport); 2750 2751 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 2752 if (v->WritebackEnable[k] == true) { ··· 4770 4771 v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule( 4772 mode_lib, 4773 + k, 4774 &myPipe, 4775 v->DSCDelayPerState[i][k], 4776 v->SwathWidthYThisState[k] / v->HRatio[k], 4777 dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]), 4778 v->MaximumVStartup[i][j][k], 4779 v->UrgLatency[i], 4780 v->ExtraLatency, 4781 v->TimeCalc, ··· 4806 v->MaxNumSwY[k], 4807 v->PrefetchLinesC[i][j][k], 4808 v->SwathWidthCThisState[k], 4809 v->PrefillC[k], 4810 v->MaxNumSwC[k], 4811 v->swath_width_luma_ub_this_state[k], ··· 4814 v->SwathHeightYThisState[k], 4815 v->SwathHeightCThisState[k], 4816 v->TWait, 4817 &v->LineTimesForPrefetch[k], 4818 &v->PrefetchBW[k], 4819 &v->LinesForMetaPTE[k], ··· 4825 &v->VRatioPreC[i][j][k], 4826 &v->RequiredPrefetchPixelDataBWLuma[i][j][k], 4827 &v->RequiredPrefetchPixelDataBWChroma[i][j][k], 4828 + &v->NoTimeForDynamicMetadata[i][j][k]); 4829 } 4830 4831 for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) { ··· 5007 CalculateWatermarksAndDRAMSpeedChangeSupport( 5008 mode_lib, 5009 v->PrefetchModePerState[i][j], 5010 v->DCFCLKState[i][j], 5011 v->ReturnBWPerState[i][j], 5012 v->UrgLatency[i], 5013 v->ExtraLatency, 5014 v->SOCCLKPerState[i], 5015 v->ProjectedDCFCLKDeepSleep[i][j], 5016 v->NoOfDPPThisState, 5017 v->RequiredDPPCLKThisState, 5018 v->DETBufferSizeYThisState, 5019 v->DETBufferSizeCThisState, 5020 v->SwathHeightYThisState, 5021 v->SwathHeightCThisState, 5022 v->SwathWidthYThisState, 5023 v->SwathWidthCThisState, 5024 v->BytePerPixelInDETY, 5025 v->BytePerPixelInDETC, 5026 + &v->DRAMClockChangeSupport[i][j]); 5027 } 5028 } 5029 ··· 5179 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 5180 struct display_mode_lib *mode_lib, 5181 unsigned int PrefetchMode, 5182 double DCFCLK, 5183 double ReturnBW, 5184 double UrgentLatency, 5185 double ExtraLatency, 5186 double SOCCLK, 5187 double DCFCLKDeepSleep, 5188 unsigned int DPPPerPlane[], 5189 double DPPCLK[], 5190 unsigned int DETBufferSizeY[], 5191 unsigned int DETBufferSizeC[], 5192 unsigned int 
SwathHeightY[], 5193 unsigned int SwathHeightC[], 5194 double SwathWidthY[], 5195 double SwathWidthC[], 5196 double BytePerPixelDETY[], 5197 double BytePerPixelDETC[], 5198 + enum clock_change_support *DRAMClockChangeSupport) 5199 { 5200 + struct vba_vars_st *v = &mode_lib->vba; 5201 double EffectiveLBLatencyHidingY = 0; 5202 double EffectiveLBLatencyHidingC = 0; 5203 double LinesInDETY[DC__NUM_DPP__MAX] = { 0 }; ··· 5254 double WritebackDRAMClockChangeLatencyHiding = 0; 5255 unsigned int k, j; 5256 5257 + v->TotalActiveDPP = 0; 5258 + v->TotalDCCActiveDPP = 0; 5259 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5260 + v->TotalActiveDPP = v->TotalActiveDPP + DPPPerPlane[k]; 5261 + if (v->DCCEnable[k] == true) { 5262 + v->TotalDCCActiveDPP = v->TotalDCCActiveDPP + DPPPerPlane[k]; 5263 } 5264 } 5265 5266 + v->UrgentWatermark = UrgentLatency + ExtraLatency; 5267 5268 + v->DRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->UrgentWatermark; 5269 5270 + v->TotalActiveWriteback = 0; 5271 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5272 + if (v->WritebackEnable[k] == true) { 5273 + v->TotalActiveWriteback = v->TotalActiveWriteback + 1; 5274 } 5275 } 5276 5277 + if (v->TotalActiveWriteback <= 1) { 5278 + v->WritebackUrgentWatermark = v->WritebackLatency; 5279 } else { 5280 + v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5281 } 5282 5283 + if (v->TotalActiveWriteback <= 1) { 5284 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency; 5285 } else { 5286 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5287 } 5288 5289 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5290 5291 + v->LBLatencyHidingSourceLinesY = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1); 5292 5293 + v->LBLatencyHidingSourceLinesC = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1); 5294 5295 + EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]); 5296 5297 + EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]); 5298 5299 LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k]; 5300 LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]); 5301 + FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5302 if (BytePerPixelDETC[k] > 0) { 5303 + LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5304 LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]); 5305 + FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k]; 5306 } else { 5307 LinesInDETC = 0; 5308 FullDETBufferingTimeC = 999999; 5309 } 5310 5311 + ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5312 5313 + if (v->NumberOfActivePlanes > 1) { 5314 + ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - 
(1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k]; 5315 } 5316 5317 if (BytePerPixelDETC[k] > 0) { 5318 + ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5319 5320 + if (v->NumberOfActivePlanes > 1) { 5321 + ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k]; 5322 } 5323 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5324 } else { 5325 + v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5326 } 5327 5328 + if (v->WritebackEnable[k] == true) { 5329 5330 + WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4); 5331 + if (v->WritebackPixelFormat[k] == dm_444_64) { 5332 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2; 5333 } 5334 + if (v->WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5335 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2; 5336 } 5337 + WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark; 5338 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(v->ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5339 } 5340 } 5341 5342 + v->MinActiveDRAMClockChangeMargin = 999999; 5343 PlaneWithMinActiveDRAMClockChangeMargin = 0; 5344 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5345 + if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) { 5346 + v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k]; 5347 + if (v->BlendingAndTiming[k] == k) { 5348 PlaneWithMinActiveDRAMClockChangeMargin = k; 5349 } else { 5350 + for (j = 0; j < v->NumberOfActivePlanes; ++j) { 5351 + if (v->BlendingAndTiming[k] == j) { 5352 PlaneWithMinActiveDRAMClockChangeMargin = j; 5353 } 5354 } ··· 5356 } 5357 } 5358 5359 + v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->FinalDRAMClockChangeLatency; 5360 5361 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999; 5362 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5363 + if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5364 + SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k]; 5365 } 5366 } 5367 5368 + v->TotalNumberOfActiveOTG = 0; 5369 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5370 + if (v->BlendingAndTiming[k] == k) { 5371 + v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1; 5372 } 5373 } 5374 5375 + if (v->MinActiveDRAMClockChangeMargin > 0) { 5376 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5377 + } else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode 
== 0)) { 5378 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5379 } else { 5380 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5381 } 5382 5383 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0]; 5384 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5385 if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) { 5386 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k]; 5387 + TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5388 } 5389 } 5390 5391 + v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5392 + v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5393 5394 } 5395
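The two dcn30 DML refactors above drop long argument lists by passing the plane index k into CalculatePrefetchSchedule() and CalculateWatermarksAndDRAMSpeedChangeSupport() and reading per-plane state through the shared vba_vars_st context (the v->... accesses in the new hunks). A minimal sketch of that pattern, using hypothetical names rather than the real DML fields:

	/* Hypothetical context struct standing in for vba_vars_st. */
	struct ctx {
		unsigned int num_planes;
		double htotal[8];
		double pixel_clock[8];
		double watermark[8];		/* per-plane result */
	};

	/* Before: every per-plane value travels as an explicit parameter. */
	static void calc_watermark_by_args(double htotal, double pixel_clock, double *watermark)
	{
		*watermark = htotal / pixel_clock;
	}

	/* After: pass the context and the plane index, read and write members directly. */
	static void calc_watermark_by_index(struct ctx *v, unsigned int k)
	{
		v->watermark[k] = v->htotal[k] / v->pixel_clock[k];
	}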
+1 -27
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1610 struct dc_bios *bios = link->ctx->dc_bios; 1611 struct bp_crtc_source_select crtc_source_select = {0}; 1612 enum engine_id engine_id = link->link_enc->preferred_engine; 1613 - uint8_t bit_depth; 1614 1615 if (dc_is_rgb_signal(pipe_ctx->stream->signal)) 1616 engine_id = link->link_enc->analog_engine; 1617 1618 - switch (pipe_ctx->stream->timing.display_color_depth) { 1619 - case COLOR_DEPTH_UNDEFINED: 1620 - bit_depth = 0; 1621 - break; 1622 - case COLOR_DEPTH_666: 1623 - bit_depth = 6; 1624 - break; 1625 - default: 1626 - case COLOR_DEPTH_888: 1627 - bit_depth = 8; 1628 - break; 1629 - case COLOR_DEPTH_101010: 1630 - bit_depth = 10; 1631 - break; 1632 - case COLOR_DEPTH_121212: 1633 - bit_depth = 12; 1634 - break; 1635 - case COLOR_DEPTH_141414: 1636 - bit_depth = 14; 1637 - break; 1638 - case COLOR_DEPTH_161616: 1639 - bit_depth = 16; 1640 - break; 1641 - } 1642 - 1643 crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst; 1644 - crtc_source_select.bit_depth = bit_depth; 1645 crtc_source_select.engine_id = engine_id; 1646 crtc_source_select.sink_signal = pipe_ctx->stream->signal; 1647
··· 1610 struct dc_bios *bios = link->ctx->dc_bios; 1611 struct bp_crtc_source_select crtc_source_select = {0}; 1612 enum engine_id engine_id = link->link_enc->preferred_engine; 1613 1614 if (dc_is_rgb_signal(pipe_ctx->stream->signal)) 1615 engine_id = link->link_enc->analog_engine; 1616 1617 crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst; 1618 + crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth; 1619 crtc_source_select.engine_id = engine_id; 1620 crtc_source_select.sink_signal = pipe_ctx->stream->signal; 1621
+1 -1
drivers/gpu/drm/amd/display/include/bios_parser_types.h
··· 136 enum engine_id engine_id; 137 enum controller_id controller_id; 138 enum signal_type sink_signal; 139 - uint8_t bit_depth; 140 }; 141 142 struct bp_transmitter_control {
··· 136 enum engine_id engine_id; 137 enum controller_id controller_id; 138 enum signal_type sink_signal; 139 + enum dc_color_depth color_depth; 140 }; 141 142 struct bp_transmitter_control {
+15 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 2455 } 2456 2457 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2458 - if (pptable->PcieGenSpeed[i] > pcie_gen_cap || 2459 - pptable->PcieLaneCount[i] > pcie_width_cap) { 2460 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2461 - pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2462 - pcie_gen_cap : pptable->PcieGenSpeed[i]; 2463 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2464 - pptable->PcieLaneCount[i] > pcie_width_cap ? 2465 - pcie_width_cap : pptable->PcieLaneCount[i]; 2466 - smu_pcie_arg = i << 16; 2467 - smu_pcie_arg |= pcie_gen_cap << 8; 2468 - smu_pcie_arg |= pcie_width_cap; 2469 - ret = smu_cmn_send_smc_msg_with_param(smu, 2470 - SMU_MSG_OverridePcieParameters, 2471 - smu_pcie_arg, 2472 - NULL); 2473 - if (ret) 2474 - break; 2475 - } 2476 } 2477 2478 return ret;
··· 2455 } 2456 2457 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2458 + dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2459 + pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2460 + pcie_gen_cap : pptable->PcieGenSpeed[i]; 2461 + dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2462 + pptable->PcieLaneCount[i] > pcie_width_cap ? 2463 + pcie_width_cap : pptable->PcieLaneCount[i]; 2464 + smu_pcie_arg = i << 16; 2465 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_gen[i] << 8; 2466 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_lane[i]; 2467 + ret = smu_cmn_send_smc_msg_with_param(smu, 2468 + SMU_MSG_OverridePcieParameters, 2469 + smu_pcie_arg, 2470 + NULL); 2471 + if (ret) 2472 + return ret; 2473 } 2474 2475 return ret;
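The navi1x change above now sends SMU_MSG_OverridePcieParameters for every link level and packs the clamped table values instead of the raw caps. As a worked illustration of the argument packing only (the bit layout is taken from the loop above; the numbers are invented):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t level = 1;	/* link level index (example) */
		uint32_t gen = 3;	/* clamped PCIe gen speed (example) */
		uint32_t lanes = 16;	/* clamped PCIe lane value (example) */

		/* level in bits 16+, gen speed in bits 8-15, lane value in bits 0-7 */
		uint32_t smu_pcie_arg = (level << 16) | (gen << 8) | lanes;

		printf("0x%08x\n", smu_pcie_arg);	/* prints 0x00010310 */
		return 0;
	}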
+6 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2923 break; 2924 } 2925 2926 - if (!ret) 2927 msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS); 2928 2929 return ret; 2930 }
··· 2923 break; 2924 } 2925 2926 + if (!ret) { 2927 + /* disable mmio access while doing mode 1 reset*/ 2928 + smu->adev->no_hw_access = true; 2929 + /* ensure no_hw_access is globally visible before any MMIO */ 2930 + smp_mb(); 2931 msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS); 2932 + } 2933 2934 return ret; 2935 }
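The Mode 1 reset fixes (here and in the smu14 hunk below) set adev->no_hw_access before sleeping through the reset so register accessors back off while the ASIC is down. A rough sketch of how such a flag is typically honoured by an MMIO read helper; this is an illustration, not the amdgpu accessor itself:

	#include <stdbool.h>
	#include <stdint.h>

	struct dev_ctx {
		bool no_hw_access;		/* set before a reset, cleared once it completes */
		volatile uint32_t *mmio;
	};

	/* Illustrative register read: skip the access entirely while the flag is set. */
	static uint32_t reg_read(struct dev_ctx *d, uint32_t offset)
	{
		if (d->no_hw_access)
			return 0;	/* device is resetting; touching MMIO now could hang the bus */

		return d->mmio[offset / sizeof(uint32_t)];
	}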
+7 -2
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2143 2144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset); 2145 if (!ret) { 2146 - if (amdgpu_emu_mode == 1) 2147 msleep(50000); 2148 - else 2149 msleep(1000); 2150 } 2151 2152 return ret;
··· 2143 2144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset); 2145 if (!ret) { 2146 + if (amdgpu_emu_mode == 1) { 2147 msleep(50000); 2148 + } else { 2149 + /* disable mmio access while doing mode 1 reset*/ 2150 + smu->adev->no_hw_access = true; 2151 + /* ensure no_hw_access is globally visible before any MMIO */ 2152 + smp_mb(); 2153 msleep(1000); 2154 + } 2155 } 2156 2157 return ret;
+99 -23
drivers/gpu/drm/drm_atomic_helper.c
··· 1162 new_state->self_refresh_active; 1163 } 1164 1165 - static void 1166 - encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state) 1167 { 1168 struct drm_connector *connector; 1169 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1239 } 1240 } 1241 } 1242 1243 - static void 1244 - crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1245 { 1246 struct drm_crtc *crtc; 1247 struct drm_crtc_state *old_crtc_state, *new_crtc_state; ··· 1301 drm_crtc_vblank_put(crtc); 1302 } 1303 } 1304 1305 - static void 1306 - encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1307 { 1308 struct drm_connector *connector; 1309 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1363 drm_bridge_put(bridge); 1364 } 1365 } 1366 1367 static void 1368 disable_outputs(struct drm_device *dev, struct drm_atomic_state *state) 1369 { 1370 - encoder_bridge_disable(dev, state); 1371 1372 - crtc_disable(dev, state); 1373 1374 - encoder_bridge_post_disable(dev, state); 1375 } 1376 1377 /** ··· 1475 } 1476 EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants); 1477 1478 - static void 1479 - crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1480 { 1481 struct drm_crtc *crtc; 1482 struct drm_crtc_state *new_crtc_state; ··· 1546 drm_bridge_put(bridge); 1547 } 1548 } 1549 1550 /** 1551 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs ··· 1570 drm_atomic_helper_update_legacy_modeset_state(dev, state); 1571 drm_atomic_helper_calc_timestamping_constants(state); 1572 1573 - crtc_set_mode(dev, state); 1574 } 1575 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables); 1576 1577 - static void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1578 - struct drm_atomic_state *state) 1579 { 1580 struct drm_connector *connector; 1581 struct drm_connector_state *new_conn_state; ··· 1603 } 1604 } 1605 } 1606 1607 - static void 1608 - encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1609 { 1610 struct drm_connector *connector; 1611 struct drm_connector_state *new_conn_state; ··· 1645 drm_bridge_put(bridge); 1646 } 1647 } 1648 1649 - static void 1650 - crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1651 { 1652 struct drm_crtc *crtc; 1653 struct drm_crtc_state *old_crtc_state; ··· 1685 } 1686 } 1687 } 1688 1689 - static void 1690 - encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1691 { 1692 struct drm_connector *connector; 1693 struct drm_connector_state *new_conn_state; ··· 1739 drm_bridge_put(bridge); 1740 } 1741 } 1742 1743 /** 1744 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs ··· 1758 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 1759 struct drm_atomic_state *state) 1760 { 1761 - encoder_bridge_pre_enable(dev, state); 1762 1763 - crtc_enable(dev, state); 1764 1765 - encoder_bridge_enable(dev, state); 1766 1767 drm_atomic_helper_commit_writebacks(dev, state); 1768 }
··· 1162 new_state->self_refresh_active; 1163 } 1164 1165 + /** 1166 + * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder 1167 + * @dev: DRM device 1168 + * @state: the driver state object 1169 + * 1170 + * Loops over all connectors in the current state and if the CRTC needs 1171 + * it, disables the bridge chain all the way, then disables the encoder 1172 + * afterwards. 1173 + */ 1174 + void 1175 + drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 1176 + struct drm_atomic_state *state) 1177 { 1178 struct drm_connector *connector; 1179 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1229 } 1230 } 1231 } 1232 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable); 1233 1234 + /** 1235 + * drm_atomic_helper_commit_crtc_disable - disable CRTCs 1236 + * @dev: DRM device 1237 + * @state: the driver state object 1238 + * 1239 + * Loops over all CRTCs in the current state and if the CRTC needs 1240 + * it, disables it. 1241 + */ 1242 + void 1243 + drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1244 { 1245 struct drm_crtc *crtc; 1246 struct drm_crtc_state *old_crtc_state, *new_crtc_state; ··· 1282 drm_crtc_vblank_put(crtc); 1283 } 1284 } 1285 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable); 1286 1287 + /** 1288 + * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges 1289 + * @dev: DRM device 1290 + * @state: the driver state object 1291 + * 1292 + * Loops over all connectors in the current state and if the CRTC needs 1293 + * it, post-disables all encoder bridges. 1294 + */ 1295 + void 1296 + drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1297 { 1298 struct drm_connector *connector; 1299 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1335 drm_bridge_put(bridge); 1336 } 1337 } 1338 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable); 1339 1340 static void 1341 disable_outputs(struct drm_device *dev, struct drm_atomic_state *state) 1342 { 1343 + drm_atomic_helper_commit_encoder_bridge_disable(dev, state); 1344 1345 + drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state); 1346 1347 + drm_atomic_helper_commit_crtc_disable(dev, state); 1348 } 1349 1350 /** ··· 1446 } 1447 EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants); 1448 1449 + /** 1450 + * drm_atomic_helper_commit_crtc_set_mode - set the new mode 1451 + * @dev: DRM device 1452 + * @state: the driver state object 1453 + * 1454 + * Loops over all connectors in the current state and if the mode has 1455 + * changed, changes the mode of the CRTC, then calls down the bridge 1456 + * chain and changes the mode in all bridges as well. 
1457 + */ 1458 + void 1459 + drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1460 { 1461 struct drm_crtc *crtc; 1462 struct drm_crtc_state *new_crtc_state; ··· 1508 drm_bridge_put(bridge); 1509 } 1510 } 1511 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode); 1512 1513 /** 1514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs ··· 1531 drm_atomic_helper_update_legacy_modeset_state(dev, state); 1532 drm_atomic_helper_calc_timestamping_constants(state); 1533 1534 + drm_atomic_helper_commit_crtc_set_mode(dev, state); 1535 } 1536 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables); 1537 1538 + /** 1539 + * drm_atomic_helper_commit_writebacks - issue writebacks 1540 + * @dev: DRM device 1541 + * @state: atomic state object being committed 1542 + * 1543 + * This loops over the connectors, checks if the new state requires 1544 + * a writeback job to be issued and in that case issues an atomic 1545 + * commit on each connector. 1546 + */ 1547 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1548 + struct drm_atomic_state *state) 1549 { 1550 struct drm_connector *connector; 1551 struct drm_connector_state *new_conn_state; ··· 1555 } 1556 } 1557 } 1558 + EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks); 1559 1560 + /** 1561 + * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges 1562 + * @dev: DRM device 1563 + * @state: atomic state object being committed 1564 + * 1565 + * This loops over the connectors and if the CRTC needs it, pre-enables 1566 + * the entire bridge chain. 1567 + */ 1568 + void 1569 + drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1570 { 1571 struct drm_connector *connector; 1572 struct drm_connector_state *new_conn_state; ··· 1588 drm_bridge_put(bridge); 1589 } 1590 } 1591 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable); 1592 1593 + /** 1594 + * drm_atomic_helper_commit_crtc_enable - enables the CRTCs 1595 + * @dev: DRM device 1596 + * @state: atomic state object being committed 1597 + * 1598 + * This loops over CRTCs in the new state, and if the CRTC needs 1599 + * it, enables it. 1600 + */ 1601 + void 1602 + drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1603 { 1604 struct drm_crtc *crtc; 1605 struct drm_crtc_state *old_crtc_state; ··· 1619 } 1620 } 1621 } 1622 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable); 1623 1624 + /** 1625 + * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges 1626 + * @dev: DRM device 1627 + * @state: atomic state object being committed 1628 + * 1629 + * This loops over all connectors in the new state, and if the CRTC needs 1630 + * it, enables the entire bridge chain. 
1631 + */ 1632 + void 1633 + drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1634 { 1635 struct drm_connector *connector; 1636 struct drm_connector_state *new_conn_state; ··· 1664 drm_bridge_put(bridge); 1665 } 1666 } 1667 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable); 1668 1669 /** 1670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs ··· 1682 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 1683 struct drm_atomic_state *state) 1684 { 1685 + drm_atomic_helper_commit_crtc_enable(dev, state); 1686 1687 + drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state); 1688 1689 + drm_atomic_helper_commit_encoder_bridge_enable(dev, state); 1690 1691 drm_atomic_helper_commit_writebacks(dev, state); 1692 }
+10
drivers/gpu/drm/drm_fb_helper.c
··· 366 { 367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work); 368 369 drm_fb_helper_fb_dirty(helper); 370 } 371 ··· 734 if (suspend) { 735 if (fb_helper->info->state != FBINFO_STATE_RUNNING) 736 return; 737 738 console_lock(); 739
··· 366 { 367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work); 368 369 + if (helper->info->state != FBINFO_STATE_RUNNING) 370 + return; 371 + 372 drm_fb_helper_fb_dirty(helper); 373 } 374 ··· 731 if (suspend) { 732 if (fb_helper->info->state != FBINFO_STATE_RUNNING) 733 return; 734 + 735 + /* 736 + * Cancel pending damage work. During GPU reset, VBlank 737 + * interrupts are disabled and drm_fb_helper_fb_dirty() 738 + * would wait for VBlank timeout otherwise. 739 + */ 740 + cancel_work_sync(&fb_helper->damage_work); 741 742 console_lock(); 743
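The fb-helper fix has the damage worker bail out while the fbdev info is suspended and cancels any queued work before suspending, because drm_fb_helper_fb_dirty() would otherwise sit in a vblank wait that can never complete during a reset. A generic sketch of that quiesce-before-suspend ordering, with hypothetical names:

	#include <linux/workqueue.h>

	/* Hypothetical driver context; names are illustrative only. */
	struct my_fbdev {
		struct work_struct damage_work;
		bool suspended;			/* checked at the top of the worker */
	};

	static void my_fbdev_suspend(struct my_fbdev *fb)
	{
		fb->suspended = true;			/* damage work returns early from now on */
		cancel_work_sync(&fb->damage_work);	/* nothing queued can still wait on a vblank */
		/* ... continue with the actual suspend or reset ... */
	}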
+1 -1
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 1692 { 1693 struct hdmi_context *hdata = arg; 1694 1695 - mod_delayed_work(system_wq, &hdata->hotplug_work, 1696 msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); 1697 1698 return IRQ_HANDLED;
··· 1692 { 1693 struct hdmi_context *hdata = arg; 1694 1695 + mod_delayed_work(system_percpu_wq, &hdata->hotplug_work, 1696 msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); 1697 1698 return IRQ_HANDLED;
-6
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1002 return PTR_ERR(dsi->next_bridge); 1003 } 1004 1005 - /* 1006 - * set flag to request the DSI host bridge be pre-enabled before device bridge 1007 - * in the chain, so the DSI host is ready when the device bridge is pre-enabled 1008 - */ 1009 - dsi->next_bridge->pre_enable_prev_first = true; 1010 - 1011 drm_bridge_add(&dsi->bridge); 1012 1013 ret = component_add(host->dev, &mtk_dsi_component_ops);
··· 1002 return PTR_ERR(dsi->next_bridge); 1003 } 1004 1005 drm_bridge_add(&dsi->bridge); 1006 1007 ret = component_add(host->dev, &mtk_dsi_component_ops);
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ad102.c
··· 30 31 .booter.ctor = ga102_gsp_booter_ctor, 32 33 .dtor = r535_gsp_dtor, 34 .oneinit = tu102_gsp_oneinit, 35 .init = tu102_gsp_init,
··· 30 31 .booter.ctor = ga102_gsp_booter_ctor, 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 36 .dtor = r535_gsp_dtor, 37 .oneinit = tu102_gsp_oneinit, 38 .init = tu102_gsp_init,
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 337 } 338 339 int 340 - nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 341 { 342 return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb", 343 NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB); 344 - } 345 - 346 - void 347 - nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 348 - { 349 - nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 350 } 351 352 int
··· 337 } 338 339 int 340 + nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp) 341 { 342 return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb", 343 NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB); 344 } 345 346 int
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga100.c
··· 47 48 .booter.ctor = tu102_gsp_booter_ctor, 49 50 .dtor = r535_gsp_dtor, 51 .oneinit = tu102_gsp_oneinit, 52 .init = tu102_gsp_init,
··· 47 48 .booter.ctor = tu102_gsp_booter_ctor, 49 50 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 51 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 52 + 53 .dtor = r535_gsp_dtor, 54 .oneinit = tu102_gsp_oneinit, 55 .init = tu102_gsp_init,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga102.c
··· 158 159 .booter.ctor = ga102_gsp_booter_ctor, 160 161 .dtor = r535_gsp_dtor, 162 .oneinit = tu102_gsp_oneinit, 163 .init = tu102_gsp_init,
··· 158 159 .booter.ctor = ga102_gsp_booter_ctor, 160 161 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 162 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 163 + 164 .dtor = r535_gsp_dtor, 165 .oneinit = tu102_gsp_oneinit, 166 .init = tu102_gsp_init,
+21 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
··· 7 8 int nvkm_gsp_fwsec_frts(struct nvkm_gsp *); 9 10 - int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 11 int nvkm_gsp_fwsec_sb(struct nvkm_gsp *); 12 - void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 13 14 struct nvkm_gsp_fwif { 15 int version; ··· 51 struct nvkm_falcon *, struct nvkm_falcon_fw *); 52 } booter; 53 54 void (*dtor)(struct nvkm_gsp *); 55 int (*oneinit)(struct nvkm_gsp *); 56 int (*init)(struct nvkm_gsp *); ··· 71 extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec; 72 int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *, 73 struct nvkm_falcon *, struct nvkm_falcon_fw *); 74 int tu102_gsp_oneinit(struct nvkm_gsp *); 75 int tu102_gsp_init(struct nvkm_gsp *); 76 int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); ··· 96 97 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 98 struct nvkm_gsp **); 99 100 extern const struct nvkm_gsp_func gv100_gsp; 101 #endif
··· 7 8 int nvkm_gsp_fwsec_frts(struct nvkm_gsp *); 9 10 int nvkm_gsp_fwsec_sb(struct nvkm_gsp *); 11 + int nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp); 12 13 struct nvkm_gsp_fwif { 14 int version; ··· 52 struct nvkm_falcon *, struct nvkm_falcon_fw *); 53 } booter; 54 55 + struct { 56 + int (*ctor)(struct nvkm_gsp *); 57 + void (*dtor)(struct nvkm_gsp *); 58 + } fwsec_sb; 59 + 60 void (*dtor)(struct nvkm_gsp *); 61 int (*oneinit)(struct nvkm_gsp *); 62 int (*init)(struct nvkm_gsp *); ··· 67 extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec; 68 int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *, 69 struct nvkm_falcon *, struct nvkm_falcon_fw *); 70 + int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 71 + void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 72 int tu102_gsp_oneinit(struct nvkm_gsp *); 73 int tu102_gsp_init(struct nvkm_gsp *); 74 int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); ··· 90 91 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 92 struct nvkm_gsp **); 93 + 94 + static inline int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 95 + { 96 + if (gsp->func->fwsec_sb.ctor) 97 + return gsp->func->fwsec_sb.ctor(gsp); 98 + return 0; 99 + } 100 + 101 + static inline void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 102 + { 103 + if (gsp->func->fwsec_sb.dtor) 104 + gsp->func->fwsec_sb.dtor(gsp); 105 + } 106 107 extern const struct nvkm_gsp_func gv100_gsp; 108 #endif
+15
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
··· 30 #include <nvfw/fw.h> 31 #include <nvfw/hs.h> 32 33 static int 34 tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1) 35 { ··· 381 .sig_section = ".fwsignature_tu10x", 382 383 .booter.ctor = tu102_gsp_booter_ctor, 384 385 .dtor = r535_gsp_dtor, 386 .oneinit = tu102_gsp_oneinit,
··· 30 #include <nvfw/fw.h> 31 #include <nvfw/hs.h> 32 33 + int 34 + tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 35 + { 36 + return nvkm_gsp_fwsec_sb_init(gsp); 37 + } 38 + 39 + void 40 + tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 41 + { 42 + nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 43 + } 44 + 45 static int 46 tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1) 47 { ··· 369 .sig_section = ".fwsignature_tu10x", 370 371 .booter.ctor = tu102_gsp_booter_ctor, 372 + 373 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 374 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 375 376 .dtor = r535_gsp_dtor, 377 .oneinit = tu102_gsp_oneinit,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu116.c
··· 30 31 .booter.ctor = tu102_gsp_booter_ctor, 32 33 .dtor = r535_gsp_dtor, 34 .oneinit = tu102_gsp_oneinit, 35 .init = tu102_gsp_init,
··· 30 31 .booter.ctor = tu102_gsp_booter_ctor, 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 36 .dtor = r535_gsp_dtor, 37 .oneinit = tu102_gsp_oneinit, 38 .init = tu102_gsp_init,
+1 -1
drivers/gpu/drm/pl111/pl111_drv.c
··· 295 variant->name, priv); 296 if (ret != 0) { 297 dev_err(dev, "%s failed irq %d\n", __func__, ret); 298 - return ret; 299 } 300 301 ret = pl111_modeset_init(drm);
··· 295 variant->name, priv); 296 if (ret != 0) { 297 dev_err(dev, "%s failed irq %d\n", __func__, ret); 298 + goto dev_put; 299 } 300 301 ret = pl111_modeset_init(drm);
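The pl111 probe fix routes the IRQ failure through the existing unwind label instead of returning directly, so the reference taken earlier in probe is dropped. The general shape of that goto-based unwinding is roughly the following; the struct, helper, and label names are made up:

	struct my_device { int dummy; };

	static int my_get_resources(struct my_device *dev) { (void)dev; return 0; }
	static void my_put_resources(struct my_device *dev) { (void)dev; }
	static int my_request_irq(struct my_device *dev) { (void)dev; return 0; }

	static int my_probe(struct my_device *dev)
	{
		int ret;

		ret = my_get_resources(dev);	/* takes a reference / maps resources */
		if (ret)
			return ret;		/* nothing to undo yet */

		ret = my_request_irq(dev);
		if (ret)
			goto put_resources;	/* undo what already succeeded */

		return 0;

	put_resources:
		my_put_resources(dev);
		return ret;
	}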
+1 -1
drivers/gpu/drm/radeon/pptable.h
··· 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 UCHAR ucEntrySize; 452 453 - UCHAR clockInfo[] __counted_by(ucNumEntries); 454 }ClockInfoArray; 455 456 typedef struct _NonClockInfoArray{
··· 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 UCHAR ucEntrySize; 452 453 + UCHAR clockInfo[] /*__counted_by(ucNumEntries)*/; 454 }ClockInfoArray; 455 456 typedef struct _NonClockInfoArray{
+27 -3
drivers/gpu/drm/tidss/tidss_kms.c
··· 26 27 tidss_runtime_get(tidss); 28 29 - drm_atomic_helper_commit_modeset_disables(ddev, old_state); 30 - drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY); 31 - drm_atomic_helper_commit_modeset_enables(ddev, old_state); 32 33 drm_atomic_helper_commit_hw_done(old_state); 34 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
··· 26 27 tidss_runtime_get(tidss); 28 29 + /* 30 + * TI's OLDI and DSI encoders need to be set up before the crtc is 31 + * enabled. Thus drm_atomic_helper_commit_modeset_enables() and 32 + * drm_atomic_helper_commit_modeset_disables() cannot be used here, as 33 + * they enable the crtc before bridges' pre-enable, and disable the crtc 34 + * after bridges' post-disable. 35 + * 36 + * Open code the functions here and first call the bridges' pre-enables, 37 + * then crtc enable, then bridges' enable (and vice versa for 38 + * disable). 39 + */ 40 41 + drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state); 42 + drm_atomic_helper_commit_crtc_disable(ddev, old_state); 43 + drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state); 44 45 + drm_atomic_helper_update_legacy_modeset_state(ddev, old_state); 46 + drm_atomic_helper_calc_timestamping_constants(old_state); 47 + drm_atomic_helper_commit_crtc_set_mode(ddev, old_state); 48 49 + drm_atomic_helper_commit_planes(ddev, old_state, 50 + DRM_PLANE_COMMIT_ACTIVE_ONLY); 51 52 + drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state); 53 + drm_atomic_helper_commit_crtc_enable(ddev, old_state); 54 + drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state); 55 + drm_atomic_helper_commit_writebacks(ddev, old_state); 56 57 drm_atomic_helper_commit_hw_done(old_state); 58 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+1 -1
drivers/gpu/nova-core/Kconfig
··· 3 depends on 64BIT 4 depends on PCI 5 depends on RUST 6 - depends on RUST_FW_LOADER_ABSTRACTIONS 7 select AUXILIARY_BUS 8 default n 9 help
··· 3 depends on 64BIT 4 depends on PCI 5 depends on RUST 6 + select RUST_FW_LOADER_ABSTRACTIONS 7 select AUXILIARY_BUS 8 default n 9 help
+8 -6
drivers/gpu/nova-core/gsp/cmdq.rs
··· 588 header.length(), 589 ); 590 591 // Check that the driver read area is large enough for the message. 592 - if slice_1.len() + slice_2.len() < header.length() { 593 return Err(EIO); 594 } 595 596 // Cut the message slices down to the actual length of the message. 597 - let (slice_1, slice_2) = if slice_1.len() > header.length() { 598 - // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`. 599 - (slice_1.split_at(header.length()).0, &slice_2[0..0]) 600 } else { 601 ( 602 slice_1, 603 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as 604 - // large as `msg_header.length()`. 605 - slice_2.split_at(header.length() - slice_1.len()).0, 606 ) 607 }; 608
··· 588 header.length(), 589 ); 590 591 + let payload_length = header.payload_length(); 592 + 593 // Check that the driver read area is large enough for the message. 594 + if slice_1.len() + slice_2.len() < payload_length { 595 return Err(EIO); 596 } 597 598 // Cut the message slices down to the actual length of the message. 599 + let (slice_1, slice_2) = if slice_1.len() > payload_length { 600 + // PANIC: we checked above that `slice_1` is at least as long as `payload_length`. 601 + (slice_1.split_at(payload_length).0, &slice_2[0..0]) 602 } else { 603 ( 604 slice_1, 605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as 606 + // large as `payload_length`. 607 + slice_2.split_at(payload_length - slice_1.len()).0, 608 ) 609 }; 610
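The cmdq fix clamps the two views of the receive area (a message may wrap around the shared ring, hence two slices) to the payload length rather than to the full message length reported by header.length(). The same slicing logic written out in C, as an illustration rather than a translation of the driver code:

	#include <stddef.h>
	#include <string.h>

	/*
	 * A wrapped ring-buffer message is visible as two chunks; copy out exactly
	 * payload_len bytes (the advertised length minus the headers).
	 */
	static int read_payload(const unsigned char *chunk1, size_t len1,
				const unsigned char *chunk2, size_t len2,
				size_t payload_len, unsigned char *out)
	{
		if (len1 + len2 < payload_len)
			return -1;	/* read area too small for the advertised payload */

		if (len1 >= payload_len) {
			memcpy(out, chunk1, payload_len);		/* payload ends before the wrap */
		} else {
			memcpy(out, chunk1, len1);
			memcpy(out + len1, chunk2, payload_len - len1);	/* remainder after the wrap */
		}
		return 0;
	}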
+38 -40
drivers/gpu/nova-core/gsp/fw.rs
··· 141 // are valid. 142 unsafe impl FromBytes for GspFwWprMeta {} 143 144 - type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1; 145 - type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 146 147 impl GspFwWprMeta { 148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the ··· 150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self { 151 Self(bindings::GspFwWprMeta { 152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified. 153 - magic: r570_144::GSP_FW_WPR_META_MAGIC as u64, 154 - revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION), 155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(), 156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size), 157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(), ··· 315 #[repr(u32)] 316 pub(crate) enum SeqBufOpcode { 317 // Core operation opcodes 318 - CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 - CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 - CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 - CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 322 323 // Delay opcode 324 - DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 325 326 // Register operation opcodes 327 - RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 - RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 - RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 - RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 331 } 332 333 impl fmt::Display for SeqBufOpcode { ··· 351 352 fn try_from(value: u32) -> Result<SeqBufOpcode> { 353 match value { 354 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 355 Ok(SeqBufOpcode::CoreReset) 356 } 357 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 358 Ok(SeqBufOpcode::CoreResume) 359 } 360 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 361 Ok(SeqBufOpcode::CoreStart) 362 } 363 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 364 Ok(SeqBufOpcode::CoreWaitForHalt) 365 } 366 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 368 Ok(SeqBufOpcode::RegModify) 369 } 370 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 373 _ => Err(EINVAL), 374 } 375 } ··· 385 /// Wrapper for GSP sequencer register write payload. 386 #[repr(transparent)] 387 #[derive(Copy, Clone)] 388 - pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 389 390 impl RegWritePayload { 391 /// Returns the register address. ··· 408 /// Wrapper for GSP sequencer register modify payload. 409 #[repr(transparent)] 410 #[derive(Copy, Clone)] 411 - pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 412 413 impl RegModifyPayload { 414 /// Returns the register address. ··· 436 /// Wrapper for GSP sequencer register poll payload. 
437 #[repr(transparent)] 438 #[derive(Copy, Clone)] 439 - pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 440 441 impl RegPollPayload { 442 /// Returns the register address. ··· 469 /// Wrapper for GSP sequencer delay payload. 470 #[repr(transparent)] 471 #[derive(Copy, Clone)] 472 - pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 473 474 impl DelayUsPayload { 475 /// Returns the delay value in microseconds. ··· 487 /// Wrapper for GSP sequencer register store payload. 488 #[repr(transparent)] 489 #[derive(Copy, Clone)] 490 - pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 491 492 impl RegStorePayload { 493 /// Returns the register address. ··· 510 511 /// Wrapper for GSP sequencer buffer command. 512 #[repr(transparent)] 513 - pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD); 514 515 impl SequencerBufferCmd { 516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid. ··· 612 613 /// Wrapper for GSP run CPU sequencer RPC. 614 #[repr(transparent)] 615 - pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00); 616 617 impl RunCpuSequencer { 618 /// Returns the command index. ··· 797 } 798 } 799 800 - // SAFETY: We can't derive the Zeroable trait for this binding because the 801 - // procedural macro doesn't support the syntax used by bindgen to create the 802 - // __IncompleteArrayField types. So instead we implement it here, which is safe 803 - // because these are explicitly padded structures only containing types for 804 - // which any bit pattern, including all zeros, is valid. 805 - unsafe impl Zeroable for bindings::rpc_message_header_v {} 806 - 807 /// GSP Message Element. 808 /// 809 /// This is essentially a message header expected to be followed by the message data. ··· 846 self.inner.checkSum = checksum; 847 } 848 849 - /// Returns the total length of the message. 850 pub(crate) fn length(&self) -> usize { 851 - // `rpc.length` includes the length of the GspRpcHeader but not the message header. 852 - size_of::<Self>() - size_of::<bindings::rpc_message_header_v>() 853 - + num::u32_as_usize(self.inner.rpc.length) 854 } 855 856 // Returns the sequence number of the message.
··· 141 // are valid. 142 unsafe impl FromBytes for GspFwWprMeta {} 143 144 + type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1; 145 + type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 146 147 impl GspFwWprMeta { 148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the ··· 150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self { 151 Self(bindings::GspFwWprMeta { 152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified. 153 + magic: bindings::GSP_FW_WPR_META_MAGIC as u64, 154 + revision: u64::from(bindings::GSP_FW_WPR_META_REVISION), 155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(), 156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size), 157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(), ··· 315 #[repr(u32)] 316 pub(crate) enum SeqBufOpcode { 317 // Core operation opcodes 318 + CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 + CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 + CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 + CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 322 323 // Delay opcode 324 + DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 325 326 // Register operation opcodes 327 + RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 + RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 + RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 + RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 331 } 332 333 impl fmt::Display for SeqBufOpcode { ··· 351 352 fn try_from(value: u32) -> Result<SeqBufOpcode> { 353 match value { 354 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 355 Ok(SeqBufOpcode::CoreReset) 356 } 357 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 358 Ok(SeqBufOpcode::CoreResume) 359 } 360 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 361 Ok(SeqBufOpcode::CoreStart) 362 } 363 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 364 Ok(SeqBufOpcode::CoreWaitForHalt) 365 } 366 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 368 Ok(SeqBufOpcode::RegModify) 369 } 370 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 373 _ => Err(EINVAL), 374 } 375 } ··· 385 /// Wrapper for GSP sequencer register write payload. 386 #[repr(transparent)] 387 #[derive(Copy, Clone)] 388 + pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 389 390 impl RegWritePayload { 391 /// Returns the register address. ··· 408 /// Wrapper for GSP sequencer register modify payload. 409 #[repr(transparent)] 410 #[derive(Copy, Clone)] 411 + pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 412 413 impl RegModifyPayload { 414 /// Returns the register address. ··· 436 /// Wrapper for GSP sequencer register poll payload. 
437 #[repr(transparent)] 438 #[derive(Copy, Clone)] 439 + pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 440 441 impl RegPollPayload { 442 /// Returns the register address. ··· 469 /// Wrapper for GSP sequencer delay payload. 470 #[repr(transparent)] 471 #[derive(Copy, Clone)] 472 + pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 473 474 impl DelayUsPayload { 475 /// Returns the delay value in microseconds. ··· 487 /// Wrapper for GSP sequencer register store payload. 488 #[repr(transparent)] 489 #[derive(Copy, Clone)] 490 + pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 491 492 impl RegStorePayload { 493 /// Returns the register address. ··· 510 511 /// Wrapper for GSP sequencer buffer command. 512 #[repr(transparent)] 513 + pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD); 514 515 impl SequencerBufferCmd { 516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid. ··· 612 613 /// Wrapper for GSP run CPU sequencer RPC. 614 #[repr(transparent)] 615 + pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00); 616 617 impl RunCpuSequencer { 618 /// Returns the command index. ··· 797 } 798 } 799 800 /// GSP Message Element. 801 /// 802 /// This is essentially a message header expected to be followed by the message data. ··· 853 self.inner.checkSum = checksum; 854 } 855 856 + /// Returns the length of the message's payload. 857 + pub(crate) fn payload_length(&self) -> usize { 858 + // `rpc.length` includes the length of the RPC message header. 859 + num::u32_as_usize(self.inner.rpc.length) 860 + .saturating_sub(size_of::<bindings::rpc_message_header_v>()) 861 + } 862 + 863 + /// Returns the total length of the message, message and RPC headers included. 864 pub(crate) fn length(&self) -> usize { 865 + size_of::<Self>() + self.payload_length() 866 } 867 868 // Returns the sequence number of the message.
+7 -4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
··· 24 unreachable_pub, 25 unsafe_op_in_unsafe_fn 26 )] 27 - use kernel::{ 28 - ffi, 29 - prelude::Zeroable, // 30 - }; 31 include!("r570_144/bindings.rs");
··· 24 unreachable_pub, 25 unsafe_op_in_unsafe_fn 26 )] 27 + use kernel::ffi; 28 + use pin_init::MaybeZeroable; 29 + 30 include!("r570_144/bindings.rs"); 31 + 32 + // SAFETY: This type has a size of zero, so its inclusion into another type should not affect that 33 + // type's ability to implement `Zeroable`. 34 + unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
+59 -46
drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
··· 320 pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131; 321 pub type _bindgen_ty_3 = ffi::c_uint; 322 #[repr(C)] 323 - #[derive(Debug, Default, Copy, Clone)] 324 pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS { 325 pub totalVFs: u32_, 326 pub firstVfOffset: u32_, 327 pub vfFeatureMask: u32_, 328 pub FirstVFBar0Address: u64_, 329 pub FirstVFBar1Address: u64_, 330 pub FirstVFBar2Address: u64_, ··· 341 pub bClientRmAllocatedCtxBuffer: u8_, 342 pub bNonPowerOf2ChannelCountSupported: u8_, 343 pub bVfResizableBAR1Supported: u8_, 344 } 345 #[repr(C)] 346 - #[derive(Debug, Default, Copy, Clone)] 347 pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS { 348 pub BoardID: u32_, 349 pub chipSKU: [ffi::c_char; 9usize], 350 pub chipSKUMod: [ffi::c_char; 5usize], 351 pub skuConfigVersion: u32_, 352 pub project: [ffi::c_char; 5usize], 353 pub projectSKU: [ffi::c_char; 5usize], 354 pub CDP: [ffi::c_char; 6usize], 355 pub projectSKUMod: [ffi::c_char; 2usize], 356 pub businessCycle: u32_, 357 } 358 pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize]; 359 #[repr(C)] 360 - #[derive(Debug, Default, Copy, Clone)] 361 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO { 362 pub base: u64_, 363 pub limit: u64_, ··· 372 pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG, 373 } 374 #[repr(C)] 375 - #[derive(Debug, Default, Copy, Clone)] 376 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS { 377 pub numFBRegions: u32_, 378 pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize], 379 } 380 #[repr(C)] 381 - #[derive(Debug, Copy, Clone)] 382 pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS { 383 pub index: u32_, 384 pub flags: u32_, ··· 396 } 397 } 398 #[repr(C)] 399 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 400 pub struct DOD_METHOD_DATA { 401 pub status: u32_, 402 pub acpiIdListLen: u32_, 403 pub acpiIdList: [u32_; 16usize], 404 } 405 #[repr(C)] 406 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 407 pub struct JT_METHOD_DATA { 408 pub status: u32_, 409 pub jtCaps: u32_, ··· 412 pub __bindgen_padding_0: u8, 413 } 414 #[repr(C)] 415 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 416 pub struct MUX_METHOD_DATA_ELEMENT { 417 pub acpiId: u32_, 418 pub mode: u32_, 419 pub status: u32_, 420 } 421 #[repr(C)] 422 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 423 pub struct MUX_METHOD_DATA { 424 pub tableLen: u32_, 425 pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize], ··· 427 pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize], 428 } 429 #[repr(C)] 430 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 431 pub struct CAPS_METHOD_DATA { 432 pub status: u32_, 433 pub optimusCaps: u32_, 434 } 435 #[repr(C)] 436 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 437 pub struct ACPI_METHOD_DATA { 438 pub bValid: u8_, 439 pub __bindgen_padding_0: [u8; 3usize], ··· 443 pub capsMethodData: CAPS_METHOD_DATA, 444 } 445 #[repr(C)] 446 - #[derive(Debug, Default, Copy, Clone)] 447 pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS { 448 pub headIndex: u32_, 449 pub maxHResolution: u32_, 450 pub maxVResolution: u32_, 451 } 452 #[repr(C)] 453 - #[derive(Debug, Default, Copy, Clone)] 454 pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS { 455 pub numHeads: u32_, 456 pub maxNumHeads: u32_, 457 } 458 #[repr(C)] 459 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 460 pub struct BUSINFO { 461 pub deviceID: u16_, 462 pub vendorID: u16_, ··· 466 pub __bindgen_padding_0: u8, 467 } 468 #[repr(C)] 469 - #[derive(Debug, 
Default, Copy, Clone, Zeroable)] 470 pub struct GSP_VF_INFO { 471 pub totalVFs: u32_, 472 pub firstVFOffset: u32_, ··· 479 pub __bindgen_padding_0: [u8; 5usize], 480 } 481 #[repr(C)] 482 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 483 pub struct GSP_PCIE_CONFIG_REG { 484 pub linkCap: u32_, 485 } 486 #[repr(C)] 487 - #[derive(Debug, Default, Copy, Clone)] 488 pub struct EcidManufacturingInfo { 489 pub ecidLow: u32_, 490 pub ecidHigh: u32_, 491 pub ecidExtended: u32_, 492 } 493 #[repr(C)] 494 - #[derive(Debug, Default, Copy, Clone)] 495 pub struct FW_WPR_LAYOUT_OFFSET { 496 pub nonWprHeapOffset: u64_, 497 pub frtsOffset: u64_, 498 } 499 #[repr(C)] 500 - #[derive(Debug, Copy, Clone)] 501 pub struct GspStaticConfigInfo_t { 502 pub grCapsBits: [u8_; 23usize], 503 pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS, 504 pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, 505 pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS, 506 pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS, 507 pub sriovMaxGfid: u32_, 508 pub engineCaps: [u32_; 3usize], 509 pub poisonFuseEnabled: u8_, 510 pub fb_length: u64_, 511 pub fbio_mask: u64_, 512 pub fb_bus_width: u32_, ··· 535 pub bIsMigSupported: u8_, 536 pub RTD3GC6TotalBoardPower: u16_, 537 pub RTD3GC6PerstDelay: u16_, 538 pub bar1PdeBase: u64_, 539 pub bar2PdeBase: u64_, 540 pub bVbiosValid: u8_, 541 pub vbiosSubVendor: u32_, 542 pub vbiosSubDevice: u32_, 543 pub bPageRetirementSupported: u8_, 544 pub bSplitVasBetweenServerClientRm: u8_, 545 pub bClRootportNeedsNosnoopWAR: u8_, 546 pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS, 547 pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS, 548 pub displaylessMaxPixels: u64_, 549 pub hInternalClient: u32_, 550 pub hInternalDevice: u32_, ··· 570 } 571 } 572 #[repr(C)] 573 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 574 pub struct GspSystemInfo { 575 pub gpuPhysAddr: u64_, 576 pub gpuPhysFbAddr: u64_, ··· 627 pub hostPageSize: u64_, 628 } 629 #[repr(C)] 630 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 631 pub struct MESSAGE_QUEUE_INIT_ARGUMENTS { 632 pub sharedMemPhysAddr: u64_, 633 pub pageTableEntryCount: u32_, ··· 636 pub statQueueOffset: u64_, 637 } 638 #[repr(C)] 639 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 640 pub struct GSP_SR_INIT_ARGUMENTS { 641 pub oldLevel: u32_, 642 pub flags: u32_, ··· 644 pub __bindgen_padding_0: [u8; 3usize], 645 } 646 #[repr(C)] 647 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 648 pub struct GSP_ARGUMENTS_CACHED { 649 pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS, 650 pub srInitArguments: GSP_SR_INIT_ARGUMENTS, ··· 654 pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1, 655 } 656 #[repr(C)] 657 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 658 pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 { 659 pub pa: u64_, 660 pub size: u64_, 661 } 662 #[repr(C)] 663 - #[derive(Copy, Clone, Zeroable)] 664 pub union rpc_message_rpc_union_field_v03_00 { 665 pub spare: u32_, 666 pub cpuRmGfid: u32_, ··· 676 } 677 pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00; 678 #[repr(C)] 679 pub struct rpc_message_header_v03_00 { 680 pub header_version: u32_, 681 pub signature: u32_, ··· 699 } 700 pub type rpc_message_header_v = rpc_message_header_v03_00; 701 #[repr(C)] 702 - #[derive(Copy, Clone, Zeroable)] 703 pub struct GspFwWprMeta { 704 pub magic: u64_, 705 pub revision: u64_, ··· 734 pub verified: u64_, 735 } 736 #[repr(C)] 737 - #[derive(Copy, Clone, Zeroable)] 738 pub 
union GspFwWprMeta__bindgen_ty_1 { 739 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1, 740 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2, 741 } 742 #[repr(C)] 743 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 744 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 { 745 pub sysmemAddrOfSignature: u64_, 746 pub sizeOfSignature: u64_, 747 } 748 #[repr(C)] 749 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 750 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 { 751 pub gspFwHeapFreeListWprOffset: u32_, 752 pub unused0: u32_, ··· 762 } 763 } 764 #[repr(C)] 765 - #[derive(Copy, Clone, Zeroable)] 766 pub union GspFwWprMeta__bindgen_ty_2 { 767 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1, 768 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2, 769 } 770 #[repr(C)] 771 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 772 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 { 773 pub partitionRpcAddr: u64_, 774 pub partitionRpcRequestOffset: u16_, ··· 780 pub lsUcodeVersion: u32_, 781 } 782 #[repr(C)] 783 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 784 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 { 785 pub partitionRpcPadding: [u32_; 4usize], 786 pub sysmemAddrOfCrashReportQueue: u64_, ··· 815 pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2; 816 pub type LibosMemoryRegionLoc = ffi::c_uint; 817 #[repr(C)] 818 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 819 pub struct LibosMemoryRegionInitArgument { 820 pub id8: LibosAddress, 821 pub pa: LibosAddress, ··· 825 pub __bindgen_padding_0: [u8; 6usize], 826 } 827 #[repr(C)] 828 - #[derive(Debug, Default, Copy, Clone)] 829 pub struct PACKED_REGISTRY_ENTRY { 830 pub nameOffset: u32_, 831 pub type_: u8_, ··· 834 pub length: u32_, 835 } 836 #[repr(C)] 837 - #[derive(Debug, Default)] 838 pub struct PACKED_REGISTRY_TABLE { 839 pub size: u32_, 840 pub numEntries: u32_, 841 pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>, 842 } 843 #[repr(C)] 844 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 845 pub struct msgqTxHeader { 846 pub version: u32_, 847 pub size: u32_, ··· 853 pub entryOff: u32_, 854 } 855 #[repr(C)] 856 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 857 pub struct msgqRxHeader { 858 pub readPtr: u32_, 859 } 860 #[repr(C)] 861 #[repr(align(8))] 862 - #[derive(Zeroable)] 863 pub struct GSP_MSG_QUEUE_ELEMENT { 864 pub authTagBuffer: [u8_; 16usize], 865 pub aadBuffer: [u8_; 16usize], ··· 879 } 880 } 881 #[repr(C)] 882 - #[derive(Debug, Default)] 883 pub struct rpc_run_cpu_sequencer_v17_00 { 884 pub bufferSizeDWord: u32_, 885 pub cmdIndex: u32_, ··· 897 pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8; 898 pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint; 899 #[repr(C)] 900 - #[derive(Debug, Default, Copy, Clone)] 901 pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE { 902 pub addr: u32_, 903 pub val: u32_, 904 } 905 #[repr(C)] 906 - #[derive(Debug, Default, Copy, Clone)] 907 pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY { 908 pub addr: u32_, 909 pub mask: u32_, 910 pub val: u32_, 911 } 912 #[repr(C)] 913 - #[derive(Debug, Default, Copy, Clone)] 914 pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL { 915 pub addr: u32_, 916 pub mask: u32_, ··· 919 pub error: u32_, 920 } 921 #[repr(C)] 922 - #[derive(Debug, Default, Copy, Clone)] 923 pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US { 924 pub val: u32_, 925 } 926 #[repr(C)] 927 - #[derive(Debug, Default, Copy, Clone)] 928 pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE { 
929 pub addr: u32_, 930 pub index: u32_, 931 } 932 #[repr(C)] 933 - #[derive(Copy, Clone)] 934 pub struct GSP_SEQUENCER_BUFFER_CMD { 935 pub opCode: GSP_SEQ_BUF_OPCODE, 936 pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1, 937 } 938 #[repr(C)] 939 - #[derive(Copy, Clone)] 940 pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 { 941 pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE, 942 pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,
··· 320 pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131; 321 pub type _bindgen_ty_3 = ffi::c_uint; 322 #[repr(C)] 323 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 324 pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS { 325 pub totalVFs: u32_, 326 pub firstVfOffset: u32_, 327 pub vfFeatureMask: u32_, 328 + pub __bindgen_padding_0: [u8; 4usize], 329 pub FirstVFBar0Address: u64_, 330 pub FirstVFBar1Address: u64_, 331 pub FirstVFBar2Address: u64_, ··· 340 pub bClientRmAllocatedCtxBuffer: u8_, 341 pub bNonPowerOf2ChannelCountSupported: u8_, 342 pub bVfResizableBAR1Supported: u8_, 343 + pub __bindgen_padding_1: [u8; 7usize], 344 } 345 #[repr(C)] 346 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 347 pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS { 348 pub BoardID: u32_, 349 pub chipSKU: [ffi::c_char; 9usize], 350 pub chipSKUMod: [ffi::c_char; 5usize], 351 + pub __bindgen_padding_0: [u8; 2usize], 352 pub skuConfigVersion: u32_, 353 pub project: [ffi::c_char; 5usize], 354 pub projectSKU: [ffi::c_char; 5usize], 355 pub CDP: [ffi::c_char; 6usize], 356 pub projectSKUMod: [ffi::c_char; 2usize], 357 + pub __bindgen_padding_1: [u8; 2usize], 358 pub businessCycle: u32_, 359 } 360 pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize]; 361 #[repr(C)] 362 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 363 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO { 364 pub base: u64_, 365 pub limit: u64_, ··· 368 pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG, 369 } 370 #[repr(C)] 371 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 372 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS { 373 pub numFBRegions: u32_, 374 + pub __bindgen_padding_0: [u8; 4usize], 375 pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize], 376 } 377 #[repr(C)] 378 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 379 pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS { 380 pub index: u32_, 381 pub flags: u32_, ··· 391 } 392 } 393 #[repr(C)] 394 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 395 pub struct DOD_METHOD_DATA { 396 pub status: u32_, 397 pub acpiIdListLen: u32_, 398 pub acpiIdList: [u32_; 16usize], 399 } 400 #[repr(C)] 401 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 402 pub struct JT_METHOD_DATA { 403 pub status: u32_, 404 pub jtCaps: u32_, ··· 407 pub __bindgen_padding_0: u8, 408 } 409 #[repr(C)] 410 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 411 pub struct MUX_METHOD_DATA_ELEMENT { 412 pub acpiId: u32_, 413 pub mode: u32_, 414 pub status: u32_, 415 } 416 #[repr(C)] 417 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 418 pub struct MUX_METHOD_DATA { 419 pub tableLen: u32_, 420 pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize], ··· 422 pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize], 423 } 424 #[repr(C)] 425 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 426 pub struct CAPS_METHOD_DATA { 427 pub status: u32_, 428 pub optimusCaps: u32_, 429 } 430 #[repr(C)] 431 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 432 pub struct ACPI_METHOD_DATA { 433 pub bValid: u8_, 434 pub __bindgen_padding_0: [u8; 3usize], ··· 438 pub capsMethodData: CAPS_METHOD_DATA, 439 } 440 #[repr(C)] 441 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 442 pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS { 443 pub headIndex: u32_, 444 pub maxHResolution: u32_, 445 pub maxVResolution: u32_, 446 } 447 #[repr(C)] 448 + #[derive(Debug, Default, Copy, 
Clone, MaybeZeroable)] 449 pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS { 450 pub numHeads: u32_, 451 pub maxNumHeads: u32_, 452 } 453 #[repr(C)] 454 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 455 pub struct BUSINFO { 456 pub deviceID: u16_, 457 pub vendorID: u16_, ··· 461 pub __bindgen_padding_0: u8, 462 } 463 #[repr(C)] 464 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 465 pub struct GSP_VF_INFO { 466 pub totalVFs: u32_, 467 pub firstVFOffset: u32_, ··· 474 pub __bindgen_padding_0: [u8; 5usize], 475 } 476 #[repr(C)] 477 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 478 pub struct GSP_PCIE_CONFIG_REG { 479 pub linkCap: u32_, 480 } 481 #[repr(C)] 482 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 483 pub struct EcidManufacturingInfo { 484 pub ecidLow: u32_, 485 pub ecidHigh: u32_, 486 pub ecidExtended: u32_, 487 } 488 #[repr(C)] 489 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 490 pub struct FW_WPR_LAYOUT_OFFSET { 491 pub nonWprHeapOffset: u64_, 492 pub frtsOffset: u64_, 493 } 494 #[repr(C)] 495 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 496 pub struct GspStaticConfigInfo_t { 497 pub grCapsBits: [u8_; 23usize], 498 + pub __bindgen_padding_0: u8, 499 pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS, 500 pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, 501 + pub __bindgen_padding_1: [u8; 4usize], 502 pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS, 503 pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS, 504 pub sriovMaxGfid: u32_, 505 pub engineCaps: [u32_; 3usize], 506 pub poisonFuseEnabled: u8_, 507 + pub __bindgen_padding_2: [u8; 7usize], 508 pub fb_length: u64_, 509 pub fbio_mask: u64_, 510 pub fb_bus_width: u32_, ··· 527 pub bIsMigSupported: u8_, 528 pub RTD3GC6TotalBoardPower: u16_, 529 pub RTD3GC6PerstDelay: u16_, 530 + pub __bindgen_padding_3: [u8; 2usize], 531 pub bar1PdeBase: u64_, 532 pub bar2PdeBase: u64_, 533 pub bVbiosValid: u8_, 534 + pub __bindgen_padding_4: [u8; 3usize], 535 pub vbiosSubVendor: u32_, 536 pub vbiosSubDevice: u32_, 537 pub bPageRetirementSupported: u8_, 538 pub bSplitVasBetweenServerClientRm: u8_, 539 pub bClRootportNeedsNosnoopWAR: u8_, 540 + pub __bindgen_padding_5: u8, 541 pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS, 542 pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS, 543 + pub __bindgen_padding_6: [u8; 4usize], 544 pub displaylessMaxPixels: u64_, 545 pub hInternalClient: u32_, 546 pub hInternalDevice: u32_, ··· 558 } 559 } 560 #[repr(C)] 561 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 562 pub struct GspSystemInfo { 563 pub gpuPhysAddr: u64_, 564 pub gpuPhysFbAddr: u64_, ··· 615 pub hostPageSize: u64_, 616 } 617 #[repr(C)] 618 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 619 pub struct MESSAGE_QUEUE_INIT_ARGUMENTS { 620 pub sharedMemPhysAddr: u64_, 621 pub pageTableEntryCount: u32_, ··· 624 pub statQueueOffset: u64_, 625 } 626 #[repr(C)] 627 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 628 pub struct GSP_SR_INIT_ARGUMENTS { 629 pub oldLevel: u32_, 630 pub flags: u32_, ··· 632 pub __bindgen_padding_0: [u8; 3usize], 633 } 634 #[repr(C)] 635 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 636 pub struct GSP_ARGUMENTS_CACHED { 637 pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS, 638 pub srInitArguments: GSP_SR_INIT_ARGUMENTS, ··· 642 pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1, 643 } 644 #[repr(C)] 645 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 646 pub struct 
GSP_ARGUMENTS_CACHED__bindgen_ty_1 { 647 pub pa: u64_, 648 pub size: u64_, 649 } 650 #[repr(C)] 651 + #[derive(Copy, Clone, MaybeZeroable)] 652 pub union rpc_message_rpc_union_field_v03_00 { 653 pub spare: u32_, 654 pub cpuRmGfid: u32_, ··· 664 } 665 pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00; 666 #[repr(C)] 667 + #[derive(MaybeZeroable)] 668 pub struct rpc_message_header_v03_00 { 669 pub header_version: u32_, 670 pub signature: u32_, ··· 686 } 687 pub type rpc_message_header_v = rpc_message_header_v03_00; 688 #[repr(C)] 689 + #[derive(Copy, Clone, MaybeZeroable)] 690 pub struct GspFwWprMeta { 691 pub magic: u64_, 692 pub revision: u64_, ··· 721 pub verified: u64_, 722 } 723 #[repr(C)] 724 + #[derive(Copy, Clone, MaybeZeroable)] 725 pub union GspFwWprMeta__bindgen_ty_1 { 726 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1, 727 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2, 728 } 729 #[repr(C)] 730 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 731 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 { 732 pub sysmemAddrOfSignature: u64_, 733 pub sizeOfSignature: u64_, 734 } 735 #[repr(C)] 736 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 737 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 { 738 pub gspFwHeapFreeListWprOffset: u32_, 739 pub unused0: u32_, ··· 749 } 750 } 751 #[repr(C)] 752 + #[derive(Copy, Clone, MaybeZeroable)] 753 pub union GspFwWprMeta__bindgen_ty_2 { 754 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1, 755 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2, 756 } 757 #[repr(C)] 758 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 759 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 { 760 pub partitionRpcAddr: u64_, 761 pub partitionRpcRequestOffset: u16_, ··· 767 pub lsUcodeVersion: u32_, 768 } 769 #[repr(C)] 770 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 771 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 { 772 pub partitionRpcPadding: [u32_; 4usize], 773 pub sysmemAddrOfCrashReportQueue: u64_, ··· 802 pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2; 803 pub type LibosMemoryRegionLoc = ffi::c_uint; 804 #[repr(C)] 805 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 806 pub struct LibosMemoryRegionInitArgument { 807 pub id8: LibosAddress, 808 pub pa: LibosAddress, ··· 812 pub __bindgen_padding_0: [u8; 6usize], 813 } 814 #[repr(C)] 815 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 816 pub struct PACKED_REGISTRY_ENTRY { 817 pub nameOffset: u32_, 818 pub type_: u8_, ··· 821 pub length: u32_, 822 } 823 #[repr(C)] 824 + #[derive(Debug, Default, MaybeZeroable)] 825 pub struct PACKED_REGISTRY_TABLE { 826 pub size: u32_, 827 pub numEntries: u32_, 828 pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>, 829 } 830 #[repr(C)] 831 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 832 pub struct msgqTxHeader { 833 pub version: u32_, 834 pub size: u32_, ··· 840 pub entryOff: u32_, 841 } 842 #[repr(C)] 843 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 844 pub struct msgqRxHeader { 845 pub readPtr: u32_, 846 } 847 #[repr(C)] 848 #[repr(align(8))] 849 + #[derive(MaybeZeroable)] 850 pub struct GSP_MSG_QUEUE_ELEMENT { 851 pub authTagBuffer: [u8_; 16usize], 852 pub aadBuffer: [u8_; 16usize], ··· 866 } 867 } 868 #[repr(C)] 869 + #[derive(Debug, Default, MaybeZeroable)] 870 pub struct rpc_run_cpu_sequencer_v17_00 { 871 pub bufferSizeDWord: u32_, 872 pub cmdIndex: u32_, ··· 884 
pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8; 885 pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint; 886 #[repr(C)] 887 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 888 pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE { 889 pub addr: u32_, 890 pub val: u32_, 891 } 892 #[repr(C)] 893 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 894 pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY { 895 pub addr: u32_, 896 pub mask: u32_, 897 pub val: u32_, 898 } 899 #[repr(C)] 900 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 901 pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL { 902 pub addr: u32_, 903 pub mask: u32_, ··· 906 pub error: u32_, 907 } 908 #[repr(C)] 909 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 910 pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US { 911 pub val: u32_, 912 } 913 #[repr(C)] 914 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 915 pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE { 916 pub addr: u32_, 917 pub index: u32_, 918 } 919 #[repr(C)] 920 + #[derive(Copy, Clone, MaybeZeroable)] 921 pub struct GSP_SEQUENCER_BUFFER_CMD { 922 pub opCode: GSP_SEQ_BUF_OPCODE, 923 pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1, 924 } 925 #[repr(C)] 926 + #[derive(Copy, Clone, MaybeZeroable)] 927 pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 { 928 pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE, 929 pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,
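The regenerated bindings above make two mechanical changes: every structure now derives MaybeZeroable, and alignment gaps appear as named __bindgen_padding_N fields. A tiny C illustration (hypothetical struct names) of what explicit padding buys: the layout stays identical, but no byte of the structure is left hidden, so "all fields zero" really does mean "all bytes zero":

  #include <stdint.h>

  /* Layout with compiler-inserted, implicit padding after `valid`. */
  struct with_implicit_padding {
          uint8_t  valid;
          /* typically 3 hidden padding bytes here */
          uint32_t flags;
  };

  /* Same layout with the padding spelled out as a named field, which is what
   * the explicit-padding bindgen output above does. */
  struct with_explicit_padding {
          uint8_t  valid;
          uint8_t  pad[3];
          uint32_t flags;
  };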
-7
drivers/pci/vgaarb.c
··· 652 return true; 653 } 654 655 - /* 656 - * Vgadev has neither IO nor MEM enabled. If we haven't found any 657 - * other VGA devices, it is the best candidate so far. 658 - */ 659 - if (!boot_vga) 660 - return true; 661 - 662 return false; 663 } 664
··· 652 return true; 653 } 654 655 return false; 656 } 657
+22
include/drm/drm_atomic_helper.h
··· 60 int drm_atomic_helper_check_planes(struct drm_device *dev, 61 struct drm_atomic_state *state); 62 int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 63 int drm_atomic_helper_check(struct drm_device *dev, 64 struct drm_atomic_state *state); 65 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state); ··· 95 void 96 drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state); 97 98 void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev, 99 struct drm_atomic_state *state); 100 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 101 struct drm_atomic_state *old_state); 102
··· 60 int drm_atomic_helper_check_planes(struct drm_device *dev, 61 struct drm_atomic_state *state); 62 int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 63 + void drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 64 + struct drm_atomic_state *state); 65 + void drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, 66 + struct drm_atomic_state *state); 67 + void drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, 68 + struct drm_atomic_state *state); 69 int drm_atomic_helper_check(struct drm_device *dev, 70 struct drm_atomic_state *state); 71 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state); ··· 89 void 90 drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state); 91 92 + void drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, 93 + struct drm_atomic_state *state); 94 + 95 void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev, 96 struct drm_atomic_state *state); 97 + 98 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 99 + struct drm_atomic_state *state); 100 + 101 + void drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, 102 + struct drm_atomic_state *state); 103 + 104 + void drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, 105 + struct drm_atomic_state *state); 106 + 107 + void drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, 108 + struct drm_atomic_state *state); 109 + 110 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 111 struct drm_atomic_state *old_state); 112
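The header now exports the individual phases that the existing drm_atomic_helper_commit_modeset_disables()/..._enables() helpers step through, so a driver can assemble its own commit_tail and pick its own ordering of the CRTC, encoder and bridge steps. A purely illustrative sketch with a hypothetical foo_ driver and an arbitrary example ordering (CRTC enabled before the bridge chain is pre-enabled); it is not the ordering any particular driver requires:

  static void foo_atomic_commit_tail(struct drm_atomic_state *state)
  {
          struct drm_device *dev = state->dev;

          /* Tear down in the usual order. */
          drm_atomic_helper_commit_encoder_bridge_disable(dev, state);
          drm_atomic_helper_commit_crtc_disable(dev, state);
          drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);

          drm_atomic_helper_commit_crtc_set_mode(dev, state);
          drm_atomic_helper_commit_planes(dev, state, 0);

          /* Driver-specific twist: start the CRTC before pre-enabling bridges. */
          drm_atomic_helper_commit_crtc_enable(dev, state);
          drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);
          drm_atomic_helper_commit_encoder_bridge_enable(dev, state);
          drm_atomic_helper_commit_writebacks(dev, state);

          drm_atomic_helper_fake_vblank(state);
          drm_atomic_helper_commit_hw_done(state);
          drm_atomic_helper_wait_for_vblanks(dev, state);
          drm_atomic_helper_cleanup_planes(dev, state);
  }

  static const struct drm_mode_config_helper_funcs foo_mode_config_helpers = {
          .atomic_commit_tail = foo_atomic_commit_tail,
  };

The plane, vblank and cleanup calls are pre-existing helpers; only the *_crtc_*, *_encoder_bridge_* and *_writebacks calls come from the declarations added here.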
+66 -183
include/drm/drm_bridge.h
··· 176 /** 177 * @disable: 178 * 179 - * The @disable callback should disable the bridge. 180 * 181 * The bridge can assume that the display pipe (i.e. clocks and timing 182 * signals) feeding it is still running when this callback is called. 183 - * 184 - * 185 - * If the preceding element is a &drm_bridge, then this is called before 186 - * that bridge is disabled via one of: 187 - * 188 - * - &drm_bridge_funcs.disable 189 - * - &drm_bridge_funcs.atomic_disable 190 - * 191 - * If the preceding element of the bridge is a display controller, then 192 - * this callback is called before the encoder is disabled via one of: 193 - * 194 - * - &drm_encoder_helper_funcs.atomic_disable 195 - * - &drm_encoder_helper_funcs.prepare 196 - * - &drm_encoder_helper_funcs.disable 197 - * - &drm_encoder_helper_funcs.dpms 198 - * 199 - * and the CRTC is disabled via one of: 200 - * 201 - * - &drm_crtc_helper_funcs.prepare 202 - * - &drm_crtc_helper_funcs.atomic_disable 203 - * - &drm_crtc_helper_funcs.disable 204 - * - &drm_crtc_helper_funcs.dpms. 205 * 206 * The @disable callback is optional. 207 * ··· 199 /** 200 * @post_disable: 201 * 202 * The bridge must assume that the display pipe (i.e. clocks and timing 203 - * signals) feeding this bridge is no longer running when the 204 - * @post_disable is called. 205 - * 206 - * This callback should perform all the actions required by the hardware 207 - * after it has stopped receiving signals from the preceding element. 208 - * 209 - * If the preceding element is a &drm_bridge, then this is called after 210 - * that bridge is post-disabled (unless marked otherwise by the 211 - * @pre_enable_prev_first flag) via one of: 212 - * 213 - * - &drm_bridge_funcs.post_disable 214 - * - &drm_bridge_funcs.atomic_post_disable 215 - * 216 - * If the preceding element of the bridge is a display controller, then 217 - * this callback is called after the encoder is disabled via one of: 218 - * 219 - * - &drm_encoder_helper_funcs.atomic_disable 220 - * - &drm_encoder_helper_funcs.prepare 221 - * - &drm_encoder_helper_funcs.disable 222 - * - &drm_encoder_helper_funcs.dpms 223 - * 224 - * and the CRTC is disabled via one of: 225 - * 226 - * - &drm_crtc_helper_funcs.prepare 227 - * - &drm_crtc_helper_funcs.atomic_disable 228 - * - &drm_crtc_helper_funcs.disable 229 - * - &drm_crtc_helper_funcs.dpms 230 * 231 * The @post_disable callback is optional. 232 * ··· 252 /** 253 * @pre_enable: 254 * 255 * The display pipe (i.e. clocks and timing signals) feeding this bridge 256 - * will not yet be running when the @pre_enable is called. 257 - * 258 - * This callback should perform all the necessary actions to prepare the 259 - * bridge to accept signals from the preceding element. 260 - * 261 - * If the preceding element is a &drm_bridge, then this is called before 262 - * that bridge is pre-enabled (unless marked otherwise by 263 - * @pre_enable_prev_first flag) via one of: 264 - * 265 - * - &drm_bridge_funcs.pre_enable 266 - * - &drm_bridge_funcs.atomic_pre_enable 267 - * 268 - * If the preceding element of the bridge is a display controller, then 269 - * this callback is called before the CRTC is enabled via one of: 270 - * 271 - * - &drm_crtc_helper_funcs.atomic_enable 272 - * - &drm_crtc_helper_funcs.commit 273 - * 274 - * and the encoder is enabled via one of: 275 - * 276 - * - &drm_encoder_helper_funcs.atomic_enable 277 - * - &drm_encoder_helper_funcs.enable 278 - * - &drm_encoder_helper_funcs.commit 279 * 280 * The @pre_enable callback is optional. 
281 * ··· 277 /** 278 * @enable: 279 * 280 - * The @enable callback should enable the bridge. 281 * 282 * The bridge can assume that the display pipe (i.e. clocks and timing 283 * signals) feeding it is running when this callback is called. This 284 * callback must enable the display link feeding the next bridge in the 285 * chain if there is one. 286 - * 287 - * If the preceding element is a &drm_bridge, then this is called after 288 - * that bridge is enabled via one of: 289 - * 290 - * - &drm_bridge_funcs.enable 291 - * - &drm_bridge_funcs.atomic_enable 292 - * 293 - * If the preceding element of the bridge is a display controller, then 294 - * this callback is called after the CRTC is enabled via one of: 295 - * 296 - * - &drm_crtc_helper_funcs.atomic_enable 297 - * - &drm_crtc_helper_funcs.commit 298 - * 299 - * and the encoder is enabled via one of: 300 - * 301 - * - &drm_encoder_helper_funcs.atomic_enable 302 - * - &drm_encoder_helper_funcs.enable 303 - * - drm_encoder_helper_funcs.commit 304 * 305 * The @enable callback is optional. 306 * ··· 302 /** 303 * @atomic_pre_enable: 304 * 305 * The display pipe (i.e. clocks and timing signals) feeding this bridge 306 - * will not yet be running when the @atomic_pre_enable is called. 307 - * 308 - * This callback should perform all the necessary actions to prepare the 309 - * bridge to accept signals from the preceding element. 310 - * 311 - * If the preceding element is a &drm_bridge, then this is called before 312 - * that bridge is pre-enabled (unless marked otherwise by 313 - * @pre_enable_prev_first flag) via one of: 314 - * 315 - * - &drm_bridge_funcs.pre_enable 316 - * - &drm_bridge_funcs.atomic_pre_enable 317 - * 318 - * If the preceding element of the bridge is a display controller, then 319 - * this callback is called before the CRTC is enabled via one of: 320 - * 321 - * - &drm_crtc_helper_funcs.atomic_enable 322 - * - &drm_crtc_helper_funcs.commit 323 - * 324 - * and the encoder is enabled via one of: 325 - * 326 - * - &drm_encoder_helper_funcs.atomic_enable 327 - * - &drm_encoder_helper_funcs.enable 328 - * - &drm_encoder_helper_funcs.commit 329 * 330 * The @atomic_pre_enable callback is optional. 331 */ ··· 322 /** 323 * @atomic_enable: 324 * 325 - * The @atomic_enable callback should enable the bridge. 326 * 327 * The bridge can assume that the display pipe (i.e. clocks and timing 328 * signals) feeding it is running when this callback is called. This 329 * callback must enable the display link feeding the next bridge in the 330 * chain if there is one. 331 - * 332 - * If the preceding element is a &drm_bridge, then this is called after 333 - * that bridge is enabled via one of: 334 - * 335 - * - &drm_bridge_funcs.enable 336 - * - &drm_bridge_funcs.atomic_enable 337 - * 338 - * If the preceding element of the bridge is a display controller, then 339 - * this callback is called after the CRTC is enabled via one of: 340 - * 341 - * - &drm_crtc_helper_funcs.atomic_enable 342 - * - &drm_crtc_helper_funcs.commit 343 - * 344 - * and the encoder is enabled via one of: 345 - * 346 - * - &drm_encoder_helper_funcs.atomic_enable 347 - * - &drm_encoder_helper_funcs.enable 348 - * - drm_encoder_helper_funcs.commit 349 * 350 * The @atomic_enable callback is optional. 351 */ ··· 341 /** 342 * @atomic_disable: 343 * 344 - * The @atomic_disable callback should disable the bridge. 345 * 346 * The bridge can assume that the display pipe (i.e. clocks and timing 347 * signals) feeding it is still running when this callback is called. 
348 - * 349 - * If the preceding element is a &drm_bridge, then this is called before 350 - * that bridge is disabled via one of: 351 - * 352 - * - &drm_bridge_funcs.disable 353 - * - &drm_bridge_funcs.atomic_disable 354 - * 355 - * If the preceding element of the bridge is a display controller, then 356 - * this callback is called before the encoder is disabled via one of: 357 - * 358 - * - &drm_encoder_helper_funcs.atomic_disable 359 - * - &drm_encoder_helper_funcs.prepare 360 - * - &drm_encoder_helper_funcs.disable 361 - * - &drm_encoder_helper_funcs.dpms 362 - * 363 - * and the CRTC is disabled via one of: 364 - * 365 - * - &drm_crtc_helper_funcs.prepare 366 - * - &drm_crtc_helper_funcs.atomic_disable 367 - * - &drm_crtc_helper_funcs.disable 368 - * - &drm_crtc_helper_funcs.dpms. 369 * 370 * The @atomic_disable callback is optional. 371 */ ··· 359 /** 360 * @atomic_post_disable: 361 * 362 * The bridge must assume that the display pipe (i.e. clocks and timing 363 - * signals) feeding this bridge is no longer running when the 364 - * @atomic_post_disable is called. 365 - * 366 - * This callback should perform all the actions required by the hardware 367 - * after it has stopped receiving signals from the preceding element. 368 - * 369 - * If the preceding element is a &drm_bridge, then this is called after 370 - * that bridge is post-disabled (unless marked otherwise by the 371 - * @pre_enable_prev_first flag) via one of: 372 - * 373 - * - &drm_bridge_funcs.post_disable 374 - * - &drm_bridge_funcs.atomic_post_disable 375 - * 376 - * If the preceding element of the bridge is a display controller, then 377 - * this callback is called after the encoder is disabled via one of: 378 - * 379 - * - &drm_encoder_helper_funcs.atomic_disable 380 - * - &drm_encoder_helper_funcs.prepare 381 - * - &drm_encoder_helper_funcs.disable 382 - * - &drm_encoder_helper_funcs.dpms 383 - * 384 - * and the CRTC is disabled via one of: 385 - * 386 - * - &drm_crtc_helper_funcs.prepare 387 - * - &drm_crtc_helper_funcs.atomic_disable 388 - * - &drm_crtc_helper_funcs.disable 389 - * - &drm_crtc_helper_funcs.dpms 390 * 391 * The @atomic_post_disable callback is optional. 392 */
··· 176 /** 177 * @disable: 178 * 179 + * This callback should disable the bridge. It is called right before 180 + * the preceding element in the display pipe is disabled. If the 181 + * preceding element is a bridge this means it's called before that 182 + * bridge's @disable vfunc. If the preceding element is a &drm_encoder 183 + * it's called right before the &drm_encoder_helper_funcs.disable, 184 + * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms 185 + * hook. 186 * 187 * The bridge can assume that the display pipe (i.e. clocks and timing 188 * signals) feeding it is still running when this callback is called. 189 * 190 * The @disable callback is optional. 191 * ··· 215 /** 216 * @post_disable: 217 * 218 + * This callback should disable the bridge. It is called right after the 219 + * preceding element in the display pipe is disabled. If the preceding 220 + * element is a bridge this means it's called after that bridge's 221 + * @post_disable function. If the preceding element is a &drm_encoder 222 + * it's called right after the encoder's 223 + * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare 224 + * or &drm_encoder_helper_funcs.dpms hook. 225 + * 226 * The bridge must assume that the display pipe (i.e. clocks and timing 227 + * signals) feeding it is no longer running when this callback is 228 + * called. 229 * 230 * The @post_disable callback is optional. 231 * ··· 285 /** 286 * @pre_enable: 287 * 288 + * This callback should enable the bridge. It is called right before 289 + * the preceding element in the display pipe is enabled. If the 290 + * preceding element is a bridge this means it's called before that 291 + * bridge's @pre_enable function. If the preceding element is a 292 + * &drm_encoder it's called right before the encoder's 293 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 294 + * &drm_encoder_helper_funcs.dpms hook. 295 + * 296 * The display pipe (i.e. clocks and timing signals) feeding this bridge 297 + * will not yet be running when this callback is called. The bridge must 298 + * not enable the display link feeding the next bridge in the chain (if 299 + * there is one) when this callback is called. 300 * 301 * The @pre_enable callback is optional. 302 * ··· 322 /** 323 * @enable: 324 * 325 + * This callback should enable the bridge. It is called right after 326 + * the preceding element in the display pipe is enabled. If the 327 + * preceding element is a bridge this means it's called after that 328 + * bridge's @enable function. If the preceding element is a 329 + * &drm_encoder it's called right after the encoder's 330 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 331 + * &drm_encoder_helper_funcs.dpms hook. 332 * 333 * The bridge can assume that the display pipe (i.e. clocks and timing 334 * signals) feeding it is running when this callback is called. This 335 * callback must enable the display link feeding the next bridge in the 336 * chain if there is one. 337 * 338 * The @enable callback is optional. 339 * ··· 359 /** 360 * @atomic_pre_enable: 361 * 362 + * This callback should enable the bridge. It is called right before 363 + * the preceding element in the display pipe is enabled. If the 364 + * preceding element is a bridge this means it's called before that 365 + * bridge's @atomic_pre_enable or @pre_enable function. If the preceding 366 + * element is a &drm_encoder it's called right before the encoder's 367 + * &drm_encoder_helper_funcs.atomic_enable hook. 
368 + * 369 * The display pipe (i.e. clocks and timing signals) feeding this bridge 370 + * will not yet be running when this callback is called. The bridge must 371 + * not enable the display link feeding the next bridge in the chain (if 372 + * there is one) when this callback is called. 373 * 374 * The @atomic_pre_enable callback is optional. 375 */ ··· 392 /** 393 * @atomic_enable: 394 * 395 + * This callback should enable the bridge. It is called right after 396 + * the preceding element in the display pipe is enabled. If the 397 + * preceding element is a bridge this means it's called after that 398 + * bridge's @atomic_enable or @enable function. If the preceding element 399 + * is a &drm_encoder it's called right after the encoder's 400 + * &drm_encoder_helper_funcs.atomic_enable hook. 401 * 402 * The bridge can assume that the display pipe (i.e. clocks and timing 403 * signals) feeding it is running when this callback is called. This 404 * callback must enable the display link feeding the next bridge in the 405 * chain if there is one. 406 * 407 * The @atomic_enable callback is optional. 408 */ ··· 424 /** 425 * @atomic_disable: 426 * 427 + * This callback should disable the bridge. It is called right before 428 + * the preceding element in the display pipe is disabled. If the 429 + * preceding element is a bridge this means it's called before that 430 + * bridge's @atomic_disable or @disable vfunc. If the preceding element 431 + * is a &drm_encoder it's called right before the 432 + * &drm_encoder_helper_funcs.atomic_disable hook. 433 * 434 * The bridge can assume that the display pipe (i.e. clocks and timing 435 * signals) feeding it is still running when this callback is called. 436 * 437 * The @atomic_disable callback is optional. 438 */ ··· 458 /** 459 * @atomic_post_disable: 460 * 461 + * This callback should disable the bridge. It is called right after the 462 + * preceding element in the display pipe is disabled. If the preceding 463 + * element is a bridge this means it's called after that bridge's 464 + * @atomic_post_disable or @post_disable function. If the preceding 465 + * element is a &drm_encoder it's called right after the encoder's 466 + * &drm_encoder_helper_funcs.atomic_disable hook. 467 + * 468 * The bridge must assume that the display pipe (i.e. clocks and timing 469 + * signals) feeding it is no longer running when this callback is 470 + * called. 471 * 472 * The @atomic_post_disable callback is optional. 473 */
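Taken together, the restored comments describe a strict nesting: each bridge's pre_enable runs before the element feeding it is enabled and must not yet drive the downstream link, enable runs once the input signal is live, and disable/post_disable mirror that on the way down. A small illustrative bridge skeleton following that contract; every foo_* name is hypothetical, and the hook prototypes are assumed to be the current ones that receive the global drm_atomic_state (earlier kernels passed a drm_bridge_state instead):

  #include <drm/drm_atomic.h>
  #include <drm/drm_bridge.h>

  /* Runs before the element feeding this bridge is enabled: power up and get
   * ready to receive a signal, but keep the link to the next bridge off. */
  static void foo_bridge_atomic_pre_enable(struct drm_bridge *bridge,
                                           struct drm_atomic_state *state)
  {
          /* enable regulators/clocks, release reset, program input timings */
  }

  /* Runs after the element feeding this bridge is enabled: the input signal is
   * up, so start driving the link to the next bridge in the chain. */
  static void foo_bridge_atomic_enable(struct drm_bridge *bridge,
                                       struct drm_atomic_state *state)
  {
  }

  /* Runs before the element feeding this bridge is disabled: the input is
   * still running; stop driving the downstream link. */
  static void foo_bridge_atomic_disable(struct drm_bridge *bridge,
                                        struct drm_atomic_state *state)
  {
  }

  /* Runs after the element feeding this bridge is disabled: the input signal
   * is gone; finish powering down. */
  static void foo_bridge_atomic_post_disable(struct drm_bridge *bridge,
                                             struct drm_atomic_state *state)
  {
  }

  static const struct drm_bridge_funcs foo_bridge_funcs = {
          .atomic_pre_enable   = foo_bridge_atomic_pre_enable,
          .atomic_enable       = foo_bridge_atomic_enable,
          .atomic_disable      = foo_bridge_atomic_disable,
          .atomic_post_disable = foo_bridge_atomic_post_disable,
          /* atomic state management hooks (reset/duplicate/destroy) omitted */
  };

Such a bridge is registered with drm_bridge_add() (or the devm variant) as usual; the helpers then walk the chain in the order the comments above document.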