
Merge tag 'amd-drm-next-6.18-2025-09-26' of https://gitlab.freedesktop.org/agd5f/linux into drm-next

amd-drm-next-6.18-2025-09-26:

amdgpu:
- Misc fixes
- Misc cleanups
- SMU 13.x fixes
- MES fix
- VCN 5.0.1 reset fixes
- DCN 3.2 watermark fixes
- AVI infoframe fixes
- PSR fix
- Brightness fixes
- DCN 3.1.4 fixes
- DCN 3.1+ DTM fixes
- DCN powergating fixes
- DMUB fixes
- DCN/SMU cleanup
- DCN stutter fixes
- DCN 3.5 fixes
- GAMMA_LUT fixes
- Add UserQ documentation
- GC 9.4 reset fixes
- Enforce isolation cleanups
- UserQ fixes
- DC/non-DC common modes cleanup
- DCE6-10 fixes

amdkfd:
- Fix a race in sw_fini
- Switch partition fix

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20250926143918.2030854-1-alexander.deucher@amd.com

+1195 -392
+1
Documentation/gpu/amdgpu/index.rst
··· 12 12 module-parameters 13 13 gc/index 14 14 display/index 15 + userq 15 16 flashing 16 17 xgmi 17 18 ras
+203
Documentation/gpu/amdgpu/userq.rst
··· 1 + ================== 2 + User Mode Queues 3 + ================== 4 + 5 + Introduction 6 + ============ 7 + 8 + Similar to the KFD, GPU engine queues move into userspace. The idea is to let 9 + user processes manage their submissions to the GPU engines directly, bypassing 10 + IOCTL calls to the driver to submit work. This reduces overhead and also allows 11 + the GPU to submit work to itself. Applications can set up work graphs of jobs 12 + across multiple GPU engines without needing trips through the CPU. 13 + 14 + UMDs directly interface with firmware via per application shared memory areas. 15 + The main vehicle for this is the queue. A queue is a ring buffer with a read 16 + pointer (rptr) and a write pointer (wptr). The UMD writes IP specific packets 17 + into the queue and the firmware processes those packets, kicking off work on the 18 + GPU engines. The CPU in the application (or another queue or device) updates 19 + the wptr to tell the firmware how far into the ring buffer to process packets 20 + and the rptr provides feedback to the UMD on how far the firmware has progressed 21 + in executing those packets. When the wptr and the rptr are equal, the queue is 22 + idle. 23 + 24 + Theory of Operation 25 + =================== 26 + 27 + The various engines on modern AMD GPUs support multiple queues per engine with a 28 + scheduling firmware which handles dynamically scheduling user queues on the 29 + available hardware queue slots. When the number of user queues outnumbers the 30 + available hardware queue slots, the scheduling firmware dynamically maps and 31 + unmaps queues based on priority and time quanta. The state of each user queue 32 + is managed in the kernel driver in an MQD (Memory Queue Descriptor). This is a 33 + buffer in GPU accessible memory that stores the state of a user queue. The 34 + scheduling firmware uses the MQD to load the queue state into an HQD (Hardware 35 + Queue Descriptor) when a user queue is mapped. Each user queue requires a 36 + number of additional buffers which represent the ring buffer and any metadata 37 + needed by the engine for runtime operation. On most engines this consists of 38 + the ring buffer itself, a rptr buffer (where the firmware will shadow the rptr 39 + to userspace), a wptr buffer (where the application will write the wptr for the 40 + firmware to fetch it), and a doorbell. A doorbell is a piece of one of the 41 + device's MMIO BARs which can be mapped to specific user queues. When the 42 + application writes to the doorbell, it will signal the firmware to take some 43 + action. Writing to the doorbell wakes the firmware and causes it to fetch the 44 + wptr and start processing the packets in the queue. Each 4K page of the doorbell 45 + BAR supports specific offset ranges for specific engines. The doorbell of a 46 + queue must be mapped into the aperture aligned to the IP used by the queue 47 + (e.g., GFX, VCN, SDMA, etc.). These doorbell apertures are set up via NBIO 48 + registers. Doorbells are 32 bit or 64 bit (depending on the engine) chunks of 49 + the doorbell BAR. A 4K doorbell page provides 512 64-bit doorbells for up to 50 + 512 user queues. A subset of each page is reserved for each IP type supported 51 + on the device. The user can query the doorbell ranges for each IP via the INFO 52 + IOCTL. See the IOCTL Interfaces section for more information. 53 + 54 + When an application wants to create a user queue, it allocates the necessary 55 + buffers for the queue (ring buffer, wptr and rptr, context save areas, etc.). 
56 + These can be separate buffers or all part of one larger buffer. The application 57 + would map the buffer(s) into its GPUVM and use the GPU virtual addresses for 58 + the areas of memory they want to use for the user queue. They would also 59 + allocate a doorbell page for the doorbells used by the user queues. The 60 + application would then populate the MQD in the USERQ IOCTL structure with the 61 + GPU virtual addresses and doorbell index they want to use. The user can also 62 + specify the attributes for the user queue (priority, whether the queue is secure 63 + for protected content, etc.). The application would then call the USERQ 64 + CREATE IOCTL to create the queue using the specified MQD details in the IOCTL. 65 + The kernel driver then validates the MQD provided by the application and 66 + translates the MQD into the engine specific MQD format for the IP. The IP 67 + specific MQD would be allocated and the queue would be added to the run list 68 + maintained by the scheduling firmware. Once the queue has been created, the 69 + application can write packets directly into the queue, update the wptr, and 70 + write to the doorbell offset to kick off work in the user queue. 71 + 72 + When the application is done with the user queue, it would call the USERQ 73 + FREE IOCTL to destroy it. The kernel driver would preempt the queue and 74 + remove it from the scheduling firmware's run list. Then the IP specific MQD 75 + would be freed and the user queue state would be cleaned up. 76 + 77 + Some engines may require the aggregated doorbell too if the engine does not 78 + support doorbells from unmapped queues. The aggregated doorbell is a special 79 + page of doorbell space which wakes the scheduler. In cases where the engine may 80 + be oversubscribed, some queues may not be mapped. If the doorbell is rung when 81 + the queue is not mapped, the engine firmware may miss the request. Some 82 + scheduling firmware may work around this by polling wptr shadows when the 83 + hardware is oversubscribed; other engines may support doorbell updates from 84 + unmapped queues. In the event that one of these options is not available, the 85 + kernel driver will map a page of aggregated doorbell space into each GPUVM 86 + space. The UMD will then update the doorbell and wptr as normal and then write 87 + to the aggregated doorbell as well. 88 + 89 + Special Packets 90 + --------------- 91 + 92 + In order to support legacy implicit synchronization, as well as mixed user and 93 + kernel queues, we need a synchronization mechanism that is secure. Because 94 + kernel queues or memory management tasks depend on kernel fences, we need a way 95 + for user queues to update memory that the kernel can use for a fence, that can't 96 + be messed with by a bad actor. To support this, we've added a protected fence 97 + packet. This packet works by writing a monotonically increasing value to 98 + a memory location that only privileged clients have write access to. User 99 + queues only have read access. When this packet is executed, the memory location 100 + is updated and other queues (kernel or user) can see the results. The 101 + user application would submit this packet in their command stream. The actual 102 + packet format varies from IP to IP (GFX/Compute, SDMA, VCN, etc.), but the 103 + behavior is the same. The packet submission is handled in userspace. The 104 + kernel driver sets up the privileged memory used for each user queue when it 105 + creates the queue on behalf of the application. 
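To make the submission flow described above concrete, here is a minimal userspace sketch. It is not taken from any UMD: the struct user_queue type, the field names, and the helper are hypothetical stand-ins for the GPU-visible mappings created along with the queue, and GCC's __sync_synchronize() stands in for whatever ordering a real UMD uses before publishing the wptr and ringing the doorbell.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical userspace view of one user queue; plain memory stands in
 * for the ring buffer, wptr shadow and doorbell mappings. */
struct user_queue {
	uint32_t *ring;                  /* ring buffer, ring_size dwords   */
	uint32_t ring_size;              /* must be a power of two          */
	volatile uint64_t *wptr;         /* wptr shadow fetched by firmware */
	volatile uint64_t *doorbell;     /* per-queue doorbell slot         */
	volatile uint64_t *agg_doorbell; /* aggregated doorbell, if needed  */
	uint64_t wptr_local;             /* next free dword index           */
};

/* Copy an IP specific packet into the ring and publish it to the firmware. */
static void userq_submit(struct user_queue *q, const uint32_t *pkt, uint32_t ndw)
{
	for (uint32_t i = 0; i < ndw; i++)
		q->ring[(q->wptr_local + i) & (q->ring_size - 1)] = pkt[i];
	q->wptr_local += ndw;

	__sync_synchronize();           /* packets visible before the wptr  */
	*q->wptr = q->wptr_local;

	__sync_synchronize();           /* wptr visible before the doorbell */
	*q->doorbell = q->wptr_local;   /* wake the firmware                */

	if (q->agg_doorbell)            /* oversubscribed engines may also  */
		*q->agg_doorbell = 1;   /* need the aggregated doorbell     */
}

int main(void)
{
	static uint32_t ring[256];
	static uint64_t wptr, doorbell, agg;
	struct user_queue q = { ring, 256, &wptr, &doorbell, &agg, 0 };
	uint32_t nop_pkt[4] = { 0 };    /* placeholder for an IP specific packet */

	userq_submit(&q, nop_pkt, 4);
	printf("wptr=%llu doorbell=%llu\n",
	       (unsigned long long)wptr, (unsigned long long)doorbell);
	return 0;
}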
106 + 107 + 108 + Memory Management 109 + ================= 110 + 111 + It is assumed that all buffers mapped into the GPUVM space for the process are 112 + valid when engines on the GPU are running. The kernel driver will only allow 113 + user queues to run when all buffers are mapped. If there is a memory event that 114 + requires buffer migration, the kernel driver will preempt the user queues, 115 + migrate buffers to where they need to be, update the GPUVM page tables and 116 + invalidate the TLB, and then resume the user queues. 117 + 118 + Interaction with Kernel Queues 119 + ============================== 120 + 121 + Depending on the IP and the scheduling firmware, you can enable kernel queues 122 + and user queues at the same time; however, you are limited by the HQD slots. 123 + Kernel queues are always mapped so any work that goes into kernel queues will 124 + take priority. This limits the available HQD slots for user queues. 125 + 126 + Not all IPs will support user queues on all GPUs. As such, UMDs will need to 127 + support both user queues and kernel queues depending on the IP. For example, a 128 + GPU may support user queues for GFX, compute, and SDMA, but not for VCN, JPEG, 129 + and VPE. UMDs need to support both. The kernel driver provides a way to 130 + determine if user queues and kernel queues are supported on a per IP basis. 131 + UMDs can query this information via the INFO IOCTL and determine whether to use 132 + kernel queues or user queues for each IP. 133 + 134 + Queue Resets 135 + ============ 136 + 137 + For most engines, queues can be reset individually. GFX, compute, and SDMA 138 + queues all support per-queue reset. When a hung queue is detected, it can be 139 + reset either via the scheduling firmware or MMIO. Since there are no kernel 140 + fences for most user queues, hangs will usually only be detected when some other 141 + event happens; e.g., a memory event which requires migration of buffers. When 142 + the queues are preempted, if the queue is hung, the preemption will fail. 143 + The driver will then look up the queues that failed to preempt, reset them, and 144 + record which queues are hung. 145 + 146 + On the UMD side, we will add a USERQ QUERY_STATUS IOCTL to query the queue 147 + status. The UMD will provide the queue id in the IOCTL and the kernel driver 148 + will check if it has already recorded the queue as hung (e.g., due to failed 149 + preemption) and report back the status. 150 + 151 + IOCTL Interfaces 152 + ================ 153 + 154 + GPU virtual addresses used for queues and related data (rptrs, wptrs, context 155 + save areas, etc.) should be validated by the kernel mode driver to prevent the 156 + user from specifying invalid GPU virtual addresses. If the user provides 157 + invalid GPU virtual addresses or doorbell indices, the IOCTL should return an 158 + error. These buffers should also be tracked in the kernel driver so 159 + that if the user attempts to unmap the buffer(s) from the GPUVM, the unmap call 160 + would return an error. 161 + 162 + INFO 163 + ---- 164 + There are several new INFO queries related to user queues in order to query the 165 + size of user queue meta data needed for a user queue (e.g., context save areas 166 + or shadow buffers), whether kernel or user queues or both are supported 167 + for each IP type, and the offsets for each IP type in each doorbell page. 168 + 169 + USERQ 170 + ----- 171 + The USERQ IOCTL is used for creating, freeing, and querying the status of user 172 + queues. 
It supports 3 opcodes: 173 + 174 + 1. CREATE - Create a user queue. The application provides an MQD-like structure 175 + that defines the type of queue and associated metadata and flags for that 176 + queue type. Returns the queue id. 177 + 2. FREE - Free a user queue. 178 + 3. QUERY_STATUS - Query the status of a queue. Used to check if the queue is 179 + healthy or not. E.g., if the queue has been reset. (WIP) 180 + 181 + USERQ_SIGNAL 182 + ------------ 183 + The USERQ_SIGNAL IOCTL is used to provide a list of sync objects to be signaled. 184 + 185 + USERQ_WAIT 186 + ---------- 187 + The USERQ_WAIT IOCTL is used to provide a list of sync objects to be waited on. 188 + 189 + Kernel and User Queues 190 + ====================== 191 + 192 + In order to properly validate and test performance, we have a driver option to 193 + select what type of queues are enabled (kernel queues, user queues or both). 194 + The user_queue driver parameter allows you to enable kernel queues only (0), 195 + user queues and kernel queues (1), and user queues only (2). Enabling user 196 + queues only will free up static queue assignments that would otherwise be used 197 + by kernel queues for use by the scheduling firmware. Some kernel queues are 198 + required for kernel driver operation and they will always be created. When the 199 + kernel queues are not enabled, they are not registered with the drm scheduler 200 + and the CS IOCTL will reject any incoming command submissions which target those 201 + queue types. The kernel-queues-only setting mirrors the behavior on all existing GPUs. 202 + Enabling both queues allows for backwards compatibility with old userspace while 203 + still supporting user queues.
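As an illustration of the CREATE opcode described in the new documentation, here is a hedged userspace sketch. The queue_va, queue_size, rptr_va and wptr_va fields are the ones the kernel validates in the amdgpu_userq.c hunk later in this series; the union and constant names (drm_amdgpu_userq, AMDGPU_USERQ_OP_CREATE, DRM_IOCTL_AMDGPU_USERQ) and the ip_type/doorbell fields are assumptions about the amdgpu uapi and should be checked against include/uapi/drm/amdgpu_drm.h. IP specific MQD metadata (shadow and context save buffers) is omitted for brevity.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/amdgpu_drm.h>   /* path depends on where the uapi headers are installed */

/* Create one user queue for ring/rptr/wptr buffers the caller has already
 * mapped into its GPUVM; returns 0 and the queue id on success. */
static int userq_create(int drm_fd, uint32_t ip_type,
			uint32_t doorbell_handle, uint32_t doorbell_offset,
			uint64_t queue_va, uint64_t queue_size,
			uint64_t rptr_va, uint64_t wptr_va,
			uint32_t *queue_id)
{
	union drm_amdgpu_userq args;       /* assumed uapi type, see lead-in */

	memset(&args, 0, sizeof(args));
	args.in.op = AMDGPU_USERQ_OP_CREATE;   /* opcode 1 of the 3 listed above */
	args.in.ip_type = ip_type;             /* e.g. AMDGPU_HW_IP_GFX          */
	args.in.doorbell_handle = doorbell_handle;
	args.in.doorbell_offset = doorbell_offset;
	args.in.queue_va = queue_va;           /* ring buffer GPU VA             */
	args.in.queue_size = queue_size;
	args.in.rptr_va = rptr_va;             /* one GPU page each, per the     */
	args.in.wptr_va = wptr_va;             /* validation added in this series */

	if (ioctl(drm_fd, DRM_IOCTL_AMDGPU_USERQ, &args))
		return -1;

	*queue_id = args.out.queue_id;
	return 0;
}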
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
··· 352 352 (*dump)[i++][1] = RREG32_SOC15_IP(GC, addr); \ 353 353 } while (0) 354 354 355 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 355 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 356 356 if (*dump == NULL) 357 357 return -ENOMEM; 358 358 ··· 449 449 #undef HQD_N_REGS 450 450 #define HQD_N_REGS (19+6+7+10) 451 451 452 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 452 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 453 453 if (*dump == NULL) 454 454 return -ENOMEM; 455 455
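The kmalloc_array() conversions above, and the identical ones in the gfx v10.3, v11 and v12 files that follow, rely on *dump having type uint32_t (*)[2]: sizeof(**dump) is one [register, value] pair, so the element count times the pair size equals the old HQD_N_REGS*2*sizeof(uint32_t), now with an overflow-checked multiplication. A small userspace sketch of that sizing, with calloc() standing in for kmalloc_array():

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define HQD_N_REGS (19 + 6 + 7 + 10)   /* value used in the gfx v10 dump path */

int main(void)
{
	/* The dump buffer is an array of [register, value] pairs. */
	uint32_t (*dump)[2];

	/* sizeof(**dump) is the size of one pair (2 * sizeof(uint32_t)), so an
	 * overflow-checked count * size allocation requests the same number of
	 * bytes as the old open-coded multiplication. */
	dump = calloc(HQD_N_REGS, sizeof(**dump));
	if (!dump)
		return 1;

	printf("pair size: %zu bytes, total: %zu bytes\n",
	       sizeof(**dump), HQD_N_REGS * sizeof(**dump));
	free(dump);
	return 0;
}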
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
··· 338 338 (*dump)[i++][1] = RREG32_SOC15_IP(GC, addr); \ 339 339 } while (0) 340 340 341 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 341 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 342 342 if (*dump == NULL) 343 343 return -ENOMEM; 344 344 ··· 435 435 #undef HQD_N_REGS 436 436 #define HQD_N_REGS (19+6+7+12) 437 437 438 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 438 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 439 439 if (*dump == NULL) 440 440 return -ENOMEM; 441 441
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
··· 323 323 (*dump)[i++][1] = RREG32(addr); \ 324 324 } while (0) 325 325 326 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 326 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 327 327 if (*dump == NULL) 328 328 return -ENOMEM; 329 329 ··· 420 420 #undef HQD_N_REGS 421 421 #define HQD_N_REGS (7+11+1+12+12) 422 422 423 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 423 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 424 424 if (*dump == NULL) 425 425 return -ENOMEM; 426 426
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v12.c
··· 115 115 (*dump)[i++][1] = RREG32(addr); \ 116 116 } while (0) 117 117 118 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 118 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 119 119 if (*dump == NULL) 120 120 return -ENOMEM; 121 121 ··· 146 146 #undef HQD_N_REGS 147 147 #define HQD_N_REGS (last_reg - first_reg + 1) 148 148 149 - *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); 149 + *dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL); 150 150 if (*dump == NULL) 151 151 return -ENOMEM; 152 152
+7 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1089 1089 return 0; 1090 1090 } 1091 1091 1092 - ret = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages, &range); 1092 + ret = amdgpu_ttm_tt_get_user_pages(bo, &range); 1093 1093 if (ret) { 1094 1094 if (ret == -EAGAIN) 1095 1095 pr_debug("Failed to get user pages, try again\n"); ··· 1103 1103 pr_err("%s: Failed to reserve BO\n", __func__); 1104 1104 goto release_out; 1105 1105 } 1106 + 1107 + amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, range); 1108 + 1106 1109 amdgpu_bo_placement_from_domain(bo, mem->domain); 1107 1110 ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); 1108 1111 if (ret) ··· 2568 2565 } 2569 2566 2570 2567 /* Get updated user pages */ 2571 - ret = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages, 2572 - &mem->range); 2568 + ret = amdgpu_ttm_tt_get_user_pages(bo, &mem->range); 2573 2569 if (ret) { 2574 2570 pr_debug("Failed %d to get user pages\n", ret); 2575 2571 ··· 2596 2594 2597 2595 ret = 0; 2598 2596 } 2597 + 2598 + amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, mem->range); 2599 2599 2600 2600 mutex_lock(&process_info->notifier_lock); 2601 2601
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
··· 38 38 struct amdgpu_bo *bo; 39 39 struct amdgpu_bo_va *bo_va; 40 40 uint32_t priority; 41 - struct page **user_pages; 42 41 struct hmm_range *range; 43 42 bool user_invalidated; 44 43 };
+19 -22
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
··· 398 398 struct drm_display_mode *mode = NULL; 399 399 struct drm_display_mode *native_mode = &amdgpu_encoder->native_mode; 400 400 int i; 401 - static const struct mode_size { 401 + int n; 402 + struct mode_size { 403 + char name[DRM_DISPLAY_MODE_LEN]; 402 404 int w; 403 405 int h; 404 - } common_modes[17] = { 405 - { 640, 480}, 406 - { 720, 480}, 407 - { 800, 600}, 408 - { 848, 480}, 409 - {1024, 768}, 410 - {1152, 768}, 411 - {1280, 720}, 412 - {1280, 800}, 413 - {1280, 854}, 414 - {1280, 960}, 415 - {1280, 1024}, 416 - {1440, 900}, 417 - {1400, 1050}, 418 - {1680, 1050}, 419 - {1600, 1200}, 420 - {1920, 1080}, 421 - {1920, 1200} 406 + } common_modes[] = { 407 + { "640x480", 640, 480}, 408 + { "800x600", 800, 600}, 409 + { "1024x768", 1024, 768}, 410 + { "1280x720", 1280, 720}, 411 + { "1280x800", 1280, 800}, 412 + {"1280x1024", 1280, 1024}, 413 + { "1440x900", 1440, 900}, 414 + {"1680x1050", 1680, 1050}, 415 + {"1600x1200", 1600, 1200}, 416 + {"1920x1080", 1920, 1080}, 417 + {"1920x1200", 1920, 1200} 422 418 }; 423 419 424 - for (i = 0; i < 17; i++) { 420 + n = ARRAY_SIZE(common_modes); 421 + 422 + for (i = 0; i < n; i++) { 425 423 if (amdgpu_encoder->devices & (ATOM_DEVICE_TV_SUPPORT)) { 426 424 if (common_modes[i].w > 1024 || 427 425 common_modes[i].h > 768) ··· 432 434 common_modes[i].h == native_mode->vdisplay)) 433 435 continue; 434 436 } 435 - if (common_modes[i].w < 320 || common_modes[i].h < 200) 436 - continue; 437 437 438 438 mode = drm_cvt_mode(dev, common_modes[i].w, common_modes[i].h, 60, false, false, false); 439 439 if (!mode) 440 440 return; 441 + strscpy(mode->name, common_modes[i].name, DRM_DISPLAY_MODE_LEN); 441 442 442 443 drm_mode_probed_add(connector, mode); 443 444 }
+6 -25
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 29 29 #include <linux/pagemap.h> 30 30 #include <linux/sync_file.h> 31 31 #include <linux/dma-buf.h> 32 + #include <linux/hmm.h> 32 33 33 34 #include <drm/amdgpu_drm.h> 34 35 #include <drm/drm_syncobj.h> ··· 884 883 amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { 885 884 bool userpage_invalidated = false; 886 885 struct amdgpu_bo *bo = e->bo; 887 - int i; 888 886 889 - e->user_pages = kvcalloc(bo->tbo.ttm->num_pages, 890 - sizeof(struct page *), 891 - GFP_KERNEL); 892 - if (!e->user_pages) { 893 - drm_err(adev_to_drm(p->adev), "kvmalloc_array failure\n"); 894 - r = -ENOMEM; 887 + r = amdgpu_ttm_tt_get_user_pages(bo, &e->range); 888 + if (r) 895 889 goto out_free_user_pages; 896 - } 897 - 898 - r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages, &e->range); 899 - if (r) { 900 - kvfree(e->user_pages); 901 - e->user_pages = NULL; 902 - goto out_free_user_pages; 903 - } 904 890 905 891 for (i = 0; i < bo->tbo.ttm->num_pages; i++) { 906 - if (bo->tbo.ttm->pages[i] != e->user_pages[i]) { 892 + if (bo->tbo.ttm->pages[i] != hmm_pfn_to_page(e->range->hmm_pfns[i])) { 907 893 userpage_invalidated = true; 908 894 break; 909 895 } ··· 934 946 } 935 947 936 948 if (amdgpu_ttm_tt_is_userptr(e->bo->tbo.ttm) && 937 - e->user_invalidated && e->user_pages) { 949 + e->user_invalidated) { 938 950 amdgpu_bo_placement_from_domain(e->bo, 939 951 AMDGPU_GEM_DOMAIN_CPU); 940 952 r = ttm_bo_validate(&e->bo->tbo, &e->bo->placement, ··· 943 955 goto out_free_user_pages; 944 956 945 957 amdgpu_ttm_tt_set_user_pages(e->bo->tbo.ttm, 946 - e->user_pages); 958 + e->range); 947 959 } 948 - 949 - kvfree(e->user_pages); 950 - e->user_pages = NULL; 951 960 } 952 961 953 962 amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, ··· 986 1001 amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { 987 1002 struct amdgpu_bo *bo = e->bo; 988 1003 989 - if (!e->user_pages) 990 - continue; 991 1004 amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm, e->range); 992 - kvfree(e->user_pages); 993 - e->user_pages = NULL; 994 1005 e->range = NULL; 995 1006 } 996 1007 mutex_unlock(&p->bo_list->bo_list_mutex);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 960 960 */ 961 961 MODULE_PARM_DESC( 962 962 freesync_video, 963 - "Enable freesync modesetting optimization feature (0 = off (default), 1 = on)"); 963 + "Adds additional modes via VRR for refresh changes without a full modeset (0 = off (default), 1 = on)"); 964 964 module_param_named(freesync_video, amdgpu_freesync_vid_mode, uint, 0444); 965 965 966 966 /**
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 572 572 goto release_object; 573 573 574 574 if (args->flags & AMDGPU_GEM_USERPTR_VALIDATE) { 575 - r = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages, 576 - &range); 575 + r = amdgpu_ttm_tt_get_user_pages(bo, &range); 577 576 if (r) 578 577 goto release_object; 579 578 580 579 r = amdgpu_bo_reserve(bo, true); 581 580 if (r) 582 581 goto user_pages_done; 582 + 583 + amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, range); 583 584 584 585 amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT); 585 586 r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+1 -10
drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.c
··· 167 167 168 168 int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier, 169 169 uint64_t start, uint64_t npages, bool readonly, 170 - void *owner, struct page **pages, 170 + void *owner, 171 171 struct hmm_range **phmm_range) 172 172 { 173 173 struct hmm_range *hmm_range; 174 174 unsigned long end; 175 175 unsigned long timeout; 176 - unsigned long i; 177 176 unsigned long *pfns; 178 177 int r = 0; 179 178 ··· 220 221 221 222 hmm_range->start = start; 222 223 hmm_range->hmm_pfns = pfns; 223 - 224 - /* 225 - * Due to default_flags, all pages are HMM_PFN_VALID or 226 - * hmm_range_fault() fails. FIXME: The pages cannot be touched outside 227 - * the notifier_lock, and mmu_interval_read_retry() must be done first. 228 - */ 229 - for (i = 0; pages && i < npages; i++) 230 - pages[i] = hmm_pfn_to_page(pfns[i]); 231 224 232 225 *phmm_range = hmm_range; 233 226
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.h
··· 33 33 34 34 int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier, 35 35 uint64_t start, uint64_t npages, bool readonly, 36 - void *owner, struct page **pages, 36 + void *owner, 37 37 struct hmm_range **phmm_range); 38 38 bool amdgpu_hmm_range_get_pages_done(struct hmm_range *hmm_range); 39 39
+40 -26
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
··· 275 275 { 276 276 struct amdgpu_device *adev = ring->adev; 277 277 unsigned vmhub = ring->vm_hub; 278 - struct amdgpu_vmid_mgr *id_mgr = &adev->vm_manager.id_mgr[vmhub]; 279 278 uint64_t fence_context = adev->fence_context + ring->idx; 280 279 bool needs_flush = vm->use_cpu_for_update; 281 280 uint64_t updates = amdgpu_vm_tlb_seq(vm); 282 281 int r; 283 282 284 - *id = id_mgr->reserved; 283 + *id = vm->reserved_vmid[vmhub]; 285 284 if ((*id)->owner != vm->immediate.fence_context || 286 285 !amdgpu_vmid_compatible(*id, job) || 287 286 (*id)->flushed_updates < updates || ··· 473 474 return vm->reserved_vmid[vmhub]; 474 475 } 475 476 476 - int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev, 477 + /* 478 + * amdgpu_vmid_alloc_reserved - reserve a specific VMID for this vm 479 + * @adev: amdgpu device structure 480 + * @vm: the VM to reserve an ID for 481 + * @vmhub: the VMHUB which should be used 482 + * 483 + * Mostly used to have a reserved VMID for debugging and SPM. 484 + * 485 + * Returns: 0 for success, -ENOENT if an ID is already reserved. 486 + */ 487 + int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev, struct amdgpu_vm *vm, 477 488 unsigned vmhub) 478 489 { 479 490 struct amdgpu_vmid_mgr *id_mgr = &adev->vm_manager.id_mgr[vmhub]; 491 + struct amdgpu_vmid *id; 492 + int r = 0; 480 493 481 494 mutex_lock(&id_mgr->lock); 482 - 483 - ++id_mgr->reserved_use_count; 484 - if (!id_mgr->reserved) { 485 - struct amdgpu_vmid *id; 486 - 487 - id = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vmid, 488 - list); 489 - /* Remove from normal round robin handling */ 490 - list_del_init(&id->list); 491 - id_mgr->reserved = id; 495 + if (vm->reserved_vmid[vmhub]) 496 + goto unlock; 497 + if (id_mgr->reserved_vmid) { 498 + r = -ENOENT; 499 + goto unlock; 492 500 } 493 - 501 + /* Remove from normal round robin handling */ 502 + id = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vmid, list); 503 + list_del_init(&id->list); 504 + vm->reserved_vmid[vmhub] = id; 505 + id_mgr->reserved_vmid = true; 494 506 mutex_unlock(&id_mgr->lock); 507 + 495 508 return 0; 509 + unlock: 510 + mutex_unlock(&id_mgr->lock); 511 + return r; 496 512 } 497 513 498 - void amdgpu_vmid_free_reserved(struct amdgpu_device *adev, 514 + /* 515 + * amdgpu_vmid_free_reserved - free up a reserved VMID again 516 + * @adev: amdgpu device structure 517 + * @vm: the VM with the reserved ID 518 + * @vmhub: the VMHUB which should be used 519 + */ 520 + void amdgpu_vmid_free_reserved(struct amdgpu_device *adev, struct amdgpu_vm *vm, 499 521 unsigned vmhub) 500 522 { 501 523 struct amdgpu_vmid_mgr *id_mgr = &adev->vm_manager.id_mgr[vmhub]; 502 524 503 525 mutex_lock(&id_mgr->lock); 504 - if (!--id_mgr->reserved_use_count) { 505 - /* give the reserved ID back to normal round robin */ 506 - list_add(&id_mgr->reserved->list, &id_mgr->ids_lru); 507 - id_mgr->reserved = NULL; 526 + if (vm->reserved_vmid[vmhub]) { 527 + list_add(&vm->reserved_vmid[vmhub]->list, 528 + &id_mgr->ids_lru); 529 + vm->reserved_vmid[vmhub] = NULL; 530 + id_mgr->reserved_vmid = false; 508 531 } 509 - 510 532 mutex_unlock(&id_mgr->lock); 511 533 } 512 534 ··· 594 574 595 575 mutex_init(&id_mgr->lock); 596 576 INIT_LIST_HEAD(&id_mgr->ids_lru); 597 - id_mgr->reserved_use_count = 0; 598 577 599 578 /* for GC <10, SDMA uses MMHUB so use first_kfd_vmid for both GC and MM */ 600 579 if (amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(10, 0, 0)) ··· 612 593 amdgpu_sync_create(&id_mgr->ids[j].active); 613 594 list_add_tail(&id_mgr->ids[j].list, 
&id_mgr->ids_lru); 614 595 } 615 - } 616 - /* alloc a default reserved vmid to enforce isolation */ 617 - for (i = 0; i < (adev->xcp_mgr ? adev->xcp_mgr->num_xcps : 1); i++) { 618 - if (adev->enforce_isolation[i] != AMDGPU_ENFORCE_ISOLATION_DISABLE) 619 - amdgpu_vmid_alloc_reserved(adev, AMDGPU_GFXHUB(i)); 620 596 } 621 597 } 622 598
+5 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.h
··· 67 67 unsigned num_ids; 68 68 struct list_head ids_lru; 69 69 struct amdgpu_vmid ids[AMDGPU_NUM_VMID]; 70 - struct amdgpu_vmid *reserved; 71 - unsigned int reserved_use_count; 70 + bool reserved_vmid; 72 71 }; 73 72 74 73 int amdgpu_pasid_alloc(unsigned int bits); ··· 78 79 bool amdgpu_vmid_had_gpu_reset(struct amdgpu_device *adev, 79 80 struct amdgpu_vmid *id); 80 81 bool amdgpu_vmid_uses_reserved(struct amdgpu_vm *vm, unsigned int vmhub); 81 - int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev, 82 - unsigned vmhub); 83 - void amdgpu_vmid_free_reserved(struct amdgpu_device *adev, 84 - unsigned vmhub); 82 + int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev, struct amdgpu_vm *vm, 83 + unsigned vmhub); 84 + void amdgpu_vmid_free_reserved(struct amdgpu_device *adev, struct amdgpu_vm *vm, 85 + unsigned vmhub); 85 86 int amdgpu_vmid_grab(struct amdgpu_vm *vm, struct amdgpu_ring *ring, 86 87 struct amdgpu_job *job, struct dma_fence **fence); 87 88 void amdgpu_vmid_reset(struct amdgpu_device *adev, unsigned vmhub,
+4 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 708 708 * Calling function must call amdgpu_ttm_tt_userptr_range_done() once and only 709 709 * once afterwards to stop HMM tracking 710 710 */ 711 - int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages, 711 + int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, 712 712 struct hmm_range **range) 713 713 { 714 714 struct ttm_tt *ttm = bo->tbo.ttm; ··· 745 745 746 746 readonly = amdgpu_ttm_tt_is_readonly(ttm); 747 747 r = amdgpu_hmm_range_get_pages(&bo->notifier, start, ttm->num_pages, 748 - readonly, NULL, pages, range); 748 + readonly, NULL, range); 749 749 out_unlock: 750 750 mmap_read_unlock(mm); 751 751 if (r) ··· 797 797 * that backs user memory and will ultimately be mapped into the device 798 798 * address space. 799 799 */ 800 - void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct page **pages) 800 + void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct hmm_range *range) 801 801 { 802 802 unsigned long i; 803 803 804 804 for (i = 0; i < ttm->num_pages; ++i) 805 - ttm->pages[i] = pages ? pages[i] : NULL; 805 + ttm->pages[i] = range ? hmm_pfn_to_page(range->hmm_pfns[i]) : NULL; 806 806 } 807 807 808 808 /*
+2 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
··· 191 191 uint64_t amdgpu_ttm_domain_start(struct amdgpu_device *adev, uint32_t type); 192 192 193 193 #if IS_ENABLED(CONFIG_DRM_AMDGPU_USERPTR) 194 - int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages, 194 + int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, 195 195 struct hmm_range **range); 196 196 void amdgpu_ttm_tt_discard_user_pages(struct ttm_tt *ttm, 197 197 struct hmm_range *range); ··· 199 199 struct hmm_range *range); 200 200 #else 201 201 static inline int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, 202 - struct page **pages, 203 202 struct hmm_range **range) 204 203 { 205 204 return -EPERM; ··· 214 215 } 215 216 #endif 216 217 217 - void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct page **pages); 218 + void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct hmm_range *range); 218 219 int amdgpu_ttm_tt_get_userptr(const struct ttm_buffer_object *tbo, 219 220 uint64_t *user_addr); 220 221 int amdgpu_ttm_tt_set_userptr(struct ttm_buffer_object *bo,
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
··· 71 71 return 0; 72 72 } 73 73 74 + r = -EINVAL; 74 75 out_err: 75 76 amdgpu_bo_unreserve(vm->root.bo); 76 77 return r; ··· 509 508 if (amdgpu_userq_input_va_validate(&fpriv->vm, args->in.queue_va, args->in.queue_size) || 510 509 amdgpu_userq_input_va_validate(&fpriv->vm, args->in.rptr_va, AMDGPU_GPU_PAGE_SIZE) || 511 510 amdgpu_userq_input_va_validate(&fpriv->vm, args->in.wptr_va, AMDGPU_GPU_PAGE_SIZE)) { 511 + r = -EINVAL; 512 512 kfree(queue); 513 513 goto unlock; 514 514 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
··· 284 284 285 285 /* Check if hardware has already processed the job */ 286 286 spin_lock_irqsave(&fence_drv->fence_list_lock, flags); 287 - if (!dma_fence_is_signaled_locked(fence)) 287 + if (!dma_fence_is_signaled(fence)) 288 288 list_add_tail(&userq_fence->link, &fence_drv->fences); 289 289 else 290 290 dma_fence_put(fence);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
··· 501 501 struct amdgpu_fw_shared_rb_setup rb_setup; 502 502 struct amdgpu_fw_shared_smu_interface_info smu_dpm_interface; 503 503 struct amdgpu_fw_shared_drm_key_wa drm_key_wa; 504 - uint8_t pad3[9]; 504 + uint8_t pad3[404]; 505 505 }; 506 506 507 507 #define VCN_BLOCK_ENCODE_DISABLE_MASK 0x80
+4 -13
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2790 2790 dma_fence_put(vm->last_update); 2791 2791 2792 2792 for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) { 2793 - if (vm->reserved_vmid[i]) { 2794 - amdgpu_vmid_free_reserved(adev, i); 2795 - vm->reserved_vmid[i] = false; 2796 - } 2793 + amdgpu_vmid_free_reserved(adev, vm, i); 2797 2794 } 2798 2795 2799 2796 ttm_lru_bulk_move_fini(&adev->mman.bdev, &vm->lru_bulk_move); ··· 2886 2889 union drm_amdgpu_vm *args = data; 2887 2890 struct amdgpu_device *adev = drm_to_adev(dev); 2888 2891 struct amdgpu_fpriv *fpriv = filp->driver_priv; 2892 + struct amdgpu_vm *vm = &fpriv->vm; 2889 2893 2890 2894 /* No valid flags defined yet */ 2891 2895 if (args->in.flags) ··· 2895 2897 switch (args->in.op) { 2896 2898 case AMDGPU_VM_OP_RESERVE_VMID: 2897 2899 /* We only have requirement to reserve vmid from gfxhub */ 2898 - if (!fpriv->vm.reserved_vmid[AMDGPU_GFXHUB(0)]) { 2899 - amdgpu_vmid_alloc_reserved(adev, AMDGPU_GFXHUB(0)); 2900 - fpriv->vm.reserved_vmid[AMDGPU_GFXHUB(0)] = true; 2901 - } 2902 - 2900 + amdgpu_vmid_alloc_reserved(adev, vm, AMDGPU_GFXHUB(0)); 2903 2901 break; 2904 2902 case AMDGPU_VM_OP_UNRESERVE_VMID: 2905 - if (fpriv->vm.reserved_vmid[AMDGPU_GFXHUB(0)]) { 2906 - amdgpu_vmid_free_reserved(adev, AMDGPU_GFXHUB(0)); 2907 - fpriv->vm.reserved_vmid[AMDGPU_GFXHUB(0)] = false; 2908 - } 2903 + amdgpu_vmid_free_reserved(adev, vm, AMDGPU_GFXHUB(0)); 2909 2904 break; 2910 2905 default: 2911 2906 return -EINVAL;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 415 415 struct dma_fence *last_unlocked; 416 416 417 417 unsigned int pasid; 418 - bool reserved_vmid[AMDGPU_MAX_VMHUBS]; 418 + struct amdgpu_vmid *reserved_vmid[AMDGPU_MAX_VMHUBS]; 419 419 420 420 /* Flag to indicate if VM tables are updated by CPU or GPU (SDMA) */ 421 421 bool use_cpu_for_update;
+4 -3
drivers/gpu/drm/amd/amdgpu/atom.c
··· 1502 1502 { 1503 1503 unsigned char *atom_rom_hdr; 1504 1504 unsigned char *str; 1505 - uint16_t base; 1505 + uint16_t base, len; 1506 1506 1507 1507 base = CU16(ATOM_ROM_TABLE_PTR); 1508 1508 atom_rom_hdr = CSTR(base); ··· 1515 1515 while (str < atom_rom_hdr && *str++) 1516 1516 ; 1517 1517 1518 - if ((str + STRLEN_NORMAL) < atom_rom_hdr) 1519 - strscpy(ctx->build_num, str, STRLEN_NORMAL); 1518 + len = min(atom_rom_hdr - str, STRLEN_NORMAL); 1519 + if (len) 1520 + strscpy(ctx->build_num, str, len); 1520 1521 } 1521 1522 1522 1523 struct atom_context *amdgpu_atom_parse(struct card_info *card, void *bios)
+12
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
··· 3560 3560 struct amdgpu_device *adev = ring->adev; 3561 3561 struct amdgpu_kiq *kiq = &adev->gfx.kiq[ring->xcc_id]; 3562 3562 struct amdgpu_ring *kiq_ring = &kiq->ring; 3563 + int reset_mode = AMDGPU_RESET_TYPE_PER_QUEUE; 3563 3564 unsigned long flags; 3564 3565 int r; 3565 3566 ··· 3598 3597 if (!(adev->gfx.compute_supported_reset & AMDGPU_RESET_TYPE_PER_PIPE)) 3599 3598 return -EOPNOTSUPP; 3600 3599 r = gfx_v9_4_3_reset_hw_pipe(ring); 3600 + reset_mode = AMDGPU_RESET_TYPE_PER_PIPE; 3601 3601 dev_info(adev->dev, "ring: %s pipe reset :%s\n", ring->name, 3602 3602 r ? "failed" : "successfully"); 3603 3603 if (r) ··· 3621 3619 r = amdgpu_ring_test_ring(kiq_ring); 3622 3620 spin_unlock_irqrestore(&kiq->ring_lock, flags); 3623 3621 if (r) { 3622 + if (reset_mode == AMDGPU_RESET_TYPE_PER_QUEUE) 3623 + goto pipe_reset; 3624 + 3624 3625 dev_err(adev->dev, "fail to remap queue\n"); 3625 3626 return r; 3626 3627 } 3628 + 3629 + if (reset_mode == AMDGPU_RESET_TYPE_PER_QUEUE) { 3630 + r = amdgpu_ring_test_ring(ring); 3631 + if (r) 3632 + goto pipe_reset; 3633 + } 3634 + 3627 3635 3628 3636 return amdgpu_ring_reset_helper_end(ring, timedout_fence); 3629 3637 }
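The gfx_v9_4_3.c hunk above tracks which reset was attempted and escalates from a per-queue reset to a per-pipe reset when the KIQ remap or the subsequent ring test fails. A standalone sketch of that control flow, with hypothetical stubs in place of the KIQ/MMIO paths and the remap and ring test collapsed into one step:

#include <stdbool.h>
#include <stdio.h>

enum reset_type { RESET_PER_QUEUE, RESET_PER_PIPE };

/* Hypothetical stand-ins for the hardware reset and test paths. */
static bool reset_queue(void) { return true; }
static bool reset_pipe(void)  { return true; }

/* Pretend the ring only comes back after the pipe reset. */
static bool remap_and_test_ring(void)
{
	static int calls;
	return ++calls > 1;
}

static int reset_ring(void)
{
	enum reset_type mode = RESET_PER_QUEUE;

	if (!reset_queue()) {
pipe_reset:
		/* Per-queue reset failed or did not recover the ring: escalate. */
		mode = RESET_PER_PIPE;
		if (!reset_pipe())
			return -1;
	}

	if (!remap_and_test_ring()) {
		/* Only escalate once; a failure after a pipe reset is fatal. */
		if (mode == RESET_PER_QUEUE)
			goto pipe_reset;
		return -1;
	}

	printf("ring recovered via %s reset\n",
	       mode == RESET_PER_QUEUE ? "per-queue" : "per-pipe");
	return 0;
}

int main(void) { return reset_ring() ? 1 : 0; }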
+6
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 713 713 mes_set_hw_res_pkt.enable_reg_active_poll = 1; 714 714 mes_set_hw_res_pkt.enable_level_process_quantum_check = 1; 715 715 mes_set_hw_res_pkt.oversubscription_timer = 50; 716 + if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x7f) 717 + mes_set_hw_res_pkt.enable_lr_compute_wa = 1; 718 + else 719 + dev_info_once(mes->adev->dev, 720 + "MES FW version must be >= 0x7f to enable LR compute workaround.\n"); 721 + 716 722 if (amdgpu_mes_log_enable) { 717 723 mes_set_hw_res_pkt.enable_mes_event_int_logging = 1; 718 724 mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr =
+5
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 769 769 mes_set_hw_res_pkt.use_different_vmid_compute = 1; 770 770 mes_set_hw_res_pkt.enable_reg_active_poll = 1; 771 771 mes_set_hw_res_pkt.enable_level_process_quantum_check = 1; 772 + if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x82) 773 + mes_set_hw_res_pkt.enable_lr_compute_wa = 1; 774 + else 775 + dev_info_once(adev->dev, 776 + "MES FW version must be >= 0x82 to enable LR compute workaround.\n"); 772 777 773 778 /* 774 779 * Keep oversubscribe timer for sdma . When we have unmapped doorbell
+65 -13
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
··· 113 113 return 0; 114 114 } 115 115 116 + static int vcn_v5_0_1_late_init(struct amdgpu_ip_block *ip_block) 117 + { 118 + struct amdgpu_device *adev = ip_block->adev; 119 + 120 + adev->vcn.supported_reset = 121 + amdgpu_get_soft_full_reset_mask(&adev->vcn.inst[0].ring_enc[0]); 122 + 123 + switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) { 124 + case IP_VERSION(13, 0, 12): 125 + if ((adev->psp.sos.fw_version >= 0x00450025) && amdgpu_dpm_reset_vcn_is_supported(adev)) 126 + adev->vcn.supported_reset |= AMDGPU_RESET_TYPE_PER_QUEUE; 127 + break; 128 + default: 129 + break; 130 + } 131 + 132 + return 0; 133 + } 134 + 116 135 static void vcn_v5_0_1_fw_shared_init(struct amdgpu_device *adev, int inst_idx) 117 136 { 118 137 struct amdgpu_vcn5_fw_shared *fw_shared; ··· 206 187 vcn_v5_0_1_fw_shared_init(adev, i); 207 188 } 208 189 209 - /* TODO: Add queue reset mask when FW fully supports it */ 210 - adev->vcn.supported_reset = 211 - amdgpu_get_soft_full_reset_mask(&adev->vcn.inst[0].ring_enc[0]); 212 - 213 190 if (amdgpu_sriov_vf(adev)) { 214 191 r = amdgpu_virt_alloc_mm_table(adev); 215 192 if (r) ··· 268 253 return 0; 269 254 } 270 255 256 + static int vcn_v5_0_1_hw_init_inst(struct amdgpu_device *adev, int i) 257 + { 258 + struct amdgpu_ring *ring; 259 + int vcn_inst; 260 + 261 + vcn_inst = GET_INST(VCN, i); 262 + ring = &adev->vcn.inst[i].ring_enc[0]; 263 + 264 + if (ring->use_doorbell) 265 + adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell, 266 + ((adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 267 + 11 * vcn_inst), 268 + adev->vcn.inst[i].aid_id); 269 + 270 + return 0; 271 + } 272 + 271 273 /** 272 274 * vcn_v5_0_1_hw_init - start and test VCN block 273 275 * ··· 296 264 { 297 265 struct amdgpu_device *adev = ip_block->adev; 298 266 struct amdgpu_ring *ring; 299 - int i, r, vcn_inst; 267 + int i, r; 300 268 301 269 if (amdgpu_sriov_vf(adev)) { 302 270 r = vcn_v5_0_1_start_sriov(adev); ··· 314 282 if (RREG32_SOC15(VCN, GET_INST(VCN, 0), regVCN_RRMT_CNTL) & 0x100) 315 283 adev->vcn.caps |= AMDGPU_VCN_CAPS(RRMT_ENABLED); 316 284 for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { 317 - vcn_inst = GET_INST(VCN, i); 318 285 ring = &adev->vcn.inst[i].ring_enc[0]; 319 - 320 - if (ring->use_doorbell) 321 - adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell, 322 - ((adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 323 - 11 * vcn_inst), 324 - adev->vcn.inst[i].aid_id); 286 + vcn_v5_0_1_hw_init_inst(adev, i); 325 287 326 288 /* Re-init fw_shared, if required */ 327 289 vcn_v5_0_1_fw_shared_init(adev, i); ··· 1299 1273 } 1300 1274 } 1301 1275 1276 + static int vcn_v5_0_1_ring_reset(struct amdgpu_ring *ring, 1277 + unsigned int vmid, 1278 + struct amdgpu_fence *timedout_fence) 1279 + { 1280 + int r = 0; 1281 + int vcn_inst; 1282 + struct amdgpu_device *adev = ring->adev; 1283 + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me]; 1284 + 1285 + amdgpu_ring_reset_helper_begin(ring, timedout_fence); 1286 + 1287 + vcn_inst = GET_INST(VCN, ring->me); 1288 + r = amdgpu_dpm_reset_vcn(adev, 1 << vcn_inst); 1289 + 1290 + if (r) { 1291 + DRM_DEV_ERROR(adev->dev, "VCN reset fail : %d\n", r); 1292 + return r; 1293 + } 1294 + 1295 + vcn_v5_0_1_hw_init_inst(adev, ring->me); 1296 + vcn_v5_0_1_start_dpg_mode(vinst, vinst->indirect_sram); 1297 + 1298 + return amdgpu_ring_reset_helper_end(ring, timedout_fence); 1299 + } 1300 + 1302 1301 static const struct amdgpu_ring_funcs vcn_v5_0_1_unified_ring_vm_funcs = { 1303 1302 .type = AMDGPU_RING_TYPE_VCN_ENC, 1304 1303 .align_mask = 0x3f, ··· 1352 1301 
.emit_wreg = vcn_v4_0_3_enc_ring_emit_wreg, 1353 1302 .emit_reg_wait = vcn_v4_0_3_enc_ring_emit_reg_wait, 1354 1303 .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, 1304 + .reset = vcn_v5_0_1_ring_reset, 1355 1305 }; 1356 1306 1357 1307 /** ··· 1556 1504 static const struct amd_ip_funcs vcn_v5_0_1_ip_funcs = { 1557 1505 .name = "vcn_v5_0_1", 1558 1506 .early_init = vcn_v5_0_1_early_init, 1559 - .late_init = NULL, 1507 + .late_init = vcn_v5_0_1_late_init, 1560 1508 .sw_init = vcn_v5_0_1_sw_init, 1561 1509 .sw_fini = vcn_v5_0_1_sw_fini, 1562 1510 .hw_init = vcn_v5_0_1_hw_init,
+19 -1
drivers/gpu/drm/amd/amdkfd/kfd_device.c
··· 495 495 mutex_init(&kfd->doorbell_mutex); 496 496 497 497 ida_init(&kfd->doorbell_ida); 498 + atomic_set(&kfd->kfd_processes_count, 0); 498 499 499 500 return kfd; 500 501 } ··· 1134 1133 } 1135 1134 1136 1135 for (i = 0; i < kfd->num_nodes; i++) { 1137 - node = kfd->nodes[i]; 1136 + /* Race if another thread in b/w 1137 + * kfd_cleanup_nodes and kfree(kfd), 1138 + * when kfd->nodes[i] = NULL 1139 + */ 1140 + if (kfd->nodes[i]) 1141 + node = kfd->nodes[i]; 1142 + else 1143 + return; 1144 + 1138 1145 spin_lock_irqsave(&node->interrupt_lock, flags); 1139 1146 1140 1147 if (node->interrupts_active ··· 1493 1484 int r = 0, temp, idx; 1494 1485 1495 1486 mutex_lock(&kfd_processes_mutex); 1487 + 1488 + /* kfd_processes_count is per kfd_dev, return -EBUSY without 1489 + * further check 1490 + */ 1491 + if (!!atomic_read(&kfd->kfd_processes_count)) { 1492 + pr_debug("process_wq_release not finished\n"); 1493 + r = -EBUSY; 1494 + goto out; 1495 + } 1496 1496 1497 1497 if (hash_empty(kfd_processes_table) && !kfd_is_locked(kfd)) 1498 1498 goto out;
+2
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 382 382 383 383 /* for dynamic partitioning */ 384 384 int kfd_dev_lock; 385 + 386 + atomic_t kfd_processes_count; 385 387 }; 386 388 387 389 enum kfd_mempool {
+4
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 1088 1088 pdd->runtime_inuse = false; 1089 1089 } 1090 1090 1091 + atomic_dec(&pdd->dev->kfd->kfd_processes_count); 1092 + 1091 1093 kfree(pdd); 1092 1094 p->pdds[i] = NULL; 1093 1095 } ··· 1650 1648 1651 1649 /* Init idr used for memory handle translation */ 1652 1650 idr_init(&pdd->alloc_idr); 1651 + 1652 + atomic_inc(&dev->kfd->kfd_processes_count); 1653 1653 1654 1654 return pdd; 1655 1655 }
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 1738 1738 1739 1739 WRITE_ONCE(p->svms.faulting_task, current); 1740 1740 r = amdgpu_hmm_range_get_pages(&prange->notifier, addr, npages, 1741 - readonly, owner, NULL, 1741 + readonly, owner, 1742 1742 &hmm_range); 1743 1743 WRITE_ONCE(p->svms.faulting_task, NULL); 1744 1744 if (r)
+51 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 233 233 234 234 static int amdgpu_dm_connector_get_modes(struct drm_connector *connector); 235 235 236 + static int amdgpu_dm_atomic_setup_commit(struct drm_atomic_state *state); 236 237 static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state); 237 238 238 239 static int amdgpu_dm_atomic_check(struct drm_device *dev, ··· 418 417 /* 419 418 * Previous frame finished and HW is ready for optimization. 420 419 */ 421 - if (update_type == UPDATE_TYPE_FAST) 422 - dc_post_update_surfaces_to_stream(dc); 420 + dc_post_update_surfaces_to_stream(dc); 423 421 424 422 return dc_update_planes_and_stream(dc, 425 423 array_of_surface_update, ··· 3637 3637 3638 3638 static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = { 3639 3639 .atomic_commit_tail = amdgpu_dm_atomic_commit_tail, 3640 - .atomic_commit_setup = drm_dp_mst_atomic_setup_commit, 3640 + .atomic_commit_setup = amdgpu_dm_atomic_setup_commit, 3641 3641 }; 3642 3642 3643 3643 static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) ··· 4828 4828 4829 4829 if (!caps->data_points) 4830 4830 return; 4831 + 4832 + /* 4833 + * Handle the case where brightness is below the first data point 4834 + * Interpolate between (0,0) and (first_signal, first_lum) 4835 + */ 4836 + if (brightness < caps->luminance_data[0].input_signal) { 4837 + lum = DIV_ROUND_CLOSEST(caps->luminance_data[0].luminance * brightness, 4838 + caps->luminance_data[0].input_signal); 4839 + goto scale; 4840 + } 4831 4841 4832 4842 left = 0; 4833 4843 right = caps->data_points - 1; ··· 8271 8261 {"1920x1200", 1920, 1200} 8272 8262 }; 8273 8263 8264 + if ((connector->connector_type != DRM_MODE_CONNECTOR_eDP) && 8265 + (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)) 8266 + return; 8267 + 8274 8268 n = ARRAY_SIZE(common_modes); 8275 8269 8276 8270 for (i = 0; i < n; i++) { ··· 10367 10353 } 10368 10354 } 10369 10355 10356 + static int amdgpu_dm_atomic_setup_commit(struct drm_atomic_state *state) 10357 + { 10358 + struct drm_crtc *crtc; 10359 + struct drm_crtc_state *old_crtc_state, *new_crtc_state; 10360 + struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; 10361 + int i, ret; 10362 + 10363 + ret = drm_dp_mst_atomic_setup_commit(state); 10364 + if (ret) 10365 + return ret; 10366 + 10367 + for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 10368 + dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 10369 + dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 10370 + /* 10371 + * Color management settings. We also update color properties 10372 + * when a modeset is needed, to ensure it gets reprogrammed. 10373 + */ 10374 + if (dm_new_crtc_state->base.active && dm_new_crtc_state->stream && 10375 + (dm_new_crtc_state->base.color_mgmt_changed || 10376 + dm_old_crtc_state->regamma_tf != dm_new_crtc_state->regamma_tf || 10377 + drm_atomic_crtc_needs_modeset(new_crtc_state))) { 10378 + ret = amdgpu_dm_update_crtc_color_mgmt(dm_new_crtc_state); 10379 + if (ret) { 10380 + drm_dbg_atomic(state->dev, "Failed to update color state\n"); 10381 + return ret; 10382 + } 10383 + } 10384 + } 10385 + 10386 + return 0; 10387 + } 10388 + 10370 10389 /** 10371 10390 * amdgpu_dm_atomic_commit_tail() - AMDgpu DM's commit tail implementation. 
10372 10391 * @state: The atomic state to commit ··· 11214 11167 if (dm_new_crtc_state->base.color_mgmt_changed || 11215 11168 dm_old_crtc_state->regamma_tf != dm_new_crtc_state->regamma_tf || 11216 11169 drm_atomic_crtc_needs_modeset(new_crtc_state)) { 11217 - ret = amdgpu_dm_update_crtc_color_mgmt(dm_new_crtc_state); 11170 + ret = amdgpu_dm_check_crtc_color_mgmt(dm_new_crtc_state, true); 11218 11171 if (ret) 11219 11172 goto fail; 11220 11173 }
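The brightness hunk in the amdgpu_dm.c diff above handles requests below the panel's first luminance data point by interpolating linearly between (0, 0) and (first_signal, first_lum) using DIV_ROUND_CLOSEST. A worked sketch of that rounding with hypothetical data-point values; the macro below matches the kernel's behavior only for the non-negative operands used here:

#include <stdint.h>
#include <stdio.h>

/* Same rounding as the kernel's DIV_ROUND_CLOSEST for non-negative values. */
#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

int main(void)
{
	/* Hypothetical first luminance data point from the panel's caps. */
	uint32_t first_signal = 40;   /* input signal of data point 0 */
	uint32_t first_lum    = 15;   /* luminance of data point 0    */

	/* Requested brightness values below (or at) the first data point. */
	for (uint32_t brightness = 0; brightness <= first_signal; brightness += 10) {
		uint32_t lum = DIV_ROUND_CLOSEST(first_lum * brightness, first_signal);
		printf("brightness %3u -> luminance %u\n", brightness, lum);
	}
	return 0;
}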
+2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 1054 1054 int amdgpu_dm_create_color_properties(struct amdgpu_device *adev); 1055 1055 int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state); 1056 1056 int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc); 1057 + int amdgpu_dm_check_crtc_color_mgmt(struct dm_crtc_state *crtc, 1058 + bool check_only); 1057 1059 int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc, 1058 1060 struct drm_plane_state *plane_state, 1059 1061 struct dc_plane_state *dc_plane_state);
+64 -24
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
··· 566 566 return res ? 0 : -ENOMEM; 567 567 } 568 568 569 - static int amdgpu_dm_set_atomic_regamma(struct dc_stream_state *stream, 569 + static int amdgpu_dm_set_atomic_regamma(struct dc_transfer_func *out_tf, 570 570 const struct drm_color_lut *regamma_lut, 571 571 uint32_t regamma_size, bool has_rom, 572 572 enum dc_transfer_func_predefined tf) 573 573 { 574 - struct dc_transfer_func *out_tf = &stream->out_transfer_func; 575 574 int ret = 0; 576 575 577 576 if (regamma_size || tf != TRANSFER_FUNCTION_LINEAR) { ··· 820 821 struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state); 821 822 const struct drm_color_lut *shaper = NULL, *lut3d = NULL; 822 823 uint32_t exp_size, size, dim_size = MAX_COLOR_3DLUT_SIZE; 823 - bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut; 824 + bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut || adev->dm.dc->caps.color.mpc.preblend; 824 825 825 826 /* shaper LUT is only available if 3D LUT color caps */ 826 827 exp_size = has_3dlut ? MAX_COLOR_LUT_ENTRIES : 0; ··· 884 885 } 885 886 886 887 /** 887 - * amdgpu_dm_update_crtc_color_mgmt: Maps DRM color management to DC stream. 888 + * amdgpu_dm_check_crtc_color_mgmt: Check if DRM color props are programmable by DC. 888 889 * @crtc: amdgpu_dm crtc state 890 + * @check_only: only check color state without update dc stream 889 891 * 890 - * With no plane level color management properties we're free to use any 891 - * of the HW blocks as long as the CRTC CTM always comes before the 892 - * CRTC RGM and after the CRTC DGM. 893 - * 894 - * - The CRTC RGM block will be placed in the RGM LUT block if it is non-linear. 895 - * - The CRTC DGM block will be placed in the DGM LUT block if it is non-linear. 896 - * - The CRTC CTM will be placed in the gamut remap block if it is non-linear. 892 + * This function just verifies CRTC LUT sizes, if there is enough space for 893 + * output transfer function and if its parameters can be calculated by AMD 894 + * color module. It also adjusts some settings for programming CRTC degamma at 895 + * plane stage, using plane DGM block. 897 896 * 898 897 * The RGM block is typically more fully featured and accurate across 899 898 * all ASICs - DCE can't support a custom non-linear CRTC DGM. 900 899 * 901 900 * For supporting both plane level color management and CRTC level color 902 - * management at once we have to either restrict the usage of CRTC properties 903 - * or blend adjustments together. 901 + * management at once we have to either restrict the usage of some CRTC 902 + * properties or blend adjustments together. 904 903 * 905 904 * Returns: 906 - * 0 on success. Error code if setup fails. 905 + * 0 on success. Error code if validation fails. 
907 906 */ 908 - int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc) 907 + 908 + int amdgpu_dm_check_crtc_color_mgmt(struct dm_crtc_state *crtc, 909 + bool check_only) 909 910 { 910 911 struct dc_stream_state *stream = crtc->stream; 911 912 struct amdgpu_device *adev = drm_to_adev(crtc->base.state->dev); 912 913 bool has_rom = adev->asic_type <= CHIP_RAVEN; 913 - struct drm_color_ctm *ctm = NULL; 914 + struct dc_transfer_func *out_tf; 914 915 const struct drm_color_lut *degamma_lut, *regamma_lut; 915 916 uint32_t degamma_size, regamma_size; 916 917 bool has_regamma, has_degamma; ··· 939 940 crtc->cm_has_degamma = false; 940 941 crtc->cm_is_degamma_srgb = false; 941 942 943 + if (check_only) { 944 + out_tf = kvzalloc(sizeof(*out_tf), GFP_KERNEL); 945 + if (!out_tf) 946 + return -ENOMEM; 947 + } else { 948 + out_tf = &stream->out_transfer_func; 949 + } 950 + 942 951 /* Setup regamma and degamma. */ 943 952 if (is_legacy) { 944 953 /* ··· 961 954 * inverse color ramp in legacy userspace. 962 955 */ 963 956 crtc->cm_is_degamma_srgb = true; 964 - stream->out_transfer_func.type = TF_TYPE_DISTRIBUTED_POINTS; 965 - stream->out_transfer_func.tf = TRANSFER_FUNCTION_SRGB; 957 + out_tf->type = TF_TYPE_DISTRIBUTED_POINTS; 958 + out_tf->tf = TRANSFER_FUNCTION_SRGB; 966 959 /* 967 960 * Note: although we pass has_rom as parameter here, we never 968 961 * actually use ROM because the color module only takes the ROM ··· 970 963 * 971 964 * See more in mod_color_calculate_regamma_params() 972 965 */ 973 - r = __set_legacy_tf(&stream->out_transfer_func, regamma_lut, 966 + r = __set_legacy_tf(out_tf, regamma_lut, 974 967 regamma_size, has_rom); 975 - if (r) 976 - return r; 977 968 } else { 978 969 regamma_size = has_regamma ? regamma_size : 0; 979 - r = amdgpu_dm_set_atomic_regamma(stream, regamma_lut, 970 + r = amdgpu_dm_set_atomic_regamma(out_tf, regamma_lut, 980 971 regamma_size, has_rom, tf); 981 - if (r) 982 - return r; 983 972 } 984 973 985 974 /* ··· 984 981 * have to place the CTM in the OCSC in that case. 985 982 */ 986 983 crtc->cm_has_degamma = has_degamma; 984 + if (check_only) 985 + kvfree(out_tf); 986 + 987 + return r; 988 + } 989 + 990 + /** 991 + * amdgpu_dm_update_crtc_color_mgmt: Maps DRM color management to DC stream. 992 + * @crtc: amdgpu_dm crtc state 993 + * 994 + * With no plane level color management properties we're free to use any 995 + * of the HW blocks as long as the CRTC CTM always comes before the 996 + * CRTC RGM and after the CRTC DGM. 997 + * 998 + * - The CRTC RGM block will be placed in the RGM LUT block if it is non-linear. 999 + * - The CRTC DGM block will be placed in the DGM LUT block if it is non-linear. 1000 + * - The CRTC CTM will be placed in the gamut remap block if it is non-linear. 1001 + * 1002 + * The RGM block is typically more fully featured and accurate across 1003 + * all ASICs - DCE can't support a custom non-linear CRTC DGM. 1004 + * 1005 + * For supporting both plane level color management and CRTC level color 1006 + * management at once we have to either restrict the usage of CRTC properties 1007 + * or blend adjustments together. 1008 + * 1009 + * Returns: 1010 + * 0 on success. Error code if setup fails. 1011 + */ 1012 + int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc) 1013 + { 1014 + struct dc_stream_state *stream = crtc->stream; 1015 + struct drm_color_ctm *ctm = NULL; 1016 + int ret; 1017 + 1018 + ret = amdgpu_dm_check_crtc_color_mgmt(crtc, false); 1019 + if (ret) 1020 + return ret; 987 1021 988 1022 /* Setup CRTC CTM. 
*/ 989 1023 if (crtc->base.ctm) {
+6 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 218 218 break; 219 219 } 220 220 221 - if (idle_work->enable) 221 + if (idle_work->enable) { 222 + dc_post_update_surfaces_to_stream(idle_work->dm->dc); 222 223 dc_allow_idle_optimizations(idle_work->dm->dc, true); 224 + } 223 225 mutex_unlock(&idle_work->dm->dc_lock); 224 226 } 225 227 idle_work->dm->idle_workqueue->running = false; ··· 275 273 vblank_work->acrtc->dm_irq_params.allow_sr_entry); 276 274 } 277 275 278 - if (dm->active_vblank_irq_count == 0) 276 + if (dm->active_vblank_irq_count == 0) { 277 + dc_post_update_surfaces_to_stream(dm->dc); 279 278 dc_allow_idle_optimizations(dm->dc, true); 279 + } 280 280 281 281 mutex_unlock(&dm->dc_lock); 282 282
+8 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
··· 768 768 struct mod_hdcp_ddc_funcs *ddc_funcs = &config->ddc.funcs; 769 769 770 770 config->psp.handle = &adev->psp; 771 - if (dc->ctx->dce_version == DCN_VERSION_3_1 || 771 + if (dc->ctx->dce_version == DCN_VERSION_3_1 || 772 772 dc->ctx->dce_version == DCN_VERSION_3_14 || 773 773 dc->ctx->dce_version == DCN_VERSION_3_15 || 774 - dc->ctx->dce_version == DCN_VERSION_3_5 || 774 + dc->ctx->dce_version == DCN_VERSION_3_16 || 775 + dc->ctx->dce_version == DCN_VERSION_3_2 || 776 + dc->ctx->dce_version == DCN_VERSION_3_21 || 777 + dc->ctx->dce_version == DCN_VERSION_3_5 || 775 778 dc->ctx->dce_version == DCN_VERSION_3_51 || 776 - dc->ctx->dce_version == DCN_VERSION_3_6 || 777 - dc->ctx->dce_version == DCN_VERSION_3_16) 779 + dc->ctx->dce_version == DCN_VERSION_3_6 || 780 + dc->ctx->dce_version == DCN_VERSION_4_01) 778 781 config->psp.caps.dtm_v3_supported = 1; 782 + 779 783 config->ddc.handle = dc_get_link_at_index(dc, i); 780 784 781 785 ddc_funcs->write_i2c = lp_write_i2c;
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 1633 1633 drm_object_attach_property(&plane->base, 1634 1634 dm->adev->mode_info.plane_ctm_property, 0); 1635 1635 1636 - if (dpp_color_caps.hw_3d_lut) { 1636 + if (dpp_color_caps.hw_3d_lut || dm->dc->caps.color.mpc.preblend) { 1637 1637 drm_object_attach_property(&plane->base, 1638 1638 mode_info.plane_shaper_lut_property, 0); 1639 1639 drm_object_attach_property(&plane->base,
+2 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
··· 53 53 func_name, line); 54 54 } 55 55 56 - void dm_trace_smu_msg(uint32_t msg_id, uint32_t param_in, struct dc_context *ctx) 56 + void dm_trace_smu_enter(uint32_t msg_id, uint32_t param_in, unsigned int delay, struct dc_context *ctx) 57 57 { 58 58 } 59 59 60 - void dm_trace_smu_delay(uint32_t delay, struct dc_context *ctx) 60 + void dm_trace_smu_exit(bool success, uint32_t response, struct dc_context *ctx) 61 61 { 62 62 } 63 63
+3
drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
··· 463 463 clk_mgr->max_clks_state = DM_PP_CLOCKS_STATE_NOMINAL; 464 464 clk_mgr->cur_min_clks_state = DM_PP_CLOCKS_STATE_INVALID; 465 465 466 + base->clks.max_supported_dispclk_khz = 467 + clk_mgr->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 468 + 466 469 dce_clock_read_integrated_info(clk_mgr); 467 470 dce_clock_read_ss_info(clk_mgr); 468 471 }
+5
drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
··· 147 147 struct dc_context *ctx, 148 148 struct clk_mgr_internal *clk_mgr) 149 149 { 150 + struct clk_mgr *base = &clk_mgr->base; 151 + 150 152 dce_clk_mgr_construct(ctx, clk_mgr); 151 153 152 154 memcpy(clk_mgr->max_clks_by_state, ··· 159 157 clk_mgr->clk_mgr_shift = &disp_clk_shift; 160 158 clk_mgr->clk_mgr_mask = &disp_clk_mask; 161 159 clk_mgr->base.funcs = &dce60_funcs; 160 + 161 + base->clks.max_supported_dispclk_khz = 162 + clk_mgr->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 162 163 } 163 164
+1 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
··· 69 69 70 70 /* handle DALSMC_Result_CmdRejectedBusy? */ 71 71 72 - TRACE_SMU_DELAY(delay_us * (initial_max_retries - max_retries), clk_mgr->base.ctx); 72 + TRACE_SMU_MSG_DELAY(0, 0, delay_us * (initial_max_retries - max_retries), clk_mgr->base.ctx); 73 73 74 74 return reg; 75 75 }
+137 -5
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
··· 77 77 #undef DC_LOGGER 78 78 #define DC_LOGGER \ 79 79 clk_mgr->base.base.ctx->logger 80 + 80 81 #define regCLK1_CLK_PLL_REQ 0x0237 81 82 #define regCLK1_CLK_PLL_REQ_BASE_IDX 0 82 83 ··· 88 87 #define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L 89 88 #define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L 90 89 90 + #define regCLK1_CLK0_DFS_CNTL 0x0269 91 + #define regCLK1_CLK0_DFS_CNTL_BASE_IDX 0 92 + #define regCLK1_CLK1_DFS_CNTL 0x026c 93 + #define regCLK1_CLK1_DFS_CNTL_BASE_IDX 0 94 + #define regCLK1_CLK2_DFS_CNTL 0x026f 95 + #define regCLK1_CLK2_DFS_CNTL_BASE_IDX 0 96 + #define regCLK1_CLK3_DFS_CNTL 0x0272 97 + #define regCLK1_CLK3_DFS_CNTL_BASE_IDX 0 98 + #define regCLK1_CLK4_DFS_CNTL 0x0275 99 + #define regCLK1_CLK4_DFS_CNTL_BASE_IDX 0 100 + #define regCLK1_CLK5_DFS_CNTL 0x0278 101 + #define regCLK1_CLK5_DFS_CNTL_BASE_IDX 0 102 + 103 + #define regCLK1_CLK0_CURRENT_CNT 0x02fb 104 + #define regCLK1_CLK0_CURRENT_CNT_BASE_IDX 0 105 + #define regCLK1_CLK1_CURRENT_CNT 0x02fc 106 + #define regCLK1_CLK1_CURRENT_CNT_BASE_IDX 0 107 + #define regCLK1_CLK2_CURRENT_CNT 0x02fd 108 + #define regCLK1_CLK2_CURRENT_CNT_BASE_IDX 0 109 + #define regCLK1_CLK3_CURRENT_CNT 0x02fe 110 + #define regCLK1_CLK3_CURRENT_CNT_BASE_IDX 0 111 + #define regCLK1_CLK4_CURRENT_CNT 0x02ff 112 + #define regCLK1_CLK4_CURRENT_CNT_BASE_IDX 0 113 + #define regCLK1_CLK5_CURRENT_CNT 0x0300 114 + #define regCLK1_CLK5_CURRENT_CNT_BASE_IDX 0 115 + 116 + #define regCLK1_CLK0_BYPASS_CNTL 0x028a 117 + #define regCLK1_CLK0_BYPASS_CNTL_BASE_IDX 0 118 + #define regCLK1_CLK1_BYPASS_CNTL 0x0293 119 + #define regCLK1_CLK1_BYPASS_CNTL_BASE_IDX 0 91 120 #define regCLK1_CLK2_BYPASS_CNTL 0x029c 92 121 #define regCLK1_CLK2_BYPASS_CNTL_BASE_IDX 0 122 + #define regCLK1_CLK3_BYPASS_CNTL 0x02a5 123 + #define regCLK1_CLK3_BYPASS_CNTL_BASE_IDX 0 124 + #define regCLK1_CLK4_BYPASS_CNTL 0x02ae 125 + #define regCLK1_CLK4_BYPASS_CNTL_BASE_IDX 0 126 + #define regCLK1_CLK5_BYPASS_CNTL 0x02b7 127 + #define regCLK1_CLK5_BYPASS_CNTL_BASE_IDX 0 128 + 129 + #define regCLK1_CLK0_DS_CNTL 0x0283 130 + #define regCLK1_CLK0_DS_CNTL_BASE_IDX 0 131 + #define regCLK1_CLK1_DS_CNTL 0x028c 132 + #define regCLK1_CLK1_DS_CNTL_BASE_IDX 0 133 + #define regCLK1_CLK2_DS_CNTL 0x0295 134 + #define regCLK1_CLK2_DS_CNTL_BASE_IDX 0 135 + #define regCLK1_CLK3_DS_CNTL 0x029e 136 + #define regCLK1_CLK3_DS_CNTL_BASE_IDX 0 137 + #define regCLK1_CLK4_DS_CNTL 0x02a7 138 + #define regCLK1_CLK4_DS_CNTL_BASE_IDX 0 139 + #define regCLK1_CLK5_DS_CNTL 0x02b0 140 + #define regCLK1_CLK5_DS_CNTL_BASE_IDX 0 141 + 142 + #define regCLK1_CLK0_ALLOW_DS 0x0284 143 + #define regCLK1_CLK0_ALLOW_DS_BASE_IDX 0 144 + #define regCLK1_CLK1_ALLOW_DS 0x028d 145 + #define regCLK1_CLK1_ALLOW_DS_BASE_IDX 0 146 + #define regCLK1_CLK2_ALLOW_DS 0x0296 147 + #define regCLK1_CLK2_ALLOW_DS_BASE_IDX 0 148 + #define regCLK1_CLK3_ALLOW_DS 0x029f 149 + #define regCLK1_CLK3_ALLOW_DS_BASE_IDX 0 150 + #define regCLK1_CLK4_ALLOW_DS 0x02a8 151 + #define regCLK1_CLK4_ALLOW_DS_BASE_IDX 0 152 + #define regCLK1_CLK5_ALLOW_DS 0x02b1 153 + #define regCLK1_CLK5_ALLOW_DS_BASE_IDX 0 93 154 94 155 #define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL__SHIFT 0x0 95 156 #define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV__SHIFT 0x10 ··· 248 185 { 249 186 struct clk_mgr_internal *clk_mgr_int = TO_CLK_MGR_INTERNAL(clk_mgr); 250 187 uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz; 188 + struct clk_mgr_dcn314 *clk_mgr_dcn314 = TO_CLK_MGR_DCN314(clk_mgr_int); 189 + struct clk_log_info log_info = {0}; 251 190 252 191 memset(&(clk_mgr->clks), 0, 
sizeof(struct dc_clocks)); 253 192 // Assumption is that boot state always supports pstate ··· 265 200 dce_adjust_dp_ref_freq_for_ss(clk_mgr_int, clk_mgr->dprefclk_khz); 266 201 else 267 202 clk_mgr->dp_dto_source_clock_in_khz = clk_mgr->dprefclk_khz; 203 + 204 + dcn314_dump_clk_registers(&clk_mgr->boot_snapshot, &clk_mgr_dcn314->base.base, &log_info); 205 + clk_mgr->clks.dispclk_khz = clk_mgr->boot_snapshot.dispclk * 1000; 268 206 } 269 207 270 208 void dcn314_update_clocks(struct clk_mgr *clk_mgr_base, ··· 285 217 286 218 if (dc->work_arounds.skip_clock_update) 287 219 return; 220 + 221 + display_count = dcn314_get_active_display_cnt_wa(dc, context); 288 222 289 223 /* 290 224 * if it is safe to lower, but we are already in the lower state, we don't have to do anything ··· 306 236 } 307 237 /* check that we're not already in lower */ 308 238 if (clk_mgr_base->clks.pwr_state != DCN_PWR_STATE_LOW_POWER) { 309 - display_count = dcn314_get_active_display_cnt_wa(dc, context); 310 239 /* if we can go lower, go lower */ 311 240 if (display_count == 0) { 312 241 union display_idle_optimization_u idle_info = { 0 }; ··· 362 293 update_dppclk = true; 363 294 } 364 295 365 - if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { 296 + if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) && 297 + (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) { 298 + int requested_dispclk_khz = new_clocks->dispclk_khz; 299 + 366 300 dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true); 367 301 302 + /* Clamp the requested clock to PMFW based on their limit. */ 303 + if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz) 304 + requested_dispclk_khz = dc->debug.min_disp_clk_khz; 305 + 306 + dcn314_smu_set_dispclk(clk_mgr, requested_dispclk_khz); 368 307 clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; 369 - dcn314_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); 308 + 370 309 dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false); 371 310 372 311 update_dispclk = true; ··· 462 385 return true; 463 386 } 464 387 465 - static void dcn314_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass, 388 + 389 + static void dcn314_dump_clk_registers_internal(struct dcn35_clk_internal *internal, struct clk_mgr *clk_mgr_base) 390 + { 391 + struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); 392 + 393 + // read dtbclk 394 + internal->CLK1_CLK4_CURRENT_CNT = REG_READ(CLK1_CLK4_CURRENT_CNT); 395 + internal->CLK1_CLK4_BYPASS_CNTL = REG_READ(CLK1_CLK4_BYPASS_CNTL); 396 + 397 + // read dcfclk 398 + internal->CLK1_CLK3_CURRENT_CNT = REG_READ(CLK1_CLK3_CURRENT_CNT); 399 + internal->CLK1_CLK3_BYPASS_CNTL = REG_READ(CLK1_CLK3_BYPASS_CNTL); 400 + 401 + // read dcf deep sleep divider 402 + internal->CLK1_CLK3_DS_CNTL = REG_READ(CLK1_CLK3_DS_CNTL); 403 + internal->CLK1_CLK3_ALLOW_DS = REG_READ(CLK1_CLK3_ALLOW_DS); 404 + 405 + // read dppclk 406 + internal->CLK1_CLK1_CURRENT_CNT = REG_READ(CLK1_CLK1_CURRENT_CNT); 407 + internal->CLK1_CLK1_BYPASS_CNTL = REG_READ(CLK1_CLK1_BYPASS_CNTL); 408 + 409 + // read dprefclk 410 + internal->CLK1_CLK2_CURRENT_CNT = REG_READ(CLK1_CLK2_CURRENT_CNT); 411 + internal->CLK1_CLK2_BYPASS_CNTL = REG_READ(CLK1_CLK2_BYPASS_CNTL); 412 + 413 + // read dispclk 414 + internal->CLK1_CLK0_CURRENT_CNT = REG_READ(CLK1_CLK0_CURRENT_CNT); 415 + internal->CLK1_CLK0_BYPASS_CNTL = REG_READ(CLK1_CLK0_BYPASS_CNTL); 
416 + } 417 + 418 + void dcn314_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass, 466 419 struct clk_mgr *clk_mgr_base, struct clk_log_info *log_info) 467 420 { 468 - return; 421 + 422 + struct dcn35_clk_internal internal = {0}; 423 + 424 + dcn314_dump_clk_registers_internal(&internal, clk_mgr_base); 425 + 426 + regs_and_bypass->dcfclk = internal.CLK1_CLK3_CURRENT_CNT / 10; 427 + regs_and_bypass->dcf_deep_sleep_divider = internal.CLK1_CLK3_DS_CNTL / 10; 428 + regs_and_bypass->dcf_deep_sleep_allow = internal.CLK1_CLK3_ALLOW_DS; 429 + regs_and_bypass->dprefclk = internal.CLK1_CLK2_CURRENT_CNT / 10; 430 + regs_and_bypass->dispclk = internal.CLK1_CLK0_CURRENT_CNT / 10; 431 + regs_and_bypass->dppclk = internal.CLK1_CLK1_CURRENT_CNT / 10; 432 + regs_and_bypass->dtbclk = internal.CLK1_CLK4_CURRENT_CNT / 10; 433 + 434 + regs_and_bypass->dppclk_bypass = internal.CLK1_CLK1_BYPASS_CNTL & 0x0007; 435 + if (regs_and_bypass->dppclk_bypass < 0 || regs_and_bypass->dppclk_bypass > 4) 436 + regs_and_bypass->dppclk_bypass = 0; 437 + regs_and_bypass->dcfclk_bypass = internal.CLK1_CLK3_BYPASS_CNTL & 0x0007; 438 + if (regs_and_bypass->dcfclk_bypass < 0 || regs_and_bypass->dcfclk_bypass > 4) 439 + regs_and_bypass->dcfclk_bypass = 0; 440 + regs_and_bypass->dispclk_bypass = internal.CLK1_CLK0_BYPASS_CNTL & 0x0007; 441 + if (regs_and_bypass->dispclk_bypass < 0 || regs_and_bypass->dispclk_bypass > 4) 442 + regs_and_bypass->dispclk_bypass = 0; 443 + regs_and_bypass->dprefclk_bypass = internal.CLK1_CLK2_BYPASS_CNTL & 0x0007; 444 + if (regs_and_bypass->dprefclk_bypass < 0 || regs_and_bypass->dprefclk_bypass > 4) 445 + regs_and_bypass->dprefclk_bypass = 0; 446 + 469 447 } 470 448 471 449 static struct clk_bw_params dcn314_bw_params = {
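The new dcn314 register dump turns each raw CLK*_CURRENT_CNT counter into a clock value by dividing by 10 and decodes the low three bits of the matching *_BYPASS_CNTL register, falling back to 0 when the selector is outside the 0-4 range the bypass table covers. A standalone sketch of that decode, with made-up register values:

    #include <stdio.h>

    static unsigned int decode_bypass(unsigned int bypass_cntl)
    {
        unsigned int sel = bypass_cntl & 0x0007;   /* low 3 bits select the source */

        return (sel > 4) ? 0 : sel;                /* out of range -> report DFS (0) */
    }

    int main(void)
    {
        unsigned int clk0_current_cnt = 6000;        /* hypothetical raw counter */
        unsigned int clk0_bypass_cntl = 0x00010001;  /* hypothetical register value */

        printf("dispclk ~ %u (counter / 10)\n", clk0_current_cnt / 10);
        printf("dispclk bypass selector = %u\n", decode_bypass(clk0_bypass_cntl));
        return 0;
    }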
+5
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.h
··· 65 65 66 66 void dcn314_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int); 67 67 68 + 69 + void dcn314_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass, 70 + struct clk_mgr *clk_mgr_base, struct clk_log_info *log_info); 71 + 72 + 68 73 #endif //__DCN314_CLK_MGR_H__
+3 -2
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
··· 63 63 udelay(delay_us); 64 64 } while (max_retries--); 65 65 66 - TRACE_SMU_DELAY(delay_us * (initial_max_retries - max_retries), clk_mgr->base.ctx); 66 + TRACE_SMU_MSG_DELAY(0, 0, delay_us * (initial_max_retries - max_retries), clk_mgr->base.ctx); 67 + 67 68 68 69 return reg; 69 70 } ··· 121 120 *total_delay_us += delay_us; 122 121 } while (max_retries--); 123 122 124 - TRACE_SMU_DELAY(*total_delay_us, clk_mgr->base.ctx); 123 + TRACE_SMU_MSG_DELAY(0, 0, *total_delay_us, clk_mgr->base.ctx); 125 124 126 125 return reg; 127 126 }
+119 -2
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
··· 587 587 return true; 588 588 } 589 589 590 - static void dcn35_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass, 590 + static void dcn35_save_clk_registers_internal(struct dcn35_clk_internal *internal, struct clk_mgr *clk_mgr_base) 591 + { 592 + struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); 593 + 594 + // read dtbclk 595 + internal->CLK1_CLK4_CURRENT_CNT = REG_READ(CLK1_CLK4_CURRENT_CNT); 596 + internal->CLK1_CLK4_BYPASS_CNTL = REG_READ(CLK1_CLK4_BYPASS_CNTL); 597 + 598 + // read dcfclk 599 + internal->CLK1_CLK3_CURRENT_CNT = REG_READ(CLK1_CLK3_CURRENT_CNT); 600 + internal->CLK1_CLK3_BYPASS_CNTL = REG_READ(CLK1_CLK3_BYPASS_CNTL); 601 + 602 + // read dcf deep sleep divider 603 + internal->CLK1_CLK3_DS_CNTL = REG_READ(CLK1_CLK3_DS_CNTL); 604 + internal->CLK1_CLK3_ALLOW_DS = REG_READ(CLK1_CLK3_ALLOW_DS); 605 + 606 + // read dppclk 607 + internal->CLK1_CLK1_CURRENT_CNT = REG_READ(CLK1_CLK1_CURRENT_CNT); 608 + internal->CLK1_CLK1_BYPASS_CNTL = REG_READ(CLK1_CLK1_BYPASS_CNTL); 609 + 610 + // read dprefclk 611 + internal->CLK1_CLK2_CURRENT_CNT = REG_READ(CLK1_CLK2_CURRENT_CNT); 612 + internal->CLK1_CLK2_BYPASS_CNTL = REG_READ(CLK1_CLK2_BYPASS_CNTL); 613 + 614 + // read dispclk 615 + internal->CLK1_CLK0_CURRENT_CNT = REG_READ(CLK1_CLK0_CURRENT_CNT); 616 + internal->CLK1_CLK0_BYPASS_CNTL = REG_READ(CLK1_CLK0_BYPASS_CNTL); 617 + } 618 + 619 + static void dcn35_save_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass, 591 620 struct clk_mgr_dcn35 *clk_mgr) 592 621 { 622 + struct dcn35_clk_internal internal = {0}; 623 + char *bypass_clks[5] = {"0x0 DFS", "0x1 REFCLK", "0x2 ERROR", "0x3 400 FCH", "0x4 600 FCH"}; 624 + 625 + dcn35_save_clk_registers_internal(&internal, &clk_mgr->base.base); 626 + 627 + regs_and_bypass->dcfclk = internal.CLK1_CLK3_CURRENT_CNT / 10; 628 + regs_and_bypass->dcf_deep_sleep_divider = internal.CLK1_CLK3_DS_CNTL / 10; 629 + regs_and_bypass->dcf_deep_sleep_allow = internal.CLK1_CLK3_ALLOW_DS; 630 + regs_and_bypass->dprefclk = internal.CLK1_CLK2_CURRENT_CNT / 10; 631 + regs_and_bypass->dispclk = internal.CLK1_CLK0_CURRENT_CNT / 10; 632 + regs_and_bypass->dppclk = internal.CLK1_CLK1_CURRENT_CNT / 10; 633 + regs_and_bypass->dtbclk = internal.CLK1_CLK4_CURRENT_CNT / 10; 634 + 635 + regs_and_bypass->dppclk_bypass = internal.CLK1_CLK1_BYPASS_CNTL & 0x0007; 636 + if (regs_and_bypass->dppclk_bypass < 0 || regs_and_bypass->dppclk_bypass > 4) 637 + regs_and_bypass->dppclk_bypass = 0; 638 + regs_and_bypass->dcfclk_bypass = internal.CLK1_CLK3_BYPASS_CNTL & 0x0007; 639 + if (regs_and_bypass->dcfclk_bypass < 0 || regs_and_bypass->dcfclk_bypass > 4) 640 + regs_and_bypass->dcfclk_bypass = 0; 641 + regs_and_bypass->dispclk_bypass = internal.CLK1_CLK0_BYPASS_CNTL & 0x0007; 642 + if (regs_and_bypass->dispclk_bypass < 0 || regs_and_bypass->dispclk_bypass > 4) 643 + regs_and_bypass->dispclk_bypass = 0; 644 + regs_and_bypass->dprefclk_bypass = internal.CLK1_CLK2_BYPASS_CNTL & 0x0007; 645 + if (regs_and_bypass->dprefclk_bypass < 0 || regs_and_bypass->dprefclk_bypass > 4) 646 + regs_and_bypass->dprefclk_bypass = 0; 647 + 648 + if (clk_mgr->base.base.ctx->dc->debug.pstate_enabled) { 649 + DC_LOG_SMU("clk_type,clk_value,deepsleep_cntl,deepsleep_allow,bypass\n"); 650 + 651 + DC_LOG_SMU("dcfclk,%d,%d,%d,%s\n", 652 + regs_and_bypass->dcfclk, 653 + regs_and_bypass->dcf_deep_sleep_divider, 654 + regs_and_bypass->dcf_deep_sleep_allow, 655 + bypass_clks[(int) regs_and_bypass->dcfclk_bypass]); 656 + 657 + DC_LOG_SMU("dprefclk,%d,N/A,N/A,%s\n", 
658 + regs_and_bypass->dprefclk, 659 + bypass_clks[(int) regs_and_bypass->dprefclk_bypass]); 660 + 661 + DC_LOG_SMU("dispclk,%d,N/A,N/A,%s\n", 662 + regs_and_bypass->dispclk, 663 + bypass_clks[(int) regs_and_bypass->dispclk_bypass]); 664 + 665 + // REGISTER VALUES 666 + DC_LOG_SMU("reg_name,value,clk_type"); 667 + 668 + DC_LOG_SMU("CLK1_CLK3_CURRENT_CNT,%d,dcfclk", 669 + internal.CLK1_CLK3_CURRENT_CNT); 670 + 671 + DC_LOG_SMU("CLK1_CLK4_CURRENT_CNT,%d,dtbclk", 672 + internal.CLK1_CLK4_CURRENT_CNT); 673 + 674 + DC_LOG_SMU("CLK1_CLK3_DS_CNTL,%d,dcf_deep_sleep_divider", 675 + internal.CLK1_CLK3_DS_CNTL); 676 + 677 + DC_LOG_SMU("CLK1_CLK3_ALLOW_DS,%d,dcf_deep_sleep_allow", 678 + internal.CLK1_CLK3_ALLOW_DS); 679 + 680 + DC_LOG_SMU("CLK1_CLK2_CURRENT_CNT,%d,dprefclk", 681 + internal.CLK1_CLK2_CURRENT_CNT); 682 + 683 + DC_LOG_SMU("CLK1_CLK0_CURRENT_CNT,%d,dispclk", 684 + internal.CLK1_CLK0_CURRENT_CNT); 685 + 686 + DC_LOG_SMU("CLK1_CLK1_CURRENT_CNT,%d,dppclk", 687 + internal.CLK1_CLK1_CURRENT_CNT); 688 + 689 + DC_LOG_SMU("CLK1_CLK3_BYPASS_CNTL,%d,dcfclk_bypass", 690 + internal.CLK1_CLK3_BYPASS_CNTL); 691 + 692 + DC_LOG_SMU("CLK1_CLK2_BYPASS_CNTL,%d,dprefclk_bypass", 693 + internal.CLK1_CLK2_BYPASS_CNTL); 694 + 695 + DC_LOG_SMU("CLK1_CLK0_BYPASS_CNTL,%d,dispclk_bypass", 696 + internal.CLK1_CLK0_BYPASS_CNTL); 697 + 698 + DC_LOG_SMU("CLK1_CLK1_BYPASS_CNTL,%d,dppclk_bypass", 699 + internal.CLK1_CLK1_BYPASS_CNTL); 700 + 701 + } 593 702 } 594 703 595 704 static bool dcn35_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base) ··· 732 623 void dcn35_init_clocks(struct clk_mgr *clk_mgr) 733 624 { 734 625 struct clk_mgr_internal *clk_mgr_int = TO_CLK_MGR_INTERNAL(clk_mgr); 626 + struct clk_mgr_dcn35 *clk_mgr_dcn35 = TO_CLK_MGR_DCN35(clk_mgr_int); 735 627 736 628 init_clk_states(clk_mgr); 737 629 ··· 743 633 else 744 634 clk_mgr->dp_dto_source_clock_in_khz = clk_mgr->dprefclk_khz; 745 635 636 + dcn35_save_clk_registers(&clk_mgr->boot_snapshot, clk_mgr_dcn35); 637 + 638 + clk_mgr->clks.ref_dtbclk_khz = clk_mgr->boot_snapshot.dtbclk * 10; 639 + if (clk_mgr->boot_snapshot.dtbclk > 59000) { 640 + /*dtbclk enabled based on */ 641 + clk_mgr->clks.dtbclk_en = true; 642 + } 746 643 } 747 644 static struct clk_bw_params dcn35_bw_params = { 748 645 .vram_type = Ddr4MemType, ··· 1440 1323 dcn35_bw_params.wm_table = ddr5_wm_table; 1441 1324 } 1442 1325 /* Saved clocks configured at boot for debug purposes */ 1443 - dcn35_dump_clk_registers(&clk_mgr->base.base.boot_snapshot, clk_mgr); 1326 + dcn35_save_clk_registers(&clk_mgr->base.base.boot_snapshot, clk_mgr); 1444 1327 1445 1328 clk_mgr->base.base.dprefclk_khz = dcn35_smu_get_dprefclk(&clk_mgr->base); 1446 1329 clk_mgr->base.base.clks.ref_dtbclk_khz = 600000;
+13 -13
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
··· 162 162 unsigned int i; 163 163 char *entry_i = (char *)entry_0; 164 164 165 - uint32_t ret = dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, 0xFF); 165 + uint32_t ret = dcn401_smu_get_dpm_freq_by_index(clk_mgr, clk, 0xFF); 166 166 167 167 if (ret & (1 << 31)) 168 168 /* fine-grained, only min and max */ ··· 174 174 175 175 /* if the initial message failed, num_levels will be 0 */ 176 176 for (i = 0; i < *num_levels && i < ARRAY_SIZE(clk_mgr->base.bw_params->clk_table.entries); i++) { 177 - *((unsigned int *)entry_i) = (dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, i) & 0xFFFF); 177 + *((unsigned int *)entry_i) = (dcn401_smu_get_dpm_freq_by_index(clk_mgr, clk, i) & 0xFFFF); 178 178 entry_i += sizeof(clk_mgr->base.bw_params->clk_table.entries[0]); 179 179 } 180 180 } ··· 231 231 clk_mgr->smu_present = false; 232 232 clk_mgr->dpm_present = false; 233 233 234 - if (!clk_mgr_base->force_smu_not_present && dcn30_smu_get_smu_version(clk_mgr, &clk_mgr->smu_ver)) 234 + if (!clk_mgr_base->force_smu_not_present && dcn401_smu_get_smu_version(clk_mgr, &clk_mgr->smu_ver)) 235 235 clk_mgr->smu_present = true; 236 236 237 237 if (!clk_mgr->smu_present) 238 238 return; 239 239 240 - dcn30_smu_check_driver_if_version(clk_mgr); 241 - dcn30_smu_check_msg_header_version(clk_mgr); 240 + dcn401_smu_check_driver_if_version(clk_mgr); 241 + dcn401_smu_check_msg_header_version(clk_mgr); 242 242 243 243 /* DCFCLK */ 244 244 dcn401_init_single_clock(clk_mgr, PPCLK_DCFCLK, 245 245 &clk_mgr_base->bw_params->clk_table.entries[0].dcfclk_mhz, 246 246 &num_entries_per_clk->num_dcfclk_levels); 247 - clk_mgr_base->bw_params->dc_mode_limit.dcfclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DCFCLK); 247 + clk_mgr_base->bw_params->dc_mode_limit.dcfclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DCFCLK); 248 248 if (num_entries_per_clk->num_dcfclk_levels && clk_mgr_base->bw_params->dc_mode_limit.dcfclk_mhz == 249 249 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_dcfclk_levels - 1].dcfclk_mhz) 250 250 clk_mgr_base->bw_params->dc_mode_limit.dcfclk_mhz = 0; ··· 253 253 dcn401_init_single_clock(clk_mgr, PPCLK_SOCCLK, 254 254 &clk_mgr_base->bw_params->clk_table.entries[0].socclk_mhz, 255 255 &num_entries_per_clk->num_socclk_levels); 256 - clk_mgr_base->bw_params->dc_mode_limit.socclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_SOCCLK); 256 + clk_mgr_base->bw_params->dc_mode_limit.socclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_SOCCLK); 257 257 if (num_entries_per_clk->num_socclk_levels && clk_mgr_base->bw_params->dc_mode_limit.socclk_mhz == 258 258 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_socclk_levels - 1].socclk_mhz) 259 259 clk_mgr_base->bw_params->dc_mode_limit.socclk_mhz = 0; ··· 263 263 dcn401_init_single_clock(clk_mgr, PPCLK_DTBCLK, 264 264 &clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz, 265 265 &num_entries_per_clk->num_dtbclk_levels); 266 - clk_mgr_base->bw_params->dc_mode_limit.dtbclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DTBCLK); 266 + clk_mgr_base->bw_params->dc_mode_limit.dtbclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DTBCLK); 267 267 if (num_entries_per_clk->num_dtbclk_levels && clk_mgr_base->bw_params->dc_mode_limit.dtbclk_mhz == 268 268 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_dtbclk_levels - 1].dtbclk_mhz) 269 269 clk_mgr_base->bw_params->dc_mode_limit.dtbclk_mhz = 0; ··· 273 273 dcn401_init_single_clock(clk_mgr, PPCLK_DISPCLK, 274 274 
&clk_mgr_base->bw_params->clk_table.entries[0].dispclk_mhz, 275 275 &num_entries_per_clk->num_dispclk_levels); 276 - clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DISPCLK); 276 + clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DISPCLK); 277 277 if (num_entries_per_clk->num_dispclk_levels && clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz == 278 278 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_dispclk_levels - 1].dispclk_mhz) 279 279 clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz = 0; ··· 1318 1318 table->Watermarks.WatermarkRow[i].WmSetting = i; 1319 1319 table->Watermarks.WatermarkRow[i].Flags = clk_mgr->base.bw_params->wm_table.nv_entries[i].pmfw_breakdown.wm_type; 1320 1320 } 1321 - dcn30_smu_set_dram_addr_high(clk_mgr, clk_mgr->wm_range_table_addr >> 32); 1322 - dcn30_smu_set_dram_addr_low(clk_mgr, clk_mgr->wm_range_table_addr & 0xFFFFFFFF); 1321 + dcn401_smu_set_dram_addr_high(clk_mgr, clk_mgr->wm_range_table_addr >> 32); 1322 + dcn401_smu_set_dram_addr_low(clk_mgr, clk_mgr->wm_range_table_addr & 0xFFFFFFFF); 1323 1323 dcn401_smu_transfer_wm_table_dram_2_smu(clk_mgr); 1324 1324 } 1325 1325 ··· 1390 1390 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_memclk_levels - 1].memclk_mhz; 1391 1391 } 1392 1392 1393 - clk_mgr_base->bw_params->dc_mode_limit.memclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_UCLK); 1393 + clk_mgr_base->bw_params->dc_mode_limit.memclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_UCLK); 1394 1394 if (num_entries_per_clk->num_memclk_levels && clk_mgr_base->bw_params->dc_mode_limit.memclk_mhz == 1395 1395 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_memclk_levels - 1].memclk_mhz) 1396 1396 clk_mgr_base->bw_params->dc_mode_limit.memclk_mhz = 0; ··· 1399 1399 dcn401_init_single_clock(clk_mgr, PPCLK_FCLK, 1400 1400 &clk_mgr_base->bw_params->clk_table.entries[0].fclk_mhz, 1401 1401 &num_entries_per_clk->num_fclk_levels); 1402 - clk_mgr_base->bw_params->dc_mode_limit.fclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_FCLK); 1402 + clk_mgr_base->bw_params->dc_mode_limit.fclk_mhz = dcn401_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_FCLK); 1403 1403 if (num_entries_per_clk->num_fclk_levels && clk_mgr_base->bw_params->dc_mode_limit.fclk_mhz == 1404 1404 clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_fclk_levels - 1].fclk_mhz) 1405 1405 clk_mgr_base->bw_params->dc_mode_limit.fclk_mhz = 0;
+126 -4
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr_smu_msg.c
··· 57 57 /* Wait for response register to be ready */ 58 58 dcn401_smu_wait_for_response(clk_mgr, 10, 200000); 59 59 60 + TRACE_SMU_MSG_ENTER(msg_id, param_in, clk_mgr->base.ctx); 61 + 60 62 /* Clear response register */ 61 63 REG_WRITE(DAL_RESP_REG, 0); 62 64 ··· 73 71 if (param_out) 74 72 *param_out = REG_READ(DAL_ARG_REG); 75 73 74 + TRACE_SMU_MSG_EXIT(true, param_out ? *param_out : 0, clk_mgr->base.ctx); 76 75 return true; 77 76 } 78 77 78 + TRACE_SMU_MSG_EXIT(false, 0, clk_mgr->base.ctx); 79 79 return false; 80 80 } 81 81 ··· 106 102 *total_delay_us += delay_us; 107 103 } while (max_retries--); 108 104 109 - TRACE_SMU_DELAY(*total_delay_us, clk_mgr->base.ctx); 110 - 111 105 return reg; 112 106 } 113 107 ··· 117 115 /* Wait for response register to be ready */ 118 116 dcn401_smu_wait_for_response_delay(clk_mgr, 10, 200000, &delay1_us); 119 117 118 + TRACE_SMU_MSG_ENTER(msg_id, param_in, clk_mgr->base.ctx); 119 + 120 120 /* Clear response register */ 121 121 REG_WRITE(DAL_RESP_REG, 0); 122 122 ··· 128 124 /* Trigger the message transaction by writing the message ID */ 129 125 REG_WRITE(DAL_MSG_REG, msg_id); 130 126 131 - TRACE_SMU_MSG(msg_id, param_in, clk_mgr->base.ctx); 132 - 133 127 /* Wait for response */ 134 128 if (dcn401_smu_wait_for_response_delay(clk_mgr, 10, 200000, &delay2_us) == DALSMC_Result_OK) { 135 129 if (param_out) 136 130 *param_out = REG_READ(DAL_ARG_REG); 137 131 138 132 *total_delay_us = delay1_us + delay2_us; 133 + TRACE_SMU_MSG_EXIT(true, param_out ? *param_out : 0, clk_mgr->base.ctx); 139 134 return true; 140 135 } 141 136 142 137 *total_delay_us = delay1_us + 2000000; 138 + TRACE_SMU_MSG_EXIT(false, 0, clk_mgr->base.ctx); 139 + return false; 140 + } 141 + 142 + bool dcn401_smu_get_smu_version(struct clk_mgr_internal *clk_mgr, unsigned int *version) 143 + { 144 + smu_print("SMU Get SMU version\n"); 145 + 146 + if (dcn401_smu_send_msg_with_param(clk_mgr, 147 + DALSMC_MSG_GetSmuVersion, 0, version)) { 148 + 149 + smu_print("SMU version: %d\n", *version); 150 + 151 + return true; 152 + } 153 + 154 + return false; 155 + } 156 + 157 + /* Message output should match SMU11_DRIVER_IF_VERSION in smu11_driver_if.h */ 158 + bool dcn401_smu_check_driver_if_version(struct clk_mgr_internal *clk_mgr) 159 + { 160 + uint32_t response = 0; 161 + 162 + smu_print("SMU Check driver if version\n"); 163 + 164 + if (dcn401_smu_send_msg_with_param(clk_mgr, 165 + DALSMC_MSG_GetDriverIfVersion, 0, &response)) { 166 + 167 + smu_print("SMU driver if version: %d\n", response); 168 + 169 + if (response == SMU14_DRIVER_IF_VERSION) 170 + return true; 171 + } 172 + 173 + return false; 174 + } 175 + 176 + /* Message output should match DALSMC_VERSION in dalsmc.h */ 177 + bool dcn401_smu_check_msg_header_version(struct clk_mgr_internal *clk_mgr) 178 + { 179 + uint32_t response = 0; 180 + 181 + smu_print("SMU Check msg header version\n"); 182 + 183 + if (dcn401_smu_send_msg_with_param(clk_mgr, 184 + DALSMC_MSG_GetMsgHeaderVersion, 0, &response)) { 185 + 186 + smu_print("SMU msg header version: %d\n", response); 187 + 188 + if (response == DALSMC_VERSION) 189 + return true; 190 + } 191 + 143 192 return false; 144 193 } 145 194 ··· 218 161 219 162 dcn401_smu_send_msg_with_param(clk_mgr, DALSMC_MSG_SetCabForUclkPstate, param, NULL); 220 163 smu_print("Numways for SubVP : %d\n", num_ways); 164 + } 165 + 166 + void dcn401_smu_set_dram_addr_high(struct clk_mgr_internal *clk_mgr, uint32_t addr_high) 167 + { 168 + smu_print("SMU Set DRAM addr high: %d\n", addr_high); 169 + 170 + 
dcn401_smu_send_msg_with_param(clk_mgr, 171 + DALSMC_MSG_SetDalDramAddrHigh, addr_high, NULL); 172 + } 173 + 174 + void dcn401_smu_set_dram_addr_low(struct clk_mgr_internal *clk_mgr, uint32_t addr_low) 175 + { 176 + smu_print("SMU Set DRAM addr low: %d\n", addr_low); 177 + 178 + dcn401_smu_send_msg_with_param(clk_mgr, 179 + DALSMC_MSG_SetDalDramAddrLow, addr_low, NULL); 221 180 } 222 181 223 182 void dcn401_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr) ··· 418 345 dcn401_smu_send_msg_with_param(clk_mgr, DALSMC_MSG_GetNumUmcChannels, 0, &response); 419 346 420 347 smu_print("SMU Get Num UMC Channels: num_umc_channels = %d\n", response); 348 + 349 + return response; 350 + } 351 + 352 + /* 353 + * Frequency in MHz returned in lower 16 bits for valid DPM level 354 + * 355 + * Call with dpm_level = 0xFF to query features, return value will be: 356 + * Bits 7:0 - number of DPM levels 357 + * Bit 28 - 1 = auto DPM on 358 + * Bit 29 - 1 = sweep DPM on 359 + * Bit 30 - 1 = forced DPM on 360 + * Bit 31 - 0 = discrete, 1 = fine-grained 361 + * 362 + * With fine-grained DPM, only min and max frequencies will be reported 363 + * 364 + * Returns 0 on failure 365 + */ 366 + unsigned int dcn401_smu_get_dpm_freq_by_index(struct clk_mgr_internal *clk_mgr, uint32_t clk, uint8_t dpm_level) 367 + { 368 + uint32_t response = 0; 369 + 370 + /* bits 23:16 for clock type, lower 8 bits for DPM level */ 371 + uint32_t param = (clk << 16) | dpm_level; 372 + 373 + smu_print("SMU Get dpm freq by index: clk = %d, dpm_level = %d\n", clk, dpm_level); 374 + 375 + dcn401_smu_send_msg_with_param(clk_mgr, 376 + DALSMC_MSG_GetDpmFreqByIndex, param, &response); 377 + 378 + smu_print("SMU dpm freq: %d MHz\n", response); 379 + 380 + return response; 381 + } 382 + 383 + /* Returns the max DPM frequency in DC mode in MHz, 0 on failure */ 384 + unsigned int dcn401_smu_get_dc_mode_max_dpm_freq(struct clk_mgr_internal *clk_mgr, uint32_t clk) 385 + { 386 + uint32_t response = 0; 387 + 388 + /* bits 23:16 for clock type */ 389 + uint32_t param = clk << 16; 390 + 391 + smu_print("SMU Get DC mode max DPM freq: clk = %d\n", clk); 392 + 393 + dcn401_smu_send_msg_with_param(clk_mgr, 394 + DALSMC_MSG_GetDcModeMaxDpmFreq, param, &response); 395 + 396 + smu_print("SMU DC mode max DMP freq: %d MHz\n", response); 421 397 422 398 return response; 423 399 }
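dcn401_smu_get_dpm_freq_by_index() packs the clock type into bits 23:16 of the message parameter and the DPM level into the low byte; when called with level 0xFF, the response carries the capability bits described in the comment (bits 7:0 = number of levels, bit 31 = fine-grained, in which case only min and max are reported). A small sketch of packing the parameter and decoding such a response; the response value here is hypothetical:

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned int pack_param(unsigned int clk, unsigned int dpm_level)
    {
        return (clk << 16) | (dpm_level & 0xFF);   /* bits 23:16 clock, bits 7:0 level */
    }

    int main(void)
    {
        unsigned int response = 0x80000002u;       /* hypothetical reply for level 0xFF */
        unsigned int num_levels = response & 0xFF;
        bool fine_grained = (response >> 31) & 1;  /* only min and max reported if set */

        printf("param for clk 3, level 0xFF: 0x%08x\n", pack_param(3, 0xFF));
        printf("levels = %u, fine grained = %d\n", num_levels, fine_grained);
        return 0;
    }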
+9 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr_smu_msg.h
··· 7 7 8 8 #include "os_types.h" 9 9 #include "core_types.h" 10 - #include "dcn32/dcn32_clk_mgr_smu_msg.h" 11 10 11 + struct clk_mgr_internal; 12 + 13 + bool dcn401_smu_get_smu_version(struct clk_mgr_internal *clk_mgr, unsigned int *version); 14 + bool dcn401_smu_check_driver_if_version(struct clk_mgr_internal *clk_mgr); 15 + bool dcn401_smu_check_msg_header_version(struct clk_mgr_internal *clk_mgr); 12 16 void dcn401_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool support); 13 17 void dcn401_smu_send_uclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool support); 14 18 void dcn401_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways); 19 + void dcn401_smu_set_dram_addr_high(struct clk_mgr_internal *clk_mgr, uint32_t addr_high); 20 + void dcn401_smu_set_dram_addr_low(struct clk_mgr_internal *clk_mgr, uint32_t addr_low); 15 21 void dcn401_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr); 16 22 void dcn401_smu_set_pme_workaround(struct clk_mgr_internal *clk_mgr); 17 23 unsigned int dcn401_smu_set_hard_min_by_freq(struct clk_mgr_internal *clk_mgr, uint32_t clk, uint16_t freq_mhz); ··· 35 29 void dcn401_smu_set_min_deep_sleep_dcef_clk(struct clk_mgr_internal *clk_mgr, uint32_t freq_mhz); 36 30 void dcn401_smu_set_num_of_displays(struct clk_mgr_internal *clk_mgr, uint32_t num_displays); 37 31 unsigned int dcn401_smu_get_num_of_umc_channels(struct clk_mgr_internal *clk_mgr); 32 + unsigned int dcn401_smu_get_dc_mode_max_dpm_freq(struct clk_mgr_internal *clk_mgr, uint32_t clk); 33 + unsigned int dcn401_smu_get_dpm_freq_by_index(struct clk_mgr_internal *clk_mgr, uint32_t clk, uint8_t dpm_level); 38 34 39 35 #endif /* __DCN401_CLK_MGR_SMU_MSG_H_ */
+7 -5
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 460 460 * avoid conflicting with firmware updates. 461 461 */ 462 462 if (dc->ctx->dce_version > DCE_VERSION_MAX) { 463 - if ((dc->optimized_required || dc->wm_optimized_required) && 463 + if (dc->optimized_required && 464 464 (stream->adjust.v_total_max != adjust->v_total_max || 465 465 stream->adjust.v_total_min != adjust->v_total_min)) { 466 466 stream->adjust.timing_adjust_pending = true; ··· 2577 2577 } 2578 2578 2579 2579 dc->optimized_required = false; 2580 - dc->wm_optimized_required = false; 2581 2580 } 2582 2581 2583 2582 bool dc_set_generic_gpio_for_stereo(bool enable, ··· 3055 3056 } else if (memcmp(&dc->current_state->bw_ctx.bw.dcn.clk, &dc->clk_mgr->clks, offsetof(struct dc_clocks, prev_p_state_change_support)) != 0) { 3056 3057 dc->optimized_required = true; 3057 3058 } 3058 - 3059 - dc->optimized_required |= dc->wm_optimized_required; 3060 3059 } 3061 3060 3062 3061 return type; ··· 3309 3312 3310 3313 if (update->adaptive_sync_infopacket) 3311 3314 stream->adaptive_sync_infopacket = *update->adaptive_sync_infopacket; 3315 + 3316 + if (update->avi_infopacket) 3317 + stream->avi_infopacket = *update->avi_infopacket; 3312 3318 3313 3319 if (update->dither_option) 3314 3320 stream->dither_option = *update->dither_option; ··· 3607 3607 stream_update->vsp_infopacket || 3608 3608 stream_update->hfvsif_infopacket || 3609 3609 stream_update->adaptive_sync_infopacket || 3610 - stream_update->vtem_infopacket) { 3610 + stream_update->vtem_infopacket || 3611 + stream_update->avi_infopacket) { 3611 3612 resource_build_info_frame(pipe_ctx); 3612 3613 dc->hwss.update_info_frame(pipe_ctx); 3613 3614 ··· 5080 5079 stream_update->hfvsif_infopacket || 5081 5080 stream_update->vtem_infopacket || 5082 5081 stream_update->adaptive_sync_infopacket || 5082 + stream_update->avi_infopacket || 5083 5083 stream_update->dpms_off || 5084 5084 stream_update->allow_freesync || 5085 5085 stream_update->vrr_active_variable ||
+6
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 4410 4410 unsigned int fr_ind = pipe_ctx->stream->timing.fr_index; 4411 4411 enum dc_timing_3d_format format; 4412 4412 4413 + if (stream->avi_infopacket.valid) { 4414 + *info_packet = stream->avi_infopacket; 4415 + return; 4416 + } 4417 + 4413 4418 memset(&hdmi_info, 0, sizeof(union hdmi_info_packet)); 4419 + 4414 4420 4415 4421 color_space = pipe_ctx->stream->output_color_space; 4416 4422 if (color_space == COLOR_SPACE_UNKNOWN)
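The dc_resource.c hunk lets a caller-supplied AVI infoframe take precedence: when stream->avi_infopacket is flagged valid it is copied out verbatim and the usual infoframe construction is skipped. A trimmed sketch of that early-out pattern, with placeholder types rather than DC's real ones:

    #include <stdbool.h>
    #include <string.h>

    struct pkt { bool valid; unsigned char payload[28]; };

    static void build_default_avi(struct pkt *out)
    {
        memset(out, 0, sizeof(*out));
        out->payload[0] = 0x82;      /* AVI infoframe type code, for illustration */
        out->valid = true;
    }

    static void get_avi_packet(const struct pkt *override, struct pkt *out)
    {
        if (override && override->valid) {
            *out = *override;        /* use the prebuilt packet as-is */
            return;
        }
        build_default_avi(out);      /* otherwise construct it locally */
    }

    int main(void)
    {
        struct pkt prebuilt = { .valid = true, .payload = { 0x82, 0x02 } };
        struct pkt out;

        get_avi_packet(&prebuilt, &out);   /* takes the override path */
        get_avi_packet(NULL, &out);        /* falls back to the default builder */
        return 0;
    }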
+2 -3
drivers/gpu/drm/amd/display/dc/dc.h
··· 55 55 struct set_config_cmd_payload; 56 56 struct dmub_notification; 57 57 58 - #define DC_VER "3.2.350" 58 + #define DC_VER "3.2.351" 59 59 60 60 /** 61 61 * MAX_SURFACES - representative of the upper bound of surfaces that can be piped to a single CRTC ··· 1163 1163 unsigned int auxless_alpm_lfps_silence_ns; 1164 1164 unsigned int auxless_alpm_lfps_t1t2_us; 1165 1165 short auxless_alpm_lfps_t1t2_offset_us; 1166 + bool disable_stutter_for_wm_program; 1166 1167 }; 1167 1168 1168 1169 ··· 1392 1391 uint32_t in_transfer_func_change:1; 1393 1392 uint32_t input_csc_change:1; 1394 1393 uint32_t coeff_reduction_change:1; 1395 - uint32_t output_tf_change:1; 1396 1394 uint32_t pixel_format_change:1; 1397 1395 uint32_t plane_size_change:1; 1398 1396 uint32_t gamut_remap_change:1; ··· 1735 1735 1736 1736 /* Require to optimize clocks and bandwidth for added/removed planes */ 1737 1737 bool optimized_required; 1738 - bool wm_optimized_required; 1739 1738 bool idle_optimizations_allowed; 1740 1739 bool enable_c20_dtm_b0; 1741 1740
+3
drivers/gpu/drm/amd/display/dc/dc_stream.h
··· 203 203 struct dc_info_packet hfvsif_infopacket; 204 204 struct dc_info_packet vtem_infopacket; 205 205 struct dc_info_packet adaptive_sync_infopacket; 206 + struct dc_info_packet avi_infopacket; 206 207 uint8_t dsc_packed_pps[128]; 207 208 struct rect src; /* composition area */ 208 209 struct rect dst; /* stream addressable area */ ··· 336 335 struct dc_info_packet *hfvsif_infopacket; 337 336 struct dc_info_packet *vtem_infopacket; 338 337 struct dc_info_packet *adaptive_sync_infopacket; 338 + struct dc_info_packet *avi_infopacket; 339 + 339 340 bool *dpms_off; 340 341 bool integer_scaling_update; 341 342 bool *allow_freesync;
+1
drivers/gpu/drm/amd/display/dc/dc_types.h
··· 1217 1217 bool rc_disable; 1218 1218 bool rc_allow_static_screen; 1219 1219 bool rc_allow_fullscreen_VPB; 1220 + bool read_psrcap_again; 1220 1221 unsigned int replay_enable_option; 1221 1222 } psr; 1222 1223 /* ABM */
+21 -3
drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
··· 39 39 40 40 #define CTX \ 41 41 dccg_dcn->base.ctx 42 + #include "logger_types.h" 42 43 #define DC_LOGGER \ 43 44 dccg->ctx->logger 44 45 ··· 1137 1136 default: 1138 1137 break; 1139 1138 } 1140 - //DC_LOG_DEBUG("%s: dpp_inst(%d) DPPCLK_EN = %d\n", __func__, dpp_inst, enable); 1139 + DC_LOG_DEBUG("%s: dpp_inst(%d) DPPCLK_EN = %d\n", __func__, dpp_inst, enable); 1141 1140 1142 1141 } 1143 1142 ··· 1407 1406 * PIPEx_DTO_SRC_SEL should not be programmed during DTBCLK update since OTG may still be on, and the 1408 1407 * programming is handled in program_pix_clk() regardless, so it can be removed from here. 1409 1408 */ 1409 + DC_LOG_DEBUG("%s: OTG%d DTBCLK DTO enabled: pixclk_khz=%d, ref_dtbclk_khz=%d, req_dtbclk_khz=%d, phase=%d, modulo=%d\n", 1410 + __func__, params->otg_inst, params->pixclk_khz, 1411 + params->ref_dtbclk_khz, req_dtbclk_khz, phase, modulo); 1412 + 1410 1413 } else { 1411 1414 switch (params->otg_inst) { 1412 1415 case 0: ··· 1436 1431 1437 1432 REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0); 1438 1433 REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0); 1434 + 1435 + DC_LOG_DEBUG("%s: OTG%d DTBCLK DTO disabled\n", __func__, params->otg_inst); 1439 1436 } 1440 1437 } 1441 1438 ··· 1482 1475 BREAK_TO_DEBUGGER(); 1483 1476 return; 1484 1477 } 1478 + DC_LOG_DEBUG("%s: dp_hpo_inst(%d) DPSTREAMCLK_EN = %d, DPSTREAMCLK_SRC_SEL = %d\n", 1479 + __func__, dp_hpo_inst, (src == REFCLK) ? 0 : 1, otg_inst); 1485 1480 } 1486 1481 1487 1482 ··· 1523 1514 BREAK_TO_DEBUGGER(); 1524 1515 return; 1525 1516 } 1517 + DC_LOG_DEBUG("%s: dp_hpo_inst(%d) DPSTREAMCLK_ROOT_GATE_DISABLE = %d\n", 1518 + __func__, dp_hpo_inst, enable ? 1 : 0); 1526 1519 } 1527 1520 1528 1521 ··· 1564 1553 BREAK_TO_DEBUGGER(); 1565 1554 return; 1566 1555 } 1567 - //DC_LOG_DEBUG("%s: dpp_inst(%d) PHYESYMCLK_ROOT_GATE_DISABLE:\n", __func__, phy_inst, enable ? 0 : 1); 1556 + DC_LOG_DEBUG("%s: dpp_inst(%d) PHYESYMCLK_ROOT_GATE_DISABLE: %d\n", __func__, phy_inst, enable ? 0 : 1); 1568 1557 1569 1558 } 1570 1559 ··· 1637 1626 BREAK_TO_DEBUGGER(); 1638 1627 return; 1639 1628 } 1629 + DC_LOG_DEBUG("%s: phy_inst(%d) PHYxSYMCLK_EN = %d, PHYxSYMCLK_SRC_SEL = %d\n", 1630 + __func__, phy_inst, force_enable ? 
1 : 0, clk_src); 1640 1631 } 1641 1632 1642 1633 static void dccg35_set_valid_pixel_rate( ··· 1686 1673 } 1687 1674 1688 1675 dccg->dpp_clock_gated[dpp_inst] = !clock_on; 1676 + DC_LOG_DEBUG("%s: dpp_inst(%d) clock_on = %d\n", __func__, dpp_inst, clock_on); 1689 1677 } 1690 1678 1691 1679 static void dccg35_disable_symclk32_se( ··· 1745 1731 BREAK_TO_DEBUGGER(); 1746 1732 return; 1747 1733 } 1734 + 1748 1735 } 1749 1736 1750 1737 static void dccg35_init_cb(struct dccg *dccg) ··· 1753 1738 (void)dccg; 1754 1739 /* Any RCG should be done when driver enter low power mode*/ 1755 1740 } 1756 - 1757 1741 void dccg35_init(struct dccg *dccg) 1758 1742 { 1759 1743 int otg_inst; ··· 1767 1753 for (otg_inst = 0; otg_inst < 2; otg_inst++) { 1768 1754 dccg31_disable_symclk32_le(dccg, otg_inst); 1769 1755 dccg31_set_symclk32_le_root_clock_gating(dccg, otg_inst, false); 1756 + DC_LOG_DEBUG("%s: OTG%d SYMCLK32_LE disabled and root clock gating disabled\n", 1757 + __func__, otg_inst); 1770 1758 } 1771 1759 1772 1760 // if (dccg->ctx->dc->debug.root_clock_optimization.bits.symclk32_se) ··· 1781 1765 dccg35_set_dpstreamclk(dccg, REFCLK, otg_inst, 1782 1766 otg_inst); 1783 1767 dccg35_set_dpstreamclk_root_clock_gating(dccg, otg_inst, false); 1768 + DC_LOG_DEBUG("%s: OTG%d DPSTREAMCLK disabled and root clock gating disabled\n", 1769 + __func__, otg_inst); 1784 1770 } 1785 1771 1786 1772 /*
+1
drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
··· 169 169 copy_settings_data->max_deviation_line = link->dpcd_caps.pr_info.max_deviation_line; 170 170 copy_settings_data->smu_optimizations_en = link->replay_settings.replay_smu_opt_enable; 171 171 copy_settings_data->replay_timing_sync_supported = link->replay_settings.config.replay_timing_sync_supported; 172 + copy_settings_data->replay_support_fast_resync_in_ultra_sleep_mode = link->replay_settings.config.replay_support_fast_resync_in_ultra_sleep_mode; 172 173 173 174 copy_settings_data->debug.bitfields.enable_ips_visual_confirm = dc->dc->debug.enable_ips_visual_confirm; 174 175
+6 -5
drivers/gpu/drm/amd/display/dc/dm_services.h
··· 277 277 /* 278 278 * SMU message tracing 279 279 */ 280 - void dm_trace_smu_msg(uint32_t msg_id, uint32_t param_in, struct dc_context *ctx); 281 - void dm_trace_smu_delay(uint32_t delay, struct dc_context *ctx); 280 + void dm_trace_smu_enter(uint32_t msg_id, uint32_t param_in, unsigned int delay, struct dc_context *ctx); 281 + void dm_trace_smu_exit(bool success, uint32_t response, struct dc_context *ctx); 282 282 283 - #define TRACE_SMU_MSG(msg_id, param_in, ctx) dm_trace_smu_msg(msg_id, param_in, ctx) 284 - #define TRACE_SMU_DELAY(response_delay, ctx) dm_trace_smu_delay(response_delay, ctx) 285 - 283 + #define TRACE_SMU_MSG_DELAY(msg_id, param_in, delay, ctx) dm_trace_smu_enter(msg_id, param_in, delay, ctx) 284 + #define TRACE_SMU_MSG(msg_id, param_in, ctx) dm_trace_smu_enter(msg_id, param_in, 0, ctx) 285 + #define TRACE_SMU_MSG_ENTER(msg_id, param_in, ctx) dm_trace_smu_enter(msg_id, param_in, 0, ctx) 286 + #define TRACE_SMU_MSG_EXIT(success, response, ctx) dm_trace_smu_exit(success, response, ctx) 286 287 287 288 /* 288 289 * DMUB Interfaces
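The tracing hooks change from a single message/delay pair to an enter/exit pair, so the DM layer can record both the outgoing SMU message and its eventual result. The dm_services.c stubs above stay empty; a hypothetical non-stub implementation could look like the following (the pr_debug logging is purely illustrative, not what amdgpu_dm actually does):

    #include <linux/printk.h>
    #include <linux/types.h>

    struct dc_context;

    void dm_trace_smu_enter(uint32_t msg_id, uint32_t param_in,
                            unsigned int delay, struct dc_context *ctx)
    {
        pr_debug("smu msg enter: id=0x%x param=0x%x waited=%uus\n",
                 msg_id, param_in, delay);
    }

    void dm_trace_smu_exit(bool success, uint32_t response, struct dc_context *ctx)
    {
        pr_debug("smu msg exit: %s, response=0x%x\n",
                 success ? "ok" : "failed", response);
    }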
+11 -10
drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn3.c
··· 48 48 49 49 static void remove_duplicates(double *list_a, int *list_a_size) 50 50 { 51 - int cur_element = 0; 52 - // For all elements b[i] in list_b[] 53 - while (cur_element < *list_a_size - 1) { 54 - if (list_a[cur_element] == list_a[cur_element + 1]) { 55 - for (int j = cur_element + 1; j < *list_a_size - 1; j++) { 56 - list_a[j] = list_a[j + 1]; 57 - } 58 - *list_a_size = *list_a_size - 1; 59 - } else { 60 - cur_element++; 51 + int j = 0; 52 + 53 + if (*list_a_size == 0) 54 + return; 55 + 56 + for (int i = 1; i < *list_a_size; i++) { 57 + if (list_a[j] != list_a[i]) { 58 + j++; 59 + list_a[j] = list_a[i]; 61 60 } 62 61 } 62 + 63 + *list_a_size = j + 1; 63 64 } 64 65 65 66 static bool increase_mpc_combine_factor(unsigned int *mpc_combine_factor, unsigned int limit)
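The remove_duplicates() rewrite replaces the shift-everything-down loop with a single pass over the already sorted list, using separate read and write cursors. The same idea in a standalone form:

    #include <stdio.h>

    static void dedup_sorted(double *a, int *n)
    {
        int j = 0;

        if (*n == 0)
            return;

        for (int i = 1; i < *n; i++) {
            if (a[j] != a[i])
                a[++j] = a[i];   /* keep the first occurrence of each value */
        }
        *n = j + 1;
    }

    int main(void)
    {
        double v[] = { 1.0, 1.0, 2.0, 3.0, 3.0, 3.0 };
        int n = 6;

        dedup_sorted(v, &n);
        for (int i = 0; i < n; i++)
            printf("%g ", v[i]);     /* prints: 1 2 3 */
        printf("\n");
        return 0;
    }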
+33 -5
drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
··· 28 28 #include "dcn32_hubbub.h" 29 29 #include "dm_services.h" 30 30 #include "reg_helper.h" 31 + #include "dal_asic_id.h" 31 32 32 33 33 34 #define CTX \ ··· 71 70 COMPBUF_RESERVED_SPACE_64B, hubbub2->pixel_chunk_size / 32, 72 71 COMPBUF_RESERVED_SPACE_ZS, hubbub2->pixel_chunk_size / 128); 73 72 REG_UPDATE(DCHUBBUB_DEBUG_CTRL_0, DET_DEPTH, 0x47F); 73 + } 74 + 75 + static void hubbub32_set_sdp_control(struct hubbub *hubbub, bool dc_control) 76 + { 77 + struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub); 78 + 79 + REG_UPDATE(DCHUBBUB_SDPIF_CFG0, 80 + SDPIF_PORT_CONTROL, dc_control); 74 81 } 75 82 76 83 void hubbub32_set_request_limit(struct hubbub *hubbub, int memory_channel_count, int words_per_channel) ··· 763 754 unsigned int refclk_mhz, 764 755 bool safe_to_lower) 765 756 { 757 + struct dc *dc = hubbub->ctx->dc; 766 758 bool wm_pending = false; 759 + 760 + if (!safe_to_lower && dc->debug.disable_stutter_for_wm_program && 761 + (ASICREV_IS_GC_11_0_0(dc->ctx->asic_id.hw_internal_rev) || 762 + ASICREV_IS_GC_11_0_3(dc->ctx->asic_id.hw_internal_rev))) { 763 + /* before raising watermarks, SDP control give to DF, stutter must be disabled */ 764 + wm_pending = true; 765 + hubbub32_set_sdp_control(hubbub, false); 766 + hubbub1_allow_self_refresh_control(hubbub, false); 767 + } 767 768 768 769 if (hubbub32_program_urgent_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower)) 769 770 wm_pending = true; ··· 805 786 REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND, 806 787 DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/ 807 788 808 - if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter) 809 - hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter); 789 + if (safe_to_lower) { 790 + /* after lowering watermarks, stutter setting is restored, SDP control given to DC */ 791 + hubbub1_allow_self_refresh_control(hubbub, !dc->debug.disable_stutter); 810 792 811 - hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow); 793 + if (dc->debug.disable_stutter_for_wm_program && 794 + (ASICREV_IS_GC_11_0_0(dc->ctx->asic_id.hw_internal_rev) || 795 + ASICREV_IS_GC_11_0_3(dc->ctx->asic_id.hw_internal_rev))) { 796 + hubbub32_set_sdp_control(hubbub, true); 797 + } 798 + } else if (dc->debug.disable_stutter) { 799 + hubbub1_allow_self_refresh_control(hubbub, !dc->debug.disable_stutter); 800 + } 801 + 802 + hubbub32_force_usr_retraining_allow(hubbub, dc->debug.force_usr_allow); 812 803 813 804 return wm_pending; 814 805 } ··· 1003 974 ignore the "df_pre_cstate_req" from the SDP port control. 1004 975 only the DCN will determine when to connect the SDP port 1005 976 */ 1006 - REG_UPDATE(DCHUBBUB_SDPIF_CFG0, 1007 - SDPIF_PORT_CONTROL, 1); 977 + hubbub32_set_sdp_control(hubbub, true); 1008 978 /*Set SDP's max outstanding request to 512 1009 979 must set the register back to 0 (max outstanding = 256) in zero frame buffer mode*/ 1010 980 REG_UPDATE(DCHUBBUB_SDPIF_CFG1,
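The hubbub change sequences stutter and SDP port ownership around watermark programming: when watermarks are being raised (safe_to_lower is false) on the affected GC 11.0.0/11.0.3 parts, self-refresh is disabled and the SDP port is handed to DF first; once it is safe to lower, stutter is restored and SDP control returns to DCN. A condensed ordering sketch with placeholder helpers standing in for the real register programming:

    #include <stdbool.h>

    static void set_sdp_control(bool dc_owns)  { (void)dc_owns; /* SDPIF_PORT_CONTROL */ }
    static void allow_self_refresh(bool allow) { (void)allow;   /* stutter enable/disable */ }
    static void program_watermark_regs(void)   { /* urgent / sr / dram watermarks */ }

    static void program_watermarks(bool safe_to_lower, bool wa_needed)
    {
        if (!safe_to_lower && wa_needed) {
            /* Raising watermarks: hand the SDP port to DF and stop
             * stutter before the larger values take effect. */
            set_sdp_control(false);
            allow_self_refresh(false);
        }

        program_watermark_regs();

        if (safe_to_lower) {
            /* Lowering: restore stutter, then give SDP control back to DC. */
            allow_self_refresh(true);
            if (wa_needed)
                set_sdp_control(true);
        }
    }

    int main(void)
    {
        program_watermarks(false, true);   /* raise path */
        program_watermarks(true, true);    /* lower path */
        return 0;
    }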
+1 -1
drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
··· 3347 3347 context, 3348 3348 false); 3349 3349 3350 - dc->wm_optimized_required = hubbub->funcs->program_watermarks(hubbub, 3350 + dc->optimized_required = hubbub->funcs->program_watermarks(hubbub, 3351 3351 &context->bw_ctx.bw.dcn.watermarks, 3352 3352 dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000, 3353 3353 true);
+8 -9
drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
··· 1982 1982 * updating on slave planes 1983 1983 */ 1984 1984 if (pipe_ctx->update_flags.bits.enable || 1985 - pipe_ctx->update_flags.bits.plane_changed || 1986 - pipe_ctx->stream->update_flags.bits.out_tf || 1987 - (pipe_ctx->plane_state && 1988 - pipe_ctx->plane_state->update_flags.bits.output_tf_change)) 1985 + pipe_ctx->update_flags.bits.plane_changed || 1986 + pipe_ctx->stream->update_flags.bits.out_tf) 1989 1987 hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream); 1990 1988 1991 1989 /* If the pipe has been enabled or has a different opp, we ··· 2388 2390 } 2389 2391 2390 2392 /* program dchubbub watermarks: 2391 - * For assigning wm_optimized_required, use |= operator since we don't want 2393 + * For assigning optimized_required, use |= operator since we don't want 2392 2394 * to clear the value if the optimize has not happened yet 2393 2395 */ 2394 - dc->wm_optimized_required |= hubbub->funcs->program_watermarks(hubbub, 2396 + dc->optimized_required |= hubbub->funcs->program_watermarks(hubbub, 2395 2397 &context->bw_ctx.bw.dcn.watermarks, 2396 2398 dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000, 2397 2399 false); ··· 2404 2406 if (hubbub->funcs->program_compbuf_size) { 2405 2407 if (context->bw_ctx.dml.ip.min_comp_buffer_size_kbytes) { 2406 2408 compbuf_size_kb = context->bw_ctx.dml.ip.min_comp_buffer_size_kbytes; 2407 - dc->wm_optimized_required |= (compbuf_size_kb != dc->current_state->bw_ctx.dml.ip.min_comp_buffer_size_kbytes); 2409 + dc->optimized_required |= (compbuf_size_kb != dc->current_state->bw_ctx.dml.ip.min_comp_buffer_size_kbytes); 2408 2410 } else { 2409 2411 compbuf_size_kb = context->bw_ctx.bw.dcn.compbuf_size_kb; 2410 - dc->wm_optimized_required |= (compbuf_size_kb != dc->current_state->bw_ctx.bw.dcn.compbuf_size_kb); 2412 + dc->optimized_required |= (compbuf_size_kb != dc->current_state->bw_ctx.bw.dcn.compbuf_size_kb); 2411 2413 } 2412 2414 2413 2415 hubbub->funcs->program_compbuf_size(hubbub, compbuf_size_kb, false); ··· 3129 3131 res_pool->dccg->funcs->dccg_init(res_pool->dccg); 3130 3132 3131 3133 //Enable ability to power gate / don't force power on permanently 3132 - hws->funcs.enable_power_gating_plane(hws, true); 3134 + if (hws->funcs.enable_power_gating_plane) 3135 + hws->funcs.enable_power_gating_plane(hws, true); 3133 3136 3134 3137 // Specific to FPGA dccg and registers 3135 3138 REG_WRITE(RBBMIF_TIMEOUT_DIS, 0xFFFFFFFF);
+6 -8
drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
··· 1383 1383 false); 1384 1384 1385 1385 /* program dchubbub watermarks: 1386 - * For assigning wm_optimized_required, use |= operator since we don't want 1386 + * For assigning optimized_required, use |= operator since we don't want 1387 1387 * to clear the value if the optimize has not happened yet 1388 1388 */ 1389 - dc->wm_optimized_required |= hubbub->funcs->program_watermarks(hubbub, 1389 + dc->optimized_required |= hubbub->funcs->program_watermarks(hubbub, 1390 1390 &context->bw_ctx.bw.dcn.watermarks, 1391 1391 dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000, 1392 1392 false); 1393 1393 /* update timeout thresholds */ 1394 1394 if (hubbub->funcs->program_arbiter) { 1395 - dc->wm_optimized_required |= hubbub->funcs->program_arbiter(hubbub, &context->bw_ctx.bw.dcn.arb_regs, false); 1395 + dc->optimized_required |= hubbub->funcs->program_arbiter(hubbub, &context->bw_ctx.bw.dcn.arb_regs, false); 1396 1396 } 1397 1397 1398 1398 /* decrease compbuf size */ 1399 1399 if (hubbub->funcs->program_compbuf_segments) { 1400 1400 compbuf_size = context->bw_ctx.bw.dcn.arb_regs.compbuf_size; 1401 - dc->wm_optimized_required |= (compbuf_size != dc->current_state->bw_ctx.bw.dcn.arb_regs.compbuf_size); 1401 + dc->optimized_required |= (compbuf_size != dc->current_state->bw_ctx.bw.dcn.arb_regs.compbuf_size); 1402 1402 1403 1403 hubbub->funcs->program_compbuf_segments(hubbub, compbuf_size, false); 1404 1404 } ··· 2032 2032 * updating on slave planes 2033 2033 */ 2034 2034 if (pipe_ctx->update_flags.bits.enable || 2035 - pipe_ctx->update_flags.bits.plane_changed || 2036 - pipe_ctx->stream->update_flags.bits.out_tf || 2037 - (pipe_ctx->plane_state && 2038 - pipe_ctx->plane_state->update_flags.bits.output_tf_change)) 2035 + pipe_ctx->update_flags.bits.plane_changed || 2036 + pipe_ctx->stream->update_flags.bits.out_tf) 2039 2037 hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream); 2040 2038 2041 2039 /* If the pipe has been enabled or has a different opp, we
+21 -4
drivers/gpu/drm/amd/display/dc/resource/dce100/dce100_resource.c
··· 29 29 #include "stream_encoder.h" 30 30 31 31 #include "resource.h" 32 + #include "clk_mgr.h" 32 33 #include "include/irq_service_interface.h" 33 34 #include "virtual/virtual_stream_encoder.h" 34 35 #include "dce110/dce110_resource.h" ··· 837 836 return DC_OK; 838 837 } 839 838 840 - static enum dc_status dce100_validate_bandwidth( 839 + enum dc_status dce100_validate_bandwidth( 841 840 struct dc *dc, 842 841 struct dc_state *context, 843 842 enum dc_validate_mode validate_mode) 844 843 { 845 844 int i; 846 845 bool at_least_one_pipe = false; 846 + struct dc_stream_state *stream = NULL; 847 + const uint32_t max_pix_clk_khz = max(dc->clk_mgr->clks.max_supported_dispclk_khz, 400000); 847 848 848 849 for (i = 0; i < dc->res_pool->pipe_count; i++) { 849 - if (context->res_ctx.pipe_ctx[i].stream) 850 + stream = context->res_ctx.pipe_ctx[i].stream; 851 + if (stream) { 850 852 at_least_one_pipe = true; 853 + 854 + if (stream->timing.pix_clk_100hz >= max_pix_clk_khz * 10) 855 + return DC_FAIL_BANDWIDTH_VALIDATE; 856 + } 851 857 } 852 858 853 859 if (at_least_one_pipe) { ··· 862 854 context->bw_ctx.bw.dce.dispclk_khz = 681000; 863 855 context->bw_ctx.bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; 864 856 } else { 865 - context->bw_ctx.bw.dce.dispclk_khz = 0; 857 + /* On DCE 6.0 and 6.4 the PLL0 is both the display engine clock and 858 + * the DP clock, and shouldn't be turned off. Just select the display 859 + * clock value from its low power mode. 860 + */ 861 + if (dc->ctx->dce_version == DCE_VERSION_6_0 || 862 + dc->ctx->dce_version == DCE_VERSION_6_4) 863 + context->bw_ctx.bw.dce.dispclk_khz = 352000; 864 + else 865 + context->bw_ctx.bw.dce.dispclk_khz = 0; 866 + 866 867 context->bw_ctx.bw.dce.yclk_khz = 0; 867 868 } 868 869 ··· 898 881 return true; 899 882 } 900 883 901 - static enum dc_status dce100_validate_global( 884 + enum dc_status dce100_validate_global( 902 885 struct dc *dc, 903 886 struct dc_state *context) 904 887 {
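The reworked dce100_validate_bandwidth() rejects any stream whose pixel clock reaches the maximum supported display clock (with a 400 MHz floor when the clock manager reports nothing useful), and keeps PLL0 at 352 MHz on DCE 6.0/6.4 even with no active pipes, since that PLL also drives the DP clock. A small sketch of the pixel-clock check, assuming the same units (pixel clock in 100 Hz units, dispclk in kHz); the numbers are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    static bool pix_clk_ok(unsigned int pix_clk_100hz, unsigned int max_dispclk_khz)
    {
        /* 400 MHz floor when the reported dispclk limit is missing or low */
        unsigned int limit_khz = max_dispclk_khz > 400000 ? max_dispclk_khz : 400000;

        /* kHz limit * 10 converts it into the same 100 Hz units */
        return pix_clk_100hz < limit_khz * 10;
    }

    int main(void)
    {
        /* 594 MHz pixel clock against a 600 MHz dispclk cap -> accepted */
        printf("%d\n", pix_clk_ok(5940000, 600000));
        /* 650 MHz pixel clock against the same cap -> rejected */
        printf("%d\n", pix_clk_ok(6500000, 600000));
        return 0;
    }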
+9
drivers/gpu/drm/amd/display/dc/resource/dce100/dce100_resource.h
··· 41 41 42 42 enum dc_status dce100_validate_plane(const struct dc_plane_state *plane_state, struct dc_caps *caps); 43 43 44 + enum dc_status dce100_validate_global( 45 + struct dc *dc, 46 + struct dc_state *context); 47 + 48 + enum dc_status dce100_validate_bandwidth( 49 + struct dc *dc, 50 + struct dc_state *context, 51 + enum dc_validate_mode validate_mode); 52 + 44 53 enum dc_status dce100_add_stream_to_ctx( 45 54 struct dc *dc, 46 55 struct dc_state *new_ctx,
+3 -66
drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c
··· 34 34 #include "stream_encoder.h" 35 35 36 36 #include "resource.h" 37 + #include "clk_mgr.h" 37 38 #include "include/irq_service_interface.h" 38 39 #include "irq/dce60/irq_service_dce60.h" 39 40 #include "dce110/dce110_timing_generator.h" ··· 864 863 } 865 864 } 866 865 867 - static enum dc_status dce60_validate_bandwidth( 868 - struct dc *dc, 869 - struct dc_state *context, 870 - enum dc_validate_mode validate_mode) 871 - { 872 - int i; 873 - bool at_least_one_pipe = false; 874 - 875 - for (i = 0; i < dc->res_pool->pipe_count; i++) { 876 - if (context->res_ctx.pipe_ctx[i].stream) 877 - at_least_one_pipe = true; 878 - } 879 - 880 - if (at_least_one_pipe) { 881 - /* TODO implement when needed but for now hardcode max value*/ 882 - context->bw_ctx.bw.dce.dispclk_khz = 681000; 883 - context->bw_ctx.bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; 884 - } else { 885 - /* On DCE 6.0 and 6.4 the PLL0 is both the display engine clock and 886 - * the DP clock, and shouldn't be turned off. Just select the display 887 - * clock value from its low power mode. 888 - */ 889 - if (dc->ctx->dce_version == DCE_VERSION_6_0 || 890 - dc->ctx->dce_version == DCE_VERSION_6_4) 891 - context->bw_ctx.bw.dce.dispclk_khz = 352000; 892 - else 893 - context->bw_ctx.bw.dce.dispclk_khz = 0; 894 - 895 - context->bw_ctx.bw.dce.yclk_khz = 0; 896 - } 897 - 898 - return DC_OK; 899 - } 900 - 901 - static bool dce60_validate_surface_sets( 902 - struct dc_state *context) 903 - { 904 - int i; 905 - 906 - for (i = 0; i < context->stream_count; i++) { 907 - if (context->stream_status[i].plane_count == 0) 908 - continue; 909 - 910 - if (context->stream_status[i].plane_count > 1) 911 - return false; 912 - 913 - if (context->stream_status[i].plane_states[0]->format 914 - >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) 915 - return false; 916 - } 917 - 918 - return true; 919 - } 920 - 921 - static enum dc_status dce60_validate_global( 922 - struct dc *dc, 923 - struct dc_state *context) 924 - { 925 - if (!dce60_validate_surface_sets(context)) 926 - return DC_FAIL_SURFACE_VALIDATE; 927 - 928 - return DC_OK; 929 - } 930 - 931 866 static void dce60_destroy_resource_pool(struct resource_pool **pool) 932 867 { 933 868 struct dce110_resource_pool *dce110_pool = TO_DCE110_RES_POOL(*pool); ··· 877 940 .destroy = dce60_destroy_resource_pool, 878 941 .link_enc_create = dce60_link_encoder_create, 879 942 .panel_cntl_create = dce60_panel_cntl_create, 880 - .validate_bandwidth = dce60_validate_bandwidth, 943 + .validate_bandwidth = dce100_validate_bandwidth, 881 944 .validate_plane = dce100_validate_plane, 882 945 .add_stream_to_ctx = dce100_add_stream_to_ctx, 883 - .validate_global = dce60_validate_global, 946 + .validate_global = dce100_validate_global, 884 947 .find_first_free_match_stream_enc_for_link = dce100_find_first_free_match_stream_enc_for_link 885 948 }; 886 949
+3 -57
drivers/gpu/drm/amd/display/dc/resource/dce80/dce80_resource.c
··· 32 32 #include "stream_encoder.h" 33 33 34 34 #include "resource.h" 35 + #include "clk_mgr.h" 35 36 #include "include/irq_service_interface.h" 36 37 #include "irq/dce80/irq_service_dce80.h" 37 38 #include "dce110/dce110_timing_generator.h" ··· 870 869 } 871 870 } 872 871 873 - static enum dc_status dce80_validate_bandwidth( 874 - struct dc *dc, 875 - struct dc_state *context, 876 - enum dc_validate_mode validate_mode) 877 - { 878 - int i; 879 - bool at_least_one_pipe = false; 880 - 881 - for (i = 0; i < dc->res_pool->pipe_count; i++) { 882 - if (context->res_ctx.pipe_ctx[i].stream) 883 - at_least_one_pipe = true; 884 - } 885 - 886 - if (at_least_one_pipe) { 887 - /* TODO implement when needed but for now hardcode max value*/ 888 - context->bw_ctx.bw.dce.dispclk_khz = 681000; 889 - context->bw_ctx.bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; 890 - } else { 891 - context->bw_ctx.bw.dce.dispclk_khz = 0; 892 - context->bw_ctx.bw.dce.yclk_khz = 0; 893 - } 894 - 895 - return DC_OK; 896 - } 897 - 898 - static bool dce80_validate_surface_sets( 899 - struct dc_state *context) 900 - { 901 - int i; 902 - 903 - for (i = 0; i < context->stream_count; i++) { 904 - if (context->stream_status[i].plane_count == 0) 905 - continue; 906 - 907 - if (context->stream_status[i].plane_count > 1) 908 - return false; 909 - 910 - if (context->stream_status[i].plane_states[0]->format 911 - >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) 912 - return false; 913 - } 914 - 915 - return true; 916 - } 917 - 918 - static enum dc_status dce80_validate_global( 919 - struct dc *dc, 920 - struct dc_state *context) 921 - { 922 - if (!dce80_validate_surface_sets(context)) 923 - return DC_FAIL_SURFACE_VALIDATE; 924 - 925 - return DC_OK; 926 - } 927 - 928 872 static void dce80_destroy_resource_pool(struct resource_pool **pool) 929 873 { 930 874 struct dce110_resource_pool *dce110_pool = TO_DCE110_RES_POOL(*pool); ··· 883 937 .destroy = dce80_destroy_resource_pool, 884 938 .link_enc_create = dce80_link_encoder_create, 885 939 .panel_cntl_create = dce80_panel_cntl_create, 886 - .validate_bandwidth = dce80_validate_bandwidth, 940 + .validate_bandwidth = dce100_validate_bandwidth, 887 941 .validate_plane = dce100_validate_plane, 888 942 .add_stream_to_ctx = dce100_add_stream_to_ctx, 889 - .validate_global = dce80_validate_global, 943 + .validate_global = dce100_validate_global, 890 944 .find_first_free_match_stream_enc_for_link = dce100_find_first_free_match_stream_enc_for_link 891 945 }; 892 946
+1
drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
···
927 927 .enable_legacy_fast_update = true,
928 928 .using_dml2 = false,
929 929 .disable_dsc_power_gate = true,
930 + .min_disp_clk_khz = 100000,
930 931 };
931 932
932 933 static const struct dc_panel_config panel_config_defaults = {
+1
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
···
739 739 .fpo_vactive_min_active_margin_us = 200,
740 740 .fpo_vactive_max_blank_us = 1000,
741 741 .enable_legacy_fast_update = false,
742 + .disable_stutter_for_wm_program = true
742 743 };
743 744
744 745 static struct dce_aux *dcn32_aux_engine_create(
+2 -1
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.h
···
1230 1230 SR(DCHUBBUB_ARB_MALL_CNTL), \
1231 1231 SR(DCN_VM_FAULT_ADDR_MSB), SR(DCN_VM_FAULT_ADDR_LSB), \
1232 1232 SR(DCN_VM_FAULT_CNTL), SR(DCN_VM_FAULT_STATUS), \
1233 - SR(SDPIF_REQUEST_RATE_LIMIT)
1233 + SR(SDPIF_REQUEST_RATE_LIMIT), \
1234 + SR(DCHUBBUB_SDPIF_CFG0)
1234 1235
1235 1236 /* DCCG */
1236 1237
+5 -1
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
···
4143 4143 */
4144 4144 uint8_t hpo_link_enc_inst;
4145 4145 /**
4146 + * Determines if fast resync in ultra sleep mode is enabled/disabled.
4147 + */
4148 + uint8_t replay_support_fast_resync_in_ultra_sleep_mode;
4149 + /**
4146 4150 * @pad: Align structure to 4 byte boundary.
4147 4151 */
4148 - uint8_t pad[2];
4152 + uint8_t pad[1];
4149 4153 };
4150 4154
4151 4155
+2 -1
drivers/gpu/drm/amd/include/mes_v11_api_def.h
···
238 238 uint32_t enable_mes_sch_stb_log : 1;
239 239 uint32_t limit_single_process : 1;
240 240 uint32_t is_strix_tmz_wa_enabled :1;
241 - uint32_t reserved : 13;
241 + uint32_t enable_lr_compute_wa : 1;
242 + uint32_t reserved : 12;
242 243 };
243 244 uint32_t uint32_t_all;
244 245 };
+2 -1
drivers/gpu/drm/amd/include/mes_v12_api_def.h
···
287 287 uint32_t limit_single_process : 1;
288 288 uint32_t unmapped_doorbell_handling: 2;
289 289 uint32_t enable_mes_fence_int: 1;
290 - uint32_t reserved : 10;
290 + uint32_t enable_lr_compute_wa : 1;
291 + uint32_t reserved : 9;
291 292 };
292 293 uint32_t uint32_all;
293 294 };
drivers/gpu/drm/amd/pm/inc/smu_v13_0_0_pptable.h → drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_0_pptable.h
+2 -1
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_12_ppsmc.h
···
120 120 #define PPSMC_MSG_GetBadPageSeverity 0x5B
121 121 #define PPSMC_MSG_GetSystemMetricsTable 0x5C
122 122 #define PPSMC_MSG_GetSystemMetricsVersion 0x5D
123 - #define PPSMC_Message_Count 0x5E
123 + #define PPSMC_MSG_ResetVCN 0x5E
124 + #define PPSMC_Message_Count 0x5F
124 125
125 126 //PPSMC Reset Types for driver msg argument
126 127 #define PPSMC_RESET_TYPE_DRIVER_MODE_1_RESET 0x1
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_12_ppt.c
···
136 136 MSG_MAP(RmaDueToBadPageThreshold, PPSMC_MSG_RmaDueToBadPageThreshold, 0),
137 137 MSG_MAP(SetThrottlingPolicy, PPSMC_MSG_SetThrottlingPolicy, 0),
138 138 MSG_MAP(ResetSDMA, PPSMC_MSG_ResetSDMA, 0),
139 + MSG_MAP(ResetVCN, PPSMC_MSG_ResetVCN, 0),
139 140 MSG_MAP(GetStaticMetricsTable, PPSMC_MSG_GetStaticMetricsTable, 1),
140 141 MSG_MAP(GetSystemMetricsTable, PPSMC_MSG_GetSystemMetricsTable, 1),
141 142 };
+3
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
···
353 353 smu_v13_0_6_cap_set(smu, SMU_CAP(PLDM_VERSION));
354 354 }
355 355
356 + if (fw_ver > 0x04560900)
357 + smu_v13_0_6_cap_set(smu, SMU_CAP(VCN_RESET));
358 +
356 359 if (fw_ver >= 0x04560700) {
357 360 if (fw_ver >= 0x04560900) {
358 361 smu_v13_0_6_cap_set(smu, SMU_CAP(TEMP_METRICS));
drivers/gpu/drm/amd/ras/rascore/Makefile
+37
drivers/gpu/drm/amd/ras/rascore/ras_core_status.h
···
1 + /* SPDX-License-Identifier: MIT */
2 + /*
3 + * Copyright 2025 Advanced Micro Devices, Inc.
4 + *
5 + * Permission is hereby granted, free of charge, to any person obtaining a
6 + * copy of this software and associated documentation files (the "Software"),
7 + * to deal in the Software without restriction, including without limitation
8 + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
9 + * and/or sell copies of the Software, and to permit persons to whom the
10 + * Software is furnished to do so, subject to the following conditions:
11 + *
12 + * The above copyright notice and this permission notice shall be included in
13 + * all copies or substantial portions of the Software.
14 + *
15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
18 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
19 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
20 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
21 + * OTHER DEALINGS IN THE SOFTWARE.
22 + *
23 + */
24 +
25 + #ifndef __RAS_CORE_STATUS_H__
26 + #define __RAS_CORE_STATUS_H__
27 +
28 + #define RAS_CORE_OK 0
29 + #define RAS_CORE_NOT_SUPPORTED 248
30 + #define RAS_CORE_FAIL_ERROR_QUERY 249
31 + #define RAS_CORE_FAIL_ERROR_INJECTION 250
32 + #define RAS_CORE_FAIL_FATAL_RECOVERY 251
33 + #define RAS_CORE_FAIL_POISON_CONSUMPTION 252
34 + #define RAS_CORE_FAIL_POISON_CREATION 253
35 + #define RAS_CORE_FAIL_NO_VALID_BANKS 254
36 + #define RAS_CORE_GPU_IN_MODE1_RESET 255
37 + #endif