Merge tag 'drm-misc-next-2025-10-21' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.19:

UAPI Changes:

amdxdna:
- Support reading last hardware error

Cross-subsystem Changes:

dma-buf:
- heaps: Create heap per CMA reserved location; Improve user-space documentation

Core Changes:

atomic:
- Clean up and improve state-handling interfaces, update drivers

bridge:
- Improve ref counting

buddy:
- Optimize block management

Driver Changes:

amdxdna:
- Fix runtime power management
- Support firmware debug output

ast:
- Set quirks for each chip model

atmel-hlcdc:
- Set LCDC_ATTRE register in plane disable
- Set correct values for plane scaler

bochs:
- Use vblank timer

bridge:
- synopsys: Support CEC; Init timer with correct frequency

cirrus-qemu:
- Use vblank timer

imx:
- Clean up

ivpu:
- Update JSM API to 3.33.0
- Reset engine on more job errors
- Return correct error codes for jobs

komeda:
- Use drm_ logging functions

panel:
- edp: Support AUO B116XAN02.0

panfrost:
- Embed struct drm_driver in Panfrost device
- Improve error handling
- Clean up job handling

panthor:
- Support custom ASN_HASH for mt8196

renesas:
- rz-du: Fix dependencies

rockchip:
- dsi: Add support for RK3368
- Fix LUT size for RK3386

sitronix:
- Fix output position when clearing screens

qaic:
- Support dma-buf exports
- Support new firmware's READ_DATA implementation
- Replace kcalloc with memdup
- Replace snprintf() with sysfs_emit()
- Avoid overflows in arithmetic
- Clean up
- Fixes

qxl:
- Use vblank timer

rockchip:
- Clean up mode-setting code

vgem:
- Fix fence timer deadlock

virtgpu:
- Use vblank timer

Signed-off-by: Simona Vetter <simona.vetter@ffwll.ch>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20251021111837.GA40643@linux.fritz.box

+2496 -1116
+2
Documentation/devicetree/bindings/display/rockchip/rockchip,dw-mipi-dsi.yaml
··· 17 17 - rockchip,px30-mipi-dsi 18 18 - rockchip,rk3128-mipi-dsi 19 19 - rockchip,rk3288-mipi-dsi 20 + - rockchip,rk3368-mipi-dsi 20 21 - rockchip,rk3399-mipi-dsi 21 22 - rockchip,rk3568-mipi-dsi 22 23 - rockchip,rv1126-mipi-dsi ··· 74 73 enum: 75 74 - rockchip,px30-mipi-dsi 76 75 - rockchip,rk3128-mipi-dsi 76 + - rockchip,rk3368-mipi-dsi 77 77 - rockchip,rk3568-mipi-dsi 78 78 - rockchip,rv1126-mipi-dsi 79 79
+3 -1
Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
··· 18 18 oneOf: 19 19 - items: 20 20 - enum: 21 + - mediatek,mt8196-mali 21 22 - rockchip,rk3588-mali 22 23 - const: arm,mali-valhall-csf # Mali Valhall GPU model/revision is fully discoverable 23 24 ··· 92 91 - interrupts 93 92 - interrupt-names 94 93 - clocks 95 - - mali-supply 96 94 97 95 additionalProperties: false 98 96 ··· 108 108 power-domains: 109 109 maxItems: 1 110 110 power-domain-names: false 111 + required: 112 + - mali-supply 111 113 112 114 examples: 113 115 - |
+49 -10
Documentation/userspace-api/dma-buf-heaps.rst
··· 16 16 17 17 - The ``system`` heap allocates virtually contiguous, cacheable, buffers. 18 18 19 - - The ``cma`` heap allocates physically contiguous, cacheable, 20 - buffers. Only present if a CMA region is present. Such a region is 21 - usually created either through the kernel commandline through the 22 - ``cma`` parameter, a memory region Device-Tree node with the 23 - ``linux,cma-default`` property set, or through the ``CMA_SIZE_MBYTES`` or 24 - ``CMA_SIZE_PERCENTAGE`` Kconfig options. The heap's name in devtmpfs is 25 - ``default_cma_region``. For backwards compatibility, when the 26 - ``DMABUF_HEAPS_CMA_LEGACY`` Kconfig option is set, a duplicate node is 27 - created following legacy naming conventions; the legacy name might be 28 - ``reserved``, ``linux,cma``, or ``default-pool``. 19 + - The ``default_cma_region`` heap allocates physically contiguous, 20 + cacheable, buffers. Only present if a CMA region is present. Such a 21 + region is usually created either through the kernel commandline 22 + through the ``cma`` parameter, a memory region Device-Tree node with 23 + the ``linux,cma-default`` property set, or through the 24 + ``CMA_SIZE_MBYTES`` or ``CMA_SIZE_PERCENTAGE`` Kconfig options. Prior 25 + to Linux 6.17, its name wasn't stable and could be called 26 + ``reserved``, ``linux,cma``, or ``default-pool``, depending on the 27 + platform. 28 + 29 + - A heap will be created for each reusable region in the device tree 30 + with the ``shared-dma-pool`` compatible, using the full device tree 31 + node name as its name. The buffer semantics are identical to 32 + ``default-cma-region``. 33 + 34 + Naming Convention 35 + ================= 36 + 37 + ``dma-buf`` heaps name should meet a number of constraints: 38 + 39 + - The name must be stable, and must not change from one version to the other. 40 + Userspace identifies heaps by their name, so if the names ever change, we 41 + would be likely to introduce regressions. 
42 + 43 + - The name must describe the memory region the heap will allocate from, and 44 + must uniquely identify it in a given platform. Since userspace applications 45 + use the heap name as the discriminant, it must be able to tell which heap it 46 + wants to use reliably if there's multiple heaps. 47 + 48 + - The name must not mention implementation details, such as the allocator. The 49 + heap driver will change over time, and implementation details when it was 50 + introduced might not be relevant in the future. 51 + 52 + - The name should describe properties of the buffers that would be allocated. 53 + Doing so will make heap identification easier for userspace. Such properties 54 + are: 55 + 56 + - ``contiguous`` for physically contiguous buffers; 57 + 58 + - ``protected`` for encrypted buffers not accessible the OS; 59 + 60 + - The name may describe intended usage. Doing so will make heap identification 61 + easier for userspace applications and users. 62 + 63 + For example, assuming a platform with a reserved memory region located 64 + at the RAM address 0x42000000, intended to allocate video framebuffers, 65 + physically contiguous, and backed by the CMA kernel allocator, good 66 + names would be ``memory@42000000-contiguous`` or ``video@42000000``, but 67 + ``cma-video`` wouldn't.
+7 -6
MAINTAINERS
··· 2092 2092 ARM MALI PANFROST DRM DRIVER 2093 2093 M: Boris Brezillon <boris.brezillon@collabora.com> 2094 2094 M: Rob Herring <robh@kernel.org> 2095 - R: Steven Price <steven.price@arm.com> 2095 + M: Steven Price <steven.price@arm.com> 2096 + M: Adrián Larumbe <adrian.larumbe@collabora.com> 2096 2097 L: dri-devel@lists.freedesktop.org 2097 2098 S: Supported 2098 2099 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git ··· 7310 7309 F: drivers/dma-buf/ 7311 7310 F: include/linux/*fence.h 7312 7311 F: include/linux/dma-buf.h 7312 + F: include/linux/dma-buf/ 7313 7313 F: include/linux/dma-resv.h 7314 7314 K: \bdma_(?:buf|fence|resv)\b 7315 7315 ··· 7639 7637 F: include/drm/drm_accel.h 7640 7638 7641 7639 DRM DRIVER FOR ALLWINNER DE2 AND DE3 ENGINE 7642 - M: Maxime Ripard <mripard@kernel.org> 7643 7640 M: Chen-Yu Tsai <wens@csie.org> 7644 7641 R: Jernej Skrabec <jernej.skrabec@gmail.com> 7645 7642 L: dri-devel@lists.freedesktop.org ··· 7748 7747 F: drivers/gpu/drm/panel/panel-edp.c 7749 7748 7750 7749 DRM DRIVER FOR GENERIC USB DISPLAY 7751 - S: Orphan 7750 + M: Ruben Wauters <rubenru09@aol.com> 7751 + S: Maintained 7752 7752 W: https://github.com/notro/gud/wiki 7753 7753 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 7754 7754 F: drivers/gpu/drm/gud/ ··· 7890 7888 M: Rob Clark <robin.clark@oss.qualcomm.com> 7891 7889 M: Dmitry Baryshkov <lumag@kernel.org> 7892 7890 R: Abhinav Kumar <abhinav.kumar@linux.dev> 7893 - R: Jessica Zhang <jessica.zhang@oss.qualcomm.com> 7891 + R: Jessica Zhang <jesszhan0024@gmail.com> 7894 7892 R: Sean Paul <sean@poorly.run> 7895 7893 R: Marijn Suijten <marijn.suijten@somainline.org> 7896 7894 L: linux-arm-msm@vger.kernel.org ··· 8253 8251 F: rust/kernel/drm/ 8254 8252 8255 8253 DRM DRIVERS FOR ALLWINNER A10 8256 - M: Maxime Ripard <mripard@kernel.org> 8257 8254 M: Chen-Yu Tsai <wens@csie.org> 8258 8255 L: dri-devel@lists.freedesktop.org 8259 8256 S: Supported ··· 8602 8601 8603 8602 DRM PANEL DRIVERS 8604 8603 M: Neil 
Armstrong <neil.armstrong@linaro.org> 8605 - R: Jessica Zhang <jessica.zhang@oss.qualcomm.com> 8604 + R: Jessica Zhang <jesszhan0024@gmail.com> 8606 8605 L: dri-devel@lists.freedesktop.org 8607 8606 S: Maintained 8608 8607 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
-1
drivers/accel/amdxdna/TODO
··· 1 1 - Add debugfs support 2 - - Add debug BO support
+117 -8
drivers/accel/amdxdna/aie2_ctx.c
··· 226 226 } 227 227 228 228 static int 229 - aie2_sched_nocmd_resp_handler(void *handle, void __iomem *data, size_t size) 229 + aie2_sched_drvcmd_resp_handler(void *handle, void __iomem *data, size_t size) 230 230 { 231 231 struct amdxdna_sched_job *job = handle; 232 232 int ret = 0; 233 - u32 status; 234 233 235 234 if (unlikely(!data)) 236 235 goto out; ··· 239 240 goto out; 240 241 } 241 242 242 - status = readl(data); 243 - XDNA_DBG(job->hwctx->client->xdna, "Resp status 0x%x", status); 243 + job->drv_cmd->result = readl(data); 244 244 245 245 out: 246 246 aie2_sched_notify(job); ··· 312 314 kref_get(&job->refcnt); 313 315 fence = dma_fence_get(job->fence); 314 316 315 - if (unlikely(!cmd_abo)) { 316 - ret = aie2_sync_bo(hwctx, job, aie2_sched_nocmd_resp_handler); 317 + if (job->drv_cmd) { 318 + switch (job->drv_cmd->opcode) { 319 + case SYNC_DEBUG_BO: 320 + ret = aie2_sync_bo(hwctx, job, aie2_sched_drvcmd_resp_handler); 321 + break; 322 + case ATTACH_DEBUG_BO: 323 + ret = aie2_config_debug_bo(hwctx, job, aie2_sched_drvcmd_resp_handler); 324 + break; 325 + default: 326 + ret = -EINVAL; 327 + break; 328 + } 317 329 goto out; 318 330 } 319 331 ··· 618 610 goto free_entity; 619 611 } 620 612 613 + ret = amdxdna_pm_resume_get(xdna); 614 + if (ret) 615 + goto free_col_list; 616 + 621 617 ret = aie2_alloc_resource(hwctx); 622 618 if (ret) { 623 619 XDNA_ERR(xdna, "Alloc hw resource failed, ret %d", ret); 624 - goto free_col_list; 620 + goto suspend_put; 625 621 } 626 622 627 623 ret = aie2_map_host_buf(xdna->dev_handle, hwctx->fw_ctx_id, ··· 640 628 XDNA_ERR(xdna, "Create syncobj failed, ret %d", ret); 641 629 goto release_resource; 642 630 } 631 + amdxdna_pm_suspend_put(xdna); 643 632 644 633 hwctx->status = HWCTX_STAT_INIT; 645 634 ndev = xdna->dev_handle; ··· 653 640 654 641 release_resource: 655 642 aie2_release_resource(hwctx); 643 + suspend_put: 644 + amdxdna_pm_suspend_put(xdna); 656 645 free_col_list: 657 646 kfree(hwctx->col_list); 658 647 free_entity: ··· 
774 759 return ret; 775 760 } 776 761 762 + static void aie2_cmd_wait(struct amdxdna_hwctx *hwctx, u64 seq) 763 + { 764 + struct dma_fence *out_fence = aie2_cmd_get_out_fence(hwctx, seq); 765 + 766 + if (!out_fence) { 767 + XDNA_ERR(hwctx->client->xdna, "Failed to get fence"); 768 + return; 769 + } 770 + 771 + dma_fence_wait_timeout(out_fence, false, MAX_SCHEDULE_TIMEOUT); 772 + dma_fence_put(out_fence); 773 + } 774 + 775 + static int aie2_hwctx_cfg_debug_bo(struct amdxdna_hwctx *hwctx, u32 bo_hdl, 776 + bool attach) 777 + { 778 + struct amdxdna_client *client = hwctx->client; 779 + struct amdxdna_dev *xdna = client->xdna; 780 + struct amdxdna_drv_cmd cmd = { 0 }; 781 + struct amdxdna_gem_obj *abo; 782 + u64 seq; 783 + int ret; 784 + 785 + abo = amdxdna_gem_get_obj(client, bo_hdl, AMDXDNA_BO_DEV); 786 + if (!abo) { 787 + XDNA_ERR(xdna, "Get bo %d failed", bo_hdl); 788 + return -EINVAL; 789 + } 790 + 791 + if (attach) { 792 + if (abo->assigned_hwctx != AMDXDNA_INVALID_CTX_HANDLE) { 793 + ret = -EBUSY; 794 + goto put_obj; 795 + } 796 + cmd.opcode = ATTACH_DEBUG_BO; 797 + } else { 798 + if (abo->assigned_hwctx != hwctx->id) { 799 + ret = -EINVAL; 800 + goto put_obj; 801 + } 802 + cmd.opcode = DETACH_DEBUG_BO; 803 + } 804 + 805 + ret = amdxdna_cmd_submit(client, &cmd, AMDXDNA_INVALID_BO_HANDLE, 806 + &bo_hdl, 1, hwctx->id, &seq); 807 + if (ret) { 808 + XDNA_ERR(xdna, "Submit command failed"); 809 + goto put_obj; 810 + } 811 + 812 + aie2_cmd_wait(hwctx, seq); 813 + if (cmd.result) { 814 + XDNA_ERR(xdna, "Response failure 0x%x", cmd.result); 815 + goto put_obj; 816 + } 817 + 818 + if (attach) 819 + abo->assigned_hwctx = hwctx->id; 820 + else 821 + abo->assigned_hwctx = AMDXDNA_INVALID_CTX_HANDLE; 822 + 823 + XDNA_DBG(xdna, "Config debug BO %d to %s", bo_hdl, hwctx->name); 824 + 825 + put_obj: 826 + amdxdna_gem_put_obj(abo); 827 + return ret; 828 + } 829 + 777 830 int aie2_hwctx_config(struct amdxdna_hwctx *hwctx, u32 type, u64 value, void *buf, u32 size) 778 831 { 779 
832 struct amdxdna_dev *xdna = hwctx->client->xdna; ··· 851 768 case DRM_AMDXDNA_HWCTX_CONFIG_CU: 852 769 return aie2_hwctx_cu_config(hwctx, buf, size); 853 770 case DRM_AMDXDNA_HWCTX_ASSIGN_DBG_BUF: 771 + return aie2_hwctx_cfg_debug_bo(hwctx, (u32)value, true); 854 772 case DRM_AMDXDNA_HWCTX_REMOVE_DBG_BUF: 855 - return -EOPNOTSUPP; 773 + return aie2_hwctx_cfg_debug_bo(hwctx, (u32)value, false); 856 774 default: 857 775 XDNA_DBG(xdna, "Not supported type %d", type); 858 776 return -EOPNOTSUPP; 859 777 } 778 + } 779 + 780 + int aie2_hwctx_sync_debug_bo(struct amdxdna_hwctx *hwctx, u32 debug_bo_hdl) 781 + { 782 + struct amdxdna_client *client = hwctx->client; 783 + struct amdxdna_dev *xdna = client->xdna; 784 + struct amdxdna_drv_cmd cmd = { 0 }; 785 + u64 seq; 786 + int ret; 787 + 788 + cmd.opcode = SYNC_DEBUG_BO; 789 + ret = amdxdna_cmd_submit(client, &cmd, AMDXDNA_INVALID_BO_HANDLE, 790 + &debug_bo_hdl, 1, hwctx->id, &seq); 791 + if (ret) { 792 + XDNA_ERR(xdna, "Submit command failed"); 793 + return ret; 794 + } 795 + 796 + aie2_cmd_wait(hwctx, seq); 797 + if (cmd.result) { 798 + XDNA_ERR(xdna, "Response failure 0x%x", cmd.result); 799 + return ret; 800 + } 801 + 802 + return 0; 860 803 } 861 804 862 805 static int aie2_populate_range(struct amdxdna_gem_obj *abo)
+78 -17
drivers/accel/amdxdna/aie2_error.c
··· 13 13 14 14 #include "aie2_msg_priv.h" 15 15 #include "aie2_pci.h" 16 + #include "amdxdna_error.h" 16 17 #include "amdxdna_mailbox.h" 17 18 #include "amdxdna_pci_drv.h" 18 19 ··· 47 46 AIE_MEM_MOD = 0, 48 47 AIE_CORE_MOD, 49 48 AIE_PL_MOD, 49 + AIE_UNKNOWN_MOD, 50 50 }; 51 51 52 52 enum aie_error_category { ··· 145 143 EVENT_CATEGORY(74U, AIE_ERROR_LOCK), 146 144 }; 147 145 146 + static const enum amdxdna_error_num aie_cat_err_num_map[] = { 147 + [AIE_ERROR_SATURATION] = AMDXDNA_ERROR_NUM_AIE_SATURATION, 148 + [AIE_ERROR_FP] = AMDXDNA_ERROR_NUM_AIE_FP, 149 + [AIE_ERROR_STREAM] = AMDXDNA_ERROR_NUM_AIE_STREAM, 150 + [AIE_ERROR_ACCESS] = AMDXDNA_ERROR_NUM_AIE_ACCESS, 151 + [AIE_ERROR_BUS] = AMDXDNA_ERROR_NUM_AIE_BUS, 152 + [AIE_ERROR_INSTRUCTION] = AMDXDNA_ERROR_NUM_AIE_INSTRUCTION, 153 + [AIE_ERROR_ECC] = AMDXDNA_ERROR_NUM_AIE_ECC, 154 + [AIE_ERROR_LOCK] = AMDXDNA_ERROR_NUM_AIE_LOCK, 155 + [AIE_ERROR_DMA] = AMDXDNA_ERROR_NUM_AIE_DMA, 156 + [AIE_ERROR_MEM_PARITY] = AMDXDNA_ERROR_NUM_AIE_MEM_PARITY, 157 + [AIE_ERROR_UNKNOWN] = AMDXDNA_ERROR_NUM_UNKNOWN, 158 + }; 159 + 160 + static_assert(ARRAY_SIZE(aie_cat_err_num_map) == AIE_ERROR_UNKNOWN + 1); 161 + 162 + static const enum amdxdna_error_module aie_err_mod_map[] = { 163 + [AIE_MEM_MOD] = AMDXDNA_ERROR_MODULE_AIE_MEMORY, 164 + [AIE_CORE_MOD] = AMDXDNA_ERROR_MODULE_AIE_CORE, 165 + [AIE_PL_MOD] = AMDXDNA_ERROR_MODULE_AIE_PL, 166 + [AIE_UNKNOWN_MOD] = AMDXDNA_ERROR_MODULE_UNKNOWN, 167 + }; 168 + 169 + static_assert(ARRAY_SIZE(aie_err_mod_map) == AIE_UNKNOWN_MOD + 1); 170 + 148 171 static enum aie_error_category 149 172 aie_get_error_category(u8 row, u8 event_id, enum aie_module_type mod_type) 150 173 { ··· 203 176 if (event_id != lut[i].event_id) 204 177 continue; 205 178 179 + if (lut[i].category > AIE_ERROR_UNKNOWN) 180 + return AIE_ERROR_UNKNOWN; 181 + 206 182 return lut[i].category; 207 183 } 208 184 209 185 return AIE_ERROR_UNKNOWN; 186 + } 187 + 188 + static void aie2_update_last_async_error(struct 
amdxdna_dev_hdl *ndev, void *err_info, u32 num_err) 189 + { 190 + struct aie_error *errs = err_info; 191 + enum amdxdna_error_module err_mod; 192 + enum aie_error_category aie_err; 193 + enum amdxdna_error_num err_num; 194 + struct aie_error *last_err; 195 + 196 + last_err = &errs[num_err - 1]; 197 + if (last_err->mod_type >= AIE_UNKNOWN_MOD) { 198 + err_num = aie_cat_err_num_map[AIE_ERROR_UNKNOWN]; 199 + err_mod = aie_err_mod_map[AIE_UNKNOWN_MOD]; 200 + } else { 201 + aie_err = aie_get_error_category(last_err->row, 202 + last_err->event_id, 203 + last_err->mod_type); 204 + err_num = aie_cat_err_num_map[aie_err]; 205 + err_mod = aie_err_mod_map[last_err->mod_type]; 206 + } 207 + 208 + ndev->last_async_err.err_code = AMDXDNA_ERROR_ENCODE(err_num, err_mod); 209 + ndev->last_async_err.ts_us = ktime_to_us(ktime_get_real()); 210 + ndev->last_async_err.ex_err_code = AMDXDNA_EXTRA_ERR_ENCODE(last_err->row, last_err->col); 210 211 } 211 212 212 213 static u32 aie2_error_backtrack(struct amdxdna_dev_hdl *ndev, void *err_info, u32 num_err) ··· 319 264 } 320 265 321 266 mutex_lock(&xdna->dev_lock); 267 + aie2_update_last_async_error(e->ndev, info->payload, info->err_cnt); 268 + 322 269 /* Re-sent this event to firmware */ 323 270 if (aie2_error_event_send(e)) 324 271 XDNA_WARN(xdna, "Unable to register async event"); 325 272 mutex_unlock(&xdna->dev_lock); 326 - } 327 - 328 - int aie2_error_async_events_send(struct amdxdna_dev_hdl *ndev) 329 - { 330 - struct amdxdna_dev *xdna = ndev->xdna; 331 - struct async_event *e; 332 - int i, ret; 333 - 334 - drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock)); 335 - for (i = 0; i < ndev->async_events->event_cnt; i++) { 336 - e = &ndev->async_events->event[i]; 337 - ret = aie2_error_event_send(e); 338 - if (ret) 339 - return ret; 340 - } 341 - 342 - return 0; 343 273 } 344 274 345 275 void aie2_error_async_events_free(struct amdxdna_dev_hdl *ndev) ··· 381 341 e->size = ASYNC_BUF_SIZE; 382 342 e->resp.status = MAX_AIE2_STATUS_CODE; 
383 343 INIT_WORK(&e->work, aie2_error_worker); 344 + 345 + ret = aie2_error_event_send(e); 346 + if (ret) 347 + goto free_wq; 384 348 } 385 349 386 350 ndev->async_events = events; ··· 393 349 events->event_cnt, events->size); 394 350 return 0; 395 351 352 + free_wq: 353 + destroy_workqueue(events->wq); 396 354 free_buf: 397 355 dma_free_noncoherent(xdna->ddev.dev, events->size, events->buf, 398 356 events->addr, DMA_FROM_DEVICE); 399 357 free_events: 400 358 kfree(events); 401 359 return ret; 360 + } 361 + 362 + int aie2_get_array_async_error(struct amdxdna_dev_hdl *ndev, struct amdxdna_drm_get_array *args) 363 + { 364 + struct amdxdna_dev *xdna = ndev->xdna; 365 + 366 + drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock)); 367 + 368 + args->num_element = 1; 369 + args->element_size = sizeof(ndev->last_async_err); 370 + if (copy_to_user(u64_to_user_ptr(args->buffer), 371 + &ndev->last_async_err, args->element_size)) 372 + return -EFAULT; 373 + 374 + return 0; 402 375 }
+30 -1
drivers/accel/amdxdna/aie2_message.c
··· 749 749 int ret = 0; 750 750 751 751 req.src_addr = 0; 752 - req.dst_addr = abo->mem.dev_addr - hwctx->client->dev_heap->mem.dev_addr; 752 + req.dst_addr = amdxdna_dev_bo_offset(abo); 753 753 req.size = abo->mem.size; 754 754 755 755 /* Device to Host */ ··· 772 772 } 773 773 774 774 return 0; 775 + } 776 + 777 + int aie2_config_debug_bo(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, 778 + int (*notify_cb)(void *, void __iomem *, size_t)) 779 + { 780 + struct mailbox_channel *chann = hwctx->priv->mbox_chann; 781 + struct amdxdna_gem_obj *abo = to_xdna_obj(job->bos[0]); 782 + struct amdxdna_dev *xdna = hwctx->client->xdna; 783 + struct config_debug_bo_req req; 784 + struct xdna_mailbox_msg msg; 785 + 786 + if (job->drv_cmd->opcode == ATTACH_DEBUG_BO) 787 + req.config = DEBUG_BO_REGISTER; 788 + else 789 + req.config = DEBUG_BO_UNREGISTER; 790 + 791 + req.offset = amdxdna_dev_bo_offset(abo); 792 + req.size = abo->mem.size; 793 + 794 + XDNA_DBG(xdna, "offset 0x%llx size 0x%llx config %d", 795 + req.offset, req.size, req.config); 796 + 797 + msg.handle = job; 798 + msg.notify_cb = notify_cb; 799 + msg.send_data = (u8 *)&req; 800 + msg.send_size = sizeof(req); 801 + msg.opcode = MSG_OP_CONFIG_DEBUG_BO; 802 + 803 + return xdna_mailbox_send_msg(chann, &msg, TX_TIMEOUT); 775 804 }
+18
drivers/accel/amdxdna/aie2_msg_priv.h
··· 18 18 MSG_OP_CONFIG_CU = 0x11, 19 19 MSG_OP_CHAIN_EXEC_BUFFER_CF = 0x12, 20 20 MSG_OP_CHAIN_EXEC_DPU = 0x13, 21 + MSG_OP_CONFIG_DEBUG_BO = 0x14, 21 22 MSG_OP_MAX_XRT_OPCODE, 22 23 MSG_OP_SUSPEND = 0x101, 23 24 MSG_OP_RESUME = 0x102, ··· 364 363 } __packed; 365 364 366 365 struct sync_bo_resp { 366 + enum aie2_msg_status status; 367 + } __packed; 368 + 369 + #define DEBUG_BO_UNREGISTER 0 370 + #define DEBUG_BO_REGISTER 1 371 + struct config_debug_bo_req { 372 + __u64 offset; 373 + __u64 size; 374 + /* 375 + * config operations. 376 + * DEBUG_BO_REGISTER: Register debug buffer 377 + * DEBUG_BO_UNREGISTER: Unregister debug buffer 378 + */ 379 + __u32 config; 380 + } __packed; 381 + 382 + struct config_debug_bo_resp { 367 383 enum aie2_msg_status status; 368 384 } __packed; 369 385 #endif /* _AIE2_MSG_PRIV_H_ */
+4
drivers/accel/amdxdna/aie2_pci.c
··· 924 924 case DRM_AMDXDNA_HW_CONTEXT_ALL: 925 925 ret = aie2_query_ctx_status_array(client, args); 926 926 break; 927 + case DRM_AMDXDNA_HW_LAST_ASYNC_ERR: 928 + ret = aie2_get_array_async_error(xdna->dev_handle, args); 929 + break; 927 930 default: 928 931 XDNA_ERR(xdna, "Not supported request parameter %u", args->param); 929 932 ret = -EOPNOTSUPP; ··· 1004 1001 .hwctx_init = aie2_hwctx_init, 1005 1002 .hwctx_fini = aie2_hwctx_fini, 1006 1003 .hwctx_config = aie2_hwctx_config, 1004 + .hwctx_sync_debug_bo = aie2_hwctx_sync_debug_bo, 1007 1005 .cmd_submit = aie2_cmd_submit, 1008 1006 .hmm_invalidate = aie2_hmm_invalidate, 1009 1007 .get_array = aie2_get_array,
+7 -1
drivers/accel/amdxdna/aie2_pci.h
··· 190 190 191 191 enum aie2_dev_status dev_status; 192 192 u32 hwctx_num; 193 + 194 + struct amdxdna_async_error last_async_err; 193 195 }; 194 196 195 197 #define DEFINE_BAR_OFFSET(reg_name, bar, reg_addr) \ ··· 255 253 /* aie2_error.c */ 256 254 int aie2_error_async_events_alloc(struct amdxdna_dev_hdl *ndev); 257 255 void aie2_error_async_events_free(struct amdxdna_dev_hdl *ndev); 258 - int aie2_error_async_events_send(struct amdxdna_dev_hdl *ndev); 259 256 int aie2_error_async_msg_thread(void *data); 257 + int aie2_get_array_async_error(struct amdxdna_dev_hdl *ndev, 258 + struct amdxdna_drm_get_array *args); 260 259 261 260 /* aie2_message.c */ 262 261 int aie2_suspend_fw(struct amdxdna_dev_hdl *ndev); ··· 287 284 int (*notify_cb)(void *, void __iomem *, size_t)); 288 285 int aie2_sync_bo(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, 289 286 int (*notify_cb)(void *, void __iomem *, size_t)); 287 + int aie2_config_debug_bo(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, 288 + int (*notify_cb)(void *, void __iomem *, size_t)); 290 289 291 290 /* aie2_hwctx.c */ 292 291 int aie2_hwctx_init(struct amdxdna_hwctx *hwctx); 293 292 void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx); 294 293 int aie2_hwctx_config(struct amdxdna_hwctx *hwctx, u32 type, u64 value, void *buf, u32 size); 294 + int aie2_hwctx_sync_debug_bo(struct amdxdna_hwctx *hwctx, u32 debug_bo_hdl); 295 295 void aie2_hwctx_suspend(struct amdxdna_client *client); 296 296 int aie2_hwctx_resume(struct amdxdna_client *client); 297 297 int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, u64 *seq);
+36 -3
drivers/accel/amdxdna/amdxdna_ctx.c
··· 328 328 return ret; 329 329 } 330 330 331 + int amdxdna_hwctx_sync_debug_bo(struct amdxdna_client *client, u32 debug_bo_hdl) 332 + { 333 + struct amdxdna_dev *xdna = client->xdna; 334 + struct amdxdna_hwctx *hwctx; 335 + struct amdxdna_gem_obj *abo; 336 + struct drm_gem_object *gobj; 337 + int ret, idx; 338 + 339 + if (!xdna->dev_info->ops->hwctx_sync_debug_bo) 340 + return -EOPNOTSUPP; 341 + 342 + gobj = drm_gem_object_lookup(client->filp, debug_bo_hdl); 343 + if (!gobj) 344 + return -EINVAL; 345 + 346 + abo = to_xdna_obj(gobj); 347 + guard(mutex)(&xdna->dev_lock); 348 + idx = srcu_read_lock(&client->hwctx_srcu); 349 + hwctx = xa_load(&client->hwctx_xa, abo->assigned_hwctx); 350 + if (!hwctx) { 351 + ret = -EINVAL; 352 + goto unlock_srcu; 353 + } 354 + 355 + ret = xdna->dev_info->ops->hwctx_sync_debug_bo(hwctx, debug_bo_hdl); 356 + 357 + unlock_srcu: 358 + srcu_read_unlock(&client->hwctx_srcu, idx); 359 + drm_gem_object_put(gobj); 360 + return ret; 361 + } 362 + 331 363 static void 332 364 amdxdna_arg_bos_put(struct amdxdna_sched_job *job) 333 365 { ··· 425 393 } 426 394 427 395 int amdxdna_cmd_submit(struct amdxdna_client *client, 396 + struct amdxdna_drv_cmd *drv_cmd, 428 397 u32 cmd_bo_hdl, u32 *arg_bo_hdls, u32 arg_bo_cnt, 429 398 u32 hwctx_hdl, u64 *seq) 430 399 { ··· 439 406 if (!job) 440 407 return -ENOMEM; 441 408 409 + job->drv_cmd = drv_cmd; 410 + 442 411 if (cmd_bo_hdl != AMDXDNA_INVALID_BO_HANDLE) { 443 412 job->cmd_bo = amdxdna_gem_get_obj(client, cmd_bo_hdl, AMDXDNA_BO_CMD); 444 413 if (!job->cmd_bo) { ··· 448 413 ret = -EINVAL; 449 414 goto free_job; 450 415 } 451 - } else { 452 - job->cmd_bo = NULL; 453 416 } 454 417 455 418 ret = amdxdna_arg_bos_lookup(client, job, arg_bo_hdls, arg_bo_cnt); ··· 541 508 } 542 509 } 543 510 544 - ret = amdxdna_cmd_submit(client, cmd_bo_hdl, arg_bo_hdls, 511 + ret = amdxdna_cmd_submit(client, NULL, cmd_bo_hdl, arg_bo_hdls, 545 512 args->arg_count, args->hwctx, &args->seq); 546 513 if (ret) 547 514 XDNA_DBG(xdna, 
"Submit cmds failed, ret %d", ret);
+15 -1
drivers/accel/amdxdna/amdxdna_ctx.h
··· 95 95 #define drm_job_to_xdna_job(j) \ 96 96 container_of(j, struct amdxdna_sched_job, base) 97 97 98 + enum amdxdna_job_opcode { 99 + SYNC_DEBUG_BO, 100 + ATTACH_DEBUG_BO, 101 + DETACH_DEBUG_BO, 102 + }; 103 + 104 + struct amdxdna_drv_cmd { 105 + enum amdxdna_job_opcode opcode; 106 + u32 result; 107 + }; 108 + 98 109 struct amdxdna_sched_job { 99 110 struct drm_sched_job base; 100 111 struct kref refcnt; ··· 117 106 struct dma_fence *out_fence; 118 107 bool job_done; 119 108 u64 seq; 109 + struct amdxdna_drv_cmd *drv_cmd; 120 110 struct amdxdna_gem_obj *cmd_bo; 121 111 size_t bo_cnt; 122 112 struct drm_gem_object *bos[] __counted_by(bo_cnt); ··· 155 143 void amdxdna_hwctx_remove_all(struct amdxdna_client *client); 156 144 int amdxdna_hwctx_walk(struct amdxdna_client *client, void *arg, 157 145 int (*walk)(struct amdxdna_hwctx *hwctx, void *arg)); 146 + int amdxdna_hwctx_sync_debug_bo(struct amdxdna_client *client, u32 debug_bo_hdl); 158 147 159 148 int amdxdna_cmd_submit(struct amdxdna_client *client, 160 - u32 cmd_bo_hdls, u32 *arg_bo_hdls, u32 arg_bo_cnt, 149 + struct amdxdna_drv_cmd *drv_cmd, u32 cmd_bo_hdls, 150 + u32 *arg_bo_hdls, u32 arg_bo_cnt, 161 151 u32 hwctx_hdl, u64 *seq); 162 152 163 153 int amdxdna_cmd_wait(struct amdxdna_client *client, u32 hwctx_hdl,
+59
drivers/accel/amdxdna/amdxdna_error.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2025, Advanced Micro Devices, Inc. 4 + */ 5 + 6 + #ifndef _AMDXDNA_ERROR_H_ 7 + #define _AMDXDNA_ERROR_H_ 8 + 9 + #include <linux/bitfield.h> 10 + #include <linux/bits.h> 11 + 12 + #define AMDXDNA_ERR_DRV_AIE 4 13 + #define AMDXDNA_ERR_SEV_CRITICAL 3 14 + #define AMDXDNA_ERR_CLASS_AIE 2 15 + 16 + #define AMDXDNA_ERR_NUM_MASK GENMASK_U64(15, 0) 17 + #define AMDXDNA_ERR_DRV_MASK GENMASK_U64(23, 16) 18 + #define AMDXDNA_ERR_SEV_MASK GENMASK_U64(31, 24) 19 + #define AMDXDNA_ERR_MOD_MASK GENMASK_U64(39, 32) 20 + #define AMDXDNA_ERR_CLASS_MASK GENMASK_U64(47, 40) 21 + 22 + enum amdxdna_error_num { 23 + AMDXDNA_ERROR_NUM_AIE_SATURATION = 3, 24 + AMDXDNA_ERROR_NUM_AIE_FP, 25 + AMDXDNA_ERROR_NUM_AIE_STREAM, 26 + AMDXDNA_ERROR_NUM_AIE_ACCESS, 27 + AMDXDNA_ERROR_NUM_AIE_BUS, 28 + AMDXDNA_ERROR_NUM_AIE_INSTRUCTION, 29 + AMDXDNA_ERROR_NUM_AIE_ECC, 30 + AMDXDNA_ERROR_NUM_AIE_LOCK, 31 + AMDXDNA_ERROR_NUM_AIE_DMA, 32 + AMDXDNA_ERROR_NUM_AIE_MEM_PARITY, 33 + AMDXDNA_ERROR_NUM_UNKNOWN = 15, 34 + }; 35 + 36 + enum amdxdna_error_module { 37 + AMDXDNA_ERROR_MODULE_AIE_CORE = 3, 38 + AMDXDNA_ERROR_MODULE_AIE_MEMORY, 39 + AMDXDNA_ERROR_MODULE_AIE_SHIM, 40 + AMDXDNA_ERROR_MODULE_AIE_NOC, 41 + AMDXDNA_ERROR_MODULE_AIE_PL, 42 + AMDXDNA_ERROR_MODULE_UNKNOWN = 8, 43 + }; 44 + 45 + #define AMDXDNA_ERROR_ENCODE(err_num, err_mod) \ 46 + (FIELD_PREP(AMDXDNA_ERR_NUM_MASK, err_num) | \ 47 + FIELD_PREP_CONST(AMDXDNA_ERR_DRV_MASK, AMDXDNA_ERR_DRV_AIE) | \ 48 + FIELD_PREP_CONST(AMDXDNA_ERR_SEV_MASK, AMDXDNA_ERR_SEV_CRITICAL) | \ 49 + FIELD_PREP(AMDXDNA_ERR_MOD_MASK, err_mod) | \ 50 + FIELD_PREP_CONST(AMDXDNA_ERR_CLASS_MASK, AMDXDNA_ERR_CLASS_AIE)) 51 + 52 + #define AMDXDNA_EXTRA_ERR_COL_MASK GENMASK_U64(7, 0) 53 + #define AMDXDNA_EXTRA_ERR_ROW_MASK GENMASK_U64(15, 8) 54 + 55 + #define AMDXDNA_EXTRA_ERR_ENCODE(row, col) \ 56 + (FIELD_PREP(AMDXDNA_EXTRA_ERR_COL_MASK, col) | \ 57 + FIELD_PREP(AMDXDNA_EXTRA_ERR_ROW_MASK, row)) 58 + 
59 + #endif /* _AMDXDNA_ERROR_H_ */
+3
drivers/accel/amdxdna/amdxdna_gem.c
··· 962 962 XDNA_DBG(xdna, "Sync bo %d offset 0x%llx, size 0x%llx\n", 963 963 args->handle, args->offset, args->size); 964 964 965 + if (args->direction == SYNC_DIRECT_FROM_DEVICE) 966 + ret = amdxdna_hwctx_sync_debug_bo(abo->client, args->handle); 967 + 965 968 put_obj: 966 969 drm_gem_object_put(gobj); 967 970 return ret;
+6
drivers/accel/amdxdna/amdxdna_gem.h
··· 7 7 #define _AMDXDNA_GEM_H_ 8 8 9 9 #include <linux/hmm.h> 10 + #include "amdxdna_pci_drv.h" 10 11 11 12 struct amdxdna_umap { 12 13 struct vm_area_struct *vma; ··· 61 60 static inline void amdxdna_gem_put_obj(struct amdxdna_gem_obj *abo) 62 61 { 63 62 drm_gem_object_put(to_gobj(abo)); 63 + } 64 + 65 + static inline u64 amdxdna_dev_bo_offset(struct amdxdna_gem_obj *abo) 66 + { 67 + return abo->mem.dev_addr - abo->client->dev_heap->mem.dev_addr; 64 68 } 65 69 66 70 void amdxdna_umap_put(struct amdxdna_umap *mapp);
+3 -1
drivers/accel/amdxdna/amdxdna_pci_drv.c
··· 27 27 /* 28 28 * 0.0: Initial version 29 29 * 0.1: Support getting all hardware contexts by DRM_IOCTL_AMDXDNA_GET_ARRAY 30 + * 0.2: Support getting last error hardware error 31 + * 0.3: Support firmware debug buffer 30 32 */ 31 33 #define AMDXDNA_DRIVER_MAJOR 0 32 - #define AMDXDNA_DRIVER_MINOR 1 34 + #define AMDXDNA_DRIVER_MINOR 3 33 35 34 36 /* 35 37 * Bind the driver base on (vendor_id, device_id) pair and later use the
+1
drivers/accel/amdxdna/amdxdna_pci_drv.h
··· 55 55 int (*hwctx_init)(struct amdxdna_hwctx *hwctx); 56 56 void (*hwctx_fini)(struct amdxdna_hwctx *hwctx); 57 57 int (*hwctx_config)(struct amdxdna_hwctx *hwctx, u32 type, u64 value, void *buf, u32 size); 58 + int (*hwctx_sync_debug_bo)(struct amdxdna_hwctx *hwctx, u32 debug_bo_hdl); 58 59 void (*hmm_invalidate)(struct amdxdna_gem_obj *abo, unsigned long cur_seq); 59 60 int (*cmd_submit)(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, u64 *seq); 60 61 int (*get_aie_info)(struct amdxdna_client *client, struct amdxdna_drm_get_info *args);
+1
drivers/accel/amdxdna/npu1_regs.c
··· 46 46 47 47 const struct rt_config npu1_default_rt_cfg[] = { 48 48 { 2, 1, AIE2_RT_CFG_INIT }, /* PDI APP LOAD MODE */ 49 + { 4, 1, AIE2_RT_CFG_INIT }, /* Debug BO */ 49 50 { 1, 1, AIE2_RT_CFG_CLK_GATING }, /* Clock gating on */ 50 51 { 0 }, 51 52 };
+1
drivers/accel/amdxdna/npu4_regs.c
··· 63 63 64 64 const struct rt_config npu4_default_rt_cfg[] = { 65 65 { 5, 1, AIE2_RT_CFG_INIT }, /* PDI APP LOAD MODE */ 66 + { 10, 1, AIE2_RT_CFG_INIT }, /* DEBUG BUF */ 66 67 { 1, 1, AIE2_RT_CFG_CLK_GATING }, /* Clock gating on */ 67 68 { 2, 1, AIE2_RT_CFG_CLK_GATING }, /* Clock gating on */ 68 69 { 3, 1, AIE2_RT_CFG_CLK_GATING }, /* Clock gating on */
+2 -1
drivers/accel/ivpu/ivpu_gem.c
··· 46 46 47 47 static struct sg_table *ivpu_bo_map_attachment(struct ivpu_device *vdev, struct ivpu_bo *bo) 48 48 { 49 - struct sg_table *sgt = bo->base.sgt; 49 + struct sg_table *sgt; 50 50 51 51 drm_WARN_ON(&vdev->drm, !bo->base.base.import_attach); 52 52 53 53 ivpu_bo_lock(bo); 54 54 55 + sgt = bo->base.sgt; 55 56 if (!sgt) { 56 57 sgt = dma_buf_map_attachment(bo->base.base.import_attach, DMA_BIDIRECTIONAL); 57 58 if (IS_ERR(sgt))
+49 -22
drivers/accel/ivpu/ivpu_job.c
··· 564 564 return job; 565 565 } 566 566 567 + bool ivpu_job_handle_engine_error(struct ivpu_device *vdev, u32 job_id, u32 job_status) 568 + { 569 + lockdep_assert_held(&vdev->submitted_jobs_lock); 570 + 571 + switch (job_status) { 572 + case VPU_JSM_STATUS_PROCESSING_ERR: 573 + case VPU_JSM_STATUS_ENGINE_RESET_REQUIRED_MIN ... VPU_JSM_STATUS_ENGINE_RESET_REQUIRED_MAX: 574 + { 575 + struct ivpu_job *job = xa_load(&vdev->submitted_jobs_xa, job_id); 576 + 577 + if (!job) 578 + return false; 579 + 580 + /* Trigger an engine reset */ 581 + guard(mutex)(&job->file_priv->lock); 582 + 583 + job->job_status = job_status; 584 + 585 + if (job->file_priv->has_mmu_faults) 586 + return false; 587 + 588 + /* 589 + * Mark context as faulty and defer destruction of the job to jobs abort thread 590 + * handler to synchronize between both faults and jobs returning context violation 591 + * status and ensure both are handled in the same way 592 + */ 593 + job->file_priv->has_mmu_faults = true; 594 + queue_work(system_wq, &vdev->context_abort_work); 595 + return true; 596 + } 597 + default: 598 + /* Complete job with error status, engine reset not required */ 599 + break; 600 + } 601 + 602 + return false; 603 + } 604 + 567 605 static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32 job_status) 568 606 { 569 607 struct ivpu_job *job; ··· 612 574 if (!job) 613 575 return -ENOENT; 614 576 615 - if (job_status == VPU_JSM_STATUS_MVNCI_CONTEXT_VIOLATION_HW) { 616 - guard(mutex)(&job->file_priv->lock); 577 + ivpu_job_remove_from_submitted_jobs(vdev, job_id); 617 578 579 + if (job->job_status == VPU_JSM_STATUS_SUCCESS) { 618 580 if (job->file_priv->has_mmu_faults) 619 - return 0; 620 - 621 - /* 622 - * Mark context as faulty and defer destruction of the job to jobs abort thread 623 - * handler to synchronize between both faults and jobs returning context violation 624 - * status and ensure both are handled in the same way 625 - */ 626 - job->file_priv->has_mmu_faults 
= true; 627 - queue_work(system_wq, &vdev->context_abort_work); 628 - return 0; 581 + job->job_status = DRM_IVPU_JOB_STATUS_ABORTED; 582 + else 583 + job->job_status = job_status; 629 584 } 630 585 631 - job = ivpu_job_remove_from_submitted_jobs(vdev, job_id); 632 - if (!job) 633 - return -ENOENT; 634 - 635 - if (job->file_priv->has_mmu_faults) 636 - job_status = DRM_IVPU_JOB_STATUS_ABORTED; 637 - 638 - job->bos[CMD_BUF_IDX]->job_status = job_status; 586 + job->bos[CMD_BUF_IDX]->job_status = job->job_status; 639 587 dma_fence_signal(job->done_fence); 640 588 641 589 trace_job("done", job); 642 590 ivpu_dbg(vdev, JOB, "Job complete: id %3u ctx %2d cmdq_id %u engine %d status 0x%x\n", 643 - job->job_id, job->file_priv->ctx.id, job->cmdq_id, job->engine_idx, job_status); 591 + job->job_id, job->file_priv->ctx.id, job->cmdq_id, job->engine_idx, 592 + job->job_status); 644 593 645 594 ivpu_job_destroy(job); 646 595 ivpu_stop_job_timeout_detection(vdev); ··· 1047 1022 payload = (struct vpu_ipc_msg_payload_job_done *)&jsm_msg->payload; 1048 1023 1049 1024 mutex_lock(&vdev->submitted_jobs_lock); 1050 - ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status); 1025 + if (!ivpu_job_handle_engine_error(vdev, payload->job_id, payload->job_status)) 1026 + /* No engine error, complete the job normally */ 1027 + ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status); 1051 1028 mutex_unlock(&vdev->submitted_jobs_lock); 1052 1029 } 1053 1030
+3
drivers/accel/ivpu/ivpu_job.h
··· 51 51 * @cmdq_id: Command queue ID used for submission 52 52 * @job_id: Unique job ID for tracking and status reporting 53 53 * @engine_idx: Engine index for job execution 54 + * @job_status: Status reported by firmware for this job 54 55 * @primary_preempt_buf: Primary preemption buffer for job 55 56 * @secondary_preempt_buf: Secondary preemption buffer for job (optional) 56 57 * @bo_count: Number of buffer objects associated with this job ··· 65 64 u32 cmdq_id; 66 65 u32 job_id; 67 66 u32 engine_idx; 67 + u32 job_status; 68 68 struct ivpu_bo *primary_preempt_buf; 69 69 struct ivpu_bo *secondary_preempt_buf; 70 70 size_t bo_count; ··· 85 83 86 84 void ivpu_job_done_consumer_init(struct ivpu_device *vdev); 87 85 void ivpu_job_done_consumer_fini(struct ivpu_device *vdev); 86 + bool ivpu_job_handle_engine_error(struct ivpu_device *vdev, u32 job_id, u32 job_status); 88 87 void ivpu_context_abort_work_fn(struct work_struct *work); 89 88 90 89 void ivpu_jobs_abort_all(struct ivpu_device *vdev);
+96 -54
drivers/accel/ivpu/vpu_jsm_api.h
··· 23 23 /* 24 24 * Minor version changes when API backward compatibility is preserved. 25 25 */ 26 - #define VPU_JSM_API_VER_MINOR 32 26 + #define VPU_JSM_API_VER_MINOR 33 27 27 28 28 /* 29 29 * API header changed (field names, documentation, formatting) but API itself has not been changed 30 30 */ 31 - #define VPU_JSM_API_VER_PATCH 5 31 + #define VPU_JSM_API_VER_PATCH 0 32 32 33 33 /* 34 34 * Index in the API version table ··· 76 76 #define VPU_JSM_STATUS_PREEMPTED_MID_INFERENCE 0xDU 77 77 /* Job status returned when the job was preempted mid-command */ 78 78 #define VPU_JSM_STATUS_PREEMPTED_MID_COMMAND 0xDU 79 + /* Range of status codes that require engine reset */ 80 + #define VPU_JSM_STATUS_ENGINE_RESET_REQUIRED_MIN 0xEU 79 81 #define VPU_JSM_STATUS_MVNCI_CONTEXT_VIOLATION_HW 0xEU 80 82 #define VPU_JSM_STATUS_MVNCI_PREEMPTION_TIMED_OUT 0xFU 83 + #define VPU_JSM_STATUS_ENGINE_RESET_REQUIRED_MAX 0x1FU 81 84 82 85 /* 83 86 * Host <-> VPU IPC channels. ··· 407 404 /** Index of the first free entry in buffer. */ 408 405 u32 first_free_entry_idx; 409 406 /** 410 - * Incremented each time NPU wraps around 411 - * the buffer to write next entry. 407 + * Incremented whenever the NPU wraps around the buffer and writes 408 + * to the first entry again. 412 409 */ 413 410 u32 wraparound_count; 414 411 }; ··· 457 454 * Host <-> VPU IPC messages types. 458 455 */ 459 456 enum vpu_ipc_msg_type { 457 + /** Unsupported command */ 460 458 VPU_JSM_MSG_UNKNOWN = 0xFFFFFFFF, 461 459 462 - /* IPC Host -> Device, Async commands */ 460 + /** IPC Host -> Device, base id for async commands */ 463 461 VPU_JSM_MSG_ASYNC_CMD = 0x1100, 462 + /** 463 + * Reset engine. The NPU cancels all the jobs currently executing on the target 464 + * engine making the engine become idle and then does a HW reset, before returning 465 + * to the host. 
466 + * @see struct vpu_ipc_msg_payload_engine_reset 467 + */ 464 468 VPU_JSM_MSG_ENGINE_RESET = VPU_JSM_MSG_ASYNC_CMD, 465 469 /** 466 470 * Preempt engine. The NPU stops (preempts) all the jobs currently ··· 477 467 * the target engine, but it stops processing them (until the queue doorbell 478 468 * is rung again); the host is responsible to reset the job queue, either 479 469 * after preemption or when resubmitting jobs to the queue. 470 + * @see vpu_ipc_msg_payload_engine_preempt 480 471 */ 481 472 VPU_JSM_MSG_ENGINE_PREEMPT = 0x1101, 482 473 /** ··· 594 583 * @see vpu_ipc_msg_payload_hws_resume_engine 595 584 */ 596 585 VPU_JSM_MSG_HWS_ENGINE_RESUME = 0x111b, 597 - /* Control command: Enable survivability/DCT mode */ 586 + /** 587 + * Control command: Enable survivability/DCT mode 588 + * @see vpu_ipc_msg_payload_pwr_dct_control 589 + */ 598 590 VPU_JSM_MSG_DCT_ENABLE = 0x111c, 599 - /* Control command: Disable survivability/DCT mode */ 591 + /** 592 + * Control command: Disable survivability/DCT mode 593 + * This command has no payload 594 + */ 600 595 VPU_JSM_MSG_DCT_DISABLE = 0x111d, 601 596 /** 602 597 * Dump VPU state. To be used for debug purposes only. 603 - * NOTE: Please introduce new ASYNC commands before this one. * 598 + * This command has no payload. 599 + * NOTE: Please introduce new ASYNC commands before this one. 
604 600 */ 605 601 VPU_JSM_MSG_STATE_DUMP = 0x11FF, 606 602 607 - /* IPC Host -> Device, General commands */ 603 + /** IPC Host -> Device, base id for general commands */ 608 604 VPU_JSM_MSG_GENERAL_CMD = 0x1200, 605 + /** Unsupported command */ 609 606 VPU_JSM_MSG_BLOB_DEINIT_DEPRECATED = VPU_JSM_MSG_GENERAL_CMD, 610 607 /** 611 608 * Control dyndbg behavior by executing a dyndbg command; equivalent to 612 609 * Linux command: 613 610 * @verbatim echo '<dyndbg_cmd>' > <debugfs>/dynamic_debug/control @endverbatim 611 + * @see vpu_ipc_msg_payload_dyndbg_control 614 612 */ 615 613 VPU_JSM_MSG_DYNDBG_CONTROL = 0x1201, 616 614 /** ··· 627 607 */ 628 608 VPU_JSM_MSG_PWR_D0I3_ENTER = 0x1202, 629 609 630 - /* IPC Device -> Host, Job completion */ 610 + /** 611 + * IPC Device -> Host, Job completion 612 + * @see struct vpu_ipc_msg_payload_job_done 613 + */ 631 614 VPU_JSM_MSG_JOB_DONE = 0x2100, 632 615 /** 633 616 * IPC Device -> Host, Fence signalled ··· 645 622 * @see vpu_ipc_msg_payload_engine_reset_done 646 623 */ 647 624 VPU_JSM_MSG_ENGINE_RESET_DONE = VPU_JSM_MSG_ASYNC_CMD_DONE, 625 + /** 626 + * Preempt complete message 627 + * @see vpu_ipc_msg_payload_engine_preempt_done 628 + */ 648 629 VPU_JSM_MSG_ENGINE_PREEMPT_DONE = 0x2201, 649 630 VPU_JSM_MSG_REGISTER_DB_DONE = 0x2202, 650 631 VPU_JSM_MSG_UNREGISTER_DB_DONE = 0x2203, ··· 756 729 * @see vpu_ipc_msg_payload_hws_resume_engine 757 730 */ 758 731 VPU_JSM_MSG_HWS_RESUME_ENGINE_DONE = 0x221c, 759 - /* Response to control command: Enable survivability/DCT mode */ 732 + /** 733 + * Response to control command: Enable survivability/DCT mode 734 + * This command has no payload 735 + */ 760 736 VPU_JSM_MSG_DCT_ENABLE_DONE = 0x221d, 761 - /* Response to control command: Disable survivability/DCT mode */ 737 + /** 738 + * Response to control command: Disable survivability/DCT mode 739 + * This command has no payload 740 + */ 762 741 VPU_JSM_MSG_DCT_DISABLE_DONE = 0x221e, 763 742 /** 764 743 * Response to state dump control 
command. 765 - * NOTE: Please introduce new ASYNC responses before this one. * 744 + * This command has no payload. 745 + * NOTE: Please introduce new ASYNC responses before this one. 766 746 */ 767 747 VPU_JSM_MSG_STATE_DUMP_RSP = 0x22FF, 768 748 ··· 787 753 788 754 enum vpu_ipc_msg_status { VPU_JSM_MSG_FREE, VPU_JSM_MSG_ALLOCATED }; 789 755 790 - /* 791 - * Host <-> LRT IPC message payload definitions 756 + /** 757 + * Engine reset request payload 758 + * @see VPU_JSM_MSG_ENGINE_RESET 792 759 */ 793 760 struct vpu_ipc_msg_payload_engine_reset { 794 - /* Engine to be reset. */ 761 + /** Engine to be reset. */ 795 762 u32 engine_idx; 796 - /* Reserved */ 763 + /** Reserved */ 797 764 u32 reserved_0; 798 765 }; 799 766 767 + /** 768 + * Engine preemption request struct 769 + * @see VPU_JSM_MSG_ENGINE_PREEMPT 770 + */ 800 771 struct vpu_ipc_msg_payload_engine_preempt { 801 - /* Engine to be preempted. */ 772 + /** Engine to be preempted. */ 802 773 u32 engine_idx; 803 - /* ID of the preemption request. */ 774 + /** ID of the preemption request. */ 804 775 u32 preempt_id; 805 776 }; 806 777 ··· 974 935 u64 next_buffer_size; 975 936 }; 976 937 938 + /** 939 + * Device -> host job completion message. 940 + * @see VPU_JSM_MSG_JOB_DONE 941 + */ 977 942 struct vpu_ipc_msg_payload_job_done { 978 - /* Engine to which the job was submitted. */ 943 + /** Engine to which the job was submitted. 
*/ 979 944 u32 engine_idx; 980 - /* Index of the doorbell to which the job was submitted */ 945 + /** Index of the doorbell to which the job was submitted */ 981 946 u32 db_idx; 982 - /* ID of the completed job */ 947 + /** ID of the completed job */ 983 948 u32 job_id; 984 - /* Status of the completed job */ 949 + /** Status of the completed job */ 985 950 u32 job_status; 986 - /* Host SSID */ 951 + /** Host SSID */ 987 952 u32 host_ssid; 988 - /* Zero Padding */ 953 + /** Zero Padding */ 989 954 u32 reserved_0; 990 - /* Command queue id */ 955 + /** Command queue id */ 991 956 u64 cmdq_id; 992 957 }; 993 958 ··· 1040 997 impacted_contexts[VPU_MAX_ENGINE_RESET_IMPACTED_CONTEXTS]; 1041 998 }; 1042 999 1000 + /** 1001 + * Preemption response struct 1002 + * @see VPU_JSM_MSG_ENGINE_PREEMPT_DONE 1003 + */ 1043 1004 struct vpu_ipc_msg_payload_engine_preempt_done { 1044 - /* Engine preempted. */ 1005 + /** Engine preempted. */ 1045 1006 u32 engine_idx; 1046 - /* ID of the preemption request. */ 1007 + /** ID of the preemption request. */ 1047 1008 u32 preempt_id; 1048 1009 }; 1049 1010 ··· 1599 1552 }; 1600 1553 1601 1554 /** 1602 - * Payload for VPU_JSM_MSG_DYNDBG_CONTROL requests. 1555 + * Payload for @ref VPU_JSM_MSG_DYNDBG_CONTROL requests. 1603 1556 * 1604 - * VPU_JSM_MSG_DYNDBG_CONTROL are used to control the VPU FW Dynamic Debug 1605 - * feature, which allows developers to selectively enable / disable MVLOG_DEBUG 1606 - * messages. This is equivalent to the Dynamic Debug functionality provided by 1607 - * Linux 1608 - * (https://www.kernel.org/doc/html/latest/admin-guide/dynamic-debug-howto.html) 1609 - * The host can control Dynamic Debug behavior by sending dyndbg commands, which 1610 - * have the same syntax as Linux 1611 - * dyndbg commands. 1557 + * VPU_JSM_MSG_DYNDBG_CONTROL requests are used to control the VPU FW dynamic debug 1558 + * feature, which allows developers to selectively enable/disable code to obtain 1559 + * additional FW information. 
This is equivalent to the dynamic debug functionality 1560 + * provided by Linux. The host can control dynamic debug behavior by sending dyndbg 1561 + * commands, using the same syntax as for Linux dynamic debug commands. 1612 1562 * 1613 - * NOTE: in order for MVLOG_DEBUG messages to be actually printed, the host 1614 - * still has to set the logging level to MVLOG_DEBUG, using the 1615 - * VPU_JSM_MSG_TRACE_SET_CONFIG command. 1563 + * @see https://www.kernel.org/doc/html/latest/admin-guide/dynamic-debug-howto.html. 1616 1564 * 1617 - * The host can see the current dynamic debug configuration by executing a 1618 - * special 'show' command. The dyndbg configuration will be printed to the 1619 - * configured logging destination using MVLOG_INFO logging level. 1565 + * NOTE: 1566 + * As the dynamic debug feature uses MVLOG messages to provide information, the host 1567 + * must first set the logging level to MVLOG_DEBUG, using the @ref VPU_JSM_MSG_TRACE_SET_CONFIG 1568 + * command. 1620 1569 */ 1621 1570 struct vpu_ipc_msg_payload_dyndbg_control { 1622 1571 /** 1623 - * Dyndbg command (same format as Linux dyndbg); must be a NULL-terminated 1624 - * string. 1572 + * Dyndbg command to be executed. 1625 1573 */ 1626 1574 char dyndbg_cmd[VPU_DYNDBG_CMD_MAX_LEN]; 1627 1575 }; ··· 1637 1595 }; 1638 1596 1639 1597 /** 1640 - * Payload for VPU_JSM_MSG_DCT_ENABLE message. 1598 + * Payload for @ref VPU_JSM_MSG_DCT_ENABLE message. 1641 1599 * 1642 1600 * Default values for DCT active/inactive times are 5.3ms and 30ms respectively, 1643 1601 * corresponding to a 85% duty cycle. This payload allows the host to tune these ··· 1694 1652 struct vpu_ipc_msg_payload_pwr_dct_control pwr_dct_control; 1695 1653 }; 1696 1654 1697 - /* 1698 - * Host <-> LRT IPC message base structure. 1655 + /** 1656 + * Host <-> NPU IPC message base structure. 
1699 1657 * 1700 1658 * NOTE: All instances of this object must be aligned on a 64B boundary 1701 1659 * to allow proper handling of VPU cache operations. 1702 1660 */ 1703 1661 struct vpu_jsm_msg { 1704 - /* Reserved */ 1662 + /** Reserved */ 1705 1663 u64 reserved_0; 1706 - /* Message type, see vpu_ipc_msg_type enum. */ 1664 + /** Message type, see @ref vpu_ipc_msg_type. */ 1707 1665 u32 type; 1708 - /* Buffer status, see vpu_ipc_msg_status enum. */ 1666 + /** Buffer status, see @ref vpu_ipc_msg_status. */ 1709 1667 u32 status; 1710 - /* 1668 + /** 1711 1669 * Request ID, provided by the host in a request message and passed 1712 1670 * back by VPU in the response message. 1713 1671 */ 1714 1672 u32 request_id; 1715 - /* Request return code set by the VPU, see VPU_JSM_STATUS_* defines. */ 1673 + /** Request return code set by the VPU, see VPU_JSM_STATUS_* defines. */ 1716 1674 u32 result; 1717 1675 u64 reserved_1; 1718 - /* Message payload depending on message type, see vpu_ipc_msg_payload union. */ 1676 + /** Message payload depending on message type, see vpu_ipc_msg_payload union. */ 1719 1677 union vpu_ipc_msg_payload payload; 1720 1678 }; 1721 1679
+5 -4
drivers/accel/qaic/qaic_control.c
··· 17 17 #include <linux/overflow.h> 18 18 #include <linux/pci.h> 19 19 #include <linux/scatterlist.h> 20 + #include <linux/sched/signal.h> 20 21 #include <linux/types.h> 21 22 #include <linux/uaccess.h> 22 23 #include <linux/workqueue.h> ··· 656 655 return -EINVAL; 657 656 658 657 nelem = in_trans->queue_size; 659 - size = (get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) * nelem; 660 - if (size / nelem != get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) 658 + if (check_mul_overflow((u32)(get_dbc_req_elem_size() + get_dbc_rsp_elem_size()), 659 + nelem, 660 + &size)) 661 661 return -EINVAL; 662 662 663 663 if (size + QAIC_DBC_Q_GAP + QAIC_DBC_Q_BUF_ALIGN < size) ··· 812 810 } 813 811 814 812 if (ret) 815 - break; 813 + goto out; 816 814 } 817 815 818 816 if (user_len != user_msg->len) ··· 1081 1079 1082 1080 list_for_each_entry(w, &wrappers->list, list) { 1083 1081 kref_get(&w->ref_count); 1084 - retry_count = 0; 1085 1082 ret = mhi_queue_buf(qdev->cntl_ch, DMA_TO_DEVICE, &w->msg, w->len, 1086 1083 list_is_last(&w->list, &wrappers->list) ? MHI_EOT : MHI_CHAIN); 1087 1084 if (ret) {
+57 -39
drivers/accel/qaic/qaic_data.c
··· 18 18 #include <linux/scatterlist.h> 19 19 #include <linux/spinlock.h> 20 20 #include <linux/srcu.h> 21 + #include <linux/string.h> 21 22 #include <linux/types.h> 22 23 #include <linux/uaccess.h> 23 24 #include <linux/wait.h> ··· 166 165 drm_gem_object_put(&slice->bo->base); 167 166 sg_free_table(slice->sgt); 168 167 kfree(slice->sgt); 169 - kfree(slice->reqs); 168 + kvfree(slice->reqs); 170 169 kfree(slice); 171 170 } 172 171 ··· 405 404 goto free_sgt; 406 405 } 407 406 408 - slice->reqs = kcalloc(sgt->nents, sizeof(*slice->reqs), GFP_KERNEL); 407 + slice->reqs = kvcalloc(sgt->nents, sizeof(*slice->reqs), GFP_KERNEL); 409 408 if (!slice->reqs) { 410 409 ret = -ENOMEM; 411 410 goto free_slice; ··· 431 430 return 0; 432 431 433 432 free_req: 434 - kfree(slice->reqs); 433 + kvfree(slice->reqs); 435 434 free_slice: 436 435 kfree(slice); 437 436 free_sgt: ··· 644 643 kfree(bo); 645 644 } 646 645 646 + static struct sg_table *qaic_get_sg_table(struct drm_gem_object *obj) 647 + { 648 + struct qaic_bo *bo = to_qaic_bo(obj); 649 + struct scatterlist *sg, *sg_in; 650 + struct sg_table *sgt, *sgt_in; 651 + int i; 652 + 653 + sgt_in = bo->sgt; 654 + 655 + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); 656 + if (!sgt) 657 + return ERR_PTR(-ENOMEM); 658 + 659 + if (sg_alloc_table(sgt, sgt_in->orig_nents, GFP_KERNEL)) { 660 + kfree(sgt); 661 + return ERR_PTR(-ENOMEM); 662 + } 663 + 664 + sg = sgt->sgl; 665 + for_each_sgtable_sg(sgt_in, sg_in, i) { 666 + memcpy(sg, sg_in, sizeof(*sg)); 667 + sg = sg_next(sg); 668 + } 669 + 670 + return sgt; 671 + } 672 + 647 673 static const struct drm_gem_object_funcs qaic_gem_funcs = { 648 674 .free = qaic_free_object, 675 + .get_sg_table = qaic_get_sg_table, 649 676 .print_info = qaic_gem_print_info, 650 677 .mmap = qaic_gem_object_mmap, 651 678 .vm_ops = &drm_vm_ops, ··· 982 953 if (args->hdr.count == 0) 983 954 return -EINVAL; 984 955 985 - arg_size = args->hdr.count * sizeof(*slice_ent); 986 - if (arg_size / args->hdr.count != 
sizeof(*slice_ent)) 956 + if (check_mul_overflow((unsigned long)args->hdr.count, 957 + (unsigned long)sizeof(*slice_ent), 958 + &arg_size)) 987 959 return -EINVAL; 988 960 989 961 if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE)) ··· 1014 984 1015 985 user_data = u64_to_user_ptr(args->data); 1016 986 1017 - slice_ent = kzalloc(arg_size, GFP_KERNEL); 1018 - if (!slice_ent) { 1019 - ret = -EINVAL; 987 + slice_ent = memdup_user(user_data, arg_size); 988 + if (IS_ERR(slice_ent)) { 989 + ret = PTR_ERR(slice_ent); 1020 990 goto unlock_dev_srcu; 1021 - } 1022 - 1023 - ret = copy_from_user(slice_ent, user_data, arg_size); 1024 - if (ret) { 1025 - ret = -EFAULT; 1026 - goto free_slice_ent; 1027 991 } 1028 992 1029 993 obj = drm_gem_object_lookup(file_priv, args->hdr.handle); ··· 1324 1300 int usr_rcu_id, qdev_rcu_id; 1325 1301 struct qaic_device *qdev; 1326 1302 struct qaic_user *usr; 1327 - u8 __user *user_data; 1328 - unsigned long n; 1329 1303 u64 received_ts; 1330 1304 u32 queue_level; 1331 1305 u64 submit_ts; ··· 1336 1314 received_ts = ktime_get_ns(); 1337 1315 1338 1316 size = is_partial ? 
sizeof(struct qaic_partial_execute_entry) : sizeof(*exec); 1339 - n = (unsigned long)size * args->hdr.count; 1340 - if (args->hdr.count == 0 || n / args->hdr.count != size) 1317 + if (args->hdr.count == 0) 1341 1318 return -EINVAL; 1342 1319 1343 - user_data = u64_to_user_ptr(args->data); 1344 - 1345 - exec = kcalloc(args->hdr.count, size, GFP_KERNEL); 1346 - if (!exec) 1347 - return -ENOMEM; 1348 - 1349 - if (copy_from_user(exec, user_data, n)) { 1350 - ret = -EFAULT; 1351 - goto free_exec; 1352 - } 1320 + exec = memdup_array_user(u64_to_user_ptr(args->data), args->hdr.count, size); 1321 + if (IS_ERR(exec)) 1322 + return PTR_ERR(exec); 1353 1323 1354 1324 usr = file_priv->driver_priv; 1355 1325 usr_rcu_id = srcu_read_lock(&usr->qddev_lock); ··· 1410 1396 srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); 1411 1397 unlock_usr_srcu: 1412 1398 srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); 1413 - free_exec: 1414 1399 kfree(exec); 1415 1400 return ret; 1416 1401 } ··· 1762 1749 struct qaic_device *qdev; 1763 1750 struct qaic_user *usr; 1764 1751 struct qaic_bo *bo; 1765 - int ret, i; 1752 + int ret = 0; 1753 + int i; 1766 1754 1767 1755 usr = file_priv->driver_priv; 1768 1756 usr_rcu_id = srcu_read_lock(&usr->qddev_lock); ··· 1784 1770 goto unlock_dev_srcu; 1785 1771 } 1786 1772 1787 - ent = kcalloc(args->hdr.count, sizeof(*ent), GFP_KERNEL); 1788 - if (!ent) { 1789 - ret = -EINVAL; 1773 + ent = memdup_array_user(u64_to_user_ptr(args->data), args->hdr.count, sizeof(*ent)); 1774 + if (IS_ERR(ent)) { 1775 + ret = PTR_ERR(ent); 1790 1776 goto unlock_dev_srcu; 1791 - } 1792 - 1793 - ret = copy_from_user(ent, u64_to_user_ptr(args->data), args->hdr.count * sizeof(*ent)); 1794 - if (ret) { 1795 - ret = -EFAULT; 1796 - goto free_ent; 1797 1777 } 1798 1778 1799 1779 for (i = 0; i < args->hdr.count; i++) { ··· 1797 1789 goto free_ent; 1798 1790 } 1799 1791 bo = to_qaic_bo(obj); 1792 + if (!bo->sliced) { 1793 + drm_gem_object_put(obj); 1794 + ret = -EINVAL; 1795 + goto free_ent; 
1796 + } 1797 + if (bo->dbc->id != args->hdr.dbc_id) { 1798 + drm_gem_object_put(obj); 1799 + ret = -EINVAL; 1800 + goto free_ent; 1801 + } 1800 1802 /* 1801 1803 * perf stats ioctl is called before wait ioctl is complete then 1802 1804 * the latency information is invalid.
+3 -3
drivers/accel/qaic/qaic_ras.c
··· 514 514 { 515 515 struct qaic_device *qdev = pci_get_drvdata(to_pci_dev(dev)); 516 516 517 - return snprintf(buf, PAGE_SIZE, "%d\n", qdev->ce_count); 517 + return sysfs_emit(buf, "%d\n", qdev->ce_count); 518 518 } 519 519 520 520 static ssize_t ue_count_show(struct device *dev, struct device_attribute *attr, char *buf) 521 521 { 522 522 struct qaic_device *qdev = pci_get_drvdata(to_pci_dev(dev)); 523 523 524 - return snprintf(buf, PAGE_SIZE, "%d\n", qdev->ue_count); 524 + return sysfs_emit(buf, "%d\n", qdev->ue_count); 525 525 } 526 526 527 527 static ssize_t ue_nonfatal_count_show(struct device *dev, struct device_attribute *attr, char *buf) 528 528 { 529 529 struct qaic_device *qdev = pci_get_drvdata(to_pci_dev(dev)); 530 530 531 - return snprintf(buf, PAGE_SIZE, "%d\n", qdev->ue_nf_count); 531 + return sysfs_emit(buf, "%d\n", qdev->ue_nf_count); 532 532 } 533 533 534 534 static DEVICE_ATTR_RO(ce_count);
+120 -45
drivers/accel/qaic/sahara.c
··· 159 159 struct sahara_packet *rx; 160 160 struct work_struct fw_work; 161 161 struct work_struct dump_work; 162 + struct work_struct read_data_work; 162 163 struct mhi_device *mhi_dev; 163 164 const char * const *image_table; 164 165 u32 table_size; ··· 175 174 u64 dump_image_offset; 176 175 void *mem_dump_freespace; 177 176 u64 dump_images_left; 177 + u32 read_data_offset; 178 + u32 read_data_length; 178 179 bool is_mem_dump_mode; 180 + bool non_streaming; 179 181 }; 180 182 181 183 static const char * const aic100_image_table[] = { ··· 219 215 [74] = "qcom/aic200/sti.bin", 220 216 [75] = "qcom/aic200/pvs.bin", 221 217 }; 218 + 219 + static bool is_streaming(struct sahara_context *context) 220 + { 221 + return !context->non_streaming; 222 + } 222 223 223 224 static int sahara_find_image(struct sahara_context *context, u32 image_id) 224 225 { ··· 274 265 int ret; 275 266 276 267 context->is_mem_dump_mode = false; 268 + context->read_data_offset = 0; 269 + context->read_data_length = 0; 277 270 278 271 context->tx[0]->cmd = cpu_to_le32(SAHARA_RESET_CMD); 279 272 context->tx[0]->length = cpu_to_le32(SAHARA_RESET_LENGTH); ··· 330 319 dev_err(&context->mhi_dev->dev, "Unable to send hello response %d\n", ret); 331 320 } 332 321 322 + static int read_data_helper(struct sahara_context *context, int buf_index) 323 + { 324 + enum mhi_flags mhi_flag; 325 + u32 pkt_data_len; 326 + int ret; 327 + 328 + pkt_data_len = min(context->read_data_length, SAHARA_PACKET_MAX_SIZE); 329 + 330 + memcpy(context->tx[buf_index], 331 + &context->firmware->data[context->read_data_offset], 332 + pkt_data_len); 333 + 334 + context->read_data_offset += pkt_data_len; 335 + context->read_data_length -= pkt_data_len; 336 + 337 + if (is_streaming(context) || !context->read_data_length) 338 + mhi_flag = MHI_EOT; 339 + else 340 + mhi_flag = MHI_CHAIN; 341 + 342 + ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, 343 + context->tx[buf_index], pkt_data_len, mhi_flag); 344 + if (ret) { 345 + 
dev_err(&context->mhi_dev->dev, "Unable to send read_data response %d\n", ret); 346 + return ret; 347 + } 348 + 349 + return 0; 350 + } 351 + 333 352 static void sahara_read_data(struct sahara_context *context) 334 353 { 335 - u32 image_id, data_offset, data_len, pkt_data_len; 354 + u32 image_id, data_offset, data_len; 336 355 int ret; 337 356 int i; 338 357 ··· 398 357 * and is not needed here on error. 399 358 */ 400 359 401 - if (data_len > SAHARA_TRANSFER_MAX_SIZE) { 360 + if (context->non_streaming && data_len > SAHARA_TRANSFER_MAX_SIZE) { 402 361 dev_err(&context->mhi_dev->dev, "Malformed read_data packet - data len %d exceeds max xfer size %d\n", 403 362 data_len, SAHARA_TRANSFER_MAX_SIZE); 404 363 sahara_send_reset(context); ··· 419 378 return; 420 379 } 421 380 422 - for (i = 0; i < SAHARA_NUM_TX_BUF && data_len; ++i) { 423 - pkt_data_len = min(data_len, SAHARA_PACKET_MAX_SIZE); 381 + context->read_data_offset = data_offset; 382 + context->read_data_length = data_len; 424 383 425 - memcpy(context->tx[i], &context->firmware->data[data_offset], pkt_data_len); 384 + if (is_streaming(context)) { 385 + schedule_work(&context->read_data_work); 386 + return; 387 + } 426 388 427 - data_offset += pkt_data_len; 428 - data_len -= pkt_data_len; 429 - 430 - ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, 431 - context->tx[i], pkt_data_len, 432 - !data_len ? 
MHI_EOT : MHI_CHAIN); 433 - if (ret) { 434 - dev_err(&context->mhi_dev->dev, "Unable to send read_data response %d\n", 435 - ret); 436 - return; 437 - } 389 + for (i = 0; i < SAHARA_NUM_TX_BUF && context->read_data_length; ++i) { 390 + ret = read_data_helper(context, i); 391 + if (ret) 392 + break; 438 393 } 439 394 } 440 395 ··· 575 538 struct sahara_memory_dump_meta_v1 *dump_meta; 576 539 u64 table_nents; 577 540 u64 dump_length; 541 + u64 mul_bytes; 578 542 int ret; 579 543 u64 i; 580 544 ··· 589 551 dev_table[i].description[SAHARA_TABLE_ENTRY_STR_LEN - 1] = 0; 590 552 dev_table[i].filename[SAHARA_TABLE_ENTRY_STR_LEN - 1] = 0; 591 553 592 - dump_length = size_add(dump_length, le64_to_cpu(dev_table[i].length)); 593 - if (dump_length == SIZE_MAX) { 554 + if (check_add_overflow(dump_length, 555 + le64_to_cpu(dev_table[i].length), 556 + &dump_length)) { 594 557 /* Discard the dump */ 595 558 sahara_send_reset(context); 596 559 return; ··· 607 568 dev_table[i].filename); 608 569 } 609 570 610 - dump_length = size_add(dump_length, sizeof(*dump_meta)); 611 - if (dump_length == SIZE_MAX) { 571 + if (check_add_overflow(dump_length, (u64)sizeof(*dump_meta), &dump_length)) { 612 572 /* Discard the dump */ 613 573 sahara_send_reset(context); 614 574 return; 615 575 } 616 - dump_length = size_add(dump_length, size_mul(sizeof(*image_out_table), table_nents)); 617 - if (dump_length == SIZE_MAX) { 576 + if (check_mul_overflow((u64)sizeof(*image_out_table), table_nents, &mul_bytes)) { 577 + /* Discard the dump */ 578 + sahara_send_reset(context); 579 + return; 580 + } 581 + if (check_add_overflow(dump_length, mul_bytes, &dump_length)) { 618 582 /* Discard the dump */ 619 583 sahara_send_reset(context); 620 584 return; ··· 657 615 658 616 /* Request the first chunk of the first image */ 659 617 context->dump_image = &image_out_table[0]; 660 - dump_length = min(context->dump_image->length, SAHARA_READ_MAX_SIZE); 618 + dump_length = min_t(u64, context->dump_image->length, 
SAHARA_READ_MAX_SIZE); 661 619 /* Avoid requesting EOI sized data so that we can identify errors */ 662 620 if (dump_length == SAHARA_END_OF_IMAGE_LENGTH) 663 621 dump_length = SAHARA_END_OF_IMAGE_LENGTH / 2; ··· 705 663 706 664 /* Get next image chunk */ 707 665 dump_length = context->dump_image->length - context->dump_image_offset; 708 - dump_length = min(dump_length, SAHARA_READ_MAX_SIZE); 666 + dump_length = min_t(u64, dump_length, SAHARA_READ_MAX_SIZE); 709 667 /* Avoid requesting EOI sized data so that we can identify errors */ 710 668 if (dump_length == SAHARA_END_OF_IMAGE_LENGTH) 711 669 dump_length = SAHARA_END_OF_IMAGE_LENGTH / 2; ··· 784 742 sahara_send_reset(context); 785 743 } 786 744 745 + static void sahara_read_data_processing(struct work_struct *work) 746 + { 747 + struct sahara_context *context = container_of(work, struct sahara_context, read_data_work); 748 + 749 + read_data_helper(context, 0); 750 + } 751 + 787 752 static int sahara_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id) 788 753 { 789 754 struct sahara_context *context; ··· 805 756 if (!context->rx) 806 757 return -ENOMEM; 807 758 808 - /* 809 - * AIC100 defines SAHARA_TRANSFER_MAX_SIZE as the largest value it 810 - * will request for READ_DATA. This is larger than 811 - * SAHARA_PACKET_MAX_SIZE, and we need 9x SAHARA_PACKET_MAX_SIZE to 812 - * cover SAHARA_TRANSFER_MAX_SIZE. When the remote side issues a 813 - * READ_DATA, it requires a transfer of the exact size requested. We 814 - * can use MHI_CHAIN to link multiple buffers into a single transfer 815 - * but the remote side will not consume the buffers until it sees an 816 - * EOT, thus we need to allocate enough buffers to put in the tx fifo 817 - * to cover an entire READ_DATA request of the max size. 
818 - */ 819 - for (i = 0; i < SAHARA_NUM_TX_BUF; ++i) { 820 - context->tx[i] = devm_kzalloc(&mhi_dev->dev, SAHARA_PACKET_MAX_SIZE, GFP_KERNEL); 821 - if (!context->tx[i]) 822 - return -ENOMEM; 823 - } 824 - 825 - context->mhi_dev = mhi_dev; 826 - INIT_WORK(&context->fw_work, sahara_processing); 827 - INIT_WORK(&context->dump_work, sahara_dump_processing); 828 - 829 759 if (!strcmp(mhi_dev->mhi_cntrl->name, "AIC200")) { 830 760 context->image_table = aic200_image_table; 831 761 context->table_size = ARRAY_SIZE(aic200_image_table); 832 762 } else { 833 763 context->image_table = aic100_image_table; 834 764 context->table_size = ARRAY_SIZE(aic100_image_table); 765 + context->non_streaming = true; 835 766 } 767 + 768 + /* 769 + * There are two firmware implementations for READ_DATA handling. 770 + * The older "SBL" implementation defines a Sahara transfer size, and 771 + * expects that the response is a single transport transfer. If the 772 + * FW wants to transfer a file that is larger than the transfer size, 773 + * the FW will issue multiple READ_DATA commands. For this 774 + * implementation, we need to allocate enough buffers to contain the 775 + * entire Sahara transfer size. 776 + * 777 + * The newer "XBL" implementation does not define a maximum transfer 778 + * size and instead expects the data to be streamed over using the 779 + * transport level MTU. The FW will issue a single READ_DATA command 780 + * of whatever size, and consume multiple transport level transfers 781 + * until the expected amount of data is consumed. For this 782 + * implementation we only need a single buffer of the transport MTU 783 + * but we'll need to be able to use it multiple times for a single 784 + * READ_DATA request. 785 + * 786 + * AIC100 is the SBL implementation and defines SAHARA_TRANSFER_MAX_SIZE 787 + * and we need 9x SAHARA_PACKET_MAX_SIZE to cover that. 
We can use 788 + * MHI_CHAIN to link multiple buffers into a single transfer but the 789 + * remote side will not consume the buffers until it sees an EOT, thus 790 + * we need to allocate enough buffers to put in the tx fifo to cover an 791 + * entire READ_DATA request of the max size. 792 + * 793 + * AIC200 is the XBL implementation, and so a single buffer will work. 794 + */ 795 + for (i = 0; i < SAHARA_NUM_TX_BUF; ++i) { 796 + context->tx[i] = devm_kzalloc(&mhi_dev->dev, 797 + SAHARA_PACKET_MAX_SIZE, 798 + GFP_KERNEL); 799 + if (!context->tx[i]) 800 + return -ENOMEM; 801 + if (is_streaming(context)) 802 + break; 803 + } 804 + 805 + context->mhi_dev = mhi_dev; 806 + INIT_WORK(&context->fw_work, sahara_processing); 807 + INIT_WORK(&context->dump_work, sahara_dump_processing); 808 + INIT_WORK(&context->read_data_work, sahara_read_data_processing); 836 809 837 810 context->active_image_id = SAHARA_IMAGE_ID_NONE; 838 811 dev_set_drvdata(&mhi_dev->dev, context); ··· 885 814 886 815 static void sahara_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) 887 816 { 817 + struct sahara_context *context = dev_get_drvdata(&mhi_dev->dev); 818 + 819 + if (!mhi_result->transaction_status && context->read_data_length && is_streaming(context)) 820 + schedule_work(&context->read_data_work); 888 821 } 889 822 890 823 static void sahara_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result)
-10
drivers/dma-buf/heaps/Kconfig
··· 12 12 Choose this option to enable dma-buf CMA heap. This heap is backed 13 13 by the Contiguous Memory Allocator (CMA). If your system has these 14 14 regions, you should say Y here. 15 - 16 - config DMABUF_HEAPS_CMA_LEGACY 17 - bool "Legacy DMA-BUF CMA Heap" 18 - default y 19 - depends on DMABUF_HEAPS_CMA 20 - help 21 - Add a duplicate CMA-backed dma-buf heap with legacy naming derived 22 - from the CMA area's devicetree node, or "reserved" if the area is not 23 - defined in the devicetree. This uses the same underlying allocator as 24 - CONFIG_DMABUF_HEAPS_CMA.
+30 -17
drivers/dma-buf/heaps/cma_heap.c
··· 14 14 15 15 #include <linux/cma.h> 16 16 #include <linux/dma-buf.h> 17 + #include <linux/dma-buf/heaps/cma.h> 17 18 #include <linux/dma-heap.h> 18 19 #include <linux/dma-map-ops.h> 19 20 #include <linux/err.h> ··· 22 21 #include <linux/io.h> 23 22 #include <linux/mm.h> 24 23 #include <linux/module.h> 24 + #include <linux/of.h> 25 + #include <linux/of_reserved_mem.h> 25 26 #include <linux/scatterlist.h> 26 27 #include <linux/slab.h> 27 28 #include <linux/vmalloc.h> 28 29 29 30 #define DEFAULT_CMA_NAME "default_cma_region" 31 + 32 + static struct cma *dma_areas[MAX_CMA_AREAS] __initdata; 33 + static unsigned int dma_areas_num __initdata; 34 + 35 + int __init dma_heap_cma_register_heap(struct cma *cma) 36 + { 37 + if (dma_areas_num >= ARRAY_SIZE(dma_areas)) 38 + return -EINVAL; 39 + 40 + dma_areas[dma_areas_num++] = cma; 41 + 42 + return 0; 43 + } 30 44 31 45 struct cma_heap { 32 46 struct dma_heap *heap; ··· 411 395 return 0; 412 396 } 413 397 414 - static int __init add_default_cma_heap(void) 398 + static int __init add_cma_heaps(void) 415 399 { 416 400 struct cma *default_cma = dev_get_cma_area(NULL); 417 - const char *legacy_cma_name; 401 + unsigned int i; 418 402 int ret; 419 403 420 - if (!default_cma) 421 - return 0; 404 + if (default_cma) { 405 + ret = __add_cma_heap(default_cma, DEFAULT_CMA_NAME); 406 + if (ret) 407 + return ret; 408 + } 422 409 423 - ret = __add_cma_heap(default_cma, DEFAULT_CMA_NAME); 424 - if (ret) 425 - return ret; 410 + for (i = 0; i < dma_areas_num; i++) { 411 + struct cma *cma = dma_areas[i]; 426 412 427 - if (IS_ENABLED(CONFIG_DMABUF_HEAPS_CMA_LEGACY)) { 428 - legacy_cma_name = cma_get_name(default_cma); 429 - if (!strcmp(legacy_cma_name, DEFAULT_CMA_NAME)) { 430 - pr_warn("legacy name and default name are the same, skipping legacy heap\n"); 431 - return 0; 413 + ret = __add_cma_heap(cma, cma_get_name(cma)); 414 + if (ret) { 415 + pr_warn("Failed to add CMA heap %s", cma_get_name(cma)); 416 + continue; 432 417 } 433 418 434 - ret 
= __add_cma_heap(default_cma, legacy_cma_name); 435 - if (ret) 436 - pr_warn("failed to add legacy heap: %pe\n", 437 - ERR_PTR(ret)); 438 419 } 439 420 440 421 return 0; 441 422 } 442 - module_init(add_default_cma_heap); 423 + module_init(add_cma_heaps); 443 424 MODULE_DESCRIPTION("DMA-BUF CMA Heap");
+4 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5219 5219 dev_warn(adev->dev, "smart shift update failed\n"); 5220 5220 5221 5221 if (notify_clients) 5222 - drm_client_dev_suspend(adev_to_drm(adev), false); 5222 + drm_client_dev_suspend(adev_to_drm(adev)); 5223 5223 5224 5224 cancel_delayed_work_sync(&adev->delayed_init_work); 5225 5225 ··· 5353 5353 flush_delayed_work(&adev->delayed_init_work); 5354 5354 5355 5355 if (notify_clients) 5356 - drm_client_dev_resume(adev_to_drm(adev), false); 5356 + drm_client_dev_resume(adev_to_drm(adev)); 5357 5357 5358 5358 amdgpu_ras_resume(adev); 5359 5359 ··· 5958 5958 if (r) 5959 5959 goto out; 5960 5960 5961 - drm_client_dev_resume(adev_to_drm(tmp_adev), false); 5961 + drm_client_dev_resume(adev_to_drm(tmp_adev)); 5962 5962 5963 5963 /* 5964 5964 * The GPU enters bad state once faulty pages ··· 6293 6293 */ 6294 6294 amdgpu_unregister_gpu_instance(tmp_adev); 6295 6295 6296 - drm_client_dev_suspend(adev_to_drm(tmp_adev), false); 6296 + drm_client_dev_suspend(adev_to_drm(tmp_adev)); 6297 6297 6298 6298 /* disable ras on ALL IPs */ 6299 6299 if (!need_emergency_restart && !amdgpu_reset_in_dpc(adev) &&
+2 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 12446 12446 int j = state->num_private_objs-1; 12447 12447 12448 12448 dm_atomic_destroy_state(obj, 12449 - state->private_objs[i].state); 12449 + state->private_objs[i].state_to_destroy); 12450 12450 12451 12451 /* If i is not at the end of the array then the 12452 12452 * last element needs to be moved to where i was ··· 12457 12457 state->private_objs[j]; 12458 12458 12459 12459 state->private_objs[j].ptr = NULL; 12460 - state->private_objs[j].state = NULL; 12460 + state->private_objs[j].state_to_destroy = NULL; 12461 12461 state->private_objs[j].old_state = NULL; 12462 12462 state->private_objs[j].new_state = NULL; 12463 12463
+18 -13
drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
··· 111 111 static int 112 112 komeda_crtc_prepare(struct komeda_crtc *kcrtc) 113 113 { 114 + struct drm_device *drm = kcrtc->base.dev; 114 115 struct komeda_dev *mdev = kcrtc->base.dev->dev_private; 115 116 struct komeda_pipeline *master = kcrtc->master; 116 117 struct komeda_crtc_state *kcrtc_st = to_kcrtc_st(kcrtc->base.state); ··· 129 128 130 129 err = mdev->funcs->change_opmode(mdev, new_mode); 131 130 if (err) { 132 - DRM_ERROR("failed to change opmode: 0x%x -> 0x%x.\n,", 133 - mdev->dpmode, new_mode); 131 + drm_err(drm, "failed to change opmode: 0x%x -> 0x%x.\n,", 132 + mdev->dpmode, new_mode); 134 133 goto unlock; 135 134 } 136 135 ··· 143 142 if (new_mode != KOMEDA_MODE_DUAL_DISP) { 144 143 err = clk_set_rate(mdev->aclk, komeda_crtc_get_aclk(kcrtc_st)); 145 144 if (err) 146 - DRM_ERROR("failed to set aclk.\n"); 145 + drm_err(drm, "failed to set aclk.\n"); 147 146 err = clk_prepare_enable(mdev->aclk); 148 147 if (err) 149 - DRM_ERROR("failed to enable aclk.\n"); 148 + drm_err(drm, "failed to enable aclk.\n"); 150 149 } 151 150 152 151 err = clk_set_rate(master->pxlclk, mode->crtc_clock * 1000); 153 152 if (err) 154 - DRM_ERROR("failed to set pxlclk for pipe%d\n", master->id); 153 + drm_err(drm, "failed to set pxlclk for pipe%d\n", master->id); 155 154 err = clk_prepare_enable(master->pxlclk); 156 155 if (err) 157 - DRM_ERROR("failed to enable pxl clk for pipe%d.\n", master->id); 156 + drm_err(drm, "failed to enable pxl clk for pipe%d.\n", master->id); 158 157 159 158 unlock: 160 159 mutex_unlock(&mdev->lock); ··· 165 164 static int 166 165 komeda_crtc_unprepare(struct komeda_crtc *kcrtc) 167 166 { 167 + struct drm_device *drm = kcrtc->base.dev; 168 168 struct komeda_dev *mdev = kcrtc->base.dev->dev_private; 169 169 struct komeda_pipeline *master = kcrtc->master; 170 170 u32 new_mode; ··· 182 180 183 181 err = mdev->funcs->change_opmode(mdev, new_mode); 184 182 if (err) { 185 - DRM_ERROR("failed to change opmode: 0x%x -> 0x%x.\n,", 186 - mdev->dpmode, 
new_mode); 183 + drm_err(drm, "failed to change opmode: 0x%x -> 0x%x.\n,", 184 + mdev->dpmode, new_mode); 187 185 goto unlock; 188 186 } 189 187 ··· 202 200 void komeda_crtc_handle_event(struct komeda_crtc *kcrtc, 203 201 struct komeda_events *evts) 204 202 { 203 + struct drm_device *drm = kcrtc->base.dev; 205 204 struct drm_crtc *crtc = &kcrtc->base; 206 205 u32 events = evts->pipes[kcrtc->master->id]; 207 206 ··· 215 212 if (wb_conn) 216 213 drm_writeback_signal_completion(&wb_conn->base, 0); 217 214 else 218 - DRM_WARN("CRTC[%d]: EOW happen but no wb_connector.\n", 215 + drm_warn(drm, "CRTC[%d]: EOW happen but no wb_connector.\n", 219 216 drm_crtc_index(&kcrtc->base)); 220 217 } 221 218 /* will handle it together with the write back support */ ··· 239 236 crtc->state->event = NULL; 240 237 drm_crtc_send_vblank_event(crtc, event); 241 238 } else { 242 - DRM_WARN("CRTC[%d]: FLIP happened but no pending commit.\n", 239 + drm_warn(drm, "CRTC[%d]: FLIP happened but no pending commit.\n", 243 240 drm_crtc_index(&kcrtc->base)); 244 241 } 245 242 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); ··· 312 309 313 310 /* wait the flip take affect.*/ 314 311 if (wait_for_completion_timeout(flip_done, HZ) == 0) { 315 - DRM_ERROR("wait pipe%d flip done timeout\n", kcrtc->master->id); 312 + drm_err(drm, "wait pipe%d flip done timeout\n", kcrtc->master->id); 316 313 if (!input_flip_done) { 317 314 unsigned long flags; 318 315 ··· 565 562 int komeda_kms_setup_crtcs(struct komeda_kms_dev *kms, 566 563 struct komeda_dev *mdev) 567 564 { 565 + struct drm_device *drm = &kms->base; 568 566 struct komeda_crtc *crtc; 569 567 struct komeda_pipeline *master; 570 568 char str[16]; ··· 585 581 else 586 582 sprintf(str, "None"); 587 583 588 - DRM_INFO("CRTC-%d: master(pipe-%d) slave(%s).\n", 584 + drm_info(drm, "CRTC-%d: master(pipe-%d) slave(%s).\n", 589 585 kms->n_crtcs, master->id, str); 590 586 591 587 kms->n_crtcs++; ··· 617 613 struct komeda_pipeline *pipe, 618 614 struct 
drm_encoder *encoder) 619 615 { 616 + struct drm_device *drm = encoder->dev; 620 617 struct drm_bridge *bridge; 621 618 int err; 622 619 ··· 629 624 630 625 err = drm_bridge_attach(encoder, bridge, NULL, 0); 631 626 if (err) 632 - dev_err(dev, "bridge_attach() failed for pipe: %s\n", 627 + drm_err(drm, "bridge_attach() failed for pipe: %s\n", 633 628 of_node_full_name(pipe->of_node)); 634 629 635 630 return err;
+1 -1
drivers/gpu/drm/arm/malidp_planes.c
··· 263 263 struct drm_plane_state *state) 264 264 { 265 265 struct drm_crtc_state *crtc_state = 266 - drm_atomic_get_existing_crtc_state(state->state, state->crtc); 266 + drm_atomic_get_new_crtc_state(state->state, state->crtc); 267 267 struct malidp_crtc_state *mc; 268 268 u32 src_w, src_h; 269 269 int ret;
+1 -6
drivers/gpu/drm/armada/armada_plane.c
··· 94 94 return 0; 95 95 } 96 96 97 - if (state) 98 - crtc_state = drm_atomic_get_existing_crtc_state(state, 99 - crtc); 100 - else 101 - crtc_state = crtc->state; 102 - 97 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 103 98 ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 104 99 0, 105 100 INT_MAX, true, false);
+8 -1
drivers/gpu/drm/ast/ast_2000.c
··· 211 211 __ast_device_set_tx_chip(ast, tx_chip); 212 212 } 213 213 214 + static const struct ast_device_quirks ast_2000_device_quirks = { 215 + .crtc_mem_req_threshold_low = 31, 216 + .crtc_mem_req_threshold_high = 47, 217 + }; 218 + 214 219 struct drm_device *ast_2000_device_create(struct pci_dev *pdev, 215 220 const struct drm_driver *drv, 216 221 enum ast_chip chip, ··· 233 228 return ERR_CAST(ast); 234 229 dev = &ast->base; 235 230 236 - ast_device_init(ast, chip, config_mode, regs, ioregs); 231 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2000_device_quirks); 232 + 233 + ast->dclk_table = ast_2000_dclk_table; 237 234 238 235 ast_2000_detect_tx_chip(ast, need_post); 239 236
+8 -1
drivers/gpu/drm/ast/ast_2100.c
··· 432 432 ast->support_wuxga = true; 433 433 } 434 434 435 + static const struct ast_device_quirks ast_2100_device_quirks = { 436 + .crtc_mem_req_threshold_low = 47, 437 + .crtc_mem_req_threshold_high = 63, 438 + }; 439 + 435 440 struct drm_device *ast_2100_device_create(struct pci_dev *pdev, 436 441 const struct drm_driver *drv, 437 442 enum ast_chip chip, ··· 454 449 return ERR_CAST(ast); 455 450 dev = &ast->base; 456 451 457 - ast_device_init(ast, chip, config_mode, regs, ioregs); 452 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2100_device_quirks); 453 + 454 + ast->dclk_table = ast_2000_dclk_table; 458 455 459 456 ast_2000_detect_tx_chip(ast, need_post); 460 457
+8 -1
drivers/gpu/drm/ast/ast_2200.c
··· 43 43 ast->support_wuxga = true; 44 44 } 45 45 46 + static const struct ast_device_quirks ast_2200_device_quirks = { 47 + .crtc_mem_req_threshold_low = 47, 48 + .crtc_mem_req_threshold_high = 63, 49 + }; 50 + 46 51 struct drm_device *ast_2200_device_create(struct pci_dev *pdev, 47 52 const struct drm_driver *drv, 48 53 enum ast_chip chip, ··· 65 60 return ERR_CAST(ast); 66 61 dev = &ast->base; 67 62 68 - ast_device_init(ast, chip, config_mode, regs, ioregs); 63 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2200_device_quirks); 64 + 65 + ast->dclk_table = ast_2000_dclk_table; 69 66 70 67 ast_2000_detect_tx_chip(ast, need_post); 71 68
+8 -1
drivers/gpu/drm/ast/ast_2300.c
··· 1407 1407 ast->support_wuxga = true; 1408 1408 } 1409 1409 1410 + static const struct ast_device_quirks ast_2300_device_quirks = { 1411 + .crtc_mem_req_threshold_low = 96, 1412 + .crtc_mem_req_threshold_high = 120, 1413 + }; 1414 + 1410 1415 struct drm_device *ast_2300_device_create(struct pci_dev *pdev, 1411 1416 const struct drm_driver *drv, 1412 1417 enum ast_chip chip, ··· 1429 1424 return ERR_CAST(ast); 1430 1425 dev = &ast->base; 1431 1426 1432 - ast_device_init(ast, chip, config_mode, regs, ioregs); 1427 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2300_device_quirks); 1428 + 1429 + ast->dclk_table = ast_2000_dclk_table; 1433 1430 1434 1431 ast_2300_detect_tx_chip(ast); 1435 1432
+8 -1
drivers/gpu/drm/ast/ast_2400.c
··· 44 44 ast->support_wuxga = true; 45 45 } 46 46 47 + static const struct ast_device_quirks ast_2400_device_quirks = { 48 + .crtc_mem_req_threshold_low = 96, 49 + .crtc_mem_req_threshold_high = 120, 50 + }; 51 + 47 52 struct drm_device *ast_2400_device_create(struct pci_dev *pdev, 48 53 const struct drm_driver *drv, 49 54 enum ast_chip chip, ··· 66 61 return ERR_CAST(ast); 67 62 dev = &ast->base; 68 63 69 - ast_device_init(ast, chip, config_mode, regs, ioregs); 64 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2400_device_quirks); 65 + 66 + ast->dclk_table = ast_2000_dclk_table; 70 67 71 68 ast_2300_detect_tx_chip(ast); 72 69
+9 -1
drivers/gpu/drm/ast/ast_2500.c
··· 618 618 ast->support_wuxga = true; 619 619 } 620 620 621 + static const struct ast_device_quirks ast_2500_device_quirks = { 622 + .crtc_mem_req_threshold_low = 96, 623 + .crtc_mem_req_threshold_high = 120, 624 + .crtc_hsync_precatch_needed = true, 625 + }; 626 + 621 627 struct drm_device *ast_2500_device_create(struct pci_dev *pdev, 622 628 const struct drm_driver *drv, 623 629 enum ast_chip chip, ··· 641 635 return ERR_CAST(ast); 642 636 dev = &ast->base; 643 637 644 - ast_device_init(ast, chip, config_mode, regs, ioregs); 638 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2500_device_quirks); 639 + 640 + ast->dclk_table = ast_2500_dclk_table; 645 641 646 642 ast_2300_detect_tx_chip(ast); 647 643
+10 -1
drivers/gpu/drm/ast/ast_2600.c
··· 59 59 ast->support_wuxga = true; 60 60 } 61 61 62 + static const struct ast_device_quirks ast_2600_device_quirks = { 63 + .crtc_mem_req_threshold_low = 160, 64 + .crtc_mem_req_threshold_high = 224, 65 + .crtc_hsync_precatch_needed = true, 66 + .crtc_hsync_add4_needed = true, 67 + }; 68 + 62 69 struct drm_device *ast_2600_device_create(struct pci_dev *pdev, 63 70 const struct drm_driver *drv, 64 71 enum ast_chip chip, ··· 83 76 return ERR_CAST(ast); 84 77 dev = &ast->base; 85 78 86 - ast_device_init(ast, chip, config_mode, regs, ioregs); 79 + ast_device_init(ast, chip, config_mode, regs, ioregs, &ast_2600_device_quirks); 80 + 81 + ast->dclk_table = ast_2500_dclk_table; 87 82 88 83 ast_2300_detect_tx_chip(ast); 89 84
+3 -1
drivers/gpu/drm/ast/ast_drv.c
··· 51 51 enum ast_chip chip, 52 52 enum ast_config_mode config_mode, 53 53 void __iomem *regs, 54 - void __iomem *ioregs) 54 + void __iomem *ioregs, 55 + const struct ast_device_quirks *quirks) 55 56 { 57 + ast->quirks = quirks; 56 58 ast->chip = chip; 57 59 ast->config_mode = config_mode; 58 60 ast->regs = regs;
+26 -1
drivers/gpu/drm/ast/ast_drv.h
··· 164 164 * Device 165 165 */ 166 166 167 + struct ast_device_quirks { 168 + /* 169 + * CRTC memory request threshold 170 + */ 171 + unsigned char crtc_mem_req_threshold_low; 172 + unsigned char crtc_mem_req_threshold_high; 173 + 174 + /* 175 + * Adjust hsync values to load next scanline early. Signalled 176 + * by AST2500PreCatchCRT in VBIOS mode flags. 177 + */ 178 + bool crtc_hsync_precatch_needed; 179 + 180 + /* 181 + * Workaround for modes with HSync Time that is not a multiple 182 + * of 8 (e.g., 1920x1080@60Hz, HSync +44 pixels). 183 + */ 184 + bool crtc_hsync_add4_needed; 185 + }; 186 + 167 187 struct ast_device { 168 188 struct drm_device base; 189 + 190 + const struct ast_device_quirks *quirks; 169 191 170 192 void __iomem *regs; 171 193 void __iomem *ioregs; ··· 195 173 196 174 enum ast_config_mode config_mode; 197 175 enum ast_chip chip; 176 + 177 + const struct ast_vbios_dclk_info *dclk_table; 198 178 199 179 void __iomem *vram; 200 180 unsigned long vram_base; ··· 436 412 enum ast_chip chip, 437 413 enum ast_config_mode config_mode, 438 414 void __iomem *regs, 439 - void __iomem *ioregs); 415 + void __iomem *ioregs, 416 + const struct ast_device_quirks *quirks); 440 417 void __ast_device_set_tx_chip(struct ast_device *ast, enum ast_tx_chip tx_chip); 441 418 442 419 /* ast_2000.c */
+15 -31
drivers/gpu/drm/ast/ast_mode.c
··· 241 241 ast_set_index_reg(ast, AST_IO_VGAGRI, i, stdtable->gr[i]); 242 242 } 243 243 244 - static void ast_set_crtc_reg(struct ast_device *ast, 245 - struct drm_display_mode *mode, 244 + static void ast_set_crtc_reg(struct ast_device *ast, struct drm_display_mode *mode, 246 245 const struct ast_vbios_enhtable *vmode) 247 246 { 248 247 u8 jreg05 = 0, jreg07 = 0, jreg09 = 0, jregAC = 0, jregAD = 0, jregAE = 0; 249 - u16 temp, precache = 0; 248 + u16 temp; 249 + unsigned char crtc_hsync_precatch = 0; 250 250 251 - if ((IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) && 252 - (vmode->flags & AST2500PreCatchCRT)) 253 - precache = 40; 251 + if (ast->quirks->crtc_hsync_precatch_needed && (vmode->flags & AST2500PreCatchCRT)) 252 + crtc_hsync_precatch = 40; 254 253 255 254 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x11, 0x7f, 0x00); 256 255 ··· 275 276 jregAD |= 0x01; /* HBE D[5] */ 276 277 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x03, 0xE0, (temp & 0x1f)); 277 278 278 - temp = ((mode->crtc_hsync_start-precache) >> 3) - 1; 279 + temp = ((mode->crtc_hsync_start - crtc_hsync_precatch) >> 3) - 1; 279 280 if (temp & 0x100) 280 281 jregAC |= 0x40; /* HRS D[5] */ 281 282 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x04, 0x00, temp); 282 283 283 - temp = (((mode->crtc_hsync_end-precache) >> 3) - 1) & 0x3f; 284 + temp = (((mode->crtc_hsync_end - crtc_hsync_precatch) >> 3) - 1) & 0x3f; 284 285 if (temp & 0x20) 285 286 jregAD |= 0x04; /* HRE D[5] */ 286 287 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x05, 0x60, (u8)((temp & 0x1f) | jreg05)); ··· 288 289 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xAC, 0x00, jregAC); 289 290 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xAD, 0x00, jregAD); 290 291 291 - // Workaround for HSync Time non octave pixels (1920x1080@60Hz HSync 44 pixels); 292 - if (IS_AST_GEN7(ast) && (mode->crtc_vdisplay == 1080)) 292 + if (ast->quirks->crtc_hsync_add4_needed && mode->crtc_vdisplay == 1080) 293 293 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xFC, 0xFD, 0x02); 
294 294 else 295 295 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xFC, 0xFD, 0x00); ··· 346 348 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x09, 0xdf, jreg09); 347 349 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xAE, 0x00, (jregAE | 0x80)); 348 350 349 - if (precache) 351 + if (crtc_hsync_precatch) 350 352 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0x3f, 0x80); 351 353 else 352 354 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0x3f, 0x00); ··· 368 370 struct drm_display_mode *mode, 369 371 const struct ast_vbios_enhtable *vmode) 370 372 { 371 - const struct ast_vbios_dclk_info *clk_info; 372 - 373 - if (IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) 374 - clk_info = &ast_2500_dclk_table[vmode->dclk_index]; 375 - else 376 - clk_info = &ast_2000_dclk_table[vmode->dclk_index]; 373 + const struct ast_vbios_dclk_info *clk_info = &ast->dclk_table[vmode->dclk_index]; 377 374 378 375 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xc0, 0x00, clk_info->param1); 379 376 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xc1, 0x00, clk_info->param2); ··· 408 415 409 416 static void ast_set_crtthd_reg(struct ast_device *ast) 410 417 { 411 - /* Set Threshold */ 412 - if (IS_AST_GEN7(ast)) { 413 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa7, 0xe0); 414 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa6, 0xa0); 415 - } else if (IS_AST_GEN6(ast) || IS_AST_GEN5(ast) || IS_AST_GEN4(ast)) { 416 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa7, 0x78); 417 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa6, 0x60); 418 - } else if (IS_AST_GEN3(ast) || IS_AST_GEN2(ast)) { 419 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa7, 0x3f); 420 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa6, 0x2f); 421 - } else { 422 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa7, 0x2f); 423 - ast_set_index_reg(ast, AST_IO_VGACRI, 0xa6, 0x1f); 424 - } 418 + u8 vgacra6 = ast->quirks->crtc_mem_req_threshold_low; 419 + u8 vgacra7 = ast->quirks->crtc_mem_req_threshold_high; 420 + 421 + ast_set_index_reg(ast, AST_IO_VGACRI, 0xa7, vgacra7); 422 + 
ast_set_index_reg(ast, AST_IO_VGACRI, 0xa6, vgacra6); 425 423 } 426 424 427 425 static void ast_set_sync_reg(struct ast_device *ast,
+11 -10
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
··· 20 20 #include <drm/drm_atomic_helper.h> 21 21 #include <drm/drm_crtc.h> 22 22 #include <drm/drm_modeset_helper_vtables.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_probe_helper.h> 24 25 #include <drm/drm_vblank.h> 25 26 ··· 216 215 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 217 216 !(status & ATMEL_XLCDC_CM), 218 217 10, 1000)) 219 - dev_warn(dev->dev, "Atmel LCDC status register CMSTS timeout\n"); 218 + drm_warn(dev, "Atmel LCDC status register CMSTS timeout\n"); 220 219 221 220 regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_XLCDC_SD); 222 221 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 223 222 status & ATMEL_XLCDC_SD, 224 223 10, 1000)) 225 - dev_warn(dev->dev, "Atmel LCDC status register SDSTS timeout\n"); 224 + drm_warn(dev, "Atmel LCDC status register SDSTS timeout\n"); 226 225 } 227 226 228 227 regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_DISP); 229 228 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 230 229 !(status & ATMEL_HLCDC_DISP), 231 230 10, 1000)) 232 - dev_warn(dev->dev, "Atmel LCDC status register DISPSTS timeout\n"); 231 + drm_warn(dev, "Atmel LCDC status register DISPSTS timeout\n"); 233 232 234 233 regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_SYNC); 235 234 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 236 235 !(status & ATMEL_HLCDC_SYNC), 237 236 10, 1000)) 238 - dev_warn(dev->dev, "Atmel LCDC status register LCDSTS timeout\n"); 237 + drm_warn(dev, "Atmel LCDC status register LCDSTS timeout\n"); 239 238 240 239 regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_PIXEL_CLK); 241 240 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 242 241 !(status & ATMEL_HLCDC_PIXEL_CLK), 243 242 10, 1000)) 244 - dev_warn(dev->dev, "Atmel LCDC status register CLKSTS timeout\n"); 243 + drm_warn(dev, "Atmel LCDC status register CLKSTS timeout\n"); 245 244 246 245 clk_disable_unprepare(crtc->dc->hlcdc->sys_clk); 247 246 pinctrl_pm_select_sleep_state(dev->dev); ··· 
270 269 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 271 270 status & ATMEL_HLCDC_PIXEL_CLK, 272 271 10, 1000)) 273 - dev_warn(dev->dev, "Atmel LCDC status register CLKSTS timeout\n"); 272 + drm_warn(dev, "Atmel LCDC status register CLKSTS timeout\n"); 274 273 275 274 regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_SYNC); 276 275 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 277 276 status & ATMEL_HLCDC_SYNC, 278 277 10, 1000)) 279 - dev_warn(dev->dev, "Atmel LCDC status register LCDSTS timeout\n"); 278 + drm_warn(dev, "Atmel LCDC status register LCDSTS timeout\n"); 280 279 281 280 regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_DISP); 282 281 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 283 282 status & ATMEL_HLCDC_DISP, 284 283 10, 1000)) 285 - dev_warn(dev->dev, "Atmel LCDC status register DISPSTS timeout\n"); 284 + drm_warn(dev, "Atmel LCDC status register DISPSTS timeout\n"); 286 285 287 286 if (crtc->dc->desc->is_xlcdc) { 288 287 regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_XLCDC_CM); 289 288 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 290 289 status & ATMEL_XLCDC_CM, 291 290 10, 1000)) 292 - dev_warn(dev->dev, "Atmel LCDC status register CMSTS timeout\n"); 291 + drm_warn(dev, "Atmel LCDC status register CMSTS timeout\n"); 293 292 294 293 regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_XLCDC_SD); 295 294 if (regmap_read_poll_timeout(regmap, ATMEL_HLCDC_SR, status, 296 295 !(status & ATMEL_XLCDC_SD), 297 296 10, 1000)) 298 - dev_warn(dev->dev, "Atmel LCDC status register SDSTS timeout\n"); 297 + drm_warn(dev, "Atmel LCDC status register SDSTS timeout\n"); 299 298 } 300 299 301 300 pm_runtime_put_sync(dev->dev);
+7 -7
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
··· 724 724 725 725 ret = atmel_hlcdc_create_outputs(dev); 726 726 if (ret) { 727 - dev_err(dev->dev, "failed to create HLCDC outputs: %d\n", ret); 727 + drm_err(dev, "failed to create HLCDC outputs: %d\n", ret); 728 728 return ret; 729 729 } 730 730 731 731 ret = atmel_hlcdc_create_planes(dev); 732 732 if (ret) { 733 - dev_err(dev->dev, "failed to create planes: %d\n", ret); 733 + drm_err(dev, "failed to create planes: %d\n", ret); 734 734 return ret; 735 735 } 736 736 737 737 ret = atmel_hlcdc_crtc_create(dev); 738 738 if (ret) { 739 - dev_err(dev->dev, "failed to create crtc\n"); 739 + drm_err(dev, "failed to create crtc\n"); 740 740 return ret; 741 741 } 742 742 ··· 778 778 779 779 ret = clk_prepare_enable(dc->hlcdc->periph_clk); 780 780 if (ret) { 781 - dev_err(dev->dev, "failed to enable periph_clk\n"); 781 + drm_err(dev, "failed to enable periph_clk\n"); 782 782 return ret; 783 783 } 784 784 ··· 786 786 787 787 ret = drm_vblank_init(dev, 1); 788 788 if (ret < 0) { 789 - dev_err(dev->dev, "failed to initialize vblank\n"); 789 + drm_err(dev, "failed to initialize vblank\n"); 790 790 goto err_periph_clk_disable; 791 791 } 792 792 793 793 ret = atmel_hlcdc_dc_modeset_init(dev); 794 794 if (ret < 0) { 795 - dev_err(dev->dev, "failed to initialize mode setting\n"); 795 + drm_err(dev, "failed to initialize mode setting\n"); 796 796 goto err_periph_clk_disable; 797 797 } 798 798 ··· 802 802 ret = atmel_hlcdc_dc_irq_install(dev, dc->hlcdc->irq); 803 803 pm_runtime_put_sync(dev->dev); 804 804 if (ret < 0) { 805 - dev_err(dev->dev, "failed to install IRQ handler\n"); 805 + drm_err(dev, "failed to install IRQ handler\n"); 806 806 goto err_periph_clk_disable; 807 807 } 808 808
+2 -1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
··· 378 378 void (*lcdc_update_buffers)(struct atmel_hlcdc_plane *plane, 379 379 struct atmel_hlcdc_plane_state *state, 380 380 u32 sr, int i); 381 - void (*lcdc_atomic_disable)(struct atmel_hlcdc_plane *plane); 381 + void (*lcdc_atomic_disable)(struct atmel_hlcdc_plane *plane, 382 + struct atmel_hlcdc_dc *dc); 382 383 void (*lcdc_update_general_settings)(struct atmel_hlcdc_plane *plane, 383 384 struct atmel_hlcdc_plane_state *state); 384 385 void (*lcdc_atomic_update)(struct atmel_hlcdc_plane *plane,
+2 -1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
··· 15 15 #include <drm/drm_bridge.h> 16 16 #include <drm/drm_encoder.h> 17 17 #include <drm/drm_of.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_simple_kms_helper.h> 19 20 20 21 #include "atmel_hlcdc_dc.h" ··· 93 92 output->bus_fmt = atmel_hlcdc_of_bus_fmt(ep); 94 93 of_node_put(ep); 95 94 if (output->bus_fmt < 0) { 96 - dev_err(dev->dev, "endpoint %d: invalid bus width\n", endpoint); 95 + drm_err(dev, "endpoint %d: invalid bus width\n", endpoint); 97 96 return -EINVAL; 98 97 } 99 98
+42 -10
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 365 365 xfactor); 366 366 367 367 /* 368 - * With YCbCr 4:2:2 and YCbYcr 4:2:0 window resampling, configuration 369 - * register LCDC_HEOCFG25.VXSCFACT and LCDC_HEOCFG27.HXSCFACT is half 368 + * With YCbCr 4:2:0 window resampling, configuration register 369 + * LCDC_HEOCFG25.VXSCFACT and LCDC_HEOCFG27.HXSCFACT values are half 370 370 * the value of yfactor and xfactor. 371 + * 372 + * On the other hand, with YCbCr 4:2:2 window resampling, only the 373 + * configuration register LCDC_HEOCFG27.HXSCFACT value is half the value 374 + * of the xfactor; the value of LCDC_HEOCFG25.VXSCFACT is yfactor (no 375 + * division by 2). 371 376 */ 372 - if (state->base.fb->format->format == DRM_FORMAT_YUV420) { 377 + switch (state->base.fb->format->format) { 378 + /* YCbCr 4:2:2 */ 379 + case DRM_FORMAT_YUYV: 380 + case DRM_FORMAT_UYVY: 381 + case DRM_FORMAT_YVYU: 382 + case DRM_FORMAT_VYUY: 383 + case DRM_FORMAT_YUV422: 384 + case DRM_FORMAT_NV61: 385 + xfactor /= 2; 386 + break; 387 + 388 + /* YCbCr 4:2:0 */ 389 + case DRM_FORMAT_YUV420: 390 + case DRM_FORMAT_NV21: 373 391 yfactor /= 2; 374 392 xfactor /= 2; 393 + break; 394 + default: 395 + break; 375 396 } 376 397 377 398 atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.scaler_config + 2, ··· 735 714 if (!hstate->base.crtc || WARN_ON(!fb)) 736 715 return 0; 737 716 738 - crtc_state = drm_atomic_get_existing_crtc_state(state, s->crtc); 717 + crtc_state = drm_atomic_get_new_crtc_state(state, s->crtc); 739 718 mode = &crtc_state->adjusted_mode; 740 719 741 720 ret = drm_atomic_helper_check_plane_state(s, crtc_state, ··· 837 816 return 0; 838 817 } 839 818 840 - static void atmel_hlcdc_atomic_disable(struct atmel_hlcdc_plane *plane) 819 + static void atmel_hlcdc_atomic_disable(struct atmel_hlcdc_plane *plane, 820 + struct atmel_hlcdc_dc *dc) 841 821 { 842 822 /* Disable interrupts */ 843 823 atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IDR, ··· 854 832 atmel_hlcdc_layer_read_reg(&plane->layer, 
ATMEL_HLCDC_LAYER_ISR); 855 833 } 856 834 857 - static void atmel_xlcdc_atomic_disable(struct atmel_hlcdc_plane *plane) 835 + static void atmel_xlcdc_atomic_disable(struct atmel_hlcdc_plane *plane, 836 + struct atmel_hlcdc_dc *dc) 858 837 { 859 838 /* Disable interrupts */ 860 839 atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_XLCDC_LAYER_IDR, ··· 864 841 /* Disable the layer */ 865 842 atmel_hlcdc_layer_write_reg(&plane->layer, 866 843 ATMEL_XLCDC_LAYER_ENR, 0); 844 + 845 + /* 846 + * Updating XLCDC_xxxCFGx, XLCDC_xxxFBA and XLCDC_xxxEN, 847 + * (where xxx indicates each layer) requires writing one to the 848 + * Update Attribute field for each layer in LCDC_ATTRE register for SAM9X7. 849 + */ 850 + regmap_write(dc->hlcdc->regmap, ATMEL_XLCDC_ATTRE, ATMEL_XLCDC_BASE_UPDATE | 851 + ATMEL_XLCDC_OVR1_UPDATE | ATMEL_XLCDC_OVR3_UPDATE | 852 + ATMEL_XLCDC_HEO_UPDATE); 867 853 868 854 /* Clear all pending interrupts */ 869 855 atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_XLCDC_LAYER_ISR); ··· 884 852 struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p); 885 853 struct atmel_hlcdc_dc *dc = plane->base.dev->dev_private; 886 854 887 - dc->desc->ops->lcdc_atomic_disable(plane); 855 + dc->desc->ops->lcdc_atomic_disable(plane, dc); 888 856 } 889 857 890 858 static void atmel_hlcdc_atomic_update(struct atmel_hlcdc_plane *plane, ··· 1066 1034 if (isr & 1067 1035 (ATMEL_HLCDC_LAYER_OVR_IRQ(0) | ATMEL_HLCDC_LAYER_OVR_IRQ(1) | 1068 1036 ATMEL_HLCDC_LAYER_OVR_IRQ(2))) 1069 - dev_dbg(plane->base.dev->dev, "overrun on plane %s\n", 1037 + drm_dbg(plane->base.dev, "overrun on plane %s\n", 1070 1038 desc->name); 1071 1039 } 1072 1040 ··· 1083 1051 if (isr & 1084 1052 (ATMEL_XLCDC_LAYER_OVR_IRQ(0) | ATMEL_XLCDC_LAYER_OVR_IRQ(1) | 1085 1053 ATMEL_XLCDC_LAYER_OVR_IRQ(2))) 1086 - dev_dbg(plane->base.dev->dev, "overrun on plane %s\n", 1054 + drm_dbg(plane->base.dev, "overrun on plane %s\n", 1087 1055 desc->name); 1088 1056 } 1089 1057 ··· 1172 1140 if (state) { 1173 1141 
if (atmel_hlcdc_plane_alloc_dscrs(p, state)) { 1174 1142 kfree(state); 1175 - dev_err(p->dev->dev, 1143 + drm_err(p->dev, 1176 1144 "Failed to allocate initial plane state\n"); 1177 1145 return; 1178 1146 }
+8
drivers/gpu/drm/bridge/synopsys/Kconfig
··· 61 61 select DRM_KMS_HELPER 62 62 select REGMAP_MMIO 63 63 64 + config DRM_DW_HDMI_QP_CEC 65 + bool "Synopsys Designware QP CEC interface" 66 + depends on DRM_DW_HDMI_QP 67 + select DRM_DISPLAY_HDMI_CEC_HELPER 68 + help 69 + Support the CEC interface which is part of the Synopsys 70 + Designware HDMI QP block. 71 + 64 72 config DRM_DW_MIPI_DSI 65 73 tristate 66 74 select DRM_KMS_HELPER
+221 -3
drivers/gpu/drm/bridge/synopsys/dw-hdmi-qp.c
··· 18 18 19 19 #include <drm/bridge/dw_hdmi_qp.h> 20 20 #include <drm/display/drm_hdmi_helper.h> 21 + #include <drm/display/drm_hdmi_cec_helper.h> 21 22 #include <drm/display/drm_hdmi_state_helper.h> 22 23 #include <drm/drm_atomic.h> 23 24 #include <drm/drm_atomic_helper.h> ··· 26 25 #include <drm/drm_connector.h> 27 26 #include <drm/drm_edid.h> 28 27 #include <drm/drm_modes.h> 28 + 29 + #include <media/cec.h> 29 30 30 31 #include <sound/hdmi-codec.h> 31 32 ··· 134 131 bool is_segment; 135 132 }; 136 133 134 + #ifdef CONFIG_DRM_DW_HDMI_QP_CEC 135 + struct dw_hdmi_qp_cec { 136 + struct drm_connector *connector; 137 + int irq; 138 + u32 addresses; 139 + struct cec_msg rx_msg; 140 + u8 tx_status; 141 + bool tx_done; 142 + bool rx_done; 143 + }; 144 + #endif 145 + 137 146 struct dw_hdmi_qp { 138 147 struct drm_bridge bridge; 139 148 140 149 struct device *dev; 141 150 struct dw_hdmi_qp_i2c *i2c; 142 151 152 + #ifdef CONFIG_DRM_DW_HDMI_QP_CEC 153 + struct dw_hdmi_qp_cec *cec; 154 + #endif 155 + 143 156 struct { 144 157 const struct dw_hdmi_qp_phy_ops *ops; 145 158 void *data; 146 159 } phy; 147 160 161 + unsigned long ref_clk_rate; 148 162 struct regmap *regm; 149 163 150 164 unsigned long tmds_char_rate; ··· 985 965 } 986 966 } 987 967 968 + #ifdef CONFIG_DRM_DW_HDMI_QP_CEC 969 + static irqreturn_t dw_hdmi_qp_cec_hardirq(int irq, void *dev_id) 970 + { 971 + struct dw_hdmi_qp *hdmi = dev_id; 972 + struct dw_hdmi_qp_cec *cec = hdmi->cec; 973 + irqreturn_t ret = IRQ_HANDLED; 974 + u32 stat; 975 + 976 + stat = dw_hdmi_qp_read(hdmi, CEC_INT_STATUS); 977 + if (stat == 0) 978 + return IRQ_NONE; 979 + 980 + dw_hdmi_qp_write(hdmi, stat, CEC_INT_CLEAR); 981 + 982 + if (stat & CEC_STAT_LINE_ERR) { 983 + cec->tx_status = CEC_TX_STATUS_ERROR; 984 + cec->tx_done = true; 985 + ret = IRQ_WAKE_THREAD; 986 + } else if (stat & CEC_STAT_DONE) { 987 + cec->tx_status = CEC_TX_STATUS_OK; 988 + cec->tx_done = true; 989 + ret = IRQ_WAKE_THREAD; 990 + } else if (stat & CEC_STAT_NACK) { 991 + 
cec->tx_status = CEC_TX_STATUS_NACK; 992 + cec->tx_done = true; 993 + ret = IRQ_WAKE_THREAD; 994 + } 995 + 996 + if (stat & CEC_STAT_EOM) { 997 + unsigned int len, i, val; 998 + 999 + val = dw_hdmi_qp_read(hdmi, CEC_RX_COUNT_STATUS); 1000 + len = (val & 0xf) + 1; 1001 + 1002 + if (len > sizeof(cec->rx_msg.msg)) 1003 + len = sizeof(cec->rx_msg.msg); 1004 + 1005 + for (i = 0; i < 4; i++) { 1006 + val = dw_hdmi_qp_read(hdmi, CEC_RX_DATA3_0 + i * 4); 1007 + cec->rx_msg.msg[i * 4] = val & 0xff; 1008 + cec->rx_msg.msg[i * 4 + 1] = (val >> 8) & 0xff; 1009 + cec->rx_msg.msg[i * 4 + 2] = (val >> 16) & 0xff; 1010 + cec->rx_msg.msg[i * 4 + 3] = (val >> 24) & 0xff; 1011 + } 1012 + 1013 + dw_hdmi_qp_write(hdmi, 1, CEC_LOCK_CONTROL); 1014 + 1015 + cec->rx_msg.len = len; 1016 + cec->rx_done = true; 1017 + 1018 + ret = IRQ_WAKE_THREAD; 1019 + } 1020 + 1021 + return ret; 1022 + } 1023 + 1024 + static irqreturn_t dw_hdmi_qp_cec_thread(int irq, void *dev_id) 1025 + { 1026 + struct dw_hdmi_qp *hdmi = dev_id; 1027 + struct dw_hdmi_qp_cec *cec = hdmi->cec; 1028 + 1029 + if (cec->tx_done) { 1030 + cec->tx_done = false; 1031 + drm_connector_hdmi_cec_transmit_attempt_done(cec->connector, 1032 + cec->tx_status); 1033 + } 1034 + 1035 + if (cec->rx_done) { 1036 + cec->rx_done = false; 1037 + drm_connector_hdmi_cec_received_msg(cec->connector, &cec->rx_msg); 1038 + } 1039 + 1040 + return IRQ_HANDLED; 1041 + } 1042 + 1043 + static int dw_hdmi_qp_cec_init(struct drm_bridge *bridge, 1044 + struct drm_connector *connector) 1045 + { 1046 + struct dw_hdmi_qp *hdmi = dw_hdmi_qp_from_bridge(bridge); 1047 + struct dw_hdmi_qp_cec *cec = hdmi->cec; 1048 + 1049 + cec->connector = connector; 1050 + 1051 + dw_hdmi_qp_write(hdmi, 0, CEC_TX_COUNT); 1052 + dw_hdmi_qp_write(hdmi, ~0, CEC_INT_CLEAR); 1053 + dw_hdmi_qp_write(hdmi, 0, CEC_INT_MASK_N); 1054 + 1055 + return devm_request_threaded_irq(hdmi->dev, cec->irq, 1056 + dw_hdmi_qp_cec_hardirq, 1057 + dw_hdmi_qp_cec_thread, IRQF_SHARED, 1058 + 
dev_name(hdmi->dev), hdmi); 1059 + } 1060 + 1061 + static int dw_hdmi_qp_cec_log_addr(struct drm_bridge *bridge, u8 logical_addr) 1062 + { 1063 + struct dw_hdmi_qp *hdmi = dw_hdmi_qp_from_bridge(bridge); 1064 + struct dw_hdmi_qp_cec *cec = hdmi->cec; 1065 + 1066 + if (logical_addr == CEC_LOG_ADDR_INVALID) 1067 + cec->addresses = 0; 1068 + else 1069 + cec->addresses |= BIT(logical_addr) | CEC_ADDR_BROADCAST; 1070 + 1071 + dw_hdmi_qp_write(hdmi, cec->addresses, CEC_ADDR); 1072 + 1073 + return 0; 1074 + } 1075 + 1076 + static int dw_hdmi_qp_cec_enable(struct drm_bridge *bridge, bool enable) 1077 + { 1078 + struct dw_hdmi_qp *hdmi = dw_hdmi_qp_from_bridge(bridge); 1079 + unsigned int irqs; 1080 + u32 swdisable; 1081 + 1082 + if (!enable) { 1083 + dw_hdmi_qp_write(hdmi, 0, CEC_INT_MASK_N); 1084 + dw_hdmi_qp_write(hdmi, ~0, CEC_INT_CLEAR); 1085 + 1086 + swdisable = dw_hdmi_qp_read(hdmi, GLOBAL_SWDISABLE); 1087 + swdisable = swdisable | CEC_SWDISABLE; 1088 + dw_hdmi_qp_write(hdmi, swdisable, GLOBAL_SWDISABLE); 1089 + } else { 1090 + swdisable = dw_hdmi_qp_read(hdmi, GLOBAL_SWDISABLE); 1091 + swdisable = swdisable & ~CEC_SWDISABLE; 1092 + dw_hdmi_qp_write(hdmi, swdisable, GLOBAL_SWDISABLE); 1093 + 1094 + dw_hdmi_qp_write(hdmi, ~0, CEC_INT_CLEAR); 1095 + dw_hdmi_qp_write(hdmi, 1, CEC_LOCK_CONTROL); 1096 + 1097 + dw_hdmi_qp_cec_log_addr(bridge, CEC_LOG_ADDR_INVALID); 1098 + 1099 + irqs = CEC_STAT_LINE_ERR | CEC_STAT_NACK | CEC_STAT_EOM | 1100 + CEC_STAT_DONE; 1101 + dw_hdmi_qp_write(hdmi, ~0, CEC_INT_CLEAR); 1102 + dw_hdmi_qp_write(hdmi, irqs, CEC_INT_MASK_N); 1103 + } 1104 + 1105 + return 0; 1106 + } 1107 + 1108 + static int dw_hdmi_qp_cec_transmit(struct drm_bridge *bridge, u8 attempts, 1109 + u32 signal_free_time, struct cec_msg *msg) 1110 + { 1111 + struct dw_hdmi_qp *hdmi = dw_hdmi_qp_from_bridge(bridge); 1112 + unsigned int i; 1113 + u32 val; 1114 + 1115 + for (i = 0; i < msg->len; i++) { 1116 + if (!(i % 4)) 1117 + val = msg->msg[i]; 1118 + if ((i % 4) == 1) 1119 + 
val |= msg->msg[i] << 8; 1120 + if ((i % 4) == 2) 1121 + val |= msg->msg[i] << 16; 1122 + if ((i % 4) == 3) 1123 + val |= msg->msg[i] << 24; 1124 + 1125 + if (i == (msg->len - 1) || (i % 4) == 3) 1126 + dw_hdmi_qp_write(hdmi, val, CEC_TX_DATA3_0 + (i / 4) * 4); 1127 + } 1128 + 1129 + dw_hdmi_qp_write(hdmi, msg->len - 1, CEC_TX_COUNT); 1130 + dw_hdmi_qp_write(hdmi, CEC_CTRL_START, CEC_TX_CONTROL); 1131 + 1132 + return 0; 1133 + } 1134 + #else 1135 + #define dw_hdmi_qp_cec_init NULL 1136 + #define dw_hdmi_qp_cec_enable NULL 1137 + #define dw_hdmi_qp_cec_log_addr NULL 1138 + #define dw_hdmi_qp_cec_transmit NULL 1139 + #endif /* CONFIG_DRM_DW_HDMI_QP_CEC */ 1140 + 988 1141 static const struct drm_bridge_funcs dw_hdmi_qp_bridge_funcs = { 989 1142 .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 990 1143 .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, ··· 1172 979 .hdmi_audio_startup = dw_hdmi_qp_audio_enable, 1173 980 .hdmi_audio_shutdown = dw_hdmi_qp_audio_disable, 1174 981 .hdmi_audio_prepare = dw_hdmi_qp_audio_prepare, 982 + .hdmi_cec_init = dw_hdmi_qp_cec_init, 983 + .hdmi_cec_enable = dw_hdmi_qp_cec_enable, 984 + .hdmi_cec_log_addr = dw_hdmi_qp_cec_log_addr, 985 + .hdmi_cec_transmit = dw_hdmi_qp_cec_transmit, 1175 986 }; 1176 987 1177 988 static irqreturn_t dw_hdmi_qp_main_hardirq(int irq, void *dev_id) ··· 1211 1014 { 1212 1015 dw_hdmi_qp_write(hdmi, 0, MAINUNIT_0_INT_MASK_N); 1213 1016 dw_hdmi_qp_write(hdmi, 0, MAINUNIT_1_INT_MASK_N); 1214 - dw_hdmi_qp_write(hdmi, 428571429, TIMER_BASE_CONFIG0); 1017 + dw_hdmi_qp_write(hdmi, hdmi->ref_clk_rate, TIMER_BASE_CONFIG0); 1215 1018 1216 1019 /* Software reset */ 1217 1020 dw_hdmi_qp_write(hdmi, 0x01, I2CM_CONTROL0); 1218 - 1219 1021 dw_hdmi_qp_write(hdmi, 0x085c085c, I2CM_FM_SCL_CONFIG0); 1220 - 1221 1022 dw_hdmi_qp_mod(hdmi, 0, I2CM_FM_EN, I2CM_INTERFACE_CONTROL0); 1222 1023 1223 1024 /* Clear DONE and ERROR interrupts */ ··· 1261 1066 hdmi->phy.ops = plat_data->phy_ops; 1262 1067 
hdmi->phy.data = plat_data->phy_data; 1263 1068 1069 + if (plat_data->ref_clk_rate) { 1070 + hdmi->ref_clk_rate = plat_data->ref_clk_rate; 1071 + } else { 1072 + hdmi->ref_clk_rate = 428571429; 1073 + dev_warn(dev, "Set ref_clk_rate to vendor default\n"); 1074 + } 1075 + 1264 1076 dw_hdmi_qp_init_hw(hdmi); 1265 1077 1266 1078 ret = devm_request_threaded_irq(dev, plat_data->main_irq, ··· 1294 1092 hdmi->bridge.hdmi_audio_max_i2s_playback_channels = 8; 1295 1093 hdmi->bridge.hdmi_audio_dev = dev; 1296 1094 hdmi->bridge.hdmi_audio_dai_port = 1; 1095 + 1096 + #ifdef CONFIG_DRM_DW_HDMI_QP_CEC 1097 + if (plat_data->cec_irq) { 1098 + hdmi->bridge.ops |= DRM_BRIDGE_OP_HDMI_CEC_ADAPTER; 1099 + hdmi->bridge.hdmi_cec_dev = dev; 1100 + hdmi->bridge.hdmi_cec_adapter_name = dev_name(dev); 1101 + 1102 + hdmi->cec = devm_kzalloc(hdmi->dev, sizeof(*hdmi->cec), GFP_KERNEL); 1103 + if (!hdmi->cec) 1104 + return ERR_PTR(-ENOMEM); 1105 + 1106 + hdmi->cec->irq = plat_data->cec_irq; 1107 + } else { 1108 + dev_warn(dev, "Disabled CEC support due to missing IRQ\n"); 1109 + } 1110 + #endif 1297 1111 1298 1112 ret = devm_drm_bridge_add(dev, &hdmi->bridge); 1299 1113 if (ret)
+14
drivers/gpu/drm/bridge/synopsys/dw-hdmi-qp.h
··· 488 488 #define AUDPKT_VBIT_OVR0 0xf24 489 489 /* CEC Registers */ 490 490 #define CEC_TX_CONTROL 0x1000 491 + #define CEC_CTRL_CLEAR BIT(0) 492 + #define CEC_CTRL_START BIT(0) 491 493 #define CEC_STATUS 0x1004 494 + #define CEC_STAT_DONE BIT(0) 495 + #define CEC_STAT_NACK BIT(1) 496 + #define CEC_STAT_ARBLOST BIT(2) 497 + #define CEC_STAT_LINE_ERR BIT(3) 498 + #define CEC_STAT_RETRANS_FAIL BIT(4) 499 + #define CEC_STAT_DISCARD BIT(5) 500 + #define CEC_STAT_TX_BUSY BIT(8) 501 + #define CEC_STAT_RX_BUSY BIT(9) 502 + #define CEC_STAT_DRIVE_ERR BIT(10) 503 + #define CEC_STAT_EOM BIT(11) 504 + #define CEC_STAT_NOTIFY_ERR BIT(12) 492 505 #define CEC_CONFIG 0x1008 493 506 #define CEC_ADDR 0x100c 507 + #define CEC_ADDR_BROADCAST BIT(15) 494 508 #define CEC_TX_COUNT 0x1020 495 509 #define CEC_TX_DATA3_0 0x1024 496 510 #define CEC_TX_DATA7_4 0x1028
+4 -10
drivers/gpu/drm/clients/drm_fbdev_client.c
··· 62 62 return ret; 63 63 } 64 64 65 - static int drm_fbdev_client_suspend(struct drm_client_dev *client, bool holds_console_lock) 65 + static int drm_fbdev_client_suspend(struct drm_client_dev *client) 66 66 { 67 67 struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); 68 68 69 - if (holds_console_lock) 70 - drm_fb_helper_set_suspend(fb_helper, true); 71 - else 72 - drm_fb_helper_set_suspend_unlocked(fb_helper, true); 69 + drm_fb_helper_set_suspend_unlocked(fb_helper, true); 73 70 74 71 return 0; 75 72 } 76 73 77 - static int drm_fbdev_client_resume(struct drm_client_dev *client, bool holds_console_lock) 74 + static int drm_fbdev_client_resume(struct drm_client_dev *client) 78 75 { 79 76 struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); 80 77 81 - if (holds_console_lock) 82 - drm_fb_helper_set_suspend(fb_helper, false); 83 - else 84 - drm_fb_helper_set_suspend_unlocked(fb_helper, false); 78 + drm_fb_helper_set_suspend_unlocked(fb_helper, false); 85 79 86 80 return 0; 87 81 }
+2 -2
drivers/gpu/drm/clients/drm_log.c
··· 319 319 return 0; 320 320 } 321 321 322 - static int drm_log_client_suspend(struct drm_client_dev *client, bool _console_lock) 322 + static int drm_log_client_suspend(struct drm_client_dev *client) 323 323 { 324 324 struct drm_log *dlog = client_to_drm_log(client); 325 325 ··· 328 328 return 0; 329 329 } 330 330 331 - static int drm_log_client_resume(struct drm_client_dev *client, bool _console_lock) 331 + static int drm_log_client_resume(struct drm_client_dev *client) 332 332 { 333 333 struct drm_log *dlog = client_to_drm_log(client); 334 334
+78 -36
drivers/gpu/drm/display/drm_bridge_connector.c
··· 618 618 * Bridge Connector Initialisation 619 619 */ 620 620 621 + static void drm_bridge_connector_put_bridges(struct drm_device *dev, void *data) 622 + { 623 + struct drm_bridge_connector *bridge_connector = (struct drm_bridge_connector *)data; 624 + 625 + drm_bridge_put(bridge_connector->bridge_edid); 626 + drm_bridge_put(bridge_connector->bridge_hpd); 627 + drm_bridge_put(bridge_connector->bridge_detect); 628 + drm_bridge_put(bridge_connector->bridge_modes); 629 + drm_bridge_put(bridge_connector->bridge_hdmi); 630 + drm_bridge_put(bridge_connector->bridge_hdmi_audio); 631 + drm_bridge_put(bridge_connector->bridge_dp_audio); 632 + drm_bridge_put(bridge_connector->bridge_hdmi_cec); 633 + } 634 + 621 635 /** 622 636 * drm_bridge_connector_init - Initialise a connector for a chain of bridges 623 637 * @drm: the DRM device ··· 652 638 struct drm_bridge_connector *bridge_connector; 653 639 struct drm_connector *connector; 654 640 struct i2c_adapter *ddc = NULL; 655 - struct drm_bridge *panel_bridge = NULL; 641 + struct drm_bridge *panel_bridge __free(drm_bridge_put) = NULL; 642 + struct drm_bridge *bridge_edid __free(drm_bridge_put) = NULL; 643 + struct drm_bridge *bridge_hpd __free(drm_bridge_put) = NULL; 644 + struct drm_bridge *bridge_detect __free(drm_bridge_put) = NULL; 645 + struct drm_bridge *bridge_modes __free(drm_bridge_put) = NULL; 646 + struct drm_bridge *bridge_hdmi __free(drm_bridge_put) = NULL; 647 + struct drm_bridge *bridge_hdmi_audio __free(drm_bridge_put) = NULL; 648 + struct drm_bridge *bridge_dp_audio __free(drm_bridge_put) = NULL; 649 + struct drm_bridge *bridge_hdmi_cec __free(drm_bridge_put) = NULL; 656 650 unsigned int supported_formats = BIT(HDMI_COLORSPACE_RGB); 657 651 unsigned int max_bpc = 8; 658 652 bool support_hdcp = false; ··· 670 648 bridge_connector = drmm_kzalloc(drm, sizeof(*bridge_connector), GFP_KERNEL); 671 649 if (!bridge_connector) 672 650 return ERR_PTR(-ENOMEM); 651 + 652 + ret = drmm_add_action(drm, 
drm_bridge_connector_put_bridges, bridge_connector); 653 + if (ret) 654 + return ERR_PTR(ret); 673 655 674 656 bridge_connector->encoder = encoder; 675 657 ··· 698 672 if (!bridge->ycbcr_420_allowed) 699 673 connector->ycbcr_420_allowed = false; 700 674 701 - if (bridge->ops & DRM_BRIDGE_OP_EDID) 702 - bridge_connector->bridge_edid = bridge; 703 - if (bridge->ops & DRM_BRIDGE_OP_HPD) 704 - bridge_connector->bridge_hpd = bridge; 705 - if (bridge->ops & DRM_BRIDGE_OP_DETECT) 706 - bridge_connector->bridge_detect = bridge; 707 - if (bridge->ops & DRM_BRIDGE_OP_MODES) 708 - bridge_connector->bridge_modes = bridge; 675 + if (bridge->ops & DRM_BRIDGE_OP_EDID) { 676 + drm_bridge_put(bridge_edid); 677 + bridge_edid = drm_bridge_get(bridge); 678 + } 679 + if (bridge->ops & DRM_BRIDGE_OP_HPD) { 680 + drm_bridge_put(bridge_hpd); 681 + bridge_hpd = drm_bridge_get(bridge); 682 + } 683 + if (bridge->ops & DRM_BRIDGE_OP_DETECT) { 684 + drm_bridge_put(bridge_detect); 685 + bridge_detect = drm_bridge_get(bridge); 686 + } 687 + if (bridge->ops & DRM_BRIDGE_OP_MODES) { 688 + drm_bridge_put(bridge_modes); 689 + bridge_modes = drm_bridge_get(bridge); 690 + } 709 691 if (bridge->ops & DRM_BRIDGE_OP_HDMI) { 710 - if (bridge_connector->bridge_hdmi) 692 + if (bridge_hdmi) 711 693 return ERR_PTR(-EBUSY); 712 694 if (!bridge->funcs->hdmi_write_infoframe || 713 695 !bridge->funcs->hdmi_clear_infoframe) 714 696 return ERR_PTR(-EINVAL); 715 697 716 - bridge_connector->bridge_hdmi = bridge; 698 + bridge_hdmi = drm_bridge_get(bridge); 717 699 718 700 if (bridge->supported_formats) 719 701 supported_formats = bridge->supported_formats; ··· 730 696 } 731 697 732 698 if (bridge->ops & DRM_BRIDGE_OP_HDMI_AUDIO) { 733 - if (bridge_connector->bridge_hdmi_audio) 699 + if (bridge_hdmi_audio) 734 700 return ERR_PTR(-EBUSY); 735 701 736 - if (bridge_connector->bridge_dp_audio) 702 + if (bridge_dp_audio) 737 703 return ERR_PTR(-EBUSY); 738 704 739 705 if (!bridge->hdmi_audio_max_i2s_playback_channels && ··· 
744 710 !bridge->funcs->hdmi_audio_shutdown) 745 711 return ERR_PTR(-EINVAL); 746 712 747 - bridge_connector->bridge_hdmi_audio = bridge; 713 + bridge_hdmi_audio = drm_bridge_get(bridge); 748 714 } 749 715 750 716 if (bridge->ops & DRM_BRIDGE_OP_DP_AUDIO) { 751 - if (bridge_connector->bridge_dp_audio) 717 + if (bridge_dp_audio) 752 718 return ERR_PTR(-EBUSY); 753 719 754 - if (bridge_connector->bridge_hdmi_audio) 720 + if (bridge_hdmi_audio) 755 721 return ERR_PTR(-EBUSY); 756 722 757 723 if (!bridge->hdmi_audio_max_i2s_playback_channels && ··· 762 728 !bridge->funcs->dp_audio_shutdown) 763 729 return ERR_PTR(-EINVAL); 764 730 765 - bridge_connector->bridge_dp_audio = bridge; 731 + bridge_dp_audio = drm_bridge_get(bridge); 766 732 } 767 733 768 734 if (bridge->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { ··· 773 739 } 774 740 775 741 if (bridge->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 776 - if (bridge_connector->bridge_hdmi_cec) 742 + if (bridge_hdmi_cec) 777 743 return ERR_PTR(-EBUSY); 778 744 779 - bridge_connector->bridge_hdmi_cec = bridge; 745 + bridge_hdmi_cec = drm_bridge_get(bridge); 780 746 781 747 if (!bridge->funcs->hdmi_cec_enable || 782 748 !bridge->funcs->hdmi_cec_log_addr || ··· 796 762 ddc = bridge->ddc; 797 763 798 764 if (drm_bridge_is_panel(bridge)) 799 - panel_bridge = bridge; 765 + panel_bridge = drm_bridge_get(bridge); 800 766 801 767 if (bridge->support_hdcp) 802 768 support_hdcp = true; ··· 805 771 if (connector_type == DRM_MODE_CONNECTOR_Unknown) 806 772 return ERR_PTR(-EINVAL); 807 773 808 - if (bridge_connector->bridge_hdmi) { 774 + if (bridge_hdmi) { 809 775 if (!connector->ycbcr_420_allowed) 810 776 supported_formats &= ~BIT(HDMI_COLORSPACE_YUV420); 811 777 812 778 ret = drmm_connector_hdmi_init(drm, connector, 813 - bridge_connector->bridge_hdmi->vendor, 814 - bridge_connector->bridge_hdmi->product, 779 + bridge_hdmi->vendor, 780 + bridge_hdmi->product, 815 781 &drm_bridge_connector_funcs, 816 782 &drm_bridge_connector_hdmi_funcs, 817 783 
connector_type, ddc, ··· 827 793 return ERR_PTR(ret); 828 794 } 829 795 830 - if (bridge_connector->bridge_hdmi_audio || 831 - bridge_connector->bridge_dp_audio) { 796 + if (bridge_hdmi_audio || bridge_dp_audio) { 832 797 struct device *dev; 833 798 struct drm_bridge *bridge; 834 799 835 - if (bridge_connector->bridge_hdmi_audio) 836 - bridge = bridge_connector->bridge_hdmi_audio; 800 + if (bridge_hdmi_audio) 801 + bridge = bridge_hdmi_audio; 837 802 else 838 - bridge = bridge_connector->bridge_dp_audio; 803 + bridge = bridge_dp_audio; 839 804 840 805 dev = bridge->hdmi_audio_dev; 841 806 ··· 848 815 return ERR_PTR(ret); 849 816 } 850 817 851 - if (bridge_connector->bridge_hdmi_cec && 852 - bridge_connector->bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { 853 - struct drm_bridge *bridge = bridge_connector->bridge_hdmi_cec; 818 + if (bridge_hdmi_cec && 819 + bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { 820 + struct drm_bridge *bridge = bridge_hdmi_cec; 854 821 855 822 ret = drmm_connector_hdmi_cec_notifier_register(connector, 856 823 NULL, ··· 859 826 return ERR_PTR(ret); 860 827 } 861 828 862 - if (bridge_connector->bridge_hdmi_cec && 863 - bridge_connector->bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 864 - struct drm_bridge *bridge = bridge_connector->bridge_hdmi_cec; 829 + if (bridge_hdmi_cec && 830 + bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 831 + struct drm_bridge *bridge = bridge_hdmi_cec; 865 832 866 833 ret = drmm_connector_hdmi_cec_register(connector, 867 834 &drm_bridge_connector_hdmi_cec_funcs, ··· 874 841 875 842 drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs); 876 843 877 - if (bridge_connector->bridge_hpd) 844 + if (bridge_hpd) 878 845 connector->polled = DRM_CONNECTOR_POLL_HPD; 879 - else if (bridge_connector->bridge_detect) 846 + else if (bridge_detect) 880 847 connector->polled = DRM_CONNECTOR_POLL_CONNECT 881 848 | DRM_CONNECTOR_POLL_DISCONNECT; 882 849 ··· 886 853 if 
(support_hdcp && IS_REACHABLE(CONFIG_DRM_DISPLAY_HELPER) && 887 854 IS_ENABLED(CONFIG_DRM_DISPLAY_HDCP_HELPER)) 888 855 drm_connector_attach_content_protection_property(connector, true); 856 + 857 + bridge_connector->bridge_edid = drm_bridge_get(bridge_edid); 858 + bridge_connector->bridge_hpd = drm_bridge_get(bridge_hpd); 859 + bridge_connector->bridge_detect = drm_bridge_get(bridge_detect); 860 + bridge_connector->bridge_modes = drm_bridge_get(bridge_modes); 861 + bridge_connector->bridge_hdmi = drm_bridge_get(bridge_hdmi); 862 + bridge_connector->bridge_hdmi_audio = drm_bridge_get(bridge_hdmi_audio); 863 + bridge_connector->bridge_dp_audio = drm_bridge_get(bridge_dp_audio); 864 + bridge_connector->bridge_hdmi_cec = drm_bridge_get(bridge_hdmi_cec); 889 865 890 866 return connector; 891 867 }
+23 -22
drivers/gpu/drm/drm_atomic.c
··· 207 207 continue; 208 208 209 209 connector->funcs->atomic_destroy_state(connector, 210 - state->connectors[i].state); 210 + state->connectors[i].state_to_destroy); 211 211 state->connectors[i].ptr = NULL; 212 - state->connectors[i].state = NULL; 212 + state->connectors[i].state_to_destroy = NULL; 213 213 state->connectors[i].old_state = NULL; 214 214 state->connectors[i].new_state = NULL; 215 215 drm_connector_put(connector); ··· 222 222 continue; 223 223 224 224 crtc->funcs->atomic_destroy_state(crtc, 225 - state->crtcs[i].state); 225 + state->crtcs[i].state_to_destroy); 226 226 227 227 state->crtcs[i].ptr = NULL; 228 - state->crtcs[i].state = NULL; 228 + state->crtcs[i].state_to_destroy = NULL; 229 229 state->crtcs[i].old_state = NULL; 230 230 state->crtcs[i].new_state = NULL; 231 231 ··· 242 242 continue; 243 243 244 244 plane->funcs->atomic_destroy_state(plane, 245 - state->planes[i].state); 245 + state->planes[i].state_to_destroy); 246 246 state->planes[i].ptr = NULL; 247 - state->planes[i].state = NULL; 247 + state->planes[i].state_to_destroy = NULL; 248 248 state->planes[i].old_state = NULL; 249 249 state->planes[i].new_state = NULL; 250 250 } ··· 253 253 struct drm_private_obj *obj = state->private_objs[i].ptr; 254 254 255 255 obj->funcs->atomic_destroy_state(obj, 256 - state->private_objs[i].state); 256 + state->private_objs[i].state_to_destroy); 257 257 state->private_objs[i].ptr = NULL; 258 - state->private_objs[i].state = NULL; 258 + state->private_objs[i].state_to_destroy = NULL; 259 259 state->private_objs[i].old_state = NULL; 260 260 state->private_objs[i].new_state = NULL; 261 261 } ··· 349 349 350 350 WARN_ON(!state->acquire_ctx); 351 351 352 - crtc_state = drm_atomic_get_existing_crtc_state(state, crtc); 352 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 353 353 if (crtc_state) 354 354 return crtc_state; 355 355 ··· 361 361 if (!crtc_state) 362 362 return ERR_PTR(-ENOMEM); 363 363 364 - state->crtcs[index].state = crtc_state; 364 
+ state->crtcs[index].state_to_destroy = crtc_state; 365 365 state->crtcs[index].old_state = crtc->state; 366 366 state->crtcs[index].new_state = crtc_state; 367 367 state->crtcs[index].ptr = crtc; ··· 480 480 } 481 481 482 482 if (state->crtc) 483 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 484 - state->crtc); 483 + crtc_state = drm_atomic_get_new_crtc_state(state->state, 484 + state->crtc); 485 485 486 486 if (writeback_job->fb && !crtc_state->active) { 487 487 drm_dbg_atomic(connector->dev, ··· 534 534 WARN_ON(plane->old_fb); 535 535 WARN_ON(plane->crtc); 536 536 537 - plane_state = drm_atomic_get_existing_plane_state(state, plane); 537 + plane_state = drm_atomic_get_new_plane_state(state, plane); 538 538 if (plane_state) 539 539 return plane_state; 540 540 ··· 546 546 if (!plane_state) 547 547 return ERR_PTR(-ENOMEM); 548 548 549 - state->planes[index].state = plane_state; 549 + state->planes[index].state_to_destroy = plane_state; 550 550 state->planes[index].ptr = plane; 551 551 state->planes[index].old_state = plane->state; 552 552 state->planes[index].new_state = plane_state; ··· 831 831 drm_atomic_get_private_obj_state(struct drm_atomic_state *state, 832 832 struct drm_private_obj *obj) 833 833 { 834 - int index, num_objs, i, ret; 834 + int index, num_objs, ret; 835 835 size_t size; 836 836 struct __drm_private_objs_state *arr; 837 837 struct drm_private_state *obj_state; 838 838 839 - for (i = 0; i < state->num_private_objs; i++) 840 - if (obj == state->private_objs[i].ptr) 841 - return state->private_objs[i].state; 839 + obj_state = drm_atomic_get_new_private_obj_state(state, obj); 840 + if (obj_state) 841 + return obj_state; 842 842 843 843 ret = drm_modeset_lock(&obj->lock, state->acquire_ctx); 844 844 if (ret) ··· 858 858 if (!obj_state) 859 859 return ERR_PTR(-ENOMEM); 860 860 861 - state->private_objs[index].state = obj_state; 861 + state->private_objs[index].state_to_destroy = obj_state; 862 862 
state->private_objs[index].old_state = obj->state; 863 863 state->private_objs[index].new_state = obj_state; 864 864 state->private_objs[index].ptr = obj; ··· 1152 1152 state->num_connector = alloc; 1153 1153 } 1154 1154 1155 - if (state->connectors[index].state) 1156 - return state->connectors[index].state; 1155 + connector_state = drm_atomic_get_new_connector_state(state, connector); 1156 + if (connector_state) 1157 + return connector_state; 1157 1158 1158 1159 connector_state = connector->funcs->atomic_duplicate_state(connector); 1159 1160 if (!connector_state) 1160 1161 return ERR_PTR(-ENOMEM); 1161 1162 1162 1163 drm_connector_get(connector); 1163 - state->connectors[index].state = connector_state; 1164 + state->connectors[index].state_to_destroy = connector_state; 1164 1165 state->connectors[index].old_state = connector->state; 1165 1166 state->connectors[index].new_state = connector_state; 1166 1167 state->connectors[index].ptr = connector;
+4 -4
drivers/gpu/drm/drm_atomic_helper.c
··· 3236 3236 old_conn_state->state = state; 3237 3237 new_conn_state->state = NULL; 3238 3238 3239 - state->connectors[i].state = old_conn_state; 3239 + state->connectors[i].state_to_destroy = old_conn_state; 3240 3240 connector->state = new_conn_state; 3241 3241 } 3242 3242 ··· 3246 3246 old_crtc_state->state = state; 3247 3247 new_crtc_state->state = NULL; 3248 3248 3249 - state->crtcs[i].state = old_crtc_state; 3249 + state->crtcs[i].state_to_destroy = old_crtc_state; 3250 3250 crtc->state = new_crtc_state; 3251 3251 3252 3252 if (new_crtc_state->commit) { ··· 3266 3266 old_plane_state->state = state; 3267 3267 new_plane_state->state = NULL; 3268 3268 3269 - state->planes[i].state = old_plane_state; 3269 + state->planes[i].state_to_destroy = old_plane_state; 3270 3270 plane->state = new_plane_state; 3271 3271 } 3272 3272 drm_panic_unlock(state->dev, flags); ··· 3277 3277 old_obj_state->state = state; 3278 3278 new_obj_state->state = NULL; 3279 3279 3280 - state->private_objs[i].state = old_obj_state; 3280 + state->private_objs[i].state_to_destroy = old_obj_state; 3281 3281 obj->state = new_obj_state; 3282 3282 } 3283 3283
+3 -3
drivers/gpu/drm/drm_bridge.c
··· 1086 1086 struct drm_encoder *encoder = bridge->encoder; 1087 1087 struct drm_bridge_state *last_bridge_state; 1088 1088 unsigned int i, num_out_bus_fmts = 0; 1089 - struct drm_bridge *last_bridge; 1090 1089 u32 *out_bus_fmts; 1091 1090 int ret = 0; 1092 1091 1093 - last_bridge = list_last_entry(&encoder->bridge_chain, 1094 - struct drm_bridge, chain_node); 1092 + struct drm_bridge *last_bridge __free(drm_bridge_put) = 1093 + drm_bridge_get(list_last_entry(&encoder->bridge_chain, 1094 + struct drm_bridge, chain_node)); 1095 1095 last_bridge_state = drm_atomic_get_new_bridge_state(crtc_state->state, 1096 1096 last_bridge); 1097 1097
+236 -154
drivers/gpu/drm/drm_buddy.c
··· 12 12 13 13 #include <drm/drm_buddy.h> 14 14 15 + enum drm_buddy_free_tree { 16 + DRM_BUDDY_CLEAR_TREE = 0, 17 + DRM_BUDDY_DIRTY_TREE, 18 + DRM_BUDDY_MAX_FREE_TREES, 19 + }; 20 + 15 21 static struct kmem_cache *slab_blocks; 22 + 23 + #define for_each_free_tree(tree) \ 24 + for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++) 16 25 17 26 static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm, 18 27 struct drm_buddy_block *parent, ··· 40 31 block->header |= order; 41 32 block->parent = parent; 42 33 34 + RB_CLEAR_NODE(&block->rb); 35 + 43 36 BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED); 44 37 return block; 45 38 } ··· 52 41 kmem_cache_free(slab_blocks, block); 53 42 } 54 43 55 - static void list_insert_sorted(struct drm_buddy *mm, 56 - struct drm_buddy_block *block) 44 + static enum drm_buddy_free_tree 45 + get_block_tree(struct drm_buddy_block *block) 57 46 { 58 - struct drm_buddy_block *node; 59 - struct list_head *head; 47 + return drm_buddy_block_is_clear(block) ? 48 + DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE; 49 + } 60 50 61 - head = &mm->free_list[drm_buddy_block_order(block)]; 62 - if (list_empty(head)) { 63 - list_add(&block->link, head); 64 - return; 65 - } 51 + static struct drm_buddy_block * 52 + rbtree_get_free_block(const struct rb_node *node) 53 + { 54 + return node ? 
rb_entry(node, struct drm_buddy_block, rb) : NULL; 55 + } 66 56 67 - list_for_each_entry(node, head, link) 68 - if (drm_buddy_block_offset(block) < drm_buddy_block_offset(node)) 69 - break; 57 + static struct drm_buddy_block * 58 + rbtree_last_free_block(struct rb_root *root) 59 + { 60 + return rbtree_get_free_block(rb_last(root)); 61 + } 70 62 71 - __list_add(&block->link, node->link.prev, &node->link); 63 + static bool rbtree_is_empty(struct rb_root *root) 64 + { 65 + return RB_EMPTY_ROOT(root); 66 + } 67 + 68 + static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block, 69 + const struct drm_buddy_block *node) 70 + { 71 + return drm_buddy_block_offset(block) < drm_buddy_block_offset(node); 72 + } 73 + 74 + static bool rbtree_block_offset_less(struct rb_node *block, 75 + const struct rb_node *node) 76 + { 77 + return drm_buddy_block_offset_less(rbtree_get_free_block(block), 78 + rbtree_get_free_block(node)); 79 + } 80 + 81 + static void rbtree_insert(struct drm_buddy *mm, 82 + struct drm_buddy_block *block, 83 + enum drm_buddy_free_tree tree) 84 + { 85 + rb_add(&block->rb, 86 + &mm->free_trees[tree][drm_buddy_block_order(block)], 87 + rbtree_block_offset_less); 88 + } 89 + 90 + static void rbtree_remove(struct drm_buddy *mm, 91 + struct drm_buddy_block *block) 92 + { 93 + unsigned int order = drm_buddy_block_order(block); 94 + enum drm_buddy_free_tree tree; 95 + struct rb_root *root; 96 + 97 + tree = get_block_tree(block); 98 + root = &mm->free_trees[tree][order]; 99 + 100 + rb_erase(&block->rb, root); 101 + RB_CLEAR_NODE(&block->rb); 72 102 } 73 103 74 104 static void clear_reset(struct drm_buddy_block *block) ··· 122 70 block->header |= DRM_BUDDY_HEADER_CLEAR; 123 71 } 124 72 125 - static void mark_allocated(struct drm_buddy_block *block) 73 + static void mark_allocated(struct drm_buddy *mm, 74 + struct drm_buddy_block *block) 126 75 { 127 76 block->header &= ~DRM_BUDDY_HEADER_STATE; 128 77 block->header |= DRM_BUDDY_ALLOCATED; 129 78 130 - 
list_del(&block->link); 79 + rbtree_remove(mm, block); 131 80 } 132 81 133 82 static void mark_free(struct drm_buddy *mm, 134 83 struct drm_buddy_block *block) 135 84 { 85 + enum drm_buddy_free_tree tree; 86 + 136 87 block->header &= ~DRM_BUDDY_HEADER_STATE; 137 88 block->header |= DRM_BUDDY_FREE; 138 89 139 - list_insert_sorted(mm, block); 90 + tree = get_block_tree(block); 91 + rbtree_insert(mm, block, tree); 140 92 } 141 93 142 - static void mark_split(struct drm_buddy_block *block) 94 + static void mark_split(struct drm_buddy *mm, 95 + struct drm_buddy_block *block) 143 96 { 144 97 block->header &= ~DRM_BUDDY_HEADER_STATE; 145 98 block->header |= DRM_BUDDY_SPLIT; 146 99 147 - list_del(&block->link); 100 + rbtree_remove(mm, block); 148 101 } 149 102 150 103 static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2) ··· 205 148 mark_cleared(parent); 206 149 } 207 150 208 - list_del(&buddy->link); 151 + rbtree_remove(mm, buddy); 209 152 if (force_merge && drm_buddy_block_is_clear(buddy)) 210 153 mm->clear_avail -= drm_buddy_block_size(mm, buddy); 211 154 ··· 226 169 u64 end, 227 170 unsigned int min_order) 228 171 { 229 - unsigned int order; 172 + unsigned int tree, order; 230 173 int i; 231 174 232 175 if (!min_order) ··· 235 178 if (min_order > mm->max_order) 236 179 return -EINVAL; 237 180 238 - for (i = min_order - 1; i >= 0; i--) { 239 - struct drm_buddy_block *block, *prev; 181 + for_each_free_tree(tree) { 182 + for (i = min_order - 1; i >= 0; i--) { 183 + struct rb_node *iter = rb_last(&mm->free_trees[tree][i]); 240 184 241 - list_for_each_entry_safe_reverse(block, prev, &mm->free_list[i], link) { 242 - struct drm_buddy_block *buddy; 243 - u64 block_start, block_end; 185 + while (iter) { 186 + struct drm_buddy_block *block, *buddy; 187 + u64 block_start, block_end; 244 188 245 - if (!block->parent) 246 - continue; 189 + block = rbtree_get_free_block(iter); 190 + iter = rb_prev(iter); 247 191 248 - block_start = drm_buddy_block_offset(block); 249 - 
block_end = block_start + drm_buddy_block_size(mm, block) - 1; 192 + if (!block || !block->parent) 193 + continue; 250 194 251 - if (!contains(start, end, block_start, block_end)) 252 - continue; 195 + block_start = drm_buddy_block_offset(block); 196 + block_end = block_start + drm_buddy_block_size(mm, block) - 1; 253 197 254 - buddy = __get_buddy(block); 255 - if (!drm_buddy_block_is_free(buddy)) 256 - continue; 198 + if (!contains(start, end, block_start, block_end)) 199 + continue; 257 200 258 - WARN_ON(drm_buddy_block_is_clear(block) == 259 - drm_buddy_block_is_clear(buddy)); 201 + buddy = __get_buddy(block); 202 + if (!drm_buddy_block_is_free(buddy)) 203 + continue; 260 204 261 - /* 262 - * If the prev block is same as buddy, don't access the 263 - * block in the next iteration as we would free the 264 - * buddy block as part of the free function. 265 - */ 266 - if (prev == buddy) 267 - prev = list_prev_entry(prev, link); 205 + WARN_ON(drm_buddy_block_is_clear(block) == 206 + drm_buddy_block_is_clear(buddy)); 268 207 269 - list_del(&block->link); 270 - if (drm_buddy_block_is_clear(block)) 271 - mm->clear_avail -= drm_buddy_block_size(mm, block); 208 + /* 209 + * Advance to the next node when the current node is the buddy, 210 + * as freeing the block will also remove its buddy from the tree. 
211 + */ 212 + if (iter == &buddy->rb) 213 + iter = rb_prev(iter); 272 214 273 - order = __drm_buddy_free(mm, block, true); 274 - if (order >= min_order) 275 - return 0; 215 + rbtree_remove(mm, block); 216 + if (drm_buddy_block_is_clear(block)) 217 + mm->clear_avail -= drm_buddy_block_size(mm, block); 218 + 219 + order = __drm_buddy_free(mm, block, true); 220 + if (order >= min_order) 221 + return 0; 222 + } 276 223 } 277 224 } 278 225 ··· 297 236 */ 298 237 int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size) 299 238 { 300 - unsigned int i; 301 - u64 offset; 239 + unsigned int i, j, root_count = 0; 240 + u64 offset = 0; 302 241 303 242 if (size < chunk_size) 304 243 return -EINVAL; ··· 319 258 320 259 BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER); 321 260 322 - mm->free_list = kmalloc_array(mm->max_order + 1, 323 - sizeof(struct list_head), 324 - GFP_KERNEL); 325 - if (!mm->free_list) 261 + mm->free_trees = kmalloc_array(DRM_BUDDY_MAX_FREE_TREES, 262 + sizeof(*mm->free_trees), 263 + GFP_KERNEL); 264 + if (!mm->free_trees) 326 265 return -ENOMEM; 327 266 328 - for (i = 0; i <= mm->max_order; ++i) 329 - INIT_LIST_HEAD(&mm->free_list[i]); 267 + for_each_free_tree(i) { 268 + mm->free_trees[i] = kmalloc_array(mm->max_order + 1, 269 + sizeof(struct rb_root), 270 + GFP_KERNEL); 271 + if (!mm->free_trees[i]) 272 + goto out_free_tree; 273 + 274 + for (j = 0; j <= mm->max_order; ++j) 275 + mm->free_trees[i][j] = RB_ROOT; 276 + } 330 277 331 278 mm->n_roots = hweight64(size); 332 279 ··· 342 273 sizeof(struct drm_buddy_block *), 343 274 GFP_KERNEL); 344 275 if (!mm->roots) 345 - goto out_free_list; 346 - 347 - offset = 0; 348 - i = 0; 276 + goto out_free_tree; 349 277 350 278 /* 351 279 * Split into power-of-two blocks, in case we are given a size that is ··· 362 296 363 297 mark_free(mm, root); 364 298 365 - BUG_ON(i > mm->max_order); 299 + BUG_ON(root_count > mm->max_order); 366 300 BUG_ON(drm_buddy_block_size(mm, root) < chunk_size); 367 301 368 - mm->roots[i] 
= root; 302 + mm->roots[root_count] = root; 369 303 370 304 offset += root_size; 371 305 size -= root_size; 372 - i++; 306 + root_count++; 373 307 } while (size); 374 308 375 309 return 0; 376 310 377 311 out_free_roots: 378 - while (i--) 379 - drm_block_free(mm, mm->roots[i]); 312 + while (root_count--) 313 + drm_block_free(mm, mm->roots[root_count]); 380 314 kfree(mm->roots); 381 - out_free_list: 382 - kfree(mm->free_list); 315 + out_free_tree: 316 + while (i--) 317 + kfree(mm->free_trees[i]); 318 + kfree(mm->free_trees); 383 319 return -ENOMEM; 384 320 } 385 321 EXPORT_SYMBOL(drm_buddy_init); ··· 391 323 * 392 324 * @mm: DRM buddy manager to free 393 325 * 394 - * Cleanup memory manager resources and the freelist 326 + * Cleanup memory manager resources and the freetree 395 327 */ 396 328 void drm_buddy_fini(struct drm_buddy *mm) 397 329 { ··· 417 349 418 350 WARN_ON(mm->avail != mm->size); 419 351 352 + for_each_free_tree(i) 353 + kfree(mm->free_trees[i]); 420 354 kfree(mm->roots); 421 - kfree(mm->free_list); 422 355 } 423 356 EXPORT_SYMBOL(drm_buddy_fini); 424 357 ··· 443 374 return -ENOMEM; 444 375 } 445 376 446 - mark_free(mm, block->left); 447 - mark_free(mm, block->right); 377 + mark_split(mm, block); 448 378 449 379 if (drm_buddy_block_is_clear(block)) { 450 380 mark_cleared(block->left); ··· 451 383 clear_reset(block); 452 384 } 453 385 454 - mark_split(block); 386 + mark_free(mm, block->left); 387 + mark_free(mm, block->right); 455 388 456 389 return 0; 457 390 } ··· 481 412 * @is_clear: blocks clear state 482 413 * 483 414 * Reset the clear state based on @is_clear value for each block 484 - * in the freelist. 415 + * in the freetree. 
485 416 */ 486 417 void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear) 487 418 { 419 + enum drm_buddy_free_tree src_tree, dst_tree; 488 420 u64 root_size, size, start; 489 421 unsigned int order; 490 422 int i; ··· 500 430 size -= root_size; 501 431 } 502 432 503 - for (i = 0; i <= mm->max_order; ++i) { 504 - struct drm_buddy_block *block; 433 + src_tree = is_clear ? DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE; 434 + dst_tree = is_clear ? DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE; 505 435 506 - list_for_each_entry_reverse(block, &mm->free_list[i], link) { 507 - if (is_clear != drm_buddy_block_is_clear(block)) { 508 - if (is_clear) { 509 - mark_cleared(block); 510 - mm->clear_avail += drm_buddy_block_size(mm, block); 511 - } else { 512 - clear_reset(block); 513 - mm->clear_avail -= drm_buddy_block_size(mm, block); 514 - } 436 + for (i = 0; i <= mm->max_order; ++i) { 437 + struct rb_root *root = &mm->free_trees[src_tree][i]; 438 + struct drm_buddy_block *block, *tmp; 439 + 440 + rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) { 441 + rbtree_remove(mm, block); 442 + if (is_clear) { 443 + mark_cleared(block); 444 + mm->clear_avail += drm_buddy_block_size(mm, block); 445 + } else { 446 + clear_reset(block); 447 + mm->clear_avail -= drm_buddy_block_size(mm, block); 515 448 } 449 + 450 + rbtree_insert(mm, block, dst_tree); 516 451 } 517 452 } 518 453 } ··· 707 632 } 708 633 709 634 static struct drm_buddy_block * 710 - get_maxblock(struct drm_buddy *mm, unsigned int order, 711 - unsigned long flags) 635 + get_maxblock(struct drm_buddy *mm, 636 + unsigned int order, 637 + enum drm_buddy_free_tree tree) 712 638 { 713 639 struct drm_buddy_block *max_block = NULL, *block = NULL; 640 + struct rb_root *root; 714 641 unsigned int i; 715 642 716 643 for (i = order; i <= mm->max_order; ++i) { 717 - struct drm_buddy_block *tmp_block; 718 - 719 - list_for_each_entry_reverse(tmp_block, &mm->free_list[i], link) { 720 - if (block_incompatible(tmp_block, flags)) 
721 - continue; 722 - 723 - block = tmp_block; 724 - break; 725 - } 726 - 644 + root = &mm->free_trees[tree][i]; 645 + block = rbtree_last_free_block(root); 727 646 if (!block) 728 647 continue; 729 648 ··· 736 667 } 737 668 738 669 static struct drm_buddy_block * 739 - alloc_from_freelist(struct drm_buddy *mm, 670 + alloc_from_freetree(struct drm_buddy *mm, 740 671 unsigned int order, 741 672 unsigned long flags) 742 673 { 743 674 struct drm_buddy_block *block = NULL; 675 + struct rb_root *root; 676 + enum drm_buddy_free_tree tree; 744 677 unsigned int tmp; 745 678 int err; 746 679 680 + tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ? 681 + DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE; 682 + 747 683 if (flags & DRM_BUDDY_TOPDOWN_ALLOCATION) { 748 - block = get_maxblock(mm, order, flags); 684 + block = get_maxblock(mm, order, tree); 749 685 if (block) 750 686 /* Store the obtained block order */ 751 687 tmp = drm_buddy_block_order(block); 752 688 } else { 753 689 for (tmp = order; tmp <= mm->max_order; ++tmp) { 754 - struct drm_buddy_block *tmp_block; 755 - 756 - list_for_each_entry_reverse(tmp_block, &mm->free_list[tmp], link) { 757 - if (block_incompatible(tmp_block, flags)) 758 - continue; 759 - 760 - block = tmp_block; 761 - break; 762 - } 763 - 690 + /* Get RB tree root for this order and tree */ 691 + root = &mm->free_trees[tree][tmp]; 692 + block = rbtree_last_free_block(root); 764 693 if (block) 765 694 break; 766 695 } 767 696 } 768 697 769 698 if (!block) { 770 - /* Fallback method */ 699 + /* Try allocating from the other tree */ 700 + tree = (tree == DRM_BUDDY_CLEAR_TREE) ? 
701 + DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE; 702 + 771 703 for (tmp = order; tmp <= mm->max_order; ++tmp) { 772 - if (!list_empty(&mm->free_list[tmp])) { 773 - block = list_last_entry(&mm->free_list[tmp], 774 - struct drm_buddy_block, 775 - link); 776 - if (block) 777 - break; 778 - } 704 + root = &mm->free_trees[tree][tmp]; 705 + block = rbtree_last_free_block(root); 706 + if (block) 707 + break; 779 708 } 780 709 781 710 if (!block) ··· 838 771 839 772 if (contains(start, end, block_start, block_end)) { 840 773 if (drm_buddy_block_is_free(block)) { 841 - mark_allocated(block); 774 + mark_allocated(mm, block); 842 775 total_allocated += drm_buddy_block_size(mm, block); 843 776 mm->avail -= drm_buddy_block_size(mm, block); 844 777 if (drm_buddy_block_is_clear(block)) ··· 916 849 { 917 850 u64 rhs_offset, lhs_offset, lhs_size, filled; 918 851 struct drm_buddy_block *block; 919 - struct list_head *list; 852 + unsigned int tree, order; 920 853 LIST_HEAD(blocks_lhs); 921 854 unsigned long pages; 922 - unsigned int order; 923 855 u64 modify_size; 924 856 int err; 925 857 ··· 928 862 if (order == 0) 929 863 return -ENOSPC; 930 864 931 - list = &mm->free_list[order]; 932 - if (list_empty(list)) 933 - return -ENOSPC; 865 + for_each_free_tree(tree) { 866 + struct rb_root *root; 867 + struct rb_node *iter; 934 868 935 - list_for_each_entry_reverse(block, list, link) { 936 - /* Allocate blocks traversing RHS */ 937 - rhs_offset = drm_buddy_block_offset(block); 938 - err = __drm_buddy_alloc_range(mm, rhs_offset, size, 939 - &filled, blocks); 940 - if (!err || err != -ENOSPC) 941 - return err; 869 + root = &mm->free_trees[tree][order]; 870 + if (rbtree_is_empty(root)) 871 + continue; 942 872 943 - lhs_size = max((size - filled), min_block_size); 944 - if (!IS_ALIGNED(lhs_size, min_block_size)) 945 - lhs_size = round_up(lhs_size, min_block_size); 873 + iter = rb_last(root); 874 + while (iter) { 875 + block = rbtree_get_free_block(iter); 946 876 947 - /* Allocate blocks 
traversing LHS */ 948 - lhs_offset = drm_buddy_block_offset(block) - lhs_size; 949 - err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size, 950 - NULL, &blocks_lhs); 951 - if (!err) { 952 - list_splice(&blocks_lhs, blocks); 953 - return 0; 954 - } else if (err != -ENOSPC) { 877 + /* Allocate blocks traversing RHS */ 878 + rhs_offset = drm_buddy_block_offset(block); 879 + err = __drm_buddy_alloc_range(mm, rhs_offset, size, 880 + &filled, blocks); 881 + if (!err || err != -ENOSPC) 882 + return err; 883 + 884 + lhs_size = max((size - filled), min_block_size); 885 + if (!IS_ALIGNED(lhs_size, min_block_size)) 886 + lhs_size = round_up(lhs_size, min_block_size); 887 + 888 + /* Allocate blocks traversing LHS */ 889 + lhs_offset = drm_buddy_block_offset(block) - lhs_size; 890 + err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size, 891 + NULL, &blocks_lhs); 892 + if (!err) { 893 + list_splice(&blocks_lhs, blocks); 894 + return 0; 895 + } else if (err != -ENOSPC) { 896 + drm_buddy_free_list_internal(mm, blocks); 897 + return err; 898 + } 899 + /* Free blocks for the next iteration */ 955 900 drm_buddy_free_list_internal(mm, blocks); 956 - return err; 901 + 902 + iter = rb_prev(iter); 957 903 } 958 - /* Free blocks for the next iteration */ 959 - drm_buddy_free_list_internal(mm, blocks); 960 904 } 961 905 962 906 return -ENOSPC; ··· 1052 976 list_add(&block->tmp_link, &dfs); 1053 977 err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL); 1054 978 if (err) { 1055 - mark_allocated(block); 979 + mark_allocated(mm, block); 1056 980 mm->avail -= drm_buddy_block_size(mm, block); 1057 981 if (drm_buddy_block_is_clear(block)) 1058 982 mm->clear_avail -= drm_buddy_block_size(mm, block); ··· 1075 999 return __drm_buddy_alloc_range_bias(mm, start, end, 1076 1000 order, flags); 1077 1001 else 1078 - /* Allocate from freelist */ 1079 - return alloc_from_freelist(mm, order, flags); 1002 + /* Allocate from freetree */ 1003 + return alloc_from_freetree(mm, order, flags); 1080 
1004 } 1081 1005 1082 1006 /** ··· 1093 1017 * alloc_range_bias() called on range limitations, which traverses 1094 1018 * the tree and returns the desired block. 1095 1019 * 1096 - * alloc_from_freelist() called when *no* range restrictions 1097 - * are enforced, which picks the block from the freelist. 1020 + * alloc_from_freetree() called when *no* range restrictions 1021 + * are enforced, which picks the block from the freetree. 1098 1022 * 1099 1023 * Returns: 1100 1024 * 0 on success, error code on failure. ··· 1196 1120 } 1197 1121 } while (1); 1198 1122 1199 - mark_allocated(block); 1123 + mark_allocated(mm, block); 1200 1124 mm->avail -= drm_buddy_block_size(mm, block); 1201 1125 if (drm_buddy_block_is_clear(block)) 1202 1126 mm->clear_avail -= drm_buddy_block_size(mm, block); ··· 1277 1201 mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20); 1278 1202 1279 1203 for (order = mm->max_order; order >= 0; order--) { 1280 - struct drm_buddy_block *block; 1204 + struct drm_buddy_block *block, *tmp; 1205 + struct rb_root *root; 1281 1206 u64 count = 0, free; 1207 + unsigned int tree; 1282 1208 1283 - list_for_each_entry(block, &mm->free_list[order], link) { 1284 - BUG_ON(!drm_buddy_block_is_free(block)); 1285 - count++; 1209 + for_each_free_tree(tree) { 1210 + root = &mm->free_trees[tree][order]; 1211 + 1212 + rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) { 1213 + BUG_ON(!drm_buddy_block_is_free(block)); 1214 + count++; 1215 + } 1286 1216 } 1287 1217 1288 1218 drm_printf(p, "order-%2d ", order);
+8 -8
drivers/gpu/drm/drm_client_event.c
··· 122 122 mutex_unlock(&dev->clientlist_mutex); 123 123 } 124 124 125 - static int drm_client_suspend(struct drm_client_dev *client, bool holds_console_lock) 125 + static int drm_client_suspend(struct drm_client_dev *client) 126 126 { 127 127 struct drm_device *dev = client->dev; 128 128 int ret = 0; ··· 131 131 return 0; 132 132 133 133 if (client->funcs && client->funcs->suspend) 134 - ret = client->funcs->suspend(client, holds_console_lock); 134 + ret = client->funcs->suspend(client); 135 135 drm_dbg_kms(dev, "%s: ret=%d\n", client->name, ret); 136 136 137 137 client->suspended = true; ··· 139 139 return ret; 140 140 } 141 141 142 - void drm_client_dev_suspend(struct drm_device *dev, bool holds_console_lock) 142 + void drm_client_dev_suspend(struct drm_device *dev) 143 143 { 144 144 struct drm_client_dev *client; 145 145 146 146 mutex_lock(&dev->clientlist_mutex); 147 147 list_for_each_entry(client, &dev->clientlist, list) { 148 148 if (!client->suspended) 149 - drm_client_suspend(client, holds_console_lock); 149 + drm_client_suspend(client); 150 150 } 151 151 mutex_unlock(&dev->clientlist_mutex); 152 152 } 153 153 EXPORT_SYMBOL(drm_client_dev_suspend); 154 154 155 - static int drm_client_resume(struct drm_client_dev *client, bool holds_console_lock) 155 + static int drm_client_resume(struct drm_client_dev *client) 156 156 { 157 157 struct drm_device *dev = client->dev; 158 158 int ret = 0; ··· 161 161 return 0; 162 162 163 163 if (client->funcs && client->funcs->resume) 164 - ret = client->funcs->resume(client, holds_console_lock); 164 + ret = client->funcs->resume(client); 165 165 drm_dbg_kms(dev, "%s: ret=%d\n", client->name, ret); 166 166 167 167 client->suspended = false; ··· 172 172 return ret; 173 173 } 174 174 175 - void drm_client_dev_resume(struct drm_device *dev, bool holds_console_lock) 175 + void drm_client_dev_resume(struct drm_device *dev) 176 176 { 177 177 struct drm_client_dev *client; 178 178 179 179 mutex_lock(&dev->clientlist_mutex); 180 
180 list_for_each_entry(client, &dev->clientlist, list) { 181 181 if (client->suspended) 182 - drm_client_resume(client, holds_console_lock); 182 + drm_client_resume(client); 183 183 } 184 184 mutex_unlock(&dev->clientlist_mutex); 185 185 }
+1 -1
drivers/gpu/drm/drm_framebuffer.c
··· 1048 1048 plane_state->crtc->base.id, 1049 1049 plane_state->crtc->name, fb->base.id); 1050 1050 1051 - crtc_state = drm_atomic_get_existing_crtc_state(state, plane_state->crtc); 1051 + crtc_state = drm_atomic_get_new_crtc_state(state, plane_state->crtc); 1052 1052 1053 1053 ret = drm_atomic_add_affected_connectors(state, plane_state->crtc); 1054 1054 if (ret)
+1 -1
drivers/gpu/drm/drm_gem_vram_helper.c
··· 967 967 968 968 max_fbpages = (vmm->vram_size / 2) >> PAGE_SHIFT; 969 969 970 - fbsize = mode->hdisplay * mode->vdisplay * max_bpp; 970 + fbsize = (u32)mode->hdisplay * mode->vdisplay * max_bpp; 971 971 fbpages = DIV_ROUND_UP(fbsize, PAGE_SIZE); 972 972 973 973 if (fbpages > max_fbpages)
+1 -1
drivers/gpu/drm/drm_mipi_dbi.c
··· 691 691 const struct drm_simple_display_pipe_funcs *funcs, 692 692 const struct drm_display_mode *mode, unsigned int rotation) 693 693 { 694 - size_t bufsize = mode->vdisplay * mode->hdisplay * sizeof(u16); 694 + size_t bufsize = (u32)mode->vdisplay * mode->hdisplay * sizeof(u16); 695 695 696 696 dbidev->drm.mode_config.preferred_depth = 16; 697 697
+3 -3
drivers/gpu/drm/drm_modeset_helper.c
··· 203 203 if (dev->mode_config.poll_enabled) 204 204 drm_kms_helper_poll_disable(dev); 205 205 206 - drm_client_dev_suspend(dev, false); 206 + drm_client_dev_suspend(dev); 207 207 state = drm_atomic_helper_suspend(dev); 208 208 if (IS_ERR(state)) { 209 - drm_client_dev_resume(dev, false); 209 + drm_client_dev_resume(dev); 210 210 211 211 /* 212 212 * Don't enable polling if it was never initialized ··· 252 252 DRM_ERROR("Failed to resume (%d)\n", ret); 253 253 dev->mode_config.suspend_state = NULL; 254 254 255 - drm_client_dev_resume(dev, false); 255 + drm_client_dev_resume(dev); 256 256 257 257 /* 258 258 * Don't enable polling if it is not initialized
+1 -1
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 58 58 struct drm_plane_state *state = &exynos_state->base; 59 59 struct drm_crtc *crtc = state->crtc; 60 60 struct drm_crtc_state *crtc_state = 61 - drm_atomic_get_existing_crtc_state(state->state, crtc); 61 + drm_atomic_get_new_crtc_state(state->state, crtc); 62 62 struct drm_display_mode *mode = &crtc_state->adjusted_mode; 63 63 int crtc_x, crtc_y; 64 64 unsigned int crtc_w, crtc_h;
+1 -1
drivers/gpu/drm/gud/gud_pipe.c
··· 69 69 height = drm_rect_height(rect); 70 70 len = drm_format_info_min_pitch(format, 0, width) * height; 71 71 72 - buf = kmalloc(width * height, GFP_KERNEL); 72 + buf = kmalloc_array(height, width, GFP_KERNEL); 73 73 if (!buf) 74 74 return 0; 75 75
+3 -3
drivers/gpu/drm/i915/i915_driver.c
··· 978 978 intel_runtime_pm_disable(&i915->runtime_pm); 979 979 intel_power_domains_disable(display); 980 980 981 - drm_client_dev_suspend(&i915->drm, false); 981 + drm_client_dev_suspend(&i915->drm); 982 982 if (intel_display_device_present(display)) { 983 983 drm_kms_helper_poll_disable(&i915->drm); 984 984 intel_display_driver_disable_user_access(display); ··· 1061 1061 /* We do a lot of poking in a lot of registers, make sure they work 1062 1062 * properly. */ 1063 1063 intel_power_domains_disable(display); 1064 - drm_client_dev_suspend(dev, false); 1064 + drm_client_dev_suspend(dev); 1065 1065 if (intel_display_device_present(display)) { 1066 1066 drm_kms_helper_poll_disable(dev); 1067 1067 intel_display_driver_disable_user_access(display); ··· 1245 1245 1246 1246 intel_opregion_resume(display); 1247 1247 1248 - drm_client_dev_resume(dev, false); 1248 + drm_client_dev_resume(dev); 1249 1249 1250 1250 intel_power_domains_enable(display); 1251 1251
+4 -4
drivers/gpu/drm/imx/dc/dc-ed.c
··· 15 15 #include "dc-pe.h" 16 16 17 17 #define PIXENGCFG_STATIC 0x8 18 - #define POWERDOWN BIT(4) 19 - #define SYNC_MODE BIT(8) 20 - #define SINGLE 0 21 18 #define DIV_MASK GENMASK(23, 16) 22 19 #define DIV(x) FIELD_PREP(DIV_MASK, (x)) 23 20 #define DIV_RESET 0x80 21 + #define SYNC_MODE BIT(8) 22 + #define SINGLE 0 23 + #define POWERDOWN BIT(4) 24 24 25 25 #define PIXENGCFG_DYNAMIC 0xc 26 26 ··· 28 28 #define SYNC_TRIGGER BIT(0) 29 29 30 30 #define STATICCONTROL 0x8 31 + #define PERFCOUNTMODE BIT(12) 31 32 #define KICK_MODE BIT(8) 32 33 #define EXTERNAL BIT(8) 33 - #define PERFCOUNTMODE BIT(12) 34 34 35 35 #define CONTROL 0xc 36 36 #define GAMMAAPPLYENABLE BIT(0)
+2 -2
drivers/gpu/drm/imx/dc/dc-fg.c
··· 56 56 57 57 #define FGINCTRL 0x5c 58 58 #define FGINCTRLPANIC 0x60 59 - #define FGDM_MASK GENMASK(2, 0) 60 - #define ENPRIMALPHA BIT(3) 61 59 #define ENSECALPHA BIT(4) 60 + #define ENPRIMALPHA BIT(3) 61 + #define FGDM_MASK GENMASK(2, 0) 62 62 63 63 #define FGCCR 0x64 64 64 #define CCGREEN(x) FIELD_PREP(GENMASK(19, 10), (x))
+5 -5
drivers/gpu/drm/imx/dc/dc-fu.c
··· 18 18 #define BASEADDRESSAUTOUPDATE(x) FIELD_PREP(BASEADDRESSAUTOUPDATE_MASK, (x)) 19 19 20 20 /* BURSTBUFFERMANAGEMENT */ 21 + #define LINEMODE_MASK BIT(31) 21 22 #define SETBURSTLENGTH_MASK GENMASK(12, 8) 22 23 #define SETBURSTLENGTH(x) FIELD_PREP(SETBURSTLENGTH_MASK, (x)) 23 24 #define SETNUMBUFFERS_MASK GENMASK(7, 0) 24 25 #define SETNUMBUFFERS(x) FIELD_PREP(SETNUMBUFFERS_MASK, (x)) 25 - #define LINEMODE_MASK BIT(31) 26 26 27 27 /* SOURCEBUFFERATTRIBUTES */ 28 28 #define BITSPERPIXEL_MASK GENMASK(21, 16) ··· 31 31 #define STRIDE(x) FIELD_PREP(STRIDE_MASK, (x) - 1) 32 32 33 33 /* SOURCEBUFFERDIMENSION */ 34 - #define LINEWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x)) 35 34 #define LINECOUNT(x) FIELD_PREP(GENMASK(29, 16), (x)) 35 + #define LINEWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x)) 36 36 37 37 /* LAYEROFFSET */ 38 - #define LAYERXOFFSET(x) FIELD_PREP(GENMASK(14, 0), (x)) 39 38 #define LAYERYOFFSET(x) FIELD_PREP(GENMASK(30, 16), (x)) 39 + #define LAYERXOFFSET(x) FIELD_PREP(GENMASK(14, 0), (x)) 40 40 41 41 /* CLIPWINDOWOFFSET */ 42 - #define CLIPWINDOWXOFFSET(x) FIELD_PREP(GENMASK(14, 0), (x)) 43 42 #define CLIPWINDOWYOFFSET(x) FIELD_PREP(GENMASK(30, 16), (x)) 43 + #define CLIPWINDOWXOFFSET(x) FIELD_PREP(GENMASK(14, 0), (x)) 44 44 45 45 /* CLIPWINDOWDIMENSIONS */ 46 - #define CLIPWINDOWWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x) - 1) 47 46 #define CLIPWINDOWHEIGHT(x) FIELD_PREP(GENMASK(29, 16), (x) - 1) 47 + #define CLIPWINDOWWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x) - 1) 48 48 49 49 enum dc_linemode { 50 50 /*
+2 -2
drivers/gpu/drm/imx/dc/dc-fu.h
··· 33 33 #define A_SHIFT(x) FIELD_PREP_CONST(GENMASK(4, 0), (x)) 34 34 35 35 /* LAYERPROPERTY */ 36 + #define SOURCEBUFFERENABLE BIT(31) 36 37 #define YUVCONVERSIONMODE_MASK GENMASK(18, 17) 37 38 #define YUVCONVERSIONMODE(x) FIELD_PREP(YUVCONVERSIONMODE_MASK, (x)) 38 - #define SOURCEBUFFERENABLE BIT(31) 39 39 40 40 /* FRAMEDIMENSIONS */ 41 - #define FRAMEWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x)) 42 41 #define FRAMEHEIGHT(x) FIELD_PREP(GENMASK(29, 16), (x)) 42 + #define FRAMEWIDTH(x) FIELD_PREP(GENMASK(13, 0), (x)) 43 43 44 44 /* CONTROL */ 45 45 #define INPUTSELECT_MASK GENMASK(4, 3)
+14 -14
drivers/gpu/drm/imx/dc/dc-lb.c
··· 17 17 #include "dc-pe.h" 18 18 19 19 #define PIXENGCFG_DYNAMIC 0x8 20 - #define PIXENGCFG_DYNAMIC_PRIM_SEL_MASK GENMASK(5, 0) 21 - #define PIXENGCFG_DYNAMIC_PRIM_SEL(x) \ 22 - FIELD_PREP(PIXENGCFG_DYNAMIC_PRIM_SEL_MASK, (x)) 23 20 #define PIXENGCFG_DYNAMIC_SEC_SEL_MASK GENMASK(13, 8) 24 21 #define PIXENGCFG_DYNAMIC_SEC_SEL(x) \ 25 22 FIELD_PREP(PIXENGCFG_DYNAMIC_SEC_SEL_MASK, (x)) 23 + #define PIXENGCFG_DYNAMIC_PRIM_SEL_MASK GENMASK(5, 0) 24 + #define PIXENGCFG_DYNAMIC_PRIM_SEL(x) \ 25 + FIELD_PREP(PIXENGCFG_DYNAMIC_PRIM_SEL_MASK, (x)) 26 26 27 27 #define STATICCONTROL 0x8 28 28 #define SHDTOKSEL_MASK GENMASK(4, 3) ··· 37 37 #define BLENDCONTROL 0x10 38 38 #define ALPHA_MASK GENMASK(23, 16) 39 39 #define ALPHA(x) FIELD_PREP(ALPHA_MASK, (x)) 40 - #define PRIM_C_BLD_FUNC_MASK GENMASK(2, 0) 41 - #define PRIM_C_BLD_FUNC(x) \ 42 - FIELD_PREP(PRIM_C_BLD_FUNC_MASK, (x)) 43 - #define SEC_C_BLD_FUNC_MASK GENMASK(6, 4) 44 - #define SEC_C_BLD_FUNC(x) \ 45 - FIELD_PREP(SEC_C_BLD_FUNC_MASK, (x)) 46 - #define PRIM_A_BLD_FUNC_MASK GENMASK(10, 8) 47 - #define PRIM_A_BLD_FUNC(x) \ 48 - FIELD_PREP(PRIM_A_BLD_FUNC_MASK, (x)) 49 40 #define SEC_A_BLD_FUNC_MASK GENMASK(14, 12) 50 41 #define SEC_A_BLD_FUNC(x) \ 51 42 FIELD_PREP(SEC_A_BLD_FUNC_MASK, (x)) 43 + #define PRIM_A_BLD_FUNC_MASK GENMASK(10, 8) 44 + #define PRIM_A_BLD_FUNC(x) \ 45 + FIELD_PREP(PRIM_A_BLD_FUNC_MASK, (x)) 46 + #define SEC_C_BLD_FUNC_MASK GENMASK(6, 4) 47 + #define SEC_C_BLD_FUNC(x) \ 48 + FIELD_PREP(SEC_C_BLD_FUNC_MASK, (x)) 49 + #define PRIM_C_BLD_FUNC_MASK GENMASK(2, 0) 50 + #define PRIM_C_BLD_FUNC(x) \ 51 + FIELD_PREP(PRIM_C_BLD_FUNC_MASK, (x)) 52 52 53 53 #define POSITION 0x14 54 - #define XPOS_MASK GENMASK(15, 0) 55 - #define XPOS(x) FIELD_PREP(XPOS_MASK, (x)) 56 54 #define YPOS_MASK GENMASK(31, 16) 57 55 #define YPOS(x) FIELD_PREP(YPOS_MASK, (x)) 56 + #define XPOS_MASK GENMASK(15, 0) 57 + #define XPOS(x) FIELD_PREP(XPOS_MASK, (x)) 58 58 59 59 enum dc_lb_blend_func { 60 60 DC_LAYERBLEND_BLEND_ZERO,
+1 -1
drivers/gpu/drm/imx/dc/dc-plane.c
··· 106 106 } 107 107 108 108 crtc_state = 109 - drm_atomic_get_existing_crtc_state(state, plane_state->crtc); 109 + drm_atomic_get_new_crtc_state(state, plane_state->crtc); 110 110 if (WARN_ON(!crtc_state)) 111 111 return -EINVAL; 112 112
+2 -2
drivers/gpu/drm/imx/dcss/dcss-plane.c
··· 159 159 dma_obj = drm_fb_dma_get_gem_obj(fb, 0); 160 160 WARN_ON(!dma_obj); 161 161 162 - crtc_state = drm_atomic_get_existing_crtc_state(state, 163 - new_plane_state->crtc); 162 + crtc_state = drm_atomic_get_new_crtc_state(state, 163 + new_plane_state->crtc); 164 164 165 165 hdisplay = crtc_state->adjusted_mode.hdisplay; 166 166 vdisplay = crtc_state->adjusted_mode.vdisplay;
+1 -2
drivers/gpu/drm/imx/ipuv3/ipuv3-plane.c
··· 386 386 return -EINVAL; 387 387 388 388 crtc_state = 389 - drm_atomic_get_existing_crtc_state(state, 390 - new_state->crtc); 389 + drm_atomic_get_new_crtc_state(state, new_state->crtc); 391 390 if (WARN_ON(!crtc_state)) 392 391 return -EINVAL; 393 392
+9 -4
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
··· 247 247 struct ingenic_drm_private_state *priv_state; 248 248 unsigned int next_id; 249 249 250 - priv_state = ingenic_drm_get_priv_state(priv, state); 251 - if (WARN_ON(IS_ERR(priv_state))) 250 + priv_state = ingenic_drm_get_new_priv_state(priv, state); 251 + if (WARN_ON(!priv_state)) 252 252 return; 253 253 254 254 /* Set addresses of our DMA descriptor chains */ ··· 340 340 crtc); 341 341 struct ingenic_drm *priv = drm_crtc_get_priv(crtc); 342 342 struct drm_plane_state *f1_state, *f0_state, *ipu_state = NULL; 343 + struct ingenic_drm_private_state *priv_state; 343 344 344 345 if (crtc_state->gamma_lut && 345 346 drm_color_lut_size(crtc_state->gamma_lut) != ARRAY_SIZE(priv->dma_hwdescs->palette)) { 346 347 dev_dbg(priv->dev, "Invalid palette size\n"); 347 348 return -EINVAL; 348 349 } 350 + 351 + /* We will need the state in atomic_enable, so let's make sure it's part of the state */ 352 + priv_state = ingenic_drm_get_priv_state(priv, state); 353 + if (IS_ERR(priv_state)) 354 + return PTR_ERR(priv_state); 349 355 350 356 if (drm_atomic_crtc_needs_modeset(crtc_state) && priv->soc_info->has_osd) { 351 357 f1_state = drm_atomic_get_plane_state(crtc_state->state, ··· 477 471 if (priv->soc_info->plane_f0_not_working && plane == &priv->f0) 478 472 return -EINVAL; 479 473 480 - crtc_state = drm_atomic_get_existing_crtc_state(state, 481 - crtc); 474 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 482 475 if (WARN_ON(!crtc_state)) 483 476 return -EINVAL; 484 477
+2 -2
drivers/gpu/drm/ingenic/ingenic-ipu.c
··· 580 580 if (!crtc) 581 581 return 0; 582 582 583 - crtc_state = drm_atomic_get_existing_crtc_state(state, crtc); 583 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 584 584 if (WARN_ON(!crtc_state)) 585 585 return -EINVAL; 586 586 ··· 705 705 ipu->sharpness = val; 706 706 707 707 if (state->crtc) { 708 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, state->crtc); 708 + crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 709 709 if (WARN_ON(!crtc_state)) 710 710 return -EINVAL; 711 711
+1 -2
drivers/gpu/drm/kmb/kmb_plane.c
··· 129 129 } 130 130 can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY); 131 131 crtc_state = 132 - drm_atomic_get_existing_crtc_state(state, 133 - new_plane_state->crtc); 132 + drm_atomic_get_new_crtc_state(state, new_plane_state->crtc); 134 133 return drm_atomic_helper_check_plane_state(new_plane_state, 135 134 crtc_state, 136 135 DRM_PLANE_NO_SCALING,
+2 -2
drivers/gpu/drm/logicvc/logicvc_layer.c
··· 96 96 if (!new_state->crtc) 97 97 return 0; 98 98 99 - crtc_state = drm_atomic_get_existing_crtc_state(new_state->state, 100 - new_state->crtc); 99 + crtc_state = drm_atomic_get_new_crtc_state(new_state->state, 100 + new_state->crtc); 101 101 if (WARN_ON(!crtc_state)) 102 102 return -EINVAL; 103 103
+1 -1
drivers/gpu/drm/loongson/lsdc_plane.c
··· 196 196 return -EINVAL; 197 197 } 198 198 199 - crtc_state = drm_atomic_get_existing_crtc_state(state, new_state->crtc); 199 + crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc); 200 200 if (!crtc_state->active) 201 201 return -EINVAL; 202 202
+2 -1
drivers/gpu/drm/mediatek/mtk_plane.c
··· 122 122 if (ret) 123 123 return ret; 124 124 125 - crtc_state = drm_atomic_get_existing_crtc_state(state, new_plane_state->crtc); 125 + crtc_state = drm_atomic_get_new_crtc_state(state, 126 + new_plane_state->crtc); 126 127 127 128 return drm_atomic_helper_check_plane_state(plane->state, crtc_state, 128 129 DRM_PLANE_NO_SCALING,
+3 -4
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
··· 336 336 if (!crtc) 337 337 return 0; 338 338 339 - crtc_state = drm_atomic_get_existing_crtc_state(state, 340 - crtc); 339 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 341 340 if (WARN_ON(!crtc_state)) 342 341 return -EINVAL; 343 342 ··· 372 373 int min_scale, max_scale; 373 374 int ret; 374 375 375 - crtc_state = drm_atomic_get_existing_crtc_state(state, 376 - new_plane_state->crtc); 376 + crtc_state = drm_atomic_get_new_crtc_state(state, 377 + new_plane_state->crtc); 377 378 if (WARN_ON(!crtc_state)) 378 379 return -EINVAL; 379 380
+2 -2
drivers/gpu/drm/nouveau/nouveau_display.c
··· 765 765 { 766 766 struct nouveau_display *disp = nouveau_display(dev); 767 767 768 - drm_client_dev_suspend(dev, false); 768 + drm_client_dev_suspend(dev); 769 769 770 770 if (drm_drv_uses_atomic_modeset(dev)) { 771 771 if (!runtime) { ··· 796 796 } 797 797 } 798 798 799 - drm_client_dev_resume(dev, false); 799 + drm_client_dev_resume(dev); 800 800 } 801 801 802 802 int
+1 -1
drivers/gpu/drm/omapdrm/omap_plane.c
··· 229 229 if (!crtc) 230 230 return 0; 231 231 232 - crtc_state = drm_atomic_get_existing_crtc_state(state, crtc); 232 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 233 233 /* we should have a crtc state if the plane is attached to a crtc */ 234 234 if (WARN_ON(!crtc_state)) 235 235 return 0;
+1
drivers/gpu/drm/panel/panel-edp.c
··· 1888 1888 EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"), 1889 1889 EDP_PANEL_ENTRY('A', 'U', 'O', 0x1ea5, &delay_200_500_e50, "B116XAK01.6"), 1890 1890 EDP_PANEL_ENTRY('A', 'U', 'O', 0x203d, &delay_200_500_e50, "B140HTN02.0"), 1891 + EDP_PANEL_ENTRY('A', 'U', 'O', 0x205c, &delay_200_500_e50, "B116XAN02.0"), 1891 1892 EDP_PANEL_ENTRY('A', 'U', 'O', 0x208d, &delay_200_500_e50, "B140HTN02.1"), 1892 1893 EDP_PANEL_ENTRY('A', 'U', 'O', 0x235c, &delay_200_500_e50, "B116XTN02.3"), 1893 1894 EDP_PANEL_ENTRY('A', 'U', 'O', 0x239b, &delay_200_500_e50, "B116XAN06.1"),
+2 -2
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 74 74 75 75 spin_unlock_irqrestore(&pfdevfreq->lock, irqflags); 76 76 77 - dev_dbg(pfdev->dev, "busy %lu total %lu %lu %% freq %lu MHz\n", 77 + dev_dbg(pfdev->base.dev, "busy %lu total %lu %lu %% freq %lu MHz\n", 78 78 status->busy_time, status->total_time, 79 79 status->busy_time / (status->total_time / 100), 80 80 status->current_frequency / 1000 / 1000); ··· 119 119 int ret; 120 120 struct dev_pm_opp *opp; 121 121 unsigned long cur_freq; 122 - struct device *dev = &pfdev->pdev->dev; 122 + struct device *dev = pfdev->base.dev; 123 123 struct devfreq *devfreq; 124 124 struct thermal_cooling_device *cooling; 125 125 struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;
+36 -32
drivers/gpu/drm/panfrost/panfrost_device.c
··· 20 20 21 21 static int panfrost_reset_init(struct panfrost_device *pfdev) 22 22 { 23 - pfdev->rstc = devm_reset_control_array_get_optional_exclusive(pfdev->dev); 23 + pfdev->rstc = devm_reset_control_array_get_optional_exclusive(pfdev->base.dev); 24 24 if (IS_ERR(pfdev->rstc)) { 25 - dev_err(pfdev->dev, "get reset failed %ld\n", PTR_ERR(pfdev->rstc)); 25 + dev_err(pfdev->base.dev, "get reset failed %ld\n", PTR_ERR(pfdev->rstc)); 26 26 return PTR_ERR(pfdev->rstc); 27 27 } 28 28 ··· 39 39 int err; 40 40 unsigned long rate; 41 41 42 - pfdev->clock = devm_clk_get(pfdev->dev, NULL); 42 + pfdev->clock = devm_clk_get(pfdev->base.dev, NULL); 43 43 if (IS_ERR(pfdev->clock)) { 44 - dev_err(pfdev->dev, "get clock failed %ld\n", PTR_ERR(pfdev->clock)); 44 + dev_err(pfdev->base.dev, "get clock failed %ld\n", PTR_ERR(pfdev->clock)); 45 45 return PTR_ERR(pfdev->clock); 46 46 } 47 47 48 48 rate = clk_get_rate(pfdev->clock); 49 - dev_info(pfdev->dev, "clock rate = %lu\n", rate); 49 + dev_info(pfdev->base.dev, "clock rate = %lu\n", rate); 50 50 51 51 err = clk_prepare_enable(pfdev->clock); 52 52 if (err) 53 53 return err; 54 54 55 - pfdev->bus_clock = devm_clk_get_optional(pfdev->dev, "bus"); 55 + pfdev->bus_clock = devm_clk_get_optional(pfdev->base.dev, "bus"); 56 56 if (IS_ERR(pfdev->bus_clock)) { 57 - dev_err(pfdev->dev, "get bus_clock failed %ld\n", 57 + dev_err(pfdev->base.dev, "get bus_clock failed %ld\n", 58 58 PTR_ERR(pfdev->bus_clock)); 59 59 err = PTR_ERR(pfdev->bus_clock); 60 60 goto disable_clock; ··· 62 62 63 63 if (pfdev->bus_clock) { 64 64 rate = clk_get_rate(pfdev->bus_clock); 65 - dev_info(pfdev->dev, "bus_clock rate = %lu\n", rate); 65 + dev_info(pfdev->base.dev, "bus_clock rate = %lu\n", rate); 66 66 67 67 err = clk_prepare_enable(pfdev->bus_clock); 68 68 if (err) ··· 87 87 { 88 88 int ret, i; 89 89 90 - pfdev->regulators = devm_kcalloc(pfdev->dev, pfdev->comp->num_supplies, 90 + pfdev->regulators = devm_kcalloc(pfdev->base.dev, pfdev->comp->num_supplies, 91 
91 sizeof(*pfdev->regulators), 92 92 GFP_KERNEL); 93 93 if (!pfdev->regulators) ··· 96 96 for (i = 0; i < pfdev->comp->num_supplies; i++) 97 97 pfdev->regulators[i].supply = pfdev->comp->supply_names[i]; 98 98 99 - ret = devm_regulator_bulk_get(pfdev->dev, 99 + ret = devm_regulator_bulk_get(pfdev->base.dev, 100 100 pfdev->comp->num_supplies, 101 101 pfdev->regulators); 102 102 if (ret < 0) { 103 103 if (ret != -EPROBE_DEFER) 104 - dev_err(pfdev->dev, "failed to get regulators: %d\n", 104 + dev_err(pfdev->base.dev, "failed to get regulators: %d\n", 105 105 ret); 106 106 return ret; 107 107 } ··· 109 109 ret = regulator_bulk_enable(pfdev->comp->num_supplies, 110 110 pfdev->regulators); 111 111 if (ret < 0) { 112 - dev_err(pfdev->dev, "failed to enable regulators: %d\n", ret); 112 + dev_err(pfdev->base.dev, "failed to enable regulators: %d\n", ret); 113 113 return ret; 114 114 } 115 115 ··· 144 144 int err; 145 145 int i, num_domains; 146 146 147 - num_domains = of_count_phandle_with_args(pfdev->dev->of_node, 147 + num_domains = of_count_phandle_with_args(pfdev->base.dev->of_node, 148 148 "power-domains", 149 149 "#power-domain-cells"); 150 150 ··· 156 156 return 0; 157 157 158 158 if (num_domains != pfdev->comp->num_pm_domains) { 159 - dev_err(pfdev->dev, 159 + dev_err(pfdev->base.dev, 160 160 "Incorrect number of power domains: %d provided, %d needed\n", 161 161 num_domains, pfdev->comp->num_pm_domains); 162 162 return -EINVAL; ··· 168 168 169 169 for (i = 0; i < num_domains; i++) { 170 170 pfdev->pm_domain_devs[i] = 171 - dev_pm_domain_attach_by_name(pfdev->dev, 172 - pfdev->comp->pm_domain_names[i]); 171 + dev_pm_domain_attach_by_name(pfdev->base.dev, 172 + pfdev->comp->pm_domain_names[i]); 173 173 if (IS_ERR_OR_NULL(pfdev->pm_domain_devs[i])) { 174 174 err = PTR_ERR(pfdev->pm_domain_devs[i]) ? 
: -ENODATA; 175 175 pfdev->pm_domain_devs[i] = NULL; 176 - dev_err(pfdev->dev, 176 + dev_err(pfdev->base.dev, 177 177 "failed to get pm-domain %s(%d): %d\n", 178 178 pfdev->comp->pm_domain_names[i], i, err); 179 179 goto err; 180 180 } 181 181 182 - pfdev->pm_domain_links[i] = device_link_add(pfdev->dev, 183 - pfdev->pm_domain_devs[i], DL_FLAG_PM_RUNTIME | 184 - DL_FLAG_STATELESS | DL_FLAG_RPM_ACTIVE); 182 + pfdev->pm_domain_links[i] = 183 + device_link_add(pfdev->base.dev, 184 + pfdev->pm_domain_devs[i], DL_FLAG_PM_RUNTIME | 185 + DL_FLAG_STATELESS | DL_FLAG_RPM_ACTIVE); 185 186 if (!pfdev->pm_domain_links[i]) { 186 187 dev_err(pfdev->pm_domain_devs[i], 187 188 "adding device link failed!\n"); ··· 221 220 222 221 err = panfrost_reset_init(pfdev); 223 222 if (err) { 224 - dev_err(pfdev->dev, "reset init failed %d\n", err); 223 + dev_err(pfdev->base.dev, "reset init failed %d\n", err); 225 224 goto out_pm_domain; 226 225 } 227 226 228 227 err = panfrost_clk_init(pfdev); 229 228 if (err) { 230 - dev_err(pfdev->dev, "clk init failed %d\n", err); 229 + dev_err(pfdev->base.dev, "clk init failed %d\n", err); 231 230 goto out_reset; 232 231 } 233 232 234 233 err = panfrost_devfreq_init(pfdev); 235 234 if (err) { 236 235 if (err != -EPROBE_DEFER) 237 - dev_err(pfdev->dev, "devfreq init failed %d\n", err); 236 + dev_err(pfdev->base.dev, "devfreq init failed %d\n", err); 238 237 goto out_clk; 239 238 } 240 239 ··· 245 244 goto out_devfreq; 246 245 } 247 246 248 - pfdev->iomem = devm_platform_ioremap_resource(pfdev->pdev, 0); 247 + pfdev->iomem = devm_platform_ioremap_resource(to_platform_device(pfdev->base.dev), 0); 249 248 if (IS_ERR(pfdev->iomem)) { 250 249 err = PTR_ERR(pfdev->iomem); 251 250 goto out_regulator; ··· 259 258 if (err) 260 259 goto out_gpu; 261 260 262 - err = panfrost_job_init(pfdev); 261 + err = panfrost_jm_init(pfdev); 263 262 if (err) 264 263 goto out_mmu; 265 264 ··· 269 268 270 269 return 0; 271 270 out_job: 272 - panfrost_job_fini(pfdev); 271 + 
panfrost_jm_fini(pfdev); 273 272 out_mmu: 274 273 panfrost_mmu_fini(pfdev); 275 274 out_gpu: ··· 290 289 void panfrost_device_fini(struct panfrost_device *pfdev) 291 290 { 292 291 panfrost_perfcnt_fini(pfdev); 293 - panfrost_job_fini(pfdev); 292 + panfrost_jm_fini(pfdev); 294 293 panfrost_mmu_fini(pfdev); 295 294 panfrost_gpu_fini(pfdev); 296 295 panfrost_devfreq_fini(pfdev); ··· 400 399 return false; 401 400 } 402 401 403 - void panfrost_device_reset(struct panfrost_device *pfdev) 402 + void panfrost_device_reset(struct panfrost_device *pfdev, bool enable_job_int) 404 403 { 405 404 panfrost_gpu_soft_reset(pfdev); 406 405 407 406 panfrost_gpu_power_on(pfdev); 408 407 panfrost_mmu_reset(pfdev); 409 - panfrost_job_enable_interrupts(pfdev); 408 + 409 + panfrost_jm_reset_interrupts(pfdev); 410 + if (enable_job_int) 411 + panfrost_jm_enable_interrupts(pfdev); 410 412 } 411 413 412 414 static int panfrost_device_runtime_resume(struct device *dev) ··· 433 429 } 434 430 } 435 431 436 - panfrost_device_reset(pfdev); 432 + panfrost_device_reset(pfdev, true); 437 433 panfrost_devfreq_resume(pfdev); 438 434 439 435 return 0; ··· 451 447 { 452 448 struct panfrost_device *pfdev = dev_get_drvdata(dev); 453 449 454 - if (!panfrost_job_is_idle(pfdev)) 450 + if (!panfrost_jm_is_idle(pfdev)) 455 451 return -EBUSY; 456 452 457 453 panfrost_devfreq_suspend(pfdev); 458 - panfrost_job_suspend_irq(pfdev); 454 + panfrost_jm_suspend_irq(pfdev); 459 455 panfrost_mmu_suspend_irq(pfdev); 460 456 panfrost_gpu_suspend_irq(pfdev); 461 457 panfrost_gpu_power_off(pfdev);
+7 -6
drivers/gpu/drm/panfrost/panfrost_device.h
··· 26 26 27 27 #define MAX_PM_DOMAINS 5 28 28 29 + #define ALL_JS_INT_MASK \ 30 + (GENMASK(16 + NUM_JOB_SLOTS - 1, 16) | \ 31 + GENMASK(NUM_JOB_SLOTS - 1, 0)) 32 + 29 33 enum panfrost_drv_comp_bits { 30 34 PANFROST_COMP_BIT_GPU, 31 35 PANFROST_COMP_BIT_JOB, ··· 128 124 }; 129 125 130 126 struct panfrost_device { 131 - struct device *dev; 132 - struct drm_device *ddev; 133 - struct platform_device *pdev; 127 + struct drm_device base; 134 128 int gpu_irq; 135 129 int mmu_irq; 136 130 ··· 147 145 DECLARE_BITMAP(is_suspended, PANFROST_COMP_BIT_MAX); 148 146 149 147 spinlock_t as_lock; 150 - unsigned long as_in_use_mask; 151 148 unsigned long as_alloc_mask; 152 149 unsigned long as_faulty_mask; 153 150 struct list_head as_lru_list; ··· 223 222 224 223 static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev) 225 224 { 226 - return ddev->dev_private; 225 + return container_of(ddev, struct panfrost_device, base); 227 226 } 228 227 229 228 static inline int panfrost_model_cmp(struct panfrost_device *pfdev, s32 id) ··· 249 248 250 249 int panfrost_device_init(struct panfrost_device *pfdev); 251 250 void panfrost_device_fini(struct panfrost_device *pfdev); 252 - void panfrost_device_reset(struct panfrost_device *pfdev); 251 + void panfrost_device_reset(struct panfrost_device *pfdev, bool enable_job_int); 253 252 254 253 extern const struct dev_pm_ops panfrost_pm_ops; 255 254
+37 -57
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 36 36 { 37 37 int ret; 38 38 39 - ret = pm_runtime_resume_and_get(pfdev->dev); 39 + ret = pm_runtime_resume_and_get(pfdev->base.dev); 40 40 if (ret) 41 41 return ret; 42 42 ··· 44 44 *arg = panfrost_timestamp_read(pfdev); 45 45 panfrost_cycle_counter_put(pfdev); 46 46 47 - pm_runtime_put(pfdev->dev); 47 + pm_runtime_put(pfdev->base.dev); 48 48 return 0; 49 49 } 50 50 51 51 static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct drm_file *file) 52 52 { 53 53 struct drm_panfrost_get_param *param = data; 54 - struct panfrost_device *pfdev = ddev->dev_private; 54 + struct panfrost_device *pfdev = to_panfrost_device(ddev); 55 55 int ret; 56 56 57 57 if (param->pad != 0) ··· 283 283 static int panfrost_ioctl_submit(struct drm_device *dev, void *data, 284 284 struct drm_file *file) 285 285 { 286 - struct panfrost_device *pfdev = dev->dev_private; 286 + struct panfrost_device *pfdev = to_panfrost_device(dev); 287 287 struct panfrost_file_priv *file_priv = file->driver_priv; 288 288 struct drm_panfrost_submit *args = data; 289 289 struct drm_syncobj *sync_out = NULL; ··· 457 457 { 458 458 struct panfrost_file_priv *priv = file_priv->driver_priv; 459 459 struct drm_panfrost_madvise *args = data; 460 - struct panfrost_device *pfdev = dev->dev_private; 460 + struct panfrost_device *pfdev = to_panfrost_device(dev); 461 461 struct drm_gem_object *gem_obj; 462 462 struct panfrost_gem_object *bo; 463 463 int ret = 0; ··· 590 590 panfrost_open(struct drm_device *dev, struct drm_file *file) 591 591 { 592 592 int ret; 593 - struct panfrost_device *pfdev = dev->dev_private; 593 + struct panfrost_device *pfdev = to_panfrost_device(dev); 594 594 struct panfrost_file_priv *panfrost_priv; 595 595 596 596 panfrost_priv = kzalloc(sizeof(*panfrost_priv), GFP_KERNEL); ··· 606 606 goto err_free; 607 607 } 608 608 609 - ret = panfrost_job_open(file); 609 + ret = panfrost_jm_open(file); 610 610 if (ret) 611 611 goto err_job; 612 612 ··· 625 625 struct 
panfrost_file_priv *panfrost_priv = file->driver_priv; 626 626 627 627 panfrost_perfcnt_close(file); 628 - panfrost_job_close(file); 628 + panfrost_jm_close(file); 629 629 630 630 panfrost_mmu_ctx_put(panfrost_priv->mmu); 631 631 kfree(panfrost_priv); ··· 668 668 * job spent on the GPU. 669 669 */ 670 670 671 - static const char * const engine_names[] = { 672 - "fragment", "vertex-tiler", "compute-only" 673 - }; 674 - 675 - BUILD_BUG_ON(ARRAY_SIZE(engine_names) != NUM_JOB_SLOTS); 676 - 677 671 for (i = 0; i < NUM_JOB_SLOTS - 1; i++) { 678 672 if (pfdev->profile_mode) { 679 673 drm_printf(p, "drm-engine-%s:\t%llu ns\n", 680 - engine_names[i], panfrost_priv->engine_usage.elapsed_ns[i]); 674 + panfrost_engine_names[i], 675 + panfrost_priv->engine_usage.elapsed_ns[i]); 681 676 drm_printf(p, "drm-cycles-%s:\t%llu\n", 682 - engine_names[i], panfrost_priv->engine_usage.cycles[i]); 677 + panfrost_engine_names[i], 678 + panfrost_priv->engine_usage.cycles[i]); 683 679 } 684 680 drm_printf(p, "drm-maxfreq-%s:\t%lu Hz\n", 685 - engine_names[i], pfdev->pfdevfreq.fast_rate); 681 + panfrost_engine_names[i], pfdev->pfdevfreq.fast_rate); 686 682 drm_printf(p, "drm-curfreq-%s:\t%lu Hz\n", 687 - engine_names[i], pfdev->pfdevfreq.current_frequency); 683 + panfrost_engine_names[i], pfdev->pfdevfreq.current_frequency); 688 684 } 689 685 } 690 686 691 687 static void panfrost_show_fdinfo(struct drm_printer *p, struct drm_file *file) 692 688 { 693 - struct drm_device *dev = file->minor->dev; 694 - struct panfrost_device *pfdev = dev->dev_private; 689 + struct panfrost_device *pfdev = to_panfrost_device(file->minor->dev); 695 690 696 691 panfrost_gpu_show_fdinfo(pfdev, file->driver_priv, p); 697 692 ··· 703 708 static int panthor_gems_show(struct seq_file *m, void *data) 704 709 { 705 710 struct drm_info_node *node = m->private; 706 - struct drm_device *dev = node->minor->dev; 707 - struct panfrost_device *pfdev = dev->dev_private; 711 + struct panfrost_device *pfdev = 
to_panfrost_device(node->minor->dev); 708 712 709 713 panfrost_gem_debugfs_print_bos(pfdev, m); 710 714 ··· 752 758 } 753 759 754 760 static struct drm_info_list panthor_debugfs_list[] = { 755 - {"gems", panthor_gems_show, 0, NULL}, 761 + {"gems", 762 + panthor_gems_show, 0, NULL}, 756 763 }; 757 764 758 765 static int panthor_gems_debugfs_init(struct drm_minor *minor) ··· 860 865 static int panfrost_probe(struct platform_device *pdev) 861 866 { 862 867 struct panfrost_device *pfdev; 863 - struct drm_device *ddev; 864 868 int err; 865 869 866 - pfdev = devm_kzalloc(&pdev->dev, sizeof(*pfdev), GFP_KERNEL); 867 - if (!pfdev) 868 - return -ENOMEM; 869 - 870 - pfdev->pdev = pdev; 871 - pfdev->dev = &pdev->dev; 870 + pfdev = devm_drm_dev_alloc(&pdev->dev, &panfrost_drm_driver, 871 + struct panfrost_device, base); 872 + if (IS_ERR(pfdev)) 873 + return PTR_ERR(pfdev); 872 874 873 875 platform_set_drvdata(pdev, pfdev); 874 876 ··· 874 882 return -ENODEV; 875 883 876 884 pfdev->coherent = device_get_dma_attr(&pdev->dev) == DEV_DMA_COHERENT; 877 - 878 - /* Allocate and initialize the DRM device. 
*/ 879 - ddev = drm_dev_alloc(&panfrost_drm_driver, &pdev->dev); 880 - if (IS_ERR(ddev)) 881 - return PTR_ERR(ddev); 882 - 883 - ddev->dev_private = pfdev; 884 - pfdev->ddev = ddev; 885 885 886 886 mutex_init(&pfdev->shrinker_lock); 887 887 INIT_LIST_HEAD(&pfdev->shrinker_list); ··· 885 901 goto err_out0; 886 902 } 887 903 888 - pm_runtime_set_active(pfdev->dev); 889 - pm_runtime_mark_last_busy(pfdev->dev); 890 - pm_runtime_enable(pfdev->dev); 891 - pm_runtime_set_autosuspend_delay(pfdev->dev, 50); /* ~3 frames */ 892 - pm_runtime_use_autosuspend(pfdev->dev); 904 + pm_runtime_set_active(pfdev->base.dev); 905 + pm_runtime_mark_last_busy(pfdev->base.dev); 906 + pm_runtime_enable(pfdev->base.dev); 907 + pm_runtime_set_autosuspend_delay(pfdev->base.dev, 50); /* ~3 frames */ 908 + pm_runtime_use_autosuspend(pfdev->base.dev); 893 909 894 910 /* 895 911 * Register the DRM device with the core and the connectors with 896 912 * sysfs 897 913 */ 898 - err = drm_dev_register(ddev, 0); 914 + err = drm_dev_register(&pfdev->base, 0); 899 915 if (err < 0) 900 916 goto err_out1; 901 917 902 - err = panfrost_gem_shrinker_init(ddev); 918 + err = panfrost_gem_shrinker_init(&pfdev->base); 903 919 if (err) 904 920 goto err_out2; 905 921 906 922 return 0; 907 923 908 924 err_out2: 909 - drm_dev_unregister(ddev); 925 + drm_dev_unregister(&pfdev->base); 910 926 err_out1: 911 - pm_runtime_disable(pfdev->dev); 927 + pm_runtime_disable(pfdev->base.dev); 912 928 panfrost_device_fini(pfdev); 913 - pm_runtime_set_suspended(pfdev->dev); 929 + pm_runtime_set_suspended(pfdev->base.dev); 914 930 err_out0: 915 - drm_dev_put(ddev); 916 931 return err; 917 932 } 918 933 919 934 static void panfrost_remove(struct platform_device *pdev) 920 935 { 921 936 struct panfrost_device *pfdev = platform_get_drvdata(pdev); 922 - struct drm_device *ddev = pfdev->ddev; 923 937 924 - drm_dev_unregister(ddev); 925 - panfrost_gem_shrinker_cleanup(ddev); 938 + drm_dev_unregister(&pfdev->base); 939 + 
panfrost_gem_shrinker_cleanup(&pfdev->base); 926 940 927 - pm_runtime_get_sync(pfdev->dev); 928 - pm_runtime_disable(pfdev->dev); 941 + pm_runtime_get_sync(pfdev->base.dev); 942 + pm_runtime_disable(pfdev->base.dev); 929 943 panfrost_device_fini(pfdev); 930 - pm_runtime_set_suspended(pfdev->dev); 931 - 932 - drm_dev_put(ddev); 944 + pm_runtime_set_suspended(pfdev->base.dev); 933 945 } 934 946 935 947 static ssize_t profiling_show(struct device *dev,
+4 -4
drivers/gpu/drm/panfrost/panfrost_dump.c
··· 163 163 iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN | 164 164 __GFP_NORETRY); 165 165 if (!iter.start) { 166 - dev_warn(pfdev->dev, "failed to allocate devcoredump file\n"); 166 + dev_warn(pfdev->base.dev, "failed to allocate devcoredump file\n"); 167 167 return; 168 168 } 169 169 ··· 204 204 mapping = job->mappings[i]; 205 205 206 206 if (!bo->base.sgt) { 207 - dev_err(pfdev->dev, "Panfrost Dump: BO has no sgt, cannot dump\n"); 207 + dev_err(pfdev->base.dev, "Panfrost Dump: BO has no sgt, cannot dump\n"); 208 208 iter.hdr->bomap.valid = 0; 209 209 goto dump_header; 210 210 } 211 211 212 212 ret = drm_gem_vmap(&bo->base.base, &map); 213 213 if (ret) { 214 - dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n"); 214 + dev_err(pfdev->base.dev, "Panfrost Dump: couldn't map Buffer Object\n"); 215 215 iter.hdr->bomap.valid = 0; 216 216 goto dump_header; 217 217 } ··· 237 237 } 238 238 panfrost_core_dump_header(&iter, PANFROSTDUMP_BUF_TRAILER, iter.data); 239 239 240 - dev_coredumpv(pfdev->dev, iter.start, iter.data - iter.start, GFP_KERNEL); 240 + dev_coredumpv(pfdev->base.dev, iter.start, iter.data - iter.start, GFP_KERNEL); 241 241 }
+4 -4
drivers/gpu/drm/panfrost/panfrost_gem.c
··· 26 26 27 27 static void panfrost_gem_debugfs_bo_rm(struct panfrost_gem_object *bo) 28 28 { 29 - struct panfrost_device *pfdev = bo->base.base.dev->dev_private; 29 + struct panfrost_device *pfdev = to_panfrost_device(bo->base.base.dev); 30 30 31 31 if (list_empty(&bo->debugfs.node)) 32 32 return; ··· 48 48 static void panfrost_gem_free_object(struct drm_gem_object *obj) 49 49 { 50 50 struct panfrost_gem_object *bo = to_panfrost_bo(obj); 51 - struct panfrost_device *pfdev = obj->dev->dev_private; 51 + struct panfrost_device *pfdev = to_panfrost_device(obj->dev); 52 52 53 53 /* 54 54 * Make sure the BO is no longer inserted in the shrinker list before ··· 76 76 77 77 for (i = 0; i < n_sgt; i++) { 78 78 if (bo->sgts[i].sgl) { 79 - dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], 79 + dma_unmap_sgtable(pfdev->base.dev, &bo->sgts[i], 80 80 DMA_BIDIRECTIONAL, 0); 81 81 sg_free_table(&bo->sgts[i]); 82 82 } ··· 284 284 */ 285 285 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size) 286 286 { 287 - struct panfrost_device *pfdev = dev->dev_private; 287 + struct panfrost_device *pfdev = to_panfrost_device(dev); 288 288 struct panfrost_gem_object *obj; 289 289 290 290 obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+2 -2
drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
··· 97 97 */ 98 98 int panfrost_gem_shrinker_init(struct drm_device *dev) 99 99 { 100 - struct panfrost_device *pfdev = dev->dev_private; 100 + struct panfrost_device *pfdev = to_panfrost_device(dev); 101 101 102 102 pfdev->shrinker = shrinker_alloc(0, "drm-panfrost"); 103 103 if (!pfdev->shrinker) ··· 120 120 */ 121 121 void panfrost_gem_shrinker_cleanup(struct drm_device *dev) 122 122 { 123 - struct panfrost_device *pfdev = dev->dev_private; 123 + struct panfrost_device *pfdev = to_panfrost_device(dev); 124 124 125 125 if (pfdev->shrinker) 126 126 shrinker_free(pfdev->shrinker);
+39 -27
drivers/gpu/drm/panfrost/panfrost_gpu.c
··· 36 36 u64 address = (u64) gpu_read(pfdev, GPU_FAULT_ADDRESS_HI) << 32; 37 37 address |= gpu_read(pfdev, GPU_FAULT_ADDRESS_LO); 38 38 39 - dev_warn(pfdev->dev, "GPU Fault 0x%08x (%s) at 0x%016llx\n", 39 + dev_warn(pfdev->base.dev, "GPU Fault 0x%08x (%s) at 0x%016llx\n", 40 40 fault_status, panfrost_exception_name(fault_status & 0xFF), 41 41 address); 42 42 43 43 if (state & GPU_IRQ_MULTIPLE_FAULT) 44 - dev_warn(pfdev->dev, "There were multiple GPU faults - some have not been reported\n"); 44 + dev_warn(pfdev->base.dev, "There were multiple GPU faults - some have not been reported\n"); 45 45 46 46 gpu_write(pfdev, GPU_INT_MASK, 0); 47 47 } ··· 72 72 val, val & GPU_IRQ_RESET_COMPLETED, 10, 10000); 73 73 74 74 if (ret) { 75 - dev_err(pfdev->dev, "gpu soft reset timed out, attempting hard reset\n"); 75 + dev_err(pfdev->base.dev, "gpu soft reset timed out, attempting hard reset\n"); 76 76 77 77 gpu_write(pfdev, GPU_CMD, GPU_CMD_HARD_RESET); 78 78 ret = readl_relaxed_poll_timeout(pfdev->iomem + GPU_INT_RAWSTAT, val, 79 79 val & GPU_IRQ_RESET_COMPLETED, 100, 10000); 80 80 if (ret) { 81 - dev_err(pfdev->dev, "gpu hard reset timed out\n"); 81 + dev_err(pfdev->base.dev, "gpu hard reset timed out\n"); 82 82 return ret; 83 83 } 84 84 } ··· 95 95 * All in-flight jobs should have released their cycle 96 96 * counter references upon reset, but let us make sure 97 97 */ 98 - if (drm_WARN_ON(pfdev->ddev, atomic_read(&pfdev->cycle_counter.use_count) != 0)) 98 + if (drm_WARN_ON(&pfdev->base, atomic_read(&pfdev->cycle_counter.use_count) != 0)) 99 99 atomic_set(&pfdev->cycle_counter.use_count, 0); 100 100 101 101 return 0; ··· 240 240 /* MediaTek MT8188 Mali-G57 MC3 */ 241 241 GPU_MODEL(g57, 0x9093, 242 242 GPU_REV(g57, 0, 0)), 243 + {0}, 243 244 }; 244 245 245 - static void panfrost_gpu_init_features(struct panfrost_device *pfdev) 246 + static int panfrost_gpu_init_features(struct panfrost_device *pfdev) 246 247 { 247 248 u32 gpu_id, num_js, major, minor, status, rev; 248 249 const 
char *name = "unknown"; ··· 328 327 break; 329 328 } 330 329 330 + if (!model->name) { 331 + dev_err(pfdev->base.dev, "GPU model not found: mali-%s id rev %#x %#x\n", 332 + name, gpu_id, rev); 333 + return -ENODEV; 334 + } 335 + 331 336 bitmap_from_u64(pfdev->features.hw_features, hw_feat); 332 337 bitmap_from_u64(pfdev->features.hw_issues, hw_issues); 333 338 334 - dev_info(pfdev->dev, "mali-%s id 0x%x major 0x%x minor 0x%x status 0x%x", 339 + dev_info(pfdev->base.dev, "mali-%s id 0x%x major 0x%x minor 0x%x status 0x%x", 335 340 name, gpu_id, major, minor, status); 336 - dev_info(pfdev->dev, "features: %64pb, issues: %64pb", 341 + dev_info(pfdev->base.dev, "features: %64pb, issues: %64pb", 337 342 pfdev->features.hw_features, 338 343 pfdev->features.hw_issues); 339 344 340 - dev_info(pfdev->dev, "Features: L2:0x%08x Shader:0x%08x Tiler:0x%08x Mem:0x%0x MMU:0x%08x AS:0x%x JS:0x%x", 345 + dev_info(pfdev->base.dev, "Features: L2:0x%08x Shader:0x%08x Tiler:0x%08x Mem:0x%0x MMU:0x%08x AS:0x%x JS:0x%x", 341 346 pfdev->features.l2_features, 342 347 pfdev->features.core_features, 343 348 pfdev->features.tiler_features, ··· 352 345 pfdev->features.as_present, 353 346 pfdev->features.js_present); 354 347 355 - dev_info(pfdev->dev, "shader_present=0x%0llx l2_present=0x%0llx", 348 + dev_info(pfdev->base.dev, "shader_present=0x%0llx l2_present=0x%0llx", 356 349 pfdev->features.shader_present, pfdev->features.l2_present); 350 + 351 + return 0; 357 352 } 358 353 359 354 void panfrost_cycle_counter_get(struct panfrost_device *pfdev) ··· 420 411 */ 421 412 core_mask = ~(pfdev->features.l2_present - 1) & 422 413 (pfdev->features.l2_present - 2); 423 - dev_info_once(pfdev->dev, "using only 1st core group (%lu cores from %lu)\n", 414 + dev_info_once(pfdev->base.dev, "using only 1st core group (%lu cores from %lu)\n", 424 415 hweight64(core_mask), 425 416 hweight64(pfdev->features.shader_present)); 426 417 ··· 441 432 val, val == (pfdev->features.l2_present & core_mask), 442 433 10, 
20000); 443 434 if (ret) 444 - dev_err(pfdev->dev, "error powering up gpu L2"); 435 + dev_err(pfdev->base.dev, "error powering up gpu L2"); 445 436 446 437 gpu_write(pfdev, SHADER_PWRON_LO, 447 438 pfdev->features.shader_present & core_mask); ··· 449 440 val, val == (pfdev->features.shader_present & core_mask), 450 441 10, 20000); 451 442 if (ret) 452 - dev_err(pfdev->dev, "error powering up gpu shader"); 443 + dev_err(pfdev->base.dev, "error powering up gpu shader"); 453 444 454 445 gpu_write(pfdev, TILER_PWRON_LO, pfdev->features.tiler_present); 455 446 ret = readl_relaxed_poll_timeout(pfdev->iomem + TILER_READY_LO, 456 447 val, val == pfdev->features.tiler_present, 10, 1000); 457 448 if (ret) 458 - dev_err(pfdev->dev, "error powering up gpu tiler"); 449 + dev_err(pfdev->base.dev, "error powering up gpu tiler"); 459 450 } 460 451 461 452 void panfrost_gpu_power_off(struct panfrost_device *pfdev) ··· 467 458 ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_PWRTRANS_LO, 468 459 val, !val, 1, 2000); 469 460 if (ret) 470 - dev_err(pfdev->dev, "shader power transition timeout"); 461 + dev_err(pfdev->base.dev, "shader power transition timeout"); 471 462 472 463 gpu_write(pfdev, TILER_PWROFF_LO, pfdev->features.tiler_present); 473 464 ret = readl_relaxed_poll_timeout(pfdev->iomem + TILER_PWRTRANS_LO, 474 465 val, !val, 1, 2000); 475 466 if (ret) 476 - dev_err(pfdev->dev, "tiler power transition timeout"); 467 + dev_err(pfdev->base.dev, "tiler power transition timeout"); 477 468 478 469 gpu_write(pfdev, L2_PWROFF_LO, pfdev->features.l2_present); 479 470 ret = readl_poll_timeout(pfdev->iomem + L2_PWRTRANS_LO, 480 471 val, !val, 0, 2000); 481 472 if (ret) 482 - dev_err(pfdev->dev, "l2 power transition timeout"); 473 + dev_err(pfdev->base.dev, "l2 power transition timeout"); 483 474 } 484 475 485 476 void panfrost_gpu_suspend_irq(struct panfrost_device *pfdev) ··· 498 489 if (err) 499 490 return err; 500 491 501 - panfrost_gpu_init_features(pfdev); 502 - 503 - err = 
dma_set_mask_and_coherent(pfdev->dev, 504 - DMA_BIT_MASK(FIELD_GET(0xff00, pfdev->features.mmu_features))); 492 + err = panfrost_gpu_init_features(pfdev); 505 493 if (err) 506 494 return err; 507 495 508 - dma_set_max_seg_size(pfdev->dev, UINT_MAX); 496 + err = dma_set_mask_and_coherent(pfdev->base.dev, 497 + DMA_BIT_MASK(FIELD_GET(0xff00, 498 + pfdev->features.mmu_features))); 499 + if (err) 500 + return err; 509 501 510 - pfdev->gpu_irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu"); 502 + dma_set_max_seg_size(pfdev->base.dev, UINT_MAX); 503 + 504 + pfdev->gpu_irq = platform_get_irq_byname(to_platform_device(pfdev->base.dev), "gpu"); 511 505 if (pfdev->gpu_irq < 0) 512 506 return pfdev->gpu_irq; 513 507 514 - err = devm_request_irq(pfdev->dev, pfdev->gpu_irq, panfrost_gpu_irq_handler, 508 + err = devm_request_irq(pfdev->base.dev, pfdev->gpu_irq, panfrost_gpu_irq_handler, 515 509 IRQF_SHARED, KBUILD_MODNAME "-gpu", pfdev); 516 510 if (err) { 517 - dev_err(pfdev->dev, "failed to request gpu irq"); 511 + dev_err(pfdev->base.dev, "failed to request gpu irq"); 518 512 return err; 519 513 } 520 514 ··· 537 525 538 526 if (panfrost_has_hw_feature(pfdev, HW_FEATURE_FLUSH_REDUCTION)) { 539 527 /* Flush reduction only makes sense when the GPU is kept powered on between jobs */ 540 - if (pm_runtime_get_if_in_use(pfdev->dev)) { 528 + if (pm_runtime_get_if_in_use(pfdev->base.dev)) { 541 529 flush_id = gpu_read(pfdev, GPU_LATEST_FLUSH_ID); 542 - pm_runtime_put(pfdev->dev); 530 + pm_runtime_put(pfdev->base.dev); 543 531 return flush_id; 544 532 } 545 533 }
+81 -70
drivers/gpu/drm/panfrost/panfrost_job.c
··· 28 28 #define job_write(dev, reg, data) writel(data, dev->iomem + (reg)) 29 29 #define job_read(dev, reg) readl(dev->iomem + (reg)) 30 30 31 + const char * const panfrost_engine_names[] = { 32 + "fragment", "vertex-tiler", "compute-only" 33 + }; 34 + 31 35 struct panfrost_queue_state { 32 36 struct drm_gpu_scheduler sched; 33 37 u64 fence_context; ··· 99 95 if (!fence) 100 96 return ERR_PTR(-ENOMEM); 101 97 102 - fence->dev = pfdev->ddev; 98 + fence->dev = &pfdev->base; 103 99 fence->queue = js_num; 104 100 fence->seqno = ++js->queue[js_num].emit_seqno; 105 101 dma_fence_init(&fence->base, &panfrost_fence_ops, &js->job_lock, ··· 200 196 return 1; 201 197 } 202 198 203 - static void panfrost_job_hw_submit(struct panfrost_job *job, int js) 199 + static int panfrost_job_hw_submit(struct panfrost_job *job, int js) 204 200 { 205 201 struct panfrost_device *pfdev = job->pfdev; 206 202 unsigned int subslot; ··· 208 204 u64 jc_head = job->jc; 209 205 int ret; 210 206 211 - panfrost_devfreq_record_busy(&pfdev->pfdevfreq); 212 - 213 - ret = pm_runtime_get_sync(pfdev->dev); 207 + ret = pm_runtime_get_sync(pfdev->base.dev); 214 208 if (ret < 0) 215 - return; 209 + goto err_hwsubmit; 216 210 217 211 if (WARN_ON(job_read(pfdev, JS_COMMAND_NEXT(js)))) { 218 - return; 212 + ret = -EINVAL; 213 + goto err_hwsubmit; 219 214 } 220 215 221 - cfg = panfrost_mmu_as_get(pfdev, job->mmu); 216 + ret = panfrost_mmu_as_get(pfdev, job->mmu); 217 + if (ret < 0) 218 + goto err_hwsubmit; 219 + 220 + cfg = ret; 221 + 222 + panfrost_devfreq_record_busy(&pfdev->pfdevfreq); 222 223 223 224 job_write(pfdev, JS_HEAD_NEXT_LO(js), lower_32_bits(jc_head)); 224 225 job_write(pfdev, JS_HEAD_NEXT_HI(js), upper_32_bits(jc_head)); ··· 266 257 } 267 258 268 259 job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START); 269 - dev_dbg(pfdev->dev, 260 + dev_dbg(pfdev->base.dev, 270 261 "JS: Submitting atom %p to js[%d][%d] with head=0x%llx AS %d", 271 262 job, js, subslot, jc_head, cfg & 0xf); 272 263 } 273 264 
spin_unlock(&pfdev->js->job_lock); 265 + 266 + return 0; 267 + 268 + err_hwsubmit: 269 + pm_runtime_put_autosuspend(pfdev->base.dev); 270 + return ret; 274 271 } 275 272 276 273 static int panfrost_acquire_object_fences(struct drm_gem_object **bos, ··· 399 384 struct panfrost_device *pfdev = job->pfdev; 400 385 int slot = panfrost_job_get_slot(job); 401 386 struct dma_fence *fence = NULL; 387 + int ret; 402 388 403 389 if (job->ctx->destroyed) 404 390 return ERR_PTR(-ECANCELED); ··· 421 405 dma_fence_put(job->done_fence); 422 406 job->done_fence = dma_fence_get(fence); 423 407 424 - panfrost_job_hw_submit(job, slot); 408 + ret = panfrost_job_hw_submit(job, slot); 409 + if (ret) { 410 + dma_fence_put(fence); 411 + return ERR_PTR(ret); 412 + } 425 413 426 414 return fence; 427 415 } 428 416 429 - void panfrost_job_enable_interrupts(struct panfrost_device *pfdev) 417 + void panfrost_jm_reset_interrupts(struct panfrost_device *pfdev) 430 418 { 431 - int j; 432 - u32 irq_mask = 0; 433 - 434 - clear_bit(PANFROST_COMP_BIT_JOB, pfdev->is_suspended); 435 - 436 - for (j = 0; j < NUM_JOB_SLOTS; j++) { 437 - irq_mask |= MK_JS_MASK(j); 438 - } 439 - 440 - job_write(pfdev, JOB_INT_CLEAR, irq_mask); 441 - job_write(pfdev, JOB_INT_MASK, irq_mask); 419 + job_write(pfdev, JOB_INT_CLEAR, ALL_JS_INT_MASK); 442 420 } 443 421 444 - void panfrost_job_suspend_irq(struct panfrost_device *pfdev) 422 + void panfrost_jm_enable_interrupts(struct panfrost_device *pfdev) 423 + { 424 + clear_bit(PANFROST_COMP_BIT_JOB, pfdev->is_suspended); 425 + job_write(pfdev, JOB_INT_MASK, ALL_JS_INT_MASK); 426 + } 427 + 428 + void panfrost_jm_suspend_irq(struct panfrost_device *pfdev) 445 429 { 446 430 set_bit(PANFROST_COMP_BIT_JOB, pfdev->is_suspended); 447 431 ··· 458 442 bool signal_fence = true; 459 443 460 444 if (!panfrost_exception_is_fault(js_status)) { 461 - dev_dbg(pfdev->dev, "js event, js=%d, status=%s, head=0x%x, tail=0x%x", 445 + dev_dbg(pfdev->base.dev, "js event, js=%d, status=%s, head=0x%x, 
tail=0x%x", 462 446 js, exception_name, 463 447 job_read(pfdev, JS_HEAD_LO(js)), 464 448 job_read(pfdev, JS_TAIL_LO(js))); 465 449 } else { 466 - dev_err(pfdev->dev, "js fault, js=%d, status=%s, head=0x%x, tail=0x%x", 450 + dev_err(pfdev->base.dev, "js fault, js=%d, status=%s, head=0x%x, tail=0x%x", 467 451 js, exception_name, 468 452 job_read(pfdev, JS_HEAD_LO(js)), 469 453 job_read(pfdev, JS_TAIL_LO(js))); ··· 495 479 if (signal_fence) 496 480 dma_fence_signal_locked(job->done_fence); 497 481 498 - pm_runtime_put_autosuspend(pfdev->dev); 482 + pm_runtime_put_autosuspend(pfdev->base.dev); 499 483 500 484 if (panfrost_exception_needs_reset(pfdev, js_status)) { 501 485 atomic_set(&pfdev->reset.pending, 1); ··· 503 487 } 504 488 } 505 489 506 - static void panfrost_job_handle_done(struct panfrost_device *pfdev, 507 - struct panfrost_job *job) 490 + static void panfrost_jm_handle_done(struct panfrost_device *pfdev, 491 + struct panfrost_job *job) 508 492 { 509 493 /* Set ->jc to 0 to avoid re-submitting an already finished job (can 510 494 * happen when we receive the DONE interrupt while doing a GPU reset). 
··· 514 498 panfrost_devfreq_record_idle(&pfdev->pfdevfreq); 515 499 516 500 dma_fence_signal_locked(job->done_fence); 517 - pm_runtime_put_autosuspend(pfdev->dev); 501 + pm_runtime_put_autosuspend(pfdev->base.dev); 518 502 } 519 503 520 - static void panfrost_job_handle_irq(struct panfrost_device *pfdev, u32 status) 504 + static void panfrost_jm_handle_irq(struct panfrost_device *pfdev, u32 status) 521 505 { 522 506 struct panfrost_job *done[NUM_JOB_SLOTS][2] = {}; 523 507 struct panfrost_job *failed[NUM_JOB_SLOTS] = {};
··· 592 576 } 593 577 594 578 for (i = 0; i < ARRAY_SIZE(done[0]) && done[j][i]; i++) 595 - panfrost_job_handle_done(pfdev, done[j][i]); 579 + panfrost_jm_handle_done(pfdev, done[j][i]); 596 580 } 597 581 598 582 /* And finally we requeue jobs that were waiting in the second slot
··· 610 594 struct panfrost_job *canceled = panfrost_dequeue_job(pfdev, j); 611 595 612 596 dma_fence_set_error(canceled->done_fence, -ECANCELED); 613 - panfrost_job_handle_done(pfdev, canceled); 597 + panfrost_jm_handle_done(pfdev, canceled); 614 598 } else if (!atomic_read(&pfdev->reset.pending)) { 615 599 /* Requeue the job we removed if no reset is pending */ 616 600 job_write(pfdev, JS_COMMAND_NEXT(j), JS_COMMAND_START);
··· 618 602 } 619 603 } 620 604 621 - static void panfrost_job_handle_irqs(struct panfrost_device *pfdev) 605 + static void panfrost_jm_handle_irqs(struct panfrost_device *pfdev) 622 606 { 623 607 u32 status = job_read(pfdev, JOB_INT_RAWSTAT); 624 608 625 609 while (status) { 626 - pm_runtime_mark_last_busy(pfdev->dev); 610 + pm_runtime_mark_last_busy(pfdev->base.dev); 627 611 628 612 spin_lock(&pfdev->js->job_lock); 629 - panfrost_job_handle_irq(pfdev, status); 613 + panfrost_jm_handle_irq(pfdev, status); 630 614 spin_unlock(&pfdev->js->job_lock); 631 615 status = job_read(pfdev, JOB_INT_RAWSTAT); 632 616 }
··· 704 688 10, 10000); 705 689 706 690 if (ret) 707 - dev_err(pfdev->dev, "Soft-stop failed\n"); 691 + dev_err(pfdev->base.dev, "Soft-stop failed\n"); 708 692 709 693 /* Handle the remaining interrupts before we reset. */ 710 - panfrost_job_handle_irqs(pfdev); 694 + panfrost_jm_handle_irqs(pfdev); 711 695 712 696 /* Remaining interrupts have been handled, but we might still have 713 697 * stuck jobs. Let's make sure the PM counters stay balanced by
··· 722 706 if (pfdev->jobs[i][j]->requirements & PANFROST_JD_REQ_CYCLE_COUNT || 723 707 pfdev->jobs[i][j]->is_profiled) 724 708 panfrost_cycle_counter_put(pfdev->jobs[i][j]->pfdev); 725 - pm_runtime_put_noidle(pfdev->dev); 709 + pm_runtime_put_noidle(pfdev->base.dev); 726 710 panfrost_devfreq_record_idle(&pfdev->pfdevfreq); 727 711 } 728 712 }
··· 730 714 spin_unlock(&pfdev->js->job_lock); 731 715 732 716 /* Proceed with reset now. */ 733 - panfrost_device_reset(pfdev); 734 - 735 - /* panfrost_device_reset() unmasks job interrupts, but we want to 736 - * keep them masked a bit longer. 737 - */ 738 - job_write(pfdev, JOB_INT_MASK, 0); 717 + panfrost_device_reset(pfdev, false); 739 718 740 719 /* GPU has been reset, we can clear the reset pending bit. */ 741 720 atomic_set(&pfdev->reset.pending, 0);
··· 752 741 drm_sched_start(&pfdev->js->queue[i].sched, 0); 753 742 754 743 /* Re-enable job interrupts now that everything has been restarted. */ 755 - job_write(pfdev, JOB_INT_MASK, 756 - GENMASK(16 + NUM_JOB_SLOTS - 1, 16) | 757 - GENMASK(NUM_JOB_SLOTS - 1, 0)); 744 + panfrost_jm_enable_interrupts(pfdev); 758 745 759 746 dma_fence_end_signalling(cookie); 760 747 }
··· 783 774 synchronize_irq(pfdev->js->irq); 784 775 785 776 if (dma_fence_is_signaled(job->done_fence)) { 786 - dev_warn(pfdev->dev, "unexpectedly high interrupt latency\n"); 777 + dev_warn(pfdev->base.dev, "unexpectedly high interrupt latency\n"); 787 778 return DRM_GPU_SCHED_STAT_NO_HANG; 788 779 } 789 780 790 - dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p", 781 + dev_err(pfdev->base.dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p", 791 782 js, 792 783 job_read(pfdev, JS_CONFIG(js)), 793 784 job_read(pfdev, JS_STATUS(js)),
··· 817 808 .free_job = panfrost_job_free 818 809 }; 819 810 820 - static irqreturn_t panfrost_job_irq_handler_thread(int irq, void *data) 811 + static irqreturn_t panfrost_jm_irq_handler_thread(int irq, void *data) 821 812 { 822 813 struct panfrost_device *pfdev = data; 823 814 824 - panfrost_job_handle_irqs(pfdev); 815 + panfrost_jm_handle_irqs(pfdev); 825 816 826 817 /* Enable interrupts only if we're not about to get suspended */ 827 818 if (!test_bit(PANFROST_COMP_BIT_JOB, pfdev->is_suspended)) 828 - job_write(pfdev, JOB_INT_MASK, 829 - GENMASK(16 + NUM_JOB_SLOTS - 1, 16) | 830 - GENMASK(NUM_JOB_SLOTS - 1, 0)); 819 + job_write(pfdev, JOB_INT_MASK, ALL_JS_INT_MASK); 831 820 832 821 return IRQ_HANDLED; 833 822 } 834 823 835 - static irqreturn_t panfrost_job_irq_handler(int irq, void *data) 824 + static irqreturn_t panfrost_jm_irq_handler(int irq, void *data) 836 825 { 837 826 struct panfrost_device *pfdev = data; 838 827 u32 status;
··· 846 839 return IRQ_WAKE_THREAD; 847 840 } 848 841 849 - int panfrost_job_init(struct panfrost_device *pfdev) 842 + int panfrost_jm_init(struct panfrost_device *pfdev) 850 843 {
851 844 struct drm_sched_init_args args = { 852 845 .ops = &panfrost_sched_ops, 853 846 .num_rqs = DRM_SCHED_PRIORITY_COUNT, 854 847 .credit_limit = 2, 855 848 .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS), 856 - .name = "pan_js", 857 - .dev = pfdev->dev, 849 + .dev = pfdev->base.dev, 858 850 }; 859 851 struct panfrost_job_slot *js; 860 852 int ret, j; 853 + 854 + BUILD_BUG_ON(ARRAY_SIZE(panfrost_engine_names) != NUM_JOB_SLOTS); 861 855 862 856 /* All GPUs have two entries per queue, but without jobchain 863 857 * disambiguation stopping the right job in the close path is tricky, ··· 867 859 if (!panfrost_has_hw_feature(pfdev, HW_FEATURE_JOBCHAIN_DISAMBIGUATION)) 868 860 args.credit_limit = 1; 869 861 870 - pfdev->js = js = devm_kzalloc(pfdev->dev, sizeof(*js), GFP_KERNEL); 862 + js = devm_kzalloc(pfdev->base.dev, sizeof(*js), GFP_KERNEL); 871 863 if (!js) 872 864 return -ENOMEM; 865 + pfdev->js = js; 873 866 874 867 INIT_WORK(&pfdev->reset.work, panfrost_reset_work); 875 868 spin_lock_init(&js->job_lock); 876 869 877 - js->irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "job"); 870 + js->irq = platform_get_irq_byname(to_platform_device(pfdev->base.dev), "job"); 878 871 if (js->irq < 0) 879 872 return js->irq; 880 873 881 - ret = devm_request_threaded_irq(pfdev->dev, js->irq, 882 - panfrost_job_irq_handler, 883 - panfrost_job_irq_handler_thread, 874 + ret = devm_request_threaded_irq(pfdev->base.dev, js->irq, 875 + panfrost_jm_irq_handler, 876 + panfrost_jm_irq_handler_thread, 884 877 IRQF_SHARED, KBUILD_MODNAME "-job", 885 878 pfdev); 886 879 if (ret) { 887 - dev_err(pfdev->dev, "failed to request job irq"); 880 + dev_err(pfdev->base.dev, "failed to request job irq"); 888 881 return ret; 889 882 } 890 883 ··· 896 887 897 888 for (j = 0; j < NUM_JOB_SLOTS; j++) { 898 889 js->queue[j].fence_context = dma_fence_context_alloc(1); 890 + args.name = panfrost_engine_names[j]; 899 891 900 892 ret = drm_sched_init(&js->queue[j].sched, &args); 901 893 if (ret) { 
902 - dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret); 894 + dev_err(pfdev->base.dev, "Failed to create scheduler: %d.", ret); 903 895 goto err_sched; 904 896 } 905 897 } 906 898 907 - panfrost_job_enable_interrupts(pfdev); 899 + panfrost_jm_reset_interrupts(pfdev); 900 + panfrost_jm_enable_interrupts(pfdev); 908 901 909 902 return 0; 910 903 ··· 918 907 return ret; 919 908 } 920 909 921 - void panfrost_job_fini(struct panfrost_device *pfdev) 910 + void panfrost_jm_fini(struct panfrost_device *pfdev) 922 911 { 923 912 struct panfrost_job_slot *js = pfdev->js; 924 913 int j; ··· 933 922 destroy_workqueue(pfdev->reset.wq); 934 923 } 935 924 936 - int panfrost_job_open(struct drm_file *file) 925 + int panfrost_jm_open(struct drm_file *file) 937 926 { 938 927 struct panfrost_file_priv *panfrost_priv = file->driver_priv; 939 928 int ret; ··· 955 944 return 0; 956 945 } 957 946 958 - void panfrost_job_close(struct drm_file *file) 947 + void panfrost_jm_close(struct drm_file *file) 959 948 { 960 949 struct panfrost_file_priv *panfrost_priv = file->driver_priv; 961 950 struct panfrost_jm_ctx *jm_ctx; ··· 967 956 xa_destroy(&panfrost_priv->jm_ctxs); 968 957 } 969 958 970 - int panfrost_job_is_idle(struct panfrost_device *pfdev) 959 + int panfrost_jm_is_idle(struct panfrost_device *pfdev) 971 960 { 972 961 struct panfrost_job_slot *js = pfdev->js; 973 962 int i;
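The open-coded GENMASK() pairs above are folded into a single ALL_JS_INT_MASK covering the DONE and FAIL interrupt bits of every job slot. A quick userspace check of that equivalence (the ALL_JS_INT_MASK definition below is inferred from the GENMASK expressions being replaced, not copied from the driver; GENMASK is re-implemented for userspace):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's GENMASK() */
#define GENMASK_U32(h, l) (((~UINT32_C(0)) << (l)) & (~UINT32_C(0) >> (31 - (h))))

#define NUM_JOB_SLOTS 3

/* Assumed definition: one DONE bit per slot starting at bit 0, one
 * FAIL bit per slot starting at bit 16 -- mirroring the two GENMASK()
 * expressions the diff removes. */
#define ALL_JS_INT_MASK (GENMASK_U32(16 + NUM_JOB_SLOTS - 1, 16) | \
			 GENMASK_U32(NUM_JOB_SLOTS - 1, 0))

static uint32_t all_js_int_mask_loop(void)
{
	/* The old panfrost_job_enable_interrupts() built the same mask
	 * with a loop over per-slot (done | fail) bits. */
	uint32_t mask = 0;

	for (int j = 0; j < NUM_JOB_SLOTS; j++)
		mask |= (UINT32_C(1) << j) | (UINT32_C(1) << (16 + j));
	return mask;
}
```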
+10 -7
drivers/gpu/drm/panfrost/panfrost_job.h
··· 53 53 struct drm_sched_entity slot_entity[NUM_JOB_SLOTS]; 54 54 }; 55 55 56 + extern const char * const panfrost_engine_names[]; 57 + 56 58 int panfrost_jm_ctx_create(struct drm_file *file, 57 59 struct drm_panfrost_jm_ctx_create *args); 58 60 int panfrost_jm_ctx_destroy(struct drm_file *file, u32 handle); ··· 62 60 struct panfrost_jm_ctx *panfrost_jm_ctx_get(struct panfrost_jm_ctx *jm_ctx); 63 61 struct panfrost_jm_ctx *panfrost_jm_ctx_from_handle(struct drm_file *file, u32 handle); 64 62 65 - int panfrost_job_init(struct panfrost_device *pfdev); 66 - void panfrost_job_fini(struct panfrost_device *pfdev); 67 - int panfrost_job_open(struct drm_file *file); 68 - void panfrost_job_close(struct drm_file *file); 63 + int panfrost_jm_init(struct panfrost_device *pfdev); 64 + void panfrost_jm_fini(struct panfrost_device *pfdev); 65 + int panfrost_jm_open(struct drm_file *file); 66 + void panfrost_jm_close(struct drm_file *file); 67 + void panfrost_jm_reset_interrupts(struct panfrost_device *pfdev); 68 + void panfrost_jm_enable_interrupts(struct panfrost_device *pfdev); 69 + void panfrost_jm_suspend_irq(struct panfrost_device *pfdev); 70 + int panfrost_jm_is_idle(struct panfrost_device *pfdev); 69 71 int panfrost_job_get_slot(struct panfrost_job *job); 70 72 int panfrost_job_push(struct panfrost_job *job); 71 73 void panfrost_job_put(struct panfrost_job *job); 72 - void panfrost_job_enable_interrupts(struct panfrost_device *pfdev); 73 - void panfrost_job_suspend_irq(struct panfrost_device *pfdev); 74 - int panfrost_job_is_idle(struct panfrost_device *pfdev); 75 74 76 75 #endif
+80 -34
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 81 81 if (ret) { 82 82 /* The GPU hung, let's trigger a reset */ 83 83 panfrost_device_schedule_reset(pfdev); 84 - dev_err(pfdev->dev, "AS_ACTIVE bit stuck\n"); 84 + dev_err(pfdev->base.dev, "AS_ACTIVE bit stuck\n"); 85 85 } 86 86 87 87 return ret;
··· 222 222 struct io_pgtable_cfg *pgtbl_cfg = &mmu->pgtbl_cfg; 223 223 struct panfrost_device *pfdev = mmu->pfdev; 224 224 225 - if (drm_WARN_ON(pfdev->ddev, pgtbl_cfg->arm_lpae_s1_cfg.ttbr & 225 + if (drm_WARN_ON(&pfdev->base, pgtbl_cfg->arm_lpae_s1_cfg.ttbr & 226 226 ~AS_TRANSTAB_AARCH64_4K_ADDR_MASK)) 227 227 return -EINVAL; 228 228
··· 253 253 return mmu_cfg_init_mali_lpae(mmu); 254 254 default: 255 255 /* This should never happen */ 256 - drm_WARN(pfdev->ddev, 1, "Invalid pgtable format"); 256 + drm_WARN(&pfdev->base, 1, "Invalid pgtable format"); 257 257 return -EINVAL; 258 258 } 259 259 } 260 260 261 - u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu) 261 + int panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu) 262 262 { 263 263 int as; 264 264
··· 300 300 if (!atomic_read(&lru_mmu->as_count)) 301 301 break; 302 302 } 303 - WARN_ON(&lru_mmu->list == &pfdev->as_lru_list); 303 + if (WARN_ON(&lru_mmu->list == &pfdev->as_lru_list)) { 304 + as = -EBUSY; 305 + goto out; 306 + } 304 307 305 308 list_del_init(&lru_mmu->list); 306 309 as = lru_mmu->as;
··· 318 315 atomic_set(&mmu->as_count, 1); 319 316 list_add(&mmu->list, &pfdev->as_lru_list); 320 317 321 - dev_dbg(pfdev->dev, "Assigned AS%d to mmu %p, alloc_mask=%lx", as, mmu, pfdev->as_alloc_mask); 318 + dev_dbg(pfdev->base.dev, 319 + "Assigned AS%d to mmu %p, alloc_mask=%lx", 320 + as, mmu, pfdev->as_alloc_mask); 322 321 323 322 panfrost_mmu_enable(pfdev, mmu); 324 323
··· 386 381 if (mmu->as < 0) 387 382 return; 388 383 389 - pm_runtime_get_noresume(pfdev->dev); 384 + pm_runtime_get_noresume(pfdev->base.dev); 390 385 391 386 /* Flush the PTs only if we're already awake */ 392 - if (pm_runtime_active(pfdev->dev)) 387 + if (pm_runtime_active(pfdev->base.dev)) 393 388 mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT); 394 389 395 - pm_runtime_put_autosuspend(pfdev->dev); 390 + pm_runtime_put_autosuspend(pfdev->base.dev); 391 + } 392 + 393 + static void mmu_unmap_range(struct panfrost_mmu *mmu, u64 iova, size_t len) 394 + { 395 + struct io_pgtable_ops *ops = mmu->pgtbl_ops; 396 + size_t pgsize, unmapped_len = 0; 397 + size_t unmapped_page, pgcount; 398 + 399 + while (unmapped_len < len) { 400 + pgsize = get_pgsize(iova, len - unmapped_len, &pgcount); 401 + 402 + unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL); 403 + WARN_ON(unmapped_page != pgsize * pgcount); 404 + 405 + iova += pgsize * pgcount; 406 + unmapped_len += pgsize * pgcount; 407 + } 396 408 } 397 409 398 410 static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
··· 418 396 unsigned int count; 419 397 struct scatterlist *sgl; 420 398 struct io_pgtable_ops *ops = mmu->pgtbl_ops; 399 + size_t total_mapped = 0; 421 400 u64 start_iova = iova; 401 + int ret; 422 402 423 403 for_each_sgtable_dma_sg(sgt, sgl, count) { 424 404 unsigned long paddr = sg_dma_address(sgl); 425 405 size_t len = sg_dma_len(sgl); 426 406 427 - dev_dbg(pfdev->dev, "map: as=%d, iova=%llx, paddr=%lx, len=%zx", mmu->as, iova, paddr, len); 407 + dev_dbg(pfdev->base.dev, 408 + "map: as=%d, iova=%llx, paddr=%lx, len=%zx", 409 + mmu->as, iova, paddr, len); 428 410 429 411 while (len) { 430 412 size_t pgcount, mapped = 0; 431 413 size_t pgsize = get_pgsize(iova | paddr, len, &pgcount); 432 414 433 - ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, 415 + ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, 434 416 GFP_KERNEL, &mapped); 417 + if (ret) 418 + goto err_unmap_pages; 419 + 435 420 /* Don't get stuck if things have gone wrong */ 436 421 mapped = max(mapped, pgsize); 422 + total_mapped += mapped; 437 423 iova += mapped; 438 424 paddr += mapped;
439 425 len -= mapped;
··· 451 421 panfrost_mmu_flush_range(pfdev, mmu, start_iova, iova - start_iova); 452 422 453 423 return 0; 424 + 425 + err_unmap_pages: 426 + mmu_unmap_range(mmu, start_iova, total_mapped); 427 + return ret; 454 428 } 455 429 456 430 int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
··· 465 431 struct panfrost_device *pfdev = to_panfrost_device(obj->dev); 466 432 struct sg_table *sgt; 467 433 int prot = IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE; 434 + int ret; 468 435 469 436 if (WARN_ON(mapping->active)) 470 437 return 0;
··· 477 442 if (WARN_ON(IS_ERR(sgt))) 478 443 return PTR_ERR(sgt); 479 444 480 - mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT, 481 - prot, sgt); 445 + ret = mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT, 446 + prot, sgt); 447 + if (ret) 448 + goto err_put_pages; 449 + 482 450 mapping->active = true; 483 451 484 452 return 0; 453 + 454 + err_put_pages: 455 + drm_gem_shmem_put_pages_locked(shmem); 456 + return ret; 485 457 } 486 458 487 459 void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
··· 504 462 if (WARN_ON(!mapping->active)) 505 463 return; 506 464 507 - dev_dbg(pfdev->dev, "unmap: as=%d, iova=%llx, len=%zx", 465 + dev_dbg(pfdev->base.dev, "unmap: as=%d, iova=%llx, len=%zx", 508 466 mapping->mmu->as, iova, len); 509 467 510 468 while (unmapped_len < len) {
··· 601 559 602 560 bo = bomapping->obj; 603 561 if (!bo->is_heap) { 604 - dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)", 562 + dev_WARN(pfdev->base.dev, "matching BO is not heap type (GPU VA = %llx)", 605 563 bomapping->mmnode.start << PAGE_SHIFT); 606 564 ret = -EINVAL; 607 565 goto err_bo;
··· 637 595 refcount_set(&bo->base.pages_use_count, 1); 638 596 } else { 639 597 pages = bo->base.pages; 640 - if (pages[page_offset]) { 641 - /* Pages are already mapped, bail out. */ 642 - goto out; 643 - } 598 + } 599 + 600 + sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; 601 + if (sgt->sgl) { 602 + /* Pages are already mapped, bail out. */ 603 + goto out; 644 604 } 645 605 646 606 mapping = bo->base.base.filp->f_mapping;
··· 664 620 } 665 621 } 666 622 667 - sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; 668 623 ret = sg_alloc_table_from_pages(sgt, pages + page_offset, 669 624 NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); 670 625 if (ret) 671 626 goto err_unlock; 672 627 673 - ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0); 628 + ret = dma_map_sgtable(pfdev->base.dev, sgt, DMA_BIDIRECTIONAL, 0); 674 629 if (ret) 675 630 goto err_map; 676 631 677 - mmu_map_sg(pfdev, bomapping->mmu, addr, 678 - IOMMU_WRITE | IOMMU_READ | IOMMU_CACHE | IOMMU_NOEXEC, sgt); 632 + ret = mmu_map_sg(pfdev, bomapping->mmu, addr, 633 + IOMMU_WRITE | IOMMU_READ | IOMMU_CACHE | IOMMU_NOEXEC, sgt); 634 + if (ret) 635 + goto err_mmu_map_sg; 679 636 680 637 bomapping->active = true; 681 638 bo->heap_rss_size += SZ_2M; 682 639 683 - dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr); 640 + dev_dbg(pfdev->base.dev, "mapped page fault @ AS%d %llx", as, addr); 684 641 685 642 out: 686 643 dma_resv_unlock(obj->resv);
··· 690 645 691 646 return 0; 692 647 648 + err_mmu_map_sg: 649 + dma_unmap_sgtable(pfdev->base.dev, sgt, DMA_BIDIRECTIONAL, 0); 693 650 err_map: 694 651 sg_free_table(sgt); 695 652 err_unlock:
··· 709 662 710 663 spin_lock(&pfdev->as_lock); 711 664 if (mmu->as >= 0) { 712 - pm_runtime_get_noresume(pfdev->dev); 713 - if (pm_runtime_active(pfdev->dev)) 665 + pm_runtime_get_noresume(pfdev->base.dev); 666 + if (pm_runtime_active(pfdev->base.dev)) 714 667 panfrost_mmu_disable(pfdev, mmu->as); 715 - pm_runtime_put_autosuspend(pfdev->dev); 668 + pm_runtime_put_autosuspend(pfdev->base.dev); 716 669 717 670 clear_bit(mmu->as, &pfdev->as_alloc_mask); 718 - clear_bit(mmu->as, &pfdev->as_in_use_mask); 719 671 list_del(&mmu->list); 720 672 } 721 673 spin_unlock(&pfdev->as_lock);
··· 772 726 773 727 if (pfdev->comp->gpu_quirks & BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE)) { 774 728 if (!panfrost_has_hw_feature(pfdev, HW_FEATURE_AARCH64_MMU)) { 775 - dev_err_once(pfdev->dev, 729 + dev_err_once(pfdev->base.dev, 776 730 "AARCH64_4K page table not supported\n"); 777 731 return ERR_PTR(-EINVAL); 778 732 }
··· 801 755 .oas = pa_bits, 802 756 .coherent_walk = pfdev->coherent, 803 757 .tlb = &mmu_tlb_ops, 804 - .iommu_dev = pfdev->dev, 758 + .iommu_dev = pfdev->base.dev, 805 759 }; 806 760 807 761 mmu->pgtbl_ops = alloc_io_pgtable_ops(fmt, &mmu->pgtbl_cfg, mmu);
··· 894 848 895 849 if (ret) { 896 850 /* terminal fault, print info about the fault */ 897 - dev_err(pfdev->dev, 851 + dev_err(pfdev->base.dev, 898 852 "Unhandled Page fault in AS%d at VA 0x%016llX\n" 899 853 "Reason: %s\n" 900 854 "raw fault status: 0x%X\n"
··· 942 896 { 943 897 int err; 944 898 945 - pfdev->mmu_irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "mmu"); 899 + pfdev->mmu_irq = platform_get_irq_byname(to_platform_device(pfdev->base.dev), "mmu"); 946 900 if (pfdev->mmu_irq < 0) 947 901 return pfdev->mmu_irq; 948 902 949 - err = devm_request_threaded_irq(pfdev->dev, pfdev->mmu_irq, 903 + err = devm_request_threaded_irq(pfdev->base.dev, pfdev->mmu_irq, 950 904 panfrost_mmu_irq_handler, 951 905 panfrost_mmu_irq_handler_thread, 952 906 IRQF_SHARED, KBUILD_MODNAME "-mmu", 953 907 pfdev); 954 908 955 909 if (err) { 956 - dev_err(pfdev->dev, "failed to request mmu irq"); 910 + dev_err(pfdev->base.dev, "failed to request mmu irq"); 957 911 return err; 958 912 } 959 913
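mmu_map_sg() now propagates ops->map_pages() failures and unwinds whatever was already mapped via the new err_unmap_pages label, instead of leaving a half-populated address space behind. A standalone sketch of that unwind shape (a page bitmap stands in for real page tables; every name here is illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_PAGES 16

static bool page_mapped[NUM_PAGES];

/* Mock mapping op: all-or-nothing, fails if the chunk would cross
 * fail_at (simulating ops->map_pages() returning an error). */
static int map_pages(size_t start, size_t count, size_t fail_at)
{
	if (start + count > fail_at)
		return -1;
	for (size_t i = start; i < start + count; i++)
		page_mapped[i] = true;
	return 0;
}

static void unmap_range(size_t start, size_t count)
{
	for (size_t i = start; i < start + count; i++)
		page_mapped[i] = false;
}

/* Map several chunks; on failure, unmap everything mapped so far,
 * mirroring the err_unmap_pages: label added by the diff. */
static int map_sg(const size_t *chunks, size_t nchunks, size_t fail_at)
{
	size_t start = 0, total_mapped = 0;
	int ret;

	for (size_t c = 0; c < nchunks; c++) {
		ret = map_pages(start + total_mapped, chunks[c], fail_at);
		if (ret)
			goto err_unmap_pages;
		total_mapped += chunks[c];
	}
	return 0;

err_unmap_pages:
	unmap_range(start, total_mapped);
	return ret;
}
```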
+2 -1
drivers/gpu/drm/panfrost/panfrost_mmu.h
··· 4 4 #ifndef __PANFROST_MMU_H__ 5 5 #define __PANFROST_MMU_H__ 6 6 7 + struct panfrost_device; 7 8 struct panfrost_gem_mapping; 8 9 struct panfrost_file_priv; 9 10 struct panfrost_mmu; ··· 17 16 void panfrost_mmu_reset(struct panfrost_device *pfdev); 18 17 void panfrost_mmu_suspend_irq(struct panfrost_device *pfdev); 19 18 20 - u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu); 19 + int panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu); 21 20 void panfrost_mmu_as_put(struct panfrost_device *pfdev, struct panfrost_mmu *mmu); 22 21 23 22 struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu);
+15 -11
drivers/gpu/drm/panfrost/panfrost_perfcnt.c
··· 84 84 else if (perfcnt->user) 85 85 return -EBUSY; 86 86 87 - ret = pm_runtime_get_sync(pfdev->dev); 87 + ret = pm_runtime_get_sync(pfdev->base.dev); 88 88 if (ret < 0) 89 89 goto err_put_pm; 90 90 91 - bo = drm_gem_shmem_create(pfdev->ddev, perfcnt->bosize); 91 + bo = drm_gem_shmem_create(&pfdev->base, perfcnt->bosize); 92 92 if (IS_ERR(bo)) { 93 93 ret = PTR_ERR(bo); 94 94 goto err_put_pm;
··· 130 130 goto err_vunmap; 131 131 } 132 132 133 - perfcnt->user = user; 133 + ret = panfrost_mmu_as_get(pfdev, perfcnt->mapping->mmu); 134 + if (ret < 0) 135 + goto err_vunmap; 134 136 135 - as = panfrost_mmu_as_get(pfdev, perfcnt->mapping->mmu); 137 + as = ret; 136 138 cfg = GPU_PERFCNT_CFG_AS(as) | 137 139 GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_MANUAL); 138 140
··· 166 164 /* The BO ref is retained by the mapping. */ 167 165 drm_gem_object_put(&bo->base); 168 166 167 + perfcnt->user = user; 168 + 169 169 return 0; 170 170 171 171 err_vunmap:
··· 179 175 err_put_bo: 180 176 drm_gem_object_put(&bo->base); 181 177 err_put_pm: 182 - pm_runtime_put(pfdev->dev); 178 + pm_runtime_put(pfdev->base.dev); 183 179 return ret; 184 180 } 185 181
··· 207 203 panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); 208 204 panfrost_gem_mapping_put(perfcnt->mapping); 209 205 perfcnt->mapping = NULL; 210 - pm_runtime_put_autosuspend(pfdev->dev); 206 + pm_runtime_put_autosuspend(pfdev->base.dev); 211 207 212 208 return 0; 213 209 }
··· 215 211 int panfrost_ioctl_perfcnt_enable(struct drm_device *dev, void *data, 216 212 struct drm_file *file_priv) 217 213 { 218 - struct panfrost_device *pfdev = dev->dev_private; 214 + struct panfrost_device *pfdev = to_panfrost_device(dev); 219 215 struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; 220 216 struct drm_panfrost_perfcnt_enable *req = data; 221 217 int ret;
··· 242 238 int panfrost_ioctl_perfcnt_dump(struct drm_device *dev, void *data, 243 239 struct drm_file *file_priv) 244 240 { 245 - struct panfrost_device *pfdev = dev->dev_private; 241 + struct panfrost_device *pfdev = to_panfrost_device(dev); 246 242 struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; 247 243 struct drm_panfrost_perfcnt_dump *req = data; 248 244 void __user *user_ptr = (void __user *)(uintptr_t)req->buf_ptr;
··· 277 273 struct panfrost_device *pfdev = pfile->pfdev; 278 274 struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; 279 275 280 - pm_runtime_get_sync(pfdev->dev); 276 + pm_runtime_get_sync(pfdev->base.dev); 281 277 mutex_lock(&perfcnt->lock); 282 278 if (perfcnt->user == pfile) 283 279 panfrost_perfcnt_disable_locked(pfdev, file_priv); 284 280 mutex_unlock(&perfcnt->lock); 285 - pm_runtime_put_autosuspend(pfdev->dev); 281 + pm_runtime_put_autosuspend(pfdev->base.dev); 286 282 } 287 283 288 284 int panfrost_perfcnt_init(struct panfrost_device *pfdev)
··· 320 316 COUNTERS_PER_BLOCK * BYTES_PER_COUNTER; 321 317 } 322 318 323 - perfcnt = devm_kzalloc(pfdev->dev, sizeof(*perfcnt), GFP_KERNEL); 319 + perfcnt = devm_kzalloc(pfdev->base.dev, sizeof(*perfcnt), GFP_KERNEL); 324 320 if (!perfcnt) 325 321 return -ENOMEM; 326 322
+1 -2
drivers/gpu/drm/panthor/panthor_devfreq.c
··· 146 146 ptdev->devfreq = pdevfreq; 147 147 148 148 ret = devm_pm_opp_set_regulators(dev, reg_names); 149 - if (ret) { 149 + if (ret && ret != -ENODEV) { 150 150 if (ret != -EPROBE_DEFER) 151 151 DRM_DEV_ERROR(dev, "Couldn't set OPP regulators\n"); 152 - 153 152 return ret; 154 153 } 155 154
+2
drivers/gpu/drm/panthor/panthor_device.c
··· 172 172 struct page *p; 173 173 int ret; 174 174 175 + ptdev->soc_data = of_device_get_match_data(ptdev->base.dev); 176 + 175 177 init_completion(&ptdev->unplug.done); 176 178 ret = drmm_mutex_init(&ptdev->base, &ptdev->unplug.lock); 177 179 if (ret)
+14
drivers/gpu/drm/panthor/panthor_device.h
··· 32 32 struct panthor_vm_pool; 33 33 34 34 /** 35 + * struct panthor_soc_data - Panthor SoC Data 36 + */ 37 + struct panthor_soc_data { 38 + /** @asn_hash_enable: True if GPU_L2_CONFIG_ASN_HASH_ENABLE must be set. */ 39 + bool asn_hash_enable; 40 + 41 + /** @asn_hash: ASN_HASH values when asn_hash_enable is true. */ 42 + u32 asn_hash[3]; 43 + }; 44 + 45 + /** 35 46 * enum panthor_device_pm_state - PM state 36 47 */ 37 48 enum panthor_device_pm_state { ··· 103 92 struct panthor_device { 104 93 /** @base: Base drm_device. */ 105 94 struct drm_device base; 95 + 96 + /** @soc_data: Optional SoC data. */ 97 + const struct panthor_soc_data *soc_data; 106 98 107 99 /** @phys_addr: Physical address of the iomem region. */ 108 100 phys_addr_t phys_addr;
+6
drivers/gpu/drm/panthor/panthor_drv.c
··· 1682 1682 1683 1683 ATTRIBUTE_GROUPS(panthor); 1684 1684 1685 + static const struct panthor_soc_data soc_data_mediatek_mt8196 = { 1686 + .asn_hash_enable = true, 1687 + .asn_hash = { 0xb, 0xe, 0x0, }, 1688 + }; 1689 + 1685 1690 static const struct of_device_id dt_match[] = { 1691 + { .compatible = "mediatek,mt8196-mali", .data = &soc_data_mediatek_mt8196, }, 1686 1692 { .compatible = "rockchip,rk3588-mali" }, 1687 1693 { .compatible = "arm,mali-valhall-csf" }, 1688 1694 {}
+24 -1
drivers/gpu/drm/panthor/panthor_gpu.c
··· 52 52 ptdev->coherent ? GPU_COHERENCY_PROT_BIT(ACE_LITE) : GPU_COHERENCY_NONE); 53 53 } 54 54 55 + static void panthor_gpu_l2_config_set(struct panthor_device *ptdev) 56 + { 57 + const struct panthor_soc_data *data = ptdev->soc_data; 58 + u32 l2_config; 59 + u32 i; 60 + 61 + if (!data || !data->asn_hash_enable) 62 + return; 63 + 64 + if (GPU_ARCH_MAJOR(ptdev->gpu_info.gpu_id) < 11) { 65 + drm_err(&ptdev->base, "Custom ASN hash not supported by the device"); 66 + return; 67 + } 68 + 69 + for (i = 0; i < ARRAY_SIZE(data->asn_hash); i++) 70 + gpu_write(ptdev, GPU_ASN_HASH(i), data->asn_hash[i]); 71 + 72 + l2_config = gpu_read(ptdev, GPU_L2_CONFIG); 73 + l2_config |= GPU_L2_CONFIG_ASN_HASH_ENABLE; 74 + gpu_write(ptdev, GPU_L2_CONFIG, l2_config); 75 + } 76 + 55 77 static void panthor_gpu_irq_handler(struct panthor_device *ptdev, u32 status) 56 78 { 57 79 gpu_write(ptdev, GPU_INT_CLEAR, status); ··· 263 241 hweight64(ptdev->gpu_info.shader_present)); 264 242 } 265 243 266 - /* Set the desired coherency mode before the power up of L2 */ 244 + /* Set the desired coherency mode and L2 config before the power up of L2 */ 267 245 panthor_gpu_coherency_set(ptdev); 246 + panthor_gpu_l2_config_set(ptdev); 268 247 269 248 return panthor_gpu_power_on(ptdev, L2, 1, 20000); 270 249 }
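panthor_gpu_l2_config_set() programs the three ASN_HASH registers and then sets the enable bit with a read-modify-write of GPU_L2_CONFIG, so other fields of that register survive. A userspace sketch of the sequence against a fake register file (the offsets and bit position come from the panthor_regs.h hunk in this pull; the register-file model itself is made up):

```c
#include <assert.h>
#include <stdint.h>

#define GPU_L2_CONFIG                 0x48
#define GPU_L2_CONFIG_ASN_HASH_ENABLE (UINT32_C(1) << 24)
#define GPU_ASN_HASH(n)               (0x2C0 + ((n) * 4))

/* Fake MMIO space, indexed by byte offset. */
static uint32_t regs[0x400 / 4];

static void gpu_write(uint32_t off, uint32_t val) { regs[off / 4] = val; }
static uint32_t gpu_read(uint32_t off) { return regs[off / 4]; }

/* Mirror of the panthor_gpu_l2_config_set() flow: program the hash
 * values, then read-modify-write only the enable bit. */
static void l2_config_set(const uint32_t asn_hash[3])
{
	for (int i = 0; i < 3; i++)
		gpu_write(GPU_ASN_HASH(i), asn_hash[i]);

	gpu_write(GPU_L2_CONFIG,
		  gpu_read(GPU_L2_CONFIG) | GPU_L2_CONFIG_ASN_HASH_ENABLE);
}

static uint32_t demo(void)
{
	gpu_write(GPU_L2_CONFIG, 0x3); /* pretend some L2_CONFIG bits are set */
	/* mt8196 hash values from the panthor_drv.c hunk */
	l2_config_set((const uint32_t[]){0xb, 0xe, 0x0});
	return gpu_read(GPU_L2_CONFIG);
}
```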
+4
drivers/gpu/drm/panthor/panthor_regs.h
··· 64 64 65 65 #define GPU_FAULT_STATUS 0x3C 66 66 #define GPU_FAULT_ADDR 0x40 67 + #define GPU_L2_CONFIG 0x48 68 + #define GPU_L2_CONFIG_ASN_HASH_ENABLE BIT(24) 67 69 68 70 #define GPU_PWR_KEY 0x50 69 71 #define GPU_PWR_KEY_UNLOCK 0x2968A819 ··· 111 109 #define L2_PWRACTIVE 0x260 112 110 113 111 #define GPU_REVID 0x280 112 + 113 + #define GPU_ASN_HASH(n) (0x2C0 + ((n) * 4)) 114 114 115 115 #define GPU_COHERENCY_FEATURES 0x300 116 116 #define GPU_COHERENCY_PROT_BIT(name) BIT(GPU_COHERENCY_ ## name)
+29
drivers/gpu/drm/qxl/qxl_display.c
··· 37 37 #include <drm/drm_probe_helper.h> 38 38 #include <drm/drm_simple_kms_helper.h> 39 39 #include <drm/drm_gem_atomic_helper.h> 40 + #include <drm/drm_vblank.h> 41 + #include <drm/drm_vblank_helper.h> 40 42 41 43 #include "qxl_drv.h" 42 44 #include "qxl_object.h" ··· 384 382 static void qxl_crtc_atomic_flush(struct drm_crtc *crtc, 385 383 struct drm_atomic_state *state) 386 384 { 385 + struct drm_device *dev = crtc->dev; 386 + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 387 + struct drm_pending_vblank_event *event; 388 + 387 389 qxl_crtc_update_monitors_config(crtc, "flush"); 390 + 391 + spin_lock_irq(&dev->event_lock); 392 + 393 + event = crtc_state->event; 394 + crtc_state->event = NULL; 395 + 396 + if (event) { 397 + if (drm_crtc_vblank_get(crtc) == 0) 398 + drm_crtc_arm_vblank_event(crtc, event); 399 + else 400 + drm_crtc_send_vblank_event(crtc, event); 401 + } 402 + 403 + spin_unlock_irq(&dev->event_lock); 388 404 } 389 405 390 406 static void qxl_crtc_destroy(struct drm_crtc *crtc) ··· 421 401 .reset = drm_atomic_helper_crtc_reset, 422 402 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 423 403 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 404 + DRM_CRTC_VBLANK_TIMER_FUNCS, 424 405 }; 425 406 426 407 static int qxl_framebuffer_surface_dirty(struct drm_framebuffer *fb, ··· 476 455 struct drm_atomic_state *state) 477 456 { 478 457 qxl_crtc_update_monitors_config(crtc, "enable"); 458 + 459 + drm_crtc_vblank_on(crtc); 479 460 } 480 461 481 462 static void qxl_crtc_atomic_disable(struct drm_crtc *crtc, 482 463 struct drm_atomic_state *state) 483 464 { 465 + drm_crtc_vblank_off(crtc); 466 + 484 467 qxl_crtc_update_monitors_config(crtc, "disable"); 485 468 } 486 469 ··· 1300 1275 } 1301 1276 1302 1277 qxl_display_read_client_monitors_config(qdev); 1278 + 1279 + ret = drm_vblank_init(&qdev->ddev, qxl_num_crtc); 1280 + if (ret) 1281 + return ret; 1303 1282 1304 1283 
drm_mode_config_reset(&qdev->ddev); 1305 1284 return 0;
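qxl's atomic_flush now completes the pageflip event itself: under the event lock it takes the event out of the CRTC state, arms it on the vblank timer when a vblank reference can be taken, and otherwise sends it immediately so userspace is never left waiting. A toy model of just that branch (stand-in structs and counters, not the DRM API):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for drm_crtc / drm_crtc_state; armed/sent count which
 * completion path the event took. */
struct crtc_state { bool has_event; };
struct crtc { bool vblank_enabled; int armed, sent; };

/* Mimics drm_crtc_vblank_get(): 0 on success, negative otherwise. */
static int vblank_get(struct crtc *c) { return c->vblank_enabled ? 0 : -1; }

static void atomic_flush(struct crtc *c, struct crtc_state *st)
{
	if (!st->has_event)
		return;
	st->has_event = false; /* consume crtc_state->event */

	if (vblank_get(c) == 0)
		c->armed++; /* drm_crtc_arm_vblank_event(): fires at next vblank */
	else
		c->sent++;  /* drm_crtc_send_vblank_event(): immediate fallback */
}

/* Returns 1 if the event was armed, 2 if it was sent immediately. */
static int flush_outcome(bool vblank_enabled)
{
	struct crtc c = { .vblank_enabled = vblank_enabled };
	struct crtc_state st = { .has_event = true };

	atomic_flush(&c, &st);
	return c.armed ? 1 : (c.sent ? 2 : 0);
}
```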
+2 -2
drivers/gpu/drm/radeon/radeon_device.c
··· 1635 1635 } 1636 1636 1637 1637 if (notify_clients) 1638 - drm_client_dev_suspend(dev, false); 1638 + drm_client_dev_suspend(dev); 1639 1639 1640 1640 return 0; 1641 1641 } ··· 1739 1739 radeon_pm_compute_clocks(rdev); 1740 1740 1741 1741 if (notify_clients) 1742 - drm_client_dev_resume(dev, false); 1742 + drm_client_dev_resume(dev); 1743 1743 1744 1744 return 0; 1745 1745 }
+1 -1
drivers/gpu/drm/renesas/rz-du/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 config DRM_RZG2L_DU 3 3 tristate "DRM Support for RZ/G2L Display Unit" 4 - depends on ARCH_RZG2L || COMPILE_TEST 4 + depends on ARCH_RENESAS || COMPILE_TEST 5 5 depends on DRM && OF 6 6 depends on VIDEO_RENESAS_VSP1 7 7 select DRM_CLIENT_SELECTION
+3 -9
drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
··· 335 335 return PTR_ERR(dp->grf); 336 336 } 337 337 338 - dp->grfclk = devm_clk_get(dev, "grf"); 339 - if (PTR_ERR(dp->grfclk) == -ENOENT) { 340 - dp->grfclk = NULL; 341 - } else if (PTR_ERR(dp->grfclk) == -EPROBE_DEFER) { 342 - return -EPROBE_DEFER; 343 - } else if (IS_ERR(dp->grfclk)) { 344 - DRM_DEV_ERROR(dev, "failed to get grf clock\n"); 345 - return PTR_ERR(dp->grfclk); 346 - } 338 + dp->grfclk = devm_clk_get_optional(dev, "grf"); 339 + if (IS_ERR(dp->grfclk)) 340 + return dev_err_probe(dev, PTR_ERR(dp->grfclk), "failed to get grf clock\n"); 347 341 348 342 dp->pclk = devm_clk_get(dev, "pclk"); 349 343 if (IS_ERR(dp->pclk)) {
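The devm_clk_get_optional() conversion above works because the _optional variant returns NULL instead of -ENOENT for an absent clock, so only real errors (including -EPROBE_DEFER) reach the IS_ERR() check. A userspace model of that contract (the ERR_PTR machinery mimics the kernel's; the clock lookup table is fabricated):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's ERR_PTR helpers. */
#define MAX_ERRNO 4095
#define ERR_PTR(e) ((void *)(intptr_t)(e))
#define PTR_ERR(p) ((long)(intptr_t)(p))
#define IS_ERR(p)  ((uintptr_t)(p) >= (uintptr_t)-MAX_ERRNO)

struct clk { const char *name; };

static struct clk grf_clk = { "grf" };

/* Fake clk_get(): only a "grf" clock exists in this toy table. */
static struct clk *clk_get(const char *name)
{
	if (name && name[0] == 'g') /* "grf" */
		return &grf_clk;
	return ERR_PTR(-ENOENT);
}

static struct clk *clk_get_optional(const char *name)
{
	struct clk *clk = clk_get(name);

	/* The whole point of _optional: a missing clock is not an error,
	 * and clk_* calls treat a NULL clock as a no-op. */
	if (IS_ERR(clk) && PTR_ERR(clk) == -ENOENT)
		return NULL;
	return clk;
}
```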
+20
drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
··· 163 163 #define RK3288_DSI0_LCDC_SEL BIT(6) 164 164 #define RK3288_DSI1_LCDC_SEL BIT(9) 165 165 166 + #define RK3368_GRF_SOC_CON7 0x41c 167 + #define RK3368_DSI_FORCETXSTOPMODE (0xf << 7) 168 + #define RK3368_DSI_FORCERXMODE BIT(6) 169 + #define RK3368_DSI_TURNDISABLE BIT(5) 170 + 166 171 #define RK3399_GRF_SOC_CON20 0x6250 167 172 #define RK3399_DSI0_LCDC_SEL BIT(0) 168 173 #define RK3399_DSI1_LCDC_SEL BIT(4) ··· 1533 1528 { /* sentinel */ } 1534 1529 }; 1535 1530 1531 + static const struct rockchip_dw_dsi_chip_data rk3368_chip_data[] = { 1532 + { 1533 + .reg = 0xff960000, 1534 + .lanecfg1_grf_reg = RK3368_GRF_SOC_CON7, 1535 + .lanecfg1 = FIELD_PREP_WM16_CONST((RK3368_DSI_TURNDISABLE | 1536 + RK3368_DSI_FORCETXSTOPMODE | 1537 + RK3368_DSI_FORCERXMODE), 0), 1538 + .max_data_lanes = 4, 1539 + }, 1540 + { /* sentinel */ } 1541 + }; 1542 + 1536 1543 static int rk3399_dphy_tx1rx1_init(struct phy *phy) 1537 1544 { 1538 1545 struct dw_mipi_dsi_rockchip *dsi = phy_get_drvdata(phy); ··· 1704 1687 }, { 1705 1688 .compatible = "rockchip,rk3288-mipi-dsi", 1706 1689 .data = &rk3288_chip_data, 1690 + }, { 1691 + .compatible = "rockchip,rk3368-mipi-dsi", 1692 + .data = &rk3368_chip_data, 1707 1693 }, { 1708 1694 .compatible = "rockchip,rk3399-mipi-dsi", 1709 1695 .data = &rk3399_chip_data,
+38 -39
drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
··· 429 429 void *data) 430 430 { 431 431 struct platform_device *pdev = to_platform_device(dev); 432 + struct dw_hdmi_qp_plat_data plat_data = {}; 432 433 const struct rockchip_hdmi_qp_cfg *cfg; 433 - struct dw_hdmi_qp_plat_data plat_data; 434 434 struct drm_device *drm = data; 435 435 struct drm_connector *connector; 436 436 struct drm_encoder *encoder; 437 437 struct rockchip_hdmi_qp *hdmi; 438 438 struct resource *res; 439 439 struct clk_bulk_data *clks; 440 + struct clk *ref_clk; 440 441 int ret, irq, i; 441 442 442 443 if (!pdev->dev.of_node)
··· 456 455 return -ENODEV; 457 456 458 457 if (!cfg->ctrl_ops || !cfg->ctrl_ops->io_init || 459 - !cfg->ctrl_ops->irq_callback || !cfg->ctrl_ops->hardirq_callback) { 460 - dev_err(dev, "Missing platform ctrl ops\n"); 461 - return -ENODEV; 462 - } 458 + !cfg->ctrl_ops->irq_callback || !cfg->ctrl_ops->hardirq_callback) 459 + return dev_err_probe(dev, -ENODEV, "Missing platform ctrl ops\n"); 463 460 464 461 hdmi->ctrl_ops = cfg->ctrl_ops; 465 462 hdmi->dev = &pdev->dev;
··· 470 471 break; 471 472 } 472 473 } 473 - if (hdmi->port_id < 0) { 474 - dev_err(hdmi->dev, "Failed to match HDMI port ID\n"); 475 - return hdmi->port_id; 476 - } 474 + if (hdmi->port_id < 0) 475 + return dev_err_probe(hdmi->dev, hdmi->port_id, 476 + "Failed to match HDMI port ID\n"); 477 477 478 478 plat_data.phy_ops = cfg->phy_ops; 479 479 plat_data.phy_data = hdmi;
··· 493 495 494 496 hdmi->regmap = syscon_regmap_lookup_by_phandle(dev->of_node, 495 497 "rockchip,grf"); 496 - if (IS_ERR(hdmi->regmap)) { 497 - dev_err(hdmi->dev, "Unable to get rockchip,grf\n"); 498 - return PTR_ERR(hdmi->regmap); 499 - } 498 + if (IS_ERR(hdmi->regmap)) 499 + return dev_err_probe(hdmi->dev, PTR_ERR(hdmi->regmap), 500 + "Unable to get rockchip,grf\n"); 500 501 501 502 hdmi->vo_regmap = syscon_regmap_lookup_by_phandle(dev->of_node, 502 503 "rockchip,vo-grf"); 503 - if (IS_ERR(hdmi->vo_regmap)) { 504 - dev_err(hdmi->dev, "Unable to get rockchip,vo-grf\n"); 505 - return PTR_ERR(hdmi->vo_regmap); 506 - } 504 + if (IS_ERR(hdmi->vo_regmap)) 505 + return dev_err_probe(hdmi->dev, PTR_ERR(hdmi->vo_regmap), 506 + "Unable to get rockchip,vo-grf\n"); 507 507 508 508 ret = devm_clk_bulk_get_all_enabled(hdmi->dev, &clks); 509 - if (ret < 0) { 510 - dev_err(hdmi->dev, "Failed to get clocks: %d\n", ret); 511 - return ret; 512 - } 509 + if (ret < 0) 510 + return dev_err_probe(hdmi->dev, ret, "Failed to get clocks\n"); 511 + 512 + ref_clk = clk_get(hdmi->dev, "ref"); 513 + if (IS_ERR(ref_clk)) 514 + return dev_err_probe(hdmi->dev, PTR_ERR(ref_clk), 515 + "Failed to get ref clock\n"); 516 + 517 + plat_data.ref_clk_rate = clk_get_rate(ref_clk); 518 + clk_put(ref_clk); 513 519 514 520 hdmi->enable_gpio = devm_gpiod_get_optional(hdmi->dev, "enable", 515 521 GPIOD_OUT_HIGH); 516 - if (IS_ERR(hdmi->enable_gpio)) { 517 - ret = PTR_ERR(hdmi->enable_gpio); 518 - dev_err(hdmi->dev, "Failed to request enable GPIO: %d\n", ret); 519 - return ret; 520 - } 522 + if (IS_ERR(hdmi->enable_gpio)) 523 + return dev_err_probe(hdmi->dev, PTR_ERR(hdmi->enable_gpio), 524 + "Failed to request enable GPIO\n"); 521 525 522 526 hdmi->phy = devm_of_phy_get_by_index(dev, dev->of_node, 0); 523 - if (IS_ERR(hdmi->phy)) { 524 - ret = PTR_ERR(hdmi->phy); 525 - if (ret != -EPROBE_DEFER) 526 - dev_err(hdmi->dev, "failed to get phy: %d\n", ret); 527 - return ret; 528 - } 527 + if (IS_ERR(hdmi->phy)) 528 + return dev_err_probe(hdmi->dev, PTR_ERR(hdmi->phy), 529 + "Failed to get phy\n"); 529 530 530 531 cfg->ctrl_ops->io_init(hdmi); 531 532
··· 533 536 plat_data.main_irq = platform_get_irq_byname(pdev, "main"); 534 537 if (plat_data.main_irq < 0) 535 538 return plat_data.main_irq; 539 + 540 + plat_data.cec_irq = platform_get_irq_byname(pdev, "cec"); 541 + if (plat_data.cec_irq < 0) 542 + return plat_data.cec_irq; 536 543 537 544 irq = platform_get_irq_byname(pdev, "hpd"); 538 545 if (irq < 0)
··· 557 556 558 557 hdmi->hdmi = dw_hdmi_qp_bind(pdev, encoder, &plat_data); 559 558 if (IS_ERR(hdmi->hdmi)) { 560 - ret = PTR_ERR(hdmi->hdmi); 561 559 drm_encoder_cleanup(encoder); 562 - return ret; 560 + return dev_err_probe(hdmi->dev, PTR_ERR(hdmi->hdmi), 561 + "Failed to bind dw-hdmi-qp"); 563 562 } 564 563 565 564 connector = drm_bridge_connector_init(drm, encoder); 566 - if (IS_ERR(connector)) { 567 - ret = PTR_ERR(connector); 568 - dev_err(hdmi->dev, "failed to init bridge connector: %d\n", ret); 569 - return ret; 570 - } 565 + if (IS_ERR(connector)) 566 + return dev_err_probe(hdmi->dev, PTR_ERR(connector), 567 + "Failed to init bridge connector\n"); 571 568 572 569 return drm_connector_attach_encoder(connector, encoder); 573 570 }
+3 -3
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 826 826 if (!crtc || WARN_ON(!fb)) 827 827 return 0; 828 828 829 - crtc_state = drm_atomic_get_existing_crtc_state(state, 830 - crtc); 829 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 831 830 if (WARN_ON(!crtc_state)) 832 831 return -EINVAL; 833 832 ··· 1091 1092 if (!plane->state->fb) 1092 1093 return -EINVAL; 1093 1094 1094 - crtc_state = drm_atomic_get_existing_crtc_state(state, new_plane_state->crtc); 1095 + crtc_state = drm_atomic_get_new_crtc_state(state, 1096 + new_plane_state->crtc); 1095 1097 1096 1098 /* Special case for asynchronous cursor updates. */ 1097 1099 if (!crtc_state)
+47 -36
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 1003 1003 struct drm_rect *src = &pstate->src; 1004 1004 int min_scale = FRAC_16_16(1, 8); 1005 1005 int max_scale = FRAC_16_16(8, 1); 1006 + int src_x, src_w, src_h; 1007 + int dest_w, dest_h; 1006 1008 int format; 1007 1009 int ret;
··· 1015 1013 vop2 = vp->vop2; 1016 1014 vop2_data = vop2->data; 1017 1015 1018 - cstate = drm_atomic_get_existing_crtc_state(pstate->state, crtc); 1016 + cstate = drm_atomic_get_new_crtc_state(pstate->state, crtc); 1019 1017 if (WARN_ON(!cstate)) 1020 1018 return -EINVAL;
··· 1032 1030 if (format < 0) 1033 1031 return format; 1034 1032 1035 - if (drm_rect_width(src) >> 16 < 4 || drm_rect_height(src) >> 16 < 4 || 1036 - drm_rect_width(dest) < 4 || drm_rect_height(dest) < 4) { 1037 - drm_err(vop2->drm, "Invalid size: %dx%d->%dx%d, min size is 4x4\n", 1038 - drm_rect_width(src) >> 16, drm_rect_height(src) >> 16, 1039 - drm_rect_width(dest), drm_rect_height(dest)); 1040 - pstate->visible = false; 1041 - return 0; 1033 + /* Co-ordinates have now been clipped */ 1034 + src_x = src->x1 >> 16; 1035 + src_w = drm_rect_width(src) >> 16; 1036 + src_h = drm_rect_height(src) >> 16; 1037 + dest_w = drm_rect_width(dest); 1038 + dest_h = drm_rect_height(dest); 1039 + 1040 + if (src_w < 4 || src_h < 4 || dest_w < 4 || dest_h < 4) { 1041 + drm_dbg_kms(vop2->drm, "Invalid size: %dx%d->%dx%d, min size is 4x4\n", 1042 + src_w, src_h, dest_w, dest_h); 1043 + return -EINVAL; 1042 1044 } 1043 1045 1044 - if (drm_rect_width(src) >> 16 > vop2_data->max_input.width || 1045 - drm_rect_height(src) >> 16 > vop2_data->max_input.height) { 1046 - drm_err(vop2->drm, "Invalid source: %dx%d. max input: %dx%d\n", 1047 - drm_rect_width(src) >> 16, 1048 - drm_rect_height(src) >> 16, 1049 - vop2_data->max_input.width, 1050 - vop2_data->max_input.height); 1046 + if (src_w > vop2_data->max_input.width || 1047 + src_h > vop2_data->max_input.height) { 1048 + drm_dbg_kms(vop2->drm, "Invalid source: %dx%d. max input: %dx%d\n", 1049 + src_w, src_h, 1050 + vop2_data->max_input.width, 1051 + vop2_data->max_input.height); 1051 1052 return -EINVAL; 1052 1053 }
··· 1058 1053 * Src.x1 can be odd when do clip, but yuv plane start point 1059 1054 * need align with 2 pixel. 1060 1055 */ 1061 - if (fb->format->is_yuv && ((pstate->src.x1 >> 16) % 2)) { 1062 - drm_err(vop2->drm, "Invalid Source: Yuv format not support odd xpos\n"); 1056 + if (fb->format->is_yuv && src_x % 2) { 1057 + drm_dbg_kms(vop2->drm, "Invalid Source: Yuv format not support odd xpos\n"); 1063 1058 return -EINVAL; 1064 1059 }
··· 1145 1140 struct vop2 *vop2 = win->vop2; 1146 1141 struct drm_framebuffer *fb = pstate->fb; 1147 1142 u32 bpp = vop2_get_bpp(fb->format); 1148 - u32 actual_w, actual_h, dsp_w, dsp_h; 1143 + u32 src_w, src_h, dsp_w, dsp_h; 1149 1144 u32 act_info, dsp_info; 1150 1145 u32 format; 1151 1146 u32 afbc_format;
··· 1209 1204 uv_mst = rk_obj->dma_addr + offset + fb->offsets[1]; 1210 1205 } 1211 1206 1212 - actual_w = drm_rect_width(src) >> 16; 1213 - actual_h = drm_rect_height(src) >> 16; 1207 + src_w = drm_rect_width(src) >> 16; 1208 + src_h = drm_rect_height(src) >> 16; 1214 1209 dsp_w = drm_rect_width(dest); 1215 1210 1216 1211 if (dest->x1 + dsp_w > adjusted_mode->hdisplay) {
··· 1220 1215 dsp_w = adjusted_mode->hdisplay - dest->x1; 1221 1216 if (dsp_w < 4) 1222 1217 dsp_w = 4; 1223 - actual_w = dsp_w * actual_w / drm_rect_width(dest); 1218 + src_w = dsp_w * src_w / drm_rect_width(dest); 1224 1219 } 1225 1220 1226 1221 dsp_h = drm_rect_height(dest);
··· 1232 1227 dsp_h = adjusted_mode->vdisplay - dest->y1; 1233 1228 if (dsp_h < 4) 1234 1229 dsp_h = 4; 1235 - actual_h = dsp_h * actual_h / drm_rect_height(dest); 1230 + src_h = dsp_h * src_h / drm_rect_height(dest); 1236 1231 } 1237 1232 1238 1233 /* 1239 1234 * This is workaround solution for IC design: 1240 - * esmart can't support scale down when actual_w % 16 == 1.
1235 + * esmart can't support scale down when src_w % 16 == 1. 1241 1236 */ 1242 1237 if (!(win->data->feature & WIN_FEATURE_AFBDC)) { 1243 - if (actual_w > dsp_w && (actual_w & 0xf) == 1) { 1238 + if (src_w > dsp_w && (src_w & 0xf) == 1) { 1244 1239 drm_dbg_kms(vop2->drm, "vp%d %s act_w[%d] MODE 16 == 1\n", 1245 - vp->id, win->data->name, actual_w); 1246 - actual_w -= 1; 1240 + vp->id, win->data->name, src_w); 1241 + src_w -= 1; 1247 1242 } 1248 1243 } 1249 1244 1250 - if (afbc_en && actual_w % 4) { 1251 - drm_dbg_kms(vop2->drm, "vp%d %s actual_w[%d] not 4 pixel aligned\n", 1252 - vp->id, win->data->name, actual_w); 1253 - actual_w = ALIGN_DOWN(actual_w, 4); 1245 + if (afbc_en && src_w % 4) { 1246 + drm_dbg_kms(vop2->drm, "vp%d %s src_w[%d] not 4 pixel aligned\n", 1247 + vp->id, win->data->name, src_w); 1248 + src_w = ALIGN_DOWN(src_w, 4); 1254 1249 } 1255 1250 1256 - act_info = (actual_h - 1) << 16 | ((actual_w - 1) & 0xffff); 1251 + act_info = (src_h - 1) << 16 | ((src_w - 1) & 0xffff); 1257 1252 dsp_info = (dsp_h - 1) << 16 | ((dsp_w - 1) & 0xffff); 1258 1253 1259 1254 format = vop2_convert_format(fb->format->format); 1260 1255 half_block_en = vop2_half_block_enable(pstate); 1261 1256 1262 1257 drm_dbg(vop2->drm, "vp%d update %s[%dx%d->%dx%d@%dx%d] fmt[%p4cc_%s] addr[%pad]\n", 1263 - vp->id, win->data->name, actual_w, actual_h, dsp_w, dsp_h, 1258 + vp->id, win->data->name, src_w, src_h, dsp_w, dsp_h, 1264 1259 dest->x1, dest->y1, 1265 1260 &fb->format->format, 1266 1261 afbc_en ? "AFBC" : "", &yrgb_mst);
··· 1289 1284 if (fb->modifier & AFBC_FORMAT_MOD_YTR) 1290 1285 afbc_format |= (1 << 4); 1291 1286 1292 - afbc_tile_num = ALIGN(actual_w, block_w) / block_w; 1287 + afbc_tile_num = ALIGN(src_w, block_w) / block_w; 1293 1288 1294 1289 /* 1295 1290 * AFBC pic_vir_width is count by pixel, this is different
··· 1367 1362 1368 1363 if (rotate_90 || rotate_270) { 1369 1364 act_info = swahw32(act_info); 1370 - actual_w = drm_rect_height(src) >> 16; 1371 - actual_h = drm_rect_width(src) >> 16; 1365 + src_w = drm_rect_height(src) >> 16; 1366 + src_h = drm_rect_width(src) >> 16; 1372 1367 } 1373 1368 1374 1369 vop2_win_write(win, VOP2_WIN_FORMAT, format);
··· 1384 1379 vop2_win_write(win, VOP2_WIN_UV_MST, uv_mst); 1385 1380 } 1386 1381 1387 - vop2_setup_scale(vop2, win, actual_w, actual_h, dsp_w, dsp_h, fb->format->format); 1382 + vop2_setup_scale(vop2, win, src_w, src_h, dsp_w, dsp_h, fb->format->format); 1388 1383 if (!vop2_cluster_window(win)) 1389 1384 vop2_plane_setup_color_key(plane, 0); 1390 1385 vop2_win_write(win, VOP2_WIN_ACT_INFO, act_info);
··· 2651 2646 vop2->map = devm_regmap_init_mmio(dev, vop2->regs, &vop2_regmap_config); 2652 2647 if (IS_ERR(vop2->map)) 2653 2648 return PTR_ERR(vop2->map); 2649 + 2650 + /* Set the bounds for framebuffer creation */ 2651 + drm->mode_config.min_width = 4; 2652 + drm->mode_config.min_height = 4; 2653 + drm->mode_config.max_width = vop2_data->max_input.width; 2654 + drm->mode_config.max_height = vop2_data->max_input.height; 2654 2655 2655 2656 ret = vop2_win_init(vop2); 2656 2657 if (ret)
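The act_info/dsp_info packing used in the VOP2 code above stores height minus one in the high halfword and width minus one in the low halfword of a 32-bit register value. A small sketch of that layout (assuming sizes have already passed the driver's minimum-4x4 validation):

```python
def pack_size(w: int, h: int) -> int:
    """Pack a size as the VOP2 ACT_INFO/DSP_INFO registers expect:
    (h - 1) in bits 31:16, (w - 1) in bits 15:0."""
    assert w >= 1 and h >= 1
    return ((h - 1) << 16) | ((w - 1) & 0xffff)

act_info = pack_size(1920, 1080)
dsp_info = pack_size(4, 4)   # the driver's minimum plane size
```

Storing dimension-minus-one is what allows a full 65536-pixel range to fit in 16 bits.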
+1
drivers/gpu/drm/rockchip/rockchip_vop_reg.c
··· 880 880 .win = rk3368_vop_win_data, 881 881 .win_size = ARRAY_SIZE(rk3368_vop_win_data), 882 882 .max_output = { 4096, 2160 }, 883 + .lut_size = 1024, 883 884 }; 884 885 885 886 static const struct vop_intr rk3366_vop_intr = {
+1
drivers/gpu/drm/sitronix/st7571-i2c.c
··· 263 263 u32 npixels = st7571->ncols * round_up(st7571->nlines, ST7571_PAGE_HEIGHT) * st7571->bpp; 264 264 char pixelvalue = 0x00; 265 265 266 + st7571_set_position(st7571, 0, 0); 266 267 for (int i = 0; i < npixels; i++) 267 268 regmap_bulk_write(st7571->regmap, ST7571_DATA_MODE, &pixelvalue, 1); 268 269
+1 -2
drivers/gpu/drm/sun4i/sun8i_ui_layer.c
··· 206 206 if (!crtc) 207 207 return 0; 208 208 209 - crtc_state = drm_atomic_get_existing_crtc_state(state, 210 - crtc); 209 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 211 210 if (WARN_ON(!crtc_state)) 212 211 return -EINVAL; 213 212
+1 -2
drivers/gpu/drm/sun4i/sun8i_vi_layer.c
··· 327 327 if (!crtc) 328 328 return 0; 329 329 330 - crtc_state = drm_atomic_get_existing_crtc_state(state, 331 - crtc); 330 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 332 331 if (WARN_ON(!crtc_state)) 333 332 return -EINVAL; 334 333
+1 -1
drivers/gpu/drm/tegra/dc.c
··· 1033 1033 int min_scale, max_scale; 1034 1034 int err; 1035 1035 1036 - crtc_state = drm_atomic_get_existing_crtc_state(state, new_state->crtc); 1036 + crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc); 1037 1037 if (WARN_ON(!crtc_state)) 1038 1038 return -EINVAL; 1039 1039
+105
drivers/gpu/drm/tests/drm_buddy_test.c
··· 21 21 return (1 << order) * chunk_size; 22 22 } 23 23 24 + static void drm_test_buddy_fragmentation_performance(struct kunit *test) 25 + { 26 + struct drm_buddy_block *block, *tmp; 27 + int num_blocks, i, ret, count = 0; 28 + LIST_HEAD(allocated_blocks); 29 + unsigned long elapsed_ms; 30 + LIST_HEAD(reverse_list); 31 + LIST_HEAD(test_blocks); 32 + LIST_HEAD(clear_list); 33 + LIST_HEAD(dirty_list); 34 + LIST_HEAD(free_list); 35 + struct drm_buddy mm; 36 + u64 mm_size = SZ_4G; 37 + ktime_t start, end; 38 + 39 + /* 40 + * Allocation under severe fragmentation 41 + * 42 + * Create severe fragmentation by allocating the entire 4 GiB address space 43 + * as tiny 8 KiB blocks but forcing a 64 KiB alignment. The resulting pattern 44 + * leaves many scattered holes. Split the allocations into two groups and 45 + * return them with different flags to block coalescing, then repeatedly 46 + * allocate and free 64 KiB blocks while timing the loop. This stresses how 47 + * quickly the allocator can satisfy larger, aligned requests from a pool of 48 + * highly fragmented space. 
49 + */ 50 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K), 51 + "buddy_init failed\n"); 52 + 53 + num_blocks = mm_size / SZ_64K; 54 + 55 + start = ktime_get(); 56 + /* Allocate with maximum fragmentation - 8K blocks with 64K alignment */ 57 + for (i = 0; i < num_blocks; i++) 58 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, 59 + &allocated_blocks, 0), 60 + "buddy_alloc hit an error size=%u\n", SZ_8K); 61 + 62 + list_for_each_entry_safe(block, tmp, &allocated_blocks, link) { 63 + if (count % 4 == 0 || count % 4 == 3) 64 + list_move_tail(&block->link, &clear_list); 65 + else 66 + list_move_tail(&block->link, &dirty_list); 67 + count++; 68 + } 69 + 70 + /* Free with different flags to ensure no coalescing */ 71 + drm_buddy_free_list(&mm, &clear_list, DRM_BUDDY_CLEARED); 72 + drm_buddy_free_list(&mm, &dirty_list, 0); 73 + 74 + for (i = 0; i < num_blocks; i++) 75 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K, 76 + &test_blocks, 0), 77 + "buddy_alloc hit an error size=%u\n", SZ_64K); 78 + drm_buddy_free_list(&mm, &test_blocks, 0); 79 + 80 + end = ktime_get(); 81 + elapsed_ms = ktime_to_ms(ktime_sub(end, start)); 82 + 83 + kunit_info(test, "Fragmented allocation took %lu ms\n", elapsed_ms); 84 + 85 + drm_buddy_fini(&mm); 86 + 87 + /* 88 + * Reverse free order under fragmentation 89 + * 90 + * Construct a fragmented 4 GiB space by allocating every 8 KiB block with 91 + * 64 KiB alignment, creating a dense scatter of small regions. Half of the 92 + * blocks are selectively freed to form sparse gaps, while the remaining 93 + * allocations are preserved, reordered in reverse, and released back with 94 + * the cleared flag. This models a pathological reverse-ordered free pattern 95 + * and measures how quickly the allocator can merge and reclaim space when 96 + * deallocation occurs in the opposite order of allocation, exposing the 97 + * cost difference between a linear freelist scan and an ordered tree lookup. 98 + */ 99 + ret = drm_buddy_init(&mm, mm_size, SZ_4K); 100 + KUNIT_ASSERT_EQ(test, ret, 0); 101 + 102 + start = ktime_get(); 103 + /* Allocate maximum fragmentation */ 104 + for (i = 0; i < num_blocks; i++) 105 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, 106 + &allocated_blocks, 0), 107 + "buddy_alloc hit an error size=%u\n", SZ_8K); 108 + 109 + list_for_each_entry_safe(block, tmp, &allocated_blocks, link) { 110 + if (count % 2 == 0) 111 + list_move_tail(&block->link, &free_list); 112 + count++; 113 + } 114 + drm_buddy_free_list(&mm, &free_list, DRM_BUDDY_CLEARED); 115 + 116 + list_for_each_entry_safe_reverse(block, tmp, &allocated_blocks, link) 117 + list_move(&block->link, &reverse_list); 118 + drm_buddy_free_list(&mm, &reverse_list, DRM_BUDDY_CLEARED); 119 + 120 + end = ktime_get(); 121 + elapsed_ms = ktime_to_ms(ktime_sub(end, start)); 122 + 123 + kunit_info(test, "Reverse-ordered free took %lu ms\n", elapsed_ms); 124 + 125 + drm_buddy_fini(&mm); 126 + } 127 + 24 128 static void drm_test_buddy_alloc_range_bias(struct kunit *test) 25 129 { 26 130 u32 mm_size, size, ps, bias_size, bias_start, bias_end, bias_rem;
··· 876 772 KUNIT_CASE(drm_test_buddy_alloc_contiguous), 877 773 KUNIT_CASE(drm_test_buddy_alloc_clear), 878 774 KUNIT_CASE(drm_test_buddy_alloc_range_bias), 775 + KUNIT_CASE(drm_test_buddy_fragmentation_performance), 879 776 {} 880 777 };
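The scale of the fragmentation pattern this new KUnit test sets up can be sanity-checked with simple arithmetic: one 8 KiB block at every 64 KiB-aligned slot of a 4 GiB space pins one eighth of the memory and leaves a 56 KiB hole per slot. A quick back-of-the-envelope sketch:

```python
# Mirrors the constants used by drm_test_buddy_fragmentation_performance
SZ_4G, SZ_64K, SZ_8K = 4 << 30, 64 << 10, 8 << 10

num_blocks = SZ_4G // SZ_64K      # one 8 KiB block per 64 KiB-aligned slot
pinned = num_blocks * SZ_8K       # memory actually held by allocations
holes = SZ_4G - pinned            # scattered free space between the blocks

print(num_blocks, pinned >> 20, holes >> 20)  # 65536 blocks, 512 MiB pinned, 3584 MiB in holes
```

Every subsequent 64 KiB request must then be satisfied from one of these 65536 fragmented 56 KiB holes plus whatever merging the allocator manages, which is exactly the stress the timed loop measures.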
+1 -8
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
··· 676 676 if (!crtc_state->active) 677 677 return 0; 678 678 679 - if (state->planes[0].ptr != crtc->primary || 680 - state->planes[0].state == NULL || 681 - state->planes[0].state->crtc != crtc) { 682 - dev_dbg(crtc->dev->dev, "CRTC primary plane must be present"); 683 - return -EINVAL; 684 - } 685 - 686 - return 0; 679 + return drm_atomic_helper_check_crtc_primary_plane(crtc_state); 687 680 } 688 681 689 682 static int tilcdc_crtc_enable_vblank(struct drm_crtc *crtc)
+1 -2
drivers/gpu/drm/tilcdc/tilcdc_plane.c
··· 42 42 return -EINVAL; 43 43 } 44 44 45 - crtc_state = drm_atomic_get_existing_crtc_state(state, 46 - new_state->crtc); 45 + crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc); 47 46 /* we should have a crtc state if the plane is attached to a crtc */ 48 47 if (WARN_ON(!crtc_state)) 49 48 return 0;
+10
drivers/gpu/drm/tiny/bochs.c
··· 22 22 #include <drm/drm_panic.h> 23 23 #include <drm/drm_plane_helper.h> 24 24 #include <drm/drm_probe_helper.h> 25 + #include <drm/drm_vblank.h> 26 + #include <drm/drm_vblank_helper.h> 25 27 26 28 #include <video/vga.h> 27 29 ··· 528 526 struct bochs_device *bochs = to_bochs_device(crtc->dev); 529 527 530 528 bochs_hw_blank(bochs, false); 529 + drm_crtc_vblank_on(crtc); 531 530 } 532 531 533 532 static void bochs_crtc_helper_atomic_disable(struct drm_crtc *crtc, ··· 536 533 { 537 534 struct bochs_device *bochs = to_bochs_device(crtc->dev); 538 535 536 + drm_crtc_vblank_off(crtc); 539 537 bochs_hw_blank(bochs, true); 540 538 } 541 539 542 540 static const struct drm_crtc_helper_funcs bochs_crtc_helper_funcs = { 543 541 .mode_set_nofb = bochs_crtc_helper_mode_set_nofb, 544 542 .atomic_check = bochs_crtc_helper_atomic_check, 543 + .atomic_flush = drm_crtc_vblank_atomic_flush, 545 544 .atomic_enable = bochs_crtc_helper_atomic_enable, 546 545 .atomic_disable = bochs_crtc_helper_atomic_disable, 547 546 }; ··· 555 550 .page_flip = drm_atomic_helper_page_flip, 556 551 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 557 552 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 553 + DRM_CRTC_VBLANK_TIMER_FUNCS, 558 554 }; 559 555 560 556 static const struct drm_encoder_funcs bochs_encoder_funcs = { ··· 675 669 drm_connector_helper_add(connector, &bochs_connector_helper_funcs); 676 670 drm_connector_attach_edid_property(connector); 677 671 drm_connector_attach_encoder(connector, encoder); 672 + 673 + ret = drm_vblank_init(dev, 1); 674 + if (ret) 675 + return ret; 678 676 679 677 drm_mode_config_reset(dev); 680 678
+11
drivers/gpu/drm/tiny/cirrus-qemu.c
··· 45 45 #include <drm/drm_modeset_helper_vtables.h> 46 46 #include <drm/drm_module.h> 47 47 #include <drm/drm_probe_helper.h> 48 + #include <drm/drm_vblank.h> 49 + #include <drm/drm_vblank_helper.h> 48 50 49 51 #define DRIVER_NAME "cirrus-qemu" 50 52 #define DRIVER_DESC "qemu cirrus vga" ··· 406 404 #endif 407 405 408 406 drm_dev_exit(idx); 407 + 408 + drm_crtc_vblank_on(crtc); 409 409 } 410 410 411 411 static const struct drm_crtc_helper_funcs cirrus_crtc_helper_funcs = { 412 412 .atomic_check = cirrus_crtc_helper_atomic_check, 413 + .atomic_flush = drm_crtc_vblank_atomic_flush, 413 414 .atomic_enable = cirrus_crtc_helper_atomic_enable, 415 + .atomic_disable = drm_crtc_vblank_atomic_disable, 414 416 }; 415 417 416 418 static const struct drm_crtc_funcs cirrus_crtc_funcs = { ··· 424 418 .page_flip = drm_atomic_helper_page_flip, 425 419 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 426 420 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 421 + DRM_CRTC_VBLANK_TIMER_FUNCS, 427 422 }; 428 423 429 424 static const struct drm_encoder_funcs cirrus_encoder_funcs = { ··· 497 490 drm_connector_helper_add(connector, &cirrus_connector_helper_funcs); 498 491 499 492 ret = drm_connector_attach_encoder(connector, encoder); 493 + if (ret) 494 + return ret; 495 + 496 + ret = drm_vblank_init(dev, 1); 500 497 if (ret) 501 498 return ret; 502 499
+3
drivers/gpu/drm/ttm/ttm_resource.c
··· 587 587 { 588 588 uint64_t usage; 589 589 590 + if (WARN_ON_ONCE(!man->bdev)) 591 + return 0; 592 + 590 593 spin_lock(&man->bdev->lru_lock); 591 594 usage = man->usage; 592 595 spin_unlock(&man->bdev->lru_lock);
+4 -4
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 262 262 struct drm_crtc_state *crtc_state = NULL; 263 263 264 264 if (new_state->crtc) { 265 - crtc_state = drm_atomic_get_existing_crtc_state(state, 266 - new_state->crtc); 265 + crtc_state = drm_atomic_get_new_crtc_state(state, 266 + new_state->crtc); 267 267 if (WARN_ON(!crtc_state)) 268 268 return -EINVAL; 269 269 } ··· 344 344 int ret; 345 345 346 346 if (new_state->crtc) { 347 - crtc_state = drm_atomic_get_existing_crtc_state(state, 348 - new_state->crtc); 347 + crtc_state = drm_atomic_get_new_crtc_state(state, 348 + new_state->crtc); 349 349 if (WARN_ON(!crtc_state)) 350 350 return -EINVAL; 351 351 }
+2 -4
drivers/gpu/drm/vc4/vc4_plane.c
··· 497 497 u32 v_subsample = fb->format->vsub; 498 498 int ret; 499 499 500 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 501 - state->crtc); 500 + crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 502 501 if (!crtc_state) { 503 502 DRM_DEBUG_KMS("Invalid crtc state\n"); 504 503 return -EINVAL; ··· 874 875 unsigned int vscale_factor; 875 876 876 877 vc4_state = to_vc4_plane_state(state); 877 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 878 - state->crtc); 878 + crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 879 879 vrefresh = drm_mode_vrefresh(&crtc_state->adjusted_mode); 880 880 881 881 /* The HVS is able to process 2 pixels/cycle when scaling the source,
+1 -1
drivers/gpu/drm/vgem/vgem_fence.c
··· 79 79 dma_fence_init(&fence->base, &vgem_fence_ops, &fence->lock, 80 80 dma_fence_context_alloc(1), 1); 81 81 82 - timer_setup(&fence->timer, vgem_fence_timeout, 0); 82 + timer_setup(&fence->timer, vgem_fence_timeout, TIMER_IRQSAFE); 83 83 84 84 /* We force the fence to expire within 10s to prevent driver hangs */ 85 85 mod_timer(&fence->timer, jiffies + VGEM_FENCE_TIMEOUT);
+32 -4
drivers/gpu/drm/virtio/virtgpu_display.c
··· 32 32 #include <drm/drm_gem_framebuffer_helper.h> 33 33 #include <drm/drm_probe_helper.h> 34 34 #include <drm/drm_simple_kms_helper.h> 35 + #include <drm/drm_vblank.h> 36 + #include <drm/drm_vblank_helper.h> 35 37 36 38 #include "virtgpu_drv.h"
··· 57 55 .reset = drm_atomic_helper_crtc_reset, 58 56 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 59 57 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 58 + DRM_CRTC_VBLANK_TIMER_FUNCS, 60 59 }; 61 60 62 61 static const struct drm_framebuffer_funcs virtio_gpu_fb_funcs = {
··· 102 99 static void virtio_gpu_crtc_atomic_enable(struct drm_crtc *crtc, 103 100 struct drm_atomic_state *state) 104 101 { 102 + drm_crtc_vblank_on(crtc); 105 103 } 106 104 107 105 static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc,
··· 111 107 struct drm_device *dev = crtc->dev; 112 108 struct virtio_gpu_device *vgdev = dev->dev_private; 113 109 struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc); 110 + 111 + drm_crtc_vblank_off(crtc); 114 112 115 113 virtio_gpu_cmd_set_scanout(vgdev, output->index, 0, 0, 0, 0, 0); 116 114 virtio_gpu_notify(vgdev);
··· 127 121 static void virtio_gpu_crtc_atomic_flush(struct drm_crtc *crtc, 128 122 struct drm_atomic_state *state) 129 123 { 130 - struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, 131 - crtc); 124 + struct drm_device *dev = crtc->dev; 125 + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 132 126 struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc); 127 + struct drm_pending_vblank_event *event; 133 128 134 129 /* 135 130 * virtio-gpu can't do modeset and plane update operations
··· 140 133 */ 141 134 if (drm_atomic_crtc_needs_modeset(crtc_state)) 142 135 output->needs_modeset = true; 136 + 137 + spin_lock_irq(&dev->event_lock); 138 + 139 + event = crtc_state->event; 140 + crtc_state->event = NULL; 141 + 142 + if (event) { 143 + if (drm_crtc_vblank_get(crtc) == 0) 144 + drm_crtc_arm_vblank_event(crtc, event); 145 + else 146 + drm_crtc_send_vblank_event(crtc, event); 147 + } 148 + 149 + spin_unlock_irq(&dev->event_lock); 143 150 } 144 151 145 152 static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = {
··· 278 257 struct drm_encoder *encoder = &output->enc; 279 258 struct drm_crtc *crtc = &output->crtc; 280 259 struct drm_plane *primary, *cursor; 260 + int ret; 281 261 282 262 output->index = index; 283 263 if (index == 0) {
··· 293 271 cursor = virtio_gpu_plane_init(vgdev, DRM_PLANE_TYPE_CURSOR, index); 294 272 if (IS_ERR(cursor)) 295 273 return PTR_ERR(cursor); 296 - drm_crtc_init_with_planes(dev, crtc, primary, cursor, 297 - &virtio_gpu_crtc_funcs, NULL); 274 + ret = drm_crtc_init_with_planes(dev, crtc, primary, cursor, 275 + &virtio_gpu_crtc_funcs, NULL); 276 + if (ret) 277 + return ret; 298 278 drm_crtc_helper_add(crtc, &virtio_gpu_crtc_helper_funcs); 299 279 300 280 drm_connector_init(dev, connector, &virtio_gpu_connector_funcs,
··· 379 355 380 356 for (i = 0 ; i < vgdev->num_scanouts; ++i) 381 357 vgdev_output_init(vgdev, i); 358 + 359 + ret = drm_vblank_init(vgdev->ddev, vgdev->num_scanouts); 360 + if (ret) 361 + return ret; 382 362 383 363 drm_mode_config_reset(vgdev->ddev); 384 364 return 0;
+2 -2
drivers/gpu/drm/vkms/vkms_crtc.c
··· 127 127 return ret; 128 128 129 129 drm_for_each_plane_mask(plane, crtc->dev, crtc_state->plane_mask) { 130 - plane_state = drm_atomic_get_existing_plane_state(crtc_state->state, plane); 130 + plane_state = drm_atomic_get_new_plane_state(crtc_state->state, plane); 131 131 WARN_ON(!plane_state); 132 132 133 133 if (!plane_state->visible) ··· 143 143 144 144 i = 0; 145 145 drm_for_each_plane_mask(plane, crtc->dev, crtc_state->plane_mask) { 146 - plane_state = drm_atomic_get_existing_plane_state(crtc_state->state, plane); 146 + plane_state = drm_atomic_get_new_plane_state(crtc_state->state, plane); 147 147 148 148 if (!plane_state->visible) 149 149 continue;
+3 -3
drivers/gpu/drm/xe/display/xe_display.c
··· 323 323 * properly. 324 324 */ 325 325 intel_power_domains_disable(display); 326 - drm_client_dev_suspend(&xe->drm, false); 326 + drm_client_dev_suspend(&xe->drm); 327 327 328 328 if (intel_display_device_present(display)) { 329 329 drm_kms_helper_poll_disable(&xe->drm); ··· 355 355 return; 356 356 357 357 intel_power_domains_disable(display); 358 - drm_client_dev_suspend(&xe->drm, false); 358 + drm_client_dev_suspend(&xe->drm); 359 359 360 360 if (intel_display_device_present(display)) { 361 361 drm_kms_helper_poll_disable(&xe->drm); ··· 480 480 481 481 intel_opregion_resume(display); 482 482 483 - drm_client_dev_resume(&xe->drm, false); 483 + drm_client_dev_resume(&xe->drm); 484 484 485 485 intel_power_domains_enable(display); 486 486 }
+2
include/drm/bridge/dw_hdmi_qp.h
··· 23 23 const struct dw_hdmi_qp_phy_ops *phy_ops; 24 24 void *phy_data; 25 25 int main_irq; 26 + int cec_irq; 27 + unsigned long ref_clk_rate; 26 28 }; 27 29 28 30 struct dw_hdmi_qp *dw_hdmi_qp_bind(struct platform_device *pdev,
+82 -70
include/drm/drm_atomic.h
··· 159 159 160 160 struct __drm_planes_state { 161 161 struct drm_plane *ptr; 162 - struct drm_plane_state *state, *old_state, *new_state; 162 + 163 + /** 164 + * @state_to_destroy: 165 + * 166 + * Used to track the &drm_plane_state we will need to free when 167 + * tearing down the associated &drm_atomic_state in 168 + * &drm_mode_config_funcs.atomic_state_clear or 169 + * drm_atomic_state_default_clear(). 170 + * 171 + * Before a commit, and the call to 172 + * drm_atomic_helper_swap_state() in particular, it points to 173 + * the same state as @new_state. After a commit, it points to 174 + * the same state as @old_state. 175 + */ 176 + struct drm_plane_state *state_to_destroy; 177 + 178 + struct drm_plane_state *old_state, *new_state; 163 179 }; 164 180 165 181 struct __drm_crtcs_state { 166 182 struct drm_crtc *ptr; 167 - struct drm_crtc_state *state, *old_state, *new_state; 183 + 184 + /** 185 + * @state_to_destroy: 186 + * 187 + * Used to track the &drm_crtc_state we will need to free when 188 + * tearing down the associated &drm_atomic_state in 189 + * &drm_mode_config_funcs.atomic_state_clear or 190 + * drm_atomic_state_default_clear(). 191 + * 192 + * Before a commit, and the call to 193 + * drm_atomic_helper_swap_state() in particular, it points to 194 + * the same state as @new_state. After a commit, it points to 195 + * the same state as @old_state. 196 + */ 197 + struct drm_crtc_state *state_to_destroy; 198 + 199 + struct drm_crtc_state *old_state, *new_state; 168 200 169 201 /** 170 202 * @commit:
··· 214 182 215 183 struct __drm_connnectors_state { 216 184 struct drm_connector *ptr; 217 - struct drm_connector_state *state, *old_state, *new_state; 185 + 186 + /** 187 + * @state_to_destroy: 188 + * 189 + * Used to track the &drm_connector_state we will need to free 190 + * when tearing down the associated &drm_atomic_state in 191 + * &drm_mode_config_funcs.atomic_state_clear or 192 + * drm_atomic_state_default_clear().
193 + * 194 + * Before a commit, and the call to 195 + * drm_atomic_helper_swap_state() in particular, it points to 196 + * the same state as @new_state. After a commit, it points to 197 + * the same state as @old_state. 198 + */ 199 + struct drm_connector_state *state_to_destroy; 200 + 201 + struct drm_connector_state *old_state, *new_state; 202 + 218 203 /** 219 204 * @out_fence_ptr: ··· 391 342 392 343 struct __drm_private_objs_state { 393 344 struct drm_private_obj *ptr; 394 - struct drm_private_state *state, *old_state, *new_state; 345 + 346 + /** 347 + * @state_to_destroy: 348 + * 349 + * Used to track the &drm_private_state we will need to free 350 + * when tearing down the associated &drm_atomic_state in 351 + * &drm_mode_config_funcs.atomic_state_clear or 352 + * drm_atomic_state_default_clear(). 353 + * 354 + * Before a commit, and the call to 355 + * drm_atomic_helper_swap_state() in particular, it points to 356 + * the same state as @new_state. After a commit, it points to 357 + * the same state as @old_state. 358 + */ 359 + struct drm_private_state *state_to_destroy; 360 + 361 + struct drm_private_state *old_state, *new_state; 395 362 }; 396 363 397 364 /** ··· 702 637 struct drm_encoder *encoder); 703 638 704 639 /** 705 - * drm_atomic_get_existing_crtc_state - get CRTC state, if it exists 706 - * @state: global atomic state object 707 - * @crtc: CRTC to grab 708 - * 709 - * This function returns the CRTC state for the given CRTC, or NULL 710 - * if the CRTC is not part of the global atomic state. 711 - * 712 - * This function is deprecated, @drm_atomic_get_old_crtc_state or 713 - * @drm_atomic_get_new_crtc_state should be used instead.
714 - */ 715 - static inline struct drm_crtc_state * 716 - drm_atomic_get_existing_crtc_state(const struct drm_atomic_state *state, 717 - struct drm_crtc *crtc) 718 - { 719 - return state->crtcs[drm_crtc_index(crtc)].state; 720 - } 721 - 722 - /** 723 640 * drm_atomic_get_old_crtc_state - get old CRTC state, if it exists 724 641 * @state: global atomic state object 725 642 * @crtc: CRTC to grab ··· 728 681 struct drm_crtc *crtc) 729 682 { 730 683 return state->crtcs[drm_crtc_index(crtc)].new_state; 731 - } 732 - 733 - /** 734 - * drm_atomic_get_existing_plane_state - get plane state, if it exists 735 - * @state: global atomic state object 736 - * @plane: plane to grab 737 - * 738 - * This function returns the plane state for the given plane, or NULL 739 - * if the plane is not part of the global atomic state. 740 - * 741 - * This function is deprecated, @drm_atomic_get_old_plane_state or 742 - * @drm_atomic_get_new_plane_state should be used instead. 743 - */ 744 - static inline struct drm_plane_state * 745 - drm_atomic_get_existing_plane_state(const struct drm_atomic_state *state, 746 - struct drm_plane *plane) 747 - { 748 - return state->planes[drm_plane_index(plane)].state; 749 684 } 750 685 751 686 /** ··· 758 729 struct drm_plane *plane) 759 730 { 760 731 return state->planes[drm_plane_index(plane)].new_state; 761 - } 762 - 763 - /** 764 - * drm_atomic_get_existing_connector_state - get connector state, if it exists 765 - * @state: global atomic state object 766 - * @connector: connector to grab 767 - * 768 - * This function returns the connector state for the given connector, 769 - * or NULL if the connector is not part of the global atomic state. 770 - * 771 - * This function is deprecated, @drm_atomic_get_old_connector_state or 772 - * @drm_atomic_get_new_connector_state should be used instead. 
773 - */ 774 - static inline struct drm_connector_state * 775 - drm_atomic_get_existing_connector_state(const struct drm_atomic_state *state, 776 - struct drm_connector *connector) 777 - { 778 - int index = drm_connector_index(connector); 779 - 780 - if (index >= state->num_connector) 781 - return NULL; 782 - 783 - return state->connectors[index].state; 784 732 } 785 733 786 734 /** ··· 805 799 * @state: global atomic state object 806 800 * @plane: plane to grab 807 801 * 808 - * This function returns the plane state for the given plane, either from 809 - * @state, or if the plane isn't part of the atomic state update, from @plane. 810 - * This is useful in atomic check callbacks, when drivers need to peek at, but 811 - * not change, state of other planes, since it avoids threading an error code 812 - * back up the call chain. 802 + * This function returns the plane state for the given plane, either the 803 + * new plane state from @state, or if the plane isn't part of the atomic 804 + * state update, from @plane. This is useful in atomic check callbacks, 805 + * when drivers need to peek at, but not change, state of other planes, 806 + * since it avoids threading an error code back up the call chain. 813 807 * 814 808 * WARNING: 815 809 * ··· 830 824 __drm_atomic_get_current_plane_state(const struct drm_atomic_state *state, 831 825 struct drm_plane *plane) 832 826 { 833 - if (state->planes[drm_plane_index(plane)].state) 834 - return state->planes[drm_plane_index(plane)].state; 827 + struct drm_plane_state *plane_state; 835 828 829 + plane_state = drm_atomic_get_new_plane_state(state, plane); 830 + if (plane_state) 831 + return plane_state; 832 + 833 + /* 834 + * If the plane isn't part of the state, fallback to the currently active one. 835 + */ 836 836 return plane->state; 837 837 } 838 838
+8 -3
include/drm/drm_buddy.h
···
10 10 #include <linux/list.h>
11 11 #include <linux/slab.h>
12 12 #include <linux/sched.h>
13 +#include <linux/rbtree.h>
13 14 
14 15 #include <drm/drm_print.h>
15 16 
···
45 44 	 * a list, if so desired. As soon as the block is freed with
46 45 	 * drm_buddy_free* ownership is given back to the mm.
47 46 	 */
48 -	struct list_head link;
47 +	union {
48 +		struct rb_node rb;
49 +		struct list_head link;
50 +	};
51 +
49 52 	struct list_head tmp_link;
50 53 };
···
64 59  */
65 60 struct drm_buddy {
66 61 	/* Maintain a free list for each order. */
67 -	struct list_head *free_list;
62 +	struct rb_root **free_trees;
68 63 
69 64 	/*
70 65 	 * Maintain explicit binary tree(s) to track the allocation of the
···
90 85 };
91 86 
92 87 static inline u64
93 -drm_buddy_block_offset(struct drm_buddy_block *block)
88 +drm_buddy_block_offset(const struct drm_buddy_block *block)
94 89 {
95 90 	return block->header & DRM_BUDDY_HEADER_OFFSET;
96 91 }
+2 -12
include/drm/drm_client.h
···
70 70 	 * Called when suspending the device.
71 71 	 *
72 72 	 * This callback is optional.
73 -	 *
74 -	 * FIXME: Some callers hold the console lock when invoking this
75 -	 *        function. This interferes with fbdev emulation, which
76 -	 *        also tries to acquire the lock. Push the console lock
77 -	 *        into the callback and remove 'holds_console_lock'.
78 73 	 */
79 -	int (*suspend)(struct drm_client_dev *client, bool holds_console_lock);
74 +	int (*suspend)(struct drm_client_dev *client);
80 75 
81 76 	/**
82 77 	 * @resume:
···
79 84 	 * Called when resuming the device from suspend.
80 85 	 *
81 86 	 * This callback is optional.
82 -	 *
83 -	 * FIXME: Some callers hold the console lock when invoking this
84 -	 *        function. This interferes with fbdev emulation, which
85 -	 *        also tries to acquire the lock. Push the console lock
86 -	 *        into the callback and remove 'holds_console_lock'.
87 87 	 */
88 -	int (*resume)(struct drm_client_dev *client, bool holds_console_lock);
88 +	int (*resume)(struct drm_client_dev *client);
89 89 };
90 90 
91 91 /**
+4 -4
include/drm/drm_client_event.h
···
11 11 void drm_client_dev_unregister(struct drm_device *dev);
12 12 void drm_client_dev_hotplug(struct drm_device *dev);
13 13 void drm_client_dev_restore(struct drm_device *dev);
14 -void drm_client_dev_suspend(struct drm_device *dev, bool holds_console_lock);
15 -void drm_client_dev_resume(struct drm_device *dev, bool holds_console_lock);
14 +void drm_client_dev_suspend(struct drm_device *dev);
15 +void drm_client_dev_resume(struct drm_device *dev);
16 16 #else
17 17 static inline void drm_client_dev_unregister(struct drm_device *dev)
18 18 { }
···
20 20 { }
21 21 static inline void drm_client_dev_restore(struct drm_device *dev)
22 22 { }
23 -static inline void drm_client_dev_suspend(struct drm_device *dev, bool holds_console_lock)
23 +static inline void drm_client_dev_suspend(struct drm_device *dev)
24 24 { }
25 -static inline void drm_client_dev_resume(struct drm_device *dev, bool holds_console_lock)
25 +static inline void drm_client_dev_resume(struct drm_device *dev)
26 26 { }
27 27 #endif
28 28 
+16
include/linux/dma-buf/heaps/cma.h
···
1 +/* SPDX-License-Identifier: GPL-2.0 */
2 +#ifndef DMA_BUF_HEAP_CMA_H_
3 +#define DMA_BUF_HEAP_CMA_H_
4 +
5 +struct cma;
6 +
7 +#ifdef CONFIG_DMABUF_HEAPS_CMA
8 +int dma_heap_cma_register_heap(struct cma *cma);
9 +#else
10 +static inline int dma_heap_cma_register_heap(struct cma *cma)
11 +{
12 +	return 0;
13 +}
14 +#endif // CONFIG_DMABUF_HEAPS_CMA
15 +
16 +#endif // DMA_BUF_HEAP_CMA_H_
+13
include/uapi/drm/amdxdna_accel.h
···
523 523 	__u32 pad;
524 524 };
525 525 
526 +/**
527 + * struct amdxdna_async_error - XDNA async error structure
528 + */
529 +struct amdxdna_async_error {
530 +	/** @err_code: Error code. */
531 +	__u64 err_code;
532 +	/** @ts_us: Timestamp. */
533 +	__u64 ts_us;
534 +	/** @ex_err_code: Extra error code. */
535 +	__u64 ex_err_code;
536 +};
537 +
526 538 #define DRM_AMDXDNA_HW_CONTEXT_ALL	0
539 +#define DRM_AMDXDNA_HW_LAST_ASYNC_ERR	2
527 540 
528 541 /**
529 542  * struct amdxdna_drm_get_array - Get information array.
+11
kernel/dma/contiguous.c
···
42 42 #include <linux/memblock.h>
43 43 #include <linux/err.h>
44 44 #include <linux/sizes.h>
45 +#include <linux/dma-buf/heaps/cma.h>
45 46 #include <linux/dma-map-ops.h>
46 47 #include <linux/cma.h>
47 48 #include <linux/nospec.h>
···
242 241 	}
243 242 
244 243 	if (selected_size && !dma_contiguous_default_area) {
244 +		int ret;
245 +
245 246 		pr_debug("%s: reserving %ld MiB for global area\n", __func__,
246 247 			 (unsigned long)selected_size / SZ_1M);
···
251 248 					  selected_limit,
252 249 					  &dma_contiguous_default_area,
253 250 					  fixed);
251 +
252 +		ret = dma_heap_cma_register_heap(dma_contiguous_default_area);
253 +		if (ret)
254 +			pr_warn("Couldn't register default CMA heap.\n");
254 255 	}
255 256 }
···
499 492 
500 493 	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
501 494 		&rmem->base, (unsigned long)rmem->size / SZ_1M);
495 +
496 +	err = dma_heap_cma_register_heap(cma);
497 +	if (err)
498 +		pr_warn("Couldn't register CMA heap.\n");
502 499 
503 500 	return 0;
504 501 }