
Merge tag 'drm-misc-next-2021-03-03' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.13:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:
- %p4cc printk format modifier
- atomic: introduce drm_crtc_commit_wait, rework atomic plane state
helpers to take the drm_atomic_state structure
- dma-buf: heaps rework to return a struct dma_buf
- simple-kms: Add plane state helpers
- ttm: debugfs support, removal of sysfs

Driver Changes:
- Convert drivers to shadow plane helpers
- arc: Move to drm/tiny
- ast: cursor plane reworks
- gma500: Remove TTM and medfield support
- mxsfb: imx8mm support
- panfrost: MMU IRQ handling rework
- qxl: rework to better handle resource deallocation and locking
- sun4i: Add alpha properties for UI and VI layers
- vc4: RPi4 CEC support
- vmwgfx: doc cleanup

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20210303100600.dgnkadonzuvfnu22@gilmour

+5484 -4843
+18
Documentation/core-api/printk-formats.rst
··· 567 567 568 568 Passed by reference. 569 569 570 + V4L2 and DRM FourCC code (pixel format) 571 + --------------------------------------- 572 + 573 + :: 574 + 575 + %p4cc 576 + 577 + Print a FourCC code used by V4L2 or DRM, including format endianness and 578 + its numerical value as hexadecimal. 579 + 580 + Passed by reference. 581 + 582 + Examples:: 583 + 584 + %p4cc BG12 little-endian (0x32314742) 585 + %p4cc Y10 little-endian (0x20303159) 586 + %p4cc NV12 big-endian (0xb231564e) 587 + 570 588 Thanks 571 589 ====== 572 590
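The documented output of the new specifier can be reproduced in plain userspace C. This is an illustrative sketch, not the kernel's lib/vsprintf.c implementation: `fourcc_string()` is a made-up helper name, and only the big-endian flag bit (bit 31, as in drm_fourcc.h) and the output format follow the examples above.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Bit 31 marks a big-endian format, as in drm_fourcc.h. */
#define DRM_FORMAT_BIG_ENDIAN (1u << 31)

/*
 * fourcc_string() is an invented helper that formats a code the way the
 * %p4cc examples above show: four characters, the format endianness, and
 * the raw numerical value in hexadecimal.
 */
static void fourcc_string(char *buf, size_t len, uint32_t fourcc)
{
	uint32_t val = fourcc & ~DRM_FORMAT_BIG_ENDIAN;

	snprintf(buf, len, "%c%c%c%c %s-endian (0x%08x)",
		 (int)(val & 0xff), (int)((val >> 8) & 0xff),
		 (int)((val >> 16) & 0xff), (int)((val >> 24) & 0xff),
		 (fourcc & DRM_FORMAT_BIG_ENDIAN) ? "big" : "little",
		 (unsigned int)fourcc);
}
```

Calling fourcc_string(buf, sizeof(buf), 0x32314742) yields the first example line from the documentation hunk, "BG12 little-endian (0x32314742)".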
+1 -1
Documentation/devicetree/bindings/display/brcm,bcm2711-hdmi.yaml
··· 109 109 - resets 110 110 - ddc 111 111 112 - additionalProperties: false 112 + unevaluatedProperties: false 113 113 114 114 examples: 115 115 - |
+110
Documentation/devicetree/bindings/display/fsl,lcdif.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/fsl,lcdif.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale/NXP i.MX LCD Interface (LCDIF) 8 + 9 + maintainers: 10 + - Marek Vasut <marex@denx.de> 11 + - Stefan Agner <stefan@agner.ch> 12 + 13 + description: | 14 + (e)LCDIF display controller found in the Freescale/NXP i.MX SoCs. 15 + 16 + properties: 17 + compatible: 18 + oneOf: 19 + - enum: 20 + - fsl,imx23-lcdif 21 + - fsl,imx28-lcdif 22 + - fsl,imx6sx-lcdif 23 + - items: 24 + - enum: 25 + - fsl,imx6sl-lcdif 26 + - fsl,imx6sll-lcdif 27 + - fsl,imx6ul-lcdif 28 + - fsl,imx7d-lcdif 29 + - fsl,imx8mm-lcdif 30 + - fsl,imx8mq-lcdif 31 + - const: fsl,imx6sx-lcdif 32 + 33 + reg: 34 + maxItems: 1 35 + 36 + clocks: 37 + items: 38 + - description: Pixel clock 39 + - description: Bus clock 40 + - description: Display AXI clock 41 + minItems: 1 42 + 43 + clock-names: 44 + items: 45 + - const: pix 46 + - const: axi 47 + - const: disp_axi 48 + minItems: 1 49 + 50 + interrupts: 51 + maxItems: 1 52 + 53 + port: 54 + $ref: /schemas/graph.yaml#/properties/port 55 + description: The LCDIF output port 56 + 57 + required: 58 + - compatible 59 + - reg 60 + - clocks 61 + - interrupts 62 + - port 63 + 64 + additionalProperties: false 65 + 66 + allOf: 67 + - if: 68 + properties: 69 + compatible: 70 + contains: 71 + const: fsl,imx6sx-lcdif 72 + then: 73 + properties: 74 + clocks: 75 + minItems: 2 76 + maxItems: 3 77 + clock-names: 78 + minItems: 2 79 + maxItems: 3 80 + required: 81 + - clock-names 82 + else: 83 + properties: 84 + clocks: 85 + maxItems: 1 86 + clock-names: 87 + maxItems: 1 88 + 89 + examples: 90 + - | 91 + #include <dt-bindings/clock/imx6sx-clock.h> 92 + #include <dt-bindings/interrupt-controller/arm-gic.h> 93 + 94 + display-controller@2220000 { 95 + compatible = "fsl,imx6sx-lcdif"; 96 + reg = <0x02220000 0x4000>; 97 + interrupts = <GIC_SPI 5 
IRQ_TYPE_LEVEL_HIGH>; 98 + clocks = <&clks IMX6SX_CLK_LCDIF1_PIX>, 99 + <&clks IMX6SX_CLK_LCDIF_APB>, 100 + <&clks IMX6SX_CLK_DISPLAY_AXI>; 101 + clock-names = "pix", "axi", "disp_axi"; 102 + 103 + port { 104 + endpoint { 105 + remote-endpoint = <&panel_in>; 106 + }; 107 + }; 108 + }; 109 + 110 + ...
-87
Documentation/devicetree/bindings/display/mxsfb.txt
··· 1 - * Freescale MXS LCD Interface (LCDIF) 2 - 3 - New bindings: 4 - ============= 5 - Required properties: 6 - - compatible: Should be "fsl,imx23-lcdif" for i.MX23. 7 - Should be "fsl,imx28-lcdif" for i.MX28. 8 - Should be "fsl,imx6sx-lcdif" for i.MX6SX. 9 - Should be "fsl,imx8mq-lcdif" for i.MX8MQ. 10 - - reg: Address and length of the register set for LCDIF 11 - - interrupts: Should contain LCDIF interrupt 12 - - clocks: A list of phandle + clock-specifier pairs, one for each 13 - entry in 'clock-names'. 14 - - clock-names: A list of clock names. For MXSFB it should contain: 15 - - "pix" for the LCDIF block clock 16 - - (MX6SX-only) "axi", "disp_axi" for the bus interface clock 17 - 18 - Required sub-nodes: 19 - - port: The connection to an encoder chip. 20 - 21 - Example: 22 - 23 - lcdif1: display-controller@2220000 { 24 - compatible = "fsl,imx6sx-lcdif", "fsl,imx28-lcdif"; 25 - reg = <0x02220000 0x4000>; 26 - interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>; 27 - clocks = <&clks IMX6SX_CLK_LCDIF1_PIX>, 28 - <&clks IMX6SX_CLK_LCDIF_APB>, 29 - <&clks IMX6SX_CLK_DISPLAY_AXI>; 30 - clock-names = "pix", "axi", "disp_axi"; 31 - 32 - port { 33 - parallel_out: endpoint { 34 - remote-endpoint = <&panel_in_parallel>; 35 - }; 36 - }; 37 - }; 38 - 39 - Deprecated bindings: 40 - ==================== 41 - Required properties: 42 - - compatible: Should be "fsl,imx23-lcdif" for i.MX23. 43 - Should be "fsl,imx28-lcdif" for i.MX28. 44 - - reg: Address and length of the register set for LCDIF 45 - - interrupts: Should contain LCDIF interrupts 46 - - display: phandle to display node (see below for details) 47 - 48 - * display node 49 - 50 - Required properties: 51 - - bits-per-pixel: <16> for RGB565, <32> for RGB888/666. 52 - - bus-width: number of data lines. Could be <8>, <16>, <18> or <24>. 53 - 54 - Required sub-node: 55 - - display-timings: Refer to binding doc display-timing.txt for details. 
56 - 57 - Examples: 58 - 59 - lcdif@80030000 { 60 - compatible = "fsl,imx28-lcdif"; 61 - reg = <0x80030000 2000>; 62 - interrupts = <38 86>; 63 - 64 - display: display { 65 - bits-per-pixel = <32>; 66 - bus-width = <24>; 67 - 68 - display-timings { 69 - native-mode = <&timing0>; 70 - timing0: timing0 { 71 - clock-frequency = <33500000>; 72 - hactive = <800>; 73 - vactive = <480>; 74 - hfront-porch = <164>; 75 - hback-porch = <89>; 76 - hsync-len = <10>; 77 - vback-porch = <23>; 78 - vfront-porch = <10>; 79 - vsync-len = <10>; 80 - hsync-active = <0>; 81 - vsync-active = <0>; 82 - de-active = <1>; 83 - pixelclk-active = <0>; 84 - }; 85 - }; 86 - }; 87 - };
+12
Documentation/gpu/drm-kms-helpers.rst
··· 80 80 .. kernel-doc:: drivers/gpu/drm/drm_atomic_state_helper.c 81 81 :export: 82 82 83 + GEM Atomic Helper Reference 84 + --------------------------- 85 + 86 + .. kernel-doc:: drivers/gpu/drm/drm_gem_atomic_helper.c 87 + :doc: overview 88 + 89 + .. kernel-doc:: include/drm/drm_gem_atomic_helper.h 90 + :internal: 91 + 92 + .. kernel-doc:: drivers/gpu/drm/drm_gem_atomic_helper.c 93 + :export: 94 + 83 95 Simple KMS Helper Reference 84 96 =========================== 85 97
+15 -59
Documentation/gpu/todo.rst
··· 459 459 460 460 Level: Intermediate 461 461 462 - Plumb drm_atomic_state all over 463 - ------------------------------- 464 - 465 - Currently various atomic functions take just a single or a handful of 466 - object states (eg. plane state). While that single object state can 467 - suffice for some simple cases, we often have to dig out additional 468 - object states for dealing with various dependencies between the individual 469 - objects or the hardware they represent. The process of digging out the 470 - additional states is rather non-intuitive and error prone. 471 - 472 - To fix that most functions should rather take the overall 473 - drm_atomic_state as one of their parameters. The other parameters 474 - would generally be the object(s) we mainly want to interact with. 475 - 476 - For example, instead of 477 - 478 - .. code-block:: c 479 - 480 - int (*atomic_check)(struct drm_plane *plane, struct drm_plane_state *state); 481 - 482 - we would have something like 483 - 484 - .. code-block:: c 485 - 486 - int (*atomic_check)(struct drm_plane *plane, struct drm_atomic_state *state); 487 - 488 - The implementation can then trivially gain access to any required object 489 - state(s) via drm_atomic_get_plane_state(), drm_atomic_get_new_plane_state(), 490 - drm_atomic_get_old_plane_state(), and their equivalents for 491 - other object types. 492 - 493 - Additionally many drivers currently access the object->state pointer 494 - directly in their commit functions. That is not going to work if we 495 - eg. want to allow deeper commit pipelines as those pointers could 496 - then point to the states corresponding to a future commit instead of 497 - the current commit we're trying to process. Also non-blocking commits 498 - execute locklessly so there are serious concerns with dereferencing 499 - the object->state pointers without holding the locks that protect them. 500 - Use of drm_atomic_get_new_plane_state(), drm_atomic_get_old_plane_state(), 501 - etc. 
avoids these problems as well since they relate to a specific 502 - commit via the passed in drm_atomic_state. 503 - 504 - Contact: Ville Syrjälä, Daniel Vetter 505 - 506 - Level: Intermediate 507 - 508 462 Use struct dma_buf_map throughout codebase 509 463 ------------------------------------------ 510 464 ··· 550 596 551 597 Level: Intermediate 552 598 553 - KMS cleanups 554 - ------------ 599 + Object lifetime fixes 600 + --------------------- 555 601 556 - Some of these date from the very introduction of KMS in 2008 ... 602 + There's two related issues here 557 603 558 - - Make ->funcs and ->helper_private vtables optional. There's a bunch of empty 559 - function tables in drivers, but before we can remove them we need to make sure 560 - that all the users in helpers and drivers do correctly check for a NULL 561 - vtable. 604 + - Cleanup up the various ->destroy callbacks, which often are all the same 605 + simple code. 562 606 563 - - Cleanup up the various ->destroy callbacks. A lot of them just wrapt the 564 - drm_*_cleanup implementations and can be removed. Some tack a kfree() at the 565 - end, for which we could add drm_*_cleanup_kfree(). And then there's the (for 566 - historical reasons) misnamed drm_primary_helper_destroy() function. 607 + - Lots of drivers erroneously allocate DRM modeset objects using devm_kzalloc, 608 + which results in use-after free issues on driver unload. This can be serious 609 + trouble even for drivers for hardware integrated on the SoC due to 610 + EPROBE_DEFERRED backoff. 611 + 612 + Both these problems can be solved by switching over to drmm_kzalloc(), and the 613 + various convenience wrappers provided, e.g. drmm_crtc_alloc_with_planes(), 614 + drmm_universal_plane_alloc(), ... and so on. 615 + 616 + Contact: Daniel Vetter 567 617 568 618 Level: Intermediate 569 619 ··· 623 665 See the documentation of :ref:`VKMS <vkms>` for more details. 
This is an ideal 624 666 internship task, since it only requires a virtual machine and can be sized to 625 667 fit the available time. 626 - 627 - Contact: Daniel Vetter 628 668 629 669 Level: See details 630 670
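The devm_kzalloc-vs-drmm_kzalloc lifetime mismatch described in the new todo.rst text can be illustrated with a toy userspace model. Everything below prefixed `toy_` is invented for illustration; the real kernel APIs are drmm_kzalloc() and drm_dev_put(). The point is that a DRM-managed allocation is released only when the last reference to the device is dropped, not when the driver unbinds, which is why devm_kzalloc leads to use-after-free on unload.

```c
#include <stdlib.h>
#include <assert.h>

/* Toy model of a managed-allocation list tied to a refcounted object. */
struct toy_drmm_node {
	void *ptr;
	struct toy_drmm_node *next;
};

struct toy_drm_device {
	int refcount;			/* e.g. driver bind + open file */
	struct toy_drmm_node *allocs;	/* freed on final unref, not unbind */
};

/* Invented stand-in for drmm_kzalloc(): zeroed memory tied to the device. */
static void *toy_drmm_kzalloc(struct toy_drm_device *dev, size_t size)
{
	struct toy_drmm_node *node = malloc(sizeof(*node));

	node->ptr = calloc(1, size);
	node->next = dev->allocs;
	dev->allocs = node;
	return node->ptr;
}

/* Invented stand-in for drm_dev_put(): releases allocations at refcount 0. */
static void toy_drm_dev_put(struct toy_drm_device *dev)
{
	if (--dev->refcount > 0)
		return;
	while (dev->allocs) {
		struct toy_drmm_node *node = dev->allocs;

		dev->allocs = node->next;
		free(node->ptr);
		free(node);
	}
}
```

With a refcount of two (driver bound, device file open), dropping the driver's reference on unbind leaves the modeset-object allocation valid until the file handle is closed, which is the behaviour devm_kzalloc gets wrong.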
+2 -2
MAINTAINERS
··· 1323 1323 M: Alexey Brodkin <abrodkin@synopsys.com> 1324 1324 S: Supported 1325 1325 F: Documentation/devicetree/bindings/display/snps,arcpgu.txt 1326 - F: drivers/gpu/drm/arc/ 1326 + F: drivers/gpu/drm/tiny/arcpgu.c 1327 1327 1328 1328 ARCNET NETWORK LAYER 1329 1329 M: Michael Grzeschik <m.grzeschik@pengutronix.de> ··· 12267 12267 L: dri-devel@lists.freedesktop.org 12268 12268 S: Supported 12269 12269 T: git git://anongit.freedesktop.org/drm/drm-misc 12270 - F: Documentation/devicetree/bindings/display/mxsfb.txt 12270 + F: Documentation/devicetree/bindings/display/fsl,lcdif.yaml 12271 12271 F: drivers/gpu/drm/mxsfb/ 12272 12272 12273 12273 MYLEX DAC960 PCI RAID Controller
+12
drivers/dma-buf/dma-heap.c
··· 202 202 return heap->priv; 203 203 } 204 204 205 + /** 206 + * dma_heap_get_name() - get heap name 207 + * @heap: DMA-Heap to retrieve private data for 208 + * 209 + * Returns: 210 + * The char* for the heap name. 211 + */ 212 + const char *dma_heap_get_name(struct dma_heap *heap) 213 + { 214 + return heap->name; 215 + } 216 + 205 217 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) 206 218 { 207 219 struct dma_heap *heap, *h, *err_ret;
+1
drivers/dma-buf/heaps/cma_heap.c
··· 339 339 buffer->pagecount = pagecount; 340 340 341 341 /* create the dmabuf */ 342 + exp_info.exp_name = dma_heap_get_name(heap); 342 343 exp_info.ops = &cma_heap_buf_ops; 343 344 exp_info.size = buffer->len; 344 345 exp_info.flags = fd_flags;
+1
drivers/dma-buf/heaps/system_heap.c
··· 390 390 } 391 391 392 392 /* create the dmabuf */ 393 + exp_info.exp_name = dma_heap_get_name(heap); 393 394 exp_info.ops = &system_heap_buf_ops; 394 395 exp_info.size = buffer->len; 395 396 exp_info.flags = fd_flags;
-2
drivers/gpu/drm/Kconfig
··· 352 352 353 353 source "drivers/gpu/drm/etnaviv/Kconfig" 354 354 355 - source "drivers/gpu/drm/arc/Kconfig" 356 - 357 355 source "drivers/gpu/drm/hisilicon/Kconfig" 358 356 359 357 source "drivers/gpu/drm/mediatek/Kconfig"
+2 -2
drivers/gpu/drm/Makefile
··· 44 44 drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \ 45 45 drm_kms_helper_common.o drm_dp_dual_mode_helper.o \ 46 46 drm_simple_kms_helper.o drm_modeset_helper.o \ 47 - drm_scdc_helper.o drm_gem_framebuffer_helper.o \ 47 + drm_scdc_helper.o drm_gem_atomic_helper.o \ 48 + drm_gem_framebuffer_helper.o \ 48 49 drm_atomic_state_helper.o drm_damage_helper.o \ 49 50 drm_format_helper.o drm_self_refresh_helper.o 50 51 ··· 111 110 obj-y += bridge/ 112 111 obj-$(CONFIG_DRM_FSL_DCU) += fsl-dcu/ 113 112 obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/ 114 - obj-$(CONFIG_DRM_ARCPGU)+= arc/ 115 113 obj-y += hisilicon/ 116 114 obj-$(CONFIG_DRM_ZTE) += zte/ 117 115 obj-$(CONFIG_DRM_MXSFB) += mxsfb/
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1066 1066 return &adev->ddev; 1067 1067 } 1068 1068 1069 - static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device *bdev) 1069 + static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_device *bdev) 1070 1070 { 1071 1071 return container_of(bdev, struct amdgpu_device, mman.bdev); 1072 1072 }
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
··· 40 40 * All the BOs in a process share an eviction fence. When process X wants 41 41 * to map VRAM memory but TTM can't find enough space, TTM will attempt to 42 42 * evict BOs from its LRU list. TTM checks if the BO is valuable to evict 43 - * by calling ttm_bo_driver->eviction_valuable(). 43 + * by calling ttm_device_funcs->eviction_valuable(). 44 44 * 45 - * ttm_bo_driver->eviction_valuable() - will return false if the BO belongs 45 + * ttm_device_funcs->eviction_valuable() - will return false if the BO belongs 46 46 * to process X. Otherwise, it will return true to indicate BO can be 47 47 * evicted by TTM. 48 48 * 49 - * If ttm_bo_driver->eviction_valuable returns true, then TTM will continue 49 + * If ttm_device_funcs->eviction_valuable returns true, then TTM will continue 50 50 * the evcition process for that BO by calling ttm_bo_evict --> amdgpu_bo_move 51 51 * --> amdgpu_copy_buffer(). This sets up job in GPU scheduler. 52 52 *
+12 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 119 119 */ 120 120 #define ESTIMATE_PT_SIZE(mem_size) ((mem_size) >> 14) 121 121 122 + static size_t amdgpu_amdkfd_acc_size(uint64_t size) 123 + { 124 + size >>= PAGE_SHIFT; 125 + size *= sizeof(dma_addr_t) + sizeof(void *); 126 + 127 + return __roundup_pow_of_two(sizeof(struct amdgpu_bo)) + 128 + __roundup_pow_of_two(sizeof(struct ttm_tt)) + 129 + PAGE_ALIGN(size); 130 + } 131 + 122 132 static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev, 123 133 uint64_t size, u32 domain, bool sg) 124 134 { ··· 137 127 size_t acc_size, system_mem_needed, ttm_mem_needed, vram_needed; 138 128 int ret = 0; 139 129 140 - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, 141 - sizeof(struct amdgpu_bo)); 130 + acc_size = amdgpu_amdkfd_acc_size(size); 142 131 143 132 vram_needed = 0; 144 133 if (domain == AMDGPU_GEM_DOMAIN_GTT) { ··· 184 175 { 185 176 size_t acc_size; 186 177 187 - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, 188 - sizeof(struct amdgpu_bo)); 178 + acc_size = amdgpu_amdkfd_acc_size(size); 189 179 190 180 spin_lock(&kfd_mem_limit.mem_limit_lock); 191 181 if (domain == AMDGPU_GEM_DOMAIN_GTT) {
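The open-coded estimate in amdgpu_amdkfd_acc_size() replaces the removed ttm_bo_dma_acc_size(). A hedged userspace re-creation of the arithmetic follows; the two struct sizes are stand-in constants (the real sizeof values depend on the kernel config), and dma_addr_t is modelled as a 64-bit value. The accounting is: one dma_addr_t plus one page pointer per backing page, page-aligned, plus the BO and TT structs each rounded up to a power of two.

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Stand-in struct sizes; the real sizeof values are config-dependent. */
#define SIZEOF_AMDGPU_BO 1500UL
#define SIZEOF_TTM_TT     120UL

static size_t roundup_pow_of_two(size_t n)
{
	size_t p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Mirrors the shape of amdgpu_amdkfd_acc_size() from the hunk above. */
static size_t acc_size(uint64_t size)
{
	size >>= PAGE_SHIFT;
	size *= sizeof(uint64_t) + sizeof(void *); /* dma_addr_t + page ptr */

	return roundup_pow_of_two(SIZEOF_AMDGPU_BO) +
	       roundup_pow_of_two(SIZEOF_TTM_TT) +
	       PAGE_ALIGN(size);
}
```

For a 2 MiB buffer with 4 KiB pages and 64-bit pointers this gives 512 pages of 16-byte bookkeeping (8192 bytes, already page-aligned) plus the rounded struct sizes.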
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 487 487 488 488 r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, 489 489 num_hw_submission, amdgpu_job_hang_limit, 490 - timeout, ring->name); 490 + timeout, NULL, ring->name); 491 491 if (r) { 492 492 DRM_ERROR("Failed to create scheduler on ring %s.\n", 493 493 ring->name);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
··· 71 71 */ 72 72 static int amdgpu_gart_dummy_page_init(struct amdgpu_device *adev) 73 73 { 74 - struct page *dummy_page = ttm_bo_glob.dummy_read_page; 74 + struct page *dummy_page = ttm_glob.dummy_read_page; 75 75 76 76 if (adev->dummy_page_addr) 77 77 return 0;
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 28 28 #include "amdgpu.h" 29 29 #include "amdgpu_trace.h" 30 30 31 - static void amdgpu_job_timedout(struct drm_sched_job *s_job) 31 + static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job) 32 32 { 33 33 struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched); 34 34 struct amdgpu_job *job = to_amdgpu_job(s_job); ··· 41 41 amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) { 42 42 DRM_ERROR("ring %s timeout, but soft recovered\n", 43 43 s_job->sched->name); 44 - return; 44 + return DRM_GPU_SCHED_STAT_NOMINAL; 45 45 } 46 46 47 47 amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti); ··· 53 53 54 54 if (amdgpu_device_should_recover_gpu(ring->adev)) { 55 55 amdgpu_device_gpu_recover(ring->adev, job); 56 + return DRM_GPU_SCHED_STAT_NOMINAL; 56 57 } else { 57 58 drm_sched_suspend_timeout(&ring->sched); 58 59 if (amdgpu_sriov_vf(adev)) 59 60 adev->virt.tdr_debug = true; 61 + return DRM_GPU_SCHED_STAT_NOMINAL; 60 62 } 61 63 } 62 64
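Before this series, timedout_job callbacks returned void, so the scheduler could not tell whether the driver recovered the ring or the device was gone. A minimal model of the new contract: the enum values track drm_gpu_sched_stat as merged this cycle, while everything prefixed `toy_` is invented for illustration.

```c
#include <assert.h>

/* Tracks the new enum drm_gpu_sched_stat. */
enum toy_sched_stat {
	TOY_SCHED_STAT_NONE,	/* reserved */
	TOY_SCHED_STAT_NOMINAL,	/* scheduler can keep operating */
	TOY_SCHED_STAT_ENODEV,	/* device vanished, stop scheduling */
};

struct toy_job {
	int dev_unplugged;
	int soft_recovered;
};

/* Driver-side timeout handler: recover if possible, report device loss. */
static enum toy_sched_stat toy_timedout_job(struct toy_job *job)
{
	if (job->dev_unplugged)
		return TOY_SCHED_STAT_ENODEV;
	job->soft_recovered = 1;	/* e.g. amdgpu's ring soft recovery */
	return TOY_SCHED_STAT_NOMINAL;
}

/* Scheduler core: act on the driver's verdict instead of guessing. */
static int toy_sched_keep_running(struct toy_job *job)
{
	return toy_timedout_job(job) != TOY_SCHED_STAT_ENODEV;
}
```

This is why the amdgpu hunk above adds a DRM_GPU_SCHED_STAT_NOMINAL return on every path: the device is still present in each of those cases, even when a full GPU recovery is kicked off.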
+2 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 523 523 }; 524 524 struct amdgpu_bo *bo; 525 525 unsigned long page_align, size = bp->size; 526 - size_t acc_size; 527 526 int r; 528 527 529 528 /* Note that GDS/GWS/OA allocates 1 page per byte/resource. */ ··· 544 545 return -ENOMEM; 545 546 546 547 *bo_ptr = NULL; 547 - 548 - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, 549 - sizeof(struct amdgpu_bo)); 550 548 551 549 bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL); 552 550 if (bo == NULL) ··· 573 577 bo->tbo.priority = 1; 574 578 575 579 r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, bp->type, 576 - &bo->placement, page_align, &ctx, acc_size, 577 - NULL, bp->resv, &amdgpu_bo_destroy); 580 + &bo->placement, page_align, &ctx, NULL, 581 + bp->resv, &amdgpu_bo_destroy); 578 582 if (unlikely(r != 0)) 579 583 return r; 580 584
+14 -14
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 61 61 62 62 #define AMDGPU_TTM_VRAM_MAX_DW_READ (size_t)128 63 63 64 - static int amdgpu_ttm_backend_bind(struct ttm_bo_device *bdev, 64 + static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, 65 65 struct ttm_tt *ttm, 66 66 struct ttm_resource *bo_mem); 67 - static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev, 67 + static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, 68 68 struct ttm_tt *ttm); 69 69 70 70 static int amdgpu_ttm_init_on_chip(struct amdgpu_device *adev, ··· 646 646 * 647 647 * Called by ttm_mem_io_reserve() ultimately via ttm_bo_vm_fault() 648 648 */ 649 - static int amdgpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_resource *mem) 649 + static int amdgpu_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem) 650 650 { 651 651 struct amdgpu_device *adev = amdgpu_ttm_adev(bdev); 652 652 struct drm_mm_node *mm_node = mem->mm_node; ··· 893 893 * 894 894 * Called by amdgpu_ttm_backend_bind() 895 895 **/ 896 - static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, 896 + static int amdgpu_ttm_tt_pin_userptr(struct ttm_device *bdev, 897 897 struct ttm_tt *ttm) 898 898 { 899 899 struct amdgpu_device *adev = amdgpu_ttm_adev(bdev); ··· 931 931 /* 932 932 * amdgpu_ttm_tt_unpin_userptr - Unpin and unmap userptr pages 933 933 */ 934 - static void amdgpu_ttm_tt_unpin_userptr(struct ttm_bo_device *bdev, 934 + static void amdgpu_ttm_tt_unpin_userptr(struct ttm_device *bdev, 935 935 struct ttm_tt *ttm) 936 936 { 937 937 struct amdgpu_device *adev = amdgpu_ttm_adev(bdev); ··· 1015 1015 * Called by ttm_tt_bind() on behalf of ttm_bo_handle_move_mem(). 1016 1016 * This handles binding GTT memory to the device address space. 
1017 1017 */ 1018 - static int amdgpu_ttm_backend_bind(struct ttm_bo_device *bdev, 1018 + static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, 1019 1019 struct ttm_tt *ttm, 1020 1020 struct ttm_resource *bo_mem) 1021 1021 { ··· 1155 1155 * Called by ttm_tt_unbind() on behalf of ttm_bo_move_ttm() and 1156 1156 * ttm_tt_destroy(). 1157 1157 */ 1158 - static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev, 1158 + static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, 1159 1159 struct ttm_tt *ttm) 1160 1160 { 1161 1161 struct amdgpu_device *adev = amdgpu_ttm_adev(bdev); ··· 1180 1180 gtt->bound = false; 1181 1181 } 1182 1182 1183 - static void amdgpu_ttm_backend_destroy(struct ttm_bo_device *bdev, 1183 + static void amdgpu_ttm_backend_destroy(struct ttm_device *bdev, 1184 1184 struct ttm_tt *ttm) 1185 1185 { 1186 1186 struct amdgpu_ttm_tt *gtt = (void *)ttm; ··· 1234 1234 * Map the pages of a ttm_tt object to an address space visible 1235 1235 * to the underlying device. 1236 1236 */ 1237 - static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev, 1237 + static int amdgpu_ttm_tt_populate(struct ttm_device *bdev, 1238 1238 struct ttm_tt *ttm, 1239 1239 struct ttm_operation_ctx *ctx) 1240 1240 { ··· 1278 1278 * Unmaps pages of a ttm_tt object from the device address space and 1279 1279 * unpopulates the page array backing it. 
1280 1280 */ 1281 - static void amdgpu_ttm_tt_unpopulate(struct ttm_bo_device *bdev, 1281 + static void amdgpu_ttm_tt_unpopulate(struct ttm_device *bdev, 1282 1282 struct ttm_tt *ttm) 1283 1283 { 1284 1284 struct amdgpu_ttm_tt *gtt = (void *)ttm; ··· 1603 1603 amdgpu_bo_move_notify(bo, false, NULL); 1604 1604 } 1605 1605 1606 - static struct ttm_bo_driver amdgpu_bo_driver = { 1606 + static struct ttm_device_funcs amdgpu_bo_driver = { 1607 1607 .ttm_tt_create = &amdgpu_ttm_tt_create, 1608 1608 .ttm_tt_populate = &amdgpu_ttm_tt_populate, 1609 1609 .ttm_tt_unpopulate = &amdgpu_ttm_tt_unpopulate, ··· 1785 1785 mutex_init(&adev->mman.gtt_window_lock); 1786 1786 1787 1787 /* No others user of address space so set it to 0 */ 1788 - r = ttm_bo_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev, 1788 + r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev, 1789 1789 adev_to_drm(adev)->anon_inode->i_mapping, 1790 1790 adev_to_drm(adev)->vma_offset_manager, 1791 1791 adev->need_swiotlb, ··· 1926 1926 ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GDS); 1927 1927 ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GWS); 1928 1928 ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_OA); 1929 - ttm_bo_device_release(&adev->mman.bdev); 1929 + ttm_device_fini(&adev->mman.bdev); 1930 1930 adev->mman.initialized = false; 1931 1931 DRM_INFO("amdgpu: ttm finalized\n"); 1932 1932 } ··· 2002 2002 return ret; 2003 2003 } 2004 2004 2005 - static struct vm_operations_struct amdgpu_ttm_vm_ops = { 2005 + static const struct vm_operations_struct amdgpu_ttm_vm_ops = { 2006 2006 .fault = amdgpu_ttm_fault, 2007 2007 .open = ttm_bo_vm_open, 2008 2008 .close = ttm_bo_vm_close,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
··· 60 60 }; 61 61 62 62 struct amdgpu_mman { 63 - struct ttm_bo_device bdev; 63 + struct ttm_device bdev; 64 64 bool initialized; 65 65 void __iomem *aper_base_kaddr; 66 66
+4 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 638 638 struct amdgpu_vm_bo_base *bo_base; 639 639 640 640 if (vm->bulk_moveable) { 641 - spin_lock(&ttm_bo_glob.lru_lock); 641 + spin_lock(&ttm_glob.lru_lock); 642 642 ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move); 643 - spin_unlock(&ttm_bo_glob.lru_lock); 643 + spin_unlock(&ttm_glob.lru_lock); 644 644 return; 645 645 } 646 646 647 647 memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move)); 648 648 649 - spin_lock(&ttm_bo_glob.lru_lock); 649 + spin_lock(&ttm_glob.lru_lock); 650 650 list_for_each_entry(bo_base, &vm->idle, vm_status) { 651 651 struct amdgpu_bo *bo = bo_base->bo; 652 652 ··· 660 660 &bo->shadow->tbo.mem, 661 661 &vm->lru_bulk_move); 662 662 } 663 - spin_unlock(&ttm_bo_glob.lru_lock); 663 + spin_unlock(&ttm_glob.lru_lock); 664 664 665 665 vm->bulk_moveable = true; 666 666 }
+2 -3
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 1862 1862 u32 tmp, viewport_w, viewport_h; 1863 1863 int r; 1864 1864 bool bypass_lut = false; 1865 - struct drm_format_name_buf format_name; 1866 1865 1867 1866 /* no fb bound */ 1868 1867 if (!atomic && !crtc->primary->fb) { ··· 1980 1981 #endif 1981 1982 break; 1982 1983 default: 1983 - DRM_ERROR("Unsupported screen format %s\n", 1984 - drm_get_format_name(target_fb->format->format, &format_name)); 1984 + DRM_ERROR("Unsupported screen format %p4cc\n", 1985 + &target_fb->format->format); 1985 1986 return -EINVAL; 1986 1987 } 1987 1988
+2 -3
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 1904 1904 u32 tmp, viewport_w, viewport_h; 1905 1905 int r; 1906 1906 bool bypass_lut = false; 1907 - struct drm_format_name_buf format_name; 1908 1907 1909 1908 /* no fb bound */ 1910 1909 if (!atomic && !crtc->primary->fb) { ··· 2022 2023 #endif 2023 2024 break; 2024 2025 default: 2025 - DRM_ERROR("Unsupported screen format %s\n", 2026 - drm_get_format_name(target_fb->format->format, &format_name)); 2026 + DRM_ERROR("Unsupported screen format %p4cc\n", 2027 + &target_fb->format->format); 2027 2028 return -EINVAL; 2028 2029 } 2029 2030
+2 -3
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 1820 1820 u32 viewport_w, viewport_h; 1821 1821 int r; 1822 1822 bool bypass_lut = false; 1823 - struct drm_format_name_buf format_name; 1824 1823 1825 1824 /* no fb bound */ 1826 1825 if (!atomic && !crtc->primary->fb) { ··· 1928 1929 #endif 1929 1930 break; 1930 1931 default: 1931 - DRM_ERROR("Unsupported screen format %s\n", 1932 - drm_get_format_name(target_fb->format->format, &format_name)); 1932 + DRM_ERROR("Unsupported screen format %p4cc\n", 1933 + &target_fb->format->format); 1933 1934 return -EINVAL; 1934 1935 } 1935 1936
+2 -3
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 1791 1791 u32 viewport_w, viewport_h; 1792 1792 int r; 1793 1793 bool bypass_lut = false; 1794 - struct drm_format_name_buf format_name; 1795 1794 1796 1795 /* no fb bound */ 1797 1796 if (!atomic && !crtc->primary->fb) { ··· 1901 1902 #endif 1902 1903 break; 1903 1904 default: 1904 - DRM_ERROR("Unsupported screen format %s\n", 1905 - drm_get_format_name(target_fb->format->format, &format_name)); 1905 + DRM_ERROR("Unsupported screen format %p4cc\n", 1906 + &target_fb->format->format); 1906 1907 return -EINVAL; 1907 1908 } 1908 1909
+16 -12
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4583 4583 const struct drm_framebuffer *fb = plane_state->fb; 4584 4584 const struct amdgpu_framebuffer *afb = 4585 4585 to_amdgpu_framebuffer(plane_state->fb); 4586 - struct drm_format_name_buf format_name; 4587 4586 int ret; 4588 4587 4589 4588 memset(plane_info, 0, sizeof(*plane_info)); ··· 4630 4631 break; 4631 4632 default: 4632 4633 DRM_ERROR( 4633 - "Unsupported screen format %s\n", 4634 - drm_get_format_name(fb->format->format, &format_name)); 4634 + "Unsupported screen format %p4cc\n", 4635 + &fb->format->format); 4635 4636 return -EINVAL; 4636 4637 } 4637 4638 ··· 6514 6515 } 6515 6516 6516 6517 static int dm_plane_atomic_check(struct drm_plane *plane, 6517 - struct drm_plane_state *state) 6518 + struct drm_atomic_state *state) 6518 6519 { 6520 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 6521 + plane); 6519 6522 struct amdgpu_device *adev = drm_to_adev(plane->dev); 6520 6523 struct dc *dc = adev->dm.dc; 6521 6524 struct dm_plane_state *dm_plane_state; ··· 6525 6524 struct drm_crtc_state *new_crtc_state; 6526 6525 int ret; 6527 6526 6528 - trace_amdgpu_dm_plane_atomic_check(state); 6527 + trace_amdgpu_dm_plane_atomic_check(new_plane_state); 6529 6528 6530 - dm_plane_state = to_dm_plane_state(state); 6529 + dm_plane_state = to_dm_plane_state(new_plane_state); 6531 6530 6532 6531 if (!dm_plane_state->dc_state) 6533 6532 return 0; 6534 6533 6535 6534 new_crtc_state = 6536 - drm_atomic_get_new_crtc_state(state->state, state->crtc); 6535 + drm_atomic_get_new_crtc_state(state, 6536 + new_plane_state->crtc); 6537 6537 if (!new_crtc_state) 6538 6538 return -EINVAL; 6539 6539 6540 - ret = dm_plane_helper_check_state(state, new_crtc_state); 6540 + ret = dm_plane_helper_check_state(new_plane_state, new_crtc_state); 6541 6541 if (ret) 6542 6542 return ret; 6543 6543 6544 - ret = fill_dc_scaling_info(state, &scaling_info); 6544 + ret = fill_dc_scaling_info(new_plane_state, &scaling_info); 6545 6545 if (ret) 6546 6546 return 
ret; 6547 6547 ··· 6553 6551 } 6554 6552 6555 6553 static int dm_plane_atomic_async_check(struct drm_plane *plane, 6556 - struct drm_plane_state *new_plane_state) 6554 + struct drm_atomic_state *state) 6557 6555 { 6558 6556 /* Only support async updates on cursor planes. */ 6559 6557 if (plane->type != DRM_PLANE_TYPE_CURSOR) ··· 6563 6561 } 6564 6562 6565 6563 static void dm_plane_atomic_async_update(struct drm_plane *plane, 6566 - struct drm_plane_state *new_state) 6564 + struct drm_atomic_state *state) 6567 6565 { 6566 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 6567 + plane); 6568 6568 struct drm_plane_state *old_state = 6569 - drm_atomic_get_old_plane_state(new_state->state, plane); 6569 + drm_atomic_get_old_plane_state(state, plane); 6570 6570 6571 6571 trace_amdgpu_dm_atomic_update_cursor(new_state); 6572 6572
-10
drivers/gpu/drm/arc/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - config DRM_ARCPGU 3 - tristate "ARC PGU" 4 - depends on DRM && OF 5 - select DRM_KMS_CMA_HELPER 6 - select DRM_KMS_HELPER 7 - help 8 - Choose this option if you have an ARC PGU controller. 9 - 10 - If M is selected the module will be called arcpgu.
-3
drivers/gpu/drm/arc/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - arcpgu-y := arcpgu_crtc.o arcpgu_hdmi.o arcpgu_sim.o arcpgu_drv.o 3 - obj-$(CONFIG_DRM_ARCPGU) += arcpgu.o
-37
drivers/gpu/drm/arc/arcpgu.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * ARC PGU DRM driver. 4 - * 5 - * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com) 6 - */ 7 - 8 - #ifndef _ARCPGU_H_ 9 - #define _ARCPGU_H_ 10 - 11 - struct arcpgu_drm_private { 12 - void __iomem *regs; 13 - struct clk *clk; 14 - struct drm_framebuffer *fb; 15 - struct drm_crtc crtc; 16 - struct drm_plane *plane; 17 - }; 18 - 19 - #define crtc_to_arcpgu_priv(x) container_of(x, struct arcpgu_drm_private, crtc) 20 - 21 - static inline void arc_pgu_write(struct arcpgu_drm_private *arcpgu, 22 - unsigned int reg, u32 value) 23 - { 24 - iowrite32(value, arcpgu->regs + reg); 25 - } 26 - 27 - static inline u32 arc_pgu_read(struct arcpgu_drm_private *arcpgu, 28 - unsigned int reg) 29 - { 30 - return ioread32(arcpgu->regs + reg); 31 - } 32 - 33 - int arc_pgu_setup_crtc(struct drm_device *dev); 34 - int arcpgu_drm_hdmi_init(struct drm_device *drm, struct device_node *np); 35 - int arcpgu_drm_sim_init(struct drm_device *drm, struct device_node *np); 36 - 37 - #endif
-217
drivers/gpu/drm/arc/arcpgu_crtc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * ARC PGU DRM driver. 4 - * 5 - * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com) 6 - */ 7 - 8 - #include <drm/drm_atomic_helper.h> 9 - #include <drm/drm_device.h> 10 - #include <drm/drm_fb_cma_helper.h> 11 - #include <drm/drm_gem_cma_helper.h> 12 - #include <drm/drm_plane_helper.h> 13 - #include <drm/drm_probe_helper.h> 14 - #include <linux/clk.h> 15 - #include <linux/platform_data/simplefb.h> 16 - 17 - #include "arcpgu.h" 18 - #include "arcpgu_regs.h" 19 - 20 - #define ENCODE_PGU_XY(x, y) ((((x) - 1) << 16) | ((y) - 1)) 21 - 22 - static const u32 arc_pgu_supported_formats[] = { 23 - DRM_FORMAT_RGB565, 24 - DRM_FORMAT_XRGB8888, 25 - DRM_FORMAT_ARGB8888, 26 - }; 27 - 28 - static void arc_pgu_set_pxl_fmt(struct drm_crtc *crtc) 29 - { 30 - struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 31 - const struct drm_framebuffer *fb = crtc->primary->state->fb; 32 - uint32_t pixel_format = fb->format->format; 33 - u32 format = DRM_FORMAT_INVALID; 34 - int i; 35 - u32 reg_ctrl; 36 - 37 - for (i = 0; i < ARRAY_SIZE(arc_pgu_supported_formats); i++) { 38 - if (arc_pgu_supported_formats[i] == pixel_format) 39 - format = arc_pgu_supported_formats[i]; 40 - } 41 - 42 - if (WARN_ON(format == DRM_FORMAT_INVALID)) 43 - return; 44 - 45 - reg_ctrl = arc_pgu_read(arcpgu, ARCPGU_REG_CTRL); 46 - if (format == DRM_FORMAT_RGB565) 47 - reg_ctrl &= ~ARCPGU_MODE_XRGB8888; 48 - else 49 - reg_ctrl |= ARCPGU_MODE_XRGB8888; 50 - arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, reg_ctrl); 51 - } 52 - 53 - static const struct drm_crtc_funcs arc_pgu_crtc_funcs = { 54 - .destroy = drm_crtc_cleanup, 55 - .set_config = drm_atomic_helper_set_config, 56 - .page_flip = drm_atomic_helper_page_flip, 57 - .reset = drm_atomic_helper_crtc_reset, 58 - .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 59 - .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 60 - }; 61 - 62 - static enum drm_mode_status 
arc_pgu_crtc_mode_valid(struct drm_crtc *crtc, 63 - const struct drm_display_mode *mode) 64 - { 65 - struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 66 - long rate, clk_rate = mode->clock * 1000; 67 - long diff = clk_rate / 200; /* +-0.5% allowed by HDMI spec */ 68 - 69 - rate = clk_round_rate(arcpgu->clk, clk_rate); 70 - if ((max(rate, clk_rate) - min(rate, clk_rate) < diff) && (rate > 0)) 71 - return MODE_OK; 72 - 73 - return MODE_NOCLOCK; 74 - } 75 - 76 - static void arc_pgu_crtc_mode_set_nofb(struct drm_crtc *crtc) 77 - { 78 - struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 79 - struct drm_display_mode *m = &crtc->state->adjusted_mode; 80 - u32 val; 81 - 82 - arc_pgu_write(arcpgu, ARCPGU_REG_FMT, 83 - ENCODE_PGU_XY(m->crtc_htotal, m->crtc_vtotal)); 84 - 85 - arc_pgu_write(arcpgu, ARCPGU_REG_HSYNC, 86 - ENCODE_PGU_XY(m->crtc_hsync_start - m->crtc_hdisplay, 87 - m->crtc_hsync_end - m->crtc_hdisplay)); 88 - 89 - arc_pgu_write(arcpgu, ARCPGU_REG_VSYNC, 90 - ENCODE_PGU_XY(m->crtc_vsync_start - m->crtc_vdisplay, 91 - m->crtc_vsync_end - m->crtc_vdisplay)); 92 - 93 - arc_pgu_write(arcpgu, ARCPGU_REG_ACTIVE, 94 - ENCODE_PGU_XY(m->crtc_hblank_end - m->crtc_hblank_start, 95 - m->crtc_vblank_end - m->crtc_vblank_start)); 96 - 97 - val = arc_pgu_read(arcpgu, ARCPGU_REG_CTRL); 98 - 99 - if (m->flags & DRM_MODE_FLAG_PVSYNC) 100 - val |= ARCPGU_CTRL_VS_POL_MASK << ARCPGU_CTRL_VS_POL_OFST; 101 - else 102 - val &= ~(ARCPGU_CTRL_VS_POL_MASK << ARCPGU_CTRL_VS_POL_OFST); 103 - 104 - if (m->flags & DRM_MODE_FLAG_PHSYNC) 105 - val |= ARCPGU_CTRL_HS_POL_MASK << ARCPGU_CTRL_HS_POL_OFST; 106 - else 107 - val &= ~(ARCPGU_CTRL_HS_POL_MASK << ARCPGU_CTRL_HS_POL_OFST); 108 - 109 - arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, val); 110 - arc_pgu_write(arcpgu, ARCPGU_REG_STRIDE, 0); 111 - arc_pgu_write(arcpgu, ARCPGU_REG_START_SET, 1); 112 - 113 - arc_pgu_set_pxl_fmt(crtc); 114 - 115 - clk_set_rate(arcpgu->clk, m->crtc_clock * 1000); 116 - } 117 - 118 - static void 
arc_pgu_crtc_atomic_enable(struct drm_crtc *crtc, 119 - struct drm_atomic_state *state) 120 - { 121 - struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 122 - 123 - clk_prepare_enable(arcpgu->clk); 124 - arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, 125 - arc_pgu_read(arcpgu, ARCPGU_REG_CTRL) | 126 - ARCPGU_CTRL_ENABLE_MASK); 127 - } 128 - 129 - static void arc_pgu_crtc_atomic_disable(struct drm_crtc *crtc, 130 - struct drm_atomic_state *state) 131 - { 132 - struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 133 - 134 - clk_disable_unprepare(arcpgu->clk); 135 - arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, 136 - arc_pgu_read(arcpgu, ARCPGU_REG_CTRL) & 137 - ~ARCPGU_CTRL_ENABLE_MASK); 138 - } 139 - 140 - static const struct drm_crtc_helper_funcs arc_pgu_crtc_helper_funcs = { 141 - .mode_valid = arc_pgu_crtc_mode_valid, 142 - .mode_set_nofb = arc_pgu_crtc_mode_set_nofb, 143 - .atomic_enable = arc_pgu_crtc_atomic_enable, 144 - .atomic_disable = arc_pgu_crtc_atomic_disable, 145 - }; 146 - 147 - static void arc_pgu_plane_atomic_update(struct drm_plane *plane, 148 - struct drm_plane_state *state) 149 - { 150 - struct arcpgu_drm_private *arcpgu; 151 - struct drm_gem_cma_object *gem; 152 - 153 - if (!plane->state->crtc || !plane->state->fb) 154 - return; 155 - 156 - arcpgu = crtc_to_arcpgu_priv(plane->state->crtc); 157 - gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0); 158 - arc_pgu_write(arcpgu, ARCPGU_REG_BUF0_ADDR, gem->paddr); 159 - } 160 - 161 - static const struct drm_plane_helper_funcs arc_pgu_plane_helper_funcs = { 162 - .atomic_update = arc_pgu_plane_atomic_update, 163 - }; 164 - 165 - static const struct drm_plane_funcs arc_pgu_plane_funcs = { 166 - .update_plane = drm_atomic_helper_update_plane, 167 - .disable_plane = drm_atomic_helper_disable_plane, 168 - .destroy = drm_plane_cleanup, 169 - .reset = drm_atomic_helper_plane_reset, 170 - .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 171 - .atomic_destroy_state = 
drm_atomic_helper_plane_destroy_state, 172 - }; 173 - 174 - static struct drm_plane *arc_pgu_plane_init(struct drm_device *drm) 175 - { 176 - struct arcpgu_drm_private *arcpgu = drm->dev_private; 177 - struct drm_plane *plane = NULL; 178 - int ret; 179 - 180 - plane = devm_kzalloc(drm->dev, sizeof(*plane), GFP_KERNEL); 181 - if (!plane) 182 - return ERR_PTR(-ENOMEM); 183 - 184 - ret = drm_universal_plane_init(drm, plane, 0xff, &arc_pgu_plane_funcs, 185 - arc_pgu_supported_formats, 186 - ARRAY_SIZE(arc_pgu_supported_formats), 187 - NULL, 188 - DRM_PLANE_TYPE_PRIMARY, NULL); 189 - if (ret) 190 - return ERR_PTR(ret); 191 - 192 - drm_plane_helper_add(plane, &arc_pgu_plane_helper_funcs); 193 - arcpgu->plane = plane; 194 - 195 - return plane; 196 - } 197 - 198 - int arc_pgu_setup_crtc(struct drm_device *drm) 199 - { 200 - struct arcpgu_drm_private *arcpgu = drm->dev_private; 201 - struct drm_plane *primary; 202 - int ret; 203 - 204 - primary = arc_pgu_plane_init(drm); 205 - if (IS_ERR(primary)) 206 - return PTR_ERR(primary); 207 - 208 - ret = drm_crtc_init_with_planes(drm, &arcpgu->crtc, primary, NULL, 209 - &arc_pgu_crtc_funcs, NULL); 210 - if (ret) { 211 - drm_plane_cleanup(primary); 212 - return ret; 213 - } 214 - 215 - drm_crtc_helper_add(&arcpgu->crtc, &arc_pgu_crtc_helper_funcs); 216 - return 0; 217 - }
-224
drivers/gpu/drm/arc/arcpgu_drv.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * ARC PGU DRM driver. 4 - * 5 - * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com) 6 - */ 7 - 8 - #include <linux/clk.h> 9 - #include <drm/drm_atomic_helper.h> 10 - #include <drm/drm_debugfs.h> 11 - #include <drm/drm_device.h> 12 - #include <drm/drm_drv.h> 13 - #include <drm/drm_fb_cma_helper.h> 14 - #include <drm/drm_fb_helper.h> 15 - #include <drm/drm_gem_cma_helper.h> 16 - #include <drm/drm_gem_framebuffer_helper.h> 17 - #include <drm/drm_of.h> 18 - #include <drm/drm_probe_helper.h> 19 - #include <linux/dma-mapping.h> 20 - #include <linux/module.h> 21 - #include <linux/of_reserved_mem.h> 22 - #include <linux/platform_device.h> 23 - 24 - #include "arcpgu.h" 25 - #include "arcpgu_regs.h" 26 - 27 - static const struct drm_mode_config_funcs arcpgu_drm_modecfg_funcs = { 28 - .fb_create = drm_gem_fb_create, 29 - .atomic_check = drm_atomic_helper_check, 30 - .atomic_commit = drm_atomic_helper_commit, 31 - }; 32 - 33 - static void arcpgu_setup_mode_config(struct drm_device *drm) 34 - { 35 - drm_mode_config_init(drm); 36 - drm->mode_config.min_width = 0; 37 - drm->mode_config.min_height = 0; 38 - drm->mode_config.max_width = 1920; 39 - drm->mode_config.max_height = 1080; 40 - drm->mode_config.funcs = &arcpgu_drm_modecfg_funcs; 41 - } 42 - 43 - DEFINE_DRM_GEM_CMA_FOPS(arcpgu_drm_ops); 44 - 45 - static int arcpgu_load(struct drm_device *drm) 46 - { 47 - struct platform_device *pdev = to_platform_device(drm->dev); 48 - struct arcpgu_drm_private *arcpgu; 49 - struct device_node *encoder_node = NULL, *endpoint_node = NULL; 50 - struct resource *res; 51 - int ret; 52 - 53 - arcpgu = devm_kzalloc(&pdev->dev, sizeof(*arcpgu), GFP_KERNEL); 54 - if (arcpgu == NULL) 55 - return -ENOMEM; 56 - 57 - drm->dev_private = arcpgu; 58 - 59 - arcpgu->clk = devm_clk_get(drm->dev, "pxlclk"); 60 - if (IS_ERR(arcpgu->clk)) 61 - return PTR_ERR(arcpgu->clk); 62 - 63 - arcpgu_setup_mode_config(drm); 64 - 65 - res = 
platform_get_resource(pdev, IORESOURCE_MEM, 0); 66 - arcpgu->regs = devm_ioremap_resource(&pdev->dev, res); 67 - if (IS_ERR(arcpgu->regs)) 68 - return PTR_ERR(arcpgu->regs); 69 - 70 - dev_info(drm->dev, "arc_pgu ID: 0x%x\n", 71 - arc_pgu_read(arcpgu, ARCPGU_REG_ID)); 72 - 73 - /* Get the optional framebuffer memory resource */ 74 - ret = of_reserved_mem_device_init(drm->dev); 75 - if (ret && ret != -ENODEV) 76 - return ret; 77 - 78 - if (dma_set_mask_and_coherent(drm->dev, DMA_BIT_MASK(32))) 79 - return -ENODEV; 80 - 81 - if (arc_pgu_setup_crtc(drm) < 0) 82 - return -ENODEV; 83 - 84 - /* 85 - * There is only one output port inside each device. It is linked with 86 - * encoder endpoint. 87 - */ 88 - endpoint_node = of_graph_get_next_endpoint(pdev->dev.of_node, NULL); 89 - if (endpoint_node) { 90 - encoder_node = of_graph_get_remote_port_parent(endpoint_node); 91 - of_node_put(endpoint_node); 92 - } 93 - 94 - if (encoder_node) { 95 - ret = arcpgu_drm_hdmi_init(drm, encoder_node); 96 - of_node_put(encoder_node); 97 - if (ret < 0) 98 - return ret; 99 - } else { 100 - dev_info(drm->dev, "no encoder found. 
Assumed virtual LCD on simulation platform\n"); 101 - ret = arcpgu_drm_sim_init(drm, NULL); 102 - if (ret < 0) 103 - return ret; 104 - } 105 - 106 - drm_mode_config_reset(drm); 107 - drm_kms_helper_poll_init(drm); 108 - 109 - platform_set_drvdata(pdev, drm); 110 - return 0; 111 - } 112 - 113 - static int arcpgu_unload(struct drm_device *drm) 114 - { 115 - drm_kms_helper_poll_fini(drm); 116 - drm_atomic_helper_shutdown(drm); 117 - drm_mode_config_cleanup(drm); 118 - 119 - return 0; 120 - } 121 - 122 - #ifdef CONFIG_DEBUG_FS 123 - static int arcpgu_show_pxlclock(struct seq_file *m, void *arg) 124 - { 125 - struct drm_info_node *node = (struct drm_info_node *)m->private; 126 - struct drm_device *drm = node->minor->dev; 127 - struct arcpgu_drm_private *arcpgu = drm->dev_private; 128 - unsigned long clkrate = clk_get_rate(arcpgu->clk); 129 - unsigned long mode_clock = arcpgu->crtc.mode.crtc_clock * 1000; 130 - 131 - seq_printf(m, "hw : %lu\n", clkrate); 132 - seq_printf(m, "mode: %lu\n", mode_clock); 133 - return 0; 134 - } 135 - 136 - static struct drm_info_list arcpgu_debugfs_list[] = { 137 - { "clocks", arcpgu_show_pxlclock, 0 }, 138 - }; 139 - 140 - static void arcpgu_debugfs_init(struct drm_minor *minor) 141 - { 142 - drm_debugfs_create_files(arcpgu_debugfs_list, 143 - ARRAY_SIZE(arcpgu_debugfs_list), 144 - minor->debugfs_root, minor); 145 - } 146 - #endif 147 - 148 - static const struct drm_driver arcpgu_drm_driver = { 149 - .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC, 150 - .name = "arcpgu", 151 - .desc = "ARC PGU Controller", 152 - .date = "20160219", 153 - .major = 1, 154 - .minor = 0, 155 - .patchlevel = 0, 156 - .fops = &arcpgu_drm_ops, 157 - DRM_GEM_CMA_DRIVER_OPS, 158 - #ifdef CONFIG_DEBUG_FS 159 - .debugfs_init = arcpgu_debugfs_init, 160 - #endif 161 - }; 162 - 163 - static int arcpgu_probe(struct platform_device *pdev) 164 - { 165 - struct drm_device *drm; 166 - int ret; 167 - 168 - drm = drm_dev_alloc(&arcpgu_drm_driver, &pdev->dev); 
169 - if (IS_ERR(drm)) 170 - return PTR_ERR(drm); 171 - 172 - ret = arcpgu_load(drm); 173 - if (ret) 174 - goto err_unref; 175 - 176 - ret = drm_dev_register(drm, 0); 177 - if (ret) 178 - goto err_unload; 179 - 180 - drm_fbdev_generic_setup(drm, 16); 181 - 182 - return 0; 183 - 184 - err_unload: 185 - arcpgu_unload(drm); 186 - 187 - err_unref: 188 - drm_dev_put(drm); 189 - 190 - return ret; 191 - } 192 - 193 - static int arcpgu_remove(struct platform_device *pdev) 194 - { 195 - struct drm_device *drm = platform_get_drvdata(pdev); 196 - 197 - drm_dev_unregister(drm); 198 - arcpgu_unload(drm); 199 - drm_dev_put(drm); 200 - 201 - return 0; 202 - } 203 - 204 - static const struct of_device_id arcpgu_of_table[] = { 205 - {.compatible = "snps,arcpgu"}, 206 - {} 207 - }; 208 - 209 - MODULE_DEVICE_TABLE(of, arcpgu_of_table); 210 - 211 - static struct platform_driver arcpgu_platform_driver = { 212 - .probe = arcpgu_probe, 213 - .remove = arcpgu_remove, 214 - .driver = { 215 - .name = "arcpgu", 216 - .of_match_table = arcpgu_of_table, 217 - }, 218 - }; 219 - 220 - module_platform_driver(arcpgu_platform_driver); 221 - 222 - MODULE_AUTHOR("Carlos Palminha <palminha@synopsys.com>"); 223 - MODULE_DESCRIPTION("ARC PGU DRM driver"); 224 - MODULE_LICENSE("GPL");
-48
drivers/gpu/drm/arc/arcpgu_hdmi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only
2 - /*
3 -  * ARC PGU DRM driver.
4 -  *
5 -  * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
6 -  */
7 -
8 - #include <drm/drm_bridge.h>
9 - #include <drm/drm_crtc.h>
10 - #include <drm/drm_encoder.h>
11 - #include <drm/drm_device.h>
12 -
13 - #include "arcpgu.h"
14 -
15 - static struct drm_encoder_funcs arcpgu_drm_encoder_funcs = {
16 - 	.destroy = drm_encoder_cleanup,
17 - };
18 -
19 - int arcpgu_drm_hdmi_init(struct drm_device *drm, struct device_node *np)
20 - {
21 - 	struct drm_encoder *encoder;
22 - 	struct drm_bridge *bridge;
23 -
24 - 	int ret = 0;
25 -
26 - 	encoder = devm_kzalloc(drm->dev, sizeof(*encoder), GFP_KERNEL);
27 - 	if (encoder == NULL)
28 - 		return -ENOMEM;
29 -
30 - 	/* Locate drm bridge from the hdmi encoder DT node */
31 - 	bridge = of_drm_find_bridge(np);
32 - 	if (!bridge)
33 - 		return -EPROBE_DEFER;
34 -
35 - 	encoder->possible_crtcs = 1;
36 - 	encoder->possible_clones = 0;
37 - 	ret = drm_encoder_init(drm, encoder, &arcpgu_drm_encoder_funcs,
38 - 			       DRM_MODE_ENCODER_TMDS, NULL);
39 - 	if (ret)
40 - 		return ret;
41 -
42 - 	/* Link drm_bridge to encoder */
43 - 	ret = drm_bridge_attach(encoder, bridge, NULL, 0);
44 - 	if (ret)
45 - 		drm_encoder_cleanup(encoder);
46 -
47 - 	return ret;
48 - }
-31
drivers/gpu/drm/arc/arcpgu_regs.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */
2 - /*
3 -  * ARC PGU DRM driver.
4 -  *
5 -  * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
6 -  */
7 -
8 - #ifndef _ARC_PGU_REGS_H_
9 - #define _ARC_PGU_REGS_H_
10 -
11 - #define ARCPGU_REG_CTRL	0x00
12 - #define ARCPGU_REG_STAT	0x04
13 - #define ARCPGU_REG_FMT	0x10
14 - #define ARCPGU_REG_HSYNC	0x14
15 - #define ARCPGU_REG_VSYNC	0x18
16 - #define ARCPGU_REG_ACTIVE	0x1c
17 - #define ARCPGU_REG_BUF0_ADDR	0x40
18 - #define ARCPGU_REG_STRIDE	0x50
19 - #define ARCPGU_REG_START_SET	0x84
20 -
21 - #define ARCPGU_REG_ID	0x3FC
22 -
23 - #define ARCPGU_CTRL_ENABLE_MASK	0x02
24 - #define ARCPGU_CTRL_VS_POL_MASK	0x1
25 - #define ARCPGU_CTRL_VS_POL_OFST	0x3
26 - #define ARCPGU_CTRL_HS_POL_MASK	0x1
27 - #define ARCPGU_CTRL_HS_POL_OFST	0x4
28 - #define ARCPGU_MODE_XRGB8888	BIT(2)
29 - #define ARCPGU_STAT_BUSY_MASK	0x02
30 -
31 - #endif
-108
drivers/gpu/drm/arc/arcpgu_sim.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only
2 - /*
3 -  * ARC PGU DRM driver.
4 -  *
5 -  * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
6 -  */
7 -
8 - #include <drm/drm_atomic_helper.h>
9 - #include <drm/drm_device.h>
10 - #include <drm/drm_probe_helper.h>
11 -
12 - #include "arcpgu.h"
13 -
14 - #define XRES_DEF 640
15 - #define YRES_DEF 480
16 -
17 - #define XRES_MAX 8192
18 - #define YRES_MAX 8192
19 -
20 -
21 - struct arcpgu_drm_connector {
22 - 	struct drm_connector connector;
23 - };
24 -
25 - static int arcpgu_drm_connector_get_modes(struct drm_connector *connector)
26 - {
27 - 	int count;
28 -
29 - 	count = drm_add_modes_noedid(connector, XRES_MAX, YRES_MAX);
30 - 	drm_set_preferred_mode(connector, XRES_DEF, YRES_DEF);
31 - 	return count;
32 - }
33 -
34 - static void arcpgu_drm_connector_destroy(struct drm_connector *connector)
35 - {
36 - 	drm_connector_unregister(connector);
37 - 	drm_connector_cleanup(connector);
38 - }
39 -
40 - static const struct drm_connector_helper_funcs
41 - arcpgu_drm_connector_helper_funcs = {
42 - 	.get_modes = arcpgu_drm_connector_get_modes,
43 - };
44 -
45 - static const struct drm_connector_funcs arcpgu_drm_connector_funcs = {
46 - 	.reset = drm_atomic_helper_connector_reset,
47 - 	.fill_modes = drm_helper_probe_single_connector_modes,
48 - 	.destroy = arcpgu_drm_connector_destroy,
49 - 	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
50 - 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
51 - };
52 -
53 - static struct drm_encoder_funcs arcpgu_drm_encoder_funcs = {
54 - 	.destroy = drm_encoder_cleanup,
55 - };
56 -
57 - int arcpgu_drm_sim_init(struct drm_device *drm, struct device_node *np)
58 - {
59 - 	struct arcpgu_drm_connector *arcpgu_connector;
60 - 	struct drm_encoder *encoder;
61 - 	struct drm_connector *connector;
62 - 	int ret;
63 -
64 - 	encoder = devm_kzalloc(drm->dev, sizeof(*encoder), GFP_KERNEL);
65 - 	if (encoder == NULL)
66 - 		return -ENOMEM;
67 -
68 - 	encoder->possible_crtcs = 1;
69 - 	encoder->possible_clones = 0;
70 -
71 - 	ret = drm_encoder_init(drm, encoder, &arcpgu_drm_encoder_funcs,
72 - 			       DRM_MODE_ENCODER_VIRTUAL, NULL);
73 - 	if (ret)
74 - 		return ret;
75 -
76 - 	arcpgu_connector = devm_kzalloc(drm->dev, sizeof(*arcpgu_connector),
77 - 					GFP_KERNEL);
78 - 	if (!arcpgu_connector) {
79 - 		ret = -ENOMEM;
80 - 		goto error_encoder_cleanup;
81 - 	}
82 -
83 - 	connector = &arcpgu_connector->connector;
84 - 	drm_connector_helper_add(connector, &arcpgu_drm_connector_helper_funcs);
85 -
86 - 	ret = drm_connector_init(drm, connector, &arcpgu_drm_connector_funcs,
87 - 				 DRM_MODE_CONNECTOR_VIRTUAL);
88 - 	if (ret < 0) {
89 - 		dev_err(drm->dev, "failed to initialize drm connector\n");
90 - 		goto error_encoder_cleanup;
91 - 	}
92 -
93 - 	ret = drm_connector_attach_encoder(connector, encoder);
94 - 	if (ret < 0) {
95 - 		dev_err(drm->dev, "could not attach connector to encoder\n");
96 - 		drm_connector_unregister(connector);
97 - 		goto error_connector_cleanup;
98 - 	}
99 -
100 - 	return 0;
101 -
102 - error_connector_cleanup:
103 - 	drm_connector_cleanup(connector);
104 -
105 - error_encoder_cleanup:
106 - 	drm_encoder_cleanup(encoder);
107 - 	return ret;
108 - }
-11
drivers/gpu/drm/arm/display/komeda/komeda_format_caps.h
··· 82 82 83 83 extern u64 komeda_supported_modifiers[]; 84 84 85 - static inline const char *komeda_get_format_name(u32 fourcc, u64 modifier) 86 - { 87 - struct drm_format_name_buf buf; 88 - static char name[64]; 89 - 90 - snprintf(name, sizeof(name), "%s with modifier: 0x%llx.", 91 - drm_get_format_name(fourcc, &buf), modifier); 92 - 93 - return name; 94 - } 95 - 96 85 const struct komeda_format_caps * 97 86 komeda_get_format_caps(struct komeda_format_caps_table *table, 98 87 u32 fourcc, u64 modifier);
+2 -2
drivers/gpu/drm/arm/display/komeda/komeda_framebuffer.c
··· 276 276 supported = komeda_format_mod_supported(&mdev->fmt_tbl, layer_type, 277 277 fourcc, modifier, rot); 278 278 if (!supported) 279 - DRM_DEBUG_ATOMIC("Layer TYPE: %d doesn't support fb FMT: %s.\n", 280 - layer_type, komeda_get_format_name(fourcc, modifier)); 279 + DRM_DEBUG_ATOMIC("Layer TYPE: %d doesn't support fb FMT: %p4cc with modifier: 0x%llx.\n", 280 + layer_type, &fourcc, modifier); 281 281 282 282 return supported; 283 283 }
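The komeda hunk above replaces `drm_get_format_name()` with the new `%p4cc` printk specifier, which prints a FourCC as its four characters plus endianness and hex value. The sketch below is a userspace approximation of that output format, written from the documented examples (`BG12 little-endian (0x32314742)` and so on); `format_fourcc()` and `check_fourcc()` are illustrative helpers, not the kernel implementation.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Userspace approximation of what %p4cc prints for a DRM/V4L2
 * FourCC. Bit 31 is the DRM big-endian flag, so each character is
 * masked to 7 bits before printing, which is why the big-endian
 * NV12 example still shows readable characters.
 */
static void format_fourcc(uint32_t fourcc, char *buf, size_t len)
{
	char cc[5];
	int i;

	for (i = 0; i < 4; i++) {
		char c = (fourcc >> (8 * i)) & 0x7f;

		/* Replace non-printable bytes, as a defensive choice. */
		cc[i] = (c >= ' ' && c <= '~') ? c : '.';
	}
	cc[4] = '\0';

	snprintf(buf, len, "%s %s (0x%08x)", cc,
		 (fourcc & 0x80000000u) ? "big-endian" : "little-endian",
		 fourcc);
}

/* Formats a FourCC and compares against an expected string. */
static int check_fourcc(uint32_t fourcc, const char *expect)
{
	char buf[64];

	format_fourcc(fourcc, buf, sizeof(buf));
	return strcmp(buf, expect) == 0;
}
```

Note that `%p4cc` is passed the FourCC *by reference* (`&fourcc`), as the converted `DRM_DEBUG_ATOMIC()` call in the hunk shows.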
+3
drivers/gpu/drm/arm/display/komeda/komeda_kms.c
··· 73 73 static void komeda_kms_commit_tail(struct drm_atomic_state *old_state) 74 74 { 75 75 struct drm_device *dev = old_state->dev; 76 + bool fence_cookie = dma_fence_begin_signalling(); 76 77 77 78 drm_atomic_helper_commit_modeset_disables(dev, old_state); 78 79 ··· 85 84 drm_atomic_helper_commit_hw_done(old_state); 86 85 87 86 drm_atomic_helper_wait_for_flip_done(dev, old_state); 87 + 88 + dma_fence_end_signalling(fence_cookie); 88 89 89 90 drm_atomic_helper_cleanup_planes(dev, old_state); 90 91 }
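The komeda (and later malidp) `commit_tail` hunks wrap the commit in `dma_fence_begin_signalling()` / `dma_fence_end_signalling()`. The real helpers arm a lockdep annotation and return a boolean cookie so only the outermost section owns it. The toy below models just that cookie/nesting protocol with a counter; the `_toy` suffix marks that these are illustrative stand-ins, not the kernel functions.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the dma_fence signalling-section cookie protocol.
 * A depth counter stands in for the lockdep state: only the
 * outermost begin() gets a true cookie, mirroring how the real
 * helper avoids double-acquiring the annotation when nested.
 */
static int signalling_depth;

static bool dma_fence_begin_signalling_toy(void)
{
	/* Only the outermost caller "owns" the annotation. */
	return signalling_depth++ == 0;
}

static void dma_fence_end_signalling_toy(bool cookie)
{
	/* The real helper only releases lockdep for a true cookie. */
	(void)cookie;
	signalling_depth--;
}

/* Nested sections: only the outer cookie is true, and the
 * depth returns to zero once both sections end. */
static int nested_demo(void)
{
	bool outer = dma_fence_begin_signalling_toy();
	bool inner = dma_fence_begin_signalling_toy();

	dma_fence_end_signalling_toy(inner);
	dma_fence_end_signalling_toy(outer);

	return outer && !inner && signalling_depth == 0;
}
```

In the drivers, the point of the annotation is that everything between begin and end is on the fence-signalling critical path, so lockdep can flag sleeps or lock acquisitions there that could deadlock fence waiters.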
+11 -10
drivers/gpu/drm/arm/display/komeda/komeda_plane.c
··· 49 49 50 50 dflow->rot = drm_rotation_simplify(st->rotation, caps->supported_rots); 51 51 if (!has_bits(dflow->rot, caps->supported_rots)) { 52 - DRM_DEBUG_ATOMIC("rotation(0x%x) isn't supported by %s.\n", 53 - dflow->rot, 54 - komeda_get_format_name(caps->fourcc, 55 - fb->modifier)); 52 + DRM_DEBUG_ATOMIC("rotation(0x%x) isn't supported by %p4cc with modifier: 0x%llx.\n", 53 + dflow->rot, &caps->fourcc, fb->modifier); 56 54 return -EINVAL; 57 55 } 58 56 ··· 69 71 */ 70 72 static int 71 73 komeda_plane_atomic_check(struct drm_plane *plane, 72 - struct drm_plane_state *state) 74 + struct drm_atomic_state *state) 73 75 { 76 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 77 + plane); 74 78 struct komeda_plane *kplane = to_kplane(plane); 75 - struct komeda_plane_state *kplane_st = to_kplane_st(state); 79 + struct komeda_plane_state *kplane_st = to_kplane_st(new_plane_state); 76 80 struct komeda_layer *layer = kplane->layer; 77 81 struct drm_crtc_state *crtc_st; 78 82 struct komeda_crtc_state *kcrtc_st; 79 83 struct komeda_data_flow_cfg dflow; 80 84 int err; 81 85 82 - if (!state->crtc || !state->fb) 86 + if (!new_plane_state->crtc || !new_plane_state->fb) 83 87 return 0; 84 88 85 - crtc_st = drm_atomic_get_crtc_state(state->state, state->crtc); 89 + crtc_st = drm_atomic_get_crtc_state(state, 90 + new_plane_state->crtc); 86 91 if (IS_ERR(crtc_st) || !crtc_st->enable) { 87 92 DRM_DEBUG_ATOMIC("Cannot update plane on a disabled CRTC.\n"); 88 93 return -EINVAL; ··· 97 96 98 97 kcrtc_st = to_kcrtc_st(crtc_st); 99 98 100 - err = komeda_plane_init_data_flow(state, kcrtc_st, &dflow); 99 + err = komeda_plane_init_data_flow(new_plane_state, kcrtc_st, &dflow); 101 100 if (err) 102 101 return err; 103 102 ··· 116 115 */ 117 116 static void 118 117 komeda_plane_atomic_update(struct drm_plane *plane, 119 - struct drm_plane_state *old_state) 118 + struct drm_atomic_state *state) 120 119 { 121 120 } 122 121
+18 -12
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 229 229 }; 230 230 231 231 static int hdlcd_plane_atomic_check(struct drm_plane *plane, 232 - struct drm_plane_state *state) 232 + struct drm_atomic_state *state) 233 233 { 234 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 235 + plane); 234 236 int i; 235 237 struct drm_crtc *crtc; 236 238 struct drm_crtc_state *crtc_state; 237 - u32 src_h = state->src_h >> 16; 239 + u32 src_h = new_plane_state->src_h >> 16; 238 240 239 241 /* only the HDLCD_REG_FB_LINE_COUNT register has a limit */ 240 242 if (src_h >= HDLCD_MAX_YRES) { ··· 244 242 return -EINVAL; 245 243 } 246 244 247 - for_each_new_crtc_in_state(state->state, crtc, crtc_state, i) { 245 + for_each_new_crtc_in_state(state, crtc, crtc_state, 246 + i) { 248 247 /* we cannot disable the plane while the CRTC is active */ 249 - if (!state->fb && crtc_state->active) 248 + if (!new_plane_state->fb && crtc_state->active) 250 249 return -EINVAL; 251 - return drm_atomic_helper_check_plane_state(state, crtc_state, 252 - DRM_PLANE_HELPER_NO_SCALING, 253 - DRM_PLANE_HELPER_NO_SCALING, 254 - false, true); 250 + return drm_atomic_helper_check_plane_state(new_plane_state, 251 + crtc_state, 252 + DRM_PLANE_HELPER_NO_SCALING, 253 + DRM_PLANE_HELPER_NO_SCALING, 254 + false, true); 255 255 } 256 256 257 257 return 0; 258 258 } 259 259 260 260 static void hdlcd_plane_atomic_update(struct drm_plane *plane, 261 - struct drm_plane_state *state) 261 + struct drm_atomic_state *state) 262 262 { 263 - struct drm_framebuffer *fb = plane->state->fb; 263 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 264 + plane); 265 + struct drm_framebuffer *fb = new_plane_state->fb; 264 266 struct hdlcd_drm_private *hdlcd; 265 267 u32 dest_h; 266 268 dma_addr_t scanout_start; ··· 272 266 if (!fb) 273 267 return; 274 268 275 - dest_h = drm_rect_height(&plane->state->dst); 276 - scanout_start = drm_fb_cma_get_gem_addr(fb, plane->state, 0); 269 + dest_h = 
drm_rect_height(&new_plane_state->dst); 270 + scanout_start = drm_fb_cma_get_gem_addr(fb, new_plane_state, 0); 277 271 278 272 hdlcd = plane->dev->dev_private; 279 273 hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, fb->pitches[0]);
+3
drivers/gpu/drm/arm/malidp_drv.c
··· 234 234 struct drm_crtc *crtc; 235 235 struct drm_crtc_state *old_crtc_state; 236 236 int i; 237 + bool fence_cookie = dma_fence_begin_signalling(); 237 238 238 239 pm_runtime_get_sync(drm->dev); 239 240 ··· 260 259 drm_atomic_helper_commit_modeset_enables(drm, state); 261 260 262 261 malidp_atomic_commit_hw_done(state); 262 + 263 + dma_fence_end_signalling(fence_cookie); 263 264 264 265 pm_runtime_put(drm->dev); 265 266
+2 -5
drivers/gpu/drm/arm/malidp_mw.c
··· 151 151 malidp_hw_get_format_id(&malidp->dev->hw->map, SE_MEMWRITE, 152 152 fb->format->format, !!fb->modifier); 153 153 if (mw_state->format == MALIDP_INVALID_FORMAT_ID) { 154 - struct drm_format_name_buf format_name; 155 - 156 - DRM_DEBUG_KMS("Invalid pixel format %s\n", 157 - drm_get_format_name(fb->format->format, 158 - &format_name)); 154 + DRM_DEBUG_KMS("Invalid pixel format %p4cc\n", 155 + &fb->format->format); 159 156 return -EINVAL; 160 157 } 161 158
+42 -37
drivers/gpu/drm/arm/malidp_planes.c
··· 502 502 } 503 503 504 504 static int malidp_de_plane_check(struct drm_plane *plane, 505 - struct drm_plane_state *state) 505 + struct drm_atomic_state *state) 506 506 { 507 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 508 + plane); 507 509 struct malidp_plane *mp = to_malidp_plane(plane); 508 - struct malidp_plane_state *ms = to_malidp_plane_state(state); 509 - bool rotated = state->rotation & MALIDP_ROTATED_MASK; 510 + struct malidp_plane_state *ms = to_malidp_plane_state(new_plane_state); 511 + bool rotated = new_plane_state->rotation & MALIDP_ROTATED_MASK; 510 512 struct drm_framebuffer *fb; 511 - u16 pixel_alpha = state->pixel_blend_mode; 513 + u16 pixel_alpha = new_plane_state->pixel_blend_mode; 512 514 int i, ret; 513 515 unsigned int block_w, block_h; 514 516 515 - if (!state->crtc || WARN_ON(!state->fb)) 517 + if (!new_plane_state->crtc || WARN_ON(!new_plane_state->fb)) 516 518 return 0; 517 519 518 - fb = state->fb; 520 + fb = new_plane_state->fb; 519 521 520 522 ms->format = malidp_hw_get_format_id(&mp->hwdev->hw->map, 521 523 mp->layer->id, fb->format->format, ··· 543 541 DRM_DEBUG_KMS("Buffer width/height needs to be a multiple of tile sizes"); 544 542 return -EINVAL; 545 543 } 546 - if ((state->src_x >> 16) % block_w || (state->src_y >> 16) % block_h) { 544 + if ((new_plane_state->src_x >> 16) % block_w || (new_plane_state->src_y >> 16) % block_h) { 547 545 DRM_DEBUG_KMS("Plane src_x/src_y needs to be a multiple of tile sizes"); 548 546 return -EINVAL; 549 547 } 550 548 551 - if ((state->crtc_w > mp->hwdev->max_line_size) || 552 - (state->crtc_h > mp->hwdev->max_line_size) || 553 - (state->crtc_w < mp->hwdev->min_line_size) || 554 - (state->crtc_h < mp->hwdev->min_line_size)) 549 + if ((new_plane_state->crtc_w > mp->hwdev->max_line_size) || 550 + (new_plane_state->crtc_h > mp->hwdev->max_line_size) || 551 + (new_plane_state->crtc_w < mp->hwdev->min_line_size) || 552 + (new_plane_state->crtc_h < 
mp->hwdev->min_line_size)) 555 553 return -EINVAL; 556 554 557 555 /* ··· 561 559 */ 562 560 if (ms->n_planes == 3 && 563 561 !(mp->hwdev->hw->features & MALIDP_DEVICE_LV_HAS_3_STRIDES) && 564 - (state->fb->pitches[1] != state->fb->pitches[2])) 562 + (new_plane_state->fb->pitches[1] != new_plane_state->fb->pitches[2])) 565 563 return -EINVAL; 566 564 567 - ret = malidp_se_check_scaling(mp, state); 565 + ret = malidp_se_check_scaling(mp, new_plane_state); 568 566 if (ret) 569 567 return ret; 570 568 571 569 /* validate the rotation constraints for each layer */ 572 - if (state->rotation != DRM_MODE_ROTATE_0) { 570 + if (new_plane_state->rotation != DRM_MODE_ROTATE_0) { 573 571 if (mp->layer->rot == ROTATE_NONE) 574 572 return -EINVAL; 575 573 if ((mp->layer->rot == ROTATE_COMPRESSED) && !(fb->modifier)) ··· 590 588 } 591 589 592 590 ms->rotmem_size = 0; 593 - if (state->rotation & MALIDP_ROTATED_MASK) { 591 + if (new_plane_state->rotation & MALIDP_ROTATED_MASK) { 594 592 int val; 595 593 596 - val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_w, 597 - state->crtc_h, 594 + val = mp->hwdev->hw->rotmem_required(mp->hwdev, new_plane_state->crtc_w, 595 + new_plane_state->crtc_h, 598 596 fb->format->format, 599 597 !!(fb->modifier)); 600 598 if (val < 0) ··· 604 602 } 605 603 606 604 /* HW can't support plane + pixel blending */ 607 - if ((state->alpha != DRM_BLEND_ALPHA_OPAQUE) && 605 + if ((new_plane_state->alpha != DRM_BLEND_ALPHA_OPAQUE) && 608 606 (pixel_alpha != DRM_MODE_BLEND_PIXEL_NONE) && 609 607 fb->format->has_alpha) 610 608 return -EINVAL; ··· 791 789 } 792 790 793 791 static void malidp_de_plane_update(struct drm_plane *plane, 794 - struct drm_plane_state *old_state) 792 + struct drm_atomic_state *state) 795 793 { 794 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 795 + plane); 796 796 struct malidp_plane *mp; 797 797 struct malidp_plane_state *ms = to_malidp_plane_state(plane->state); 798 - struct drm_plane_state *state 
= plane->state; 799 - u16 pixel_alpha = state->pixel_blend_mode; 800 - u8 plane_alpha = state->alpha >> 8; 798 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 799 + plane); 800 + u16 pixel_alpha = new_state->pixel_blend_mode; 801 + u8 plane_alpha = new_state->alpha >> 8; 801 802 u32 src_w, src_h, dest_w, dest_h, val; 802 803 int i; 803 804 struct drm_framebuffer *fb = plane->state->fb; ··· 816 811 src_h = fb->height; 817 812 } else { 818 813 /* convert src values from Q16 fixed point to integer */ 819 - src_w = state->src_w >> 16; 820 - src_h = state->src_h >> 16; 814 + src_w = new_state->src_w >> 16; 815 + src_h = new_state->src_h >> 16; 821 816 } 822 817 823 - dest_w = state->crtc_w; 824 - dest_h = state->crtc_h; 818 + dest_w = new_state->crtc_w; 819 + dest_h = new_state->crtc_h; 825 820 826 821 val = malidp_hw_read(mp->hwdev, mp->layer->base); 827 822 val = (val & ~LAYER_FORMAT_MASK) | ms->format; ··· 833 828 malidp_de_set_mmu_control(mp, ms); 834 829 835 830 malidp_de_set_plane_pitches(mp, ms->n_planes, 836 - state->fb->pitches); 831 + new_state->fb->pitches); 837 832 838 833 if ((plane->state->color_encoding != old_state->color_encoding) || 839 834 (plane->state->color_range != old_state->color_range)) ··· 846 841 malidp_hw_write(mp->hwdev, LAYER_H_VAL(dest_w) | LAYER_V_VAL(dest_h), 847 842 mp->layer->base + MALIDP_LAYER_COMP_SIZE); 848 843 849 - malidp_hw_write(mp->hwdev, LAYER_H_VAL(state->crtc_x) | 850 - LAYER_V_VAL(state->crtc_y), 844 + malidp_hw_write(mp->hwdev, LAYER_H_VAL(new_state->crtc_x) | 845 + LAYER_V_VAL(new_state->crtc_y), 851 846 mp->layer->base + MALIDP_LAYER_OFFSET); 852 847 853 848 if (mp->layer->id == DE_SMART) { ··· 869 864 val &= ~LAYER_ROT_MASK; 870 865 871 866 /* setup the rotation and axis flip bits */ 872 - if (state->rotation & DRM_MODE_ROTATE_MASK) 867 + if (new_state->rotation & DRM_MODE_ROTATE_MASK) 873 868 val |= ilog2(plane->state->rotation & DRM_MODE_ROTATE_MASK) << 874 869 LAYER_ROT_OFFSET; 875 - if 
(state->rotation & DRM_MODE_REFLECT_X) 870 + if (new_state->rotation & DRM_MODE_REFLECT_X) 876 871 val |= LAYER_H_FLIP; 877 - if (state->rotation & DRM_MODE_REFLECT_Y) 872 + if (new_state->rotation & DRM_MODE_REFLECT_Y) 878 873 val |= LAYER_V_FLIP; 879 874 880 875 val &= ~(LAYER_COMP_MASK | LAYER_PMUL_ENABLE | LAYER_ALPHA(0xff)); 881 876 882 - if (state->alpha != DRM_BLEND_ALPHA_OPAQUE) { 877 + if (new_state->alpha != DRM_BLEND_ALPHA_OPAQUE) { 883 878 val |= LAYER_COMP_PLANE; 884 - } else if (state->fb->format->has_alpha) { 879 + } else if (new_state->fb->format->has_alpha) { 885 880 /* We only care about blend mode if the format has alpha */ 886 881 switch (pixel_alpha) { 887 882 case DRM_MODE_BLEND_PREMULTI: ··· 895 890 val |= LAYER_ALPHA(plane_alpha); 896 891 897 892 val &= ~LAYER_FLOWCFG(LAYER_FLOWCFG_MASK); 898 - if (state->crtc) { 893 + if (new_state->crtc) { 899 894 struct malidp_crtc_state *m = 900 - to_malidp_crtc_state(state->crtc->state); 895 + to_malidp_crtc_state(new_state->crtc->state); 901 896 902 897 if (m->scaler_config.scale_enable && 903 898 m->scaler_config.plane_src_id == mp->layer->id) ··· 912 907 } 913 908 914 909 static void malidp_de_plane_disable(struct drm_plane *plane, 915 - struct drm_plane_state *state) 910 + struct drm_atomic_state *state) 916 911 { 917 912 struct malidp_plane *mp = to_malidp_plane(plane); 918 913
+60 -53
drivers/gpu/drm/armada/armada_overlay.c
··· 66 66 67 67 /* === Plane support === */ 68 68 static void armada_drm_overlay_plane_atomic_update(struct drm_plane *plane, 69 - struct drm_plane_state *old_state) 69 + struct drm_atomic_state *state) 70 70 { 71 - struct drm_plane_state *state = plane->state; 71 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 72 + plane); 73 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 74 + plane); 72 75 struct armada_crtc *dcrtc; 73 76 struct armada_regs *regs; 74 77 unsigned int idx; ··· 79 76 80 77 DRM_DEBUG_KMS("[PLANE:%d:%s]\n", plane->base.id, plane->name); 81 78 82 - if (!state->fb || WARN_ON(!state->crtc)) 79 + if (!new_state->fb || WARN_ON(!new_state->crtc)) 83 80 return; 84 81 85 82 DRM_DEBUG_KMS("[PLANE:%d:%s] is on [CRTC:%d:%s] with [FB:%d] visible %u->%u\n", 86 83 plane->base.id, plane->name, 87 - state->crtc->base.id, state->crtc->name, 88 - state->fb->base.id, 89 - old_state->visible, state->visible); 84 + new_state->crtc->base.id, new_state->crtc->name, 85 + new_state->fb->base.id, 86 + old_state->visible, new_state->visible); 90 87 91 - dcrtc = drm_to_armada_crtc(state->crtc); 88 + dcrtc = drm_to_armada_crtc(new_state->crtc); 92 89 regs = dcrtc->regs + dcrtc->regs_idx; 93 90 94 91 idx = 0; 95 - if (!old_state->visible && state->visible) 92 + if (!old_state->visible && new_state->visible) 96 93 armada_reg_queue_mod(regs, idx, 97 94 0, CFG_PDWN16x66 | CFG_PDWN32x66, 98 95 LCD_SPU_SRAM_PARA1); 99 - val = armada_src_hw(state); 96 + val = armada_src_hw(new_state); 100 97 if (armada_src_hw(old_state) != val) 101 98 armada_reg_queue_set(regs, idx, val, LCD_SPU_DMA_HPXL_VLN); 102 - val = armada_dst_yx(state); 99 + val = armada_dst_yx(new_state); 103 100 if (armada_dst_yx(old_state) != val) 104 101 armada_reg_queue_set(regs, idx, val, LCD_SPU_DMA_OVSA_HPXL_VLN); 105 - val = armada_dst_hw(state); 102 + val = armada_dst_hw(new_state); 106 103 if (armada_dst_hw(old_state) != val) 107 104 armada_reg_queue_set(regs, 
idx, val, LCD_SPU_DZM_HPXL_VLN); 108 105 /* FIXME: overlay on an interlaced display */ 109 - if (old_state->src.x1 != state->src.x1 || 110 - old_state->src.y1 != state->src.y1 || 111 - old_state->fb != state->fb || 112 - state->crtc->state->mode_changed) { 106 + if (old_state->src.x1 != new_state->src.x1 || 107 + old_state->src.y1 != new_state->src.y1 || 108 + old_state->fb != new_state->fb || 109 + new_state->crtc->state->mode_changed) { 113 110 const struct drm_format_info *format; 114 111 u16 src_x; 115 112 116 - armada_reg_queue_set(regs, idx, armada_addr(state, 0, 0), 113 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 0, 0), 117 114 LCD_SPU_DMA_START_ADDR_Y0); 118 - armada_reg_queue_set(regs, idx, armada_addr(state, 0, 1), 115 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 0, 1), 119 116 LCD_SPU_DMA_START_ADDR_U0); 120 - armada_reg_queue_set(regs, idx, armada_addr(state, 0, 2), 117 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 0, 2), 121 118 LCD_SPU_DMA_START_ADDR_V0); 122 - armada_reg_queue_set(regs, idx, armada_addr(state, 1, 0), 119 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 1, 0), 123 120 LCD_SPU_DMA_START_ADDR_Y1); 124 - armada_reg_queue_set(regs, idx, armada_addr(state, 1, 1), 121 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 1, 1), 125 122 LCD_SPU_DMA_START_ADDR_U1); 126 - armada_reg_queue_set(regs, idx, armada_addr(state, 1, 2), 123 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 1, 2), 127 124 LCD_SPU_DMA_START_ADDR_V1); 128 125 129 - val = armada_pitch(state, 0) << 16 | armada_pitch(state, 0); 126 + val = armada_pitch(new_state, 0) << 16 | armada_pitch(new_state, 127 + 0); 130 128 armada_reg_queue_set(regs, idx, val, LCD_SPU_DMA_PITCH_YC); 131 - val = armada_pitch(state, 1) << 16 | armada_pitch(state, 2); 129 + val = armada_pitch(new_state, 1) << 16 | armada_pitch(new_state, 130 + 2); 132 131 armada_reg_queue_set(regs, idx, val, LCD_SPU_DMA_PITCH_UV); 133 132 134 - cfg = 
CFG_DMA_FMT(drm_fb_to_armada_fb(state->fb)->fmt) | 135 - CFG_DMA_MOD(drm_fb_to_armada_fb(state->fb)->mod) | 133 + cfg = CFG_DMA_FMT(drm_fb_to_armada_fb(new_state->fb)->fmt) | 134 + CFG_DMA_MOD(drm_fb_to_armada_fb(new_state->fb)->mod) | 136 135 CFG_CBSH_ENA; 137 - if (state->visible) 136 + if (new_state->visible) 138 137 cfg |= CFG_DMA_ENA; 139 138 140 139 /* ··· 144 139 * U/V planes to swap. Compensate for it by also toggling 145 140 * the UV swap. 146 141 */ 147 - format = state->fb->format; 148 - src_x = state->src.x1 >> 16; 142 + format = new_state->fb->format; 143 + src_x = new_state->src.x1 >> 16; 149 144 if (format->num_planes == 1 && src_x & (format->hsub - 1)) 150 145 cfg ^= CFG_DMA_MOD(CFG_SWAPUV); 151 - if (to_armada_plane_state(state)->interlace) 146 + if (to_armada_plane_state(new_state)->interlace) 152 147 cfg |= CFG_DMA_FTOGGLE; 153 148 cfg_mask = CFG_CBSH_ENA | CFG_DMAFORMAT | 154 149 CFG_DMA_MOD(CFG_SWAPRB | CFG_SWAPUV | 155 150 CFG_SWAPYU | CFG_YUV2RGB) | 156 151 CFG_DMA_FTOGGLE | CFG_DMA_TSTMODE | 157 152 CFG_DMA_ENA; 158 - } else if (old_state->visible != state->visible) { 159 - cfg = state->visible ? CFG_DMA_ENA : 0; 153 + } else if (old_state->visible != new_state->visible) { 154 + cfg = new_state->visible ? 
CFG_DMA_ENA : 0; 160 155 cfg_mask = CFG_DMA_ENA; 161 156 } else { 162 157 cfg = cfg_mask = 0; 163 158 } 164 - if (drm_rect_width(&old_state->src) != drm_rect_width(&state->src) || 165 - drm_rect_width(&old_state->dst) != drm_rect_width(&state->dst)) { 159 + if (drm_rect_width(&old_state->src) != drm_rect_width(&new_state->src) || 160 + drm_rect_width(&old_state->dst) != drm_rect_width(&new_state->dst)) { 166 161 cfg_mask |= CFG_DMA_HSMOOTH; 167 - if (drm_rect_width(&state->src) >> 16 != 168 - drm_rect_width(&state->dst)) 162 + if (drm_rect_width(&new_state->src) >> 16 != 163 + drm_rect_width(&new_state->dst)) 169 164 cfg |= CFG_DMA_HSMOOTH; 170 165 } 171 166 ··· 173 168 armada_reg_queue_mod(regs, idx, cfg, cfg_mask, 174 169 LCD_SPU_DMA_CTRL0); 175 170 176 - val = armada_spu_contrast(state); 177 - if ((!old_state->visible && state->visible) || 171 + val = armada_spu_contrast(new_state); 172 + if ((!old_state->visible && new_state->visible) || 178 173 armada_spu_contrast(old_state) != val) 179 174 armada_reg_queue_set(regs, idx, val, LCD_SPU_CONTRAST); 180 - val = armada_spu_saturation(state); 181 - if ((!old_state->visible && state->visible) || 175 + val = armada_spu_saturation(new_state); 176 + if ((!old_state->visible && new_state->visible) || 182 177 armada_spu_saturation(old_state) != val) 183 178 armada_reg_queue_set(regs, idx, val, LCD_SPU_SATURATION); 184 - if (!old_state->visible && state->visible) 179 + if (!old_state->visible && new_state->visible) 185 180 armada_reg_queue_set(regs, idx, 0x00002000, LCD_SPU_CBSH_HUE); 186 - val = armada_csc(state); 187 - if ((!old_state->visible && state->visible) || 181 + val = armada_csc(new_state); 182 + if ((!old_state->visible && new_state->visible) || 188 183 armada_csc(old_state) != val) 189 184 armada_reg_queue_mod(regs, idx, val, CFG_CSC_MASK, 190 185 LCD_SPU_IOPAD_CONTROL); 191 - val = drm_to_overlay_state(state)->colorkey_yr; 192 - if ((!old_state->visible && state->visible) || 186 + val = 
drm_to_overlay_state(new_state)->colorkey_yr; 187 + if ((!old_state->visible && new_state->visible) || 193 188 drm_to_overlay_state(old_state)->colorkey_yr != val) 194 189 armada_reg_queue_set(regs, idx, val, LCD_SPU_COLORKEY_Y); 195 - val = drm_to_overlay_state(state)->colorkey_ug; 196 - if ((!old_state->visible && state->visible) || 190 + val = drm_to_overlay_state(new_state)->colorkey_ug; 191 + if ((!old_state->visible && new_state->visible) || 197 192 drm_to_overlay_state(old_state)->colorkey_ug != val) 198 193 armada_reg_queue_set(regs, idx, val, LCD_SPU_COLORKEY_U); 199 - val = drm_to_overlay_state(state)->colorkey_vb; 200 - if ((!old_state->visible && state->visible) || 194 + val = drm_to_overlay_state(new_state)->colorkey_vb; 195 + if ((!old_state->visible && new_state->visible) || 201 196 drm_to_overlay_state(old_state)->colorkey_vb != val) 202 197 armada_reg_queue_set(regs, idx, val, LCD_SPU_COLORKEY_V); 203 - val = drm_to_overlay_state(state)->colorkey_mode; 204 - if ((!old_state->visible && state->visible) || 198 + val = drm_to_overlay_state(new_state)->colorkey_mode; 199 + if ((!old_state->visible && new_state->visible) || 205 200 drm_to_overlay_state(old_state)->colorkey_mode != val) 206 201 armada_reg_queue_mod(regs, idx, val, CFG_CKMODE_MASK | 207 202 CFG_ALPHAM_MASK | CFG_ALPHA_MASK, 208 203 LCD_SPU_DMA_CTRL1); 209 - val = drm_to_overlay_state(state)->colorkey_enable; 210 - if (((!old_state->visible && state->visible) || 204 + val = drm_to_overlay_state(new_state)->colorkey_enable; 205 + if (((!old_state->visible && new_state->visible) || 211 206 drm_to_overlay_state(old_state)->colorkey_enable != val) && 212 207 dcrtc->variant->has_spu_adv_reg) 213 208 armada_reg_queue_mod(regs, idx, val, ADV_GRACOLORKEY | ··· 217 212 } 218 213 219 214 static void armada_drm_overlay_plane_atomic_disable(struct drm_plane *plane, 220 - struct drm_plane_state *old_state) 215 + struct drm_atomic_state *state) 221 216 { 217 + struct drm_plane_state *old_state = 
drm_atomic_get_old_plane_state(state, 218 + plane); 222 219 struct armada_crtc *dcrtc; 223 220 struct armada_regs *regs; 224 221 unsigned int idx = 0;
+63 -52
drivers/gpu/drm/armada/armada_plane.c
··· 106 106 } 107 107 108 108 int armada_drm_plane_atomic_check(struct drm_plane *plane, 109 - struct drm_plane_state *state) 109 + struct drm_atomic_state *state) 110 110 { 111 - struct armada_plane_state *st = to_armada_plane_state(state); 112 - struct drm_crtc *crtc = state->crtc; 111 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 112 + plane); 113 + struct armada_plane_state *st = to_armada_plane_state(new_plane_state); 114 + struct drm_crtc *crtc = new_plane_state->crtc; 113 115 struct drm_crtc_state *crtc_state; 114 116 bool interlace; 115 117 int ret; 116 118 117 - if (!state->fb || WARN_ON(!state->crtc)) { 118 - state->visible = false; 119 + if (!new_plane_state->fb || WARN_ON(!new_plane_state->crtc)) { 120 + new_plane_state->visible = false; 119 121 return 0; 120 122 } 121 123 122 - if (state->state) 123 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 124 + if (state) 125 + crtc_state = drm_atomic_get_existing_crtc_state(state, 126 + crtc); 124 127 else 125 128 crtc_state = crtc->state; 126 129 127 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 0, 130 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 131 + 0, 128 132 INT_MAX, true, false); 129 133 if (ret) 130 134 return ret; 131 135 132 136 interlace = crtc_state->adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE; 133 137 if (interlace) { 134 - if ((state->dst.y1 | state->dst.y2) & 1) 138 + if ((new_plane_state->dst.y1 | new_plane_state->dst.y2) & 1) 135 139 return -EINVAL; 136 - st->src_hw = drm_rect_height(&state->src) >> 17; 137 - st->dst_yx = state->dst.y1 >> 1; 138 - st->dst_hw = drm_rect_height(&state->dst) >> 1; 140 + st->src_hw = drm_rect_height(&new_plane_state->src) >> 17; 141 + st->dst_yx = new_plane_state->dst.y1 >> 1; 142 + st->dst_hw = drm_rect_height(&new_plane_state->dst) >> 1; 139 143 } else { 140 - st->src_hw = drm_rect_height(&state->src) >> 16; 141 - st->dst_yx = state->dst.y1; 142 - st->dst_hw 
= drm_rect_height(&state->dst); 144 + st->src_hw = drm_rect_height(&new_plane_state->src) >> 16; 145 + st->dst_yx = new_plane_state->dst.y1; 146 + st->dst_hw = drm_rect_height(&new_plane_state->dst); 143 147 } 144 148 145 149 st->src_hw <<= 16; 146 - st->src_hw |= drm_rect_width(&state->src) >> 16; 150 + st->src_hw |= drm_rect_width(&new_plane_state->src) >> 16; 147 151 st->dst_yx <<= 16; 148 - st->dst_yx |= state->dst.x1 & 0x0000ffff; 152 + st->dst_yx |= new_plane_state->dst.x1 & 0x0000ffff; 149 153 st->dst_hw <<= 16; 150 - st->dst_hw |= drm_rect_width(&state->dst) & 0x0000ffff; 154 + st->dst_hw |= drm_rect_width(&new_plane_state->dst) & 0x0000ffff; 151 155 152 - armada_drm_plane_calc(state, st->addrs, st->pitches, interlace); 156 + armada_drm_plane_calc(new_plane_state, st->addrs, st->pitches, 157 + interlace); 153 158 st->interlace = interlace; 154 159 155 160 return 0; 156 161 } 157 162 158 163 static void armada_drm_primary_plane_atomic_update(struct drm_plane *plane, 159 - struct drm_plane_state *old_state) 164 + struct drm_atomic_state *state) 160 165 { 161 - struct drm_plane_state *state = plane->state; 166 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 167 + plane); 168 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 169 + plane); 162 170 struct armada_crtc *dcrtc; 163 171 struct armada_regs *regs; 164 172 u32 cfg, cfg_mask, val; ··· 174 166 175 167 DRM_DEBUG_KMS("[PLANE:%d:%s]\n", plane->base.id, plane->name); 176 168 177 - if (!state->fb || WARN_ON(!state->crtc)) 169 + if (!new_state->fb || WARN_ON(!new_state->crtc)) 178 170 return; 179 171 180 172 DRM_DEBUG_KMS("[PLANE:%d:%s] is on [CRTC:%d:%s] with [FB:%d] visible %u->%u\n", 181 173 plane->base.id, plane->name, 182 - state->crtc->base.id, state->crtc->name, 183 - state->fb->base.id, 184 - old_state->visible, state->visible); 174 + new_state->crtc->base.id, new_state->crtc->name, 175 + new_state->fb->base.id, 176 + old_state->visible, 
new_state->visible); 185 177 186 - dcrtc = drm_to_armada_crtc(state->crtc); 178 + dcrtc = drm_to_armada_crtc(new_state->crtc); 187 179 regs = dcrtc->regs + dcrtc->regs_idx; 188 180 189 181 idx = 0; 190 - if (!old_state->visible && state->visible) { 182 + if (!old_state->visible && new_state->visible) { 191 183 val = CFG_PDWN64x66; 192 - if (drm_fb_to_armada_fb(state->fb)->fmt > CFG_420) 184 + if (drm_fb_to_armada_fb(new_state->fb)->fmt > CFG_420) 193 185 val |= CFG_PDWN256x24; 194 186 armada_reg_queue_mod(regs, idx, 0, val, LCD_SPU_SRAM_PARA1); 195 187 } 196 - val = armada_src_hw(state); 188 + val = armada_src_hw(new_state); 197 189 if (armada_src_hw(old_state) != val) 198 190 armada_reg_queue_set(regs, idx, val, LCD_SPU_GRA_HPXL_VLN); 199 - val = armada_dst_yx(state); 191 + val = armada_dst_yx(new_state); 200 192 if (armada_dst_yx(old_state) != val) 201 193 armada_reg_queue_set(regs, idx, val, LCD_SPU_GRA_OVSA_HPXL_VLN); 202 - val = armada_dst_hw(state); 194 + val = armada_dst_hw(new_state); 203 195 if (armada_dst_hw(old_state) != val) 204 196 armada_reg_queue_set(regs, idx, val, LCD_SPU_GZM_HPXL_VLN); 205 - if (old_state->src.x1 != state->src.x1 || 206 - old_state->src.y1 != state->src.y1 || 207 - old_state->fb != state->fb || 208 - state->crtc->state->mode_changed) { 209 - armada_reg_queue_set(regs, idx, armada_addr(state, 0, 0), 197 + if (old_state->src.x1 != new_state->src.x1 || 198 + old_state->src.y1 != new_state->src.y1 || 199 + old_state->fb != new_state->fb || 200 + new_state->crtc->state->mode_changed) { 201 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 0, 0), 210 202 LCD_CFG_GRA_START_ADDR0); 211 - armada_reg_queue_set(regs, idx, armada_addr(state, 1, 0), 203 + armada_reg_queue_set(regs, idx, armada_addr(new_state, 1, 0), 212 204 LCD_CFG_GRA_START_ADDR1); 213 - armada_reg_queue_mod(regs, idx, armada_pitch(state, 0), 0xffff, 205 + armada_reg_queue_mod(regs, idx, armada_pitch(new_state, 0), 206 + 0xffff, 214 207 LCD_CFG_GRA_PITCH); 215 208 } 
216 - if (old_state->fb != state->fb || 217 - state->crtc->state->mode_changed) { 218 - cfg = CFG_GRA_FMT(drm_fb_to_armada_fb(state->fb)->fmt) | 219 - CFG_GRA_MOD(drm_fb_to_armada_fb(state->fb)->mod); 220 - if (drm_fb_to_armada_fb(state->fb)->fmt > CFG_420) 209 + if (old_state->fb != new_state->fb || 210 + new_state->crtc->state->mode_changed) { 211 + cfg = CFG_GRA_FMT(drm_fb_to_armada_fb(new_state->fb)->fmt) | 212 + CFG_GRA_MOD(drm_fb_to_armada_fb(new_state->fb)->mod); 213 + if (drm_fb_to_armada_fb(new_state->fb)->fmt > CFG_420) 221 214 cfg |= CFG_PALETTE_ENA; 222 - if (state->visible) 215 + if (new_state->visible) 223 216 cfg |= CFG_GRA_ENA; 224 - if (to_armada_plane_state(state)->interlace) 217 + if (to_armada_plane_state(new_state)->interlace) 225 218 cfg |= CFG_GRA_FTOGGLE; 226 219 cfg_mask = CFG_GRAFORMAT | 227 220 CFG_GRA_MOD(CFG_SWAPRB | CFG_SWAPUV | 228 221 CFG_SWAPYU | CFG_YUV2RGB) | 229 222 CFG_PALETTE_ENA | CFG_GRA_FTOGGLE | 230 223 CFG_GRA_ENA; 231 - } else if (old_state->visible != state->visible) { 232 - cfg = state->visible ? CFG_GRA_ENA : 0; 224 + } else if (old_state->visible != new_state->visible) { 225 + cfg = new_state->visible ? 
CFG_GRA_ENA : 0; 233 226 cfg_mask = CFG_GRA_ENA; 234 227 } else { 235 228 cfg = cfg_mask = 0; 236 229 } 237 - if (drm_rect_width(&old_state->src) != drm_rect_width(&state->src) || 238 - drm_rect_width(&old_state->dst) != drm_rect_width(&state->dst)) { 230 + if (drm_rect_width(&old_state->src) != drm_rect_width(&new_state->src) || 231 + drm_rect_width(&old_state->dst) != drm_rect_width(&new_state->dst)) { 239 232 cfg_mask |= CFG_GRA_HSMOOTH; 240 - if (drm_rect_width(&state->src) >> 16 != 241 - drm_rect_width(&state->dst)) 233 + if (drm_rect_width(&new_state->src) >> 16 != 234 + drm_rect_width(&new_state->dst)) 242 235 cfg |= CFG_GRA_HSMOOTH; 243 236 } 244 237 ··· 251 242 } 252 243 253 244 static void armada_drm_primary_plane_atomic_disable(struct drm_plane *plane, 254 - struct drm_plane_state *old_state) 245 + struct drm_atomic_state *state) 255 246 { 247 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 248 + plane); 256 249 struct armada_crtc *dcrtc; 257 250 struct armada_regs *regs; 258 251 unsigned int idx = 0;
+1 -1
drivers/gpu/drm/armada/armada_plane.h
··· 26 26 void armada_drm_plane_cleanup_fb(struct drm_plane *plane, 27 27 struct drm_plane_state *old_state); 28 28 int armada_drm_plane_atomic_check(struct drm_plane *plane, 29 - struct drm_plane_state *state); 29 + struct drm_atomic_state *state); 30 30 void armada_plane_reset(struct drm_plane *plane); 31 31 struct drm_plane_state *armada_plane_duplicate_state(struct drm_plane *plane); 32 32 void armada_plane_destroy_state(struct drm_plane *plane,
+5 -3
drivers/gpu/drm/aspeed/aspeed_gfx.h
··· 11 11 struct reset_control *rst; 12 12 struct regmap *scu; 13 13 14 + u32 dac_reg; 15 + u32 vga_scratch_reg; 16 + u32 throd_val; 17 + u32 scan_line_max; 18 + 14 19 struct drm_simple_display_pipe pipe; 15 20 struct drm_connector connector; 16 21 }; ··· 105 100 /* CRT_THROD */ 106 101 #define CRT_THROD_LOW(x) (x) 107 102 #define CRT_THROD_HIGH(x) ((x) << 8) 108 - 109 - /* Default Threshold Seting */ 110 - #define G5_CRT_THROD_VAL (CRT_THROD_LOW(0x24) | CRT_THROD_HIGH(0x3C))
+8 -7
drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
··· 9 9 #include <drm/drm_device.h> 10 10 #include <drm/drm_fb_cma_helper.h> 11 11 #include <drm/drm_fourcc.h> 12 + #include <drm/drm_gem_atomic_helper.h> 12 13 #include <drm/drm_gem_cma_helper.h> 13 - #include <drm/drm_gem_framebuffer_helper.h> 14 14 #include <drm/drm_panel.h> 15 15 #include <drm/drm_simple_kms_helper.h> 16 16 #include <drm/drm_vblank.h> ··· 59 59 u32 ctrl1 = readl(priv->base + CRT_CTRL1); 60 60 u32 ctrl2 = readl(priv->base + CRT_CTRL2); 61 61 62 - /* SCU2C: set DAC source for display output to Graphics CRT (GFX) */ 63 - regmap_update_bits(priv->scu, 0x2c, BIT(16), BIT(16)); 62 + /* Set DAC source for display output to Graphics CRT (GFX) */ 63 + regmap_update_bits(priv->scu, priv->dac_reg, BIT(16), BIT(16)); 64 64 65 65 writel(ctrl1 | CRT_CTRL_EN, priv->base + CRT_CTRL1); 66 66 writel(ctrl2 | CRT_CTRL_DAC_EN, priv->base + CRT_CTRL2); ··· 74 74 writel(ctrl1 & ~CRT_CTRL_EN, priv->base + CRT_CTRL1); 75 75 writel(ctrl2 & ~CRT_CTRL_DAC_EN, priv->base + CRT_CTRL2); 76 76 77 - regmap_update_bits(priv->scu, 0x2c, BIT(16), 0); 77 + regmap_update_bits(priv->scu, priv->dac_reg, BIT(16), 0); 78 78 } 79 79 80 80 static void aspeed_gfx_crtc_mode_set_nofb(struct aspeed_gfx *priv) ··· 127 127 * Terminal Count: memory size of one scan line 128 128 */ 129 129 d_offset = m->hdisplay * bpp / 8; 130 - t_count = (m->hdisplay * bpp + 127) / 128; 130 + t_count = DIV_ROUND_UP(m->hdisplay * bpp, priv->scan_line_max); 131 + 131 132 writel(CRT_DISP_OFFSET(d_offset) | CRT_TERM_COUNT(t_count), 132 133 priv->base + CRT_OFFSET); 133 134 ··· 136 135 * Threshold: FIFO thresholds of refill and stop (16 byte chunks 137 136 * per line, rounded up) 138 137 */ 139 - writel(G5_CRT_THROD_VAL, priv->base + CRT_THROD); 138 + writel(priv->throd_val, priv->base + CRT_THROD); 140 139 } 141 140 142 141 static void aspeed_gfx_pipe_enable(struct drm_simple_display_pipe *pipe, ··· 220 219 .enable = aspeed_gfx_pipe_enable, 221 220 .disable = aspeed_gfx_pipe_disable, 222 221 .update = 
aspeed_gfx_pipe_update, 223 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 222 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 224 223 .enable_vblank = aspeed_gfx_enable_vblank, 225 224 .disable_vblank = aspeed_gfx_disable_vblank, 226 225 };
+52 -17
drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
··· 7 7 #include <linux/mfd/syscon.h> 8 8 #include <linux/module.h> 9 9 #include <linux/of.h> 10 + #include <linux/of_device.h> 10 11 #include <linux/of_reserved_mem.h> 11 12 #include <linux/platform_device.h> 12 13 #include <linux/regmap.h> ··· 58 57 * which is available under NDA from ASPEED. 59 58 */ 60 59 60 + struct aspeed_gfx_config { 61 + u32 dac_reg; /* DAC register in SCU */ 62 + u32 vga_scratch_reg; /* VGA scratch register in SCU */ 63 + u32 throd_val; /* Default Threshold Setting */ 64 + u32 scan_line_max; /* Max memory size of one scan line */ 65 + }; 66 + 67 + static const struct aspeed_gfx_config ast2400_config = { 68 + .dac_reg = 0x2c, 69 + .vga_scratch_reg = 0x50, 70 + .throd_val = CRT_THROD_LOW(0x1e) | CRT_THROD_HIGH(0x12), 71 + .scan_line_max = 64, 72 + }; 73 + 74 + static const struct aspeed_gfx_config ast2500_config = { 75 + .dac_reg = 0x2c, 76 + .vga_scratch_reg = 0x50, 77 + .throd_val = CRT_THROD_LOW(0x24) | CRT_THROD_HIGH(0x3c), 78 + .scan_line_max = 128, 79 + }; 80 + 81 + static const struct of_device_id aspeed_gfx_match[] = { 82 + { .compatible = "aspeed,ast2400-gfx", .data = &ast2400_config }, 83 + { .compatible = "aspeed,ast2500-gfx", .data = &ast2500_config }, 84 + { }, 85 + }; 86 + MODULE_DEVICE_TABLE(of, aspeed_gfx_match); 87 + 61 88 static const struct drm_mode_config_funcs aspeed_gfx_mode_config_funcs = { 62 89 .fb_create = drm_gem_fb_create, 63 90 .atomic_check = drm_atomic_helper_check, ··· 126 97 return IRQ_NONE; 127 98 } 128 99 129 - 130 - 131 100 static int aspeed_gfx_load(struct drm_device *drm) 132 101 { 133 102 struct platform_device *pdev = to_platform_device(drm->dev); 134 103 struct aspeed_gfx *priv = to_aspeed_gfx(drm); 104 + struct device_node *np = pdev->dev.of_node; 105 + const struct aspeed_gfx_config *config; 106 + const struct of_device_id *match; 135 107 struct resource *res; 136 108 int ret; 137 109 ··· 141 111 if (IS_ERR(priv->base)) 142 112 return PTR_ERR(priv->base); 143 113 144 - priv->scu = 
syscon_regmap_lookup_by_compatible("aspeed,ast2500-scu"); 114 + match = of_match_device(aspeed_gfx_match, &pdev->dev); 115 + if (!match) 116 + return -EINVAL; 117 + config = match->data; 118 + 119 + priv->dac_reg = config->dac_reg; 120 + priv->vga_scratch_reg = config->vga_scratch_reg; 121 + priv->throd_val = config->throd_val; 122 + priv->scan_line_max = config->scan_line_max; 123 + 124 + priv->scu = syscon_regmap_lookup_by_phandle(np, "syscon"); 145 125 if (IS_ERR(priv->scu)) { 146 - dev_err(&pdev->dev, "failed to find SCU regmap\n"); 147 - return PTR_ERR(priv->scu); 126 + priv->scu = syscon_regmap_lookup_by_compatible("aspeed,ast2500-scu"); 127 + if (IS_ERR(priv->scu)) { 128 + dev_err(&pdev->dev, "failed to find SCU regmap\n"); 129 + return PTR_ERR(priv->scu); 130 + } 148 131 } 149 132 150 133 ret = of_reserved_mem_device_init(drm->dev); ··· 245 202 .minor = 0, 246 203 }; 247 204 248 - static const struct of_device_id aspeed_gfx_match[] = { 249 - { .compatible = "aspeed,ast2500-gfx" }, 250 - { } 251 - }; 252 - 253 - #define ASPEED_SCU_VGA0 0x50 254 - #define ASPEED_SCU_MISC_CTRL 0x2c 255 - 256 205 static ssize_t dac_mux_store(struct device *dev, struct device_attribute *attr, 257 206 const char *buf, size_t count) 258 207 { ··· 259 224 if (val > 3) 260 225 return -EINVAL; 261 226 262 - rc = regmap_update_bits(priv->scu, ASPEED_SCU_MISC_CTRL, 0x30000, val << 16); 227 + rc = regmap_update_bits(priv->scu, priv->dac_reg, 0x30000, val << 16); 263 228 if (rc < 0) 264 229 return 0; 265 230 ··· 272 237 u32 reg; 273 238 int rc; 274 239 275 - rc = regmap_read(priv->scu, ASPEED_SCU_MISC_CTRL, &reg); 240 + rc = regmap_read(priv->scu, priv->dac_reg, &reg); 276 241 if (rc) 277 242 return rc; 278 243 ··· 287 252 u32 reg; 288 253 int rc; 289 254 290 - rc = regmap_read(priv->scu, ASPEED_SCU_VGA0, &reg); 255 + rc = regmap_read(priv->scu, priv->vga_scratch_reg, &reg); 291 256 if (rc) 292 257 return rc; 293 258 ··· 319 284 if (ret) 320 285 return ret; 321 286 322 - 
dev_set_drvdata(&pdev->dev, priv); 287 + platform_set_drvdata(pdev, priv); 323 288 324 289 ret = sysfs_create_group(&pdev->dev.kobj, &aspeed_sysfs_attr_group); 325 290 if (ret)
+1 -2
drivers/gpu/drm/ast/Makefile
··· 3 3 # Makefile for the drm device driver. This driver provides support for the 4 4 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher. 5 5 6 - ast-y := ast_cursor.o ast_drv.o ast_main.o ast_mm.o ast_mode.o ast_post.o \ 7 - ast_dp501.o 6 + ast-y := ast_drv.o ast_main.o ast_mm.o ast_mode.o ast_post.o ast_dp501.o 8 7 9 8 obj-$(CONFIG_DRM_AST) := ast.o
-286
drivers/gpu/drm/ast/ast_cursor.c
··· 1 - /* 2 - * Copyright 2012 Red Hat Inc. 3 - * Parts based on xf86-video-ast 4 - * Copyright (c) 2005 ASPEED Technology Inc. 5 - * 6 - * Permission is hereby granted, free of charge, to any person obtaining a 7 - * copy of this software and associated documentation files (the 8 - * "Software"), to deal in the Software without restriction, including 9 - * without limitation the rights to use, copy, modify, merge, publish, 10 - * distribute, sub license, and/or sell copies of the Software, and to 11 - * permit persons to whom the Software is furnished to do so, subject to 12 - * the following conditions: 13 - * 14 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 17 - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 18 - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 19 - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 20 - * USE OR OTHER DEALINGS IN THE SOFTWARE. 21 - * 22 - * The above copyright notice and this permission notice (including the 23 - * next paragraph) shall be included in all copies or substantial portions 24 - * of the Software. 
25 - */ 26 - /* 27 - * Authors: Dave Airlie <airlied@redhat.com> 28 - */ 29 - 30 - #include <drm/drm_gem_vram_helper.h> 31 - #include <drm/drm_managed.h> 32 - 33 - #include "ast_drv.h" 34 - 35 - static void ast_cursor_fini(struct ast_private *ast) 36 - { 37 - size_t i; 38 - struct drm_gem_vram_object *gbo; 39 - 40 - for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { 41 - gbo = ast->cursor.gbo[i]; 42 - drm_gem_vram_unpin(gbo); 43 - drm_gem_vram_put(gbo); 44 - } 45 - } 46 - 47 - static void ast_cursor_release(struct drm_device *dev, void *ptr) 48 - { 49 - struct ast_private *ast = to_ast_private(dev); 50 - 51 - ast_cursor_fini(ast); 52 - } 53 - 54 - /* 55 - * Allocate cursor BOs and pin them at the end of VRAM. 56 - */ 57 - int ast_cursor_init(struct ast_private *ast) 58 - { 59 - struct drm_device *dev = &ast->base; 60 - size_t size, i; 61 - struct drm_gem_vram_object *gbo; 62 - int ret; 63 - 64 - size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); 65 - 66 - for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { 67 - gbo = drm_gem_vram_create(dev, size, 0); 68 - if (IS_ERR(gbo)) { 69 - ret = PTR_ERR(gbo); 70 - goto err_drm_gem_vram_put; 71 - } 72 - ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM | 73 - DRM_GEM_VRAM_PL_FLAG_TOPDOWN); 74 - if (ret) { 75 - drm_gem_vram_put(gbo); 76 - goto err_drm_gem_vram_put; 77 - } 78 - ast->cursor.gbo[i] = gbo; 79 - } 80 - 81 - return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); 82 - 83 - err_drm_gem_vram_put: 84 - while (i) { 85 - --i; 86 - gbo = ast->cursor.gbo[i]; 87 - drm_gem_vram_unpin(gbo); 88 - drm_gem_vram_put(gbo); 89 - } 90 - return ret; 91 - } 92 - 93 - static void update_cursor_image(u8 __iomem *dst, const u8 *src, int width, int height) 94 - { 95 - union { 96 - u32 ul; 97 - u8 b[4]; 98 - } srcdata32[2], data32; 99 - union { 100 - u16 us; 101 - u8 b[2]; 102 - } data16; 103 - u32 csum = 0; 104 - s32 alpha_dst_delta, last_alpha_dst_delta; 105 - u8 __iomem *dstxor; 106 - const u8 *srcxor; 107 - 
int i, j; 108 - u32 per_pixel_copy, two_pixel_copy; 109 - 110 - alpha_dst_delta = AST_MAX_HWC_WIDTH << 1; 111 - last_alpha_dst_delta = alpha_dst_delta - (width << 1); 112 - 113 - srcxor = src; 114 - dstxor = (u8 *)dst + last_alpha_dst_delta + (AST_MAX_HWC_HEIGHT - height) * alpha_dst_delta; 115 - per_pixel_copy = width & 1; 116 - two_pixel_copy = width >> 1; 117 - 118 - for (j = 0; j < height; j++) { 119 - for (i = 0; i < two_pixel_copy; i++) { 120 - srcdata32[0].ul = *((u32 *)srcxor) & 0xf0f0f0f0; 121 - srcdata32[1].ul = *((u32 *)(srcxor + 4)) & 0xf0f0f0f0; 122 - data32.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); 123 - data32.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); 124 - data32.b[2] = srcdata32[1].b[1] | (srcdata32[1].b[0] >> 4); 125 - data32.b[3] = srcdata32[1].b[3] | (srcdata32[1].b[2] >> 4); 126 - 127 - writel(data32.ul, dstxor); 128 - csum += data32.ul; 129 - 130 - dstxor += 4; 131 - srcxor += 8; 132 - 133 - } 134 - 135 - for (i = 0; i < per_pixel_copy; i++) { 136 - srcdata32[0].ul = *((u32 *)srcxor) & 0xf0f0f0f0; 137 - data16.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); 138 - data16.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); 139 - writew(data16.us, dstxor); 140 - csum += (u32)data16.us; 141 - 142 - dstxor += 2; 143 - srcxor += 4; 144 - } 145 - dstxor += last_alpha_dst_delta; 146 - } 147 - 148 - /* write checksum + signature */ 149 - dst += AST_HWC_SIZE; 150 - writel(csum, dst); 151 - writel(width, dst + AST_HWC_SIGNATURE_SizeX); 152 - writel(height, dst + AST_HWC_SIGNATURE_SizeY); 153 - writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX); 154 - writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY); 155 - } 156 - 157 - int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) 158 - { 159 - struct drm_device *dev = &ast->base; 160 - struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index]; 161 - struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]); 162 - struct dma_buf_map src_map, 
dst_map; 163 - void __iomem *dst; 164 - void *src; 165 - int ret; 166 - 167 - if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) || 168 - drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT)) 169 - return -EINVAL; 170 - 171 - ret = drm_gem_vram_vmap(src_gbo, &src_map); 172 - if (ret) 173 - return ret; 174 - src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ 175 - 176 - ret = drm_gem_vram_vmap(dst_gbo, &dst_map); 177 - if (ret) 178 - goto err_drm_gem_vram_vunmap; 179 - dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ 180 - 181 - /* do data transfer to cursor BO */ 182 - update_cursor_image(dst, src, fb->width, fb->height); 183 - 184 - drm_gem_vram_vunmap(dst_gbo, &dst_map); 185 - drm_gem_vram_vunmap(src_gbo, &src_map); 186 - 187 - return 0; 188 - 189 - err_drm_gem_vram_vunmap: 190 - drm_gem_vram_vunmap(src_gbo, &src_map); 191 - return ret; 192 - } 193 - 194 - static void ast_cursor_set_base(struct ast_private *ast, u64 address) 195 - { 196 - u8 addr0 = (address >> 3) & 0xff; 197 - u8 addr1 = (address >> 11) & 0xff; 198 - u8 addr2 = (address >> 19) & 0xff; 199 - 200 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, addr0); 201 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, addr1); 202 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, addr2); 203 - } 204 - 205 - void ast_cursor_page_flip(struct ast_private *ast) 206 - { 207 - struct drm_device *dev = &ast->base; 208 - struct drm_gem_vram_object *gbo; 209 - s64 off; 210 - 211 - gbo = ast->cursor.gbo[ast->cursor.next_index]; 212 - 213 - off = drm_gem_vram_offset(gbo); 214 - if (drm_WARN_ON_ONCE(dev, off < 0)) 215 - return; /* Bug: we didn't pin the cursor HW BO to VRAM. 
*/ 216 - 217 - ast_cursor_set_base(ast, off); 218 - 219 - ++ast->cursor.next_index; 220 - ast->cursor.next_index %= ARRAY_SIZE(ast->cursor.gbo); 221 - } 222 - 223 - static void ast_cursor_set_location(struct ast_private *ast, u16 x, u16 y, 224 - u8 x_offset, u8 y_offset) 225 - { 226 - u8 x0 = (x & 0x00ff); 227 - u8 x1 = (x & 0x0f00) >> 8; 228 - u8 y0 = (y & 0x00ff); 229 - u8 y1 = (y & 0x0700) >> 8; 230 - 231 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc2, x_offset); 232 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc3, y_offset); 233 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc4, x0); 234 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc5, x1); 235 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc6, y0); 236 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, y1); 237 - } 238 - 239 - void ast_cursor_show(struct ast_private *ast, int x, int y, 240 - unsigned int offset_x, unsigned int offset_y) 241 - { 242 - struct drm_device *dev = &ast->base; 243 - struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index]; 244 - struct dma_buf_map map; 245 - u8 x_offset, y_offset; 246 - u8 __iomem *dst; 247 - u8 __iomem *sig; 248 - u8 jreg; 249 - int ret; 250 - 251 - ret = drm_gem_vram_vmap(gbo, &map); 252 - if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) 253 - return; 254 - dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ 255 - 256 - sig = dst + AST_HWC_SIZE; 257 - writel(x, sig + AST_HWC_SIGNATURE_X); 258 - writel(y, sig + AST_HWC_SIGNATURE_Y); 259 - 260 - drm_gem_vram_vunmap(gbo, &map); 261 - 262 - if (x < 0) { 263 - x_offset = (-x) + offset_x; 264 - x = 0; 265 - } else { 266 - x_offset = offset_x; 267 - } 268 - if (y < 0) { 269 - y_offset = (-y) + offset_y; 270 - y = 0; 271 - } else { 272 - y_offset = offset_y; 273 - } 274 - 275 - ast_cursor_set_location(ast, x, y, x_offset, y_offset); 276 - 277 - /* dummy write to fire HWC */ 278 - jreg = 0x02 | 279 - 0x01; /* enable ARGB4444 cursor */ 280 - ast_set_index_reg_mask(ast, 
AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg); 281 - } 282 - 283 - void ast_cursor_hide(struct ast_private *ast) 284 - { 285 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00); 286 - }
+2
drivers/gpu/drm/ast/ast_drv.c
··· 30 30 #include <linux/module.h> 31 31 #include <linux/pci.h> 32 32 33 + #include <drm/drm_atomic_helper.h> 33 34 #include <drm/drm_crtc_helper.h> 34 35 #include <drm/drm_drv.h> 35 36 #include <drm/drm_fb_helper.h> ··· 139 138 struct drm_device *dev = pci_get_drvdata(pdev); 140 139 141 140 drm_dev_unregister(dev); 141 + drm_atomic_helper_shutdown(dev); 142 142 } 143 143 144 144 static int ast_drm_freeze(struct drm_device *dev)
+33 -14
drivers/gpu/drm/ast/ast_drv.h
··· 81 81 #define AST_DRAM_4Gx16 7 82 82 #define AST_DRAM_8Gx16 8 83 83 84 + /* 85 + * Cursor plane 86 + */ 84 87 85 88 #define AST_MAX_HWC_WIDTH 64 86 89 #define AST_MAX_HWC_HEIGHT 64 ··· 102 99 #define AST_HWC_SIGNATURE_HOTSPOTX 0x14 103 100 #define AST_HWC_SIGNATURE_HOTSPOTY 0x18 104 101 102 + struct ast_cursor_plane { 103 + struct drm_plane base; 104 + 105 + struct { 106 + struct drm_gem_vram_object *gbo; 107 + struct dma_buf_map map; 108 + u64 off; 109 + } hwc[AST_DEFAULT_HWC_NUM]; 110 + 111 + unsigned int next_hwc_index; 112 + }; 113 + 114 + static inline struct ast_cursor_plane * 115 + to_ast_cursor_plane(struct drm_plane *plane) 116 + { 117 + return container_of(plane, struct ast_cursor_plane, base); 118 + } 119 + 120 + /* 121 + * Connector with i2c channel 122 + */ 123 + 105 124 struct ast_i2c_chan { 106 125 struct i2c_adapter adapter; 107 126 struct drm_device *dev; ··· 141 116 return container_of(connector, struct ast_connector, base); 142 117 } 143 118 119 + /* 120 + * Device 121 + */ 122 + 144 123 struct ast_private { 145 124 struct drm_device base; 146 125 ··· 159 130 160 131 int fb_mtrr; 161 132 162 - struct { 163 - struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; 164 - unsigned int next_index; 165 - } cursor; 166 - 167 133 struct drm_plane primary_plane; 168 - struct drm_plane cursor_plane; 134 + struct ast_cursor_plane cursor_plane; 169 135 struct drm_crtc crtc; 170 136 struct drm_encoder encoder; 171 137 struct ast_connector connector; ··· 202 178 #define AST_IO_MM_OFFSET (0x380) 203 179 204 180 #define AST_IO_VGAIR1_VREFRESH BIT(3) 181 + 182 + #define AST_IO_VGACRCB_HWC_ENABLED BIT(1) 183 + #define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */ 205 184 206 185 #define __ast_read(x) \ 207 186 static inline u##x ast_read##x(struct ast_private *ast, u32 reg) { \ ··· 340 313 bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata); 341 314 u8 ast_get_dp501_max_clk(struct drm_device *dev); 342 315 void 
ast_init_3rdtx(struct drm_device *dev); 343 - 344 - /* ast_cursor.c */ 345 - int ast_cursor_init(struct ast_private *ast); 346 - int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb); 347 - void ast_cursor_page_flip(struct ast_private *ast); 348 - void ast_cursor_show(struct ast_private *ast, int x, int y, 349 - unsigned int offset_x, unsigned int offset_y); 350 - void ast_cursor_hide(struct ast_private *ast); 351 316 352 317 #endif
+317 -83
drivers/gpu/drm/ast/ast_mode.c
··· 37 37 #include <drm/drm_crtc.h> 38 38 #include <drm/drm_crtc_helper.h> 39 39 #include <drm/drm_fourcc.h> 40 + #include <drm/drm_gem_atomic_helper.h> 40 41 #include <drm/drm_gem_framebuffer_helper.h> 41 42 #include <drm/drm_gem_vram_helper.h> 42 43 #include <drm/drm_plane_helper.h> ··· 536 535 }; 537 536 538 537 static int ast_primary_plane_helper_atomic_check(struct drm_plane *plane, 539 - struct drm_plane_state *state) 538 + struct drm_atomic_state *state) 540 539 { 540 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 541 + plane); 541 542 struct drm_crtc_state *crtc_state; 542 543 struct ast_crtc_state *ast_crtc_state; 543 544 int ret; 544 545 545 - if (!state->crtc) 546 + if (!new_plane_state->crtc) 546 547 return 0; 547 548 548 - crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 549 + crtc_state = drm_atomic_get_new_crtc_state(state, 550 + new_plane_state->crtc); 549 551 550 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 552 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 551 553 DRM_PLANE_HELPER_NO_SCALING, 552 554 DRM_PLANE_HELPER_NO_SCALING, 553 555 false, true); 554 556 if (ret) 555 557 return ret; 556 558 557 - if (!state->visible) 559 + if (!new_plane_state->visible) 558 560 return 0; 559 561 560 562 ast_crtc_state = to_ast_crtc_state(crtc_state); 561 563 562 - ast_crtc_state->format = state->fb->format; 564 + ast_crtc_state->format = new_plane_state->fb->format; 563 565 564 566 return 0; 565 567 } 566 568 567 569 static void 568 570 ast_primary_plane_helper_atomic_update(struct drm_plane *plane, 569 - struct drm_plane_state *old_state) 571 + struct drm_atomic_state *state) 570 572 { 573 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 574 + plane); 571 575 struct drm_device *dev = plane->dev; 572 576 struct ast_private *ast = to_ast_private(dev); 573 - struct drm_plane_state *state = plane->state; 577 + struct drm_plane_state 
*new_state = drm_atomic_get_new_plane_state(state, 578 + plane); 574 579 struct drm_gem_vram_object *gbo; 575 580 s64 gpu_addr; 576 - struct drm_framebuffer *fb = state->fb; 581 + struct drm_framebuffer *fb = new_state->fb; 577 582 struct drm_framebuffer *old_fb = old_state->fb; 578 583 579 584 if (!old_fb || (fb->format != old_fb->format)) { 580 - struct drm_crtc_state *crtc_state = state->crtc->state; 585 + struct drm_crtc_state *crtc_state = new_state->crtc->state; 581 586 struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state); 582 587 struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info; 583 588 ··· 604 597 605 598 static void 606 599 ast_primary_plane_helper_atomic_disable(struct drm_plane *plane, 607 - struct drm_plane_state *old_state) 600 + struct drm_atomic_state *state) 608 601 { 609 602 struct ast_private *ast = to_ast_private(plane->dev); 610 603 ··· 628 621 .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 629 622 }; 630 623 624 + static int ast_primary_plane_init(struct ast_private *ast) 625 + { 626 + struct drm_device *dev = &ast->base; 627 + struct drm_plane *primary_plane = &ast->primary_plane; 628 + int ret; 629 + 630 + ret = drm_universal_plane_init(dev, primary_plane, 0x01, 631 + &ast_primary_plane_funcs, 632 + ast_primary_plane_formats, 633 + ARRAY_SIZE(ast_primary_plane_formats), 634 + NULL, DRM_PLANE_TYPE_PRIMARY, NULL); 635 + if (ret) { 636 + drm_err(dev, "drm_universal_plane_init() failed: %d\n", ret); 637 + return ret; 638 + } 639 + drm_plane_helper_add(primary_plane, &ast_primary_plane_helper_funcs); 640 + 641 + return 0; 642 + } 643 + 631 644 /* 632 645 * Cursor plane 633 646 */ 647 + 648 + static void ast_update_cursor_image(u8 __iomem *dst, const u8 *src, int width, int height) 649 + { 650 + union { 651 + u32 ul; 652 + u8 b[4]; 653 + } srcdata32[2], data32; 654 + union { 655 + u16 us; 656 + u8 b[2]; 657 + } data16; 658 + u32 csum = 0; 659 + s32 alpha_dst_delta, last_alpha_dst_delta; 
660 + u8 __iomem *dstxor; 661 + const u8 *srcxor; 662 + int i, j; 663 + u32 per_pixel_copy, two_pixel_copy; 664 + 665 + alpha_dst_delta = AST_MAX_HWC_WIDTH << 1; 666 + last_alpha_dst_delta = alpha_dst_delta - (width << 1); 667 + 668 + srcxor = src; 669 + dstxor = (u8 *)dst + last_alpha_dst_delta + (AST_MAX_HWC_HEIGHT - height) * alpha_dst_delta; 670 + per_pixel_copy = width & 1; 671 + two_pixel_copy = width >> 1; 672 + 673 + for (j = 0; j < height; j++) { 674 + for (i = 0; i < two_pixel_copy; i++) { 675 + srcdata32[0].ul = *((u32 *)srcxor) & 0xf0f0f0f0; 676 + srcdata32[1].ul = *((u32 *)(srcxor + 4)) & 0xf0f0f0f0; 677 + data32.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); 678 + data32.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); 679 + data32.b[2] = srcdata32[1].b[1] | (srcdata32[1].b[0] >> 4); 680 + data32.b[3] = srcdata32[1].b[3] | (srcdata32[1].b[2] >> 4); 681 + 682 + writel(data32.ul, dstxor); 683 + csum += data32.ul; 684 + 685 + dstxor += 4; 686 + srcxor += 8; 687 + 688 + } 689 + 690 + for (i = 0; i < per_pixel_copy; i++) { 691 + srcdata32[0].ul = *((u32 *)srcxor) & 0xf0f0f0f0; 692 + data16.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); 693 + data16.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); 694 + writew(data16.us, dstxor); 695 + csum += (u32)data16.us; 696 + 697 + dstxor += 2; 698 + srcxor += 4; 699 + } 700 + dstxor += last_alpha_dst_delta; 701 + } 702 + 703 + /* write checksum + signature */ 704 + dst += AST_HWC_SIZE; 705 + writel(csum, dst); 706 + writel(width, dst + AST_HWC_SIGNATURE_SizeX); 707 + writel(height, dst + AST_HWC_SIGNATURE_SizeY); 708 + writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX); 709 + writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY); 710 + } 711 + 712 + static void ast_set_cursor_base(struct ast_private *ast, u64 address) 713 + { 714 + u8 addr0 = (address >> 3) & 0xff; 715 + u8 addr1 = (address >> 11) & 0xff; 716 + u8 addr2 = (address >> 19) & 0xff; 717 + 718 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, addr0); 
719 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, addr1); 720 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, addr2); 721 + } 722 + 723 + static void ast_set_cursor_location(struct ast_private *ast, u16 x, u16 y, 724 + u8 x_offset, u8 y_offset) 725 + { 726 + u8 x0 = (x & 0x00ff); 727 + u8 x1 = (x & 0x0f00) >> 8; 728 + u8 y0 = (y & 0x00ff); 729 + u8 y1 = (y & 0x0700) >> 8; 730 + 731 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc2, x_offset); 732 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc3, y_offset); 733 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc4, x0); 734 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc5, x1); 735 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc6, y0); 736 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, y1); 737 + } 738 + 739 + static void ast_set_cursor_enabled(struct ast_private *ast, bool enabled) 740 + { 741 + static const u8 mask = (u8)~(AST_IO_VGACRCB_HWC_16BPP | 742 + AST_IO_VGACRCB_HWC_ENABLED); 743 + 744 + u8 vgacrcb = AST_IO_VGACRCB_HWC_16BPP; 745 + 746 + if (enabled) 747 + vgacrcb |= AST_IO_VGACRCB_HWC_ENABLED; 748 + 749 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, mask, vgacrcb); 750 + } 634 751 635 752 static const uint32_t ast_cursor_plane_formats[] = { 636 753 DRM_FORMAT_ARGB8888, 637 754 }; 638 755 639 - static int 640 - ast_cursor_plane_helper_prepare_fb(struct drm_plane *plane, 641 - struct drm_plane_state *new_state) 642 - { 643 - struct drm_framebuffer *fb = new_state->fb; 644 - struct drm_crtc *crtc = new_state->crtc; 645 - struct ast_private *ast; 646 - int ret; 647 - 648 - if (!crtc || !fb) 649 - return 0; 650 - 651 - ast = to_ast_private(plane->dev); 652 - 653 - ret = ast_cursor_blit(ast, fb); 654 - if (ret) 655 - return ret; 656 - 657 - return 0; 658 - } 659 - 660 756 static int ast_cursor_plane_helper_atomic_check(struct drm_plane *plane, 661 - struct drm_plane_state *state) 757 + struct drm_atomic_state *state) 662 758 { 663 - struct drm_framebuffer *fb = state->fb; 759 + struct drm_plane_state 
*new_plane_state = drm_atomic_get_new_plane_state(state, 760 + plane); 761 + struct drm_framebuffer *fb = new_plane_state->fb; 664 762 struct drm_crtc_state *crtc_state; 665 763 int ret; 666 764 667 - if (!state->crtc) 765 + if (!new_plane_state->crtc) 668 766 return 0; 669 767 670 - crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 768 + crtc_state = drm_atomic_get_new_crtc_state(state, 769 + new_plane_state->crtc); 671 770 672 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 771 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 673 772 DRM_PLANE_HELPER_NO_SCALING, 674 773 DRM_PLANE_HELPER_NO_SCALING, 675 774 true, true); 676 775 if (ret) 677 776 return ret; 678 777 679 - if (!state->visible) 778 + if (!new_plane_state->visible) 680 779 return 0; 681 780 682 781 if (fb->width > AST_MAX_HWC_WIDTH || fb->height > AST_MAX_HWC_HEIGHT) ··· 793 680 794 681 static void 795 682 ast_cursor_plane_helper_atomic_update(struct drm_plane *plane, 796 - struct drm_plane_state *old_state) 683 + struct drm_atomic_state *state) 797 684 { 798 - struct drm_plane_state *state = plane->state; 799 - struct drm_framebuffer *fb = state->fb; 685 + struct ast_cursor_plane *ast_cursor_plane = to_ast_cursor_plane(plane); 686 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 687 + plane); 688 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 689 + plane); 690 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(new_state); 691 + struct drm_framebuffer *fb = new_state->fb; 800 692 struct ast_private *ast = to_ast_private(plane->dev); 693 + struct dma_buf_map dst_map = 694 + ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].map; 695 + u64 dst_off = 696 + ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].off; 697 + struct dma_buf_map src_map = shadow_plane_state->map[0]; 801 698 unsigned int offset_x, offset_y; 699 + u16 x, y; 700 + u8 x_offset, 
y_offset; 701 + u8 __iomem *dst; 702 + u8 __iomem *sig; 703 + const u8 *src; 802 704 803 - offset_x = AST_MAX_HWC_WIDTH - fb->width; 804 - offset_y = AST_MAX_HWC_WIDTH - fb->height; 705 + src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ 706 + dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ 707 + sig = dst + AST_HWC_SIZE; /* TODO: Use mapping abstraction properly */ 805 708 806 - if (state->fb != old_state->fb) { 807 - /* A new cursor image was installed. */ 808 - ast_cursor_page_flip(ast); 709 + /* 710 + * Do data transfer to HW cursor BO. If a new cursor image was installed, 711 + * point the scanout engine to dst_gbo's offset and page-flip the HWC buffers. 712 + */ 713 + 714 + ast_update_cursor_image(dst, src, fb->width, fb->height); 715 + 716 + if (new_state->fb != old_state->fb) { 717 + ast_set_cursor_base(ast, dst_off); 718 + 719 + ++ast_cursor_plane->next_hwc_index; 720 + ast_cursor_plane->next_hwc_index %= ARRAY_SIZE(ast_cursor_plane->hwc); 809 721 } 810 722 811 - ast_cursor_show(ast, state->crtc_x, state->crtc_y, 812 - offset_x, offset_y); 723 + /* 724 + * Update location in HWC signature and registers. 725 + */ 726 + 727 + writel(new_state->crtc_x, sig + AST_HWC_SIGNATURE_X); 728 + writel(new_state->crtc_y, sig + AST_HWC_SIGNATURE_Y); 729 + 730 + offset_x = AST_MAX_HWC_WIDTH - fb->width; 731 + offset_y = AST_MAX_HWC_HEIGHT - fb->height; 732 + 733 + if (new_state->crtc_x < 0) { 734 + x_offset = (-new_state->crtc_x) + offset_x; 735 + x = 0; 736 + } else { 737 + x_offset = offset_x; 738 + x = new_state->crtc_x; 739 + } 740 + if (new_state->crtc_y < 0) { 741 + y_offset = (-new_state->crtc_y) + offset_y; 742 + y = 0; 743 + } else { 744 + y_offset = offset_y; 745 + y = new_state->crtc_y; 746 + } 747 + 748 + ast_set_cursor_location(ast, x, y, x_offset, y_offset); 749 + 750 + /* Dummy write to enable HWC and make the HW pick-up the changes. 
*/ 751 + ast_set_cursor_enabled(ast, true); 813 752 } 814 753 815 754 static void 816 755 ast_cursor_plane_helper_atomic_disable(struct drm_plane *plane, 817 - struct drm_plane_state *old_state) 756 + struct drm_atomic_state *state) 818 757 { 819 758 struct ast_private *ast = to_ast_private(plane->dev); 820 759 821 - ast_cursor_hide(ast); 760 + ast_set_cursor_enabled(ast, false); 822 761 } 823 762 824 763 static const struct drm_plane_helper_funcs ast_cursor_plane_helper_funcs = { 825 - .prepare_fb = ast_cursor_plane_helper_prepare_fb, 826 - .cleanup_fb = NULL, /* not required for cursor plane */ 764 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 827 765 .atomic_check = ast_cursor_plane_helper_atomic_check, 828 766 .atomic_update = ast_cursor_plane_helper_atomic_update, 829 767 .atomic_disable = ast_cursor_plane_helper_atomic_disable, 830 768 }; 831 769 770 + static void ast_cursor_plane_destroy(struct drm_plane *plane) 771 + { 772 + struct ast_cursor_plane *ast_cursor_plane = to_ast_cursor_plane(plane); 773 + size_t i; 774 + struct drm_gem_vram_object *gbo; 775 + struct dma_buf_map map; 776 + 777 + for (i = 0; i < ARRAY_SIZE(ast_cursor_plane->hwc); ++i) { 778 + gbo = ast_cursor_plane->hwc[i].gbo; 779 + map = ast_cursor_plane->hwc[i].map; 780 + drm_gem_vram_vunmap(gbo, &map); 781 + drm_gem_vram_unpin(gbo); 782 + drm_gem_vram_put(gbo); 783 + } 784 + 785 + drm_plane_cleanup(plane); 786 + } 787 + 832 788 static const struct drm_plane_funcs ast_cursor_plane_funcs = { 833 789 .update_plane = drm_atomic_helper_update_plane, 834 790 .disable_plane = drm_atomic_helper_disable_plane, 835 - .destroy = drm_plane_cleanup, 836 - .reset = drm_atomic_helper_plane_reset, 837 - .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 838 - .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 791 + .destroy = ast_cursor_plane_destroy, 792 + DRM_GEM_SHADOW_PLANE_FUNCS, 839 793 }; 794 + 795 + static int ast_cursor_plane_init(struct ast_private *ast) 796 + { 797 + struct 
drm_device *dev = &ast->base; 798 + struct ast_cursor_plane *ast_cursor_plane = &ast->cursor_plane; 799 + struct drm_plane *cursor_plane = &ast_cursor_plane->base; 800 + size_t size, i; 801 + struct drm_gem_vram_object *gbo; 802 + struct dma_buf_map map; 803 + int ret; 804 + s64 off; 805 + 806 + /* 807 + * Allocate backing storage for cursors. The BOs are permanently 808 + * pinned to the top end of the VRAM. 809 + */ 810 + 811 + size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); 812 + 813 + for (i = 0; i < ARRAY_SIZE(ast_cursor_plane->hwc); ++i) { 814 + gbo = drm_gem_vram_create(dev, size, 0); 815 + if (IS_ERR(gbo)) { 816 + ret = PTR_ERR(gbo); 817 + goto err_hwc; 818 + } 819 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM | 820 + DRM_GEM_VRAM_PL_FLAG_TOPDOWN); 821 + if (ret) 822 + goto err_drm_gem_vram_put; 823 + ret = drm_gem_vram_vmap(gbo, &map); 824 + if (ret) 825 + goto err_drm_gem_vram_unpin; 826 + off = drm_gem_vram_offset(gbo); 827 + if (off < 0) { 828 + ret = off; 829 + goto err_drm_gem_vram_vunmap; 830 + } 831 + ast_cursor_plane->hwc[i].gbo = gbo; 832 + ast_cursor_plane->hwc[i].map = map; 833 + ast_cursor_plane->hwc[i].off = off; 834 + } 835 + 836 + /* 837 + * Create the cursor plane. The plane's destroy callback will release 838 + * the backing storages' BO memory. 
839 + */ 840 + 841 + ret = drm_universal_plane_init(dev, cursor_plane, 0x01, 842 + &ast_cursor_plane_funcs, 843 + ast_cursor_plane_formats, 844 + ARRAY_SIZE(ast_cursor_plane_formats), 845 + NULL, DRM_PLANE_TYPE_CURSOR, NULL); 846 + if (ret) { 847 + drm_err(dev, "drm_universal_plane failed(): %d\n", ret); 848 + goto err_hwc; 849 + } 850 + drm_plane_helper_add(cursor_plane, &ast_cursor_plane_helper_funcs); 851 + 852 + return 0; 853 + 854 + err_hwc: 855 + while (i) { 856 + --i; 857 + gbo = ast_cursor_plane->hwc[i].gbo; 858 + map = ast_cursor_plane->hwc[i].map; 859 + err_drm_gem_vram_vunmap: 860 + drm_gem_vram_vunmap(gbo, &map); 861 + err_drm_gem_vram_unpin: 862 + drm_gem_vram_unpin(gbo); 863 + err_drm_gem_vram_put: 864 + drm_gem_vram_put(gbo); 865 + } 866 + return ret; 867 + } 840 868 841 869 /* 842 870 * CRTC ··· 1171 917 int ret; 1172 918 1173 919 ret = drm_crtc_init_with_planes(dev, crtc, &ast->primary_plane, 1174 - &ast->cursor_plane, &ast_crtc_funcs, 920 + &ast->cursor_plane.base, &ast_crtc_funcs, 1175 921 NULL); 1176 922 if (ret) 1177 923 return ret; ··· 1363 1109 struct pci_dev *pdev = to_pci_dev(dev->dev); 1364 1110 int ret; 1365 1111 1366 - ret = ast_cursor_init(ast); 1367 - if (ret) 1368 - return ret; 1369 - 1370 1112 ret = drmm_mode_config_init(dev); 1371 1113 if (ret) 1372 1114 return ret; ··· 1388 1138 1389 1139 dev->mode_config.helper_private = &ast_mode_config_helper_funcs; 1390 1140 1391 - memset(&ast->primary_plane, 0, sizeof(ast->primary_plane)); 1392 - ret = drm_universal_plane_init(dev, &ast->primary_plane, 0x01, 1393 - &ast_primary_plane_funcs, 1394 - ast_primary_plane_formats, 1395 - ARRAY_SIZE(ast_primary_plane_formats), 1396 - NULL, DRM_PLANE_TYPE_PRIMARY, NULL); 1397 - if (ret) { 1398 - drm_err(dev, "ast: drm_universal_plane_init() failed: %d\n", ret); 1399 - return ret; 1400 - } 1401 - drm_plane_helper_add(&ast->primary_plane, 1402 - &ast_primary_plane_helper_funcs); 1403 1141 1404 - ret = drm_universal_plane_init(dev, &ast->cursor_plane, 
0x01, 1405 - &ast_cursor_plane_funcs, 1406 - ast_cursor_plane_formats, 1407 - ARRAY_SIZE(ast_cursor_plane_formats), 1408 - NULL, DRM_PLANE_TYPE_CURSOR, NULL); 1409 - if (ret) { 1410 - drm_err(dev, "drm_universal_plane_failed(): %d\n", ret); 1142 + ret = ast_primary_plane_init(ast); 1143 + if (ret) 1411 1144 return ret; 1412 - } 1413 - drm_plane_helper_add(&ast->cursor_plane, 1414 - &ast_cursor_plane_helper_funcs); 1145 + 1146 + ret = ast_cursor_plane_init(ast); 1147 + if (ret) 1148 + return ret; 1415 1149 1416 1150 ast_crtc_init(dev); 1417 1151 ast_encoder_init(dev);
+2 -105
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
··· 557 557 return IRQ_HANDLED; 558 558 } 559 559 560 - struct atmel_hlcdc_dc_commit { 561 - struct work_struct work; 562 - struct drm_device *dev; 563 - struct drm_atomic_state *state; 564 - }; 565 - 566 - static void 567 - atmel_hlcdc_dc_atomic_complete(struct atmel_hlcdc_dc_commit *commit) 568 - { 569 - struct drm_device *dev = commit->dev; 570 - struct atmel_hlcdc_dc *dc = dev->dev_private; 571 - struct drm_atomic_state *old_state = commit->state; 572 - 573 - /* Apply the atomic update. */ 574 - drm_atomic_helper_commit_modeset_disables(dev, old_state); 575 - drm_atomic_helper_commit_planes(dev, old_state, 0); 576 - drm_atomic_helper_commit_modeset_enables(dev, old_state); 577 - 578 - drm_atomic_helper_wait_for_vblanks(dev, old_state); 579 - 580 - drm_atomic_helper_cleanup_planes(dev, old_state); 581 - 582 - drm_atomic_state_put(old_state); 583 - 584 - /* Complete the commit, wake up any waiter. */ 585 - spin_lock(&dc->commit.wait.lock); 586 - dc->commit.pending = false; 587 - wake_up_all_locked(&dc->commit.wait); 588 - spin_unlock(&dc->commit.wait.lock); 589 - 590 - kfree(commit); 591 - } 592 - 593 - static void atmel_hlcdc_dc_atomic_work(struct work_struct *work) 594 - { 595 - struct atmel_hlcdc_dc_commit *commit = 596 - container_of(work, struct atmel_hlcdc_dc_commit, work); 597 - 598 - atmel_hlcdc_dc_atomic_complete(commit); 599 - } 600 - 601 - static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev, 602 - struct drm_atomic_state *state, 603 - bool async) 604 - { 605 - struct atmel_hlcdc_dc *dc = dev->dev_private; 606 - struct atmel_hlcdc_dc_commit *commit; 607 - int ret; 608 - 609 - ret = drm_atomic_helper_prepare_planes(dev, state); 610 - if (ret) 611 - return ret; 612 - 613 - /* Allocate the commit object. 
*/ 614 - commit = kzalloc(sizeof(*commit), GFP_KERNEL); 615 - if (!commit) { 616 - ret = -ENOMEM; 617 - goto error; 618 - } 619 - 620 - INIT_WORK(&commit->work, atmel_hlcdc_dc_atomic_work); 621 - commit->dev = dev; 622 - commit->state = state; 623 - 624 - spin_lock(&dc->commit.wait.lock); 625 - ret = wait_event_interruptible_locked(dc->commit.wait, 626 - !dc->commit.pending); 627 - if (ret == 0) 628 - dc->commit.pending = true; 629 - spin_unlock(&dc->commit.wait.lock); 630 - 631 - if (ret) 632 - goto err_free; 633 - 634 - /* We have our own synchronization through the commit lock. */ 635 - BUG_ON(drm_atomic_helper_swap_state(state, false) < 0); 636 - 637 - /* Swap state succeeded, this is the point of no return. */ 638 - drm_atomic_state_get(state); 639 - if (async) 640 - queue_work(dc->wq, &commit->work); 641 - else 642 - atmel_hlcdc_dc_atomic_complete(commit); 643 - 644 - return 0; 645 - 646 - err_free: 647 - kfree(commit); 648 - error: 649 - drm_atomic_helper_cleanup_planes(dev, state); 650 - return ret; 651 - } 652 - 653 560 static const struct drm_mode_config_funcs mode_config_funcs = { 654 561 .fb_create = drm_gem_fb_create, 655 562 .atomic_check = drm_atomic_helper_check, 656 - .atomic_commit = atmel_hlcdc_dc_atomic_commit, 563 + .atomic_commit = drm_atomic_helper_commit, 657 564 }; 658 565 659 566 static int atmel_hlcdc_dc_modeset_init(struct drm_device *dev) ··· 619 712 if (!dc) 620 713 return -ENOMEM; 621 714 622 - dc->wq = alloc_ordered_workqueue("atmel-hlcdc-dc", 0); 623 - if (!dc->wq) 624 - return -ENOMEM; 625 - 626 - init_waitqueue_head(&dc->commit.wait); 627 715 dc->desc = match->data; 628 716 dc->hlcdc = dev_get_drvdata(dev->dev->parent); 629 717 dev->dev_private = dc; ··· 626 724 ret = clk_prepare_enable(dc->hlcdc->periph_clk); 627 725 if (ret) { 628 726 dev_err(dev->dev, "failed to enable periph_clk\n"); 629 - goto err_destroy_wq; 727 + return ret; 630 728 } 631 729 632 730 pm_runtime_enable(dev->dev); ··· 663 761 pm_runtime_disable(dev->dev); 664 
762 clk_disable_unprepare(dc->hlcdc->periph_clk); 665 763 666 - err_destroy_wq: 667 - destroy_workqueue(dc->wq); 668 - 669 764 return ret; 670 765 } 671 766 ··· 670 771 { 671 772 struct atmel_hlcdc_dc *dc = dev->dev_private; 672 773 673 - flush_workqueue(dc->wq); 674 774 drm_kms_helper_poll_fini(dev); 675 775 drm_atomic_helper_shutdown(dev); 676 776 drm_mode_config_cleanup(dev); ··· 682 784 683 785 pm_runtime_disable(dev->dev); 684 786 clk_disable_unprepare(dc->hlcdc->periph_clk); 685 - destroy_workqueue(dc->wq); 686 787 } 687 788 688 789 static int atmel_hlcdc_dc_irq_postinstall(struct drm_device *dev)
-7
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
··· 331 331 * @crtc: CRTC provided by the display controller 332 332 * @planes: instantiated planes 333 333 * @layers: active HLCDC layers 334 - * @wq: display controller workqueue 335 334 * @suspend: used to store the HLCDC state when entering suspend 336 - * @commit: used for async commit handling 337 335 */ 338 336 struct atmel_hlcdc_dc { 339 337 const struct atmel_hlcdc_dc_desc *desc; ··· 339 341 struct atmel_hlcdc *hlcdc; 340 342 struct drm_crtc *crtc; 341 343 struct atmel_hlcdc_layer *layers[ATMEL_HLCDC_MAX_LAYERS]; 342 - struct workqueue_struct *wq; 343 344 struct { 344 345 u32 imr; 345 346 struct drm_atomic_state *state; 346 347 } suspend; 347 - struct { 348 - wait_queue_head_t wait; 349 - bool pending; 350 - } commit; 351 348 }; 352 349 353 350 extern struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats;
+69 -66
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 593 593 } 594 594 595 595 static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p, 596 - struct drm_plane_state *s) 596 + struct drm_atomic_state *state) 597 597 { 598 + struct drm_plane_state *s = drm_atomic_get_new_plane_state(state, p); 598 599 struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p); 599 - struct atmel_hlcdc_plane_state *state = 600 + struct atmel_hlcdc_plane_state *hstate = 600 601 drm_plane_state_to_atmel_hlcdc_plane_state(s); 601 602 const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc; 602 - struct drm_framebuffer *fb = state->base.fb; 603 + struct drm_framebuffer *fb = hstate->base.fb; 603 604 const struct drm_display_mode *mode; 604 605 struct drm_crtc_state *crtc_state; 605 606 int ret; 606 607 int i; 607 608 608 - if (!state->base.crtc || WARN_ON(!fb)) 609 + if (!hstate->base.crtc || WARN_ON(!fb)) 609 610 return 0; 610 611 611 - crtc_state = drm_atomic_get_existing_crtc_state(s->state, s->crtc); 612 + crtc_state = drm_atomic_get_existing_crtc_state(state, s->crtc); 612 613 mode = &crtc_state->adjusted_mode; 613 614 614 615 ret = drm_atomic_helper_check_plane_state(s, crtc_state, ··· 618 617 if (ret || !s->visible) 619 618 return ret; 620 619 621 - state->src_x = s->src.x1; 622 - state->src_y = s->src.y1; 623 - state->src_w = drm_rect_width(&s->src); 624 - state->src_h = drm_rect_height(&s->src); 625 - state->crtc_x = s->dst.x1; 626 - state->crtc_y = s->dst.y1; 627 - state->crtc_w = drm_rect_width(&s->dst); 628 - state->crtc_h = drm_rect_height(&s->dst); 620 + hstate->src_x = s->src.x1; 621 + hstate->src_y = s->src.y1; 622 + hstate->src_w = drm_rect_width(&s->src); 623 + hstate->src_h = drm_rect_height(&s->src); 624 + hstate->crtc_x = s->dst.x1; 625 + hstate->crtc_y = s->dst.y1; 626 + hstate->crtc_w = drm_rect_width(&s->dst); 627 + hstate->crtc_h = drm_rect_height(&s->dst); 629 628 630 - if ((state->src_x | state->src_y | state->src_w | state->src_h) & 629 + if ((hstate->src_x | hstate->src_y | hstate->src_w 
| hstate->src_h) & 631 630 SUBPIXEL_MASK) 632 631 return -EINVAL; 633 632 634 - state->src_x >>= 16; 635 - state->src_y >>= 16; 636 - state->src_w >>= 16; 637 - state->src_h >>= 16; 633 + hstate->src_x >>= 16; 634 + hstate->src_y >>= 16; 635 + hstate->src_w >>= 16; 636 + hstate->src_h >>= 16; 638 637 639 - state->nplanes = fb->format->num_planes; 640 - if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES) 638 + hstate->nplanes = fb->format->num_planes; 639 + if (hstate->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES) 641 640 return -EINVAL; 642 641 643 - for (i = 0; i < state->nplanes; i++) { 642 + for (i = 0; i < hstate->nplanes; i++) { 644 643 unsigned int offset = 0; 645 644 int xdiv = i ? fb->format->hsub : 1; 646 645 int ydiv = i ? fb->format->vsub : 1; 647 646 648 - state->bpp[i] = fb->format->cpp[i]; 649 - if (!state->bpp[i]) 647 + hstate->bpp[i] = fb->format->cpp[i]; 648 + if (!hstate->bpp[i]) 650 649 return -EINVAL; 651 650 652 - switch (state->base.rotation & DRM_MODE_ROTATE_MASK) { 651 + switch (hstate->base.rotation & DRM_MODE_ROTATE_MASK) { 653 652 case DRM_MODE_ROTATE_90: 654 - offset = (state->src_y / ydiv) * 653 + offset = (hstate->src_y / ydiv) * 655 654 fb->pitches[i]; 656 - offset += ((state->src_x + state->src_w - 1) / 657 - xdiv) * state->bpp[i]; 658 - state->xstride[i] = -(((state->src_h - 1) / ydiv) * 655 + offset += ((hstate->src_x + hstate->src_w - 1) / 656 + xdiv) * hstate->bpp[i]; 657 + hstate->xstride[i] = -(((hstate->src_h - 1) / ydiv) * 659 658 fb->pitches[i]) - 660 - (2 * state->bpp[i]); 661 - state->pstride[i] = fb->pitches[i] - state->bpp[i]; 659 + (2 * hstate->bpp[i]); 660 + hstate->pstride[i] = fb->pitches[i] - hstate->bpp[i]; 662 661 break; 663 662 case DRM_MODE_ROTATE_180: 664 - offset = ((state->src_y + state->src_h - 1) / 663 + offset = ((hstate->src_y + hstate->src_h - 1) / 665 664 ydiv) * fb->pitches[i]; 666 - offset += ((state->src_x + state->src_w - 1) / 667 - xdiv) * state->bpp[i]; 668 - state->xstride[i] = ((((state->src_w - 1) / 
xdiv) - 1) * 669 - state->bpp[i]) - fb->pitches[i]; 670 - state->pstride[i] = -2 * state->bpp[i]; 665 + offset += ((hstate->src_x + hstate->src_w - 1) / 666 + xdiv) * hstate->bpp[i]; 667 + hstate->xstride[i] = ((((hstate->src_w - 1) / xdiv) - 1) * 668 + hstate->bpp[i]) - fb->pitches[i]; 669 + hstate->pstride[i] = -2 * hstate->bpp[i]; 671 670 break; 672 671 case DRM_MODE_ROTATE_270: 673 - offset = ((state->src_y + state->src_h - 1) / 672 + offset = ((hstate->src_y + hstate->src_h - 1) / 674 673 ydiv) * fb->pitches[i]; 675 - offset += (state->src_x / xdiv) * state->bpp[i]; 676 - state->xstride[i] = ((state->src_h - 1) / ydiv) * 674 + offset += (hstate->src_x / xdiv) * hstate->bpp[i]; 675 + hstate->xstride[i] = ((hstate->src_h - 1) / ydiv) * 677 676 fb->pitches[i]; 678 - state->pstride[i] = -fb->pitches[i] - state->bpp[i]; 677 + hstate->pstride[i] = -fb->pitches[i] - hstate->bpp[i]; 679 678 break; 680 679 case DRM_MODE_ROTATE_0: 681 680 default: 682 - offset = (state->src_y / ydiv) * fb->pitches[i]; 683 - offset += (state->src_x / xdiv) * state->bpp[i]; 684 - state->xstride[i] = fb->pitches[i] - 685 - ((state->src_w / xdiv) * 686 - state->bpp[i]); 687 - state->pstride[i] = 0; 681 + offset = (hstate->src_y / ydiv) * fb->pitches[i]; 682 + offset += (hstate->src_x / xdiv) * hstate->bpp[i]; 683 + hstate->xstride[i] = fb->pitches[i] - 684 + ((hstate->src_w / xdiv) * 685 + hstate->bpp[i]); 686 + hstate->pstride[i] = 0; 688 687 break; 689 688 } 690 689 691 - state->offsets[i] = offset + fb->offsets[i]; 690 + hstate->offsets[i] = offset + fb->offsets[i]; 692 691 } 693 692 694 693 /* 695 694 * Swap width and size in case of 90 or 270 degrees rotation 696 695 */ 697 - if (drm_rotation_90_or_270(state->base.rotation)) { 698 - swap(state->src_w, state->src_h); 696 + if (drm_rotation_90_or_270(hstate->base.rotation)) { 697 + swap(hstate->src_w, hstate->src_h); 699 698 } 700 699 701 700 if (!desc->layout.size && 702 - (mode->hdisplay != state->crtc_w || 703 - mode->vdisplay != 
state->crtc_h)) 701 + (mode->hdisplay != hstate->crtc_w || 702 + mode->vdisplay != hstate->crtc_h)) 704 703 return -EINVAL; 705 704 706 - if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) && 705 + if ((hstate->crtc_h != hstate->src_h || hstate->crtc_w != hstate->src_w) && 707 706 (!desc->layout.memsize || 708 - state->base.fb->format->has_alpha)) 707 + hstate->base.fb->format->has_alpha)) 709 708 return -EINVAL; 710 709 711 710 return 0; 712 711 } 713 712 714 713 static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p, 715 - struct drm_plane_state *old_state) 714 + struct drm_atomic_state *state) 716 715 { 717 716 struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p); 718 717 ··· 731 730 } 732 731 733 732 static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p, 734 - struct drm_plane_state *old_s) 733 + struct drm_atomic_state *state) 735 734 { 735 + struct drm_plane_state *new_s = drm_atomic_get_new_plane_state(state, 736 + p); 736 737 struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p); 737 - struct atmel_hlcdc_plane_state *state = 738 - drm_plane_state_to_atmel_hlcdc_plane_state(p->state); 738 + struct atmel_hlcdc_plane_state *hstate = 739 + drm_plane_state_to_atmel_hlcdc_plane_state(new_s); 739 740 u32 sr; 740 741 741 - if (!p->state->crtc || !p->state->fb) 742 + if (!new_s->crtc || !new_s->fb) 742 743 return; 743 744 744 - if (!state->base.visible) { 745 - atmel_hlcdc_plane_atomic_disable(p, old_s); 745 + if (!hstate->base.visible) { 746 + atmel_hlcdc_plane_atomic_disable(p, state); 746 747 return; 747 748 } 748 749 749 - atmel_hlcdc_plane_update_pos_and_size(plane, state); 750 - atmel_hlcdc_plane_update_general_settings(plane, state); 751 - atmel_hlcdc_plane_update_format(plane, state); 752 - atmel_hlcdc_plane_update_clut(plane, state); 753 - atmel_hlcdc_plane_update_buffers(plane, state); 754 - atmel_hlcdc_plane_update_disc_area(plane, state); 750 + atmel_hlcdc_plane_update_pos_and_size(plane, 
hstate); 751 + atmel_hlcdc_plane_update_general_settings(plane, hstate); 752 + atmel_hlcdc_plane_update_format(plane, hstate); 753 + atmel_hlcdc_plane_update_clut(plane, hstate); 754 + atmel_hlcdc_plane_update_buffers(plane, hstate); 755 + atmel_hlcdc_plane_update_disc_area(plane, hstate); 755 756 756 757 /* Enable the overrun interrupts. */ 757 758 atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IER,
+1 -1
drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
··· 2457 2457 2458 2458 static int cdns_mhdp_remove(struct platform_device *pdev) 2459 2459 { 2460 - struct cdns_mhdp_device *mhdp = dev_get_drvdata(&pdev->dev); 2460 + struct cdns_mhdp_device *mhdp = platform_get_drvdata(pdev); 2461 2461 unsigned long timeout = msecs_to_jiffies(100); 2462 2462 bool stop_fw = false; 2463 2463 int ret;
+41 -6
drivers/gpu/drm/drm_atomic.c
··· 53 53 EXPORT_SYMBOL(__drm_crtc_commit_free); 54 54 55 55 /** 56 + * drm_crtc_commit_wait - Waits for a commit to complete 57 + * @commit: &drm_crtc_commit to wait for 58 + * 59 + * Waits for a given &drm_crtc_commit to be programmed into the 60 + * hardware and flipped to. 61 + * 62 + * Returns: 63 + * 64 + * 0 on success, a negative error code otherwise. 65 + */ 66 + int drm_crtc_commit_wait(struct drm_crtc_commit *commit) 67 + { 68 + unsigned long timeout = 10 * HZ; 69 + int ret; 70 + 71 + if (!commit) 72 + return 0; 73 + 74 + ret = wait_for_completion_timeout(&commit->hw_done, timeout); 75 + if (!ret) { 76 + DRM_ERROR("hw_done timed out\n"); 77 + return -ETIMEDOUT; 78 + } 79 + 80 + /* 81 + * Currently no support for overwriting flips, hence 82 + * stall for previous one to execute completely. 83 + */ 84 + ret = wait_for_completion_timeout(&commit->flip_done, timeout); 85 + if (!ret) { 86 + DRM_ERROR("flip_done timed out\n"); 87 + return -ETIMEDOUT; 88 + } 89 + 90 + return 0; 91 + } 92 + EXPORT_SYMBOL(drm_crtc_commit_wait); 93 + 94 + /** 56 95 * drm_atomic_state_default_release - 57 96 * release memory initialized by drm_atomic_state_init 58 97 * @state: atomic state ··· 617 578 ret = drm_plane_check_pixel_format(plane, fb->format->format, 618 579 fb->modifier); 619 580 if (ret) { 620 - struct drm_format_name_buf format_name; 621 - 622 - DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid pixel format %s, modifier 0x%llx\n", 581 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid pixel format %p4cc, modifier 0x%llx\n", 623 582 plane->base.id, plane->name, 624 - drm_get_format_name(fb->format->format, 625 - &format_name), 626 - fb->modifier); 583 + &fb->format->format, fb->modifier); 627 584 return ret; 628 585 } 629 586
+16 -59
drivers/gpu/drm/drm_atomic_helper.c
··· 902 902 if (!funcs || !funcs->atomic_check) 903 903 continue; 904 904 905 - ret = funcs->atomic_check(plane, new_plane_state); 905 + ret = funcs->atomic_check(plane, state); 906 906 if (ret) { 907 907 DRM_DEBUG_ATOMIC("[PLANE:%d:%s] atomic driver check failed\n", 908 908 plane->base.id, plane->name); ··· 1742 1742 return -EBUSY; 1743 1743 } 1744 1744 1745 - return funcs->atomic_async_check(plane, new_plane_state); 1745 + return funcs->atomic_async_check(plane, state); 1746 1746 } 1747 1747 EXPORT_SYMBOL(drm_atomic_helper_async_check); 1748 1748 ··· 1772 1772 struct drm_framebuffer *old_fb = plane->state->fb; 1773 1773 1774 1774 funcs = plane->helper_private; 1775 - funcs->atomic_async_update(plane, plane_state); 1775 + funcs->atomic_async_update(plane, state); 1776 1776 1777 1777 /* 1778 1778 * ->atomic_async_update() is supposed to update the ··· 2202 2202 struct drm_plane_state *old_plane_state; 2203 2203 struct drm_connector *conn; 2204 2204 struct drm_connector_state *old_conn_state; 2205 - struct drm_crtc_commit *commit; 2206 2205 int i; 2207 2206 long ret; 2208 2207 2209 2208 for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) { 2210 - commit = old_crtc_state->commit; 2211 - 2212 - if (!commit) 2213 - continue; 2214 - 2215 - ret = wait_for_completion_timeout(&commit->hw_done, 2216 - 10*HZ); 2217 - if (ret == 0) 2218 - DRM_ERROR("[CRTC:%d:%s] hw_done timed out\n", 2219 - crtc->base.id, crtc->name); 2220 - 2221 - /* Currently no support for overwriting flips, hence 2222 - * stall for previous one to execute completely. 
*/ 2223 - ret = wait_for_completion_timeout(&commit->flip_done, 2224 - 10*HZ); 2225 - if (ret == 0) 2226 - DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n", 2209 + ret = drm_crtc_commit_wait(old_crtc_state->commit); 2210 + if (ret) 2211 + DRM_ERROR("[CRTC:%d:%s] commit wait timed out\n", 2227 2212 crtc->base.id, crtc->name); 2228 2213 } 2229 2214 2230 2215 for_each_old_connector_in_state(old_state, conn, old_conn_state, i) { 2231 - commit = old_conn_state->commit; 2232 - 2233 - if (!commit) 2234 - continue; 2235 - 2236 - ret = wait_for_completion_timeout(&commit->hw_done, 2237 - 10*HZ); 2238 - if (ret == 0) 2239 - DRM_ERROR("[CONNECTOR:%d:%s] hw_done timed out\n", 2240 - conn->base.id, conn->name); 2241 - 2242 - /* Currently no support for overwriting flips, hence 2243 - * stall for previous one to execute completely. */ 2244 - ret = wait_for_completion_timeout(&commit->flip_done, 2245 - 10*HZ); 2246 - if (ret == 0) 2247 - DRM_ERROR("[CONNECTOR:%d:%s] flip_done timed out\n", 2216 + ret = drm_crtc_commit_wait(old_conn_state->commit); 2217 + if (ret) 2218 + DRM_ERROR("[CONNECTOR:%d:%s] commit wait timed out\n", 2248 2219 conn->base.id, conn->name); 2249 2220 } 2250 2221 2251 2222 for_each_old_plane_in_state(old_state, plane, old_plane_state, i) { 2252 - commit = old_plane_state->commit; 2253 - 2254 - if (!commit) 2255 - continue; 2256 - 2257 - ret = wait_for_completion_timeout(&commit->hw_done, 2258 - 10*HZ); 2259 - if (ret == 0) 2260 - DRM_ERROR("[PLANE:%d:%s] hw_done timed out\n", 2261 - plane->base.id, plane->name); 2262 - 2263 - /* Currently no support for overwriting flips, hence 2264 - * stall for previous one to execute completely. 
*/ 2265 - ret = wait_for_completion_timeout(&commit->flip_done, 2266 - 10*HZ); 2267 - if (ret == 0) 2268 - DRM_ERROR("[PLANE:%d:%s] flip_done timed out\n", 2223 + ret = drm_crtc_commit_wait(old_plane_state->commit); 2224 + if (ret) 2225 + DRM_ERROR("[PLANE:%d:%s] commit wait timed out\n", 2269 2226 plane->base.id, plane->name); 2270 2227 } 2271 2228 } ··· 2528 2571 no_disable) 2529 2572 continue; 2530 2573 2531 - funcs->atomic_disable(plane, old_plane_state); 2574 + funcs->atomic_disable(plane, old_state); 2532 2575 } else if (new_plane_state->crtc || disabling) { 2533 - funcs->atomic_update(plane, old_plane_state); 2576 + funcs->atomic_update(plane, old_state); 2534 2577 } 2535 2578 } 2536 2579 ··· 2602 2645 2603 2646 if (drm_atomic_plane_disabling(old_plane_state, new_plane_state) && 2604 2647 plane_funcs->atomic_disable) 2605 - plane_funcs->atomic_disable(plane, old_plane_state); 2648 + plane_funcs->atomic_disable(plane, old_state); 2606 2649 else if (new_plane_state->crtc || 2607 2650 drm_atomic_plane_disabling(old_plane_state, new_plane_state)) 2608 - plane_funcs->atomic_update(plane, old_plane_state); 2651 + plane_funcs->atomic_update(plane, old_state); 2609 2652 } 2610 2653 2611 2654 if (crtc_funcs && crtc_funcs->atomic_flush)
+2 -5
drivers/gpu/drm/drm_crtc.c
··· 735 735 fb->format->format, 736 736 fb->modifier); 737 737 if (ret) { 738 - struct drm_format_name_buf format_name; 739 - 740 - DRM_DEBUG_KMS("Invalid pixel format %s, modifier 0x%llx\n", 741 - drm_get_format_name(fb->format->format, 742 - &format_name), 738 + DRM_DEBUG_KMS("Invalid pixel format %p4cc, modifier 0x%llx\n", 739 + &fb->format->format, 743 740 fb->modifier); 744 741 goto out; 745 742 }
+22 -13
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1154 1154 1155 1155 req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE; 1156 1156 drm_dp_encode_sideband_req(&req, msg); 1157 + msg->path_msg = true; 1157 1158 } 1158 1159 1159 1160 static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg, ··· 2304 2303 2305 2304 if (port->pdt != DP_PEER_DEVICE_NONE && 2306 2305 drm_dp_mst_is_end_device(port->pdt, port->mcs) && 2307 - port->port_num >= DP_MST_LOGICAL_PORT_0) { 2306 + port->port_num >= DP_MST_LOGICAL_PORT_0) 2308 2307 port->cached_edid = drm_get_edid(port->connector, 2309 2308 &port->aux.ddc); 2310 - drm_connector_set_tile_property(port->connector); 2311 - } 2312 2309 2313 2310 drm_connector_register(port->connector); 2314 2311 return; ··· 2823 2824 2824 2825 req_type = txmsg->msg[0] & 0x7f; 2825 2826 if (req_type == DP_CONNECTION_STATUS_NOTIFY || 2826 - req_type == DP_RESOURCE_STATUS_NOTIFY) 2827 + req_type == DP_RESOURCE_STATUS_NOTIFY || 2828 + req_type == DP_CLEAR_PAYLOAD_ID_TABLE) 2827 2829 hdr->broadcast = 1; 2828 2830 else 2829 2831 hdr->broadcast = 0; 2830 2832 hdr->path_msg = txmsg->path_msg; 2831 - hdr->lct = mstb->lct; 2832 - hdr->lcr = mstb->lct - 1; 2833 - if (mstb->lct > 1) 2834 - memcpy(hdr->rad, mstb->rad, mstb->lct / 2); 2833 + if (hdr->broadcast) { 2834 + hdr->lct = 1; 2835 + hdr->lcr = 6; 2836 + } else { 2837 + hdr->lct = mstb->lct; 2838 + hdr->lcr = mstb->lct - 1; 2839 + } 2840 + 2841 + memcpy(hdr->rad, mstb->rad, hdr->lct / 2); 2835 2842 2836 2843 return 0; 2837 2844 } ··· 4239 4234 case DP_PEER_DEVICE_SST_SINK: 4240 4235 ret = connector_status_connected; 4241 4236 /* for logical ports - cache the EDID */ 4242 - if (port->port_num >= 8 && !port->cached_edid) { 4237 + if (port->port_num >= DP_MST_LOGICAL_PORT_0 && !port->cached_edid) 4243 4238 port->cached_edid = drm_get_edid(connector, &port->aux.ddc); 4244 - } 4245 4239 break; 4246 4240 case DP_PEER_DEVICE_DP_LEGACY_CONV: 4247 4241 if (port->ldps) ··· 5125 5121 if (!found) 5126 5122 return 0; 5127 5123 5128 - /* This should never 
happen, as it means we tried to 5129 - * set a mode before querying the full_pbn 5124 + /* 5125 + * This could happen if the sink deasserted its HPD line, but 5126 + * the branch device still reports it as attached (PDT != NONE). 5130 5127 */ 5131 - if (WARN_ON(!port->full_pbn)) 5128 + if (!port->full_pbn) { 5129 + drm_dbg_atomic(port->mgr->dev, 5130 + "[MSTB:%p] [MST PORT:%p] no BW available for the port\n", 5131 + port->parent, port); 5132 5132 return -EINVAL; 5133 + } 5133 5134 5134 5135 pbn_used = vcpi->pbn; 5135 5136 } else {
+1 -1
drivers/gpu/drm/drm_drv.c
··· 61 61 * prefer to embed struct drm_device into their own device 62 62 * structure and call drm_dev_init() themselves. 63 63 */ 64 - static bool drm_core_init_complete = false; 64 + static bool drm_core_init_complete; 65 65 66 66 static struct dentry *drm_debugfs_root; 67 67
+3 -8
drivers/gpu/drm/drm_framebuffer.c
··· 177 177 178 178 /* check if the format is supported at all */ 179 179 if (!__drm_format_info(r->pixel_format)) { 180 - struct drm_format_name_buf format_name; 181 - 182 - DRM_DEBUG_KMS("bad framebuffer format %s\n", 183 - drm_get_format_name(r->pixel_format, 184 - &format_name)); 180 + DRM_DEBUG_KMS("bad framebuffer format %p4cc\n", 181 + &r->pixel_format); 185 182 return -EINVAL; 186 183 } 187 184 ··· 1157 1160 void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent, 1158 1161 const struct drm_framebuffer *fb) 1159 1162 { 1160 - struct drm_format_name_buf format_name; 1161 1163 unsigned int i; 1162 1164 1163 1165 drm_printf_indent(p, indent, "allocated by = %s\n", fb->comm); 1164 1166 drm_printf_indent(p, indent, "refcount=%u\n", 1165 1167 drm_framebuffer_read_refcount(fb)); 1166 - drm_printf_indent(p, indent, "format=%s\n", 1167 - drm_get_format_name(fb->format->format, &format_name)); 1168 + drm_printf_indent(p, indent, "format=%p4cc\n", &fb->format->format); 1168 1169 drm_printf_indent(p, indent, "modifier=0x%llx\n", fb->modifier); 1169 1170 drm_printf_indent(p, indent, "size=%ux%u\n", fb->width, fb->height); 1170 1171 drm_printf_indent(p, indent, "layers:\n");
+2
drivers/gpu/drm/drm_gem.c
··· 1212 1212 1213 1213 return 0; 1214 1214 } 1215 + EXPORT_SYMBOL(drm_gem_vmap); 1215 1216 1216 1217 void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 1217 1218 { ··· 1225 1224 /* Always set the mapping to NULL. Callers may rely on this. */ 1226 1225 dma_buf_map_clear(map); 1227 1226 } 1227 + EXPORT_SYMBOL(drm_gem_vunmap); 1228 1228 1229 1229 /** 1230 1230 * drm_gem_lock_reservations - Sets up the ww context and acquires
+432
drivers/gpu/drm/drm_gem_atomic_helper.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <linux/dma-resv.h> 4 + 5 + #include <drm/drm_atomic_state_helper.h> 6 + #include <drm/drm_atomic_uapi.h> 7 + #include <drm/drm_gem.h> 8 + #include <drm/drm_gem_atomic_helper.h> 9 + #include <drm/drm_gem_framebuffer_helper.h> 10 + #include <drm/drm_simple_kms_helper.h> 11 + 12 + #include "drm_internal.h" 13 + 14 + /** 15 + * DOC: overview 16 + * 17 + * The GEM atomic helpers library implements generic atomic-commit 18 + * functions for drivers that use GEM objects. Currently, it provides 19 + * synchronization helpers, and plane state and framebuffer BO mappings 20 + * for planes with shadow buffers. 21 + * 22 + * Before scanout, a plane's framebuffer needs to be synchronized with 23 + * possible writers that draw into the framebuffer. All drivers should 24 + * call drm_gem_plane_helper_prepare_fb() from their implementation of 25 + * struct &drm_plane_helper_funcs.prepare_fb. It sets the plane's fence from 26 + * the framebuffer so that the DRM core can synchronize access automatically. 27 + * 28 + * drm_gem_plane_helper_prepare_fb() can also be used directly as 29 + * implementation of prepare_fb. For drivers based on 30 + * struct drm_simple_display_pipe, drm_gem_simple_display_pipe_prepare_fb() 31 + * provides equivalent functionality. 32 + * 33 + * .. code-block:: c 34 + * 35 + * #include <drm/drm_gem_atomic_helper.h> 36 + * 37 + * struct drm_plane_helper_funcs driver_plane_helper_funcs = { 38 + * ..., 39 + * .prepare_fb = drm_gem_plane_helper_prepare_fb, 40 + * }; 41 + * 42 + * struct drm_simple_display_pipe_funcs driver_pipe_funcs = { 43 + * ..., 44 + * .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 45 + * }; 46 + * 47 + * A driver using a shadow buffer copies the content of the shadow buffers 48 + * into the HW's framebuffer memory during an atomic update. This requires 49 + * a mapping of the shadow buffer into kernel address space.
The mappings 50 + * cannot be established by commit-tail functions, such as atomic_update, 51 + * as this would violate locking rules around dma_buf_vmap(). 52 + * 53 + * The helpers for shadow-buffered planes establish and release mappings, 54 + * and provide struct drm_shadow_plane_state, which stores the plane's mapping 55 + * for commit-tail functions. 56 + * 57 + * Shadow-buffered planes can easily be enabled by using the provided macros 58 + * %DRM_GEM_SHADOW_PLANE_FUNCS and %DRM_GEM_SHADOW_PLANE_HELPER_FUNCS. 59 + * These macros set up the plane and plane-helper callbacks to point to the 60 + * shadow-buffer helpers. 61 + * 62 + * .. code-block:: c 63 + * 64 + * #include <drm/drm_gem_atomic_helper.h> 65 + * 66 + * struct drm_plane_funcs driver_plane_funcs = { 67 + * ..., 68 + * DRM_GEM_SHADOW_PLANE_FUNCS, 69 + * }; 70 + * 71 + * struct drm_plane_helper_funcs driver_plane_helper_funcs = { 72 + * ..., 73 + * DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 74 + * }; 75 + * 76 + * In the driver's atomic-update function, shadow-buffer mappings are available 77 + * from the plane state. Use to_drm_shadow_plane_state() to upcast from 78 + * struct drm_plane_state. 79 + * 80 + * .. code-block:: c 81 + * 82 + * void driver_plane_atomic_update(struct drm_plane *plane, 83 + * struct drm_plane_state *old_plane_state) 84 + * { 85 + * struct drm_plane_state *plane_state = plane->state; 86 + * struct drm_shadow_plane_state *shadow_plane_state = 87 + * to_drm_shadow_plane_state(plane_state); 88 + * 89 + * // access shadow buffer via shadow_plane_state->map 90 + * } 91 + * 92 + * A mapping address for each of the framebuffer's buffer objects is stored in 93 + * struct &drm_shadow_plane_state.map. The mappings are valid while the state 94 + * is being used. 95 + * 96 + * Drivers that use struct drm_simple_display_pipe can use 97 + * %DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS to initialize the respective 98 + * callbacks.
Access to shadow-buffer mappings is similar to regular 99 + * atomic_update. 100 + * 101 + * .. code-block:: c 102 + * 103 + * struct drm_simple_display_pipe_funcs driver_pipe_funcs = { 104 + * ..., 105 + * DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 106 + * }; 107 + * 108 + * void driver_pipe_enable(struct drm_simple_display_pipe *pipe, 109 + * struct drm_crtc_state *crtc_state, 110 + * struct drm_plane_state *plane_state) 111 + * { 112 + * struct drm_shadow_plane_state *shadow_plane_state = 113 + * to_drm_shadow_plane_state(plane_state); 114 + * 115 + * // access shadow buffer via shadow_plane_state->map 116 + * } 117 + */ 118 + 119 + /* 120 + * Plane Helpers 121 + */ 122 + 123 + /** 124 + * drm_gem_plane_helper_prepare_fb() - Prepare a GEM backed framebuffer 125 + * @plane: Plane 126 + * @state: Plane state the fence will be attached to 127 + * 128 + * This function extracts the exclusive fence from &drm_gem_object.resv and 129 + * attaches it to plane state for the atomic helper to wait on. This is 130 + * necessary to correctly implement implicit synchronization for any buffers 131 + * shared as a struct &dma_buf. This function can be used as the 132 + * &drm_plane_helper_funcs.prepare_fb callback. 133 + * 134 + * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple 135 + * GEM based framebuffer drivers which have their buffers always pinned in 136 + * memory. 137 + * 138 + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and 139 + * explicit fencing in atomic modeset updates. 
140 + */ 141 + int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) 142 + { 143 + struct drm_gem_object *obj; 144 + struct dma_fence *fence; 145 + 146 + if (!state->fb) 147 + return 0; 148 + 149 + obj = drm_gem_fb_get_obj(state->fb, 0); 150 + fence = dma_resv_get_excl_rcu(obj->resv); 151 + drm_atomic_set_fence_for_plane(state, fence); 152 + 153 + return 0; 154 + } 155 + EXPORT_SYMBOL_GPL(drm_gem_plane_helper_prepare_fb); 156 + 157 + /** 158 + * drm_gem_simple_display_pipe_prepare_fb - prepare_fb helper for &drm_simple_display_pipe 159 + * @pipe: Simple display pipe 160 + * @plane_state: Plane state 161 + * 162 + * This function uses drm_gem_plane_helper_prepare_fb() to extract the exclusive fence 163 + * from &drm_gem_object.resv and attaches it to plane state for the atomic 164 + * helper to wait on. This is necessary to correctly implement implicit 165 + * synchronization for any buffers shared as a struct &dma_buf. Drivers can use 166 + * this as their &drm_simple_display_pipe_funcs.prepare_fb callback. 167 + * 168 + * See drm_atomic_set_fence_for_plane() for a discussion of implicit and 169 + * explicit fencing in atomic modeset updates. 170 + */ 171 + int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe, 172 + struct drm_plane_state *plane_state) 173 + { 174 + return drm_gem_plane_helper_prepare_fb(&pipe->plane, plane_state); 175 + } 176 + EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb); 177 + 178 + /* 179 + * Shadow-buffered Planes 180 + */ 181 + 182 + /** 183 + * drm_gem_duplicate_shadow_plane_state - duplicates shadow-buffered plane state 184 + * @plane: the plane 185 + * 186 + * This function implements struct &drm_plane_funcs.atomic_duplicate_state for 187 + * shadow-buffered planes. It assumes the existing state to be of type 188 + * struct drm_shadow_plane_state and it allocates the new state to be of this 189 + * type. 
190 + * 191 + * The function does not duplicate existing mappings of the shadow buffers. 192 + * Mappings are maintained during the atomic commit by the plane's prepare_fb 193 + * and cleanup_fb helpers. See drm_gem_prepare_shadow_fb() and drm_gem_cleanup_shadow_fb() 194 + * for corresponding helpers. 195 + * 196 + * Returns: 197 + * A pointer to a new plane state on success, or NULL otherwise. 198 + */ 199 + struct drm_plane_state * 200 + drm_gem_duplicate_shadow_plane_state(struct drm_plane *plane) 201 + { 202 + struct drm_plane_state *plane_state = plane->state; 203 + struct drm_shadow_plane_state *new_shadow_plane_state; 204 + 205 + if (!plane_state) 206 + return NULL; 207 + 208 + new_shadow_plane_state = kzalloc(sizeof(*new_shadow_plane_state), GFP_KERNEL); 209 + if (!new_shadow_plane_state) 210 + return NULL; 211 + __drm_atomic_helper_plane_duplicate_state(plane, &new_shadow_plane_state->base); 212 + 213 + return &new_shadow_plane_state->base; 214 + } 215 + EXPORT_SYMBOL(drm_gem_duplicate_shadow_plane_state); 216 + 217 + /** 218 + * drm_gem_destroy_shadow_plane_state - deletes shadow-buffered plane state 219 + * @plane: the plane 220 + * @plane_state: the plane state of type struct drm_shadow_plane_state 221 + * 222 + * This function implements struct &drm_plane_funcs.atomic_destroy_state 223 + * for shadow-buffered planes. It expects that mappings of shadow buffers 224 + * have been released already. 
225 + */ 226 + void drm_gem_destroy_shadow_plane_state(struct drm_plane *plane, 227 + struct drm_plane_state *plane_state) 228 + { 229 + struct drm_shadow_plane_state *shadow_plane_state = 230 + to_drm_shadow_plane_state(plane_state); 231 + 232 + __drm_atomic_helper_plane_destroy_state(&shadow_plane_state->base); 233 + kfree(shadow_plane_state); 234 + } 235 + EXPORT_SYMBOL(drm_gem_destroy_shadow_plane_state); 236 + 237 + /** 238 + * drm_gem_reset_shadow_plane - resets a shadow-buffered plane 239 + * @plane: the plane 240 + * 241 + * This function implements struct &drm_plane_funcs.reset_plane for 242 + * shadow-buffered planes. It assumes the current plane state to be 243 + * of type struct drm_shadow_plane and it allocates the new state of 244 + * this type. 245 + */ 246 + void drm_gem_reset_shadow_plane(struct drm_plane *plane) 247 + { 248 + struct drm_shadow_plane_state *shadow_plane_state; 249 + 250 + if (plane->state) { 251 + drm_gem_destroy_shadow_plane_state(plane, plane->state); 252 + plane->state = NULL; /* must be set to NULL here */ 253 + } 254 + 255 + shadow_plane_state = kzalloc(sizeof(*shadow_plane_state), GFP_KERNEL); 256 + if (!shadow_plane_state) 257 + return; 258 + __drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base); 259 + } 260 + EXPORT_SYMBOL(drm_gem_reset_shadow_plane); 261 + 262 + /** 263 + * drm_gem_prepare_shadow_fb - prepares shadow framebuffers 264 + * @plane: the plane 265 + * @plane_state: the plane state of type struct drm_shadow_plane_state 266 + * 267 + * This function implements struct &drm_plane_helper_funcs.prepare_fb. It 268 + * maps all buffer objects of the plane's framebuffer into kernel address 269 + * space and stores them in &struct drm_shadow_plane_state.map. The 270 + * framebuffer will be synchronized as part of the atomic commit. 271 + * 272 + * See drm_gem_cleanup_shadow_fb() for cleanup. 273 + * 274 + * Returns: 275 + * 0 on success, or a negative errno code otherwise. 
276 + */ 277 + int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_state *plane_state) 278 + { 279 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 280 + struct drm_framebuffer *fb = plane_state->fb; 281 + struct drm_gem_object *obj; 282 + struct dma_buf_map map; 283 + int ret; 284 + size_t i; 285 + 286 + if (!fb) 287 + return 0; 288 + 289 + ret = drm_gem_plane_helper_prepare_fb(plane, plane_state); 290 + if (ret) 291 + return ret; 292 + 293 + for (i = 0; i < ARRAY_SIZE(shadow_plane_state->map); ++i) { 294 + obj = drm_gem_fb_get_obj(fb, i); 295 + if (!obj) 296 + continue; 297 + ret = drm_gem_vmap(obj, &map); 298 + if (ret) 299 + goto err_drm_gem_vunmap; 300 + shadow_plane_state->map[i] = map; 301 + } 302 + 303 + return 0; 304 + 305 + err_drm_gem_vunmap: 306 + while (i) { 307 + --i; 308 + obj = drm_gem_fb_get_obj(fb, i); 309 + if (!obj) 310 + continue; 311 + drm_gem_vunmap(obj, &shadow_plane_state->map[i]); 312 + } 313 + return ret; 314 + } 315 + EXPORT_SYMBOL(drm_gem_prepare_shadow_fb); 316 + 317 + /** 318 + * drm_gem_cleanup_shadow_fb - releases shadow framebuffers 319 + * @plane: the plane 320 + * @plane_state: the plane state of type struct drm_shadow_plane_state 321 + * 322 + * This function implements struct &drm_plane_helper_funcs.cleanup_fb. 323 + * This function unmaps all buffer objects of the plane's framebuffer. 324 + * 325 + * See drm_gem_prepare_shadow_fb() for more information.
326 + */ 327 + void drm_gem_cleanup_shadow_fb(struct drm_plane *plane, struct drm_plane_state *plane_state) 328 + { 329 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 330 + struct drm_framebuffer *fb = plane_state->fb; 331 + size_t i = ARRAY_SIZE(shadow_plane_state->map); 332 + struct drm_gem_object *obj; 333 + 334 + if (!fb) 335 + return; 336 + 337 + while (i) { 338 + --i; 339 + obj = drm_gem_fb_get_obj(fb, i); 340 + if (!obj) 341 + continue; 342 + drm_gem_vunmap(obj, &shadow_plane_state->map[i]); 343 + } 344 + } 345 + EXPORT_SYMBOL(drm_gem_cleanup_shadow_fb); 346 + 347 + /** 348 + * drm_gem_simple_kms_prepare_shadow_fb - prepares shadow framebuffers 349 + * @pipe: the simple display pipe 350 + * @plane_state: the plane state of type struct drm_shadow_plane_state 351 + * 352 + * This function implements struct drm_simple_display_funcs.prepare_fb. It 353 + * maps all buffer objects of the plane's framebuffer into kernel address 354 + * space and stores them in struct drm_shadow_plane_state.map. The 355 + * framebuffer will be synchronized as part of the atomic commit. 356 + * 357 + * See drm_gem_simple_kms_cleanup_shadow_fb() for cleanup. 358 + * 359 + * Returns: 360 + * 0 on success, or a negative errno code otherwise. 361 + */ 362 + int drm_gem_simple_kms_prepare_shadow_fb(struct drm_simple_display_pipe *pipe, 363 + struct drm_plane_state *plane_state) 364 + { 365 + return drm_gem_prepare_shadow_fb(&pipe->plane, plane_state); 366 + } 367 + EXPORT_SYMBOL(drm_gem_simple_kms_prepare_shadow_fb); 368 + 369 + /** 370 + * drm_gem_simple_kms_cleanup_shadow_fb - releases shadow framebuffers 371 + * @pipe: the simple display pipe 372 + * @plane_state: the plane state of type struct drm_shadow_plane_state 373 + * 374 + * This function implements struct drm_simple_display_funcs.cleanup_fb. 375 + * This function unmaps all buffer objects of the plane's framebuffer. 376 + * 377 + * See drm_gem_simple_kms_prepare_shadow_fb(). 
378 + */ 379 + void drm_gem_simple_kms_cleanup_shadow_fb(struct drm_simple_display_pipe *pipe, 380 + struct drm_plane_state *plane_state) 381 + { 382 + drm_gem_cleanup_shadow_fb(&pipe->plane, plane_state); 383 + } 384 + EXPORT_SYMBOL(drm_gem_simple_kms_cleanup_shadow_fb); 385 + 386 + /** 387 + * drm_gem_simple_kms_reset_shadow_plane - resets a shadow-buffered plane 388 + * @pipe: the simple display pipe 389 + * 390 + * This function implements struct drm_simple_display_funcs.reset_plane 391 + * for shadow-buffered planes. 392 + */ 393 + void drm_gem_simple_kms_reset_shadow_plane(struct drm_simple_display_pipe *pipe) 394 + { 395 + drm_gem_reset_shadow_plane(&pipe->plane); 396 + } 397 + EXPORT_SYMBOL(drm_gem_simple_kms_reset_shadow_plane); 398 + 399 + /** 400 + * drm_gem_simple_kms_duplicate_shadow_plane_state - duplicates shadow-buffered plane state 401 + * @pipe: the simple display pipe 402 + * 403 + * This function implements struct drm_simple_display_funcs.duplicate_plane_state 404 + * for shadow-buffered planes. It does not duplicate existing mappings of the shadow 405 + * buffers. Mappings are maintained during the atomic commit by the plane's prepare_fb 406 + * and cleanup_fb helpers. 407 + * 408 + * Returns: 409 + * A pointer to a new plane state on success, or NULL otherwise. 410 + */ 411 + struct drm_plane_state * 412 + drm_gem_simple_kms_duplicate_shadow_plane_state(struct drm_simple_display_pipe *pipe) 413 + { 414 + return drm_gem_duplicate_shadow_plane_state(&pipe->plane); 415 + } 416 + EXPORT_SYMBOL(drm_gem_simple_kms_duplicate_shadow_plane_state); 417 + 418 + /** 419 + * drm_gem_simple_kms_destroy_shadow_plane_state - resets shadow-buffered plane state 420 + * @pipe: the simple display pipe 421 + * @plane_state: the plane state of type struct drm_shadow_plane_state 422 + * 423 + * This function implements struct drm_simple_display_funcs.destroy_plane_state 424 + * for shadow-buffered planes. 
It expects that mappings of shadow buffers 425 + * have been released already. 426 + */ 427 + void drm_gem_simple_kms_destroy_shadow_plane_state(struct drm_simple_display_pipe *pipe, 428 + struct drm_plane_state *plane_state) 429 + { 430 + drm_gem_destroy_shadow_plane_state(&pipe->plane, plane_state); 431 + } 432 + EXPORT_SYMBOL(drm_gem_simple_kms_destroy_shadow_plane_state);
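The error path in drm_gem_prepare_shadow_fb() above uses a common kernel unwind idiom: map the framebuffer's buffer objects in order and, on failure, unmap only the ones already mapped, in reverse order, skipping empty slots. A minimal userspace sketch of that idiom (the arrays and the injected failure index are toy stand-ins for the real drm_gem_object vmap calls, not kernel API):

```c
#include <stddef.h>

#define NUM_PLANES 4  /* mirrors ARRAY_SIZE(shadow_plane_state->map) */

/*
 * Map each present buffer in order; on a simulated failure at index
 * fail_at, unwind in reverse so only already-mapped slots are unmapped.
 * present[] marks slots that hold a buffer, mapped[] records the result.
 */
static int prepare_maps(const int *present, int *mapped, int fail_at)
{
	size_t i;

	for (i = 0; i < NUM_PLANES; ++i) {
		if (!present[i])
			continue;               /* no buffer object in this slot */
		if ((int)i == fail_at)
			goto err_unwind;        /* stands in for a failed vmap */
		mapped[i] = 1;
	}
	return 0;

err_unwind:
	while (i) {                             /* reverse order, skip empty slots */
		--i;
		if (!present[i])
			continue;
		mapped[i] = 0;
	}
	return -1;
}
```

The `while (i) { --i; ... }` form walks back over exactly the indices the forward loop completed, which is why the failing index itself is never unmapped.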
-63
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 5 5 * Copyright (C) 2017 Noralf Trønnes 6 6 */ 7 7 8 - #include <linux/dma-buf.h> 9 - #include <linux/dma-fence.h> 10 - #include <linux/dma-resv.h> 11 8 #include <linux/slab.h> 12 9 13 - #include <drm/drm_atomic.h> 14 - #include <drm/drm_atomic_uapi.h> 15 10 #include <drm/drm_damage_helper.h> 16 11 #include <drm/drm_fb_helper.h> 17 12 #include <drm/drm_fourcc.h> ··· 14 19 #include <drm/drm_gem.h> 15 20 #include <drm/drm_gem_framebuffer_helper.h> 16 21 #include <drm/drm_modeset_helper.h> 17 - #include <drm/drm_simple_kms_helper.h> 18 22 19 23 #define AFBC_HEADER_SIZE 16 20 24 #define AFBC_TH_LAYOUT_ALIGNMENT 8 ··· 426 432 return 0; 427 433 } 428 434 EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init); 429 - 430 - /** 431 - * drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer 432 - * @plane: Plane 433 - * @state: Plane state the fence will be attached to 434 - * 435 - * This function extracts the exclusive fence from &drm_gem_object.resv and 436 - * attaches it to plane state for the atomic helper to wait on. This is 437 - * necessary to correctly implement implicit synchronization for any buffers 438 - * shared as a struct &dma_buf. This function can be used as the 439 - * &drm_plane_helper_funcs.prepare_fb callback. 440 - * 441 - * There is no need for &drm_plane_helper_funcs.cleanup_fb hook for simple 442 - * gem based framebuffer drivers which have their buffers always pinned in 443 - * memory. 444 - * 445 - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and 446 - * explicit fencing in atomic modeset updates. 
447 - */ 448 - int drm_gem_fb_prepare_fb(struct drm_plane *plane, 449 - struct drm_plane_state *state) 450 - { 451 - struct drm_gem_object *obj; 452 - struct dma_fence *fence; 453 - 454 - if (!state->fb) 455 - return 0; 456 - 457 - obj = drm_gem_fb_get_obj(state->fb, 0); 458 - fence = dma_resv_get_excl_rcu(obj->resv); 459 - drm_atomic_set_fence_for_plane(state, fence); 460 - 461 - return 0; 462 - } 463 - EXPORT_SYMBOL_GPL(drm_gem_fb_prepare_fb); 464 - 465 - /** 466 - * drm_gem_fb_simple_display_pipe_prepare_fb - prepare_fb helper for 467 - * &drm_simple_display_pipe 468 - * @pipe: Simple display pipe 469 - * @plane_state: Plane state 470 - * 471 - * This function uses drm_gem_fb_prepare_fb() to extract the exclusive fence 472 - * from &drm_gem_object.resv and attaches it to plane state for the atomic 473 - * helper to wait on. This is necessary to correctly implement implicit 474 - * synchronization for any buffers shared as a struct &dma_buf. Drivers can use 475 - * this as their &drm_simple_display_pipe_funcs.prepare_fb callback. 476 - * 477 - * See drm_atomic_set_fence_for_plane() for a discussion of implicit and 478 - * explicit fencing in atomic modeset updates. 479 - */ 480 - int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe, 481 - struct drm_plane_state *plane_state) 482 - { 483 - return drm_gem_fb_prepare_fb(&pipe->plane, plane_state); 484 - } 485 - EXPORT_SYMBOL(drm_gem_fb_simple_display_pipe_prepare_fb);
+15 -27
drivers/gpu/drm/drm_gem_vram_helper.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_file.h> 10 10 #include <drm/drm_framebuffer.h> 11 - #include <drm/drm_gem_framebuffer_helper.h> 11 + #include <drm/drm_gem_atomic_helper.h> 12 12 #include <drm/drm_gem_ttm_helper.h> 13 13 #include <drm/drm_gem_vram_helper.h> 14 14 #include <drm/drm_managed.h> ··· 187 187 struct drm_gem_vram_object *gbo; 188 188 struct drm_gem_object *gem; 189 189 struct drm_vram_mm *vmm = dev->vram_mm; 190 - struct ttm_bo_device *bdev; 190 + struct ttm_device *bdev; 191 191 int ret; 192 - size_t acc_size; 193 192 194 193 if (WARN_ONCE(!vmm, "VRAM MM not initialized")) 195 194 return ERR_PTR(-EINVAL); ··· 215 216 } 216 217 217 218 bdev = &vmm->bdev; 218 - acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo)); 219 219 220 220 gbo->bo.bdev = bdev; 221 221 drm_gem_vram_placement(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM); ··· 224 226 * to release gbo->bo.base and kfree gbo. 225 227 */ 226 228 ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device, 227 - &gbo->placement, pg_align, false, acc_size, 228 - NULL, NULL, ttm_buffer_object_destroy); 229 + &gbo->placement, pg_align, false, NULL, NULL, 230 + ttm_buffer_object_destroy); 229 231 if (ret) 230 232 return ERR_PTR(ret); 231 233 ··· 556 558 EXPORT_SYMBOL(drm_gem_vram_fill_create_dumb); 557 559 558 560 /* 559 - * Helpers for struct ttm_bo_driver 561 + * Helpers for struct ttm_device_funcs 560 562 */ 561 563 562 564 static bool drm_is_gem_vram(struct ttm_buffer_object *bo) ··· 571 573 *pl = gbo->placement; 572 574 } 573 575 574 - static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, 575 - bool evict, 576 - struct ttm_resource *new_mem) 576 + static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo) 577 577 { 578 578 struct ttm_buffer_object *bo = &gbo->bo; 579 579 struct drm_device *dev = bo->base.dev; ··· 588 592 struct ttm_operation_ctx *ctx, 589 593 struct ttm_resource *new_mem) 590 594 { 591 - int ret; 592 - 593 - 
drm_gem_vram_bo_driver_move_notify(gbo, evict, new_mem); 594 - ret = ttm_bo_move_memcpy(&gbo->bo, ctx, new_mem); 595 - if (ret) { 596 - swap(*new_mem, gbo->bo.mem); 597 - drm_gem_vram_bo_driver_move_notify(gbo, false, new_mem); 598 - swap(*new_mem, gbo->bo.mem); 599 - } 600 - return ret; 595 + drm_gem_vram_bo_driver_move_notify(gbo); 596 + return ttm_bo_move_memcpy(&gbo->bo, ctx, new_mem); 601 597 } 602 598 603 599 /* ··· 708 720 goto err_drm_gem_vram_unpin; 709 721 } 710 722 711 - ret = drm_gem_fb_prepare_fb(plane, new_state); 723 + ret = drm_gem_plane_helper_prepare_fb(plane, new_state); 712 724 if (ret) 713 725 goto err_drm_gem_vram_unpin; 714 726 ··· 889 901 * TTM TT 890 902 */ 891 903 892 - static void bo_driver_ttm_tt_destroy(struct ttm_bo_device *bdev, struct ttm_tt *tt) 904 + static void bo_driver_ttm_tt_destroy(struct ttm_device *bdev, struct ttm_tt *tt) 893 905 { 894 906 ttm_tt_destroy_common(bdev, tt); 895 907 ttm_tt_fini(tt); ··· 945 957 946 958 gbo = drm_gem_vram_of_bo(bo); 947 959 948 - drm_gem_vram_bo_driver_move_notify(gbo, false, NULL); 960 + drm_gem_vram_bo_driver_move_notify(gbo); 949 961 } 950 962 951 963 static int bo_driver_move(struct ttm_buffer_object *bo, ··· 961 973 return drm_gem_vram_bo_driver_move(gbo, evict, ctx, new_mem); 962 974 } 963 975 964 - static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev, 976 + static int bo_driver_io_mem_reserve(struct ttm_device *bdev, 965 977 struct ttm_resource *mem) 966 978 { 967 979 struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev); ··· 981 993 return 0; 982 994 } 983 995 984 - static struct ttm_bo_driver bo_driver = { 996 + static struct ttm_device_funcs bo_driver = { 985 997 .ttm_tt_create = bo_driver_ttm_tt_create, 986 998 .ttm_tt_destroy = bo_driver_ttm_tt_destroy, 987 999 .eviction_valuable = ttm_bo_eviction_valuable, ··· 1032 1044 vmm->vram_base = vram_base; 1033 1045 vmm->vram_size = vram_size; 1034 1046 1035 - ret = ttm_bo_device_init(&vmm->bdev, &bo_driver, dev->dev, 1047 + ret = 
ttm_device_init(&vmm->bdev, &bo_driver, dev->dev, 1036 1048 dev->anon_inode->i_mapping, 1037 1049 dev->vma_offset_manager, 1038 1050 false, true); ··· 1050 1062 static void drm_vram_mm_cleanup(struct drm_vram_mm *vmm) 1051 1063 { 1052 1064 ttm_range_man_fini(&vmm->bdev, TTM_PL_VRAM); 1053 - ttm_bo_device_release(&vmm->bdev); 1065 + ttm_device_fini(&vmm->bdev); 1054 1066 } 1055 1067 1056 1068 /*
+3 -12
drivers/gpu/drm/drm_ioc32.c
··· 302 302 unsigned long arg) 303 303 { 304 304 drm_stats32_t __user *argp = (void __user *)arg; 305 - int err; 306 305 307 - err = drm_ioctl_kernel(file, drm_noop, NULL, 0); 308 - if (err) 309 - return err; 310 - 306 + /* getstats is defunct, just clear */ 311 307 if (clear_user(argp, sizeof(drm_stats32_t))) 312 308 return -EFAULT; 313 309 return 0; ··· 816 820 static int compat_drm_update_draw(struct file *file, unsigned int cmd, 817 821 unsigned long arg) 818 822 { 819 - drm_update_draw32_t update32; 820 - 821 - if (copy_from_user(&update32, (void __user *)arg, sizeof(update32))) 822 - return -EFAULT; 823 - 824 - return drm_ioctl_kernel(file, drm_noop, NULL, 825 - DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY); 823 + /* update_draw is defunct */ 824 + return 0; 826 825 } 827 826 #endif 828 827
+2 -3
drivers/gpu/drm/drm_mipi_dbi.c
··· 203 203 struct drm_gem_object *gem = drm_gem_fb_get_obj(fb, 0); 204 204 struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(gem); 205 205 struct dma_buf_attachment *import_attach = gem->import_attach; 206 - struct drm_format_name_buf format_name; 207 206 void *src = cma_obj->vaddr; 208 207 int ret = 0; 209 208 ··· 224 225 drm_fb_xrgb8888_to_rgb565(dst, src, fb, clip, swap); 225 226 break; 226 227 default: 227 - drm_err_once(fb->dev, "Format is not supported: %s\n", 228 - drm_get_format_name(fb->format->format, &format_name)); 228 + drm_err_once(fb->dev, "Format is not supported: %p4cc\n", 229 + &fb->format->format); 229 230 return -EINVAL; 230 231 } 231 232
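The conversions above replace drm_get_format_name() with the new %p4cc printk specifier, which is passed the format by reference (e.g. `&fb->format->format`). A FourCC pixel-format code is the four characters packed in little-endian byte order; a small userspace sketch of that packing, mirroring the fourcc_code() macro from include/uapi/drm/drm_fourcc.h, reproduces the hex values %p4cc prints:

```c
#include <stdint.h>

/* Little-endian packing of four characters into a pixel-format code;
 * %p4cc decodes this back into the characters, the endianness and the
 * hexadecimal value. */
static uint32_t fourcc_code(char a, char b, char c, char d)
{
	return (uint32_t)a | ((uint32_t)b << 8) |
	       ((uint32_t)c << 16) | ((uint32_t)d << 24);
}
```

For example, `fourcc_code('N', 'V', '1', '2')` yields the value a `%p4cc` conversion would render as `NV12 little-endian (0x3231564e)`.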
+14
drivers/gpu/drm/drm_panel_orientation_quirks.c
··· 84 84 .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP, 85 85 }; 86 86 87 + static const struct drm_dmi_panel_orientation_data onegx1_pro = { 88 + .width = 1200, 89 + .height = 1920, 90 + .bios_dates = (const char * const []){ "12/17/2020", NULL }, 91 + .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP, 92 + }; 93 + 87 94 static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = { 88 95 .width = 720, 89 96 .height = 1280, ··· 218 211 DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"), 219 212 }, 220 213 .driver_data = (void *)&lcd1200x1920_rightside_up, 214 + }, { /* OneGX1 Pro */ 215 + .matches = { 216 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"), 217 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SYSTEM_PRODUCT_NAME"), 218 + DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"), 219 + }, 220 + .driver_data = (void *)&onegx1_pro, 221 221 }, { /* VIOS LTH17 */ 222 222 .matches = { 223 223 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VIOS"),
+56 -10
drivers/gpu/drm/drm_plane.c
··· 50 50 * &struct drm_plane (possibly as part of a larger structure) and registers it
51 51 * with a call to drm_universal_plane_init().
52 52 *
53 - * The type of a plane is exposed in the immutable "type" enumeration property,
54 - * which has one of the following values: "Overlay", "Primary", "Cursor" (see
55 - * enum drm_plane_type). A plane can be compatible with multiple CRTCs, see
56 - * &drm_plane.possible_crtcs.
53 + * Each plane has a type, see enum drm_plane_type. A plane can be compatible
54 + * with multiple CRTCs, see &drm_plane.possible_crtcs.
57 55 *
58 56 * Each CRTC must have a unique primary plane userspace can attach to enable
59 57 * the CRTC. In other words, userspace must be able to attach a different
··· 70 72 * DOC: standard plane properties
71 73 *
72 74 * DRM planes have a few standardized properties:
75 + *
76 + * type:
77 + * Immutable property describing the type of the plane.
78 + *
79 + * For user-space which has enabled the &DRM_CLIENT_CAP_ATOMIC capability,
80 + * the plane type is just a hint and is mostly superseded by atomic
81 + * test-only commits. The type hint can still be used to more easily
82 + * come up with a plane configuration accepted by the driver.
83 + *
84 + * The value of this property can be one of the following:
85 + *
86 + * "Primary":
87 + * To light up a CRTC, attaching a primary plane is the most likely to
88 + * work if it covers the whole CRTC and doesn't have scaling or
89 + * cropping set up.
90 + *
91 + * Drivers may support more features for the primary plane; user-space
92 + * can find out with test-only atomic commits.
93 + *
94 + * Some primary planes are implicitly used by the kernel in the legacy
95 + * IOCTLs &DRM_IOCTL_MODE_SETCRTC and &DRM_IOCTL_MODE_PAGE_FLIP.
96 + * Therefore user-space must not mix explicit usage of any primary
97 + * plane (e.g. through an atomic commit) with these legacy IOCTLs.
98 + *
99 + * "Cursor":
100 + * To enable this plane, using a framebuffer configured without scaling
101 + * or cropping and with the following properties is the most likely to
102 + * work:
103 + *
104 + * - If the driver provides the capabilities &DRM_CAP_CURSOR_WIDTH and
105 + * &DRM_CAP_CURSOR_HEIGHT, create the framebuffer with this size.
106 + * Otherwise, create a framebuffer with the size 64x64.
107 + * - If the driver doesn't support modifiers, create a framebuffer with
108 + * a linear layout. Otherwise, use the IN_FORMATS plane property.
109 + *
110 + * Drivers may support more features for the cursor plane; user-space
111 + * can find out with test-only atomic commits.
112 + *
113 + * Some cursor planes are implicitly used by the kernel in the legacy
114 + * IOCTLs &DRM_IOCTL_MODE_CURSOR and &DRM_IOCTL_MODE_CURSOR2.
115 + * Therefore user-space must not mix explicit usage of any cursor
116 + * plane (e.g. through an atomic commit) with these legacy IOCTLs.
117 + *
118 + * Some drivers may support cursors even if no cursor plane is exposed.
119 + * In this case, the legacy cursor IOCTLs can be used to configure the
120 + * cursor.
121 + *
122 + * "Overlay":
123 + * Neither primary nor cursor.
124 + *
125 + * Overlay planes are the only planes exposed when the
126 + * &DRM_CLIENT_CAP_UNIVERSAL_PLANES capability is disabled.
73 127 *
74 128 * IN_FORMATS:
75 129 * Blob property which contains the set of buffer format and modifier
··· 769 719 ret = drm_plane_check_pixel_format(plane, fb->format->format,
770 720 fb->modifier);
771 721 if (ret) {
772 - struct drm_format_name_buf format_name;
773 -
774 - DRM_DEBUG_KMS("Invalid pixel format %s, modifier 0x%llx\n",
775 - drm_get_format_name(fb->format->format,
776 - &format_name),
777 - fb->modifier);
722 + DRM_DEBUG_KMS("Invalid pixel format %p4cc, modifier 0x%llx\n",
723 + &fb->format->format, fb->modifier);
778 724 return ret;
779 725 }
780 726
+44 -6
drivers/gpu/drm/drm_simple_kms_helper.c
··· 177 177 }; 178 178 179 179 static int drm_simple_kms_plane_atomic_check(struct drm_plane *plane, 180 - struct drm_plane_state *plane_state) 180 + struct drm_atomic_state *state) 181 181 { 182 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, 183 + plane); 182 184 struct drm_simple_display_pipe *pipe; 183 185 struct drm_crtc_state *crtc_state; 184 186 int ret; 185 187 186 188 pipe = container_of(plane, struct drm_simple_display_pipe, plane); 187 - crtc_state = drm_atomic_get_new_crtc_state(plane_state->state, 189 + crtc_state = drm_atomic_get_new_crtc_state(state, 188 190 &pipe->crtc); 189 191 190 192 ret = drm_atomic_helper_check_plane_state(plane_state, crtc_state, ··· 206 204 } 207 205 208 206 static void drm_simple_kms_plane_atomic_update(struct drm_plane *plane, 209 - struct drm_plane_state *old_pstate) 207 + struct drm_atomic_state *state) 210 208 { 209 + struct drm_plane_state *old_pstate = drm_atomic_get_old_plane_state(state, 210 + plane); 211 211 struct drm_simple_display_pipe *pipe; 212 212 213 213 pipe = container_of(plane, struct drm_simple_display_pipe, plane); ··· 257 253 .atomic_update = drm_simple_kms_plane_atomic_update, 258 254 }; 259 255 256 + static void drm_simple_kms_plane_reset(struct drm_plane *plane) 257 + { 258 + struct drm_simple_display_pipe *pipe; 259 + 260 + pipe = container_of(plane, struct drm_simple_display_pipe, plane); 261 + if (!pipe->funcs || !pipe->funcs->reset_plane) 262 + return drm_atomic_helper_plane_reset(plane); 263 + 264 + return pipe->funcs->reset_plane(pipe); 265 + } 266 + 267 + static struct drm_plane_state *drm_simple_kms_plane_duplicate_state(struct drm_plane *plane) 268 + { 269 + struct drm_simple_display_pipe *pipe; 270 + 271 + pipe = container_of(plane, struct drm_simple_display_pipe, plane); 272 + if (!pipe->funcs || !pipe->funcs->duplicate_plane_state) 273 + return drm_atomic_helper_plane_duplicate_state(plane); 274 + 275 + return pipe->funcs->duplicate_plane_state(pipe); 276 + } 
277 + 278 + static void drm_simple_kms_plane_destroy_state(struct drm_plane *plane, 279 + struct drm_plane_state *state) 280 + { 281 + struct drm_simple_display_pipe *pipe; 282 + 283 + pipe = container_of(plane, struct drm_simple_display_pipe, plane); 284 + if (!pipe->funcs || !pipe->funcs->destroy_plane_state) 285 + drm_atomic_helper_plane_destroy_state(plane, state); 286 + else 287 + pipe->funcs->destroy_plane_state(pipe, state); 288 + } 289 + 260 290 static const struct drm_plane_funcs drm_simple_kms_plane_funcs = { 261 291 .update_plane = drm_atomic_helper_update_plane, 262 292 .disable_plane = drm_atomic_helper_disable_plane, 263 293 .destroy = drm_plane_cleanup, 264 - .reset = drm_atomic_helper_plane_reset, 265 - .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 266 - .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 294 + .reset = drm_simple_kms_plane_reset, 295 + .atomic_duplicate_state = drm_simple_kms_plane_duplicate_state, 296 + .atomic_destroy_state = drm_simple_kms_plane_destroy_state, 267 297 .format_mod_supported = drm_simple_kms_format_mod_supported, 268 298 }; 269 299
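The new drm_simple_kms_plane_reset(), _duplicate_state() and _destroy_state() wrappers above all follow one dispatch pattern: call the optional per-pipe hook if the driver provided one, otherwise fall back to the stock atomic helper. A toy sketch of that pattern (the struct names and return values below are simplified stand-ins, not the real DRM types):

```c
#include <stddef.h>

/* Toy stand-ins for drm_simple_display_pipe and its funcs table. */
struct pipe_funcs {
	int (*reset_plane)(void);       /* optional driver override */
};

struct pipe {
	const struct pipe_funcs *funcs;
};

static int helper_default_reset(void)
{
	return 1;       /* stands in for drm_atomic_helper_plane_reset() */
}

static int shadow_reset(void)
{
	return 2;       /* stands in for a driver hook such as
	                 * drm_gem_simple_kms_reset_shadow_plane() */
}

/* Same shape as drm_simple_kms_plane_reset(): fall back to the default
 * helper when the funcs table or the individual hook is absent. */
static int plane_reset(const struct pipe *pipe)
{
	if (!pipe->funcs || !pipe->funcs->reset_plane)
		return helper_default_reset();
	return pipe->funcs->reset_plane();
}
```

Checking both `!pipe->funcs` and `!pipe->funcs->reset_plane` keeps existing drivers working unchanged: only pipes that explicitly install the shadow-plane hooks get the new behavior.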
+12
drivers/gpu/drm/drm_syncobj.c
··· 387 387 if (!syncobj)
388 388 return -ENOENT;
389 389
390 + /* Waiting for userspace with locks held is illegal because it can
391 + * trivially deadlock with page faults, for example. Make lockdep
392 + * complain about it early on.
393 + */
394 + if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
395 + might_sleep();
396 + lockdep_assert_none_held_once();
397 + }
398 +
390 399 *fence = drm_syncobj_fence_get(syncobj);
391 400
392 401 if (*fence) {
··· 950 941 struct dma_fence *fence;
951 942 uint64_t *points;
952 943 uint32_t signaled_count, i;
944 +
945 + if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT)
946 + lockdep_assert_none_held_once();
953 947
954 948 points = kmalloc_array(count, sizeof(*points), GFP_KERNEL);
955 949 if (points == NULL)
+10 -15
drivers/gpu/drm/drm_vblank.c
··· 1470 1470 } 1471 1471 EXPORT_SYMBOL(drm_crtc_vblank_on); 1472 1472 1473 - /** 1474 - * drm_vblank_restore - estimate missed vblanks and update vblank count. 1475 - * @dev: DRM device 1476 - * @pipe: CRTC index 1477 - * 1478 - * Power manamement features can cause frame counter resets between vblank 1479 - * disable and enable. Drivers can use this function in their 1480 - * &drm_crtc_funcs.enable_vblank implementation to estimate missed vblanks since 1481 - * the last &drm_crtc_funcs.disable_vblank using timestamps and update the 1482 - * vblank counter. 1483 - * 1484 - * This function is the legacy version of drm_crtc_vblank_restore(). 1485 - */ 1486 - void drm_vblank_restore(struct drm_device *dev, unsigned int pipe) 1473 + static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe) 1487 1474 { 1488 1475 ktime_t t_vblank; 1489 1476 struct drm_vblank_crtc *vblank; ··· 1506 1519 diff, diff_ns, framedur_ns, cur_vblank - vblank->last); 1507 1520 store_vblank(dev, pipe, diff, t_vblank, cur_vblank); 1508 1521 } 1509 - EXPORT_SYMBOL(drm_vblank_restore); 1510 1522 1511 1523 /** 1512 1524 * drm_crtc_vblank_restore - estimate missed vblanks and update vblank count. ··· 1516 1530 * &drm_crtc_funcs.enable_vblank implementation to estimate missed vblanks since 1517 1531 * the last &drm_crtc_funcs.disable_vblank using timestamps and update the 1518 1532 * vblank counter. 1533 + * 1534 + * Note that drivers must have race-free high-precision timestamping support, 1535 + * i.e. &drm_crtc_funcs.get_vblank_timestamp must be hooked up and 1536 + * &drm_driver.vblank_disable_immediate must be set to indicate the 1537 + * time-stamping functions are race-free against vblank hardware counter 1538 + * increments. 
1519 1539 */ 1520 1540 void drm_crtc_vblank_restore(struct drm_crtc *crtc) 1521 1541 { 1542 + WARN_ON_ONCE(!crtc->funcs->get_vblank_timestamp); 1543 + WARN_ON_ONCE(!crtc->dev->vblank_disable_immediate); 1544 + 1522 1545 drm_vblank_restore(crtc->dev, drm_crtc_index(crtc)); 1523 1546 } 1524 1547 EXPORT_SYMBOL(drm_crtc_vblank_restore);
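drm_vblank_restore() estimates missed vblanks from timestamps: the delta between the current vblank timestamp and the last stored one, divided by the nominal frame duration and rounded to the nearest whole frame, is the amount the counter is advanced by. A simplified sketch of that arithmetic (the real code works on ktime_t and per-CRTC vblank state; the helper below is an illustration only):

```c
#include <stdint.h>

/* Round-to-nearest division, as DIV_ROUND_CLOSEST() behaves for the
 * positive values used here. */
static int64_t div_round_closest(int64_t n, int64_t d)
{
	return (n + d / 2) / d;
}

/* Frames elapsed between the last stored vblank timestamp and now,
 * given the nominal frame duration; this is the "diff" by which the
 * restored vblank counter is advanced. */
static int64_t missed_vblanks(int64_t now_ns, int64_t last_ns,
			      int64_t framedur_ns)
{
	return div_round_closest(now_ns - last_ns, framedur_ns);
}
```

This only gives a correct estimate when the timestamps themselves are trustworthy, which is why the patch adds WARN_ON_ONCE() checks that get_vblank_timestamp is hooked up and vblank_disable_immediate is set.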
+7 -2
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 82 82 return fence; 83 83 } 84 84 85 - static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job) 85 + static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job 86 + *sched_job) 86 87 { 87 88 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); 88 89 struct etnaviv_gpu *gpu = submit->gpu; ··· 121 120 122 121 drm_sched_resubmit_jobs(&gpu->sched); 123 122 123 + drm_sched_start(&gpu->sched, true); 124 + return DRM_GPU_SCHED_STAT_NOMINAL; 125 + 124 126 out_no_timeout: 125 127 /* restart scheduler after GPU is usable again */ 126 128 drm_sched_start(&gpu->sched, true); 129 + return DRM_GPU_SCHED_STAT_NOMINAL; 127 130 } 128 131 129 132 static void etnaviv_sched_free_job(struct drm_sched_job *sched_job) ··· 190 185 191 186 ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, 192 187 etnaviv_hw_jobs_limit, etnaviv_job_hang_limit, 193 - msecs_to_jiffies(500), dev_name(gpu->dev)); 188 + msecs_to_jiffies(500), NULL, dev_name(gpu->dev)); 194 189 if (ret) 195 190 return ret; 196 191
+12 -8
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 228 228 } 229 229 230 230 static int exynos_plane_atomic_check(struct drm_plane *plane, 231 - struct drm_plane_state *state) 231 + struct drm_atomic_state *state) 232 232 { 233 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 234 + plane); 233 235 struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane); 234 236 struct exynos_drm_plane_state *exynos_state = 235 - to_exynos_plane_state(state); 237 + to_exynos_plane_state(new_plane_state); 236 238 int ret = 0; 237 239 238 - if (!state->crtc || !state->fb) 240 + if (!new_plane_state->crtc || !new_plane_state->fb) 239 241 return 0; 240 242 241 243 /* translate state into exynos_state */ ··· 252 250 } 253 251 254 252 static void exynos_plane_atomic_update(struct drm_plane *plane, 255 - struct drm_plane_state *old_state) 253 + struct drm_atomic_state *state) 256 254 { 257 - struct drm_plane_state *state = plane->state; 258 - struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(state->crtc); 255 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 256 + plane); 257 + struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(new_state->crtc); 259 258 struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane); 260 259 261 - if (!state->crtc) 260 + if (!new_state->crtc) 262 261 return; 263 262 264 263 if (exynos_crtc->ops->update_plane) ··· 267 264 } 268 265 269 266 static void exynos_plane_atomic_disable(struct drm_plane *plane, 270 - struct drm_plane_state *old_state) 267 + struct drm_atomic_state *state) 271 268 { 269 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 272 270 struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane); 273 271 struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(old_state->crtc); 274 272
+14 -10
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_plane.c
··· 7 7 8 8 #include <linux/regmap.h> 9 9 10 + #include <drm/drm_atomic.h> 10 11 #include <drm/drm_atomic_helper.h> 11 12 #include <drm/drm_crtc.h> 12 13 #include <drm/drm_fb_cma_helper.h> ··· 34 33 } 35 34 36 35 static int fsl_dcu_drm_plane_atomic_check(struct drm_plane *plane, 37 - struct drm_plane_state *state) 36 + struct drm_atomic_state *state) 38 37 { 39 - struct drm_framebuffer *fb = state->fb; 38 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 39 + plane); 40 + struct drm_framebuffer *fb = new_plane_state->fb; 40 41 41 - if (!state->fb || !state->crtc) 42 + if (!new_plane_state->fb || !new_plane_state->crtc) 42 43 return 0; 43 44 44 45 switch (fb->format->format) { ··· 60 57 } 61 58 62 59 static void fsl_dcu_drm_plane_atomic_disable(struct drm_plane *plane, 63 - struct drm_plane_state *old_state) 60 + struct drm_atomic_state *state) 64 61 { 65 62 struct fsl_dcu_drm_device *fsl_dev = plane->dev->dev_private; 66 63 unsigned int value; ··· 76 73 } 77 74 78 75 static void fsl_dcu_drm_plane_atomic_update(struct drm_plane *plane, 79 - struct drm_plane_state *old_state) 76 + struct drm_atomic_state *state) 80 77 81 78 { 82 79 struct fsl_dcu_drm_device *fsl_dev = plane->dev->dev_private; 83 - struct drm_plane_state *state = plane->state; 80 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 81 + plane); 84 82 struct drm_framebuffer *fb = plane->state->fb; 85 83 struct drm_gem_cma_object *gem; 86 84 unsigned int alpha = DCU_LAYER_AB_NONE, bpp; ··· 129 125 } 130 126 131 127 regmap_write(fsl_dev->regmap, DCU_CTRLDESCLN(index, 1), 132 - DCU_LAYER_HEIGHT(state->crtc_h) | 133 - DCU_LAYER_WIDTH(state->crtc_w)); 128 + DCU_LAYER_HEIGHT(new_state->crtc_h) | 129 + DCU_LAYER_WIDTH(new_state->crtc_w)); 134 130 regmap_write(fsl_dev->regmap, DCU_CTRLDESCLN(index, 2), 135 - DCU_LAYER_POSY(state->crtc_y) | 136 - DCU_LAYER_POSX(state->crtc_x)); 131 + DCU_LAYER_POSY(new_state->crtc_y) | 132 + 
DCU_LAYER_POSX(new_state->crtc_x)); 137 133 regmap_write(fsl_dev->regmap, 138 134 DCU_CTRLDESCLN(index, 3), gem->paddr); 139 135 regmap_write(fsl_dev->regmap, DCU_CTRLDESCLN(index, 4),
+2 -9
drivers/gpu/drm/gma500/Kconfig
··· 9 9 select INPUT if ACPI 10 10 help 11 11 Say yes for an experimental 2D KMS framebuffer driver for the 12 - Intel GMA500 ('Poulsbo') and other Intel IMG based graphics 13 - devices. 14 - 15 - config DRM_GMA600 16 - bool "Intel GMA600 support (Experimental)" 17 - depends on DRM_GMA500 18 - help 19 - Say yes to include support for GMA600 (Intel Moorestown/Oaktrail) 20 - platforms with LVDS ports. MIPI is not currently supported. 12 + Intel GMA500 (Poulsbo), Intel GMA600 (Moorestown/Oak Trail) and 13 + Intel GMA3600/3650 (Cedar Trail).
+7 -10
drivers/gpu/drm/gma500/Makefile
··· 4 4 # 5 5 6 6 gma500_gfx-y += \ 7 - accel_2d.o \ 8 7 backlight.o \ 9 - blitter.o \ 10 8 cdv_device.o \ 11 9 cdv_intel_crt.o \ 12 10 cdv_intel_display.o \ ··· 21 23 intel_i2c.o \ 22 24 mid_bios.o \ 23 25 mmu.o \ 26 + oaktrail_device.o \ 27 + oaktrail_crtc.o \ 28 + oaktrail_hdmi.o \ 29 + oaktrail_hdmi_i2c.o \ 30 + oaktrail_lvds.o \ 31 + oaktrail_lvds_i2c.o \ 24 32 power.o \ 25 33 psb_device.o \ 26 34 psb_drv.o \ ··· 37 33 psb_lid.o \ 38 34 psb_irq.o 39 35 40 - gma500_gfx-$(CONFIG_ACPI) += opregion.o \ 41 - 42 - gma500_gfx-$(CONFIG_DRM_GMA600) += oaktrail_device.o \ 43 - oaktrail_crtc.o \ 44 - oaktrail_lvds.o \ 45 - oaktrail_lvds_i2c.o \ 46 - oaktrail_hdmi.o \ 47 - oaktrail_hdmi_i2c.o 36 + gma500_gfx-$(CONFIG_ACPI) += opregion.o 48 37 49 38 obj-$(CONFIG_DRM_GMA500) += gma500_gfx.o
-60
drivers/gpu/drm/gma500/accel_2d.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /************************************************************************** 3 - * Copyright (c) 2007-2011, Intel Corporation. 4 - * All Rights Reserved. 5 - * 6 - * Intel funded Tungsten Graphics (http://www.tungstengraphics.com) to 7 - * develop this driver. 8 - * 9 - **************************************************************************/ 10 - 11 - #include <linux/console.h> 12 - #include <linux/delay.h> 13 - #include <linux/errno.h> 14 - #include <linux/init.h> 15 - #include <linux/kernel.h> 16 - #include <linux/mm.h> 17 - #include <linux/module.h> 18 - #include <linux/slab.h> 19 - #include <linux/string.h> 20 - #include <linux/tty.h> 21 - 22 - #include <drm/drm.h> 23 - #include <drm/drm_crtc.h> 24 - #include <drm/drm_fb_helper.h> 25 - #include <drm/drm_fourcc.h> 26 - 27 - #include "psb_drv.h" 28 - #include "psb_reg.h" 29 - 30 - /** 31 - * psb_spank - reset the 2D engine 32 - * @dev_priv: our PSB DRM device 33 - * 34 - * Soft reset the graphics engine and then reload the necessary registers. 35 - * We use this at initialisation time but it will become relevant for 36 - * accelerated X later 37 - */ 38 - void psb_spank(struct drm_psb_private *dev_priv) 39 - { 40 - PSB_WSGX32(_PSB_CS_RESET_BIF_RESET | _PSB_CS_RESET_DPM_RESET | 41 - _PSB_CS_RESET_TA_RESET | _PSB_CS_RESET_USE_RESET | 42 - _PSB_CS_RESET_ISP_RESET | _PSB_CS_RESET_TSP_RESET | 43 - _PSB_CS_RESET_TWOD_RESET, PSB_CR_SOFT_RESET); 44 - PSB_RSGX32(PSB_CR_SOFT_RESET); 45 - 46 - msleep(1); 47 - 48 - PSB_WSGX32(0, PSB_CR_SOFT_RESET); 49 - wmb(); 50 - PSB_WSGX32(PSB_RSGX32(PSB_CR_BIF_CTRL) | _PSB_CB_CTRL_CLEAR_FAULT, 51 - PSB_CR_BIF_CTRL); 52 - wmb(); 53 - (void) PSB_RSGX32(PSB_CR_BIF_CTRL); 54 - 55 - msleep(1); 56 - PSB_WSGX32(PSB_RSGX32(PSB_CR_BIF_CTRL) & ~_PSB_CB_CTRL_CLEAR_FAULT, 57 - PSB_CR_BIF_CTRL); 58 - (void) PSB_RSGX32(PSB_CR_BIF_CTRL); 59 - PSB_WSGX32(dev_priv->gtt.gatt_start, PSB_CR_BIF_TWOD_REQ_BASE); 60 - }
-43
drivers/gpu/drm/gma500/blitter.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright (c) 2014, Patrik Jakobsson 4 - * All Rights Reserved. 5 - * 6 - * Authors: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> 7 - */ 8 - 9 - #include "psb_drv.h" 10 - 11 - #include "blitter.h" 12 - #include "psb_reg.h" 13 - 14 - /* Wait for the blitter to be completely idle */ 15 - int gma_blt_wait_idle(struct drm_psb_private *dev_priv) 16 - { 17 - unsigned long stop = jiffies + HZ; 18 - int busy = 1; 19 - 20 - /* NOP for Cedarview */ 21 - if (IS_CDV(dev_priv->dev)) 22 - return 0; 23 - 24 - /* First do a quick check */ 25 - if ((PSB_RSGX32(PSB_CR_2D_SOCIF) == _PSB_C2_SOCIF_EMPTY) && 26 - ((PSB_RSGX32(PSB_CR_2D_BLIT_STATUS) & _PSB_C2B_STATUS_BUSY) == 0)) 27 - return 0; 28 - 29 - do { 30 - busy = (PSB_RSGX32(PSB_CR_2D_SOCIF) != _PSB_C2_SOCIF_EMPTY); 31 - } while (busy && !time_after_eq(jiffies, stop)); 32 - 33 - if (busy) 34 - return -EBUSY; 35 - 36 - do { 37 - busy = ((PSB_RSGX32(PSB_CR_2D_BLIT_STATUS) & 38 - _PSB_C2B_STATUS_BUSY) != 0); 39 - } while (busy && !time_after_eq(jiffies, stop)); 40 - 41 - /* If still busy, we probably have a hang */ 42 - return (busy) ? -EBUSY : 0; 43 - }
-16
drivers/gpu/drm/gma500/blitter.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * Copyright (c) 2014, Patrik Jakobsson 4 - * All Rights Reserved. 5 - * 6 - * Authors: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> 7 - */ 8 - 9 - #ifndef __BLITTER_H 10 - #define __BLITTER_H 11 - 12 - struct drm_psb_private; 13 - 14 - extern int gma_blt_wait_idle(struct drm_psb_private *dev_priv); 15 - 16 - #endif
+1 -1
drivers/gpu/drm/gma500/cdv_device.c
··· 603 603 .errata = cdv_errata, 604 604 605 605 .crtc_helper = &cdv_intel_helper_funcs, 606 - .crtc_funcs = &cdv_intel_crtc_funcs, 606 + .crtc_funcs = &gma_intel_crtc_funcs, 607 607 .clock_funcs = &cdv_clock_funcs, 608 608 609 609 .output_init = cdv_output_init,
-1
drivers/gpu/drm/gma500/cdv_device.h
··· 8 8 struct psb_intel_mode_device; 9 9 10 10 extern const struct drm_crtc_helper_funcs cdv_intel_helper_funcs; 11 - extern const struct drm_crtc_funcs cdv_intel_crtc_funcs; 12 11 extern const struct gma_clock_funcs cdv_clock_funcs; 13 12 extern void cdv_intel_crt_init(struct drm_device *dev, 14 13 struct psb_intel_mode_device *mode_dev);
+1 -14
drivers/gpu/drm/gma500/cdv_intel_crt.c
··· 248 248 struct drm_connector *connector; 249 249 struct drm_encoder *encoder; 250 250 251 - u32 i2c_reg; 252 - 253 251 gma_encoder = kzalloc(sizeof(struct gma_encoder), GFP_KERNEL); 254 252 if (!gma_encoder) 255 253 return; ··· 267 269 gma_connector_attach_encoder(gma_connector, gma_encoder); 268 270 269 271 /* Set up the DDC bus. */ 270 - i2c_reg = GPIOA; 271 - /* Remove the following code for CDV */ 272 - /* 273 - if (dev_priv->crt_ddc_bus != 0) 274 - i2c_reg = dev_priv->crt_ddc_bus; 275 - }*/ 276 - gma_encoder->ddc_bus = psb_intel_i2c_create(dev, 277 - i2c_reg, "CRTDDC_A"); 272 + gma_encoder->ddc_bus = psb_intel_i2c_create(dev, GPIOA, "CRTDDC_A"); 278 273 if (!gma_encoder->ddc_bus) { 279 274 dev_printk(KERN_ERR, dev->dev, "DDC bus registration failed.\n"); 280 275 goto failed_ddc; 281 276 } 282 277 283 278 gma_encoder->type = INTEL_OUTPUT_ANALOG; 284 - /* 285 - psb_intel_output->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT); 286 - psb_intel_output->crtc_mask = (1 << 0) | (1 << 1); 287 - */ 288 279 connector->interlace_allowed = 0; 289 280 connector->doublescan_allowed = 0; 290 281
+1 -22
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 582 582 struct gma_clock_t clock; 583 583 u32 dpll = 0, dspcntr, pipeconf; 584 584 bool ok; 585 - bool is_lvds = false, is_tv = false; 585 + bool is_lvds = false; 586 586 bool is_dp = false; 587 587 struct drm_mode_config *mode_config = &dev->mode_config; 588 588 struct drm_connector *connector; ··· 602 602 switch (gma_encoder->type) { 603 603 case INTEL_OUTPUT_LVDS: 604 604 is_lvds = true; 605 - break; 606 - case INTEL_OUTPUT_TVOUT: 607 - is_tv = true; 608 605 break; 609 606 case INTEL_OUTPUT_ANALOG: 610 607 case INTEL_OUTPUT_HDMI: ··· 657 660 } 658 661 659 662 dpll = DPLL_VGA_MODE_DIS; 660 - if (is_tv) { 661 - /* XXX: just matching BIOS for now */ 662 - /* dpll |= PLL_REF_INPUT_TVCLKINBC; */ 663 - dpll |= 3; 664 - } 665 - /* dpll |= PLL_REF_INPUT_DREFCLK; */ 666 663 667 664 if (is_dp || is_edp) { 668 665 cdv_intel_dp_set_m_n(crtc, mode, adjusted_mode); ··· 959 968 .prepare = gma_crtc_prepare, 960 969 .commit = gma_crtc_commit, 961 970 .disable = gma_crtc_disable, 962 - }; 963 - 964 - const struct drm_crtc_funcs cdv_intel_crtc_funcs = { 965 - .cursor_set = gma_crtc_cursor_set, 966 - .cursor_move = gma_crtc_cursor_move, 967 - .gamma_set = gma_crtc_gamma_set, 968 - .set_config = gma_crtc_set_config, 969 - .destroy = gma_crtc_destroy, 970 - .page_flip = gma_crtc_page_flip, 971 - .enable_vblank = psb_enable_vblank, 972 - .disable_vblank = psb_disable_vblank, 973 - .get_vblank_counter = psb_get_vblank_counter, 974 971 }; 975 972 976 973 const struct gma_clock_funcs cdv_clock_funcs = {
-11
drivers/gpu/drm/gma500/gtt.c
··· 11 11 12 12 #include <asm/set_memory.h> 13 13 14 - #include "blitter.h" 15 14 #include "psb_drv.h" 16 15 17 16 ··· 228 229 struct drm_device *dev = gt->gem.dev; 229 230 struct drm_psb_private *dev_priv = dev->dev_private; 230 231 u32 gpu_base = dev_priv->gtt.gatt_start; 231 - int ret; 232 232 233 - /* While holding the gtt_mutex no new blits can be initiated */ 234 233 mutex_lock(&dev_priv->gtt_mutex); 235 - 236 - /* Wait for any possible usage of the memory to be finished */ 237 - ret = gma_blt_wait_idle(dev_priv); 238 - if (ret) { 239 - DRM_ERROR("Failed to idle the blitter, unpin failed!"); 240 - goto out; 241 - } 242 234 243 235 WARN_ON(!gt->in_gart); 244 236 ··· 241 251 psb_gtt_detach_pages(gt); 242 252 } 243 253 244 - out: 245 254 mutex_unlock(&dev_priv->gtt_mutex); 246 255 } 247 256
+2 -2
drivers/gpu/drm/gma500/intel_gmbus.c
··· 44 44 ret__ = -ETIMEDOUT; \ 45 45 break; \ 46 46 } \ 47 - if (W && !(in_atomic() || in_dbg_master())) msleep(W); \ 47 + if (W && !(in_dbg_master())) \ 48 + msleep(W); \ 48 49 } \ 49 50 ret__; \ 50 51 }) 51 52 52 53 #define wait_for(COND, MS) _wait_for(COND, MS, 1) 53 - #define wait_for_atomic(COND, MS) _wait_for(COND, MS, 0) 54 54 55 55 #define GMBUS_REG_READ(reg) ioread32(dev_priv->gmbus_reg + (reg)) 56 56 #define GMBUS_REG_WRITE(reg, val) iowrite32((val), dev_priv->gmbus_reg + (reg))
+1 -1
drivers/gpu/drm/gma500/oaktrail_device.c
··· 545 545 .chip_setup = oaktrail_chip_setup, 546 546 .chip_teardown = oaktrail_teardown, 547 547 .crtc_helper = &oaktrail_helper_funcs, 548 - .crtc_funcs = &psb_intel_crtc_funcs, 548 + .crtc_funcs = &gma_intel_crtc_funcs, 549 549 550 550 .output_init = oaktrail_output_init, 551 551
+1 -1
drivers/gpu/drm/gma500/psb_device.c
··· 329 329 .chip_teardown = psb_chip_teardown, 330 330 331 331 .crtc_helper = &psb_intel_helper_funcs, 332 - .crtc_funcs = &psb_intel_crtc_funcs, 332 + .crtc_funcs = &gma_intel_crtc_funcs, 333 333 .clock_funcs = &psb_clock_funcs, 334 334 335 335 .output_init = psb_output_init,
+33 -3
drivers/gpu/drm/gma500/psb_drv.c
··· 12 12 #include <linux/notifier.h> 13 13 #include <linux/pm_runtime.h> 14 14 #include <linux/spinlock.h> 15 + #include <linux/delay.h> 15 16 16 17 #include <asm/set_memory.h> 17 18 ··· 55 54 /* Poulsbo */ 56 55 { 0x8086, 0x8108, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &psb_chip_ops }, 57 56 { 0x8086, 0x8109, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &psb_chip_ops }, 58 - #if defined(CONFIG_DRM_GMA600) 57 + /* Oak Trail */ 59 58 { 0x8086, 0x4100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, 60 59 { 0x8086, 0x4101, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, 61 60 { 0x8086, 0x4102, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, ··· 65 64 { 0x8086, 0x4106, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, 66 65 { 0x8086, 0x4107, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, 67 66 { 0x8086, 0x4108, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &oaktrail_chip_ops }, 68 - #endif 69 - /* Cedartrail */ 67 + /* Cedar Trail */ 70 68 { 0x8086, 0x0be0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &cdv_chip_ops }, 71 69 { 0x8086, 0x0be1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &cdv_chip_ops }, 72 70 { 0x8086, 0x0be2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (long) &cdv_chip_ops }, ··· 91 91 */ 92 92 static const struct drm_ioctl_desc psb_ioctls[] = { 93 93 }; 94 + 95 + /** 96 + * psb_spank - reset the 2D engine 97 + * @dev_priv: our PSB DRM device 98 + * 99 + * Soft reset the graphics engine and then reload the necessary registers. 
100 + */ 101 + void psb_spank(struct drm_psb_private *dev_priv) 102 + { 103 + PSB_WSGX32(_PSB_CS_RESET_BIF_RESET | _PSB_CS_RESET_DPM_RESET | 104 + _PSB_CS_RESET_TA_RESET | _PSB_CS_RESET_USE_RESET | 105 + _PSB_CS_RESET_ISP_RESET | _PSB_CS_RESET_TSP_RESET | 106 + _PSB_CS_RESET_TWOD_RESET, PSB_CR_SOFT_RESET); 107 + PSB_RSGX32(PSB_CR_SOFT_RESET); 108 + 109 + msleep(1); 110 + 111 + PSB_WSGX32(0, PSB_CR_SOFT_RESET); 112 + wmb(); 113 + PSB_WSGX32(PSB_RSGX32(PSB_CR_BIF_CTRL) | _PSB_CB_CTRL_CLEAR_FAULT, 114 + PSB_CR_BIF_CTRL); 115 + wmb(); 116 + (void) PSB_RSGX32(PSB_CR_BIF_CTRL); 117 + 118 + msleep(1); 119 + PSB_WSGX32(PSB_RSGX32(PSB_CR_BIF_CTRL) & ~_PSB_CB_CTRL_CLEAR_FAULT, 120 + PSB_CR_BIF_CTRL); 121 + (void) PSB_RSGX32(PSB_CR_BIF_CTRL); 122 + PSB_WSGX32(dev_priv->gtt.gatt_start, PSB_CR_BIF_TWOD_REQ_BASE); 123 + } 94 124 95 125 static int psb_do_init(struct drm_device *dev) 96 126 {
+1 -5
drivers/gpu/drm/gma500/psb_drv.h
··· 625 625 626 626 /* psb_irq.c */ 627 627 extern irqreturn_t psb_irq_handler(int irq, void *arg); 628 - extern int psb_irq_enable_dpst(struct drm_device *dev); 629 - extern int psb_irq_disable_dpst(struct drm_device *dev); 630 628 extern void psb_irq_preinstall(struct drm_device *dev); 631 629 extern int psb_irq_postinstall(struct drm_device *dev); 632 630 extern void psb_irq_uninstall(struct drm_device *dev); 633 - extern void psb_irq_turn_on_dpst(struct drm_device *dev); 634 - extern void psb_irq_turn_off_dpst(struct drm_device *dev); 635 631 636 632 extern void psb_irq_uninstall_islands(struct drm_device *dev, int hw_islands); 637 633 extern int psb_vblank_wait2(struct drm_device *dev, unsigned int *sequence); ··· 675 679 676 680 /* psb_intel_display.c */ 677 681 extern const struct drm_crtc_helper_funcs psb_intel_helper_funcs; 678 - extern const struct drm_crtc_funcs psb_intel_crtc_funcs; 682 + extern const struct drm_crtc_funcs gma_intel_crtc_funcs; 679 683 680 684 /* psb_intel_lvds.c */ 681 685 extern const struct drm_connector_helper_funcs
+1 -1
drivers/gpu/drm/gma500/psb_intel_display.c
··· 426 426 .disable = gma_crtc_disable, 427 427 }; 428 428 429 - const struct drm_crtc_funcs psb_intel_crtc_funcs = { 429 + const struct drm_crtc_funcs gma_intel_crtc_funcs = { 430 430 .cursor_set = gma_crtc_cursor_set, 431 431 .cursor_move = gma_crtc_cursor_move, 432 432 .gamma_set = gma_crtc_gamma_set,
-32
drivers/gpu/drm/gma500/psb_intel_reg.h
··· 550 550 #define HISTOGRAM_INT_CTRL_CLEAR (1UL << 30) 551 551 #define DPST_YUV_LUMA_MODE 0 552 552 553 - struct dpst_ie_histogram_control { 554 - union { 555 - uint32_t data; 556 - struct { 557 - uint32_t bin_reg_index:7; 558 - uint32_t reserved:4; 559 - uint32_t bin_reg_func_select:1; 560 - uint32_t sync_to_phase_in:1; 561 - uint32_t alt_enhancement_mode:2; 562 - uint32_t reserved1:1; 563 - uint32_t sync_to_phase_in_count:8; 564 - uint32_t histogram_mode_select:1; 565 - uint32_t reserved2:4; 566 - uint32_t ie_pipe_assignment:1; 567 - uint32_t ie_mode_table_enabled:1; 568 - uint32_t ie_histogram_enable:1; 569 - }; 570 - }; 571 - }; 572 - 573 - struct dpst_guardband { 574 - union { 575 - uint32_t data; 576 - struct { 577 - uint32_t guardband:22; 578 - uint32_t guardband_interrupt_delay:8; 579 - uint32_t interrupt_status:1; 580 - uint32_t interrupt_enable:1; 581 - }; 582 - }; 583 - }; 584 - 585 553 #define PIPEAFRAMEHIGH 0x70040 586 554 #define PIPEAFRAMEPIXEL 0x70044 587 555 #define PIPEBFRAMEHIGH 0x71040
-110
drivers/gpu/drm/gma500/psb_irq.c
··· 101 101 } 102 102 } 103 103 104 - static void mid_enable_pipe_event(struct drm_psb_private *dev_priv, int pipe) 105 - { 106 - if (gma_power_begin(dev_priv->dev, false)) { 107 - u32 pipe_event = mid_pipe_event(pipe); 108 - dev_priv->vdc_irq_mask |= pipe_event; 109 - PSB_WVDC32(~dev_priv->vdc_irq_mask, PSB_INT_MASK_R); 110 - PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R); 111 - gma_power_end(dev_priv->dev); 112 - } 113 - } 114 - 115 - static void mid_disable_pipe_event(struct drm_psb_private *dev_priv, int pipe) 116 - { 117 - if (dev_priv->pipestat[pipe] == 0) { 118 - if (gma_power_begin(dev_priv->dev, false)) { 119 - u32 pipe_event = mid_pipe_event(pipe); 120 - dev_priv->vdc_irq_mask &= ~pipe_event; 121 - PSB_WVDC32(~dev_priv->vdc_irq_mask, PSB_INT_MASK_R); 122 - PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R); 123 - gma_power_end(dev_priv->dev); 124 - } 125 - } 126 - } 127 - 128 104 /* 129 105 * Display controller interrupt handler for pipe event. 130 106 */ ··· 366 390 /* This register is safe even if display island is off */ 367 391 PSB_WVDC32(PSB_RVDC32(PSB_INT_IDENTITY_R), PSB_INT_IDENTITY_R); 368 392 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 369 - } 370 - 371 - void psb_irq_turn_on_dpst(struct drm_device *dev) 372 - { 373 - struct drm_psb_private *dev_priv = 374 - (struct drm_psb_private *) dev->dev_private; 375 - u32 hist_reg; 376 - u32 pwm_reg; 377 - 378 - if (gma_power_begin(dev, false)) { 379 - PSB_WVDC32(1 << 31, HISTOGRAM_LOGIC_CONTROL); 380 - hist_reg = PSB_RVDC32(HISTOGRAM_LOGIC_CONTROL); 381 - PSB_WVDC32(1 << 31, HISTOGRAM_INT_CONTROL); 382 - hist_reg = PSB_RVDC32(HISTOGRAM_INT_CONTROL); 383 - 384 - PSB_WVDC32(0x80010100, PWM_CONTROL_LOGIC); 385 - pwm_reg = PSB_RVDC32(PWM_CONTROL_LOGIC); 386 - PSB_WVDC32(pwm_reg | PWM_PHASEIN_ENABLE 387 - | PWM_PHASEIN_INT_ENABLE, 388 - PWM_CONTROL_LOGIC); 389 - pwm_reg = PSB_RVDC32(PWM_CONTROL_LOGIC); 390 - 391 - psb_enable_pipestat(dev_priv, 0, PIPE_DPST_EVENT_ENABLE); 392 - 393 - hist_reg = PSB_RVDC32(HISTOGRAM_INT_CONTROL); 394 - PSB_WVDC32(hist_reg | HISTOGRAM_INT_CTRL_CLEAR, 395 - HISTOGRAM_INT_CONTROL); 396 - pwm_reg = PSB_RVDC32(PWM_CONTROL_LOGIC); 397 - PSB_WVDC32(pwm_reg | 0x80010100 | PWM_PHASEIN_ENABLE, 398 - PWM_CONTROL_LOGIC); 399 - 400 - gma_power_end(dev); 401 - } 402 - } 403 - 404 - int psb_irq_enable_dpst(struct drm_device *dev) 405 - { 406 - struct drm_psb_private *dev_priv = 407 - (struct drm_psb_private *) dev->dev_private; 408 - unsigned long irqflags; 409 - 410 - spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags); 411 - 412 - /* enable DPST */ 413 - mid_enable_pipe_event(dev_priv, 0); 414 - psb_irq_turn_on_dpst(dev); 415 - 416 - spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 417 - return 0; 418 - } 419 - 420 - void psb_irq_turn_off_dpst(struct drm_device *dev) 421 - { 422 - struct drm_psb_private *dev_priv = 423 - (struct drm_psb_private *) dev->dev_private; 424 - u32 pwm_reg; 425 - 426 - if (gma_power_begin(dev, false)) { 427 - PSB_WVDC32(0x00000000, HISTOGRAM_INT_CONTROL); 428 - PSB_RVDC32(HISTOGRAM_INT_CONTROL); 429 - 430 - psb_disable_pipestat(dev_priv, 0, PIPE_DPST_EVENT_ENABLE); 431 - 432 - pwm_reg = PSB_RVDC32(PWM_CONTROL_LOGIC); 433 - PSB_WVDC32(pwm_reg & ~PWM_PHASEIN_INT_ENABLE, 434 - PWM_CONTROL_LOGIC); 435 - pwm_reg = PSB_RVDC32(PWM_CONTROL_LOGIC); 436 - 437 - gma_power_end(dev); 438 - } 439 - } 440 - 441 - int psb_irq_disable_dpst(struct drm_device *dev) 442 - { 443 - struct drm_psb_private *dev_priv = 444 - (struct drm_psb_private *) dev->dev_private; 445 - unsigned long irqflags; 446 - 447 - spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags); 448 - 449 - mid_disable_pipe_event(dev_priv, 0); 450 - psb_irq_turn_off_dpst(dev); 451 - 452 - spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 453 - 454 - return 0; 455 393 } 456 394 457 395 /*
-4
drivers/gpu/drm/gma500/psb_irq.h
··· 23 23 void psb_irq_uninstall(struct drm_device *dev); 24 24 irqreturn_t psb_irq_handler(int irq, void *arg); 25 25 26 - int psb_irq_enable_dpst(struct drm_device *dev); 27 - int psb_irq_disable_dpst(struct drm_device *dev); 28 - void psb_irq_turn_on_dpst(struct drm_device *dev); 29 - void psb_irq_turn_off_dpst(struct drm_device *dev); 30 26 int psb_enable_vblank(struct drm_crtc *crtc); 31 27 void psb_disable_vblank(struct drm_crtc *crtc); 32 28 u32 psb_get_vblank_counter(struct drm_crtc *crtc);
+21 -18
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
··· 53 53 }; 54 54 55 55 static int hibmc_plane_atomic_check(struct drm_plane *plane, 56 - struct drm_plane_state *state) 56 + struct drm_atomic_state *state) 57 57 { 58 - struct drm_framebuffer *fb = state->fb; 59 - struct drm_crtc *crtc = state->crtc; 58 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 59 + plane); 60 + struct drm_framebuffer *fb = new_plane_state->fb; 61 + struct drm_crtc *crtc = new_plane_state->crtc; 60 62 struct drm_crtc_state *crtc_state; 61 - u32 src_w = state->src_w >> 16; 62 - u32 src_h = state->src_h >> 16; 63 + u32 src_w = new_plane_state->src_w >> 16; 64 + u32 src_h = new_plane_state->src_h >> 16; 63 65 64 66 if (!crtc || !fb) 65 67 return 0; 66 68 67 - crtc_state = drm_atomic_get_crtc_state(state->state, crtc); 69 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 68 70 if (IS_ERR(crtc_state)) 69 71 return PTR_ERR(crtc_state); 70 72 71 - if (src_w != state->crtc_w || src_h != state->crtc_h) { 73 + if (src_w != new_plane_state->crtc_w || src_h != new_plane_state->crtc_h) { 72 74 drm_dbg_atomic(plane->dev, "scale not support\n"); 73 75 return -EINVAL; 74 76 } 75 77 76 - if (state->crtc_x < 0 || state->crtc_y < 0) { 78 + if (new_plane_state->crtc_x < 0 || new_plane_state->crtc_y < 0) { 77 79 drm_dbg_atomic(plane->dev, "crtc_x/y of drm_plane state is invalid\n"); 78 80 return -EINVAL; 79 81 } ··· 83 81 if (!crtc_state->enable) 84 82 return 0; 85 83 86 - if (state->crtc_x + state->crtc_w > 84 + if (new_plane_state->crtc_x + new_plane_state->crtc_w > 87 85 crtc_state->adjusted_mode.hdisplay || 88 - state->crtc_y + state->crtc_h > 86 + new_plane_state->crtc_y + new_plane_state->crtc_h > 89 87 crtc_state->adjusted_mode.vdisplay) { 90 88 drm_dbg_atomic(plane->dev, "visible portion of plane is invalid\n"); 91 89 return -EINVAL; 92 90 } 93 91 94 - if (state->fb->pitches[0] % 128 != 0) { 92 + if (new_plane_state->fb->pitches[0] % 128 != 0) { 95 93 drm_dbg_atomic(plane->dev, "wrong stride with 128-byte aligned\n"); 96 94 return -EINVAL; 97 95 } ··· 99 97 } 100 98 101 99 static void hibmc_plane_atomic_update(struct drm_plane *plane, 102 - struct drm_plane_state *old_state) 100 + struct drm_atomic_state *state) 103 101 { 104 - struct drm_plane_state *state = plane->state; 102 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 103 + plane); 105 104 u32 reg; 106 105 s64 gpu_addr = 0; 107 106 u32 line_l; 108 107 struct hibmc_drm_private *priv = to_hibmc_drm_private(plane->dev); 109 108 struct drm_gem_vram_object *gbo; 110 109 111 - if (!state->fb) 110 + if (!new_state->fb) 112 111 return; 113 112 114 - gbo = drm_gem_vram_of_gem(state->fb->obj[0]); 113 + gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]); 115 114 116 115 gpu_addr = drm_gem_vram_offset(gbo); 117 116 if (WARN_ON_ONCE(gpu_addr < 0)) ··· 120 117 121 118 writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS); 122 119 123 - reg = state->fb->width * (state->fb->format->cpp[0]); 120 + reg = new_state->fb->width * (new_state->fb->format->cpp[0]); 124 121 125 - line_l = state->fb->pitches[0]; 122 + line_l = new_state->fb->pitches[0]; 126 123 writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) | 127 124 HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l), 128 125 priv->mmio + HIBMC_CRT_FB_WIDTH); ··· 131 128 reg = readl(priv->mmio + HIBMC_CRT_DISP_CTL); 132 129 reg &= ~HIBMC_CRT_DISP_CTL_FORMAT_MASK; 133 130 reg |= HIBMC_FIELD(HIBMC_CRT_DISP_CTL_FORMAT, 134 - state->fb->format->cpp[0] * 8 / 16); 131 + new_state->fb->format->cpp[0] * 8 / 16); 135 132 writel(reg, priv->mmio + HIBMC_CRT_DISP_CTL); 136 133 }
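The plane-helper conversions in this series all follow the same shape: each callback now receives the whole atomic transaction (struct drm_atomic_state) and looks up its own old or new per-plane state, instead of being handed a single drm_plane_state. A toy model of that lookup, with illustrative names rather than the real DRM API:

```c
#include <stddef.h>

/* Simplified per-plane state and a "transaction" holding old/new
 * snapshots, indexed by plane id (names are hypothetical). */
struct plane_state { int x, y; };

struct atomic_state {
	const struct plane_state *old_states[4];
	const struct plane_state *new_states[4];
};

static const struct plane_state *
get_new_plane_state(const struct atomic_state *s, int plane_id)
{
	return s->new_states[plane_id];
}

static const struct plane_state *
get_old_plane_state(const struct atomic_state *s, int plane_id)
{
	return s->old_states[plane_id];
}

/* An atomic_check-style helper: fetch the new state from the
 * transaction, then validate it. */
static int plane_atomic_check(const struct atomic_state *s, int plane_id)
{
	const struct plane_state *ns = get_new_plane_state(s, plane_id);

	if (!ns || ns->x < 0 || ns->y < 0)
		return -1;	/* -EINVAL analogue */
	return 0;
}
```

Passing the transaction around is what lets helpers such as the dcss atomic_update below reach both old and new state from one argument.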
+25 -22
drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
··· 549 549 u32 ch, u32 y, u32 in_h, u32 fmt) 550 550 { 551 551 struct drm_gem_cma_object *obj = drm_fb_cma_get_gem_obj(fb, 0); 552 - struct drm_format_name_buf format_name; 553 552 u32 reg_ctrl, reg_addr, reg_size, reg_stride, reg_space, reg_en; 554 553 u32 stride = fb->pitches[0]; 555 554 u32 addr = (u32)obj->paddr + y * stride; 556 555 557 556 DRM_DEBUG_DRIVER("rdma%d: (y=%d, height=%d), stride=%d, paddr=0x%x\n", 558 557 ch + 1, y, in_h, stride, (u32)obj->paddr); 559 - DRM_DEBUG_DRIVER("addr=0x%x, fb:%dx%d, pixel_format=%d(%s)\n", 558 + DRM_DEBUG_DRIVER("addr=0x%x, fb:%dx%d, pixel_format=%d(%p4cc)\n", 560 559 addr, fb->width, fb->height, fmt, 561 - drm_get_format_name(fb->format->format, &format_name)); 560 + &fb->format->format); 562 561 563 562 /* get reg offset */ 564 563 reg_ctrl = RD_CH_CTRL(ch); ··· 757 758 } 758 759 759 760 static int ade_plane_atomic_check(struct drm_plane *plane, 760 - struct drm_plane_state *state) 761 + struct drm_atomic_state *state) 761 762 { 762 - struct drm_framebuffer *fb = state->fb; 763 - struct drm_crtc *crtc = state->crtc; 763 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 764 + plane); 765 + struct drm_framebuffer *fb = new_plane_state->fb; 766 + struct drm_crtc *crtc = new_plane_state->crtc; 764 767 struct drm_crtc_state *crtc_state; 765 - u32 src_x = state->src_x >> 16; 766 - u32 src_y = state->src_y >> 16; 767 - u32 src_w = state->src_w >> 16; 768 - u32 src_h = state->src_h >> 16; 769 - int crtc_x = state->crtc_x; 770 - int crtc_y = state->crtc_y; 771 - u32 crtc_w = state->crtc_w; 772 - u32 crtc_h = state->crtc_h; 768 + u32 src_x = new_plane_state->src_x >> 16; 769 + u32 src_y = new_plane_state->src_y >> 16; 770 + u32 src_w = new_plane_state->src_w >> 16; 771 + u32 src_h = new_plane_state->src_h >> 16; 772 + int crtc_x = new_plane_state->crtc_x; 773 + int crtc_y = new_plane_state->crtc_y; 774 + u32 crtc_w = new_plane_state->crtc_w; 775 + u32 crtc_h = new_plane_state->crtc_h; 773 776 u32 fmt; 774 777 775 778 if (!crtc || !fb) ··· 781 780 if (fmt == ADE_FORMAT_UNSUPPORT) 782 781 return -EINVAL; 783 782 784 - crtc_state = drm_atomic_get_crtc_state(state->state, crtc); 783 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 785 784 if (IS_ERR(crtc_state)) 786 785 return PTR_ERR(crtc_state); 787 786 ··· 804 803 } 805 804 806 805 static void ade_plane_atomic_update(struct drm_plane *plane, 807 - struct drm_plane_state *old_state) 806 + struct drm_atomic_state *state) 808 807 { 809 - struct drm_plane_state *state = plane->state; 808 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 809 + plane); 810 810 struct kirin_plane *kplane = to_kirin_plane(plane); 811 811 812 - ade_update_channel(kplane, state->fb, state->crtc_x, state->crtc_y, 813 - state->crtc_w, state->crtc_h, 814 - state->src_x >> 16, state->src_y >> 16, 815 - state->src_w >> 16, state->src_h >> 16); 812 + ade_update_channel(kplane, new_state->fb, new_state->crtc_x, 813 + new_state->crtc_y, 814 + new_state->crtc_w, new_state->crtc_h, 815 + new_state->src_x >> 16, new_state->src_y >> 16, 816 + new_state->src_w >> 16, new_state->src_h >> 16); 816 817 } 817 818 818 819 static void ade_plane_atomic_disable(struct drm_plane *plane, 819 - struct drm_plane_state *old_state) 820 + struct drm_atomic_state *state) 820 821 { 821 822 struct kirin_plane *kplane = to_kirin_plane(plane); 822 823
+4 -10
drivers/gpu/drm/i915/display/intel_display.c
··· 10228 10228 struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane); 10229 10229 struct drm_i915_private *i915 = to_i915(plane->base.dev); 10230 10230 const struct drm_framebuffer *fb = plane_state->hw.fb; 10231 - struct drm_format_name_buf format_name; 10232 10231 10233 10232 if (!fb) { 10234 10233 drm_dbg_kms(&i915->drm, ··· 10238 10239 } 10239 10240 10240 10241 drm_dbg_kms(&i915->drm, 10241 - "[PLANE:%d:%s] fb: [FB:%d] %ux%u format = %s modifier = 0x%llx, visible: %s\n", 10242 + "[PLANE:%d:%s] fb: [FB:%d] %ux%u format = %p4cc modifier = 0x%llx, visible: %s\n", 10242 10243 plane->base.base.id, plane->base.name, 10243 - fb->base.id, fb->width, fb->height, 10244 - drm_get_format_name(fb->format->format, &format_name), 10244 + fb->base.id, fb->width, fb->height, &fb->format->format, 10245 10245 fb->modifier, yesno(plane_state->uapi.visible)); 10246 10246 drm_dbg_kms(&i915->drm, "\trotation: 0x%x, scaler: %d\n", 10247 10247 plane_state->hw.rotation, plane_state->scaler_id); ··· 14234 14236 if (!drm_any_plane_has_format(&dev_priv->drm, 14235 14237 mode_cmd->pixel_format, 14236 14238 mode_cmd->modifier[0])) { 14237 - struct drm_format_name_buf format_name; 14238 - 14239 14239 drm_dbg_kms(&dev_priv->drm, 14240 - "unsupported pixel format %s / modifier 0x%llx\n", 14241 - drm_get_format_name(mode_cmd->pixel_format, 14242 - &format_name), 14243 - mode_cmd->modifier[0]); 14240 + "unsupported pixel format %p4cc / modifier 0x%llx\n", 14241 + &mode_cmd->pixel_format, mode_cmd->modifier[0]); 14244 14242 goto err; 14245 14243 } 14246 14244
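These drm_get_format_name() call sites are replaced by the new %p4cc printk specifier documented at the top of this series. Its output can be reproduced in userspace with a small helper (a sketch of the documented format, not the kernel's implementation; bit 31 is the DRM big-endian flag):

```c
#include <stdint.h>
#include <stdio.h>

/* Render a DRM/V4L2 FourCC the way %p4cc does: the four characters,
 * the endianness, and the full numeric value in hex. */
static void format_fourcc(uint32_t fourcc, char *buf, size_t len)
{
	int big_endian = !!(fourcc & (1u << 31));
	uint32_t code = fourcc & ~(1u << 31);

	snprintf(buf, len, "%c%c%c%c %s-endian (0x%08x)",
		 (char)(code & 0xff), (char)((code >> 8) & 0xff),
		 (char)((code >> 16) & 0xff), (char)((code >> 24) & 0xff),
		 big_endian ? "big" : "little", fourcc);
}
```

For DRM_FORMAT_XRGB8888 (0x34325258, 'XR24') this yields "XR24 little-endian (0x34325258)", matching the printk-formats examples.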
+13 -17
drivers/gpu/drm/i915/display/intel_display_debugfs.c
··· 772 772 const struct intel_plane_state *plane_state = 773 773 to_intel_plane_state(plane->base.state); 774 774 const struct drm_framebuffer *fb = plane_state->uapi.fb; 775 - struct drm_format_name_buf format_name; 776 775 struct drm_rect src, dst; 777 776 char rot_str[48]; 778 777 779 778 src = drm_plane_state_src(&plane_state->uapi); 780 779 dst = drm_plane_state_dest(&plane_state->uapi); 781 780 782 - if (fb) 783 - drm_get_format_name(fb->format->format, &format_name); 784 - 785 781 plane_rotation(rot_str, sizeof(rot_str), 786 782 plane_state->uapi.rotation); 787 783 788 - seq_printf(m, "\t\tuapi: [FB:%d] %s,0x%llx,%dx%d, visible=%s, src=" DRM_RECT_FP_FMT ", dst=" DRM_RECT_FMT ", rotation=%s\n", 789 - fb ? fb->base.id : 0, fb ? format_name.str : "n/a", 790 - fb ? fb->modifier : 0, 791 - fb ? fb->width : 0, fb ? fb->height : 0, 792 - plane_visibility(plane_state), 793 - DRM_RECT_FP_ARG(&src), 794 - DRM_RECT_ARG(&dst), 795 - rot_str); 784 + seq_puts(m, "\t\tuapi: [FB:"); 785 + if (fb) 786 + seq_printf(m, "%d] %p4cc,0x%llx,%dx%d", fb->base.id, 787 + &fb->format->format, fb->modifier, fb->width, 788 + fb->height); 789 + else 790 + seq_puts(m, "0] n/a,0x0,0x0,"); 791 + seq_printf(m, ", visible=%s, src=" DRM_RECT_FP_FMT ", dst=" DRM_RECT_FMT 792 + ", rotation=%s\n", plane_visibility(plane_state), 793 + DRM_RECT_FP_ARG(&src), DRM_RECT_ARG(&dst), rot_str); 796 794 797 795 if (plane_state->planar_linked_plane) 798 796 seq_printf(m, "\t\tplanar: Linked to [PLANE:%d:%s] as a %s\n", ··· 803 805 const struct intel_plane_state *plane_state = 804 806 to_intel_plane_state(plane->base.state); 805 807 const struct drm_framebuffer *fb = plane_state->hw.fb; 806 - struct drm_format_name_buf format_name; 807 808 char rot_str[48]; 808 809 809 810 if (!fb) 810 811 return; 811 812 812 - drm_get_format_name(fb->format->format, &format_name); 813 - 814 813 plane_rotation(rot_str, sizeof(rot_str), 815 814 plane_state->hw.rotation); 816 815 817 - seq_printf(m, "\t\thw: [FB:%d] %s,0x%llx,%dx%d, visible=%s, src=" DRM_RECT_FP_FMT ", dst=" DRM_RECT_FMT ", rotation=%s\n", 818 - fb->base.id, format_name.str, 816 + seq_printf(m, "\t\thw: [FB:%d] %p4cc,0x%llx,%dx%d, visible=%s, src=" 817 + DRM_RECT_FP_FMT ", dst=" DRM_RECT_FMT ", rotation=%s\n", 818 + fb->base.id, &fb->format->format, 819 819 fb->modifier, fb->width, fb->height, 820 820 yesno(plane_state->uapi.visible), 821 821 DRM_RECT_FP_ARG(&plane_state->uapi.src),
+2 -4
drivers/gpu/drm/i915/display/intel_sprite.c
··· 2320 2320 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 2321 2321 const struct drm_framebuffer *fb = plane_state->hw.fb; 2322 2322 unsigned int rotation = plane_state->hw.rotation; 2323 - struct drm_format_name_buf format_name; 2324 2323 2325 2324 if (!fb) 2326 2325 return 0; ··· 2367 2368 case DRM_FORMAT_XVYU12_16161616: 2368 2369 case DRM_FORMAT_XVYU16161616: 2369 2370 drm_dbg_kms(&dev_priv->drm, 2370 - "Unsupported pixel format %s for 90/270!\n", 2371 - drm_get_format_name(fb->format->format, 2372 - &format_name)); 2371 + "Unsupported pixel format %p4cc for 90/270!\n", 2372 + &fb->format->format); 2373 2373 return -EINVAL; 2374 2374 default: 2375 2375 break;
+35 -29
drivers/gpu/drm/imx/dcss/dcss-plane.c
··· 6 6 #include <drm/drm_atomic.h> 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_fb_cma_helper.h> 9 - #include <drm/drm_gem_framebuffer_helper.h> 9 + #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_gem_cma_helper.h> 11 11 12 12 #include "dcss-dev.h" ··· 137 137 } 138 138 139 139 static int dcss_plane_atomic_check(struct drm_plane *plane, 140 - struct drm_plane_state *state) 140 + struct drm_atomic_state *state) 141 141 { 142 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 143 + plane); 142 144 struct dcss_plane *dcss_plane = to_dcss_plane(plane); 143 145 struct dcss_dev *dcss = plane->dev->dev_private; 144 - struct drm_framebuffer *fb = state->fb; 146 + struct drm_framebuffer *fb = new_plane_state->fb; 145 147 bool is_primary_plane = plane->type == DRM_PLANE_TYPE_PRIMARY; 146 148 struct drm_gem_cma_object *cma_obj; 147 149 struct drm_crtc_state *crtc_state; ··· 151 149 int min, max; 152 150 int ret; 153 151 154 - if (!fb || !state->crtc) 152 + if (!fb || !new_plane_state->crtc) 155 153 return 0; 156 154 157 155 cma_obj = drm_fb_cma_get_gem_obj(fb, 0); 158 156 WARN_ON(!cma_obj); 159 157 160 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 161 - state->crtc); 158 + crtc_state = drm_atomic_get_existing_crtc_state(state, 159 + new_plane_state->crtc); 162 160 163 161 hdisplay = crtc_state->adjusted_mode.hdisplay; 164 162 vdisplay = crtc_state->adjusted_mode.vdisplay; 165 163 166 - if (!dcss_plane_is_source_size_allowed(state->src_w >> 16, 167 - state->src_h >> 16, 164 + if (!dcss_plane_is_source_size_allowed(new_plane_state->src_w >> 16, 165 + new_plane_state->src_h >> 16, 168 166 fb->format->format)) { 169 167 DRM_DEBUG_KMS("Source plane size is not allowed!\n"); 170 168 return -EINVAL; ··· 173 171 dcss_scaler_get_min_max_ratios(dcss->scaler, dcss_plane->ch_num, 174 172 &min, &max); 175 173 176 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 174 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 177 175 min, max, !is_primary_plane, 178 176 false); 179 177 if (ret) 180 178 return ret; 181 179 182 - if (!state->visible) 180 + if (!new_plane_state->visible) 183 181 return 0; 184 182 185 183 if (!dcss_plane_can_rotate(fb->format, 186 184 !!(fb->flags & DRM_MODE_FB_MODIFIERS), 187 185 fb->modifier, 188 - state->rotation)) { 186 + new_plane_state->rotation)) { 189 187 DRM_DEBUG_KMS("requested rotation is not allowed!\n"); 190 188 return -EINVAL; 191 189 } 192 190 193 - if ((state->crtc_x < 0 || state->crtc_y < 0 || 194 - state->crtc_x + state->crtc_w > hdisplay || 195 - state->crtc_y + state->crtc_h > vdisplay) && 191 + if ((new_plane_state->crtc_x < 0 || new_plane_state->crtc_y < 0 || 192 + new_plane_state->crtc_x + new_plane_state->crtc_w > hdisplay || 193 + new_plane_state->crtc_y + new_plane_state->crtc_h > vdisplay) && 196 194 !dcss_plane_fb_is_linear(fb)) { 197 195 DRM_DEBUG_KMS("requested cropping operation is not allowed!\n"); 198 196 return -EINVAL; ··· 264 262 } 265 263 266 264 static void dcss_plane_atomic_update(struct drm_plane *plane, 267 - struct drm_plane_state *old_state) 265 + struct drm_atomic_state *state) 268 266 { 269 - struct drm_plane_state *state = plane->state; 267 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 268 + plane); 269 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 270 + plane); 270 271 struct dcss_plane *dcss_plane = to_dcss_plane(plane); 271 272 struct dcss_dev *dcss = plane->dev->dev_private; 272 - struct drm_framebuffer *fb = state->fb; 273 + struct drm_framebuffer *fb = new_state->fb; 273 274 struct drm_crtc_state *crtc_state; 274 275 bool modifiers_present; 275 276 u32 src_w, src_h, dst_w, dst_h; ··· 280 275 bool enable = true; 281 276 bool is_rotation_90_or_270; 282 277 283 - if (!fb || !state->crtc || !state->visible) 278 + if (!fb || !new_state->crtc || !new_state->visible) 284 279 return; 285 280 286 - crtc_state = state->crtc->state; 281 + crtc_state = new_state->crtc->state; 287 282 modifiers_present = !!(fb->flags & DRM_MODE_FB_MODIFIERS); 288 283 289 284 if (old_state->fb && !drm_atomic_crtc_needs_modeset(crtc_state) && 290 - !dcss_plane_needs_setup(state, old_state)) { 285 + !dcss_plane_needs_setup(new_state, old_state)) { 291 286 dcss_plane_atomic_set_base(dcss_plane); 292 287 return; 293 288 } ··· 307 302 modifiers_present && fb->modifier == DRM_FORMAT_MOD_LINEAR) 308 303 modifiers_present = false; 309 304 310 - dcss_dpr_format_set(dcss->dpr, dcss_plane->ch_num, state->fb->format, 305 + dcss_dpr_format_set(dcss->dpr, dcss_plane->ch_num, 306 + new_state->fb->format, 311 307 modifiers_present ? fb->modifier : 312 308 DRM_FORMAT_MOD_LINEAR); 313 309 dcss_dpr_set_res(dcss->dpr, dcss_plane->ch_num, src_w, src_h); 314 310 dcss_dpr_set_rotation(dcss->dpr, dcss_plane->ch_num, 315 - state->rotation); 311 + new_state->rotation); 316 312 317 313 dcss_plane_atomic_set_base(dcss_plane); 318 314 319 - is_rotation_90_or_270 = state->rotation & (DRM_MODE_ROTATE_90 | 315 + is_rotation_90_or_270 = new_state->rotation & (DRM_MODE_ROTATE_90 | 320 316 DRM_MODE_ROTATE_270); 321 317 322 318 dcss_scaler_set_filter(dcss->scaler, dcss_plane->ch_num, 323 - state->scaling_filter); 319 + new_state->scaling_filter); 324 320 325 321 dcss_scaler_setup(dcss->scaler, dcss_plane->ch_num, 326 - state->fb->format, 322 + new_state->fb->format, 327 323 is_rotation_90_or_270 ? src_h : src_w, 328 324 is_rotation_90_or_270 ? src_w : src_h, 329 325 dst_w, dst_h, ··· 333 327 dcss_dtg_plane_pos_set(dcss->dtg, dcss_plane->ch_num, 334 328 dst.x1, dst.y1, dst_w, dst_h); 335 329 dcss_dtg_plane_alpha_set(dcss->dtg, dcss_plane->ch_num, 336 - fb->format, state->alpha >> 8); 330 + fb->format, new_state->alpha >> 8); 337 331 338 - if (!dcss_plane->ch_num && (state->alpha >> 8) == 0) 332 + if (!dcss_plane->ch_num && (new_state->alpha >> 8) == 0) 339 333 enable = false; 340 334 341 335 dcss_dpr_enable(dcss->dpr, dcss_plane->ch_num, enable); ··· 349 343 } 350 344 351 345 static void dcss_plane_atomic_disable(struct drm_plane *plane, 352 - struct drm_plane_state *old_state) 346 + struct drm_atomic_state *state) 353 347 { 354 348 struct dcss_plane *dcss_plane = to_dcss_plane(plane); 355 349 struct dcss_dev *dcss = plane->dev->dev_private; ··· 361 355 } 362 356 363 357 static const struct drm_plane_helper_funcs dcss_plane_helper_funcs = { 364 - .prepare_fb = drm_gem_fb_prepare_fb, 358 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 365 359 .atomic_check = dcss_plane_atomic_check, 366 360 .atomic_update = dcss_plane_atomic_update, 367 361 .atomic_disable = dcss_plane_atomic_disable,
+50 -42
drivers/gpu/drm/imx/ipuv3-plane.c
···
9 9 #include <drm/drm_atomic_helper.h>
10 10 #include <drm/drm_fb_cma_helper.h>
11 11 #include <drm/drm_fourcc.h>
12 + #include <drm/drm_gem_atomic_helper.h>
12 13 #include <drm/drm_gem_cma_helper.h>
13 - #include <drm/drm_gem_framebuffer_helper.h>
14 14 #include <drm/drm_managed.h>
15 15 #include <drm/drm_plane_helper.h>
16 16 
···
337 337 };
338 338 
339 339 static int ipu_plane_atomic_check(struct drm_plane *plane,
340 - 				  struct drm_plane_state *state)
340 + 				  struct drm_atomic_state *state)
341 341 {
342 - 	struct drm_plane_state *old_state = plane->state;
342 + 	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
343 + 									   plane);
344 + 	struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
345 + 									   plane);
343 346 	struct drm_crtc_state *crtc_state;
344 347 	struct device *dev = plane->dev->dev;
345 - 	struct drm_framebuffer *fb = state->fb;
348 + 	struct drm_framebuffer *fb = new_state->fb;
346 349 	struct drm_framebuffer *old_fb = old_state->fb;
347 350 	unsigned long eba, ubo, vbo, old_ubo, old_vbo, alpha_eba;
348 351 	bool can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
···
355 352 	if (!fb)
356 353 		return 0;
357 354 
358 - 	if (WARN_ON(!state->crtc))
355 + 	if (WARN_ON(!new_state->crtc))
359 356 		return -EINVAL;
360 357 
361 358 	crtc_state =
362 - 		drm_atomic_get_existing_crtc_state(state->state, state->crtc);
359 + 		drm_atomic_get_existing_crtc_state(state,
360 + 						   new_state->crtc);
363 361 	if (WARN_ON(!crtc_state))
364 362 		return -EINVAL;
365 363 
366 - 	ret = drm_atomic_helper_check_plane_state(state, crtc_state,
364 + 	ret = drm_atomic_helper_check_plane_state(new_state, crtc_state,
367 365 						  DRM_PLANE_HELPER_NO_SCALING,
368 366 						  DRM_PLANE_HELPER_NO_SCALING,
369 367 						  can_position, true);
···
378 374 	switch (plane->type) {
379 375 	case DRM_PLANE_TYPE_PRIMARY:
380 376 		/* full plane minimum width is 13 pixels */
381 - 		if (drm_rect_width(&state->dst) < 13)
377 + 		if (drm_rect_width(&new_state->dst) < 13)
382 378 			return -EINVAL;
383 379 		break;
384 380 	case DRM_PLANE_TYPE_OVERLAY:
···
388 384 		return -EINVAL;
389 385 	}
390 386 
391 - 	if (drm_rect_height(&state->dst) < 2)
387 + 	if (drm_rect_height(&new_state->dst) < 2)
392 388 		return -EINVAL;
393 389 
394 390 	/*
···
399 395 	 * callback.
400 396 	 */
401 397 	if (old_fb &&
402 - 	    (drm_rect_width(&state->dst) != drm_rect_width(&old_state->dst) ||
403 - 	     drm_rect_height(&state->dst) != drm_rect_height(&old_state->dst) ||
398 + 	    (drm_rect_width(&new_state->dst) != drm_rect_width(&old_state->dst) ||
399 + 	     drm_rect_height(&new_state->dst) != drm_rect_height(&old_state->dst) ||
404 400 	     fb->format != old_fb->format))
405 401 		crtc_state->mode_changed = true;
406 402 
407 - 	eba = drm_plane_state_to_eba(state, 0);
403 + 	eba = drm_plane_state_to_eba(new_state, 0);
408 404 
409 405 	if (eba & 0x7)
410 406 		return -EINVAL;
···
430 426 	 * - Only EBA may be changed while scanout is active
431 427 	 * - The strides of U and V planes must be identical.
432 428 	 */
433 - 	vbo = drm_plane_state_to_vbo(state);
429 + 	vbo = drm_plane_state_to_vbo(new_state);
434 430 
435 431 	if (vbo & 0x7 || vbo > 0xfffff8)
436 432 		return -EINVAL;
···
447 443 		fallthrough;
448 444 	case DRM_FORMAT_NV12:
449 445 	case DRM_FORMAT_NV16:
450 - 		ubo = drm_plane_state_to_ubo(state);
446 + 		ubo = drm_plane_state_to_ubo(new_state);
451 447 
452 448 		if (ubo & 0x7 || ubo > 0xfffff8)
453 449 			return -EINVAL;
···
468 464 		 * The x/y offsets must be even in case of horizontal/vertical
469 465 		 * chroma subsampling.
470 466 		 */
471 - 		if (((state->src.x1 >> 16) & (fb->format->hsub - 1)) ||
472 - 		    ((state->src.y1 >> 16) & (fb->format->vsub - 1)))
467 + 		if (((new_state->src.x1 >> 16) & (fb->format->hsub - 1)) ||
468 + 		    ((new_state->src.y1 >> 16) & (fb->format->vsub - 1)))
473 469 			return -EINVAL;
474 470 		break;
475 471 	case DRM_FORMAT_RGB565_A8:
···
478 474 	case DRM_FORMAT_BGR888_A8:
479 475 	case DRM_FORMAT_RGBX8888_A8:
480 476 	case DRM_FORMAT_BGRX8888_A8:
481 - 		alpha_eba = drm_plane_state_to_eba(state, 1);
477 + 		alpha_eba = drm_plane_state_to_eba(new_state, 1);
482 478 		if (alpha_eba & 0x7)
483 479 			return -EINVAL;
484 480 
···
494 490 }
495 491 
496 492 static void ipu_plane_atomic_disable(struct drm_plane *plane,
497 - 				     struct drm_plane_state *old_state)
493 + 				     struct drm_atomic_state *state)
498 494 {
499 495 	struct ipu_plane *ipu_plane = to_ipu_plane(plane);
500 496 
···
539 535 }
540 536 
541 537 static void ipu_plane_atomic_update(struct drm_plane *plane,
542 - 				    struct drm_plane_state *old_state)
538 + 				    struct drm_atomic_state *state)
543 539 {
540 + 	struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
541 + 									   plane);
544 542 	struct ipu_plane *ipu_plane = to_ipu_plane(plane);
545 - 	struct drm_plane_state *state = plane->state;
546 - 	struct ipu_plane_state *ipu_state = to_ipu_plane_state(state);
547 - 	struct drm_crtc_state *crtc_state = state->crtc->state;
548 - 	struct drm_framebuffer *fb = state->fb;
549 - 	struct drm_rect *dst = &state->dst;
543 + 	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
544 + 									   plane);
545 + 	struct ipu_plane_state *ipu_state = to_ipu_plane_state(new_state);
546 + 	struct drm_crtc_state *crtc_state = new_state->crtc->state;
547 + 	struct drm_framebuffer *fb = new_state->fb;
548 + 	struct drm_rect *dst = &new_state->dst;
550 549 	unsigned long eba, ubo, vbo;
551 550 	unsigned long alpha_eba = 0;
552 551 	enum ipu_color_space ics;
···
564 557 
565 558 	switch (ipu_plane->dp_flow) {
566 559 	case IPU_DP_FLOW_SYNC_BG:
567 - 		if (state->normalized_zpos == 1) {
560 + 		if (new_state->normalized_zpos == 1) {
568 561 			ipu_dp_set_global_alpha(ipu_plane->dp,
569 562 						!fb->format->has_alpha, 0xff,
570 563 						true);
···
573 566 		}
574 567 		break;
575 568 	case IPU_DP_FLOW_SYNC_FG:
576 - 		if (state->normalized_zpos == 1) {
569 + 		if (new_state->normalized_zpos == 1) {
577 570 			ipu_dp_set_global_alpha(ipu_plane->dp,
578 571 						!fb->format->has_alpha, 0xff,
579 572 						false);
···
581 574 		break;
582 575 	}
583 576 
584 - 	eba = drm_plane_state_to_eba(state, 0);
577 + 	eba = drm_plane_state_to_eba(new_state, 0);
585 578 
586 579 	/*
587 580 	 * Configure PRG channel and attached PRE, this changes the EBA to an
···
590 583 	if (ipu_state->use_pre) {
591 584 		axi_id = ipu_chan_assign_axi_id(ipu_plane->dma);
592 585 		ipu_prg_channel_configure(ipu_plane->ipu_ch, axi_id,
593 - 					  drm_rect_width(&state->src) >> 16,
594 - 					  drm_rect_height(&state->src) >> 16,
586 + 					  drm_rect_width(&new_state->src) >> 16,
587 + 					  drm_rect_height(&new_state->src) >> 16,
595 588 					  fb->pitches[0], fb->format->format,
596 589 					  fb->modifier, &eba);
597 590 	}
···
625 618 
626 619 	ipu_dmfc_config_wait4eot(ipu_plane->dmfc, drm_rect_width(dst));
627 620 
628 - 	width = drm_rect_width(&state->src) >> 16;
629 - 	height = drm_rect_height(&state->src) >> 16;
621 + 	width = drm_rect_width(&new_state->src) >> 16;
622 + 	height = drm_rect_height(&new_state->src) >> 16;
630 623 	info = drm_format_info(fb->format->format);
631 624 	ipu_calculate_bursts(width, info->cpp[0], fb->pitches[0],
632 625 			     &burstsize, &num_bursts);
···
648 641 	case DRM_FORMAT_YVU422:
649 642 	case DRM_FORMAT_YUV444:
650 643 	case DRM_FORMAT_YVU444:
651 - 		ubo = drm_plane_state_to_ubo(state);
652 - 		vbo = drm_plane_state_to_vbo(state);
644 + 		ubo = drm_plane_state_to_ubo(new_state);
645 + 		vbo = drm_plane_state_to_vbo(new_state);
653 646 		if (fb->format->format == DRM_FORMAT_YVU420 ||
654 647 		    fb->format->format == DRM_FORMAT_YVU422 ||
655 648 		    fb->format->format == DRM_FORMAT_YVU444)
···
660 653 
661 654 		dev_dbg(ipu_plane->base.dev->dev,
662 655 			"phy = %lu %lu %lu, x = %d, y = %d", eba, ubo, vbo,
663 - 			state->src.x1 >> 16, state->src.y1 >> 16);
656 + 			new_state->src.x1 >> 16, new_state->src.y1 >> 16);
664 657 		break;
665 658 	case DRM_FORMAT_NV12:
666 659 	case DRM_FORMAT_NV16:
667 - 		ubo = drm_plane_state_to_ubo(state);
660 + 		ubo = drm_plane_state_to_ubo(new_state);
668 661 
669 662 		ipu_cpmem_set_yuv_planar_full(ipu_plane->ipu_ch,
670 663 					      fb->pitches[1], ubo, ubo);
671 664 
672 665 		dev_dbg(ipu_plane->base.dev->dev,
673 666 			"phy = %lu %lu, x = %d, y = %d", eba, ubo,
674 - 			state->src.x1 >> 16, state->src.y1 >> 16);
667 + 			new_state->src.x1 >> 16, new_state->src.y1 >> 16);
675 668 		break;
676 669 	case DRM_FORMAT_RGB565_A8:
677 670 	case DRM_FORMAT_BGR565_A8:
···
679 672 	case DRM_FORMAT_BGR888_A8:
680 673 	case DRM_FORMAT_RGBX8888_A8:
681 674 	case DRM_FORMAT_BGRX8888_A8:
682 - 		alpha_eba = drm_plane_state_to_eba(state, 1);
675 + 		alpha_eba = drm_plane_state_to_eba(new_state, 1);
683 676 		num_bursts = 0;
684 677 
685 678 		dev_dbg(ipu_plane->base.dev->dev, "phys = %lu %lu, x = %d, y = %d",
686 - 			eba, alpha_eba, state->src.x1 >> 16, state->src.y1 >> 16);
679 + 			eba, alpha_eba, new_state->src.x1 >> 16,
680 + 			new_state->src.y1 >> 16);
687 681 
688 682 		ipu_cpmem_set_burstsize(ipu_plane->ipu_ch, 16);
689 683 
690 684 		ipu_cpmem_zero(ipu_plane->alpha_ch);
691 685 		ipu_cpmem_set_resolution(ipu_plane->alpha_ch,
692 - 					 drm_rect_width(&state->src) >> 16,
693 - 					 drm_rect_height(&state->src) >> 16);
686 + 					 drm_rect_width(&new_state->src) >> 16,
687 + 					 drm_rect_height(&new_state->src) >> 16);
694 688 		ipu_cpmem_set_format_passthrough(ipu_plane->alpha_ch, 8);
695 689 		ipu_cpmem_set_high_priority(ipu_plane->alpha_ch);
696 690 		ipu_idmac_set_double_buffer(ipu_plane->alpha_ch, 1);
···
702 694 		break;
703 695 	default:
704 696 		dev_dbg(ipu_plane->base.dev->dev, "phys = %lu, x = %d, y = %d",
705 - 			eba, state->src.x1 >> 16, state->src.y1 >> 16);
697 + 			eba, new_state->src.x1 >> 16, new_state->src.y1 >> 16);
706 698 		break;
707 699 	}
708 700 	ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 0, eba);
···
712 704 }
713 705 
714 706 static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
715 - 	.prepare_fb = drm_gem_fb_prepare_fb,
707 + 	.prepare_fb = drm_gem_plane_helper_prepare_fb,
716 708 	.atomic_check = ipu_plane_atomic_check,
717 709 	.atomic_disable = ipu_plane_atomic_disable,
718 710 	.atomic_update = ipu_plane_atomic_update,
+31 -24
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
···
28 28 #include <drm/drm_fb_cma_helper.h>
29 29 #include <drm/drm_fb_helper.h>
30 30 #include <drm/drm_fourcc.h>
31 + #include <drm/drm_gem_atomic_helper.h>
31 32 #include <drm/drm_gem_framebuffer_helper.h>
32 33 #include <drm/drm_irq.h>
33 34 #include <drm/drm_managed.h>
···
360 359 }
361 360 
362 361 static int ingenic_drm_plane_atomic_check(struct drm_plane *plane,
363 - 					  struct drm_plane_state *state)
362 + 					  struct drm_atomic_state *state)
364 363 {
364 + 	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state,
365 + 										 plane);
366 + 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
367 + 										 plane);
365 368 	struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
366 369 	struct drm_crtc_state *crtc_state;
367 - 	struct drm_crtc *crtc = state->crtc ?: plane->state->crtc;
370 + 	struct drm_crtc *crtc = new_plane_state->crtc ?: old_plane_state->crtc;
368 371 	int ret;
369 372 
370 373 	if (!crtc)
371 374 		return 0;
372 375 
373 - 	crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc);
376 + 	crtc_state = drm_atomic_get_existing_crtc_state(state,
377 + 							crtc);
374 378 	if (WARN_ON(!crtc_state))
375 379 		return -EINVAL;
376 380 
377 - 	ret = drm_atomic_helper_check_plane_state(state, crtc_state,
381 + 	ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state,
378 382 						  DRM_PLANE_HELPER_NO_SCALING,
379 383 						  DRM_PLANE_HELPER_NO_SCALING,
380 384 						  priv->soc_info->has_osd,
···
392 386 	 * Note that state->src_* are in 16.16 fixed-point format.
393 387 	 */
394 388 	if (!priv->soc_info->has_osd &&
395 - 	    (state->src_x != 0 ||
396 - 	     (state->src_w >> 16) != state->crtc_w ||
397 - 	     (state->src_h >> 16) != state->crtc_h))
389 + 	    (new_plane_state->src_x != 0 ||
390 + 	     (new_plane_state->src_w >> 16) != new_plane_state->crtc_w ||
391 + 	     (new_plane_state->src_h >> 16) != new_plane_state->crtc_h))
398 392 		return -EINVAL;
399 393 
400 394 	/*
···
402 396 	 * its position, size or depth.
403 397 	 */
404 398 	if (priv->soc_info->has_osd &&
405 - 	    (!plane->state->fb || !state->fb ||
406 - 	     plane->state->crtc_x != state->crtc_x ||
407 - 	     plane->state->crtc_y != state->crtc_y ||
408 - 	     plane->state->crtc_w != state->crtc_w ||
409 - 	     plane->state->crtc_h != state->crtc_h ||
410 - 	     plane->state->fb->format->format != state->fb->format->format))
399 + 	    (!old_plane_state->fb || !new_plane_state->fb ||
400 + 	     old_plane_state->crtc_x != new_plane_state->crtc_x ||
401 + 	     old_plane_state->crtc_y != new_plane_state->crtc_y ||
402 + 	     old_plane_state->crtc_w != new_plane_state->crtc_w ||
403 + 	     old_plane_state->crtc_h != new_plane_state->crtc_h ||
404 + 	     old_plane_state->fb->format->format != new_plane_state->fb->format->format))
411 405 		crtc_state->mode_changed = true;
412 406 
413 407 	return 0;
···
444 438 }
445 439 
446 440 static void ingenic_drm_plane_atomic_disable(struct drm_plane *plane,
447 - 					     struct drm_plane_state *old_state)
441 + 					     struct drm_atomic_state *state)
448 442 {
449 443 	struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
450 444 
···
542 536 }
543 537 
544 538 static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
545 - 					    struct drm_plane_state *oldstate)
539 + 					    struct drm_atomic_state *state)
546 540 {
547 541 	struct ingenic_drm *priv = drm_device_get_priv(plane->dev);
548 - 	struct drm_plane_state *state = plane->state;
542 + 	struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state,
543 + 									  plane);
549 544 	struct drm_crtc_state *crtc_state;
550 545 	struct ingenic_dma_hwdesc *hwdesc;
551 546 	unsigned int width, height, cpp, offset;
552 547 	dma_addr_t addr;
553 548 	u32 fourcc;
554 549 
555 - 	if (state && state->fb) {
556 - 		crtc_state = state->crtc->state;
550 + 	if (newstate && newstate->fb) {
551 + 		crtc_state = newstate->crtc->state;
557 552 
558 - 		addr = drm_fb_cma_get_gem_addr(state->fb, state, 0);
559 - 		width = state->src_w >> 16;
560 - 		height = state->src_h >> 16;
561 - 		cpp = state->fb->format->cpp[0];
553 + 		addr = drm_fb_cma_get_gem_addr(newstate->fb, newstate, 0);
554 + 		width = newstate->src_w >> 16;
555 + 		height = newstate->src_h >> 16;
556 + 		cpp = newstate->fb->format->cpp[0];
562 557 
563 558 		if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
564 559 			hwdesc = &priv->dma_hwdescs->hwdesc_f0;
···
570 563 		hwdesc->cmd = JZ_LCD_CMD_EOF_IRQ | (width * height * cpp / 4);
571 564 
572 565 		if (drm_atomic_crtc_needs_modeset(crtc_state)) {
573 - 			fourcc = state->fb->format->format;
566 + 			fourcc = newstate->fb->format->format;
574 567 
575 568 			ingenic_drm_plane_config(priv->dev, plane, fourcc);
576 569 
···
787 780 	.atomic_update = ingenic_drm_plane_atomic_update,
788 781 	.atomic_check = ingenic_drm_plane_atomic_check,
789 782 	.atomic_disable = ingenic_drm_plane_atomic_disable,
790 - 	.prepare_fb = drm_gem_fb_prepare_fb,
783 + 	.prepare_fb = drm_gem_plane_helper_prepare_fb,
791 784 };
792 785 
793 786 static const struct drm_crtc_helper_funcs ingenic_drm_crtc_helper_funcs = {
+42 -35
drivers/gpu/drm/ingenic/ingenic-ipu.c
···
23 23 #include <drm/drm_drv.h>
24 24 #include <drm/drm_fb_cma_helper.h>
25 25 #include <drm/drm_fourcc.h>
26 - #include <drm/drm_gem_framebuffer_helper.h>
26 + #include <drm/drm_gem_atomic_helper.h>
27 27 #include <drm/drm_plane.h>
28 28 #include <drm/drm_plane_helper.h>
29 29 #include <drm/drm_property.h>
···
282 282 }
283 283 
284 284 static void ingenic_ipu_plane_atomic_update(struct drm_plane *plane,
285 - 					    struct drm_plane_state *oldstate)
285 + 					    struct drm_atomic_state *state)
286 286 {
287 287 	struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
288 - 	struct drm_plane_state *state = plane->state;
288 + 	struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state,
289 + 									  plane);
289 290 	const struct drm_format_info *finfo;
290 291 	u32 ctrl, stride = 0, coef_index = 0, format = 0;
291 292 	bool needs_modeset, upscaling_w, upscaling_h;
292 293 	int err;
293 294 
294 - 	if (!state || !state->fb)
295 + 	if (!newstate || !newstate->fb)
295 296 		return;
296 297 
297 - 	finfo = drm_format_info(state->fb->format->format);
298 + 	finfo = drm_format_info(newstate->fb->format->format);
298 299 
299 300 	if (!ipu->clk_enabled) {
300 301 		err = clk_enable(ipu->clk);
···
308 307 	}
309 308 
310 309 	/* Reset all the registers if needed */
311 - 	needs_modeset = drm_atomic_crtc_needs_modeset(state->crtc->state);
310 + 	needs_modeset = drm_atomic_crtc_needs_modeset(newstate->crtc->state);
312 311 	if (needs_modeset) {
313 312 		regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_RST);
314 313 
···
318 317 	}
319 318 
320 319 	/* New addresses will be committed in vblank handler... */
321 - 	ipu->addr_y = drm_fb_cma_get_gem_addr(state->fb, state, 0);
320 + 	ipu->addr_y = drm_fb_cma_get_gem_addr(newstate->fb, newstate, 0);
322 321 	if (finfo->num_planes > 1)
323 - 		ipu->addr_u = drm_fb_cma_get_gem_addr(state->fb, state, 1);
322 + 		ipu->addr_u = drm_fb_cma_get_gem_addr(newstate->fb, newstate,
323 + 						      1);
324 324 	if (finfo->num_planes > 2)
325 - 		ipu->addr_v = drm_fb_cma_get_gem_addr(state->fb, state, 2);
325 + 		ipu->addr_v = drm_fb_cma_get_gem_addr(newstate->fb, newstate,
326 + 						      2);
326 327 
327 328 	if (!needs_modeset)
328 329 		return;
···
341 338 
342 339 	/* Set the input height/width/strides */
343 340 	if (finfo->num_planes > 2)
344 - 		stride = ((state->src_w >> 16) * finfo->cpp[2] / finfo->hsub)
341 + 		stride = ((newstate->src_w >> 16) * finfo->cpp[2] / finfo->hsub)
345 342 			 << JZ_IPU_UV_STRIDE_V_LSB;
346 343 
347 344 	if (finfo->num_planes > 1)
348 - 		stride |= ((state->src_w >> 16) * finfo->cpp[1] / finfo->hsub)
345 + 		stride |= ((newstate->src_w >> 16) * finfo->cpp[1] / finfo->hsub)
349 346 			  << JZ_IPU_UV_STRIDE_U_LSB;
350 347 
351 348 	regmap_write(ipu->map, JZ_REG_IPU_UV_STRIDE, stride);
352 349 
353 - 	stride = ((state->src_w >> 16) * finfo->cpp[0]) << JZ_IPU_Y_STRIDE_Y_LSB;
350 + 	stride = ((newstate->src_w >> 16) * finfo->cpp[0]) << JZ_IPU_Y_STRIDE_Y_LSB;
354 351 	regmap_write(ipu->map, JZ_REG_IPU_Y_STRIDE, stride);
355 352 
356 353 	regmap_write(ipu->map, JZ_REG_IPU_IN_GS,
357 354 		     (stride << JZ_IPU_IN_GS_W_LSB) |
358 - 		     ((state->src_h >> 16) << JZ_IPU_IN_GS_H_LSB));
355 + 		     ((newstate->src_h >> 16) << JZ_IPU_IN_GS_H_LSB));
359 356 
360 357 	switch (finfo->format) {
361 358 	case DRM_FORMAT_XRGB1555:
···
424 421 
425 422 	/* Set the output height/width/stride */
426 423 	regmap_write(ipu->map, JZ_REG_IPU_OUT_GS,
427 - 		     ((state->crtc_w * 4) << JZ_IPU_OUT_GS_W_LSB)
428 - 		     | state->crtc_h << JZ_IPU_OUT_GS_H_LSB);
429 - 	regmap_write(ipu->map, JZ_REG_IPU_OUT_STRIDE, state->crtc_w * 4);
424 + 		     ((newstate->crtc_w * 4) << JZ_IPU_OUT_GS_W_LSB)
425 + 		     | newstate->crtc_h << JZ_IPU_OUT_GS_H_LSB);
426 + 	regmap_write(ipu->map, JZ_REG_IPU_OUT_STRIDE, newstate->crtc_w * 4);
430 427 
431 428 	if (finfo->is_yuv) {
432 429 		regmap_set_bits(ipu->map, JZ_REG_IPU_CTRL, JZ_IPU_CTRL_CSC_EN);
···
511 508 			JZ_IPU_CTRL_RUN | JZ_IPU_CTRL_FM_IRQ_EN);
512 509 
513 510 	dev_dbg(ipu->dev, "Scaling %ux%u to %ux%u (%u:%u horiz, %u:%u vert)\n",
514 - 		state->src_w >> 16, state->src_h >> 16,
515 - 		state->crtc_w, state->crtc_h,
511 + 		newstate->src_w >> 16, newstate->src_h >> 16,
512 + 		newstate->crtc_w, newstate->crtc_h,
516 513 		ipu->num_w, ipu->denom_w, ipu->num_h, ipu->denom_h);
517 514 }
518 515 
519 516 static int ingenic_ipu_plane_atomic_check(struct drm_plane *plane,
520 - 					  struct drm_plane_state *state)
517 + 					  struct drm_atomic_state *state)
521 518 {
519 + 	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state,
520 + 										 plane);
521 + 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
522 + 										 plane);
522 523 	unsigned int num_w, denom_w, num_h, denom_h, xres, yres, max_w, max_h;
523 524 	struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
524 - 	struct drm_crtc *crtc = state->crtc ?: plane->state->crtc;
525 + 	struct drm_crtc *crtc = new_plane_state->crtc ?: old_plane_state->crtc;
525 526 	struct drm_crtc_state *crtc_state;
526 527 
527 528 	if (!crtc)
528 529 		return 0;
529 530 
530 - 	crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc);
531 + 	crtc_state = drm_atomic_get_existing_crtc_state(state, crtc);
531 532 	if (WARN_ON(!crtc_state))
532 533 		return -EINVAL;
533 534 
534 535 	/* Request a full modeset if we are enabling or disabling the IPU. */
535 - 	if (!plane->state->crtc ^ !state->crtc)
536 + 	if (!old_plane_state->crtc ^ !new_plane_state->crtc)
536 537 		crtc_state->mode_changed = true;
537 538 
538 - 	if (!state->crtc ||
539 + 	if (!new_plane_state->crtc ||
539 540 	    !crtc_state->mode.hdisplay || !crtc_state->mode.vdisplay)
540 541 		return 0;
541 542 
542 543 	/* Plane must be fully visible */
543 - 	if (state->crtc_x < 0 || state->crtc_y < 0 ||
544 - 	    state->crtc_x + state->crtc_w > crtc_state->mode.hdisplay ||
545 - 	    state->crtc_y + state->crtc_h > crtc_state->mode.vdisplay)
544 + 	if (new_plane_state->crtc_x < 0 || new_plane_state->crtc_y < 0 ||
545 + 	    new_plane_state->crtc_x + new_plane_state->crtc_w > crtc_state->mode.hdisplay ||
546 + 	    new_plane_state->crtc_y + new_plane_state->crtc_h > crtc_state->mode.vdisplay)
546 547 		return -EINVAL;
547 548 
548 549 	/* Minimum size is 4x4 */
549 - 	if ((state->src_w >> 16) < 4 || (state->src_h >> 16) < 4)
550 + 	if ((new_plane_state->src_w >> 16) < 4 || (new_plane_state->src_h >> 16) < 4)
550 551 		return -EINVAL;
551 552 
552 553 	/* Input and output lines must have an even number of pixels. */
553 - 	if (((state->src_w >> 16) & 1) || (state->crtc_w & 1))
554 + 	if (((new_plane_state->src_w >> 16) & 1) || (new_plane_state->crtc_w & 1))
554 555 		return -EINVAL;
555 556 
556 - 	if (!osd_changed(state, plane->state))
557 + 	if (!osd_changed(new_plane_state, old_plane_state))
557 558 		return 0;
558 559 
559 560 	crtc_state->mode_changed = true;
560 561 
561 - 	xres = state->src_w >> 16;
562 - 	yres = state->src_h >> 16;
562 + 	xres = new_plane_state->src_w >> 16;
563 + 	yres = new_plane_state->src_h >> 16;
563 564 
564 565 	/*
565 566 	 * Increase the scaled image's theorical width/height until we find a
···
575 568 	max_w = crtc_state->mode.hdisplay * 102 / 100;
576 569 	max_h = crtc_state->mode.vdisplay * 102 / 100;
577 570 
578 - 	for (denom_w = xres, num_w = state->crtc_w; num_w <= max_w; num_w++)
571 + 	for (denom_w = xres, num_w = new_plane_state->crtc_w; num_w <= max_w; num_w++)
579 572 		if (!reduce_fraction(&num_w, &denom_w))
580 573 			break;
581 574 	if (num_w > max_w)
582 575 		return -EINVAL;
583 576 
584 - 	for (denom_h = yres, num_h = state->crtc_h; num_h <= max_h; num_h++)
577 + 	for (denom_h = yres, num_h = new_plane_state->crtc_h; num_h <= max_h; num_h++)
585 578 		if (!reduce_fraction(&num_h, &denom_h))
586 579 			break;
587 580 	if (num_h > max_h)
···
596 589 }
597 590 
598 591 static void ingenic_ipu_plane_atomic_disable(struct drm_plane *plane,
599 - 					     struct drm_plane_state *old_state)
592 + 					     struct drm_atomic_state *state)
600 593 {
601 594 	struct ingenic_ipu *ipu = plane_to_ingenic_ipu(plane);
602 595 
···
615 608 	.atomic_update = ingenic_ipu_plane_atomic_update,
616 609 	.atomic_check = ingenic_ipu_plane_atomic_check,
617 610 	.atomic_disable = ingenic_ipu_plane_atomic_disable,
618 - 	.prepare_fb = drm_gem_fb_prepare_fb,
611 + 	.prepare_fb = drm_gem_plane_helper_prepare_fb,
619 612 };
620 613 
621 614 static int
+29 -21
drivers/gpu/drm/kmb/kmb_plane.c
···
77 77 }
78 78 
79 79 static int kmb_plane_atomic_check(struct drm_plane *plane,
80 - 				  struct drm_plane_state *state)
80 + 				  struct drm_atomic_state *state)
81 81 {
82 + 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
83 + 										 plane);
82 84 	struct drm_framebuffer *fb;
83 85 	int ret;
84 86 	struct drm_crtc_state *crtc_state;
85 87 	bool can_position;
86 88 
87 - 	fb = state->fb;
88 - 	if (!fb || !state->crtc)
89 + 	fb = new_plane_state->fb;
90 + 	if (!fb || !new_plane_state->crtc)
89 91 		return 0;
90 92 
91 93 	ret = check_pixel_format(plane, fb->format->format);
92 94 	if (ret)
93 95 		return ret;
94 96 
95 - 	if (state->crtc_w > KMB_MAX_WIDTH || state->crtc_h > KMB_MAX_HEIGHT)
97 + 	if (new_plane_state->crtc_w > KMB_MAX_WIDTH || new_plane_state->crtc_h > KMB_MAX_HEIGHT)
96 98 		return -EINVAL;
97 - 	if (state->crtc_w < KMB_MIN_WIDTH || state->crtc_h < KMB_MIN_HEIGHT)
99 + 	if (new_plane_state->crtc_w < KMB_MIN_WIDTH || new_plane_state->crtc_h < KMB_MIN_HEIGHT)
98 100 		return -EINVAL;
99 101 	can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
100 102 	crtc_state =
101 - 		drm_atomic_get_existing_crtc_state(state->state, state->crtc);
102 - 	return drm_atomic_helper_check_plane_state(state, crtc_state,
103 - 						   DRM_PLANE_HELPER_NO_SCALING,
104 - 						   DRM_PLANE_HELPER_NO_SCALING,
105 - 						   can_position, true);
103 + 		drm_atomic_get_existing_crtc_state(state,
104 + 						   new_plane_state->crtc);
105 + 	return drm_atomic_helper_check_plane_state(new_plane_state,
106 + 						   crtc_state,
107 + 						   DRM_PLANE_HELPER_NO_SCALING,
108 + 						   DRM_PLANE_HELPER_NO_SCALING,
109 + 						   can_position, true);
106 110 }
107 111 
108 112 static void kmb_plane_atomic_disable(struct drm_plane *plane,
109 - 				     struct drm_plane_state *state)
113 + 				     struct drm_atomic_state *state)
110 114 {
111 115 	struct kmb_plane *kmb_plane = to_kmb_plane(plane);
112 116 	int plane_id = kmb_plane->id;
···
278 274 }
279 275 
280 276 static void kmb_plane_atomic_update(struct drm_plane *plane,
281 - 				    struct drm_plane_state *state)
277 + 				    struct drm_atomic_state *state)
282 278 {
279 + 	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state,
280 + 										 plane);
281 + 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
282 + 										 plane);
283 283 	struct drm_framebuffer *fb;
284 284 	struct kmb_drm_private *kmb;
285 285 	unsigned int width;
···
297 289 	int num_planes;
298 290 	static dma_addr_t addr[MAX_SUB_PLANES];
299 291 
300 - 	if (!plane || !plane->state || !state)
292 + 	if (!plane || !new_plane_state || !old_plane_state)
301 293 		return;
302 294 
303 - 	fb = plane->state->fb;
295 + 	fb = new_plane_state->fb;
304 296 	if (!fb)
305 297 		return;
306 298 	num_planes = fb->format->num_planes;
···
317 309 	}
318 310 	spin_unlock_irq(&kmb->irq_lock);
319 311 
320 - 	src_w = (plane->state->src_w >> 16);
321 - 	src_h = plane->state->src_h >> 16;
322 - 	crtc_x = plane->state->crtc_x;
323 - 	crtc_y = plane->state->crtc_y;
312 + 	src_w = (new_plane_state->src_w >> 16);
313 + 	src_h = new_plane_state->src_h >> 16;
314 + 	crtc_x = new_plane_state->crtc_x;
315 + 	crtc_y = new_plane_state->crtc_y;
324 316 
325 317 	drm_dbg(&kmb->drm,
326 318 		"src_w=%d src_h=%d, fb->format->format=0x%x fb->flags=0x%x\n",
···
337 329 	kmb_write_lcd(kmb, LCD_LAYERn_DMA_LINE_WIDTH(plane_id),
338 330 		      (width * fb->format->cpp[0]));
339 331 
340 - 	addr[Y_PLANE] = drm_fb_cma_get_gem_addr(fb, plane->state, 0);
332 + 	addr[Y_PLANE] = drm_fb_cma_get_gem_addr(fb, new_plane_state, 0);
341 333 	kmb_write_lcd(kmb, LCD_LAYERn_DMA_START_ADDR(plane_id),
342 334 		      addr[Y_PLANE] + fb->offsets[0]);
343 335 	val = get_pixel_format(fb->format->format);
···
349 341 		kmb_write_lcd(kmb, LCD_LAYERn_DMA_CB_LINE_WIDTH(plane_id),
350 342 			      (width * fb->format->cpp[0]));
351 343 
352 - 		addr[U_PLANE] = drm_fb_cma_get_gem_addr(fb, plane->state,
344 + 		addr[U_PLANE] = drm_fb_cma_get_gem_addr(fb, new_plane_state,
353 345 							U_PLANE);
354 346 		/* check if Cb/Cr is swapped*/
355 347 		if (num_planes == 3 && (val & LCD_LAYER_CRCB_ORDER))
···
371 363 			      ((width) * fb->format->cpp[0]));
372 364 
373 365 		addr[V_PLANE] = drm_fb_cma_get_gem_addr(fb,
374 - 							plane->state,
366 + 							new_plane_state,
375 367 							V_PLANE);
376 368 
377 369 		/* check if Cb/Cr is swapped*/
+10 -1
drivers/gpu/drm/lima/lima_devfreq.c
···
81 81 }
82 82 
83 83 static struct devfreq_dev_profile lima_devfreq_profile = {
84 + 	.timer = DEVFREQ_TIMER_DELAYED,
84 85 	.polling_ms = 50, /* ~3 frames */
85 86 	.target = lima_devfreq_target,
86 87 	.get_dev_status = lima_devfreq_get_dev_status,
···
164 163 	lima_devfreq_profile.initial_freq = cur_freq;
165 164 	dev_pm_opp_put(opp);
166 165 
166 + 	/*
167 + 	 * Setup default thresholds for the simple_ondemand governor.
168 + 	 * The values are chosen based on experiments.
169 + 	 */
170 + 	ldevfreq->gov_data.upthreshold = 30;
171 + 	ldevfreq->gov_data.downdifferential = 5;
172 + 
167 173 	devfreq = devm_devfreq_add_device(dev, &lima_devfreq_profile,
168 - 					  DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
174 + 					  DEVFREQ_GOV_SIMPLE_ONDEMAND,
175 + 					  &ldevfreq->gov_data);
169 176 	if (IS_ERR(devfreq)) {
170 177 		dev_err(dev, "Couldn't initialize GPU devfreq\n");
171 178 		ret = PTR_ERR(devfreq);
+2
drivers/gpu/drm/lima/lima_devfreq.h
···
4 4 #ifndef __LIMA_DEVFREQ_H__
5 5 #define __LIMA_DEVFREQ_H__
6 6 
7 + #include <linux/devfreq.h>
7 8 #include <linux/spinlock.h>
8 9 #include <linux/ktime.h>
9 10 
···
19 18 	struct opp_table *clkname_opp_table;
20 19 	struct opp_table *regulators_opp_table;
21 20 	struct thermal_cooling_device *cooling;
21 + 	struct devfreq_simple_ondemand_data gov_data;
22 22 
23 23 	ktime_t busy_time;
24 24 	ktime_t idle_time;
+4 -2
drivers/gpu/drm/lima/lima_sched.c
···
415 415 	mutex_unlock(&dev->error_task_list_lock);
416 416 }
417 417 
418 - static void lima_sched_timedout_job(struct drm_sched_job *job)
418 + static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job)
419 419 {
420 420 	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
421 421 	struct lima_sched_task *task = to_lima_task(job);
···
449 449 
450 450 	drm_sched_resubmit_jobs(&pipe->base);
451 451 	drm_sched_start(&pipe->base, true);
452 + 
453 + 	return DRM_GPU_SCHED_STAT_NOMINAL;
452 454 }
453 455 
454 456 static void lima_sched_free_job(struct drm_sched_job *job)
···
509 507 
510 508 	return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
511 509 			      lima_job_hang_limit, msecs_to_jiffies(timeout),
512 - 			      name);
510 + 			      NULL, name);
513 511 }
514 512 
void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
+4 -6
drivers/gpu/drm/mcde/mcde_display.c
···
13 13 #include <drm/drm_device.h>
14 14 #include <drm/drm_fb_cma_helper.h>
15 15 #include <drm/drm_fourcc.h>
16 + #include <drm/drm_gem_atomic_helper.h>
16 17 #include <drm/drm_gem_cma_helper.h>
17 - #include <drm/drm_gem_framebuffer_helper.h>
18 18 #include <drm/drm_mipi_dsi.h>
19 19 #include <drm/drm_simple_kms_helper.h>
20 20 #include <drm/drm_bridge.h>
···
1161 1161 	int dsi_pkt_size;
1162 1162 	int fifo_wtrmrk;
1163 1163 	int cpp = fb->format->cpp[0];
1164 - 	struct drm_format_name_buf tmp;
1165 1164 	u32 dsi_formatter_frame;
1166 1165 	u32 val;
1167 1166 	int ret;
···
1172 1173 		return;
1173 1174 	}
1174 1175 
1175 - 	dev_info(drm->dev, "enable MCDE, %d x %d format %s\n",
1176 - 		 mode->hdisplay, mode->vdisplay,
1177 - 		 drm_get_format_name(format, &tmp));
1176 + 	dev_info(drm->dev, "enable MCDE, %d x %d format %p4cc\n",
1177 + 		 mode->hdisplay, mode->vdisplay, &format);
1178 1178 
1179 1179 
1180 1180 	/* Clear any pending interrupts */
···
1479 1481 	.update = mcde_display_update,
1480 1482 	.enable_vblank = mcde_display_enable_vblank,
1481 1483 	.disable_vblank = mcde_display_disable_vblank,
1482 - 	.prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb,
1484 + 	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
1483 1485 };
1484 1486 
1485 1487 int mcde_display_init(struct drm_device *drm)
+2 -2
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 522 522 } 523 523 524 524 void mtk_drm_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane, 525 - struct drm_plane_state *new_state) 525 + struct drm_atomic_state *state) 526 526 { 527 527 struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); 528 528 const struct drm_plane_helper_funcs *plane_helper_funcs = ··· 531 531 if (!mtk_crtc->enabled) 532 532 return; 533 533 534 - plane_helper_funcs->atomic_update(plane, new_state); 534 + plane_helper_funcs->atomic_update(plane, state); 535 535 mtk_drm_crtc_hw_config(mtk_crtc); 536 536 } 537 537
+1 -1
drivers/gpu/drm/mediatek/mtk_drm_crtc.h
··· 21 21 int mtk_drm_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane, 22 22 struct mtk_plane_state *state); 23 23 void mtk_drm_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane, 24 - struct drm_plane_state *plane_state); 24 + struct drm_atomic_state *plane_state); 25 25 26 26 #endif /* MTK_DRM_CRTC_H */
+56 -45
drivers/gpu/drm/mediatek/mtk_drm_plane.c
··· 6 6 7 7 #include <drm/drm_atomic.h> 8 8 #include <drm/drm_atomic_helper.h> 9 - #include <drm/drm_fourcc.h> 10 9 #include <drm/drm_atomic_uapi.h> 10 + #include <drm/drm_fourcc.h> 11 + #include <drm/drm_gem_atomic_helper.h> 11 12 #include <drm/drm_plane_helper.h> 12 - #include <drm/drm_gem_framebuffer_helper.h> 13 13 14 14 #include "mtk_drm_crtc.h" 15 15 #include "mtk_drm_ddp_comp.h" ··· 77 77 } 78 78 79 79 static int mtk_plane_atomic_async_check(struct drm_plane *plane, 80 - struct drm_plane_state *state) 80 + struct drm_atomic_state *state) 81 81 { 82 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 83 + plane); 82 84 struct drm_crtc_state *crtc_state; 83 85 int ret; 84 86 85 - if (plane != state->crtc->cursor) 87 + if (plane != new_plane_state->crtc->cursor) 86 88 return -EINVAL; 87 89 88 90 if (!plane->state) ··· 93 91 if (!plane->state->fb) 94 92 return -EINVAL; 95 93 96 - ret = mtk_drm_crtc_plane_check(state->crtc, plane, 97 - to_mtk_plane_state(state)); 94 + ret = mtk_drm_crtc_plane_check(new_plane_state->crtc, plane, 95 + to_mtk_plane_state(new_plane_state)); 98 96 if (ret) 99 97 return ret; 100 98 101 - if (state->state) 102 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 103 - state->crtc); 99 + if (state) 100 + crtc_state = drm_atomic_get_existing_crtc_state(state, 101 + new_plane_state->crtc); 104 102 else /* Special case for asynchronous cursor updates. 
*/ 105 - crtc_state = state->crtc->state; 103 + crtc_state = new_plane_state->crtc->state; 106 104 107 105 return drm_atomic_helper_check_plane_state(plane->state, crtc_state, 108 106 DRM_PLANE_HELPER_NO_SCALING, ··· 111 109 } 112 110 113 111 static void mtk_plane_atomic_async_update(struct drm_plane *plane, 114 - struct drm_plane_state *new_state) 112 + struct drm_atomic_state *state) 115 113 { 116 - struct mtk_plane_state *state = to_mtk_plane_state(plane->state); 114 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 115 + plane); 116 + struct mtk_plane_state *new_plane_state = to_mtk_plane_state(plane->state); 117 117 118 118 plane->state->crtc_x = new_state->crtc_x; 119 119 plane->state->crtc_y = new_state->crtc_y; ··· 126 122 plane->state->src_h = new_state->src_h; 127 123 plane->state->src_w = new_state->src_w; 128 124 swap(plane->state->fb, new_state->fb); 129 - state->pending.async_dirty = true; 125 + new_plane_state->pending.async_dirty = true; 130 126 131 - mtk_drm_crtc_async_update(new_state->crtc, plane, new_state); 127 + mtk_drm_crtc_async_update(new_state->crtc, plane, state); 132 128 } 133 129 134 130 static const struct drm_plane_funcs mtk_plane_funcs = { ··· 141 137 }; 142 138 143 139 static int mtk_plane_atomic_check(struct drm_plane *plane, 144 - struct drm_plane_state *state) 140 + struct drm_atomic_state *state) 145 141 { 146 - struct drm_framebuffer *fb = state->fb; 142 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 143 + plane); 144 + struct drm_framebuffer *fb = new_plane_state->fb; 147 145 struct drm_crtc_state *crtc_state; 148 146 int ret; 149 147 150 148 if (!fb) 151 149 return 0; 152 150 153 - if (WARN_ON(!state->crtc)) 151 + if (WARN_ON(!new_plane_state->crtc)) 154 152 return 0; 155 153 156 - ret = mtk_drm_crtc_plane_check(state->crtc, plane, 157 - to_mtk_plane_state(state)); 154 + ret = mtk_drm_crtc_plane_check(new_plane_state->crtc, plane, 155 + 
to_mtk_plane_state(new_plane_state)); 158 156 if (ret) 159 157 return ret; 160 158 161 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 159 + crtc_state = drm_atomic_get_crtc_state(state, 160 + new_plane_state->crtc); 162 161 if (IS_ERR(crtc_state)) 163 162 return PTR_ERR(crtc_state); 164 163 165 - return drm_atomic_helper_check_plane_state(state, crtc_state, 164 + return drm_atomic_helper_check_plane_state(new_plane_state, 165 + crtc_state, 166 166 DRM_PLANE_HELPER_NO_SCALING, 167 167 DRM_PLANE_HELPER_NO_SCALING, 168 168 true, true); 169 169 } 170 170 171 171 static void mtk_plane_atomic_disable(struct drm_plane *plane, 172 - struct drm_plane_state *old_state) 172 + struct drm_atomic_state *state) 173 173 { 174 - struct mtk_plane_state *state = to_mtk_plane_state(plane->state); 175 - 176 - state->pending.enable = false; 174 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 175 + plane); 176 + struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state); 177 + mtk_plane_state->pending.enable = false; 177 178 wmb(); /* Make sure the above parameter is set before update */ 178 - state->pending.dirty = true; 179 + mtk_plane_state->pending.dirty = true; 179 180 } 180 181 181 182 static void mtk_plane_atomic_update(struct drm_plane *plane, 182 - struct drm_plane_state *old_state) 183 + struct drm_atomic_state *state) 183 184 { 184 - struct mtk_plane_state *state = to_mtk_plane_state(plane->state); 185 - struct drm_crtc *crtc = plane->state->crtc; 186 - struct drm_framebuffer *fb = plane->state->fb; 185 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 186 + plane); 187 + struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state); 188 + struct drm_crtc *crtc = new_state->crtc; 189 + struct drm_framebuffer *fb = new_state->fb; 187 190 struct drm_gem_object *gem; 188 191 struct mtk_drm_gem_obj *mtk_gem; 189 192 unsigned int pitch, format; ··· 199 188 if (!crtc || WARN_ON(!fb)) 
200 189 return; 201 190 202 - if (!plane->state->visible) { 203 - mtk_plane_atomic_disable(plane, old_state); 191 + if (!new_state->visible) { 192 + mtk_plane_atomic_disable(plane, state); 204 193 return; 205 194 } 206 195 ··· 210 199 pitch = fb->pitches[0]; 211 200 format = fb->format->format; 212 201 213 - addr += (plane->state->src.x1 >> 16) * fb->format->cpp[0]; 214 - addr += (plane->state->src.y1 >> 16) * pitch; 202 + addr += (new_state->src.x1 >> 16) * fb->format->cpp[0]; 203 + addr += (new_state->src.y1 >> 16) * pitch; 215 204 216 - state->pending.enable = true; 217 - state->pending.pitch = pitch; 218 - state->pending.format = format; 219 - state->pending.addr = addr; 220 - state->pending.x = plane->state->dst.x1; 221 - state->pending.y = plane->state->dst.y1; 222 - state->pending.width = drm_rect_width(&plane->state->dst); 223 - state->pending.height = drm_rect_height(&plane->state->dst); 224 - state->pending.rotation = plane->state->rotation; 205 + mtk_plane_state->pending.enable = true; 206 + mtk_plane_state->pending.pitch = pitch; 207 + mtk_plane_state->pending.format = format; 208 + mtk_plane_state->pending.addr = addr; 209 + mtk_plane_state->pending.x = new_state->dst.x1; 210 + mtk_plane_state->pending.y = new_state->dst.y1; 211 + mtk_plane_state->pending.width = drm_rect_width(&new_state->dst); 212 + mtk_plane_state->pending.height = drm_rect_height(&new_state->dst); 213 + mtk_plane_state->pending.rotation = new_state->rotation; 225 214 wmb(); /* Make sure the above parameters are set before update */ 226 - state->pending.dirty = true; 215 + mtk_plane_state->pending.dirty = true; 227 216 } 228 217 229 218 static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = { 230 - .prepare_fb = drm_gem_fb_prepare_fb, 219 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 231 220 .atomic_check = mtk_plane_atomic_check, 232 221 .atomic_update = mtk_plane_atomic_update, 233 222 .atomic_disable = mtk_plane_atomic_disable,
+19 -14
drivers/gpu/drm/meson/meson_overlay.c
··· 10 10 #include <drm/drm_atomic.h> 11 11 #include <drm/drm_atomic_helper.h> 12 12 #include <drm/drm_device.h> 13 - #include <drm/drm_fourcc.h> 14 - #include <drm/drm_plane_helper.h> 15 - #include <drm/drm_gem_cma_helper.h> 16 13 #include <drm/drm_fb_cma_helper.h> 17 - #include <drm/drm_gem_framebuffer_helper.h> 14 + #include <drm/drm_fourcc.h> 15 + #include <drm/drm_gem_atomic_helper.h> 16 + #include <drm/drm_gem_cma_helper.h> 17 + #include <drm/drm_plane_helper.h> 18 18 19 19 #include "meson_overlay.h" 20 20 #include "meson_registers.h" ··· 165 165 #define FRAC_16_16(mult, div) (((mult) << 16) / (div)) 166 166 167 167 static int meson_overlay_atomic_check(struct drm_plane *plane, 168 - struct drm_plane_state *state) 168 + struct drm_atomic_state *state) 169 169 { 170 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 171 + plane); 170 172 struct drm_crtc_state *crtc_state; 171 173 172 - if (!state->crtc) 174 + if (!new_plane_state->crtc) 173 175 return 0; 174 176 175 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 177 + crtc_state = drm_atomic_get_crtc_state(state, 178 + new_plane_state->crtc); 176 179 if (IS_ERR(crtc_state)) 177 180 return PTR_ERR(crtc_state); 178 181 179 - return drm_atomic_helper_check_plane_state(state, crtc_state, 182 + return drm_atomic_helper_check_plane_state(new_plane_state, 183 + crtc_state, 180 184 FRAC_16_16(1, 5), 181 185 FRAC_16_16(5, 1), 182 186 true, true); ··· 468 464 } 469 465 470 466 static void meson_overlay_atomic_update(struct drm_plane *plane, 471 - struct drm_plane_state *old_state) 467 + struct drm_atomic_state *state) 472 468 { 473 469 struct meson_overlay *meson_overlay = to_meson_overlay(plane); 474 - struct drm_plane_state *state = plane->state; 475 - struct drm_framebuffer *fb = state->fb; 470 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 471 + plane); 472 + struct drm_framebuffer *fb = new_state->fb; 476 473 struct meson_drm *priv = 
meson_overlay->priv; 477 474 struct drm_gem_cma_object *gem; 478 475 unsigned long flags; ··· 481 476 482 477 DRM_DEBUG_DRIVER("\n"); 483 478 484 - interlace_mode = state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE; 479 + interlace_mode = new_state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE; 485 480 486 481 spin_lock_irqsave(&priv->drm->event_lock, flags); 487 482 ··· 722 717 } 723 718 724 719 static void meson_overlay_atomic_disable(struct drm_plane *plane, 725 - struct drm_plane_state *old_state) 720 + struct drm_atomic_state *state) 726 721 { 727 722 struct meson_overlay *meson_overlay = to_meson_overlay(plane); 728 723 struct meson_drm *priv = meson_overlay->priv; ··· 747 742 .atomic_check = meson_overlay_atomic_check, 748 743 .atomic_disable = meson_overlay_atomic_disable, 749 744 .atomic_update = meson_overlay_atomic_update, 750 - .prepare_fb = drm_gem_fb_prepare_fb, 745 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 751 746 }; 752 747 753 748 static bool meson_overlay_format_mod_supported(struct drm_plane *plane,
+28 -23
drivers/gpu/drm/meson/meson_plane.c
··· 16 16 #include <drm/drm_device.h> 17 17 #include <drm/drm_fb_cma_helper.h> 18 18 #include <drm/drm_fourcc.h> 19 + #include <drm/drm_gem_atomic_helper.h> 19 20 #include <drm/drm_gem_cma_helper.h> 20 - #include <drm/drm_gem_framebuffer_helper.h> 21 21 #include <drm/drm_plane_helper.h> 22 22 23 23 #include "meson_plane.h" ··· 71 71 #define FRAC_16_16(mult, div) (((mult) << 16) / (div)) 72 72 73 73 static int meson_plane_atomic_check(struct drm_plane *plane, 74 - struct drm_plane_state *state) 74 + struct drm_atomic_state *state) 75 75 { 76 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 77 + plane); 76 78 struct drm_crtc_state *crtc_state; 77 79 78 - if (!state->crtc) 80 + if (!new_plane_state->crtc) 79 81 return 0; 80 82 81 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 83 + crtc_state = drm_atomic_get_crtc_state(state, 84 + new_plane_state->crtc); 82 85 if (IS_ERR(crtc_state)) 83 86 return PTR_ERR(crtc_state); 84 87 ··· 90 87 * - Upscaling up to 5x, vertical and horizontal 91 88 * - Final coordinates must match crtc size 92 89 */ 93 - return drm_atomic_helper_check_plane_state(state, crtc_state, 90 + return drm_atomic_helper_check_plane_state(new_plane_state, 91 + crtc_state, 94 92 FRAC_16_16(1, 5), 95 93 DRM_PLANE_HELPER_NO_SCALING, 96 94 false, true); ··· 130 126 } 131 127 132 128 static void meson_plane_atomic_update(struct drm_plane *plane, 133 - struct drm_plane_state *old_state) 129 + struct drm_atomic_state *state) 134 130 { 135 131 struct meson_plane *meson_plane = to_meson_plane(plane); 136 - struct drm_plane_state *state = plane->state; 137 - struct drm_rect dest = drm_plane_state_dest(state); 132 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 133 + plane); 134 + struct drm_rect dest = drm_plane_state_dest(new_state); 138 135 struct meson_drm *priv = meson_plane->priv; 139 - struct drm_framebuffer *fb = state->fb; 136 + struct drm_framebuffer *fb = new_state->fb; 140 
137 struct drm_gem_cma_object *gem; 141 138 unsigned long flags; 142 139 int vsc_ini_rcv_num, vsc_ini_rpt_p0_num; ··· 250 245 hf_bank_len = 4; 251 246 vf_bank_len = 4; 252 247 253 - if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) { 248 + if (new_state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) { 254 249 vsc_bot_rcv_num = 6; 255 250 vsc_bot_rpt_p0_num = 2; 256 251 } ··· 260 255 hsc_ini_rpt_p0_num = (hf_bank_len / 2) - 1; 261 256 vsc_ini_rpt_p0_num = (vf_bank_len / 2) - 1; 262 257 263 - src_w = fixed16_to_int(state->src_w); 264 - src_h = fixed16_to_int(state->src_h); 265 - dst_w = state->crtc_w; 266 - dst_h = state->crtc_h; 258 + src_w = fixed16_to_int(new_state->src_w); 259 + src_h = fixed16_to_int(new_state->src_h); 260 + dst_w = new_state->crtc_w; 261 + dst_h = new_state->crtc_h; 267 262 268 263 /* 269 264 * When the output is interlaced, the OSD must switch between ··· 272 267 * But the vertical scaler can provide such funtionnality if 273 268 * is configured for 2:1 scaling with interlace options enabled. 274 269 */ 275 - if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) { 270 + if (new_state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) { 276 271 dest.y1 /= 2; 277 272 dest.y2 /= 2; 278 273 dst_h /= 2; ··· 281 276 hf_phase_step = ((src_w << 18) / dst_w) << 6; 282 277 vf_phase_step = (src_h << 20) / dst_h; 283 278 284 - if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) 279 + if (new_state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) 285 280 bot_ini_phase = ((vf_phase_step / 2) >> 4); 286 281 else 287 282 bot_ini_phase = 0; ··· 313 308 VSC_TOP_RPT_L0_NUM(vsc_ini_rpt_p0_num) | 314 309 VSC_VERTICAL_SCALER_EN; 315 310 316 - if (state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) 311 + if (new_state->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE) 317 312 priv->viu.osd_sc_v_ctrl0 |= 318 313 VSC_BOT_INI_RCV_NUM(vsc_bot_rcv_num) | 319 314 VSC_BOT_RPT_L0_NUM(vsc_bot_rpt_p0_num) | ··· 348 343 * e.g. 
+30x1920 would be (1919 << 16) | 30 349 344 */ 350 345 priv->viu.osd1_blk0_cfg[1] = 351 - ((fixed16_to_int(state->src.x2) - 1) << 16) | 352 - fixed16_to_int(state->src.x1); 346 + ((fixed16_to_int(new_state->src.x2) - 1) << 16) | 347 + fixed16_to_int(new_state->src.x1); 353 348 priv->viu.osd1_blk0_cfg[2] = 354 - ((fixed16_to_int(state->src.y2) - 1) << 16) | 355 - fixed16_to_int(state->src.y1); 349 + ((fixed16_to_int(new_state->src.y2) - 1) << 16) | 350 + fixed16_to_int(new_state->src.y1); 356 351 priv->viu.osd1_blk0_cfg[3] = ((dest.x2 - 1) << 16) | dest.x1; 357 352 priv->viu.osd1_blk0_cfg[4] = ((dest.y2 - 1) << 16) | dest.y1; 358 353 ··· 396 391 } 397 392 398 393 static void meson_plane_atomic_disable(struct drm_plane *plane, 399 - struct drm_plane_state *old_state) 394 + struct drm_atomic_state *state) 400 395 { 401 396 struct meson_plane *meson_plane = to_meson_plane(plane); 402 397 struct meson_drm *priv = meson_plane->priv; ··· 422 417 .atomic_check = meson_plane_atomic_check, 423 418 .atomic_disable = meson_plane_atomic_disable, 424 419 .atomic_update = meson_plane_atomic_update, 425 - .prepare_fb = drm_gem_fb_prepare_fb, 420 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 426 421 }; 427 422 428 423 static bool meson_plane_format_mod_supported(struct drm_plane *plane,
+9 -16
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 17 17 #include <drm/drm_damage_helper.h> 18 18 #include <drm/drm_format_helper.h> 19 19 #include <drm/drm_fourcc.h> 20 + #include <drm/drm_gem_atomic_helper.h> 20 21 #include <drm/drm_gem_framebuffer_helper.h> 21 22 #include <drm/drm_plane_helper.h> 22 23 #include <drm/drm_print.h> ··· 707 706 708 707 static int mga_g200er_set_plls(struct mga_device *mdev, long clock) 709 708 { 709 + static const unsigned int m_div_val[] = { 1, 2, 4, 8 }; 710 710 unsigned int vcomax, vcomin, pllreffreq; 711 711 unsigned int delta, tmpdelta; 712 712 int testr, testn, testm, testo; 713 713 unsigned int p, m, n; 714 714 unsigned int computed, vco; 715 715 int tmp; 716 - const unsigned int m_div_val[] = { 1, 2, 4, 8 }; 717 716 718 717 m = n = p = 0; 719 718 vcomax = 1488000; ··· 1550 1549 1551 1550 static void 1552 1551 mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, 1553 - struct drm_rect *clip) 1552 + struct drm_rect *clip, const struct dma_buf_map *map) 1554 1553 { 1555 - struct drm_device *dev = &mdev->base; 1556 - struct dma_buf_map map; 1557 - void *vmap; 1558 - int ret; 1559 - 1560 - ret = drm_gem_shmem_vmap(fb->obj[0], &map); 1561 - if (drm_WARN_ON(dev, ret)) 1562 - return; /* BUG: SHMEM BO should always be vmapped */ 1563 - vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ 1554 + void *vmap = map->vaddr; /* TODO: Use mapping abstraction properly */ 1564 1555 1565 1556 drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); 1566 - 1567 - drm_gem_shmem_vunmap(fb->obj[0], &map); 1568 1557 1569 1558 /* Always scanout image at VRAM offset 0 */ 1570 1559 mgag200_set_startadd(mdev, (u32)0); ··· 1571 1580 struct mga_device *mdev = to_mga_device(dev); 1572 1581 struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode; 1573 1582 struct drm_framebuffer *fb = plane_state->fb; 1583 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 1574 1584 struct drm_rect fullscreen = { 1575 1585 .x1 = 0, 1576 
1586 .x2 = fb->width, ··· 1600 1608 mga_crtc_load_lut(crtc); 1601 1609 mgag200_enable_display(mdev); 1602 1610 1603 - mgag200_handle_damage(mdev, fb, &fullscreen); 1611 + mgag200_handle_damage(mdev, fb, &fullscreen, &shadow_plane_state->map[0]); 1604 1612 } 1605 1613 1606 1614 static void ··· 1641 1649 struct drm_device *dev = plane->dev; 1642 1650 struct mga_device *mdev = to_mga_device(dev); 1643 1651 struct drm_plane_state *state = plane->state; 1652 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state); 1644 1653 struct drm_framebuffer *fb = state->fb; 1645 1654 struct drm_rect damage; 1646 1655 ··· 1649 1656 return; 1650 1657 1651 1658 if (drm_atomic_helper_damage_merged(old_state, state, &damage)) 1652 - mgag200_handle_damage(mdev, fb, &damage); 1659 + mgag200_handle_damage(mdev, fb, &damage, &shadow_plane_state->map[0]); 1653 1660 } 1654 1661 1655 1662 static const struct drm_simple_display_pipe_funcs ··· 1659 1666 .disable = mgag200_simple_display_pipe_disable, 1660 1667 .check = mgag200_simple_display_pipe_check, 1661 1668 .update = mgag200_simple_display_pipe_update, 1662 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 1669 + DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 1663 1670 }; 1664 1671 1665 1672 static const uint32_t mgag200_simple_display_pipe_formats[] = {
+3 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 71 71 { 72 72 struct dpu_hw_mixer *lm = mixer->hw_lm; 73 73 uint32_t blend_op; 74 - struct drm_format_name_buf format_name; 75 74 76 75 /* default to opaque blending */ 77 76 blend_op = DPU_BLEND_FG_ALPHA_FG_CONST | ··· 86 87 lm->ops.setup_blend_config(lm, pstate->stage, 87 88 0xFF, 0, blend_op); 88 89 89 - DPU_DEBUG("format:%s, alpha_en:%u blend_op:0x%x\n", 90 - drm_get_format_name(format->base.pixel_format, &format_name), 91 - format->alpha_enable, blend_op); 90 + DPU_DEBUG("format:%p4cc, alpha_en:%u blend_op:0x%x\n", 91 + &format->base.pixel_format, format->alpha_enable, blend_op); 92 92 } 93 93 94 94 static void _dpu_crtc_program_lm_output_roi(struct drm_crtc *crtc) ··· 574 576 * of those planes explicitly here prior to plane flush. 575 577 */ 576 578 drm_atomic_crtc_for_each_plane(plane, crtc) 577 - dpu_plane_restore(plane); 579 + dpu_plane_restore(plane, state); 578 580 579 581 /* update performance setting before crtc kickoff */ 580 582 dpu_core_perf_crtc_update(crtc, 1, false);
+29 -25
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 10 10 #include <linux/debugfs.h> 11 11 #include <linux/dma-buf.h> 12 12 13 + #include <drm/drm_atomic.h> 13 14 #include <drm/drm_atomic_uapi.h> 14 15 #include <drm/drm_damage_helper.h> 15 16 #include <drm/drm_file.h> 16 - #include <drm/drm_gem_framebuffer_helper.h> 17 + #include <drm/drm_gem_atomic_helper.h> 17 18 18 19 #include "msm_drv.h" 19 20 #include "dpu_kms.h" ··· 893 892 * we can use msm_atomic_prepare_fb() instead of doing the 894 893 * implicit fence and fb prepare by hand here. 895 894 */ 896 - drm_gem_fb_prepare_fb(plane, new_state); 895 + drm_gem_plane_helper_prepare_fb(plane, new_state); 897 896 898 897 if (pstate->aspace) { 899 898 ret = msm_framebuffer_prepare(new_state->fb, ··· 951 950 } 952 951 953 952 static int dpu_plane_atomic_check(struct drm_plane *plane, 954 - struct drm_plane_state *state) 953 + struct drm_atomic_state *state) 955 954 { 955 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 956 + plane); 956 957 int ret = 0, min_scale; 957 958 struct dpu_plane *pdpu = to_dpu_plane(plane); 958 - struct dpu_plane_state *pstate = to_dpu_plane_state(state); 959 + struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 959 960 const struct drm_crtc_state *crtc_state = NULL; 960 961 const struct dpu_format *fmt; 961 962 struct drm_rect src, dst, fb_rect = { 0 }; 962 963 uint32_t min_src_size, max_linewidth; 963 964 964 - if (state->crtc) 965 - crtc_state = drm_atomic_get_new_crtc_state(state->state, 966 - state->crtc); 965 + if (new_plane_state->crtc) 966 + crtc_state = drm_atomic_get_new_crtc_state(state, 967 + new_plane_state->crtc); 967 968 968 969 min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxupscale); 969 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, min_scale, 970 - pdpu->pipe_sblk->maxdwnscale << 16, 971 - true, true); 970 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 971 + min_scale, 972 + pdpu->pipe_sblk->maxdwnscale << 16, 973 + true, true); 972 
974 if (ret) { 973 975 DPU_DEBUG_PLANE(pdpu, "Check plane state failed (%d)\n", ret); 974 976 return ret; 975 977 } 976 - if (!state->visible) 978 + if (!new_plane_state->visible) 977 979 return 0; 978 980 979 - src.x1 = state->src_x >> 16; 980 - src.y1 = state->src_y >> 16; 981 - src.x2 = src.x1 + (state->src_w >> 16); 982 - src.y2 = src.y1 + (state->src_h >> 16); 981 + src.x1 = new_plane_state->src_x >> 16; 982 + src.y1 = new_plane_state->src_y >> 16; 983 + src.x2 = src.x1 + (new_plane_state->src_w >> 16); 984 + src.y2 = src.y1 + (new_plane_state->src_h >> 16); 983 985 984 - dst = drm_plane_state_dest(state); 986 + dst = drm_plane_state_dest(new_plane_state); 985 987 986 - fb_rect.x2 = state->fb->width; 987 - fb_rect.y2 = state->fb->height; 988 + fb_rect.x2 = new_plane_state->fb->width; 989 + fb_rect.y2 = new_plane_state->fb->height; 988 990 989 991 max_linewidth = pdpu->catalog->caps->max_linewidth; 990 992 991 - fmt = to_dpu_format(msm_framebuffer_format(state->fb)); 993 + fmt = to_dpu_format(msm_framebuffer_format(new_plane_state->fb)); 992 994 993 995 min_src_size = DPU_FORMAT_IS_YUV(fmt) ? 
2 : 1; 994 996 ··· 1241 1237 } 1242 1238 1243 1239 static void dpu_plane_atomic_update(struct drm_plane *plane, 1244 - struct drm_plane_state *old_state) 1240 + struct drm_atomic_state *state) 1245 1241 { 1246 1242 struct dpu_plane *pdpu = to_dpu_plane(plane); 1247 - struct drm_plane_state *state = plane->state; 1243 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 1244 + plane); 1248 1245 1249 1246 pdpu->is_error = false; 1250 1247 1251 1248 DPU_DEBUG_PLANE(pdpu, "\n"); 1252 1249 1253 - if (!state->visible) { 1250 + if (!new_state->visible) { 1254 1251 _dpu_plane_atomic_disable(plane); 1255 1252 } else { 1256 1253 dpu_plane_sspp_atomic_update(plane); 1257 1254 } 1258 1255 } 1259 1256 1260 - void dpu_plane_restore(struct drm_plane *plane) 1257 + void dpu_plane_restore(struct drm_plane *plane, struct drm_atomic_state *state) 1261 1258 { 1262 1259 struct dpu_plane *pdpu; 1263 1260 ··· 1271 1266 1272 1267 DPU_DEBUG_PLANE(pdpu, "\n"); 1273 1268 1274 - /* last plane state is same as current state */ 1275 - dpu_plane_atomic_update(plane, plane->state); 1269 + dpu_plane_atomic_update(plane, state); 1276 1270 } 1277 1271 1278 1272 static void dpu_plane_destroy(struct drm_plane *plane)
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
··· 88 88 * dpu_plane_restore - restore hw state if previously power collapsed 89 89 * @plane: Pointer to drm plane structure 90 90 */ 91 - void dpu_plane_restore(struct drm_plane *plane); 91 + void dpu_plane_restore(struct drm_plane *plane, struct drm_atomic_state *state); 92 92 93 93 /** 94 94 * dpu_plane_flush - final plane operations before commit flush
+10 -8
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
··· 4 4 * Author: Rob Clark <robdclark@gmail.com> 5 5 */ 6 6 7 + #include <drm/drm_atomic.h> 7 8 #include <drm/drm_damage_helper.h> 8 9 #include <drm/drm_fourcc.h> 9 10 ··· 107 106 108 107 109 108 static int mdp4_plane_atomic_check(struct drm_plane *plane, 110 - struct drm_plane_state *state) 109 + struct drm_atomic_state *state) 111 110 { 112 111 return 0; 113 112 } 114 113 115 114 static void mdp4_plane_atomic_update(struct drm_plane *plane, 116 - struct drm_plane_state *old_state) 115 + struct drm_atomic_state *state) 117 116 { 118 - struct drm_plane_state *state = plane->state; 117 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 118 + plane); 119 119 int ret; 120 120 121 121 ret = mdp4_plane_mode_set(plane, 122 - state->crtc, state->fb, 123 - state->crtc_x, state->crtc_y, 124 - state->crtc_w, state->crtc_h, 125 - state->src_x, state->src_y, 126 - state->src_w, state->src_h); 122 + new_state->crtc, new_state->fb, 123 + new_state->crtc_x, new_state->crtc_y, 124 + new_state->crtc_w, new_state->crtc_h, 125 + new_state->src_x, new_state->src_y, 126 + new_state->src_w, new_state->src_h); 127 127 /* atomic_check should have ensured that this doesn't fail */ 128 128 WARN_ON(ret < 0); 129 129 }
+34 -23
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
··· 5 5 * Author: Rob Clark <robdclark@gmail.com> 6 6 */ 7 7 8 + #include <drm/drm_atomic.h> 8 9 #include <drm/drm_damage_helper.h> 9 10 #include <drm/drm_fourcc.h> 10 11 #include <drm/drm_print.h> ··· 404 403 } 405 404 406 405 static int mdp5_plane_atomic_check(struct drm_plane *plane, 407 - struct drm_plane_state *state) 406 + struct drm_atomic_state *state) 408 407 { 408 + struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, 409 + plane); 410 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 411 + plane); 409 412 struct drm_crtc *crtc; 410 413 struct drm_crtc_state *crtc_state; 411 414 412 - crtc = state->crtc ? state->crtc : plane->state->crtc; 415 + crtc = new_plane_state->crtc ? new_plane_state->crtc : old_plane_state->crtc; 413 416 if (!crtc) 414 417 return 0; 415 418 416 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 419 + crtc_state = drm_atomic_get_existing_crtc_state(state, 420 + crtc); 417 421 if (WARN_ON(!crtc_state)) 418 422 return -EINVAL; 419 423 420 - return mdp5_plane_atomic_check_with_state(crtc_state, state); 424 + return mdp5_plane_atomic_check_with_state(crtc_state, new_plane_state); 421 425 } 422 426 423 427 static void mdp5_plane_atomic_update(struct drm_plane *plane, 424 - struct drm_plane_state *old_state) 428 + struct drm_atomic_state *state) 425 429 { 426 - struct drm_plane_state *state = plane->state; 430 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 431 + plane); 427 432 428 433 DBG("%s: update", plane->name); 429 434 430 - if (plane_enabled(state)) { 435 + if (plane_enabled(new_state)) { 431 436 int ret; 432 437 433 438 ret = mdp5_plane_mode_set(plane, 434 - state->crtc, state->fb, 435 - &state->src, &state->dst); 439 + new_state->crtc, new_state->fb, 440 + &new_state->src, &new_state->dst); 436 441 /* atomic_check should have ensured that this doesn't fail */ 437 442 WARN_ON(ret < 0); 438 443 } 439 444 } 440 445 441 446 
static int mdp5_plane_atomic_async_check(struct drm_plane *plane, 442 - struct drm_plane_state *state) 447 + struct drm_atomic_state *state) 443 448 { 444 - struct mdp5_plane_state *mdp5_state = to_mdp5_plane_state(state); 449 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 450 + plane); 451 + struct mdp5_plane_state *mdp5_state = to_mdp5_plane_state(new_plane_state); 445 452 struct drm_crtc_state *crtc_state; 446 453 int min_scale, max_scale; 447 454 int ret; 448 455 449 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 450 - state->crtc); 456 + crtc_state = drm_atomic_get_existing_crtc_state(state, 457 + new_plane_state->crtc); 451 458 if (WARN_ON(!crtc_state)) 452 459 return -EINVAL; 453 460 454 461 if (!crtc_state->active) 455 462 return -EINVAL; 456 463 457 - mdp5_state = to_mdp5_plane_state(state); 464 + mdp5_state = to_mdp5_plane_state(new_plane_state); 458 465 459 466 /* don't use fast path if we don't have a hwpipe allocated yet */ 460 467 if (!mdp5_state->hwpipe) 461 468 return -EINVAL; 462 469 463 470 /* only allow changing of position(crtc x/y or src x/y) in fast path */ 464 - if (plane->state->crtc != state->crtc || 465 - plane->state->src_w != state->src_w || 466 - plane->state->src_h != state->src_h || 467 - plane->state->crtc_w != state->crtc_w || 468 - plane->state->crtc_h != state->crtc_h || 471 + if (plane->state->crtc != new_plane_state->crtc || 472 + plane->state->src_w != new_plane_state->src_w || 473 + plane->state->src_h != new_plane_state->src_h || 474 + plane->state->crtc_w != new_plane_state->crtc_w || 475 + plane->state->crtc_h != new_plane_state->crtc_h || 469 476 !plane->state->fb || 470 - plane->state->fb != state->fb) 477 + plane->state->fb != new_plane_state->fb) 471 478 return -EINVAL; 472 479 473 480 min_scale = FRAC_16_16(1, 8); 474 481 max_scale = FRAC_16_16(8, 1); 475 482 476 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 483 + ret = 
drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 477 484 min_scale, max_scale, 478 485 true, true); 479 486 if (ret) ··· 494 485 * also assign/unassign the hwpipe(s) tied to the plane. We avoid 495 486 * taking the fast path for both these reasons. 496 487 */ 497 - if (state->visible != plane->state->visible) 488 + if (new_plane_state->visible != plane->state->visible) 498 489 return -EINVAL; 499 490 500 491 return 0; 501 492 } 502 493 503 494 static void mdp5_plane_atomic_async_update(struct drm_plane *plane, 504 - struct drm_plane_state *new_state) 495 + struct drm_atomic_state *state) 505 496 { 497 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 498 + plane); 506 499 struct drm_framebuffer *old_fb = plane->state->fb; 507 500 508 501 plane->state->src_x = new_state->src_x;
+2 -2
drivers/gpu/drm/msm/msm_atomic.c
··· 5 5 */ 6 6 7 7 #include <drm/drm_atomic_uapi.h> 8 - #include <drm/drm_gem_framebuffer_helper.h> 8 + #include <drm/drm_gem_atomic_helper.h> 9 9 #include <drm/drm_vblank.h> 10 10 11 11 #include "msm_atomic_trace.h" ··· 22 22 if (!new_state->fb) 23 23 return 0; 24 24 25 - drm_gem_fb_prepare_fb(plane, new_state); 25 + drm_gem_plane_helper_prepare_fb(plane, new_state); 26 26 27 27 return msm_framebuffer_prepare(new_state->fb, kms->aspace); 28 28 }
+14 -9
drivers/gpu/drm/mxsfb/mxsfb_kms.c
··· 21 21 #include <drm/drm_encoder.h> 22 22 #include <drm/drm_fb_cma_helper.h> 23 23 #include <drm/drm_fourcc.h> 24 + #include <drm/drm_gem_atomic_helper.h> 24 25 #include <drm/drm_gem_cma_helper.h> 25 - #include <drm/drm_gem_framebuffer_helper.h> 26 26 #include <drm/drm_plane.h> 27 27 #include <drm/drm_plane_helper.h> 28 28 #include <drm/drm_vblank.h> ··· 402 402 */ 403 403 404 404 static int mxsfb_plane_atomic_check(struct drm_plane *plane, 405 - struct drm_plane_state *plane_state) 405 + struct drm_atomic_state *state) 406 406 { 407 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, 408 + plane); 407 409 struct mxsfb_drm_private *mxsfb = to_mxsfb_drm_private(plane->dev); 408 410 struct drm_crtc_state *crtc_state; 409 411 410 - crtc_state = drm_atomic_get_new_crtc_state(plane_state->state, 412 + crtc_state = drm_atomic_get_new_crtc_state(state, 411 413 &mxsfb->crtc); 412 414 413 415 return drm_atomic_helper_check_plane_state(plane_state, crtc_state, ··· 419 417 } 420 418 421 419 static void mxsfb_plane_primary_atomic_update(struct drm_plane *plane, 422 - struct drm_plane_state *old_pstate) 420 + struct drm_atomic_state *state) 423 421 { 424 422 struct mxsfb_drm_private *mxsfb = to_mxsfb_drm_private(plane->dev); 425 423 dma_addr_t paddr; ··· 430 428 } 431 429 432 430 static void mxsfb_plane_overlay_atomic_update(struct drm_plane *plane, 433 - struct drm_plane_state *old_pstate) 431 + struct drm_atomic_state *state) 434 432 { 433 + struct drm_plane_state *old_pstate = drm_atomic_get_old_plane_state(state, 434 + plane); 435 435 struct mxsfb_drm_private *mxsfb = to_mxsfb_drm_private(plane->dev); 436 - struct drm_plane_state *state = plane->state; 436 + struct drm_plane_state *new_pstate = drm_atomic_get_new_plane_state(state, 437 + plane); 437 438 dma_addr_t paddr; 438 439 u32 ctrl; 439 440 ··· 465 460 466 461 ctrl = AS_CTRL_AS_ENABLE | AS_CTRL_ALPHA(255); 467 462 468 - switch (state->fb->format->format) { 463 + switch 
(new_pstate->fb->format->format) { 469 464 case DRM_FORMAT_XRGB4444: 470 465 ctrl |= AS_CTRL_FORMAT_RGB444 | AS_CTRL_ALPHA_CTRL_OVERRIDE; 471 466 break; ··· 500 495 } 501 496 502 497 static const struct drm_plane_helper_funcs mxsfb_plane_primary_helper_funcs = { 503 - .prepare_fb = drm_gem_fb_prepare_fb, 498 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 504 499 .atomic_check = mxsfb_plane_atomic_check, 505 500 .atomic_update = mxsfb_plane_primary_atomic_update, 506 501 }; 507 502 508 503 static const struct drm_plane_helper_funcs mxsfb_plane_overlay_helper_funcs = { 509 - .prepare_fb = drm_gem_fb_prepare_fb, 504 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 510 505 .atomic_check = mxsfb_plane_atomic_check, 511 506 .atomic_update = mxsfb_plane_overlay_atomic_update, 512 507 };
+6 -2
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 30 30 #include <nvhw/class/cl507e.h> 31 31 #include <nvhw/class/clc37e.h> 32 32 33 + #include <drm/drm_atomic.h> 33 34 #include <drm/drm_atomic_helper.h> 34 35 #include <drm/drm_fourcc.h> 35 36 ··· 435 434 } 436 435 437 436 static int 438 - nv50_wndw_atomic_check(struct drm_plane *plane, struct drm_plane_state *state) 437 + nv50_wndw_atomic_check(struct drm_plane *plane, 438 + struct drm_atomic_state *state) 439 439 { 440 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 441 + plane); 440 442 struct nouveau_drm *drm = nouveau_drm(plane->dev); 441 443 struct nv50_wndw *wndw = nv50_wndw(plane); 442 444 struct nv50_wndw_atom *armw = nv50_wndw_atom(wndw->plane.state); 443 - struct nv50_wndw_atom *asyw = nv50_wndw_atom(state); 445 + struct nv50_wndw_atom *asyw = nv50_wndw_atom(new_plane_state); 444 446 struct nv50_head_atom *harm = NULL, *asyh = NULL; 445 447 bool modeset = false; 446 448 int ret;
+12 -15
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 43 43 #include <nvif/if500b.h> 44 44 #include <nvif/if900b.h> 45 45 46 - static int nouveau_ttm_tt_bind(struct ttm_bo_device *bdev, struct ttm_tt *ttm, 46 + static int nouveau_ttm_tt_bind(struct ttm_device *bdev, struct ttm_tt *ttm, 47 47 struct ttm_resource *reg); 48 - static void nouveau_ttm_tt_unbind(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 48 + static void nouveau_ttm_tt_unbind(struct ttm_device *bdev, struct ttm_tt *ttm); 49 49 50 50 /* 51 51 * NV10-NV40 tiling helpers ··· 300 300 struct sg_table *sg, struct dma_resv *robj) 301 301 { 302 302 int type = sg ? ttm_bo_type_sg : ttm_bo_type_device; 303 - size_t acc_size; 304 303 int ret; 305 - 306 - acc_size = ttm_bo_dma_acc_size(nvbo->bo.bdev, size, sizeof(*nvbo)); 307 304 308 305 nvbo->bo.mem.num_pages = size >> PAGE_SHIFT; 309 306 nouveau_bo_placement_set(nvbo, domain, 0); 310 307 INIT_LIST_HEAD(&nvbo->io_reserve_lru); 311 308 312 309 ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type, 313 - &nvbo->placement, align >> PAGE_SHIFT, false, 314 - acc_size, sg, robj, nouveau_bo_del_ttm); 310 + &nvbo->placement, align >> PAGE_SHIFT, false, sg, 311 + robj, nouveau_bo_del_ttm); 315 312 if (ret) { 316 313 /* ttm will call nouveau_bo_del_ttm if it fails.. 
*/ 317 314 return ret; ··· 696 699 } 697 700 698 701 static int 699 - nouveau_ttm_tt_bind(struct ttm_bo_device *bdev, struct ttm_tt *ttm, 702 + nouveau_ttm_tt_bind(struct ttm_device *bdev, struct ttm_tt *ttm, 700 703 struct ttm_resource *reg) 701 704 { 702 705 #if IS_ENABLED(CONFIG_AGP) ··· 712 715 } 713 716 714 717 static void 715 - nouveau_ttm_tt_unbind(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 718 + nouveau_ttm_tt_unbind(struct ttm_device *bdev, struct ttm_tt *ttm) 716 719 { 717 720 #if IS_ENABLED(CONFIG_AGP) 718 721 struct nouveau_drm *drm = nouveau_bdev(bdev); ··· 1077 1080 } 1078 1081 1079 1082 static int 1080 - nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_resource *reg) 1083 + nouveau_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *reg) 1081 1084 { 1082 1085 struct nouveau_drm *drm = nouveau_bdev(bdev); 1083 1086 struct nvkm_device *device = nvxx_device(&drm->client.device); ··· 1185 1188 } 1186 1189 1187 1190 static void 1188 - nouveau_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_resource *reg) 1191 + nouveau_ttm_io_mem_free(struct ttm_device *bdev, struct ttm_resource *reg) 1189 1192 { 1190 1193 struct nouveau_drm *drm = nouveau_bdev(bdev); 1191 1194 ··· 1245 1248 } 1246 1249 1247 1250 static int 1248 - nouveau_ttm_tt_populate(struct ttm_bo_device *bdev, 1251 + nouveau_ttm_tt_populate(struct ttm_device *bdev, 1249 1252 struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) 1250 1253 { 1251 1254 struct ttm_tt *ttm_dma = (void *)ttm; ··· 1269 1272 } 1270 1273 1271 1274 static void 1272 - nouveau_ttm_tt_unpopulate(struct ttm_bo_device *bdev, 1275 + nouveau_ttm_tt_unpopulate(struct ttm_device *bdev, 1273 1276 struct ttm_tt *ttm) 1274 1277 { 1275 1278 struct nouveau_drm *drm; ··· 1286 1289 } 1287 1290 1288 1291 static void 1289 - nouveau_ttm_tt_destroy(struct ttm_bo_device *bdev, 1292 + nouveau_ttm_tt_destroy(struct ttm_device *bdev, 1290 1293 struct ttm_tt *ttm) 1291 1294 { 1292 1295 #if IS_ENABLED(CONFIG_AGP) 
··· 1318 1321 nouveau_bo_move_ntfy(bo, false, NULL); 1319 1322 } 1320 1323 1321 - struct ttm_bo_driver nouveau_bo_driver = { 1324 + struct ttm_device_funcs nouveau_bo_driver = { 1322 1325 .ttm_tt_create = &nouveau_ttm_tt_create, 1323 1326 .ttm_tt_populate = &nouveau_ttm_tt_populate, 1324 1327 .ttm_tt_unpopulate = &nouveau_ttm_tt_unpopulate,
+1 -1
drivers/gpu/drm/nouveau/nouveau_bo.h
··· 68 68 return 0; 69 69 } 70 70 71 - extern struct ttm_bo_driver nouveau_bo_driver; 71 + extern struct ttm_device_funcs nouveau_bo_driver; 72 72 73 73 void nouveau_bo_move_init(struct nouveau_drm *); 74 74 struct nouveau_bo *nouveau_bo_alloc(struct nouveau_cli *, u64 *size, int *align,
+3 -6
drivers/gpu/drm/nouveau/nouveau_display.c
··· 322 322 mode_cmd->pitches[0] >= 0x10000 || /* at most 64k pitch */ 323 323 (mode_cmd->pitches[1] && /* pitches for planes must match */ 324 324 mode_cmd->pitches[0] != mode_cmd->pitches[1]))) { 325 - struct drm_format_name_buf format_name; 326 - DRM_DEBUG_KMS("Unsuitable framebuffer: format: %s; pitches: 0x%x\n 0x%x\n", 327 - drm_get_format_name(mode_cmd->pixel_format, 328 - &format_name), 329 - mode_cmd->pitches[0], 330 - mode_cmd->pitches[1]); 325 + DRM_DEBUG_KMS("Unsuitable framebuffer: format: %p4cc; pitches: 0x%x\n 0x%x\n", 326 + &mode_cmd->pixel_format, 327 + mode_cmd->pitches[0], mode_cmd->pitches[1]); 331 328 return -EINVAL; 332 329 } 333 330
+1 -2
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 54 54 #include <drm/ttm/ttm_bo_api.h> 55 55 #include <drm/ttm/ttm_bo_driver.h> 56 56 #include <drm/ttm/ttm_placement.h> 57 - #include <drm/ttm/ttm_memory.h> 58 57 59 58 #include <drm/drm_audio_component.h> 60 59 ··· 150 151 151 152 /* TTM interface support */ 152 153 struct { 153 - struct ttm_bo_device bdev; 154 + struct ttm_device bdev; 154 155 atomic_t validate_sequence; 155 156 int (*move)(struct nouveau_channel *, 156 157 struct ttm_buffer_object *,
+3 -3
drivers/gpu/drm/nouveau/nouveau_sgdma.c
··· 16 16 }; 17 17 18 18 void 19 - nouveau_sgdma_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 19 + nouveau_sgdma_destroy(struct ttm_device *bdev, struct ttm_tt *ttm) 20 20 { 21 21 struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; 22 22 ··· 29 29 } 30 30 31 31 int 32 - nouveau_sgdma_bind(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_resource *reg) 32 + nouveau_sgdma_bind(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_resource *reg) 33 33 { 34 34 struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; 35 35 struct nouveau_drm *drm = nouveau_bdev(bdev); ··· 56 56 } 57 57 58 58 void 59 - nouveau_sgdma_unbind(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 59 + nouveau_sgdma_unbind(struct ttm_device *bdev, struct ttm_tt *ttm) 60 60 { 61 61 struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; 62 62 if (nvbe->mem) {
+6 -6
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 154 154 return ret; 155 155 } 156 156 157 - static struct vm_operations_struct nouveau_ttm_vm_ops = { 157 + static const struct vm_operations_struct nouveau_ttm_vm_ops = { 158 158 .fault = nouveau_ttm_fault, 159 159 .open = ttm_bo_vm_open, 160 160 .close = ttm_bo_vm_close, ··· 324 324 need_swiotlb = !!swiotlb_nr_tbl(); 325 325 #endif 326 326 327 - ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver, 328 - drm->dev->dev, dev->anon_inode->i_mapping, 329 - dev->vma_offset_manager, need_swiotlb, 330 - drm->client.mmu.dmabits <= 32); 327 + ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev, 328 + dev->anon_inode->i_mapping, 329 + dev->vma_offset_manager, need_swiotlb, 330 + drm->client.mmu.dmabits <= 32); 331 331 if (ret) { 332 332 NV_ERROR(drm, "error initialising bo driver, %d\n", ret); 333 333 return ret; ··· 377 377 nouveau_ttm_fini_vram(drm); 378 378 nouveau_ttm_fini_gtt(drm); 379 379 380 - ttm_bo_device_release(&drm->ttm.bdev); 380 + ttm_device_fini(&drm->ttm.bdev); 381 381 382 382 arch_phys_wc_del(drm->ttm.mtrr); 383 383 drm->ttm.mtrr = 0;
+4 -4
drivers/gpu/drm/nouveau/nouveau_ttm.h
··· 3 3 #define __NOUVEAU_TTM_H__ 4 4 5 5 static inline struct nouveau_drm * 6 - nouveau_bdev(struct ttm_bo_device *bd) 6 + nouveau_bdev(struct ttm_device *bd) 7 7 { 8 8 return container_of(bd, struct nouveau_drm, ttm.bdev); 9 9 } ··· 22 22 int nouveau_ttm_global_init(struct nouveau_drm *); 23 23 void nouveau_ttm_global_release(struct nouveau_drm *); 24 24 25 - int nouveau_sgdma_bind(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_resource *reg); 26 - void nouveau_sgdma_unbind(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 27 - void nouveau_sgdma_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 25 + int nouveau_sgdma_bind(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_resource *reg); 26 + void nouveau_sgdma_unbind(struct ttm_device *bdev, struct ttm_tt *ttm); 27 + void nouveau_sgdma_destroy(struct ttm_device *bdev, struct ttm_tt *ttm); 28 28 #endif
+4 -3
drivers/gpu/drm/omapdrm/dss/dsi.c
··· 2149 2149 const struct mipi_dsi_msg *msg) 2150 2150 { 2151 2151 struct mipi_dsi_packet pkt; 2152 + int err; 2152 2153 u32 r; 2153 2154 2154 - r = mipi_dsi_create_packet(&pkt, msg); 2155 - if (r < 0) 2156 - return r; 2155 + err = mipi_dsi_create_packet(&pkt, msg); 2156 + if (err) 2157 + return err; 2157 2158 2158 2159 WARN_ON(!dsi_bus_is_locked(dsi)); 2159 2160
+5 -4
drivers/gpu/drm/omapdrm/omap_drv.c
··· 68 68 { 69 69 struct drm_device *dev = old_state->dev; 70 70 struct omap_drm_private *priv = dev->dev_private; 71 + bool fence_cookie = dma_fence_begin_signalling(); 71 72 72 73 dispc_runtime_get(priv->dispc); 73 74 ··· 91 90 omap_atomic_wait_for_completion(dev, old_state); 92 91 93 92 drm_atomic_helper_commit_planes(dev, old_state, 0); 94 - 95 - drm_atomic_helper_commit_hw_done(old_state); 96 93 } else { 97 94 /* 98 95 * OMAP3 DSS seems to have issues with the work-around above, ··· 100 101 drm_atomic_helper_commit_planes(dev, old_state, 0); 101 102 102 103 drm_atomic_helper_commit_modeset_enables(dev, old_state); 103 - 104 - drm_atomic_helper_commit_hw_done(old_state); 105 104 } 105 + 106 + drm_atomic_helper_commit_hw_done(old_state); 107 + 108 + dma_fence_end_signalling(fence_cookie); 106 109 107 110 /* 108 111 * Wait for completion of the page flips to ensure that old buffers
+31 -25
drivers/gpu/drm/omapdrm/omap_plane.c
··· 40 40 } 41 41 42 42 static void omap_plane_atomic_update(struct drm_plane *plane, 43 - struct drm_plane_state *old_state) 43 + struct drm_atomic_state *state) 44 44 { 45 45 struct omap_drm_private *priv = plane->dev->dev_private; 46 46 struct omap_plane *omap_plane = to_omap_plane(plane); 47 - struct drm_plane_state *state = plane->state; 47 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 48 + plane); 48 49 struct omap_overlay_info info; 49 50 int ret; 50 51 51 - DBG("%s, crtc=%p fb=%p", omap_plane->name, state->crtc, state->fb); 52 + DBG("%s, crtc=%p fb=%p", omap_plane->name, new_state->crtc, 53 + new_state->fb); 52 54 53 55 memset(&info, 0, sizeof(info)); 54 56 info.rotation_type = OMAP_DSS_ROT_NONE; 55 57 info.rotation = DRM_MODE_ROTATE_0; 56 - info.global_alpha = state->alpha >> 8; 57 - info.zorder = state->normalized_zpos; 58 - if (state->pixel_blend_mode == DRM_MODE_BLEND_PREMULTI) 58 + info.global_alpha = new_state->alpha >> 8; 59 + info.zorder = new_state->normalized_zpos; 60 + if (new_state->pixel_blend_mode == DRM_MODE_BLEND_PREMULTI) 59 61 info.pre_mult_alpha = 1; 60 62 else 61 63 info.pre_mult_alpha = 0; 62 - info.color_encoding = state->color_encoding; 63 - info.color_range = state->color_range; 64 + info.color_encoding = new_state->color_encoding; 65 + info.color_range = new_state->color_range; 64 66 65 67 /* update scanout: */ 66 - omap_framebuffer_update_scanout(state->fb, state, &info); 68 + omap_framebuffer_update_scanout(new_state->fb, new_state, &info); 67 69 68 70 DBG("%dx%d -> %dx%d (%d)", info.width, info.height, 69 71 info.out_width, info.out_height, ··· 75 73 76 74 /* and finally, update omapdss: */ 77 75 ret = dispc_ovl_setup(priv->dispc, omap_plane->id, &info, 78 - omap_crtc_timings(state->crtc), false, 79 - omap_crtc_channel(state->crtc)); 76 + omap_crtc_timings(new_state->crtc), false, 77 + omap_crtc_channel(new_state->crtc)); 80 78 if (ret) { 81 79 dev_err(plane->dev->dev, "Failed to setup plane %s\n", 
82 80 omap_plane->name); ··· 88 86 } 89 87 90 88 static void omap_plane_atomic_disable(struct drm_plane *plane, 91 - struct drm_plane_state *old_state) 89 + struct drm_atomic_state *state) 92 90 { 91 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 92 + plane); 93 93 struct omap_drm_private *priv = plane->dev->dev_private; 94 94 struct omap_plane *omap_plane = to_omap_plane(plane); 95 95 96 - plane->state->rotation = DRM_MODE_ROTATE_0; 97 - plane->state->zpos = plane->type == DRM_PLANE_TYPE_PRIMARY 98 - ? 0 : omap_plane->id; 96 + new_state->rotation = DRM_MODE_ROTATE_0; 97 + new_state->zpos = plane->type == DRM_PLANE_TYPE_PRIMARY ? 0 : omap_plane->id; 99 98 100 99 dispc_ovl_enable(priv->dispc, omap_plane->id, false); 101 100 } 102 101 103 102 static int omap_plane_atomic_check(struct drm_plane *plane, 104 - struct drm_plane_state *state) 103 + struct drm_atomic_state *state) 105 104 { 105 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 106 + plane); 106 107 struct drm_crtc_state *crtc_state; 107 108 108 - if (!state->fb) 109 + if (!new_plane_state->fb) 109 110 return 0; 110 111 111 - /* crtc should only be NULL when disabling (i.e., !state->fb) */ 112 - if (WARN_ON(!state->crtc)) 112 + /* crtc should only be NULL when disabling (i.e., !new_plane_state->fb) */ 113 + if (WARN_ON(!new_plane_state->crtc)) 113 114 return 0; 114 115 115 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, state->crtc); 116 + crtc_state = drm_atomic_get_existing_crtc_state(state, 117 + new_plane_state->crtc); 116 118 /* we should have a crtc state if the plane is attached to a crtc */ 117 119 if (WARN_ON(!crtc_state)) 118 120 return 0; ··· 124 118 if (!crtc_state->enable) 125 119 return 0; 126 120 127 - if (state->crtc_x < 0 || state->crtc_y < 0) 121 + if (new_plane_state->crtc_x < 0 || new_plane_state->crtc_y < 0) 128 122 return -EINVAL; 129 123 130 - if (state->crtc_x + state->crtc_w > 
crtc_state->adjusted_mode.hdisplay) 124 + if (new_plane_state->crtc_x + new_plane_state->crtc_w > crtc_state->adjusted_mode.hdisplay) 131 125 return -EINVAL; 132 126 133 - if (state->crtc_y + state->crtc_h > crtc_state->adjusted_mode.vdisplay) 127 + if (new_plane_state->crtc_y + new_plane_state->crtc_h > crtc_state->adjusted_mode.vdisplay) 134 128 return -EINVAL; 135 129 136 - if (state->rotation != DRM_MODE_ROTATE_0 && 137 - !omap_framebuffer_supports_rotation(state->fb)) 130 + if (new_plane_state->rotation != DRM_MODE_ROTATE_0 && 131 + !omap_framebuffer_supports_rotation(new_plane_state->fb)) 138 132 return -EINVAL; 139 133 140 134 return 0;
+1 -1
drivers/gpu/drm/panel/panel-lvds.c
··· 244 244 245 245 static int panel_lvds_remove(struct platform_device *pdev) 246 246 { 247 - struct panel_lvds *lvds = dev_get_drvdata(&pdev->dev); 247 + struct panel_lvds *lvds = platform_get_drvdata(pdev); 248 248 249 249 drm_panel_remove(&lvds->panel); 250 250
+2 -2
drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
··· 267 267 268 268 static int seiko_panel_remove(struct platform_device *pdev) 269 269 { 270 - struct seiko_panel *panel = dev_get_drvdata(&pdev->dev); 270 + struct seiko_panel *panel = platform_get_drvdata(pdev); 271 271 272 272 drm_panel_remove(&panel->base); 273 273 drm_panel_disable(&panel->base); ··· 277 277 278 278 static void seiko_panel_shutdown(struct platform_device *pdev) 279 279 { 280 - struct seiko_panel *panel = dev_get_drvdata(&pdev->dev); 280 + struct seiko_panel *panel = platform_get_drvdata(pdev); 281 281 282 282 drm_panel_disable(&panel->base); 283 283 }
+1 -1
drivers/gpu/drm/panel/panel-simple.c
··· 4800 4800 4801 4801 err = mipi_dsi_attach(dsi); 4802 4802 if (err) { 4803 - struct panel_simple *panel = dev_get_drvdata(&dsi->dev); 4803 + struct panel_simple *panel = mipi_dsi_get_drvdata(dsi); 4804 4804 4805 4805 drm_panel_remove(&panel->base); 4806 4806 }
+9 -1
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 130 130 panfrost_devfreq_profile.initial_freq = cur_freq; 131 131 dev_pm_opp_put(opp); 132 132 133 + /* 134 + * Setup default thresholds for the simple_ondemand governor. 135 + * The values are chosen based on experiments. 136 + */ 137 + pfdevfreq->gov_data.upthreshold = 45; 138 + pfdevfreq->gov_data.downdifferential = 5; 139 + 133 140 devfreq = devm_devfreq_add_device(dev, &panfrost_devfreq_profile, 134 - DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL); 141 + DEVFREQ_GOV_SIMPLE_ONDEMAND, 142 + &pfdevfreq->gov_data); 135 143 if (IS_ERR(devfreq)) { 136 144 DRM_DEV_ERROR(dev, "Couldn't initialize GPU devfreq\n"); 137 145 ret = PTR_ERR(devfreq);
+2
drivers/gpu/drm/panfrost/panfrost_devfreq.h
··· 4 4 #ifndef __PANFROST_DEVFREQ_H__ 5 5 #define __PANFROST_DEVFREQ_H__ 6 6 7 + #include <linux/devfreq.h> 7 8 #include <linux/spinlock.h> 8 9 #include <linux/ktime.h> 9 10 ··· 18 17 struct devfreq *devfreq; 19 18 struct opp_table *regulators_opp_table; 20 19 struct thermal_cooling_device *cooling; 20 + struct devfreq_simple_ondemand_data gov_data; 21 21 bool opp_of_table_added; 22 22 23 23 ktime_t busy_time;
+7 -4
drivers/gpu/drm/panfrost/panfrost_job.c
··· 432 432 mutex_unlock(&queue->lock); 433 433 } 434 434 435 - static void panfrost_job_timedout(struct drm_sched_job *sched_job) 435 + static enum drm_gpu_sched_stat panfrost_job_timedout(struct drm_sched_job 436 + *sched_job) 436 437 { 437 438 struct panfrost_job *job = to_panfrost_job(sched_job); 438 439 struct panfrost_device *pfdev = job->pfdev; ··· 444 443 * spurious. Bail out. 445 444 */ 446 445 if (dma_fence_is_signaled(job->done_fence)) 447 - return; 446 + return DRM_GPU_SCHED_STAT_NOMINAL; 448 447 449 448 dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p", 450 449 js, ··· 456 455 457 456 /* Scheduler is already stopped, nothing to do. */ 458 457 if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job)) 459 - return; 458 + return DRM_GPU_SCHED_STAT_NOMINAL; 460 459 461 460 /* Schedule a reset if there's no reset in progress. */ 462 461 if (!atomic_xchg(&pfdev->reset.pending, 1)) 463 462 schedule_work(&pfdev->reset.work); 463 + 464 + return DRM_GPU_SCHED_STAT_NOMINAL; 464 465 } 465 466 466 467 static const struct drm_sched_backend_ops panfrost_sched_ops = { ··· 627 624 ret = drm_sched_init(&js->queue[j].sched, 628 625 &panfrost_sched_ops, 629 626 1, 0, msecs_to_jiffies(JOB_TIMEOUT_MS), 630 - "pan_js"); 627 + NULL, "pan_js"); 631 628 if (ret) { 632 629 dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret); 633 630 goto err_sched;
+24 -15
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 488 488 } 489 489 bo->base.pages = pages; 490 490 bo->base.pages_use_count = 1; 491 - } else 491 + } else { 492 492 pages = bo->base.pages; 493 + if (pages[page_offset]) { 494 + /* Pages are already mapped, bail out. */ 495 + mutex_unlock(&bo->base.pages_lock); 496 + goto out; 497 + } 498 + } 493 499 494 500 mapping = bo->base.base.filp->f_mapping; 495 501 mapping_set_unevictable(mapping); ··· 528 522 529 523 dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr); 530 524 525 + out: 531 526 panfrost_gem_mapping_put(bomapping); 532 527 533 528 return 0; ··· 578 571 { 579 572 struct panfrost_device *pfdev = data; 580 573 u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT); 581 - int i, ret; 574 + int ret; 582 575 583 - for (i = 0; status; i++) { 584 - u32 mask = BIT(i) | BIT(i + 16); 576 + while (status) { 577 + u32 as = ffs(status | (status >> 16)) - 1; 578 + u32 mask = BIT(as) | BIT(as + 16); 585 579 u64 addr; 586 580 u32 fault_status; 587 581 u32 exception_type; 588 582 u32 access_type; 589 583 u32 source_id; 590 584 591 - if (!(status & mask)) 592 - continue; 593 - 594 - fault_status = mmu_read(pfdev, AS_FAULTSTATUS(i)); 595 - addr = mmu_read(pfdev, AS_FAULTADDRESS_LO(i)); 596 - addr |= (u64)mmu_read(pfdev, AS_FAULTADDRESS_HI(i)) << 32; 585 + fault_status = mmu_read(pfdev, AS_FAULTSTATUS(as)); 586 + addr = mmu_read(pfdev, AS_FAULTADDRESS_LO(as)); 587 + addr |= (u64)mmu_read(pfdev, AS_FAULTADDRESS_HI(as)) << 32; 597 588 598 589 /* decode the fault status */ 599 590 exception_type = fault_status & 0xFF; 600 591 access_type = (fault_status >> 8) & 0x3; 601 592 source_id = (fault_status >> 16); 602 593 594 + mmu_write(pfdev, MMU_INT_CLEAR, mask); 595 + 603 596 /* Page fault only */ 604 597 ret = -1; 605 - if ((status & mask) == BIT(i) && (exception_type & 0xF8) == 0xC0) 606 - ret = panfrost_mmu_map_fault_addr(pfdev, i, addr); 598 + if ((status & mask) == BIT(as) && (exception_type & 0xF8) == 0xC0) 599 + ret = panfrost_mmu_map_fault_addr(pfdev, as, addr); 607 
600 608 601 if (ret) 609 602 /* terminal fault, print info about the fault */ ··· 615 608 "exception type 0x%X: %s\n" 616 609 "access type 0x%X: %s\n" 617 610 "source id 0x%X\n", 618 - i, addr, 611 + as, addr, 619 612 "TODO", 620 613 fault_status, 621 614 (fault_status & (1 << 10) ? "DECODER FAULT" : "SLAVE FAULT"), ··· 623 616 access_type, access_type_name(pfdev, fault_status), 624 617 source_id); 625 618 626 - mmu_write(pfdev, MMU_INT_CLEAR, mask); 627 - 628 619 status &= ~mask; 620 + 621 + /* If we received new MMU interrupts, process them before returning. */ 622 + if (!status) 623 + status = mmu_read(pfdev, MMU_INT_RAWSTAT); 629 624 } 630 625 631 626 mmu_write(pfdev, MMU_INT_MASK, ~0);
+2 -2
drivers/gpu/drm/pl111/pl111_display.c
··· 17 17 18 18 #include <drm/drm_fb_cma_helper.h> 19 19 #include <drm/drm_fourcc.h> 20 + #include <drm/drm_gem_atomic_helper.h> 20 21 #include <drm/drm_gem_cma_helper.h> 21 - #include <drm/drm_gem_framebuffer_helper.h> 22 22 #include <drm/drm_vblank.h> 23 23 24 24 #include "pl111_drm.h" ··· 440 440 .enable = pl111_display_enable, 441 441 .disable = pl111_display_disable, 442 442 .update = pl111_display_update, 443 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 443 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 444 444 }; 445 445 446 446 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
+2 -1
drivers/gpu/drm/qxl/qxl_cmd.c
··· 254 254 } 255 255 } 256 256 257 + wake_up_all(&qdev->release_event); 257 258 DRM_DEBUG_DRIVER("%d\n", i); 258 259 259 260 return i; ··· 269 268 int ret; 270 269 271 270 ret = qxl_bo_create(qdev, size, false /* not kernel - device */, 272 - false, QXL_GEM_DOMAIN_VRAM, NULL, &bo); 271 + false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo); 273 272 if (ret) { 274 273 DRM_ERROR("failed to allocate VRAM BO\n"); 275 274 return ret;
+209 -161
drivers/gpu/drm/qxl/qxl_display.c
··· 464 464 }; 465 465 466 466 static int qxl_primary_atomic_check(struct drm_plane *plane, 467 - struct drm_plane_state *state) 467 + struct drm_atomic_state *state) 468 468 { 469 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 470 + plane); 469 471 struct qxl_device *qdev = to_qxl(plane->dev); 470 472 struct qxl_bo *bo; 471 473 472 - if (!state->crtc || !state->fb) 474 + if (!new_plane_state->crtc || !new_plane_state->fb) 473 475 return 0; 474 476 475 - bo = gem_to_qxl_bo(state->fb->obj[0]); 477 + bo = gem_to_qxl_bo(new_plane_state->fb->obj[0]); 476 478 477 479 return qxl_check_framebuffer(qdev, bo); 478 480 } 479 481 480 - static int qxl_primary_apply_cursor(struct drm_plane *plane) 482 + static int qxl_primary_apply_cursor(struct qxl_device *qdev, 483 + struct drm_plane_state *plane_state) 481 484 { 482 - struct drm_device *dev = plane->dev; 483 - struct qxl_device *qdev = to_qxl(dev); 484 - struct drm_framebuffer *fb = plane->state->fb; 485 - struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc); 485 + struct drm_framebuffer *fb = plane_state->fb; 486 + struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc); 486 487 struct qxl_cursor_cmd *cmd; 487 488 struct qxl_release *release; 488 489 int ret = 0; ··· 507 506 508 507 cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release); 509 508 cmd->type = QXL_CURSOR_SET; 510 - cmd->u.set.position.x = plane->state->crtc_x + fb->hot_x; 511 - cmd->u.set.position.y = plane->state->crtc_y + fb->hot_y; 509 + cmd->u.set.position.x = plane_state->crtc_x + fb->hot_x; 510 + cmd->u.set.position.y = plane_state->crtc_y + fb->hot_y; 512 511 513 512 cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0); 514 513 ··· 525 524 return ret; 526 525 } 527 526 528 - static void qxl_primary_atomic_update(struct drm_plane *plane, 529 - struct drm_plane_state *old_state) 527 + static int qxl_primary_move_cursor(struct qxl_device *qdev, 528 + struct drm_plane_state *plane_state) 530 529 
{
530 + struct drm_framebuffer *fb = plane_state->fb;
531 + struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc);
532 + struct qxl_cursor_cmd *cmd;
533 + struct qxl_release *release;
534 + int ret = 0;
535 +
536 + if (!qcrtc->cursor_bo)
537 + return 0;
538 +
539 + ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
540 + QXL_RELEASE_CURSOR_CMD,
541 + &release, NULL);
542 + if (ret)
543 + return ret;
544 +
545 + ret = qxl_release_reserve_list(release, true);
546 + if (ret) {
547 + qxl_release_free(qdev, release);
548 + return ret;
549 + }
550 +
551 + cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
552 + cmd->type = QXL_CURSOR_MOVE;
553 + cmd->u.position.x = plane_state->crtc_x + fb->hot_x;
554 + cmd->u.position.y = plane_state->crtc_y + fb->hot_y;
555 + qxl_release_unmap(qdev, release, &cmd->release_info);
556 +
557 + qxl_release_fence_buffer_objects(release);
558 + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
559 + return ret;
560 + }
561 +
562 + static struct qxl_bo *qxl_create_cursor(struct qxl_device *qdev,
563 + struct qxl_bo *user_bo,
564 + int hot_x, int hot_y)
565 + {
566 + static const u32 size = 64 * 64 * 4;
567 + struct qxl_bo *cursor_bo;
568 + struct dma_buf_map cursor_map;
569 + struct dma_buf_map user_map;
570 + struct qxl_cursor cursor;
571 + int ret;
572 +
573 + if (!user_bo)
574 + return NULL;
575 +
576 + ret = qxl_bo_create(qdev, sizeof(struct qxl_cursor) + size,
577 + false, true, QXL_GEM_DOMAIN_VRAM, 1,
578 + NULL, &cursor_bo);
579 + if (ret)
580 + goto err;
581 +
582 + ret = qxl_bo_vmap(cursor_bo, &cursor_map);
583 + if (ret)
584 + goto err_unref;
585 +
586 + ret = qxl_bo_vmap(user_bo, &user_map);
587 + if (ret)
588 + goto err_unmap;
589 +
590 + cursor.header.unique = 0;
591 + cursor.header.type = SPICE_CURSOR_TYPE_ALPHA;
592 + cursor.header.width = 64;
593 + cursor.header.height = 64;
594 + cursor.header.hot_spot_x = hot_x;
595 + cursor.header.hot_spot_y = hot_y;
596 + cursor.data_size = size;
597 + cursor.chunk.next_chunk = 0;
598 + cursor.chunk.prev_chunk = 0;
599 + cursor.chunk.data_size = size;
600 + if (cursor_map.is_iomem) {
601 + memcpy_toio(cursor_map.vaddr_iomem,
602 + &cursor, sizeof(cursor));
603 + memcpy_toio(cursor_map.vaddr_iomem + sizeof(cursor),
604 + user_map.vaddr, size);
605 + } else {
606 + memcpy(cursor_map.vaddr,
607 + &cursor, sizeof(cursor));
608 + memcpy(cursor_map.vaddr + sizeof(cursor),
609 + user_map.vaddr, size);
610 + }
611 +
612 + qxl_bo_vunmap(user_bo);
613 + qxl_bo_vunmap(cursor_bo);
614 + return cursor_bo;
615 +
616 + err_unmap:
617 + qxl_bo_vunmap(cursor_bo);
618 + err_unref:
619 + qxl_bo_unpin(cursor_bo);
620 + qxl_bo_unref(&cursor_bo);
621 + err:
622 + return NULL;
623 + }
624 +
625 + static void qxl_free_cursor(struct qxl_bo *cursor_bo)
626 + {
627 + if (!cursor_bo)
628 + return;
629 +
630 + qxl_bo_unpin(cursor_bo);
631 + qxl_bo_unref(&cursor_bo);
632 + }
633 +
634 + static void qxl_primary_atomic_update(struct drm_plane *plane,
635 + struct drm_atomic_state *state)
636 + {
637 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
638 + plane);
531 639 struct qxl_device *qdev = to_qxl(plane->dev);
532 - struct qxl_bo *bo = gem_to_qxl_bo(plane->state->fb->obj[0]);
640 + struct qxl_bo *bo = gem_to_qxl_bo(new_state->fb->obj[0]);
533 641 struct qxl_bo *primary;
534 642 struct drm_clip_rect norect = {
535 643 .x1 = 0,
536 644 .y1 = 0,
537 - .x2 = plane->state->fb->width,
538 - .y2 = plane->state->fb->height
645 + .x2 = new_state->fb->width,
646 + .y2 = new_state->fb->height
539 647 };
540 648 uint32_t dumb_shadow_offset = 0;
541 649
···
654 544 if (qdev->primary_bo)
655 545 qxl_io_destroy_primary(qdev);
656 546 qxl_io_create_primary(qdev, primary);
657 - qxl_primary_apply_cursor(plane);
547 + qxl_primary_apply_cursor(qdev, plane->state);
658 548 }
659 549
660 550 if (bo->is_dumb)
661 551 dumb_shadow_offset =
662 - qdev->dumb_heads[plane->state->crtc->index].x;
552 + qdev->dumb_heads[new_state->crtc->index].x;
663 553
664 - qxl_draw_dirty_fb(qdev, plane->state->fb, bo, 0, 0, &norect, 1, 1,
554 + qxl_draw_dirty_fb(qdev, new_state->fb, bo, 0, 0, &norect, 1, 1,
665 555 dumb_shadow_offset);
666 556 }
667 557
668 558 static void qxl_primary_atomic_disable(struct drm_plane *plane,
669 - struct drm_plane_state *old_state)
559 + struct drm_atomic_state *state)
670 560 {
561 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
562 + plane);
671 563 struct qxl_device *qdev = to_qxl(plane->dev);
672 564
673 565 if (old_state->fb) {
674 566 struct qxl_bo *bo = gem_to_qxl_bo(old_state->fb->obj[0]);
675 567
568 + if (bo->shadow)
569 + bo = bo->shadow;
676 570 if (bo->is_primary) {
677 571 qxl_io_destroy_primary(qdev);
678 572 bo->is_primary = false;
···
685 571 }
686 572
687 573 static void qxl_cursor_atomic_update(struct drm_plane *plane,
688 - struct drm_plane_state *old_state)
574 + struct drm_atomic_state *state)
689 575 {
690 - struct drm_device *dev = plane->dev;
691 - struct qxl_device *qdev = to_qxl(dev);
692 - struct drm_framebuffer *fb = plane->state->fb;
693 - struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
694 - struct qxl_release *release;
695 - struct qxl_cursor_cmd *cmd;
696 - struct qxl_cursor *cursor;
697 - struct drm_gem_object *obj;
698 - struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
699 - int ret;
700 - struct dma_buf_map user_map;
701 - struct dma_buf_map cursor_map;
702 - void *user_ptr;
703 - int size = 64*64*4;
704 -
705 - ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
706 - QXL_RELEASE_CURSOR_CMD,
707 - &release, NULL);
708 - if (ret)
709 - return;
576 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
577 + plane);
578 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
579 + plane);
580 + struct qxl_device *qdev = to_qxl(plane->dev);
581 + struct drm_framebuffer *fb = new_state->fb;
710 582
711 583 if (fb != old_state->fb) {
712 - obj = fb->obj[0];
713 - user_bo = gem_to_qxl_bo(obj);
714 -
715 - /* pinning is done in the prepare/cleanup framevbuffer */
716 - ret = qxl_bo_kmap(user_bo, &user_map);
717 - if (ret)
718 - goto out_free_release;
719 - user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
720 -
721 - ret = qxl_alloc_bo_reserved(qdev, release,
722 - sizeof(struct qxl_cursor) + size,
723 - &cursor_bo);
724 - if (ret)
725 - goto out_kunmap;
726 -
727 - ret = qxl_bo_pin(cursor_bo);
728 - if (ret)
729 - goto out_free_bo;
730 -
731 - ret = qxl_release_reserve_list(release, true);
732 - if (ret)
733 - goto out_unpin;
734 -
735 - ret = qxl_bo_kmap(cursor_bo, &cursor_map);
736 - if (ret)
737 - goto out_backoff;
738 - if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */
739 - cursor = (struct qxl_cursor __force *)cursor_map.vaddr_iomem;
740 - else
741 - cursor = (struct qxl_cursor *)cursor_map.vaddr;
742 -
743 - cursor->header.unique = 0;
744 - cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
745 - cursor->header.width = 64;
746 - cursor->header.height = 64;
747 - cursor->header.hot_spot_x = fb->hot_x;
748 - cursor->header.hot_spot_y = fb->hot_y;
749 - cursor->data_size = size;
750 - cursor->chunk.next_chunk = 0;
751 - cursor->chunk.prev_chunk = 0;
752 - cursor->chunk.data_size = size;
753 - memcpy(cursor->chunk.data, user_ptr, size);
754 - qxl_bo_kunmap(cursor_bo);
755 - qxl_bo_kunmap(user_bo);
756 -
757 - cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
758 - cmd->u.set.visible = 1;
759 - cmd->u.set.shape = qxl_bo_physical_address(qdev,
760 - cursor_bo, 0);
761 - cmd->type = QXL_CURSOR_SET;
762 -
763 - old_cursor_bo = qcrtc->cursor_bo;
764 - qcrtc->cursor_bo = cursor_bo;
765 - cursor_bo = NULL;
584 + qxl_primary_apply_cursor(qdev, new_state);
766 585 } else {
767 -
768 - ret = qxl_release_reserve_list(release, true);
769 - if (ret)
770 - goto out_free_release;
771 -
772 - cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
773 - cmd->type = QXL_CURSOR_MOVE;
586 + qxl_primary_move_cursor(qdev, new_state);
774 587 }
775 -
776 - cmd->u.position.x = plane->state->crtc_x + fb->hot_x;
777 - cmd->u.position.y = plane->state->crtc_y + fb->hot_y;
778 -
779 - qxl_release_unmap(qdev, release, &cmd->release_info);
780 - qxl_release_fence_buffer_objects(release);
781 - qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
782 -
783 - if (old_cursor_bo != NULL)
784 - qxl_bo_unpin(old_cursor_bo);
785 - qxl_bo_unref(&old_cursor_bo);
786 - qxl_bo_unref(&cursor_bo);
787 -
788 - return;
789 -
790 - out_backoff:
791 - qxl_release_backoff_reserve_list(release);
792 - out_unpin:
793 - qxl_bo_unpin(cursor_bo);
794 - out_free_bo:
795 - qxl_bo_unref(&cursor_bo);
796 - out_kunmap:
797 - qxl_bo_kunmap(user_bo);
798 - out_free_release:
799 - qxl_release_free(qdev, release);
800 - return;
801 -
802 588 }
803 589
804 590 static void qxl_cursor_atomic_disable(struct drm_plane *plane,
805 - struct drm_plane_state *old_state)
591 + struct drm_atomic_state *state)
806 592 {
593 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
594 + plane);
807 595 struct qxl_device *qdev = to_qxl(plane->dev);
596 + struct qxl_crtc *qcrtc;
808 597 struct qxl_release *release;
809 598 struct qxl_cursor_cmd *cmd;
810 599 int ret;
···
730 713
731 714 qxl_release_fence_buffer_objects(release);
732 715 qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
716 +
717 + qcrtc = to_qxl_crtc(old_state->crtc);
718 + qxl_free_cursor(qcrtc->cursor_bo);
719 + qcrtc->cursor_bo = NULL;
733 720 }
734 721
735 722 static void qxl_update_dumb_head(struct qxl_device *qdev,
···
791 770 DRM_DEBUG("%dx%d\n", surf->width, surf->height);
792 771 }
793 772
773 + static void qxl_prepare_shadow(struct qxl_device *qdev, struct qxl_bo *user_bo,
774 + int crtc_index)
775 + {
776 + struct qxl_surface surf;
777 +
778 + qxl_update_dumb_head(qdev, crtc_index,
779 + user_bo);
780 + qxl_calc_dumb_shadow(qdev, &surf);
781 + if (!qdev->dumb_shadow_bo ||
782 + qdev->dumb_shadow_bo->surf.width != surf.width ||
783 + qdev->dumb_shadow_bo->surf.height != surf.height) {
784 + if (qdev->dumb_shadow_bo) {
785 + drm_gem_object_put
786 + (&qdev->dumb_shadow_bo->tbo.base);
787 + qdev->dumb_shadow_bo = NULL;
788 + }
789 + qxl_bo_create(qdev, surf.height * surf.stride,
790 + true, true, QXL_GEM_DOMAIN_SURFACE, 0,
791 + &surf, &qdev->dumb_shadow_bo);
792 + }
793 + if (user_bo->shadow != qdev->dumb_shadow_bo) {
794 + if (user_bo->shadow) {
795 + qxl_bo_unpin(user_bo->shadow);
796 + drm_gem_object_put
797 + (&user_bo->shadow->tbo.base);
798 + user_bo->shadow = NULL;
799 + }
800 + drm_gem_object_get(&qdev->dumb_shadow_bo->tbo.base);
801 + user_bo->shadow = qdev->dumb_shadow_bo;
802 + qxl_bo_pin(user_bo->shadow);
803 + }
804 + }
805 +
794 806 static int qxl_plane_prepare_fb(struct drm_plane *plane,
795 807 struct drm_plane_state *new_state)
796 808 {
797 809 struct qxl_device *qdev = to_qxl(plane->dev);
798 810 struct drm_gem_object *obj;
799 811 struct qxl_bo *user_bo;
800 - struct qxl_surface surf;
801 812
802 813 if (!new_state->fb)
803 814 return 0;
···
839 786
840 787 if (plane->type == DRM_PLANE_TYPE_PRIMARY &&
841 788 user_bo->is_dumb) {
842 - qxl_update_dumb_head(qdev, new_state->crtc->index,
843 - user_bo);
844 - qxl_calc_dumb_shadow(qdev, &surf);
845 - if (!qdev->dumb_shadow_bo ||
846 - qdev->dumb_shadow_bo->surf.width != surf.width ||
847 - qdev->dumb_shadow_bo->surf.height != surf.height) {
848 - if (qdev->dumb_shadow_bo) {
849 - drm_gem_object_put
850 - (&qdev->dumb_shadow_bo->tbo.base);
851 - qdev->dumb_shadow_bo = NULL;
852 - }
853 - qxl_bo_create(qdev, surf.height * surf.stride,
854 - true, true, QXL_GEM_DOMAIN_SURFACE, &surf,
855 - &qdev->dumb_shadow_bo);
856 - }
857 - if (user_bo->shadow != qdev->dumb_shadow_bo) {
858 - if (user_bo->shadow) {
859 - drm_gem_object_put
860 - (&user_bo->shadow->tbo.base);
861 - user_bo->shadow = NULL;
862 - }
863 - drm_gem_object_get(&qdev->dumb_shadow_bo->tbo.base);
864 - user_bo->shadow = qdev->dumb_shadow_bo;
865 - }
789 + qxl_prepare_shadow(qdev, user_bo, new_state->crtc->index);
790 + }
791 +
792 + if (plane->type == DRM_PLANE_TYPE_CURSOR &&
793 + plane->state->fb != new_state->fb) {
794 + struct qxl_crtc *qcrtc = to_qxl_crtc(new_state->crtc);
795 + struct qxl_bo *old_cursor_bo = qcrtc->cursor_bo;
796 +
797 + qcrtc->cursor_bo = qxl_create_cursor(qdev, user_bo,
798 + new_state->fb->hot_x,
799 + new_state->fb->hot_y);
800 + qxl_free_cursor(old_cursor_bo);
866 801 }
867 802
868 803 return qxl_bo_pin(user_bo);
···
875 834 qxl_bo_unpin(user_bo);
876 835
877 836 if (old_state->fb != plane->state->fb && user_bo->shadow) {
837 + qxl_bo_unpin(user_bo->shadow);
878 838 drm_gem_object_put(&user_bo->shadow->tbo.base);
879 839 user_bo->shadow = NULL;
880 840 }
···
1197 1155 }
1198 1156 qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
1199 1157
1200 - ret = qxl_bo_pin(qdev->monitors_config_bo);
1158 + ret = qxl_bo_vmap(qdev->monitors_config_bo, &map);
1201 1159 if (ret)
1202 1160 return ret;
1203 -
1204 - qxl_bo_kmap(qdev->monitors_config_bo, &map);
1205 1161
1206 1162 qdev->monitors_config = qdev->monitors_config_bo->kptr;
1207 1163 qdev->ram_header->monitors_config =
···
1219 1179 {
1220 1180 int ret;
1221 1181
1182 + if (!qdev->monitors_config_bo)
1183 + return 0;
1184 +
1222 1185 qdev->monitors_config = NULL;
1223 1186 qdev->ram_header->monitors_config = 0;
1224 1187
1225 - qxl_bo_kunmap(qdev->monitors_config_bo);
1226 - ret = qxl_bo_unpin(qdev->monitors_config_bo);
1188 + ret = qxl_bo_vunmap(qdev->monitors_config_bo);
1227 1189 if (ret)
1228 1190 return ret;
1229 1191
···
1238 1196 int i;
1239 1197 int ret;
1240 1198
1241 - drm_mode_config_init(&qdev->ddev);
1199 + ret = drmm_mode_config_init(&qdev->ddev);
1200 + if (ret)
1201 + return ret;
1242 1202
1243 1203 ret = qxl_create_monitors_object(qdev);
1244 1204 if (ret)
···
1272 1228
1273 1229 void qxl_modeset_fini(struct qxl_device *qdev)
1274 1230 {
1231 + if (qdev->dumb_shadow_bo) {
1232 + qxl_bo_unpin(qdev->dumb_shadow_bo);
1233 + drm_gem_object_put(&qdev->dumb_shadow_bo->tbo.base);
1234 + qdev->dumb_shadow_bo = NULL;
1235 + }
1275 1236 qxl_destroy_monitors_object(qdev);
1276 - drm_mode_config_cleanup(&qdev->ddev);
1277 1237 }
+4 -4
drivers/gpu/drm/qxl/qxl_draw.c
···
48 48 struct qxl_clip_rects *dev_clips;
49 49 int ret;
50 50
51 - ret = qxl_bo_kmap(clips_bo, &map);
51 + ret = qxl_bo_vmap_locked(clips_bo, &map);
52 52 if (ret)
53 53 return NULL;
54 54 dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
···
202 202 if (ret)
203 203 goto out_release_backoff;
204 204
205 - ret = qxl_bo_kmap(bo, &surface_map);
205 + ret = qxl_bo_vmap_locked(bo, &surface_map);
206 206 if (ret)
207 207 goto out_release_backoff;
208 208 surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
···
210 210 ret = qxl_image_init(qdev, release, dimage, surface_base,
211 211 left - dumb_shadow_offset,
212 212 top, width, height, depth, stride);
213 - qxl_bo_kunmap(bo);
213 + qxl_bo_vunmap_locked(bo);
214 214 if (ret)
215 215 goto out_release_backoff;
216 216
···
247 247 rects[i].top = clips_ptr->y1;
248 248 rects[i].bottom = clips_ptr->y2;
249 249 }
250 - qxl_bo_kunmap(clips_bo);
250 + qxl_bo_vunmap_locked(clips_bo);
251 251
252 252 qxl_release_fence_buffer_objects(release);
253 253 qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false);
+4 -2
drivers/gpu/drm/qxl/qxl_drv.h
···
125 125 #define drm_encoder_to_qxl_output(x) container_of(x, struct qxl_output, enc)
126 126
127 127 struct qxl_mman {
128 - struct ttm_bo_device bdev;
128 + struct ttm_device bdev;
129 129 };
130 130
131 131 struct qxl_memslot {
···
214 214 spinlock_t release_lock;
215 215 struct idr release_idr;
216 216 uint32_t release_seqno;
217 + atomic_t release_count;
218 + wait_queue_head_t release_event;
217 219 spinlock_t release_idr_lock;
218 220 struct mutex async_io_mutex;
219 221 unsigned int last_sent_io_cmd;
···
337 335 /* qxl ttm */
338 336 int qxl_ttm_init(struct qxl_device *qdev);
339 337 void qxl_ttm_fini(struct qxl_device *qdev);
340 - int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
338 + int qxl_ttm_io_mem_reserve(struct ttm_device *bdev,
341 339 struct ttm_resource *mem);
342 340
343 341 /* qxl image */
+1 -1
drivers/gpu/drm/qxl/qxl_dumb.c
···
59 59 surf.stride = pitch;
60 60 surf.format = format;
61 61 r = qxl_gem_object_create_with_handle(qdev, file_priv,
62 - QXL_GEM_DOMAIN_SURFACE,
62 + QXL_GEM_DOMAIN_CPU,
63 63 args->size, &surf, &qobj,
64 64 &handle);
65 65 if (r)
+1 -1
drivers/gpu/drm/qxl/qxl_gem.c
···
55 55 /* At least align on page size */
56 56 if (alignment < PAGE_SIZE)
57 57 alignment = PAGE_SIZE;
58 - r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo);
58 + r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo);
59 59 if (r) {
60 60 if (r != -ERESTARTSYS)
61 61 DRM_ERROR(
+1 -1
drivers/gpu/drm/qxl/qxl_image.c
···
186 186 }
187 187 }
188 188 }
189 - qxl_bo_kunmap(chunk_bo);
189 + qxl_bo_vunmap_locked(chunk_bo);
190 190
191 191 image_bo = dimage->bo;
192 192 ptr = qxl_bo_kmap_atomic_page(qdev, image_bo, 0);
+1
drivers/gpu/drm/qxl/qxl_irq.c
···
87 87 init_waitqueue_head(&qdev->display_event);
88 88 init_waitqueue_head(&qdev->cursor_event);
89 89 init_waitqueue_head(&qdev->io_cmd_event);
90 + init_waitqueue_head(&qdev->release_event);
90 91 INIT_WORK(&qdev->client_monitors_config_work,
91 92 qxl_client_monitors_config_work_func);
92 93 atomic_set(&qdev->irq_received, 0);
+27 -3
drivers/gpu/drm/qxl/qxl_kms.c
···
286 286
287 287 void qxl_device_fini(struct qxl_device *qdev)
288 288 {
289 - qxl_bo_unref(&qdev->current_release_bo[0]);
290 - qxl_bo_unref(&qdev->current_release_bo[1]);
289 + int cur_idx;
290 +
291 + /* check if qxl_device_init() was successful (gc_work is initialized last) */
292 + if (!qdev->gc_work.func)
293 + return;
294 +
295 + for (cur_idx = 0; cur_idx < 3; cur_idx++) {
296 + if (!qdev->current_release_bo[cur_idx])
297 + continue;
298 + qxl_bo_unpin(qdev->current_release_bo[cur_idx]);
299 + qxl_bo_unref(&qdev->current_release_bo[cur_idx]);
300 + qdev->current_release_bo_offset[cur_idx] = 0;
301 + qdev->current_release_bo[cur_idx] = NULL;
302 + }
303 +
304 + /*
305 + * Ask host to release resources (+fill release ring),
306 + * then wait for the release actually happening.
307 + */
308 + qxl_io_notify_oom(qdev);
309 + wait_event_timeout(qdev->release_event,
310 + atomic_read(&qdev->release_count) == 0,
311 + HZ);
312 + flush_work(&qdev->gc_work);
313 + qxl_surf_evict(qdev);
314 + qxl_vram_evict(qdev);
315 +
291 316 qxl_gem_fini(qdev);
292 317 qxl_bo_fini(qdev);
293 - flush_work(&qdev->gc_work);
294 318 qxl_ring_free(qdev->command_ring);
295 319 qxl_ring_free(qdev->cursor_ring);
296 320 qxl_ring_free(qdev->release_ring);
+49 -8
drivers/gpu/drm/qxl/qxl_object.c
···
29 29 #include "qxl_drv.h"
30 30 #include "qxl_object.h"
31 31
32 + static int __qxl_bo_pin(struct qxl_bo *bo);
33 + static void __qxl_bo_unpin(struct qxl_bo *bo);
34 +
32 35 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
33 36 {
34 37 struct qxl_bo *bo;
···
106 103 .print_info = drm_gem_ttm_print_info,
107 104 };
108 105
109 - int qxl_bo_create(struct qxl_device *qdev,
110 - unsigned long size, bool kernel, bool pinned, u32 domain,
106 + int qxl_bo_create(struct qxl_device *qdev, unsigned long size,
107 + bool kernel, bool pinned, u32 domain, u32 priority,
111 108 struct qxl_surface *surf,
112 109 struct qxl_bo **bo_ptr)
113 110 {
···
140 137
141 138 qxl_ttm_placement_from_domain(bo, domain);
142 139
140 + bo->tbo.priority = priority;
143 141 r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type,
144 - &bo->placement, 0, &ctx, size,
145 - NULL, NULL, &qxl_ttm_bo_destroy);
142 + &bo->placement, 0, &ctx, NULL, NULL,
143 + &qxl_ttm_bo_destroy);
146 144 if (unlikely(r != 0)) {
147 145 if (r != -ERESTARTSYS)
148 146 dev_err(qdev->ddev.dev,
···
158 154 return 0;
159 155 }
160 156
161 - int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
157 + int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map)
162 158 {
163 159 int r;
160 +
161 + dma_resv_assert_held(bo->tbo.base.resv);
164 162
165 163 if (bo->kptr) {
166 164 bo->map_count++;
···
182 176 out:
183 177 *map = bo->map;
184 178 return 0;
179 + }
180 +
181 + int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map)
182 + {
183 + int r;
184 +
185 + r = qxl_bo_reserve(bo);
186 + if (r)
187 + return r;
188 +
189 + r = __qxl_bo_pin(bo);
190 + if (r) {
191 + qxl_bo_unreserve(bo);
192 + return r;
193 + }
194 +
195 + r = qxl_bo_vmap_locked(bo, map);
196 + qxl_bo_unreserve(bo);
197 + return r;
185 198 }
186 199
187 200 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
···
227 202 return rptr;
228 203 }
229 204
230 - ret = qxl_bo_kmap(bo, &bo_map);
205 + ret = qxl_bo_vmap_locked(bo, &bo_map);
231 206 if (ret)
232 207 return NULL;
233 208 rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
···
236 211 return rptr;
237 212 }
238 213
239 - void qxl_bo_kunmap(struct qxl_bo *bo)
214 + void qxl_bo_vunmap_locked(struct qxl_bo *bo)
240 215 {
216 + dma_resv_assert_held(bo->tbo.base.resv);
217 +
241 218 if (bo->kptr == NULL)
242 219 return;
243 220 bo->map_count--;
···
247 220 return;
248 221 bo->kptr = NULL;
249 222 ttm_bo_vunmap(&bo->tbo, &bo->map);
223 + }
224 +
225 + int qxl_bo_vunmap(struct qxl_bo *bo)
226 + {
227 + int r;
228 +
229 + r = qxl_bo_reserve(bo);
230 + if (r)
231 + return r;
232 +
233 + qxl_bo_vunmap_locked(bo);
234 + __qxl_bo_unpin(bo);
235 + qxl_bo_unreserve(bo);
236 + return 0;
250 237 }
251 238
252 239 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
···
273 232 io_mapping_unmap_atomic(pmap);
274 233 return;
275 234 fallback:
276 - qxl_bo_kunmap(bo);
235 + qxl_bo_vunmap_locked(bo);
277 236 }
278 237
279 238 void qxl_bo_unref(struct qxl_bo **bo)
+5 -2
drivers/gpu/drm/qxl/qxl_object.h
···
61 61 extern int qxl_bo_create(struct qxl_device *qdev,
62 62 unsigned long size,
63 63 bool kernel, bool pinned, u32 domain,
64 + u32 priority,
64 65 struct qxl_surface *surf,
65 66 struct qxl_bo **bo_ptr);
66 - extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
67 - extern void qxl_bo_kunmap(struct qxl_bo *bo);
67 + int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map);
68 + int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map);
69 + int qxl_bo_vunmap(struct qxl_bo *bo);
70 + void qxl_bo_vunmap_locked(struct qxl_bo *bo);
68 71 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
69 72 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
70 73 extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo);
+2 -2
drivers/gpu/drm/qxl/qxl_prime.c
···
59 59 struct qxl_bo *bo = gem_to_qxl_bo(obj);
60 60 int ret;
61 61
62 - ret = qxl_bo_kmap(bo, map);
62 + ret = qxl_bo_vmap(bo, map);
63 63 if (ret < 0)
64 64 return ret;
65 65
···
71 71 {
72 72 struct qxl_bo *bo = gem_to_qxl_bo(obj);
73 73
74 - qxl_bo_kunmap(bo);
74 + qxl_bo_vunmap(bo);
75 75 }
76 76
77 77 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
+22 -54
drivers/gpu/drm/qxl/qxl_release.c
···
58 58 signed long timeout)
59 59 {
60 60 struct qxl_device *qdev;
61 - struct qxl_release *release;
62 - int count = 0, sc = 0;
63 - bool have_drawable_releases;
64 61 unsigned long cur, end = jiffies + timeout;
65 62
66 63 qdev = container_of(fence->lock, struct qxl_device, release_lock);
67 - release = container_of(fence, struct qxl_release, base);
68 - have_drawable_releases = release->type == QXL_RELEASE_DRAWABLE;
69 64
70 - retry:
71 - sc++;
65 + if (!wait_event_timeout(qdev->release_event,
66 + (dma_fence_is_signaled(fence) ||
67 + (qxl_io_notify_oom(qdev), 0)),
68 + timeout))
69 + return 0;
72 70
73 - if (dma_fence_is_signaled(fence))
74 - goto signaled;
75 -
76 - qxl_io_notify_oom(qdev);
77 -
78 - for (count = 0; count < 11; count++) {
79 - if (!qxl_queue_garbage_collect(qdev, true))
80 - break;
81 -
82 - if (dma_fence_is_signaled(fence))
83 - goto signaled;
84 - }
85 -
86 - if (dma_fence_is_signaled(fence))
87 - goto signaled;
88 -
89 - if (have_drawable_releases || sc < 4) {
90 - if (sc > 2)
91 - /* back off */
92 - usleep_range(500, 1000);
93 -
94 - if (time_after(jiffies, end))
95 - return 0;
96 -
97 - if (have_drawable_releases && sc > 300) {
98 - DMA_FENCE_WARN(fence, "failed to wait on release %llu "
99 - "after spincount %d\n",
100 - fence->context & ~0xf0000000, sc);
101 - goto signaled;
102 - }
103 - goto retry;
104 - }
105 - /*
106 - * yeah, original sync_obj_wait gave up after 3 spins when
107 - * have_drawable_releases is not set.
108 - */
109 -
110 - signaled:
111 71 cur = jiffies;
112 72 if (time_after(cur, end))
113 73 return 0;
···
156 196 qxl_release_free_list(release);
157 197 kfree(release);
158 198 }
199 + atomic_dec(&qdev->release_count);
159 200 }
160 201
161 202 static int qxl_release_bo_alloc(struct qxl_device *qdev,
162 - struct qxl_bo **bo)
203 + struct qxl_bo **bo,
204 + u32 priority)
163 205 {
164 206 /* pin releases bo's they are too messy to evict */
165 207 return qxl_bo_create(qdev, PAGE_SIZE, false, true,
166 - QXL_GEM_DOMAIN_VRAM, NULL, bo);
208 + QXL_GEM_DOMAIN_VRAM, priority, NULL, bo);
167 209 }
168 210
169 211 int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo)
···
288 326 int ret = 0;
289 327 union qxl_release_info *info;
290 328 int cur_idx;
329 + u32 priority;
291 330
292 - if (type == QXL_RELEASE_DRAWABLE)
331 + if (type == QXL_RELEASE_DRAWABLE) {
293 332 cur_idx = 0;
294 - else if (type == QXL_RELEASE_SURFACE_CMD)
333 + priority = 0;
334 + } else if (type == QXL_RELEASE_SURFACE_CMD) {
295 335 cur_idx = 1;
296 - else if (type == QXL_RELEASE_CURSOR_CMD)
336 + priority = 1;
337 + } else if (type == QXL_RELEASE_CURSOR_CMD) {
297 338 cur_idx = 2;
339 + priority = 1;
340 + }
298 341 else {
299 342 DRM_ERROR("got illegal type: %d\n", type);
300 343 return -EINVAL;
···
311 344 *rbo = NULL;
312 345 return idr_ret;
313 346 }
347 + atomic_inc(&qdev->release_count);
314 348
315 349 mutex_lock(&qdev->release_mutex);
316 350 if (qdev->current_release_bo_offset[cur_idx] + 1 >= releases_per_bo[cur_idx]) {
···
320 352 qdev->current_release_bo[cur_idx] = NULL;
321 353 }
322 354 if (!qdev->current_release_bo[cur_idx]) {
323 - ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx]);
355 + ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority);
324 356 if (ret) {
325 357 mutex_unlock(&qdev->release_mutex);
326 358 if (free_bo) {
···
405 437 void qxl_release_fence_buffer_objects(struct qxl_release *release)
406 438 {
407 439 struct ttm_buffer_object *bo;
408 - struct ttm_bo_device *bdev;
440 + struct ttm_device *bdev;
409 441 struct ttm_validate_buffer *entry;
410 442 struct qxl_device *qdev;
411 443
···
426 458 release->id | 0xf0000000, release->base.seqno);
427 459 trace_dma_fence_emit(&release->base);
428 460
429 - spin_lock(&ttm_bo_glob.lru_lock);
461 + spin_lock(&ttm_glob.lru_lock);
430 462
431 463 list_for_each_entry(entry, &release->bos, head) {
432 464 bo = entry->bo;
···
435 467 ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL);
436 468 dma_resv_unlock(bo->base.resv);
437 469 }
438 - spin_unlock(&ttm_bo_glob.lru_lock);
470 + spin_unlock(&ttm_glob.lru_lock);
439 471 ww_acquire_fini(&release->ticket);
440 472
441 473
+9 -10
drivers/gpu/drm/qxl/qxl_ttm.c
···
36 36 #include "qxl_drv.h"
37 37 #include "qxl_object.h"
38 38
39 - static struct qxl_device *qxl_get_qdev(struct ttm_bo_device *bdev)
39 + static struct qxl_device *qxl_get_qdev(struct ttm_device *bdev)
40 40 {
41 41 struct qxl_mman *mman;
42 42 struct qxl_device *qdev;
···
69 69 *placement = qbo->placement;
70 70 }
71 71
72 - int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
72 + int qxl_ttm_io_mem_reserve(struct ttm_device *bdev,
73 73 struct ttm_resource *mem)
74 74 {
75 75 struct qxl_device *qdev = qxl_get_qdev(bdev);
···
98 98 /*
99 99 * TTM backend functions.
100 100 */
101 - static void qxl_ttm_backend_destroy(struct ttm_bo_device *bdev,
102 - struct ttm_tt *ttm)
101 + static void qxl_ttm_backend_destroy(struct ttm_device *bdev, struct ttm_tt *ttm)
103 102 {
104 103 ttm_tt_destroy_common(bdev, ttm);
105 104 ttm_tt_fini(ttm);
···
169 170 qxl_bo_move_notify(bo, false, NULL);
170 171 }
171 172
172 - static struct ttm_bo_driver qxl_bo_driver = {
173 + static struct ttm_device_funcs qxl_bo_driver = {
173 174 .ttm_tt_create = &qxl_ttm_tt_create,
174 175 .ttm_tt_destroy = &qxl_ttm_backend_destroy,
175 176 .eviction_valuable = ttm_bo_eviction_valuable,
···
192 193 int num_io_pages; /* != rom->num_io_pages, we include surface0 */
193 194
194 195 /* No others user of address space so set it to 0 */
195 - r = ttm_bo_device_init(&qdev->mman.bdev, &qxl_bo_driver, NULL,
196 - qdev->ddev.anon_inode->i_mapping,
197 - qdev->ddev.vma_offset_manager,
198 - false, false);
196 + r = ttm_device_init(&qdev->mman.bdev, &qxl_bo_driver, NULL,
197 + qdev->ddev.anon_inode->i_mapping,
198 + qdev->ddev.vma_offset_manager,
199 + false, false);
199 200 if (r) {
200 201 DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
201 202 return r;
···
226 227 {
227 228 ttm_range_man_fini(&qdev->mman.bdev, TTM_PL_VRAM);
228 229 ttm_range_man_fini(&qdev->mman.bdev, TTM_PL_PRIV);
229 - ttm_bo_device_release(&qdev->mman.bdev);
230 + ttm_device_fini(&qdev->mman.bdev);
230 231 DRM_INFO("qxl: ttm finalized\n");
231 232
232 233
+4 -6
drivers/gpu/drm/radeon/atombios_crtc.c
···
1157 1157 u32 tmp, viewport_w, viewport_h;
1158 1158 int r;
1159 1159 bool bypass_lut = false;
1160 - struct drm_format_name_buf format_name;
1161 1160
1162 1161 /* no fb bound */
1163 1162 if (!atomic && !crtc->primary->fb) {
···
1266 1267 #endif
1267 1268 break;
1268 1269 default:
1269 - DRM_ERROR("Unsupported screen format %s\n",
1270 - drm_get_format_name(target_fb->format->format, &format_name));
1270 + DRM_ERROR("Unsupported screen format %p4cc\n",
1271 + &target_fb->format->format);
1271 1272 return -EINVAL;
1272 1273 }
1273 1274
···
1477 1478 u32 viewport_w, viewport_h;
1478 1479 int r;
1479 1480 bool bypass_lut = false;
1480 - struct drm_format_name_buf format_name;
1481 1481
1482 1482 /* no fb bound */
1483 1483 if (!atomic && !crtc->primary->fb) {
···
1577 1579 #endif
1578 1580 break;
1579 1581 default:
1580 - DRM_ERROR("Unsupported screen format %s\n",
1581 - drm_get_format_name(target_fb->format->format, &format_name));
1582 + DRM_ERROR("Unsupported screen format %p4cc\n",
1583 + &target_fb->format->format);
1582 1584 return -EINVAL;
1583 1585 }
1584 1586
+3 -3
drivers/gpu/drm/radeon/radeon.h
···
451 451 * TTM.
452 452 */
453 453 struct radeon_mman {
454 - struct ttm_bo_device bdev;
454 + struct ttm_device bdev;
455 455 bool initialized;
456 456
457 457 #if defined(CONFIG_DEBUG_FS)
···
2824 2824 uint32_t flags);
2825 2825 extern bool radeon_ttm_tt_has_userptr(struct radeon_device *rdev, struct ttm_tt *ttm);
2826 2826 extern bool radeon_ttm_tt_is_readonly(struct radeon_device *rdev, struct ttm_tt *ttm);
2827 - bool radeon_ttm_tt_is_bound(struct ttm_bo_device *bdev, struct ttm_tt *ttm);
2827 + bool radeon_ttm_tt_is_bound(struct ttm_device *bdev, struct ttm_tt *ttm);
2828 2828 extern void radeon_vram_location(struct radeon_device *rdev, struct radeon_mc *mc, u64 base);
2829 2829 extern void radeon_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc);
2830 2830 extern int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon);
···
2834 2834 extern void radeon_program_register_sequence(struct radeon_device *rdev,
2835 2835 const u32 *registers,
2836 2836 const u32 array_size);
2837 - struct radeon_device *radeon_get_rdev(struct ttm_bo_device *bdev);
2837 + struct radeon_device *radeon_get_rdev(struct ttm_device *bdev);
2838 2838
2839 2839 /* KMS */
2840 2840
+3 -7
drivers/gpu/drm/radeon/radeon_object.c
···
159 159 struct radeon_bo *bo;
160 160 enum ttm_bo_type type;
161 161 unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
162 - size_t acc_size;
163 162 int r;
164 163
165 164 size = ALIGN(size, PAGE_SIZE);
···
171 172 type = ttm_bo_type_device;
172 173 }
173 174 *bo_ptr = NULL;
174 -
175 - acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size,
176 - sizeof(struct radeon_bo));
177 175
178 176 bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
179 177 if (bo == NULL)
···
226 230 /* Kernel allocation are uninterruptible */
227 231 down_read(&rdev->pm.mclk_lock);
228 232 r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
229 - &bo->placement, page_align, !kernel, acc_size,
230 - sg, resv, &radeon_ttm_bo_destroy);
233 + &bo->placement, page_align, !kernel, sg, resv,
234 + &radeon_ttm_bo_destroy);
231 235 up_read(&rdev->pm.mclk_lock);
232 236 if (unlikely(r != 0)) {
233 237 return r;
···
368 372
369 373 int radeon_bo_evict_vram(struct radeon_device *rdev)
370 374 {
371 - struct ttm_bo_device *bdev = &rdev->mman.bdev;
375 + struct ttm_device *bdev = &rdev->mman.bdev;
372 376 struct ttm_resource_manager *man;
373 377
374 378 /* late 2.6.33 fix IGP hibernate - we need pm ops to do this correct */
+19 -21
drivers/gpu/drm/radeon/radeon_ttm.c
···
55 55 static int radeon_ttm_debugfs_init(struct radeon_device *rdev);
56 56 static void radeon_ttm_debugfs_fini(struct radeon_device *rdev);
57 57
58 - static int radeon_ttm_tt_bind(struct ttm_bo_device *bdev,
59 - struct ttm_tt *ttm,
58 + static int radeon_ttm_tt_bind(struct ttm_device *bdev, struct ttm_tt *ttm,
60 59 struct ttm_resource *bo_mem);
61 - static void radeon_ttm_tt_unbind(struct ttm_bo_device *bdev,
62 - struct ttm_tt *ttm);
60 + static void radeon_ttm_tt_unbind(struct ttm_device *bdev, struct ttm_tt *ttm);
63 61
64 - struct radeon_device *radeon_get_rdev(struct ttm_bo_device *bdev)
62 + struct radeon_device *radeon_get_rdev(struct ttm_device *bdev)
65 63 {
66 64 struct radeon_mman *mman;
67 65 struct radeon_device *rdev;
···
278 280 return 0;
279 281 }
280 282
281 - static int radeon_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_resource *mem)
283 + static int radeon_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem)
282 284 {
283 285 struct radeon_device *rdev = radeon_get_rdev(bdev);
284 286 size_t bus_size = (size_t)mem->num_pages << PAGE_SHIFT;
···
345 347 };
346 348
347 349 /* prepare the sg table with the user pages */
348 - static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
350 + static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm)
349 351 {
350 352 struct radeon_device *rdev = radeon_get_rdev(bdev);
351 353 struct radeon_ttm_tt *gtt = (void *)ttm;
···
406 408 return r;
407 409 }
408 410
409 - static void radeon_ttm_tt_unpin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
411 + static void radeon_ttm_tt_unpin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm)
410 412 {
411 413 struct radeon_device *rdev = radeon_get_rdev(bdev);
412 414 struct radeon_ttm_tt *gtt = (void *)ttm;
···
442 444 return (gtt->bound);
443 445 }
444 446
445 - static int radeon_ttm_backend_bind(struct ttm_bo_device *bdev,
447 + static int radeon_ttm_backend_bind(struct ttm_device *bdev,
446 448 struct ttm_tt *ttm,
447 449 struct ttm_resource *bo_mem)
448 450 {
···
478 480 return 0;
479 481 }
480 482
481 - static void radeon_ttm_backend_unbind(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
483 + static void radeon_ttm_backend_unbind(struct ttm_device *bdev, struct ttm_tt *ttm)
482 484 {
483 485 struct radeon_ttm_tt *gtt = (void *)ttm;
484 486 struct radeon_device *rdev = radeon_get_rdev(bdev);
···
493 495 gtt->bound = false;
494 496 }
495 497
496 - static void radeon_ttm_backend_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
498 + static void radeon_ttm_backend_destroy(struct ttm_device *bdev, struct ttm_tt *ttm)
497 499 {
498 500 struct radeon_ttm_tt *gtt = (void *)ttm;
499 501
···
552 554 return container_of(ttm, struct radeon_ttm_tt, ttm);
553 555 }
554 556
555 - static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
557 + static int radeon_ttm_tt_populate(struct ttm_device *bdev,
556 558 struct ttm_tt *ttm,
557 559 struct ttm_operation_ctx *ctx)
558 560 {
···
578 580 return ttm_pool_alloc(&rdev->mman.bdev.pool, ttm, ctx);
579 581 }
580 582
581 - static void radeon_ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
583 + static void radeon_ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
582 584 {
583 585 struct radeon_device *rdev = radeon_get_rdev(bdev);
584 586 struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(rdev, ttm);
···
611 613 return 0;
612 614 }
613 615
614 - bool radeon_ttm_tt_is_bound(struct ttm_bo_device *bdev,
616 + bool radeon_ttm_tt_is_bound(struct ttm_device *bdev,
615 617 struct ttm_tt *ttm)
616 618 {
617 619 #if IS_ENABLED(CONFIG_AGP)
···
622 624 return radeon_ttm_backend_is_bound(ttm);
623 625 }
624 626
625 - static int radeon_ttm_tt_bind(struct ttm_bo_device *bdev,
627 + static int radeon_ttm_tt_bind(struct ttm_device *bdev,
626 628 struct ttm_tt *ttm,
627 629 struct ttm_resource *bo_mem)
628 630 {
···
640 642 return radeon_ttm_backend_bind(bdev, ttm, bo_mem);
641 643 }
642 644
643 - static void radeon_ttm_tt_unbind(struct ttm_bo_device *bdev,
645 + static void radeon_ttm_tt_unbind(struct ttm_device *bdev,
644 646 struct ttm_tt *ttm)
645 647 {
···
654 656 radeon_ttm_backend_unbind(bdev, ttm);
655 657 }
656 658
657 - static void radeon_ttm_tt_destroy(struct ttm_bo_device *bdev,
659 + static void radeon_ttm_tt_destroy(struct ttm_device *bdev,
658 660 struct ttm_tt *ttm)
659 661 {
660 662 #if IS_ENABLED(CONFIG_AGP)
···
698 700 radeon_bo_move_notify(bo, false, NULL);
699 701 }
700 702
701 - static struct ttm_bo_driver radeon_bo_driver = {
703 + static struct ttm_device_funcs radeon_bo_driver = {
702 704 .ttm_tt_create = &radeon_ttm_tt_create,
703 705 .ttm_tt_populate = &radeon_ttm_tt_populate,
704 706 .ttm_tt_unpopulate = &radeon_ttm_tt_unpopulate,
···
716 718 int r;
717 719
718 720 /* No others user of address space so set it to 0 */
719 - r = ttm_bo_device_init(&rdev->mman.bdev, &radeon_bo_driver, rdev->dev,
721 + r = ttm_device_init(&rdev->mman.bdev, &radeon_bo_driver, rdev->dev,
720 722 rdev->ddev->anon_inode->i_mapping,
721 723 rdev->ddev->vma_offset_manager,
722 724 rdev->need_swiotlb,
···
786 788 }
787 789 ttm_range_man_fini(&rdev->mman.bdev, TTM_PL_VRAM);
788 790 ttm_range_man_fini(&rdev->mman.bdev, TTM_PL_TT);
789 - ttm_bo_device_release(&rdev->mman.bdev);
791 + ttm_device_fini(&rdev->mman.bdev);
790 792 radeon_gart_fini(rdev);
791 793 rdev->mman.initialized = false;
792 794 DRM_INFO("radeon: ttm finalized\n");
···
835 837 return ret;
836 838 }
837 839
838 - static struct vm_operations_struct radeon_ttm_vm_ops = {
840 + static const struct vm_operations_struct radeon_ttm_vm_ops = {
839 841 .fault = radeon_ttm_fault,
840 842 .open = ttm_bo_vm_open,
841 843 .close = ttm_bo_vm_close,
+11 -6
drivers/gpu/drm/rcar-du/rcar_du_plane.c
··· 607 607 } 608 608 609 609 static int rcar_du_plane_atomic_check(struct drm_plane *plane, 610 - struct drm_plane_state *state) 610 + struct drm_atomic_state *state) 611 611 { 612 - struct rcar_du_plane_state *rstate = to_rcar_plane_state(state); 612 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 613 + plane); 614 + struct rcar_du_plane_state *rstate = to_rcar_plane_state(new_plane_state); 613 615 614 - return __rcar_du_plane_atomic_check(plane, state, &rstate->format); 616 + return __rcar_du_plane_atomic_check(plane, new_plane_state, 617 + &rstate->format); 615 618 } 616 619 617 620 static void rcar_du_plane_atomic_update(struct drm_plane *plane, 618 - struct drm_plane_state *old_state) 621 + struct drm_atomic_state *state) 619 622 { 623 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 624 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); 620 625 struct rcar_du_plane *rplane = to_rcar_plane(plane); 621 626 struct rcar_du_plane_state *old_rstate; 622 627 struct rcar_du_plane_state *new_rstate; 623 628 624 - if (!plane->state->visible) 629 + if (!new_state->visible) 625 630 return; 626 631 627 632 rcar_du_plane_setup(rplane); ··· 640 635 * bit. We thus need to restart the group if the source changes. 641 636 */ 642 637 old_rstate = to_rcar_plane_state(old_state); 643 - new_rstate = to_rcar_plane_state(plane->state); 638 + new_rstate = to_rcar_plane_state(new_state); 644 639 645 640 if ((old_rstate->source == RCAR_DU_PLANE_MEMORY) != 646 641 (new_rstate->source == RCAR_DU_PLANE_MEMORY))
+13 -7
drivers/gpu/drm/rcar-du/rcar_du_vsp.c
··· 7 7 * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com) 8 8 */ 9 9 10 + #include <drm/drm_atomic.h> 10 11 #include <drm/drm_atomic_helper.h> 11 12 #include <drm/drm_crtc.h> 12 13 #include <drm/drm_fb_cma_helper.h> 13 14 #include <drm/drm_fourcc.h> 15 + #include <drm/drm_gem_atomic_helper.h> 14 16 #include <drm/drm_gem_cma_helper.h> 15 - #include <drm/drm_gem_framebuffer_helper.h> 16 17 #include <drm/drm_managed.h> 17 18 #include <drm/drm_plane_helper.h> 18 19 #include <drm/drm_vblank.h> ··· 237 236 if (ret < 0) 238 237 return ret; 239 238 240 - return drm_gem_fb_prepare_fb(plane, state); 239 + return drm_gem_plane_helper_prepare_fb(plane, state); 241 240 } 242 241 243 242 void rcar_du_vsp_unmap_fb(struct rcar_du_vsp *vsp, struct drm_framebuffer *fb, ··· 266 265 } 267 266 268 267 static int rcar_du_vsp_plane_atomic_check(struct drm_plane *plane, 269 - struct drm_plane_state *state) 268 + struct drm_atomic_state *state) 270 269 { 271 - struct rcar_du_vsp_plane_state *rstate = to_rcar_vsp_plane_state(state); 270 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 271 + plane); 272 + struct rcar_du_vsp_plane_state *rstate = to_rcar_vsp_plane_state(new_plane_state); 272 273 273 - return __rcar_du_plane_atomic_check(plane, state, &rstate->format); 274 + return __rcar_du_plane_atomic_check(plane, new_plane_state, 275 + &rstate->format); 274 276 } 275 277 276 278 static void rcar_du_vsp_plane_atomic_update(struct drm_plane *plane, 277 - struct drm_plane_state *old_state) 279 + struct drm_atomic_state *state) 278 280 { 281 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 282 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); 279 283 struct rcar_du_vsp_plane *rplane = to_rcar_vsp_plane(plane); 280 284 struct rcar_du_crtc *crtc = to_rcar_crtc(old_state->crtc); 281 285 282 - if (plane->state->visible) 286 + if (new_state->visible) 283 287 
rcar_du_vsp_plane_setup(rplane); 284 288 else if (old_state->crtc) 285 289 vsp1_du_atomic_update(rplane->vsp->vsp, crtc->vsp_pipe,
+48 -33
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 23 23 #include <drm/drm_crtc.h> 24 24 #include <drm/drm_flip_work.h> 25 25 #include <drm/drm_fourcc.h> 26 + #include <drm/drm_gem_atomic_helper.h> 26 27 #include <drm/drm_gem_framebuffer_helper.h> 27 28 #include <drm/drm_plane_helper.h> 28 29 #include <drm/drm_probe_helper.h> ··· 779 778 } 780 779 781 780 static int vop_plane_atomic_check(struct drm_plane *plane, 782 - struct drm_plane_state *state) 781 + struct drm_atomic_state *state) 783 782 { 784 - struct drm_crtc *crtc = state->crtc; 783 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 784 + plane); 785 + struct drm_crtc *crtc = new_plane_state->crtc; 785 786 struct drm_crtc_state *crtc_state; 786 - struct drm_framebuffer *fb = state->fb; 787 + struct drm_framebuffer *fb = new_plane_state->fb; 787 788 struct vop_win *vop_win = to_vop_win(plane); 788 789 const struct vop_win_data *win = vop_win->data; 789 790 int ret; ··· 797 794 if (!crtc || WARN_ON(!fb)) 798 795 return 0; 799 796 800 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 797 + crtc_state = drm_atomic_get_existing_crtc_state(state, 798 + crtc); 801 799 if (WARN_ON(!crtc_state)) 802 800 return -EINVAL; 803 801 804 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 802 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 805 803 min_scale, max_scale, 806 804 true, true); 807 805 if (ret) 808 806 return ret; 809 807 810 - if (!state->visible) 808 + if (!new_plane_state->visible) 811 809 return 0; 812 810 813 811 ret = vop_convert_format(fb->format->format); ··· 819 815 * Src.x1 can be odd when do clip, but yuv plane start point 820 816 * need align with 2 pixel. 
821 817 */ 822 - if (fb->format->is_yuv && ((state->src.x1 >> 16) % 2)) { 818 + if (fb->format->is_yuv && ((new_plane_state->src.x1 >> 16) % 2)) { 823 819 DRM_ERROR("Invalid Source: Yuv format not support odd xpos\n"); 824 820 return -EINVAL; 825 821 } 826 822 827 - if (fb->format->is_yuv && state->rotation & DRM_MODE_REFLECT_Y) { 823 + if (fb->format->is_yuv && new_plane_state->rotation & DRM_MODE_REFLECT_Y) { 828 824 DRM_ERROR("Invalid Source: Yuv format does not support this rotation\n"); 829 825 return -EINVAL; 830 826 } ··· 841 837 if (ret < 0) 842 838 return ret; 843 839 844 - if (state->src.x1 || state->src.y1) { 845 - DRM_ERROR("AFBC does not support offset display, xpos=%d, ypos=%d, offset=%d\n", state->src.x1, state->src.y1, fb->offsets[0]); 840 + if (new_plane_state->src.x1 || new_plane_state->src.y1) { 841 + DRM_ERROR("AFBC does not support offset display, xpos=%d, ypos=%d, offset=%d\n", 842 + new_plane_state->src.x1, 843 + new_plane_state->src.y1, fb->offsets[0]); 846 844 return -EINVAL; 847 845 } 848 846 849 - if (state->rotation && state->rotation != DRM_MODE_ROTATE_0) { 847 + if (new_plane_state->rotation && new_plane_state->rotation != DRM_MODE_ROTATE_0) { 850 848 DRM_ERROR("No rotation support in AFBC, rotation=%d\n", 851 - state->rotation); 849 + new_plane_state->rotation); 852 850 return -EINVAL; 853 851 } 854 852 } ··· 859 853 } 860 854 861 855 static void vop_plane_atomic_disable(struct drm_plane *plane, 862 - struct drm_plane_state *old_state) 856 + struct drm_atomic_state *state) 863 857 { 858 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 859 + plane); 864 860 struct vop_win *vop_win = to_vop_win(plane); 865 861 struct vop *vop = to_vop(old_state->crtc); 866 862 ··· 877 869 } 878 870 879 871 static void vop_plane_atomic_update(struct drm_plane *plane, 880 - struct drm_plane_state *old_state) 872 + struct drm_atomic_state *state) 881 873 { 882 - struct drm_plane_state *state = plane->state; 883 - struct drm_crtc 
*crtc = state->crtc; 874 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 875 + plane); 876 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 877 + plane); 878 + struct drm_crtc *crtc = new_state->crtc; 884 879 struct vop_win *vop_win = to_vop_win(plane); 885 880 const struct vop_win_data *win = vop_win->data; 886 881 const struct vop_win_yuv2yuv_data *win_yuv2yuv = vop_win->yuv2yuv_data; 887 - struct vop *vop = to_vop(state->crtc); 888 - struct drm_framebuffer *fb = state->fb; 882 + struct vop *vop = to_vop(new_state->crtc); 883 + struct drm_framebuffer *fb = new_state->fb; 889 884 unsigned int actual_w, actual_h; 890 885 unsigned int dsp_stx, dsp_sty; 891 886 uint32_t act_info, dsp_info, dsp_st; 892 - struct drm_rect *src = &state->src; 893 - struct drm_rect *dest = &state->dst; 887 + struct drm_rect *src = &new_state->src; 888 + struct drm_rect *dest = &new_state->dst; 894 889 struct drm_gem_object *obj, *uv_obj; 895 890 struct rockchip_gem_object *rk_obj, *rk_uv_obj; 896 891 unsigned long offset; ··· 914 903 if (WARN_ON(!vop->is_enabled)) 915 904 return; 916 905 917 - if (!state->visible) { 918 - vop_plane_atomic_disable(plane, old_state); 906 + if (!new_state->visible) { 907 + vop_plane_atomic_disable(plane, state); 919 908 return; 920 909 } 921 910 ··· 941 930 * For y-mirroring we need to move address 942 931 * to the beginning of the last line. 943 932 */ 944 - if (state->rotation & DRM_MODE_REFLECT_Y) 933 + if (new_state->rotation & DRM_MODE_REFLECT_Y) 945 934 dma_addr += (actual_h - 1) * fb->pitches[0]; 946 935 947 936 format = vop_convert_format(fb->format->format); ··· 963 952 VOP_WIN_SET(vop, win, yrgb_mst, dma_addr); 964 953 VOP_WIN_YUV2YUV_SET(vop, win_yuv2yuv, y2r_en, is_yuv); 965 954 VOP_WIN_SET(vop, win, y_mir_en, 966 - (state->rotation & DRM_MODE_REFLECT_Y) ? 1 : 0); 955 + (new_state->rotation & DRM_MODE_REFLECT_Y) ? 
1 : 0); 967 956 VOP_WIN_SET(vop, win, x_mir_en, 968 - (state->rotation & DRM_MODE_REFLECT_X) ? 1 : 0); 957 + (new_state->rotation & DRM_MODE_REFLECT_X) ? 1 : 0); 969 958 970 959 if (is_yuv) { 971 960 int hsub = fb->format->hsub; ··· 1032 1021 } 1033 1022 1034 1023 static int vop_plane_atomic_async_check(struct drm_plane *plane, 1035 - struct drm_plane_state *state) 1024 + struct drm_atomic_state *state) 1036 1025 { 1026 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1027 + plane); 1037 1028 struct vop_win *vop_win = to_vop_win(plane); 1038 1029 const struct vop_win_data *win = vop_win->data; 1039 1030 int min_scale = win->phy->scl ? FRAC_16_16(1, 8) : ··· 1044 1031 DRM_PLANE_HELPER_NO_SCALING; 1045 1032 struct drm_crtc_state *crtc_state; 1046 1033 1047 - if (plane != state->crtc->cursor) 1034 + if (plane != new_plane_state->crtc->cursor) 1048 1035 return -EINVAL; 1049 1036 1050 1037 if (!plane->state) ··· 1053 1040 if (!plane->state->fb) 1054 1041 return -EINVAL; 1055 1042 1056 - if (state->state) 1057 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 1058 - state->crtc); 1043 + if (state) 1044 + crtc_state = drm_atomic_get_existing_crtc_state(state, 1045 + new_plane_state->crtc); 1059 1046 else /* Special case for asynchronous cursor updates. 
*/ 1060 1047 crtc_state = plane->crtc->state; 1061 1048 ··· 1065 1052 } 1066 1053 1067 1054 static void vop_plane_atomic_async_update(struct drm_plane *plane, 1068 - struct drm_plane_state *new_state) 1055 + struct drm_atomic_state *state) 1069 1056 { 1057 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 1058 + plane); 1070 1059 struct vop *vop = to_vop(plane->state->crtc); 1071 1060 struct drm_framebuffer *old_fb = plane->state->fb; 1072 1061 ··· 1083 1068 swap(plane->state->fb, new_state->fb); 1084 1069 1085 1070 if (vop->is_enabled) { 1086 - vop_plane_atomic_update(plane, plane->state); 1071 + vop_plane_atomic_update(plane, state); 1087 1072 spin_lock(&vop->reg_lock); 1088 1073 vop_cfg_done(vop); 1089 1074 spin_unlock(&vop->reg_lock); ··· 1111 1096 .atomic_disable = vop_plane_atomic_disable, 1112 1097 .atomic_async_check = vop_plane_atomic_async_check, 1113 1098 .atomic_async_update = vop_plane_atomic_async_update, 1114 - .prepare_fb = drm_gem_fb_prepare_fb, 1099 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 1115 1100 }; 1116 1101 1117 1102 static const struct drm_plane_funcs vop_plane_funcs = {
+1 -1
drivers/gpu/drm/rockchip/rockchip_lvds.c
··· 725 725 726 726 static int rockchip_lvds_remove(struct platform_device *pdev) 727 727 { 728 - struct rockchip_lvds *lvds = dev_get_drvdata(&pdev->dev); 728 + struct rockchip_lvds *lvds = platform_get_drvdata(pdev); 729 729 730 730 component_del(&pdev->dev, &rockchip_lvds_component_ops); 731 731 clk_unprepare(lvds->pclk);
+1 -1
drivers/gpu/drm/scheduler/sched_entity.c
··· 489 489 bool first; 490 490 491 491 trace_drm_sched_job(sched_job, entity); 492 - atomic_inc(&entity->rq->sched->score); 492 + atomic_inc(entity->rq->sched->score); 493 493 WRITE_ONCE(entity->last_user, current->group_leader); 494 494 first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node); 495 495
+10 -12
drivers/gpu/drm/scheduler/sched_main.c
··· 91 91 if (!list_empty(&entity->list)) 92 92 return; 93 93 spin_lock(&rq->lock); 94 - atomic_inc(&rq->sched->score); 94 + atomic_inc(rq->sched->score); 95 95 list_add_tail(&entity->list, &rq->entities); 96 96 spin_unlock(&rq->lock); 97 97 } ··· 110 110 if (list_empty(&entity->list)) 111 111 return; 112 112 spin_lock(&rq->lock); 113 - atomic_dec(&rq->sched->score); 113 + atomic_dec(rq->sched->score); 114 114 list_del_init(&entity->list); 115 115 if (rq->current_entity == entity) 116 116 rq->current_entity = NULL; ··· 173 173 struct drm_gpu_scheduler *sched = s_fence->sched; 174 174 175 175 atomic_dec(&sched->hw_rq_count); 176 - atomic_dec(&sched->score); 176 + atomic_dec(sched->score); 177 177 178 178 trace_drm_sched_process_job(s_fence); 179 179 ··· 527 527 EXPORT_SYMBOL(drm_sched_start); 528 528 529 529 /** 530 - * drm_sched_resubmit_jobs - helper to relunch job from pending ring list 530 + * drm_sched_resubmit_jobs - helper to relaunch jobs from the pending list 531 531 * 532 532 * @sched: scheduler instance 533 533 * ··· 561 561 } else { 562 562 s_job->s_fence->parent = fence; 563 563 } 564 - 565 - 566 564 } 567 565 } 568 566 EXPORT_SYMBOL(drm_sched_resubmit_jobs); ··· 732 734 continue; 733 735 } 734 736 735 - num_score = atomic_read(&sched->score); 737 + num_score = atomic_read(sched->score); 736 738 if (num_score < min_score) { 737 739 min_score = num_score; 738 740 picked_sched = sched; ··· 842 844 * @hw_submission: number of hw submissions that can be in flight 843 845 * @hang_limit: number of times to allow a job to hang before dropping it 844 846 * @timeout: timeout value in jiffies for the scheduler 847 + * @score: optional score atomic shared with other schedulers 845 848 * @name: name used for debugging 846 849 * 847 850 * Return 0 on success, otherwise error code. 
848 851 */ 849 852 int drm_sched_init(struct drm_gpu_scheduler *sched, 850 853 const struct drm_sched_backend_ops *ops, 851 - unsigned hw_submission, 852 - unsigned hang_limit, 853 - long timeout, 854 - const char *name) 854 + unsigned hw_submission, unsigned hang_limit, long timeout, 855 + atomic_t *score, const char *name) 855 856 { 856 857 int i, ret; 857 858 sched->ops = ops; ··· 858 861 sched->name = name; 859 862 sched->timeout = timeout; 860 863 sched->hang_limit = hang_limit; 864 + sched->score = score ? score : &sched->_score; 861 865 for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++) 862 866 drm_sched_rq_init(sched, &sched->sched_rq[i]); 863 867 ··· 868 870 spin_lock_init(&sched->job_list_lock); 869 871 atomic_set(&sched->hw_rq_count, 0); 870 872 INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout); 871 - atomic_set(&sched->score, 0); 873 + atomic_set(&sched->_score, 0); 872 874 atomic64_set(&sched->job_id_count, 0); 873 875 874 876 /* Each scheduler will run on a seperate kernel thread */
+24 -17
drivers/gpu/drm/sti/sti_cursor.c
··· 181 181 } 182 182 183 183 static int sti_cursor_atomic_check(struct drm_plane *drm_plane, 184 - struct drm_plane_state *state) 184 + struct drm_atomic_state *state) 185 185 { 186 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 187 + drm_plane); 186 188 struct sti_plane *plane = to_sti_plane(drm_plane); 187 189 struct sti_cursor *cursor = to_sti_cursor(plane); 188 - struct drm_crtc *crtc = state->crtc; 189 - struct drm_framebuffer *fb = state->fb; 190 + struct drm_crtc *crtc = new_plane_state->crtc; 191 + struct drm_framebuffer *fb = new_plane_state->fb; 190 192 struct drm_crtc_state *crtc_state; 191 193 struct drm_display_mode *mode; 192 194 int dst_x, dst_y, dst_w, dst_h; ··· 198 196 if (!crtc || !fb) 199 197 return 0; 200 198 201 - crtc_state = drm_atomic_get_crtc_state(state->state, crtc); 199 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 202 200 mode = &crtc_state->mode; 203 - dst_x = state->crtc_x; 204 - dst_y = state->crtc_y; 205 - dst_w = clamp_val(state->crtc_w, 0, mode->crtc_hdisplay - dst_x); 206 - dst_h = clamp_val(state->crtc_h, 0, mode->crtc_vdisplay - dst_y); 201 + dst_x = new_plane_state->crtc_x; 202 + dst_y = new_plane_state->crtc_y; 203 + dst_w = clamp_val(new_plane_state->crtc_w, 0, 204 + mode->crtc_hdisplay - dst_x); 205 + dst_h = clamp_val(new_plane_state->crtc_h, 0, 206 + mode->crtc_vdisplay - dst_y); 207 207 /* src_x are in 16.16 format */ 208 - src_w = state->src_w >> 16; 209 - src_h = state->src_h >> 16; 208 + src_w = new_plane_state->src_w >> 16; 209 + src_h = new_plane_state->src_h >> 16; 210 210 211 211 if (src_w < STI_CURS_MIN_SIZE || 212 212 src_h < STI_CURS_MIN_SIZE || ··· 256 252 } 257 253 258 254 static void sti_cursor_atomic_update(struct drm_plane *drm_plane, 259 - struct drm_plane_state *oldstate) 255 + struct drm_atomic_state *state) 260 256 { 261 - struct drm_plane_state *state = drm_plane->state; 257 + struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state, 258 + 
drm_plane); 262 259 struct sti_plane *plane = to_sti_plane(drm_plane); 263 260 struct sti_cursor *cursor = to_sti_cursor(plane); 264 - struct drm_crtc *crtc = state->crtc; 265 - struct drm_framebuffer *fb = state->fb; 261 + struct drm_crtc *crtc = newstate->crtc; 262 + struct drm_framebuffer *fb = newstate->fb; 266 263 struct drm_display_mode *mode; 267 264 int dst_x, dst_y; 268 265 struct drm_gem_cma_object *cma_obj; ··· 274 269 return; 275 270 276 271 mode = &crtc->mode; 277 - dst_x = state->crtc_x; 278 - dst_y = state->crtc_y; 272 + dst_x = newstate->crtc_x; 273 + dst_y = newstate->crtc_y; 279 274 280 275 cma_obj = drm_fb_cma_get_gem_obj(fb, 0); 281 276 ··· 311 306 } 312 307 313 308 static void sti_cursor_atomic_disable(struct drm_plane *drm_plane, 314 - struct drm_plane_state *oldstate) 309 + struct drm_atomic_state *state) 315 310 { 311 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 312 + drm_plane); 316 313 struct sti_plane *plane = to_sti_plane(drm_plane); 317 314 318 315 if (!oldstate->crtc) {
+43 -34
drivers/gpu/drm/sti/sti_gdp.c
··· 615 615 } 616 616 617 617 static int sti_gdp_atomic_check(struct drm_plane *drm_plane, 618 - struct drm_plane_state *state) 618 + struct drm_atomic_state *state) 619 619 { 620 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 621 + drm_plane); 620 622 struct sti_plane *plane = to_sti_plane(drm_plane); 621 623 struct sti_gdp *gdp = to_sti_gdp(plane); 622 - struct drm_crtc *crtc = state->crtc; 623 - struct drm_framebuffer *fb = state->fb; 624 + struct drm_crtc *crtc = new_plane_state->crtc; 625 + struct drm_framebuffer *fb = new_plane_state->fb; 624 626 struct drm_crtc_state *crtc_state; 625 627 struct sti_mixer *mixer; 626 628 struct drm_display_mode *mode; ··· 635 633 return 0; 636 634 637 635 mixer = to_sti_mixer(crtc); 638 - crtc_state = drm_atomic_get_crtc_state(state->state, crtc); 636 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 639 637 mode = &crtc_state->mode; 640 - dst_x = state->crtc_x; 641 - dst_y = state->crtc_y; 642 - dst_w = clamp_val(state->crtc_w, 0, mode->hdisplay - dst_x); 643 - dst_h = clamp_val(state->crtc_h, 0, mode->vdisplay - dst_y); 638 + dst_x = new_plane_state->crtc_x; 639 + dst_y = new_plane_state->crtc_y; 640 + dst_w = clamp_val(new_plane_state->crtc_w, 0, mode->hdisplay - dst_x); 641 + dst_h = clamp_val(new_plane_state->crtc_h, 0, mode->vdisplay - dst_y); 644 642 /* src_x are in 16.16 format */ 645 - src_x = state->src_x >> 16; 646 - src_y = state->src_y >> 16; 647 - src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX_WIDTH); 648 - src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX_HEIGHT); 643 + src_x = new_plane_state->src_x >> 16; 644 + src_y = new_plane_state->src_y >> 16; 645 + src_w = clamp_val(new_plane_state->src_w >> 16, 0, 646 + GAM_GDP_SIZE_MAX_WIDTH); 647 + src_h = clamp_val(new_plane_state->src_h >> 16, 0, 648 + GAM_GDP_SIZE_MAX_HEIGHT); 649 649 650 650 format = sti_gdp_fourcc2format(fb->format->format); 651 651 if (format == -1) { ··· 699 695 } 700 696 701 697 
static void sti_gdp_atomic_update(struct drm_plane *drm_plane, 702 - struct drm_plane_state *oldstate) 698 + struct drm_atomic_state *state) 703 699 { 704 - struct drm_plane_state *state = drm_plane->state; 700 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 701 + drm_plane); 702 + struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state, 703 + drm_plane); 705 704 struct sti_plane *plane = to_sti_plane(drm_plane); 706 705 struct sti_gdp *gdp = to_sti_gdp(plane); 707 - struct drm_crtc *crtc = state->crtc; 708 - struct drm_framebuffer *fb = state->fb; 706 + struct drm_crtc *crtc = newstate->crtc; 707 + struct drm_framebuffer *fb = newstate->fb; 709 708 struct drm_display_mode *mode; 710 709 int dst_x, dst_y, dst_w, dst_h; 711 710 int src_x, src_y, src_w, src_h; ··· 725 718 if (!crtc || !fb) 726 719 return; 727 720 728 - if ((oldstate->fb == state->fb) && 729 - (oldstate->crtc_x == state->crtc_x) && 730 - (oldstate->crtc_y == state->crtc_y) && 731 - (oldstate->crtc_w == state->crtc_w) && 732 - (oldstate->crtc_h == state->crtc_h) && 733 - (oldstate->src_x == state->src_x) && 734 - (oldstate->src_y == state->src_y) && 735 - (oldstate->src_w == state->src_w) && 736 - (oldstate->src_h == state->src_h)) { 721 + if ((oldstate->fb == newstate->fb) && 722 + (oldstate->crtc_x == newstate->crtc_x) && 723 + (oldstate->crtc_y == newstate->crtc_y) && 724 + (oldstate->crtc_w == newstate->crtc_w) && 725 + (oldstate->crtc_h == newstate->crtc_h) && 726 + (oldstate->src_x == newstate->src_x) && 727 + (oldstate->src_y == newstate->src_y) && 728 + (oldstate->src_w == newstate->src_w) && 729 + (oldstate->src_h == newstate->src_h)) { 737 730 /* No change since last update, do not post cmd */ 738 731 DRM_DEBUG_DRIVER("No change, not posting cmd\n"); 739 732 plane->status = STI_PLANE_UPDATED; ··· 751 744 } 752 745 753 746 mode = &crtc->mode; 754 - dst_x = state->crtc_x; 755 - dst_y = state->crtc_y; 756 - dst_w = clamp_val(state->crtc_w, 0, 
mode->hdisplay - dst_x); 757 - dst_h = clamp_val(state->crtc_h, 0, mode->vdisplay - dst_y); 747 + dst_x = newstate->crtc_x; 748 + dst_y = newstate->crtc_y; 749 + dst_w = clamp_val(newstate->crtc_w, 0, mode->hdisplay - dst_x); 750 + dst_h = clamp_val(newstate->crtc_h, 0, mode->vdisplay - dst_y); 758 751 /* src_x are in 16.16 format */ 759 - src_x = state->src_x >> 16; 760 - src_y = state->src_y >> 16; 761 - src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX_WIDTH); 762 - src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX_HEIGHT); 752 + src_x = newstate->src_x >> 16; 753 + src_y = newstate->src_y >> 16; 754 + src_w = clamp_val(newstate->src_w >> 16, 0, GAM_GDP_SIZE_MAX_WIDTH); 755 + src_h = clamp_val(newstate->src_h >> 16, 0, GAM_GDP_SIZE_MAX_HEIGHT); 763 756 764 757 list = sti_gdp_get_free_nodes(gdp); 765 758 top_field = list->top_field; ··· 867 860 } 868 861 869 862 static void sti_gdp_atomic_disable(struct drm_plane *drm_plane, 870 - struct drm_plane_state *oldstate) 863 + struct drm_atomic_state *state) 871 864 { 865 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 866 + drm_plane); 872 867 struct sti_plane *plane = to_sti_plane(drm_plane); 873 868 874 869 if (!oldstate->crtc) {
+41 -34
drivers/gpu/drm/sti/sti_hqvdp.c
··· 1017 1017 } 1018 1018 1019 1019 static int sti_hqvdp_atomic_check(struct drm_plane *drm_plane, 1020 - struct drm_plane_state *state) 1020 + struct drm_atomic_state *state) 1021 1021 { 1022 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1023 + drm_plane); 1022 1024 struct sti_plane *plane = to_sti_plane(drm_plane); 1023 1025 struct sti_hqvdp *hqvdp = to_sti_hqvdp(plane); 1024 - struct drm_crtc *crtc = state->crtc; 1025 - struct drm_framebuffer *fb = state->fb; 1026 + struct drm_crtc *crtc = new_plane_state->crtc; 1027 + struct drm_framebuffer *fb = new_plane_state->fb; 1026 1028 struct drm_crtc_state *crtc_state; 1027 1029 struct drm_display_mode *mode; 1028 1030 int dst_x, dst_y, dst_w, dst_h; ··· 1034 1032 if (!crtc || !fb) 1035 1033 return 0; 1036 1034 1037 - crtc_state = drm_atomic_get_crtc_state(state->state, crtc); 1035 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 1038 1036 mode = &crtc_state->mode; 1039 - dst_x = state->crtc_x; 1040 - dst_y = state->crtc_y; 1041 - dst_w = clamp_val(state->crtc_w, 0, mode->hdisplay - dst_x); 1042 - dst_h = clamp_val(state->crtc_h, 0, mode->vdisplay - dst_y); 1037 + dst_x = new_plane_state->crtc_x; 1038 + dst_y = new_plane_state->crtc_y; 1039 + dst_w = clamp_val(new_plane_state->crtc_w, 0, mode->hdisplay - dst_x); 1040 + dst_h = clamp_val(new_plane_state->crtc_h, 0, mode->vdisplay - dst_y); 1043 1041 /* src_x are in 16.16 format */ 1044 - src_x = state->src_x >> 16; 1045 - src_y = state->src_y >> 16; 1046 - src_w = state->src_w >> 16; 1047 - src_h = state->src_h >> 16; 1042 + src_x = new_plane_state->src_x >> 16; 1043 + src_y = new_plane_state->src_y >> 16; 1044 + src_w = new_plane_state->src_w >> 16; 1045 + src_h = new_plane_state->src_h >> 16; 1048 1046 1049 1047 if (mode->clock && !sti_hqvdp_check_hw_scaling(hqvdp, mode, 1050 1048 src_w, src_h, ··· 1109 1107 } 1110 1108 1111 1109 static void sti_hqvdp_atomic_update(struct drm_plane *drm_plane, 1112 - struct drm_plane_state 
*oldstate) 1110 + struct drm_atomic_state *state) 1113 1111 { 1114 - struct drm_plane_state *state = drm_plane->state; 1112 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 1113 + drm_plane); 1114 + struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state, 1115 + drm_plane); 1115 1116 struct sti_plane *plane = to_sti_plane(drm_plane); 1116 1117 struct sti_hqvdp *hqvdp = to_sti_hqvdp(plane); 1117 - struct drm_crtc *crtc = state->crtc; 1118 - struct drm_framebuffer *fb = state->fb; 1118 + struct drm_crtc *crtc = newstate->crtc; 1119 + struct drm_framebuffer *fb = newstate->fb; 1119 1120 struct drm_display_mode *mode; 1120 1121 int dst_x, dst_y, dst_w, dst_h; 1121 1122 int src_x, src_y, src_w, src_h; ··· 1130 1125 if (!crtc || !fb) 1131 1126 return; 1132 1127 1133 - if ((oldstate->fb == state->fb) && 1134 - (oldstate->crtc_x == state->crtc_x) && 1135 - (oldstate->crtc_y == state->crtc_y) && 1136 - (oldstate->crtc_w == state->crtc_w) && 1137 - (oldstate->crtc_h == state->crtc_h) && 1138 - (oldstate->src_x == state->src_x) && 1139 - (oldstate->src_y == state->src_y) && 1140 - (oldstate->src_w == state->src_w) && 1141 - (oldstate->src_h == state->src_h)) { 1128 + if ((oldstate->fb == newstate->fb) && 1129 + (oldstate->crtc_x == newstate->crtc_x) && 1130 + (oldstate->crtc_y == newstate->crtc_y) && 1131 + (oldstate->crtc_w == newstate->crtc_w) && 1132 + (oldstate->crtc_h == newstate->crtc_h) && 1133 + (oldstate->src_x == newstate->src_x) && 1134 + (oldstate->src_y == newstate->src_y) && 1135 + (oldstate->src_w == newstate->src_w) && 1136 + (oldstate->src_h == newstate->src_h)) { 1142 1137 /* No change since last update, do not post cmd */ 1143 1138 DRM_DEBUG_DRIVER("No change, not posting cmd\n"); 1144 1139 plane->status = STI_PLANE_UPDATED; ··· 1146 1141 } 1147 1142 1148 1143 mode = &crtc->mode; 1149 - dst_x = state->crtc_x; 1150 - dst_y = state->crtc_y; 1151 - dst_w = clamp_val(state->crtc_w, 0, mode->hdisplay - dst_x); 1152 - 
dst_h = clamp_val(state->crtc_h, 0, mode->vdisplay - dst_y); 1144 + dst_x = newstate->crtc_x; 1145 + dst_y = newstate->crtc_y; 1146 + dst_w = clamp_val(newstate->crtc_w, 0, mode->hdisplay - dst_x); 1147 + dst_h = clamp_val(newstate->crtc_h, 0, mode->vdisplay - dst_y); 1153 1148 /* src_x are in 16.16 format */ 1154 - src_x = state->src_x >> 16; 1155 - src_y = state->src_y >> 16; 1156 - src_w = state->src_w >> 16; 1157 - src_h = state->src_h >> 16; 1149 + src_x = newstate->src_x >> 16; 1150 + src_y = newstate->src_y >> 16; 1151 + src_w = newstate->src_w >> 16; 1152 + src_h = newstate->src_h >> 16; 1158 1153 1159 1154 cmd_offset = sti_hqvdp_get_free_cmd(hqvdp); 1160 1155 if (cmd_offset == -1) { ··· 1243 1238 } 1244 1239 1245 1240 static void sti_hqvdp_atomic_disable(struct drm_plane *drm_plane, 1246 - struct drm_plane_state *oldstate) 1241 + struct drm_atomic_state *state) 1247 1242 { 1243 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 1244 + drm_plane); 1248 1245 struct sti_plane *plane = to_sti_plane(drm_plane); 1249 1246 1250 1247 if (!oldstate->crtc) {
+59 -25
drivers/gpu/drm/stm/ltdc.c
··· 26 26 #include <drm/drm_device.h> 27 27 #include <drm/drm_fb_cma_helper.h> 28 28 #include <drm/drm_fourcc.h> 29 + #include <drm/drm_gem_atomic_helper.h> 29 30 #include <drm/drm_gem_cma_helper.h> 30 - #include <drm/drm_gem_framebuffer_helper.h> 31 31 #include <drm/drm_of.h> 32 32 #include <drm/drm_plane_helper.h> 33 33 #include <drm/drm_probe_helper.h> ··· 525 525 { 526 526 struct ltdc_device *ldev = crtc_to_ltdc(crtc); 527 527 struct drm_device *ddev = crtc->dev; 528 + struct drm_connector_list_iter iter; 529 + struct drm_connector *connector = NULL; 530 + struct drm_encoder *encoder = NULL; 531 + struct drm_bridge *bridge = NULL; 528 532 struct drm_display_mode *mode = &crtc->state->adjusted_mode; 529 533 struct videomode vm; 530 534 u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h; 531 535 u32 total_width, total_height; 536 + u32 bus_flags = 0; 532 537 u32 val; 533 538 int ret; 539 + 540 + /* get encoder from crtc */ 541 + drm_for_each_encoder(encoder, ddev) 542 + if (encoder->crtc == crtc) 543 + break; 544 + 545 + if (encoder) { 546 + /* get bridge from encoder */ 547 + list_for_each_entry(bridge, &encoder->bridge_chain, chain_node) 548 + if (bridge->encoder == encoder) 549 + break; 550 + 551 + /* Get the connector from encoder */ 552 + drm_connector_list_iter_begin(ddev, &iter); 553 + drm_for_each_connector_iter(connector, &iter) 554 + if (connector->encoder == encoder) 555 + break; 556 + drm_connector_list_iter_end(&iter); 557 + } 558 + 559 + if (bridge && bridge->timings) 560 + bus_flags = bridge->timings->input_bus_flags; 561 + else if (connector) 562 + bus_flags = connector->display_info.bus_flags; 534 563 535 564 if (!pm_runtime_active(ddev->dev)) { 536 565 ret = pm_runtime_get_sync(ddev->dev); ··· 596 567 if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH) 597 568 val |= GCR_VSPOL; 598 569 599 - if (vm.flags & DISPLAY_FLAGS_DE_LOW) 570 + if (bus_flags & DRM_BUS_FLAG_DE_LOW) 600 571 val |= GCR_DEPOL; 601 572
602 - if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE) 573 + if (bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE) 603 574 val |= GCR_PCPOL; 604 575 605 576 reg_update_bits(ldev->regs, LTDC_GCR, ··· 749 720 */ 750 721 751 722 static int ltdc_plane_atomic_check(struct drm_plane *plane, 752 - struct drm_plane_state *state) 723 + struct drm_atomic_state *state) 753 724 { 754 - struct drm_framebuffer *fb = state->fb; 725 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 726 + plane); 727 + struct drm_framebuffer *fb = new_plane_state->fb; 755 728 u32 src_w, src_h; 756 729 757 730 DRM_DEBUG_DRIVER("\n"); ··· 762 731 return 0; 763 732 764 733 /* convert src_ from 16:16 format */ 765 - src_w = state->src_w >> 16; 766 - src_h = state->src_h >> 16; 734 + src_w = new_plane_state->src_w >> 16; 735 + src_h = new_plane_state->src_h >> 16; 767 736 768 737 /* Reject scaling */ 769 - if (src_w != state->crtc_w || src_h != state->crtc_h) { 738 + if (src_w != new_plane_state->crtc_w || src_h != new_plane_state->crtc_h) { 770 739 DRM_ERROR("Scaling is not supported"); 771 740 return -EINVAL; 772 741 } ··· 775 744 } 776 745 777 746 static void ltdc_plane_atomic_update(struct drm_plane *plane, 778 - struct drm_plane_state *oldstate) 747 + struct drm_atomic_state *state) 779 748 { 780 749 struct ltdc_device *ldev = plane_to_ltdc(plane); 781 - struct drm_plane_state *state = plane->state; 782 - struct drm_framebuffer *fb = state->fb; 750 + struct drm_plane_state *newstate = drm_atomic_get_new_plane_state(state, 751 + plane); 752 + struct drm_framebuffer *fb = newstate->fb; 783 753 u32 lofs = plane->index * LAY_OFS; 784 - u32 x0 = state->crtc_x; 785 - u32 x1 = state->crtc_x + state->crtc_w - 1; 786 - u32 y0 = state->crtc_y; 787 - u32 y1 = state->crtc_y + state->crtc_h - 1; 754 + u32 x0 = newstate->crtc_x; 755 + u32 x1 = newstate->crtc_x + newstate->crtc_w - 1; 756 + u32 y0 = newstate->crtc_y; 757 + u32 y1 = newstate->crtc_y + newstate->crtc_h - 1; 788 758 u32 src_x, src_y, src_w, src_h; 789 759 u32 val, pitch_in_bytes, line_length, paddr, ahbp, avbp, bpcr; 790 760 enum ltdc_pix_fmt pf; 791 761
792 - if (!state->crtc || !fb) { 762 + if (!newstate->crtc || !fb) { 793 763 DRM_DEBUG_DRIVER("fb or crtc NULL"); 794 764 return; 795 765 } 796 766 797 767 /* convert src_ from 16:16 format */ 798 - src_x = state->src_x >> 16; 799 - src_y = state->src_y >> 16; 800 - src_w = state->src_w >> 16; 801 - src_h = state->src_h >> 16; 768 + src_x = newstate->src_x >> 16; 769 + src_y = newstate->src_y >> 16; 770 + src_w = newstate->src_w >> 16; 771 + src_h = newstate->src_h >> 16; 802 772 803 773 DRM_DEBUG_DRIVER("plane:%d fb:%d (%dx%d)@(%d,%d) -> (%dx%d)@(%d,%d)\n", 804 774 plane->base.id, fb->base.id, 805 775 src_w, src_h, src_x, src_y, 806 - state->crtc_w, state->crtc_h, 807 - state->crtc_x, state->crtc_y); 776 + newstate->crtc_w, newstate->crtc_h, 777 + newstate->crtc_x, newstate->crtc_y); 808 778 809 779 bpcr = reg_read(ldev->regs, LTDC_BPCR); 810 780 ahbp = (bpcr & BPCR_AHBP) >> 16; ··· 864 832 reg_update_bits(ldev->regs, LTDC_L1CFBLNR + lofs, LXCFBLNR_CFBLN, val); 865 833 866 834 /* Sets the FB address */ 867 - paddr = (u32)drm_fb_cma_get_gem_addr(fb, state, 0); 835 + paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 0); 868 836 869 837 DRM_DEBUG_DRIVER("fb: phys 0x%08x", paddr); 870 838 reg_write(ldev->regs, LTDC_L1CFBAR + lofs, paddr); ··· 890 858 } 891 859 892 860 static void ltdc_plane_atomic_disable(struct drm_plane *plane, 893 - struct drm_plane_state *oldstate) 861 + struct drm_atomic_state *state) 894 862 { 863 + struct drm_plane_state *oldstate = drm_atomic_get_old_plane_state(state, 864 + plane); 895 865 struct ltdc_device *ldev = plane_to_ltdc(plane); 896 866 u32 lofs = plane->index * LAY_OFS; 897 867 ··· 945 911 }; 946 912 947 913 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = { 948 - .prepare_fb = drm_gem_fb_prepare_fb, 914 + .prepare_fb = drm_gem_plane_helper_prepare_fb,
949 915 .atomic_check = ltdc_plane_atomic_check, 950 916 .atomic_update = ltdc_plane_atomic_update, 951 917 .atomic_disable = ltdc_plane_atomic_disable,
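The new ltdc_plane_atomic_check() above fetches its plane state from the passed drm_atomic_state, converts the 16.16 fixed-point source rectangle, and rejects any scaling. That size comparison can be sketched stand-alone (struct and function names here are illustrative, not the driver's; -22 stands in for -EINVAL):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the LTDC "reject scaling" check: src_w/src_h are
 * 16.16 fixed-point source sizes, crtc_w/crtc_h whole destination pixels. */
struct fake_plane_state {
	uint32_t src_w, src_h;   /* 16.16 fixed point */
	uint32_t crtc_w, crtc_h; /* pixels */
};

static int reject_scaling(const struct fake_plane_state *s)
{
	/* convert src_ from 16:16 format, as the driver does */
	uint32_t src_w = s->src_w >> 16;
	uint32_t src_h = s->src_h >> 16;

	/* source must match destination exactly, i.e. no scaling */
	if (src_w != s->crtc_w || src_h != s->crtc_h)
		return -22; /* -EINVAL */
	return 0;
}
```

A 640x480 source on a 640x480 destination passes; anything else is refused, which is exactly why the driver logs "Scaling is not supported".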
+2 -4
drivers/gpu/drm/sun4i/sun4i_backend.c
··· 510 510 struct sun4i_layer_state *layer_state = 511 511 state_to_sun4i_layer_state(plane_state); 512 512 struct drm_framebuffer *fb = plane_state->fb; 513 - struct drm_format_name_buf format_name; 514 513 515 514 if (!sun4i_backend_plane_is_supported(plane_state, 516 515 &layer_state->uses_frontend)) ··· 526 527 } 527 528 } 528 529 529 - DRM_DEBUG_DRIVER("Plane FB format is %s\n", 530 - drm_get_format_name(fb->format->format, 531 - &format_name)); 530 + DRM_DEBUG_DRIVER("Plane FB format is %p4cc\n", 531 + &fb->format->format); 532 532 if (fb->format->has_alpha || (plane_state->alpha != DRM_BLEND_ALPHA_OPAQUE)) 533 533 num_alpha_planes++; 534 534
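The sun4i hunk above is one of the conversions from drm_get_format_name() to the new %p4cc printk specifier documented in printk-formats.rst. As a rough user-space illustration of what %p4cc emits (the helper name and buffer handling are mine, not the kernel's; the real implementation also replaces non-printable bytes with '.'):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for %p4cc output: the four FourCC characters, the
 * endianness (bit 31 is DRM's big-endian flag), and the raw value in hex. */
static const char *fourcc_str(uint32_t fourcc)
{
	static char buf[64];
	uint32_t val = fourcc & 0x7fffffffu; /* mask DRM_FORMAT_BIG_ENDIAN */

	snprintf(buf, sizeof(buf), "%c%c%c%c %s-endian (0x%08x)",
		 val & 0xff, (val >> 8) & 0xff,
		 (val >> 16) & 0xff, (val >> 24) & 0xff,
		 (fourcc & 0x80000000u) ? "big" : "little", fourcc);
	return buf;
}
```

This reproduces the examples from the printk-formats.rst hunk above, e.g. 0x32314742 renders as "BG12 little-endian (0x32314742)".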
+10 -5
drivers/gpu/drm/sun4i/sun4i_layer.c
··· 6 6 * Maxime Ripard <maxime.ripard@free-electrons.com> 7 7 */ 8 8 9 + #include <drm/drm_atomic.h> 9 10 #include <drm/drm_atomic_helper.h> 10 - #include <drm/drm_gem_framebuffer_helper.h> 11 + #include <drm/drm_gem_atomic_helper.h> 11 12 #include <drm/drm_plane_helper.h> 12 13 13 14 #include "sun4i_backend.h" ··· 64 63 } 65 64 66 65 static void sun4i_backend_layer_atomic_disable(struct drm_plane *plane, 67 - struct drm_plane_state *old_state) 66 + struct drm_atomic_state *state) 68 67 { 68 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 69 + plane); 69 70 struct sun4i_layer_state *layer_state = state_to_sun4i_layer_state(old_state); 70 71 struct sun4i_layer *layer = plane_to_sun4i_layer(plane); 71 72 struct sun4i_backend *backend = layer->backend; ··· 84 81 } 85 82 86 83 static void sun4i_backend_layer_atomic_update(struct drm_plane *plane, 87 - struct drm_plane_state *old_state) 84 + struct drm_atomic_state *state) 88 85 { 89 - struct sun4i_layer_state *layer_state = state_to_sun4i_layer_state(plane->state); 86 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 87 + plane); 88 + struct sun4i_layer_state *layer_state = state_to_sun4i_layer_state(new_state); 90 89 struct sun4i_layer *layer = plane_to_sun4i_layer(plane); 91 90 struct sun4i_backend *backend = layer->backend; 92 91 struct sun4i_frontend *frontend = backend->frontend; ··· 127 122 } 128 123 129 124 static const struct drm_plane_helper_funcs sun4i_backend_layer_helper_funcs = { 130 - .prepare_fb = drm_gem_fb_prepare_fb, 125 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 131 126 .atomic_disable = sun4i_backend_layer_atomic_disable, 132 127 .atomic_update = sun4i_backend_layer_atomic_update, 133 128 };
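Every conversion in this series follows the same pattern: plane hooks now take the whole drm_atomic_state and fetch their own old or new drm_plane_state from it, instead of being handed a single state pointer. A toy model of that lookup (all names are illustrative, not the DRM API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the helper-signature rework: an atomic state holds per-plane
 * old and new states, and hooks look up the entry for their own plane. */
struct plane { int id; };
struct plane_state { const struct plane *plane; int visible; };

struct atomic_state {
	struct plane_state *old_states;
	struct plane_state *new_states;
	size_t num_planes;
};

static struct plane_state *find_state(struct plane_state *arr, size_t n,
				      const struct plane *p)
{
	for (size_t i = 0; i < n; i++)
		if (arr[i].plane == p)
			return &arr[i];
	return NULL;
}

/* Counterparts of drm_atomic_get_old/new_plane_state() in this toy model */
static struct plane_state *get_old_plane_state(struct atomic_state *s,
					       const struct plane *p)
{
	return find_state(s->old_states, s->num_planes, p);
}

static struct plane_state *get_new_plane_state(struct atomic_state *s,
					       const struct plane *p)
{
	return find_state(s->new_states, s->num_planes, p);
}
```

The payoff, visible in every hunk below, is that a hook such as atomic_update can reach both the old state (for teardown decisions) and the new one from the same argument.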
+49 -10
drivers/gpu/drm/sun4i/sun8i_ui_layer.c
··· 14 14 #include <drm/drm_crtc.h> 15 15 #include <drm/drm_fb_cma_helper.h> 16 16 #include <drm/drm_fourcc.h> 17 + #include <drm/drm_gem_atomic_helper.h> 17 18 #include <drm/drm_gem_cma_helper.h> 18 - #include <drm/drm_gem_framebuffer_helper.h> 19 19 #include <drm/drm_plane_helper.h> 20 20 #include <drm/drm_probe_helper.h> 21 21 ··· 70 70 SUN8I_MIXER_BLEND_ROUTE_PIPE_MSK(zpos), 71 71 val); 72 72 } 73 + } 74 + 75 + static void sun8i_ui_layer_update_alpha(struct sun8i_mixer *mixer, int channel, 76 + int overlay, struct drm_plane *plane) 77 + { 78 + u32 mask, val, ch_base; 79 + 80 + ch_base = sun8i_channel_base(mixer, channel); 81 + 82 + mask = SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_MASK | 83 + SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MASK; 84 + 85 + val = SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA(plane->state->alpha >> 8); 86 + 87 + val |= (plane->state->alpha == DRM_BLEND_ALPHA_OPAQUE) ? 88 + SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_PIXEL : 89 + SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_COMBINED; 90 + 91 + regmap_update_bits(mixer->engine.regs, 92 + SUN8I_MIXER_CHAN_UI_LAYER_ATTR(ch_base, overlay), 93 + mask, val); 73 94 } 74 95 75 96 static int sun8i_ui_layer_update_coord(struct sun8i_mixer *mixer, int channel, ··· 257 236 } 258 237 259 238 static int sun8i_ui_layer_atomic_check(struct drm_plane *plane, 260 - struct drm_plane_state *state) 239 + struct drm_atomic_state *state) 261 240 { 241 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 242 + plane); 262 243 struct sun8i_ui_layer *layer = plane_to_sun8i_ui_layer(plane); 263 - struct drm_crtc *crtc = state->crtc; 244 + struct drm_crtc *crtc = new_plane_state->crtc; 264 245 struct drm_crtc_state *crtc_state; 265 246 int min_scale, max_scale; 266 247 267 248 if (!crtc) 268 249 return 0; 269 250 270 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 251 + crtc_state = drm_atomic_get_existing_crtc_state(state, 252 + crtc); 271 253 if (WARN_ON(!crtc_state))
272 254 return -EINVAL; 273 255 ··· 282 258 max_scale = SUN8I_UI_SCALER_SCALE_MAX; 283 259 } 284 260 285 - return drm_atomic_helper_check_plane_state(state, crtc_state, 261 + return drm_atomic_helper_check_plane_state(new_plane_state, 262 + crtc_state, 286 263 min_scale, max_scale, 287 264 true, true); 288 265 } 289 266 290 267 static void sun8i_ui_layer_atomic_disable(struct drm_plane *plane, 291 - struct drm_plane_state *old_state) 268 + struct drm_atomic_state *state) 292 269 { 270 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 271 + plane); 293 272 struct sun8i_ui_layer *layer = plane_to_sun8i_ui_layer(plane); 294 273 unsigned int old_zpos = old_state->normalized_zpos; 295 274 struct sun8i_mixer *mixer = layer->mixer; ··· 302 275 } 303 276 304 277 static void sun8i_ui_layer_atomic_update(struct drm_plane *plane, 305 - struct drm_plane_state *old_state) 278 + struct drm_atomic_state *state) 306 279 { 280 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 281 + plane); 282 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 283 + plane); 307 284 struct sun8i_ui_layer *layer = plane_to_sun8i_ui_layer(plane); 308 - unsigned int zpos = plane->state->normalized_zpos; 285 + unsigned int zpos = new_state->normalized_zpos; 309 286 unsigned int old_zpos = old_state->normalized_zpos; 310 287 struct sun8i_mixer *mixer = layer->mixer; 311 288 312 - if (!plane->state->visible) { 289 + if (!new_state->visible) { 313 290 sun8i_ui_layer_enable(mixer, layer->channel, 314 291 layer->overlay, false, 0, old_zpos); 315 292 return; ··· 321 290 322 291 sun8i_ui_layer_update_coord(mixer, layer->channel, 323 292 layer->overlay, plane, zpos); 293 + sun8i_ui_layer_update_alpha(mixer, layer->channel, 294 + layer->overlay, plane); 324 295 sun8i_ui_layer_update_formats(mixer, layer->channel, 325 296 layer->overlay, plane); 326 297 sun8i_ui_layer_update_buffer(mixer, layer->channel, ··· 332 299 } 333 300 334 301 static const struct drm_plane_helper_funcs sun8i_ui_layer_helper_funcs = {
335 - .prepare_fb = drm_gem_fb_prepare_fb, 302 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 336 303 .atomic_check = sun8i_ui_layer_atomic_check, 337 304 .atomic_disable = sun8i_ui_layer_atomic_disable, 338 305 .atomic_update = sun8i_ui_layer_atomic_update, ··· 399 366 } 400 367 401 368 plane_cnt = mixer->cfg->ui_num + mixer->cfg->vi_num; 369 + 370 + ret = drm_plane_create_alpha_property(&layer->plane); 371 + if (ret) { 372 + dev_err(drm->dev, "Couldn't add alpha property\n"); 373 + return ERR_PTR(ret); 374 + } 402 375 403 376 ret = drm_plane_create_zpos_property(&layer->plane, channel, 404 377 0, plane_cnt - 1);
+5
drivers/gpu/drm/sun4i/sun8i_ui_layer.h
··· 40 40 #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_FBFMT_MASK GENMASK(12, 8) 41 41 #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_FBFMT_OFFSET 8 42 42 #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MASK GENMASK(31, 24) 43 + #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA(x) ((x) << 24) 44 + 45 + #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_PIXEL ((0) << 1) 46 + #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_LAYER ((1) << 1) 47 + #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_COMBINED ((2) << 1) 43 48 44 49 struct sun8i_mixer; 45 50
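Using the macros added above, sun8i_ui_layer_update_alpha() packs the upper 8 bits of the 16-bit DRM plane alpha into bits 31:24 of the layer attribute register and selects the blend mode in bits 2:1. A stand-alone sketch of that packing (macro names shortened, and a u32 cast added by me to keep the shift well-defined in plain C):

```c
#include <assert.h>
#include <stdint.h>

/* Shortened versions of the SUN8I_MIXER_CHAN_UI_LAYER_ATTR_* macros above */
#define UI_ATTR_ALPHA(x)		((uint32_t)(x) << 24)
#define UI_ATTR_ALPHA_MODE_PIXEL	((0) << 1)
#define UI_ATTR_ALPHA_MODE_COMBINED	((2) << 1)
#define DRM_BLEND_ALPHA_OPAQUE		0xffff	/* DRM's 16-bit opaque alpha */

/* Value sun8i_ui_layer_update_alpha() computes for the attribute register:
 * top byte = upper 8 bits of the plane alpha, bits 2:1 = blend mode. */
static uint32_t ui_layer_alpha_bits(uint16_t plane_alpha)
{
	uint32_t val = UI_ATTR_ALPHA(plane_alpha >> 8);

	/* an opaque plane keeps per-pixel alpha; otherwise pixel alpha is
	 * combined with the global layer alpha */
	val |= (plane_alpha == DRM_BLEND_ALPHA_OPAQUE) ?
		UI_ATTR_ALPHA_MODE_PIXEL : UI_ATTR_ALPHA_MODE_COMBINED;
	return val;
}
```

So a fully opaque plane yields 0xff000000 (pixel mode), while a half-transparent one sets the "combined" mode bit alongside its 8-bit alpha.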
+60 -18
drivers/gpu/drm/sun4i/sun8i_vi_layer.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_crtc.h> 9 9 #include <drm/drm_fb_cma_helper.h> 10 + #include <drm/drm_gem_atomic_helper.h> 10 11 #include <drm/drm_gem_cma_helper.h> 11 - #include <drm/drm_gem_framebuffer_helper.h> 12 12 #include <drm/drm_plane_helper.h> 13 13 #include <drm/drm_probe_helper.h> 14 14 ··· 63 63 SUN8I_MIXER_BLEND_ROUTE(bld_base), 64 64 SUN8I_MIXER_BLEND_ROUTE_PIPE_MSK(zpos), 65 65 val); 66 + } 67 + } 68 + 69 + static void sun8i_vi_layer_update_alpha(struct sun8i_mixer *mixer, int channel, 70 + int overlay, struct drm_plane *plane) 71 + { 72 + u32 mask, val, ch_base; 73 + 74 + ch_base = sun8i_channel_base(mixer, channel); 75 + 76 + if (mixer->cfg->is_de3) { 77 + mask = SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MASK | 78 + SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_MASK; 79 + val = SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA 80 + (plane->state->alpha >> 8); 81 + 82 + val |= (plane->state->alpha == DRM_BLEND_ALPHA_OPAQUE) ? 83 + SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_PIXEL : 84 + SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_COMBINED; 85 + 86 + regmap_update_bits(mixer->engine.regs, 87 + SUN8I_MIXER_CHAN_VI_LAYER_ATTR(ch_base, 88 + overlay), 89 + mask, val); 90 + } else if (mixer->cfg->vi_num == 1) { 91 + regmap_update_bits(mixer->engine.regs, 92 + SUN8I_MIXER_FCC_GLOBAL_ALPHA_REG, 93 + SUN8I_MIXER_FCC_GLOBAL_ALPHA_MASK, 94 + SUN8I_MIXER_FCC_GLOBAL_ALPHA 95 + (plane->state->alpha >> 8)); 66 96 } 67 97 } 68 98 ··· 298 268 SUN8I_MIXER_CHAN_VI_LAYER_ATTR(ch_base, overlay), 299 269 SUN8I_MIXER_CHAN_VI_LAYER_ATTR_RGB_MODE, val); 300 270
301 - /* It seems that YUV formats use global alpha setting. */ 302 - if (mixer->cfg->is_de3) 303 - regmap_update_bits(mixer->engine.regs, 304 - SUN8I_MIXER_CHAN_VI_LAYER_ATTR(ch_base, 305 - overlay), 306 - SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MASK, 307 - SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA(0xff)); 308 - 309 271 return 0; 310 272 } 311 273 ··· 361 339 } 362 340 363 341 static int sun8i_vi_layer_atomic_check(struct drm_plane *plane, 364 - struct drm_plane_state *state) 342 + struct drm_atomic_state *state) 365 343 { 344 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 345 + plane); 366 346 struct sun8i_vi_layer *layer = plane_to_sun8i_vi_layer(plane); 367 - struct drm_crtc *crtc = state->crtc; 347 + struct drm_crtc *crtc = new_plane_state->crtc; 368 348 struct drm_crtc_state *crtc_state; 369 349 int min_scale, max_scale; 370 350 371 351 if (!crtc) 372 352 return 0; 373 353 374 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 354 + crtc_state = drm_atomic_get_existing_crtc_state(state, 355 + crtc); 375 356 if (WARN_ON(!crtc_state)) 376 357 return -EINVAL; 377 358 ··· 386 361 max_scale = SUN8I_VI_SCALER_SCALE_MAX; 387 362 } 388 363 389 - return drm_atomic_helper_check_plane_state(state, crtc_state, 364 + return drm_atomic_helper_check_plane_state(new_plane_state, 365 + crtc_state, 390 366 min_scale, max_scale, 391 367 true, true); 392 368 } 393 369 394 370 static void sun8i_vi_layer_atomic_disable(struct drm_plane *plane, 395 - struct drm_plane_state *old_state) 371 + struct drm_atomic_state *state) 396 372 { 373 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 374 + plane); 397 375 struct sun8i_vi_layer *layer = plane_to_sun8i_vi_layer(plane); 398 376 unsigned int old_zpos = old_state->normalized_zpos; 399 377 struct sun8i_mixer *mixer = layer->mixer; ··· 406 378 } 407 379 408 380 static void sun8i_vi_layer_atomic_update(struct drm_plane *plane, 409 - struct drm_plane_state *old_state) 381 + struct drm_atomic_state *state) 410 382 {
383 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 384 + plane); 385 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 386 + plane); 411 387 struct sun8i_vi_layer *layer = plane_to_sun8i_vi_layer(plane); 412 - unsigned int zpos = plane->state->normalized_zpos; 388 + unsigned int zpos = new_state->normalized_zpos; 413 389 unsigned int old_zpos = old_state->normalized_zpos; 414 390 struct sun8i_mixer *mixer = layer->mixer; 415 391 416 - if (!plane->state->visible) { 392 + if (!new_state->visible) { 417 393 sun8i_vi_layer_enable(mixer, layer->channel, 418 394 layer->overlay, false, 0, old_zpos); 419 395 return; ··· 425 393 426 394 sun8i_vi_layer_update_coord(mixer, layer->channel, 427 395 layer->overlay, plane, zpos); 396 + sun8i_vi_layer_update_alpha(mixer, layer->channel, 397 + layer->overlay, plane); 428 398 sun8i_vi_layer_update_formats(mixer, layer->channel, 429 399 layer->overlay, plane); 430 400 sun8i_vi_layer_update_buffer(mixer, layer->channel, ··· 436 402 } 437 403 438 404 static const struct drm_plane_helper_funcs sun8i_vi_layer_helper_funcs = { 439 - .prepare_fb = drm_gem_fb_prepare_fb, 405 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 440 406 .atomic_check = sun8i_vi_layer_atomic_check, 441 407 .atomic_disable = sun8i_vi_layer_atomic_disable, 442 408 .atomic_update = sun8i_vi_layer_atomic_update, ··· 567 533 } 568 534 569 535 plane_cnt = mixer->cfg->ui_num + mixer->cfg->vi_num; 536 + 537 + if (mixer->cfg->vi_num == 1 || mixer->cfg->is_de3) { 538 + ret = drm_plane_create_alpha_property(&layer->plane); 539 + if (ret) { 540 + dev_err(drm->dev, "Couldn't add alpha property\n"); 541 + return ERR_PTR(ret); 542 + } 543 + } 570 544 571 545 ret = drm_plane_create_zpos_property(&layer->plane, index, 572 546 0, plane_cnt - 1);
+11
drivers/gpu/drm/sun4i/sun8i_vi_layer.h
··· 29 29 #define SUN8I_MIXER_CHAN_VI_VDS_UV(base) \ 30 30 ((base) + 0xfc) 31 31 32 + #define SUN8I_MIXER_FCC_GLOBAL_ALPHA_REG \ 33 + (0xAA000 + 0x90) 34 + 35 + #define SUN8I_MIXER_FCC_GLOBAL_ALPHA(x) ((x) << 24) 36 + #define SUN8I_MIXER_FCC_GLOBAL_ALPHA_MASK GENMASK(31, 24) 37 + 32 38 #define SUN8I_MIXER_CHAN_VI_LAYER_ATTR_EN BIT(0) 33 39 /* RGB mode should be set for RGB formats and cleared for YCbCr */ 34 40 #define SUN8I_MIXER_CHAN_VI_LAYER_ATTR_RGB_MODE BIT(15) 35 41 #define SUN8I_MIXER_CHAN_VI_LAYER_ATTR_FBFMT_OFFSET 8 36 42 #define SUN8I_MIXER_CHAN_VI_LAYER_ATTR_FBFMT_MASK GENMASK(12, 8) 43 + #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_MASK GENMASK(2, 1) 37 44 #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MASK GENMASK(31, 24) 38 45 #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA(x) ((x) << 24) 46 + 47 + #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_PIXEL ((0) << 1) 48 + #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_LAYER ((1) << 1) 49 + #define SUN50I_MIXER_CHAN_VI_LAYER_ATTR_ALPHA_MODE_COMBINED ((2) << 1) 39 50 40 51 #define SUN8I_MIXER_CHAN_VI_DS_N(x) ((x) << 16) 41 52 #define SUN8I_MIXER_CHAN_VI_DS_M(x) ((x) << 0)
+64 -52
drivers/gpu/drm/tegra/dc.c
··· 604 604 }; 605 605 606 606 static int tegra_plane_atomic_check(struct drm_plane *plane, 607 - struct drm_plane_state *state) 607 + struct drm_atomic_state *state) 608 608 { 609 - struct tegra_plane_state *plane_state = to_tegra_plane_state(state); 609 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 610 + plane); 611 + struct tegra_plane_state *plane_state = to_tegra_plane_state(new_plane_state); 610 612 unsigned int supported_rotation = DRM_MODE_ROTATE_0 | 611 613 DRM_MODE_REFLECT_X | 612 614 DRM_MODE_REFLECT_Y; 613 - unsigned int rotation = state->rotation; 615 + unsigned int rotation = new_plane_state->rotation; 614 616 struct tegra_bo_tiling *tiling = &plane_state->tiling; 615 617 struct tegra_plane *tegra = to_tegra_plane(plane); 616 - struct tegra_dc *dc = to_tegra_dc(state->crtc); 618 + struct tegra_dc *dc = to_tegra_dc(new_plane_state->crtc); 617 619 int err; 618 620 619 621 /* no need for further checks if the plane is being disabled */ 620 - if (!state->crtc) 622 + if (!new_plane_state->crtc) 621 623 return 0; 622 624 623 - err = tegra_plane_format(state->fb->format->format, 625 + err = tegra_plane_format(new_plane_state->fb->format->format, 624 626 &plane_state->format, 625 627 &plane_state->swap); 626 628 if (err < 0) ··· 640 638 return err; 641 639 } 642 640 643 - err = tegra_fb_get_tiling(state->fb, tiling); 641 + err = tegra_fb_get_tiling(new_plane_state->fb, tiling); 644 642 if (err < 0) 645 643 return err; 646 644 ··· 656 654 * property in order to achieve the same result. The legacy BO flag 657 655 * duplicates the DRM rotation property when both are set. 658 656 */ 659 - if (tegra_fb_is_bottom_up(state->fb)) 657 + if (tegra_fb_is_bottom_up(new_plane_state->fb)) 660 658 rotation |= DRM_MODE_REFLECT_Y; 661 659 662 660 rotation = drm_rotation_simplify(rotation, supported_rotation); ··· 676 674 * error out if the user tries to display a framebuffer with such a 677 675 * configuration. 
678 676 */ 679 - if (state->fb->format->num_planes > 2) { 680 - if (state->fb->pitches[2] != state->fb->pitches[1]) { 677 + if (new_plane_state->fb->format->num_planes > 2) { 678 + if (new_plane_state->fb->pitches[2] != new_plane_state->fb->pitches[1]) { 681 679 DRM_ERROR("unsupported UV-plane configuration\n"); 682 680 return -EINVAL; 683 681 } 684 682 } 685 683 686 - err = tegra_plane_state_add(tegra, state); 684 + err = tegra_plane_state_add(tegra, new_plane_state); 687 685 if (err < 0) 688 686 return err; 689 687 ··· 691 689 } 692 690 693 691 static void tegra_plane_atomic_disable(struct drm_plane *plane, 694 - struct drm_plane_state *old_state) 692 + struct drm_atomic_state *state) 695 693 { 694 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 695 + plane); 696 696 struct tegra_plane *p = to_tegra_plane(plane); 697 697 u32 value; 698 698 ··· 708 704 } 709 705 710 706 static void tegra_plane_atomic_update(struct drm_plane *plane, 711 - struct drm_plane_state *old_state) 707 + struct drm_atomic_state *state) 712 708 { 713 - struct tegra_plane_state *state = to_tegra_plane_state(plane->state); 714 - struct drm_framebuffer *fb = plane->state->fb; 709 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 710 + plane); 711 + struct tegra_plane_state *tegra_plane_state = to_tegra_plane_state(new_state); 712 + struct drm_framebuffer *fb = new_state->fb; 715 713 struct tegra_plane *p = to_tegra_plane(plane); 716 714 struct tegra_dc_window window; 717 715 unsigned int i; 718 716 719 717 /* rien ne va plus */ 720 - if (!plane->state->crtc || !plane->state->fb) 718 + if (!new_state->crtc || !new_state->fb) 721 719 return; 722 720 723 - if (!plane->state->visible) 724 - return tegra_plane_atomic_disable(plane, old_state); 721 + if (!new_state->visible) 722 + return tegra_plane_atomic_disable(plane, state); 725 723 726 724 memset(&window, 0, sizeof(window));
727 - window.src.x = plane->state->src.x1 >> 16; 728 - window.src.y = plane->state->src.y1 >> 16; 729 - window.src.w = drm_rect_width(&plane->state->src) >> 16; 730 - window.src.h = drm_rect_height(&plane->state->src) >> 16; 731 - window.dst.x = plane->state->dst.x1; 732 - window.dst.y = plane->state->dst.y1; 733 - window.dst.w = drm_rect_width(&plane->state->dst); 734 - window.dst.h = drm_rect_height(&plane->state->dst); 725 + window.src.x = new_state->src.x1 >> 16; 726 + window.src.y = new_state->src.y1 >> 16; 727 + window.src.w = drm_rect_width(&new_state->src) >> 16; 728 + window.src.h = drm_rect_height(&new_state->src) >> 16; 729 + window.dst.x = new_state->dst.x1; 730 + window.dst.y = new_state->dst.y1; 731 + window.dst.w = drm_rect_width(&new_state->dst); 732 + window.dst.h = drm_rect_height(&new_state->dst); 735 733 window.bits_per_pixel = fb->format->cpp[0] * 8; 736 - window.reflect_x = state->reflect_x; 737 - window.reflect_y = state->reflect_y; 734 + window.reflect_x = tegra_plane_state->reflect_x; 735 + window.reflect_y = tegra_plane_state->reflect_y; 738 736 739 737 /* copy from state */ 740 - window.zpos = plane->state->normalized_zpos; 741 - window.tiling = state->tiling; 742 - window.format = state->format; 743 - window.swap = state->swap; 738 + window.zpos = new_state->normalized_zpos; 739 + window.tiling = tegra_plane_state->tiling; 740 + window.format = tegra_plane_state->format; 741 + window.swap = tegra_plane_state->swap; 744 742 745 743 for (i = 0; i < fb->format->num_planes; i++) { 746 - window.base[i] = state->iova[i] + fb->offsets[i]; 744 + window.base[i] = tegra_plane_state->iova[i] + fb->offsets[i]; 748 746 /* 749 747 * Tegra uses a shared stride for UV planes. Framebuffers are
··· 837 831 }; 838 832 839 833 static int tegra_cursor_atomic_check(struct drm_plane *plane, 840 - struct drm_plane_state *state) 834 + struct drm_atomic_state *state) 841 835 { 836 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 837 + plane); 842 838 struct tegra_plane *tegra = to_tegra_plane(plane); 843 839 int err; 844 840 845 841 /* no need for further checks if the plane is being disabled */ 846 - if (!state->crtc) 842 + if (!new_plane_state->crtc) 847 843 return 0; 848 844 849 845 /* scaling not supported for cursor */ 850 - if ((state->src_w >> 16 != state->crtc_w) || 851 - (state->src_h >> 16 != state->crtc_h)) 846 + if ((new_plane_state->src_w >> 16 != new_plane_state->crtc_w) || 847 + (new_plane_state->src_h >> 16 != new_plane_state->crtc_h)) 852 848 return -EINVAL; 853 849 854 850 /* only square cursors supported */ 855 - if (state->src_w != state->src_h) 851 + if (new_plane_state->src_w != new_plane_state->src_h) 856 852 return -EINVAL; 857 853 858 - if (state->crtc_w != 32 && state->crtc_w != 64 && 859 - state->crtc_w != 128 && state->crtc_w != 256) 854 + if (new_plane_state->crtc_w != 32 && new_plane_state->crtc_w != 64 && 855 + new_plane_state->crtc_w != 128 && new_plane_state->crtc_w != 256) 860 856 return -EINVAL; 861 857 862 - err = tegra_plane_state_add(tegra, state); 858 + err = tegra_plane_state_add(tegra, new_plane_state); 863 859 if (err < 0) 864 860 return err; 865 861 ··· 869 861 } 870 862 871 863 static void tegra_cursor_atomic_update(struct drm_plane *plane, 872 - struct drm_plane_state *old_state) 864 + struct drm_atomic_state *state) 873 865 { 874 - struct tegra_plane_state *state = to_tegra_plane_state(plane->state); 875 - struct tegra_dc *dc = to_tegra_dc(plane->state->crtc); 866 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 867 + plane); 868 + struct tegra_plane_state *tegra_plane_state = to_tegra_plane_state(new_state);
869 + struct tegra_dc *dc = to_tegra_dc(new_state->crtc); 876 870 u32 value = CURSOR_CLIP_DISPLAY; 877 871 878 872 /* rien ne va plus */ 879 - if (!plane->state->crtc || !plane->state->fb) 873 + if (!new_state->crtc || !new_state->fb) 880 874 return; 881 875 882 - switch (plane->state->crtc_w) { 876 + switch (new_state->crtc_w) { 883 877 case 32: 884 878 value |= CURSOR_SIZE_32x32; 885 879 break; ··· 900 890 901 891 default: 902 892 WARN(1, "cursor size %ux%u not supported\n", 903 - plane->state->crtc_w, plane->state->crtc_h); 893 + new_state->crtc_w, new_state->crtc_h); 904 894 return; 905 895 } 906 896 907 - value |= (state->iova[0] >> 10) & 0x3fffff; 897 + value |= (tegra_plane_state->iova[0] >> 10) & 0x3fffff; 908 898 tegra_dc_writel(dc, value, DC_DISP_CURSOR_START_ADDR); 909 899 910 900 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 911 - value = (state->iova[0] >> 32) & 0x3; 901 + value = (tegra_plane_state->iova[0] >> 32) & 0x3; 912 902 tegra_dc_writel(dc, value, DC_DISP_CURSOR_START_ADDR_HI); 913 903 #endif 914 904 ··· 927 917 tegra_dc_writel(dc, value, DC_DISP_BLEND_CURSOR_CONTROL); 928 918 929 919 /* position the cursor */ 930 - value = (plane->state->crtc_y & 0x3fff) << 16 | 931 - (plane->state->crtc_x & 0x3fff); 920 + value = (new_state->crtc_y & 0x3fff) << 16 | 921 + (new_state->crtc_x & 0x3fff); 932 922 tegra_dc_writel(dc, value, DC_DISP_CURSOR_POSITION); 933 923 } 934 924 935 925 static void tegra_cursor_atomic_disable(struct drm_plane *plane, 936 - struct drm_plane_state *old_state) 926 + struct drm_atomic_state *state) 937 927 { 928 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 929 + plane); 938 930 struct tegra_dc *dc; 939 931 u32 value; 940 932
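tegra_cursor_atomic_check() above enforces three constraints: no scaling, square cursors only, and a side length of 32, 64, 128 or 256 pixels. Modeled stand-alone (struct and names are illustrative; -22 stands in for -EINVAL):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the Tegra cursor plane constraints */
struct cursor_state {
	uint32_t src_w, src_h;   /* 16.16 fixed point */
	uint32_t crtc_w, crtc_h; /* whole pixels */
};

static int cursor_check(const struct cursor_state *s)
{
	/* scaling not supported for cursor */
	if (s->src_w >> 16 != s->crtc_w || s->src_h >> 16 != s->crtc_h)
		return -22;

	/* only square cursors supported */
	if (s->src_w != s->src_h)
		return -22;

	/* only 32/64/128/256 pixel side lengths are accepted */
	if (s->crtc_w != 32 && s->crtc_w != 64 &&
	    s->crtc_w != 128 && s->crtc_w != 256)
		return -22;

	return 0;
}
```

So a 64x64 cursor passes, while a 48x48 one (wrong size) or a 64x32 one (scaled or non-square) is rejected.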
+3
drivers/gpu/drm/tegra/drm.c
··· 65 65 struct tegra_drm *tegra = drm->dev_private; 66 66 67 67 if (tegra->hub) { 68 + bool fence_cookie = dma_fence_begin_signalling(); 69 + 68 70 drm_atomic_helper_commit_modeset_disables(drm, old_state); 69 71 tegra_display_hub_atomic_commit(drm, old_state); 70 72 drm_atomic_helper_commit_planes(drm, old_state, 0); 71 73 drm_atomic_helper_commit_modeset_enables(drm, old_state); 72 74 drm_atomic_helper_commit_hw_done(old_state); 75 + dma_fence_end_signalling(fence_cookie); 73 76 drm_atomic_helper_wait_for_vblanks(drm, old_state); 74 77 drm_atomic_helper_cleanup_planes(drm, old_state); 75 78 } else {
+34 -28
drivers/gpu/drm/tegra/hub.c
··· 336 336 } 337 337 338 338 static int tegra_shared_plane_atomic_check(struct drm_plane *plane, 339 - struct drm_plane_state *state) 339 + struct drm_atomic_state *state) 340 340 { 341 - struct tegra_plane_state *plane_state = to_tegra_plane_state(state); 341 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 342 + plane); 343 + struct tegra_plane_state *plane_state = to_tegra_plane_state(new_plane_state); 342 344 struct tegra_shared_plane *tegra = to_tegra_shared_plane(plane); 343 345 struct tegra_bo_tiling *tiling = &plane_state->tiling; 344 - struct tegra_dc *dc = to_tegra_dc(state->crtc); 346 + struct tegra_dc *dc = to_tegra_dc(new_plane_state->crtc); 345 347 int err; 346 348 347 349 /* no need for further checks if the plane is being disabled */ 348 - if (!state->crtc || !state->fb) 350 + if (!new_plane_state->crtc || !new_plane_state->fb) 349 351 return 0; 350 352 351 - err = tegra_plane_format(state->fb->format->format, 353 + err = tegra_plane_format(new_plane_state->fb->format->format, 352 354 &plane_state->format, 353 355 &plane_state->swap); 354 356 if (err < 0) 355 357 return err; 356 358 357 - err = tegra_fb_get_tiling(state->fb, tiling); 359 + err = tegra_fb_get_tiling(new_plane_state->fb, tiling); 358 360 if (err < 0) 359 361 return err; 360 362 ··· 371 369 * error out if the user tries to display a framebuffer with such a 372 370 * configuration.
373 371 */ 374 - if (state->fb->format->num_planes > 2) { 375 - if (state->fb->pitches[2] != state->fb->pitches[1]) { 372 + if (new_plane_state->fb->format->num_planes > 2) { 373 + if (new_plane_state->fb->pitches[2] != new_plane_state->fb->pitches[1]) { 376 374 DRM_ERROR("unsupported UV-plane configuration\n"); 377 375 return -EINVAL; 378 376 } ··· 380 378 381 379 /* XXX scaling is not yet supported, add a check here */ 382 380 383 - err = tegra_plane_state_add(&tegra->base, state); 381 + err = tegra_plane_state_add(&tegra->base, new_plane_state); 384 382 if (err < 0) 385 383 return err; 386 384 ··· 388 386 } 389 387 390 388 static void tegra_shared_plane_atomic_disable(struct drm_plane *plane, 391 - struct drm_plane_state *old_state) 389 + struct drm_atomic_state *state) 392 390 { 391 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 392 + plane); 393 393 struct tegra_plane *p = to_tegra_plane(plane); 394 394 struct tegra_dc *dc; 395 395 u32 value; ··· 427 423 } 428 424 429 425 static void tegra_shared_plane_atomic_update(struct drm_plane *plane, 430 - struct drm_plane_state *old_state) 426 + struct drm_atomic_state *state) 431 427 { 432 - struct tegra_plane_state *state = to_tegra_plane_state(plane->state); 433 - struct tegra_dc *dc = to_tegra_dc(plane->state->crtc); 434 - unsigned int zpos = plane->state->normalized_zpos; 435 - struct drm_framebuffer *fb = plane->state->fb; 428 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 429 + plane); 430 + struct tegra_plane_state *tegra_plane_state = to_tegra_plane_state(new_state); 431 + struct tegra_dc *dc = to_tegra_dc(new_state->crtc); 432 + unsigned int zpos = new_state->normalized_zpos; 433 + struct drm_framebuffer *fb = new_state->fb; 436 434 struct tegra_plane *p = to_tegra_plane(plane); 437 435 dma_addr_t base; 438 436 u32 value; 439 437 int err; 440 438 441 439 /* rien ne va plus */ 442 440 if (!new_state->crtc || !new_state->fb) 443 441 return; 444 442 445 - if (!plane->state->visible) { 446 - tegra_shared_plane_atomic_disable(plane, old_state); 443 + if (!new_state->visible) { 444 + tegra_shared_plane_atomic_disable(plane, state); 447 445 return; 448 446 } 449 447
··· 483 477 /* disable compression */ 484 478 tegra_plane_writel(p, 0, DC_WINBUF_CDE_CONTROL); 485 479 486 - base = state->iova[0] + fb->offsets[0]; 480 + base = tegra_plane_state->iova[0] + fb->offsets[0]; 487 481 488 - tegra_plane_writel(p, state->format, DC_WIN_COLOR_DEPTH); 482 + tegra_plane_writel(p, tegra_plane_state->format, DC_WIN_COLOR_DEPTH); 489 483 tegra_plane_writel(p, 0, DC_WIN_PRECOMP_WGRP_PARAMS); 490 484 491 - value = V_POSITION(plane->state->crtc_y) | 492 - H_POSITION(plane->state->crtc_x); 485 + value = V_POSITION(new_state->crtc_y) | 486 + H_POSITION(new_state->crtc_x); 493 487 tegra_plane_writel(p, value, DC_WIN_POSITION); 494 488 495 - value = V_SIZE(plane->state->crtc_h) | H_SIZE(plane->state->crtc_w); 489 + value = V_SIZE(new_state->crtc_h) | H_SIZE(new_state->crtc_w); 496 490 tegra_plane_writel(p, value, DC_WIN_SIZE); 497 491 498 492 value = WIN_ENABLE | COLOR_EXPAND; 499 493 tegra_plane_writel(p, value, DC_WIN_WIN_OPTIONS); 500 494 501 - value = V_SIZE(plane->state->crtc_h) | H_SIZE(plane->state->crtc_w); 495 + value = V_SIZE(new_state->crtc_h) | H_SIZE(new_state->crtc_w); 502 496 tegra_plane_writel(p, value, DC_WIN_CROPPED_SIZE); 503 497 504 498 tegra_plane_writel(p, upper_32_bits(base), DC_WINBUF_START_ADDR_HI); ··· 510 504 value = CLAMP_BEFORE_BLEND | DEGAMMA_SRGB | INPUT_RANGE_FULL; 511 505 tegra_plane_writel(p, value, DC_WIN_SET_PARAMS); 512 506 513 - value = OFFSET_X(plane->state->src_y >> 16) | 514 - OFFSET_Y(plane->state->src_x >> 16); 507 + value = OFFSET_X(new_state->src_y >> 16) | 508 + OFFSET_Y(new_state->src_x >> 16); 515 509 tegra_plane_writel(p, value, DC_WINBUF_CROPPED_POINT); 516 510 517 511 if (dc->soc->supports_block_linear) {
518 - unsigned long height = state->tiling.value; 512 + unsigned long height = tegra_plane_state->tiling.value; 519 513 520 514 /* XXX */ 521 - switch (state->tiling.mode) { 515 + switch (tegra_plane_state->tiling.mode) { 522 516 case TEGRA_BO_TILING_MODE_PITCH: 523 517 value = DC_WINBUF_SURFACE_KIND_BLOCK_HEIGHT(0) | 524 518 DC_WINBUF_SURFACE_KIND_PITCH;
+2 -2
drivers/gpu/drm/tegra/plane.c
··· 8 8 #include <drm/drm_atomic.h> 9 9 #include <drm/drm_atomic_helper.h> 10 10 #include <drm/drm_fourcc.h> 11 - #include <drm/drm_gem_framebuffer_helper.h> 11 + #include <drm/drm_gem_atomic_helper.h> 12 12 #include <drm/drm_plane_helper.h> 13 13 14 14 #include "dc.h" ··· 198 198 if (!state->fb) 199 199 return 0; 200 200 201 - drm_gem_fb_prepare_fb(plane, state); 201 + drm_gem_plane_helper_prepare_fb(plane, state); 202 202 203 203 return tegra_dc_pin(dc, to_tegra_plane_state(state)); 204 204 }
+4
drivers/gpu/drm/tidss/tidss_kms.c
··· 4 4 * Author: Tomi Valkeinen <tomi.valkeinen@ti.com> 5 5 */ 6 6 7 + #include <linux/dma-fence.h> 8 + 7 9 #include <drm/drm_atomic.h> 8 10 #include <drm/drm_atomic_helper.h> 9 11 #include <drm/drm_bridge.h> ··· 28 26 { 29 27 struct drm_device *ddev = old_state->dev; 30 28 struct tidss_device *tidss = to_tidss(ddev); 29 + bool fence_cookie = dma_fence_begin_signalling(); 31 30 32 31 dev_dbg(ddev->dev, "%s\n", __func__); 33 32 ··· 39 36 drm_atomic_helper_commit_modeset_enables(ddev, old_state); 40 37 41 38 drm_atomic_helper_commit_hw_done(old_state); 39 + dma_fence_end_signalling(fence_cookie); 42 40 drm_atomic_helper_wait_for_flip_done(ddev, old_state); 43 41 44 42 drm_atomic_helper_cleanup_planes(ddev, old_state);
+30 -23
drivers/gpu/drm/tidss/tidss_plane.c
··· 10 10 #include <drm/drm_crtc_helper.h> 11 11 #include <drm/drm_fourcc.h> 12 12 #include <drm/drm_fb_cma_helper.h> 13 - #include <drm/drm_gem_framebuffer_helper.h> 13 + #include <drm/drm_gem_atomic_helper.h> 14 14 15 15 #include "tidss_crtc.h" 16 16 #include "tidss_dispc.h" ··· 20 20 /* drm_plane_helper_funcs */ 21 21 22 22 static int tidss_plane_atomic_check(struct drm_plane *plane, 23 - struct drm_plane_state *state) 23 + struct drm_atomic_state *state) 24 24 { 25 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 26 + plane); 25 27 struct drm_device *ddev = plane->dev; 26 28 struct tidss_device *tidss = to_tidss(ddev); 27 29 struct tidss_plane *tplane = to_tidss_plane(plane); ··· 35 33 36 34 dev_dbg(ddev->dev, "%s\n", __func__); 37 35 38 - if (!state->crtc) { 36 + if (!new_plane_state->crtc) { 39 37 /* 40 38 * The visible field is not reset by the DRM core but only 41 39 * updated by drm_plane_helper_check_state(), set it manually. 42 40 */ 43 - state->visible = false; 41 + new_plane_state->visible = false; 44 42 return 0; 45 43 } 46 44 47 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 45 + crtc_state = drm_atomic_get_crtc_state(state, 46 + new_plane_state->crtc); 48 47 if (IS_ERR(crtc_state)) 49 48 return PTR_ERR(crtc_state); 50 49 51 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 0, 50 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 51 + 0, 52 52 INT_MAX, true, true); 53 53 if (ret < 0) 54 54 return ret; ··· 67 63 * check for odd height). 
68 64 */ 69 65 70 - finfo = drm_format_info(state->fb->format->format); 66 + finfo = drm_format_info(new_plane_state->fb->format->format); 71 67 72 - if ((state->src_x >> 16) % finfo->hsub != 0) { 68 + if ((new_plane_state->src_x >> 16) % finfo->hsub != 0) { 73 69 dev_dbg(ddev->dev, 74 70 "%s: x-position %u not divisible subpixel size %u\n", 75 - __func__, (state->src_x >> 16), finfo->hsub); 71 + __func__, (new_plane_state->src_x >> 16), finfo->hsub); 76 72 return -EINVAL; 77 73 } 78 74 79 - if ((state->src_y >> 16) % finfo->vsub != 0) { 75 + if ((new_plane_state->src_y >> 16) % finfo->vsub != 0) { 80 76 dev_dbg(ddev->dev, 81 77 "%s: y-position %u not divisible subpixel size %u\n", 82 - __func__, (state->src_y >> 16), finfo->vsub); 78 + __func__, (new_plane_state->src_y >> 16), finfo->vsub); 83 79 return -EINVAL; 84 80 } 85 81 86 - if ((state->src_w >> 16) % finfo->hsub != 0) { 82 + if ((new_plane_state->src_w >> 16) % finfo->hsub != 0) { 87 83 dev_dbg(ddev->dev, 88 84 "%s: src width %u not divisible by subpixel size %u\n", 89 - __func__, (state->src_w >> 16), finfo->hsub); 85 + __func__, (new_plane_state->src_w >> 16), 86 + finfo->hsub); 90 87 return -EINVAL; 91 88 } 92 89 93 - if (!state->visible) 90 + if (!new_plane_state->visible) 94 91 return 0; 95 92 96 - hw_videoport = to_tidss_crtc(state->crtc)->hw_videoport; 93 + hw_videoport = to_tidss_crtc(new_plane_state->crtc)->hw_videoport; 97 94 98 - ret = dispc_plane_check(tidss->dispc, hw_plane, state, hw_videoport); 95 + ret = dispc_plane_check(tidss->dispc, hw_plane, new_plane_state, 96 + hw_videoport); 99 97 if (ret) 100 98 return ret; 101 99 ··· 105 99 } 106 100 107 101 static void tidss_plane_atomic_update(struct drm_plane *plane, 108 - struct drm_plane_state *old_state) 102 + struct drm_atomic_state *state) 109 103 { 110 104 struct drm_device *ddev = plane->dev; 111 105 struct tidss_device *tidss = to_tidss(ddev); 112 106 struct tidss_plane *tplane = to_tidss_plane(plane); 113 - struct drm_plane_state *state 
= plane->state; 107 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 108 + plane); 114 109 u32 hw_videoport; 115 110 int ret; 116 111 117 112 dev_dbg(ddev->dev, "%s\n", __func__); 118 113 119 - if (!state->visible) { 114 + if (!new_state->visible) { 120 115 dispc_plane_enable(tidss->dispc, tplane->hw_plane_id, false); 121 116 return; 122 117 } 123 118 124 - hw_videoport = to_tidss_crtc(state->crtc)->hw_videoport; 119 + hw_videoport = to_tidss_crtc(new_state->crtc)->hw_videoport; 125 120 126 121 ret = dispc_plane_setup(tidss->dispc, tplane->hw_plane_id, 127 - state, hw_videoport); 122 + new_state, hw_videoport); 128 123 129 124 if (ret) { 130 125 dev_err(plane->dev->dev, "%s: Failed to setup plane %d\n", ··· 138 131 } 139 132 140 133 static void tidss_plane_atomic_disable(struct drm_plane *plane, 141 - struct drm_plane_state *old_state) 134 + struct drm_atomic_state *state) 142 135 { 143 136 struct drm_device *ddev = plane->dev; 144 137 struct tidss_device *tidss = to_tidss(ddev); ··· 158 151 } 159 152 160 153 static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = { 161 - .prepare_fb = drm_gem_fb_prepare_fb, 154 + .prepare_fb = drm_gem_plane_helper_prepare_fb, 162 155 .atomic_check = tidss_plane_atomic_check, 163 156 .atomic_update = tidss_plane_atomic_update, 164 157 .atomic_disable = tidss_plane_atomic_disable,
+14 -6
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
··· 393 393 return; 394 394 } 395 395 } 396 - reg |= info->fdd < 12; 396 + reg |= info->fdd << 12; 397 397 tilcdc_write(dev, LCDC_RASTER_CTRL_REG, reg); 398 398 399 399 if (info->invert_pxl_clk) ··· 514 514 __func__); 515 515 516 516 drm_crtc_vblank_off(crtc); 517 + 518 + spin_lock_irq(&crtc->dev->event_lock); 519 + 520 + if (crtc->state->event) { 521 + drm_crtc_send_vblank_event(crtc, crtc->state->event); 522 + crtc->state->event = NULL; 523 + } 524 + 525 + spin_unlock_irq(&crtc->dev->event_lock); 517 526 518 527 tilcdc_crtc_disable_irqs(dev); 519 528 ··· 913 904 tilcdc_clear_irqstatus(dev, stat); 914 905 915 906 if (stat & LCDC_END_OF_FRAME0) { 916 - unsigned long flags; 917 907 bool skip_event = false; 918 908 ktime_t now; 919 909 920 910 now = ktime_get(); 921 911 922 - spin_lock_irqsave(&tilcdc_crtc->irq_lock, flags); 912 + spin_lock(&tilcdc_crtc->irq_lock); 923 913 924 914 tilcdc_crtc->last_vblank = now; 925 915 ··· 928 920 skip_event = true; 929 921 } 930 922 931 - spin_unlock_irqrestore(&tilcdc_crtc->irq_lock, flags); 923 + spin_unlock(&tilcdc_crtc->irq_lock); 932 924 933 925 drm_crtc_handle_vblank(crtc); 934 926 935 927 if (!skip_event) { 936 928 struct drm_pending_vblank_event *event; 937 929 938 - spin_lock_irqsave(&dev->event_lock, flags); 930 + spin_lock(&dev->event_lock); 939 931 940 932 event = tilcdc_crtc->event; 941 933 tilcdc_crtc->event = NULL; 942 934 if (event) 943 935 drm_crtc_send_vblank_event(crtc, event); 944 936 945 - spin_unlock_irqrestore(&dev->event_lock, flags); 937 + spin_unlock(&dev->event_lock); 946 938 } 947 939 948 940 if (tilcdc_crtc->frame_intact)
+25 -21
drivers/gpu/drm/tilcdc/tilcdc_plane.c
··· 21 21 }; 22 22 23 23 static int tilcdc_plane_atomic_check(struct drm_plane *plane, 24 - struct drm_plane_state *state) 24 + struct drm_atomic_state *state) 25 25 { 26 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 27 + plane); 26 28 struct drm_crtc_state *crtc_state; 27 - struct drm_plane_state *old_state = plane->state; 29 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 30 + plane); 28 31 unsigned int pitch; 29 32 30 - if (!state->crtc) 33 + if (!new_state->crtc) 31 34 return 0; 32 35 33 - if (WARN_ON(!state->fb)) 36 + if (WARN_ON(!new_state->fb)) 34 37 return -EINVAL; 35 38 36 - if (state->crtc_x || state->crtc_y) { 39 + if (new_state->crtc_x || new_state->crtc_y) { 37 40 dev_err(plane->dev->dev, "%s: crtc position must be zero.", 38 41 __func__); 39 42 return -EINVAL; 40 43 } 41 44 42 - crtc_state = drm_atomic_get_existing_crtc_state(state->state, 43 - state->crtc); 45 + crtc_state = drm_atomic_get_existing_crtc_state(state, 46 + new_state->crtc); 44 47 /* we should have a crtc state if the plane is attached to a crtc */ 45 48 if (WARN_ON(!crtc_state)) 46 49 return 0; 47 50 48 - if (crtc_state->mode.hdisplay != state->crtc_w || 49 - crtc_state->mode.vdisplay != state->crtc_h) { 51 + if (crtc_state->mode.hdisplay != new_state->crtc_w || 52 + crtc_state->mode.vdisplay != new_state->crtc_h) { 50 53 dev_err(plane->dev->dev, 51 54 "%s: Size must match mode (%dx%d == %dx%d)", __func__, 52 55 crtc_state->mode.hdisplay, crtc_state->mode.vdisplay, 53 - state->crtc_w, state->crtc_h); 56 + new_state->crtc_w, new_state->crtc_h); 54 57 return -EINVAL; 55 58 } 56 59 57 60 pitch = crtc_state->mode.hdisplay * 58 - state->fb->format->cpp[0]; 59 - if (state->fb->pitches[0] != pitch) { 61 + new_state->fb->format->cpp[0]; 62 + if (new_state->fb->pitches[0] != pitch) { 60 63 dev_err(plane->dev->dev, 61 64 "Invalid pitch: fb and crtc widths must be the same"); 62 65 return -EINVAL; 63 66 } 64 67 65 - if (old_state->fb && 
state->fb->format != old_state->fb->format) { 68 + if (old_state->fb && new_state->fb->format != old_state->fb->format) { 66 69 dev_dbg(plane->dev->dev, 67 70 "%s(): pixel format change requires mode_change\n", 68 71 __func__); ··· 76 73 } 77 74 78 75 static void tilcdc_plane_atomic_update(struct drm_plane *plane, 79 - struct drm_plane_state *old_state) 76 + struct drm_atomic_state *state) 80 77 { 81 - struct drm_plane_state *state = plane->state; 78 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 79 + plane); 82 80 83 - if (!state->crtc) 81 + if (!new_state->crtc) 84 82 return; 85 83 86 - if (WARN_ON(!state->fb || !state->crtc->state)) 84 + if (WARN_ON(!new_state->fb || !new_state->crtc->state)) 87 85 return; 88 86 89 - if (tilcdc_crtc_update_fb(state->crtc, 90 - state->fb, 91 - state->crtc->state->event) == 0) { 92 - state->crtc->state->event = NULL; 87 + if (tilcdc_crtc_update_fb(new_state->crtc, 88 + new_state->fb, 89 + new_state->crtc->state->event) == 0) { 90 + new_state->crtc->state->event = NULL; 93 91 } 94 92 } 95 93
+10
drivers/gpu/drm/tiny/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + config DRM_ARCPGU 4 + tristate "ARC PGU" 5 + depends on DRM && OF 6 + select DRM_KMS_CMA_HELPER 7 + select DRM_KMS_HELPER 8 + help 9 + Choose this option if you have an ARC PGU controller. 10 + 11 + If M is selected the module will be called arcpgu. 12 + 3 13 config DRM_CIRRUS_QEMU 4 14 tristate "Cirrus driver for QEMU emulated device" 5 15 depends on DRM && PCI && MMU
+1
drivers/gpu/drm/tiny/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + obj-$(CONFIG_DRM_ARCPGU) += arcpgu.o 3 4 obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus.o 4 5 obj-$(CONFIG_DRM_GM12U320) += gm12u320.o 5 6 obj-$(CONFIG_TINYDRM_HX8357D) += hx8357d.o
+434
drivers/gpu/drm/tiny/arcpgu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * ARC PGU DRM driver. 4 + * 5 + * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com) 6 + */ 7 + 8 + #include <linux/clk.h> 9 + #include <drm/drm_atomic_helper.h> 10 + #include <drm/drm_debugfs.h> 11 + #include <drm/drm_device.h> 12 + #include <drm/drm_drv.h> 13 + #include <drm/drm_fb_cma_helper.h> 14 + #include <drm/drm_fb_helper.h> 15 + #include <drm/drm_fourcc.h> 16 + #include <drm/drm_gem_cma_helper.h> 17 + #include <drm/drm_gem_framebuffer_helper.h> 18 + #include <drm/drm_of.h> 19 + #include <drm/drm_probe_helper.h> 20 + #include <drm/drm_simple_kms_helper.h> 21 + #include <linux/dma-mapping.h> 22 + #include <linux/module.h> 23 + #include <linux/of_reserved_mem.h> 24 + #include <linux/platform_device.h> 25 + 26 + #define ARCPGU_REG_CTRL 0x00 27 + #define ARCPGU_REG_STAT 0x04 28 + #define ARCPGU_REG_FMT 0x10 29 + #define ARCPGU_REG_HSYNC 0x14 30 + #define ARCPGU_REG_VSYNC 0x18 31 + #define ARCPGU_REG_ACTIVE 0x1c 32 + #define ARCPGU_REG_BUF0_ADDR 0x40 33 + #define ARCPGU_REG_STRIDE 0x50 34 + #define ARCPGU_REG_START_SET 0x84 35 + 36 + #define ARCPGU_REG_ID 0x3FC 37 + 38 + #define ARCPGU_CTRL_ENABLE_MASK 0x02 39 + #define ARCPGU_CTRL_VS_POL_MASK 0x1 40 + #define ARCPGU_CTRL_VS_POL_OFST 0x3 41 + #define ARCPGU_CTRL_HS_POL_MASK 0x1 42 + #define ARCPGU_CTRL_HS_POL_OFST 0x4 43 + #define ARCPGU_MODE_XRGB8888 BIT(2) 44 + #define ARCPGU_STAT_BUSY_MASK 0x02 45 + 46 + struct arcpgu_drm_private { 47 + struct drm_device drm; 48 + void __iomem *regs; 49 + struct clk *clk; 50 + struct drm_simple_display_pipe pipe; 51 + struct drm_connector sim_conn; 52 + }; 53 + 54 + #define dev_to_arcpgu(x) container_of(x, struct arcpgu_drm_private, drm) 55 + 56 + #define pipe_to_arcpgu_priv(x) container_of(x, struct arcpgu_drm_private, pipe) 57 + 58 + static inline void arc_pgu_write(struct arcpgu_drm_private *arcpgu, 59 + unsigned int reg, u32 value) 60 + { 61 + iowrite32(value, arcpgu->regs + reg); 62 + } 63 + 64 + static inline 
u32 arc_pgu_read(struct arcpgu_drm_private *arcpgu, 65 + unsigned int reg) 66 + { 67 + return ioread32(arcpgu->regs + reg); 68 + } 69 + 70 + #define XRES_DEF 640 71 + #define YRES_DEF 480 72 + 73 + #define XRES_MAX 8192 74 + #define YRES_MAX 8192 75 + 76 + static int arcpgu_drm_connector_get_modes(struct drm_connector *connector) 77 + { 78 + int count; 79 + 80 + count = drm_add_modes_noedid(connector, XRES_MAX, YRES_MAX); 81 + drm_set_preferred_mode(connector, XRES_DEF, YRES_DEF); 82 + return count; 83 + } 84 + 85 + static const struct drm_connector_helper_funcs 86 + arcpgu_drm_connector_helper_funcs = { 87 + .get_modes = arcpgu_drm_connector_get_modes, 88 + }; 89 + 90 + static const struct drm_connector_funcs arcpgu_drm_connector_funcs = { 91 + .reset = drm_atomic_helper_connector_reset, 92 + .fill_modes = drm_helper_probe_single_connector_modes, 93 + .destroy = drm_connector_cleanup, 94 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 95 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 96 + }; 97 + 98 + static int arcpgu_drm_sim_init(struct drm_device *drm, struct drm_connector *connector) 99 + { 100 + drm_connector_helper_add(connector, &arcpgu_drm_connector_helper_funcs); 101 + return drm_connector_init(drm, connector, &arcpgu_drm_connector_funcs, 102 + DRM_MODE_CONNECTOR_VIRTUAL); 103 + } 104 + 105 + #define ENCODE_PGU_XY(x, y) ((((x) - 1) << 16) | ((y) - 1)) 106 + 107 + static const u32 arc_pgu_supported_formats[] = { 108 + DRM_FORMAT_RGB565, 109 + DRM_FORMAT_XRGB8888, 110 + DRM_FORMAT_ARGB8888, 111 + }; 112 + 113 + static void arc_pgu_set_pxl_fmt(struct arcpgu_drm_private *arcpgu) 114 + { 115 + const struct drm_framebuffer *fb = arcpgu->pipe.plane.state->fb; 116 + uint32_t pixel_format = fb->format->format; 117 + u32 format = DRM_FORMAT_INVALID; 118 + int i; 119 + u32 reg_ctrl; 120 + 121 + for (i = 0; i < ARRAY_SIZE(arc_pgu_supported_formats); i++) { 122 + if (arc_pgu_supported_formats[i] == pixel_format) 123 + 
format = arc_pgu_supported_formats[i]; 124 + } 125 + 126 + if (WARN_ON(format == DRM_FORMAT_INVALID)) 127 + return; 128 + 129 + reg_ctrl = arc_pgu_read(arcpgu, ARCPGU_REG_CTRL); 130 + if (format == DRM_FORMAT_RGB565) 131 + reg_ctrl &= ~ARCPGU_MODE_XRGB8888; 132 + else 133 + reg_ctrl |= ARCPGU_MODE_XRGB8888; 134 + arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, reg_ctrl); 135 + } 136 + 137 + static enum drm_mode_status arc_pgu_mode_valid(struct drm_simple_display_pipe *pipe, 138 + const struct drm_display_mode *mode) 139 + { 140 + struct arcpgu_drm_private *arcpgu = pipe_to_arcpgu_priv(pipe); 141 + long rate, clk_rate = mode->clock * 1000; 142 + long diff = clk_rate / 200; /* +-0.5% allowed by HDMI spec */ 143 + 144 + rate = clk_round_rate(arcpgu->clk, clk_rate); 145 + if ((max(rate, clk_rate) - min(rate, clk_rate) < diff) && (rate > 0)) 146 + return MODE_OK; 147 + 148 + return MODE_NOCLOCK; 149 + } 150 + 151 + static void arc_pgu_mode_set(struct arcpgu_drm_private *arcpgu) 152 + { 153 + struct drm_display_mode *m = &arcpgu->pipe.crtc.state->adjusted_mode; 154 + u32 val; 155 + 156 + arc_pgu_write(arcpgu, ARCPGU_REG_FMT, 157 + ENCODE_PGU_XY(m->crtc_htotal, m->crtc_vtotal)); 158 + 159 + arc_pgu_write(arcpgu, ARCPGU_REG_HSYNC, 160 + ENCODE_PGU_XY(m->crtc_hsync_start - m->crtc_hdisplay, 161 + m->crtc_hsync_end - m->crtc_hdisplay)); 162 + 163 + arc_pgu_write(arcpgu, ARCPGU_REG_VSYNC, 164 + ENCODE_PGU_XY(m->crtc_vsync_start - m->crtc_vdisplay, 165 + m->crtc_vsync_end - m->crtc_vdisplay)); 166 + 167 + arc_pgu_write(arcpgu, ARCPGU_REG_ACTIVE, 168 + ENCODE_PGU_XY(m->crtc_hblank_end - m->crtc_hblank_start, 169 + m->crtc_vblank_end - m->crtc_vblank_start)); 170 + 171 + val = arc_pgu_read(arcpgu, ARCPGU_REG_CTRL); 172 + 173 + if (m->flags & DRM_MODE_FLAG_PVSYNC) 174 + val |= ARCPGU_CTRL_VS_POL_MASK << ARCPGU_CTRL_VS_POL_OFST; 175 + else 176 + val &= ~(ARCPGU_CTRL_VS_POL_MASK << ARCPGU_CTRL_VS_POL_OFST); 177 + 178 + if (m->flags & DRM_MODE_FLAG_PHSYNC) 179 + val |= 
ARCPGU_CTRL_HS_POL_MASK << ARCPGU_CTRL_HS_POL_OFST; 180 + else 181 + val &= ~(ARCPGU_CTRL_HS_POL_MASK << ARCPGU_CTRL_HS_POL_OFST); 182 + 183 + arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, val); 184 + arc_pgu_write(arcpgu, ARCPGU_REG_STRIDE, 0); 185 + arc_pgu_write(arcpgu, ARCPGU_REG_START_SET, 1); 186 + 187 + arc_pgu_set_pxl_fmt(arcpgu); 188 + 189 + clk_set_rate(arcpgu->clk, m->crtc_clock * 1000); 190 + } 191 + 192 + static void arc_pgu_enable(struct drm_simple_display_pipe *pipe, 193 + struct drm_crtc_state *crtc_state, 194 + struct drm_plane_state *plane_state) 195 + { 196 + struct arcpgu_drm_private *arcpgu = pipe_to_arcpgu_priv(pipe); 197 + 198 + arc_pgu_mode_set(arcpgu); 199 + 200 + clk_prepare_enable(arcpgu->clk); 201 + arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, 202 + arc_pgu_read(arcpgu, ARCPGU_REG_CTRL) | 203 + ARCPGU_CTRL_ENABLE_MASK); 204 + } 205 + 206 + static void arc_pgu_disable(struct drm_simple_display_pipe *pipe) 207 + { 208 + struct arcpgu_drm_private *arcpgu = pipe_to_arcpgu_priv(pipe); 209 + 210 + clk_disable_unprepare(arcpgu->clk); 211 + arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, 212 + arc_pgu_read(arcpgu, ARCPGU_REG_CTRL) & 213 + ~ARCPGU_CTRL_ENABLE_MASK); 214 + } 215 + 216 + static void arc_pgu_update(struct drm_simple_display_pipe *pipe, 217 + struct drm_plane_state *state) 218 + { 219 + struct arcpgu_drm_private *arcpgu; 220 + struct drm_gem_cma_object *gem; 221 + 222 + if (!pipe->plane.state->fb) 223 + return; 224 + 225 + arcpgu = pipe_to_arcpgu_priv(pipe); 226 + gem = drm_fb_cma_get_gem_obj(pipe->plane.state->fb, 0); 227 + arc_pgu_write(arcpgu, ARCPGU_REG_BUF0_ADDR, gem->paddr); 228 + } 229 + 230 + static const struct drm_simple_display_pipe_funcs arc_pgu_pipe_funcs = { 231 + .update = arc_pgu_update, 232 + .mode_valid = arc_pgu_mode_valid, 233 + .enable = arc_pgu_enable, 234 + .disable = arc_pgu_disable, 235 + }; 236 + 237 + static const struct drm_mode_config_funcs arcpgu_drm_modecfg_funcs = { 238 + .fb_create = drm_gem_fb_create, 239 + .atomic_check 
= drm_atomic_helper_check, 240 + .atomic_commit = drm_atomic_helper_commit, 241 + }; 242 + 243 + DEFINE_DRM_GEM_CMA_FOPS(arcpgu_drm_ops); 244 + 245 + static int arcpgu_load(struct arcpgu_drm_private *arcpgu) 246 + { 247 + struct platform_device *pdev = to_platform_device(arcpgu->drm.dev); 248 + struct device_node *encoder_node = NULL, *endpoint_node = NULL; 249 + struct drm_connector *connector = NULL; 250 + struct drm_device *drm = &arcpgu->drm; 251 + struct resource *res; 252 + int ret; 253 + 254 + arcpgu->clk = devm_clk_get(drm->dev, "pxlclk"); 255 + if (IS_ERR(arcpgu->clk)) 256 + return PTR_ERR(arcpgu->clk); 257 + 258 + ret = drmm_mode_config_init(drm); 259 + if (ret) 260 + return ret; 261 + 262 + drm->mode_config.min_width = 0; 263 + drm->mode_config.min_height = 0; 264 + drm->mode_config.max_width = 1920; 265 + drm->mode_config.max_height = 1080; 266 + drm->mode_config.funcs = &arcpgu_drm_modecfg_funcs; 267 + 268 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 269 + arcpgu->regs = devm_ioremap_resource(&pdev->dev, res); 270 + if (IS_ERR(arcpgu->regs)) 271 + return PTR_ERR(arcpgu->regs); 272 + 273 + dev_info(drm->dev, "arc_pgu ID: 0x%x\n", 274 + arc_pgu_read(arcpgu, ARCPGU_REG_ID)); 275 + 276 + /* Get the optional framebuffer memory resource */ 277 + ret = of_reserved_mem_device_init(drm->dev); 278 + if (ret && ret != -ENODEV) 279 + return ret; 280 + 281 + if (dma_set_mask_and_coherent(drm->dev, DMA_BIT_MASK(32))) 282 + return -ENODEV; 283 + 284 + /* 285 + * There is only one output port inside each device. It is linked with 286 + * encoder endpoint. 287 + */ 288 + endpoint_node = of_graph_get_next_endpoint(pdev->dev.of_node, NULL); 289 + if (endpoint_node) { 290 + encoder_node = of_graph_get_remote_port_parent(endpoint_node); 291 + of_node_put(endpoint_node); 292 + } else { 293 + connector = &arcpgu->sim_conn; 294 + dev_info(drm->dev, "no encoder found. 
Assumed virtual LCD on simulation platform\n"); 295 + ret = arcpgu_drm_sim_init(drm, connector); 296 + if (ret < 0) 297 + return ret; 298 + } 299 + 300 + ret = drm_simple_display_pipe_init(drm, &arcpgu->pipe, &arc_pgu_pipe_funcs, 301 + arc_pgu_supported_formats, 302 + ARRAY_SIZE(arc_pgu_supported_formats), 303 + NULL, connector); 304 + if (ret) 305 + return ret; 306 + 307 + if (encoder_node) { 308 + struct drm_bridge *bridge; 309 + 310 + /* Locate drm bridge from the hdmi encoder DT node */ 311 + bridge = of_drm_find_bridge(encoder_node); 312 + if (!bridge) 313 + return -EPROBE_DEFER; 314 + 315 + ret = drm_simple_display_pipe_attach_bridge(&arcpgu->pipe, bridge); 316 + if (ret) 317 + return ret; 318 + } 319 + 320 + drm_mode_config_reset(drm); 321 + drm_kms_helper_poll_init(drm); 322 + 323 + platform_set_drvdata(pdev, drm); 324 + return 0; 325 + } 326 + 327 + static int arcpgu_unload(struct drm_device *drm) 328 + { 329 + drm_kms_helper_poll_fini(drm); 330 + drm_atomic_helper_shutdown(drm); 331 + 332 + return 0; 333 + } 334 + 335 + #ifdef CONFIG_DEBUG_FS 336 + static int arcpgu_show_pxlclock(struct seq_file *m, void *arg) 337 + { 338 + struct drm_info_node *node = (struct drm_info_node *)m->private; 339 + struct drm_device *drm = node->minor->dev; 340 + struct arcpgu_drm_private *arcpgu = dev_to_arcpgu(drm); 341 + unsigned long clkrate = clk_get_rate(arcpgu->clk); 342 + unsigned long mode_clock = arcpgu->pipe.crtc.mode.crtc_clock * 1000; 343 + 344 + seq_printf(m, "hw : %lu\n", clkrate); 345 + seq_printf(m, "mode: %lu\n", mode_clock); 346 + return 0; 347 + } 348 + 349 + static struct drm_info_list arcpgu_debugfs_list[] = { 350 + { "clocks", arcpgu_show_pxlclock, 0 }, 351 + }; 352 + 353 + static void arcpgu_debugfs_init(struct drm_minor *minor) 354 + { 355 + drm_debugfs_create_files(arcpgu_debugfs_list, 356 + ARRAY_SIZE(arcpgu_debugfs_list), 357 + minor->debugfs_root, minor); 358 + } 359 + #endif 360 + 361 + static const struct drm_driver arcpgu_drm_driver = { 362 + 
.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC, 363 + .name = "arcpgu", 364 + .desc = "ARC PGU Controller", 365 + .date = "20160219", 366 + .major = 1, 367 + .minor = 0, 368 + .patchlevel = 0, 369 + .fops = &arcpgu_drm_ops, 370 + DRM_GEM_CMA_DRIVER_OPS, 371 + #ifdef CONFIG_DEBUG_FS 372 + .debugfs_init = arcpgu_debugfs_init, 373 + #endif 374 + }; 375 + 376 + static int arcpgu_probe(struct platform_device *pdev) 377 + { 378 + struct arcpgu_drm_private *arcpgu; 379 + int ret; 380 + 381 + arcpgu = devm_drm_dev_alloc(&pdev->dev, &arcpgu_drm_driver, 382 + struct arcpgu_drm_private, drm); 383 + if (IS_ERR(arcpgu)) 384 + return PTR_ERR(arcpgu); 385 + 386 + ret = arcpgu_load(arcpgu); 387 + if (ret) 388 + return ret; 389 + 390 + ret = drm_dev_register(&arcpgu->drm, 0); 391 + if (ret) 392 + goto err_unload; 393 + 394 + drm_fbdev_generic_setup(&arcpgu->drm, 16); 395 + 396 + return 0; 397 + 398 + err_unload: 399 + arcpgu_unload(&arcpgu->drm); 400 + 401 + return ret; 402 + } 403 + 404 + static int arcpgu_remove(struct platform_device *pdev) 405 + { 406 + struct drm_device *drm = platform_get_drvdata(pdev); 407 + 408 + drm_dev_unregister(drm); 409 + arcpgu_unload(drm); 410 + 411 + return 0; 412 + } 413 + 414 + static const struct of_device_id arcpgu_of_table[] = { 415 + {.compatible = "snps,arcpgu"}, 416 + {} 417 + }; 418 + 419 + MODULE_DEVICE_TABLE(of, arcpgu_of_table); 420 + 421 + static struct platform_driver arcpgu_platform_driver = { 422 + .probe = arcpgu_probe, 423 + .remove = arcpgu_remove, 424 + .driver = { 425 + .name = "arcpgu", 426 + .of_match_table = arcpgu_of_table, 427 + }, 428 + }; 429 + 430 + module_platform_driver(arcpgu_platform_driver); 431 + 432 + MODULE_AUTHOR("Carlos Palminha <palminha@synopsys.com>"); 433 + MODULE_DESCRIPTION("ARC PGU DRM driver"); 434 + MODULE_LICENSE("GPL");
+17 -26
drivers/gpu/drm/tiny/cirrus.c
··· 33 33 #include <drm/drm_file.h> 34 34 #include <drm/drm_format_helper.h> 35 35 #include <drm/drm_fourcc.h> 36 - #include <drm/drm_gem_shmem_helper.h> 36 + #include <drm/drm_gem_atomic_helper.h> 37 37 #include <drm/drm_gem_framebuffer_helper.h> 38 + #include <drm/drm_gem_shmem_helper.h> 38 39 #include <drm/drm_ioctl.h> 39 40 #include <drm/drm_managed.h> 40 41 #include <drm/drm_modeset_helper_vtables.h> ··· 312 311 return 0; 313 312 } 314 313 315 - static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, 314 + static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, const struct dma_buf_map *map, 316 315 struct drm_rect *rect) 317 316 { 318 317 struct cirrus_device *cirrus = to_cirrus(fb->dev); 319 - struct dma_buf_map map; 320 - void *vmap; 321 - int idx, ret; 318 + void *vmap = map->vaddr; /* TODO: Use mapping abstraction properly */ 319 + int idx; 322 320 323 - ret = -ENODEV; 324 321 if (!drm_dev_enter(&cirrus->dev, &idx)) 325 - goto out; 326 - 327 - ret = drm_gem_shmem_vmap(fb->obj[0], &map); 328 - if (ret) 329 - goto out_dev_exit; 330 - vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ 322 + return -ENODEV; 331 323 332 324 if (cirrus->cpp == fb->format->cpp[0]) 333 325 drm_fb_memcpy_dstclip(cirrus->vram, ··· 339 345 else 340 346 WARN_ON_ONCE("cpp mismatch"); 341 347 342 - drm_gem_shmem_vunmap(fb->obj[0], &map); 343 - ret = 0; 344 - 345 - out_dev_exit: 346 348 drm_dev_exit(idx); 347 - out: 348 - return ret; 349 + 350 + return 0; 349 351 } 350 352 351 - static int cirrus_fb_blit_fullscreen(struct drm_framebuffer *fb) 353 + static int cirrus_fb_blit_fullscreen(struct drm_framebuffer *fb, const struct dma_buf_map *map) 352 354 { 353 355 struct drm_rect fullscreen = { 354 356 .x1 = 0, ··· 352 362 .y1 = 0, 353 363 .y2 = fb->height, 354 364 }; 355 - return cirrus_fb_blit_rect(fb, &fullscreen); 365 + return cirrus_fb_blit_rect(fb, map, &fullscreen); 356 366 } 357 367 358 368 static int cirrus_check_size(int width, int height, ··· 431 441 struct 
drm_plane_state *plane_state) 432 442 { 433 443 struct cirrus_device *cirrus = to_cirrus(pipe->crtc.dev); 444 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 434 445 435 446 cirrus_mode_set(cirrus, &crtc_state->mode, plane_state->fb); 436 - cirrus_fb_blit_fullscreen(plane_state->fb); 447 + cirrus_fb_blit_fullscreen(plane_state->fb, &shadow_plane_state->map[0]); 437 448 } 438 449 439 450 static void cirrus_pipe_update(struct drm_simple_display_pipe *pipe, ··· 442 451 { 443 452 struct cirrus_device *cirrus = to_cirrus(pipe->crtc.dev); 444 453 struct drm_plane_state *state = pipe->plane.state; 454 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state); 445 455 struct drm_crtc *crtc = &pipe->crtc; 446 456 struct drm_rect rect; 447 457 448 - if (pipe->plane.state->fb && 449 - cirrus->cpp != cirrus_cpp(pipe->plane.state->fb)) 450 - cirrus_mode_set(cirrus, &crtc->mode, 451 - pipe->plane.state->fb); 458 + if (state->fb && cirrus->cpp != cirrus_cpp(state->fb)) 459 + cirrus_mode_set(cirrus, &crtc->mode, state->fb); 452 460 453 461 if (drm_atomic_helper_damage_merged(old_state, state, &rect)) 454 - cirrus_fb_blit_rect(pipe->plane.state->fb, &rect); 462 + cirrus_fb_blit_rect(state->fb, &shadow_plane_state->map[0], &rect); 455 463 } 456 464 457 465 static const struct drm_simple_display_pipe_funcs cirrus_pipe_funcs = { ··· 458 468 .check = cirrus_pipe_check, 459 469 .enable = cirrus_pipe_enable, 460 470 .update = cirrus_pipe_update, 471 + DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 461 472 }; 462 473 463 474 static const uint32_t cirrus_formats[] = {
+13 -15
drivers/gpu/drm/tiny/gm12u320.c
··· 16 16 #include <drm/drm_file.h> 17 17 #include <drm/drm_format_helper.h> 18 18 #include <drm/drm_fourcc.h> 19 - #include <drm/drm_gem_shmem_helper.h> 19 + #include <drm/drm_gem_atomic_helper.h> 20 20 #include <drm/drm_gem_framebuffer_helper.h> 21 + #include <drm/drm_gem_shmem_helper.h> 21 22 #include <drm/drm_ioctl.h> 22 23 #include <drm/drm_managed.h> 23 24 #include <drm/drm_modeset_helper_vtables.h> ··· 96 95 struct drm_rect rect; 97 96 int frame; 98 97 int draw_status_timeout; 98 + struct dma_buf_map src_map; 99 99 } fb_update; 100 100 }; 101 101 ··· 253 251 { 254 252 int block, dst_offset, len, remain, ret, x1, x2, y1, y2; 255 253 struct drm_framebuffer *fb; 256 - struct dma_buf_map map; 257 254 void *vaddr; 258 255 u8 *src; 259 256 ··· 266 265 x2 = gm12u320->fb_update.rect.x2; 267 266 y1 = gm12u320->fb_update.rect.y1; 268 267 y2 = gm12u320->fb_update.rect.y2; 269 - 270 - ret = drm_gem_shmem_vmap(fb->obj[0], &map); 271 - if (ret) { 272 - GM12U320_ERR("failed to vmap fb: %d\n", ret); 273 - goto put_fb; 274 - } 275 - vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ 268 + vaddr = gm12u320->fb_update.src_map.vaddr; /* TODO: Use mapping abstraction properly */ 276 269 277 270 if (fb->obj[0]->import_attach) { 278 271 ret = dma_buf_begin_cpu_access( 279 272 fb->obj[0]->import_attach->dmabuf, DMA_FROM_DEVICE); 280 273 if (ret) { 281 274 GM12U320_ERR("dma_buf_begin_cpu_access err: %d\n", ret); 282 - goto vunmap; 275 + goto put_fb; 283 276 } 284 277 } 285 278 ··· 317 322 if (ret) 318 323 GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); 319 324 } 320 - vunmap: 321 - drm_gem_shmem_vunmap(fb->obj[0], &map); 322 325 put_fb: 323 326 drm_framebuffer_put(fb); 324 327 gm12u320->fb_update.fb = NULL; ··· 404 411 GM12U320_ERR("Frame update error: %d\n", ret); 405 412 } 406 413 407 - static void gm12u320_fb_mark_dirty(struct drm_framebuffer *fb, 414 + static void gm12u320_fb_mark_dirty(struct drm_framebuffer *fb, const struct dma_buf_map *map, 408 415 struct 
drm_rect *dirty) 409 416 { 410 417 struct gm12u320_device *gm12u320 = to_gm12u320(fb->dev); ··· 418 425 drm_framebuffer_get(fb); 419 426 gm12u320->fb_update.fb = fb; 420 427 gm12u320->fb_update.rect = *dirty; 428 + gm12u320->fb_update.src_map = *map; 421 429 wakeup = true; 422 430 } else { 423 431 struct drm_rect *rect = &gm12u320->fb_update.rect; ··· 447 453 mutex_lock(&gm12u320->fb_update.lock); 448 454 old_fb = gm12u320->fb_update.fb; 449 455 gm12u320->fb_update.fb = NULL; 456 + dma_buf_map_clear(&gm12u320->fb_update.src_map); 450 457 mutex_unlock(&gm12u320->fb_update.lock); 451 458 452 459 drm_framebuffer_put(old_fb); ··· 560 565 { 561 566 struct drm_rect rect = { 0, 0, GM12U320_USER_WIDTH, GM12U320_HEIGHT }; 562 567 struct gm12u320_device *gm12u320 = to_gm12u320(pipe->crtc.dev); 568 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 563 569 564 570 gm12u320->fb_update.draw_status_timeout = FIRST_FRAME_TIMEOUT; 565 - gm12u320_fb_mark_dirty(plane_state->fb, &rect); 571 + gm12u320_fb_mark_dirty(plane_state->fb, &shadow_plane_state->map[0], &rect); 566 572 } 567 573 568 574 static void gm12u320_pipe_disable(struct drm_simple_display_pipe *pipe) ··· 577 581 struct drm_plane_state *old_state) 578 582 { 579 583 struct drm_plane_state *state = pipe->plane.state; 584 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state); 580 585 struct drm_rect rect; 581 586 582 587 if (drm_atomic_helper_damage_merged(old_state, state, &rect)) 583 - gm12u320_fb_mark_dirty(pipe->plane.state->fb, &rect); 588 + gm12u320_fb_mark_dirty(state->fb, &shadow_plane_state->map[0], &rect); 584 589 } 585 590 586 591 static const struct drm_simple_display_pipe_funcs gm12u320_pipe_funcs = { 587 592 .enable = gm12u320_pipe_enable, 588 593 .disable = gm12u320_pipe_disable, 589 594 .update = gm12u320_pipe_update, 595 + DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 590 596 }; 591 597 592 598 static const uint32_t 
gm12u320_pipe_formats[] = {
+2 -2
drivers/gpu/drm/tiny/hx8357d.c
··· 19 19 #include <drm/drm_atomic_helper.h> 20 20 #include <drm/drm_drv.h> 21 21 #include <drm/drm_fb_helper.h> 22 + #include <drm/drm_gem_atomic_helper.h> 22 23 #include <drm/drm_gem_cma_helper.h> 23 - #include <drm/drm_gem_framebuffer_helper.h> 24 24 #include <drm/drm_managed.h> 25 25 #include <drm/drm_mipi_dbi.h> 26 26 #include <drm/drm_modeset_helper.h> ··· 184 184 .enable = yx240qv29_enable, 185 185 .disable = mipi_dbi_pipe_disable, 186 186 .update = mipi_dbi_pipe_update, 187 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 187 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 188 188 }; 189 189 190 190 static const struct drm_display_mode yx350hv15_mode = {
+2 -2
drivers/gpu/drm/tiny/ili9225.c
··· 22 22 #include <drm/drm_fb_cma_helper.h> 23 23 #include <drm/drm_fb_helper.h> 24 24 #include <drm/drm_fourcc.h> 25 + #include <drm/drm_gem_atomic_helper.h> 25 26 #include <drm/drm_gem_cma_helper.h> 26 - #include <drm/drm_gem_framebuffer_helper.h> 27 27 #include <drm/drm_managed.h> 28 28 #include <drm/drm_mipi_dbi.h> 29 29 #include <drm/drm_rect.h> ··· 328 328 .enable = ili9225_pipe_enable, 329 329 .disable = ili9225_pipe_disable, 330 330 .update = ili9225_pipe_update, 331 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 331 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 332 332 }; 333 333 334 334 static const struct drm_display_mode ili9225_mode = {
+2 -2
drivers/gpu/drm/tiny/ili9341.c
··· 18 18 #include <drm/drm_atomic_helper.h> 19 19 #include <drm/drm_drv.h> 20 20 #include <drm/drm_fb_helper.h> 21 + #include <drm/drm_gem_atomic_helper.h> 21 22 #include <drm/drm_gem_cma_helper.h> 22 - #include <drm/drm_gem_framebuffer_helper.h> 23 23 #include <drm/drm_managed.h> 24 24 #include <drm/drm_mipi_dbi.h> 25 25 #include <drm/drm_modeset_helper.h> ··· 140 140 .enable = yx240qv29_enable, 141 141 .disable = mipi_dbi_pipe_disable, 142 142 .update = mipi_dbi_pipe_update, 143 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 143 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 144 144 }; 145 145 146 146 static const struct drm_display_mode yx240qv29_mode = {
+2 -2
drivers/gpu/drm/tiny/ili9486.c
··· 17 17 #include <drm/drm_atomic_helper.h> 18 18 #include <drm/drm_drv.h> 19 19 #include <drm/drm_fb_helper.h> 20 + #include <drm/drm_gem_atomic_helper.h> 20 21 #include <drm/drm_gem_cma_helper.h> 21 - #include <drm/drm_gem_framebuffer_helper.h> 22 22 #include <drm/drm_managed.h> 23 23 #include <drm/drm_mipi_dbi.h> 24 24 #include <drm/drm_modeset_helper.h> ··· 153 153 .enable = waveshare_enable, 154 154 .disable = mipi_dbi_pipe_disable, 155 155 .update = mipi_dbi_pipe_update, 156 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 156 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 157 157 }; 158 158 159 159 static const struct drm_display_mode waveshare_mode = {
+2 -2
drivers/gpu/drm/tiny/mi0283qt.c
··· 16 16 #include <drm/drm_atomic_helper.h> 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_fb_helper.h> 19 + #include <drm/drm_gem_atomic_helper.h> 19 20 #include <drm/drm_gem_cma_helper.h> 20 - #include <drm/drm_gem_framebuffer_helper.h> 21 21 #include <drm/drm_managed.h> 22 22 #include <drm/drm_mipi_dbi.h> 23 23 #include <drm/drm_modeset_helper.h> ··· 144 144 .enable = mi0283qt_enable, 145 145 .disable = mipi_dbi_pipe_disable, 146 146 .update = mipi_dbi_pipe_update, 147 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 147 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 148 148 }; 149 149 150 150 static const struct drm_display_mode mi0283qt_mode = {
+2 -1
drivers/gpu/drm/tiny/repaper.c
··· 29 29 #include <drm/drm_fb_cma_helper.h> 30 30 #include <drm/drm_fb_helper.h> 31 31 #include <drm/drm_format_helper.h> 32 + #include <drm/drm_gem_atomic_helper.h> 32 33 #include <drm/drm_gem_cma_helper.h> 33 34 #include <drm/drm_gem_framebuffer_helper.h> 34 35 #include <drm/drm_managed.h> ··· 861 860 .enable = repaper_pipe_enable, 862 861 .disable = repaper_pipe_disable, 863 862 .update = repaper_pipe_update, 864 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 863 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 865 864 }; 866 865 867 866 static int repaper_connector_get_modes(struct drm_connector *connector)
+2 -2
drivers/gpu/drm/tiny/st7586.c
··· 19 19 #include <drm/drm_fb_cma_helper.h> 20 20 #include <drm/drm_fb_helper.h> 21 21 #include <drm/drm_format_helper.h> 22 + #include <drm/drm_gem_atomic_helper.h> 22 23 #include <drm/drm_gem_cma_helper.h> 23 - #include <drm/drm_gem_framebuffer_helper.h> 24 24 #include <drm/drm_managed.h> 25 25 #include <drm/drm_mipi_dbi.h> 26 26 #include <drm/drm_rect.h> ··· 268 268 .enable = st7586_pipe_enable, 269 269 .disable = st7586_pipe_disable, 270 270 .update = st7586_pipe_update, 271 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 271 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 272 272 }; 273 273 274 274 static const struct drm_display_mode st7586_mode = {
+2 -2
drivers/gpu/drm/tiny/st7735r.c
··· 19 19 #include <drm/drm_atomic_helper.h> 20 20 #include <drm/drm_drv.h> 21 21 #include <drm/drm_fb_helper.h> 22 + #include <drm/drm_gem_atomic_helper.h> 22 23 #include <drm/drm_gem_cma_helper.h> 23 - #include <drm/drm_gem_framebuffer_helper.h> 24 24 #include <drm/drm_managed.h> 25 25 #include <drm/drm_mipi_dbi.h> 26 26 ··· 136 136 .enable = st7735r_pipe_enable, 137 137 .disable = mipi_dbi_pipe_disable, 138 138 .update = mipi_dbi_pipe_update, 139 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 139 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 140 140 }; 141 141 142 142 static const struct st7735r_cfg jd_t18003_t01_cfg = {
+3 -4
drivers/gpu/drm/ttm/Makefile
··· 2 2 # 3 3 # Makefile for the drm device driver. This driver provides support for the 4 4 5 - ttm-y := ttm_memory.o ttm_tt.o ttm_bo.o \ 6 - ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ 7 - ttm_execbuf_util.o ttm_range_manager.o \ 8 - ttm_resource.o ttm_pool.o 5 + ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ 6 + ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ 7 + ttm_device.o 9 8 ttm-$(CONFIG_AGP) += ttm_agp_backend.o 10 9 11 10 obj-$(CONFIG_DRM_TTM) += ttm.o
+1 -1
drivers/gpu/drm/ttm/ttm_agp_backend.c
··· 49 49 int ttm_agp_bind(struct ttm_tt *ttm, struct ttm_resource *bo_mem) 50 50 { 51 51 struct ttm_agp_backend *agp_be = container_of(ttm, struct ttm_agp_backend, ttm); 52 - struct page *dummy_read_page = ttm_bo_glob.dummy_read_page; 52 + struct page *dummy_read_page = ttm_glob.dummy_read_page; 53 53 struct drm_mm_node *node = bo_mem->mm_node; 54 54 struct agp_memory *mem; 55 55 int ret, cached = ttm->caching == ttm_cached;
+55 -280
drivers/gpu/drm/ttm/ttm_bo.c
··· 44 44 45 45 #include "ttm_module.h" 46 46 47 - static void ttm_bo_global_kobj_release(struct kobject *kobj); 48 - 49 - /* 50 - * ttm_global_mutex - protecting the global BO state 51 - */ 52 - DEFINE_MUTEX(ttm_global_mutex); 53 - unsigned ttm_bo_glob_use_count; 54 - struct ttm_bo_global ttm_bo_glob; 55 - EXPORT_SYMBOL(ttm_bo_glob); 56 - 57 - static struct attribute ttm_bo_count = { 58 - .name = "bo_count", 59 - .mode = S_IRUGO 60 - }; 61 - 62 47 /* default destructor */ 63 48 static void ttm_bo_default_destroy(struct ttm_buffer_object *bo) 64 49 { ··· 69 84 } 70 85 } 71 86 72 - static ssize_t ttm_bo_global_show(struct kobject *kobj, 73 - struct attribute *attr, 74 - char *buffer) 75 - { 76 - struct ttm_bo_global *glob = 77 - container_of(kobj, struct ttm_bo_global, kobj); 78 - 79 - return snprintf(buffer, PAGE_SIZE, "%d\n", 80 - atomic_read(&glob->bo_count)); 81 - } 82 - 83 - static struct attribute *ttm_bo_global_attrs[] = { 84 - &ttm_bo_count, 85 - NULL 86 - }; 87 - 88 - static const struct sysfs_ops ttm_bo_global_ops = { 89 - .show = &ttm_bo_global_show 90 - }; 91 - 92 - static struct kobj_type ttm_bo_glob_kobj_type = { 93 - .release = &ttm_bo_global_kobj_release, 94 - .sysfs_ops = &ttm_bo_global_ops, 95 - .default_attrs = ttm_bo_global_attrs 96 - }; 97 - 98 87 static void ttm_bo_del_from_lru(struct ttm_buffer_object *bo) 99 88 { 100 - struct ttm_bo_device *bdev = bo->bdev; 89 + struct ttm_device *bdev = bo->bdev; 101 90 102 91 list_del_init(&bo->swap); 103 92 list_del_init(&bo->lru); 104 93 105 - if (bdev->driver->del_from_lru_notify) 106 - bdev->driver->del_from_lru_notify(bo); 94 + if (bdev->funcs->del_from_lru_notify) 95 + bdev->funcs->del_from_lru_notify(bo); 107 96 } 108 97 109 98 static void ttm_bo_bulk_move_set_pos(struct ttm_lru_bulk_move_pos *pos, ··· 92 133 struct ttm_resource *mem, 93 134 struct ttm_lru_bulk_move *bulk) 94 135 { 95 - struct ttm_bo_device *bdev = bo->bdev; 136 + struct ttm_device *bdev = bo->bdev; 96 137 struct ttm_resource_manager 
*man; 97 138 98 139 if (!bo->deleted) ··· 110 151 TTM_PAGE_FLAG_SWAPPED))) { 111 152 struct list_head *swap; 112 153 113 - swap = &ttm_bo_glob.swap_lru[bo->priority]; 154 + swap = &ttm_glob.swap_lru[bo->priority]; 114 155 list_move_tail(&bo->swap, swap); 156 + } else { 157 + list_del_init(&bo->swap); 115 158 } 116 159 117 - if (bdev->driver->del_from_lru_notify) 118 - bdev->driver->del_from_lru_notify(bo); 160 + if (bdev->funcs->del_from_lru_notify) 161 + bdev->funcs->del_from_lru_notify(bo); 119 162 120 163 if (bulk && !bo->pin_count) { 121 164 switch (bo->mem.mem_type) { ··· 180 219 dma_resv_assert_held(pos->first->base.resv); 181 220 dma_resv_assert_held(pos->last->base.resv); 182 221 183 - lru = &ttm_bo_glob.swap_lru[i]; 222 + lru = &ttm_glob.swap_lru[i]; 184 223 list_bulk_move_tail(lru, &pos->first->swap, &pos->last->swap); 185 224 } 186 225 } ··· 191 230 struct ttm_operation_ctx *ctx, 192 231 struct ttm_place *hop) 193 232 { 194 - struct ttm_bo_device *bdev = bo->bdev; 233 + struct ttm_device *bdev = bo->bdev; 195 234 struct ttm_resource_manager *old_man = ttm_manager_type(bdev, bo->mem.mem_type); 196 235 struct ttm_resource_manager *new_man = ttm_manager_type(bdev, mem->mem_type); 197 236 int ret; ··· 217 256 } 218 257 } 219 258 220 - ret = bdev->driver->move(bo, evict, ctx, mem, hop); 259 + ret = bdev->funcs->move(bo, evict, ctx, mem, hop); 221 260 if (ret) { 222 261 if (ret == -EMULTIHOP) 223 262 return ret; ··· 245 284 246 285 static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo) 247 286 { 248 - if (bo->bdev->driver->delete_mem_notify) 249 - bo->bdev->driver->delete_mem_notify(bo); 287 + if (bo->bdev->funcs->delete_mem_notify) 288 + bo->bdev->funcs->delete_mem_notify(bo); 250 289 251 290 ttm_bo_tt_destroy(bo); 252 291 ttm_resource_free(bo, &bo->mem); ··· 271 310 * reference it any more. The only tricky case is the trylock on 272 311 * the resv object while holding the lru_lock. 
273 312 */ 274 - spin_lock(&ttm_bo_glob.lru_lock); 313 + spin_lock(&ttm_glob.lru_lock); 275 314 bo->base.resv = &bo->base._resv; 276 - spin_unlock(&ttm_bo_glob.lru_lock); 315 + spin_unlock(&ttm_glob.lru_lock); 277 316 } 278 317 279 318 return r; ··· 332 371 333 372 if (unlock_resv) 334 373 dma_resv_unlock(bo->base.resv); 335 - spin_unlock(&ttm_bo_glob.lru_lock); 374 + spin_unlock(&ttm_glob.lru_lock); 336 375 337 376 lret = dma_resv_wait_timeout_rcu(resv, true, interruptible, 338 377 30 * HZ); ··· 342 381 else if (lret == 0) 343 382 return -EBUSY; 344 383 345 - spin_lock(&ttm_bo_glob.lru_lock); 384 + spin_lock(&ttm_glob.lru_lock); 346 385 if (unlock_resv && !dma_resv_trylock(bo->base.resv)) { 347 386 /* 348 387 * We raced, and lost, someone else holds the reservation now, ··· 352 391 * delayed destruction would succeed, so just return success 353 392 * here. 354 393 */ 355 - spin_unlock(&ttm_bo_glob.lru_lock); 394 + spin_unlock(&ttm_glob.lru_lock); 356 395 return 0; 357 396 } 358 397 ret = 0; ··· 361 400 if (ret || unlikely(list_empty(&bo->ddestroy))) { 362 401 if (unlock_resv) 363 402 dma_resv_unlock(bo->base.resv); 364 - spin_unlock(&ttm_bo_glob.lru_lock); 403 + spin_unlock(&ttm_glob.lru_lock); 365 404 return ret; 366 405 } 367 406 368 407 ttm_bo_del_from_lru(bo); 369 408 list_del_init(&bo->ddestroy); 370 - spin_unlock(&ttm_bo_glob.lru_lock); 409 + spin_unlock(&ttm_glob.lru_lock); 371 410 ttm_bo_cleanup_memtype_use(bo); 372 411 373 412 if (unlock_resv) ··· 382 421 * Traverse the delayed list, and call ttm_bo_cleanup_refs on all 383 422 * encountered buffers. 
384 423 */ 385 - static bool ttm_bo_delayed_delete(struct ttm_bo_device *bdev, bool remove_all) 424 + bool ttm_bo_delayed_delete(struct ttm_device *bdev, bool remove_all) 386 425 { 387 - struct ttm_bo_global *glob = &ttm_bo_glob; 426 + struct ttm_global *glob = &ttm_glob; 388 427 struct list_head removed; 389 428 bool empty; 390 429 ··· 423 462 return empty; 424 463 } 425 464 426 - static void ttm_bo_delayed_workqueue(struct work_struct *work) 427 - { 428 - struct ttm_bo_device *bdev = 429 - container_of(work, struct ttm_bo_device, wq.work); 430 - 431 - if (!ttm_bo_delayed_delete(bdev, false)) 432 - schedule_delayed_work(&bdev->wq, 433 - ((HZ / 100) < 1) ? 1 : HZ / 100); 434 - } 435 - 436 465 static void ttm_bo_release(struct kref *kref) 437 466 { 438 467 struct ttm_buffer_object *bo = 439 468 container_of(kref, struct ttm_buffer_object, kref); 440 - struct ttm_bo_device *bdev = bo->bdev; 441 - size_t acc_size = bo->acc_size; 469 + struct ttm_device *bdev = bo->bdev; 442 470 int ret; 443 471 444 472 if (!bo->deleted) { ··· 440 490 30 * HZ); 441 491 } 442 492 443 - if (bo->bdev->driver->release_notify) 444 - bo->bdev->driver->release_notify(bo); 493 + if (bo->bdev->funcs->release_notify) 494 + bo->bdev->funcs->release_notify(bo); 445 495 446 496 drm_vma_offset_remove(bdev->vma_manager, &bo->base.vma_node); 447 497 ttm_mem_io_free(bdev, &bo->mem); ··· 453 503 ttm_bo_flush_all_fences(bo); 454 504 bo->deleted = true; 455 505 456 - spin_lock(&ttm_bo_glob.lru_lock); 506 + spin_lock(&ttm_glob.lru_lock); 457 507 458 508 /* 459 509 * Make pinned bos immediately available to ··· 470 520 471 521 kref_init(&bo->kref); 472 522 list_add_tail(&bo->ddestroy, &bdev->ddestroy); 473 - spin_unlock(&ttm_bo_glob.lru_lock); 523 + spin_unlock(&ttm_glob.lru_lock); 474 524 475 525 schedule_delayed_work(&bdev->wq, 476 526 ((HZ / 100) < 1) ? 
1 : HZ / 100); 477 527 return; 478 528 } 479 529 480 - spin_lock(&ttm_bo_glob.lru_lock); 530 + spin_lock(&ttm_glob.lru_lock); 481 531 ttm_bo_del_from_lru(bo); 482 532 list_del(&bo->ddestroy); 483 - spin_unlock(&ttm_bo_glob.lru_lock); 533 + spin_unlock(&ttm_glob.lru_lock); 484 534 485 535 ttm_bo_cleanup_memtype_use(bo); 486 536 dma_resv_unlock(bo->base.resv); 487 537 488 - atomic_dec(&ttm_bo_glob.bo_count); 538 + atomic_dec(&ttm_glob.bo_count); 489 539 dma_fence_put(bo->moving); 490 540 if (!ttm_bo_uses_embedded_gem_object(bo)) 491 541 dma_resv_fini(&bo->base._resv); 492 542 bo->destroy(bo); 493 - ttm_mem_global_free(&ttm_mem_glob, acc_size); 494 543 } 495 544 496 545 void ttm_bo_put(struct ttm_buffer_object *bo) ··· 498 549 } 499 550 EXPORT_SYMBOL(ttm_bo_put); 500 551 501 - int ttm_bo_lock_delayed_workqueue(struct ttm_bo_device *bdev) 552 + int ttm_bo_lock_delayed_workqueue(struct ttm_device *bdev) 502 553 { 503 554 return cancel_delayed_work_sync(&bdev->wq); 504 555 } 505 556 EXPORT_SYMBOL(ttm_bo_lock_delayed_workqueue); 506 557 507 - void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, int resched) 558 + void ttm_bo_unlock_delayed_workqueue(struct ttm_device *bdev, int resched) 508 559 { 509 560 if (resched) 510 561 schedule_delayed_work(&bdev->wq, ··· 515 566 static int ttm_bo_evict(struct ttm_buffer_object *bo, 516 567 struct ttm_operation_ctx *ctx) 517 568 { 518 - struct ttm_bo_device *bdev = bo->bdev; 569 + struct ttm_device *bdev = bo->bdev; 519 570 struct ttm_resource evict_mem; 520 571 struct ttm_placement placement; 521 572 struct ttm_place hop; ··· 527 578 528 579 placement.num_placement = 0; 529 580 placement.num_busy_placement = 0; 530 - bdev->driver->evict_flags(bo, &placement); 581 + bdev->funcs->evict_flags(bo, &placement); 531 582 532 583 if (!placement.num_placement && !placement.num_busy_placement) { 533 584 ttm_bo_wait(bo, false, false); ··· 643 694 return r == -EDEADLK ? 
-EBUSY : r; 644 695 } 645 696 646 - int ttm_mem_evict_first(struct ttm_bo_device *bdev, 697 + int ttm_mem_evict_first(struct ttm_device *bdev, 647 698 struct ttm_resource_manager *man, 648 699 const struct ttm_place *place, 649 700 struct ttm_operation_ctx *ctx, ··· 654 705 unsigned i; 655 706 int ret; 656 707 657 - spin_lock(&ttm_bo_glob.lru_lock); 708 + spin_lock(&ttm_glob.lru_lock); 658 709 for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { 659 710 list_for_each_entry(bo, &man->lru[i], lru) { 660 711 bool busy; ··· 667 718 continue; 668 719 } 669 720 670 - if (place && !bdev->driver->eviction_valuable(bo, 721 + if (place && !bdev->funcs->eviction_valuable(bo, 671 722 place)) { 672 723 if (locked) 673 724 dma_resv_unlock(bo->base.resv); ··· 691 742 if (!bo) { 692 743 if (busy_bo && !ttm_bo_get_unless_zero(busy_bo)) 693 744 busy_bo = NULL; 694 - spin_unlock(&ttm_bo_glob.lru_lock); 745 + spin_unlock(&ttm_glob.lru_lock); 695 746 ret = ttm_mem_evict_wait_busy(busy_bo, ctx, ticket); 696 747 if (busy_bo) 697 748 ttm_bo_put(busy_bo); ··· 705 756 return ret; 706 757 } 707 758 708 - spin_unlock(&ttm_bo_glob.lru_lock); 759 + spin_unlock(&ttm_glob.lru_lock); 709 760 710 761 ret = ttm_bo_evict(bo, ctx); 711 762 if (locked) ··· 760 811 struct ttm_resource *mem, 761 812 struct ttm_operation_ctx *ctx) 762 813 { 763 - struct ttm_bo_device *bdev = bo->bdev; 814 + struct ttm_device *bdev = bo->bdev; 764 815 struct ttm_resource_manager *man = ttm_manager_type(bdev, mem->mem_type); 765 816 struct ww_acquire_ctx *ticket; 766 817 int ret; ··· 795 846 const struct ttm_place *place, 796 847 struct ttm_resource *mem) 797 848 { 798 - struct ttm_bo_device *bdev = bo->bdev; 849 + struct ttm_device *bdev = bo->bdev; 799 850 struct ttm_resource_manager *man; 800 851 801 852 man = ttm_manager_type(bdev, place->mem_type); ··· 805 856 mem->mem_type = place->mem_type; 806 857 mem->placement = place->flags; 807 858 808 - spin_lock(&ttm_bo_glob.lru_lock); 859 + spin_lock(&ttm_glob.lru_lock); 809 860 
ttm_bo_move_to_lru_tail(bo, mem, NULL); 810 - spin_unlock(&ttm_bo_glob.lru_lock); 861 + spin_unlock(&ttm_glob.lru_lock); 811 862 812 863 return 0; 813 864 } ··· 825 876 struct ttm_resource *mem, 826 877 struct ttm_operation_ctx *ctx) 827 878 { 828 - struct ttm_bo_device *bdev = bo->bdev; 879 + struct ttm_device *bdev = bo->bdev; 829 880 bool type_found = false; 830 881 int i, ret; 831 882 ··· 1046 1097 } 1047 1098 EXPORT_SYMBOL(ttm_bo_validate); 1048 1099 1049 - int ttm_bo_init_reserved(struct ttm_bo_device *bdev, 1100 + int ttm_bo_init_reserved(struct ttm_device *bdev, 1050 1101 struct ttm_buffer_object *bo, 1051 1102 size_t size, 1052 1103 enum ttm_bo_type type, 1053 1104 struct ttm_placement *placement, 1054 1105 uint32_t page_alignment, 1055 1106 struct ttm_operation_ctx *ctx, 1056 - size_t acc_size, 1057 1107 struct sg_table *sg, 1058 1108 struct dma_resv *resv, 1059 1109 void (*destroy) (struct ttm_buffer_object *)) 1060 1110 { 1061 - struct ttm_mem_global *mem_glob = &ttm_mem_glob; 1062 1111 bool locked; 1063 1112 int ret = 0; 1064 - 1065 - ret = ttm_mem_global_alloc(mem_glob, acc_size, ctx); 1066 - if (ret) { 1067 - pr_err("Out of kernel memory\n"); 1068 - if (destroy) 1069 - (*destroy)(bo); 1070 - else 1071 - kfree(bo); 1072 - return -ENOMEM; 1073 - } 1074 1113 1075 1114 bo->destroy = destroy ? 
destroy : ttm_bo_default_destroy; 1076 1115 ··· 1076 1139 bo->mem.bus.addr = NULL; 1077 1140 bo->moving = NULL; 1078 1141 bo->mem.placement = 0; 1079 - bo->acc_size = acc_size; 1080 1142 bo->pin_count = 0; 1081 1143 bo->sg = sg; 1082 1144 if (resv) { ··· 1093 1157 dma_resv_init(&bo->base._resv); 1094 1158 drm_vma_node_reset(&bo->base.vma_node); 1095 1159 } 1096 - atomic_inc(&ttm_bo_glob.bo_count); 1160 + atomic_inc(&ttm_glob.bo_count); 1097 1161 1098 1162 /* 1099 1163 * For ttm_bo_type_device buffers, allocate ··· 1129 1193 } 1130 1194 EXPORT_SYMBOL(ttm_bo_init_reserved); 1131 1195 1132 - int ttm_bo_init(struct ttm_bo_device *bdev, 1196 + int ttm_bo_init(struct ttm_device *bdev, 1133 1197 struct ttm_buffer_object *bo, 1134 1198 size_t size, 1135 1199 enum ttm_bo_type type, 1136 1200 struct ttm_placement *placement, 1137 1201 uint32_t page_alignment, 1138 1202 bool interruptible, 1139 - size_t acc_size, 1140 1203 struct sg_table *sg, 1141 1204 struct dma_resv *resv, 1142 1205 void (*destroy) (struct ttm_buffer_object *)) ··· 1144 1209 int ret; 1145 1210 1146 1211 ret = ttm_bo_init_reserved(bdev, bo, size, type, placement, 1147 - page_alignment, &ctx, acc_size, 1148 - sg, resv, destroy); 1212 + page_alignment, &ctx, sg, resv, destroy); 1149 1213 if (ret) 1150 1214 return ret; 1151 1215 ··· 1155 1221 } 1156 1222 EXPORT_SYMBOL(ttm_bo_init); 1157 1223 1158 - size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, 1159 - unsigned long bo_size, 1160 - unsigned struct_size) 1161 - { 1162 - unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT; 1163 - size_t size = 0; 1164 - 1165 - size += ttm_round_pot(struct_size); 1166 - size += ttm_round_pot(npages * (2*sizeof(void *) + sizeof(dma_addr_t))); 1167 - size += ttm_round_pot(sizeof(struct ttm_tt)); 1168 - return size; 1169 - } 1170 - EXPORT_SYMBOL(ttm_bo_dma_acc_size); 1171 - 1172 - static void ttm_bo_global_kobj_release(struct kobject *kobj) 1173 - { 1174 - struct ttm_bo_global *glob = 1175 - container_of(kobj, struct 
ttm_bo_global, kobj); 1176 - 1177 - __free_page(glob->dummy_read_page); 1178 - } 1179 - 1180 - static void ttm_bo_global_release(void) 1181 - { 1182 - struct ttm_bo_global *glob = &ttm_bo_glob; 1183 - 1184 - mutex_lock(&ttm_global_mutex); 1185 - if (--ttm_bo_glob_use_count > 0) 1186 - goto out; 1187 - 1188 - kobject_del(&glob->kobj); 1189 - kobject_put(&glob->kobj); 1190 - ttm_mem_global_release(&ttm_mem_glob); 1191 - memset(glob, 0, sizeof(*glob)); 1192 - out: 1193 - mutex_unlock(&ttm_global_mutex); 1194 - } 1195 - 1196 - static int ttm_bo_global_init(void) 1197 - { 1198 - struct ttm_bo_global *glob = &ttm_bo_glob; 1199 - int ret = 0; 1200 - unsigned i; 1201 - 1202 - mutex_lock(&ttm_global_mutex); 1203 - if (++ttm_bo_glob_use_count > 1) 1204 - goto out; 1205 - 1206 - ret = ttm_mem_global_init(&ttm_mem_glob); 1207 - if (ret) 1208 - goto out; 1209 - 1210 - spin_lock_init(&glob->lru_lock); 1211 - glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); 1212 - 1213 - if (unlikely(glob->dummy_read_page == NULL)) { 1214 - ret = -ENOMEM; 1215 - goto out; 1216 - } 1217 - 1218 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) 1219 - INIT_LIST_HEAD(&glob->swap_lru[i]); 1220 - INIT_LIST_HEAD(&glob->device_list); 1221 - atomic_set(&glob->bo_count, 0); 1222 - 1223 - ret = kobject_init_and_add( 1224 - &glob->kobj, &ttm_bo_glob_kobj_type, ttm_get_kobj(), "buffer_objects"); 1225 - if (unlikely(ret != 0)) 1226 - kobject_put(&glob->kobj); 1227 - out: 1228 - mutex_unlock(&ttm_global_mutex); 1229 - return ret; 1230 - } 1231 - 1232 - int ttm_bo_device_release(struct ttm_bo_device *bdev) 1233 - { 1234 - struct ttm_bo_global *glob = &ttm_bo_glob; 1235 - int ret = 0; 1236 - unsigned i; 1237 - struct ttm_resource_manager *man; 1238 - 1239 - man = ttm_manager_type(bdev, TTM_PL_SYSTEM); 1240 - ttm_resource_manager_set_used(man, false); 1241 - ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, NULL); 1242 - 1243 - mutex_lock(&ttm_global_mutex); 1244 - list_del(&bdev->device_list); 1245 - 
mutex_unlock(&ttm_global_mutex); 1246 - 1247 - cancel_delayed_work_sync(&bdev->wq); 1248 - 1249 - if (ttm_bo_delayed_delete(bdev, true)) 1250 - pr_debug("Delayed destroy list was clean\n"); 1251 - 1252 - spin_lock(&glob->lru_lock); 1253 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) 1254 - if (list_empty(&man->lru[0])) 1255 - pr_debug("Swap list %d was clean\n", i); 1256 - spin_unlock(&glob->lru_lock); 1257 - 1258 - ttm_pool_fini(&bdev->pool); 1259 - 1260 - if (!ret) 1261 - ttm_bo_global_release(); 1262 - 1263 - return ret; 1264 - } 1265 - EXPORT_SYMBOL(ttm_bo_device_release); 1266 - 1267 - static void ttm_bo_init_sysman(struct ttm_bo_device *bdev) 1268 - { 1269 - struct ttm_resource_manager *man = &bdev->sysman; 1270 - 1271 - /* 1272 - * Initialize the system memory buffer type. 1273 - * Other types need to be driver / IOCTL initialized. 1274 - */ 1275 - man->use_tt = true; 1276 - 1277 - ttm_resource_manager_init(man, 0); 1278 - ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, man); 1279 - ttm_resource_manager_set_used(man, true); 1280 - } 1281 - 1282 - int ttm_bo_device_init(struct ttm_bo_device *bdev, 1283 - struct ttm_bo_driver *driver, 1284 - struct device *dev, 1285 - struct address_space *mapping, 1286 - struct drm_vma_offset_manager *vma_manager, 1287 - bool use_dma_alloc, bool use_dma32) 1288 - { 1289 - struct ttm_bo_global *glob = &ttm_bo_glob; 1290 - int ret; 1291 - 1292 - if (WARN_ON(vma_manager == NULL)) 1293 - return -EINVAL; 1294 - 1295 - ret = ttm_bo_global_init(); 1296 - if (ret) 1297 - return ret; 1298 - 1299 - bdev->driver = driver; 1300 - 1301 - ttm_bo_init_sysman(bdev); 1302 - ttm_pool_init(&bdev->pool, dev, use_dma_alloc, use_dma32); 1303 - 1304 - bdev->vma_manager = vma_manager; 1305 - INIT_DELAYED_WORK(&bdev->wq, ttm_bo_delayed_workqueue); 1306 - INIT_LIST_HEAD(&bdev->ddestroy); 1307 - bdev->dev_mapping = mapping; 1308 - mutex_lock(&ttm_global_mutex); 1309 - list_add_tail(&bdev->device_list, &glob->device_list); 1310 - 
mutex_unlock(&ttm_global_mutex); 1311 - 1312 - return 0; 1313 - } 1314 - EXPORT_SYMBOL(ttm_bo_device_init); 1315 - 1316 1224 /* 1317 1225 * buffer object vm functions. 1318 1226 */ 1319 1227 1320 1228 void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo) 1321 1229 { 1322 - struct ttm_bo_device *bdev = bo->bdev; 1230 + struct ttm_device *bdev = bo->bdev; 1323 1231 1324 1232 drm_vma_node_unmap(&bo->base.vma_node, bdev->dev_mapping); 1325 1233 ttm_mem_io_free(bdev, &bo->mem); ··· 1197 1421 * A buffer object shrink method that tries to swap out the first 1198 1422 * buffer object on the bo_global::swap_lru list. 1199 1423 */ 1200 - int ttm_bo_swapout(struct ttm_operation_ctx *ctx) 1424 + int ttm_bo_swapout(struct ttm_operation_ctx *ctx, gfp_t gfp_flags) 1201 1425 { 1202 - struct ttm_bo_global *glob = &ttm_bo_glob; 1426 + struct ttm_global *glob = &ttm_glob; 1203 1427 struct ttm_buffer_object *bo; 1204 1428 int ret = -EBUSY; 1205 1429 bool locked; ··· 1277 1501 * anyone tries to access a ttm page. 1278 1502 */ 1279 1503 1280 - if (bo->bdev->driver->swap_notify) 1281 - bo->bdev->driver->swap_notify(bo); 1504 + if (bo->bdev->funcs->swap_notify) 1505 + bo->bdev->funcs->swap_notify(bo); 1282 1506 1283 - ret = ttm_tt_swapout(bo->bdev, bo->ttm); 1507 + ret = ttm_tt_swapout(bo->bdev, bo->ttm, gfp_flags); 1284 1508 out: 1285 1509 1286 1510 /** ··· 1303 1527 ttm_tt_destroy(bo->bdev, bo->ttm); 1304 1528 bo->ttm = NULL; 1305 1529 } 1306 -
+12 -13
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 46 46 struct ttm_buffer_object *bo; 47 47 }; 48 48 49 - int ttm_mem_io_reserve(struct ttm_bo_device *bdev, 49 + int ttm_mem_io_reserve(struct ttm_device *bdev, 50 50 struct ttm_resource *mem) 51 51 { 52 52 if (mem->bus.offset || mem->bus.addr) 53 53 return 0; 54 54 55 55 mem->bus.is_iomem = false; 56 - if (!bdev->driver->io_mem_reserve) 56 + if (!bdev->funcs->io_mem_reserve) 57 57 return 0; 58 58 59 - return bdev->driver->io_mem_reserve(bdev, mem); 59 + return bdev->funcs->io_mem_reserve(bdev, mem); 60 60 } 61 61 62 - void ttm_mem_io_free(struct ttm_bo_device *bdev, 62 + void ttm_mem_io_free(struct ttm_device *bdev, 63 63 struct ttm_resource *mem) 64 64 { 65 65 if (!mem->bus.offset && !mem->bus.addr) 66 66 return; 67 67 68 - if (bdev->driver->io_mem_free) 69 - bdev->driver->io_mem_free(bdev, mem); 68 + if (bdev->funcs->io_mem_free) 69 + bdev->funcs->io_mem_free(bdev, mem); 70 70 71 71 mem->bus.offset = 0; 72 72 mem->bus.addr = NULL; 73 73 } 74 74 75 - static int ttm_resource_ioremap(struct ttm_bo_device *bdev, 75 + static int ttm_resource_ioremap(struct ttm_device *bdev, 76 76 struct ttm_resource *mem, 77 77 void **virtual) 78 78 { ··· 102 102 return 0; 103 103 } 104 104 105 - static void ttm_resource_iounmap(struct ttm_bo_device *bdev, 105 + static void ttm_resource_iounmap(struct ttm_device *bdev, 106 106 struct ttm_resource *mem, 107 107 void *virtual) 108 108 { ··· 172 172 struct ttm_operation_ctx *ctx, 173 173 struct ttm_resource *new_mem) 174 174 { 175 - struct ttm_bo_device *bdev = bo->bdev; 175 + struct ttm_device *bdev = bo->bdev; 176 176 struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); 177 177 struct ttm_tt *ttm = bo->ttm; 178 178 struct ttm_resource *old_mem = &bo->mem; ··· 300 300 * TODO: Explicit member copy would probably be better here. 
301 301 */ 302 302 303 - atomic_inc(&ttm_bo_glob.bo_count); 303 + atomic_inc(&ttm_glob.bo_count); 304 304 INIT_LIST_HEAD(&fbo->base.ddestroy); 305 305 INIT_LIST_HEAD(&fbo->base.lru); 306 306 INIT_LIST_HEAD(&fbo->base.swap); ··· 309 309 310 310 kref_init(&fbo->base.kref); 311 311 fbo->base.destroy = &ttm_transfered_destroy; 312 - fbo->base.acc_size = 0; 313 312 fbo->base.pin_count = 0; 314 313 if (bo->type != ttm_bo_type_sg) 315 314 fbo->base.base.resv = &fbo->base.base._resv; ··· 601 602 static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, 602 603 struct dma_fence *fence) 603 604 { 604 - struct ttm_bo_device *bdev = bo->bdev; 605 + struct ttm_device *bdev = bo->bdev; 605 606 struct ttm_resource_manager *from = ttm_manager_type(bdev, bo->mem.mem_type); 606 607 607 608 /** ··· 627 628 bool pipeline, 628 629 struct ttm_resource *new_mem) 629 630 { 630 - struct ttm_bo_device *bdev = bo->bdev; 631 + struct ttm_device *bdev = bo->bdev; 631 632 struct ttm_resource_manager *from = ttm_manager_type(bdev, bo->mem.mem_type); 632 633 struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); 633 634 int ret = 0;
+11 -13
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 95 95 static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, 96 96 unsigned long page_offset) 97 97 { 98 - struct ttm_bo_device *bdev = bo->bdev; 98 + struct ttm_device *bdev = bo->bdev; 99 99 100 - if (bdev->driver->io_mem_pfn) 101 - return bdev->driver->io_mem_pfn(bo, page_offset); 100 + if (bdev->funcs->io_mem_pfn) 101 + return bdev->funcs->io_mem_pfn(bo, page_offset); 102 102 103 103 return (bo->mem.bus.offset >> PAGE_SHIFT) + page_offset; 104 104 } ··· 216 216 if (page_to_pfn(ttm->pages[page_offset + i]) != pfn + i) 217 217 goto out_fallback; 218 218 } 219 - } else if (bo->bdev->driver->io_mem_pfn) { 219 + } else if (bo->bdev->funcs->io_mem_pfn) { 220 220 for (i = 1; i < fault_page_size; ++i) { 221 221 if (ttm_bo_io_mem_pfn(bo, page_offset + i) != pfn + i) 222 222 goto out_fallback; ··· 278 278 { 279 279 struct vm_area_struct *vma = vmf->vma; 280 280 struct ttm_buffer_object *bo = vma->vm_private_data; 281 - struct ttm_bo_device *bdev = bo->bdev; 281 + struct ttm_device *bdev = bo->bdev; 282 282 unsigned long page_offset; 283 283 unsigned long page_last; 284 284 unsigned long pfn; ··· 488 488 ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write); 489 489 break; 490 490 default: 491 - if (bo->bdev->driver->access_memory) 492 - ret = bo->bdev->driver->access_memory( 491 + if (bo->bdev->funcs->access_memory) 492 + ret = bo->bdev->funcs->access_memory( 493 493 bo, offset, buf, len, write); 494 494 else 495 495 ret = -EIO; ··· 508 508 .access = ttm_bo_vm_access, 509 509 }; 510 510 511 - static struct ttm_buffer_object *ttm_bo_vm_lookup(struct ttm_bo_device *bdev, 511 + static struct ttm_buffer_object *ttm_bo_vm_lookup(struct ttm_device *bdev, 512 512 unsigned long offset, 513 513 unsigned long pages) 514 514 { ··· 555 555 } 556 556 557 557 int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma, 558 - struct ttm_bo_device *bdev) 558 + struct ttm_device *bdev) 559 559 { 560 - struct ttm_bo_driver *driver; 561 560 struct ttm_buffer_object 
*bo; 562 561 int ret; 563 562 ··· 567 568 if (unlikely(!bo)) 568 569 return -EINVAL; 569 570 570 - driver = bo->bdev->driver; 571 - if (unlikely(!driver->verify_access)) { 571 + if (unlikely(!bo->bdev->funcs->verify_access)) { 572 572 ret = -EPERM; 573 573 goto out_unref; 574 574 } 575 - ret = driver->verify_access(bo, filp); 575 + ret = bo->bdev->funcs->verify_access(bo, filp); 576 576 if (unlikely(ret != 0)) 577 577 goto out_unref; 578 578
+205
drivers/gpu/drm/ttm/ttm_device.c
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + 3 + /* 4 + * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA 5 + * Copyright 2020 Advanced Micro Devices, Inc. 6 + * 7 + * Permission is hereby granted, free of charge, to any person obtaining a 8 + * copy of this software and associated documentation files (the "Software"), 9 + * to deal in the Software without restriction, including without limitation 10 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 11 + * and/or sell copies of the Software, and to permit persons to whom the 12 + * Software is furnished to do so, subject to the following conditions: 13 + * 14 + * The above copyright notice and this permission notice shall be included in 15 + * all copies or substantial portions of the Software. 16 + * 17 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 20 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 21 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 22 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 23 + * OTHER DEALINGS IN THE SOFTWARE. 
24 + * 25 + * Authors: Christian König 26 + */ 27 + 28 + #define pr_fmt(fmt) "[TTM DEVICE] " fmt 29 + 30 + #include <linux/mm.h> 31 + 32 + #include <drm/ttm/ttm_device.h> 33 + #include <drm/ttm/ttm_tt.h> 34 + #include <drm/ttm/ttm_placement.h> 35 + #include <drm/ttm/ttm_bo_api.h> 36 + 37 + #include "ttm_module.h" 38 + 39 + /** 40 + * ttm_global_mutex - protecting the global state 41 + */ 42 + DEFINE_MUTEX(ttm_global_mutex); 43 + unsigned ttm_glob_use_count; 44 + struct ttm_global ttm_glob; 45 + EXPORT_SYMBOL(ttm_glob); 46 + 47 + static void ttm_global_release(void) 48 + { 49 + struct ttm_global *glob = &ttm_glob; 50 + 51 + mutex_lock(&ttm_global_mutex); 52 + if (--ttm_glob_use_count > 0) 53 + goto out; 54 + 55 + ttm_pool_mgr_fini(); 56 + ttm_tt_mgr_fini(); 57 + 58 + __free_page(glob->dummy_read_page); 59 + memset(glob, 0, sizeof(*glob)); 60 + out: 61 + mutex_unlock(&ttm_global_mutex); 62 + } 63 + 64 + static int ttm_global_init(void) 65 + { 66 + struct ttm_global *glob = &ttm_glob; 67 + unsigned long num_pages; 68 + struct sysinfo si; 69 + int ret = 0; 70 + unsigned i; 71 + 72 + mutex_lock(&ttm_global_mutex); 73 + if (++ttm_glob_use_count > 1) 74 + goto out; 75 + 76 + si_meminfo(&si); 77 + 78 + /* Limit the number of pages in the pool to about 50% of the total 79 + * system memory. 
80 + */ 81 + num_pages = ((u64)si.totalram * si.mem_unit) >> PAGE_SHIFT; 82 + ttm_pool_mgr_init(num_pages * 50 / 100); 83 + ttm_tt_mgr_init(); 84 + 85 + spin_lock_init(&glob->lru_lock); 86 + glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); 87 + 88 + if (unlikely(glob->dummy_read_page == NULL)) { 89 + ret = -ENOMEM; 90 + goto out; 91 + } 92 + 93 + for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) 94 + INIT_LIST_HEAD(&glob->swap_lru[i]); 95 + INIT_LIST_HEAD(&glob->device_list); 96 + atomic_set(&glob->bo_count, 0); 97 + 98 + debugfs_create_atomic_t("buffer_objects", 0444, ttm_debugfs_root, 99 + &glob->bo_count); 100 + out: 101 + mutex_unlock(&ttm_global_mutex); 102 + return ret; 103 + } 104 + 105 + static void ttm_init_sysman(struct ttm_device *bdev) 106 + { 107 + struct ttm_resource_manager *man = &bdev->sysman; 108 + 109 + /* 110 + * Initialize the system memory buffer type. 111 + * Other types need to be driver / IOCTL initialized. 112 + */ 113 + man->use_tt = true; 114 + 115 + ttm_resource_manager_init(man, 0); 116 + ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, man); 117 + ttm_resource_manager_set_used(man, true); 118 + } 119 + 120 + static void ttm_device_delayed_workqueue(struct work_struct *work) 121 + { 122 + struct ttm_device *bdev = 123 + container_of(work, struct ttm_device, wq.work); 124 + 125 + if (!ttm_bo_delayed_delete(bdev, false)) 126 + schedule_delayed_work(&bdev->wq, 127 + ((HZ / 100) < 1) ? 1 : HZ / 100); 128 + } 129 + 130 + /** 131 + * ttm_device_init 132 + * 133 + * @bdev: A pointer to a struct ttm_device to initialize. 134 + * @funcs: Function table for the device. 135 + * @dev: The core kernel device pointer for DMA mappings and allocations. 136 + * @mapping: The address space to use for this bo. 137 + * @vma_manager: A pointer to a vma manager. 138 + * @use_dma_alloc: If coherent DMA allocation API should be used. 139 + * @use_dma32: If we should use GFP_DMA32 for device memory allocations. 
140 + * 141 + * Initializes a struct ttm_device: 142 + * Returns: 143 + * !0: Failure. 144 + */ 145 + int ttm_device_init(struct ttm_device *bdev, struct ttm_device_funcs *funcs, 146 + struct device *dev, struct address_space *mapping, 147 + struct drm_vma_offset_manager *vma_manager, 148 + bool use_dma_alloc, bool use_dma32) 149 + { 150 + struct ttm_global *glob = &ttm_glob; 151 + int ret; 152 + 153 + if (WARN_ON(vma_manager == NULL)) 154 + return -EINVAL; 155 + 156 + ret = ttm_global_init(); 157 + if (ret) 158 + return ret; 159 + 160 + bdev->funcs = funcs; 161 + 162 + ttm_init_sysman(bdev); 163 + ttm_pool_init(&bdev->pool, dev, use_dma_alloc, use_dma32); 164 + 165 + bdev->vma_manager = vma_manager; 166 + INIT_DELAYED_WORK(&bdev->wq, ttm_device_delayed_workqueue); 167 + INIT_LIST_HEAD(&bdev->ddestroy); 168 + bdev->dev_mapping = mapping; 169 + mutex_lock(&ttm_global_mutex); 170 + list_add_tail(&bdev->device_list, &glob->device_list); 171 + mutex_unlock(&ttm_global_mutex); 172 + 173 + return 0; 174 + } 175 + EXPORT_SYMBOL(ttm_device_init); 176 + 177 + void ttm_device_fini(struct ttm_device *bdev) 178 + { 179 + struct ttm_global *glob = &ttm_glob; 180 + struct ttm_resource_manager *man; 181 + unsigned i; 182 + 183 + man = ttm_manager_type(bdev, TTM_PL_SYSTEM); 184 + ttm_resource_manager_set_used(man, false); 185 + ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, NULL); 186 + 187 + mutex_lock(&ttm_global_mutex); 188 + list_del(&bdev->device_list); 189 + mutex_unlock(&ttm_global_mutex); 190 + 191 + cancel_delayed_work_sync(&bdev->wq); 192 + 193 + if (ttm_bo_delayed_delete(bdev, true)) 194 + pr_debug("Delayed destroy list was clean\n"); 195 + 196 + spin_lock(&glob->lru_lock); 197 + for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) 198 + if (list_empty(&man->lru[0])) 199 + pr_debug("Swap list %d was clean\n", i); 200 + spin_unlock(&glob->lru_lock); 201 + 202 + ttm_pool_fini(&bdev->pool); 203 + ttm_global_release(); 204 + } 205 + EXPORT_SYMBOL(ttm_device_fini);
+4 -4
drivers/gpu/drm/ttm/ttm_execbuf_util.c
··· 51 51 if (list_empty(list)) 52 52 return; 53 53 54 - spin_lock(&ttm_bo_glob.lru_lock); 54 + spin_lock(&ttm_glob.lru_lock); 55 55 list_for_each_entry(entry, list, head) { 56 56 struct ttm_buffer_object *bo = entry->bo; 57 57 58 58 ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL); 59 59 dma_resv_unlock(bo->base.resv); 60 60 } 61 - spin_unlock(&ttm_bo_glob.lru_lock); 61 + spin_unlock(&ttm_glob.lru_lock); 62 62 63 63 if (ticket) 64 64 ww_acquire_fini(ticket); ··· 154 154 if (list_empty(list)) 155 155 return; 156 156 157 - spin_lock(&ttm_bo_glob.lru_lock); 157 + spin_lock(&ttm_glob.lru_lock); 158 158 list_for_each_entry(entry, list, head) { 159 159 struct ttm_buffer_object *bo = entry->bo; 160 160 ··· 165 165 ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL); 166 166 dma_resv_unlock(bo->base.resv); 167 167 } 168 - spin_unlock(&ttm_bo_glob.lru_lock); 168 + spin_unlock(&ttm_glob.lru_lock); 169 169 if (ticket) 170 170 ww_acquire_fini(ticket); 171 171 }
+10 -12
drivers/gpu/drm/ttm/ttm_memory.c → drivers/gpu/drm/vmwgfx/ttm_memory.c
··· 28 28 29 29 #define pr_fmt(fmt) "[TTM] " fmt 30 30 31 - #include <drm/ttm/ttm_memory.h> 32 31 #include <linux/spinlock.h> 33 32 #include <linux/sched.h> 34 33 #include <linux/wait.h> ··· 35 36 #include <linux/module.h> 36 37 #include <linux/slab.h> 37 38 #include <linux/swap.h> 38 - #include <drm/ttm/ttm_pool.h> 39 39 40 - #include "ttm_module.h" 40 + #include <drm/drm_device.h> 41 + #include <drm/drm_file.h> 42 + 43 + #include "ttm_memory.h" 41 44 42 45 #define TTM_MEMORY_ALLOC_RETRIES 4 43 46 ··· 277 276 278 277 while (ttm_zones_above_swap_target(glob, from_wq, extra)) { 279 278 spin_unlock(&glob->lock); 280 - ret = ttm_bo_swapout(ctx); 279 + ret = ttm_bo_swapout(ctx, GFP_KERNEL); 281 280 spin_lock(&glob->lock); 282 - if (unlikely(ret != 0)) 281 + if (unlikely(ret < 0)) 283 282 break; 284 283 } 285 284 ··· 414 413 } 415 414 #endif 416 415 417 - int ttm_mem_global_init(struct ttm_mem_global *glob) 416 + int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev) 418 417 { 419 418 struct sysinfo si; 420 419 int ret; ··· 424 423 spin_lock_init(&glob->lock); 425 424 glob->swap_queue = create_singlethread_workqueue("ttm_swap"); 426 425 INIT_WORK(&glob->work, ttm_shrink_work); 427 - ret = kobject_init_and_add( 428 - &glob->kobj, &ttm_mem_glob_kobj_type, ttm_get_kobj(), "memory_accounting"); 426 + 427 + ret = kobject_init_and_add(&glob->kobj, &ttm_mem_glob_kobj_type, 428 + &dev->kobj, "memory_accounting"); 429 429 if (unlikely(ret != 0)) { 430 430 kobject_put(&glob->kobj); 431 431 return ret; ··· 454 452 pr_info("Zone %7s: Available graphics memory: %llu KiB\n", 455 453 zone->name, (unsigned long long)zone->max_mem >> 10); 456 454 } 457 - ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE)); 458 455 return 0; 459 456 out_no_zone: 460 457 ttm_mem_global_release(glob); ··· 464 463 { 465 464 struct ttm_mem_zone *zone; 466 465 unsigned int i; 467 - 468 - /* let the page allocator first stop the shrink work. 
*/ 469 - ttm_pool_mgr_fini(); 470 466 471 467 flush_workqueue(glob->swap_queue); 472 468 destroy_workqueue(glob->swap_queue);
+4 -50
drivers/gpu/drm/ttm/ttm_module.c
··· 32 32 #include <linux/module.h> 33 33 #include <linux/device.h> 34 34 #include <linux/sched.h> 35 + #include <linux/debugfs.h> 35 36 #include <drm/drm_sysfs.h> 36 37 37 38 #include "ttm_module.h" 38 39 39 - static DECLARE_WAIT_QUEUE_HEAD(exit_q); 40 - static atomic_t device_released; 41 - 42 - static struct device_type ttm_drm_class_type = { 43 - .name = "ttm", 44 - /** 45 - * Add pm ops here. 46 - */ 47 - }; 48 - 49 - static void ttm_drm_class_device_release(struct device *dev) 50 - { 51 - atomic_set(&device_released, 1); 52 - wake_up_all(&exit_q); 53 - } 54 - 55 - static struct device ttm_drm_class_device = { 56 - .type = &ttm_drm_class_type, 57 - .release = &ttm_drm_class_device_release 58 - }; 59 - 60 - struct kobject *ttm_get_kobj(void) 61 - { 62 - struct kobject *kobj = &ttm_drm_class_device.kobj; 63 - BUG_ON(kobj == NULL); 64 - return kobj; 65 - } 40 + struct dentry *ttm_debugfs_root; 66 41 67 42 static int __init ttm_init(void) 68 43 { 69 - int ret; 70 - 71 - ret = dev_set_name(&ttm_drm_class_device, "ttm"); 72 - if (unlikely(ret != 0)) 73 - return ret; 74 - 75 - atomic_set(&device_released, 0); 76 - ret = drm_class_device_register(&ttm_drm_class_device); 77 - if (unlikely(ret != 0)) 78 - goto out_no_dev_reg; 79 - 44 + ttm_debugfs_root = debugfs_create_dir("ttm", NULL); 80 45 return 0; 81 - out_no_dev_reg: 82 - atomic_set(&device_released, 1); 83 - wake_up_all(&exit_q); 84 - return ret; 85 46 } 86 47 87 48 static void __exit ttm_exit(void) 88 49 { 89 - drm_class_device_unregister(&ttm_drm_class_device); 90 - 91 - /** 92 - * Refuse to unload until the TTM device is released. 93 - * Not sure this is 100% needed. 94 - */ 95 - 96 - wait_event(exit_q, atomic_read(&device_released) == 1); 50 + debugfs_remove(ttm_debugfs_root); 97 51 } 98 52 99 53 module_init(ttm_init);
+4 -4
drivers/gpu/drm/ttm/ttm_module.h
··· 31 31 #ifndef _TTM_MODULE_H_ 32 32 #define _TTM_MODULE_H_ 33 33 34 - #include <linux/kernel.h> 35 - struct kobject; 36 - 37 34 #define TTM_PFX "[TTM] " 38 - extern struct kobject *ttm_get_kobj(void); 35 + 36 + struct dentry; 37 + 38 + extern struct dentry *ttm_debugfs_root; 39 39 40 40 #endif /* _TTM_MODULE_H_ */
+146 -101
drivers/gpu/drm/ttm/ttm_pool.c
··· 34 34 #include <linux/module.h> 35 35 #include <linux/dma-mapping.h> 36 36 #include <linux/highmem.h> 37 + #include <linux/sched/mm.h> 37 38 38 39 #ifdef CONFIG_X86 39 40 #include <asm/set_memory.h> ··· 43 42 #include <drm/ttm/ttm_pool.h> 44 43 #include <drm/ttm/ttm_bo_driver.h> 45 44 #include <drm/ttm/ttm_tt.h> 45 + 46 + #include "ttm_module.h" 46 47 47 48 /** 48 49 * struct ttm_pool_dma - Helper object for coherent DMA mappings ··· 415 412 caching = pages + (1 << order); 416 413 } 417 414 418 - r = ttm_mem_global_alloc_page(&ttm_mem_glob, p, 419 - (1 << order) * PAGE_SIZE, 420 - ctx); 421 - if (r) 422 - goto error_free_page; 423 - 424 415 if (dma_addr) { 425 416 r = ttm_pool_map(pool, order, p, &dma_addr); 426 417 if (r) 427 - goto error_global_free; 418 + goto error_free_page; 428 419 } 429 420 430 421 num_pages -= 1 << order; ··· 431 434 goto error_free_all; 432 435 433 436 return 0; 434 - 435 - error_global_free: 436 - ttm_mem_global_free_page(&ttm_mem_glob, p, (1 << order) * PAGE_SIZE); 437 437 438 438 error_free_page: 439 439 ttm_pool_free_page(pool, tt->caching, order, p); ··· 466 472 467 473 order = ttm_pool_page_order(pool, p); 468 474 num_pages = 1ULL << order; 469 - ttm_mem_global_free_page(&ttm_mem_glob, p, 470 - num_pages * PAGE_SIZE); 471 475 if (tt->dma_address) 472 476 ttm_pool_unmap(pool, tt->dma_address[i], num_pages); 473 477 ··· 505 513 pool->use_dma_alloc = use_dma_alloc; 506 514 pool->use_dma32 = use_dma32; 507 515 508 - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 509 - for (j = 0; j < MAX_ORDER; ++j) 510 - ttm_pool_type_init(&pool->caching[i].orders[j], 511 - pool, i, j); 516 + if (use_dma_alloc) { 517 + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 518 + for (j = 0; j < MAX_ORDER; ++j) 519 + ttm_pool_type_init(&pool->caching[i].orders[j], 520 + pool, i, j); 521 + } 512 522 } 513 523 514 524 /** ··· 525 531 { 526 532 unsigned int i, j; 527 533 528 - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 529 - for (j = 0; j < MAX_ORDER; ++j) 530 - 
ttm_pool_type_fini(&pool->caching[i].orders[j]); 531 - } 532 - 533 - #ifdef CONFIG_DEBUG_FS 534 - /* Count the number of pages available in a pool_type */ 535 - static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt) 536 - { 537 - unsigned int count = 0; 538 - struct page *p; 539 - 540 - spin_lock(&pt->lock); 541 - /* Only used for debugfs, the overhead doesn't matter */ 542 - list_for_each_entry(p, &pt->pages, lru) 543 - ++count; 544 - spin_unlock(&pt->lock); 545 - 546 - return count; 547 - } 548 - 549 - /* Dump information about the different pool types */ 550 - static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt, 551 - struct seq_file *m) 552 - { 553 - unsigned int i; 554 - 555 - for (i = 0; i < MAX_ORDER; ++i) 556 - seq_printf(m, " %8u", ttm_pool_type_count(&pt[i])); 557 - seq_puts(m, "\n"); 558 - } 559 - 560 - /** 561 - * ttm_pool_debugfs - Debugfs dump function for a pool 562 - * 563 - * @pool: the pool to dump the information for 564 - * @m: seq_file to dump to 565 - * 566 - * Make a debugfs dump with the per pool and global information. 
567 - */ 568 - int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m) 569 - { 570 - unsigned int i; 571 - 572 - mutex_lock(&shrinker_lock); 573 - 574 - seq_puts(m, "\t "); 575 - for (i = 0; i < MAX_ORDER; ++i) 576 - seq_printf(m, " ---%2u---", i); 577 - seq_puts(m, "\n"); 578 - 579 - seq_puts(m, "wc\t:"); 580 - ttm_pool_debugfs_orders(global_write_combined, m); 581 - seq_puts(m, "uc\t:"); 582 - ttm_pool_debugfs_orders(global_uncached, m); 583 - 584 - seq_puts(m, "wc 32\t:"); 585 - ttm_pool_debugfs_orders(global_dma32_write_combined, m); 586 - seq_puts(m, "uc 32\t:"); 587 - ttm_pool_debugfs_orders(global_dma32_uncached, m); 588 - 589 - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 590 - seq_puts(m, "DMA "); 591 - switch (i) { 592 - case ttm_cached: 593 - seq_puts(m, "\t:"); 594 - break; 595 - case ttm_write_combined: 596 - seq_puts(m, "wc\t:"); 597 - break; 598 - case ttm_uncached: 599 - seq_puts(m, "uc\t:"); 600 - break; 601 - } 602 - ttm_pool_debugfs_orders(pool->caching[i].orders, m); 534 + if (pool->use_dma_alloc) { 535 + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 536 + for (j = 0; j < MAX_ORDER; ++j) 537 + ttm_pool_type_fini(&pool->caching[i].orders[j]); 603 538 } 604 - 605 - seq_printf(m, "\ntotal\t: %8lu of %8lu\n", 606 - atomic_long_read(&allocated_pages), page_pool_size); 607 - 608 - mutex_unlock(&shrinker_lock); 609 - 610 - return 0; 611 539 } 612 - EXPORT_SYMBOL(ttm_pool_debugfs); 613 - 614 - #endif 615 540 616 541 /* As long as pages are available make sure to release at least one */ 617 542 static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink, ··· 553 640 554 641 return num_pages ? 
num_pages : SHRINK_EMPTY; 555 642 } 643 + 644 + #ifdef CONFIG_DEBUG_FS 645 + /* Count the number of pages available in a pool_type */ 646 + static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt) 647 + { 648 + unsigned int count = 0; 649 + struct page *p; 650 + 651 + spin_lock(&pt->lock); 652 + /* Only used for debugfs, the overhead doesn't matter */ 653 + list_for_each_entry(p, &pt->pages, lru) 654 + ++count; 655 + spin_unlock(&pt->lock); 656 + 657 + return count; 658 + } 659 + 660 + /* Print a nice header for the order */ 661 + static void ttm_pool_debugfs_header(struct seq_file *m) 662 + { 663 + unsigned int i; 664 + 665 + seq_puts(m, "\t "); 666 + for (i = 0; i < MAX_ORDER; ++i) 667 + seq_printf(m, " ---%2u---", i); 668 + seq_puts(m, "\n"); 669 + } 670 + 671 + /* Dump information about the different pool types */ 672 + static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt, 673 + struct seq_file *m) 674 + { 675 + unsigned int i; 676 + 677 + for (i = 0; i < MAX_ORDER; ++i) 678 + seq_printf(m, " %8u", ttm_pool_type_count(&pt[i])); 679 + seq_puts(m, "\n"); 680 + } 681 + 682 + /* Dump the total amount of allocated pages */ 683 + static void ttm_pool_debugfs_footer(struct seq_file *m) 684 + { 685 + seq_printf(m, "\ntotal\t: %8lu of %8lu\n", 686 + atomic_long_read(&allocated_pages), page_pool_size); 687 + } 688 + 689 + /* Dump the information for the global pools */ 690 + static int ttm_pool_debugfs_globals_show(struct seq_file *m, void *data) 691 + { 692 + ttm_pool_debugfs_header(m); 693 + 694 + mutex_lock(&shrinker_lock); 695 + seq_puts(m, "wc\t:"); 696 + ttm_pool_debugfs_orders(global_write_combined, m); 697 + seq_puts(m, "uc\t:"); 698 + ttm_pool_debugfs_orders(global_uncached, m); 699 + seq_puts(m, "wc 32\t:"); 700 + ttm_pool_debugfs_orders(global_dma32_write_combined, m); 701 + seq_puts(m, "uc 32\t:"); 702 + ttm_pool_debugfs_orders(global_dma32_uncached, m); 703 + mutex_unlock(&shrinker_lock); 704 + 705 + ttm_pool_debugfs_footer(m); 706 + 707 + 
return 0; 708 + } 709 + DEFINE_SHOW_ATTRIBUTE(ttm_pool_debugfs_globals); 710 + 711 + /** 712 + * ttm_pool_debugfs - Debugfs dump function for a pool 713 + * 714 + * @pool: the pool to dump the information for 715 + * @m: seq_file to dump to 716 + * 717 + * Make a debugfs dump with the per pool and global information. 718 + */ 719 + int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m) 720 + { 721 + unsigned int i; 722 + 723 + if (!pool->use_dma_alloc) { 724 + seq_puts(m, "unused\n"); 725 + return 0; 726 + } 727 + 728 + ttm_pool_debugfs_header(m); 729 + 730 + mutex_lock(&shrinker_lock); 731 + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 732 + seq_puts(m, "DMA "); 733 + switch (i) { 734 + case ttm_cached: 735 + seq_puts(m, "\t:"); 736 + break; 737 + case ttm_write_combined: 738 + seq_puts(m, "wc\t:"); 739 + break; 740 + case ttm_uncached: 741 + seq_puts(m, "uc\t:"); 742 + break; 743 + } 744 + ttm_pool_debugfs_orders(pool->caching[i].orders, m); 745 + } 746 + mutex_unlock(&shrinker_lock); 747 + 748 + ttm_pool_debugfs_footer(m); 749 + return 0; 750 + } 751 + EXPORT_SYMBOL(ttm_pool_debugfs); 752 + 753 + /* Test the shrinker functions and dump the result */ 754 + static int ttm_pool_debugfs_shrink_show(struct seq_file *m, void *data) 755 + { 756 + struct shrink_control sc = { .gfp_mask = GFP_NOFS }; 757 + 758 + fs_reclaim_acquire(GFP_KERNEL); 759 + seq_printf(m, "%lu/%lu\n", ttm_pool_shrinker_count(&mm_shrinker, &sc), 760 + ttm_pool_shrinker_scan(&mm_shrinker, &sc)); 761 + fs_reclaim_release(GFP_KERNEL); 762 + 763 + return 0; 764 + } 765 + DEFINE_SHOW_ATTRIBUTE(ttm_pool_debugfs_shrink); 766 + 767 + #endif 556 768 557 769 /** 558 770 * ttm_pool_mgr_init - Initialize globals ··· 706 668 ttm_pool_type_init(&global_dma32_uncached[i], NULL, 707 669 ttm_uncached, i); 708 670 } 671 + 672 + #ifdef CONFIG_DEBUG_FS 673 + debugfs_create_file("page_pool", 0444, ttm_debugfs_root, NULL, 674 + &ttm_pool_debugfs_globals_fops); 675 + debugfs_create_file("page_pool_shrink", 
0400, ttm_debugfs_root, NULL, 676 + &ttm_pool_debugfs_shrink_fops); 677 + #endif 709 678 710 679 mm_shrinker.count_objects = ttm_pool_shrinker_count; 711 680 mm_shrinker.scan_objects = ttm_pool_shrinker_scan;
+2 -2
drivers/gpu/drm/ttm/ttm_range_manager.c
··· 111 111 112 112 static const struct ttm_resource_manager_func ttm_range_manager_func; 113 113 114 - int ttm_range_man_init(struct ttm_bo_device *bdev, 114 + int ttm_range_man_init(struct ttm_device *bdev, 115 115 unsigned type, bool use_tt, 116 116 unsigned long p_size) 117 117 { ··· 138 138 } 139 139 EXPORT_SYMBOL(ttm_range_man_init); 140 140 141 - int ttm_range_man_fini(struct ttm_bo_device *bdev, 141 + int ttm_range_man_fini(struct ttm_device *bdev, 142 142 unsigned type) 143 143 { 144 144 struct ttm_resource_manager *man = ttm_manager_type(bdev, type);
+2 -2
drivers/gpu/drm/ttm/ttm_resource.c
··· 83 83 * Evict all the objects out of a memory manager until it is empty. 84 84 * Part of memory manager cleanup sequence. 85 85 */ 86 - int ttm_resource_manager_evict_all(struct ttm_bo_device *bdev, 86 + int ttm_resource_manager_evict_all(struct ttm_device *bdev, 87 87 struct ttm_resource_manager *man) 88 88 { 89 89 struct ttm_operation_ctx ctx = { ··· 91 91 .no_wait_gpu = false, 92 92 .force_alloc = true 93 93 }; 94 - struct ttm_bo_global *glob = &ttm_bo_glob; 94 + struct ttm_global *glob = &ttm_glob; 95 95 struct dma_fence *fence; 96 96 int ret; 97 97 unsigned i;
+109 -20
drivers/gpu/drm/ttm/ttm_tt.c
··· 38 38 #include <drm/drm_cache.h> 39 39 #include <drm/ttm/ttm_bo_driver.h> 40 40 41 + #include "ttm_module.h" 42 + 43 + static struct shrinker mm_shrinker; 44 + static atomic_long_t swapable_pages; 45 + 41 46 /* 42 47 * Allocates a ttm structure for the given BO. 43 48 */ 44 49 int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc) 45 50 { 46 - struct ttm_bo_device *bdev = bo->bdev; 51 + struct ttm_device *bdev = bo->bdev; 47 52 uint32_t page_flags = 0; 48 53 49 54 dma_resv_assert_held(bo->base.resv); ··· 71 66 return -EINVAL; 72 67 } 73 68 74 - bo->ttm = bdev->driver->ttm_tt_create(bo, page_flags); 69 + bo->ttm = bdev->funcs->ttm_tt_create(bo, page_flags); 75 70 if (unlikely(bo->ttm == NULL)) 76 71 return -ENOMEM; 77 72 ··· 113 108 return 0; 114 109 } 115 110 116 - void ttm_tt_destroy_common(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 111 + void ttm_tt_destroy_common(struct ttm_device *bdev, struct ttm_tt *ttm) 117 112 { 118 113 ttm_tt_unpopulate(bdev, ttm); 119 114 ··· 124 119 } 125 120 EXPORT_SYMBOL(ttm_tt_destroy_common); 126 121 127 - void ttm_tt_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 122 + void ttm_tt_destroy(struct ttm_device *bdev, struct ttm_tt *ttm) 128 123 { 129 - bdev->driver->ttm_tt_destroy(bdev, ttm); 124 + bdev->funcs->ttm_tt_destroy(bdev, ttm); 130 125 } 131 126 132 127 static void ttm_tt_init_fields(struct ttm_tt *ttm, ··· 228 223 return ret; 229 224 } 230 225 231 - int ttm_tt_swapout(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 226 + /** 227 + * ttm_tt_swapout - swap out tt object 228 + * 229 + * @bdev: TTM device structure. 230 + * @ttm: The struct ttm_tt. 231 + * @gfp_flags: Flags to use for memory allocation. 232 + * 233 + * Swapout a TT object to a shmem_file, return number of pages swapped out or 234 + * negative error code. 
235 + */ 236 + int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm, 237 + gfp_t gfp_flags) 232 238 { 239 + loff_t size = (loff_t)ttm->num_pages << PAGE_SHIFT; 233 240 struct address_space *swap_space; 234 241 struct file *swap_storage; 235 242 struct page *from_page; 236 243 struct page *to_page; 237 - gfp_t gfp_mask; 238 244 int i, ret; 239 245 240 - swap_storage = shmem_file_setup("ttm swap", 241 - ttm->num_pages << PAGE_SHIFT, 242 - 0); 246 + swap_storage = shmem_file_setup("ttm swap", size, 0); 243 247 if (IS_ERR(swap_storage)) { 244 248 pr_err("Failed allocating swap storage\n"); 245 249 return PTR_ERR(swap_storage); 246 250 } 247 251 248 252 swap_space = swap_storage->f_mapping; 249 - gfp_mask = mapping_gfp_mask(swap_space); 253 + gfp_flags &= mapping_gfp_mask(swap_space); 250 254 251 255 for (i = 0; i < ttm->num_pages; ++i) { 252 256 from_page = ttm->pages[i]; 253 257 if (unlikely(from_page == NULL)) 254 258 continue; 255 259 256 - to_page = shmem_read_mapping_page_gfp(swap_space, i, gfp_mask); 260 + to_page = shmem_read_mapping_page_gfp(swap_space, i, gfp_flags); 257 261 if (IS_ERR(to_page)) { 258 262 ret = PTR_ERR(to_page); 259 263 goto out_err; ··· 277 263 ttm->swap_storage = swap_storage; 278 264 ttm->page_flags |= TTM_PAGE_FLAG_SWAPPED; 279 265 280 - return 0; 266 + return ttm->num_pages; 281 267 282 268 out_err: 283 269 fput(swap_storage); ··· 285 271 return ret; 286 272 } 287 273 288 - static void ttm_tt_add_mapping(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 274 + static void ttm_tt_add_mapping(struct ttm_device *bdev, struct ttm_tt *ttm) 289 275 { 290 276 pgoff_t i; 291 277 ··· 294 280 295 281 for (i = 0; i < ttm->num_pages; ++i) 296 282 ttm->pages[i]->mapping = bdev->dev_mapping; 283 + 284 + atomic_long_add(ttm->num_pages, &swapable_pages); 297 285 } 298 286 299 - int ttm_tt_populate(struct ttm_bo_device *bdev, 287 + int ttm_tt_populate(struct ttm_device *bdev, 300 288 struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) 301 289 { 302 
290 int ret; ··· 309 293 if (ttm_tt_is_populated(ttm)) 310 294 return 0; 311 295 312 - if (bdev->driver->ttm_tt_populate) 313 - ret = bdev->driver->ttm_tt_populate(bdev, ttm, ctx); 296 + if (bdev->funcs->ttm_tt_populate) 297 + ret = bdev->funcs->ttm_tt_populate(bdev, ttm, ctx); 314 298 else 315 299 ret = ttm_pool_alloc(&bdev->pool, ttm, ctx); 316 300 if (ret) ··· 342 326 (*page)->mapping = NULL; 343 327 (*page++)->index = 0; 344 328 } 329 + 330 + atomic_long_sub(ttm->num_pages, &swapable_pages); 345 331 } 346 332 347 - void ttm_tt_unpopulate(struct ttm_bo_device *bdev, 333 + void ttm_tt_unpopulate(struct ttm_device *bdev, 348 334 struct ttm_tt *ttm) 349 335 { 350 336 if (!ttm_tt_is_populated(ttm)) 351 337 return; 352 338 353 339 ttm_tt_clear_mapping(ttm); 354 - if (bdev->driver->ttm_tt_unpopulate) 355 - bdev->driver->ttm_tt_unpopulate(bdev, ttm); 340 + if (bdev->funcs->ttm_tt_unpopulate) 341 + bdev->funcs->ttm_tt_unpopulate(bdev, ttm); 356 342 else 357 343 ttm_pool_free(&bdev->pool, ttm); 358 344 ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED; 345 + } 346 + 347 + /* As long as pages are available make sure to release at least one */ 348 + static unsigned long ttm_tt_shrinker_scan(struct shrinker *shrink, 349 + struct shrink_control *sc) 350 + { 351 + struct ttm_operation_ctx ctx = { 352 + .no_wait_gpu = false 353 + }; 354 + int ret; 355 + 356 + ret = ttm_bo_swapout(&ctx, GFP_NOFS); 357 + return ret < 0 ? SHRINK_EMPTY : ret; 358 + } 359 + 360 + /* Return the number of pages available or SHRINK_EMPTY if we have none */ 361 + static unsigned long ttm_tt_shrinker_count(struct shrinker *shrink, 362 + struct shrink_control *sc) 363 + { 364 + unsigned long num_pages; 365 + 366 + num_pages = atomic_long_read(&swapable_pages); 367 + return num_pages ? 
num_pages : SHRINK_EMPTY; 368 + } 369 + 370 + #ifdef CONFIG_DEBUG_FS 371 + 372 + /* Test the shrinker functions and dump the result */ 373 + static int ttm_tt_debugfs_shrink_show(struct seq_file *m, void *data) 374 + { 375 + struct shrink_control sc = { .gfp_mask = GFP_KERNEL }; 376 + 377 + fs_reclaim_acquire(GFP_KERNEL); 378 + seq_printf(m, "%lu/%lu\n", ttm_tt_shrinker_count(&mm_shrinker, &sc), 379 + ttm_tt_shrinker_scan(&mm_shrinker, &sc)); 380 + fs_reclaim_release(GFP_KERNEL); 381 + 382 + return 0; 383 + } 384 + DEFINE_SHOW_ATTRIBUTE(ttm_tt_debugfs_shrink); 385 + 386 + #endif 387 + 388 + 389 + 390 + /** 391 + * ttm_tt_mgr_init - register with the MM shrinker 392 + * 393 + * Register with the MM shrinker for swapping out BOs. 394 + */ 395 + int ttm_tt_mgr_init(void) 396 + { 397 + #ifdef CONFIG_DEBUG_FS 398 + debugfs_create_file("tt_shrink", 0400, ttm_debugfs_root, NULL, 399 + &ttm_tt_debugfs_shrink_fops); 400 + #endif 401 + 402 + mm_shrinker.count_objects = ttm_tt_shrinker_count; 403 + mm_shrinker.scan_objects = ttm_tt_shrinker_scan; 404 + mm_shrinker.seeks = 1; 405 + return register_shrinker(&mm_shrinker); 406 + } 407 + 408 + /** 409 + * ttm_tt_mgr_fini - unregister our MM shrinker 410 + * 411 + * Unregisters the MM shrinker. 412 + */ 413 + void ttm_tt_mgr_fini(void) 414 + { 415 + unregister_shrinker(&mm_shrinker); 359 416 }
+2 -2
drivers/gpu/drm/tve200/tve200_display.c
··· 17 17 18 18 #include <drm/drm_fb_cma_helper.h> 19 19 #include <drm/drm_fourcc.h> 20 + #include <drm/drm_gem_atomic_helper.h> 20 21 #include <drm/drm_gem_cma_helper.h> 21 - #include <drm/drm_gem_framebuffer_helper.h> 22 22 #include <drm/drm_panel.h> 23 23 #include <drm/drm_vblank.h> 24 24 ··· 316 316 .enable = tve200_display_enable, 317 317 .disable = tve200_display_disable, 318 318 .update = tve200_display_update, 319 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 319 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 320 320 .enable_vblank = tve200_display_enable_vblank, 321 321 .disable_vblank = tve200_display_disable_vblank, 322 322 };
+13 -21
drivers/gpu/drm/udl/udl_modeset.c
··· 15 15 #include <drm/drm_crtc_helper.h> 16 16 #include <drm/drm_damage_helper.h> 17 17 #include <drm/drm_fourcc.h> 18 + #include <drm/drm_gem_atomic_helper.h> 18 19 #include <drm/drm_gem_framebuffer_helper.h> 19 20 #include <drm/drm_gem_shmem_helper.h> 20 21 #include <drm/drm_modeset_helper_vtables.h> ··· 267 266 return 0; 268 267 } 269 268 270 - static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, 271 - int width, int height) 269 + static int udl_handle_damage(struct drm_framebuffer *fb, const struct dma_buf_map *map, 270 + int x, int y, int width, int height) 272 271 { 273 272 struct drm_device *dev = fb->dev; 274 273 struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach; 274 + void *vaddr = map->vaddr; /* TODO: Use mapping abstraction properly */ 275 275 int i, ret, tmp_ret; 276 276 char *cmd; 277 277 struct urb *urb; 278 278 struct drm_rect clip; 279 279 int log_bpp; 280 - struct dma_buf_map map; 281 - void *vaddr; 282 280 283 281 ret = udl_log_cpp(fb->format->cpp[0]); 284 282 if (ret < 0) ··· 297 297 return ret; 298 298 } 299 299 300 - ret = drm_gem_shmem_vmap(fb->obj[0], &map); 301 - if (ret) { 302 - DRM_ERROR("failed to vmap fb\n"); 303 - goto out_dma_buf_end_cpu_access; 304 - } 305 - vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ 306 - 307 300 urb = udl_get_urb(dev); 308 301 if (!urb) { 309 302 ret = -ENOMEM; 310 - goto out_drm_gem_shmem_vunmap; 303 + goto out_dma_buf_end_cpu_access; 311 304 } 312 305 cmd = urb->transfer_buffer; 313 306 ··· 313 320 &cmd, byte_offset, dev_byte_offset, 314 321 byte_width); 315 322 if (ret) 316 - goto out_drm_gem_shmem_vunmap; 323 + goto out_dma_buf_end_cpu_access; 317 324 } 318 325 319 326 if (cmd > (char *)urb->transfer_buffer) { ··· 329 336 330 337 ret = 0; 331 338 332 - out_drm_gem_shmem_vunmap: 333 - drm_gem_shmem_vunmap(fb->obj[0], &map); 334 339 out_dma_buf_end_cpu_access: 335 340 if (import_attach) { 336 341 tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, ··· 
366 375 struct drm_framebuffer *fb = plane_state->fb; 367 376 struct udl_device *udl = to_udl(dev); 368 377 struct drm_display_mode *mode = &crtc_state->mode; 378 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 369 379 char *buf; 370 380 char *wrptr; 371 381 int color_depth = UDL_COLOR_DEPTH_16BPP; ··· 392 400 393 401 udl->mode_buf_len = wrptr - buf; 394 402 395 - udl_handle_damage(fb, 0, 0, fb->width, fb->height); 403 + udl_handle_damage(fb, &shadow_plane_state->map[0], 0, 0, fb->width, fb->height); 396 404 397 405 if (!crtc_state->mode_changed) 398 406 return; ··· 427 435 struct drm_plane_state *old_plane_state) 428 436 { 429 437 struct drm_plane_state *state = pipe->plane.state; 438 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state); 430 439 struct drm_framebuffer *fb = state->fb; 431 440 struct drm_rect rect; 432 441 ··· 435 442 return; 436 443 437 444 if (drm_atomic_helper_damage_merged(old_plane_state, state, &rect)) 438 - udl_handle_damage(fb, rect.x1, rect.y1, rect.x2 - rect.x1, 439 - rect.y2 - rect.y1); 445 + udl_handle_damage(fb, &shadow_plane_state->map[0], rect.x1, rect.y1, 446 + rect.x2 - rect.x1, rect.y2 - rect.y1); 440 447 } 441 448 442 - static const 443 - struct drm_simple_display_pipe_funcs udl_simple_display_pipe_funcs = { 449 + static const struct drm_simple_display_pipe_funcs udl_simple_display_pipe_funcs = { 444 450 .mode_valid = udl_simple_display_pipe_mode_valid, 445 451 .enable = udl_simple_display_pipe_enable, 446 452 .disable = udl_simple_display_pipe_disable, 447 453 .update = udl_simple_display_pipe_update, 448 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 454 + DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 449 455 }; 450 456 451 457 /*
+22 -20
drivers/gpu/drm/v3d/v3d_sched.c
··· 259 259 return NULL; 260 260 } 261 261 262 - static void 262 + static enum drm_gpu_sched_stat 263 263 v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job) 264 264 { 265 265 enum v3d_queue q; ··· 285 285 } 286 286 287 287 mutex_unlock(&v3d->reset_lock); 288 + 289 + return DRM_GPU_SCHED_STAT_NOMINAL; 288 290 } 289 291 290 292 /* If the current address or return address have changed, then the GPU ··· 294 292 * could fail if the GPU got in an infinite loop in the CL, but that 295 293 * is pretty unlikely outside of an i-g-t testcase. 296 294 */ 297 - static void 295 + static enum drm_gpu_sched_stat 298 296 v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q, 299 297 u32 *timedout_ctca, u32 *timedout_ctra) 300 298 { ··· 306 304 if (*timedout_ctca != ctca || *timedout_ctra != ctra) { 307 305 *timedout_ctca = ctca; 308 306 *timedout_ctra = ctra; 309 - return; 307 + return DRM_GPU_SCHED_STAT_NOMINAL; 310 308 } 311 309 312 - v3d_gpu_reset_for_timeout(v3d, sched_job); 310 + return v3d_gpu_reset_for_timeout(v3d, sched_job); 313 311 } 314 312 315 - static void 313 + static enum drm_gpu_sched_stat 316 314 v3d_bin_job_timedout(struct drm_sched_job *sched_job) 317 315 { 318 316 struct v3d_bin_job *job = to_bin_job(sched_job); 319 317 320 - v3d_cl_job_timedout(sched_job, V3D_BIN, 321 - &job->timedout_ctca, &job->timedout_ctra); 318 + return v3d_cl_job_timedout(sched_job, V3D_BIN, 319 + &job->timedout_ctca, &job->timedout_ctra); 322 320 } 323 321 324 - static void 322 + static enum drm_gpu_sched_stat 325 323 v3d_render_job_timedout(struct drm_sched_job *sched_job) 326 324 { 327 325 struct v3d_render_job *job = to_render_job(sched_job); 328 326 329 - v3d_cl_job_timedout(sched_job, V3D_RENDER, 330 - &job->timedout_ctca, &job->timedout_ctra); 327 + return v3d_cl_job_timedout(sched_job, V3D_RENDER, 328 + &job->timedout_ctca, &job->timedout_ctra); 331 329 } 332 330 333 - static void 331 + static enum drm_gpu_sched_stat 334 332 
v3d_generic_job_timedout(struct drm_sched_job *sched_job) 335 333 { 336 334 struct v3d_job *job = to_v3d_job(sched_job); 337 335 338 - v3d_gpu_reset_for_timeout(job->v3d, sched_job); 336 + return v3d_gpu_reset_for_timeout(job->v3d, sched_job); 339 337 } 340 338 341 - static void 339 + static enum drm_gpu_sched_stat 342 340 v3d_csd_job_timedout(struct drm_sched_job *sched_job) 343 341 { 344 342 struct v3d_csd_job *job = to_csd_job(sched_job); ··· 350 348 */ 351 349 if (job->timedout_batches != batches) { 352 350 job->timedout_batches = batches; 353 - return; 351 + return DRM_GPU_SCHED_STAT_NOMINAL; 354 352 } 355 353 356 - v3d_gpu_reset_for_timeout(v3d, sched_job); 354 + return v3d_gpu_reset_for_timeout(v3d, sched_job); 357 355 } 358 356 359 357 static const struct drm_sched_backend_ops v3d_bin_sched_ops = { ··· 403 401 &v3d_bin_sched_ops, 404 402 hw_jobs_limit, job_hang_limit, 405 403 msecs_to_jiffies(hang_limit_ms), 406 - "v3d_bin"); 404 + NULL, "v3d_bin"); 407 405 if (ret) { 408 406 dev_err(v3d->drm.dev, "Failed to create bin scheduler: %d.", ret); 409 407 return ret; ··· 413 411 &v3d_render_sched_ops, 414 412 hw_jobs_limit, job_hang_limit, 415 413 msecs_to_jiffies(hang_limit_ms), 416 - "v3d_render"); 414 + NULL, "v3d_render"); 417 415 if (ret) { 418 416 dev_err(v3d->drm.dev, "Failed to create render scheduler: %d.", 419 417 ret); ··· 425 423 &v3d_tfu_sched_ops, 426 424 hw_jobs_limit, job_hang_limit, 427 425 msecs_to_jiffies(hang_limit_ms), 428 - "v3d_tfu"); 426 + NULL, "v3d_tfu"); 429 427 if (ret) { 430 428 dev_err(v3d->drm.dev, "Failed to create TFU scheduler: %d.", 431 429 ret); ··· 438 436 &v3d_csd_sched_ops, 439 437 hw_jobs_limit, job_hang_limit, 440 438 msecs_to_jiffies(hang_limit_ms), 441 - "v3d_csd"); 439 + NULL, "v3d_csd"); 442 440 if (ret) { 443 441 dev_err(v3d->drm.dev, "Failed to create CSD scheduler: %d.", 444 442 ret); ··· 450 448 &v3d_cache_clean_sched_ops, 451 449 hw_jobs_limit, job_hang_limit, 452 450 msecs_to_jiffies(hang_limit_ms), 453 - 
"v3d_cache_clean"); 451 + NULL, "v3d_cache_clean"); 454 452 if (ret) { 455 453 dev_err(v3d->drm.dev, "Failed to create CACHE_CLEAN scheduler: %d.", 456 454 ret);
+41 -41
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 17 17 #include <drm/drm_atomic_helper.h> 18 18 #include <drm/drm_fb_helper.h> 19 19 #include <drm/drm_fourcc.h> 20 + #include <drm/drm_gem_atomic_helper.h> 20 21 #include <drm/drm_gem_framebuffer_helper.h> 21 22 #include <drm/drm_plane_helper.h> 22 23 #include <drm/drm_probe_helper.h> ··· 253 252 }; 254 253 255 254 static int vbox_primary_atomic_check(struct drm_plane *plane, 256 - struct drm_plane_state *new_state) 255 + struct drm_atomic_state *state) 257 256 { 257 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 258 + plane); 258 259 struct drm_crtc_state *crtc_state = NULL; 259 260 260 261 if (new_state->crtc) { 261 - crtc_state = drm_atomic_get_existing_crtc_state( 262 - new_state->state, new_state->crtc); 262 + crtc_state = drm_atomic_get_existing_crtc_state(state, 263 + new_state->crtc); 263 264 if (WARN_ON(!crtc_state)) 264 265 return -EINVAL; 265 266 } ··· 273 270 } 274 271 275 272 static void vbox_primary_atomic_update(struct drm_plane *plane, 276 - struct drm_plane_state *old_state) 273 + struct drm_atomic_state *state) 277 274 { 278 - struct drm_crtc *crtc = plane->state->crtc; 279 - struct drm_framebuffer *fb = plane->state->fb; 275 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 276 + plane); 277 + struct drm_crtc *crtc = new_state->crtc; 278 + struct drm_framebuffer *fb = new_state->fb; 280 279 struct vbox_private *vbox = to_vbox_dev(fb->dev); 281 280 struct drm_mode_rect *clips; 282 281 uint32_t num_clips, i; 283 282 284 283 vbox_crtc_set_base_and_mode(crtc, fb, 285 - plane->state->src_x >> 16, 286 - plane->state->src_y >> 16); 284 + new_state->src_x >> 16, 285 + new_state->src_y >> 16); 287 286 288 287 /* Send information about dirty rectangles to VBVA. 
*/ 289 288 290 - clips = drm_plane_get_damage_clips(plane->state); 291 - num_clips = drm_plane_get_damage_clips_count(plane->state); 289 + clips = drm_plane_get_damage_clips(new_state); 290 + num_clips = drm_plane_get_damage_clips_count(new_state); 292 291 293 292 if (!num_clips) 294 293 return; ··· 319 314 } 320 315 321 316 static void vbox_primary_atomic_disable(struct drm_plane *plane, 322 - struct drm_plane_state *old_state) 317 + struct drm_atomic_state *state) 323 318 { 319 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 320 + plane); 324 321 struct drm_crtc *crtc = old_state->crtc; 325 322 326 323 /* vbox_do_modeset checks plane->state->fb and will disable if NULL */ ··· 332 325 } 333 326 334 327 static int vbox_cursor_atomic_check(struct drm_plane *plane, 335 - struct drm_plane_state *new_state) 328 + struct drm_atomic_state *state) 336 329 { 330 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 331 + plane); 337 332 struct drm_crtc_state *crtc_state = NULL; 338 333 u32 width = new_state->crtc_w; 339 334 u32 height = new_state->crtc_h; 340 335 int ret; 341 336 342 337 if (new_state->crtc) { 343 - crtc_state = drm_atomic_get_existing_crtc_state( 344 - new_state->state, new_state->crtc); 338 + crtc_state = drm_atomic_get_existing_crtc_state(state, 339 + new_state->crtc); 345 340 if (WARN_ON(!crtc_state)) 346 341 return -EINVAL; 347 342 } ··· 384 375 } 385 376 386 377 static void vbox_cursor_atomic_update(struct drm_plane *plane, 387 - struct drm_plane_state *old_state) 378 + struct drm_atomic_state *state) 388 379 { 380 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 381 + plane); 382 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 383 + plane); 389 384 struct vbox_private *vbox = 390 385 container_of(plane->dev, struct vbox_private, ddev); 391 - struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc); 392 - struct drm_framebuffer *fb = 
plane->state->fb; 393 - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); 394 - u32 width = plane->state->crtc_w; 395 - u32 height = plane->state->crtc_h; 386 + struct vbox_crtc *vbox_crtc = to_vbox_crtc(new_state->crtc); 387 + struct drm_framebuffer *fb = new_state->fb; 388 + u32 width = new_state->crtc_w; 389 + u32 height = new_state->crtc_h; 390 + struct drm_shadow_plane_state *shadow_plane_state = 391 + to_drm_shadow_plane_state(new_state); 392 + struct dma_buf_map map = shadow_plane_state->map[0]; 393 + u8 *src = map.vaddr; /* TODO: Use mapping abstraction properly */ 396 394 size_t data_size, mask_size; 397 395 u32 flags; 398 - struct dma_buf_map map; 399 - int ret; 400 - u8 *src; 401 396 402 397 /* 403 398 * VirtualBox uses the host windowing system to draw the cursor so ··· 414 401 415 402 vbox_crtc->cursor_enabled = true; 416 403 417 - ret = drm_gem_vram_vmap(gbo, &map); 418 - if (ret) { 419 - /* 420 - * BUG: we should have pinned the BO in prepare_fb(). 421 - */ 422 - mutex_unlock(&vbox->hw_mutex); 423 - DRM_WARN("Could not map cursor bo, skipping update\n"); 424 - return; 425 - } 426 - src = map.vaddr; /* TODO: Use mapping abstraction properly */ 427 - 428 404 /* 429 405 * The mask must be calculated based on the alpha 430 406 * channel, one bit per ARGB word, and must be 32-bit ··· 423 421 data_size = width * height * 4 + mask_size; 424 422 425 423 copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); 426 - drm_gem_vram_vunmap(gbo, &map); 427 424 428 425 flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | 429 426 VBOX_MOUSE_POINTER_ALPHA; ··· 435 434 } 436 435 437 436 static void vbox_cursor_atomic_disable(struct drm_plane *plane, 438 - struct drm_plane_state *old_state) 437 + struct drm_atomic_state *state) 439 438 { 439 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 440 + plane); 440 441 struct vbox_private *vbox = 441 442 container_of(plane->dev, struct vbox_private, ddev); 442 
443 struct vbox_crtc *vbox_crtc = to_vbox_crtc(old_state->crtc); ··· 469 466 .atomic_check = vbox_cursor_atomic_check, 470 467 .atomic_update = vbox_cursor_atomic_update, 471 468 .atomic_disable = vbox_cursor_atomic_disable, 472 - .prepare_fb = drm_gem_vram_plane_helper_prepare_fb, 473 - .cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb, 469 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 474 470 }; 475 471 476 472 static const struct drm_plane_funcs vbox_cursor_plane_funcs = { 477 473 .update_plane = drm_atomic_helper_update_plane, 478 474 .disable_plane = drm_atomic_helper_disable_plane, 479 475 .destroy = drm_primary_helper_destroy, 480 - .reset = drm_atomic_helper_plane_reset, 481 - .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 482 - .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 476 + DRM_GEM_SHADOW_PLANE_FUNCS, 483 477 }; 484 478 485 479 static const u32 vbox_primary_plane_formats[] = {
+4 -13
drivers/gpu/drm/vc4/vc4_kms.c
··· 363 363 for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) { 364 364 struct vc4_crtc_state *vc4_crtc_state = 365 365 to_vc4_crtc_state(old_crtc_state); 366 - struct drm_crtc_commit *commit; 367 366 unsigned int channel = vc4_crtc_state->assigned_channel; 368 - unsigned long done; 367 + int ret; 369 368 370 369 if (channel == VC4_HVS_CHANNEL_DISABLED) 371 370 continue; ··· 372 373 if (!old_hvs_state->fifo_state[channel].in_use) 373 374 continue; 374 375 375 - commit = old_hvs_state->fifo_state[i].pending_commit; 376 - if (!commit) 377 - continue; 378 - 379 - done = wait_for_completion_timeout(&commit->hw_done, 10 * HZ); 380 - if (!done) 381 - drm_err(dev, "Timed out waiting for hw_done\n"); 382 - 383 - done = wait_for_completion_timeout(&commit->flip_done, 10 * HZ); 384 - if (!done) 385 - drm_err(dev, "Timed out waiting for flip_done\n"); 376 + ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[i].pending_commit); 377 + if (ret) 378 + drm_err(dev, "Timed out waiting for commit\n"); 386 379 } 387 380 388 381 drm_atomic_helper_commit_modeset_disables(dev, state);
+40 -34
drivers/gpu/drm/vc4/vc4_plane.c
··· 20 20 #include <drm/drm_atomic_uapi.h> 21 21 #include <drm/drm_fb_cma_helper.h> 22 22 #include <drm/drm_fourcc.h> 23 - #include <drm/drm_gem_framebuffer_helper.h> 23 + #include <drm/drm_gem_atomic_helper.h> 24 24 #include <drm/drm_plane_helper.h> 25 25 26 26 #include "uapi/drm/vc4_drm.h" ··· 1055 1055 * in the CRTC's flush. 1056 1056 */ 1057 1057 static int vc4_plane_atomic_check(struct drm_plane *plane, 1058 - struct drm_plane_state *state) 1058 + struct drm_atomic_state *state) 1059 1059 { 1060 - struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 1060 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1061 + plane); 1062 + struct vc4_plane_state *vc4_state = to_vc4_plane_state(new_plane_state); 1061 1063 int ret; 1062 1064 1063 1065 vc4_state->dlist_count = 0; 1064 1066 1065 - if (!plane_enabled(state)) 1067 + if (!plane_enabled(new_plane_state)) 1066 1068 return 0; 1067 1069 1068 - ret = vc4_plane_mode_set(plane, state); 1070 + ret = vc4_plane_mode_set(plane, new_plane_state); 1069 1071 if (ret) 1070 1072 return ret; 1071 1073 1072 - return vc4_plane_allocate_lbm(state); 1074 + return vc4_plane_allocate_lbm(new_plane_state); 1073 1075 } 1074 1076 1075 1077 static void vc4_plane_atomic_update(struct drm_plane *plane, 1076 - struct drm_plane_state *old_state) 1078 + struct drm_atomic_state *state) 1077 1079 { 1078 1080 /* No contents here. 
Since we don't know where in the CRTC's 1079 1081 * dlist we should be stored, our dlist is uploaded to the ··· 1135 1133 } 1136 1134 1137 1135 static void vc4_plane_atomic_async_update(struct drm_plane *plane, 1138 - struct drm_plane_state *state) 1136 + struct drm_atomic_state *state) 1139 1137 { 1138 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1139 + plane); 1140 1140 struct vc4_plane_state *vc4_state, *new_vc4_state; 1141 1141 1142 - swap(plane->state->fb, state->fb); 1143 - plane->state->crtc_x = state->crtc_x; 1144 - plane->state->crtc_y = state->crtc_y; 1145 - plane->state->crtc_w = state->crtc_w; 1146 - plane->state->crtc_h = state->crtc_h; 1147 - plane->state->src_x = state->src_x; 1148 - plane->state->src_y = state->src_y; 1149 - plane->state->src_w = state->src_w; 1150 - plane->state->src_h = state->src_h; 1151 - plane->state->src_h = state->src_h; 1152 - plane->state->alpha = state->alpha; 1153 - plane->state->pixel_blend_mode = state->pixel_blend_mode; 1154 - plane->state->rotation = state->rotation; 1155 - plane->state->zpos = state->zpos; 1156 - plane->state->normalized_zpos = state->normalized_zpos; 1157 - plane->state->color_encoding = state->color_encoding; 1158 - plane->state->color_range = state->color_range; 1159 - plane->state->src = state->src; 1160 - plane->state->dst = state->dst; 1161 - plane->state->visible = state->visible; 1142 + swap(plane->state->fb, new_plane_state->fb); 1143 + plane->state->crtc_x = new_plane_state->crtc_x; 1144 + plane->state->crtc_y = new_plane_state->crtc_y; 1145 + plane->state->crtc_w = new_plane_state->crtc_w; 1146 + plane->state->crtc_h = new_plane_state->crtc_h; 1147 + plane->state->src_x = new_plane_state->src_x; 1148 + plane->state->src_y = new_plane_state->src_y; 1149 + plane->state->src_w = new_plane_state->src_w; 1150 + plane->state->src_h = new_plane_state->src_h; 1151 + plane->state->src_h = new_plane_state->src_h; 1152 + plane->state->alpha = 
new_plane_state->alpha; 1153 + plane->state->pixel_blend_mode = new_plane_state->pixel_blend_mode; 1154 + plane->state->rotation = new_plane_state->rotation; 1155 + plane->state->zpos = new_plane_state->zpos; 1156 + plane->state->normalized_zpos = new_plane_state->normalized_zpos; 1157 + plane->state->color_encoding = new_plane_state->color_encoding; 1158 + plane->state->color_range = new_plane_state->color_range; 1159 + plane->state->src = new_plane_state->src; 1160 + plane->state->dst = new_plane_state->dst; 1161 + plane->state->visible = new_plane_state->visible; 1162 1162 1163 - new_vc4_state = to_vc4_plane_state(state); 1163 + new_vc4_state = to_vc4_plane_state(new_plane_state); 1164 1164 vc4_state = to_vc4_plane_state(plane->state); 1165 1165 1166 1166 vc4_state->crtc_x = new_vc4_state->crtc_x; ··· 1206 1202 } 1207 1203 1208 1204 static int vc4_plane_atomic_async_check(struct drm_plane *plane, 1209 - struct drm_plane_state *state) 1205 + struct drm_atomic_state *state) 1210 1206 { 1207 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1208 + plane); 1211 1209 struct vc4_plane_state *old_vc4_state, *new_vc4_state; 1212 1210 int ret; 1213 1211 u32 i; 1214 1212 1215 - ret = vc4_plane_mode_set(plane, state); 1213 + ret = vc4_plane_mode_set(plane, new_plane_state); 1216 1214 if (ret) 1217 1215 return ret; 1218 1216 1219 1217 old_vc4_state = to_vc4_plane_state(plane->state); 1220 - new_vc4_state = to_vc4_plane_state(state); 1218 + new_vc4_state = to_vc4_plane_state(new_plane_state); 1221 1219 if (old_vc4_state->dlist_count != new_vc4_state->dlist_count || 1222 1220 old_vc4_state->pos0_offset != new_vc4_state->pos0_offset || 1223 1221 old_vc4_state->pos2_offset != new_vc4_state->pos2_offset || 1224 1222 old_vc4_state->ptr0_offset != new_vc4_state->ptr0_offset || 1225 - vc4_lbm_size(plane->state) != vc4_lbm_size(state)) 1223 + vc4_lbm_size(plane->state) != vc4_lbm_size(new_plane_state)) 1226 1224 return -EINVAL; 1227 1225 1228 1226 /* 
Only pos0, pos2 and ptr0 DWORDS can be updated in an async update ··· 1256 1250 1257 1251 bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base); 1258 1252 1259 - drm_gem_fb_prepare_fb(plane, state); 1253 + drm_gem_plane_helper_prepare_fb(plane, state); 1260 1254 1261 1255 if (plane->state->fb == state->fb) 1262 1256 return 0;
+13 -6
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 83 83 }; 84 84 85 85 static int virtio_gpu_plane_atomic_check(struct drm_plane *plane, 86 - struct drm_plane_state *state) 86 + struct drm_atomic_state *state) 87 87 { 88 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 89 + plane); 88 90 bool is_cursor = plane->type == DRM_PLANE_TYPE_CURSOR; 89 91 struct drm_crtc_state *crtc_state; 90 92 int ret; 91 93 92 - if (!state->fb || WARN_ON(!state->crtc)) 94 + if (!new_plane_state->fb || WARN_ON(!new_plane_state->crtc)) 93 95 return 0; 94 96 95 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 97 + crtc_state = drm_atomic_get_crtc_state(state, 98 + new_plane_state->crtc); 96 99 if (IS_ERR(crtc_state)) 97 100 return PTR_ERR(crtc_state); 98 101 99 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 102 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 100 103 DRM_PLANE_HELPER_NO_SCALING, 101 104 DRM_PLANE_HELPER_NO_SCALING, 102 105 is_cursor, true); ··· 130 127 } 131 128 132 129 static void virtio_gpu_primary_plane_update(struct drm_plane *plane, 133 - struct drm_plane_state *old_state) 130 + struct drm_atomic_state *state) 134 131 { 132 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 133 + plane); 135 134 struct drm_device *dev = plane->dev; 136 135 struct virtio_gpu_device *vgdev = dev->dev_private; 137 136 struct virtio_gpu_output *output = NULL; ··· 244 239 } 245 240 246 241 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, 247 - struct drm_plane_state *old_state) 242 + struct drm_atomic_state *state) 248 243 { 244 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 245 + plane); 249 246 struct drm_device *dev = plane->dev; 250 247 struct virtio_gpu_device *vgdev = dev->dev_private; 251 248 struct virtio_gpu_output *output = NULL;
+7 -1
drivers/gpu/drm/vkms/vkms_crtc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 3 + #include <linux/dma-fence.h> 4 + 3 5 #include <drm/drm_atomic.h> 4 6 #include <drm/drm_atomic_helper.h> 5 7 #include <drm/drm_probe_helper.h> ··· 16 14 struct drm_crtc *crtc = &output->crtc; 17 15 struct vkms_crtc_state *state; 18 16 u64 ret_overrun; 19 - bool ret; 17 + bool ret, fence_cookie; 18 + 19 + fence_cookie = dma_fence_begin_signalling(); 20 20 21 21 ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer, 22 22 output->period_ns); ··· 52 48 if (!ret) 53 49 DRM_DEBUG_DRIVER("Composer worker already queued\n"); 54 50 } 51 + 52 + dma_fence_end_signalling(fence_cookie); 55 53 56 54 return HRTIMER_RESTART; 57 55 }
+18 -12
drivers/gpu/drm/vkms/vkms_plane.c
··· 5 5 #include <drm/drm_atomic.h> 6 6 #include <drm/drm_atomic_helper.h> 7 7 #include <drm/drm_fourcc.h> 8 + #include <drm/drm_gem_atomic_helper.h> 8 9 #include <drm/drm_gem_framebuffer_helper.h> 9 10 #include <drm/drm_plane_helper.h> 10 11 #include <drm/drm_gem_shmem_helper.h> ··· 93 92 }; 94 93 95 94 static void vkms_plane_atomic_update(struct drm_plane *plane, 96 - struct drm_plane_state *old_state) 95 + struct drm_atomic_state *state) 97 96 { 97 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 98 + plane); 98 99 struct vkms_plane_state *vkms_plane_state; 99 - struct drm_framebuffer *fb = plane->state->fb; 100 + struct drm_framebuffer *fb = new_state->fb; 100 101 struct vkms_composer *composer; 101 102 102 - if (!plane->state->crtc || !fb) 103 + if (!new_state->crtc || !fb) 103 104 return; 104 105 105 - vkms_plane_state = to_vkms_plane_state(plane->state); 106 + vkms_plane_state = to_vkms_plane_state(new_state); 106 107 107 108 composer = vkms_plane_state->composer; 108 - memcpy(&composer->src, &plane->state->src, sizeof(struct drm_rect)); 109 - memcpy(&composer->dst, &plane->state->dst, sizeof(struct drm_rect)); 109 + memcpy(&composer->src, &new_state->src, sizeof(struct drm_rect)); 110 + memcpy(&composer->dst, &new_state->dst, sizeof(struct drm_rect)); 110 111 memcpy(&composer->fb, fb, sizeof(struct drm_framebuffer)); 111 112 drm_framebuffer_get(&composer->fb); 112 113 composer->offset = fb->offsets[0]; ··· 117 114 } 118 115 119 116 static int vkms_plane_atomic_check(struct drm_plane *plane, 120 - struct drm_plane_state *state) 117 + struct drm_atomic_state *state) 121 118 { 119 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 120 + plane); 122 121 struct drm_crtc_state *crtc_state; 123 122 bool can_position = false; 124 123 int ret; 125 124 126 - if (!state->fb || WARN_ON(!state->crtc)) 125 + if (!new_plane_state->fb || WARN_ON(!new_plane_state->crtc)) 127 126 return 0; 128 127 129 - crtc_state = 
drm_atomic_get_crtc_state(state->state, state->crtc); 128 + crtc_state = drm_atomic_get_crtc_state(state, 129 + new_plane_state->crtc); 130 130 if (IS_ERR(crtc_state)) 131 131 return PTR_ERR(crtc_state); 132 132 133 133 if (plane->type == DRM_PLANE_TYPE_CURSOR) 134 134 can_position = true; 135 135 136 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 136 + ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 137 137 DRM_PLANE_HELPER_NO_SCALING, 138 138 DRM_PLANE_HELPER_NO_SCALING, 139 139 can_position, true); ··· 144 138 return ret; 145 139 146 140 /* for now primary plane must be visible and full screen */ 147 - if (!state->visible && !can_position) 141 + if (!new_plane_state->visible && !can_position) 148 142 return -EINVAL; 149 143 150 144 return 0; ··· 165 159 if (ret) 166 160 DRM_ERROR("vmap failed: %d\n", ret); 167 161 168 - return drm_gem_fb_prepare_fb(plane, state); 162 + return drm_gem_plane_helper_prepare_fb(plane, state); 169 163 } 170 164 171 165 static void vkms_cleanup_fb(struct drm_plane *plane,
+2 -5
drivers/gpu/drm/vkms/vkms_writeback.c
··· 42 42 } 43 43 44 44 if (fb->format->format != vkms_wb_formats[0]) { 45 - struct drm_format_name_buf format_name; 46 - 47 - DRM_DEBUG_KMS("Invalid pixel format %s\n", 48 - drm_get_format_name(fb->format->format, 49 - &format_name)); 45 + DRM_DEBUG_KMS("Invalid pixel format %p4cc\n", 46 + &fb->format->format); 50 47 return -EINVAL; 51 48 } 52 49
+1 -1
drivers/gpu/drm/vmwgfx/Makefile
··· 9 9 vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \ 10 10 vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \ 11 11 vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \ 12 - ttm_object.o ttm_lock.o 12 + ttm_object.o ttm_lock.o ttm_memory.o 13 13 14 14 vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o 15 15 obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
+13 -12
drivers/gpu/drm/vmwgfx/ttm_object.c
··· 42 42 */ 43 43 44 44 45 + #define pr_fmt(fmt) "[TTM] " fmt 46 + 47 + #include <linux/list.h> 48 + #include <linux/spinlock.h> 49 + #include <linux/slab.h> 50 + #include <linux/atomic.h> 51 + #include "ttm_object.h" 52 + 45 53 /** 46 54 * struct ttm_object_file 47 55 * ··· 63 55 * 64 56 * @ref_hash: Hash tables of ref objects, one per ttm_ref_type, 65 57 * for fast lookup of ref objects given a base object. 58 + * 59 + * @refcount: reference/usage count 66 60 */ 67 - 68 - #define pr_fmt(fmt) "[TTM] " fmt 69 - 70 - #include <linux/list.h> 71 - #include <linux/spinlock.h> 72 - #include <linux/slab.h> 73 - #include <linux/atomic.h> 74 - #include "ttm_object.h" 75 - 76 61 struct ttm_object_file { 77 62 struct ttm_object_device *tdev; 78 63 spinlock_t lock; ··· 74 73 struct kref refcount; 75 74 }; 76 75 77 - /** 76 + /* 78 77 * struct ttm_object_device 79 78 * 80 79 * @object_lock: lock that protects the object_hash hash table. ··· 97 96 struct idr idr; 98 97 }; 99 98 100 - /** 99 + /* 101 100 * struct ttm_ref_object 102 101 * 103 102 * @hash: Hash entry for the per-file object reference hash. ··· 569 568 /** 570 569 * get_dma_buf_unless_doomed - get a dma_buf reference if possible. 571 570 * 572 - * @dma_buf: Non-refcounted pointer to a struct dma-buf. 571 + * @dmabuf: Non-refcounted pointer to a struct dma-buf. 573 572 * 574 573 * Obtain a file reference from a lookup structure that doesn't refcount 575 574 * the file, but synchronizes with its release method to make sure it has
+2 -1
drivers/gpu/drm/vmwgfx/ttm_object.h
··· 43 43 #include <linux/rcupdate.h> 44 44 45 45 #include <drm/drm_hashtab.h> 46 - #include <drm/ttm/ttm_memory.h> 46 + 47 + #include "ttm_memory.h" 47 48 48 49 /** 49 50 * enum ttm_ref_type
+6 -3
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
··· 330 330 * 331 331 * @cbs: Pointer to the context binding state tracker. 332 332 * @bi: Information about the binding to track. 333 + * @shader_slot: The shader slot of the binding. 334 + * @slot: The slot of the binding. 333 335 * 334 336 * Starts tracking the binding in the context binding 335 337 * state structure @cbs. ··· 369 367 * vmw_binding_transfer: Transfer a context binding tracking entry. 370 368 * 371 369 * @cbs: Pointer to the persistent context binding state tracker. 370 + * @from: Staged binding info built during execbuf 372 371 * @bi: Information about the binding to track. 373 372 * 374 373 */ ··· 487 484 /** 488 485 * vmw_binding_state_commit - Commit staged binding info 489 486 * 490 - * @ctx: Pointer to context to commit the staged binding info to. 487 + * @to: Staged binding info area to copy into to. 491 488 * @from: Staged binding info built during execbuf. 492 - * @scrubbed: Transfer only scrubbed bindings. 493 489 * 494 490 * Transfers binding info from a temporary structure 495 491 * (typically used by execbuf) to the persistent ··· 513 511 /** 514 512 * vmw_binding_rebind_all - Rebind all scrubbed bindings of a context 515 513 * 516 - * @ctx: The context resource 514 + * @cbs: Pointer to the context binding state tracker. 517 515 * 518 516 * Walks through the context binding list and rebinds all scrubbed 519 517 * resources. ··· 791 789 * vmw_binding_emit_set_sr - Issue delayed DX shader resource binding commands 792 790 * 793 791 * @cbs: Pointer to the context's struct vmw_ctx_binding_state 792 + * @shader_slot: The shader slot of the binding. 794 793 */ 795 794 static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs, 796 795 int shader_slot)
+3 -2
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
··· 431 431 * @src_stride: Source stride in bytes. 432 432 * @w: Width of blit. 433 433 * @h: Height of blit. 434 + * @diff: The struct vmw_diff_cpy used to track the modified bounding box. 434 435 * return: Zero on success. Negative error value on failure. Will print out 435 436 * kernel warnings on caller bugs. 436 437 * ··· 466 465 dma_resv_assert_held(src->base.resv); 467 466 468 467 if (!ttm_tt_is_populated(dst->ttm)) { 469 - ret = dst->bdev->driver->ttm_tt_populate(dst->bdev, dst->ttm, &ctx); 468 + ret = dst->bdev->funcs->ttm_tt_populate(dst->bdev, dst->ttm, &ctx); 470 469 if (ret) 471 470 return ret; 472 471 } 473 472 474 473 if (!ttm_tt_is_populated(src->ttm)) { 475 - ret = src->bdev->driver->ttm_tt_populate(src->bdev, src->ttm, &ctx); 474 + ret = src->bdev->funcs->ttm_tt_populate(src->bdev, src->ttm, &ctx); 476 475 if (ret) 477 476 return ret; 478 477 }
+23 -9
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 131 131 * 132 132 * @dev_priv: Driver private. 133 133 * @buf: DMA buffer to move. 134 - * @pin: Pin buffer if true. 135 134 * @interruptible: Use interruptible wait. 136 135 * Return: Zero on success, Negative error code on failure. In particular 137 136 * -ERESTARTSYS if interrupted by a signal ··· 507 508 acc_size = ttm_round_pot(sizeof(*bo)); 508 509 acc_size += ttm_round_pot(npages * sizeof(void *)); 509 510 acc_size += ttm_round_pot(sizeof(struct ttm_tt)); 510 - ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size, 511 - ttm_bo_type_device, placement, 0, 512 - &ctx, acc_size, NULL, NULL, NULL); 511 + 512 + ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); 513 513 if (unlikely(ret)) 514 514 goto error_free; 515 + 516 + ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size, 517 + ttm_bo_type_device, placement, 0, 518 + &ctx, NULL, NULL, NULL); 519 + if (unlikely(ret)) 520 + goto error_account; 515 521 516 522 ttm_bo_pin(bo); 517 523 ttm_bo_unreserve(bo); 518 524 *p_bo = bo; 519 525 520 526 return 0; 527 + 528 + error_account: 529 + ttm_mem_global_free(&ttm_mem_glob, acc_size); 521 530 522 531 error_free: 523 532 kfree(bo); ··· 553 546 void (*bo_free)(struct ttm_buffer_object *bo)) 554 547 { 555 548 struct ttm_operation_ctx ctx = { interruptible, false }; 556 - struct ttm_bo_device *bdev = &dev_priv->bdev; 549 + struct ttm_device *bdev = &dev_priv->bdev; 557 550 size_t acc_size; 558 551 int ret; 559 552 bool user = (bo_free == &vmw_user_bo_destroy); ··· 566 559 vmw_bo->base.priority = 3; 567 560 vmw_bo->res_tree = RB_ROOT; 568 561 569 - ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size, 570 - ttm_bo_type_device, placement, 571 - 0, &ctx, acc_size, NULL, NULL, bo_free); 562 + ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); 572 563 if (unlikely(ret)) 573 564 return ret; 565 + 566 + ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size, 567 + ttm_bo_type_device, placement, 568 + 0, &ctx, NULL, NULL, bo_free); 569 + if (unlikely(ret)) { 570 
+ ttm_mem_global_free(&ttm_mem_glob, acc_size); 571 + return ret; 572 + } 574 573 575 574 if (pin) 576 575 ttm_bo_pin(&vmw_bo->base); ··· 648 635 * @handle: Pointer to where the handle value should be assigned. 649 636 * @p_vbo: Pointer to where the refcounted struct vmw_buffer_object pointer 650 637 * should be assigned. 638 + * @p_base: The TTM base object pointer about to be allocated. 651 639 * Return: Zero on success, negative error code on error. 652 640 */ 653 641 int vmw_user_bo_alloc(struct vmw_private *dev_priv, ··· 1072 1058 void vmw_bo_fence_single(struct ttm_buffer_object *bo, 1073 1059 struct vmw_fence_obj *fence) 1074 1060 { 1075 - struct ttm_bo_device *bdev = bo->bdev; 1061 + struct ttm_device *bdev = bo->bdev; 1076 1062 1077 1063 struct vmw_private *dev_priv = 1078 1064 container_of(bdev, struct vmw_private, bdev);
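The vmwgfx_bo.c hunks above split TTM's old combined `acc_size` accounting out of `ttm_bo_init_reserved()`: the driver now calls `ttm_mem_global_alloc()` itself before object init, and must free that accounting on any later failure (the new `error_account:` label). A minimal stand-alone sketch of that account-then-init unwind pattern, with stub allocators standing in for the real TTM calls:

```c
#include <assert.h>
#include <stddef.h>

/* Stubs standing in for ttm_mem_global_alloc()/ttm_mem_global_free()
 * and ttm_bo_init_reserved(); names and behavior are illustrative only. */
static long accounted;

static int mem_global_alloc(size_t acc_size) { accounted += acc_size; return 0; }
static void mem_global_free(size_t acc_size) { accounted -= acc_size; }

static int bo_init_reserved(int make_fail) { return make_fail ? -22 : 0; }

/* Account first, then init; unwind the accounting if init fails,
 * mirroring the error_account: label added in the diff. */
static int create_bo(size_t acc_size, int make_init_fail)
{
    int ret = mem_global_alloc(acc_size);
    if (ret)
        return ret;

    ret = bo_init_reserved(make_init_fail);
    if (ret)
        goto error_account;

    return 0;

error_account:
    mem_global_free(acc_size);
    return ret;
}
```

On success the accounting is kept for the object's lifetime; only the failure path gives it back.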
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
··· 276 276 return ret; 277 277 } 278 278 279 - /** 279 + /* 280 280 * Reserve @bytes number of bytes in the fifo. 281 281 * 282 282 * This function will return NULL (error) on two conditions:
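Several hunks in this series (here and in vmwgfx_context.c, vmwgfx_overlay.c, and others below) demote `/**` openers to `/*` on comments that are not kernel-doc, since a `/**` opener asks scripts/kernel-doc to parse the block and warn about missing or malformed `@param` lines. A small illustration of the distinction (the function is hypothetical):

```c
/**
 * fifo_reserve - kernel-doc comment: parsed by scripts/kernel-doc.
 * @bytes: Number of bytes to reserve.
 *
 * Return: 0 on success, -1 on a zero-length request.
 */
static int fifo_reserve(unsigned int bytes)
{
    return bytes ? 0 : -1;
}

/*
 * Plain comment: opens with a single asterisk, so the kernel-doc
 * tooling ignores it and no @param annotations are expected.
 */
```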
+9 -5
drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
··· 48 48 * @hw_submitted: List of command buffers submitted to hardware. 49 49 * @preempted: List of preempted command buffers. 50 50 * @num_hw_submitted: Number of buffers currently being processed by hardware 51 + * @block_submission: Identifies a block command submission. 51 52 */ 52 53 struct vmw_cmdbuf_context { 53 54 struct list_head submitted; ··· 59 58 }; 60 59 61 60 /** 62 - * struct vmw_cmdbuf_man: - Command buffer manager 61 + * struct vmw_cmdbuf_man - Command buffer manager 63 62 * 64 63 * @cur_mutex: Mutex protecting the command buffer used for incremental small 65 64 * kernel command submissions, @cur. ··· 89 88 * @max_hw_submitted: Max number of in-flight command buffers the device can 90 89 * handle. Immutable. 91 90 * @lock: Spinlock protecting command submission queues. 92 - * @header: Pool of DMA memory for device command buffer headers. 91 + * @headers: Pool of DMA memory for device command buffer headers. 93 92 * Internal protection. 94 93 * @dheaders: Pool of DMA memory for device command buffer headers with trailing 95 94 * space for inline data. Internal protection. ··· 144 143 * @cb_context: The device command buffer context. 145 144 * @list: List head for attaching to the manager lists. 146 145 * @node: The range manager node. 147 - * @handle. The DMA address of @cb_header. Handed to the device on command 146 + * @handle: The DMA address of @cb_header. Handed to the device on command 148 147 * buffer submission. 149 148 * @cmd: Pointer to the command buffer space of this buffer. 150 149 * @size: Size of the command buffer space of this buffer. ··· 250 249 * __vmw_cmdbuf_header_free - Free a struct vmw_cmdbuf_header and its 251 250 * associated structures. 252 251 * 253 - * header: Pointer to the header to free. 252 + * @header: Pointer to the header to free. 254 253 * 255 254 * For internal use. Must be called with man::lock held. 256 255 */ ··· 366 365 } 367 366 368 367 /** 369 - * vmw_cmdbuf_ctx_submit: Process a command buffer context. 
368 + * vmw_cmdbuf_ctx_process - Process a command buffer context. 370 369 * 371 370 * @man: The command buffer manager. 372 371 * @ctx: The command buffer context. 372 + * @notempty: Pass back count of non-empty command submitted lists. 373 373 * 374 374 * Submit command buffers to hardware if possible, and process finished 375 375 * buffers. Typically freeing them, but on preemption or error take ··· 1163 1161 * context. 1164 1162 * 1165 1163 * @man: The command buffer manager. 1164 + * @context: Device context to pass command through. 1166 1165 * 1167 1166 * Synchronously sends a preempt command. 1168 1167 */ ··· 1187 1184 * context. 1188 1185 * 1189 1186 * @man: The command buffer manager. 1187 + * @context: Device context to start/stop. 1190 1188 * @enable: Whether to enable or disable the context. 1191 1189 * 1192 1190 * Synchronously sends a device start / stop context command.
+2 -6
drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
··· 69 69 * vmw_cmdbuf_res_lookup - Look up a command buffer resource 70 70 * 71 71 * @man: Pointer to the command buffer resource manager 72 - * @resource_type: The resource type, that combined with the user key 72 + * @res_type: The resource type, that combined with the user key 73 73 * identifies the resource. 74 74 * @user_key: The user key. 75 75 * ··· 148 148 /** 149 149 * vmw_cmdbuf_res_revert - Revert a list of command buffer resource actions 150 150 * 151 - * @man: Pointer to the command buffer resource manager 152 151 * @list: Caller's list of command buffer resource action 153 152 * 154 153 * This function reverts a list of command buffer resource ··· 159 160 void vmw_cmdbuf_res_revert(struct list_head *list) 160 161 { 161 162 struct vmw_cmdbuf_res *entry, *next; 162 - int ret; 163 163 164 164 list_for_each_entry_safe(entry, next, list, head) { 165 165 switch (entry->state) { ··· 166 168 vmw_cmdbuf_res_free(entry->man, entry); 167 169 break; 168 170 case VMW_CMDBUF_RES_DEL: 169 - ret = drm_ht_insert_item(&entry->man->resources, 170 - &entry->hash); 171 + drm_ht_insert_item(&entry->man->resources, &entry->hash); 171 172 list_del(&entry->head); 172 173 list_add_tail(&entry->head, &entry->man->list); 173 174 entry->state = VMW_CMDBUF_RES_COMMITTED; ··· 324 327 } 325 328 326 329 /** 327 - * 328 330 * vmw_cmdbuf_res_man_size - Return the size of a command buffer managed 329 331 * resource manager 330 332 *
+3 -3
drivers/gpu/drm/vmwgfx/vmwgfx_context.c
··· 112 112 .unbind = vmw_dx_context_unbind 113 113 }; 114 114 115 - /** 115 + /* 116 116 * Context management: 117 117 */ 118 118 ··· 672 672 return 0; 673 673 } 674 674 675 - /** 675 + /* 676 676 * User-space context management: 677 677 */ 678 678 ··· 698 698 vmw_user_context_size); 699 699 } 700 700 701 - /** 701 + /* 702 702 * This function is called when user space has no more references on the 703 703 * base object. It releases the base-object's reference on the resource object. 704 704 */
+2 -1
drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
··· 63 63 * @min_initial_entries: Min number of initial intries at cotable allocation 64 64 * for this cotable type. 65 65 * @size: Size of each entry. 66 + * @unbind_func: Unbind call-back function. 66 67 */ 67 68 struct vmw_cotable_info { 68 69 u32 min_initial_entries; ··· 298 297 * 299 298 * @res: Pointer to the cotable resource. 300 299 * @readback: Whether to read back cotable data to the backup buffer. 301 - * val_buf: Pointer to a struct ttm_validate_buffer prepared by the caller 300 + * @val_buf: Pointer to a struct ttm_validate_buffer prepared by the caller 302 301 * for convenience / fencing. 303 302 * 304 303 * Unbinds the cotable from the device and fences the backup buffer.
+18 -27
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 47 47 #define VMW_MIN_INITIAL_WIDTH 800 48 48 #define VMW_MIN_INITIAL_HEIGHT 600 49 49 50 - #ifndef VMWGFX_GIT_VERSION 51 - #define VMWGFX_GIT_VERSION "Unknown" 52 - #endif 53 - 54 - #define VMWGFX_REPO "In Tree" 55 - 56 50 #define VMWGFX_VALIDATION_MEM_GRAN (16*PAGE_SIZE) 57 51 58 52 ··· 147 153 DRM_IOWR(DRM_COMMAND_BASE + DRM_VMW_MSG, \ 148 154 struct drm_vmw_msg_arg) 149 155 150 - /** 156 + /* 151 157 * The core DRM version of this macro doesn't account for 152 158 * DRM_COMMAND_BASE. 153 159 */ ··· 155 161 #define VMW_IOCTL_DEF(ioctl, func, flags) \ 156 162 [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = {DRM_IOCTL_##ioctl, flags, func} 157 163 158 - /** 164 + /* 159 165 * Ioctl definitions. 160 166 */ 161 167 ··· 520 526 vmw_fifo_release(dev_priv, &dev_priv->fifo); 521 527 } 522 528 523 - /** 529 + /* 524 530 * Sets the initial_[width|height] fields on the given vmw_private. 525 531 * 526 532 * It does so by reading SVGA_REG_[WIDTH|HEIGHT] regs and then ··· 593 599 /** 594 600 * vmw_dma_masks - set required page- and dma masks 595 601 * 596 - * @dev: Pointer to struct drm-device 602 + * @dev_priv: Pointer to struct drm-device 597 603 * 598 604 * With 32-bit we can only handle 32 bit PFNs. Optionally set that 599 605 * restriction also for 64-bit systems. 
··· 879 885 drm_vma_offset_manager_init(&dev_priv->vma_manager, 880 886 DRM_FILE_PAGE_OFFSET_START, 881 887 DRM_FILE_PAGE_OFFSET_SIZE); 882 - ret = ttm_bo_device_init(&dev_priv->bdev, &vmw_bo_driver, 883 - dev_priv->drm.dev, 884 - dev_priv->drm.anon_inode->i_mapping, 885 - &dev_priv->vma_manager, 886 - dev_priv->map_mode == vmw_dma_alloc_coherent, 887 - false); 888 + ret = ttm_device_init(&dev_priv->bdev, &vmw_bo_driver, 889 + dev_priv->drm.dev, 890 + dev_priv->drm.anon_inode->i_mapping, 891 + &dev_priv->vma_manager, 892 + dev_priv->map_mode == vmw_dma_alloc_coherent, 893 + false); 888 894 if (unlikely(ret != 0)) { 889 895 DRM_ERROR("Failed initializing TTM buffer object driver.\n"); 890 896 goto out_no_bdev; ··· 961 967 if (ret) 962 968 goto out_no_fifo; 963 969 964 - DRM_INFO("Atomic: %s\n", (dev_priv->drm.driver->driver_features & DRIVER_ATOMIC) 965 - ? "yes." : "no."); 966 970 if (dev_priv->sm_type == VMW_SM_5) 967 971 DRM_INFO("SM5 support available.\n"); 968 972 if (dev_priv->sm_type == VMW_SM_4_1) ··· 968 976 if (dev_priv->sm_type == VMW_SM_4) 969 977 DRM_INFO("SM4 support available.\n"); 970 978 971 - snprintf(host_log, sizeof(host_log), "vmwgfx: %s-%s", 972 - VMWGFX_REPO, VMWGFX_GIT_VERSION); 973 - vmw_host_log(host_log); 974 - 975 - memset(host_log, 0, sizeof(host_log)); 976 979 snprintf(host_log, sizeof(host_log), "vmwgfx: Module Version: %d.%d.%d", 977 980 VMWGFX_DRIVER_MAJOR, VMWGFX_DRIVER_MINOR, 978 981 VMWGFX_DRIVER_PATCHLEVEL); ··· 994 1007 vmw_gmrid_man_fini(dev_priv, VMW_PL_GMR); 995 1008 vmw_vram_manager_fini(dev_priv); 996 1009 out_no_vram: 997 - (void)ttm_bo_device_release(&dev_priv->bdev); 1010 + ttm_device_fini(&dev_priv->bdev); 998 1011 out_no_bdev: 999 1012 vmw_fence_manager_takedown(dev_priv->fman); 1000 1013 out_no_fman: ··· 1041 1054 if (dev_priv->has_mob) 1042 1055 vmw_gmrid_man_fini(dev_priv, VMW_PL_MOB); 1043 1056 vmw_vram_manager_fini(dev_priv); 1044 - (void) ttm_bo_device_release(&dev_priv->bdev); 1057 + 
ttm_device_fini(&dev_priv->bdev); 1045 1058 drm_vma_offset_manager_destroy(&dev_priv->vma_manager); 1046 1059 vmw_release_device_late(dev_priv); 1047 1060 vmw_fence_manager_takedown(dev_priv->fman); ··· 1255 1268 { 1256 1269 struct drm_device *dev = pci_get_drvdata(pdev); 1257 1270 1271 + ttm_mem_global_release(&ttm_mem_glob); 1258 1272 drm_dev_unregister(dev); 1259 1273 vmw_driver_unload(dev); 1260 1274 } ··· 1371 1383 vmw_execbuf_release_pinned_bo(dev_priv); 1372 1384 vmw_resource_evict_all(dev_priv); 1373 1385 vmw_release_device_early(dev_priv); 1374 - while (ttm_bo_swapout(&ctx) == 0); 1386 + while (ttm_bo_swapout(&ctx, GFP_KERNEL) > 0); 1375 1387 if (dev_priv->enable_fb) 1376 1388 vmw_fifo_resource_dec(dev_priv); 1377 1389 if (atomic_read(&dev_priv->num_fifo_resources) != 0) { ··· 1504 1516 if (IS_ERR(vmw)) 1505 1517 return PTR_ERR(vmw); 1506 1518 1507 - vmw->drm.pdev = pdev; 1508 1519 pci_set_drvdata(pdev, &vmw->drm); 1520 + 1521 + ret = ttm_mem_global_init(&ttm_mem_glob, &pdev->dev); 1522 + if (ret) 1523 + return ret; 1509 1524 1510 1525 ret = vmw_driver_load(vmw, ent->device); 1511 1526 if (ret)
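The suspend-path change above (`while (ttm_bo_swapout(&ctx) == 0);` becoming `while (ttm_bo_swapout(&ctx, GFP_KERNEL) > 0);`) tracks a TTM interface rework: the function now takes a gfp_t and reports progress with a positive return, so the drain loop spins while the return stays above zero. A stand-alone model of that drain loop, with a stub in place of the kernel function:

```c
#include <assert.h>

/* Stub modelling the new ttm_bo_swapout() contract: returns bytes
 * freed (> 0) while there is work, 0 when nothing is left, and a
 * negative errno on failure. Purely illustrative. */
static int remaining = 3;

static long swapout_stub(void)
{
    if (remaining > 0) {
        remaining--;
        return 4096;   /* pretend one page was swapped out */
    }
    return 0;
}

/* Drain everything: keep going while progress is reported,
 * stopping on 0 (done) or a negative value (error). */
static long drain(void)
{
    long total = 0, n;

    while ((n = swapout_stub()) > 0)
        total += n;
    return total;
}
```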
+6 -4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 55 55 56 56 57 57 #define VMWGFX_DRIVER_NAME "vmwgfx" 58 - #define VMWGFX_DRIVER_DATE "20200114" 58 + #define VMWGFX_DRIVER_DATE "20210218" 59 59 #define VMWGFX_DRIVER_MAJOR 2 60 60 #define VMWGFX_DRIVER_MINOR 18 61 - #define VMWGFX_DRIVER_PATCHLEVEL 0 61 + #define VMWGFX_DRIVER_PATCHLEVEL 1 62 62 #define VMWGFX_FIFO_STATIC_SIZE (1024*1024) 63 63 #define VMWGFX_MAX_RELOCATIONS 2048 64 64 #define VMWGFX_MAX_VALIDATIONS 2048 ··· 484 484 485 485 struct vmw_private { 486 486 struct drm_device drm; 487 - struct ttm_bo_device bdev; 487 + struct ttm_device bdev; 488 488 489 489 struct vmw_fifo_state fifo; 490 490 ··· 999 999 extern struct ttm_placement vmw_srf_placement; 1000 1000 extern struct ttm_placement vmw_mob_placement; 1001 1001 extern struct ttm_placement vmw_nonfixed_placement; 1002 - extern struct ttm_bo_driver vmw_bo_driver; 1002 + extern struct ttm_device_funcs vmw_bo_driver; 1003 1003 extern const struct vmw_sg_table * 1004 1004 vmw_bo_sg_table(struct ttm_buffer_object *bo); 1005 1005 extern int vmw_bo_create_and_populate(struct vmw_private *dev_priv, ··· 1525 1525 1526 1526 *buf = NULL; 1527 1527 if (tmp_buf != NULL) { 1528 + if (tmp_buf->base.pin_count > 0) 1529 + ttm_bo_unpin(&tmp_buf->base); 1528 1530 ttm_bo_put(&tmp_buf->base); 1529 1531 } 1530 1532 }
+12 -8
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 80 80 * with a NOP. 81 81 * @vmw_res_rel_cond_nop: Conditional NOP relocation. If the resource id after 82 82 * validation is -1, the command is replaced with a NOP. Otherwise no action. 83 - */ 83 + * @vmw_res_rel_max: Last value in the enum - used for error checking 84 + */ 84 85 enum vmw_resource_relocation_type { 85 86 vmw_res_rel_normal, 86 87 vmw_res_rel_nop, ··· 123 122 /** 124 123 * struct vmw_cmd_entry - Describe a command for the verifier 125 124 * 125 + * @func: Call-back to handle the command. 126 126 * @user_allow: Whether allowed from the execbuf ioctl. 127 127 * @gb_disable: Whether disabled if guest-backed objects are available. 128 128 * @gb_enable: Whether enabled iff guest-backed objects are available. 129 + * @cmd_name: Name of the command. 129 130 */ 130 131 struct vmw_cmd_entry { 131 132 int (*func) (struct vmw_private *, struct vmw_sw_context *, ··· 206 203 * 207 204 * @dev_priv: Pointer to the device private: 208 205 * @sw_context: The command submission context 206 + * @res: Pointer to the resource 209 207 * @node: The validation node holding the context resource metadata 210 208 */ 211 209 static int vmw_cmd_ctx_first_setup(struct vmw_private *dev_priv, ··· 513 509 /** 514 510 * vmw_resource_relocation_add - Add a relocation to the relocation list 515 511 * 516 - * @list: Pointer to head of relocation list. 512 + * @sw_context: Pointer to the software context. 517 513 * @res: The resource. 518 514 * @offset: Offset into the command buffer currently being parsed where the id 519 515 * that needs fixup is located. Granularity is one byte. ··· 643 639 * @converter: User-space visisble type specific information. 644 640 * @id_loc: Pointer to the location in the command buffer currently being parsed 645 641 * from where the user-space resource id handle is located. 646 - * @p_val: Pointer to pointer to resource validalidation node. Populated on 642 + * @p_res: Pointer to pointer to resource validalidation node. Populated on 647 643 * exit. 
648 644 */ 649 645 static int ··· 1704 1700 * 1705 1701 * @dev_priv: Pointer to a device private struct. 1706 1702 * @sw_context: The software context being used for this batch. 1707 - * @val_node: The validation node representing the resource. 1703 + * @res: Pointer to the resource. 1708 1704 * @buf_id: Pointer to the user-space backup buffer handle in the command 1709 1705 * stream. 1710 1706 * @backup_offset: Offset of backup into MOB. ··· 3743 3739 return 0; 3744 3740 } 3745 3741 3746 - /** 3742 + /* 3747 3743 * vmw_execbuf_fence_commands - create and submit a command stream fence 3748 3744 * 3749 3745 * Creates a fence object and submits a command stream marker. ··· 3943 3939 * On successful return, the function returns a pointer to the data in the 3944 3940 * command buffer and *@header is set to non-NULL. 3945 3941 * 3946 - * If command buffers could not be used, the function will return the value of 3947 - * @kernel_commands on function call. That value may be NULL. In that case, the 3948 - * value of *@header will be set to NULL. 3942 + * @kernel_commands: If command buffers could not be used, the function will 3943 + * return the value of @kernel_commands on function call. That value may be 3944 + * NULL. In that case, the value of *@header will be set to NULL. 3949 3945 * 3950 3946 * If an error is encountered, the function will return a pointer error value. 3951 3947 * If the function is interrupted by a signal while sleeping, it will return
+10 -8
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
··· 58 58 /** 59 59 * struct vmw_event_fence_action - fence action that delivers a drm event. 60 60 * 61 - * @e: A struct drm_pending_event that controls the event delivery. 62 61 * @action: A struct vmw_fence_action to hook up to a fence. 62 + * @event: A pointer to the pending event. 63 63 * @fence: A referenced pointer to the fence to keep it alive while @action 64 64 * hangs on it. 65 65 * @dev: Pointer to a struct drm_device so we can access the event stuff. 66 - * @kref: Both @e and @action has destructors, so we need to refcount. 67 - * @size: Size accounted for this object. 68 66 * @tv_sec: If non-null, the variable pointed to will be assigned 69 67 * current time tv_sec val when the fence signals. 70 68 * @tv_usec: Must be set if @tv_sec is set, and the variable pointed to will ··· 85 87 return container_of(fence->base.lock, struct vmw_fence_manager, lock); 86 88 } 87 89 88 - /** 90 + /* 89 91 * Note on fencing subsystem usage of irqs: 90 92 * Typically the vmw_fences_update function is called 91 93 * ··· 248 250 }; 249 251 250 252 251 - /** 253 + /* 252 254 * Execute signal actions on fences recently signaled. 253 255 * This is done from a workqueue so we don't have to execute 254 256 * signal actions from atomic context. ··· 706 708 } 707 709 708 710 709 - /** 711 + /* 710 712 * vmw_fence_fifo_down - signal all unsignaled fence objects. 711 713 */ 712 714 ··· 946 948 /** 947 949 * vmw_fence_obj_add_action - Add an action to a fence object. 948 950 * 949 - * @fence - The fence object. 950 - * @action - The action to add. 951 + * @fence: The fence object. 952 + * @action: The action to add. 951 953 * 952 954 * Note that the action callbacks may be executed before this function 953 955 * returns. ··· 999 1001 * @fence: The fence object on which to post the event. 1000 1002 * @event: Event to be posted. This event should've been alloced 1001 1003 * using k[mz]alloc, and should've been completely initialized. 
1004 + * @tv_sec: If non-null, the variable pointed to will be assigned 1005 + * current time tv_sec val when the fence signals. 1006 + * @tv_usec: Must be set if @tv_sec is set, and the variable pointed to will 1007 + * be assigned the current time tv_usec val when the fence signals. 1002 1008 * @interruptible: Interruptible waits if possible. 1003 1009 * 1004 1010 * As a side effect, the object pointed to by @event may have been
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
··· 437 437 * @filp: See the linux fops read documentation. 438 438 * @buffer: See the linux fops read documentation. 439 439 * @count: See the linux fops read documentation. 440 - * offset: See the linux fops read documentation. 440 + * @offset: See the linux fops read documentation. 441 441 * 442 442 * Wrapper around the drm_read function that makes sure the device is 443 443 * processing the fifo if drm_read decides to wait.
+65 -38
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 370 370 371 371 void 372 372 vmw_du_cursor_plane_atomic_update(struct drm_plane *plane, 373 - struct drm_plane_state *old_state) 373 + struct drm_atomic_state *state) 374 374 { 375 - struct drm_crtc *crtc = plane->state->crtc ?: old_state->crtc; 375 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 376 + plane); 377 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 378 + plane); 379 + struct drm_crtc *crtc = new_state->crtc ?: old_state->crtc; 376 380 struct vmw_private *dev_priv = vmw_priv(crtc->dev); 377 381 struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 378 - struct vmw_plane_state *vps = vmw_plane_state_to_vps(plane->state); 382 + struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state); 379 383 s32 hotspot_x, hotspot_y; 380 384 int ret = 0; 381 385 ··· 387 383 hotspot_x = du->hotspot_x; 388 384 hotspot_y = du->hotspot_y; 389 385 390 - if (plane->state->fb) { 391 - hotspot_x += plane->state->fb->hot_x; 392 - hotspot_y += plane->state->fb->hot_y; 386 + if (new_state->fb) { 387 + hotspot_x += new_state->fb->hot_x; 388 + hotspot_y += new_state->fb->hot_y; 393 389 } 394 390 395 391 du->cursor_surface = vps->surf; ··· 404 400 hotspot_y); 405 401 } else if (vps->bo) { 406 402 ret = vmw_cursor_update_bo(dev_priv, vps->bo, 407 - plane->state->crtc_w, 408 - plane->state->crtc_h, 403 + new_state->crtc_w, 404 + new_state->crtc_h, 409 405 hotspot_x, hotspot_y); 410 406 } else { 411 407 vmw_cursor_update_position(dev_priv, false, 0, 0); ··· 413 409 } 414 410 415 411 if (!ret) { 416 - du->cursor_x = plane->state->crtc_x + du->set_gui_x; 417 - du->cursor_y = plane->state->crtc_y + du->set_gui_y; 412 + du->cursor_x = new_state->crtc_x + du->set_gui_x; 413 + du->cursor_y = new_state->crtc_y + du->set_gui_y; 418 414 419 415 vmw_cursor_update_position(dev_priv, true, 420 416 du->cursor_x + hotspot_x, ··· 441 437 * Returns 0 on success 442 438 */ 443 439 int vmw_du_primary_plane_atomic_check(struct drm_plane 
*plane, 444 - struct drm_plane_state *state) 440 + struct drm_atomic_state *state) 445 441 { 442 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 443 + plane); 446 444 struct drm_crtc_state *crtc_state = NULL; 447 - struct drm_framebuffer *new_fb = state->fb; 445 + struct drm_framebuffer *new_fb = new_state->fb; 448 446 int ret; 449 447 450 - if (state->crtc) 451 - crtc_state = drm_atomic_get_new_crtc_state(state->state, state->crtc); 448 + if (new_state->crtc) 449 + crtc_state = drm_atomic_get_new_crtc_state(state, 450 + new_state->crtc); 452 451 453 - ret = drm_atomic_helper_check_plane_state(state, crtc_state, 452 + ret = drm_atomic_helper_check_plane_state(new_state, crtc_state, 454 453 DRM_PLANE_HELPER_NO_SCALING, 455 454 DRM_PLANE_HELPER_NO_SCALING, 456 455 false, true); 457 456 458 457 if (!ret && new_fb) { 459 - struct drm_crtc *crtc = state->crtc; 460 - struct vmw_connector_state *vcs; 458 + struct drm_crtc *crtc = new_state->crtc; 461 459 struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 462 460 463 - vcs = vmw_connector_state_to_vcs(du->connector.state); 461 + vmw_connector_state_to_vcs(du->connector.state); 464 462 } 465 463 466 464 ··· 474 468 * vmw_du_cursor_plane_atomic_check - check if the new state is okay 475 469 * 476 470 * @plane: cursor plane 477 - * @state: info on the new plane state 471 + * @new_state: info on the new plane state 478 472 * 479 473 * This is a chance to fail if the new cursor state does not fit 480 474 * our requirements. 
··· 482 476 * Returns 0 on success 483 477 */ 484 478 int vmw_du_cursor_plane_atomic_check(struct drm_plane *plane, 485 - struct drm_plane_state *new_state) 479 + struct drm_atomic_state *state) 486 480 { 481 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 482 + plane); 487 483 int ret = 0; 488 484 struct drm_crtc_state *crtc_state = NULL; 489 485 struct vmw_surface *surface = NULL; ··· 899 891 struct vmw_framebuffer_surface *vfbs; 900 892 enum SVGA3dSurfaceFormat format; 901 893 int ret; 902 - struct drm_format_name_buf format_name; 903 894 904 895 /* 3D is only supported on HWv8 and newer hosts */ 905 896 if (dev_priv->active_display_unit == vmw_du_legacy) ··· 936 929 format = SVGA3D_A1R5G5B5; 937 930 break; 938 931 default: 939 - DRM_ERROR("Invalid pixel format: %s\n", 940 - drm_get_format_name(mode_cmd->pixel_format, &format_name)); 932 + DRM_ERROR("Invalid pixel format: %p4cc\n", 933 + &mode_cmd->pixel_format); 941 934 return -EINVAL; 942 935 } 943 936 ··· 1065 1058 .dirty = vmw_framebuffer_bo_dirty_ext, 1066 1059 }; 1067 1060 1068 - /** 1061 + /* 1069 1062 * Pin the bofer in a location suitable for access by the 1070 1063 * display system. 
1071 1064 */ ··· 1152 1145 uint32_t format; 1153 1146 struct vmw_resource *res; 1154 1147 unsigned int bytes_pp; 1155 - struct drm_format_name_buf format_name; 1156 1148 int ret; 1157 1149 1158 1150 switch (mode_cmd->pixel_format) { ··· 1173 1167 break; 1174 1168 1175 1169 default: 1176 - DRM_ERROR("Invalid framebuffer format %s\n", 1177 - drm_get_format_name(mode_cmd->pixel_format, &format_name)); 1170 + DRM_ERROR("Invalid framebuffer format %p4cc\n", 1171 + &mode_cmd->pixel_format); 1178 1172 return -EINVAL; 1179 1173 } 1180 1174 ··· 1218 1212 struct drm_device *dev = &dev_priv->drm; 1219 1213 struct vmw_framebuffer_bo *vfbd; 1220 1214 unsigned int requested_size; 1221 - struct drm_format_name_buf format_name; 1222 1215 int ret; 1223 1216 1224 1217 requested_size = mode_cmd->height * mode_cmd->pitches[0]; ··· 1237 1232 case DRM_FORMAT_RGB565: 1238 1233 break; 1239 1234 default: 1240 - DRM_ERROR("Invalid pixel format: %s\n", 1241 - drm_get_format_name(mode_cmd->pixel_format, &format_name)); 1235 + DRM_ERROR("Invalid pixel format: %p4cc\n", 1236 + &mode_cmd->pixel_format); 1242 1237 return -EINVAL; 1243 1238 } 1244 1239 } ··· 1273 1268 /** 1274 1269 * vmw_kms_srf_ok - check if a surface can be created 1275 1270 * 1271 + * @dev_priv: Pointer to device private struct. 1276 1272 * @width: requested width 1277 1273 * @height: requested height 1278 1274 * ··· 1785 1779 drm_property_create_range(&dev_priv->drm, 1786 1780 DRM_MODE_PROP_IMMUTABLE, 1787 1781 "hotplug_mode_update", 0, 1); 1788 - 1789 - if (!dev_priv->hotplug_mode_update_property) 1790 - return; 1791 - 1792 1782 } 1793 1783 1794 1784 int vmw_kms_init(struct vmw_private *dev_priv) ··· 1899 1897 } 1900 1898 1901 1899 1902 - /** 1900 + /* 1903 1901 * Function called by DRM code called with vbl_lock held. 1904 1902 */ 1905 1903 u32 vmw_get_vblank_counter(struct drm_crtc *crtc) ··· 1907 1905 return 0; 1908 1906 } 1909 1907 1910 - /** 1908 + /* 1911 1909 * Function called by DRM code called with vbl_lock held. 
1912 1910 */ 1913 1911 int vmw_enable_vblank(struct drm_crtc *crtc) ··· 1915 1913 return -EINVAL; 1916 1914 } 1917 1915 1918 - /** 1916 + /* 1919 1917 * Function called by DRM code called with vbl_lock held. 1920 1918 */ 1921 1919 void vmw_disable_vblank(struct drm_crtc *crtc) ··· 2059 2057 { DRM_MODE("1152x864", DRM_MODE_TYPE_DRIVER, 108000, 1152, 1216, 2060 2058 1344, 1600, 0, 864, 865, 868, 900, 0, 2061 2059 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2060 + /* 1280x720@60Hz */ 2061 + { DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74500, 1280, 1344, 2062 + 1472, 1664, 0, 720, 723, 728, 748, 0, 2063 + DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2062 2064 /* 1280x768@60Hz */ 2063 2065 { DRM_MODE("1280x768", DRM_MODE_TYPE_DRIVER, 79500, 1280, 1344, 2064 2066 1472, 1664, 0, 768, 771, 778, 798, 0, ··· 2107 2101 { DRM_MODE("1856x1392", DRM_MODE_TYPE_DRIVER, 218250, 1856, 1952, 2108 2102 2176, 2528, 0, 1392, 1393, 1396, 1439, 0, 2109 2103 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2104 + /* 1920x1080@60Hz */ 2105 + { DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 173000, 1920, 2048, 2106 + 2248, 2576, 0, 1080, 1083, 1088, 1120, 0, 2107 + DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2110 2108 /* 1920x1200@60Hz */ 2111 2109 { DRM_MODE("1920x1200", DRM_MODE_TYPE_DRIVER, 193250, 1920, 2056, 2112 2110 2256, 2592, 0, 1200, 1203, 1209, 1245, 0, ··· 2119 2109 { DRM_MODE("1920x1440", DRM_MODE_TYPE_DRIVER, 234000, 1920, 2048, 2120 2110 2256, 2600, 0, 1440, 1441, 1444, 1500, 0, 2121 2111 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2112 + /* 2560x1440@60Hz */ 2113 + { DRM_MODE("2560x1440", DRM_MODE_TYPE_DRIVER, 241500, 2560, 2608, 2114 + 2640, 2720, 0, 1440, 1443, 1448, 1481, 0, 2115 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC) }, 2122 2116 /* 2560x1600@60Hz */ 2123 2117 { DRM_MODE("2560x1600", DRM_MODE_TYPE_DRIVER, 348500, 2560, 2752, 2124 2118 3032, 3504, 0, 1600, 1603, 1609, 1658, 0, 2125 2119 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) }, 2120 + /* 
2880x1800@60Hz */ 2121 + { DRM_MODE("2880x1800", DRM_MODE_TYPE_DRIVER, 337500, 2880, 2928, 2122 + 2960, 3040, 0, 1800, 1803, 1809, 1852, 0, 2123 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC) }, 2124 + /* 3840x2160@60Hz */ 2125 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 533000, 3840, 3888, 2126 + 3920, 4000, 0, 2160, 2163, 2168, 2222, 0, 2127 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC) }, 2128 + /* 3840x2400@60Hz */ 2129 + { DRM_MODE("3840x2400", DRM_MODE_TYPE_DRIVER, 592250, 3840, 3888, 2130 + 3920, 4000, 0, 2400, 2403, 2409, 2469, 0, 2131 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC) }, 2126 2132 /* Terminate */ 2127 2133 { DRM_MODE("", 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) }, 2128 2134 }; ··· 2147 2121 * vmw_guess_mode_timing - Provide fake timings for a 2148 2122 * 60Hz vrefresh mode. 2149 2123 * 2150 - * @mode - Pointer to a struct drm_display_mode with hdisplay and vdisplay 2124 + * @mode: Pointer to a struct drm_display_mode with hdisplay and vdisplay 2151 2125 * members filled in. 2152 2126 */ 2153 2127 void vmw_guess_mode_timing(struct drm_display_mode *mode) ··· 2202 2176 mode->hdisplay = du->pref_width; 2203 2177 mode->vdisplay = du->pref_height; 2204 2178 vmw_guess_mode_timing(mode); 2179 + drm_mode_set_name(mode); 2205 2180 2206 2181 if (vmw_kms_validate_mode_vram(dev_priv, 2207 2182 mode->hdisplay * assumed_bpp,
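The vmwgfx_kms.c hunks follow the core change listed in the tag description (atomic plane state helpers now take the commit-wide state): hooks such as `vmw_du_cursor_plane_atomic_update()` receive the full `struct drm_atomic_state` and look up their own old/new plane states via `drm_atomic_get_old/new_plane_state()`. A toy model of that lookup, with hypothetical miniature structs in place of the DRM ones:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the DRM atomic-state lookup; real code
 * uses struct drm_atomic_state and drm_atomic_get_new_plane_state(). */
struct plane_state { int crtc_w, crtc_h; };
struct atomic_state { struct plane_state *old_state, *new_state; };

static struct plane_state *get_new_plane_state(struct atomic_state *s)
{
    return s->new_state;
}

static struct plane_state *get_old_plane_state(struct atomic_state *s)
{
    return s->old_state;
}

/* The update hook now derives both states from the one state
 * argument instead of being handed old_state directly. */
static int cursor_width(struct atomic_state *s)
{
    struct plane_state *new_state = get_new_plane_state(s);
    struct plane_state *old_state = get_old_plane_state(s);

    return new_state ? new_state->crtc_w : old_state->crtc_w;
}
```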
+5 -5
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 245 245 }; 246 246 247 247 248 - static const uint32_t vmw_primary_plane_formats[] = { 248 + static const uint32_t __maybe_unused vmw_primary_plane_formats[] = { 249 249 DRM_FORMAT_XRGB1555, 250 250 DRM_FORMAT_RGB565, 251 251 DRM_FORMAT_RGB888, ··· 253 253 DRM_FORMAT_ARGB8888, 254 254 }; 255 255 256 - static const uint32_t vmw_cursor_plane_formats[] = { 256 + static const uint32_t __maybe_unused vmw_cursor_plane_formats[] = { 257 257 DRM_FORMAT_ARGB8888, 258 258 }; 259 259 ··· 456 456 457 457 /* Atomic Helpers */ 458 458 int vmw_du_primary_plane_atomic_check(struct drm_plane *plane, 459 - struct drm_plane_state *state); 459 + struct drm_atomic_state *state); 460 460 int vmw_du_cursor_plane_atomic_check(struct drm_plane *plane, 461 - struct drm_plane_state *state); 461 + struct drm_atomic_state *state); 462 462 void vmw_du_cursor_plane_atomic_update(struct drm_plane *plane, 463 - struct drm_plane_state *old_state); 463 + struct drm_atomic_state *state); 464 464 int vmw_du_cursor_plane_prepare_fb(struct drm_plane *plane, 465 465 struct drm_plane_state *new_state); 466 466 void vmw_du_plane_cleanup_fb(struct drm_plane *plane,
+10 -4
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
··· 49 49 struct vmw_framebuffer *fb; 50 50 }; 51 51 52 - /** 52 + /* 53 53 * Display unit using the legacy register interface. 54 54 */ 55 55 struct vmw_legacy_display_unit { ··· 206 206 * vmw_ldu_crtc_atomic_enable - Noop 207 207 * 208 208 * @crtc: CRTC associated with the new screen 209 + * @state: Unused 209 210 * 210 211 * This is called after a mode set has been completed. Here's 211 212 * usually a good place to call vmw_ldu_add_active/vmw_ldu_del_active ··· 222 221 * vmw_ldu_crtc_atomic_disable - Turns off CRTC 223 222 * 224 223 * @crtc: CRTC to be turned off 224 + * @state: Unused 225 225 */ 226 226 static void vmw_ldu_crtc_atomic_disable(struct drm_crtc *crtc, 227 227 struct drm_atomic_state *state) ··· 284 282 285 283 static void 286 284 vmw_ldu_primary_plane_atomic_update(struct drm_plane *plane, 287 - struct drm_plane_state *old_state) 285 + struct drm_atomic_state *state) 288 286 { 287 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 288 + plane); 289 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 290 + plane); 289 291 struct vmw_private *dev_priv; 290 292 struct vmw_legacy_display_unit *ldu; 291 293 struct vmw_framebuffer *vfb; 292 294 struct drm_framebuffer *fb; 293 - struct drm_crtc *crtc = plane->state->crtc ?: old_state->crtc; 295 + struct drm_crtc *crtc = new_state->crtc ?: old_state->crtc; 294 296 295 297 296 298 ldu = vmw_crtc_to_ldu(crtc); 297 299 dev_priv = vmw_priv(plane->dev); 298 - fb = plane->state->fb; 300 + fb = new_state->fb; 299 301 300 302 vfb = (fb) ? vmw_framebuffer_to_vfb(fb) : NULL; 301 303
+4
drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
··· 277 277 &batch->otables[i]); 278 278 } 279 279 280 + ttm_bo_unpin(batch->otable_bo); 280 281 ttm_bo_put(batch->otable_bo); 281 282 batch->otable_bo = NULL; 282 283 return ret; ··· 343 342 vmw_bo_fence_single(bo, NULL); 344 343 ttm_bo_unreserve(bo); 345 344 345 + ttm_bo_unpin(batch->otable_bo); 346 346 ttm_bo_put(batch->otable_bo); 347 347 batch->otable_bo = NULL; 348 348 } ··· 530 528 void vmw_mob_destroy(struct vmw_mob *mob) 531 529 { 532 530 if (mob->pt_bo) { 531 + ttm_bo_unpin(mob->pt_bo); 533 532 ttm_bo_put(mob->pt_bo); 534 533 mob->pt_bo = NULL; 535 534 } ··· 646 643 out_no_cmd_space: 647 644 vmw_fifo_resource_dec(dev_priv); 648 645 if (pt_set_up) { 646 + ttm_bo_unpin(mob->pt_bo); 649 647 ttm_bo_put(mob->pt_bo); 650 648 mob->pt_bo = NULL; 651 649 }
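The vmwgfx_mob.c hunks pair every earlier pin with an explicit `ttm_bo_unpin()` before `ttm_bo_put()` drops the last reference, since releasing a still-pinned buffer trips a TTM warning. A small model of keeping the pin count balanced at teardown (stub counters, not the TTM implementation):

```c
#include <assert.h>

/* Stub buffer mimicking TTM's pin accounting; illustrative only. */
struct bo { int refcount; int pin_count; int destroyed; };

static void bo_pin(struct bo *bo)   { bo->pin_count++; }
static void bo_unpin(struct bo *bo) { bo->pin_count--; }

/* Like ttm_bo_put(): destroying a still-pinned buffer is a bug. */
static int bo_put(struct bo *bo)
{
    if (--bo->refcount == 0) {
        if (bo->pin_count != 0)
            return -1;      /* would trigger a TTM warning */
        bo->destroyed = 1;
    }
    return 0;
}

/* The fixed teardown order from the diff: unpin first, then drop
 * the reference. */
static int teardown(struct bo *bo)
{
    bo_unpin(bo);
    return bo_put(bo);
}
```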
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
··· 253 253 * vmw_send_msg: Sends a message to the host 254 254 * 255 255 * @channel: RPC channel 256 - * @logmsg: NULL terminated string 256 + * @msg: NULL terminated string 257 257 * 258 258 * Returns: 0 on success 259 259 */
+8 -8
drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
··· 42 42 struct drm_vmw_control_stream_arg saved; 43 43 }; 44 44 45 - /** 45 + /* 46 46 * Overlay control 47 47 */ 48 48 struct vmw_overlay { ··· 85 85 cmd->flush.streamId = stream_id; 86 86 } 87 87 88 - /** 88 + /* 89 89 * Send put command to hw. 90 90 * 91 91 * Returns ··· 174 174 return 0; 175 175 } 176 176 177 - /** 177 + /* 178 178 * Send stop command to hw. 179 179 * 180 180 * Returns ··· 216 216 return 0; 217 217 } 218 218 219 - /** 219 + /* 220 220 * Move a buffer to vram or gmr if @pin is set, else unpin the buffer. 221 221 * 222 222 * With the introduction of screen objects buffers could now be ··· 235 235 return vmw_bo_pin_in_vram_or_gmr(dev_priv, buf, inter); 236 236 } 237 237 238 - /** 238 + /* 239 239 * Stop or pause a stream. 240 240 * 241 241 * If the stream is paused the no evict flag is removed from the buffer ··· 285 285 return 0; 286 286 } 287 287 288 - /** 288 + /* 289 289 * Update a stream and send any put or stop fifo commands needed. 290 290 * 291 291 * The caller must hold the overlay lock. ··· 353 353 return 0; 354 354 } 355 355 356 - /** 356 + /* 357 357 * Try to resume all paused streams. 358 358 * 359 359 * Used by the kms code after moving a new scanout buffer to vram. ··· 387 387 return 0; 388 388 } 389 389 390 - /** 390 + /* 391 391 * Pauses all active streams. 392 392 * 393 393 * Used by the kms code when moving a new scanout buffer to vram.
+5 -7
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 202 202 * 203 203 * @dev_priv: Pointer to a device private struct. 204 204 * @res: The struct vmw_resource to initialize. 205 - * @obj_type: Resource object type. 206 205 * @delay_id: Boolean whether to defer device id allocation until 207 206 * the first validation. 208 207 * @res_free: Resource destructor. ··· 287 288 * @tfile: Pointer to a struct ttm_object_file identifying the caller 288 289 * @handle: The TTM user-space handle 289 290 * @converter: Pointer to an object describing the resource type 290 - * @p_res: On successful return the location pointed to will contain 291 - * a pointer to a refcounted struct vmw_resource. 292 291 * 293 292 * If the handle can't be found or is associated with an incorrect resource 294 293 * type, -EINVAL will be returned. ··· 312 315 return converter->base_obj_to_res(base); 313 316 } 314 317 315 - /** 318 + /* 316 319 * Helper function that looks either a surface or bo. 317 320 * 318 321 * The pointer this pointed at by out_surf and out_buf needs to be null. ··· 385 388 * @res: The resource to make visible to the device. 386 389 * @val_buf: Information about a buffer possibly 387 390 * containing backup data if a bind operation is needed. 391 + * @dirtying: Transfer dirty regions. 388 392 * 389 393 * On hardware resource shortage, this function returns -EBUSY and 390 394 * should be retried once resources have been freed up. ··· 584 586 return ret; 585 587 } 586 588 587 - /** 589 + /* 588 590 * vmw_resource_reserve - Reserve a resource for command submission 589 591 * 590 592 * @res: The resource to reserve. 
··· 856 858 struct ttm_resource *mem) 857 859 { 858 860 struct vmw_buffer_object *dx_query_mob; 859 - struct ttm_bo_device *bdev = bo->bdev; 861 + struct ttm_device *bdev = bo->bdev; 860 862 struct vmw_private *dev_priv; 861 863 862 864 ··· 971 973 mutex_unlock(&dev_priv->cmdbuf_mutex); 972 974 } 973 975 974 - /** 976 + /* 975 977 * vmw_resource_pin - Add a pin reference on a resource 976 978 * 977 979 * @res: The resource to add a pin reference on
+12 -8
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
··· 84 84 SVGAFifoCmdDefineGMRFB body; 85 85 }; 86 86 87 - /** 87 + /* 88 88 * Display unit using screen objects. 89 89 */ 90 90 struct vmw_screen_object_unit { ··· 112 112 vmw_sou_destroy(vmw_crtc_to_sou(crtc)); 113 113 } 114 114 115 - /** 115 + /* 116 116 * Send the fifo command to create a screen. 117 117 */ 118 118 static int vmw_sou_fifo_create(struct vmw_private *dev_priv, ··· 160 160 return 0; 161 161 } 162 162 163 - /** 163 + /* 164 164 * Send the fifo command to destroy a screen. 165 165 */ 166 166 static int vmw_sou_fifo_destroy(struct vmw_private *dev_priv, ··· 263 263 /** 264 264 * vmw_sou_crtc_helper_prepare - Noop 265 265 * 266 - * @crtc: CRTC associated with the new screen 266 + * @crtc: CRTC associated with the new screen 267 267 * 268 268 * Prepares the CRTC for a mode set, but we don't need to do anything here. 269 269 */ ··· 275 275 * vmw_sou_crtc_atomic_enable - Noop 276 276 * 277 277 * @crtc: CRTC associated with the new screen 278 + * @state: Unused 278 279 * 279 280 * This is called after a mode set has been completed. 
280 281 */ ··· 288 287 * vmw_sou_crtc_atomic_disable - Turns off CRTC 289 288 * 290 289 * @crtc: CRTC to be turned off 290 + * @state: Unused 291 291 */ 292 292 static void vmw_sou_crtc_atomic_disable(struct drm_crtc *crtc, 293 293 struct drm_atomic_state *state) ··· 730 728 731 729 static void 732 730 vmw_sou_primary_plane_atomic_update(struct drm_plane *plane, 733 - struct drm_plane_state *old_state) 731 + struct drm_atomic_state *state) 734 732 { 735 - struct drm_crtc *crtc = plane->state->crtc; 733 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 734 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); 735 + struct drm_crtc *crtc = new_state->crtc; 736 736 struct drm_pending_vblank_event *event = NULL; 737 737 struct vmw_fence_obj *fence = NULL; 738 738 int ret; 739 739 740 740 /* In case of device error, maintain consistent atomic state */ 741 - if (crtc && plane->state->fb) { 741 + if (crtc && new_state->fb) { 742 742 struct vmw_private *dev_priv = vmw_priv(crtc->dev); 743 743 struct vmw_framebuffer *vfb = 744 - vmw_framebuffer_to_vfb(plane->state->fb); 744 + vmw_framebuffer_to_vfb(new_state->fb); 745 745 746 746 if (vfb->bo) 747 747 ret = vmw_sou_plane_update_bo(dev_priv, plane,
+5 -5
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
··· 125 125 .commit_notify = vmw_dx_shader_commit_notify, 126 126 }; 127 127 128 - /** 128 + /* 129 129 * Shader management: 130 130 */ 131 131 ··· 654 654 655 655 656 656 657 - /** 657 + /* 658 658 * User-space shader management: 659 659 */ 660 660 ··· 686 686 vmw_shader_size); 687 687 } 688 688 689 - /** 689 + /* 690 690 * This function is called when user space has no more references on the 691 691 * base object. It releases the base-object's reference on the resource object. 692 692 */ ··· 945 945 * vmw_compat_shader_add - Create a compat shader and stage it for addition 946 946 * as a command buffer managed resource. 947 947 * 948 + * @dev_priv: Pointer to device private structure. 948 949 * @man: Pointer to the compat shader manager identifying the shader namespace. 949 950 * @user_key: The key that is used to identify the shader. The key is 950 951 * unique to the shader type. 951 952 * @bytecode: Pointer to the bytecode of the shader. 952 953 * @shader_type: Shader type. 953 - * @tfile: Pointer to a struct ttm_object_file that the guest-backed shader is 954 - * to be created with. 954 + * @size: Command size. 955 955 * @list: Caller's list of staged command buffer resource actions. 956 956 * 957 957 */
+1
drivers/gpu/drm/vmwgfx/vmwgfx_so.c
··· 42 42 /** 43 43 * struct vmw_view - view metadata 44 44 * 45 + * @rcu: RCU callback head 45 46 * @res: The struct vmw_resource we derive from 46 47 * @ctx: Non-refcounted pointer to the context this view belongs to. 47 48 * @srf: Refcounted pointer to the surface pointed to by this view.
+14 -7
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 61 61 * @bottom: Bottom side of bounding box. 62 62 * @fb_left: Left side of the framebuffer/content bounding box 63 63 * @fb_top: Top of the framebuffer/content bounding box 64 + * @pitch: framebuffer pitch (stride) 64 65 * @buf: buffer object when DMA-ing between buffer and screen targets. 65 66 * @sid: Surface ID when copying between surface and screen targets. 66 67 */ ··· 110 109 * content_vfbs dimensions, then this is a pointer into the 111 110 * corresponding field in content_vfbs. If not, then this 112 111 * is a separate buffer to which content_vfbs will blit to. 113 - * @content_type: content_fb type 114 - * @defined: true if the current display unit has been initialized 112 + * @content_fb_type: content_fb type 113 + * @display_width: display width 114 + * @display_height: display height 115 + * @defined: true if the current display unit has been initialized 116 + * @cpp: Bytes per pixel 115 117 */ 116 118 struct vmw_screen_target_display_unit { 117 119 struct vmw_display_unit base; ··· 656 652 * @file_priv: Pointer to a struct drm-file identifying the caller. May be 657 653 * set to NULL, but then @user_fence_rep must also be set to NULL. 658 654 * @vfb: Pointer to the buffer-object backed framebuffer. 655 + * @user_fence_rep: User-space provided structure for fence information. 659 656 * @clips: Array of clip rects. Either @clips or @vclips must be NULL. 660 657 * @vclips: Alternate array of clip rects. Either @clips or @vclips must 661 658 * be NULL. 
··· 1580 1575 */ 1581 1576 static void 1582 1577 vmw_stdu_primary_plane_atomic_update(struct drm_plane *plane, 1583 - struct drm_plane_state *old_state) 1578 + struct drm_atomic_state *state) 1584 1579 { 1585 - struct vmw_plane_state *vps = vmw_plane_state_to_vps(plane->state); 1586 - struct drm_crtc *crtc = plane->state->crtc; 1580 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 1581 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); 1582 + struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state); 1583 + struct drm_crtc *crtc = new_state->crtc; 1587 1584 struct vmw_screen_target_display_unit *stdu; 1588 1585 struct drm_pending_vblank_event *event; 1589 1586 struct vmw_fence_obj *fence = NULL; ··· 1593 1586 int ret; 1594 1587 1595 1588 /* If case of device error, maintain consistent atomic state */ 1596 - if (crtc && plane->state->fb) { 1589 + if (crtc && new_state->fb) { 1597 1590 struct vmw_framebuffer *vfb = 1598 - vmw_framebuffer_to_vfb(plane->state->fb); 1591 + vmw_framebuffer_to_vfb(new_state->fb); 1599 1592 stdu = vmw_crtc_to_stdu(crtc); 1600 1593 dev_priv = vmw_priv(crtc->dev); 1601 1594
+10 -7
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 41 41 /** 42 42 * struct vmw_user_surface - User-space visible surface resource 43 43 * 44 + * @prime: The TTM prime object. 44 45 * @base: The TTM base object handling user-space visibility. 45 46 * @srf: The surface metadata. 46 47 * @size: TTM accounting size for the surface. 47 - * @master: master of the creating client. Used for security check. 48 + * @master: Master of the creating client. Used for security check. 49 + * @backup_base: The TTM base object of the backup buffer. 48 50 */ 49 51 struct vmw_user_surface { 50 52 struct ttm_prime_object prime; ··· 71 69 }; 72 70 73 71 /** 74 - * vmw_surface_dirty - Surface dirty-tracker 72 + * struct vmw_surface_dirty - Surface dirty-tracker 75 73 * @cache: Cached layout information of the surface. 76 74 * @size: Accounting size for the struct vmw_surface_dirty. 77 75 * @num_subres: Number of subresources. ··· 164 162 .clean = vmw_surface_clean, 165 163 }; 166 164 167 - /** 165 + /* 168 166 * struct vmw_surface_dma - SVGA3D DMA command 169 167 */ 170 168 struct vmw_surface_dma { ··· 174 172 SVGA3dCmdSurfaceDMASuffix suffix; 175 173 }; 176 174 177 - /** 175 + /* 178 176 * struct vmw_surface_define - SVGA3D Surface Define command 179 177 */ 180 178 struct vmw_surface_define { ··· 182 180 SVGA3dCmdDefineSurface body; 183 181 }; 184 182 185 - /** 183 + /* 186 184 * struct vmw_surface_destroy - SVGA3D Surface Destroy command 187 185 */ 188 186 struct vmw_surface_destroy { ··· 546 544 * 547 545 * @res: Pointer to a struct vmw_res embedded in a struct 548 546 * vmw_surface. 547 + * @readback: Readback - only true if dirty 549 548 * @val_buf: Pointer to a struct ttm_validate_buffer containing 550 549 * information about the backup buffer. 551 550 * ··· 1063 1060 /** 1064 1061 * vmw_surface_define_encode - Encode a surface_define command. 1065 1062 * 1066 - * @srf: Pointer to a struct vmw_surface object. 1067 - * @cmd_space: Pointer to memory area in which the commands should be encoded. 
1063 + * @res: Pointer to a struct vmw_resource embedded in a struct 1064 + * vmw_surface. 1068 1065 */ 1069 1066 static int vmw_gb_surface_create(struct vmw_resource *res) 1070 1067 {
+1
drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
··· 11 11 /** 12 12 * struct vmw_thp_manager - Range manager implementing huge page alignment 13 13 * 14 + * @manager: TTM resource manager. 14 15 * @mm: The underlying range manager. Protected by @lock. 15 16 * @lock: Manager lock. 16 17 */
+36 -8
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
··· 265 265 * 266 266 * @viter: Pointer to the iterator to initialize 267 267 * @vsgt: Pointer to a struct vmw_sg_table to initialize from 268 + * @p_offset: Pointer offset used to update current array position 268 269 * 269 270 * Note that we're following the convention of __sg_page_iter_start, so that 270 271 * the iterator doesn't point to a valid page after initialization; it has ··· 483 482 } 484 483 485 484 486 - static int vmw_ttm_bind(struct ttm_bo_device *bdev, 485 + static int vmw_ttm_bind(struct ttm_device *bdev, 487 486 struct ttm_tt *ttm, struct ttm_resource *bo_mem) 488 487 { 489 488 struct vmw_ttm_tt *vmw_be = ··· 527 526 return ret; 528 527 } 529 528 530 - static void vmw_ttm_unbind(struct ttm_bo_device *bdev, 529 + static void vmw_ttm_unbind(struct ttm_device *bdev, 531 530 struct ttm_tt *ttm) 532 531 { 533 532 struct vmw_ttm_tt *vmw_be = ··· 553 552 } 554 553 555 554 556 - static void vmw_ttm_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm) 555 + static void vmw_ttm_destroy(struct ttm_device *bdev, struct ttm_tt *ttm) 557 556 { 558 557 struct vmw_ttm_tt *vmw_be = 559 558 container_of(ttm, struct vmw_ttm_tt, dma_ttm); ··· 573 572 } 574 573 575 574 576 - static int vmw_ttm_populate(struct ttm_bo_device *bdev, 575 + static int vmw_ttm_populate(struct ttm_device *bdev, 577 576 struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) 578 577 { 578 + unsigned int i; 579 + int ret; 580 + 579 581 /* TODO: maybe completely drop this ? 
*/ 580 582 if (ttm_tt_is_populated(ttm)) 581 583 return 0; 582 584 583 - return ttm_pool_alloc(&bdev->pool, ttm, ctx); 585 + ret = ttm_pool_alloc(&bdev->pool, ttm, ctx); 586 + if (ret) 587 + return ret; 588 + 589 + for (i = 0; i < ttm->num_pages; ++i) { 590 + ret = ttm_mem_global_alloc_page(&ttm_mem_glob, ttm->pages[i], 591 + PAGE_SIZE, ctx); 592 + if (ret) 593 + goto error; 594 + } 595 + return 0; 596 + 597 + error: 598 + while (i--) 599 + ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i], 600 + PAGE_SIZE); 601 + ttm_pool_free(&bdev->pool, ttm); 602 + return ret; 584 603 } 585 604 586 - static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev, 605 + static void vmw_ttm_unpopulate(struct ttm_device *bdev, 587 606 struct ttm_tt *ttm) 588 607 { 589 608 struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt, 590 609 dma_ttm); 610 + unsigned int i; 591 611 592 612 if (vmw_tt->mob) { 593 613 vmw_mob_destroy(vmw_tt->mob); ··· 616 594 } 617 595 618 596 vmw_ttm_unmap_dma(vmw_tt); 597 + 598 + for (i = 0; i < ttm->num_pages; ++i) 599 + ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i], 600 + PAGE_SIZE); 601 + 619 602 ttm_pool_free(&bdev->pool, ttm); 620 603 } 621 604 ··· 666 639 return vmw_user_bo_verify_access(bo, tfile); 667 640 } 668 641 669 - static int vmw_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_resource *mem) 642 + static int vmw_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem) 670 643 { 671 644 struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev); 672 645 ··· 691 664 * vmw_move_notify - TTM move_notify_callback 692 665 * 693 666 * @bo: The TTM buffer object about to move. 667 + * @evict: Unused 694 668 * @mem: The struct ttm_resource indicating to what memory 695 669 * region the move is taking place. 
696 670 * ··· 770 742 vmw_move_notify(bo, false, NULL); 771 743 } 772 744 773 - struct ttm_bo_driver vmw_bo_driver = { 745 + struct ttm_device_funcs vmw_bo_driver = { 774 746 .ttm_tt_create = &vmw_ttm_tt_create, 775 747 .ttm_tt_populate = &vmw_ttm_populate, 776 748 .ttm_tt_unpopulate = &vmw_ttm_unpopulate,
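The new error path in `vmw_ttm_populate()` above is a classic partial-failure unwind: if accounting page `i` fails, exactly pages `0..i-1` are released via `while (i--)` before the pool allocation itself is freed. A self-contained sketch of that rollback pattern (the helper names here are hypothetical, not the `ttm_mem_global` API):

```c
#include <assert.h>
#include <stddef.h>

static int acquired[8];

/* Fails at the index given by fail_at (never fails if negative). */
static int acquire_page(size_t i, int fail_at)
{
	if ((int)i == fail_at)
		return -1;
	acquired[i] = 1;
	return 0;
}

static void release_page(size_t i)
{
	acquired[i] = 0;
}

/* Acquire num pages; on failure, release only the pages already
 * acquired, leaving no page accounted, and report the error. */
static int populate(size_t num, int fail_at)
{
	size_t i;

	for (i = 0; i < num; ++i) {
		if (acquire_page(i, fail_at))
			goto error;
	}
	return 0;

error:
	/* i is the failing index, whose acquire did NOT succeed, so the
	 * loop releases i-1 down to 0 -- the same shape as the patch. */
	while (i--)
		release_page(i);
	return -1;
}
```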
+3 -2
drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
··· 48 48 u32 as_mob : 1; 49 49 u32 cpu_blit : 1; 50 50 }; 51 - 52 51 /** 53 52 * struct vmw_validation_res_node - Resource validation metadata. 54 53 * @head: List head for the resource validation list. ··· 63 64 * @first_usage: True iff the resource has been seen only once in the current 64 65 * validation batch. 65 66 * @reserved: Whether the resource is currently reserved by this process. 67 + * @dirty_set: Change dirty status of the resource. 68 + * @dirty: Dirty information VMW_RES_DIRTY_XX. 66 69 * @private: Optionally additional memory for caller-private data. 67 70 * 68 71 * Bit fields are used since these structures are allocated and freed in ··· 206 205 * vmw_validation_find_res_dup - Find a duplicate resource entry in the 207 206 * validation context's lists. 208 207 * @ctx: The validation context to search. 209 - * @vbo: The buffer object to search for. 208 + * @res: Reference counted resource pointer. 210 209 * 211 210 * Return: Pointer to the struct vmw_validation_bo_node referencing the 212 211 * duplicate, or NULL if none found.
+3 -7
drivers/gpu/drm/xen/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config DRM_XEN 3 - bool "DRM Support for Xen guest OS" 4 - depends on XEN 5 - help 6 - Choose this option if you want to enable DRM support 7 - for Xen. 3 + bool 8 4 9 5 config DRM_XEN_FRONTEND 10 6 tristate "Para-virtualized frontend driver for Xen guest OS" 11 - depends on DRM_XEN 12 - depends on DRM 7 + depends on XEN && DRM 8 + select DRM_XEN 13 9 select DRM_KMS_HELPER 14 10 select VIDEOMODE_HELPERS 15 11 select XEN_XENBUS_FRONTEND
+2 -1
drivers/gpu/drm/xen/xen_drm_front_kms.c
··· 13 13 #include <drm/drm_drv.h> 14 14 #include <drm/drm_fourcc.h> 15 15 #include <drm/drm_gem.h> 16 + #include <drm/drm_gem_atomic_helper.h> 16 17 #include <drm/drm_gem_framebuffer_helper.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/drm_vblank.h> ··· 302 301 .mode_valid = display_mode_valid, 303 302 .enable = display_enable, 304 303 .disable = display_disable, 305 - .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, 304 + .prepare_fb = drm_gem_simple_display_pipe_prepare_fb, 306 305 .check = display_check, 307 306 .update = display_update, 308 307 };
+18 -14
drivers/gpu/drm/xlnx/zynqmp_disp.c
··· 1143 1143 1144 1144 static int 1145 1145 zynqmp_disp_plane_atomic_check(struct drm_plane *plane, 1146 - struct drm_plane_state *state) 1146 + struct drm_atomic_state *state) 1147 1147 { 1148 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1149 + plane); 1148 1150 struct drm_crtc_state *crtc_state; 1149 1151 1150 - if (!state->crtc) 1152 + if (!new_plane_state->crtc) 1151 1153 return 0; 1152 1154 1153 - crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 1155 + crtc_state = drm_atomic_get_crtc_state(state, new_plane_state->crtc); 1154 1156 if (IS_ERR(crtc_state)) 1155 1157 return PTR_ERR(crtc_state); 1156 1158 1157 - return drm_atomic_helper_check_plane_state(state, crtc_state, 1159 + return drm_atomic_helper_check_plane_state(new_plane_state, 1160 + crtc_state, 1158 1161 DRM_PLANE_HELPER_NO_SCALING, 1159 1162 DRM_PLANE_HELPER_NO_SCALING, 1160 1163 false, false); ··· 1165 1162 1166 1163 static void 1167 1164 zynqmp_disp_plane_atomic_disable(struct drm_plane *plane, 1168 - struct drm_plane_state *old_state) 1165 + struct drm_atomic_state *state) 1169 1166 { 1167 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 1168 + plane); 1170 1169 struct zynqmp_disp_layer *layer = plane_to_layer(plane); 1171 1170 1172 1171 if (!old_state->fb) ··· 1179 1174 1180 1175 static void 1181 1176 zynqmp_disp_plane_atomic_update(struct drm_plane *plane, 1182 - struct drm_plane_state *old_state) 1177 + struct drm_atomic_state *state) 1183 1178 { 1179 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); 1180 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); 1184 1181 struct zynqmp_disp_layer *layer = plane_to_layer(plane); 1185 1182 bool format_changed = false; 1186 1183 1187 1184 if (!old_state->fb || 1188 - old_state->fb->format->format != plane->state->fb->format->format) 1185 + old_state->fb->format->format != new_state->fb->format->format) 1189 
1186 format_changed = true; 1190 1187 1191 1188 /* ··· 1199 1192 if (old_state->fb) 1200 1193 zynqmp_disp_layer_disable(layer); 1201 1194 1202 - zynqmp_disp_layer_set_format(layer, plane->state); 1195 + zynqmp_disp_layer_set_format(layer, new_state); 1203 1196 } 1204 1197 1205 - zynqmp_disp_layer_update(layer, plane->state); 1198 + zynqmp_disp_layer_update(layer, new_state); 1206 1199 1207 1200 /* Enable or re-enable the plane is the format has changed. */ 1208 1201 if (format_changed) ··· 1479 1472 zynqmp_disp_crtc_atomic_disable(struct drm_crtc *crtc, 1480 1473 struct drm_atomic_state *state) 1481 1474 { 1482 - struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, 1483 - crtc); 1484 1475 struct zynqmp_disp *disp = crtc_to_disp(crtc); 1485 1476 struct drm_plane_state *old_plane_state; 1486 1477 ··· 1487 1482 * .shutdown() path if the plane is already disabled, skip 1488 1483 * zynqmp_disp_plane_atomic_disable() in that case. 1489 1484 */ 1490 - old_plane_state = drm_atomic_get_old_plane_state(old_crtc_state->state, 1491 - crtc->primary); 1485 + old_plane_state = drm_atomic_get_old_plane_state(state, crtc->primary); 1492 1486 if (old_plane_state) 1493 - zynqmp_disp_plane_atomic_disable(crtc->primary, old_plane_state); 1487 + zynqmp_disp_plane_atomic_disable(crtc->primary, state); 1494 1488 1495 1489 zynqmp_disp_disable(disp); 1496 1490
+29 -20
drivers/gpu/drm/zte/zx_plane.c
··· 46 46 #define FRAC_16_16(mult, div) (((mult) << 16) / (div)) 47 47 48 48 static int zx_vl_plane_atomic_check(struct drm_plane *plane, 49 - struct drm_plane_state *plane_state) 49 + struct drm_atomic_state *state) 50 50 { 51 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, 52 + plane); 51 53 struct drm_framebuffer *fb = plane_state->fb; 52 54 struct drm_crtc *crtc = plane_state->crtc; 53 55 struct drm_crtc_state *crtc_state; ··· 59 57 if (!crtc || WARN_ON(!fb)) 60 58 return 0; 61 59 62 - crtc_state = drm_atomic_get_existing_crtc_state(plane_state->state, 60 + crtc_state = drm_atomic_get_existing_crtc_state(state, 63 61 crtc); 64 62 if (WARN_ON(!crtc_state)) 65 63 return -EINVAL; ··· 181 179 } 182 180 183 181 static void zx_vl_plane_atomic_update(struct drm_plane *plane, 184 - struct drm_plane_state *old_state) 182 + struct drm_atomic_state *state) 185 183 { 186 184 struct zx_plane *zplane = to_zx_plane(plane); 187 - struct drm_plane_state *state = plane->state; 188 - struct drm_framebuffer *fb = state->fb; 189 - struct drm_rect *src = &state->src; 190 - struct drm_rect *dst = &state->dst; 185 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 186 + plane); 187 + struct drm_framebuffer *fb = new_state->fb; 188 + struct drm_rect *src = &new_state->src; 189 + struct drm_rect *dst = &new_state->dst; 191 190 struct drm_gem_cma_object *cma_obj; 192 191 void __iomem *layer = zplane->layer; 193 192 void __iomem *hbsc = zplane->hbsc; ··· 260 257 } 261 258 262 259 static void zx_plane_atomic_disable(struct drm_plane *plane, 263 - struct drm_plane_state *old_state) 260 + struct drm_atomic_state *state) 264 261 { 262 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 263 + plane); 265 264 struct zx_plane *zplane = to_zx_plane(plane); 266 265 void __iomem *hbsc = zplane->hbsc; 267 266 ··· 280 275 }; 281 276 282 277 static int zx_gl_plane_atomic_check(struct drm_plane *plane, 283 - struct 
drm_plane_state *plane_state) 278 + struct drm_atomic_state *state) 284 279 { 280 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, 281 + plane); 285 282 struct drm_framebuffer *fb = plane_state->fb; 286 283 struct drm_crtc *crtc = plane_state->crtc; 287 284 struct drm_crtc_state *crtc_state; ··· 291 284 if (!crtc || WARN_ON(!fb)) 292 285 return 0; 293 286 294 - crtc_state = drm_atomic_get_existing_crtc_state(plane_state->state, 287 + crtc_state = drm_atomic_get_existing_crtc_state(state, 295 288 crtc); 296 289 if (WARN_ON(!crtc_state)) 297 290 return -EINVAL; ··· 354 347 } 355 348 356 349 static void zx_gl_plane_atomic_update(struct drm_plane *plane, 357 - struct drm_plane_state *old_state) 350 + struct drm_atomic_state *state) 358 351 { 352 + struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 353 + plane); 359 354 struct zx_plane *zplane = to_zx_plane(plane); 360 - struct drm_framebuffer *fb = plane->state->fb; 355 + struct drm_framebuffer *fb = new_state->fb; 361 356 struct drm_gem_cma_object *cma_obj; 362 357 void __iomem *layer = zplane->layer; 363 358 void __iomem *csc = zplane->csc; ··· 378 369 format = fb->format->format; 379 370 stride = fb->pitches[0]; 380 371 381 - src_x = plane->state->src_x >> 16; 382 - src_y = plane->state->src_y >> 16; 383 - src_w = plane->state->src_w >> 16; 384 - src_h = plane->state->src_h >> 16; 372 + src_x = new_state->src_x >> 16; 373 + src_y = new_state->src_y >> 16; 374 + src_w = new_state->src_w >> 16; 375 + src_h = new_state->src_h >> 16; 385 376 386 - dst_x = plane->state->crtc_x; 387 - dst_y = plane->state->crtc_y; 388 - dst_w = plane->state->crtc_w; 389 - dst_h = plane->state->crtc_h; 377 + dst_x = new_state->crtc_x; 378 + dst_y = new_state->crtc_y; 379 + dst_w = new_state->crtc_w; 380 + dst_h = new_state->crtc_h; 390 381 391 382 bpp = fb->format->cpp[0]; 392 383
+21 -64
drivers/media/v4l2-core/v4l2-ioctl.c
··· 265 265 { 266 266 const struct v4l2_fmtdesc *p = arg; 267 267 268 - pr_cont("index=%u, type=%s, flags=0x%x, pixelformat=%c%c%c%c, mbus_code=0x%04x, description='%.*s'\n", 268 + pr_cont("index=%u, type=%s, flags=0x%x, pixelformat=%p4cc, mbus_code=0x%04x, description='%.*s'\n", 269 269 p->index, prt_names(p->type, v4l2_type_names), 270 - p->flags, (p->pixelformat & 0xff), 271 - (p->pixelformat >> 8) & 0xff, 272 - (p->pixelformat >> 16) & 0xff, 273 - (p->pixelformat >> 24) & 0xff, 274 - p->mbus_code, 270 + p->flags, &p->pixelformat, p->mbus_code, 275 271 (int)sizeof(p->description), p->description); 276 272 } 277 273 ··· 289 293 case V4L2_BUF_TYPE_VIDEO_CAPTURE: 290 294 case V4L2_BUF_TYPE_VIDEO_OUTPUT: 291 295 pix = &p->fmt.pix; 292 - pr_cont(", width=%u, height=%u, pixelformat=%c%c%c%c, field=%s, bytesperline=%u, sizeimage=%u, colorspace=%d, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n", 293 - pix->width, pix->height, 294 - (pix->pixelformat & 0xff), 295 - (pix->pixelformat >> 8) & 0xff, 296 - (pix->pixelformat >> 16) & 0xff, 297 - (pix->pixelformat >> 24) & 0xff, 296 + pr_cont(", width=%u, height=%u, pixelformat=%p4cc, field=%s, bytesperline=%u, sizeimage=%u, colorspace=%d, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n", 297 + pix->width, pix->height, &pix->pixelformat, 298 298 prt_names(pix->field, v4l2_field_names), 299 299 pix->bytesperline, pix->sizeimage, 300 300 pix->colorspace, pix->flags, pix->ycbcr_enc, ··· 299 307 case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE: 300 308 case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE: 301 309 mp = &p->fmt.pix_mp; 302 - pr_cont(", width=%u, height=%u, format=%c%c%c%c, field=%s, colorspace=%d, num_planes=%u, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n", 303 - mp->width, mp->height, 304 - (mp->pixelformat & 0xff), 305 - (mp->pixelformat >> 8) & 0xff, 306 - (mp->pixelformat >> 16) & 0xff, 307 - (mp->pixelformat >> 24) & 0xff, 310 + pr_cont(", width=%u, height=%u, format=%p4cc, field=%s, colorspace=%d, 
num_planes=%u, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n", 311 + mp->width, mp->height, &mp->pixelformat, 308 312 prt_names(mp->field, v4l2_field_names), 309 313 mp->colorspace, mp->num_planes, mp->flags, 310 314 mp->ycbcr_enc, mp->quantization, mp->xfer_func); ··· 325 337 case V4L2_BUF_TYPE_VBI_CAPTURE: 326 338 case V4L2_BUF_TYPE_VBI_OUTPUT: 327 339 vbi = &p->fmt.vbi; 328 - pr_cont(", sampling_rate=%u, offset=%u, samples_per_line=%u, sample_format=%c%c%c%c, start=%u,%u, count=%u,%u\n", 340 + pr_cont(", sampling_rate=%u, offset=%u, samples_per_line=%u, sample_format=%p4cc, start=%u,%u, count=%u,%u\n", 329 341 vbi->sampling_rate, vbi->offset, 330 - vbi->samples_per_line, 331 - (vbi->sample_format & 0xff), 332 - (vbi->sample_format >> 8) & 0xff, 333 - (vbi->sample_format >> 16) & 0xff, 334 - (vbi->sample_format >> 24) & 0xff, 342 + vbi->samples_per_line, &vbi->sample_format, 335 343 vbi->start[0], vbi->start[1], 336 344 vbi->count[0], vbi->count[1]); 337 345 break; ··· 344 360 case V4L2_BUF_TYPE_SDR_CAPTURE: 345 361 case V4L2_BUF_TYPE_SDR_OUTPUT: 346 362 sdr = &p->fmt.sdr; 347 - pr_cont(", pixelformat=%c%c%c%c\n", 348 - (sdr->pixelformat >> 0) & 0xff, 349 - (sdr->pixelformat >> 8) & 0xff, 350 - (sdr->pixelformat >> 16) & 0xff, 351 - (sdr->pixelformat >> 24) & 0xff); 363 + pr_cont(", pixelformat=%p4cc\n", &sdr->pixelformat); 352 364 break; 353 365 case V4L2_BUF_TYPE_META_CAPTURE: 354 366 case V4L2_BUF_TYPE_META_OUTPUT: 355 367 meta = &p->fmt.meta; 356 - pr_cont(", dataformat=%c%c%c%c, buffersize=%u\n", 357 - (meta->dataformat >> 0) & 0xff, 358 - (meta->dataformat >> 8) & 0xff, 359 - (meta->dataformat >> 16) & 0xff, 360 - (meta->dataformat >> 24) & 0xff, 361 - meta->buffersize); 368 + pr_cont(", dataformat=%p4cc, buffersize=%u\n", 369 + &meta->dataformat, meta->buffersize); 362 370 break; 363 371 } 364 372 } ··· 359 383 { 360 384 const struct v4l2_framebuffer *p = arg; 361 385 362 - pr_cont("capability=0x%x, flags=0x%x, base=0x%p, width=%u, height=%u, 
pixelformat=%c%c%c%c, bytesperline=%u, sizeimage=%u, colorspace=%d\n", 363 - p->capability, p->flags, p->base, 364 - p->fmt.width, p->fmt.height, 365 - (p->fmt.pixelformat & 0xff), 366 - (p->fmt.pixelformat >> 8) & 0xff, 367 - (p->fmt.pixelformat >> 16) & 0xff, 368 - (p->fmt.pixelformat >> 24) & 0xff, 369 - p->fmt.bytesperline, p->fmt.sizeimage, 370 - p->fmt.colorspace); 386 + pr_cont("capability=0x%x, flags=0x%x, base=0x%p, width=%u, height=%u, pixelformat=%p4cc, bytesperline=%u, sizeimage=%u, colorspace=%d\n", 387 + p->capability, p->flags, p->base, p->fmt.width, p->fmt.height, 388 + &p->fmt.pixelformat, p->fmt.bytesperline, p->fmt.sizeimage, 389 + p->fmt.colorspace); 371 390 } 372 391 373 392 static void v4l_print_buftype(const void *arg, bool write_only) ··· 732 761 { 733 762 const struct v4l2_frmsizeenum *p = arg; 734 763 735 - pr_cont("index=%u, pixelformat=%c%c%c%c, type=%u", 736 - p->index, 737 - (p->pixel_format & 0xff), 738 - (p->pixel_format >> 8) & 0xff, 739 - (p->pixel_format >> 16) & 0xff, 740 - (p->pixel_format >> 24) & 0xff, 741 - p->type); 764 + pr_cont("index=%u, pixelformat=%p4cc, type=%u", 765 + p->index, &p->pixel_format, p->type); 742 766 switch (p->type) { 743 767 case V4L2_FRMSIZE_TYPE_DISCRETE: 744 768 pr_cont(", wxh=%ux%u\n", ··· 759 793 { 760 794 const struct v4l2_frmivalenum *p = arg; 761 795 762 - pr_cont("index=%u, pixelformat=%c%c%c%c, wxh=%ux%u, type=%u", 763 - p->index, 764 - (p->pixel_format & 0xff), 765 - (p->pixel_format >> 8) & 0xff, 766 - (p->pixel_format >> 16) & 0xff, 767 - (p->pixel_format >> 24) & 0xff, 768 - p->width, p->height, p->type); 796 + pr_cont("index=%u, pixelformat=%p4cc, wxh=%ux%u, type=%u", 797 + p->index, &p->pixel_format, p->width, p->height, p->type); 769 798 switch (p->type) { 770 799 case V4L2_FRMIVAL_TYPE_DISCRETE: 771 800 pr_cont(", fps=%d/%d\n", ··· 1420 1459 return; 1421 1460 WARN(1, "Unknown pixelformat 0x%08x\n", fmt->pixelformat); 1422 1461 flags = 0; 1423 - snprintf(fmt->description, sz, 
"%c%c%c%c%s", 1424 - (char)(fmt->pixelformat & 0x7f), 1425 - (char)((fmt->pixelformat >> 8) & 0x7f), 1426 - (char)((fmt->pixelformat >> 16) & 0x7f), 1427 - (char)((fmt->pixelformat >> 24) & 0x7f), 1428 - (fmt->pixelformat & (1UL << 31)) ? "-BE" : ""); 1462 + snprintf(fmt->description, sz, "%p4cc", 1463 + &fmt->pixelformat); 1429 1464 break; 1430 1465 } 1431 1466 }
+2 -15
drivers/video/fbdev/amba-clcd.c
··· 35 35 /* This is limited to 16 characters when displayed by X startup */ 36 36 static const char *clcd_name = "CLCD FB"; 37 37 38 - /* 39 - * Unfortunately, the enable/disable functions may be called either from 40 - * process or IRQ context, and we _need_ to delay. This is _not_ good. 41 - */ 42 - static inline void clcdfb_sleep(unsigned int ms) 43 - { 44 - if (in_atomic()) { 45 - mdelay(ms); 46 - } else { 47 - msleep(ms); 48 - } 49 - } 50 - 51 38 static inline void clcdfb_set_start(struct clcd_fb *fb) 52 39 { 53 40 unsigned long ustart = fb->fb.fix.smem_start; ··· 64 77 val &= ~CNTL_LCDPWR; 65 78 writel(val, fb->regs + fb->off_cntl); 66 79 67 - clcdfb_sleep(20); 80 + msleep(20); 68 81 } 69 82 if (val & CNTL_LCDEN) { 70 83 val &= ~CNTL_LCDEN; ··· 96 109 cntl |= CNTL_LCDEN; 97 110 writel(cntl, fb->regs + fb->off_cntl); 98 111 99 - clcdfb_sleep(20); 112 + msleep(20); 100 113 101 114 /* 102 115 * and now apply power.
+3
drivers/video/fbdev/efifb.c
··· 16 16 #include <linux/platform_device.h> 17 17 #include <linux/printk.h> 18 18 #include <linux/screen_info.h> 19 + #include <linux/pm_runtime.h> 19 20 #include <video/vga.h> 20 21 #include <asm/efi.h> 21 22 #include <drm/drm_utils.h> /* For drm_get_panel_orientation_quirk */ ··· 575 574 goto err_fb_dealoc; 576 575 } 577 576 fb_info(info, "%s frame buffer device\n", info->fix.id); 577 + pm_runtime_get_sync(&efifb_pci_dev->dev); 578 578 return 0; 579 579 580 580 err_fb_dealoc: ··· 602 600 unregister_framebuffer(info); 603 601 sysfs_remove_groups(&pdev->dev.kobj, efifb_groups); 604 602 framebuffer_release(info); 603 + pm_runtime_put(&efifb_pci_dev->dev); 605 604 606 605 return 0; 607 606 }
+28 -14
drivers/video/fbdev/omap/hwa742.c
··· 100 100 struct hwa742_request req_pool[REQ_POOL_SIZE]; 101 101 struct list_head pending_req_list; 102 102 struct list_head free_req_list; 103 + 104 + /* 105 + * @req_lock: protect request slots pool and its tracking lists 106 + * @req_sema: counter; slot allocators from task contexts must 107 + * push it down before acquiring a slot. This 108 + * guarantees that atomic contexts will always have 109 + * a minimum of IRQ_REQ_POOL_SIZE slots available. 110 + */ 103 111 struct semaphore req_sema; 104 112 spinlock_t req_lock; 105 113 ··· 232 224 hwa742_write_reg(HWA742_NDP_CTRL, b); 233 225 } 234 226 235 - static inline struct hwa742_request *alloc_req(void) 227 + static inline struct hwa742_request *alloc_req(bool can_sleep) 236 228 { 237 229 unsigned long flags; 238 230 struct hwa742_request *req; 239 231 int req_flags = 0; 240 232 241 - if (!in_interrupt()) 233 + if (can_sleep) 242 234 down(&hwa742.req_sema); 243 235 else 244 236 req_flags = REQ_FROM_IRQ_POOL; ··· 407 399 hwa742.int_ctrl->enable_plane(OMAPFB_PLANE_GFX, 0); 408 400 } 409 401 410 - #define ADD_PREQ(_x, _y, _w, _h) do { \ 411 - req = alloc_req(); \ 402 + #define ADD_PREQ(_x, _y, _w, _h, can_sleep) do {\ 403 + req = alloc_req(can_sleep); \ 412 404 req->handler = send_frame_handler; \ 413 405 req->complete = send_frame_complete; \ 414 406 req->par.update.x = _x; \ ··· 421 413 } while(0) 422 414 423 415 static void create_req_list(struct omapfb_update_window *win, 424 - struct list_head *req_head) 416 + struct list_head *req_head, 417 + bool can_sleep) 425 418 { 426 419 struct hwa742_request *req; 427 420 int x = win->x; ··· 436 427 color_mode = win->format & OMAPFB_FORMAT_MASK; 437 428 438 429 if (x & 1) { 439 - ADD_PREQ(x, y, 1, height); 430 + ADD_PREQ(x, y, 1, height, can_sleep); 440 431 width--; 441 432 x++; 442 433 flags &= ~OMAPFB_FORMAT_FLAG_TEARSYNC; ··· 448 439 449 440 if (xspan * height * 2 > hwa742.max_transmit_size) { 450 441 yspan = hwa742.max_transmit_size / (xspan * 2); 451 - ADD_PREQ(x, 
ystart, xspan, yspan); 442 + ADD_PREQ(x, ystart, xspan, yspan, can_sleep); 452 443 ystart += yspan; 453 444 yspan = height - yspan; 454 445 flags &= ~OMAPFB_FORMAT_FLAG_TEARSYNC; 455 446 } 456 447 457 - ADD_PREQ(x, ystart, xspan, yspan); 448 + ADD_PREQ(x, ystart, xspan, yspan, can_sleep); 458 449 x += xspan; 459 450 width -= xspan; 460 451 flags &= ~OMAPFB_FORMAT_FLAG_TEARSYNC; 461 452 } 462 453 if (width) 463 - ADD_PREQ(x, y, 1, height); 454 + ADD_PREQ(x, y, 1, height, can_sleep); 464 455 } 465 456 466 457 static void auto_update_complete(void *data) ··· 470 461 jiffies + HWA742_AUTO_UPDATE_TIME); 471 462 } 472 463 473 - static void hwa742_update_window_auto(struct timer_list *unused) 464 + static void __hwa742_update_window_auto(bool can_sleep) 474 465 { 475 466 LIST_HEAD(req_list); 476 467 struct hwa742_request *last; 477 468 478 - create_req_list(&hwa742.auto_update_window, &req_list); 469 + create_req_list(&hwa742.auto_update_window, &req_list, can_sleep); 479 470 last = list_entry(req_list.prev, struct hwa742_request, entry); 480 471 481 472 last->complete = auto_update_complete; 482 473 last->complete_data = NULL; 483 474 484 475 submit_req_list(&req_list); 476 + } 477 + 478 + static void hwa742_update_window_auto(struct timer_list *unused) 479 + { 480 + __hwa742_update_window_auto(false); 485 481 } 486 482 487 483 int hwa742_update_window_async(struct fb_info *fbi, ··· 511 497 goto out; 512 498 } 513 499 514 - create_req_list(win, &req_list); 500 + create_req_list(win, &req_list, true); 515 501 last = list_entry(req_list.prev, struct hwa742_request, entry); 516 502 517 503 last->complete = complete_callback; ··· 558 544 struct hwa742_request *req; 559 545 struct completion comp; 560 546 561 - req = alloc_req(); 547 + req = alloc_req(true); 562 548 563 549 req->handler = sync_handler; 564 550 req->complete = NULL; ··· 613 599 omapfb_notify_clients(hwa742.fbdev, OMAPFB_EVENT_READY); 614 600 break; 615 601 case OMAPFB_AUTO_UPDATE: 616 - 
hwa742_update_window_auto(0); 602 + __hwa742_update_window_auto(true); 617 603 break; 618 604 case OMAPFB_UPDATE_DISABLED: 619 605 break;
-2
drivers/video/fbdev/omap2/omapfb/dss/dsi.c
··· 2376 2376 2377 2377 WARN_ON(!dsi_bus_is_locked(dsidev)); 2378 2378 2379 - WARN_ON(in_interrupt()); 2380 - 2381 2379 if (!dsi_vc_is_enabled(dsidev, channel)) 2382 2380 return 0; 2383 2381
+2 -3
drivers/video/fbdev/simplefb.c
··· 477 477 simplefb_clocks_enable(par, pdev); 478 478 simplefb_regulators_enable(par, pdev); 479 479 480 - dev_info(&pdev->dev, "framebuffer at 0x%lx, 0x%x bytes, mapped to 0x%p\n", 481 - info->fix.smem_start, info->fix.smem_len, 482 - info->screen_base); 480 + dev_info(&pdev->dev, "framebuffer at 0x%lx, 0x%x bytes\n", 481 + info->fix.smem_start, info->fix.smem_len); 483 482 dev_info(&pdev->dev, "format=%s, mode=%dx%dx%d, linelength=%d\n", 484 483 params.format->name, 485 484 info->var.xres, info->var.yres,
+4
include/drm/drm_atomic.h
··· 66 66 * 67 67 * For an implementation of how to use this look at 68 68 * drm_atomic_helper_setup_commit() from the atomic helper library. 69 + * 70 + * See also drm_crtc_commit_wait(). 69 71 */ 70 72 struct drm_crtc_commit { 71 73 /** ··· 437 435 { 438 436 kref_put(&commit->ref, __drm_crtc_commit_free); 439 437 } 438 + 439 + int drm_crtc_commit_wait(struct drm_crtc_commit *commit); 440 440 441 441 struct drm_atomic_state * __must_check 442 442 drm_atomic_state_alloc(struct drm_device *dev);
+113
include/drm/drm_gem_atomic_helper.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + 3 + #ifndef __DRM_GEM_ATOMIC_HELPER_H__ 4 + #define __DRM_GEM_ATOMIC_HELPER_H__ 5 + 6 + #include <linux/dma-buf-map.h> 7 + 8 + #include <drm/drm_plane.h> 9 + 10 + struct drm_simple_display_pipe; 11 + 12 + /* 13 + * Plane Helpers 14 + */ 15 + 16 + int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state); 17 + int drm_gem_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe, 18 + struct drm_plane_state *plane_state); 19 + 20 + /* 21 + * Helpers for planes with shadow buffers 22 + */ 23 + 24 + /** 25 + * struct drm_shadow_plane_state - plane state for planes with shadow buffers 26 + * 27 + * For planes that use a shadow buffer, struct drm_shadow_plane_state 28 + * provides the regular plane state plus mappings of the shadow buffer 29 + * into kernel address space. 30 + */ 31 + struct drm_shadow_plane_state { 32 + /** @base: plane state */ 33 + struct drm_plane_state base; 34 + 35 + /* Transitional state - do not export or duplicate */ 36 + 37 + /** 38 + * @map: Mappings of the plane's framebuffer BOs in to kernel address space 39 + * 40 + * The memory mappings stored in map should be established in the plane's 41 + * prepare_fb callback and removed in the cleanup_fb callback. 
42 + */ 43 + struct dma_buf_map map[4]; 44 + }; 45 + 46 + /** 47 + * to_drm_shadow_plane_state - upcasts from struct drm_plane_state 48 + * @state: the plane state 49 + */ 50 + static inline struct drm_shadow_plane_state * 51 + to_drm_shadow_plane_state(struct drm_plane_state *state) 52 + { 53 + return container_of(state, struct drm_shadow_plane_state, base); 54 + } 55 + 56 + void drm_gem_reset_shadow_plane(struct drm_plane *plane); 57 + struct drm_plane_state *drm_gem_duplicate_shadow_plane_state(struct drm_plane *plane); 58 + void drm_gem_destroy_shadow_plane_state(struct drm_plane *plane, 59 + struct drm_plane_state *plane_state); 60 + 61 + /** 62 + * DRM_GEM_SHADOW_PLANE_FUNCS - 63 + * Initializes struct drm_plane_funcs for shadow-buffered planes 64 + * 65 + * Drivers may use GEM BOs as shadow buffers over the framebuffer memory. This 66 + * macro initializes struct drm_plane_funcs to use the rsp helper functions. 67 + */ 68 + #define DRM_GEM_SHADOW_PLANE_FUNCS \ 69 + .reset = drm_gem_reset_shadow_plane, \ 70 + .atomic_duplicate_state = drm_gem_duplicate_shadow_plane_state, \ 71 + .atomic_destroy_state = drm_gem_destroy_shadow_plane_state 72 + 73 + int drm_gem_prepare_shadow_fb(struct drm_plane *plane, struct drm_plane_state *plane_state); 74 + void drm_gem_cleanup_shadow_fb(struct drm_plane *plane, struct drm_plane_state *plane_state); 75 + 76 + /** 77 + * DRM_GEM_SHADOW_PLANE_HELPER_FUNCS - 78 + * Initializes struct drm_plane_helper_funcs for shadow-buffered planes 79 + * 80 + * Drivers may use GEM BOs as shadow buffers over the framebuffer memory. This 81 + * macro initializes struct drm_plane_helper_funcs to use the rsp helper 82 + * functions. 
83 + */ 84 + #define DRM_GEM_SHADOW_PLANE_HELPER_FUNCS \ 85 + .prepare_fb = drm_gem_prepare_shadow_fb, \ 86 + .cleanup_fb = drm_gem_cleanup_shadow_fb 87 + 88 + int drm_gem_simple_kms_prepare_shadow_fb(struct drm_simple_display_pipe *pipe, 89 + struct drm_plane_state *plane_state); 90 + void drm_gem_simple_kms_cleanup_shadow_fb(struct drm_simple_display_pipe *pipe, 91 + struct drm_plane_state *plane_state); 92 + void drm_gem_simple_kms_reset_shadow_plane(struct drm_simple_display_pipe *pipe); 93 + struct drm_plane_state * 94 + drm_gem_simple_kms_duplicate_shadow_plane_state(struct drm_simple_display_pipe *pipe); 95 + void drm_gem_simple_kms_destroy_shadow_plane_state(struct drm_simple_display_pipe *pipe, 96 + struct drm_plane_state *plane_state); 97 + 98 + /** 99 + * DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS - 100 + * Initializes struct drm_simple_display_pipe_funcs for shadow-buffered planes 101 + * 102 + * Drivers may use GEM BOs as shadow buffers over the framebuffer memory. This 103 + * macro initializes struct drm_simple_display_pipe_funcs to use the rsp helper 104 + * functions. 105 + */ 106 + #define DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS \ 107 + .prepare_fb = drm_gem_simple_kms_prepare_shadow_fb, \ 108 + .cleanup_fb = drm_gem_simple_kms_cleanup_shadow_fb, \ 109 + .reset_plane = drm_gem_simple_kms_reset_shadow_plane, \ 110 + .duplicate_plane_state = drm_gem_simple_kms_duplicate_shadow_plane_state, \ 111 + .destroy_plane_state = drm_gem_simple_kms_destroy_shadow_plane_state 112 + 113 + #endif /* __DRM_GEM_ATOMIC_HELPER_H__ */
-7
include/drm/drm_gem_framebuffer_helper.h
··· 9 9 struct drm_framebuffer_funcs; 10 10 struct drm_gem_object; 11 11 struct drm_mode_fb_cmd2; 12 - struct drm_plane; 13 - struct drm_plane_state; 14 - struct drm_simple_display_pipe; 15 12 16 13 #define AFBC_VENDOR_AND_TYPE_MASK GENMASK_ULL(63, 52) 17 14 ··· 41 44 const struct drm_mode_fb_cmd2 *mode_cmd, 42 45 struct drm_afbc_framebuffer *afbc_fb); 43 46 44 - int drm_gem_fb_prepare_fb(struct drm_plane *plane, 45 - struct drm_plane_state *state); 46 - int drm_gem_fb_simple_display_pipe_prepare_fb(struct drm_simple_display_pipe *pipe, 47 - struct drm_plane_state *plane_state); 48 47 #endif
+3 -3
include/drm/drm_gem_vram_helper.h
··· 172 172 uint64_t vram_base; 173 173 size_t vram_size; 174 174 175 - struct ttm_bo_device bdev; 175 + struct ttm_device bdev; 176 176 }; 177 177 178 178 /** 179 179 * drm_vram_mm_of_bdev() - \ 180 - Returns the container of type &struct ttm_bo_device for field bdev. 180 + Returns the container of type &struct ttm_device for field bdev. 181 181 * @bdev: the TTM BO device 182 182 * 183 183 * Returns: 184 184 * The containing instance of &struct drm_vram_mm 185 185 */ 186 186 static inline struct drm_vram_mm *drm_vram_mm_of_bdev( 187 - struct ttm_bo_device *bdev) 187 + struct ttm_device *bdev) 188 188 { 189 189 return container_of(bdev, struct drm_vram_mm, bdev); 190 190 }
+14 -17
include/drm/drm_modeset_helper_vtables.h
··· 1179 1179 * members in the plane structure. 1180 1180 * 1181 1181 * Drivers which always have their buffers pinned should use 1182 - * drm_gem_fb_prepare_fb() for this hook. 1182 + * drm_gem_plane_helper_prepare_fb() for this hook. 1183 1183 * 1184 1184 * The helpers will call @cleanup_fb with matching arguments for every 1185 1185 * successful call to this hook. ··· 1233 1233 * NOTE: 1234 1234 * 1235 1235 * This function is called in the check phase of an atomic update. The 1236 - * driver is not allowed to change anything outside of the free-standing 1237 - * state objects passed-in or assembled in the overall &drm_atomic_state 1238 - * update tracking structure. 1236 + * driver is not allowed to change anything outside of the 1237 + * &drm_atomic_state update tracking structure. 1239 1238 * 1240 1239 * RETURNS: 1241 1240 * ··· 1244 1245 * deadlock. 1245 1246 */ 1246 1247 int (*atomic_check)(struct drm_plane *plane, 1247 - struct drm_plane_state *state); 1248 + struct drm_atomic_state *state); 1248 1249 1249 1250 /** 1250 1251 * @atomic_update: ··· 1262 1263 * transitional plane helpers, but it is optional. 1263 1264 */ 1264 1265 void (*atomic_update)(struct drm_plane *plane, 1265 - struct drm_plane_state *old_state); 1266 + struct drm_atomic_state *state); 1266 1267 /** 1267 1268 * @atomic_disable: 1268 1269 * ··· 1286 1287 * transitional plane helpers, but it is optional. 1287 1288 */ 1288 1289 void (*atomic_disable)(struct drm_plane *plane, 1289 - struct drm_plane_state *old_state); 1290 + struct drm_atomic_state *state); 1290 1291 1291 1292 /** 1292 1293 * @atomic_async_check: 1293 1294 * 1294 - * Drivers should set this function pointer to check if the plane state 1295 - * can be updated in a async fashion. Here async means "not vblank 1296 - * synchronized". 1295 + * Drivers should set this function pointer to check if the plane's 1296 + * atomic state can be updated in a async fashion. Here async means 1297 + * "not vblank synchronized". 
1297 1298 * 1298 1299 * This hook is called by drm_atomic_async_check() to establish if a 1299 1300 * given update can be committed asynchronously, that is, if it can ··· 1305 1306 * can not be applied in asynchronous manner. 1306 1307 */ 1307 1308 int (*atomic_async_check)(struct drm_plane *plane, 1308 - struct drm_plane_state *state); 1309 + struct drm_atomic_state *state); 1309 1310 1310 1311 /** 1311 1312 * @atomic_async_update: ··· 1321 1322 * update won't happen if there is an outstanding commit modifying 1322 1323 * the same plane. 1323 1324 * 1324 - * Note that unlike &drm_plane_helper_funcs.atomic_update this hook 1325 - * takes the new &drm_plane_state as parameter. When doing async_update 1326 - * drivers shouldn't replace the &drm_plane_state but update the 1327 - * current one with the new plane configurations in the new 1328 - * plane_state. 1325 + * When doing async_update drivers shouldn't replace the 1326 + * &drm_plane_state but update the current one with the new plane 1327 + * configurations in the new plane_state. 1329 1328 * 1330 1329 * Drivers should also swap the framebuffers between current plane 1331 1330 * state (&drm_plane.state) and new_state. ··· 1342 1345 * for deferring if needed, until a common solution is created. 1343 1346 */ 1344 1347 void (*atomic_async_update)(struct drm_plane *plane, 1345 - struct drm_plane_state *new_state); 1348 + struct drm_atomic_state *state); 1346 1349 }; 1347 1350 1348 1351 /**
+15 -10
include/drm/drm_plane.h
··· 79 79 * preserved. 80 80 * 81 81 * Drivers should store any implicit fence in this from their 82 - * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_fb_prepare_fb() 83 - * and drm_gem_fb_simple_display_pipe_prepare_fb() for suitable helpers. 82 + * &drm_plane_helper_funcs.prepare_fb callback. See drm_gem_plane_helper_prepare_fb() 83 + * and drm_gem_simple_display_pipe_prepare_fb() for suitable helpers. 84 84 */ 85 85 struct dma_fence *fence; 86 86 ··· 538 538 * 539 539 * For compatibility with legacy userspace, only overlay planes are made 540 540 * available to userspace by default. Userspace clients may set the 541 - * DRM_CLIENT_CAP_UNIVERSAL_PLANES client capability bit to indicate that they 541 + * &DRM_CLIENT_CAP_UNIVERSAL_PLANES client capability bit to indicate that they 542 542 * wish to receive a universal plane list containing all plane types. See also 543 543 * drm_for_each_legacy_plane(). 544 + * 545 + * In addition to setting each plane's type, drivers need to setup the 546 + * &drm_crtc.primary and optionally &drm_crtc.cursor pointers for legacy 547 + * IOCTLs. See drm_crtc_init_with_planes(). 544 548 * 545 549 * WARNING: The values of this enum is UABI since they're exposed in the "type" 546 550 * property. ··· 561 557 /** 562 558 * @DRM_PLANE_TYPE_PRIMARY: 563 559 * 564 - * Primary planes represent a "main" plane for a CRTC. Primary planes 565 - * are the planes operated upon by CRTC modesetting and flipping 566 - * operations described in the &drm_crtc_funcs.page_flip and 567 - * &drm_crtc_funcs.set_config hooks. 560 + * A primary plane attached to a CRTC is the most likely to be able to 561 + * light up the CRTC when no scaling/cropping is used and the plane 562 + * covers the whole CRTC. 568 563 */ 569 564 DRM_PLANE_TYPE_PRIMARY, 570 565 571 566 /** 572 567 * @DRM_PLANE_TYPE_CURSOR: 573 568 * 574 - * Cursor planes represent a "cursor" plane for a CRTC. 
Cursor planes 575 - * are the planes operated upon by the DRM_IOCTL_MODE_CURSOR and 576 - * DRM_IOCTL_MODE_CURSOR2 IOCTLs. 569 + * A cursor plane attached to a CRTC is more likely to be able to be 570 + * enabled when no scaling/cropping is used and the framebuffer has the 571 + * size indicated by &drm_mode_config.cursor_width and 572 + * &drm_mode_config.cursor_height. Additionally, if the driver doesn't 573 + * support modifiers, the framebuffer should have a linear layout. 577 574 */ 578 575 DRM_PLANE_TYPE_CURSOR, 579 576 };
+28 -1
include/drm/drm_simple_kms_helper.h
··· 117 117 * more details. 118 118 * 119 119 * Drivers which always have their buffers pinned should use 120 - * drm_gem_fb_simple_display_pipe_prepare_fb() for this hook. 120 + * drm_gem_simple_display_pipe_prepare_fb() for this hook. 121 121 */ 122 122 int (*prepare_fb)(struct drm_simple_display_pipe *pipe, 123 123 struct drm_plane_state *plane_state); ··· 149 149 * more details. 150 150 */ 151 151 void (*disable_vblank)(struct drm_simple_display_pipe *pipe); 152 + 153 + /** 154 + * @reset_plane: 155 + * 156 + * Optional, called by &drm_plane_funcs.reset. Please read the 157 + * documentation for the &drm_plane_funcs.reset hook for more details. 158 + */ 159 + void (*reset_plane)(struct drm_simple_display_pipe *pipe); 160 + 161 + /** 162 + * @duplicate_plane_state: 163 + * 164 + * Optional, called by &drm_plane_funcs.atomic_duplicate_state. Please 165 + * read the documentation for the &drm_plane_funcs.atomic_duplicate_state 166 + * hook for more details. 167 + */ 168 + struct drm_plane_state * (*duplicate_plane_state)(struct drm_simple_display_pipe *pipe); 169 + 170 + /** 171 + * @destroy_plane_state: 172 + * 173 + * Optional, called by &drm_plane_funcs.atomic_destroy_state. Please 174 + * read the documentation for the &drm_plane_funcs.atomic_destroy_state 175 + * hook for more details. 176 + */ 177 + void (*destroy_plane_state)(struct drm_simple_display_pipe *pipe, 178 + struct drm_plane_state *plane_state); 152 179 }; 153 180 154 181 /**
-1
include/drm/drm_vblank.h
··· 247 247 void drm_crtc_vblank_reset(struct drm_crtc *crtc); 248 248 void drm_crtc_vblank_on(struct drm_crtc *crtc); 249 249 u64 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc); 250 - void drm_vblank_restore(struct drm_device *dev, unsigned int pipe); 251 250 void drm_crtc_vblank_restore(struct drm_crtc *crtc); 252 251 253 252 void drm_calc_timestamping_constants(struct drm_crtc *crtc,
+18 -5
include/drm/gpu_scheduler.h
··· 206 206 return s_job && atomic_inc_return(&s_job->karma) > threshold; 207 207 } 208 208 209 + enum drm_gpu_sched_stat { 210 + DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */ 211 + DRM_GPU_SCHED_STAT_NOMINAL, 212 + DRM_GPU_SCHED_STAT_ENODEV, 213 + }; 214 + 209 215 /** 210 216 * struct drm_sched_backend_ops 211 217 * ··· 236 230 struct dma_fence *(*run_job)(struct drm_sched_job *sched_job); 237 231 238 232 /** 239 - * @timedout_job: Called when a job has taken too long to execute, 240 - * to trigger GPU recovery. 233 + * @timedout_job: Called when a job has taken too long to execute, 234 + * to trigger GPU recovery. 235 + * 236 + * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal, 237 + * and the underlying driver has started or completed recovery. 238 + * 239 + * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer 240 + * available, i.e. has been unplugged. 241 241 */ 242 - void (*timedout_job)(struct drm_sched_job *sched_job); 242 + enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job); 243 243 244 244 /** 245 245 * @free_job: Called once the job's finished fence has been signaled ··· 297 285 struct list_head pending_list; 298 286 spinlock_t job_list_lock; 299 287 int hang_limit; 300 - atomic_t score; 288 + atomic_t *score; 289 + atomic_t _score; 301 290 bool ready; 302 291 bool free_guilty; 303 292 }; ··· 306 293 int drm_sched_init(struct drm_gpu_scheduler *sched, 307 294 const struct drm_sched_backend_ops *ops, 308 295 uint32_t hw_submission, unsigned hang_limit, long timeout, 309 - const char *name); 296 + atomic_t *score, const char *name); 310 297 311 298 void drm_sched_fini(struct drm_gpu_scheduler *sched); 312 299 int drm_sched_job_init(struct drm_sched_job *job,
+20 -28
include/drm/ttm/ttm_bo_api.h
··· 44 44 45 45 #include "ttm_resource.h" 46 46 47 - struct ttm_bo_global; 47 + struct ttm_global; 48 48 49 - struct ttm_bo_device; 49 + struct ttm_device; 50 50 51 51 struct dma_buf_map; 52 52 ··· 88 88 * @type: The bo type. 89 89 * @destroy: Destruction function. If NULL, kfree is used. 90 90 * @num_pages: Actual number of pages. 91 - * @acc_size: Accounted size for this object. 92 91 * @kref: Reference count of this buffer object. When this refcount reaches 93 92 * zero, the object is destroyed or put on the delayed delete list. 94 93 * @mem: structure describing current placement. ··· 121 122 * Members constant at init. 122 123 */ 123 124 124 - struct ttm_bo_device *bdev; 125 + struct ttm_device *bdev; 125 126 enum ttm_bo_type type; 126 127 void (*destroy) (struct ttm_buffer_object *); 127 - size_t acc_size; 128 128 129 129 /** 130 130 * Members not needing protection. ··· 311 313 * @bulk: optional bulk move structure to remember BO positions 312 314 * 313 315 * Move this BO to the tail of all lru lists used to lookup and reserve an 314 - * object. This function must be called with struct ttm_bo_global::lru_lock 316 + * object. This function must be called with struct ttm_global::lru_lock 315 317 * held, and is used to make a BO less likely to be considered for eviction. 316 318 */ 317 319 void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo, ··· 324 326 * @bulk: bulk move structure 325 327 * 326 328 * Bulk move BOs to the LRU tail, only valid to use when driver makes sure that 327 - * BO order never changes. Should be called with ttm_bo_global::lru_lock held. 329 + * BO order never changes. Should be called with ttm_global::lru_lock held. 
328 330 */ 329 331 void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk); 330 332 ··· 335 337 * Returns 336 338 * True if the workqueue was queued at the time 337 339 */ 338 - int ttm_bo_lock_delayed_workqueue(struct ttm_bo_device *bdev); 340 + int ttm_bo_lock_delayed_workqueue(struct ttm_device *bdev); 339 341 340 342 /** 341 343 * ttm_bo_unlock_delayed_workqueue 342 344 * 343 345 * Allows the delayed workqueue to run. 344 346 */ 345 - void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, int resched); 347 + void ttm_bo_unlock_delayed_workqueue(struct ttm_device *bdev, int resched); 346 348 347 349 /** 348 350 * ttm_bo_eviction_valuable ··· 355 357 bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, 356 358 const struct ttm_place *place); 357 359 358 - size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, 359 - unsigned long bo_size, 360 - unsigned struct_size); 361 - 362 360 /** 363 361 * ttm_bo_init_reserved 364 362 * 365 - * @bdev: Pointer to a ttm_bo_device struct. 363 + * @bdev: Pointer to a ttm_device struct. 366 364 * @bo: Pointer to a ttm_buffer_object to be initialized. 367 365 * @size: Requested size of buffer object. 368 366 * @type: Requested type of buffer object. 369 367 * @flags: Initial placement flags. 370 368 * @page_alignment: Data alignment in pages. 371 369 * @ctx: TTM operation context for memory allocation. 372 - * @acc_size: Accounted size for this object. 373 370 * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. 374 371 * @destroy: Destroy function. Use NULL for kfree(). 375 372 * ··· 389 396 * -ERESTARTSYS: Interrupted by signal while sleeping waiting for resources. 
390 397 */ 391 398 392 - int ttm_bo_init_reserved(struct ttm_bo_device *bdev, 399 + int ttm_bo_init_reserved(struct ttm_device *bdev, 393 400 struct ttm_buffer_object *bo, 394 401 size_t size, enum ttm_bo_type type, 395 402 struct ttm_placement *placement, 396 403 uint32_t page_alignment, 397 404 struct ttm_operation_ctx *ctx, 398 - size_t acc_size, struct sg_table *sg, 399 - struct dma_resv *resv, 405 + struct sg_table *sg, struct dma_resv *resv, 400 406 void (*destroy) (struct ttm_buffer_object *)); 401 407 402 408 /** 403 409 * ttm_bo_init 404 410 * 405 - * @bdev: Pointer to a ttm_bo_device struct. 411 + * @bdev: Pointer to a ttm_device struct. 406 412 * @bo: Pointer to a ttm_buffer_object to be initialized. 407 413 * @size: Requested size of buffer object. 408 414 * @type: Requested type of buffer object. ··· 413 421 * holds a pointer to a persistent shmem object. Typically, this would 414 422 * point to the shmem object backing a GEM object if TTM is used to back a 415 423 * GEM user interface. 416 - * @acc_size: Accounted size for this object. 417 424 * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. 418 425 * @destroy: Destroy function. Use NULL for kfree(). 419 426 * ··· 434 443 * -EINVAL: Invalid placement flags. 435 444 * -ERESTARTSYS: Interrupted by signal while sleeping waiting for resources. 436 445 */ 437 - int ttm_bo_init(struct ttm_bo_device *bdev, struct ttm_buffer_object *bo, 446 + int ttm_bo_init(struct ttm_device *bdev, struct ttm_buffer_object *bo, 438 447 size_t size, enum ttm_bo_type type, 439 448 struct ttm_placement *placement, 440 - uint32_t page_alignment, bool interrubtible, size_t acc_size, 449 + uint32_t page_alignment, bool interrubtible, 441 450 struct sg_table *sg, struct dma_resv *resv, 442 451 void (*destroy) (struct ttm_buffer_object *)); 443 452 ··· 528 537 * 529 538 * @filp: filp as input from the mmap method. 530 539 * @vma: vma as input from the mmap method. 
531 - * @bdev: Pointer to the ttm_bo_device with the address space manager. 540 + * @bdev: Pointer to the ttm_device with the address space manager. 532 541 * 533 542 * This function is intended to be called by the device mmap method. 534 543 * if the device address space is to be backed by the bo manager. 535 544 */ 536 545 int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma, 537 - struct ttm_bo_device *bdev); 546 + struct ttm_device *bdev); 538 547 539 548 /** 540 549 * ttm_bo_io 541 550 * 542 - * @bdev: Pointer to the struct ttm_bo_device. 551 + * @bdev: Pointer to the struct ttm_device. 543 552 * @filp: Pointer to the struct file attempting to read / write. 544 553 * @wbuf: User-space pointer to address of buffer to write. NULL on read. 545 554 * @rbuf: User-space pointer to address of buffer to read into. ··· 556 565 * the function may return -ERESTARTSYS if 557 566 * interrupted by a signal. 558 567 */ 559 - ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp, 568 + ssize_t ttm_bo_io(struct ttm_device *bdev, struct file *filp, 560 569 const char __user *wbuf, char __user *rbuf, 561 570 size_t count, loff_t *f_pos, bool write); 562 571 563 - int ttm_bo_swapout(struct ttm_operation_ctx *ctx); 572 + int ttm_bo_swapout(struct ttm_operation_ctx *ctx, gfp_t gfp_flags); 564 573 565 574 /** 566 575 * ttm_bo_uses_embedded_gem_object - check if the given bo uses the ··· 608 617 --bo->pin_count; 609 618 } 610 619 611 - int ttm_mem_evict_first(struct ttm_bo_device *bdev, 620 + int ttm_mem_evict_first(struct ttm_device *bdev, 612 621 struct ttm_resource_manager *man, 613 622 const struct ttm_place *place, 614 623 struct ttm_operation_ctx *ctx, ··· 633 642 634 643 int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr, 635 644 void *buf, int len, int write); 645 + bool ttm_bo_delayed_delete(struct ttm_device *bdev, bool remove_all); 636 646 637 647 #endif
+8 -321
include/drm/ttm/ttm_bo_driver.h
··· 37 37 #include <linux/spinlock.h> 38 38 #include <linux/dma-resv.h> 39 39 40 + #include <drm/ttm/ttm_device.h> 41 + 40 42 #include "ttm_bo_api.h" 41 - #include "ttm_memory.h" 42 43 #include "ttm_placement.h" 43 44 #include "ttm_tt.h" 44 45 #include "ttm_pool.h" 45 - 46 - /** 47 - * struct ttm_bo_driver 48 - * 49 - * @create_ttm_backend_entry: Callback to create a struct ttm_backend. 50 - * @evict_flags: Callback to obtain placement flags when a buffer is evicted. 51 - * @move: Callback for a driver to hook in accelerated functions to 52 - * move a buffer. 53 - * If set to NULL, a potentially slow memcpy() move is used. 54 - */ 55 - 56 - struct ttm_bo_driver { 57 - /** 58 - * ttm_tt_create 59 - * 60 - * @bo: The buffer object to create the ttm for. 61 - * @page_flags: Page flags as identified by TTM_PAGE_FLAG_XX flags. 62 - * 63 - * Create a struct ttm_tt to back data with system memory pages. 64 - * No pages are actually allocated. 65 - * Returns: 66 - * NULL: Out of memory. 67 - */ 68 - struct ttm_tt *(*ttm_tt_create)(struct ttm_buffer_object *bo, 69 - uint32_t page_flags); 70 - 71 - /** 72 - * ttm_tt_populate 73 - * 74 - * @ttm: The struct ttm_tt to contain the backing pages. 75 - * 76 - * Allocate all backing pages 77 - * Returns: 78 - * -ENOMEM: Out of memory. 79 - */ 80 - int (*ttm_tt_populate)(struct ttm_bo_device *bdev, 81 - struct ttm_tt *ttm, 82 - struct ttm_operation_ctx *ctx); 83 - 84 - /** 85 - * ttm_tt_unpopulate 86 - * 87 - * @ttm: The struct ttm_tt to contain the backing pages. 88 - * 89 - * Free all backing page 90 - */ 91 - void (*ttm_tt_unpopulate)(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 92 - 93 - /** 94 - * ttm_tt_destroy 95 - * 96 - * @bdev: Pointer to a ttm device 97 - * @ttm: Pointer to a struct ttm_tt. 98 - * 99 - * Destroy the backend. This will be call back from ttm_tt_destroy so 100 - * don't call ttm_tt_destroy from the callback or infinite loop. 
101 - */ 102 - void (*ttm_tt_destroy)(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 103 - 104 - /** 105 - * struct ttm_bo_driver member eviction_valuable 106 - * 107 - * @bo: the buffer object to be evicted 108 - * @place: placement we need room for 109 - * 110 - * Check with the driver if it is valuable to evict a BO to make room 111 - * for a certain placement. 112 - */ 113 - bool (*eviction_valuable)(struct ttm_buffer_object *bo, 114 - const struct ttm_place *place); 115 - /** 116 - * struct ttm_bo_driver member evict_flags: 117 - * 118 - * @bo: the buffer object to be evicted 119 - * 120 - * Return the bo flags for a buffer which is not mapped to the hardware. 121 - * These will be placed in proposed_flags so that when the move is 122 - * finished, they'll end up in bo->mem.flags 123 - * This should not cause multihop evictions, and the core will warn 124 - * if one is proposed. 125 - */ 126 - 127 - void (*evict_flags)(struct ttm_buffer_object *bo, 128 - struct ttm_placement *placement); 129 - 130 - /** 131 - * struct ttm_bo_driver member move: 132 - * 133 - * @bo: the buffer to move 134 - * @evict: whether this motion is evicting the buffer from 135 - * the graphics address space 136 - * @ctx: context for this move with parameters 137 - * @new_mem: the new memory region receiving the buffer 138 - @ @hop: placement for driver directed intermediate hop 139 - * 140 - * Move a buffer between two memory regions. 141 - * Returns errno -EMULTIHOP if driver requests a hop 142 - */ 143 - int (*move)(struct ttm_buffer_object *bo, bool evict, 144 - struct ttm_operation_ctx *ctx, 145 - struct ttm_resource *new_mem, 146 - struct ttm_place *hop); 147 - 148 - /** 149 - * struct ttm_bo_driver_member verify_access 150 - * 151 - * @bo: Pointer to a buffer object. 152 - * @filp: Pointer to a struct file trying to access the object. 153 - * 154 - * Called from the map / write / read methods to verify that the 155 - * caller is permitted to access the buffer object. 
156 - * This member may be set to NULL, which will refuse this kind of 157 - * access for all buffer objects. 158 - * This function should return 0 if access is granted, -EPERM otherwise. 159 - */ 160 - int (*verify_access)(struct ttm_buffer_object *bo, 161 - struct file *filp); 162 - 163 - /** 164 - * Hook to notify driver about a resource delete. 165 - */ 166 - void (*delete_mem_notify)(struct ttm_buffer_object *bo); 167 - 168 - /** 169 - * notify the driver that we're about to swap out this bo 170 - */ 171 - void (*swap_notify)(struct ttm_buffer_object *bo); 172 - 173 - /** 174 - * Driver callback on when mapping io memory (for bo_move_memcpy 175 - * for instance). TTM will take care to call io_mem_free whenever 176 - * the mapping is not use anymore. io_mem_reserve & io_mem_free 177 - * are balanced. 178 - */ 179 - int (*io_mem_reserve)(struct ttm_bo_device *bdev, 180 - struct ttm_resource *mem); 181 - void (*io_mem_free)(struct ttm_bo_device *bdev, 182 - struct ttm_resource *mem); 183 - 184 - /** 185 - * Return the pfn for a given page_offset inside the BO. 186 - * 187 - * @bo: the BO to look up the pfn for 188 - * @page_offset: the offset to look up 189 - */ 190 - unsigned long (*io_mem_pfn)(struct ttm_buffer_object *bo, 191 - unsigned long page_offset); 192 - 193 - /** 194 - * Read/write memory buffers for ptrace access 195 - * 196 - * @bo: the BO to access 197 - * @offset: the offset from the start of the BO 198 - * @buf: pointer to source/destination buffer 199 - * @len: number of bytes to copy 200 - * @write: whether to read (0) from or write (non-0) to BO 201 - * 202 - * If successful, this function should return the number of 203 - * bytes copied, -EIO otherwise. If the number of bytes 204 - * returned is < len, the function may be called again with 205 - * the remainder of the buffer to copy. 
206 - */ 207 - int (*access_memory)(struct ttm_buffer_object *bo, unsigned long offset, 208 - void *buf, int len, int write); 209 - 210 - /** 211 - * struct ttm_bo_driver member del_from_lru_notify 212 - * 213 - * @bo: the buffer object deleted from lru 214 - * 215 - * notify driver that a BO was deleted from LRU. 216 - */ 217 - void (*del_from_lru_notify)(struct ttm_buffer_object *bo); 218 - 219 - /** 220 - * Notify the driver that we're about to release a BO 221 - * 222 - * @bo: BO that is about to be released 223 - * 224 - * Gives the driver a chance to do any cleanup, including 225 - * adding fences that may force a delayed delete 226 - */ 227 - void (*release_notify)(struct ttm_buffer_object *bo); 228 - }; 229 - 230 - /** 231 - * struct ttm_bo_global - Buffer object driver global data. 232 - * 233 - * @dummy_read_page: Pointer to a dummy page used for mapping requests 234 - * of unpopulated pages. 235 - * @shrink: A shrink callback object used for buffer object swap. 236 - * @device_list_mutex: Mutex protecting the device list. 237 - * This mutex is held while traversing the device list for pm options. 238 - * @lru_lock: Spinlock protecting the bo subsystem lru lists. 239 - * @device_list: List of buffer object devices. 240 - * @swap_lru: Lru list of buffer objects used for swapping. 241 - */ 242 - 243 - extern struct ttm_bo_global { 244 - 245 - /** 246 - * Constant after init. 247 - */ 248 - 249 - struct kobject kobj; 250 - struct page *dummy_read_page; 251 - spinlock_t lru_lock; 252 - 253 - /** 254 - * Protected by ttm_global_mutex. 255 - */ 256 - struct list_head device_list; 257 - 258 - /** 259 - * Protected by the lru_lock. 260 - */ 261 - struct list_head swap_lru[TTM_MAX_BO_PRIORITY]; 262 - 263 - /** 264 - * Internal protection. 265 - */ 266 - atomic_t bo_count; 267 - } ttm_bo_glob; 268 - 269 - 270 - #define TTM_NUM_MEM_TYPES 8 271 - 272 - /** 273 - * struct ttm_bo_device - Buffer object driver device-specific data. 
274 - * 275 - * @driver: Pointer to a struct ttm_bo_driver struct setup by the driver. 276 - * @man: An array of resource_managers. 277 - * @vma_manager: Address space manager (pointer) 278 - * lru_lock: Spinlock that protects the buffer+device lru lists and 279 - * ddestroy lists. 280 - * @dev_mapping: A pointer to the struct address_space representing the 281 - * device address space. 282 - * @wq: Work queue structure for the delayed delete workqueue. 283 - * 284 - */ 285 - 286 - struct ttm_bo_device { 287 - 288 - /* 289 - * Constant after bo device init / atomic. 290 - */ 291 - struct list_head device_list; 292 - struct ttm_bo_driver *driver; 293 - /* 294 - * access via ttm_manager_type. 295 - */ 296 - struct ttm_resource_manager sysman; 297 - struct ttm_resource_manager *man_drv[TTM_NUM_MEM_TYPES]; 298 - /* 299 - * Protected by internal locks. 300 - */ 301 - struct drm_vma_offset_manager *vma_manager; 302 - struct ttm_pool pool; 303 - 304 - /* 305 - * Protected by the global:lru lock. 306 - */ 307 - struct list_head ddestroy; 308 - 309 - /* 310 - * Protected by load / firstopen / lastclose /unload sync. 311 - */ 312 - 313 - struct address_space *dev_mapping; 314 - 315 - /* 316 - * Internal protection. 
317 - */ 318 - 319 - struct delayed_work wq; 320 - }; 321 - 322 - static inline struct ttm_resource_manager *ttm_manager_type(struct ttm_bo_device *bdev, 323 - int mem_type) 324 - { 325 - return bdev->man_drv[mem_type]; 326 - } 327 - 328 - static inline void ttm_set_driver_manager(struct ttm_bo_device *bdev, 329 - int type, 330 - struct ttm_resource_manager *manager) 331 - { 332 - bdev->man_drv[type] = manager; 333 - } 334 46 335 47 /** 336 48 * struct ttm_lru_bulk_move_pos ··· 99 387 struct ttm_placement *placement, 100 388 struct ttm_resource *mem, 101 389 struct ttm_operation_ctx *ctx); 102 - 103 - int ttm_bo_device_release(struct ttm_bo_device *bdev); 104 - 105 - /** 106 - * ttm_bo_device_init 107 - * 108 - * @bdev: A pointer to a struct ttm_bo_device to initialize. 109 - * @glob: A pointer to an initialized struct ttm_bo_global. 110 - * @driver: A pointer to a struct ttm_bo_driver set up by the caller. 111 - * @dev: The core kernel device pointer for DMA mappings and allocations. 112 - * @mapping: The address space to use for this bo. 113 - * @vma_manager: A pointer to a vma manager. 114 - * @use_dma_alloc: If coherent DMA allocation API should be used. 115 - * @use_dma32: If we should use GFP_DMA32 for device memory allocations. 116 - * 117 - * Initializes a struct ttm_bo_device: 118 - * Returns: 119 - * !0: Failure. 
120 - */ 121 - int ttm_bo_device_init(struct ttm_bo_device *bdev, 122 - struct ttm_bo_driver *driver, 123 - struct device *dev, 124 - struct address_space *mapping, 125 - struct drm_vma_offset_manager *vma_manager, 126 - bool use_dma_alloc, bool use_dma32); 127 390 128 391 /** 129 392 * ttm_bo_unmap_virtual ··· 181 494 static inline void 182 495 ttm_bo_move_to_lru_tail_unlocked(struct ttm_buffer_object *bo) 183 496 { 184 - spin_lock(&ttm_bo_glob.lru_lock); 497 + spin_lock(&ttm_glob.lru_lock); 185 498 ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL); 186 - spin_unlock(&ttm_bo_glob.lru_lock); 499 + spin_unlock(&ttm_glob.lru_lock); 187 500 } 188 501 189 502 static inline void ttm_bo_assign_mem(struct ttm_buffer_object *bo, ··· 225 538 /* 226 539 * ttm_bo_util.c 227 540 */ 228 - int ttm_mem_io_reserve(struct ttm_bo_device *bdev, 541 + int ttm_mem_io_reserve(struct ttm_device *bdev, 229 542 struct ttm_resource *mem); 230 - void ttm_mem_io_free(struct ttm_bo_device *bdev, 543 + void ttm_mem_io_free(struct ttm_device *bdev, 231 544 struct ttm_resource *mem); 232 545 233 546 /** ··· 318 631 * Initialise a generic range manager for the selected memory type. 319 632 * The range manager is installed for this device in the type slot. 320 633 */ 321 - int ttm_range_man_init(struct ttm_bo_device *bdev, 634 + int ttm_range_man_init(struct ttm_device *bdev, 322 635 unsigned type, bool use_tt, 323 636 unsigned long p_size); 324 637 ··· 330 643 * 331 644 * Remove the generic range manager from a slot and tear it down. 332 645 */ 333 - int ttm_range_man_fini(struct ttm_bo_device *bdev, 646 + int ttm_range_man_fini(struct ttm_device *bdev, 334 647 unsigned type); 335 648 336 649 #endif
+318
include/drm/ttm/ttm_device.h
··· 1 + /* 2 + * Copyright 2020 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + * 22 + * Authors: Christian König 23 + */ 24 + 25 + #ifndef _TTM_DEVICE_H_ 26 + #define _TTM_DEVICE_H_ 27 + 28 + #include <linux/types.h> 29 + #include <linux/workqueue.h> 30 + #include <drm/ttm/ttm_resource.h> 31 + #include <drm/ttm/ttm_pool.h> 32 + 33 + #define TTM_NUM_MEM_TYPES 8 34 + 35 + struct ttm_device; 36 + struct ttm_placement; 37 + struct ttm_buffer_object; 38 + struct ttm_operation_ctx; 39 + 40 + /** 41 + * struct ttm_global - Buffer object driver global data. 42 + * 43 + * @dummy_read_page: Pointer to a dummy page used for mapping requests 44 + * of unpopulated pages. 45 + * @shrink: A shrink callback object used for buffer object swap. 46 + * @device_list_mutex: Mutex protecting the device list. 
47 + * This mutex is held while traversing the device list for pm options. 48 + * @lru_lock: Spinlock protecting the bo subsystem lru lists. 49 + * @device_list: List of buffer object devices. 50 + * @swap_lru: Lru list of buffer objects used for swapping. 51 + */ 52 + extern struct ttm_global { 53 + 54 + /** 55 + * Constant after init. 56 + */ 57 + 58 + struct page *dummy_read_page; 59 + spinlock_t lru_lock; 60 + 61 + /** 62 + * Protected by ttm_global_mutex. 63 + */ 64 + struct list_head device_list; 65 + 66 + /** 67 + * Protected by the lru_lock. 68 + */ 69 + struct list_head swap_lru[TTM_MAX_BO_PRIORITY]; 70 + 71 + /** 72 + * Internal protection. 73 + */ 74 + atomic_t bo_count; 75 + } ttm_glob; 76 + 77 + struct ttm_device_funcs { 78 + /** 79 + * ttm_tt_create 80 + * 81 + * @bo: The buffer object to create the ttm for. 82 + * @page_flags: Page flags as identified by TTM_PAGE_FLAG_XX flags. 83 + * 84 + * Create a struct ttm_tt to back data with system memory pages. 85 + * No pages are actually allocated. 86 + * Returns: 87 + * NULL: Out of memory. 88 + */ 89 + struct ttm_tt *(*ttm_tt_create)(struct ttm_buffer_object *bo, 90 + uint32_t page_flags); 91 + 92 + /** 93 + * ttm_tt_populate 94 + * 95 + * @ttm: The struct ttm_tt to contain the backing pages. 96 + * 97 + * Allocate all backing pages 98 + * Returns: 99 + * -ENOMEM: Out of memory. 100 + */ 101 + int (*ttm_tt_populate)(struct ttm_device *bdev, 102 + struct ttm_tt *ttm, 103 + struct ttm_operation_ctx *ctx); 104 + 105 + /** 106 + * ttm_tt_unpopulate 107 + * 108 + * @ttm: The struct ttm_tt to contain the backing pages. 109 + * 110 + * Free all backing page 111 + */ 112 + void (*ttm_tt_unpopulate)(struct ttm_device *bdev, 113 + struct ttm_tt *ttm); 114 + 115 + /** 116 + * ttm_tt_destroy 117 + * 118 + * @bdev: Pointer to a ttm device 119 + * @ttm: Pointer to a struct ttm_tt. 120 + * 121 + * Destroy the backend. 
This will be call back from ttm_tt_destroy so 122 + * don't call ttm_tt_destroy from the callback or infinite loop. 123 + */ 124 + void (*ttm_tt_destroy)(struct ttm_device *bdev, struct ttm_tt *ttm); 125 + 126 + /** 127 + * struct ttm_bo_driver member eviction_valuable 128 + * 129 + * @bo: the buffer object to be evicted 130 + * @place: placement we need room for 131 + * 132 + * Check with the driver if it is valuable to evict a BO to make room 133 + * for a certain placement. 134 + */ 135 + bool (*eviction_valuable)(struct ttm_buffer_object *bo, 136 + const struct ttm_place *place); 137 + /** 138 + * struct ttm_bo_driver member evict_flags: 139 + * 140 + * @bo: the buffer object to be evicted 141 + * 142 + * Return the bo flags for a buffer which is not mapped to the hardware. 143 + * These will be placed in proposed_flags so that when the move is 144 + * finished, they'll end up in bo->mem.flags 145 + * This should not cause multihop evictions, and the core will warn 146 + * if one is proposed. 147 + */ 148 + 149 + void (*evict_flags)(struct ttm_buffer_object *bo, 150 + struct ttm_placement *placement); 151 + 152 + /** 153 + * struct ttm_bo_driver member move: 154 + * 155 + * @bo: the buffer to move 156 + * @evict: whether this motion is evicting the buffer from 157 + * the graphics address space 158 + * @ctx: context for this move with parameters 159 + * @new_mem: the new memory region receiving the buffer 160 + @ @hop: placement for driver directed intermediate hop 161 + * 162 + * Move a buffer between two memory regions. 163 + * Returns errno -EMULTIHOP if driver requests a hop 164 + */ 165 + int (*move)(struct ttm_buffer_object *bo, bool evict, 166 + struct ttm_operation_ctx *ctx, 167 + struct ttm_resource *new_mem, 168 + struct ttm_place *hop); 169 + 170 + /** 171 + * struct ttm_bo_driver_member verify_access 172 + * 173 + * @bo: Pointer to a buffer object. 174 + * @filp: Pointer to a struct file trying to access the object. 
175 + * 176 + * Called from the map / write / read methods to verify that the 177 + * caller is permitted to access the buffer object. 178 + * This member may be set to NULL, which will refuse this kind of 179 + * access for all buffer objects. 180 + * This function should return 0 if access is granted, -EPERM otherwise. 181 + */ 182 + int (*verify_access)(struct ttm_buffer_object *bo, 183 + struct file *filp); 184 + 185 + /** 186 + * Hook to notify driver about a resource delete. 187 + */ 188 + void (*delete_mem_notify)(struct ttm_buffer_object *bo); 189 + 190 + /** 191 + * notify the driver that we're about to swap out this bo 192 + */ 193 + void (*swap_notify)(struct ttm_buffer_object *bo); 194 + 195 + /** 196 + * Driver callback on when mapping io memory (for bo_move_memcpy 197 + * for instance). TTM will take care to call io_mem_free whenever 198 + * the mapping is not use anymore. io_mem_reserve & io_mem_free 199 + * are balanced. 200 + */ 201 + int (*io_mem_reserve)(struct ttm_device *bdev, 202 + struct ttm_resource *mem); 203 + void (*io_mem_free)(struct ttm_device *bdev, 204 + struct ttm_resource *mem); 205 + 206 + /** 207 + * Return the pfn for a given page_offset inside the BO. 208 + * 209 + * @bo: the BO to look up the pfn for 210 + * @page_offset: the offset to look up 211 + */ 212 + unsigned long (*io_mem_pfn)(struct ttm_buffer_object *bo, 213 + unsigned long page_offset); 214 + 215 + /** 216 + * Read/write memory buffers for ptrace access 217 + * 218 + * @bo: the BO to access 219 + * @offset: the offset from the start of the BO 220 + * @buf: pointer to source/destination buffer 221 + * @len: number of bytes to copy 222 + * @write: whether to read (0) from or write (non-0) to BO 223 + * 224 + * If successful, this function should return the number of 225 + * bytes copied, -EIO otherwise. If the number of bytes 226 + * returned is < len, the function may be called again with 227 + * the remainder of the buffer to copy. 
228 + */ 229 + int (*access_memory)(struct ttm_buffer_object *bo, unsigned long offset, 230 + void *buf, int len, int write); 231 + 232 + /** 233 + * struct ttm_bo_driver member del_from_lru_notify 234 + * 235 + * @bo: the buffer object deleted from lru 236 + * 237 + * notify driver that a BO was deleted from LRU. 238 + */ 239 + void (*del_from_lru_notify)(struct ttm_buffer_object *bo); 240 + 241 + /** 242 + * Notify the driver that we're about to release a BO 243 + * 244 + * @bo: BO that is about to be released 245 + * 246 + * Gives the driver a chance to do any cleanup, including 247 + * adding fences that may force a delayed delete 248 + */ 249 + void (*release_notify)(struct ttm_buffer_object *bo); 250 + }; 251 + 252 + /** 253 + * struct ttm_device - Buffer object driver device-specific data. 254 + * 255 + * @device_list: Our entry in the global device list. 256 + * @funcs: Function table for the device. 257 + * @sysman: Resource manager for the system domain. 258 + * @man_drv: An array of resource_managers. 259 + * @vma_manager: Address space manager. 260 + * @pool: page pool for the device. 261 + * @dev_mapping: A pointer to the struct address_space representing the 262 + * device address space. 263 + * @wq: Work queue structure for the delayed delete workqueue. 264 + */ 265 + struct ttm_device { 266 + /* 267 + * Constant after bo device init 268 + */ 269 + struct list_head device_list; 270 + struct ttm_device_funcs *funcs; 271 + 272 + /* 273 + * Access via ttm_manager_type. 274 + */ 275 + struct ttm_resource_manager sysman; 276 + struct ttm_resource_manager *man_drv[TTM_NUM_MEM_TYPES]; 277 + 278 + /* 279 + * Protected by internal locks. 280 + */ 281 + struct drm_vma_offset_manager *vma_manager; 282 + struct ttm_pool pool; 283 + 284 + /* 285 + * Protected by the global:lru lock. 286 + */ 287 + struct list_head ddestroy; 288 + 289 + /* 290 + * Protected by load / firstopen / lastclose /unload sync. 
291 + */ 292 + struct address_space *dev_mapping; 293 + 294 + /* 295 + * Internal protection. 296 + */ 297 + struct delayed_work wq; 298 + }; 299 + 300 + static inline struct ttm_resource_manager * 301 + ttm_manager_type(struct ttm_device *bdev, int mem_type) 302 + { 303 + return bdev->man_drv[mem_type]; 304 + } 305 + 306 + static inline void ttm_set_driver_manager(struct ttm_device *bdev, int type, 307 + struct ttm_resource_manager *manager) 308 + { 309 + bdev->man_drv[type] = manager; 310 + } 311 + 312 + int ttm_device_init(struct ttm_device *bdev, struct ttm_device_funcs *funcs, 313 + struct device *dev, struct address_space *mapping, 314 + struct drm_vma_offset_manager *vma_manager, 315 + bool use_dma_alloc, bool use_dma32); 316 + void ttm_device_fini(struct ttm_device *bdev); 317 + 318 + #endif
+3 -2
include/drm/ttm/ttm_memory.h drivers/gpu/drm/vmwgfx/ttm_memory.h
··· 35 35 #include <linux/errno.h> 36 36 #include <linux/kobject.h> 37 37 #include <linux/mm.h> 38 - #include "ttm_bo_api.h" 38 + 39 + #include <drm/ttm/ttm_bo_api.h> 39 40 40 41 /** 41 42 * struct ttm_mem_global - Global memory accounting structure. ··· 80 79 #endif 81 80 } ttm_mem_glob; 82 81 83 - int ttm_mem_global_init(struct ttm_mem_global *glob); 82 + int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev); 84 83 void ttm_mem_global_release(struct ttm_mem_global *glob); 85 84 int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory, 86 85 struct ttm_operation_ctx *ctx);
+2 -2
include/drm/ttm/ttm_resource.h
··· 33 33 34 34 #define TTM_MAX_BO_PRIORITY 4U 35 35 36 - struct ttm_bo_device; 36 + struct ttm_device; 37 37 struct ttm_resource_manager; 38 38 struct ttm_resource; 39 39 struct ttm_place; ··· 233 233 void ttm_resource_manager_init(struct ttm_resource_manager *man, 234 234 unsigned long p_size); 235 235 236 - int ttm_resource_manager_evict_all(struct ttm_bo_device *bdev, 236 + int ttm_resource_manager_evict_all(struct ttm_device *bdev, 237 237 struct ttm_resource_manager *man); 238 238 239 239 void ttm_resource_manager_debug(struct ttm_resource_manager *man,
+10 -5
include/drm/ttm/ttm_tt.h
··· 30 30 #include <linux/types.h> 31 31 #include <drm/ttm/ttm_caching.h> 32 32 33 + struct ttm_bo_device; 33 34 struct ttm_tt; 34 35 struct ttm_resource; 35 36 struct ttm_buffer_object; ··· 119 118 * 120 119 * Unbind, unpopulate and destroy common struct ttm_tt. 121 120 */ 122 - void ttm_tt_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 121 + void ttm_tt_destroy(struct ttm_device *bdev, struct ttm_tt *ttm); 123 122 124 123 /** 125 124 * ttm_tt_destroy_common: 126 125 * 127 126 * Called from driver to destroy common path. 128 127 */ 129 - void ttm_tt_destroy_common(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 128 + void ttm_tt_destroy_common(struct ttm_device *bdev, struct ttm_tt *ttm); 130 129 131 130 /** 132 131 * ttm_tt_swapin: ··· 136 135 * Swap in a previously swap out ttm_tt. 137 136 */ 138 137 int ttm_tt_swapin(struct ttm_tt *ttm); 139 - int ttm_tt_swapout(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 138 + int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm, 139 + gfp_t gfp_flags); 140 140 141 141 /** 142 142 * ttm_tt_populate - allocate pages for a ttm ··· 146 144 * 147 145 * Calls the driver method to allocate pages for a ttm 148 146 */ 149 - int ttm_tt_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_operation_ctx *ctx); 147 + int ttm_tt_populate(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_operation_ctx *ctx); 150 148 151 149 /** 152 150 * ttm_tt_unpopulate - free pages from a ttm ··· 155 153 * 156 154 * Calls the driver method to free all pages from a ttm 157 155 */ 158 - void ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm); 156 + void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm); 157 + 158 + int ttm_tt_mgr_init(void); 159 + void ttm_tt_mgr_fini(void); 159 160 160 161 #if IS_ENABLED(CONFIG_AGP) 161 162 #include <linux/agp_backend.h>
+9
include/linux/dma-heap.h
··· 51 51 void *dma_heap_get_drvdata(struct dma_heap *heap); 52 52 53 53 /** 54 + * dma_heap_get_name() - get heap name 55 + * @heap: DMA-Heap to retrieve private data for 56 + * 57 + * Returns: 58 + * The char* for the heap name. 59 + */ 60 + const char *dma_heap_get_name(struct dma_heap *heap); 61 + 62 + /** 54 63 * dma_heap_add - adds a heap to dmabuf heaps 55 64 * @exp_info: information needed to register this heap 56 65 */
+1 -1
include/linux/hdmi.h
··· 156 156 }; 157 157 158 158 enum hdmi_metadata_type { 159 - HDMI_STATIC_METADATA_TYPE1 = 1, 159 + HDMI_STATIC_METADATA_TYPE1 = 0, 160 160 }; 161 161 162 162 enum hdmi_eotf {
+5
include/linux/lockdep.h
··· 317 317 WARN_ON_ONCE(debug_locks && !lockdep_is_held(l)); \ 318 318 } while (0) 319 319 320 + #define lockdep_assert_none_held_once() do { \ 321 + WARN_ON_ONCE(debug_locks && current->lockdep_depth); \ 322 + } while (0) 323 + 320 324 #define lockdep_recursing(tsk) ((tsk)->lockdep_recursion) 321 325 322 326 #define lockdep_pin_lock(l) lock_pin_lock(&(l)->dep_map) ··· 400 396 #define lockdep_assert_held_write(l) do { (void)(l); } while (0) 401 397 #define lockdep_assert_held_read(l) do { (void)(l); } while (0) 402 398 #define lockdep_assert_held_once(l) do { (void)(l); } while (0) 399 + #define lockdep_assert_none_held_once() do { } while (0) 403 400 404 401 #define lockdep_recursing(tsk) (0) 405 402
+1
include/linux/platform_data/simplefb.h
··· 16 16 #define SIMPLEFB_FORMATS \ 17 17 { \ 18 18 { "r5g6b5", 16, {11, 5}, {5, 6}, {0, 5}, {0, 0}, DRM_FORMAT_RGB565 }, \ 19 + { "r5g5b5a1", 16, {11, 5}, {6, 5}, {1, 5}, {0, 1}, DRM_FORMAT_RGBA5551 }, \ 19 20 { "x1r5g5b5", 16, {10, 5}, {5, 5}, {0, 5}, {0, 0}, DRM_FORMAT_XRGB1555 }, \ 20 21 { "a1r5g5b5", 16, {10, 5}, {5, 5}, {0, 5}, {15, 1}, DRM_FORMAT_ARGB1555 }, \ 21 22 { "r8g8b8", 24, {16, 8}, {8, 8}, {0, 8}, {0, 0}, DRM_FORMAT_RGB888 }, \
+1 -1
include/uapi/drm/drm_mode.h
··· 990 990 }; 991 991 992 992 /** 993 - * struct drm_mode_create_blob - Create New block property 993 + * struct drm_mode_create_blob - Create New blob property 994 994 * 995 995 * Create a new 'blob' data property, copying length bytes from data pointer, 996 996 * and returning new blob ID.
+18
lib/test_printf.c
··· 655 655 software_node_unregister_nodes(softnodes); 656 656 } 657 657 658 + static void __init fourcc_pointer(void) 659 + { 660 + struct { 661 + u32 code; 662 + char *str; 663 + } const try[] = { 664 + { 0x3231564e, "NV12 little-endian (0x3231564e)", }, 665 + { 0xb231564e, "NV12 big-endian (0xb231564e)", }, 666 + { 0x10111213, ".... little-endian (0x10111213)", }, 667 + { 0x20303159, "Y10 little-endian (0x20303159)", }, 668 + }; 669 + unsigned int i; 670 + 671 + for (i = 0; i < ARRAY_SIZE(try); i++) 672 + test(try[i].str, "%p4cc", &try[i].code); 673 + } 674 + 658 675 static void __init 659 676 errptr(void) 660 677 { ··· 717 700 flags(); 718 701 errptr(); 719 702 fwnode_pointer(); 703 + fourcc_pointer(); 720 704 } 721 705 722 706 static void __init selftest(void)
+39
lib/vsprintf.c
··· 1734 1734 } 1735 1735 1736 1736 static noinline_for_stack 1737 + char *fourcc_string(char *buf, char *end, const u32 *fourcc, 1738 + struct printf_spec spec, const char *fmt) 1739 + { 1740 + char output[sizeof("0123 little-endian (0x01234567)")]; 1741 + char *p = output; 1742 + unsigned int i; 1743 + u32 val; 1744 + 1745 + if (fmt[1] != 'c' || fmt[2] != 'c') 1746 + return error_string(buf, end, "(%p4?)", spec); 1747 + 1748 + if (check_pointer(&buf, end, fourcc, spec)) 1749 + return buf; 1750 + 1751 + val = *fourcc & ~BIT(31); 1752 + 1753 + for (i = 0; i < sizeof(*fourcc); i++) { 1754 + unsigned char c = val >> (i * 8); 1755 + 1756 + /* Print non-control ASCII characters as-is, dot otherwise */ 1757 + *p++ = isascii(c) && isprint(c) ? c : '.'; 1758 + } 1759 + 1760 + strcpy(p, *fourcc & BIT(31) ? " big-endian" : " little-endian"); 1761 + p += strlen(p); 1762 + 1763 + *p++ = ' '; 1764 + *p++ = '('; 1765 + p = special_hex_number(p, output + sizeof(output) - 2, *fourcc, sizeof(u32)); 1766 + *p++ = ')'; 1767 + *p = '\0'; 1768 + 1769 + return string(buf, end, output, spec); 1770 + } 1771 + 1772 + static noinline_for_stack 1737 1773 char *address_val(char *buf, char *end, const void *addr, 1738 1774 struct printf_spec spec, const char *fmt) 1739 1775 { ··· 2224 2188 * correctness of the format string and va_list arguments. 2225 2189 * - 'K' For a kernel pointer that should be hidden from unprivileged users 2226 2190 * - 'NF' For a netdev_features_t 2191 + * - '4cc' V4L2 or DRM FourCC code, with endianness and raw numerical value. 
2227 2192 * - 'h[CDN]' For a variable-length buffer, it prints it as a hex string with 2228 2193 * a certain separator (' ' by default): 2229 2194 * C colon ··· 2322 2285 return restricted_pointer(buf, end, ptr, spec); 2323 2286 case 'N': 2324 2287 return netdev_bits(buf, end, ptr, spec, fmt); 2288 + case '4': 2289 + return fourcc_string(buf, end, ptr, spec, fmt); 2325 2290 case 'a': 2326 2291 return address_val(buf, end, ptr, spec, fmt); 2327 2292 case 'd':
+4 -2
scripts/checkpatch.pl
··· 6606 6606 $specifier = $1; 6607 6607 $extension = $2; 6608 6608 $qualifier = $3; 6609 - if ($extension !~ /[SsBKRraEehMmIiUDdgVCbGNOxtf]/ || 6609 + if ($extension !~ /[4SsBKRraEehMmIiUDdgVCbGNOxtf]/ || 6610 6610 ($extension eq "f" && 6611 - defined $qualifier && $qualifier !~ /^w/)) { 6611 + defined $qualifier && $qualifier !~ /^w/) || 6612 + ($extension eq "4" && 6613 + defined $qualifier && $qualifier !~ /^cc/)) { 6612 6614 $bad_specifier = $specifier; 6613 6615 last; 6614 6616 }