Merge tag 'drm-misc-next-2019-05-24' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.3, try #2:

UAPI Changes:
- Add HDR source metadata property (userspace sketch after this list).
- Make drm.h compile on GNU/kFreeBSD by including stdint.h
- Clarify how the userspace reviewer has to review new kernel UAPI.
- Clarify that for using new UAPI, merging to drm-next or drm-misc-next should be enough.
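
As a rough userspace-side sketch (not part of this pull): how a
compositor might use the new HDR source metadata property via libdrm.
The property name ("HDR_OUTPUT_METADATA"), the struct hdr_output_metadata
layout, and the enum values are assumptions based on this series' uAPI;
prop_id is assumed to have been looked up by name beforehand.

  #include <stdint.h>
  #include <string.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h>

  static int set_hdr_metadata(int fd, uint32_t connector_id, uint32_t prop_id)
  {
      struct hdr_output_metadata meta;
      uint32_t blob_id;
      int ret;

      memset(&meta, 0, sizeof(meta));
      meta.metadata_type = 0;                  /* static metadata type 1 (assumed enum value) */
      meta.hdmi_metadata_type1.eotf = 2;       /* SMPTE ST 2084 (PQ), per CTA-861-G */
      meta.hdmi_metadata_type1.max_cll = 1000; /* cd/m^2 */
      meta.hdmi_metadata_type1.max_fall = 400; /* cd/m^2 */

      /* Wrap the struct in a property blob and point the connector at it. */
      ret = drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id);
      if (ret)
          return ret;

      return drmModeObjectSetProperty(fd, connector_id,
                                      DRM_MODE_OBJECT_CONNECTOR,
                                      prop_id, blob_id);
  }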

Cross-subsystem Changes:
- video/hdmi: Add unpack function for DRM infoframes.
- Device tree bindings:
* Updating a property for Mali Midgard GPUs
* Updating a property for STM32 DSI panel
* Adding support for FriendlyELEC HD702E 800x1280 panel
* Adding support for Evervision VGG804821 800x480 5.0" WVGA TFT panel
* Adding support for the EDT ET035012DM6 3.5" 320x240 QVGA 24-bit RGB TFT.
* Adding support for Three Five displays TFC S9700RTWV43TR-01B 800x480 panel
with resistive touch found on TI's AM335X-EVM.
* Adding support for EDT ETM0430G0DH6 480x272 panel.
- Add OSD101T2587-53TS driver with DT bindings.
- Add Samsung S6E63M0 panel driver with DT bindings.
- Add VXT VL050-8048NT-C01 800x480 panel with DT bindings.
- Dma-buf:
* Make mmap callback actually optional (exporter sketch after this list).
* Documentation updates.
* Fix debugfs refcount imbalance.
* Remove unused sync_dump function.
- Fix device tree bindings in drm-misc-next after a botched merge.
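
For exporters, the dma-buf changes boil down to two things: ->mmap may
now be left NULL (dma_buf_mmap() then returns -EINVAL instead of the
export being rejected), and opting in to ->cache_sgt_mapping makes
dma-buf cache the sg_table across dma_buf_map_attachment() calls, as the
dma-buf.c hunk below shows. A minimal sketch, with all my_* names as
placeholders:

  #include <linux/dma-buf.h>

  static struct sg_table *my_map_dma_buf(struct dma_buf_attachment *attach,
                                         enum dma_data_direction dir);
  static void my_unmap_dma_buf(struct dma_buf_attachment *attach,
                               struct sg_table *sgt,
                               enum dma_data_direction dir);
  static void my_release(struct dma_buf *dmabuf);

  static const struct dma_buf_ops my_dmabuf_ops = {
      .cache_sgt_mapping = true, /* dma-buf caches attach->sgt for us */
      .map_dma_buf       = my_map_dma_buf,
      .unmap_dma_buf     = my_unmap_dma_buf,
      .release           = my_release,
      /* .mmap intentionally omitted: mapping attempts fail with -EINVAL */
  };

  static struct dma_buf *my_export(void *priv, size_t size)
  {
      DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

      exp_info.ops  = &my_dmabuf_ops;
      exp_info.size = size;
      exp_info.priv = priv;

      return dma_buf_export(&exp_info); /* no longer insists on ->mmap */
  }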

Core Changes:
- Add support for HDR infoframes and related EDID parsing.
- Remove prime sg_table caching, now done inside dma-buf.
- Add shiny new drm_gem_vram helpers for simple VRAM drivers,
with some fixes to the new API on top.
- Small fix to job cleanup without timeout handler.
- Documentation fixes to drm_fourcc.
- Replace lookups of drm_format with struct drm_format_info;
remove functions made obsolete by this conversion (conversion sketch
after this list).
- Remove double include in bridge/panel.c and some drivers.
- Remove drmP.h include from drm/edid and drm/dp.
- Fix null pointer deref in drm_fb_helper_hotplug_event().
- Remove most members from drm_fb_helper_crtc, only mode_set is kept.
- Fix a race between fb helpers and userspace; only restore the mode
when userspace is not master.
- Move legacy setup from drm_file.c to drm_legacy_misc.c.
- Rework scheduler job destruction.
- drm_bus was removed, so drop the corresponding entry from the TODO.
- Add __drm_atomic_helper_crtc_reset() to subclass crtc_state,
and convert some drivers to use it (conversion is not complete yet;
driver-side sketch after this list).
- Bump vblank timeout wait to 100 ms for atomic.
- Docbook fix for drm_hdmi_infoframe_set_hdr_metadata.
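
Drivers still to be converted to __drm_atomic_helper_crtc_reset() can
follow the malidp hunk further down; the pattern, with my_* names as
placeholders for a driver's subclassed state, is roughly:

  #include <linux/slab.h>
  #include <drm/drm_atomic_state_helper.h>

  struct my_crtc_state {
      struct drm_crtc_state base;
      /* driver-private members ... */
  };

  static void my_crtc_destroy_state(struct drm_crtc *crtc,
                                    struct drm_crtc_state *state);

  static void my_crtc_reset(struct drm_crtc *crtc)
  {
      struct my_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

      if (crtc->state)
          my_crtc_destroy_state(crtc, crtc->state);

      /* Hooks the freshly allocated subclassed state up to the CRTC. */
      __drm_atomic_helper_crtc_reset(crtc, &state->base);
  }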
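
The drm_format_info conversion replaces the per-field helpers
(drm_format_plane_cpp(), drm_format_num_planes(),
drm_format_horz/vert_chroma_subsampling()) with a single descriptor
lookup, as the amdgpu and malidp hunks below show. In short:

  #include <linux/printk.h>
  #include <drm/drm_fourcc.h>

  static void describe_format(u32 fmt)
  {
      /* Previously: cpp = drm_format_plane_cpp(fmt, 0); and so on. */
      const struct drm_format_info *info = drm_format_info(fmt);

      pr_info("cpp=%d planes=%d hsub=%d vsub=%d\n",
              info->cpp[0], info->num_planes, info->hsub, info->vsub);
  }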

Driver Changes:
- sun4i: Use DRM_GEM_CMA_VMAP_DRIVER_OPS instead of defining it manually.
- v3d: Small cleanups, support for compute shaders,
reservation/synchronization fixes, job management refactoring,
and MMU and debugfs fixes.
- lima: Fix null pointer in irq handler on startup, set default timeout for scheduled jobs.
- stm/ltdc: Assorted fixes and adding FB modifier support.
- amdgpu: Avoid hw reset if guilty job was already signaled.
- virtio: Add seqno to fences, add trace events, use correct flags for fence allocation.
- Convert AST, bochs, mgag200, vboxvideo, hisilicon to the new drm_gem_vram API (usage sketch after this list).
- sun6i_mipi_dsi: Support DSI GENERIC_SHORT_WRITE_2 transfers.
- bochs: Small fix to use PTR_ERR_OR_ZERO, and fix driver unload.
- gma500: Header fixes.
- cirrus: Remove unused files.
- mediatek: Fix compiler warning after merging the HDR series.
- vc4: Rework binner bo handling.
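
The drm_gem_vram API used by the converted drivers is visible in the
AST hunks below; the core pin/offset pattern, distilled into a
stand-alone sketch (my_pin_fb is a made-up name):

  #include <drm/drm_gem_vram_helper.h>

  static int my_pin_fb(struct drm_gem_object *obj, s64 *out_gpu_addr)
  {
      struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
      s64 gpu_addr;
      int ret;

      ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
      if (ret)
          return ret;

      gpu_addr = drm_gem_vram_offset(gbo); /* negative errno on failure */
      if (gpu_addr < 0) {
          drm_gem_vram_unpin(gbo);
          return (int)gpu_addr;
      }

      *out_gpu_addr = gpu_addr;
      return 0;
  }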

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/052875a5-27ba-3832-60c2-193d950afdff@linux.intel.com

+5980 -4630
+16
Documentation/devicetree/bindings/display/panel/edt,et-series.txt
··· 6 6 compatible with the simple-panel binding, which is specified in 7 7 simple-panel.txt 8 8 9 + 3,5" QVGA TFT Panels 10 + -------------------- 11 + +-----------------+---------------------+-------------------------------------+ 12 + | Identifier | compatible | description | 13 + +=================+=====================+=====================================+ 14 + | ET035012DM6 | edt,et035012dm6 | 3.5" QVGA TFT LCD panel | 15 + +-----------------+---------------------+-------------------------------------+ 16 + 17 + 4,3" WVGA TFT Panels 18 + -------------------- 19 + 20 + +-----------------+---------------------+-------------------------------------+ 21 + | Identifier | compatible | description | 22 + +=================+=====================+=====================================+ 23 + | ETM0430G0DH6 | edt,etm0430g0dh6 | 480x272 TFT Display | 24 + +-----------------+---------------------+-------------------------------------+ 9 25 10 26 5,7" WVGA TFT Panels 11 27 --------------------
+12
Documentation/devicetree/bindings/display/panel/evervision,vgg804821.txt
··· 1 + Evervision Electronics Co. Ltd. VGG804821 5.0" WVGA TFT LCD Panel 2 + 3 + Required properties: 4 + - compatible: should be "evervision,vgg804821" 5 + - power-supply: See simple-panel.txt 6 + 7 + Optional properties: 8 + - backlight: See simple-panel.txt 9 + - enable-gpios: See simple-panel.txt 10 + 11 + This binding is compatible with the simple-panel binding, which is specified 12 + in simple-panel.txt in this directory.
+32
Documentation/devicetree/bindings/display/panel/friendlyarm,hd702e.txt
··· 1 + FriendlyELEC HD702E 800x1280 LCD panel 2 + 3 + The HD702E is a FriendlyELEC-developed eDP LCD panel with 800x1280 4 + resolution. It has a built-in Goodix GT9271 capacitive touchscreen 5 + with backlight adjustable via PWM. 6 + 7 + Required properties: 8 + - compatible: should be "friendlyarm,hd702e" 9 + - power-supply: regulator to provide the supply voltage 10 + 11 + Optional properties: 12 + - backlight: phandle of the backlight device attached to the panel 13 + 14 + Optional nodes: 15 + - Video port for LCD panel input. 16 + 17 + This binding is compatible with the simple-panel binding, which is specified 18 + in simple-panel.txt in this directory. 19 + 20 + Example: 21 + 22 + panel { 23 + compatible = "friendlyarm,hd702e", "simple-panel"; 24 + backlight = <&backlight>; 25 + power-supply = <&vcc3v3_sys>; 26 + 27 + port { 28 + panel_in_edp: endpoint { 29 + remote-endpoint = <&edp_out_panel>; 30 + }; 31 + }; 32 + };
+11
Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2045-53ts.txt
··· 1 + One Stop Displays OSD101T2045-53TS 10.1" 1920x1200 panel 2 + 3 + Required properties: 4 + - compatible: should be "osddisplays,osd101t2045-53ts" 5 + - power-supply: as specified in the base binding 6 + 7 + Optional properties: 8 + - backlight: as specified in the base binding 9 + 10 + This binding is compatible with the simple-panel binding, which is specified 11 + in simple-panel.txt in this directory.
+14
Documentation/devicetree/bindings/display/panel/osddisplays,osd101t2587-53ts.txt
··· 1 + One Stop Displays OSD101T2587-53TS 10.1" 1920x1200 panel 2 + 3 + The panel is similar to OSD101T2045-53TS, but it needs additional 4 + MIPI_DSI_TURN_ON_PERIPHERAL message from the host. 5 + 6 + Required properties: 7 + - compatible: should be "osddisplays,osd101t2587-53ts" 8 + - power-supply: as specified in the base binding 9 + 10 + Optional properties: 11 + - backlight: as specified in the base binding 12 + 13 + This binding is compatible with the simple-panel binding, which is specified 14 + in simple-panel.txt in this directory.
+33
Documentation/devicetree/bindings/display/panel/samsung,s6e63m0.txt
··· 1 + Samsung s6e63m0 AMOLED LCD panel 2 + 3 + Required properties: 4 + - compatible: "samsung,s6e63m0" 5 + - reset-gpios: GPIO spec for reset pin 6 + - vdd3-supply: VDD regulator 7 + - vci-supply: VCI regulator 8 + 9 + The panel must obey rules for SPI slave device specified in document [1]. 10 + 11 + The device node can contain one 'port' child node with one child 12 + 'endpoint' node, according to the bindings defined in [2]. This 13 + node should describe panel's video bus. 14 + 15 + [1]: Documentation/devicetree/bindings/spi/spi-bus.txt 16 + [2]: Documentation/devicetree/bindings/media/video-interfaces.txt 17 + 18 + Example: 19 + 20 + s6e63m0: display@0 { 21 + compatible = "samsung,s6e63m0"; 22 + reg = <0>; 23 + reset-gpios = <&mp05 5 1>; 24 + vdd3-supply = <&ldo12_reg>; 25 + vci-supply = <&ldo11_reg>; 26 + spi-max-frequency = <1200000>; 27 + 28 + port { 29 + lcd_ep: endpoint { 30 + remote-endpoint = <&fimd_ep>; 31 + }; 32 + }; 33 + };
+15
Documentation/devicetree/bindings/display/panel/tfc,s9700rtwv43tr-01b.txt
··· 1 + TFC S9700RTWV43TR-01B 7" Three Five Corp 800x480 LCD panel with 2 + resistive touch 3 + 4 + The panel is found on TI AM335x-evm. 5 + 6 + Required properties: 7 + - compatible: should be "tfc,s9700rtwv43tr-01b" 8 + - power-supply: See panel-common.txt 9 + 10 + Optional properties: 11 + - enable-gpios: GPIO pin to enable or disable the panel, if there is one 12 + - backlight: phandle of the backlight device attached to the panel 13 + 14 + This binding is compatible with the simple-panel binding, which is specified 15 + in simple-panel.txt in this directory.
+12
Documentation/devicetree/bindings/display/panel/vl050_8048nt_c01.txt
··· 1 + VXT 800x480 color TFT LCD panel 2 + 3 + Required properties: 4 + - compatible: should be "vxt,vl050-8048nt-c01" 5 + - power-supply: as specified in the base binding 6 + 7 + Optional properties: 8 + - backlight: as specified in the base binding 9 + - enable-gpios: as specified in the base binding 10 + 11 + This binding is compatible with the simple-panel binding, which is specified 12 + in simple-panel.txt in this directory.
+3
Documentation/devicetree/bindings/display/st,stm32-ltdc.txt
··· 40 40 - panel or bridge node: A node containing the panel or bridge description as 41 41 documented in [6]. 42 42 - port: panel or bridge port node, connected to the DSI output port (port@1). 43 + Optional properties: 44 + - phy-dsi-supply: phandle of the regulator that provides the supply voltage. 43 45 44 46 Note: You can find more documentation in the following references 45 47 [1] Documentation/devicetree/bindings/clock/clock-bindings.txt ··· 103 101 clock-names = "pclk", "ref"; 104 102 resets = <&rcc STM32F4_APB2_RESET(DSI)>; 105 103 reset-names = "apb"; 104 + phy-dsi-supply = <&reg18>; 106 105 107 106 ports { 108 107 #address-cells = <1>;
+18 -1
Documentation/devicetree/bindings/gpu/arm,mali-midgard.txt
··· 15 15 + "arm,mali-t860" 16 16 + "arm,mali-t880" 17 17 * which must be preceded by one of the following vendor specifics: 18 + + "allwinner,sun50i-h6-mali" 18 19 + "amlogic,meson-gxm-mali" 19 20 + "rockchip,rk3288-mali" 20 21 + "rockchip,rk3399-mali" ··· 32 31 33 32 - clocks : Phandle to clock for the Mali Midgard device. 34 33 34 + - clock-names : Specify the names of the clocks specified in clocks 35 + when multiple clocks are present. 36 + * core: clock driving the GPU itself (When only one clock is present, 37 + assume it's this clock.) 38 + * bus: bus clock for the GPU 39 + 35 40 - mali-supply : Phandle to regulator for the Mali device. Refer to 36 41 Documentation/devicetree/bindings/regulator/regulator.txt for details. 37 42 38 43 - operating-points-v2 : Refer to Documentation/devicetree/bindings/opp/opp.txt 44 + for details. 45 + 46 + - #cooling-cells: Refer to Documentation/devicetree/bindings/thermal/thermal.txt 39 47 for details. 40 48 41 49 - resets : Phandle of the GPU reset line. ··· 53 43 ------------------------ 54 44 55 45 The Mali GPU is integrated very differently from one SoC to 56 - another. In order to accomodate those differences, you have the option 46 + another. In order to accommodate those differences, you have the option 57 47 to specify one more vendor-specific compatible, among: 48 + 49 + - "allwinner,sun50i-h6-mali" 50 + Required properties: 51 + - clocks : phandles to core and bus clocks 52 + - clock-names : must contain "core" and "bus" 53 + - resets: phandle to GPU reset line 58 54 59 55 - "amlogic,meson-gxm-mali" 60 56 Required properties: ··· 81 65 mali-supply = <&vdd_gpu>; 82 66 operating-points-v2 = <&gpu_opp_table>; 83 67 power-domains = <&power RK3288_PD_GPU>; 68 + #cooling-cells = <2>; 84 69 }; 85 70 86 71 gpu_opp_table: opp_table0 {
+6
Documentation/devicetree/bindings/vendor-prefixes.yaml
··· 287 287 description: Everest Semiconductor Co. Ltd. 288 288 "^everspin,.*": 289 289 description: Everspin Technologies, Inc. 290 + "^evervision,.*": 291 + description: Evervision Electronics Co. Ltd. 290 292 "^exar,.*": 291 293 description: Exar Corporation 292 294 "^excito,.*": ··· 851 849 description: Shenzhen Techstar Electronics Co., Ltd. 852 850 "^terasic,.*": 853 851 description: Terasic Inc. 852 + "^tfc,.*": 853 + description: Three Five Corp 854 854 "^thine,.*": 855 855 description: THine Electronics, Inc. 856 856 "^ti,.*": ··· 927 923 description: Voipac Technologies s.r.o. 928 924 "^vot,.*": 929 925 description: Vision Optical Technology Co., Ltd. 926 + "^vxt,.*": 927 + description: VXT Ltd 930 928 "^wd,.*": 931 929 description: Western Digital Corp. 932 930 "^wetek,.*":
+33 -1
Documentation/gpu/drm-mm.rst
··· 79 79 80 80 See the radeon_ttm.c file for an example of usage. 81 81 82 - 83 82 The Graphics Execution Manager (GEM) 84 83 ==================================== 85 84 ··· 377 378 :internal: 378 379 379 380 .. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c 381 + :export: 382 + 383 + VRAM Helper Function Reference 384 + ============================== 385 + 386 + .. kernel-doc:: drivers/gpu/drm/drm_vram_helper_common.c 387 + :doc: overview 388 + 389 + .. kernel-doc:: include/drm/drm_gem_vram_helper.h 390 + :internal: 391 + 392 + GEM VRAM Helper Functions Reference 393 + ----------------------------------- 394 + 395 + .. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c 396 + :doc: overview 397 + 398 + .. kernel-doc:: include/drm/drm_gem_vram_helper.h 399 + :internal: 400 + 401 + .. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c 402 + :export: 403 + 404 + VRAM MM Helper Functions Reference 405 + ---------------------------------- 406 + 407 + .. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c 408 + :doc: overview 409 + 410 + .. kernel-doc:: include/drm/drm_vram_mm_helper.h 411 + :internal: 412 + 413 + .. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c 380 414 :export: 381 415 382 416 VMA Offset Manager
+6 -4
Documentation/gpu/drm-uapi.rst
··· 85 85 - The userspace side must be fully reviewed and tested to the standards of that 86 86 userspace project. For e.g. mesa this means piglit testcases and review on the 87 87 mailing list. This is again to ensure that the new interface actually gets the 88 - job done. 88 + job done. The userspace-side reviewer should also provide at least an 89 + Acked-by on the kernel uAPI patch indicating that they've looked at how the 90 + kernel side is implementing the new feature being used. 89 91 90 92 - The userspace patches must be against the canonical upstream, not some vendor 91 93 fork. This is to make sure that no one cheats on the review and testing 92 94 requirements by doing a quick fork. 93 95 94 96 - The kernel patch can only be merged after all the above requirements are met, 95 - but it **must** be merged **before** the userspace patches land. uAPI always flows 96 - from the kernel, doing things the other way round risks divergence of the uAPI 97 - definitions and header files. 97 + but it **must** be merged to either drm-next or drm-misc-next **before** the 98 + userspace patches land. uAPI always flows from the kernel, doing things the 99 + other way round risks divergence of the uAPI definitions and header files. 98 100 99 101 These are fairly steep requirements, but have grown out from years of shared 100 102 pain and experience with uAPI added hastily, and almost always regretted about
+8 -19
Documentation/gpu/todo.rst
··· 10 10 Subsystem-wide refactorings 11 11 =========================== 12 12 13 - De-midlayer drivers 14 - ------------------- 15 - 16 - With the recent ``drm_bus`` cleanup patches for 3.17 it is no longer required 17 - to have a ``drm_bus`` structure set up. Drivers can directly set up the 18 - ``drm_device`` structure instead of relying on bus methods in ``drm_usb.c`` 19 - and ``drm_pci.c``. The goal is to get rid of the driver's ``->load`` / 20 - ``->unload`` callbacks and open-code the load/unload sequence properly, using 21 - the new two-stage ``drm_device`` setup/teardown. 22 - 23 - Once all existing drivers are converted we can also remove those bus support 24 - files for USB and platform devices. 25 - 26 - All you need is a GPU for a non-converted driver (currently almost all of 27 - them, but also all the virtual ones used by KVM, so everyone qualifies). 28 - 29 - Contact: Daniel Vetter, Thierry Reding, respective driver maintainers 30 - 31 - 32 13 Remove custom dumb_map_offset implementations 33 14 --------------------------------------------- 34 15 ··· 280 299 it to use drm_mode_hsync() instead. 281 300 282 301 Contact: Sean Paul 302 + 303 + drm_fb_helper tasks 304 + ------------------- 305 + 306 + - drm_fb_helper_restore_fbdev_mode_unlocked() should call restore_fbdev_mode() 307 + not the _force variant so it can bail out if there is a master. But first 308 + these igt tests need to be fixed: kms_fbcon_fbt@psr and 309 + kms_fbcon_fbt@psr-suspend. 283 310 284 311 Core refactorings 285 312 =================
+1 -1
MAINTAINERS
··· 5413 5413 5414 5414 DRM PANEL DRIVERS 5415 5415 M: Thierry Reding <thierry.reding@gmail.com> 5416 + R: Sam Ravnborg <sam@ravnborg.org> 5416 5417 L: dri-devel@lists.freedesktop.org 5417 5418 T: git git://anongit.freedesktop.org/drm/drm-misc 5418 5419 S: Maintained ··· 5442 5441 DRM TTM SUBSYSTEM 5443 5442 M: Christian Koenig <christian.koenig@amd.com> 5444 5443 M: Huang Rui <ray.huang@amd.com> 5445 - M: Junwei Zhang <Jerry.Zhang@amd.com> 5446 5444 T: git git://people.freedesktop.org/~agd5f/linux 5447 5445 S: Maintained 5448 5446 L: dri-devel@lists.freedesktop.org
+35 -4
drivers/dma-buf/dma-buf.c
··· 90 90 91 91 dmabuf = file->private_data; 92 92 93 + /* check if buffer supports mmap */ 94 + if (!dmabuf->ops->mmap) 95 + return -EINVAL; 96 + 93 97 /* check for overflowing the buffer's size */ 94 98 if (vma->vm_pgoff + vma_pages(vma) > 95 99 dmabuf->size >> PAGE_SHIFT) ··· 408 404 || !exp_info->ops 409 405 || !exp_info->ops->map_dma_buf 410 406 || !exp_info->ops->unmap_dma_buf 411 - || !exp_info->ops->release 412 - || !exp_info->ops->mmap)) { 407 + || !exp_info->ops->release)) { 413 408 return ERR_PTR(-EINVAL); 414 409 } 415 410 ··· 576 573 list_add(&attach->node, &dmabuf->attachments); 577 574 578 575 mutex_unlock(&dmabuf->lock); 576 + 579 577 return attach; 580 578 581 579 err_attach: ··· 598 594 { 599 595 if (WARN_ON(!dmabuf || !attach)) 600 596 return; 597 + 598 + if (attach->sgt) 599 + dmabuf->ops->unmap_dma_buf(attach, attach->sgt, attach->dir); 601 600 602 601 mutex_lock(&dmabuf->lock); 603 602 list_del(&attach->node); ··· 637 630 if (WARN_ON(!attach || !attach->dmabuf)) 638 631 return ERR_PTR(-EINVAL); 639 632 633 + if (attach->sgt) { 634 + /* 635 + * Two mappings with different directions for the same 636 + * attachment are not allowed. 637 + */ 638 + if (attach->dir != direction && 639 + attach->dir != DMA_BIDIRECTIONAL) 640 + return ERR_PTR(-EBUSY); 641 + 642 + return attach->sgt; 643 + } 644 + 640 645 sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction); 641 646 if (!sg_table) 642 647 sg_table = ERR_PTR(-ENOMEM); 648 + 649 + if (!IS_ERR(sg_table) && attach->dmabuf->ops->cache_sgt_mapping) { 650 + attach->sgt = sg_table; 651 + attach->dir = direction; 652 + } 643 653 644 654 return sg_table; 645 655 } ··· 681 657 if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) 682 658 return; 683 659 684 - attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, 685 - direction); 660 + if (attach->sgt == sg_table) 661 + return; 662 + 663 + attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction); 686 664 } 687 665 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment); 688 666 ··· 932 906 if (WARN_ON(!dmabuf || !vma)) 933 907 return -EINVAL; 934 908 909 + /* check if buffer supports mmap */ 910 + if (!dmabuf->ops->mmap) 911 + return -EINVAL; 912 + 935 913 /* check for offset overflow */ 936 914 if (pgoff + vma_pages(vma) < pgoff) 937 915 return -EOVERFLOW; ··· 1098 1068 fence->ops->get_driver_name(fence), 1099 1069 fence->ops->get_timeline_name(fence), 1100 1070 dma_fence_is_signaled(fence) ? "" : "un"); 1071 + dma_fence_put(fence); 1101 1072 } 1102 1073 rcu_read_unlock(); 1103 1074
-26
drivers/dma-buf/sync_debug.c
··· 197 197 return 0; 198 198 } 199 199 late_initcall(sync_debugfs_init); 200 - 201 - #define DUMP_CHUNK 256 202 - static char sync_dump_buf[64 * 1024]; 203 - void sync_dump(void) 204 - { 205 - struct seq_file s = { 206 - .buf = sync_dump_buf, 207 - .size = sizeof(sync_dump_buf) - 1, 208 - }; 209 - int i; 210 - 211 - sync_info_debugfs_show(&s, NULL); 212 - 213 - for (i = 0; i < s.count; i += DUMP_CHUNK) { 214 - if ((s.count - i) > DUMP_CHUNK) { 215 - char c = s.buf[i + DUMP_CHUNK]; 216 - 217 - s.buf[i + DUMP_CHUNK] = 0; 218 - pr_cont("%s", s.buf + i); 219 - s.buf[i + DUMP_CHUNK] = c; 220 - } else { 221 - s.buf[s.count] = 0; 222 - pr_cont("%s", s.buf + i); 223 - } 224 - } 225 - }
-1
drivers/dma-buf/sync_debug.h
··· 68 68 void sync_timeline_debug_remove(struct sync_timeline *obj); 69 69 void sync_file_debug_add(struct sync_file *fence); 70 70 void sync_file_debug_remove(struct sync_file *fence); 71 - void sync_dump(void); 72 71 73 72 #endif /* _LINUX_SYNC_H */
+7
drivers/gpu/drm/Kconfig
··· 161 161 GPU memory types. Will be enabled automatically if a device driver 162 162 uses it. 163 163 164 + config DRM_VRAM_HELPER 165 + tristate 166 + depends on DRM 167 + select DRM_TTM 168 + help 169 + Helpers for VRAM memory management 170 + 164 171 config DRM_GEM_CMA_HELPER 165 172 bool 166 173 depends on DRM
+5
drivers/gpu/drm/Makefile
··· 32 32 drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o 33 33 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o 34 34 35 + drm_vram_helper-y := drm_gem_vram_helper.o \ 36 + drm_vram_helper_common.o \ 37 + drm_vram_mm_helper.o 38 + obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o 39 + 35 40 drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \ 36 41 drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \ 37 42 drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
+96 -52
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3341 3341 if (!ring || !ring->sched.thread) 3342 3342 continue; 3343 3343 3344 - drm_sched_stop(&ring->sched); 3345 - 3346 3344 /* after all hw jobs are reset, hw fence is meaningless, so force_completion */ 3347 3345 amdgpu_fence_driver_force_completion(ring); 3348 3346 } ··· 3348 3350 if(job) 3349 3351 drm_sched_increase_karma(&job->base); 3350 3352 3351 - 3352 - 3353 + /* Don't suspend on bare metal if we are not going to HW reset the ASIC */ 3353 3354 if (!amdgpu_sriov_vf(adev)) { 3354 3355 3355 3356 if (!need_full_reset) ··· 3486 3489 return r; 3487 3490 } 3488 3491 3489 - static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev, 3490 - struct amdgpu_job *job) 3492 + static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock) 3491 3493 { 3492 - int i; 3494 + if (trylock) { 3495 + if (!mutex_trylock(&adev->lock_reset)) 3496 + return false; 3497 + } else 3498 + mutex_lock(&adev->lock_reset); 3493 3499 3494 - for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { 3495 - struct amdgpu_ring *ring = adev->rings[i]; 3496 - 3497 - if (!ring || !ring->sched.thread) 3498 - continue; 3499 - 3500 - if (!adev->asic_reset_res) 3501 - drm_sched_resubmit_jobs(&ring->sched); 3502 - 3503 - drm_sched_start(&ring->sched, !adev->asic_reset_res); 3504 - } 3505 - 3506 - if (!amdgpu_device_has_dc_support(adev)) { 3507 - drm_helper_resume_force_mode(adev->ddev); 3508 - } 3509 - 3510 - adev->asic_reset_res = 0; 3511 - } 3512 - 3513 - static void amdgpu_device_lock_adev(struct amdgpu_device *adev) 3514 - { 3515 - mutex_lock(&adev->lock_reset); 3516 3500 atomic_inc(&adev->gpu_reset_counter); 3517 3501 adev->in_gpu_reset = 1; 3518 3502 /* Block kfd: SRIOV would do it separately */ 3519 3503 if (!amdgpu_sriov_vf(adev)) 3520 3504 amdgpu_amdkfd_pre_reset(adev); 3505 + 3506 + return true; 3521 3507 } 3522 3508 3523 3509 static void amdgpu_device_unlock_adev(struct amdgpu_device *adev) ··· 3528 3548 int amdgpu_device_gpu_recover(struct amdgpu_device *adev, 3529 3549 struct amdgpu_job *job) 3530 3550 { 3531 - int r; 3532 - struct amdgpu_hive_info *hive = NULL; 3533 - bool need_full_reset = false; 3534 - struct amdgpu_device *tmp_adev = NULL; 3535 3551 struct list_head device_list, *device_list_handle = NULL; 3552 + bool need_full_reset, job_signaled; 3553 + struct amdgpu_hive_info *hive = NULL; 3554 + struct amdgpu_device *tmp_adev = NULL; 3555 + int i, r = 0; 3536 3556 3557 + need_full_reset = job_signaled = false; 3537 3558 INIT_LIST_HEAD(&device_list); 3538 3559 3539 3560 dev_info(adev->dev, "GPU reset begin!\n"); 3540 3561 3562 + hive = amdgpu_get_xgmi_hive(adev, false); 3563 + 3541 3564 /* 3542 - * In case of XGMI hive disallow concurrent resets to be triggered 3543 - * by different nodes. No point also since the one node already executing 3544 - * reset will also reset all the other nodes in the hive. 3565 + * Here we trylock to avoid chain of resets executing from 3566 + * either trigger by jobs on different adevs in XGMI hive or jobs on 3567 + * different schedulers for same device while this TO handler is running. 3568 + * We always reset all schedulers for device and all devices for XGMI 3569 + * hive so that should take care of them too. 
3545 3570 */ 3546 - hive = amdgpu_get_xgmi_hive(adev, 0); 3547 - if (hive && adev->gmc.xgmi.num_physical_nodes > 1 && 3548 - !mutex_trylock(&hive->reset_lock)) 3571 + 3572 + if (hive && !mutex_trylock(&hive->reset_lock)) { 3573 + DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress", 3574 + job->base.id, hive->hive_id); 3549 3575 return 0; 3576 + } 3550 3577 3551 3578 /* Start with adev pre asic reset first for soft reset check.*/ 3552 - amdgpu_device_lock_adev(adev); 3553 - r = amdgpu_device_pre_asic_reset(adev, 3554 - job, 3555 - &need_full_reset); 3556 - if (r) { 3557 - /*TODO Should we stop ?*/ 3558 - DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ", 3559 - r, adev->ddev->unique); 3560 - adev->asic_reset_res = r; 3579 + if (!amdgpu_device_lock_adev(adev, !hive)) { 3580 + DRM_INFO("Bailing on TDR for s_job:%llx, as another already in progress", 3581 + job->base.id); 3582 + return 0; 3561 3583 } 3562 3584 3563 3585 /* Build list of devices to reset */ 3564 - if (need_full_reset && adev->gmc.xgmi.num_physical_nodes > 1) { 3586 + if (adev->gmc.xgmi.num_physical_nodes > 1) { 3565 3587 if (!hive) { 3566 3588 amdgpu_device_unlock_adev(adev); 3567 3589 return -ENODEV; ··· 3580 3598 device_list_handle = &device_list; 3581 3599 } 3582 3600 3601 + /* block all schedulers and reset given job's ring */ 3602 + list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) { 3603 + for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { 3604 + struct amdgpu_ring *ring = tmp_adev->rings[i]; 3605 + 3606 + if (!ring || !ring->sched.thread) 3607 + continue; 3608 + 3609 + drm_sched_stop(&ring->sched, &job->base); 3610 + } 3611 + } 3612 + 3613 + 3614 + /* 3615 + * Must check guilty signal here since after this point all old 3616 + * HW fences are force signaled. 3617 + * 3618 + * job->base holds a reference to parent fence 3619 + */ 3620 + if (job && job->base.s_fence->parent && 3621 + dma_fence_is_signaled(job->base.s_fence->parent)) 3622 + job_signaled = true; 3623 + 3624 + if (!amdgpu_device_ip_need_full_reset(adev)) 3625 + device_list_handle = &device_list; 3626 + 3627 + if (job_signaled) { 3628 + dev_info(adev->dev, "Guilty job already signaled, skipping HW reset"); 3629 + goto skip_hw_reset; 3630 + } 3631 + 3632 + 3633 + /* Guilty job will be freed after this*/ 3634 + r = amdgpu_device_pre_asic_reset(adev, 3635 + job, 3636 + &need_full_reset); 3637 + if (r) { 3638 + /*TODO Should we stop ?*/ 3639 + DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ", 3640 + r, adev->ddev->unique); 3641 + adev->asic_reset_res = r; 3642 + } 3643 + 3583 3644 retry: /* Rest of adevs pre asic reset from XGMI hive. */ 3584 3645 list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) { 3585 3646 3586 3647 if (tmp_adev == adev) 3587 3648 continue; 3588 3649 3589 - amdgpu_device_lock_adev(tmp_adev); 3650 + amdgpu_device_lock_adev(tmp_adev, false); 3590 3651 r = amdgpu_device_pre_asic_reset(tmp_adev, 3591 3652 NULL, 3592 3653 &need_full_reset); ··· 3653 3628 goto retry; 3654 3629 } 3655 3630 3631 + skip_hw_reset: 3632 + 3656 3633 /* Post ASIC reset for all devs .*/ 3657 3634 list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) { 3658 - amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? 
job : NULL); 3635 + for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { 3636 + struct amdgpu_ring *ring = tmp_adev->rings[i]; 3637 + 3638 + if (!ring || !ring->sched.thread) 3639 + continue; 3640 + 3641 + /* No point to resubmit jobs if we didn't HW reset*/ 3642 + if (!tmp_adev->asic_reset_res && !job_signaled) 3643 + drm_sched_resubmit_jobs(&ring->sched); 3644 + 3645 + drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res); 3646 + } 3647 + 3648 + if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) { 3649 + drm_helper_resume_force_mode(tmp_adev->ddev); 3650 + } 3651 + 3652 + tmp_adev->asic_reset_res = 0; 3659 3653 3660 3654 if (r) { 3661 3655 /* bad news, how to tell it to userspace ? */ ··· 3687 3643 amdgpu_device_unlock_adev(tmp_adev); 3688 3644 } 3689 3645 3690 - if (hive && adev->gmc.xgmi.num_physical_nodes > 1) 3646 + if (hive) 3691 3647 mutex_unlock(&hive->reset_lock); 3692 3648 3693 3649 if (r)
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
··· 121 121 struct drm_mode_fb_cmd2 *mode_cmd, 122 122 struct drm_gem_object **gobj_p) 123 123 { 124 + const struct drm_format_info *info; 124 125 struct amdgpu_device *adev = rfbdev->adev; 125 126 struct drm_gem_object *gobj = NULL; 126 127 struct amdgpu_bo *abo = NULL; ··· 132 131 int height = mode_cmd->height; 133 132 u32 cpp; 134 133 135 - cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0); 134 + info = drm_get_format_info(adev->ddev, mode_cmd); 135 + cpp = info->cpp[0]; 136 136 137 137 /* need to align pitch with crtc limits */ 138 138 mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,
+11 -17
drivers/gpu/drm/arm/malidp_crtc.c
··· 463 463 return &state->base; 464 464 } 465 465 466 - static void malidp_crtc_reset(struct drm_crtc *crtc) 467 - { 468 - struct malidp_crtc_state *state = NULL; 469 - 470 - if (crtc->state) { 471 - state = to_malidp_crtc_state(crtc->state); 472 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 473 - } 474 - 475 - kfree(state); 476 - state = kzalloc(sizeof(*state), GFP_KERNEL); 477 - if (state) { 478 - crtc->state = &state->base; 479 - crtc->state->crtc = crtc; 480 - } 481 - } 482 - 483 466 static void malidp_crtc_destroy_state(struct drm_crtc *crtc, 484 467 struct drm_crtc_state *state) 485 468 { ··· 474 491 } 475 492 476 493 kfree(mali_state); 494 + } 495 + 496 + static void malidp_crtc_reset(struct drm_crtc *crtc) 497 + { 498 + struct malidp_crtc_state *state = 499 + kzalloc(sizeof(*state), GFP_KERNEL); 500 + 501 + if (crtc->state) 502 + malidp_crtc_destroy_state(crtc, crtc->state); 503 + 504 + __drm_atomic_helper_crtc_reset(crtc, &state->base); 477 505 } 478 506 479 507 static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
+2 -1
drivers/gpu/drm/arm/malidp_hw.c
··· 382 382 383 383 int malidp_format_get_bpp(u32 fmt) 384 384 { 385 - int bpp = drm_format_plane_cpp(fmt, 0) * 8; 385 + const struct drm_format_info *info = drm_format_info(fmt); 386 + int bpp = info->cpp[0] * 8; 386 387 387 388 if (bpp == 0) { 388 389 switch (fmt) {
+1 -1
drivers/gpu/drm/arm/malidp_mw.c
··· 158 158 return -EINVAL; 159 159 } 160 160 161 - n_planes = drm_format_num_planes(fb->format->format); 161 + n_planes = fb->format->num_planes; 162 162 for (i = 0; i < n_planes; i++) { 163 163 struct drm_gem_cma_object *obj = drm_fb_cma_get_gem_obj(fb, i); 164 164 /* memory write buffers are never rotated */
+3 -5
drivers/gpu/drm/arm/malidp_planes.c
··· 227 227 228 228 if (modifier & AFBC_SPLIT) { 229 229 if (!info->is_yuv) { 230 - if (drm_format_plane_cpp(format, 0) <= 2) { 230 + if (info->cpp[0] <= 2) { 231 231 DRM_DEBUG_KMS("RGB formats <= 16bpp are not supported with SPLIT\n"); 232 232 return false; 233 233 } 234 234 } 235 235 236 - if ((drm_format_horz_chroma_subsampling(format) != 1) || 237 - (drm_format_vert_chroma_subsampling(format) != 1)) { 236 + if ((info->hsub != 1) || (info->vsub != 1)) { 238 237 if (!(format == DRM_FORMAT_YUV420_10BIT && 239 238 (map->features & MALIDP_DEVICE_AFBC_YUV_420_10_SUPPORT_SPLIT))) { 240 239 DRM_DEBUG_KMS("Formats which are sub-sampled should never be split\n"); ··· 243 244 } 244 245 245 246 if (modifier & AFBC_CBR) { 246 - if ((drm_format_horz_chroma_subsampling(format) == 1) || 247 - (drm_format_vert_chroma_subsampling(format) == 1)) { 247 + if ((info->hsub == 1) || (info->vsub == 1)) { 248 248 DRM_DEBUG_KMS("Formats which are not sub-sampled should not have CBR set\n"); 249 249 return false; 250 250 }
+2 -1
drivers/gpu/drm/armada/armada_fb.c
··· 87 87 struct drm_framebuffer *armada_fb_create(struct drm_device *dev, 88 88 struct drm_file *dfile, const struct drm_mode_fb_cmd2 *mode) 89 89 { 90 + const struct drm_format_info *info = drm_get_format_info(dev, mode); 90 91 struct armada_gem_object *obj; 91 92 struct armada_framebuffer *dfb; 92 93 int ret; ··· 98 97 mode->pitches[2]); 99 98 100 99 /* We can only handle a single plane at the moment */ 101 - if (drm_format_num_planes(mode->pixel_format) > 1 && 100 + if (info->num_planes > 1 && 102 101 (mode->handles[0] != mode->handles[1] || 103 102 mode->handles[0] != mode->handles[2])) { 104 103 ret = -EINVAL;
+1 -2
drivers/gpu/drm/ast/Kconfig
··· 2 2 config DRM_AST 3 3 tristate "AST server chips" 4 4 depends on DRM && PCI && MMU 5 - select DRM_TTM 6 5 select DRM_KMS_HELPER 7 - select DRM_TTM 6 + select DRM_VRAM_HELPER 8 7 help 9 8 Say yes for experimental AST GPU driver. Do not enable 10 9 this driver without having a working -modesetting,
+2 -11
drivers/gpu/drm/ast/ast_drv.c
··· 205 205 206 206 static const struct file_operations ast_fops = { 207 207 .owner = THIS_MODULE, 208 - .open = drm_open, 209 - .release = drm_release, 210 - .unlocked_ioctl = drm_ioctl, 211 - .mmap = ast_mmap, 212 - .poll = drm_poll, 213 - .compat_ioctl = drm_compat_ioctl, 214 - .read = drm_read, 208 + DRM_VRAM_MM_FILE_OPERATIONS 215 209 }; 216 210 217 211 static struct drm_driver driver = { ··· 222 228 .minor = DRIVER_MINOR, 223 229 .patchlevel = DRIVER_PATCHLEVEL, 224 230 225 - .gem_free_object_unlocked = ast_gem_free_object, 226 - .dumb_create = ast_dumb_create, 227 - .dumb_map_offset = ast_dumb_mmap_offset, 228 - 231 + DRM_GEM_VRAM_DRIVER 229 232 }; 230 233 231 234 static int __init ast_init(void)
+3 -68
drivers/gpu/drm/ast/ast_drv.h
··· 31 31 #include <drm/drm_encoder.h> 32 32 #include <drm/drm_fb_helper.h> 33 33 34 - #include <drm/ttm/ttm_bo_api.h> 35 - #include <drm/ttm/ttm_bo_driver.h> 36 - #include <drm/ttm/ttm_placement.h> 37 - #include <drm/ttm/ttm_memory.h> 38 - #include <drm/ttm/ttm_module.h> 39 - 40 34 #include <drm/drm_gem.h> 35 + #include <drm/drm_gem_vram_helper.h> 36 + 37 + #include <drm/drm_vram_mm_helper.h> 41 38 42 39 #include <linux/i2c.h> 43 40 #include <linux/i2c-algo-bit.h> ··· 99 102 struct ast_fbdev *fbdev; 100 103 101 104 int fb_mtrr; 102 - 103 - struct { 104 - struct ttm_bo_device bdev; 105 - } ttm; 106 105 107 106 struct drm_gem_object *cursor_cache; 108 107 uint64_t cursor_cache_gpu_addr; ··· 256 263 struct ast_framebuffer afb; 257 264 void *sysram; 258 265 int size; 259 - struct ttm_bo_kmap_obj mapping; 260 266 int x1, y1, x2, y2; /* dirty rect */ 261 267 spinlock_t dirty_lock; 262 268 }; ··· 313 321 void ast_fbdev_set_suspend(struct drm_device *dev, int state); 314 322 void ast_fbdev_set_base(struct ast_private *ast, unsigned long gpu_addr); 315 323 316 - struct ast_bo { 317 - struct ttm_buffer_object bo; 318 - struct ttm_placement placement; 319 - struct ttm_bo_kmap_obj kmap; 320 - struct drm_gem_object gem; 321 - struct ttm_place placements[3]; 322 - int pin_count; 323 - }; 324 - #define gem_to_ast_bo(gobj) container_of((gobj), struct ast_bo, gem) 325 - 326 - static inline struct ast_bo * 327 - ast_bo(struct ttm_buffer_object *bo) 328 - { 329 - return container_of(bo, struct ast_bo, bo); 330 - } 331 - 332 - 333 - #define to_ast_obj(x) container_of(x, struct ast_gem_object, base) 334 - 335 324 #define AST_MM_ALIGN_SHIFT 4 336 325 #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1) 337 - 338 - extern int ast_dumb_create(struct drm_file *file, 339 - struct drm_device *dev, 340 - struct drm_mode_create_dumb *args); 341 - 342 - extern void ast_gem_free_object(struct drm_gem_object *obj); 343 - extern int ast_dumb_mmap_offset(struct drm_file *file, 344 - struct drm_device *dev, 345 - uint32_t handle, 346 - uint64_t *offset); 347 326 348 327 int ast_mm_init(struct ast_private *ast); 349 328 void ast_mm_fini(struct ast_private *ast); 350 329 351 - int ast_bo_create(struct drm_device *dev, int size, int align, 352 - uint32_t flags, struct ast_bo **pastbo); 353 - 354 330 int ast_gem_create(struct drm_device *dev, 355 331 u32 size, bool iskernel, 356 332 struct drm_gem_object **obj); 357 - 358 - int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr); 359 - int ast_bo_unpin(struct ast_bo *bo); 360 - 361 - static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait) 362 - { 363 - int ret; 364 - 365 - ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); 366 - if (ret) { 367 - if (ret != -ERESTARTSYS && ret != -EBUSY) 368 - DRM_ERROR("reserve failed %p\n", bo); 369 - return ret; 370 - } 371 - return 0; 372 - } 373 - 374 - static inline void ast_bo_unreserve(struct ast_bo *bo) 375 - { 376 - ttm_bo_unreserve(&bo->bo); 377 - } 378 - 379 - void ast_ttm_placement(struct ast_bo *bo, int domain); 380 - int ast_bo_push_sysram(struct ast_bo *bo); 381 - int ast_mmap(struct file *filp, struct vm_area_struct *vma); 382 333 383 334 /* ast post */ 384 335 void ast_enable_vga(struct drm_device *dev);
+25 -18
drivers/gpu/drm/ast/ast_fb.c
··· 49 49 { 50 50 int i; 51 51 struct drm_gem_object *obj; 52 - struct ast_bo *bo; 52 + struct drm_gem_vram_object *gbo; 53 53 int src_offset, dst_offset; 54 54 int bpp = afbdev->afb.base.format->cpp[0]; 55 55 int ret = -EBUSY; 56 + u8 *dst; 56 57 bool unmap = false; 57 58 bool store_for_later = false; 58 59 int x2, y2; 59 60 unsigned long flags; 60 61 61 62 obj = afbdev->afb.obj; 62 - bo = gem_to_ast_bo(obj); 63 + gbo = drm_gem_vram_of_gem(obj); 63 64 64 - /* 65 - * try and reserve the BO, if we fail with busy 66 - * then the BO is being moved and we should 67 - * store up the damage until later. 65 + /* Try to lock the BO. If we fail with -EBUSY then 66 + * the BO is being moved and we should store up the 67 + * damage until later. 68 68 */ 69 69 if (drm_can_sleep()) 70 - ret = ast_bo_reserve(bo, true); 70 + ret = drm_gem_vram_lock(gbo, true); 71 71 if (ret) { 72 72 if (ret != -EBUSY) 73 73 return; ··· 101 101 afbdev->x2 = afbdev->y2 = 0; 102 102 spin_unlock_irqrestore(&afbdev->dirty_lock, flags); 103 103 104 - if (!bo->kmap.virtual) { 105 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 106 - if (ret) { 104 + dst = drm_gem_vram_kmap(gbo, false, NULL); 105 + if (IS_ERR(dst)) { 106 + DRM_ERROR("failed to kmap fb updates\n"); 107 + goto out; 108 + } else if (!dst) { 109 + dst = drm_gem_vram_kmap(gbo, true, NULL); 110 + if (IS_ERR(dst)) { 107 111 DRM_ERROR("failed to kmap fb updates\n"); 108 - ast_bo_unreserve(bo); 109 - return; 112 + goto out; 110 113 } 111 114 unmap = true; 112 115 } 116 + 113 117 for (i = y; i <= y2; i++) { 114 118 /* assume equal stride for now */ 115 - src_offset = dst_offset = i * afbdev->afb.base.pitches[0] + (x * bpp); 116 - memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, (x2 - x + 1) * bpp); 117 - 119 + src_offset = dst_offset = 120 + i * afbdev->afb.base.pitches[0] + (x * bpp); 121 + memcpy_toio(dst + dst_offset, afbdev->sysram + src_offset, 122 + (x2 - x + 1) * bpp); 118 123 } 119 - if (unmap) 120 - ttm_bo_kunmap(&bo->kmap); 121 124 122 - ast_bo_unreserve(bo); 125 + if (unmap) 126 + drm_gem_vram_kunmap(gbo); 127 + 128 + out: 129 + drm_gem_vram_unlock(gbo); 123 130 } 124 131 125 132 static void ast_fillrect(struct fb_info *info,
+5 -72
drivers/gpu/drm/ast/ast_main.c
··· 593 593 u32 size, bool iskernel, 594 594 struct drm_gem_object **obj) 595 595 { 596 - struct ast_bo *astbo; 596 + struct drm_gem_vram_object *gbo; 597 597 int ret; 598 598 599 599 *obj = NULL; ··· 602 602 if (size == 0) 603 603 return -EINVAL; 604 604 605 - ret = ast_bo_create(dev, size, 0, 0, &astbo); 606 - if (ret) { 605 + gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false); 606 + if (IS_ERR(gbo)) { 607 + ret = PTR_ERR(gbo); 607 608 if (ret != -ERESTARTSYS) 608 609 DRM_ERROR("failed to allocate GEM object\n"); 609 610 return ret; 610 611 } 611 - *obj = &astbo->gem; 612 + *obj = &gbo->gem; 612 613 return 0; 613 614 } 614 - 615 - int ast_dumb_create(struct drm_file *file, 616 - struct drm_device *dev, 617 - struct drm_mode_create_dumb *args) 618 - { 619 - int ret; 620 - struct drm_gem_object *gobj; 621 - u32 handle; 622 - 623 - args->pitch = args->width * ((args->bpp + 7) / 8); 624 - args->size = args->pitch * args->height; 625 - 626 - ret = ast_gem_create(dev, args->size, false, 627 - &gobj); 628 - if (ret) 629 - return ret; 630 - 631 - ret = drm_gem_handle_create(file, gobj, &handle); 632 - drm_gem_object_put_unlocked(gobj); 633 - if (ret) 634 - return ret; 635 - 636 - args->handle = handle; 637 - return 0; 638 - } 639 - 640 - static void ast_bo_unref(struct ast_bo **bo) 641 - { 642 - if ((*bo) == NULL) 643 - return; 644 - ttm_bo_put(&((*bo)->bo)); 645 - *bo = NULL; 646 - } 647 - 648 - void ast_gem_free_object(struct drm_gem_object *obj) 649 - { 650 - struct ast_bo *ast_bo = gem_to_ast_bo(obj); 651 - 652 - ast_bo_unref(&ast_bo); 653 - } 654 - 655 - 656 - static inline u64 ast_bo_mmap_offset(struct ast_bo *bo) 657 - { 658 - return drm_vma_node_offset_addr(&bo->bo.vma_node); 659 - } 660 - int 661 - ast_dumb_mmap_offset(struct drm_file *file, 662 - struct drm_device *dev, 663 - uint32_t handle, 664 - uint64_t *offset) 665 - { 666 - struct drm_gem_object *obj; 667 - struct ast_bo *bo; 668 - 669 - obj = drm_gem_object_lookup(file, handle); 670 - if (obj == NULL) 671 - return -ENOENT; 672 - 673 - bo = gem_to_ast_bo(obj); 674 - *offset = ast_bo_mmap_offset(bo); 675 - 676 - drm_gem_object_put_unlocked(obj); 677 - 678 - return 0; 679 - 680 - } 681 -
+77 -55
drivers/gpu/drm/ast/ast_mode.c
··· 521 521 } 522 522 } 523 523 524 - /* ast is different - we will force move buffers out of VRAM */ 525 524 static int ast_crtc_do_set_base(struct drm_crtc *crtc, 526 525 struct drm_framebuffer *fb, 527 526 int x, int y, int atomic) ··· 528 529 struct ast_private *ast = crtc->dev->dev_private; 529 530 struct drm_gem_object *obj; 530 531 struct ast_framebuffer *ast_fb; 531 - struct ast_bo *bo; 532 + struct drm_gem_vram_object *gbo; 532 533 int ret; 533 - u64 gpu_addr; 534 + s64 gpu_addr; 535 + void *base; 534 536 535 - /* push the previous fb to system ram */ 536 537 if (!atomic && fb) { 537 538 ast_fb = to_ast_framebuffer(fb); 538 539 obj = ast_fb->obj; 539 - bo = gem_to_ast_bo(obj); 540 - ret = ast_bo_reserve(bo, false); 541 - if (ret) 542 - return ret; 543 - ast_bo_push_sysram(bo); 544 - ast_bo_unreserve(bo); 540 + gbo = drm_gem_vram_of_gem(obj); 541 + 542 + /* unmap if console */ 543 + if (&ast->fbdev->afb == ast_fb) 544 + drm_gem_vram_kunmap(gbo); 545 + drm_gem_vram_unpin(gbo); 545 546 } 546 547 547 548 ast_fb = to_ast_framebuffer(crtc->primary->fb); 548 549 obj = ast_fb->obj; 549 - bo = gem_to_ast_bo(obj); 550 + gbo = drm_gem_vram_of_gem(obj); 550 551 551 - ret = ast_bo_reserve(bo, false); 552 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 552 553 if (ret) 553 554 return ret; 554 - 555 - ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr); 556 - if (ret) { 557 - ast_bo_unreserve(bo); 558 - return ret; 555 + gpu_addr = drm_gem_vram_offset(gbo); 556 + if (gpu_addr < 0) { 557 + ret = (int)gpu_addr; 558 + goto err_drm_gem_vram_unpin; 559 559 } 560 560 561 561 if (&ast->fbdev->afb == ast_fb) { 562 562 /* if pushing console in kmap it */ 563 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 564 - if (ret) 563 + base = drm_gem_vram_kmap(gbo, true, NULL); 564 + if (IS_ERR(base)) { 565 + ret = PTR_ERR(base); 565 566 DRM_ERROR("failed to kmap fbcon\n"); 566 - else 567 + } else { 567 568 ast_fbdev_set_base(ast, gpu_addr); 569 + } 568 570 } 569 - ast_bo_unreserve(bo); 570 571 571 572 ast_set_offset_reg(crtc); 572 573 ast_set_start_address_crt1(crtc, (u32)gpu_addr); 573 574 574 575 return 0; 576 + 577 + err_drm_gem_vram_unpin: 578 + drm_gem_vram_unpin(gbo); 579 + return ret; 575 580 } 576 581 577 582 static int ast_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y, ··· 621 618 622 619 static void ast_crtc_disable(struct drm_crtc *crtc) 623 620 { 624 - int ret; 625 - 626 621 DRM_DEBUG_KMS("\n"); 627 622 ast_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 628 623 if (crtc->primary->fb) { 624 + struct ast_private *ast = crtc->dev->dev_private; 629 625 struct ast_framebuffer *ast_fb = to_ast_framebuffer(crtc->primary->fb); 630 626 struct drm_gem_object *obj = ast_fb->obj; 631 - struct ast_bo *bo = gem_to_ast_bo(obj); 627 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj); 632 628 633 - ret = ast_bo_reserve(bo, false); 634 - if (ret) 635 - return; 636 - 637 - ast_bo_push_sysram(bo); 638 - ast_bo_unreserve(bo); 629 + /* unmap if console */ 630 + if (&ast->fbdev->afb == ast_fb) 631 + drm_gem_vram_kunmap(gbo); 632 + drm_gem_vram_unpin(gbo); 639 633 } 640 634 crtc->primary->fb = NULL; 641 635 } ··· 918 918 int size; 919 919 int ret; 920 920 struct drm_gem_object *obj; 921 - struct ast_bo *bo; 922 - uint64_t gpu_addr; 921 + struct drm_gem_vram_object *gbo; 922 + s64 gpu_addr; 923 + void *base; 923 924 924 925 size = (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE) * AST_DEFAULT_HWC_NUM; 925 926 926 927 ret = ast_gem_create(dev, size, true, &obj); 927 928 if (ret) 928 929 return ret; 929 - 
bo = gem_to_ast_bo(obj); 930 - ret = ast_bo_reserve(bo, false); 931 - if (unlikely(ret != 0)) 932 - goto fail; 933 - 934 - ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr); 935 - ast_bo_unreserve(bo); 930 + gbo = drm_gem_vram_of_gem(obj); 931 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 936 932 if (ret) 937 933 goto fail; 934 + gpu_addr = drm_gem_vram_offset(gbo); 935 + if (gpu_addr < 0) { 936 + drm_gem_vram_unpin(gbo); 937 + ret = (int)gpu_addr; 938 + goto fail; 939 + } 938 940 939 941 /* kmap the object */ 940 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &ast->cache_kmap); 941 - if (ret) 942 + base = drm_gem_vram_kmap_at(gbo, true, NULL, &ast->cache_kmap); 943 + if (IS_ERR(base)) { 944 + ret = PTR_ERR(base); 942 945 goto fail; 946 + } 943 947 944 948 ast->cursor_cache = obj; 945 949 ast->cursor_cache_gpu_addr = gpu_addr; ··· 956 952 static void ast_cursor_fini(struct drm_device *dev) 957 953 { 958 954 struct ast_private *ast = dev->dev_private; 959 - ttm_bo_kunmap(&ast->cache_kmap); 955 + struct drm_gem_vram_object *gbo = 956 + drm_gem_vram_of_gem(ast->cursor_cache); 957 + drm_gem_vram_kunmap_at(gbo, &ast->cache_kmap); 960 958 drm_gem_object_put_unlocked(ast->cursor_cache); 961 959 } 962 960 ··· 1179 1173 struct ast_private *ast = crtc->dev->dev_private; 1180 1174 struct ast_crtc *ast_crtc = to_ast_crtc(crtc); 1181 1175 struct drm_gem_object *obj; 1182 - struct ast_bo *bo; 1183 - uint64_t gpu_addr; 1176 + struct drm_gem_vram_object *gbo; 1177 + s64 gpu_addr; 1184 1178 u32 csum; 1185 1179 int ret; 1186 1180 struct ttm_bo_kmap_obj uobj_map; ··· 1199 1193 DRM_ERROR("Cannot find cursor object %x for crtc\n", handle); 1200 1194 return -ENOENT; 1201 1195 } 1202 - bo = gem_to_ast_bo(obj); 1196 + gbo = drm_gem_vram_of_gem(obj); 1203 1197 1204 - ret = ast_bo_reserve(bo, false); 1198 + ret = drm_gem_vram_lock(gbo, false); 1205 1199 if (ret) 1206 1200 goto fail; 1207 1201 1208 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &uobj_map); 1209 - 1210 - src = ttm_kmap_obj_virtual(&uobj_map, &src_isiomem); 1211 - dst = ttm_kmap_obj_virtual(&ast->cache_kmap, &dst_isiomem); 1212 - 1202 + memset(&uobj_map, 0, sizeof(uobj_map)); 1203 + src = drm_gem_vram_kmap_at(gbo, true, &src_isiomem, &uobj_map); 1204 + if (IS_ERR(src)) { 1205 + ret = PTR_ERR(src); 1206 + goto fail_unlock; 1207 + } 1213 1208 if (src_isiomem == true) 1214 1209 DRM_ERROR("src cursor bo should be in main memory\n"); 1210 + 1211 + dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache), 1212 + false, &dst_isiomem, &ast->cache_kmap); 1213 + if (IS_ERR(dst)) { 1214 + ret = PTR_ERR(dst); 1215 + goto fail_unlock; 1216 + } 1215 1217 if (dst_isiomem == false) 1216 1218 DRM_ERROR("dst bo should be in VRAM\n"); 1217 1219 ··· 1228 1214 /* do data transfer to cursor cache */ 1229 1215 csum = copy_cursor_image(src, dst, width, height); 1230 1216 1217 + drm_gem_vram_kunmap_at(gbo, &uobj_map); 1218 + drm_gem_vram_unlock(gbo); 1219 + 1231 1220 /* write checksum + signature */ 1232 - ttm_bo_kunmap(&uobj_map); 1233 - ast_bo_unreserve(bo); 1234 1221 { 1235 - u8 *dst = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1222 + u8 *dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache), 1223 + false, NULL, &ast->cache_kmap); 1224 + dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1236 1225 writel(csum, dst); 1237 1226 writel(width, dst + AST_HWC_SIGNATURE_SizeX); 1238 1227 writel(height, dst + AST_HWC_SIGNATURE_SizeY); ··· 1261 1244 1262 
1245 drm_gem_object_put_unlocked(obj); 1263 1246 return 0; 1247 + 1248 + fail_unlock: 1249 + drm_gem_vram_unlock(gbo); 1264 1250 fail: 1265 1251 drm_gem_object_put_unlocked(obj); 1266 1252 return ret; ··· 1277 1257 int x_offset, y_offset; 1278 1258 u8 *sig; 1279 1259 1280 - sig = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1260 + sig = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache), 1261 + false, NULL, &ast->cache_kmap); 1262 + sig += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1281 1263 writel(x, sig + AST_HWC_SIGNATURE_X); 1282 1264 writel(y, sig + AST_HWC_SIGNATURE_Y); 1283 1265
+8 -294
drivers/gpu/drm/ast/ast_ttm.c
··· 26 26 * Authors: Dave Airlie <airlied@redhat.com> 27 27 */ 28 28 #include <drm/drmP.h> 29 - #include <drm/ttm/ttm_page_alloc.h> 30 29 31 30 #include "ast_drv.h" 32 31 33 - static inline struct ast_private * 34 - ast_bdev(struct ttm_bo_device *bd) 35 - { 36 - return container_of(bd, struct ast_private, ttm.bdev); 37 - } 38 - 39 - static void ast_bo_ttm_destroy(struct ttm_buffer_object *tbo) 40 - { 41 - struct ast_bo *bo; 42 - 43 - bo = container_of(tbo, struct ast_bo, bo); 44 - 45 - drm_gem_object_release(&bo->gem); 46 - kfree(bo); 47 - } 48 - 49 - static bool ast_ttm_bo_is_ast_bo(struct ttm_buffer_object *bo) 50 - { 51 - if (bo->destroy == &ast_bo_ttm_destroy) 52 - return true; 53 - return false; 54 - } 55 - 56 - static int 57 - ast_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 58 - struct ttm_mem_type_manager *man) 59 - { 60 - switch (type) { 61 - case TTM_PL_SYSTEM: 62 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 63 - man->available_caching = TTM_PL_MASK_CACHING; 64 - man->default_caching = TTM_PL_FLAG_CACHED; 65 - break; 66 - case TTM_PL_VRAM: 67 - man->func = &ttm_bo_manager_func; 68 - man->flags = TTM_MEMTYPE_FLAG_FIXED | 69 - TTM_MEMTYPE_FLAG_MAPPABLE; 70 - man->available_caching = TTM_PL_FLAG_UNCACHED | 71 - TTM_PL_FLAG_WC; 72 - man->default_caching = TTM_PL_FLAG_WC; 73 - break; 74 - default: 75 - DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); 76 - return -EINVAL; 77 - } 78 - return 0; 79 - } 80 - 81 - static void 82 - ast_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl) 83 - { 84 - struct ast_bo *astbo = ast_bo(bo); 85 - 86 - if (!ast_ttm_bo_is_ast_bo(bo)) 87 - return; 88 - 89 - ast_ttm_placement(astbo, TTM_PL_FLAG_SYSTEM); 90 - *pl = astbo->placement; 91 - } 92 - 93 - static int ast_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp) 94 - { 95 - struct ast_bo *astbo = ast_bo(bo); 96 - 97 - return drm_vma_node_verify_access(&astbo->gem.vma_node, 98 - filp->private_data); 99 - } 100 - 101 - static int ast_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 102 - struct ttm_mem_reg *mem) 103 - { 104 - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; 105 - struct ast_private *ast = ast_bdev(bdev); 106 - 107 - mem->bus.addr = NULL; 108 - mem->bus.offset = 0; 109 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 110 - mem->bus.base = 0; 111 - mem->bus.is_iomem = false; 112 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 113 - return -EINVAL; 114 - switch (mem->mem_type) { 115 - case TTM_PL_SYSTEM: 116 - /* system memory */ 117 - return 0; 118 - case TTM_PL_VRAM: 119 - mem->bus.offset = mem->start << PAGE_SHIFT; 120 - mem->bus.base = pci_resource_start(ast->dev->pdev, 0); 121 - mem->bus.is_iomem = true; 122 - break; 123 - default: 124 - return -EINVAL; 125 - break; 126 - } 127 - return 0; 128 - } 129 - 130 - static void ast_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) 131 - { 132 - } 133 - 134 - static void ast_ttm_backend_destroy(struct ttm_tt *tt) 135 - { 136 - ttm_tt_fini(tt); 137 - kfree(tt); 138 - } 139 - 140 - static struct ttm_backend_func ast_tt_backend_func = { 141 - .destroy = &ast_ttm_backend_destroy, 142 - }; 143 - 144 - 145 - static struct ttm_tt *ast_ttm_tt_create(struct ttm_buffer_object *bo, 146 - uint32_t page_flags) 147 - { 148 - struct ttm_tt *tt; 149 - 150 - tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL); 151 - if (tt == NULL) 152 - return NULL; 153 - tt->func = &ast_tt_backend_func; 154 - if (ttm_tt_init(tt, bo, page_flags)) { 155 - kfree(tt); 156 - return NULL; 157 - } 158 
- return tt; 159 - } 160 - 161 - struct ttm_bo_driver ast_bo_driver = { 162 - .ttm_tt_create = ast_ttm_tt_create, 163 - .init_mem_type = ast_bo_init_mem_type, 164 - .eviction_valuable = ttm_bo_eviction_valuable, 165 - .evict_flags = ast_bo_evict_flags, 166 - .move = NULL, 167 - .verify_access = ast_bo_verify_access, 168 - .io_mem_reserve = &ast_ttm_io_mem_reserve, 169 - .io_mem_free = &ast_ttm_io_mem_free, 170 - }; 171 - 172 32 int ast_mm_init(struct ast_private *ast) 173 33 { 34 + struct drm_vram_mm *vmm; 174 35 int ret; 175 36 struct drm_device *dev = ast->dev; 176 - struct ttm_bo_device *bdev = &ast->ttm.bdev; 177 37 178 - ret = ttm_bo_device_init(&ast->ttm.bdev, 179 - &ast_bo_driver, 180 - dev->anon_inode->i_mapping, 181 - true); 182 - if (ret) { 183 - DRM_ERROR("Error initialising bo driver; %d\n", ret); 184 - return ret; 185 - } 186 - 187 - ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM, 188 - ast->vram_size >> PAGE_SHIFT); 189 - if (ret) { 190 - DRM_ERROR("Failed ttm VRAM init: %d\n", ret); 38 + vmm = drm_vram_helper_alloc_mm( 39 + dev, pci_resource_start(dev->pdev, 0), 40 + ast->vram_size, &drm_gem_vram_mm_funcs); 41 + if (IS_ERR(vmm)) { 42 + ret = PTR_ERR(vmm); 43 + DRM_ERROR("Error initializing VRAM MM; %d\n", ret); 191 44 return ret; 192 45 } 193 46 ··· 56 203 { 57 204 struct drm_device *dev = ast->dev; 58 205 59 - ttm_bo_device_release(&ast->ttm.bdev); 206 + drm_vram_helper_release_mm(dev); 60 207 61 208 arch_phys_wc_del(ast->fb_mtrr); 62 209 arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0), 63 210 pci_resource_len(dev->pdev, 0)); 64 - } 65 - 66 - void ast_ttm_placement(struct ast_bo *bo, int domain) 67 - { 68 - u32 c = 0; 69 - unsigned i; 70 - 71 - bo->placement.placement = bo->placements; 72 - bo->placement.busy_placement = bo->placements; 73 - if (domain & TTM_PL_FLAG_VRAM) 74 - bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM; 75 - if (domain & TTM_PL_FLAG_SYSTEM) 76 - bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM; 77 - if (!c) 78 - bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM; 79 - bo->placement.num_placement = c; 80 - bo->placement.num_busy_placement = c; 81 - for (i = 0; i < c; ++i) { 82 - bo->placements[i].fpfn = 0; 83 - bo->placements[i].lpfn = 0; 84 - } 85 - } 86 - 87 - int ast_bo_create(struct drm_device *dev, int size, int align, 88 - uint32_t flags, struct ast_bo **pastbo) 89 - { 90 - struct ast_private *ast = dev->dev_private; 91 - struct ast_bo *astbo; 92 - size_t acc_size; 93 - int ret; 94 - 95 - astbo = kzalloc(sizeof(struct ast_bo), GFP_KERNEL); 96 - if (!astbo) 97 - return -ENOMEM; 98 - 99 - ret = drm_gem_object_init(dev, &astbo->gem, size); 100 - if (ret) 101 - goto error; 102 - 103 - astbo->bo.bdev = &ast->ttm.bdev; 104 - 105 - ast_ttm_placement(astbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM); 106 - 107 - acc_size = ttm_bo_dma_acc_size(&ast->ttm.bdev, size, 108 - sizeof(struct ast_bo)); 109 - 110 - ret = ttm_bo_init(&ast->ttm.bdev, &astbo->bo, size, 111 - ttm_bo_type_device, &astbo->placement, 112 - align >> PAGE_SHIFT, false, acc_size, 113 - NULL, NULL, ast_bo_ttm_destroy); 114 - if (ret) 115 - goto error; 116 - 117 - *pastbo = astbo; 118 - return 0; 119 - error: 120 - kfree(astbo); 121 - return ret; 122 - } 123 - 124 - static inline u64 ast_bo_gpu_offset(struct ast_bo *bo) 125 - { 126 - return bo->bo.offset; 127 - } 128 - 129 - int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr) 130 - { 131 - struct ttm_operation_ctx ctx = { false, false }; 132 - int i, ret; 133 - 134 
- if (bo->pin_count) { 135 - bo->pin_count++; 136 - if (gpu_addr) 137 - *gpu_addr = ast_bo_gpu_offset(bo); 138 - } 139 - 140 - ast_ttm_placement(bo, pl_flag); 141 - for (i = 0; i < bo->placement.num_placement; i++) 142 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 143 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 144 - if (ret) 145 - return ret; 146 - 147 - bo->pin_count = 1; 148 - if (gpu_addr) 149 - *gpu_addr = ast_bo_gpu_offset(bo); 150 - return 0; 151 - } 152 - 153 - int ast_bo_unpin(struct ast_bo *bo) 154 - { 155 - struct ttm_operation_ctx ctx = { false, false }; 156 - int i; 157 - if (!bo->pin_count) { 158 - DRM_ERROR("unpin bad %p\n", bo); 159 - return 0; 160 - } 161 - bo->pin_count--; 162 - if (bo->pin_count) 163 - return 0; 164 - 165 - for (i = 0; i < bo->placement.num_placement ; i++) 166 - bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 167 - return ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 168 - } 169 - 170 - int ast_bo_push_sysram(struct ast_bo *bo) 171 - { 172 - struct ttm_operation_ctx ctx = { false, false }; 173 - int i, ret; 174 - if (!bo->pin_count) { 175 - DRM_ERROR("unpin bad %p\n", bo); 176 - return 0; 177 - } 178 - bo->pin_count--; 179 - if (bo->pin_count) 180 - return 0; 181 - 182 - if (bo->kmap.virtual) 183 - ttm_bo_kunmap(&bo->kmap); 184 - 185 - ast_ttm_placement(bo, TTM_PL_FLAG_SYSTEM); 186 - for (i = 0; i < bo->placement.num_placement ; i++) 187 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 188 - 189 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 190 - if (ret) { 191 - DRM_ERROR("pushing to VRAM failed\n"); 192 - return ret; 193 - } 194 - return 0; 195 - } 196 - 197 - int ast_mmap(struct file *filp, struct vm_area_struct *vma) 198 - { 199 - struct drm_file *file_priv = filp->private_data; 200 - struct ast_private *ast = file_priv->minor->dev->dev_private; 201 - 202 - return ttm_bo_mmap(filp, vma, &ast->ttm.bdev); 203 211 }
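The ast hunk above is representative of the whole drm_gem_vram conversion: the per-driver TTM boilerplate (bo driver, placement tables, hand-rolled pin counting) collapses into a pair of helper calls. A minimal sketch of the resulting init/fini pattern for a hypothetical "foo" driver — everything outside the drm_vram_* calls is illustrative:

#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_vram_mm_helper.h>

static int foo_mm_init(struct foo_device *foo)
{
        struct drm_device *dev = foo->dev;
        struct drm_vram_mm *vmm;

        /* Replaces ttm_bo_device_init() + ttm_bo_init_mm(TTM_PL_VRAM). */
        vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
                                       foo->vram_size, &drm_gem_vram_mm_funcs);
        return PTR_ERR_OR_ZERO(vmm);
}

static void foo_mm_fini(struct foo_device *foo)
{
        /* Tears down the TTM device and the VRAM range in one call. */
        drm_vram_helper_release_mm(foo->dev);
}

The helper stores the new MM in dev->vram_mm, which is how the bochs_mm_fini() hunk further down can test whether initialization happened without keeping its own flag.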
+2 -7
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 603 603 const struct drm_display_mode *mode; 604 604 struct drm_crtc_state *crtc_state; 605 605 unsigned int tmp; 606 - int hsub = 1; 607 - int vsub = 1; 608 606 int ret; 609 607 int i; 610 608 ··· 640 642 if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES) 641 643 return -EINVAL; 642 644 643 - hsub = drm_format_horz_chroma_subsampling(fb->format->format); 644 - vsub = drm_format_vert_chroma_subsampling(fb->format->format); 645 - 646 645 for (i = 0; i < state->nplanes; i++) { 647 646 unsigned int offset = 0; 648 - int xdiv = i ? hsub : 1; 649 - int ydiv = i ? vsub : 1; 647 + int xdiv = i ? fb->format->hsub : 1; 648 + int ydiv = i ? fb->format->vsub : 1; 650 649 651 650 state->bpp[i] = fb->format->cpp[i]; 652 651 if (!state->bpp[i])
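With drm_format_horz/vert_chroma_subsampling() on the way out, the subsampling factors and bytes-per-pixel now come straight from the struct drm_format_info attached to the framebuffer. A small sketch of the lookup for a two-plane YUV format; the foo_hw_* call is hypothetical, the fields are the ones used in the hunk above:

#include <drm/drm_fourcc.h>

static void foo_setup_planes(unsigned int width, unsigned int height)
{
        const struct drm_format_info *info = drm_format_info(DRM_FORMAT_NV12);
        unsigned int i;

        for (i = 0; i < info->num_planes; i++) {
                /* Plane 0 is never subsampled; chroma planes use hsub/vsub. */
                unsigned int w = width / (i ? info->hsub : 1);
                unsigned int h = height / (i ? info->vsub : 1);

                foo_hw_program_plane(i, w, h, w * info->cpp[i]); /* hypothetical */
        }
}

When a framebuffer is already at hand, fb->format points at the same structure, so no fourcc lookup is needed at all — which is exactly what the atmel-hlcdc change exploits.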
+1 -1
drivers/gpu/drm/bochs/Kconfig
··· 3 3 tristate "DRM Support for bochs dispi vga interface (qemu stdvga)" 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 - select DRM_TTM 6 + select DRM_VRAM_HELPER 7 7 help 8 8 Choose this option for qemu. 9 9 If M is selected the module will be called bochs-drm.
+2 -52
drivers/gpu/drm/bochs/bochs.h
··· 10 10 #include <drm/drm_simple_kms_helper.h> 11 11 12 12 #include <drm/drm_gem.h> 13 + #include <drm/drm_gem_vram_helper.h> 13 14 14 - #include <drm/ttm/ttm_bo_driver.h> 15 - #include <drm/ttm/ttm_page_alloc.h> 15 + #include <drm/drm_vram_mm_helper.h> 16 16 17 17 /* ---------------------------------------------------------------------- */ 18 18 ··· 73 73 struct drm_device *dev; 74 74 struct drm_simple_display_pipe pipe; 75 75 struct drm_connector connector; 76 - 77 - /* ttm */ 78 - struct { 79 - struct ttm_bo_device bdev; 80 - bool initialized; 81 - } ttm; 82 76 }; 83 - 84 - struct bochs_bo { 85 - struct ttm_buffer_object bo; 86 - struct ttm_placement placement; 87 - struct ttm_bo_kmap_obj kmap; 88 - struct drm_gem_object gem; 89 - struct ttm_place placements[3]; 90 - int pin_count; 91 - }; 92 - 93 - static inline struct bochs_bo *bochs_bo(struct ttm_buffer_object *bo) 94 - { 95 - return container_of(bo, struct bochs_bo, bo); 96 - } 97 - 98 - static inline struct bochs_bo *gem_to_bochs_bo(struct drm_gem_object *gem) 99 - { 100 - return container_of(gem, struct bochs_bo, gem); 101 - } 102 - 103 - static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo) 104 - { 105 - return drm_vma_node_offset_addr(&bo->bo.vma_node); 106 - } 107 77 108 78 /* ---------------------------------------------------------------------- */ 109 79 ··· 92 122 /* bochs_mm.c */ 93 123 int bochs_mm_init(struct bochs_device *bochs); 94 124 void bochs_mm_fini(struct bochs_device *bochs); 95 - int bochs_mmap(struct file *filp, struct vm_area_struct *vma); 96 - 97 - int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel, 98 - struct drm_gem_object **obj); 99 - int bochs_gem_init_object(struct drm_gem_object *obj); 100 - void bochs_gem_free_object(struct drm_gem_object *obj); 101 - int bochs_dumb_create(struct drm_file *file, struct drm_device *dev, 102 - struct drm_mode_create_dumb *args); 103 - int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev, 104 - uint32_t handle, uint64_t *offset); 105 - 106 - int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag); 107 - int bochs_bo_unpin(struct bochs_bo *bo); 108 - 109 - int bochs_gem_prime_pin(struct drm_gem_object *obj); 110 - void bochs_gem_prime_unpin(struct drm_gem_object *obj); 111 - void *bochs_gem_prime_vmap(struct drm_gem_object *obj); 112 - void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); 113 - int bochs_gem_prime_mmap(struct drm_gem_object *obj, 114 - struct vm_area_struct *vma); 115 125 116 126 /* bochs_kms.c */ 117 127 int bochs_kms_init(struct bochs_device *bochs);
+5 -19
drivers/gpu/drm/bochs/bochs_drv.c
··· 10 10 #include <linux/slab.h> 11 11 #include <drm/drm_fb_helper.h> 12 12 #include <drm/drm_probe_helper.h> 13 + #include <drm/drm_atomic_helper.h> 13 14 14 15 #include "bochs.h" 15 16 ··· 64 63 65 64 static const struct file_operations bochs_fops = { 66 65 .owner = THIS_MODULE, 67 - .open = drm_open, 68 - .release = drm_release, 69 - .unlocked_ioctl = drm_ioctl, 70 - .compat_ioctl = drm_compat_ioctl, 71 - .poll = drm_poll, 72 - .read = drm_read, 73 - .llseek = no_llseek, 74 - .mmap = bochs_mmap, 66 + DRM_VRAM_MM_FILE_OPERATIONS 75 67 }; 76 68 77 69 static struct drm_driver bochs_driver = { ··· 76 82 .date = "20130925", 77 83 .major = 1, 78 84 .minor = 0, 79 - .gem_free_object_unlocked = bochs_gem_free_object, 80 - .dumb_create = bochs_dumb_create, 81 - .dumb_map_offset = bochs_dumb_mmap_offset, 82 - 83 - .gem_prime_export = drm_gem_prime_export, 84 - .gem_prime_import = drm_gem_prime_import, 85 - .gem_prime_pin = bochs_gem_prime_pin, 86 - .gem_prime_unpin = bochs_gem_prime_unpin, 87 - .gem_prime_vmap = bochs_gem_prime_vmap, 88 - .gem_prime_vunmap = bochs_gem_prime_vunmap, 89 - .gem_prime_mmap = bochs_gem_prime_mmap, 85 + DRM_GEM_VRAM_DRIVER, 86 + DRM_GEM_VRAM_DRIVER_PRIME, 90 87 }; 91 88 92 89 /* ---------------------------------------------------------------------- */ ··· 159 174 { 160 175 struct drm_device *dev = pci_get_drvdata(pdev); 161 176 177 + drm_atomic_helper_shutdown(dev); 162 178 drm_dev_unregister(dev); 163 179 bochs_unload(dev); 164 180 drm_dev_put(dev);
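The two macro groups do the heavy lifting here. DRM_VRAM_MM_FILE_OPERATIONS supplies the file_operations entries (open, release, ioctls and the TTM-backed mmap) that bochs previously listed by hand, and DRM_GEM_VRAM_DRIVER / DRM_GEM_VRAM_DRIVER_PRIME plug the helper's GEM, dumb-buffer and PRIME callbacks into drm_driver. A sketch of the resulting skeleton for a hypothetical driver, with illustrative feature flags:

static const struct file_operations foo_fops = {
        .owner = THIS_MODULE,
        DRM_VRAM_MM_FILE_OPERATIONS
};

static struct drm_driver foo_driver = {
        .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
        .fops = &foo_fops,
        /* the gem_free_object_unlocked/dumb_create/dumb_map_offset trio */
        DRM_GEM_VRAM_DRIVER,
        /* the gem_prime_pin/unpin/vmap/vunmap/mmap callbacks dropped above */
        DRM_GEM_VRAM_DRIVER_PRIME,
};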
+9 -9
drivers/gpu/drm/bochs/bochs_kms.c
··· 30 30 static void bochs_plane_update(struct bochs_device *bochs, 31 31 struct drm_plane_state *state) 32 32 { 33 - struct bochs_bo *bo; 33 + struct drm_gem_vram_object *gbo; 34 34 35 35 if (!state->fb || !bochs->stride) 36 36 return; 37 37 38 - bo = gem_to_bochs_bo(state->fb->obj[0]); 38 + gbo = drm_gem_vram_of_gem(state->fb->obj[0]); 39 39 bochs_hw_setbase(bochs, 40 40 state->crtc_x, 41 41 state->crtc_y, 42 - bo->bo.offset); 42 + gbo->bo.offset); 43 43 bochs_hw_setformat(bochs, state->fb->format); 44 44 } 45 45 ··· 72 72 static int bochs_pipe_prepare_fb(struct drm_simple_display_pipe *pipe, 73 73 struct drm_plane_state *new_state) 74 74 { 75 - struct bochs_bo *bo; 75 + struct drm_gem_vram_object *gbo; 76 76 77 77 if (!new_state->fb) 78 78 return 0; 79 - bo = gem_to_bochs_bo(new_state->fb->obj[0]); 80 - return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM); 79 + gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]); 80 + return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 81 81 } 82 82 83 83 static void bochs_pipe_cleanup_fb(struct drm_simple_display_pipe *pipe, 84 84 struct drm_plane_state *old_state) 85 85 { 86 - struct bochs_bo *bo; 86 + struct drm_gem_vram_object *gbo; 87 87 88 88 if (!old_state->fb) 89 89 return; 90 - bo = gem_to_bochs_bo(old_state->fb->obj[0]); 91 - bochs_bo_unpin(bo); 90 + gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]); 91 + drm_gem_vram_unpin(gbo); 92 92 } 93 93 94 94 static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
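Pinning is where the helper removes the most driver code: drm_gem_vram_pin() and drm_gem_vram_unpin() keep their own pin count, so the pin_count juggling deleted from bochs_mm.c below simply disappears. The pattern, sketched for a hypothetical scanout path — the hw call is made up, and gbo->bo.offset is the VRAM offset the hunk above programs into the display engine:

#include <drm/drm_gem_vram_helper.h>

static int foo_enable_scanout(struct foo_device *foo,
                              struct drm_framebuffer *fb)
{
        struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
        int ret;

        ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                return ret;

        foo_hw_set_base(foo, gbo->bo.offset); /* hypothetical */
        return 0;
}

The matching drm_gem_vram_unpin() belongs in the cleanup_fb path, exactly as bochs_pipe_cleanup_fb() does above.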
+7 -420
drivers/gpu/drm/bochs/bochs_mm.c
··· 7 7 8 8 #include "bochs.h" 9 9 10 - static void bochs_ttm_placement(struct bochs_bo *bo, int domain); 11 - 12 10 /* ---------------------------------------------------------------------- */ 13 - 14 - static inline struct bochs_device *bochs_bdev(struct ttm_bo_device *bd) 15 - { 16 - return container_of(bd, struct bochs_device, ttm.bdev); 17 - } 18 - 19 - static void bochs_bo_ttm_destroy(struct ttm_buffer_object *tbo) 20 - { 21 - struct bochs_bo *bo; 22 - 23 - bo = container_of(tbo, struct bochs_bo, bo); 24 - drm_gem_object_release(&bo->gem); 25 - kfree(bo); 26 - } 27 - 28 - static bool bochs_ttm_bo_is_bochs_bo(struct ttm_buffer_object *bo) 29 - { 30 - if (bo->destroy == &bochs_bo_ttm_destroy) 31 - return true; 32 - return false; 33 - } 34 - 35 - static int bochs_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 36 - struct ttm_mem_type_manager *man) 37 - { 38 - switch (type) { 39 - case TTM_PL_SYSTEM: 40 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 41 - man->available_caching = TTM_PL_MASK_CACHING; 42 - man->default_caching = TTM_PL_FLAG_CACHED; 43 - break; 44 - case TTM_PL_VRAM: 45 - man->func = &ttm_bo_manager_func; 46 - man->flags = TTM_MEMTYPE_FLAG_FIXED | 47 - TTM_MEMTYPE_FLAG_MAPPABLE; 48 - man->available_caching = TTM_PL_FLAG_UNCACHED | 49 - TTM_PL_FLAG_WC; 50 - man->default_caching = TTM_PL_FLAG_WC; 51 - break; 52 - default: 53 - DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); 54 - return -EINVAL; 55 - } 56 - return 0; 57 - } 58 - 59 - static void 60 - bochs_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl) 61 - { 62 - struct bochs_bo *bochsbo = bochs_bo(bo); 63 - 64 - if (!bochs_ttm_bo_is_bochs_bo(bo)) 65 - return; 66 - 67 - bochs_ttm_placement(bochsbo, TTM_PL_FLAG_SYSTEM); 68 - *pl = bochsbo->placement; 69 - } 70 - 71 - static int bochs_bo_verify_access(struct ttm_buffer_object *bo, 72 - struct file *filp) 73 - { 74 - struct bochs_bo *bochsbo = bochs_bo(bo); 75 - 76 - return drm_vma_node_verify_access(&bochsbo->gem.vma_node, 77 - filp->private_data); 78 - } 79 - 80 - static int bochs_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 81 - struct ttm_mem_reg *mem) 82 - { 83 - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; 84 - struct bochs_device *bochs = bochs_bdev(bdev); 85 - 86 - mem->bus.addr = NULL; 87 - mem->bus.offset = 0; 88 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 89 - mem->bus.base = 0; 90 - mem->bus.is_iomem = false; 91 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 92 - return -EINVAL; 93 - switch (mem->mem_type) { 94 - case TTM_PL_SYSTEM: 95 - /* system memory */ 96 - return 0; 97 - case TTM_PL_VRAM: 98 - mem->bus.offset = mem->start << PAGE_SHIFT; 99 - mem->bus.base = bochs->fb_base; 100 - mem->bus.is_iomem = true; 101 - break; 102 - default: 103 - return -EINVAL; 104 - break; 105 - } 106 - return 0; 107 - } 108 - 109 - static void bochs_ttm_io_mem_free(struct ttm_bo_device *bdev, 110 - struct ttm_mem_reg *mem) 111 - { 112 - } 113 - 114 - static void bochs_ttm_backend_destroy(struct ttm_tt *tt) 115 - { 116 - ttm_tt_fini(tt); 117 - kfree(tt); 118 - } 119 - 120 - static struct ttm_backend_func bochs_tt_backend_func = { 121 - .destroy = &bochs_ttm_backend_destroy, 122 - }; 123 - 124 - static struct ttm_tt *bochs_ttm_tt_create(struct ttm_buffer_object *bo, 125 - uint32_t page_flags) 126 - { 127 - struct ttm_tt *tt; 128 - 129 - tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL); 130 - if (tt == NULL) 131 - return NULL; 132 - tt->func = &bochs_tt_backend_func; 133 - if (ttm_tt_init(tt, bo, page_flags)) { 134 - 
kfree(tt); 135 - return NULL; 136 - } 137 - return tt; 138 - } 139 - 140 - static struct ttm_bo_driver bochs_bo_driver = { 141 - .ttm_tt_create = bochs_ttm_tt_create, 142 - .init_mem_type = bochs_bo_init_mem_type, 143 - .eviction_valuable = ttm_bo_eviction_valuable, 144 - .evict_flags = bochs_bo_evict_flags, 145 - .move = NULL, 146 - .verify_access = bochs_bo_verify_access, 147 - .io_mem_reserve = &bochs_ttm_io_mem_reserve, 148 - .io_mem_free = &bochs_ttm_io_mem_free, 149 - }; 150 11 151 12 int bochs_mm_init(struct bochs_device *bochs) 152 13 { 153 - struct ttm_bo_device *bdev = &bochs->ttm.bdev; 154 - int ret; 14 + struct drm_vram_mm *vmm; 155 15 156 - ret = ttm_bo_device_init(&bochs->ttm.bdev, 157 - &bochs_bo_driver, 158 - bochs->dev->anon_inode->i_mapping, 159 - true); 160 - if (ret) { 161 - DRM_ERROR("Error initialising bo driver; %d\n", ret); 162 - return ret; 163 - } 164 - 165 - ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM, 166 - bochs->fb_size >> PAGE_SHIFT); 167 - if (ret) { 168 - DRM_ERROR("Failed ttm VRAM init: %d\n", ret); 169 - return ret; 170 - } 171 - 172 - bochs->ttm.initialized = true; 173 - return 0; 16 + vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base, 17 + bochs->fb_size, 18 + &drm_gem_vram_mm_funcs); 19 + return PTR_ERR_OR_ZERO(vmm); 174 20 } 175 21 176 22 void bochs_mm_fini(struct bochs_device *bochs) 177 23 { 178 - if (!bochs->ttm.initialized) 24 + if (!bochs->dev->vram_mm) 179 25 return; 180 26 181 - ttm_bo_device_release(&bochs->ttm.bdev); 182 - bochs->ttm.initialized = false; 183 - } 184 - 185 - static void bochs_ttm_placement(struct bochs_bo *bo, int domain) 186 - { 187 - unsigned i; 188 - u32 c = 0; 189 - bo->placement.placement = bo->placements; 190 - bo->placement.busy_placement = bo->placements; 191 - if (domain & TTM_PL_FLAG_VRAM) { 192 - bo->placements[c++].flags = TTM_PL_FLAG_WC 193 - | TTM_PL_FLAG_UNCACHED 194 - | TTM_PL_FLAG_VRAM; 195 - } 196 - if (domain & TTM_PL_FLAG_SYSTEM) { 197 - bo->placements[c++].flags = TTM_PL_MASK_CACHING 198 - | TTM_PL_FLAG_SYSTEM; 199 - } 200 - if (!c) { 201 - bo->placements[c++].flags = TTM_PL_MASK_CACHING 202 - | TTM_PL_FLAG_SYSTEM; 203 - } 204 - for (i = 0; i < c; ++i) { 205 - bo->placements[i].fpfn = 0; 206 - bo->placements[i].lpfn = 0; 207 - } 208 - bo->placement.num_placement = c; 209 - bo->placement.num_busy_placement = c; 210 - } 211 - 212 - int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag) 213 - { 214 - struct ttm_operation_ctx ctx = { false, false }; 215 - int i, ret; 216 - 217 - if (bo->pin_count) { 218 - bo->pin_count++; 219 - return 0; 220 - } 221 - 222 - bochs_ttm_placement(bo, pl_flag); 223 - for (i = 0; i < bo->placement.num_placement; i++) 224 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 225 - ret = ttm_bo_reserve(&bo->bo, true, false, NULL); 226 - if (ret) 227 - return ret; 228 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 229 - ttm_bo_unreserve(&bo->bo); 230 - if (ret) 231 - return ret; 232 - 233 - bo->pin_count = 1; 234 - return 0; 235 - } 236 - 237 - int bochs_bo_unpin(struct bochs_bo *bo) 238 - { 239 - struct ttm_operation_ctx ctx = { false, false }; 240 - int i, ret; 241 - 242 - if (!bo->pin_count) { 243 - DRM_ERROR("unpin bad %p\n", bo); 244 - return 0; 245 - } 246 - bo->pin_count--; 247 - 248 - if (bo->pin_count) 249 - return 0; 250 - 251 - for (i = 0; i < bo->placement.num_placement; i++) 252 - bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 253 - ret = ttm_bo_reserve(&bo->bo, true, false, NULL); 254 - if (ret) 255 - return ret; 256 - ret = ttm_bo_validate(&bo->bo, 
&bo->placement, &ctx); 257 - ttm_bo_unreserve(&bo->bo); 258 - if (ret) 259 - return ret; 260 - 261 - return 0; 262 - } 263 - 264 - int bochs_mmap(struct file *filp, struct vm_area_struct *vma) 265 - { 266 - struct drm_file *file_priv = filp->private_data; 267 - struct bochs_device *bochs = file_priv->minor->dev->dev_private; 268 - 269 - return ttm_bo_mmap(filp, vma, &bochs->ttm.bdev); 270 - } 271 - 272 - /* ---------------------------------------------------------------------- */ 273 - 274 - static int bochs_bo_create(struct drm_device *dev, int size, int align, 275 - uint32_t flags, struct bochs_bo **pbochsbo) 276 - { 277 - struct bochs_device *bochs = dev->dev_private; 278 - struct bochs_bo *bochsbo; 279 - size_t acc_size; 280 - int ret; 281 - 282 - bochsbo = kzalloc(sizeof(struct bochs_bo), GFP_KERNEL); 283 - if (!bochsbo) 284 - return -ENOMEM; 285 - 286 - ret = drm_gem_object_init(dev, &bochsbo->gem, size); 287 - if (ret) { 288 - kfree(bochsbo); 289 - return ret; 290 - } 291 - 292 - bochsbo->bo.bdev = &bochs->ttm.bdev; 293 - bochsbo->bo.bdev->dev_mapping = dev->anon_inode->i_mapping; 294 - 295 - bochs_ttm_placement(bochsbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM); 296 - 297 - acc_size = ttm_bo_dma_acc_size(&bochs->ttm.bdev, size, 298 - sizeof(struct bochs_bo)); 299 - 300 - ret = ttm_bo_init(&bochs->ttm.bdev, &bochsbo->bo, size, 301 - ttm_bo_type_device, &bochsbo->placement, 302 - align >> PAGE_SHIFT, false, acc_size, 303 - NULL, NULL, bochs_bo_ttm_destroy); 304 - if (ret) 305 - return ret; 306 - 307 - *pbochsbo = bochsbo; 308 - return 0; 309 - } 310 - 311 - int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel, 312 - struct drm_gem_object **obj) 313 - { 314 - struct bochs_bo *bochsbo; 315 - int ret; 316 - 317 - *obj = NULL; 318 - 319 - size = PAGE_ALIGN(size); 320 - if (size == 0) 321 - return -EINVAL; 322 - 323 - ret = bochs_bo_create(dev, size, 0, 0, &bochsbo); 324 - if (ret) { 325 - if (ret != -ERESTARTSYS) 326 - DRM_ERROR("failed to allocate GEM object\n"); 327 - return ret; 328 - } 329 - *obj = &bochsbo->gem; 330 - return 0; 331 - } 332 - 333 - int bochs_dumb_create(struct drm_file *file, struct drm_device *dev, 334 - struct drm_mode_create_dumb *args) 335 - { 336 - struct drm_gem_object *gobj; 337 - u32 handle; 338 - int ret; 339 - 340 - args->pitch = args->width * ((args->bpp + 7) / 8); 341 - args->size = args->pitch * args->height; 342 - 343 - ret = bochs_gem_create(dev, args->size, false, 344 - &gobj); 345 - if (ret) 346 - return ret; 347 - 348 - ret = drm_gem_handle_create(file, gobj, &handle); 349 - drm_gem_object_put_unlocked(gobj); 350 - if (ret) 351 - return ret; 352 - 353 - args->handle = handle; 354 - return 0; 355 - } 356 - 357 - static void bochs_bo_unref(struct bochs_bo **bo) 358 - { 359 - struct ttm_buffer_object *tbo; 360 - 361 - if ((*bo) == NULL) 362 - return; 363 - 364 - tbo = &((*bo)->bo); 365 - ttm_bo_put(tbo); 366 - *bo = NULL; 367 - } 368 - 369 - void bochs_gem_free_object(struct drm_gem_object *obj) 370 - { 371 - struct bochs_bo *bochs_bo = gem_to_bochs_bo(obj); 372 - 373 - bochs_bo_unref(&bochs_bo); 374 - } 375 - 376 - int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev, 377 - uint32_t handle, uint64_t *offset) 378 - { 379 - struct drm_gem_object *obj; 380 - struct bochs_bo *bo; 381 - 382 - obj = drm_gem_object_lookup(file, handle); 383 - if (obj == NULL) 384 - return -ENOENT; 385 - 386 - bo = gem_to_bochs_bo(obj); 387 - *offset = bochs_bo_mmap_offset(bo); 388 - 389 - drm_gem_object_put_unlocked(obj); 390 - return 0; 391 
- } 392 - 393 - /* ---------------------------------------------------------------------- */ 394 - 395 - int bochs_gem_prime_pin(struct drm_gem_object *obj) 396 - { 397 - struct bochs_bo *bo = gem_to_bochs_bo(obj); 398 - 399 - return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM); 400 - } 401 - 402 - void bochs_gem_prime_unpin(struct drm_gem_object *obj) 403 - { 404 - struct bochs_bo *bo = gem_to_bochs_bo(obj); 405 - 406 - bochs_bo_unpin(bo); 407 - } 408 - 409 - void *bochs_gem_prime_vmap(struct drm_gem_object *obj) 410 - { 411 - struct bochs_bo *bo = gem_to_bochs_bo(obj); 412 - bool is_iomem; 413 - int ret; 414 - 415 - ret = bochs_bo_pin(bo, TTM_PL_FLAG_VRAM); 416 - if (ret) 417 - return NULL; 418 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 419 - if (ret) { 420 - bochs_bo_unpin(bo); 421 - return NULL; 422 - } 423 - return ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); 424 - } 425 - 426 - void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) 427 - { 428 - struct bochs_bo *bo = gem_to_bochs_bo(obj); 429 - 430 - ttm_bo_kunmap(&bo->kmap); 431 - bochs_bo_unpin(bo); 432 - } 433 - 434 - int bochs_gem_prime_mmap(struct drm_gem_object *obj, 435 - struct vm_area_struct *vma) 436 - { 437 - struct bochs_bo *bo = gem_to_bochs_bo(obj); 438 - 439 - bo->gem.vma_node.vm_node.start = bo->bo.vma_node.vm_node.start; 440 - return drm_gem_prime_mmap(obj, vma); 27 + drm_vram_helper_release_mm(bochs->dev); 441 28 }
+1 -2
drivers/gpu/drm/bridge/panel.c
··· 9 9 */ 10 10 11 11 #include <drm/drmP.h> 12 - #include <drm/drm_panel.h> 13 12 #include <drm/drm_atomic_helper.h> 14 13 #include <drm/drm_connector.h> 15 14 #include <drm/drm_encoder.h> 16 15 #include <drm/drm_modeset_helper_vtables.h> 17 - #include <drm/drm_probe_helper.h> 18 16 #include <drm/drm_panel.h> 17 + #include <drm/drm_probe_helper.h> 19 18 20 19 struct panel_bridge { 21 20 struct drm_bridge bridge;
-250
drivers/gpu/drm/cirrus/cirrus_drv.h
··· 1 - /* 2 - * Copyright 2012 Red Hat 3 - * 4 - * This file is subject to the terms and conditions of the GNU General 5 - * Public License version 2. See the file COPYING in the main 6 - * directory of this archive for more details. 7 - * 8 - * Authors: Matthew Garrett 9 - * Dave Airlie 10 - */ 11 - #ifndef __CIRRUS_DRV_H__ 12 - #define __CIRRUS_DRV_H__ 13 - 14 - #include <video/vga.h> 15 - 16 - #include <drm/drm_encoder.h> 17 - #include <drm/drm_fb_helper.h> 18 - 19 - #include <drm/ttm/ttm_bo_api.h> 20 - #include <drm/ttm/ttm_bo_driver.h> 21 - #include <drm/ttm/ttm_placement.h> 22 - #include <drm/ttm/ttm_memory.h> 23 - #include <drm/ttm/ttm_module.h> 24 - 25 - #include <drm/drm_gem.h> 26 - 27 - #define DRIVER_AUTHOR "Matthew Garrett" 28 - 29 - #define DRIVER_NAME "cirrus" 30 - #define DRIVER_DESC "qemu Cirrus emulation" 31 - #define DRIVER_DATE "20110418" 32 - 33 - #define DRIVER_MAJOR 1 34 - #define DRIVER_MINOR 0 35 - #define DRIVER_PATCHLEVEL 0 36 - 37 - #define CIRRUSFB_CONN_LIMIT 1 38 - 39 - #define RREG8(reg) ioread8(((void __iomem *)cdev->rmmio) + (reg)) 40 - #define WREG8(reg, v) iowrite8(v, ((void __iomem *)cdev->rmmio) + (reg)) 41 - #define RREG32(reg) ioread32(((void __iomem *)cdev->rmmio) + (reg)) 42 - #define WREG32(reg, v) iowrite32(v, ((void __iomem *)cdev->rmmio) + (reg)) 43 - 44 - #define SEQ_INDEX 4 45 - #define SEQ_DATA 5 46 - 47 - #define WREG_SEQ(reg, v) \ 48 - do { \ 49 - WREG8(SEQ_INDEX, reg); \ 50 - WREG8(SEQ_DATA, v); \ 51 - } while (0) \ 52 - 53 - #define CRT_INDEX 0x14 54 - #define CRT_DATA 0x15 55 - 56 - #define WREG_CRT(reg, v) \ 57 - do { \ 58 - WREG8(CRT_INDEX, reg); \ 59 - WREG8(CRT_DATA, v); \ 60 - } while (0) \ 61 - 62 - #define GFX_INDEX 0xe 63 - #define GFX_DATA 0xf 64 - 65 - #define WREG_GFX(reg, v) \ 66 - do { \ 67 - WREG8(GFX_INDEX, reg); \ 68 - WREG8(GFX_DATA, v); \ 69 - } while (0) \ 70 - 71 - /* 72 - * Cirrus has a "hidden" DAC register that can be accessed by writing to 73 - * the pixel mask register to reset the state, then reading from the register 74 - * four times. 
The next write will then pass to the DAC 75 - */ 76 - #define VGA_DAC_MASK 0x6 77 - 78 - #define WREG_HDR(v) \ 79 - do { \ 80 - RREG8(VGA_DAC_MASK); \ 81 - RREG8(VGA_DAC_MASK); \ 82 - RREG8(VGA_DAC_MASK); \ 83 - RREG8(VGA_DAC_MASK); \ 84 - WREG8(VGA_DAC_MASK, v); \ 85 - } while (0) \ 86 - 87 - 88 - #define CIRRUS_MAX_FB_HEIGHT 4096 89 - #define CIRRUS_MAX_FB_WIDTH 4096 90 - 91 - #define CIRRUS_DPMS_CLEARED (-1) 92 - 93 - #define to_cirrus_crtc(x) container_of(x, struct cirrus_crtc, base) 94 - #define to_cirrus_encoder(x) container_of(x, struct cirrus_encoder, base) 95 - 96 - struct cirrus_crtc { 97 - struct drm_crtc base; 98 - int last_dpms; 99 - bool enabled; 100 - }; 101 - 102 - struct cirrus_fbdev; 103 - struct cirrus_mode_info { 104 - struct cirrus_crtc *crtc; 105 - /* pointer to fbdev info structure */ 106 - struct cirrus_fbdev *gfbdev; 107 - }; 108 - 109 - struct cirrus_encoder { 110 - struct drm_encoder base; 111 - int last_dpms; 112 - }; 113 - 114 - struct cirrus_connector { 115 - struct drm_connector base; 116 - }; 117 - 118 - struct cirrus_mc { 119 - resource_size_t vram_size; 120 - resource_size_t vram_base; 121 - }; 122 - 123 - struct cirrus_device { 124 - struct drm_device *dev; 125 - unsigned long flags; 126 - 127 - resource_size_t rmmio_base; 128 - resource_size_t rmmio_size; 129 - void __iomem *rmmio; 130 - 131 - struct cirrus_mc mc; 132 - struct cirrus_mode_info mode_info; 133 - 134 - int num_crtc; 135 - int fb_mtrr; 136 - 137 - struct { 138 - struct ttm_bo_device bdev; 139 - } ttm; 140 - bool mm_inited; 141 - }; 142 - 143 - 144 - struct cirrus_fbdev { 145 - struct drm_fb_helper helper; /* must be first */ 146 - struct drm_framebuffer *gfb; 147 - void *sysram; 148 - int size; 149 - int x1, y1, x2, y2; /* dirty rect */ 150 - spinlock_t dirty_lock; 151 - }; 152 - 153 - struct cirrus_bo { 154 - struct ttm_buffer_object bo; 155 - struct ttm_placement placement; 156 - struct ttm_bo_kmap_obj kmap; 157 - struct drm_gem_object gem; 158 - struct ttm_place placements[3]; 159 - int pin_count; 160 - }; 161 - #define gem_to_cirrus_bo(gobj) container_of((gobj), struct cirrus_bo, gem) 162 - 163 - static inline struct cirrus_bo * 164 - cirrus_bo(struct ttm_buffer_object *bo) 165 - { 166 - return container_of(bo, struct cirrus_bo, bo); 167 - } 168 - 169 - 170 - #define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base) 171 - 172 - /* cirrus_main.c */ 173 - int cirrus_device_init(struct cirrus_device *cdev, 174 - struct drm_device *ddev, 175 - struct pci_dev *pdev, 176 - uint32_t flags); 177 - void cirrus_device_fini(struct cirrus_device *cdev); 178 - void cirrus_gem_free_object(struct drm_gem_object *obj); 179 - int cirrus_dumb_mmap_offset(struct drm_file *file, 180 - struct drm_device *dev, 181 - uint32_t handle, 182 - uint64_t *offset); 183 - int cirrus_gem_create(struct drm_device *dev, 184 - u32 size, bool iskernel, 185 - struct drm_gem_object **obj); 186 - int cirrus_dumb_create(struct drm_file *file, 187 - struct drm_device *dev, 188 - struct drm_mode_create_dumb *args); 189 - 190 - int cirrus_framebuffer_init(struct drm_device *dev, 191 - struct drm_framebuffer *gfb, 192 - const struct drm_mode_fb_cmd2 *mode_cmd, 193 - struct drm_gem_object *obj); 194 - 195 - bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height, 196 - int bpp, int pitch); 197 - 198 - /* cirrus_display.c */ 199 - int cirrus_modeset_init(struct cirrus_device *cdev); 200 - void cirrus_modeset_fini(struct cirrus_device *cdev); 201 - 202 - /* cirrus_fbdev.c */ 203 - int 
cirrus_fbdev_init(struct cirrus_device *cdev); 204 - void cirrus_fbdev_fini(struct cirrus_device *cdev); 205 - 206 - 207 - 208 - /* cirrus_irq.c */ 209 - void cirrus_driver_irq_preinstall(struct drm_device *dev); 210 - int cirrus_driver_irq_postinstall(struct drm_device *dev); 211 - void cirrus_driver_irq_uninstall(struct drm_device *dev); 212 - irqreturn_t cirrus_driver_irq_handler(int irq, void *arg); 213 - 214 - /* cirrus_kms.c */ 215 - int cirrus_driver_load(struct drm_device *dev, unsigned long flags); 216 - void cirrus_driver_unload(struct drm_device *dev); 217 - extern struct drm_ioctl_desc cirrus_ioctls[]; 218 - extern int cirrus_max_ioctl; 219 - 220 - int cirrus_mm_init(struct cirrus_device *cirrus); 221 - void cirrus_mm_fini(struct cirrus_device *cirrus); 222 - void cirrus_ttm_placement(struct cirrus_bo *bo, int domain); 223 - int cirrus_bo_create(struct drm_device *dev, int size, int align, 224 - uint32_t flags, struct cirrus_bo **pcirrusbo); 225 - int cirrus_mmap(struct file *filp, struct vm_area_struct *vma); 226 - 227 - static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait) 228 - { 229 - int ret; 230 - 231 - ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); 232 - if (ret) { 233 - if (ret != -ERESTARTSYS && ret != -EBUSY) 234 - DRM_ERROR("reserve failed %p\n", bo); 235 - return ret; 236 - } 237 - return 0; 238 - } 239 - 240 - static inline void cirrus_bo_unreserve(struct cirrus_bo *bo) 241 - { 242 - ttm_bo_unreserve(&bo->bo); 243 - } 244 - 245 - int cirrus_bo_push_sysram(struct cirrus_bo *bo); 246 - int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr); 247 - 248 - extern int cirrus_bpp; 249 - 250 - #endif /* __CIRRUS_DRV_H__ */
-337
drivers/gpu/drm/cirrus/cirrus_ttm.c
··· 1 - /* 2 - * Copyright 2012 Red Hat Inc. 3 - * 4 - * Permission is hereby granted, free of charge, to any person obtaining a 5 - * copy of this software and associated documentation files (the 6 - * "Software"), to deal in the Software without restriction, including 7 - * without limitation the rights to use, copy, modify, merge, publish, 8 - * distribute, sub license, and/or sell copies of the Software, and to 9 - * permit persons to whom the Software is furnished to do so, subject to 10 - * the following conditions: 11 - * 12 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 13 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 14 - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 15 - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 16 - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 17 - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 18 - * USE OR OTHER DEALINGS IN THE SOFTWARE. 19 - * 20 - * The above copyright notice and this permission notice (including the 21 - * next paragraph) shall be included in all copies or substantial portions 22 - * of the Software. 23 - * 24 - */ 25 - /* 26 - * Authors: Dave Airlie <airlied@redhat.com> 27 - */ 28 - #include <drm/drmP.h> 29 - #include <drm/ttm/ttm_page_alloc.h> 30 - 31 - #include "cirrus_drv.h" 32 - 33 - static inline struct cirrus_device * 34 - cirrus_bdev(struct ttm_bo_device *bd) 35 - { 36 - return container_of(bd, struct cirrus_device, ttm.bdev); 37 - } 38 - 39 - static void cirrus_bo_ttm_destroy(struct ttm_buffer_object *tbo) 40 - { 41 - struct cirrus_bo *bo; 42 - 43 - bo = container_of(tbo, struct cirrus_bo, bo); 44 - 45 - drm_gem_object_release(&bo->gem); 46 - kfree(bo); 47 - } 48 - 49 - static bool cirrus_ttm_bo_is_cirrus_bo(struct ttm_buffer_object *bo) 50 - { 51 - if (bo->destroy == &cirrus_bo_ttm_destroy) 52 - return true; 53 - return false; 54 - } 55 - 56 - static int 57 - cirrus_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 58 - struct ttm_mem_type_manager *man) 59 - { 60 - switch (type) { 61 - case TTM_PL_SYSTEM: 62 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 63 - man->available_caching = TTM_PL_MASK_CACHING; 64 - man->default_caching = TTM_PL_FLAG_CACHED; 65 - break; 66 - case TTM_PL_VRAM: 67 - man->func = &ttm_bo_manager_func; 68 - man->flags = TTM_MEMTYPE_FLAG_FIXED | 69 - TTM_MEMTYPE_FLAG_MAPPABLE; 70 - man->available_caching = TTM_PL_FLAG_UNCACHED | 71 - TTM_PL_FLAG_WC; 72 - man->default_caching = TTM_PL_FLAG_WC; 73 - break; 74 - default: 75 - DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); 76 - return -EINVAL; 77 - } 78 - return 0; 79 - } 80 - 81 - static void 82 - cirrus_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl) 83 - { 84 - struct cirrus_bo *cirrusbo = cirrus_bo(bo); 85 - 86 - if (!cirrus_ttm_bo_is_cirrus_bo(bo)) 87 - return; 88 - 89 - cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_SYSTEM); 90 - *pl = cirrusbo->placement; 91 - } 92 - 93 - static int cirrus_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp) 94 - { 95 - struct cirrus_bo *cirrusbo = cirrus_bo(bo); 96 - 97 - return drm_vma_node_verify_access(&cirrusbo->gem.vma_node, 98 - filp->private_data); 99 - } 100 - 101 - static int cirrus_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 102 - struct ttm_mem_reg *mem) 103 - { 104 - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; 105 - struct cirrus_device *cirrus = 
cirrus_bdev(bdev); 106 - 107 - mem->bus.addr = NULL; 108 - mem->bus.offset = 0; 109 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 110 - mem->bus.base = 0; 111 - mem->bus.is_iomem = false; 112 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 113 - return -EINVAL; 114 - switch (mem->mem_type) { 115 - case TTM_PL_SYSTEM: 116 - /* system memory */ 117 - return 0; 118 - case TTM_PL_VRAM: 119 - mem->bus.offset = mem->start << PAGE_SHIFT; 120 - mem->bus.base = pci_resource_start(cirrus->dev->pdev, 0); 121 - mem->bus.is_iomem = true; 122 - break; 123 - default: 124 - return -EINVAL; 125 - break; 126 - } 127 - return 0; 128 - } 129 - 130 - static void cirrus_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) 131 - { 132 - } 133 - 134 - static void cirrus_ttm_backend_destroy(struct ttm_tt *tt) 135 - { 136 - ttm_tt_fini(tt); 137 - kfree(tt); 138 - } 139 - 140 - static struct ttm_backend_func cirrus_tt_backend_func = { 141 - .destroy = &cirrus_ttm_backend_destroy, 142 - }; 143 - 144 - 145 - static struct ttm_tt *cirrus_ttm_tt_create(struct ttm_buffer_object *bo, 146 - uint32_t page_flags) 147 - { 148 - struct ttm_tt *tt; 149 - 150 - tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL); 151 - if (tt == NULL) 152 - return NULL; 153 - tt->func = &cirrus_tt_backend_func; 154 - if (ttm_tt_init(tt, bo, page_flags)) { 155 - kfree(tt); 156 - return NULL; 157 - } 158 - return tt; 159 - } 160 - 161 - struct ttm_bo_driver cirrus_bo_driver = { 162 - .ttm_tt_create = cirrus_ttm_tt_create, 163 - .init_mem_type = cirrus_bo_init_mem_type, 164 - .eviction_valuable = ttm_bo_eviction_valuable, 165 - .evict_flags = cirrus_bo_evict_flags, 166 - .move = NULL, 167 - .verify_access = cirrus_bo_verify_access, 168 - .io_mem_reserve = &cirrus_ttm_io_mem_reserve, 169 - .io_mem_free = &cirrus_ttm_io_mem_free, 170 - }; 171 - 172 - int cirrus_mm_init(struct cirrus_device *cirrus) 173 - { 174 - int ret; 175 - struct drm_device *dev = cirrus->dev; 176 - struct ttm_bo_device *bdev = &cirrus->ttm.bdev; 177 - 178 - ret = ttm_bo_device_init(&cirrus->ttm.bdev, 179 - &cirrus_bo_driver, 180 - dev->anon_inode->i_mapping, 181 - true); 182 - if (ret) { 183 - DRM_ERROR("Error initialising bo driver; %d\n", ret); 184 - return ret; 185 - } 186 - 187 - ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM, 188 - cirrus->mc.vram_size >> PAGE_SHIFT); 189 - if (ret) { 190 - DRM_ERROR("Failed ttm VRAM init: %d\n", ret); 191 - return ret; 192 - } 193 - 194 - arch_io_reserve_memtype_wc(pci_resource_start(dev->pdev, 0), 195 - pci_resource_len(dev->pdev, 0)); 196 - 197 - cirrus->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0), 198 - pci_resource_len(dev->pdev, 0)); 199 - 200 - cirrus->mm_inited = true; 201 - return 0; 202 - } 203 - 204 - void cirrus_mm_fini(struct cirrus_device *cirrus) 205 - { 206 - struct drm_device *dev = cirrus->dev; 207 - 208 - if (!cirrus->mm_inited) 209 - return; 210 - 211 - ttm_bo_device_release(&cirrus->ttm.bdev); 212 - 213 - arch_phys_wc_del(cirrus->fb_mtrr); 214 - cirrus->fb_mtrr = 0; 215 - arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0), 216 - pci_resource_len(dev->pdev, 0)); 217 - } 218 - 219 - void cirrus_ttm_placement(struct cirrus_bo *bo, int domain) 220 - { 221 - u32 c = 0; 222 - unsigned i; 223 - bo->placement.placement = bo->placements; 224 - bo->placement.busy_placement = bo->placements; 225 - if (domain & TTM_PL_FLAG_VRAM) 226 - bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM; 227 - if (domain & TTM_PL_FLAG_SYSTEM) 228 - bo->placements[c++].flags = 
TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM; 229 - if (!c) 230 - bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM; 231 - bo->placement.num_placement = c; 232 - bo->placement.num_busy_placement = c; 233 - for (i = 0; i < c; ++i) { 234 - bo->placements[i].fpfn = 0; 235 - bo->placements[i].lpfn = 0; 236 - } 237 - } 238 - 239 - int cirrus_bo_create(struct drm_device *dev, int size, int align, 240 - uint32_t flags, struct cirrus_bo **pcirrusbo) 241 - { 242 - struct cirrus_device *cirrus = dev->dev_private; 243 - struct cirrus_bo *cirrusbo; 244 - size_t acc_size; 245 - int ret; 246 - 247 - cirrusbo = kzalloc(sizeof(struct cirrus_bo), GFP_KERNEL); 248 - if (!cirrusbo) 249 - return -ENOMEM; 250 - 251 - ret = drm_gem_object_init(dev, &cirrusbo->gem, size); 252 - if (ret) { 253 - kfree(cirrusbo); 254 - return ret; 255 - } 256 - 257 - cirrusbo->bo.bdev = &cirrus->ttm.bdev; 258 - 259 - cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM); 260 - 261 - acc_size = ttm_bo_dma_acc_size(&cirrus->ttm.bdev, size, 262 - sizeof(struct cirrus_bo)); 263 - 264 - ret = ttm_bo_init(&cirrus->ttm.bdev, &cirrusbo->bo, size, 265 - ttm_bo_type_device, &cirrusbo->placement, 266 - align >> PAGE_SHIFT, false, acc_size, 267 - NULL, NULL, cirrus_bo_ttm_destroy); 268 - if (ret) 269 - return ret; 270 - 271 - *pcirrusbo = cirrusbo; 272 - return 0; 273 - } 274 - 275 - static inline u64 cirrus_bo_gpu_offset(struct cirrus_bo *bo) 276 - { 277 - return bo->bo.offset; 278 - } 279 - 280 - int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr) 281 - { 282 - struct ttm_operation_ctx ctx = { false, false }; 283 - int i, ret; 284 - 285 - if (bo->pin_count) { 286 - bo->pin_count++; 287 - if (gpu_addr) 288 - *gpu_addr = cirrus_bo_gpu_offset(bo); 289 - } 290 - 291 - cirrus_ttm_placement(bo, pl_flag); 292 - for (i = 0; i < bo->placement.num_placement; i++) 293 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 294 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 295 - if (ret) 296 - return ret; 297 - 298 - bo->pin_count = 1; 299 - if (gpu_addr) 300 - *gpu_addr = cirrus_bo_gpu_offset(bo); 301 - return 0; 302 - } 303 - 304 - int cirrus_bo_push_sysram(struct cirrus_bo *bo) 305 - { 306 - struct ttm_operation_ctx ctx = { false, false }; 307 - int i, ret; 308 - if (!bo->pin_count) { 309 - DRM_ERROR("unpin bad %p\n", bo); 310 - return 0; 311 - } 312 - bo->pin_count--; 313 - if (bo->pin_count) 314 - return 0; 315 - 316 - if (bo->kmap.virtual) 317 - ttm_bo_kunmap(&bo->kmap); 318 - 319 - cirrus_ttm_placement(bo, TTM_PL_FLAG_SYSTEM); 320 - for (i = 0; i < bo->placement.num_placement ; i++) 321 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 322 - 323 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 324 - if (ret) { 325 - DRM_ERROR("pushing to VRAM failed\n"); 326 - return ret; 327 - } 328 - return 0; 329 - } 330 - 331 - int cirrus_mmap(struct file *filp, struct vm_area_struct *vma) 332 - { 333 - struct drm_file *file_priv = filp->private_data; 334 - struct cirrus_device *cirrus = file_priv->minor->dev->dev_private; 335 - 336 - return ttm_bo_mmap(filp, vma, &cirrus->ttm.bdev); 337 - }
+1 -1
drivers/gpu/drm/drm_atomic_helper.c
··· 1423 1423 ret = wait_event_timeout(dev->vblank[i].queue, 1424 1424 old_state->crtcs[i].last_vblank_count != 1425 1425 drm_crtc_vblank_count(crtc), 1426 - msecs_to_jiffies(50)); 1426 + msecs_to_jiffies(100)); 1427 1427 1428 1428 WARN(!ret, "[CRTC:%d:%s] vblank wait timed out\n", 1429 1429 crtc->base.id, crtc->name);
+34 -7
drivers/gpu/drm/drm_atomic_state_helper.c
··· 57 57 */ 58 58 59 59 /** 60 + * __drm_atomic_helper_crtc_reset - reset state on CRTC 61 + * @crtc: drm CRTC 62 + * @crtc_state: CRTC state to assign 63 + * 64 + * Initializes the newly allocated @crtc_state and assigns it to 65 + * the &drm_crtc->state pointer of @crtc, usually required when 66 + * initializing the drivers or when called from the &drm_crtc_funcs.reset 67 + * hook. 68 + * 69 + * This is useful for drivers that subclass the CRTC state. 70 + */ 71 + void 72 + __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc, 73 + struct drm_crtc_state *crtc_state) 74 + { 75 + if (crtc_state) 76 + crtc_state->crtc = crtc; 77 + 78 + crtc->state = crtc_state; 79 + } 80 + EXPORT_SYMBOL(__drm_atomic_helper_crtc_reset); 81 + 82 + /** 60 83 * drm_atomic_helper_crtc_reset - default &drm_crtc_funcs.reset hook for CRTCs 61 84 * @crtc: drm CRTC 62 85 * ··· 88 65 */ 89 66 void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc) 90 67 { 91 - if (crtc->state) 92 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 93 - 94 - kfree(crtc->state); 95 - crtc->state = kzalloc(sizeof(*crtc->state), GFP_KERNEL); 68 + struct drm_crtc_state *crtc_state = 69 + kzalloc(sizeof(*crtc->state), GFP_KERNEL); 96 70 97 71 if (crtc->state) 98 - crtc->state->crtc = crtc; 72 + crtc->funcs->atomic_destroy_state(crtc, crtc->state); 73 + 74 + __drm_atomic_helper_crtc_reset(crtc, crtc_state); 99 75 } 100 76 EXPORT_SYMBOL(drm_atomic_helper_crtc_reset); 101 77 ··· 336 314 * @conn_state: connector state to assign 337 315 * 338 316 * Initializes the newly allocated @conn_state and assigns it to 339 - * the &drm_conector->state pointer of @connector, usually required when 317 + * the &drm_connector->state pointer of @connector, usually required when 340 318 * initializing the drivers or when called from the &drm_connector_funcs.reset 341 319 * hook. 342 320 * ··· 391 369 drm_connector_get(connector); 392 370 state->commit = NULL; 393 371 372 + if (state->hdr_output_metadata) 373 + drm_property_blob_get(state->hdr_output_metadata); 374 + 394 375 /* Don't copy over a writeback job, they are used only once */ 395 376 state->writeback_job = NULL; 396 377 } ··· 441 416 442 417 if (state->writeback_job) 443 418 drm_writeback_cleanup_job(state->writeback_job); 419 + 420 + drm_property_blob_put(state->hdr_output_metadata); 444 421 } 445 422 EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state); 446 423
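__drm_atomic_helper_crtc_reset() exists so that drivers which embed drm_crtc_state in a larger state struct don't each reinvent the reset plumbing. A sketch of a subclassed reset hook, with made-up foo_* names:

#include <linux/slab.h>
#include <drm/drm_atomic_state_helper.h>

struct foo_crtc_state {
        struct drm_crtc_state base;
        u32 scaler_config;              /* driver-private member */
};

static void foo_crtc_reset(struct drm_crtc *crtc)
{
        struct foo_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

        if (crtc->state)
                crtc->funcs->atomic_destroy_state(crtc, crtc->state);

        /* Copes with a NULL allocation and sets state->base.crtc = crtc. */
        __drm_atomic_helper_crtc_reset(crtc, state ? &state->base : NULL);
}

Note that the reworked drm_atomic_helper_crtc_reset() above now frees the old state through the driver's atomic_destroy_state hook rather than a bare kfree(), which is what makes the shared plumbing safe for subclassed states in the first place.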
+12
drivers/gpu/drm/drm_atomic_uapi.c
··· 676 676 { 677 677 struct drm_device *dev = connector->dev; 678 678 struct drm_mode_config *config = &dev->mode_config; 679 + bool replaced = false; 680 + int ret; 679 681 680 682 if (property == config->prop_crtc_id) { 681 683 struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val); ··· 728 726 */ 729 727 if (state->link_status != DRM_LINK_STATUS_GOOD) 730 728 state->link_status = val; 729 + } else if (property == config->hdr_output_metadata_property) { 730 + ret = drm_atomic_replace_property_blob_from_id(dev, 731 + &state->hdr_output_metadata, 732 + val, 733 + sizeof(struct hdr_output_metadata), -1, 734 + &replaced); 735 + return ret; 731 736 } else if (property == config->aspect_ratio_property) { 732 737 state->picture_aspect_ratio = val; 733 738 } else if (property == config->content_type_property) { ··· 823 814 *val = state->colorspace; 824 815 } else if (property == connector->scaling_mode_property) { 825 816 *val = state->scaling_mode; 817 + } else if (property == config->hdr_output_metadata_property) { 818 + *val = state->hdr_output_metadata ? 819 + state->hdr_output_metadata->base.id : 0; 826 820 } else if (property == connector->content_protection_property) { 827 821 *val = state->content_protection; 828 822 } else if (property == config->writeback_fb_id_property) {
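From userspace, HDR_OUTPUT_METADATA is an ordinary blob property: create a blob holding a struct hdr_output_metadata (the kernel side above rejects any other size) and attach its id to the connector in an atomic request. A hedged libdrm sketch — property discovery and error handling are trimmed, the struct comes from the uapi drm_mode.h, and the EOTF/luminance values are placeholders:

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int foo_set_hdr(int fd, uint32_t conn_id, uint32_t prop_id,
                       drmModeAtomicReq *req)
{
        struct hdr_output_metadata meta = {
                .metadata_type = 0,              /* HDMI_STATIC_METADATA_TYPE1 */
        };
        uint32_t blob_id;

        meta.hdmi_metadata_type1.eotf = 2;       /* HDMI_EOTF_SMPTE_ST2084 */
        meta.hdmi_metadata_type1.max_cll = 1000; /* cd/m2, placeholder */
        meta.hdmi_metadata_type1.max_fall = 400; /* cd/m2, placeholder */

        if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
                return -1;

        return drmModeAtomicAddProperty(req, conn_id, prop_id, blob_id) < 0 ?
                -1 : 0;
}

prop_id is assumed to be the id of the connector's "HDR_OUTPUT_METADATA" property, found via drmModeObjectGetProperties(); setting the value back to 0 clears the metadata, since drm_atomic_replace_property_blob_from_id() treats a zero id as NULL.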
+20
drivers/gpu/drm/drm_auth.c
··· 351 351 *master = NULL; 352 352 } 353 353 EXPORT_SYMBOL(drm_master_put); 354 + 355 + /* Used by drm_client and drm_fb_helper */ 356 + bool drm_master_internal_acquire(struct drm_device *dev) 357 + { 358 + mutex_lock(&dev->master_mutex); 359 + if (dev->master) { 360 + mutex_unlock(&dev->master_mutex); 361 + return false; 362 + } 363 + 364 + return true; 365 + } 366 + EXPORT_SYMBOL(drm_master_internal_acquire); 367 + 368 + /* Used by drm_client and drm_fb_helper */ 369 + void drm_master_internal_release(struct drm_device *dev) 370 + { 371 + mutex_unlock(&dev->master_mutex); 372 + } 373 + EXPORT_SYMBOL(drm_master_internal_release);
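These two helpers are kernel-internal (the drm_fb_helper.c hunk below adds #include "drm_internal.h" to reach them): they let an in-kernel client atomically check that no userspace master exists, and hold off one appearing, while it touches the display — replacing the racy READ_ONCE(dev->master) test that drm_fb_helper_is_bound() used to do. The usage pattern, sketched with a hypothetical helper:

static int foo_internal_modeset(struct drm_device *dev)
{
        int ret;

        /* Returns false, leaving master_mutex unlocked, if userspace is master. */
        if (!drm_master_internal_acquire(dev))
                return -EBUSY;

        ret = foo_do_restore(dev);      /* hypothetical in-kernel client work */

        drm_master_internal_release(dev);
        return ret;
}

The drm_fb_helper.c conversion below follows exactly this shape in restore_fbdev_mode(), dpms, setcmap, the ioctl handler and pan_display.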
+2 -1
drivers/gpu/drm/drm_client.c
··· 243 243 static struct drm_client_buffer * 244 244 drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format) 245 245 { 246 + const struct drm_format_info *info = drm_format_info(format); 246 247 struct drm_mode_create_dumb dumb_args = { }; 247 248 struct drm_device *dev = client->dev; 248 249 struct drm_client_buffer *buffer; ··· 259 258 260 259 dumb_args.width = width; 261 260 dumb_args.height = height; 262 - dumb_args.bpp = drm_format_plane_cpp(format, 0) * 8; 261 + dumb_args.bpp = info->cpp[0] * 8; 263 262 ret = drm_mode_create_dumb(dev, &dumb_args, client->file); 264 263 if (ret) 265 264 goto err_delete;
+6
drivers/gpu/drm/drm_connector.c
··· 1058 1058 return -ENOMEM; 1059 1059 dev->mode_config.non_desktop_property = prop; 1060 1060 1061 + prop = drm_property_create(dev, DRM_MODE_PROP_BLOB, 1062 + "HDR_OUTPUT_METADATA", 0); 1063 + if (!prop) 1064 + return -ENOMEM; 1065 + dev->mode_config.hdr_output_metadata_property = prop; 1066 + 1061 1067 return 0; 1062 1068 } 1063 1069
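Creating the blob property only registers it device-wide; a connector that can actually do HDR still has to attach it, per the usual DRM property model. A sketch of the attach, typically done in a driver's connector init path:

/* in a hypothetical HDR-capable connector's init function */
drm_object_attach_property(&connector->base,
                           dev->mode_config.hdr_output_metadata_property,
                           0);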
+5 -3
drivers/gpu/drm/drm_dp_aux_dev.c
··· 27 27 28 28 #include <linux/device.h> 29 29 #include <linux/fs.h> 30 - #include <linux/slab.h> 31 30 #include <linux/init.h> 32 31 #include <linux/kernel.h> 33 32 #include <linux/module.h> 33 + #include <linux/sched/signal.h> 34 + #include <linux/slab.h> 34 35 #include <linux/uaccess.h> 35 36 #include <linux/uio.h> 36 - #include <drm/drm_dp_helper.h> 37 + 37 38 #include <drm/drm_crtc.h> 38 - #include <drm/drmP.h> 39 + #include <drm/drm_dp_helper.h> 40 + #include <drm/drm_print.h> 39 41 40 42 #include "drm_crtc_helper_internal.h" 41 43
+3 -1
drivers/gpu/drm/drm_dp_dual_mode_helper.c
··· 20 20 * OTHER DEALINGS IN THE SOFTWARE. 21 21 */ 22 22 23 + #include <linux/delay.h> 23 24 #include <linux/errno.h> 24 25 #include <linux/export.h> 25 26 #include <linux/i2c.h> 26 27 #include <linux/slab.h> 27 28 #include <linux/string.h> 29 + 28 30 #include <drm/drm_dp_dual_mode_helper.h> 29 - #include <drm/drmP.h> 31 + #include <drm/drm_print.h> 30 32 31 33 /** 32 34 * DOC: dp dual mode helpers
+7 -5
drivers/gpu/drm/drm_dp_helper.c
··· 20 20 * OF THIS SOFTWARE. 21 21 */ 22 22 23 + #include <linux/delay.h> 24 + #include <linux/errno.h> 25 + #include <linux/i2c.h> 26 + #include <linux/init.h> 23 27 #include <linux/kernel.h> 24 28 #include <linux/module.h> 25 - #include <linux/delay.h> 26 - #include <linux/init.h> 27 - #include <linux/errno.h> 28 29 #include <linux/sched.h> 29 - #include <linux/i2c.h> 30 30 #include <linux/seq_file.h> 31 + 31 32 #include <drm/drm_dp_helper.h> 32 - #include <drm/drmP.h> 33 + #include <drm/drm_print.h> 34 + #include <drm/drm_vblank.h> 33 35 34 36 #include "drm_crtc_helper_internal.h" 35 37
+7 -6
drivers/gpu/drm/drm_dp_mst_topology.c
··· 20 20 * OF THIS SOFTWARE. 21 21 */ 22 22 23 - #include <linux/kernel.h> 24 23 #include <linux/delay.h> 25 - #include <linux/init.h> 26 24 #include <linux/errno.h> 25 + #include <linux/i2c.h> 26 + #include <linux/init.h> 27 + #include <linux/kernel.h> 27 28 #include <linux/sched.h> 28 29 #include <linux/seq_file.h> 29 - #include <linux/i2c.h> 30 - #include <drm/drm_dp_mst_helper.h> 31 - #include <drm/drmP.h> 32 30 33 - #include <drm/drm_fixed.h> 34 31 #include <drm/drm_atomic.h> 35 32 #include <drm/drm_atomic_helper.h> 33 + #include <drm/drm_dp_mst_helper.h> 34 + #include <drm/drm_drv.h> 35 + #include <drm/drm_fixed.h> 36 + #include <drm/drm_print.h> 36 37 #include <drm/drm_probe_helper.h> 37 38 38 39 /**
+131 -4
drivers/gpu/drm/drm_edid.c
··· 27 27 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 28 28 * DEALINGS IN THE SOFTWARE. 29 29 */ 30 - #include <linux/kernel.h> 31 - #include <linux/slab.h> 30 + 32 31 #include <linux/hdmi.h> 33 32 #include <linux/i2c.h> 33 + #include <linux/kernel.h> 34 34 #include <linux/module.h> 35 + #include <linux/slab.h> 35 36 #include <linux/vga_switcheroo.h> 36 - #include <drm/drmP.h> 37 + 38 + #include <drm/drm_displayid.h> 39 + #include <drm/drm_drv.h> 37 40 #include <drm/drm_edid.h> 38 41 #include <drm/drm_encoder.h> 39 - #include <drm/drm_displayid.h> 42 + #include <drm/drm_print.h> 40 43 #include <drm/drm_scdc_helper.h> 41 44 42 45 #include "drm_crtc_internal.h" ··· 2852 2849 #define VIDEO_BLOCK 0x02 2853 2850 #define VENDOR_BLOCK 0x03 2854 2851 #define SPEAKER_BLOCK 0x04 2852 + #define HDR_STATIC_METADATA_BLOCK 0x6 2855 2853 #define USE_EXTENDED_TAG 0x07 2856 2854 #define EXT_VIDEO_CAPABILITY_BLOCK 0x00 2857 2855 #define EXT_VIDEO_DATA_BLOCK_420 0x0E ··· 3835 3831 mode->clock = clock; 3836 3832 } 3837 3833 3834 + static bool cea_db_is_hdmi_hdr_metadata_block(const u8 *db) 3835 + { 3836 + if (cea_db_tag(db) != USE_EXTENDED_TAG) 3837 + return false; 3838 + 3839 + if (db[1] != HDR_STATIC_METADATA_BLOCK) 3840 + return false; 3841 + 3842 + if (cea_db_payload_len(db) < 3) 3843 + return false; 3844 + 3845 + return true; 3846 + } 3847 + 3848 + static uint8_t eotf_supported(const u8 *edid_ext) 3849 + { 3850 + return edid_ext[2] & 3851 + (BIT(HDMI_EOTF_TRADITIONAL_GAMMA_SDR) | 3852 + BIT(HDMI_EOTF_TRADITIONAL_GAMMA_HDR) | 3853 + BIT(HDMI_EOTF_SMPTE_ST2084) | 3854 + BIT(HDMI_EOTF_BT_2100_HLG)); 3855 + } 3856 + 3857 + static uint8_t hdr_metadata_type(const u8 *edid_ext) 3858 + { 3859 + return edid_ext[3] & 3860 + BIT(HDMI_STATIC_METADATA_TYPE1); 3861 + } 3862 + 3863 + static void 3864 + drm_parse_hdr_metadata_block(struct drm_connector *connector, const u8 *db) 3865 + { 3866 + u16 len; 3867 + 3868 + len = cea_db_payload_len(db); 3869 + 3870 + connector->hdr_sink_metadata.hdmi_type1.eotf = 3871 + eotf_supported(db); 3872 + connector->hdr_sink_metadata.hdmi_type1.metadata_type = 3873 + hdr_metadata_type(db); 3874 + 3875 + if (len >= 4) 3876 + connector->hdr_sink_metadata.hdmi_type1.max_cll = db[4]; 3877 + if (len >= 5) 3878 + connector->hdr_sink_metadata.hdmi_type1.max_fall = db[5]; 3879 + if (len >= 6) 3880 + connector->hdr_sink_metadata.hdmi_type1.min_cll = db[6]; 3881 + } 3882 + 3838 3883 static void 3839 3884 drm_parse_hdmi_vsdb_audio(struct drm_connector *connector, const u8 *db) 3840 3885 { ··· 4511 4458 drm_parse_y420cmdb_bitmap(connector, db); 4512 4459 if (cea_db_is_vcdb(db)) 4513 4460 drm_parse_vcdb(connector, db); 4461 + if (cea_db_is_hdmi_hdr_metadata_block(db)) 4462 + drm_parse_hdr_metadata_block(connector, db); 4514 4463 } 4515 4464 } 4516 4465 ··· 4904 4849 return connector->display_info.hdmi.scdc.supported || 4905 4850 connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB420; 4906 4851 } 4852 + 4853 + static inline bool is_eotf_supported(u8 output_eotf, u8 sink_eotf) 4854 + { 4855 + return sink_eotf & BIT(output_eotf); 4856 + } 4857 + 4858 + /** 4859 + * drm_hdmi_infoframe_set_hdr_metadata() - fill an HDMI DRM infoframe with 4860 + * HDR metadata from userspace 4861 + * @frame: HDMI DRM infoframe 4862 + * @conn_state: Connector state containing HDR metadata 4863 + * 4864 + * Return: 0 on success or a negative error code on failure. 
4865 + */ 4866 + int 4867 + drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame, 4868 + const struct drm_connector_state *conn_state) 4869 + { 4870 + struct drm_connector *connector; 4871 + struct hdr_output_metadata *hdr_metadata; 4872 + int err; 4873 + 4874 + if (!frame || !conn_state) 4875 + return -EINVAL; 4876 + 4877 + connector = conn_state->connector; 4878 + 4879 + if (!conn_state->hdr_output_metadata) 4880 + return -EINVAL; 4881 + 4882 + hdr_metadata = conn_state->hdr_output_metadata->data; 4883 + 4884 + if (!hdr_metadata || !connector) 4885 + return -EINVAL; 4886 + 4887 + /* Sink EOTF is Bit map while infoframe is absolute values */ 4888 + if (!is_eotf_supported(hdr_metadata->hdmi_metadata_type1.eotf, 4889 + connector->hdr_sink_metadata.hdmi_type1.eotf)) { 4890 + DRM_DEBUG_KMS("EOTF Not Supported\n"); 4891 + return -EINVAL; 4892 + } 4893 + 4894 + err = hdmi_drm_infoframe_init(frame); 4895 + if (err < 0) 4896 + return err; 4897 + 4898 + frame->eotf = hdr_metadata->hdmi_metadata_type1.eotf; 4899 + frame->metadata_type = hdr_metadata->hdmi_metadata_type1.metadata_type; 4900 + 4901 + BUILD_BUG_ON(sizeof(frame->display_primaries) != 4902 + sizeof(hdr_metadata->hdmi_metadata_type1.display_primaries)); 4903 + BUILD_BUG_ON(sizeof(frame->white_point) != 4904 + sizeof(hdr_metadata->hdmi_metadata_type1.white_point)); 4905 + 4906 + memcpy(&frame->display_primaries, 4907 + &hdr_metadata->hdmi_metadata_type1.display_primaries, 4908 + sizeof(frame->display_primaries)); 4909 + 4910 + memcpy(&frame->white_point, 4911 + &hdr_metadata->hdmi_metadata_type1.white_point, 4912 + sizeof(frame->white_point)); 4913 + 4914 + frame->max_display_mastering_luminance = 4915 + hdr_metadata->hdmi_metadata_type1.max_display_mastering_luminance; 4916 + frame->min_display_mastering_luminance = 4917 + hdr_metadata->hdmi_metadata_type1.min_display_mastering_luminance; 4918 + frame->max_fall = hdr_metadata->hdmi_metadata_type1.max_fall; 4919 + frame->max_cll = hdr_metadata->hdmi_metadata_type1.max_cll; 4920 + 4921 + return 0; 4922 + } 4923 + EXPORT_SYMBOL(drm_hdmi_infoframe_set_hdr_metadata); 4907 4924 4908 4925 /** 4909 4926 * drm_hdmi_avi_infoframe_from_display_mode() - fill an HDMI AVI infoframe with
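On the driver side the flow is: the EDID parsing above fills connector->hdr_sink_metadata, userspace supplies the blob, and drm_hdmi_infoframe_set_hdr_metadata() validates the requested EOTF against the sink and fills a struct hdmi_drm_infoframe ("DRM" here meaning Dynamic Range and Mastering, not Direct Rendering Manager). A hedged sketch of an encoder enable path, assuming the pack helper from video/hdmi; the hw write is hypothetical:

#include <linux/hdmi.h>
#include <drm/drm_edid.h>

static void foo_write_hdr_infoframe(struct foo_encoder *enc,
                                    const struct drm_connector_state *conn_state)
{
        struct hdmi_drm_infoframe frame;
        u8 buf[HDMI_INFOFRAME_SIZE(DRM)];
        ssize_t len;

        /* Fails if no metadata blob is set or the sink lacks the EOTF. */
        if (drm_hdmi_infoframe_set_hdr_metadata(&frame, conn_state) < 0)
                return;

        len = hdmi_drm_infoframe_pack(&frame, buf, sizeof(buf));
        if (len < 0)
                return;

        foo_hw_write_infoframe(enc, buf, len);  /* hypothetical */
}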
+5 -2
drivers/gpu/drm/drm_edid_load.c
··· 7 7 8 8 */ 9 9 10 - #include <linux/module.h> 11 10 #include <linux/firmware.h> 12 - #include <drm/drmP.h> 11 + #include <linux/module.h> 12 + #include <linux/platform_device.h> 13 + 13 14 #include <drm/drm_crtc.h> 14 15 #include <drm/drm_crtc_helper.h> 16 + #include <drm/drm_drv.h> 15 17 #include <drm/drm_edid.h> 18 + #include <drm/drm_print.h> 16 19 17 20 static char edid_firmware[PATH_MAX]; 18 21 module_param_string(edid_firmware, edid_firmware, sizeof(edid_firmware), 0644);
+125 -121
drivers/gpu/drm/drm_fb_helper.c
··· 44 44 45 45 #include "drm_crtc_internal.h" 46 46 #include "drm_crtc_helper_internal.h" 47 + #include "drm_internal.h" 47 48 48 49 static bool drm_fbdev_emulation = true; 49 50 module_param_named(fbdev_emulation, drm_fbdev_emulation, bool, 0600); ··· 388 387 } 389 388 EXPORT_SYMBOL(drm_fb_helper_debug_leave); 390 389 390 + /* Check if the plane can hw rotate to match panel orientation */ 391 + static bool drm_fb_helper_panel_rotation(struct drm_mode_set *modeset, 392 + unsigned int *rotation) 393 + { 394 + struct drm_connector *connector = modeset->connectors[0]; 395 + struct drm_plane *plane = modeset->crtc->primary; 396 + u64 valid_mask = 0; 397 + unsigned int i; 398 + 399 + if (!modeset->num_connectors) 400 + return false; 401 + 402 + switch (connector->display_info.panel_orientation) { 403 + case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP: 404 + *rotation = DRM_MODE_ROTATE_180; 405 + break; 406 + case DRM_MODE_PANEL_ORIENTATION_LEFT_UP: 407 + *rotation = DRM_MODE_ROTATE_90; 408 + break; 409 + case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP: 410 + *rotation = DRM_MODE_ROTATE_270; 411 + break; 412 + default: 413 + *rotation = DRM_MODE_ROTATE_0; 414 + } 415 + 416 + /* 417 + * TODO: support 90 / 270 degree hardware rotation, 418 + * depending on the hardware this may require the framebuffer 419 + * to be in a specific tiling format. 420 + */ 421 + if (*rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property) 422 + return false; 423 + 424 + for (i = 0; i < plane->rotation_property->num_values; i++) 425 + valid_mask |= (1ULL << plane->rotation_property->values[i]); 426 + 427 + if (!(*rotation & valid_mask)) 428 + return false; 429 + 430 + return true; 431 + } 432 + 391 433 static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper, bool active) 392 434 { 393 435 struct drm_device *dev = fb_helper->dev; ··· 471 427 for (i = 0; i < fb_helper->crtc_count; i++) { 472 428 struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set; 473 429 struct drm_plane *primary = mode_set->crtc->primary; 430 + unsigned int rotation; 474 431 475 - /* Cannot fail as we've already gotten the plane state above */ 476 - plane_state = drm_atomic_get_new_plane_state(state, primary); 477 - plane_state->rotation = fb_helper->crtc_info[i].rotation; 432 + if (drm_fb_helper_panel_rotation(mode_set, &rotation)) { 433 + /* Cannot fail as we've already gotten the plane state above */ 434 + plane_state = drm_atomic_get_new_plane_state(state, primary); 435 + plane_state->rotation = rotation; 436 + } 478 437 479 438 ret = __drm_atomic_helper_set_config(mode_set, state); 480 439 if (ret != 0) ··· 556 509 return ret; 557 510 } 558 511 559 - static int restore_fbdev_mode(struct drm_fb_helper *fb_helper) 512 + static int restore_fbdev_mode_force(struct drm_fb_helper *fb_helper) 560 513 { 561 514 struct drm_device *dev = fb_helper->dev; 562 515 ··· 564 517 return restore_fbdev_mode_atomic(fb_helper, true); 565 518 else 566 519 return restore_fbdev_mode_legacy(fb_helper); 520 + } 521 + 522 + static int restore_fbdev_mode(struct drm_fb_helper *fb_helper) 523 + { 524 + struct drm_device *dev = fb_helper->dev; 525 + int ret; 526 + 527 + if (!drm_master_internal_acquire(dev)) 528 + return -EBUSY; 529 + 530 + ret = restore_fbdev_mode_force(fb_helper); 531 + 532 + drm_master_internal_release(dev); 533 + 534 + return ret; 567 535 } 568 536 569 537 /** ··· 604 542 return 0; 605 543 606 544 mutex_lock(&fb_helper->lock); 607 - ret = restore_fbdev_mode(fb_helper); 545 + /* 546 + * TODO: 547 + * We should bail out here if there is a 
master by dropping _force. 548 + * Currently these igt tests fail if we do that: 549 + * - kms_fbcon_fbt@psr 550 + * - kms_fbcon_fbt@psr-suspend 551 + * 552 + * So first these tests need to be fixed so they drop master or don't 553 + * have an fd open. 554 + */ 555 + ret = restore_fbdev_mode_force(fb_helper); 608 556 609 557 do_delayed = fb_helper->delayed_hotplug; 610 558 if (do_delayed) ··· 627 555 return ret; 628 556 } 629 557 EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked); 630 - 631 - static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper) 632 - { 633 - struct drm_device *dev = fb_helper->dev; 634 - struct drm_crtc *crtc; 635 - int bound = 0, crtcs_bound = 0; 636 - 637 - /* 638 - * Sometimes user space wants everything disabled, so don't steal the 639 - * display if there's a master. 640 - */ 641 - if (READ_ONCE(dev->master)) 642 - return false; 643 - 644 - drm_for_each_crtc(crtc, dev) { 645 - drm_modeset_lock(&crtc->mutex, NULL); 646 - if (crtc->primary->fb) 647 - crtcs_bound++; 648 - if (crtc->primary->fb == fb_helper->fb) 649 - bound++; 650 - drm_modeset_unlock(&crtc->mutex); 651 - } 652 - 653 - if (bound < crtcs_bound) 654 - return false; 655 - 656 - return true; 657 - } 658 558 659 559 #ifdef CONFIG_MAGIC_SYSRQ 660 560 /* ··· 648 604 continue; 649 605 650 606 mutex_lock(&helper->lock); 651 - ret = restore_fbdev_mode(helper); 607 + ret = restore_fbdev_mode_force(helper); 652 608 if (ret) 653 609 error = true; 654 610 mutex_unlock(&helper->lock); ··· 707 663 static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode) 708 664 { 709 665 struct drm_fb_helper *fb_helper = info->par; 666 + struct drm_device *dev = fb_helper->dev; 710 667 711 668 /* 712 669 * For each CRTC in this fb, turn the connectors on/off. 713 670 */ 714 671 mutex_lock(&fb_helper->lock); 715 - if (!drm_fb_helper_is_bound(fb_helper)) { 716 - mutex_unlock(&fb_helper->lock); 717 - return; 718 - } 672 + if (!drm_master_internal_acquire(dev)) 673 + goto unlock; 719 674 720 - if (drm_drv_uses_atomic_modeset(fb_helper->dev)) 675 + if (drm_drv_uses_atomic_modeset(dev)) 721 676 restore_fbdev_mode_atomic(fb_helper, dpms_mode == DRM_MODE_DPMS_ON); 722 677 else 723 678 dpms_legacy(fb_helper, dpms_mode); 679 + 680 + drm_master_internal_release(dev); 681 + unlock: 724 682 mutex_unlock(&fb_helper->lock); 725 683 } 726 684 ··· 813 767 struct drm_clip_rect *clip) 814 768 { 815 769 struct drm_framebuffer *fb = fb_helper->fb; 816 - unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0); 770 + unsigned int cpp = fb->format->cpp[0]; 817 771 size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; 818 772 void *src = fb_helper->fbdev->screen_buffer + offset; 819 773 void *dst = fb_helper->buffer->vaddr + offset; ··· 927 881 if (!fb_helper->crtc_info[i].mode_set.connectors) 928 882 goto out_free; 929 883 fb_helper->crtc_info[i].mode_set.num_connectors = 0; 930 - fb_helper->crtc_info[i].rotation = DRM_MODE_ROTATE_0; 931 884 } 932 885 933 886 i = 0; ··· 1554 1509 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info) 1555 1510 { 1556 1511 struct drm_fb_helper *fb_helper = info->par; 1512 + struct drm_device *dev = fb_helper->dev; 1557 1513 int ret; 1558 1514 1559 1515 if (oops_in_progress) ··· 1562 1516 1563 1517 mutex_lock(&fb_helper->lock); 1564 1518 1565 - if (!drm_fb_helper_is_bound(fb_helper)) { 1519 + if (!drm_master_internal_acquire(dev)) { 1566 1520 ret = -EBUSY; 1567 - goto out; 1521 + goto unlock; 1568 1522 } 1569 1523 1570 1524 if (info->fix.visual == FB_VISUAL_TRUECOLOR) 
··· 1574 1528 else 1575 1529 ret = setcmap_legacy(cmap, info); 1576 1530 1577 - out: 1531 + drm_master_internal_release(dev); 1532 + unlock: 1578 1533 mutex_unlock(&fb_helper->lock); 1579 1534 1580 1535 return ret; ··· 1595 1548 unsigned long arg) 1596 1549 { 1597 1550 struct drm_fb_helper *fb_helper = info->par; 1551 + struct drm_device *dev = fb_helper->dev; 1598 1552 struct drm_mode_set *mode_set; 1599 1553 struct drm_crtc *crtc; 1600 1554 int ret = 0; 1601 1555 1602 1556 mutex_lock(&fb_helper->lock); 1603 - if (!drm_fb_helper_is_bound(fb_helper)) { 1557 + if (!drm_master_internal_acquire(dev)) { 1604 1558 ret = -EBUSY; 1605 1559 goto unlock; 1606 1560 } ··· 1639 1591 } 1640 1592 1641 1593 ret = 0; 1642 - goto unlock; 1594 + break; 1643 1595 default: 1644 1596 ret = -ENOTTY; 1645 1597 } 1646 1598 1599 + drm_master_internal_release(dev); 1647 1600 unlock: 1648 1601 mutex_unlock(&fb_helper->lock); 1649 1602 return ret; ··· 1896 1847 return -EBUSY; 1897 1848 1898 1849 mutex_lock(&fb_helper->lock); 1899 - if (!drm_fb_helper_is_bound(fb_helper)) { 1900 - mutex_unlock(&fb_helper->lock); 1901 - return -EBUSY; 1850 + if (!drm_master_internal_acquire(dev)) { 1851 + ret = -EBUSY; 1852 + goto unlock; 1902 1853 } 1903 1854 1904 1855 if (drm_drv_uses_atomic_modeset(dev)) 1905 1856 ret = pan_display_atomic(var, info); 1906 1857 else 1907 1858 ret = pan_display_legacy(var, info); 1859 + 1860 + drm_master_internal_release(dev); 1861 + unlock: 1908 1862 mutex_unlock(&fb_helper->lock); 1909 1863 1910 1864 return ret; ··· 2031 1979 */ 2032 1980 bool lastv = true, lasth = true; 2033 1981 2034 - desired_mode = fb_helper->crtc_info[i].desired_mode; 2035 1982 mode_set = &fb_helper->crtc_info[i].mode_set; 1983 + desired_mode = mode_set->mode; 2036 1984 2037 1985 if (!desired_mode) 2038 1986 continue; 2039 1987 2040 1988 crtc_count++; 2041 1989 2042 - x = fb_helper->crtc_info[i].x; 2043 - y = fb_helper->crtc_info[i].y; 1990 + x = mode_set->x; 1991 + y = mode_set->y; 2044 1992 2045 1993 sizes.surface_width = max_t(u32, desired_mode->hdisplay + x, sizes.surface_width); 2046 1994 sizes.surface_height = max_t(u32, desired_mode->vdisplay + y, sizes.surface_height); ··· 2066 2014 DRM_INFO("Cannot find any crtc or sizes\n"); 2067 2015 2068 2016 /* First time: disable all crtc's.. */ 2069 - if (!fb_helper->deferred_setup && !READ_ONCE(fb_helper->dev->master)) 2017 + if (!fb_helper->deferred_setup) 2070 2018 restore_fbdev_mode(fb_helper); 2071 2019 return -EAGAIN; 2072 2020 } ··· 2555 2503 return best_score; 2556 2504 } 2557 2505 2558 - /* 2559 - * This function checks if rotation is necessary because of panel orientation 2560 - * and if it is, if it is supported. 2561 - * If rotation is necessary and supported, it gets set in fb_crtc.rotation. 2562 - * If rotation is necessary but not supported, a DRM_MODE_ROTATE_* flag gets 2563 - * or-ed into fb_helper->sw_rotations. In drm_setup_crtcs_fb() we check if only 2564 - * one bit is set and then we set fb_info.fbcon_rotate_hint to make fbcon do 2565 - * the unsupported rotation. 
2566 - */ 2567 - static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper, 2568 - struct drm_fb_helper_crtc *fb_crtc, 2569 - struct drm_connector *connector) 2570 - { 2571 - struct drm_plane *plane = fb_crtc->mode_set.crtc->primary; 2572 - uint64_t valid_mask = 0; 2573 - int i, rotation; 2574 - 2575 - fb_crtc->rotation = DRM_MODE_ROTATE_0; 2576 - 2577 - switch (connector->display_info.panel_orientation) { 2578 - case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP: 2579 - rotation = DRM_MODE_ROTATE_180; 2580 - break; 2581 - case DRM_MODE_PANEL_ORIENTATION_LEFT_UP: 2582 - rotation = DRM_MODE_ROTATE_90; 2583 - break; 2584 - case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP: 2585 - rotation = DRM_MODE_ROTATE_270; 2586 - break; 2587 - default: 2588 - rotation = DRM_MODE_ROTATE_0; 2589 - } 2590 - 2591 - /* 2592 - * TODO: support 90 / 270 degree hardware rotation, 2593 - * depending on the hardware this may require the framebuffer 2594 - * to be in a specific tiling format. 2595 - */ 2596 - if (rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property) { 2597 - fb_helper->sw_rotations |= rotation; 2598 - return; 2599 - } 2600 - 2601 - for (i = 0; i < plane->rotation_property->num_values; i++) 2602 - valid_mask |= (1ULL << plane->rotation_property->values[i]); 2603 - 2604 - if (!(rotation & valid_mask)) { 2605 - fb_helper->sw_rotations |= rotation; 2606 - return; 2607 - } 2608 - 2609 - fb_crtc->rotation = rotation; 2610 - /* Rotating in hardware, fbcon should not rotate */ 2611 - fb_helper->sw_rotations |= DRM_MODE_ROTATE_0; 2612 - } 2613 - 2614 2506 static struct drm_fb_helper_crtc * 2615 2507 drm_fb_helper_crtc(struct drm_fb_helper *fb_helper, struct drm_crtc *crtc) 2616 2508 { ··· 2801 2805 drm_fb_helper_modeset_release(fb_helper, 2802 2806 &fb_helper->crtc_info[i].mode_set); 2803 2807 2804 - fb_helper->sw_rotations = 0; 2805 2808 drm_fb_helper_for_each_connector(fb_helper, i) { 2806 2809 struct drm_display_mode *mode = modes[i]; 2807 2810 struct drm_fb_helper_crtc *fb_crtc = crtcs[i]; ··· 2814 2819 DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n", 2815 2820 mode->name, fb_crtc->mode_set.crtc->base.id, offset->x, offset->y); 2816 2821 2817 - fb_crtc->desired_mode = mode; 2818 - fb_crtc->x = offset->x; 2819 - fb_crtc->y = offset->y; 2820 - modeset->mode = drm_mode_duplicate(dev, 2821 - fb_crtc->desired_mode); 2822 + modeset->mode = drm_mode_duplicate(dev, mode); 2822 2823 drm_connector_get(connector); 2823 - drm_setup_crtc_rotation(fb_helper, fb_crtc, connector); 2824 2824 modeset->connectors[modeset->num_connectors++] = connector; 2825 2825 modeset->x = offset->x; 2826 2826 modeset->y = offset->y; ··· 2838 2848 static void drm_setup_crtcs_fb(struct drm_fb_helper *fb_helper) 2839 2849 { 2840 2850 struct fb_info *info = fb_helper->fbdev; 2851 + unsigned int rotation, sw_rotations = 0; 2841 2852 int i; 2842 2853 2843 - for (i = 0; i < fb_helper->crtc_count; i++) 2844 - if (fb_helper->crtc_info[i].mode_set.num_connectors) 2845 - fb_helper->crtc_info[i].mode_set.fb = fb_helper->fb; 2854 + for (i = 0; i < fb_helper->crtc_count; i++) { 2855 + struct drm_mode_set *modeset = &fb_helper->crtc_info[i].mode_set; 2856 + 2857 + if (!modeset->num_connectors) 2858 + continue; 2859 + 2860 + modeset->fb = fb_helper->fb; 2861 + 2862 + if (drm_fb_helper_panel_rotation(modeset, &rotation)) 2863 + /* Rotating in hardware, fbcon should not rotate */ 2864 + sw_rotations |= DRM_MODE_ROTATE_0; 2865 + else 2866 + sw_rotations |= rotation; 2867 + } 2846 2868 2847 2869 mutex_lock(&fb_helper->dev->mode_config.mutex); 
2848 2870 drm_fb_helper_for_each_connector(fb_helper, i) { ··· 2870 2868 } 2871 2869 mutex_unlock(&fb_helper->dev->mode_config.mutex); 2872 2870 2873 - switch (fb_helper->sw_rotations) { 2871 + switch (sw_rotations) { 2874 2872 case DRM_MODE_ROTATE_0: 2875 2873 info->fbcon_rotate_hint = FB_ROTATE_UR; 2876 2874 break; ··· 3043 3041 return err; 3044 3042 } 3045 3043 3046 - if (!fb_helper->fb || !drm_fb_helper_is_bound(fb_helper)) { 3044 + if (!fb_helper->fb || !drm_master_internal_acquire(fb_helper->dev)) { 3047 3045 fb_helper->delayed_hotplug = true; 3048 3046 mutex_unlock(&fb_helper->lock); 3049 3047 return err; 3050 3048 } 3049 + 3050 + drm_master_internal_release(fb_helper->dev); 3051 3051 3052 3052 DRM_DEBUG_KMS("\n"); 3053 3053
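Every fbdev-helper path above that used to consult drm_fb_helper_is_bound() now brackets its work between drm_master_internal_acquire() and drm_master_internal_release(): fail with -EBUSY while userspace holds master, release when done. The removed check only sampled dev->master once, so userspace could become master between the check and the modeset; the acquire/release pair is what closes that window. Distilled into a minimal sketch (my_fb_helper_op() is a hypothetical callback, not part of this series):

        static int my_fb_helper_op(struct drm_fb_helper *fb_helper)
        {
                struct drm_device *dev = fb_helper->dev;
                int ret = 0;

                mutex_lock(&fb_helper->lock);

                /* Don't steal the display while userspace is master. */
                if (!drm_master_internal_acquire(dev)) {
                        ret = -EBUSY;
                        goto unlock;
                }

                /* ... touch the modeset or plane state here ... */

                drm_master_internal_release(dev);
        unlock:
                mutex_unlock(&fb_helper->lock);
                return ret;
        }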
+50 -72
drivers/gpu/drm/drm_file.c
··· 100 100 * :ref:`IOCTL support in the userland interfaces chapter<drm_driver_ioctl>`. 101 101 */ 102 102 103 - static int drm_open_helper(struct file *filp, struct drm_minor *minor); 104 - 105 103 /** 106 104 * drm_file_alloc - allocate file context 107 105 * @minor: minor to allocate on ··· 271 273 drm_file_free(file_priv); 272 274 } 273 275 274 - static int drm_setup(struct drm_device * dev) 275 - { 276 - int ret; 277 - 278 - if (dev->driver->firstopen && 279 - drm_core_check_feature(dev, DRIVER_LEGACY)) { 280 - ret = dev->driver->firstopen(dev); 281 - if (ret != 0) 282 - return ret; 283 - } 284 - 285 - ret = drm_legacy_dma_setup(dev); 286 - if (ret < 0) 287 - return ret; 288 - 289 - 290 - DRM_DEBUG("\n"); 291 - return 0; 292 - } 293 - 294 - /** 295 - * drm_open - open method for DRM file 296 - * @inode: device inode 297 - * @filp: file pointer. 298 - * 299 - * This function must be used by drivers as their &file_operations.open method. 300 - * It looks up the correct DRM device and instantiates all the per-file 301 - * resources for it. It also calls the &drm_driver.open driver callback. 302 - * 303 - * RETURNS: 304 - * 305 - * 0 on success or negative errno value on falure. 306 - */ 307 - int drm_open(struct inode *inode, struct file *filp) 308 - { 309 - struct drm_device *dev; 310 - struct drm_minor *minor; 311 - int retcode; 312 - int need_setup = 0; 313 - 314 - minor = drm_minor_acquire(iminor(inode)); 315 - if (IS_ERR(minor)) 316 - return PTR_ERR(minor); 317 - 318 - dev = minor->dev; 319 - if (!dev->open_count++) 320 - need_setup = 1; 321 - 322 - /* share address_space across all char-devs of a single device */ 323 - filp->f_mapping = dev->anon_inode->i_mapping; 324 - 325 - retcode = drm_open_helper(filp, minor); 326 - if (retcode) 327 - goto err_undo; 328 - if (need_setup) { 329 - retcode = drm_setup(dev); 330 - if (retcode) { 331 - drm_close_helper(filp); 332 - goto err_undo; 333 - } 334 - } 335 - return 0; 336 - 337 - err_undo: 338 - dev->open_count--; 339 - drm_minor_release(minor); 340 - return retcode; 341 - } 342 - EXPORT_SYMBOL(drm_open); 343 - 344 276 /* 345 277 * Check whether DRI will run on this CPU. 346 278 * ··· 351 423 352 424 return 0; 353 425 } 426 + 427 + /** 428 + * drm_open - open method for DRM file 429 + * @inode: device inode 430 + * @filp: file pointer. 431 + * 432 + * This function must be used by drivers as their &file_operations.open method. 433 + * It looks up the correct DRM device and instantiates all the per-file 434 + * resources for it. It also calls the &drm_driver.open driver callback. 435 + * 436 + * RETURNS: 437 + * 438 + * 0 on success or negative errno value on failure.
439 + */ 440 + int drm_open(struct inode *inode, struct file *filp) 441 + { 442 + struct drm_device *dev; 443 + struct drm_minor *minor; 444 + int retcode; 445 + int need_setup = 0; 446 + 447 + minor = drm_minor_acquire(iminor(inode)); 448 + if (IS_ERR(minor)) 449 + return PTR_ERR(minor); 450 + 451 + dev = minor->dev; 452 + if (!dev->open_count++) 453 + need_setup = 1; 454 + 455 + /* share address_space across all char-devs of a single device */ 456 + filp->f_mapping = dev->anon_inode->i_mapping; 457 + 458 + retcode = drm_open_helper(filp, minor); 459 + if (retcode) 460 + goto err_undo; 461 + if (need_setup) { 462 + retcode = drm_legacy_setup(dev); 463 + if (retcode) { 464 + drm_close_helper(filp); 465 + goto err_undo; 466 + } 467 + } 468 + return 0; 469 + 470 + err_undo: 471 + dev->open_count--; 472 + drm_minor_release(minor); 473 + return retcode; 474 + } 475 + EXPORT_SYMBOL(drm_open); 354 476 355 477 void drm_lastclose(struct drm_device * dev) 356 478 {
+2 -2
drivers/gpu/drm/drm_format_helper.c
··· 36 36 void drm_fb_memcpy(void *dst, void *vaddr, struct drm_framebuffer *fb, 37 37 struct drm_rect *clip) 38 38 { 39 - unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0); 39 + unsigned int cpp = fb->format->cpp[0]; 40 40 size_t len = (clip->x2 - clip->x1) * cpp; 41 41 unsigned int y, lines = clip->y2 - clip->y1; 42 42 ··· 63 63 struct drm_framebuffer *fb, 64 64 struct drm_rect *clip) 65 65 { 66 - unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0); 66 + unsigned int cpp = fb->format->cpp[0]; 67 67 unsigned int offset = clip_offset(clip, fb->pitches[0], cpp); 68 68 size_t len = (clip->x2 - clip->x1) * cpp; 69 69 unsigned int y, lines = clip->y2 - clip->y1;
-118
drivers/gpu/drm/drm_fourcc.c
··· 333 333 EXPORT_SYMBOL(drm_get_format_info); 334 334 335 335 /** 336 - * drm_format_num_planes - get the number of planes for format 337 - * @format: pixel format (DRM_FORMAT_*) 338 - * 339 - * Returns: 340 - * The number of planes used by the specified pixel format. 341 - */ 342 - int drm_format_num_planes(uint32_t format) 343 - { 344 - const struct drm_format_info *info; 345 - 346 - info = drm_format_info(format); 347 - return info ? info->num_planes : 1; 348 - } 349 - EXPORT_SYMBOL(drm_format_num_planes); 350 - 351 - /** 352 - * drm_format_plane_cpp - determine the bytes per pixel value 353 - * @format: pixel format (DRM_FORMAT_*) 354 - * @plane: plane index 355 - * 356 - * Returns: 357 - * The bytes per pixel value for the specified plane. 358 - */ 359 - int drm_format_plane_cpp(uint32_t format, int plane) 360 - { 361 - const struct drm_format_info *info; 362 - 363 - info = drm_format_info(format); 364 - if (!info || plane >= info->num_planes) 365 - return 0; 366 - 367 - return info->cpp[plane]; 368 - } 369 - EXPORT_SYMBOL(drm_format_plane_cpp); 370 - 371 - /** 372 - * drm_format_horz_chroma_subsampling - get the horizontal chroma subsampling factor 373 - * @format: pixel format (DRM_FORMAT_*) 374 - * 375 - * Returns: 376 - * The horizontal chroma subsampling factor for the 377 - * specified pixel format. 378 - */ 379 - int drm_format_horz_chroma_subsampling(uint32_t format) 380 - { 381 - const struct drm_format_info *info; 382 - 383 - info = drm_format_info(format); 384 - return info ? info->hsub : 1; 385 - } 386 - EXPORT_SYMBOL(drm_format_horz_chroma_subsampling); 387 - 388 - /** 389 - * drm_format_vert_chroma_subsampling - get the vertical chroma subsampling factor 390 - * @format: pixel format (DRM_FORMAT_*) 391 - * 392 - * Returns: 393 - * The vertical chroma subsampling factor for the 394 - * specified pixel format. 395 - */ 396 - int drm_format_vert_chroma_subsampling(uint32_t format) 397 - { 398 - const struct drm_format_info *info; 399 - 400 - info = drm_format_info(format); 401 - return info ? info->vsub : 1; 402 - } 403 - EXPORT_SYMBOL(drm_format_vert_chroma_subsampling); 404 - 405 - /** 406 - * drm_format_plane_width - width of the plane given the first plane 407 - * @width: width of the first plane 408 - * @format: pixel format 409 - * @plane: plane index 410 - * 411 - * Returns: 412 - * The width of @plane, given that the width of the first plane is @width. 413 - */ 414 - int drm_format_plane_width(int width, uint32_t format, int plane) 415 - { 416 - const struct drm_format_info *info; 417 - 418 - info = drm_format_info(format); 419 - if (!info || plane >= info->num_planes) 420 - return 0; 421 - 422 - if (plane == 0) 423 - return width; 424 - 425 - return width / info->hsub; 426 - } 427 - EXPORT_SYMBOL(drm_format_plane_width); 428 - 429 - /** 430 - * drm_format_plane_height - height of the plane given the first plane 431 - * @height: height of the first plane 432 - * @format: pixel format 433 - * @plane: plane index 434 - * 435 - * Returns: 436 - * The height of @plane, given that the height of the first plane is @height. 
437 - */ 438 - int drm_format_plane_height(int height, uint32_t format, int plane) 439 - { 440 - const struct drm_format_info *info; 441 - 442 - info = drm_format_info(format); 443 - if (!info || plane >= info->num_planes) 444 - return 0; 445 - 446 - if (plane == 0) 447 - return height; 448 - 449 - return height / info->vsub; 450 - } 451 - EXPORT_SYMBOL(drm_format_plane_height); 452 - 453 - /** 454 336 * drm_format_info_block_width - width in pixels of block. 455 337 * @info: pixel format info 456 338 * @plane: plane index
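The conversion pattern behind these removals, for reference: every caller was switched from the per-call lookup helpers to the format-info structure drivers already hold (see the fb->format->cpp[0] changes in drm_fb_helper.c and drm_format_helper.c above), which also avoids re-running the format lookup on each access. A sketch of the mapping, with print_format_info() as a hypothetical example function:

        static void print_format_info(const struct drm_framebuffer *fb)
        {
                const struct drm_format_info *info = fb->format;

                /* was drm_format_num_planes(fb->format->format) */
                unsigned int num_planes = info->num_planes;
                /* was drm_format_plane_cpp(fb->format->format, 0) */
                unsigned int cpp = info->cpp[0];
                /* was drm_format_horz/vert_chroma_subsampling(...) */
                unsigned int hsub = info->hsub;
                unsigned int vsub = info->vsub;

                DRM_DEBUG_KMS("planes=%u cpp=%u hsub=%u vsub=%u\n",
                              num_planes, cpp, hsub, vsub);
        }

Without a framebuffer at hand, drm_format_info(format) still returns the same structure.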
+772
drivers/gpu/drm/drm_gem_vram_helper.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <drm/drm_gem_vram_helper.h> 4 + #include <drm/drm_device.h> 5 + #include <drm/drm_mode.h> 6 + #include <drm/drm_prime.h> 7 + #include <drm/drm_vram_mm_helper.h> 8 + #include <drm/ttm/ttm_page_alloc.h> 9 + 10 + /** 11 + * DOC: overview 12 + * 13 + * This library provides a GEM buffer object that is backed by video RAM 14 + * (VRAM). It can be used for framebuffer devices with dedicated memory. 15 + */ 16 + 17 + /* 18 + * Buffer-objects helpers 19 + */ 20 + 21 + static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) 22 + { 23 + /* We got here via ttm_bo_put(), which means that the 24 + * TTM buffer object in 'bo' has already been cleaned 25 + * up; only release the GEM object. 26 + */ 27 + drm_gem_object_release(&gbo->gem); 28 + } 29 + 30 + static void drm_gem_vram_destroy(struct drm_gem_vram_object *gbo) 31 + { 32 + drm_gem_vram_cleanup(gbo); 33 + kfree(gbo); 34 + } 35 + 36 + static void ttm_buffer_object_destroy(struct ttm_buffer_object *bo) 37 + { 38 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo); 39 + 40 + drm_gem_vram_destroy(gbo); 41 + } 42 + 43 + static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo, 44 + unsigned long pl_flag) 45 + { 46 + unsigned int i; 47 + unsigned int c = 0; 48 + 49 + gbo->placement.placement = gbo->placements; 50 + gbo->placement.busy_placement = gbo->placements; 51 + 52 + if (pl_flag & TTM_PL_FLAG_VRAM) 53 + gbo->placements[c++].flags = TTM_PL_FLAG_WC | 54 + TTM_PL_FLAG_UNCACHED | 55 + TTM_PL_FLAG_VRAM; 56 + 57 + if (pl_flag & TTM_PL_FLAG_SYSTEM) 58 + gbo->placements[c++].flags = TTM_PL_MASK_CACHING | 59 + TTM_PL_FLAG_SYSTEM; 60 + 61 + if (!c) 62 + gbo->placements[c++].flags = TTM_PL_MASK_CACHING | 63 + TTM_PL_FLAG_SYSTEM; 64 + 65 + gbo->placement.num_placement = c; 66 + gbo->placement.num_busy_placement = c; 67 + 68 + for (i = 0; i < c; ++i) { 69 + gbo->placements[i].fpfn = 0; 70 + gbo->placements[i].lpfn = 0; 71 + } 72 + } 73 + 74 + static int drm_gem_vram_init(struct drm_device *dev, 75 + struct ttm_bo_device *bdev, 76 + struct drm_gem_vram_object *gbo, 77 + size_t size, unsigned long pg_align, 78 + bool interruptible) 79 + { 80 + int ret; 81 + size_t acc_size; 82 + 83 + ret = drm_gem_object_init(dev, &gbo->gem, size); 84 + if (ret) 85 + return ret; 86 + 87 + acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo)); 88 + 89 + gbo->bo.bdev = bdev; 90 + drm_gem_vram_placement(gbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM); 91 + 92 + ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device, 93 + &gbo->placement, pg_align, interruptible, acc_size, 94 + NULL, NULL, ttm_buffer_object_destroy); 95 + if (ret) 96 + goto err_drm_gem_object_release; 97 + 98 + return 0; 99 + 100 + err_drm_gem_object_release: 101 + drm_gem_object_release(&gbo->gem); 102 + return ret; 103 + } 104 + 105 + /** 106 + * drm_gem_vram_create() - Creates a VRAM-backed GEM object 107 + * @dev: the DRM device 108 + * @bdev: the TTM BO device backing the object 109 + * @size: the buffer size in bytes 110 + * @pg_align: the buffer's alignment in multiples of the page size 111 + * @interruptible: sleep interruptible if waiting for memory 112 + * 113 + * Returns: 114 + * A new instance of &struct drm_gem_vram_object on success, or 115 + * an ERR_PTR()-encoded error code otherwise. 
116 + */ 117 + struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, 118 + struct ttm_bo_device *bdev, 119 + size_t size, 120 + unsigned long pg_align, 121 + bool interruptible) 122 + { 123 + struct drm_gem_vram_object *gbo; 124 + int ret; 125 + 126 + gbo = kzalloc(sizeof(*gbo), GFP_KERNEL); 127 + if (!gbo) 128 + return ERR_PTR(-ENOMEM); 129 + 130 + ret = drm_gem_vram_init(dev, bdev, gbo, size, pg_align, interruptible); 131 + if (ret < 0) 132 + goto err_kfree; 133 + 134 + return gbo; 135 + 136 + err_kfree: 137 + kfree(gbo); 138 + return ERR_PTR(ret); 139 + } 140 + EXPORT_SYMBOL(drm_gem_vram_create); 141 + 142 + /** 143 + * drm_gem_vram_put() - Releases a reference to a VRAM-backed GEM object 144 + * @gbo: the GEM VRAM object 145 + * 146 + * See ttm_bo_put() for more information. 147 + */ 148 + void drm_gem_vram_put(struct drm_gem_vram_object *gbo) 149 + { 150 + ttm_bo_put(&gbo->bo); 151 + } 152 + EXPORT_SYMBOL(drm_gem_vram_put); 153 + 154 + /** 155 + * drm_gem_vram_lock() - Locks a VRAM-backed GEM object 156 + * @gbo: the GEM VRAM object 157 + * @no_wait: don't wait for buffer object to become available 158 + * 159 + * See ttm_bo_reserve() for more information. 160 + * 161 + * Returns: 162 + * 0 on success, or 163 + * a negative error code otherwise 164 + */ 165 + int drm_gem_vram_lock(struct drm_gem_vram_object *gbo, bool no_wait) 166 + { 167 + return ttm_bo_reserve(&gbo->bo, true, no_wait, NULL); 168 + } 169 + EXPORT_SYMBOL(drm_gem_vram_lock); 170 + 171 + /** 172 + * drm_gem_vram_unlock() - \ 173 + Release a reservation acquired by drm_gem_vram_lock() 174 + * @gbo: the GEM VRAM object 175 + * 176 + * See ttm_bo_unreserve() for more information. 177 + */ 178 + void drm_gem_vram_unlock(struct drm_gem_vram_object *gbo) 179 + { 180 + ttm_bo_unreserve(&gbo->bo); 181 + } 182 + EXPORT_SYMBOL(drm_gem_vram_unlock); 183 + 184 + /** 185 + * drm_gem_vram_mmap_offset() - Returns a GEM VRAM object's mmap offset 186 + * @gbo: the GEM VRAM object 187 + * 188 + * See drm_vma_node_offset_addr() for more information. 189 + * 190 + * Returns: 191 + * The buffer object's offset for userspace mappings on success, or 192 + * 0 if no offset is allocated. 193 + */ 194 + u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo) 195 + { 196 + return drm_vma_node_offset_addr(&gbo->bo.vma_node); 197 + } 198 + EXPORT_SYMBOL(drm_gem_vram_mmap_offset); 199 + 200 + /** 201 + * drm_gem_vram_offset() - \ 202 + Returns a GEM VRAM object's offset in video memory 203 + * @gbo: the GEM VRAM object 204 + * 205 + * This function returns the buffer object's offset in the device's video 206 + * memory. The buffer object has to be pinned to %TTM_PL_VRAM. 207 + * 208 + * Returns: 209 + * The buffer object's offset in video memory on success, or 210 + * a negative errno code otherwise. 211 + */ 212 + s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo) 213 + { 214 + if (WARN_ON_ONCE(!gbo->pin_count)) 215 + return (s64)-ENODEV; 216 + return gbo->bo.offset; 217 + } 218 + EXPORT_SYMBOL(drm_gem_vram_offset); 219 + 220 + /** 221 + * drm_gem_vram_pin() - Pins a GEM VRAM object in a region. 222 + * @gbo: the GEM VRAM object 223 + * @pl_flag: a bitmask of possible memory regions 224 + * 225 + * Pinning a buffer object ensures that it is not evicted from 226 + * a memory region. A pinned buffer object has to be unpinned before 227 + * it can be pinned to another region. 228 + * 229 + * Returns: 230 + * 0 on success, or 231 + * a negative error code otherwise. 
232 + */ 233 + int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag) 234 + { 235 + int i, ret; 236 + struct ttm_operation_ctx ctx = { false, false }; 237 + 238 + ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 239 + if (ret < 0) 240 + return ret; 241 + 242 + if (gbo->pin_count) 243 + goto out; 244 + 245 + drm_gem_vram_placement(gbo, pl_flag); 246 + for (i = 0; i < gbo->placement.num_placement; ++i) 247 + gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 248 + 249 + ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 250 + if (ret < 0) 251 + goto err_ttm_bo_unreserve; 252 + 253 + out: 254 + ++gbo->pin_count; 255 + ttm_bo_unreserve(&gbo->bo); 256 + 257 + return 0; 258 + 259 + err_ttm_bo_unreserve: 260 + ttm_bo_unreserve(&gbo->bo); 261 + return ret; 262 + } 263 + EXPORT_SYMBOL(drm_gem_vram_pin); 264 + 265 + /** 266 + * drm_gem_vram_pin_locked() - Pins a GEM VRAM object in a region. 267 + * @gbo: the GEM VRAM object 268 + * @pl_flag: a bitmask of possible memory regions 269 + * 270 + * Pinning a buffer object ensures that it is not evicted from 271 + * a memory region. A pinned buffer object has to be unpinned before 272 + * it can be pinned to another region. 273 + * 274 + * This function pins a GEM VRAM object that has already been 275 + * locked. Use drm_gem_vram_pin() if possible. 276 + * 277 + * Returns: 278 + * 0 on success, or 279 + * a negative error code otherwise. 280 + */ 281 + int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo, 282 + unsigned long pl_flag) 283 + { 284 + int i, ret; 285 + struct ttm_operation_ctx ctx = { false, false }; 286 + 287 + lockdep_assert_held(&gbo->bo.resv->lock.base); 288 + 289 + if (gbo->pin_count) { 290 + ++gbo->pin_count; 291 + return 0; 292 + } 293 + 294 + drm_gem_vram_placement(gbo, pl_flag); 295 + for (i = 0; i < gbo->placement.num_placement; ++i) 296 + gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 297 + 298 + ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 299 + if (ret < 0) 300 + return ret; 301 + 302 + gbo->pin_count = 1; 303 + 304 + return 0; 305 + } 306 + EXPORT_SYMBOL(drm_gem_vram_pin_locked); 307 + 308 + /** 309 + * drm_gem_vram_unpin() - Unpins a GEM VRAM object 310 + * @gbo: the GEM VRAM object 311 + * 312 + * Returns: 313 + * 0 on success, or 314 + * a negative error code otherwise. 315 + */ 316 + int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) 317 + { 318 + int i, ret; 319 + struct ttm_operation_ctx ctx = { false, false }; 320 + 321 + ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 322 + if (ret < 0) 323 + return ret; 324 + 325 + if (WARN_ON_ONCE(!gbo->pin_count)) 326 + goto out; 327 + 328 + --gbo->pin_count; 329 + if (gbo->pin_count) 330 + goto out; 331 + 332 + for (i = 0; i < gbo->placement.num_placement ; ++i) 333 + gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 334 + 335 + ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 336 + if (ret < 0) 337 + goto err_ttm_bo_unreserve; 338 + 339 + out: 340 + ttm_bo_unreserve(&gbo->bo); 341 + 342 + return 0; 343 + 344 + err_ttm_bo_unreserve: 345 + ttm_bo_unreserve(&gbo->bo); 346 + return ret; 347 + } 348 + EXPORT_SYMBOL(drm_gem_vram_unpin); 349 + 350 + /** 351 + * drm_gem_vram_unpin_locked() - Unpins a GEM VRAM object 352 + * @gbo: the GEM VRAM object 353 + * 354 + * This function unpins a GEM VRAM object that has already been 355 + * locked. Use drm_gem_vram_unpin() if possible. 356 + * 357 + * Returns: 358 + * 0 on success, or 359 + * a negative error code otherwise. 
360 + */ 361 + int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo) 362 + { 363 + int i, ret; 364 + struct ttm_operation_ctx ctx = { false, false }; 365 + 366 + lockdep_assert_held(&gbo->bo.resv->lock.base); 367 + 368 + if (WARN_ON_ONCE(!gbo->pin_count)) 369 + return 0; 370 + 371 + --gbo->pin_count; 372 + if (gbo->pin_count) 373 + return 0; 374 + 375 + for (i = 0; i < gbo->placement.num_placement ; ++i) 376 + gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 377 + 378 + ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 379 + if (ret < 0) 380 + return ret; 381 + 382 + return 0; 383 + } 384 + EXPORT_SYMBOL(drm_gem_vram_unpin_locked); 385 + 386 + /** 387 + * drm_gem_vram_kmap_at() - Maps a GEM VRAM object into kernel address space 388 + * @gbo: the GEM VRAM object 389 + * @map: establish a mapping if necessary 390 + * @is_iomem: returns true if the mapped memory is I/O memory, or false \ 391 + otherwise; can be NULL 392 + * @kmap: the mapping's kmap object 393 + * 394 + * This function maps the buffer object into the kernel's address space 395 + * or returns the current mapping. If the parameter map is false, the 396 + * function only queries the current mapping, but does not establish a 397 + * new one. 398 + * 399 + * Returns: 400 + * The buffer's virtual address if mapped, or 401 + * NULL if not mapped, or 402 + * an ERR_PTR()-encoded error code otherwise. 403 + */ 404 + void *drm_gem_vram_kmap_at(struct drm_gem_vram_object *gbo, bool map, 405 + bool *is_iomem, struct ttm_bo_kmap_obj *kmap) 406 + { 407 + int ret; 408 + 409 + if (kmap->virtual || !map) 410 + goto out; 411 + 412 + ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); 413 + if (ret) 414 + return ERR_PTR(ret); 415 + 416 + out: 417 + if (!is_iomem) 418 + return kmap->virtual; 419 + if (!kmap->virtual) { 420 + *is_iomem = false; 421 + return NULL; 422 + } 423 + return ttm_kmap_obj_virtual(kmap, is_iomem); 424 + } 425 + EXPORT_SYMBOL(drm_gem_vram_kmap_at); 426 + 427 + /** 428 + * drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space 429 + * @gbo: the GEM VRAM object 430 + * @map: establish a mapping if necessary 431 + * @is_iomem: returns true if the mapped memory is I/O memory, or false \ 432 + otherwise; can be NULL 433 + * 434 + * This function maps the buffer object into the kernel's address space 435 + * or returns the current mapping. If the parameter map is false, the 436 + * function only queries the current mapping, but does not establish a 437 + * new one. 438 + * 439 + * Returns: 440 + * The buffer's virtual address if mapped, or 441 + * NULL if not mapped, or 442 + * an ERR_PTR()-encoded error code otherwise.
443 + */ 444 + void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map, 445 + bool *is_iomem) 446 + { 447 + return drm_gem_vram_kmap_at(gbo, map, is_iomem, &gbo->kmap); 448 + } 449 + EXPORT_SYMBOL(drm_gem_vram_kmap); 450 + 451 + /** 452 + * drm_gem_vram_kunmap_at() - Unmaps a GEM VRAM object 453 + * @gbo: the GEM VRAM object 454 + * @kmap: the mapping's kmap object 455 + */ 456 + void drm_gem_vram_kunmap_at(struct drm_gem_vram_object *gbo, 457 + struct ttm_bo_kmap_obj *kmap) 458 + { 459 + if (!kmap->virtual) 460 + return; 461 + 462 + ttm_bo_kunmap(kmap); 463 + kmap->virtual = NULL; 464 + } 465 + EXPORT_SYMBOL(drm_gem_vram_kunmap_at); 466 + 467 + /** 468 + * drm_gem_vram_kunmap() - Unmaps a GEM VRAM object 469 + * @gbo: the GEM VRAM object 470 + */ 471 + void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo) 472 + { 473 + drm_gem_vram_kunmap_at(gbo, &gbo->kmap); 474 + } 475 + EXPORT_SYMBOL(drm_gem_vram_kunmap); 476 + 477 + /** 478 + * drm_gem_vram_fill_create_dumb() - \ 479 + Helper for implementing &struct drm_driver.dumb_create 480 + * @file: the DRM file 481 + * @dev: the DRM device 482 + * @bdev: the TTM BO device managing the buffer object 483 + * @pg_align: the buffer's alignment in multiples of the page size 484 + * @interruptible: sleep interruptible if waiting for memory 485 + * @args: the arguments as provided to \ 486 + &struct drm_driver.dumb_create 487 + * 488 + * This helper function fills &struct drm_mode_create_dumb, which is used 489 + * by &struct drm_driver.dumb_create. Implementations of this interface 490 + * should forward their arguments to this helper, plus the driver-specific 491 + * parameters. 492 + * 493 + * Returns: 494 + * 0 on success, or 495 + * a negative error code otherwise. 496 + */ 497 + int drm_gem_vram_fill_create_dumb(struct drm_file *file, 498 + struct drm_device *dev, 499 + struct ttm_bo_device *bdev, 500 + unsigned long pg_align, 501 + bool interruptible, 502 + struct drm_mode_create_dumb *args) 503 + { 504 + size_t pitch, size; 505 + struct drm_gem_vram_object *gbo; 506 + int ret; 507 + u32 handle; 508 + 509 + pitch = args->width * ((args->bpp + 7) / 8); 510 + size = pitch * args->height; 511 + 512 + size = roundup(size, PAGE_SIZE); 513 + if (!size) 514 + return -EINVAL; 515 + 516 + gbo = drm_gem_vram_create(dev, bdev, size, pg_align, interruptible); 517 + if (IS_ERR(gbo)) 518 + return PTR_ERR(gbo); 519 + 520 + ret = drm_gem_handle_create(file, &gbo->gem, &handle); 521 + if (ret) 522 + goto err_drm_gem_object_put_unlocked; 523 + 524 + drm_gem_object_put_unlocked(&gbo->gem); 525 + 526 + args->pitch = pitch; 527 + args->size = size; 528 + args->handle = handle; 529 + 530 + return 0; 531 + 532 + err_drm_gem_object_put_unlocked: 533 + drm_gem_object_put_unlocked(&gbo->gem); 534 + return ret; 535 + } 536 + EXPORT_SYMBOL(drm_gem_vram_fill_create_dumb); 537 + 538 + /* 539 + * Helpers for struct ttm_bo_driver 540 + */ 541 + 542 + static bool drm_is_gem_vram(struct ttm_buffer_object *bo) 543 + { 544 + return (bo->destroy == ttm_buffer_object_destroy); 545 + } 546 + 547 + /** 548 + * drm_gem_vram_bo_driver_evict_flags() - \ 549 + Implements &struct ttm_bo_driver.evict_flags 550 + * @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo 551 + * @pl: TTM placement information. 552 + */ 553 + void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo, 554 + struct ttm_placement *pl) 555 + { 556 + struct drm_gem_vram_object *gbo; 557 + 558 + /* TTM may pass BOs that are not GEM VRAM BOs.
*/ 559 + if (!drm_is_gem_vram(bo)) 560 + return; 561 + 562 + gbo = drm_gem_vram_of_bo(bo); 563 + drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM); 564 + *pl = gbo->placement; 565 + } 566 + EXPORT_SYMBOL(drm_gem_vram_bo_driver_evict_flags); 567 + 568 + /** 569 + * drm_gem_vram_bo_driver_verify_access() - \ 570 + Implements &struct ttm_bo_driver.verify_access 571 + * @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo 572 + * @filp: File pointer. 573 + * 574 + * Returns: 575 + * 0 on success, or 576 + * a negative errno code otherwise. 577 + */ 578 + int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo, 579 + struct file *filp) 580 + { 581 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo); 582 + 583 + return drm_vma_node_verify_access(&gbo->gem.vma_node, 584 + filp->private_data); 585 + } 586 + EXPORT_SYMBOL(drm_gem_vram_bo_driver_verify_access); 587 + 588 + /** 589 + * drm_gem_vram_mm_funcs - Functions for &struct drm_vram_mm 590 + * 591 + * Most users of &struct drm_gem_vram_object will also use 592 + * &struct drm_vram_mm. This instance of &struct drm_vram_mm_funcs 593 + * can be used to connect both. 594 + */ 595 + const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs = { 596 + .evict_flags = drm_gem_vram_bo_driver_evict_flags, 597 + .verify_access = drm_gem_vram_bo_driver_verify_access 598 + }; 599 + EXPORT_SYMBOL(drm_gem_vram_mm_funcs); 600 + 601 + /* 602 + * Helpers for struct drm_driver 603 + */ 604 + 605 + /** 606 + * drm_gem_vram_driver_gem_free_object_unlocked() - \ 607 + Implements &struct drm_driver.gem_free_object_unlocked 608 + * @gem: GEM object. Refers to &struct drm_gem_vram_object.gem 609 + */ 610 + void drm_gem_vram_driver_gem_free_object_unlocked(struct drm_gem_object *gem) 611 + { 612 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 613 + 614 + drm_gem_vram_put(gbo); 615 + } 616 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_free_object_unlocked); 617 + 618 + /** 619 + * drm_gem_vram_driver_dumb_create() - \ 620 + Implements &struct drm_driver.dumb_create 621 + * @file: the DRM file 622 + * @dev: the DRM device 623 + * @args: the arguments as provided to \ 624 + &struct drm_driver.dumb_create 625 + * 626 + * This function requires the driver to use &drm_device.vram_mm for its 627 + * instance of VRAM MM. 628 + * 629 + * Returns: 630 + * 0 on success, or 631 + * a negative error code otherwise. 632 + */ 633 + int drm_gem_vram_driver_dumb_create(struct drm_file *file, 634 + struct drm_device *dev, 635 + struct drm_mode_create_dumb *args) 636 + { 637 + if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized")) 638 + return -EINVAL; 639 + 640 + return drm_gem_vram_fill_create_dumb(file, dev, &dev->vram_mm->bdev, 0, 641 + false, args); 642 + } 643 + EXPORT_SYMBOL(drm_gem_vram_driver_dumb_create); 644 + 645 + /** 646 + * drm_gem_vram_driver_dumb_mmap_offset() - \ 647 + Implements &struct drm_driver.dumb_mmap_offset 648 + * @file: DRM file pointer. 649 + * @dev: DRM device. 650 + * @handle: GEM handle 651 + * @offset: Returns the mapping's memory offset on success 652 + * 653 + * Returns: 654 + * 0 on success, or 655 + * a negative errno code otherwise.
656 + */ 657 + int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file, 658 + struct drm_device *dev, 659 + uint32_t handle, uint64_t *offset) 660 + { 661 + struct drm_gem_object *gem; 662 + struct drm_gem_vram_object *gbo; 663 + 664 + gem = drm_gem_object_lookup(file, handle); 665 + if (!gem) 666 + return -ENOENT; 667 + 668 + gbo = drm_gem_vram_of_gem(gem); 669 + *offset = drm_gem_vram_mmap_offset(gbo); 670 + 671 + drm_gem_object_put_unlocked(gem); 672 + 673 + return 0; 674 + } 675 + EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset); 676 + 677 + /* 678 + * PRIME helpers for struct drm_driver 679 + */ 680 + 681 + /** 682 + * drm_gem_vram_driver_gem_prime_pin() - \ 683 + Implements &struct drm_driver.gem_prime_pin 684 + * @gem: The GEM object to pin 685 + * 686 + * Returns: 687 + * 0 on success, or 688 + * a negative errno code otherwise. 689 + */ 690 + int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *gem) 691 + { 692 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 693 + 694 + return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 695 + } 696 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_pin); 697 + 698 + /** 699 + * drm_gem_vram_driver_gem_prime_unpin() - \ 700 + Implements &struct drm_driver.gem_prime_unpin 701 + * @gem: The GEM object to unpin 702 + */ 703 + void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *gem) 704 + { 705 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 706 + 707 + drm_gem_vram_unpin(gbo); 708 + } 709 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_unpin); 710 + 711 + /** 712 + * drm_gem_vram_driver_gem_prime_vmap() - \ 713 + Implements &struct drm_driver.gem_prime_vmap 714 + * @gem: The GEM object to map 715 + * 716 + * Returns: 717 + * The buffer's virtual address on success, or 718 + * NULL otherwise. 719 + */ 720 + void *drm_gem_vram_driver_gem_prime_vmap(struct drm_gem_object *gem) 721 + { 722 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 723 + int ret; 724 + void *base; 725 + 726 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 727 + if (ret) 728 + return NULL; 729 + base = drm_gem_vram_kmap(gbo, true, NULL); 730 + if (IS_ERR(base)) { 731 + drm_gem_vram_unpin(gbo); 732 + return NULL; 733 + } 734 + return base; 735 + } 736 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vmap); 737 + 738 + /** 739 + * drm_gem_vram_driver_gem_prime_vunmap() - \ 740 + Implements &struct drm_driver.gem_prime_vunmap 741 + * @gem: The GEM object to unmap 742 + * @vaddr: The mapping's base address 743 + */ 744 + void drm_gem_vram_driver_gem_prime_vunmap(struct drm_gem_object *gem, 745 + void *vaddr) 746 + { 747 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 748 + 749 + drm_gem_vram_kunmap(gbo); 750 + drm_gem_vram_unpin(gbo); 751 + } 752 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vunmap); 753 + 754 + /** 755 + * drm_gem_vram_driver_gem_prime_mmap() - \ 756 + Implements &struct drm_driver.gem_prime_mmap 757 + * @gem: The GEM object to map 758 + * @vma: The VMA describing the mapping 759 + * 760 + * Returns: 761 + * 0 on success, or 762 + * a negative errno code otherwise. 763 + */ 764 + int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *gem, 765 + struct vm_area_struct *vma) 766 + { 767 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 768 + 769 + gbo->gem.vma_node.vm_node.start = gbo->bo.vma_node.vm_node.start; 770 + return drm_gem_prime_mmap(gem, vma); 771 + } 772 + EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_mmap);
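Taken together, the new helpers give simple VRAM drivers a complete buffer lifecycle. A minimal sketch of that lifecycle, assuming a driver with an initialized TTM BO device; my_draw_something() is hypothetical and error handling is trimmed to the essentials:

        static int my_draw_something(struct drm_device *dev,
                                     struct ttm_bo_device *bdev)
        {
                struct drm_gem_vram_object *gbo;
                void *vaddr;
                int ret;

                gbo = drm_gem_vram_create(dev, bdev, 16 * PAGE_SIZE, 0, false);
                if (IS_ERR(gbo))
                        return PTR_ERR(gbo);

                /* Keep the BO resident in VRAM while we draw into it. */
                ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
                if (ret)
                        goto err_put;

                vaddr = drm_gem_vram_kmap(gbo, true, NULL);
                if (IS_ERR(vaddr)) {
                        ret = PTR_ERR(vaddr);
                        goto err_unpin;
                }

                /* ... write pixels through vaddr ... */

                drm_gem_vram_kunmap(gbo);
        err_unpin:
                drm_gem_vram_unpin(gbo);
        err_put:
                drm_gem_vram_put(gbo);
                return ret;
        }

A real scanout buffer would stay pinned for as long as it is displayed and would feed drm_gem_vram_offset() to the scanout engine rather than being torn down immediately.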
+2
drivers/gpu/drm/drm_internal.h
··· 93 93 struct drm_file *file_priv); 94 94 int drm_master_open(struct drm_file *file_priv); 95 95 void drm_master_release(struct drm_file *file_priv); 96 + bool drm_master_internal_acquire(struct drm_device *dev); 97 + void drm_master_internal_release(struct drm_device *dev); 96 98 97 99 /* drm_sysfs.c */ 98 100 extern struct class *drm_class;
+2
drivers/gpu/drm/drm_legacy.h
··· 187 187 void drm_legacy_init_members(struct drm_device *dev); 188 188 void drm_legacy_destroy_members(struct drm_device *dev); 189 189 void drm_legacy_dev_reinit(struct drm_device *dev); 190 + int drm_legacy_setup(struct drm_device *dev); 190 191 #else 191 192 static inline void drm_legacy_init_members(struct drm_device *dev) {} 192 193 static inline void drm_legacy_destroy_members(struct drm_device *dev) {} 193 194 static inline void drm_legacy_dev_reinit(struct drm_device *dev) {} 195 + static inline int drm_legacy_setup(struct drm_device *dev) { return 0; } 194 196 #endif 195 197 196 198 #if IS_ENABLED(CONFIG_DRM_LEGACY)
+20
drivers/gpu/drm/drm_legacy_misc.c
··· 51 51 mutex_destroy(&dev->ctxlist_mutex); 52 52 } 53 53 54 + int drm_legacy_setup(struct drm_device *dev) 55 + { 56 + int ret; 57 + 58 + if (dev->driver->firstopen && 59 + drm_core_check_feature(dev, DRIVER_LEGACY)) { 60 + ret = dev->driver->firstopen(dev); 61 + if (ret != 0) 62 + return ret; 63 + } 64 + 65 + ret = drm_legacy_dma_setup(dev); 66 + if (ret < 0) 67 + return ret; 68 + 69 + 70 + DRM_DEBUG("\n"); 71 + return 0; 72 + } 73 + 54 74 void drm_legacy_dev_reinit(struct drm_device *dev) 55 75 { 56 76 if (dev->irq_enabled)
+17 -60
drivers/gpu/drm/drm_prime.c
··· 86 86 struct rb_node handle_rb; 87 87 }; 88 88 89 - struct drm_prime_attachment { 90 - struct sg_table *sgt; 91 - enum dma_data_direction dir; 92 - }; 93 - 94 89 static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, 95 90 struct dma_buf *dma_buf, uint32_t handle) 96 91 { ··· 183 188 * @dma_buf: buffer to attach device to 184 189 * @attach: buffer attachment data 185 190 * 186 - * Allocates &drm_prime_attachment and calls &drm_driver.gem_prime_pin for 187 - * device specific attachment. This can be used as the &dma_buf_ops.attach 188 - * callback. 191 + * Calls &drm_driver.gem_prime_pin for device specific handling. This can be 192 + * used as the &dma_buf_ops.attach callback. 189 193 * 190 194 * Returns 0 on success, negative error code on failure. 191 195 */ 192 196 int drm_gem_map_attach(struct dma_buf *dma_buf, 193 197 struct dma_buf_attachment *attach) 194 198 { 195 - struct drm_prime_attachment *prime_attach; 196 199 struct drm_gem_object *obj = dma_buf->priv; 197 - 198 - prime_attach = kzalloc(sizeof(*prime_attach), GFP_KERNEL); 199 - if (!prime_attach) 200 - return -ENOMEM; 201 - 202 - prime_attach->dir = DMA_NONE; 203 - attach->priv = prime_attach; 204 200 205 201 return drm_gem_pin(obj); 206 202 } ··· 208 222 void drm_gem_map_detach(struct dma_buf *dma_buf, 209 223 struct dma_buf_attachment *attach) 210 224 { 211 - struct drm_prime_attachment *prime_attach = attach->priv; 212 225 struct drm_gem_object *obj = dma_buf->priv; 213 - 214 - if (prime_attach) { 215 - struct sg_table *sgt = prime_attach->sgt; 216 - 217 - if (sgt) { 218 - if (prime_attach->dir != DMA_NONE) 219 - dma_unmap_sg_attrs(attach->dev, sgt->sgl, 220 - sgt->nents, 221 - prime_attach->dir, 222 - DMA_ATTR_SKIP_CPU_SYNC); 223 - sg_free_table(sgt); 224 - } 225 - 226 - kfree(sgt); 227 - kfree(prime_attach); 228 - attach->priv = NULL; 229 - } 230 226 231 227 drm_gem_unpin(obj); 232 228 } ··· 254 286 struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach, 255 287 enum dma_data_direction dir) 256 288 { 257 - struct drm_prime_attachment *prime_attach = attach->priv; 258 289 struct drm_gem_object *obj = attach->dmabuf->priv; 259 290 struct sg_table *sgt; 260 291 261 - if (WARN_ON(dir == DMA_NONE || !prime_attach)) 292 + if (WARN_ON(dir == DMA_NONE)) 262 293 return ERR_PTR(-EINVAL); 263 - 264 - /* return the cached mapping when possible */ 265 - if (prime_attach->dir == dir) 266 - return prime_attach->sgt; 267 - 268 - /* 269 - * two mappings with different directions for the same attachment are 270 - * not allowed 271 - */ 272 - if (WARN_ON(prime_attach->dir != DMA_NONE)) 273 - return ERR_PTR(-EBUSY); 274 294 275 295 if (obj->funcs) 276 296 sgt = obj->funcs->get_sg_table(obj); 277 297 else 278 298 sgt = obj->dev->driver->gem_prime_get_sg_table(obj); 279 299 280 - if (!IS_ERR(sgt)) { 281 - if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, 282 - DMA_ATTR_SKIP_CPU_SYNC)) { 283 - sg_free_table(sgt); 284 - kfree(sgt); 285 - sgt = ERR_PTR(-ENOMEM); 286 - } else { 287 - prime_attach->sgt = sgt; 288 - prime_attach->dir = dir; 289 - } 300 + if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, 301 + DMA_ATTR_SKIP_CPU_SYNC)) { 302 + sg_free_table(sgt); 303 + kfree(sgt); 304 + sgt = ERR_PTR(-ENOMEM); 290 305 } 291 306 292 307 return sgt; ··· 282 331 * @sgt: scatterlist info of the buffer to unmap 283 332 * @dir: direction of DMA transfer 284 333 * 285 - * Not implemented. The unmap is done at drm_gem_map_detach(). 
This can be 286 - * used as the &dma_buf_ops.unmap_dma_buf callback. 334 + * This can be used as the &dma_buf_ops.unmap_dma_buf callback. 287 335 */ 288 336 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach, 289 337 struct sg_table *sgt, 290 338 enum dma_data_direction dir) 291 339 { 292 - /* nothing to be done here */ 340 + if (!sgt) 341 + return; 342 + 343 + dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, 344 + DMA_ATTR_SKIP_CPU_SYNC); 345 + sg_free_table(sgt); 346 + kfree(sgt); 293 347 } 294 348 EXPORT_SYMBOL(drm_gem_unmap_dma_buf); 295 349 ··· 408 452 EXPORT_SYMBOL(drm_gem_dmabuf_mmap); 409 453 410 454 static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = { 455 + .cache_sgt_mapping = true, 411 456 .attach = drm_gem_map_attach, 412 457 .detach = drm_gem_map_detach, 413 458 .map_dma_buf = drm_gem_map_dma_buf,
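With the attachment-local cache gone, drm_gem_map_dma_buf() and drm_gem_unmap_dma_buf() create and destroy a fresh mapping on every call, and caching is opted into via .cache_sgt_mapping in the dma-buf core instead. From an importer's point of view nothing changes; a sketch of that view (my_import() is hypothetical, error handling trimmed):

        static int my_import(struct dma_buf *dmabuf, struct device *dev)
        {
                struct dma_buf_attachment *attach;
                struct sg_table *sgt;

                attach = dma_buf_attach(dmabuf, dev);   /* -> drm_gem_map_attach() */
                if (IS_ERR(attach))
                        return PTR_ERR(attach);

                /* With cache_sgt_mapping the core may hand back a cached table. */
                sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
                if (IS_ERR(sgt)) {
                        dma_buf_detach(dmabuf, attach);
                        return PTR_ERR(sgt);
                }

                /* ... program the device with sgt ... */

                dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
                dma_buf_detach(dmabuf, attach);         /* -> drm_gem_map_detach() */
                return 0;
        }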
+96
drivers/gpu/drm/drm_vram_helper_common.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <linux/module.h> 4 + 5 + /** 6 + * DOC: overview 7 + * 8 + * This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM 9 + * buffer object that is backed by video RAM. It can be used for 10 + * framebuffer devices with dedicated memory. The video RAM can be 11 + * managed with &struct drm_vram_mm (VRAM MM). Both data structures are 12 + * supposed to be used together, but can also be used individually. 13 + * 14 + * With the GEM interface, userspace applications create, manage and destroy 15 + * graphics buffers, such as an on-screen framebuffer. GEM does not provide 16 + * an implementation of these interfaces. It's up to the DRM driver to 17 + * provide an implementation that suits the hardware. If the hardware device 18 + * contains dedicated video memory, the DRM driver can use the VRAM helper 19 + * library. Each active buffer object is stored in video RAM. Active 20 + * buffers are used for drawing the current frame, typically something like 21 + * the frame's scanout buffer or the cursor image. If there's no more space 22 + * left in VRAM, inactive GEM objects can be moved to system memory. 23 + * 24 + * The easiest way to use the VRAM helper library is to call 25 + * drm_vram_helper_alloc_mm(). The function allocates and initializes an 26 + * instance of &struct drm_vram_mm in &struct drm_device.vram_mm. Use 27 + * &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and 28 + * &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations, 29 + * as illustrated below. 30 + * 31 + * .. code-block:: c 32 + * 33 + * struct file_operations fops = { 34 + * .owner = THIS_MODULE, 35 + * DRM_VRAM_MM_FILE_OPERATIONS 36 + * }; 37 + * struct drm_driver drv = { 38 + * .driver_features = DRM_ ... , 39 + * .fops = &fops, 40 + * DRM_GEM_VRAM_DRIVER 41 + * }; 42 + * 43 + * int init_drm_driver() 44 + * { 45 + * struct drm_device *dev; 46 + * uint64_t vram_base; 47 + * unsigned long vram_size; 48 + * struct drm_vram_mm *mm; 49 + * 50 + * // setup device, vram base and size 51 + * // ... 52 + * 53 + * mm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size, 54 + * &drm_gem_vram_mm_funcs); 55 + * if (IS_ERR(mm)) 56 + * return PTR_ERR(mm); 57 + * return 0; 58 + * } 59 + * 60 + * This creates an instance of &struct drm_vram_mm, exports DRM userspace 61 + * interfaces for GEM buffer management and initializes file operations to 62 + * allow for accessing created GEM buffers. With this setup, the DRM driver 63 + * manages an area of video RAM with VRAM MM and provides GEM VRAM objects 64 + * to userspace. 65 + * 66 + * To clean up the VRAM memory management, call drm_vram_helper_release_mm() 67 + * in the driver's clean-up code. 68 + * 69 + * .. code-block:: c 70 + * 71 + * void fini_drm_driver() 72 + * { 73 + * struct drm_device *dev = ...; 74 + * 75 + * drm_vram_helper_release_mm(dev); 76 + * } 77 + * 78 + * For drawing or scanout operations, buffer objects have to be pinned in video 79 + * RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or 80 + * &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system 81 + * memory. Call drm_gem_vram_unpin() to release the pinned object afterwards. 82 + * 83 + * A buffer object that is pinned in video RAM has a fixed address within that 84 + * memory region. Call drm_gem_vram_offset() to retrieve this value.
Typically 85 + * it's used to program the hardware's scanout engine for framebuffers, set 86 + * the cursor overlay's image for a mouse cursor, or as input to the 87 + * hardware's drawing engine. 88 + * 89 + * To access a buffer object's memory from the DRM driver, call 90 + * drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address 91 + * space and returns the memory address. Use drm_gem_vram_kunmap() to 92 + * release the mapping. 93 + */ 94 + 95 + MODULE_DESCRIPTION("DRM VRAM memory-management helpers"); 96 + MODULE_LICENSE("GPL");
+295
drivers/gpu/drm/drm_vram_mm_helper.c
// SPDX-License-Identifier: GPL-2.0-or-later

#include <drm/drm_vram_mm_helper.h>
#include <drm/drmP.h>
#include <drm/ttm/ttm_page_alloc.h>

/**
 * DOC: overview
 *
 * The data structure &struct drm_vram_mm and its helpers implement a memory
 * manager for simple framebuffer devices with dedicated video memory. Buffer
 * objects are either placed in video RAM or evicted to system memory. These
 * helper functions work well with &struct drm_gem_vram_object.
 */

/*
 * TTM TT
 */

static void backend_func_destroy(struct ttm_tt *tt)
{
	ttm_tt_fini(tt);
	kfree(tt);
}

static struct ttm_backend_func backend_func = {
	.destroy = backend_func_destroy
};

/*
 * TTM BO device
 */

static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
					      uint32_t page_flags)
{
	struct ttm_tt *tt;
	int ret;

	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
	if (!tt)
		return NULL;

	tt->func = &backend_func;

	ret = ttm_tt_init(tt, bo, page_flags);
	if (ret < 0)
		goto err_ttm_tt_init;

	return tt;

err_ttm_tt_init:
	kfree(tt);
	return NULL;
}

static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
				   struct ttm_mem_type_manager *man)
{
	switch (type) {
	case TTM_PL_SYSTEM:
		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
		man->available_caching = TTM_PL_MASK_CACHING;
		man->default_caching = TTM_PL_FLAG_CACHED;
		break;
	case TTM_PL_VRAM:
		man->func = &ttm_bo_manager_func;
		man->flags = TTM_MEMTYPE_FLAG_FIXED |
			     TTM_MEMTYPE_FLAG_MAPPABLE;
		man->available_caching = TTM_PL_FLAG_UNCACHED |
					 TTM_PL_FLAG_WC;
		man->default_caching = TTM_PL_FLAG_WC;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}

static void bo_driver_evict_flags(struct ttm_buffer_object *bo,
				  struct ttm_placement *placement)
{
	struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

	if (vmm->funcs && vmm->funcs->evict_flags)
		vmm->funcs->evict_flags(bo, placement);
}

static int bo_driver_verify_access(struct ttm_buffer_object *bo,
				   struct file *filp)
{
	struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

	if (!vmm->funcs || !vmm->funcs->verify_access)
		return 0;
	return vmm->funcs->verify_access(bo, filp);
}

static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
				    struct ttm_mem_reg *mem)
{
	struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
	struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);

	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
		return -EINVAL;

	mem->bus.addr = NULL;
	mem->bus.size = mem->num_pages << PAGE_SHIFT;

	switch (mem->mem_type) {
	case TTM_PL_SYSTEM:	/* nothing to do */
		mem->bus.offset = 0;
		mem->bus.base = 0;
		mem->bus.is_iomem = false;
		break;
	case TTM_PL_VRAM:
		mem->bus.offset = mem->start << PAGE_SHIFT;
		mem->bus.base = vmm->vram_base;
		mem->bus.is_iomem = true;
		break;
	default:
		return -EINVAL;
	}

	return 0;
}

static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
				  struct ttm_mem_reg *mem)
{ }

static struct ttm_bo_driver bo_driver = {
	.ttm_tt_create = bo_driver_ttm_tt_create,
	.ttm_tt_populate = ttm_pool_populate,
	.ttm_tt_unpopulate = ttm_pool_unpopulate,
	.init_mem_type = bo_driver_init_mem_type,
	.eviction_valuable = ttm_bo_eviction_valuable,
	.evict_flags = bo_driver_evict_flags,
	.verify_access = bo_driver_verify_access,
	.io_mem_reserve = bo_driver_io_mem_reserve,
	.io_mem_free = bo_driver_io_mem_free,
};

/*
 * struct drm_vram_mm
 */

/**
 * drm_vram_mm_init() - Initialize an instance of VRAM MM.
 * @vmm: the VRAM MM instance to initialize
 * @dev: the DRM device
 * @vram_base: the base address of the video memory
 * @vram_size: the size of the video memory in bytes
 * @funcs: callback functions for buffer objects
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
		     uint64_t vram_base, size_t vram_size,
		     const struct drm_vram_mm_funcs *funcs)
{
	int ret;

	vmm->vram_base = vram_base;
	vmm->vram_size = vram_size;
	vmm->funcs = funcs;

	ret = ttm_bo_device_init(&vmm->bdev, &bo_driver,
				 dev->anon_inode->i_mapping,
				 true);
	if (ret)
		return ret;

	ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT);
	if (ret)
		return ret;

	return 0;
}
EXPORT_SYMBOL(drm_vram_mm_init);

/**
 * drm_vram_mm_cleanup() - Cleans up an initialized instance of VRAM MM.
 * @vmm: the VRAM MM instance to clean up
 */
void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
{
	ttm_bo_device_release(&vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_cleanup);

/**
 * drm_vram_mm_mmap() - Helper for implementing &struct file_operations.mmap()
 * @filp: the mapping's file structure
 * @vma: the mapping's memory area
 * @vmm: the VRAM MM instance
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
		     struct drm_vram_mm *vmm)
{
	return ttm_bo_mmap(filp, vma, &vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_mmap);

/*
 * Helpers for integration with struct drm_device
 */

/**
 * drm_vram_helper_alloc_mm() - Allocates a device's instance of &struct drm_vram_mm
 * @dev: the DRM device
 * @vram_base: the base address of the video memory
 * @vram_size: the size of the video memory in bytes
 * @funcs: callback functions for buffer objects
 *
 * Returns:
 * The new instance of &struct drm_vram_mm on success, or
 * an ERR_PTR()-encoded errno code otherwise.
 */
struct drm_vram_mm *drm_vram_helper_alloc_mm(
	struct drm_device *dev, uint64_t vram_base, size_t vram_size,
	const struct drm_vram_mm_funcs *funcs)
{
	int ret;

	if (WARN_ON(dev->vram_mm))
		return dev->vram_mm;

	dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL);
	if (!dev->vram_mm)
		return ERR_PTR(-ENOMEM);

	ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size, funcs);
	if (ret)
		goto err_kfree;

	return dev->vram_mm;

err_kfree:
	kfree(dev->vram_mm);
	dev->vram_mm = NULL;
	return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);

/**
 * drm_vram_helper_release_mm() - Releases a device's instance of &struct drm_vram_mm
 * @dev: the DRM device
 */
void drm_vram_helper_release_mm(struct drm_device *dev)
{
	if (!dev->vram_mm)
		return;

	drm_vram_mm_cleanup(dev->vram_mm);
	kfree(dev->vram_mm);
	dev->vram_mm = NULL;
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);

/*
 * Helpers for &struct file_operations
 */

/**
 * drm_vram_mm_file_operations_mmap() - Implements &struct file_operations.mmap()
 * @filp: the mapping's file structure
 * @vma: the mapping's memory area
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_file_operations_mmap(
	struct file *filp, struct vm_area_struct *vma)
{
	struct drm_file *file_priv = filp->private_data;
	struct drm_device *dev = file_priv->minor->dev;

	if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
		return -EINVAL;

	return drm_vram_mm_mmap(filp, vma, dev->vram_mm);
}
EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
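Note: for drivers adopting these helpers, the intended wiring is small. The following is a minimal sketch only, not code from this series; the foo_* names are placeholders, and it combines drm_vram_helper_alloc_mm(), drm_vram_helper_release_mm() and the DRM_VRAM_MM_FILE_OPERATIONS macro the same way the hibmc and mgag200 conversions below do:

    /* Sketch of a driver adopting the VRAM MM helpers. Assumes
     * &drm_gem_vram_mm_funcs provides the evict_flags/verify_access
     * callbacks, as in the hibmc conversion. */
    static const struct file_operations foo_fops = {
    	.owner = THIS_MODULE,
    	DRM_VRAM_MM_FILE_OPERATIONS /* wires up drm_vram_mm_file_operations_mmap() */
    };

    static int foo_vram_init(struct drm_device *dev, uint64_t vram_base,
    			     size_t vram_size)
    {
    	struct drm_vram_mm *vmm;

    	vmm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
    				       &drm_gem_vram_mm_funcs);
    	if (IS_ERR(vmm))
    		return PTR_ERR(vmm);
    	return 0;
    }

    static void foo_vram_fini(struct drm_device *dev)
    {
    	drm_vram_helper_release_mm(dev); /* no-op if alloc_mm() never ran */
    }

The GEM side pairs with the DRM_GEM_VRAM_DRIVER macro in struct drm_driver, as the mgag200 conversion below shows.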
-5
drivers/gpu/drm/etnaviv/etnaviv_dump.c
··· 118 118 unsigned int n_obj, n_bomap_pages; 119 119 size_t file_size, mmu_size; 120 120 __le64 *bomap, *bomap_start; 121 - unsigned long flags; 122 121 123 122 /* Only catch the first event, or when manually re-armed */ 124 123 if (!etnaviv_dump_core) ··· 134 135 mmu_size + gpu->buffer.size; 135 136 136 137 /* Add in the active command buffers */ 137 - spin_lock_irqsave(&gpu->sched.job_list_lock, flags); 138 138 list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { 139 139 submit = to_etnaviv_submit(s_job); 140 140 file_size += submit->cmdbuf.size; 141 141 n_obj++; 142 142 } 143 - spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags); 144 143 145 144 /* Add in the active buffer objects */ 146 145 list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) { ··· 180 183 gpu->buffer.size, 181 184 etnaviv_cmdbuf_get_va(&gpu->buffer)); 182 185 183 - spin_lock_irqsave(&gpu->sched.job_list_lock, flags); 184 186 list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { 185 187 submit = to_etnaviv_submit(s_job); 186 188 etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, 187 189 submit->cmdbuf.vaddr, submit->cmdbuf.size, 188 190 etnaviv_cmdbuf_get_va(&submit->cmdbuf)); 189 191 } 190 - spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags); 191 192 192 193 /* Reserve space for the bomap */ 193 194 if (n_bomap_pages) {
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 109 109 } 110 110 111 111 /* block scheduler */ 112 - drm_sched_stop(&gpu->sched); 112 + drm_sched_stop(&gpu->sched, sched_job); 113 113 114 114 if(sched_job) 115 115 drm_sched_increase_karma(sched_job);
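Note: as part of the scheduler job-destruction rework, drm_sched_stop() now takes the guilty job as a second argument, so callers hand the timed-out job to the scheduler instead of letting it guess. A typical timeout handler then has roughly this shape; this is a sketch, foo_* names are placeholders, and drm_sched_resubmit_jobs()/drm_sched_start() are assumed to be the companion calls from the same rework rather than shown in this hunk:

    static void foo_timedout_job(struct drm_sched_job *sched_job)
    {
    	struct foo_gpu *gpu = to_foo_gpu(sched_job->sched); /* hypothetical */

    	/* Park the scheduler thread and detach the bad job. */
    	drm_sched_stop(&gpu->sched, sched_job);

    	if (sched_job)
    		drm_sched_increase_karma(sched_job);

    	/* ... reset the hardware here ... */

    	/* Re-queue the surviving jobs and unpark the scheduler. */
    	drm_sched_resubmit_jobs(&gpu->sched);
    	drm_sched_start(&gpu->sched, true);
    }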
+11 -11
drivers/gpu/drm/gma500/accel_2d.c
··· 20 20 * 21 21 **************************************************************************/ 22 22 23 - #include <linux/module.h> 24 - #include <linux/kernel.h> 25 - #include <linux/errno.h> 26 - #include <linux/string.h> 27 - #include <linux/mm.h> 28 - #include <linux/tty.h> 29 - #include <linux/slab.h> 30 - #include <linux/delay.h> 31 - #include <linux/init.h> 32 23 #include <linux/console.h> 24 + #include <linux/delay.h> 25 + #include <linux/errno.h> 26 + #include <linux/init.h> 27 + #include <linux/kernel.h> 28 + #include <linux/mm.h> 29 + #include <linux/module.h> 30 + #include <linux/slab.h> 31 + #include <linux/string.h> 32 + #include <linux/tty.h> 33 33 34 - #include <drm/drmP.h> 35 34 #include <drm/drm.h> 36 35 #include <drm/drm_crtc.h> 36 + #include <drm/drm_fourcc.h> 37 37 38 + #include "framebuffer.h" 38 39 #include "psb_drv.h" 39 40 #include "psb_reg.h" 40 - #include "framebuffer.h" 41 41 42 42 /** 43 43 * psb_spank - reset the 2D engine
+2
drivers/gpu/drm/gma500/blitter.h
··· 17 17 #ifndef __BLITTER_H 18 18 #define __BLITTER_H 19 19 20 + struct drm_psb_private; 21 + 20 22 extern int gma_blt_wait_idle(struct drm_psb_private *dev_priv); 21 23 22 24 #endif
+7 -6
drivers/gpu/drm/gma500/cdv_device.c
··· 18 18 **************************************************************************/ 19 19 20 20 #include <linux/backlight.h> 21 - #include <drm/drmP.h> 21 + #include <linux/delay.h> 22 + 22 23 #include <drm/drm.h> 23 - #include <drm/gma_drm.h> 24 - #include "psb_drv.h" 25 - #include "psb_reg.h" 26 - #include "psb_intel_reg.h" 27 - #include "intel_bios.h" 24 + 28 25 #include "cdv_device.h" 29 26 #include "gma_device.h" 27 + #include "intel_bios.h" 28 + #include "psb_drv.h" 29 + #include "psb_intel_reg.h" 30 + #include "psb_reg.h" 30 31 31 32 #define VGA_SR_INDEX 0x3c4 32 33 #define VGA_SR_DATA 0x3c5
+4
drivers/gpu/drm/gma500/cdv_device.h
··· 15 15 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 16 16 */ 17 17 18 + struct drm_crtc; 19 + struct drm_device; 20 + struct psb_intel_mode_device; 21 + 18 22 extern const struct drm_crtc_helper_funcs cdv_intel_helper_funcs; 19 23 extern const struct drm_crtc_funcs cdv_intel_crtc_funcs; 20 24 extern const struct gma_clock_funcs cdv_clock_funcs;
+4 -4
drivers/gpu/drm/gma500/cdv_intel_crt.c
··· 24 24 * Eric Anholt <eric@anholt.net> 25 25 */ 26 26 27 + #include <linux/delay.h> 27 28 #include <linux/i2c.h> 28 - #include <drm/drmP.h> 29 + #include <linux/pm_runtime.h> 29 30 31 + #include "cdv_device.h" 30 32 #include "intel_bios.h" 33 + #include "power.h" 31 34 #include "psb_drv.h" 32 35 #include "psb_intel_drv.h" 33 36 #include "psb_intel_reg.h" 34 - #include "power.h" 35 - #include "cdv_device.h" 36 - #include <linux/pm_runtime.h> 37 37 38 38 39 39 static void cdv_intel_crt_dpms(struct drm_encoder *encoder, int mode)
+6 -4
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 18 18 * Eric Anholt <eric@anholt.net> 19 19 */ 20 20 21 + #include <linux/delay.h> 21 22 #include <linux/i2c.h> 22 23 23 - #include <drm/drmP.h> 24 + #include <drm/drm_crtc.h> 25 + 26 + #include "cdv_device.h" 24 27 #include "framebuffer.h" 28 + #include "gma_display.h" 29 + #include "power.h" 25 30 #include "psb_drv.h" 26 31 #include "psb_intel_drv.h" 27 32 #include "psb_intel_reg.h" 28 - #include "gma_display.h" 29 - #include "power.h" 30 - #include "cdv_device.h" 31 33 32 34 static bool cdv_intel_find_dp_pll(const struct gma_limit_t *limit, 33 35 struct drm_crtc *crtc, int target,
+5 -4
drivers/gpu/drm/gma500/cdv_intel_dp.c
··· 26 26 */ 27 27 28 28 #include <linux/i2c.h> 29 - #include <linux/slab.h> 30 29 #include <linux/module.h> 31 - #include <drm/drmP.h> 30 + #include <linux/slab.h> 31 + 32 32 #include <drm/drm_crtc.h> 33 33 #include <drm/drm_crtc_helper.h> 34 + #include <drm/drm_dp_helper.h> 35 + 36 + #include "gma_display.h" 34 37 #include "psb_drv.h" 35 38 #include "psb_intel_drv.h" 36 39 #include "psb_intel_reg.h" 37 - #include "gma_display.h" 38 - #include <drm/drm_dp_helper.h> 39 40 40 41 /** 41 42 * struct i2c_algo_dp_aux_data - driver interface structure for i2c over dp
+6 -5
drivers/gpu/drm/gma500/cdv_intel_hdmi.c
··· 27 27 * We should probably make this generic and share it with Medfield 28 28 */ 29 29 30 - #include <drm/drmP.h> 30 + #include <linux/pm_runtime.h> 31 + 31 32 #include <drm/drm.h> 32 33 #include <drm/drm_crtc.h> 33 34 #include <drm/drm_edid.h> 34 - #include "psb_intel_drv.h" 35 - #include "psb_drv.h" 36 - #include "psb_intel_reg.h" 35 + 37 36 #include "cdv_device.h" 38 - #include <linux/pm_runtime.h> 37 + #include "psb_drv.h" 38 + #include "psb_intel_drv.h" 39 + #include "psb_intel_reg.h" 39 40 40 41 /* hdmi control bits */ 41 42 #define HDMI_NULL_PACKETS_DURING_VSYNC (1 << 9)
+4 -5
drivers/gpu/drm/gma500/cdv_intel_lvds.c
··· 20 20 * Jesse Barnes <jesse.barnes@intel.com> 21 21 */ 22 22 23 - #include <linux/i2c.h> 24 23 #include <linux/dmi.h> 25 - #include <drm/drmP.h> 24 + #include <linux/i2c.h> 25 + #include <linux/pm_runtime.h> 26 26 27 + #include "cdv_device.h" 27 28 #include "intel_bios.h" 29 + #include "power.h" 28 30 #include "psb_drv.h" 29 31 #include "psb_intel_drv.h" 30 32 #include "psb_intel_reg.h" 31 - #include "power.h" 32 - #include <linux/pm_runtime.h> 33 - #include "cdv_device.h" 34 33 35 34 /** 36 35 * LVDS I2C backlight control macros
+15 -15
drivers/gpu/drm/gma500/framebuffer.c
··· 17 17 * 18 18 **************************************************************************/ 19 19 20 - #include <linux/module.h> 21 - #include <linux/kernel.h> 22 - #include <linux/errno.h> 23 - #include <linux/string.h> 24 - #include <linux/pfn_t.h> 25 - #include <linux/mm.h> 26 - #include <linux/tty.h> 27 - #include <linux/slab.h> 28 - #include <linux/delay.h> 29 - #include <linux/init.h> 30 20 #include <linux/console.h> 21 + #include <linux/delay.h> 22 + #include <linux/errno.h> 23 + #include <linux/init.h> 24 + #include <linux/kernel.h> 25 + #include <linux/mm.h> 26 + #include <linux/module.h> 27 + #include <linux/pfn_t.h> 28 + #include <linux/slab.h> 29 + #include <linux/string.h> 30 + #include <linux/tty.h> 31 31 32 - #include <drm/drmP.h> 33 32 #include <drm/drm.h> 34 33 #include <drm/drm_crtc.h> 35 34 #include <drm/drm_fb_helper.h> 35 + #include <drm/drm_fourcc.h> 36 36 #include <drm/drm_gem_framebuffer_helper.h> 37 37 38 - #include "psb_drv.h" 39 - #include "psb_intel_reg.h" 40 - #include "psb_intel_drv.h" 41 38 #include "framebuffer.h" 42 39 #include "gtt.h" 40 + #include "psb_drv.h" 41 + #include "psb_intel_drv.h" 42 + #include "psb_intel_reg.h" 43 43 44 44 static const struct drm_framebuffer_funcs psb_fb_funcs = { 45 45 .destroy = drm_gem_fb_destroy, ··· 232 232 * Reject unknown formats, YUV formats, and formats with more than 233 233 * 4 bytes per pixel. 234 234 */ 235 - info = drm_format_info(mode_cmd->pixel_format); 235 + info = drm_get_format_info(dev, mode_cmd); 236 236 if (!info || !info->depth || info->cpp[0] > 4) 237 237 return -EINVAL; 238 238
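Note: the drm_format_info() lookup in the framebuffer path becomes drm_get_format_info(), which resolves the format through the device rather than only the generic fourcc table, so a driver with modifier-specific layouts can supply its own &struct drm_format_info. As a sketch of the validation this enables in an .fb_create hook (the checks are illustrative, not from this series):

    /* Validate a framebuffer request against the per-device format info;
     * dev and mode_cmd come from the .fb_create hook. */
    const struct drm_format_info *info = drm_get_format_info(dev, mode_cmd);
    unsigned int i;

    if (!info)
    	return -EINVAL;

    for (i = 0; i < info->num_planes; i++) {
    	unsigned int width = mode_cmd->width / (i ? info->hsub : 1);
    	unsigned int min_pitch = width * info->cpp[i];

    	if (mode_cmd->pitches[i] < min_pitch)
    		return -EINVAL;
    }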
-1
drivers/gpu/drm/gma500/framebuffer.h
··· 22 22 #ifndef _FRAMEBUFFER_H_ 23 23 #define _FRAMEBUFFER_H_ 24 24 25 - #include <drm/drmP.h> 26 25 #include <drm/drm_fb_helper.h> 27 26 28 27 #include "psb_drv.h"
+3 -2
drivers/gpu/drm/gma500/gem.c
··· 23 23 * accelerated operations on a GEM object) 24 24 */ 25 25 26 - #include <drm/drmP.h> 26 + #include <linux/pagemap.h> 27 + 27 28 #include <drm/drm.h> 28 - #include <drm/gma_drm.h> 29 29 #include <drm/drm_vma_manager.h> 30 + 30 31 #include "psb_drv.h" 31 32 32 33 void psb_gem_free_object(struct drm_gem_object *obj)
-1
drivers/gpu/drm/gma500/gma_device.c
··· 13 13 * 14 14 **************************************************************************/ 15 15 16 - #include <drm/drmP.h> 17 16 #include "psb_drv.h" 18 17 19 18 void gma_get_core_freq(struct drm_device *dev)
+1
drivers/gpu/drm/gma500/gma_device.h
··· 15 15 16 16 #ifndef _GMA_DEVICE_H 17 17 #define _GMA_DEVICE_H 18 + struct drm_device; 18 19 19 20 extern void gma_get_core_freq(struct drm_device *dev); 20 21
+9 -3
drivers/gpu/drm/gma500/gma_display.c
··· 19 19 * Patrik Jakobsson <patrik.r.jakobsson@gmail.com> 20 20 */ 21 21 22 - #include <drm/drmP.h> 22 + #include <linux/delay.h> 23 + #include <linux/highmem.h> 24 + 25 + #include <drm/drm_crtc.h> 26 + #include <drm/drm_fourcc.h> 27 + #include <drm/drm_vblank.h> 28 + 29 + #include "framebuffer.h" 23 30 #include "gma_display.h" 31 + #include "psb_drv.h" 24 32 #include "psb_intel_drv.h" 25 33 #include "psb_intel_reg.h" 26 - #include "psb_drv.h" 27 - #include "framebuffer.h" 28 34 29 35 /** 30 36 * Returns whether any output on the specified pipe is of the specified type
+3
drivers/gpu/drm/gma500/gma_display.h
··· 24 24 25 25 #include <linux/pm_runtime.h> 26 26 27 + struct drm_encoder; 28 + struct drm_mode_set; 29 + 27 30 struct gma_clock_t { 28 31 /* given values */ 29 32 int n;
+3 -2
drivers/gpu/drm/gma500/gtt.c
··· 19 19 * Alan Cox <alan@linux.intel.com> 20 20 */ 21 21 22 - #include <drm/drmP.h> 23 22 #include <linux/shmem_fs.h> 23 + 24 24 #include <asm/set_memory.h> 25 - #include "psb_drv.h" 25 + 26 26 #include "blitter.h" 27 + #include "psb_drv.h" 27 28 28 29 29 30 /*
-1
drivers/gpu/drm/gma500/gtt.h
··· 20 20 #ifndef _PSB_GTT_H_ 21 21 #define _PSB_GTT_H_ 22 22 23 - #include <drm/drmP.h> 24 23 #include <drm/drm_gem.h> 25 24 26 25 /* This wants cleaning up with respect to the psb_dev and un-needed stuff */
+3 -3
drivers/gpu/drm/gma500/intel_bios.c
··· 18 18 * Eric Anholt <eric@anholt.net> 19 19 * 20 20 */ 21 - #include <drm/drmP.h> 22 21 #include <drm/drm.h> 23 - #include <drm/gma_drm.h> 22 + #include <drm/drm_dp_helper.h> 23 + 24 + #include "intel_bios.h" 24 25 #include "psb_drv.h" 25 26 #include "psb_intel_drv.h" 26 27 #include "psb_intel_reg.h" 27 - #include "intel_bios.h" 28 28 29 29 #define SLAVE_ADDR1 0x70 30 30 #define SLAVE_ADDR2 0x72
+1 -2
drivers/gpu/drm/gma500/intel_bios.h
··· 22 22 #ifndef _INTEL_BIOS_H_ 23 23 #define _INTEL_BIOS_H_ 24 24 25 - #include <drm/drmP.h> 26 - #include <drm/drm_dp_helper.h> 25 + struct drm_device; 27 26 28 27 struct vbt_header { 29 28 u8 signature[20]; /**< Always starts with 'VBT$' */
+6 -5
drivers/gpu/drm/gma500/intel_gmbus.c
··· 26 26 * Eric Anholt <eric@anholt.net> 27 27 * Chris Wilson <chris@chris-wilson.co.uk> 28 28 */ 29 - #include <linux/module.h> 30 - #include <linux/i2c.h> 29 + 30 + #include <linux/delay.h> 31 31 #include <linux/i2c-algo-bit.h> 32 - #include <drm/drmP.h> 33 - #include "psb_intel_drv.h" 34 - #include <drm/gma_drm.h> 32 + #include <linux/i2c.h> 33 + #include <linux/module.h> 34 + 35 35 #include "psb_drv.h" 36 + #include "psb_intel_drv.h" 36 37 #include "psb_intel_reg.h" 37 38 38 39 #define _wait_for(COND, MS, W) ({ \
+3 -1
drivers/gpu/drm/gma500/intel_i2c.c
··· 17 17 * Authors: 18 18 * Eric Anholt <eric@anholt.net> 19 19 */ 20 + 21 + #include <linux/delay.h> 20 22 #include <linux/export.h> 21 - #include <linux/i2c.h> 22 23 #include <linux/i2c-algo-bit.h> 24 + #include <linux/i2c.h> 23 25 24 26 #include "psb_drv.h" 25 27 #include "psb_intel_reg.h"
+9 -7
drivers/gpu/drm/gma500/mdfld_device.c
··· 17 17 * 18 18 **************************************************************************/ 19 19 20 - #include "psb_drv.h" 21 - #include "mid_bios.h" 22 - #include "mdfld_output.h" 23 - #include "mdfld_dsi_output.h" 24 - #include "tc35876x-dsi-lvds.h" 20 + #include <linux/delay.h> 25 21 26 22 #include <asm/intel_scu_ipc.h> 23 + 24 + #include "mdfld_dsi_output.h" 25 + #include "mdfld_output.h" 26 + #include "mid_bios.h" 27 + #include "psb_drv.h" 28 + #include "tc35876x-dsi-lvds.h" 27 29 28 30 #ifdef CONFIG_BACKLIGHT_CLASS_DEVICE 29 31 ··· 344 342 345 343 if (pipenum == 1) { 346 344 /* restore palette (gamma) */ 347 - /*DRM_UDELAY(50000); */ 345 + /* udelay(50000); */ 348 346 for (i = 0; i < 256; i++) 349 347 PSB_WVDC32(pipe->palette[i], map->palette + (i << 2)); 350 348 ··· 406 404 PSB_WVDC32(pipe->conf, map->conf); 407 405 408 406 /* restore palette (gamma) */ 409 - /*DRM_UDELAY(50000); */ 407 + /* udelay(50000); */ 410 408 for (i = 0; i < 256; i++) 411 409 PSB_WVDC32(pipe->palette[i], map->palette + (i << 2)); 412 410
+3 -1
drivers/gpu/drm/gma500/mdfld_dsi_dpi.c
··· 25 25 * Jackie Li<yaodong.li@intel.com> 26 26 */ 27 27 28 + #include <linux/delay.h> 29 + 28 30 #include "mdfld_dsi_dpi.h" 29 - #include "mdfld_output.h" 30 31 #include "mdfld_dsi_pkg_sender.h" 32 + #include "mdfld_output.h" 31 33 #include "psb_drv.h" 32 34 #include "tc35876x-dsi-lvds.h" 33 35
+9 -7
drivers/gpu/drm/gma500/mdfld_dsi_output.c
··· 25 25 * Jackie Li<yaodong.li@intel.com> 26 26 */ 27 27 28 - #include <linux/module.h> 29 - 30 - #include "mdfld_dsi_output.h" 31 - #include "mdfld_dsi_dpi.h" 32 - #include "mdfld_output.h" 33 - #include "mdfld_dsi_pkg_sender.h" 34 - #include "tc35876x-dsi-lvds.h" 28 + #include <linux/delay.h> 29 + #include <linux/moduleparam.h> 35 30 #include <linux/pm_runtime.h> 31 + 36 32 #include <asm/intel_scu_ipc.h> 33 + 34 + #include "mdfld_dsi_dpi.h" 35 + #include "mdfld_dsi_output.h" 36 + #include "mdfld_dsi_pkg_sender.h" 37 + #include "mdfld_output.h" 38 + #include "tc35876x-dsi-lvds.h" 37 39 38 40 /* get the LABC from command line. */ 39 41 static int LABC_control = 1;
+4 -4
drivers/gpu/drm/gma500/mdfld_dsi_output.h
··· 29 29 #define __MDFLD_DSI_OUTPUT_H__ 30 30 31 31 #include <linux/backlight.h> 32 - #include <drm/drmP.h> 32 + 33 + #include <asm/intel-mid.h> 34 + 33 35 #include <drm/drm.h> 34 36 #include <drm/drm_crtc.h> 35 37 #include <drm/drm_edid.h> 36 38 39 + #include "mdfld_output.h" 37 40 #include "psb_drv.h" 38 41 #include "psb_intel_drv.h" 39 42 #include "psb_intel_reg.h" 40 - #include "mdfld_output.h" 41 - 42 - #include <asm/intel-mid.h> 43 43 44 44 #define FLD_MASK(start, end) (((1 << ((start) - (end) + 1)) - 1) << (end)) 45 45 #define FLD_VAL(val, start, end) (((val) << (end)) & FLD_MASK(start, end))
+3 -1
drivers/gpu/drm/gma500/mdfld_dsi_pkg_sender.c
··· 24 24 * Jackie Li<yaodong.li@intel.com> 25 25 */ 26 26 27 + #include <linux/delay.h> 27 28 #include <linux/freezer.h> 29 + 28 30 #include <video/mipi_display.h> 29 31 32 + #include "mdfld_dsi_dpi.h" 30 33 #include "mdfld_dsi_output.h" 31 34 #include "mdfld_dsi_pkg_sender.h" 32 - #include "mdfld_dsi_dpi.h" 33 35 34 36 #define MDFLD_DSI_READ_MAX_COUNT 5000 35 37
+7 -4
drivers/gpu/drm/gma500/mdfld_intel_display.c
··· 18 18 * Eric Anholt <eric@anholt.net> 19 19 */ 20 20 21 + #include <linux/delay.h> 21 22 #include <linux/i2c.h> 22 23 #include <linux/pm_runtime.h> 23 24 24 - #include <drm/drmP.h> 25 - #include "psb_intel_reg.h" 26 - #include "gma_display.h" 25 + #include <drm/drm_crtc.h> 26 + #include <drm/drm_fourcc.h> 27 + 27 28 #include "framebuffer.h" 28 - #include "mdfld_output.h" 29 + #include "gma_display.h" 29 30 #include "mdfld_dsi_output.h" 31 + #include "mdfld_output.h" 32 + #include "psb_intel_reg.h" 30 33 31 34 /* Hardcoded currently */ 32 35 static int ksel = KSEL_CRYSTAL_19;
+2
drivers/gpu/drm/gma500/mdfld_tmd_vid.c
··· 27 27 * Scott Rowe <scott.m.rowe@intel.com> 28 28 */ 29 29 30 + #include <linux/delay.h> 31 + 30 32 #include "mdfld_dsi_dpi.h" 31 33 #include "mdfld_dsi_pkg_sender.h" 32 34
+2 -3
drivers/gpu/drm/gma500/mid_bios.c
··· 23 23 * - Check ioremap failures 24 24 */ 25 25 26 - #include <drm/drmP.h> 27 26 #include <drm/drm.h> 28 - #include <drm/gma_drm.h> 29 - #include "psb_drv.h" 27 + 30 28 #include "mid_bios.h" 29 + #include "psb_drv.h" 31 30 32 31 static void mid_get_fuse_settings(struct drm_device *dev) 33 32 {
+1
drivers/gpu/drm/gma500/mid_bios.h
··· 16 16 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 17 * 18 18 **************************************************************************/ 19 + struct drm_device; 19 20 20 21 extern int mid_chip_setup(struct drm_device *dev); 21 22
+4 -2
drivers/gpu/drm/gma500/mmu.c
··· 15 15 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 16 16 * 17 17 **************************************************************************/ 18 - #include <drm/drmP.h> 18 + 19 + #include <linux/highmem.h> 20 + 21 + #include "mmu.h" 19 22 #include "psb_drv.h" 20 23 #include "psb_reg.h" 21 - #include "mmu.h" 22 24 23 25 /* 24 26 * Code for the SGX MMU:
+2
drivers/gpu/drm/gma500/oaktrail.h
··· 17 17 * 18 18 **************************************************************************/ 19 19 20 + struct psb_intel_mode_device; 21 + 20 22 /* MID device specific descriptors */ 21 23 22 24 struct oaktrail_timing_info {
+5 -3
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 15 15 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 16 16 */ 17 17 18 + #include <linux/delay.h> 18 19 #include <linux/i2c.h> 19 20 #include <linux/pm_runtime.h> 20 21 21 - #include <drm/drmP.h> 22 + #include <drm/drm_fourcc.h> 23 + 22 24 #include "framebuffer.h" 25 + #include "gma_display.h" 26 + #include "power.h" 23 27 #include "psb_drv.h" 24 28 #include "psb_intel_drv.h" 25 29 #include "psb_intel_reg.h" 26 - #include "gma_display.h" 27 - #include "power.h" 28 30 29 31 #define MRST_LIMIT_LVDS_100L 0 30 32 #define MRST_LIMIT_LVDS_83 1
+11 -9
drivers/gpu/drm/gma500/oaktrail_device.c
··· 18 18 **************************************************************************/ 19 19 20 20 #include <linux/backlight.h> 21 - #include <linux/module.h> 21 + #include <linux/delay.h> 22 22 #include <linux/dmi.h> 23 - #include <drm/drmP.h> 24 - #include <drm/drm.h> 25 - #include <drm/gma_drm.h> 26 - #include "psb_drv.h" 27 - #include "psb_reg.h" 28 - #include "psb_intel_reg.h" 23 + #include <linux/module.h> 24 + 29 25 #include <asm/intel-mid.h> 30 26 #include <asm/intel_scu_ipc.h> 31 - #include "mid_bios.h" 27 + 28 + #include <drm/drm.h> 29 + 32 30 #include "intel_bios.h" 31 + #include "mid_bios.h" 32 + #include "psb_drv.h" 33 + #include "psb_intel_reg.h" 34 + #include "psb_reg.h" 33 35 34 36 static int oaktrail_output_init(struct drm_device *dev) 35 37 { ··· 329 327 330 328 /* Actually enable it */ 331 329 PSB_WVDC32(p->dpll, MRST_DPLL_A); 332 - DRM_UDELAY(150); 330 + udelay(150); 333 331 334 332 /* Restore mode */ 335 333 PSB_WVDC32(p->htotal, HTOTAL_A);
+5 -3
drivers/gpu/drm/gma500/oaktrail_hdmi.c
··· 24 24 * Li Peng <peng.li@intel.com> 25 25 */ 26 26 27 - #include <drm/drmP.h> 27 + #include <linux/delay.h> 28 + 28 29 #include <drm/drm.h> 30 + 31 + #include "psb_drv.h" 29 32 #include "psb_intel_drv.h" 30 33 #include "psb_intel_reg.h" 31 - #include "psb_drv.h" 32 34 33 35 #define HDMI_READ(reg) readl(hdmi_dev->regs + (reg)) 34 36 #define HDMI_WRITE(reg, val) writel(val, hdmi_dev->regs + (reg)) ··· 817 815 PSB_WVDC32(hdmi_dev->saveDPLL_ADJUST, DPLL_ADJUST); 818 816 PSB_WVDC32(hdmi_dev->saveDPLL_UPDATE, DPLL_UPDATE); 819 817 PSB_WVDC32(hdmi_dev->saveDPLL_CLK_ENABLE, DPLL_CLK_ENABLE); 820 - DRM_UDELAY(150); 818 + udelay(150); 821 819 822 820 /* pipe */ 823 821 PSB_WVDC32(pipeb->src, PIPEBSRC);
+3 -3
drivers/gpu/drm/gma500/oaktrail_lvds.c
··· 21 21 */ 22 22 23 23 #include <linux/i2c.h> 24 - #include <drm/drmP.h> 24 + #include <linux/pm_runtime.h> 25 + 25 26 #include <asm/intel-mid.h> 26 27 27 28 #include "intel_bios.h" 29 + #include "power.h" 28 30 #include "psb_drv.h" 29 31 #include "psb_intel_drv.h" 30 32 #include "psb_intel_reg.h" 31 - #include "power.h" 32 - #include <linux/pm_runtime.h> 33 33 34 34 /* The max/min PWM frequency in BPCR[31:17] - */ 35 35 /* The smallest number is 1 (not 0) that can fit in the
+5 -6
drivers/gpu/drm/gma500/oaktrail_lvds_i2c.c
··· 23 23 * 24 24 */ 25 25 26 + #include <linux/delay.h> 27 + #include <linux/i2c-algo-bit.h> 28 + #include <linux/i2c.h> 29 + #include <linux/init.h> 30 + #include <linux/io.h> 26 31 #include <linux/kernel.h> 27 32 #include <linux/module.h> 28 33 #include <linux/pci.h> 29 34 #include <linux/types.h> 30 - #include <linux/i2c.h> 31 - #include <linux/i2c-algo-bit.h> 32 - #include <linux/init.h> 33 - #include <linux/io.h> 34 - #include <linux/delay.h> 35 35 36 - #include <drm/drmP.h> 37 36 #include "psb_drv.h" 38 37 #include "psb_intel_reg.h" 39 38
+3 -1
drivers/gpu/drm/gma500/power.h
··· 31 31 #define _PSB_POWERMGMT_H_ 32 32 33 33 #include <linux/pci.h> 34 - #include <drm/drmP.h> 34 + 35 + struct device; 36 + struct drm_device; 35 37 36 38 void gma_power_init(struct drm_device *dev); 37 39 void gma_power_uninit(struct drm_device *dev);
+6 -6
drivers/gpu/drm/gma500/psb_device.c
··· 18 18 **************************************************************************/ 19 19 20 20 #include <linux/backlight.h> 21 - #include <drm/drmP.h> 21 + 22 22 #include <drm/drm.h> 23 - #include <drm/gma_drm.h> 24 - #include "psb_drv.h" 25 - #include "psb_reg.h" 26 - #include "psb_intel_reg.h" 23 + 24 + #include "gma_device.h" 27 25 #include "intel_bios.h" 28 26 #include "psb_device.h" 29 - #include "gma_device.h" 27 + #include "psb_drv.h" 28 + #include "psb_intel_reg.h" 29 + #include "psb_reg.h" 30 30 31 31 static int psb_output_init(struct drm_device *dev) 32 32 {
+21 -12
drivers/gpu/drm/gma500/psb_drv.c
··· 19 19 * 20 20 **************************************************************************/ 21 21 22 - #include <drm/drmP.h> 22 + #include <linux/cpu.h> 23 + #include <linux/module.h> 24 + #include <linux/notifier.h> 25 + #include <linux/pm_runtime.h> 26 + #include <linux/spinlock.h> 27 + 28 + #include <asm/set_memory.h> 29 + 30 + #include <acpi/video.h> 31 + 23 32 #include <drm/drm.h> 24 - #include "psb_drv.h" 33 + #include <drm/drm_drv.h> 34 + #include <drm/drm_file.h> 35 + #include <drm/drm_ioctl.h> 36 + #include <drm/drm_irq.h> 37 + #include <drm/drm_pci.h> 38 + #include <drm/drm_pciids.h> 39 + #include <drm/drm_vblank.h> 40 + 25 41 #include "framebuffer.h" 26 - #include "psb_reg.h" 27 - #include "psb_intel_reg.h" 28 42 #include "intel_bios.h" 29 43 #include "mid_bios.h" 30 - #include <drm/drm_pciids.h> 31 44 #include "power.h" 32 - #include <linux/cpu.h> 33 - #include <linux/notifier.h> 34 - #include <linux/spinlock.h> 35 - #include <linux/pm_runtime.h> 36 - #include <acpi/video.h> 37 - #include <linux/module.h> 38 - #include <asm/set_memory.h> 45 + #include "psb_drv.h" 46 + #include "psb_intel_reg.h" 47 + #include "psb_reg.h" 39 48 40 49 static struct drm_driver driver; 41 50 static int psb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
+8 -8
drivers/gpu/drm/gma500/psb_drv.h
··· 23 23 #include <linux/kref.h> 24 24 #include <linux/mm_types.h> 25 25 26 - #include <drm/drmP.h> 27 - #include <drm/gma_drm.h> 28 - #include "psb_reg.h" 29 - #include "psb_intel_drv.h" 26 + #include <drm/drm_device.h> 27 + 30 28 #include "gma_display.h" 31 - #include "intel_bios.h" 32 29 #include "gtt.h" 33 - #include "power.h" 34 - #include "opregion.h" 35 - #include "oaktrail.h" 30 + #include "intel_bios.h" 36 31 #include "mmu.h" 32 + #include "oaktrail.h" 33 + #include "opregion.h" 34 + #include "power.h" 35 + #include "psb_intel_drv.h" 36 + #include "psb_reg.h" 37 37 38 38 #define DRIVER_AUTHOR "Alan Cox <alan@linux.intel.com> and others" 39 39
+4 -3
drivers/gpu/drm/gma500/psb_intel_display.c
··· 18 18 * Eric Anholt <eric@anholt.net> 19 19 */ 20 20 21 + #include <linux/delay.h> 21 22 #include <linux/i2c.h> 22 23 23 - #include <drm/drmP.h> 24 24 #include <drm/drm_plane_helper.h> 25 + 25 26 #include "framebuffer.h" 27 + #include "gma_display.h" 28 + #include "power.h" 26 29 #include "psb_drv.h" 27 30 #include "psb_intel_drv.h" 28 31 #include "psb_intel_reg.h" 29 - #include "gma_display.h" 30 - #include "power.h" 31 32 32 33 #define INTEL_LIMIT_I9XX_SDVO_DAC 0 33 34 #define INTEL_LIMIT_I9XX_LVDS 1
+2 -3
drivers/gpu/drm/gma500/psb_intel_lvds.c
··· 21 21 */ 22 22 23 23 #include <linux/i2c.h> 24 - #include <drm/drmP.h> 24 + #include <linux/pm_runtime.h> 25 25 26 26 #include "intel_bios.h" 27 + #include "power.h" 27 28 #include "psb_drv.h" 28 29 #include "psb_intel_drv.h" 29 30 #include "psb_intel_reg.h" 30 - #include "power.h" 31 - #include <linux/pm_runtime.h> 32 31 33 32 /* 34 33 * LVDS I2C backlight control macros
+1 -1
drivers/gpu/drm/gma500/psb_intel_modes.c
··· 18 18 */ 19 19 20 20 #include <linux/i2c.h> 21 - #include <drm/drmP.h> 21 + 22 22 #include "psb_intel_drv.h" 23 23 24 24 /**
+9 -8
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 25 25 * Authors: 26 26 * Eric Anholt <eric@anholt.net> 27 27 */ 28 - #include <linux/module.h> 29 - #include <linux/i2c.h> 30 - #include <linux/slab.h> 28 + 31 29 #include <linux/delay.h> 32 - #include <drm/drmP.h> 30 + #include <linux/i2c.h> 31 + #include <linux/kernel.h> 32 + #include <linux/module.h> 33 + #include <linux/slab.h> 34 + 33 35 #include <drm/drm_crtc.h> 34 36 #include <drm/drm_edid.h> 35 - #include "psb_intel_drv.h" 36 - #include <drm/gma_drm.h> 37 + 37 38 #include "psb_drv.h" 38 - #include "psb_intel_sdvo_regs.h" 39 + #include "psb_intel_drv.h" 39 40 #include "psb_intel_reg.h" 40 - #include <linux/kernel.h> 41 + #include "psb_intel_sdvo_regs.h" 41 42 42 43 #define SDVO_TMDS_MASK (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1) 43 44 #define SDVO_RGB_MASK (SDVO_OUTPUT_RGB0 | SDVO_OUTPUT_RGB1)
+7 -6
drivers/gpu/drm/gma500/psb_irq.c
··· 22 22 /* 23 23 */ 24 24 25 - #include <drm/drmP.h> 26 - #include "psb_drv.h" 27 - #include "psb_reg.h" 28 - #include "psb_intel_reg.h" 29 - #include "power.h" 30 - #include "psb_irq.h" 25 + #include <drm/drm_vblank.h> 26 + 31 27 #include "mdfld_output.h" 28 + #include "power.h" 29 + #include "psb_drv.h" 30 + #include "psb_intel_reg.h" 31 + #include "psb_irq.h" 32 + #include "psb_reg.h" 32 33 33 34 /* 34 35 * inline functions
+1 -1
drivers/gpu/drm/gma500/psb_irq.h
··· 24 24 #ifndef _PSB_IRQ_H_ 25 25 #define _PSB_IRQ_H_ 26 26 27 - #include <drm/drmP.h> 27 + struct drm_device; 28 28 29 29 bool sysirq_init(struct drm_device *dev); 30 30 void sysirq_uninit(struct drm_device *dev);
+4 -4
drivers/gpu/drm/gma500/psb_lid.c
··· 17 17 * Authors: Thomas Hellstrom <thomas-at-tungstengraphics-dot-com> 18 18 **************************************************************************/ 19 19 20 - #include <drm/drmP.h> 21 - #include "psb_drv.h" 22 - #include "psb_reg.h" 23 - #include "psb_intel_reg.h" 24 20 #include <linux/spinlock.h> 21 + 22 + #include "psb_drv.h" 23 + #include "psb_intel_reg.h" 24 + #include "psb_reg.h" 25 25 26 26 static void psb_lid_timer_func(struct timer_list *t) 27 27 {
+8 -5
drivers/gpu/drm/gma500/tc35876x-dsi-lvds.c
··· 22 22 * 23 23 */ 24 24 25 - #include "mdfld_dsi_dpi.h" 26 - #include "mdfld_output.h" 27 - #include "mdfld_dsi_pkg_sender.h" 28 - #include "tc35876x-dsi-lvds.h" 29 - #include <linux/platform_data/tc35876x.h> 25 + #include <linux/delay.h> 30 26 #include <linux/kernel.h> 31 27 #include <linux/module.h> 28 + #include <linux/platform_data/tc35876x.h> 29 + 32 30 #include <asm/intel_scu_ipc.h> 31 + 32 + #include "mdfld_dsi_dpi.h" 33 + #include "mdfld_dsi_pkg_sender.h" 34 + #include "mdfld_output.h" 35 + #include "tc35876x-dsi-lvds.h" 33 36 34 37 static struct i2c_client *tc35876x_client; 35 38 static struct i2c_client *cmi_lcd_i2c_client;
+1 -1
drivers/gpu/drm/hisilicon/hibmc/Kconfig
··· 3 3 tristate "DRM Support for Hisilicon Hibmc" 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 - select DRM_TTM 6 + select DRM_VRAM_HELPER 7 7 8 8 help 9 9 Choose this option if you have a Hisilicon Hibmc soc chipset.
+9 -10
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
··· 96 96 struct drm_plane_state *state = plane->state; 97 97 u32 reg; 98 98 int ret; 99 - u64 gpu_addr = 0; 99 + s64 gpu_addr = 0; 100 100 unsigned int line_l; 101 101 struct hibmc_drm_private *priv = plane->dev->dev_private; 102 102 struct hibmc_framebuffer *hibmc_fb; 103 - struct hibmc_bo *bo; 103 + struct drm_gem_vram_object *gbo; 104 104 105 105 if (!state->fb) 106 106 return; 107 107 108 108 hibmc_fb = to_hibmc_framebuffer(state->fb); 109 - bo = gem_to_hibmc_bo(hibmc_fb->obj); 110 - ret = ttm_bo_reserve(&bo->bo, true, false, NULL); 109 + gbo = drm_gem_vram_of_gem(hibmc_fb->obj); 110 + 111 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 111 112 if (ret) { 112 - DRM_ERROR("failed to reserve ttm_bo: %d", ret); 113 + DRM_ERROR("failed to pin bo: %d", ret); 113 114 return; 114 115 } 115 - 116 - ret = hibmc_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr); 117 - ttm_bo_unreserve(&bo->bo); 118 - if (ret) { 119 - DRM_ERROR("failed to pin hibmc_bo: %d", ret); 116 + gpu_addr = drm_gem_vram_offset(gbo); 117 + if (gpu_addr < 0) { 118 + drm_gem_vram_unpin(gbo); 120 119 return; 121 120 } 122 121
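Note: the pattern used here generalizes to any scanout path on the new API: pin the buffer into VRAM, ask for its offset, and drop the pin on any failure. Roughly (a sketch with error propagation simplified):

    struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
    s64 gpu_addr;
    int ret;

    ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
    if (ret)
    	return ret;

    gpu_addr = drm_gem_vram_offset(gbo); /* only valid while pinned */
    if (gpu_addr < 0) {
    	drm_gem_vram_unpin(gbo);
    	return (int)gpu_addr;
    }

    /* ... program the scanout engine with gpu_addr; unpin on teardown ... */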
+4 -10
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 27 27 28 28 static const struct file_operations hibmc_fops = { 29 29 .owner = THIS_MODULE, 30 - .open = drm_open, 31 - .release = drm_release, 32 - .unlocked_ioctl = drm_ioctl, 33 - .compat_ioctl = drm_compat_ioctl, 34 - .mmap = hibmc_mmap, 35 - .poll = drm_poll, 36 - .read = drm_read, 37 - .llseek = no_llseek, 30 + DRM_VRAM_MM_FILE_OPERATIONS 38 31 }; 39 32 40 33 static irqreturn_t hibmc_drm_interrupt(int irq, void *arg) ··· 56 63 .desc = "hibmc drm driver", 57 64 .major = 1, 58 65 .minor = 0, 59 - .gem_free_object_unlocked = hibmc_gem_free_object, 66 + .gem_free_object_unlocked = 67 + drm_gem_vram_driver_gem_free_object_unlocked, 60 68 .dumb_create = hibmc_dumb_create, 61 - .dumb_map_offset = hibmc_dumb_mmap_offset, 69 + .dumb_map_offset = drm_gem_vram_driver_dumb_mmap_offset, 62 70 .irq_handler = hibmc_drm_interrupt, 63 71 }; 64 72
+2 -31
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
··· 23 23 #include <drm/drm_atomic.h> 24 24 #include <drm/drm_fb_helper.h> 25 25 #include <drm/drm_gem.h> 26 - #include <drm/ttm/ttm_bo_driver.h> 26 + #include <drm/drm_gem_vram_helper.h> 27 + #include <drm/drm_vram_mm_helper.h> 27 28 28 29 struct hibmc_framebuffer { 29 30 struct drm_framebuffer fb; ··· 49 48 struct drm_device *dev; 50 49 bool mode_config_initialized; 51 50 52 - /* ttm */ 53 - struct ttm_bo_device bdev; 54 - bool initialized; 55 - 56 51 /* fbdev */ 57 52 struct hibmc_fbdev *fbdev; 58 - bool mm_inited; 59 53 }; 60 54 61 55 #define to_hibmc_framebuffer(x) container_of(x, struct hibmc_framebuffer, fb) 62 - 63 - struct hibmc_bo { 64 - struct ttm_buffer_object bo; 65 - struct ttm_placement placement; 66 - struct ttm_bo_kmap_obj kmap; 67 - struct drm_gem_object gem; 68 - struct ttm_place placements[3]; 69 - int pin_count; 70 - }; 71 - 72 - static inline struct hibmc_bo *hibmc_bo(struct ttm_buffer_object *bo) 73 - { 74 - return container_of(bo, struct hibmc_bo, bo); 75 - } 76 - 77 - static inline struct hibmc_bo *gem_to_hibmc_bo(struct drm_gem_object *gem) 78 - { 79 - return container_of(gem, struct hibmc_bo, gem); 80 - } 81 56 82 57 void hibmc_set_power_mode(struct hibmc_drm_private *priv, 83 58 unsigned int power_mode); ··· 74 97 75 98 int hibmc_mm_init(struct hibmc_drm_private *hibmc); 76 99 void hibmc_mm_fini(struct hibmc_drm_private *hibmc); 77 - int hibmc_bo_pin(struct hibmc_bo *bo, u32 pl_flag, u64 *gpu_addr); 78 - int hibmc_bo_unpin(struct hibmc_bo *bo); 79 - void hibmc_gem_free_object(struct drm_gem_object *obj); 80 100 int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev, 81 101 struct drm_mode_create_dumb *args); 82 - int hibmc_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev, 83 - u32 handle, u64 *offset); 84 - int hibmc_mmap(struct file *filp, struct vm_area_struct *vma); 85 102 86 103 extern const struct drm_mode_config_funcs hibmc_mode_funcs; 87 104
+12 -25
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
··· 63 63 struct drm_mode_fb_cmd2 mode_cmd; 64 64 struct drm_gem_object *gobj = NULL; 65 65 int ret = 0; 66 - int ret1; 67 66 size_t size; 68 67 unsigned int bytes_per_pixel; 69 - struct hibmc_bo *bo = NULL; 68 + struct drm_gem_vram_object *gbo = NULL; 69 + void *base; 70 70 71 71 DRM_DEBUG_DRIVER("surface width(%d), height(%d) and bpp(%d)\n", 72 72 sizes->surface_width, sizes->surface_height, ··· 88 88 return -ENOMEM; 89 89 } 90 90 91 - bo = gem_to_hibmc_bo(gobj); 91 + gbo = drm_gem_vram_of_gem(gobj); 92 92 93 - ret = ttm_bo_reserve(&bo->bo, true, false, NULL); 93 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 94 94 if (ret) { 95 - DRM_ERROR("failed to reserve ttm_bo: %d\n", ret); 95 + DRM_ERROR("failed to pin fbcon: %d\n", ret); 96 96 goto out_unref_gem; 97 97 } 98 98 99 - ret = hibmc_bo_pin(bo, TTM_PL_FLAG_VRAM, NULL); 100 - if (ret) { 101 - DRM_ERROR("failed to pin fbcon: %d\n", ret); 102 - goto out_unreserve_ttm_bo; 103 - } 104 - 105 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 106 - if (ret) { 99 + base = drm_gem_vram_kmap(gbo, true, NULL); 100 + if (IS_ERR(base)) { 101 + ret = PTR_ERR(base); 107 102 DRM_ERROR("failed to kmap fbcon: %d\n", ret); 108 103 goto out_unpin_bo; 109 104 } 110 - ttm_bo_unreserve(&bo->bo); 111 105 112 106 info = drm_fb_helper_alloc_fbi(helper); 113 107 if (IS_ERR(info)) { ··· 125 131 126 132 drm_fb_helper_fill_info(info, &priv->fbdev->helper, sizes); 127 133 128 - info->screen_base = bo->kmap.virtual; 134 + info->screen_base = base; 129 135 info->screen_size = size; 130 136 131 - info->fix.smem_start = bo->bo.mem.bus.offset + bo->bo.mem.bus.base; 137 + info->fix.smem_start = gbo->bo.mem.bus.offset + gbo->bo.mem.bus.base; 132 138 info->fix.smem_len = size; 133 139 return 0; 134 140 135 141 out_release_fbi: 136 - ret1 = ttm_bo_reserve(&bo->bo, true, false, NULL); 137 - if (ret1) { 138 - DRM_ERROR("failed to rsv ttm_bo when release fbi: %d\n", ret1); 139 - goto out_unref_gem; 140 - } 141 - ttm_bo_kunmap(&bo->kmap); 142 + drm_gem_vram_kunmap(gbo); 142 143 out_unpin_bo: 143 - hibmc_bo_unpin(bo); 144 - out_unreserve_ttm_bo: 145 - ttm_bo_unreserve(&bo->bo); 144 + drm_gem_vram_unpin(gbo); 146 145 out_unref_gem: 147 146 drm_gem_object_put_unlocked(gobj); 148 147
+16 -325
drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
···
  */
 
 #include <drm/drm_atomic_helper.h>
-#include <drm/ttm/ttm_page_alloc.h>
 
 #include "hibmc_drm_drv.h"
 
-static inline struct hibmc_drm_private *
-hibmc_bdev(struct ttm_bo_device *bd)
-{
-	return container_of(bd, struct hibmc_drm_private, bdev);
-}
-
-static void hibmc_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-	struct hibmc_bo *bo = container_of(tbo, struct hibmc_bo, bo);
-
-	drm_gem_object_release(&bo->gem);
-	kfree(bo);
-}
-
-static bool hibmc_ttm_bo_is_hibmc_bo(struct ttm_buffer_object *bo)
-{
-	return bo->destroy == &hibmc_bo_ttm_destroy;
-}
-
-static int
-hibmc_bo_init_mem_type(struct ttm_bo_device *bdev, u32 type,
-		       struct ttm_mem_type_manager *man)
-{
-	switch (type) {
-	case TTM_PL_SYSTEM:
-		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_MASK_CACHING;
-		man->default_caching = TTM_PL_FLAG_CACHED;
-		break;
-	case TTM_PL_VRAM:
-		man->func = &ttm_bo_manager_func;
-		man->flags = TTM_MEMTYPE_FLAG_FIXED |
-			TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_FLAG_UNCACHED |
-			TTM_PL_FLAG_WC;
-		man->default_caching = TTM_PL_FLAG_WC;
-		break;
-	default:
-		DRM_ERROR("unsupported memory type %u\n", type);
-		return -EINVAL;
-	}
-	return 0;
-}
-
-void hibmc_ttm_placement(struct hibmc_bo *bo, int domain)
-{
-	u32 count = 0;
-	u32 i;
-
-	bo->placement.placement = bo->placements;
-	bo->placement.busy_placement = bo->placements;
-	if (domain & TTM_PL_FLAG_VRAM)
-		bo->placements[count++].flags = TTM_PL_FLAG_WC |
-			TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-	if (domain & TTM_PL_FLAG_SYSTEM)
-		bo->placements[count++].flags = TTM_PL_MASK_CACHING |
-			TTM_PL_FLAG_SYSTEM;
-	if (!count)
-		bo->placements[count++].flags = TTM_PL_MASK_CACHING |
-			TTM_PL_FLAG_SYSTEM;
-
-	bo->placement.num_placement = count;
-	bo->placement.num_busy_placement = count;
-	for (i = 0; i < count; i++) {
-		bo->placements[i].fpfn = 0;
-		bo->placements[i].lpfn = 0;
-	}
-}
-
-static void
-hibmc_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-	struct hibmc_bo *hibmcbo = hibmc_bo(bo);
-
-	if (!hibmc_ttm_bo_is_hibmc_bo(bo))
-		return;
-
-	hibmc_ttm_placement(hibmcbo, TTM_PL_FLAG_SYSTEM);
-	*pl = hibmcbo->placement;
-}
-
-static int hibmc_bo_verify_access(struct ttm_buffer_object *bo,
-				  struct file *filp)
-{
-	struct hibmc_bo *hibmcbo = hibmc_bo(bo);
-
-	return drm_vma_node_verify_access(&hibmcbo->gem.vma_node,
-					  filp->private_data);
-}
-
-static int hibmc_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-				    struct ttm_mem_reg *mem)
-{
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-	struct hibmc_drm_private *hibmc = hibmc_bdev(bdev);
-
-	mem->bus.addr = NULL;
-	mem->bus.offset = 0;
-	mem->bus.size = mem->num_pages << PAGE_SHIFT;
-	mem->bus.base = 0;
-	mem->bus.is_iomem = false;
-	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-		return -EINVAL;
-	switch (mem->mem_type) {
-	case TTM_PL_SYSTEM:
-		/* system memory */
-		return 0;
-	case TTM_PL_VRAM:
-		mem->bus.offset = mem->start << PAGE_SHIFT;
-		mem->bus.base = pci_resource_start(hibmc->dev->pdev, 0);
-		mem->bus.is_iomem = true;
-		break;
-	default:
-		return -EINVAL;
-	}
-	return 0;
-}
-
-static void hibmc_ttm_backend_destroy(struct ttm_tt *tt)
-{
-	ttm_tt_fini(tt);
-	kfree(tt);
-}
-
-static struct ttm_backend_func hibmc_tt_backend_func = {
-	.destroy = &hibmc_ttm_backend_destroy,
-};
-
-static struct ttm_tt *hibmc_ttm_tt_create(struct ttm_buffer_object *bo,
-					  u32 page_flags)
-{
-	struct ttm_tt *tt;
-	int ret;
-
-	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-	if (!tt) {
-		DRM_ERROR("failed to allocate ttm_tt\n");
-		return NULL;
-	}
-	tt->func = &hibmc_tt_backend_func;
-	ret = ttm_tt_init(tt, bo, page_flags);
-	if (ret) {
-		DRM_ERROR("failed to initialize ttm_tt: %d\n", ret);
-		kfree(tt);
-		return NULL;
-	}
-	return tt;
-}
-
-struct ttm_bo_driver hibmc_bo_driver = {
-	.ttm_tt_create = hibmc_ttm_tt_create,
-	.init_mem_type = hibmc_bo_init_mem_type,
-	.evict_flags = hibmc_bo_evict_flags,
-	.move = NULL,
-	.verify_access = hibmc_bo_verify_access,
-	.io_mem_reserve = &hibmc_ttm_io_mem_reserve,
-	.io_mem_free = NULL,
-};
-
 int hibmc_mm_init(struct hibmc_drm_private *hibmc)
 {
+	struct drm_vram_mm *vmm;
 	int ret;
 	struct drm_device *dev = hibmc->dev;
-	struct ttm_bo_device *bdev = &hibmc->bdev;
 
-	ret = ttm_bo_device_init(&hibmc->bdev,
-				 &hibmc_bo_driver,
-				 dev->anon_inode->i_mapping,
-				 true);
-	if (ret) {
-		DRM_ERROR("error initializing bo driver: %d\n", ret);
+	vmm = drm_vram_helper_alloc_mm(dev,
+				       pci_resource_start(dev->pdev, 0),
+				       hibmc->fb_size, &drm_gem_vram_mm_funcs);
+	if (IS_ERR(vmm)) {
+		ret = PTR_ERR(vmm);
+		DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
 		return ret;
 	}
 
-	ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-			     hibmc->fb_size >> PAGE_SHIFT);
-	if (ret) {
-		DRM_ERROR("failed ttm VRAM init: %d\n", ret);
-		return ret;
-	}
-
-	hibmc->mm_inited = true;
 	return 0;
 }
 
 void hibmc_mm_fini(struct hibmc_drm_private *hibmc)
 {
-	if (!hibmc->mm_inited)
+	if (!hibmc->dev->vram_mm)
 		return;
 
-	ttm_bo_device_release(&hibmc->bdev);
-	hibmc->mm_inited = false;
-}
-
-static void hibmc_bo_unref(struct hibmc_bo **bo)
-{
-	struct ttm_buffer_object *tbo;
-
-	if ((*bo) == NULL)
-		return;
-
-	tbo = &((*bo)->bo);
-	ttm_bo_put(tbo);
-	*bo = NULL;
-}
-
-int hibmc_bo_create(struct drm_device *dev, int size, int align,
-		    u32 flags, struct hibmc_bo **phibmcbo)
-{
-	struct hibmc_drm_private *hibmc = dev->dev_private;
-	struct hibmc_bo *hibmcbo;
-	size_t acc_size;
-	int ret;
-
-	hibmcbo = kzalloc(sizeof(*hibmcbo), GFP_KERNEL);
-	if (!hibmcbo) {
-		DRM_ERROR("failed to allocate hibmcbo\n");
-		return -ENOMEM;
-	}
-	ret = drm_gem_object_init(dev, &hibmcbo->gem, size);
-	if (ret) {
-		DRM_ERROR("failed to initialize drm gem object: %d\n", ret);
-		kfree(hibmcbo);
-		return ret;
-	}
-
-	hibmcbo->bo.bdev = &hibmc->bdev;
-
-	hibmc_ttm_placement(hibmcbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-	acc_size = ttm_bo_dma_acc_size(&hibmc->bdev, size,
-				       sizeof(struct hibmc_bo));
-
-	ret = ttm_bo_init(&hibmc->bdev, &hibmcbo->bo, size,
-			  ttm_bo_type_device, &hibmcbo->placement,
-			  align >> PAGE_SHIFT, false, acc_size,
-			  NULL, NULL, hibmc_bo_ttm_destroy);
-	if (ret) {
-		hibmc_bo_unref(&hibmcbo);
-		DRM_ERROR("failed to initialize ttm_bo: %d\n", ret);
-		return ret;
-	}
-
-	*phibmcbo = hibmcbo;
-	return 0;
-}
-
-int hibmc_bo_pin(struct hibmc_bo *bo, u32 pl_flag, u64 *gpu_addr)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	int i, ret;
-
-	if (bo->pin_count) {
-		bo->pin_count++;
-		if (gpu_addr)
-			*gpu_addr = bo->bo.offset;
-		return 0;
-	}
-
-	hibmc_ttm_placement(bo, pl_flag);
-	for (i = 0; i < bo->placement.num_placement; i++)
-		bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-	if (ret)
-		return ret;
-
-	bo->pin_count = 1;
-	if (gpu_addr)
-		*gpu_addr = bo->bo.offset;
-	return 0;
-}
-
-int hibmc_bo_unpin(struct hibmc_bo *bo)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	int i, ret;
-
-	if (!bo->pin_count) {
-		DRM_ERROR("unpin bad %p\n", bo);
-		return 0;
-	}
-	bo->pin_count--;
-	if (bo->pin_count)
-		return 0;
-
-	for (i = 0; i < bo->placement.num_placement ; i++)
-		bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-	if (ret) {
-		DRM_ERROR("validate failed for unpin: %d\n", ret);
-		return ret;
-	}
-
-	return 0;
-}
-
-int hibmc_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_file *file_priv = filp->private_data;
-	struct hibmc_drm_private *hibmc = file_priv->minor->dev->dev_private;
-
-	return ttm_bo_mmap(filp, vma, &hibmc->bdev);
+	drm_vram_helper_release_mm(hibmc->dev);
 }
 
 int hibmc_gem_create(struct drm_device *dev, u32 size, bool iskernel,
 		     struct drm_gem_object **obj)
 {
-	struct hibmc_bo *hibmcbo;
+	struct drm_gem_vram_object *gbo;
 	int ret;
 
 	*obj = NULL;
 
-	size = PAGE_ALIGN(size);
-	if (size == 0) {
-		DRM_ERROR("error: zero size\n");
+	size = roundup(size, PAGE_SIZE);
+	if (size == 0)
 		return -EINVAL;
-	}
 
-	ret = hibmc_bo_create(dev, size, 0, 0, &hibmcbo);
-	if (ret) {
+	gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
+	if (IS_ERR(gbo)) {
+		ret = PTR_ERR(gbo);
 		if (ret != -ERESTARTSYS)
 			DRM_ERROR("failed to allocate GEM object: %d\n", ret);
 		return ret;
 	}
-	*obj = &hibmcbo->gem;
+	*obj = &gbo->gem;
 	return 0;
 }
 
···
 	}
 
 	args->handle = handle;
-	return 0;
-}
-
-void hibmc_gem_free_object(struct drm_gem_object *obj)
-{
-	struct hibmc_bo *hibmcbo = gem_to_hibmc_bo(obj);
-
-	hibmc_bo_unref(&hibmcbo);
-}
-
-static u64 hibmc_bo_mmap_offset(struct hibmc_bo *bo)
-{
-	return drm_vma_node_offset_addr(&bo->bo.vma_node);
-}
-
-int hibmc_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
-			   u32 handle, u64 *offset)
-{
-	struct drm_gem_object *obj;
-	struct hibmc_bo *bo;
-
-	obj = drm_gem_object_lookup(file, handle);
-	if (!obj)
-		return -ENOENT;
-
-	bo = gem_to_hibmc_bo(obj);
-	*offset = hibmc_bo_mmap_offset(bo);
-
-	drm_gem_object_put_unlocked(obj);
 	return 0;
 }
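Note: with the TTM boilerplate gone, object allocation reduces to drm_gem_vram_create() plus embedding, as the hunk above shows. A condensed sketch of the remaining GEM-create path (assuming dev->vram_mm was set up by drm_vram_helper_alloc_mm()):

    struct drm_gem_vram_object *gbo;

    size = roundup(size, PAGE_SIZE);
    if (!size)
    	return -EINVAL;

    /* bdev comes from the device-wide VRAM MM instance */
    gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size,
    			      0 /* pg_align */, false /* interruptible */);
    if (IS_ERR(gbo))
    	return PTR_ERR(gbo);

    *obj = &gbo->gem;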
+2 -3
drivers/gpu/drm/i915/intel_display.c
··· 14605 14605 ret = -ENOMEM; 14606 14606 goto fail; 14607 14607 } 14608 + __drm_atomic_helper_crtc_reset(&intel_crtc->base, &crtc_state->base); 14608 14609 intel_crtc->config = crtc_state; 14609 - intel_crtc->base.state = &crtc_state->base; 14610 - crtc_state->base.crtc = &intel_crtc->base; 14611 14610 14612 14611 primary = intel_primary_plane_create(dev_priv, pipe); 14613 14612 if (IS_ERR(primary)) { ··· 16148 16149 16149 16150 __drm_atomic_helper_crtc_destroy_state(&crtc_state->base); 16150 16151 memset(crtc_state, 0, sizeof(*crtc_state)); 16151 - crtc_state->base.crtc = &crtc->base; 16152 + __drm_atomic_helper_crtc_reset(&crtc->base, &crtc_state->base); 16152 16153 16153 16154 crtc_state->base.active = crtc_state->base.enable = 16154 16155 dev_priv->display.get_pipe_config(crtc, crtc_state);
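Note: __drm_atomic_helper_crtc_reset() ties a subclassed CRTC state to its CRTC in one place instead of each driver open-coding the two assignments. A sketch of a driver .reset hook built on it; the foo_* type is hypothetical, and NULL handling on allocation failure is assumed to be accepted by the helper:

    struct foo_crtc_state {
    	struct drm_crtc_state base;
    	u32 driver_private_field; /* placeholder for subclass data */
    };

    static void foo_crtc_reset(struct drm_crtc *crtc)
    {
    	struct foo_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

    	if (crtc->state)
    		crtc->funcs->atomic_destroy_state(crtc, crtc->state);

    	/* Links state->base.crtc and crtc->state in one step. */
    	__drm_atomic_helper_crtc_reset(crtc, state ? &state->base : NULL);
    }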
+2 -1
drivers/gpu/drm/i915/intel_sprite.c
··· 325 325 u32 pixel_format, u64 modifier, 326 326 unsigned int rotation) 327 327 { 328 - int cpp = drm_format_plane_cpp(pixel_format, 0); 328 + const struct drm_format_info *info = drm_format_info(pixel_format); 329 + int cpp = info->cpp[0]; 329 330 330 331 /* 331 332 * "The stride in bytes must not exceed the
+6 -9
drivers/gpu/drm/imx/ipuv3-plane.c
··· 115 115 cma_obj = drm_fb_cma_get_gem_obj(fb, 1); 116 116 BUG_ON(!cma_obj); 117 117 118 - x /= drm_format_horz_chroma_subsampling(fb->format->format); 119 - y /= drm_format_vert_chroma_subsampling(fb->format->format); 118 + x /= fb->format->hsub; 119 + y /= fb->format->vsub; 120 120 121 121 return cma_obj->paddr + fb->offsets[1] + fb->pitches[1] * y + 122 122 fb->format->cpp[1] * x - eba; ··· 134 134 cma_obj = drm_fb_cma_get_gem_obj(fb, 2); 135 135 BUG_ON(!cma_obj); 136 136 137 - x /= drm_format_horz_chroma_subsampling(fb->format->format); 138 - y /= drm_format_vert_chroma_subsampling(fb->format->format); 137 + x /= fb->format->hsub; 138 + y /= fb->format->vsub; 139 139 140 140 return cma_obj->paddr + fb->offsets[2] + fb->pitches[2] * y + 141 141 fb->format->cpp[2] * x - eba; ··· 352 352 struct drm_framebuffer *old_fb = old_state->fb; 353 353 unsigned long eba, ubo, vbo, old_ubo, old_vbo, alpha_eba; 354 354 bool can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY); 355 - int hsub, vsub; 356 355 int ret; 357 356 358 357 /* Ok to disable */ ··· 470 471 * The x/y offsets must be even in case of horizontal/vertical 471 472 * chroma subsampling. 472 473 */ 473 - hsub = drm_format_horz_chroma_subsampling(fb->format->format); 474 - vsub = drm_format_vert_chroma_subsampling(fb->format->format); 475 - if (((state->src.x1 >> 16) & (hsub - 1)) || 476 - ((state->src.y1 >> 16) & (vsub - 1))) 474 + if (((state->src.x1 >> 16) & (fb->format->hsub - 1)) || 475 + ((state->src.y1 >> 16) & (fb->format->vsub - 1))) 477 476 return -EINVAL; 478 477 break; 479 478 case DRM_FORMAT_RGB565_A8:
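Note: since &struct drm_format_info already carries the subsampling factors, the old drm_format_horz/vert_chroma_subsampling() helpers become plain field reads, as above. Computing a chroma-plane byte offset then looks like this (sketch for plane 1 of a planar YUV format; the helper name is illustrative):

    /* Byte offset of sample (x, y) in chroma plane 1; hsub/vsub are the
     * horizontal/vertical subsampling factors from the format info. */
    static u32 foo_chroma_offset(const struct drm_framebuffer *fb,
    			         unsigned int x, unsigned int y)
    {
    	unsigned int cx = x / fb->format->hsub;
    	unsigned int cy = y / fb->format->vsub;

    	return fb->offsets[1] + fb->pitches[1] * cy + fb->format->cpp[1] * cx;
    }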
+1 -1
drivers/gpu/drm/lima/lima_drv.c
··· 17 17 18 18 int lima_sched_timeout_ms; 19 19 20 - MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms (0 = no timeout (default))"); 20 + MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms"); 21 21 module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444); 22 22 23 23 static int lima_ioctl_get_param(struct drm_device *dev, void *data, struct drm_file *file)
+7 -1
drivers/gpu/drm/lima/lima_pp.c
··· 64 64 struct lima_ip *pp_bcast = data; 65 65 struct lima_device *dev = pp_bcast->dev; 66 66 struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp; 67 - struct drm_lima_m450_pp_frame *frame = pipe->current_task->frame; 67 + struct drm_lima_m450_pp_frame *frame; 68 + 69 + /* for shared irq case */ 70 + if (!pipe->current_task) 71 + return IRQ_NONE; 72 + 73 + frame = pipe->current_task->frame; 68 74 69 75 for (i = 0; i < frame->num_pp; i++) { 70 76 struct lima_ip *ip = pipe->processor[i];
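Note: this fix follows the standard contract for shared interrupt lines: a handler that cannot find work it owns must return IRQ_NONE so the core can try the other handlers on the line (and detect spurious storms). In general form (sketch; foo_* names are placeholders):

    static irqreturn_t foo_irq_handler(int irq, void *data)
    {
    	struct foo_device *fdev = data; /* hypothetical device struct */

    	/* On a shared line we may fire for another device's interrupt;
    	 * bail out before dereferencing per-task state. */
    	if (!fdev->current_task)
    		return IRQ_NONE;

    	/* ... acknowledge and handle the hardware event ... */

    	return IRQ_HANDLED;
    }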
+5 -8
drivers/gpu/drm/lima/lima_sched.c
··· 258 258 static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe, 259 259 struct lima_sched_task *task) 260 260 { 261 - drm_sched_stop(&pipe->base); 261 + drm_sched_stop(&pipe->base, &task->base); 262 262 263 263 if (task) 264 264 drm_sched_increase_karma(&task->base); ··· 329 329 330 330 int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name) 331 331 { 332 - long timeout; 333 - 334 - if (lima_sched_timeout_ms <= 0) 335 - timeout = MAX_SCHEDULE_TIMEOUT; 336 - else 337 - timeout = msecs_to_jiffies(lima_sched_timeout_ms); 332 + unsigned int timeout = lima_sched_timeout_ms > 0 ? 333 + lima_sched_timeout_ms : 500; 338 334 339 335 pipe->fence_context = dma_fence_context_alloc(1); 340 336 spin_lock_init(&pipe->fence_lock); 341 337 342 338 INIT_WORK(&pipe->error_work, lima_sched_error_work); 343 339 344 - return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0, timeout, name); 340 + return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0, 341 + msecs_to_jiffies(timeout), name); 345 342 } 346 343 347 344 void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
+5 -3
drivers/gpu/drm/mediatek/mtk_drm_fb.c
··· 32 32 const struct drm_mode_fb_cmd2 *mode, 33 33 struct drm_gem_object *obj) 34 34 { 35 + const struct drm_format_info *info = drm_get_format_info(dev, mode); 35 36 struct drm_framebuffer *fb; 36 37 int ret; 37 38 38 - if (drm_format_num_planes(mode->pixel_format) != 1) 39 + if (info->num_planes != 1) 39 40 return ERR_PTR(-EINVAL); 40 41 41 42 fb = kzalloc(sizeof(*fb), GFP_KERNEL); ··· 89 88 struct drm_file *file, 90 89 const struct drm_mode_fb_cmd2 *cmd) 91 90 { 91 + const struct drm_format_info *info = drm_get_format_info(dev, cmd); 92 92 struct drm_framebuffer *fb; 93 93 struct drm_gem_object *gem; 94 94 unsigned int width = cmd->width; ··· 97 95 unsigned int size, bpp; 98 96 int ret; 99 97 100 - if (drm_format_num_planes(cmd->pixel_format) != 1) 98 + if (info->num_planes != 1) 101 99 return ERR_PTR(-EINVAL); 102 100 103 101 gem = drm_gem_object_lookup(file, cmd->handles[0]); 104 102 if (!gem) 105 103 return ERR_PTR(-ENOENT); 106 104 107 - bpp = drm_format_plane_cpp(cmd->pixel_format, 0); 105 + bpp = info->cpp[0]; 108 106 size = (height - 1) * cmd->pitches[0] + width * bpp; 109 107 size += cmd->offsets[0]; 110 108
+3
drivers/gpu/drm/mediatek/mtk_hdmi.c
··· 341 341 ctrl_frame_en = VS_EN; 342 342 ctrl_reg = GRL_ACP_ISRC_CTRL; 343 343 break; 344 + default: 345 + dev_err(hdmi->dev, "Unknown infoframe type %d\n", frame_type); 346 + return; 344 347 } 345 348 mtk_hdmi_clear_bits(hdmi, ctrl_reg, ctrl_frame_en); 346 349 mtk_hdmi_write(hdmi, GRL_INFOFRM_TYPE, frame_type);
+7 -7
drivers/gpu/drm/meson/meson_overlay.c
··· 458 458 } 459 459 460 460 /* Update Canvas with buffer address */ 461 - priv->viu.vd1_planes = drm_format_num_planes(fb->format->format); 461 + priv->viu.vd1_planes = fb->format->num_planes; 462 462 463 463 switch (priv->viu.vd1_planes) { 464 464 case 3: ··· 466 466 priv->viu.vd1_addr2 = gem->paddr + fb->offsets[2]; 467 467 priv->viu.vd1_stride2 = fb->pitches[2]; 468 468 priv->viu.vd1_height2 = 469 - drm_format_plane_height(fb->height, 470 - fb->format->format, 2); 469 + drm_format_info_plane_height(fb->format, 470 + fb->height, 2); 471 471 DRM_DEBUG("plane 2 addr 0x%x stride %d height %d\n", 472 472 priv->viu.vd1_addr2, 473 473 priv->viu.vd1_stride2, ··· 478 478 priv->viu.vd1_addr1 = gem->paddr + fb->offsets[1]; 479 479 priv->viu.vd1_stride1 = fb->pitches[1]; 480 480 priv->viu.vd1_height1 = 481 - drm_format_plane_height(fb->height, 482 - fb->format->format, 1); 481 + drm_format_info_plane_height(fb->format, 482 + fb->height, 1); 483 483 DRM_DEBUG("plane 1 addr 0x%x stride %d height %d\n", 484 484 priv->viu.vd1_addr1, 485 485 priv->viu.vd1_stride1, ··· 490 490 priv->viu.vd1_addr0 = gem->paddr + fb->offsets[0]; 491 491 priv->viu.vd1_stride0 = fb->pitches[0]; 492 492 priv->viu.vd1_height0 = 493 - drm_format_plane_height(fb->height, 494 - fb->format->format, 0); 493 + drm_format_info_plane_height(fb->format, 494 + fb->height, 0); 495 495 DRM_DEBUG("plane 0 addr 0x%x stride %d height %d\n", 496 496 priv->viu.vd1_addr0, 497 497 priv->viu.vd1_stride0,
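Note: drm_format_info_plane_height() is the &struct drm_format_info based replacement for drm_format_plane_height(); it takes the already-resolved format info instead of re-looking up the fourcc on every call. For example (sketch):

    /* The chroma plane of NV12 is vertically subsampled by 2, so this
     * yields fb->height / 2 (plane 0 returns fb->height unchanged). */
    unsigned int h = drm_format_info_plane_height(fb->format, fb->height, 1);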
+1 -1
drivers/gpu/drm/mgag200/Kconfig
··· 3 3 tristate "Kernel modesetting driver for MGA G200 server engines" 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 - select DRM_TTM 6 + select DRM_VRAM_HELPER 7 7 help 8 8 This is a KMS driver for the MGA G200 server chips, it 9 9 does not support the original MGA G200 or any of the desktop
+56 -46
drivers/gpu/drm/mgag200/mgag200_cursor.c
··· 23 23 WREG8(MGA_CURPOSXL, 0); 24 24 WREG8(MGA_CURPOSXH, 0); 25 25 if (mdev->cursor.pixels_1->pin_count) 26 - mgag200_bo_unpin(mdev->cursor.pixels_1); 26 + drm_gem_vram_unpin_locked(mdev->cursor.pixels_1); 27 27 if (mdev->cursor.pixels_2->pin_count) 28 - mgag200_bo_unpin(mdev->cursor.pixels_2); 28 + drm_gem_vram_unpin_locked(mdev->cursor.pixels_2); 29 29 } 30 30 31 31 int mga_crtc_cursor_set(struct drm_crtc *crtc, ··· 36 36 { 37 37 struct drm_device *dev = crtc->dev; 38 38 struct mga_device *mdev = (struct mga_device *)dev->dev_private; 39 - struct mgag200_bo *pixels_1 = mdev->cursor.pixels_1; 40 - struct mgag200_bo *pixels_2 = mdev->cursor.pixels_2; 41 - struct mgag200_bo *pixels_current = mdev->cursor.pixels_current; 42 - struct mgag200_bo *pixels_prev = mdev->cursor.pixels_prev; 39 + struct drm_gem_vram_object *pixels_1 = mdev->cursor.pixels_1; 40 + struct drm_gem_vram_object *pixels_2 = mdev->cursor.pixels_2; 41 + struct drm_gem_vram_object *pixels_current = mdev->cursor.pixels_current; 42 + struct drm_gem_vram_object *pixels_prev = mdev->cursor.pixels_prev; 43 43 struct drm_gem_object *obj; 44 - struct mgag200_bo *bo = NULL; 44 + struct drm_gem_vram_object *gbo = NULL; 45 45 int ret = 0; 46 + u8 *src, *dst; 46 47 unsigned int i, row, col; 47 48 uint32_t colour_set[16]; 48 49 uint32_t *next_space = &colour_set[0]; ··· 51 50 uint32_t this_colour; 52 51 bool found = false; 53 52 int colour_count = 0; 54 - u64 gpu_addr; 53 + s64 gpu_addr; 55 54 u8 reg_index; 56 55 u8 this_row[48]; 57 56 ··· 80 79 if (!obj) 81 80 return -ENOENT; 82 81 83 - ret = mgag200_bo_reserve(pixels_1, true); 82 + ret = drm_gem_vram_lock(pixels_1, true); 84 83 if (ret) { 85 84 WREG8(MGA_CURPOSXL, 0); 86 85 WREG8(MGA_CURPOSXH, 0); 87 86 goto out_unref; 88 87 } 89 - ret = mgag200_bo_reserve(pixels_2, true); 88 + ret = drm_gem_vram_lock(pixels_2, true); 90 89 if (ret) { 91 90 WREG8(MGA_CURPOSXL, 0); 92 91 WREG8(MGA_CURPOSXH, 0); 93 - mgag200_bo_unreserve(pixels_1); 94 - goto out_unreserve1; 92 + drm_gem_vram_unlock(pixels_1); 93 + goto out_unlock1; 95 94 } 96 95 97 96 /* Move cursor buffers into VRAM if they aren't already */ 98 97 if (!pixels_1->pin_count) { 99 - ret = mgag200_bo_pin(pixels_1, TTM_PL_FLAG_VRAM, 100 - &mdev->cursor.pixels_1_gpu_addr); 98 + ret = drm_gem_vram_pin_locked(pixels_1, 99 + DRM_GEM_VRAM_PL_FLAG_VRAM); 101 100 if (ret) 102 101 goto out1; 103 - } 104 - if (!pixels_2->pin_count) { 105 - ret = mgag200_bo_pin(pixels_2, TTM_PL_FLAG_VRAM, 106 - &mdev->cursor.pixels_2_gpu_addr); 107 - if (ret) { 108 - mgag200_bo_unpin(pixels_1); 102 + gpu_addr = drm_gem_vram_offset(pixels_1); 103 + if (gpu_addr < 0) { 104 + drm_gem_vram_unpin_locked(pixels_1); 109 105 goto out1; 110 106 } 107 + mdev->cursor.pixels_1_gpu_addr = gpu_addr; 108 + } 109 + if (!pixels_2->pin_count) { 110 + ret = drm_gem_vram_pin_locked(pixels_2, 111 + DRM_GEM_VRAM_PL_FLAG_VRAM); 112 + if (ret) { 113 + drm_gem_vram_unpin_locked(pixels_1); 114 + goto out1; 115 + } 116 + gpu_addr = drm_gem_vram_offset(pixels_2); 117 + if (gpu_addr < 0) { 118 + drm_gem_vram_unpin_locked(pixels_1); 119 + drm_gem_vram_unpin_locked(pixels_2); 120 + goto out1; 121 + } 122 + mdev->cursor.pixels_2_gpu_addr = gpu_addr; 111 123 } 112 124 113 - bo = gem_to_mga_bo(obj); 114 - ret = mgag200_bo_reserve(bo, true); 125 + gbo = drm_gem_vram_of_gem(obj); 126 + ret = drm_gem_vram_lock(gbo, true); 115 127 if (ret) { 116 - dev_err(&dev->pdev->dev, "failed to reserve user bo\n"); 128 + dev_err(&dev->pdev->dev, "failed to lock user bo\n"); 117 129 goto out1; 118 130 } 119 - if 
(!bo->kmap.virtual) { 120 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 121 - if (ret) { 122 - dev_err(&dev->pdev->dev, "failed to kmap user buffer updates\n"); 123 - goto out2; 124 - } 131 + src = drm_gem_vram_kmap(gbo, true, NULL); 132 + if (IS_ERR(src)) { 133 + ret = PTR_ERR(src); 134 + dev_err(&dev->pdev->dev, "failed to kmap user buffer updates\n"); 135 + goto out2; 125 136 } 126 137 127 138 memset(&colour_set[0], 0, sizeof(uint32_t)*16); 128 139 /* width*height*4 = 16384 */ 129 140 for (i = 0; i < 16384; i += 4) { 130 - this_colour = ioread32(bo->kmap.virtual + i); 141 + this_colour = ioread32(src + i); 131 142 /* No transparency */ 132 143 if (this_colour>>24 != 0xff && 133 144 this_colour>>24 != 0x0) { ··· 191 178 } 192 179 193 180 /* Map up-coming buffer to write colour indices */ 194 - if (!pixels_prev->kmap.virtual) { 195 - ret = ttm_bo_kmap(&pixels_prev->bo, 0, 196 - pixels_prev->bo.num_pages, 197 - &pixels_prev->kmap); 198 - if (ret) { 199 - dev_err(&dev->pdev->dev, "failed to kmap cursor updates\n"); 200 - goto out3; 201 - } 181 + dst = drm_gem_vram_kmap(pixels_prev, true, NULL); 182 + if (IS_ERR(dst)) { 183 + ret = PTR_ERR(dst); 184 + dev_err(&dev->pdev->dev, "failed to kmap cursor updates\n"); 185 + goto out3; 202 186 } 203 187 204 188 /* now write colour indices into hardware cursor buffer */ 205 189 for (row = 0; row < 64; row++) { 206 190 memset(&this_row[0], 0, 48); 207 191 for (col = 0; col < 64; col++) { 208 - this_colour = ioread32(bo->kmap.virtual + 4*(col + 64*row)); 192 + this_colour = ioread32(src + 4*(col + 64*row)); 209 193 /* write transparent pixels */ 210 194 if (this_colour>>24 == 0x0) { 211 195 this_row[47 - col/8] |= 0x80>>(col%8); ··· 220 210 } 221 211 } 222 212 } 223 - memcpy_toio(pixels_prev->kmap.virtual + row*48, &this_row[0], 48); 213 + memcpy_toio(dst + row*48, &this_row[0], 48); 224 214 } 225 215 226 216 /* Program gpu address of cursor buffer */ ··· 246 236 } 247 237 ret = 0; 248 238 249 - ttm_bo_kunmap(&pixels_prev->kmap); 239 + drm_gem_vram_kunmap(pixels_prev); 250 240 out3: 251 - ttm_bo_kunmap(&bo->kmap); 241 + drm_gem_vram_kunmap(gbo); 252 242 out2: 253 - mgag200_bo_unreserve(bo); 243 + drm_gem_vram_unlock(gbo); 254 244 out1: 255 245 if (ret) 256 246 mga_hide_cursor(mdev); 257 - mgag200_bo_unreserve(pixels_1); 258 - out_unreserve1: 259 - mgag200_bo_unreserve(pixels_2); 247 + drm_gem_vram_unlock(pixels_1); 248 + out_unlock1: 249 + drm_gem_vram_unlock(pixels_2); 260 250 out_unref: 261 251 drm_gem_object_put_unlocked(obj); 262 252
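The mgag200 conversion above swaps the driver's private TTM wrappers for the new drm_gem_vram helpers. A minimal sketch of the resulting lock/pin/offset sequence, using only calls that appear in the diff; the function name and labels are illustrative, not part of the patch:

#include <drm/drm_gem_vram_helper.h>

/* Illustrative distillation of the cursor-pinning sequence above. */
static int example_pin_to_vram(struct drm_gem_vram_object *gbo, u64 *gpu_addr)
{
	s64 off;
	int ret;

	ret = drm_gem_vram_lock(gbo, true); /* flag as in the calls above */
	if (ret)
		return ret;

	ret = drm_gem_vram_pin_locked(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
	if (ret)
		goto out_unlock;

	/* drm_gem_vram_offset() returns s64: negative errno on failure */
	off = drm_gem_vram_offset(gbo);
	if (off < 0) {
		drm_gem_vram_unpin_locked(gbo);
		ret = (int)off;
		goto out_unlock;
	}
	*gpu_addr = off;

out_unlock:
	drm_gem_vram_unlock(gbo);
	return ret;
}

The s64 offset is also why the diff changes gpu_addr from u64 to s64 in the cursor path.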
drivers/gpu/drm/mgag200/mgag200_drv.c (+2 -11)
··· 59 59 60 60 static const struct file_operations mgag200_driver_fops = { 61 61 .owner = THIS_MODULE, 62 - .open = drm_open, 63 - .release = drm_release, 64 - .unlocked_ioctl = drm_ioctl, 65 - .mmap = mgag200_mmap, 66 - .poll = drm_poll, 67 - .compat_ioctl = drm_compat_ioctl, 68 - .read = drm_read, 62 + DRM_VRAM_MM_FILE_OPERATIONS 69 63 }; 70 64 71 65 static struct drm_driver driver = { ··· 73 79 .major = DRIVER_MAJOR, 74 80 .minor = DRIVER_MINOR, 75 81 .patchlevel = DRIVER_PATCHLEVEL, 76 - 77 - .gem_free_object_unlocked = mgag200_gem_free_object, 78 - .dumb_create = mgag200_dumb_create, 79 - .dumb_map_offset = mgag200_dumb_mmap_offset, 82 + DRM_GEM_VRAM_DRIVER 80 83 }; 81 84 82 85 static struct pci_driver mgag200_pci_driver = {
drivers/gpu/drm/mgag200/mgag200_drv.h (+10 -64)
··· 1 1 /* 2 2 * Copyright 2010 Matt Turner. 3 - * Copyright 2012 Red Hat 3 + * Copyright 2012 Red Hat 4 4 * 5 5 * This file is subject to the terms and conditions of the GNU General 6 6 * Public License version 2. See the file COPYING in the main ··· 17 17 18 18 #include <drm/drm_encoder.h> 19 19 #include <drm/drm_fb_helper.h> 20 - #include <drm/ttm/ttm_bo_api.h> 21 - #include <drm/ttm/ttm_bo_driver.h> 22 - #include <drm/ttm/ttm_placement.h> 23 - #include <drm/ttm/ttm_memory.h> 24 - #include <drm/ttm/ttm_module.h> 25 20 26 21 #include <drm/drm_gem.h> 22 + #include <drm/drm_gem_vram_helper.h> 23 + 24 + #include <drm/drm_vram_mm_helper.h> 27 25 28 26 #include <linux/i2c.h> 29 27 #include <linux/i2c-algo-bit.h> ··· 115 117 struct mga_framebuffer mfb; 116 118 void *sysram; 117 119 int size; 118 - struct ttm_bo_kmap_obj mapping; 119 120 int x1, y1, x2, y2; /* dirty rect */ 120 121 spinlock_t dirty_lock; 121 122 }; ··· 156 159 If either of these is NULL, then don't do hardware cursors, and 157 160 fall back to software. 158 161 */ 159 - struct mgag200_bo *pixels_1; 160 - struct mgag200_bo *pixels_2; 162 + struct drm_gem_vram_object *pixels_1; 163 + struct drm_gem_vram_object *pixels_2; 161 164 u64 pixels_1_gpu_addr, pixels_2_gpu_addr; 162 165 /* The currently displayed icon, this points to one of pixels_1, or pixels_2 */ 163 - struct mgag200_bo *pixels_current; 166 + struct drm_gem_vram_object *pixels_current; 164 167 /* The previously displayed icon */ 165 - struct mgag200_bo *pixels_prev; 168 + struct drm_gem_vram_object *pixels_prev; 166 169 }; 167 170 168 171 struct mga_mc { ··· 208 211 209 212 int fb_mtrr; 210 213 211 - struct { 212 - struct ttm_bo_device bdev; 213 - } ttm; 214 - 215 214 /* SE model number stored in reg 0x1e24 */ 216 215 u32 unique_rev_id; 217 216 }; 218 - 219 - 220 - struct mgag200_bo { 221 - struct ttm_buffer_object bo; 222 - struct ttm_placement placement; 223 - struct ttm_bo_kmap_obj kmap; 224 - struct drm_gem_object gem; 225 - struct ttm_place placements[3]; 226 - int pin_count; 227 - }; 228 - #define gem_to_mga_bo(gobj) container_of((gobj), struct mgag200_bo, gem) 229 - 230 - static inline struct mgag200_bo * 231 - mgag200_bo(struct ttm_buffer_object *bo) 232 - { 233 - return container_of(bo, struct mgag200_bo, bo); 234 - } 235 217 236 218 /* mgag200_mode.c */ 237 219 int mgag200_modeset_init(struct mga_device *mdev); ··· 235 259 int mgag200_dumb_create(struct drm_file *file, 236 260 struct drm_device *dev, 237 261 struct drm_mode_create_dumb *args); 238 - void mgag200_gem_free_object(struct drm_gem_object *obj); 239 - int 240 - mgag200_dumb_mmap_offset(struct drm_file *file, 241 - struct drm_device *dev, 242 - uint32_t handle, 243 - uint64_t *offset); 262 + 244 263 /* mgag200_i2c.c */ 245 264 struct mga_i2c_chan *mgag200_i2c_create(struct drm_device *dev); 246 265 void mgag200_i2c_destroy(struct mga_i2c_chan *i2c); 247 266 248 - void mgag200_ttm_placement(struct mgag200_bo *bo, int domain); 249 - 250 - static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait) 251 - { 252 - int ret; 253 - 254 - ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); 255 - if (ret) { 256 - if (ret != -ERESTARTSYS && ret != -EBUSY) 257 - DRM_ERROR("reserve failed %p\n", bo); 258 - return ret; 259 - } 260 - return 0; 261 - } 262 - 263 - static inline void mgag200_bo_unreserve(struct mgag200_bo *bo) 264 - { 265 - ttm_bo_unreserve(&bo->bo); 266 - } 267 - 268 - int mgag200_bo_create(struct drm_device *dev, int size, int align, 269 - uint32_t flags, struct mgag200_bo **pastbo); 
270 267 int mgag200_mm_init(struct mga_device *mdev); 271 268 void mgag200_mm_fini(struct mga_device *mdev); 272 269 int mgag200_mmap(struct file *filp, struct vm_area_struct *vma); 273 - int mgag200_bo_pin(struct mgag200_bo *bo, u32 pl_flag, u64 *gpu_addr); 274 - int mgag200_bo_unpin(struct mgag200_bo *bo); 275 - int mgag200_bo_push_sysram(struct mgag200_bo *bo); 276 - /* mgag200_cursor.c */ 270 + 277 271 int mga_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv, 278 272 uint32_t handle, uint32_t width, uint32_t height); 279 273 int mga_crtc_cursor_move(struct drm_crtc *crtc, int x, int y);
drivers/gpu/drm/mgag200/mgag200_fb.c (+25 -18)
··· 23 23 { 24 24 int i; 25 25 struct drm_gem_object *obj; 26 - struct mgag200_bo *bo; 26 + struct drm_gem_vram_object *gbo; 27 27 int src_offset, dst_offset; 28 28 int bpp = mfbdev->mfb.base.format->cpp[0]; 29 29 int ret = -EBUSY; 30 + u8 *dst; 30 31 bool unmap = false; 31 32 bool store_for_later = false; 32 33 int x2, y2; 33 34 unsigned long flags; 34 35 35 36 obj = mfbdev->mfb.obj; 36 - bo = gem_to_mga_bo(obj); 37 + gbo = drm_gem_vram_of_gem(obj); 37 38 38 - /* 39 - * try and reserve the BO, if we fail with busy 40 - * then the BO is being moved and we should 41 - * store up the damage until later. 39 + /* Try to lock the BO. If we fail with -EBUSY then 40 + * the BO is being moved and we should store up the 41 + * damage until later. 42 42 */ 43 43 if (drm_can_sleep()) 44 - ret = mgag200_bo_reserve(bo, true); 44 + ret = drm_gem_vram_lock(gbo, true); 45 45 if (ret) { 46 46 if (ret != -EBUSY) 47 47 return; ··· 75 75 mfbdev->x2 = mfbdev->y2 = 0; 76 76 spin_unlock_irqrestore(&mfbdev->dirty_lock, flags); 77 77 78 - if (!bo->kmap.virtual) { 79 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 80 - if (ret) { 78 + dst = drm_gem_vram_kmap(gbo, false, NULL); 79 + if (IS_ERR(dst)) { 80 + DRM_ERROR("failed to kmap fb updates\n"); 81 + goto out; 82 + } else if (!dst) { 83 + dst = drm_gem_vram_kmap(gbo, true, NULL); 84 + if (IS_ERR(dst)) { 81 85 DRM_ERROR("failed to kmap fb updates\n"); 82 - mgag200_bo_unreserve(bo); 83 - return; 86 + goto out; 84 87 } 85 88 unmap = true; 86 89 } 90 + 87 91 for (i = y; i <= y2; i++) { 88 92 /* assume equal stride for now */ 89 - src_offset = dst_offset = i * mfbdev->mfb.base.pitches[0] + (x * bpp); 90 - memcpy_toio(bo->kmap.virtual + src_offset, mfbdev->sysram + src_offset, (x2 - x + 1) * bpp); 91 - 93 + src_offset = dst_offset = 94 + i * mfbdev->mfb.base.pitches[0] + (x * bpp); 95 + memcpy_toio(dst + dst_offset, mfbdev->sysram + src_offset, 96 + (x2 - x + 1) * bpp); 92 97 } 93 - if (unmap) 94 - ttm_bo_kunmap(&bo->kmap); 95 98 96 - mgag200_bo_unreserve(bo); 99 + if (unmap) 100 + drm_gem_vram_kunmap(gbo); 101 + 102 + out: 103 + drm_gem_vram_unlock(gbo); 97 104 } 98 105 99 106 static void mga_fillrect(struct fb_info *info,
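The dirty-rect path above uses a query-then-map idiom: drm_gem_vram_kmap() with map=false only returns an existing mapping (or NULL), and a fresh mapping is established, and later torn down, only when none exists. The idiom distilled into a hedged sketch; the helper name is hypothetical:

static int example_write_through_kmap(struct drm_gem_vram_object *gbo,
				      const void *src, size_t len)
{
	bool unmap = false;
	u8 *dst;

	dst = drm_gem_vram_kmap(gbo, false, NULL); /* query only */
	if (IS_ERR(dst))
		return PTR_ERR(dst);
	if (!dst) {
		dst = drm_gem_vram_kmap(gbo, true, NULL); /* establish */
		if (IS_ERR(dst))
			return PTR_ERR(dst);
		unmap = true;
	}

	memcpy_toio(dst, src, len); /* the BO may live in I/O memory */

	if (unmap)
		drm_gem_vram_kunmap(gbo);
	return 0;
}

Leaving a pre-existing mapping in place avoids remapping VRAM on every fbdev flush while still cleaning up any mapping this path created itself.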
drivers/gpu/drm/mgag200/mgag200_main.c (+12 -75)
··· 230 230 } 231 231 232 232 /* Make small buffers to store a hardware cursor (double buffered icon updates) */ 233 - mgag200_bo_create(dev, roundup(48*64, PAGE_SIZE), 0, 0, 234 - &mdev->cursor.pixels_1); 235 - mgag200_bo_create(dev, roundup(48*64, PAGE_SIZE), 0, 0, 236 - &mdev->cursor.pixels_2); 237 - if (!mdev->cursor.pixels_2 || !mdev->cursor.pixels_1) { 233 + mdev->cursor.pixels_1 = drm_gem_vram_create(dev, &dev->vram_mm->bdev, 234 + roundup(48*64, PAGE_SIZE), 235 + 0, 0); 236 + mdev->cursor.pixels_2 = drm_gem_vram_create(dev, &dev->vram_mm->bdev, 237 + roundup(48*64, PAGE_SIZE), 238 + 0, 0); 239 + if (IS_ERR(mdev->cursor.pixels_2) || IS_ERR(mdev->cursor.pixels_1)) { 238 240 mdev->cursor.pixels_1 = NULL; 239 241 mdev->cursor.pixels_2 = NULL; 240 242 dev_warn(&dev->pdev->dev, ··· 274 272 u32 size, bool iskernel, 275 273 struct drm_gem_object **obj) 276 274 { 277 - struct mgag200_bo *astbo; 275 + struct drm_gem_vram_object *gbo; 278 276 int ret; 279 277 280 278 *obj = NULL; ··· 283 281 if (size == 0) 284 282 return -EINVAL; 285 283 286 - ret = mgag200_bo_create(dev, size, 0, 0, &astbo); 287 - if (ret) { 284 + gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false); 285 + if (IS_ERR(gbo)) { 286 + ret = PTR_ERR(gbo); 288 287 if (ret != -ERESTARTSYS) 289 288 DRM_ERROR("failed to allocate GEM object\n"); 290 289 return ret; 291 290 } 292 - *obj = &astbo->gem; 293 - return 0; 294 - } 295 - 296 - int mgag200_dumb_create(struct drm_file *file, 297 - struct drm_device *dev, 298 - struct drm_mode_create_dumb *args) 299 - { 300 - int ret; 301 - struct drm_gem_object *gobj; 302 - u32 handle; 303 - 304 - args->pitch = args->width * ((args->bpp + 7) / 8); 305 - args->size = args->pitch * args->height; 306 - 307 - ret = mgag200_gem_create(dev, args->size, false, 308 - &gobj); 309 - if (ret) 310 - return ret; 311 - 312 - ret = drm_gem_handle_create(file, gobj, &handle); 313 - drm_gem_object_put_unlocked(gobj); 314 - if (ret) 315 - return ret; 316 - 317 - args->handle = handle; 318 - return 0; 319 - } 320 - 321 - static void mgag200_bo_unref(struct mgag200_bo **bo) 322 - { 323 - if ((*bo) == NULL) 324 - return; 325 - ttm_bo_put(&((*bo)->bo)); 326 - *bo = NULL; 327 - } 328 - 329 - void mgag200_gem_free_object(struct drm_gem_object *obj) 330 - { 331 - struct mgag200_bo *mgag200_bo = gem_to_mga_bo(obj); 332 - 333 - mgag200_bo_unref(&mgag200_bo); 334 - } 335 - 336 - 337 - static inline u64 mgag200_bo_mmap_offset(struct mgag200_bo *bo) 338 - { 339 - return drm_vma_node_offset_addr(&bo->bo.vma_node); 340 - } 341 - 342 - int 343 - mgag200_dumb_mmap_offset(struct drm_file *file, 344 - struct drm_device *dev, 345 - uint32_t handle, 346 - uint64_t *offset) 347 - { 348 - struct drm_gem_object *obj; 349 - struct mgag200_bo *bo; 350 - 351 - obj = drm_gem_object_lookup(file, handle); 352 - if (obj == NULL) 353 - return -ENOENT; 354 - 355 - bo = gem_to_mga_bo(obj); 356 - *offset = mgag200_bo_mmap_offset(bo); 357 - 358 - drm_gem_object_put_unlocked(obj); 291 + *obj = &gbo->gem; 359 292 return 0; 360 293 }
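With the helpers, BO allocation collapses to a single drm_gem_vram_create() call, and the GEM object comes embedded in the returned drm_gem_vram_object. A sketch of the allocation path, assuming the argument order used in the diff (device, bo device, size, page alignment, interruptible):

static int example_gem_create(struct drm_device *dev, size_t size,
			      struct drm_gem_object **obj)
{
	struct drm_gem_vram_object *gbo;

	gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size,
				  0 /* pg_align */, false /* interruptible */);
	if (IS_ERR(gbo))
		return PTR_ERR(gbo);

	*obj = &gbo->gem; /* GEM object embedded in the VRAM BO */
	return 0;
}

The removed dumb_create/dumb_mmap_offset/free_object code is covered by the DRM_GEM_VRAM_DRIVER macro wired into mgag200_drv.c above.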
drivers/gpu/drm/mgag200/mgag200_mode.c (+30 -29)
··· 858 858 WREG_ECRT(0x0, ((u8)(addr >> 16) & 0xf) | crtcext0); 859 859 } 860 860 861 - 862 - /* ast is different - we will force move buffers out of VRAM */ 863 861 static int mga_crtc_do_set_base(struct drm_crtc *crtc, 864 862 struct drm_framebuffer *fb, 865 863 int x, int y, int atomic) ··· 865 867 struct mga_device *mdev = crtc->dev->dev_private; 866 868 struct drm_gem_object *obj; 867 869 struct mga_framebuffer *mga_fb; 868 - struct mgag200_bo *bo; 870 + struct drm_gem_vram_object *gbo; 869 871 int ret; 870 - u64 gpu_addr; 872 + s64 gpu_addr; 873 + void *base; 871 874 872 - /* push the previous fb to system ram */ 873 875 if (!atomic && fb) { 874 876 mga_fb = to_mga_framebuffer(fb); 875 877 obj = mga_fb->obj; 876 - bo = gem_to_mga_bo(obj); 877 - ret = mgag200_bo_reserve(bo, false); 878 - if (ret) 879 - return ret; 880 - mgag200_bo_push_sysram(bo); 881 - mgag200_bo_unreserve(bo); 878 + gbo = drm_gem_vram_of_gem(obj); 879 + 880 + /* unmap if console */ 881 + if (&mdev->mfbdev->mfb == mga_fb) 882 + drm_gem_vram_kunmap(gbo); 883 + drm_gem_vram_unpin(gbo); 882 884 } 883 885 884 886 mga_fb = to_mga_framebuffer(crtc->primary->fb); 885 887 obj = mga_fb->obj; 886 - bo = gem_to_mga_bo(obj); 888 + gbo = drm_gem_vram_of_gem(obj); 887 889 888 - ret = mgag200_bo_reserve(bo, false); 890 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 889 891 if (ret) 890 892 return ret; 891 - 892 - ret = mgag200_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr); 893 - if (ret) { 894 - mgag200_bo_unreserve(bo); 895 - return ret; 893 + gpu_addr = drm_gem_vram_offset(gbo); 894 + if (gpu_addr < 0) { 895 + ret = (int)gpu_addr; 896 + goto err_drm_gem_vram_unpin; 896 897 } 897 898 898 899 if (&mdev->mfbdev->mfb == mga_fb) { 899 900 /* if pushing console in kmap it */ 900 - ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 901 - if (ret) 901 + base = drm_gem_vram_kmap(gbo, true, NULL); 902 + if (IS_ERR(base)) { 903 + ret = PTR_ERR(base); 902 904 DRM_ERROR("failed to kmap fbcon\n"); 903 - 905 + } 904 906 } 905 - mgag200_bo_unreserve(bo); 906 907 907 908 mga_set_start_address(crtc, (u32)gpu_addr); 908 909 909 910 return 0; 911 + 912 + err_drm_gem_vram_unpin: 913 + drm_gem_vram_unpin(gbo); 914 + return ret; 910 915 } 911 916 912 917 static int mga_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y, ··· 1423 1422 1424 1423 static void mga_crtc_disable(struct drm_crtc *crtc) 1425 1424 { 1426 - int ret; 1427 1425 DRM_DEBUG_KMS("\n"); 1428 1426 mga_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 1429 1427 if (crtc->primary->fb) { 1428 + struct mga_device *mdev = crtc->dev->dev_private; 1430 1429 struct mga_framebuffer *mga_fb = to_mga_framebuffer(crtc->primary->fb); 1431 1430 struct drm_gem_object *obj = mga_fb->obj; 1432 - struct mgag200_bo *bo = gem_to_mga_bo(obj); 1433 - ret = mgag200_bo_reserve(bo, false); 1434 - if (ret) 1435 - return; 1436 - mgag200_bo_push_sysram(bo); 1437 - mgag200_bo_unreserve(bo); 1431 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj); 1432 + 1433 + /* unmap if console */ 1434 + if (&mdev->mfbdev->mfb == mga_fb) 1435 + drm_gem_vram_kunmap(gbo); 1436 + drm_gem_vram_unpin(gbo); 1438 1437 } 1439 1438 crtc->primary->fb = NULL; 1440 1439 }
drivers/gpu/drm/mgag200/mgag200_ttm.c (+8 -293)
··· 26 26 * Authors: Dave Airlie <airlied@redhat.com> 27 27 */ 28 28 #include <drm/drmP.h> 29 - #include <drm/ttm/ttm_page_alloc.h> 30 29 31 30 #include "mgag200_drv.h" 32 31 33 - static inline struct mga_device * 34 - mgag200_bdev(struct ttm_bo_device *bd) 35 - { 36 - return container_of(bd, struct mga_device, ttm.bdev); 37 - } 38 - 39 - static void mgag200_bo_ttm_destroy(struct ttm_buffer_object *tbo) 40 - { 41 - struct mgag200_bo *bo; 42 - 43 - bo = container_of(tbo, struct mgag200_bo, bo); 44 - 45 - drm_gem_object_release(&bo->gem); 46 - kfree(bo); 47 - } 48 - 49 - static bool mgag200_ttm_bo_is_mgag200_bo(struct ttm_buffer_object *bo) 50 - { 51 - if (bo->destroy == &mgag200_bo_ttm_destroy) 52 - return true; 53 - return false; 54 - } 55 - 56 - static int 57 - mgag200_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 58 - struct ttm_mem_type_manager *man) 59 - { 60 - switch (type) { 61 - case TTM_PL_SYSTEM: 62 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 63 - man->available_caching = TTM_PL_MASK_CACHING; 64 - man->default_caching = TTM_PL_FLAG_CACHED; 65 - break; 66 - case TTM_PL_VRAM: 67 - man->func = &ttm_bo_manager_func; 68 - man->flags = TTM_MEMTYPE_FLAG_FIXED | 69 - TTM_MEMTYPE_FLAG_MAPPABLE; 70 - man->available_caching = TTM_PL_FLAG_UNCACHED | 71 - TTM_PL_FLAG_WC; 72 - man->default_caching = TTM_PL_FLAG_WC; 73 - break; 74 - default: 75 - DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); 76 - return -EINVAL; 77 - } 78 - return 0; 79 - } 80 - 81 - static void 82 - mgag200_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl) 83 - { 84 - struct mgag200_bo *mgabo = mgag200_bo(bo); 85 - 86 - if (!mgag200_ttm_bo_is_mgag200_bo(bo)) 87 - return; 88 - 89 - mgag200_ttm_placement(mgabo, TTM_PL_FLAG_SYSTEM); 90 - *pl = mgabo->placement; 91 - } 92 - 93 - static int mgag200_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp) 94 - { 95 - struct mgag200_bo *mgabo = mgag200_bo(bo); 96 - 97 - return drm_vma_node_verify_access(&mgabo->gem.vma_node, 98 - filp->private_data); 99 - } 100 - 101 - static int mgag200_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 102 - struct ttm_mem_reg *mem) 103 - { 104 - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; 105 - struct mga_device *mdev = mgag200_bdev(bdev); 106 - 107 - mem->bus.addr = NULL; 108 - mem->bus.offset = 0; 109 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 110 - mem->bus.base = 0; 111 - mem->bus.is_iomem = false; 112 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 113 - return -EINVAL; 114 - switch (mem->mem_type) { 115 - case TTM_PL_SYSTEM: 116 - /* system memory */ 117 - return 0; 118 - case TTM_PL_VRAM: 119 - mem->bus.offset = mem->start << PAGE_SHIFT; 120 - mem->bus.base = pci_resource_start(mdev->dev->pdev, 0); 121 - mem->bus.is_iomem = true; 122 - break; 123 - default: 124 - return -EINVAL; 125 - break; 126 - } 127 - return 0; 128 - } 129 - 130 - static void mgag200_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) 131 - { 132 - } 133 - 134 - static void mgag200_ttm_backend_destroy(struct ttm_tt *tt) 135 - { 136 - ttm_tt_fini(tt); 137 - kfree(tt); 138 - } 139 - 140 - static struct ttm_backend_func mgag200_tt_backend_func = { 141 - .destroy = &mgag200_ttm_backend_destroy, 142 - }; 143 - 144 - 145 - static struct ttm_tt *mgag200_ttm_tt_create(struct ttm_buffer_object *bo, 146 - uint32_t page_flags) 147 - { 148 - struct ttm_tt *tt; 149 - 150 - tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL); 151 - if (tt == NULL) 152 - return NULL; 153 - tt->func = 
&mgag200_tt_backend_func; 154 - if (ttm_tt_init(tt, bo, page_flags)) { 155 - kfree(tt); 156 - return NULL; 157 - } 158 - return tt; 159 - } 160 - 161 - struct ttm_bo_driver mgag200_bo_driver = { 162 - .ttm_tt_create = mgag200_ttm_tt_create, 163 - .init_mem_type = mgag200_bo_init_mem_type, 164 - .eviction_valuable = ttm_bo_eviction_valuable, 165 - .evict_flags = mgag200_bo_evict_flags, 166 - .move = NULL, 167 - .verify_access = mgag200_bo_verify_access, 168 - .io_mem_reserve = &mgag200_ttm_io_mem_reserve, 169 - .io_mem_free = &mgag200_ttm_io_mem_free, 170 - }; 171 - 172 32 int mgag200_mm_init(struct mga_device *mdev) 173 33 { 34 + struct drm_vram_mm *vmm; 174 35 int ret; 175 36 struct drm_device *dev = mdev->dev; 176 - struct ttm_bo_device *bdev = &mdev->ttm.bdev; 177 37 178 - ret = ttm_bo_device_init(&mdev->ttm.bdev, 179 - &mgag200_bo_driver, 180 - dev->anon_inode->i_mapping, 181 - true); 182 - if (ret) { 183 - DRM_ERROR("Error initialising bo driver; %d\n", ret); 184 - return ret; 185 - } 186 - 187 - ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM, mdev->mc.vram_size >> PAGE_SHIFT); 188 - if (ret) { 189 - DRM_ERROR("Failed ttm VRAM init: %d\n", ret); 38 + vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0), 39 + mdev->mc.vram_size, 40 + &drm_gem_vram_mm_funcs); 41 + if (IS_ERR(vmm)) { 42 + ret = PTR_ERR(vmm); 43 + DRM_ERROR("Error initializing VRAM MM; %d\n", ret); 190 44 return ret; 191 45 } 192 46 ··· 57 203 { 58 204 struct drm_device *dev = mdev->dev; 59 205 60 - ttm_bo_device_release(&mdev->ttm.bdev); 206 + drm_vram_helper_release_mm(dev); 61 207 62 208 arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0), 63 209 pci_resource_len(dev->pdev, 0)); 64 210 arch_phys_wc_del(mdev->fb_mtrr); 65 211 mdev->fb_mtrr = 0; 66 - } 67 - 68 - void mgag200_ttm_placement(struct mgag200_bo *bo, int domain) 69 - { 70 - u32 c = 0; 71 - unsigned i; 72 - 73 - bo->placement.placement = bo->placements; 74 - bo->placement.busy_placement = bo->placements; 75 - if (domain & TTM_PL_FLAG_VRAM) 76 - bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM; 77 - if (domain & TTM_PL_FLAG_SYSTEM) 78 - bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM; 79 - if (!c) 80 - bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM; 81 - bo->placement.num_placement = c; 82 - bo->placement.num_busy_placement = c; 83 - for (i = 0; i < c; ++i) { 84 - bo->placements[i].fpfn = 0; 85 - bo->placements[i].lpfn = 0; 86 - } 87 - } 88 - 89 - int mgag200_bo_create(struct drm_device *dev, int size, int align, 90 - uint32_t flags, struct mgag200_bo **pmgabo) 91 - { 92 - struct mga_device *mdev = dev->dev_private; 93 - struct mgag200_bo *mgabo; 94 - size_t acc_size; 95 - int ret; 96 - 97 - mgabo = kzalloc(sizeof(struct mgag200_bo), GFP_KERNEL); 98 - if (!mgabo) 99 - return -ENOMEM; 100 - 101 - ret = drm_gem_object_init(dev, &mgabo->gem, size); 102 - if (ret) { 103 - kfree(mgabo); 104 - return ret; 105 - } 106 - 107 - mgabo->bo.bdev = &mdev->ttm.bdev; 108 - 109 - mgag200_ttm_placement(mgabo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM); 110 - 111 - acc_size = ttm_bo_dma_acc_size(&mdev->ttm.bdev, size, 112 - sizeof(struct mgag200_bo)); 113 - 114 - ret = ttm_bo_init(&mdev->ttm.bdev, &mgabo->bo, size, 115 - ttm_bo_type_device, &mgabo->placement, 116 - align >> PAGE_SHIFT, false, acc_size, 117 - NULL, NULL, mgag200_bo_ttm_destroy); 118 - if (ret) 119 - return ret; 120 - 121 - *pmgabo = mgabo; 122 - return 0; 123 - } 124 - 125 - static inline u64 mgag200_bo_gpu_offset(struct 
mgag200_bo *bo) 126 - { 127 - return bo->bo.offset; 128 - } 129 - 130 - int mgag200_bo_pin(struct mgag200_bo *bo, u32 pl_flag, u64 *gpu_addr) 131 - { 132 - struct ttm_operation_ctx ctx = { false, false }; 133 - int i, ret; 134 - 135 - if (bo->pin_count) { 136 - bo->pin_count++; 137 - if (gpu_addr) 138 - *gpu_addr = mgag200_bo_gpu_offset(bo); 139 - return 0; 140 - } 141 - 142 - mgag200_ttm_placement(bo, pl_flag); 143 - for (i = 0; i < bo->placement.num_placement; i++) 144 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 145 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 146 - if (ret) 147 - return ret; 148 - 149 - bo->pin_count = 1; 150 - if (gpu_addr) 151 - *gpu_addr = mgag200_bo_gpu_offset(bo); 152 - return 0; 153 - } 154 - 155 - int mgag200_bo_unpin(struct mgag200_bo *bo) 156 - { 157 - struct ttm_operation_ctx ctx = { false, false }; 158 - int i; 159 - if (!bo->pin_count) { 160 - DRM_ERROR("unpin bad %p\n", bo); 161 - return 0; 162 - } 163 - bo->pin_count--; 164 - if (bo->pin_count) 165 - return 0; 166 - 167 - for (i = 0; i < bo->placement.num_placement ; i++) 168 - bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 169 - return ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 170 - } 171 - 172 - int mgag200_bo_push_sysram(struct mgag200_bo *bo) 173 - { 174 - struct ttm_operation_ctx ctx = { false, false }; 175 - int i, ret; 176 - if (!bo->pin_count) { 177 - DRM_ERROR("unpin bad %p\n", bo); 178 - return 0; 179 - } 180 - bo->pin_count--; 181 - if (bo->pin_count) 182 - return 0; 183 - 184 - if (bo->kmap.virtual) 185 - ttm_bo_kunmap(&bo->kmap); 186 - 187 - mgag200_ttm_placement(bo, TTM_PL_FLAG_SYSTEM); 188 - for (i = 0; i < bo->placement.num_placement ; i++) 189 - bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT; 190 - 191 - ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx); 192 - if (ret) { 193 - DRM_ERROR("pushing to VRAM failed\n"); 194 - return ret; 195 - } 196 - return 0; 197 - } 198 - 199 - int mgag200_mmap(struct file *filp, struct vm_area_struct *vma) 200 - { 201 - struct drm_file *file_priv = filp->private_data; 202 - struct mga_device *mdev = file_priv->minor->dev->dev_private; 203 - 204 - return ttm_bo_mmap(filp, vma, &mdev->ttm.bdev); 205 212 }
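All of the removed TTM boilerplate (memory-type setup, io_mem handlers, placement logic, pin/unpin, mmap) is replaced by one pair of VRAM-MM helper calls. A minimal init/teardown sketch using the funcs table from the diff; the wrapper names are illustrative:

static int example_vram_mm_init(struct drm_device *dev, u64 vram_base,
				size_t vram_size)
{
	struct drm_vram_mm *vmm;

	vmm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
				       &drm_gem_vram_mm_funcs);
	return PTR_ERR_OR_ZERO(vmm);
}

static void example_vram_mm_fini(struct drm_device *dev)
{
	drm_vram_helper_release_mm(dev);
}

mgag200 feeds pci_resource_start(dev->pdev, 0) and mdev->mc.vram_size into the init call, as the hunk above shows.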
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c (+2 -4)
··· 694 694 695 695 static void dpu_crtc_reset(struct drm_crtc *crtc) 696 696 { 697 - struct dpu_crtc_state *cstate; 697 + struct dpu_crtc_state *cstate = kzalloc(sizeof(*cstate), GFP_KERNEL); 698 698 699 699 if (crtc->state) 700 700 dpu_crtc_destroy_state(crtc, crtc->state); 701 701 702 - crtc->state = kzalloc(sizeof(*cstate), GFP_KERNEL); 703 - if (crtc->state) 704 - crtc->state->crtc = crtc; 702 + __drm_atomic_helper_crtc_reset(crtc, &cstate->base); 705 703 } 706 704 707 705 /**
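This is one of the __drm_atomic_helper_crtc_reset() conversions from the core changes: a driver subclassing drm_crtc_state allocates its state and hands the embedded base to the helper instead of open-coding the crtc->state assignment. A generic sketch with hypothetical names; note that, like the diff, it leans on base being the first member, so a failed kzalloc() degenerates to passing NULL, which the helper accepts:

#include <drm/drm_atomic_state_helper.h>

struct example_crtc_state {
	struct drm_crtc_state base; /* must remain the first member */
	/* driver-private extensions follow */
};

static void example_crtc_destroy_state(struct drm_crtc *crtc,
				       struct drm_crtc_state *state)
{
	__drm_atomic_helper_crtc_destroy_state(state);
	kfree(container_of(state, struct example_crtc_state, base));
}

static void example_crtc_reset(struct drm_crtc *crtc)
{
	struct example_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (crtc->state)
		example_crtc_destroy_state(crtc, crtc->state);

	__drm_atomic_helper_crtc_reset(crtc, &state->base);
}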
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c (+6 -3)
··· 1040 1040 const struct drm_mode_fb_cmd2 *cmd, 1041 1041 struct drm_gem_object **bos) 1042 1042 { 1043 - int ret, i, num_base_fmt_planes; 1043 + const struct drm_format_info *info; 1044 1044 const struct dpu_format *fmt; 1045 1045 struct dpu_hw_fmt_layout layout; 1046 1046 uint32_t bos_total_size = 0; 1047 + int ret, i; 1047 1048 1048 1049 if (!msm_fmt || !cmd || !bos) { 1049 1050 DRM_ERROR("invalid arguments\n"); ··· 1052 1051 } 1053 1052 1054 1053 fmt = to_dpu_format(msm_fmt); 1055 - num_base_fmt_planes = drm_format_num_planes(fmt->base.pixel_format); 1054 + info = drm_format_info(fmt->base.pixel_format); 1055 + if (!info) 1056 + return -EINVAL; 1056 1057 1057 1058 ret = dpu_format_get_plane_sizes(fmt, cmd->width, cmd->height, 1058 1059 &layout, cmd->pitches); 1059 1060 if (ret) 1060 1061 return ret; 1061 1062 1062 - for (i = 0; i < num_base_fmt_planes; i++) { 1063 + for (i = 0; i < info->num_planes; i++) { 1063 1064 if (!bos[i]) { 1064 1065 DRM_ERROR("invalid handle for plane %d\n", i); 1065 1066 return -EINVAL;
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c (+2 -7)
··· 557 557 struct dpu_plane_state *pstate, 558 558 const struct dpu_format *fmt, bool color_fill) 559 559 { 560 - uint32_t chroma_subsmpl_h, chroma_subsmpl_v; 560 + const struct drm_format_info *info = drm_format_info(fmt->base.pixel_format); 561 561 562 562 /* don't chroma subsample if decimating */ 563 - chroma_subsmpl_h = 564 - drm_format_horz_chroma_subsampling(fmt->base.pixel_format); 565 - chroma_subsmpl_v = 566 - drm_format_vert_chroma_subsampling(fmt->base.pixel_format); 567 - 568 563 /* update scaler. calculate default config for QSEED3 */ 569 564 _dpu_plane_setup_scaler3(pdpu, pstate, 570 565 drm_rect_width(&pdpu->pipe_cfg.src_rect), ··· 567 572 drm_rect_width(&pdpu->pipe_cfg.dst_rect), 568 573 drm_rect_height(&pdpu->pipe_cfg.dst_rect), 569 574 &pstate->scaler3_cfg, fmt, 570 - chroma_subsmpl_h, chroma_subsmpl_v); 575 + info->hsub, info->vsub); 571 576 } 572 577 573 578 /**
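Here and throughout the msm changes, the per-format query helpers (drm_format_num_planes(), drm_format_horz/vert_chroma_subsampling(), drm_format_plane_cpp()) give way to a single drm_format_info lookup; this is the "replace lookups of drm_format with struct drm_format_info" conversion from the core changes. A small sketch of deriving chroma-plane dimensions, using NV12 as the example format (hsub = vsub = 2):

#include <drm/drm_fourcc.h>

static void example_chroma_dims(u32 luma_w, u32 luma_h,
				u32 *chroma_w, u32 *chroma_h)
{
	const struct drm_format_info *info = drm_format_info(DRM_FORMAT_NV12);

	/* NV12 subsamples chroma by 2 in both dimensions */
	*chroma_w = luma_w / info->hsub;
	*chroma_h = luma_h / info->vsub;
}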
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c (+13 -18)
··· 782 782 783 783 static void mdp5_crtc_restore_cursor(struct drm_crtc *crtc) 784 784 { 785 + const struct drm_format_info *info = drm_format_info(DRM_FORMAT_ARGB8888); 785 786 struct mdp5_crtc_state *mdp5_cstate = to_mdp5_crtc_state(crtc->state); 786 787 struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); 787 788 struct mdp5_kms *mdp5_kms = get_kms(crtc); ··· 801 800 width = mdp5_crtc->cursor.width; 802 801 height = mdp5_crtc->cursor.height; 803 802 804 - stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0); 803 + stride = width * info->cpp[0]; 805 804 806 805 get_roi(crtc, &roi_w, &roi_h); 807 806 ··· 1003 1002 drm_printf(p, "\tcmd_mode=%d\n", mdp5_cstate->cmd_mode); 1004 1003 } 1005 1004 1006 - static void mdp5_crtc_reset(struct drm_crtc *crtc) 1007 - { 1008 - struct mdp5_crtc_state *mdp5_cstate; 1009 - 1010 - if (crtc->state) { 1011 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 1012 - kfree(to_mdp5_crtc_state(crtc->state)); 1013 - } 1014 - 1015 - mdp5_cstate = kzalloc(sizeof(*mdp5_cstate), GFP_KERNEL); 1016 - 1017 - if (mdp5_cstate) { 1018 - mdp5_cstate->base.crtc = crtc; 1019 - crtc->state = &mdp5_cstate->base; 1020 - } 1021 - } 1022 - 1023 1005 static struct drm_crtc_state * 1024 1006 mdp5_crtc_duplicate_state(struct drm_crtc *crtc) 1025 1007 { ··· 1028 1044 __drm_atomic_helper_crtc_destroy_state(state); 1029 1045 1030 1046 kfree(mdp5_cstate); 1047 + } 1048 + 1049 + static void mdp5_crtc_reset(struct drm_crtc *crtc) 1050 + { 1051 + struct mdp5_crtc_state *mdp5_cstate = 1052 + kzalloc(sizeof(*mdp5_cstate), GFP_KERNEL); 1053 + 1054 + if (crtc->state) 1055 + mdp5_crtc_destroy_state(crtc, crtc->state); 1056 + 1057 + __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base); 1031 1058 } 1032 1059 1033 1060 static const struct drm_crtc_funcs mdp5_crtc_funcs = {
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c (+10 -14)
··· 650 650 uint32_t pixel_format, uint32_t src, uint32_t dest, 651 651 uint32_t phasex_steps[COMP_MAX]) 652 652 { 653 + const struct drm_format_info *info = drm_format_info(pixel_format); 653 654 struct mdp5_kms *mdp5_kms = get_kms(plane); 654 655 struct device *dev = mdp5_kms->dev->dev; 655 656 uint32_t phasex_step; 656 - unsigned int hsub; 657 657 int ret; 658 658 659 659 ret = calc_phase_step(src, dest, &phasex_step); ··· 662 662 return ret; 663 663 } 664 664 665 - hsub = drm_format_horz_chroma_subsampling(pixel_format); 666 - 667 665 phasex_steps[COMP_0] = phasex_step; 668 666 phasex_steps[COMP_3] = phasex_step; 669 - phasex_steps[COMP_1_2] = phasex_step / hsub; 667 + phasex_steps[COMP_1_2] = phasex_step / info->hsub; 670 668 671 669 return 0; 672 670 } ··· 673 675 uint32_t pixel_format, uint32_t src, uint32_t dest, 674 676 uint32_t phasey_steps[COMP_MAX]) 675 677 { 678 + const struct drm_format_info *info = drm_format_info(pixel_format); 676 679 struct mdp5_kms *mdp5_kms = get_kms(plane); 677 680 struct device *dev = mdp5_kms->dev->dev; 678 681 uint32_t phasey_step; 679 - unsigned int vsub; 680 682 int ret; 681 683 682 684 ret = calc_phase_step(src, dest, &phasey_step); ··· 685 687 return ret; 686 688 } 687 689 688 - vsub = drm_format_vert_chroma_subsampling(pixel_format); 689 - 690 690 phasey_steps[COMP_0] = phasey_step; 691 691 phasey_steps[COMP_3] = phasey_step; 692 - phasey_steps[COMP_1_2] = phasey_step / vsub; 692 + phasey_steps[COMP_1_2] = phasey_step / info->vsub; 693 693 694 694 return 0; 695 695 } ··· 695 699 static uint32_t get_scale_config(const struct mdp_format *format, 696 700 uint32_t src, uint32_t dst, bool horz) 697 701 { 702 + const struct drm_format_info *info = drm_format_info(format->base.pixel_format); 698 703 bool scaling = format->is_yuv ? true : (src != dst); 699 - uint32_t sub, pix_fmt = format->base.pixel_format; 704 + uint32_t sub; 700 705 uint32_t ya_filter, uv_filter; 701 706 bool yuv = format->is_yuv; 702 707 ··· 705 708 return 0; 706 709 707 710 if (yuv) { 708 - sub = horz ? drm_format_horz_chroma_subsampling(pix_fmt) : 709 - drm_format_vert_chroma_subsampling(pix_fmt); 711 + sub = horz ? info->hsub : info->vsub; 710 712 uv_filter = ((src / sub) <= dst) ? 711 713 SCALE_FILTER_BIL : SCALE_FILTER_PCMN; 712 714 } ··· 750 754 uint32_t src_w, int pe_left[COMP_MAX], int pe_right[COMP_MAX], 751 755 uint32_t src_h, int pe_top[COMP_MAX], int pe_bottom[COMP_MAX]) 752 756 { 753 - uint32_t pix_fmt = format->base.pixel_format; 757 + const struct drm_format_info *info = drm_format_info(format->base.pixel_format); 754 758 uint32_t lr, tb, req; 755 759 int i; 756 760 ··· 759 763 uint32_t roi_h = src_h; 760 764 761 765 if (format->is_yuv && i == COMP_1_2) { 762 - roi_w /= drm_format_horz_chroma_subsampling(pix_fmt); 763 - roi_h /= drm_format_vert_chroma_subsampling(pix_fmt); 766 + roi_w /= info->hsub; 767 + roi_h /= info->vsub; 764 768 } 765 769 766 770 lr = (pe_left[i] >= 0) ?
drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c (+4 -3)
··· 127 127 const struct mdp_format *format, 128 128 u32 width, bool hdecim) 129 129 { 130 + const struct drm_format_info *info = drm_format_info(format->base.pixel_format); 130 131 struct mdp5_kms *mdp5_kms = get_kms(smp); 131 132 int rev = mdp5_cfg_get_hw_rev(mdp5_kms->cfg); 132 133 int i, hsub, nplanes, nlines; 133 134 u32 fmt = format->base.pixel_format; 134 135 uint32_t blkcfg = 0; 135 136 136 - nplanes = drm_format_num_planes(fmt); 137 - hsub = drm_format_horz_chroma_subsampling(fmt); 137 + nplanes = info->num_planes; 138 + hsub = info->hsub; 138 139 139 140 /* different if BWC (compressed framebuffer?) enabled: */ 140 141 nlines = 2; ··· 158 157 for (i = 0; i < nplanes; i++) { 159 158 int n, fetch_stride, cpp; 160 159 161 - cpp = drm_format_plane_cpp(fmt, i); 160 + cpp = info->cpp[i]; 162 161 fetch_stride = width * cpp / (i ? hsub : 1); 163 162 164 163 n = DIV_ROUND_UP(fetch_stride * nlines, smp->blk_size);
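The SMP block budget above is plain arithmetic on the format info: bytes fetched per line, times the number of prefetch lines, divided by the block size, rounded up. Distilled into a standalone sketch (names hypothetical):

#include <linux/kernel.h> /* DIV_ROUND_UP */
#include <drm/drm_fourcc.h>

static int example_smp_blocks(const struct drm_format_info *info,
			      u32 width, int plane, int nlines, int blk_size)
{
	/* chroma planes of subsampled formats fetch width / hsub samples */
	int fetch_stride = width * info->cpp[plane] / (plane ? info->hsub : 1);

	return DIV_ROUND_UP(fetch_stride * nlines, blk_size);
}

For a 1920-wide NV12 chroma plane (cpp = 2, hsub = 2), nlines = 2 and a hypothetical 128-byte block size, this yields DIV_ROUND_UP(1920 * 2, 128) = 30 blocks.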
drivers/gpu/drm/msm/msm_fb.c (+9 -9)
··· 106 106 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, 107 107 struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd) 108 108 { 109 + const struct drm_format_info *info = drm_get_format_info(dev, 110 + mode_cmd); 109 111 struct drm_gem_object *bos[4] = {0}; 110 112 struct drm_framebuffer *fb; 111 - int ret, i, n = drm_format_num_planes(mode_cmd->pixel_format); 113 + int ret, i, n = info->num_planes; 112 114 113 115 for (i = 0; i < n; i++) { 114 116 bos[i] = drm_gem_object_lookup(file, mode_cmd->handles[i]); ··· 137 135 static struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev, 138 136 const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos) 139 137 { 138 + const struct drm_format_info *info = drm_get_format_info(dev, 139 + mode_cmd); 140 140 struct msm_drm_private *priv = dev->dev_private; 141 141 struct msm_kms *kms = priv->kms; 142 142 struct msm_framebuffer *msm_fb = NULL; 143 143 struct drm_framebuffer *fb; 144 144 const struct msm_format *format; 145 145 int ret, i, n; 146 - unsigned int hsub, vsub; 147 146 148 147 DBG("create framebuffer: dev=%p, mode_cmd=%p (%dx%d@%4.4s)", 149 148 dev, mode_cmd, mode_cmd->width, mode_cmd->height, 150 149 (char *)&mode_cmd->pixel_format); 151 150 152 - n = drm_format_num_planes(mode_cmd->pixel_format); 153 - hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format); 154 - vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format); 155 - 151 + n = info->num_planes; 156 152 format = kms->funcs->get_format(kms, mode_cmd->pixel_format, 157 153 mode_cmd->modifier[0]); 158 154 if (!format) { ··· 176 176 } 177 177 178 178 for (i = 0; i < n; i++) { 179 - unsigned int width = mode_cmd->width / (i ? hsub : 1); 180 - unsigned int height = mode_cmd->height / (i ? vsub : 1); 179 + unsigned int width = mode_cmd->width / (i ? info->hsub : 1); 180 + unsigned int height = mode_cmd->height / (i ? info->vsub : 1); 181 181 unsigned int min_size; 182 182 183 183 min_size = (height - 1) * mode_cmd->pitches[i] 184 - + width * drm_format_plane_cpp(mode_cmd->pixel_format, i) 184 + + width * info->cpp[i] 185 185 + mode_cmd->offsets[i]; 186 186 187 187 if (bos[i]->size < min_size) {
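msm_framebuffer_init() now validates each handle against a minimum size computed from the format info; a plane's last line only needs width * cpp bytes rather than a full pitch. The check, pulled out as a sketch:

static unsigned int example_min_plane_size(const struct drm_format_info *info,
					   const struct drm_mode_fb_cmd2 *cmd,
					   int i)
{
	/* secondary planes of subsampled formats are smaller */
	unsigned int width = cmd->width / (i ? info->hsub : 1);
	unsigned int height = cmd->height / (i ? info->vsub : 1);

	return (height - 1) * cmd->pitches[i] + width * info->cpp[i] +
	       cmd->offsets[i];
}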
drivers/gpu/drm/nouveau/dispnv50/head.c (+3 -10)
··· 421 421 } 422 422 423 423 static void 424 - __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc, 425 - struct drm_crtc_state *state) 426 - { 427 - if (crtc->state) 428 - crtc->funcs->atomic_destroy_state(crtc, crtc->state); 429 - crtc->state = state; 430 - crtc->state->crtc = crtc; 431 - } 432 - 433 - static void 434 424 nv50_head_reset(struct drm_crtc *crtc) 435 425 { 436 426 struct nv50_head_atom *asyh; 437 427 438 428 if (WARN_ON(!(asyh = kzalloc(sizeof(*asyh), GFP_KERNEL)))) 439 429 return; 430 + 431 + if (crtc->state) 432 + nv50_head_atomic_destroy_state(crtc, crtc->state); 440 433 441 434 __drm_atomic_helper_crtc_reset(crtc, &asyh->state); 442 435 }
drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv04.c (-2)
··· 26 26 27 27 #include <subdev/gpio.h> 28 28 29 - #include <subdev/gpio.h> 30 - 31 29 static void 32 30 nv04_bus_intr(struct nvkm_bus *bus) 33 31 {
drivers/gpu/drm/omapdrm/omap_fb.c (+4 -2)
··· 298 298 struct drm_framebuffer *omap_framebuffer_create(struct drm_device *dev, 299 299 struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd) 300 300 { 301 - unsigned int num_planes = drm_format_num_planes(mode_cmd->pixel_format); 301 + const struct drm_format_info *info = drm_get_format_info(dev, 302 + mode_cmd); 303 + unsigned int num_planes = info->num_planes; 302 304 struct drm_gem_object *bos[4]; 303 305 struct drm_framebuffer *fb; 304 306 int i; ··· 339 337 dev, mode_cmd, mode_cmd->width, mode_cmd->height, 340 338 (char *)&mode_cmd->pixel_format); 341 339 342 - format = drm_format_info(mode_cmd->pixel_format); 340 + format = drm_get_format_info(dev, mode_cmd); 343 341 344 342 for (i = 0; i < ARRAY_SIZE(formats); i++) { 345 343 if (formats[i] == mode_cmd->pixel_format)
drivers/gpu/drm/panel/Kconfig (+18)
··· 132 132 Say Y here if you want to enable support for Orise Technology 133 133 otm8009a 480x800 dsi 2dl panel. 134 134 135 + config DRM_PANEL_OSD_OSD101T2587_53TS 136 + tristate "OSD OSD101T2587-53TS DSI 1920x1200 video mode panel" 137 + depends on OF 138 + depends on DRM_MIPI_DSI 139 + depends on BACKLIGHT_CLASS_DEVICE 140 + help 141 + Say Y here if you want to enable support for One Stop Displays 142 + OSD101T2587-53TS 10.1" 1920x1200 dsi panel. 143 + 135 144 config DRM_PANEL_PANASONIC_VVX10F034N00 136 145 tristate "Panasonic VVX10F034N00 1920x1200 video mode panel" 137 146 depends on OF ··· 209 200 depends on DRM_MIPI_DSI 210 201 depends on BACKLIGHT_CLASS_DEVICE 211 202 select VIDEOMODE_HELPERS 203 + 204 + config DRM_PANEL_SAMSUNG_S6E63M0 205 + tristate "Samsung S6E63M0 RGB/SPI panel" 206 + depends on OF 207 + depends on SPI 208 + depends on BACKLIGHT_CLASS_DEVICE 209 + help 210 + Say Y here if you want to enable support for Samsung S6E63M0 211 + AMOLED LCD panel. 212 212 213 213 config DRM_PANEL_SAMSUNG_S6E8AA0 214 214 tristate "Samsung S6E8AA0 DSI video mode panel"
drivers/gpu/drm/panel/Makefile (+2)
··· 11 11 obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o 12 12 obj-$(CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO) += panel-olimex-lcd-olinuxino.o 13 13 obj-$(CONFIG_DRM_PANEL_ORISETECH_OTM8009A) += panel-orisetech-otm8009a.o 14 + obj-$(CONFIG_DRM_PANEL_OSD_OSD101T2587_53TS) += panel-osd-osd101t2587-53ts.o 14 15 obj-$(CONFIG_DRM_PANEL_PANASONIC_VVX10F034N00) += panel-panasonic-vvx10f034n00.o 15 16 obj-$(CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN) += panel-raspberrypi-touchscreen.o 16 17 obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM68200) += panel-raydium-rm68200.o ··· 21 20 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D16D0) += panel-samsung-s6d16d0.o 22 21 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2) += panel-samsung-s6e3ha2.o 23 22 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03) += panel-samsung-s6e63j0x03.o 23 + obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63M0) += panel-samsung-s6e63m0.o 24 24 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0) += panel-samsung-s6e8aa0.o 25 25 obj-$(CONFIG_DRM_PANEL_SEIKO_43WVF1G) += panel-seiko-43wvf1g.o 26 26 obj-$(CONFIG_DRM_PANEL_SHARP_LQ101R1SX01) += panel-sharp-lq101r1sx01.o
drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c (+254)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com 4 + * Author: Peter Ujfalusi <peter.ujfalusi@ti.com> 5 + */ 6 + 7 + #include <linux/backlight.h> 8 + #include <linux/module.h> 9 + #include <linux/of.h> 10 + #include <linux/regulator/consumer.h> 11 + 12 + #include <drm/drm_crtc.h> 13 + #include <drm/drm_device.h> 14 + #include <drm/drm_mipi_dsi.h> 15 + #include <drm/drm_panel.h> 16 + 17 + #include <video/mipi_display.h> 18 + 19 + struct osd101t2587_panel { 20 + struct drm_panel base; 21 + struct mipi_dsi_device *dsi; 22 + 23 + struct backlight_device *backlight; 24 + struct regulator *supply; 25 + 26 + bool prepared; 27 + bool enabled; 28 + 29 + const struct drm_display_mode *default_mode; 30 + }; 31 + 32 + static inline struct osd101t2587_panel *ti_osd_panel(struct drm_panel *panel) 33 + { 34 + return container_of(panel, struct osd101t2587_panel, base); 35 + } 36 + 37 + static int osd101t2587_panel_disable(struct drm_panel *panel) 38 + { 39 + struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel); 40 + int ret; 41 + 42 + if (!osd101t2587->enabled) 43 + return 0; 44 + 45 + backlight_disable(osd101t2587->backlight); 46 + 47 + ret = mipi_dsi_shutdown_peripheral(osd101t2587->dsi); 48 + 49 + osd101t2587->enabled = false; 50 + 51 + return ret; 52 + } 53 + 54 + static int osd101t2587_panel_unprepare(struct drm_panel *panel) 55 + { 56 + struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel); 57 + 58 + if (!osd101t2587->prepared) 59 + return 0; 60 + 61 + regulator_disable(osd101t2587->supply); 62 + osd101t2587->prepared = false; 63 + 64 + return 0; 65 + } 66 + 67 + static int osd101t2587_panel_prepare(struct drm_panel *panel) 68 + { 69 + struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel); 70 + int ret; 71 + 72 + if (osd101t2587->prepared) 73 + return 0; 74 + 75 + ret = regulator_enable(osd101t2587->supply); 76 + if (!ret) 77 + osd101t2587->prepared = true; 78 + 79 + return ret; 80 + } 81 + 82 + static int osd101t2587_panel_enable(struct drm_panel *panel) 83 + { 84 + struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel); 85 + int ret; 86 + 87 + if (osd101t2587->enabled) 88 + return 0; 89 + 90 + ret = mipi_dsi_turn_on_peripheral(osd101t2587->dsi); 91 + if (ret) 92 + return ret; 93 + 94 + backlight_enable(osd101t2587->backlight); 95 + 96 + osd101t2587->enabled = true; 97 + 98 + return ret; 99 + } 100 + 101 + static const struct drm_display_mode default_mode_osd101t2587 = { 102 + .clock = 164400, 103 + .hdisplay = 1920, 104 + .hsync_start = 1920 + 152, 105 + .hsync_end = 1920 + 152 + 52, 106 + .htotal = 1920 + 152 + 52 + 20, 107 + .vdisplay = 1200, 108 + .vsync_start = 1200 + 24, 109 + .vsync_end = 1200 + 24 + 6, 110 + .vtotal = 1200 + 24 + 6 + 48, 111 + .vrefresh = 60, 112 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 113 + }; 114 + 115 + static int osd101t2587_panel_get_modes(struct drm_panel *panel) 116 + { 117 + struct osd101t2587_panel *osd101t2587 = ti_osd_panel(panel); 118 + struct drm_display_mode *mode; 119 + 120 + mode = drm_mode_duplicate(panel->drm, osd101t2587->default_mode); 121 + if (!mode) { 122 + dev_err(panel->drm->dev, "failed to add mode %ux%ux@%u\n", 123 + osd101t2587->default_mode->hdisplay, 124 + osd101t2587->default_mode->vdisplay, 125 + osd101t2587->default_mode->vrefresh); 126 + return -ENOMEM; 127 + } 128 + 129 + drm_mode_set_name(mode); 130 + 131 + drm_mode_probed_add(panel->connector, mode); 132 + 133 + panel->connector->display_info.width_mm = 217; 134 + 
panel->connector->display_info.height_mm = 136; 135 + 136 + return 1; 137 + } 138 + 139 + static const struct drm_panel_funcs osd101t2587_panel_funcs = { 140 + .disable = osd101t2587_panel_disable, 141 + .unprepare = osd101t2587_panel_unprepare, 142 + .prepare = osd101t2587_panel_prepare, 143 + .enable = osd101t2587_panel_enable, 144 + .get_modes = osd101t2587_panel_get_modes, 145 + }; 146 + 147 + static const struct of_device_id osd101t2587_of_match[] = { 148 + { 149 + .compatible = "osddisplays,osd101t2587-53ts", 150 + .data = &default_mode_osd101t2587, 151 + }, { 152 + /* sentinel */ 153 + } 154 + }; 155 + MODULE_DEVICE_TABLE(of, osd101t2587_of_match); 156 + 157 + static int osd101t2587_panel_add(struct osd101t2587_panel *osd101t2587) 158 + { 159 + struct device *dev = &osd101t2587->dsi->dev; 160 + 161 + osd101t2587->supply = devm_regulator_get(dev, "power"); 162 + if (IS_ERR(osd101t2587->supply)) 163 + return PTR_ERR(osd101t2587->supply); 164 + 165 + osd101t2587->backlight = devm_of_find_backlight(dev); 166 + if (IS_ERR(osd101t2587->backlight)) 167 + return PTR_ERR(osd101t2587->backlight); 168 + 169 + drm_panel_init(&osd101t2587->base); 170 + osd101t2587->base.funcs = &osd101t2587_panel_funcs; 171 + osd101t2587->base.dev = &osd101t2587->dsi->dev; 172 + 173 + return drm_panel_add(&osd101t2587->base); 174 + } 175 + 176 + static int osd101t2587_panel_probe(struct mipi_dsi_device *dsi) 177 + { 178 + struct osd101t2587_panel *osd101t2587; 179 + const struct of_device_id *id; 180 + int ret; 181 + 182 + id = of_match_node(osd101t2587_of_match, dsi->dev.of_node); 183 + if (!id) 184 + return -ENODEV; 185 + 186 + dsi->lanes = 4; 187 + dsi->format = MIPI_DSI_FMT_RGB888; 188 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | 189 + MIPI_DSI_MODE_VIDEO_BURST | 190 + MIPI_DSI_MODE_VIDEO_SYNC_PULSE | 191 + MIPI_DSI_MODE_EOT_PACKET; 192 + 193 + osd101t2587 = devm_kzalloc(&dsi->dev, sizeof(*osd101t2587), GFP_KERNEL); 194 + if (!osd101t2587) 195 + return -ENOMEM; 196 + 197 + mipi_dsi_set_drvdata(dsi, osd101t2587); 198 + 199 + osd101t2587->dsi = dsi; 200 + osd101t2587->default_mode = id->data; 201 + 202 + ret = osd101t2587_panel_add(osd101t2587); 203 + if (ret < 0) 204 + return ret; 205 + 206 + ret = mipi_dsi_attach(dsi); 207 + if (ret) 208 + drm_panel_remove(&osd101t2587->base); 209 + 210 + return ret; 211 + } 212 + 213 + static int osd101t2587_panel_remove(struct mipi_dsi_device *dsi) 214 + { 215 + struct osd101t2587_panel *osd101t2587 = mipi_dsi_get_drvdata(dsi); 216 + int ret; 217 + 218 + ret = osd101t2587_panel_disable(&osd101t2587->base); 219 + if (ret < 0) 220 + dev_warn(&dsi->dev, "failed to disable panel: %d\n", ret); 221 + 222 + osd101t2587_panel_unprepare(&osd101t2587->base); 223 + 224 + drm_panel_remove(&osd101t2587->base); 225 + 226 + ret = mipi_dsi_detach(dsi); 227 + if (ret < 0) 228 + dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret); 229 + 230 + return ret; 231 + } 232 + 233 + static void osd101t2587_panel_shutdown(struct mipi_dsi_device *dsi) 234 + { 235 + struct osd101t2587_panel *osd101t2587 = mipi_dsi_get_drvdata(dsi); 236 + 237 + osd101t2587_panel_disable(&osd101t2587->base); 238 + osd101t2587_panel_unprepare(&osd101t2587->base); 239 + } 240 + 241 + static struct mipi_dsi_driver osd101t2587_panel_driver = { 242 + .driver = { 243 + .name = "panel-osd-osd101t2587-53ts", 244 + .of_match_table = osd101t2587_of_match, 245 + }, 246 + .probe = osd101t2587_panel_probe, 247 + .remove = osd101t2587_panel_remove, 248 + .shutdown = osd101t2587_panel_shutdown, 249 + }; 250 + 
module_mipi_dsi_driver(osd101t2587_panel_driver); 251 + 252 + MODULE_AUTHOR("Peter Ujfalusi <peter.ujfalusi@ti.com>"); 253 + MODULE_DESCRIPTION("OSD101T2587-53TS DSI panel"); 254 + MODULE_LICENSE("GPL v2");
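A quick sanity check on the fixed mode above: htotal = 1920 + 152 + 52 + 20 = 2144 and vtotal = 1200 + 24 + 6 + 48 = 1278, so the 164400 kHz pixel clock gives 164400000 / (2144 * 1278) ≈ 60 Hz, matching .vrefresh = 60. Essentially the arithmetic drm_mode_vrefresh() performs, sketched for reference:

#include <linux/kernel.h>
#include <drm/drm_modes.h>

/* illustrative re-derivation; the core already provides drm_mode_vrefresh() */
static unsigned int example_vrefresh_hz(const struct drm_display_mode *m)
{
	return DIV_ROUND_CLOSEST(m->clock * 1000, m->htotal * m->vtotal);
}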
drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c (-1)
··· 57 57 #include <drm/drmP.h> 58 58 #include <drm/drm_crtc.h> 59 59 #include <drm/drm_mipi_dsi.h> 60 - #include <drm/drm_panel.h> 61 60 62 61 #define RPI_DSI_DRIVER_NAME "rpi-ts-dsi" 63 62
drivers/gpu/drm/panel/panel-samsung-s6e63m0.c (+514)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * S6E63M0 AMOLED LCD drm_panel driver. 4 + * 5 + * Copyright (C) 2019 Paweł Chmiel <pawel.mikolaj.chmiel@gmail.com> 6 + * Derived from drivers/gpu/drm/panel-samsung-ld9040.c 7 + * 8 + * Andrzej Hajda <a.hajda@samsung.com> 9 + */ 10 + 11 + #include <drm/drm_modes.h> 12 + #include <drm/drm_panel.h> 13 + #include <drm/drm_print.h> 14 + 15 + #include <linux/backlight.h> 16 + #include <linux/delay.h> 17 + #include <linux/gpio/consumer.h> 18 + #include <linux/module.h> 19 + #include <linux/regulator/consumer.h> 20 + #include <linux/spi/spi.h> 21 + 22 + #include <video/mipi_display.h> 23 + 24 + /* Manufacturer Command Set */ 25 + #define MCS_ELVSS_ON 0xb1 26 + #define MCS_MIECTL1 0xc0 27 + #define MCS_BCMODE 0xc1 28 + #define MCS_DISCTL 0xf2 29 + #define MCS_SRCCTL 0xf6 30 + #define MCS_IFCTL 0xf7 31 + #define MCS_PANELCTL 0xF8 32 + #define MCS_PGAMMACTL 0xfa 33 + 34 + #define NUM_GAMMA_LEVELS 11 35 + #define GAMMA_TABLE_COUNT 23 36 + 37 + #define DATA_MASK 0x100 38 + 39 + #define MAX_BRIGHTNESS (NUM_GAMMA_LEVELS - 1) 40 + 41 + /* array of gamma tables for gamma value 2.2 */ 42 + static u8 const s6e63m0_gamma_22[NUM_GAMMA_LEVELS][GAMMA_TABLE_COUNT] = { 43 + { MCS_PGAMMACTL, 0x00, 44 + 0x18, 0x08, 0x24, 0x78, 0xEC, 0x3D, 0xC8, 45 + 0xC2, 0xB6, 0xC4, 0xC7, 0xB6, 0xD5, 0xD7, 46 + 0xCC, 0x00, 0x39, 0x00, 0x36, 0x00, 0x51 }, 47 + { MCS_PGAMMACTL, 0x00, 48 + 0x18, 0x08, 0x24, 0x73, 0x4A, 0x3D, 0xC0, 49 + 0xC2, 0xB1, 0xBB, 0xBE, 0xAC, 0xCE, 0xCF, 50 + 0xC5, 0x00, 0x5D, 0x00, 0x5E, 0x00, 0x82 }, 51 + { MCS_PGAMMACTL, 0x00, 52 + 0x18, 0x08, 0x24, 0x70, 0x51, 0x3E, 0xBF, 53 + 0xC1, 0xAF, 0xB9, 0xBC, 0xAB, 0xCC, 0xCC, 54 + 0xC2, 0x00, 0x65, 0x00, 0x67, 0x00, 0x8D }, 55 + { MCS_PGAMMACTL, 0x00, 56 + 0x18, 0x08, 0x24, 0x6C, 0x54, 0x3A, 0xBC, 57 + 0xBF, 0xAC, 0xB7, 0xBB, 0xA9, 0xC9, 0xC9, 58 + 0xBE, 0x00, 0x71, 0x00, 0x73, 0x00, 0x9E }, 59 + { MCS_PGAMMACTL, 0x00, 60 + 0x18, 0x08, 0x24, 0x69, 0x54, 0x37, 0xBB, 61 + 0xBE, 0xAC, 0xB4, 0xB7, 0xA6, 0xC7, 0xC8, 62 + 0xBC, 0x00, 0x7B, 0x00, 0x7E, 0x00, 0xAB }, 63 + { MCS_PGAMMACTL, 0x00, 64 + 0x18, 0x08, 0x24, 0x66, 0x55, 0x34, 0xBA, 65 + 0xBD, 0xAB, 0xB1, 0xB5, 0xA3, 0xC5, 0xC6, 66 + 0xB9, 0x00, 0x85, 0x00, 0x88, 0x00, 0xBA }, 67 + { MCS_PGAMMACTL, 0x00, 68 + 0x18, 0x08, 0x24, 0x63, 0x53, 0x31, 0xB8, 69 + 0xBC, 0xA9, 0xB0, 0xB5, 0xA2, 0xC4, 0xC4, 70 + 0xB8, 0x00, 0x8B, 0x00, 0x8E, 0x00, 0xC2 }, 71 + { MCS_PGAMMACTL, 0x00, 72 + 0x18, 0x08, 0x24, 0x62, 0x54, 0x30, 0xB9, 73 + 0xBB, 0xA9, 0xB0, 0xB3, 0xA1, 0xC1, 0xC3, 74 + 0xB7, 0x00, 0x91, 0x00, 0x95, 0x00, 0xDA }, 75 + { MCS_PGAMMACTL, 0x00, 76 + 0x18, 0x08, 0x24, 0x66, 0x58, 0x34, 0xB6, 77 + 0xBA, 0xA7, 0xAF, 0xB3, 0xA0, 0xC1, 0xC2, 78 + 0xB7, 0x00, 0x97, 0x00, 0x9A, 0x00, 0xD1 }, 79 + { MCS_PGAMMACTL, 0x00, 80 + 0x18, 0x08, 0x24, 0x64, 0x56, 0x33, 0xB6, 81 + 0xBA, 0xA8, 0xAC, 0xB1, 0x9D, 0xC1, 0xC1, 82 + 0xB7, 0x00, 0x9C, 0x00, 0x9F, 0x00, 0xD6 }, 83 + { MCS_PGAMMACTL, 0x00, 84 + 0x18, 0x08, 0x24, 0x5f, 0x50, 0x2d, 0xB6, 85 + 0xB9, 0xA7, 0xAd, 0xB1, 0x9f, 0xbe, 0xC0, 86 + 0xB5, 0x00, 0xa0, 0x00, 0xa4, 0x00, 0xdb }, 87 + }; 88 + 89 + struct s6e63m0 { 90 + struct device *dev; 91 + struct drm_panel panel; 92 + struct backlight_device *bl_dev; 93 + 94 + struct regulator_bulk_data supplies[2]; 95 + struct gpio_desc *reset_gpio; 96 + 97 + bool prepared; 98 + bool enabled; 99 + 100 + /* 101 + * This field is tested by functions directly accessing bus before 102 + * transfer, transfer is skipped if it is set. 
In case of transfer 103 + * failure or unexpected response the field is set to error value. 104 + * Such construct allows to eliminate many checks in higher level 105 + * functions. 106 + */ 107 + int error; 108 + }; 109 + 110 + static const struct drm_display_mode default_mode = { 111 + .clock = 25628, 112 + .hdisplay = 480, 113 + .hsync_start = 480 + 16, 114 + .hsync_end = 480 + 16 + 2, 115 + .htotal = 480 + 16 + 2 + 16, 116 + .vdisplay = 800, 117 + .vsync_start = 800 + 28, 118 + .vsync_end = 800 + 28 + 2, 119 + .vtotal = 800 + 28 + 2 + 1, 120 + .vrefresh = 60, 121 + .width_mm = 53, 122 + .height_mm = 89, 123 + .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC, 124 + }; 125 + 126 + static inline struct s6e63m0 *panel_to_s6e63m0(struct drm_panel *panel) 127 + { 128 + return container_of(panel, struct s6e63m0, panel); 129 + } 130 + 131 + static int s6e63m0_clear_error(struct s6e63m0 *ctx) 132 + { 133 + int ret = ctx->error; 134 + 135 + ctx->error = 0; 136 + return ret; 137 + } 138 + 139 + static int s6e63m0_spi_write_word(struct s6e63m0 *ctx, u16 data) 140 + { 141 + struct spi_device *spi = to_spi_device(ctx->dev); 142 + struct spi_transfer xfer = { 143 + .len = 2, 144 + .tx_buf = &data, 145 + }; 146 + struct spi_message msg; 147 + 148 + spi_message_init(&msg); 149 + spi_message_add_tail(&xfer, &msg); 150 + 151 + return spi_sync(spi, &msg); 152 + } 153 + 154 + static void s6e63m0_dcs_write(struct s6e63m0 *ctx, const u8 *data, size_t len) 155 + { 156 + int ret = 0; 157 + 158 + if (ctx->error < 0 || len == 0) 159 + return; 160 + 161 + DRM_DEV_DEBUG(ctx->dev, "writing dcs seq: %*ph\n", (int)len, data); 162 + ret = s6e63m0_spi_write_word(ctx, *data); 163 + 164 + while (!ret && --len) { 165 + ++data; 166 + ret = s6e63m0_spi_write_word(ctx, *data | DATA_MASK); 167 + } 168 + 169 + if (ret) { 170 + DRM_DEV_ERROR(ctx->dev, "error %d writing dcs seq: %*ph\n", ret, 171 + (int)len, data); 172 + ctx->error = ret; 173 + } 174 + 175 + usleep_range(300, 310); 176 + } 177 + 178 + #define s6e63m0_dcs_write_seq_static(ctx, seq ...) 
\ 179 + ({ \ 180 + static const u8 d[] = { seq }; \ 181 + s6e63m0_dcs_write(ctx, d, ARRAY_SIZE(d)); \ 182 + }) 183 + 184 + static void s6e63m0_init(struct s6e63m0 *ctx) 185 + { 186 + s6e63m0_dcs_write_seq_static(ctx, MCS_PANELCTL, 187 + 0x01, 0x27, 0x27, 0x07, 0x07, 0x54, 0x9f, 188 + 0x63, 0x86, 0x1a, 0x33, 0x0d, 0x00, 0x00); 189 + 190 + s6e63m0_dcs_write_seq_static(ctx, MCS_DISCTL, 191 + 0x02, 0x03, 0x1c, 0x10, 0x10); 192 + s6e63m0_dcs_write_seq_static(ctx, MCS_IFCTL, 193 + 0x03, 0x00, 0x00); 194 + 195 + s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL, 196 + 0x00, 0x18, 0x08, 0x24, 0x64, 0x56, 0x33, 197 + 0xb6, 0xba, 0xa8, 0xac, 0xb1, 0x9d, 0xc1, 198 + 0xc1, 0xb7, 0x00, 0x9c, 0x00, 0x9f, 0x00, 199 + 0xd6); 200 + s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL, 201 + 0x01); 202 + 203 + s6e63m0_dcs_write_seq_static(ctx, MCS_SRCCTL, 204 + 0x00, 0x8c, 0x07); 205 + s6e63m0_dcs_write_seq_static(ctx, 0xb3, 206 + 0xc); 207 + 208 + s6e63m0_dcs_write_seq_static(ctx, 0xb5, 209 + 0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17, 210 + 0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b, 211 + 0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a, 212 + 0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23, 213 + 0x21, 0x20, 0x1e, 0x1e); 214 + 215 + s6e63m0_dcs_write_seq_static(ctx, 0xb6, 216 + 0x00, 0x00, 0x11, 0x22, 0x33, 0x44, 0x44, 217 + 0x44, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66, 218 + 0x66, 0x66); 219 + 220 + s6e63m0_dcs_write_seq_static(ctx, 0xb7, 221 + 0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17, 222 + 0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b, 223 + 0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a, 224 + 0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23, 225 + 0x21, 0x20, 0x1e, 0x1e, 0x00, 0x00, 0x11, 226 + 0x22, 0x33, 0x44, 0x44, 0x44, 0x55, 0x55, 227 + 0x66, 0x66, 0x66, 0x66, 0x66, 0x66); 228 + 229 + s6e63m0_dcs_write_seq_static(ctx, 0xb9, 230 + 0x2c, 0x12, 0x0c, 0x0a, 0x10, 0x0e, 0x17, 231 + 0x13, 0x1f, 0x1a, 0x2a, 0x24, 0x1f, 0x1b, 232 + 0x1a, 0x17, 0x2b, 0x26, 0x22, 0x20, 0x3a, 233 + 0x34, 0x30, 0x2c, 0x29, 0x26, 0x25, 0x23, 234 + 0x21, 0x20, 0x1e, 0x1e); 235 + 236 + s6e63m0_dcs_write_seq_static(ctx, 0xba, 237 + 0x00, 0x00, 0x11, 0x22, 0x33, 0x44, 0x44, 238 + 0x44, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66, 239 + 0x66, 0x66); 240 + 241 + s6e63m0_dcs_write_seq_static(ctx, MCS_BCMODE, 242 + 0x4d, 0x96, 0x1d, 0x00, 0x00, 0x01, 0xdf, 243 + 0x00, 0x00, 0x03, 0x1f, 0x00, 0x00, 0x00, 244 + 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x06, 245 + 0x09, 0x0d, 0x0f, 0x12, 0x15, 0x18); 246 + 247 + s6e63m0_dcs_write_seq_static(ctx, 0xb2, 248 + 0x10, 0x10, 0x0b, 0x05); 249 + 250 + s6e63m0_dcs_write_seq_static(ctx, MCS_MIECTL1, 251 + 0x01); 252 + 253 + s6e63m0_dcs_write_seq_static(ctx, MCS_ELVSS_ON, 254 + 0x0b); 255 + 256 + s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_EXIT_SLEEP_MODE); 257 + } 258 + 259 + static int s6e63m0_power_on(struct s6e63m0 *ctx) 260 + { 261 + int ret; 262 + 263 + ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 264 + if (ret < 0) 265 + return ret; 266 + 267 + msleep(25); 268 + 269 + gpiod_set_value(ctx->reset_gpio, 0); 270 + msleep(120); 271 + 272 + return 0; 273 + } 274 + 275 + static int s6e63m0_power_off(struct s6e63m0 *ctx) 276 + { 277 + int ret; 278 + 279 + gpiod_set_value(ctx->reset_gpio, 1); 280 + msleep(120); 281 + 282 + ret = regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 283 + if (ret < 0) 284 + return ret; 285 + 286 + return 0; 287 + } 288 + 289 + static int s6e63m0_disable(struct drm_panel *panel) 290 + { 291 + struct s6e63m0 *ctx = panel_to_s6e63m0(panel); 292 + 293 + if (!ctx->enabled) 294 + return 0; 295 + 
296 + backlight_disable(ctx->bl_dev); 297 + 298 + s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_ENTER_SLEEP_MODE); 299 + msleep(200); 300 + 301 + ctx->enabled = false; 302 + 303 + return 0; 304 + } 305 + 306 + static int s6e63m0_unprepare(struct drm_panel *panel) 307 + { 308 + struct s6e63m0 *ctx = panel_to_s6e63m0(panel); 309 + int ret; 310 + 311 + if (!ctx->prepared) 312 + return 0; 313 + 314 + s6e63m0_clear_error(ctx); 315 + 316 + ret = s6e63m0_power_off(ctx); 317 + if (ret < 0) 318 + return ret; 319 + 320 + ctx->prepared = false; 321 + 322 + return 0; 323 + } 324 + 325 + static int s6e63m0_prepare(struct drm_panel *panel) 326 + { 327 + struct s6e63m0 *ctx = panel_to_s6e63m0(panel); 328 + int ret; 329 + 330 + if (ctx->prepared) 331 + return 0; 332 + 333 + ret = s6e63m0_power_on(ctx); 334 + if (ret < 0) 335 + return ret; 336 + 337 + s6e63m0_init(ctx); 338 + 339 + ret = s6e63m0_clear_error(ctx); 340 + 341 + if (ret < 0) 342 + s6e63m0_unprepare(panel); 343 + 344 + ctx->prepared = true; 345 + 346 + return ret; 347 + } 348 + 349 + static int s6e63m0_enable(struct drm_panel *panel) 350 + { 351 + struct s6e63m0 *ctx = panel_to_s6e63m0(panel); 352 + 353 + if (ctx->enabled) 354 + return 0; 355 + 356 + s6e63m0_dcs_write_seq_static(ctx, MIPI_DCS_SET_DISPLAY_ON); 357 + 358 + backlight_enable(ctx->bl_dev); 359 + 360 + ctx->enabled = true; 361 + 362 + return 0; 363 + } 364 + 365 + static int s6e63m0_get_modes(struct drm_panel *panel) 366 + { 367 + struct drm_connector *connector = panel->connector; 368 + struct drm_display_mode *mode; 369 + 370 + mode = drm_mode_duplicate(panel->drm, &default_mode); 371 + if (!mode) { 372 + DRM_ERROR("failed to add mode %ux%ux@%u\n", 373 + default_mode.hdisplay, default_mode.vdisplay, 374 + default_mode.vrefresh); 375 + return -ENOMEM; 376 + } 377 + 378 + drm_mode_set_name(mode); 379 + 380 + mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 381 + drm_mode_probed_add(connector, mode); 382 + 383 + return 1; 384 + } 385 + 386 + static const struct drm_panel_funcs s6e63m0_drm_funcs = { 387 + .disable = s6e63m0_disable, 388 + .unprepare = s6e63m0_unprepare, 389 + .prepare = s6e63m0_prepare, 390 + .enable = s6e63m0_enable, 391 + .get_modes = s6e63m0_get_modes, 392 + }; 393 + 394 + static int s6e63m0_set_brightness(struct backlight_device *bd) 395 + { 396 + struct s6e63m0 *ctx = bl_get_data(bd); 397 + 398 + int brightness = bd->props.brightness; 399 + 400 + /* disable and set new gamma */ 401 + s6e63m0_dcs_write(ctx, s6e63m0_gamma_22[brightness], 402 + ARRAY_SIZE(s6e63m0_gamma_22[brightness])); 403 + 404 + /* update gamma table. 
*/ 405 + s6e63m0_dcs_write_seq_static(ctx, MCS_PGAMMACTL, 0x01); 406 + 407 + return s6e63m0_clear_error(ctx); 408 + } 409 + 410 + static const struct backlight_ops s6e63m0_backlight_ops = { 411 + .update_status = s6e63m0_set_brightness, 412 + }; 413 + 414 + static int s6e63m0_backlight_register(struct s6e63m0 *ctx) 415 + { 416 + struct backlight_properties props = { 417 + .type = BACKLIGHT_RAW, 418 + .brightness = MAX_BRIGHTNESS, 419 + .max_brightness = MAX_BRIGHTNESS 420 + }; 421 + struct device *dev = ctx->dev; 422 + int ret = 0; 423 + 424 + ctx->bl_dev = devm_backlight_device_register(dev, "panel", dev, ctx, 425 + &s6e63m0_backlight_ops, 426 + &props); 427 + if (IS_ERR(ctx->bl_dev)) { 428 + ret = PTR_ERR(ctx->bl_dev); 429 + DRM_DEV_ERROR(dev, "error registering backlight device (%d)\n", 430 + ret); 431 + } 432 + 433 + return ret; 434 + } 435 + 436 + static int s6e63m0_probe(struct spi_device *spi) 437 + { 438 + struct device *dev = &spi->dev; 439 + struct s6e63m0 *ctx; 440 + int ret; 441 + 442 + ctx = devm_kzalloc(dev, sizeof(struct s6e63m0), GFP_KERNEL); 443 + if (!ctx) 444 + return -ENOMEM; 445 + 446 + spi_set_drvdata(spi, ctx); 447 + 448 + ctx->dev = dev; 449 + ctx->enabled = false; 450 + ctx->prepared = false; 451 + 452 + ctx->supplies[0].supply = "vdd3"; 453 + ctx->supplies[1].supply = "vci"; 454 + ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ctx->supplies), 455 + ctx->supplies); 456 + if (ret < 0) { 457 + DRM_DEV_ERROR(dev, "failed to get regulators: %d\n", ret); 458 + return ret; 459 + } 460 + 461 + ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 462 + if (IS_ERR(ctx->reset_gpio)) { 463 + DRM_DEV_ERROR(dev, "cannot get reset-gpios %ld\n", 464 + PTR_ERR(ctx->reset_gpio)); 465 + return PTR_ERR(ctx->reset_gpio); 466 + } 467 + 468 + spi->bits_per_word = 9; 469 + spi->mode = SPI_MODE_3; 470 + ret = spi_setup(spi); 471 + if (ret < 0) { 472 + DRM_DEV_ERROR(dev, "spi setup failed.\n"); 473 + return ret; 474 + } 475 + 476 + drm_panel_init(&ctx->panel); 477 + ctx->panel.dev = dev; 478 + ctx->panel.funcs = &s6e63m0_drm_funcs; 479 + 480 + ret = s6e63m0_backlight_register(ctx); 481 + if (ret < 0) 482 + return ret; 483 + 484 + return drm_panel_add(&ctx->panel); 485 + } 486 + 487 + static int s6e63m0_remove(struct spi_device *spi) 488 + { 489 + struct s6e63m0 *ctx = spi_get_drvdata(spi); 490 + 491 + drm_panel_remove(&ctx->panel); 492 + 493 + return 0; 494 + } 495 + 496 + static const struct of_device_id s6e63m0_of_match[] = { 497 + { .compatible = "samsung,s6e63m0" }, 498 + { /* sentinel */ } 499 + }; 500 + MODULE_DEVICE_TABLE(of, s6e63m0_of_match); 501 + 502 + static struct spi_driver s6e63m0_driver = { 503 + .probe = s6e63m0_probe, 504 + .remove = s6e63m0_remove, 505 + .driver = { 506 + .name = "panel-samsung-s6e63m0", 507 + .of_match_table = s6e63m0_of_match, 508 + }, 509 + }; 510 + module_spi_driver(s6e63m0_driver); 511 + 512 + MODULE_AUTHOR("Paweł Chmiel <pawel.mikolaj.chmiel@gmail.com>"); 513 + MODULE_DESCRIPTION("s6e63m0 LCD Driver"); 514 + MODULE_LICENSE("GPL v2");
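[Editor's sketch] The command sequences above go out over 3-wire SPI using 9-bit words (probe sets spi->bits_per_word = 9), where the ninth bit distinguishes a DCS command byte from its parameters. A minimal sketch of that framing, with hypothetical names (DATA_BIT, panel_dcs_write) rather than the driver's own:

        #include <linux/bits.h>
        #include <linux/spi/spi.h>

        #define DATA_BIT        BIT(8)  /* set on parameter words, clear on the command */

        static int panel_spi_write_word(struct spi_device *spi, u16 word)
        {
                /* with bits_per_word = 9, each word occupies two bytes on the wire */
                return spi_write(spi, &word, 2);
        }

        /* first byte of @data is the DCS command, the rest are parameters; len >= 1 */
        static int panel_dcs_write(struct spi_device *spi, const u8 *data, size_t len)
        {
                int ret;

                ret = panel_spi_write_word(spi, *data++);
                while (!ret && --len)
                        ret = panel_spi_write_word(spi, *data++ | DATA_BIT);

                return ret;
        }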
+211 -1
drivers/gpu/drm/panel/panel-simple.c
··· 1096 1096 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1097 1097 }; 1098 1098 1099 + static const struct drm_display_mode edt_et035012dm6_mode = { 1100 + .clock = 6500, 1101 + .hdisplay = 320, 1102 + .hsync_start = 320 + 20, 1103 + .hsync_end = 320 + 20 + 30, 1104 + .htotal = 320 + 20 + 68, 1105 + .vdisplay = 240, 1106 + .vsync_start = 240 + 4, 1107 + .vsync_end = 240 + 4 + 4, 1108 + .vtotal = 240 + 4 + 4 + 14, 1109 + .vrefresh = 60, 1110 + .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC, 1111 + }; 1112 + 1113 + static const struct panel_desc edt_et035012dm6 = { 1114 + .modes = &edt_et035012dm6_mode, 1115 + .num_modes = 1, 1116 + .bpc = 8, 1117 + .size = { 1118 + .width = 70, 1119 + .height = 52, 1120 + }, 1121 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 1122 + .bus_flags = DRM_BUS_FLAG_DE_LOW | DRM_BUS_FLAG_PIXDATA_NEGEDGE, 1123 + }; 1124 + 1125 + static const struct drm_display_mode edt_etm0430g0dh6_mode = { 1126 + .clock = 9000, 1127 + .hdisplay = 480, 1128 + .hsync_start = 480 + 2, 1129 + .hsync_end = 480 + 2 + 41, 1130 + .htotal = 480 + 2 + 41 + 2, 1131 + .vdisplay = 272, 1132 + .vsync_start = 272 + 2, 1133 + .vsync_end = 272 + 2 + 10, 1134 + .vtotal = 272 + 2 + 10 + 2, 1135 + .vrefresh = 60, 1136 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 1137 + }; 1138 + 1139 + static const struct panel_desc edt_etm0430g0dh6 = { 1140 + .modes = &edt_etm0430g0dh6_mode, 1141 + .num_modes = 1, 1142 + .bpc = 6, 1143 + .size = { 1144 + .width = 95, 1145 + .height = 54, 1146 + }, 1147 + }; 1148 + 1099 1149 static const struct drm_display_mode edt_et057090dhu_mode = { 1100 1150 .clock = 25175, 1101 1151 .hdisplay = 640, ··· 1210 1160 .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE, 1211 1161 }; 1212 1162 1163 + static const struct display_timing evervision_vgg804821_timing = { 1164 + .pixelclock = { 27600000, 33300000, 50000000 }, 1165 + .hactive = { 800, 800, 800 }, 1166 + .hfront_porch = { 40, 66, 70 }, 1167 + .hback_porch = { 40, 67, 70 }, 1168 + .hsync_len = { 40, 67, 70 }, 1169 + .vactive = { 480, 480, 480 }, 1170 + .vfront_porch = { 6, 10, 10 }, 1171 + .vback_porch = { 7, 11, 11 }, 1172 + .vsync_len = { 7, 11, 11 }, 1173 + .flags = DISPLAY_FLAGS_HSYNC_HIGH | DISPLAY_FLAGS_VSYNC_HIGH | 1174 + DISPLAY_FLAGS_DE_HIGH | DISPLAY_FLAGS_PIXDATA_NEGEDGE | 1175 + DISPLAY_FLAGS_SYNC_NEGEDGE, 1176 + }; 1177 + 1178 + static const struct panel_desc evervision_vgg804821 = { 1179 + .timings = &evervision_vgg804821_timing, 1180 + .num_timings = 1, 1181 + .bpc = 8, 1182 + .size = { 1183 + .width = 108, 1184 + .height = 64, 1185 + }, 1186 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 1187 + .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_NEGEDGE, 1188 + }; 1189 + 1213 1190 static const struct drm_display_mode foxlink_fl500wvr00_a0t_mode = { 1214 1191 .clock = 32260, 1215 1192 .hdisplay = 800, ··· 1259 1182 .height = 65, 1260 1183 }, 1261 1184 .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 1185 + }; 1186 + 1187 + static const struct drm_display_mode friendlyarm_hd702e_mode = { 1188 + .clock = 67185, 1189 + .hdisplay = 800, 1190 + .hsync_start = 800 + 20, 1191 + .hsync_end = 800 + 20 + 24, 1192 + .htotal = 800 + 20 + 24 + 20, 1193 + .vdisplay = 1280, 1194 + .vsync_start = 1280 + 4, 1195 + .vsync_end = 1280 + 4 + 8, 1196 + .vtotal = 1280 + 4 + 8 + 4, 1197 + .vrefresh = 60, 1198 + .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC, 1199 + }; 1200 + 1201 + static const struct panel_desc friendlyarm_hd702e = { 1202 + .modes = &friendlyarm_hd702e_mode, 1203 + .num_modes = 1, 
1204 + .size = { 1205 + .width = 94, 1206 + .height = 151, 1207 + }, 1262 1208 }; 1263 1209 1264 1210 static const struct drm_display_mode giantplus_gpg482739qs5_mode = { ··· 2455 2355 }, 2456 2356 }; 2457 2357 2358 + static const struct drm_display_mode tfc_s9700rtwv43tr_01b_mode = { 2359 + .clock = 30000, 2360 + .hdisplay = 800, 2361 + .hsync_start = 800 + 39, 2362 + .hsync_end = 800 + 39 + 47, 2363 + .htotal = 800 + 39 + 47 + 39, 2364 + .vdisplay = 480, 2365 + .vsync_start = 480 + 13, 2366 + .vsync_end = 480 + 13 + 2, 2367 + .vtotal = 480 + 13 + 2 + 29, 2368 + .vrefresh = 62, 2369 + }; 2370 + 2371 + static const struct panel_desc tfc_s9700rtwv43tr_01b = { 2372 + .modes = &tfc_s9700rtwv43tr_01b_mode, 2373 + .num_modes = 1, 2374 + .bpc = 8, 2375 + .size = { 2376 + .width = 155, 2377 + .height = 90, 2378 + }, 2379 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2380 + .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_POSEDGE, 2381 + }; 2382 + 2458 2383 static const struct display_timing tianma_tm070jdhg30_timing = { 2459 2384 .pixelclock = { 62600000, 68200000, 78100000 }, 2460 2385 .hactive = { 1280, 1280, 1280 }, ··· 2633 2508 .bus_format = MEDIA_BUS_FMT_RGB666_1X18, 2634 2509 }; 2635 2510 2511 + static const struct drm_display_mode vl050_8048nt_c01_mode = { 2512 + .clock = 33333, 2513 + .hdisplay = 800, 2514 + .hsync_start = 800 + 210, 2515 + .hsync_end = 800 + 210 + 20, 2516 + .htotal = 800 + 210 + 20 + 46, 2517 + .vdisplay = 480, 2518 + .vsync_start = 480 + 22, 2519 + .vsync_end = 480 + 22 + 10, 2520 + .vtotal = 480 + 22 + 10 + 23, 2521 + .vrefresh = 60, 2522 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 2523 + }; 2524 + 2525 + static const struct panel_desc vl050_8048nt_c01 = { 2526 + .modes = &vl050_8048nt_c01_mode, 2527 + .num_modes = 1, 2528 + .bpc = 8, 2529 + .size = { 2530 + .width = 120, 2531 + .height = 76, 2532 + }, 2533 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2534 + .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_POSEDGE, 2535 + }; 2536 + 2636 2537 static const struct drm_display_mode winstar_wf35ltiacd_mode = { 2637 2538 .clock = 6410, 2638 2539 .hdisplay = 320, ··· 2797 2646 .compatible = "dlc,dlc1010gig", 2798 2647 .data = &dlc_dlc1010gig, 2799 2648 }, { 2649 + .compatible = "edt,et035012dm6", 2650 + .data = &edt_et035012dm6, 2651 + }, { 2652 + .compatible = "edt,etm0430g0dh6", 2653 + .data = &edt_etm0430g0dh6, 2654 + }, { 2800 2655 .compatible = "edt,et057090dhu", 2801 2656 .data = &edt_et057090dhu, 2802 2657 }, { ··· 2818 2661 .compatible = "edt,etm0700g0edh6", 2819 2662 .data = &edt_etm0700g0bdh6, 2820 2663 }, { 2664 + .compatible = "evervision,vgg804821", 2665 + .data = &evervision_vgg804821, 2666 + }, { 2821 2667 .compatible = "foxlink,fl500wvr00-a0t", 2822 2668 .data = &foxlink_fl500wvr00_a0t, 2669 + }, { 2670 + .compatible = "friendlyarm,hd702e", 2671 + .data = &friendlyarm_hd702e, 2823 2672 }, { 2824 2673 .compatible = "giantplus,gpg482739qs5", 2825 2674 .data = &giantplus_gpg482739qs5 ··· 2965 2802 .compatible = "starry,kr122ea0sra", 2966 2803 .data = &starry_kr122ea0sra, 2967 2804 }, { 2805 + .compatible = "tfc,s9700rtwv43tr-01b", 2806 + .data = &tfc_s9700rtwv43tr_01b, 2807 + }, { 2968 2808 .compatible = "tianma,tm070jdhg30", 2969 2809 .data = &tianma_tm070jdhg30, 2970 2810 }, { ··· 3000 2834 }, { 3001 2835 .compatible = "urt,umsh-8596md-20t", 3002 2836 .data = &urt_umsh_8596md_parallel, 2837 + }, { 2838 + .compatible = "vxt,vl050-8048nt-c01", 2839 + .data = &vl050_8048nt_c01, 3003 2840 }, { 3004 2841 .compatible = "winstar,wf35ltiacd", 
3005 2842 .data = &winstar_wf35ltiacd, ··· 3222 3053 .lanes = 4, 3223 3054 }; 3224 3055 3056 + static const struct drm_display_mode osd101t2045_53ts_mode = { 3057 + .clock = 154500, 3058 + .hdisplay = 1920, 3059 + .hsync_start = 1920 + 112, 3060 + .hsync_end = 1920 + 112 + 16, 3061 + .htotal = 1920 + 112 + 16 + 32, 3062 + .vdisplay = 1200, 3063 + .vsync_start = 1200 + 16, 3064 + .vsync_end = 1200 + 16 + 2, 3065 + .vtotal = 1200 + 16 + 2 + 16, 3066 + .vrefresh = 60, 3067 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 3068 + }; 3069 + 3070 + static const struct panel_desc_dsi osd101t2045_53ts = { 3071 + .desc = { 3072 + .modes = &osd101t2045_53ts_mode, 3073 + .num_modes = 1, 3074 + .bpc = 8, 3075 + .size = { 3076 + .width = 217, 3077 + .height = 136, 3078 + }, 3079 + }, 3080 + .flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 3081 + MIPI_DSI_MODE_VIDEO_SYNC_PULSE | 3082 + MIPI_DSI_MODE_EOT_PACKET, 3083 + .format = MIPI_DSI_FMT_RGB888, 3084 + .lanes = 4, 3085 + }; 3086 + 3225 3087 static const struct of_device_id dsi_of_match[] = { 3226 3088 { 3227 3089 .compatible = "auo,b080uan01", ··· 3272 3072 }, { 3273 3073 .compatible = "lg,acx467akm-7", 3274 3074 .data = &lg_acx467akm_7 3075 + }, { 3076 + .compatible = "osddisplays,osd101t2045-53ts", 3077 + .data = &osd101t2045_53ts 3275 3078 }, { 3276 3079 /* sentinel */ 3277 3080 } ··· 3301 3098 dsi->format = desc->format; 3302 3099 dsi->lanes = desc->lanes; 3303 3100 3304 - return mipi_dsi_attach(dsi); 3101 + err = mipi_dsi_attach(dsi); 3102 + if (err) { 3103 + struct panel_simple *panel = dev_get_drvdata(&dsi->dev); 3104 + 3105 + drm_panel_remove(&panel->base); 3106 + } 3107 + 3108 + return err; 3305 3109 } 3306 3110 3307 3111 static int panel_simple_dsi_remove(struct mipi_dsi_device *dsi)
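[Editor's note] A quick arithmetic check ties each new mode entry together: the declared refresh rate should equal the dot clock divided by the total raster, vrefresh ≈ clock * 1000 / (htotal * vtotal). For edt_etm0430g0dh6: htotal = 480 + 2 + 41 + 2 = 525 and vtotal = 272 + 2 + 10 + 2 = 286, so 9,000,000 / (525 * 286) ≈ 59.9 Hz, matching .vrefresh = 60. The same check on tfc_s9700rtwv43tr_01b gives 30,000,000 / (925 * 524) ≈ 61.9 Hz, consistent with its slightly unusual .vrefresh = 62.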
+22
drivers/gpu/drm/panfrost/panfrost_device.c
··· 55 55 if (err) 56 56 return err; 57 57 58 + pfdev->bus_clock = devm_clk_get_optional(pfdev->dev, "bus"); 59 + if (IS_ERR(pfdev->bus_clock)) { 60 + dev_err(pfdev->dev, "get bus_clock failed %ld\n", 61 + PTR_ERR(pfdev->bus_clock)); 62 + return PTR_ERR(pfdev->bus_clock); 63 + } 64 + 65 + if (pfdev->bus_clock) { 66 + rate = clk_get_rate(pfdev->bus_clock); 67 + dev_info(pfdev->dev, "bus_clock rate = %lu\n", rate); 68 + 69 + err = clk_prepare_enable(pfdev->bus_clock); 70 + if (err) 71 + goto disable_clock; 72 + } 73 + 58 74 return 0; 75 + 76 + disable_clock: 77 + clk_disable_unprepare(pfdev->clock); 78 + 79 + return err; 59 80 } 60 81 61 82 static void panfrost_clk_fini(struct panfrost_device *pfdev) 62 83 { 84 + clk_disable_unprepare(pfdev->bus_clock); 63 85 clk_disable_unprepare(pfdev->clock); 64 86 } 65 87
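[Editor's sketch] The bus clock is fetched with devm_clk_get_optional(), which returns NULL rather than an error pointer when the clock is simply absent from the device tree; only real failures (such as -EPROBE_DEFER) take the error path. That is also why panfrost_clk_fini() can call clk_disable_unprepare() unconditionally: the clk API treats a NULL clock as a no-op. A condensed sketch of the pattern (function name hypothetical):

        #include <linux/clk.h>

        static int example_get_bus_clock(struct device *dev, struct clk **out)
        {
                struct clk *clk = devm_clk_get_optional(dev, "bus");

                if (IS_ERR(clk))
                        return PTR_ERR(clk);            /* a real error, e.g. -EPROBE_DEFER */

                *out = clk;                             /* NULL just means "no bus clock" */
                return clk_prepare_enable(clk);         /* no-op, returns 0, for a NULL clk */
        }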
+1
drivers/gpu/drm/panfrost/panfrost_device.h
··· 66 66 67 67 void __iomem *iomem; 68 68 struct clk *clock; 69 + struct clk *bus_clock; 69 70 struct regulator *regulator; 70 71 struct reset_control *rstc; 71 72
+1 -1
drivers/gpu/drm/panfrost/panfrost_job.c
··· 387 387 mutex_lock(&pfdev->reset_lock); 388 388 389 389 for (i = 0; i < NUM_JOB_SLOTS; i++) 390 - drm_sched_stop(&pfdev->js->queue[i].sched); 390 + drm_sched_stop(&pfdev->js->queue[i].sched, sched_job); 391 391 392 392 if (sched_job) 393 393 drm_sched_increase_karma(sched_job);
+3 -1
drivers/gpu/drm/radeon/radeon_fb.c
··· 125 125 struct drm_mode_fb_cmd2 *mode_cmd, 126 126 struct drm_gem_object **gobj_p) 127 127 { 128 + const struct drm_format_info *info; 128 129 struct radeon_device *rdev = rfbdev->rdev; 129 130 struct drm_gem_object *gobj = NULL; 130 131 struct radeon_bo *rbo = NULL; ··· 136 135 int height = mode_cmd->height; 137 136 u32 cpp; 138 137 139 - cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0); 138 + info = drm_get_format_info(rdev->ddev, mode_cmd); 139 + cpp = info->cpp[0]; 140 140 141 141 /* need to align pitch with crtc limits */ 142 142 mode_cmd->pitches[0] = radeon_align_pitch(rdev, mode_cmd->width, cpp,
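[Editor's sketch] This is the first of several conversions in this pull away from per-format helper lookups (drm_format_plane_cpp() and friends) toward a single struct drm_format_info fetched once via drm_get_format_info(); the rockchip and tegra framebuffer paths below follow the same pattern and also use the chroma-subsampling fields. A sketch of how the info structure answers the usual layout questions (helper name hypothetical):

        #include <drm/drm_fourcc.h>

        static u32 plane_min_size(const struct drm_format_info *info,
                                  const struct drm_mode_fb_cmd2 *cmd,
                                  int plane)
        {
                /* planes beyond the first are chroma-subsampled by hsub x vsub */
                u32 width  = cmd->width  / (plane ? info->hsub : 1);
                u32 height = cmd->height / (plane ? info->vsub : 1);

                return (height - 1) * cmd->pitches[plane] +
                       width * info->cpp[plane] + cmd->offsets[plane];
        }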
+6 -11
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
··· 74 74 rockchip_user_fb_create(struct drm_device *dev, struct drm_file *file_priv, 75 75 const struct drm_mode_fb_cmd2 *mode_cmd) 76 76 { 77 + const struct drm_format_info *info = drm_get_format_info(dev, 78 + mode_cmd); 77 79 struct drm_framebuffer *fb; 78 80 struct drm_gem_object *objs[ROCKCHIP_MAX_FB_BUFFER]; 79 81 struct drm_gem_object *obj; 80 - unsigned int hsub; 81 - unsigned int vsub; 82 - int num_planes; 82 + int num_planes = min_t(int, info->num_planes, ROCKCHIP_MAX_FB_BUFFER); 83 83 int ret; 84 84 int i; 85 85 86 - hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format); 87 - vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format); 88 - num_planes = min(drm_format_num_planes(mode_cmd->pixel_format), 89 - ROCKCHIP_MAX_FB_BUFFER); 90 - 91 86 for (i = 0; i < num_planes; i++) { 92 - unsigned int width = mode_cmd->width / (i ? hsub : 1); 93 - unsigned int height = mode_cmd->height / (i ? vsub : 1); 87 + unsigned int width = mode_cmd->width / (i ? info->hsub : 1); 88 + unsigned int height = mode_cmd->height / (i ? info->vsub : 1); 94 89 unsigned int min_size; 95 90 96 91 obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[i]); ··· 98 103 99 104 min_size = (height - 1) * mode_cmd->pitches[i] + 100 105 mode_cmd->offsets[i] + 101 - width * drm_format_plane_cpp(mode_cmd->pixel_format, i); 106 + width * info->cpp[i]; 102 107 103 108 if (obj->size < min_size) { 104 109 drm_gem_object_put_unlocked(obj);
+17 -22
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 315 315 316 316 static void scl_vop_cal_scl_fac(struct vop *vop, const struct vop_win_data *win, 317 317 uint32_t src_w, uint32_t src_h, uint32_t dst_w, 318 - uint32_t dst_h, uint32_t pixel_format) 318 + uint32_t dst_h, const struct drm_format_info *info) 319 319 { 320 320 uint16_t yrgb_hor_scl_mode, yrgb_ver_scl_mode; 321 321 uint16_t cbcr_hor_scl_mode = SCALE_NONE; 322 322 uint16_t cbcr_ver_scl_mode = SCALE_NONE; 323 - int hsub = drm_format_horz_chroma_subsampling(pixel_format); 324 - int vsub = drm_format_vert_chroma_subsampling(pixel_format); 325 - const struct drm_format_info *info; 326 323 bool is_yuv = false; 327 - uint16_t cbcr_src_w = src_w / hsub; 328 - uint16_t cbcr_src_h = src_h / vsub; 324 + uint16_t cbcr_src_w = src_w / info->hsub; 325 + uint16_t cbcr_src_h = src_h / info->vsub; 329 326 uint16_t vsu_mode; 330 327 uint16_t lb_mode; 331 328 uint32_t val; 332 329 int vskiplines; 333 - 334 - info = drm_format_info(pixel_format); 335 330 336 331 if (info->is_yuv) 337 332 is_yuv = true; ··· 826 831 (state->rotation & DRM_MODE_REFLECT_X) ? 1 : 0); 827 832 828 833 if (is_yuv) { 829 - int hsub = drm_format_horz_chroma_subsampling(fb->format->format); 830 - int vsub = drm_format_vert_chroma_subsampling(fb->format->format); 834 + int hsub = fb->format->hsub; 835 + int vsub = fb->format->vsub; 831 836 int bpp = fb->format->cpp[1]; 832 837 833 838 uv_obj = fb->obj[1]; ··· 851 856 if (win->phy->scl) 852 857 scl_vop_cal_scl_fac(vop, win, actual_w, actual_h, 853 858 drm_rect_width(dest), drm_rect_height(dest), 854 - fb->format->format); 859 + fb->format); 855 860 856 861 VOP_WIN_SET(vop, win, act_info, act_info); 857 862 VOP_WIN_SET(vop, win, dsp_info, dsp_info); ··· 1217 1222 drm_crtc_cleanup(crtc); 1218 1223 } 1219 1224 1220 - static void vop_crtc_reset(struct drm_crtc *crtc) 1221 - { 1222 - if (crtc->state) 1223 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 1224 - kfree(crtc->state); 1225 - 1226 - crtc->state = kzalloc(sizeof(struct rockchip_crtc_state), GFP_KERNEL); 1227 - if (crtc->state) 1228 - crtc->state->crtc = crtc; 1229 - } 1230 - 1231 1225 static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc) 1232 1226 { 1233 1227 struct rockchip_crtc_state *rockchip_state; ··· 1236 1252 1237 1253 __drm_atomic_helper_crtc_destroy_state(&s->base); 1238 1254 kfree(s); 1255 + } 1256 + 1257 + static void vop_crtc_reset(struct drm_crtc *crtc) 1258 + { 1259 + struct rockchip_crtc_state *crtc_state = 1260 + kzalloc(sizeof(*crtc_state), GFP_KERNEL); 1261 + 1262 + if (crtc->state) 1263 + vop_crtc_destroy_state(crtc, crtc->state); 1264 + 1265 + __drm_atomic_helper_crtc_reset(crtc, &crtc_state->base); 1239 1266 } 1240 1267 1241 1268 #ifdef CONFIG_DRM_ANALOGIX_DP
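[Editor's sketch] The vop conversion shows the new __drm_atomic_helper_crtc_reset() flow for drivers that subclass drm_crtc_state: free any existing state through the driver's own destroy hook, then hand the helper a freshly zeroed subclass. A minimal sketch (struct and hook names hypothetical; it assumes, as the conversions in this pull do, that the helper tolerates a NULL state when allocation fails):

        #include <drm/drm_atomic_helper.h>

        struct my_crtc_state {
                struct drm_crtc_state base;
                u32 private_word;               /* driver-specific member */
        };

        static void my_crtc_destroy_state(struct drm_crtc *crtc,
                                          struct drm_crtc_state *state)
        {
                __drm_atomic_helper_crtc_destroy_state(state);
                kfree(container_of(state, struct my_crtc_state, base));
        }

        static void my_crtc_reset(struct drm_crtc *crtc)
        {
                struct my_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

                if (crtc->state)
                        my_crtc_destroy_state(crtc, crtc->state);

                __drm_atomic_helper_crtc_reset(crtc, state ? &state->base : NULL);
        }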
+108 -66
drivers/gpu/drm/scheduler/sched_main.c
··· 265 265 } 266 266 EXPORT_SYMBOL(drm_sched_resume_timeout); 267 267 268 - /* job_finish is called after hw fence signaled 269 - */ 270 - static void drm_sched_job_finish(struct work_struct *work) 271 - { 272 - struct drm_sched_job *s_job = container_of(work, struct drm_sched_job, 273 - finish_work); 274 - struct drm_gpu_scheduler *sched = s_job->sched; 275 - unsigned long flags; 276 - 277 - /* 278 - * Canceling the timeout without removing our job from the ring mirror 279 - * list is safe, as we will only end up in this worker if our jobs 280 - * finished fence has been signaled. So even if some another worker 281 - * manages to find this job as the next job in the list, the fence 282 - * signaled check below will prevent the timeout to be restarted. 283 - */ 284 - cancel_delayed_work_sync(&sched->work_tdr); 285 - 286 - spin_lock_irqsave(&sched->job_list_lock, flags); 287 - /* queue TDR for next job */ 288 - drm_sched_start_timeout(sched); 289 - spin_unlock_irqrestore(&sched->job_list_lock, flags); 290 - 291 - sched->ops->free_job(s_job); 292 - } 293 - 294 268 static void drm_sched_job_begin(struct drm_sched_job *s_job) 295 269 { 296 270 struct drm_gpu_scheduler *sched = s_job->sched; ··· 288 314 289 315 if (job) 290 316 job->sched->ops->timedout_job(job); 317 + 318 + /* 319 + * Guilty job did complete and hence needs to be manually removed 320 + * See drm_sched_stop doc. 321 + */ 322 + if (sched->free_guilty) { 323 + job->sched->ops->free_job(job); 324 + sched->free_guilty = false; 325 + } 291 326 292 327 spin_lock_irqsave(&sched->job_list_lock, flags); 293 328 drm_sched_start_timeout(sched); ··· 353 370 * 354 371 * @sched: scheduler instance 355 372 * 373 + * Stop the scheduler and also removes and frees all completed jobs. 374 + * Note: bad job will not be freed as it might be used later and so it's 375 + * callers responsibility to release it manually if it's not part of the 376 + * mirror list any more. 377 + * 356 378 */ 357 - void drm_sched_stop(struct drm_gpu_scheduler *sched) 379 + void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad) 358 380 { 359 - struct drm_sched_job *s_job; 381 + struct drm_sched_job *s_job, *tmp; 360 382 unsigned long flags; 361 - struct dma_fence *last_fence = NULL; 362 383 363 384 kthread_park(sched->thread); 364 385 365 386 /* 366 - * Verify all the signaled jobs in mirror list are removed from the ring 367 - * by waiting for the latest job to enter the list. This should insure that 368 - * also all the previous jobs that were in flight also already singaled 369 - * and removed from the list. 387 + * Iterate the job list from later to earlier one and either deactive 388 + * their HW callbacks or remove them from mirror list if they already 389 + * signaled. 390 + * This iteration is thread safe as sched thread is stopped. 370 391 */ 371 - spin_lock_irqsave(&sched->job_list_lock, flags); 372 - list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) { 392 + list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) { 373 393 if (s_job->s_fence->parent && 374 394 dma_fence_remove_callback(s_job->s_fence->parent, 375 395 &s_job->cb)) { 376 - dma_fence_put(s_job->s_fence->parent); 377 - s_job->s_fence->parent = NULL; 378 396 atomic_dec(&sched->hw_rq_count); 379 397 } else { 380 - last_fence = dma_fence_get(&s_job->s_fence->finished); 381 - break; 398 + /* 399 + * remove job from ring_mirror_list. 
400 + * Locking here is for concurrent resume timeout 401 + */ 402 + spin_lock_irqsave(&sched->job_list_lock, flags); 403 + list_del_init(&s_job->node); 404 + spin_unlock_irqrestore(&sched->job_list_lock, flags); 405 + 406 + /* 407 + * Wait for job's HW fence callback to finish using s_job 408 + * before releasing it. 409 + * 410 + * Job is still alive so fence refcount at least 1 411 + */ 412 + dma_fence_wait(&s_job->s_fence->finished, false); 413 + 414 + /* 415 + * We must keep bad job alive for later use during 416 + * recovery by some of the drivers but leave a hint 417 + * that the guilty job must be released. 418 + */ 419 + if (bad != s_job) 420 + sched->ops->free_job(s_job); 421 + else 422 + sched->free_guilty = true; 382 423 } 383 424 } 384 - spin_unlock_irqrestore(&sched->job_list_lock, flags); 385 425 386 - if (last_fence) { 387 - dma_fence_wait(last_fence, false); 388 - dma_fence_put(last_fence); 389 - } 426 + /* 427 + * Stop pending timer in flight as we rearm it in drm_sched_start. This 428 + * avoids the pending timeout work in progress to fire right away after 429 + * this TDR finished and before the newly restarted jobs had a 430 + * chance to complete. 431 + */ 432 + cancel_delayed_work(&sched->work_tdr); 390 433 } 391 434 392 435 EXPORT_SYMBOL(drm_sched_stop); ··· 426 417 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) 427 418 { 428 419 struct drm_sched_job *s_job, *tmp; 420 + unsigned long flags; 429 421 int r; 430 - 431 - if (!full_recovery) 432 - goto unpark; 433 422 434 423 /* 435 424 * Locking the list is not required here as the sched thread is parked 436 - * so no new jobs are being pushed in to HW and in drm_sched_stop we 437 - * flushed all the jobs who were still in mirror list but who already 438 - * signaled and removed them self from the list. Also concurrent 425 + * so no new jobs are being inserted or removed. Also concurrent 439 426 * GPU recovers can't run in parallel. 440 427 */ 441 428 list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) { 442 429 struct dma_fence *fence = s_job->s_fence->parent; 430 + 431 + atomic_inc(&sched->hw_rq_count); 432 + 433 + if (!full_recovery) 434 + continue; 443 435 444 436 if (fence) { 445 437 r = dma_fence_add_callback(fence, &s_job->cb, ··· 454 444 drm_sched_process_job(NULL, &s_job->cb); 455 445 } 456 446 457 - drm_sched_start_timeout(sched); 447 + if (full_recovery) { 448 + spin_lock_irqsave(&sched->job_list_lock, flags); 449 + drm_sched_start_timeout(sched); 450 + spin_unlock_irqrestore(&sched->job_list_lock, flags); 451 + } 458 452 459 - unpark: 460 453 kthread_unpark(sched->thread); 461 454 } 462 455 EXPORT_SYMBOL(drm_sched_start); ··· 476 463 uint64_t guilty_context; 477 464 bool found_guilty = false; 478 465 479 - /*TODO DO we need spinlock here ? 
*/ 480 466 list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) { 481 467 struct drm_sched_fence *s_fence = s_job->s_fence; 482 468 ··· 487 475 if (found_guilty && s_job->s_fence->scheduled.context == guilty_context) 488 476 dma_fence_set_error(&s_fence->finished, -ECANCELED); 489 477 478 + dma_fence_put(s_job->s_fence->parent); 490 479 s_job->s_fence->parent = sched->ops->run_job(s_job); 491 - atomic_inc(&sched->hw_rq_count); 492 480 } 493 481 } 494 482 EXPORT_SYMBOL(drm_sched_resubmit_jobs); ··· 525 513 return -ENOMEM; 526 514 job->id = atomic64_inc_return(&sched->job_id_count); 527 515 528 - INIT_WORK(&job->finish_work, drm_sched_job_finish); 529 516 INIT_LIST_HEAD(&job->node); 530 517 531 518 return 0; ··· 607 596 struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb); 608 597 struct drm_sched_fence *s_fence = s_job->s_fence; 609 598 struct drm_gpu_scheduler *sched = s_fence->sched; 610 - unsigned long flags; 611 - 612 - cancel_delayed_work(&sched->work_tdr); 613 599 614 600 atomic_dec(&sched->hw_rq_count); 615 601 atomic_dec(&sched->num_jobs); 616 602 617 - spin_lock_irqsave(&sched->job_list_lock, flags); 618 - /* remove job from ring_mirror_list */ 619 - list_del_init(&s_job->node); 620 - spin_unlock_irqrestore(&sched->job_list_lock, flags); 603 + trace_drm_sched_process_job(s_fence); 621 604 622 605 drm_sched_fence_finished(s_fence); 623 - 624 - trace_drm_sched_process_job(s_fence); 625 606 wake_up_interruptible(&sched->wake_up_worker); 607 + } 626 608 627 - schedule_work(&s_job->finish_work); 609 + /** 610 + * drm_sched_cleanup_jobs - destroy finished jobs 611 + * 612 + * @sched: scheduler instance 613 + * 614 + * Remove all finished jobs from the mirror list and destroy them. 615 + */ 616 + static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched) 617 + { 618 + unsigned long flags; 619 + 620 + /* Don't destroy jobs while the timeout worker is running */ 621 + if (sched->timeout != MAX_SCHEDULE_TIMEOUT && 622 + !cancel_delayed_work(&sched->work_tdr)) 623 + return; 624 + 625 + 626 + while (!list_empty(&sched->ring_mirror_list)) { 627 + struct drm_sched_job *job; 628 + 629 + job = list_first_entry(&sched->ring_mirror_list, 630 + struct drm_sched_job, node); 631 + if (!dma_fence_is_signaled(&job->s_fence->finished)) 632 + break; 633 + 634 + spin_lock_irqsave(&sched->job_list_lock, flags); 635 + /* remove job from ring_mirror_list */ 636 + list_del_init(&job->node); 637 + spin_unlock_irqrestore(&sched->job_list_lock, flags); 638 + 639 + sched->ops->free_job(job); 640 + } 641 + 642 + /* queue timeout for next job */ 643 + spin_lock_irqsave(&sched->job_list_lock, flags); 644 + drm_sched_start_timeout(sched); 645 + spin_unlock_irqrestore(&sched->job_list_lock, flags); 646 + 628 647 } 629 648 630 649 /** ··· 696 655 struct dma_fence *fence; 697 656 698 657 wait_event_interruptible(sched->wake_up_worker, 658 + (drm_sched_cleanup_jobs(sched), 699 659 (!drm_sched_blocked(sched) && 700 660 (entity = drm_sched_select_entity(sched))) || 701 - kthread_should_stop()); 661 + kthread_should_stop())); 702 662 703 663 if (!entity) 704 664 continue;
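[Editor's sketch] With drm_sched_job_finish() gone, completed jobs are now reaped from the scheduler thread itself via drm_sched_cleanup_jobs(), and recovery follows a fixed stop / karma / reset / resubmit / start sequence; the one-line panfrost_job.c change above is just the new drm_sched_stop() signature. A hedged sketch of what a driver timeout handler looks like after this rework (driver names hypothetical; panfrost and v3d are the in-tree examples):

        #include <drm/gpu_scheduler.h>

        static void my_timedout_job(struct drm_sched_job *bad)
        {
                struct drm_gpu_scheduler *sched = bad->sched;

                /* park the scheduler thread; 'bad' stays alive even if it
                 * already signaled (see the drm_sched_stop() comment above) */
                drm_sched_stop(sched, bad);

                /* flag the offending context so repeat offenders get cancelled */
                drm_sched_increase_karma(bad);

                /* ... driver-specific hardware reset goes here ... */

                /* re-run everything still on the mirror list, then unpark
                 * and rearm the timeout */
                drm_sched_resubmit_jobs(sched);
                drm_sched_start(sched, true);
        }

Drivers with several queues stop all of their schedulers before touching the hardware and restart them all afterwards, since a reset affects every queue.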
+49 -11
drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/iopoll.h> 11 11 #include <linux/module.h> 12 + #include <linux/regulator/consumer.h> 12 13 #include <drm/drmP.h> 13 14 #include <drm/drm_mipi_dsi.h> 14 15 #include <drm/bridge/dw_mipi_dsi.h> ··· 77 76 u32 hw_version; 78 77 int lane_min_kbps; 79 78 int lane_max_kbps; 79 + struct regulator *vdd_supply; 80 80 }; 81 81 82 82 static inline void dsi_write(struct dw_mipi_dsi_stm *dsi, u32 reg, u32 val) ··· 316 314 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 317 315 dsi->base = devm_ioremap_resource(dev, res); 318 316 if (IS_ERR(dsi->base)) { 319 - DRM_ERROR("Unable to get dsi registers\n"); 320 - return PTR_ERR(dsi->base); 317 + ret = PTR_ERR(dsi->base); 318 + DRM_ERROR("Unable to get dsi registers %d\n", ret); 319 + return ret; 320 + } 321 + 322 + dsi->vdd_supply = devm_regulator_get(dev, "phy-dsi"); 323 + if (IS_ERR(dsi->vdd_supply)) { 324 + ret = PTR_ERR(dsi->vdd_supply); 325 + if (ret != -EPROBE_DEFER) 326 + DRM_ERROR("Failed to request regulator: %d\n", ret); 327 + return ret; 328 + } 329 + 330 + ret = regulator_enable(dsi->vdd_supply); 331 + if (ret) { 332 + DRM_ERROR("Failed to enable regulator: %d\n", ret); 333 + return ret; 321 334 } 322 335 323 336 dsi->pllref_clk = devm_clk_get(dev, "ref"); 324 337 if (IS_ERR(dsi->pllref_clk)) { 325 338 ret = PTR_ERR(dsi->pllref_clk); 326 - dev_err(dev, "Unable to get pll reference clock: %d\n", ret); 327 - return ret; 339 + DRM_ERROR("Unable to get pll reference clock: %d\n", ret); 340 + goto err_clk_get; 328 341 } 329 342 330 343 ret = clk_prepare_enable(dsi->pllref_clk); 331 344 if (ret) { 332 - dev_err(dev, "%s: Failed to enable pllref_clk\n", __func__); 333 - return ret; 345 + DRM_ERROR("Failed to enable pllref_clk: %d\n", ret); 346 + goto err_clk_get; 334 347 } 335 348 336 349 dw_mipi_dsi_stm_plat_data.base = dsi->base; ··· 355 338 356 339 dsi->dsi = dw_mipi_dsi_probe(pdev, &dw_mipi_dsi_stm_plat_data); 357 340 if (IS_ERR(dsi->dsi)) { 358 - DRM_ERROR("Failed to initialize mipi dsi host\n"); 359 - clk_disable_unprepare(dsi->pllref_clk); 360 - return PTR_ERR(dsi->dsi); 341 + ret = PTR_ERR(dsi->dsi); 342 + DRM_ERROR("Failed to initialize mipi dsi host: %d\n", ret); 343 + goto err_dsi_probe; 361 344 } 362 345 363 346 return 0; 347 + 348 + err_dsi_probe: 349 + clk_disable_unprepare(dsi->pllref_clk); 350 + err_clk_get: 351 + regulator_disable(dsi->vdd_supply); 352 + 353 + return ret; 364 354 } 365 355 366 356 static int dw_mipi_dsi_stm_remove(struct platform_device *pdev) 367 357 { 368 358 struct dw_mipi_dsi_stm *dsi = platform_get_drvdata(pdev); 369 359 370 - clk_disable_unprepare(dsi->pllref_clk); 371 360 dw_mipi_dsi_remove(dsi->dsi); 361 + clk_disable_unprepare(dsi->pllref_clk); 362 + regulator_disable(dsi->vdd_supply); 372 363 373 364 return 0; 374 365 } ··· 388 363 DRM_DEBUG_DRIVER("\n"); 389 364 390 365 clk_disable_unprepare(dsi->pllref_clk); 366 + regulator_disable(dsi->vdd_supply); 391 367 392 368 return 0; 393 369 } ··· 396 370 static int __maybe_unused dw_mipi_dsi_stm_resume(struct device *dev) 397 371 { 398 372 struct dw_mipi_dsi_stm *dsi = dw_mipi_dsi_stm_plat_data.priv_data; 373 + int ret; 399 374 400 375 DRM_DEBUG_DRIVER("\n"); 401 376 402 - clk_prepare_enable(dsi->pllref_clk); 377 + ret = regulator_enable(dsi->vdd_supply); 378 + if (ret) { 379 + DRM_ERROR("Failed to enable regulator: %d\n", ret); 380 + return ret; 381 + } 382 + 383 + ret = clk_prepare_enable(dsi->pllref_clk); 384 + if (ret) { 385 + regulator_disable(dsi->vdd_supply); 386 + DRM_ERROR("Failed to enable 
pllref_clk: %d\n", ret); 387 + return ret; 388 + } 403 389 404 390 return 0; 405 391 }
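[Editor's sketch] The reworked probe is a textbook case of the kernel's reverse-order unwind idiom: each acquisition that can fail jumps to a label that releases everything obtained before it, and remove/suspend tear things down in the same reverse order. A stripped-down sketch of the idiom (names hypothetical):

        #include <linux/clk.h>
        #include <linux/regulator/consumer.h>

        static int example_enable(struct regulator *vdd, struct clk *ref)
        {
                int ret;

                ret = regulator_enable(vdd);
                if (ret)
                        return ret;

                ret = clk_prepare_enable(ref);
                if (ret)
                        goto err_regulator;

                return 0;

        err_regulator:
                regulator_disable(vdd);         /* undo only what succeeded */
                return ret;
        }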
+47 -20
drivers/gpu/drm/stm/ltdc.c
··· 232 232 PF_ARGB4444 /* 0x07 */ 233 233 }; 234 234 235 + static const u64 ltdc_format_modifiers[] = { 236 + DRM_FORMAT_MOD_LINEAR, 237 + DRM_FORMAT_MOD_INVALID 238 + }; 239 + 235 240 static inline u32 reg_read(void __iomem *base, u32 reg) 236 241 { 237 242 return readl_relaxed(base + reg); ··· 431 426 /* Enable IRQ */ 432 427 reg_set(ldev->regs, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE); 433 428 434 - /* Immediately commit the planes */ 435 - reg_set(ldev->regs, LTDC_SRCR, SRCR_IMR); 429 + /* Commit shadow registers = update planes at next vblank */ 430 + reg_set(ldev->regs, LTDC_SRCR, SRCR_VBR); 436 431 437 432 /* Enable LTDC */ 438 433 reg_set(ldev->regs, LTDC_GCR, GCR_LTDCEN); ··· 560 555 if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH) 561 556 val |= GCR_VSPOL; 562 557 563 - if (vm.flags & DISPLAY_FLAGS_DE_HIGH) 558 + if (vm.flags & DISPLAY_FLAGS_DE_LOW) 564 559 val |= GCR_DEPOL; 565 560 566 561 if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE) ··· 784 779 785 780 /* Configures the color frame buffer pitch in bytes & line length */ 786 781 pitch_in_bytes = fb->pitches[0]; 787 - line_length = drm_format_plane_cpp(fb->format->format, 0) * 782 + line_length = fb->format->cpp[0] * 788 783 (x1 - x0 + 1) + (ldev->caps.bus_width >> 3) - 1; 789 784 val = ((pitch_in_bytes << 16) | line_length); 790 785 reg_update_bits(ldev->regs, LTDC_L1CFBLR + lofs, ··· 827 822 828 823 mutex_lock(&ldev->err_lock); 829 824 if (ldev->error_status & ISR_FUIF) { 830 - DRM_DEBUG_DRIVER("Fifo underrun\n"); 825 + DRM_WARN("ltdc fifo underrun: please verify display mode\n"); 831 826 ldev->error_status &= ~ISR_FUIF; 832 827 } 833 828 if (ldev->error_status & ISR_TERRIF) { 834 - DRM_DEBUG_DRIVER("Transfer error\n"); 829 + DRM_WARN("ltdc transfer error\n"); 835 830 ldev->error_status &= ~ISR_TERRIF; 836 831 } 837 832 mutex_unlock(&ldev->err_lock); ··· 869 864 fpsi->counter = 0; 870 865 } 871 866 867 + static bool ltdc_plane_format_mod_supported(struct drm_plane *plane, 868 + u32 format, 869 + u64 modifier) 870 + { 871 + if (modifier == DRM_FORMAT_MOD_LINEAR) 872 + return true; 873 + 874 + return false; 875 + } 876 + 872 877 static const struct drm_plane_funcs ltdc_plane_funcs = { 873 878 .update_plane = drm_atomic_helper_update_plane, 874 879 .disable_plane = drm_atomic_helper_disable_plane, ··· 887 872 .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 888 873 .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 889 874 .atomic_print_state = ltdc_plane_atomic_print_state, 875 + .format_mod_supported = ltdc_plane_format_mod_supported, 890 876 }; 891 877 892 878 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = { ··· 906 890 unsigned int i, nb_fmt = 0; 907 891 u32 formats[NB_PF * 2]; 908 892 u32 drm_fmt, drm_fmt_no_alpha; 893 + const u64 *modifiers = ltdc_format_modifiers; 909 894 int ret; 910 895 911 896 /* Get supported pixel formats */ ··· 935 918 936 919 ret = drm_universal_plane_init(ddev, plane, possible_crtcs, 937 920 &ltdc_plane_funcs, formats, nb_fmt, 938 - NULL, type, NULL); 921 + modifiers, type, NULL); 939 922 if (ret < 0) 940 923 return NULL; 941 924 ··· 1038 1021 struct ltdc_device *ldev = ddev->dev_private; 1039 1022 u32 bus_width_log2, lcr, gc2r; 1040 1023 1041 - /* at least 1 layer must be managed */ 1024 + /* 1025 + * at least 1 layer must be managed & the number of layers 1026 + * must not exceed LTDC_MAX_LAYER 1027 + */ 1042 1028 lcr = reg_read(ldev->regs, LTDC_LCR); 1043 1029 1044 - ldev->caps.nb_layers = max_t(int, lcr, 1); 1030 + ldev->caps.nb_layers = 
clamp((int)lcr, 1, LTDC_MAX_LAYER); 1045 1031 1046 1032 /* set data bus width */ 1047 1033 gc2r = reg_read(ldev->regs, LTDC_GC2R); ··· 1145 1125 1146 1126 ldev->pixel_clk = devm_clk_get(dev, "lcd"); 1147 1127 if (IS_ERR(ldev->pixel_clk)) { 1148 - DRM_ERROR("Unable to get lcd clock\n"); 1149 - return -ENODEV; 1128 + if (PTR_ERR(ldev->pixel_clk) != -EPROBE_DEFER) 1129 + DRM_ERROR("Unable to get lcd clock\n"); 1130 + return PTR_ERR(ldev->pixel_clk); 1150 1131 } 1151 1132 1152 1133 if (clk_prepare_enable(ldev->pixel_clk)) { 1153 1134 DRM_ERROR("Unable to prepare pixel clock\n"); 1154 1135 return -ENODEV; 1136 + } 1137 + 1138 + if (!IS_ERR(rstc)) { 1139 + reset_control_assert(rstc); 1140 + usleep_range(10, 20); 1141 + reset_control_deassert(rstc); 1155 1142 } 1156 1143 1157 1144 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 1169 1142 goto err; 1170 1143 } 1171 1144 1145 + /* Disable interrupts */ 1146 + reg_clear(ldev->regs, LTDC_IER, 1147 + IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE); 1148 + 1172 1149 for (i = 0; i < MAX_IRQ; i++) { 1173 1150 irq = platform_get_irq(pdev, i); 1151 + if (irq == -EPROBE_DEFER) 1152 + goto err; 1153 + 1174 1154 if (irq < 0) 1175 1155 continue; 1176 1156 ··· 1190 1156 } 1191 1157 } 1192 1158 1193 - if (!IS_ERR(rstc)) { 1194 - reset_control_assert(rstc); 1195 - usleep_range(10, 20); 1196 - reset_control_deassert(rstc); 1197 - } 1198 - 1199 - /* Disable interrupts */ 1200 - reg_clear(ldev->regs, LTDC_IER, 1201 - IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE); 1202 1159 1203 1160 ret = ltdc_get_caps(ddev); 1204 1161 if (ret) { ··· 1227 1202 ret = -ENOMEM; 1228 1203 goto err; 1229 1204 } 1205 + 1206 + ddev->mode_config.allow_fb_modifiers = true; 1230 1207 1231 1208 ret = ltdc_crtc_init(ddev, crtc); 1232 1209 if (ret) {
+1 -15
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 53 53 .minor = 0, 54 54 55 55 /* GEM Operations */ 56 + DRM_GEM_CMA_VMAP_DRIVER_OPS, 56 57 .dumb_create = drm_sun4i_gem_dumb_create, 57 - .gem_free_object_unlocked = drm_gem_cma_free_object, 58 - .gem_vm_ops = &drm_gem_cma_vm_ops, 59 - 60 - /* PRIME Operations */ 61 - .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 62 - .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 63 - .gem_prime_import = drm_gem_prime_import, 64 - .gem_prime_export = drm_gem_prime_export, 65 - .gem_prime_get_sg_table = drm_gem_cma_prime_get_sg_table, 66 - .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, 67 - .gem_prime_vmap = drm_gem_cma_prime_vmap, 68 - .gem_prime_vunmap = drm_gem_cma_prime_vunmap, 69 - .gem_prime_mmap = drm_gem_cma_prime_mmap, 70 - 71 - /* Frame Buffer Operations */ 72 58 }; 73 59 74 60 static int sun4i_drv_bind(struct device *dev)
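[Editor's note] DRM_GEM_CMA_VMAP_DRIVER_OPS collapses the GEM and PRIME boilerplate into one designated-initializer macro. Note that the driver keeps its own .dumb_create after the macro: C99 lets a later designated initializer override an earlier one for the same member, so a driver-specific hook placed after the macro wins over whatever default the macro may set. In miniature:

        struct ops {
                int (*first)(void);
                int (*second)(void);
        };

        static int default_first(void)  { return 0; }
        static int default_second(void) { return 0; }
        static int my_second(void)      { return 1; }

        #define DEFAULT_OPS .first = default_first, .second = default_second

        static const struct ops my_ops = {
                DEFAULT_OPS,
                .second = my_second,    /* overrides the macro's .second */
        };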
+1
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
··· 980 980 switch (msg->type) { 981 981 case MIPI_DSI_DCS_SHORT_WRITE: 982 982 case MIPI_DSI_DCS_SHORT_WRITE_PARAM: 983 + case MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM: 983 984 ret = sun6i_dsi_dcs_write_short(dsi, msg); 984 985 break; 985 986
+6 -11
drivers/gpu/drm/tegra/dc.c
··· 26 26 #include <drm/drm_atomic_helper.h> 27 27 #include <drm/drm_plane_helper.h> 28 28 29 + static void tegra_crtc_atomic_destroy_state(struct drm_crtc *crtc, 30 + struct drm_crtc_state *state); 31 + 29 32 static void tegra_dc_stats_reset(struct tegra_dc_stats *stats) 30 33 { 31 34 stats->frames = 0; ··· 1158 1155 1159 1156 static void tegra_crtc_reset(struct drm_crtc *crtc) 1160 1157 { 1161 - struct tegra_dc_state *state; 1158 + struct tegra_dc_state *state = kzalloc(sizeof(*state), GFP_KERNEL); 1162 1159 1163 1160 if (crtc->state) 1164 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 1161 + tegra_crtc_atomic_destroy_state(crtc, crtc->state); 1165 1162 1166 - kfree(crtc->state); 1167 - crtc->state = NULL; 1168 - 1169 - state = kzalloc(sizeof(*state), GFP_KERNEL); 1170 - if (state) { 1171 - crtc->state = &state->base; 1172 - crtc->state->crtc = crtc; 1173 - } 1174 - 1163 + __drm_atomic_helper_crtc_reset(crtc, &state->base); 1175 1164 drm_crtc_vblank_reset(crtc); 1176 1165 } 1177 1166
+6 -8
drivers/gpu/drm/tegra/fb.c
··· 131 131 struct drm_file *file, 132 132 const struct drm_mode_fb_cmd2 *cmd) 133 133 { 134 - unsigned int hsub, vsub, i; 134 + const struct drm_format_info *info = drm_get_format_info(drm, cmd); 135 135 struct tegra_bo *planes[4]; 136 136 struct drm_gem_object *gem; 137 137 struct drm_framebuffer *fb; 138 + unsigned int i; 138 139 int err; 139 140 140 - hsub = drm_format_horz_chroma_subsampling(cmd->pixel_format); 141 - vsub = drm_format_vert_chroma_subsampling(cmd->pixel_format); 142 - 143 - for (i = 0; i < drm_format_num_planes(cmd->pixel_format); i++) { 144 - unsigned int width = cmd->width / (i ? hsub : 1); 145 - unsigned int height = cmd->height / (i ? vsub : 1); 141 + for (i = 0; i < info->num_planes; i++) { 142 + unsigned int width = cmd->width / (i ? info->hsub : 1); 143 + unsigned int height = cmd->height / (i ? info->vsub : 1); 146 144 unsigned int size, bpp; 147 145 148 146 gem = drm_gem_object_lookup(file, cmd->handles[i]); ··· 149 151 goto unreference; 150 152 } 151 153 152 - bpp = drm_format_plane_cpp(cmd->pixel_format, i); 154 + bpp = info->cpp[i]; 153 155 154 156 size = (height - 1) * cmd->pitches[i] + 155 157 width * bpp + cmd->offsets[i];
+32 -3
drivers/gpu/drm/v3d/v3d_debugfs.c
··· 26 26 REGDEF(V3D_HUB_IDENT3), 27 27 REGDEF(V3D_HUB_INT_STS), 28 28 REGDEF(V3D_HUB_INT_MSK_STS), 29 + 30 + REGDEF(V3D_MMU_CTL), 31 + REGDEF(V3D_MMU_VIO_ADDR), 32 + REGDEF(V3D_MMU_VIO_ID), 33 + REGDEF(V3D_MMU_DEBUG_INFO), 29 34 }; 30 35 31 36 static const struct v3d_reg_def v3d_gca_reg_defs[] = { ··· 55 50 REGDEF(V3D_PTB_BPCA), 56 51 REGDEF(V3D_PTB_BPCS), 57 52 58 - REGDEF(V3D_MMU_CTL), 59 - REGDEF(V3D_MMU_VIO_ADDR), 60 - 61 53 REGDEF(V3D_GMP_STATUS), 62 54 REGDEF(V3D_GMP_CFG), 63 55 REGDEF(V3D_GMP_VIO_ADDR), 56 + 57 + REGDEF(V3D_ERR_FDBGO), 58 + REGDEF(V3D_ERR_FDBGB), 59 + REGDEF(V3D_ERR_FDBGS), 60 + REGDEF(V3D_ERR_STAT), 61 + }; 62 + 63 + static const struct v3d_reg_def v3d_csd_reg_defs[] = { 64 + REGDEF(V3D_CSD_STATUS), 65 + REGDEF(V3D_CSD_CURRENT_CFG0), 66 + REGDEF(V3D_CSD_CURRENT_CFG1), 67 + REGDEF(V3D_CSD_CURRENT_CFG2), 68 + REGDEF(V3D_CSD_CURRENT_CFG3), 69 + REGDEF(V3D_CSD_CURRENT_CFG4), 70 + REGDEF(V3D_CSD_CURRENT_CFG5), 71 + REGDEF(V3D_CSD_CURRENT_CFG6), 64 72 }; 65 73 66 74 static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused) ··· 106 88 v3d_core_reg_defs[i].reg, 107 89 V3D_CORE_READ(core, 108 90 v3d_core_reg_defs[i].reg)); 91 + } 92 + 93 + if (v3d_has_csd(v3d)) { 94 + for (i = 0; i < ARRAY_SIZE(v3d_csd_reg_defs); i++) { 95 + seq_printf(m, "core %d %s (0x%04x): 0x%08x\n", 96 + core, 97 + v3d_csd_reg_defs[i].name, 98 + v3d_csd_reg_defs[i].reg, 99 + V3D_CORE_READ(core, 100 + v3d_csd_reg_defs[i].reg)); 101 + } 109 102 } 110 103 } 111 104
+13 -4
drivers/gpu/drm/v3d/v3d_drv.c
··· 7 7 * This driver supports the Broadcom V3D 3.3 and 4.1 OpenGL ES GPUs. 8 8 * For V3D 2.x support, see the VC4 driver. 9 9 * 10 - * Currently only single-core rendering using the binner and renderer, 11 - * along with TFU (texture formatting unit) rendering is supported. 12 - * V3D 4.x's CSD (compute shader dispatch) is not yet supported. 10 + * The V3D GPU includes a tiled render (composed of a bin and render 11 + * pipelines), the TFU (texture formatting unit), and the CSD (compute 12 + * shader dispatch). 13 13 */ 14 14 15 15 #include <linux/clk.h> ··· 120 120 case DRM_V3D_PARAM_SUPPORTS_TFU: 121 121 args->value = 1; 122 122 return 0; 123 + case DRM_V3D_PARAM_SUPPORTS_CSD: 124 + args->value = v3d_has_csd(v3d); 125 + return 0; 123 126 default: 124 127 DRM_DEBUG("Unknown parameter %d\n", args->param); 125 128 return -EINVAL; ··· 182 179 DRM_IOCTL_DEF_DRV(V3D_GET_PARAM, v3d_get_param_ioctl, DRM_RENDER_ALLOW), 183 180 DRM_IOCTL_DEF_DRV(V3D_GET_BO_OFFSET, v3d_get_bo_offset_ioctl, DRM_RENDER_ALLOW), 184 181 DRM_IOCTL_DEF_DRV(V3D_SUBMIT_TFU, v3d_submit_tfu_ioctl, DRM_RENDER_ALLOW | DRM_AUTH), 182 + DRM_IOCTL_DEF_DRV(V3D_SUBMIT_CSD, v3d_submit_csd_ioctl, DRM_RENDER_ALLOW | DRM_AUTH), 185 183 }; 186 184 187 185 static struct drm_driver v3d_drm_driver = { ··· 239 235 struct drm_device *drm; 240 236 struct v3d_dev *v3d; 241 237 int ret; 238 + u32 mmu_debug; 242 239 u32 ident1; 243 240 244 - dev->coherent_dma_mask = DMA_BIT_MASK(36); 245 241 246 242 v3d = kzalloc(sizeof(*v3d), GFP_KERNEL); 247 243 if (!v3d) ··· 257 253 ret = map_regs(v3d, &v3d->core_regs[0], "core0"); 258 254 if (ret) 259 255 goto dev_free; 256 + 257 + mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO); 258 + dev->coherent_dma_mask = 259 + DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH)); 260 + v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH); 260 261 261 262 ident1 = V3D_READ(V3D_HUB_IDENT1); 262 263 v3d->ver = (V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_TVER) * 10 +
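[Editor's note] Instead of the old hardcoded 36-bit mask, the DMA mask is now derived from the MMU's self-reported address width: coherent_dma_mask = DMA_BIT_MASK(30 + the V3D_MMU_PA_WIDTH field). Assuming the field encodes bits above a 30-bit floor, as the code implies, a part reporting PA_WIDTH = 6 yields DMA_BIT_MASK(36) = (1ULL << 36) - 1 = 0xfffffffff, reproducing the previous fixed value, while wider parts get correspondingly larger masks; v3d->va_width is computed the same way for the virtual-address side.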
+72 -46
drivers/gpu/drm/v3d/v3d_drv.h
··· 16 16 V3D_BIN, 17 17 V3D_RENDER, 18 18 V3D_TFU, 19 + V3D_CSD, 20 + V3D_CACHE_CLEAN, 19 21 }; 20 22 21 - #define V3D_MAX_QUEUES (V3D_TFU + 1) 23 + #define V3D_MAX_QUEUES (V3D_CACHE_CLEAN + 1) 22 24 23 25 struct v3d_queue_state { 24 26 struct drm_gpu_scheduler sched; ··· 57 55 */ 58 56 void *mmu_scratch; 59 57 dma_addr_t mmu_scratch_paddr; 58 + /* virtual address bits from V3D to the MMU. */ 59 + int va_width; 60 60 61 61 /* Number of V3D cores. */ 62 62 u32 cores; ··· 71 67 72 68 struct work_struct overflow_mem_work; 73 69 74 - struct v3d_exec_info *bin_job; 75 - struct v3d_exec_info *render_job; 70 + struct v3d_bin_job *bin_job; 71 + struct v3d_render_job *render_job; 76 72 struct v3d_tfu_job *tfu_job; 73 + struct v3d_csd_job *csd_job; 77 74 78 75 struct v3d_queue_state queue[V3D_MAX_QUEUES]; 79 76 ··· 97 92 */ 98 93 struct mutex sched_lock; 99 94 95 + /* Lock taken during a cache clean and when initiating an L2 96 + * flush, to keep L2 flushes from interfering with the 97 + * synchronous L2 cleans. 98 + */ 99 + struct mutex cache_clean_lock; 100 + 100 101 struct { 101 102 u32 num_allocated; 102 103 u32 pages_allocated; ··· 113 102 to_v3d_dev(struct drm_device *dev) 114 103 { 115 104 return (struct v3d_dev *)dev->dev_private; 105 + } 106 + 107 + static inline bool 108 + v3d_has_csd(struct v3d_dev *v3d) 109 + { 110 + return v3d->ver >= 41; 116 111 } 117 112 118 113 /* The per-fd struct, which tracks the MMU mappings. */ ··· 134 117 struct drm_mm_node node; 135 118 136 119 /* List entry for the BO's position in 137 - * v3d_exec_info->unref_list 120 + * v3d_render_job->unref_list 138 121 */ 139 122 struct list_head unref_head; 140 123 }; ··· 174 157 struct v3d_job { 175 158 struct drm_sched_job base; 176 159 177 - struct v3d_exec_info *exec; 160 + struct kref refcount; 178 161 179 - /* An optional fence userspace can pass in for the job to depend on. */ 180 - struct dma_fence *in_fence; 162 + struct v3d_dev *v3d; 163 + 164 + /* This is the array of BOs that were looked up at the start 165 + * of submission. 166 + */ 167 + struct drm_gem_object **bo; 168 + u32 bo_count; 169 + 170 + /* Array of struct dma_fence * to block on before submitting this job. 171 + */ 172 + struct xarray deps; 173 + unsigned long last_dep; 181 174 182 175 /* v3d fence to be signaled by IRQ handler when the job is complete. */ 183 176 struct dma_fence *irq_fence; 177 + 178 + /* scheduler fence for when the job is considered complete and 179 + * the BO reservations can be released. 180 + */ 181 + struct dma_fence *done_fence; 182 + 183 + /* Callback for the freeing of the job on refcount going to 0. */ 184 + void (*free)(struct kref *ref); 185 + }; 186 + 187 + struct v3d_bin_job { 188 + struct v3d_job base; 184 189 185 190 /* GPU virtual addresses of the start/end of the CL job. */ 186 191 u32 start, end; 187 192 188 193 u32 timedout_ctca, timedout_ctra; 189 - }; 190 194 191 - struct v3d_exec_info { 192 - struct v3d_dev *v3d; 193 - 194 - struct v3d_job bin, render; 195 - 196 - /* Fence for when the scheduler considers the binner to be 197 - * done, for render to depend on. 198 - */ 199 - struct dma_fence *bin_done_fence; 200 - 201 - /* Fence for when the scheduler considers the render to be 202 - * done, for when the BOs reservations should be complete. 203 - */ 204 - struct dma_fence *render_done_fence; 205 - 206 - struct kref refcount; 207 - 208 - /* This is the array of BOs that were looked up at the start of exec. 
*/ 209 - struct v3d_bo **bo; 210 - u32 bo_count; 211 - 212 - /* List of overflow BOs used in the job that need to be 213 - * released once the job is complete. 214 - */ 215 - struct list_head unref_list; 195 + /* Corresponding render job, for attaching our overflow memory. */ 196 + struct v3d_render_job *render; 216 197 217 198 /* Submitted tile memory allocation start/size, tile state. */ 218 199 u32 qma, qms, qts; 219 200 }; 220 201 202 + struct v3d_render_job { 203 + struct v3d_job base; 204 + 205 + /* GPU virtual addresses of the start/end of the CL job. */ 206 + u32 start, end; 207 + 208 + u32 timedout_ctca, timedout_ctra; 209 + 210 + /* List of overflow BOs used in the job that need to be 211 + * released once the job is complete. 212 + */ 213 + struct list_head unref_list; 214 + }; 215 + 221 216 struct v3d_tfu_job { 222 - struct drm_sched_job base; 217 + struct v3d_job base; 223 218 224 219 struct drm_v3d_submit_tfu args; 220 + }; 225 221 226 - /* An optional fence userspace can pass in for the job to depend on. */ 227 - struct dma_fence *in_fence; 222 + struct v3d_csd_job { 223 + struct v3d_job base; 228 224 229 - /* v3d fence to be signaled by IRQ handler when the job is complete. */ 230 - struct dma_fence *irq_fence; 225 + u32 timedout_batches; 231 226 232 - struct v3d_dev *v3d; 233 - 234 - struct kref refcount; 235 - 236 - /* This is the array of BOs that were looked up at the start of exec. */ 237 - struct v3d_bo *bo[4]; 227 + struct drm_v3d_submit_csd args; 238 228 }; 239 229 240 230 /** ··· 305 281 struct drm_file *file_priv); 306 282 int v3d_submit_tfu_ioctl(struct drm_device *dev, void *data, 307 283 struct drm_file *file_priv); 284 + int v3d_submit_csd_ioctl(struct drm_device *dev, void *data, 285 + struct drm_file *file_priv); 308 286 int v3d_wait_bo_ioctl(struct drm_device *dev, void *data, 309 287 struct drm_file *file_priv); 310 - void v3d_exec_put(struct v3d_exec_info *exec); 311 - void v3d_tfu_job_put(struct v3d_tfu_job *exec); 288 + void v3d_job_put(struct v3d_job *job); 312 289 void v3d_reset(struct v3d_dev *v3d); 313 290 void v3d_invalidate_caches(struct v3d_dev *v3d); 291 + void v3d_clean_caches(struct v3d_dev *v3d); 314 292 315 293 /* v3d_irq.c */ 316 294 int v3d_irq_init(struct v3d_dev *v3d);
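[Editor's sketch] The job structures are reorganized around an embedded base: every job type (bin, render, TFU, CSD) now starts with a struct v3d_job that owns the refcount, the BO array, and an xarray of dependency fences, replacing the old per-job in_fence. Scheduler callbacks are handed the innermost drm_sched_job and climb back out with container_of(). A sketch of that recovery step (helper name hypothetical, mirroring what the scheduler callbacks need):

        #include <linux/kernel.h>

        static inline struct v3d_bin_job *to_bin_job(struct drm_sched_job *sched_job)
        {
                /* v3d_bin_job embeds v3d_job ('base'), which embeds
                 * drm_sched_job ('base' again), so one container_of
                 * through both layers suffices */
                return container_of(sched_job, struct v3d_bin_job, base.base);
        }

Dependencies accumulate in job->deps via drm_gem_fence_array_add(), the scheduler's dependency hook hands them back one at a time before run_job(), and when the refcount drops the job's free callback puts every fence and destroys the xarray.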
+2
drivers/gpu/drm/v3d/v3d_fence.c
··· 36 36 return "v3d-render"; 37 37 case V3D_TFU: 38 38 return "v3d-tfu"; 39 + case V3D_CSD: 40 + return "v3d-csd"; 39 41 default: 40 42 return NULL; 41 43 }
+367 -209
drivers/gpu/drm/v3d/v3d_gem.c
··· 109 109 { 110 110 struct drm_device *dev = &v3d->drm; 111 111 112 - DRM_ERROR("Resetting GPU.\n"); 112 + DRM_DEV_ERROR(dev->dev, "Resetting GPU for hang.\n"); 113 + DRM_DEV_ERROR(dev->dev, "V3D_ERR_STAT: 0x%08x\n", 114 + V3D_CORE_READ(0, V3D_ERR_STAT)); 113 115 trace_v3d_reset_begin(dev); 114 116 115 117 /* XXX: only needed for safe powerdown, not reset. */ ··· 164 162 /* While there is a busy bit (V3D_L2TCACTL_L2TFLS), we don't 165 163 * need to wait for completion before dispatching the job -- 166 164 * L2T accesses will be stalled until the flush has completed. 165 + * However, we do need to make sure we don't try to trigger a 166 + * new flush while the L2_CLEAN queue is trying to 167 + * synchronously clean after a job. 167 168 */ 169 + mutex_lock(&v3d->cache_clean_lock); 168 170 V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, 169 171 V3D_L2TCACTL_L2TFLS | 170 172 V3D_SET_FIELD(V3D_L2TCACTL_FLM_FLUSH, V3D_L2TCACTL_FLM)); 173 + mutex_unlock(&v3d->cache_clean_lock); 174 + } 175 + 176 + /* Cleans texture L1 and L2 cachelines (writing back dirty data). 177 + * 178 + * For cleaning, which happens from the CACHE_CLEAN queue after CSD has 179 + * executed, we need to make sure that the clean is done before 180 + * signaling job completion. So, we synchronously wait before 181 + * returning, and we make sure that L2 invalidates don't happen in the 182 + * meantime to confuse our are-we-done checks. 183 + */ 184 + void 185 + v3d_clean_caches(struct v3d_dev *v3d) 186 + { 187 + struct drm_device *dev = &v3d->drm; 188 + int core = 0; 189 + 190 + trace_v3d_cache_clean_begin(dev); 191 + 192 + V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, V3D_L2TCACTL_TMUWCF); 193 + if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) & 194 + V3D_L2TCACTL_L2TFLS), 100)) { 195 + DRM_ERROR("Timeout waiting for L1T write combiner flush\n"); 196 + } 197 + 198 + mutex_lock(&v3d->cache_clean_lock); 199 + V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, 200 + V3D_L2TCACTL_L2TFLS | 201 + V3D_SET_FIELD(V3D_L2TCACTL_FLM_CLEAN, V3D_L2TCACTL_FLM)); 202 + 203 + if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) & 204 + V3D_L2TCACTL_L2TFLS), 100)) { 205 + DRM_ERROR("Timeout waiting for L2T clean\n"); 206 + } 207 + 208 + mutex_unlock(&v3d->cache_clean_lock); 209 + 210 + trace_v3d_cache_clean_end(dev); 171 211 } 172 212 173 213 /* Invalidates the slice caches. These are read-only caches. */ ··· 237 193 v3d_invalidate_slices(v3d, 0); 238 194 } 239 195 240 - static void 241 - v3d_attach_object_fences(struct v3d_bo **bos, int bo_count, 242 - struct dma_fence *fence) 243 - { 244 - int i; 245 - 246 - for (i = 0; i < bo_count; i++) { 247 - /* XXX: Use shared fences for read-only objects. */ 248 - reservation_object_add_excl_fence(bos[i]->base.base.resv, 249 - fence); 250 - } 251 - } 252 - 253 - static void 254 - v3d_unlock_bo_reservations(struct v3d_bo **bos, 255 - int bo_count, 256 - struct ww_acquire_ctx *acquire_ctx) 257 - { 258 - drm_gem_unlock_reservations((struct drm_gem_object **)bos, bo_count, 259 - acquire_ctx); 260 - } 261 - 262 196 /* Takes the reservation lock on all the BOs being referenced, so that 263 197 * at queue submit time we can update the reservations. 264 198 * ··· 245 223 * to v3d, so we don't attach dma-buf fences to them. 
246 224 */ 247 225 static int 248 - v3d_lock_bo_reservations(struct v3d_bo **bos, 249 - int bo_count, 226 + v3d_lock_bo_reservations(struct v3d_job *job, 250 227 struct ww_acquire_ctx *acquire_ctx) 251 228 { 252 229 int i, ret; 253 230 254 - ret = drm_gem_lock_reservations((struct drm_gem_object **)bos, 255 - bo_count, acquire_ctx); 231 + ret = drm_gem_lock_reservations(job->bo, job->bo_count, acquire_ctx); 256 232 if (ret) 257 233 return ret; 258 234 259 - /* Reserve space for our shared (read-only) fence references, 260 - * before we commit the CL to the hardware. 261 - */ 262 - for (i = 0; i < bo_count; i++) { 263 - ret = reservation_object_reserve_shared(bos[i]->base.base.resv, 264 - 1); 235 + for (i = 0; i < job->bo_count; i++) { 236 + ret = drm_gem_fence_array_add_implicit(&job->deps, 237 + job->bo[i], true); 265 238 if (ret) { 266 - v3d_unlock_bo_reservations(bos, bo_count, 267 - acquire_ctx); 239 + drm_gem_unlock_reservations(job->bo, job->bo_count, 240 + acquire_ctx); 268 241 return ret; 269 242 } 270 243 } ··· 268 251 } 269 252 270 253 /** 271 - * v3d_cl_lookup_bos() - Sets up exec->bo[] with the GEM objects 254 + * v3d_lookup_bos() - Sets up job->bo[] with the GEM objects 272 255 * referenced by the job. 273 256 * @dev: DRM device 274 257 * @file_priv: DRM file for this fd 275 - * @exec: V3D job being set up 258 + * @job: V3D job being set up 276 259 * 277 260 * The command validator needs to reference BOs by their index within 278 261 * the submitted job's BO list. This does the validation of the job's ··· 282 265 * failure, because that will happen at v3d_exec_cleanup() time. 283 266 */ 284 267 static int 285 - v3d_cl_lookup_bos(struct drm_device *dev, 286 - struct drm_file *file_priv, 287 - struct drm_v3d_submit_cl *args, 288 - struct v3d_exec_info *exec) 268 + v3d_lookup_bos(struct drm_device *dev, 269 + struct drm_file *file_priv, 270 + struct v3d_job *job, 271 + u64 bo_handles, 272 + u32 bo_count) 289 273 { 290 274 u32 *handles; 291 275 int ret = 0; 292 276 int i; 293 277 294 - exec->bo_count = args->bo_handle_count; 278 + job->bo_count = bo_count; 295 279 296 - if (!exec->bo_count) { 280 + if (!job->bo_count) { 297 281 /* See comment on bo_index for why we have to check 298 282 * this. 
299 283 */ ··· 302 284 return -EINVAL; 303 285 } 304 286 305 - exec->bo = kvmalloc_array(exec->bo_count, 306 - sizeof(struct drm_gem_cma_object *), 307 - GFP_KERNEL | __GFP_ZERO); 308 - if (!exec->bo) { 287 + job->bo = kvmalloc_array(job->bo_count, 288 + sizeof(struct drm_gem_cma_object *), 289 + GFP_KERNEL | __GFP_ZERO); 290 + if (!job->bo) { 309 291 DRM_DEBUG("Failed to allocate validated BO pointers\n"); 310 292 return -ENOMEM; 311 293 } 312 294 313 - handles = kvmalloc_array(exec->bo_count, sizeof(u32), GFP_KERNEL); 295 + handles = kvmalloc_array(job->bo_count, sizeof(u32), GFP_KERNEL); 314 296 if (!handles) { 315 297 ret = -ENOMEM; 316 298 DRM_DEBUG("Failed to allocate incoming GEM handles\n"); ··· 318 300 } 319 301 320 302 if (copy_from_user(handles, 321 - (void __user *)(uintptr_t)args->bo_handles, 322 - exec->bo_count * sizeof(u32))) { 303 + (void __user *)(uintptr_t)bo_handles, 304 + job->bo_count * sizeof(u32))) { 323 305 ret = -EFAULT; 324 306 DRM_DEBUG("Failed to copy in GEM handles\n"); 325 307 goto fail; 326 308 } 327 309 328 310 spin_lock(&file_priv->table_lock); 329 - for (i = 0; i < exec->bo_count; i++) { 311 + for (i = 0; i < job->bo_count; i++) { 330 312 struct drm_gem_object *bo = idr_find(&file_priv->object_idr, 331 313 handles[i]); 332 314 if (!bo) { ··· 337 319 goto fail; 338 320 } 339 321 drm_gem_object_get(bo); 340 - exec->bo[i] = to_v3d_bo(bo); 322 + job->bo[i] = bo; 341 323 } 342 324 spin_unlock(&file_priv->table_lock); 343 325 ··· 347 329 } 348 330 349 331 static void 350 - v3d_exec_cleanup(struct kref *ref) 332 + v3d_job_free(struct kref *ref) 351 333 { 352 - struct v3d_exec_info *exec = container_of(ref, struct v3d_exec_info, 353 - refcount); 354 - struct v3d_dev *v3d = exec->v3d; 355 - unsigned int i; 356 - struct v3d_bo *bo, *save; 334 + struct v3d_job *job = container_of(ref, struct v3d_job, refcount); 335 + unsigned long index; 336 + struct dma_fence *fence; 337 + int i; 357 338 358 - dma_fence_put(exec->bin.in_fence); 359 - dma_fence_put(exec->render.in_fence); 360 - 361 - dma_fence_put(exec->bin.irq_fence); 362 - dma_fence_put(exec->render.irq_fence); 363 - 364 - dma_fence_put(exec->bin_done_fence); 365 - dma_fence_put(exec->render_done_fence); 366 - 367 - for (i = 0; i < exec->bo_count; i++) 368 - drm_gem_object_put_unlocked(&exec->bo[i]->base.base); 369 - kvfree(exec->bo); 370 - 371 - list_for_each_entry_safe(bo, save, &exec->unref_list, unref_head) { 372 - drm_gem_object_put_unlocked(&bo->base.base); 373 - } 374 - 375 - pm_runtime_mark_last_busy(v3d->dev); 376 - pm_runtime_put_autosuspend(v3d->dev); 377 - 378 - kfree(exec); 379 - } 380 - 381 - void v3d_exec_put(struct v3d_exec_info *exec) 382 - { 383 - kref_put(&exec->refcount, v3d_exec_cleanup); 384 - } 385 - 386 - static void 387 - v3d_tfu_job_cleanup(struct kref *ref) 388 - { 389 - struct v3d_tfu_job *job = container_of(ref, struct v3d_tfu_job, 390 - refcount); 391 - struct v3d_dev *v3d = job->v3d; 392 - unsigned int i; 393 - 394 - dma_fence_put(job->in_fence); 395 - dma_fence_put(job->irq_fence); 396 - 397 - for (i = 0; i < ARRAY_SIZE(job->bo); i++) { 339 + for (i = 0; i < job->bo_count; i++) { 398 340 if (job->bo[i]) 399 - drm_gem_object_put_unlocked(&job->bo[i]->base.base); 341 + drm_gem_object_put_unlocked(job->bo[i]); 400 342 } 343 + kvfree(job->bo); 401 344 402 - pm_runtime_mark_last_busy(v3d->dev); 403 - pm_runtime_put_autosuspend(v3d->dev); 345 + xa_for_each(&job->deps, index, fence) { 346 + dma_fence_put(fence); 347 + } 348 + xa_destroy(&job->deps); 349 + 350 + 
dma_fence_put(job->irq_fence); 351 + dma_fence_put(job->done_fence); 352 + 353 + pm_runtime_mark_last_busy(job->v3d->dev); 354 + pm_runtime_put_autosuspend(job->v3d->dev); 404 355 405 356 kfree(job); 406 357 } 407 358 408 - void v3d_tfu_job_put(struct v3d_tfu_job *job) 359 + static void 360 + v3d_render_job_free(struct kref *ref) 409 361 { 410 - kref_put(&job->refcount, v3d_tfu_job_cleanup); 362 + struct v3d_render_job *job = container_of(ref, struct v3d_render_job, 363 + base.refcount); 364 + struct v3d_bo *bo, *save; 365 + 366 + list_for_each_entry_safe(bo, save, &job->unref_list, unref_head) { 367 + drm_gem_object_put_unlocked(&bo->base.base); 368 + } 369 + 370 + v3d_job_free(ref); 371 + } 372 + 373 + void v3d_job_put(struct v3d_job *job) 374 + { 375 + kref_put(&job->refcount, job->free); 411 376 } 412 377 413 378 int ··· 426 425 return ret; 427 426 } 428 427 428 + static int 429 + v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, 430 + struct v3d_job *job, void (*free)(struct kref *ref), 431 + u32 in_sync) 432 + { 433 + struct dma_fence *in_fence = NULL; 434 + int ret; 435 + 436 + job->v3d = v3d; 437 + job->free = free; 438 + 439 + ret = pm_runtime_get_sync(v3d->dev); 440 + if (ret < 0) 441 + return ret; 442 + 443 + xa_init_flags(&job->deps, XA_FLAGS_ALLOC); 444 + 445 + ret = drm_syncobj_find_fence(file_priv, in_sync, 0, 0, &in_fence); 446 + if (ret == -EINVAL) 447 + goto fail; 448 + 449 + ret = drm_gem_fence_array_add(&job->deps, in_fence); 450 + if (ret) 451 + goto fail; 452 + 453 + kref_init(&job->refcount); 454 + 455 + return 0; 456 + fail: 457 + xa_destroy(&job->deps); 458 + pm_runtime_put_autosuspend(v3d->dev); 459 + return ret; 460 + } 461 + 462 + static int 463 + v3d_push_job(struct v3d_file_priv *v3d_priv, 464 + struct v3d_job *job, enum v3d_queue queue) 465 + { 466 + int ret; 467 + 468 + ret = drm_sched_job_init(&job->base, &v3d_priv->sched_entity[queue], 469 + v3d_priv); 470 + if (ret) 471 + return ret; 472 + 473 + job->done_fence = dma_fence_get(&job->base.s_fence->finished); 474 + 475 + /* put by scheduler job completion */ 476 + kref_get(&job->refcount); 477 + 478 + drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[queue]); 479 + 480 + return 0; 481 + } 482 + 483 + static void 484 + v3d_attach_fences_and_unlock_reservation(struct drm_file *file_priv, 485 + struct v3d_job *job, 486 + struct ww_acquire_ctx *acquire_ctx, 487 + u32 out_sync, 488 + struct dma_fence *done_fence) 489 + { 490 + struct drm_syncobj *sync_out; 491 + int i; 492 + 493 + for (i = 0; i < job->bo_count; i++) { 494 + /* XXX: Use shared fences for read-only objects. */ 495 + reservation_object_add_excl_fence(job->bo[i]->resv, 496 + job->done_fence); 497 + } 498 + 499 + drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); 500 + 501 + /* Update the return sync object for the job */ 502 + sync_out = drm_syncobj_find(file_priv, out_sync); 503 + if (sync_out) { 504 + drm_syncobj_replace_fence(sync_out, done_fence); 505 + drm_syncobj_put(sync_out); 506 + } 507 + } 508 + 429 509 /** 430 510 * v3d_submit_cl_ioctl() - Submits a job (frame) to the V3D. 
431 511 * @dev: DRM device ··· 526 444 struct v3d_dev *v3d = to_v3d_dev(dev); 527 445 struct v3d_file_priv *v3d_priv = file_priv->driver_priv; 528 446 struct drm_v3d_submit_cl *args = data; 529 - struct v3d_exec_info *exec; 447 + struct v3d_bin_job *bin = NULL; 448 + struct v3d_render_job *render; 530 449 struct ww_acquire_ctx acquire_ctx; 531 - struct drm_syncobj *sync_out; 532 450 int ret = 0; 533 451 534 452 trace_v3d_submit_cl_ioctl(&v3d->drm, args->rcl_start, args->rcl_end); ··· 538 456 return -EINVAL; 539 457 } 540 458 541 - exec = kcalloc(1, sizeof(*exec), GFP_KERNEL); 542 - if (!exec) 459 + render = kcalloc(1, sizeof(*render), GFP_KERNEL); 460 + if (!render) 543 461 return -ENOMEM; 544 462 545 - ret = pm_runtime_get_sync(v3d->dev); 546 - if (ret < 0) { 547 - kfree(exec); 463 + render->start = args->rcl_start; 464 + render->end = args->rcl_end; 465 + INIT_LIST_HEAD(&render->unref_list); 466 + 467 + ret = v3d_job_init(v3d, file_priv, &render->base, 468 + v3d_render_job_free, args->in_sync_rcl); 469 + if (ret) { 470 + kfree(render); 548 471 return ret; 549 472 } 550 473 551 - kref_init(&exec->refcount); 474 + if (args->bcl_start != args->bcl_end) { 475 + bin = kcalloc(1, sizeof(*bin), GFP_KERNEL); 476 + if (!bin) 477 + return -ENOMEM; 552 478 553 - ret = drm_syncobj_find_fence(file_priv, args->in_sync_bcl, 554 - 0, 0, &exec->bin.in_fence); 555 - if (ret == -EINVAL) 556 - goto fail; 479 + ret = v3d_job_init(v3d, file_priv, &bin->base, 480 + v3d_job_free, args->in_sync_bcl); 481 + if (ret) { 482 + v3d_job_put(&render->base); 483 + return ret; 484 + } 557 485 558 - ret = drm_syncobj_find_fence(file_priv, args->in_sync_rcl, 559 - 0, 0, &exec->render.in_fence); 560 - if (ret == -EINVAL) 561 - goto fail; 486 + bin->start = args->bcl_start; 487 + bin->end = args->bcl_end; 488 + bin->qma = args->qma; 489 + bin->qms = args->qms; 490 + bin->qts = args->qts; 491 + bin->render = render; 492 + } 562 493 563 - exec->qma = args->qma; 564 - exec->qms = args->qms; 565 - exec->qts = args->qts; 566 - exec->bin.exec = exec; 567 - exec->bin.start = args->bcl_start; 568 - exec->bin.end = args->bcl_end; 569 - exec->render.exec = exec; 570 - exec->render.start = args->rcl_start; 571 - exec->render.end = args->rcl_end; 572 - exec->v3d = v3d; 573 - INIT_LIST_HEAD(&exec->unref_list); 574 - 575 - ret = v3d_cl_lookup_bos(dev, file_priv, args, exec); 494 + ret = v3d_lookup_bos(dev, file_priv, &render->base, 495 + args->bo_handles, args->bo_handle_count); 576 496 if (ret) 577 497 goto fail; 578 498 579 - ret = v3d_lock_bo_reservations(exec->bo, exec->bo_count, 580 - &acquire_ctx); 499 + ret = v3d_lock_bo_reservations(&render->base, &acquire_ctx); 581 500 if (ret) 582 501 goto fail; 583 502 584 503 mutex_lock(&v3d->sched_lock); 585 - if (exec->bin.start != exec->bin.end) { 586 - ret = drm_sched_job_init(&exec->bin.base, 587 - &v3d_priv->sched_entity[V3D_BIN], 588 - v3d_priv); 504 + if (bin) { 505 + ret = v3d_push_job(v3d_priv, &bin->base, V3D_BIN); 589 506 if (ret) 590 507 goto fail_unreserve; 591 508 592 - exec->bin_done_fence = 593 - dma_fence_get(&exec->bin.base.s_fence->finished); 594 - 595 - kref_get(&exec->refcount); /* put by scheduler job completion */ 596 - drm_sched_entity_push_job(&exec->bin.base, 597 - &v3d_priv->sched_entity[V3D_BIN]); 509 + ret = drm_gem_fence_array_add(&render->base.deps, 510 + dma_fence_get(bin->base.done_fence)); 511 + if (ret) 512 + goto fail_unreserve; 598 513 } 599 514 600 - ret = drm_sched_job_init(&exec->render.base, 601 - &v3d_priv->sched_entity[V3D_RENDER], 602 - v3d_priv); 
515 + ret = v3d_push_job(v3d_priv, &render->base, V3D_RENDER); 603 516 if (ret) 604 517 goto fail_unreserve; 605 - 606 - exec->render_done_fence = 607 - dma_fence_get(&exec->render.base.s_fence->finished); 608 - 609 - kref_get(&exec->refcount); /* put by scheduler job completion */ 610 - drm_sched_entity_push_job(&exec->render.base, 611 - &v3d_priv->sched_entity[V3D_RENDER]); 612 518 mutex_unlock(&v3d->sched_lock); 613 519 614 - v3d_attach_object_fences(exec->bo, exec->bo_count, 615 - exec->render_done_fence); 520 + v3d_attach_fences_and_unlock_reservation(file_priv, 521 + &render->base, 522 + &acquire_ctx, 523 + args->out_sync, 524 + render->base.done_fence); 616 525 617 - v3d_unlock_bo_reservations(exec->bo, exec->bo_count, &acquire_ctx); 618 - 619 - /* Update the return sync object for the */ 620 - sync_out = drm_syncobj_find(file_priv, args->out_sync); 621 - if (sync_out) { 622 - drm_syncobj_replace_fence(sync_out, exec->render_done_fence); 623 - drm_syncobj_put(sync_out); 624 - } 625 - 626 - v3d_exec_put(exec); 526 + if (bin) 527 + v3d_job_put(&bin->base); 528 + v3d_job_put(&render->base); 627 529 628 530 return 0; 629 531 630 532 fail_unreserve: 631 533 mutex_unlock(&v3d->sched_lock); 632 - v3d_unlock_bo_reservations(exec->bo, exec->bo_count, &acquire_ctx); 534 + drm_gem_unlock_reservations(render->base.bo, 535 + render->base.bo_count, &acquire_ctx); 633 536 fail: 634 - v3d_exec_put(exec); 537 + if (bin) 538 + v3d_job_put(&bin->base); 539 + v3d_job_put(&render->base); 635 540 636 541 return ret; 637 542 } ··· 641 572 struct drm_v3d_submit_tfu *args = data; 642 573 struct v3d_tfu_job *job; 643 574 struct ww_acquire_ctx acquire_ctx; 644 - struct drm_syncobj *sync_out; 645 - struct dma_fence *sched_done_fence; 646 575 int ret = 0; 647 - int bo_count; 648 576 649 577 trace_v3d_submit_tfu_ioctl(&v3d->drm, args->iia); 650 578 ··· 649 583 if (!job) 650 584 return -ENOMEM; 651 585 652 - ret = pm_runtime_get_sync(v3d->dev); 653 - if (ret < 0) { 586 + ret = v3d_job_init(v3d, file_priv, &job->base, 587 + v3d_job_free, args->in_sync); 588 + if (ret) { 654 589 kfree(job); 655 590 return ret; 656 591 } 657 592 658 - kref_init(&job->refcount); 659 - 660 - ret = drm_syncobj_find_fence(file_priv, args->in_sync, 661 - 0, 0, &job->in_fence); 662 - if (ret == -EINVAL) 663 - goto fail; 593 + job->base.bo = kcalloc(ARRAY_SIZE(args->bo_handles), 594 + sizeof(*job->base.bo), GFP_KERNEL); 595 + if (!job->base.bo) { 596 + v3d_job_put(&job->base); 597 + return -ENOMEM; 598 + } 664 599 665 600 job->args = *args; 666 - job->v3d = v3d; 667 601 668 602 spin_lock(&file_priv->table_lock); 669 - for (bo_count = 0; bo_count < ARRAY_SIZE(job->bo); bo_count++) { 603 + for (job->base.bo_count = 0; 604 + job->base.bo_count < ARRAY_SIZE(args->bo_handles); 605 + job->base.bo_count++) { 670 606 struct drm_gem_object *bo; 671 607 672 - if (!args->bo_handles[bo_count]) 608 + if (!args->bo_handles[job->base.bo_count]) 673 609 break; 674 610 675 611 bo = idr_find(&file_priv->object_idr, 676 - args->bo_handles[bo_count]); 612 + args->bo_handles[job->base.bo_count]); 677 613 if (!bo) { 678 614 DRM_DEBUG("Failed to look up GEM BO %d: %d\n", 679 - bo_count, args->bo_handles[bo_count]); 615 + job->base.bo_count, 616 + args->bo_handles[job->base.bo_count]); 680 617 ret = -ENOENT; 681 618 spin_unlock(&file_priv->table_lock); 682 619 goto fail; 683 620 } 684 621 drm_gem_object_get(bo); 685 - job->bo[bo_count] = to_v3d_bo(bo); 622 + job->base.bo[job->base.bo_count] = bo; 686 623 } 687 624 spin_unlock(&file_priv->table_lock); 688 625 689 
- ret = v3d_lock_bo_reservations(job->bo, bo_count, &acquire_ctx); 626 + ret = v3d_lock_bo_reservations(&job->base, &acquire_ctx); 690 627 if (ret) 691 628 goto fail; 692 629 693 630 mutex_lock(&v3d->sched_lock); 694 - ret = drm_sched_job_init(&job->base, 695 - &v3d_priv->sched_entity[V3D_TFU], 696 - v3d_priv); 631 + ret = v3d_push_job(v3d_priv, &job->base, V3D_TFU); 697 632 if (ret) 698 633 goto fail_unreserve; 699 - 700 - sched_done_fence = dma_fence_get(&job->base.s_fence->finished); 701 - 702 - kref_get(&job->refcount); /* put by scheduler job completion */ 703 - drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[V3D_TFU]); 704 634 mutex_unlock(&v3d->sched_lock); 705 635 706 - v3d_attach_object_fences(job->bo, bo_count, sched_done_fence); 636 + v3d_attach_fences_and_unlock_reservation(file_priv, 637 + &job->base, &acquire_ctx, 638 + args->out_sync, 639 + job->base.done_fence); 707 640 708 - v3d_unlock_bo_reservations(job->bo, bo_count, &acquire_ctx); 709 - 710 - /* Update the return sync object */ 711 - sync_out = drm_syncobj_find(file_priv, args->out_sync); 712 - if (sync_out) { 713 - drm_syncobj_replace_fence(sync_out, sched_done_fence); 714 - drm_syncobj_put(sync_out); 715 - } 716 - dma_fence_put(sched_done_fence); 717 - 718 - v3d_tfu_job_put(job); 641 + v3d_job_put(&job->base); 719 642 720 643 return 0; 721 644 722 645 fail_unreserve: 723 646 mutex_unlock(&v3d->sched_lock); 724 - v3d_unlock_bo_reservations(job->bo, bo_count, &acquire_ctx); 647 + drm_gem_unlock_reservations(job->base.bo, job->base.bo_count, 648 + &acquire_ctx); 725 649 fail: 726 - v3d_tfu_job_put(job); 650 + v3d_job_put(&job->base); 651 + 652 + return ret; 653 + } 654 + 655 + /** 656 + * v3d_submit_csd_ioctl() - Submits a CSD (compute shader dispatch) job to the V3D. 657 + * @dev: DRM device 658 + * @data: ioctl argument 659 + * @file_priv: DRM file for this fd 660 + * 661 + * Userspace provides the register setup for the CSD, which we don't 662 + * need to validate since the CSD is behind the MMU. 
663 + */ 664 + int 665 + v3d_submit_csd_ioctl(struct drm_device *dev, void *data, 666 + struct drm_file *file_priv) 667 + { 668 + struct v3d_dev *v3d = to_v3d_dev(dev); 669 + struct v3d_file_priv *v3d_priv = file_priv->driver_priv; 670 + struct drm_v3d_submit_csd *args = data; 671 + struct v3d_csd_job *job; 672 + struct v3d_job *clean_job; 673 + struct ww_acquire_ctx acquire_ctx; 674 + int ret; 675 + 676 + trace_v3d_submit_csd_ioctl(&v3d->drm, args->cfg[5], args->cfg[6]); 677 + 678 + if (!v3d_has_csd(v3d)) { 679 + DRM_DEBUG("Attempting CSD submit on non-CSD hardware\n"); 680 + return -EINVAL; 681 + } 682 + 683 + job = kcalloc(1, sizeof(*job), GFP_KERNEL); 684 + if (!job) 685 + return -ENOMEM; 686 + 687 + ret = v3d_job_init(v3d, file_priv, &job->base, 688 + v3d_job_free, args->in_sync); 689 + if (ret) { 690 + kfree(job); 691 + return ret; 692 + } 693 + 694 + clean_job = kcalloc(1, sizeof(*clean_job), GFP_KERNEL); 695 + if (!clean_job) { 696 + v3d_job_put(&job->base); 697 + kfree(job); 698 + return -ENOMEM; 699 + } 700 + 701 + ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0); 702 + if (ret) { 703 + v3d_job_put(&job->base); 704 + kfree(clean_job); 705 + return ret; 706 + } 707 + 708 + job->args = *args; 709 + 710 + ret = v3d_lookup_bos(dev, file_priv, clean_job, 711 + args->bo_handles, args->bo_handle_count); 712 + if (ret) 713 + goto fail; 714 + 715 + ret = v3d_lock_bo_reservations(clean_job, &acquire_ctx); 716 + if (ret) 717 + goto fail; 718 + 719 + mutex_lock(&v3d->sched_lock); 720 + ret = v3d_push_job(v3d_priv, &job->base, V3D_CSD); 721 + if (ret) 722 + goto fail_unreserve; 723 + 724 + ret = drm_gem_fence_array_add(&clean_job->deps, 725 + dma_fence_get(job->base.done_fence)); 726 + if (ret) 727 + goto fail_unreserve; 728 + 729 + ret = v3d_push_job(v3d_priv, clean_job, V3D_CACHE_CLEAN); 730 + if (ret) 731 + goto fail_unreserve; 732 + mutex_unlock(&v3d->sched_lock); 733 + 734 + v3d_attach_fences_and_unlock_reservation(file_priv, 735 + clean_job, 736 + &acquire_ctx, 737 + args->out_sync, 738 + clean_job->done_fence); 739 + 740 + v3d_job_put(&job->base); 741 + v3d_job_put(clean_job); 742 + 743 + return 0; 744 + 745 + fail_unreserve: 746 + mutex_unlock(&v3d->sched_lock); 747 + drm_gem_unlock_reservations(clean_job->bo, clean_job->bo_count, 748 + &acquire_ctx); 749 + fail: 750 + v3d_job_put(&job->base); 751 + v3d_job_put(clean_job); 727 752 728 753 return ret; 729 754 } ··· 834 677 mutex_init(&v3d->bo_lock); 835 678 mutex_init(&v3d->reset_lock); 836 679 mutex_init(&v3d->sched_lock); 680 + mutex_init(&v3d->cache_clean_lock); 837 681 838 682 /* Note: We don't allocate address 0. Various bits of HW 839 683 * treat 0 as special, such as the occlusion query counters ··· 873 715 874 716 v3d_sched_fini(v3d); 875 717 876 - /* Waiting for exec to finish would need to be done before 718 + /* Waiting for jobs to finish would need to be done before 877 719 * unregistering V3D. 878 720 */ 879 721 WARN_ON(v3d->bin_job);
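The v3d_gem.c hunks above collapse the driver's per-engine submit structures (v3d_exec_info, v3d_tfu_job) into one refcounted struct v3d_job. A condensed sketch of that common structure, reconstructed from how this patch uses it rather than from the real definition (which lives in v3d_drv.h, outside this diff):

struct v3d_job {
	struct drm_sched_job base;

	struct v3d_dev *v3d;

	/* GEM objects referenced by the job, looked up by handle in
	 * v3d_lookup_bos().
	 */
	struct drm_gem_object **bo;
	u32 bo_count;

	/* Fences this job must wait on before running; filled by
	 * drm_gem_fence_array_add{,_implicit}() and drained one at a
	 * time by the scheduler's .dependency hook.
	 */
	struct xarray deps;
	unsigned long last_dep;

	struct dma_fence *irq_fence;	/* signaled from the IRQ handler */
	struct dma_fence *done_fence;	/* scheduler's "finished" fence */

	struct kref refcount;
	void (*free)(struct kref *ref);	/* per-job-type destructor */
};

Each engine's job type embeds this as .base; a subtype with private state, like v3d_render_job with its unref_list, releases that state first and then chains into v3d_job_free(), which is why v3d_job_put() dispatches through job->free.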
+43 -10
drivers/gpu/drm/v3d/v3d_irq.c
··· 4 4 /** 5 5 * DOC: Interrupt management for the V3D engine 6 6 * 7 - * When we take a bin, render, or TFU done interrupt, we need to 8 - * signal the fence for that job so that the scheduler can queue up 9 - * the next one and unblock any waiters. 7 + * When we take a bin, render, TFU done, or CSD done interrupt, we 8 + * need to signal the fence for that job so that the scheduler can 9 + * queue up the next one and unblock any waiters. 10 10 * 11 11 * When we take the binner out of memory interrupt, we need to 12 12 * allocate some new memory and pass it to the binner so that the ··· 20 20 #define V3D_CORE_IRQS ((u32)(V3D_INT_OUTOMEM | \ 21 21 V3D_INT_FLDONE | \ 22 22 V3D_INT_FRDONE | \ 23 + V3D_INT_CSDDONE | \ 23 24 V3D_INT_GMPV)) 24 25 25 26 #define V3D_HUB_IRQS ((u32)(V3D_HUB_INT_MMU_WRV | \ ··· 63 62 } 64 63 65 64 drm_gem_object_get(obj); 66 - list_add_tail(&bo->unref_head, &v3d->bin_job->unref_list); 65 + list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list); 67 66 spin_unlock_irqrestore(&v3d->job_lock, irqflags); 68 67 69 68 V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << PAGE_SHIFT); ··· 97 96 98 97 if (intsts & V3D_INT_FLDONE) { 99 98 struct v3d_fence *fence = 100 - to_v3d_fence(v3d->bin_job->bin.irq_fence); 99 + to_v3d_fence(v3d->bin_job->base.irq_fence); 101 100 102 101 trace_v3d_bcl_irq(&v3d->drm, fence->seqno); 103 102 dma_fence_signal(&fence->base); ··· 106 105 107 106 if (intsts & V3D_INT_FRDONE) { 108 107 struct v3d_fence *fence = 109 - to_v3d_fence(v3d->render_job->render.irq_fence); 108 + to_v3d_fence(v3d->render_job->base.irq_fence); 110 109 111 110 trace_v3d_rcl_irq(&v3d->drm, fence->seqno); 111 + dma_fence_signal(&fence->base); 112 + status = IRQ_HANDLED; 113 + } 114 + 115 + if (intsts & V3D_INT_CSDDONE) { 116 + struct v3d_fence *fence = 117 + to_v3d_fence(v3d->csd_job->base.irq_fence); 118 + 119 + trace_v3d_csd_irq(&v3d->drm, fence->seqno); 112 120 dma_fence_signal(&fence->base); 113 121 status = IRQ_HANDLED; 114 122 } ··· 151 141 152 142 if (intsts & V3D_HUB_INT_TFUC) { 153 143 struct v3d_fence *fence = 154 - to_v3d_fence(v3d->tfu_job->irq_fence); 144 + to_v3d_fence(v3d->tfu_job->base.irq_fence); 155 145 156 146 trace_v3d_tfu_irq(&v3d->drm, fence->seqno); 157 147 dma_fence_signal(&fence->base); ··· 162 152 V3D_HUB_INT_MMU_PTI | 163 153 V3D_HUB_INT_MMU_CAP)) { 164 154 u32 axi_id = V3D_READ(V3D_MMU_VIO_ID); 165 - u64 vio_addr = (u64)V3D_READ(V3D_MMU_VIO_ADDR) << 8; 155 + u64 vio_addr = ((u64)V3D_READ(V3D_MMU_VIO_ADDR) << 156 + (v3d->va_width - 32)); 157 + static const char *const v3d41_axi_ids[] = { 158 + "L2T", 159 + "PTB", 160 + "PSE", 161 + "TLB", 162 + "CLE", 163 + "TFU", 164 + "MMU", 165 + "GMP", 166 + }; 167 + const char *client = "?"; 166 168 167 - dev_err(v3d->dev, "MMU error from client %d at 0x%08llx%s%s%s\n", 168 - axi_id, (long long)vio_addr, 169 + V3D_WRITE(V3D_MMU_CTL, 170 + V3D_READ(V3D_MMU_CTL) & (V3D_MMU_CTL_CAP_EXCEEDED | 171 + V3D_MMU_CTL_PT_INVALID | 172 + V3D_MMU_CTL_WRITE_VIOLATION)); 173 + 174 + if (v3d->ver >= 41) { 175 + axi_id = axi_id >> 5; 176 + if (axi_id < ARRAY_SIZE(v3d41_axi_ids)) 177 + client = v3d41_axi_ids[axi_id]; 178 + } 179 + 180 + dev_err(v3d->dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n", 181 + client, axi_id, (long long)vio_addr, 169 182 ((intsts & V3D_HUB_INT_MMU_WRV) ? 170 183 ", write violation" : ""), 171 184 ((intsts & V3D_HUB_INT_MMU_PTI) ?
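A note on the reworked MMU violation decode above: V3D_MMU_VIO_ADDR reports only the upper 32 bits of the faulting virtual address, so the handler now scales it by the MMU's configured address width instead of the old fixed << 8, and on V3D 4.1+ the upper bits of the AXI ID select the client name. The address arithmetic as a hypothetical standalone helper:

/* Reconstruct the faulting VA: the 32-bit register holds bits
 * [va_width-1 : va_width-32] of the address.
 */
static u64 v3d_mmu_vio_addr_to_va(u32 vio_addr_reg, int va_width)
{
	return (u64)vio_addr_reg << (va_width - 32);
}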
+5 -2
drivers/gpu/drm/v3d/v3d_mmu.c
··· 69 69 V3D_WRITE(V3D_MMU_PT_PA_BASE, v3d->pt_paddr >> V3D_MMU_PAGE_SHIFT); 70 70 V3D_WRITE(V3D_MMU_CTL, 71 71 V3D_MMU_CTL_ENABLE | 72 - V3D_MMU_CTL_PT_INVALID | 72 + V3D_MMU_CTL_PT_INVALID_ENABLE | 73 73 V3D_MMU_CTL_PT_INVALID_ABORT | 74 + V3D_MMU_CTL_PT_INVALID_INT | 74 75 V3D_MMU_CTL_WRITE_VIOLATION_ABORT | 75 - V3D_MMU_CTL_CAP_EXCEEDED_ABORT); 76 + V3D_MMU_CTL_WRITE_VIOLATION_INT | 77 + V3D_MMU_CTL_CAP_EXCEEDED_ABORT | 78 + V3D_MMU_CTL_CAP_EXCEEDED_INT); 76 79 V3D_WRITE(V3D_MMU_ILLEGAL_ADDR, 77 80 (v3d->mmu_scratch_paddr >> V3D_MMU_PAGE_SHIFT) | 78 81 V3D_MMU_ILLEGAL_ADDR_ENABLE);
+121 -1
drivers/gpu/drm/v3d/v3d_regs.h
··· 152 152 # define V3D_MMU_CTL_PT_INVALID_ABORT BIT(19) 153 153 # define V3D_MMU_CTL_PT_INVALID_INT BIT(18) 154 154 # define V3D_MMU_CTL_PT_INVALID_EXCEPTION BIT(17) 155 - # define V3D_MMU_CTL_WRITE_VIOLATION BIT(16) 155 + # define V3D_MMU_CTL_PT_INVALID_ENABLE BIT(16) 156 + # define V3D_MMU_CTL_WRITE_VIOLATION BIT(12) 156 157 # define V3D_MMU_CTL_WRITE_VIOLATION_ABORT BIT(11) 157 158 # define V3D_MMU_CTL_WRITE_VIOLATION_INT BIT(10) 158 159 # define V3D_MMU_CTL_WRITE_VIOLATION_EXCEPTION BIT(9) ··· 191 190 192 191 /* Address that faulted */ 193 192 #define V3D_MMU_VIO_ADDR 0x01234 193 + 194 + #define V3D_MMU_DEBUG_INFO 0x01238 195 + # define V3D_MMU_PA_WIDTH_MASK V3D_MASK(11, 8) 196 + # define V3D_MMU_PA_WIDTH_SHIFT 8 197 + # define V3D_MMU_VA_WIDTH_MASK V3D_MASK(7, 4) 198 + # define V3D_MMU_VA_WIDTH_SHIFT 4 199 + # define V3D_MMU_VERSION_MASK V3D_MASK(3, 0) 200 + # define V3D_MMU_VERSION_SHIFT 0 194 201 195 202 /* Per-V3D-core registers */ 196 203 ··· 247 238 #define V3D_CTL_L2TCACTL 0x00030 248 239 # define V3D_L2TCACTL_TMUWCF BIT(8) 249 240 # define V3D_L2TCACTL_L2T_NO_WM BIT(4) 241 + /* Invalidates cache lines. */ 250 242 # define V3D_L2TCACTL_FLM_FLUSH 0 243 + /* Removes cachelines without writing dirty lines back. */ 251 244 # define V3D_L2TCACTL_FLM_CLEAR 1 245 + /* Writes out dirty cachelines and marks them clean, but doesn't invalidate. */ 252 246 # define V3D_L2TCACTL_FLM_CLEAN 2 253 247 # define V3D_L2TCACTL_FLM_MASK V3D_MASK(2, 1) 254 248 # define V3D_L2TCACTL_FLM_SHIFT 1 ··· 267 255 #define V3D_CTL_INT_MSK_CLR 0x00064 268 256 # define V3D_INT_QPU_MASK V3D_MASK(27, 16) 269 257 # define V3D_INT_QPU_SHIFT 16 258 + # define V3D_INT_CSDDONE BIT(7) 259 + # define V3D_INT_PCTR BIT(6) 270 260 # define V3D_INT_GMPV BIT(5) 271 261 # define V3D_INT_TRFB BIT(4) 272 262 # define V3D_INT_SPILLUSE BIT(3) ··· 387 373 #define V3D_GMP_CLEAR_LOAD 0x00814 388 374 #define V3D_GMP_PRESERVE_LOAD 0x00818 389 375 #define V3D_GMP_VALID_LINES 0x00820 376 + 377 + #define V3D_CSD_STATUS 0x00900 378 + # define V3D_CSD_STATUS_NUM_COMPLETED_MASK V3D_MASK(11, 4) 379 + # define V3D_CSD_STATUS_NUM_COMPLETED_SHIFT 4 380 + # define V3D_CSD_STATUS_NUM_ACTIVE_MASK V3D_MASK(3, 2) 381 + # define V3D_CSD_STATUS_NUM_ACTIVE_SHIFT 2 382 + # define V3D_CSD_STATUS_HAVE_CURRENT_DISPATCH BIT(1) 383 + # define V3D_CSD_STATUS_HAVE_QUEUED_DISPATCH BIT(0) 384 + 385 + #define V3D_CSD_QUEUED_CFG0 0x00904 386 + # define V3D_CSD_QUEUED_CFG0_NUM_WGS_X_MASK V3D_MASK(31, 16) 387 + # define V3D_CSD_QUEUED_CFG0_NUM_WGS_X_SHIFT 16 388 + # define V3D_CSD_QUEUED_CFG0_WG_X_OFFSET_MASK V3D_MASK(15, 0) 389 + # define V3D_CSD_QUEUED_CFG0_WG_X_OFFSET_SHIFT 0 390 + 391 + #define V3D_CSD_QUEUED_CFG1 0x00908 392 + # define V3D_CSD_QUEUED_CFG1_NUM_WGS_Y_MASK V3D_MASK(31, 16) 393 + # define V3D_CSD_QUEUED_CFG1_NUM_WGS_Y_SHIFT 16 394 + # define V3D_CSD_QUEUED_CFG1_WG_Y_OFFSET_MASK V3D_MASK(15, 0) 395 + # define V3D_CSD_QUEUED_CFG1_WG_Y_OFFSET_SHIFT 0 396 + 397 + #define V3D_CSD_QUEUED_CFG2 0x0090c 398 + # define V3D_CSD_QUEUED_CFG2_NUM_WGS_Z_MASK V3D_MASK(31, 16) 399 + # define V3D_CSD_QUEUED_CFG2_NUM_WGS_Z_SHIFT 16 400 + # define V3D_CSD_QUEUED_CFG2_WG_Z_OFFSET_MASK V3D_MASK(15, 0) 401 + # define V3D_CSD_QUEUED_CFG2_WG_Z_OFFSET_SHIFT 0 402 + 403 + #define V3D_CSD_QUEUED_CFG3 0x00910 404 + # define V3D_CSD_QUEUED_CFG3_OVERLAP_WITH_PREV BIT(26) 405 + # define V3D_CSD_QUEUED_CFG3_MAX_SG_ID_MASK V3D_MASK(25, 20) 406 + # define V3D_CSD_QUEUED_CFG3_MAX_SG_ID_SHIFT 20 407 + # define V3D_CSD_QUEUED_CFG3_BATCHES_PER_SG_M1_MASK V3D_MASK(19, 12) 408 + # define 
V3D_CSD_QUEUED_CFG3_BATCHES_PER_SG_M1_SHIFT 12 409 + # define V3D_CSD_QUEUED_CFG3_WGS_PER_SG_MASK V3D_MASK(11, 8) 410 + # define V3D_CSD_QUEUED_CFG3_WGS_PER_SG_SHIFT 8 411 + # define V3D_CSD_QUEUED_CFG3_WG_SIZE_MASK V3D_MASK(7, 0) 412 + # define V3D_CSD_QUEUED_CFG3_WG_SIZE_SHIFT 0 413 + 414 + /* Number of batches, minus 1 */ 415 + #define V3D_CSD_QUEUED_CFG4 0x00914 416 + 417 + /* Shader address, pnan, singleseg, threading, like a shader record. */ 418 + #define V3D_CSD_QUEUED_CFG5 0x00918 419 + 420 + /* Uniforms address (4 byte aligned) */ 421 + #define V3D_CSD_QUEUED_CFG6 0x0091c 422 + 423 + #define V3D_CSD_CURRENT_CFG0 0x00920 424 + #define V3D_CSD_CURRENT_CFG1 0x00924 425 + #define V3D_CSD_CURRENT_CFG2 0x00928 426 + #define V3D_CSD_CURRENT_CFG3 0x0092c 427 + #define V3D_CSD_CURRENT_CFG4 0x00930 428 + #define V3D_CSD_CURRENT_CFG5 0x00934 429 + #define V3D_CSD_CURRENT_CFG6 0x00938 430 + 431 + #define V3D_CSD_CURRENT_ID0 0x0093c 432 + # define V3D_CSD_CURRENT_ID0_WG_X_MASK V3D_MASK(31, 16) 433 + # define V3D_CSD_CURRENT_ID0_WG_X_SHIFT 16 434 + # define V3D_CSD_CURRENT_ID0_WG_IN_SG_MASK V3D_MASK(11, 8) 435 + # define V3D_CSD_CURRENT_ID0_WG_IN_SG_SHIFT 8 436 + # define V3D_CSD_CURRENT_ID0_L_IDX_MASK V3D_MASK(7, 0) 437 + # define V3D_CSD_CURRENT_ID0_L_IDX_SHIFT 0 438 + 439 + #define V3D_CSD_CURRENT_ID1 0x00940 440 + # define V3D_CSD_CURRENT_ID0_WG_Z_MASK V3D_MASK(31, 16) 441 + # define V3D_CSD_CURRENT_ID0_WG_Z_SHIFT 16 442 + # define V3D_CSD_CURRENT_ID0_WG_Y_MASK V3D_MASK(15, 0) 443 + # define V3D_CSD_CURRENT_ID0_WG_Y_SHIFT 0 444 + 445 + #define V3D_ERR_FDBGO 0x00f04 446 + #define V3D_ERR_FDBGB 0x00f08 447 + #define V3D_ERR_FDBGR 0x00f0c 448 + 449 + #define V3D_ERR_FDBGS 0x00f10 450 + # define V3D_ERR_FDBGS_INTERPZ_IP_STALL BIT(17) 451 + # define V3D_ERR_FDBGS_DEPTHO_FIFO_IP_STALL BIT(16) 452 + # define V3D_ERR_FDBGS_XYNRM_IP_STALL BIT(14) 453 + # define V3D_ERR_FDBGS_EZREQ_FIFO_OP_VALID BIT(13) 454 + # define V3D_ERR_FDBGS_QXYF_FIFO_OP_VALID BIT(12) 455 + # define V3D_ERR_FDBGS_QXYF_FIFO_OP_LAST BIT(11) 456 + # define V3D_ERR_FDBGS_EZTEST_ANYQVALID BIT(7) 457 + # define V3D_ERR_FDBGS_EZTEST_PASS BIT(6) 458 + # define V3D_ERR_FDBGS_EZTEST_QREADY BIT(5) 459 + # define V3D_ERR_FDBGS_EZTEST_VLF_OKNOVALID BIT(4) 460 + # define V3D_ERR_FDBGS_EZTEST_QSTALL BIT(3) 461 + # define V3D_ERR_FDBGS_EZTEST_IP_VLFSTALL BIT(2) 462 + # define V3D_ERR_FDBGS_EZTEST_IP_PRSTALL BIT(1) 463 + # define V3D_ERR_FDBGS_EZTEST_IP_QSTALL BIT(0) 464 + 465 + #define V3D_ERR_STAT 0x00f20 466 + # define V3D_ERR_L2CARE BIT(15) 467 + # define V3D_ERR_VCMBE BIT(14) 468 + # define V3D_ERR_VCMRE BIT(13) 469 + # define V3D_ERR_VCDI BIT(12) 470 + # define V3D_ERR_VCDE BIT(11) 471 + # define V3D_ERR_VDWE BIT(10) 472 + # define V3D_ERR_VPMEAS BIT(9) 473 + # define V3D_ERR_VPMEFNA BIT(8) 474 + # define V3D_ERR_VPMEWNA BIT(7) 475 + # define V3D_ERR_VPMERNA BIT(6) 476 + # define V3D_ERR_VPMERR BIT(5) 477 + # define V3D_ERR_VPMEWR BIT(4) 478 + # define V3D_ERR_VPAERRGL BIT(3) 479 + # define V3D_ERR_VPAEBRGL BIT(2) 480 + # define V3D_ERR_VPAERGS BIT(1) 481 + # define V3D_ERR_VPAEABB BIT(0) 390 482 391 483 #endif /* V3D_REGS_H */
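The V3D_MMU_DEBUG_INFO fields added above are what let the driver learn the MMU's virtual and physical address widths at probe time (v3d->va_width feeds the fault decode in v3d_irq.c). A sketch of the field extraction, assuming the driver's usual V3D_READ() accessor; the raw 4-bit fields are biased into full bit widths in the driver proper, and that bias is not visible in this hunk:

	u32 mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
	u32 va_field = (mmu_debug & V3D_MMU_VA_WIDTH_MASK) >>
		       V3D_MMU_VA_WIDTH_SHIFT;
	u32 pa_field = (mmu_debug & V3D_MMU_PA_WIDTH_MASK) >>
		       V3D_MMU_PA_WIDTH_SHIFT;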
+253 -135
drivers/gpu/drm/v3d/v3d_sched.c
··· 30 30 return container_of(sched_job, struct v3d_job, base); 31 31 } 32 32 33 + static struct v3d_bin_job * 34 + to_bin_job(struct drm_sched_job *sched_job) 35 + { 36 + return container_of(sched_job, struct v3d_bin_job, base.base); 37 + } 38 + 39 + static struct v3d_render_job * 40 + to_render_job(struct drm_sched_job *sched_job) 41 + { 42 + return container_of(sched_job, struct v3d_render_job, base.base); 43 + } 44 + 33 45 static struct v3d_tfu_job * 34 46 to_tfu_job(struct drm_sched_job *sched_job) 35 47 { 36 - return container_of(sched_job, struct v3d_tfu_job, base); 48 + return container_of(sched_job, struct v3d_tfu_job, base.base); 49 + } 50 + 51 + static struct v3d_csd_job * 52 + to_csd_job(struct drm_sched_job *sched_job) 53 + { 54 + return container_of(sched_job, struct v3d_csd_job, base.base); 37 55 } 38 56 39 57 static void ··· 60 42 struct v3d_job *job = to_v3d_job(sched_job); 61 43 62 44 drm_sched_job_cleanup(sched_job); 63 - 64 - v3d_exec_put(job->exec); 65 - } 66 - 67 - static void 68 - v3d_tfu_job_free(struct drm_sched_job *sched_job) 69 - { 70 - struct v3d_tfu_job *job = to_tfu_job(sched_job); 71 - 72 - drm_sched_job_cleanup(sched_job); 73 - 74 - v3d_tfu_job_put(job); 45 + v3d_job_put(job); 75 46 } 76 47 77 48 /** 78 - * Returns the fences that the bin or render job depends on, one by one. 79 - * v3d_job_run() won't be called until all of them have been signaled. 49 + * Returns the fences that the job depends on, one by one. 50 + * 51 + * If placed in the scheduler's .dependency method, the corresponding 52 + * .run_job won't be called until all of them have been signaled. 80 53 */ 81 54 static struct dma_fence * 82 55 v3d_job_dependency(struct drm_sched_job *sched_job, 83 56 struct drm_sched_entity *s_entity) 84 57 { 85 58 struct v3d_job *job = to_v3d_job(sched_job); 86 - struct v3d_exec_info *exec = job->exec; 87 - enum v3d_queue q = job == &exec->bin ? V3D_BIN : V3D_RENDER; 88 - struct dma_fence *fence; 89 - 90 - fence = job->in_fence; 91 - if (fence) { 92 - job->in_fence = NULL; 93 - return fence; 94 - } 95 - 96 - if (q == V3D_RENDER) { 97 - /* If we had a bin job, the render job definitely depends on 98 - * it. We first have to wait for bin to be scheduled, so that 99 - * its done_fence is created. 100 - */ 101 - fence = exec->bin_done_fence; 102 - if (fence) { 103 - exec->bin_done_fence = NULL; 104 - return fence; 105 - } 106 - } 107 59 108 60 /* XXX: Wait on a fence for switching the GMP if necessary, 109 61 * and then do so. 110 62 */ 111 63 112 - return fence; 113 - } 114 - 115 - /** 116 - * Returns the fences that the TFU job depends on, one by one. 117 - * v3d_tfu_job_run() won't be called until all of them have been 118 - * signaled. 119 - */ 120 - static struct dma_fence * 121 - v3d_tfu_job_dependency(struct drm_sched_job *sched_job, 122 - struct drm_sched_entity *s_entity) 123 - { 124 - struct v3d_tfu_job *job = to_tfu_job(sched_job); 125 - struct dma_fence *fence; 126 - 127 - fence = job->in_fence; 128 - if (fence) { 129 - job->in_fence = NULL; 130 - return fence; 131 - } 64 + if (!xa_empty(&job->deps)) 65 + return xa_erase(&job->deps, job->last_dep++); 132 66 133 67 return NULL; 134 68 } 135 69 136 - static struct dma_fence *v3d_job_run(struct drm_sched_job *sched_job) 70 + static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job) 137 71 { 138 - struct v3d_job *job = to_v3d_job(sched_job); 139 - struct v3d_exec_info *exec = job->exec; 140 - enum v3d_queue q = job == &exec->bin ? 
V3D_BIN : V3D_RENDER; 141 - struct v3d_dev *v3d = exec->v3d; 72 + struct v3d_bin_job *job = to_bin_job(sched_job); 73 + struct v3d_dev *v3d = job->base.v3d; 142 74 struct drm_device *dev = &v3d->drm; 143 75 struct dma_fence *fence; 144 76 unsigned long irqflags; 145 77 146 - if (unlikely(job->base.s_fence->finished.error)) 78 + if (unlikely(job->base.base.s_fence->finished.error)) 147 79 return NULL; 148 80 149 81 /* Lock required around bin_job update vs 150 82 * v3d_overflow_mem_work(). 151 83 */ 152 84 spin_lock_irqsave(&v3d->job_lock, irqflags); 153 - if (q == V3D_BIN) { 154 - v3d->bin_job = job->exec; 155 - 156 - /* Clear out the overflow allocation, so we don't 157 - * reuse the overflow attached to a previous job. 158 - */ 159 - V3D_CORE_WRITE(0, V3D_PTB_BPOS, 0); 160 - } else { 161 - v3d->render_job = job->exec; 162 - } 85 + v3d->bin_job = job; 86 + /* Clear out the overflow allocation, so we don't 87 + * reuse the overflow attached to a previous job. 88 + */ 89 + V3D_CORE_WRITE(0, V3D_PTB_BPOS, 0); 163 90 spin_unlock_irqrestore(&v3d->job_lock, irqflags); 164 91 165 - /* Can we avoid this flush when q==RENDER? We need to be 166 - * careful of scheduling, though -- imagine job0 rendering to 167 - * texture and job1 reading, and them being executed as bin0, 168 - * bin1, render0, render1, so that render1's flush at bin time 169 - * wasn't enough. 170 - */ 171 92 v3d_invalidate_caches(v3d); 172 93 173 - fence = v3d_fence_create(v3d, q); 94 + fence = v3d_fence_create(v3d, V3D_BIN); 174 95 if (IS_ERR(fence)) 175 96 return NULL; 176 97 177 - if (job->irq_fence) 178 - dma_fence_put(job->irq_fence); 179 - job->irq_fence = dma_fence_get(fence); 98 + if (job->base.irq_fence) 99 + dma_fence_put(job->base.irq_fence); 100 + job->base.irq_fence = dma_fence_get(fence); 180 101 181 - trace_v3d_submit_cl(dev, q == V3D_RENDER, to_v3d_fence(fence)->seqno, 102 + trace_v3d_submit_cl(dev, false, to_v3d_fence(fence)->seqno, 182 103 job->start, job->end); 183 - 184 - if (q == V3D_BIN) { 185 - if (exec->qma) { 186 - V3D_CORE_WRITE(0, V3D_CLE_CT0QMA, exec->qma); 187 - V3D_CORE_WRITE(0, V3D_CLE_CT0QMS, exec->qms); 188 - } 189 - if (exec->qts) { 190 - V3D_CORE_WRITE(0, V3D_CLE_CT0QTS, 191 - V3D_CLE_CT0QTS_ENABLE | 192 - exec->qts); 193 - } 194 - } else { 195 - /* XXX: Set the QCFG */ 196 - } 197 104 198 105 /* Set the current and end address of the control list. 199 106 * Writing the end register is what starts the job. 200 107 */ 201 - V3D_CORE_WRITE(0, V3D_CLE_CTNQBA(q), job->start); 202 - V3D_CORE_WRITE(0, V3D_CLE_CTNQEA(q), job->end); 108 + if (job->qma) { 109 + V3D_CORE_WRITE(0, V3D_CLE_CT0QMA, job->qma); 110 + V3D_CORE_WRITE(0, V3D_CLE_CT0QMS, job->qms); 111 + } 112 + if (job->qts) { 113 + V3D_CORE_WRITE(0, V3D_CLE_CT0QTS, 114 + V3D_CLE_CT0QTS_ENABLE | 115 + job->qts); 116 + } 117 + V3D_CORE_WRITE(0, V3D_CLE_CT0QBA, job->start); 118 + V3D_CORE_WRITE(0, V3D_CLE_CT0QEA, job->end); 119 + 120 + return fence; 121 + } 122 + 123 + static struct dma_fence *v3d_render_job_run(struct drm_sched_job *sched_job) 124 + { 125 + struct v3d_render_job *job = to_render_job(sched_job); 126 + struct v3d_dev *v3d = job->base.v3d; 127 + struct drm_device *dev = &v3d->drm; 128 + struct dma_fence *fence; 129 + 130 + if (unlikely(job->base.base.s_fence->finished.error)) 131 + return NULL; 132 + 133 + v3d->render_job = job; 134 + 135 + /* Can we avoid this flush? 
We need to be careful of 136 + * scheduling, though -- imagine job0 rendering to texture and 137 + * job1 reading, and them being executed as bin0, bin1, 138 + * render0, render1, so that render1's flush at bin time 139 + * wasn't enough. 140 + */ 141 + v3d_invalidate_caches(v3d); 142 + 143 + fence = v3d_fence_create(v3d, V3D_RENDER); 144 + if (IS_ERR(fence)) 145 + return NULL; 146 + 147 + if (job->base.irq_fence) 148 + dma_fence_put(job->base.irq_fence); 149 + job->base.irq_fence = dma_fence_get(fence); 150 + 151 + trace_v3d_submit_cl(dev, true, to_v3d_fence(fence)->seqno, 152 + job->start, job->end); 153 + 154 + /* XXX: Set the QCFG */ 155 + 156 + /* Set the current and end address of the control list. 157 + * Writing the end register is what starts the job. 158 + */ 159 + V3D_CORE_WRITE(0, V3D_CLE_CT1QBA, job->start); 160 + V3D_CORE_WRITE(0, V3D_CLE_CT1QEA, job->end); 203 161 204 162 return fence; 205 163 } ··· 184 190 v3d_tfu_job_run(struct drm_sched_job *sched_job) 185 191 { 186 192 struct v3d_tfu_job *job = to_tfu_job(sched_job); 187 - struct v3d_dev *v3d = job->v3d; 193 + struct v3d_dev *v3d = job->base.v3d; 188 194 struct drm_device *dev = &v3d->drm; 189 195 struct dma_fence *fence; 190 196 ··· 193 199 return NULL; 194 200 195 201 v3d->tfu_job = job; 196 - if (job->irq_fence) 197 - dma_fence_put(job->irq_fence); 198 - job->irq_fence = dma_fence_get(fence); 202 + if (job->base.irq_fence) 203 + dma_fence_put(job->base.irq_fence); 204 + job->base.irq_fence = dma_fence_get(fence); 199 205 200 206 trace_v3d_submit_tfu(dev, to_v3d_fence(fence)->seqno); 201 207 ··· 217 223 return fence; 218 224 } 219 225 226 + static struct dma_fence * 227 + v3d_csd_job_run(struct drm_sched_job *sched_job) 228 + { 229 + struct v3d_csd_job *job = to_csd_job(sched_job); 230 + struct v3d_dev *v3d = job->base.v3d; 231 + struct drm_device *dev = &v3d->drm; 232 + struct dma_fence *fence; 233 + int i; 234 + 235 + v3d->csd_job = job; 236 + 237 + v3d_invalidate_caches(v3d); 238 + 239 + fence = v3d_fence_create(v3d, V3D_CSD); 240 + if (IS_ERR(fence)) 241 + return NULL; 242 + 243 + if (job->base.irq_fence) 244 + dma_fence_put(job->base.irq_fence); 245 + job->base.irq_fence = dma_fence_get(fence); 246 + 247 + trace_v3d_submit_csd(dev, to_v3d_fence(fence)->seqno); 248 + 249 + for (i = 1; i <= 6; i++) 250 + V3D_CORE_WRITE(0, V3D_CSD_QUEUED_CFG0 + 4 * i, job->args.cfg[i]); 251 + /* CFG0 write kicks off the job. */ 252 + V3D_CORE_WRITE(0, V3D_CSD_QUEUED_CFG0, job->args.cfg[0]); 253 + 254 + return fence; 255 + } 256 + 257 + static struct dma_fence * 258 + v3d_cache_clean_job_run(struct drm_sched_job *sched_job) 259 + { 260 + struct v3d_job *job = to_v3d_job(sched_job); 261 + struct v3d_dev *v3d = job->v3d; 262 + 263 + v3d_clean_caches(v3d); 264 + 265 + return NULL; 266 + } 267 + 220 268 static void 221 269 v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job) 222 270 { ··· 268 232 269 233 /* block scheduler */ 270 234 for (q = 0; q < V3D_MAX_QUEUES; q++) 271 - drm_sched_stop(&v3d->queue[q].sched); 235 + drm_sched_stop(&v3d->queue[q].sched, sched_job); 272 236 273 237 if (sched_job) 274 238 drm_sched_increase_karma(sched_job); ··· 287 251 mutex_unlock(&v3d->reset_lock); 288 252 } 289 253 254 + /* If the current address or return address have changed, then the GPU 255 + * has probably made progress and we should delay the reset. This 256 + * could fail if the GPU got in an infinite loop in the CL, but that 257 + * is pretty unlikely outside of an i-g-t testcase. 
258 + */ 290 259 static void 291 - v3d_job_timedout(struct drm_sched_job *sched_job) 260 + v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q, 261 + u32 *timedout_ctca, u32 *timedout_ctra) 292 262 { 293 263 struct v3d_job *job = to_v3d_job(sched_job); 294 - struct v3d_exec_info *exec = job->exec; 295 - struct v3d_dev *v3d = exec->v3d; 296 - enum v3d_queue job_q = job == &exec->bin ? V3D_BIN : V3D_RENDER; 297 - u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(job_q)); 298 - u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(job_q)); 264 + struct v3d_dev *v3d = job->v3d; 265 + u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(q)); 266 + u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(q)); 299 267 300 - /* If the current address or return address have changed, then 301 - * the GPU has probably made progress and we should delay the 302 - * reset. This could fail if the GPU got in an infinite loop 303 - * in the CL, but that is pretty unlikely outside of an i-g-t 304 - * testcase. 305 - */ 306 - if (job->timedout_ctca != ctca || job->timedout_ctra != ctra) { 307 - job->timedout_ctca = ctca; 308 - job->timedout_ctra = ctra; 268 + if (*timedout_ctca != ctca || *timedout_ctra != ctra) { 269 + *timedout_ctca = ctca; 270 + *timedout_ctra = ctra; 309 271 return; 310 272 } 311 273 ··· 311 277 } 312 278 313 279 static void 314 - v3d_tfu_job_timedout(struct drm_sched_job *sched_job) 280 + v3d_bin_job_timedout(struct drm_sched_job *sched_job) 315 281 { 316 - struct v3d_tfu_job *job = to_tfu_job(sched_job); 282 + struct v3d_bin_job *job = to_bin_job(sched_job); 283 + 284 + v3d_cl_job_timedout(sched_job, V3D_BIN, 285 + &job->timedout_ctca, &job->timedout_ctra); 286 + } 287 + 288 + static void 289 + v3d_render_job_timedout(struct drm_sched_job *sched_job) 290 + { 291 + struct v3d_render_job *job = to_render_job(sched_job); 292 + 293 + v3d_cl_job_timedout(sched_job, V3D_RENDER, 294 + &job->timedout_ctca, &job->timedout_ctra); 295 + } 296 + 297 + static void 298 + v3d_generic_job_timedout(struct drm_sched_job *sched_job) 299 + { 300 + struct v3d_job *job = to_v3d_job(sched_job); 317 301 318 302 v3d_gpu_reset_for_timeout(job->v3d, sched_job); 319 303 } 320 304 321 - static const struct drm_sched_backend_ops v3d_sched_ops = { 305 + static void 306 + v3d_csd_job_timedout(struct drm_sched_job *sched_job) 307 + { 308 + struct v3d_csd_job *job = to_csd_job(sched_job); 309 + struct v3d_dev *v3d = job->base.v3d; 310 + u32 batches = V3D_CORE_READ(0, V3D_CSD_CURRENT_CFG4); 311 + 312 + /* If we've made progress, skip reset and let the timer get 313 + * rearmed. 
314 + */ 315 + if (job->timedout_batches != batches) { 316 + job->timedout_batches = batches; 317 + return; 318 + } 319 + 320 + v3d_gpu_reset_for_timeout(v3d, sched_job); 321 + } 322 + 323 + static const struct drm_sched_backend_ops v3d_bin_sched_ops = { 322 324 .dependency = v3d_job_dependency, 323 - .run_job = v3d_job_run, 324 - .timedout_job = v3d_job_timedout, 325 - .free_job = v3d_job_free 325 + .run_job = v3d_bin_job_run, 326 + .timedout_job = v3d_bin_job_timedout, 327 + .free_job = v3d_job_free, 328 + }; 329 + 330 + static const struct drm_sched_backend_ops v3d_render_sched_ops = { 331 + .dependency = v3d_job_dependency, 332 + .run_job = v3d_render_job_run, 333 + .timedout_job = v3d_render_job_timedout, 334 + .free_job = v3d_job_free, 326 335 }; 327 336 328 337 static const struct drm_sched_backend_ops v3d_tfu_sched_ops = { 329 - .dependency = v3d_tfu_job_dependency, 338 + .dependency = v3d_job_dependency, 330 339 .run_job = v3d_tfu_job_run, 331 - .timedout_job = v3d_tfu_job_timedout, 332 - .free_job = v3d_tfu_job_free 340 + .timedout_job = v3d_generic_job_timedout, 341 + .free_job = v3d_job_free, 342 + }; 343 + 344 + static const struct drm_sched_backend_ops v3d_csd_sched_ops = { 345 + .dependency = v3d_job_dependency, 346 + .run_job = v3d_csd_job_run, 347 + .timedout_job = v3d_csd_job_timedout, 348 + .free_job = v3d_job_free 349 + }; 350 + 351 + static const struct drm_sched_backend_ops v3d_cache_clean_sched_ops = { 352 + .dependency = v3d_job_dependency, 353 + .run_job = v3d_cache_clean_job_run, 354 + .timedout_job = v3d_generic_job_timedout, 355 + .free_job = v3d_job_free 333 356 }; 334 357 335 358 int ··· 398 307 int ret; 399 308 400 309 ret = drm_sched_init(&v3d->queue[V3D_BIN].sched, 401 - &v3d_sched_ops, 310 + &v3d_bin_sched_ops, 402 311 hw_jobs_limit, job_hang_limit, 403 312 msecs_to_jiffies(hang_limit_ms), 404 313 "v3d_bin"); ··· 408 317 } 409 318 410 319 ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched, 411 - &v3d_sched_ops, 320 + &v3d_render_sched_ops, 412 321 hw_jobs_limit, job_hang_limit, 413 322 msecs_to_jiffies(hang_limit_ms), 414 323 "v3d_render"); 415 324 if (ret) { 416 325 dev_err(v3d->dev, "Failed to create render scheduler: %d.", 417 326 ret); 418 - drm_sched_fini(&v3d->queue[V3D_BIN].sched); 327 + v3d_sched_fini(v3d); 419 328 return ret; 420 329 } 421 330 ··· 427 336 if (ret) { 428 337 dev_err(v3d->dev, "Failed to create TFU scheduler: %d.", 429 338 ret); 430 - drm_sched_fini(&v3d->queue[V3D_RENDER].sched); 431 - drm_sched_fini(&v3d->queue[V3D_BIN].sched); 339 + v3d_sched_fini(v3d); 432 340 return ret; 341 + } 342 + 343 + if (v3d_has_csd(v3d)) { 344 + ret = drm_sched_init(&v3d->queue[V3D_CSD].sched, 345 + &v3d_csd_sched_ops, 346 + hw_jobs_limit, job_hang_limit, 347 + msecs_to_jiffies(hang_limit_ms), 348 + "v3d_csd"); 349 + if (ret) { 350 + dev_err(v3d->dev, "Failed to create CSD scheduler: %d.", 351 + ret); 352 + v3d_sched_fini(v3d); 353 + return ret; 354 + } 355 + 356 + ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, 357 + &v3d_cache_clean_sched_ops, 358 + hw_jobs_limit, job_hang_limit, 359 + msecs_to_jiffies(hang_limit_ms), 360 + "v3d_cache_clean"); 361 + if (ret) { 362 + dev_err(v3d->dev, "Failed to create CACHE_CLEAN scheduler: %d.", 363 + ret); 364 + v3d_sched_fini(v3d); 365 + return ret; 366 + } 433 367 } 434 368 435 369 return 0; ··· 465 349 { 466 350 enum v3d_queue q; 467 351 468 - for (q = 0; q < V3D_MAX_QUEUES; q++) 469 - drm_sched_fini(&v3d->queue[q].sched); 352 + for (q = 0; q < V3D_MAX_QUEUES; q++) { 353 + if (v3d->queue[q].sched.ready) 
354 + drm_sched_fini(&v3d->queue[q].sched); 355 + } 470 356 }
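The timeout handlers above now share one structure: sample a hardware progress indicator (CTnCA/CTnRA for the control lists, CURRENT_CFG4's batch counter for the CSD) and only escalate to a full GPU reset when that indicator has not moved since the previous timeout. A minimal sketch of the pattern, with the state names hypothetical:

static void v3d_timedout_check_progress(struct v3d_dev *v3d,
					struct drm_sched_job *sched_job,
					u32 *last_seen, u32 now)
{
	if (*last_seen != now) {
		/* The engine advanced: record where it got to and
		 * let the scheduler rearm the timeout timer.
		 */
		*last_seen = now;
		return;
	}

	/* No progress since the last timeout: treat the job as hung. */
	v3d_gpu_reset_for_timeout(v3d, sched_job);
}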
+94
drivers/gpu/drm/v3d/v3d_trace.h
··· 124 124 __entry->seqno) 125 125 ); 126 126 127 + TRACE_EVENT(v3d_csd_irq, 128 + TP_PROTO(struct drm_device *dev, 129 + uint64_t seqno), 130 + TP_ARGS(dev, seqno), 131 + 132 + TP_STRUCT__entry( 133 + __field(u32, dev) 134 + __field(u64, seqno) 135 + ), 136 + 137 + TP_fast_assign( 138 + __entry->dev = dev->primary->index; 139 + __entry->seqno = seqno; 140 + ), 141 + 142 + TP_printk("dev=%u, seqno=%llu", 143 + __entry->dev, 144 + __entry->seqno) 145 + ); 146 + 127 147 TRACE_EVENT(v3d_submit_tfu_ioctl, 128 148 TP_PROTO(struct drm_device *dev, u32 iia), 129 149 TP_ARGS(dev, iia), ··· 181 161 TP_printk("dev=%u, seqno=%llu", 182 162 __entry->dev, 183 163 __entry->seqno) 164 + ); 165 + 166 + TRACE_EVENT(v3d_submit_csd_ioctl, 167 + TP_PROTO(struct drm_device *dev, u32 cfg5, u32 cfg6), 168 + TP_ARGS(dev, cfg5, cfg6), 169 + 170 + TP_STRUCT__entry( 171 + __field(u32, dev) 172 + __field(u32, cfg5) 173 + __field(u32, cfg6) 174 + ), 175 + 176 + TP_fast_assign( 177 + __entry->dev = dev->primary->index; 178 + __entry->cfg5 = cfg5; 179 + __entry->cfg6 = cfg6; 180 + ), 181 + 182 + TP_printk("dev=%u, CFG5 0x%08x, CFG6 0x%08x", 183 + __entry->dev, 184 + __entry->cfg5, 185 + __entry->cfg6) 186 + ); 187 + 188 + TRACE_EVENT(v3d_submit_csd, 189 + TP_PROTO(struct drm_device *dev, 190 + uint64_t seqno), 191 + TP_ARGS(dev, seqno), 192 + 193 + TP_STRUCT__entry( 194 + __field(u32, dev) 195 + __field(u64, seqno) 196 + ), 197 + 198 + TP_fast_assign( 199 + __entry->dev = dev->primary->index; 200 + __entry->seqno = seqno; 201 + ), 202 + 203 + TP_printk("dev=%u, seqno=%llu", 204 + __entry->dev, 205 + __entry->seqno) 206 + ); 207 + 208 + TRACE_EVENT(v3d_cache_clean_begin, 209 + TP_PROTO(struct drm_device *dev), 210 + TP_ARGS(dev), 211 + 212 + TP_STRUCT__entry( 213 + __field(u32, dev) 214 + ), 215 + 216 + TP_fast_assign( 217 + __entry->dev = dev->primary->index; 218 + ), 219 + 220 + TP_printk("dev=%u", 221 + __entry->dev) 222 + ); 223 + 224 + TRACE_EVENT(v3d_cache_clean_end, 225 + TP_PROTO(struct drm_device *dev), 226 + TP_ARGS(dev), 227 + 228 + TP_STRUCT__entry( 229 + __field(u32, dev) 230 + ), 231 + 232 + TP_fast_assign( 233 + __entry->dev = dev->primary->index; 234 + ), 235 + 236 + TP_printk("dev=%u", 237 + __entry->dev) 184 238 ); 185 239 186 240 TRACE_EVENT(v3d_reset_begin,
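The new v3d_cache_clean_begin/end pair presumably brackets the synchronous L2T clean performed by the new V3D_CACHE_CLEAN queue (the call site is not part of this hunk), along the lines of:

	trace_v3d_cache_clean_begin(&v3d->drm);
	/* ...issue the L2T FLM_CLEAN and wait for it to complete... */
	trace_v3d_cache_clean_end(&v3d->drm);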
+1 -1
drivers/gpu/drm/vboxvideo/Kconfig
··· 3 3 tristate "Virtual Box Graphics Card" 4 4 depends on DRM && X86 && PCI 5 5 select DRM_KMS_HELPER 6 - select DRM_TTM 6 + select DRM_VRAM_HELPER 7 7 select GENERIC_ALLOCATOR 8 8 help 9 9 This is a KMS driver for the virtual Graphics Card used in
+2 -10
drivers/gpu/drm/vboxvideo/vbox_drv.c
··· 191 191 192 192 static const struct file_operations vbox_fops = { 193 193 .owner = THIS_MODULE, 194 - .open = drm_open, 195 - .release = drm_release, 196 - .unlocked_ioctl = drm_ioctl, 197 - .compat_ioctl = drm_compat_ioctl, 198 - .mmap = vbox_mmap, 199 - .poll = drm_poll, 200 - .read = drm_read, 194 + DRM_VRAM_MM_FILE_OPERATIONS 201 195 }; 202 196 203 197 static struct drm_driver driver = { ··· 209 215 .minor = DRIVER_MINOR, 210 216 .patchlevel = DRIVER_PATCHLEVEL, 211 217 212 - .gem_free_object_unlocked = vbox_gem_free_object, 213 - .dumb_create = vbox_dumb_create, 214 - .dumb_map_offset = vbox_dumb_mmap_offset, 218 + DRM_GEM_VRAM_DRIVER, 215 219 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 216 220 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 217 221 .gem_prime_export = drm_gem_prime_export,
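The conversion pattern here is the interesting part: the fops and driver callbacks that vboxvideo used to spell out by hand are replaced by two helper macros, so a VRAM-helper driver reduces to roughly this shape (the exact macro expansions live in this series' drm_vram_mm_helper.h and drm_gem_vram_helper.h headers; the comments reflect only the callbacks the diff shows being removed):

static const struct file_operations sketch_fops = {
	.owner = THIS_MODULE,
	DRM_VRAM_MM_FILE_OPERATIONS	/* open/release/ioctl/mmap/poll/read */
};

static struct drm_driver sketch_driver = {
	/* ...features, name, ioctl table... */
	DRM_GEM_VRAM_DRIVER,	/* gem free, dumb_create, dumb mmap offset */
};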
+2 -73
drivers/gpu/drm/vboxvideo/vbox_drv.h
··· 18 18 #include <drm/drm_encoder.h> 19 19 #include <drm/drm_fb_helper.h> 20 20 #include <drm/drm_gem.h> 21 + #include <drm/drm_gem_vram_helper.h> 21 22 22 - #include <drm/ttm/ttm_bo_api.h> 23 - #include <drm/ttm/ttm_bo_driver.h> 24 - #include <drm/ttm/ttm_placement.h> 25 - #include <drm/ttm/ttm_memory.h> 26 - #include <drm/ttm/ttm_module.h> 23 + #include <drm/drm_vram_mm_helper.h> 27 24 28 25 #include "vboxvideo_guest.h" 29 26 #include "vboxvideo_vbe.h" ··· 74 77 75 78 int fb_mtrr; 76 79 77 - struct { 78 - struct ttm_bo_device bdev; 79 - } ttm; 80 - 81 80 struct mutex hw_mutex; /* protects modeset and accel/vbva accesses */ 82 81 struct work_struct hotplug_work; 83 82 u32 input_mapping_width; ··· 88 95 89 96 #undef CURSOR_PIXEL_COUNT 90 97 #undef CURSOR_DATA_SIZE 91 - 92 - struct vbox_gem_object; 93 98 94 99 struct vbox_connector { 95 100 struct drm_connector base; ··· 161 170 struct drm_fb_helper_surface_size *sizes); 162 171 void vbox_fbdev_fini(struct vbox_private *vbox); 163 172 164 - struct vbox_bo { 165 - struct ttm_buffer_object bo; 166 - struct ttm_placement placement; 167 - struct ttm_bo_kmap_obj kmap; 168 - struct drm_gem_object gem; 169 - struct ttm_place placements[3]; 170 - int pin_count; 171 - }; 172 - 173 - #define gem_to_vbox_bo(gobj) container_of((gobj), struct vbox_bo, gem) 174 - 175 - static inline struct vbox_bo *vbox_bo(struct ttm_buffer_object *bo) 176 - { 177 - return container_of(bo, struct vbox_bo, bo); 178 - } 179 - 180 - #define to_vbox_obj(x) container_of(x, struct vbox_gem_object, base) 181 - 182 - static inline u64 vbox_bo_gpu_offset(struct vbox_bo *bo) 183 - { 184 - return bo->bo.offset; 185 - } 186 - 187 - int vbox_dumb_create(struct drm_file *file, 188 - struct drm_device *dev, 189 - struct drm_mode_create_dumb *args); 190 - 191 - void vbox_gem_free_object(struct drm_gem_object *obj); 192 - int vbox_dumb_mmap_offset(struct drm_file *file, 193 - struct drm_device *dev, 194 - u32 handle, u64 *offset); 195 - 196 173 int vbox_mm_init(struct vbox_private *vbox); 197 174 void vbox_mm_fini(struct vbox_private *vbox); 198 175 199 - int vbox_bo_create(struct vbox_private *vbox, int size, int align, 200 - u32 flags, struct vbox_bo **pvboxbo); 201 - 202 176 int vbox_gem_create(struct vbox_private *vbox, 203 177 u32 size, bool iskernel, struct drm_gem_object **obj); 204 - 205 - int vbox_bo_pin(struct vbox_bo *bo, u32 pl_flag); 206 - int vbox_bo_unpin(struct vbox_bo *bo); 207 - 208 - static inline int vbox_bo_reserve(struct vbox_bo *bo, bool no_wait) 209 - { 210 - int ret; 211 - 212 - ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); 213 - if (ret) { 214 - if (ret != -ERESTARTSYS && ret != -EBUSY) 215 - DRM_ERROR("reserve failed %p\n", bo); 216 - return ret; 217 - } 218 - return 0; 219 - } 220 - 221 - static inline void vbox_bo_unreserve(struct vbox_bo *bo) 222 - { 223 - ttm_bo_unreserve(&bo->bo); 224 - } 225 - 226 - void vbox_ttm_placement(struct vbox_bo *bo, int domain); 227 - int vbox_bo_push_sysram(struct vbox_bo *bo); 228 - int vbox_mmap(struct file *filp, struct vm_area_struct *vma); 229 - void *vbox_bo_kmap(struct vbox_bo *bo); 230 - void vbox_bo_kunmap(struct vbox_bo *bo); 231 178 232 179 /* vbox_prime.c */ 233 180 int vbox_gem_prime_pin(struct drm_gem_object *obj);
+11 -11
drivers/gpu/drm/vboxvideo/vbox_fb.c
··· 51 51 struct drm_framebuffer *fb; 52 52 struct fb_info *info; 53 53 struct drm_gem_object *gobj; 54 - struct vbox_bo *bo; 54 + struct drm_gem_vram_object *gbo; 55 55 int size, ret; 56 - u64 gpu_addr; 56 + s64 gpu_addr; 57 57 u32 pitch; 58 58 59 59 mode_cmd.width = sizes->surface_width; ··· 75 75 if (ret) 76 76 return ret; 77 77 78 - bo = gem_to_vbox_bo(gobj); 78 + gbo = drm_gem_vram_of_gem(gobj); 79 79 80 - ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM); 80 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 81 81 if (ret) 82 82 return ret; 83 83 ··· 86 86 return PTR_ERR(info); 87 87 88 88 info->screen_size = size; 89 - info->screen_base = (char __iomem *)vbox_bo_kmap(bo); 89 + info->screen_base = (char __iomem *)drm_gem_vram_kmap(gbo, true, NULL); 90 90 if (IS_ERR(info->screen_base)) 91 91 return PTR_ERR(info->screen_base); 92 92 ··· 104 104 105 105 drm_fb_helper_fill_info(info, helper, sizes); 106 106 107 - gpu_addr = vbox_bo_gpu_offset(bo); 107 + gpu_addr = drm_gem_vram_offset(gbo); 108 + if (gpu_addr < 0) 109 + return (int)gpu_addr; 108 110 info->fix.smem_start = info->apertures->ranges[0].base + gpu_addr; 109 111 info->fix.smem_len = vbox->available_vram_size - gpu_addr; 110 112 ··· 134 132 drm_fb_helper_unregister_fbi(&vbox->fb_helper); 135 133 136 134 if (afb->obj) { 137 - struct vbox_bo *bo = gem_to_vbox_bo(afb->obj); 135 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(afb->obj); 138 136 139 - vbox_bo_kunmap(bo); 140 - 141 - if (bo->pin_count) 142 - vbox_bo_unpin(bo); 137 + drm_gem_vram_kunmap(gbo); 138 + drm_gem_vram_unpin(gbo); 143 139 144 140 drm_gem_object_put_unlocked(afb->obj); 145 141 afb->obj = NULL;
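Condensed from the fbdev changes above, the BO access pattern every converted call site now follows; a sketch under the assumption that the caller holds a reference on gobj (error handling kept, contents elided):

static int vram_bo_map_sketch(struct drm_gem_object *gobj)
{
	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gobj);
	void *vaddr;
	s64 gpu_addr;
	int ret;

	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
	if (ret)
		return ret;

	vaddr = drm_gem_vram_kmap(gbo, true, NULL);
	if (IS_ERR(vaddr)) {
		drm_gem_vram_unpin(gbo);
		return PTR_ERR(vaddr);
	}

	gpu_addr = drm_gem_vram_offset(gbo);	/* s64, negative on error */

	/* ...program scanout with gpu_addr, fill via vaddr... */

	drm_gem_vram_kunmap(gbo);
	drm_gem_vram_unpin(gbo);
	return gpu_addr < 0 ? (int)gpu_addr : 0;
}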
+6 -69
drivers/gpu/drm/vboxvideo/vbox_main.c
··· 274 274 int vbox_gem_create(struct vbox_private *vbox, 275 275 u32 size, bool iskernel, struct drm_gem_object **obj) 276 276 { 277 - struct vbox_bo *vboxbo; 277 + struct drm_gem_vram_object *gbo; 278 278 int ret; 279 279 280 280 *obj = NULL; ··· 283 283 if (size == 0) 284 284 return -EINVAL; 285 285 286 - ret = vbox_bo_create(vbox, size, 0, 0, &vboxbo); 287 - if (ret) { 286 + gbo = drm_gem_vram_create(&vbox->ddev, &vbox->ddev.vram_mm->bdev, 287 + size, 0, false); 288 + if (IS_ERR(gbo)) { 289 + ret = PTR_ERR(gbo); 288 290 if (ret != -ERESTARTSYS) 289 291 DRM_ERROR("failed to allocate GEM object\n"); 290 292 return ret; 291 293 } 292 294 293 - *obj = &vboxbo->gem; 295 + *obj = &gbo->gem; 294 296 295 297 return 0; 296 - } 297 - 298 - int vbox_dumb_create(struct drm_file *file, 299 - struct drm_device *dev, struct drm_mode_create_dumb *args) 300 - { 301 - struct vbox_private *vbox = 302 - container_of(dev, struct vbox_private, ddev); 303 - struct drm_gem_object *gobj; 304 - u32 handle; 305 - int ret; 306 - 307 - args->pitch = args->width * ((args->bpp + 7) / 8); 308 - args->size = args->pitch * args->height; 309 - 310 - ret = vbox_gem_create(vbox, args->size, false, &gobj); 311 - if (ret) 312 - return ret; 313 - 314 - ret = drm_gem_handle_create(file, gobj, &handle); 315 - drm_gem_object_put_unlocked(gobj); 316 - if (ret) 317 - return ret; 318 - 319 - args->handle = handle; 320 - 321 - return 0; 322 - } 323 - 324 - void vbox_gem_free_object(struct drm_gem_object *obj) 325 - { 326 - struct vbox_bo *vbox_bo = gem_to_vbox_bo(obj); 327 - 328 - ttm_bo_put(&vbox_bo->bo); 329 - } 330 - 331 - static inline u64 vbox_bo_mmap_offset(struct vbox_bo *bo) 332 - { 333 - return drm_vma_node_offset_addr(&bo->bo.vma_node); 334 - } 335 - 336 - int 337 - vbox_dumb_mmap_offset(struct drm_file *file, 338 - struct drm_device *dev, 339 - u32 handle, u64 *offset) 340 - { 341 - struct drm_gem_object *obj; 342 - int ret; 343 - struct vbox_bo *bo; 344 - 345 - mutex_lock(&dev->struct_mutex); 346 - obj = drm_gem_object_lookup(file, handle); 347 - if (!obj) { 348 - ret = -ENOENT; 349 - goto out_unlock; 350 - } 351 - 352 - bo = gem_to_vbox_bo(obj); 353 - *offset = vbox_bo_mmap_offset(bo); 354 - 355 - drm_gem_object_put(obj); 356 - ret = 0; 357 - 358 - out_unlock: 359 - mutex_unlock(&dev->struct_mutex); 360 - return ret; 361 298 }
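With the helper object embedding its own struct drm_gem_object, allocation and freeing collapse into the helper as well, which is why the hand-rolled dumb_create and mmap-offset paths above simply disappear. The new allocation call as used here; the meaning of the trailing 0 and false arguments (page alignment and interruptible wait) is an assumption about the helper's 5.3-era signature:

	struct drm_gem_vram_object *gbo;

	gbo = drm_gem_vram_create(&vbox->ddev, &vbox->ddev.vram_mm->bdev,
				  size, 0, false);
	if (IS_ERR(gbo))
		return PTR_ERR(gbo);

	*obj = &gbo->gem;	/* GEM object embedded in the helper BO */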
+19 -17
drivers/gpu/drm/vboxvideo/vbox_mode.c
···
 		       struct drm_framebuffer *fb,
 		       int x, int y)
 {
-	struct vbox_bo *bo = gem_to_vbox_bo(to_vbox_framebuffer(fb)->obj);
+	struct drm_gem_vram_object *gbo =
+		drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj);
 	struct vbox_private *vbox = crtc->dev->dev_private;
 	struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc);
 	bool needs_modeset = drm_atomic_crtc_needs_modeset(crtc->state);
···
 	vbox_crtc->x = x;
 	vbox_crtc->y = y;
-	vbox_crtc->fb_offset = vbox_bo_gpu_offset(bo);
+	vbox_crtc->fb_offset = drm_gem_vram_offset(gbo);
 
 	/* vbox_do_modeset() checks vbox->single_framebuffer so update it now */
 	if (needs_modeset && vbox_set_up_input_mapping(vbox)) {
···
 static int vbox_primary_prepare_fb(struct drm_plane *plane,
 				   struct drm_plane_state *new_state)
 {
-	struct vbox_bo *bo;
+	struct drm_gem_vram_object *gbo;
 	int ret;
 
 	if (!new_state->fb)
 		return 0;
 
-	bo = gem_to_vbox_bo(to_vbox_framebuffer(new_state->fb)->obj);
-	ret = vbox_bo_pin(bo, TTM_PL_FLAG_VRAM);
+	gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj);
+	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
 	if (ret)
 		DRM_WARN("Error %d pinning new fb, out of video mem?\n", ret);
 
···
 static void vbox_primary_cleanup_fb(struct drm_plane *plane,
 				    struct drm_plane_state *old_state)
 {
-	struct vbox_bo *bo;
+	struct drm_gem_vram_object *gbo;
 
 	if (!old_state->fb)
 		return;
 
-	bo = gem_to_vbox_bo(to_vbox_framebuffer(old_state->fb)->obj);
-	vbox_bo_unpin(bo);
+	gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(old_state->fb)->obj);
+	drm_gem_vram_unpin(gbo);
 }
 
 static int vbox_cursor_atomic_check(struct drm_plane *plane,
···
 		container_of(plane->dev, struct vbox_private, ddev);
 	struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
 	struct drm_framebuffer *fb = plane->state->fb;
-	struct vbox_bo *bo = gem_to_vbox_bo(to_vbox_framebuffer(fb)->obj);
+	struct drm_gem_vram_object *gbo =
+		drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj);
 	u32 width = plane->state->crtc_w;
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
···
 	vbox_crtc->cursor_enabled = true;
 
 	/* pinning is done in prepare/cleanup framebuffer */
-	src = vbox_bo_kmap(bo);
+	src = drm_gem_vram_kmap(gbo, true, NULL);
 	if (IS_ERR(src)) {
 		mutex_unlock(&vbox->hw_mutex);
 		DRM_WARN("Could not kmap cursor bo, skipping update\n");
···
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	vbox_bo_kunmap(bo);
+	drm_gem_vram_kunmap(gbo);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
···
 static int vbox_cursor_prepare_fb(struct drm_plane *plane,
 				  struct drm_plane_state *new_state)
 {
-	struct vbox_bo *bo;
+	struct drm_gem_vram_object *gbo;
 
 	if (!new_state->fb)
 		return 0;
 
-	bo = gem_to_vbox_bo(to_vbox_framebuffer(new_state->fb)->obj);
-	return vbox_bo_pin(bo, TTM_PL_FLAG_SYSTEM);
+	gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj);
+	return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM);
 }
 
 static void vbox_cursor_cleanup_fb(struct drm_plane *plane,
 				   struct drm_plane_state *old_state)
 {
-	struct vbox_bo *bo;
+	struct drm_gem_vram_object *gbo;
 
 	if (!plane->state->fb)
 		return;
 
-	bo = gem_to_vbox_bo(to_vbox_framebuffer(plane->state->fb)->obj);
-	vbox_bo_unpin(bo);
+	gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(plane->state->fb)->obj);
+	drm_gem_vram_unpin(gbo);
 }
 
 static const u32 vbox_cursor_plane_formats[] = {
+8 -347
drivers/gpu/drm/vboxvideo/vbox_ttm.c
···
  */
 #include <linux/pci.h>
 #include <drm/drm_file.h>
-#include <drm/ttm/ttm_page_alloc.h>
 #include "vbox_drv.h"
-
-static inline struct vbox_private *vbox_bdev(struct ttm_bo_device *bd)
-{
-	return container_of(bd, struct vbox_private, ttm.bdev);
-}
-
-static void vbox_bo_ttm_destroy(struct ttm_buffer_object *tbo)
-{
-	struct vbox_bo *bo;
-
-	bo = container_of(tbo, struct vbox_bo, bo);
-
-	drm_gem_object_release(&bo->gem);
-	kfree(bo);
-}
-
-static bool vbox_ttm_bo_is_vbox_bo(struct ttm_buffer_object *bo)
-{
-	if (bo->destroy == &vbox_bo_ttm_destroy)
-		return true;
-
-	return false;
-}
-
-static int
-vbox_bo_init_mem_type(struct ttm_bo_device *bdev, u32 type,
-		      struct ttm_mem_type_manager *man)
-{
-	switch (type) {
-	case TTM_PL_SYSTEM:
-		man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_MASK_CACHING;
-		man->default_caching = TTM_PL_FLAG_CACHED;
-		break;
-	case TTM_PL_VRAM:
-		man->func = &ttm_bo_manager_func;
-		man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->available_caching = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC;
-		man->default_caching = TTM_PL_FLAG_WC;
-		break;
-	default:
-		DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static void
-vbox_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
-{
-	struct vbox_bo *vboxbo = vbox_bo(bo);
-
-	if (!vbox_ttm_bo_is_vbox_bo(bo))
-		return;
-
-	vbox_ttm_placement(vboxbo, TTM_PL_FLAG_SYSTEM);
-	*pl = vboxbo->placement;
-}
-
-static int vbox_bo_verify_access(struct ttm_buffer_object *bo,
-				 struct file *filp)
-{
-	return 0;
-}
-
-static int vbox_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
-				   struct ttm_mem_reg *mem)
-{
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-	struct vbox_private *vbox = vbox_bdev(bdev);
-
-	mem->bus.addr = NULL;
-	mem->bus.offset = 0;
-	mem->bus.size = mem->num_pages << PAGE_SHIFT;
-	mem->bus.base = 0;
-	mem->bus.is_iomem = false;
-	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-		return -EINVAL;
-	switch (mem->mem_type) {
-	case TTM_PL_SYSTEM:
-		/* system memory */
-		return 0;
-	case TTM_PL_VRAM:
-		mem->bus.offset = mem->start << PAGE_SHIFT;
-		mem->bus.base = pci_resource_start(vbox->ddev.pdev, 0);
-		mem->bus.is_iomem = true;
-		break;
-	default:
-		return -EINVAL;
-	}
-	return 0;
-}
-
-static void vbox_ttm_io_mem_free(struct ttm_bo_device *bdev,
-				 struct ttm_mem_reg *mem)
-{
-}
-
-static void vbox_ttm_backend_destroy(struct ttm_tt *tt)
-{
-	ttm_tt_fini(tt);
-	kfree(tt);
-}
-
-static struct ttm_backend_func vbox_tt_backend_func = {
-	.destroy = &vbox_ttm_backend_destroy,
-};
-
-static struct ttm_tt *vbox_ttm_tt_create(struct ttm_buffer_object *bo,
-					 u32 page_flags)
-{
-	struct ttm_tt *tt;
-
-	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-	if (!tt)
-		return NULL;
-
-	tt->func = &vbox_tt_backend_func;
-	if (ttm_tt_init(tt, bo, page_flags)) {
-		kfree(tt);
-		return NULL;
-	}
-
-	return tt;
-}
-
-static struct ttm_bo_driver vbox_bo_driver = {
-	.ttm_tt_create = vbox_ttm_tt_create,
-	.init_mem_type = vbox_bo_init_mem_type,
-	.eviction_valuable = ttm_bo_eviction_valuable,
-	.evict_flags = vbox_bo_evict_flags,
-	.verify_access = vbox_bo_verify_access,
-	.io_mem_reserve = &vbox_ttm_io_mem_reserve,
-	.io_mem_free = &vbox_ttm_io_mem_free,
-};
 
 int vbox_mm_init(struct vbox_private *vbox)
 {
+	struct drm_vram_mm *vmm;
 	int ret;
 	struct drm_device *dev = &vbox->ddev;
-	struct ttm_bo_device *bdev = &vbox->ttm.bdev;
 
-	ret = ttm_bo_device_init(&vbox->ttm.bdev,
-				 &vbox_bo_driver,
-				 dev->anon_inode->i_mapping,
-				 true);
-	if (ret) {
-		DRM_ERROR("Error initialising bo driver; %d\n", ret);
+	vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
+				       vbox->available_vram_size,
+				       &drm_gem_vram_mm_funcs);
+	if (IS_ERR(vmm)) {
+		ret = PTR_ERR(vmm);
+		DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
 		return ret;
-	}
-
-	ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
-			     vbox->available_vram_size >> PAGE_SHIFT);
-	if (ret) {
-		DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
-		goto err_device_release;
 	}
 
 #ifdef DRM_MTRR_WC
···
 			   pci_resource_len(dev->pdev, 0));
 #endif
 	return 0;
-
-err_device_release:
-	ttm_bo_device_release(&vbox->ttm.bdev);
-	return ret;
 }
 
 void vbox_mm_fini(struct vbox_private *vbox)
···
 #else
 	arch_phys_wc_del(vbox->fb_mtrr);
 #endif
-	ttm_bo_device_release(&vbox->ttm.bdev);
-}
-
-void vbox_ttm_placement(struct vbox_bo *bo, int domain)
-{
-	unsigned int i;
-	u32 c = 0;
-
-	bo->placement.placement = bo->placements;
-	bo->placement.busy_placement = bo->placements;
-
-	if (domain & TTM_PL_FLAG_VRAM)
-		bo->placements[c++].flags =
-		    TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
-	if (domain & TTM_PL_FLAG_SYSTEM)
-		bo->placements[c++].flags =
-		    TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-	if (!c)
-		bo->placements[c++].flags =
-		    TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
-
-	bo->placement.num_placement = c;
-	bo->placement.num_busy_placement = c;
-
-	for (i = 0; i < c; ++i) {
-		bo->placements[i].fpfn = 0;
-		bo->placements[i].lpfn = 0;
-	}
-}
-
-int vbox_bo_create(struct vbox_private *vbox, int size, int align,
-		   u32 flags, struct vbox_bo **pvboxbo)
-{
-	struct vbox_bo *vboxbo;
-	size_t acc_size;
-	int ret;
-
-	vboxbo = kzalloc(sizeof(*vboxbo), GFP_KERNEL);
-	if (!vboxbo)
-		return -ENOMEM;
-
-	ret = drm_gem_object_init(&vbox->ddev, &vboxbo->gem, size);
-	if (ret)
-		goto err_free_vboxbo;
-
-	vboxbo->bo.bdev = &vbox->ttm.bdev;
-
-	vbox_ttm_placement(vboxbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
-
-	acc_size = ttm_bo_dma_acc_size(&vbox->ttm.bdev, size,
-				       sizeof(struct vbox_bo));
-
-	ret = ttm_bo_init(&vbox->ttm.bdev, &vboxbo->bo, size,
-			  ttm_bo_type_device, &vboxbo->placement,
-			  align >> PAGE_SHIFT, false, acc_size,
-			  NULL, NULL, vbox_bo_ttm_destroy);
-	if (ret)
-		goto err_free_vboxbo;
-
-	*pvboxbo = vboxbo;
-
-	return 0;
-
-err_free_vboxbo:
-	kfree(vboxbo);
-	return ret;
-}
-
-int vbox_bo_pin(struct vbox_bo *bo, u32 pl_flag)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	int i, ret;
-
-	if (bo->pin_count) {
-		bo->pin_count++;
-		return 0;
-	}
-
-	ret = vbox_bo_reserve(bo, false);
-	if (ret)
-		return ret;
-
-	vbox_ttm_placement(bo, pl_flag);
-
-	for (i = 0; i < bo->placement.num_placement; i++)
-		bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-	if (ret == 0)
-		bo->pin_count = 1;
-
-	vbox_bo_unreserve(bo);
-
-	return ret;
-}
-
-int vbox_bo_unpin(struct vbox_bo *bo)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	int i, ret;
-
-	if (!bo->pin_count) {
-		DRM_ERROR("unpin bad %p\n", bo);
-		return 0;
-	}
-	bo->pin_count--;
-	if (bo->pin_count)
-		return 0;
-
-	ret = vbox_bo_reserve(bo, false);
-	if (ret) {
-		DRM_ERROR("Error %d reserving bo, leaving it pinned\n", ret);
-		return ret;
-	}
-
-	for (i = 0; i < bo->placement.num_placement; i++)
-		bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
-
-	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-
-	vbox_bo_unreserve(bo);
-
-	return ret;
-}
-
-/*
- * Move a vbox-owned buffer object to system memory if no one else has it
- * pinned. The caller must have pinned it previously, and this call will
- * release the caller's pin.
- */
-int vbox_bo_push_sysram(struct vbox_bo *bo)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	int i, ret;
-
-	if (!bo->pin_count) {
-		DRM_ERROR("unpin bad %p\n", bo);
-		return 0;
-	}
-	bo->pin_count--;
-	if (bo->pin_count)
-		return 0;
-
-	if (bo->kmap.virtual) {
-		ttm_bo_kunmap(&bo->kmap);
-		bo->kmap.virtual = NULL;
-	}
-
-	vbox_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
-
-	for (i = 0; i < bo->placement.num_placement; i++)
-		bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
-
-	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
-	if (ret) {
-		DRM_ERROR("pushing to VRAM failed\n");
-		return ret;
-	}
-
-	return 0;
-}
-
-int vbox_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_file *file_priv = filp->private_data;
-	struct vbox_private *vbox = file_priv->minor->dev->dev_private;
-
-	return ttm_bo_mmap(filp, vma, &vbox->ttm.bdev);
-}
-
-void *vbox_bo_kmap(struct vbox_bo *bo)
-{
-	int ret;
-
-	if (bo->kmap.virtual)
-		return bo->kmap.virtual;
-
-	ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
-	if (ret) {
-		DRM_ERROR("Error kmapping bo: %d\n", ret);
-		return NULL;
-	}
-
-	return bo->kmap.virtual;
-}
-
-void vbox_bo_kunmap(struct vbox_bo *bo)
-{
-	if (bo->kmap.virtual) {
-		ttm_bo_kunmap(&bo->kmap);
-		bo->kmap.virtual = NULL;
-	}
+	drm_vram_helper_release_mm(&vbox->ddev);
 }
+30 -1
drivers/gpu/drm/vc4/vc4_bo.c
···
 	return obj;
 }
 
+static int vc4_grab_bin_bo(struct vc4_dev *vc4, struct vc4_file *vc4file)
+{
+	int ret;
+
+	if (!vc4->v3d)
+		return -ENODEV;
+
+	if (vc4file->bin_bo_used)
+		return 0;
+
+	ret = vc4_v3d_bin_bo_get(vc4, &vc4file->bin_bo_used);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file_priv)
 {
 	struct drm_vc4_create_bo *args = data;
+	struct vc4_file *vc4file = file_priv->driver_priv;
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct vc4_bo *bo = NULL;
 	int ret;
+
+	ret = vc4_grab_bin_bo(vc4, vc4file);
+	if (ret)
+		return ret;
 
 	/*
 	 * We can't allocate from the BO cache, because the BOs don't
···
 			       struct drm_file *file_priv)
 {
 	struct drm_vc4_create_shader_bo *args = data;
+	struct vc4_file *vc4file = file_priv->driver_priv;
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct vc4_bo *bo = NULL;
 	int ret;
 
···
 		DRM_INFO("Pad set: 0x%08x\n", args->pad);
 		return -EINVAL;
 	}
+
+	ret = vc4_grab_bin_bo(vc4, vc4file);
+	if (ret)
+		return ret;
 
 	bo = vc4_bo_create(dev, args->size, true, VC4_BO_TYPE_V3D_SHADER);
 	if (IS_ERR(bo))
···
 	 */
 	ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
 
- fail:
+fail:
 	drm_gem_object_put_unlocked(&bo->base.base);
 
 	return ret;
+6
drivers/gpu/drm/vc4/vc4_drv.c
···
 
 static void vc4_close(struct drm_device *dev, struct drm_file *file)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct vc4_file *vc4file = file->driver_priv;
+
+	if (vc4file->bin_bo_used)
+		vc4_v3d_bin_bo_put(vc4);
 
 	vc4_perfmon_close_file(vc4file);
 	kfree(vc4file);
···
 	vc4->dev = drm;
 	drm->dev_private = vc4;
 	INIT_LIST_HEAD(&vc4->debugfs_list);
+
+	mutex_init(&vc4->bin_bo_lock);
 
 	ret = vc4_bo_cache_init(drm);
 	if (ret)
+14
drivers/gpu/drm/vc4/vc4_drv.h
···
 	 * the minor is available (after drm_dev_register()).
 	 */
 	struct list_head debugfs_list;
+
+	/* Mutex for binner bo allocation. */
+	struct mutex bin_bo_lock;
+	/* Reference count for our binner bo. */
+	struct kref bin_bo_kref;
 };
 
 static inline struct vc4_dev *
···
 	 * NULL otherwise.
 	 */
 	struct vc4_perfmon *perfmon;
+
+	/* Whether the exec has taken a reference to the binner BO, which should
+	 * happen with a VC4_PACKET_TILE_BINNING_MODE_CONFIG packet.
+	 */
+	bool bin_bo_used;
 };
 
 /* Per-open file private data. Any driver-specific resource that has to be
···
 		struct idr idr;
 		struct mutex lock;
 	} perfmon;
+
+	bool bin_bo_used;
 };
 
 static inline struct vc4_exec_info *
···
 extern struct platform_driver vc4_v3d_driver;
 extern const struct of_device_id vc4_v3d_dt_match[];
 int vc4_v3d_get_bin_slot(struct vc4_dev *vc4);
+int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used);
+void vc4_v3d_bin_bo_put(struct vc4_dev *vc4);
 int vc4_v3d_pm_get(struct vc4_dev *vc4);
 void vc4_v3d_pm_put(struct vc4_dev *vc4);
 
+11
drivers/gpu/drm/vc4/vc4_gem.c
···
 vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec)
 {
 	struct drm_vc4_submit_cl *args = exec->args;
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	void *temp = NULL;
 	void *bin;
 	int ret = 0;
···
 	if (ret)
 		goto fail;
 
+	if (exec->found_tile_binning_mode_config_packet) {
+		ret = vc4_v3d_bin_bo_get(vc4, &exec->bin_bo_used);
+		if (ret)
+			goto fail;
+	}
+
 	/* Block waiting on any previous rendering into the CS's VBO,
 	 * IB, or textures, so that pixels are actually written by the
 	 * time we try to read them.
···
 	spin_lock_irqsave(&vc4->job_lock, irqflags);
 	vc4->bin_alloc_used &= ~exec->bin_slots;
 	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+
+	/* Release the reference on the binner BO if needed. */
+	if (exec->bin_bo_used)
+		vc4_v3d_bin_bo_put(vc4);
 
 	/* Release the reference we had on the perf monitor. */
 	vc4_perfmon_put(exec->perfmon);
+16 -4
drivers/gpu/drm/vc4/vc4_irq.c
···
 {
 	struct vc4_dev *vc4 =
 		container_of(work, struct vc4_dev, overflow_mem_work);
-	struct vc4_bo *bo = vc4->bin_bo;
+	struct vc4_bo *bo;
 	int bin_bo_slot;
 	struct vc4_exec_info *exec;
 	unsigned long irqflags;
 
+	mutex_lock(&vc4->bin_bo_lock);
+
+	if (!vc4->bin_bo)
+		goto complete;
+
+	bo = vc4->bin_bo;
+
 	bin_bo_slot = vc4_v3d_get_bin_slot(vc4);
 	if (bin_bo_slot < 0) {
 		DRM_ERROR("Couldn't allocate binner overflow mem\n");
-		return;
+		goto complete;
 	}
 
 	spin_lock_irqsave(&vc4->job_lock, irqflags);
···
 	V3D_WRITE(V3D_INTCTL, V3D_INT_OUTOMEM);
 	V3D_WRITE(V3D_INTENA, V3D_INT_OUTOMEM);
 	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+
+complete:
+	mutex_unlock(&vc4->bin_bo_lock);
 }
 
 static void
···
 	if (!vc4->v3d)
 		return 0;
 
-	/* Enable both the render done and out of memory interrupts. */
-	V3D_WRITE(V3D_INTENA, V3D_DRIVER_IRQS);
+	/* Enable the render done interrupts. The out-of-memory interrupt is
+	 * enabled as soon as we have a binner BO allocated.
+	 */
+	V3D_WRITE(V3D_INTENA, V3D_INT_FLDONE | V3D_INT_FRDONE);
 
 	return 0;
 }
+5 -10
drivers/gpu/drm/vc4/vc4_plane.c
···
 	struct drm_framebuffer *fb = state->fb;
 	struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0);
 	u32 subpixel_src_mask = (1 << 16) - 1;
-	u32 format = fb->format->format;
 	int num_planes = fb->format->num_planes;
 	struct drm_crtc_state *crtc_state;
-	u32 h_subsample, v_subsample;
+	u32 h_subsample = fb->format->hsub;
+	u32 v_subsample = fb->format->vsub;
 	int i, ret;
 
 	crtc_state = drm_atomic_get_existing_crtc_state(state->state,
···
 					INT_MAX, true, true);
 	if (ret)
 		return ret;
-
-	h_subsample = drm_format_horz_chroma_subsampling(format);
-	v_subsample = drm_format_vert_chroma_subsampling(format);
 
 	for (i = 0; i < num_planes; i++)
 		vc4_state->offsets[i] = bo->paddr + fb->offsets[i];
···
 	u32 ctl0_offset = vc4_state->dlist_count;
 	const struct hvs_format *format = vc4_get_hvs_format(fb->format->format);
 	u64 base_format_mod = fourcc_mod_broadcom_mod(fb->modifier);
-	int num_planes = drm_format_num_planes(format->drm);
-	u32 h_subsample, v_subsample;
+	int num_planes = fb->format->num_planes;
+	u32 h_subsample = fb->format->hsub;
+	u32 v_subsample = fb->format->vsub;
 	bool mix_plane_alpha;
 	bool covers_screen;
 	u32 scl0, scl1, pitch0;
···
 		scl0 = vc4_get_scl_field(state, 1);
 		scl1 = vc4_get_scl_field(state, 0);
 	}
-
-	h_subsample = drm_format_horz_chroma_subsampling(format->drm);
-	v_subsample = drm_format_vert_chroma_subsampling(format->drm);
 
 	rotation = drm_rotation_simplify(state->rotation,
 					 DRM_MODE_ROTATE_0 |
+55 -17
drivers/gpu/drm/vc4/vc4_v3d.c
···
 }
 
 /**
- * vc4_allocate_bin_bo() - allocates the memory that will be used for
+ * bin_bo_alloc() - allocates the memory that will be used for
  * tile binning.
  *
  * The binner has a limitation that the addresses in the tile state
···
  * overall CMA pool before they make scenes complicated enough to run
  * out of bin space.
  */
-static int vc4_allocate_bin_bo(struct drm_device *drm)
+static int bin_bo_alloc(struct vc4_dev *vc4)
 {
-	struct vc4_dev *vc4 = to_vc4_dev(drm);
 	struct vc4_v3d *v3d = vc4->v3d;
 	uint32_t size = 16 * 1024 * 1024;
 	int ret = 0;
 	struct list_head list;
+
+	if (!v3d)
+		return -ENODEV;
 
 	/* We may need to try allocating more than once to get a BO
 	 * that doesn't cross 256MB.  Track the ones we've allocated
···
 	INIT_LIST_HEAD(&list);
 
 	while (true) {
-		struct vc4_bo *bo = vc4_bo_create(drm, size, true,
+		struct vc4_bo *bo = vc4_bo_create(vc4->dev, size, true,
 						  VC4_BO_TYPE_BIN);
 
 		if (IS_ERR(bo)) {
···
 			WARN_ON_ONCE(sizeof(vc4->bin_alloc_used) * 8 !=
 				     bo->base.base.size / vc4->bin_alloc_size);
 
+			kref_init(&vc4->bin_bo_kref);
+
+			/* Enable the out-of-memory interrupt to set our
+			 * newly-allocated binner BO, potentially from an
+			 * already-pending-but-masked interrupt.
+			 */
+			V3D_WRITE(V3D_INTENA, V3D_INT_OUTOMEM);
+
 			break;
 		}
 
···
 	return ret;
 }
 
+int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
+{
+	int ret = 0;
+
+	mutex_lock(&vc4->bin_bo_lock);
+
+	if (used && *used)
+		goto complete;
+
+	if (vc4->bin_bo)
+		kref_get(&vc4->bin_bo_kref);
+	else
+		ret = bin_bo_alloc(vc4);
+
+	if (ret == 0 && used)
+		*used = true;
+
+complete:
+	mutex_unlock(&vc4->bin_bo_lock);
+
+	return ret;
+}
+
+static void bin_bo_release(struct kref *ref)
+{
+	struct vc4_dev *vc4 = container_of(ref, struct vc4_dev, bin_bo_kref);
+
+	if (WARN_ON_ONCE(!vc4->bin_bo))
+		return;
+
+	drm_gem_object_put_unlocked(&vc4->bin_bo->base.base);
+	vc4->bin_bo = NULL;
+}
+
+void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
+{
+	mutex_lock(&vc4->bin_bo_lock);
+	kref_put(&vc4->bin_bo_kref, bin_bo_release);
+	mutex_unlock(&vc4->bin_bo_lock);
+}
+
 #ifdef CONFIG_PM
 static int vc4_v3d_runtime_suspend(struct device *dev)
 {
···
 	struct vc4_dev *vc4 = v3d->vc4;
 
 	vc4_irq_uninstall(vc4->dev);
-
-	drm_gem_object_put_unlocked(&vc4->bin_bo->base.base);
-	vc4->bin_bo = NULL;
 
 	clk_disable_unprepare(v3d->clk);
 
···
 	struct vc4_v3d *v3d = dev_get_drvdata(dev);
 	struct vc4_dev *vc4 = v3d->vc4;
 	int ret;
-
-	ret = vc4_allocate_bin_bo(vc4->dev);
-	if (ret)
-		return ret;
 
 	ret = clk_prepare_enable(v3d->clk);
 	if (ret != 0)
···
 	ret = clk_prepare_enable(v3d->clk);
 	if (ret != 0)
 		return ret;
-
-	ret = vc4_allocate_bin_bo(drm);
-	if (ret) {
-		clk_disable_unprepare(v3d->clk);
-		return ret;
-	}
 
 	/* Reset the binner overflow address/size at setup, to be sure
 	 * we don't reuse an old one.
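For illustration, the pairing rule the vc4 rework establishes is that every successful vc4_v3d_bin_bo_get() must be balanced by exactly one vc4_v3d_bin_bo_put(); the first get allocates the 16 MiB binner BO, the last put frees it. A rough sketch of a consumer (not part of the patch; function names follow the declarations above):

	/* Sketch only: take a binner reference when a CL actually
	 * configures binning, and drop it when the job completes.
	 */
	static int example_submit(struct vc4_dev *vc4,
				  struct vc4_exec_info *exec)
	{
		int ret;

		if (exec->found_tile_binning_mode_config_packet) {
			ret = vc4_v3d_bin_bo_get(vc4, &exec->bin_bo_used);
			if (ret)
				return ret;	/* -ENODEV without a V3D node */
		}

		/* ... queue the job ... */
		return 0;
	}

	static void example_complete(struct vc4_dev *vc4,
				     struct vc4_exec_info *exec)
	{
		if (exec->bin_bo_used)
			vc4_v3d_bin_bo_put(vc4);	/* may drop the last ref */
	}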
+1 -1
drivers/gpu/drm/virtio/Makefile
···
 virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \
 	virtgpu_fb.o virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
 	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
-	virtgpu_ioctl.o virtgpu_prime.o
+	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
 
 obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o
+1 -2
drivers/gpu/drm/virtio/virtgpu_drv.h
···
 	struct dma_fence f;
 	struct virtio_gpu_fence_driver *drv;
 	struct list_head node;
-	uint64_t seq;
 };
 #define to_virtio_fence(x) \
 	container_of(x, struct virtio_gpu_fence, f)
···
 bool virtio_fence_signaled(struct dma_fence *f);
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(
 	struct virtio_gpu_device *vgdev);
-int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
+void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
 			  struct virtio_gpu_ctrl_hdr *cmd_hdr,
 			  struct virtio_gpu_fence *fence);
 void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
+15 -10
drivers/gpu/drm/virtio/virtgpu_fence.c
···
  */
 
 #include <drm/drmP.h>
+#include <trace/events/dma_fence.h>
 #include "virtgpu_drv.h"
 
 static const char *virtio_get_driver_name(struct dma_fence *f)
···
 {
 	struct virtio_gpu_fence *fence = to_virtio_fence(f);
 
-	if (atomic64_read(&fence->drv->last_seq) >= fence->seq)
+	if (atomic64_read(&fence->drv->last_seq) >= fence->f.seqno)
 		return true;
 	return false;
 }
 
 static void virtio_fence_value_str(struct dma_fence *f, char *str, int size)
 {
-	struct virtio_gpu_fence *fence = to_virtio_fence(f);
-
-	snprintf(str, size, "%llu", fence->seq);
+	snprintf(str, size, "%llu", f->seqno);
 }
 
 static void virtio_timeline_value_str(struct dma_fence *f, char *str, int size)
···
 {
 	struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
 	struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
-						 GFP_ATOMIC);
+						 GFP_KERNEL);
 	if (!fence)
 		return fence;
 
 	fence->drv = drv;
+
+	/* This only partially initializes the fence because the seqno is
+	 * unknown yet.  The fence must not be used outside of the driver
+	 * until virtio_gpu_fence_emit is called.
+	 */
 	dma_fence_init(&fence->f, &virtio_fence_ops, &drv->lock, drv->context, 0);
 
 	return fence;
 }
 
-int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
+void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
 			  struct virtio_gpu_ctrl_hdr *cmd_hdr,
 			  struct virtio_gpu_fence *fence)
 {
···
 	unsigned long irq_flags;
 
 	spin_lock_irqsave(&drv->lock, irq_flags);
-	fence->seq = ++drv->sync_seq;
+	fence->f.seqno = ++drv->sync_seq;
 	dma_fence_get(&fence->f);
 	list_add_tail(&fence->node, &drv->fences);
 	spin_unlock_irqrestore(&drv->lock, irq_flags);
 
+	trace_dma_fence_emit(&fence->f);
+
 	cmd_hdr->flags |= cpu_to_le32(VIRTIO_GPU_FLAG_FENCE);
-	cmd_hdr->fence_id = cpu_to_le64(fence->seq);
-	return 0;
+	cmd_hdr->fence_id = cpu_to_le64(fence->f.seqno);
 }
 
 void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
···
 	spin_lock_irqsave(&drv->lock, irq_flags);
 	atomic64_set(&vgdev->fence_drv.last_seq, last_seq);
 	list_for_each_entry_safe(fence, tmp, &drv->fences, node) {
-		if (last_seq < fence->seq)
+		if (last_seq < fence->f.seqno)
 			continue;
 		dma_fence_signal_locked(&fence->f);
 		list_del(&fence->node);
+9 -9
drivers/gpu/drm/virtio/virtgpu_ioctl.c
···
 
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_EXECBUFFER, virtio_gpu_execbuffer_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_GETPARAM, virtio_gpu_getparam_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_CREATE,
 			  virtio_gpu_resource_create_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_INFO, virtio_gpu_resource_info_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	/* make transfer async to the main ring? - no sure, can we
 	 * thread these in the underlying GL
 	 */
 	DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_FROM_HOST,
 			  virtio_gpu_transfer_from_host_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_TO_HOST,
 			  virtio_gpu_transfer_to_host_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_WAIT, virtio_gpu_wait_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_GET_CAPS, virtio_gpu_get_caps_ioctl,
-			  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 };
+52
drivers/gpu/drm/virtio/virtgpu_trace.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#if !defined(_VIRTGPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VIRTGPU_TRACE_H_
+
+#include <linux/tracepoint.h>
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM virtio_gpu
+#define TRACE_INCLUDE_FILE virtgpu_trace
+
+DECLARE_EVENT_CLASS(virtio_gpu_cmd,
+	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+	TP_ARGS(vq, hdr),
+	TP_STRUCT__entry(
+			 __field(int, dev)
+			 __field(unsigned int, vq)
+			 __field(const char *, name)
+			 __field(u32, type)
+			 __field(u32, flags)
+			 __field(u64, fence_id)
+			 __field(u32, ctx_id)
+			 ),
+	TP_fast_assign(
+		       __entry->dev = vq->vdev->index;
+		       __entry->vq = vq->index;
+		       __entry->name = vq->name;
+		       __entry->type = le32_to_cpu(hdr->type);
+		       __entry->flags = le32_to_cpu(hdr->flags);
+		       __entry->fence_id = le64_to_cpu(hdr->fence_id);
+		       __entry->ctx_id = le32_to_cpu(hdr->ctx_id);
+		       ),
+	TP_printk("vdev=%d vq=%u name=%s type=0x%x flags=0x%x fence_id=%llu ctx_id=%u",
+		  __entry->dev, __entry->vq, __entry->name,
+		  __entry->type, __entry->flags, __entry->fence_id,
+		  __entry->ctx_id)
+);
+
+DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_queue,
+	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+	TP_ARGS(vq, hdr)
+);
+
+DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_response,
+	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
+	TP_ARGS(vq, hdr)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/virtio
+#include <trace/define_trace.h>
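Each DEFINE_EVENT above yields a plain C function the driver can call; a minimal emission sketch (illustrative only, with vq and hdr assumed in scope; the real call sites are in virtgpu_vq.c further down):

	#include "virtgpu_trace.h"

	/* Sketch: record the command header just before kicking the queue. */
	static void example_kick(struct virtqueue *vq,
				 struct virtio_gpu_ctrl_hdr *hdr)
	{
		trace_virtio_gpu_cmd_queue(vq, hdr);
		virtqueue_kick(vq);
	}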
+5
drivers/gpu/drm/virtio/virtgpu_trace_points.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include "virtgpu_drv.h"
+
+#define CREATE_TRACE_POINTS
+#include "virtgpu_trace.h"
+10
drivers/gpu/drm/virtio/virtgpu_vq.c
···
 
 #include <drm/drmP.h>
 #include "virtgpu_drv.h"
+#include "virtgpu_trace.h"
 #include <linux/virtio.h>
 #include <linux/virtio_config.h>
 #include <linux/virtio_ring.h>
···
 
 	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
 		resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
+
+		trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);
+
 		if (resp->type != cpu_to_le32(VIRTIO_GPU_RESP_OK_NODATA)) {
 			if (resp->type >= cpu_to_le32(VIRTIO_GPU_RESP_ERR_UNSPEC)) {
 				struct virtio_gpu_ctrl_hdr *cmd;
···
 		spin_lock(&vgdev->ctrlq.qlock);
 		goto retry;
 	} else {
+		trace_virtio_gpu_cmd_queue(vq,
+			(struct virtio_gpu_ctrl_hdr *)vbuf->buf);
+
 		virtqueue_kick(vq);
 	}
 
···
 		spin_lock(&vgdev->cursorq.qlock);
 		goto retry;
 	} else {
+		trace_virtio_gpu_cmd_queue(vq,
+			(struct virtio_gpu_ctrl_hdr *)vbuf->buf);
+
 		virtqueue_kick(vq);
 	}
 
+13 -20
drivers/gpu/drm/vkms/vkms_crtc.c
···
 	return true;
 }
 
-static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
-{
-	struct vkms_crtc_state *vkms_state = NULL;
-
-	if (crtc->state) {
-		vkms_state = to_vkms_crtc_state(crtc->state);
-		__drm_atomic_helper_crtc_destroy_state(crtc->state);
-		kfree(vkms_state);
-		crtc->state = NULL;
-	}
-
-	vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
-	if (!vkms_state)
-		return;
-	INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
-
-	crtc->state = &vkms_state->base;
-	crtc->state->crtc = crtc;
-}
-
 static struct drm_crtc_state *
 vkms_atomic_crtc_duplicate_state(struct drm_crtc *crtc)
 {
···
 		flush_work(&vkms_state->crc_work);
 		kfree(vkms_state);
 	}
+}
+
+static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
+{
+	struct vkms_crtc_state *vkms_state =
+		kzalloc(sizeof(*vkms_state), GFP_KERNEL);
+
+	if (crtc->state)
+		vkms_atomic_crtc_destroy_state(crtc, crtc->state);
+
+	__drm_atomic_helper_crtc_reset(crtc, &vkms_state->base);
+	if (vkms_state)
+		INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
 }
 
 static const struct drm_crtc_funcs vkms_crtc_funcs = {
+2 -4
drivers/gpu/drm/zte/zx_plane.c
···
 	u32 dst_x, dst_y, dst_w, dst_h;
 	uint32_t format;
 	int fmt;
-	int num_planes;
 	int i;
 
 	if (!fb)
···
 	dst_h = drm_rect_height(dst);
 
 	/* Set up data address registers for Y, Cb and Cr planes */
-	num_planes = drm_format_num_planes(format);
 	paddr_reg = layer + VL_Y;
-	for (i = 0; i < num_planes; i++) {
+	for (i = 0; i < fb->format->num_planes; i++) {
 		cma_obj = drm_fb_cma_get_gem_obj(fb, i);
 		paddr = cma_obj->paddr + fb->offsets[i];
 		paddr += src_y * fb->pitches[i];
-		paddr += src_x * drm_format_plane_cpp(format, i);
+		paddr += src_x * fb->format->cpp[i];
 		zx_writel(paddr_reg, paddr);
 		paddr_reg += 4;
 	}
+257
drivers/video/hdmi.c
···
 	return 0;
 }
 
+/**
+ * hdmi_drm_infoframe_init() - initialize an HDMI Dynamic Range and
+ * Mastering infoframe
+ * @frame: HDMI DRM infoframe
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+int hdmi_drm_infoframe_init(struct hdmi_drm_infoframe *frame)
+{
+	memset(frame, 0, sizeof(*frame));
+
+	frame->type = HDMI_INFOFRAME_TYPE_DRM;
+	frame->version = 1;
+	frame->length = HDMI_DRM_INFOFRAME_SIZE;
+
+	return 0;
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_init);
+
+static int hdmi_drm_infoframe_check_only(const struct hdmi_drm_infoframe *frame)
+{
+	if (frame->type != HDMI_INFOFRAME_TYPE_DRM ||
+	    frame->version != 1)
+		return -EINVAL;
+
+	if (frame->length != HDMI_DRM_INFOFRAME_SIZE)
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * hdmi_drm_infoframe_check() - check an HDMI DRM infoframe
+ * @frame: HDMI DRM infoframe
+ *
+ * Validates that the infoframe is consistent.
+ * Returns 0 on success or a negative error code on failure.
+ */
+int hdmi_drm_infoframe_check(struct hdmi_drm_infoframe *frame)
+{
+	return hdmi_drm_infoframe_check_only(frame);
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_check);
+
+/**
+ * hdmi_drm_infoframe_pack_only() - write HDMI DRM infoframe to binary buffer
+ * @frame: HDMI DRM infoframe
+ * @buffer: destination buffer
+ * @size: size of buffer
+ *
+ * Packs the information contained in the @frame structure into a binary
+ * representation that can be written into the corresponding controller
+ * registers. Also computes the checksum as required by section 5.3.5 of
+ * the HDMI 1.4 specification.
+ *
+ * Returns the number of bytes packed into the binary buffer or a negative
+ * error code on failure.
+ */
+ssize_t hdmi_drm_infoframe_pack_only(const struct hdmi_drm_infoframe *frame,
+				     void *buffer, size_t size)
+{
+	u8 *ptr = buffer;
+	size_t length;
+	int i;
+
+	length = HDMI_INFOFRAME_HEADER_SIZE + frame->length;
+
+	if (size < length)
+		return -ENOSPC;
+
+	memset(buffer, 0, size);
+
+	ptr[0] = frame->type;
+	ptr[1] = frame->version;
+	ptr[2] = frame->length;
+	ptr[3] = 0; /* checksum */
+
+	/* start infoframe payload */
+	ptr += HDMI_INFOFRAME_HEADER_SIZE;
+
+	*ptr++ = frame->eotf;
+	*ptr++ = frame->metadata_type;
+
+	for (i = 0; i < 3; i++) {
+		*ptr++ = frame->display_primaries[i].x;
+		*ptr++ = frame->display_primaries[i].x >> 8;
+		*ptr++ = frame->display_primaries[i].y;
+		*ptr++ = frame->display_primaries[i].y >> 8;
+	}
+
+	*ptr++ = frame->white_point.x;
+	*ptr++ = frame->white_point.x >> 8;
+
+	*ptr++ = frame->white_point.y;
+	*ptr++ = frame->white_point.y >> 8;
+
+	*ptr++ = frame->max_display_mastering_luminance;
+	*ptr++ = frame->max_display_mastering_luminance >> 8;
+
+	*ptr++ = frame->min_display_mastering_luminance;
+	*ptr++ = frame->min_display_mastering_luminance >> 8;
+
+	*ptr++ = frame->max_cll;
+	*ptr++ = frame->max_cll >> 8;
+
+	*ptr++ = frame->max_fall;
+	*ptr++ = frame->max_fall >> 8;
+
+	hdmi_infoframe_set_checksum(buffer, length);
+
+	return length;
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_pack_only);
+
+/**
+ * hdmi_drm_infoframe_pack() - check an HDMI DRM infoframe,
+ * and write it to binary buffer
+ * @frame: HDMI DRM infoframe
+ * @buffer: destination buffer
+ * @size: size of buffer
+ *
+ * Validates that the infoframe is consistent and updates derived fields
+ * (e.g. length) based on other fields, after which it packs the information
+ * contained in the @frame structure into a binary representation that
+ * can be written into the corresponding controller registers. This function
+ * also computes the checksum as required by section 5.3.5 of the HDMI 1.4
+ * specification.
+ *
+ * Returns the number of bytes packed into the binary buffer or a negative
+ * error code on failure.
+ */
+ssize_t hdmi_drm_infoframe_pack(struct hdmi_drm_infoframe *frame,
+				void *buffer, size_t size)
+{
+	int ret;
+
+	ret = hdmi_drm_infoframe_check(frame);
+	if (ret)
+		return ret;
+
+	return hdmi_drm_infoframe_pack_only(frame, buffer, size);
+}
+EXPORT_SYMBOL(hdmi_drm_infoframe_pack);
+
 /*
  * hdmi_vendor_any_infoframe_check() - check a vendor infoframe
  */
···
 		length = hdmi_avi_infoframe_pack_only(&frame->avi,
 						      buffer, size);
 		break;
+	case HDMI_INFOFRAME_TYPE_DRM:
+		length = hdmi_drm_infoframe_pack_only(&frame->drm,
+						      buffer, size);
+		break;
 	case HDMI_INFOFRAME_TYPE_SPD:
 		length = hdmi_spd_infoframe_pack_only(&frame->spd,
 						      buffer, size);
···
 	case HDMI_INFOFRAME_TYPE_AVI:
 		length = hdmi_avi_infoframe_pack(&frame->avi, buffer, size);
 		break;
+	case HDMI_INFOFRAME_TYPE_DRM:
+		length = hdmi_drm_infoframe_pack(&frame->drm, buffer, size);
+		break;
 	case HDMI_INFOFRAME_TYPE_SPD:
 		length = hdmi_spd_infoframe_pack(&frame->spd, buffer, size);
 		break;
···
 		return "Source Product Description (SPD)";
 	case HDMI_INFOFRAME_TYPE_AUDIO:
 		return "Audio";
+	case HDMI_INFOFRAME_TYPE_DRM:
+		return "Dynamic Range and Mastering";
 	}
 	return "Reserved";
 }
···
 		 frame->downmix_inhibit ? "Yes" : "No");
 }
 
+/**
+ * hdmi_drm_infoframe_log() - log info of HDMI DRM infoframe
+ * @level: logging level
+ * @dev: device
+ * @frame: HDMI DRM infoframe
+ */
+static void hdmi_drm_infoframe_log(const char *level,
+				   struct device *dev,
+				   const struct hdmi_drm_infoframe *frame)
+{
+	int i;
+
+	hdmi_infoframe_log_header(level, dev,
+				  (struct hdmi_any_infoframe *)frame);
+	hdmi_log("length: %d\n", frame->length);
+	hdmi_log("metadata type: %d\n", frame->metadata_type);
+	hdmi_log("eotf: %d\n", frame->eotf);
+	for (i = 0; i < 3; i++) {
+		hdmi_log("x[%d]: %d\n", i, frame->display_primaries[i].x);
+		hdmi_log("y[%d]: %d\n", i, frame->display_primaries[i].y);
+	}
+
+	hdmi_log("white point x: %d\n", frame->white_point.x);
+	hdmi_log("white point y: %d\n", frame->white_point.y);
+
+	hdmi_log("max_display_mastering_luminance: %d\n",
+		 frame->max_display_mastering_luminance);
+	hdmi_log("min_display_mastering_luminance: %d\n",
+		 frame->min_display_mastering_luminance);
+
+	hdmi_log("max_cll: %d\n", frame->max_cll);
+	hdmi_log("max_fall: %d\n", frame->max_fall);
+}
+
 static const char *
 hdmi_3d_structure_get_name(enum hdmi_3d_structure s3d_struct)
 {
···
 		break;
 	case HDMI_INFOFRAME_TYPE_VENDOR:
 		hdmi_vendor_any_infoframe_log(level, dev, &frame->vendor);
+		break;
+	case HDMI_INFOFRAME_TYPE_DRM:
+		hdmi_drm_infoframe_log(level, dev, &frame->drm);
 		break;
 	}
 }
···
 }
 
 /**
+ * hdmi_drm_infoframe_unpack() - unpack binary buffer to an HDMI DRM infoframe
+ * @frame: HDMI DRM infoframe
+ * @buffer: source buffer
+ * @size: size of buffer
+ *
+ * Unpacks the information contained in binary @buffer into a structured
+ * @frame of the HDMI Dynamic Range and Mastering (DRM) information frame.
+ * Also verifies the checksum as required by section 5.3.5 of the HDMI 1.4
+ * specification.
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+static int hdmi_drm_infoframe_unpack(struct hdmi_drm_infoframe *frame,
+				     const void *buffer, size_t size)
+{
+	const u8 *ptr = buffer;
+	const u8 *temp;
+	u8 x_lsb, x_msb;
+	u8 y_lsb, y_msb;
+	int ret;
+	int i;
+
+	if (size < HDMI_INFOFRAME_SIZE(DRM))
+		return -EINVAL;
+
+	if (ptr[0] != HDMI_INFOFRAME_TYPE_DRM ||
+	    ptr[1] != 1 ||
+	    ptr[2] != HDMI_DRM_INFOFRAME_SIZE)
+		return -EINVAL;
+
+	if (hdmi_infoframe_checksum(buffer, HDMI_INFOFRAME_SIZE(DRM)) != 0)
+		return -EINVAL;
+
+	ret = hdmi_drm_infoframe_init(frame);
+	if (ret)
+		return ret;
+
+	ptr += HDMI_INFOFRAME_HEADER_SIZE;
+
+	frame->eotf = ptr[0] & 0x7;
+	frame->metadata_type = ptr[1] & 0x7;
+
+	temp = ptr + 2;
+	for (i = 0; i < 3; i++) {
+		x_lsb = *temp++;
+		x_msb = *temp++;
+		frame->display_primaries[i].x = (x_msb << 8) | x_lsb;
+		y_lsb = *temp++;
+		y_msb = *temp++;
+		frame->display_primaries[i].y = (y_msb << 8) | y_lsb;
+	}
+
+	frame->white_point.x = (ptr[15] << 8) | ptr[14];
+	frame->white_point.y = (ptr[17] << 8) | ptr[16];
+
+	frame->max_display_mastering_luminance = (ptr[19] << 8) | ptr[18];
+	frame->min_display_mastering_luminance = (ptr[21] << 8) | ptr[20];
+	frame->max_cll = (ptr[23] << 8) | ptr[22];
+	frame->max_fall = (ptr[25] << 8) | ptr[24];
+
+	return 0;
+}
+
+/**
  * hdmi_infoframe_unpack() - unpack binary buffer to a HDMI infoframe
  * @frame: HDMI infoframe
  * @buffer: source buffer
···
 	switch (ptr[0]) {
 	case HDMI_INFOFRAME_TYPE_AVI:
 		ret = hdmi_avi_infoframe_unpack(&frame->avi, buffer, size);
+		break;
+	case HDMI_INFOFRAME_TYPE_DRM:
+		ret = hdmi_drm_infoframe_unpack(&frame->drm, buffer, size);
 		break;
 	case HDMI_INFOFRAME_TYPE_SPD:
 		ret = hdmi_spd_infoframe_unpack(&frame->spd, buffer, size);
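A pack/unpack round trip makes the wire format concrete: 4 header bytes plus a 26-byte payload, so HDMI_INFOFRAME_SIZE(DRM) is 30 bytes. The sketch below is illustrative only; the field values are arbitrary, and the eotf enum value is assumed from the enums this series adds to linux/hdmi.h:

	#include <linux/hdmi.h>

	static int example_drm_infoframe_roundtrip(void)
	{
		struct hdmi_drm_infoframe frame;
		union hdmi_infoframe unpacked;
		u8 buffer[HDMI_INFOFRAME_SIZE(DRM)];
		ssize_t len;

		hdmi_drm_infoframe_init(&frame);
		frame.eotf = HDMI_EOTF_SMPTE_ST2084;	/* PQ */
		frame.max_cll = 1000;			/* cd/m^2 */
		frame.max_fall = 400;

		len = hdmi_drm_infoframe_pack(&frame, buffer, sizeof(buffer));
		if (len < 0)
			return len;

		/* The generic unpack dispatches on the type byte (0x87). */
		return hdmi_infoframe_unpack(&unpacked, buffer, sizeof(buffer));
	}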
+2
include/drm/drm_atomic_state_helper.h
···
 struct drm_modeset_acquire_ctx;
 struct drm_device;
 
+void __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
+				    struct drm_crtc_state *state);
 void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc);
 void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
 					      struct drm_crtc_state *state);
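The new __drm_atomic_helper_crtc_reset() lets drivers with subclassed CRTC state stop open-coding the reset dance (the vkms conversion above is the in-tree example). A minimal sketch for a hypothetical driver wrapper, assuming the helper tolerates a NULL state as in the vkms usage:

	#include <linux/slab.h>
	#include <drm/drm_atomic_state_helper.h>

	/* Hypothetical subclassed state; base must come first. */
	struct foo_crtc_state {
		struct drm_crtc_state base;
		int custom_field;
	};

	#define to_foo_crtc_state(s) \
		container_of(s, struct foo_crtc_state, base)

	static void foo_crtc_reset(struct drm_crtc *crtc)
	{
		struct foo_crtc_state *state =
			kzalloc(sizeof(*state), GFP_KERNEL);

		if (crtc->state) {
			__drm_atomic_helper_crtc_destroy_state(crtc->state);
			kfree(to_foo_crtc_state(crtc->state));
		}

		/* On allocation failure crtc->state is simply left NULL. */
		__drm_atomic_helper_crtc_reset(crtc,
					       state ? &state->base : NULL);
	}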
+14
include/drm/drm_connector.h
···
 	 * Used by the atomic helpers to select the encoder, through the
 	 * &drm_connector_helper_funcs.atomic_best_encoder or
 	 * &drm_connector_helper_funcs.best_encoder callbacks.
+	 *
+	 * NOTE: Atomic drivers must fill this out (either themselves or through
+	 * helpers), for otherwise the GETCONNECTOR and GETENCODER IOCTLs will
+	 * not return correct data to userspace.
 	 */
 	struct drm_encoder *best_encoder;
 
···
 	 * and the connector bpc limitations obtained from edid.
 	 */
 	u8 max_bpc;
+
+	/**
+	 * @hdr_output_metadata:
+	 * DRM blob property for HDR output metadata
+	 */
+	struct drm_property_blob *hdr_output_metadata;
 };
 
 /**
···
 	 * &drm_mode_config.connector_free_work.
 	 */
 	struct llist_node free_node;
+
+	/* HDR metadata */
+	struct hdr_output_metadata hdr_output_metadata;
+	struct hdr_sink_metadata hdr_sink_metadata;
 };
 
 #define obj_to_connector(x) container_of(x, struct drm_connector, base)
+4
include/drm/drm_device.h
···
 struct drm_sg_mem;
 struct drm_local_map;
 struct drm_vma_offset_manager;
+struct drm_vram_mm;
 struct drm_fb_helper;
 
 struct inode;
···
 
 	/** @vma_offset_manager: GEM information */
 	struct drm_vma_offset_manager *vma_offset_manager;
+
+	/** @vram_mm: VRAM MM memory manager */
+	struct drm_vram_mm *vram_mm;
 
 	/**
 	 * @switch_power_state:
+5
include/drm/drm_edid.h
···
 
 #include <linux/types.h>
 #include <linux/hdmi.h>
+#include <drm/drm_mode.h>
 
 struct drm_device;
 struct i2c_adapter;
···
 				   struct drm_connector *connector,
 				   const struct drm_display_mode *mode,
 				   enum hdmi_quantization_range rgb_quant_range);
+
+int
+drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame,
+				    const struct drm_connector_state *conn_state);
 
 /**
  * drm_eld_mnl - Get ELD monitor name length in bytes.
-10
include/drm/drm_fb_helper.h
···
 
 struct drm_fb_helper_crtc {
 	struct drm_mode_set mode_set;
-	struct drm_display_mode *desired_mode;
-	int x, y;
-	int rotation;
 };
 
 /**
···
 	struct drm_fb_helper_crtc *crtc_info;
 	int connector_count;
 	int connector_info_alloc_count;
-	/**
-	 * @sw_rotations:
-	 * Bitmask of all rotations requested for panel-orientation which
-	 * could not be handled in hardware. If only one bit is set
-	 * fbdev->fbcon_rotate_hint gets set to the requested rotation.
-	 */
-	int sw_rotations;
 	/**
 	 * @connector_info:
 	 *
+44 -6
include/drm/drm_fourcc.h
···
 	return info->is_yuv && info->hsub == 1 && info->vsub == 1;
 }
 
+/**
+ * drm_format_info_plane_width - width of the plane given the first plane
+ * @info: pixel format info
+ * @width: width of the first plane
+ * @plane: plane index
+ *
+ * Returns:
+ * The width of @plane, given that the width of the first plane is @width.
+ */
+static inline
+int drm_format_info_plane_width(const struct drm_format_info *info, int width,
+				int plane)
+{
+	if (!info || plane >= info->num_planes)
+		return 0;
+
+	if (plane == 0)
+		return width;
+
+	return width / info->hsub;
+}
+
+/**
+ * drm_format_info_plane_height - height of the plane given the first plane
+ * @info: pixel format info
+ * @height: height of the first plane
+ * @plane: plane index
+ *
+ * Returns:
+ * The height of @plane, given that the height of the first plane is @height.
+ */
+static inline
+int drm_format_info_plane_height(const struct drm_format_info *info, int height,
+				 int plane)
+{
+	if (!info || plane >= info->num_planes)
+		return 0;
+
+	if (plane == 0)
+		return height;
+
+	return height / info->vsub;
+}
+
 const struct drm_format_info *__drm_format_info(u32 format);
 const struct drm_format_info *drm_format_info(u32 format);
 const struct drm_format_info *
···
 uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth);
 uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
 				     uint32_t bpp, uint32_t depth);
-int drm_format_num_planes(uint32_t format);
-int drm_format_plane_cpp(uint32_t format, int plane);
-int drm_format_horz_chroma_subsampling(uint32_t format);
-int drm_format_vert_chroma_subsampling(uint32_t format);
-int drm_format_plane_width(int width, uint32_t format, int plane);
-int drm_format_plane_height(int height, uint32_t format, int plane);
 unsigned int drm_format_info_block_width(const struct drm_format_info *info,
 					 int plane);
 unsigned int drm_format_info_block_height(const struct drm_format_info *info,
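These inline helpers replace the removed per-fourcc lookups (drm_format_plane_width() and friends) and operate on the struct drm_format_info a driver already holds via fb->format; the vc4 and zte conversions above show the same shift. For example, for a 1920x1080 NV12 framebuffer (hsub = vsub = 2), plane 1, the interleaved CbCr plane, comes out as 960x540 (sketch only):

	#include <drm/drm_fourcc.h>
	#include <drm/drm_framebuffer.h>
	#include <drm/drm_print.h>

	/* Sketch: per-plane dimensions straight from the cached format info. */
	static void example_plane_dims(struct drm_framebuffer *fb)
	{
		const struct drm_format_info *info = fb->format;
		int i;

		for (i = 0; i < info->num_planes; i++) {
			int w = drm_format_info_plane_width(info, fb->width, i);
			int h = drm_format_info_plane_height(info, fb->height, i);

			/* DRM_FORMAT_NV12 at 1920x1080: plane 0 is 1920x1080
			 * with cpp 1, plane 1 is 960x540 with cpp 2. */
			DRM_DEBUG("plane %d: %dx%d, cpp=%u\n",
				  i, w, h, info->cpp[i]);
		}
	}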
+162
include/drm/drm_gem_vram_helper.h
···
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef DRM_GEM_VRAM_HELPER_H
+#define DRM_GEM_VRAM_HELPER_H
+
+#include <drm/drm_gem.h>
+#include <drm/ttm/ttm_bo_api.h>
+#include <drm/ttm/ttm_placement.h>
+#include <linux/kernel.h> /* for container_of() */
+
+struct drm_mode_create_dumb;
+struct drm_vram_mm_funcs;
+struct filp;
+struct vm_area_struct;
+
+#define DRM_GEM_VRAM_PL_FLAG_VRAM	TTM_PL_FLAG_VRAM
+#define DRM_GEM_VRAM_PL_FLAG_SYSTEM	TTM_PL_FLAG_SYSTEM
+
+/*
+ * Buffer-object helpers
+ */
+
+/**
+ * struct drm_gem_vram_object - GEM object backed by VRAM
+ * @gem:	GEM object
+ * @bo:		TTM buffer object
+ * @kmap:	Mapping information for @bo
+ * @placement:	TTM placement information. Supported placements are \
+	%TTM_PL_VRAM and %TTM_PL_SYSTEM
+ * @placements:	TTM placement information.
+ * @pin_count:	Pin counter
+ *
+ * The type struct drm_gem_vram_object represents a GEM object that is
+ * backed by VRAM. It can be used for simple framebuffer devices with
+ * dedicated memory. The buffer object can be evicted to system memory if
+ * video memory becomes scarce.
+ */
+struct drm_gem_vram_object {
+	struct drm_gem_object gem;
+	struct ttm_buffer_object bo;
+	struct ttm_bo_kmap_obj kmap;
+
+	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
+	struct ttm_placement placement;
+	struct ttm_place placements[2];
+
+	int pin_count;
+};
+
+/**
+ * Returns the container of type &struct drm_gem_vram_object
+ * for field bo.
+ * @bo:		the VRAM buffer object
+ * Returns:	The containing GEM VRAM object
+ */
+static inline struct drm_gem_vram_object *drm_gem_vram_of_bo(
+	struct ttm_buffer_object *bo)
+{
+	return container_of(bo, struct drm_gem_vram_object, bo);
+}
+
+/**
+ * Returns the container of type &struct drm_gem_vram_object
+ * for field gem.
+ * @gem:	the GEM object
+ * Returns:	The containing GEM VRAM object
+ */
+static inline struct drm_gem_vram_object *drm_gem_vram_of_gem(
+	struct drm_gem_object *gem)
+{
+	return container_of(gem, struct drm_gem_vram_object, gem);
+}
+
+struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
+						struct ttm_bo_device *bdev,
+						size_t size,
+						unsigned long pg_align,
+						bool interruptible);
+void drm_gem_vram_put(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_lock(struct drm_gem_vram_object *gbo, bool no_wait);
+void drm_gem_vram_unlock(struct drm_gem_vram_object *gbo);
+u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
+s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
+int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
+			    unsigned long pl_flag);
+int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
+int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo);
+void *drm_gem_vram_kmap_at(struct drm_gem_vram_object *gbo, bool map,
+			   bool *is_iomem, struct ttm_bo_kmap_obj *kmap);
+void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
+			bool *is_iomem);
+void drm_gem_vram_kunmap_at(struct drm_gem_vram_object *gbo,
+			    struct ttm_bo_kmap_obj *kmap);
+void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo);
+
+int drm_gem_vram_fill_create_dumb(struct drm_file *file,
+				  struct drm_device *dev,
+				  struct ttm_bo_device *bdev,
+				  unsigned long pg_align,
+				  bool interruptible,
+				  struct drm_mode_create_dumb *args);
+
+/*
+ * Helpers for struct ttm_bo_driver
+ */
+
+void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo,
+					struct ttm_placement *pl);
+
+int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo,
+					 struct file *filp);
+
+extern const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs;
+
+/*
+ * Helpers for struct drm_driver
+ */
+
+void drm_gem_vram_driver_gem_free_object_unlocked(struct drm_gem_object *gem);
+int drm_gem_vram_driver_dumb_create(struct drm_file *file,
+				    struct drm_device *dev,
+				    struct drm_mode_create_dumb *args);
+int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file,
+					 struct drm_device *dev,
+					 uint32_t handle, uint64_t *offset);
+
+/**
+ * define DRM_GEM_VRAM_DRIVER - default callback functions for \
+	&struct drm_driver
+ *
+ * Drivers that use VRAM MM and GEM VRAM can use this macro to initialize
+ * &struct drm_driver with default functions.
+ */
+#define DRM_GEM_VRAM_DRIVER \
+	.gem_free_object_unlocked = \
+		drm_gem_vram_driver_gem_free_object_unlocked, \
+	.dumb_create		  = drm_gem_vram_driver_dumb_create, \
+	.dumb_map_offset	  = drm_gem_vram_driver_dumb_mmap_offset
+
+/*
+ * PRIME helpers for struct drm_driver
+ */
+
+int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *obj);
+void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *obj);
+void *drm_gem_vram_driver_gem_prime_vmap(struct drm_gem_object *obj);
+void drm_gem_vram_driver_gem_prime_vunmap(struct drm_gem_object *obj,
+					  void *vaddr);
+int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *obj,
+				       struct vm_area_struct *vma);
+
+#define DRM_GEM_VRAM_DRIVER_PRIME \
+	.gem_prime_export = drm_gem_prime_export, \
+	.gem_prime_import = drm_gem_prime_import, \
+	.gem_prime_pin	  = drm_gem_vram_driver_gem_prime_pin, \
+	.gem_prime_unpin  = drm_gem_vram_driver_gem_prime_unpin, \
+	.gem_prime_vmap	  = drm_gem_vram_driver_gem_prime_vmap, \
+	.gem_prime_vunmap = drm_gem_vram_driver_gem_prime_vunmap, \
+	.gem_prime_mmap	  = drm_gem_vram_driver_gem_prime_mmap
+
+#endif
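Tying the helper together, a simple VRAM driver wires the default callbacks into its drm_driver and then uses the object functions directly. A condensed sketch (driver names hypothetical; calls follow the declarations above, with the bochs/vbox conversions in this series as the pattern):

	#include <drm/drm_drv.h>
	#include <drm/drm_gem_vram_helper.h>
	#include <drm/drm_vram_mm_helper.h>

	static struct drm_driver foo_driver = {
		.driver_features = DRIVER_GEM | DRIVER_MODESET,
		DRM_GEM_VRAM_DRIVER,	/* free_object, dumb_create, ... */
	};

	/* Sketch: allocate a scanout buffer, pin it to VRAM, query its offset. */
	static int foo_create_scanout(struct drm_device *dev, size_t size)
	{
		struct drm_gem_vram_object *gbo;
		s64 offset;
		int ret;

		gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
					  size, 0, false);
		if (IS_ERR(gbo))
			return PTR_ERR(gbo);

		ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
		if (ret) {
			drm_gem_vram_put(gbo);
			return ret;
		}

		offset = drm_gem_vram_offset(gbo);	/* negative if unpinned */
		/* ... program the CRTC base address with offset ... */
		return 0;
	}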
+7
include/drm/drm_mode_config.h
···
 	 */
 	struct drm_property *writeback_out_fence_ptr_property;
 
+	/**
+	 * @hdr_output_metadata_property: Connector property containing HDR
+	 * metadata. This will be provided by userspace compositors based
+	 * on HDR content.
+	 */
+	struct drm_property *hdr_output_metadata_property;
+
 	/* dumb ioctl parameters */
 	uint32_t preferred_depth, prefer_shadow;
 
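The blob property itself is created once by the core alongside the other standard connector properties; a driver that supports HDR output then opts each capable connector in. Roughly (a sketch based on how this series is used by drivers, not a snippet from the patch):

	#include <drm/drm_connector.h>
	#include <drm/drm_mode_config.h>

	/* Sketch: expose HDR_OUTPUT_METADATA on a connector. */
	static void foo_attach_hdr_property(struct drm_connector *connector)
	{
		struct drm_device *dev = connector->dev;

		drm_object_attach_property(&connector->base,
			dev->mode_config.hdr_output_metadata_property, 0);
	}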
+102
include/drm/drm_vram_mm_helper.h
···
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef DRM_VRAM_MM_HELPER_H
+#define DRM_VRAM_MM_HELPER_H
+
+#include <drm/ttm/ttm_bo_driver.h>
+
+struct drm_device;
+
+/**
+ * struct drm_vram_mm_funcs - Callback functions for &struct drm_vram_mm
+ * @evict_flags:	Provides an implementation for struct \
+	&ttm_bo_driver.evict_flags
+ * @verify_access:	Provides an implementation for \
+	struct &ttm_bo_driver.verify_access
+ *
+ * These callback function integrate VRAM MM with TTM buffer objects. New
+ * functions can be added if necessary.
+ */
+struct drm_vram_mm_funcs {
+	void (*evict_flags)(struct ttm_buffer_object *bo,
+			    struct ttm_placement *placement);
+	int (*verify_access)(struct ttm_buffer_object *bo, struct file *filp);
+};
+
+/**
+ * struct drm_vram_mm - An instance of VRAM MM
+ * @vram_base:	Base address of the managed video memory
+ * @vram_size:	Size of the managed video memory in bytes
+ * @bdev:	The TTM BO device.
+ * @funcs:	TTM BO functions
+ *
+ * The fields &struct drm_vram_mm.vram_base and
+ * &struct drm_vram_mm.vram_size are managed by VRAM MM, but are
+ * available for public read access. Use the field
+ * &struct drm_vram_mm.bdev to access the TTM BO device.
+ */
+struct drm_vram_mm {
+	uint64_t vram_base;
+	size_t vram_size;
+
+	struct ttm_bo_device bdev;
+
+	const struct drm_vram_mm_funcs *funcs;
+};
+
+/**
+ * drm_vram_mm_of_bdev() - \
+	Returns the container of type &struct ttm_bo_device for field bdev.
+ * @bdev:	the TTM BO device
+ *
+ * Returns:
+ * The containing instance of &struct drm_vram_mm
+ */
+static inline struct drm_vram_mm *drm_vram_mm_of_bdev(
+	struct ttm_bo_device *bdev)
+{
+	return container_of(bdev, struct drm_vram_mm, bdev);
+}
+
+int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
+		     uint64_t vram_base, size_t vram_size,
+		     const struct drm_vram_mm_funcs *funcs);
+void drm_vram_mm_cleanup(struct drm_vram_mm *vmm);
+
+int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
+		     struct drm_vram_mm *vmm);
+
+/*
+ * Helpers for integration with struct drm_device
+ */
+
+struct drm_vram_mm *drm_vram_helper_alloc_mm(
+	struct drm_device *dev, uint64_t vram_base, size_t vram_size,
+	const struct drm_vram_mm_funcs *funcs);
+void drm_vram_helper_release_mm(struct drm_device *dev);
+
+/*
+ * Helpers for &struct file_operations
+ */
+
+int drm_vram_mm_file_operations_mmap(
+	struct file *filp, struct vm_area_struct *vma);
+
+/**
+ * define DRM_VRAM_MM_FILE_OPERATIONS - default callback functions for \
+	&struct file_operations
+ *
+ * Drivers that use VRAM MM can use this macro to initialize
+ * &struct file_operations with default functions.
+ */
+#define DRM_VRAM_MM_FILE_OPERATIONS \
+	.llseek		= no_llseek, \
+	.read		= drm_read, \
+	.poll		= drm_poll, \
+	.unlocked_ioctl = drm_ioctl, \
+	.compat_ioctl	= drm_compat_ioctl, \
+	.mmap		= drm_vram_mm_file_operations_mmap, \
+	.open		= drm_open, \
+	.release	= drm_release \
+
+#endif
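On the file_operations side, the only driver-visible change is routing mmap through the VRAM MM; everything else stays the stock DRM callbacks, which is exactly what the convenience macro expands to. A sketch of typical use (struct name hypothetical):

	#include <linux/fs.h>
	#include <drm/drm_vram_mm_helper.h>

	static const struct file_operations foo_fops = {
		.owner = THIS_MODULE,
		DRM_VRAM_MM_FILE_OPERATIONS	/* .mmap -> VRAM MM */
	};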
-25
include/drm/gma_drm.h
··· 1 - /************************************************************************** 2 - * Copyright (c) 2007-2011, Intel Corporation. 3 - * All Rights Reserved. 4 - * Copyright (c) 2008, Tungsten Graphics Inc. Cedar Park, TX., USA. 5 - * All Rights Reserved. 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms and conditions of the GNU General Public License, 9 - * version 2, as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program; if not, write to the Free Software Foundation, Inc., 18 - * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 19 - * 20 - **************************************************************************/ 21 - 22 - #ifndef _GMA_DRM_H_ 23 - #define _GMA_DRM_H_ 24 - 25 - #endif
+3 -5
include/drm/gpu_scheduler.h
··· 167 167 * @sched: the scheduler instance on which this job is scheduled. 168 168 * @s_fence: contains the fences for the scheduling of job. 169 169 * @finish_cb: the callback for the finished fence. 170 - * @finish_work: schedules the function @drm_sched_job_finish once the job has 171 - * finished to remove the job from the 172 - * @drm_gpu_scheduler.ring_mirror_list. 173 170 * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list. 174 171 * @id: a unique id assigned to each job scheduled on the scheduler. 175 172 * @karma: increment on every hang caused by this job. If this exceeds the hang ··· 185 188 struct drm_gpu_scheduler *sched; 186 189 struct drm_sched_fence *s_fence; 187 190 struct dma_fence_cb finish_cb; 188 - struct work_struct finish_work; 189 191 struct list_head node; 190 192 uint64_t id; 191 193 atomic_t karma; ··· 259 263 * guilty and it will be considered for scheduling further. 260 264 * @num_jobs: the number of jobs in queue in the scheduler 261 265 * @ready: marks if the underlying HW is ready to work 266 + * @free_guilty: A hint to the timeout handler to free the guilty job. 262 267 * 263 268 * One scheduler is implemented for each hardware ring. 264 269 */ ··· 280 283 int hang_limit; 281 284 atomic_t num_jobs; 282 285 bool ready; 286 + bool free_guilty; 283 287 }; 284 288 285 289 int drm_sched_init(struct drm_gpu_scheduler *sched, ··· 294 296 void *owner); 295 297 void drm_sched_job_cleanup(struct drm_sched_job *job); 296 298 void drm_sched_wakeup(struct drm_gpu_scheduler *sched); 297 - void drm_sched_stop(struct drm_gpu_scheduler *sched); 299 + void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad); 298 300 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery); 299 301 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched); 300 302 void drm_sched_increase_karma(struct drm_sched_job *bad);
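With finish_work gone and drm_sched_stop() now taking the bad job, a driver's recovery path follows a stop/reset/resubmit/start sequence. A hedged sketch of a timeout handler under those assumptions; my_hw_reset() is a hypothetical driver-specific reset:

	static void my_timedout_job(struct drm_sched_job *bad)
	{
		struct drm_gpu_scheduler *sched = bad->sched;

		/* Park the scheduler thread; passing the bad job lets the
		 * scheduler handle it specially during job cleanup.
		 */
		drm_sched_stop(sched, bad);

		/* Bump the offender's karma so it can be flagged guilty. */
		drm_sched_increase_karma(bad);

		my_hw_reset();		/* driver-specific GPU reset */

		/* Push unfinished jobs back to the hardware and restart the
		 * scheduler; full_recovery re-arms timeout handling.
		 */
		drm_sched_resubmit_jobs(sched);
		drm_sched_start(sched, true);
	}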
+38 -9
include/linux/dma-buf.h
··· 39 39 40 40 /** 41 41 * struct dma_buf_ops - operations possible on struct dma_buf 42 - * @map_atomic: [optional] maps a page from the buffer into kernel address 43 - * space, users may not block until the subsequent unmap call. 44 - * This callback must not sleep. 45 - * @unmap_atomic: [optional] unmaps a atomically mapped page from the buffer. 46 - * This Callback must not sleep. 47 - * @map: [optional] maps a page from the buffer into kernel address space. 48 - * @unmap: [optional] unmaps a page from the buffer. 49 42 * @vmap: [optional] creates a virtual mapping for the buffer into kernel 50 43 * address space. Same restrictions as for vmap and friends apply. 51 44 * @vunmap: [optional] unmaps a vmap from the buffer 52 45 */ 53 46 struct dma_buf_ops { 47 + /** 48 + * @cache_sgt_mapping: 49 + * 50 + * If true, the framework will cache the first mapping made for each 51 + * attachment. This avoids creating mappings for attachments multiple 52 + * times. 53 + */ 54 + bool cache_sgt_mapping; 55 + 54 56 /** 55 57 * @attach: 56 58 * ··· 207 205 * to be restarted. 208 206 */ 209 207 int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction); 210 - void *(*map)(struct dma_buf *, unsigned long); 211 - void (*unmap)(struct dma_buf *, unsigned long, void *); 212 208 213 209 /** 214 210 * @mmap: ··· 244 244 * 0 on success or a negative error code on failure. 245 245 */ 246 246 int (*mmap)(struct dma_buf *, struct vm_area_struct *vma); 247 + 248 + /** 249 + * @map: 250 + * 251 + * Maps a page from the buffer into kernel address space. The page is 252 + * specified by offset into the buffer in PAGE_SIZE units. 253 + * 254 + * This callback is optional. 255 + * 256 + * Returns: 257 + * 258 + * Virtual address pointer where the requested page can be accessed. NULL 259 + * on error or when this function is unimplemented by the exporter. 260 + */ 261 + void *(*map)(struct dma_buf *, unsigned long); 262 + 263 + /** 264 + * @unmap: 265 + * 266 + * Unmaps a page from the buffer. Page offset and address pointer should 267 + * be the same as those passed to and returned by the matching call to @map. 268 + * 269 + * This callback is optional. 270 + */ 271 + void (*unmap)(struct dma_buf *, unsigned long, void *); 247 272 248 273 void *(*vmap)(struct dma_buf *); 249 274 void (*vunmap)(struct dma_buf *, void *vaddr); ··· 332 307 * @dmabuf: buffer for this attachment. 333 308 * @dev: device attached to the buffer. 334 309 * @node: list of dma_buf_attachment. 310 + * @sgt: cached mapping. 311 + * @dir: direction of cached mapping. 335 312 * @priv: exporter specific attachment data. 336 313 * 337 314 * This structure holds the attachment information between the dma_buf buffer ··· 349 322 struct dma_buf *dmabuf; 350 323 struct device *dev; 351 324 struct list_head node; 325 + struct sg_table *sgt; 326 + enum dma_data_direction dir; 352 327 void *priv; 353 328 }; 354 329
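Taken together, an exporter can now drop the kmap machinery entirely and let the core cache sg-table mappings per attachment. A sketch of such an ops table; the my_*() callbacks are hypothetical:

	static const struct dma_buf_ops my_dmabuf_ops = {
		/* Let dma-buf cache the first mapping per attachment in
		 * attach->sgt/attach->dir instead of doing it in the driver.
		 */
		.cache_sgt_mapping = true,
		.map_dma_buf	= my_map_dma_buf,
		.unmap_dma_buf	= my_unmap_dma_buf,
		.release	= my_release,
		/* .mmap, .map, .unmap and .vmap are all optional and are
		 * deliberately left unset here.
		 */
	};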
+55
include/linux/hdmi.h
··· 47 47 HDMI_INFOFRAME_TYPE_AVI = 0x82, 48 48 HDMI_INFOFRAME_TYPE_SPD = 0x83, 49 49 HDMI_INFOFRAME_TYPE_AUDIO = 0x84, 50 + HDMI_INFOFRAME_TYPE_DRM = 0x87, 50 51 }; 51 52 52 53 #define HDMI_IEEE_OUI 0x000c03 ··· 56 55 #define HDMI_AVI_INFOFRAME_SIZE 13 57 56 #define HDMI_SPD_INFOFRAME_SIZE 25 58 57 #define HDMI_AUDIO_INFOFRAME_SIZE 10 58 + #define HDMI_DRM_INFOFRAME_SIZE 26 59 59 60 60 #define HDMI_INFOFRAME_SIZE(type) \ 61 61 (HDMI_INFOFRAME_HEADER_SIZE + HDMI_ ## type ## _INFOFRAME_SIZE) ··· 154 152 HDMI_CONTENT_TYPE_GAME, 155 153 }; 156 154 155 + enum hdmi_metadata_type { 156 + HDMI_STATIC_METADATA_TYPE1 = 1, 157 + }; 158 + 159 + enum hdmi_eotf { 160 + HDMI_EOTF_TRADITIONAL_GAMMA_SDR, 161 + HDMI_EOTF_TRADITIONAL_GAMMA_HDR, 162 + HDMI_EOTF_SMPTE_ST2084, 163 + HDMI_EOTF_BT_2100_HLG, 164 + }; 165 + 157 166 struct hdmi_avi_infoframe { 158 167 enum hdmi_infoframe_type type; 159 168 unsigned char version; ··· 188 175 unsigned short right_bar; 189 176 }; 190 177 178 + /* DRM Infoframe as per CTA 861.G spec */ 179 + struct hdmi_drm_infoframe { 180 + enum hdmi_infoframe_type type; 181 + unsigned char version; 182 + unsigned char length; 183 + enum hdmi_eotf eotf; 184 + enum hdmi_metadata_type metadata_type; 185 + struct { 186 + u16 x, y; 187 + } display_primaries[3]; 188 + struct { 189 + u16 x, y; 190 + } white_point; 191 + u16 max_display_mastering_luminance; 192 + u16 min_display_mastering_luminance; 193 + u16 max_cll; 194 + u16 max_fall; 195 + }; 196 + 191 197 int hdmi_avi_infoframe_init(struct hdmi_avi_infoframe *frame); 192 198 ssize_t hdmi_avi_infoframe_pack(struct hdmi_avi_infoframe *frame, void *buffer, 193 199 size_t size); 194 200 ssize_t hdmi_avi_infoframe_pack_only(const struct hdmi_avi_infoframe *frame, 195 201 void *buffer, size_t size); 196 202 int hdmi_avi_infoframe_check(struct hdmi_avi_infoframe *frame); 203 + int hdmi_drm_infoframe_init(struct hdmi_drm_infoframe *frame); 204 + ssize_t hdmi_drm_infoframe_pack(struct hdmi_drm_infoframe *frame, void *buffer, 205 + size_t size); 206 + ssize_t hdmi_drm_infoframe_pack_only(const struct hdmi_drm_infoframe *frame, 207 + void *buffer, size_t size); 208 + int hdmi_drm_infoframe_check(struct hdmi_drm_infoframe *frame); 197 209 198 210 enum hdmi_spd_sdi { 199 211 HDMI_SPD_SDI_UNKNOWN, ··· 358 320 unsigned int s3d_ext_data; 359 321 }; 360 322 323 + /* HDR Metadata as per 861.G spec */ 324 + struct hdr_static_metadata { 325 + __u8 eotf; 326 + __u8 metadata_type; 327 + __u16 max_cll; 328 + __u16 max_fall; 329 + __u16 min_cll; 330 + }; 331 + 332 + struct hdr_sink_metadata { 333 + __u32 metadata_type; 334 + union { 335 + struct hdr_static_metadata hdmi_type1; 336 + }; 337 + }; 338 + 361 339 int hdmi_vendor_infoframe_init(struct hdmi_vendor_infoframe *frame); 362 340 ssize_t hdmi_vendor_infoframe_pack(struct hdmi_vendor_infoframe *frame, 363 341 void *buffer, size_t size); ··· 409 355 struct hdmi_spd_infoframe spd; 410 356 union hdmi_vendor_any_infoframe vendor; 411 357 struct hdmi_audio_infoframe audio; 358 + struct hdmi_drm_infoframe drm; 412 359 }; 413 360 414 361 ssize_t hdmi_infoframe_pack(union hdmi_infoframe *frame, void *buffer,
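The new functions mirror the existing AVI/SPD/audio helpers. A minimal sketch of building and packing a Dynamic Range and Mastering (DRM) infoframe; the luminance values are placeholders and the buffer must hold at least HDMI_INFOFRAME_SIZE(DRM) bytes:

	static ssize_t my_pack_hdr_infoframe(u8 *buffer, size_t size)
	{
		struct hdmi_drm_infoframe frame;

		hdmi_drm_infoframe_init(&frame);
		frame.eotf = HDMI_EOTF_SMPTE_ST2084;	/* PQ transfer */
		frame.metadata_type = HDMI_STATIC_METADATA_TYPE1;
		frame.max_display_mastering_luminance = 1000;	/* placeholder */
		frame.max_cll = 1000;				/* placeholder */
		frame.max_fall = 400;				/* placeholder */

		/* Validates the frame, then writes header and payload. */
		return hdmi_drm_infoframe_pack(&frame, buffer, size);
	}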
+1
include/uapi/drm/drm.h
··· 50 50 51 51 #else /* One of the BSDs */ 52 52 53 + #include <stdint.h> 53 54 #include <sys/ioccom.h> 54 55 #include <sys/types.h> 55 56 typedef int8_t __s8;
+23
include/uapi/drm/drm_mode.h
··· 630 630 __u16 reserved; 631 631 }; 632 632 633 + /* HDR Metadata Infoframe as per 861.G spec */ 634 + struct hdr_metadata_infoframe { 635 + __u8 eotf; 636 + __u8 metadata_type; 637 + struct { 638 + __u16 x, y; 639 + } display_primaries[3]; 640 + struct { 641 + __u16 x, y; 642 + } white_point; 643 + __u16 max_display_mastering_luminance; 644 + __u16 min_display_mastering_luminance; 645 + __u16 max_cll; 646 + __u16 max_fall; 647 + }; 648 + 649 + struct hdr_output_metadata { 650 + __u32 metadata_type; 651 + union { 652 + struct hdr_metadata_infoframe hdmi_metadata_type1; 653 + }; 654 + }; 655 + 633 656 #define DRM_MODE_PAGE_FLIP_EVENT 0x01 634 657 #define DRM_MODE_PAGE_FLIP_ASYNC 0x02 635 658 #define DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE 0x4
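From userspace the struct is passed to the kernel as a property blob. A hedged libdrm sketch, assuming fd, connector_id and the id of the connector's HDR_OUTPUT_METADATA property (prop_id) have already been obtained, with error handling elided:

	struct hdr_output_metadata meta = {
		.metadata_type = 0,		/* static metadata type 1 */
		.hdmi_metadata_type1 = {
			.eotf = 2,		/* SMPTE ST 2084 (PQ) */
			.metadata_type = 0,
			.max_cll = 1000,	/* placeholder values */
			.max_fall = 400,
		},
	};
	uint32_t blob_id;

	drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id);
	drmModeObjectSetProperty(fd, connector_id, DRM_MODE_OBJECT_CONNECTOR,
				 prop_id, blob_id);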
+28
include/uapi/drm/v3d_drm.h
··· 37 37 #define DRM_V3D_GET_PARAM 0x04 38 38 #define DRM_V3D_GET_BO_OFFSET 0x05 39 39 #define DRM_V3D_SUBMIT_TFU 0x06 40 + #define DRM_V3D_SUBMIT_CSD 0x07 40 41 41 42 #define DRM_IOCTL_V3D_SUBMIT_CL DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_CL, struct drm_v3d_submit_cl) 42 43 #define DRM_IOCTL_V3D_WAIT_BO DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_WAIT_BO, struct drm_v3d_wait_bo) ··· 46 45 #define DRM_IOCTL_V3D_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_GET_PARAM, struct drm_v3d_get_param) 47 46 #define DRM_IOCTL_V3D_GET_BO_OFFSET DRM_IOWR(DRM_COMMAND_BASE + DRM_V3D_GET_BO_OFFSET, struct drm_v3d_get_bo_offset) 48 47 #define DRM_IOCTL_V3D_SUBMIT_TFU DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_TFU, struct drm_v3d_submit_tfu) 48 + #define DRM_IOCTL_V3D_SUBMIT_CSD DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_CSD, struct drm_v3d_submit_csd) 49 49 50 50 /** 51 51 * struct drm_v3d_submit_cl - ioctl argument for submitting commands to the 3D ··· 192 190 DRM_V3D_PARAM_V3D_CORE0_IDENT1, 193 191 DRM_V3D_PARAM_V3D_CORE0_IDENT2, 194 192 DRM_V3D_PARAM_SUPPORTS_TFU, 193 + DRM_V3D_PARAM_SUPPORTS_CSD, 195 194 }; 196 195 197 196 struct drm_v3d_get_param { ··· 230 227 */ 231 228 __u32 in_sync; 232 229 /* Sync object to signal when the TFU job is done. */ 230 + __u32 out_sync; 231 + }; 232 + 233 + /* Submits a compute shader for dispatch. This job will block on any 234 + * previous compute shaders submitted on this fd, and any other 235 + * synchronization must be performed with in_sync/out_sync. 236 + */ 237 + struct drm_v3d_submit_csd { 238 + __u32 cfg[7]; 239 + __u32 coef[4]; 240 + 241 + /* Pointer to a u32 array of the BOs that are referenced by the job. 242 + */ 243 + __u64 bo_handles; 244 + 245 + /* Number of BO handles passed in (size is that times 4). */ 246 + __u32 bo_handle_count; 247 + 248 + /* sync object to block on before running the CSD job. Each 249 + * CSD job will execute in the order submitted to its FD. 250 + * Synchronization against rendering/TFU jobs or CSD from 251 + * other fds requires using sync objects. 252 + */ 253 + __u32 in_sync; 254 + /* Sync object to signal when the CSD job is done. */ 233 255 __u32 out_sync; 234 256 }; 235 257
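A hypothetical userspace sketch of dispatching one compute job through the new ioctl; the cfg[]/coef[] dispatch parameters are hardware-specific and left zeroed, and fd, bo_handle and out_syncobj are assumed valid (the header include path depends on the installation):

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include "v3d_drm.h"

	static int submit_csd(int fd, uint32_t bo_handle, uint32_t out_syncobj)
	{
		uint32_t bos[] = { bo_handle };
		struct drm_v3d_submit_csd csd;

		memset(&csd, 0, sizeof(csd));
		/* csd.cfg[0..6] and csd.coef[0..3] would carry the real
		 * dispatch configuration for the shader.
		 */
		csd.bo_handles = (uintptr_t)bos;
		csd.bo_handle_count = 1;
		csd.in_sync = 0;		/* nothing to wait on first */
		csd.out_sync = out_syncobj;	/* signaled when the job is done */

		return ioctl(fd, DRM_IOCTL_V3D_SUBMIT_CSD, &csd);
	}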