
Merge tag 'drm-misc-next-2024-01-11' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.9:

UAPI Changes:

virtio:
- add Venus capset defines

Cross-subsystem Changes:

Core Changes:

- fix drm_fixp2int_ceil()
- documentation fixes
- clean ups
- allow DRM_MM_DEBUG with DRM=m
- build fixes for debugfs support
- EDID cleanups
- sched: error-handling fixes
- ttm: add tests
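
For context on the drm_fixp2int_ceil() fix above: drm_fixed.h converts s31.32 fixed-point values to integers, and the ceiling variant is easy to get wrong for negative inputs. A minimal userspace sketch of the bias-then-floor technique (helper names mirror, but do not reproduce, the kernel's drm_fixed.h):

```c
#include <assert.h>
#include <stdint.h>

/* s31.32 signed fixed point, in the spirit of the kernel's drm_fixed.h.
 * This is a userspace sketch, not the kernel's implementation. */
typedef int64_t fixp_t;
#define FIXP_ONE ((fixp_t)1 << 32)

static fixp_t int2fixp(int a)
{
	return (fixp_t)a << 32;
}

/* Truncate toward negative infinity: arithmetic right shift floors
 * (implementation-defined in ISO C, but arithmetic on common compilers). */
static int fixp2int(fixp_t a)
{
	return (int)(a >> 32);
}

/* Ceiling conversion: bias by just under one, then floor. This form is
 * also correct for negative inputs, the case that is easy to get wrong. */
static int fixp2int_ceil(fixp_t a)
{
	return fixp2int(a + FIXP_ONE - 1);
}
```

The key property is that `fixp2int_ceil(-1.5) == -1` while the plain floor conversion gives `-2`.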

Driver Changes:

bridge:
- ite-it6505: fix DP link-training bug
- samsung-dsim: fix error checking in probe
- tc358767: fix regmap usage

efifb:
- use copy of global screen_info state

hisilicon:
- fix EDID includes

mgag200:
- improve ioremap usage
- convert to struct drm_edid

nouveau:
- disp: use kmemdup()
- fix EDID includes
- documentation fixes

panel:
- ltk050h3146w: error-handling fixes
- panel-edp: support delay between power-on and enable; use put_sync in
unprepare; support Mediatek MT8173 Chromebooks, BOE NV116WHM-N49 V8.0,
BOE NV122WUM-N41, CSO MNC207QS1-1 plus DT bindings
- panel-lvds: support EDT ETML0700Z9NDHA plus DT bindings
- panel-novatek: support FRIDA FRD400B25025-A-CTK plus DT bindings

qaic:
- fixes to BO handling
- make use of DRM managed release
- fix order of remove operations

rockchip:
- analogix_dp: get encoder port from DT
- inno_hdmi: support HDMI for RK3128
- lvds: error-handling fixes

simplefb:
- fix logging

ssd130x:
- support SSD133x plus DT bindings

tegra:
- fix error handling

tilcdc:
- make use of DRM managed release

v3d:
- show memory stats in debugfs

vc4:
- fix error handling in plane prepare_fb
- fix framebuffer test in plane helpers

vesafb:
- use copy of global screen_info state

virtio:
- cleanups

vkms:
- fix OOB access when programming the LUT
- Kconfig improvements

vmwgfx:
- unmap surface before changing plane state
- fix memory leak in error handling
- documentation fixes

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20240111154902.GA8448@linux-uq9g

+3411 -1111
+3 -1
Documentation/devicetree/bindings/display/panel/novatek,nt35510.yaml
@@ -15,7 +15,9 @@
 properties:
   compatible:
     items:
-      - const: hydis,hva40wv1
+      - enum:
+          - frida,frd400b25025
+          - hydis,hva40wv1
       - const: novatek,nt35510
     description: This indicates the panel manufacturer of the panel
       that is in turn using the NT35510 panel driver. The compatible
+2
Documentation/devicetree/bindings/display/panel/panel-lvds.yaml
@@ -42,6 +42,8 @@
           - auo,b101ew05
           # Chunghwa Picture Tubes Ltd. 7" WXGA (800x1280) TFT LCD LVDS panel
           - chunghwa,claa070wp03xg
+          # EDT ETML0700Z9NDHA 7.0" WSVGA (1024x600) color TFT LCD LVDS panel
+          - edt,etml0700z9ndha
           # HannStar Display Corp. HSD101PWW2 10.1" WXGA (1280x800) LVDS panel
           - hannstar,hsd101pww2
           # Hydis Technologies 7" WXGA (800x1280) TFT LCD LVDS panel
+10 -10
Documentation/devicetree/bindings/display/solomon,ssd1307fb.yaml
@@ -131,9 +131,9 @@
         const: sinowealth,sh1106
     then:
       properties:
-        width:
+        solomon,width:
           default: 132
-        height:
+        solomon,height:
           default: 64
         solomon,dclk-div:
           default: 1
@@ -149,9 +149,9 @@
           - solomon,ssd1305
     then:
       properties:
-        width:
+        solomon,width:
           default: 132
-        height:
+        solomon,height:
           default: 64
         solomon,dclk-div:
           default: 1
@@ -167,9 +167,9 @@
           - solomon,ssd1306
     then:
      properties:
-        width:
+        solomon,width:
           default: 128
-        height:
+        solomon,height:
           default: 64
         solomon,dclk-div:
           default: 1
@@ -185,9 +185,9 @@
           - solomon,ssd1307
     then:
       properties:
-        width:
+        solomon,width:
           default: 128
-        height:
+        solomon,height:
           default: 39
         solomon,dclk-div:
           default: 2
@@ -205,9 +205,9 @@
           - solomon,ssd1309
     then:
       properties:
-        width:
+        solomon,width:
           default: 128
-        height:
+        solomon,height:
           default: 64
         solomon,dclk-div:
           default: 1
+6 -6
Documentation/devicetree/bindings/display/solomon,ssd132x.yaml
@@ -30,9 +30,9 @@
         const: solomon,ssd1322
     then:
       properties:
-        width:
+        solomon,width:
           default: 480
-        height:
+        solomon,height:
           default: 128
 
   - if:
@@ -42,9 +42,9 @@
         const: solomon,ssd1325
     then:
       properties:
-        width:
+        solomon,width:
           default: 128
-        height:
+        solomon,height:
           default: 80
 
   - if:
@@ -54,9 +54,9 @@
         const: solomon,ssd1327
     then:
       properties:
-        width:
+        solomon,width:
           default: 128
-        height:
+        solomon,height:
           default: 128
 
 unevaluatedProperties: false
+45
Documentation/devicetree/bindings/display/solomon,ssd133x.yaml
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/solomon,ssd133x.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Solomon SSD133x OLED Display Controllers
+
+maintainers:
+  - Javier Martinez Canillas <javierm@redhat.com>
+
+allOf:
+  - $ref: solomon,ssd-common.yaml#
+
+properties:
+  compatible:
+    enum:
+      - solomon,ssd1331
+
+  solomon,width:
+    default: 96
+
+  solomon,height:
+    default: 64
+
+required:
+  - compatible
+  - reg
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    spi {
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      oled@0 {
+        compatible = "solomon,ssd1331";
+        reg = <0x0>;
+        reset-gpios = <&gpio2 7>;
+        dc-gpios = <&gpio2 8>;
+        spi-max-frequency = <10000000>;
+      };
+    };
-234
Documentation/gpu/rfc/xe.rst
@@ -1,234 +0,0 @@
-==========================
-Xe – Merge Acceptance Plan
-==========================
-Xe is a new driver for Intel GPUs that supports both integrated and
-discrete platforms starting with Tiger Lake (first Intel Xe Architecture).
-
-This document aims to establish a merge plan for the Xe, by writing down clear
-pre-merge goals, in order to avoid unnecessary delays.
-
-Xe – Overview
-=============
-The main motivation of Xe is to have a fresh base to work from that is
-unencumbered by older platforms, whilst also taking the opportunity to
-rearchitect our driver to increase sharing across the drm subsystem, both
-leveraging and allowing us to contribute more towards other shared components
-like TTM and drm/scheduler.
-
-This is also an opportunity to start from the beginning with a clean uAPI that is
-extensible by design and already aligned with the modern userspace needs. For
-this reason, the memory model is solely based on GPU Virtual Address space
-bind/unbind (‘VM_BIND’) of GEM buffer objects (BOs) and execution only supporting
-explicit synchronization. With persistent mapping across the execution, the
-userspace does not need to provide a list of all required mappings during each
-submission.
-
-The new driver leverages a lot from i915. As for display, the intent is to share
-the display code with the i915 driver so that there is maximum reuse there.
-
-As for the power management area, the goal is to have a much-simplified support
-for the system suspend states (S-states), PCI device suspend states (D-states),
-GPU/Render suspend states (R-states) and frequency management. It should leverage
-as much as possible all the existent PCI-subsystem infrastructure (pm and
-runtime_pm) and underlying firmware components such PCODE and GuC for the power
-states and frequency decisions.
-
-Repository:
-
-https://gitlab.freedesktop.org/drm/xe/kernel (branch drm-xe-next)
-
-Xe – Platforms
-==============
-Currently, Xe is already functional and has experimental support for multiple
-platforms starting from Tiger Lake, with initial support in userspace implemented
-in Mesa (for Iris and Anv, our OpenGL and Vulkan drivers), as well as in NEO
-(for OpenCL and Level0).
-
-During a transition period, platforms will be supported by both Xe and i915.
-However, the force_probe mechanism existent in both drivers will allow only one
-official and by-default probe at a given time.
-
-For instance, in order to probe a DG2 which PCI ID is 0x5690 by Xe instead of
-i915, the following set of parameters need to be used:
-
-```
-i915.force_probe=!5690 xe.force_probe=5690
-```
-
-In both drivers, the ‘.require_force_probe’ protection forces the user to use the
-force_probe parameter while the driver is under development. This protection is
-only removed when the support for the platform and the uAPI are stable. Stability
-which needs to be demonstrated by CI results.
-
-In order to avoid user space regressions, i915 will continue to support all the
-current platforms that are already out of this protection. Xe support will be
-forever experimental and dependent on the usage of force_probe for these
-platforms.
-
-When the time comes for Xe, the protection will be lifted on Xe and kept in i915.
-
-Xe – Pre-Merge Goals - Work-in-Progress
-=======================================
-
-Display integration with i915
------------------------------
-In order to share the display code with the i915 driver so that there is maximum
-reuse, the i915/display/ code is built twice, once for i915.ko and then for
-xe.ko. Currently, the i915/display code in Xe tree is polluted with many 'ifdefs'
-depending on the build target. The goal is to refactor both Xe and i915/display
-code simultaneously in order to get a clean result before they land upstream, so
-that display can already be part of the initial pull request towards drm-next.
-
-However, display code should not gate the acceptance of Xe in upstream. Xe
-patches will be refactored in a way that display code can be removed, if needed,
-from the first pull request of Xe towards drm-next. The expectation is that when
-both drivers are part of the drm-tip, the introduction of cleaner patches will be
-easier and speed up.
-
-Xe – uAPI high level overview
-=============================
-
-...Warning: To be done in follow up patches after/when/where the main consensus in various items are individually reached.
-
-Xe – Pre-Merge Goals - Completed
-================================
-
-Drm_exec
---------
-Helper to make dma_resv locking for a big number of buffers is getting removed in
-the drm_exec series proposed in https://patchwork.freedesktop.org/patch/524376/
-If that happens, Xe needs to change and incorporate the changes in the driver.
-The goal is to engage with the Community to understand if the best approach is to
-move that to the drivers that are using it or if we should keep the helpers in
-place waiting for Xe to get merged.
-
-This item ties into the GPUVA, VM_BIND, and even long-running compute support.
-
-As a key measurable result, we need to have a community consensus documented in
-this document and the Xe driver prepared for the changes, if necessary.
-
-Userptr integration and vm_bind
--------------------------------
-Different drivers implement different ways of dealing with execution of userptr.
-With multiple drivers currently introducing support to VM_BIND, the goal is to
-aim for a DRM consensus on what’s the best way to have that support. To some
-extent this is already getting addressed itself with the GPUVA where likely the
-userptr will be a GPUVA with a NULL GEM call VM bind directly on the userptr.
-However, there are more aspects around the rules for that and the usage of
-mmu_notifiers, locking and other aspects.
-
-This task here has the goal of introducing a documentation of the basic rules.
-
-The documentation *needs* to first live in this document (API session below) and
-then moved to another more specific document or at Xe level or at DRM level.
-
-Documentation should include:
-
-* The userptr part of the VM_BIND api.
-
-* Locking, including the page-faulting case.
-
-* O(1) complexity under VM_BIND.
-
-The document is now included in the drm documentation :doc:`here </gpu/drm-vm-bind-async>`.
-
-Some parts of userptr like mmu_notifiers should become GPUVA or DRM helpers when
-the second driver supporting VM_BIND+userptr appears. Details to be defined when
-the time comes.
-
-The DRM GPUVM helpers do not yet include the userptr parts, but discussions
-about implementing them are ongoing.
-
-ASYNC VM_BIND
--------------
-Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
-Xe merged, it is mandatory to have a consensus with other drivers and Mesa.
-It needs to be clear how to handle async VM_BIND and interactions with userspace
-memory fences. Ideally with helper support so people don't get it wrong in all
-possible ways.
-
-As a key measurable result, the benefits of ASYNC VM_BIND and a discussion of
-various flavors, error handling and sample API suggestions are documented in
-:doc:`The ASYNC VM_BIND document </gpu/drm-vm-bind-async>`.
-
-Drm_scheduler
--------------
-Xe primarily uses Firmware based scheduling (GuC FW). However, it will use
-drm_scheduler as the scheduler ‘frontend’ for userspace submission in order to
-resolve syncobj and dma-buf implicit sync dependencies. However, drm_scheduler is
-not yet prepared to handle the 1-to-1 relationship between drm_gpu_scheduler and
-drm_sched_entity.
-
-Deeper changes to drm_scheduler should *not* be required to get Xe accepted, but
-some consensus needs to be reached between Xe and other community drivers that
-could also benefit from this work, for coupling FW based/assisted submission such
-as the ARM’s new Mali GPU driver, and others.
-
-As a key measurable result, the patch series introducing Xe itself shall not
-depend on any other patch touching drm_scheduler itself that was not yet merged
-through drm-misc. This, by itself, already includes the reach of an agreement for
-uniform 1 to 1 relationship implementation / usage across drivers.
-
-Long running compute: minimal data structure/scaffolding
---------------------------------------------------------
-The generic scheduler code needs to include the handling of endless compute
-contexts, with the minimal scaffolding for preempt-ctx fences (probably on the
-drm_sched_entity) and making sure drm_scheduler can cope with the lack of job
-completion fence.
-
-The goal is to achieve a consensus ahead of Xe initial pull-request, ideally with
-this minimal drm/scheduler work, if needed, merged to drm-misc in a way that any
-drm driver, including Xe, could re-use and add their own individual needs on top
-in a next stage. However, this should not block the initial merge.
-
-Dev_coredump
-------------
-
-Xe needs to align with other drivers on the way that the error states are
-dumped, avoiding a Xe only error_state solution. The goal is to use devcoredump
-infrastructure to report error states, since it produces a standardized way
-by exposing a virtual and temporary /sys/class/devcoredump device.
-
-As the key measurable result, Xe driver needs to provide GPU snapshots captured
-at hang time through devcoredump, but without depending on any core modification
-of devcoredump infrastructure itself.
-
-Later, when we are in-tree, the goal is to collaborate with devcoredump
-infrastructure with overall possible improvements, like multiple file support
-for better organization of the dumps, snapshot support, dmesg extra print,
-and whatever may make sense and help the overall infrastructure.
-
-DRM_VM_BIND
------------
-Nouveau, and Xe are all implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
-fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
-development of a common new drm_infrastructure. However, the Xe team needs to
-engage with the community to explore the options of a common API.
-
-As a key measurable result, the DRM_VM_BIND needs to be documented in this file
-below, or this entire block deleted if the consensus is for independent drivers
-vm_bind ioctls.
-
-Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
-Xe merged, it is mandatory to enforce the overall locking scheme for all major
-structs and list (so vm and vma). So, a consensus is needed, and possibly some
-common helpers. If helpers are needed, they should be also documented in this
-document.
-
-GPU VA
-------
-Two main goals of Xe are meeting together here:
-
-1) Have an uAPI that aligns with modern UMD needs.
-
-2) Early upstream engagement.
-
-RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
-track of GPU virtual address mappings. This is still not merged upstream, but
-this aligns very well with our goals and with our VM_BIND. The engagement with
-upstream and the port of Xe towards GPUVA is already ongoing.
-
-As a key measurable result, Xe needs to be aligned with the GPU VA and working in
-our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
-related patch should be independent and present on dri-devel or acked by
-maintainers to go along with the first Xe pull request towards drm-next.
+23
Documentation/gpu/todo.rst
@@ -120,6 +120,29 @@
 
 Level: Advanced
 
+Rename drm_atomic_state
+-----------------------
+
+The KMS framework uses two slightly different definitions for the ``state``
+concept. For a given object (plane, CRTC, encoder, etc., so
+``drm_$OBJECT_state``), the state is the entire state of that object. However,
+at the device level, ``drm_atomic_state`` refers to a state update for a
+limited number of objects.
+
+The state isn't the entire device state, but only the full state of some
+objects in that device. This is confusing to newcomers, and
+``drm_atomic_state`` should be renamed to something clearer like
+``drm_atomic_commit``.
+
+In addition to renaming the structure itself, it would also imply renaming some
+related functions (``drm_atomic_state_alloc``, ``drm_atomic_state_get``,
+``drm_atomic_state_put``, ``drm_atomic_state_init``,
+``__drm_atomic_state_free``, etc.).
+
+Contact: Maxime Ripard <mripard@kernel.org>
+
+Level: Advanced
+
 Fallout from atomic KMS
 -----------------------
 
-1
MAINTAINERS
@@ -10467,7 +10467,6 @@
 
 IMGTEC POWERVR DRM DRIVER
 M:	Frank Binns <frank.binns@imgtec.com>
-M:	Donald Robson <donald.robson@imgtec.com>
 M:	Matt Coster <matt.coster@imgtec.com>
 S:	Supported
 T:	git git://anongit.freedesktop.org/drm/drm-misc
+2 -2
drivers/accel/qaic/mhi_controller.c
@@ -358,8 +358,8 @@
 		.wake_capable = false,
 	},
 	{
-		.num = 21,
 		.name = "QAIC_TIMESYNC",
+		.num = 21,
 		.num_elements = 32,
 		.local_elements = 0,
 		.event_ring = 0,
@@ -390,8 +390,8 @@
 		.wake_capable = false,
 	},
 	{
-		.num = 23,
 		.name = "QAIC_TIMESYNC_PERIODIC",
+		.num = 23,
 		.num_elements = 32,
 		.local_elements = 0,
 		.event_ring = 0,
+1 -2
drivers/accel/qaic/qaic.h
@@ -30,6 +30,7 @@
 #define to_qaic_drm_device(dev) container_of(dev, struct qaic_drm_device, drm)
 #define to_drm(qddev) (&(qddev)->drm)
 #define to_accel_kdev(qddev) (to_drm(qddev)->accel->kdev) /* Return Linux device of accel node */
+#define to_qaic_device(dev) (to_qaic_drm_device((dev))->qdev)
 
 enum __packed dev_states {
 	/* Device is offline or will be very soon */
@@ -192,7 +191,5 @@
 	u32 nr_slice;
 	/* Number of slice that have been transferred by DMA engine */
 	u32 nr_slice_xfer_done;
-	/* true = BO is queued for execution, true = BO is not queued */
-	bool queued;
 	/*
 	 * If true then user has attached slicing information to this BO by
 	 * calling DRM_IOCTL_QAIC_ATTACH_SLICE_BO ioctl.
+26 -33
drivers/accel/qaic/qaic_data.c
@@ -141,6 +141,11 @@
 	__le16 status;
 } __packed;
 
+static inline bool bo_queued(struct qaic_bo *bo)
+{
+	return !list_empty(&bo->xfer_list);
+}
+
 inline int get_dbc_req_elem_size(void)
 {
 	return sizeof(struct dbc_req);
@@ -574,6 +569,9 @@
 {
 	struct scatterlist *sg;
 
+	if (!sgt)
+		return;
+
 	for (sg = sgt->sgl; sg; sg = sg_next(sg))
 		if (sg_page(sg))
 			__free_pages(sg_page(sg), get_order(sg->length));
@@ -656,6 +648,7 @@
 	}
 	complete_all(&bo->xfer_done);
 	INIT_LIST_HEAD(&bo->slices);
+	INIT_LIST_HEAD(&bo->xfer_list);
 }
 
 static struct qaic_bo *qaic_alloc_init_bo(void)
@@ -718,9 +709,13 @@
 	if (ret)
 		goto free_bo;
 
+	ret = drm_gem_create_mmap_offset(obj);
+	if (ret)
+		goto free_bo;
+
 	ret = drm_gem_handle_create(file_priv, obj, &args->handle);
 	if (ret)
-		goto free_sgt;
+		goto free_bo;
 
 	bo->handle = args->handle;
 	drm_gem_object_put(obj);
@@ -733,10 +728,8 @@
 
 	return 0;
 
-free_sgt:
-	qaic_free_sgt(bo->sgt);
 free_bo:
-	kfree(bo);
+	drm_gem_object_put(obj);
 unlock_dev_srcu:
 	srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
 unlock_usr_srcu:
@@ -749,7 +738,7 @@
 	struct drm_gem_object *obj;
 	struct qaic_device *qdev;
 	struct qaic_user *usr;
-	int ret;
+	int ret = 0;
 
 	usr = file_priv->driver_priv;
 	usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
@@ -771,9 +760,7 @@
 		goto unlock_dev_srcu;
 	}
 
-	ret = drm_gem_create_mmap_offset(obj);
-	if (ret == 0)
-		args->offset = drm_vma_node_offset_addr(&obj->vma_node);
+	args->offset = drm_vma_node_offset_addr(&obj->vma_node);
 
 	drm_gem_object_put(obj);
 
@@ -837,9 +828,6 @@
 	struct sg_table *sgt;
 	int ret;
 
-	if (obj->import_attach->dmabuf->size < hdr->size)
-		return -EINVAL;
-
 	sgt = dma_buf_map_attachment(obj->import_attach, hdr->dir);
 	if (IS_ERR(sgt)) {
 		ret = PTR_ERR(sgt);
@@ -852,9 +846,6 @@
 			     struct qaic_attach_slice_hdr *hdr)
 {
 	int ret;
-
-	if (bo->base.size < hdr->size)
-		return -EINVAL;
 
 	ret = dma_map_sgtable(&qdev->pdev->dev, bo->sgt, hdr->dir, 0);
 	if (ret)
@@ -953,9 +950,6 @@
 	if (arg_size / args->hdr.count != sizeof(*slice_ent))
 		return -EINVAL;
 
-	if (args->hdr.size == 0)
-		return -EINVAL;
-
 	if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE))
 		return -EINVAL;
 
@@ -992,14 +992,14 @@
 		goto free_slice_ent;
 	}
 
-	ret = qaic_validate_req(qdev, slice_ent, args->hdr.count, args->hdr.size);
-	if (ret)
-		goto free_slice_ent;
-
 	obj = drm_gem_object_lookup(file_priv, args->hdr.handle);
 	if (!obj) {
 		ret = -ENOENT;
 		goto free_slice_ent;
 	}
+
+	ret = qaic_validate_req(qdev, slice_ent, args->hdr.count, obj->size);
+	if (ret)
+		goto put_bo;
 
 	bo = to_qaic_bo(obj);
 	ret = mutex_lock_interruptible(&bo->lock);
@@ -1173,7 +1173,6 @@
 	struct bo_slice *slice;
 	unsigned long flags;
 	struct qaic_bo *bo;
-	bool queued;
 	int i, j;
 	int ret;
@@ -1204,9 +1205,7 @@
 	}
 
 	spin_lock_irqsave(&dbc->xfer_lock, flags);
-	queued = bo->queued;
-	bo->queued = true;
-	if (queued) {
+	if (bo_queued(bo)) {
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		ret = -EINVAL;
 		goto unlock_bo;
@@ -1227,7 +1230,6 @@
 	else
 		ret = copy_exec_reqs(qdev, slice, dbc->id, head, tail);
 	if (ret) {
-		bo->queued = false;
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		goto unlock_bo;
 	}
@@ -1249,8 +1253,7 @@
 		spin_lock_irqsave(&dbc->xfer_lock, flags);
 		bo = list_last_entry(&dbc->xfer_list, struct qaic_bo, xfer_list);
 		obj = &bo->base;
-		bo->queued = false;
-		list_del(&bo->xfer_list);
+		list_del_init(&bo->xfer_list);
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir);
 		drm_gem_object_put(obj);
@@ -1610,8 +1615,7 @@
 		 */
 		dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir);
 		bo->nr_slice_xfer_done = 0;
-		bo->queued = false;
-		list_del(&bo->xfer_list);
+		list_del_init(&bo->xfer_list);
 		bo->perf_stats.req_processed_ts = ktime_get_ns();
 		complete_all(&bo->xfer_done);
 		drm_gem_object_put(&bo->base);
@@ -1869,7 +1875,7 @@
 
 	/* Check if BO is committed to H/W for DMA */
 	spin_lock_irqsave(&dbc->xfer_lock, flags);
-	if (bo->queued) {
+	if (bo_queued(bo)) {
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		ret = -EBUSY;
 		goto unlock_ch_srcu;
@@ -1899,8 +1905,7 @@
 	spin_lock_irqsave(&dbc->xfer_lock, flags);
 	while (!list_empty(&dbc->xfer_list)) {
 		bo = list_first_entry(&dbc->xfer_list, typeof(*bo), xfer_list);
-		bo->queued = false;
-		list_del(&bo->xfer_list);
+		list_del_init(&bo->xfer_list);
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		bo->nr_slice_xfer_done = 0;
 		bo->req_id = 0;
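
The qaic change above replaces a separately maintained `bool queued` with list membership: `list_del_init()` leaves a node self-linked, so `list_empty()` on the node itself answers "is this BO queued?", and the flag can never drift out of sync with the list. A minimal userspace sketch of that pattern, with hand-rolled helpers standing in for the kernel's `<linux/list.h>`:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for the kernel's intrusive list helpers. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static bool list_empty(const struct list_head *h)
{
	return h->next == h;
}

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* list_del_init() unlinks the node and re-points it at itself, so a later
 * list_empty() on the node reports "not on any list". */
static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}

/* Illustrative stand-in for struct qaic_bo; only the list node matters. */
struct qaic_bo_sketch { struct list_head xfer_list; };

/* Mirrors the new bo_queued() helper: membership in xfer_list replaces
 * the old, separately tracked bool. */
static bool bo_queued(const struct qaic_bo_sketch *bo)
{
	return !list_empty(&bo->xfer_list);
}
```

The design point is that there is now a single source of truth: the BO is queued exactly when it sits on the DBC's xfer_list.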
+91 -53
drivers/accel/qaic/qaic_drv.c
@@ -44,6 +44,53 @@
 static bool link_up;
 static DEFINE_IDA(qaic_usrs);
 
+static void qaicm_wq_release(struct drm_device *dev, void *res)
+{
+	struct workqueue_struct *wq = res;
+
+	destroy_workqueue(wq);
+}
+
+static struct workqueue_struct *qaicm_wq_init(struct drm_device *dev, const char *fmt)
+{
+	struct workqueue_struct *wq;
+	int ret;
+
+	wq = alloc_workqueue(fmt, WQ_UNBOUND, 0);
+	if (!wq)
+		return ERR_PTR(-ENOMEM);
+	ret = drmm_add_action_or_reset(dev, qaicm_wq_release, wq);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return wq;
+}
+
+static void qaicm_srcu_release(struct drm_device *dev, void *res)
+{
+	struct srcu_struct *lock = res;
+
+	cleanup_srcu_struct(lock);
+}
+
+static int qaicm_srcu_init(struct drm_device *dev, struct srcu_struct *lock)
+{
+	int ret;
+
+	ret = init_srcu_struct(lock);
+	if (ret)
+		return ret;
+
+	return drmm_add_action_or_reset(dev, qaicm_srcu_release, lock);
+}
+
+static void qaicm_pci_release(struct drm_device *dev, void *res)
+{
+	struct qaic_device *qdev = to_qaic_device(dev);
+
+	pci_set_drvdata(qdev->pdev, NULL);
+}
+
 static void free_usr(struct kref *kref)
 {
 	struct qaic_user *usr = container_of(kref, struct qaic_user, ref_count);
@@ -346,73 +299,72 @@
 		release_dbc(qdev, i);
 }
 
-static void cleanup_qdev(struct qaic_device *qdev)
-{
-	int i;
-
-	for (i = 0; i < qdev->num_dbc; ++i)
-		cleanup_srcu_struct(&qdev->dbc[i].ch_lock);
-	cleanup_srcu_struct(&qdev->dev_lock);
-	pci_set_drvdata(qdev->pdev, NULL);
-	destroy_workqueue(qdev->cntl_wq);
-	destroy_workqueue(qdev->qts_wq);
-}
-
 static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_device_id *id)
 {
+	struct device *dev = &pdev->dev;
 	struct qaic_drm_device *qddev;
 	struct qaic_device *qdev;
-	int i;
+	struct drm_device *drm;
+	int i, ret;
 
-	qdev = devm_kzalloc(&pdev->dev, sizeof(*qdev), GFP_KERNEL);
+	qdev = devm_kzalloc(dev, sizeof(*qdev), GFP_KERNEL);
 	if (!qdev)
 		return NULL;
 
 	qdev->dev_state = QAIC_OFFLINE;
 	if (id->device == PCI_DEV_AIC100) {
 		qdev->num_dbc = 16;
-		qdev->dbc = devm_kcalloc(&pdev->dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
+		qdev->dbc = devm_kcalloc(dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
 		if (!qdev->dbc)
 			return NULL;
 	}
 
-	qdev->cntl_wq = alloc_workqueue("qaic_cntl", WQ_UNBOUND, 0);
-	if (!qdev->cntl_wq)
+	qddev = devm_drm_dev_alloc(&pdev->dev, &qaic_accel_driver, struct qaic_drm_device, drm);
+	if (IS_ERR(qddev))
 		return NULL;
 
-	qdev->qts_wq = alloc_workqueue("qaic_ts", WQ_UNBOUND, 0);
-	if (!qdev->qts_wq) {
-		destroy_workqueue(qdev->cntl_wq);
-		return NULL;
-	}
-
+	drm = to_drm(qddev);
 	pci_set_drvdata(pdev, qdev);
-	qdev->pdev = pdev;
 
-	mutex_init(&qdev->cntl_mutex);
+	ret = drmm_mutex_init(drm, &qddev->users_mutex);
+	if (ret)
+		return NULL;
+	ret = drmm_add_action_or_reset(drm, qaicm_pci_release, NULL);
+	if (ret)
+		return NULL;
+	ret = drmm_mutex_init(drm, &qdev->cntl_mutex);
+	if (ret)
+		return NULL;
+
+	qdev->cntl_wq = qaicm_wq_init(drm, "qaic_cntl");
+	if (IS_ERR(qdev->cntl_wq))
+		return NULL;
+	qdev->qts_wq = qaicm_wq_init(drm, "qaic_ts");
+	if (IS_ERR(qdev->qts_wq))
+		return NULL;
+
+	ret = qaicm_srcu_init(drm, &qdev->dev_lock);
+	if (ret)
+		return NULL;
+
+	qdev->qddev = qddev;
+	qdev->pdev = pdev;
+	qddev->qdev = qdev;
+
 	INIT_LIST_HEAD(&qdev->cntl_xfer_list);
-	init_srcu_struct(&qdev->dev_lock);
+	INIT_LIST_HEAD(&qddev->users);
 
 	for (i = 0; i < qdev->num_dbc; ++i) {
 		spin_lock_init(&qdev->dbc[i].xfer_lock);
 		qdev->dbc[i].qdev = qdev;
 		qdev->dbc[i].id = i;
 		INIT_LIST_HEAD(&qdev->dbc[i].xfer_list);
-		init_srcu_struct(&qdev->dbc[i].ch_lock);
+		ret = qaicm_srcu_init(drm, &qdev->dbc[i].ch_lock);
+		if (ret)
+			return NULL;
 		init_waitqueue_head(&qdev->dbc[i].dbc_release);
 		INIT_LIST_HEAD(&qdev->dbc[i].bo_lists);
 	}
-
-	qddev = devm_drm_dev_alloc(&pdev->dev, &qaic_accel_driver, struct qaic_drm_device, drm);
-	if (IS_ERR(qddev)) {
-		cleanup_qdev(qdev);
-		return NULL;
-	}
-
-	drmm_mutex_init(to_drm(qddev), &qddev->users_mutex);
-	INIT_LIST_HEAD(&qddev->users);
-	qddev->qdev = qdev;
-	qdev->qddev = qddev;
 
 	return qdev;
 }
@@ -518,27 +472,19 @@
 
 	ret = init_pci(qdev, pdev);
 	if (ret)
-		goto cleanup_qdev;
+		return ret;
 
 	for (i = 0; i < qdev->num_dbc; ++i)
 		qdev->dbc[i].dbc_base = qdev->bar_2 + QAIC_DBC_OFF(i);
 
 	mhi_irq = init_msi(qdev, pdev);
-	if (mhi_irq < 0) {
-		ret = mhi_irq;
-		goto cleanup_qdev;
-	}
+	if (mhi_irq < 0)
+		return mhi_irq;
 
 	ret = qaic_create_drm_device(qdev, QAIC_NO_PARTITION);
 	if (ret)
-		goto cleanup_qdev;
+		return ret;
 
 	qdev->mhi_cntrl = qaic_mhi_register_controller(pdev, qdev->bar_0, mhi_irq,
 						       qdev->single_msi);
 	if (IS_ERR(qdev->mhi_cntrl)) {
 		ret = PTR_ERR(qdev->mhi_cntrl);
-		goto cleanup_drm_dev;
+		qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
+		return ret;
 	}
 
 	return 0;
-
-cleanup_drm_dev:
-	qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
-cleanup_qdev:
-	cleanup_qdev(qdev);
-	return ret;
 }
@@ -550,9 +511,8 @@
 		return;
 
 	qaic_dev_reset_clean_local_state(qdev);
-	qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
 	qaic_mhi_free_controller(qdev->mhi_cntrl, link_up);
-	cleanup_qdev(qdev);
+	qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
 }
 
 static void qaic_pci_shutdown(struct pci_dev *pdev)
+2 -14
drivers/gpu/drm/Kconfig
···
 config DRM_DEBUG_MM
 	bool "Insert extra checks and debug info into the DRM range managers"
 	default n
-	depends on DRM=y
+	depends on DRM
 	depends on STACKTRACE_SUPPORT
 	select STACKDEPOT
 	help
···
 	  as used by Mesa's software renderer for enhanced performance.
 	  If M is selected the module will be called vgem.

-config DRM_VKMS
-	tristate "Virtual KMS (EXPERIMENTAL)"
-	depends on DRM && MMU
-	select DRM_KMS_HELPER
-	select DRM_GEM_SHMEM_HELPER
-	select CRC32
-	default n
-	help
-	  Virtual Kernel Mode-Setting (VKMS) is used for testing or for
-	  running GPU in a headless machines. Choose this option to get
-	  a VKMS.
-
-	  If M is selected the module will be called vkms.
+source "drivers/gpu/drm/vkms/Kconfig"

 source "drivers/gpu/drm/exynos/Kconfig"
+3 -1
drivers/gpu/drm/bridge/ite-it6505.c
···
 	ret = it6505_link_start_auto_train(it6505);
 	DRM_DEV_DEBUG_DRIVER(dev, "auto train %s, auto_train_retry: %d",
 			     ret ? "pass" : "failed", it6505->auto_train_retry);
-	it6505->auto_train_retry--;

 	if (ret) {
+		it6505->auto_train_retry = AUTO_TRAIN_RETRY;
 		it6505_link_train_ok(it6505);
 		return;
+	} else {
+		it6505->auto_train_retry--;
 	}

 	it6505_dump(it6505);
+4 -4
drivers/gpu/drm/bridge/samsung-dsim.c
···
 	else
 		dsi->bridge.timings = &samsung_dsim_bridge_timings_de_high;

-	if (dsi->plat_data->host_ops && dsi->plat_data->host_ops->register_host)
+	if (dsi->plat_data->host_ops && dsi->plat_data->host_ops->register_host) {
 		ret = dsi->plat_data->host_ops->register_host(dsi);
-
-	if (ret)
-		goto err_disable_runtime;
+		if (ret)
+			goto err_disable_runtime;
+	}

 	return 0;
+132 -37
drivers/gpu/drm/bridge/tc358767.c
···

 /* Registers */

+/* DSI D-PHY Layer registers */
+#define D0W_DPHYCONTTX		0x0004
+#define CLW_DPHYCONTTX		0x0020
+#define D0W_DPHYCONTRX		0x0024
+#define D1W_DPHYCONTRX		0x0028
+#define D2W_DPHYCONTRX		0x002c
+#define D3W_DPHYCONTRX		0x0030
+#define COM_DPHYCONTRX		0x0038
+#define CLW_CNTRL		0x0040
+#define D0W_CNTRL		0x0044
+#define D1W_CNTRL		0x0048
+#define D2W_CNTRL		0x004c
+#define D3W_CNTRL		0x0050
+#define TESTMODE_CNTRL		0x0054
+
 /* PPI layer registers */
 #define PPI_STARTPPI		0x0104 /* START control bit */
+#define PPI_BUSYPPI		0x0108 /* PPI busy status */
 #define PPI_LPTXTIMECNT		0x0114 /* LPTX timing signal */
 #define LPX_PERIOD		3
 #define PPI_LANEENABLE		0x0134
···

 /* DSI layer registers */
 #define DSI_STARTDSI		0x0204 /* START control bit of DSI-TX */
+#define DSI_BUSYDSI		0x0208 /* DSI busy status */
 #define DSI_LANEENABLE		0x0210 /* Enables each lane */
 #define DSI_RX_START		BIT(0)
···
 #define LANEENABLE_L1EN		BIT(2)
 #define LANEENABLE_L2EN		BIT(1)
 #define LANEENABLE_L3EN		BIT(2)
+
+#define DSI_LANESTATUS0		0x0214 /* DSI lane status 0 */
+#define DSI_LANESTATUS1		0x0218 /* DSI lane status 1 */
+#define DSI_INTSTATUS		0x0220 /* Interrupt Status */
+#define DSI_INTMASK		0x0224 /* Interrupt Mask */
+#define DSI_INTCLR		0x0228 /* Interrupt Clear */
+#define DSI_LPTXTO		0x0230 /* LPTX Time Out Counter */
+
+/* DSI General Registers */
+#define DSIERRCNT		0x0300 /* DSI Error Count Register */
+
+/* DSI Application Layer Registers */
+#define APLCTRL			0x0400 /* Application layer Control Register */
+#define RDPKTLN			0x0404 /* DSI Read packet Length Register */

 /* Display Parallel Input Interface */
 #define DPIPXLFMT		0x0440
···
 #define VFUEN			BIT(0)   /* Video Frame Timing Upload */

 /* System */
-#define TC_IDREG		0x0500
-#define SYSSTAT			0x0508
-#define SYSCTRL			0x0510
-#define DP0_AUDSRC_NO_INPUT	(0 << 3)
-#define DP0_AUDSRC_I2S_RX	(1 << 3)
-#define DP0_VIDSRC_NO_INPUT	(0 << 0)
-#define DP0_VIDSRC_DSI_RX	(1 << 0)
-#define DP0_VIDSRC_DPI_RX	(2 << 0)
-#define DP0_VIDSRC_COLOR_BAR	(3 << 0)
-#define SYSRSTENB		0x050c
+#define TC_IDREG		0x0500 /* Chip ID and Revision ID */
+#define SYSBOOT			0x0504 /* System BootStrap Status Register */
+#define SYSSTAT			0x0508 /* System Status Register */
+#define SYSRSTENB		0x050c /* System Reset/Enable Register */
 #define ENBI2C			(1 << 0)
 #define ENBLCD0			(1 << 2)
 #define ENBBM			(1 << 3)
 #define ENBDSIRX		(1 << 4)
 #define ENBREG			(1 << 5)
 #define ENBHDCP			(1 << 8)
-#define GPIOM			0x0540
-#define GPIOC			0x0544
-#define GPIOO			0x0548
-#define GPIOI			0x054c
-#define INTCTL_G		0x0560
-#define INTSTS_G		0x0564
+#define SYSCTRL			0x0510 /* System Control Register */
+#define DP0_AUDSRC_NO_INPUT	(0 << 3)
+#define DP0_AUDSRC_I2S_RX	(1 << 3)
+#define DP0_VIDSRC_NO_INPUT	(0 << 0)
+#define DP0_VIDSRC_DSI_RX	(1 << 0)
+#define DP0_VIDSRC_DPI_RX	(2 << 0)
+#define DP0_VIDSRC_COLOR_BAR	(3 << 0)
+#define GPIOM			0x0540 /* GPIO Mode Control Register */
+#define GPIOC			0x0544 /* GPIO Direction Control Register */
+#define GPIOO			0x0548 /* GPIO Output Register */
+#define GPIOI			0x054c /* GPIO Input Register */
+#define INTCTL_G		0x0560 /* General Interrupts Control Register */
+#define INTSTS_G		0x0564 /* General Interrupts Status Register */

 #define INT_SYSERR		BIT(16)
 #define INT_GPIO_H(x)		(1 << (x == 0 ? 2 : 10))
 #define INT_GPIO_LC(x)		(1 << (x == 0 ? 3 : 11))

-#define INT_GP0_LCNT		0x0584
-#define INT_GP1_LCNT		0x0588
+#define TEST_INT_C		0x0570 /* Test Interrupts Control Register */
+#define TEST_INT_S		0x0574 /* Test Interrupts Status Register */
+
+#define INT_GP0_LCNT		0x0584 /* Interrupt GPIO0 Low Count Value Register */
+#define INT_GP1_LCNT		0x0588 /* Interrupt GPIO1 Low Count Value Register */

 /* Control */
 #define DP0CTL			0x0600
···
 #define DP_EN			BIT(0)   /* Enable DPTX function */

 /* Clocks */
-#define DP0_VIDMNGEN0		0x0610
-#define DP0_VIDMNGEN1		0x0614
-#define DP0_VMNGENSTATUS	0x0618
+#define DP0_VIDMNGEN0		0x0610 /* DP0 Video Force M Value Register */
+#define DP0_VIDMNGEN1		0x0614 /* DP0 Video Force N Value Register */
+#define DP0_VMNGENSTATUS	0x0618 /* DP0 Video Current M Value Register */
+#define DP0_AUDMNGEN0		0x0628 /* DP0 Audio Force M Value Register */
+#define DP0_AUDMNGEN1		0x062c /* DP0 Audio Force N Value Register */
+#define DP0_AMNGENSTATUS	0x0630 /* DP0 Audio Current M Value Register */

 /* Main Channel */
 #define DP0_SECSAMPLE		0x0640
···
 #define DP0_SNKLTCHGREQ		0x06d4
 #define DP0_LTLOOPCTRL		0x06d8
 #define DP0_SNKLTCTRL		0x06e4
+#define DP0_TPATDAT0		0x06e8 /* DP0 Test Pattern bits 29 to 0 */
+#define DP0_TPATDAT1		0x06ec /* DP0 Test Pattern bits 59 to 30 */
+#define DP0_TPATDAT2		0x06f0 /* DP0 Test Pattern bits 89 to 60 */
+#define DP0_TPATDAT3		0x06f4 /* DP0 Test Pattern bits 119 to 90 */

-#define DP1_SRCCTRL		0x07a0
+#define AUDCFG0			0x0700 /* DP0 Audio Config0 Register */
+#define AUDCFG1			0x0704 /* DP0 Audio Config1 Register */
+#define AUDIFDATA0		0x0708 /* DP0 Audio Info Frame Bytes 3 to 0 */
+#define AUDIFDATA1		0x070c /* DP0 Audio Info Frame Bytes 7 to 4 */
+#define AUDIFDATA2		0x0710 /* DP0 Audio Info Frame Bytes 11 to 8 */
+#define AUDIFDATA3		0x0714 /* DP0 Audio Info Frame Bytes 15 to 12 */
+#define AUDIFDATA4		0x0718 /* DP0 Audio Info Frame Bytes 19 to 16 */
+#define AUDIFDATA5		0x071c /* DP0 Audio Info Frame Bytes 23 to 20 */
+#define AUDIFDATA6		0x0720 /* DP0 Audio Info Frame Bytes 27 to 24 */
+
+#define DP1_SRCCTRL		0x07a0 /* DP1 Control Register */

 /* PHY */
 #define DP_PHY_CTRL		0x0800
···
 #define PHY_2LANE		BIT(2)   /* PHY Enable 2 lanes */
 #define PHY_A0_EN		BIT(1)   /* PHY Aux Channel0 Enable */
 #define PHY_M0_EN		BIT(0)   /* PHY Main Channel0 Enable */
+#define DP_PHY_CFG_WR		0x0810 /* DP PHY Configuration Test Write Register */
+#define DP_PHY_CFG_RD		0x0814 /* DP PHY Configuration Test Read Register */
+#define DP0_AUX_PHY_CTRL	0x0820 /* DP0 AUX PHY Control Register */
+#define DP0_MAIN_PHY_DBG	0x0840 /* DP0 Main PHY Test Debug Register */
+
+/* I2S */
+#define I2SCFG			0x0880 /* I2S Audio Config 0 Register */
+#define I2SCH0STAT0		0x0888 /* I2S Audio Channel 0 Status Bytes 3 to 0 */
+#define I2SCH0STAT1		0x088c /* I2S Audio Channel 0 Status Bytes 7 to 4 */
+#define I2SCH0STAT2		0x0890 /* I2S Audio Channel 0 Status Bytes 11 to 8 */
+#define I2SCH0STAT3		0x0894 /* I2S Audio Channel 0 Status Bytes 15 to 12 */
+#define I2SCH0STAT4		0x0898 /* I2S Audio Channel 0 Status Bytes 19 to 16 */
+#define I2SCH0STAT5		0x089c /* I2S Audio Channel 0 Status Bytes 23 to 20 */
+#define I2SCH1STAT0		0x08a0 /* I2S Audio Channel 1 Status Bytes 3 to 0 */
+#define I2SCH1STAT1		0x08a4 /* I2S Audio Channel 1 Status Bytes 7 to 4 */
+#define I2SCH1STAT2		0x08a8 /* I2S Audio Channel 1 Status Bytes 11 to 8 */
+#define I2SCH1STAT3		0x08ac /* I2S Audio Channel 1 Status Bytes 15 to 12 */
+#define I2SCH1STAT4		0x08b0 /* I2S Audio Channel 1 Status Bytes 19 to 16 */
+#define I2SCH1STAT5		0x08b4 /* I2S Audio Channel 1 Status Bytes 23 to 20 */

 /* PLL */
 #define DP0_PLLCTRL		0x0900
···
 	case 0x1f4:
 	/* DSI Protocol Layer */
 	case DSI_STARTDSI:
-	case 0x208:
+	case DSI_BUSYDSI:
 	case DSI_LANEENABLE:
-	case 0x214:
-	case 0x218:
-	case 0x220:
+	case DSI_LANESTATUS0:
+	case DSI_LANESTATUS1:
+	case DSI_INTSTATUS:
 	case 0x224:
 	case 0x228:
 	case 0x230:
 	/* DSI General */
-	case 0x300:
+	case DSIERRCNT:
 	/* DSI Application Layer */
 	case 0x400:
 	case 0x404:
···
 }

 static const struct regmap_range tc_volatile_ranges[] = {
+	regmap_reg_range(PPI_BUSYPPI, PPI_BUSYPPI),
+	regmap_reg_range(DSI_BUSYDSI, DSI_BUSYDSI),
+	regmap_reg_range(DSI_LANESTATUS0, DSI_INTSTATUS),
+	regmap_reg_range(DSIERRCNT, DSIERRCNT),
+	regmap_reg_range(VFUEN0, VFUEN0),
+	regmap_reg_range(SYSSTAT, SYSSTAT),
+	regmap_reg_range(GPIOI, GPIOI),
+	regmap_reg_range(INTSTS_G, INTSTS_G),
+	regmap_reg_range(DP0_VMNGENSTATUS, DP0_VMNGENSTATUS),
+	regmap_reg_range(DP0_AMNGENSTATUS, DP0_AMNGENSTATUS),
 	regmap_reg_range(DP0_AUXWDATA(0), DP0_AUXSTATUS),
 	regmap_reg_range(DP0_LTSTAT, DP0_SNKLTCHGREQ),
 	regmap_reg_range(DP_PHY_CTRL, DP_PHY_CTRL),
 	regmap_reg_range(DP0_PLLCTRL, PXL_PLLCTRL),
-	regmap_reg_range(VFUEN0, VFUEN0),
-	regmap_reg_range(INTSTS_G, INTSTS_G),
-	regmap_reg_range(GPIOI, GPIOI),
 };

 static const struct regmap_access_table tc_volatile_table = {
···
 	.n_yes_ranges = ARRAY_SIZE(tc_volatile_ranges),
 };

-static bool tc_writeable_reg(struct device *dev, unsigned int reg)
-{
-	return (reg != TC_IDREG) &&
-	       (reg != DP0_LTSTAT) &&
-	       (reg != DP0_SNKLTCHGREQ);
-}
+static const struct regmap_range tc_precious_ranges[] = {
+	regmap_reg_range(SYSSTAT, SYSSTAT),
+};
+
+static const struct regmap_access_table tc_precious_table = {
+	.yes_ranges = tc_precious_ranges,
+	.n_yes_ranges = ARRAY_SIZE(tc_precious_ranges),
+};
+
+static const struct regmap_range tc_non_writeable_ranges[] = {
+	regmap_reg_range(PPI_BUSYPPI, PPI_BUSYPPI),
+	regmap_reg_range(DSI_BUSYDSI, DSI_BUSYDSI),
+	regmap_reg_range(DSI_LANESTATUS0, DSI_INTSTATUS),
+	regmap_reg_range(TC_IDREG, SYSSTAT),
+	regmap_reg_range(GPIOI, GPIOI),
+	regmap_reg_range(DP0_LTSTAT, DP0_SNKLTCHGREQ),
+};
+
+static const struct regmap_access_table tc_writeable_table = {
+	.no_ranges = tc_non_writeable_ranges,
+	.n_no_ranges = ARRAY_SIZE(tc_non_writeable_ranges),
+};

 static const struct regmap_config tc_regmap_config = {
 	.name = "tc358767",
···
 	.cache_type = REGCACHE_MAPLE,
 	.readable_reg = tc_readable_reg,
 	.volatile_table = &tc_volatile_table,
-	.writeable_reg = tc_writeable_reg,
+	.precious_table = &tc_precious_table,
+	.wr_table = &tc_writeable_table,
 	.reg_format_endian = REGMAP_ENDIAN_BIG,
 	.val_format_endian = REGMAP_ENDIAN_LITTLE,
 };
-4
drivers/gpu/drm/drm_debugfs.c
···
 #include "drm_crtc_internal.h"
 #include "drm_internal.h"

-#if defined(CONFIG_DEBUG_FS)
-
 /***************************************************
  * Initialization, etc.
  **************************************************/
···
 	debugfs_remove_recursive(encoder->debugfs_entry);
 	encoder->debugfs_entry = NULL;
 }
-
-#endif /* CONFIG_DEBUG_FS */
+2 -23
drivers/gpu/drm/drm_edid.c
···
 	if (!mode_in_vsync_range(mode, edid, t))
 		return false;

-	if ((max_clock = range_pixel_clock(edid, t)))
+	max_clock = range_pixel_clock(edid, t);
+	if (max_clock)
 		if (mode->clock > max_clock)
 			return false;
···
 	return num_modes;
 }
 EXPORT_SYMBOL(drm_add_modes_noedid);
-
-/**
- * drm_set_preferred_mode - Sets the preferred mode of a connector
- * @connector: connector whose mode list should be processed
- * @hpref: horizontal resolution of preferred mode
- * @vpref: vertical resolution of preferred mode
- *
- * Marks a mode as preferred if it matches the resolution specified by @hpref
- * and @vpref.
- */
-void drm_set_preferred_mode(struct drm_connector *connector,
-			    int hpref, int vpref)
-{
-	struct drm_display_mode *mode;
-
-	list_for_each_entry(mode, &connector->probed_modes, head) {
-		if (mode->hdisplay == hpref &&
-		    mode->vdisplay == vpref)
-			mode->type |= DRM_MODE_TYPE_PREFERRED;
-	}
-}
-EXPORT_SYMBOL(drm_set_preferred_mode);

 static bool is_hdmi2_sink(const struct drm_connector *connector)
 {
+2 -2
drivers/gpu/drm/drm_ioc32.c
···
 	unsigned int num;
 	/* 64-bit version has a 32-bit pad here */
 	u64 data;	/**< Pointer */
-} __attribute__((packed)) drm_update_draw32_t;
+} __packed drm_update_draw32_t;

 static int compat_drm_update_draw(struct file *file, unsigned int cmd,
 				  unsigned long arg)
···
 	u32 pitches[4];
 	u32 offsets[4];
 	u64 modifier[4];
-} __attribute__((packed)) drm_mode_fb_cmd232_t;
+} __packed drm_mode_fb_cmd232_t;

 static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
 				  unsigned long arg)
+22
drivers/gpu/drm/drm_modes.c
···
 	       drm_mode_is_420_also(display, mode);
 }
 EXPORT_SYMBOL(drm_mode_is_420);
+
+/**
+ * drm_set_preferred_mode - Sets the preferred mode of a connector
+ * @connector: connector whose mode list should be processed
+ * @hpref: horizontal resolution of preferred mode
+ * @vpref: vertical resolution of preferred mode
+ *
+ * Marks a mode as preferred if it matches the resolution specified by @hpref
+ * and @vpref.
+ */
+void drm_set_preferred_mode(struct drm_connector *connector,
+			    int hpref, int vpref)
+{
+	struct drm_display_mode *mode;
+
+	list_for_each_entry(mode, &connector->probed_modes, head) {
+		if (mode->hdisplay == hpref &&
+		    mode->vdisplay == vpref)
+			mode->type |= DRM_MODE_TYPE_PREFERRED;
+	}
+}
+EXPORT_SYMBOL(drm_set_preferred_mode);
-36
drivers/gpu/drm/drm_probe_helper.c
···
 EXPORT_SYMBOL(drm_crtc_helper_mode_valid_fixed);

 /**
- * drm_connector_helper_get_modes_from_ddc - Updates the connector's EDID
- *                                           property from the connector's
- *                                           DDC channel
- * @connector: The connector
- *
- * Returns:
- * The number of detected display modes.
- *
- * Uses a connector's DDC channel to retrieve EDID data and update the
- * connector's EDID property and display modes. Drivers can use this
- * function to implement struct &drm_connector_helper_funcs.get_modes
- * for connectors with a DDC channel.
- */
-int drm_connector_helper_get_modes_from_ddc(struct drm_connector *connector)
-{
-	struct edid *edid;
-	int count = 0;
-
-	if (!connector->ddc)
-		return 0;
-
-	edid = drm_get_edid(connector, connector->ddc);
-
-	// clears property if EDID is NULL
-	drm_connector_update_edid_property(connector, edid);
-
-	if (edid) {
-		count = drm_add_edid_modes(connector, edid);
-		kfree(edid);
-	}
-
-	return count;
-}
-EXPORT_SYMBOL(drm_connector_helper_get_modes_from_ddc);
-
-/**
  * drm_connector_helper_get_modes_fixed - Duplicates a display mode for a connector
  * @connector: the connector
  * @fixed_mode: the display hardware's mode
-1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
···
 #include <linux/i2c-algo-bit.h>
 #include <linux/i2c.h>

-#include <drm/drm_edid.h>
 #include <drm/drm_framebuffer.h>

 struct hibmc_connector {
+1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
···
 #include <linux/io.h>

 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_edid.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_print.h>
 #include <drm/drm_simple_kms_helper.h>
+4 -5
drivers/gpu/drm/mgag200/mgag200_drv.c
···
 	}
 	mdev->vram_res = res;

-	/* Don't fail on errors, but performance might be reduced. */
-	devm_arch_io_reserve_memtype_wc(dev->dev, res->start, resource_size(res));
-	devm_arch_phys_wc_add(dev->dev, res->start, resource_size(res));
-
-	mdev->vram = devm_ioremap(dev->dev, res->start, resource_size(res));
+	mdev->vram = devm_ioremap_wc(dev->dev, res->start, resource_size(res));
 	if (!mdev->vram)
 		return -ENOMEM;
+
+	/* Don't fail on errors, but performance might be reduced. */
+	devm_arch_phys_wc_add(dev->dev, res->start, resource_size(res));

 	return 0;
 }
+10 -4
drivers/gpu/drm/mgag200/mgag200_mode.c
···
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
+#include <drm/drm_edid.h>
 #include <drm/drm_format_helper.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_framebuffer.h>
 #include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_print.h>
-#include <drm/drm_probe_helper.h>

 #include "mgag200_drv.h"
···
 int mgag200_vga_connector_helper_get_modes(struct drm_connector *connector)
 {
 	struct mga_device *mdev = to_mga_device(connector->dev);
-	int ret;
+	const struct drm_edid *drm_edid;
+	int count;

 	/*
 	 * Protect access to I/O registers from concurrent modesetting
 	 * by acquiring the I/O-register lock.
 	 */
 	mutex_lock(&mdev->rmmio_lock);
-	ret = drm_connector_helper_get_modes_from_ddc(connector);
+
+	drm_edid = drm_edid_read(connector);
+	drm_edid_connector_update(connector, drm_edid);
+	count = drm_edid_connector_add_modes(connector);
+	drm_edid_free(drm_edid);
+
 	mutex_unlock(&mdev->rmmio_lock);

-	return ret;
+	return count;
 }

 /*
+2 -2
drivers/gpu/drm/nouveau/dispnv04/crtc.c
···
 	regp->Attribute[NV_CIO_AR_CSEL_INDEX] = 0x00;
 }

-/**
+/*
  * Sets up registers for the given mode/adjusted_mode pair.
  *
  * The clocks, CRTCs and outputs attached to this CRTC must be off.
···
 	return ret;
 }

-/**
+/*
  * Sets up registers for the given mode/adjusted_mode pair.
  *
  * The clocks, CRTCs and outputs attached to this CRTC must be off.
+1
drivers/gpu/drm/nouveau/dispnv50/head.c
···

 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_edid.h>
 #include <drm/drm_vblank.h>
 #include "nouveau_connector.h"
+1 -1
drivers/gpu/drm/nouveau/nouveau_connector.h
···

 #include <drm/display/drm_dp_helper.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_edid.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_util.h>
···

 struct nvkm_i2c_port;
 struct dcb_output;
+struct edid;

 #ifdef CONFIG_DRM_NOUVEAU_BACKLIGHT
 struct nouveau_backlight {
+2 -2
drivers/gpu/drm/nouveau/nouveau_ioc32.c
···
-/**
+/*
  * \file mga_ioc32.c
  *
  * 32-bit ioctl compatibility routines for the MGA DRM.
···

 #include "nouveau_ioctl.h"

-/**
+/*
  * Called whenever a 32-bit process running under a 64-bit kernel
  * performs an ioctl on /dev/dri/card<n>.
  *
+1 -2
drivers/gpu/drm/nouveau/nvif/outp.c
···
 	if (ret)
 		goto done;

-	*pedid = kmalloc(args->size, GFP_KERNEL);
+	*pedid = kmemdup(args->data, args->size, GFP_KERNEL);
 	if (!*pedid) {
 		ret = -ENOMEM;
 		goto done;
 	}

-	memcpy(*pedid, args->data, args->size);
 	ret = args->size;
 done:
 	kfree(args);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
···
 	}
 }

-/**
+/*
  * Wait until GR goes idle. GR is considered idle if it is disabled by the
  * MC (0x200) register, or GR is not busy and a context switch is not in
  * progress.
+68 -68
drivers/gpu/drm/nouveau/nvkm/subdev/bios/init.c
···
  * init opcode handlers
  *****************************************************************************/

-/**
+/*
  * init_reserved - stub for various unknown/unused single-byte opcodes
  *
  */
···
 	init->offset += length;
 }

-/**
+/*
  * INIT_DONE - opcode 0x71
  *
  */
···
 	init->offset = 0x0000;
 }

-/**
+/*
  * INIT_IO_RESTRICT_PROG - opcode 0x32
  *
  */
···
 	trace("}]\n");
 }

-/**
+/*
  * INIT_REPEAT - opcode 0x33
  *
  */
···
 	init->repeat = repeat;
 }

-/**
+/*
  * INIT_IO_RESTRICT_PLL - opcode 0x34
  *
  */
···
 	trace("}]\n");
 }

-/**
+/*
  * INIT_END_REPEAT - opcode 0x36
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_COPY - opcode 0x37
  *
  */
···
 	init_wrvgai(init, port, index, data);
 }

-/**
+/*
  * INIT_NOT - opcode 0x38
  *
  */
···
 	init_exec_inv(init);
 }

-/**
+/*
  * INIT_IO_FLAG_CONDITION - opcode 0x39
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_GENERIC_CONDITION - opcode 0x3a
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_IO_MASK_OR - opcode 0x3b
  *
  */
···
 	init_wrvgai(init, 0x03d4, index, data &= ~(1 << or));
 }

-/**
+/*
  * INIT_IO_OR - opcode 0x3c
  *
  */
···
 	init_wrvgai(init, 0x03d4, index, data | (1 << or));
 }

-/**
+/*
  * INIT_ANDN_REG - opcode 0x47
  *
  */
···
 	init_mask(init, reg, mask, 0);
 }

-/**
+/*
  * INIT_OR_REG - opcode 0x48
  *
  */
···
 	init_mask(init, reg, 0, mask);
 }

-/**
+/*
  * INIT_INDEX_ADDRESS_LATCHED - opcode 0x49
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_IO_RESTRICT_PLL2 - opcode 0x4a
  *
  */
···
 	trace("}]\n");
 }

-/**
+/*
  * INIT_PLL2 - opcode 0x4b
  *
  */
···
 	init_prog_pll(init, reg, freq);
 }

-/**
+/*
  * INIT_I2C_BYTE - opcode 0x4c
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_ZM_I2C_BYTE - opcode 0x4d
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_ZM_I2C - opcode 0x4e
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_TMDS - opcode 0x4f
  *
  */
···
 	init_wr32(init, reg + 0, addr);
 }

-/**
+/*
  * INIT_ZM_TMDS_GROUP - opcode 0x50
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_CR_INDEX_ADDRESS_LATCHED - opcode 0x51
  *
  */
···
 	init_wrvgai(init, 0x03d4, addr0, save0);
 }

-/**
+/*
  * INIT_CR - opcode 0x52
  *
  */
···
 	init_wrvgai(init, 0x03d4, addr, val | data);
 }

-/**
+/*
  * INIT_ZM_CR - opcode 0x53
  *
  */
···
 	init_wrvgai(init, 0x03d4, addr, data);
 }

-/**
+/*
  * INIT_ZM_CR_GROUP - opcode 0x54
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_CONDITION_TIME - opcode 0x56
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_LTIME - opcode 0x57
  *
  */
···
 	mdelay(msec);
 }

-/**
+/*
  * INIT_ZM_REG_SEQUENCE - opcode 0x58
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_PLL_INDIRECT - opcode 0x59
  *
  */
···
 	init_prog_pll(init, reg, freq);
 }

-/**
+/*
  * INIT_ZM_REG_INDIRECT - opcode 0x5a
  *
  */
···
 	init_wr32(init, addr, data);
 }

-/**
+/*
  * INIT_SUB_DIRECT - opcode 0x5b
  *
  */
···
 	init->offset += 3;
 }

-/**
+/*
  * INIT_JUMP - opcode 0x5c
  *
  */
···
 	init->offset += 3;
 }

-/**
+/*
  * INIT_I2C_IF - opcode 0x5e
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_COPY_NV_REG - opcode 0x5f
  *
  */
···
 	init_mask(init, dreg, ~dmask, (data & smask) ^ sxor);
 }

-/**
+/*
  * INIT_ZM_INDEX_IO - opcode 0x62
  *
  */
···
 	init_wrvgai(init, port, index, data);
 }

-/**
+/*
  * INIT_COMPUTE_MEM - opcode 0x63
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_RESET - opcode 0x65
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_CONFIGURE_MEM - opcode 0x66
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_CONFIGURE_CLK - opcode 0x67
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_CONFIGURE_PREINIT - opcode 0x68
  *
  */
···
 	init_exec_force(init, false);
 }

-/**
+/*
  * INIT_IO - opcode 0x69
  *
  */
···
 	init_wrport(init, port, data | value);
 }

-/**
+/*
  * INIT_SUB - opcode 0x6b
  *
  */
···
 	init->offset += 2;
 }

-/**
+/*
  * INIT_RAM_CONDITION - opcode 0x6d
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_NV_REG - opcode 0x6e
  *
  */
···
 	init_mask(init, reg, ~mask, data);
 }

-/**
+/*
  * INIT_MACRO - opcode 0x6f
  *
  */
···
 	init->offset += 2;
 }

-/**
+/*
  * INIT_RESUME - opcode 0x72
  *
  */
···
 	init_exec_set(init, true);
 }

-/**
+/*
  * INIT_STRAP_CONDITION - opcode 0x73
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_TIME - opcode 0x74
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_CONDITION - opcode 0x75
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_IO_CONDITION - opcode 0x76
  *
  */
···
 		init_exec_set(init, false);
 }

-/**
+/*
  * INIT_ZM_REG16 - opcode 0x77
  *
  */
···
 	init_wr32(init, addr, data);
 }

-/**
+/*
  * INIT_INDEX_IO - opcode 0x78
  *
  */
···
 	init_wrvgai(init, port, index, data | value);
 }

-/**
+/*
  * INIT_PLL - opcode 0x79
  *
  */
···
 	init_prog_pll(init, reg, freq);
 }

-/**
+/*
  * INIT_ZM_REG - opcode 0x7a
  *
  */
···
 	init_wr32(init, addr, data);
 }

-/**
+/*
  * INIT_RAM_RESTRICT_PLL - opcde 0x87
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_RESET_BEGUN - opcode 0x8c
  *
  */
···
 	init->offset += 1;
 }

-/**
+/*
  * INIT_RESET_END - opcode 0x8d
  *
  */
···
 	init->offset += 1;
 }

-/**
+/*
  * INIT_GPIO - opcode 0x8e
  *
  */
···
 	nvkm_gpio_reset(gpio, DCB_GPIO_UNUSED);
 }

-/**
+/*
  * INIT_RAM_RESTRICT_ZM_GROUP - opcode 0x8f
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_COPY_ZM_REG - opcode 0x90
  *
  */
···
 	init_wr32(init, dreg, init_rd32(init, sreg));
 }

-/**
+/*
  * INIT_ZM_REG_GROUP - opcode 0x91
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_XLAT - opcode 0x96
  *
  */
···
 	init_mask(init, daddr, ~dmask, data);
 }

-/**
+/*
  * INIT_ZM_MASK_ADD - opcode 0x97
  *
  */
···
 	init_wr32(init, addr, data);
 }

-/**
+/*
  * INIT_AUXCH - opcode 0x98
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_AUXCH - opcode 0x99
  *
  */
···
 	}
 }

-/**
+/*
  * INIT_I2C_LONG_IF - opcode 0x9a
  *
  */
···
 	init_exec_set(init, false);
 }

-/**
+/*
  * INIT_GPIO_NE - opcode 0xa9
  *
  */
drivers/gpu/drm/nouveau/nvkm/subdev/volt/gk20a.c  +2 -2

···
 	/* 852 */ { 1608418, -21643, -269, 0, 763, -48},
 };

-/**
+/*
  * cvb_mv = ((c2 * speedo / s_scale + c1) * speedo / s_scale + c0)
  */
 static inline int
···
 	return mv;
 }

-/**
+/*
  * cvb_t_mv =
  * ((c2 * speedo / s_scale + c1) * speedo / s_scale + c0) +
  * ((c3 * speedo / s_scale + c4 + c5 * T / t_scale) * T / t_scale)
drivers/gpu/drm/panel/panel-edp.c  +95 -2

···
 	unsigned int hpd_absent;

 	/**
+	 * @powered_on_to_enable: Time between panel powered on and enable.
+	 *
+	 * The minimum time, in milliseconds, that needs to have passed
+	 * between when panel powered on and enable may begin.
+	 *
+	 * This is (T3+T4+T5+T6+T8)-min on eDP timing diagrams or after the
+	 * power supply enabled until we can turn the backlight on and see
+	 * valid data.
+	 *
+	 * This doesn't normally need to be set if timings are already met by
+	 * prepare_to_enable or enable.
+	 */
+	unsigned int powered_on_to_enable;
+
+	/**
 	 * @prepare_to_enable: Time between prepare and enable.
 	 *
 	 * The minimum time, in milliseconds, that needs to have passed
···
 	bool prepared;

 	ktime_t prepared_time;
+	ktime_t powered_on_time;
 	ktime_t unprepared_time;

 	const struct panel_desc *desc;
···
 	if (!p->prepared)
 		return 0;

-	pm_runtime_mark_last_busy(panel->dev);
-	ret = pm_runtime_put_autosuspend(panel->dev);
+	ret = pm_runtime_put_sync_suspend(panel->dev);
 	if (ret < 0)
 		return ret;
 	p->prepared = false;
···
 	}

 	gpiod_set_value_cansleep(p->enable_gpio, 1);
+
+	p->powered_on_time = ktime_get_boottime();

 	delay = p->desc->delay.hpd_reliable;
 	if (p->no_hpd)
···
 	msleep(delay);

 	panel_edp_wait(p->prepared_time, p->desc->delay.prepare_to_enable);
+
+	panel_edp_wait(p->powered_on_time, p->desc->delay.powered_on_to_enable);

 	p->enabled = true;
···
 	.prepare_to_enable = 80,
 };

+static const struct panel_delay delay_200_500_e50_p2e80 = {
+	.hpd_absent = 200,
+	.unprepare = 500,
+	.enable = 50,
+	.prepare_to_enable = 80,
+};
+
 static const struct panel_delay delay_200_500_p2e100 = {
 	.hpd_absent = 200,
 	.unprepare = 500,
···
 	.enable = 200,
 };

+static const struct panel_delay delay_200_500_e200_d200 = {
+	.hpd_absent = 200,
+	.unprepare = 500,
+	.enable = 200,
+	.disable = 200,
+};
+
 static const struct panel_delay delay_200_500_e200_d10 = {
 	.hpd_absent = 200,
 	.unprepare = 500,
···
 	.hpd_absent = 200,
 	.unprepare = 150,
 	.enable = 200,
+};
+
+static const struct panel_delay delay_200_500_e50_po2e200 = {
+	.hpd_absent = 200,
+	.unprepare = 500,
+	.enable = 50,
+	.powered_on_to_enable = 200,
 };

 #define EDP_PANEL_ENTRY(vend_chr_0, vend_chr_1, vend_chr_2, product_id, _delay, _name) \
···
  * Sort first by vendor, then by product ID.
  */
 static const struct edp_panel_entry edp_panels[] = {
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x105c, &delay_200_500_e50, "B116XTN01.0"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1062, &delay_200_500_e50, "B120XAN01.0"),
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x125c, &delay_200_500_e50, "Unknown"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x145c, &delay_200_500_e50, "B116XAB01.4"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1ea5, &delay_200_500_e50, "B116XAK01.6"),
···
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x403d, &delay_200_500_e50, "B140HAN04.0"),
 	EDP_PANEL_ENTRY2('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01.0",
 			 &auo_b116xa3_mode),
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x435c, &delay_200_500_e50, "Unknown"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x582d, &delay_200_500_e50, "B133UAN01.0"),
 	EDP_PANEL_ENTRY2('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1",
 			 &auo_b116xa3_mode),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x635c, &delay_200_500_e50, "B116XAN06.3"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x639c, &delay_200_500_e50, "B140HAK02.7"),
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x723c, &delay_200_500_e50, "B140XTN07.2"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0xf390, &delay_200_500_e50, "B140XTN07.7"),

+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0607, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0608, &delay_200_500_e50, "NT116WHM-N11"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0668, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x068f, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x06e5, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0705, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0715, &delay_200_150_e200, "NT116WHM-N21"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0717, &delay_200_500_e50_po2e200, "NV133FHM-N42"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0731, &delay_200_500_e80, "NT116WHM-N42"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0741, &delay_200_500_e200, "NT116WHM-N44"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0744, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x074c, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0751, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0754, &delay_200_500_e50_po2e200, "NV116WHM-N45"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0771, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0786, &delay_200_500_p2e80, "NV116WHM-T01"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0797, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07d1, &boe_nv133fhm_n61.delay, "NV133FHM-N61"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07d3, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07f6, &delay_200_500_e200, "NT140FHM-N44"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07f8, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0813, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0827, &delay_200_500_e50_p2e80, "NT140WHM-N44 V8.0"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x082d, &boe_nv133fhm_n61.delay, "NV133FHM-N62"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0843, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x08b2, &delay_200_500_e200, "NT140WHM-N49"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0848, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0849, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09c3, &delay_200_500_e50, "NT116WHM-N21,836X2"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x094b, &delay_200_500_e50, "NT116WHM-N21"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0951, &delay_200_500_e80, "NV116WHM-N47"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x095f, &delay_200_500_e50, "NE135FBM-N41 v8.1"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x096e, &delay_200_500_e50_po2e200, "NV116WHM-T07 V8.0"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0979, &delay_200_500_e50, "NV116WHM-N49 V8.0"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x098d, &boe_nv110wtm_n61.delay, "NV110WTM-N61"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0993, &delay_200_500_e80, "NV116WHM-T14 V8.0"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09ad, &delay_200_500_e80, "NV116WHM-N47"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09ae, &delay_200_500_e200, "NT140FHM-N45"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09dd, &delay_200_500_e50, "NT116WHM-N21"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a36, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a3e, &delay_200_500_e80, "NV116WHM-N49"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0ac5, &delay_200_500_e50, "NV116WHM-N4C"),
+	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b34, &delay_200_500_e80, "NV122WUM-N41"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b43, &delay_200_500_e200, "NV140FHM-T09"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b56, &delay_200_500_e80, "NT140FHM-N47"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0c20, &delay_200_500_e80, "NT140FHM-N47"),

+	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1130, &delay_200_500_e50, "N116BGE-EB2"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1132, &delay_200_500_e80_d50, "N116BGE-EA2"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1138, &innolux_n116bca_ea1.delay, "N116BCA-EA1-RC4"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1139, &delay_200_500_e80_d50, "N116BGE-EA2"),
+	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1141, &delay_200_500_e80_d50, "Unknown"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1145, &delay_200_500_e80_d50, "N116BCN-EB1"),
+	EDP_PANEL_ENTRY('C', 'M', 'N', 0x114a, &delay_200_500_e80_d50, "Unknown"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x114c, &innolux_n116bca_ea1.delay, "N116BCA-EA1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1152, &delay_200_500_e80_d50, "N116BCN-EA1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1153, &delay_200_500_e80_d50, "N116BGE-EA2"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1154, &delay_200_500_e80_d50, "N116BCA-EA2"),
+	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1156, &delay_200_500_e80_d50, "Unknown"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1157, &delay_200_500_e80_d50, "N116BGE-EA2"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x115b, &delay_200_500_e80_d50, "N116BCN-EB1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1247, &delay_200_500_e80_d50, "N120ACA-EA1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x142b, &delay_200_500_e80_d50, "N140HCA-EAC"),
+	EDP_PANEL_ENTRY('C', 'M', 'N', 0x142e, &delay_200_500_e80_d50, "N140BGA-EA4"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x144f, &delay_200_500_e80_d50, "N140HGA-EA1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x1468, &delay_200_500_e80, "N140HGA-EA1"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x14d4, &delay_200_500_e80_d50, "N140HCA-EAC"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x14d6, &delay_200_500_e80_d50, "N140BGA-EA4"),
 	EDP_PANEL_ENTRY('C', 'M', 'N', 0x14e5, &delay_200_500_e80_d50, "N140HGA-EA1"),

+	EDP_PANEL_ENTRY('C', 'S', 'O', 0x1200, &delay_200_500_e50, "MNC207QS1-1"),
+
+	EDP_PANEL_ENTRY('H', 'K', 'C', 0x2d51, &delay_200_500_e200, "Unknown"),
+	EDP_PANEL_ENTRY('H', 'K', 'C', 0x2d5b, &delay_200_500_e200, "Unknown"),
 	EDP_PANEL_ENTRY('H', 'K', 'C', 0x2d5c, &delay_200_500_e200, "MB116AN01-2"),

 	EDP_PANEL_ENTRY('I', 'V', 'O', 0x048e, &delay_200_500_e200_d10, "M116NWR6 R5"),
···
 	EDP_PANEL_ENTRY('I', 'V', 'O', 0x854b, &delay_200_500_p2e100, "R133NW4K-R0"),
 	EDP_PANEL_ENTRY('I', 'V', 'O', 0x8c4d, &delay_200_150_e200, "R140NWFM R1"),

+	EDP_PANEL_ENTRY('K', 'D', 'B', 0x044f, &delay_200_500_e80_d50, "Unknown"),
 	EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"),
+	EDP_PANEL_ENTRY('K', 'D', 'B', 0x1118, &delay_200_500_e50, "KD116N29-30NK-A005"),
 	EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"),

+	EDP_PANEL_ENTRY('K', 'D', 'C', 0x044f, &delay_200_500_e50, "KD116N9-30NH-F3"),
+	EDP_PANEL_ENTRY('K', 'D', 'C', 0x05f1, &delay_200_500_e80_d50, "KD116N5-30NV-G7"),
 	EDP_PANEL_ENTRY('K', 'D', 'C', 0x0809, &delay_200_500_e50, "KD116N2930A15"),

+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x0000, &delay_200_500_e200_d200, "Unknown"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x048d, &delay_200_500_e200_d200, "Unknown"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x0497, &delay_200_500_e200_d200, "LP116WH7-SPB1"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x052c, &delay_200_500_e200_d200, "LP133WF2-SPL7"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x0537, &delay_200_500_e200_d200, "Unknown"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x054a, &delay_200_500_e200_d200, "LP116WH8-SPC1"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x0567, &delay_200_500_e200_d200, "Unknown"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x05af, &delay_200_500_e200_d200, "Unknown"),
+	EDP_PANEL_ENTRY('L', 'G', 'D', 0x05f1, &delay_200_500_e200_d200, "Unknown"),

 	EDP_PANEL_ENTRY('S', 'D', 'C', 0x416d, &delay_100_500_e200, "ATNA45AF01"),
drivers/gpu/drm/panel/panel-leadtek-ltk050h3146w.c  +7 -16

···
 		return -EINVAL;

 	ctx->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
-	if (IS_ERR(ctx->reset_gpio)) {
-		dev_err(dev, "cannot get reset gpio\n");
-		return PTR_ERR(ctx->reset_gpio);
-	}
+	if (IS_ERR(ctx->reset_gpio))
+		return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), "cannot get reset gpio\n");

 	ctx->vci = devm_regulator_get(dev, "vci");
-	if (IS_ERR(ctx->vci)) {
-		ret = PTR_ERR(ctx->vci);
-		if (ret != -EPROBE_DEFER)
-			dev_err(dev, "Failed to request vci regulator: %d\n", ret);
-		return ret;
-	}
+	if (IS_ERR(ctx->vci))
+		return dev_err_probe(dev, PTR_ERR(ctx->vci), "Failed to request vci regulator\n");

 	ctx->iovcc = devm_regulator_get(dev, "iovcc");
-	if (IS_ERR(ctx->iovcc)) {
-		ret = PTR_ERR(ctx->iovcc);
-		if (ret != -EPROBE_DEFER)
-			dev_err(dev, "Failed to request iovcc regulator: %d\n", ret);
-		return ret;
-	}
+	if (IS_ERR(ctx->iovcc))
+		return dev_err_probe(dev, PTR_ERR(ctx->iovcc),
+				     "Failed to request iovcc regulator\n");

 	mipi_dsi_set_drvdata(dsi, ctx);
drivers/gpu/drm/panel/panel-novatek-nt35510.c  +367 -57

···
 #include <drm/drm_modes.h>
 #include <drm/drm_panel.h>

+#define NT35510_CMD_CORRECT_GAMMA BIT(0)
+#define NT35510_CMD_CONTROL_DISPLAY BIT(1)
+
 #define MCS_CMD_MAUCCTR 0xF0 /* Manufacturer command enable */
 #define MCS_CMD_READ_ID1 0xDA
 #define MCS_CMD_READ_ID2 0xDB
···
 /* AVDD and AVEE setting 3 bytes */
 #define NT35510_P1_AVDD_LEN 3
 #define NT35510_P1_AVEE_LEN 3
+#define NT35510_P1_VCL_LEN 3
 #define NT35510_P1_VGH_LEN 3
 #define NT35510_P1_VGL_LEN 3
 #define NT35510_P1_VGP_LEN 3
 #define NT35510_P1_VGN_LEN 3
+#define NT35510_P1_VCMOFF_LEN 2
 /* BT1CTR thru BT5CTR setting 3 bytes */
 #define NT35510_P1_BT1CTR_LEN 3
 #define NT35510_P1_BT2CTR_LEN 3
+#define NT35510_P1_BT3CTR_LEN 3
 #define NT35510_P1_BT4CTR_LEN 3
 #define NT35510_P1_BT5CTR_LEN 3
 /* 52 gamma parameters times two per color: positive and negative */
 #define NT35510_P1_GAMMA_LEN 52
+
+#define NT35510_WRCTRLD_BCTRL BIT(5)
+#define NT35510_WRCTRLD_A BIT(4)
+#define NT35510_WRCTRLD_DD BIT(3)
+#define NT35510_WRCTRLD_BL BIT(2)
+#define NT35510_WRCTRLD_DB BIT(1)
+#define NT35510_WRCTRLD_G BIT(0)
+
+#define NT35510_WRCABC_OFF 0
+#define NT35510_WRCABC_UI_MODE 1
+#define NT35510_WRCABC_STILL_MODE 2
+#define NT35510_WRCABC_MOVING_MODE 3

 /**
  * struct nt35510_config - the display-specific NT35510 configuration
···
 	 */
 	const struct drm_display_mode mode;
 	/**
+	 * @mode_flags: DSI operation mode related flags
+	 */
+	unsigned long mode_flags;
+	/**
+	 * @cmds: enable DSI commands
+	 */
+	u32 cmds;
+	/**
 	 * @avdd: setting for AVDD ranging from 0x00 = 6.5V to 0x14 = 4.5V
 	 * in 0.1V steps the default is 0x05 which means 6.0V
 	 */
···
 	 * The defaults are 4 and 3 yielding 0x34
 	 */
 	u8 bt2ctr[NT35510_P1_BT2CTR_LEN];
+	/**
+	 * @vcl: setting for VCL ranging from 0x00 = -2.5V to 0x11 = -4.0V
+	 * in 1V steps, the default is 0x00 which means -2.5V
+	 */
+	u8 vcl[NT35510_P1_VCL_LEN];
+	/**
+	 * @bt3ctr: setting for boost power control for the VCL step-up
+	 * circuit (3)
+	 * bits 0..2 in the lower nibble controls CLCK, the booster clock
+	 * frequency, the values are the same as for PCK in @bt1ctr.
+	 * bits 4..5 in the upper nibble controls BTCL, the boosting
+	 * amplification for the step-up circuit.
+	 * 0 = Disable
+	 * 1 = -0.5 x VDDB
+	 * 2 = -1 x VDDB
+	 * 3 = -2 x VDDB
+	 * The defaults are 4 and 2 yielding 0x24
+	 */
+	u8 bt3ctr[NT35510_P1_BT3CTR_LEN];
 	/**
 	 * @vgh: setting for VGH ranging from 0x00 = 7.0V to 0x0B = 18.0V
 	 * in 1V steps, the default is 0x08 which means 15V
···
 	 * same layout of bytes as @vgp.
 	 */
 	u8 vgn[NT35510_P1_VGN_LEN];
+	/**
+	 * @vcmoff: setting the DC VCOM offset voltage
+	 * The first byte contains bit 8 of VCM in bit 0 and VCMOFFSEL in bit 4.
+	 * The second byte contains bits 0..7 of VCM.
+	 * VCMOFFSEL the common voltage offset mode.
+	 * VCMOFFSEL 0x00 = VCOM .. 0x01 Gamma.
+	 * The default is 0x00.
+	 * VCM the VCOM output voltage (VCMOFFSEL = 0) or the internal register
+	 * offset for gamma voltage (VCMOFFSEL = 1).
+	 * VCM 0x00 = 0V/0 .. 0x118 = 3.5V/280 in steps of 12.5mV/1step
+	 * The default is 0x00 = 0V/0.
+	 */
+	u8 vcmoff[NT35510_P1_VCMOFF_LEN];
+	/**
+	 * @dopctr: setting optional control for display
+	 * ERR bits 0..1 in the first byte is the ERR pin output signal setting.
+	 * 0 = Disable, ERR pin output low
+	 * 1 = ERR pin output CRC error only
+	 * 2 = ERR pin output ECC error only
+	 * 3 = ERR pin output CRC and ECC error
+	 * The default is 0.
+	 * N565 bit 2 in the first byte is the 16-bit/pixel format selection.
+	 * 0 = R[4:0] + G[5:3] & G[2:0] + B[4:0]
+	 * 1 = G[2:0] + R[4:0] & B[4:0] + G[5:3]
+	 * The default is 0.
+	 * DIS_EoTP_HS bit 3 in the first byte is "DSI protocol violation" error
+	 * reporting.
+	 * 0 = reporting when error
+	 * 1 = not reporting when error
+	 * DSIM bit 4 in the first byte is the video mode data type enable
+	 * 0 = Video mode data type disable
+	 * 1 = Video mode data type enable
+	 * The default is 0.
+	 * DSIG bit 5 int the first byte is the generic r/w data type enable
+	 * 0 = Generic r/w disable
+	 * 1 = Generic r/w enable
+	 * The default is 0.
+	 * DSITE bit 6 in the first byte is TE line enable
+	 * 0 = TE line is disabled
+	 * 1 = TE line is enabled
+	 * The default is 0.
+	 * RAMKP bit 7 in the first byte is the frame memory keep/loss in
+	 * sleep-in mode
+	 * 0 = contents loss in sleep-in
+	 * 1 = contents keep in sleep-in
+	 * The default is 0.
+	 * CRL bit 1 in the second byte is the source driver data shift
+	 * direction selection. This bit is XOR operation with bit RSMX
+	 * of 3600h command.
+	 * 0 (RMSX = 0) = S1 -> S1440
+	 * 0 (RMSX = 1) = S1440 -> S1
+	 * 1 (RMSX = 0) = S1440 -> S1
+	 * 1 (RMSX = 1) = S1 -> S1440
+	 * The default is 0.
+	 * CTB bit 2 in the second byte is the vertical scanning direction
+	 * selection for gate control signals. This bit is XOR operation
+	 * with bit ML of 3600h command.
+	 * 0 (ML = 0) = Forward (top -> bottom)
+	 * 0 (ML = 1) = Reverse (bottom -> top)
+	 * 1 (ML = 0) = Reverse (bottom -> top)
+	 * 1 (ML = 1) = Forward (top -> bottom)
+	 * The default is 0.
+	 * CRGB bit 3 in the second byte is RGB-BGR order selection. This
+	 * bit is XOR operation with bit RGB of 3600h command.
+	 * 0 (RGB = 0) = RGB/Normal
+	 * 0 (RGB = 1) = BGR/RB swap
+	 * 1 (RGB = 0) = BGR/RB swap
+	 * 1 (RGB = 1) = RGB/Normal
+	 * The default is 0.
+	 * TE_PWR_SEL bit 4 in the second byte is the TE output voltage
+	 * level selection (only valid when DSTB_SEL = 0 or DSTB_SEL = 1,
+	 * VSEL = High and VDDI = 1.665~3.3V).
+	 * 0 = TE output voltage level is VDDI
+	 * 1 = TE output voltage level is VDDA
+	 * The default is 0.
+	 */
+	u8 dopctr[NT35510_P0_DOPCTR_LEN];
+	/**
+	 * @madctl: Memory data access control
+	 * RSMY bit 0 is flip vertical. Flips the display image top to down.
+	 * RSMX bit 1 is flip horizontal. Flips the display image left to right.
+	 * MH bit 2 is the horizontal refresh order.
+	 * RGB bit 3 is the RGB-BGR order.
+	 * 0 = RGB color sequence
+	 * 1 = BGR color sequence
+	 * ML bit 4 is the vertical refresh order.
+	 * MV bit 5 is the row/column exchange.
+	 * MX bit 6 is the column address order.
+	 * MY bit 7 is the row address order.
+	 */
+	u8 madctl;
+	/**
+	 * @sdhdtctr: source output data hold time
+	 * 0x00..0x3F = 0..31.5us in steps of 0.5us
+	 * The default is 0x05 = 2.5us.
+	 */
+	u8 sdhdtctr;
+	/**
+	 * @gseqctr: EQ control for gate signals
+	 * GFEQ_XX[3:0]: time setting of EQ step for falling edge in steps
+	 * of 0.5us.
+	 * The default is 0x07 = 3.5us
+	 * GREQ_XX[7:4]: time setting of EQ step for rising edge in steps
+	 * of 0.5us.
+	 * The default is 0x07 = 3.5us
+	 */
+	u8 gseqctr[NT35510_P0_GSEQCTR_LEN];
 	/**
 	 * @sdeqctr: Source driver control settings, first byte is
 	 * 0 for mode 1 and 1 for mode 2. Mode 1 uses two steps and
···
 	 * @gamma_corr_neg_b: Blue gamma correction parameters, negative
 	 */
 	u8 gamma_corr_neg_b[NT35510_P1_GAMMA_LEN];
+	/**
+	 * @wrdisbv: write display brightness
+	 * 0x00 value means the lowest brightness and 0xff value means
+	 * the highest brightness.
+	 * The default is 0x00.
+	 */
+	u8 wrdisbv;
+	/**
+	 * @wrctrld: write control display
+	 * G bit 0 selects gamma curve: 0 = Manual, 1 = Automatic
+	 * DB bit 1 selects display brightness: 0 = Manual, 1 = Automatic
+	 * BL bit 2 controls backlight control: 0 = Off, 1 = On
+	 * DD bit 3 controls display dimming: 0 = Off, 1 = On
+	 * A bit 4 controls LABC block: 0 = Off, 1 = On
+	 * BCTRL bit 5 controls brightness block: 0 = Off, 1 = On
+	 */
+	u8 wrctrld;
+	/**
+	 * @wrcabc: write content adaptive brightness control
+	 * There is possible to use 4 different modes for content adaptive
+	 * image functionality:
+	 * 0: Off
+	 * 1: User Interface Image (UI-Mode)
+	 * 2: Still Picture Image (Still-Mode)
+	 * 3: Moving Picture Image (Moving-Mode)
+	 * The default is 0
+	 */
+	u8 wrcabc;
+	/**
+	 * @wrcabcmb: write CABC minimum brightness
+	 * Set the minimum brightness value of the display for CABC
+	 * function.
+	 * 0x00 value means the lowest brightness for CABC and 0xff
+	 * value means the highest brightness for CABC.
+	 * The default is 0x00.
+	 */
+	u8 wrcabcmb;
 };

 /**
···
 			       nt->conf->bt2ctr);
 	if (ret)
 		return ret;
+	ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCL,
+				NT35510_P1_VCL_LEN,
+				nt->conf->vcl);
+	if (ret)
+		return ret;
+	ret = nt35510_send_long(nt, dsi, NT35510_P1_BT3CTR,
+				NT35510_P1_BT3CTR_LEN,
+				nt->conf->bt3ctr);
+	if (ret)
+		return ret;
 	ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVGH,
 				NT35510_P1_VGH_LEN,
 				nt->conf->vgh);
···
 	if (ret)
 		return ret;

+	ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
+				NT35510_P1_VCMOFF_LEN,
+				nt->conf->vcmoff);
+	if (ret)
+		return ret;
+
 	/* Typically 10 ms */
 	usleep_range(10000, 20000);
···
 {
 	struct mipi_dsi_device *dsi = to_mipi_dsi_device(nt->dev);
 	const struct nt35510_config *conf = nt->conf;
-	u8 dopctr[NT35510_P0_DOPCTR_LEN];
-	u8 gseqctr[NT35510_P0_GSEQCTR_LEN];
 	u8 dpfrctr[NT35510_P0_DPFRCTR1_LEN];
-	/* FIXME: set up any rotation (assume none for now) */
-	u8 addr_mode = NT35510_ROTATE_0_SETTING;
-	u8 val;
 	int ret;

-	/* Enable TE, EoTP and RGB pixel format */
-	dopctr[0] = NT35510_DOPCTR_0_DSITE | NT35510_DOPCTR_0_EOTP |
-		NT35510_DOPCTR_0_N565;
-	dopctr[1] = NT35510_DOPCTR_1_CTB;
 	ret = nt35510_send_long(nt, dsi, NT35510_P0_DOPCTR,
 				NT35510_P0_DOPCTR_LEN,
-				dopctr);
+				conf->dopctr);
 	if (ret)
 		return ret;

-	ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_ADDRESS_MODE, &addr_mode,
-				 sizeof(addr_mode));
+	ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_ADDRESS_MODE, &conf->madctl,
+				 sizeof(conf->madctl));
 	if (ret < 0)
 		return ret;

-	/*
-	 * Source data hold time, default 0x05 = 2.5us
-	 * 0x00..0x3F = 0 .. 31.5us in steps of 0.5us
-	 * 0x0A = 5us
-	 */
-	val = 0x0A;
-	ret = mipi_dsi_dcs_write(dsi, NT35510_P0_SDHDTCTR, &val,
-				 sizeof(val));
+	ret = mipi_dsi_dcs_write(dsi, NT35510_P0_SDHDTCTR, &conf->sdhdtctr,
+				 sizeof(conf->sdhdtctr));
 	if (ret < 0)
 		return ret;

-	/* EQ control for gate signals, 0x00 = 0 us */
-	gseqctr[0] = 0x00;
-	gseqctr[1] = 0x00;
 	ret = nt35510_send_long(nt, dsi, NT35510_P0_GSEQCTR,
 				NT35510_P0_GSEQCTR_LEN,
-				gseqctr);
+				conf->gseqctr);
 	if (ret)
 		return ret;
···
 	if (ret)
 		return ret;

-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_RED_POS,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_pos_r);
-	if (ret)
-		return ret;
-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_GREEN_POS,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_pos_g);
-	if (ret)
-		return ret;
-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_BLUE_POS,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_pos_b);
-	if (ret)
-		return ret;
-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_RED_NEG,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_neg_r);
-	if (ret)
-		return ret;
-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_GREEN_NEG,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_neg_g);
-	if (ret)
-		return ret;
-	ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_BLUE_NEG,
-				NT35510_P1_GAMMA_LEN,
-				nt->conf->gamma_corr_neg_b);
-	if (ret)
-		return ret;
+	if (nt->conf->cmds & NT35510_CMD_CORRECT_GAMMA) {
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_RED_POS,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_pos_r);
+		if (ret)
+			return ret;
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_GREEN_POS,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_pos_g);
+		if (ret)
+			return ret;
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_BLUE_POS,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_pos_b);
+		if (ret)
+			return ret;
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_RED_NEG,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_neg_r);
+		if (ret)
+			return ret;
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_GREEN_NEG,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_neg_g);
+		if (ret)
+			return ret;
+		ret = nt35510_send_long(nt, dsi, NT35510_P1_SET_GAMMA_BLUE_NEG,
+					NT35510_P1_GAMMA_LEN,
+					nt->conf->gamma_corr_neg_b);
+		if (ret)
+			return ret;
+	}

 	/* Set up stuff in manufacturer control, page 0 */
 	ret = nt35510_send_long(nt, dsi, MCS_CMD_MAUCCTR,
···
 	/* Up to 120 ms */
 	usleep_range(120000, 150000);

+	if (nt->conf->cmds & NT35510_CMD_CONTROL_DISPLAY) {
+		ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY,
+					 &nt->conf->wrctrld,
+					 sizeof(nt->conf->wrctrld));
+		if (ret < 0)
+			return ret;
+
+		ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_WRITE_POWER_SAVE,
+					 &nt->conf->wrcabc,
+					 sizeof(nt->conf->wrcabc));
+		if (ret < 0)
+			return ret;
+
+		ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_CABC_MIN_BRIGHTNESS,
+					 &nt->conf->wrcabcmb,
+					 sizeof(nt->conf->wrcabcmb));
+		if (ret < 0)
+			return ret;
+	}
+
 	ret = mipi_dsi_dcs_set_display_on(dsi);
 	if (ret) {
 		dev_err(nt->dev, "failed to turn display on (%d)\n", ret);
···
 	 */
 	dsi->hs_rate = 349440000;
 	dsi->lp_rate = 9600000;
-	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;

 	/*
 	 * Every new incarnation of this display must have a unique
···
 		dev_err(dev, "missing device configuration\n");
 		return -ENODEV;
 	}
+
+	dsi->mode_flags = nt->conf->mode_flags;

 	nt->supplies[0].supply = "vdd"; /* 2.3-4.8 V */
 	nt->supplies[1].supply = "vddi"; /* 1.65-3.3V */
···
 	if (ret)
 		return ret;

-	nt->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_ASIS);
+	nt->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
 	if (IS_ERR(nt->reset_gpio)) {
 		dev_err(dev, "error getting RESET GPIO\n");
 		return PTR_ERR(nt->reset_gpio);
···
 		return PTR_ERR(bl);
 	}
 	bl->props.max_brightness = 255;
-	bl->props.brightness = 255;
+	if (nt->conf->cmds & NT35510_CMD_CONTROL_DISPLAY)
+		bl->props.brightness = nt->conf->wrdisbv;
+	else
+		bl->props.brightness = 255;
 	bl->props.power = FB_BLANK_POWERDOWN;
 	nt->panel.backlight = bl;
···
 		.vtotal = 800 + 2 + 0 + 5, /* VBP = 5 */
 		.flags = 0,
 	},
+	.mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS,
+	.cmds = NT35510_CMD_CORRECT_GAMMA,
 	/* 0x09: AVDD = 5.6V */
 	.avdd = { 0x09, 0x09, 0x09 },
 	/* 0x34: PCK = Hsync/2, BTP = 2 x VDDB */
···
 	.avee = { 0x09, 0x09, 0x09 },
 	/* 0x24: NCK = Hsync/2, BTN = -2 x VDDB */
 	.bt2ctr = { 0x24, 0x24, 0x24 },
+	/* VBCLA: -2.5V, VBCLB: -2.5V, VBCLC: -2.5V */
+	.vcl = { 0x00, 0x00, 0x00 },
+	/* 0x24: CLCK = Hsync/2, BTN = -1 x VDDB */
+	.bt3ctr = { 0x24, 0x24, 0x24 },
 	/* 0x05 = 12V */
 	.vgh = { 0x05, 0x05, 0x05 },
 	/* 0x24: NCKA = Hsync/2, VGH = 2 x AVDD - AVEE */
···
 	.vgp = { 0x00, 0xA3, 0x00 },
 	/* VGMP: 0x0A3 = 5.0375V, VGSP = 0V */
 	.vgn = { 0x00, 0xA3, 0x00 },
+	/* VCMOFFSEL = VCOM voltage offset mode, VCM = 0V */
+	.vcmoff = { 0x00, 0x00 },
+	/* Enable TE, EoTP and RGB pixel format */
+	.dopctr = { NT35510_DOPCTR_0_DSITE | NT35510_DOPCTR_0_EOTP |
+		NT35510_DOPCTR_0_N565, NT35510_DOPCTR_1_CTB },
+	.madctl = NT35510_ROTATE_0_SETTING,
+	/* 0x0A: SDT = 5 us */
+	.sdhdtctr = 0x0A,
+	/* EQ control for gate signals, 0x00 = 0 us */
+	.gseqctr = { 0x00, 0x00 },
 	/* SDEQCTR: source driver EQ mode 2, 2.5 us rise time on each step */
 	.sdeqctr = { 0x01, 0x05, 0x05, 0x05 },
 	/* SDVPCTR: Normal operation off color during v porch */
···
 	.gamma_corr_neg_b = { NT35510_GAMMA_NEG_DEFAULT },
 };

+static const struct nt35510_config nt35510_frida_frd400b25025 = {
+	.width_mm = 52,
+	.height_mm = 86,
+	.mode = {
+		.clock = 23000,
+		.hdisplay = 480,
+		.hsync_start = 480 + 34, /* HFP = 34 */
+		.hsync_end = 480 + 34 + 2, /* HSync = 2 */
+		.htotal = 480 + 34 + 2 + 34, /* HBP = 34 */
+		.vdisplay = 800,
+		.vsync_start = 800 + 15, /* VFP = 15 */
+		.vsync_end = 800 + 15 + 12, /* VSync = 12 */
+		.vtotal = 800 + 15 + 12 + 15, /* VBP = 15 */
+		.flags = 0,
+	},
+	.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+		MIPI_DSI_MODE_LPM,
+	.cmds = NT35510_CMD_CONTROL_DISPLAY,
+	/* 0x03: AVDD = 6.2V */
+	.avdd = { 0x03, 0x03, 0x03 },
+	/* 0x46: PCK = 2 x Hsync, BTP = 2.5 x VDDB */
+	.bt1ctr = { 0x46, 0x46, 0x46 },
+	/* 0x03: AVEE = -6.2V */
+	.avee = { 0x03, 0x03, 0x03 },
+	/* 0x36: PCK = 2 x Hsync, BTP = 2 x VDDB */
+	.bt2ctr = { 0x36, 0x36, 0x36 },
+	/* VBCLA: -2.5V, VBCLB: -2.5V, VBCLC: -3.5V */
+	.vcl = { 0x00, 0x00, 0x02 },
+	/* 0x26: CLCK = 2 x Hsync, BTN = -1 x VDDB */
+	.bt3ctr = { 0x26, 0x26, 0x26 },
+	/* 0x09 = 16V */
+	.vgh = { 0x09, 0x09, 0x09 },
+	/* 0x36: HCK = 2 x Hsync, VGH = 2 x AVDD - AVEE */
+	.bt4ctr = { 0x36, 0x36, 0x36 },
+	/* 0x08 = -10V */
+	.vgl = { 0x08, 0x08, 0x08 },
+	/* 0x26: LCK = 2 x Hsync, VGL = AVDD + VCL - AVDD */
+	.bt5ctr = { 0x26, 0x26, 0x26 },
+	/* VGMP: 0x080 = 4.6V, VGSP = 0V */
+	.vgp = { 0x00, 0x80, 0x00 },
+	/* VGMP: 0x080 = 4.6V, VGSP = 0V */
+	.vgn = { 0x00, 0x80, 0x00 },
+	/* VCMOFFSEL = VCOM voltage offset mode, VCM = -1V */
+	.vcmoff = { 0x00, 0x50 },
+	.dopctr = { NT35510_DOPCTR_0_RAMKP | NT35510_DOPCTR_0_DSITE |
+		NT35510_DOPCTR_0_DSIG | NT35510_DOPCTR_0_DSIM |
+		NT35510_DOPCTR_0_EOTP | NT35510_DOPCTR_0_N565, 0 },
+	.madctl = NT35510_ROTATE_180_SETTING,
+	/* 0x03: SDT = 1.5 us */
+	.sdhdtctr = 0x03,
+	/* EQ control for gate signals, 0x00 = 0 us */
+	.gseqctr = { 0x00, 0x00 },
+	/* SDEQCTR: source driver EQ mode 2, 1 us rise time on each step */
+	.sdeqctr = { 0x01, 0x02, 0x02, 0x02 },
+	/* SDVPCTR: Normal operation off color during v porch */
+	.sdvpctr = 0x01,
+	/* T1: number of pixel clocks on one scanline: 0x184 = 389 clocks */
+	.t1 = 0x0184,
+	/* VBP: vertical back porch toward the panel */
+	.vbp = 0x1C,
+	/* VFP: vertical front porch toward the panel */
+	.vfp = 0x1C,
+	/* PSEL: divide pixel clock 23MHz with 1 (no clock downscaling) */
+	.psel = 0,
+	/* DPTMCTR12: 0x03: LVGL = VGLX, overlap mode, swap R->L O->E */
+	.dpmctr12 = { 0x03, 0x00, 0x00, },
+	/* write display brightness */
+	.wrdisbv = 0x7f,
+	/* write control display */
+	.wrctrld = NT35510_WRCTRLD_BCTRL | NT35510_WRCTRLD_DD |
+		NT35510_WRCTRLD_BL,
+	/* write content adaptive brightness control */
+	.wrcabc = NT35510_WRCABC_STILL_MODE,
+	/* write CABC minimum brightness */
+	.wrcabcmb = 0xff,
+};
+
 static const struct of_device_id nt35510_of_match[] = {
+	{
+		.compatible = "frida,frd400b25025",
+		.data = &nt35510_frida_frd400b25025,
+	},
 	{
 		.compatible = "hydis,hva40wv1",
 		.data = &nt35510_hydis_hva40wv1,
+3
drivers/gpu/drm/rockchip/analogix_dp-rockchip.c

···
 		return ret;
 	}
 
+	rockchip_drm_encoder_set_crtc_endpoint_id(&dp->encoder,
+						  dev->of_node, 0, 0);
+
 	dp->plat_data.encoder = &dp->encoder.encoder;
 
 	ret = analogix_dp_bind(dp->adp, drm_dev);
+354 -220
drivers/gpu/drm/rockchip/inno_hdmi.c

···
 #include <linux/delay.h>
 #include <linux/err.h>
 #include <linux/hdmi.h>
-#include <linux/mfd/syscon.h>
 #include <linux/mod_devicetable.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
···
 
 #include "inno_hdmi.h"
 
-struct hdmi_data_info {
-	int vic;
-	bool sink_has_audio;
-	unsigned int enc_in_format;
-	unsigned int enc_out_format;
-	unsigned int colorimetry;
+#define INNO_HDMI_MIN_TMDS_CLOCK	25000000U
+
+struct inno_hdmi_phy_config {
+	unsigned long pixelclock;
+	u8 pre_emphasis;
+	u8 voltage_level_control;
+};
+
+struct inno_hdmi_variant {
+	struct inno_hdmi_phy_config *phy_configs;
+	struct inno_hdmi_phy_config *default_phy_config;
 };
 
 struct inno_hdmi_i2c {
···
 
 struct inno_hdmi {
 	struct device *dev;
-	struct drm_device *drm_dev;
 
-	int irq;
 	struct clk *pclk;
+	struct clk *refclk;
 	void __iomem *regs;
 
 	struct drm_connector connector;
···
 	struct inno_hdmi_i2c *i2c;
 	struct i2c_adapter *ddc;
 
-	unsigned int tmds_rate;
+	const struct inno_hdmi_variant *variant;
+};
 
-	struct hdmi_data_info	hdmi_data;
-	struct drm_display_mode previous_mode;
+struct inno_hdmi_connector_state {
+	struct drm_connector_state	base;
+	unsigned int			enc_out_format;
+	unsigned int			colorimetry;
+	bool				rgb_limited_range;
 };
 
 static struct inno_hdmi *encoder_to_inno_hdmi(struct drm_encoder *encoder)
···
 	return container_of(connector, struct inno_hdmi, connector);
 }
 
+#define to_inno_hdmi_conn_state(conn_state) \
+	container_of_const(conn_state, struct inno_hdmi_connector_state, base)
+
 enum {
-	CSC_ITU601_16_235_TO_RGB_0_255_8BIT,
-	CSC_ITU601_0_255_TO_RGB_0_255_8BIT,
-	CSC_ITU709_16_235_TO_RGB_0_255_8BIT,
 	CSC_RGB_0_255_TO_ITU601_16_235_8BIT,
 	CSC_RGB_0_255_TO_ITU709_16_235_8BIT,
 	CSC_RGB_0_255_TO_RGB_16_235_8BIT,
 };
 
 static const char coeff_csc[][24] = {
-	/*
-	 * YUV2RGB:601 SD mode(Y[16:235], UV[16:240], RGB[0:255]):
-	 *   R = 1.164*Y + 1.596*V - 204
-	 *   G = 1.164*Y - 0.391*U - 0.813*V + 154
-	 *   B = 1.164*Y + 2.018*U - 258
-	 */
-	{
-		0x04, 0xa7, 0x00, 0x00, 0x06, 0x62, 0x02, 0xcc,
-		0x04, 0xa7, 0x11, 0x90, 0x13, 0x40, 0x00, 0x9a,
-		0x04, 0xa7, 0x08, 0x12, 0x00, 0x00, 0x03, 0x02
-	},
-	/*
-	 * YUV2RGB:601 SD mode(YUV[0:255],RGB[0:255]):
-	 *   R = Y + 1.402*V - 248
-	 *   G = Y - 0.344*U - 0.714*V + 135
-	 *   B = Y + 1.772*U - 227
-	 */
-	{
-		0x04, 0x00, 0x00, 0x00, 0x05, 0x9b, 0x02, 0xf8,
-		0x04, 0x00, 0x11, 0x60, 0x12, 0xdb, 0x00, 0x87,
-		0x04, 0x00, 0x07, 0x16, 0x00, 0x00, 0x02, 0xe3
-	},
-	/*
-	 * YUV2RGB:709 HD mode(Y[16:235],UV[16:240],RGB[0:255]):
-	 *   R = 1.164*Y + 1.793*V - 248
-	 *   G = 1.164*Y - 0.213*U - 0.534*V + 77
-	 *   B = 1.164*Y + 2.115*U - 289
-	 */
-	{
-		0x04, 0xa7, 0x00, 0x00, 0x07, 0x2c, 0x02, 0xf8,
-		0x04, 0xa7, 0x10, 0xda, 0x12, 0x22, 0x00, 0x4d,
-		0x04, 0xa7, 0x08, 0x74, 0x00, 0x00, 0x03, 0x21
-	},
-
 	/*
 	 * RGB2YUV:601 SD mode:
 	 *   Cb = -0.291G - 0.148R + 0.439B + 128
···
 	},
 };
 
+static struct inno_hdmi_phy_config rk3036_hdmi_phy_configs[] = {
+	{  74250000, 0x3f, 0xbb },
+	{ 165000000, 0x6f, 0xbb },
+	{      ~0UL, 0x00, 0x00 }
+};
+
+static struct inno_hdmi_phy_config rk3128_hdmi_phy_configs[] = {
+	{  74250000, 0x3f, 0xaa },
+	{ 165000000, 0x5f, 0xaa },
+	{      ~0UL, 0x00, 0x00 }
+};
+
+static int inno_hdmi_find_phy_config(struct inno_hdmi *hdmi,
+				     unsigned long pixelclk)
+{
+	const struct inno_hdmi_phy_config *phy_configs =
+						hdmi->variant->phy_configs;
+	int i;
+
+	for (i = 0; phy_configs[i].pixelclock != ~0UL; i++) {
+		if (pixelclk <= phy_configs[i].pixelclock)
+			return i;
+	}
+
+	DRM_DEV_DEBUG(hdmi->dev, "No phy configuration for pixelclock %lu\n",
+		      pixelclk);
+
+	return -EINVAL;
+}
+
 static inline u8 hdmi_readb(struct inno_hdmi *hdmi, u16 offset)
 {
 	return readl_relaxed(hdmi->regs + (offset) * 0x04);
···
 	hdmi_writeb(hdmi, offset, temp);
 }
 
-static void inno_hdmi_i2c_init(struct inno_hdmi *hdmi)
+static void inno_hdmi_i2c_init(struct inno_hdmi *hdmi, unsigned long long rate)
 {
-	int ddc_bus_freq;
+	unsigned long long ddc_bus_freq = rate >> 2;
 
-	ddc_bus_freq = (hdmi->tmds_rate >> 2) / HDMI_SCL_RATE;
+	do_div(ddc_bus_freq, HDMI_SCL_RATE);
 
 	hdmi_writeb(hdmi, DDC_BUS_FREQ_L, ddc_bus_freq & 0xFF);
 	hdmi_writeb(hdmi, DDC_BUS_FREQ_H, (ddc_bus_freq >> 8) & 0xFF);
···
 	hdmi_modb(hdmi, HDMI_SYS_CTRL, m_POWER, v_PWR_OFF);
 }
 
-static void inno_hdmi_set_pwr_mode(struct inno_hdmi *hdmi, int mode)
+static void inno_hdmi_standby(struct inno_hdmi *hdmi)
 {
-	switch (mode) {
-	case NORMAL:
-		inno_hdmi_sys_power(hdmi, false);
+	inno_hdmi_sys_power(hdmi, false);
 
-		hdmi_writeb(hdmi, HDMI_PHY_PRE_EMPHASIS, 0x6f);
-		hdmi_writeb(hdmi, HDMI_PHY_DRIVER, 0xbb);
+	hdmi_writeb(hdmi, HDMI_PHY_DRIVER, 0x00);
+	hdmi_writeb(hdmi, HDMI_PHY_PRE_EMPHASIS, 0x00);
+	hdmi_writeb(hdmi, HDMI_PHY_CHG_PWR, 0x00);
+	hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x15);
+};
 
-		hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x15);
-		hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x14);
-		hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x10);
-		hdmi_writeb(hdmi, HDMI_PHY_CHG_PWR, 0x0f);
-		hdmi_writeb(hdmi, HDMI_PHY_SYNC, 0x00);
-		hdmi_writeb(hdmi, HDMI_PHY_SYNC, 0x01);
+static void inno_hdmi_power_up(struct inno_hdmi *hdmi,
+			       unsigned long mpixelclock)
+{
+	struct inno_hdmi_phy_config *phy_config;
+	int ret = inno_hdmi_find_phy_config(hdmi, mpixelclock);
 
-		inno_hdmi_sys_power(hdmi, true);
-		break;
-
-	case LOWER_PWR:
-		inno_hdmi_sys_power(hdmi, false);
-		hdmi_writeb(hdmi, HDMI_PHY_DRIVER, 0x00);
-		hdmi_writeb(hdmi, HDMI_PHY_PRE_EMPHASIS, 0x00);
-		hdmi_writeb(hdmi, HDMI_PHY_CHG_PWR, 0x00);
-		hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x15);
-
-		break;
-
-	default:
-		DRM_DEV_ERROR(hdmi->dev, "Unknown power mode %d\n", mode);
+	if (ret < 0) {
+		phy_config = hdmi->variant->default_phy_config;
+		DRM_DEV_ERROR(hdmi->dev,
+			      "Using default phy configuration for TMDS rate %lu",
+			      mpixelclock);
+	} else {
+		phy_config = &hdmi->variant->phy_configs[ret];
 	}
-}
+
+	inno_hdmi_sys_power(hdmi, false);
+
+	hdmi_writeb(hdmi, HDMI_PHY_PRE_EMPHASIS, phy_config->pre_emphasis);
+	hdmi_writeb(hdmi, HDMI_PHY_DRIVER, phy_config->voltage_level_control);
+	hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x15);
+	hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x14);
+	hdmi_writeb(hdmi, HDMI_PHY_SYS_CTL, 0x10);
+	hdmi_writeb(hdmi, HDMI_PHY_CHG_PWR, 0x0f);
+	hdmi_writeb(hdmi, HDMI_PHY_SYNC, 0x00);
+	hdmi_writeb(hdmi, HDMI_PHY_SYNC, 0x01);
+
+	inno_hdmi_sys_power(hdmi, true);
+};
 
 static void inno_hdmi_reset(struct inno_hdmi *hdmi)
 {
···
 	val = v_REG_CLK_INV | v_REG_CLK_SOURCE_SYS | v_PWR_ON | v_INT_POL_HIGH;
 	hdmi_modb(hdmi, HDMI_SYS_CTRL, msk, val);
 
-	inno_hdmi_set_pwr_mode(hdmi, NORMAL);
+	inno_hdmi_standby(hdmi);
 }
 
-static int inno_hdmi_upload_frame(struct inno_hdmi *hdmi, int setup_rc,
-				  union hdmi_infoframe *frame, u32 frame_index,
-				  u32 mask, u32 disable, u32 enable)
+static void inno_hdmi_disable_frame(struct inno_hdmi *hdmi,
+				    enum hdmi_infoframe_type type)
 {
-	if (mask)
-		hdmi_modb(hdmi, HDMI_PACKET_SEND_AUTO, mask, disable);
+	struct drm_connector *connector = &hdmi->connector;
 
-	hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_BUF_INDEX, frame_index);
-
-	if (setup_rc >= 0) {
-		u8 packed_frame[HDMI_MAXIMUM_INFO_FRAME_SIZE];
-		ssize_t rc, i;
-
-		rc = hdmi_infoframe_pack(frame, packed_frame,
-					 sizeof(packed_frame));
-		if (rc < 0)
-			return rc;
-
-		for (i = 0; i < rc; i++)
-			hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i,
-				    packed_frame[i]);
-
-		if (mask)
-			hdmi_modb(hdmi, HDMI_PACKET_SEND_AUTO, mask, enable);
+	if (type != HDMI_INFOFRAME_TYPE_AVI) {
+		drm_err(connector->dev,
+			"Unsupported infoframe type: %u\n", type);
+		return;
 	}
 
-	return setup_rc;
+	hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_BUF_INDEX, INFOFRAME_AVI);
 }
 
-static int inno_hdmi_config_video_vsi(struct inno_hdmi *hdmi,
-				      struct drm_display_mode *mode)
+static int inno_hdmi_upload_frame(struct inno_hdmi *hdmi,
+				  union hdmi_infoframe *frame, enum hdmi_infoframe_type type)
 {
-	union hdmi_infoframe frame;
-	int rc;
+	struct drm_connector *connector = &hdmi->connector;
+	u8 packed_frame[HDMI_MAXIMUM_INFO_FRAME_SIZE];
+	ssize_t rc, i;
 
-	rc = drm_hdmi_vendor_infoframe_from_display_mode(&frame.vendor.hdmi,
-							 &hdmi->connector,
-							 mode);
+	if (type != HDMI_INFOFRAME_TYPE_AVI) {
+		drm_err(connector->dev,
+			"Unsupported infoframe type: %u\n", type);
+		return 0;
+	}
 
-	return inno_hdmi_upload_frame(hdmi, rc, &frame, INFOFRAME_VSI,
-		m_PACKET_VSI_EN, v_PACKET_VSI_EN(0), v_PACKET_VSI_EN(1));
+	inno_hdmi_disable_frame(hdmi, type);
+
+	rc = hdmi_infoframe_pack(frame, packed_frame,
+				 sizeof(packed_frame));
+	if (rc < 0)
+		return rc;
+
+	for (i = 0; i < rc; i++)
+		hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i,
+			    packed_frame[i]);
+
+	return 0;
 }
 
 static int inno_hdmi_config_video_avi(struct inno_hdmi *hdmi,
 				      struct drm_display_mode *mode)
 {
+	struct drm_connector *connector = &hdmi->connector;
+	struct drm_connector_state *conn_state = connector->state;
+	struct inno_hdmi_connector_state *inno_conn_state =
+					to_inno_hdmi_conn_state(conn_state);
 	union hdmi_infoframe frame;
 	int rc;
 
 	rc = drm_hdmi_avi_infoframe_from_display_mode(&frame.avi,
 						      &hdmi->connector,
 						      mode);
+	if (rc) {
+		inno_hdmi_disable_frame(hdmi, HDMI_INFOFRAME_TYPE_AVI);
+		return rc;
+	}
 
-	if (hdmi->hdmi_data.enc_out_format == HDMI_COLORSPACE_YUV444)
+	if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_YUV444)
 		frame.avi.colorspace = HDMI_COLORSPACE_YUV444;
-	else if (hdmi->hdmi_data.enc_out_format == HDMI_COLORSPACE_YUV422)
+	else if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_YUV422)
 		frame.avi.colorspace = HDMI_COLORSPACE_YUV422;
 	else
 		frame.avi.colorspace = HDMI_COLORSPACE_RGB;
 
-	return inno_hdmi_upload_frame(hdmi, rc, &frame, INFOFRAME_AVI, 0, 0, 0);
+	if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_RGB) {
+		drm_hdmi_avi_infoframe_quant_range(&frame.avi,
+						   connector, mode,
+						   inno_conn_state->rgb_limited_range ?
+						   HDMI_QUANTIZATION_RANGE_LIMITED :
+						   HDMI_QUANTIZATION_RANGE_FULL);
+	} else {
+		frame.avi.quantization_range = HDMI_QUANTIZATION_RANGE_DEFAULT;
+		frame.avi.ycc_quantization_range =
+			HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
+	}
+
+	return inno_hdmi_upload_frame(hdmi, &frame, HDMI_INFOFRAME_TYPE_AVI);
 }
 
 static int inno_hdmi_config_video_csc(struct inno_hdmi *hdmi)
 {
-	struct hdmi_data_info *data = &hdmi->hdmi_data;
+	struct drm_connector *connector = &hdmi->connector;
+	struct drm_connector_state *conn_state = connector->state;
+	struct inno_hdmi_connector_state *inno_conn_state =
+					to_inno_hdmi_conn_state(conn_state);
 	int c0_c2_change = 0;
 	int csc_enable = 0;
 	int csc_mode = 0;
···
 		  v_VIDEO_INPUT_CSP(0);
 	hdmi_writeb(hdmi, HDMI_VIDEO_CONTRL2, value);
 
-	if (data->enc_in_format == data->enc_out_format) {
-		if ((data->enc_in_format == HDMI_COLORSPACE_RGB) ||
-		    (data->enc_in_format >= HDMI_COLORSPACE_YUV444)) {
+	if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_RGB) {
+		if (inno_conn_state->rgb_limited_range) {
+			csc_mode = CSC_RGB_0_255_TO_RGB_16_235_8BIT;
+			auto_csc = AUTO_CSC_DISABLE;
+			c0_c2_change = C0_C2_CHANGE_DISABLE;
+			csc_enable = v_CSC_ENABLE;
+
+		} else {
 			value = v_SOF_DISABLE | v_COLOR_DEPTH_NOT_INDICATED(1);
 			hdmi_writeb(hdmi, HDMI_VIDEO_CONTRL3, value);
 
···
 				  v_VIDEO_C0_C2_SWAP(C0_C2_CHANGE_DISABLE));
 			return 0;
 		}
-	}
-
-	if (data->colorimetry == HDMI_COLORIMETRY_ITU_601) {
-		if ((data->enc_in_format == HDMI_COLORSPACE_RGB) &&
-		    (data->enc_out_format == HDMI_COLORSPACE_YUV444)) {
-			csc_mode = CSC_RGB_0_255_TO_ITU601_16_235_8BIT;
-			auto_csc = AUTO_CSC_DISABLE;
-			c0_c2_change = C0_C2_CHANGE_DISABLE;
-			csc_enable = v_CSC_ENABLE;
-		} else if ((data->enc_in_format == HDMI_COLORSPACE_YUV444) &&
-			   (data->enc_out_format == HDMI_COLORSPACE_RGB)) {
-			csc_mode = CSC_ITU601_16_235_TO_RGB_0_255_8BIT;
-			auto_csc = AUTO_CSC_ENABLE;
-			c0_c2_change = C0_C2_CHANGE_DISABLE;
-			csc_enable = v_CSC_DISABLE;
-		}
 	} else {
-		if ((data->enc_in_format == HDMI_COLORSPACE_RGB) &&
-		    (data->enc_out_format == HDMI_COLORSPACE_YUV444)) {
-			csc_mode = CSC_RGB_0_255_TO_ITU709_16_235_8BIT;
-			auto_csc = AUTO_CSC_DISABLE;
-			c0_c2_change = C0_C2_CHANGE_DISABLE;
-			csc_enable = v_CSC_ENABLE;
-		} else if ((data->enc_in_format == HDMI_COLORSPACE_YUV444) &&
-			   (data->enc_out_format == HDMI_COLORSPACE_RGB)) {
-			csc_mode = CSC_ITU709_16_235_TO_RGB_0_255_8BIT;
-			auto_csc = AUTO_CSC_ENABLE;
-			c0_c2_change = C0_C2_CHANGE_DISABLE;
-			csc_enable = v_CSC_DISABLE;
+		if (inno_conn_state->colorimetry == HDMI_COLORIMETRY_ITU_601) {
+			if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_YUV444) {
+				csc_mode = CSC_RGB_0_255_TO_ITU601_16_235_8BIT;
+				auto_csc = AUTO_CSC_DISABLE;
+				c0_c2_change = C0_C2_CHANGE_DISABLE;
+				csc_enable = v_CSC_ENABLE;
+			}
+		} else {
+			if (inno_conn_state->enc_out_format == HDMI_COLORSPACE_YUV444) {
+				csc_mode = CSC_RGB_0_255_TO_ITU709_16_235_8BIT;
+				auto_csc = AUTO_CSC_DISABLE;
+				c0_c2_change = C0_C2_CHANGE_DISABLE;
+				csc_enable = v_CSC_ENABLE;
+			}
 		}
 	}
 
···
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_L, value & 0xFF);
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_H, (value >> 8) & 0xFF);
 
-	value = mode->hsync_start - mode->hdisplay;
+	value = mode->htotal - mode->hsync_start;
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_L, value & 0xFF);
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_H, (value >> 8) & 0xFF);
 
···
 	value = mode->vtotal - mode->vdisplay;
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VBLANK, value & 0xFF);
 
-	value = mode->vsync_start - mode->vdisplay;
+	value = mode->vtotal - mode->vsync_start;
 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VDELAY, value & 0xFF);
 
 	value = mode->vsync_end - mode->vsync_start;
···
 			    struct drm_display_mode *mode)
 {
 	struct drm_display_info *display = &hdmi->connector.display_info;
-
-	hdmi->hdmi_data.vic = drm_match_cea_mode(mode);
-
-	hdmi->hdmi_data.enc_in_format = HDMI_COLORSPACE_RGB;
-	hdmi->hdmi_data.enc_out_format = HDMI_COLORSPACE_RGB;
-
-	if ((hdmi->hdmi_data.vic == 6) || (hdmi->hdmi_data.vic == 7) ||
-	    (hdmi->hdmi_data.vic == 21) || (hdmi->hdmi_data.vic == 22) ||
-	    (hdmi->hdmi_data.vic == 2) || (hdmi->hdmi_data.vic == 3) ||
-	    (hdmi->hdmi_data.vic == 17) || (hdmi->hdmi_data.vic == 18))
-		hdmi->hdmi_data.colorimetry = HDMI_COLORIMETRY_ITU_601;
-	else
-		hdmi->hdmi_data.colorimetry = HDMI_COLORIMETRY_ITU_709;
+	unsigned long mpixelclock = mode->clock * 1000;
 
 	/* Mute video and audio output */
 	hdmi_modb(hdmi, HDMI_AV_MUTE, m_AUDIO_MUTE | m_VIDEO_BLACK,
···
 
 	inno_hdmi_config_video_csc(hdmi);
 
-	if (display->is_hdmi) {
+	if (display->is_hdmi)
 		inno_hdmi_config_video_avi(hdmi, mode);
-		inno_hdmi_config_video_vsi(hdmi, mode);
-	}
 
 	/*
 	 * When IP controller have configured to an accurate video
···
 	 * DCLK_LCDC, so we need to init the TMDS rate to mode pixel
 	 * clock rate, and reconfigure the DDC clock.
 	 */
-	hdmi->tmds_rate = mode->clock * 1000;
-	inno_hdmi_i2c_init(hdmi);
+	inno_hdmi_i2c_init(hdmi, mpixelclock);
 
 	/* Unmute video and audio output */
 	hdmi_modb(hdmi, HDMI_AV_MUTE, m_AUDIO_MUTE | m_VIDEO_BLACK,
 		  v_AUDIO_MUTE(0) | v_VIDEO_MUTE(0));
 
+	inno_hdmi_power_up(hdmi, mpixelclock);
+
 	return 0;
 }
 
-static void inno_hdmi_encoder_mode_set(struct drm_encoder *encoder,
-				       struct drm_display_mode *mode,
-				       struct drm_display_mode *adj_mode)
+static enum drm_mode_status inno_hdmi_display_mode_valid(struct inno_hdmi *hdmi,
+							 struct drm_display_mode *mode)
+{
+	unsigned long mpixelclk, max_tolerance;
+	long rounded_refclk;
+
+	/* No support for double-clock modes */
+	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+		return MODE_BAD;
+
+	mpixelclk = mode->clock * 1000;
+
+	if (mpixelclk < INNO_HDMI_MIN_TMDS_CLOCK)
+		return MODE_CLOCK_LOW;
+
+	if (inno_hdmi_find_phy_config(hdmi, mpixelclk) < 0)
+		return MODE_CLOCK_HIGH;
+
+	if (hdmi->refclk) {
+		rounded_refclk = clk_round_rate(hdmi->refclk, mpixelclk);
+		if (rounded_refclk < 0)
+			return MODE_BAD;
+
+		/* Vesa DMT standard mentions +/- 0.5% max tolerance */
+		max_tolerance = mpixelclk / 200;
+		if (abs_diff((unsigned long)rounded_refclk, mpixelclk) > max_tolerance)
+			return MODE_NOCLOCK;
+	}
+
+	return MODE_OK;
+}
+
+static void inno_hdmi_encoder_enable(struct drm_encoder *encoder,
+				     struct drm_atomic_state *state)
+{
+	struct inno_hdmi *hdmi = encoder_to_inno_hdmi(encoder);
+	struct drm_connector_state *conn_state;
+	struct drm_crtc_state *crtc_state;
+
+	conn_state = drm_atomic_get_new_connector_state(state, &hdmi->connector);
+	if (WARN_ON(!conn_state))
+		return;
+
+	crtc_state = drm_atomic_get_new_crtc_state(state, conn_state->crtc);
+	if (WARN_ON(!crtc_state))
+		return;
+
+	inno_hdmi_setup(hdmi, &crtc_state->adjusted_mode);
+}
+
+static void inno_hdmi_encoder_disable(struct drm_encoder *encoder,
+				      struct drm_atomic_state *state)
 {
 	struct inno_hdmi *hdmi = encoder_to_inno_hdmi(encoder);
 
-	inno_hdmi_setup(hdmi, adj_mode);
-
-	/* Store the display mode for plugin/DPMS poweron events */
-	drm_mode_copy(&hdmi->previous_mode, adj_mode);
-}
-
-static void inno_hdmi_encoder_enable(struct drm_encoder *encoder)
-{
-	struct inno_hdmi *hdmi = encoder_to_inno_hdmi(encoder);
-
-	inno_hdmi_set_pwr_mode(hdmi, NORMAL);
-}
-
-static void inno_hdmi_encoder_disable(struct drm_encoder *encoder)
-{
-	struct inno_hdmi *hdmi = encoder_to_inno_hdmi(encoder);
-
-	inno_hdmi_set_pwr_mode(hdmi, LOWER_PWR);
-}
-
-static bool inno_hdmi_encoder_mode_fixup(struct drm_encoder *encoder,
-					 const struct drm_display_mode *mode,
-					 struct drm_display_mode *adj_mode)
-{
-	return true;
+	inno_hdmi_standby(hdmi);
 }
 
 static int
···
 			       struct drm_connector_state *conn_state)
 {
 	struct rockchip_crtc_state *s = to_rockchip_crtc_state(crtc_state);
+	struct inno_hdmi *hdmi = encoder_to_inno_hdmi(encoder);
+	struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+	u8 vic = drm_match_cea_mode(mode);
+	struct inno_hdmi_connector_state *inno_conn_state =
+					to_inno_hdmi_conn_state(conn_state);
 
 	s->output_mode = ROCKCHIP_OUT_MODE_P888;
 	s->output_type = DRM_MODE_CONNECTOR_HDMIA;
 
-	return 0;
+	if (vic == 6 || vic == 7 ||
+	    vic == 21 || vic == 22 ||
+	    vic == 2 || vic == 3 ||
+	    vic == 17 || vic == 18)
+		inno_conn_state->colorimetry = HDMI_COLORIMETRY_ITU_601;
+	else
+		inno_conn_state->colorimetry = HDMI_COLORIMETRY_ITU_709;
+
+	inno_conn_state->enc_out_format = HDMI_COLORSPACE_RGB;
+	inno_conn_state->rgb_limited_range =
+		drm_default_rgb_quant_range(mode) == HDMI_QUANTIZATION_RANGE_LIMITED;
+
+	return inno_hdmi_display_mode_valid(hdmi,
+				&crtc_state->adjusted_mode) == MODE_OK ? 0 : -EINVAL;
 }
 
 static struct drm_encoder_helper_funcs inno_hdmi_encoder_helper_funcs = {
-	.enable     = inno_hdmi_encoder_enable,
-	.disable    = inno_hdmi_encoder_disable,
-	.mode_fixup = inno_hdmi_encoder_mode_fixup,
-	.mode_set   = inno_hdmi_encoder_mode_set,
-	.atomic_check = inno_hdmi_encoder_atomic_check,
+	.atomic_check	= inno_hdmi_encoder_atomic_check,
+	.atomic_enable	= inno_hdmi_encoder_enable,
+	.atomic_disable	= inno_hdmi_encoder_disable,
 };
 
 static enum drm_connector_status
···
 
 	edid = drm_get_edid(connector, hdmi->ddc);
 	if (edid) {
-		hdmi->hdmi_data.sink_has_audio = drm_detect_monitor_audio(edid);
 		drm_connector_update_edid_property(connector, edid);
 		ret = drm_add_edid_modes(connector, edid);
 		kfree(edid);
···
 inno_hdmi_connector_mode_valid(struct drm_connector *connector,
 			       struct drm_display_mode *mode)
 {
-	return MODE_OK;
-}
+	struct inno_hdmi *hdmi = connector_to_inno_hdmi(connector);
 
-static int
-inno_hdmi_probe_single_connector_modes(struct drm_connector *connector,
-				       uint32_t maxX, uint32_t maxY)
-{
-	return drm_helper_probe_single_connector_modes(connector, 1920, 1080);
+	return inno_hdmi_display_mode_valid(hdmi, mode);
 }
 
 static void inno_hdmi_connector_destroy(struct drm_connector *connector)
···
 	drm_connector_cleanup(connector);
 }
 
+static void
+inno_hdmi_connector_destroy_state(struct drm_connector *connector,
+				  struct drm_connector_state *state)
+{
+	struct inno_hdmi_connector_state *inno_conn_state =
+						to_inno_hdmi_conn_state(state);
+
+	__drm_atomic_helper_connector_destroy_state(&inno_conn_state->base);
+	kfree(inno_conn_state);
+}
+
+static void inno_hdmi_connector_reset(struct drm_connector *connector)
+{
+	struct inno_hdmi_connector_state *inno_conn_state;
+
+	if (connector->state) {
+		inno_hdmi_connector_destroy_state(connector, connector->state);
+		connector->state = NULL;
+	}
+
+	inno_conn_state = kzalloc(sizeof(*inno_conn_state), GFP_KERNEL);
+	if (!inno_conn_state)
+		return;
+
+	__drm_atomic_helper_connector_reset(connector, &inno_conn_state->base);
+
+	inno_conn_state->colorimetry = HDMI_COLORIMETRY_ITU_709;
+	inno_conn_state->enc_out_format = HDMI_COLORSPACE_RGB;
+	inno_conn_state->rgb_limited_range = false;
+}
+
+static struct drm_connector_state *
+inno_hdmi_connector_duplicate_state(struct drm_connector *connector)
+{
+	struct inno_hdmi_connector_state *inno_conn_state;
+
+	if (WARN_ON(!connector->state))
+		return NULL;
+
+	inno_conn_state = kmemdup(to_inno_hdmi_conn_state(connector->state),
+				  sizeof(*inno_conn_state), GFP_KERNEL);
+
+	if (!inno_conn_state)
+		return NULL;
+
+	__drm_atomic_helper_connector_duplicate_state(connector,
+						      &inno_conn_state->base);
+
+	return &inno_conn_state->base;
+}
+
 static const struct drm_connector_funcs inno_hdmi_connector_funcs = {
-	.fill_modes = inno_hdmi_probe_single_connector_modes,
+	.fill_modes = drm_helper_probe_single_connector_modes,
 	.detect = inno_hdmi_connector_detect,
 	.destroy = inno_hdmi_connector_destroy,
-	.reset = drm_atomic_helper_connector_reset,
-	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
-	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+	.reset = inno_hdmi_connector_reset,
+	.atomic_duplicate_state = inno_hdmi_connector_duplicate_state,
+	.atomic_destroy_state = inno_hdmi_connector_destroy_state,
 };
 
 static struct drm_connector_helper_funcs inno_hdmi_connector_helper_funcs = {
···
 	struct platform_device *pdev = to_platform_device(dev);
 	struct drm_device *drm = data;
 	struct inno_hdmi *hdmi;
+	const struct inno_hdmi_variant *variant;
 	int irq;
 	int ret;
 
···
 		return -ENOMEM;
 
 	hdmi->dev = dev;
-	hdmi->drm_dev = drm;
+
+	variant = of_device_get_match_data(hdmi->dev);
+	if (!variant)
+		return -EINVAL;
+
+	hdmi->variant = variant;
 
 	hdmi->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(hdmi->regs))
···
 		return ret;
 	}
 
+	hdmi->refclk = devm_clk_get_optional(hdmi->dev, "ref");
+	if (IS_ERR(hdmi->refclk)) {
+		DRM_DEV_ERROR(hdmi->dev, "Unable to get HDMI reference clock\n");
+		ret = PTR_ERR(hdmi->refclk);
+		goto err_disable_pclk;
+	}
+
+	ret = clk_prepare_enable(hdmi->refclk);
+	if (ret) {
+		DRM_DEV_ERROR(hdmi->dev,
+			      "Cannot enable HDMI reference clock: %d\n", ret);
+		goto err_disable_pclk;
+	}
+
 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0) {
 		ret = irq;
···
 	}
 
 	/*
-	 * When IP controller haven't configured to an accurate video
-	 * timing, then the TMDS clock source would be switched to
-	 * PCLK_HDMI, so we need to init the TMDS rate to PCLK rate,
-	 * and reconfigure the DDC clock.
+	 * When the controller isn't configured to an accurate
+	 * video timing and there is no reference clock available,
+	 * then the TMDS clock source would be switched to PCLK_HDMI,
+	 * so we need to init the TMDS rate to PCLK rate, and
+	 * reconfigure the DDC clock.
 	 */
-	hdmi->tmds_rate = clk_get_rate(hdmi->pclk);
-	inno_hdmi_i2c_init(hdmi);
+	if (hdmi->refclk)
+		inno_hdmi_i2c_init(hdmi, clk_get_rate(hdmi->refclk));
+	else
+		inno_hdmi_i2c_init(hdmi, clk_get_rate(hdmi->pclk));
 
 	ret = inno_hdmi_register(drm, hdmi);
 	if (ret)
···
 err_put_adapter:
 	i2c_put_adapter(hdmi->ddc);
 err_disable_clk:
+	clk_disable_unprepare(hdmi->refclk);
+err_disable_pclk:
 	clk_disable_unprepare(hdmi->pclk);
 	return ret;
 }
···
 	hdmi->encoder.encoder.funcs->destroy(&hdmi->encoder.encoder);
 
 	i2c_put_adapter(hdmi->ddc);
+	clk_disable_unprepare(hdmi->refclk);
 	clk_disable_unprepare(hdmi->pclk);
 }
···
 	component_del(&pdev->dev, &inno_hdmi_ops);
 }
 
+static const struct inno_hdmi_variant rk3036_inno_hdmi_variant = {
+	.phy_configs = rk3036_hdmi_phy_configs,
+	.default_phy_config = &rk3036_hdmi_phy_configs[1],
+};
+
+static const struct inno_hdmi_variant rk3128_inno_hdmi_variant = {
+	.phy_configs = rk3128_hdmi_phy_configs,
+	.default_phy_config = &rk3128_hdmi_phy_configs[1],
+};
+
 static const struct of_device_id inno_hdmi_dt_ids[] = {
 	{ .compatible = "rockchip,rk3036-inno-hdmi",
+	  .data = &rk3036_inno_hdmi_variant,
+	},
+	{ .compatible = "rockchip,rk3128-inno-hdmi",
+	  .data = &rk3128_inno_hdmi_variant,
 	},
 	{},
 };
-5
drivers/gpu/drm/rockchip/inno_hdmi.h

···
 
 #define DDC_SEGMENT_ADDR		0x30
 
-enum PWR_MODE {
-	NORMAL,
-	LOWER_PWR,
-};
-
 #define HDMI_SCL_RATE			(100*1000)
 #define DDC_BUS_FREQ_L			0x4b
 #define DDC_BUS_FREQ_H			0x4c
+1 -2
drivers/gpu/drm/rockchip/rockchip_lvds.c

···
 		ret = -EINVAL;
 		goto err_put_port;
 	} else if (ret) {
-		DRM_DEV_ERROR(dev, "failed to find panel and bridge node\n");
-		ret = -EPROBE_DEFER;
+		dev_err_probe(dev, ret, "failed to find panel and bridge node\n");
 		goto err_put_port;
 	}
 	if (lvds->panel)
+12 -1
drivers/gpu/drm/rockchip/rockchip_vop_reg.c

···
 		  .type = DRM_PLANE_TYPE_CURSOR },
 };
 
+static const struct vop_output rk3126_output = {
+	.pin_pol = VOP_REG(RK3036_DSP_CTRL0, 0xf, 4),
+	.hdmi_pin_pol = VOP_REG(RK3126_INT_SCALER, 0x7, 4),
+	.hdmi_en = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 22),
+	.hdmi_dclk_pol = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 23),
+	.rgb_en = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 24),
+	.rgb_dclk_pol = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 25),
+	.mipi_en = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 28),
+	.mipi_dclk_pol = VOP_REG(RK3036_AXI_BUS_CTRL, 0x1, 29),
+};
+
 static const struct vop_data rk3126_vop = {
 	.intr = &rk3036_intr,
 	.common = &rk3036_common,
 	.modeset = &rk3036_modeset,
-	.output = &rk3036_output,
+	.output = &rk3126_output,
 	.win = rk3126_vop_win_data,
 	.win_size = ARRAY_SIZE(rk3126_vop_win_data),
 	.max_output = { 1920, 1080 },
+3
drivers/gpu/drm/rockchip/rockchip_vop_reg.h

···
 /* rk3036 register definition end */
 
 /* rk3126 register definition */
+#define RK3126_INT_SCALER		0x0c
+
+/* win1 register */
 #define RK3126_WIN1_MST			0x4c
 #define RK3126_WIN1_DSP_INFO		0x50
 #define RK3126_WIN1_DSP_ST		0x54
+6 -5
drivers/gpu/drm/scheduler/sched_main.c
··· 1248 1248 long timeout, struct workqueue_struct *timeout_wq, 1249 1249 atomic_t *score, const char *name, struct device *dev) 1250 1250 { 1251 - int i, ret; 1251 + int i; 1252 1252 1253 1253 sched->ops = ops; 1254 1254 sched->credit_limit = credit_limit; ··· 1284 1284 1285 1285 sched->own_submit_wq = true; 1286 1286 } 1287 - ret = -ENOMEM; 1287 + 1288 1288 sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq), 1289 1289 GFP_KERNEL | __GFP_ZERO); 1290 1290 if (!sched->sched_rq) 1291 - goto Out_free; 1291 + goto Out_check_own; 1292 1292 sched->num_rqs = num_rqs; 1293 1293 for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { 1294 1294 sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); ··· 1313 1313 Out_unroll: 1314 1314 for (--i ; i >= DRM_SCHED_PRIORITY_KERNEL; i--) 1315 1315 kfree(sched->sched_rq[i]); 1316 - Out_free: 1316 + 1317 1317 kfree(sched->sched_rq); 1318 1318 sched->sched_rq = NULL; 1319 + Out_check_own: 1319 1320 if (sched->own_submit_wq) 1320 1321 destroy_workqueue(sched->submit_wq); 1321 1322 drm_err(sched, "%s: Failed to setup GPU scheduler--out of memory\n", __func__); 1322 - return ret; 1323 + return -ENOMEM; 1323 1324 } 1324 1325 EXPORT_SYMBOL(drm_sched_init); 1325 1326
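The drm_sched_init() fix above is about error-unwind ordering: each failure label must undo exactly what was acquired before the failing step (here, destroying the submit workqueue without touching the never-allocated `sched_rq` array). A self-contained sketch of the goto-unwind idiom, with booleans standing in for the two resources:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the workqueue and the sched_rq array. */
static bool wq_alive, rq_alive;

/*
 * Ordered init with goto unwind: a failure at the rq-allocation step
 * jumps past the rq teardown straight to the workqueue teardown, so
 * only resources that were actually acquired get released.
 */
static int fake_sched_init(bool rq_alloc_fails)
{
	wq_alive = true;		/* alloc_ordered_workqueue() */

	if (rq_alloc_fails)
		goto out_destroy_wq;	/* kmalloc_array() failed */
	rq_alive = true;

	return 0;

out_destroy_wq:
	wq_alive = false;		/* destroy_workqueue() */
	return -12;			/* -ENOMEM */
}
```

The same idiom is what the tegra dsi.c, hdmi.c, and rgb.c hunks below restore in their probe paths.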
+7
drivers/gpu/drm/solomon/ssd130x-spi.c
··· 142 142 .compatible = "solomon,ssd1327", 143 143 .data = &ssd130x_variants[SSD1327_ID], 144 144 }, 145 + /* ssd133x family */ 146 + { 147 + .compatible = "solomon,ssd1331", 148 + .data = &ssd130x_variants[SSD1331_ID], 149 + }, 145 150 { /* sentinel */ } 146 151 }; 147 152 MODULE_DEVICE_TABLE(of, ssd130x_of_match); ··· 171 166 { "ssd1322", SSD1322_ID }, 172 167 { "ssd1325", SSD1325_ID }, 173 168 { "ssd1327", SSD1327_ID }, 169 + /* ssd133x family */ 170 + { "ssd1331", SSD1331_ID }, 174 171 { /* sentinel */ } 175 172 }; 176 173 MODULE_DEVICE_TABLE(spi, ssd130x_spi_table);
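The new `"solomon,ssd1331"` entries work through ordinary OF/SPI table matching: probe walks the table until the compatible string matches and uses the associated variant data. A toy version of that lookup (the variant indices here are arbitrary sketch values, not the driver's enum):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical match-table entry: compatible string plus variant index. */
struct fake_of_id {
	const char *compatible;
	int variant;
};

static const struct fake_of_id fake_ssd130x_of_match[] = {
	{ "solomon,ssd1327", 6 },
	{ "solomon,ssd1331", 7 },
	{ NULL, -1 },	/* sentinel */
};

/* Walk the table until the sentinel; return the variant or -1 on miss. */
static int fake_of_match(const char *compatible)
{
	const struct fake_of_id *id;

	for (id = fake_ssd130x_of_match; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->variant;
	return -1;
}
```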
+370
drivers/gpu/drm/solomon/ssd130x.c
··· 119 119 #define SSD130X_SET_VCOMH_VOLTAGE 0xbe 120 120 #define SSD132X_SET_FUNCTION_SELECT_B 0xd5 121 121 122 + /* ssd133x commands */ 123 + #define SSD133X_SET_COL_RANGE 0x15 124 + #define SSD133X_SET_ROW_RANGE 0x75 125 + #define SSD133X_CONTRAST_A 0x81 126 + #define SSD133X_CONTRAST_B 0x82 127 + #define SSD133X_CONTRAST_C 0x83 128 + #define SSD133X_SET_MASTER_CURRENT 0x87 129 + #define SSD132X_SET_PRECHARGE_A 0x8a 130 + #define SSD132X_SET_PRECHARGE_B 0x8b 131 + #define SSD132X_SET_PRECHARGE_C 0x8c 132 + #define SSD133X_SET_DISPLAY_START 0xa1 133 + #define SSD133X_SET_DISPLAY_OFFSET 0xa2 134 + #define SSD133X_SET_DISPLAY_NORMAL 0xa4 135 + #define SSD133X_SET_MASTER_CONFIG 0xad 136 + #define SSD133X_POWER_SAVE_MODE 0xb0 137 + #define SSD133X_PHASES_PERIOD 0xb1 138 + #define SSD133X_SET_CLOCK_FREQ 0xb3 139 + #define SSD133X_SET_PRECHARGE_VOLTAGE 0xbb 140 + #define SSD133X_SET_VCOMH_VOLTAGE 0xbe 141 + 122 142 #define MAX_CONTRAST 255 123 143 124 144 const struct ssd130x_deviceinfo ssd130x_variants[] = { ··· 200 180 .default_width = 128, 201 181 .default_height = 128, 202 182 .family_id = SSD132X_FAMILY, 183 + }, 184 + /* ssd133x family */ 185 + [SSD1331_ID] = { 186 + .default_width = 96, 187 + .default_height = 64, 188 + .family_id = SSD133X_FAMILY, 203 189 } 204 190 }; 205 191 EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); ··· 615 589 return 0; 616 590 } 617 591 592 + static int ssd133x_init(struct ssd130x_device *ssd130x) 593 + { 594 + int ret; 595 + 596 + /* Set color A contrast */ 597 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_CONTRAST_A, 0x91); 598 + if (ret < 0) 599 + return ret; 600 + 601 + /* Set color B contrast */ 602 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_CONTRAST_B, 0x50); 603 + if (ret < 0) 604 + return ret; 605 + 606 + /* Set color C contrast */ 607 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_CONTRAST_C, 0x7d); 608 + if (ret < 0) 609 + return ret; 610 + 611 + /* Set master current */ 612 + ret = ssd130x_write_cmd(ssd130x, 2, 
SSD133X_SET_MASTER_CURRENT, 0x06); 613 + if (ret < 0) 614 + return ret; 615 + 616 + /* Set column start and end */ 617 + ret = ssd130x_write_cmd(ssd130x, 3, SSD133X_SET_COL_RANGE, 0x00, ssd130x->width - 1); 618 + if (ret < 0) 619 + return ret; 620 + 621 + /* Set row start and end */ 622 + ret = ssd130x_write_cmd(ssd130x, 3, SSD133X_SET_ROW_RANGE, 0x00, ssd130x->height - 1); 623 + if (ret < 0) 624 + return ret; 625 + 626 + /* 627 + * Horizontal Address Increment 628 + * Normal order SA,SB,SC (e.g. RGB) 629 + * COM Split Odd Even 630 + * 256 color format 631 + */ 632 + ret = ssd130x_write_cmd(ssd130x, 2, SSD13XX_SET_SEG_REMAP, 0x20); 633 + if (ret < 0) 634 + return ret; 635 + 636 + /* Set display start and offset */ 637 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_DISPLAY_START, 0x00); 638 + if (ret < 0) 639 + return ret; 640 + 641 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_DISPLAY_OFFSET, 0x00); 642 + if (ret < 0) 643 + return ret; 644 + 645 + /* Set display mode normal */ 646 + ret = ssd130x_write_cmd(ssd130x, 1, SSD133X_SET_DISPLAY_NORMAL); 647 + if (ret < 0) 648 + return ret; 649 + 650 + /* Set multiplex ratio value */ 651 + ret = ssd130x_write_cmd(ssd130x, 2, SSD13XX_SET_MULTIPLEX_RATIO, ssd130x->height - 1); 652 + if (ret < 0) 653 + return ret; 654 + 655 + /* Set master configuration */ 656 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_MASTER_CONFIG, 0x8e); 657 + if (ret < 0) 658 + return ret; 659 + 660 + /* Set power mode */ 661 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_POWER_SAVE_MODE, 0x0b); 662 + if (ret < 0) 663 + return ret; 664 + 665 + /* Set Phase 1 and 2 period */ 666 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_PHASES_PERIOD, 0x31); 667 + if (ret < 0) 668 + return ret; 669 + 670 + /* Set clock divider */ 671 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_CLOCK_FREQ, 0xf0); 672 + if (ret < 0) 673 + return ret; 674 + 675 + /* Set pre-charge A */ 676 + ret = ssd130x_write_cmd(ssd130x, 2, SSD132X_SET_PRECHARGE_A, 0x64); 677 + if 
(ret < 0) 678 + return ret; 679 + 680 + /* Set pre-charge B */ 681 + ret = ssd130x_write_cmd(ssd130x, 2, SSD132X_SET_PRECHARGE_B, 0x78); 682 + if (ret < 0) 683 + return ret; 684 + 685 + /* Set pre-charge C */ 686 + ret = ssd130x_write_cmd(ssd130x, 2, SSD132X_SET_PRECHARGE_C, 0x64); 687 + if (ret < 0) 688 + return ret; 689 + 690 + /* Set pre-charge level */ 691 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_PRECHARGE_VOLTAGE, 0x3a); 692 + if (ret < 0) 693 + return ret; 694 + 695 + /* Set VCOMH voltage */ 696 + ret = ssd130x_write_cmd(ssd130x, 2, SSD133X_SET_VCOMH_VOLTAGE, 0x3e); 697 + if (ret < 0) 698 + return ret; 699 + 700 + return 0; 701 + } 702 + 618 703 static int ssd130x_update_rect(struct ssd130x_device *ssd130x, 619 704 struct drm_rect *rect, u8 *buf, 620 705 u8 *data_array) ··· 890 753 return ret; 891 754 } 892 755 756 + static int ssd133x_update_rect(struct ssd130x_device *ssd130x, 757 + struct drm_rect *rect, u8 *data_array, 758 + unsigned int pitch) 759 + { 760 + unsigned int x = rect->x1; 761 + unsigned int y = rect->y1; 762 + unsigned int columns = drm_rect_width(rect); 763 + unsigned int rows = drm_rect_height(rect); 764 + int ret; 765 + 766 + /* 767 + * The screen is divided in Segment and Common outputs, where 768 + * COM0 to COM[N - 1] are the rows and SEG0 to SEG[M - 1] are 769 + * the columns. 770 + * 771 + * Each Segment has a 8-bit pixel and each Common output has a 772 + * row of pixels. When using the (default) horizontal address 773 + * increment mode, each byte of data sent to the controller has 774 + * a Segment (e.g: SEG0). 775 + * 776 + * When using the 256 color depth format, each pixel contains 3 777 + * sub-pixels for color A, B and C. These have 3 bit, 3 bit and 778 + * 2 bits respectively. 
779 + */ 780 + 781 + /* Set column start and end */ 782 + ret = ssd130x_write_cmd(ssd130x, 3, SSD133X_SET_COL_RANGE, x, columns - 1); 783 + if (ret < 0) 784 + return ret; 785 + 786 + /* Set row start and end */ 787 + ret = ssd130x_write_cmd(ssd130x, 3, SSD133X_SET_ROW_RANGE, y, rows - 1); 788 + if (ret < 0) 789 + return ret; 790 + 791 + /* Write out update in one go since horizontal addressing mode is used */ 792 + ret = ssd130x_write_data(ssd130x, data_array, pitch * rows); 793 + 794 + return ret; 795 + } 796 + 893 797 static void ssd130x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array) 894 798 { 895 799 unsigned int pages = DIV_ROUND_UP(ssd130x->height, SSD130X_PAGE_HEIGHT); ··· 981 803 982 804 /* Write out update in one go since horizontal addressing mode is used */ 983 805 ssd130x_write_data(ssd130x, data_array, columns * height); 806 + } 807 + 808 + static void ssd133x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array) 809 + { 810 + const struct drm_format_info *fi = drm_format_info(DRM_FORMAT_RGB332); 811 + unsigned int pitch; 812 + 813 + if (!fi) 814 + return; 815 + 816 + pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); 817 + 818 + memset(data_array, 0, pitch * ssd130x->height); 819 + 820 + /* Write out update in one go since horizontal addressing mode is used */ 821 + ssd130x_write_data(ssd130x, data_array, pitch * ssd130x->height); 984 822 } 985 823 986 824 static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, ··· 1056 862 drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE); 1057 863 1058 864 ssd132x_update_rect(ssd130x, rect, buf, data_array); 865 + 866 + return ret; 867 + } 868 + 869 + static int ssd133x_fb_blit_rect(struct drm_framebuffer *fb, 870 + const struct iosys_map *vmap, 871 + struct drm_rect *rect, u8 *data_array, 872 + struct drm_format_conv_state *fmtcnv_state) 873 + { 874 + struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev); 875 + const struct drm_format_info *fi = drm_format_info(DRM_FORMAT_RGB332); 876 
+ unsigned int dst_pitch; 877 + struct iosys_map dst; 878 + int ret = 0; 879 + 880 + if (!fi) 881 + return -EINVAL; 882 + 883 + dst_pitch = drm_format_info_min_pitch(fi, 0, drm_rect_width(rect)); 884 + 885 + ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE); 886 + if (ret) 887 + return ret; 888 + 889 + iosys_map_set_vaddr(&dst, data_array); 890 + drm_fb_xrgb8888_to_rgb332(&dst, &dst_pitch, vmap, fb, rect, fmtcnv_state); 891 + 892 + drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE); 893 + 894 + ssd133x_update_rect(ssd130x, rect, data_array, dst_pitch); 1059 895 1060 896 return ret; 1061 897 } ··· 1188 964 return 0; 1189 965 } 1190 966 967 + static int ssd133x_primary_plane_atomic_check(struct drm_plane *plane, 968 + struct drm_atomic_state *state) 969 + { 970 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 971 + struct drm_crtc *crtc = plane_state->crtc; 972 + struct drm_crtc_state *crtc_state = NULL; 973 + int ret; 974 + 975 + if (crtc) 976 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 977 + 978 + ret = drm_atomic_helper_check_plane_state(plane_state, crtc_state, 979 + DRM_PLANE_NO_SCALING, 980 + DRM_PLANE_NO_SCALING, 981 + false, false); 982 + if (ret) 983 + return ret; 984 + else if (!plane_state->visible) 985 + return 0; 986 + 987 + return 0; 988 + } 989 + 1191 990 static void ssd130x_primary_plane_atomic_update(struct drm_plane *plane, 1192 991 struct drm_atomic_state *state) 1193 992 { ··· 1281 1034 drm_dev_exit(idx); 1282 1035 } 1283 1036 1037 + static void ssd133x_primary_plane_atomic_update(struct drm_plane *plane, 1038 + struct drm_atomic_state *state) 1039 + { 1040 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 1041 + struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 1042 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 1043 + struct drm_crtc_state *crtc_state = 
drm_atomic_get_new_crtc_state(state, plane_state->crtc); 1044 + struct ssd130x_crtc_state *ssd130x_crtc_state = to_ssd130x_crtc_state(crtc_state); 1045 + struct drm_framebuffer *fb = plane_state->fb; 1046 + struct drm_atomic_helper_damage_iter iter; 1047 + struct drm_device *drm = plane->dev; 1048 + struct drm_rect dst_clip; 1049 + struct drm_rect damage; 1050 + int idx; 1051 + 1052 + if (!drm_dev_enter(drm, &idx)) 1053 + return; 1054 + 1055 + drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state); 1056 + drm_atomic_for_each_plane_damage(&iter, &damage) { 1057 + dst_clip = plane_state->dst; 1058 + 1059 + if (!drm_rect_intersect(&dst_clip, &damage)) 1060 + continue; 1061 + 1062 + ssd133x_fb_blit_rect(fb, &shadow_plane_state->data[0], &dst_clip, 1063 + ssd130x_crtc_state->data_array, 1064 + &shadow_plane_state->fmtcnv_state); 1065 + } 1066 + 1067 + drm_dev_exit(idx); 1068 + } 1069 + 1284 1070 static void ssd130x_primary_plane_atomic_disable(struct drm_plane *plane, 1285 1071 struct drm_atomic_state *state) 1286 1072 { ··· 1358 1078 return; 1359 1079 1360 1080 ssd132x_clear_screen(ssd130x, ssd130x_crtc_state->data_array); 1081 + 1082 + drm_dev_exit(idx); 1083 + } 1084 + 1085 + static void ssd133x_primary_plane_atomic_disable(struct drm_plane *plane, 1086 + struct drm_atomic_state *state) 1087 + { 1088 + struct drm_device *drm = plane->dev; 1089 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 1090 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 1091 + struct drm_crtc_state *crtc_state; 1092 + struct ssd130x_crtc_state *ssd130x_crtc_state; 1093 + int idx; 1094 + 1095 + if (!plane_state->crtc) 1096 + return; 1097 + 1098 + crtc_state = drm_atomic_get_new_crtc_state(state, plane_state->crtc); 1099 + ssd130x_crtc_state = to_ssd130x_crtc_state(crtc_state); 1100 + 1101 + if (!drm_dev_enter(drm, &idx)) 1102 + return; 1103 + 1104 + ssd133x_clear_screen(ssd130x, ssd130x_crtc_state->data_array); 1361 1105 1362 1106 
drm_dev_exit(idx); 1363 1107 } ··· 1448 1144 .atomic_check = ssd132x_primary_plane_atomic_check, 1449 1145 .atomic_update = ssd132x_primary_plane_atomic_update, 1450 1146 .atomic_disable = ssd132x_primary_plane_atomic_disable, 1147 + }, 1148 + [SSD133X_FAMILY] = { 1149 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 1150 + .atomic_check = ssd133x_primary_plane_atomic_check, 1151 + .atomic_update = ssd133x_primary_plane_atomic_update, 1152 + .atomic_disable = ssd133x_primary_plane_atomic_disable, 1451 1153 } 1452 1154 }; 1453 1155 ··· 1524 1214 return 0; 1525 1215 } 1526 1216 1217 + static int ssd133x_crtc_atomic_check(struct drm_crtc *crtc, 1218 + struct drm_atomic_state *state) 1219 + { 1220 + struct drm_device *drm = crtc->dev; 1221 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 1222 + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 1223 + struct ssd130x_crtc_state *ssd130x_state = to_ssd130x_crtc_state(crtc_state); 1224 + const struct drm_format_info *fi = drm_format_info(DRM_FORMAT_RGB332); 1225 + unsigned int pitch; 1226 + int ret; 1227 + 1228 + if (!fi) 1229 + return -EINVAL; 1230 + 1231 + ret = drm_crtc_helper_atomic_check(crtc, state); 1232 + if (ret) 1233 + return ret; 1234 + 1235 + pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); 1236 + 1237 + ssd130x_state->data_array = kmalloc(pitch * ssd130x->height, GFP_KERNEL); 1238 + if (!ssd130x_state->data_array) 1239 + return -ENOMEM; 1240 + 1241 + return 0; 1242 + } 1243 + 1527 1244 /* Called during init to allocate the CRTC's atomic state. 
*/ 1528 1245 static void ssd130x_crtc_reset(struct drm_crtc *crtc) 1529 1246 { ··· 1611 1274 [SSD132X_FAMILY] = { 1612 1275 .mode_valid = ssd130x_crtc_mode_valid, 1613 1276 .atomic_check = ssd132x_crtc_atomic_check, 1277 + }, 1278 + [SSD133X_FAMILY] = { 1279 + .mode_valid = ssd130x_crtc_mode_valid, 1280 + .atomic_check = ssd133x_crtc_atomic_check, 1614 1281 }, 1615 1282 }; 1616 1283 ··· 1678 1337 ssd130x_power_off(ssd130x); 1679 1338 } 1680 1339 1340 + static void ssd133x_encoder_atomic_enable(struct drm_encoder *encoder, 1341 + struct drm_atomic_state *state) 1342 + { 1343 + struct drm_device *drm = encoder->dev; 1344 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 1345 + int ret; 1346 + 1347 + ret = ssd130x_power_on(ssd130x); 1348 + if (ret) 1349 + return; 1350 + 1351 + ret = ssd133x_init(ssd130x); 1352 + if (ret) 1353 + goto power_off; 1354 + 1355 + ssd130x_write_cmd(ssd130x, 1, SSD13XX_DISPLAY_ON); 1356 + 1357 + backlight_enable(ssd130x->bl_dev); 1358 + 1359 + return; 1360 + 1361 + power_off: 1362 + ssd130x_power_off(ssd130x); 1363 + } 1364 + 1681 1365 static void ssd130x_encoder_atomic_disable(struct drm_encoder *encoder, 1682 1366 struct drm_atomic_state *state) 1683 1367 { ··· 1723 1357 }, 1724 1358 [SSD132X_FAMILY] = { 1725 1359 .atomic_enable = ssd132x_encoder_atomic_enable, 1360 + .atomic_disable = ssd130x_encoder_atomic_disable, 1361 + }, 1362 + [SSD133X_FAMILY] = { 1363 + .atomic_enable = ssd133x_encoder_atomic_enable, 1726 1364 .atomic_disable = ssd130x_encoder_atomic_disable, 1727 1365 } 1728 1366 };
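The SSD133x blit path above converts XRGB8888 to DRM_FORMAT_RGB332: one byte per pixel with 3 bits red, 3 bits green, 2 bits blue (matching the 3/3/2 sub-pixel comment in `ssd133x_update_rect()`), which also makes the minimum pitch simply the width in bytes. A runnable sketch of that packing:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack a 0x00RRGGBB pixel into RGB332: red in bits 7-5, green in
 * bits 4-2, blue in bits 1-0. Each channel keeps only its top bits.
 */
static uint8_t xrgb8888_to_rgb332(uint32_t xrgb)
{
	uint8_t r = (xrgb >> 16) & 0xff;
	uint8_t g = (xrgb >> 8) & 0xff;
	uint8_t b = xrgb & 0xff;

	return (r & 0xe0) | ((g & 0xe0) >> 3) | (b >> 6);
}

/* One byte per pixel, so the minimum pitch is just the width. */
static unsigned int rgb332_min_pitch(unsigned int width)
{
	return width;
}
```

The 96 below matches the SSD1331's default width from `ssd130x_variants[]`.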
+4 -1
drivers/gpu/drm/solomon/ssd130x.h
··· 25 25 26 26 enum ssd130x_family_ids { 27 27 SSD130X_FAMILY, 28 - SSD132X_FAMILY 28 + SSD132X_FAMILY, 29 + SSD133X_FAMILY 29 30 }; 30 31 31 32 enum ssd130x_variants { ··· 40 39 SSD1322_ID, 41 40 SSD1325_ID, 42 41 SSD1327_ID, 42 + /* ssd133x family */ 43 + SSD1331_ID, 43 44 NR_SSD130X_VARIANTS 44 45 }; 45 46
+10 -4
drivers/gpu/drm/tegra/dpaux.c
··· 522 522 if (err < 0) { 523 523 dev_err(dpaux->dev, "failed to request IRQ#%u: %d\n", 524 524 dpaux->irq, err); 525 - return err; 525 + goto err_pm_disable; 526 526 } 527 527 528 528 disable_irq(dpaux->irq); ··· 542 542 */ 543 543 err = tegra_dpaux_pad_config(dpaux, DPAUX_PADCTL_FUNC_I2C); 544 544 if (err < 0) 545 - return err; 545 + goto err_pm_disable; 546 546 547 547 #ifdef CONFIG_GENERIC_PINCONF 548 548 dpaux->desc.name = dev_name(&pdev->dev); ··· 555 555 dpaux->pinctrl = devm_pinctrl_register(&pdev->dev, &dpaux->desc, dpaux); 556 556 if (IS_ERR(dpaux->pinctrl)) { 557 557 dev_err(&pdev->dev, "failed to register pincontrol\n"); 558 - return PTR_ERR(dpaux->pinctrl); 558 + err = PTR_ERR(dpaux->pinctrl); 559 + goto err_pm_disable; 559 560 } 560 561 #endif 561 562 /* enable and clear all interrupts */ ··· 572 571 err = devm_of_dp_aux_populate_ep_devices(&dpaux->aux); 573 572 if (err < 0) { 574 573 dev_err(dpaux->dev, "failed to populate AUX bus: %d\n", err); 575 - return err; 574 + goto err_pm_disable; 576 575 } 577 576 578 577 return 0; 578 + 579 + err_pm_disable: 580 + pm_runtime_put_sync(&pdev->dev); 581 + pm_runtime_disable(&pdev->dev); 582 + return err; 579 583 } 580 584 581 585 static void tegra_dpaux_remove(struct platform_device *pdev)
+1 -1
drivers/gpu/drm/tegra/drm.h
··· 13 13 14 14 #include <drm/drm_atomic.h> 15 15 #include <drm/drm_bridge.h> 16 - #include <drm/drm_edid.h> 17 16 #include <drm/drm_encoder.h> 18 17 #include <drm/drm_fixed.h> 19 18 #include <drm/drm_probe_helper.h> ··· 25 26 /* XXX move to include/uapi/drm/drm_fourcc.h? */ 26 27 #define DRM_FORMAT_MOD_NVIDIA_SECTOR_LAYOUT BIT_ULL(22) 27 28 29 + struct edid; 28 30 struct reset_control; 29 31 30 32 struct tegra_drm {
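The tegra drm.h change swaps the `<drm/drm_edid.h>` include for a `struct edid;` forward declaration, which is sufficient because the header only ever uses the type through a pointer; the full include can then live in the few .c files that dereference it. A minimal illustration with a hypothetical consumer struct:

```c
#include <assert.h>
#include <stddef.h>

struct edid;	/* incomplete type: pointers to it are still usable */

/* Hypothetical consumer: stores the pointer, never dereferences it. */
struct fake_output {
	const struct edid *edid;	/* no full definition needed */
};

static int fake_output_has_edid(const struct fake_output *out)
{
	return out->edid != NULL;
}
```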
+39 -20
drivers/gpu/drm/tegra/dsi.c
··· 1544 1544 np = of_parse_phandle(dsi->dev->of_node, "nvidia,ganged-mode", 0); 1545 1545 if (np) { 1546 1546 struct platform_device *gangster = of_find_device_by_node(np); 1547 + of_node_put(np); 1548 + if (!gangster) 1549 + return -EPROBE_DEFER; 1547 1550 1548 1551 dsi->slave = platform_get_drvdata(gangster); 1549 - of_node_put(np); 1550 1552 1551 1553 if (!dsi->slave) { 1552 1554 put_device(&gangster->dev); ··· 1596 1594 1597 1595 if (!pdev->dev.pm_domain) { 1598 1596 dsi->rst = devm_reset_control_get(&pdev->dev, "dsi"); 1599 - if (IS_ERR(dsi->rst)) 1600 - return PTR_ERR(dsi->rst); 1597 + if (IS_ERR(dsi->rst)) { 1598 + err = PTR_ERR(dsi->rst); 1599 + goto remove; 1600 + } 1601 1601 } 1602 1602 1603 1603 dsi->clk = devm_clk_get(&pdev->dev, NULL); 1604 - if (IS_ERR(dsi->clk)) 1605 - return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk), 1606 - "cannot get DSI clock\n"); 1604 + if (IS_ERR(dsi->clk)) { 1605 + err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk), 1606 + "cannot get DSI clock\n"); 1607 + goto remove; 1608 + } 1607 1609 1608 1610 dsi->clk_lp = devm_clk_get(&pdev->dev, "lp"); 1609 - if (IS_ERR(dsi->clk_lp)) 1610 - return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_lp), 1611 - "cannot get low-power clock\n"); 1611 + if (IS_ERR(dsi->clk_lp)) { 1612 + err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_lp), 1613 + "cannot get low-power clock\n"); 1614 + goto remove; 1615 + } 1612 1616 1613 1617 dsi->clk_parent = devm_clk_get(&pdev->dev, "parent"); 1614 - if (IS_ERR(dsi->clk_parent)) 1615 - return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_parent), 1616 - "cannot get parent clock\n"); 1618 + if (IS_ERR(dsi->clk_parent)) { 1619 + err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_parent), 1620 + "cannot get parent clock\n"); 1621 + goto remove; 1622 + } 1617 1623 1618 1624 dsi->vdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi"); 1619 - if (IS_ERR(dsi->vdd)) 1620 - return dev_err_probe(&pdev->dev, PTR_ERR(dsi->vdd), 1621 - "cannot get VDD supply\n"); 1625 + if 
(IS_ERR(dsi->vdd)) { 1626 + err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->vdd), 1627 + "cannot get VDD supply\n"); 1628 + goto remove; 1629 + } 1622 1630 1623 1631 err = tegra_dsi_setup_clocks(dsi); 1624 1632 if (err < 0) { 1625 1633 dev_err(&pdev->dev, "cannot setup clocks\n"); 1626 - return err; 1634 + goto remove; 1627 1635 } 1628 1636 1629 1637 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1630 1638 dsi->regs = devm_ioremap_resource(&pdev->dev, regs); 1631 - if (IS_ERR(dsi->regs)) 1632 - return PTR_ERR(dsi->regs); 1639 + if (IS_ERR(dsi->regs)) { 1640 + err = PTR_ERR(dsi->regs); 1641 + goto remove; 1642 + } 1633 1643 1634 1644 dsi->mipi = tegra_mipi_request(&pdev->dev, pdev->dev.of_node); 1635 - if (IS_ERR(dsi->mipi)) 1636 - return PTR_ERR(dsi->mipi); 1645 + if (IS_ERR(dsi->mipi)) { 1646 + err = PTR_ERR(dsi->mipi); 1647 + goto remove; 1648 + } 1637 1649 1638 1650 dsi->host.ops = &tegra_dsi_host_ops; 1639 1651 dsi->host.dev = &pdev->dev; ··· 1675 1659 return 0; 1676 1660 1677 1661 unregister: 1662 + pm_runtime_disable(&pdev->dev); 1678 1663 mipi_dsi_host_unregister(&dsi->host); 1679 1664 mipi_free: 1680 1665 tegra_mipi_free(dsi->mipi); 1666 + remove: 1667 + tegra_output_remove(&dsi->output); 1681 1668 return err; 1682 1669 } 1683 1670
+13 -7
drivers/gpu/drm/tegra/hdmi.c
··· 1856 1856 return err; 1857 1857 1858 1858 hdmi->regs = devm_platform_ioremap_resource(pdev, 0); 1859 - if (IS_ERR(hdmi->regs)) 1860 - return PTR_ERR(hdmi->regs); 1859 + if (IS_ERR(hdmi->regs)) { 1860 + err = PTR_ERR(hdmi->regs); 1861 + goto remove; 1862 + } 1861 1863 1862 1864 err = platform_get_irq(pdev, 0); 1863 1865 if (err < 0) 1864 - return err; 1866 + goto remove; 1865 1867 1866 1868 hdmi->irq = err; 1867 1869 ··· 1872 1870 if (err < 0) { 1873 1871 dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", 1874 1872 hdmi->irq, err); 1875 - return err; 1873 + goto remove; 1876 1874 } 1877 1875 1878 1876 platform_set_drvdata(pdev, hdmi); 1879 1877 1880 1878 err = devm_pm_runtime_enable(&pdev->dev); 1881 1879 if (err) 1882 - return err; 1880 + goto remove; 1883 1881 1884 1882 err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev); 1885 1883 if (err) 1886 - return err; 1884 + goto remove; 1887 1885 1888 1886 INIT_LIST_HEAD(&hdmi->client.list); 1889 1887 hdmi->client.ops = &hdmi_client_ops; ··· 1893 1891 if (err < 0) { 1894 1892 dev_err(&pdev->dev, "failed to register host1x client: %d\n", 1895 1893 err); 1896 - return err; 1894 + goto remove; 1897 1895 } 1898 1896 1899 1897 return 0; 1898 + 1899 + remove: 1900 + tegra_output_remove(&hdmi->output); 1901 + return err; 1900 1902 } 1901 1903 1902 1904 static void tegra_hdmi_remove(struct platform_device *pdev)
+13 -4
drivers/gpu/drm/tegra/output.c
··· 8 8 #include <linux/of.h> 9 9 10 10 #include <drm/drm_atomic_helper.h> 11 + #include <drm/drm_edid.h> 11 12 #include <drm/drm_of.h> 12 13 #include <drm/drm_panel.h> 13 14 #include <drm/drm_simple_kms_helper.h> ··· 143 142 GPIOD_IN, 144 143 "HDMI hotplug detect"); 145 144 if (IS_ERR(output->hpd_gpio)) { 146 - if (PTR_ERR(output->hpd_gpio) != -ENOENT) 147 - return PTR_ERR(output->hpd_gpio); 145 + if (PTR_ERR(output->hpd_gpio) != -ENOENT) { 146 + err = PTR_ERR(output->hpd_gpio); 147 + goto put_i2c; 148 + } 148 149 149 150 output->hpd_gpio = NULL; 150 151 } ··· 155 152 err = gpiod_to_irq(output->hpd_gpio); 156 153 if (err < 0) { 157 154 dev_err(output->dev, "gpiod_to_irq(): %d\n", err); 158 - return err; 155 + goto put_i2c; 159 156 } 160 157 161 158 output->hpd_irq = err; ··· 168 165 if (err < 0) { 169 166 dev_err(output->dev, "failed to request IRQ#%u: %d\n", 170 167 output->hpd_irq, err); 171 - return err; 168 + goto put_i2c; 172 169 } 173 170 174 171 output->connector.polled = DRM_CONNECTOR_POLL_HPD; ··· 182 179 } 183 180 184 181 return 0; 182 + 183 + put_i2c: 184 + if (output->ddc) 185 + i2c_put_adapter(output->ddc); 186 + 187 + return err; 185 188 } 186 189 187 190 void tegra_output_remove(struct tegra_output *output)
+13 -5
drivers/gpu/drm/tegra/rgb.c
··· 225 225 rgb->clk = devm_clk_get(dc->dev, NULL); 226 226 if (IS_ERR(rgb->clk)) { 227 227 dev_err(dc->dev, "failed to get clock\n"); 228 - return PTR_ERR(rgb->clk); 228 + err = PTR_ERR(rgb->clk); 229 + goto remove; 229 230 } 230 231 231 232 rgb->clk_parent = devm_clk_get(dc->dev, "parent"); 232 233 if (IS_ERR(rgb->clk_parent)) { 233 234 dev_err(dc->dev, "failed to get parent clock\n"); 234 - return PTR_ERR(rgb->clk_parent); 235 + err = PTR_ERR(rgb->clk_parent); 236 + goto remove; 235 237 } 236 238 237 239 err = clk_set_parent(rgb->clk, rgb->clk_parent); 238 240 if (err < 0) { 239 241 dev_err(dc->dev, "failed to set parent clock: %d\n", err); 240 - return err; 242 + goto remove; 241 243 } 242 244 243 245 rgb->pll_d_out0 = clk_get_sys(NULL, "pll_d_out0"); 244 246 if (IS_ERR(rgb->pll_d_out0)) { 245 247 err = PTR_ERR(rgb->pll_d_out0); 246 248 dev_err(dc->dev, "failed to get pll_d_out0: %d\n", err); 247 - return err; 249 + goto remove; 248 250 } 249 251 250 252 if (dc->soc->has_pll_d2_out0) { ··· 254 252 if (IS_ERR(rgb->pll_d2_out0)) { 255 253 err = PTR_ERR(rgb->pll_d2_out0); 256 254 dev_err(dc->dev, "failed to get pll_d2_out0: %d\n", err); 257 - return err; 255 + goto put_pll; 258 256 } 259 257 } 260 258 261 259 dc->rgb = &rgb->output; 262 260 263 261 return 0; 262 + 263 + put_pll: 264 + clk_put(rgb->pll_d_out0); 265 + remove: 266 + tegra_output_remove(&rgb->output); 267 + return err; 264 268 } 265 269 266 270 void tegra_dc_rgb_remove(struct tegra_dc *dc)
+1
drivers/gpu/drm/tegra/sor.c
··· 20 20 #include <drm/display/drm_scdc_helper.h> 21 21 #include <drm/drm_atomic_helper.h> 22 22 #include <drm/drm_debugfs.h> 23 + #include <drm/drm_edid.h> 23 24 #include <drm/drm_eld.h> 24 25 #include <drm/drm_file.h> 25 26 #include <drm/drm_panel.h>
+4 -15
drivers/gpu/drm/tilcdc/tilcdc_drv.c
··· 182 182 if (priv->clk) 183 183 clk_put(priv->clk); 184 184 185 - if (priv->mmio) 186 - iounmap(priv->mmio); 187 - 188 185 if (priv->wq) 189 186 destroy_workqueue(priv->wq); 190 187 ··· 198 201 struct platform_device *pdev = to_platform_device(dev); 199 202 struct device_node *node = dev->of_node; 200 203 struct tilcdc_drm_private *priv; 201 - struct resource *res; 202 204 u32 bpp = 0; 203 205 int ret; 204 206 ··· 222 226 goto init_failed; 223 227 } 224 228 225 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 226 - if (!res) { 227 - dev_err(dev, "failed to get memory resource\n"); 228 - ret = -EINVAL; 229 - goto init_failed; 230 - } 231 - 232 - priv->mmio = ioremap(res->start, resource_size(res)); 233 - if (!priv->mmio) { 234 - dev_err(dev, "failed to ioremap\n"); 235 - ret = -ENOMEM; 229 + priv->mmio = devm_platform_ioremap_resource(pdev, 0); 230 + if (IS_ERR(priv->mmio)) { 231 + dev_err(dev, "failed to request / ioremap\n"); 232 + ret = PTR_ERR(priv->mmio); 236 233 goto init_failed; 237 234 } 238 235
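The tilcdc conversion relies on the ERR_PTR convention: `devm_platform_ioremap_resource()` never returns NULL on failure, it encodes the errno in the pointer value, so the caller must test `IS_ERR()` and extract `PTR_ERR()` rather than NULL-check. A simplified reimplementation of that convention for illustration (not the kernel's actual macros):

```c
#include <assert.h>
#include <stdint.h>

/* Pointer values in the top FAKE_MAX_ERRNO bytes encode an errno. */
#define FAKE_MAX_ERRNO 4095

static void *fake_err_ptr(long err)
{
	return (void *)err;
}

static int fake_is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-FAKE_MAX_ERRNO;
}

static long fake_ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;
}
```

Because the resource is devm-managed, the manual `iounmap()` in the unload path could also be dropped, as the hunk above shows.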
+3
drivers/gpu/drm/ttm/tests/Makefile
··· 3 3 obj-$(CONFIG_DRM_TTM_KUNIT_TEST) += \ 4 4 ttm_device_test.o \ 5 5 ttm_pool_test.o \ 6 + ttm_resource_test.o \ 7 + ttm_tt_test.o \ 8 + ttm_bo_test.o \ 6 9 ttm_kunit_helpers.o
+622
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 AND MIT 2 + /* 3 + * Copyright © 2023 Intel Corporation 4 + */ 5 + #include <linux/dma-resv.h> 6 + #include <linux/kthread.h> 7 + #include <linux/delay.h> 8 + #include <linux/timer.h> 9 + #include <linux/jiffies.h> 10 + #include <linux/mutex.h> 11 + #include <linux/ww_mutex.h> 12 + 13 + #include <drm/ttm/ttm_resource.h> 14 + #include <drm/ttm/ttm_placement.h> 15 + #include <drm/ttm/ttm_tt.h> 16 + 17 + #include "ttm_kunit_helpers.h" 18 + 19 + #define BO_SIZE SZ_8K 20 + 21 + struct ttm_bo_test_case { 22 + const char *description; 23 + bool interruptible; 24 + bool no_wait; 25 + }; 26 + 27 + static const struct ttm_bo_test_case ttm_bo_reserved_cases[] = { 28 + { 29 + .description = "Cannot be interrupted and sleeps", 30 + .interruptible = false, 31 + .no_wait = false, 32 + }, 33 + { 34 + .description = "Cannot be interrupted, locks straight away", 35 + .interruptible = false, 36 + .no_wait = true, 37 + }, 38 + { 39 + .description = "Can be interrupted, sleeps", 40 + .interruptible = true, 41 + .no_wait = false, 42 + }, 43 + }; 44 + 45 + static void ttm_bo_init_case_desc(const struct ttm_bo_test_case *t, 46 + char *desc) 47 + { 48 + strscpy(desc, t->description, KUNIT_PARAM_DESC_SIZE); 49 + } 50 + 51 + KUNIT_ARRAY_PARAM(ttm_bo_reserve, ttm_bo_reserved_cases, ttm_bo_init_case_desc); 52 + 53 + static void ttm_bo_reserve_optimistic_no_ticket(struct kunit *test) 54 + { 55 + const struct ttm_bo_test_case *params = test->param_value; 56 + struct ttm_buffer_object *bo; 57 + int err; 58 + 59 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 60 + 61 + err = ttm_bo_reserve(bo, params->interruptible, params->no_wait, NULL); 62 + KUNIT_ASSERT_EQ(test, err, 0); 63 + 64 + dma_resv_unlock(bo->base.resv); 65 + } 66 + 67 + static void ttm_bo_reserve_locked_no_sleep(struct kunit *test) 68 + { 69 + struct ttm_buffer_object *bo; 70 + bool interruptible = false; 71 + bool no_wait = true; 72 + int err; 73 + 74 + bo = ttm_bo_kunit_init(test, 
test->priv, BO_SIZE); 75 + 76 + /* Let's lock it beforehand */ 77 + dma_resv_lock(bo->base.resv, NULL); 78 + 79 + err = ttm_bo_reserve(bo, interruptible, no_wait, NULL); 80 + dma_resv_unlock(bo->base.resv); 81 + 82 + KUNIT_ASSERT_EQ(test, err, -EBUSY); 83 + } 84 + 85 + static void ttm_bo_reserve_no_wait_ticket(struct kunit *test) 86 + { 87 + struct ttm_buffer_object *bo; 88 + struct ww_acquire_ctx ctx; 89 + bool interruptible = false; 90 + bool no_wait = true; 91 + int err; 92 + 93 + ww_acquire_init(&ctx, &reservation_ww_class); 94 + 95 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 96 + 97 + err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx); 98 + KUNIT_ASSERT_EQ(test, err, -EBUSY); 99 + 100 + ww_acquire_fini(&ctx); 101 + } 102 + 103 + static void ttm_bo_reserve_double_resv(struct kunit *test) 104 + { 105 + struct ttm_buffer_object *bo; 106 + struct ww_acquire_ctx ctx; 107 + bool interruptible = false; 108 + bool no_wait = false; 109 + int err; 110 + 111 + ww_acquire_init(&ctx, &reservation_ww_class); 112 + 113 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 114 + 115 + err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx); 116 + KUNIT_ASSERT_EQ(test, err, 0); 117 + 118 + err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx); 119 + 120 + dma_resv_unlock(bo->base.resv); 121 + ww_acquire_fini(&ctx); 122 + 123 + KUNIT_ASSERT_EQ(test, err, -EALREADY); 124 + } 125 + 126 + /* 127 + * A test case heavily inspired by ww_test_edeadlk_normal(). It injects 128 + * a deadlock by manipulating the sequence number of the context that holds 129 + * dma_resv lock of bo2 so the other context is "wounded" and has to back off 130 + * (indicated by -EDEADLK). The subtest checks if ttm_bo_reserve() properly 131 + * propagates that error. 
132 + */ 133 + static void ttm_bo_reserve_deadlock(struct kunit *test) 134 + { 135 + struct ttm_buffer_object *bo1, *bo2; 136 + struct ww_acquire_ctx ctx1, ctx2; 137 + bool interruptible = false; 138 + bool no_wait = false; 139 + int err; 140 + 141 + bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 142 + bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 143 + 144 + ww_acquire_init(&ctx1, &reservation_ww_class); 145 + mutex_lock(&bo2->base.resv->lock.base); 146 + 147 + /* The deadlock will be caught by WW mutex, don't warn about it */ 148 + lock_release(&bo2->base.resv->lock.base.dep_map, 1); 149 + 150 + bo2->base.resv->lock.ctx = &ctx2; 151 + ctx2 = ctx1; 152 + ctx2.stamp--; /* Make the context holding the lock younger */ 153 + 154 + err = ttm_bo_reserve(bo1, interruptible, no_wait, &ctx1); 155 + KUNIT_ASSERT_EQ(test, err, 0); 156 + 157 + err = ttm_bo_reserve(bo2, interruptible, no_wait, &ctx1); 158 + KUNIT_ASSERT_EQ(test, err, -EDEADLK); 159 + 160 + dma_resv_unlock(bo1->base.resv); 161 + ww_acquire_fini(&ctx1); 162 + } 163 + 164 + #if IS_BUILTIN(CONFIG_DRM_TTM_KUNIT_TEST) 165 + struct signal_timer { 166 + struct timer_list timer; 167 + struct ww_acquire_ctx *ctx; 168 + }; 169 + 170 + static void signal_for_ttm_bo_reserve(struct timer_list *t) 171 + { 172 + struct signal_timer *s_timer = from_timer(s_timer, t, timer); 173 + struct task_struct *task = s_timer->ctx->task; 174 + 175 + do_send_sig_info(SIGTERM, SEND_SIG_PRIV, task, PIDTYPE_PID); 176 + } 177 + 178 + static int threaded_ttm_bo_reserve(void *arg) 179 + { 180 + struct ttm_buffer_object *bo = arg; 181 + struct signal_timer s_timer; 182 + struct ww_acquire_ctx ctx; 183 + bool interruptible = true; 184 + bool no_wait = false; 185 + int err; 186 + 187 + ww_acquire_init(&ctx, &reservation_ww_class); 188 + 189 + /* Prepare a signal that will interrupt the reservation attempt */ 190 + timer_setup_on_stack(&s_timer.timer, &signal_for_ttm_bo_reserve, 0); 191 + s_timer.ctx = &ctx; 192 + 193 + 
mod_timer(&s_timer.timer, msecs_to_jiffies(100)); 194 + 195 + err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx); 196 + 197 + timer_delete_sync(&s_timer.timer); 198 + destroy_timer_on_stack(&s_timer.timer); 199 + 200 + ww_acquire_fini(&ctx); 201 + 202 + return err; 203 + } 204 + 205 + static void ttm_bo_reserve_interrupted(struct kunit *test) 206 + { 207 + struct ttm_buffer_object *bo; 208 + struct task_struct *task; 209 + int err; 210 + 211 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 212 + 213 + task = kthread_create(threaded_ttm_bo_reserve, bo, "ttm-bo-reserve"); 214 + 215 + if (IS_ERR(task)) 216 + KUNIT_FAIL(test, "Couldn't create ttm bo reserve task\n"); 217 + 218 + /* Take a lock so the threaded reserve has to wait */ 219 + mutex_lock(&bo->base.resv->lock.base); 220 + 221 + wake_up_process(task); 222 + msleep(20); 223 + err = kthread_stop(task); 224 + 225 + mutex_unlock(&bo->base.resv->lock.base); 226 + 227 + KUNIT_ASSERT_EQ(test, err, -ERESTARTSYS); 228 + } 229 + #endif /* IS_BUILTIN(CONFIG_DRM_TTM_KUNIT_TEST) */ 230 + 231 + static void ttm_bo_unreserve_basic(struct kunit *test) 232 + { 233 + struct ttm_test_devices *priv = test->priv; 234 + struct ttm_buffer_object *bo; 235 + struct ttm_device *ttm_dev; 236 + struct ttm_resource *res1, *res2; 237 + struct ttm_place *place; 238 + struct ttm_resource_manager *man; 239 + unsigned int bo_prio = TTM_MAX_BO_PRIORITY - 1; 240 + uint32_t mem_type = TTM_PL_SYSTEM; 241 + int err; 242 + 243 + place = ttm_place_kunit_init(test, mem_type, 0); 244 + 245 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 246 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 247 + 248 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 249 + KUNIT_ASSERT_EQ(test, err, 0); 250 + priv->ttm_dev = ttm_dev; 251 + 252 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 253 + bo->priority = bo_prio; 254 + 255 + err = ttm_resource_alloc(bo, place, &res1); 256 + KUNIT_ASSERT_EQ(test, err, 0); 257 + 258 + bo->resource = res1; 
259 + 260 + /* Add a dummy resource to populate LRU */ 261 + ttm_resource_alloc(bo, place, &res2); 262 + 263 + dma_resv_lock(bo->base.resv, NULL); 264 + ttm_bo_unreserve(bo); 265 + 266 + man = ttm_manager_type(priv->ttm_dev, mem_type); 267 + KUNIT_ASSERT_EQ(test, 268 + list_is_last(&res1->lru, &man->lru[bo->priority]), 1); 269 + 270 + ttm_resource_free(bo, &res2); 271 + ttm_resource_free(bo, &res1); 272 + } 273 + 274 + static void ttm_bo_unreserve_pinned(struct kunit *test) 275 + { 276 + struct ttm_test_devices *priv = test->priv; 277 + struct ttm_buffer_object *bo; 278 + struct ttm_device *ttm_dev; 279 + struct ttm_resource *res1, *res2; 280 + struct ttm_place *place; 281 + uint32_t mem_type = TTM_PL_SYSTEM; 282 + int err; 283 + 284 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 285 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 286 + 287 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 288 + KUNIT_ASSERT_EQ(test, err, 0); 289 + priv->ttm_dev = ttm_dev; 290 + 291 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 292 + place = ttm_place_kunit_init(test, mem_type, 0); 293 + 294 + dma_resv_lock(bo->base.resv, NULL); 295 + ttm_bo_pin(bo); 296 + 297 + err = ttm_resource_alloc(bo, place, &res1); 298 + KUNIT_ASSERT_EQ(test, err, 0); 299 + bo->resource = res1; 300 + 301 + /* Add a dummy resource to the pinned list */ 302 + err = ttm_resource_alloc(bo, place, &res2); 303 + KUNIT_ASSERT_EQ(test, err, 0); 304 + KUNIT_ASSERT_EQ(test, 305 + list_is_last(&res2->lru, &priv->ttm_dev->pinned), 1); 306 + 307 + ttm_bo_unreserve(bo); 308 + KUNIT_ASSERT_EQ(test, 309 + list_is_last(&res1->lru, &priv->ttm_dev->pinned), 1); 310 + 311 + ttm_resource_free(bo, &res1); 312 + ttm_resource_free(bo, &res2); 313 + } 314 + 315 + static void ttm_bo_unreserve_bulk(struct kunit *test) 316 + { 317 + struct ttm_test_devices *priv = test->priv; 318 + struct ttm_lru_bulk_move lru_bulk_move; 319 + struct ttm_lru_bulk_move_pos *pos; 320 + struct ttm_buffer_object *bo1, *bo2; 321 + 
struct ttm_resource *res1, *res2; 322 + struct ttm_device *ttm_dev; 323 + struct ttm_place *place; 324 + uint32_t mem_type = TTM_PL_SYSTEM; 325 + unsigned int bo_priority = 0; 326 + int err; 327 + 328 + ttm_lru_bulk_move_init(&lru_bulk_move); 329 + 330 + place = ttm_place_kunit_init(test, mem_type, 0); 331 + 332 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 333 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 334 + 335 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 336 + KUNIT_ASSERT_EQ(test, err, 0); 337 + priv->ttm_dev = ttm_dev; 338 + 339 + bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 340 + bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 341 + 342 + dma_resv_lock(bo1->base.resv, NULL); 343 + ttm_bo_set_bulk_move(bo1, &lru_bulk_move); 344 + dma_resv_unlock(bo1->base.resv); 345 + 346 + err = ttm_resource_alloc(bo1, place, &res1); 347 + KUNIT_ASSERT_EQ(test, err, 0); 348 + bo1->resource = res1; 349 + 350 + dma_resv_lock(bo2->base.resv, NULL); 351 + ttm_bo_set_bulk_move(bo2, &lru_bulk_move); 352 + dma_resv_unlock(bo2->base.resv); 353 + 354 + err = ttm_resource_alloc(bo2, place, &res2); 355 + KUNIT_ASSERT_EQ(test, err, 0); 356 + bo2->resource = res2; 357 + 358 + ttm_bo_reserve(bo1, false, false, NULL); 359 + ttm_bo_unreserve(bo1); 360 + 361 + pos = &lru_bulk_move.pos[mem_type][bo_priority]; 362 + KUNIT_ASSERT_PTR_EQ(test, res1, pos->last); 363 + 364 + ttm_resource_free(bo1, &res1); 365 + ttm_resource_free(bo2, &res2); 366 + } 367 + 368 + static void ttm_bo_put_basic(struct kunit *test) 369 + { 370 + struct ttm_test_devices *priv = test->priv; 371 + struct ttm_buffer_object *bo; 372 + struct ttm_resource *res; 373 + struct ttm_device *ttm_dev; 374 + struct ttm_place *place; 375 + uint32_t mem_type = TTM_PL_SYSTEM; 376 + int err; 377 + 378 + place = ttm_place_kunit_init(test, mem_type, 0); 379 + 380 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 381 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 382 + 383 + err = 
ttm_device_kunit_init(priv, ttm_dev, false, false); 384 + KUNIT_ASSERT_EQ(test, err, 0); 385 + priv->ttm_dev = ttm_dev; 386 + 387 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 388 + bo->type = ttm_bo_type_device; 389 + 390 + err = ttm_resource_alloc(bo, place, &res); 391 + KUNIT_ASSERT_EQ(test, err, 0); 392 + bo->resource = res; 393 + 394 + dma_resv_lock(bo->base.resv, NULL); 395 + err = ttm_tt_create(bo, false); 396 + dma_resv_unlock(bo->base.resv); 397 + KUNIT_EXPECT_EQ(test, err, 0); 398 + 399 + ttm_bo_put(bo); 400 + } 401 + 402 + static const char *mock_name(struct dma_fence *f) 403 + { 404 + return "kunit-ttm-bo-put"; 405 + } 406 + 407 + static const struct dma_fence_ops mock_fence_ops = { 408 + .get_driver_name = mock_name, 409 + .get_timeline_name = mock_name, 410 + }; 411 + 412 + static void ttm_bo_put_shared_resv(struct kunit *test) 413 + { 414 + struct ttm_test_devices *priv = test->priv; 415 + struct ttm_buffer_object *bo; 416 + struct dma_resv *external_resv; 417 + struct dma_fence *fence; 418 + /* A dummy DMA fence lock */ 419 + spinlock_t fence_lock; 420 + struct ttm_device *ttm_dev; 421 + int err; 422 + 423 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 424 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 425 + 426 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 427 + KUNIT_ASSERT_EQ(test, err, 0); 428 + priv->ttm_dev = ttm_dev; 429 + 430 + external_resv = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 431 + KUNIT_ASSERT_NOT_NULL(test, external_resv); 432 + 433 + dma_resv_init(external_resv); 434 + 435 + fence = kunit_kzalloc(test, sizeof(*fence), GFP_KERNEL); 436 + KUNIT_ASSERT_NOT_NULL(test, fence); 437 + 438 + spin_lock_init(&fence_lock); 439 + dma_fence_init(fence, &mock_fence_ops, &fence_lock, 0, 0); 440 + 441 + dma_resv_lock(external_resv, NULL); 442 + dma_resv_reserve_fences(external_resv, 1); 443 + dma_resv_add_fence(external_resv, fence, DMA_RESV_USAGE_BOOKKEEP); 444 + dma_resv_unlock(external_resv); 445 + 446 + 
dma_fence_signal(fence); 447 + 448 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 449 + bo->type = ttm_bo_type_device; 450 + bo->base.resv = external_resv; 451 + 452 + ttm_bo_put(bo); 453 + } 454 + 455 + static void ttm_bo_pin_basic(struct kunit *test) 456 + { 457 + struct ttm_test_devices *priv = test->priv; 458 + struct ttm_buffer_object *bo; 459 + struct ttm_device *ttm_dev; 460 + unsigned int no_pins = 3; 461 + int err; 462 + 463 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 464 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 465 + 466 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 467 + KUNIT_ASSERT_EQ(test, err, 0); 468 + priv->ttm_dev = ttm_dev; 469 + 470 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 471 + 472 + for (int i = 0; i < no_pins; i++) { 473 + dma_resv_lock(bo->base.resv, NULL); 474 + ttm_bo_pin(bo); 475 + dma_resv_unlock(bo->base.resv); 476 + } 477 + 478 + KUNIT_ASSERT_EQ(test, bo->pin_count, no_pins); 479 + } 480 + 481 + static void ttm_bo_pin_unpin_resource(struct kunit *test) 482 + { 483 + struct ttm_test_devices *priv = test->priv; 484 + struct ttm_lru_bulk_move lru_bulk_move; 485 + struct ttm_lru_bulk_move_pos *pos; 486 + struct ttm_buffer_object *bo; 487 + struct ttm_resource *res; 488 + struct ttm_device *ttm_dev; 489 + struct ttm_place *place; 490 + uint32_t mem_type = TTM_PL_SYSTEM; 491 + unsigned int bo_priority = 0; 492 + int err; 493 + 494 + ttm_lru_bulk_move_init(&lru_bulk_move); 495 + 496 + place = ttm_place_kunit_init(test, mem_type, 0); 497 + 498 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 499 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 500 + 501 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 502 + KUNIT_ASSERT_EQ(test, err, 0); 503 + priv->ttm_dev = ttm_dev; 504 + 505 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 506 + 507 + err = ttm_resource_alloc(bo, place, &res); 508 + KUNIT_ASSERT_EQ(test, err, 0); 509 + bo->resource = res; 510 + 511 + 
dma_resv_lock(bo->base.resv, NULL); 512 + ttm_bo_set_bulk_move(bo, &lru_bulk_move); 513 + ttm_bo_pin(bo); 514 + dma_resv_unlock(bo->base.resv); 515 + 516 + pos = &lru_bulk_move.pos[mem_type][bo_priority]; 517 + 518 + KUNIT_ASSERT_EQ(test, bo->pin_count, 1); 519 + KUNIT_ASSERT_NULL(test, pos->first); 520 + KUNIT_ASSERT_NULL(test, pos->last); 521 + 522 + dma_resv_lock(bo->base.resv, NULL); 523 + ttm_bo_unpin(bo); 524 + dma_resv_unlock(bo->base.resv); 525 + 526 + KUNIT_ASSERT_PTR_EQ(test, res, pos->last); 527 + KUNIT_ASSERT_EQ(test, bo->pin_count, 0); 528 + 529 + ttm_resource_free(bo, &res); 530 + } 531 + 532 + static void ttm_bo_multiple_pin_one_unpin(struct kunit *test) 533 + { 534 + struct ttm_test_devices *priv = test->priv; 535 + struct ttm_lru_bulk_move lru_bulk_move; 536 + struct ttm_lru_bulk_move_pos *pos; 537 + struct ttm_buffer_object *bo; 538 + struct ttm_resource *res; 539 + struct ttm_device *ttm_dev; 540 + struct ttm_place *place; 541 + uint32_t mem_type = TTM_PL_SYSTEM; 542 + unsigned int bo_priority = 0; 543 + int err; 544 + 545 + ttm_lru_bulk_move_init(&lru_bulk_move); 546 + 547 + place = ttm_place_kunit_init(test, mem_type, 0); 548 + 549 + ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 550 + KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 551 + 552 + err = ttm_device_kunit_init(priv, ttm_dev, false, false); 553 + KUNIT_ASSERT_EQ(test, err, 0); 554 + priv->ttm_dev = ttm_dev; 555 + 556 + bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE); 557 + 558 + err = ttm_resource_alloc(bo, place, &res); 559 + KUNIT_ASSERT_EQ(test, err, 0); 560 + bo->resource = res; 561 + 562 + dma_resv_lock(bo->base.resv, NULL); 563 + ttm_bo_set_bulk_move(bo, &lru_bulk_move); 564 + 565 + /* Multiple pins */ 566 + ttm_bo_pin(bo); 567 + ttm_bo_pin(bo); 568 + 569 + dma_resv_unlock(bo->base.resv); 570 + 571 + pos = &lru_bulk_move.pos[mem_type][bo_priority]; 572 + 573 + KUNIT_ASSERT_EQ(test, bo->pin_count, 2); 574 + KUNIT_ASSERT_NULL(test, pos->first); 575 + KUNIT_ASSERT_NULL(test, 
pos->last); 576 + 577 + dma_resv_lock(bo->base.resv, NULL); 578 + ttm_bo_unpin(bo); 579 + dma_resv_unlock(bo->base.resv); 580 + 581 + KUNIT_ASSERT_EQ(test, bo->pin_count, 1); 582 + KUNIT_ASSERT_NULL(test, pos->first); 583 + KUNIT_ASSERT_NULL(test, pos->last); 584 + 585 + dma_resv_lock(bo->base.resv, NULL); 586 + ttm_bo_unpin(bo); 587 + dma_resv_unlock(bo->base.resv); 588 + 589 + ttm_resource_free(bo, &res); 590 + } 591 + 592 + static struct kunit_case ttm_bo_test_cases[] = { 593 + KUNIT_CASE_PARAM(ttm_bo_reserve_optimistic_no_ticket, 594 + ttm_bo_reserve_gen_params), 595 + KUNIT_CASE(ttm_bo_reserve_locked_no_sleep), 596 + KUNIT_CASE(ttm_bo_reserve_no_wait_ticket), 597 + KUNIT_CASE(ttm_bo_reserve_double_resv), 598 + #if IS_BUILTIN(CONFIG_DRM_TTM_KUNIT_TEST) 599 + KUNIT_CASE(ttm_bo_reserve_interrupted), 600 + #endif 601 + KUNIT_CASE(ttm_bo_reserve_deadlock), 602 + KUNIT_CASE(ttm_bo_unreserve_basic), 603 + KUNIT_CASE(ttm_bo_unreserve_pinned), 604 + KUNIT_CASE(ttm_bo_unreserve_bulk), 605 + KUNIT_CASE(ttm_bo_put_basic), 606 + KUNIT_CASE(ttm_bo_put_shared_resv), 607 + KUNIT_CASE(ttm_bo_pin_basic), 608 + KUNIT_CASE(ttm_bo_pin_unpin_resource), 609 + KUNIT_CASE(ttm_bo_multiple_pin_one_unpin), 610 + {} 611 + }; 612 + 613 + static struct kunit_suite ttm_bo_test_suite = { 614 + .name = "ttm_bo", 615 + .init = ttm_test_devices_init, 616 + .exit = ttm_test_devices_fini, 617 + .test_cases = ttm_bo_test_cases, 618 + }; 619 + 620 + kunit_test_suites(&ttm_bo_test_suite); 621 + 622 + MODULE_LICENSE("GPL");
drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c | +47 -1
··· 2 2 /* 3 3 * Copyright © 2023 Intel Corporation 4 4 */ 5 + #include <drm/ttm/ttm_tt.h> 6 + 5 7 #include "ttm_kunit_helpers.h" 6 8 9 + static struct ttm_tt *ttm_tt_simple_create(struct ttm_buffer_object *bo, 10 + uint32_t page_flags) 11 + { 12 + struct ttm_tt *tt; 13 + 14 + tt = kzalloc(sizeof(*tt), GFP_KERNEL); 15 + ttm_tt_init(tt, bo, page_flags, ttm_cached, 0); 16 + 17 + return tt; 18 + } 19 + 20 + static void ttm_tt_simple_destroy(struct ttm_device *bdev, struct ttm_tt *ttm) 21 + { 22 + kfree(ttm); 23 + } 24 + 25 + static void dummy_ttm_bo_destroy(struct ttm_buffer_object *bo) 26 + { 27 + } 28 + 7 29 struct ttm_device_funcs ttm_dev_funcs = { 30 + .ttm_tt_create = ttm_tt_simple_create, 31 + .ttm_tt_destroy = ttm_tt_simple_destroy, 8 32 }; 9 33 EXPORT_SYMBOL_GPL(ttm_dev_funcs); 10 34 ··· 53 29 struct ttm_test_devices *devs, 54 30 size_t size) 55 31 { 56 - struct drm_gem_object gem_obj = { .size = size }; 32 + struct drm_gem_object gem_obj = { }; 57 33 struct ttm_buffer_object *bo; 34 + int err; 58 35 59 36 bo = kunit_kzalloc(test, sizeof(*bo), GFP_KERNEL); 60 37 KUNIT_ASSERT_NOT_NULL(test, bo); 61 38 62 39 bo->base = gem_obj; 40 + err = drm_gem_object_init(devs->drm, &bo->base, size); 41 + KUNIT_ASSERT_EQ(test, err, 0); 42 + 63 43 bo->bdev = devs->ttm_dev; 44 + bo->destroy = dummy_ttm_bo_destroy; 45 + 46 + kref_init(&bo->kref); 64 47 65 48 return bo; 66 49 } 67 50 EXPORT_SYMBOL_GPL(ttm_bo_kunit_init); 51 + 52 + struct ttm_place *ttm_place_kunit_init(struct kunit *test, 53 + uint32_t mem_type, uint32_t flags) 54 + { 55 + struct ttm_place *place; 56 + 57 + place = kunit_kzalloc(test, sizeof(*place), GFP_KERNEL); 58 + KUNIT_ASSERT_NOT_NULL(test, place); 59 + 60 + place->mem_type = mem_type; 61 + place->flags = flags; 62 + 63 + return place; 64 + } 65 + EXPORT_SYMBOL_GPL(ttm_place_kunit_init); 68 66 69 67 struct ttm_test_devices *ttm_test_devices_basic(struct kunit *test) 70 68 {
drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h | +3
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/ttm/ttm_device.h> 10 10 #include <drm/ttm/ttm_bo.h> 11 + #include <drm/ttm/ttm_placement.h> 11 12 12 13 #include <drm/drm_kunit_helpers.h> 13 14 #include <kunit/test.h> ··· 29 28 struct ttm_buffer_object *ttm_bo_kunit_init(struct kunit *test, 30 29 struct ttm_test_devices *devs, 31 30 size_t size); 31 + struct ttm_place *ttm_place_kunit_init(struct kunit *test, 32 + uint32_t mem_type, uint32_t flags); 32 33 33 34 struct ttm_test_devices *ttm_test_devices_basic(struct kunit *test); 34 35 struct ttm_test_devices *ttm_test_devices_all(struct kunit *test);
drivers/gpu/drm/ttm/tests/ttm_pool_test.c | +1 -2
··· 78 78 struct ttm_test_devices *devs = priv->devs; 79 79 struct ttm_pool *pool; 80 80 struct ttm_tt *tt; 81 - unsigned long order = __fls(size / PAGE_SIZE); 82 81 int err; 83 82 84 - tt = ttm_tt_kunit_init(test, order, caching, size); 83 + tt = ttm_tt_kunit_init(test, 0, caching, size); 85 84 KUNIT_ASSERT_NOT_NULL(test, tt); 86 85 87 86 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL);
drivers/gpu/drm/ttm/tests/ttm_resource_test.c | +335
··· 1 + // SPDX-License-Identifier: GPL-2.0 AND MIT 2 + /* 3 + * Copyright © 2023 Intel Corporation 4 + */ 5 + #include <drm/ttm/ttm_resource.h> 6 + 7 + #include "ttm_kunit_helpers.h" 8 + 9 + #define RES_SIZE SZ_4K 10 + #define TTM_PRIV_DUMMY_REG (TTM_NUM_MEM_TYPES - 1) 11 + 12 + struct ttm_resource_test_case { 13 + const char *description; 14 + uint32_t mem_type; 15 + uint32_t flags; 16 + }; 17 + 18 + struct ttm_resource_test_priv { 19 + struct ttm_test_devices *devs; 20 + struct ttm_buffer_object *bo; 21 + struct ttm_place *place; 22 + }; 23 + 24 + static const struct ttm_resource_manager_func ttm_resource_manager_mock_funcs = { }; 25 + 26 + static int ttm_resource_test_init(struct kunit *test) 27 + { 28 + struct ttm_resource_test_priv *priv; 29 + 30 + priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL); 31 + KUNIT_ASSERT_NOT_NULL(test, priv); 32 + 33 + priv->devs = ttm_test_devices_all(test); 34 + KUNIT_ASSERT_NOT_NULL(test, priv->devs); 35 + 36 + test->priv = priv; 37 + 38 + return 0; 39 + } 40 + 41 + static void ttm_resource_test_fini(struct kunit *test) 42 + { 43 + struct ttm_resource_test_priv *priv = test->priv; 44 + 45 + ttm_test_devices_put(test, priv->devs); 46 + } 47 + 48 + static void ttm_init_test_mocks(struct kunit *test, 49 + struct ttm_resource_test_priv *priv, 50 + uint32_t mem_type, uint32_t flags) 51 + { 52 + size_t size = RES_SIZE; 53 + 54 + /* Make sure we have what we need for a good BO mock */ 55 + KUNIT_ASSERT_NOT_NULL(test, priv->devs->ttm_dev); 56 + 57 + priv->bo = ttm_bo_kunit_init(test, priv->devs, size); 58 + priv->place = ttm_place_kunit_init(test, mem_type, flags); 59 + } 60 + 61 + static void ttm_init_test_manager(struct kunit *test, 62 + struct ttm_resource_test_priv *priv, 63 + uint32_t mem_type) 64 + { 65 + struct ttm_device *ttm_dev = priv->devs->ttm_dev; 66 + struct ttm_resource_manager *man; 67 + size_t size = SZ_16K; 68 + 69 + man = kunit_kzalloc(test, sizeof(*man), GFP_KERNEL); 70 + KUNIT_ASSERT_NOT_NULL(test, man); 71 + 
72 + man->use_tt = false; 73 + man->func = &ttm_resource_manager_mock_funcs; 74 + 75 + ttm_resource_manager_init(man, ttm_dev, size); 76 + ttm_set_driver_manager(ttm_dev, mem_type, man); 77 + ttm_resource_manager_set_used(man, true); 78 + } 79 + 80 + static const struct ttm_resource_test_case ttm_resource_cases[] = { 81 + { 82 + .description = "Init resource in TTM_PL_SYSTEM", 83 + .mem_type = TTM_PL_SYSTEM, 84 + }, 85 + { 86 + .description = "Init resource in TTM_PL_VRAM", 87 + .mem_type = TTM_PL_VRAM, 88 + }, 89 + { 90 + .description = "Init resource in a private placement", 91 + .mem_type = TTM_PRIV_DUMMY_REG, 92 + }, 93 + { 94 + .description = "Init resource in TTM_PL_SYSTEM, set placement flags", 95 + .mem_type = TTM_PL_SYSTEM, 96 + .flags = TTM_PL_FLAG_TOPDOWN, 97 + }, 98 + }; 99 + 100 + static void ttm_resource_case_desc(const struct ttm_resource_test_case *t, char *desc) 101 + { 102 + strscpy(desc, t->description, KUNIT_PARAM_DESC_SIZE); 103 + } 104 + 105 + KUNIT_ARRAY_PARAM(ttm_resource, ttm_resource_cases, ttm_resource_case_desc); 106 + 107 + static void ttm_resource_init_basic(struct kunit *test) 108 + { 109 + const struct ttm_resource_test_case *params = test->param_value; 110 + struct ttm_resource_test_priv *priv = test->priv; 111 + struct ttm_resource *res; 112 + struct ttm_buffer_object *bo; 113 + struct ttm_place *place; 114 + struct ttm_resource_manager *man; 115 + uint64_t expected_usage; 116 + 117 + ttm_init_test_mocks(test, priv, params->mem_type, params->flags); 118 + bo = priv->bo; 119 + place = priv->place; 120 + 121 + if (params->mem_type > TTM_PL_SYSTEM) 122 + ttm_init_test_manager(test, priv, params->mem_type); 123 + 124 + res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL); 125 + KUNIT_ASSERT_NOT_NULL(test, res); 126 + 127 + man = ttm_manager_type(priv->devs->ttm_dev, place->mem_type); 128 + expected_usage = man->usage + RES_SIZE; 129 + 130 + KUNIT_ASSERT_TRUE(test, list_empty(&man->lru[bo->priority])); 131 + 132 + ttm_resource_init(bo, 
place, res); 133 + 134 + KUNIT_ASSERT_EQ(test, res->start, 0); 135 + KUNIT_ASSERT_EQ(test, res->size, RES_SIZE); 136 + KUNIT_ASSERT_EQ(test, res->mem_type, place->mem_type); 137 + KUNIT_ASSERT_EQ(test, res->placement, place->flags); 138 + KUNIT_ASSERT_PTR_EQ(test, res->bo, bo); 139 + 140 + KUNIT_ASSERT_NULL(test, res->bus.addr); 141 + KUNIT_ASSERT_EQ(test, res->bus.offset, 0); 142 + KUNIT_ASSERT_FALSE(test, res->bus.is_iomem); 143 + KUNIT_ASSERT_EQ(test, res->bus.caching, ttm_cached); 144 + KUNIT_ASSERT_EQ(test, man->usage, expected_usage); 145 + 146 + KUNIT_ASSERT_TRUE(test, list_is_singular(&man->lru[bo->priority])); 147 + 148 + ttm_resource_fini(man, res); 149 + } 150 + 151 + static void ttm_resource_init_pinned(struct kunit *test) 152 + { 153 + struct ttm_resource_test_priv *priv = test->priv; 154 + struct ttm_resource *res; 155 + struct ttm_buffer_object *bo; 156 + struct ttm_place *place; 157 + struct ttm_resource_manager *man; 158 + 159 + ttm_init_test_mocks(test, priv, TTM_PL_SYSTEM, 0); 160 + bo = priv->bo; 161 + place = priv->place; 162 + 163 + man = ttm_manager_type(priv->devs->ttm_dev, place->mem_type); 164 + 165 + res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL); 166 + KUNIT_ASSERT_NOT_NULL(test, res); 167 + KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned)); 168 + 169 + dma_resv_lock(bo->base.resv, NULL); 170 + ttm_bo_pin(bo); 171 + ttm_resource_init(bo, place, res); 172 + KUNIT_ASSERT_TRUE(test, list_is_singular(&bo->bdev->pinned)); 173 + 174 + ttm_bo_unpin(bo); 175 + ttm_resource_fini(man, res); 176 + dma_resv_unlock(bo->base.resv); 177 + 178 + KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned)); 179 + } 180 + 181 + static void ttm_resource_fini_basic(struct kunit *test) 182 + { 183 + struct ttm_resource_test_priv *priv = test->priv; 184 + struct ttm_resource *res; 185 + struct ttm_buffer_object *bo; 186 + struct ttm_place *place; 187 + struct ttm_resource_manager *man; 188 + 189 + ttm_init_test_mocks(test, priv, TTM_PL_SYSTEM, 0); 190 + 
bo = priv->bo; 191 + place = priv->place; 192 + 193 + man = ttm_manager_type(priv->devs->ttm_dev, place->mem_type); 194 + 195 + res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL); 196 + KUNIT_ASSERT_NOT_NULL(test, res); 197 + 198 + ttm_resource_init(bo, place, res); 199 + ttm_resource_fini(man, res); 200 + 201 + KUNIT_ASSERT_TRUE(test, list_empty(&res->lru)); 202 + KUNIT_ASSERT_EQ(test, man->usage, 0); 203 + } 204 + 205 + static void ttm_resource_manager_init_basic(struct kunit *test) 206 + { 207 + struct ttm_resource_test_priv *priv = test->priv; 208 + struct ttm_resource_manager *man; 209 + size_t size = SZ_16K; 210 + 211 + man = kunit_kzalloc(test, sizeof(*man), GFP_KERNEL); 212 + KUNIT_ASSERT_NOT_NULL(test, man); 213 + 214 + ttm_resource_manager_init(man, priv->devs->ttm_dev, size); 215 + 216 + KUNIT_ASSERT_PTR_EQ(test, man->bdev, priv->devs->ttm_dev); 217 + KUNIT_ASSERT_EQ(test, man->size, size); 218 + KUNIT_ASSERT_EQ(test, man->usage, 0); 219 + KUNIT_ASSERT_NULL(test, man->move); 220 + KUNIT_ASSERT_NOT_NULL(test, &man->move_lock); 221 + 222 + for (int i = 0; i < TTM_MAX_BO_PRIORITY; ++i) 223 + KUNIT_ASSERT_TRUE(test, list_empty(&man->lru[i])); 224 + } 225 + 226 + static void ttm_resource_manager_usage_basic(struct kunit *test) 227 + { 228 + struct ttm_resource_test_priv *priv = test->priv; 229 + struct ttm_resource *res; 230 + struct ttm_buffer_object *bo; 231 + struct ttm_place *place; 232 + struct ttm_resource_manager *man; 233 + uint64_t actual_usage; 234 + 235 + ttm_init_test_mocks(test, priv, TTM_PL_SYSTEM, TTM_PL_FLAG_TOPDOWN); 236 + bo = priv->bo; 237 + place = priv->place; 238 + 239 + res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL); 240 + KUNIT_ASSERT_NOT_NULL(test, res); 241 + 242 + man = ttm_manager_type(priv->devs->ttm_dev, place->mem_type); 243 + 244 + ttm_resource_init(bo, place, res); 245 + actual_usage = ttm_resource_manager_usage(man); 246 + 247 + KUNIT_ASSERT_EQ(test, actual_usage, RES_SIZE); 248 + 249 + ttm_resource_fini(man, res); 250 
+ } 251 + 252 + static void ttm_resource_manager_set_used_basic(struct kunit *test) 253 + { 254 + struct ttm_resource_test_priv *priv = test->priv; 255 + struct ttm_resource_manager *man; 256 + 257 + man = ttm_manager_type(priv->devs->ttm_dev, TTM_PL_SYSTEM); 258 + KUNIT_ASSERT_TRUE(test, man->use_type); 259 + 260 + ttm_resource_manager_set_used(man, false); 261 + KUNIT_ASSERT_FALSE(test, man->use_type); 262 + } 263 + 264 + static void ttm_sys_man_alloc_basic(struct kunit *test) 265 + { 266 + struct ttm_resource_test_priv *priv = test->priv; 267 + struct ttm_resource_manager *man; 268 + struct ttm_buffer_object *bo; 269 + struct ttm_place *place; 270 + struct ttm_resource *res; 271 + uint32_t mem_type = TTM_PL_SYSTEM; 272 + int ret; 273 + 274 + ttm_init_test_mocks(test, priv, mem_type, 0); 275 + bo = priv->bo; 276 + place = priv->place; 277 + 278 + man = ttm_manager_type(priv->devs->ttm_dev, mem_type); 279 + ret = man->func->alloc(man, bo, place, &res); 280 + 281 + KUNIT_ASSERT_EQ(test, ret, 0); 282 + KUNIT_ASSERT_EQ(test, res->size, RES_SIZE); 283 + KUNIT_ASSERT_EQ(test, res->mem_type, mem_type); 284 + KUNIT_ASSERT_PTR_EQ(test, res->bo, bo); 285 + 286 + ttm_resource_fini(man, res); 287 + } 288 + 289 + static void ttm_sys_man_free_basic(struct kunit *test) 290 + { 291 + struct ttm_resource_test_priv *priv = test->priv; 292 + struct ttm_resource_manager *man; 293 + struct ttm_buffer_object *bo; 294 + struct ttm_place *place; 295 + struct ttm_resource *res; 296 + uint32_t mem_type = TTM_PL_SYSTEM; 297 + 298 + ttm_init_test_mocks(test, priv, mem_type, 0); 299 + bo = priv->bo; 300 + place = priv->place; 301 + 302 + res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL); 303 + KUNIT_ASSERT_NOT_NULL(test, res); 304 + 305 + ttm_resource_alloc(bo, place, &res); 306 + 307 + man = ttm_manager_type(priv->devs->ttm_dev, mem_type); 308 + man->func->free(man, res); 309 + 310 + KUNIT_ASSERT_TRUE(test, list_empty(&man->lru[bo->priority])); 311 + KUNIT_ASSERT_EQ(test, man->usage, 0); 
312 + } 313 + 314 + static struct kunit_case ttm_resource_test_cases[] = { 315 + KUNIT_CASE_PARAM(ttm_resource_init_basic, ttm_resource_gen_params), 316 + KUNIT_CASE(ttm_resource_init_pinned), 317 + KUNIT_CASE(ttm_resource_fini_basic), 318 + KUNIT_CASE(ttm_resource_manager_init_basic), 319 + KUNIT_CASE(ttm_resource_manager_usage_basic), 320 + KUNIT_CASE(ttm_resource_manager_set_used_basic), 321 + KUNIT_CASE(ttm_sys_man_alloc_basic), 322 + KUNIT_CASE(ttm_sys_man_free_basic), 323 + {} 324 + }; 325 + 326 + static struct kunit_suite ttm_resource_test_suite = { 327 + .name = "ttm_resource", 328 + .init = ttm_resource_test_init, 329 + .exit = ttm_resource_test_fini, 330 + .test_cases = ttm_resource_test_cases, 331 + }; 332 + 333 + kunit_test_suites(&ttm_resource_test_suite); 334 + 335 + MODULE_LICENSE("GPL");
drivers/gpu/drm/ttm/tests/ttm_tt_test.c | +295
··· (new file; every line in this hunk is an addition)

// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
 * Copyright © 2023 Intel Corporation
 */
#include <linux/shmem_fs.h>
#include <drm/ttm/ttm_tt.h>

#include "ttm_kunit_helpers.h"

#define BO_SIZE		SZ_4K

struct ttm_tt_test_case {
	const char *description;
	uint32_t size;
	uint32_t extra_pages_num;
};

static int ttm_tt_test_init(struct kunit *test)
{
	struct ttm_test_devices *priv;

	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, priv);

	priv = ttm_test_devices_all(test);
	test->priv = priv;

	return 0;
}

static const struct ttm_tt_test_case ttm_tt_init_basic_cases[] = {
	{
		.description = "Page-aligned size",
		.size = SZ_4K,
	},
	{
		.description = "Extra pages requested",
		.size = SZ_4K,
		.extra_pages_num = 1,
	},
};

static void ttm_tt_init_case_desc(const struct ttm_tt_test_case *t,
				  char *desc)
{
	strscpy(desc, t->description, KUNIT_PARAM_DESC_SIZE);
}

KUNIT_ARRAY_PARAM(ttm_tt_init_basic, ttm_tt_init_basic_cases,
		  ttm_tt_init_case_desc);

static void ttm_tt_init_basic(struct kunit *test)
{
	const struct ttm_tt_test_case *params = test->param_value;
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	uint32_t page_flags = TTM_TT_FLAG_ZERO_ALLOC;
	enum ttm_caching caching = ttm_cached;
	uint32_t extra_pages = params->extra_pages_num;
	int num_pages = params->size >> PAGE_SHIFT;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, params->size);

	err = ttm_tt_init(tt, bo, page_flags, caching, extra_pages);
	KUNIT_ASSERT_EQ(test, err, 0);

	KUNIT_ASSERT_EQ(test, tt->num_pages, num_pages + extra_pages);

	KUNIT_ASSERT_EQ(test, tt->page_flags, page_flags);
	KUNIT_ASSERT_EQ(test, tt->caching, caching);

	KUNIT_ASSERT_NULL(test, tt->dma_address);
	KUNIT_ASSERT_NULL(test, tt->swap_storage);
}

static void ttm_tt_init_misaligned(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	enum ttm_caching caching = ttm_cached;
	uint32_t size = SZ_8K;
	int num_pages = (size + SZ_4K) >> PAGE_SHIFT;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, size);

	/* Make the object size misaligned */
	bo->base.size += 1;

	err = ttm_tt_init(tt, bo, 0, caching, 0);
	KUNIT_ASSERT_EQ(test, err, 0);

	KUNIT_ASSERT_EQ(test, tt->num_pages, num_pages);
}

static void ttm_tt_fini_basic(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	enum ttm_caching caching = ttm_cached;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	err = ttm_tt_init(tt, bo, 0, caching, 0);
	KUNIT_ASSERT_EQ(test, err, 0);
	KUNIT_ASSERT_NOT_NULL(test, tt->pages);

	ttm_tt_fini(tt);
	KUNIT_ASSERT_NULL(test, tt->pages);
}

static void ttm_tt_fini_sg(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	enum ttm_caching caching = ttm_cached;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	err = ttm_sg_tt_init(tt, bo, 0, caching);
	KUNIT_ASSERT_EQ(test, err, 0);
	KUNIT_ASSERT_NOT_NULL(test, tt->dma_address);

	ttm_tt_fini(tt);
	KUNIT_ASSERT_NULL(test, tt->dma_address);
}

static void ttm_tt_fini_shmem(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	struct file *shmem;
	enum ttm_caching caching = ttm_cached;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	err = ttm_tt_init(tt, bo, 0, caching, 0);
	KUNIT_ASSERT_EQ(test, err, 0);

	shmem = shmem_file_setup("ttm swap", BO_SIZE, 0);
	tt->swap_storage = shmem;

	ttm_tt_fini(tt);
	KUNIT_ASSERT_NULL(test, tt->swap_storage);
}

static void ttm_tt_create_basic(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	int err;

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
	bo->type = ttm_bo_type_device;

	dma_resv_lock(bo->base.resv, NULL);
	err = ttm_tt_create(bo, false);
	dma_resv_unlock(bo->base.resv);

	KUNIT_EXPECT_EQ(test, err, 0);
	KUNIT_EXPECT_NOT_NULL(test, bo->ttm);

	/* Free manually, as it was allocated outside of KUnit */
	kfree(bo->ttm);
}

static void ttm_tt_create_invalid_bo_type(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	int err;

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
	bo->type = ttm_bo_type_sg + 1;

	dma_resv_lock(bo->base.resv, NULL);
	err = ttm_tt_create(bo, false);
	dma_resv_unlock(bo->base.resv);

	KUNIT_EXPECT_EQ(test, err, -EINVAL);
	KUNIT_EXPECT_NULL(test, bo->ttm);
}

static void ttm_tt_create_ttm_exists(struct kunit *test)
{
	struct ttm_buffer_object *bo;
	struct ttm_tt *tt;
	enum ttm_caching caching = ttm_cached;
	int err;

	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, tt);

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	err = ttm_tt_init(tt, bo, 0, caching, 0);
	KUNIT_ASSERT_EQ(test, err, 0);
	bo->ttm = tt;

	dma_resv_lock(bo->base.resv, NULL);
	err = ttm_tt_create(bo, false);
	dma_resv_unlock(bo->base.resv);

	/* Expect to keep the previous TTM */
	KUNIT_ASSERT_EQ(test, err, 0);
	KUNIT_ASSERT_PTR_EQ(test, tt, bo->ttm);
}

static struct ttm_tt *ttm_tt_null_create(struct ttm_buffer_object *bo,
					 uint32_t page_flags)
{
	return NULL;
}

static struct ttm_device_funcs ttm_dev_empty_funcs = {
	.ttm_tt_create = ttm_tt_null_create,
};

static void ttm_tt_create_failed(struct kunit *test)
{
	const struct ttm_test_devices *devs = test->priv;
	struct ttm_buffer_object *bo;
	int err;

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	/* Update ttm_device_funcs so we don't alloc ttm_tt */
	devs->ttm_dev->funcs = &ttm_dev_empty_funcs;

	dma_resv_lock(bo->base.resv, NULL);
	err = ttm_tt_create(bo, false);
	dma_resv_unlock(bo->base.resv);

	KUNIT_ASSERT_EQ(test, err, -ENOMEM);
}

static void ttm_tt_destroy_basic(struct kunit *test)
{
	const struct ttm_test_devices *devs = test->priv;
	struct ttm_buffer_object *bo;
	int err;

	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);

	dma_resv_lock(bo->base.resv, NULL);
	err = ttm_tt_create(bo, false);
	dma_resv_unlock(bo->base.resv);

	KUNIT_ASSERT_EQ(test, err, 0);
	KUNIT_ASSERT_NOT_NULL(test, bo->ttm);

	ttm_tt_destroy(devs->ttm_dev, bo->ttm);
}

static struct kunit_case ttm_tt_test_cases[] = {
	KUNIT_CASE_PARAM(ttm_tt_init_basic, ttm_tt_init_basic_gen_params),
	KUNIT_CASE(ttm_tt_init_misaligned),
	KUNIT_CASE(ttm_tt_fini_basic),
	KUNIT_CASE(ttm_tt_fini_sg),
	KUNIT_CASE(ttm_tt_fini_shmem),
	KUNIT_CASE(ttm_tt_create_basic),
	KUNIT_CASE(ttm_tt_create_invalid_bo_type),
	KUNIT_CASE(ttm_tt_create_ttm_exists),
	KUNIT_CASE(ttm_tt_create_failed),
	KUNIT_CASE(ttm_tt_destroy_basic),
	{}
};

static struct kunit_suite ttm_tt_test_suite = {
	.name = "ttm_tt",
	.init = ttm_tt_test_init,
	.exit = ttm_test_devices_fini,
	.test_cases = ttm_tt_test_cases,
};

kunit_test_suites(&ttm_tt_test_suite);

MODULE_LICENSE("GPL");
drivers/gpu/drm/ttm/ttm_resource.c (+3)
···
 #include <drm/ttm/ttm_placement.h>
 #include <drm/ttm/ttm_resource.h>
 
+#include <drm/drm_util.h>
+
 /**
  * ttm_lru_bulk_move_init - initialize a bulk move structure
  * @bulk: the structure to init
···
 	spin_unlock(&bo->bdev->lru_lock);
 	return 0;
 }
+EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_resource_alloc);
 
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
 {
drivers/gpu/drm/ttm/ttm_tt.c (+3)
···
 #include <linux/file.h>
 #include <linux/module.h>
 #include <drm/drm_cache.h>
+#include <drm/drm_util.h>
 #include <drm/ttm/ttm_bo.h>
 #include <drm/ttm/ttm_tt.h>
···
 
 	return 0;
 }
+EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_create);
 
 /*
  * Allocates storage for pointers to the pages that back the ttm.
···
 {
 	bdev->funcs->ttm_tt_destroy(bdev, ttm);
 }
+EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_destroy);
 
 static void ttm_tt_init_fields(struct ttm_tt *ttm,
 			       struct ttm_buffer_object *bo,
drivers/gpu/drm/v3d/v3d_debugfs.c (+15)
···
 	return 0;
 }
 
+static int v3d_debugfs_mm(struct seq_file *m, void *unused)
+{
+	struct drm_printer p = drm_seq_file_printer(m);
+	struct drm_debugfs_entry *entry = m->private;
+	struct drm_device *dev = entry->dev;
+	struct v3d_dev *v3d = to_v3d_dev(dev);
+
+	spin_lock(&v3d->mm_lock);
+	drm_mm_print(&v3d->mm, &p);
+	spin_unlock(&v3d->mm_lock);
+
+	return 0;
+}
+
 static const struct drm_debugfs_info v3d_debugfs_list[] = {
 	{"v3d_ident", v3d_v3d_debugfs_ident, 0},
 	{"v3d_regs", v3d_v3d_debugfs_regs, 0},
 	{"measure_clock", v3d_measure_clock, 0},
 	{"bo_stats", v3d_debugfs_bo_stats, 0},
+	{"v3d_mm", v3d_debugfs_mm, 0},
 };
 
 void
drivers/gpu/drm/vc4/vc4_plane.c (+5 -5)
···
 					struct drm_plane_state *state)
 {
 	struct vc4_bo *bo;
+	int ret;
 
 	if (!state->fb)
 		return 0;
 
 	bo = to_vc4_bo(&drm_fb_dma_get_gem_obj(state->fb, 0)->base);
 
-	drm_gem_plane_helper_prepare_fb(plane, state);
-
-	if (plane->state->fb == state->fb)
-		return 0;
+	ret = drm_gem_plane_helper_prepare_fb(plane, state);
+	if (ret)
+		return ret;
 
 	return vc4_bo_inc_usecnt(bo);
 }
···
 {
 	struct vc4_bo *bo;
 
-	if (plane->state->fb == state->fb || !state->fb)
+	if (!state->fb)
 		return;
 
 	bo = to_vc4_bo(&drm_fb_dma_get_gem_obj(state->fb, 0)->base);
drivers/gpu/drm/virtio/virtgpu_submit.c (+3 -3)
···
 		return 0;
 
 	/*
-	 * kvalloc at first tries to allocate memory using kmalloc and
-	 * falls back to vmalloc only on failure. It also uses __GFP_NOWARN
+	 * kvmalloc() at first tries to allocate memory using kmalloc() and
+	 * falls back to vmalloc() only on failure. It also uses __GFP_NOWARN
 	 * internally for allocations larger than a page size, preventing
 	 * storm of KMSG warnings.
 	 */
···
 	virtio_gpu_submit(&submit);
 
 	/*
-	 * Set up usr-out data after submitting the job to optimize
+	 * Set up user-out data after submitting the job to optimize
 	 * the job submission path.
 	 */
 	virtio_gpu_install_out_fence_fd(&submit);
drivers/gpu/drm/vkms/Kconfig (+15)
··· (new file; every line in this hunk is an addition)

# SPDX-License-Identifier: GPL-2.0-only

config DRM_VKMS
	tristate "Virtual KMS (EXPERIMENTAL)"
	depends on DRM && MMU
	select DRM_KMS_HELPER
	select DRM_GEM_SHMEM_HELPER
	select CRC32
	default n
	help
	  Virtual Kernel Mode-Setting (VKMS) is used for testing or for
	  running GPU in a headless machines. Choose this option to get
	  a VKMS.

	  If M is selected the module will be called vkms.
drivers/gpu/drm/vkms/vkms_composer.c (+10 -4)
···
 				enum lut_channel channel)
 {
 	s64 lut_index = get_lut_index(lut, channel_value);
+	u16 *floor_lut_value, *ceil_lut_value;
+	u16 floor_channel_value, ceil_channel_value;
 
 	/*
 	 * This checks if `struct drm_color_lut` has any gap added by the compiler
···
 	 */
 	static_assert(sizeof(struct drm_color_lut) == sizeof(__u16) * 4);
 
-	u16 *floor_lut_value = (__u16 *)&lut->base[drm_fixp2int(lut_index)];
-	u16 *ceil_lut_value = (__u16 *)&lut->base[drm_fixp2int_ceil(lut_index)];
+	floor_lut_value = (__u16 *)&lut->base[drm_fixp2int(lut_index)];
+	if (drm_fixp2int(lut_index) == (lut->lut_length - 1))
+		/* We're at the end of the LUT array, use same value for ceil and floor */
+		ceil_lut_value = floor_lut_value;
+	else
+		ceil_lut_value = (__u16 *)&lut->base[drm_fixp2int_ceil(lut_index)];
 
-	u16 floor_channel_value = floor_lut_value[channel];
-	u16 ceil_channel_value = ceil_lut_value[channel];
+	floor_channel_value = floor_lut_value[channel];
+	ceil_channel_value = ceil_lut_value[channel];
 
 	return lerp_u16(floor_channel_value, ceil_channel_value,
 			lut_index & DRM_FIXED_DECIMAL_MASK);
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c (+2 -2)
···
  * @sw_context: Pointer to the software context.
  * @res_type: Resource type.
  * @dirty: Whether to change dirty status.
- * @converter: User-space visisble type specific information.
+ * @converter: User-space visible type specific information.
  * @id_loc: Pointer to the location in the command buffer currently being parsed
  * from where the user-space resource id handle is located.
- * @p_res: Pointer to pointer to resource validalidation node. Populated on
+ * @p_res: Pointer to pointer to resource validation node. Populated on
  * exit.
  */
 static int
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c (+4 -1)
···
 	ttm_resource_init(bo, place, *res);
 
 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
-	if (id < 0)
+	if (id < 0) {
+		ttm_resource_fini(man, *res);
+		kfree(*res);
 		return id;
+	}
 
 	spin_lock(&gman->lock);
 
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c (+4)
···
 	int ret = 0;
 
 	if (vps->surf) {
+		if (vps->surf_mapped) {
+			vmw_bo_unmap(vps->surf->res.guest_memory_bo);
+			vps->surf_mapped = false;
+		}
 		vmw_surface_unreference(&vps->surf);
 		vps->surf = NULL;
 	}
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c (+12 -5)
···
  * struct vmw_stdu_dirty - closure structure for the update functions
  *
  * @base: The base type we derive from. Used by vmw_kms_helper_dirty().
- * @transfer: Transfer direction for DMA command.
  * @left: Left side of bounding box.
  * @right: Right side of bounding box.
  * @top: Top side of bounding box.
···
 };
 
 /**
- * struct vmw_screen_target_display_unit
+ * struct vmw_screen_target_display_unit - conglomerated STDU structure
  *
  * @base: VMW specific DU structure
  * @display_srf: surface to be displayed. The dimension of this will always
···
  * @res: Buffer to bind to the screen target. Set to NULL to blank screen.
  *
  * Binding a surface to a Screen Target the same as flipping
+ *
+ * Returns: %0 on success or -errno code on failure
  */
 static int vmw_stdu_bind_st(struct vmw_private *dev_priv,
 			    struct vmw_screen_target_display_unit *stdu,
···
  *
  * @dev_priv: VMW DRM device
  * @stdu: display unit to destroy
+ *
+ * Returns: %0 on success, negative error code on failure. -ERESTARTSYS if
+ * interrupted.
  */
 static int vmw_stdu_destroy_st(struct vmw_private *dev_priv,
 			       struct vmw_screen_target_display_unit *stdu)
···
  * If DMA-ing till the screen target system, the function will also notify
  * the screen target system that a bounding box of the cliprects has been
  * updated.
- * Returns 0 on success, negative error code on failure. -ERESTARTSYS if
+ *
+ * Returns: %0 on success, negative error code on failure. -ERESTARTSYS if
  * interrupted.
  */
 int vmw_kms_stdu_readback(struct vmw_private *dev_priv,
···
  * case the device has already synchronized.
  * @crtc: If crtc is passed, perform surface dirty on that crtc only.
  *
- * Returns 0 on success, negative error code on failure. -ERESTARTSYS if
+ * Returns: %0 on success, negative error code on failure. -ERESTARTSYS if
  * interrupted.
  */
 int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
···
  * backed by a buffer object. The display surface is pinned here, and it'll
  * be unpinned in .cleanup_fb()
  *
- * Returns 0 on success
+ * Returns: %0 on success
  */
 static int
 vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
···
  * This function is called once per CRTC, and allocates one Screen Target
  * display unit to represent that CRTC. Since the SVGA device does not separate
  * out encoder and connector, they are represented as part of the STDU as well.
+ *
+ * Returns: %0 on success or -errno code on failure
  */
 static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit)
 {
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c (-1)
···
  * struct vmw_user_surface - User-space visible surface resource
  *
  * @prime: The TTM prime object.
- * @base: The TTM base object handling user-space visibility.
  * @srf: The surface metadata.
  * @master: Master of the creating client. Used for security check.
  */
drivers/video/fbdev/efifb.c (+76 -60)
···
  */
 #if defined CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER && \
     defined CONFIG_ACPI_BGRT
-static void efifb_copy_bmp(u8 *src, u32 *dst, int width, struct screen_info *si)
+static void efifb_copy_bmp(u8 *src, u32 *dst, int width, const struct screen_info *si)
 {
 	u8 r, g, b;
···
  * resolution still fits, it will be displayed very close to the right edge of
  * the display looking quite bad. This function checks for this.
  */
-static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
+static bool efifb_bgrt_sanity_check(const struct screen_info *si, u32 bmp_width)
 {
 	/*
 	 * All x86 firmwares horizontally center the image (the yoffset
···
 	return bgrt_tab.image_offset_x == expected_xoffset;
 }
 #else
-static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
+static bool efifb_bgrt_sanity_check(const struct screen_info *si, u32 bmp_width)
 {
 	return true;
 }
 #endif
 
-static void efifb_show_boot_graphics(struct fb_info *info)
+static void efifb_show_boot_graphics(struct fb_info *info, const struct screen_info *si)
 {
 	u32 bmp_width, bmp_height, bmp_pitch, dst_x, y, src_y;
-	struct screen_info *si = &screen_info;
 	struct bmp_file_header *file_header;
 	struct bmp_dib_header *dib_header;
 	void *bgrt_image = NULL;
···
 	pr_warn("efifb: Ignoring BGRT: unexpected or invalid BMP data\n");
 }
 #else
-static inline void efifb_show_boot_graphics(struct fb_info *info) {}
+static inline void efifb_show_boot_graphics(struct fb_info *info, const struct screen_info *si)
+{ }
 #endif
 
 /*
···
 	.fb_setcolreg = efifb_setcolreg,
 };
 
-static int efifb_setup(char *options)
+static int efifb_setup(struct screen_info *si, char *options)
 {
 	char *this_opt;
 
···
 	while ((this_opt = strsep(&options, ",")) != NULL) {
 		if (!*this_opt) continue;
 
-		efifb_setup_from_dmi(&screen_info, this_opt);
+		efifb_setup_from_dmi(si, this_opt);
 
 		if (!strncmp(this_opt, "base:", 5))
-			screen_info.lfb_base = simple_strtoul(this_opt+5, NULL, 0);
+			si->lfb_base = simple_strtoul(this_opt+5, NULL, 0);
 		else if (!strncmp(this_opt, "stride:", 7))
-			screen_info.lfb_linelength = simple_strtoul(this_opt+7, NULL, 0) * 4;
+			si->lfb_linelength = simple_strtoul(this_opt+7, NULL, 0) * 4;
 		else if (!strncmp(this_opt, "height:", 7))
-			screen_info.lfb_height = simple_strtoul(this_opt+7, NULL, 0);
+			si->lfb_height = simple_strtoul(this_opt+7, NULL, 0);
 		else if (!strncmp(this_opt, "width:", 6))
-			screen_info.lfb_width = simple_strtoul(this_opt+6, NULL, 0);
+			si->lfb_width = simple_strtoul(this_opt+6, NULL, 0);
 		else if (!strcmp(this_opt, "nowc"))
 			mem_flags &= ~EFI_MEMORY_WC;
 		else if (!strcmp(this_opt, "nobgrt"))
···
 	return 0;
 }
 
-static inline bool fb_base_is_valid(void)
+static inline bool fb_base_is_valid(struct screen_info *si)
 {
-	if (screen_info.lfb_base)
+	if (si->lfb_base)
 		return true;
 
-	if (!(screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE))
+	if (!(si->capabilities & VIDEO_CAPABILITY_64BIT_BASE))
 		return false;
 
-	if (screen_info.ext_lfb_base)
+	if (si->ext_lfb_base)
 		return true;
 
 	return false;
···
 			   struct device_attribute *attr,		\
 			   char *buf)					\
 {									\
-	return sprintf(buf, fmt "\n", (screen_info.lfb_##name));	\
+	struct screen_info *si = dev_get_platdata(dev);			\
+	if (!si)							\
+		return -ENODEV;						\
+	return sprintf(buf, fmt "\n", (si->lfb_##name));		\
 }									\
 static DEVICE_ATTR_RO(name)
···
 
 static int efifb_probe(struct platform_device *dev)
 {
+	struct screen_info *si;
 	struct fb_info *info;
 	struct efifb_par *par;
 	int err, orientation;
···
 	char *option = NULL;
 	efi_memory_desc_t md;
 
-	if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI || pci_dev_disabled)
+	/*
+	 * If we fail probing the device, the kernel might try a different
+	 * driver. We get a copy of the attached screen_info, so that we can
+	 * modify its values without affecting later drivers.
+	 */
+	si = dev_get_platdata(&dev->dev);
+	if (!si)
+		return -ENODEV;
+	si = devm_kmemdup(&dev->dev, si, sizeof(*si), GFP_KERNEL);
+	if (!si)
+		return -ENOMEM;
+
+	if (si->orig_video_isVGA != VIDEO_TYPE_EFI || pci_dev_disabled)
 		return -ENODEV;
 
 	if (fb_get_options("efifb", &option))
 		return -ENODEV;
-	efifb_setup(option);
+	efifb_setup(si, option);
 
 	/* We don't get linelength from UGA Draw Protocol, only from
 	 * EFI Graphics Protocol. So if it's not in DMI, and it's not
 	 * passed in from the user, we really can't use the framebuffer.
 	 */
-	if (!screen_info.lfb_linelength)
+	if (!si->lfb_linelength)
 		return -ENODEV;
 
-	if (!screen_info.lfb_depth)
-		screen_info.lfb_depth = 32;
-	if (!screen_info.pages)
-		screen_info.pages = 1;
-	if (!fb_base_is_valid()) {
+	if (!si->lfb_depth)
+		si->lfb_depth = 32;
+	if (!si->pages)
+		si->pages = 1;
+	if (!fb_base_is_valid(si)) {
 		printk(KERN_DEBUG "efifb: invalid framebuffer address\n");
 		return -ENODEV;
 	}
 	printk(KERN_INFO "efifb: probing for efifb\n");
 
 	/* just assume they're all unset if any are */
-	if (!screen_info.blue_size) {
-		screen_info.blue_size = 8;
-		screen_info.blue_pos = 0;
-		screen_info.green_size = 8;
-		screen_info.green_pos = 8;
-		screen_info.red_size = 8;
-		screen_info.red_pos = 16;
-		screen_info.rsvd_size = 8;
-		screen_info.rsvd_pos = 24;
+	if (!si->blue_size) {
+		si->blue_size = 8;
+		si->blue_pos = 0;
+		si->green_size = 8;
+		si->green_pos = 8;
+		si->red_size = 8;
+		si->red_pos = 16;
+		si->rsvd_size = 8;
+		si->rsvd_pos = 24;
 	}
 
-	efifb_fix.smem_start = screen_info.lfb_base;
+	efifb_fix.smem_start = si->lfb_base;
 
-	if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE) {
+	if (si->capabilities & VIDEO_CAPABILITY_64BIT_BASE) {
 		u64 ext_lfb_base;
 
-		ext_lfb_base = (u64)(unsigned long)screen_info.ext_lfb_base << 32;
+		ext_lfb_base = (u64)(unsigned long)si->ext_lfb_base << 32;
 		efifb_fix.smem_start |= ext_lfb_base;
 	}
 
···
 		efifb_fix.smem_start = bar_resource->start + bar_offset;
 	}
 
-	efifb_defined.bits_per_pixel = screen_info.lfb_depth;
-	efifb_defined.xres = screen_info.lfb_width;
-	efifb_defined.yres = screen_info.lfb_height;
-	efifb_fix.line_length = screen_info.lfb_linelength;
+	efifb_defined.bits_per_pixel = si->lfb_depth;
+	efifb_defined.xres = si->lfb_width;
+	efifb_defined.yres = si->lfb_height;
+	efifb_fix.line_length = si->lfb_linelength;
 
 	/* size_vmode -- that is the amount of memory needed for the
 	 * used video mode, i.e. the minimum amount of
···
 	/* size_total -- all video memory we have. Used for
 	 * entries, ressource allocation and bounds
 	 * checking. */
-	size_total = screen_info.lfb_size;
+	size_total = si->lfb_size;
 	if (size_total < size_vmode)
 		size_total = size_vmode;
 
···
 		goto err_release_fb;
 	}
 
-	efifb_show_boot_graphics(info);
+	efifb_show_boot_graphics(info, si);
 
 	pr_info("efifb: framebuffer at 0x%lx, using %dk, total %dk\n",
 		efifb_fix.smem_start, size_remap/1024, size_total/1024);
 	pr_info("efifb: mode is %dx%dx%d, linelength=%d, pages=%d\n",
 		efifb_defined.xres, efifb_defined.yres,
 		efifb_defined.bits_per_pixel, efifb_fix.line_length,
-		screen_info.pages);
+		si->pages);
 
 	efifb_defined.xres_virtual = efifb_defined.xres;
 	efifb_defined.yres_virtual = efifb_fix.smem_len /
···
 	efifb_defined.left_margin = (efifb_defined.xres / 8) & 0xf8;
 	efifb_defined.hsync_len = (efifb_defined.xres / 8) & 0xf8;
 
-	efifb_defined.red.offset = screen_info.red_pos;
-	efifb_defined.red.length = screen_info.red_size;
-	efifb_defined.green.offset = screen_info.green_pos;
-	efifb_defined.green.length = screen_info.green_size;
-	efifb_defined.blue.offset = screen_info.blue_pos;
-	efifb_defined.blue.length = screen_info.blue_size;
-	efifb_defined.transp.offset = screen_info.rsvd_pos;
-	efifb_defined.transp.length = screen_info.rsvd_size;
+	efifb_defined.red.offset = si->red_pos;
+	efifb_defined.red.length = si->red_size;
+	efifb_defined.green.offset = si->green_pos;
+	efifb_defined.green.length = si->green_size;
+	efifb_defined.blue.offset = si->blue_pos;
+	efifb_defined.blue.length = si->blue_size;
+	efifb_defined.transp.offset = si->rsvd_pos;
+	efifb_defined.transp.length = si->rsvd_size;
 
 	pr_info("efifb: %s: "
 		"size=%d:%d:%d:%d, shift=%d:%d:%d:%d\n",
 		"Truecolor",
-		screen_info.rsvd_size,
-		screen_info.red_size,
-		screen_info.green_size,
-		screen_info.blue_size,
-		screen_info.rsvd_pos,
-		screen_info.red_pos,
-		screen_info.green_pos,
-		screen_info.blue_pos);
+		si->rsvd_size,
+		si->red_size,
+		si->green_size,
+		si->blue_size,
+		si->rsvd_pos,
+		si->red_pos,
+		si->green_pos,
+		si->blue_pos);
 
 	efifb_fix.ypanstep = 0;
 	efifb_fix.ywrapstep = 0;
drivers/video/fbdev/simplefb.c (+1 -1)
···
 	if (err == -ENOENT)
 		return 0;
 
-	dev_info(dev, "failed to parse power-domains: %d\n", err);
+	dev_err(dev, "failed to parse power-domains: %d\n", err);
 	return err;
 }
 
drivers/video/fbdev/vesafb.c (+47 -31)
···
 
 static int vesafb_probe(struct platform_device *dev)
 {
+	struct screen_info *si;
 	struct fb_info *info;
 	struct vesafb_par *par;
 	int i, err;
···
 	unsigned int size_total;
 	char *option = NULL;
 
+	/*
+	 * If we fail probing the device, the kernel might try a different
+	 * driver. We get a copy of the attached screen_info, so that we can
+	 * modify its values without affecting later drivers.
+	 */
+	si = dev_get_platdata(&dev->dev);
+	if (!si)
+		return -ENODEV;
+	si = devm_kmemdup(&dev->dev, si, sizeof(*si), GFP_KERNEL);
+	if (!si)
+		return -ENOMEM;
+
 	/* ignore error return of fb_get_options */
 	fb_get_options("vesafb", &option);
 	vesafb_setup(option);
 
-	if (screen_info.orig_video_isVGA != VIDEO_TYPE_VLFB)
+	if (si->orig_video_isVGA != VIDEO_TYPE_VLFB)
 		return -ENODEV;
 
-	vga_compat = (screen_info.capabilities & 2) ? 0 : 1;
-	vesafb_fix.smem_start = screen_info.lfb_base;
-	vesafb_defined.bits_per_pixel = screen_info.lfb_depth;
+	vga_compat = (si->capabilities & 2) ? 0 : 1;
+	vesafb_fix.smem_start = si->lfb_base;
+	vesafb_defined.bits_per_pixel = si->lfb_depth;
 	if (15 == vesafb_defined.bits_per_pixel)
 		vesafb_defined.bits_per_pixel = 16;
-	vesafb_defined.xres = screen_info.lfb_width;
-	vesafb_defined.yres = screen_info.lfb_height;
-	vesafb_fix.line_length = screen_info.lfb_linelength;
+	vesafb_defined.xres = si->lfb_width;
+	vesafb_defined.yres = si->lfb_height;
+	vesafb_fix.line_length = si->lfb_linelength;
 	vesafb_fix.visual = (vesafb_defined.bits_per_pixel == 8) ?
 		FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
 
···
 	/* size_total -- all video memory we have. Used for mtrr
 	 * entries, resource allocation and bounds
 	 * checking. */
-	size_total = screen_info.lfb_size * 65536;
+	size_total = si->lfb_size * 65536;
 	if (vram_total)
 		size_total = vram_total * 1024 * 1024;
 	if (size_total < size_vmode)
···
 	vesafb_fix.smem_len = size_remap;
 
 #ifndef __i386__
-	screen_info.vesapm_seg = 0;
+	si->vesapm_seg = 0;
 #endif
 
 	if (!request_mem_region(vesafb_fix.smem_start, size_total, "vesafb")) {
···
 	par = info->par;
 	info->pseudo_palette = par->pseudo_palette;
 
-	par->base = screen_info.lfb_base;
+	par->base = si->lfb_base;
 	par->size = size_total;
 
 	printk(KERN_INFO "vesafb: mode is %dx%dx%d, linelength=%d, pages=%d\n",
-	       vesafb_defined.xres, vesafb_defined.yres, vesafb_defined.bits_per_pixel, vesafb_fix.line_length, screen_info.pages);
+	       vesafb_defined.xres, vesafb_defined.yres, vesafb_defined.bits_per_pixel,
+	       vesafb_fix.line_length, si->pages);
 
-	if (screen_info.vesapm_seg) {
+	if (si->vesapm_seg) {
 		printk(KERN_INFO "vesafb: protected mode interface info at %04x:%04x\n",
-		       screen_info.vesapm_seg,screen_info.vesapm_off);
+		       si->vesapm_seg, si->vesapm_off);
 	}
 
-	if (screen_info.vesapm_seg < 0xc000)
+	if (si->vesapm_seg < 0xc000)
 		ypan = pmi_setpal = 0;   /* not available or some DOS TSR ... */
 
 	if (ypan || pmi_setpal) {
+		unsigned long pmi_phys;
 		unsigned short *pmi_base;
-		pmi_base = (unsigned short*)phys_to_virt(((unsigned long)screen_info.vesapm_seg << 4) + screen_info.vesapm_off);
+		pmi_phys = ((unsigned long)si->vesapm_seg << 4) + si->vesapm_off;
+		pmi_base = (unsigned short *)phys_to_virt(pmi_phys);
 		pmi_start = (void*)((char*)pmi_base + pmi_base[1]);
 		pmi_pal = (void*)((char*)pmi_base + pmi_base[2]);
 		printk(KERN_INFO "vesafb: pmi: set display start = %p, set palette = %p\n",pmi_start,pmi_pal);
···
 	vesafb_defined.left_margin = (vesafb_defined.xres / 8) & 0xf8;
 	vesafb_defined.hsync_len = (vesafb_defined.xres / 8) & 0xf8;
 
-	vesafb_defined.red.offset = screen_info.red_pos;
-	vesafb_defined.red.length = screen_info.red_size;
-	vesafb_defined.green.offset = screen_info.green_pos;
-	vesafb_defined.green.length = screen_info.green_size;
-	vesafb_defined.blue.offset = screen_info.blue_pos;
-	vesafb_defined.blue.length = screen_info.blue_size;
-	vesafb_defined.transp.offset = screen_info.rsvd_pos;
-	vesafb_defined.transp.length = screen_info.rsvd_size;
+	vesafb_defined.red.offset = si->red_pos;
+	vesafb_defined.red.length = si->red_size;
+	vesafb_defined.green.offset = si->green_pos;
+	vesafb_defined.green.length = si->green_size;
+	vesafb_defined.blue.offset = si->blue_pos;
+	vesafb_defined.blue.length = si->blue_size;
+	vesafb_defined.transp.offset = si->rsvd_pos;
+	vesafb_defined.transp.length = si->rsvd_size;
 
 	if (vesafb_defined.bits_per_pixel <= 8) {
 		depth = vesafb_defined.green.length;
···
 		(vesafb_defined.bits_per_pixel > 8) ?
 		"Truecolor" : (vga_compat || pmi_setpal) ?
 		"Pseudocolor" : "Static Pseudocolor",
-		screen_info.rsvd_size,
-		screen_info.red_size,
-		screen_info.green_size,
-		screen_info.blue_size,
-		screen_info.rsvd_pos,
-		screen_info.red_pos,
-		screen_info.green_pos,
-		screen_info.blue_pos);
+		si->rsvd_size,
+		si->red_size,
+		si->green_size,
+		si->blue_size,
+		si->rsvd_pos,
+		si->red_pos,
+		si->green_pos,
+		si->blue_pos);
 
 	vesafb_fix.ypanstep = ypan ? 1 : 0;
 	vesafb_fix.ywrapstep = (ypan>1) ? 1 : 0;
include/drm/drm_atomic.h (+59 -11)
···
 };
 
 /**
- * struct drm_atomic_state - the global state object for atomic updates
- * @ref: count of all references to this state (will not be freed until zero)
- * @dev: parent DRM device
- * @async_update: hint for asynchronous plane update
- * @planes: pointer to array of structures with per-plane data
- * @crtcs: pointer to array of CRTC pointers
- * @num_connector: size of the @connectors and @connector_states arrays
- * @connectors: pointer to array of structures with per-connector data
- * @num_private_objs: size of the @private_objs array
- * @private_objs: pointer to array of private object pointers
- * @acquire_ctx: acquire context for this atomic modeset state update
+ * struct drm_atomic_state - Atomic commit structure
+ *
+ * This structure is the kernel counterpart of @drm_mode_atomic and represents
+ * an atomic commit that transitions from an old to a new display state. It
+ * contains all the objects affected by the atomic commit and both the new
+ * state structures and pointers to the old state structures for
+ * these.
  *
  * States are added to an atomic update by calling drm_atomic_get_crtc_state(),
  * drm_atomic_get_plane_state(), drm_atomic_get_connector_state(), or for
  * private state structures, drm_atomic_get_private_obj_state().
  */
 struct drm_atomic_state {
+	/**
+	 * @ref:
+	 *
+	 * Count of all references to this update (will not be freed until zero).
+	 */
 	struct kref ref;
 
+	/**
+	 * @dev: Parent DRM Device.
+	 */
 	struct drm_device *dev;
 
 	/**
···
 	 * flag are not allowed.
 	 */
 	bool legacy_cursor_update : 1;
+
+	/**
+	 * @async_update: hint for asynchronous plane update
+	 */
 	bool async_update : 1;
+
 	/**
 	 * @duplicated:
 	 *
···
 	 * states.
 	 */
 	bool duplicated : 1;
+
+	/**
+	 * @planes:
+	 *
+	 * Pointer to array of @drm_plane and @drm_plane_state part of this
+	 * update.
+	 */
 	struct __drm_planes_state *planes;
+
+	/**
+	 * @crtcs:
+	 *
+	 * Pointer to array of @drm_crtc and @drm_crtc_state part of this
+	 * update.
+	 */
 	struct __drm_crtcs_state *crtcs;
+
+	/**
+	 * @num_connector: size of the @connectors array
+	 */
 	int num_connector;
+
+	/**
+	 * @connectors:
+	 *
+	 * Pointer to array of @drm_connector and @drm_connector_state part of
+	 * this update.
+	 */
 	struct __drm_connnectors_state *connectors;
+
+	/**
+	 * @num_private_objs: size of the @private_objs array
+	 */
 	int num_private_objs;
+
+	/**
+	 * @private_objs:
+	 *
+	 * Pointer to array of @drm_private_obj and @drm_private_obj_state part
+	 * of this update.
+	 */
 	struct __drm_private_objs_state *private_objs;
 
+	/**
+	 * @acquire_ctx: acquire context for this atomic modeset state update
+	 */
 	struct drm_modeset_acquire_ctx *acquire_ctx;
 
 	/**
include/drm/drm_edid.h  +21 -25
···
 #define __DRM_EDID_H__

 #include <linux/types.h>
-#include <linux/hdmi.h>
-#include <drm/drm_mode.h>

+enum hdmi_quantization_range;
+struct drm_connector;
 struct drm_device;
+struct drm_display_mode;
 struct drm_edid;
+struct hdmi_avi_infoframe;
+struct hdmi_vendor_infoframe;
 struct i2c_adapter;

 #define EDID_LENGTH 128
···
         u8 t1;
         u8 t2;
         u8 mfg_rsvd;
-} __attribute__((packed));
+} __packed;

 /* 00=16:10, 01=4:3, 10=5:4, 11=16:9 */
 #define EDID_TIMING_ASPECT_SHIFT 6
···
 struct std_timing {
         u8 hsize; /* need to multiply by 8 then add 248 */
         u8 vfreq_aspect;
-} __attribute__((packed));
+} __packed;

 #define DRM_EDID_PT_HSYNC_POSITIVE (1 << 1)
 #define DRM_EDID_PT_VSYNC_POSITIVE (1 << 2)
···
         u8 hborder;
         u8 vborder;
         u8 misc;
-} __attribute__((packed));
+} __packed;

 /* If it's not pixel timing, it'll be one of the below */
 struct detailed_data_string {
         u8 str[13];
-} __attribute__((packed));
+} __packed;

 #define DRM_EDID_RANGE_OFFSET_MIN_VFREQ (1 << 0) /* 1.4 */
 #define DRM_EDID_RANGE_OFFSET_MAX_VFREQ (1 << 1) /* 1.4 */
···
                         __le16 m;
                         u8 k;
                         u8 j; /* need to divide by 2 */
-                } __attribute__((packed)) gtf2;
+                } __packed gtf2;
                 struct {
                         u8 version;
                         u8 data1; /* high 6 bits: extra clock resolution */
···
                         u8 flags; /* preferred aspect and blanking support */
                         u8 supported_scalings;
                         u8 preferred_refresh;
-                } __attribute__((packed)) cvt;
-        } __attribute__((packed)) formula;
-} __attribute__((packed));
+                } __packed cvt;
+        } __packed formula;
+} __packed;

 struct detailed_data_wpindex {
         u8 white_yx_lo; /* Lower 2 bits each */
         u8 white_x_hi;
         u8 white_y_hi;
         u8 gamma; /* need to divide by 100 then add 1 */
-} __attribute__((packed));
+} __packed;

 struct detailed_data_color_point {
         u8 windex1;
         u8 wpindex1[3];
         u8 windex2;
         u8 wpindex2[3];
-} __attribute__((packed));
+} __packed;

 struct cvt_timing {
         u8 code[3];
-} __attribute__((packed));
+} __packed;

 struct detailed_non_pixel {
         u8 pad1;
···
                 struct detailed_data_wpindex color;
                 struct std_timing timings[6];
                 struct cvt_timing cvt[4];
-        } __attribute__((packed)) data;
-} __attribute__((packed));
+        } __packed data;
+} __packed;

 #define EDID_DETAIL_EST_TIMINGS 0xf7
 #define EDID_DETAIL_CVT_3BYTE 0xf8
···
         union {
                 struct detailed_pixel_timing pixel_data;
                 struct detailed_non_pixel other_data;
-        } __attribute__((packed)) data;
-} __attribute__((packed));
+        } __packed data;
+} __packed;

 #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0)
 #define DRM_EDID_INPUT_SYNC_ON_GREEN (1 << 1)
···
         u8 extensions;
         /* Checksum */
         u8 checksum;
-} __attribute__((packed));
+} __packed;

 #define EDID_PRODUCT_ID(e) ((e)->prod_code[0] | ((e)->prod_code[1] << 8))
···
         u8 freq;
         u8 byte2; /* meaning depends on format */
 };
-
-struct drm_encoder;
-struct drm_connector;
-struct drm_connector_state;
-struct drm_display_mode;

 int drm_edid_to_sad(const struct edid *edid, struct cea_sad **sads);
 int drm_edid_to_speaker_allocation(const struct edid *edid, u8 **sadb);
···
 drm_default_rgb_quant_range(const struct drm_display_mode *mode);
 int drm_add_modes_noedid(struct drm_connector *connector,
                          int hdisplay, int vdisplay);
-void drm_set_preferred_mode(struct drm_connector *connector,
-                            int hpref, int vpref);

 int drm_edid_header_is_valid(const void *edid);
 bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid,
include/drm/drm_fixed.h  +1 -1
···

 static inline int drm_fixp2int_ceil(s64 a)
 {
-        if (a > 0)
+        if (a >= 0)
                 return drm_fixp2int(a + DRM_FIXED_ALMOST_ONE);
         else
                 return drm_fixp2int(a - DRM_FIXED_ALMOST_ONE);
include/drm/drm_modes.h  +2
···
                     const struct drm_display_mode *mode);
 bool drm_mode_is_420(const struct drm_display_info *display,
                      const struct drm_display_mode *mode);
+void drm_set_preferred_mode(struct drm_connector *connector,
+                            int hpref, int vpref);

 struct drm_display_mode *drm_analog_tv_mode(struct drm_device *dev,
                                             enum drm_connector_tv_mode mode,
include/drm/drm_probe_helper.h  -1
···
                                 const struct drm_display_mode *mode,
                                 const struct drm_display_mode *fixed_mode);

-int drm_connector_helper_get_modes_from_ddc(struct drm_connector *connector);
 int drm_connector_helper_get_modes_fixed(struct drm_connector *connector,
                                          const struct drm_display_mode *fixed_mode);
 int drm_connector_helper_get_modes(struct drm_connector *connector);
include/uapi/drm/nouveau_drm.h  +27 -29
···
 struct drm_nouveau_vm_bind_op {
         /**
          * @op: the operation type
+         *
+         * Supported values:
+         *
+         * %DRM_NOUVEAU_VM_BIND_OP_MAP - Map a GEM object to the GPU's VA
+         * space. Optionally, the &DRM_NOUVEAU_VM_BIND_SPARSE flag can be
+         * passed to instruct the kernel to create sparse mappings for the
+         * given range.
+         *
+         * %DRM_NOUVEAU_VM_BIND_OP_UNMAP - Unmap an existing mapping in the
+         * GPU's VA space. If the region the mapping is located in is a
+         * sparse region, new sparse mappings are created where the unmapped
+         * (memory backed) mapping was mapped previously. To remove a sparse
+         * region the &DRM_NOUVEAU_VM_BIND_SPARSE must be set.
          */
         __u32 op;
-        /**
-         * @DRM_NOUVEAU_VM_BIND_OP_MAP:
-         *
-         * Map a GEM object to the GPU's VA space. Optionally, the
-         * &DRM_NOUVEAU_VM_BIND_SPARSE flag can be passed to instruct the kernel to
-         * create sparse mappings for the given range.
-         */
 #define DRM_NOUVEAU_VM_BIND_OP_MAP 0x0
-        /**
-         * @DRM_NOUVEAU_VM_BIND_OP_UNMAP:
-         *
-         * Unmap an existing mapping in the GPU's VA space. If the region the mapping
-         * is located in is a sparse region, new sparse mappings are created where the
-         * unmapped (memory backed) mapping was mapped previously. To remove a sparse
-         * region the &DRM_NOUVEAU_VM_BIND_SPARSE must be set.
-         */
 #define DRM_NOUVEAU_VM_BIND_OP_UNMAP 0x1
         /**
          * @flags: the flags for a &drm_nouveau_vm_bind_op
+         *
+         * Supported values:
+         *
+         * %DRM_NOUVEAU_VM_BIND_SPARSE - Indicates that an allocated VA
+         * space region should be sparse.
          */
         __u32 flags;
-        /**
-         * @DRM_NOUVEAU_VM_BIND_SPARSE:
-         *
-         * Indicates that an allocated VA space region should be sparse.
-         */
 #define DRM_NOUVEAU_VM_BIND_SPARSE (1 << 8)
         /**
          * @handle: the handle of the DRM GEM object to map
···
         __u32 op_count;
         /**
          * @flags: the flags for a &drm_nouveau_vm_bind ioctl
+         *
+         * Supported values:
+         *
+         * %DRM_NOUVEAU_VM_BIND_RUN_ASYNC - Indicates that the given VM_BIND
+         * operation should be executed asynchronously by the kernel.
+         *
+         * If this flag is not supplied the kernel executes the associated
+         * operations synchronously and doesn't accept any &drm_nouveau_sync
+         * objects.
          */
         __u32 flags;
-        /**
-         * @DRM_NOUVEAU_VM_BIND_RUN_ASYNC:
-         *
-         * Indicates that the given VM_BIND operation should be executed asynchronously
-         * by the kernel.
-         *
-         * If this flag is not supplied the kernel executes the associated operations
-         * synchronously and doesn't accept any &drm_nouveau_sync objects.
-         */
 #define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
         /**
          * @wait_count: the number of wait &drm_nouveau_syncs
include/uapi/drm/qaic_accel.h  +1 -12
···
  * @dbc_id: In. Associate the sliced BO with this DBC.
  * @handle: In. GEM handle of the BO to slice.
  * @dir: In. Direction of data flow. 1 = DMA_TO_DEVICE, 2 = DMA_FROM_DEVICE
- * @size: In. Total length of BO being used. This should not exceed base
- *        size of BO (struct drm_gem_object.base)
- *        For BOs being allocated using DRM_IOCTL_QAIC_CREATE_BO, size of
- *        BO requested is PAGE_SIZE aligned then allocated hence allocated
- *        BO size maybe bigger. This size should not exceed the new
- *        PAGE_SIZE aligned BO size.
- * @dev_addr: In. Device address this slice pushes to or pulls from.
- * @db_addr: In. Address of the doorbell to ring.
- * @db_data: In. Data to write to the doorbell.
- * @db_len: In. Size of the doorbell data in bits - 32, 16, or 8. 0 is for
- *          inactive doorbells.
- * @offset: In. Start of this slice as an offset from the start of the BO.
+ * @size: Deprecated. This value is ignored and size of @handle is used instead.
  */
 struct qaic_attach_slice_hdr {
         __u32 count;
include/uapi/linux/virtio_gpu.h  +2
···

 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
+/* 3 is reserved for gfxstream */
+#define VIRTIO_GPU_CAPSET_VENUS 4

 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
 struct virtio_gpu_get_capset_info {