Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2023-09-11-1' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.7-rc1:

UAPI Changes:
- Nouveau was changed to not set the NO_PREFETCH flag explicitly.

Cross-subsystem Changes:
- Update documentation of dma-buf intro and uapi.
- fbdev/sbus fixes.
- Use initializer macros in a lot of fbdev drivers.
- Add Boris Brezillon as Panfrost driver maintainer.
- Add Jessica Zhang as drm/panel reviewer.
- Make more fbdev drivers use fb_ops helpers for deferred io.
- Small hid trailing whitespace fix.
- Use fb_ops in hid/picolcd.

Core Changes:
- Assorted small fixes to ttm tests, drm/mst.
- Documentation updates to bridge.
- Add kunit tests for some drm_fb functions.
- Rework drm_debugfs implementation.
- Update xe documentation to mark todos as completed.

Driver Changes:
- Add support to rockchip for rv1126 mipi-dsi and vop.
- Assorted small fixes to nouveau, bridge/samsung-dsim,
bridge/lvds-codec, loongson, rockchip, panfrost, gma500, repaper,
komeda, virtio, ssd130x.
- Add support for simple panels Mitsubishi AA084XE01 and
  JDI LPM102A188A.
- Documentation updates to accel/ivpu.
- Some nouveau scheduling/fence fixes.
- Power management related fixes and other fixes to ivpu.
- Assorted bridge/it66121 fixes.
- Make platform drivers return void in remove() callback.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/3da6554b-3b47-fe7d-c4ea-21f4f819dbb6@linux.intel.com

+3038 -1289
+94
Documentation/devicetree/bindings/display/panel/jdi,lpm102a188a.yaml
···
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/panel/jdi,lpm102a188a.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: JDI LPM102A188A 2560x1800 10.2" DSI Panel
+
+maintainers:
+  - Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt>
+
+description: |
+  This panel requires a dual-channel DSI host to operate. It supports two modes:
+  - left-right: each channel drives the left or right half of the screen
+  - even-odd: each channel drives the even or odd lines of the screen
+
+  Each of the DSI channels controls a separate DSI peripheral. The peripheral
+  driven by the first link (DSI-LINK1) is considered the primary peripheral
+  and controls the device. The 'link2' property contains a phandle to the
+  peripheral driven by the second link (DSI-LINK2).
+
+allOf:
+  - $ref: panel-common.yaml#
+
+properties:
+  compatible:
+    const: jdi,lpm102a188a
+
+  reg: true
+  enable-gpios: true
+  reset-gpios: true
+  power-supply: true
+  backlight: true
+
+  ddi-supply:
+    description: The regulator that provides IOVCC (1.8V).
+
+  link2:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: |
+      phandle to the DSI peripheral on the secondary link. Note that the
+      presence of this property marks the containing node as DSI-LINK1.
+
+required:
+  - compatible
+  - reg
+
+if:
+  required:
+    - link2
+then:
+  required:
+    - power-supply
+    - ddi-supply
+    - enable-gpios
+    - reset-gpios
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/gpio/tegra-gpio.h>
+
+    dsia: dsi@54300000 {
+        #address-cells = <1>;
+        #size-cells = <0>;
+        reg = <0x0 0x54300000 0x0 0x00040000>;
+
+        link2: panel@0 {
+            compatible = "jdi,lpm102a188a";
+            reg = <0>;
+        };
+    };
+
+    dsib: dsi@54400000 {
+        #address-cells = <1>;
+        #size-cells = <0>;
+        reg = <0x0 0x54400000 0x0 0x00040000>;
+        nvidia,ganged-mode = <&dsia>;
+
+        link1: panel@0 {
+            compatible = "jdi,lpm102a188a";
+            reg = <0>;
+            power-supply = <&pplcd_vdd>;
+            ddi-supply = <&pp1800_lcdio>;
+            enable-gpios = <&gpio TEGRA_GPIO(V, 1) GPIO_ACTIVE_HIGH>;
+            reset-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
+            link2 = <&link2>;
+            backlight = <&backlight>;
+        };
+    };
+
+...
+2
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
···
       - logictechno,lttd800480070-l6wh-rt
       # Mitsubishi "AA070MC01 7.0" WVGA TFT LCD panel
       - mitsubishi,aa070mc01-ca1
+      # Mitsubishi AA084XE01 8.4" XGA TFT LCD panel
+      - mitsubishi,aa084xe01
       # Multi-Inno Technology Co.,Ltd MI0700S4T-6 7" 800x480 TFT Resistive Touch Module
       - multi-inno,mi0700s4t-6
       # Multi-Inno Technology Co.,Ltd MI0800FT-9 8" 800x600 TFT Resistive Touch Module
+2
Documentation/devicetree/bindings/display/rockchip/rockchip,dw-mipi-dsi.yaml
···
           - rockchip,rk3288-mipi-dsi
           - rockchip,rk3399-mipi-dsi
           - rockchip,rk3568-mipi-dsi
+          - rockchip,rv1126-mipi-dsi
       - const: snps,dw-mipi-dsi
 
   interrupts:
···
         enum:
           - rockchip,px30-mipi-dsi
           - rockchip,rk3568-mipi-dsi
+          - rockchip,rv1126-mipi-dsi
 
     then:
       properties:
+1
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.yaml
···
           - rockchip,rk3368-vop
           - rockchip,rk3399-vop-big
           - rockchip,rk3399-vop-lit
+          - rockchip,rv1126-vop
 
   reg:
     minItems: 1
+23 -7
Documentation/driver-api/dma-buf.rst
···
 hardware (DMA) access across multiple device drivers and subsystems, and
 for synchronizing asynchronous hardware access.
 
-This is used, for example, by drm "prime" multi-GPU support, but is of
-course not limited to GPU use cases.
+As an example, it is used extensively by the DRM subsystem to exchange
+buffers between processes, contexts, library APIs within the same
+process, and also to exchange buffers with other subsystems such as
+V4L2.
 
-The three main components of this are: (1) dma-buf, representing a
-sg_table and exposed to userspace as a file descriptor to allow passing
-between devices, (2) fence, which provides a mechanism to signal when
-one device has finished access, and (3) reservation, which manages the
-shared or exclusive fence(s) associated with the buffer.
+This document describes the way in which kernel subsystems can use and
+interact with the three main primitives offered by dma-buf:
+
+- dma-buf, representing a sg_table and exposed to userspace as a file
+  descriptor to allow passing between processes, subsystems, devices,
+  etc;
+- dma-fence, providing a mechanism to signal when an asynchronous
+  hardware operation has completed; and
+- dma-resv, which manages a set of dma-fences for a particular dma-buf
+  allowing implicit (kernel-ordered) synchronization of work to
+  preserve the illusion of coherent access
+
+
+Userspace API principles and use
+--------------------------------
+
+For more details on how to design your subsystem's API for dma-buf use, please
+see Documentation/userspace-api/dma-buf-alloc-exchange.rst.
+
 
 Shared DMA Buffers
 ------------------
+7
Documentation/gpu/drm-uapi.rst
···
 .. kernel-doc:: include/uapi/drm/drm_mode.h
    :internal:
+
+
+dma-buf interoperability
+========================
+
+Please see Documentation/userspace-api/dma-buf-alloc-exchange.rst for
+information on how dma-buf is integrated and exposed within DRM.
+43 -46
Documentation/gpu/rfc/xe.rst
···
 When the time comes for Xe, the protection will be lifted on Xe and kept in i915.
 
-Xe driver will be protected with both STAGING Kconfig and force_probe. Changes in
-the uAPI are expected while the driver is behind these protections. STAGING will
-be removed when the driver uAPI gets to a mature state where we can guarantee the
-‘no regression’ rule. Then force_probe will be lifted only for future platforms
-that will be productized with Xe driver, but not with i915.
-
-Xe – Pre-Merge Goals
-====================
+Xe – Pre-Merge Goals - Work-in-Progress
+=======================================
 
 Drm_scheduler
 -------------
···
 depend on any other patch touching drm_scheduler itself that was not yet merged
 through drm-misc. This, by itself, already includes the reach of an agreement for
 uniform 1 to 1 relationship implementation / usage across drivers.
-
-GPU VA
-------
-Two main goals of Xe are meeting together here:
-
-1) Have an uAPI that aligns with modern UMD needs.
-
-2) Early upstream engagement.
-
-RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
-track of GPU virtual address mappings. This is still not merged upstream, but
-this aligns very well with our goals and with our VM_BIND. The engagement with
-upstream and the port of Xe towards GPUVA is already ongoing.
-
-As a key measurable result, Xe needs to be aligned with the GPU VA and working in
-our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
-related patch should be independent and present on dri-devel or acked by
-maintainers to go along with the first Xe pull request towards drm-next.
-
-DRM_VM_BIND
------------
-Nouveau, and Xe are all implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
-fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
-development of a common new drm_infrastructure. However, the Xe team needs to
-engage with the community to explore the options of a common API.
-
-As a key measurable result, the DRM_VM_BIND needs to be documented in this file
-below, or this entire block deleted if the consensus is for independent drivers
-vm_bind ioctls.
-
-Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
-Xe merged, it is mandatory to enforce the overall locking scheme for all major
-structs and list (so vm and vma). So, a consensus is needed, and possibly some
-common helpers. If helpers are needed, they should be also documented in this
-document.
 
 ASYNC VM_BIND
 -------------
···
 As a key measurable result, we need to have a community consensus documented in
 this document and the Xe driver prepared for the changes, if necessary.
 
+Xe – uAPI high level overview
+=============================
+
+...Warning: To be done in follow up patches after/when/where the main consensus in various items are individually reached.
+
+Xe – Pre-Merge Goals - Completed
+================================
+
 Dev_coredump
 ------------
···
 for better organization of the dumps, snapshot support, dmesg extra print,
 and whatever may make sense and help the overall infrastructure.
 
-Xe – uAPI high level overview
-=============================
+DRM_VM_BIND
+-----------
+Nouveau, and Xe are all implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
+fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
+development of a common new drm_infrastructure. However, the Xe team needs to
+engage with the community to explore the options of a common API.
 
-...Warning: To be done in follow up patches after/when/where the main consensus in various items are individually reached.
+As a key measurable result, the DRM_VM_BIND needs to be documented in this file
+below, or this entire block deleted if the consensus is for independent drivers
+vm_bind ioctls.
+
+Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
+Xe merged, it is mandatory to enforce the overall locking scheme for all major
+structs and list (so vm and vma). So, a consensus is needed, and possibly some
+common helpers. If helpers are needed, they should be also documented in this
+document.
+
+GPU VA
+------
+Two main goals of Xe are meeting together here:
+
+1) Have an uAPI that aligns with modern UMD needs.
+
+2) Early upstream engagement.
+
+RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
+track of GPU virtual address mappings. This is still not merged upstream, but
+this aligns very well with our goals and with our VM_BIND. The engagement with
+upstream and the port of Xe towards GPUVA is already ongoing.
+
+As a key measurable result, Xe needs to be aligned with the GPU VA and working in
+our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
+related patch should be independent and present on dri-devel or acked by
+maintainers to go along with the first Xe pull request towards drm-next.
+389
Documentation/userspace-api/dma-buf-alloc-exchange.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright 2021-2023 Collabora Ltd.
+
+========================
+Exchanging pixel buffers
+========================
+
+As originally designed, the Linux graphics subsystem had extremely limited
+support for sharing pixel-buffer allocations between processes, devices, and
+subsystems. Modern systems require extensive integration between all three
+classes; this document details how applications and kernel subsystems should
+approach this sharing for two-dimensional image data.
+
+It is written with reference to the DRM subsystem for GPU and display devices,
+V4L2 for media devices, and also to Vulkan, EGL and Wayland for userspace
+support; however, any other subsystem should also follow this design and advice.
+
+
+Glossary of terms
+=================
+
+.. glossary::
+
+    image:
+      Conceptually a two-dimensional array of pixels. The pixels may be stored
+      in one or more memory buffers. Has width and height in pixels, pixel
+      format and modifier (implicit or explicit).
+
+    row:
+      A span along a single y-axis value, e.g. from co-ordinates (0,100) to
+      (200,100).
+
+    scanline:
+      Synonym for row.
+
+    column:
+      A span along a single x-axis value, e.g. from co-ordinates (100,0) to
+      (100,100).
+
+    memory buffer:
+      A piece of memory for storing (parts of) pixel data. Has stride and size
+      in bytes and at least one handle in some API. May contain one or more
+      planes.
+
+    plane:
+      A two-dimensional array of some or all of an image's color and alpha
+      channel values.
+
+    pixel:
+      A picture element. Has a single color value which is defined by one or
+      more color channel values, e.g. R, G and B, or Y, Cb and Cr. May also
+      have an alpha value as an additional channel.
+
+    pixel data:
+      Bytes or bits that represent some or all of the color/alpha channel values
+      of a pixel or an image. The data for one pixel may be spread over several
+      planes or memory buffers depending on format and modifier.
+
+    color value:
+      A tuple of numbers, representing a color. Each element in the tuple is a
+      color channel value.
+
+    color channel:
+      One of the dimensions in a color model. For example, the RGB model has
+      channels R, G, and B. The alpha channel is sometimes counted as a color
+      channel as well.
+
+    pixel format:
+      A description of how pixel data represents the pixel's color and alpha
+      values.
+
+    modifier:
+      A description of how pixel data is laid out in memory buffers.
+
+    alpha:
+      A value that denotes the color coverage in a pixel. Sometimes used for
+      translucency instead.
+
+    stride:
+      A value that denotes the relationship between pixel-location co-ordinates
+      and byte-offset values. Typically used as the byte offset between two
+      pixels at the start of vertically-consecutive tiling blocks. For linear
+      layouts, the byte offset between two vertically-adjacent pixels. For
+      non-linear formats the stride must be computed in a consistent way, which
+      is usually done as if the layout were linear.
+
+    pitch:
+      Synonym for stride.
+
+
+Formats and modifiers
+=====================
+
+Each buffer must have an underlying format. This format describes the color
+values provided for each pixel. Although each subsystem has its own format
+descriptions (e.g. V4L2 and fbdev), the ``DRM_FORMAT_*`` tokens should be reused
+wherever possible, as they are the standard descriptions used for interchange.
+These tokens are described in the ``drm_fourcc.h`` file, which is a part of
+DRM's uAPI.
+
+Each ``DRM_FORMAT_*`` token describes the translation between a pixel
+co-ordinate in an image, and the color values for that pixel contained within
+its memory buffers. The number and type of color channels are described:
+whether they are RGB or YUV, integer or floating-point, the size of each channel
+and their locations within the pixel memory, and the relationship between color
+planes.
+
+For example, ``DRM_FORMAT_ARGB8888`` describes a format in which each pixel has
+a single 32-bit value in memory. Alpha, red, green, and blue color channels are
+available at 8-bit precision per channel, ordered respectively from most to
+least significant bits in little-endian storage. ``DRM_FORMAT_*`` is not
+affected by either CPU or device endianness; the byte pattern in memory is
+always as described in the format definition, which is usually little-endian.
+
+As a more complex example, ``DRM_FORMAT_NV12`` describes a format in which luma
+and chroma YUV samples are stored in separate planes, where the chroma plane is
+stored at half the resolution in both dimensions (i.e. one U/V chroma
+sample is stored for each 2x2 pixel grouping).
+
+Format modifiers describe a translation mechanism between these per-pixel memory
+samples, and the actual memory storage for the buffer. The most straightforward
+modifier is ``DRM_FORMAT_MOD_LINEAR``, describing a scheme in which each plane
+is laid out row-sequentially, from the top-left to the bottom-right corner.
+This is considered the baseline interchange format, and the most convenient for
+CPU access.
+
+Modern hardware employs much more sophisticated access mechanisms, typically
+making use of tiled access and possibly also compression. For example, the
+``DRM_FORMAT_MOD_VIVANTE_TILED`` modifier describes memory storage where pixels
+are stored in 4x4 blocks arranged in row-major ordering, i.e. the first tile in
+a plane stores pixels (0,0) to (3,3) inclusive, and the second tile in a plane
+stores pixels (4,0) to (7,3) inclusive.
+
+Some modifiers may modify the number of planes required for an image; for
+example, the ``I915_FORMAT_MOD_Y_TILED_CCS`` modifier adds a second plane to RGB
+formats in which it stores data about the status of every tile, notably
+including whether the tile is fully populated with pixel data, or can be
+expanded from a single solid color.
+
+These extended layouts are highly vendor-specific, and even specific to
+particular generations or configurations of devices per vendor. For this reason,
+support of modifiers must be explicitly enumerated and negotiated by all users
+in order to ensure a compatible and optimal pipeline, as discussed below.
+
+
+Dimensions and size
+===================
+
+Each pixel buffer must be accompanied by logical pixel dimensions. This refers
+to the number of unique samples which can be extracted from, or stored to, the
+underlying memory storage. For example, even though a 1920x1080
+``DRM_FORMAT_NV12`` buffer has a luma plane containing 1920x1080 samples for the
+Y component, and 960x540 samples for the U and V components, the overall buffer
+is still described as having dimensions of 1920x1080.
+
+The in-memory storage of a buffer is not guaranteed to begin immediately at the
+base address of the underlying memory, nor is it guaranteed that the memory
+storage is tightly clipped to either dimension.
+
+Each plane must therefore be described with an ``offset`` in bytes, which will be
+added to the base address of the memory storage before performing any per-pixel
+calculations. This may be used to combine multiple planes into a single memory
+buffer; for example, ``DRM_FORMAT_NV12`` may be stored in a single memory buffer
+where the luma plane's storage begins immediately at the start of the buffer
+with an offset of 0, and the chroma plane's storage follows within the same
+buffer beginning from the byte offset for that plane.
+
+Each plane must also have a ``stride`` in bytes, expressing the offset in memory
+between two contiguous rows. For example, a ``DRM_FORMAT_MOD_LINEAR`` buffer
+with dimensions of 1000x1000 may have been allocated as if it were 1024x1000, in
+order to allow for aligned access patterns. In this case, the buffer will still
+be described with a width of 1000, however the stride will be ``1024 * bpp``,
+indicating that there are 24 pixels at the positive extreme of the x axis whose
+values are not significant.
+
+Buffers may also be padded further in the y dimension, simply by allocating a
+larger area than would ordinarily be required. For example, many media decoders
+are not able to natively output buffers of height 1080, but instead require an
+effective height of 1088 pixels. In this case, the buffer continues to be
+described as having a height of 1080, with the memory allocation for each buffer
+being increased to account for the extra padding.
+
+
+Enumeration
+===========
+
+Every user of pixel buffers must be able to enumerate a set of supported formats
+and modifiers, described together. Within KMS, this is achieved with the
+``IN_FORMATS`` property on each DRM plane, listing the supported DRM formats and
+the modifiers supported for each format. In userspace, this is supported through
+the `EGL_EXT_image_dma_buf_import_modifiers`_ extension entrypoints for EGL, the
+`VK_EXT_image_drm_format_modifier`_ extension for Vulkan, and the
+`zwp_linux_dmabuf_v1`_ extension for Wayland.
+
+Each of these interfaces allows users to query a set of supported
+format+modifier combinations.
+
+
+Negotiation
+===========
+
+It is the responsibility of userspace to negotiate an acceptable format+modifier
+combination for its usage. This is performed through a simple intersection of
+lists. For example, if a user wants to use Vulkan to render an image to be
+displayed on a KMS plane, it must:
+
+- query KMS for the ``IN_FORMATS`` property for the given plane
+- query Vulkan for the supported formats for its physical device, making sure
+  to pass the ``VkImageUsageFlagBits`` and ``VkImageCreateFlagBits``
+  corresponding to the intended rendering use
+- intersect these formats to determine the most appropriate one
+- for this format, intersect the lists of supported modifiers for both KMS and
+  Vulkan, to obtain a final list of acceptable modifiers for that format
+
+This intersection must be performed for all usages. For example, if the user
+also wishes to encode the image to a video stream, it must query the media API
+it intends to use for encoding for the set of modifiers it supports, and
+additionally intersect against this list.
+
+If the intersection of all lists is an empty list, it is not possible to share
+buffers in this way, and an alternate strategy must be considered (e.g. using
+CPU access routines to copy data between the different uses, with the
+corresponding performance cost).
+
+The resulting modifier list is unsorted; the order is not significant.
+
+
+Allocation
+==========
+
+Once userspace has determined an appropriate format, and a corresponding list of
+acceptable modifiers, it must allocate the buffer. As there is no universal
+buffer-allocation interface available at either kernel or userspace level, the
+client makes an arbitrary choice of allocation interface such as Vulkan, GBM, or
+a media API.
+
+Each allocation request must take, at a minimum: the pixel format, a list of
+acceptable modifiers, and the buffer's width and height. Each API may extend
+this set of properties in different ways, such as allowing allocation in more
+than two dimensions, intended usage patterns, etc.
+
+The component which allocates the buffer will make an arbitrary choice of what
+it considers the 'best' modifier within the acceptable list for the requested
+allocation, any padding required, and further properties of the underlying
+memory buffers such as whether they are stored in system or device-specific
+memory, whether or not they are physically contiguous, and their cache mode.
+These properties of the memory buffer are not visible to userspace, however the
+``dma-heaps`` API is an effort to address this.
+
+After allocation, the client must query the allocator to determine the actual
+modifier selected for the buffer, as well as the per-plane offset and stride.
+Allocators are not permitted to vary the format in use, to select a modifier not
+provided within the acceptable list, nor to vary the pixel dimensions other than
+the padding expressed through offset, stride, and size.
+
+Communicating additional constraints, such as alignment of stride or offset,
+placement within a particular memory area, etc., is out of scope of dma-buf,
+and is not solved by format and modifier tokens.
259 + 260 + 261 + Import 262 + ====== 263 + 264 + To use a buffer within a different context, device, or subsystem, the user 265 + passes these parameters (format, modifier, width, height, and per-plane offset 266 + and stride) to an importing API. 267 + 268 + Each memory buffer is referred to by a buffer handle, which may be unique or 269 + duplicated within an image. For example, a ``DRM_FORMAT_NV12`` buffer may have 270 + the luma and chroma buffers combined into a single memory buffer by use of the 271 + per-plane offset parameters, or they may be completely separate allocations in 272 + memory. For this reason, each import and allocation API must provide a separate 273 + handle for each plane. 274 + 275 + Each kernel subsystem has its own types and interfaces for buffer management. 276 + DRM uses GEM buffer objects (BOs), V4L2 has its own references, etc. These types 277 + are not portable between contexts, processes, devices, or subsystems. 278 + 279 + To address this, ``dma-buf`` handles are used as the universal interchange for 280 + buffers. Subsystem-specific operations are used to export native buffer handles 281 + to a ``dma-buf`` file descriptor, and to import those file descriptors into a 282 + native buffer handle. dma-buf file descriptors can be transferred between 283 + contexts, processes, devices, and subsystems. 284 + 285 + For example, a Wayland media player may use V4L2 to decode a video frame into a 286 + ``DRM_FORMAT_NV12`` buffer. This will result in two memory planes (luma and 287 + chroma) being dequeued by the user from V4L2. These planes are then exported to 288 + one dma-buf file descriptor per plane, these descriptors are then sent along 289 + with the metadata (format, modifier, width, height, per-plane offset and stride) 290 + to the Wayland server. 
The Wayland server will then import these file 291 + descriptors as an EGLImage for use through EGL/OpenGL (ES), a VkImage for use 292 + through Vulkan, or a KMS framebuffer object; each of these import operations 293 + will take the same metadata and convert the dma-buf file descriptors into their 294 + native buffer handles. 295 + 296 + Having a non-empty intersection of supported modifiers does not guarantee that 297 + import will succeed into all consumers; they may have constraints beyond those 298 + implied by modifiers which must be satisfied. 299 + 300 + 301 + Implicit modifiers 302 + ================== 303 + 304 + The concept of modifiers post-dates all of the subsystems mentioned above. As 305 + such, it has been retrofitted into all of these APIs, and in order to ensure 306 + backwards compatibility, support is needed for drivers and userspace which do 307 + not (yet) support modifiers. 308 + 309 + As an example, GBM is used to allocate buffers to be shared between EGL for 310 + rendering and KMS for display. It has two entrypoints for allocating buffers: 311 + ``gbm_bo_create`` which only takes the format, width, height, and a usage token, 312 + and ``gbm_bo_create_with_modifiers`` which extends this with a list of modifiers. 313 + 314 + In the latter case, the allocation is as discussed above, being provided with a 315 + list of acceptable modifiers that the implementation can choose from (or fail if 316 + it is not possible to allocate within those constraints). In the former case 317 + where modifiers are not provided, the GBM implementation must make its own 318 + choice as to what is likely to be the 'best' layout. Such a choice is entirely 319 + implementation-specific: some will internally use tiled layouts which are not 320 + CPU-accessible if the implementation decides that is a good idea through 321 + whatever heuristic. It is the implementation's responsibility to ensure that 322 + this choice is appropriate. 
323 + 324 + To support this case where the layout is not known because there is no awareness 325 + of modifiers, a special ``DRM_FORMAT_MOD_INVALID`` token has been defined. This 326 + pseudo-modifier declares that the layout is not known, and that the driver 327 + should use its own logic to determine what the underlying layout may be. 328 + 329 + .. note:: 330 + 331 + ``DRM_FORMAT_MOD_INVALID`` is a non-zero value. The modifier value zero is 332 + ``DRM_FORMAT_MOD_LINEAR``, which is an explicit guarantee that the image 333 + has the linear layout. Care and attention should be taken to ensure that 334 + zero as a default value is not mixed up with either no modifier or the linear 335 + modifier. Also note that in some APIs the invalid modifier value is specified 336 + with an out-of-band flag, like in ``DRM_IOCTL_MODE_ADDFB2``. 337 + 338 + There are four cases where this token may be used: 339 + - during enumeration, an interface may return ``DRM_FORMAT_MOD_INVALID``, either 340 + as the sole member of a modifier list to declare that explicit modifiers are 341 + not supported, or as part of a larger list to declare that implicit modifiers 342 + may be used 343 + - during allocation, a user may supply ``DRM_FORMAT_MOD_INVALID``, either as the 344 + sole member of a modifier list (equivalent to not supplying a modifier list 345 + at all) to declare that explicit modifiers are not supported and must not be 346 + used, or as part of a larger list to declare that an allocation using implicit 347 + modifiers is acceptable 348 + - in a post-allocation query, an implementation may return 349 + ``DRM_FORMAT_MOD_INVALID`` as the modifier of the allocated buffer to declare 350 + that the underlying layout is implementation-defined and that an explicit 351 + modifier description is not available; per the above rules, this may only be 352 + returned when the user has included ``DRM_FORMAT_MOD_INVALID`` as part of the 353 + list of acceptable modifiers, or not provided a list 
354 + - when importing a buffer, the user may supply ``DRM_FORMAT_MOD_INVALID`` as the 355 + buffer modifier (or not supply a modifier) to indicate that the modifier is 356 + unknown for whatever reason; this is only acceptable when the buffer has 357 + not been allocated with an explicit modifier 358 + 359 + It follows from this that for any single buffer, the complete chain of operations 360 + formed by the producer and all the consumers must be either fully implicit or fully 361 + explicit. For example, if a user wishes to allocate a buffer for use between 362 + GPU, display, and media, but the media API does not support modifiers, then the 363 + user **must not** allocate the buffer with explicit modifiers and attempt to 364 + import the buffer into the media API with no modifier, but either perform the 365 + allocation using implicit modifiers, or allocate the buffer for media use 366 + separately and copy between the two buffers. 367 + 368 + As one exception to the above, allocations may be 'upgraded' from implicit 369 + to explicit modifiers. For example, if the buffer is allocated with 370 + ``gbm_bo_create`` (taking no modifiers), the user may then query the modifier with 371 + ``gbm_bo_get_modifier`` and then use this modifier as an explicit modifier token 372 + if a valid modifier is returned. 373 + 374 + When allocating buffers for exchange between different users and modifiers are 375 + not available, implementations are strongly encouraged to use 376 + ``DRM_FORMAT_MOD_LINEAR`` for their allocation, as this is the universal baseline 377 + for exchange. However, it is not guaranteed that this will result in the correct 378 + interpretation of buffer content, as implicit modifier operation may still be 379 + subject to driver-specific heuristics. 
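The chain rule above (fully implicit or fully explicit, with the 'upgrade' exception) can be condensed into a small validity check. This is an illustrative sketch: the two modifier values mirror ``include/uapi/drm/drm_fourcc.h``, but the helper and its name are examples, not kernel or libdrm API.

```c
/* Illustrative check of the implicit-vs-explicit chain rule. The two
 * modifier values mirror include/uapi/drm/drm_fourcc.h; the helper is
 * an example only, not kernel or libdrm API. */
#include <stdint.h>
#include <stdbool.h>

#define EX_FORMAT_MOD_LINEAR  0ULL                  /* explicit: linear layout */
#define EX_FORMAT_MOD_INVALID 0x00ffffffffffffffULL /* layout not known */

/* A buffer allocated with an explicit modifier must carry that same
 * modifier through every import; an implicitly-allocated buffer may be
 * imported implicitly, or 'upgraded' to an explicit modifier queried
 * from the implementation (e.g. via gbm_bo_get_modifier). */
static bool import_modifier_ok(uint64_t alloc_mod, uint64_t import_mod)
{
	if (alloc_mod == EX_FORMAT_MOD_INVALID)
		return true;
	return import_mod == alloc_mod;
}
```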
380 + 381 + Any new users - userspace programs and protocols, kernel subsystems, etc - 382 + wishing to exchange buffers must offer interoperability through dma-buf file 383 + descriptors for memory planes, DRM format tokens to describe the format, DRM 384 + format modifiers to describe the layout in memory, at least width and height for 385 + dimensions, and at least offset and stride for each memory plane. 386 + 387 + .. _zwp_linux_dmabuf_v1: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/unstable/linux-dmabuf/linux-dmabuf-unstable-v1.xml 388 + .. _VK_EXT_image_drm_format_modifier: https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_EXT_image_drm_format_modifier.html 389 + .. _EGL_EXT_image_dma_buf_import_modifiers: https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import_modifiers.txt
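The interoperability requirements in the closing paragraph amount to a small per-buffer metadata bundle. The struct below is purely illustrative (field and type names are invented for this sketch); each real API — ``zwp_linux_dmabuf_v1``, ``VK_EXT_image_drm_format_modifier``, ``DRM_IOCTL_MODE_ADDFB2`` — carries the same information in its own structures.

```c
/* Illustrative bundle of the per-buffer metadata required for
 * interoperable exchange. Names are examples only; real APIs carry
 * the same information in their own structures. */
#include <stdint.h>

#define EX_MAX_PLANES 4 /* DRM formats use at most four planes */

struct ex_memory_plane {
	int      dmabuf_fd; /* dma-buf file descriptor for this plane */
	uint32_t offset;    /* start of the plane within the dma-buf */
	uint32_t stride;    /* bytes per row, including any padding */
};

struct ex_buffer_desc {
	uint32_t drm_format;   /* DRM fourcc format token */
	uint64_t drm_modifier; /* DRM format modifier (layout in memory) */
	uint32_t width, height;
	uint32_t n_planes;
	struct ex_memory_plane planes[EX_MAX_PLANES];
};
```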
+1
Documentation/userspace-api/index.rst
··· 22 22 unshare 23 23 spec_ctrl 24 24 accelerators/ocxl 25 + dma-buf-alloc-exchange 25 26 ebpf/index 26 27 ELF 27 28 ioctl/index
+3 -2
MAINTAINERS
··· 1626 1626 F: drivers/gpu/drm/arm/display/komeda/ 1627 1627 1628 1628 ARM MALI PANFROST DRM DRIVER 1629 + M: Boris Brezillon <boris.brezillon@collabora.com> 1629 1630 M: Rob Herring <robh@kernel.org> 1630 - M: Tomeu Vizoso <tomeu.vizoso@collabora.com> 1631 1631 R: Steven Price <steven.price@arm.com> 1632 - R: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com> 1633 1632 L: dri-devel@lists.freedesktop.org 1634 1633 S: Supported 1635 1634 T: git git://anongit.freedesktop.org/drm/drm-misc ··· 6131 6132 S: Maintained 6132 6133 T: git git://anongit.freedesktop.org/drm/drm-misc 6133 6134 F: Documentation/driver-api/dma-buf.rst 6135 + F: Documentation/userspace-api/dma-buf-alloc-exchange.rst 6134 6136 F: drivers/dma-buf/ 6135 6137 F: include/linux/*fence.h 6136 6138 F: include/linux/dma-buf.h ··· 7138 7138 7139 7139 DRM PANEL DRIVERS 7140 7140 M: Neil Armstrong <neil.armstrong@linaro.org> 7141 + R: Jessica Zhang <quic_jesszhan@quicinc.com> 7141 7142 R: Sam Ravnborg <sam@ravnborg.org> 7142 7143 L: dri-devel@lists.freedesktop.org 7143 7144 S: Maintained
+18 -17
drivers/accel/drm_accel.c
··· 79 79 #define ACCEL_DEBUGFS_ENTRIES ARRAY_SIZE(accel_debugfs_list) 80 80 81 81 /** 82 - * accel_debugfs_init() - Initialize debugfs for accel minor 83 - * @minor: Pointer to the drm_minor instance. 84 - * @minor_id: The minor's id 82 + * accel_debugfs_init() - Initialize debugfs for device 83 + * @dev: Pointer to the device instance. 85 84 * 86 - * This function initializes the drm minor's debugfs members and creates 87 - * a root directory for the minor in debugfs. It also creates common files 88 - * for accelerators and calls the driver's debugfs init callback. 85 + * This function creates a root directory for the device in debugfs. 89 86 */ 90 - void accel_debugfs_init(struct drm_minor *minor, int minor_id) 87 + void accel_debugfs_init(struct drm_device *dev) 91 88 { 92 - struct drm_device *dev = minor->dev; 93 - char name[64]; 89 + drm_debugfs_dev_init(dev, accel_debugfs_root); 90 + } 94 91 95 - INIT_LIST_HEAD(&minor->debugfs_list); 96 - mutex_init(&minor->debugfs_lock); 97 - sprintf(name, "%d", minor_id); 98 - minor->debugfs_root = debugfs_create_dir(name, accel_debugfs_root); 92 + /** 93 + * accel_debugfs_register() - Register debugfs for device 94 + * @dev: Pointer to the device instance. 95 + * 96 + * Creates common files for accelerators. 97 + */ 98 + void accel_debugfs_register(struct drm_device *dev) 99 + { 100 + struct drm_minor *minor = dev->accel; 101 + 102 + minor->debugfs_root = dev->debugfs_root; 99 103 100 104 drm_debugfs_create_files(accel_debugfs_list, ACCEL_DEBUGFS_ENTRIES, 101 - minor->debugfs_root, minor); 102 - 103 - if (dev->driver->debugfs_init) 104 - dev->driver->debugfs_init(minor); 105 + dev->debugfs_root, minor); 105 106 } 106 107 107 108 /**
+20 -45
drivers/accel/ivpu/ivpu_drv.c
··· 518 518 lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key); 519 519 520 520 ret = ivpu_pci_init(vdev); 521 - if (ret) { 522 - ivpu_err(vdev, "Failed to initialize PCI device: %d\n", ret); 521 + if (ret) 523 522 goto err_xa_destroy; 524 - } 525 523 526 524 ret = ivpu_irq_init(vdev); 527 - if (ret) { 528 - ivpu_err(vdev, "Failed to initialize IRQs: %d\n", ret); 525 + if (ret) 529 526 goto err_xa_destroy; 530 - } 531 527 532 528 /* Init basic HW info based on buttress registers which are accessible before power up */ 533 529 ret = ivpu_hw_info_init(vdev); 534 - if (ret) { 535 - ivpu_err(vdev, "Failed to initialize HW info: %d\n", ret); 530 + if (ret) 536 531 goto err_xa_destroy; 537 - } 538 532 539 533 /* Power up early so the rest of init code can access VPU registers */ 540 534 ret = ivpu_hw_power_up(vdev); 541 - if (ret) { 542 - ivpu_err(vdev, "Failed to power up HW: %d\n", ret); 535 + if (ret) 543 536 goto err_xa_destroy; 544 - } 545 537 546 538 ret = ivpu_mmu_global_context_init(vdev); 547 - if (ret) { 548 - ivpu_err(vdev, "Failed to initialize global MMU context: %d\n", ret); 539 + if (ret) 549 540 goto err_power_down; 550 - } 551 541 552 542 ret = ivpu_mmu_init(vdev); 553 - if (ret) { 554 - ivpu_err(vdev, "Failed to initialize MMU device: %d\n", ret); 543 + if (ret) 555 544 goto err_mmu_gctx_fini; 556 - } 545 + 546 + ret = ivpu_mmu_reserved_context_init(vdev); 547 + if (ret) 548 + goto err_mmu_gctx_fini; 557 549 558 550 ret = ivpu_fw_init(vdev); 559 - if (ret) { 560 - ivpu_err(vdev, "Failed to initialize firmware: %d\n", ret); 561 - goto err_mmu_gctx_fini; 562 - } 551 + if (ret) 552 + goto err_mmu_rctx_fini; 563 553 564 554 ret = ivpu_ipc_init(vdev); 565 - if (ret) { 566 - ivpu_err(vdev, "Failed to initialize IPC: %d\n", ret); 555 + if (ret) 567 556 goto err_fw_fini; 568 - } 569 557 570 - ret = ivpu_pm_init(vdev); 571 - if (ret) { 572 - ivpu_err(vdev, "Failed to initialize PM: %d\n", ret); 573 - goto err_ipc_fini; 574 - } 
558 + ivpu_pm_init(vdev); 575 559 576 560 ret = ivpu_job_done_thread_init(vdev); 577 - if (ret) { 578 - ivpu_err(vdev, "Failed to initialize job done thread: %d\n", ret); 561 + if (ret) 579 562 goto err_ipc_fini; 580 - } 581 - 582 - ret = ivpu_fw_load(vdev); 583 - if (ret) { 584 - ivpu_err(vdev, "Failed to load firmware: %d\n", ret); 585 - goto err_job_done_thread_fini; 586 - } 587 563 588 564 ret = ivpu_boot(vdev); 589 - if (ret) { 590 - ivpu_err(vdev, "Failed to boot: %d\n", ret); 565 + if (ret) 591 566 goto err_job_done_thread_fini; 592 - } 593 567 594 568 ivpu_pm_enable(vdev); 595 569 ··· 575 601 ivpu_ipc_fini(vdev); 576 602 err_fw_fini: 577 603 ivpu_fw_fini(vdev); 604 + err_mmu_rctx_fini: 605 + ivpu_mmu_reserved_context_fini(vdev); 578 606 err_mmu_gctx_fini: 579 607 ivpu_mmu_global_context_fini(vdev); 580 608 err_power_down: ··· 600 624 601 625 ivpu_ipc_fini(vdev); 602 626 ivpu_fw_fini(vdev); 627 + ivpu_mmu_reserved_context_fini(vdev); 603 628 ivpu_mmu_global_context_fini(vdev); 604 629 605 630 drm_WARN_ON(&vdev->drm, !xa_empty(&vdev->submitted_jobs_xa)); ··· 628 651 pci_set_drvdata(pdev, vdev); 629 652 630 653 ret = ivpu_dev_init(vdev); 631 - if (ret) { 632 - dev_err(&pdev->dev, "Failed to initialize VPU device: %d\n", ret); 654 + if (ret) 633 655 return ret; 634 - } 635 656 636 657 ret = drm_dev_register(&vdev->drm, 0); 637 658 if (ret) {
+13 -5
drivers/accel/ivpu/ivpu_drv.h
··· 28 28 #define IVPU_HW_37XX 37 29 29 #define IVPU_HW_40XX 40 30 30 31 - #define IVPU_GLOBAL_CONTEXT_MMU_SSID 0 32 - /* SSID 1 is used by the VPU to represent invalid context */ 33 - #define IVPU_USER_CONTEXT_MIN_SSID 2 34 - #define IVPU_USER_CONTEXT_MAX_SSID (IVPU_USER_CONTEXT_MIN_SSID + 63) 31 + #define IVPU_GLOBAL_CONTEXT_MMU_SSID 0 32 + /* SSID 1 is used by the VPU to represent reserved context */ 33 + #define IVPU_RESERVED_CONTEXT_MMU_SSID 1 34 + #define IVPU_USER_CONTEXT_MIN_SSID 2 35 + #define IVPU_USER_CONTEXT_MAX_SSID (IVPU_USER_CONTEXT_MIN_SSID + 63) 35 36 36 - #define IVPU_NUM_ENGINES 2 37 + #define IVPU_NUM_ENGINES 2 37 38 38 39 #define IVPU_PLATFORM_SILICON 0 39 40 #define IVPU_PLATFORM_SIMICS 2 ··· 76 75 77 76 #define IVPU_WA(wa_name) (vdev->wa.wa_name) 78 77 78 + #define IVPU_PRINT_WA(wa_name) do { \ 79 + if (IVPU_WA(wa_name)) \ 80 + ivpu_dbg(vdev, MISC, "Using WA: " #wa_name "\n"); \ 81 + } while (0) 82 + 79 83 struct ivpu_wa_table { 80 84 bool punit_disabled; 81 85 bool clear_runtime_mem; ··· 110 104 struct ivpu_pm_info *pm; 111 105 112 106 struct ivpu_mmu_context gctx; 107 + struct ivpu_mmu_context rctx; 113 108 struct xarray context_xa; 114 109 struct xa_limit context_xa_limit; 115 110 ··· 124 117 int jsm; 125 118 int tdr; 126 119 int reschedule_suspend; 120 + int autosuspend; 127 121 } timeout; 128 122 }; 129 123
+3 -3
drivers/accel/ivpu/ivpu_fw.c
··· 301 301 if (ret) 302 302 goto err_fw_release; 303 303 304 + ivpu_fw_load(vdev); 305 + 304 306 return 0; 305 307 306 308 err_fw_release: ··· 316 314 ivpu_fw_release(vdev); 317 315 } 318 316 319 - int ivpu_fw_load(struct ivpu_device *vdev) 317 + void ivpu_fw_load(struct ivpu_device *vdev) 320 318 { 321 319 struct ivpu_fw_info *fw = vdev->fw; 322 320 u64 image_end_offset = fw->image_load_offset + fw->image_size; ··· 333 331 } 334 332 335 333 wmb(); /* Flush WC buffers after writing fw->mem */ 336 - 337 - return 0; 338 334 } 339 335 340 336 static void ivpu_fw_boot_params_print(struct ivpu_device *vdev, struct vpu_boot_params *boot_params)
+1 -1
drivers/accel/ivpu/ivpu_fw.h
··· 31 31 32 32 int ivpu_fw_init(struct ivpu_device *vdev); 33 33 void ivpu_fw_fini(struct ivpu_device *vdev); 34 - int ivpu_fw_load(struct ivpu_device *vdev); 34 + void ivpu_fw_load(struct ivpu_device *vdev); 35 35 void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params *bp); 36 36 37 37 static inline bool ivpu_fw_is_cold_boot(struct ivpu_device *vdev)
+41 -34
drivers/accel/ivpu/ivpu_hw_37xx.c
··· 104 104 105 105 if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4) 106 106 vdev->wa.interrupt_clear_with_0 = true; 107 + 108 + IVPU_PRINT_WA(punit_disabled); 109 + IVPU_PRINT_WA(clear_runtime_mem); 110 + IVPU_PRINT_WA(d3hot_after_power_off); 111 + IVPU_PRINT_WA(interrupt_clear_with_0); 107 112 } 108 113 109 114 static void ivpu_hw_timeouts_init(struct ivpu_device *vdev) ··· 118 113 vdev->timeout.jsm = 50000; 119 114 vdev->timeout.tdr = 2000000; 120 115 vdev->timeout.reschedule_suspend = 1000; 116 + vdev->timeout.autosuspend = -1; 121 117 } else { 122 118 vdev->timeout.boot = 1000; 123 119 vdev->timeout.jsm = 500; 124 120 vdev->timeout.tdr = 2000; 125 121 vdev->timeout.reschedule_suspend = 10; 122 + vdev->timeout.autosuspend = 10; 126 123 } 127 124 } 128 125 ··· 352 345 353 346 static int ivpu_boot_top_noc_qrenqn_check(struct ivpu_device *vdev, u32 exp_val) 354 347 { 355 - u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN); 348 + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); 356 349 357 - if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || 358 - !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) 350 + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || 351 + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) 359 352 return -EIO; 360 353 361 354 return 0; ··· 363 356 364 357 static int ivpu_boot_top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) 365 358 { 366 - u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QACCEPTN); 359 + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QACCEPTN); 367 360 368 - if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || 369 - !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) 361 + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || 362 + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) 370 363 return -EIO; 371 364 372 365 return 0; ··· 374 367 375 368 
static int ivpu_boot_top_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) 376 369 { 377 - u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QDENY); 370 + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QDENY); 378 371 379 - if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || 380 - !REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) 372 + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || 373 + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) 381 374 return -EIO; 382 375 383 376 return 0; ··· 430 423 int ret; 431 424 u32 val; 432 425 433 - val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN); 426 + val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); 434 427 if (enable) { 435 - val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val); 436 - val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); 428 + val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); 429 + val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); 437 430 } else { 438 - val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val); 439 - val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); 431 + val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); 432 + val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); 440 433 } 441 - REGV_WR32(MTL_VPU_TOP_NOC_QREQN, val); 434 + REGV_WR32(VPU_37XX_TOP_NOC_QREQN, val); 442 435 443 436 ret = ivpu_boot_top_noc_qacceptn_check(vdev, enable ? 
0x1 : 0x0); 444 437 if (ret) { ··· 570 563 { 571 564 u32 val; 572 565 573 - val = REGV_RD32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC); 574 - val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val); 566 + val = REGV_RD32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC); 567 + val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val); 575 568 576 - val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val); 577 - REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 569 + val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val); 570 + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 578 571 579 - val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); 580 - REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 572 + val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); 573 + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 581 574 582 - val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); 583 - REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 575 + val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); 576 + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); 584 577 585 578 val = vdev->fw->entry_point >> 9; 586 579 REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val); ··· 784 777 u32 val; 785 778 786 779 /* Enable writing and set non-zero WDT value */ 787 - REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); 788 - REGV_WR32(MTL_VPU_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); 780 + REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); 781 + REGV_WR32(VPU_37XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); 789 782 790 783 /* Enable writing and disable watchdog timer */ 791 - REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); 792 - REGV_WR32(MTL_VPU_CPU_SS_TIM_WDOG_EN, 0); 784 + REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); 785 + REGV_WR32(VPU_37XX_CPU_SS_TIM_WDOG_EN, 0); 793 
786 794 787 /* Now clear the timeout interrupt */ 795 - val = REGV_RD32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG); 796 - val = REG_CLR_FLD(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); 797 - REGV_WR32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, val); 788 + val = REGV_RD32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG); 789 + val = REG_CLR_FLD(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); 790 + REGV_WR32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, val); 798 791 } 799 792 800 793 static u32 ivpu_hw_37xx_pll_to_freq(u32 ratio, u32 config) ··· 841 834 842 835 static void ivpu_hw_37xx_reg_db_set(struct ivpu_device *vdev, u32 db_id) 843 836 { 844 - u32 reg_stride = MTL_VPU_CPU_SS_DOORBELL_1 - MTL_VPU_CPU_SS_DOORBELL_0; 845 - u32 val = REG_FLD(MTL_VPU_CPU_SS_DOORBELL_0, SET); 837 + u32 reg_stride = VPU_37XX_CPU_SS_DOORBELL_1 - VPU_37XX_CPU_SS_DOORBELL_0; 838 + u32 val = REG_FLD(VPU_37XX_CPU_SS_DOORBELL_0, SET); 846 839 847 - REGV_WR32I(MTL_VPU_CPU_SS_DOORBELL_0, reg_stride, db_id, val); 840 + REGV_WR32I(VPU_37XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val); 848 841 } 849 842 850 843 static u32 ivpu_hw_37xx_reg_ipc_rx_addr_get(struct ivpu_device *vdev) ··· 861 854 862 855 static void ivpu_hw_37xx_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) 863 856 { 864 - REGV_WR32(MTL_VPU_CPU_SS_TIM_IPC_FIFO, vpu_addr); 857 + REGV_WR32(VPU_37XX_CPU_SS_TIM_IPC_FIFO, vpu_addr); 865 858 } 866 859 867 860 static void ivpu_hw_37xx_irq_clear(struct ivpu_device *vdev)
+77 -110
drivers/accel/ivpu/ivpu_hw_37xx_reg.h
··· 3 3 * Copyright (C) 2020-2023 Intel Corporation 4 4 */ 5 5 6 - #ifndef __IVPU_HW_MTL_REG_H__ 7 - #define __IVPU_HW_MTL_REG_H__ 6 + #ifndef __IVPU_HW_37XX_REG_H__ 7 + #define __IVPU_HW_37XX_REG_H__ 8 8 9 9 #include <linux/bits.h> 10 10 11 - #define VPU_37XX_BUTTRESS_INTERRUPT_TYPE 0x00000000u 11 + #define VPU_37XX_BUTTRESS_INTERRUPT_TYPE 0x00000000u 12 12 13 - #define VPU_37XX_BUTTRESS_INTERRUPT_STAT 0x00000004u 14 - #define VPU_37XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0) 13 + #define VPU_37XX_BUTTRESS_INTERRUPT_STAT 0x00000004u 14 + #define VPU_37XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0) 15 15 #define VPU_37XX_BUTTRESS_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1) 16 16 #define VPU_37XX_BUTTRESS_INTERRUPT_STAT_UFI_ERR_MASK BIT_MASK(2) 17 17 18 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000008u 19 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0) 20 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16) 18 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000008u 19 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0) 20 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16) 21 21 22 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1 0x0000000cu 23 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0) 24 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16) 22 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1 0x0000000cu 23 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0) 24 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16) 25 25 26 - #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000010u 26 + #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000010u 27 27 #define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0) 28 28 29 - #define VPU_37XX_BUTTRESS_WP_REQ_CMD 0x00000014u 29 + #define VPU_37XX_BUTTRESS_WP_REQ_CMD 0x00000014u 30 30 #define VPU_37XX_BUTTRESS_WP_REQ_CMD_SEND_MASK 
BIT_MASK(0) 31 31 32 32 #define VPU_37XX_BUTTRESS_WP_DOWNLOAD 0x00000018u 33 33 #define VPU_37XX_BUTTRESS_WP_DOWNLOAD_TARGET_RATIO_MASK GENMASK(15, 0) 34 34 35 35 #define VPU_37XX_BUTTRESS_CURRENT_PLL 0x0000001cu 36 - #define VPU_37XX_BUTTRESS_CURRENT_PLL_RATIO_MASK GENMASK(15, 0) 36 + #define VPU_37XX_BUTTRESS_CURRENT_PLL_RATIO_MASK GENMASK(15, 0) 37 37 38 - #define VPU_37XX_BUTTRESS_PLL_ENABLE 0x00000020u 38 + #define VPU_37XX_BUTTRESS_PLL_ENABLE 0x00000020u 39 39 40 - #define VPU_37XX_BUTTRESS_FMIN_FUSE 0x00000024u 41 - #define VPU_37XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0) 42 - #define VPU_37XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8) 40 + #define VPU_37XX_BUTTRESS_FMIN_FUSE 0x00000024u 41 + #define VPU_37XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0) 42 + #define VPU_37XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8) 43 43 44 - #define VPU_37XX_BUTTRESS_FMAX_FUSE 0x00000028u 45 - #define VPU_37XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0) 44 + #define VPU_37XX_BUTTRESS_FMAX_FUSE 0x00000028u 45 + #define VPU_37XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0) 46 46 47 - #define VPU_37XX_BUTTRESS_TILE_FUSE 0x0000002cu 47 + #define VPU_37XX_BUTTRESS_TILE_FUSE 0x0000002cu 48 48 #define VPU_37XX_BUTTRESS_TILE_FUSE_VALID_MASK BIT_MASK(0) 49 - #define VPU_37XX_BUTTRESS_TILE_FUSE_SKU_MASK GENMASK(3, 2) 49 + #define VPU_37XX_BUTTRESS_TILE_FUSE_SKU_MASK GENMASK(3, 2) 50 50 51 - #define VPU_37XX_BUTTRESS_LOCAL_INT_MASK 0x00000030u 52 - #define VPU_37XX_BUTTRESS_GLOBAL_INT_MASK 0x00000034u 51 + #define VPU_37XX_BUTTRESS_LOCAL_INT_MASK 0x00000030u 52 + #define VPU_37XX_BUTTRESS_GLOBAL_INT_MASK 0x00000034u 53 53 54 - #define VPU_37XX_BUTTRESS_PLL_STATUS 0x00000040u 54 + #define VPU_37XX_BUTTRESS_PLL_STATUS 0x00000040u 55 55 #define VPU_37XX_BUTTRESS_PLL_STATUS_LOCK_MASK BIT_MASK(1) 56 56 57 - #define VPU_37XX_BUTTRESS_VPU_STATUS 0x00000044u 57 + #define VPU_37XX_BUTTRESS_VPU_STATUS 0x00000044u 58 58 #define 
VPU_37XX_BUTTRESS_VPU_STATUS_READY_MASK BIT_MASK(0) 59 59 #define VPU_37XX_BUTTRESS_VPU_STATUS_IDLE_MASK BIT_MASK(1) 60 60 61 - #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL 0x00000060u 62 - #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0) 63 - #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2) 61 + #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL 0x00000060u 62 + #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0) 63 + #define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2) 64 64 65 65 #define VPU_37XX_BUTTRESS_VPU_IP_RESET 0x00000050u 66 - #define VPU_37XX_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0) 66 + #define VPU_37XX_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0) 67 67 68 68 #define VPU_37XX_BUTTRESS_VPU_TELEMETRY_OFFSET 0x00000080u 69 - #define VPU_37XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x00000084u 69 + #define VPU_37XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x00000084u 70 70 #define VPU_37XX_BUTTRESS_VPU_TELEMETRY_ENABLE 0x00000088u 71 71 72 72 #define VPU_37XX_BUTTRESS_ATS_ERR_LOG_0 0x000000a0u ··· 74 74 #define VPU_37XX_BUTTRESS_ATS_ERR_CLEAR 0x000000a8u 75 75 76 76 #define VPU_37XX_BUTTRESS_UFI_ERR_LOG 0x000000b0u 77 - #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0) 78 - #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12) 79 - #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20) 77 + #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0) 78 + #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12) 79 + #define VPU_37XX_BUTTRESS_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20) 80 80 81 81 #define VPU_37XX_BUTTRESS_UFI_ERR_CLEAR 0x000000b4u 82 82 ··· 113 113 #define VPU_37XX_HOST_SS_NOC_QDENY 0x0000015cu 114 114 #define VPU_37XX_HOST_SS_NOC_QDENY_TOP_SOCMMIO_MASK BIT_MASK(0) 115 115 116 - #define MTL_VPU_TOP_NOC_QREQN 0x00000160u 117 - #define MTL_VPU_TOP_NOC_QREQN_CPU_CTRL_MASK BIT_MASK(0) 118 - #define MTL_VPU_TOP_NOC_QREQN_HOSTIF_L2CACHE_MASK BIT_MASK(1) 116 + 
#define VPU_37XX_TOP_NOC_QREQN 0x00000160u 117 + #define VPU_37XX_TOP_NOC_QREQN_CPU_CTRL_MASK BIT_MASK(0) 118 + #define VPU_37XX_TOP_NOC_QREQN_HOSTIF_L2CACHE_MASK BIT_MASK(1) 119 119 120 - #define MTL_VPU_TOP_NOC_QACCEPTN 0x00000164u 121 - #define MTL_VPU_TOP_NOC_QACCEPTN_CPU_CTRL_MASK BIT_MASK(0) 122 - #define MTL_VPU_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_MASK BIT_MASK(1) 120 + #define VPU_37XX_TOP_NOC_QACCEPTN 0x00000164u 121 + #define VPU_37XX_TOP_NOC_QACCEPTN_CPU_CTRL_MASK BIT_MASK(0) 122 + #define VPU_37XX_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_MASK BIT_MASK(1) 123 123 124 - #define MTL_VPU_TOP_NOC_QDENY 0x00000168u 125 - #define MTL_VPU_TOP_NOC_QDENY_CPU_CTRL_MASK BIT_MASK(0) 126 - #define MTL_VPU_TOP_NOC_QDENY_HOSTIF_L2CACHE_MASK BIT_MASK(1) 124 + #define VPU_37XX_TOP_NOC_QDENY 0x00000168u 125 + #define VPU_37XX_TOP_NOC_QDENY_CPU_CTRL_MASK BIT_MASK(0) 126 + #define VPU_37XX_TOP_NOC_QDENY_HOSTIF_L2CACHE_MASK BIT_MASK(1) 127 127 128 128 #define VPU_37XX_HOST_SS_FW_SOC_IRQ_EN 0x00000170u 129 129 #define VPU_37XX_HOST_SS_FW_SOC_IRQ_EN_CSS_ROM_CMX_MASK BIT_MASK(0) ··· 140 140 #define VPU_37XX_HOST_SS_ICB_STATUS_0_TIMER_2_INT_MASK BIT_MASK(2) 141 141 #define VPU_37XX_HOST_SS_ICB_STATUS_0_TIMER_3_INT_MASK BIT_MASK(3) 142 142 #define VPU_37XX_HOST_SS_ICB_STATUS_0_HOST_IPC_FIFO_INT_MASK BIT_MASK(4) 143 - #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_MASK BIT_MASK(5) 144 - #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_MASK BIT_MASK(6) 145 - #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_MASK BIT_MASK(7) 143 + #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_MASK BIT_MASK(5) 144 + #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_MASK BIT_MASK(6) 145 + #define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_MASK BIT_MASK(7) 146 146 #define VPU_37XX_HOST_SS_ICB_STATUS_0_NOC_FIREWALL_INT_MASK BIT_MASK(8) 147 147 #define VPU_37XX_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_0_INT_MASK BIT_MASK(30) 148 148 #define VPU_37XX_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_1_INT_MASK 
BIT_MASK(31) ··· 164 164 #define VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT_FILL_LEVEL_MASK GENMASK(23, 16) 165 165 #define VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT_RSVD0_MASK GENMASK(31, 24) 166 166 167 - #define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0 0x00030020u 167 + #define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0 0x00030020u 168 168 #define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0_MSS_CPU_MASK BIT_MASK(3) 169 169 170 170 #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0 0x00030024u 171 - #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_MASK BIT_MASK(3) 171 + #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_MASK BIT_MASK(3) 172 172 173 173 #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0 0x00030028u 174 - #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_MASK BIT_MASK(3) 174 + #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_MASK BIT_MASK(3) 175 175 176 176 #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0 0x0003002cu 177 177 #define VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0_MSS_CPU_MASK BIT_MASK(3) ··· 187 187 #define VPU_37XX_HOST_SS_LOADING_ADDRESS_LO_IOSF_RS_ID_MASK GENMASK(2, 1) 188 188 #define VPU_37XX_HOST_SS_LOADING_ADDRESS_LO_IMAGE_LOCATION_MASK GENMASK(31, 3) 189 189 190 - #define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR 0x00082020u 190 + #define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR 0x00082020u 191 191 #define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_FINAL_PLL_FREQ_MASK GENMASK(15, 0) 192 192 #define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_CONFIG_ID_MASK GENMASK(31, 16) 193 193 194 - #define VPU_37XX_HOST_MMU_IDR0 0x00200000u 195 - #define VPU_37XX_HOST_MMU_IDR1 0x00200004u 196 - #define VPU_37XX_HOST_MMU_IDR3 0x0020000cu 197 - #define VPU_37XX_HOST_MMU_IDR5 0x00200014u 198 - #define VPU_37XX_HOST_MMU_CR0 0x00200020u 199 - #define VPU_37XX_HOST_MMU_CR0ACK 0x00200024u 200 - #define VPU_37XX_HOST_MMU_CR1 0x00200028u 201 - #define VPU_37XX_HOST_MMU_CR2 0x0020002cu 202 - #define VPU_37XX_HOST_MMU_IRQ_CTRL 0x00200050u 203 - #define VPU_37XX_HOST_MMU_IRQ_CTRLACK 
0x00200054u 204 - 205 - #define VPU_37XX_HOST_MMU_GERROR 0x00200060u 206 - #define VPU_37XX_HOST_MMU_GERROR_CMDQ_MASK BIT_MASK(0) 207 - #define VPU_37XX_HOST_MMU_GERROR_EVTQ_ABT_MASK BIT_MASK(2) 208 - #define VPU_37XX_HOST_MMU_GERROR_PRIQ_ABT_MASK BIT_MASK(3) 209 - #define VPU_37XX_HOST_MMU_GERROR_MSI_CMDQ_ABT_MASK BIT_MASK(4) 210 - #define VPU_37XX_HOST_MMU_GERROR_MSI_EVTQ_ABT_MASK BIT_MASK(5) 211 - #define VPU_37XX_HOST_MMU_GERROR_MSI_PRIQ_ABT_MASK BIT_MASK(6) 212 - #define VPU_37XX_HOST_MMU_GERROR_MSI_ABT_MASK BIT_MASK(7) 213 - 214 - #define VPU_37XX_HOST_MMU_GERRORN 0x00200064u 215 - 216 - #define VPU_37XX_HOST_MMU_STRTAB_BASE 0x00200080u 217 - #define VPU_37XX_HOST_MMU_STRTAB_BASE_CFG 0x00200088u 218 - #define VPU_37XX_HOST_MMU_CMDQ_BASE 0x00200090u 219 - #define VPU_37XX_HOST_MMU_CMDQ_PROD 0x00200098u 220 - #define VPU_37XX_HOST_MMU_CMDQ_CONS 0x0020009cu 221 - #define VPU_37XX_HOST_MMU_EVTQ_BASE 0x002000a0u 222 - #define VPU_37XX_HOST_MMU_EVTQ_PROD 0x002000a8u 223 - #define VPU_37XX_HOST_MMU_EVTQ_CONS 0x002000acu 224 - #define VPU_37XX_HOST_MMU_EVTQ_PROD_SEC (0x002000a8u + SZ_64K) 225 - #define VPU_37XX_HOST_MMU_EVTQ_CONS_SEC (0x002000acu + SZ_64K) 226 - 227 194 #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES 0x00360000u 228 195 #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_CACHE_OVERRIDE_EN_MASK BIT_MASK(0) 229 - #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_MASK BIT_MASK(1) 230 - #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_MASK BIT_MASK(2) 196 + #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_MASK BIT_MASK(1) 197 + #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_MASK BIT_MASK(2) 231 198 #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_NOSNOOP_OVERRIDE_EN_MASK BIT_MASK(3) 232 199 #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AW_NOSNOOP_OVERRIDE_MASK BIT_MASK(4) 233 200 #define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AR_NOSNOOP_OVERRIDE_MASK BIT_MASK(5) ··· 213 246 #define VPU_37XX_HOST_IF_TBU_MMUSSIDV_TBU4_AWMMUSSIDV_MASK 
BIT_MASK(8) 214 247 #define VPU_37XX_HOST_IF_TBU_MMUSSIDV_TBU4_ARMMUSSIDV_MASK BIT_MASK(9) 215 248 216 - #define MTL_VPU_CPU_SS_DSU_LEON_RT_BASE 0x04000000u 217 - #define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_CTRL 0x04000000u 218 - #define MTL_VPU_CPU_SS_DSU_LEON_RT_PC_REG 0x04400010u 219 - #define MTL_VPU_CPU_SS_DSU_LEON_RT_NPC_REG 0x04400014u 220 - #define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_TRAP_REG 0x04400020u 249 + #define VPU_37XX_CPU_SS_DSU_LEON_RT_BASE 0x04000000u 250 + #define VPU_37XX_CPU_SS_DSU_LEON_RT_DSU_CTRL 0x04000000u 251 + #define VPU_37XX_CPU_SS_DSU_LEON_RT_PC_REG 0x04400010u 252 + #define VPU_37XX_CPU_SS_DSU_LEON_RT_NPC_REG 0x04400014u 253 + #define VPU_37XX_CPU_SS_DSU_LEON_RT_DSU_TRAP_REG 0x04400020u 221 254 222 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET 0x06010004u 223 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_MASK BIT_MASK(1) 255 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_CLK_SET 0x06010004u 256 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_MASK BIT_MASK(1) 224 257 225 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR 0x06010018u 226 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_MASK BIT_MASK(1) 258 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_RST_CLR 0x06010018u 259 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_MASK BIT_MASK(1) 227 260 228 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC 0x06010040u 229 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_MASK BIT_MASK(0) 230 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_MASK BIT_MASK(1) 231 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_MASK BIT_MASK(2) 232 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_MASK BIT_MASK(3) 233 - #define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_MASK GENMASK(31, 4) 261 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC 0x06010040u 262 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_MASK BIT_MASK(0) 263 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_MASK BIT_MASK(1) 264 + #define 
VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_MASK BIT_MASK(2) 265 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_MASK BIT_MASK(3) 266 + #define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_MASK GENMASK(31, 4) 234 267 235 - #define MTL_VPU_CPU_SS_TIM_WATCHDOG 0x0602009cu 236 - #define MTL_VPU_CPU_SS_TIM_WDOG_EN 0x060200a4u 237 - #define MTL_VPU_CPU_SS_TIM_SAFE 0x060200a8u 238 - #define MTL_VPU_CPU_SS_TIM_IPC_FIFO 0x060200f0u 268 + #define VPU_37XX_CPU_SS_TIM_WATCHDOG 0x0602009cu 269 + #define VPU_37XX_CPU_SS_TIM_WDOG_EN 0x060200a4u 270 + #define VPU_37XX_CPU_SS_TIM_SAFE 0x060200a8u 271 + #define VPU_37XX_CPU_SS_TIM_IPC_FIFO 0x060200f0u 239 272 240 - #define MTL_VPU_CPU_SS_TIM_GEN_CONFIG 0x06021008u 241 - #define MTL_VPU_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_MASK BIT_MASK(9) 273 + #define VPU_37XX_CPU_SS_TIM_GEN_CONFIG 0x06021008u 274 + #define VPU_37XX_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_MASK BIT_MASK(9) 242 275 243 - #define MTL_VPU_CPU_SS_DOORBELL_0 0x06300000u 244 - #define MTL_VPU_CPU_SS_DOORBELL_0_SET_MASK BIT_MASK(0) 276 + #define VPU_37XX_CPU_SS_DOORBELL_0 0x06300000u 277 + #define VPU_37XX_CPU_SS_DOORBELL_0_SET_MASK BIT_MASK(0) 245 278 246 - #define MTL_VPU_CPU_SS_DOORBELL_1 0x06301000u 279 + #define VPU_37XX_CPU_SS_DOORBELL_1 0x06301000u 247 280 248 - #endif /* __IVPU_HW_MTL_REG_H__ */ 281 + #endif /* __IVPU_HW_37XX_REG_H__ */
+7
drivers/accel/ivpu/ivpu_hw_40xx.c
··· 126 126 127 127 if (ivpu_hw_gen(vdev) == IVPU_HW_40XX) 128 128 vdev->wa.disable_clock_relinquish = true; 129 + 130 + IVPU_PRINT_WA(punit_disabled); 131 + IVPU_PRINT_WA(clear_runtime_mem); 132 + IVPU_PRINT_WA(disable_clock_relinquish); 129 133 } 130 134 131 135 static void ivpu_hw_timeouts_init(struct ivpu_device *vdev) ··· 139 135 vdev->timeout.jsm = 50000; 140 136 vdev->timeout.tdr = 2000000; 141 137 vdev->timeout.reschedule_suspend = 1000; 138 + vdev->timeout.autosuspend = -1; 142 139 } else if (ivpu_is_simics(vdev)) { 143 140 vdev->timeout.boot = 50; 144 141 vdev->timeout.jsm = 500; 145 142 vdev->timeout.tdr = 10000; 146 143 vdev->timeout.reschedule_suspend = 10; 144 + vdev->timeout.autosuspend = -1; 147 145 } else { 148 146 vdev->timeout.boot = 1000; 149 147 vdev->timeout.jsm = 500; 150 148 vdev->timeout.tdr = 2000; 151 149 vdev->timeout.reschedule_suspend = 10; 150 + vdev->timeout.autosuspend = 10; 152 151 } 153 152 } 154 153
+9 -4
drivers/accel/ivpu/ivpu_ipc.c
··· 426 426 int ivpu_ipc_init(struct ivpu_device *vdev) 427 427 { 428 428 struct ivpu_ipc_info *ipc = vdev->ipc; 429 - int ret = -ENOMEM; 429 + int ret; 430 430 431 431 ipc->mem_tx = ivpu_bo_alloc_internal(vdev, 0, SZ_16K, DRM_IVPU_BO_WC); 432 - if (!ipc->mem_tx) 433 - return ret; 432 + if (!ipc->mem_tx) { 433 + ivpu_err(vdev, "Failed to allocate mem_tx\n"); 434 + return -ENOMEM; 435 + } 434 436 435 437 ipc->mem_rx = ivpu_bo_alloc_internal(vdev, 0, SZ_16K, DRM_IVPU_BO_WC); 436 - if (!ipc->mem_rx) 438 + if (!ipc->mem_rx) { 439 + ivpu_err(vdev, "Failed to allocate mem_rx\n"); 440 + ret = -ENOMEM; 437 441 goto err_free_tx; 442 + } 438 443 439 444 ipc->mm_tx = devm_gen_pool_create(vdev->drm.dev, __ffs(IVPU_IPC_ALIGNMENT), 440 445 -1, "TX_IPC_JSM");
+75 -42
drivers/accel/ivpu/ivpu_mmu.c
··· 7 7 #include <linux/highmem.h> 8 8 9 9 #include "ivpu_drv.h" 10 - #include "ivpu_hw_37xx_reg.h" 11 10 #include "ivpu_hw_reg_io.h" 12 11 #include "ivpu_mmu.h" 13 12 #include "ivpu_mmu_context.h" 14 13 #include "ivpu_pm.h" 14 + 15 + #define IVPU_MMU_REG_IDR0 0x00200000u 16 + #define IVPU_MMU_REG_IDR1 0x00200004u 17 + #define IVPU_MMU_REG_IDR3 0x0020000cu 18 + #define IVPU_MMU_REG_IDR5 0x00200014u 19 + #define IVPU_MMU_REG_CR0 0x00200020u 20 + #define IVPU_MMU_REG_CR0ACK 0x00200024u 21 + #define IVPU_MMU_REG_CR1 0x00200028u 22 + #define IVPU_MMU_REG_CR2 0x0020002cu 23 + #define IVPU_MMU_REG_IRQ_CTRL 0x00200050u 24 + #define IVPU_MMU_REG_IRQ_CTRLACK 0x00200054u 25 + 26 + #define IVPU_MMU_REG_GERROR 0x00200060u 27 + #define IVPU_MMU_REG_GERROR_CMDQ_MASK BIT_MASK(0) 28 + #define IVPU_MMU_REG_GERROR_EVTQ_ABT_MASK BIT_MASK(2) 29 + #define IVPU_MMU_REG_GERROR_PRIQ_ABT_MASK BIT_MASK(3) 30 + #define IVPU_MMU_REG_GERROR_MSI_CMDQ_ABT_MASK BIT_MASK(4) 31 + #define IVPU_MMU_REG_GERROR_MSI_EVTQ_ABT_MASK BIT_MASK(5) 32 + #define IVPU_MMU_REG_GERROR_MSI_PRIQ_ABT_MASK BIT_MASK(6) 33 + #define IVPU_MMU_REG_GERROR_MSI_ABT_MASK BIT_MASK(7) 34 + 35 + #define IVPU_MMU_REG_GERRORN 0x00200064u 36 + 37 + #define IVPU_MMU_REG_STRTAB_BASE 0x00200080u 38 + #define IVPU_MMU_REG_STRTAB_BASE_CFG 0x00200088u 39 + #define IVPU_MMU_REG_CMDQ_BASE 0x00200090u 40 + #define IVPU_MMU_REG_CMDQ_PROD 0x00200098u 41 + #define IVPU_MMU_REG_CMDQ_CONS 0x0020009cu 42 + #define IVPU_MMU_REG_EVTQ_BASE 0x002000a0u 43 + #define IVPU_MMU_REG_EVTQ_PROD 0x002000a8u 44 + #define IVPU_MMU_REG_EVTQ_CONS 0x002000acu 45 + #define IVPU_MMU_REG_EVTQ_PROD_SEC (0x002000a8u + SZ_64K) 46 + #define IVPU_MMU_REG_EVTQ_CONS_SEC (0x002000acu + SZ_64K) 47 + #define IVPU_MMU_REG_CMDQ_CONS_ERR_MASK GENMASK(30, 24) 15 48 16 49 #define IVPU_MMU_IDR0_REF 0x080f3e0f 17 50 #define IVPU_MMU_IDR0_REF_SIMICS 0x080f3e1f ··· 219 186 #define IVPU_MMU_REG_TIMEOUT_US (10 * USEC_PER_MSEC) 220 187 #define IVPU_MMU_QUEUE_TIMEOUT_US (100 * 
USEC_PER_MSEC) 221 188 222 - #define IVPU_MMU_GERROR_ERR_MASK ((REG_FLD(VPU_37XX_HOST_MMU_GERROR, CMDQ)) | \ 223 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, EVTQ_ABT)) | \ 224 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, PRIQ_ABT)) | \ 225 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_CMDQ_ABT)) | \ 226 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_EVTQ_ABT)) | \ 227 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_PRIQ_ABT)) | \ 228 - (REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_ABT))) 189 + #define IVPU_MMU_GERROR_ERR_MASK ((REG_FLD(IVPU_MMU_REG_GERROR, CMDQ)) | \ 190 + (REG_FLD(IVPU_MMU_REG_GERROR, EVTQ_ABT)) | \ 191 + (REG_FLD(IVPU_MMU_REG_GERROR, PRIQ_ABT)) | \ 192 + (REG_FLD(IVPU_MMU_REG_GERROR, MSI_CMDQ_ABT)) | \ 193 + (REG_FLD(IVPU_MMU_REG_GERROR, MSI_EVTQ_ABT)) | \ 194 + (REG_FLD(IVPU_MMU_REG_GERROR, MSI_PRIQ_ABT)) | \ 195 + (REG_FLD(IVPU_MMU_REG_GERROR, MSI_ABT))) 229 196 230 197 static char *ivpu_mmu_event_to_str(u32 cmd) 231 198 { ··· 283 250 else 284 251 val_ref = IVPU_MMU_IDR0_REF; 285 252 286 - val = REGV_RD32(VPU_37XX_HOST_MMU_IDR0); 253 + val = REGV_RD32(IVPU_MMU_REG_IDR0); 287 254 if (val != val_ref) 288 255 ivpu_dbg(vdev, MMU, "IDR0 0x%x != IDR0_REF 0x%x\n", val, val_ref); 289 256 290 - val = REGV_RD32(VPU_37XX_HOST_MMU_IDR1); 257 + val = REGV_RD32(IVPU_MMU_REG_IDR1); 291 258 if (val != IVPU_MMU_IDR1_REF) 292 259 ivpu_dbg(vdev, MMU, "IDR1 0x%x != IDR1_REF 0x%x\n", val, IVPU_MMU_IDR1_REF); 293 260 294 - val = REGV_RD32(VPU_37XX_HOST_MMU_IDR3); 261 + val = REGV_RD32(IVPU_MMU_REG_IDR3); 295 262 if (val != IVPU_MMU_IDR3_REF) 296 263 ivpu_dbg(vdev, MMU, "IDR3 0x%x != IDR3_REF 0x%x\n", val, IVPU_MMU_IDR3_REF); 297 264 ··· 302 269 else 303 270 val_ref = IVPU_MMU_IDR5_REF; 304 271 305 - val = REGV_RD32(VPU_37XX_HOST_MMU_IDR5); 272 + val = REGV_RD32(IVPU_MMU_REG_IDR5); 306 273 if (val != val_ref) 307 274 ivpu_dbg(vdev, MMU, "IDR5 0x%x != IDR5_REF 0x%x\n", val, val_ref); 308 275 } ··· 429 396 u32 irq_ctrl = IVPU_MMU_IRQ_EVTQ_EN | IVPU_MMU_IRQ_GERROR_EN; 430 397 int ret; 431 398 432 - ret = 
ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_IRQ_CTRL, 0); 399 + ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_IRQ_CTRL, 0); 433 400 if (ret) 434 401 return ret; 435 402 436 - return ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_IRQ_CTRL, irq_ctrl); 403 + return ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_IRQ_CTRL, irq_ctrl); 437 404 } 438 405 439 406 static int ivpu_mmu_cmdq_wait_for_cons(struct ivpu_device *vdev) 440 407 { 441 408 struct ivpu_mmu_queue *cmdq = &vdev->mmu->cmdq; 442 409 443 - return REGV_POLL(VPU_37XX_HOST_MMU_CMDQ_CONS, cmdq->cons, (cmdq->prod == cmdq->cons), 410 + return REGV_POLL(IVPU_MMU_REG_CMDQ_CONS, cmdq->cons, (cmdq->prod == cmdq->cons), 444 411 IVPU_MMU_QUEUE_TIMEOUT_US); 445 412 } 446 413 ··· 480 447 return ret; 481 448 482 449 clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE); 483 - REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_PROD, q->prod); 450 + REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, q->prod); 484 451 485 452 ret = ivpu_mmu_cmdq_wait_for_cons(vdev); 486 453 if (ret) ··· 528 495 mmu->evtq.prod = 0; 529 496 mmu->evtq.cons = 0; 530 497 531 - ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, 0); 498 + ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, 0); 532 499 if (ret) 533 500 return ret; 534 501 ··· 538 505 FIELD_PREP(IVPU_MMU_CR1_QUEUE_SH, IVPU_MMU_SH_ISH) | 539 506 FIELD_PREP(IVPU_MMU_CR1_QUEUE_OC, IVPU_MMU_CACHE_WB) | 540 507 FIELD_PREP(IVPU_MMU_CR1_QUEUE_IC, IVPU_MMU_CACHE_WB); 541 - REGV_WR32(VPU_37XX_HOST_MMU_CR1, val); 508 + REGV_WR32(IVPU_MMU_REG_CR1, val); 542 509 543 - REGV_WR64(VPU_37XX_HOST_MMU_STRTAB_BASE, mmu->strtab.dma_q); 544 - REGV_WR32(VPU_37XX_HOST_MMU_STRTAB_BASE_CFG, mmu->strtab.base_cfg); 510 + REGV_WR64(IVPU_MMU_REG_STRTAB_BASE, mmu->strtab.dma_q); 511 + REGV_WR32(IVPU_MMU_REG_STRTAB_BASE_CFG, mmu->strtab.base_cfg); 545 512 546 - REGV_WR64(VPU_37XX_HOST_MMU_CMDQ_BASE, mmu->cmdq.dma_q); 547 - REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_PROD, 0); 548 - REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_CONS, 0); 513 + REGV_WR64(IVPU_MMU_REG_CMDQ_BASE, mmu->cmdq.dma_q); 
514 + REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, 0); 515 + REGV_WR32(IVPU_MMU_REG_CMDQ_CONS, 0); 549 516 550 517 val = IVPU_MMU_CR0_CMDQEN; 551 - ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val); 518 + ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val); 552 519 if (ret) 553 520 return ret; 554 521 ··· 564 531 if (ret) 565 532 return ret; 566 533 567 - REGV_WR64(VPU_37XX_HOST_MMU_EVTQ_BASE, mmu->evtq.dma_q); 568 - REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_PROD_SEC, 0); 569 - REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_CONS_SEC, 0); 534 + REGV_WR64(IVPU_MMU_REG_EVTQ_BASE, mmu->evtq.dma_q); 535 + REGV_WR32(IVPU_MMU_REG_EVTQ_PROD_SEC, 0); 536 + REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, 0); 570 537 571 538 val |= IVPU_MMU_CR0_EVTQEN; 572 - ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val); 539 + ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val); 573 540 if (ret) 574 541 return ret; 575 542 576 543 val |= IVPU_MMU_CR0_ATSCHK; 577 - ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val); 544 + ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val); 578 545 if (ret) 579 546 return ret; 580 547 ··· 583 550 return ret; 584 551 585 552 val |= IVPU_MMU_CR0_SMMUEN; 586 - return ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val); 553 + return ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val); 587 554 } 588 555 589 556 static void ivpu_mmu_strtab_link_cd(struct ivpu_device *vdev, u32 sid) ··· 834 801 u32 idx = IVPU_MMU_Q_IDX(evtq->cons); 835 802 u32 *evt = evtq->base + (idx * IVPU_MMU_EVTQ_CMD_SIZE); 836 803 837 - evtq->prod = REGV_RD32(VPU_37XX_HOST_MMU_EVTQ_PROD_SEC); 804 + evtq->prod = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC); 838 805 if (!CIRC_CNT(IVPU_MMU_Q_IDX(evtq->prod), IVPU_MMU_Q_IDX(evtq->cons), IVPU_MMU_Q_COUNT)) 839 806 return NULL; 840 807 841 808 clflush_cache_range(evt, IVPU_MMU_EVTQ_CMD_SIZE); 842 809 843 810 evtq->cons = (evtq->cons + 1) & IVPU_MMU_Q_WRAP_MASK; 844 - REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_CONS_SEC, evtq->cons); 811 + REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, 
evtq->cons); 845 812 846 813 return evt; 847 814 } ··· 874 841 875 842 ivpu_dbg(vdev, IRQ, "MMU error\n"); 876 843 877 - gerror_val = REGV_RD32(VPU_37XX_HOST_MMU_GERROR); 878 - gerrorn_val = REGV_RD32(VPU_37XX_HOST_MMU_GERRORN); 844 + gerror_val = REGV_RD32(IVPU_MMU_REG_GERROR); 845 + gerrorn_val = REGV_RD32(IVPU_MMU_REG_GERRORN); 879 846 880 847 active = gerror_val ^ gerrorn_val; 881 848 if (!(active & IVPU_MMU_GERROR_ERR_MASK)) 882 849 return; 883 850 884 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_ABT, active)) 851 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_ABT, active)) 885 852 ivpu_warn_ratelimited(vdev, "MMU MSI ABT write aborted\n"); 886 853 887 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_PRIQ_ABT, active)) 854 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_PRIQ_ABT, active)) 888 855 ivpu_warn_ratelimited(vdev, "MMU PRIQ MSI ABT write aborted\n"); 889 856 890 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_EVTQ_ABT, active)) 857 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_EVTQ_ABT, active)) 891 858 ivpu_warn_ratelimited(vdev, "MMU EVTQ MSI ABT write aborted\n"); 892 859 893 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_CMDQ_ABT, active)) 860 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_CMDQ_ABT, active)) 894 861 ivpu_warn_ratelimited(vdev, "MMU CMDQ MSI ABT write aborted\n"); 895 862 896 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, PRIQ_ABT, active)) 863 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, PRIQ_ABT, active)) 897 864 ivpu_err_ratelimited(vdev, "MMU PRIQ write aborted\n"); 898 865 899 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, EVTQ_ABT, active)) 866 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, EVTQ_ABT, active)) 900 867 ivpu_err_ratelimited(vdev, "MMU EVTQ write aborted\n"); 901 868 902 - if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, CMDQ, active)) 869 + if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, CMDQ, active)) 903 870 ivpu_err_ratelimited(vdev, "MMU CMDQ write aborted\n"); 904 871 905 - REGV_WR32(VPU_37XX_HOST_MMU_GERRORN, gerror_val); 872 + 
REGV_WR32(IVPU_MMU_REG_GERRORN, gerror_val); 906 873 } 907 874 908 875 int ivpu_mmu_set_pgtable(struct ivpu_device *vdev, int ssid, struct ivpu_mmu_pgtable *pgtable)
+15 -3
drivers/accel/ivpu/ivpu_mmu_context.c
··· 427 427 INIT_LIST_HEAD(&ctx->bo_list); 428 428 429 429 ret = ivpu_mmu_pgtable_init(vdev, &ctx->pgtable); 430 - if (ret) 430 + if (ret) { 431 + ivpu_err(vdev, "Failed to initialize pgtable for ctx %u: %d\n", context_id, ret); 431 432 return ret; 433 + } 432 434 433 435 if (!context_id) { 434 436 start = vdev->hw->ranges.global.start; ··· 469 467 return ivpu_mmu_context_fini(vdev, &vdev->gctx); 470 468 } 471 469 470 + int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev) 471 + { 472 + return ivpu_mmu_user_context_init(vdev, &vdev->rctx, IVPU_RESERVED_CONTEXT_MMU_SSID); 473 + } 474 + 475 + void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev) 476 + { 477 + return ivpu_mmu_user_context_fini(vdev, &vdev->rctx); 478 + } 479 + 472 480 void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid) 473 481 { 474 482 struct ivpu_file_priv *file_priv; ··· 500 488 501 489 ret = ivpu_mmu_context_init(vdev, ctx, ctx_id); 502 490 if (ret) { 503 - ivpu_err(vdev, "Failed to initialize context: %d\n", ret); 491 + ivpu_err(vdev, "Failed to initialize context %u: %d\n", ctx_id, ret); 504 492 return ret; 505 493 } 506 494 507 495 ret = ivpu_mmu_set_pgtable(vdev, ctx_id, &ctx->pgtable); 508 496 if (ret) { 509 - ivpu_err(vdev, "Failed to set page table: %d\n", ret); 497 + ivpu_err(vdev, "Failed to set page table for context %u: %d\n", ctx_id, ret); 510 498 goto err_context_fini; 511 499 } 512 500
+2
drivers/accel/ivpu/ivpu_mmu_context.h
··· 32 32 33 33 int ivpu_mmu_global_context_init(struct ivpu_device *vdev); 34 34 void ivpu_mmu_global_context_fini(struct ivpu_device *vdev); 35 + int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev); 36 + void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev); 35 37 36 38 int ivpu_mmu_user_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u32 ctx_id); 37 39 void ivpu_mmu_user_context_fini(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx);
+8 -8
drivers/accel/ivpu/ivpu_pm.c
··· 282 282 pm_runtime_put_autosuspend(vdev->drm.dev); 283 283 } 284 284 285 - int ivpu_pm_init(struct ivpu_device *vdev) 285 + void ivpu_pm_init(struct ivpu_device *vdev) 286 286 { 287 287 struct device *dev = vdev->drm.dev; 288 288 struct ivpu_pm_info *pm = vdev->pm; 289 + int delay; 289 290 290 291 pm->vdev = vdev; 291 292 pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT; ··· 294 293 atomic_set(&pm->in_reset, 0); 295 294 INIT_WORK(&pm->recovery_work, ivpu_pm_recovery_work); 296 295 297 - pm_runtime_use_autosuspend(dev); 298 - 299 296 if (ivpu_disable_recovery) 300 - pm_runtime_set_autosuspend_delay(dev, -1); 301 - else if (ivpu_is_silicon(vdev)) 302 - pm_runtime_set_autosuspend_delay(dev, 100); 297 + delay = -1; 303 298 else 304 - pm_runtime_set_autosuspend_delay(dev, 60000); 299 + delay = vdev->timeout.autosuspend; 305 300 306 - return 0; 301 + pm_runtime_use_autosuspend(dev); 302 + pm_runtime_set_autosuspend_delay(dev, delay); 303 + 304 + ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay); 307 305 } 308 306 309 307 void ivpu_pm_cancel_recovery(struct ivpu_device *vdev)
+1 -1
drivers/accel/ivpu/ivpu_pm.h
··· 19 19 u32 suspend_reschedule_counter; 20 20 }; 21 21 22 - int ivpu_pm_init(struct ivpu_device *vdev); 22 + void ivpu_pm_init(struct ivpu_device *vdev); 23 23 void ivpu_pm_enable(struct ivpu_device *vdev); 24 24 void ivpu_pm_disable(struct ivpu_device *vdev); 25 25 void ivpu_pm_cancel_recovery(struct ivpu_device *vdev);
+12 -8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 219 219 /* Set correct time_slots/PBN of old payload. 220 220 * other fields (delete & dsc_enabled) in 221 221 * struct drm_dp_mst_atomic_payload are don't care fields 222 - * while calling drm_dp_remove_payload() 222 + * while calling drm_dp_remove_payload_part2() 223 223 */ 224 224 for (i = 0; i < current_link_table.stream_count; i++) { 225 225 dc_alloc = ··· 263 263 264 264 mst_mgr = &aconnector->mst_root->mst_mgr; 265 265 mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state); 266 - 267 - /* It's OK for this to fail */ 268 266 new_payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port); 269 267 270 268 if (enable) { 271 269 target_payload = new_payload; 272 270 271 + /* It's OK for this to fail */ 273 272 drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload); 274 273 } else { 275 274 /* construct old payload by VCPI*/ ··· 276 277 new_payload, &old_payload); 277 278 target_payload = &old_payload; 278 279 279 - drm_dp_remove_payload(mst_mgr, mst_state, &old_payload, new_payload); 280 + drm_dp_remove_payload_part1(mst_mgr, mst_state, new_payload); 280 281 } 281 282 282 283 /* mst_mgr->payloads are VC payload notify MST branch using DPCD or
set_flag = MST_CLEAR_ALLOCATED_PAYLOAD; 363 364 clr_flag = MST_ALLOCATE_NEW_PAYLOAD; 364 365 } 365 366 366 - if (enable) 367 - ret = drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload); 367 + if (enable) { 368 + ret = drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, new_payload); 369 + } else { 370 + dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div, 371 + new_payload, old_payload); 372 + drm_dp_remove_payload_part2(mst_mgr, mst_state, old_payload, new_payload); 373 + } 368 374 369 375 if (ret) { 370 376 amdgpu_dm_set_mst_status(&aconnector->mst_status,
+6 -3
drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
··· 1223 1223 return 0; 1224 1224 } 1225 1225 1226 - static void 1226 + static int 1227 1227 komeda_pipeline_unbound_components(struct komeda_pipeline *pipe, 1228 1228 struct komeda_pipeline_state *new) 1229 1229 { ··· 1243 1243 c = komeda_pipeline_get_component(pipe, id); 1244 1244 c_st = komeda_component_get_state_and_set_user(c, 1245 1245 drm_st, NULL, new->crtc); 1246 + if (PTR_ERR(c_st) == -EDEADLK) 1247 + return -EDEADLK; 1246 1248 WARN_ON(IS_ERR(c_st)); 1247 1249 } 1250 + 1251 + return 0; 1248 1252 } 1249 1253 1250 1254 /* release unclaimed pipeline resource */ ··· 1270 1266 if (WARN_ON(IS_ERR_OR_NULL(st))) 1271 1267 return -EINVAL; 1272 1268 1273 - komeda_pipeline_unbound_components(pipe, st); 1269 + return komeda_pipeline_unbound_components(pipe, st); 1274 1270 1275 - return 0; 1276 1271 } 1277 1272 1278 1273 /* Since standalone disabled components must be disabled separately and in the
+2
drivers/gpu/drm/bridge/Kconfig
··· 181 181 select DRM_KMS_HELPER 182 182 select DRM_MIPI_DSI 183 183 select DRM_PANEL_BRIDGE 184 + select GENERIC_PHY 184 185 select GENERIC_PHY_MIPI_DPHY 185 186 select MFD_SYSCON 186 187 select MULTIPLEXER ··· 228 227 select DRM_KMS_HELPER 229 228 select DRM_MIPI_DSI 230 229 select DRM_PANEL_BRIDGE 230 + select GENERIC_PHY 231 231 select GENERIC_PHY_MIPI_DPHY 232 232 help 233 233 The Samsung MIPI DSIM bridge controller driver.
-9
drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c
··· 1231 1231 1232 1232 mutex_init(&anx78xx->lock); 1233 1233 1234 - #if IS_ENABLED(CONFIG_OF) 1235 1234 anx78xx->bridge.of_node = client->dev.of_node; 1236 - #endif 1237 1235 1238 1236 anx78xx->client = client; 1239 1237 i2c_set_clientdata(client, anx78xx); ··· 1365 1367 kfree(anx78xx->edid); 1366 1368 } 1367 1369 1368 - static const struct i2c_device_id anx78xx_id[] = { 1369 - { "anx7814", 0 }, 1370 - { /* sentinel */ } 1371 - }; 1372 - MODULE_DEVICE_TABLE(i2c, anx78xx_id); 1373 - 1374 1370 static const struct of_device_id anx78xx_match_table[] = { 1375 1371 { .compatible = "analogix,anx7808", .data = anx7808_i2c_addresses }, 1376 1372 { .compatible = "analogix,anx7812", .data = anx781x_i2c_addresses }, ··· 1381 1389 }, 1382 1390 .probe = anx78xx_i2c_probe, 1383 1391 .remove = anx78xx_i2c_remove, 1384 - .id_table = anx78xx_id, 1385 1392 }; 1386 1393 module_i2c_driver(anx78xx_driver); 1387 1394
+1
drivers/gpu/drm/bridge/cadence/Kconfig
··· 4 4 select DRM_KMS_HELPER 5 5 select DRM_MIPI_DSI 6 6 select DRM_PANEL_BRIDGE 7 + select GENERIC_PHY 7 8 select GENERIC_PHY_MIPI_DPHY 8 9 depends on OF 9 10 help
+16 -13
drivers/gpu/drm/bridge/ite-it66121.c
··· 1447 1447 struct it66121_ctx *ctx = dev_get_drvdata(dev); 1448 1448 1449 1449 mutex_lock(&ctx->lock); 1450 - 1451 - memcpy(buf, ctx->connector->eld, 1452 - min(sizeof(ctx->connector->eld), len)); 1453 - 1450 + if (!ctx->connector) { 1451 + /* Pass an empty ELD if connector not available */ 1452 + dev_dbg(dev, "No connector present, passing empty ELD data"); 1453 + memset(buf, 0, len); 1454 + } else { 1455 + memcpy(buf, ctx->connector->eld, 1456 + min(sizeof(ctx->connector->eld), len)); 1457 + } 1454 1458 mutex_unlock(&ctx->lock); 1455 1459 1456 1460 return 0; ··· 1505 1501 1506 1502 static int it66121_probe(struct i2c_client *client) 1507 1503 { 1508 - const struct i2c_device_id *id = i2c_client_get_device_id(client); 1509 1504 u32 revision_id, vendor_ids[2] = { 0 }, device_ids[2] = { 0 }; 1510 1505 struct device_node *ep; 1511 1506 int ret; ··· 1526 1523 1527 1524 ctx->dev = dev; 1528 1525 ctx->client = client; 1529 - ctx->info = (const struct it66121_chip_info *) id->driver_data; 1526 + ctx->info = i2c_get_match_data(client); 1530 1527 of_property_read_u32(ep, "bus-width", &ctx->bus_width); 1532 1529 of_node_put(ep); ··· 1612 1609 mutex_destroy(&ctx->lock); 1613 1610 } 1614 1611 1615 - static const struct of_device_id it66121_dt_match[] = { 1616 - { .compatible = "ite,it66121" }, 1617 - { .compatible = "ite,it6610" }, 1618 - { } 1619 - }; 1620 - MODULE_DEVICE_TABLE(of, it66121_dt_match); 1621 - 1622 1612 static const struct it66121_chip_info it66121_chip_info = { 1623 1613 .id = ID_IT66121, 1624 1614 .vid = 0x4954, ··· 1623 1627 .vid = 0xca00, 1624 1628 .pid = 0x0611, 1625 1629 }; 1630 + 1631 + static const struct of_device_id it66121_dt_match[] = { 1632 + { .compatible = "ite,it66121", &it66121_chip_info }, 1633 + { .compatible = "ite,it6610", &it6610_chip_info }, 1634 + { } 1635 + }; 1636 + MODULE_DEVICE_TABLE(of, it66121_dt_match); 1626 1637 1627 1638 static const struct i2c_device_id it66121_id[] = { 1628 1639 { "it66121", (kernel_ulong_t)
&it66121_chip_info },
+10 -12
drivers/gpu/drm/bridge/lontium-lt8912b.c
··· 45 45 46 46 u8 data_lanes; 47 47 bool is_power_on; 48 - bool is_attached; 49 48 }; 50 49 51 50 static int lt8912_write_init_config(struct lt8912 *lt) ··· 558 559 struct lt8912 *lt = bridge_to_lt8912(bridge); 559 560 int ret; 560 561 562 + ret = drm_bridge_attach(bridge->encoder, lt->hdmi_port, bridge, 563 + DRM_BRIDGE_ATTACH_NO_CONNECTOR); 564 + if (ret < 0) { 565 + dev_err(lt->dev, "Failed to attach next bridge (%d)\n", ret); 566 + return ret; 567 + } 568 + 561 569 if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) { 562 570 ret = lt8912_bridge_connector_init(bridge); 563 571 if (ret) { ··· 581 575 if (ret) 582 576 goto error; 583 577 584 - lt->is_attached = true; 585 - 586 578 return 0; 587 579 588 580 error: ··· 592 588 { 593 589 struct lt8912 *lt = bridge_to_lt8912(bridge); 594 590 595 - if (lt->is_attached) { 596 - lt8912_hard_power_off(lt); 591 + lt8912_hard_power_off(lt); 597 592 598 - if (lt->hdmi_port->ops & DRM_BRIDGE_OP_HPD) 599 - drm_bridge_hpd_disable(lt->hdmi_port); 600 - 601 - drm_connector_unregister(&lt->connector); 602 - drm_connector_cleanup(&lt->connector); 603 - } 593 + if (lt->connector.dev && lt->hdmi_port->ops & DRM_BRIDGE_OP_HPD) 594 + drm_bridge_hpd_disable(lt->hdmi_port); 604 595 } 605 596 606 597 static enum drm_connector_status ··· 749 750 { 750 751 struct lt8912 *lt = i2c_get_clientdata(client); 751 752 752 - lt8912_bridge_detach(&lt->bridge); 753 753 drm_bridge_remove(&lt->bridge); 754 754 lt8912_free_i2c(lt); 755 755 lt8912_put_dt(lt);
+4 -8
drivers/gpu/drm/bridge/lvds-codec.c
··· 5 5 */ 6 6 7 7 #include <linux/gpio/consumer.h> 8 + #include <linux/media-bus-format.h> 8 9 #include <linux/module.h> 9 10 #include <linux/of.h> 10 11 #include <linux/of_graph.h> ··· 72 71 "Failed to disable regulator \"vcc\": %d\n", ret); 73 72 } 74 73 75 - static const struct drm_bridge_funcs funcs = { 76 - .attach = lvds_codec_attach, 77 - .enable = lvds_codec_enable, 78 - .disable = lvds_codec_disable, 79 - }; 80 - 81 74 #define MAX_INPUT_SEL_FORMATS 1 82 75 static u32 * 83 76 lvds_codec_atomic_get_input_bus_fmts(struct drm_bridge *bridge, ··· 97 102 return input_fmts; 98 103 } 99 104 100 - static const struct drm_bridge_funcs funcs_decoder = { 105 + static const struct drm_bridge_funcs funcs = { 101 106 .attach = lvds_codec_attach, 102 107 .enable = lvds_codec_enable, 103 108 .disable = lvds_codec_disable, ··· 179 184 return ret; 180 185 } else { 181 186 lvds_codec->bus_format = ret; 182 - lvds_codec->bridge.funcs = &funcs_decoder; 183 187 } 188 + } else { 189 + lvds_codec->bus_format = MEDIA_BUS_FMT_RGB888_1X24; 184 190 } 185 191 186 192 /*
+16 -2
drivers/gpu/drm/bridge/panel.c
··· 4 4 * Copyright (C) 2017 Broadcom 5 5 */ 6 6 7 + #include <linux/device.h> 8 + 7 9 #include <drm/drm_atomic_helper.h> 8 10 #include <drm/drm_bridge.h> 9 11 #include <drm/drm_connector.h> ··· 21 19 struct drm_bridge bridge; 22 20 struct drm_connector connector; 23 21 struct drm_panel *panel; 22 + struct device_link *link; 24 23 u32 connector_type; 25 24 }; 26 25 ··· 63 60 { 64 61 struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); 65 62 struct drm_connector *connector = &panel_bridge->connector; 63 + struct drm_panel *panel = panel_bridge->panel; 64 + struct drm_device *drm_dev = bridge->dev; 66 65 int ret; 67 66 68 67 if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) ··· 75 70 return -ENODEV; 76 71 } 77 72 73 + panel_bridge->link = device_link_add(drm_dev->dev, panel->dev, 74 + DL_FLAG_STATELESS); 75 + if (!panel_bridge->link) { 76 + DRM_ERROR("Failed to add device link between %s and %s\n", 77 + dev_name(drm_dev->dev), dev_name(panel->dev)); 78 + return -EINVAL; 79 + } 80 + 78 81 drm_connector_helper_add(connector, 79 82 &panel_bridge_connector_helper_funcs); 80 83 ··· 91 78 panel_bridge->connector_type); 92 79 if (ret) { 93 80 DRM_ERROR("Failed to initialize connector\n"); 81 + device_link_del(panel_bridge->link); 94 82 return ret; 95 83 } 96 84 ··· 113 99 { 114 100 struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); 115 101 struct drm_connector *connector = &panel_bridge->connector; 102 + 103 + device_link_del(panel_bridge->link); 116 104 117 105 /* 118 106 * Cleanup the connector if we know it was initialized. ··· 318 302 panel_bridge->panel = panel; 319 303 320 304 panel_bridge->bridge.funcs = &panel_bridge_bridge_funcs; 321 - #ifdef CONFIG_OF 322 305 panel_bridge->bridge.of_node = panel->dev->of_node; 323 - #endif 324 306 panel_bridge->bridge.ops = DRM_BRIDGE_OP_MODES; 325 307 panel_bridge->bridge.type = connector_type; 326 308
+17 -3
drivers/gpu/drm/bridge/samsung-dsim.c
··· 385 385 [RESET_TYPE] = DSIM_SWRST, 386 386 [PLL_TIMER] = 500, 387 387 [STOP_STATE_CNT] = 0xf, 388 - [PHYCTRL_ULPS_EXIT] = 0, 388 + [PHYCTRL_ULPS_EXIT] = DSIM_PHYCTRL_ULPS_EXIT(0xaf), 389 389 [PHYCTRL_VREG_LP] = 0, 390 390 [PHYCTRL_SLEW_UP] = 0, 391 391 [PHYTIMING_LPX] = DSIM_PHYTIMING_LPX(0x06), ··· 413 413 .m_min = 41, 414 414 .m_max = 125, 415 415 .min_freq = 500, 416 + .has_broken_fifoctrl_emptyhdr = 1, 416 417 }; 417 418 418 419 static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = { ··· 430 429 .m_min = 41, 431 430 .m_max = 125, 432 431 .min_freq = 500, 432 + .has_broken_fifoctrl_emptyhdr = 1, 433 433 }; 434 434 435 435 static const struct samsung_dsim_driver_data exynos5_dsi_driver_data = { ··· 1012 1010 do { 1013 1011 u32 reg = samsung_dsim_read(dsi, DSIM_FIFOCTRL_REG); 1014 1012 1015 - if (reg & DSIM_SFR_HEADER_EMPTY) 1016 - return 0; 1013 + if (!dsi->driver_data->has_broken_fifoctrl_emptyhdr) { 1014 + if (reg & DSIM_SFR_HEADER_EMPTY) 1015 + return 0; 1016 + } else { 1017 + if (!(reg & DSIM_SFR_HEADER_FULL)) { 1018 + /* 1019 + * Wait a little bit, so the pending data can 1020 + * actually leave the FIFO to avoid overflow. 1021 + */ 1022 + if (!cond_resched()) 1023 + usleep_range(950, 1050); 1024 + return 0; 1025 + } 1026 + } 1017 1027 1018 1028 if (!cond_resched()) 1019 1029 usleep_range(950, 1050);
-2
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 3541 3541 | DRM_BRIDGE_OP_HPD; 3542 3542 hdmi->bridge.interlace_allowed = true; 3543 3543 hdmi->bridge.ddc = hdmi->ddc; 3544 - #ifdef CONFIG_OF 3545 3544 hdmi->bridge.of_node = pdev->dev.of_node; 3546 - #endif 3547 3545 3548 3546 memset(&pdevinfo, 0, sizeof(pdevinfo)); 3549 3547 pdevinfo.parent = dev;
-2
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
··· 1182 1182 1183 1183 dsi->bridge.driver_private = dsi; 1184 1184 dsi->bridge.funcs = &dw_mipi_dsi_bridge_funcs; 1185 - #ifdef CONFIG_OF 1186 1185 dsi->bridge.of_node = pdev->dev.of_node; 1187 - #endif 1188 1186 1189 1187 return dsi; 1190 1188 }
+104 -67
drivers/gpu/drm/display/drm_dp_mst_topology.c
··· 3255 3255 } 3256 3256 EXPORT_SYMBOL(drm_dp_send_query_stream_enc_status); 3257 3257 3258 - static int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr, 3259 - struct drm_dp_mst_atomic_payload *payload) 3258 + static int drm_dp_create_payload_at_dfp(struct drm_dp_mst_topology_mgr *mgr, 3259 + struct drm_dp_mst_atomic_payload *payload) 3260 3260 { 3261 3261 return drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 3262 3262 payload->time_slots); 3263 3263 } 3264 3264 3265 - static int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr, 3266 - struct drm_dp_mst_atomic_payload *payload) 3265 + static int drm_dp_create_payload_to_remote(struct drm_dp_mst_topology_mgr *mgr, 3266 + struct drm_dp_mst_atomic_payload *payload) 3267 3267 { 3268 3268 int ret; 3269 3269 struct drm_dp_mst_port *port = drm_dp_mst_topology_get_port_validated(mgr, payload->port); ··· 3276 3276 return ret; 3277 3277 } 3278 3278 3279 - static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr, 3280 - struct drm_dp_mst_topology_state *mst_state, 3281 - struct drm_dp_mst_atomic_payload *payload) 3279 + static void drm_dp_destroy_payload_at_remote_and_dfp(struct drm_dp_mst_topology_mgr *mgr, 3280 + struct drm_dp_mst_topology_state *mst_state, 3281 + struct drm_dp_mst_atomic_payload *payload) 3282 3282 { 3283 3283 drm_dbg_kms(mgr->dev, "\n"); 3284 3284 3285 3285 /* it's okay for these to fail */ 3286 - drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0); 3287 - drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 0); 3286 + if (payload->payload_allocation_status == DRM_DP_MST_PAYLOAD_ALLOCATION_REMOTE) { 3287 + drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0); 3288 + payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_DFP; 3289 + } 3288 3290 3289 - return 0; 3291 + if (payload->payload_allocation_status == DRM_DP_MST_PAYLOAD_ALLOCATION_DFP) 3292 + drm_dp_dpcd_write_payload(mgr, 
payload->vcpi, payload->vc_start_slot, 0); 3290 3293 } 3291 3294 3292 3295 /** ··· 3299 3296 * @payload: The payload to write 3300 3297 * 3301 3298 * Determines the starting time slot for the given payload, and programs the VCPI for this payload 3302 - * into hardware. After calling this, the driver should generate ACT and payload packets. 3299 + * into the DPCD of DPRX. After calling this, the driver should generate ACT and payload packets. 3303 3300 * 3304 - * Returns: 0 on success, error code on failure. In the event that this fails, 3305 - * @payload.vc_start_slot will also be set to -1. 3301 + * Returns: 0 on success, error code on failure. 3306 3302 */ 3307 3303 int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr, 3308 3304 struct drm_dp_mst_topology_state *mst_state, 3309 3305 struct drm_dp_mst_atomic_payload *payload) 3310 3306 { 3311 3307 struct drm_dp_mst_port *port; 3312 - int ret; 3308 + int ret = 0; 3309 + bool allocate = true; 3313 3310 3314 - port = drm_dp_mst_topology_get_port_validated(mgr, payload->port); 3315 - if (!port) { 3316 - drm_dbg_kms(mgr->dev, 3317 - "VCPI %d for port %p not in topology, not creating a payload\n", 3318 - payload->vcpi, payload->port); 3319 - payload->vc_start_slot = -1; 3320 - return 0; 3321 - } 3322 - 3311 + /* Update mst mgr info */ 3323 3312 if (mgr->payload_count == 0) 3324 3313 mgr->next_start_slot = mst_state->start_slot; 3325 3314 3326 3315 payload->vc_start_slot = mgr->next_start_slot; 3327 3316 3328 - ret = drm_dp_create_payload_step1(mgr, payload); 3329 - drm_dp_mst_topology_put_port(port); 3330 - if (ret < 0) { 3331 - drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n", 3332 - payload->port, ret); 3333 - payload->vc_start_slot = -1; 3334 - return ret; 3335 - } 3336 - 3337 3317 mgr->payload_count++; 3338 3318 mgr->next_start_slot += payload->time_slots; 3339 3319 3340 - return 0; 3320 + /* Allocate payload to immediate downstream facing port */ 3321 + port = 
drm_dp_mst_topology_get_port_validated(mgr, payload->port); 3322 + if (!port) { 3323 + drm_dbg_kms(mgr->dev, 3324 + "VCPI %d for port %p not in topology, not creating a payload to remote\n", 3325 + payload->vcpi, payload->port); 3326 + allocate = false; 3327 + } 3328 + 3329 + if (allocate) { 3330 + ret = drm_dp_create_payload_at_dfp(mgr, payload); 3331 + if (ret < 0) 3332 + drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n", 3333 + payload->port, ret); 3334 + 3335 + } 3336 + 3337 + payload->payload_allocation_status = 3338 + (!allocate || ret < 0) ? DRM_DP_MST_PAYLOAD_ALLOCATION_LOCAL : 3339 + DRM_DP_MST_PAYLOAD_ALLOCATION_DFP; 3340 + 3341 + drm_dp_mst_topology_put_port(port); 3342 + 3343 + return ret; 3341 3344 } 3342 3345 EXPORT_SYMBOL(drm_dp_add_payload_part1); 3343 3346 3344 3347 /** 3345 - * drm_dp_remove_payload() - Remove an MST payload 3348 + * drm_dp_remove_payload_part1() - Remove an MST payload along the virtual channel 3346 3349 * @mgr: Manager to use. 3347 3350 * @mst_state: The MST atomic state 3348 - * @old_payload: The payload with its old state 3349 - * @new_payload: The payload to write 3351 + * @payload: The payload to remove 3350 3352 * 3351 - * Removes a payload from an MST topology if it was successfully assigned a start slot. Also updates 3352 - * the starting time slots of all other payloads which would have been shifted towards the start of 3353 - * the VC table as a result. After calling this, the driver should generate ACT and payload packets. 3353 + * Removes a payload along the virtual channel if it was successfully allocated. 3354 + * After calling this, the driver should set HW to generate ACT and then switch to new 3355 + * payload allocation state. 
3354 3356 */ 3355 - void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr, 3356 - struct drm_dp_mst_topology_state *mst_state, 3357 - const struct drm_dp_mst_atomic_payload *old_payload, 3358 - struct drm_dp_mst_atomic_payload *new_payload) 3357 + void drm_dp_remove_payload_part1(struct drm_dp_mst_topology_mgr *mgr, 3358 + struct drm_dp_mst_topology_state *mst_state, 3359 + struct drm_dp_mst_atomic_payload *payload) 3359 3360 { 3360 - struct drm_dp_mst_atomic_payload *pos; 3361 + /* Remove remote payload allocation */ 3361 3362 bool send_remove = false; 3362 3363 3363 - /* We failed to make the payload, so nothing to do */ 3364 - if (new_payload->vc_start_slot == -1) 3365 - return; 3366 - 3367 3364 mutex_lock(&mgr->lock); 3368 - send_remove = drm_dp_mst_port_downstream_of_branch(new_payload->port, mgr->mst_primary); 3365 + send_remove = drm_dp_mst_port_downstream_of_branch(payload->port, mgr->mst_primary); 3369 3366 mutex_unlock(&mgr->lock); 3370 3367 3371 3368 if (send_remove) 3372 - drm_dp_destroy_payload_step1(mgr, mst_state, new_payload); 3369 + drm_dp_destroy_payload_at_remote_and_dfp(mgr, mst_state, payload); 3373 3370 else 3374 3371 drm_dbg_kms(mgr->dev, "Payload for VCPI %d not in topology, not sending remove\n", 3375 - new_payload->vcpi); 3372 + payload->vcpi); 3376 3373 3374 + payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_LOCAL; 3375 + } 3376 + EXPORT_SYMBOL(drm_dp_remove_payload_part1); 3377 + 3378 + /** 3379 + * drm_dp_remove_payload_part2() - Remove an MST payload locally 3380 + * @mgr: Manager to use. 3381 + * @mst_state: The MST atomic state 3382 + * @old_payload: The payload with its old state 3383 + * @new_payload: The payload with its latest state 3384 + * 3385 + * Updates the starting time slots of all other payloads which would have been shifted towards 3386 + * the start of the payload ID table as a result of removing a payload. Driver should call this 3387 + * function whenever it removes a payload in its HW. 
It's independent of the result of payload 3388 + allocation/deallocation at branch devices along the virtual channel. 3389 + */ 3390 + void drm_dp_remove_payload_part2(struct drm_dp_mst_topology_mgr *mgr, 3391 + struct drm_dp_mst_topology_state *mst_state, 3392 + const struct drm_dp_mst_atomic_payload *old_payload, 3393 + struct drm_dp_mst_atomic_payload *new_payload) 3394 + { 3395 + struct drm_dp_mst_atomic_payload *pos; 3396 + 3397 + /* Remove local payload allocation */ 3377 3398 list_for_each_entry(pos, &mst_state->payloads, next) { 3378 3399 if (pos != new_payload && pos->vc_start_slot > new_payload->vc_start_slot) 3379 3400 pos->vc_start_slot -= old_payload->time_slots; ··· 3409 3382 3410 3383 if (new_payload->delete) 3411 3384 drm_dp_mst_put_port_malloc(new_payload->port); 3412 - } 3413 - EXPORT_SYMBOL(drm_dp_remove_payload); 3414 3385 3386 + new_payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_NONE; 3387 + } 3388 + EXPORT_SYMBOL(drm_dp_remove_payload_part2); 3415 3389 /** 3416 3390 * drm_dp_add_payload_part2() - Execute payload update part 2 3417 3391 * @mgr: Manager to use. 
··· 3431 3403 int ret = 0; 3432 3404 3433 3405 /* Skip failed payloads */ 3434 - if (payload->vc_start_slot == -1) { 3435 - drm_dbg_kms(mgr->dev, "Part 1 of payload creation for %s failed, skipping part 2\n", 3406 + if (payload->payload_allocation_status != DRM_DP_MST_PAYLOAD_ALLOCATION_DFP) { 3407 + drm_dbg_kms(state->dev, "Part 1 of payload creation for %s failed, skipping part 2\n", 3436 3408 payload->port->connector->name); 3437 3409 return -EIO; 3438 3410 } 3439 3411 3440 - ret = drm_dp_create_payload_step2(mgr, payload); 3441 - if (ret < 0) { 3442 - if (!payload->delete) 3443 - drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n", 3444 - payload->port, ret); 3445 - else 3446 - drm_dbg_kms(mgr->dev, "Step 2 of removing MST payload for %p failed: %d\n", 3447 - payload->port, ret); 3448 - } 3412 + /* Allocate payload to remote end */ 3413 + ret = drm_dp_create_payload_to_remote(mgr, payload); 3414 + if (ret < 0) 3415 + drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n", 3416 + payload->port, ret); 3417 + else 3418 + payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_REMOTE; 3449 3419 3450 3420 return ret; 3451 3421 } ··· 4354 4328 drm_dp_mst_get_port_malloc(port); 4355 4329 payload->port = port; 4356 4330 payload->vc_start_slot = -1; 4331 + payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_NONE; 4357 4332 list_add(&payload->next, &topology_state->payloads); 4358 4333 } 4359 4334 payload->time_slots = req_slots; ··· 4524 4497 } 4525 4498 4526 4499 /* Now that previous state is committed, it's safe to copy over the start slot 4527 - * assignments 4500 + * and allocation status assignments 4528 4501 */ 4529 4502 list_for_each_entry(old_payload, &old_mst_state->payloads, next) { 4530 4503 if (old_payload->delete) ··· 4533 4506 new_payload = drm_atomic_get_mst_payload_state(new_mst_state, 4534 4507 old_payload->port); 4535 4508 new_payload->vc_start_slot = old_payload->vc_start_slot; 4509 + 
new_payload->payload_allocation_status = 4510 + old_payload->payload_allocation_status; 4536 4511 } 4537 4512 } 4538 4513 } ··· 4851 4822 struct drm_dp_mst_atomic_payload *payload; 4852 4823 int i, ret; 4853 4824 4825 + static const char *const status[] = { 4826 + "None", 4827 + "Local", 4828 + "DFP", 4829 + "Remote", 4830 + }; 4831 + 4854 4832 mutex_lock(&mgr->lock); 4855 4833 if (mgr->mst_primary) 4856 4834 drm_dp_mst_dump_mstb(m, mgr->mst_primary); ··· 4874 4838 seq_printf(m, "payload_mask: %x, max_payloads: %d, start_slot: %u, pbn_div: %d\n", 4875 4839 state->payload_mask, mgr->max_payloads, state->start_slot, state->pbn_div); 4876 4840 4877 - seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | sink name |\n"); 4841 + seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | status | sink name |\n"); 4878 4842 for (i = 0; i < mgr->max_payloads; i++) { 4879 4843 list_for_each_entry(payload, &state->payloads, next) { 4880 4844 char name[14]; ··· 4883 4847 continue; 4884 4848 4885 4849 fetch_monitor_name(mgr, payload->port, name, sizeof(name)); 4886 - seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %19s\n", 4850 + seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %8s %19s\n", 4887 4851 i, 4888 4852 payload->port->port_num, 4889 4853 payload->vcpi, ··· 4891 4855 payload->vc_start_slot + payload->time_slots - 1, 4892 4856 payload->pbn, 4893 4857 payload->dsc_enabled ? "Y" : "N", 4858 + status[payload->payload_allocation_status], 4894 4859 (*name != 0) ? name : "Unknown"); 4895 4860 } 4896 4861 }
+2 -2
drivers/gpu/drm/drm_atomic.c
··· 1841 1841 {"state", drm_state_info, 0}, 1842 1842 }; 1843 1843 1844 - void drm_atomic_debugfs_init(struct drm_minor *minor) 1844 + void drm_atomic_debugfs_init(struct drm_device *dev) 1845 1845 { 1846 - drm_debugfs_add_files(minor->dev, drm_atomic_debugfs_list, 1846 + drm_debugfs_add_files(dev, drm_atomic_debugfs_list, 1847 1847 ARRAY_SIZE(drm_atomic_debugfs_list)); 1848 1848 } 1849 1849 #endif
+2 -2
drivers/gpu/drm/drm_bridge.c
··· 1384 1384 { "bridge_chains", drm_bridge_chains_info, 0 }, 1385 1385 }; 1386 1386 1387 - void drm_bridge_debugfs_init(struct drm_minor *minor) 1387 + void drm_bridge_debugfs_init(struct drm_device *dev) 1388 1388 { 1389 - drm_debugfs_add_files(minor->dev, drm_bridge_debugfs_list, 1389 + drm_debugfs_add_files(dev, drm_bridge_debugfs_list, 1390 1390 ARRAY_SIZE(drm_bridge_debugfs_list)); 1391 1391 } 1392 1392 #endif
+2 -2
drivers/gpu/drm/drm_client.c
··· 535 535 { "internal_clients", drm_client_debugfs_internal_clients, 0 }, 536 536 }; 537 537 538 - void drm_client_debugfs_init(struct drm_minor *minor) 538 + void drm_client_debugfs_init(struct drm_device *dev) 539 539 { 540 - drm_debugfs_add_files(minor->dev, drm_client_debugfs_list, 540 + drm_debugfs_add_files(dev, drm_client_debugfs_list, 541 541 ARRAY_SIZE(drm_client_debugfs_list)); 542 542 } 543 543 #endif
+1 -1
drivers/gpu/drm/drm_crtc_internal.h
··· 232 232 /* drm_atomic.c */ 233 233 #ifdef CONFIG_DEBUG_FS 234 234 struct drm_minor; 235 - void drm_atomic_debugfs_init(struct drm_minor *minor); 235 + void drm_atomic_debugfs_init(struct drm_device *dev); 236 236 #endif 237 237 238 238 int __drm_atomic_helper_disable_plane(struct drm_plane *plane,
+79 -95
drivers/gpu/drm/drm_debugfs.c
··· 150 150 { 151 151 struct drm_info_node *node = inode->i_private; 152 152 153 + if (!device_is_registered(node->minor->kdev)) 154 + return -ENODEV; 155 + 153 156 return single_open(file, node->info_ent->show, node); 154 157 } 155 158 ··· 160 157 { 161 158 struct drm_debugfs_entry *entry = inode->i_private; 162 159 struct drm_debugfs_info *node = &entry->file; 160 + struct drm_minor *minor = entry->dev->primary ?: entry->dev->accel; 161 + 162 + if (!device_is_registered(minor->kdev)) 163 + return -ENODEV; 163 164 164 165 return single_open(file, node->show, entry); 165 166 } ··· 234 227 * 235 228 * Create a given set of debugfs files represented by an array of 236 229 * &struct drm_info_list in the given root directory. These files will be removed 237 - * automatically on drm_debugfs_cleanup(). 230 + * automatically on drm_debugfs_dev_fini(). 238 231 */ 239 232 void drm_debugfs_create_files(const struct drm_info_list *files, int count, 240 233 struct dentry *root, struct drm_minor *minor) ··· 249 242 if (features && !drm_core_check_all_features(dev, features)) 250 243 continue; 251 244 252 - tmp = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL); 245 + tmp = drmm_kzalloc(dev, sizeof(*tmp), GFP_KERNEL); 253 246 if (tmp == NULL) 254 247 continue; 255 248 ··· 258 251 0444, root, tmp, 259 252 &drm_debugfs_fops); 260 253 tmp->info_ent = &files[i]; 261 - 262 - mutex_lock(&minor->debugfs_lock); 263 - list_add(&tmp->list, &minor->debugfs_list); 264 - mutex_unlock(&minor->debugfs_lock); 265 254 } 266 255 } 267 256 EXPORT_SYMBOL(drm_debugfs_create_files); 268 257 269 - int drm_debugfs_init(struct drm_minor *minor, int minor_id, 270 - struct dentry *root) 271 - { 272 - struct drm_device *dev = minor->dev; 273 - struct drm_debugfs_entry *entry, *tmp; 274 - char name[64]; 275 - 276 - INIT_LIST_HEAD(&minor->debugfs_list); 277 - mutex_init(&minor->debugfs_lock); 278 - sprintf(name, "%d", minor_id); 279 - minor->debugfs_root = debugfs_create_dir(name, root); 280 - 281 - 
drm_debugfs_add_files(minor->dev, drm_debugfs_list, DRM_DEBUGFS_ENTRIES); 282 - 283 - if (drm_drv_uses_atomic_modeset(dev)) { 284 - drm_atomic_debugfs_init(minor); 285 - drm_bridge_debugfs_init(minor); 286 - } 287 - 288 - if (drm_core_check_feature(dev, DRIVER_MODESET)) { 289 - drm_framebuffer_debugfs_init(minor); 290 - 291 - drm_client_debugfs_init(minor); 292 - } 293 - 294 - if (dev->driver->debugfs_init) 295 - dev->driver->debugfs_init(minor); 296 - 297 - list_for_each_entry_safe(entry, tmp, &dev->debugfs_list, list) { 298 - debugfs_create_file(entry->file.name, 0444, 299 - minor->debugfs_root, entry, &drm_debugfs_entry_fops); 300 - list_del(&entry->list); 301 - } 302 - 303 - return 0; 304 - } 305 - 306 - void drm_debugfs_late_register(struct drm_device *dev) 307 - { 308 - struct drm_minor *minor = dev->primary; 309 - struct drm_debugfs_entry *entry, *tmp; 310 - 311 - if (!minor) 312 - return; 313 - 314 - list_for_each_entry_safe(entry, tmp, &dev->debugfs_list, list) { 315 - debugfs_create_file(entry->file.name, 0444, 316 - minor->debugfs_root, entry, &drm_debugfs_entry_fops); 317 - list_del(&entry->list); 318 - } 319 - } 320 - 321 258 int drm_debugfs_remove_files(const struct drm_info_list *files, int count, 322 - struct drm_minor *minor) 259 + struct dentry *root, struct drm_minor *minor) 323 260 { 324 - struct list_head *pos, *q; 325 - struct drm_info_node *tmp; 326 261 int i; 327 262 328 - mutex_lock(&minor->debugfs_lock); 329 263 for (i = 0; i < count; i++) { 330 - list_for_each_safe(pos, q, &minor->debugfs_list) { 331 - tmp = list_entry(pos, struct drm_info_node, list); 332 - if (tmp->info_ent == &files[i]) { 333 - debugfs_remove(tmp->dent); 334 - list_del(pos); 335 - kfree(tmp); 336 - } 337 - } 264 + struct dentry *dent = debugfs_lookup(files[i].name, root); 265 + 266 + if (!dent) 267 + continue; 268 + 269 + drmm_kfree(minor->dev, d_inode(dent)->i_private); 270 + debugfs_remove(dent); 338 271 } 339 - mutex_unlock(&minor->debugfs_lock); 340 272 return 0; 
341 273 } 342 274 EXPORT_SYMBOL(drm_debugfs_remove_files); 343 275 344 - static void drm_debugfs_remove_all_files(struct drm_minor *minor) 276 + /** 277 + * drm_debugfs_dev_init - create debugfs directory for the device 278 + * @dev: the device which we want to create the directory for 279 + * @root: the parent directory depending on the device type 280 + * 281 + * Creates the debugfs directory for the device under the given root directory. 282 + */ 283 + void drm_debugfs_dev_init(struct drm_device *dev, struct dentry *root) 345 284 { 346 - struct drm_info_node *node, *tmp; 347 - 348 - mutex_lock(&minor->debugfs_lock); 349 - list_for_each_entry_safe(node, tmp, &minor->debugfs_list, list) { 350 - debugfs_remove(node->dent); 351 - list_del(&node->list); 352 - kfree(node); 353 - } 354 - mutex_unlock(&minor->debugfs_lock); 285 + dev->debugfs_root = debugfs_create_dir(dev->unique, root); 355 286 } 356 287 357 - void drm_debugfs_cleanup(struct drm_minor *minor) 288 + /** 289 + * drm_debugfs_dev_fini - cleanup debugfs directory 290 + * @dev: the device to cleanup the debugfs stuff 291 + * 292 + * Remove the debugfs directory, might be called multiple times. 
293 + */ 294 + void drm_debugfs_dev_fini(struct drm_device *dev) 358 295 { 359 - if (!minor->debugfs_root) 360 - return; 296 + debugfs_remove_recursive(dev->debugfs_root); 297 + dev->debugfs_root = NULL; 298 + } 361 299 362 - drm_debugfs_remove_all_files(minor); 300 + void drm_debugfs_dev_register(struct drm_device *dev) 301 + { 302 + drm_debugfs_add_files(dev, drm_debugfs_list, DRM_DEBUGFS_ENTRIES); 363 303 364 - debugfs_remove_recursive(minor->debugfs_root); 365 - minor->debugfs_root = NULL; 304 + if (drm_core_check_feature(dev, DRIVER_MODESET)) { 305 + drm_framebuffer_debugfs_init(dev); 306 + drm_client_debugfs_init(dev); 307 + } 308 + if (drm_drv_uses_atomic_modeset(dev)) { 309 + drm_atomic_debugfs_init(dev); 310 + drm_bridge_debugfs_init(dev); 311 + } 312 + } 313 + 314 + int drm_debugfs_register(struct drm_minor *minor, int minor_id, 315 + struct dentry *root) 316 + { 317 + struct drm_device *dev = minor->dev; 318 + char name[64]; 319 + 320 + sprintf(name, "%d", minor_id); 321 + minor->debugfs_symlink = debugfs_create_symlink(name, root, 322 + dev->unique); 323 + 324 + /* TODO: Only for compatibility with drivers */ 325 + minor->debugfs_root = dev->debugfs_root; 326 + 327 + if (dev->driver->debugfs_init && dev->render != minor) 328 + dev->driver->debugfs_init(minor); 329 + 330 + return 0; 331 + } 332 + 333 + void drm_debugfs_unregister(struct drm_minor *minor) 334 + { 335 + debugfs_remove(minor->debugfs_symlink); 336 + minor->debugfs_symlink = NULL; 366 337 } 367 338 368 339 /** ··· 366 381 entry->file.data = data; 367 382 entry->dev = dev; 368 383 369 - mutex_lock(&dev->debugfs_mutex); 370 - list_add(&entry->list, &dev->debugfs_list); 371 - mutex_unlock(&dev->debugfs_mutex); 384 + debugfs_create_file(name, 0444, dev->debugfs_root, entry, 385 + &drm_debugfs_entry_fops); 372 386 } 373 387 EXPORT_SYMBOL(drm_debugfs_add_file); 374 388 ··· 524 540 525 541 void drm_debugfs_connector_add(struct drm_connector *connector) 526 542 { 527 - struct drm_minor *minor = 
connector->dev->primary; 543 + struct drm_device *dev = connector->dev; 528 544 struct dentry *root; 529 545 530 - if (!minor->debugfs_root) 546 + if (!dev->debugfs_root) 531 547 return; 532 548 533 - root = debugfs_create_dir(connector->name, minor->debugfs_root); 549 + root = debugfs_create_dir(connector->name, dev->debugfs_root); 534 550 connector->debugfs_entry = root; 535 551 536 552 /* force */ ··· 565 581 566 582 void drm_debugfs_crtc_add(struct drm_crtc *crtc) 567 583 { 568 - struct drm_minor *minor = crtc->dev->primary; 584 + struct drm_device *dev = crtc->dev; 569 585 struct dentry *root; 570 586 char *name; 571 587 ··· 573 589 if (!name) 574 590 return; 575 591 576 - root = debugfs_create_dir(name, minor->debugfs_root); 592 + root = debugfs_create_dir(name, dev->debugfs_root); 577 593 kfree(name); 578 594 579 595 crtc->debugfs_entry = root;
+19 -9
drivers/gpu/drm/drm_drv.c
··· 172 172 if (!minor) 173 173 return 0; 174 174 175 - if (minor->type == DRM_MINOR_ACCEL) { 176 - accel_debugfs_init(minor, minor->index); 177 - } else { 178 - ret = drm_debugfs_init(minor, minor->index, drm_debugfs_root); 175 + if (minor->type != DRM_MINOR_ACCEL) { 176 + ret = drm_debugfs_register(minor, minor->index, 177 + drm_debugfs_root); 179 178 if (ret) { 180 179 DRM_ERROR("DRM: Failed to initialize /sys/kernel/debug/dri.\n"); 181 180 goto err_debugfs; ··· 198 199 return 0; 199 200 200 201 err_debugfs: 201 - drm_debugfs_cleanup(minor); 202 + drm_debugfs_unregister(minor); 202 203 return ret; 203 204 } 204 205 ··· 222 223 223 224 device_del(minor->kdev); 224 225 dev_set_drvdata(minor->kdev, NULL); /* safety belt */ 225 - drm_debugfs_cleanup(minor); 226 + drm_debugfs_unregister(minor); 226 227 } 227 228 228 229 /* ··· 597 598 mutex_destroy(&dev->clientlist_mutex); 598 599 mutex_destroy(&dev->filelist_mutex); 599 600 mutex_destroy(&dev->struct_mutex); 600 - mutex_destroy(&dev->debugfs_mutex); 601 601 drm_legacy_destroy_members(dev); 602 602 } 603 603 ··· 637 639 INIT_LIST_HEAD(&dev->filelist_internal); 638 640 INIT_LIST_HEAD(&dev->clientlist); 639 641 INIT_LIST_HEAD(&dev->vblank_event_list); 640 - INIT_LIST_HEAD(&dev->debugfs_list); 641 642 642 643 spin_lock_init(&dev->event_lock); 643 644 mutex_init(&dev->struct_mutex); 644 645 mutex_init(&dev->filelist_mutex); 645 646 mutex_init(&dev->clientlist_mutex); 646 647 mutex_init(&dev->master_mutex); 647 - mutex_init(&dev->debugfs_mutex); 648 648 649 649 ret = drmm_add_action_or_reset(dev, drm_dev_init_release, NULL); 650 650 if (ret) ··· 692 696 ret = -ENOMEM; 693 697 goto err; 694 698 } 699 + 700 + if (drm_core_check_feature(dev, DRIVER_COMPUTE_ACCEL)) 701 + accel_debugfs_init(dev); 702 + else 703 + drm_debugfs_dev_init(dev, drm_debugfs_root); 695 704 696 705 return 0; 697 706 ··· 786 785 static void drm_dev_release(struct kref *ref) 787 786 { 788 787 struct drm_device *dev = container_of(ref, struct drm_device, 
ref); 788 + 789 + /* Just in case register/unregister was never called */ 790 + drm_debugfs_dev_fini(dev); 789 791 790 792 if (dev->driver->release) 791 793 dev->driver->release(dev); ··· 920 916 if (drm_dev_needs_global_mutex(dev)) 921 917 mutex_lock(&drm_global_mutex); 922 918 919 + if (drm_core_check_feature(dev, DRIVER_COMPUTE_ACCEL)) 920 + accel_debugfs_register(dev); 921 + else 922 + drm_debugfs_dev_register(dev); 923 + 923 924 ret = drm_minor_register(dev, DRM_MINOR_RENDER); 924 925 if (ret) 925 926 goto err_minors; ··· 1010 1001 drm_minor_unregister(dev, DRM_MINOR_ACCEL); 1011 1002 drm_minor_unregister(dev, DRM_MINOR_PRIMARY); 1012 1003 drm_minor_unregister(dev, DRM_MINOR_RENDER); 1004 + drm_debugfs_dev_fini(dev); 1013 1005 } 1014 1006 EXPORT_SYMBOL(drm_dev_unregister); 1015 1007
+2 -2
drivers/gpu/drm/drm_framebuffer.c
··· 1222 1222 { "framebuffer", drm_framebuffer_info, 0 }, 1223 1223 }; 1224 1224 1225 - void drm_framebuffer_debugfs_init(struct drm_minor *minor) 1225 + void drm_framebuffer_debugfs_init(struct drm_device *dev) 1226 1226 { 1227 - drm_debugfs_add_files(minor->dev, drm_framebuffer_debugfs_list, 1227 + drm_debugfs_add_files(dev, drm_framebuffer_debugfs_list, 1228 1228 ARRAY_SIZE(drm_framebuffer_debugfs_list)); 1229 1229 } 1230 1230 #endif
+17 -12
drivers/gpu/drm/drm_internal.h
··· 180 180 181 181 /* drm_debugfs.c drm_debugfs_crc.c */ 182 182 #if defined(CONFIG_DEBUG_FS) 183 - int drm_debugfs_init(struct drm_minor *minor, int minor_id, 184 - struct dentry *root); 185 - void drm_debugfs_cleanup(struct drm_minor *minor); 186 - void drm_debugfs_late_register(struct drm_device *dev); 183 + void drm_debugfs_dev_fini(struct drm_device *dev); 184 + void drm_debugfs_dev_register(struct drm_device *dev); 185 + int drm_debugfs_register(struct drm_minor *minor, int minor_id, 186 + struct dentry *root); 187 + void drm_debugfs_unregister(struct drm_minor *minor); 187 188 void drm_debugfs_connector_add(struct drm_connector *connector); 188 189 void drm_debugfs_connector_remove(struct drm_connector *connector); 189 190 void drm_debugfs_crtc_add(struct drm_crtc *crtc); 190 191 void drm_debugfs_crtc_remove(struct drm_crtc *crtc); 191 192 void drm_debugfs_crtc_crc_add(struct drm_crtc *crtc); 192 193 #else 193 - static inline int drm_debugfs_init(struct drm_minor *minor, int minor_id, 194 - struct dentry *root) 194 + static inline void drm_debugfs_dev_fini(struct drm_device *dev) 195 + { 196 + } 197 + 198 + static inline void drm_debugfs_dev_register(struct drm_device *dev) 199 + { 200 + } 201 + 202 + static inline int drm_debugfs_register(struct drm_minor *minor, int minor_id, 203 + struct dentry *root) 195 204 { 196 205 return 0; 197 206 } 198 207 199 - static inline void drm_debugfs_cleanup(struct drm_minor *minor) 200 - { 201 - } 202 - 203 - static inline void drm_debugfs_late_register(struct drm_device *dev) 208 + static inline void drm_debugfs_unregister(struct drm_minor *minor) 204 209 { 205 210 } 206 211 ··· 264 259 /* drm_framebuffer.c */ 265 260 void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent, 266 261 const struct drm_framebuffer *fb); 267 - void drm_framebuffer_debugfs_init(struct drm_minor *minor); 262 + void drm_framebuffer_debugfs_init(struct drm_device *dev);
-2
drivers/gpu/drm/drm_mode_config.c
··· 54 54 if (ret) 55 55 goto err_connector; 56 56 57 - drm_debugfs_late_register(dev); 58 - 59 57 return 0; 60 58 61 59 err_connector:
-1
drivers/gpu/drm/gma500/gma_display.h
··· 81 81 82 82 /* Common clock related functions */ 83 83 extern const struct gma_limit_t *gma_limit(struct drm_crtc *crtc, int refclk); 84 - extern void gma_clock(int refclk, struct gma_clock_t *clock); 85 84 extern bool gma_pll_is_valid(struct drm_crtc *crtc, 86 85 const struct gma_limit_t *limit, 87 86 struct gma_clock_t *clock);
+1 -8
drivers/gpu/drm/gma500/psb_drv.h
··· 161 161 162 162 #define PSB_NUM_VBLANKS 2 163 163 164 - 165 - #define PSB_2D_SIZE (256*1024*1024) 166 - #define PSB_MAX_RELOC_PAGES 1024 167 - 168 - #define PSB_LOW_REG_OFFS 0x0204 169 - #define PSB_HIGH_REG_OFFS 0x0600 170 - 171 - #define PSB_NUM_VBLANKS 2 172 164 #define PSB_WATCHDOG_DELAY (HZ * 2) 173 165 #define PSB_LID_DELAY (HZ / 10) 174 166 ··· 416 424 uint32_t pipestat[PSB_NUM_PIPE]; 417 425 418 426 spinlock_t irqmask_lock; 427 + bool irq_enabled; 419 428 420 429 /* Power */ 421 430 bool pm_initialized;
-14
drivers/gpu/drm/gma500/psb_intel_drv.h
··· 186 186 187 187 extern void psb_intel_crtc_init(struct drm_device *dev, int pipe, 188 188 struct psb_intel_mode_device *mode_dev); 189 - extern void psb_intel_crt_init(struct drm_device *dev); 190 189 extern bool psb_intel_sdvo_init(struct drm_device *dev, int output_device); 191 - extern void psb_intel_dvo_init(struct drm_device *dev); 192 - extern void psb_intel_tv_init(struct drm_device *dev); 193 190 extern void psb_intel_lvds_init(struct drm_device *dev, 194 191 struct psb_intel_mode_device *mode_dev); 195 192 extern void psb_intel_lvds_set_brightness(struct drm_device *dev, int level); 196 193 extern void oaktrail_lvds_init(struct drm_device *dev, 197 194 struct psb_intel_mode_device *mode_dev); 198 - extern void oaktrail_wait_for_INTR_PKT_SENT(struct drm_device *dev); 199 195 struct gma_i2c_chan *oaktrail_lvds_i2c_init(struct drm_device *dev); 200 - extern void mid_dsi_init(struct drm_device *dev, 201 - struct psb_intel_mode_device *mode_dev, int dsi_num); 202 196 203 197 extern struct drm_encoder *gma_best_encoder(struct drm_connector *connector); 204 198 extern void gma_connector_attach_encoder(struct gma_connector *connector, ··· 208 214 struct drm_crtc *crtc); 209 215 extern struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, 210 216 int pipe); 211 - extern struct drm_connector *psb_intel_sdvo_find(struct drm_device *dev, 212 - int sdvoB); 213 - extern int intelfb_probe(struct drm_device *dev); 214 - extern int intelfb_remove(struct drm_device *dev, 215 - struct drm_framebuffer *fb); 216 217 extern bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder, 217 218 const struct drm_display_mode *mode, 218 219 struct drm_display_mode *adjusted_mode); ··· 230 241 extern void cdv_intel_dp_set_m_n(struct drm_crtc *crtc, 231 242 struct drm_display_mode *mode, 232 243 struct drm_display_mode *adjusted_mode); 233 - 234 - extern void psb_intel_attach_force_audio_property(struct drm_connector *connector); 235 - extern void 
psb_intel_attach_broadcast_rgb_property(struct drm_connector *connector); 236 244 237 245 extern int cdv_sb_read(struct drm_device *dev, u32 reg, u32 *val); 238 246 extern int cdv_sb_write(struct drm_device *dev, u32 reg, u32 val);
+5
drivers/gpu/drm/gma500/psb_irq.c
··· 327 327 328 328 gma_irq_postinstall(dev); 329 329 330 + dev_priv->irq_enabled = true; 331 + 330 332 return 0; 331 333 } 332 334 ··· 338 336 struct pci_dev *pdev = to_pci_dev(dev->dev); 339 337 unsigned long irqflags; 340 338 unsigned int i; 339 + 340 + if (!dev_priv->irq_enabled) 341 + return; 341 342 342 343 spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags); 343 344
+12 -6
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 557 557 struct intel_dp *intel_dp = &dig_port->dp; 558 558 struct intel_connector *connector = 559 559 to_intel_connector(old_conn_state->connector); 560 - struct drm_dp_mst_topology_state *old_mst_state = 561 - drm_atomic_get_old_mst_topology_state(&state->base, &intel_dp->mst_mgr); 562 560 struct drm_dp_mst_topology_state *new_mst_state = 563 561 drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); 564 - const struct drm_dp_mst_atomic_payload *old_payload = 565 - drm_atomic_get_mst_payload_state(old_mst_state, connector->port); 566 562 struct drm_dp_mst_atomic_payload *new_payload = 567 563 drm_atomic_get_mst_payload_state(new_mst_state, connector->port); 568 564 struct drm_i915_private *i915 = to_i915(connector->base.dev); ··· 568 572 569 573 intel_hdcp_disable(intel_mst->connector); 570 574 571 - drm_dp_remove_payload(&intel_dp->mst_mgr, new_mst_state, 572 - old_payload, new_payload); 575 + drm_dp_remove_payload_part1(&intel_dp->mst_mgr, new_mst_state, new_payload); 573 576 574 577 intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state); 575 578 } ··· 583 588 struct intel_dp *intel_dp = &dig_port->dp; 584 589 struct intel_connector *connector = 585 590 to_intel_connector(old_conn_state->connector); 591 + struct drm_dp_mst_topology_state *old_mst_state = 592 + drm_atomic_get_old_mst_topology_state(&state->base, &intel_dp->mst_mgr); 593 + struct drm_dp_mst_topology_state *new_mst_state = 594 + drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); 595 + const struct drm_dp_mst_atomic_payload *old_payload = 596 + drm_atomic_get_mst_payload_state(old_mst_state, connector->port); 597 + struct drm_dp_mst_atomic_payload *new_payload = 598 + drm_atomic_get_mst_payload_state(new_mst_state, connector->port); 586 599 struct drm_i915_private *dev_priv = to_i915(connector->base.dev); 587 600 bool last_mst_stream; 588 601 ··· 610 607 TRANS_DDI_DP_VC_PAYLOAD_ALLOC, 0); 611 608 612 609 wait_for_act_sent(encoder, 
old_crtc_state); 610 + 611 + drm_dp_remove_payload_part2(&intel_dp->mst_mgr, new_mst_state, 612 + old_payload, new_payload); 613 613 614 614 intel_ddi_disable_transcoder_func(old_crtc_state); 615 615
+2 -4
drivers/gpu/drm/imx/ipuv3/dw_hdmi-imx.c
··· 255 255 return ret; 256 256 } 257 257 258 - static int dw_hdmi_imx_remove(struct platform_device *pdev) 258 + static void dw_hdmi_imx_remove(struct platform_device *pdev) 259 259 { 260 260 struct imx_hdmi *hdmi = platform_get_drvdata(pdev); 261 261 262 262 component_del(&pdev->dev, &dw_hdmi_imx_ops); 263 263 dw_hdmi_remove(hdmi->hdmi); 264 - 265 - return 0; 266 264 } 267 265 268 266 static struct platform_driver dw_hdmi_imx_platform_driver = { 269 267 .probe = dw_hdmi_imx_probe, 270 - .remove = dw_hdmi_imx_remove, 268 + .remove_new = dw_hdmi_imx_remove, 271 269 .driver = { 272 270 .name = "dwhdmi-imx", 273 271 .of_match_table = dw_hdmi_imx_dt_ids,
+2 -3
drivers/gpu/drm/imx/ipuv3/imx-drm-core.c
··· 292 292 return ret; 293 293 } 294 294 295 - static int imx_drm_platform_remove(struct platform_device *pdev) 295 + static void imx_drm_platform_remove(struct platform_device *pdev) 296 296 { 297 297 component_master_del(&pdev->dev, &imx_drm_ops); 298 - return 0; 299 298 } 300 299 301 300 #ifdef CONFIG_PM_SLEEP ··· 323 324 324 325 static struct platform_driver imx_drm_pdrv = { 325 326 .probe = imx_drm_platform_probe, 326 - .remove = imx_drm_platform_remove, 327 + .remove_new = imx_drm_platform_remove, 327 328 .driver = { 328 329 .name = "imx-drm", 329 330 .pm = &imx_drm_pm_ops,
+2 -3
drivers/gpu/drm/imx/ipuv3/imx-ldb.c
··· 737 737 return ret; 738 738 } 739 739 740 - static int imx_ldb_remove(struct platform_device *pdev) 740 + static void imx_ldb_remove(struct platform_device *pdev) 741 741 { 742 742 struct imx_ldb *imx_ldb = platform_get_drvdata(pdev); 743 743 int i; ··· 750 750 } 751 751 752 752 component_del(&pdev->dev, &imx_ldb_ops); 753 - return 0; 754 753 } 755 754 756 755 static struct platform_driver imx_ldb_driver = { 757 756 .probe = imx_ldb_probe, 758 - .remove = imx_ldb_remove, 757 + .remove_new = imx_ldb_remove, 759 758 .driver = { 760 759 .of_match_table = imx_ldb_dt_ids, 761 760 .name = DRIVER_NAME,
+2 -3
drivers/gpu/drm/imx/ipuv3/imx-tve.c
··· 645 645 return component_add(dev, &imx_tve_ops); 646 646 } 647 647 648 - static int imx_tve_remove(struct platform_device *pdev) 648 + static void imx_tve_remove(struct platform_device *pdev) 649 649 { 650 650 component_del(&pdev->dev, &imx_tve_ops); 651 - return 0; 652 651 } 653 652 654 653 static const struct of_device_id imx_tve_dt_ids[] = { ··· 658 659 659 660 static struct platform_driver imx_tve_driver = { 660 661 .probe = imx_tve_probe, 661 - .remove = imx_tve_remove, 662 + .remove_new = imx_tve_remove, 662 663 .driver = { 663 664 .of_match_table = imx_tve_dt_ids, 664 665 .name = "imx-tve",
+2 -3
drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
··· 441 441 return component_add(dev, &ipu_crtc_ops); 442 442 } 443 443 444 - static int ipu_drm_remove(struct platform_device *pdev) 444 + static void ipu_drm_remove(struct platform_device *pdev) 445 445 { 446 446 component_del(&pdev->dev, &ipu_crtc_ops); 447 - return 0; 448 447 } 449 448 450 449 struct platform_driver ipu_drm_driver = { ··· 451 452 .name = "imx-ipuv3-crtc", 452 453 }, 453 454 .probe = ipu_drm_probe, 454 - .remove = ipu_drm_remove, 455 + .remove_new = ipu_drm_remove, 455 456 };
+2 -4
drivers/gpu/drm/imx/ipuv3/parallel-display.c
··· 353 353 return component_add(dev, &imx_pd_ops); 354 354 } 355 355 356 - static int imx_pd_remove(struct platform_device *pdev) 356 + static void imx_pd_remove(struct platform_device *pdev) 357 357 { 358 358 component_del(&pdev->dev, &imx_pd_ops); 359 - 360 - return 0; 361 359 } 362 360 363 361 static const struct of_device_id imx_pd_dt_ids[] = { ··· 366 368 367 369 static struct platform_driver imx_pd_driver = { 368 370 .probe = imx_pd_probe, 369 - .remove = imx_pd_remove, 371 + .remove_new = imx_pd_remove, 370 372 .driver = { 371 373 .of_match_table = imx_pd_dt_ids, 372 374 .name = "imx-parallel-display",
+2 -4
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
··· 1449 1449 return component_master_add_with_match(dev, &ingenic_master_ops, match); 1450 1450 } 1451 1451 1452 - static int ingenic_drm_remove(struct platform_device *pdev) 1452 + static void ingenic_drm_remove(struct platform_device *pdev) 1453 1453 { 1454 1454 struct device *dev = &pdev->dev; 1455 1455 ··· 1457 1457 ingenic_drm_unbind(dev); 1458 1458 else 1459 1459 component_master_del(dev, &ingenic_master_ops); 1460 - 1461 - return 0; 1462 1460 } 1463 1461 1464 1462 static int ingenic_drm_suspend(struct device *dev) ··· 1609 1611 .of_match_table = of_match_ptr(ingenic_drm_of_match), 1610 1612 }, 1611 1613 .probe = ingenic_drm_probe, 1612 - .remove = ingenic_drm_remove, 1614 + .remove_new = ingenic_drm_remove, 1613 1615 }; 1614 1616 1615 1617 static int ingenic_drm_init(void)
+2 -3
drivers/gpu/drm/ingenic/ingenic-ipu.c
··· 922 922 return component_add(&pdev->dev, &ingenic_ipu_ops); 923 923 } 924 924 925 - static int ingenic_ipu_remove(struct platform_device *pdev) 925 + static void ingenic_ipu_remove(struct platform_device *pdev) 926 926 { 927 927 component_del(&pdev->dev, &ingenic_ipu_ops); 928 - return 0; 929 928 } 930 929 931 930 static const u32 jz4725b_ipu_formats[] = { ··· 991 992 .of_match_table = ingenic_ipu_of_match, 992 993 }, 993 994 .probe = ingenic_ipu_probe, 994 - .remove = ingenic_ipu_remove, 995 + .remove_new = ingenic_ipu_remove, 995 996 }; 996 997 997 998 struct platform_driver *ingenic_ipu_driver_ptr = &ingenic_ipu_driver;
+4 -2
drivers/gpu/drm/loongson/lsdc_pixpll.c
··· 120 120 struct lsdc_pixpll_parms *pparms; 121 121 122 122 this->mmio = ioremap(this->reg_base, this->reg_size); 123 - if (IS_ERR_OR_NULL(this->mmio)) 123 + if (!this->mmio) 124 124 return -ENOMEM; 125 125 126 126 pparms = kzalloc(sizeof(*pparms), GFP_KERNEL); 127 - if (IS_ERR_OR_NULL(pparms)) 127 + if (!pparms) { 128 + iounmap(this->mmio); 128 129 return -ENOMEM; 130 + } 129 131 130 132 pparms->ref_clock = LSDC_PLL_REF_CLK_KHZ; 131 133
+2 -3
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 751 751 return 0; 752 752 } 753 753 754 - static int adreno_remove(struct platform_device *pdev) 754 + static void adreno_remove(struct platform_device *pdev) 755 755 { 756 756 component_del(&pdev->dev, &a3xx_ops); 757 - return 0; 758 757 } 759 758 760 759 static void adreno_shutdown(struct platform_device *pdev) ··· 868 869 869 870 static struct platform_driver adreno_driver = { 870 871 .probe = adreno_probe, 871 - .remove = adreno_remove, 872 + .remove_new = adreno_remove, 872 873 .shutdown = adreno_shutdown, 873 874 .driver = { 874 875 .name = "adreno",
+2 -4
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
··· 1302 1302 return msm_drv_probe(&pdev->dev, dpu_kms_init); 1303 1303 } 1304 1304 1305 - static int dpu_dev_remove(struct platform_device *pdev) 1305 + static void dpu_dev_remove(struct platform_device *pdev) 1306 1306 { 1307 1307 component_master_del(&pdev->dev, &msm_drm_ops); 1308 - 1309 - return 0; 1310 1308 } 1311 1309 1312 1310 static int __maybe_unused dpu_runtime_suspend(struct device *dev) ··· 1380 1382 1381 1383 static struct platform_driver dpu_driver = { 1382 1384 .probe = dpu_dev_probe, 1383 - .remove = dpu_dev_remove, 1385 + .remove_new = dpu_dev_remove, 1384 1386 .shutdown = msm_drv_shutdown, 1385 1387 .driver = { 1386 1388 .name = "msm_dpu",
+2 -4
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
··· 560 560 return msm_drv_probe(&pdev->dev, mdp4_kms_init); 561 561 } 562 562 563 - static int mdp4_remove(struct platform_device *pdev) 563 + static void mdp4_remove(struct platform_device *pdev) 564 564 { 565 565 component_master_del(&pdev->dev, &msm_drm_ops); 566 - 567 - return 0; 568 566 } 569 567 570 568 static const struct of_device_id mdp4_dt_match[] = { ··· 573 575 574 576 static struct platform_driver mdp4_platform_driver = { 575 577 .probe = mdp4_probe, 576 - .remove = mdp4_remove, 578 + .remove_new = mdp4_remove, 577 579 .shutdown = msm_drv_shutdown, 578 580 .driver = { 579 581 .name = "mdp4",
+2 -3
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
··· 942 942 return msm_drv_probe(&pdev->dev, mdp5_kms_init); 943 943 } 944 944 945 - static int mdp5_dev_remove(struct platform_device *pdev) 945 + static void mdp5_dev_remove(struct platform_device *pdev) 946 946 { 947 947 DBG(""); 948 948 component_master_del(&pdev->dev, &msm_drm_ops); 949 - return 0; 950 949 } 951 950 952 951 static __maybe_unused int mdp5_runtime_suspend(struct device *dev) ··· 986 987 987 988 static struct platform_driver mdp5_driver = { 988 989 .probe = mdp5_dev_probe, 989 - .remove = mdp5_dev_remove, 990 + .remove_new = mdp5_dev_remove, 990 991 .shutdown = msm_drv_shutdown, 991 992 .driver = { 992 993 .name = "msm_mdp",
+2 -4
drivers/gpu/drm/msm/dp/dp_display.c
··· 1296 1296 return rc; 1297 1297 } 1298 1298 1299 - static int dp_display_remove(struct platform_device *pdev) 1299 + static void dp_display_remove(struct platform_device *pdev) 1300 1300 { 1301 1301 struct dp_display_private *dp = dev_get_dp_display_private(&pdev->dev); 1302 1302 ··· 1304 1304 dp_display_deinit_sub_modules(dp); 1305 1305 1306 1306 platform_set_drvdata(pdev, NULL); 1307 - 1308 - return 0; 1309 1307 } 1310 1308 1311 1309 static int dp_pm_resume(struct device *dev) ··· 1413 1415 1414 1416 static struct platform_driver dp_display_driver = { 1415 1417 .probe = dp_display_probe, 1416 - .remove = dp_display_remove, 1418 + .remove_new = dp_display_remove, 1417 1419 .driver = { 1418 1420 .name = "msm-dp-display", 1419 1421 .of_match_table = dp_dt_match,
+2 -4
drivers/gpu/drm/msm/dsi/dsi.c
··· 161 161 return 0; 162 162 } 163 163 164 - static int dsi_dev_remove(struct platform_device *pdev) 164 + static void dsi_dev_remove(struct platform_device *pdev) 165 165 { 166 166 struct msm_dsi *msm_dsi = platform_get_drvdata(pdev); 167 167 168 168 DBG(""); 169 169 dsi_destroy(msm_dsi); 170 - 171 - return 0; 172 170 } 173 171 174 172 static const struct of_device_id dt_match[] = { ··· 185 187 186 188 static struct platform_driver dsi_driver = { 187 189 .probe = dsi_dev_probe, 188 - .remove = dsi_dev_remove, 190 + .remove_new = dsi_dev_remove, 189 191 .driver = { 190 192 .name = "msm_dsi", 191 193 .of_match_table = dt_match,
+2 -4
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 551 551 return ret; 552 552 } 553 553 554 - static int msm_hdmi_dev_remove(struct platform_device *pdev) 554 + static void msm_hdmi_dev_remove(struct platform_device *pdev) 555 555 { 556 556 struct hdmi *hdmi = dev_get_drvdata(&pdev->dev); 557 557 558 558 component_del(&pdev->dev, &msm_hdmi_ops); 559 559 560 560 msm_hdmi_put_phy(hdmi); 561 - 562 - return 0; 563 561 } 564 562 565 563 static const struct of_device_id msm_hdmi_dt_match[] = { ··· 572 574 573 575 static struct platform_driver msm_hdmi_driver = { 574 576 .probe = msm_hdmi_dev_probe, 575 - .remove = msm_hdmi_dev_remove, 577 + .remove_new = msm_hdmi_dev_remove, 576 578 .driver = { 577 579 .name = "hdmi_msm", 578 580 .of_match_table = msm_hdmi_dt_match,
+2 -4
drivers/gpu/drm/msm/hdmi/hdmi_phy.c
··· 177 177 return 0; 178 178 } 179 179 180 - static int msm_hdmi_phy_remove(struct platform_device *pdev) 180 + static void msm_hdmi_phy_remove(struct platform_device *pdev) 181 181 { 182 182 pm_runtime_disable(&pdev->dev); 183 - 184 - return 0; 185 183 } 186 184 187 185 static const struct of_device_id msm_hdmi_phy_dt_match[] = { ··· 198 200 199 201 static struct platform_driver msm_hdmi_phy_platform_driver = { 200 202 .probe = msm_hdmi_phy_probe, 201 - .remove = msm_hdmi_phy_remove, 203 + .remove_new = msm_hdmi_phy_remove, 202 204 .driver = { 203 205 .name = "msm_hdmi_phy", 204 206 .of_match_table = msm_hdmi_phy_dt_match,
+2 -4
drivers/gpu/drm/msm/msm_drv.c
··· 1278 1278 return msm_drv_probe(&pdev->dev, NULL); 1279 1279 } 1280 1280 1281 - static int msm_pdev_remove(struct platform_device *pdev) 1281 + static void msm_pdev_remove(struct platform_device *pdev) 1282 1282 { 1283 1283 component_master_del(&pdev->dev, &msm_drm_ops); 1284 - 1285 - return 0; 1286 1284 } 1287 1285 1288 1286 void msm_drv_shutdown(struct platform_device *pdev) ··· 1301 1303 1302 1304 static struct platform_driver msm_platform_driver = { 1303 1305 .probe = msm_pdev_probe, 1304 - .remove = msm_pdev_remove, 1306 + .remove_new = msm_pdev_remove, 1305 1307 .shutdown = msm_drv_shutdown, 1306 1308 .driver = { 1307 1309 .name = "msm",
+2 -4
drivers/gpu/drm/msm/msm_mdss.c
··· 497 497 return 0; 498 498 } 499 499 500 - static int mdss_remove(struct platform_device *pdev) 500 + static void mdss_remove(struct platform_device *pdev) 501 501 { 502 502 struct msm_mdss *mdss = platform_get_drvdata(pdev); 503 503 504 504 of_platform_depopulate(&pdev->dev); 505 505 506 506 msm_mdss_destroy(mdss); 507 - 508 - return 0; 509 507 } 510 508 511 509 static const struct msm_mdss_data msm8998_data = { ··· 627 629 628 630 static struct platform_driver mdss_platform_driver = { 629 631 .probe = mdss_probe, 630 - .remove = mdss_remove, 632 + .remove_new = mdss_remove, 631 633 .driver = { 632 634 .name = "msm-mdss", 633 635 .of_match_table = mdss_dt_match,
+11 -10
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 882 882 883 883 static void 884 884 nv50_msto_cleanup(struct drm_atomic_state *state, 885 - struct drm_dp_mst_topology_state *mst_state, 885 + struct drm_dp_mst_topology_state *new_mst_state, 886 886 struct drm_dp_mst_topology_mgr *mgr, 887 887 struct nv50_msto *msto) 888 888 { 889 889 struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 890 - struct drm_dp_mst_atomic_payload *payload = 891 - drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port); 890 + struct drm_dp_mst_atomic_payload *new_payload = 891 + drm_atomic_get_mst_payload_state(new_mst_state, msto->mstc->port); 892 + struct drm_dp_mst_topology_state *old_mst_state = 893 + drm_atomic_get_old_mst_topology_state(state, mgr); 894 + const struct drm_dp_mst_atomic_payload *old_payload = 895 + drm_atomic_get_mst_payload_state(old_mst_state, msto->mstc->port); 892 896 893 897 NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name); 894 898 895 899 if (msto->disabled) { 896 900 msto->mstc = NULL; 897 901 msto->disabled = false; 902 + drm_dp_remove_payload_part2(mgr, new_mst_state, old_payload, new_payload); 898 903 } else if (msto->enabled) { 899 - drm_dp_add_payload_part2(mgr, state, payload); 904 + drm_dp_add_payload_part2(mgr, state, new_payload); 900 905 msto->enabled = false; 901 906 } 902 907 } ··· 915 910 struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 916 911 struct nv50_mstc *mstc = msto->mstc; 917 912 struct nv50_mstm *mstm = mstc->mstm; 918 - struct drm_dp_mst_topology_state *old_mst_state; 919 - struct drm_dp_mst_atomic_payload *payload, *old_payload; 913 + struct drm_dp_mst_atomic_payload *payload; 920 914 921 915 NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name); 922 916 923 - old_mst_state = drm_atomic_get_old_mst_topology_state(state, mgr); 924 - 925 917 payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port); 926 - old_payload = drm_atomic_get_mst_payload_state(old_mst_state, mstc->port); 927 918 928 919 // TODO: Figure out if we want to do a better job 
of handling VCPI allocation failures here? 929 920 if (msto->disabled) { 930 - drm_dp_remove_payload(mgr, mst_state, old_payload, payload); 921 + drm_dp_remove_payload_part1(mgr, mst_state, payload); 931 922 932 923 nvif_outp_dp_mst_vcpi(&mstm->outp->outp, msto->head->base.index, 0, 0, 0, 0); 933 924 } else {
+5 -14
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 189 189 static inline void * 190 190 u_memcpya(uint64_t user, unsigned int nmemb, unsigned int size) 191 191 { 192 - void *mem; 193 - void __user *userptr = (void __force __user *)(uintptr_t)user; 192 + void __user *userptr = u64_to_user_ptr(user); 193 + size_t bytes; 194 194 195 - size *= nmemb; 196 - 197 - mem = kvmalloc(size, GFP_KERNEL); 198 - if (!mem) 199 - return ERR_PTR(-ENOMEM); 200 - 201 - if (copy_from_user(mem, userptr, size)) { 202 - u_free(mem); 203 - return ERR_PTR(-EFAULT); 204 - } 205 - 206 - return mem; 195 + if (unlikely(check_mul_overflow(nmemb, size, &bytes))) 196 + return ERR_PTR(-EOVERFLOW); 197 + return vmemdup_user(userptr, bytes); 207 198 } 208 199 209 200 #include <nvif/object.h>
+11
drivers/gpu/drm/panel/Kconfig
··· 244 244 The panel has a 1200(RGB)×1920 (WUXGA) resolution and uses 245 245 24 bit per pixel. 246 246 247 + config DRM_PANEL_JDI_LPM102A188A 248 + tristate "JDI LPM102A188A DSI panel" 249 + depends on OF && GPIOLIB 250 + depends on DRM_MIPI_DSI 251 + depends on BACKLIGHT_CLASS_DEVICE 252 + help 253 + Say Y here if you want to enable support for JDI LPM102A188A DSI 254 + command mode panel as found in Google Pixel C devices. 255 + The panel has a 2560×1800 resolution. It provides a MIPI DSI interface 256 + to the host. 257 + 247 258 config DRM_PANEL_JDI_R63452 248 259 tristate "JDI R63452 Full HD DSI panel" 249 260 depends on OF
+1
drivers/gpu/drm/panel/Makefile
··· 22 22 obj-$(CONFIG_DRM_PANEL_INNOLUX_P079ZCA) += panel-innolux-p079zca.o 23 23 obj-$(CONFIG_DRM_PANEL_JADARD_JD9365DA_H3) += panel-jadard-jd9365da-h3.o 24 24 obj-$(CONFIG_DRM_PANEL_JDI_LT070ME05000) += panel-jdi-lt070me05000.o 25 + obj-$(CONFIG_DRM_PANEL_JDI_LPM102A188A) += panel-jdi-lpm102a188a.o 25 26 obj-$(CONFIG_DRM_PANEL_JDI_R63452) += panel-jdi-fhd-r63452.o 26 27 obj-$(CONFIG_DRM_PANEL_KHADAS_TS050) += panel-khadas-ts050.o 27 28 obj-$(CONFIG_DRM_PANEL_KINGDISPLAY_KD097D04) += panel-kingdisplay-kd097d04.o
+551
drivers/gpu/drm/panel/panel-jdi-lpm102a188a.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2014 Google, Inc. 4 + * 5 + * Copyright (C) 2022 Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt> 6 + * 7 + * Adapted from the downstream Pixel C driver written by Sean Paul 8 + */ 9 + 10 + #include <linux/backlight.h> 11 + #include <linux/delay.h> 12 + #include <linux/gpio/consumer.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/regulator/consumer.h> 16 + 17 + #include <video/mipi_display.h> 18 + 19 + #include <drm/drm_crtc.h> 20 + #include <drm/drm_mipi_dsi.h> 21 + #include <drm/drm_panel.h> 22 + 23 + #define MCS_CMD_ACS_PROT 0xB0 24 + #define MCS_CMD_ACS_PROT_OFF (0 << 0) 25 + 26 + #define MCS_PWR_CTRL_FUNC 0xD0 27 + #define MCS_PWR_CTRL_PARAM1_DEFAULT (2 << 0) 28 + #define MCS_PWR_CTRL_PARAM1_VGH_210_DIV (1 << 4) 29 + #define MCS_PWR_CTRL_PARAM1_VGH_240_DIV (2 << 4) 30 + #define MCS_PWR_CTRL_PARAM1_VGH_280_DIV (3 << 4) 31 + #define MCS_PWR_CTRL_PARAM1_VGH_330_DIV (4 << 4) 32 + #define MCS_PWR_CTRL_PARAM1_VGH_410_DIV (5 << 4) 33 + #define MCS_PWR_CTRL_PARAM2_DEFAULT (9 << 4) 34 + #define MCS_PWR_CTRL_PARAM2_VGL_210_DIV (1 << 0) 35 + #define MCS_PWR_CTRL_PARAM2_VGL_240_DIV (2 << 0) 36 + #define MCS_PWR_CTRL_PARAM2_VGL_280_DIV (3 << 0) 37 + #define MCS_PWR_CTRL_PARAM2_VGL_330_DIV (4 << 0) 38 + #define MCS_PWR_CTRL_PARAM2_VGL_410_DIV (5 << 0) 39 + 40 + struct jdi_panel { 41 + struct drm_panel base; 42 + struct mipi_dsi_device *link1; 43 + struct mipi_dsi_device *link2; 44 + 45 + struct regulator *supply; 46 + struct regulator *ddi_supply; 47 + struct backlight_device *backlight; 48 + 49 + struct gpio_desc *enable_gpio; 50 + struct gpio_desc *reset_gpio; 51 + 52 + const struct drm_display_mode *mode; 53 + }; 54 + 55 + static inline struct jdi_panel *to_panel_jdi(struct drm_panel *panel) 56 + { 57 + return container_of(panel, struct jdi_panel, base); 58 + } 59 + 60 + static void jdi_wait_frames(struct jdi_panel *jdi, unsigned int frames) 61 + { 62 + unsigned int 
refresh = drm_mode_vrefresh(jdi->mode); 63 + 64 + if (WARN_ON(frames > refresh)) 65 + return; 66 + 67 + msleep(1000 / (refresh / frames)); 68 + } 69 + 70 + static int jdi_panel_disable(struct drm_panel *panel) 71 + { 72 + struct jdi_panel *jdi = to_panel_jdi(panel); 73 + 74 + backlight_disable(jdi->backlight); 75 + 76 + jdi_wait_frames(jdi, 2); 77 + 78 + return 0; 79 + } 80 + 81 + static int jdi_panel_unprepare(struct drm_panel *panel) 82 + { 83 + struct jdi_panel *jdi = to_panel_jdi(panel); 84 + int ret; 85 + 86 + ret = mipi_dsi_dcs_set_display_off(jdi->link1); 87 + if (ret < 0) 88 + dev_err(panel->dev, "failed to set display off: %d\n", ret); 89 + 90 + ret = mipi_dsi_dcs_set_display_off(jdi->link2); 91 + if (ret < 0) 92 + dev_err(panel->dev, "failed to set display off: %d\n", ret); 93 + 94 + /* Specified by JDI @ 50ms, subject to change */ 95 + msleep(50); 96 + 97 + ret = mipi_dsi_dcs_enter_sleep_mode(jdi->link1); 98 + if (ret < 0) 99 + dev_err(panel->dev, "failed to enter sleep mode: %d\n", ret); 100 + ret = mipi_dsi_dcs_enter_sleep_mode(jdi->link2); 101 + if (ret < 0) 102 + dev_err(panel->dev, "failed to enter sleep mode: %d\n", ret); 103 + 104 + /* Specified by JDI @ 150ms, subject to change */ 105 + msleep(150); 106 + 107 + gpiod_set_value(jdi->reset_gpio, 1); 108 + 109 + /* T4 = 1ms */ 110 + usleep_range(1000, 3000); 111 + 112 + gpiod_set_value(jdi->enable_gpio, 0); 113 + 114 + /* T5 = 2ms */ 115 + usleep_range(2000, 4000); 116 + 117 + regulator_disable(jdi->ddi_supply); 118 + 119 + /* T6 = 2ms plus some time to discharge capacitors */ 120 + usleep_range(7000, 9000); 121 + 122 + regulator_disable(jdi->supply); 123 + /* Specified by JDI @ 20ms, subject to change */ 124 + msleep(20); 125 + 126 + return ret; 127 + } 128 + 129 + static int jdi_setup_symmetrical_split(struct mipi_dsi_device *left, 130 + struct mipi_dsi_device *right, 131 + const struct drm_display_mode *mode) 132 + { 133 + int err; 134 + 135 + err = mipi_dsi_dcs_set_column_address(left, 0, 
mode->hdisplay / 2 - 1); 136 + if (err < 0) { 137 + dev_err(&left->dev, "failed to set column address: %d\n", err); 138 + return err; 139 + } 140 + 141 + err = mipi_dsi_dcs_set_column_address(right, 0, mode->hdisplay / 2 - 1); 142 + if (err < 0) { 143 + dev_err(&right->dev, "failed to set column address: %d\n", err); 144 + return err; 145 + } 146 + 147 + err = mipi_dsi_dcs_set_page_address(left, 0, mode->vdisplay - 1); 148 + if (err < 0) { 149 + dev_err(&left->dev, "failed to set page address: %d\n", err); 150 + return err; 151 + } 152 + 153 + err = mipi_dsi_dcs_set_page_address(right, 0, mode->vdisplay - 1); 154 + if (err < 0) { 155 + dev_err(&right->dev, "failed to set page address: %d\n", err); 156 + return err; 157 + } 158 + 159 + return 0; 160 + } 161 + 162 + static int jdi_write_dcdc_registers(struct jdi_panel *jdi) 163 + { 164 + /* Clear the manufacturer command access protection */ 165 + mipi_dsi_generic_write_seq(jdi->link1, MCS_CMD_ACS_PROT, 166 + MCS_CMD_ACS_PROT_OFF); 167 + mipi_dsi_generic_write_seq(jdi->link2, MCS_CMD_ACS_PROT, 168 + MCS_CMD_ACS_PROT_OFF); 169 + /* 170 + * Change the VGH/VGL divide rations to move the noise generated by the 171 + * TCONN. This should hopefully avoid interaction with the backlight 172 + * controller. 
173 + */ 174 + mipi_dsi_generic_write_seq(jdi->link1, MCS_PWR_CTRL_FUNC, 175 + MCS_PWR_CTRL_PARAM1_VGH_330_DIV | 176 + MCS_PWR_CTRL_PARAM1_DEFAULT, 177 + MCS_PWR_CTRL_PARAM2_VGL_410_DIV | 178 + MCS_PWR_CTRL_PARAM2_DEFAULT); 179 + 180 + mipi_dsi_generic_write_seq(jdi->link2, MCS_PWR_CTRL_FUNC, 181 + MCS_PWR_CTRL_PARAM1_VGH_330_DIV | 182 + MCS_PWR_CTRL_PARAM1_DEFAULT, 183 + MCS_PWR_CTRL_PARAM2_VGL_410_DIV | 184 + MCS_PWR_CTRL_PARAM2_DEFAULT); 185 + 186 + return 0; 187 + } 188 + 189 + static int jdi_panel_prepare(struct drm_panel *panel) 190 + { 191 + struct jdi_panel *jdi = to_panel_jdi(panel); 192 + int err; 193 + 194 + /* Disable backlight to avoid showing random pixels 195 + * with a conservative delay for it to take effect. 196 + */ 197 + backlight_disable(jdi->backlight); 198 + jdi_wait_frames(jdi, 3); 199 + 200 + jdi->link1->mode_flags |= MIPI_DSI_MODE_LPM; 201 + jdi->link2->mode_flags |= MIPI_DSI_MODE_LPM; 202 + 203 + err = regulator_enable(jdi->supply); 204 + if (err < 0) { 205 + dev_err(panel->dev, "failed to enable supply: %d\n", err); 206 + return err; 207 + } 208 + /* T1 = 2ms */ 209 + usleep_range(2000, 4000); 210 + 211 + err = regulator_enable(jdi->ddi_supply); 212 + if (err < 0) { 213 + dev_err(panel->dev, "failed to enable ddi_supply: %d\n", err); 214 + goto supply_off; 215 + } 216 + /* T2 = 1ms */ 217 + usleep_range(1000, 3000); 218 + 219 + gpiod_set_value(jdi->enable_gpio, 1); 220 + /* T3 = 10ms */ 221 + usleep_range(10000, 15000); 222 + 223 + gpiod_set_value(jdi->reset_gpio, 0); 224 + /* Specified by JDI @ 3ms, subject to change */ 225 + usleep_range(3000, 5000); 226 + 227 + /* 228 + * TODO: The device supports both left-right and even-odd split 229 + * configurations, but this driver currently supports only the left- 230 + * right split. To support a different mode a mechanism needs to be 231 + * put in place to communicate the configuration back to the DSI host 232 + * controller. 
233 + */ 234 + err = jdi_setup_symmetrical_split(jdi->link1, jdi->link2, 235 + jdi->mode); 236 + if (err < 0) { 237 + dev_err(panel->dev, "failed to set up symmetrical split: %d\n", 238 + err); 239 + goto poweroff; 240 + } 241 + 242 + err = mipi_dsi_dcs_set_tear_scanline(jdi->link1, 243 + jdi->mode->vdisplay - 16); 244 + if (err < 0) { 245 + dev_err(panel->dev, "failed to set tear scanline: %d\n", err); 246 + goto poweroff; 247 + } 248 + 249 + err = mipi_dsi_dcs_set_tear_scanline(jdi->link2, 250 + jdi->mode->vdisplay - 16); 251 + if (err < 0) { 252 + dev_err(panel->dev, "failed to set tear scanline: %d\n", err); 253 + goto poweroff; 254 + } 255 + 256 + err = mipi_dsi_dcs_set_tear_on(jdi->link1, 257 + MIPI_DSI_DCS_TEAR_MODE_VBLANK); 258 + if (err < 0) { 259 + dev_err(panel->dev, "failed to set tear on: %d\n", err); 260 + goto poweroff; 261 + } 262 + 263 + err = mipi_dsi_dcs_set_tear_on(jdi->link2, 264 + MIPI_DSI_DCS_TEAR_MODE_VBLANK); 265 + if (err < 0) { 266 + dev_err(panel->dev, "failed to set tear on: %d\n", err); 267 + goto poweroff; 268 + } 269 + 270 + err = mipi_dsi_dcs_set_pixel_format(jdi->link1, MIPI_DCS_PIXEL_FMT_24BIT); 271 + if (err < 0) { 272 + dev_err(panel->dev, "failed to set pixel format: %d\n", err); 273 + goto poweroff; 274 + } 275 + 276 + err = mipi_dsi_dcs_set_pixel_format(jdi->link2, MIPI_DCS_PIXEL_FMT_24BIT); 277 + if (err < 0) { 278 + dev_err(panel->dev, "failed to set pixel format: %d\n", err); 279 + goto poweroff; 280 + } 281 + 282 + err = mipi_dsi_dcs_exit_sleep_mode(jdi->link1); 283 + if (err < 0) { 284 + dev_err(panel->dev, "failed to exit sleep mode: %d\n", err); 285 + goto poweroff; 286 + } 287 + 288 + err = mipi_dsi_dcs_exit_sleep_mode(jdi->link2); 289 + if (err < 0) { 290 + dev_err(panel->dev, "failed to exit sleep mode: %d\n", err); 291 + goto poweroff; 292 + } 293 + 294 + err = jdi_write_dcdc_registers(jdi); 295 + if (err < 0) { 296 + dev_err(panel->dev, "failed to write dcdc registers: %d\n", err); 297 + goto poweroff; 298 + } 299 
+ /* 300 + * We need to wait 150ms between mipi_dsi_dcs_exit_sleep_mode() and 301 + * mipi_dsi_dcs_set_display_on(). 302 + */ 303 + msleep(150); 304 + 305 + err = mipi_dsi_dcs_set_display_on(jdi->link1); 306 + if (err < 0) { 307 + dev_err(panel->dev, "failed to set display on: %d\n", err); 308 + goto poweroff; 309 + } 310 + 311 + err = mipi_dsi_dcs_set_display_on(jdi->link2); 312 + if (err < 0) { 313 + dev_err(panel->dev, "failed to set display on: %d\n", err); 314 + goto poweroff; 315 + } 316 + 317 + jdi->link1->mode_flags &= ~MIPI_DSI_MODE_LPM; 318 + jdi->link2->mode_flags &= ~MIPI_DSI_MODE_LPM; 319 + 320 + return 0; 321 + 322 + poweroff: 323 + regulator_disable(jdi->ddi_supply); 324 + 325 + /* T6 = 2ms plus some time to discharge capacitors */ 326 + usleep_range(7000, 9000); 327 + supply_off: 328 + regulator_disable(jdi->supply); 329 + /* Specified by JDI @ 20ms, subject to change */ 330 + msleep(20); 331 + 332 + return err; 333 + } 334 + 335 + static int jdi_panel_enable(struct drm_panel *panel) 336 + { 337 + struct jdi_panel *jdi = to_panel_jdi(panel); 338 + 339 + /* 340 + * Ensure we send image data before turning the backlight 341 + * on, to avoid the display showing random pixels. 
342 + */ 343 + jdi_wait_frames(jdi, 3); 344 + 345 + backlight_enable(jdi->backlight); 346 + 347 + return 0; 348 + } 349 + 350 + static const struct drm_display_mode default_mode = { 351 + .clock = (2560 + 80 + 80 + 80) * (1800 + 4 + 4 + 4) * 60 / 1000, 352 + .hdisplay = 2560, 353 + .hsync_start = 2560 + 80, 354 + .hsync_end = 2560 + 80 + 80, 355 + .htotal = 2560 + 80 + 80 + 80, 356 + .vdisplay = 1800, 357 + .vsync_start = 1800 + 4, 358 + .vsync_end = 1800 + 4 + 4, 359 + .vtotal = 1800 + 4 + 4 + 4, 360 + .flags = 0, 361 + }; 362 + 363 + static int jdi_panel_get_modes(struct drm_panel *panel, 364 + struct drm_connector *connector) 365 + { 366 + struct drm_display_mode *mode; 367 + struct jdi_panel *jdi = to_panel_jdi(panel); 368 + struct device *dev = &jdi->link1->dev; 369 + 370 + mode = drm_mode_duplicate(connector->dev, &default_mode); 371 + if (!mode) { 372 + dev_err(dev, "failed to add mode %ux%ux@%u\n", 373 + default_mode.hdisplay, default_mode.vdisplay, 374 + drm_mode_vrefresh(&default_mode)); 375 + return -ENOMEM; 376 + } 377 + 378 + drm_mode_set_name(mode); 379 + 380 + drm_mode_probed_add(connector, mode); 381 + 382 + connector->display_info.width_mm = 211; 383 + connector->display_info.height_mm = 148; 384 + connector->display_info.bpc = 8; 385 + 386 + return 1; 387 + } 388 + 389 + static const struct drm_panel_funcs jdi_panel_funcs = { 390 + .prepare = jdi_panel_prepare, 391 + .enable = jdi_panel_enable, 392 + .disable = jdi_panel_disable, 393 + .unprepare = jdi_panel_unprepare, 394 + .get_modes = jdi_panel_get_modes, 395 + }; 396 + 397 + static const struct of_device_id jdi_of_match[] = { 398 + { .compatible = "jdi,lpm102a188a", }, 399 + { } 400 + }; 401 + MODULE_DEVICE_TABLE(of, jdi_of_match); 402 + 403 + static int jdi_panel_add(struct jdi_panel *jdi) 404 + { 405 + struct device *dev = &jdi->link1->dev; 406 + 407 + jdi->mode = &default_mode; 408 + 409 + jdi->supply = devm_regulator_get(dev, "power"); 410 + if (IS_ERR(jdi->supply)) 411 + return 
dev_err_probe(dev, PTR_ERR(jdi->supply), 412 + "failed to get power regulator\n"); 413 + 414 + jdi->ddi_supply = devm_regulator_get(dev, "ddi"); 415 + if (IS_ERR(jdi->ddi_supply)) 416 + return dev_err_probe(dev, PTR_ERR(jdi->ddi_supply), 417 + "failed to get ddi regulator\n"); 418 + 419 + jdi->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 420 + if (IS_ERR(jdi->reset_gpio)) 421 + return dev_err_probe(dev, PTR_ERR(jdi->reset_gpio), 422 + "failed to get reset gpio\n"); 423 + /* T4 = 1ms */ 424 + usleep_range(1000, 3000); 425 + 426 + jdi->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW); 427 + if (IS_ERR(jdi->enable_gpio)) 428 + return dev_err_probe(dev, PTR_ERR(jdi->enable_gpio), 429 + "failed to get enable gpio\n"); 430 + /* T5 = 2ms */ 431 + usleep_range(2000, 4000); 432 + 433 + jdi->backlight = devm_of_find_backlight(dev); 434 + if (IS_ERR(jdi->backlight)) 435 + return dev_err_probe(dev, PTR_ERR(jdi->backlight), 436 + "failed to create backlight\n"); 437 + 438 + drm_panel_init(&jdi->base, &jdi->link1->dev, &jdi_panel_funcs, 439 + DRM_MODE_CONNECTOR_DSI); 440 + 441 + drm_panel_add(&jdi->base); 442 + 443 + return 0; 444 + } 445 + 446 + static void jdi_panel_del(struct jdi_panel *jdi) 447 + { 448 + if (jdi->base.dev) 449 + drm_panel_remove(&jdi->base); 450 + 451 + if (jdi->link2) 452 + put_device(&jdi->link2->dev); 453 + } 454 + 455 + static int jdi_panel_dsi_probe(struct mipi_dsi_device *dsi) 456 + { 457 + struct mipi_dsi_device *secondary = NULL; 458 + struct jdi_panel *jdi; 459 + struct device_node *np; 460 + int err; 461 + 462 + dsi->lanes = 4; 463 + dsi->format = MIPI_DSI_FMT_RGB888; 464 + dsi->mode_flags = 0; 465 + 466 + /* Find DSI-LINK1 */ 467 + np = of_parse_phandle(dsi->dev.of_node, "link2", 0); 468 + if (np) { 469 + secondary = of_find_mipi_dsi_device_by_node(np); 470 + of_node_put(np); 471 + 472 + if (!secondary) 473 + return -EPROBE_DEFER; 474 + } 475 + 476 + /* register a panel for only the DSI-LINK1 interface */ 477 + if 
(secondary) { 478 + jdi = devm_kzalloc(&dsi->dev, sizeof(*jdi), GFP_KERNEL); 479 + if (!jdi) { 480 + put_device(&secondary->dev); 481 + return -ENOMEM; 482 + } 483 + 484 + mipi_dsi_set_drvdata(dsi, jdi); 485 + 486 + jdi->link1 = dsi; 487 + jdi->link2 = secondary; 488 + 489 + err = jdi_panel_add(jdi); 490 + if (err < 0) { 491 + put_device(&secondary->dev); 492 + return err; 493 + } 494 + } 495 + 496 + err = mipi_dsi_attach(dsi); 497 + if (err < 0) { 498 + if (secondary) 499 + jdi_panel_del(jdi); 500 + 501 + return err; 502 + } 503 + 504 + return 0; 505 + } 506 + 507 + static void jdi_panel_dsi_remove(struct mipi_dsi_device *dsi) 508 + { 509 + struct jdi_panel *jdi = mipi_dsi_get_drvdata(dsi); 510 + int err; 511 + 512 + /* only detach from host for the DSI-LINK2 interface */ 513 + if (!jdi) { 514 + mipi_dsi_detach(dsi); 515 + return; 516 + } 517 + 518 + err = jdi_panel_disable(&jdi->base); 519 + if (err < 0) 520 + dev_err(&dsi->dev, "failed to disable panel: %d\n", err); 521 + 522 + err = mipi_dsi_detach(dsi); 523 + if (err < 0) 524 + dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", err); 525 + 526 + jdi_panel_del(jdi); 527 + } 528 + 529 + static void jdi_panel_dsi_shutdown(struct mipi_dsi_device *dsi) 530 + { 531 + struct jdi_panel *jdi = mipi_dsi_get_drvdata(dsi); 532 + 533 + if (!jdi) 534 + return; 535 + 536 + jdi_panel_disable(&jdi->base); 537 + } 538 + 539 + static struct mipi_dsi_driver jdi_panel_dsi_driver = { 540 + .driver = { 541 + .name = "panel-jdi-lpm102a188a", 542 + .of_match_table = jdi_of_match, 543 + }, 544 + .probe = jdi_panel_dsi_probe, 545 + .remove = jdi_panel_dsi_remove, 546 + .shutdown = jdi_panel_dsi_shutdown, 547 + }; 548 + module_mipi_dsi_driver(jdi_panel_dsi_driver); 549 + 550 + MODULE_AUTHOR("Sean Paul <seanpaul@chromium.org>"); 551 + MODULE_AUTHOR("Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt>"); 552 + MODULE_DESCRIPTION("DRM Driver for JDI LPM102A188A DSI panel, command mode"); 553 + MODULE_LICENSE("GPL");
-4
drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
··· 5 5 * 6 6 * Copyright (C) 2016 Linaro Ltd 7 7 * Author: Sumit Semwal <sumit.semwal@linaro.org> 8 - * 9 - * From internet archives, the panel for Nexus 7 2nd Gen, 2013 model is a 10 - * JDI model LT070ME05000, and its data sheet is at: 11 - * http://panelone.net/en/7-0-inch/JDI_LT070ME05000_7.0_inch-datasheet 12 8 */ 13 9 14 10 #include <linux/backlight.h>
+29
drivers/gpu/drm/panel/panel-simple.c
··· 2793 2793 .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2794 2794 }; 2795 2795 2796 + static const struct drm_display_mode mitsubishi_aa084xe01_mode = { 2797 + .clock = 56234, 2798 + .hdisplay = 1024, 2799 + .hsync_start = 1024 + 24, 2800 + .hsync_end = 1024 + 24 + 63, 2801 + .htotal = 1024 + 24 + 63 + 1, 2802 + .vdisplay = 768, 2803 + .vsync_start = 768 + 3, 2804 + .vsync_end = 768 + 3 + 6, 2805 + .vtotal = 768 + 3 + 6 + 1, 2806 + .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC, 2807 + }; 2808 + 2809 + static const struct panel_desc mitsubishi_aa084xe01 = { 2810 + .modes = &mitsubishi_aa084xe01_mode, 2811 + .num_modes = 1, 2812 + .bpc = 8, 2813 + .size = { 2814 + .width = 1024, 2815 + .height = 768, 2816 + }, 2817 + .bus_format = MEDIA_BUS_FMT_RGB565_1X16, 2818 + .connector_type = DRM_MODE_CONNECTOR_DPI, 2819 + .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE, 2820 + }; 2821 + 2796 2822 static const struct display_timing multi_inno_mi0700s4t_6_timing = { 2797 2823 .pixelclock = { 29000000, 33000000, 38000000 }, 2798 2824 .hactive = { 800, 800, 800 }, ··· 4347 4321 }, { 4348 4322 .compatible = "mitsubishi,aa070mc01-ca1", 4349 4323 .data = &mitsubishi_aa070mc01, 4324 + }, { 4325 + .compatible = "mitsubishi,aa084xe01", 4326 + .data = &mitsubishi_aa084xe01, 4350 4327 }, { 4351 4328 .compatible = "multi-inno,mi0700s4t-6", 4352 4329 .data = &multi_inno_mi0700s4t_6,
+2 -2
drivers/gpu/drm/panfrost/panfrost_gpu.c
··· 390 390 dma_set_max_seg_size(pfdev->dev, UINT_MAX); 391 391 392 392 irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu"); 393 - if (irq <= 0) 394 - return -ENODEV; 393 + if (irq < 0) 394 + return irq; 395 395 396 396 err = devm_request_irq(pfdev->dev, irq, panfrost_gpu_irq_handler, 397 397 IRQF_SHARED, KBUILD_MODNAME "-gpu", pfdev);
+2 -2
drivers/gpu/drm/panfrost/panfrost_job.c
··· 810 810 spin_lock_init(&js->job_lock); 811 811 812 812 js->irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "job"); 813 - if (js->irq <= 0) 814 - return -ENODEV; 813 + if (js->irq < 0) 814 + return js->irq; 815 815 816 816 ret = devm_request_threaded_irq(pfdev->dev, js->irq, 817 817 panfrost_job_irq_handler,
+2 -2
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 755 755 int err, irq; 756 756 757 757 irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "mmu"); 758 - if (irq <= 0) 759 - return -ENODEV; 758 + if (irq < 0) 759 + return irq; 760 760 761 761 err = devm_request_threaded_irq(pfdev->dev, irq, 762 762 panfrost_mmu_irq_handler,
+2 -4
drivers/gpu/drm/renesas/shmobile/shmob_drm_drv.c
··· 172 172 * Platform driver 173 173 */ 174 174 175 - static int shmob_drm_remove(struct platform_device *pdev) 175 + static void shmob_drm_remove(struct platform_device *pdev) 176 176 { 177 177 struct shmob_drm_device *sdev = platform_get_drvdata(pdev); 178 178 struct drm_device *ddev = sdev->ddev; ··· 181 181 drm_kms_helper_poll_fini(ddev); 182 182 free_irq(sdev->irq, ddev); 183 183 drm_dev_put(ddev); 184 - 185 - return 0; 186 184 } 187 185 188 186 static int shmob_drm_probe(struct platform_device *pdev) ··· 286 288 287 289 static struct platform_driver shmob_drm_platform_driver = { 288 290 .probe = shmob_drm_probe, 289 - .remove = shmob_drm_remove, 291 + .remove_new = shmob_drm_remove, 290 292 .driver = { 291 293 .name = "shmob-drm", 292 294 .pm = pm_sleep_ptr(&shmob_drm_pm_ops),
+20
drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
··· 198 198 #define RK3568_DSI1_TURNDISABLE BIT(2) 199 199 #define RK3568_DSI1_FORCERXMODE BIT(0) 200 200 201 + #define RV1126_GRF_DSIPHY_CON 0x10220 202 + #define RV1126_DSI_FORCETXSTOPMODE (0xf << 4) 203 + #define RV1126_DSI_TURNDISABLE BIT(2) 204 + #define RV1126_DSI_FORCERXMODE BIT(0) 205 + 201 206 #define HIWORD_UPDATE(val, mask) (val | (mask) << 16) 202 207 203 208 enum { ··· 1656 1651 { /* sentinel */ } 1657 1652 }; 1658 1653 1654 + static const struct rockchip_dw_dsi_chip_data rv1126_chip_data[] = { 1655 + { 1656 + .reg = 0xffb30000, 1657 + .lanecfg1_grf_reg = RV1126_GRF_DSIPHY_CON, 1658 + .lanecfg1 = HIWORD_UPDATE(0, RV1126_DSI_TURNDISABLE | 1659 + RV1126_DSI_FORCERXMODE | 1660 + RV1126_DSI_FORCETXSTOPMODE), 1661 + .max_data_lanes = 4, 1662 + }, 1663 + { /* sentinel */ } 1664 + }; 1665 + 1659 1666 static const struct of_device_id dw_mipi_dsi_rockchip_dt_ids[] = { 1660 1667 { 1661 1668 .compatible = "rockchip,px30-mipi-dsi", ··· 1681 1664 }, { 1682 1665 .compatible = "rockchip,rk3568-mipi-dsi", 1683 1666 .data = &rk3568_chip_data, 1667 + }, { 1668 + .compatible = "rockchip,rv1126-mipi-dsi", 1669 + .data = &rv1126_chip_data, 1684 1670 }, 1685 1671 { /* sentinel */ } 1686 1672 };
+9 -15
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 765 765 } 766 766 } 767 767 768 - static void vop_plane_destroy(struct drm_plane *plane) 769 - { 770 - drm_plane_cleanup(plane); 771 - } 772 - 773 768 static inline bool rockchip_afbc(u64 modifier) 774 769 { 775 770 return modifier == ROCKCHIP_AFBC_MOD; ··· 1126 1131 static const struct drm_plane_funcs vop_plane_funcs = { 1127 1132 .update_plane = drm_atomic_helper_update_plane, 1128 1133 .disable_plane = drm_atomic_helper_disable_plane, 1129 - .destroy = vop_plane_destroy, 1134 + .destroy = drm_plane_cleanup, 1130 1135 .reset = drm_atomic_helper_plane_reset, 1131 1136 .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 1132 1137 .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, ··· 1597 1602 .atomic_disable = vop_crtc_atomic_disable, 1598 1603 }; 1599 1604 1600 - static void vop_crtc_destroy(struct drm_crtc *crtc) 1601 - { 1602 - drm_crtc_cleanup(crtc); 1603 - } 1604 - 1605 1605 static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc) 1606 1606 { 1607 1607 struct rockchip_crtc_state *rockchip_state; ··· 1604 1614 if (WARN_ON(!crtc->state)) 1605 1615 return NULL; 1606 1616 1607 - rockchip_state = kzalloc(sizeof(*rockchip_state), GFP_KERNEL); 1617 + rockchip_state = kmemdup(to_rockchip_crtc_state(crtc->state), 1618 + sizeof(*rockchip_state), GFP_KERNEL); 1608 1619 if (!rockchip_state) 1609 1620 return NULL; 1610 1621 ··· 1630 1639 if (crtc->state) 1631 1640 vop_crtc_destroy_state(crtc, crtc->state); 1632 1641 1633 - __drm_atomic_helper_crtc_reset(crtc, &crtc_state->base); 1642 + if (crtc_state) 1643 + __drm_atomic_helper_crtc_reset(crtc, &crtc_state->base); 1644 + else 1645 + __drm_atomic_helper_crtc_reset(crtc, NULL); 1634 1646 } 1635 1647 1636 1648 #ifdef CONFIG_DRM_ANALOGIX_DP ··· 1704 1710 static const struct drm_crtc_funcs vop_crtc_funcs = { 1705 1711 .set_config = drm_atomic_helper_set_config, 1706 1712 .page_flip = drm_atomic_helper_page_flip, 1707 - .destroy = vop_crtc_destroy, 1713 + .destroy = 
drm_crtc_cleanup, 1708 1714 .reset = vop_crtc_reset, 1709 1715 .atomic_duplicate_state = vop_crtc_duplicate_state, 1710 1716 .atomic_destroy_state = vop_crtc_destroy_state, ··· 1955 1961 */ 1956 1962 list_for_each_entry_safe(plane, tmp, &drm_dev->mode_config.plane_list, 1957 1963 head) 1958 - vop_plane_destroy(plane); 1964 + drm_plane_cleanup(plane); 1959 1965 1960 1966 /* 1961 1967 * Destroy CRTC after vop_plane_destroy() since vop_disable_plane()
+19 -20
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 2079 2079 .atomic_disable = vop2_crtc_atomic_disable, 2080 2080 }; 2081 2081 2082 - static void vop2_crtc_reset(struct drm_crtc *crtc) 2083 - { 2084 - struct rockchip_crtc_state *vcstate = to_rockchip_crtc_state(crtc->state); 2085 - 2086 - if (crtc->state) { 2087 - __drm_atomic_helper_crtc_destroy_state(crtc->state); 2088 - kfree(vcstate); 2089 - } 2090 - 2091 - vcstate = kzalloc(sizeof(*vcstate), GFP_KERNEL); 2092 - if (!vcstate) 2093 - return; 2094 - 2095 - crtc->state = &vcstate->base; 2096 - crtc->state->crtc = crtc; 2097 - } 2098 - 2099 2082 static struct drm_crtc_state *vop2_crtc_duplicate_state(struct drm_crtc *crtc) 2100 2083 { 2101 - struct rockchip_crtc_state *vcstate, *old_vcstate; 2084 + struct rockchip_crtc_state *vcstate; 2102 2085 2103 - old_vcstate = to_rockchip_crtc_state(crtc->state); 2086 + if (WARN_ON(!crtc->state)) 2087 + return NULL; 2104 2088 2105 - vcstate = kmemdup(old_vcstate, sizeof(*old_vcstate), GFP_KERNEL); 2089 + vcstate = kmemdup(to_rockchip_crtc_state(crtc->state), 2090 + sizeof(*vcstate), GFP_KERNEL); 2106 2091 if (!vcstate) 2107 2092 return NULL; 2108 2093 ··· 2103 2118 2104 2119 __drm_atomic_helper_crtc_destroy_state(&vcstate->base); 2105 2120 kfree(vcstate); 2121 + } 2122 + 2123 + static void vop2_crtc_reset(struct drm_crtc *crtc) 2124 + { 2125 + struct rockchip_crtc_state *vcstate = 2126 + kzalloc(sizeof(*vcstate), GFP_KERNEL); 2127 + 2128 + if (crtc->state) 2129 + vop2_crtc_destroy_state(crtc, crtc->state); 2130 + 2131 + if (vcstate) 2132 + __drm_atomic_helper_crtc_reset(crtc, &vcstate->base); 2133 + else 2134 + __drm_atomic_helper_crtc_reset(crtc, NULL); 2106 2135 } 2107 2136 2108 2137 static const struct drm_crtc_funcs vop2_crtc_funcs = {
+55
drivers/gpu/drm/rockchip/rockchip_vop_reg.c
··· 1120 1120 .max_output = { 4096, 2160 }, 1121 1121 }; 1122 1122 1123 + static const struct vop_common rv1126_common = { 1124 + .standby = VOP_REG_SYNC(PX30_SYS_CTRL2, 0x1, 1), 1125 + .out_mode = VOP_REG(PX30_DSP_CTRL2, 0xf, 16), 1126 + .dsp_blank = VOP_REG(PX30_DSP_CTRL2, 0x1, 14), 1127 + .dither_down_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 8), 1128 + .dither_down_sel = VOP_REG(PX30_DSP_CTRL2, 0x1, 7), 1129 + .dither_down_mode = VOP_REG(PX30_DSP_CTRL2, 0x1, 6), 1130 + .cfg_done = VOP_REG_SYNC(PX30_REG_CFG_DONE, 0x1, 0), 1131 + .dither_up = VOP_REG(PX30_DSP_CTRL2, 0x1, 2), 1132 + .dsp_lut_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 5), 1133 + .gate_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 0), 1134 + }; 1135 + 1136 + static const struct vop_modeset rv1126_modeset = { 1137 + .htotal_pw = VOP_REG(PX30_DSP_HTOTAL_HS_END, 0x0fff0fff, 0), 1138 + .hact_st_end = VOP_REG(PX30_DSP_HACT_ST_END, 0x0fff0fff, 0), 1139 + .vtotal_pw = VOP_REG(PX30_DSP_VTOTAL_VS_END, 0x0fff0fff, 0), 1140 + .vact_st_end = VOP_REG(PX30_DSP_VACT_ST_END, 0x0fff0fff, 0), 1141 + }; 1142 + 1143 + static const struct vop_output rv1126_output = { 1144 + .rgb_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 1), 1145 + .rgb_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 2), 1146 + .rgb_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 0), 1147 + .mipi_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 25), 1148 + .mipi_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 26), 1149 + .mipi_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 24), 1150 + }; 1151 + 1152 + static const struct vop_misc rv1126_misc = { 1153 + .global_regdone_en = VOP_REG(PX30_SYS_CTRL2, 0x1, 13), 1154 + }; 1155 + 1156 + static const struct vop_win_data rv1126_vop_win_data[] = { 1157 + { .base = 0x00, .phy = &px30_win0_data, 1158 + .type = DRM_PLANE_TYPE_OVERLAY }, 1159 + { .base = 0x00, .phy = &px30_win2_data, 1160 + .type = DRM_PLANE_TYPE_PRIMARY }, 1161 + }; 1162 + 1163 + static const struct vop_data rv1126_vop = { 1164 + .version = VOP_VERSION(2, 0xb), 1165 + .intr = &px30_intr, 1166 + .common = &rv1126_common, 1167 + 
.modeset = &rv1126_modeset, 1168 + .output = &rv1126_output, 1169 + .misc = &rv1126_misc, 1170 + .win = rv1126_vop_win_data, 1171 + .win_size = ARRAY_SIZE(rv1126_vop_win_data), 1172 + .max_output = { 1920, 1080 }, 1173 + .lut_size = 1024, 1174 + }; 1175 + 1123 1176 static const struct of_device_id vop_driver_dt_match[] = { 1124 1177 { .compatible = "rockchip,rk3036-vop", 1125 1178 .data = &rk3036_vop }, ··· 1200 1147 .data = &rk3228_vop }, 1201 1148 { .compatible = "rockchip,rk3328-vop", 1202 1149 .data = &rk3328_vop }, 1150 + { .compatible = "rockchip,rv1126-vop", 1151 + .data = &rv1126_vop }, 1203 1152 {}, 1204 1153 }; 1205 1154 MODULE_DEVICE_TABLE(of, vop_driver_dt_match);
+40 -9
drivers/gpu/drm/solomon/ssd130x.c
··· 272 272 /* Enable the PWM */ 273 273 pwm_enable(ssd130x->pwm); 274 274 275 - dev_dbg(dev, "Using PWM%d with a %lluns period.\n", 276 - ssd130x->pwm->pwm, pwm_get_period(ssd130x->pwm)); 275 + dev_dbg(dev, "Using PWM %s with a %lluns period.\n", 276 + ssd130x->pwm->label, pwm_get_period(ssd130x->pwm)); 277 277 278 278 return 0; 279 279 } ··· 553 553 static void ssd130x_clear_screen(struct ssd130x_device *ssd130x, 554 554 struct ssd130x_plane_state *ssd130x_state) 555 555 { 556 - struct drm_rect fullscreen = { 557 - .x1 = 0, 558 - .x2 = ssd130x->width, 559 - .y1 = 0, 560 - .y2 = ssd130x->height, 561 - }; 556 + unsigned int page_height = ssd130x->device_info->page_height; 557 + unsigned int pages = DIV_ROUND_UP(ssd130x->height, page_height); 558 + u8 *data_array = ssd130x_state->data_array; 559 + unsigned int width = ssd130x->width; 560 + int ret, i; 562 561 563 - ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen); 562 + if (!ssd130x->page_address_mode) { 563 + memset(data_array, 0, width * pages); 564 + 565 + /* Set address range for horizontal addressing mode */ 566 + ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset, width); 567 + if (ret < 0) 568 + return; 569 + 570 + ret = ssd130x_set_page_range(ssd130x, ssd130x->page_offset, pages); 571 + if (ret < 0) 572 + return; 573 + 574 + /* Write out update in one go if we aren't using page addressing mode */ 575 + ssd130x_write_data(ssd130x, data_array, width * pages); 576 + } else { 577 + /* 578 + * In page addressing mode, the start address needs to be reset, 579 + * and each page then needs to be written out separately. 
580 + */ 581 + memset(data_array, 0, width); 582 + 583 + for (i = 0; i < pages; i++) { 584 + ret = ssd130x_set_page_pos(ssd130x, 585 + ssd130x->page_offset + i, 586 + ssd130x->col_offset); 587 + if (ret < 0) 588 + return; 589 + 590 + ret = ssd130x_write_data(ssd130x, data_array, width); 591 + if (ret < 0) 592 + return; 593 + } 594 + } 564 595 } 565 596 566 597 static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
+2 -2
drivers/gpu/drm/solomon/ssd130x.h
··· 40 40 u32 default_width; 41 41 u32 default_height; 42 42 u32 page_height; 43 - int need_pwm; 44 - int need_chargepump; 43 + bool need_pwm; 44 + bool need_chargepump; 45 45 bool page_mode_only; 46 46 }; 47 47
+8 -1
drivers/gpu/drm/tegra/dc.c
··· 1746 1746 unsigned int count = ARRAY_SIZE(debugfs_files); 1747 1747 struct drm_minor *minor = crtc->dev->primary; 1748 1748 struct tegra_dc *dc = to_tegra_dc(crtc); 1749 + struct dentry *root; 1749 1750 1750 - drm_debugfs_remove_files(dc->debugfs_files, count, minor); 1751 + #ifdef CONFIG_DEBUG_FS 1752 + root = crtc->debugfs_entry; 1753 + #else 1754 + root = NULL; 1755 + #endif 1756 + 1757 + drm_debugfs_remove_files(dc->debugfs_files, count, root, minor); 1751 1758 kfree(dc->debugfs_files); 1752 1759 dc->debugfs_files = NULL; 1753 1760 }
+1
drivers/gpu/drm/tegra/dsi.c
··· 256 256 struct tegra_dsi *dsi = to_dsi(output); 257 257 258 258 drm_debugfs_remove_files(dsi->debugfs_files, count, 259 + connector->debugfs_entry, 259 260 connector->dev->primary); 260 261 kfree(dsi->debugfs_files); 261 262 dsi->debugfs_files = NULL;
+2 -1
drivers/gpu/drm/tegra/hdmi.c
··· 1116 1116 unsigned int count = ARRAY_SIZE(debugfs_files); 1117 1117 struct tegra_hdmi *hdmi = to_hdmi(output); 1118 1118 1119 - drm_debugfs_remove_files(hdmi->debugfs_files, count, minor); 1119 + drm_debugfs_remove_files(hdmi->debugfs_files, count, 1120 + connector->debugfs_entry, minor); 1120 1121 kfree(hdmi->debugfs_files); 1121 1122 hdmi->debugfs_files = NULL; 1122 1123 }
+1
drivers/gpu/drm/tegra/sor.c
··· 1708 1708 struct tegra_sor *sor = to_sor(output); 1709 1709 1710 1710 drm_debugfs_remove_files(sor->debugfs_files, count, 1711 + connector->debugfs_entry, 1711 1712 connector->dev->primary); 1712 1713 kfree(sor->debugfs_files); 1713 1714 sor->debugfs_files = NULL;
+756 -57
drivers/gpu/drm/tests/drm_format_helper_test.c
··· 3 3 #include <kunit/test.h> 4 4 5 5 #include <drm/drm_device.h> 6 + #include <drm/drm_drv.h> 6 7 #include <drm/drm_file.h> 7 8 #include <drm/drm_format_helper.h> 8 9 #include <drm/drm_fourcc.h> 9 10 #include <drm/drm_framebuffer.h> 10 11 #include <drm/drm_gem_framebuffer_helper.h> 12 + #include <drm/drm_kunit_helpers.h> 11 13 #include <drm/drm_mode.h> 12 14 #include <drm/drm_print.h> 13 15 #include <drm/drm_rect.h> ··· 17 15 #include "../drm_crtc_internal.h" 18 16 19 17 #define TEST_BUF_SIZE 50 18 + 19 + #define TEST_USE_DEFAULT_PITCH 0 20 20 21 21 struct convert_to_gray8_result { 22 22 unsigned int dst_pitch; ··· 76 72 const u8 expected[TEST_BUF_SIZE]; 77 73 }; 78 74 75 + struct fb_swab_result { 76 + unsigned int dst_pitch; 77 + const u32 expected[TEST_BUF_SIZE]; 78 + }; 79 + 79 80 struct convert_xrgb8888_case { 80 81 const char *name; 81 82 unsigned int pitch; ··· 97 88 struct convert_to_xrgb2101010_result xrgb2101010_result; 98 89 struct convert_to_argb2101010_result argb2101010_result; 99 90 struct convert_to_mono_result mono_result; 91 + struct fb_swab_result swab_result; 100 92 }; 101 93 102 94 static struct convert_xrgb8888_case convert_xrgb8888_cases[] = { ··· 107 97 .clip = DRM_RECT_INIT(0, 0, 1, 1), 108 98 .xrgb8888 = { 0x01FF0000 }, 109 99 .gray8_result = { 110 - .dst_pitch = 0, 100 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 111 101 .expected = { 0x4C }, 112 102 }, 113 103 .rgb332_result = { 114 - .dst_pitch = 0, 104 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 115 105 .expected = { 0xE0 }, 116 106 }, 117 107 .rgb565_result = { 118 - .dst_pitch = 0, 108 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 119 109 .expected = { 0xF800 }, 120 110 .expected_swab = { 0x00F8 }, 121 111 }, 122 112 .xrgb1555_result = { 123 - .dst_pitch = 0, 113 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 124 114 .expected = { 0x7C00 }, 125 115 }, 126 116 .argb1555_result = { 127 - .dst_pitch = 0, 117 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 128 118 .expected = { 0xFC00 }, 129 119 }, 130 120 
.rgba5551_result = { 131 - .dst_pitch = 0, 121 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 132 122 .expected = { 0xF801 }, 133 123 }, 134 124 .rgb888_result = { 135 - .dst_pitch = 0, 125 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 136 126 .expected = { 0x00, 0x00, 0xFF }, 137 127 }, 138 128 .argb8888_result = { 139 - .dst_pitch = 0, 129 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 140 130 .expected = { 0xFFFF0000 }, 141 131 }, 142 132 .xrgb2101010_result = { 143 - .dst_pitch = 0, 133 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 144 134 .expected = { 0x3FF00000 }, 145 135 }, 146 136 .argb2101010_result = { 147 - .dst_pitch = 0, 137 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 148 138 .expected = { 0xFFF00000 }, 149 139 }, 150 140 .mono_result = { 151 - .dst_pitch = 0, 141 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 152 142 .expected = { 0b0 }, 143 + }, 144 + .swab_result = { 145 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 146 + .expected = { 0x0000FF01 }, 153 147 }, 154 148 }, 155 149 { ··· 165 151 0x00000000, 0x10FF0000, 166 152 }, 167 153 .gray8_result = { 168 - .dst_pitch = 0, 154 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 169 155 .expected = { 0x4C }, 170 156 }, 171 157 .rgb332_result = { 172 - .dst_pitch = 0, 158 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 173 159 .expected = { 0xE0 }, 174 160 }, 175 161 .rgb565_result = { 176 - .dst_pitch = 0, 162 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 177 163 .expected = { 0xF800 }, 178 164 .expected_swab = { 0x00F8 }, 179 165 }, 180 166 .xrgb1555_result = { 181 - .dst_pitch = 0, 167 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 182 168 .expected = { 0x7C00 }, 183 169 }, 184 170 .argb1555_result = { 185 - .dst_pitch = 0, 171 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 186 172 .expected = { 0xFC00 }, 187 173 }, 188 174 .rgba5551_result = { 189 - .dst_pitch = 0, 175 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 190 176 .expected = { 0xF801 }, 191 177 }, 192 178 .rgb888_result = { 193 - .dst_pitch = 0, 179 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 194 180 .expected = { 0x00, 0x00, 0xFF }, 195 181 
}, 196 182 .argb8888_result = { 197 - .dst_pitch = 0, 183 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 198 184 .expected = { 0xFFFF0000 }, 199 185 }, 200 186 .xrgb2101010_result = { 201 - .dst_pitch = 0, 187 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 202 188 .expected = { 0x3FF00000 }, 203 189 }, 204 190 .argb2101010_result = { 205 - .dst_pitch = 0, 191 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 206 192 .expected = { 0xFFF00000 }, 207 193 }, 208 194 .mono_result = { 209 - .dst_pitch = 0, 195 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 210 196 .expected = { 0b0 }, 197 + }, 198 + .swab_result = { 199 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 200 + .expected = { 0x0000FF10 }, 211 201 }, 212 202 }, 213 203 { ··· 230 212 0x00000000, 0x77FFFF00, 0x8800FFFF, 0x00000000, 231 213 }, 232 214 .gray8_result = { 233 - .dst_pitch = 0, 215 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 234 216 .expected = { 235 217 0xFF, 0x00, 236 218 0x4C, 0x99, ··· 239 221 }, 240 222 }, 241 223 .rgb332_result = { 242 - .dst_pitch = 0, 224 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 243 225 .expected = { 244 226 0xFF, 0x00, 245 227 0xE0, 0x1C, ··· 248 230 }, 249 231 }, 250 232 .rgb565_result = { 251 - .dst_pitch = 0, 233 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 252 234 .expected = { 253 235 0xFFFF, 0x0000, 254 236 0xF800, 0x07E0, ··· 263 245 }, 264 246 }, 265 247 .xrgb1555_result = { 266 - .dst_pitch = 0, 248 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 267 249 .expected = { 268 250 0x7FFF, 0x0000, 269 251 0x7C00, 0x03E0, ··· 272 254 }, 273 255 }, 274 256 .argb1555_result = { 275 - .dst_pitch = 0, 257 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 276 258 .expected = { 277 259 0xFFFF, 0x8000, 278 260 0xFC00, 0x83E0, ··· 281 263 }, 282 264 }, 283 265 .rgba5551_result = { 284 - .dst_pitch = 0, 266 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 285 267 .expected = { 286 268 0xFFFF, 0x0001, 287 269 0xF801, 0x07C1, ··· 290 272 }, 291 273 }, 292 274 .rgb888_result = { 293 - .dst_pitch = 0, 275 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 294 276 .expected = { 295 
277 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 296 278 0x00, 0x00, 0xFF, 0x00, 0xFF, 0x00, ··· 299 281 }, 300 282 }, 301 283 .argb8888_result = { 302 - .dst_pitch = 0, 284 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 303 285 .expected = { 304 286 0xFFFFFFFF, 0xFF000000, 305 287 0xFFFF0000, 0xFF00FF00, ··· 308 290 }, 309 291 }, 310 292 .xrgb2101010_result = { 311 - .dst_pitch = 0, 293 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 312 294 .expected = { 313 295 0x3FFFFFFF, 0x00000000, 314 296 0x3FF00000, 0x000FFC00, ··· 317 299 }, 318 300 }, 319 301 .argb2101010_result = { 320 - .dst_pitch = 0, 302 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 321 303 .expected = { 322 304 0xFFFFFFFF, 0xC0000000, 323 305 0xFFF00000, 0xC00FFC00, ··· 326 308 }, 327 309 }, 328 310 .mono_result = { 329 - .dst_pitch = 0, 311 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 330 312 .expected = { 331 313 0b01, 332 314 0b10, 333 315 0b00, 334 316 0b11, 317 + }, 318 + }, 319 + .swab_result = { 320 + .dst_pitch = TEST_USE_DEFAULT_PITCH, 321 + .expected = { 322 + 0xFFFFFF11, 0x00000022, 323 + 0x0000FF33, 0x00FF0044, 324 + 0xFF000055, 0xFF00FF66, 325 + 0x00FFFF77, 0xFFFF0088, 335 326 }, 336 327 }, 337 328 }, ··· 450 423 0b010, 0b000, 451 424 }, 452 425 }, 426 + .swab_result = { 427 + .dst_pitch = 20, 428 + .expected = { 429 + 0x9C440EA1, 0x054D11B1, 0x03F3A8C1, 0x00000000, 0x00000000, 430 + 0x73F06CD1, 0x9C440EA2, 0x054D11B2, 0x00000000, 0x00000000, 431 + 0x0303A8C2, 0x73F06CD2, 0x9C440EA3, 0x00000000, 0x00000000, 432 + }, 433 + }, 453 434 }, 454 435 }; 455 436 ··· 472 437 * The size of the destination buffer or negative value on error. 
473 438 */ 474 439 static size_t conversion_buf_size(u32 dst_format, unsigned int dst_pitch, 475 - const struct drm_rect *clip) 440 + const struct drm_rect *clip, int plane) 476 441 { 477 442 const struct drm_format_info *dst_fi = drm_format_info(dst_format); 478 443 ··· 480 445 return -EINVAL; 481 446 482 447 if (!dst_pitch) 483 - dst_pitch = drm_format_info_min_pitch(dst_fi, 0, drm_rect_width(clip)); 448 + dst_pitch = drm_format_info_min_pitch(dst_fi, plane, drm_rect_width(clip)); 484 449 485 450 return dst_pitch * drm_rect_height(clip); 486 451 } ··· 554 519 }; 555 520 556 521 dst_size = conversion_buf_size(DRM_FORMAT_R8, result->dst_pitch, 557 - &params->clip); 522 + &params->clip, 0); 558 523 KUNIT_ASSERT_GT(test, dst_size, 0); 559 524 560 525 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 565 530 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 566 531 iosys_map_set_vaddr(&src, xrgb8888); 567 532 568 - drm_fb_xrgb8888_to_gray8(&dst, &result->dst_pitch, &src, &fb, &params->clip); 533 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 534 + NULL : &result->dst_pitch; 535 + 536 + drm_fb_xrgb8888_to_gray8(&dst, dst_pitch, &src, &fb, &params->clip); 537 + 569 538 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 570 539 } 571 540 ··· 588 549 }; 589 550 590 551 dst_size = conversion_buf_size(DRM_FORMAT_RGB332, result->dst_pitch, 591 - &params->clip); 552 + &params->clip, 0); 592 553 KUNIT_ASSERT_GT(test, dst_size, 0); 593 554 594 555 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 599 560 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 600 561 iosys_map_set_vaddr(&src, xrgb8888); 601 562 602 - drm_fb_xrgb8888_to_rgb332(&dst, &result->dst_pitch, &src, &fb, &params->clip); 563 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 
564 + NULL : &result->dst_pitch; 565 + 566 + drm_fb_xrgb8888_to_rgb332(&dst, dst_pitch, &src, &fb, &params->clip); 603 567 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 604 568 } 605 569 ··· 621 579 }; 622 580 623 581 dst_size = conversion_buf_size(DRM_FORMAT_RGB565, result->dst_pitch, 624 - &params->clip); 582 + &params->clip, 0); 625 583 KUNIT_ASSERT_GT(test, dst_size, 0); 626 584 627 585 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 632 590 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 633 591 iosys_map_set_vaddr(&src, xrgb8888); 634 592 635 - drm_fb_xrgb8888_to_rgb565(&dst, &result->dst_pitch, &src, &fb, &params->clip, false); 593 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 594 + NULL : &result->dst_pitch; 595 + 596 + drm_fb_xrgb8888_to_rgb565(&dst, dst_pitch, &src, &fb, &params->clip, false); 636 597 buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); 637 598 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 638 599 ··· 660 615 }; 661 616 662 617 dst_size = conversion_buf_size(DRM_FORMAT_XRGB1555, result->dst_pitch, 663 - &params->clip); 618 + &params->clip, 0); 664 619 KUNIT_ASSERT_GT(test, dst_size, 0); 665 620 666 621 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 671 626 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 672 627 iosys_map_set_vaddr(&src, xrgb8888); 673 628 674 - drm_fb_xrgb8888_to_xrgb1555(&dst, &result->dst_pitch, &src, &fb, &params->clip); 629 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 
630 + NULL : &result->dst_pitch; 631 + 632 + drm_fb_xrgb8888_to_xrgb1555(&dst, dst_pitch, &src, &fb, &params->clip); 675 633 buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); 676 634 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 677 635 } ··· 694 646 }; 695 647 696 648 dst_size = conversion_buf_size(DRM_FORMAT_ARGB1555, result->dst_pitch, 697 - &params->clip); 649 + &params->clip, 0); 698 650 KUNIT_ASSERT_GT(test, dst_size, 0); 699 651 700 652 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 705 657 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 706 658 iosys_map_set_vaddr(&src, xrgb8888); 707 659 708 - drm_fb_xrgb8888_to_argb1555(&dst, &result->dst_pitch, &src, &fb, &params->clip); 660 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 661 + NULL : &result->dst_pitch; 662 + 663 + drm_fb_xrgb8888_to_argb1555(&dst, dst_pitch, &src, &fb, &params->clip); 709 664 buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); 710 665 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 711 666 } ··· 728 677 }; 729 678 730 679 dst_size = conversion_buf_size(DRM_FORMAT_RGBA5551, result->dst_pitch, 731 - &params->clip); 680 + &params->clip, 0); 732 681 KUNIT_ASSERT_GT(test, dst_size, 0); 733 682 734 683 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 739 688 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 740 689 iosys_map_set_vaddr(&src, xrgb8888); 741 690 742 - drm_fb_xrgb8888_to_rgba5551(&dst, &result->dst_pitch, &src, &fb, &params->clip); 691 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 
692 + NULL : &result->dst_pitch; 693 + 694 + drm_fb_xrgb8888_to_rgba5551(&dst, dst_pitch, &src, &fb, &params->clip); 743 695 buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); 744 696 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 745 697 } ··· 762 708 }; 763 709 764 710 dst_size = conversion_buf_size(DRM_FORMAT_RGB888, result->dst_pitch, 765 - &params->clip); 711 + &params->clip, 0); 766 712 KUNIT_ASSERT_GT(test, dst_size, 0); 767 713 768 714 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 777 723 * RGB888 expected results are already in little-endian 778 724 * order, so there's no need to convert the test output. 779 725 */ 780 - drm_fb_xrgb8888_to_rgb888(&dst, &result->dst_pitch, &src, &fb, &params->clip); 726 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 727 + NULL : &result->dst_pitch; 728 + 729 + drm_fb_xrgb8888_to_rgb888(&dst, dst_pitch, &src, &fb, &params->clip); 781 730 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 782 731 } 783 732 ··· 799 742 }; 800 743 801 744 dst_size = conversion_buf_size(DRM_FORMAT_ARGB8888, 802 - result->dst_pitch, &params->clip); 745 + result->dst_pitch, &params->clip, 0); 803 746 KUNIT_ASSERT_GT(test, dst_size, 0); 804 747 805 748 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 810 753 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 811 754 iosys_map_set_vaddr(&src, xrgb8888); 812 755 813 - drm_fb_xrgb8888_to_argb8888(&dst, &result->dst_pitch, &src, &fb, &params->clip); 756 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 
757 + NULL : &result->dst_pitch; 758 + 759 + drm_fb_xrgb8888_to_argb8888(&dst, dst_pitch, &src, &fb, &params->clip); 814 760 buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 815 761 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 816 762 } ··· 833 773 }; 834 774 835 775 dst_size = conversion_buf_size(DRM_FORMAT_XRGB2101010, 836 - result->dst_pitch, &params->clip); 776 + result->dst_pitch, &params->clip, 0); 837 777 KUNIT_ASSERT_GT(test, dst_size, 0); 838 778 839 779 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 844 784 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 845 785 iosys_map_set_vaddr(&src, xrgb8888); 846 786 847 - drm_fb_xrgb8888_to_xrgb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip); 787 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 788 + NULL : &result->dst_pitch; 789 + 790 + drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip); 848 791 buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32)); 849 792 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 850 793 } ··· 867 804 }; 868 805 869 806 dst_size = conversion_buf_size(DRM_FORMAT_ARGB2101010, 870 - result->dst_pitch, &params->clip); 807 + result->dst_pitch, &params->clip, 0); 871 808 KUNIT_ASSERT_GT(test, dst_size, 0); 872 809 873 810 buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); ··· 878 815 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 879 816 iosys_map_set_vaddr(&src, xrgb8888); 880 817 881 - drm_fb_xrgb8888_to_argb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip); 818 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 
819 + NULL : &result->dst_pitch; 820 + 821 + drm_fb_xrgb8888_to_argb2101010(&dst, dst_pitch, &src, &fb, &params->clip); 882 822 buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 883 823 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 884 824 } ··· 900 834 .pitches = { params->pitch, 0, 0 }, 901 835 }; 902 836 903 - dst_size = conversion_buf_size(DRM_FORMAT_C1, result->dst_pitch, &params->clip); 837 + dst_size = conversion_buf_size(DRM_FORMAT_C1, result->dst_pitch, &params->clip, 0); 904 838 905 839 KUNIT_ASSERT_GT(test, dst_size, 0); 906 840 ··· 912 846 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 913 847 iosys_map_set_vaddr(&src, xrgb8888); 914 848 915 - drm_fb_xrgb8888_to_mono(&dst, &result->dst_pitch, &src, &fb, &params->clip); 849 + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 850 + NULL : &result->dst_pitch; 851 + 852 + drm_fb_xrgb8888_to_mono(&dst, dst_pitch, &src, &fb, &params->clip); 916 853 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 854 + } 855 + 856 + static void drm_test_fb_swab(struct kunit *test) 857 + { 858 + const struct convert_xrgb8888_case *params = test->param_value; 859 + const struct fb_swab_result *result = &params->swab_result; 860 + size_t dst_size; 861 + u32 *buf = NULL; 862 + __le32 *xrgb8888 = NULL; 863 + struct iosys_map dst, src; 864 + 865 + struct drm_framebuffer fb = { 866 + .format = drm_format_info(DRM_FORMAT_XRGB8888), 867 + .pitches = { params->pitch, 0, 0 }, 868 + }; 869 + 870 + dst_size = conversion_buf_size(DRM_FORMAT_XRGB8888, result->dst_pitch, &params->clip, 0); 871 + 872 + KUNIT_ASSERT_GT(test, dst_size, 0); 873 + 874 + buf = kunit_kzalloc(test, dst_size, GFP_KERNEL); 875 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf); 876 + iosys_map_set_vaddr(&dst, buf); 877 + 878 + xrgb8888 = cpubuf_to_le32(test, params->xrgb8888, TEST_BUF_SIZE); 879 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888); 880 + iosys_map_set_vaddr(&src, xrgb8888); 881 + 882 
+ const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? 883 + NULL : &result->dst_pitch; 884 + 885 + drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false); 886 + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 887 + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 888 + } 889 + 890 + struct clip_offset_case { 891 + const char *name; 892 + unsigned int pitch; 893 + u32 format; 894 + struct drm_rect clip; 895 + unsigned int expected_offset; 896 + }; 897 + 898 + static struct clip_offset_case clip_offset_cases[] = { 899 + { 900 + .name = "pass through", 901 + .pitch = TEST_USE_DEFAULT_PITCH, 902 + .format = DRM_FORMAT_XRGB8888, 903 + .clip = DRM_RECT_INIT(0, 0, 3, 3), 904 + .expected_offset = 0 905 + }, 906 + { 907 + .name = "horizontal offset", 908 + .pitch = TEST_USE_DEFAULT_PITCH, 909 + .format = DRM_FORMAT_XRGB8888, 910 + .clip = DRM_RECT_INIT(1, 0, 3, 3), 911 + .expected_offset = 4, 912 + }, 913 + { 914 + .name = "vertical offset", 915 + .pitch = TEST_USE_DEFAULT_PITCH, 916 + .format = DRM_FORMAT_XRGB8888, 917 + .clip = DRM_RECT_INIT(0, 1, 3, 3), 918 + .expected_offset = 12, 919 + }, 920 + { 921 + .name = "horizontal and vertical offset", 922 + .pitch = TEST_USE_DEFAULT_PITCH, 923 + .format = DRM_FORMAT_XRGB8888, 924 + .clip = DRM_RECT_INIT(1, 1, 3, 3), 925 + .expected_offset = 16, 926 + }, 927 + { 928 + .name = "horizontal offset (custom pitch)", 929 + .pitch = 20, 930 + .format = DRM_FORMAT_XRGB8888, 931 + .clip = DRM_RECT_INIT(1, 0, 3, 3), 932 + .expected_offset = 4, 933 + }, 934 + { 935 + .name = "vertical offset (custom pitch)", 936 + .pitch = 20, 937 + .format = DRM_FORMAT_XRGB8888, 938 + .clip = DRM_RECT_INIT(0, 1, 3, 3), 939 + .expected_offset = 20, 940 + }, 941 + { 942 + .name = "horizontal and vertical offset (custom pitch)", 943 + .pitch = 20, 944 + .format = DRM_FORMAT_XRGB8888, 945 + .clip = DRM_RECT_INIT(1, 1, 3, 3), 946 + .expected_offset = 24, 947 + }, 948 + }; 949 + 950 
+ static void clip_offset_case_desc(struct clip_offset_case *t, char *desc) 951 + { 952 + strscpy(desc, t->name, KUNIT_PARAM_DESC_SIZE); 953 + } 954 + 955 + KUNIT_ARRAY_PARAM(clip_offset, clip_offset_cases, clip_offset_case_desc); 956 + 957 + static void drm_test_fb_clip_offset(struct kunit *test) 958 + { 959 + const struct clip_offset_case *params = test->param_value; 960 + const struct drm_format_info *format_info = drm_format_info(params->format); 961 + 962 + unsigned int offset; 963 + unsigned int pitch = params->pitch; 964 + 965 + if (pitch == TEST_USE_DEFAULT_PITCH) 966 + pitch = drm_format_info_min_pitch(format_info, 0, 967 + drm_rect_width(&params->clip)); 968 + 969 + /* 970 + * Ensure that the pitch is not zero, because this would inevitably cause 971 + * a wrong expected result. 972 + */ 973 + KUNIT_ASSERT_NE(test, pitch, 0); 974 + 975 + offset = drm_fb_clip_offset(pitch, format_info, &params->clip); 976 + 977 + KUNIT_EXPECT_EQ(test, offset, params->expected_offset); 978 + } 979 + 980 + struct fb_build_fourcc_list_case { 981 + const char *name; 982 + u32 native_fourccs[TEST_BUF_SIZE]; 983 + size_t native_fourccs_size; 984 + u32 expected[TEST_BUF_SIZE]; 985 + size_t expected_fourccs_size; 986 + }; 987 + 988 + static struct fb_build_fourcc_list_case fb_build_fourcc_list_cases[] = { 989 + { 990 + .name = "no native formats", 991 + .native_fourccs = { }, 992 + .native_fourccs_size = 0, 993 + .expected = { DRM_FORMAT_XRGB8888 }, 994 + .expected_fourccs_size = 1, 995 + }, 996 + { 997 + .name = "XRGB8888 as native format", 998 + .native_fourccs = { DRM_FORMAT_XRGB8888 }, 999 + .native_fourccs_size = 1, 1000 + .expected = { DRM_FORMAT_XRGB8888 }, 1001 + .expected_fourccs_size = 1, 1002 + }, 1003 + { 1004 + .name = "remove duplicates", 1005 + .native_fourccs = { 1006 + DRM_FORMAT_XRGB8888, 1007 + DRM_FORMAT_XRGB8888, 1008 + DRM_FORMAT_RGB888, 1009 + DRM_FORMAT_RGB888, 1010 + DRM_FORMAT_RGB888, 1011 + DRM_FORMAT_XRGB8888, 1012 + DRM_FORMAT_RGB888, 1013 +
DRM_FORMAT_RGB565, 1014 + DRM_FORMAT_RGB888, 1015 + DRM_FORMAT_XRGB8888, 1016 + DRM_FORMAT_RGB565, 1017 + DRM_FORMAT_RGB565, 1018 + DRM_FORMAT_XRGB8888, 1019 + }, 1020 + .native_fourccs_size = 11, 1021 + .expected = { 1022 + DRM_FORMAT_XRGB8888, 1023 + DRM_FORMAT_RGB888, 1024 + DRM_FORMAT_RGB565, 1025 + }, 1026 + .expected_fourccs_size = 3, 1027 + }, 1028 + { 1029 + .name = "convert alpha formats", 1030 + .native_fourccs = { 1031 + DRM_FORMAT_ARGB1555, 1032 + DRM_FORMAT_ABGR1555, 1033 + DRM_FORMAT_RGBA5551, 1034 + DRM_FORMAT_BGRA5551, 1035 + DRM_FORMAT_ARGB8888, 1036 + DRM_FORMAT_ABGR8888, 1037 + DRM_FORMAT_RGBA8888, 1038 + DRM_FORMAT_BGRA8888, 1039 + DRM_FORMAT_ARGB2101010, 1040 + DRM_FORMAT_ABGR2101010, 1041 + DRM_FORMAT_RGBA1010102, 1042 + DRM_FORMAT_BGRA1010102, 1043 + }, 1044 + .native_fourccs_size = 12, 1045 + .expected = { 1046 + DRM_FORMAT_XRGB1555, 1047 + DRM_FORMAT_XBGR1555, 1048 + DRM_FORMAT_RGBX5551, 1049 + DRM_FORMAT_BGRX5551, 1050 + DRM_FORMAT_XRGB8888, 1051 + DRM_FORMAT_XBGR8888, 1052 + DRM_FORMAT_RGBX8888, 1053 + DRM_FORMAT_BGRX8888, 1054 + DRM_FORMAT_XRGB2101010, 1055 + DRM_FORMAT_XBGR2101010, 1056 + DRM_FORMAT_RGBX1010102, 1057 + DRM_FORMAT_BGRX1010102, 1058 + }, 1059 + .expected_fourccs_size = 12, 1060 + }, 1061 + { 1062 + .name = "random formats", 1063 + .native_fourccs = { 1064 + DRM_FORMAT_Y212, 1065 + DRM_FORMAT_ARGB1555, 1066 + DRM_FORMAT_ABGR16161616F, 1067 + DRM_FORMAT_C8, 1068 + DRM_FORMAT_BGR888, 1069 + DRM_FORMAT_XRGB1555, 1070 + DRM_FORMAT_RGBA5551, 1071 + DRM_FORMAT_BGR565_A8, 1072 + DRM_FORMAT_R10, 1073 + DRM_FORMAT_XYUV8888, 1074 + }, 1075 + .native_fourccs_size = 10, 1076 + .expected = { 1077 + DRM_FORMAT_Y212, 1078 + DRM_FORMAT_XRGB1555, 1079 + DRM_FORMAT_ABGR16161616F, 1080 + DRM_FORMAT_C8, 1081 + DRM_FORMAT_BGR888, 1082 + DRM_FORMAT_RGBX5551, 1083 + DRM_FORMAT_BGR565_A8, 1084 + DRM_FORMAT_R10, 1085 + DRM_FORMAT_XYUV8888, 1086 + DRM_FORMAT_XRGB8888, 1087 + }, 1088 + .expected_fourccs_size = 10, 1089 + }, 1090 + }; 1091 + 1092 + 
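One property exercised above is duplicate removal while preserving first-seen order: in the "remove duplicates" case, eleven native entries collapse to three. A hedged userspace sketch of just that step follows; drm_fb_build_fourcc_list() additionally converts alpha formats to their opaque siblings and appends the XRGB8888 emulation fallback, and the enum stand-ins below are illustrative labels, not real fourcc values:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins; the real code uses DRM_FORMAT_* fourcc values. */
enum { FAKE_XRGB8888 = 1, FAKE_RGB888 = 2, FAKE_RGB565 = 3 };

/* Copy each native format once, preserving first-seen order, as the
 * "remove duplicates" test case expects. */
static size_t dedup_fourccs(const uint32_t *in, size_t n, uint32_t *out)
{
	size_t nout = 0;

	for (size_t i = 0; i < n; i++) {
		int seen = 0;

		for (size_t j = 0; j < nout; j++)
			if (out[j] == in[i])
				seen = 1;
		if (!seen)
			out[nout++] = in[i];
	}
	return nout;
}

/* Runs dedup on the first native_fourccs_size = 11 entries of the
 * "remove duplicates" sample and returns how many formats survive. */
static size_t demo_remove_duplicates(void)
{
	const uint32_t in[] = {
		FAKE_XRGB8888, FAKE_XRGB8888, FAKE_RGB888, FAKE_RGB888,
		FAKE_RGB888, FAKE_XRGB8888, FAKE_RGB888, FAKE_RGB565,
		FAKE_RGB888, FAKE_XRGB8888, FAKE_RGB565,
	};
	uint32_t out[11];

	return dedup_fourccs(in, sizeof(in) / sizeof(in[0]), out);
}
```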
static void fb_build_fourcc_list_case_desc(struct fb_build_fourcc_list_case *t, char *desc) 1093 + { 1094 + strscpy(desc, t->name, KUNIT_PARAM_DESC_SIZE); 1095 + } 1096 + 1097 + KUNIT_ARRAY_PARAM(fb_build_fourcc_list, fb_build_fourcc_list_cases, fb_build_fourcc_list_case_desc); 1098 + 1099 + static void drm_test_fb_build_fourcc_list(struct kunit *test) 1100 + { 1101 + const struct fb_build_fourcc_list_case *params = test->param_value; 1102 + u32 fourccs_out[TEST_BUF_SIZE] = {0}; 1103 + size_t nfourccs_out; 1104 + struct drm_device *drm; 1105 + struct device *dev; 1106 + 1107 + dev = drm_kunit_helper_alloc_device(test); 1108 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev); 1109 + 1110 + drm = __drm_kunit_helper_alloc_drm_device(test, dev, sizeof(*drm), 0, DRIVER_MODESET); 1111 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, drm); 1112 + 1113 + nfourccs_out = drm_fb_build_fourcc_list(drm, params->native_fourccs, 1114 + params->native_fourccs_size, 1115 + fourccs_out, TEST_BUF_SIZE); 1116 + 1117 + KUNIT_EXPECT_EQ(test, nfourccs_out, params->expected_fourccs_size); 1118 + KUNIT_EXPECT_MEMEQ(test, fourccs_out, params->expected, TEST_BUF_SIZE); 1119 + } 1120 + 1121 + struct fb_memcpy_case { 1122 + const char *name; 1123 + u32 format; 1124 + struct drm_rect clip; 1125 + unsigned int src_pitches[DRM_FORMAT_MAX_PLANES]; 1126 + const u32 src[DRM_FORMAT_MAX_PLANES][TEST_BUF_SIZE]; 1127 + unsigned int dst_pitches[DRM_FORMAT_MAX_PLANES]; 1128 + const u32 expected[DRM_FORMAT_MAX_PLANES][TEST_BUF_SIZE]; 1129 + }; 1130 + 1131 + /* The `src` and `expected` buffers are u32 arrays. To deal with planes that 1132 + * have a cpp != 4, the values are packed into the same u32 so that the byte 1133 + * order in memory is correct on a little-endian machine. 1134 + * 1135 + * Because of that, parts of a u32 are sometimes not part of the test; to 1136 + * make this explicit, the 0xFF byte is used in those parts.
1137 + */ 1138 + 1139 + static struct fb_memcpy_case fb_memcpy_cases[] = { 1140 + { 1141 + .name = "single_pixel_source_buffer", 1142 + .format = DRM_FORMAT_XRGB8888, 1143 + .clip = DRM_RECT_INIT(0, 0, 1, 1), 1144 + .src_pitches = { 1 * 4 }, 1145 + .src = {{ 0x01020304 }}, 1146 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1147 + .expected = {{ 0x01020304 }}, 1148 + }, 1149 + { 1150 + .name = "single_pixel_source_buffer", 1151 + .format = DRM_FORMAT_XRGB8888_A8, 1152 + .clip = DRM_RECT_INIT(0, 0, 1, 1), 1153 + .src_pitches = { 1 * 4, 1 }, 1154 + .src = { 1155 + { 0x01020304 }, 1156 + { 0xFFFFFF01 }, 1157 + }, 1158 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1159 + .expected = { 1160 + { 0x01020304 }, 1161 + { 0x00000001 }, 1162 + }, 1163 + }, 1164 + { 1165 + .name = "single_pixel_source_buffer", 1166 + .format = DRM_FORMAT_YUV444, 1167 + .clip = DRM_RECT_INIT(0, 0, 1, 1), 1168 + .src_pitches = { 1, 1, 1 }, 1169 + .src = { 1170 + { 0xFFFFFF01 }, 1171 + { 0xFFFFFF01 }, 1172 + { 0xFFFFFF01 }, 1173 + }, 1174 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1175 + .expected = { 1176 + { 0x00000001 }, 1177 + { 0x00000001 }, 1178 + { 0x00000001 }, 1179 + }, 1180 + }, 1181 + { 1182 + .name = "single_pixel_clip_rectangle", 1183 + .format = DRM_FORMAT_XBGR8888, 1184 + .clip = DRM_RECT_INIT(1, 1, 1, 1), 1185 + .src_pitches = { 2 * 4 }, 1186 + .src = { 1187 + { 1188 + 0x00000000, 0x00000000, 1189 + 0x00000000, 0x01020304, 1190 + }, 1191 + }, 1192 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1193 + .expected = { 1194 + { 0x01020304 }, 1195 + }, 1196 + }, 1197 + { 1198 + .name = "single_pixel_clip_rectangle", 1199 + .format = DRM_FORMAT_XRGB8888_A8, 1200 + .clip = DRM_RECT_INIT(1, 1, 1, 1), 1201 + .src_pitches = { 2 * 4, 2 * 1 }, 1202 + .src = { 1203 + { 1204 + 0x00000000, 0x00000000, 1205 + 0x00000000, 0x01020304, 1206 + }, 1207 + { 0x01000000 }, 1208 + }, 1209 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1210 + .expected = { 1211 + { 0x01020304 }, 1212 + { 0x00000001 }, 1213 + }, 
1214 + }, 1215 + { 1216 + .name = "single_pixel_clip_rectangle", 1217 + .format = DRM_FORMAT_YUV444, 1218 + .clip = DRM_RECT_INIT(1, 1, 1, 1), 1219 + .src_pitches = { 2 * 1, 2 * 1, 2 * 1 }, 1220 + .src = { 1221 + { 0x01000000 }, 1222 + { 0x01000000 }, 1223 + { 0x01000000 }, 1224 + }, 1225 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1226 + .expected = { 1227 + { 0x00000001 }, 1228 + { 0x00000001 }, 1229 + { 0x00000001 }, 1230 + }, 1231 + }, 1232 + { 1233 + .name = "well_known_colors", 1234 + .format = DRM_FORMAT_XBGR8888, 1235 + .clip = DRM_RECT_INIT(1, 1, 2, 4), 1236 + .src_pitches = { 4 * 4 }, 1237 + .src = { 1238 + { 1239 + 0x00000000, 0x00000000, 0x00000000, 0x00000000, 1240 + 0x00000000, 0x11FFFFFF, 0x22000000, 0x00000000, 1241 + 0x00000000, 0x33FF0000, 0x4400FF00, 0x00000000, 1242 + 0x00000000, 0x550000FF, 0x66FF00FF, 0x00000000, 1243 + 0x00000000, 0x77FFFF00, 0x8800FFFF, 0x00000000, 1244 + }, 1245 + }, 1246 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1247 + .expected = { 1248 + { 1249 + 0x11FFFFFF, 0x22000000, 1250 + 0x33FF0000, 0x4400FF00, 1251 + 0x550000FF, 0x66FF00FF, 1252 + 0x77FFFF00, 0x8800FFFF, 1253 + }, 1254 + }, 1255 + }, 1256 + { 1257 + .name = "well_known_colors", 1258 + .format = DRM_FORMAT_XRGB8888_A8, 1259 + .clip = DRM_RECT_INIT(1, 1, 2, 4), 1260 + .src_pitches = { 4 * 4, 4 * 1 }, 1261 + .src = { 1262 + { 1263 + 0x00000000, 0x00000000, 0x00000000, 0x00000000, 1264 + 0x00000000, 0xFFFFFFFF, 0xFF000000, 0x00000000, 1265 + 0x00000000, 0xFFFF0000, 0xFF00FF00, 0x00000000, 1266 + 0x00000000, 0xFF0000FF, 0xFFFF00FF, 0x00000000, 1267 + 0x00000000, 0xFFFFFF00, 0xFF00FFFF, 0x00000000, 1268 + }, 1269 + { 1270 + 0x00000000, 1271 + 0x00221100, 1272 + 0x00443300, 1273 + 0x00665500, 1274 + 0x00887700, 1275 + }, 1276 + }, 1277 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1278 + .expected = { 1279 + { 1280 + 0xFFFFFFFF, 0xFF000000, 1281 + 0xFFFF0000, 0xFF00FF00, 1282 + 0xFF0000FF, 0xFFFF00FF, 1283 + 0xFFFFFF00, 0xFF00FFFF, 1284 + }, 1285 + { 1286 + 
0x44332211, 1287 + 0x88776655, 1288 + }, 1289 + }, 1290 + }, 1291 + { 1292 + .name = "well_known_colors", 1293 + .format = DRM_FORMAT_YUV444, 1294 + .clip = DRM_RECT_INIT(1, 1, 2, 4), 1295 + .src_pitches = { 4 * 1, 4 * 1, 4 * 1 }, 1296 + .src = { 1297 + { 1298 + 0x00000000, 1299 + 0x0000FF00, 1300 + 0x00954C00, 1301 + 0x00691D00, 1302 + 0x00B2E100, 1303 + }, 1304 + { 1305 + 0x00000000, 1306 + 0x00000000, 1307 + 0x00BEDE00, 1308 + 0x00436500, 1309 + 0x00229B00, 1310 + }, 1311 + { 1312 + 0x00000000, 1313 + 0x00000000, 1314 + 0x007E9C00, 1315 + 0x0083E700, 1316 + 0x00641A00, 1317 + }, 1318 + }, 1319 + .dst_pitches = { TEST_USE_DEFAULT_PITCH }, 1320 + .expected = { 1321 + { 1322 + 0x954C00FF, 1323 + 0xB2E1691D, 1324 + }, 1325 + { 1326 + 0xBEDE0000, 1327 + 0x229B4365, 1328 + }, 1329 + { 1330 + 0x7E9C0000, 1331 + 0x641A83E7, 1332 + }, 1333 + }, 1334 + }, 1335 + { 1336 + .name = "destination_pitch", 1337 + .format = DRM_FORMAT_XBGR8888, 1338 + .clip = DRM_RECT_INIT(0, 0, 3, 3), 1339 + .src_pitches = { 3 * 4 }, 1340 + .src = { 1341 + { 1342 + 0xA10E449C, 0xB1114D05, 0xC1A8F303, 1343 + 0xD16CF073, 0xA20E449C, 0xB2114D05, 1344 + 0xC2A80303, 0xD26CF073, 0xA30E449C, 1345 + }, 1346 + }, 1347 + .dst_pitches = { 5 * 4 }, 1348 + .expected = { 1349 + { 1350 + 0xA10E449C, 0xB1114D05, 0xC1A8F303, 0x00000000, 0x00000000, 1351 + 0xD16CF073, 0xA20E449C, 0xB2114D05, 0x00000000, 0x00000000, 1352 + 0xC2A80303, 0xD26CF073, 0xA30E449C, 0x00000000, 0x00000000, 1353 + }, 1354 + }, 1355 + }, 1356 + { 1357 + .name = "destination_pitch", 1358 + .format = DRM_FORMAT_XRGB8888_A8, 1359 + .clip = DRM_RECT_INIT(0, 0, 3, 3), 1360 + .src_pitches = { 3 * 4, 3 * 1 }, 1361 + .src = { 1362 + { 1363 + 0xFF0E449C, 0xFF114D05, 0xFFA8F303, 1364 + 0xFF6CF073, 0xFF0E449C, 0xFF114D05, 1365 + 0xFFA80303, 0xFF6CF073, 0xFF0E449C, 1366 + }, 1367 + { 1368 + 0xB2C1B1A1, 1369 + 0xD2A3D1A2, 1370 + 0xFFFFFFC2, 1371 + }, 1372 + }, 1373 + .dst_pitches = { 5 * 4, 5 * 1 }, 1374 + .expected = { 1375 + { 1376 + 0xFF0E449C, 
0xFF114D05, 0xFFA8F303, 0x00000000, 0x00000000, 1377 + 0xFF6CF073, 0xFF0E449C, 0xFF114D05, 0x00000000, 0x00000000, 1378 + 0xFFA80303, 0xFF6CF073, 0xFF0E449C, 0x00000000, 0x00000000, 1379 + }, 1380 + { 1381 + 0x00C1B1A1, 1382 + 0xD1A2B200, 1383 + 0xD2A30000, 1384 + 0xFF0000C2, 1385 + }, 1386 + }, 1387 + }, 1388 + { 1389 + .name = "destination_pitch", 1390 + .format = DRM_FORMAT_YUV444, 1391 + .clip = DRM_RECT_INIT(0, 0, 3, 3), 1392 + .src_pitches = { 3 * 1, 3 * 1, 3 * 1 }, 1393 + .src = { 1394 + { 1395 + 0xBAC1323D, 1396 + 0xBA34323D, 1397 + 0xFFFFFF3D, 1398 + }, 1399 + { 1400 + 0xE1ABEC2A, 1401 + 0xE1EAEC2A, 1402 + 0xFFFFFF2A, 1403 + }, 1404 + { 1405 + 0xBCEBE4D7, 1406 + 0xBC65E4D7, 1407 + 0xFFFFFFD7, 1408 + }, 1409 + }, 1410 + .dst_pitches = { 5 * 1, 5 * 1, 5 * 1 }, 1411 + .expected = { 1412 + { 1413 + 0x00C1323D, 1414 + 0x323DBA00, 1415 + 0xBA340000, 1416 + 0xFF00003D, 1417 + }, 1418 + { 1419 + 0x00ABEC2A, 1420 + 0xEC2AE100, 1421 + 0xE1EA0000, 1422 + 0xFF00002A, 1423 + }, 1424 + { 1425 + 0x00EBE4D7, 1426 + 0xE4D7BC00, 1427 + 0xBC650000, 1428 + 0xFF0000D7, 1429 + }, 1430 + }, 1431 + }, 1432 + }; 1433 + 1434 + static void fb_memcpy_case_desc(struct fb_memcpy_case *t, char *desc) 1435 + { 1436 + snprintf(desc, KUNIT_PARAM_DESC_SIZE, "%s: %p4cc", t->name, &t->format); 1437 + } 1438 + 1439 + KUNIT_ARRAY_PARAM(fb_memcpy, fb_memcpy_cases, fb_memcpy_case_desc); 1440 + 1441 + static void drm_test_fb_memcpy(struct kunit *test) 1442 + { 1443 + const struct fb_memcpy_case *params = test->param_value; 1444 + size_t dst_size[DRM_FORMAT_MAX_PLANES] = { 0 }; 1445 + u32 *buf[DRM_FORMAT_MAX_PLANES] = { 0 }; 1446 + __le32 *src_cp[DRM_FORMAT_MAX_PLANES] = { 0 }; 1447 + __le32 *expected[DRM_FORMAT_MAX_PLANES] = { 0 }; 1448 + struct iosys_map dst[DRM_FORMAT_MAX_PLANES]; 1449 + struct iosys_map src[DRM_FORMAT_MAX_PLANES]; 1450 + 1451 + struct drm_framebuffer fb = { 1452 + .format = drm_format_info(params->format), 1453 + }; 1454 + 1455 + memcpy(fb.pitches, params->src_pitches, 
DRM_FORMAT_MAX_PLANES * sizeof(int)); 1456 + 1457 + for (size_t i = 0; i < fb.format->num_planes; i++) { 1458 + dst_size[i] = conversion_buf_size(params->format, params->dst_pitches[i], 1459 + &params->clip, i); 1460 + KUNIT_ASSERT_GT(test, dst_size[i], 0); 1461 + 1462 + buf[i] = kunit_kzalloc(test, dst_size[i], GFP_KERNEL); 1463 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf[i]); 1464 + iosys_map_set_vaddr(&dst[i], buf[i]); 1465 + 1466 + src_cp[i] = cpubuf_to_le32(test, params->src[i], TEST_BUF_SIZE); 1467 + iosys_map_set_vaddr(&src[i], src_cp[i]); 1468 + } 1469 + 1470 + const unsigned int *dst_pitches = params->dst_pitches[0] == TEST_USE_DEFAULT_PITCH ? NULL : 1471 + params->dst_pitches; 1472 + 1473 + drm_fb_memcpy(dst, dst_pitches, src, &fb, &params->clip); 1474 + 1475 + for (size_t i = 0; i < fb.format->num_planes; i++) { 1476 + expected[i] = cpubuf_to_le32(test, params->expected[i], TEST_BUF_SIZE); 1477 + KUNIT_EXPECT_MEMEQ_MSG(test, buf[i], expected[i], dst_size[i], 1478 + "Failed expectation on plane %zu", i); 1479 + } 917 1480 } 918 1481 919 1482 static struct kunit_case drm_format_helper_test_cases[] = { ··· 1557 862 KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_xrgb2101010, convert_xrgb8888_gen_params), 1558 863 KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_argb2101010, convert_xrgb8888_gen_params), 1559 864 KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_mono, convert_xrgb8888_gen_params), 865 + KUNIT_CASE_PARAM(drm_test_fb_swab, convert_xrgb8888_gen_params), 866 + KUNIT_CASE_PARAM(drm_test_fb_clip_offset, clip_offset_gen_params), 867 + KUNIT_CASE_PARAM(drm_test_fb_build_fourcc_list, fb_build_fourcc_list_gen_params), 868 + KUNIT_CASE_PARAM(drm_test_fb_memcpy, fb_memcpy_gen_params), 1560 869 {} 1561 870 }; 1562 871
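A detail worth noting about the test tables above: per-byte plane data is packed into u32 values so that memory order comes out right on a little-endian machine, which is why an entry like 0xFFFFFF01 denotes a single meaningful byte 0x01 followed by 0xFF padding. A small sketch of that layout (this assumes a little-endian host, exactly as the tables themselves do):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Returns the byte stored first in memory for a u32 value. On a
 * little-endian host this is the least-significant byte, which is how
 * the test tables place single-byte plane samples. */
static uint8_t first_byte_in_memory(uint32_t v)
{
	uint8_t b[4];

	memcpy(b, &v, sizeof(v));
	return b[0];
}
```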
+1 -1
drivers/gpu/drm/tiny/repaper.c
··· 949 949 950 950 match = device_get_match_data(dev); 951 951 if (match) { 952 - model = (enum repaper_model)match; 952 + model = (enum repaper_model)(uintptr_t)match; 953 953 } else { 954 954 spi_id = spi_get_device_id(spi); 955 955 model = (enum repaper_model)spi_id->driver_data;
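The repaper change above fixes a cast from the void * returned by device_get_match_data() straight to an enum: on 64-bit targets that conversion narrows pointer width and triggers compiler warnings such as -Wvoid-pointer-to-enum-cast, so the value is routed through uintptr_t first. A minimal sketch of the pattern (the enum and its values here are illustrative, not the driver's real model list):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model IDs; match-table data stores the enum value in the
 * pointer itself rather than pointing at real memory. */
enum demo_model { DEMO_MODEL_A = 1, DEMO_MODEL_B = 2 };

/* Cast through uintptr_t so the pointer-to-integer conversion is
 * well-defined and warning-free, then narrow to the enum. */
static enum demo_model model_from_match(const void *match)
{
	return (enum demo_model)(uintptr_t)match;
}
```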
-7
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 344 344 struct virtio_gpu_object *obj, 345 345 struct virtio_gpu_mem_entry *ents, 346 346 unsigned int nents); 347 - int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); 348 - int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); 349 347 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, 350 348 struct virtio_gpu_output *output); 351 349 int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev); ··· 392 394 struct virtio_gpu_fence *fence); 393 395 void virtio_gpu_ctrl_ack(struct virtqueue *vq); 394 396 void virtio_gpu_cursor_ack(struct virtqueue *vq); 395 - void virtio_gpu_fence_ack(struct virtqueue *vq); 396 397 void virtio_gpu_dequeue_ctrl_func(struct work_struct *work); 397 398 void virtio_gpu_dequeue_cursor_func(struct work_struct *work); 398 - void virtio_gpu_dequeue_fence_func(struct work_struct *work); 399 - 400 399 void virtio_gpu_notify(struct virtio_gpu_device *vgdev); 401 400 402 401 int ··· 460 465 int flags); 461 466 struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev, 462 467 struct dma_buf *buf); 463 - int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj, 464 - uuid_t *uuid); 465 468 struct drm_gem_object *virtgpu_gem_prime_import_sg_table( 466 469 struct drm_device *dev, struct dma_buf_attachment *attach, 467 470 struct sg_table *sgt);
+2 -6
drivers/hid/Kconfig
··· 878 878 default !EXPERT 879 879 depends on HID_PICOLCD 880 880 depends on HID_PICOLCD=FB || FB=y 881 - select FB_DEFERRED_IO 882 - select FB_SYS_FILLRECT 883 - select FB_SYS_COPYAREA 884 - select FB_SYS_IMAGEBLIT 885 - select FB_SYS_FOPS 881 + select FB_SYSMEM_HELPERS_DEFERRED 886 882 help 887 883 Provide access to PicoLCD's 256x64 monochrome display via a 888 884 framebuffer device. ··· 1040 1044 * Guitar Hero PS3 and PC guitar dongles 1041 1045 1042 1046 config SONY_FF 1043 - bool "Sony PS2/3/4 accessories force feedback support" 1047 + bool "Sony PS2/3/4 accessories force feedback support" 1044 1048 depends on HID_SONY 1045 1049 select INPUT_FF_MEMLESS 1046 1050 help
+19 -54
drivers/hid/hid-picolcd_fb.c
··· 283 283 mutex_unlock(&info->lock); 284 284 } 285 285 286 - /* Stub to call the system default and update the image on the picoLCD */ 287 - static void picolcd_fb_fillrect(struct fb_info *info, 288 - const struct fb_fillrect *rect) 289 - { 290 - if (!info->par) 291 - return; 292 - sys_fillrect(info, rect); 293 - 294 - schedule_delayed_work(&info->deferred_work, 0); 295 - } 296 - 297 - /* Stub to call the system default and update the image on the picoLCD */ 298 - static void picolcd_fb_copyarea(struct fb_info *info, 299 - const struct fb_copyarea *area) 300 - { 301 - if (!info->par) 302 - return; 303 - sys_copyarea(info, area); 304 - 305 - schedule_delayed_work(&info->deferred_work, 0); 306 - } 307 - 308 - /* Stub to call the system default and update the image on the picoLCD */ 309 - static void picolcd_fb_imageblit(struct fb_info *info, const struct fb_image *image) 310 - { 311 - if (!info->par) 312 - return; 313 - sys_imageblit(info, image); 314 - 315 - schedule_delayed_work(&info->deferred_work, 0); 316 - } 317 - 318 - /* 319 - * this is the slow path from userspace. they can seek and write to 320 - * the fb. 
it's inefficient to do anything less than a full screen draw 321 - */ 322 - static ssize_t picolcd_fb_write(struct fb_info *info, const char __user *buf, 323 - size_t count, loff_t *ppos) 324 - { 325 - ssize_t ret; 326 - if (!info->par) 327 - return -ENODEV; 328 - ret = fb_sys_write(info, buf, count, ppos); 329 - if (ret >= 0) 330 - schedule_delayed_work(&info->deferred_work, 0); 331 - return ret; 332 - } 333 - 334 286 static int picolcd_fb_blank(int blank, struct fb_info *info) 335 287 { 336 288 /* We let fb notification do this for us via lcd/backlight device */ ··· 369 417 return 0; 370 418 } 371 419 420 + static void picolcdfb_ops_damage_range(struct fb_info *info, off_t off, size_t len) 421 + { 422 + if (!info->par) 423 + return; 424 + schedule_delayed_work(&info->deferred_work, 0); 425 + } 426 + 427 + static void picolcdfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height) 428 + { 429 + if (!info->par) 430 + return; 431 + schedule_delayed_work(&info->deferred_work, 0); 432 + } 433 + 434 + FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(picolcdfb_ops, 435 + picolcdfb_ops_damage_range, 436 + picolcdfb_ops_damage_area) 437 + 372 438 static const struct fb_ops picolcdfb_ops = { 373 439 .owner = THIS_MODULE, 440 + FB_DEFAULT_DEFERRED_OPS(picolcdfb_ops), 374 441 .fb_destroy = picolcd_fb_destroy, 375 - .fb_read = fb_sys_read, 376 - .fb_write = picolcd_fb_write, 377 442 .fb_blank = picolcd_fb_blank, 378 - .fb_fillrect = picolcd_fb_fillrect, 379 - .fb_copyarea = picolcd_fb_copyarea, 380 - .fb_imageblit = picolcd_fb_imageblit, 381 443 .fb_check_var = picolcd_fb_check_var, 382 444 .fb_set_par = picolcd_set_par, 383 - .fb_mmap = fb_deferred_io_mmap, 384 445 }; 385 446 386 447
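With FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS, the generated fb_write/fb_fillrect/fb_copyarea/fb_imageblit helpers call the sys_* drawing code and then report damage through the two callbacks; picolcd simply schedules a full refresh either way, which is why its callbacks ignore their arguments. A driver that does want partial updates from a damage_range callback typically converts the byte range into scanlines via the line pitch. This is a hedged sketch of that conversion, not picolcd's code:

```c
#include <assert.h>
#include <stddef.h>

/* Number of scanlines touched by the byte range [off, off + len) on a
 * framebuffer whose lines are `pitch` bytes apart. len must be >= 1. */
static size_t damaged_lines(size_t off, size_t len, size_t pitch)
{
	size_t first = off / pitch;
	size_t last = (off + len - 1) / pitch;

	return last - first + 1;
}
```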
+1 -5
drivers/staging/fbtft/Kconfig
··· 4 4 depends on FB && SPI 5 5 depends on FB_DEVICE 6 6 depends on GPIOLIB || COMPILE_TEST 7 - select FB_SYS_FILLRECT 8 - select FB_SYS_COPYAREA 9 - select FB_SYS_IMAGEBLIT 10 - select FB_SYS_FOPS 11 - select FB_DEFERRED_IO 12 7 select FB_BACKLIGHT 8 + select FB_SYSMEM_HELPERS_DEFERRED 13 9 14 10 config FB_TFT_AGM1264K_FL 15 11 tristate "FB driver for the AGM1264K-FL LCD display"
+27 -72
drivers/staging/fbtft/fbtft-core.c
··· 357 357 dirty_lines_start, dirty_lines_end); 358 358 } 359 359 360 - static void fbtft_fb_fillrect(struct fb_info *info, 361 - const struct fb_fillrect *rect) 362 - { 363 - struct fbtft_par *par = info->par; 364 - 365 - dev_dbg(info->dev, 366 - "%s: dx=%d, dy=%d, width=%d, height=%d\n", 367 - __func__, rect->dx, rect->dy, rect->width, rect->height); 368 - sys_fillrect(info, rect); 369 - 370 - par->fbtftops.mkdirty(info, rect->dy, rect->height); 371 - } 372 - 373 - static void fbtft_fb_copyarea(struct fb_info *info, 374 - const struct fb_copyarea *area) 375 - { 376 - struct fbtft_par *par = info->par; 377 - 378 - dev_dbg(info->dev, 379 - "%s: dx=%d, dy=%d, width=%d, height=%d\n", 380 - __func__, area->dx, area->dy, area->width, area->height); 381 - sys_copyarea(info, area); 382 - 383 - par->fbtftops.mkdirty(info, area->dy, area->height); 384 - } 385 - 386 - static void fbtft_fb_imageblit(struct fb_info *info, 387 - const struct fb_image *image) 388 - { 389 - struct fbtft_par *par = info->par; 390 - 391 - dev_dbg(info->dev, 392 - "%s: dx=%d, dy=%d, width=%d, height=%d\n", 393 - __func__, image->dx, image->dy, image->width, image->height); 394 - sys_imageblit(info, image); 395 - 396 - par->fbtftops.mkdirty(info, image->dy, image->height); 397 - } 398 - 399 - static ssize_t fbtft_fb_write(struct fb_info *info, const char __user *buf, 400 - size_t count, loff_t *ppos) 401 - { 402 - struct fbtft_par *par = info->par; 403 - ssize_t res; 404 - 405 - dev_dbg(info->dev, 406 - "%s: count=%zd, ppos=%llu\n", __func__, count, *ppos); 407 - res = fb_sys_write(info, buf, count, ppos); 408 - 409 - /* TODO: only mark changed area update all for now */ 410 - par->fbtftops.mkdirty(info, -1, 0); 411 - 412 - return res; 413 - } 414 - 415 360 /* from pxafb.c */ 416 361 static unsigned int chan_to_field(unsigned int chan, struct fb_bitfield *bf) 417 362 { ··· 418 473 return ret; 419 474 } 420 475 476 + static void fbtft_ops_damage_range(struct fb_info *info, off_t off, size_t len) 477 
+ { 478 + struct fbtft_par *par = info->par; 479 + 480 + /* TODO: only mark changed area update all for now */ 481 + par->fbtftops.mkdirty(info, -1, 0); 482 + } 483 + 484 + static void fbtft_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height) 485 + { 486 + struct fbtft_par *par = info->par; 487 + 488 + par->fbtftops.mkdirty(info, y, height); 489 + } 490 + 491 + FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(fbtft_ops, 492 + fbtft_ops_damage_range, 493 + fbtft_ops_damage_area) 494 + 495 + static const struct fb_ops fbtft_ops = { 496 + .owner = THIS_MODULE, 497 + FB_DEFAULT_DEFERRED_OPS(fbtft_ops), 498 + .fb_setcolreg = fbtft_fb_setcolreg, 499 + .fb_blank = fbtft_fb_blank, 500 + }; 501 + 421 502 static void fbtft_merge_fbtftops(struct fbtft_ops *dst, struct fbtft_ops *src) 422 503 { 423 504 if (src->write) ··· 492 521 * Creates a new frame buffer info structure. 493 522 * 494 523 * Also creates and populates the following structures: 495 - * info->fbops 496 524 * info->fbdefio 497 525 * info->pseudo_palette 498 526 * par->fbtftops ··· 506 536 { 507 537 struct fb_info *info; 508 538 struct fbtft_par *par; 509 - struct fb_ops *fbops = NULL; 510 539 struct fb_deferred_io *fbdefio = NULL; 511 540 u8 *vmem = NULL; 512 541 void *txbuf = NULL; ··· 580 611 if (!vmem) 581 612 goto alloc_fail; 582 613 583 - fbops = devm_kzalloc(dev, sizeof(struct fb_ops), GFP_KERNEL); 584 - if (!fbops) 585 - goto alloc_fail; 586 - 587 614 fbdefio = devm_kzalloc(dev, sizeof(struct fb_deferred_io), GFP_KERNEL); 588 615 if (!fbdefio) 589 616 goto alloc_fail; ··· 603 638 goto alloc_fail; 604 639 605 640 info->screen_buffer = vmem; 606 - info->fbops = fbops; 641 + info->fbops = &fbtft_ops; 607 642 info->fbdefio = fbdefio; 608 - 609 - fbops->owner = dev->driver->owner; 610 - fbops->fb_read = fb_sys_read; 611 - fbops->fb_write = fbtft_fb_write; 612 - fbops->fb_fillrect = fbtft_fb_fillrect; 613 - fbops->fb_copyarea = fbtft_fb_copyarea; 614 - fbops->fb_imageblit = fbtft_fb_imageblit; 615 - 
fbops->fb_setcolreg = fbtft_fb_setcolreg; 616 - fbops->fb_blank = fbtft_fb_blank; 617 - fbops->fb_mmap = fb_deferred_io_mmap; 618 643 619 644 fbdefio->delay = HZ / fps; 620 645 fbdefio->sort_pagereflist = true;
+13 -22
drivers/video/fbdev/Kconfig
··· 518 518 help 519 519 Say Y if you want support for SBUS or UPA based frame buffer device. 520 520 521 + config FB_SBUS_HELPERS 522 + bool 523 + select FB_CFB_COPYAREA 524 + select FB_CFB_FILLRECT 525 + select FB_CFB_IMAGEBLIT 526 + 521 527 config FB_BW2 522 528 bool "BWtwo support" 523 529 depends on (FB = y) && (SPARC && FB_SBUS) 524 - select FB_CFB_FILLRECT 525 - select FB_CFB_COPYAREA 526 - select FB_CFB_IMAGEBLIT 530 + select FB_SBUS_HELPERS 527 531 help 528 532 This is the frame buffer device driver for the BWtwo frame buffer. 529 533 530 534 config FB_CG3 531 535 bool "CGthree support" 532 536 depends on (FB = y) && (SPARC && FB_SBUS) 533 - select FB_CFB_FILLRECT 534 - select FB_CFB_COPYAREA 535 - select FB_CFB_IMAGEBLIT 537 + select FB_SBUS_HELPERS 536 538 help 537 539 This is the frame buffer device driver for the CGthree frame buffer. 538 540 ··· 559 557 config FB_TCX 560 558 bool "TCX (SS4/SS5 only) support" 561 559 depends on FB_SBUS 562 - select FB_CFB_FILLRECT 563 - select FB_CFB_COPYAREA 564 - select FB_CFB_IMAGEBLIT 560 + select FB_SBUS_HELPERS 565 561 help 566 562 This is the frame buffer device driver for the TCX 24/8bit frame 567 563 buffer. ··· 567 567 config FB_CG14 568 568 bool "CGfourteen (SX) support" 569 569 depends on FB_SBUS 570 - select FB_CFB_FILLRECT 571 - select FB_CFB_COPYAREA 572 - select FB_CFB_IMAGEBLIT 570 + select FB_SBUS_HELPERS 573 571 help 574 572 This is the frame buffer device driver for the CGfourteen frame 575 573 buffer on Desktop SPARCsystems with the SX graphics option. ··· 575 577 config FB_P9100 576 578 bool "P9100 (Sparcbook 3 only) support" 577 579 depends on FB_SBUS 578 - select FB_CFB_FILLRECT 579 - select FB_CFB_COPYAREA 580 - select FB_CFB_IMAGEBLIT 580 + select FB_SBUS_HELPERS 581 581 help 582 582 This is the frame buffer device driver for the P9100 card 583 583 supported on Sparcbook 3 machines. 
··· 583 587 config FB_LEO 584 588 bool "Leo (ZX) support" 585 589 depends on FB_SBUS 586 - select FB_CFB_FILLRECT 587 - select FB_CFB_COPYAREA 588 - select FB_CFB_IMAGEBLIT 590 + select FB_SBUS_HELPERS 589 591 help 590 592 This is the frame buffer device driver for the SBUS-based Sun ZX 591 593 (leo) frame buffer cards. ··· 1894 1900 config FB_HYPERV 1895 1901 tristate "Microsoft Hyper-V Synthetic Video support" 1896 1902 depends on FB && HYPERV 1897 - select FB_CFB_FILLRECT 1898 - select FB_CFB_COPYAREA 1899 - select FB_CFB_IMAGEBLIT 1900 - select FB_DEFERRED_IO 1901 1903 select DMA_CMA if HAVE_DMA_CONTIGUOUS && CMA 1904 + select FB_IOMEM_HELPERS_DEFERRED 1902 1905 select VIDEO_NOMODESET 1903 1906 help 1904 1907 This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
+9 -8
drivers/video/fbdev/Makefile
··· 8 8 obj-y += core/ 9 9 10 10 obj-$(CONFIG_FB_MACMODES) += macmodes.o 11 + obj-$(CONFIG_FB_SBUS) += sbuslib.o 11 12 obj-$(CONFIG_FB_WMT_GE_ROPS) += wmt_ge_rops.o 12 13 13 14 # Hardware specific drivers go first ··· 46 45 obj-$(CONFIG_FB_S3) += s3fb.o 47 46 obj-$(CONFIG_FB_ARK) += arkfb.o 48 47 obj-$(CONFIG_FB_STI) += stifb.o 49 - obj-$(CONFIG_FB_FFB) += ffb.o sbuslib.o 50 - obj-$(CONFIG_FB_CG6) += cg6.o sbuslib.o 51 - obj-$(CONFIG_FB_CG3) += cg3.o sbuslib.o 52 - obj-$(CONFIG_FB_BW2) += bw2.o sbuslib.o 53 - obj-$(CONFIG_FB_CG14) += cg14.o sbuslib.o 54 - obj-$(CONFIG_FB_P9100) += p9100.o sbuslib.o 55 - obj-$(CONFIG_FB_TCX) += tcx.o sbuslib.o 56 - obj-$(CONFIG_FB_LEO) += leo.o sbuslib.o 48 + obj-$(CONFIG_FB_FFB) += ffb.o 49 + obj-$(CONFIG_FB_CG6) += cg6.o 50 + obj-$(CONFIG_FB_CG3) += cg3.o 51 + obj-$(CONFIG_FB_BW2) += bw2.o 52 + obj-$(CONFIG_FB_CG14) += cg14.o 53 + obj-$(CONFIG_FB_P9100) += p9100.o 54 + obj-$(CONFIG_FB_TCX) += tcx.o 55 + obj-$(CONFIG_FB_LEO) += leo.o 57 56 obj-$(CONFIG_FB_ACORN) += acornfb.o 58 57 obj-$(CONFIG_FB_ATARI) += atafb.o c2p_iplan2.o atafb_mfb.o \ 59 58 atafb_iplan2p2.o atafb_iplan2p4.o atafb_iplan2p8.o
+5 -12
drivers/video/fbdev/bw2.c
··· 31 31 32 32 static int bw2_blank(int, struct fb_info *); 33 33 34 - static int bw2_mmap(struct fb_info *, struct vm_area_struct *); 35 - static int bw2_ioctl(struct fb_info *, unsigned int, unsigned long); 34 + static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 35 + static int bw2_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 36 36 37 37 /* 38 38 * Frame buffer operations ··· 40 40 41 41 static const struct fb_ops bw2_ops = { 42 42 .owner = THIS_MODULE, 43 + FB_DEFAULT_SBUS_OPS(bw2), 43 44 .fb_blank = bw2_blank, 44 - .fb_fillrect = cfb_fillrect, 45 - .fb_copyarea = cfb_copyarea, 46 - .fb_imageblit = cfb_imageblit, 47 - .fb_mmap = bw2_mmap, 48 - .fb_ioctl = bw2_ioctl, 49 - #ifdef CONFIG_COMPAT 50 - .fb_compat_ioctl = sbusfb_compat_ioctl, 51 - #endif 52 45 }; 53 46 54 47 /* OBio addresses for the bwtwo registers */ ··· 154 161 { .size = 0 } 155 162 }; 156 163 157 - static int bw2_mmap(struct fb_info *info, struct vm_area_struct *vma) 164 + static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 158 165 { 159 166 struct bw2_par *par = (struct bw2_par *)info->par; 160 167 ··· 164 171 vma); 165 172 } 166 173 167 - static int bw2_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 174 + static int bw2_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 168 175 { 169 176 return sbusfb_ioctl_helper(cmd, arg, info, 170 177 FBTYPE_SUN2BW, 1, info->fix.smem_len);
+6 -13
drivers/video/fbdev/cg14.c
··· 31 31 32 32 static int cg14_setcolreg(unsigned, unsigned, unsigned, unsigned, 33 33 unsigned, struct fb_info *); 34 - 35 - static int cg14_mmap(struct fb_info *, struct vm_area_struct *); 36 - static int cg14_ioctl(struct fb_info *, unsigned int, unsigned long); 37 34 static int cg14_pan_display(struct fb_var_screeninfo *, struct fb_info *); 35 + 36 + static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 37 + static int cg14_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 38 38 39 39 /* 40 40 * Frame buffer operations ··· 42 42 43 43 static const struct fb_ops cg14_ops = { 44 44 .owner = THIS_MODULE, 45 + FB_DEFAULT_SBUS_OPS(cg14), 45 46 .fb_setcolreg = cg14_setcolreg, 46 47 .fb_pan_display = cg14_pan_display, 47 - .fb_fillrect = cfb_fillrect, 48 - .fb_copyarea = cfb_copyarea, 49 - .fb_imageblit = cfb_imageblit, 50 - .fb_mmap = cg14_mmap, 51 - .fb_ioctl = cg14_ioctl, 52 - #ifdef CONFIG_COMPAT 53 - .fb_compat_ioctl = sbusfb_compat_ioctl, 54 - #endif 55 48 }; 56 49 57 50 #define CG14_MCR_INTENABLE_SHIFT 7 ··· 258 265 return 0; 259 266 } 260 267 261 - static int cg14_mmap(struct fb_info *info, struct vm_area_struct *vma) 268 + static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 262 269 { 263 270 struct cg14_par *par = (struct cg14_par *) info->par; 264 271 ··· 267 274 par->iospace, vma); 268 275 } 269 276 270 - static int cg14_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 277 + static int cg14_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 271 278 { 272 279 struct cg14_par *par = (struct cg14_par *) info->par; 273 280 struct cg14_regs __iomem *regs = par->regs;
+5 -12
drivers/video/fbdev/cg3.c
··· 33 33 unsigned, struct fb_info *); 34 34 static int cg3_blank(int, struct fb_info *); 35 35 36 - static int cg3_mmap(struct fb_info *, struct vm_area_struct *); 37 - static int cg3_ioctl(struct fb_info *, unsigned int, unsigned long); 36 + static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 37 + static int cg3_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 38 38 39 39 /* 40 40 * Frame buffer operations ··· 42 42 43 43 static const struct fb_ops cg3_ops = { 44 44 .owner = THIS_MODULE, 45 + FB_DEFAULT_SBUS_OPS(cg3), 45 46 .fb_setcolreg = cg3_setcolreg, 46 47 .fb_blank = cg3_blank, 47 - .fb_fillrect = cfb_fillrect, 48 - .fb_copyarea = cfb_copyarea, 49 - .fb_imageblit = cfb_imageblit, 50 - .fb_mmap = cg3_mmap, 51 - .fb_ioctl = cg3_ioctl, 52 - #ifdef CONFIG_COMPAT 53 - .fb_compat_ioctl = sbusfb_compat_ioctl, 54 - #endif 55 48 }; 56 49 57 50 ··· 218 225 { .size = 0 } 219 226 }; 220 227 221 - static int cg3_mmap(struct fb_info *info, struct vm_area_struct *vma) 228 + static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 222 229 { 223 230 struct cg3_par *par = (struct cg3_par *)info->par; 224 231 ··· 228 235 vma); 229 236 } 230 237 231 - static int cg3_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 238 + static int cg3_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 232 239 { 233 240 return sbusfb_ioctl_helper(cmd, arg, info, 234 241 FBTYPE_SUN3COLOR, 8, info->fix.smem_len);
+8 -9
drivers/video/fbdev/cg6.c
··· 37 37 static void cg6_fillrect(struct fb_info *, const struct fb_fillrect *); 38 38 static void cg6_copyarea(struct fb_info *info, const struct fb_copyarea *area); 39 39 static int cg6_sync(struct fb_info *); 40 - static int cg6_mmap(struct fb_info *, struct vm_area_struct *); 41 - static int cg6_ioctl(struct fb_info *, unsigned int, unsigned long); 42 40 static int cg6_pan_display(struct fb_var_screeninfo *, struct fb_info *); 41 + 42 + static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 43 + static int cg6_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 43 44 44 45 /* 45 46 * Frame buffer operations ··· 48 47 49 48 static const struct fb_ops cg6_ops = { 50 49 .owner = THIS_MODULE, 50 + __FB_DEFAULT_SBUS_OPS_RDWR(cg6), 51 51 .fb_setcolreg = cg6_setcolreg, 52 52 .fb_blank = cg6_blank, 53 53 .fb_pan_display = cg6_pan_display, ··· 56 54 .fb_copyarea = cg6_copyarea, 57 55 .fb_imageblit = cg6_imageblit, 58 56 .fb_sync = cg6_sync, 59 - .fb_mmap = cg6_mmap, 60 - .fb_ioctl = cg6_ioctl, 61 - #ifdef CONFIG_COMPAT 62 - .fb_compat_ioctl = sbusfb_compat_ioctl, 63 - #endif 57 + __FB_DEFAULT_SBUS_OPS_IOCTL(cg6), 58 + __FB_DEFAULT_SBUS_OPS_MMAP(cg6), 64 59 }; 65 60 66 61 /* Offset of interesting structures in the OBIO space */ ··· 589 590 { .size = 0 } 590 591 }; 591 592 592 - static int cg6_mmap(struct fb_info *info, struct vm_area_struct *vma) 593 + static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 593 594 { 594 595 struct cg6_par *par = (struct cg6_par *)info->par; 595 596 ··· 598 599 par->which_io, vma); 599 600 } 600 601 601 - static int cg6_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 602 + static int cg6_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 602 603 { 603 604 return sbusfb_ioctl_helper(cmd, arg, info, 604 605 FBTYPE_SUNFAST_COLOR, 8, info->fix.smem_len);
+6
drivers/video/fbdev/core/Kconfig
··· 151 151 select FB_CFB_FILLRECT 152 152 select FB_CFB_IMAGEBLIT 153 153 154 + config FB_IOMEM_HELPERS_DEFERRED 155 + bool 156 + depends on FB_CORE 157 + select FB_DEFERRED_IO 158 + select FB_IOMEM_HELPERS 159 + 154 160 config FB_SYSMEM_HELPERS 155 161 bool 156 162 depends on FB_CORE
+8 -9
drivers/video/fbdev/ffb.c
··· 37 37 static void ffb_fillrect(struct fb_info *, const struct fb_fillrect *); 38 38 static void ffb_copyarea(struct fb_info *, const struct fb_copyarea *); 39 39 static int ffb_sync(struct fb_info *); 40 - static int ffb_mmap(struct fb_info *, struct vm_area_struct *); 41 - static int ffb_ioctl(struct fb_info *, unsigned int, unsigned long); 42 40 static int ffb_pan_display(struct fb_var_screeninfo *, struct fb_info *); 41 + 42 + static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 43 + static int ffb_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 43 44 44 45 /* 45 46 * Frame buffer operations ··· 48 47 49 48 static const struct fb_ops ffb_ops = { 50 49 .owner = THIS_MODULE, 50 + __FB_DEFAULT_SBUS_OPS_RDWR(ffb), 51 51 .fb_setcolreg = ffb_setcolreg, 52 52 .fb_blank = ffb_blank, 53 53 .fb_pan_display = ffb_pan_display, ··· 56 54 .fb_copyarea = ffb_copyarea, 57 55 .fb_imageblit = ffb_imageblit, 58 56 .fb_sync = ffb_sync, 59 - .fb_mmap = ffb_mmap, 60 - .fb_ioctl = ffb_ioctl, 61 - #ifdef CONFIG_COMPAT 62 - .fb_compat_ioctl = sbusfb_compat_ioctl, 63 - #endif 57 + __FB_DEFAULT_SBUS_OPS_IOCTL(ffb), 58 + __FB_DEFAULT_SBUS_OPS_MMAP(ffb), 64 59 }; 65 60 66 61 /* Register layout and definitions */ ··· 849 850 { .size = 0 } 850 851 }; 851 852 852 - static int ffb_mmap(struct fb_info *info, struct vm_area_struct *vma) 853 + static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 853 854 { 854 855 struct ffb_par *par = (struct ffb_par *)info->par; 855 856 ··· 858 859 0, vma); 859 860 } 860 861 861 - static int ffb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 862 + static int ffb_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 862 863 { 863 864 struct ffb_par *par = (struct ffb_par *)info->par; 864 865
+14 -34
drivers/video/fbdev/hyperv_fb.c
··· 848 848 return 1; /* get fb_blank to set the colormap to all black */ 849 849 } 850 850 851 - static void hvfb_cfb_fillrect(struct fb_info *p, 852 - const struct fb_fillrect *rect) 851 + static void hvfb_ops_damage_range(struct fb_info *info, off_t off, size_t len) 853 852 { 854 - struct hvfb_par *par = p->par; 855 - 856 - cfb_fillrect(p, rect); 857 - if (par->synchronous_fb) 858 - synthvid_update(p, 0, 0, INT_MAX, INT_MAX); 859 - else 860 - hvfb_ondemand_refresh_throttle(par, rect->dx, rect->dy, 861 - rect->width, rect->height); 853 + /* TODO: implement damage handling */ 862 854 } 863 855 864 - static void hvfb_cfb_copyarea(struct fb_info *p, 865 - const struct fb_copyarea *area) 856 + static void hvfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height) 866 857 { 867 - struct hvfb_par *par = p->par; 858 + struct hvfb_par *par = info->par; 868 859 869 - cfb_copyarea(p, area); 870 860 if (par->synchronous_fb) 871 - synthvid_update(p, 0, 0, INT_MAX, INT_MAX); 861 + synthvid_update(info, 0, 0, INT_MAX, INT_MAX); 872 862 else 873 - hvfb_ondemand_refresh_throttle(par, area->dx, area->dy, 874 - area->width, area->height); 863 + hvfb_ondemand_refresh_throttle(par, x, y, width, height); 875 864 } 876 865 877 - static void hvfb_cfb_imageblit(struct fb_info *p, 878 - const struct fb_image *image) 879 - { 880 - struct hvfb_par *par = p->par; 881 - 882 - cfb_imageblit(p, image); 883 - if (par->synchronous_fb) 884 - synthvid_update(p, 0, 0, INT_MAX, INT_MAX); 885 - else 886 - hvfb_ondemand_refresh_throttle(par, image->dx, image->dy, 887 - image->width, image->height); 888 - } 866 + /* 867 + * TODO: GEN1 codepaths allocate from system or DMA-able memory. Fix the 868 + * driver to use the _SYSMEM_ or _DMAMEM_ helpers in these cases. 
869 + */ 870 + FB_GEN_DEFAULT_DEFERRED_IOMEM_OPS(hvfb_ops, 871 + hvfb_ops_damage_range, 872 + hvfb_ops_damage_area) 889 873 890 874 static const struct fb_ops hvfb_ops = { 891 875 .owner = THIS_MODULE, 876 + FB_DEFAULT_DEFERRED_OPS(hvfb_ops), 892 877 .fb_check_var = hvfb_check_var, 893 878 .fb_set_par = hvfb_set_par, 894 879 .fb_setcolreg = hvfb_setcolreg, 895 - .fb_fillrect = hvfb_cfb_fillrect, 896 - .fb_copyarea = hvfb_cfb_copyarea, 897 - .fb_imageblit = hvfb_cfb_imageblit, 898 880 .fb_blank = hvfb_blank, 899 - .fb_mmap = fb_deferred_io_mmap, 900 881 }; 901 - 902 882 903 883 /* Get options from kernel paramenter "video=" */ 904 884 static void hvfb_get_option(struct fb_info *info)
+6 -13
drivers/video/fbdev/leo.c
··· 31 31 static int leo_setcolreg(unsigned, unsigned, unsigned, unsigned, 32 32 unsigned, struct fb_info *); 33 33 static int leo_blank(int, struct fb_info *); 34 - 35 - static int leo_mmap(struct fb_info *, struct vm_area_struct *); 36 - static int leo_ioctl(struct fb_info *, unsigned int, unsigned long); 37 34 static int leo_pan_display(struct fb_var_screeninfo *, struct fb_info *); 35 + 36 + static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 37 + static int leo_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 38 38 39 39 /* 40 40 * Frame buffer operations ··· 42 42 43 43 static const struct fb_ops leo_ops = { 44 44 .owner = THIS_MODULE, 45 + FB_DEFAULT_SBUS_OPS(leo), 45 46 .fb_setcolreg = leo_setcolreg, 46 47 .fb_blank = leo_blank, 47 48 .fb_pan_display = leo_pan_display, 48 - .fb_fillrect = cfb_fillrect, 49 - .fb_copyarea = cfb_copyarea, 50 - .fb_imageblit = cfb_imageblit, 51 - .fb_mmap = leo_mmap, 52 - .fb_ioctl = leo_ioctl, 53 - #ifdef CONFIG_COMPAT 54 - .fb_compat_ioctl = sbusfb_compat_ioctl, 55 - #endif 56 49 }; 57 50 58 51 #define LEO_OFF_LC_SS0_KRN 0x00200000UL ··· 407 414 { .size = 0 } 408 415 }; 409 416 410 - static int leo_mmap(struct fb_info *info, struct vm_area_struct *vma) 417 + static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 411 418 { 412 419 struct leo_par *par = (struct leo_par *)info->par; 413 420 ··· 416 423 par->which_io, vma); 417 424 } 418 425 419 - static int leo_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 426 + static int leo_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 420 427 { 421 428 return sbusfb_ioctl_helper(cmd, arg, info, 422 429 FBTYPE_SUNLEO, 32, info->fix.smem_len);
+5 -13
drivers/video/fbdev/p9100.c
··· 31 31 unsigned, struct fb_info *); 32 32 static int p9100_blank(int, struct fb_info *); 33 33 34 - static int p9100_mmap(struct fb_info *, struct vm_area_struct *); 35 - static int p9100_ioctl(struct fb_info *, unsigned int, unsigned long); 34 + static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 35 + static int p9100_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 36 36 37 37 /* 38 38 * Frame buffer operations ··· 40 40 41 41 static const struct fb_ops p9100_ops = { 42 42 .owner = THIS_MODULE, 43 + FB_DEFAULT_SBUS_OPS(p9100), 43 44 .fb_setcolreg = p9100_setcolreg, 44 45 .fb_blank = p9100_blank, 45 - .fb_fillrect = cfb_fillrect, 46 - .fb_copyarea = cfb_copyarea, 47 - .fb_imageblit = cfb_imageblit, 48 - .fb_mmap = p9100_mmap, 49 - .fb_ioctl = p9100_ioctl, 50 - #ifdef CONFIG_COMPAT 51 - .fb_compat_ioctl = sbusfb_compat_ioctl, 52 - #endif 53 46 }; 54 47 55 48 /* P9100 control registers */ ··· 211 218 { 0, 0, 0 } 212 219 }; 213 220 214 - static int p9100_mmap(struct fb_info *info, struct vm_area_struct *vma) 221 + static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 215 222 { 216 223 struct p9100_par *par = (struct p9100_par *)info->par; 217 224 ··· 220 227 par->which_io, vma); 221 228 } 222 229 223 - static int p9100_ioctl(struct fb_info *info, unsigned int cmd, 224 - unsigned long arg) 230 + static int p9100_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 225 231 { 226 232 /* Make it look like a cg3. */ 227 233 return sbusfb_ioctl_helper(cmd, arg, info,
+36 -1
drivers/video/fbdev/sbuslib.h
··· 3 3 #ifndef _SBUSLIB_H 4 4 #define _SBUSLIB_H 5 5 6 + struct device_node; 7 + struct fb_info; 8 + struct fb_var_screeninfo; 9 + struct vm_area_struct; 10 + 6 11 struct sbus_mmap_map { 7 12 unsigned long voff; 8 13 unsigned long poff; ··· 19 14 20 15 extern void sbusfb_fill_var(struct fb_var_screeninfo *var, 21 16 struct device_node *dp, int bpp); 22 - struct vm_area_struct; 23 17 extern int sbusfb_mmap_helper(struct sbus_mmap_map *map, 24 18 unsigned long physbase, unsigned long fbsize, 25 19 unsigned long iospace, ··· 28 24 int type, int fb_depth, unsigned long fb_size); 29 25 int sbusfb_compat_ioctl(struct fb_info *info, unsigned int cmd, 30 26 unsigned long arg); 27 + 28 + /* 29 + * Initialize struct fb_ops for SBUS I/O. 30 + */ 31 + 32 + #define __FB_DEFAULT_SBUS_OPS_RDWR(__prefix) \ 33 + .fb_read = fb_io_read, \ 34 + .fb_write = fb_io_write 35 + 36 + #define __FB_DEFAULT_SBUS_OPS_DRAW(__prefix) \ 37 + .fb_fillrect = cfb_fillrect, \ 38 + .fb_copyarea = cfb_copyarea, \ 39 + .fb_imageblit = cfb_imageblit 40 + 41 + #ifdef CONFIG_COMPAT 42 + #define __FB_DEFAULT_SBUS_OPS_IOCTL(__prefix) \ 43 + .fb_ioctl = __prefix ## _sbusfb_ioctl, \ 44 + .fb_compat_ioctl = sbusfb_compat_ioctl 45 + #else 46 + #define __FB_DEFAULT_SBUS_OPS_IOCTL(__prefix) \ 47 + .fb_ioctl = __prefix ## _sbusfb_ioctl 48 + #endif 49 + 50 + #define __FB_DEFAULT_SBUS_OPS_MMAP(__prefix) \ 51 + .fb_mmap = __prefix ## _sbusfb_mmap 52 + 53 + #define FB_DEFAULT_SBUS_OPS(__prefix) \ 54 + __FB_DEFAULT_SBUS_OPS_RDWR(__prefix), \ 55 + __FB_DEFAULT_SBUS_OPS_DRAW(__prefix), \ 56 + __FB_DEFAULT_SBUS_OPS_IOCTL(__prefix), \ 57 + __FB_DEFAULT_SBUS_OPS_MMAP(__prefix) 31 58 32 59 #endif /* _SBUSLIB_H */
+22 -63
drivers/video/fbdev/smscufx.c
··· 894 894 return 0; 895 895 } 896 896 897 - /* Path triggered by usermode clients who write to filesystem 898 - * e.g. cat filename > /dev/fb1 899 - * Not used by X Windows or text-mode console. But useful for testing. 900 - * Slow because of extra copy and we must assume all pixels dirty. */ 901 - static ssize_t ufx_ops_write(struct fb_info *info, const char __user *buf, 902 - size_t count, loff_t *ppos) 903 - { 904 - ssize_t result; 905 - struct ufx_data *dev = info->par; 906 - u32 offset = (u32) *ppos; 907 - 908 - result = fb_sys_write(info, buf, count, ppos); 909 - 910 - if (result > 0) { 911 - int start = max((int)(offset / info->fix.line_length), 0); 912 - int lines = min((u32)((result / info->fix.line_length) + 1), 913 - (u32)info->var.yres); 914 - 915 - ufx_handle_damage(dev, 0, start, info->var.xres, lines); 916 - } 917 - 918 - return result; 919 - } 920 - 921 - static void ufx_ops_copyarea(struct fb_info *info, 922 - const struct fb_copyarea *area) 923 - { 924 - 925 - struct ufx_data *dev = info->par; 926 - 927 - sys_copyarea(info, area); 928 - 929 - ufx_handle_damage(dev, area->dx, area->dy, 930 - area->width, area->height); 931 - } 932 - 933 - static void ufx_ops_imageblit(struct fb_info *info, 934 - const struct fb_image *image) 935 - { 936 - struct ufx_data *dev = info->par; 937 - 938 - sys_imageblit(info, image); 939 - 940 - ufx_handle_damage(dev, image->dx, image->dy, 941 - image->width, image->height); 942 - } 943 - 944 - static void ufx_ops_fillrect(struct fb_info *info, 945 - const struct fb_fillrect *rect) 946 - { 947 - struct ufx_data *dev = info->par; 948 - 949 - sys_fillrect(info, rect); 950 - 951 - ufx_handle_damage(dev, rect->dx, rect->dy, rect->width, 952 - rect->height); 953 - } 954 - 955 897 /* NOTE: fb_defio.c is holding info->fbdefio.mutex 956 898 * Touching ANY framebuffer memory that triggers a page fault 957 899 * in fb_defio will cause a deadlock, when it also tries to ··· 1221 1279 return 0; 1222 1280 } 1223 1281 1282 + static 
void ufx_ops_damage_range(struct fb_info *info, off_t off, size_t len) 1283 + { 1284 + struct ufx_data *dev = info->par; 1285 + int start = max((int)(off / info->fix.line_length), 0); 1286 + int lines = min((u32)((len / info->fix.line_length) + 1), (u32)info->var.yres); 1287 + 1288 + ufx_handle_damage(dev, 0, start, info->var.xres, lines); 1289 + } 1290 + 1291 + static void ufx_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height) 1292 + { 1293 + struct ufx_data *dev = info->par; 1294 + 1295 + ufx_handle_damage(dev, x, y, width, height); 1296 + } 1297 + 1298 + FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(ufx_ops, 1299 + ufx_ops_damage_range, 1300 + ufx_ops_damage_area) 1301 + 1224 1302 static const struct fb_ops ufx_ops = { 1225 1303 .owner = THIS_MODULE, 1226 - .fb_read = fb_sys_read, 1227 - .fb_write = ufx_ops_write, 1304 + __FB_DEFAULT_DEFERRED_OPS_RDWR(ufx_ops), 1228 1305 .fb_setcolreg = ufx_ops_setcolreg, 1229 - .fb_fillrect = ufx_ops_fillrect, 1230 - .fb_copyarea = ufx_ops_copyarea, 1231 - .fb_imageblit = ufx_ops_imageblit, 1306 + __FB_DEFAULT_DEFERRED_OPS_DRAW(ufx_ops), 1232 1307 .fb_mmap = ufx_ops_mmap, 1233 1308 .fb_ioctl = ufx_ops_ioctl, 1234 1309 .fb_open = ufx_ops_open,
+6 -14
drivers/video/fbdev/tcx.c
··· 32 32 static int tcx_setcolreg(unsigned, unsigned, unsigned, unsigned, 33 33 unsigned, struct fb_info *); 34 34 static int tcx_blank(int, struct fb_info *); 35 - 36 - static int tcx_mmap(struct fb_info *, struct vm_area_struct *); 37 - static int tcx_ioctl(struct fb_info *, unsigned int, unsigned long); 38 35 static int tcx_pan_display(struct fb_var_screeninfo *, struct fb_info *); 36 + 37 + static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma); 38 + static int tcx_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg); 39 39 40 40 /* 41 41 * Frame buffer operations ··· 43 43 44 44 static const struct fb_ops tcx_ops = { 45 45 .owner = THIS_MODULE, 46 + FB_DEFAULT_SBUS_OPS(tcx), 46 47 .fb_setcolreg = tcx_setcolreg, 47 48 .fb_blank = tcx_blank, 48 49 .fb_pan_display = tcx_pan_display, 49 - .fb_fillrect = cfb_fillrect, 50 - .fb_copyarea = cfb_copyarea, 51 - .fb_imageblit = cfb_imageblit, 52 - .fb_mmap = tcx_mmap, 53 - .fb_ioctl = tcx_ioctl, 54 - #ifdef CONFIG_COMPAT 55 - .fb_compat_ioctl = sbusfb_compat_ioctl, 56 - #endif 57 50 }; 58 51 59 52 /* THC definitions */ ··· 292 299 { .size = 0 } 293 300 }; 294 301 295 - static int tcx_mmap(struct fb_info *info, struct vm_area_struct *vma) 302 + static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma) 296 303 { 297 304 struct tcx_par *par = (struct tcx_par *)info->par; 298 305 ··· 301 308 par->which_io, vma); 302 309 } 303 310 304 - static int tcx_ioctl(struct fb_info *info, unsigned int cmd, 305 - unsigned long arg) 311 + static int tcx_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg) 306 312 { 307 313 struct tcx_par *par = (struct tcx_par *) info->par; 308 314
+22 -67
drivers/video/fbdev/udlfb.c
··· 716 716 } 717 717 718 718 /* 719 - * Path triggered by usermode clients who write to filesystem 720 - * e.g. cat filename > /dev/fb1 721 - * Not used by X Windows or text-mode console. But useful for testing. 722 - * Slow because of extra copy and we must assume all pixels dirty. 723 - */ 724 - static ssize_t dlfb_ops_write(struct fb_info *info, const char __user *buf, 725 - size_t count, loff_t *ppos) 726 - { 727 - ssize_t result; 728 - struct dlfb_data *dlfb = info->par; 729 - u32 offset = (u32) *ppos; 730 - 731 - result = fb_sys_write(info, buf, count, ppos); 732 - 733 - if (result > 0) { 734 - int start = max((int)(offset / info->fix.line_length), 0); 735 - int lines = min((u32)((result / info->fix.line_length) + 1), 736 - (u32)info->var.yres); 737 - 738 - dlfb_handle_damage(dlfb, 0, start, info->var.xres, 739 - lines); 740 - } 741 - 742 - return result; 743 - } 744 - 745 - /* hardware has native COPY command (see libdlo), but not worth it for fbcon */ 746 - static void dlfb_ops_copyarea(struct fb_info *info, 747 - const struct fb_copyarea *area) 748 - { 749 - 750 - struct dlfb_data *dlfb = info->par; 751 - 752 - sys_copyarea(info, area); 753 - 754 - dlfb_offload_damage(dlfb, area->dx, area->dy, 755 - area->width, area->height); 756 - } 757 - 758 - static void dlfb_ops_imageblit(struct fb_info *info, 759 - const struct fb_image *image) 760 - { 761 - struct dlfb_data *dlfb = info->par; 762 - 763 - sys_imageblit(info, image); 764 - 765 - dlfb_offload_damage(dlfb, image->dx, image->dy, 766 - image->width, image->height); 767 - } 768 - 769 - static void dlfb_ops_fillrect(struct fb_info *info, 770 - const struct fb_fillrect *rect) 771 - { 772 - struct dlfb_data *dlfb = info->par; 773 - 774 - sys_fillrect(info, rect); 775 - 776 - dlfb_offload_damage(dlfb, rect->dx, rect->dy, rect->width, 777 - rect->height); 778 - } 779 - 780 - /* 781 719 * NOTE: fb_defio.c is holding info->fbdefio.mutex 782 720 * Touching ANY framebuffer memory that triggers a page fault 783 721 
* in fb_defio will cause a deadlock, when it also tries to ··· 1124 1186 return 0; 1125 1187 } 1126 1188 1189 + static void dlfb_ops_damage_range(struct fb_info *info, off_t off, size_t len) 1190 + { 1191 + struct dlfb_data *dlfb = info->par; 1192 + int start = max((int)(off / info->fix.line_length), 0); 1193 + int lines = min((u32)((len / info->fix.line_length) + 1), (u32)info->var.yres); 1194 + 1195 + dlfb_handle_damage(dlfb, 0, start, info->var.xres, lines); 1196 + } 1197 + 1198 + static void dlfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height) 1199 + { 1200 + struct dlfb_data *dlfb = info->par; 1201 + 1202 + dlfb_offload_damage(dlfb, x, y, width, height); 1203 + } 1204 + 1205 + FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(dlfb_ops, 1206 + dlfb_ops_damage_range, 1207 + dlfb_ops_damage_area) 1208 + 1127 1209 static const struct fb_ops dlfb_ops = { 1128 1210 .owner = THIS_MODULE, 1129 - .fb_read = fb_sys_read, 1130 - .fb_write = dlfb_ops_write, 1211 + __FB_DEFAULT_DEFERRED_OPS_RDWR(dlfb_ops), 1131 1212 .fb_setcolreg = dlfb_ops_setcolreg, 1132 - .fb_fillrect = dlfb_ops_fillrect, 1133 - .fb_copyarea = dlfb_ops_copyarea, 1134 - .fb_imageblit = dlfb_ops_imageblit, 1213 + __FB_DEFAULT_DEFERRED_OPS_DRAW(dlfb_ops), 1135 1214 .fb_mmap = dlfb_ops_mmap, 1136 1215 .fb_ioctl = dlfb_ops_ioctl, 1137 1216 .fb_open = dlfb_ops_open,
+1
include/drm/bridge/samsung-dsim.h
··· 53 53 unsigned int plltmr_reg; 54 54 unsigned int has_freqband:1; 55 55 unsigned int has_clklane_stop:1; 56 + unsigned int has_broken_fifoctrl_emptyhdr:1; 56 57 unsigned int num_clks; 57 58 unsigned int min_freq; 58 59 unsigned int max_freq;
+18 -5
include/drm/display/drm_dp_mst_helper.h
··· 46 46 }; 47 47 #endif /* IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS) */ 48 48 49 + enum drm_dp_mst_payload_allocation { 50 + DRM_DP_MST_PAYLOAD_ALLOCATION_NONE, 51 + DRM_DP_MST_PAYLOAD_ALLOCATION_LOCAL, 52 + DRM_DP_MST_PAYLOAD_ALLOCATION_DFP, 53 + DRM_DP_MST_PAYLOAD_ALLOCATION_REMOTE, 54 + }; 55 + 49 56 struct drm_dp_mst_branch; 50 57 51 58 /** ··· 544 537 * drm_dp_mst_atomic_wait_for_dependencies() has been called, which will ensure the 545 538 * previous MST states payload start slots have been copied over to the new state. Note 546 539 * that a new start slot won't be assigned/removed from this payload until 547 - * drm_dp_add_payload_part1()/drm_dp_remove_payload() have been called. 540 + * drm_dp_add_payload_part1()/drm_dp_remove_payload_part2() have been called. 548 541 * * Acquire the MST modesetting lock, and then wait for any pending MST-related commits to 549 542 * get committed to hardware by calling drm_crtc_commit_wait() on each of the 550 543 * &drm_crtc_commit structs in &drm_dp_mst_topology_state.commit_deps. 
··· 570 563 bool delete : 1; 571 564 /** @dsc_enabled: Whether or not this payload has DSC enabled */ 572 565 bool dsc_enabled : 1; 566 + 567 + /** @payload_allocation_status: The allocation status of this payload */ 568 + enum drm_dp_mst_payload_allocation payload_allocation_status; 573 569 574 570 /** @next: The list node for this payload */ 575 571 struct list_head next; ··· 852 842 int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr, 853 843 struct drm_atomic_state *state, 854 844 struct drm_dp_mst_atomic_payload *payload); 855 - void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr, 856 - struct drm_dp_mst_topology_state *mst_state, 857 - const struct drm_dp_mst_atomic_payload *old_payload, 858 - struct drm_dp_mst_atomic_payload *new_payload); 845 + void drm_dp_remove_payload_part1(struct drm_dp_mst_topology_mgr *mgr, 846 + struct drm_dp_mst_topology_state *mst_state, 847 + struct drm_dp_mst_atomic_payload *payload); 848 + void drm_dp_remove_payload_part2(struct drm_dp_mst_topology_mgr *mgr, 849 + struct drm_dp_mst_topology_state *mst_state, 850 + const struct drm_dp_mst_atomic_payload *old_payload, 851 + struct drm_dp_mst_atomic_payload *new_payload); 859 852 860 853 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr); 861 854
+7 -2
include/drm/drm_accel.h
··· 58 58 void accel_minor_replace(struct drm_minor *minor, int index); 59 59 void accel_set_device_instance_params(struct device *kdev, int index); 60 60 int accel_open(struct inode *inode, struct file *filp); 61 - void accel_debugfs_init(struct drm_minor *minor, int minor_id); 61 + void accel_debugfs_init(struct drm_device *dev); 62 + void accel_debugfs_register(struct drm_device *dev); 62 63 63 64 #else 64 65 ··· 90 89 { 91 90 } 92 91 93 - static inline void accel_debugfs_init(struct drm_minor *minor, int minor_id) 92 + static inline void accel_debugfs_init(struct drm_device *dev) 93 + { 94 + } 95 + 96 + static inline void accel_debugfs_register(struct drm_device *dev) 94 97 { 95 98 } 96 99
+1 -1
include/drm/drm_atomic.h
··· 1126 1126 struct drm_bus_cfg input_bus_cfg; 1127 1127 1128 1128 /** 1129 - * @output_bus_cfg: input bus configuration 1129 + * @output_bus_cfg: output bus configuration 1130 1130 */ 1131 1131 struct drm_bus_cfg output_bus_cfg; 1132 1132 };
+3 -3
include/drm/drm_bridge.h
··· 32 32 #include <drm/drm_mode_object.h> 33 33 #include <drm/drm_modes.h> 34 34 35 + struct device_node; 36 + 35 37 struct drm_bridge; 36 38 struct drm_bridge_timings; 37 39 struct drm_connector; ··· 718 716 struct drm_encoder *encoder; 719 717 /** @chain_node: used to form a bridge chain */ 720 718 struct list_head chain_node; 721 - #ifdef CONFIG_OF 722 719 /** @of_node: device node pointer to the bridge */ 723 720 struct device_node *of_node; 724 - #endif 725 721 /** @list: to keep track of all added bridges */ 726 722 struct list_head list; 727 723 /** ··· 950 950 } 951 951 #endif 952 952 953 - void drm_bridge_debugfs_init(struct drm_minor *minor); 953 + void drm_bridge_debugfs_init(struct drm_device *dev); 954 954 955 955 #endif
+1 -1
include/drm/drm_client.h
··· 195 195 drm_for_each_connector_iter(connector, iter) \ 196 196 if (connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK) 197 197 198 - void drm_client_debugfs_init(struct drm_minor *minor); 198 + void drm_client_debugfs_init(struct drm_device *dev); 199 199 200 200 #endif
+2 -1
include/drm/drm_connector.h
··· 498 498 * ITU-R BT.601 colorimetry format 499 499 * The DP spec does not say whether this is the 525 or the 625 500 500 * line version. 501 + * @DRM_MODE_COLORIMETRY_COUNT: 502 + * Not a valid value; merely used for counting 501 503 enum drm_colorspace { 502 504 /* For Default case, driver will set the colorspace */ ··· 524 522 DRM_MODE_COLORIMETRY_RGB_WIDE_FIXED = 13, 525 523 DRM_MODE_COLORIMETRY_RGB_WIDE_FLOAT = 14, 526 524 DRM_MODE_COLORIMETRY_BT601_YCC = 15, 527 - /* not a valid value; merely used for counting */ 528 525 DRM_MODE_COLORIMETRY_COUNT 529 526 };
+2 -2
include/drm/drm_debugfs.h
··· 142 142 void drm_debugfs_create_files(const struct drm_info_list *files, 143 143 int count, struct dentry *root, 144 144 struct drm_minor *minor); 145 - int drm_debugfs_remove_files(const struct drm_info_list *files, 146 - int count, struct drm_minor *minor); 145 + int drm_debugfs_remove_files(const struct drm_info_list *files, int count, 146 + struct dentry *root, struct drm_minor *minor); 147 147 148 148 void drm_debugfs_add_file(struct drm_device *dev, const char *name, 149 149 int (*show)(struct seq_file*, void*), void *data);
+3 -11
include/drm/drm_device.h
··· 312 312 struct drm_fb_helper *fb_helper; 313 313 314 314 /** 315 - * @debugfs_mutex: 315 + * @debugfs_root: 316 316 * 317 - * Protects &debugfs_list access. 317 + * Root directory for debugfs files. 318 318 */ 319 - struct mutex debugfs_mutex; 320 - 321 - /** 322 - * @debugfs_list: 323 - * 324 - * List of debugfs files to be created by the DRM device. The files 325 - * must be added during drm_dev_register(). 326 - */ 327 - struct list_head debugfs_list; 319 + struct dentry *debugfs_root; 328 320 329 321 /* Everything below here is for legacy driver, never use! */ 330 322 /* private: */
+8
include/drm/drm_drv.h
··· 581 581 return video_firmware_drivers_only(); 582 582 } 583 583 584 + #if defined(CONFIG_DEBUG_FS) 585 + void drm_debugfs_dev_init(struct drm_device *dev, struct dentry *root); 586 + #else 587 + static inline void drm_debugfs_dev_init(struct drm_device *dev, struct dentry *root) 588 + { 589 + } 590 + #endif 591 + 584 592 #endif
+1 -3
include/drm/drm_file.h
··· 79 79 struct device *kdev; /* Linux device */ 80 80 struct drm_device *dev; 81 81 82 + struct dentry *debugfs_symlink; 82 83 struct dentry *debugfs_root; 83 - 84 - struct list_head debugfs_list; 85 - struct mutex debugfs_lock; /* Protects debugfs_list. */ 86 84 }; 87 85 88 86 /**
+16 -2
include/uapi/drm/ivpu_accel.h
··· 69 69 #define DRM_IVPU_CONTEXT_PRIORITY_FOCUS 2 70 70 #define DRM_IVPU_CONTEXT_PRIORITY_REALTIME 3 71 71 72 - #define DRM_IVPU_CAP_METRIC_STREAMER 1 73 - #define DRM_IVPU_CAP_DMA_MEMORY_RANGE 2 72 + /** 73 + * DRM_IVPU_CAP_METRIC_STREAMER 74 + * 75 + * Metric streamer support. Provides sampling of various hardware performance 76 + * metrics like DMA bandwidth and cache miss/hits. Can be used for profiling. 77 + */ 78 + #define DRM_IVPU_CAP_METRIC_STREAMER 1 79 + /** 80 + * DRM_IVPU_CAP_DMA_MEMORY_RANGE 81 + * 82 + * Driver has capability to allocate separate memory range 83 + * accessible by hardware DMA. 84 + */ 85 + #define DRM_IVPU_CAP_DMA_MEMORY_RANGE 2 74 86 75 87 /** 76 88 * struct drm_ivpu_param - Get/Set VPU parameters ··· 135 123 * %DRM_IVPU_PARAM_SKU: 136 124 * VPU SKU ID (read-only) 137 125 * 126 + * %DRM_IVPU_PARAM_CAPABILITIES: 127 + * Supported capabilities (read-only) 138 128 */ 139 129 __u32 param; 140 130