Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-misc into drm-next

First slice of drm-misc-next for 4.12:

Core/subsystem-wide:
- link status core patch from Manasi, for signalling link training
  failures to userspace. I also had the i915 patch in here, but that
  had a small buglet in our CI, so it was reverted.
- more debugfs_remove removal from Noralf, almost there now (Noralf
said he'll try to follow up with the stragglers).
- drm todo moved into kerneldoc, for better visibility (see
Documentation/gpu/todo.rst), lots of starter tasks in there.
- devm_ of helpers + use it in sti (from Ben Gaignard, acked by Rob
Herring)
- extended framebuffer fbdev support (for fbdev flipping), and vblank
wait ioctl fbdev support (Maxime Ripard)
- misc small things all over, as usual
- add vblank callbacks to drm_crtc_funcs, plus make lots of good use
of this to simplify drivers (Shawn Guo)
- new atomic iterator macros to unconfuse old vs. new state

Small drivers:
- vc4 improvements from Eric
- vc4 kerneldocs (Eric)!
- tons of improvements for dw-mipi-dsi in rockchip from John Keeping
and Chris Zhong.
- MAINTAINERS entries for drivers managed in drm-misc. It's not yet
  official, still an experiment, but definitely not a complete fail
  and better to avoid confusion. We kinda screwed that up with
  drm-misc a bit when we started committers last year.
- qxl atomic conversion (Gabriel Krisman)
- bunch of virtual driver polish (qxl, virgl, ...)
- misc tiny patches all over

This is the first time we've done the same merge-window blackout for
drm-misc as we've done for drm-intel for ages, which is why we have a
_lot_ of stuff queued already. But it's still only half of drm-intel
(room to grow!), and the drivers-in-drm-misc experiment seems to work,
at least insofar as you also get lots of driver updates here already.

* tag 'drm-misc-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-misc: (141 commits)
drm/vc4: Fix OOPSes from trying to cache a partially constructed BO.
drm/vc4: Fulfill user BO creation requests from the kernel BO cache.
Revert "drm/i915: Implement Link Rate fallback on Link training failure"
drm/fb-helper: implement ioctl FBIO_WAITFORVSYNC
drm: Update drm_fbdev_cma_init documentation
drm/rockchip/dsi: add dw-mipi power domain support
drm/rockchip/dsi: fix insufficient bandwidth of some panel
dt-bindings: add power domain node for dw-mipi-rockchip
drm/rockchip/dsi: remove mode_valid function
drm/rockchip/dsi: dw-mipi: correct the coding style
drm/rockchip/dsi: dw-mipi: support RK3399 mipi dsi
dt-bindings: add rk3399 support for dw-mipi-rockchip
drm/rockchip: dw-mipi-dsi: add reset control
drm/rockchip: dw-mipi-dsi: support non-burst modes
drm/rockchip: dw-mipi-dsi: defer probe if panel is not loaded
drm/rockchip: vop: test for P{H,V}SYNC
drm/rockchip: dw-mipi-dsi: use positive check for N{H, V}SYNC
drm/rockchip: dw-mipi-dsi: use specific poll helper
drm/rockchip: dw-mipi-dsi: improve PLL configuration
drm/rockchip: dw-mipi-dsi: properly configure PHY timing
...

+4021 -4145
+6 -1
Documentation/devicetree/bindings/display/rockchip/dw_mipi_dsi_rockchip.txt
··· 5 5 - #address-cells: Should be <1>. 6 6 - #size-cells: Should be <0>. 7 7 - compatible: "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi". 8 + "rockchip,rk3399-mipi-dsi", "snps,dw-mipi-dsi". 8 9 - reg: Represent the physical address range of the controller. 9 10 - interrupts: Represent the controller's interrupt to the CPU(s). 10 11 - clocks, clock-names: Phandles to the controller's pll reference 11 - clock(ref) and APB clock(pclk), as described in [1]. 12 + clock(ref) and APB clock(pclk). For RK3399, a phy config clock 13 + (phy_cfg) is additional required. As described in [1]. 12 14 - rockchip,grf: this soc should set GRF regs to mux vopl/vopb. 13 15 - ports: contain a port node with endpoint definitions as defined in [2]. 14 16 For vopb,set the reg = <0> and set the reg = <1> for vopl. 17 + 18 + Optional properties: 19 + - power-domains: a phandle to mipi dsi power domain node. 15 20 16 21 [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 17 22 [2] Documentation/devicetree/bindings/media/video-interfaces.txt
+6 -8
Documentation/gpu/drm-mm.rst
··· 183 183 -------------------- 184 184 185 185 All GEM objects are reference-counted by the GEM core. References can be 186 - acquired and release by :c:func:`calling 187 - drm_gem_object_reference()` and 188 - :c:func:`drm_gem_object_unreference()` respectively. The caller 189 - must hold the :c:type:`struct drm_device <drm_device>` 190 - struct_mutex lock when calling 191 - :c:func:`drm_gem_object_reference()`. As a convenience, GEM 192 - provides :c:func:`drm_gem_object_unreference_unlocked()` 193 - functions that can be called without holding the lock. 186 + acquired and release by :c:func:`calling drm_gem_object_get()` and 187 + :c:func:`drm_gem_object_put()` respectively. The caller must hold the 188 + :c:type:`struct drm_device <drm_device>` struct_mutex lock when calling 189 + :c:func:`drm_gem_object_get()`. As a convenience, GEM provides 190 + :c:func:`drm_gem_object_put_unlocked()` functions that can be called without 191 + holding the lock. 194 192 195 193 When the last reference to a GEM object is released the GEM core calls 196 194 the :c:type:`struct drm_driver <drm_driver>` gem_free_object
+2
Documentation/gpu/index.rst
··· 12 12 drm-uapi 13 13 i915 14 14 tinydrm 15 + vc4 15 16 vga-switcheroo 16 17 vgaarbiter 18 + todo 17 19 18 20 .. only:: subproject and html 19 21
+10
Documentation/gpu/introduction.rst
··· 50 50 and "FIXME" where the interface could be cleaned up. 51 51 52 52 Also read the :ref:`guidelines for the kernel documentation at large <doc_guide>`. 53 + 54 + Getting Started 55 + =============== 56 + 57 + Developers interested in helping out with the DRM subsystem are very welcome. 58 + Often people will resort to sending in patches for various issues reported by 59 + checkpatch or sparse. We welcome such contributions. 60 + 61 + Anyone looking to kick it up a notch can find a list of janitorial tasks on 62 + the :ref:`TODO list <todo>`.
+321
Documentation/gpu/todo.rst
··· 1 + .. _todo: 2 + 3 + ========= 4 + TODO list 5 + ========= 6 + 7 + This section contains a list of smaller janitorial tasks in the kernel DRM 8 + graphics subsystem useful as newbie projects. Or for slow rainy days. 9 + 10 + Subsystem-wide refactorings 11 + =========================== 12 + 13 + De-midlayer drivers 14 + ------------------- 15 + 16 + With the recent ``drm_bus`` cleanup patches for 3.17 it is no longer required 17 + to have a ``drm_bus`` structure set up. Drivers can directly set up the 18 + ``drm_device`` structure instead of relying on bus methods in ``drm_usb.c`` 19 + and ``drm_platform.c``. The goal is to get rid of the driver's ``->load`` / 20 + ``->unload`` callbacks and open-code the load/unload sequence properly, using 21 + the new two-stage ``drm_device`` setup/teardown. 22 + 23 + Once all existing drivers are converted we can also remove those bus support 24 + files for USB and platform devices. 25 + 26 + All you need is a GPU for a non-converted driver (currently almost all of 27 + them, but also all the virtual ones used by KVM, so everyone qualifies). 28 + 29 + Contact: Daniel Vetter, Thierry Reding, respective driver maintainers 30 + 31 + Switch from reference/unreference to get/put 32 + -------------------------------------------- 33 + 34 + For some reason DRM core uses ``reference``/``unreference`` suffixes for 35 + refcounting functions, but kernel uses ``get``/``put`` (e.g. 36 + ``kref_get``/``put()``). It would be good to switch over for consistency, and 37 + it's shorter. Needs to be done in 3 steps for each pair of functions: 38 + 39 + * Create new ``get``/``put`` functions, define the old names as compatibility 40 + wrappers 41 + * Switch over each file/driver using a cocci-generated spatch. 42 + * Once all users of the old names are gone, remove them. 43 + 44 + This way drivers/patches in the progress of getting merged won't break. 
45 + 46 + Contact: Daniel Vetter 47 + 48 + Convert existing KMS drivers to atomic modesetting 49 + -------------------------------------------------- 50 + 51 + 3.19 has the atomic modeset interfaces and helpers, so drivers can now be 52 + converted over. Modern compositors like Wayland or Surfaceflinger on Android 53 + really want an atomic modeset interface, so this is all about the bright 54 + future. 55 + 56 + There is a conversion guide for atomic and all you need is a GPU for a 57 + non-converted driver (again virtual HW drivers for KVM are still all 58 + suitable). 59 + 60 + As part of this drivers also need to convert to universal plane (which means 61 + exposing primary & cursor as proper plane objects). But that's much easier to 62 + do by directly using the new atomic helper driver callbacks. 63 + 64 + Contact: Daniel Vetter, respective driver maintainers 65 + 66 + Clean up the clipped coordination confusion around planes 67 + --------------------------------------------------------- 68 + 69 + We have a helper to get this right with drm_plane_helper_check_update(), but 70 + it's not consistently used. This should be fixed, preferrably in the atomic 71 + helpers (and drivers then moved over to clipped coordinates). Probably the 72 + helper should also be moved from drm_plane_helper.c to the atomic helpers, to 73 + avoid confusion - the other helpers in that file are all deprecated legacy 74 + helpers. 75 + 76 + Contact: Ville Syrjälä, Daniel Vetter, driver maintainers 77 + 78 + Implement deferred fbdev setup in the helper 79 + -------------------------------------------- 80 + 81 + Many (especially embedded drivers) want to delay fbdev setup until there's a 82 + real screen plugged in. This is to avoid the dreaded fallback to the low-res 83 + fbdev default. Many drivers have a hacked-up (and often broken) version of this, 84 + better to do it once in the shared helpers. 
Thierry has a patch series, but that 85 + one needs to be rebased and final polish applied. 86 + 87 + Contact: Thierry Reding, Daniel Vetter, driver maintainers 88 + 89 + Convert early atomic drivers to async commit helpers 90 + ---------------------------------------------------- 91 + 92 + For the first year the atomic modeset helpers didn't support asynchronous / 93 + nonblocking commits, and every driver had to hand-roll them. This is fixed 94 + now, but there's still a pile of existing drivers that easily could be 95 + converted over to the new infrastructure. 96 + 97 + One issue with the helpers is that they require that drivers handle completion 98 + events for atomic commits correctly. But fixing these bugs is good anyway. 99 + 100 + Contact: Daniel Vetter, respective driver maintainers 101 + 102 + Fallout from atomic KMS 103 + ----------------------- 104 + 105 + ``drm_atomic_helper.c`` provides a batch of functions which implement legacy 106 + IOCTLs on top of the new atomic driver interface. Which is really nice for 107 + gradual conversion of drivers, but unfortunately the semantic mismatches are 108 + a bit too severe. So there's some follow-up work to adjust the function 109 + interfaces to fix these issues: 110 + 111 + * atomic needs the lock acquire context. At the moment that's passed around 112 + implicitly with some horrible hacks, and it's also allocate with 113 + ``GFP_NOFAIL`` behind the scenes. All legacy paths need to start allocating 114 + the acquire context explicitly on stack and then also pass it down into 115 + drivers explicitly so that the legacy-on-atomic functions can use them. 116 + 117 + * A bunch of the vtable hooks are now in the wrong place: DRM has a split 118 + between core vfunc tables (named ``drm_foo_funcs``), which are used to 119 + implement the userspace ABI. And then there's the optional hooks for the 120 + helper libraries (name ``drm_foo_helper_funcs``), which are purely for 121 + internal use. 
Some of these hooks should be move from ``_funcs`` to 122 + ``_helper_funcs`` since they are not part of the core ABI. There's a 123 + ``FIXME`` comment in the kerneldoc for each such case in ``drm_crtc.h``. 124 + 125 + * There's a new helper ``drm_atomic_helper_best_encoder()`` which could be 126 + used by all atomic drivers which don't select the encoder for a given 127 + connector at runtime. That's almost all of them, and would allow us to get 128 + rid of a lot of ``best_encoder`` boilerplate in drivers. 129 + 130 + Contact: Daniel Vetter 131 + 132 + Get rid of dev->struct_mutex from GEM drivers 133 + --------------------------------------------- 134 + 135 + ``dev->struct_mutex`` is the Big DRM Lock from legacy days and infested 136 + everything. Nowadays in modern drivers the only bit where it's mandatory is 137 + serializing GEM buffer object destruction. Which unfortunately means drivers 138 + have to keep track of that lock and either call ``unreference`` or 139 + ``unreference_locked`` depending upon context. 140 + 141 + Core GEM doesn't have a need for ``struct_mutex`` any more since kernel 4.8, 142 + and there's a ``gem_free_object_unlocked`` callback for any drivers which are 143 + entirely ``struct_mutex`` free. 144 + 145 + For drivers that need ``struct_mutex`` it should be replaced with a driver- 146 + private lock. The tricky part is the BO free functions, since those can't 147 + reliably take that lock any more. Instead state needs to be protected with 148 + suitable subordinate locks or some cleanup work pushed to a worker thread. For 149 + performance-critical drivers it might also be better to go with a more 150 + fine-grained per-buffer object and per-context lockings scheme. Currently the 151 + following drivers still use ``struct_mutex``: ``msm``, ``omapdrm`` and 152 + ``udl``. 
153 + 154 + Contact: Daniel Vetter 155 + 156 + Core refactorings 157 + ================= 158 + 159 + Use new IDR deletion interface to clean up drm_gem_handle_delete() 160 + ------------------------------------------------------------------ 161 + 162 + See the "This is gross" comment -- apparently the IDR system now can return an 163 + error code instead of oopsing. 164 + 165 + Clean up the DRM header mess 166 + ---------------------------- 167 + 168 + Currently the DRM subsystem has only one global header, ``drmP.h``. This is 169 + used both for functions exported to helper libraries and drivers and functions 170 + only used internally in the ``drm.ko`` module. The goal would be to move all 171 + header declarations not needed outside of ``drm.ko`` into 172 + ``drivers/gpu/drm/drm_*_internal.h`` header files. ``EXPORT_SYMBOL`` also 173 + needs to be dropped for these functions. 174 + 175 + This would nicely tie in with the below task to create kerneldoc after the API 176 + is cleaned up. Or with the "hide legacy cruft better" task. 177 + 178 + Note that this is well in progress, but ``drmP.h`` is still huge. The updated 179 + plan is to switch to per-file driver API headers, which will also structure 180 + the kerneldoc better. This should also allow more fine-grained ``#include`` 181 + directives. 182 + 183 + Contact: Daniel Vetter 184 + 185 + Add missing kerneldoc for exported functions 186 + -------------------------------------------- 187 + 188 + The DRM reference documentation is still lacking kerneldoc in a few areas. The 189 + task would be to clean up interfaces like moving functions around between 190 + files to better group them and improving the interfaces like dropping return 191 + values for functions that never fail. Then write kerneldoc for all exported 192 + functions and an overview section and integrate it all into the drm DocBook. 193 + 194 + See https://dri.freedesktop.org/docs/drm/ for what's there already. 
195 + 196 + Contact: Daniel Vetter 197 + 198 + Hide legacy cruft better 199 + ------------------------ 200 + 201 + Way back DRM supported only drivers which shadow-attached to PCI devices with 202 + userspace or fbdev drivers setting up outputs. Modern DRM drivers take charge 203 + of the entire device, you can spot them with the DRIVER_MODESET flag. 204 + 205 + Unfortunately there's still large piles of legacy code around which needs to 206 + be hidden so that driver writers don't accidentally end up using it. And to 207 + prevent security issues in those legacy IOCTLs from being exploited on modern 208 + drivers. This has multiple possible subtasks: 209 + 210 + * Make sure legacy IOCTLs can't be used on modern drivers. 211 + * Extract support code for legacy features into a ``drm-legacy.ko`` kernel 212 + module and compile it only when one of the legacy drivers is enabled. 213 + * Extract legacy functions into their own headers and remove it that from the 214 + monolithic ``drmP.h`` header. 215 + * Remove any lingering cruft from the OS abstraction layer from modern 216 + drivers. 217 + 218 + This is mostly done, the only thing left is to split up ``drm_irq.c`` into 219 + legacy cruft and the parts needed by modern KMS drivers. 220 + 221 + Contact: Daniel Vetter 222 + 223 + Make panic handling work 224 + ------------------------ 225 + 226 + This is a really varied tasks with lots of little bits and pieces: 227 + 228 + * The panic path can't be tested currently, leading to constant breaking. The 229 + main issue here is that panics can be triggered from hardirq contexts and 230 + hence all panic related callback can run in hardirq context. It would be 231 + awesome if we could test at least the fbdev helper code and driver code by 232 + e.g. trigger calls through drm debugfs files. hardirq context could be 233 + achieved by using an IPI to the local processor. 234 + 235 + * There's a massive confusion of different panic handlers. 
DRM fbdev emulation 236 + helpers have one, but on top of that the fbcon code itself also has one. We 237 + need to make sure that they stop fighting over each another. 238 + 239 + * ``drm_can_sleep()`` is a mess. It hides real bugs in normal operations and 240 + isn't a full solution for panic paths. We need to make sure that it only 241 + returns true if there's a panic going on for real, and fix up all the 242 + fallout. 243 + 244 + * The panic handler must never sleep, which also means it can't ever 245 + ``mutex_lock()``. Also it can't grab any other lock unconditionally, not 246 + even spinlocks (because NMI and hardirq can panic too). We need to either 247 + make sure to not call such paths, or trylock everything. Really tricky. 248 + 249 + * For the above locking troubles reasons it's pretty much impossible to 250 + attempt a synchronous modeset from panic handlers. The only thing we could 251 + try to achive is an atomic ``set_base`` of the primary plane, and hope that 252 + it shows up. Everything else probably needs to be delayed to some worker or 253 + something else which happens later on. Otherwise it just kills the box 254 + harder, prevent the panic from going out on e.g. netconsole. 255 + 256 + * There's also proposal for a simplied DRM console instead of the full-blown 257 + fbcon and DRM fbdev emulation. Any kind of panic handling tricks should 258 + obviously work for both console, in case we ever get kmslog merged. 259 + 260 + Contact: Daniel Vetter 261 + 262 + Better Testing 263 + ============== 264 + 265 + Enable trinity for DRM 266 + ---------------------- 267 + 268 + And fix up the fallout. Should be really interesting ... 269 + 270 + Make KMS tests in i-g-t generic 271 + ------------------------------- 272 + 273 + The i915 driver team maintains an extensive testsuite for the i915 DRM driver, 274 + including tons of testcases for corner-cases in the modesetting API. 
It would 275 + be awesome if those tests (at least the ones not relying on Intel-specific GEM 276 + features) could be made to run on any KMS driver. 277 + 278 + Basic work to run i-g-t tests on non-i915 is done, what's now missing is mass- 279 + converting things over. For modeset tests we also first need a bit of 280 + infrastructure to use dumb buffers for untiled buffers, to be able to run all 281 + the non-i915 specific modeset tests. 282 + 283 + Contact: Daniel Vetter 284 + 285 + Create a virtual KMS driver for testing (vkms) 286 + ---------------------------------------------- 287 + 288 + With all the latest helpers it should be fairly simple to create a virtual KMS 289 + driver useful for testing, or for running X or similar on headless machines 290 + (to be able to still use the GPU). This would be similar to vgem, but aimed at 291 + the modeset side. 292 + 293 + Once the basics are there there's tons of possibilities to extend it. 294 + 295 + Contact: Daniel Vetter 296 + 297 + Driver Specific 298 + =============== 299 + 300 + Outside DRM 301 + =========== 302 + 303 + Better kerneldoc 304 + ---------------- 305 + 306 + This is pretty much done, but there's some advanced topics: 307 + 308 + Come up with a way to hyperlink to struct members. Currently you can hyperlink 309 + to the struct using ``#struct_name``, but not to a member within. Would need 310 + buy-in from kerneldoc maintainers, and the big question is how to make it work 311 + without totally unsightly 312 + ``drm_foo_bar_really_long_structure->even_longer_memeber`` all over the text 313 + which breaks text flow. 314 + 315 + Figure out how to integrate the asciidoc support for ascii-diagrams. We have a 316 + few of those (e.g. to describe mode timings), and asciidoc supports converting 317 + some ascii-art dialect into pngs. Would be really pretty to make that work. 318 + 319 + Contact: Daniel Vetter, Jani Nikula 320 + 321 + Jani is working on this already, hopefully lands in 4.8.
+89
Documentation/gpu/vc4.rst
··· 1 + ===================================== 2 + drm/vc4 Broadcom VC4 Graphics Driver 3 + ===================================== 4 + 5 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_drv.c 6 + :doc: Broadcom VC4 Graphics Driver 7 + 8 + Display Hardware Handling 9 + ========================= 10 + 11 + This section covers everything related to the display hardware including 12 + the mode setting infrastructure, plane, sprite and cursor handling and 13 + display, output probing and related topics. 14 + 15 + Pixel Valve (DRM CRTC) 16 + ---------------------- 17 + 18 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_crtc.c 19 + :doc: VC4 CRTC module 20 + 21 + HVS 22 + --- 23 + 24 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_hvs.c 25 + :doc: VC4 HVS module. 26 + 27 + HVS planes 28 + ---------- 29 + 30 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_plane.c 31 + :doc: VC4 plane module 32 + 33 + HDMI encoder 34 + ------------ 35 + 36 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_hdmi.c 37 + :doc: VC4 Falcon HDMI module 38 + 39 + DSI encoder 40 + ----------- 41 + 42 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_dsi.c 43 + :doc: VC4 DSI0/DSI1 module 44 + 45 + DPI encoder 46 + ----------- 47 + 48 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_dpi.c 49 + :doc: VC4 DPI module 50 + 51 + VEC (Composite TV out) encoder 52 + ------------------------------ 53 + 54 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_vec.c 55 + :doc: VC4 SDTV module 56 + 57 + Memory Management and 3D Command Submission 58 + =========================================== 59 + 60 + This section covers the GEM implementation in the vc4 driver. 61 + 62 + GPU buffer object (BO) management 63 + --------------------------------- 64 + 65 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_bo.c 66 + :doc: VC4 GEM BO management support 67 + 68 + V3D binner command list (BCL) validation 69 + ---------------------------------------- 70 + 71 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_validate.c 72 + :doc: Command list validator for VC4. 
73 + 74 + V3D render command list (RCL) generation 75 + ---------------------------------------- 76 + 77 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_render_cl.c 78 + :doc: Render command list generation 79 + 80 + Shader validator for VC4 81 + --------------------------- 82 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_validate_shaders.c 83 + :doc: Shader validator for VC4. 84 + 85 + V3D Interrupts 86 + -------------- 87 + 88 + .. kernel-doc:: drivers/gpu/drm/vc4/vc4_irq.c 89 + :doc: Interrupt management for the V3D engine
+10 -4
MAINTAINERS
··· 4174 4174 DRM DRIVER FOR BOCHS VIRTUAL GPU 4175 4175 M: Gerd Hoffmann <kraxel@redhat.com> 4176 4176 L: virtualization@lists.linux-foundation.org 4177 - T: git git://git.kraxel.org/linux drm-qemu 4177 + T: git git://anongit.freedesktop.org/drm/drm-misc 4178 4178 S: Maintained 4179 4179 F: drivers/gpu/drm/bochs/ 4180 4180 ··· 4182 4182 M: Dave Airlie <airlied@redhat.com> 4183 4183 M: Gerd Hoffmann <kraxel@redhat.com> 4184 4184 L: virtualization@lists.linux-foundation.org 4185 - T: git git://git.kraxel.org/linux drm-qemu 4185 + T: git git://anongit.freedesktop.org/drm/drm-misc 4186 4186 S: Obsolete 4187 4187 W: https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 4188 4188 F: drivers/gpu/drm/cirrus/ ··· 4239 4239 S: Supported 4240 4240 F: drivers/gpu/drm/atmel-hlcdc/ 4241 4241 F: Documentation/devicetree/bindings/drm/atmel/ 4242 + T: git git://anongit.freedesktop.org/drm/drm-misc 4242 4243 4243 4244 DRM DRIVERS FOR ALLWINNER A10 4244 4245 M: Maxime Ripard <maxime.ripard@free-electrons.com> ··· 4256 4255 S: Supported 4257 4256 F: drivers/gpu/drm/meson/ 4258 4257 F: Documentation/devicetree/bindings/display/amlogic,meson-vpu.txt 4258 + T: git git://anongit.freedesktop.org/drm/drm-meson 4259 + T: git git://anongit.freedesktop.org/drm/drm-misc 4259 4260 4260 4261 DRM DRIVERS FOR EXYNOS 4261 4262 M: Inki Dae <inki.dae@samsung.com> ··· 4388 4385 M: Dave Airlie <airlied@redhat.com> 4389 4386 M: Gerd Hoffmann <kraxel@redhat.com> 4390 4387 L: virtualization@lists.linux-foundation.org 4391 - T: git git://git.kraxel.org/linux drm-qemu 4388 + T: git git://anongit.freedesktop.org/drm/drm-misc 4392 4389 S: Maintained 4393 4390 F: drivers/gpu/drm/qxl/ 4394 4391 F: include/uapi/drm/qxl_drm.h ··· 4399 4396 S: Maintained 4400 4397 F: drivers/gpu/drm/rockchip/ 4401 4398 F: Documentation/devicetree/bindings/display/rockchip/ 4399 + T: git git://anongit.freedesktop.org/drm/drm-misc 4402 4400 4403 4401 DRM DRIVER FOR SAVAGE VIDEO CARDS 4404 4402 S: Orphan / 
Obsolete ··· 4458 4454 F: drivers/gpu/drm/vc4/ 4459 4455 F: include/uapi/drm/vc4_drm.h 4460 4456 F: Documentation/devicetree/bindings/display/brcm,bcm-vc4.txt 4457 + T: git git://anongit.freedesktop.org/drm/drm-misc 4461 4458 4462 4459 DRM DRIVERS FOR TI OMAP 4463 4460 M: Tomi Valkeinen <tomi.valkeinen@ti.com> ··· 4481 4476 S: Maintained 4482 4477 F: drivers/gpu/drm/zte/ 4483 4478 F: Documentation/devicetree/bindings/display/zte,vou.txt 4479 + T: git git://anongit.freedesktop.org/drm/drm-misc 4484 4480 4485 4481 DSBR100 USB FM RADIO DRIVER 4486 4482 M: Alexey Klimov <klimov.linux@gmail.com> ··· 13320 13314 M: Gerd Hoffmann <kraxel@redhat.com> 13321 13315 L: dri-devel@lists.freedesktop.org 13322 13316 L: virtualization@lists.linux-foundation.org 13323 - T: git git://git.kraxel.org/linux drm-qemu 13317 + T: git git://anongit.freedesktop.org/drm/drm-misc 13324 13318 S: Maintained 13325 13319 F: drivers/gpu/drm/virtio/ 13326 13320 F: include/uapi/linux/virtio_gpu.h
+2
drivers/dma-buf/dma-fence.c
··· 240 240 * after it signals with dma_fence_signal. The callback itself can be called 241 241 * from irq context. 242 242 * 243 + * Returns 0 in case of success, -ENOENT if the fence is already signaled 244 + * and -EINVAL in case of error. 243 245 */ 244 246 int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, 245 247 dma_fence_func_t func)
+9
drivers/gpu/drm/Kconfig
··· 99 99 100 100 If in doubt, say "Y". 101 101 102 + config DRM_FBDEV_OVERALLOC 103 + int "Overallocation of the fbdev buffer" 104 + depends on DRM_FBDEV_EMULATION 105 + default 100 106 + help 107 + Defines the fbdev buffer overallocation in percent. Default 108 + is 100. Typical values for double buffering will be 200, 109 + triple buffering 300. 110 + 102 111 config DRM_LOAD_EDID_FIRMWARE 103 112 bool "Allow to specify an EDID data set instead of probing for it" 104 113 depends on DRM_KMS_HELPER
+4 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
··· 224 224 info = drm_fb_helper_alloc_fbi(helper); 225 225 if (IS_ERR(info)) { 226 226 ret = PTR_ERR(info); 227 - goto out_unref; 227 + goto out; 228 228 } 229 229 230 230 info->par = rfbdev; ··· 233 233 ret = amdgpu_framebuffer_init(adev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 234 234 if (ret) { 235 235 DRM_ERROR("failed to initialize framebuffer %d\n", ret); 236 - goto out_destroy_fbi; 236 + goto out; 237 237 } 238 238 239 239 fb = &rfbdev->rfb.base; ··· 266 266 267 267 if (info->screen_base == NULL) { 268 268 ret = -ENOSPC; 269 - goto out_destroy_fbi; 269 + goto out; 270 270 } 271 271 272 272 DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start); ··· 278 278 vga_switcheroo_client_fb_set(adev->ddev->pdev, info); 279 279 return 0; 280 280 281 - out_destroy_fbi: 282 - drm_fb_helper_release_fbi(helper); 283 - out_unref: 281 + out: 284 282 if (abo) { 285 283 286 284 } ··· 302 304 struct amdgpu_framebuffer *rfb = &rfbdev->rfb; 303 305 304 306 drm_fb_helper_unregister_fbi(&rfbdev->helper); 305 - drm_fb_helper_release_fbi(&rfbdev->helper); 306 307 307 308 if (rfb->obj) { 308 309 amdgpufb_destroy_pinned_object(rfb->obj);
-1
drivers/gpu/drm/arc/arcpgu_drv.c
··· 175 175 .dumb_create = drm_gem_cma_dumb_create, 176 176 .dumb_map_offset = drm_gem_cma_dumb_map_offset, 177 177 .dumb_destroy = drm_gem_dumb_destroy, 178 - .get_vblank_counter = drm_vblank_no_hw_counter, 179 178 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 180 179 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 181 180 .gem_free_object_unlocked = drm_gem_cma_free_object,
+20
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 42 42 drm_crtc_cleanup(crtc); 43 43 } 44 44 45 + static int hdlcd_crtc_enable_vblank(struct drm_crtc *crtc) 46 + { 47 + struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); 48 + unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK); 49 + 50 + hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask | HDLCD_INTERRUPT_VSYNC); 51 + 52 + return 0; 53 + } 54 + 55 + static void hdlcd_crtc_disable_vblank(struct drm_crtc *crtc) 56 + { 57 + struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); 58 + unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK); 59 + 60 + hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask & ~HDLCD_INTERRUPT_VSYNC); 61 + } 62 + 45 63 static const struct drm_crtc_funcs hdlcd_crtc_funcs = { 46 64 .destroy = hdlcd_crtc_cleanup, 47 65 .set_config = drm_atomic_helper_set_config, ··· 67 49 .reset = drm_atomic_helper_crtc_reset, 68 50 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 69 51 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 52 + .enable_vblank = hdlcd_crtc_enable_vblank, 53 + .disable_vblank = hdlcd_crtc_disable_vblank, 70 54 }; 71 55 72 56 static struct simplefb_format supported_formats[] = SIMPLEFB_FORMATS;
-21
drivers/gpu/drm/arm/hdlcd_drv.c
··· 199 199 hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, irq_mask); 200 200 } 201 201 202 - static int hdlcd_enable_vblank(struct drm_device *drm, unsigned int crtc) 203 - { 204 - struct hdlcd_drm_private *hdlcd = drm->dev_private; 205 - unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK); 206 - 207 - hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask | HDLCD_INTERRUPT_VSYNC); 208 - 209 - return 0; 210 - } 211 - 212 - static void hdlcd_disable_vblank(struct drm_device *drm, unsigned int crtc) 213 - { 214 - struct hdlcd_drm_private *hdlcd = drm->dev_private; 215 - unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK); 216 - 217 - hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask & ~HDLCD_INTERRUPT_VSYNC); 218 - } 219 - 220 202 #ifdef CONFIG_DEBUG_FS 221 203 static int hdlcd_show_underrun_count(struct seq_file *m, void *arg) 222 204 { ··· 260 278 .irq_preinstall = hdlcd_irq_preinstall, 261 279 .irq_postinstall = hdlcd_irq_postinstall, 262 280 .irq_uninstall = hdlcd_irq_uninstall, 263 - .get_vblank_counter = drm_vblank_no_hw_counter, 264 - .enable_vblank = hdlcd_enable_vblank, 265 - .disable_vblank = hdlcd_disable_vblank, 266 281 .gem_free_object_unlocked = drm_gem_cma_free_object, 267 282 .gem_vm_ops = &drm_gem_cma_vm_ops, 268 283 .dumb_create = drm_gem_cma_dumb_create,
drivers/gpu/drm/arm/malidp_crtc.c | +21
···
         .atomic_check = malidp_crtc_atomic_check,
 };

+static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+        struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
+        struct malidp_hw_device *hwdev = malidp->dev;
+
+        malidp_hw_enable_irq(hwdev, MALIDP_DE_BLOCK,
+                             hwdev->map.de_irq_map.vsync_irq);
+        return 0;
+}
+
+static void malidp_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+        struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
+        struct malidp_hw_device *hwdev = malidp->dev;
+
+        malidp_hw_disable_irq(hwdev, MALIDP_DE_BLOCK,
+                              hwdev->map.de_irq_map.vsync_irq);
+}
+
 static const struct drm_crtc_funcs malidp_crtc_funcs = {
         .destroy = drm_crtc_cleanup,
         .set_config = drm_atomic_helper_set_config,
···
         .reset = drm_atomic_helper_crtc_reset,
         .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
         .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+        .enable_vblank = malidp_crtc_enable_vblank,
+        .disable_vblank = malidp_crtc_disable_vblank,
 };

 int malidp_crtc_init(struct drm_device *drm)
drivers/gpu/drm/arm/malidp_drv.c | +1 -23
···
         drm_atomic_helper_cleanup_planes(drm, state);
 }

-static struct drm_mode_config_helper_funcs malidp_mode_config_helpers = {
+static const struct drm_mode_config_helper_funcs malidp_mode_config_helpers = {
         .atomic_commit_tail = malidp_atomic_commit_tail,
 };

···
         .atomic_check = drm_atomic_helper_check,
         .atomic_commit = drm_atomic_helper_commit,
 };
-
-static int malidp_enable_vblank(struct drm_device *drm, unsigned int crtc)
-{
-        struct malidp_drm *malidp = drm->dev_private;
-        struct malidp_hw_device *hwdev = malidp->dev;
-
-        malidp_hw_enable_irq(hwdev, MALIDP_DE_BLOCK,
-                             hwdev->map.de_irq_map.vsync_irq);
-        return 0;
-}
-
-static void malidp_disable_vblank(struct drm_device *drm, unsigned int pipe)
-{
-        struct malidp_drm *malidp = drm->dev_private;
-        struct malidp_hw_device *hwdev = malidp->dev;
-
-        malidp_hw_disable_irq(hwdev, MALIDP_DE_BLOCK,
-                              hwdev->map.de_irq_map.vsync_irq);
-}

 static int malidp_init(struct drm_device *drm)
 {
···
         .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC |
                            DRIVER_PRIME,
         .lastclose = malidp_lastclose,
-        .get_vblank_counter = drm_vblank_no_hw_counter,
-        .enable_vblank = malidp_enable_vblank,
-        .disable_vblank = malidp_disable_vblank,
         .gem_free_object_unlocked = drm_gem_cma_free_object,
         .gem_vm_ops = &drm_gem_cma_vm_ops,
         .dumb_create = drm_gem_cma_dumb_create,
drivers/gpu/drm/armada/armada_crtc.c | +37 -19
···
         return true;
 }

+/* These are locked by dev->vbl_lock */
+static void armada_drm_crtc_disable_irq(struct armada_crtc *dcrtc, u32 mask)
+{
+        if (dcrtc->irq_ena & mask) {
+                dcrtc->irq_ena &= ~mask;
+                writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
+        }
+}
+
+static void armada_drm_crtc_enable_irq(struct armada_crtc *dcrtc, u32 mask)
+{
+        if ((dcrtc->irq_ena & mask) != mask) {
+                dcrtc->irq_ena |= mask;
+                writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
+                if (readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR) & mask)
+                        writel(0, dcrtc->base + LCD_SPU_IRQ_ISR);
+        }
+}
+
 static void armada_drm_crtc_irq(struct armada_crtc *dcrtc, u32 stat)
 {
         void __iomem *base = dcrtc->base;
···
                 return IRQ_HANDLED;
         }
         return IRQ_NONE;
-}
-
-/* These are locked by dev->vbl_lock */
-void armada_drm_crtc_disable_irq(struct armada_crtc *dcrtc, u32 mask)
-{
-        if (dcrtc->irq_ena & mask) {
-                dcrtc->irq_ena &= ~mask;
-                writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
-        }
-}
-
-void armada_drm_crtc_enable_irq(struct armada_crtc *dcrtc, u32 mask)
-{
-        if ((dcrtc->irq_ena & mask) != mask) {
-                dcrtc->irq_ena |= mask;
-                writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
-                if (readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR) & mask)
-                        writel(0, dcrtc->base + LCD_SPU_IRQ_ISR);
-        }
 }

 static uint32_t armada_drm_crtc_calculate_csc(struct armada_crtc *dcrtc)
···
         return 0;
 }

+/* These are called under the vbl_lock. */
+static int armada_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+        struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
+
+        armada_drm_crtc_enable_irq(dcrtc, VSYNC_IRQ_ENA);
+        return 0;
+}
+
+static void armada_drm_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+        struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
+
+        armada_drm_crtc_disable_irq(dcrtc, VSYNC_IRQ_ENA);
+}
+
 static const struct drm_crtc_funcs armada_crtc_funcs = {
         .cursor_set = armada_drm_crtc_cursor_set,
         .cursor_move = armada_drm_crtc_cursor_move,
···
         .set_config = drm_crtc_helper_set_config,
         .page_flip = armada_drm_crtc_page_flip,
         .set_property = armada_drm_crtc_set_property,
+        .enable_vblank = armada_drm_crtc_enable_vblank,
+        .disable_vblank = armada_drm_crtc_disable_vblank,
 };

 static const struct drm_plane_funcs armada_primary_plane_funcs = {
drivers/gpu/drm/armada/armada_crtc.h | -2
···

 void armada_drm_crtc_gamma_set(struct drm_crtc *, u16, u16, u16, int);
 void armada_drm_crtc_gamma_get(struct drm_crtc *, u16 *, u16 *, u16 *, int);
-void armada_drm_crtc_disable_irq(struct armada_crtc *, u32);
-void armada_drm_crtc_enable_irq(struct armada_crtc *, u32);
 void armada_drm_crtc_update_regs(struct armada_crtc *, struct armada_regs *);

 void armada_drm_crtc_plane_disable(struct armada_crtc *dcrtc,
drivers/gpu/drm/armada/armada_debugfs.c | +10 -55
···
 };
 #define ARMADA_DEBUGFS_ENTRIES ARRAY_SIZE(armada_debugfs_list)

-static int drm_add_fake_info_node(struct drm_minor *minor, struct dentry *ent,
-        const void *key)
-{
-        struct drm_info_node *node;
-
-        node = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL);
-        if (!node) {
-                debugfs_remove(ent);
-                return -ENOMEM;
-        }
-
-        node->minor = minor;
-        node->dent = ent;
-        node->info_ent = (void *) key;
-
-        mutex_lock(&minor->debugfs_lock);
-        list_add(&node->list, &minor->debugfs_list);
-        mutex_unlock(&minor->debugfs_lock);
-
-        return 0;
-}
-
-static int armada_debugfs_create(struct dentry *root, struct drm_minor *minor,
-        const char *name, umode_t mode, const struct file_operations *fops)
-{
-        struct dentry *de;
-
-        de = debugfs_create_file(name, mode, root, minor->dev, fops);
-
-        return drm_add_fake_info_node(minor, de, fops);
-}
-
 int armada_drm_debugfs_init(struct drm_minor *minor)
 {
+        struct dentry *de;
         int ret;

         ret = drm_debugfs_create_files(armada_debugfs_list,
···
         if (ret)
                 return ret;

-        ret = armada_debugfs_create(minor->debugfs_root, minor,
-                                   "reg", S_IFREG | S_IRUSR, &fops_reg_r);
-        if (ret)
-                goto err_1;
+        de = debugfs_create_file("reg", S_IFREG | S_IRUSR,
+                                 minor->debugfs_root, minor->dev, &fops_reg_r);
+        if (!de)
+                return -ENOMEM;

-        ret = armada_debugfs_create(minor->debugfs_root, minor,
-                                   "reg_wr", S_IFREG | S_IWUSR, &fops_reg_w);
-        if (ret)
-                goto err_2;
-        return ret;
+        de = debugfs_create_file("reg_wr", S_IFREG | S_IWUSR,
+                                 minor->debugfs_root, minor->dev, &fops_reg_w);
+        if (!de)
+                return -ENOMEM;

-err_2:
-        drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_r, 1, minor);
-err_1:
-        drm_debugfs_remove_files(armada_debugfs_list, ARMADA_DEBUGFS_ENTRIES,
-                                 minor);
-        return ret;
-}
-
-void armada_drm_debugfs_cleanup(struct drm_minor *minor)
-{
-        drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_w, 1, minor);
-        drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_r, 1, minor);
-        drm_debugfs_remove_files(armada_debugfs_list, ARMADA_DEBUGFS_ENTRIES,
-                                 minor);
+        return 0;
 }
drivers/gpu/drm/armada/armada_drm.h | -1
···
 int armada_overlay_plane_create(struct drm_device *, unsigned long);

 int armada_drm_debugfs_init(struct drm_minor *);
-void armada_drm_debugfs_cleanup(struct drm_minor *);

 #endif
drivers/gpu/drm/armada/armada_drv.c | -20
···
         spin_unlock_irqrestore(&dev->event_lock, flags);
 }

-/* These are called under the vbl_lock. */
-static int armada_drm_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-        struct armada_private *priv = dev->dev_private;
-        armada_drm_crtc_enable_irq(priv->dcrtc[pipe], VSYNC_IRQ_ENA);
-        return 0;
-}
-
-static void armada_drm_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-        struct armada_private *priv = dev->dev_private;
-        armada_drm_crtc_disable_irq(priv->dcrtc[pipe], VSYNC_IRQ_ENA);
-}
-
 static struct drm_ioctl_desc armada_ioctls[] = {
         DRM_IOCTL_DEF_DRV(ARMADA_GEM_CREATE, armada_gem_create_ioctl,0),
         DRM_IOCTL_DEF_DRV(ARMADA_GEM_MMAP, armada_gem_mmap_ioctl, 0),
···

 static struct drm_driver armada_drm_driver = {
         .lastclose = armada_drm_lastclose,
-        .get_vblank_counter = drm_vblank_no_hw_counter,
-        .enable_vblank = armada_drm_enable_vblank,
-        .disable_vblank = armada_drm_disable_vblank,
         .gem_free_object_unlocked = armada_gem_free_object,
         .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
         .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
···
         drm_kms_helper_poll_fini(&priv->drm);
         armada_fbdev_fini(&priv->drm);

-#ifdef CONFIG_DEBUG_FS
-        armada_drm_debugfs_cleanup(priv->drm.primary);
-#endif
         drm_dev_unregister(&priv->drm);

         component_unbind_all(dev, &priv->drm);
drivers/gpu/drm/armada/armada_fbdev.c | -2
···

         return 0;
 err_fb_setup:
-        drm_fb_helper_release_fbi(fbh);
         drm_fb_helper_fini(fbh);
 err_fb_helper:
         priv->fbdev = NULL;
···

         if (fbh) {
                 drm_fb_helper_unregister_fbi(fbh);
-                drm_fb_helper_release_fbi(fbh);

                 drm_fb_helper_fini(fbh);

drivers/gpu/drm/ast/ast_fb.c | +3 -6
···
         info = drm_fb_helper_alloc_fbi(helper);
         if (IS_ERR(info)) {
                 ret = PTR_ERR(info);
-                goto err_free_vram;
+                goto out;
         }
         info->par = afbdev;

         ret = ast_framebuffer_init(dev, &afbdev->afb, &mode_cmd, gobj);
         if (ret)
-                goto err_release_fbi;
+                goto out;

         afbdev->sysram = sysram;
         afbdev->size = size;
···

         return 0;

-err_release_fbi:
-        drm_fb_helper_release_fbi(helper);
-err_free_vram:
+out:
         vfree(sysram);
         return ret;
 }
···
         struct ast_framebuffer *afb = &afbdev->afb;

         drm_fb_helper_unregister_fbi(&afbdev->helper);
-        drm_fb_helper_release_fbi(&afbdev->helper);

         if (afb->obj) {
                 drm_gem_object_unreference_unlocked(afb->obj);
drivers/gpu/drm/atmel-hlcdc/Makefile | -1
···
 atmel-hlcdc-dc-y := atmel_hlcdc_crtc.o \
                 atmel_hlcdc_dc.o \
-                atmel_hlcdc_layer.o \
                 atmel_hlcdc_output.o \
                 atmel_hlcdc_plane.o

drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c | +51 -9
···
         kfree(state);
 }

+static int atmel_hlcdc_crtc_enable_vblank(struct drm_crtc *c)
+{
+        struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+        struct regmap *regmap = crtc->dc->hlcdc->regmap;
+
+        /* Enable SOF (Start Of Frame) interrupt for vblank counting */
+        regmap_write(regmap, ATMEL_HLCDC_IER, ATMEL_HLCDC_SOF);
+
+        return 0;
+}
+
+static void atmel_hlcdc_crtc_disable_vblank(struct drm_crtc *c)
+{
+        struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+        struct regmap *regmap = crtc->dc->hlcdc->regmap;
+
+        regmap_write(regmap, ATMEL_HLCDC_IDR, ATMEL_HLCDC_SOF);
+}
+
 static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
         .page_flip = drm_atomic_helper_page_flip,
         .set_config = drm_atomic_helper_set_config,
···
         .reset = atmel_hlcdc_crtc_reset,
         .atomic_duplicate_state = atmel_hlcdc_crtc_duplicate_state,
         .atomic_destroy_state = atmel_hlcdc_crtc_destroy_state,
+        .enable_vblank = atmel_hlcdc_crtc_enable_vblank,
+        .disable_vblank = atmel_hlcdc_crtc_disable_vblank,
 };

 int atmel_hlcdc_crtc_create(struct drm_device *dev)
 {
+        struct atmel_hlcdc_plane *primary = NULL, *cursor = NULL;
         struct atmel_hlcdc_dc *dc = dev->dev_private;
-        struct atmel_hlcdc_planes *planes = dc->planes;
         struct atmel_hlcdc_crtc *crtc;
         int ret;
         int i;
···

         crtc->dc = dc;

-        ret = drm_crtc_init_with_planes(dev, &crtc->base,
-                                &planes->primary->base,
-                                planes->cursor ? &planes->cursor->base : NULL,
-                                &atmel_hlcdc_crtc_funcs, NULL);
+        for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
+                if (!dc->layers[i])
+                        continue;
+
+                switch (dc->layers[i]->desc->type) {
+                case ATMEL_HLCDC_BASE_LAYER:
+                        primary = atmel_hlcdc_layer_to_plane(dc->layers[i]);
+                        break;
+
+                case ATMEL_HLCDC_CURSOR_LAYER:
+                        cursor = atmel_hlcdc_layer_to_plane(dc->layers[i]);
+                        break;
+
+                default:
+                        break;
+                }
+        }
+
+        ret = drm_crtc_init_with_planes(dev, &crtc->base, &primary->base,
+                                        &cursor->base, &atmel_hlcdc_crtc_funcs,
+                                        NULL);
         if (ret < 0)
                 goto fail;

         crtc->id = drm_crtc_index(&crtc->base);

-        if (planes->cursor)
-                planes->cursor->base.possible_crtcs = 1 << crtc->id;
+        for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
+                struct atmel_hlcdc_plane *overlay;

-        for (i = 0; i < planes->noverlays; i++)
-                planes->overlays[i]->base.possible_crtcs = 1 << crtc->id;
+                if (dc->layers[i] &&
+                    dc->layers[i]->desc->type == ATMEL_HLCDC_OVERLAY_LAYER) {
+                        overlay = atmel_hlcdc_layer_to_plane(dc->layers[i]);
+                        overlay->base.possible_crtcs = 1 << crtc->id;
+                }
+        }

         drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
         drm_crtc_vblank_reset(&crtc->base);
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c | +43 -61
···
                 .regs_offset = 0x40,
                 .id = 0,
                 .type = ATMEL_HLCDC_BASE_LAYER,
-                .nconfigs = 5,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .xstride = { 2 },
                         .default_color = 3,
···
                 .regs_offset = 0x40,
                 .id = 0,
                 .type = ATMEL_HLCDC_BASE_LAYER,
-                .nconfigs = 5,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .xstride = { 2 },
                         .default_color = 3,
···
                 .regs_offset = 0x100,
                 .id = 1,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 10,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x280,
                 .id = 2,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 17,
+                .cfgs_offset = 0x4c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                         .chroma_key = 10,
                         .chroma_key_mask = 11,
                         .general_config = 12,
+                        .scaler_config = 13,
                         .csc = 14,
                 },
         },
···
                 .regs_offset = 0x340,
                 .id = 3,
                 .type = ATMEL_HLCDC_CURSOR_LAYER,
-                .nconfigs = 10,
                 .max_width = 128,
                 .max_height = 128,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x40,
                 .id = 0,
                 .type = ATMEL_HLCDC_BASE_LAYER,
-                .nconfigs = 7,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .xstride = { 2 },
                         .default_color = 3,
···
                 .regs_offset = 0x140,
                 .id = 1,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 10,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x240,
                 .id = 2,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 10,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x340,
                 .id = 3,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 42,
+                .cfgs_offset = 0x4c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                         .chroma_key = 10,
                         .chroma_key_mask = 11,
                         .general_config = 12,
+                        .scaler_config = 13,
+                        .phicoeffs = {
+                                .x = 17,
+                                .y = 33,
+                        },
                         .csc = 14,
                 },
         },
···
                 .regs_offset = 0x440,
                 .id = 4,
                 .type = ATMEL_HLCDC_CURSOR_LAYER,
-                .nconfigs = 10,
                 .max_width = 128,
                 .max_height = 128,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                         .chroma_key = 7,
                         .chroma_key_mask = 8,
                         .general_config = 9,
+                        .scaler_config = 13,
                 },
         },
 };
···
                 .regs_offset = 0x40,
                 .id = 0,
                 .type = ATMEL_HLCDC_BASE_LAYER,
-                .nconfigs = 7,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .xstride = { 2 },
                         .default_color = 3,
···
                 .regs_offset = 0x140,
                 .id = 1,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 10,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x240,
                 .id = 2,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 10,
+                .cfgs_offset = 0x2c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                 .regs_offset = 0x340,
                 .id = 3,
                 .type = ATMEL_HLCDC_OVERLAY_LAYER,
-                .nconfigs = 42,
+                .cfgs_offset = 0x4c,
                 .layout = {
                         .pos = 2,
                         .size = 3,
···
                         .chroma_key = 10,
                         .chroma_key_mask = 11,
                         .general_config = 12,
+                        .scaler_config = 13,
+                        .phicoeffs = {
+                                .x = 17,
+                                .y = 33,
+                        },
                         .csc = 14,
                 },
         },
···
         return MODE_OK;
 }

+static void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer)
+{
+        if (!layer)
+                return;
+
+        if (layer->desc->type == ATMEL_HLCDC_BASE_LAYER ||
+            layer->desc->type == ATMEL_HLCDC_OVERLAY_LAYER ||
+            layer->desc->type == ATMEL_HLCDC_CURSOR_LAYER)
+                atmel_hlcdc_plane_irq(atmel_hlcdc_layer_to_plane(layer));
+}
+
 static irqreturn_t atmel_hlcdc_dc_irq_handler(int irq, void *data)
 {
         struct drm_device *dev = data;
···
                 atmel_hlcdc_crtc_irq(dc->crtc);

         for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
-                struct atmel_hlcdc_layer *layer = dc->layers[i];
-
-                if (!(ATMEL_HLCDC_LAYER_STATUS(i) & status) || !layer)
-                        continue;
-
-                atmel_hlcdc_layer_irq(layer);
+                if (ATMEL_HLCDC_LAYER_STATUS(i) & status)
+                        atmel_hlcdc_layer_irq(dc->layers[i]);
         }

         return IRQ_HANDLED;
···
 static int atmel_hlcdc_dc_modeset_init(struct drm_device *dev)
 {
         struct atmel_hlcdc_dc *dc = dev->dev_private;
-        struct atmel_hlcdc_planes *planes;
         int ret;
-        int i;

         drm_mode_config_init(dev);

···
                 return ret;
         }

-        planes = atmel_hlcdc_create_planes(dev);
-        if (IS_ERR(planes)) {
-                dev_err(dev->dev, "failed to create planes\n");
-                return PTR_ERR(planes);
+        ret = atmel_hlcdc_create_planes(dev);
+        if (ret) {
+                dev_err(dev->dev, "failed to create planes: %d\n", ret);
+                return ret;
         }
-
-        dc->planes = planes;
-
-        dc->layers[planes->primary->layer.desc->id] =
-                        &planes->primary->layer;
-
-        if (planes->cursor)
-                dc->layers[planes->cursor->layer.desc->id] =
-                                &planes->cursor->layer;
-
-        for (i = 0; i < planes->noverlays; i++)
-                dc->layers[planes->overlays[i]->layer.desc->id] =
-                                &planes->overlays[i]->layer;

         ret = atmel_hlcdc_crtc_create(dev);
         if (ret) {
···
         regmap_read(dc->hlcdc->regmap, ATMEL_HLCDC_ISR, &isr);
 }

-static int atmel_hlcdc_dc_enable_vblank(struct drm_device *dev,
-                                        unsigned int pipe)
-{
-        struct atmel_hlcdc_dc *dc = dev->dev_private;
-
-        /* Enable SOF (Start Of Frame) interrupt for vblank counting */
-        regmap_write(dc->hlcdc->regmap, ATMEL_HLCDC_IER, ATMEL_HLCDC_SOF);
-
-        return 0;
-}
-
-static void atmel_hlcdc_dc_disable_vblank(struct drm_device *dev,
-                                          unsigned int pipe)
-{
-        struct atmel_hlcdc_dc *dc = dev->dev_private;
-
-        regmap_write(dc->hlcdc->regmap, ATMEL_HLCDC_IDR, ATMEL_HLCDC_SOF);
-}
-
 static const struct file_operations fops = {
         .owner = THIS_MODULE,
         .open = drm_open,
···
         .irq_preinstall = atmel_hlcdc_dc_irq_uninstall,
         .irq_postinstall = atmel_hlcdc_dc_irq_postinstall,
         .irq_uninstall = atmel_hlcdc_dc_irq_uninstall,
-        .get_vblank_counter = drm_vblank_no_hw_counter,
-        .enable_vblank = atmel_hlcdc_dc_enable_vblank,
-        .disable_vblank = atmel_hlcdc_dc_disable_vblank,
         .gem_free_object_unlocked = drm_gem_cma_free_object,
         .gem_vm_ops = &drm_gem_cma_vm_ops,
         .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h | +312 -51
···
 #define DRM_ATMEL_HLCDC_H

 #include <linux/clk.h>
+#include <linux/dmapool.h>
 #include <linux/irqdomain.h>
+#include <linux/mfd/atmel-hlcdc.h>
 #include <linux/pwm.h>

 #include <drm/drm_atomic.h>
···
 #include <drm/drm_plane_helper.h>
 #include <drm/drmP.h>

-#include "atmel_hlcdc_layer.h"
+#define ATMEL_HLCDC_LAYER_CHER                  0x0
+#define ATMEL_HLCDC_LAYER_CHDR                  0x4
+#define ATMEL_HLCDC_LAYER_CHSR                  0x8
+#define ATMEL_HLCDC_LAYER_EN                    BIT(0)
+#define ATMEL_HLCDC_LAYER_UPDATE                BIT(1)
+#define ATMEL_HLCDC_LAYER_A2Q                   BIT(2)
+#define ATMEL_HLCDC_LAYER_RST                   BIT(8)

-#define ATMEL_HLCDC_MAX_LAYERS          5
+#define ATMEL_HLCDC_LAYER_IER                   0xc
+#define ATMEL_HLCDC_LAYER_IDR                   0x10
+#define ATMEL_HLCDC_LAYER_IMR                   0x14
+#define ATMEL_HLCDC_LAYER_ISR                   0x18
+#define ATMEL_HLCDC_LAYER_DFETCH                BIT(0)
+#define ATMEL_HLCDC_LAYER_LFETCH                BIT(1)
+#define ATMEL_HLCDC_LAYER_DMA_IRQ(p)            BIT(2 + (8 * (p)))
+#define ATMEL_HLCDC_LAYER_DSCR_IRQ(p)           BIT(3 + (8 * (p)))
+#define ATMEL_HLCDC_LAYER_ADD_IRQ(p)            BIT(4 + (8 * (p)))
+#define ATMEL_HLCDC_LAYER_DONE_IRQ(p)           BIT(5 + (8 * (p)))
+#define ATMEL_HLCDC_LAYER_OVR_IRQ(p)            BIT(6 + (8 * (p)))
+
+#define ATMEL_HLCDC_LAYER_PLANE_HEAD(p)         (((p) * 0x10) + 0x1c)
+#define ATMEL_HLCDC_LAYER_PLANE_ADDR(p)         (((p) * 0x10) + 0x20)
+#define ATMEL_HLCDC_LAYER_PLANE_CTRL(p)         (((p) * 0x10) + 0x24)
+#define ATMEL_HLCDC_LAYER_PLANE_NEXT(p)         (((p) * 0x10) + 0x28)
+
+#define ATMEL_HLCDC_LAYER_DMA_CFG               0
+#define ATMEL_HLCDC_LAYER_DMA_SIF               BIT(0)
+#define ATMEL_HLCDC_LAYER_DMA_BLEN_MASK         GENMASK(5, 4)
+#define ATMEL_HLCDC_LAYER_DMA_BLEN_SINGLE       (0 << 4)
+#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR4        (1 << 4)
+#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR8        (2 << 4)
+#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16       (3 << 4)
+#define ATMEL_HLCDC_LAYER_DMA_DLBO              BIT(8)
+#define ATMEL_HLCDC_LAYER_DMA_ROTDIS            BIT(12)
+#define ATMEL_HLCDC_LAYER_DMA_LOCKDIS           BIT(13)
+
+#define ATMEL_HLCDC_LAYER_FORMAT_CFG            1
+#define ATMEL_HLCDC_LAYER_RGB                   (0 << 0)
+#define ATMEL_HLCDC_LAYER_CLUT                  (1 << 0)
+#define ATMEL_HLCDC_LAYER_YUV                   (2 << 0)
+#define ATMEL_HLCDC_RGB_MODE(m)                 \
+        (ATMEL_HLCDC_LAYER_RGB | (((m) & 0xf) << 4))
+#define ATMEL_HLCDC_CLUT_MODE(m)                \
+        (ATMEL_HLCDC_LAYER_CLUT | (((m) & 0x3) << 8))
+#define ATMEL_HLCDC_YUV_MODE(m)                 \
+        (ATMEL_HLCDC_LAYER_YUV | (((m) & 0xf) << 12))
+#define ATMEL_HLCDC_YUV422ROT                   BIT(16)
+#define ATMEL_HLCDC_YUV422SWP                   BIT(17)
+#define ATMEL_HLCDC_DSCALEOPT                   BIT(20)
+
+#define ATMEL_HLCDC_XRGB4444_MODE               ATMEL_HLCDC_RGB_MODE(0)
+#define ATMEL_HLCDC_ARGB4444_MODE               ATMEL_HLCDC_RGB_MODE(1)
+#define ATMEL_HLCDC_RGBA4444_MODE               ATMEL_HLCDC_RGB_MODE(2)
+#define ATMEL_HLCDC_RGB565_MODE                 ATMEL_HLCDC_RGB_MODE(3)
+#define ATMEL_HLCDC_ARGB1555_MODE               ATMEL_HLCDC_RGB_MODE(4)
+#define ATMEL_HLCDC_XRGB8888_MODE               ATMEL_HLCDC_RGB_MODE(9)
+#define ATMEL_HLCDC_RGB888_MODE                 ATMEL_HLCDC_RGB_MODE(10)
+#define ATMEL_HLCDC_ARGB8888_MODE               ATMEL_HLCDC_RGB_MODE(12)
+#define ATMEL_HLCDC_RGBA8888_MODE               ATMEL_HLCDC_RGB_MODE(13)
+
+#define ATMEL_HLCDC_AYUV_MODE                   ATMEL_HLCDC_YUV_MODE(0)
+#define ATMEL_HLCDC_YUYV_MODE                   ATMEL_HLCDC_YUV_MODE(1)
+#define ATMEL_HLCDC_UYVY_MODE                   ATMEL_HLCDC_YUV_MODE(2)
+#define ATMEL_HLCDC_YVYU_MODE                   ATMEL_HLCDC_YUV_MODE(3)
+#define ATMEL_HLCDC_VYUY_MODE                   ATMEL_HLCDC_YUV_MODE(4)
+#define ATMEL_HLCDC_NV61_MODE                   ATMEL_HLCDC_YUV_MODE(5)
+#define ATMEL_HLCDC_YUV422_MODE                 ATMEL_HLCDC_YUV_MODE(6)
+#define ATMEL_HLCDC_NV21_MODE                   ATMEL_HLCDC_YUV_MODE(7)
+#define ATMEL_HLCDC_YUV420_MODE                 ATMEL_HLCDC_YUV_MODE(8)
+
+#define ATMEL_HLCDC_LAYER_POS(x, y)             ((x) | ((y) << 16))
+#define ATMEL_HLCDC_LAYER_SIZE(w, h)            (((w) - 1) | (((h) - 1) << 16))
+
+#define ATMEL_HLCDC_LAYER_CRKEY                 BIT(0)
+#define ATMEL_HLCDC_LAYER_INV                   BIT(1)
+#define ATMEL_HLCDC_LAYER_ITER2BL               BIT(2)
+#define ATMEL_HLCDC_LAYER_ITER                  BIT(3)
+#define ATMEL_HLCDC_LAYER_REVALPHA              BIT(4)
+#define ATMEL_HLCDC_LAYER_GAEN                  BIT(5)
+#define ATMEL_HLCDC_LAYER_LAEN                  BIT(6)
+#define ATMEL_HLCDC_LAYER_OVR                   BIT(7)
+#define ATMEL_HLCDC_LAYER_DMA                   BIT(8)
+#define ATMEL_HLCDC_LAYER_REP                   BIT(9)
+#define ATMEL_HLCDC_LAYER_DSTKEY                BIT(10)
+#define ATMEL_HLCDC_LAYER_DISCEN                BIT(11)
+#define ATMEL_HLCDC_LAYER_GA_SHIFT              16
+#define ATMEL_HLCDC_LAYER_GA_MASK               \
+        GENMASK(23, ATMEL_HLCDC_LAYER_GA_SHIFT)
+#define ATMEL_HLCDC_LAYER_GA(x)                 \
+        ((x) << ATMEL_HLCDC_LAYER_GA_SHIFT)
+
+#define ATMEL_HLCDC_LAYER_DISC_POS(x, y)        ((x) | ((y) << 16))
+#define ATMEL_HLCDC_LAYER_DISC_SIZE(w, h)       (((w) - 1) | (((h) - 1) << 16))
+
+#define ATMEL_HLCDC_LAYER_SCALER_FACTORS(x, y)  ((x) | ((y) << 16))
+#define ATMEL_HLCDC_LAYER_SCALER_ENABLE         BIT(31)
+
+#define ATMEL_HLCDC_LAYER_MAX_PLANES            3
+
+#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED   BIT(0)
+#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED     BIT(1)
+#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE       BIT(2)
+#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN    BIT(3)
+
+#define ATMEL_HLCDC_MAX_LAYERS                  6
+
+/**
+ * Atmel HLCDC Layer registers layout structure
+ *
+ * Each HLCDC layer has its own register organization and a given register
+ * can be placed differently on 2 different layers depending on its
+ * capabilities.
+ * This structure stores common registers layout for a given layer and is
+ * used by HLCDC layer code to choose the appropriate register to write to
+ * or to read from.
+ *
+ * For all fields, a value of zero means "unsupported".
+ *
+ * See Atmel's datasheet for a detailled description of these registers.
+ *
+ * @xstride: xstride registers
+ * @pstride: pstride registers
+ * @pos: position register
+ * @size: displayed size register
+ * @memsize: memory size register
+ * @default_color: default color register
+ * @chroma_key: chroma key register
+ * @chroma_key_mask: chroma key mask register
+ * @general_config: general layer config register
+ * @sacler_config: scaler factors register
+ * @phicoeffs: X/Y PHI coefficient registers
+ * @disc_pos: discard area position register
+ * @disc_size: discard area size register
+ * @csc: color space conversion register
+ */
+struct atmel_hlcdc_layer_cfg_layout {
+        int xstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
+        int pstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
+        int pos;
+        int size;
+        int memsize;
+        int default_color;
+        int chroma_key;
+        int chroma_key_mask;
+        int general_config;
+        int scaler_config;
+        struct {
+                int x;
+                int y;
+        } phicoeffs;
+        int disc_pos;
+        int disc_size;
+        int csc;
+};
+
+/**
+ * Atmel HLCDC DMA descriptor structure
+ *
+ * This structure is used by the HLCDC DMA engine to schedule a DMA transfer.
+ *
+ * The structure fields must remain in this specific order, because they're
+ * used by the HLCDC DMA engine, which expect them in this order.
+ * HLCDC DMA descriptors must be aligned on 64 bits.
+ *
+ * @addr: buffer DMA address
+ * @ctrl: DMA transfer options
+ * @next: next DMA descriptor to fetch
+ * @self: descriptor DMA address
+ */
+struct atmel_hlcdc_dma_channel_dscr {
+        dma_addr_t addr;
+        u32 ctrl;
+        dma_addr_t next;
+        dma_addr_t self;
+} __aligned(sizeof(u64));
+
+/**
+ * Atmel HLCDC layer types
+ */
+enum atmel_hlcdc_layer_type {
+        ATMEL_HLCDC_NO_LAYER,
+        ATMEL_HLCDC_BASE_LAYER,
+        ATMEL_HLCDC_OVERLAY_LAYER,
+        ATMEL_HLCDC_CURSOR_LAYER,
+        ATMEL_HLCDC_PP_LAYER,
+};
+
+/**
+ * Atmel HLCDC Supported formats structure
+ *
+ * This structure list all the formats supported by a given layer.
+ *
+ * @nformats: number of supported formats
+ * @formats: supported formats
+ */
+struct atmel_hlcdc_formats {
+        int nformats;
+        u32 *formats;
+};
+
+/**
+ * Atmel HLCDC Layer description structure
+ *
+ * This structure describes the capabilities provided by a given layer.
+ *
+ * @name: layer name
+ * @type: layer type
+ * @id: layer id
+ * @regs_offset: offset of the layer registers from the HLCDC registers base
+ * @cfgs_offset: CFGX registers offset from the layer registers base
+ * @formats: supported formats
+ * @layout: config registers layout
+ * @max_width: maximum width supported by this layer (0 means unlimited)
+ * @max_height: maximum height supported by this layer (0 means unlimited)
+ */
+struct atmel_hlcdc_layer_desc {
+        const char *name;
+        enum atmel_hlcdc_layer_type type;
+        int id;
+        int regs_offset;
+        int cfgs_offset;
+        struct atmel_hlcdc_formats *formats;
+        struct atmel_hlcdc_layer_cfg_layout layout;
+        int max_width;
+        int max_height;
+};
+
+/**
+ * Atmel HLCDC Layer.
+ *
+ * A layer can be a DRM plane of a post processing layer used to render
+ * HLCDC composition into memory.
+ *
+ * @desc: layer description
+ * @regmap: pointer to the HLCDC regmap
+ */
+struct atmel_hlcdc_layer {
+        const struct atmel_hlcdc_layer_desc *desc;
+        struct regmap *regmap;
+};
+
+/**
+ * Atmel HLCDC Plane.
+ *
+ * @base: base DRM plane structure
+ * @layer: HLCDC layer structure
+ * @properties: pointer to the property definitions structure
+ */
+struct atmel_hlcdc_plane {
+        struct drm_plane base;
+        struct atmel_hlcdc_layer layer;
+        struct atmel_hlcdc_plane_properties *properties;
+};
+
+static inline struct atmel_hlcdc_plane *
+drm_plane_to_atmel_hlcdc_plane(struct drm_plane *p)
+{
+        return container_of(p, struct atmel_hlcdc_plane, base);
+}
+
+static inline struct atmel_hlcdc_plane *
+atmel_hlcdc_layer_to_plane(struct atmel_hlcdc_layer *layer)
+{
+        return container_of(layer, struct atmel_hlcdc_plane, layer);
+}

 /**
  * Atmel HLCDC Display Controller description structure.
  *
- * This structure describe the HLCDC IP capabilities and depends on the
+ * This structure describes the HLCDC IP capabilities and depends on the
  * HLCDC IP version (or Atmel SoC family).
  *
  * @min_width: minimum width supported by the Display Controller
···
 };

 /**
- * Atmel HLCDC Plane.
- *
- * @base: base DRM plane structure
- * @layer: HLCDC layer structure
- * @properties: pointer to the property definitions structure
- * @rotation: current rotation status
- */
-struct atmel_hlcdc_plane {
-        struct drm_plane base;
-        struct atmel_hlcdc_layer layer;
-        struct atmel_hlcdc_plane_properties *properties;
-};
-
-static inline struct atmel_hlcdc_plane *
-drm_plane_to_atmel_hlcdc_plane(struct drm_plane *p)
-{
-        return container_of(p, struct atmel_hlcdc_plane, base);
-}
-
-static inline struct atmel_hlcdc_plane *
-atmel_hlcdc_layer_to_plane(struct atmel_hlcdc_layer *l)
-{
-        return container_of(l, struct atmel_hlcdc_plane, layer);
-}
-
-/**
- * Atmel HLCDC Planes.
- *
- * This structure stores the instantiated HLCDC Planes and can be accessed by
- * the HLCDC Display Controller or the HLCDC CRTC.
- *
- * @primary: primary plane
- * @cursor: hardware cursor plane
- * @overlays: overlay plane table
- * @noverlays: number of overlay planes
- */
-struct atmel_hlcdc_planes {
-        struct atmel_hlcdc_plane *primary;
-        struct atmel_hlcdc_plane *cursor;
-        struct atmel_hlcdc_plane **overlays;
-        int noverlays;
-};
-
-/**
  * Atmel HLCDC Display Controller.
  *
  * @desc: HLCDC Display Controller description
+ * @dscrpool: DMA coherent pool used to allocate DMA descriptors
  * @hlcdc: pointer to the atmel_hlcdc structure provided by the MFD device
  * @fbdev: framebuffer device attached to the Display Controller
  * @crtc: CRTC provided by the display controller
  * @planes: instantiated planes
- * @layers: active HLCDC layer
+ * @layers: active HLCDC layers
  * @wq: display controller workqueue
  * @commit: used for async commit handling
  */
 struct atmel_hlcdc_dc {
         const struct atmel_hlcdc_dc_desc *desc;
+        struct dma_pool *dscrpool;
         struct atmel_hlcdc *hlcdc;
         struct drm_fbdev_cma *fbdev;
         struct drm_crtc *crtc;
-        struct atmel_hlcdc_planes *planes;
         struct atmel_hlcdc_layer *layers[ATMEL_HLCDC_MAX_LAYERS];
         struct workqueue_struct *wq;
         struct {
···
 extern struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats;
 extern struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_and_yuv_formats;

+static inline void atmel_hlcdc_layer_write_reg(struct atmel_hlcdc_layer *layer,
+                                               unsigned int reg, u32 val)
+{
+        regmap_write(layer->regmap, layer->desc->regs_offset + reg, val);
+}
+
+static inline u32 atmel_hlcdc_layer_read_reg(struct atmel_hlcdc_layer *layer,
+                                             unsigned int reg)
+{
+        u32 val;
+
+        regmap_read(layer->regmap, layer->desc->regs_offset + reg, &val);
+
+        return val;
+}
+
+static inline void atmel_hlcdc_layer_write_cfg(struct atmel_hlcdc_layer *layer,
+                                               unsigned int cfgid, u32 val)
+{
+        atmel_hlcdc_layer_write_reg(layer,
+                                    layer->desc->cfgs_offset +
+                                    (cfgid * sizeof(u32)), val);
+}
+
+static inline u32 atmel_hlcdc_layer_read_cfg(struct atmel_hlcdc_layer *layer,
+                                             unsigned int cfgid)
+{
+        return atmel_hlcdc_layer_read_reg(layer,
layer->desc->cfgs_offset + 188 + (cfgid * sizeof(u32))); 189 + } 190 + 191 + static inline void atmel_hlcdc_layer_init(struct atmel_hlcdc_layer *layer, 192 + const struct atmel_hlcdc_layer_desc *desc, 193 + struct regmap *regmap) 194 + { 195 + layer->desc = desc; 196 + layer->regmap = regmap; 197 + } 198 + 380 199 int atmel_hlcdc_dc_mode_valid(struct atmel_hlcdc_dc *dc, 381 200 struct drm_display_mode *mode); 382 201 383 - struct atmel_hlcdc_planes * 384 - atmel_hlcdc_create_planes(struct drm_device *dev); 202 + int atmel_hlcdc_create_planes(struct drm_device *dev); 203 + void atmel_hlcdc_plane_irq(struct atmel_hlcdc_plane *plane); 385 204 386 205 int atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state); 387 206 int atmel_hlcdc_plane_prepare_ahb_routing(struct drm_crtc_state *c_state);
-666
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c
··· 1 - /* 2 - * Copyright (C) 2014 Free Electrons 3 - * Copyright (C) 2014 Atmel 4 - * 5 - * Author: Boris BREZILLON <boris.brezillon@free-electrons.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms of the GNU General Public License version 2 as published by 9 - * the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope that it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 - */ 19 - 20 - #include <linux/dma-mapping.h> 21 - #include <linux/interrupt.h> 22 - 23 - #include "atmel_hlcdc_dc.h" 24 - 25 - static void 26 - atmel_hlcdc_layer_fb_flip_release(struct drm_flip_work *work, void *val) 27 - { 28 - struct atmel_hlcdc_layer_fb_flip *flip = val; 29 - 30 - if (flip->fb) 31 - drm_framebuffer_unreference(flip->fb); 32 - kfree(flip); 33 - } 34 - 35 - static void 36 - atmel_hlcdc_layer_fb_flip_destroy(struct atmel_hlcdc_layer_fb_flip *flip) 37 - { 38 - if (flip->fb) 39 - drm_framebuffer_unreference(flip->fb); 40 - kfree(flip->task); 41 - kfree(flip); 42 - } 43 - 44 - static void 45 - atmel_hlcdc_layer_fb_flip_release_queue(struct atmel_hlcdc_layer *layer, 46 - struct atmel_hlcdc_layer_fb_flip *flip) 47 - { 48 - int i; 49 - 50 - if (!flip) 51 - return; 52 - 53 - for (i = 0; i < layer->max_planes; i++) { 54 - if (!flip->dscrs[i]) 55 - break; 56 - 57 - flip->dscrs[i]->status = 0; 58 - flip->dscrs[i] = NULL; 59 - } 60 - 61 - drm_flip_work_queue_task(&layer->gc, flip->task); 62 - drm_flip_work_commit(&layer->gc, layer->wq); 63 - } 64 - 65 - static void atmel_hlcdc_layer_update_reset(struct atmel_hlcdc_layer *layer, 66 - int id) 67 - { 68 - struct atmel_hlcdc_layer_update *upd 
= &layer->update; 69 - struct atmel_hlcdc_layer_update_slot *slot; 70 - 71 - if (id < 0 || id > 1) 72 - return; 73 - 74 - slot = &upd->slots[id]; 75 - bitmap_clear(slot->updated_configs, 0, layer->desc->nconfigs); 76 - memset(slot->configs, 0, 77 - sizeof(*slot->configs) * layer->desc->nconfigs); 78 - 79 - if (slot->fb_flip) { 80 - atmel_hlcdc_layer_fb_flip_release_queue(layer, slot->fb_flip); 81 - slot->fb_flip = NULL; 82 - } 83 - } 84 - 85 - static void atmel_hlcdc_layer_update_apply(struct atmel_hlcdc_layer *layer) 86 - { 87 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 88 - const struct atmel_hlcdc_layer_desc *desc = layer->desc; 89 - struct atmel_hlcdc_layer_update *upd = &layer->update; 90 - struct regmap *regmap = layer->hlcdc->regmap; 91 - struct atmel_hlcdc_layer_update_slot *slot; 92 - struct atmel_hlcdc_layer_fb_flip *fb_flip; 93 - struct atmel_hlcdc_dma_channel_dscr *dscr; 94 - unsigned int cfg; 95 - u32 action = 0; 96 - int i = 0; 97 - 98 - if (upd->pending < 0 || upd->pending > 1) 99 - return; 100 - 101 - slot = &upd->slots[upd->pending]; 102 - 103 - for_each_set_bit(cfg, slot->updated_configs, layer->desc->nconfigs) { 104 - regmap_write(regmap, 105 - desc->regs_offset + 106 - ATMEL_HLCDC_LAYER_CFG(layer, cfg), 107 - slot->configs[cfg]); 108 - action |= ATMEL_HLCDC_LAYER_UPDATE; 109 - } 110 - 111 - fb_flip = slot->fb_flip; 112 - 113 - if (!fb_flip->fb) 114 - goto apply; 115 - 116 - if (dma->status == ATMEL_HLCDC_LAYER_DISABLED) { 117 - for (i = 0; i < fb_flip->ngems; i++) { 118 - dscr = fb_flip->dscrs[i]; 119 - dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH | 120 - ATMEL_HLCDC_LAYER_DMA_IRQ | 121 - ATMEL_HLCDC_LAYER_ADD_IRQ | 122 - ATMEL_HLCDC_LAYER_DONE_IRQ; 123 - 124 - regmap_write(regmap, 125 - desc->regs_offset + 126 - ATMEL_HLCDC_LAYER_PLANE_ADDR(i), 127 - dscr->addr); 128 - regmap_write(regmap, 129 - desc->regs_offset + 130 - ATMEL_HLCDC_LAYER_PLANE_CTRL(i), 131 - dscr->ctrl); 132 - regmap_write(regmap, 133 - desc->regs_offset + 134 - 
ATMEL_HLCDC_LAYER_PLANE_NEXT(i), 135 - dscr->next); 136 - } 137 - 138 - action |= ATMEL_HLCDC_LAYER_DMA_CHAN; 139 - dma->status = ATMEL_HLCDC_LAYER_ENABLED; 140 - } else { 141 - for (i = 0; i < fb_flip->ngems; i++) { 142 - dscr = fb_flip->dscrs[i]; 143 - dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH | 144 - ATMEL_HLCDC_LAYER_DMA_IRQ | 145 - ATMEL_HLCDC_LAYER_DSCR_IRQ | 146 - ATMEL_HLCDC_LAYER_DONE_IRQ; 147 - 148 - regmap_write(regmap, 149 - desc->regs_offset + 150 - ATMEL_HLCDC_LAYER_PLANE_HEAD(i), 151 - dscr->next); 152 - } 153 - 154 - action |= ATMEL_HLCDC_LAYER_A2Q; 155 - } 156 - 157 - /* Release unneeded descriptors */ 158 - for (i = fb_flip->ngems; i < layer->max_planes; i++) { 159 - fb_flip->dscrs[i]->status = 0; 160 - fb_flip->dscrs[i] = NULL; 161 - } 162 - 163 - dma->queue = fb_flip; 164 - slot->fb_flip = NULL; 165 - 166 - apply: 167 - if (action) 168 - regmap_write(regmap, 169 - desc->regs_offset + ATMEL_HLCDC_LAYER_CHER, 170 - action); 171 - 172 - atmel_hlcdc_layer_update_reset(layer, upd->pending); 173 - 174 - upd->pending = -1; 175 - } 176 - 177 - void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer) 178 - { 179 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 180 - const struct atmel_hlcdc_layer_desc *desc = layer->desc; 181 - struct regmap *regmap = layer->hlcdc->regmap; 182 - struct atmel_hlcdc_layer_fb_flip *flip; 183 - unsigned long flags; 184 - unsigned int isr, imr; 185 - unsigned int status; 186 - unsigned int plane_status; 187 - u32 flip_status; 188 - 189 - int i; 190 - 191 - regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IMR, &imr); 192 - regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR, &isr); 193 - status = imr & isr; 194 - if (!status) 195 - return; 196 - 197 - spin_lock_irqsave(&layer->lock, flags); 198 - 199 - flip = dma->queue ? 
dma->queue : dma->cur; 200 - 201 - if (!flip) { 202 - spin_unlock_irqrestore(&layer->lock, flags); 203 - return; 204 - } 205 - 206 - /* 207 - * Set LOADED and DONE flags: they'll be cleared if at least one 208 - * memory plane is not LOADED or DONE. 209 - */ 210 - flip_status = ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED | 211 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE; 212 - for (i = 0; i < flip->ngems; i++) { 213 - plane_status = (status >> (8 * i)); 214 - 215 - if (plane_status & 216 - (ATMEL_HLCDC_LAYER_ADD_IRQ | 217 - ATMEL_HLCDC_LAYER_DSCR_IRQ) & 218 - ~flip->dscrs[i]->ctrl) { 219 - flip->dscrs[i]->status |= 220 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED; 221 - flip->dscrs[i]->ctrl |= 222 - ATMEL_HLCDC_LAYER_ADD_IRQ | 223 - ATMEL_HLCDC_LAYER_DSCR_IRQ; 224 - } 225 - 226 - if (plane_status & 227 - ATMEL_HLCDC_LAYER_DONE_IRQ & 228 - ~flip->dscrs[i]->ctrl) { 229 - flip->dscrs[i]->status |= 230 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE; 231 - flip->dscrs[i]->ctrl |= 232 - ATMEL_HLCDC_LAYER_DONE_IRQ; 233 - } 234 - 235 - if (plane_status & ATMEL_HLCDC_LAYER_OVR_IRQ) 236 - flip->dscrs[i]->status |= 237 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN; 238 - 239 - /* 240 - * Clear LOADED and DONE flags if the memory plane is either 241 - * not LOADED or not DONE. 242 - */ 243 - if (!(flip->dscrs[i]->status & 244 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED)) 245 - flip_status &= ~ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED; 246 - 247 - if (!(flip->dscrs[i]->status & 248 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE)) 249 - flip_status &= ~ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE; 250 - 251 - /* 252 - * An overrun on one memory plane impact the whole framebuffer 253 - * transfer, hence we set the OVERRUN flag as soon as there's 254 - * one memory plane reporting such an overrun. 
255 - */ 256 - flip_status |= flip->dscrs[i]->status & 257 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN; 258 - } 259 - 260 - /* Get changed bits */ 261 - flip_status ^= flip->status; 262 - flip->status |= flip_status; 263 - 264 - if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED) { 265 - atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur); 266 - dma->cur = dma->queue; 267 - dma->queue = NULL; 268 - } 269 - 270 - if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE) { 271 - atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur); 272 - dma->cur = NULL; 273 - } 274 - 275 - if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN) { 276 - regmap_write(regmap, 277 - desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR, 278 - ATMEL_HLCDC_LAYER_RST); 279 - if (dma->queue) 280 - atmel_hlcdc_layer_fb_flip_release_queue(layer, 281 - dma->queue); 282 - 283 - if (dma->cur) 284 - atmel_hlcdc_layer_fb_flip_release_queue(layer, 285 - dma->cur); 286 - 287 - dma->cur = NULL; 288 - dma->queue = NULL; 289 - } 290 - 291 - if (!dma->queue) { 292 - atmel_hlcdc_layer_update_apply(layer); 293 - 294 - if (!dma->cur) 295 - dma->status = ATMEL_HLCDC_LAYER_DISABLED; 296 - } 297 - 298 - spin_unlock_irqrestore(&layer->lock, flags); 299 - } 300 - 301 - void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer) 302 - { 303 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 304 - struct atmel_hlcdc_layer_update *upd = &layer->update; 305 - struct regmap *regmap = layer->hlcdc->regmap; 306 - const struct atmel_hlcdc_layer_desc *desc = layer->desc; 307 - unsigned long flags; 308 - unsigned int isr; 309 - 310 - spin_lock_irqsave(&layer->lock, flags); 311 - 312 - /* Disable the layer */ 313 - regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR, 314 - ATMEL_HLCDC_LAYER_RST | ATMEL_HLCDC_LAYER_A2Q | 315 - ATMEL_HLCDC_LAYER_UPDATE); 316 - 317 - /* Clear all pending interrupts */ 318 - regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR, &isr); 319 - 320 - /* Discard current and 
queued framebuffer transfers. */ 321 - if (dma->cur) { 322 - atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur); 323 - dma->cur = NULL; 324 - } 325 - 326 - if (dma->queue) { 327 - atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->queue); 328 - dma->queue = NULL; 329 - } 330 - 331 - /* 332 - * Then discard the pending update request (if any) to prevent 333 - * DMA irq handler from restarting the DMA channel after it has 334 - * been disabled. 335 - */ 336 - if (upd->pending >= 0) { 337 - atmel_hlcdc_layer_update_reset(layer, upd->pending); 338 - upd->pending = -1; 339 - } 340 - 341 - dma->status = ATMEL_HLCDC_LAYER_DISABLED; 342 - 343 - spin_unlock_irqrestore(&layer->lock, flags); 344 - } 345 - 346 - int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer) 347 - { 348 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 349 - struct atmel_hlcdc_layer_update *upd = &layer->update; 350 - struct regmap *regmap = layer->hlcdc->regmap; 351 - struct atmel_hlcdc_layer_fb_flip *fb_flip; 352 - struct atmel_hlcdc_layer_update_slot *slot; 353 - unsigned long flags; 354 - int i, j = 0; 355 - 356 - fb_flip = kzalloc(sizeof(*fb_flip), GFP_KERNEL); 357 - if (!fb_flip) 358 - return -ENOMEM; 359 - 360 - fb_flip->task = drm_flip_work_allocate_task(fb_flip, GFP_KERNEL); 361 - if (!fb_flip->task) { 362 - kfree(fb_flip); 363 - return -ENOMEM; 364 - } 365 - 366 - spin_lock_irqsave(&layer->lock, flags); 367 - 368 - upd->next = upd->pending ? 
0 : 1; 369 - 370 - slot = &upd->slots[upd->next]; 371 - 372 - for (i = 0; i < layer->max_planes * 4; i++) { 373 - if (!dma->dscrs[i].status) { 374 - fb_flip->dscrs[j++] = &dma->dscrs[i]; 375 - dma->dscrs[i].status = 376 - ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED; 377 - if (j == layer->max_planes) 378 - break; 379 - } 380 - } 381 - 382 - if (j < layer->max_planes) { 383 - for (i = 0; i < j; i++) 384 - fb_flip->dscrs[i]->status = 0; 385 - } 386 - 387 - if (j < layer->max_planes) { 388 - spin_unlock_irqrestore(&layer->lock, flags); 389 - atmel_hlcdc_layer_fb_flip_destroy(fb_flip); 390 - return -EBUSY; 391 - } 392 - 393 - slot->fb_flip = fb_flip; 394 - 395 - if (upd->pending >= 0) { 396 - memcpy(slot->configs, 397 - upd->slots[upd->pending].configs, 398 - layer->desc->nconfigs * sizeof(u32)); 399 - memcpy(slot->updated_configs, 400 - upd->slots[upd->pending].updated_configs, 401 - DIV_ROUND_UP(layer->desc->nconfigs, 402 - BITS_PER_BYTE * sizeof(unsigned long)) * 403 - sizeof(unsigned long)); 404 - slot->fb_flip->fb = upd->slots[upd->pending].fb_flip->fb; 405 - if (upd->slots[upd->pending].fb_flip->fb) { 406 - slot->fb_flip->fb = 407 - upd->slots[upd->pending].fb_flip->fb; 408 - slot->fb_flip->ngems = 409 - upd->slots[upd->pending].fb_flip->ngems; 410 - drm_framebuffer_reference(slot->fb_flip->fb); 411 - } 412 - } else { 413 - regmap_bulk_read(regmap, 414 - layer->desc->regs_offset + 415 - ATMEL_HLCDC_LAYER_CFG(layer, 0), 416 - upd->slots[upd->next].configs, 417 - layer->desc->nconfigs); 418 - } 419 - 420 - spin_unlock_irqrestore(&layer->lock, flags); 421 - 422 - return 0; 423 - } 424 - 425 - void atmel_hlcdc_layer_update_rollback(struct atmel_hlcdc_layer *layer) 426 - { 427 - struct atmel_hlcdc_layer_update *upd = &layer->update; 428 - 429 - atmel_hlcdc_layer_update_reset(layer, upd->next); 430 - upd->next = -1; 431 - } 432 - 433 - void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer, 434 - struct drm_framebuffer *fb, 435 - unsigned int *offsets) 436 - { 
437 - struct atmel_hlcdc_layer_update *upd = &layer->update; 438 - struct atmel_hlcdc_layer_fb_flip *fb_flip; 439 - struct atmel_hlcdc_layer_update_slot *slot; 440 - struct atmel_hlcdc_dma_channel_dscr *dscr; 441 - struct drm_framebuffer *old_fb; 442 - int nplanes = 0; 443 - int i; 444 - 445 - if (upd->next < 0 || upd->next > 1) 446 - return; 447 - 448 - if (fb) 449 - nplanes = fb->format->num_planes; 450 - 451 - if (nplanes > layer->max_planes) 452 - return; 453 - 454 - slot = &upd->slots[upd->next]; 455 - 456 - fb_flip = slot->fb_flip; 457 - old_fb = slot->fb_flip->fb; 458 - 459 - for (i = 0; i < nplanes; i++) { 460 - struct drm_gem_cma_object *gem; 461 - 462 - dscr = slot->fb_flip->dscrs[i]; 463 - gem = drm_fb_cma_get_gem_obj(fb, i); 464 - dscr->addr = gem->paddr + offsets[i]; 465 - } 466 - 467 - fb_flip->ngems = nplanes; 468 - fb_flip->fb = fb; 469 - 470 - if (fb) 471 - drm_framebuffer_reference(fb); 472 - 473 - if (old_fb) 474 - drm_framebuffer_unreference(old_fb); 475 - } 476 - 477 - void atmel_hlcdc_layer_update_cfg(struct atmel_hlcdc_layer *layer, int cfg, 478 - u32 mask, u32 val) 479 - { 480 - struct atmel_hlcdc_layer_update *upd = &layer->update; 481 - struct atmel_hlcdc_layer_update_slot *slot; 482 - 483 - if (upd->next < 0 || upd->next > 1) 484 - return; 485 - 486 - if (cfg >= layer->desc->nconfigs) 487 - return; 488 - 489 - slot = &upd->slots[upd->next]; 490 - slot->configs[cfg] &= ~mask; 491 - slot->configs[cfg] |= (val & mask); 492 - set_bit(cfg, slot->updated_configs); 493 - } 494 - 495 - void atmel_hlcdc_layer_update_commit(struct atmel_hlcdc_layer *layer) 496 - { 497 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 498 - struct atmel_hlcdc_layer_update *upd = &layer->update; 499 - struct atmel_hlcdc_layer_update_slot *slot; 500 - unsigned long flags; 501 - 502 - if (upd->next < 0 || upd->next > 1) 503 - return; 504 - 505 - slot = &upd->slots[upd->next]; 506 - 507 - spin_lock_irqsave(&layer->lock, flags); 508 - 509 - /* 510 - * Release 
pending update request and replace it by the new one. 511 - */ 512 - if (upd->pending >= 0) 513 - atmel_hlcdc_layer_update_reset(layer, upd->pending); 514 - 515 - upd->pending = upd->next; 516 - upd->next = -1; 517 - 518 - if (!dma->queue) 519 - atmel_hlcdc_layer_update_apply(layer); 520 - 521 - spin_unlock_irqrestore(&layer->lock, flags); 522 - 523 - 524 - upd->next = -1; 525 - } 526 - 527 - static int atmel_hlcdc_layer_dma_init(struct drm_device *dev, 528 - struct atmel_hlcdc_layer *layer) 529 - { 530 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 531 - dma_addr_t dma_addr; 532 - int i; 533 - 534 - dma->dscrs = dma_alloc_coherent(dev->dev, 535 - layer->max_planes * 4 * 536 - sizeof(*dma->dscrs), 537 - &dma_addr, GFP_KERNEL); 538 - if (!dma->dscrs) 539 - return -ENOMEM; 540 - 541 - for (i = 0; i < layer->max_planes * 4; i++) { 542 - struct atmel_hlcdc_dma_channel_dscr *dscr = &dma->dscrs[i]; 543 - 544 - dscr->next = dma_addr + (i * sizeof(*dscr)); 545 - } 546 - 547 - return 0; 548 - } 549 - 550 - static void atmel_hlcdc_layer_dma_cleanup(struct drm_device *dev, 551 - struct atmel_hlcdc_layer *layer) 552 - { 553 - struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma; 554 - int i; 555 - 556 - for (i = 0; i < layer->max_planes * 4; i++) { 557 - struct atmel_hlcdc_dma_channel_dscr *dscr = &dma->dscrs[i]; 558 - 559 - dscr->status = 0; 560 - } 561 - 562 - dma_free_coherent(dev->dev, layer->max_planes * 4 * 563 - sizeof(*dma->dscrs), dma->dscrs, 564 - dma->dscrs[0].next); 565 - } 566 - 567 - static int atmel_hlcdc_layer_update_init(struct drm_device *dev, 568 - struct atmel_hlcdc_layer *layer, 569 - const struct atmel_hlcdc_layer_desc *desc) 570 - { 571 - struct atmel_hlcdc_layer_update *upd = &layer->update; 572 - int updated_size; 573 - void *buffer; 574 - int i; 575 - 576 - updated_size = DIV_ROUND_UP(desc->nconfigs, 577 - BITS_PER_BYTE * 578 - sizeof(unsigned long)); 579 - 580 - buffer = devm_kzalloc(dev->dev, 581 - ((desc->nconfigs * sizeof(u32)) + 
582 - (updated_size * sizeof(unsigned long))) * 2, 583 - GFP_KERNEL); 584 - if (!buffer) 585 - return -ENOMEM; 586 - 587 - for (i = 0; i < 2; i++) { 588 - upd->slots[i].updated_configs = buffer; 589 - buffer += updated_size * sizeof(unsigned long); 590 - upd->slots[i].configs = buffer; 591 - buffer += desc->nconfigs * sizeof(u32); 592 - } 593 - 594 - upd->pending = -1; 595 - upd->next = -1; 596 - 597 - return 0; 598 - } 599 - 600 - int atmel_hlcdc_layer_init(struct drm_device *dev, 601 - struct atmel_hlcdc_layer *layer, 602 - const struct atmel_hlcdc_layer_desc *desc) 603 - { 604 - struct atmel_hlcdc_dc *dc = dev->dev_private; 605 - struct regmap *regmap = dc->hlcdc->regmap; 606 - unsigned int tmp; 607 - int ret; 608 - int i; 609 - 610 - layer->hlcdc = dc->hlcdc; 611 - layer->wq = dc->wq; 612 - layer->desc = desc; 613 - 614 - regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR, 615 - ATMEL_HLCDC_LAYER_RST); 616 - for (i = 0; i < desc->formats->nformats; i++) { 617 - int nplanes = drm_format_num_planes(desc->formats->formats[i]); 618 - 619 - if (nplanes > layer->max_planes) 620 - layer->max_planes = nplanes; 621 - } 622 - 623 - spin_lock_init(&layer->lock); 624 - drm_flip_work_init(&layer->gc, desc->name, 625 - atmel_hlcdc_layer_fb_flip_release); 626 - ret = atmel_hlcdc_layer_dma_init(dev, layer); 627 - if (ret) 628 - return ret; 629 - 630 - ret = atmel_hlcdc_layer_update_init(dev, layer, desc); 631 - if (ret) 632 - return ret; 633 - 634 - /* Flush Status Register */ 635 - regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IDR, 636 - 0xffffffff); 637 - regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR, 638 - &tmp); 639 - 640 - tmp = 0; 641 - for (i = 0; i < layer->max_planes; i++) 642 - tmp |= (ATMEL_HLCDC_LAYER_DMA_IRQ | 643 - ATMEL_HLCDC_LAYER_DSCR_IRQ | 644 - ATMEL_HLCDC_LAYER_ADD_IRQ | 645 - ATMEL_HLCDC_LAYER_DONE_IRQ | 646 - ATMEL_HLCDC_LAYER_OVR_IRQ) << (8 * i); 647 - 648 - regmap_write(regmap, desc->regs_offset + 
ATMEL_HLCDC_LAYER_IER, tmp); 649 - 650 - return 0; 651 - } 652 - 653 - void atmel_hlcdc_layer_cleanup(struct drm_device *dev, 654 - struct atmel_hlcdc_layer *layer) 655 - { 656 - const struct atmel_hlcdc_layer_desc *desc = layer->desc; 657 - struct regmap *regmap = layer->hlcdc->regmap; 658 - 659 - regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IDR, 660 - 0xffffffff); 661 - regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR, 662 - ATMEL_HLCDC_LAYER_RST); 663 - 664 - atmel_hlcdc_layer_dma_cleanup(dev, layer); 665 - drm_flip_work_cleanup(&layer->gc); 666 - }
-399
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.h
··· 1 - /* 2 - * Copyright (C) 2014 Free Electrons 3 - * Copyright (C) 2014 Atmel 4 - * 5 - * Author: Boris BREZILLON <boris.brezillon@free-electrons.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms of the GNU General Public License version 2 as published by 9 - * the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope that it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 - */ 19 - 20 - #ifndef DRM_ATMEL_HLCDC_LAYER_H 21 - #define DRM_ATMEL_HLCDC_LAYER_H 22 - 23 - #include <linux/mfd/atmel-hlcdc.h> 24 - 25 - #include <drm/drm_crtc.h> 26 - #include <drm/drm_flip_work.h> 27 - #include <drm/drmP.h> 28 - 29 - #define ATMEL_HLCDC_LAYER_CHER 0x0 30 - #define ATMEL_HLCDC_LAYER_CHDR 0x4 31 - #define ATMEL_HLCDC_LAYER_CHSR 0x8 32 - #define ATMEL_HLCDC_LAYER_DMA_CHAN BIT(0) 33 - #define ATMEL_HLCDC_LAYER_UPDATE BIT(1) 34 - #define ATMEL_HLCDC_LAYER_A2Q BIT(2) 35 - #define ATMEL_HLCDC_LAYER_RST BIT(8) 36 - 37 - #define ATMEL_HLCDC_LAYER_IER 0xc 38 - #define ATMEL_HLCDC_LAYER_IDR 0x10 39 - #define ATMEL_HLCDC_LAYER_IMR 0x14 40 - #define ATMEL_HLCDC_LAYER_ISR 0x18 41 - #define ATMEL_HLCDC_LAYER_DFETCH BIT(0) 42 - #define ATMEL_HLCDC_LAYER_LFETCH BIT(1) 43 - #define ATMEL_HLCDC_LAYER_DMA_IRQ BIT(2) 44 - #define ATMEL_HLCDC_LAYER_DSCR_IRQ BIT(3) 45 - #define ATMEL_HLCDC_LAYER_ADD_IRQ BIT(4) 46 - #define ATMEL_HLCDC_LAYER_DONE_IRQ BIT(5) 47 - #define ATMEL_HLCDC_LAYER_OVR_IRQ BIT(6) 48 - 49 - #define ATMEL_HLCDC_LAYER_PLANE_HEAD(n) (((n) * 0x10) + 0x1c) 50 - #define ATMEL_HLCDC_LAYER_PLANE_ADDR(n) (((n) * 0x10) + 0x20) 51 - #define ATMEL_HLCDC_LAYER_PLANE_CTRL(n) (((n) * 0x10) + 
0x24) 52 - #define ATMEL_HLCDC_LAYER_PLANE_NEXT(n) (((n) * 0x10) + 0x28) 53 - #define ATMEL_HLCDC_LAYER_CFG(p, c) (((c) * 4) + ((p)->max_planes * 0x10) + 0x1c) 54 - 55 - #define ATMEL_HLCDC_LAYER_DMA_CFG_ID 0 56 - #define ATMEL_HLCDC_LAYER_DMA_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, ATMEL_HLCDC_LAYER_DMA_CFG_ID) 57 - #define ATMEL_HLCDC_LAYER_DMA_SIF BIT(0) 58 - #define ATMEL_HLCDC_LAYER_DMA_BLEN_MASK GENMASK(5, 4) 59 - #define ATMEL_HLCDC_LAYER_DMA_BLEN_SINGLE (0 << 4) 60 - #define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR4 (1 << 4) 61 - #define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR8 (2 << 4) 62 - #define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 (3 << 4) 63 - #define ATMEL_HLCDC_LAYER_DMA_DLBO BIT(8) 64 - #define ATMEL_HLCDC_LAYER_DMA_ROTDIS BIT(12) 65 - #define ATMEL_HLCDC_LAYER_DMA_LOCKDIS BIT(13) 66 - 67 - #define ATMEL_HLCDC_LAYER_FORMAT_CFG_ID 1 68 - #define ATMEL_HLCDC_LAYER_FORMAT_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, ATMEL_HLCDC_LAYER_FORMAT_CFG_ID) 69 - #define ATMEL_HLCDC_LAYER_RGB (0 << 0) 70 - #define ATMEL_HLCDC_LAYER_CLUT (1 << 0) 71 - #define ATMEL_HLCDC_LAYER_YUV (2 << 0) 72 - #define ATMEL_HLCDC_RGB_MODE(m) (((m) & 0xf) << 4) 73 - #define ATMEL_HLCDC_CLUT_MODE(m) (((m) & 0x3) << 8) 74 - #define ATMEL_HLCDC_YUV_MODE(m) (((m) & 0xf) << 12) 75 - #define ATMEL_HLCDC_YUV422ROT BIT(16) 76 - #define ATMEL_HLCDC_YUV422SWP BIT(17) 77 - #define ATMEL_HLCDC_DSCALEOPT BIT(20) 78 - 79 - #define ATMEL_HLCDC_XRGB4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(0)) 80 - #define ATMEL_HLCDC_ARGB4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(1)) 81 - #define ATMEL_HLCDC_RGBA4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(2)) 82 - #define ATMEL_HLCDC_RGB565_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(3)) 83 - #define ATMEL_HLCDC_ARGB1555_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(4)) 84 - #define ATMEL_HLCDC_XRGB8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(9)) 85 - #define ATMEL_HLCDC_RGB888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(10)) 86 - 
#define ATMEL_HLCDC_ARGB8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(12)) 87 - #define ATMEL_HLCDC_RGBA8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(13)) 88 - 89 - #define ATMEL_HLCDC_AYUV_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(0)) 90 - #define ATMEL_HLCDC_YUYV_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(1)) 91 - #define ATMEL_HLCDC_UYVY_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(2)) 92 - #define ATMEL_HLCDC_YVYU_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(3)) 93 - #define ATMEL_HLCDC_VYUY_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(4)) 94 - #define ATMEL_HLCDC_NV61_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(5)) 95 - #define ATMEL_HLCDC_YUV422_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(6)) 96 - #define ATMEL_HLCDC_NV21_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(7)) 97 - #define ATMEL_HLCDC_YUV420_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(8)) 98 - 99 - #define ATMEL_HLCDC_LAYER_POS_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.pos) 100 - #define ATMEL_HLCDC_LAYER_SIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.size) 101 - #define ATMEL_HLCDC_LAYER_MEMSIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.memsize) 102 - #define ATMEL_HLCDC_LAYER_XSTRIDE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.xstride) 103 - #define ATMEL_HLCDC_LAYER_PSTRIDE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.pstride) 104 - #define ATMEL_HLCDC_LAYER_DFLTCOLOR_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.default_color) 105 - #define ATMEL_HLCDC_LAYER_CRKEY_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.chroma_key) 106 - #define ATMEL_HLCDC_LAYER_CRKEY_MASK_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.chroma_key_mask) 107 - 108 - #define ATMEL_HLCDC_LAYER_GENERAL_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.general_config) 109 - #define ATMEL_HLCDC_LAYER_CRKEY BIT(0) 110 - #define ATMEL_HLCDC_LAYER_INV BIT(1) 111 - #define ATMEL_HLCDC_LAYER_ITER2BL BIT(2) 
112 - #define ATMEL_HLCDC_LAYER_ITER BIT(3)
113 - #define ATMEL_HLCDC_LAYER_REVALPHA BIT(4)
114 - #define ATMEL_HLCDC_LAYER_GAEN BIT(5)
115 - #define ATMEL_HLCDC_LAYER_LAEN BIT(6)
116 - #define ATMEL_HLCDC_LAYER_OVR BIT(7)
117 - #define ATMEL_HLCDC_LAYER_DMA BIT(8)
118 - #define ATMEL_HLCDC_LAYER_REP BIT(9)
119 - #define ATMEL_HLCDC_LAYER_DSTKEY BIT(10)
120 - #define ATMEL_HLCDC_LAYER_DISCEN BIT(11)
121 - #define ATMEL_HLCDC_LAYER_GA_SHIFT 16
122 - #define ATMEL_HLCDC_LAYER_GA_MASK GENMASK(23, ATMEL_HLCDC_LAYER_GA_SHIFT)
123 - #define ATMEL_HLCDC_LAYER_GA(x) ((x) << ATMEL_HLCDC_LAYER_GA_SHIFT)
124 -
125 - #define ATMEL_HLCDC_LAYER_CSC_CFG(p, o) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.csc + o)
126 -
127 - #define ATMEL_HLCDC_LAYER_DISC_POS_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.disc_pos)
128 -
129 - #define ATMEL_HLCDC_LAYER_DISC_SIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.disc_size)
130 -
131 - #define ATMEL_HLCDC_MAX_PLANES 3
132 -
133 - #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED BIT(0)
134 - #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED BIT(1)
135 - #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE BIT(2)
136 - #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN BIT(3)
137 -
138 - /**
139 -  * Atmel HLCDC Layer registers layout structure
140 -  *
141 -  * Each HLCDC layer has its own register organization and a given register
142 -  * can be placed differently on 2 different layers depending on its
143 -  * capabilities.
144 -  * This structure stores common registers layout for a given layer and is
145 -  * used by HLCDC layer code to choose the appropriate register to write to
146 -  * or to read from.
147 -  *
148 -  * For all fields, a value of zero means "unsupported".
149 -  *
150 -  * See Atmel's datasheet for a detailled description of these registers.
151 -  *
152 -  * @xstride: xstride registers
153 -  * @pstride: pstride registers
154 -  * @pos: position register
155 -  * @size: displayed size register
156 -  * @memsize: memory size register
157 -  * @default_color: default color register
158 -  * @chroma_key: chroma key register
159 -  * @chroma_key_mask: chroma key mask register
160 -  * @general_config: general layer config register
161 -  * @disc_pos: discard area position register
162 -  * @disc_size: discard area size register
163 -  * @csc: color space conversion register
164 -  */
165 - struct atmel_hlcdc_layer_cfg_layout {
166 -         int xstride[ATMEL_HLCDC_MAX_PLANES];
167 -         int pstride[ATMEL_HLCDC_MAX_PLANES];
168 -         int pos;
169 -         int size;
170 -         int memsize;
171 -         int default_color;
172 -         int chroma_key;
173 -         int chroma_key_mask;
174 -         int general_config;
175 -         int disc_pos;
176 -         int disc_size;
177 -         int csc;
178 - };
179 -
180 - /**
181 -  * Atmel HLCDC framebuffer flip structure
182 -  *
183 -  * This structure is allocated when someone asked for a layer update (most
184 -  * likely a DRM plane update, either primary, overlay or cursor plane) and
185 -  * released when the layer do not need to reference the framebuffer object
186 -  * anymore (i.e. the layer was disabled or updated).
187 -  *
188 -  * @dscrs: DMA descriptors
189 -  * @fb: the referenced framebuffer object
190 -  * @ngems: number of GEM objects referenced by the fb element
191 -  * @status: fb flip operation status
192 -  */
193 - struct atmel_hlcdc_layer_fb_flip {
194 -         struct atmel_hlcdc_dma_channel_dscr *dscrs[ATMEL_HLCDC_MAX_PLANES];
195 -         struct drm_flip_task *task;
196 -         struct drm_framebuffer *fb;
197 -         int ngems;
198 -         u32 status;
199 - };
200 -
201 - /**
202 -  * Atmel HLCDC DMA descriptor structure
203 -  *
204 -  * This structure is used by the HLCDC DMA engine to schedule a DMA transfer.
205 -  *
206 -  * The structure fields must remain in this specific order, because they're
207 -  * used by the HLCDC DMA engine, which expect them in this order.
208 -  * HLCDC DMA descriptors must be aligned on 64 bits.
209 -  *
210 -  * @addr: buffer DMA address
211 -  * @ctrl: DMA transfer options
212 -  * @next: next DMA descriptor to fetch
213 -  * @gem_flip: the attached gem_flip operation
214 -  */
215 - struct atmel_hlcdc_dma_channel_dscr {
216 -         dma_addr_t addr;
217 -         u32 ctrl;
218 -         dma_addr_t next;
219 -         u32 status;
220 - } __aligned(sizeof(u64));
221 -
222 - /**
223 -  * Atmel HLCDC layer types
224 -  */
225 - enum atmel_hlcdc_layer_type {
226 -         ATMEL_HLCDC_BASE_LAYER,
227 -         ATMEL_HLCDC_OVERLAY_LAYER,
228 -         ATMEL_HLCDC_CURSOR_LAYER,
229 -         ATMEL_HLCDC_PP_LAYER,
230 - };
231 -
232 - /**
233 -  * Atmel HLCDC Supported formats structure
234 -  *
235 -  * This structure list all the formats supported by a given layer.
236 -  *
237 -  * @nformats: number of supported formats
238 -  * @formats: supported formats
239 -  */
240 - struct atmel_hlcdc_formats {
241 -         int nformats;
242 -         uint32_t *formats;
243 - };
244 -
245 - /**
246 -  * Atmel HLCDC Layer description structure
247 -  *
248 -  * This structure describe the capabilities provided by a given layer.
249 -  *
250 -  * @name: layer name
251 -  * @type: layer type
252 -  * @id: layer id
253 -  * @regs_offset: offset of the layer registers from the HLCDC registers base
254 -  * @nconfigs: number of config registers provided by this layer
255 -  * @formats: supported formats
256 -  * @layout: config registers layout
257 -  * @max_width: maximum width supported by this layer (0 means unlimited)
258 -  * @max_height: maximum height supported by this layer (0 means unlimited)
259 -  */
260 - struct atmel_hlcdc_layer_desc {
261 -         const char *name;
262 -         enum atmel_hlcdc_layer_type type;
263 -         int id;
264 -         int regs_offset;
265 -         int nconfigs;
266 -         struct atmel_hlcdc_formats *formats;
267 -         struct atmel_hlcdc_layer_cfg_layout layout;
268 -         int max_width;
269 -         int max_height;
270 - };
271 -
272 - /**
273 -  * Atmel HLCDC Layer Update Slot structure
274 -  *
275 -  * This structure stores layer update requests to be applied on next frame.
276 -  * This is the base structure behind the atomic layer update infrastructure.
277 -  *
278 -  * Atomic layer update provides a way to update all layer's parameters
279 -  * simultaneously. This is needed to avoid incompatible sequential updates
280 -  * like this one:
281 -  * 1) update layer format from RGB888 (1 plane/buffer) to YUV422
282 -  *    (2 planes/buffers)
283 -  * 2) the format update is applied but the DMA channel for the second
284 -  *    plane/buffer is not enabled
285 -  * 3) enable the DMA channel for the second plane
286 -  *
287 -  * @fb_flip: fb_flip object
288 -  * @updated_configs: bitmask used to record modified configs
289 -  * @configs: new config values
290 -  */
291 - struct atmel_hlcdc_layer_update_slot {
292 -         struct atmel_hlcdc_layer_fb_flip *fb_flip;
293 -         unsigned long *updated_configs;
294 -         u32 *configs;
295 - };
296 -
297 - /**
298 -  * Atmel HLCDC Layer Update structure
299 -  *
300 -  * This structure provides a way to queue layer update requests.
301 -  *
302 -  * At a given time there is at most:
303 -  * - one pending update request, which means the update request has been
304 -  *   committed (or validated) and is waiting for the DMA channel(s) to be
305 -  *   available
306 -  * - one request being prepared, which means someone started a layer update
307 -  *   but has not committed it yet. There cannot be more than one started
308 -  *   request, because the update lock is taken when starting a layer update
309 -  *   and release when committing or rolling back the request.
310 -  *
311 -  * @slots: update slots. One is used for pending request and the other one
312 -  *         for started update request
313 -  * @pending: the pending slot index or -1 if no request is pending
314 -  * @next: the started update slot index or -1 no update has been started
315 -  */
316 - struct atmel_hlcdc_layer_update {
317 -         struct atmel_hlcdc_layer_update_slot slots[2];
318 -         int pending;
319 -         int next;
320 - };
321 -
322 - enum atmel_hlcdc_layer_dma_channel_status {
323 -         ATMEL_HLCDC_LAYER_DISABLED,
324 -         ATMEL_HLCDC_LAYER_ENABLED,
325 -         ATMEL_HLCDC_LAYER_DISABLING,
326 - };
327 -
328 - /**
329 -  * Atmel HLCDC Layer DMA channel structure
330 -  *
331 -  * This structure stores information on the DMA channel associated to a
332 -  * given layer.
333 -  *
334 -  * @status: DMA channel status
335 -  * @cur: current framebuffer
336 -  * @queue: next framebuffer
337 -  * @dscrs: allocated DMA descriptors
338 -  */
339 - struct atmel_hlcdc_layer_dma_channel {
340 -         enum atmel_hlcdc_layer_dma_channel_status status;
341 -         struct atmel_hlcdc_layer_fb_flip *cur;
342 -         struct atmel_hlcdc_layer_fb_flip *queue;
343 -         struct atmel_hlcdc_dma_channel_dscr *dscrs;
344 - };
345 -
346 - /**
347 -  * Atmel HLCDC Layer structure
348 -  *
349 -  * This structure stores information on the layer instance.
350 -  *
351 -  * @desc: layer description
352 -  * @max_planes: maximum planes/buffers that can be associated with this layer.
353 -  *              This depends on the supported formats.
354 -  * @hlcdc: pointer to the atmel_hlcdc structure provided by the MFD device
355 -  * @dma: dma channel
356 -  * @gc: fb flip garbage collector
357 -  * @update: update handler
358 -  * @lock: layer lock
359 -  */
360 - struct atmel_hlcdc_layer {
361 -         const struct atmel_hlcdc_layer_desc *desc;
362 -         int max_planes;
363 -         struct atmel_hlcdc *hlcdc;
364 -         struct workqueue_struct *wq;
365 -         struct drm_flip_work gc;
366 -         struct atmel_hlcdc_layer_dma_channel dma;
367 -         struct atmel_hlcdc_layer_update update;
368 -         spinlock_t lock;
369 - };
370 -
371 - void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer);
372 -
373 - int atmel_hlcdc_layer_init(struct drm_device *dev,
374 -                            struct atmel_hlcdc_layer *layer,
375 -                            const struct atmel_hlcdc_layer_desc *desc);
376 -
377 - void atmel_hlcdc_layer_cleanup(struct drm_device *dev,
378 -                                struct atmel_hlcdc_layer *layer);
379 -
380 - void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer);
381 -
382 - int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer);
383 -
384 - void atmel_hlcdc_layer_update_cfg(struct atmel_hlcdc_layer *layer, int cfg,
385 -                                   u32 mask, u32 val);
386 -
387 - void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer,
388 -                                      struct drm_framebuffer *fb,
389 -                                      unsigned int *offsets);
390 -
391 - void atmel_hlcdc_layer_update_set_finished(struct atmel_hlcdc_layer *layer,
392 -                                            void (*finished)(void *data),
393 -                                            void *finished_data);
394 -
395 - void atmel_hlcdc_layer_update_rollback(struct atmel_hlcdc_layer *layer);
396 -
397 - void atmel_hlcdc_layer_update_commit(struct atmel_hlcdc_layer *layer);
398 -
399 - #endif /* DRM_ATMEL_HLCDC_LAYER_H */
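The global-alpha bitfield helpers deleted above (ATMEL_HLCDC_LAYER_GAEN, ATMEL_HLCDC_LAYER_GA_SHIFT/GA_MASK) live on in the reworked plane code, which packs state->alpha into the general-config word when the pixel format has no per-pixel alpha. A minimal self-contained sketch of that packing: BIT() and GENMASK() are expanded by hand here (in the kernel they come from linux/bits.h), and pack_global_alpha() is a hypothetical helper named only for this illustration, not a function from the driver.

```c
#include <stdint.h>

/* Hand-expanded stand-ins for the kernel's BIT()/GENMASK() macros. */
#define HLCDC_BIT(n)        (1u << (n))
#define HLCDC_GENMASK(h, l) (((~0u) >> (31 - (h))) & ~((1u << (l)) - 1u))

/* Field definitions mirroring the removed header above. */
#define LAYER_GAEN     HLCDC_BIT(5)                      /* global alpha enable */
#define LAYER_GA_SHIFT 16
#define LAYER_GA_MASK  HLCDC_GENMASK(23, LAYER_GA_SHIFT) /* alpha in bits 23:16 */
#define LAYER_GA(x)    ((uint32_t)(x) << LAYER_GA_SHIFT)

/* Hypothetical helper: clear the old alpha field, then set GAEN and the
 * new 8-bit global alpha, as the general-settings update does. */
static uint32_t pack_global_alpha(uint32_t cfg, uint8_t alpha)
{
        cfg &= ~LAYER_GA_MASK;
        return cfg | LAYER_GAEN | (LAYER_GA(alpha) & LAYER_GA_MASK);
}
```

For alpha = 255 and an empty config word this yields 0x00ff0020: GAEN (bit 5) set and the alpha value in bits 23:16.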
+310 -326
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
···
  32   32  * @src_w: buffer width
  33   33  * @src_h: buffer height
  34   34  * @alpha: alpha blending of the plane
       35 + * @disc_x: x discard position
       36 + * @disc_y: y discard position
       37 + * @disc_w: discard width
       38 + * @disc_h: discard height
  35   39  * @bpp: bytes per pixel deduced from pixel_format
  36   40  * @offsets: offsets to apply to the GEM buffers
  37   41  * @xstride: value to add to the pixel pointer between each line
  38   42  * @pstride: value to add to the pixel pointer between each pixel
  39   43  * @nplanes: number of planes (deduced from pixel_format)
  40      - * @prepared: plane update has been prepared
       44 + * @dscrs: DMA descriptors
  41   45  */
  42   46  struct atmel_hlcdc_plane_state {
  43   47          struct drm_plane_state base;
···
  56   52
  57   53          u8 alpha;
  58   54
  59      -        bool disc_updated;
  60      -
  61   55          int disc_x;
  62   56          int disc_y;
  63   57          int disc_w;
···
  64   62          int ahb_id;
  65   63
  66   64          /* These fields are private and should not be touched */
  67      -        int bpp[ATMEL_HLCDC_MAX_PLANES];
  68      -        unsigned int offsets[ATMEL_HLCDC_MAX_PLANES];
  69      -        int xstride[ATMEL_HLCDC_MAX_PLANES];
  70      -        int pstride[ATMEL_HLCDC_MAX_PLANES];
       65 +        int bpp[ATMEL_HLCDC_LAYER_MAX_PLANES];
       66 +        unsigned int offsets[ATMEL_HLCDC_LAYER_MAX_PLANES];
       67 +        int xstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
       68 +        int pstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
  71   69          int nplanes;
  72      -        bool prepared;
       70 +
       71 +        /* DMA descriptors. */
       72 +        struct atmel_hlcdc_dma_channel_dscr *dscrs[ATMEL_HLCDC_LAYER_MAX_PLANES];
  73   73  };
  74   74
  75   75  static inline struct atmel_hlcdc_plane_state *
···
 263  259          0x00205907,
 264  260  };
 265  261
      262 + #define ATMEL_HLCDC_XPHIDEF 4
      263 + #define ATMEL_HLCDC_YPHIDEF 4
      264 +
      265 + static u32 atmel_hlcdc_plane_phiscaler_get_factor(u32 srcsize,
      266 +                                                   u32 dstsize,
      267 +                                                   u32 phidef)
      268 + {
      269 +        u32 factor, max_memsize;
      270 +
      271 +        factor = (256 * ((8 * (srcsize - 1)) - phidef)) / (dstsize - 1);
      272 +        max_memsize = ((factor * (dstsize - 1)) + (256 * phidef)) / 2048;
      273 +
      274 +        if (max_memsize > srcsize - 1)
      275 +                factor--;
      276 +
      277 +        return factor;
      278 + }
      279 +
      280 + static void
      281 + atmel_hlcdc_plane_scaler_set_phicoeff(struct atmel_hlcdc_plane *plane,
      282 +                                       const u32 *coeff_tab, int size,
      283 +                                       unsigned int cfg_offs)
      284 + {
      285 +        int i;
      286 +
      287 +        for (i = 0; i < size; i++)
      288 +                atmel_hlcdc_layer_write_cfg(&plane->layer, cfg_offs + i,
      289 +                                            coeff_tab[i]);
      290 + }
      291 +
      292 + void atmel_hlcdc_plane_setup_scaler(struct atmel_hlcdc_plane *plane,
      293 +                                    struct atmel_hlcdc_plane_state *state)
      294 + {
      295 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
      296 +        u32 xfactor, yfactor;
      297 +
      298 +        if (!desc->layout.scaler_config)
      299 +                return;
      300 +
      301 +        if (state->crtc_w == state->src_w && state->crtc_h == state->src_h) {
      302 +                atmel_hlcdc_layer_write_cfg(&plane->layer,
      303 +                                            desc->layout.scaler_config, 0);
      304 +                return;
      305 +        }
      306 +
      307 +        if (desc->layout.phicoeffs.x) {
      308 +                xfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_w,
      309 +                                                                 state->crtc_w,
      310 +                                                                 ATMEL_HLCDC_XPHIDEF);
      311 +
      312 +                yfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_h,
      313 +                                                                 state->crtc_h,
      314 +                                                                 ATMEL_HLCDC_YPHIDEF);
      315 +
      316 +                atmel_hlcdc_plane_scaler_set_phicoeff(plane,
      317 +                                state->crtc_w < state->src_w ?
      318 +                                heo_downscaling_xcoef :
      319 +                                heo_upscaling_xcoef,
      320 +                                ARRAY_SIZE(heo_upscaling_xcoef),
      321 +                                desc->layout.phicoeffs.x);
      322 +
      323 +                atmel_hlcdc_plane_scaler_set_phicoeff(plane,
      324 +                                state->crtc_h < state->src_h ?
      325 +                                heo_downscaling_ycoef :
      326 +                                heo_upscaling_ycoef,
      327 +                                ARRAY_SIZE(heo_upscaling_ycoef),
      328 +                                desc->layout.phicoeffs.y);
      329 +        } else {
      330 +                xfactor = (1024 * state->src_w) / state->crtc_w;
      331 +                yfactor = (1024 * state->src_h) / state->crtc_h;
      332 +        }
      333 +
      334 +        atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.scaler_config,
      335 +                                    ATMEL_HLCDC_LAYER_SCALER_ENABLE |
      336 +                                    ATMEL_HLCDC_LAYER_SCALER_FACTORS(xfactor,
      337 +                                                                     yfactor));
      338 + }
      339 +
 266  340  static void
 267  341  atmel_hlcdc_plane_update_pos_and_size(struct atmel_hlcdc_plane *plane,
 268  342                                        struct atmel_hlcdc_plane_state *state)
 269  343  {
 270      -        const struct atmel_hlcdc_layer_cfg_layout *layout =
 271      -                                                &plane->layer.desc->layout;
      344 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
 272  345
 273      -        if (layout->size)
 274      -                atmel_hlcdc_layer_update_cfg(&plane->layer,
 275      -                                             layout->size,
 276      -                                             0xffffffff,
 277      -                                             (state->crtc_w - 1) |
 278      -                                             ((state->crtc_h - 1) << 16));
      346 +        if (desc->layout.size)
      347 +                atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.size,
      348 +                                        ATMEL_HLCDC_LAYER_SIZE(state->crtc_w,
      349 +                                                               state->crtc_h));
 279  350
 280      -        if (layout->memsize)
 281      -                atmel_hlcdc_layer_update_cfg(&plane->layer,
 282      -                                             layout->memsize,
 283      -                                             0xffffffff,
 284      -                                             (state->src_w - 1) |
 285      -                                             ((state->src_h - 1) << 16));
      351 +        if (desc->layout.memsize)
      352 +                atmel_hlcdc_layer_write_cfg(&plane->layer,
      353 +                                        desc->layout.memsize,
      354 +                                        ATMEL_HLCDC_LAYER_SIZE(state->src_w,
      355 +                                                               state->src_h));
 286  356
 287      -        if (layout->pos)
 288      -                atmel_hlcdc_layer_update_cfg(&plane->layer,
 289      -                                             layout->pos,
 290      -                                             0xffffffff,
 291      -                                             state->crtc_x |
 292      -                                             (state->crtc_y << 16));
      357 +        if (desc->layout.pos)
      358 +                atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.pos,
      359 +                                        ATMEL_HLCDC_LAYER_POS(state->crtc_x,
      360 +                                                              state->crtc_y));
 293  361
 294      -        /* TODO: rework the rescaling part */
 295      -        if (state->crtc_w != state->src_w || state->crtc_h != state->src_h) {
 296      -                u32 factor_reg = 0;
 297      -
 298      -                if (state->crtc_w != state->src_w) {
 299      -                        int i;
 300      -                        u32 factor;
 301      -                        u32 *coeff_tab = heo_upscaling_xcoef;
 302      -                        u32 max_memsize;
 303      -
 304      -                        if (state->crtc_w < state->src_w)
 305      -                                coeff_tab = heo_downscaling_xcoef;
 306      -                        for (i = 0; i < ARRAY_SIZE(heo_upscaling_xcoef); i++)
 307      -                                atmel_hlcdc_layer_update_cfg(&plane->layer,
 308      -                                                             17 + i,
 309      -                                                             0xffffffff,
 310      -                                                             coeff_tab[i]);
 311      -                        factor = ((8 * 256 * state->src_w) - (256 * 4)) /
 312      -                                 state->crtc_w;
 313      -                        factor++;
 314      -                        max_memsize = ((factor * state->crtc_w) + (256 * 4)) /
 315      -                                      2048;
 316      -                        if (max_memsize > state->src_w)
 317      -                                factor--;
 318      -                        factor_reg |= factor | 0x80000000;
 319      -                }
 320      -
 321      -                if (state->crtc_h != state->src_h) {
 322      -                        int i;
 323      -                        u32 factor;
 324      -                        u32 *coeff_tab = heo_upscaling_ycoef;
 325      -                        u32 max_memsize;
 326      -
 327      -                        if (state->crtc_h < state->src_h)
 328      -                                coeff_tab = heo_downscaling_ycoef;
 329      -                        for (i = 0; i < ARRAY_SIZE(heo_upscaling_ycoef); i++)
 330      -                                atmel_hlcdc_layer_update_cfg(&plane->layer,
 331      -                                                             33 + i,
 332      -                                                             0xffffffff,
 333      -                                                             coeff_tab[i]);
 334      -                        factor = ((8 * 256 * state->src_h) - (256 * 4)) /
 335      -                                 state->crtc_h;
 336      -                        factor++;
 337      -                        max_memsize = ((factor * state->crtc_h) + (256 * 4)) /
 338      -                                      2048;
 339      -                        if (max_memsize > state->src_h)
 340      -                                factor--;
 341      -                        factor_reg |= (factor << 16) | 0x80000000;
 342      -                }
 343      -
 344      -                atmel_hlcdc_layer_update_cfg(&plane->layer, 13, 0xffffffff,
 345      -                                             factor_reg);
 346      -        } else {
 347      -                atmel_hlcdc_layer_update_cfg(&plane->layer, 13, 0xffffffff, 0);
 348      -        }
      362 +        atmel_hlcdc_plane_setup_scaler(plane, state);
 349  363  }
 350  364
 351  365  static void
 352  366  atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
 353  367                                            struct atmel_hlcdc_plane_state *state)
 354  368  {
 355      -        const struct atmel_hlcdc_layer_cfg_layout *layout =
 356      -                                                &plane->layer.desc->layout;
 357      -        unsigned int cfg = ATMEL_HLCDC_LAYER_DMA;
      369 +        unsigned int cfg = ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 | state->ahb_id;
      370 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
      371 +        u32 format = state->base.fb->format->format;
      372 +
      373 +        /*
      374 +         * Rotation optimization is not working on RGB888 (rotation is still
      375 +         * working but without any optimization).
      376 +         */
      377 +        if (format == DRM_FORMAT_RGB888)
      378 +                cfg |= ATMEL_HLCDC_LAYER_DMA_ROTDIS;
      379 +
      380 +        atmel_hlcdc_layer_write_cfg(&plane->layer, ATMEL_HLCDC_LAYER_DMA_CFG,
      381 +                                    cfg);
      382 +
      383 +        cfg = ATMEL_HLCDC_LAYER_DMA;
 358  384
 359  385          if (plane->base.type != DRM_PLANE_TYPE_PRIMARY) {
 360  386                  cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
 361  387                         ATMEL_HLCDC_LAYER_ITER;
 362  388
 363      -                if (atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
      389 +                if (atmel_hlcdc_format_embeds_alpha(format))
 364  390                          cfg |= ATMEL_HLCDC_LAYER_LAEN;
 365  391                  else
 366  392                          cfg |= ATMEL_HLCDC_LAYER_GAEN |
 367  393                                 ATMEL_HLCDC_LAYER_GA(state->alpha);
 368  394          }
 369  395
 370      -        atmel_hlcdc_layer_update_cfg(&plane->layer,
 371      -                                     ATMEL_HLCDC_LAYER_DMA_CFG_ID,
 372      -                                     ATMEL_HLCDC_LAYER_DMA_BLEN_MASK |
 373      -                                     ATMEL_HLCDC_LAYER_DMA_SIF,
 374      -                                     ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 |
 375      -                                     state->ahb_id);
      396 +        if (state->disc_h && state->disc_w)
      397 +                cfg |= ATMEL_HLCDC_LAYER_DISCEN;
 376  398
 377      -        atmel_hlcdc_layer_update_cfg(&plane->layer, layout->general_config,
 378      -                                     ATMEL_HLCDC_LAYER_ITER2BL |
 379      -                                     ATMEL_HLCDC_LAYER_ITER |
 380      -                                     ATMEL_HLCDC_LAYER_GAEN |
 381      -                                     ATMEL_HLCDC_LAYER_GA_MASK |
 382      -                                     ATMEL_HLCDC_LAYER_LAEN |
 383      -                                     ATMEL_HLCDC_LAYER_OVR |
 384      -                                     ATMEL_HLCDC_LAYER_DMA, cfg);
      399 +        atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.general_config,
      400 +                                    cfg);
 385  401  }
 386  402
 387  403  static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
···
 420  396              drm_rotation_90_or_270(state->base.rotation))
 421  397                  cfg |= ATMEL_HLCDC_YUV422ROT;
 422  398
 423      -        atmel_hlcdc_layer_update_cfg(&plane->layer,
 424      -                                     ATMEL_HLCDC_LAYER_FORMAT_CFG_ID,
 425      -                                     0xffffffff,
 426      -                                     cfg);
 427      -
 428      -        /*
 429      -         * Rotation optimization is not working on RGB888 (rotation is still
 430      -         * working but without any optimization).
 431      -         */
 432      -        if (state->base.fb->format->format == DRM_FORMAT_RGB888)
 433      -                cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
 434      -        else
 435      -                cfg = 0;
 436      -
 437      -        atmel_hlcdc_layer_update_cfg(&plane->layer,
 438      -                                     ATMEL_HLCDC_LAYER_DMA_CFG_ID,
 439      -                                     ATMEL_HLCDC_LAYER_DMA_ROTDIS, cfg);
      399 +        atmel_hlcdc_layer_write_cfg(&plane->layer,
      400 +                                    ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg);
 440  401  }
 441  402
 442  403  static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
 443  404                                               struct atmel_hlcdc_plane_state *state)
 444  405  {
 445      -        struct atmel_hlcdc_layer *layer = &plane->layer;
 446      -        const struct atmel_hlcdc_layer_cfg_layout *layout =
 447      -                                                        &layer->desc->layout;
      406 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
      407 +        struct drm_framebuffer *fb = state->base.fb;
      408 +        u32 sr;
 448  409          int i;
 449  410
 450      -        atmel_hlcdc_layer_update_set_fb(&plane->layer, state->base.fb,
 451      -                                        state->offsets);
      411 +        sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
 452  412
 453  413          for (i = 0; i < state->nplanes; i++) {
 454      -                if (layout->xstride[i]) {
 455      -                        atmel_hlcdc_layer_update_cfg(&plane->layer,
 456      -                                                     layout->xstride[i],
 457      -                                                     0xffffffff,
 458      -                                                     state->xstride[i]);
      414 +                struct drm_gem_cma_object *gem = drm_fb_cma_get_gem_obj(fb, i);
      415 +
      416 +                state->dscrs[i]->addr = gem->paddr + state->offsets[i];
      417 +
      418 +                atmel_hlcdc_layer_write_reg(&plane->layer,
      419 +                                            ATMEL_HLCDC_LAYER_PLANE_HEAD(i),
      420 +                                            state->dscrs[i]->self);
      421 +
      422 +                if (!(sr & ATMEL_HLCDC_LAYER_EN)) {
      423 +                        atmel_hlcdc_layer_write_reg(&plane->layer,
      424 +                                        ATMEL_HLCDC_LAYER_PLANE_ADDR(i),
      425 +                                        state->dscrs[i]->addr);
      426 +                        atmel_hlcdc_layer_write_reg(&plane->layer,
      427 +                                        ATMEL_HLCDC_LAYER_PLANE_CTRL(i),
      428 +                                        state->dscrs[i]->ctrl);
      429 +                        atmel_hlcdc_layer_write_reg(&plane->layer,
      430 +                                        ATMEL_HLCDC_LAYER_PLANE_NEXT(i),
      431 +                                        state->dscrs[i]->self);
 459  432                  }
 460  433
 461      -                if (layout->pstride[i]) {
 462      -                        atmel_hlcdc_layer_update_cfg(&plane->layer,
 463      -                                                     layout->pstride[i],
 464      -                                                     0xffffffff,
 465      -                                                     state->pstride[i]);
 466      -                }
      434 +                if (desc->layout.xstride[i])
      435 +                        atmel_hlcdc_layer_write_cfg(&plane->layer,
      436 +                                                    desc->layout.xstride[i],
      437 +                                                    state->xstride[i]);
      438 +
      439 +                if (desc->layout.pstride[i])
      440 +                        atmel_hlcdc_layer_write_cfg(&plane->layer,
      441 +                                                    desc->layout.pstride[i],
      442 +                                                    state->pstride[i]);
 467  443          }
 468  444  }
 469  445
···
 552  528                  disc_w = ovl_state->crtc_w;
 553  529          }
 554  530
 555      -        if (disc_x == primary_state->disc_x &&
 556      -            disc_y == primary_state->disc_y &&
 557      -            disc_w == primary_state->disc_w &&
 558      -            disc_h == primary_state->disc_h)
 559      -                return 0;
 560      -
 561      -
 562  531          primary_state->disc_x = disc_x;
 563  532          primary_state->disc_y = disc_y;
 564  533          primary_state->disc_w = disc_w;
 565  534          primary_state->disc_h = disc_h;
 566      -        primary_state->disc_updated = true;
 567  535
 568  536          return 0;
 569  537  }
···
 564  548  atmel_hlcdc_plane_update_disc_area(struct atmel_hlcdc_plane *plane,
 565  549                                     struct atmel_hlcdc_plane_state *state)
 566  550  {
 567      -        const struct atmel_hlcdc_layer_cfg_layout *layout =
 568      -                                                &plane->layer.desc->layout;
 569      -        int disc_surface = 0;
      551 +        const struct atmel_hlcdc_layer_cfg_layout *layout;
 570  552
 571      -        if (!state->disc_updated)
      553 +        layout = &plane->layer.desc->layout;
      554 +        if (!layout->disc_pos || !layout->disc_size)
 572  555                  return;
 573  556
 574      -        disc_surface = state->disc_h * state->disc_w;
      557 +        atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_pos,
      558 +                                ATMEL_HLCDC_LAYER_DISC_POS(state->disc_x,
      559 +                                                           state->disc_y));
 575  560
 576      -        atmel_hlcdc_layer_update_cfg(&plane->layer, layout->general_config,
 577      -                                     ATMEL_HLCDC_LAYER_DISCEN,
 578      -                                     disc_surface ? ATMEL_HLCDC_LAYER_DISCEN : 0);
 579      -
 580      -        if (!disc_surface)
 581      -                return;
 582      -
 583      -        atmel_hlcdc_layer_update_cfg(&plane->layer,
 584      -                                     layout->disc_pos,
 585      -                                     0xffffffff,
 586      -                                     state->disc_x | (state->disc_y << 16));
 587      -
 588      -        atmel_hlcdc_layer_update_cfg(&plane->layer,
 589      -                                     layout->disc_size,
 590      -                                     0xffffffff,
 591      -                                     (state->disc_w - 1) |
 592      -                                     ((state->disc_h - 1) << 16));
      561 +        atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_size,
      562 +                                ATMEL_HLCDC_LAYER_DISC_SIZE(state->disc_w,
      563 +                                                            state->disc_h));
 593  564  }
 594  565
 595  566  static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
···
 585  582          struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
 586  583          struct atmel_hlcdc_plane_state *state =
 587  584                          drm_plane_state_to_atmel_hlcdc_plane_state(s);
 588      -        const struct atmel_hlcdc_layer_cfg_layout *layout =
 589      -                                                &plane->layer.desc->layout;
      585 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
 590  586          struct drm_framebuffer *fb = state->base.fb;
 591  587          const struct drm_display_mode *mode;
 592  588          struct drm_crtc_state *crtc_state;
···
 624  622          state->src_h >>= 16;
 625  623
 626  624          state->nplanes = fb->format->num_planes;
 627      -        if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
      625 +        if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
 628  626                  return -EINVAL;
 629  627
 630  628          /*
···
 728  726          state->crtc_w = patched_crtc_w;
 729  727          state->crtc_h = patched_crtc_h;
 730  728
 731      -        if (!layout->size &&
      729 +        if (!desc->layout.size &&
 732  730              (mode->hdisplay != state->crtc_w ||
 733  731               mode->vdisplay != state->crtc_h))
 734  732                  return -EINVAL;
 735  733
 736      -        if (plane->layer.desc->max_height &&
 737      -            state->crtc_h > plane->layer.desc->max_height)
      734 +        if (desc->max_height && state->crtc_h > desc->max_height)
 738  735                  return -EINVAL;
 739  736
 740      -        if (plane->layer.desc->max_width &&
 741      -            state->crtc_w > plane->layer.desc->max_width)
      737 +        if (desc->max_width && state->crtc_w > desc->max_width)
 742  738                  return -EINVAL;
 743  739
 744  740          if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
 745      -            (!layout->memsize ||
      741 +            (!desc->layout.memsize ||
 746  742               atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
 747  743                  return -EINVAL;
 748  744
···
 754  754          return 0;
 755  755  }
 756  756
 757      - static int atmel_hlcdc_plane_prepare_fb(struct drm_plane *p,
 758      -                                         struct drm_plane_state *new_state)
 759      - {
 760      -        /*
 761      -         * FIXME: we should avoid this const -> non-const cast but it's
 762      -         * currently the only solution we have to modify the ->prepared
 763      -         * state and rollback the update request.
 764      -         * Ideally, we should rework the code to attach all the resources
 765      -         * to atmel_hlcdc_plane_state (including the DMA desc allocation),
 766      -         * but this require a complete rework of the atmel_hlcdc_layer
 767      -         * code.
 768      -         */
 769      -        struct drm_plane_state *s = (struct drm_plane_state *)new_state;
 770      -        struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
 771      -        struct atmel_hlcdc_plane_state *state =
 772      -                        drm_plane_state_to_atmel_hlcdc_plane_state(s);
 773      -        int ret;
 774      -
 775      -        ret = atmel_hlcdc_layer_update_start(&plane->layer);
 776      -        if (!ret)
 777      -                state->prepared = true;
 778      -
 779      -        return ret;
 780      - }
 781      -
 782      - static void atmel_hlcdc_plane_cleanup_fb(struct drm_plane *p,
 783      -                                          struct drm_plane_state *old_state)
 784      - {
 785      -        /*
 786      -         * FIXME: we should avoid this const -> non-const cast but it's
 787      -         * currently the only solution we have to modify the ->prepared
 788      -         * state and rollback the update request.
 789      -         * Ideally, we should rework the code to attach all the resources
 790      -         * to atmel_hlcdc_plane_state (including the DMA desc allocation),
 791      -         * but this require a complete rework of the atmel_hlcdc_layer
 792      -         * code.
 793      -         */
 794      -        struct drm_plane_state *s = (struct drm_plane_state *)old_state;
 795      -        struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
 796      -        struct atmel_hlcdc_plane_state *state =
 797      -                        drm_plane_state_to_atmel_hlcdc_plane_state(s);
 798      -
 799      -        /*
 800      -         * The Request has already been applied or cancelled, nothing to do
 801      -         * here.
 802      -         */
 803      -        if (!state->prepared)
 804      -                return;
 805      -
 806      -        atmel_hlcdc_layer_update_rollback(&plane->layer);
 807      -        state->prepared = false;
 808      - }
 809      -
 810  757  static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
 811  758                                              struct drm_plane_state *old_s)
 812  759  {
 813  760          struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
 814  761          struct atmel_hlcdc_plane_state *state =
 815  762                          drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
      763 +        u32 sr;
 816  764
 817  765          if (!p->state->crtc || !p->state->fb)
 818  766                  return;
···
 771  823          atmel_hlcdc_plane_update_buffers(plane, state);
 772  824          atmel_hlcdc_plane_update_disc_area(plane, state);
 773  825
 774      -        atmel_hlcdc_layer_update_commit(&plane->layer);
      826 +        /* Enable the overrun interrupts. */
      827 +        atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IER,
      828 +                                    ATMEL_HLCDC_LAYER_OVR_IRQ(0) |
      829 +                                    ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
      830 +                                    ATMEL_HLCDC_LAYER_OVR_IRQ(2));
      831 +
      832 +        /* Apply the new config at the next SOF event. */
      833 +        sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
      834 +        atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHER,
      835 +                                    ATMEL_HLCDC_LAYER_UPDATE |
      836 +                                    (sr & ATMEL_HLCDC_LAYER_EN ?
      837 +                                     ATMEL_HLCDC_LAYER_A2Q : ATMEL_HLCDC_LAYER_EN));
 775  838  }
 776  839
 777  840  static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p,
···
 790  831  {
 791  832          struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
 792  833
 793      -        atmel_hlcdc_layer_disable(&plane->layer);
      834 +        /* Disable interrupts */
      835 +        atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IDR,
      836 +                                    0xffffffff);
      837 +
      838 +        /* Disable the layer */
      839 +        atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHDR,
      840 +                                    ATMEL_HLCDC_LAYER_RST |
      841 +                                    ATMEL_HLCDC_LAYER_A2Q |
      842 +                                    ATMEL_HLCDC_LAYER_UPDATE);
      843 +
      844 +        /* Clear all pending interrupts */
      845 +        atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
 794  846  }
 795  847
 796  848  static void atmel_hlcdc_plane_destroy(struct drm_plane *p)
···
 811  841          if (plane->base.fb)
 812  842                  drm_framebuffer_unreference(plane->base.fb);
 813  843
 814      -        atmel_hlcdc_layer_cleanup(p->dev, &plane->layer);
 815      -
 816  844          drm_plane_cleanup(p);
 817      -        devm_kfree(p->dev->dev, plane);
 818  845  }
 819  846
 820  847  static int atmel_hlcdc_plane_atomic_set_property(struct drm_plane *p,
···
 851  884  }
 852  885
 853  886  static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane,
 854      -                                             const struct atmel_hlcdc_layer_desc *desc,
 855      -                                             struct atmel_hlcdc_plane_properties *props)
      887 +                                             struct atmel_hlcdc_plane_properties *props)
 856  888  {
 857      -        struct regmap *regmap = plane->layer.hlcdc->regmap;
      889 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
 858  890
 859  891          if (desc->type == ATMEL_HLCDC_OVERLAY_LAYER ||
 860      -            desc->type == ATMEL_HLCDC_CURSOR_LAYER) {
      892 +            desc->type == ATMEL_HLCDC_CURSOR_LAYER)
 861  893                  drm_object_attach_property(&plane->base.base,
 862  894                                             props->alpha, 255);
 863      -
 864      -                /* Set default alpha value */
 865      -                regmap_update_bits(regmap,
 866      -                                   desc->regs_offset +
 867      -                                   ATMEL_HLCDC_LAYER_GENERAL_CFG(&plane->layer),
 868      -                                   ATMEL_HLCDC_LAYER_GA_MASK,
 869      -                                   ATMEL_HLCDC_LAYER_GA_MASK);
 870      -        }
 871  895
 872  896          if (desc->layout.xstride && desc->layout.pstride) {
 873  897                  int ret;
···
 878  920                   * TODO: decare a "yuv-to-rgb-conv-factors" property to let
 879  921                   * userspace modify these factors (using a BLOB property ?).
 880  922                   */
 881      -                regmap_write(regmap,
 882      -                             desc->regs_offset +
 883      -                             ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 0),
 884      -                             0x4c900091);
 885      -                regmap_write(regmap,
 886      -                             desc->regs_offset +
 887      -                             ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 1),
 888      -                             0x7a5f5090);
 889      -                regmap_write(regmap,
 890      -                             desc->regs_offset +
 891      -                             ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 2),
 892      -                             0x40040890);
      923 +                atmel_hlcdc_layer_write_cfg(&plane->layer,
      924 +                                            desc->layout.csc,
      925 +                                            0x4c900091);
      926 +                atmel_hlcdc_layer_write_cfg(&plane->layer,
      927 +                                            desc->layout.csc + 1,
      928 +                                            0x7a5f5090);
      929 +                atmel_hlcdc_layer_write_cfg(&plane->layer,
      930 +                                            desc->layout.csc + 2,
      931 +                                            0x40040890);
 893  932          }
 894  933
 895  934          return 0;
 896  935  }
 897  936
      937 + void atmel_hlcdc_plane_irq(struct atmel_hlcdc_plane *plane)
      938 + {
      939 +        const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
      940 +        u32 isr;
      941 +
      942 +        isr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
      943 +
      944 +        /*
      945 +         * There's not much we can do in case of overrun except informing
      946 +         * the user. However, we are in interrupt context here, hence the
      947 +         * use of dev_dbg().
      948 +         */
      949 +        if (isr &
      950 +            (ATMEL_HLCDC_LAYER_OVR_IRQ(0) | ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
      951 +             ATMEL_HLCDC_LAYER_OVR_IRQ(2)))
      952 +                dev_dbg(plane->base.dev->dev, "overrun on plane %s\n",
      953 +                        desc->name);
      954 + }
      955 +
 898  956  static struct drm_plane_helper_funcs atmel_hlcdc_layer_plane_helper_funcs = {
 899      -        .prepare_fb = atmel_hlcdc_plane_prepare_fb,
 900      -        .cleanup_fb = atmel_hlcdc_plane_cleanup_fb,
 901  957          .atomic_check = atmel_hlcdc_plane_atomic_check,
 902  958          .atomic_update = atmel_hlcdc_plane_atomic_update,
 903  959          .atomic_disable = atmel_hlcdc_plane_atomic_disable,
 904  960  };
      961 +
      962 + static int atmel_hlcdc_plane_alloc_dscrs(struct drm_plane *p,
      963 +                                          struct atmel_hlcdc_plane_state *state)
      964 + {
      965 +        struct atmel_hlcdc_dc *dc = p->dev->dev_private;
      966 +        int i;
      967 +
      968 +        for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
      969 +                struct atmel_hlcdc_dma_channel_dscr *dscr;
      970 +                dma_addr_t dscr_dma;
      971 +
      972 +                dscr = dma_pool_alloc(dc->dscrpool, GFP_KERNEL, &dscr_dma);
      973 +                if (!dscr)
      974 +                        goto err;
      975 +
      976 +                dscr->addr = 0;
      977 +                dscr->next = dscr_dma;
      978 +                dscr->self = dscr_dma;
      979 +                dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH;
      980 +
      981 +                state->dscrs[i] = dscr;
      982 +        }
      983 +
      984 +        return 0;
      985 +
      986 + err:
      987 +        for (i--; i >= 0; i--) {
      988 +                dma_pool_free(dc->dscrpool, state->dscrs[i],
      989 +                              state->dscrs[i]->self);
      990 +        }
      991 +
      992 +        return -ENOMEM;
      993 + }
 905  994
 906  995  static void atmel_hlcdc_plane_reset(struct drm_plane *p)
 907  996  {
···
 966  961
 967  962          state = kzalloc(sizeof(*state), GFP_KERNEL);
 968  963          if (state) {
      964 +                if (atmel_hlcdc_plane_alloc_dscrs(p, state)) {
      965 +                        kfree(state);
      966 +                        dev_err(p->dev->dev,
      967 +                                "Failed to allocate initial plane state\n");
      968 +                        return;
      969 +                }
      970 +
 969  971                  state->alpha = 255;
 970  972                  p->state = &state->base;
 971  973                  p->state->plane = p;
···
 990  978          if (!copy)
 991  979                  return NULL;
 992  980
 993      -        copy->disc_updated = false;
 994      -        copy->prepared = false;
      981 +        if (atmel_hlcdc_plane_alloc_dscrs(p, copy)) {
      982 +                kfree(copy);
      983 +                return NULL;
      984 +        }
 995  985
 996  986          if (copy->base.fb)
 997  987                  drm_framebuffer_reference(copy->base.fb);
···
1001  987          return &copy->base;
1002  988  }
1003  989
1004      - static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *plane,
      990 + static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *p,
1005  991                                                     struct drm_plane_state *s)
1006  992  {
1007  993          struct atmel_hlcdc_plane_state *state =
1008  994                          drm_plane_state_to_atmel_hlcdc_plane_state(s);
      995 +        struct atmel_hlcdc_dc *dc = p->dev->dev_private;
      996 +        int i;
      997 +
      998 +        for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
      999 +                dma_pool_free(dc->dscrpool, state->dscrs[i],
     1000 +                              state->dscrs[i]->self);
     1001 +        }
1009 1002
1010 1003          if (s->fb)
1011 1004                  drm_framebuffer_unreference(s->fb);
···
1032 1011          .atomic_get_property = atmel_hlcdc_plane_atomic_get_property,
1033 1012  };
1034 1013
1035      - static struct atmel_hlcdc_plane *
1036      - atmel_hlcdc_plane_create(struct drm_device *dev,
1037      -                          const struct atmel_hlcdc_layer_desc *desc,
1038      -                          struct atmel_hlcdc_plane_properties *props)
     1014 + static int atmel_hlcdc_plane_create(struct drm_device *dev,
     1015 +                                     const struct atmel_hlcdc_layer_desc *desc,
     1016 +                                     struct atmel_hlcdc_plane_properties *props)
1039 1017  {
     1018 +        struct atmel_hlcdc_dc *dc = dev->dev_private;
1040 1019          struct atmel_hlcdc_plane *plane;
1041 1020          enum drm_plane_type type;
1042 1021          int ret;
1043 1022
1044 1023          plane = devm_kzalloc(dev->dev, sizeof(*plane), GFP_KERNEL);
1045 1024          if (!plane)
1046      -                return ERR_PTR(-ENOMEM);
     1025 +                return -ENOMEM;
1047 1026
1048      -        ret = atmel_hlcdc_layer_init(dev, &plane->layer, desc);
1049      -        if (ret)
1050      -                return ERR_PTR(ret);
     1027 +        atmel_hlcdc_layer_init(&plane->layer, desc, dc->hlcdc->regmap);
     1028 +        plane->properties = props;
1051 1029
1052 1030          if (desc->type == ATMEL_HLCDC_BASE_LAYER)
1053 1031                  type = DRM_PLANE_TYPE_PRIMARY;
···
1060 1040                                         desc->formats->formats,
1061 1041                                         desc->formats->nformats, type, NULL);
1062 1042          if (ret)
1063      -                return ERR_PTR(ret);
     1043 +                return ret;
1064 1044
1065 1045          drm_plane_helper_add(&plane->base,
1066 1046                               &atmel_hlcdc_layer_plane_helper_funcs);
1067 1047
1068 1048          /* Set default property values*/
1069      -        ret = atmel_hlcdc_plane_init_properties(plane, desc, props);
     1049 +        ret = atmel_hlcdc_plane_init_properties(plane, props);
1070 1050          if (ret)
1071      -                return ERR_PTR(ret);
     1051 +                return ret;
1072 1052
1073      -        return plane;
     1053 +        dc->layers[desc->id] = &plane->layer;
     1054 +
     1055 +        return 0;
1074 1056  }
1075 1057
1076 1058  static struct atmel_hlcdc_plane_properties *
···
1091 1069          return props;
1092 1070  }
1093 1071
1094      - struct atmel_hlcdc_planes *
1095      - atmel_hlcdc_create_planes(struct drm_device *dev)
     1072 + int atmel_hlcdc_create_planes(struct drm_device *dev)
1096 1073  {
1097 1074          struct atmel_hlcdc_dc *dc = dev->dev_private;
1098 1075          struct atmel_hlcdc_plane_properties *props;
1099      -        struct atmel_hlcdc_planes *planes;
1100 1076          const struct atmel_hlcdc_layer_desc *descs = dc->desc->layers;
1101 1077          int nlayers = dc->desc->nlayers;
1102      -        int i;
1103      -
1104      -        planes = devm_kzalloc(dev->dev, sizeof(*planes), GFP_KERNEL);
1105      -        if (!planes)
1106      -                return ERR_PTR(-ENOMEM);
1107      -
1108      -        for (i = 0; i < nlayers; i++) {
1109      -                if (descs[i].type == ATMEL_HLCDC_OVERLAY_LAYER)
1110      -                        planes->noverlays++;
1111      -        }
1112      -
1113      -        if (planes->noverlays) {
1114      -                planes->overlays = devm_kzalloc(dev->dev,
1115      -                                                planes->noverlays *
1116      -                                                sizeof(*planes->overlays),
1117      -                                                GFP_KERNEL);
1118      -                if (!planes->overlays)
1119      -                        return ERR_PTR(-ENOMEM);
1120      -        }
     1078 +        int i, ret;
1121 1079
1122 1080          props = atmel_hlcdc_plane_create_properties(dev);
1123 1081          if (IS_ERR(props))
1124      -                return ERR_CAST(props);
     1082 +                return PTR_ERR(props);
1126      -        planes->noverlays = 0;
     1084 +        dc->dscrpool = dmam_pool_create("atmel-hlcdc-dscr", dev->dev,
     1085 +                                sizeof(struct atmel_hlcdc_dma_channel_dscr),
     1086 +                                sizeof(u64), 0);
     1087 +        if (!dc->dscrpool)
     1088 +                return -ENOMEM;
     1089 +
1127 1090          for (i = 0; i < nlayers; i++) {
1128      -                struct atmel_hlcdc_plane *plane;
1129      -
1130      -                if (descs[i].type == ATMEL_HLCDC_PP_LAYER)
     1091 +                if (descs[i].type != ATMEL_HLCDC_BASE_LAYER &&
     1092 +                    descs[i].type != ATMEL_HLCDC_OVERLAY_LAYER &&
     1093 +                    descs[i].type != ATMEL_HLCDC_CURSOR_LAYER)
1131 1094                          continue;
1132 1095
1133      -                plane = atmel_hlcdc_plane_create(dev, &descs[i], props);
1134      -                if (IS_ERR(plane))
1135      -                        return ERR_CAST(plane);
1136      -
1137      -                plane->properties = props;
1138      -
1139      -                switch (descs[i].type) {
1140      -                case ATMEL_HLCDC_BASE_LAYER:
1141      -                        if (planes->primary)
1142      -                                return ERR_PTR(-EINVAL);
1143      -                        planes->primary = plane;
1144      -                        break;
1145      -
1146      -                case ATMEL_HLCDC_OVERLAY_LAYER:
1147      -                        planes->overlays[planes->noverlays++] = plane;
1148      -                        break;
1149      -
1150      -                case ATMEL_HLCDC_CURSOR_LAYER:
1151      -                        if (planes->cursor)
1152      -                                return ERR_PTR(-EINVAL);
1153      -                        planes->cursor = plane;
1154      -                        break;
1155      -
1156      -                default:
1157      -                        break;
1158      -                }
     1096 +                ret = atmel_hlcdc_plane_create(dev, &descs[i], props);
     1097 +                if (ret)
     1098 +                        return ret;
1159 1099          }
1160      -
1161      -        return planes;
     1101 +        return 0;
1162 1102  }
+1 -4
drivers/gpu/drm/bochs/bochs_fbdev.c
···
	info->par = &bochs->fb.helper;

	ret = bochs_framebuffer_init(bochs->dev, &bochs->fb.gfb, &mode_cmd, gobj);
-	if (ret) {
-		drm_fb_helper_release_fbi(helper);
+	if (ret)
		return ret;
-	}

	bochs->fb.size = size;
···
	DRM_DEBUG_DRIVER("\n");

	drm_fb_helper_unregister_fbi(&bochs->fb.helper);
-	drm_fb_helper_release_fbi(&bochs->fb.helper);

	if (gfb->obj) {
		drm_gem_object_unreference_unlocked(gfb->obj);
+4
drivers/gpu/drm/bridge/sil-sii8620.c
···
			sii8620_irq_thread,
			IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
			"sii8620", ctx);
+	if (ret < 0) {
+		dev_err(dev, "failed to install IRQ handler\n");
+		return ret;
+	}

	ctx->gpio_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(ctx->gpio_reset)) {
+1 -1
drivers/gpu/drm/bridge/ti-tfp410.c
···
};
MODULE_DEVICE_TABLE(of, tfp410_match);

-struct platform_driver tfp410_platform_driver = {
+static struct platform_driver tfp410_platform_driver = {
	.probe	= tfp410_probe,
	.remove	= tfp410_remove,
	.driver	= {
-1
drivers/gpu/drm/cirrus/cirrus_fbdev.c
···
	struct cirrus_framebuffer *gfb = &gfbdev->gfb;

	drm_fb_helper_unregister_fbi(&gfbdev->helper);
-	drm_fb_helper_release_fbi(&gfbdev->helper);

	if (gfb->obj) {
		drm_gem_object_unreference_unlocked(gfb->obj);
+158 -42
drivers/gpu/drm/drm_atomic.c
···
				state->connectors[i].state);
		state->connectors[i].ptr = NULL;
		state->connectors[i].state = NULL;
-		drm_connector_unreference(connector);
+		drm_connector_put(connector);
	}

	for (i = 0; i < config->num_crtc; i++) {
···
		return ERR_PTR(-ENOMEM);

	state->crtcs[index].state = crtc_state;
+	state->crtcs[index].old_state = crtc->state;
+	state->crtcs[index].new_state = crtc_state;
	state->crtcs[index].ptr = crtc;
	crtc_state->state = state;
···
	if (mode && memcmp(&state->mode, mode, sizeof(*mode)) == 0)
		return 0;

-	drm_property_unreference_blob(state->mode_blob);
+	drm_property_blob_put(state->mode_blob);
	state->mode_blob = NULL;

	if (mode) {
···
	if (blob == state->mode_blob)
		return 0;

-	drm_property_unreference_blob(state->mode_blob);
+	drm_property_blob_put(state->mode_blob);
	state->mode_blob = NULL;

	memset(&state->mode, 0, sizeof(state->mode));
···
			blob->data))
		return -EINVAL;

-	state->mode_blob = drm_property_reference_blob(blob);
+	state->mode_blob = drm_property_blob_get(blob);
	state->enable = true;
	DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n",
			 state->mode.name, state);
···
	if (old_blob == new_blob)
		return;

-	drm_property_unreference_blob(old_blob);
+	drm_property_blob_put(old_blob);
	if (new_blob)
-		drm_property_reference_blob(new_blob);
+		drm_property_blob_get(new_blob);
	*blob = new_blob;
	*replaced = true;
···
			return -EINVAL;

		if (expected_size > 0 && expected_size != new_blob->length) {
-			drm_property_unreference_blob(new_blob);
+			drm_property_blob_put(new_blob);
			return -EINVAL;
		}
	}

	drm_atomic_replace_property_blob(blob, new_blob, replaced);
-	drm_property_unreference_blob(new_blob);
+	drm_property_blob_put(new_blob);

	return 0;
}
···
		struct drm_property_blob *mode =
			drm_property_lookup_blob(dev, val);
		ret = drm_atomic_set_mode_prop_for_crtc(state, mode);
-		drm_property_unreference_blob(mode);
+		drm_property_blob_put(mode);
		return ret;
	} else if (property == config->degamma_lut_property) {
		ret = drm_atomic_replace_property_blob_from_id(crtc,
···
	 * pipe.
	 */
	if (state->event && !state->active && !crtc->state->active) {
-		DRM_DEBUG_ATOMIC("[CRTC:%d] requesting event but off\n",
-				 crtc->base.id);
+		DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requesting event but off\n",
+				 crtc->base.id, crtc->name);
		return -EINVAL;
	}
···

	state->planes[index].state = plane_state;
	state->planes[index].ptr = plane;
+	state->planes[index].old_state = plane->state;
+	state->planes[index].new_state = plane_state;
	plane_state->state = state;

	DRM_DEBUG_ATOMIC("Added [PLANE:%d:%s] %p state to %p\n",
···
		struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, val);
		drm_atomic_set_fb_for_plane(state, fb);
		if (fb)
-			drm_framebuffer_unreference(fb);
+			drm_framebuffer_put(fb);
	} else if (property == config->prop_in_fence_fd) {
		if (state->fence)
			return -EINVAL;
···
	if (!connector_state)
		return ERR_PTR(-ENOMEM);

-	drm_connector_reference(connector);
+	drm_connector_get(connector);
	state->connectors[index].state = connector_state;
+	state->connectors[index].old_state = connector->state;
+	state->connectors[index].new_state = connector_state;
	state->connectors[index].ptr = connector;
	connector_state->state = state;

-	DRM_DEBUG_ATOMIC("Added [CONNECTOR:%d] %p state to %p\n",
-			 connector->base.id, connector_state, state);
+	DRM_DEBUG_ATOMIC("Added [CONNECTOR:%d:%s] %p state to %p\n",
+			 connector->base.id, connector->name,
+			 connector_state, state);

	if (connector_state->crtc) {
		struct drm_crtc_state *crtc_state;
···
		state->tv.saturation = val;
	} else if (property == config->tv_hue_property) {
		state->tv.hue = val;
+	} else if (property == config->link_status_property) {
+		/* Never downgrade from GOOD to BAD on userspace's request here,
+		 * only hw issues can do that.
+		 *
+		 * For an atomic property the userspace doesn't need to be able
+		 * to understand all the properties, but needs to be able to
+		 * restore the state it wants on VT switch. So if the userspace
+		 * tries to change the link_status from GOOD to BAD, driver
+		 * silently rejects it and returns a 0. This prevents userspace
+		 * from accidently breaking the display when it restores the
+		 * state.
+		 */
+		if (state->link_status != DRM_LINK_STATUS_GOOD)
+			state->link_status = val;
	} else if (connector->funcs->atomic_set_property) {
		return connector->funcs->atomic_set_property(connector,
				state, property, val);
···
		*val = state->tv.saturation;
	} else if (property == config->tv_hue_property) {
		*val = state->tv.hue;
+	} else if (property == config->link_status_property) {
+		*val = state->link_status;
	} else if (connector->funcs->atomic_get_property) {
		return connector->funcs->atomic_get_property(connector,
				state, property, val);
···
		crtc_state->connector_mask &=
			~(1 << drm_connector_index(conn_state->connector));

-		drm_connector_unreference(conn_state->connector);
+		drm_connector_put(conn_state->connector);
		conn_state->crtc = NULL;
	}
···
		crtc_state->connector_mask |=
			1 << drm_connector_index(conn_state->connector);

-		drm_connector_reference(conn_state->connector);
+		drm_connector_get(conn_state->connector);
		conn_state->crtc = crtc;

		DRM_DEBUG_ATOMIC("Link connector state %p to [CRTC:%d:%s]\n",
···
	struct drm_connector *connector;
	struct drm_connector_state *conn_state;
	struct drm_connector_list_iter conn_iter;
+	struct drm_crtc_state *crtc_state;
	int ret;
+
+	crtc_state = drm_atomic_get_crtc_state(state, crtc);
+	if (IS_ERR(crtc_state))
+		return PTR_ERR(crtc_state);

	ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
	if (ret)
···
			 crtc->base.id, crtc->name, state);

	/*
-	 * Changed connectors are already in @state, so only need to look at the
-	 * current configuration.
+	 * Changed connectors are already in @state, so only need to look
+	 * at the connector_mask in crtc_state.
	 */
-	drm_connector_list_iter_get(state->dev, &conn_iter);
+	drm_connector_list_iter_begin(state->dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
-		if (connector->state->crtc != crtc)
+		if (!(crtc_state->connector_mask & (1 << drm_connector_index(connector))))
			continue;

		conn_state = drm_atomic_get_connector_state(state, connector);
		if (IS_ERR(conn_state)) {
-			drm_connector_list_iter_put(&conn_iter);
+			drm_connector_list_iter_end(&conn_iter);
			return PTR_ERR(conn_state);
		}
	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);

	return 0;
}
···

	DRM_DEBUG_ATOMIC("checking %p\n", state);

-	for_each_plane_in_state(state, plane, plane_state, i) {
+	for_each_new_plane_in_state(state, plane, plane_state, i) {
		ret = drm_atomic_plane_check(plane, plane_state);
		if (ret) {
			DRM_DEBUG_ATOMIC("[PLANE:%d:%s] atomic core check failed\n",
···
		}
	}

-	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
		ret = drm_atomic_crtc_check(crtc, crtc_state);
		if (ret) {
			DRM_DEBUG_ATOMIC("[CRTC:%d:%s] atomic core check failed\n",
···
		ret = config->funcs->atomic_check(state->dev, state);

	if (!state->allow_modeset) {
-		for_each_crtc_in_state(state, crtc, crtc_state, i) {
+		for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
			if (drm_atomic_crtc_needs_modeset(crtc_state)) {
				DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requires full modeset\n",
						 crtc->base.id, crtc->name);
···

	DRM_DEBUG_ATOMIC("checking %p\n", state);

-	for_each_plane_in_state(state, plane, plane_state, i)
+	for_each_new_plane_in_state(state, plane, plane_state, i)
		drm_atomic_plane_print_state(&p, plane_state);

-	for_each_crtc_in_state(state, crtc, crtc_state, i)
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i)
		drm_atomic_crtc_print_state(&p, crtc_state);

-	for_each_connector_in_state(state, connector, connector_state, i)
+	for_each_new_connector_in_state(state, connector, connector_state, i)
		drm_atomic_connector_print_state(&p, connector_state);
}
···
	list_for_each_entry(crtc, &config->crtc_list, head)
		drm_atomic_crtc_print_state(p, crtc->state);

-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter)
		drm_atomic_connector_print_state(p, connector->state);
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
}
EXPORT_SYMBOL(drm_state_dump);
···
	if (ret == 0) {
		struct drm_framebuffer *new_fb = plane->state->fb;
		if (new_fb)
-			drm_framebuffer_reference(new_fb);
+			drm_framebuffer_get(new_fb);
		plane->fb = new_fb;
		plane->crtc = plane->state->crtc;

		if (plane->old_fb)
-			drm_framebuffer_unreference(plane->old_fb);
+			drm_framebuffer_put(plane->old_fb);
	}
	plane->old_fb = NULL;
}
···
	if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY)
		return 0;

-	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
		s32 __user *fence_ptr;

		fence_ptr = get_out_fence_for_crtc(crtc_state->state, crtc);
···
		return;
	}

-	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
		struct drm_pending_vblank_event *event = crtc_state->event;
		/*
		 * Free the allocated event. drm_atomic_helper_setup_commit
···
	}

	kfree(fence_state);
+}
+
+int drm_atomic_remove_fb(struct drm_framebuffer *fb)
+{
+	struct drm_modeset_acquire_ctx ctx;
+	struct drm_device *dev = fb->dev;
+	struct drm_atomic_state *state;
+	struct drm_plane *plane;
+	struct drm_connector *conn;
+	struct drm_connector_state *conn_state;
+	int i, ret = 0;
+	unsigned plane_mask;
+
+	state = drm_atomic_state_alloc(dev);
+	if (!state)
+		return -ENOMEM;
+
+	drm_modeset_acquire_init(&ctx, 0);
+	state->acquire_ctx = &ctx;
+
+retry:
+	plane_mask = 0;
+	ret = drm_modeset_lock_all_ctx(dev, &ctx);
+	if (ret)
+		goto unlock;
+
+	drm_for_each_plane(plane, dev) {
+		struct drm_plane_state *plane_state;
+
+		if (plane->state->fb != fb)
+			continue;
+
+		plane_state = drm_atomic_get_plane_state(state, plane);
+		if (IS_ERR(plane_state)) {
+			ret = PTR_ERR(plane_state);
+			goto unlock;
+		}
+
+		if (plane_state->crtc->primary == plane) {
+			struct drm_crtc_state *crtc_state;
+
+			crtc_state = drm_atomic_get_existing_crtc_state(state, plane_state->crtc);
+
+			ret = drm_atomic_add_affected_connectors(state, plane_state->crtc);
+			if (ret)
+				goto unlock;
+
+			crtc_state->active = false;
+			ret = drm_atomic_set_mode_for_crtc(crtc_state, NULL);
+			if (ret)
+				goto unlock;
+		}
+
+		drm_atomic_set_fb_for_plane(plane_state, NULL);
+		ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
+		if (ret)
+			goto unlock;
+
+		plane_mask |= BIT(drm_plane_index(plane));
+
+		plane->old_fb = plane->fb;
+	}
+
+	for_each_connector_in_state(state, conn, conn_state, i) {
+		ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
+
+		if (ret)
+			goto unlock;
+	}
+
+	if (plane_mask)
+		ret = drm_atomic_commit(state);
+
+unlock:
+	if (plane_mask)
+		drm_atomic_clean_old_fb(dev, plane_mask, ret);
+
+	if (ret == -EDEADLK) {
+		drm_modeset_backoff(&ctx);
+		goto retry;
+	}
+
+	drm_atomic_state_put(state);
+
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
+	return ret;
}

int drm_mode_atomic_ioctl(struct drm_device *dev,
···
	}

	if (!obj->properties) {
-		drm_mode_object_unreference(obj);
+		drm_mode_object_put(obj);
		ret = -ENOENT;
		goto out;
	}

	if (get_user(count_props, count_props_ptr + copied_objs)) {
-		drm_mode_object_unreference(obj);
+		drm_mode_object_put(obj);
		ret = -EFAULT;
		goto out;
	}
···
		struct drm_property *prop;

		if (get_user(prop_id, props_ptr + copied_props)) {
-			drm_mode_object_unreference(obj);
+			drm_mode_object_put(obj);
			ret = -EFAULT;
			goto out;
		}

		prop = drm_mode_obj_find_prop_id(obj, prop_id);
		if (!prop) {
-			drm_mode_object_unreference(obj);
+			drm_mode_object_put(obj);
			ret = -ENOENT;
			goto out;
		}
···
		if (copy_from_user(&prop_value,
				   prop_values_ptr + copied_props,
				   sizeof(prop_value))) {
-			drm_mode_object_unreference(obj);
+			drm_mode_object_put(obj);
			ret = -EFAULT;
			goto out;
		}

		ret = atomic_set_prop(state, obj, prop, prop_value);
		if (ret) {
-			drm_mode_object_unreference(obj);
+			drm_mode_object_put(obj);
			goto out;
		}
···
			plane_mask |= (1 << drm_plane_index(plane));
			plane->old_fb = plane->fb;
		}
-		drm_mode_object_unreference(obj);
+		drm_mode_object_put(obj);
	}

	ret = prepare_crtc_signaling(dev, state, arg, file_priv, &fence_state,
+142 -63
drivers/gpu/drm/drm_atomic_helper.c
··· 145 145 * and the crtc is disabled if no encoder is left. This preserves 146 146 * compatibility with the legacy set_config behavior. 147 147 */ 148 - drm_connector_list_iter_get(state->dev, &conn_iter); 148 + drm_connector_list_iter_begin(state->dev, &conn_iter); 149 149 drm_for_each_connector_iter(connector, &conn_iter) { 150 150 struct drm_crtc_state *crtc_state; 151 151 ··· 193 193 } 194 194 } 195 195 out: 196 - drm_connector_list_iter_put(&conn_iter); 196 + drm_connector_list_iter_end(&conn_iter); 197 197 198 198 return ret; 199 199 } ··· 322 322 } 323 323 324 324 if (!drm_encoder_crtc_ok(new_encoder, connector_state->crtc)) { 325 - DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] incompatible with [CRTC:%d]\n", 325 + DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] incompatible with [CRTC:%d:%s]\n", 326 326 new_encoder->base.id, 327 327 new_encoder->name, 328 - connector_state->crtc->base.id); 328 + connector_state->crtc->base.id, 329 + connector_state->crtc->name); 329 330 return -EINVAL; 330 331 } 331 332 ··· 530 529 connector_state); 531 530 if (ret) 532 531 return ret; 532 + if (connector->state->crtc) { 533 + crtc_state = drm_atomic_get_existing_crtc_state(state, 534 + connector->state->crtc); 535 + if (connector->state->link_status != 536 + connector_state->link_status) 537 + crtc_state->connectors_changed = true; 538 + } 533 539 } 534 540 535 541 /* ··· 1127 1119 drm_crtc_vblank_count(crtc), 1128 1120 msecs_to_jiffies(50)); 1129 1121 1130 - WARN(!ret, "[CRTC:%d] vblank wait timed out\n", crtc->base.id); 1122 + WARN(!ret, "[CRTC:%d:%s] vblank wait timed out\n", 1123 + crtc->base.id, crtc->name); 1131 1124 1132 1125 drm_crtc_vblank_put(crtc); 1133 1126 } ··· 1179 1170 static void commit_tail(struct drm_atomic_state *old_state) 1180 1171 { 1181 1172 struct drm_device *dev = old_state->dev; 1182 - struct drm_mode_config_helper_funcs *funcs; 1173 + const struct drm_mode_config_helper_funcs *funcs; 1183 1174 1184 1175 funcs = dev->mode_config.helper_private; 1185 1176 ··· 1986 1977 
int i; 1987 1978 long ret; 1988 1979 struct drm_connector *connector; 1989 - struct drm_connector_state *conn_state; 1980 + struct drm_connector_state *conn_state, *old_conn_state; 1990 1981 struct drm_crtc *crtc; 1991 - struct drm_crtc_state *crtc_state; 1982 + struct drm_crtc_state *crtc_state, *old_crtc_state; 1992 1983 struct drm_plane *plane; 1993 - struct drm_plane_state *plane_state; 1984 + struct drm_plane_state *plane_state, *old_plane_state; 1994 1985 struct drm_crtc_commit *commit; 1995 1986 1996 1987 if (stall) { ··· 2014 2005 } 2015 2006 } 2016 2007 2017 - for_each_connector_in_state(state, connector, conn_state, i) { 2008 + for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) { 2009 + WARN_ON(connector->state != old_conn_state); 2010 + 2018 2011 connector->state->state = state; 2019 2012 swap(state->connectors[i].state, connector->state); 2020 2013 connector->state->state = NULL; 2021 2014 } 2022 2015 2023 - for_each_crtc_in_state(state, crtc, crtc_state, i) { 2016 + for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, crtc_state, i) { 2017 + WARN_ON(crtc->state != old_crtc_state); 2018 + 2024 2019 crtc->state->state = state; 2025 2020 swap(state->crtcs[i].state, crtc->state); 2026 2021 crtc->state->state = NULL; ··· 2039 2026 } 2040 2027 } 2041 2028 2042 - for_each_plane_in_state(state, plane, plane_state, i) { 2029 + for_each_oldnew_plane_in_state(state, plane, old_plane_state, plane_state, i) { 2030 + WARN_ON(plane->state != old_plane_state); 2031 + 2043 2032 plane->state->state = state; 2044 2033 swap(state->planes[i].state, plane->state); 2045 2034 plane->state->state = NULL; ··· 2248 2233 NULL); 2249 2234 if (ret) 2250 2235 return ret; 2236 + /* Make sure legacy setCrtc always re-trains */ 2237 + conn_state->link_status = DRM_LINK_STATUS_GOOD; 2251 2238 } 2252 2239 } 2253 2240 ··· 2292 2275 * @set: mode set configuration 2293 2276 * 2294 2277 * Provides a default crtc set_config handler using the atomic 
driver interface. 2278 + * 2279 + * NOTE: For backwards compatibility with old userspace this automatically 2280 + * resets the "link-status" property to GOOD, to force any link 2281 + * re-training. The SETCRTC ioctl does not define whether an update does 2282 + * need a full modeset or just a plane update, hence we're allowed to do 2283 + * that. See also drm_mode_connector_set_link_status_property(). 2295 2284 * 2296 2285 * Returns: 2297 2286 * Returns 0 on success, negative errno numbers on failure. ··· 2442 2419 struct drm_modeset_acquire_ctx *ctx) 2443 2420 { 2444 2421 struct drm_atomic_state *state; 2422 + struct drm_connector_state *conn_state; 2445 2423 struct drm_connector *conn; 2446 - struct drm_connector_list_iter conn_iter; 2447 - int err; 2424 + struct drm_plane_state *plane_state; 2425 + struct drm_plane *plane; 2426 + struct drm_crtc_state *crtc_state; 2427 + struct drm_crtc *crtc; 2428 + int ret, i; 2448 2429 2449 2430 state = drm_atomic_state_alloc(dev); 2450 2431 if (!state) ··· 2456 2429 2457 2430 state->acquire_ctx = ctx; 2458 2431 2459 - drm_connector_list_iter_get(dev, &conn_iter); 2460 - drm_for_each_connector_iter(conn, &conn_iter) { 2461 - struct drm_crtc *crtc = conn->state->crtc; 2462 - struct drm_crtc_state *crtc_state; 2463 - 2464 - if (!crtc || conn->dpms != DRM_MODE_DPMS_ON) 2465 - continue; 2466 - 2432 + drm_for_each_crtc(crtc, dev) { 2467 2433 crtc_state = drm_atomic_get_crtc_state(state, crtc); 2468 2434 if (IS_ERR(crtc_state)) { 2469 - err = PTR_ERR(crtc_state); 2435 + ret = PTR_ERR(crtc_state); 2470 2436 goto free; 2471 2437 } 2472 2438 2473 2439 crtc_state->active = false; 2440 + 2441 + ret = drm_atomic_set_mode_prop_for_crtc(crtc_state, NULL); 2442 + if (ret < 0) 2443 + goto free; 2444 + 2445 + ret = drm_atomic_add_affected_planes(state, crtc); 2446 + if (ret < 0) 2447 + goto free; 2448 + 2449 + ret = drm_atomic_add_affected_connectors(state, crtc); 2450 + if (ret < 0) 2451 + goto free; 2474 2452 } 2475 2453 2476 - err = 
drm_atomic_commit(state); 2454 + for_each_connector_in_state(state, conn, conn_state, i) { 2455 + ret = drm_atomic_set_crtc_for_connector(conn_state, NULL); 2456 + if (ret < 0) 2457 + goto free; 2458 + } 2459 + 2460 + for_each_plane_in_state(state, plane, plane_state, i) { 2461 + ret = drm_atomic_set_crtc_for_plane(plane_state, NULL); 2462 + if (ret < 0) 2463 + goto free; 2464 + 2465 + drm_atomic_set_fb_for_plane(plane_state, NULL); 2466 + } 2467 + 2468 + ret = drm_atomic_commit(state); 2477 2469 free: 2478 - drm_connector_list_iter_put(&conn_iter); 2479 2470 drm_atomic_state_put(state); 2480 - return err; 2471 + return ret; 2481 2472 } 2473 + 2482 2474 EXPORT_SYMBOL(drm_atomic_helper_disable_all); 2483 2475 2484 2476 /** ··· 2523 2477 * 2524 2478 * See also: 2525 2479 * drm_atomic_helper_duplicate_state(), drm_atomic_helper_disable_all(), 2526 - * drm_atomic_helper_resume() 2480 + * drm_atomic_helper_resume(), drm_atomic_helper_commit_duplicated_state() 2527 2481 */ 2528 2482 struct drm_atomic_state *drm_atomic_helper_suspend(struct drm_device *dev) 2529 2483 { ··· 2564 2518 EXPORT_SYMBOL(drm_atomic_helper_suspend); 2565 2519 2566 2520 /** 2521 + * drm_atomic_helper_commit_duplicated_state - commit duplicated state 2522 + * @state: duplicated atomic state to commit 2523 + * @ctx: pointer to acquire_ctx to use for commit. 2524 + * 2525 + * The state returned by drm_atomic_helper_duplicate_state() and 2526 + * drm_atomic_helper_suspend() is partially invalid, and needs to 2527 + * be fixed up before commit. 2528 + * 2529 + * Returns: 2530 + * 0 on success or a negative error code on failure. 
2531 + * 2532 + * See also: 2533 + * drm_atomic_helper_suspend() 2534 + */ 2535 + int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state, 2536 + struct drm_modeset_acquire_ctx *ctx) 2537 + { 2538 + int i; 2539 + struct drm_plane *plane; 2540 + struct drm_plane_state *plane_state; 2541 + struct drm_connector *connector; 2542 + struct drm_connector_state *conn_state; 2543 + struct drm_crtc *crtc; 2544 + struct drm_crtc_state *crtc_state; 2545 + 2546 + state->acquire_ctx = ctx; 2547 + 2548 + for_each_new_plane_in_state(state, plane, plane_state, i) 2549 + state->planes[i].old_state = plane->state; 2550 + 2551 + for_each_new_crtc_in_state(state, crtc, crtc_state, i) 2552 + state->crtcs[i].old_state = crtc->state; 2553 + 2554 + for_each_new_connector_in_state(state, connector, conn_state, i) 2555 + state->connectors[i].old_state = connector->state; 2556 + 2557 + return drm_atomic_commit(state); 2558 + } 2559 + EXPORT_SYMBOL(drm_atomic_helper_commit_duplicated_state); 2560 + 2561 + /** 2567 2562 * drm_atomic_helper_resume - subsystem-level resume helper 2568 2563 * @dev: DRM device 2569 2564 * @state: atomic state to resume to ··· 2627 2540 int err; 2628 2541 2629 2542 drm_mode_config_reset(dev); 2543 + 2630 2544 drm_modeset_lock_all(dev); 2631 - state->acquire_ctx = config->acquire_ctx; 2632 - err = drm_atomic_commit(state); 2545 + err = drm_atomic_helper_commit_duplicated_state(state, config->acquire_ctx); 2633 2546 drm_modeset_unlock_all(dev); 2634 2547 2635 2548 return err; ··· 2805 2718 struct drm_atomic_state *state, 2806 2719 struct drm_crtc *crtc, 2807 2720 struct drm_framebuffer *fb, 2808 - struct drm_pending_vblank_event *event) 2721 + struct drm_pending_vblank_event *event, 2722 + uint32_t flags) 2809 2723 { 2810 2724 struct drm_plane *plane = crtc->primary; 2811 2725 struct drm_plane_state *plane_state; ··· 2818 2730 return PTR_ERR(crtc_state); 2819 2731 2820 2732 crtc_state->event = event; 2733 + crtc_state->pageflip_flags = flags; 2821 
2734 2822 2735 plane_state = drm_atomic_get_plane_state(state, plane); 2823 2736 if (IS_ERR(plane_state)) 2824 2737 return PTR_ERR(plane_state); 2825 - 2826 2738 2827 2739 ret = drm_atomic_set_crtc_for_plane(plane_state, crtc); 2828 2740 if (ret != 0) ··· 2832 2744 /* Make sure we don't accidentally do a full modeset. */ 2833 2745 state->allow_modeset = false; 2834 2746 if (!crtc_state->active) { 2835 - DRM_DEBUG_ATOMIC("[CRTC:%d] disabled, rejecting legacy flip\n", 2836 - crtc->base.id); 2747 + DRM_DEBUG_ATOMIC("[CRTC:%d:%s] disabled, rejecting legacy flip\n", 2748 + crtc->base.id, crtc->name); 2837 2749 return -EINVAL; 2838 2750 } 2839 2751 ··· 2850 2762 * Provides a default &drm_crtc_funcs.page_flip implementation 2851 2763 * using the atomic driver interface. 2852 2764 * 2853 - * Note that for now so called async page flips (i.e. updates which are not 2854 - * synchronized to vblank) are not supported, since the atomic interfaces have 2855 - * no provisions for this yet. 2856 - * 2857 2765 * Returns: 2858 2766 * Returns 0 on success, negative errno numbers on failure. 
2859 2767 * ··· 2865 2781 struct drm_atomic_state *state; 2866 2782 int ret = 0; 2867 2783 2868 - if (flags & DRM_MODE_PAGE_FLIP_ASYNC) 2869 - return -EINVAL; 2870 - 2871 2784 state = drm_atomic_state_alloc(plane->dev); 2872 2785 if (!state) 2873 2786 return -ENOMEM; ··· 2872 2791 state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc); 2873 2792 2874 2793 retry: 2875 - ret = page_flip_common(state, crtc, fb, event); 2794 + ret = page_flip_common(state, crtc, fb, event, flags); 2876 2795 if (ret != 0) 2877 2796 goto fail; 2878 2797 ··· 2927 2846 struct drm_crtc_state *crtc_state; 2928 2847 int ret = 0; 2929 2848 2930 - if (flags & DRM_MODE_PAGE_FLIP_ASYNC) 2931 - return -EINVAL; 2932 - 2933 2849 state = drm_atomic_state_alloc(plane->dev); 2934 2850 if (!state) 2935 2851 return -ENOMEM; ··· 2934 2856 state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc); 2935 2857 2936 2858 retry: 2937 - ret = page_flip_common(state, crtc, fb, event); 2859 + ret = page_flip_common(state, crtc, fb, event, flags); 2938 2860 if (ret != 0) 2939 2861 goto fail; 2940 2862 ··· 3018 2940 3019 2941 WARN_ON(!drm_modeset_is_locked(&config->connection_mutex)); 3020 2942 3021 - drm_connector_list_iter_get(connector->dev, &conn_iter); 2943 + drm_connector_list_iter_begin(connector->dev, &conn_iter); 3022 2944 drm_for_each_connector_iter(tmp_connector, &conn_iter) { 3023 2945 if (tmp_connector->state->crtc != crtc) 3024 2946 continue; ··· 3028 2950 break; 3029 2951 } 3030 2952 } 3031 - drm_connector_list_iter_put(&conn_iter); 2953 + drm_connector_list_iter_end(&conn_iter); 3032 2954 crtc_state->active = active; 3033 2955 3034 2956 ret = drm_atomic_commit(state); ··· 3120 3042 memcpy(state, crtc->state, sizeof(*state)); 3121 3043 3122 3044 if (state->mode_blob) 3123 - drm_property_reference_blob(state->mode_blob); 3045 + drm_property_blob_get(state->mode_blob); 3124 3046 if (state->degamma_lut) 3125 - drm_property_reference_blob(state->degamma_lut); 3047 + 
drm_property_blob_get(state->degamma_lut); 3126 3048 if (state->ctm) 3127 - drm_property_reference_blob(state->ctm); 3049 + drm_property_blob_get(state->ctm); 3128 3050 if (state->gamma_lut) 3129 - drm_property_reference_blob(state->gamma_lut); 3051 + drm_property_blob_get(state->gamma_lut); 3130 3052 state->mode_changed = false; 3131 3053 state->active_changed = false; 3132 3054 state->planes_changed = false; ··· 3134 3056 state->color_mgmt_changed = false; 3135 3057 state->zpos_changed = false; 3136 3058 state->event = NULL; 3059 + state->pageflip_flags = 0; 3137 3060 } 3138 3061 EXPORT_SYMBOL(__drm_atomic_helper_crtc_duplicate_state); 3139 3062 ··· 3171 3092 */ 3172 3093 void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc_state *state) 3173 3094 { 3174 - drm_property_unreference_blob(state->mode_blob); 3175 - drm_property_unreference_blob(state->degamma_lut); 3176 - drm_property_unreference_blob(state->ctm); 3177 - drm_property_unreference_blob(state->gamma_lut); 3095 + drm_property_blob_put(state->mode_blob); 3096 + drm_property_blob_put(state->degamma_lut); 3097 + drm_property_blob_put(state->ctm); 3098 + drm_property_blob_put(state->gamma_lut); 3178 3099 } 3179 3100 EXPORT_SYMBOL(__drm_atomic_helper_crtc_destroy_state); 3180 3101 ··· 3230 3151 memcpy(state, plane->state, sizeof(*state)); 3231 3152 3232 3153 if (state->fb) 3233 - drm_framebuffer_reference(state->fb); 3154 + drm_framebuffer_get(state->fb); 3234 3155 3235 3156 state->fence = NULL; 3236 3157 } ··· 3270 3191 void __drm_atomic_helper_plane_destroy_state(struct drm_plane_state *state) 3271 3192 { 3272 3193 if (state->fb) 3273 - drm_framebuffer_unreference(state->fb); 3194 + drm_framebuffer_put(state->fb); 3274 3195 3275 3196 if (state->fence) 3276 3197 dma_fence_put(state->fence); ··· 3351 3272 { 3352 3273 memcpy(state, connector->state, sizeof(*state)); 3353 3274 if (state->crtc) 3354 - drm_connector_reference(connector); 3275 + drm_connector_get(connector); 3355 3276 } 3356 3277 
EXPORT_SYMBOL(__drm_atomic_helper_connector_duplicate_state); 3357 3278 ··· 3439 3360 } 3440 3361 } 3441 3362 3442 - drm_connector_list_iter_get(dev, &conn_iter); 3363 + drm_connector_list_iter_begin(dev, &conn_iter); 3443 3364 drm_for_each_connector_iter(conn, &conn_iter) { 3444 3365 struct drm_connector_state *conn_state; 3445 3366 3446 3367 conn_state = drm_atomic_get_connector_state(state, conn); 3447 3368 if (IS_ERR(conn_state)) { 3448 3369 err = PTR_ERR(conn_state); 3449 - drm_connector_list_iter_put(&conn_iter); 3370 + drm_connector_list_iter_end(&conn_iter); 3450 3371 goto free; 3451 3372 } 3452 3373 } 3453 - drm_connector_list_iter_put(&conn_iter); 3374 + drm_connector_list_iter_end(&conn_iter); 3454 3375 3455 3376 /* clear the acquire context so that it isn't accidentally reused */ 3456 3377 state->acquire_ctx = NULL; ··· 3477 3398 __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state) 3478 3399 { 3479 3400 if (state->crtc) 3480 - drm_connector_unreference(state->connector); 3401 + drm_connector_put(state->connector); 3481 3402 } 3482 3403 EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state); 3483 3404 ··· 3572 3493 goto backoff; 3573 3494 3574 3495 drm_atomic_state_put(state); 3575 - drm_property_unreference_blob(blob); 3496 + drm_property_blob_put(blob); 3576 3497 return ret; 3577 3498 3578 3499 backoff:
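The duplicate/destroy helpers in this hunk follow one ownership rule: duplicating a state memcpy()s the struct and then takes one extra reference on every refcounted object it points at, and destroying a state drops exactly those references. A minimal userspace sketch of that pattern, with illustrative names (`blob_get`, `duplicate_state`) rather than the kernel's API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for a refcounted object such as drm_property_blob. */
struct blob {
	int refcount;
};

static struct blob *blob_get(struct blob *b)
{
	if (b)
		b->refcount++;		/* take an extra reference */
	return b;
}

static void blob_put(struct blob *b)
{
	if (b && --b->refcount == 0)
		free(b);		/* last reference frees the object */
}

struct crtc_state {
	struct blob *mode_blob;
	int mode_changed;		/* transient, reset on duplicate */
};

/* Copy the whole struct, then bump every contained reference, as
 * __drm_atomic_helper_crtc_duplicate_state() does above. */
static void duplicate_state(struct crtc_state *dst, const struct crtc_state *src)
{
	memcpy(dst, src, sizeof(*dst));
	blob_get(dst->mode_blob);
	dst->mode_changed = 0;
}

/* Drop every reference the state holds, as
 * __drm_atomic_helper_crtc_destroy_state() does above. */
static void destroy_state(struct crtc_state *state)
{
	blob_put(state->mode_blob);
}
```

Because duplicate and destroy are exactly symmetric, any number of duplicated states can be destroyed in any order without leaking or double-freeing the shared blobs.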
+6 -6
drivers/gpu/drm/drm_cache.c
···
	}

	if (wbinvd_on_all_cpus())
-		printk(KERN_ERR "Timed out waiting for cache flush.\n");
+		pr_err("Timed out waiting for cache flush\n");

#elif defined(__powerpc__)
	unsigned long i;
···
		kunmap_atomic(page_virtual);
	}
#else
-	printk(KERN_ERR "Architecture has no drm_cache.c support\n");
+	pr_err("Architecture has no drm_cache.c support\n");
	WARN_ON_ONCE(1);
#endif
}
···
	}

	if (wbinvd_on_all_cpus())
-		printk(KERN_ERR "Timed out waiting for cache flush.\n");
+		pr_err("Timed out waiting for cache flush\n");
#else
-	printk(KERN_ERR "Architecture has no drm_cache.c support\n");
+	pr_err("Architecture has no drm_cache.c support\n");
	WARN_ON_ONCE(1);
#endif
}
···
	}

	if (wbinvd_on_all_cpus())
-		printk(KERN_ERR "Timed out waiting for cache flush.\n");
+		pr_err("Timed out waiting for cache flush\n");
#else
-	printk(KERN_ERR "Architecture has no drm_cache.c support\n");
+	pr_err("Architecture has no drm_cache.c support\n");
	WARN_ON_ONCE(1);
#endif
}
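The point of the printk(KERN_ERR ...) to pr_err() conversion above is not just brevity: the pr_* helpers let a file prepend a common prefix to every message via a pr_fmt-style macro. A userspace sketch of that mechanism, with illustrative names (`PR_FMT`, `pr_err_sketch`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative: every message from this "file" gets the same prefix,
 * the way a kernel source file defines pr_fmt before using pr_err(). */
#define PR_FMT "drm_cache: "

static int pr_err_sketch(char *buf, size_t len, const char *msg)
{
	/* prepend the per-file prefix at the single formatting site */
	return snprintf(buf, len, PR_FMT "%s", msg);
}
```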
+96 -36
drivers/gpu/drm/drm_connector.c
··· 35 35 * als fixed panels or anything else that can display pixels in some form. As 36 36 * opposed to all other KMS objects representing hardware (like CRTC, encoder or 37 37 * plane abstractions) connectors can be hotplugged and unplugged at runtime. 38 - * Hence they are reference-counted using drm_connector_reference() and 39 - * drm_connector_unreference(). 38 + * Hence they are reference-counted using drm_connector_get() and 39 + * drm_connector_put(). 40 40 * 41 41 * KMS driver must create, initialize, register and attach at a &struct 42 42 * drm_connector for each such sink. The instance is created as other KMS ··· 128 128 return; 129 129 130 130 if (mode->force) { 131 - const char *s; 132 - 133 - switch (mode->force) { 134 - case DRM_FORCE_OFF: 135 - s = "OFF"; 136 - break; 137 - case DRM_FORCE_ON_DIGITAL: 138 - s = "ON - dig"; 139 - break; 140 - default: 141 - case DRM_FORCE_ON: 142 - s = "ON"; 143 - break; 144 - } 145 - 146 - DRM_INFO("forcing %s connector %s\n", connector->name, s); 131 + DRM_INFO("forcing %s connector %s\n", connector->name, 132 + drm_get_connector_force_name(mode->force)); 147 133 connector->force = mode->force; 148 134 } 149 135 ··· 175 189 struct ida *connector_ida = 176 190 &drm_connector_enum_list[connector_type].ida; 177 191 178 - ret = drm_mode_object_get_reg(dev, &connector->base, 179 - DRM_MODE_OBJECT_CONNECTOR, 180 - false, drm_connector_free); 192 + ret = __drm_mode_object_add(dev, &connector->base, 193 + DRM_MODE_OBJECT_CONNECTOR, 194 + false, drm_connector_free); 181 195 if (ret) 182 196 return ret; 183 197 ··· 229 243 230 244 drm_object_attach_property(&connector->base, 231 245 config->dpms_property, 0); 246 + 247 + drm_object_attach_property(&connector->base, 248 + config->link_status_property, 249 + 0); 232 250 233 251 if (drm_core_check_feature(dev, DRIVER_ATOMIC)) { 234 252 drm_object_attach_property(&connector->base, config->prop_crtc_id, 0); ··· 435 445 struct drm_connector *connector; 436 446 struct 
drm_connector_list_iter conn_iter; 437 447 438 - drm_connector_list_iter_get(dev, &conn_iter); 448 + drm_connector_list_iter_begin(dev, &conn_iter); 439 449 drm_for_each_connector_iter(connector, &conn_iter) 440 450 drm_connector_unregister(connector); 441 - drm_connector_list_iter_put(&conn_iter); 451 + drm_connector_list_iter_end(&conn_iter); 442 452 } 443 453 444 454 int drm_connector_register_all(struct drm_device *dev) ··· 447 457 struct drm_connector_list_iter conn_iter; 448 458 int ret = 0; 449 459 450 - drm_connector_list_iter_get(dev, &conn_iter); 460 + drm_connector_list_iter_begin(dev, &conn_iter); 451 461 drm_for_each_connector_iter(connector, &conn_iter) { 452 462 ret = drm_connector_register(connector); 453 463 if (ret) 454 464 break; 455 465 } 456 - drm_connector_list_iter_put(&conn_iter); 466 + drm_connector_list_iter_end(&conn_iter); 457 467 458 468 if (ret) 459 469 drm_connector_unregister_all(dev); ··· 478 488 } 479 489 EXPORT_SYMBOL(drm_get_connector_status_name); 480 490 491 + /** 492 + * drm_get_connector_force_name - return a string for connector force 493 + * @force: connector force to get name of 494 + * 495 + * Returns: const pointer to name. 
496 + */ 497 + const char *drm_get_connector_force_name(enum drm_connector_force force) 498 + { 499 + switch (force) { 500 + case DRM_FORCE_UNSPECIFIED: 501 + return "unspecified"; 502 + case DRM_FORCE_OFF: 503 + return "off"; 504 + case DRM_FORCE_ON: 505 + return "on"; 506 + case DRM_FORCE_ON_DIGITAL: 507 + return "digital"; 508 + default: 509 + return "unknown"; 510 + } 511 + } 512 + 481 513 #ifdef CONFIG_LOCKDEP 482 514 static struct lockdep_map connector_list_iter_dep_map = { 483 515 .name = "drm_connector_list_iter" ··· 507 495 #endif 508 496 509 497 /** 510 - * drm_connector_list_iter_get - initialize a connector_list iterator 498 + * drm_connector_list_iter_begin - initialize a connector_list iterator 511 499 * @dev: DRM device 512 500 * @iter: connector_list iterator 513 501 * 514 502 * Sets @iter up to walk the &drm_mode_config.connector_list of @dev. @iter 515 - * must always be cleaned up again by calling drm_connector_list_iter_put(). 503 + * must always be cleaned up again by calling drm_connector_list_iter_end(). 516 504 * Iteration itself happens using drm_connector_list_iter_next() or 517 505 * drm_for_each_connector_iter(). 
518 506 */ 519 - void drm_connector_list_iter_get(struct drm_device *dev, 520 - struct drm_connector_list_iter *iter) 507 + void drm_connector_list_iter_begin(struct drm_device *dev, 508 + struct drm_connector_list_iter *iter) 521 509 { 522 510 iter->dev = dev; 523 511 iter->conn = NULL; 524 512 lock_acquire_shared_recursive(&connector_list_iter_dep_map, 0, 1, NULL, _RET_IP_); 525 513 } 526 - EXPORT_SYMBOL(drm_connector_list_iter_get); 514 + EXPORT_SYMBOL(drm_connector_list_iter_begin); 527 515 528 516 /** 529 517 * drm_connector_list_iter_next - return next connector ··· 557 545 spin_unlock_irqrestore(&config->connector_list_lock, flags); 558 546 559 547 if (old_conn) 560 - drm_connector_unreference(old_conn); 548 + drm_connector_put(old_conn); 561 549 562 550 return iter->conn; 563 551 } 564 552 EXPORT_SYMBOL(drm_connector_list_iter_next); 565 553 566 554 /** 567 - * drm_connector_list_iter_put - tear down a connector_list iterator 555 + * drm_connector_list_iter_end - tear down a connector_list iterator 568 556 * @iter: connector_list iterator 569 557 * 570 558 * Tears down @iter and releases any resources (like &drm_connector references) ··· 572 560 * iteration completes fully or when it was aborted without walking the entire 573 561 * list. 
574 562 */ 575 - void drm_connector_list_iter_put(struct drm_connector_list_iter *iter) 563 + void drm_connector_list_iter_end(struct drm_connector_list_iter *iter) 576 564 { 577 565 iter->dev = NULL; 578 566 if (iter->conn) 579 - drm_connector_unreference(iter->conn); 567 + drm_connector_put(iter->conn); 580 568 lock_release(&connector_list_iter_dep_map, 0, _RET_IP_); 581 569 } 582 - EXPORT_SYMBOL(drm_connector_list_iter_put); 570 + EXPORT_SYMBOL(drm_connector_list_iter_end); 583 571 584 572 static const struct drm_prop_enum_list drm_subpixel_enum_list[] = { 585 573 { SubPixelUnknown, "Unknown" }, ··· 610 598 { DRM_MODE_DPMS_OFF, "Off" } 611 599 }; 612 600 DRM_ENUM_NAME_FN(drm_get_dpms_name, drm_dpms_enum_list) 601 + 602 + static const struct drm_prop_enum_list drm_link_status_enum_list[] = { 603 + { DRM_MODE_LINK_STATUS_GOOD, "Good" }, 604 + { DRM_MODE_LINK_STATUS_BAD, "Bad" }, 605 + }; 606 + DRM_ENUM_NAME_FN(drm_get_link_status_name, drm_link_status_enum_list) 613 607 614 608 /** 615 609 * drm_display_info_set_bus_formats - set the supported bus formats ··· 736 718 * tiling and virtualize both &drm_crtc and &drm_plane if needed. Drivers 737 719 * should update this value using drm_mode_connector_set_tile_property(). 738 720 * Userspace cannot change this property. 721 + * link-status: 722 + * Connector link-status property to indicate the status of link. The default 723 + * value of link-status is "GOOD". If something fails during or after modeset, 724 + * the kernel driver may set this to "BAD" and issue a hotplug uevent. Drivers 725 + * should update this value using drm_mode_connector_set_link_status_property(). 
739 726 * 740 727 * Connectors also have one standardized atomic property: 741 728 * ··· 781 758 if (!prop) 782 759 return -ENOMEM; 783 760 dev->mode_config.tile_property = prop; 761 + 762 + prop = drm_property_create_enum(dev, 0, "link-status", 763 + drm_link_status_enum_list, 764 + ARRAY_SIZE(drm_link_status_enum_list)); 765 + if (!prop) 766 + return -ENOMEM; 767 + dev->mode_config.link_status_property = prop; 784 768 785 769 return 0; 786 770 } ··· 1118 1088 } 1119 1089 EXPORT_SYMBOL(drm_mode_connector_update_edid_property); 1120 1090 1091 + /** 1092 + * drm_mode_connector_set_link_status_property - Set link status property of a connector 1093 + * @connector: drm connector 1094 + * @link_status: new value of link status property (0: Good, 1: Bad) 1095 + * 1096 + * In usual working scenario, this link status property will always be set to 1097 + * "GOOD". If something fails during or after a mode set, the kernel driver 1098 + * may set this link status property to "BAD". The caller then needs to send a 1099 + * hotplug uevent for userspace to re-check the valid modes through 1100 + * GET_CONNECTOR_IOCTL and retry modeset. 1101 + * 1102 + * Note: Drivers cannot rely on userspace to support this property and 1103 + * issue a modeset. As such, they may choose to handle issues (like 1104 + * re-training a link) without userspace's intervention. 1105 + * 1106 + * The reason for adding this property is to handle link training failures, but 1107 + * it is not limited to DP or link training. For example, if we implement 1108 + * asynchronous setcrtc, this property can be used to report any failures in that. 
1109 + */ 1110 + void drm_mode_connector_set_link_status_property(struct drm_connector *connector, 1111 + uint64_t link_status) 1112 + { 1113 + struct drm_device *dev = connector->dev; 1114 + 1115 + drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 1116 + connector->state->link_status = link_status; 1117 + drm_modeset_unlock(&dev->mode_config.connection_mutex); 1118 + } 1119 + EXPORT_SYMBOL(drm_mode_connector_set_link_status_property); 1120 + 1121 1121 int drm_mode_connector_set_obj_prop(struct drm_mode_object *obj, 1122 1122 struct drm_property *property, 1123 1123 uint64_t value) ··· 1309 1249 out: 1310 1250 mutex_unlock(&dev->mode_config.mutex); 1311 1251 out_unref: 1312 - drm_connector_unreference(connector); 1252 + drm_connector_put(connector); 1313 1253 1314 1254 return ret; 1315 1255 }
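The iterator rename running through this file (drm_connector_list_iter_get/put becoming _begin/_end) reflects the shape of the API: a caller-owned iterator struct is initialized, walked with a next function until it returns NULL, and then torn down. A self-contained sketch of that begin/next/end shape over a plain array, with illustrative names (the real iterator also holds a connector reference that _next and _end drop):

```c
#include <assert.h>
#include <stddef.h>

struct conn {
	int id;
};

struct conn_list {
	struct conn *items;
	size_t count;
};

/* Caller-owned iteration state, like drm_connector_list_iter. */
struct conn_iter {
	const struct conn_list *list;
	size_t pos;
};

static void conn_iter_begin(const struct conn_list *list, struct conn_iter *it)
{
	it->list = list;
	it->pos = 0;
}

/* Returns the next element, or NULL when the walk is done. */
static struct conn *conn_iter_next(struct conn_iter *it)
{
	if (it->pos >= it->list->count)
		return NULL;
	return &it->list->items[it->pos++];
}

/* Must always be called, even if the walk was aborted early. */
static void conn_iter_end(struct conn_iter *it)
{
	it->list = NULL;	/* the real helper drops a held reference here */
}
```

The _end call is mandatory even on early exit, which is why the hunks above add drm_connector_list_iter_end() on every break-out path.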
+6 -6
drivers/gpu/drm/drm_crtc.c
···
	spin_lock_init(&crtc->commit_lock);

	drm_modeset_lock_init(&crtc->mutex);
-	ret = drm_mode_object_get(dev, &crtc->base, DRM_MODE_OBJECT_CRTC);
+	ret = drm_mode_object_add(dev, &crtc->base, DRM_MODE_OBJECT_CRTC);
	if (ret)
		return ret;
···

	drm_for_each_crtc(tmp, crtc->dev) {
		if (tmp->primary->fb)
-			drm_framebuffer_reference(tmp->primary->fb);
+			drm_framebuffer_get(tmp->primary->fb);
		if (tmp->primary->old_fb)
-			drm_framebuffer_unreference(tmp->primary->old_fb);
+			drm_framebuffer_put(tmp->primary->old_fb);
		tmp->primary->old_fb = NULL;
	}
···
		}
		fb = crtc->primary->fb;
		/* Make refcounting symmetric with the lookup path. */
-		drm_framebuffer_reference(fb);
+		drm_framebuffer_get(fb);
	} else {
		fb = drm_framebuffer_lookup(dev, crtc_req->fb_id);
		if (!fb) {
···

out:
	if (fb)
-		drm_framebuffer_unreference(fb);
+		drm_framebuffer_put(fb);

	if (connector_set) {
		for (i = 0; i < crtc_req->count_connectors; i++) {
			if (connector_set[i])
-				drm_connector_unreference(connector_set[i]);
+				drm_connector_put(connector_set[i]);
		}
	}
	kfree(connector_set);
+21 -21
drivers/gpu/drm/drm_crtc_helper.c
··· 102 102 } 103 103 104 104 105 - drm_connector_list_iter_get(dev, &conn_iter); 105 + drm_connector_list_iter_begin(dev, &conn_iter); 106 106 drm_for_each_connector_iter(connector, &conn_iter) { 107 107 if (connector->encoder == encoder) { 108 - drm_connector_list_iter_put(&conn_iter); 108 + drm_connector_list_iter_end(&conn_iter); 109 109 return true; 110 110 } 111 111 } 112 - drm_connector_list_iter_put(&conn_iter); 112 + drm_connector_list_iter_end(&conn_iter); 113 113 return false; 114 114 } 115 115 EXPORT_SYMBOL(drm_helper_encoder_in_use); ··· 449 449 if (encoder->crtc != crtc) 450 450 continue; 451 451 452 - drm_connector_list_iter_get(dev, &conn_iter); 452 + drm_connector_list_iter_begin(dev, &conn_iter); 453 453 drm_for_each_connector_iter(connector, &conn_iter) { 454 454 if (connector->encoder != encoder) 455 455 continue; ··· 465 465 connector->dpms = DRM_MODE_DPMS_OFF; 466 466 467 467 /* we keep a reference while the encoder is bound */ 468 - drm_connector_unreference(connector); 468 + drm_connector_put(connector); 469 469 } 470 - drm_connector_list_iter_put(&conn_iter); 470 + drm_connector_list_iter_end(&conn_iter); 471 471 } 472 472 473 473 __drm_helper_disable_unused_functions(dev); ··· 583 583 } 584 584 585 585 count = 0; 586 - drm_connector_list_iter_get(dev, &conn_iter); 586 + drm_connector_list_iter_begin(dev, &conn_iter); 587 587 drm_for_each_connector_iter(connector, &conn_iter) 588 588 save_connector_encoders[count++] = connector->encoder; 589 - drm_connector_list_iter_put(&conn_iter); 589 + drm_connector_list_iter_end(&conn_iter); 590 590 591 591 save_set.crtc = set->crtc; 592 592 save_set.mode = &set->crtc->mode; ··· 623 623 for (ro = 0; ro < set->num_connectors; ro++) { 624 624 if (set->connectors[ro]->encoder) 625 625 continue; 626 - drm_connector_reference(set->connectors[ro]); 626 + drm_connector_get(set->connectors[ro]); 627 627 } 628 628 629 629 /* a) traverse passed in connector list and get encoders for them */ 630 630 count = 0; 
631 - drm_connector_list_iter_get(dev, &conn_iter); 631 + drm_connector_list_iter_begin(dev, &conn_iter); 632 632 drm_for_each_connector_iter(connector, &conn_iter) { 633 633 const struct drm_connector_helper_funcs *connector_funcs = 634 634 connector->helper_private; ··· 662 662 connector->encoder = new_encoder; 663 663 } 664 664 } 665 - drm_connector_list_iter_put(&conn_iter); 665 + drm_connector_list_iter_end(&conn_iter); 666 666 667 667 if (fail) { 668 668 ret = -EINVAL; ··· 670 670 } 671 671 672 672 count = 0; 673 - drm_connector_list_iter_get(dev, &conn_iter); 673 + drm_connector_list_iter_begin(dev, &conn_iter); 674 674 drm_for_each_connector_iter(connector, &conn_iter) { 675 675 if (!connector->encoder) 676 676 continue; ··· 689 689 if (new_crtc && 690 690 !drm_encoder_crtc_ok(connector->encoder, new_crtc)) { 691 691 ret = -EINVAL; 692 - drm_connector_list_iter_put(&conn_iter); 692 + drm_connector_list_iter_end(&conn_iter); 693 693 goto fail; 694 694 } 695 695 if (new_crtc != connector->encoder->crtc) { ··· 706 706 connector->base.id, connector->name); 707 707 } 708 708 } 709 - drm_connector_list_iter_put(&conn_iter); 709 + drm_connector_list_iter_end(&conn_iter); 710 710 711 711 /* mode_set_base is not a required function */ 712 712 if (fb_changed && !crtc_funcs->mode_set_base) ··· 761 761 } 762 762 763 763 count = 0; 764 - drm_connector_list_iter_get(dev, &conn_iter); 764 + drm_connector_list_iter_begin(dev, &conn_iter); 765 765 drm_for_each_connector_iter(connector, &conn_iter) 766 766 connector->encoder = save_connector_encoders[count++]; 767 - drm_connector_list_iter_put(&conn_iter); 767 + drm_connector_list_iter_end(&conn_iter); 768 768 769 769 /* after fail drop reference on all unbound connectors in set, let 770 770 * bound connectors keep their reference ··· 772 772 for (ro = 0; ro < set->num_connectors; ro++) { 773 773 if (set->connectors[ro]->encoder) 774 774 continue; 775 - drm_connector_unreference(set->connectors[ro]); 775 + 
drm_connector_put(set->connectors[ro]); 776 776 } 777 777 778 778 /* Try to restore the config */ ··· 794 794 struct drm_connector_list_iter conn_iter; 795 795 struct drm_device *dev = encoder->dev; 796 796 797 - drm_connector_list_iter_get(dev, &conn_iter); 797 + drm_connector_list_iter_begin(dev, &conn_iter); 798 798 drm_for_each_connector_iter(connector, &conn_iter) 799 799 if (connector->encoder == encoder) 800 800 if (connector->dpms < dpms) 801 801 dpms = connector->dpms; 802 - drm_connector_list_iter_put(&conn_iter); 802 + drm_connector_list_iter_end(&conn_iter); 803 803 804 804 return dpms; 805 805 } ··· 835 835 struct drm_connector_list_iter conn_iter; 836 836 struct drm_device *dev = crtc->dev; 837 837 838 - drm_connector_list_iter_get(dev, &conn_iter); 838 + drm_connector_list_iter_begin(dev, &conn_iter); 839 839 drm_for_each_connector_iter(connector, &conn_iter) 840 840 if (connector->encoder && connector->encoder->crtc == crtc) 841 841 if (connector->dpms < dpms) 842 842 dpms = connector->dpms; 843 - drm_connector_list_iter_put(&conn_iter); 843 + drm_connector_list_iter_end(&conn_iter); 844 844 845 845 return dpms; 846 846 }
+7 -7
drivers/gpu/drm/drm_crtc_internal.h
···
			     void *data, struct drm_file *file_priv);

/* drm_mode_object.c */
-int drm_mode_object_get_reg(struct drm_device *dev,
-			    struct drm_mode_object *obj,
-			    uint32_t obj_type,
-			    bool register_obj,
-			    void (*obj_free_cb)(struct kref *kref));
+int __drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
+			  uint32_t obj_type, bool register_obj,
+			  void (*obj_free_cb)(struct kref *kref));
+int drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
+			uint32_t obj_type);
void drm_mode_object_register(struct drm_device *dev,
			      struct drm_mode_object *obj);
-int drm_mode_object_get(struct drm_device *dev,
-			struct drm_mode_object *obj, uint32_t obj_type);
struct drm_mode_object *__drm_mode_object_find(struct drm_device *dev,
					       uint32_t id, uint32_t type);
void drm_mode_object_unregister(struct drm_device *dev,
···
				  struct drm_property *property,
				  uint64_t value);
int drm_connector_create_standard_properties(struct drm_device *dev);
+const char *drm_get_connector_force_name(enum drm_connector_force force);

/* IOCTL */
int drm_mode_connector_property_set_ioctl(struct drm_device *dev,
···
			       struct drm_property *property, uint64_t *val);
int drm_mode_atomic_ioctl(struct drm_device *dev,
			  void *data, struct drm_file *file_priv);
+int drm_atomic_remove_fb(struct drm_framebuffer *fb);


/* drm_plane.c */
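The header change above replaces the drm_mode_object_get_reg()/drm_mode_object_get() pair with an internal __drm_mode_object_add() carrying every parameter and a public drm_mode_object_add() that fills in the common defaults. A tiny sketch of that internal/public wrapper pattern, with illustrative names and simplified signatures (the real functions take a drm_device and a kref free callback):

```c
#include <assert.h>

struct mode_object {
	int type;
	int registered;
};

/* Internal variant: exposes every knob; the double-underscore prefix
 * marks it as core-internal in kernel style. */
static int __mode_object_add(struct mode_object *obj, int obj_type,
			     int register_obj)
{
	obj->type = obj_type;
	obj->registered = register_obj;
	return 0;
}

/* Public variant: the common case, registered immediately. */
static int mode_object_add(struct mode_object *obj, int obj_type)
{
	return __mode_object_add(obj, obj_type, 1);
}
```

Callers with special needs (connectors, which defer registration and have a free callback) use the double-underscore version directly, as the connector hunk earlier in this tag does.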
+1 -23
drivers/gpu/drm/drm_debugfs.c
···
static int connector_show(struct seq_file *m, void *data)
{
	struct drm_connector *connector = m->private;
-	const char *status;

-	switch (connector->force) {
-	case DRM_FORCE_ON:
-		status = "on\n";
-		break;
-
-	case DRM_FORCE_ON_DIGITAL:
-		status = "digital\n";
-		break;
-
-	case DRM_FORCE_OFF:
-		status = "off\n";
-		break;
-
-	case DRM_FORCE_UNSPECIFIED:
-		status = "unspecified\n";
-		break;
-
-	default:
-		return 0;
-	}
-
-	seq_puts(m, status);
+	seq_printf(m, "%s\n", drm_get_connector_force_name(connector->force));

	return 0;
}
+2
drivers/gpu/drm/drm_dp_dual_mode_helper.c
···
		return "type 2 DVI";
	case DRM_DP_DUAL_MODE_TYPE2_HDMI:
		return "type 2 HDMI";
+	case DRM_DP_DUAL_MODE_LSPCON:
+		return "lspcon";
	default:
		WARN_ON(type != DRM_DP_DUAL_MODE_UNKNOWN);
		return "unknown";
+22 -12
drivers/gpu/drm/drm_edid.c
··· 1131 1131 1132 1132 csum = drm_edid_block_checksum(raw_edid); 1133 1133 if (csum) { 1134 - if (print_bad_edid) { 1135 - DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum); 1136 - } 1137 - 1138 1134 if (edid_corrupt) 1139 1135 *edid_corrupt = true; 1140 1136 1141 1137 /* allow CEA to slide through, switches mangle this */ 1142 - if (raw_edid[0] != 0x02) 1138 + if (raw_edid[0] == CEA_EXT) { 1139 + DRM_DEBUG("EDID checksum is invalid, remainder is %d\n", csum); 1140 + DRM_DEBUG("Assuming a KVM switch modified the CEA block but left the original checksum\n"); 1141 + } else { 1142 + if (print_bad_edid) 1143 + DRM_NOTE("EDID checksum is invalid, remainder is %d\n", csum); 1144 + 1143 1145 goto bad; 1146 + } 1144 1147 } 1145 1148 1146 1149 /* per-block-type checks */ 1147 1150 switch (raw_edid[0]) { 1148 1151 case 0: /* base */ 1149 1152 if (edid->version != 1) { 1150 - DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version); 1153 + DRM_NOTE("EDID has major version %d, instead of 1\n", edid->version); 1151 1154 goto bad; 1152 1155 } 1153 1156 ··· 1167 1164 bad: 1168 1165 if (print_bad_edid) { 1169 1166 if (drm_edid_is_zero(raw_edid, EDID_LENGTH)) { 1170 - printk(KERN_ERR "EDID block is all zeroes\n"); 1167 + pr_notice("EDID block is all zeroes\n"); 1171 1168 } else { 1172 - printk(KERN_ERR "Raw EDID:\n"); 1173 - print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1, 1174 - raw_edid, EDID_LENGTH, false); 1169 + pr_notice("Raw EDID:\n"); 1170 + print_hex_dump(KERN_NOTICE, 1171 + " \t", DUMP_PREFIX_NONE, 16, 1, 1172 + raw_edid, EDID_LENGTH, false); 1175 1173 } 1176 1174 } 1177 1175 return false; ··· 1428 1424 { 1429 1425 struct edid *edid; 1430 1426 1431 - if (!drm_probe_ddc(adapter)) 1427 + if (connector->force == DRM_FORCE_OFF) 1428 + return NULL; 1429 + 1430 + if (connector->force == DRM_FORCE_UNSPECIFIED && !drm_probe_ddc(adapter)) 1432 1431 return NULL; 1433 1432 1434 1433 edid = drm_do_get_edid(connector, drm_do_probe_ddc_edid, adapter); 
··· 3440 3433 connector->video_latency[1] = 0; 3441 3434 connector->audio_latency[1] = 0; 3442 3435 3436 + if (!edid) 3437 + return; 3438 + 3443 3439 cea = drm_find_cea_extension(edid); 3444 3440 if (!cea) { 3445 3441 DRM_DEBUG_KMS("ELD: no CEA Extension found\n"); ··· 4009 3999 csum += displayid[i]; 4010 4000 } 4011 4001 if (csum) { 4012 - DRM_ERROR("DisplayID checksum invalid, remainder is %d\n", csum); 4002 + DRM_NOTE("DisplayID checksum invalid, remainder is %d\n", csum); 4013 4003 return -EINVAL; 4014 4004 } 4015 4005 return 0;
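The checksum the demoted messages above complain about is the standard EDID rule: all 128 bytes of a block, including the final checksum byte, must sum to 0 mod 256, and the "remainder" printed in the log is that sum. A self-contained sketch of the check (not the kernel's drm_edid_block_checksum() itself):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define EDID_LENGTH 128

/* Sum all 128 bytes of an EDID block mod 256; a valid block yields 0,
 * anything else is the "remainder" seen in the log messages. */
static uint8_t edid_block_checksum(const uint8_t *block)
{
	unsigned int sum = 0;
	int i;

	for (i = 0; i < EDID_LENGTH; i++)
		sum += block[i];
	return (uint8_t)(sum & 0xff);
}
```

The special case in the hunk exists because some KVM switches rewrite the CEA extension block without fixing byte 127, so a nonzero remainder there is only worth a debug message, not an error.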
+4 -13
drivers/gpu/drm/drm_edid_load.c
···
	return edid;
}

-int drm_load_edid_firmware(struct drm_connector *connector)
+struct edid *drm_load_edid_firmware(struct drm_connector *connector)
{
	const char *connector_name = connector->name;
	char *edidname, *last, *colon, *fwstr, *edidstr, *fallback = NULL;
-	int ret;
	struct edid *edid;

	if (edid_firmware[0] == '\0')
-		return 0;
+		return ERR_PTR(-ENOENT);

	/*
	 * If there are multiple edid files specified and separated
···
	if (!edidname) {
		if (!fallback) {
			kfree(fwstr);
-			return 0;
+			return ERR_PTR(-ENOENT);
		}
		edidname = fallback;
	}
···
	edid = edid_load(connector, edidname, connector_name);
	kfree(fwstr);

-	if (IS_ERR_OR_NULL(edid))
-		return 0;
-
-	drm_mode_connector_update_edid_property(connector, edid);
-	ret = drm_add_edid_modes(connector, edid);
-	drm_edid_to_eld(connector, edid);
-	kfree(edid);
-
-	return ret;
+	return edid;
}
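The new `struct edid *` return type leans on the kernel's ERR_PTR()/IS_ERR() convention: small negative errno values are encoded in the otherwise-invalid top range of pointer space, so one pointer can carry either a result or an error code. A userspace sketch of that encoding with illustrative lowercase names (the real macros live in include/linux/err.h):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Errno values live in the top MAX_ERRNO addresses, which no valid
 * allocation can occupy. */
#define MAX_ERRNO 4095

static void *err_ptr(long error)
{
	return (void *)(intptr_t)error;		/* e.g. -ENOENT -> 0xff..fe */
}

static long ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;		/* recover the errno value */
}

static int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

This is why the caller-side work (updating the EDID property, adding modes, building the ELD) moves out of this function: the caller now gets the raw EDID or an error and decides what to do with it.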
+4 -4
drivers/gpu/drm/drm_encoder.c
···
{
	int ret;

-	ret = drm_mode_object_get(dev, &encoder->base, DRM_MODE_OBJECT_ENCODER);
+	ret = drm_mode_object_add(dev, &encoder->base, DRM_MODE_OBJECT_ENCODER);
	if (ret)
		return ret;
···

	/* For atomic drivers only state objects are synchronously updated and
	 * protected by modeset locks, so check those first. */
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		if (!connector->state)
			continue;
···
		if (connector->state->best_encoder != encoder)
			continue;

-		drm_connector_list_iter_put(&conn_iter);
+		drm_connector_list_iter_end(&conn_iter);
		return connector->state->crtc;
	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);

	/* Don't return stale data (e.g. pending async disable). */
	if (uses_atomic)
+9 -11
drivers/gpu/drm/drm_fb_cma_helper.c
··· 102 102 103 103 for (i = 0; i < 4; i++) { 104 104 if (fb_cma->obj[i]) 105 - drm_gem_object_unreference_unlocked(&fb_cma->obj[i]->base); 105 + drm_gem_object_put_unlocked(&fb_cma->obj[i]->base); 106 106 } 107 107 108 108 drm_framebuffer_cleanup(fb); ··· 190 190 if (!obj) { 191 191 dev_err(dev->dev, "Failed to lookup GEM object\n"); 192 192 ret = -ENXIO; 193 - goto err_gem_object_unreference; 193 + goto err_gem_object_put; 194 194 } 195 195 196 196 min_size = (height - 1) * mode_cmd->pitches[i] ··· 198 198 + mode_cmd->offsets[i]; 199 199 200 200 if (obj->size < min_size) { 201 - drm_gem_object_unreference_unlocked(obj); 201 + drm_gem_object_put_unlocked(obj); 202 202 ret = -EINVAL; 203 - goto err_gem_object_unreference; 203 + goto err_gem_object_put; 204 204 } 205 205 objs[i] = to_drm_gem_cma_obj(obj); 206 206 } ··· 208 208 fb_cma = drm_fb_cma_alloc(dev, mode_cmd, objs, i, funcs); 209 209 if (IS_ERR(fb_cma)) { 210 210 ret = PTR_ERR(fb_cma); 211 - goto err_gem_object_unreference; 211 + goto err_gem_object_put; 212 212 } 213 213 214 214 return &fb_cma->fb; 215 215 216 - err_gem_object_unreference: 216 + err_gem_object_put: 217 217 for (i--; i >= 0; i--) 218 - drm_gem_object_unreference_unlocked(&objs[i]->base); 218 + drm_gem_object_put_unlocked(&objs[i]->base); 219 219 return ERR_PTR(ret); 220 220 } 221 221 EXPORT_SYMBOL_GPL(drm_fb_cma_create_with_funcs); ··· 475 475 err_cma_destroy: 476 476 drm_framebuffer_remove(&fbdev_cma->fb->fb); 477 477 err_fb_info_destroy: 478 - drm_fb_helper_release_fbi(helper); 478 + drm_fb_helper_fini(helper); 479 479 err_gem_free_object: 480 - drm_gem_object_unreference_unlocked(&obj->base); 480 + drm_gem_object_put_unlocked(&obj->base); 481 481 return ret; 482 482 } 483 483 ··· 547 547 * drm_fbdev_cma_init() - Allocate and initializes a drm_fbdev_cma struct 548 548 * @dev: DRM device 549 549 * @preferred_bpp: Preferred bits per pixel for the device 550 - * @num_crtc: Number of CRTCs 551 550 * @max_conn_count: Maximum number of 
connectors 552 551 * 553 552 * Returns a newly allocated drm_fbdev_cma struct or a ERR_PTR. ··· 569 570 drm_fb_helper_unregister_fbi(&fbdev_cma->fb_helper); 570 571 if (fbdev_cma->fb_helper.fbdev) 571 572 drm_fbdev_cma_defio_fini(fbdev_cma->fb_helper.fbdev); 572 - drm_fb_helper_release_fbi(&fbdev_cma->fb_helper); 573 573 574 574 if (fbdev_cma->fb) 575 575 drm_framebuffer_remove(&fbdev_cma->fb->fb);
+108 -32
drivers/gpu/drm/drm_fb_helper.c
···
 	MODULE_PARM_DESC(fbdev_emulation,
 			 "Enable legacy fbdev emulation [default=true]");
 
+static int drm_fbdev_overalloc = CONFIG_DRM_FBDEV_OVERALLOC;
+module_param(drm_fbdev_overalloc, int, 0444);
+MODULE_PARM_DESC(drm_fbdev_overalloc,
+		 "Overallocation of the fbdev buffer (%) [default="
+		 __MODULE_STRING(CONFIG_DRM_FBDEV_OVERALLOC) "]");
+
 static LIST_HEAD(kernel_fb_helper_list);
 static DEFINE_MUTEX(kernel_fb_helper_lock);
···
 * drm_fb_helper_init(), drm_fb_helper_single_add_all_connectors() and
 * drm_fb_helper_initial_config(). Drivers with fancier requirements than the
 * default behaviour can override the third step with their own code.
- * Teardown is done with drm_fb_helper_fini().
+ * Teardown is done with drm_fb_helper_fini() after the fbdev device is
+ * unregistered using drm_fb_helper_unregister_fbi().
 *
 * At runtime drivers should restore the fbdev console by calling
 * drm_fb_helper_restore_fbdev_mode_unlocked() from their &drm_driver.lastclose
···
 		return 0;
 
 	mutex_lock(&dev->mode_config.mutex);
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		ret = drm_fb_helper_add_one_connector(fb_helper, connector);
···
 		struct drm_fb_helper_connector *fb_helper_connector =
 			fb_helper->connector_info[i];
 
-		drm_connector_unreference(fb_helper_connector->connector);
+		drm_connector_put(fb_helper_connector->connector);
 
 		kfree(fb_helper_connector);
 		fb_helper->connector_info[i] = NULL;
 	}
 	fb_helper->connector_count = 0;
 out:
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 	mutex_unlock(&dev->mode_config.mutex);
 
 	return ret;
···
 	if (!fb_helper_connector)
 		return -ENOMEM;
 
-	drm_connector_reference(connector);
+	drm_connector_get(connector);
 	fb_helper_connector->connector = connector;
 	fb_helper->connector_info[fb_helper->connector_count++] = fb_helper_connector;
 	return 0;
···
 	if (i == fb_helper->connector_count)
 		return -EINVAL;
 	fb_helper_connector = fb_helper->connector_info[i];
-	drm_connector_unreference(fb_helper_connector->connector);
+	drm_connector_put(fb_helper_connector->connector);
 
 	for (j = i + 1; j < fb_helper->connector_count; j++) {
 		fb_helper->connector_info[j - 1] = fb_helper->connector_info[j];
···
 	int i;
 
 	for (i = 0; i < modeset->num_connectors; i++) {
-		drm_connector_unreference(modeset->connectors[i]);
+		drm_connector_put(modeset->connectors[i]);
 		modeset->connectors[i] = NULL;
 	}
 	modeset->num_connectors = 0;
···
 	int i;
 
 	for (i = 0; i < helper->connector_count; i++) {
-		drm_connector_unreference(helper->connector_info[i]->connector);
+		drm_connector_put(helper->connector_info[i]->connector);
 		kfree(helper->connector_info[i]);
 	}
 	kfree(helper->connector_info);
···
 EXPORT_SYMBOL(drm_fb_helper_prepare);
 
 /**
- * drm_fb_helper_init - initialize a drm_fb_helper structure
+ * drm_fb_helper_init - initialize a &struct drm_fb_helper
 * @dev: drm device
 * @fb_helper: driver-allocated fbdev helper structure to initialize
 * @max_conn_count: max connector count
···
 * @fb_helper: driver-allocated fbdev helper
 *
 * A helper to alloc fb_info and the members cmap and apertures. Called
- * by the driver within the fb_probe fb_helper callback function.
+ * by the driver within the fb_probe fb_helper callback function. Drivers do not
+ * need to release the allocated fb_info structure themselves, this is
+ * automatically done when calling drm_fb_helper_fini().
 *
 * RETURNS:
 * fb_info pointer if things went okay, pointer containing error code
···
 * @fb_helper: driver-allocated fbdev helper
 *
 * A wrapper around unregister_framebuffer, to release the fb_info
- * framebuffer device
+ * framebuffer device. This must be called before releasing all resources for
+ * @fb_helper by calling drm_fb_helper_fini().
 */
 void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper)
 {
···
 EXPORT_SYMBOL(drm_fb_helper_unregister_fbi);
 
 /**
- * drm_fb_helper_release_fbi - dealloc fb_info and its members
+ * drm_fb_helper_fini - finalize a &struct drm_fb_helper
 * @fb_helper: driver-allocated fbdev helper
 *
- * A helper to free memory taken by fb_info and the members cmap and
- * apertures
+ * This cleans up all remaining resources associated with @fb_helper. Must be
+ * called after drm_fb_helper_unlink_fbi() was called.
 */
-void drm_fb_helper_release_fbi(struct drm_fb_helper *fb_helper)
-{
-	if (fb_helper) {
-		struct fb_info *info = fb_helper->fbdev;
-
-		if (info) {
-			if (info->cmap.len)
-				fb_dealloc_cmap(&info->cmap);
-			framebuffer_release(info);
-		}
-
-		fb_helper->fbdev = NULL;
-	}
-}
-EXPORT_SYMBOL(drm_fb_helper_release_fbi);
-
 void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
 {
-	if (!drm_fbdev_emulation)
+	struct fb_info *info;
+
+	if (!drm_fbdev_emulation || !fb_helper)
 		return;
+
+	info = fb_helper->fbdev;
+	if (info) {
+		if (info->cmap.len)
+			fb_dealloc_cmap(&info->cmap);
+		framebuffer_release(info);
+	}
+	fb_helper->fbdev = NULL;
 
 	cancel_work_sync(&fb_helper->resume_work);
 	cancel_work_sync(&fb_helper->dirty_work);
···
 EXPORT_SYMBOL(drm_fb_helper_setcmap);
 
 /**
+ * drm_fb_helper_ioctl - legacy ioctl implementation
+ * @info: fbdev registered by the helper
+ * @cmd: ioctl command
+ * @arg: ioctl argument
+ *
+ * A helper to implement the standard fbdev ioctl. Only
+ * FBIO_WAITFORVSYNC is implemented for now.
+ */
+int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
+			unsigned long arg)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_mode_set *mode_set;
+	struct drm_crtc *crtc;
+	int ret = 0;
+
+	mutex_lock(&dev->mode_config.mutex);
+	if (!drm_fb_helper_is_bound(fb_helper)) {
+		ret = -EBUSY;
+		goto unlock;
+	}
+
+	switch (cmd) {
+	case FBIO_WAITFORVSYNC:
+		/*
+		 * Only consider the first CRTC.
+		 *
+		 * This ioctl is supposed to take the CRTC number as
+		 * an argument, but in fbdev times, what that number
+		 * was supposed to be was quite unclear, different
+		 * drivers were passing that argument differently
+		 * (some by reference, some by value), and most of the
+		 * userspace applications were just hardcoding 0 as an
+		 * argument.
+		 *
+		 * The first CRTC should be the integrated panel on
+		 * most drivers, so this is the best choice we can
+		 * make. If we're not smart enough here, one should
+		 * just consider switching the userspace to KMS.
+		 */
+		mode_set = &fb_helper->crtc_info[0].mode_set;
+		crtc = mode_set->crtc;
+
+		/*
+		 * Only wait for a vblank event if the CRTC is
+		 * enabled, otherwise just don't do anything,
+		 * not even report an error.
+		 */
+		ret = drm_crtc_vblank_get(crtc);
+		if (!ret) {
+			drm_crtc_wait_one_vblank(crtc);
+			drm_crtc_vblank_put(crtc);
+		}
+
+		ret = 0;
+		goto unlock;
+	default:
+		ret = -ENOTTY;
+	}
+
+unlock:
+	mutex_unlock(&dev->mode_config.mutex);
+	return ret;
+}
+EXPORT_SYMBOL(drm_fb_helper_ioctl);
+
+/**
 * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
 * @var: screeninfo to check
 * @info: fbdev registered by the helper
···
 		sizes.fb_width = sizes.surface_width = 1024;
 		sizes.fb_height = sizes.surface_height = 768;
 	}
+
+	/* Handle our overallocation */
+	sizes.surface_height *= drm_fbdev_overalloc;
+	sizes.surface_height /= 100;
 
 	/* push down into drivers */
 	ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
···
 		fb_crtc->y = offset->y;
 		modeset->mode = drm_mode_duplicate(dev,
 						   fb_crtc->desired_mode);
-		drm_connector_reference(connector);
+		drm_connector_get(connector);
 		modeset->connectors[modeset->num_connectors++] = connector;
 		modeset->fb = fb_helper->fb;
 		modeset->x = offset->x;
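The new drm_fbdev_overalloc parameter is plain percentage arithmetic on the surface height, so a driver requesting 200 gets a double-height surface to pan into. A standalone sketch of that scaling (the helper name is made up for illustration, this is not kernel code):

```c
#include <assert.h>

/* Models "sizes.surface_height *= drm_fbdev_overalloc; /= 100" from the
 * patch: grow the fbdev surface height by a percentage so there are
 * spare scanlines for panning/flipping. Hypothetical helper name. */
static unsigned int overalloc_height(unsigned int height,
				     unsigned int overalloc_pct)
{
	return height * overalloc_pct / 100;
}
```

With the default of 100 the surface is unchanged; 200 doubles it for a classic two-buffer pan-flip setup.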
+26 -19
drivers/gpu/drm/drm_framebuffer.c
···
 * metadata fields.
 *
 * The lifetime of a drm framebuffer is controlled with a reference count,
- * drivers can grab additional references with drm_framebuffer_reference() and
- * drop them again with drm_framebuffer_unreference(). For driver-private
- * framebuffers for which the last reference is never dropped (e.g. for the
- * fbdev framebuffer when the struct &struct drm_framebuffer is embedded into
- * the fbdev helper struct) drivers can manually clean up a framebuffer at
- * module unload time with drm_framebuffer_unregister_private(). But doing this
- * is not recommended, and it's better to have a normal free-standing &struct
+ * drivers can grab additional references with drm_framebuffer_get() and drop
+ * them again with drm_framebuffer_put(). For driver-private framebuffers for
+ * which the last reference is never dropped (e.g. for the fbdev framebuffer
+ * when the struct &struct drm_framebuffer is embedded into the fbdev helper
+ * struct) drivers can manually clean up a framebuffer at module unload time
+ * with drm_framebuffer_unregister_private(). But doing this is not
+ * recommended, and it's better to have a normal free-standing &struct
 * drm_framebuffer.
 */
···
 	mutex_unlock(&file_priv->fbs_lock);
 
 	/* drop the reference we picked up in framebuffer lookup */
-	drm_framebuffer_unreference(fb);
+	drm_framebuffer_put(fb);
 
 	/*
 	 * we now own the reference that was stored in the fbs list
···
 		flush_work(&arg.work);
 		destroy_work_on_stack(&arg.work);
 	} else
-		drm_framebuffer_unreference(fb);
+		drm_framebuffer_put(fb);
 
 	return 0;
 
 fail_unref:
-	drm_framebuffer_unreference(fb);
+	drm_framebuffer_put(fb);
 	return -ENOENT;
 }
···
 		ret = -ENODEV;
 	}
 
-	drm_framebuffer_unreference(fb);
+	drm_framebuffer_put(fb);
 
 	return ret;
 }
···
 out_err2:
 	kfree(clips);
 out_err1:
-	drm_framebuffer_unreference(fb);
+	drm_framebuffer_put(fb);
 
 	return ret;
 }
···
 		list_del_init(&fb->filp_head);
 
 		/* This drops the fpriv->fbs reference. */
-		drm_framebuffer_unreference(fb);
+		drm_framebuffer_put(fb);
 	}
 }
···
 	fb->funcs = funcs;
 
-	ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
-				      false, drm_framebuffer_free);
+	ret = __drm_mode_object_add(dev, &fb->base, DRM_MODE_OBJECT_FB,
+				    false, drm_framebuffer_free);
 	if (ret)
 		goto out;
···
 *
 * If successful, this grabs an additional reference to the framebuffer -
 * callers need to make sure to eventually unreference the returned framebuffer
- * again, using @drm_framebuffer_unreference.
+ * again, using drm_framebuffer_put().
 */
 struct drm_framebuffer *drm_framebuffer_lookup(struct drm_device *dev,
					       uint32_t id)
···
 *
 * NOTE: This function is deprecated. For driver-private framebuffers it is not
 * recommended to embed a framebuffer struct into the fbdev struct, instead, a
- * framebuffer pointer is preferred and drm_framebuffer_unreference() should be
- * called when the framebuffer is to be cleaned up.
+ * framebuffer pointer is preferred and drm_framebuffer_put() should be called
+ * when the framebuffer is to be cleaned up.
 */
 void drm_framebuffer_unregister_private(struct drm_framebuffer *fb)
 {
···
 	 * in this manner.
 	 */
 	if (drm_framebuffer_read_refcount(fb) > 1) {
+		if (drm_drv_uses_atomic_modeset(dev)) {
+			int ret = drm_atomic_remove_fb(fb);
+			WARN(ret, "atomic remove_fb failed with %i\n", ret);
+			goto out;
+		}
+
 		drm_modeset_lock_all(dev);
 		/* remove from any CRTC */
 		drm_for_each_crtc(crtc, dev) {
···
 		drm_modeset_unlock_all(dev);
 	}
 
-	drm_framebuffer_unreference(fb);
+out:
+	drm_framebuffer_put(fb);
 }
 EXPORT_SYMBOL(drm_framebuffer_remove);
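The reference()/unreference() to get()/put() rename across this series keeps the usual kref semantics: get takes a reference, put drops one and frees the object when the count hits zero. A minimal userspace model of that lifetime rule (the `fake_fb` names are invented for illustration, not DRM API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a refcounted framebuffer: _get takes a reference,
 * _put drops one and "frees" (here: just flags) the object at zero. */
struct fake_fb {
	int refcount;
	bool freed;
};

static void fake_fb_get(struct fake_fb *fb)
{
	fb->refcount++;
}

static void fake_fb_put(struct fake_fb *fb)
{
	if (--fb->refcount == 0)
		fb->freed = true;	/* stands in for drm_framebuffer_free() */
}
```

Every lookup that returns a pointer with a reference (as drm_framebuffer_lookup() does) must be balanced by exactly one put, which is why so many error paths above gained a drm_framebuffer_put().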
+22 -22
drivers/gpu/drm/drm_gem.c
···
 }
 
 static void
-drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
+drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
 	bool final = false;
···
 	mutex_unlock(&dev->object_name_lock);
 
 	if (final)
-		drm_gem_object_unreference_unlocked(obj);
+		drm_gem_object_put_unlocked(obj);
 }
 
 /*
···
 	if (dev->driver->gem_close_object)
 		dev->driver->gem_close_object(obj, file_priv);
 
-	drm_gem_object_handle_unreference_unlocked(obj);
+	drm_gem_object_handle_put_unlocked(obj);
 
 	return 0;
 }
···
 	WARN_ON(!mutex_is_locked(&dev->object_name_lock));
 	if (obj->handle_count++ == 0)
-		drm_gem_object_reference(obj);
+		drm_gem_object_get(obj);
 
 	/*
 	 * Get the user-visible handle using idr. Preload and perform
···
 	idr_remove(&file_priv->object_idr, handle);
 	spin_unlock(&file_priv->table_lock);
 err_unref:
-	drm_gem_object_handle_unreference_unlocked(obj);
+	drm_gem_object_handle_put_unlocked(obj);
 	return ret;
 }
···
 	/* Check if we currently have a reference on the object */
 	obj = idr_find(&filp->object_idr, handle);
 	if (obj)
-		drm_gem_object_reference(obj);
+		drm_gem_object_get(obj);
 
 	spin_unlock(&filp->table_lock);
···
 err:
 	mutex_unlock(&dev->object_name_lock);
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 	return ret;
 }
···
 	mutex_lock(&dev->object_name_lock);
 	obj = idr_find(&dev->object_name_idr, (int) args->name);
 	if (obj) {
-		drm_gem_object_reference(obj);
+		drm_gem_object_get(obj);
 	} else {
 		mutex_unlock(&dev->object_name_lock);
 		return -ENOENT;
···
 	/* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
 	ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 	if (ret)
 		return ret;
···
 EXPORT_SYMBOL(drm_gem_object_free);
 
 /**
- * drm_gem_object_unreference_unlocked - release a GEM BO reference
+ * drm_gem_object_put_unlocked - drop a GEM buffer object reference
 * @obj: GEM buffer object
 *
 * This releases a reference to @obj. Callers must not hold the
 * &drm_device.struct_mutex lock when calling this function.
 *
- * See also __drm_gem_object_unreference().
+ * See also __drm_gem_object_put().
 */
 void
-drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
+drm_gem_object_put_unlocked(struct drm_gem_object *obj)
 {
 	struct drm_device *dev;
···
 				  &dev->struct_mutex))
 		mutex_unlock(&dev->struct_mutex);
 }
-EXPORT_SYMBOL(drm_gem_object_unreference_unlocked);
+EXPORT_SYMBOL(drm_gem_object_put_unlocked);
 
 /**
- * drm_gem_object_unreference - release a GEM BO reference
+ * drm_gem_object_put - release a GEM buffer object reference
 * @obj: GEM buffer object
 *
 * This releases a reference to @obj. Callers must hold the
···
 * driver doesn't use &drm_device.struct_mutex for anything.
 *
 * For drivers not encumbered with legacy locking use
- * drm_gem_object_unreference_unlocked() instead.
+ * drm_gem_object_put_unlocked() instead.
 */
 void
-drm_gem_object_unreference(struct drm_gem_object *obj)
+drm_gem_object_put(struct drm_gem_object *obj)
 {
 	if (obj) {
 		WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
···
 		kref_put(&obj->refcount, drm_gem_object_free);
 	}
 }
-EXPORT_SYMBOL(drm_gem_object_unreference);
+EXPORT_SYMBOL(drm_gem_object_put);
 
 /**
 * drm_gem_vm_open - vma->ops->open implementation for GEM
···
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 
-	drm_gem_object_reference(obj);
+	drm_gem_object_get(obj);
 }
 EXPORT_SYMBOL(drm_gem_vm_open);
···
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 }
 EXPORT_SYMBOL(drm_gem_vm_close);
···
 	 * (which should happen whether the vma was created by this call, or
 	 * by a vm_open due to mremap or partial unmap or whatever).
 	 */
-	drm_gem_object_reference(obj);
+	drm_gem_object_get(obj);
 
 	return 0;
 }
···
 		return -EINVAL;
 
 	if (!drm_vma_node_is_allowed(node, priv)) {
-		drm_gem_object_unreference_unlocked(obj);
+		drm_gem_object_put_unlocked(obj);
 		return -EACCES;
 	}
 
 	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT,
 			       vma);
 
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 
 	return ret;
 }
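The handle-versus-object distinction the drm_gem.c rename preserves is worth spelling out: only the first userspace handle pins an object reference (`if (obj->handle_count++ == 0) drm_gem_object_get(obj)`), and only dropping the last handle releases it again. A toy model of that invariant (names invented, not DRM API):

```c
#include <assert.h>

/* Toy GEM-like object: many handles share one pinned object reference. */
struct fake_obj {
	int handle_count;
	int refcount;
};

/* First handle takes an object reference; later handles do not. */
static void fake_handle_create(struct fake_obj *obj)
{
	if (obj->handle_count++ == 0)
		obj->refcount++;
}

/* Dropping the last handle releases that reference again. */
static void fake_handle_delete(struct fake_obj *obj)
{
	if (--obj->handle_count == 0)
		obj->refcount--;
}
```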
+5 -5
drivers/gpu/drm/drm_gem_cma_helper.c
···
 	return cma_obj;
 
 error:
-	drm_gem_object_unreference_unlocked(&cma_obj->base);
+	drm_gem_object_put_unlocked(&cma_obj->base);
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_create);
···
 	 */
 	ret = drm_gem_handle_create(file_priv, gem_obj, handle);
 	/* drop reference from allocate - handle holds it now. */
-	drm_gem_object_unreference_unlocked(gem_obj);
+	drm_gem_object_put_unlocked(gem_obj);
 	if (ret)
 		return ERR_PTR(ret);
···
 	*offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
 
-	drm_gem_object_unreference_unlocked(gem_obj);
+	drm_gem_object_put_unlocked(gem_obj);
 
 	return 0;
 }
···
 		return -EINVAL;
 
 	if (!drm_vma_node_is_allowed(node, priv)) {
-		drm_gem_object_unreference_unlocked(obj);
+		drm_gem_object_put_unlocked(obj);
 		return -EACCES;
 	}
 
 	cma_obj = to_drm_gem_cma_obj(obj);
 
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 
 	return cma_obj->vaddr ? (unsigned long)cma_obj->vaddr : -EINVAL;
 }
+1 -2
drivers/gpu/drm/drm_ioc32.c
···
 	m32.handle = (unsigned long)handle;
 	if (m32.handle != (unsigned long)handle)
-		printk_ratelimited(KERN_ERR "compat_drm_addmap truncated handle"
-				   " %p for type %d offset %x\n",
+		pr_err_ratelimited("compat_drm_addmap truncated handle %p for type %d offset %x\n",
 				   handle, m32.type, m32.offset);
 
 	if (copy_to_user(argp, &m32, sizeof(m32)))
+57 -24
drivers/gpu/drm/drm_irq.c
···
 }
 
 /*
+ * "No hw counter" fallback implementation of .get_vblank_counter() hook,
+ * if there is no useable hardware frame counter available.
+ */
+static u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
+{
+	WARN_ON_ONCE(dev->max_vblank_count != 0);
+	return 0;
+}
+
+static u32 __get_vblank_counter(struct drm_device *dev, unsigned int pipe)
+{
+	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
+
+		if (crtc->funcs->get_vblank_counter)
+			return crtc->funcs->get_vblank_counter(crtc);
+	}
+
+	if (dev->driver->get_vblank_counter)
+		return dev->driver->get_vblank_counter(dev, pipe);
+
+	return drm_vblank_no_hw_counter(dev, pipe);
+}
+
+/*
 * Reset the stored timestamp for the current vblank count to correspond
 * to the last vblank occurred.
 *
···
 	 * when drm_vblank_enable() applies the diff
 	 */
 	do {
-		cur_vblank = dev->driver->get_vblank_counter(dev, pipe);
+		cur_vblank = __get_vblank_counter(dev, pipe);
 		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, 0);
-	} while (cur_vblank != dev->driver->get_vblank_counter(dev, pipe) && --count > 0);
+	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
 
 	/*
 	 * Only reinitialize corresponding vblank timestamp if high-precision query
···
 	 * corresponding vblank timestamp.
 	 */
 	do {
-		cur_vblank = dev->driver->get_vblank_counter(dev, pipe);
+		cur_vblank = __get_vblank_counter(dev, pipe);
 		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, flags);
-	} while (cur_vblank != dev->driver->get_vblank_counter(dev, pipe) && --count > 0);
+	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
 
 	if (dev->max_vblank_count != 0) {
 		/* trust the hw counter when it's around */
···
 }
 EXPORT_SYMBOL(drm_accurate_vblank_count);
 
+static void __disable_vblank(struct drm_device *dev, unsigned int pipe)
+{
+	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
+
+		if (crtc->funcs->disable_vblank) {
+			crtc->funcs->disable_vblank(crtc);
+			return;
+		}
+	}
+
+	dev->driver->disable_vblank(dev, pipe);
+}
+
 /*
 * Disable vblank irq's on crtc, make sure that last vblank count
 * of hardware and corresponding consistent software vblank counter
···
 	 * hardware potentially runtime suspended.
 	 */
 	if (vblank->enabled) {
-		dev->driver->disable_vblank(dev, pipe);
+		__disable_vblank(dev, pipe);
 		vblank->enabled = false;
 	}
···
 }
 EXPORT_SYMBOL(drm_crtc_send_vblank_event);
 
+static int __enable_vblank(struct drm_device *dev, unsigned int pipe)
+{
+	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
+
+		if (crtc->funcs->enable_vblank)
+			return crtc->funcs->enable_vblank(crtc);
+	}
+
+	return dev->driver->enable_vblank(dev, pipe);
+}
+
 /**
 * drm_vblank_enable - enable the vblank interrupt on a CRTC
 * @dev: DRM device
···
 	 * timestamps. Filter code in drm_handle_vblank() will
 	 * prevent double-accounting of same vblank interval.
 	 */
-	ret = dev->driver->enable_vblank(dev, pipe);
+	ret = __enable_vblank(dev, pipe);
 	DRM_DEBUG("enabling vblank on crtc %u, ret: %d\n", pipe, ret);
 	if (ret)
 		atomic_dec(&vblank->refcount);
···
 	return drm_handle_vblank(crtc->dev, drm_crtc_index(crtc));
 }
 EXPORT_SYMBOL(drm_crtc_handle_vblank);
-
-/**
- * drm_vblank_no_hw_counter - "No hw counter" implementation of .get_vblank_counter()
- * @dev: DRM device
- * @pipe: CRTC for which to read the counter
- *
- * Drivers can plug this into the .get_vblank_counter() function if
- * there is no useable hardware frame counter available.
- *
- * Returns:
- * 0
- */
-u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
-{
-	WARN_ON_ONCE(dev->max_vblank_count != 0);
-	return 0;
-}
-EXPORT_SYMBOL(drm_vblank_no_hw_counter);
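The new __get_vblank_counter()/__enable_vblank()/__disable_vblank() wrappers in drm_irq.c share one dispatch shape: prefer the new per-CRTC hook from &drm_crtc_funcs, fall back to the legacy per-device driver hook, then to a default. A compilable sketch of that chain with heavily simplified, renamed structures (not the real DRM types):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

struct toy_crtc_funcs {
	u32 (*get_vblank_counter)(void);
};

struct toy_driver {
	u32 (*get_vblank_counter)(int pipe);
};

struct toy_dev {
	struct toy_crtc_funcs *crtc_funcs;	/* per-CRTC hooks (may be NULL) */
	struct toy_driver *driver;		/* legacy per-device hooks */
};

/* Mirrors __get_vblank_counter(): CRTC hook first, then driver hook,
 * then the "no hw counter" fallback that always returns 0. */
static u32 toy_get_vblank_counter(struct toy_dev *dev, int pipe)
{
	if (dev->crtc_funcs && dev->crtc_funcs->get_vblank_counter)
		return dev->crtc_funcs->get_vblank_counter();

	if (dev->driver && dev->driver->get_vblank_counter)
		return dev->driver->get_vblank_counter(pipe);

	return 0;	/* no useable hardware frame counter */
}

/* Sample hooks used to exercise the fallback order below. */
static u32 crtc_counter(void) { return 42; }
static u32 drv_counter(int pipe) { (void)pipe; return 7; }
```

Folding drm_vblank_no_hw_counter() into the last fallback is what lets the patch delete the exported helper: drivers simply leave both hooks unset.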
+1 -1
drivers/gpu/drm/drm_mm.c
···
 __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
 {
 	return drm_mm_interval_tree_iter_first((struct rb_root *)&mm->interval_tree,
-					       start, last);
+					       start, last) ?: (struct drm_mm_node *)&mm->head_node;
 }
 EXPORT_SYMBOL(__drm_mm_interval_first);
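The drm_mm.c one-liner uses GCC's `a ?: b` extension: yield the tree lookup result unless it is NULL, in which case substitute the head-node sentinel so callers never have to NULL-check. The idiom in isolation (toy names, not the DRM allocator):

```c
#include <assert.h>
#include <stddef.h>

static int sentinel = -1;

/* "a ?: b" evaluates a once and yields it if non-NULL, else b - the
 * same shape __drm_mm_interval_first() uses to return &mm->head_node
 * as an end-of-range sentinel instead of NULL. GNU C extension. */
static int *first_or_sentinel(int *lookup_result)
{
	return lookup_result ?: &sentinel;
}
```

The win over plain `a ? a : b` is that `a` is evaluated only once, which matters when it is an expensive call like the interval-tree walk here.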
+11 -11
drivers/gpu/drm/drm_mode_config.c
···
 	}
 	card_res->count_encoders = count;
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	count = 0;
 	connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (count < card_res->count_connectors &&
 		    put_user(connector->base.id, connector_id + count)) {
-			drm_connector_list_iter_put(&conn_iter);
+			drm_connector_list_iter_end(&conn_iter);
 			return -EFAULT;
 		}
 		count++;
 	}
 	card_res->count_connectors = count;
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 
 	return ret;
 }
···
 	if (encoder->funcs->reset)
 		encoder->funcs->reset(encoder);
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter)
 		if (connector->funcs->reset)
 			connector->funcs->reset(connector);
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 }
 EXPORT_SYMBOL(drm_mode_config_reset);
···
 		encoder->funcs->destroy(encoder);
 	}
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		/* drm_connector_list_iter holds a full reference to the
 		 * current connector itself, which means it is inherently safe
 		 * against unreferencing the current connector - but not against
 		 * deleting it right away. */
-		drm_connector_unreference(connector);
+		drm_connector_put(connector);
 	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 	if (WARN_ON(!list_empty(&dev->mode_config.connector_list))) {
-		drm_connector_list_iter_get(dev, &conn_iter);
+		drm_connector_list_iter_begin(dev, &conn_iter);
 		drm_for_each_connector_iter(connector, &conn_iter)
 			DRM_ERROR("connector %s leaked!\n", connector->name);
-		drm_connector_list_iter_put(&conn_iter);
+		drm_connector_list_iter_end(&conn_iter);
 	}
 
 	list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
···
 	list_for_each_entry_safe(blob, bt, &dev->mode_config.property_blob_list,
 				 head_global) {
-		drm_property_unreference_blob(blob);
+		drm_property_blob_put(blob);
 	}
 
 	/*
+20 -24
drivers/gpu/drm/drm_mode_object.c
···
 * Internal function to assign a slot in the object idr and optionally
 * register the object into the idr.
 */
-int drm_mode_object_get_reg(struct drm_device *dev,
-			    struct drm_mode_object *obj,
-			    uint32_t obj_type,
-			    bool register_obj,
-			    void (*obj_free_cb)(struct kref *kref))
+int __drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
+			  uint32_t obj_type, bool register_obj,
+			  void (*obj_free_cb)(struct kref *kref))
 {
 	int ret;
···
 }
 
 /**
- * drm_mode_object_get - allocate a new modeset identifier
+ * drm_mode_object_add - allocate a new modeset identifier
 * @dev: DRM device
 * @obj: object pointer, used to generate unique ID
 * @obj_type: object type
 *
 * Create a unique identifier based on @ptr in @dev's identifier space. Used
- * for tracking modes, CRTCs and connectors. Note that despite the _get postfix
- * modeset identifiers are _not_ reference counted. Hence don't use this for
- * reference counted modeset objects like framebuffers.
+ * for tracking modes, CRTCs and connectors.
 *
 * Returns:
 * Zero on success, error code on failure.
 */
-int drm_mode_object_get(struct drm_device *dev,
+int drm_mode_object_add(struct drm_device *dev,
 			struct drm_mode_object *obj, uint32_t obj_type)
 {
-	return drm_mode_object_get_reg(dev, obj, obj_type, true, NULL);
+	return __drm_mode_object_add(dev, obj, obj_type, true, NULL);
 }
 
 void drm_mode_object_register(struct drm_device *dev,
···
 *
 * This function is used to look up a modeset object. It will acquire a
 * reference for reference counted objects. This reference must be dropped again
- * by calling drm_mode_object_unreference().
+ * by calling drm_mode_object_put().
 */
 struct drm_mode_object *drm_mode_object_find(struct drm_device *dev,
					     uint32_t id, uint32_t type)
···
 EXPORT_SYMBOL(drm_mode_object_find);
 
 /**
- * drm_mode_object_unreference - decr the object refcnt
- * @obj: mode_object
+ * drm_mode_object_put - release a mode object reference
+ * @obj: DRM mode object
 *
 * This function decrements the object's refcount if it is a refcounted modeset
 * object. It is a no-op on any other object. This is used to drop references
- * acquired with drm_mode_object_reference().
+ * acquired with drm_mode_object_get().
 */
-void drm_mode_object_unreference(struct drm_mode_object *obj)
+void drm_mode_object_put(struct drm_mode_object *obj)
 {
 	if (obj->free_cb) {
 		DRM_DEBUG("OBJ ID: %d (%d)\n", obj->id, kref_read(&obj->refcount));
 		kref_put(&obj->refcount, obj->free_cb);
 	}
 }
-EXPORT_SYMBOL(drm_mode_object_unreference);
+EXPORT_SYMBOL(drm_mode_object_put);
 
 /**
- * drm_mode_object_reference - incr the object refcnt
- * @obj: mode_object
+ * drm_mode_object_get - acquire a mode object reference
+ * @obj: DRM mode object
 *
 * This function increments the object's refcount if it is a refcounted modeset
 * object. It is a no-op on any other object. References should be dropped again
- * by calling drm_mode_object_unreference().
+ * by calling drm_mode_object_put().
 */
-void drm_mode_object_reference(struct drm_mode_object *obj)
+void drm_mode_object_get(struct drm_mode_object *obj)
 {
 	if (obj->free_cb) {
 		DRM_DEBUG("OBJ ID: %d (%d)\n", obj->id, kref_read(&obj->refcount));
 		kref_get(&obj->refcount);
 	}
 }
-EXPORT_SYMBOL(drm_mode_object_reference);
+EXPORT_SYMBOL(drm_mode_object_get);
 
 /**
 * drm_object_attach_property - attach a property to a modeset object
···
 				&arg->count_props);
 
 out_unref:
-	drm_mode_object_unreference(obj);
+	drm_mode_object_put(obj);
 out:
 	drm_modeset_unlock_all(dev);
 	return ret;
···
 	drm_property_change_valid_put(property, ref);
 
 out_unref:
-	drm_mode_object_unreference(arg_obj);
+	drm_mode_object_put(arg_obj);
 out:
 	drm_modeset_unlock_all(dev);
 	return ret;
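Note that drm_mode_object_get()/_put() are deliberate no-ops for objects without a free callback: one call site can then handle both refcounted mode objects (framebuffers, connectors) and static ones (modes, CRTCs). A small model of that conditional refcounting (invented names, not the DRM API):

```c
#include <assert.h>
#include <stddef.h>

struct toy_mode_object {
	int refcount;
	void (*free_cb)(struct toy_mode_object *obj); /* NULL: not refcounted */
};

static int toy_freed;
static void toy_free(struct toy_mode_object *obj) { (void)obj; toy_freed = 1; }

/* No-op unless the object registered a free callback, i.e. opted in
 * to reference counting - mirroring drm_mode_object_get(). */
static void toy_object_get(struct toy_mode_object *obj)
{
	if (obj->free_cb)
		obj->refcount++;
}

/* Drops a reference and frees at zero, again only for refcounted
 * objects - mirroring drm_mode_object_put(). */
static void toy_object_put(struct toy_mode_object *obj)
{
	if (obj->free_cb && --obj->refcount == 0)
		obj->free_cb(obj);
}
```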
+1 -1
drivers/gpu/drm/drm_modes.c
···
 	if (!nmode)
 		return NULL;
 
-	if (drm_mode_object_get(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
+	if (drm_mode_object_add(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
 		kfree(nmode);
 		return NULL;
 	}
+7 -7
drivers/gpu/drm/drm_plane.c
···
 	struct drm_mode_config *config = &dev->mode_config;
 	int ret;
 
-	ret = drm_mode_object_get(dev, &plane->base, DRM_MODE_OBJECT_PLANE);
+	ret = drm_mode_object_add(dev, &plane->base, DRM_MODE_OBJECT_PLANE);
 	if (ret)
 		return ret;
···
 		return;
 	}
 	/* disconnect the plane from the fb and crtc: */
-	drm_framebuffer_unreference(plane->old_fb);
+	drm_framebuffer_put(plane->old_fb);
 	plane->old_fb = NULL;
 	plane->fb = NULL;
 	plane->crtc = NULL;
···
 out:
 	if (fb)
-		drm_framebuffer_unreference(fb);
+		drm_framebuffer_put(fb);
 	if (plane->old_fb)
-		drm_framebuffer_unreference(plane->old_fb);
+		drm_framebuffer_put(plane->old_fb);
 	plane->old_fb = NULL;
 
 	return ret;
···
 	} else {
 		fb = crtc->cursor->fb;
 		if (fb)
-			drm_framebuffer_reference(fb);
+			drm_framebuffer_get(fb);
 	}
 
 	if (req->flags & DRM_MODE_CURSOR_MOVE) {
···
 	if (ret && crtc->funcs->page_flip_target)
 		drm_crtc_vblank_put(crtc);
 	if (fb)
-		drm_framebuffer_unreference(fb);
+		drm_framebuffer_put(fb);
 	if (crtc->primary->old_fb)
-		drm_framebuffer_unreference(crtc->primary->old_fb);
+		drm_framebuffer_put(crtc->primary->old_fb);
 	crtc->primary->old_fb = NULL;
 	drm_modeset_unlock_crtc(crtc);
+3 -4
drivers/gpu/drm/drm_plane_helper.c
···
 	 */
 	WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (connector->encoder && connector->encoder->crtc == crtc) {
 			if (connector_list != NULL && count < num_connectors)
···
 			count++;
 		}
 	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 
 	return count;
 }
···
 		goto out;
 	}
 
-	if (plane_funcs->prepare_fb && plane_state->fb &&
-	    plane_state->fb != old_fb) {
+	if (plane_funcs->prepare_fb && plane_state->fb != old_fb) {
 		ret = plane_funcs->prepare_fb(plane,
 					      plane_state);
 		if (ret)
+5 -5
drivers/gpu/drm/drm_prime.c
···
 		return dma_buf;
 
 	drm_dev_ref(dev);
-	drm_gem_object_reference(exp_info->priv);
+	drm_gem_object_get(exp_info->priv);
 
 	return dma_buf;
 }
···
 	struct drm_device *dev = obj->dev;
 
 	/* drop the reference on the export fd holds */
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 
 	drm_dev_unref(dev);
 }
···
 fail_put_dmabuf:
 	dma_buf_put(dmabuf);
 out:
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 out_unlock:
 	mutex_unlock(&file_priv->prime.lock);
 
···
 		 * Importing dmabuf exported from out own gem increases
 		 * refcount on gem itself instead of f_count of dmabuf.
 		 */
-		drm_gem_object_reference(obj);
+		drm_gem_object_get(obj);
 		return obj;
 	}
 }
···
 
 	/* _handle_create_tail unconditionally unlocks dev->object_name_lock. */
 	ret = drm_gem_handle_create_tail(file_priv, obj, handle);
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_put_unlocked(obj);
 	if (ret)
 		goto out_put;
 
+1 -1
drivers/gpu/drm/drm_print.c
···
 
 void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf)
 {
-	dev_printk(KERN_INFO, p->arg, "[" DRM_NAME "] %pV", vaf);
+	dev_info(p->arg, "[" DRM_NAME "] %pV", vaf);
 }
 EXPORT_SYMBOL(__drm_printfn_info);
 
+13 -7
drivers/gpu/drm/drm_probe_helper.c
···
 	if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll)
 		return;
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
 					 DRM_CONNECTOR_POLL_DISCONNECT))
 			poll = true;
 	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 
 	if (dev->mode_config.delayed_event) {
 		/*
···
 		count = drm_add_edid_modes(connector, edid);
 		drm_edid_to_eld(connector, edid);
 	} else {
-		count = drm_load_edid_firmware(connector);
+		struct edid *edid = drm_load_edid_firmware(connector);
+		if (!IS_ERR_OR_NULL(edid)) {
+			drm_mode_connector_update_edid_property(connector, edid);
+			count = drm_add_edid_modes(connector, edid);
+			drm_edid_to_eld(connector, edid);
+			kfree(edid);
+		}
 		if (count == 0)
 			count = (*connector_funcs->get_modes)(connector);
 	}
···
 		goto out;
 	}
 
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		/* Ignore forced connectors. */
 		if (connector->force)
···
 			changed = true;
 		}
 	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 
 	mutex_unlock(&dev->mode_config.mutex);
 
···
 		return false;
 
 	mutex_lock(&dev->mode_config.mutex);
-	drm_connector_list_iter_get(dev, &conn_iter);
+	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
 		/* Only handle HPD capable connectors. */
 		if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
···
 		if (old_status != connector->status)
 			changed = true;
 	}
-	drm_connector_list_iter_put(&conn_iter);
+	drm_connector_list_iter_end(&conn_iter);
 	mutex_unlock(&dev->mode_config.mutex);
 
 	if (changed)
+26 -26
drivers/gpu/drm/drm_property.c
···
 		goto fail;
 	}
 
-	ret = drm_mode_object_get(dev, &property->base, DRM_MODE_OBJECT_PROPERTY);
+	ret = drm_mode_object_add(dev, &property->base, DRM_MODE_OBJECT_PROPERTY);
 	if (ret)
 		goto fail;
 
···
 	if (data)
 		memcpy(blob->data, data, length);
 
-	ret = drm_mode_object_get_reg(dev, &blob->base, DRM_MODE_OBJECT_BLOB,
-				      true, drm_property_free_blob);
+	ret = __drm_mode_object_add(dev, &blob->base, DRM_MODE_OBJECT_BLOB,
+				    true, drm_property_free_blob);
 	if (ret) {
 		kfree(blob);
 		return ERR_PTR(-EINVAL);
···
 EXPORT_SYMBOL(drm_property_create_blob);
 
 /**
- * drm_property_unreference_blob - Unreference a blob property
- * @blob: Pointer to blob property
+ * drm_property_blob_put - release a blob property reference
+ * @blob: DRM blob property
  *
- * Drop a reference on a blob property. May free the object.
+ * Releases a reference to a blob property. May free the object.
  */
-void drm_property_unreference_blob(struct drm_property_blob *blob)
+void drm_property_blob_put(struct drm_property_blob *blob)
 {
 	if (!blob)
 		return;
 
-	drm_mode_object_unreference(&blob->base);
+	drm_mode_object_put(&blob->base);
 }
-EXPORT_SYMBOL(drm_property_unreference_blob);
+EXPORT_SYMBOL(drm_property_blob_put);
 
 void drm_property_destroy_user_blobs(struct drm_device *dev,
 				     struct drm_file *file_priv)
···
 	 */
 	list_for_each_entry_safe(blob, bt, &file_priv->blobs, head_file) {
 		list_del_init(&blob->head_file);
-		drm_property_unreference_blob(blob);
+		drm_property_blob_put(blob);
 	}
 }
 
 /**
- * drm_property_reference_blob - Take a reference on an existing property
- * @blob: Pointer to blob property
+ * drm_property_blob_get - acquire blob property reference
+ * @blob: DRM blob property
  *
- * Take a new reference on an existing blob property. Returns @blob, which
+ * Acquires a reference to an existing blob property. Returns @blob, which
  * allows this to be used as a shorthand in assignments.
  */
-struct drm_property_blob *drm_property_reference_blob(struct drm_property_blob *blob)
+struct drm_property_blob *drm_property_blob_get(struct drm_property_blob *blob)
 {
-	drm_mode_object_reference(&blob->base);
+	drm_mode_object_get(&blob->base);
 	return blob;
 }
-EXPORT_SYMBOL(drm_property_reference_blob);
+EXPORT_SYMBOL(drm_property_blob_get);
 
 /**
  * drm_property_lookup_blob - look up a blob property and take a reference
···
  *
  * If successful, this takes an additional reference to the blob property.
  * callers need to make sure to eventually unreference the returned property
- * again, using @drm_property_unreference_blob.
+ * again, using drm_property_blob_put().
  *
  * Return:
  * NULL on failure, pointer to the blob on success.
···
 		goto err_created;
 	}
 
-	drm_property_unreference_blob(old_blob);
+	drm_property_blob_put(old_blob);
 	*replace = new_blob;
 
 	return 0;
 
 err_created:
-	drm_property_unreference_blob(new_blob);
+	drm_property_blob_put(new_blob);
 	return ret;
 }
 EXPORT_SYMBOL(drm_property_replace_global_blob);
···
 	}
 	out_resp->length = blob->length;
 unref:
-	drm_property_unreference_blob(blob);
+	drm_property_blob_put(blob);
 
 	return ret;
 }
···
 	return 0;
 
 out_blob:
-	drm_property_unreference_blob(blob);
+	drm_property_blob_put(blob);
 	return ret;
 }
 
···
 	mutex_unlock(&dev->mode_config.blob_lock);
 
 	/* One reference from lookup, and one from the filp. */
-	drm_property_unreference_blob(blob);
-	drm_property_unreference_blob(blob);
+	drm_property_blob_put(blob);
+	drm_property_blob_put(blob);
 
 	return 0;
 
 err:
 	mutex_unlock(&dev->mode_config.blob_lock);
-	drm_property_unreference_blob(blob);
+	drm_property_blob_put(blob);
 
 	return ret;
 }
···
 		return;
 
 	if (drm_property_type_is(property, DRM_MODE_PROP_OBJECT)) {
-		drm_mode_object_unreference(ref);
+		drm_mode_object_put(ref);
 	} else if (drm_property_type_is(property, DRM_MODE_PROP_BLOB))
-		drm_property_unreference_blob(obj_to_blob(ref));
+		drm_property_blob_put(obj_to_blob(ref));
 }
+20 -20
drivers/gpu/drm/exynos/exynos_drm_crtc.c
···
 	kfree(exynos_crtc);
 }
 
+static int exynos_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
+
+	if (exynos_crtc->ops->enable_vblank)
+		return exynos_crtc->ops->enable_vblank(exynos_crtc);
+
+	return 0;
+}
+
+static void exynos_drm_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
+
+	if (exynos_crtc->ops->disable_vblank)
+		exynos_crtc->ops->disable_vblank(exynos_crtc);
+}
+
 static const struct drm_crtc_funcs exynos_crtc_funcs = {
 	.set_config = drm_atomic_helper_set_config,
 	.page_flip = drm_atomic_helper_page_flip,
···
 	.reset = drm_atomic_helper_crtc_reset,
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+	.enable_vblank = exynos_drm_crtc_enable_vblank,
+	.disable_vblank = exynos_drm_crtc_disable_vblank,
 };
 
 struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
···
 	plane->funcs->destroy(plane);
 	kfree(exynos_crtc);
 	return ERR_PTR(ret);
-}
-
-int exynos_drm_crtc_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct exynos_drm_crtc *exynos_crtc = exynos_drm_crtc_from_pipe(dev,
-									pipe);
-
-	if (exynos_crtc->ops->enable_vblank)
-		return exynos_crtc->ops->enable_vblank(exynos_crtc);
-
-	return 0;
-}
-
-void exynos_drm_crtc_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct exynos_drm_crtc *exynos_crtc = exynos_drm_crtc_from_pipe(dev,
-									pipe);
-
-	if (exynos_crtc->ops->disable_vblank)
-		exynos_crtc->ops->disable_vblank(exynos_crtc);
 }
 
 int exynos_drm_crtc_get_pipe_from_type(struct drm_device *drm_dev,
-2
drivers/gpu/drm/exynos/exynos_drm_crtc.h
···
 					enum exynos_drm_output_type type,
 					const struct exynos_drm_crtc_ops *ops,
 					void *context);
-int exynos_drm_crtc_enable_vblank(struct drm_device *dev, unsigned int pipe);
-void exynos_drm_crtc_disable_vblank(struct drm_device *dev, unsigned int pipe);
 void exynos_drm_crtc_wait_pending_update(struct exynos_drm_crtc *exynos_crtc);
 void exynos_drm_crtc_finish_update(struct exynos_drm_crtc *exynos_crtc,
 				   struct exynos_drm_plane *exynos_plane);
-4
drivers/gpu/drm/exynos/exynos_drm_drv.c
···
 #include <drm/exynos_drm.h>
 
 #include "exynos_drm_drv.h"
-#include "exynos_drm_crtc.h"
 #include "exynos_drm_fbdev.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_gem.h"
···
 	.preclose = exynos_drm_preclose,
 	.lastclose = exynos_drm_lastclose,
 	.postclose = exynos_drm_postclose,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = exynos_drm_crtc_enable_vblank,
-	.disable_vblank = exynos_drm_crtc_disable_vblank,
 	.gem_free_object_unlocked = exynos_drm_gem_free_object,
 	.gem_vm_ops = &exynos_drm_gem_vm_ops,
 	.dumb_create = exynos_drm_gem_dumb_create,
-8
drivers/gpu/drm/exynos/exynos_drm_drv.h
···
 	wait_queue_head_t wait;
 };
 
-static inline struct exynos_drm_crtc *
-exynos_drm_crtc_from_pipe(struct drm_device *dev, int pipe)
-{
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
-
-	return to_exynos_crtc(crtc);
-}
-
 static inline struct device *to_dma_dev(struct drm_device *dev)
 {
 	struct exynos_drm_private *priv = dev->dev_private;
-2
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
···
 			VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 	if (!exynos_gem->kvaddr) {
 		DRM_ERROR("failed to map pages to kernel space.\n");
-		drm_fb_helper_release_fbi(helper);
 		return -EIO;
 	}
 
···
 	}
 
 	drm_fb_helper_unregister_fbi(fb_helper);
-	drm_fb_helper_release_fbi(fb_helper);
 
 	drm_fb_helper_fini(fb_helper);
 }
+5 -2
drivers/gpu/drm/exynos/exynos_hdmi.c
···
 
 #include <drm/exynos_drm.h>
 
-#include "exynos_drm_drv.h"
 #include "exynos_drm_crtc.h"
 
 #define HOTPLUG_DEBOUNCE_MS		1100
···
 	struct drm_device *drm_dev = data;
 	struct hdmi_context *hdata = dev_get_drvdata(dev);
 	struct drm_encoder *encoder = &hdata->encoder;
+	struct exynos_drm_crtc *exynos_crtc;
+	struct drm_crtc *crtc;
 	int ret, pipe;
 
 	hdata->drm_dev = drm_dev;
···
 
 	hdata->phy_clk.enable = hdmiphy_clk_enable;
 
-	exynos_drm_crtc_from_pipe(drm_dev, pipe)->pipe_clk = &hdata->phy_clk;
+	crtc = drm_crtc_from_index(drm_dev, pipe);
+	exynos_crtc = to_exynos_crtc(crtc);
+	exynos_crtc->pipe_clk = &hdata->phy_clk;
 
 	encoder->possible_crtcs = 1 << pipe;
 
+26
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_crtc.c
···
 	.mode_set_nofb = fsl_dcu_drm_crtc_mode_set_nofb,
 };
 
+static int fsl_dcu_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
+	unsigned int value;
+
+	regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
+	value &= ~DCU_INT_MASK_VBLANK;
+	regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
+
+	return 0;
+}
+
+static void fsl_dcu_drm_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
+	unsigned int value;
+
+	regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
+	value |= DCU_INT_MASK_VBLANK;
+	regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
+}
+
 static const struct drm_crtc_funcs fsl_dcu_drm_crtc_funcs = {
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
···
 	.page_flip = drm_atomic_helper_page_flip,
 	.reset = drm_atomic_helper_crtc_reset,
 	.set_config = drm_atomic_helper_set_config,
+	.enable_vblank = fsl_dcu_drm_crtc_enable_vblank,
+	.disable_vblank = fsl_dcu_drm_crtc_disable_vblank,
 };
 
 int fsl_dcu_drm_crtc_create(struct fsl_dcu_drm_device *fsl_dev)
-26
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
···
 	return IRQ_HANDLED;
 }
 
-static int fsl_dcu_drm_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
-	unsigned int value;
-
-	regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
-	value &= ~DCU_INT_MASK_VBLANK;
-	regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
-
-	return 0;
-}
-
-static void fsl_dcu_drm_disable_vblank(struct drm_device *dev,
-				       unsigned int pipe)
-{
-	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
-	unsigned int value;
-
-	regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
-	value |= DCU_INT_MASK_VBLANK;
-	regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
-}
-
 static void fsl_dcu_drm_lastclose(struct drm_device *dev)
 {
 	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
···
 	.load = fsl_dcu_load,
 	.unload = fsl_dcu_unload,
 	.irq_handler = fsl_dcu_drm_irq,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = fsl_dcu_drm_enable_vblank,
-	.disable_vblank = fsl_dcu_drm_disable_vblank,
 	.gem_free_object_unlocked = drm_gem_cma_free_object,
 	.gem_vm_ops = &drm_gem_cma_vm_ops,
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+4 -5
drivers/gpu/drm/gma500/cdv_intel_lvds.c
···
 			    head) {
 		if (tmp_encoder != encoder
 		    && tmp_encoder->crtc == encoder->crtc) {
-			printk(KERN_ERR "Can't enable LVDS and another "
-			       "encoder on the same pipe\n");
+			pr_err("Can't enable LVDS and another encoder on the same pipe\n");
 			return false;
 		}
 	}
···
 
 failed_find:
 	mutex_unlock(&dev->mode_config.mutex);
-	printk(KERN_ERR "Failed find\n");
+	pr_err("Failed find\n");
 	psb_intel_i2c_destroy(gma_encoder->ddc_bus);
 failed_ddc:
-	printk(KERN_ERR "Failed DDC\n");
+	pr_err("Failed DDC\n");
 	psb_intel_i2c_destroy(gma_encoder->i2c_bus);
 failed_blc_i2c:
-	printk(KERN_ERR "Failed BLC\n");
+	pr_err("Failed BLC\n");
 	drm_encoder_cleanup(encoder);
 	drm_connector_cleanup(connector);
 	kfree(lvds_priv);
+3 -6
drivers/gpu/drm/gma500/framebuffer.c
···
 	info = drm_fb_helper_alloc_fbi(&fbdev->psb_fb_helper);
 	if (IS_ERR(info)) {
 		ret = PTR_ERR(info);
-		goto err_free_range;
+		goto out;
 	}
 	info->par = fbdev;
 
···
 
 	ret = psb_framebuffer_init(dev, psbfb, &mode_cmd, backing);
 	if (ret)
-		goto err_release;
+		goto out;
 
 	fb = &psbfb->base;
 	psbfb->fbdev = info;
···
 		 psbfb->base.width, psbfb->base.height);
 
 	return 0;
-err_release:
-	drm_fb_helper_release_fbi(&fbdev->psb_fb_helper);
-err_free_range:
+out:
 	psb_gtt_free_range(dev, backing);
 	return ret;
 }
···
 	struct psb_framebuffer *psbfb = &fbdev->pfb;
 
 	drm_fb_helper_unregister_fbi(&fbdev->psb_fb_helper);
-	drm_fb_helper_release_fbi(&fbdev->psb_fb_helper);
 
 	drm_fb_helper_fini(&fbdev->psb_fb_helper);
 	drm_framebuffer_unregister_private(&psbfb->base);
+9 -9
drivers/gpu/drm/gma500/oaktrail_lvds.c
···
 		((ti->vblank_hi << 8) | ti->vblank_lo);
 	mode->clock = ti->pixel_clock * 10;
 #if 0
-	printk(KERN_INFO "hdisplay is %d\n", mode->hdisplay);
-	printk(KERN_INFO "vdisplay is %d\n", mode->vdisplay);
-	printk(KERN_INFO "HSS is %d\n", mode->hsync_start);
-	printk(KERN_INFO "HSE is %d\n", mode->hsync_end);
-	printk(KERN_INFO "htotal is %d\n", mode->htotal);
-	printk(KERN_INFO "VSS is %d\n", mode->vsync_start);
-	printk(KERN_INFO "VSE is %d\n", mode->vsync_end);
-	printk(KERN_INFO "vtotal is %d\n", mode->vtotal);
-	printk(KERN_INFO "clock is %d\n", mode->clock);
+	pr_info("hdisplay is %d\n", mode->hdisplay);
+	pr_info("vdisplay is %d\n", mode->vdisplay);
+	pr_info("HSS is %d\n", mode->hsync_start);
+	pr_info("HSE is %d\n", mode->hsync_end);
+	pr_info("htotal is %d\n", mode->htotal);
+	pr_info("VSS is %d\n", mode->vsync_start);
+	pr_info("VSE is %d\n", mode->vsync_end);
+	pr_info("vtotal is %d\n", mode->vtotal);
+	pr_info("clock is %d\n", mode->clock);
 #endif
 	mode_dev->panel_fixed_mode = mode;
 }
+2 -3
drivers/gpu/drm/gma500/psb_drv.h
···
 #define PSB_RSGX32(_offs)					\
 ({								\
 	if (inl(dev_priv->apm_base + PSB_APM_STS) & 0x3) {	\
-		printk(KERN_ERR					\
-			"access sgx when it's off!! (READ) %s, %d\n", \
-			__FILE__, __LINE__);			\
+		pr_err("access sgx when it's off!! (READ) %s, %d\n", \
+		       __FILE__, __LINE__);			\
 		melay(1000);					\
 	}							\
 	ioread32(dev_priv->sgx_reg + (_offs));			\
+3 -4
drivers/gpu/drm/gma500/psb_intel_lvds.c
···
 
 	/* PSB requires the LVDS is on pipe B, MRST has only one pipe anyway */
 	if (!IS_MRST(dev) && gma_crtc->pipe == 0) {
-		printk(KERN_ERR "Can't support LVDS on pipe A\n");
+		pr_err("Can't support LVDS on pipe A\n");
 		return false;
 	}
 	if (IS_MRST(dev) && gma_crtc->pipe != 0) {
-		printk(KERN_ERR "Must use PIPE A\n");
+		pr_err("Must use PIPE A\n");
 		return false;
 	}
 	/* Should never happen!! */
···
 			    head) {
 		if (tmp_encoder != encoder
 		    && tmp_encoder->crtc == encoder->crtc) {
-			printk(KERN_ERR "Can't enable LVDS and another "
-			       "encoder on the same pipe\n");
+			pr_err("Can't enable LVDS and another encoder on the same pipe\n");
 			return false;
 		}
 	}
+20
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
···
 	spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
 }
 
+static int hibmc_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct hibmc_drm_private *priv = crtc->dev->dev_private;
+
+	writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(1),
+	       priv->mmio + HIBMC_RAW_INTERRUPT_EN);
+
+	return 0;
+}
+
+static void hibmc_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct hibmc_drm_private *priv = crtc->dev->dev_private;
+
+	writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(0),
+	       priv->mmio + HIBMC_RAW_INTERRUPT_EN);
+}
+
 static const struct drm_crtc_funcs hibmc_crtc_funcs = {
 	.page_flip = drm_atomic_helper_page_flip,
 	.set_config = drm_atomic_helper_set_config,
···
 	.reset = drm_atomic_helper_crtc_reset,
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+	.enable_vblank = hibmc_crtc_enable_vblank,
+	.disable_vblank = hibmc_crtc_disable_vblank,
 };
 
 static const struct drm_crtc_helper_funcs hibmc_crtc_helper_funcs = {
-23
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
···
 	.llseek = no_llseek,
 };
 
-static int hibmc_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct hibmc_drm_private *priv =
-		(struct hibmc_drm_private *)dev->dev_private;
-
-	writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(1),
-	       priv->mmio + HIBMC_RAW_INTERRUPT_EN);
-
-	return 0;
-}
-
-static void hibmc_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct hibmc_drm_private *priv =
-		(struct hibmc_drm_private *)dev->dev_private;
-
-	writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(0),
-	       priv->mmio + HIBMC_RAW_INTERRUPT_EN);
-}
-
 irqreturn_t hibmc_drm_interrupt(int irq, void *arg)
 {
 	struct drm_device *dev = (struct drm_device *)arg;
···
 	.desc = "hibmc drm driver",
 	.major = 1,
 	.minor = 0,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = hibmc_enable_vblank,
-	.disable_vblank = hibmc_disable_vblank,
 	.gem_free_object_unlocked = hibmc_gem_free_object,
 	.dumb_create = hibmc_dumb_create,
 	.dumb_map_offset = hibmc_dumb_mmap_offset,
-2
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
···
 	return 0;
 
 out_release_fbi:
-	drm_fb_helper_release_fbi(helper);
 	ret1 = ttm_bo_reserve(&bo->bo, true, false, NULL);
 	if (ret1) {
 		DRM_ERROR("failed to rsv ttm_bo when release fbi: %d\n", ret1);
···
 	struct drm_fb_helper *fbh = &fbdev->helper;
 
 	drm_fb_helper_unregister_fbi(fbh);
-	drm_fb_helper_release_fbi(fbh);
 
 	drm_fb_helper_fini(fbh);
 
+4 -7
drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
···
 			SOCKET_QOS_EN, SOCKET_QOS_EN);
 }
 
-static int ade_enable_vblank(struct drm_device *dev, unsigned int pipe)
+static int ade_crtc_enable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
 	struct ade_crtc *acrtc = to_ade_crtc(crtc);
 	struct ade_hw_ctx *ctx = acrtc->ctx;
 	void __iomem *base = ctx->base;
···
 	return 0;
 }
 
-static void ade_disable_vblank(struct drm_device *dev, unsigned int pipe)
+static void ade_crtc_disable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
 	struct ade_crtc *acrtc = to_ade_crtc(crtc);
 	struct ade_hw_ctx *ctx = acrtc->ctx;
 	void __iomem *base = ctx->base;
···
 	.set_property = drm_atomic_helper_crtc_set_property,
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+	.enable_vblank = ade_crtc_enable_vblank,
+	.disable_vblank = ade_crtc_disable_vblank,
 };
 
 static int ade_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
···
 			       IRQF_SHARED, dev->driver->name, acrtc);
 	if (ret)
 		return ret;
-	dev->driver->get_vblank_counter = drm_vblank_no_hw_counter;
-	dev->driver->enable_vblank = ade_enable_vblank;
-	dev->driver->disable_vblank = ade_disable_vblank;
 
 	return 0;
 }
+12 -85
drivers/gpu/drm/i915/i915_debugfs.c
···
 	return to_i915(node->minor->dev);
 }
 
-/* As the drm_debugfs_init() routines are called before dev->dev_private is
- * allocated we need to hook into the minor for release. */
-static int
-drm_add_fake_info_node(struct drm_minor *minor,
-		       struct dentry *ent,
-		       const void *key)
-{
-	struct drm_info_node *node;
-
-	node = kmalloc(sizeof(*node), GFP_KERNEL);
-	if (node == NULL) {
-		debugfs_remove(ent);
-		return -ENOMEM;
-	}
-
-	node->minor = minor;
-	node->dent = ent;
-	node->info_ent = (void *)key;
-
-	mutex_lock(&minor->debugfs_lock);
-	list_add(&node->list, &minor->debugfs_list);
-	mutex_unlock(&minor->debugfs_lock);
-
-	return 0;
-}
-
 static int i915_capabilities(struct seq_file *m, void *data)
 {
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
···
 	.release = i915_forcewake_release,
 };
 
-static int i915_forcewake_create(struct dentry *root, struct drm_minor *minor)
-{
-	struct dentry *ent;
-
-	ent = debugfs_create_file("i915_forcewake_user",
-				  S_IRUSR,
-				  root, to_i915(minor->dev),
-				  &i915_forcewake_fops);
-	if (!ent)
-		return -ENOMEM;
-
-	return drm_add_fake_info_node(minor, ent, &i915_forcewake_fops);
-}
-
-static int i915_debugfs_create(struct dentry *root,
-			       struct drm_minor *minor,
-			       const char *name,
-			       const struct file_operations *fops)
-{
-	struct dentry *ent;
-
-	ent = debugfs_create_file(name,
-				  S_IRUGO | S_IWUSR,
-				  root, to_i915(minor->dev),
-				  fops);
-	if (!ent)
-		return -ENOMEM;
-
-	return drm_add_fake_info_node(minor, ent, fops);
-}
-
 static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_capabilities", i915_capabilities, 0},
 	{"i915_gem_objects", i915_gem_object_info, 0},
···
 int i915_debugfs_register(struct drm_i915_private *dev_priv)
 {
 	struct drm_minor *minor = dev_priv->drm.primary;
+	struct dentry *ent;
 	int ret, i;
 
-	ret = i915_forcewake_create(minor->debugfs_root, minor);
-	if (ret)
-		return ret;
+	ent = debugfs_create_file("i915_forcewake_user", S_IRUSR,
+				  minor->debugfs_root, to_i915(minor->dev),
+				  &i915_forcewake_fops);
+	if (!ent)
+		return -ENOMEM;
 
 	ret = intel_pipe_crc_create(minor);
 	if (ret)
 		return ret;
 
 	for (i = 0; i < ARRAY_SIZE(i915_debugfs_files); i++) {
-		ret = i915_debugfs_create(minor->debugfs_root, minor,
-					  i915_debugfs_files[i].name,
+		ent = debugfs_create_file(i915_debugfs_files[i].name,
+					  S_IRUGO | S_IWUSR,
+					  minor->debugfs_root,
+					  to_i915(minor->dev),
 					  i915_debugfs_files[i].fops);
-		if (ret)
-			return ret;
+		if (!ent)
+			return -ENOMEM;
 	}
 
 	return drm_debugfs_create_files(i915_debugfs_list,
 					I915_DEBUGFS_ENTRIES,
 					minor->debugfs_root, minor);
-}
-
-void i915_debugfs_unregister(struct drm_i915_private *dev_priv)
-{
-	struct drm_minor *minor = dev_priv->drm.primary;
-	int i;
-
-	drm_debugfs_remove_files(i915_debugfs_list,
-				 I915_DEBUGFS_ENTRIES, minor);
-
-	drm_debugfs_remove_files((struct drm_info_list *)&i915_forcewake_fops,
-				 1, minor);
-
-	intel_pipe_crc_cleanup(minor);
-
-	for (i = 0; i < ARRAY_SIZE(i915_debugfs_files); i++) {
-		struct drm_info_list *info_list =
-			(struct drm_info_list *)i915_debugfs_files[i].fops;
-
-		drm_debugfs_remove_files(info_list, 1, minor);
-	}
 }
 
 struct dpcd_block {
-1
drivers/gpu/drm/i915/i915_drv.c
···
 
 	i915_teardown_sysfs(dev_priv);
 	i915_guc_log_unregister(dev_priv);
-	i915_debugfs_unregister(dev_priv);
 	drm_dev_unregister(&dev_priv->drm);
 
 	i915_gem_shrinker_cleanup(dev_priv);
-2
drivers/gpu/drm/i915/i915_drv.h
···
 /* i915_debugfs.c */
 #ifdef CONFIG_DEBUG_FS
 int i915_debugfs_register(struct drm_i915_private *dev_priv);
-void i915_debugfs_unregister(struct drm_i915_private *dev_priv);
 int i915_debugfs_connector_add(struct drm_connector *connector);
 void intel_display_crc_init(struct drm_i915_private *dev_priv);
 #else
 static inline int i915_debugfs_register(struct drm_i915_private *dev_priv) {return 0;}
-static inline void i915_debugfs_unregister(struct drm_i915_private *dev_priv) {}
 static inline int i915_debugfs_connector_add(struct drm_connector *connector)
 { return 0; }
 static inline void intel_display_crc_init(struct drm_i915_private *dev_priv) {}
-1
drivers/gpu/drm/i915/i915_irq.c
···
 	if (IS_GEN2(dev_priv)) {
 		/* Gen2 doesn't have a hardware frame counter */
 		dev->max_vblank_count = 0;
-		dev->driver->get_vblank_counter = drm_vblank_no_hw_counter;
 	} else if (IS_G4X(dev_priv) || INTEL_INFO(dev_priv)->gen >= 5) {
 		dev->max_vblank_count = 0xffffffff; /* full 32 bit counter */
 		dev->driver->get_vblank_counter = g4x_get_vblank_counter;
+4 -4
drivers/gpu/drm/i915/i915_sw_fence.c
···
 {
 	struct i915_sw_dma_fence_cb *cb = (struct i915_sw_dma_fence_cb *)data;
 
-	printk(KERN_WARNING "asynchronous wait on fence %s:%s:%x timed out\n",
-	       cb->dma->ops->get_driver_name(cb->dma),
-	       cb->dma->ops->get_timeline_name(cb->dma),
-	       cb->dma->seqno);
+	pr_warn("asynchronous wait on fence %s:%s:%x timed out\n",
+		cb->dma->ops->get_driver_name(cb->dma),
+		cb->dma->ops->get_timeline_name(cb->dma),
+		cb->dma->seqno);
 	dma_fence_put(cb->dma);
 	cb->dma = NULL;
 
+7 -6
drivers/gpu/drm/i915/intel_display.c
···
 
 static int
 __intel_display_resume(struct drm_device *dev,
-		       struct drm_atomic_state *state)
+		       struct drm_atomic_state *state,
+		       struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_crtc_state *crtc_state;
 	struct drm_crtc *crtc;
···
 	/* ignore any reset values/BIOS leftovers in the WM registers */
 	to_intel_atomic_state(state)->skip_intermediate_wm = true;
 
-	ret = drm_atomic_commit(state);
+	ret = drm_atomic_helper_commit_duplicated_state(state, ctx);
 
 	WARN_ON(ret == -EDEADLK);
 	return ret;
···
 	 */
 		intel_update_primary_planes(dev);
 	} else {
-		ret = __intel_display_resume(dev, state);
+		ret = __intel_display_resume(dev, state, ctx);
 		if (ret)
 			DRM_ERROR("Restoring old state failed with %i\n", ret);
 	}
···
 		dev_priv->display.hpd_irq_setup(dev_priv);
 	spin_unlock_irq(&dev_priv->irq_lock);
 
-	ret = __intel_display_resume(dev, state);
+	ret = __intel_display_resume(dev, state, ctx);
 	if (ret)
 		DRM_ERROR("Restoring old state failed with %i\n", ret);
 
···
 	if (!state)
 		return;
 
-	ret = drm_atomic_commit(state);
+	ret = drm_atomic_helper_commit_duplicated_state(state, ctx);
 	if (ret)
 		DRM_DEBUG_KMS("Couldn't release load detect pipe: %i\n", ret);
 	drm_atomic_state_put(state);
···
 	}
 
 	if (!ret)
-		ret = __intel_display_resume(dev, state);
+		ret = __intel_display_resume(dev, state, &ctx);
 
 	drm_modeset_drop_locks(&ctx);
 	drm_modeset_acquire_fini(&ctx);
-1
drivers/gpu/drm/i915/intel_drv.h
···
 
 /* intel_pipe_crc.c */
 int intel_pipe_crc_create(struct drm_minor *minor);
-void intel_pipe_crc_cleanup(struct drm_minor *minor);
 #ifdef CONFIG_DEBUG_FS
 int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name,
 			      size_t *values_cnt);
+1 -4
drivers/gpu/drm/i915/intel_fbdev.c
···
 	if (IS_ERR(vaddr)) {
 		DRM_ERROR("Failed to remap framebuffer into virtual memory\n");
 		ret = PTR_ERR(vaddr);
-		goto out_destroy_fbi;
+		goto out_unpin;
 	}
 	info->screen_base = vaddr;
 	info->screen_size = vma->node.size;
···
 	vga_switcheroo_client_fb_set(pdev, info);
 	return 0;
 
-out_destroy_fbi:
-	drm_fb_helper_release_fbi(helper);
 out_unpin:
 	intel_unpin_fb_vma(vma);
 out_unlock:
···
 	 */
 
 	drm_fb_helper_unregister_fbi(&ifbdev->helper);
-	drm_fb_helper_release_fbi(&ifbdev->helper);
 
 	drm_fb_helper_fini(&ifbdev->helper);
 
+11 -57
drivers/gpu/drm/i915/intel_pipe_crc.c
···
 	enum pipe pipe;
 };
 
-/* As the drm_debugfs_init() routines are called before dev->dev_private is
- * allocated we need to hook into the minor for release.
- */
-static int drm_add_fake_info_node(struct drm_minor *minor,
-				  struct dentry *ent, const void *key)
-{
-	struct drm_info_node *node;
-
-	node = kmalloc(sizeof(*node), GFP_KERNEL);
-	if (node == NULL) {
-		debugfs_remove(ent);
-		return -ENOMEM;
-	}
-
-	node->minor = minor;
-	node->dent = ent;
-	node->info_ent = (void *) key;
-
-	mutex_lock(&minor->debugfs_lock);
-	list_add(&node->list, &minor->debugfs_list);
-	mutex_unlock(&minor->debugfs_lock);
-
-	return 0;
-}
-
 static int i915_pipe_crc_open(struct inode *inode, struct file *filep)
 {
 	struct pipe_crc_info *info = inode->i_private;
···
 		.pipe = PIPE_C,
 	},
 };
-
-static int i915_pipe_crc_create(struct dentry *root, struct drm_minor *minor,
-				enum pipe pipe)
-{
-	struct drm_i915_private *dev_priv = to_i915(minor->dev);
-	struct dentry *ent;
-	struct pipe_crc_info *info = &i915_pipe_crc_data[pipe];
-
-	info->dev_priv = dev_priv;
-	ent = debugfs_create_file(info->name, S_IRUGO, root, info,
-				  &i915_pipe_crc_fops);
-	if (!ent)
-		return -ENOMEM;
-
-	return drm_add_fake_info_node(minor, ent, info);
-}
 
 static const char * const pipe_crc_sources[] = {
 	"none",
···
 
 int intel_pipe_crc_create(struct drm_minor *minor)
 {
-	int ret, i;
-
-	for (i = 0; i < ARRAY_SIZE(i915_pipe_crc_data); i++) {
-		ret = i915_pipe_crc_create(minor->debugfs_root, minor, i);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-void intel_pipe_crc_cleanup(struct drm_minor *minor)
-{
+	struct drm_i915_private *dev_priv = to_i915(minor->dev);
+	struct dentry *ent;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(i915_pipe_crc_data); i++) {
-		struct drm_info_list *info_list =
-			(struct drm_info_list *)&i915_pipe_crc_data[i];
+		struct pipe_crc_info *info = &i915_pipe_crc_data[i];
 
-		drm_debugfs_remove_files(info_list, 1, minor);
+		info->dev_priv = dev_priv;
+		ent = debugfs_create_file(info->name, S_IRUGO,
+					  minor->debugfs_root, info,
+					  &i915_pipe_crc_fops);
+		if (!ent)
+			return -ENOMEM;
 	}
+
+	return 0;
 }
 
 int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name,
+1 -103
drivers/gpu/drm/imx/imx-drm-core.c
···
 
 struct imx_drm_device {
 	struct drm_device *drm;
-	struct imx_drm_crtc *crtc[MAX_CRTC];
 	unsigned int pipes;
 	struct drm_fbdev_cma *fbhelper;
 	struct drm_atomic_state *state;
-};
-
-struct imx_drm_crtc {
-	struct drm_crtc *crtc;
-	struct imx_drm_crtc_helper_funcs imx_drm_helper_funcs;
 };
···
 	struct imx_drm_device *imxdrm = drm->dev_private;
 
 	drm_fbdev_cma_restore_mode(imxdrm->fbhelper);
-}
-
-static int imx_drm_enable_vblank(struct drm_device *drm, unsigned int pipe)
-{
-	struct imx_drm_device *imxdrm = drm->dev_private;
-	struct imx_drm_crtc *imx_drm_crtc = imxdrm->crtc[pipe];
-	int ret;
-
-	if (!imx_drm_crtc)
-		return -EINVAL;
-
-	if (!imx_drm_crtc->imx_drm_helper_funcs.enable_vblank)
-		return -ENOSYS;
-
-	ret = imx_drm_crtc->imx_drm_helper_funcs.enable_vblank(
-			imx_drm_crtc->crtc);
-
-	return ret;
-}
-
-static void imx_drm_disable_vblank(struct drm_device *drm, unsigned int pipe)
-{
-	struct imx_drm_device *imxdrm = drm->dev_private;
-	struct imx_drm_crtc *imx_drm_crtc = imxdrm->crtc[pipe];
-
-	if (!imx_drm_crtc)
-		return;
-
-	if (!imx_drm_crtc->imx_drm_helper_funcs.disable_vblank)
-		return;
-
-	imx_drm_crtc->imx_drm_helper_funcs.disable_vblank(imx_drm_crtc->crtc);
 }
 
 static const struct file_operations imx_drm_driver_fops = {
···
 	drm_atomic_helper_cleanup_planes(dev, state);
 }
 
-static struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
+static const struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
 	.atomic_commit_tail = imx_drm_atomic_commit_tail,
 };
 
-/*
- * imx_drm_add_crtc - add a new crtc
- */
-int imx_drm_add_crtc(struct drm_device *drm, struct drm_crtc *crtc,
-		struct imx_drm_crtc **new_crtc, struct drm_plane *primary_plane,
-		const struct imx_drm_crtc_helper_funcs *imx_drm_helper_funcs,
-		struct device_node *port)
-{
-	struct imx_drm_device *imxdrm = drm->dev_private;
-	struct imx_drm_crtc *imx_drm_crtc;
-
-	/*
-	 * The vblank arrays are dimensioned by MAX_CRTC - we can't
-	 * pass IDs greater than this to those functions.
-	 */
-	if (imxdrm->pipes >= MAX_CRTC)
-		return -EINVAL;
-
-	if (imxdrm->drm->open_count)
-		return -EBUSY;
-
-	imx_drm_crtc = kzalloc(sizeof(*imx_drm_crtc), GFP_KERNEL);
-	if (!imx_drm_crtc)
-		return -ENOMEM;
-
-	imx_drm_crtc->imx_drm_helper_funcs = *imx_drm_helper_funcs;
-	imx_drm_crtc->crtc = crtc;
-
-	crtc->port = port;
-
-	imxdrm->crtc[imxdrm->pipes++] = imx_drm_crtc;
-
-	*new_crtc = imx_drm_crtc;
-
-	drm_crtc_helper_add(crtc,
-			imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs);
-
-	drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL,
-			imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs, NULL);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(imx_drm_add_crtc);
-
-/*
- * imx_drm_remove_crtc - remove a crtc
- */
-int imx_drm_remove_crtc(struct imx_drm_crtc *imx_drm_crtc)
-{
-	struct imx_drm_device *imxdrm = imx_drm_crtc->crtc->dev->dev_private;
-	unsigned int pipe = drm_crtc_index(imx_drm_crtc->crtc);
-
-	drm_crtc_cleanup(imx_drm_crtc->crtc);
-
-	imxdrm->crtc[pipe] = NULL;
-
-	kfree(imx_drm_crtc);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(imx_drm_remove_crtc);
 
 int imx_drm_encoder_parse_of(struct drm_device *drm,
 	struct drm_encoder *encoder, struct device_node *np)
···
 	.gem_prime_vmap = drm_gem_cma_prime_vmap,
 	.gem_prime_vunmap = drm_gem_cma_prime_vunmap,
 	.gem_prime_mmap = drm_gem_cma_prime_mmap,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = imx_drm_enable_vblank,
-	.disable_vblank = imx_drm_disable_vblank,
 	.ioctls = imx_drm_ioctls,
 	.num_ioctls = ARRAY_SIZE(imx_drm_ioctls),
 	.fops = &imx_drm_driver_fops,
-13
drivers/gpu/drm/imx/imx-drm.h
···
 {
 	return container_of(s, struct imx_crtc_state, base);
 }
-
-struct imx_drm_crtc_helper_funcs {
-	int (*enable_vblank)(struct drm_crtc *crtc);
-	void (*disable_vblank)(struct drm_crtc *crtc);
-	const struct drm_crtc_helper_funcs *crtc_helper_funcs;
-	const struct drm_crtc_funcs *crtc_funcs;
-};
-
-int imx_drm_add_crtc(struct drm_device *drm, struct drm_crtc *crtc,
-		struct imx_drm_crtc **new_crtc, struct drm_plane *primary_plane,
-		const struct imx_drm_crtc_helper_funcs *imx_helper_funcs,
-		struct device_node *port);
-int imx_drm_remove_crtc(struct imx_drm_crtc *);
 int imx_drm_init_drm(struct platform_device *pdev,
 		int preferred_bpp);
 int imx_drm_exit_drm(void);
+22 -36
drivers/gpu/drm/imx/ipuv3-crtc.c
···
 	kfree(to_imx_crtc_state(state));
 }
 
-static void imx_drm_crtc_destroy(struct drm_crtc *crtc)
+static int ipu_enable_vblank(struct drm_crtc *crtc)
 {
-	imx_drm_remove_crtc(to_ipu_crtc(crtc)->imx_crtc);
+	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
+
+	enable_irq(ipu_crtc->irq);
+
+	return 0;
+}
+
+static void ipu_disable_vblank(struct drm_crtc *crtc)
+{
+	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
+
+	disable_irq_nosync(ipu_crtc->irq);
 }
 
 static const struct drm_crtc_funcs ipu_crtc_funcs = {
 	.set_config = drm_atomic_helper_set_config,
-	.destroy = imx_drm_crtc_destroy,
+	.destroy = drm_crtc_cleanup,
 	.page_flip = drm_atomic_helper_page_flip,
 	.reset = imx_drm_crtc_reset,
 	.atomic_duplicate_state = imx_drm_crtc_duplicate_state,
 	.atomic_destroy_state = imx_drm_crtc_destroy_state,
+	.enable_vblank = ipu_enable_vblank,
+	.disable_vblank = ipu_disable_vblank,
 };
 
 static irqreturn_t ipu_irq_handler(int irq, void *dev_id)
···
 	.enable = ipu_crtc_enable,
 };
 
-static int ipu_enable_vblank(struct drm_crtc *crtc)
-{
-	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
-
-	enable_irq(ipu_crtc->irq);
-
-	return 0;
-}
-
-static void ipu_disable_vblank(struct drm_crtc *crtc)
-{
-	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
-
-	disable_irq_nosync(ipu_crtc->irq);
-}
-
-static const struct imx_drm_crtc_helper_funcs ipu_crtc_helper_funcs = {
-	.enable_vblank = ipu_enable_vblank,
-	.disable_vblank = ipu_disable_vblank,
-	.crtc_funcs = &ipu_crtc_funcs,
-	.crtc_helper_funcs = &ipu_helper_funcs,
-};
-
 static void ipu_put_resources(struct ipu_crtc *ipu_crtc)
 {
 	if (!IS_ERR_OR_NULL(ipu_crtc->dc))
···
 	struct ipu_client_platformdata *pdata, struct drm_device *drm)
 {
 	struct ipu_soc *ipu = dev_get_drvdata(ipu_crtc->dev->parent);
+	struct drm_crtc *crtc = &ipu_crtc->base;
 	int dp = -EINVAL;
 	int ret;
···
 		goto err_put_resources;
 	}
 
-	ret = imx_drm_add_crtc(drm, &ipu_crtc->base, &ipu_crtc->imx_crtc,
-			&ipu_crtc->plane[0]->base, &ipu_crtc_helper_funcs,
-			pdata->of_node);
-	if (ret) {
-		dev_err(ipu_crtc->dev, "adding crtc failed with %d.\n", ret);
-		goto err_put_resources;
-	}
+	crtc->port = pdata->of_node;
+	drm_crtc_helper_add(crtc, &ipu_helper_funcs);
+	drm_crtc_init_with_planes(drm, crtc, &ipu_crtc->plane[0]->base, NULL,
+				  &ipu_crtc_funcs, NULL);
 
 	ret = ipu_plane_get_resources(ipu_crtc->plane[0]);
 	if (ret) {
 		dev_err(ipu_crtc->dev, "getting plane 0 resources failed with %d.\n",
 			ret);
-		goto err_remove_crtc;
+		goto err_put_resources;
 	}
 
 	/* If this crtc is using the DP, add an overlay plane */
···
 	ipu_plane_put_resources(ipu_crtc->plane[1]);
 err_put_plane0_res:
 	ipu_plane_put_resources(ipu_crtc->plane[0]);
-err_remove_crtc:
-	imx_drm_remove_crtc(ipu_crtc->imx_crtc);
 err_put_resources:
 	ipu_put_resources(ipu_crtc);
 
+4 -4
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
···
 	state->pending_config = true;
 }
 
-int mtk_drm_crtc_enable_vblank(struct drm_device *drm, unsigned int pipe)
+static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	struct mtk_ddp_comp *ovl = mtk_crtc->ddp_comp[0];
 
···
 	return 0;
 }
 
-void mtk_drm_crtc_disable_vblank(struct drm_device *drm, unsigned int pipe)
+static void mtk_drm_crtc_disable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	struct mtk_ddp_comp *ovl = mtk_crtc->ddp_comp[0];
 
···
 	.atomic_duplicate_state = mtk_drm_crtc_duplicate_state,
 	.atomic_destroy_state = mtk_drm_crtc_destroy_state,
 	.gamma_set = drm_atomic_helper_legacy_gamma_set,
+	.enable_vblank = mtk_drm_crtc_enable_vblank,
+	.disable_vblank = mtk_drm_crtc_disable_vblank,
 };
 
 static const struct drm_crtc_helper_funcs mtk_crtc_helper_funcs = {
-2
drivers/gpu/drm/mediatek/mtk_drm_crtc.h
···
 #define MTK_MAX_BPC	10
 #define MTK_MIN_BPC	3
 
-int mtk_drm_crtc_enable_vblank(struct drm_device *drm, unsigned int pipe);
-void mtk_drm_crtc_disable_vblank(struct drm_device *drm, unsigned int pipe);
 void mtk_drm_crtc_commit(struct drm_crtc *crtc);
 void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *ovl);
 int mtk_drm_crtc_create(struct drm_device *drm_dev,
-4
drivers/gpu/drm/mediatek/mtk_drm_drv.c
···
 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME |
 			   DRIVER_ATOMIC,
 
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = mtk_drm_crtc_enable_vblank,
-	.disable_vblank = mtk_drm_crtc_disable_vblank,
-
 	.gem_free_object_unlocked = mtk_drm_gem_free_object,
 	.gem_vm_ops = &drm_gem_cma_vm_ops,
 	.dumb_create = mtk_drm_gem_dumb_create,
+22
drivers/gpu/drm/meson/meson_crtc.c
···
 
 #include "meson_crtc.h"
 #include "meson_plane.h"
+#include "meson_venc.h"
 #include "meson_vpp.h"
 #include "meson_viu.h"
 #include "meson_registers.h"
···
 
 /* CRTC */
 
+static int meson_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
+	struct meson_drm *priv = meson_crtc->priv;
+
+	meson_venc_enable_vsync(priv);
+
+	return 0;
+}
+
+static void meson_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
+	struct meson_drm *priv = meson_crtc->priv;
+
+	meson_venc_disable_vsync(priv);
+}
+
 static const struct drm_crtc_funcs meson_crtc_funcs = {
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
···
 	.page_flip = drm_atomic_helper_page_flip,
 	.reset = drm_atomic_helper_crtc_reset,
 	.set_config = drm_atomic_helper_set_config,
+	.enable_vblank = meson_crtc_enable_vblank,
+	.disable_vblank = meson_crtc_disable_vblank,
+
 };
 
 static void meson_crtc_enable(struct drm_crtc *crtc)
-21
drivers/gpu/drm/meson/meson_drv.c
···
 	.fb_create = drm_fb_cma_create,
 };
 
-static int meson_enable_vblank(struct drm_device *dev, unsigned int crtc)
-{
-	struct meson_drm *priv = dev->dev_private;
-
-	meson_venc_enable_vsync(priv);
-
-	return 0;
-}
-
-static void meson_disable_vblank(struct drm_device *dev, unsigned int crtc)
-{
-	struct meson_drm *priv = dev->dev_private;
-
-	meson_venc_disable_vsync(priv);
-}
-
 static irqreturn_t meson_irq(int irq, void *arg)
 {
 	struct drm_device *dev = arg;
···
 	.driver_features = DRIVER_HAVE_IRQ | DRIVER_GEM |
 			   DRIVER_MODESET | DRIVER_PRIME |
 			   DRIVER_ATOMIC,
-
-	/* Vblank */
-	.enable_vblank = meson_enable_vblank,
-	.disable_vblank = meson_disable_vblank,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 
 	/* IRQ */
 	.irq_handler = meson_irq,
+1 -4
drivers/gpu/drm/mgag200/mgag200_fb.c
···
 
 	ret = mgag200_framebuffer_init(dev, &mfbdev->mfb, &mode_cmd, gobj);
 	if (ret)
-		goto err_framebuffer_init;
+		goto err_alloc_fbi;
 
 	mfbdev->sysram = sysram;
 	mfbdev->size = size;
···
 
 	return 0;
 
-err_framebuffer_init:
-	drm_fb_helper_release_fbi(helper);
 err_alloc_fbi:
 	vfree(sysram);
 err_sysram:
···
 	struct mga_framebuffer *mfb = &mfbdev->mfb;
 
 	drm_fb_helper_unregister_fbi(&mfbdev->helper);
-	drm_fb_helper_release_fbi(&mfbdev->helper);
 
 	if (mfb->obj) {
 		drm_gem_object_unreference_unlocked(mfb->obj);
+1 -1
drivers/gpu/drm/mgag200/mgag200_mode.c
···
 	}
 
 	if (delta > permitteddelta) {
-		printk(KERN_WARNING "PLL delta too large\n");
+		pr_warn("PLL delta too large\n");
 		return 1;
 	}
 
+1
drivers/gpu/drm/msm/dsi/dsi_host.c
···
 
 	msm_host->rx_buf = devm_kzalloc(&pdev->dev, SZ_4K, GFP_KERNEL);
 	if (!msm_host->rx_buf) {
+		ret = -ENOMEM;
 		pr_err("%s: alloc rx temp buf failed\n", __func__);
 		goto fail;
 	}
-7
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
···
 
 	return 0;
 }
-
-static void mdp5_kms_debugfs_cleanup(struct msm_kms *kms, struct drm_minor *minor)
-{
-	drm_debugfs_remove_files(mdp5_debugfs_list,
-			ARRAY_SIZE(mdp5_debugfs_list), minor);
-}
 #endif
 
 static const struct mdp_kms_funcs kms_funcs = {
···
 		.destroy = mdp5_kms_destroy,
 #ifdef CONFIG_DEBUG_FS
 		.debugfs_init = mdp5_kms_debugfs_init,
-		.debugfs_cleanup = mdp5_kms_debugfs_cleanup,
 #endif
 	},
 	.set_irqmask = mdp5_set_irqmask,
-2
drivers/gpu/drm/msm/msm_debugfs.c
···
 	struct drm_device *dev = minor->dev;
 	struct msm_drm_private *priv = dev->dev_private;
 
-	drm_debugfs_remove_files(msm_debugfs_list,
-			ARRAY_SIZE(msm_debugfs_list), minor);
 	if (!priv)
 		return;
 
+1 -2
drivers/gpu/drm/msm/msm_drv.c
···
 {
 	u32 val = readl(addr);
 	if (reglog)
-		printk(KERN_ERR "IO:R %p %08x\n", addr, val);
+		pr_err("IO:R %p %08x\n", addr, val);
 	return val;
 }
 
···
 	.irq_preinstall = msm_irq_preinstall,
 	.irq_postinstall = msm_irq_postinstall,
 	.irq_uninstall = msm_irq_uninstall,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 	.enable_vblank = msm_enable_vblank,
 	.disable_vblank = msm_disable_vblank,
 	.gem_free_object = msm_gem_free_object,
-1
drivers/gpu/drm/msm/msm_fbdev.c
···
 	DBG();
 
 	drm_fb_helper_unregister_fbi(helper);
-	drm_fb_helper_release_fbi(helper);
 
 	drm_fb_helper_fini(helper);
 
+3 -26
drivers/gpu/drm/msm/msm_perf.c
···
 	int buftot, bufpos;
 
 	unsigned long next_jiffies;
-
-	struct dentry *ent;
-	struct drm_info_node *node;
 };
 
 #define SAMPLE_TIME (HZ/4)
···
 {
 	struct msm_drm_private *priv = minor->dev->dev_private;
 	struct msm_perf_state *perf;
+	struct dentry *ent;
 
 	/* only create on first minor: */
 	if (priv->perf)
···
 	mutex_init(&perf->read_lock);
 	priv->perf = perf;
 
-	perf->node = kzalloc(sizeof(*perf->node), GFP_KERNEL);
-	if (!perf->node)
-		goto fail;
-
-	perf->ent = debugfs_create_file("perf", S_IFREG | S_IRUGO,
+	ent = debugfs_create_file("perf", S_IFREG | S_IRUGO,
 			minor->debugfs_root, perf, &perf_debugfs_fops);
-	if (!perf->ent) {
+	if (!ent) {
 		DRM_ERROR("Cannot create /sys/kernel/debug/dri/%pd/perf\n",
 				minor->debugfs_root);
 		goto fail;
 	}
-
-	perf->node->minor = minor;
-	perf->node->dent = perf->ent;
-	perf->node->info_ent = NULL;
-
-	mutex_lock(&minor->debugfs_lock);
-	list_add(&perf->node->list, &minor->debugfs_list);
-	mutex_unlock(&minor->debugfs_lock);
 
 	return 0;
 
···
 		return;
 
 	priv->perf = NULL;
-
-	debugfs_remove(perf->ent);
-
-	if (perf->node) {
-		mutex_lock(&minor->debugfs_lock);
-		list_del(&perf->node->list);
-		mutex_unlock(&minor->debugfs_lock);
-		kfree(perf->node);
-	}
 
 	mutex_destroy(&perf->read_lock);
 
+3 -28
drivers/gpu/drm/msm/msm_rd.c
···
 
 	bool open;
 
-	struct dentry *ent;
-	struct drm_info_node *node;
-
 	/* current submit to read out: */
 	struct msm_gem_submit *submit;
 
···
 {
 	struct msm_drm_private *priv = minor->dev->dev_private;
 	struct msm_rd_state *rd;
+	struct dentry *ent;
 
 	/* only create on first minor: */
 	if (priv->rd)
···
 
 	init_waitqueue_head(&rd->fifo_event);
 
-	rd->node = kzalloc(sizeof(*rd->node), GFP_KERNEL);
-	if (!rd->node)
-		goto fail;
-
-	rd->ent = debugfs_create_file("rd", S_IFREG | S_IRUGO,
+	ent = debugfs_create_file("rd", S_IFREG | S_IRUGO,
 			minor->debugfs_root, rd, &rd_debugfs_fops);
-	if (!rd->ent) {
+	if (!ent) {
 		DRM_ERROR("Cannot create /sys/kernel/debug/dri/%pd/rd\n",
 				minor->debugfs_root);
 		goto fail;
 	}
-
-	rd->node->minor = minor;
-	rd->node->dent = rd->ent;
-	rd->node->info_ent = NULL;
-
-	mutex_lock(&minor->debugfs_lock);
-	list_add(&rd->node->list, &minor->debugfs_list);
-	mutex_unlock(&minor->debugfs_lock);
 
 	return 0;
 
···
 		return;
 
 	priv->rd = NULL;
-
-	debugfs_remove(rd->ent);
-
-	if (rd->node) {
-		mutex_lock(&minor->debugfs_lock);
-		list_del(&rd->node->list);
-		mutex_unlock(&minor->debugfs_lock);
-		kfree(rd->node);
-	}
-
 	mutex_destroy(&rd->read_lock);
-
 	kfree(rd);
 }
 
+2 -2
drivers/gpu/drm/mxsfb/mxsfb_drv.c
···
 	return drm_fb_cma_prepare_fb(&pipe->plane, plane_state);
 }
 
-struct drm_simple_display_pipe_funcs mxsfb_funcs = {
+static struct drm_simple_display_pipe_funcs mxsfb_funcs = {
 	.enable = mxsfb_pipe_enable,
 	.disable = mxsfb_pipe_disable,
 	.update = mxsfb_pipe_update,
···
 	mxsfb->fbdev = drm_fbdev_cma_init(drm, 32,
 					  drm->mode_config.num_connector);
 	if (IS_ERR(mxsfb->fbdev)) {
+		ret = PTR_ERR(mxsfb->fbdev);
 		mxsfb->fbdev = NULL;
 		dev_err(drm->dev, "Failed to init FB CMA area\n");
 		goto err_cma;
···
 	.irq_handler = mxsfb_irq_handler,
 	.irq_preinstall = mxsfb_irq_preinstall,
 	.irq_uninstall = mxsfb_irq_preinstall,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 	.enable_vblank = mxsfb_enable_vblank,
 	.disable_vblank = mxsfb_disable_vblank,
 	.gem_free_object = drm_gem_cma_free_object,
+4 -3
drivers/gpu/drm/nouveau/nouveau_acpi.c
···
 		nouveau_dsm_priv.dhandle = dhandle;
 		acpi_get_name(nouveau_dsm_priv.dhandle, ACPI_FULL_PATHNAME,
 			&buffer);
-		printk(KERN_INFO "VGA switcheroo: detected Optimus DSM method %s handle\n",
+		pr_info("VGA switcheroo: detected Optimus DSM method %s handle\n",
 			acpi_method_name);
 		if (has_power_resources)
 			pr_info("nouveau: detected PR support, will not use DSM\n");
···
 		nouveau_dsm_priv.dhandle = dhandle;
 		acpi_get_name(nouveau_dsm_priv.dhandle, ACPI_FULL_PATHNAME,
 			&buffer);
-		printk(KERN_INFO "VGA switcheroo: detected DSM switching method %s handle\n",
+		pr_info("VGA switcheroo: detected DSM switching method %s handle\n",
 			acpi_method_name);
 		nouveau_dsm_priv.dsm_detected = true;
 		ret = true;
···
 
 	status = acpi_evaluate_object(rom_handle, NULL, &rom_arg, &buffer);
 	if (ACPI_FAILURE(status)) {
-		printk(KERN_INFO "failed to evaluate ROM got %s\n", acpi_format_exception(status));
+		pr_info("failed to evaluate ROM got %s\n",
+			acpi_format_exception(status));
 		return -ENODEV;
 	}
 	obj = (union acpi_object *)buffer.pointer;
+12 -50
drivers/gpu/drm/nouveau/nouveau_debugfs.c
···
 static int
 nouveau_debugfs_pstate_get(struct seq_file *m, void *data)
 {
-	struct drm_info_node *node = (struct drm_info_node *) m->private;
-	struct nouveau_debugfs *debugfs = nouveau_debugfs(node->minor->dev);
+	struct drm_device *drm = m->private;
+	struct nouveau_debugfs *debugfs = nouveau_debugfs(drm);
 	struct nvif_object *ctrl = &debugfs->ctrl;
 	struct nvif_control_pstate_info_v0 info = {};
 	int ret, i;
···
 			    size_t len, loff_t *offp)
 {
 	struct seq_file *m = file->private_data;
-	struct drm_info_node *node = (struct drm_info_node *) m->private;
-	struct nouveau_debugfs *debugfs = nouveau_debugfs(node->minor->dev);
+	struct drm_device *drm = m->private;
+	struct nouveau_debugfs *debugfs = nouveau_debugfs(drm);
 	struct nvif_object *ctrl = &debugfs->ctrl;
 	struct nvif_control_pstate_user_v0 args = { .pwrsrc = -EINVAL };
 	char buf[32] = {}, *tmp, *cur = buf;
···
 	{"pstate", &nouveau_pstate_fops},
 };
 
-static int
-nouveau_debugfs_create_file(struct drm_minor *minor,
-			    const struct nouveau_debugfs_files *ndf)
-{
-	struct drm_info_node *node;
-
-	node = kmalloc(sizeof(*node), GFP_KERNEL);
-	if (node == NULL)
-		return -ENOMEM;
-
-	node->minor = minor;
-	node->info_ent = (const void *)ndf->fops;
-	node->dent = debugfs_create_file(ndf->name, S_IRUGO | S_IWUSR,
-					 minor->debugfs_root, node, ndf->fops);
-	if (!node->dent) {
-		kfree(node);
-		return -ENOMEM;
-	}
-
-	mutex_lock(&minor->debugfs_lock);
-	list_add(&node->list, &minor->debugfs_list);
-	mutex_unlock(&minor->debugfs_lock);
-	return 0;
-}
-
 int
 nouveau_drm_debugfs_init(struct drm_minor *minor)
 {
-	int i, ret;
+	struct dentry *dentry;
+	int i;
 
 	for (i = 0; i < ARRAY_SIZE(nouveau_debugfs_files); i++) {
-		ret = nouveau_debugfs_create_file(minor,
-						  &nouveau_debugfs_files[i]);
-
-		if (ret)
-			return ret;
+		dentry = debugfs_create_file(nouveau_debugfs_files[i].name,
+					     S_IRUGO | S_IWUSR,
+					     minor->debugfs_root, minor->dev,
+					     nouveau_debugfs_files[i].fops);
+		if (!dentry)
+			return -ENOMEM;
 	}
 
 	return drm_debugfs_create_files(nouveau_debugfs_list,
 					NOUVEAU_DEBUGFS_ENTRIES,
 					minor->debugfs_root, minor);
-}
-
-void
-nouveau_drm_debugfs_cleanup(struct drm_minor *minor)
-{
-	int i;
-
-	drm_debugfs_remove_files(nouveau_debugfs_list, NOUVEAU_DEBUGFS_ENTRIES,
-				 minor);
-
-	for (i = 0; i < ARRAY_SIZE(nouveau_debugfs_files); i++) {
-		drm_debugfs_remove_files((struct drm_info_list *)
-					 nouveau_debugfs_files[i].fops,
-					 1, minor);
-	}
 }
 
 int
-6
drivers/gpu/drm/nouveau/nouveau_debugfs.h
···
 }
 
 extern int  nouveau_drm_debugfs_init(struct drm_minor *);
-extern void nouveau_drm_debugfs_cleanup(struct drm_minor *);
 extern int  nouveau_debugfs_init(struct nouveau_drm *);
 extern void nouveau_debugfs_fini(struct nouveau_drm *);
 #else
···
 nouveau_drm_debugfs_init(struct drm_minor *minor)
 {
 	return 0;
-}
-
-static inline void
-nouveau_drm_debugfs_cleanup(struct drm_minor *minor)
-{
 }
 
 static inline int
+1 -112
drivers/gpu/drm/nouveau/nouveau_display.c
···
 	kfree(disp);
 }
 
-static int
-nouveau_atomic_disable_connector(struct drm_atomic_state *state,
-				 struct drm_connector *connector)
-{
-	struct drm_connector_state *connector_state;
-	struct drm_crtc *crtc;
-	struct drm_crtc_state *crtc_state;
-	struct drm_plane_state *plane_state;
-	struct drm_plane *plane;
-	int ret;
-
-	if (!(crtc = connector->state->crtc))
-		return 0;
-
-	connector_state = drm_atomic_get_connector_state(state, connector);
-	if (IS_ERR(connector_state))
-		return PTR_ERR(connector_state);
-
-	ret = drm_atomic_set_crtc_for_connector(connector_state, NULL);
-	if (ret)
-		return ret;
-
-	crtc_state = drm_atomic_get_crtc_state(state, crtc);
-	if (IS_ERR(crtc_state))
-		return PTR_ERR(crtc_state);
-
-	ret = drm_atomic_set_mode_for_crtc(crtc_state, NULL);
-	if (ret)
-		return ret;
-
-	crtc_state->active = false;
-
-	drm_for_each_plane_mask(plane, connector->dev, crtc_state->plane_mask) {
-		plane_state = drm_atomic_get_plane_state(state, plane);
-		if (IS_ERR(plane_state))
-			return PTR_ERR(plane_state);
-
-		ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
-		if (ret)
-			return ret;
-
-		drm_atomic_set_fb_for_plane(plane_state, NULL);
-	}
-
-	return 0;
-}
-
-static int
-nouveau_atomic_disable(struct drm_device *dev,
-		       struct drm_modeset_acquire_ctx *ctx)
-{
-	struct drm_atomic_state *state;
-	struct drm_connector *connector;
-	int ret;
-
-	state = drm_atomic_state_alloc(dev);
-	if (!state)
-		return -ENOMEM;
-
-	state->acquire_ctx = ctx;
-
-	drm_for_each_connector(connector, dev) {
-		ret = nouveau_atomic_disable_connector(state, connector);
-		if (ret)
-			break;
-	}
-
-	if (ret == 0)
-		ret = drm_atomic_commit(state);
-	drm_atomic_state_put(state);
-	return ret;
-}
-
-static struct drm_atomic_state *
-nouveau_atomic_suspend(struct drm_device *dev)
-{
-	struct drm_modeset_acquire_ctx ctx;
-	struct drm_atomic_state *state;
-	int ret;
-
-	drm_modeset_acquire_init(&ctx, 0);
-
-retry:
-	ret = drm_modeset_lock_all_ctx(dev, &ctx);
-	if (ret < 0) {
-		state = ERR_PTR(ret);
-		goto unlock;
-	}
-
-	state = drm_atomic_helper_duplicate_state(dev, &ctx);
-	if (IS_ERR(state))
-		goto unlock;
-
-	ret = nouveau_atomic_disable(dev, &ctx);
-	if (ret < 0) {
-		drm_atomic_state_put(state);
-		state = ERR_PTR(ret);
-		goto unlock;
-	}
-
-unlock:
-	if (PTR_ERR(state) == -EDEADLK) {
-		drm_modeset_backoff(&ctx);
-		goto retry;
-	}
-
-	drm_modeset_drop_locks(&ctx);
-	drm_modeset_acquire_fini(&ctx);
-	return state;
-}
-
 int
 nouveau_display_suspend(struct drm_device *dev, bool runtime)
 {
···
 
 	if (drm_drv_uses_atomic_modeset(dev)) {
 		if (!runtime) {
-			disp->suspend = nouveau_atomic_suspend(dev);
+			disp->suspend = drm_atomic_helper_suspend(dev);
 			if (IS_ERR(disp->suspend)) {
 				int ret = PTR_ERR(disp->suspend);
 				disp->suspend = NULL;
drivers/gpu/drm/nouveau/nouveau_drm.c (-2)
···
 
 #if defined(CONFIG_DEBUG_FS)
 	.debugfs_init = nouveau_drm_debugfs_init,
-	.debugfs_cleanup = nouveau_drm_debugfs_cleanup,
 #endif
 
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 	.enable_vblank = nouveau_display_vblank_enable,
 	.disable_vblank = nouveau_display_vblank_disable,
 	.get_scanout_position = nouveau_display_scanoutpos,
drivers/gpu/drm/nouveau/nouveau_fbcon.c (-1)
···
 	struct nouveau_framebuffer *nouveau_fb = nouveau_framebuffer(fbcon->helper.fb);
 
 	drm_fb_helper_unregister_fbi(&fbcon->helper);
-	drm_fb_helper_release_fbi(&fbcon->helper);
 	drm_fb_helper_fini(&fbcon->helper);
 
 	if (nouveau_fb->nvbo) {
drivers/gpu/drm/nouveau/nouveau_vga.c (+2 -2)
···
 		return;
 
 	if (state == VGA_SWITCHEROO_ON) {
-		printk(KERN_ERR "VGA switcheroo: switched nouveau on\n");
+		pr_err("VGA switcheroo: switched nouveau on\n");
 		dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 		nouveau_pmops_resume(&pdev->dev);
 		drm_kms_helper_poll_enable(dev);
 		dev->switch_power_state = DRM_SWITCH_POWER_ON;
 	} else {
-		printk(KERN_ERR "VGA switcheroo: switched nouveau off\n");
+		pr_err("VGA switcheroo: switched nouveau off\n");
 		dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 		drm_kms_helper_poll_disable(dev);
 		nouveau_switcheroo_optimus_dsm();
drivers/gpu/drm/nouveau/nv50_display.c (+21 -85)
···
 		break;
 	) < 0) {
 		mutex_unlock(&dmac->lock);
-		printk(KERN_ERR "nouveau: evo channel stalled\n");
+		pr_err("nouveau: evo channel stalled\n");
 		return NULL;
 	}
 
···
 	mutex_unlock(&dmac->lock);
 }
 
-#define evo_mthd(p,m,s) do {						\
-	const u32 _m = (m), _s = (s);					\
-	if (drm_debug & DRM_UT_KMS)					\
-		printk(KERN_ERR "%04x %d %s\n", _m, _s, __func__);	\
-	*((p)++) = ((_s << 18) | _m);					\
+#define evo_mthd(p, m, s) do {						\
+	const u32 _m = (m), _s = (s);					\
+	if (drm_debug & DRM_UT_KMS)					\
+		pr_err("%04x %d %s\n", _m, _s, __func__);		\
+	*((p)++) = ((_s << 18) | _m);					\
 } while(0)
 
-#define evo_data(p,d) do {						\
-	const u32 _d = (d);						\
-	if (drm_debug & DRM_UT_KMS)					\
-		printk(KERN_ERR "\t%08x\n", _d);			\
-	*((p)++) = _d;							\
+#define evo_data(p, d) do {						\
+	const u32 _d = (d);						\
+	if (drm_debug & DRM_UT_KMS)					\
+		pr_err("\t%08x\n", _d);					\
+	*((p)++) = _d;							\
 } while(0)
 
 /******************************************************************************
···
 static int
 nv50_wndw_atomic_check_acquire(struct nv50_wndw *wndw,
 			       struct nv50_wndw_atom *asyw,
-			       struct nv50_head_atom *asyh)
+			       struct nv50_head_atom *asyh,
+			       u32 pflip_flags)
 {
 	struct nouveau_framebuffer *fb = nouveau_framebuffer(asyw->state.fb);
 	struct nouveau_drm *drm = nouveau_drm(wndw->plane.dev);
···
 	asyw->image.w = fb->base.width;
 	asyw->image.h = fb->base.height;
 	asyw->image.kind = (fb->nvbo->tile_flags & 0x0000ff00) >> 8;
+
+	asyw->interval = pflip_flags & DRM_MODE_PAGE_FLIP_ASYNC ? 0 : 1;
+
 	if (asyw->image.kind) {
 		asyw->image.layout = 0;
 		if (drm->client.device.info.chipset >= 0xc0)
···
 	struct nv50_head_atom *harm = NULL, *asyh = NULL;
 	bool varm = false, asyv = false, asym = false;
 	int ret;
+	u32 pflip_flags = 0;
 
 	NV_ATOMIC(drm, "%s atomic_check\n", plane->name);
 	if (asyw->state.crtc) {
···
 			return PTR_ERR(asyh);
 		asym = drm_atomic_crtc_needs_modeset(&asyh->state);
 		asyv = asyh->state.active;
+		pflip_flags = asyh->state.pageflip_flags;
 	}
 
 	if (armw->state.crtc) {
···
 		asyw->set.point = true;
 
 	if (!varm || asym || armw->state.fb != asyw->state.fb) {
-		ret = nv50_wndw_atomic_check_acquire(wndw, asyw, asyh);
+		ret = nv50_wndw_atomic_check_acquire(
+				wndw, asyw, asyh, pflip_flags);
 		if (ret)
 			return ret;
 	}
···
 	.atomic_check = nv50_head_atomic_check,
 };
 
-/* This is identical to the version in the atomic helpers, except that
- * it supports non-vblanked ("async") page flips.
- */
-static int
-nv50_head_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
-		    struct drm_pending_vblank_event *event, u32 flags)
-{
-	struct drm_plane *plane = crtc->primary;
-	struct drm_atomic_state *state;
-	struct drm_plane_state *plane_state;
-	struct drm_crtc_state *crtc_state;
-	int ret = 0;
-
-	state = drm_atomic_state_alloc(plane->dev);
-	if (!state)
-		return -ENOMEM;
-
-	state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc);
-retry:
-	crtc_state = drm_atomic_get_crtc_state(state, crtc);
-	if (IS_ERR(crtc_state)) {
-		ret = PTR_ERR(crtc_state);
-		goto fail;
-	}
-	crtc_state->event = event;
-
-	plane_state = drm_atomic_get_plane_state(state, plane);
-	if (IS_ERR(plane_state)) {
-		ret = PTR_ERR(plane_state);
-		goto fail;
-	}
-
-	ret = drm_atomic_set_crtc_for_plane(plane_state, crtc);
-	if (ret != 0)
-		goto fail;
-	drm_atomic_set_fb_for_plane(plane_state, fb);
-
-	/* Make sure we don't accidentally do a full modeset. */
-	state->allow_modeset = false;
-	if (!crtc_state->active) {
-		DRM_DEBUG_ATOMIC("[CRTC:%d] disabled, rejecting legacy flip\n",
-				 crtc->base.id);
-		ret = -EINVAL;
-		goto fail;
-	}
-
-	if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
-		nv50_wndw_atom(plane_state)->interval = 0;
-
-	ret = drm_atomic_nonblocking_commit(state);
-fail:
-	if (ret == -EDEADLK)
-		goto backoff;
-
-	drm_atomic_state_put(state);
-	return ret;
-
-backoff:
-	drm_atomic_state_clear(state);
-	drm_atomic_legacy_backoff(state);
-
-	/*
-	 * Someone might have exchanged the framebuffer while we dropped locks
-	 * in the backoff code. We need to fix up the fb refcount tracking the
-	 * core does for us.
-	 */
-	plane->old_fb = plane->fb;
-
-	goto retry;
-}
-
 static int
 nv50_head_gamma_set(struct drm_crtc *crtc, u16 *r, u16 *g, u16 *b,
 		    uint32_t size)
···
 	.gamma_set = nv50_head_gamma_set,
 	.destroy = nv50_head_destroy,
 	.set_config = drm_atomic_helper_set_config,
-	.page_flip = nv50_head_page_flip,
+	.page_flip = drm_atomic_helper_page_flip,
 	.set_property = drm_atomic_helper_crtc_set_property,
 	.atomic_duplicate_state = nv50_head_atomic_duplicate_state,
 	.atomic_destroy_state = nv50_head_atomic_destroy_state,
drivers/gpu/drm/nouveau/nvkm/core/mm.c (+5 -5)
···
 {
 	struct nvkm_mm_node *node;
 
-	printk(KERN_ERR "nvkm: %s\n", header);
-	printk(KERN_ERR "nvkm: node list:\n");
+	pr_err("nvkm: %s\n", header);
+	pr_err("nvkm: node list:\n");
 	list_for_each_entry(node, &mm->nodes, nl_entry) {
-		printk(KERN_ERR "nvkm: \t%08x %08x %d\n",
+		pr_err("nvkm: \t%08x %08x %d\n",
 		       node->offset, node->length, node->type);
 	}
-	printk(KERN_ERR "nvkm: free list:\n");
+	pr_err("nvkm: free list:\n");
 	list_for_each_entry(node, &mm->free, fl_entry) {
-		printk(KERN_ERR "nvkm: \t%08x %08x %d\n",
+		pr_err("nvkm: \t%08x %08x %d\n",
 		       node->offset, node->length, node->type);
 	}
 }
drivers/gpu/drm/omapdrm/dss/dsi.c (+8 -9)
···
 
 	total_bytes = dsi->update_bytes;
 
-	printk(KERN_INFO "DSI(%s): %u us + %u us = %u us (%uHz), "
-			"%u bytes, %u kbytes/sec\n",
-			name,
-			setup_us,
-			trans_us,
-			total_us,
-			1000*1000 / total_us,
-			total_bytes,
-			total_bytes * 1000 / total_us);
+	pr_info("DSI(%s): %u us + %u us = %u us (%uHz), %u bytes, %u kbytes/sec\n",
+		name,
+		setup_us,
+		trans_us,
+		total_us,
+		1000 * 1000 / total_us,
+		total_bytes,
+		total_bytes * 1000 / total_us);
 }
 #else
 static inline void dsi_perf_mark_setup(struct platform_device *dsidev)
drivers/gpu/drm/omapdrm/dss/dss.c (+1 -2)
···
 	dss.lcd_clk_source[1] = DSS_CLK_SRC_FCK;
 
 	rev = dss_read_reg(DSS_REVISION);
-	printk(KERN_INFO "OMAP DSS rev %d.%d\n",
-			FLD_GET(rev, 7, 4), FLD_GET(rev, 3, 0));
+	pr_info("OMAP DSS rev %d.%d\n", FLD_GET(rev, 7, 4), FLD_GET(rev, 3, 0));
 
 	dss_runtime_put();
 
drivers/gpu/drm/omapdrm/dss/dss.h (+6 -9)
···
 
 #ifdef DSS_SUBSYS_NAME
 #define DSSERR(format, ...) \
-	printk(KERN_ERR "omapdss " DSS_SUBSYS_NAME " error: " format, \
-	       ## __VA_ARGS__)
+	pr_err("omapdss " DSS_SUBSYS_NAME " error: " format, ##__VA_ARGS__)
 #else
 #define DSSERR(format, ...) \
-	printk(KERN_ERR "omapdss error: " format, ## __VA_ARGS__)
+	pr_err("omapdss error: " format, ##__VA_ARGS__)
 #endif
 
 #ifdef DSS_SUBSYS_NAME
 #define DSSINFO(format, ...) \
-	printk(KERN_INFO "omapdss " DSS_SUBSYS_NAME ": " format, \
-	       ## __VA_ARGS__)
+	pr_info("omapdss " DSS_SUBSYS_NAME ": " format, ##__VA_ARGS__)
 #else
 #define DSSINFO(format, ...) \
-	printk(KERN_INFO "omapdss: " format, ## __VA_ARGS__)
+	pr_info("omapdss: " format, ## __VA_ARGS__)
 #endif
 
 #ifdef DSS_SUBSYS_NAME
 #define DSSWARN(format, ...) \
-	printk(KERN_WARNING "omapdss " DSS_SUBSYS_NAME ": " format, \
-	       ## __VA_ARGS__)
+	pr_warn("omapdss " DSS_SUBSYS_NAME ": " format, ##__VA_ARGS__)
 #else
 #define DSSWARN(format, ...) \
-	printk(KERN_WARNING "omapdss: " format, ## __VA_ARGS__)
+	pr_warn("omapdss: " format, ##__VA_ARGS__)
 #endif
 
 /* OMAP TRM gives bitfields as start:end, where start is the higher bit
drivers/gpu/drm/omapdrm/omap_crtc.c (+2)
···
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
 	.atomic_set_property = omap_crtc_atomic_set_property,
 	.atomic_get_property = omap_crtc_atomic_get_property,
+	.enable_vblank = omap_irq_enable_vblank,
+	.disable_vblank = omap_irq_disable_vblank,
 };
 
 static const struct drm_crtc_helper_funcs omap_crtc_helper_funcs = {
drivers/gpu/drm/omapdrm/omap_drv.c (-3)
···
 		DRIVER_ATOMIC,
 	.open = dev_open,
 	.lastclose = dev_lastclose,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = omap_irq_enable_vblank,
-	.disable_vblank = omap_irq_disable_vblank,
 #ifdef CONFIG_DEBUG_FS
 	.debugfs_init = omap_debugfs_init,
 #endif
drivers/gpu/drm/omapdrm/omap_drv.h (+2 -2)
···
 int omap_gem_resume(struct device *dev);
 #endif
 
-int omap_irq_enable_vblank(struct drm_device *dev, unsigned int pipe);
-void omap_irq_disable_vblank(struct drm_device *dev, unsigned int pipe);
+int omap_irq_enable_vblank(struct drm_crtc *crtc);
+void omap_irq_disable_vblank(struct drm_crtc *crtc);
 void omap_drm_irq_uninstall(struct drm_device *dev);
 int omap_drm_irq_install(struct drm_device *dev);
 
drivers/gpu/drm/omapdrm/omap_fbdev.c (-4)
···
 fail:
 
 	if (ret) {
-
-		drm_fb_helper_release_fbi(helper);
-
 		if (fb)
 			drm_framebuffer_remove(fb);
 	}
···
 	DBG();
 
 	drm_fb_helper_unregister_fbi(helper);
-	drm_fb_helper_release_fbi(helper);
 
 	drm_fb_helper_fini(helper);
 
drivers/gpu/drm/omapdrm/omap_gem.c (+2 -3)
···
 
 /* macro for sync debug.. */
 #define SYNCDBG 0
-#define SYNC(fmt, ...) do { if (SYNCDBG) \
-		printk(KERN_ERR "%s:%d: "fmt"\n", \
-				__func__, __LINE__, ##__VA_ARGS__); \
+#define SYNC(fmt, ...) do { if (SYNCDBG) \
+		pr_err("%s:%d: " fmt "\n", __func__, __LINE__, ##__VA_ARGS__); \
 	} while (0)
 
 
drivers/gpu/drm/omapdrm/omap_irq.c (+10 -8)
···
  * Zero on success, appropriate errno if the given @crtc's vblank
  * interrupt cannot be enabled.
  */
-int omap_irq_enable_vblank(struct drm_device *dev, unsigned int pipe)
+int omap_irq_enable_vblank(struct drm_crtc *crtc)
 {
+	struct drm_device *dev = crtc->dev;
 	struct omap_drm_private *priv = dev->dev_private;
-	struct drm_crtc *crtc = priv->crtcs[pipe];
 	unsigned long flags;
+	enum omap_channel channel = omap_crtc_channel(crtc);
 
-	DBG("dev=%p, crtc=%u", dev, pipe);
+	DBG("dev=%p, crtc=%u", dev, channel);
 
 	spin_lock_irqsave(&priv->wait_lock, flags);
-	priv->irq_mask |= dispc_mgr_get_vsync_irq(omap_crtc_channel(crtc));
+	priv->irq_mask |= dispc_mgr_get_vsync_irq(channel);
 	omap_irq_update(dev);
 	spin_unlock_irqrestore(&priv->wait_lock, flags);
···
  * a hardware vblank counter, this routine should be a no-op, since
  * interrupts will have to stay on to keep the count accurate.
  */
-void omap_irq_disable_vblank(struct drm_device *dev, unsigned int pipe)
+void omap_irq_disable_vblank(struct drm_crtc *crtc)
 {
+	struct drm_device *dev = crtc->dev;
 	struct omap_drm_private *priv = dev->dev_private;
-	struct drm_crtc *crtc = priv->crtcs[pipe];
 	unsigned long flags;
+	enum omap_channel channel = omap_crtc_channel(crtc);
 
-	DBG("dev=%p, crtc=%u", dev, pipe);
+	DBG("dev=%p, crtc=%u", dev, channel);
 
 	spin_lock_irqsave(&priv->wait_lock, flags);
-	priv->irq_mask &= ~dispc_mgr_get_vsync_irq(omap_crtc_channel(crtc));
+	priv->irq_mask &= ~dispc_mgr_get_vsync_irq(channel);
 	omap_irq_update(dev);
 	spin_unlock_irqrestore(&priv->wait_lock, flags);
 }
drivers/gpu/drm/qxl/qxl_debugfs.c (-9)
···
 	return 0;
 }
 
-void
-qxl_debugfs_takedown(struct drm_minor *minor)
-{
-#if defined(CONFIG_DEBUG_FS)
-	drm_debugfs_remove_files(qxl_debugfs_list, QXL_DEBUGFS_ENTRIES,
-				 minor);
-#endif
-}
-
 int qxl_debugfs_add_files(struct qxl_device *qdev,
 			  struct drm_info_list *files,
 			  unsigned nfiles)
drivers/gpu/drm/qxl/qxl_display.c (+398 -407)
···
 #include "qxl_object.h"
 #include "drm_crtc_helper.h"
 #include <drm/drm_plane_helper.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_atomic.h>
 
 static bool qxl_head_enabled(struct qxl_head *head)
 {
···
 	return i - 1;
 }
 
+static void qxl_crtc_atomic_flush(struct drm_crtc *crtc,
+				  struct drm_crtc_state *old_crtc_state)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_pending_vblank_event *event;
+	unsigned long flags;
+
+	if (crtc->state && crtc->state->event) {
+		event = crtc->state->event;
+		crtc->state->event = NULL;
+
+		spin_lock_irqsave(&dev->event_lock, flags);
+		drm_crtc_send_vblank_event(crtc, event);
+		spin_unlock_irqrestore(&dev->event_lock, flags);
+	}
+}
+
 static void qxl_crtc_destroy(struct drm_crtc *crtc)
 {
 	struct qxl_crtc *qxl_crtc = to_qxl_crtc(crtc);
 
 	drm_crtc_cleanup(crtc);
-	qxl_bo_unref(&qxl_crtc->cursor_bo);
 	kfree(qxl_crtc);
 }
 
-static int qxl_crtc_page_flip(struct drm_crtc *crtc,
-			      struct drm_framebuffer *fb,
-			      struct drm_pending_vblank_event *event,
-			      uint32_t page_flip_flags)
-{
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	struct qxl_framebuffer *qfb_src = to_qxl_framebuffer(fb);
-	struct qxl_framebuffer *qfb_old = to_qxl_framebuffer(crtc->primary->fb);
-	struct qxl_bo *bo_old = gem_to_qxl_bo(qfb_old->obj);
-	struct qxl_bo *bo = gem_to_qxl_bo(qfb_src->obj);
-	unsigned long flags;
-	struct drm_clip_rect norect = {
-		.x1 = 0,
-		.y1 = 0,
-		.x2 = fb->width,
-		.y2 = fb->height
-	};
-	int inc = 1;
-	int one_clip_rect = 1;
-	int ret = 0;
-
-	crtc->primary->fb = fb;
-	bo_old->is_primary = false;
-	bo->is_primary = true;
-
-	ret = qxl_bo_reserve(bo, false);
-	if (ret)
-		return ret;
-	ret = qxl_bo_pin(bo, bo->type, NULL);
-	qxl_bo_unreserve(bo);
-	if (ret)
-		return ret;
-
-	qxl_draw_dirty_fb(qdev, qfb_src, bo, 0, 0,
-			  &norect, one_clip_rect, inc);
-
-	drm_crtc_vblank_get(crtc);
-
-	if (event) {
-		spin_lock_irqsave(&dev->event_lock, flags);
-		drm_crtc_send_vblank_event(crtc, event);
-		spin_unlock_irqrestore(&dev->event_lock, flags);
-	}
-	drm_crtc_vblank_put(crtc);
-
-	ret = qxl_bo_reserve(bo, false);
-	if (!ret) {
-		qxl_bo_unpin(bo);
-		qxl_bo_unreserve(bo);
-	}
-
-	return 0;
-}
-
-static int
-qxl_hide_cursor(struct qxl_device *qdev)
-{
-	struct qxl_release *release;
-	struct qxl_cursor_cmd *cmd;
-	int ret;
-
-	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd), QXL_RELEASE_CURSOR_CMD,
-					 &release, NULL);
-	if (ret)
-		return ret;
-
-	ret = qxl_release_reserve_list(release, true);
-	if (ret) {
-		qxl_release_free(qdev, release);
-		return ret;
-	}
-
-	cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
-	cmd->type = QXL_CURSOR_HIDE;
-	qxl_release_unmap(qdev, release, &cmd->release_info);
-
-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
-	qxl_release_fence_buffer_objects(release);
-	return 0;
-}
-
-static int qxl_crtc_apply_cursor(struct drm_crtc *crtc)
-{
-	struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	struct qxl_cursor_cmd *cmd;
-	struct qxl_release *release;
-	int ret = 0;
-
-	if (!qcrtc->cursor_bo)
-		return 0;
-
-	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
-					 QXL_RELEASE_CURSOR_CMD,
-					 &release, NULL);
-	if (ret)
-		return ret;
-
-	ret = qxl_release_list_add(release, qcrtc->cursor_bo);
-	if (ret)
-		goto out_free_release;
-
-	ret = qxl_release_reserve_list(release, false);
-	if (ret)
-		goto out_free_release;
-
-	cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
-	cmd->type = QXL_CURSOR_SET;
-	cmd->u.set.position.x = qcrtc->cur_x + qcrtc->hot_spot_x;
-	cmd->u.set.position.y = qcrtc->cur_y + qcrtc->hot_spot_y;
-
-	cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0);
-
-	cmd->u.set.visible = 1;
-	qxl_release_unmap(qdev, release, &cmd->release_info);
-
-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
-	qxl_release_fence_buffer_objects(release);
-
-	return ret;
-
-out_free_release:
-	qxl_release_free(qdev, release);
-	return ret;
-}
-
-static int qxl_crtc_cursor_set2(struct drm_crtc *crtc,
-				struct drm_file *file_priv,
-				uint32_t handle,
-				uint32_t width,
-				uint32_t height, int32_t hot_x, int32_t hot_y)
-{
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
-	struct drm_gem_object *obj;
-	struct qxl_cursor *cursor;
-	struct qxl_cursor_cmd *cmd;
-	struct qxl_bo *cursor_bo, *user_bo;
-	struct qxl_release *release;
-	void *user_ptr;
-
-	int size = 64*64*4;
-	int ret = 0;
-	if (!handle)
-		return qxl_hide_cursor(qdev);
-
-	obj = drm_gem_object_lookup(file_priv, handle);
-	if (!obj) {
-		DRM_ERROR("cannot find cursor object\n");
-		return -ENOENT;
-	}
-
-	user_bo = gem_to_qxl_bo(obj);
-
-	ret = qxl_bo_reserve(user_bo, false);
-	if (ret)
-		goto out_unref;
-
-	ret = qxl_bo_pin(user_bo, QXL_GEM_DOMAIN_CPU, NULL);
-	qxl_bo_unreserve(user_bo);
-	if (ret)
-		goto out_unref;
-
-	ret = qxl_bo_kmap(user_bo, &user_ptr);
-	if (ret)
-		goto out_unpin;
-
-	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
-					 QXL_RELEASE_CURSOR_CMD,
-					 &release, NULL);
-	if (ret)
-		goto out_kunmap;
-
-	ret = qxl_alloc_bo_reserved(qdev, release, sizeof(struct qxl_cursor) + size,
-				    &cursor_bo);
-	if (ret)
-		goto out_free_release;
-
-	ret = qxl_release_reserve_list(release, false);
-	if (ret)
-		goto out_free_bo;
-
-	ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
-	if (ret)
-		goto out_backoff;
-
-	cursor->header.unique = 0;
-	cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
-	cursor->header.width = 64;
-	cursor->header.height = 64;
-	cursor->header.hot_spot_x = hot_x;
-	cursor->header.hot_spot_y = hot_y;
-	cursor->data_size = size;
-	cursor->chunk.next_chunk = 0;
-	cursor->chunk.prev_chunk = 0;
-	cursor->chunk.data_size = size;
-
-	memcpy(cursor->chunk.data, user_ptr, size);
-
-	qxl_bo_kunmap(cursor_bo);
-
-	qxl_bo_kunmap(user_bo);
-
-	qcrtc->cur_x += qcrtc->hot_spot_x - hot_x;
-	qcrtc->cur_y += qcrtc->hot_spot_y - hot_y;
-	qcrtc->hot_spot_x = hot_x;
-	qcrtc->hot_spot_y = hot_y;
-
-	cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
-	cmd->type = QXL_CURSOR_SET;
-	cmd->u.set.position.x = qcrtc->cur_x + qcrtc->hot_spot_x;
-	cmd->u.set.position.y = qcrtc->cur_y + qcrtc->hot_spot_y;
-
-	cmd->u.set.shape = qxl_bo_physical_address(qdev, cursor_bo, 0);
-
-	cmd->u.set.visible = 1;
-	qxl_release_unmap(qdev, release, &cmd->release_info);
-
-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
-	qxl_release_fence_buffer_objects(release);
-
-	/* finish with the userspace bo */
-	ret = qxl_bo_reserve(user_bo, false);
-	if (!ret) {
-		qxl_bo_unpin(user_bo);
-		qxl_bo_unreserve(user_bo);
-	}
-	drm_gem_object_unreference_unlocked(obj);
-
-	qxl_bo_unref (&qcrtc->cursor_bo);
-	qcrtc->cursor_bo = cursor_bo;
-
-	return ret;
-
-out_backoff:
-	qxl_release_backoff_reserve_list(release);
-out_free_bo:
-	qxl_bo_unref(&cursor_bo);
-out_free_release:
-	qxl_release_free(qdev, release);
-out_kunmap:
-	qxl_bo_kunmap(user_bo);
-out_unpin:
-	qxl_bo_unpin(user_bo);
-out_unref:
-	drm_gem_object_unreference_unlocked(obj);
-	return ret;
-}
-
-static int qxl_crtc_cursor_move(struct drm_crtc *crtc,
-				int x, int y)
-{
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
-	struct qxl_release *release;
-	struct qxl_cursor_cmd *cmd;
-	int ret;
-
-	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd), QXL_RELEASE_CURSOR_CMD,
-					 &release, NULL);
-	if (ret)
-		return ret;
-
-	ret = qxl_release_reserve_list(release, true);
-	if (ret) {
-		qxl_release_free(qdev, release);
-		return ret;
-	}
-
-	qcrtc->cur_x = x;
-	qcrtc->cur_y = y;
-
-	cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
-	cmd->type = QXL_CURSOR_MOVE;
-	cmd->u.position.x = qcrtc->cur_x + qcrtc->hot_spot_x;
-	cmd->u.position.y = qcrtc->cur_y + qcrtc->hot_spot_y;
-	qxl_release_unmap(qdev, release, &cmd->release_info);
-
-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
-	qxl_release_fence_buffer_objects(release);
-
-	return 0;
-}
-
-
 static const struct drm_crtc_funcs qxl_crtc_funcs = {
-	.cursor_set2 = qxl_crtc_cursor_set2,
-	.cursor_move = qxl_crtc_cursor_move,
-	.set_config = drm_crtc_helper_set_config,
+	.set_config = drm_atomic_helper_set_config,
 	.destroy = qxl_crtc_destroy,
-	.page_flip = qxl_crtc_page_flip,
+	.page_flip = drm_atomic_helper_page_flip,
+	.reset = drm_atomic_helper_crtc_reset,
+	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
+	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
 };
 
 void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb)
···
 
 }
 
-static int qxl_crtc_mode_set(struct drm_crtc *crtc,
-			     struct drm_display_mode *mode,
-			     struct drm_display_mode *adjusted_mode,
-			     int x, int y,
-			     struct drm_framebuffer *old_fb)
+void qxl_mode_set_nofb(struct drm_crtc *crtc)
 {
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	struct qxl_framebuffer *qfb;
-	struct qxl_bo *bo, *old_bo = NULL;
+	struct qxl_device *qdev = crtc->dev->dev_private;
 	struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
-	bool recreate_primary = false;
-	int ret;
-	int surf_id;
-	if (!crtc->primary->fb) {
-		DRM_DEBUG_KMS("No FB bound\n");
-		return 0;
-	}
+	struct drm_display_mode *mode = &crtc->mode;
 
-	if (old_fb) {
-		qfb = to_qxl_framebuffer(old_fb);
-		old_bo = gem_to_qxl_bo(qfb->obj);
-	}
-	qfb = to_qxl_framebuffer(crtc->primary->fb);
-	bo = gem_to_qxl_bo(qfb->obj);
-	DRM_DEBUG("+%d+%d (%d,%d) => (%d,%d)\n",
-		  x, y,
-		  mode->hdisplay, mode->vdisplay,
-		  adjusted_mode->hdisplay,
-		  adjusted_mode->vdisplay);
+	DRM_DEBUG("Mode set (%d,%d)\n",
+		  mode->hdisplay, mode->vdisplay);
 
-	if (bo->is_primary == false)
-		recreate_primary = true;
+	qxl_monitors_config_set(qdev, qcrtc->index, 0, 0,
+				mode->hdisplay, mode->vdisplay, 0);
 
-	if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
-		DRM_ERROR("Mode doesn't fit in vram size (vgamem)");
-		return -EINVAL;
-	}
-
-	ret = qxl_bo_reserve(bo, false);
-	if (ret != 0)
-		return ret;
-	ret = qxl_bo_pin(bo, bo->type, NULL);
-	if (ret != 0) {
-		qxl_bo_unreserve(bo);
-		return -EINVAL;
-	}
-	qxl_bo_unreserve(bo);
-	if (recreate_primary) {
-		qxl_io_destroy_primary(qdev);
-		qxl_io_log(qdev,
-			   "recreate primary: %dx%d,%d,%d\n",
-			   bo->surf.width, bo->surf.height,
-			   bo->surf.stride, bo->surf.format);
-		qxl_io_create_primary(qdev, 0, bo);
-		bo->is_primary = true;
-
-		ret = qxl_crtc_apply_cursor(crtc);
-		if (ret) {
-			DRM_ERROR("could not set cursor after modeset");
-			ret = 0;
-		}
-	}
-
-	if (bo->is_primary) {
-		DRM_DEBUG_KMS("setting surface_id to 0 for primary surface %d on crtc %d\n", bo->surface_id, qcrtc->index);
-		surf_id = 0;
-	} else {
-		surf_id = bo->surface_id;
-	}
-
-	if (old_bo && old_bo != bo) {
-		old_bo->is_primary = false;
-		ret = qxl_bo_reserve(old_bo, false);
-		qxl_bo_unpin(old_bo);
-		qxl_bo_unreserve(old_bo);
-	}
-
-	qxl_monitors_config_set(qdev, qcrtc->index, x, y,
-				mode->hdisplay,
-				mode->vdisplay, surf_id);
-	return 0;
-}
-
-static void qxl_crtc_prepare(struct drm_crtc *crtc)
-{
-	DRM_DEBUG("current: %dx%d+%d+%d (%d).\n",
-		  crtc->mode.hdisplay, crtc->mode.vdisplay,
-		  crtc->x, crtc->y, crtc->enabled);
 }
 
 static void qxl_crtc_commit(struct drm_crtc *crtc)
···
 static void qxl_crtc_disable(struct drm_crtc *crtc)
 {
 	struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
-	struct drm_device *dev = crtc->dev;
-	struct qxl_device *qdev = dev->dev_private;
-	if (crtc->primary->fb) {
-		struct qxl_framebuffer *qfb = to_qxl_framebuffer(crtc->primary->fb);
-		struct qxl_bo *bo = gem_to_qxl_bo(qfb->obj);
-		int ret;
-		ret = qxl_bo_reserve(bo, false);
-		qxl_bo_unpin(bo);
-		qxl_bo_unreserve(bo);
-		crtc->primary->fb = NULL;
-	}
+	struct qxl_device *qdev = crtc->dev->dev_private;
 
 	qxl_monitors_config_set(qdev, qcrtc->index, 0, 0, 0, 0, 0);
 
···
 	.dpms = qxl_crtc_dpms,
 	.disable = qxl_crtc_disable,
 	.mode_fixup = qxl_crtc_mode_fixup,
-	.mode_set = qxl_crtc_mode_set,
-	.prepare = qxl_crtc_prepare,
+	.mode_set_nofb = qxl_mode_set_nofb,
 	.commit = qxl_crtc_commit,
+	.atomic_flush = qxl_crtc_atomic_flush,
 };
+
+int qxl_primary_atomic_check(struct drm_plane *plane,
+			     struct drm_plane_state *state)
+{
+	struct qxl_device *qdev = plane->dev->dev_private;
+	struct qxl_framebuffer *qfb;
+	struct qxl_bo *bo;
+
+	if (!state->crtc || !state->fb)
+		return 0;
+
+	qfb = to_qxl_framebuffer(state->fb);
+	bo = gem_to_qxl_bo(qfb->obj);
+
+	if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
+		DRM_ERROR("Mode doesn't fit in vram size (vgamem)");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void qxl_primary_atomic_update(struct drm_plane *plane,
+				      struct drm_plane_state *old_state)
+{
+	struct qxl_device *qdev = plane->dev->dev_private;
+	struct qxl_framebuffer *qfb =
+		to_qxl_framebuffer(plane->state->fb);
+	struct qxl_framebuffer *qfb_old;
+	struct qxl_bo *bo = gem_to_qxl_bo(qfb->obj);
+	struct qxl_bo *bo_old;
+	struct drm_clip_rect norect = {
+		.x1 = 0,
+		.y1 = 0,
+		.x2 = qfb->base.width,
+		.y2 = qfb->base.height
+	};
+
+	if (!old_state->fb) {
+		qxl_io_log(qdev,
+			   "create primary fb: %dx%d,%d,%d\n",
+			   bo->surf.width, bo->surf.height,
+			   bo->surf.stride, bo->surf.format);
+
+		qxl_io_create_primary(qdev, 0, bo);
+		bo->is_primary = true;
+		return;
+
+	} else {
+		qfb_old = to_qxl_framebuffer(old_state->fb);
+		bo_old = gem_to_qxl_bo(qfb_old->obj);
+		bo_old->is_primary = false;
+	}
+
+	bo->is_primary = true;
+	qxl_draw_dirty_fb(qdev, qfb, bo, 0, 0, &norect, 1, 1);
+}
+
+static void qxl_primary_atomic_disable(struct drm_plane *plane,
+				       struct drm_plane_state *old_state)
+{
+	struct qxl_device *qdev = plane->dev->dev_private;
+
+	if (old_state->fb)
+	{	struct qxl_framebuffer *qfb =
+			to_qxl_framebuffer(old_state->fb);
+		struct qxl_bo *bo = gem_to_qxl_bo(qfb->obj);
+
+		qxl_io_destroy_primary(qdev);
+		bo->is_primary = false;
+	}
+}
+
+int qxl_plane_atomic_check(struct drm_plane *plane,
+			   struct drm_plane_state *state)
+{
+	return 0;
+}
+
+static void qxl_cursor_atomic_update(struct drm_plane *plane,
+				     struct drm_plane_state *old_state)
+{
+	struct drm_device *dev = plane->dev;
+	struct qxl_device *qdev = dev->dev_private;
+	struct drm_framebuffer *fb = plane->state->fb;
+	struct qxl_release *release;
+	struct qxl_cursor_cmd *cmd;
+	struct qxl_cursor *cursor;
+	struct drm_gem_object *obj;
+	struct qxl_bo *cursor_bo, *user_bo = NULL;
+	int ret;
+	void *user_ptr;
+	int size = 64*64*4;
+
+	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
+					 QXL_RELEASE_CURSOR_CMD,
+					 &release, NULL);
+
+	cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
+
+	if (fb != old_state->fb) {
+		obj = to_qxl_framebuffer(fb)->obj;
+		user_bo = gem_to_qxl_bo(obj);
+
+		/* pinning is done in the prepare/cleanup framevbuffer */
+		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		if (ret)
+			goto out_free_release;
+
+		ret = qxl_alloc_bo_reserved(qdev, release,
+					    sizeof(struct qxl_cursor) + size,
+					    &cursor_bo);
+		if (ret)
+			goto out_kunmap;
+
+		ret = qxl_release_reserve_list(release, true);
+		if (ret)
+			goto out_free_bo;
+
+		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		if (ret)
+			goto out_backoff;
+
+		cursor->header.unique = 0;
+		cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
+		cursor->header.width = 64;
+		cursor->header.height = 64;
+		cursor->header.hot_spot_x = fb->hot_x;
+		cursor->header.hot_spot_y = fb->hot_y;
+		cursor->data_size = size;
+		cursor->chunk.next_chunk = 0;
+		cursor->chunk.prev_chunk = 0;
+		cursor->chunk.data_size = size;
+		memcpy(cursor->chunk.data, user_ptr, size);
+		qxl_bo_kunmap(cursor_bo);
+		qxl_bo_kunmap(user_bo);
+
+		cmd->u.set.visible = 1;
+		cmd->u.set.shape = qxl_bo_physical_address(qdev,
+							   cursor_bo, 0);
+		cmd->type = QXL_CURSOR_SET;
+	} else {
+
+		ret = qxl_release_reserve_list(release, true);
+		if (ret)
+			goto out_free_release;
+
+		cmd->type = QXL_CURSOR_MOVE;
+	}
+
+	cmd->u.position.x = plane->state->crtc_x + fb->hot_x;
+	cmd->u.position.y = plane->state->crtc_y + fb->hot_y;
+
+	qxl_release_unmap(qdev, release, &cmd->release_info);
+	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+	qxl_release_fence_buffer_objects(release);
+
+	return;
+
+out_backoff:
+	qxl_release_backoff_reserve_list(release);
+out_free_bo:
+	qxl_bo_unref(&cursor_bo);
+out_kunmap:
+	qxl_bo_kunmap(user_bo);
+out_free_release:
+	qxl_release_free(qdev, release);
+	return;
+
+}
+
+void qxl_cursor_atomic_disable(struct drm_plane *plane,
+			       struct drm_plane_state *old_state)
+{
+	struct qxl_device *qdev = plane->dev->dev_private;
+	struct qxl_release *release;
+	struct qxl_cursor_cmd *cmd;
+	int ret;
+
+	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
+					 QXL_RELEASE_CURSOR_CMD,
+					 &release, NULL);
+	if (ret)
+		return;
+
+	ret = qxl_release_reserve_list(release, true);
+	if (ret) {
+		qxl_release_free(qdev, release);
+		return;
+	}
+
+	cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
+	cmd->type = QXL_CURSOR_HIDE;
+	qxl_release_unmap(qdev, release, &cmd->release_info);
+
+	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+	qxl_release_fence_buffer_objects(release);
+}
+
+int qxl_plane_prepare_fb(struct drm_plane *plane,
+			 struct drm_plane_state *new_state)
+{
+	struct drm_gem_object *obj;
+	struct qxl_bo *user_bo;
+	int ret;
+
+	if (!new_state->fb)
+		return 0;
+
+	obj = to_qxl_framebuffer(new_state->fb)->obj;
+	user_bo = gem_to_qxl_bo(obj);
+
+	ret = qxl_bo_pin(user_bo, QXL_GEM_DOMAIN_CPU, NULL);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void qxl_plane_cleanup_fb(struct drm_plane *plane,
+				 struct drm_plane_state *old_state)
+{
+	struct drm_gem_object *obj;
+	struct qxl_bo *user_bo;
+
+	if (!plane->state->fb) {
+		/* we never executed prepare_fb, so there's nothing to
+		 * unpin.
+		 */
+		return;
+	}
+
+	obj = to_qxl_framebuffer(plane->state->fb)->obj;
+	user_bo = gem_to_qxl_bo(obj);
+	qxl_bo_unpin(user_bo);
+}
+
+static const uint32_t qxl_cursor_plane_formats[] = {
+	DRM_FORMAT_ARGB8888,
+};
+
+static const struct drm_plane_helper_funcs qxl_cursor_helper_funcs = {
+	.atomic_check = qxl_plane_atomic_check,
+	.atomic_update = qxl_cursor_atomic_update,
+	.atomic_disable = qxl_cursor_atomic_disable,
+	.prepare_fb = qxl_plane_prepare_fb,
+	.cleanup_fb = qxl_plane_cleanup_fb,
+};
+
+static const struct drm_plane_funcs qxl_cursor_plane_funcs = {
+	.update_plane = drm_atomic_helper_update_plane,
+	.disable_plane = drm_atomic_helper_disable_plane,
+	.destroy = drm_primary_helper_destroy,
+	.reset = drm_atomic_helper_plane_reset,
+	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+};
+
+static const uint32_t qxl_primary_plane_formats[] = {
+	DRM_FORMAT_XRGB8888,
DRM_FORMAT_ARGB8888, 1080 + }; 1081 + 1082 + static const struct drm_plane_helper_funcs primary_helper_funcs = { 1083 + .atomic_check = qxl_primary_atomic_check, 1084 + .atomic_update = qxl_primary_atomic_update, 1085 + .atomic_disable = qxl_primary_atomic_disable, 1086 + .prepare_fb = qxl_plane_prepare_fb, 1087 + .cleanup_fb = qxl_plane_cleanup_fb, 1088 + }; 1089 + 1090 + static const struct drm_plane_funcs qxl_primary_plane_funcs = { 1091 + .update_plane = drm_atomic_helper_update_plane, 1092 + .disable_plane = drm_atomic_helper_disable_plane, 1093 + .destroy = drm_primary_helper_destroy, 1094 + .reset = drm_atomic_helper_plane_reset, 1095 + .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 1096 + .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 1097 + }; 1098 + 1099 + static struct drm_plane *qxl_create_plane(struct qxl_device *qdev, 1100 + unsigned int possible_crtcs, 1101 + enum drm_plane_type type) 1102 + { 1103 + const struct drm_plane_helper_funcs *helper_funcs = NULL; 1104 + struct drm_plane *plane; 1105 + const struct drm_plane_funcs *funcs; 1106 + const uint32_t *formats; 1107 + int num_formats; 1108 + int err; 1109 + 1110 + if (type == DRM_PLANE_TYPE_PRIMARY) { 1111 + funcs = &qxl_primary_plane_funcs; 1112 + formats = qxl_primary_plane_formats; 1113 + num_formats = ARRAY_SIZE(qxl_primary_plane_formats); 1114 + helper_funcs = &primary_helper_funcs; 1115 + } else if (type == DRM_PLANE_TYPE_CURSOR) { 1116 + funcs = &qxl_cursor_plane_funcs; 1117 + formats = qxl_cursor_plane_formats; 1118 + helper_funcs = &qxl_cursor_helper_funcs; 1119 + num_formats = ARRAY_SIZE(qxl_cursor_plane_formats); 1120 + } else { 1121 + return ERR_PTR(-EINVAL); 1122 + } 1123 + 1124 + plane = kzalloc(sizeof(*plane), GFP_KERNEL); 1125 + if (!plane) 1126 + return ERR_PTR(-ENOMEM); 1127 + 1128 + err = drm_universal_plane_init(&qdev->ddev, plane, possible_crtcs, 1129 + funcs, formats, num_formats, 1130 + type, NULL); 1131 + if (err) 1132 + goto free_plane; 
1133 + 1134 + drm_plane_helper_add(plane, helper_funcs); 1135 + 1136 + return plane; 1137 + 1138 + free_plane: 1139 + kfree(plane); 1140 + return ERR_PTR(-EINVAL); 1141 + } 462 1142 463 1143 static int qdev_crtc_init(struct drm_device *dev, int crtc_id) 464 1144 { 465 1145 struct qxl_crtc *qxl_crtc; 1146 + struct drm_plane *primary, *cursor; 1147 + struct qxl_device *qdev = dev->dev_private; 1148 + int r; 466 1149 467 1150 qxl_crtc = kzalloc(sizeof(struct qxl_crtc), GFP_KERNEL); 468 1151 if (!qxl_crtc) 469 1152 return -ENOMEM; 470 1153 471 - drm_crtc_init(dev, &qxl_crtc->base, &qxl_crtc_funcs); 1154 + primary = qxl_create_plane(qdev, 1 << crtc_id, DRM_PLANE_TYPE_PRIMARY); 1155 + if (IS_ERR(primary)) { 1156 + r = -ENOMEM; 1157 + goto free_mem; 1158 + } 1159 + 1160 + cursor = qxl_create_plane(qdev, 1 << crtc_id, DRM_PLANE_TYPE_CURSOR); 1161 + if (IS_ERR(cursor)) { 1162 + r = -ENOMEM; 1163 + goto clean_primary; 1164 + } 1165 + 1166 + r = drm_crtc_init_with_planes(dev, &qxl_crtc->base, primary, cursor, 1167 + &qxl_crtc_funcs, NULL); 1168 + if (r) 1169 + goto clean_cursor; 1170 + 472 1171 qxl_crtc->index = crtc_id; 473 1172 drm_crtc_helper_add(&qxl_crtc->base, &qxl_crtc_helper_funcs); 474 1173 return 0; 1174 + 1175 + clean_cursor: 1176 + drm_plane_cleanup(cursor); 1177 + kfree(cursor); 1178 + clean_primary: 1179 + drm_plane_cleanup(primary); 1180 + kfree(primary); 1181 + free_mem: 1182 + kfree(qxl_crtc); 1183 + return r; 475 1184 } 476 1185 477 1186 static void qxl_enc_dpms(struct drm_encoder *encoder, int mode) ··· 1014 1019 .fill_modes = drm_helper_probe_single_connector_modes, 1015 1020 .set_property = qxl_conn_set_property, 1016 1021 .destroy = qxl_conn_destroy, 1022 + .reset = drm_atomic_helper_connector_reset, 1023 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 1024 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 1017 1025 }; 1018 1026 1019 1027 static void qxl_enc_destroy(struct drm_encoder *encoder) ··· 1107 1109 
1108 1110 static const struct drm_mode_config_funcs qxl_mode_funcs = { 1109 1111 .fb_create = qxl_user_framebuffer_create, 1112 + .atomic_check = drm_atomic_helper_check, 1113 + .atomic_commit = drm_atomic_helper_commit, 1110 1114 }; 1111 1115 1112 1116 int qxl_create_monitors_object(struct qxl_device *qdev) ··· 1128 1128 } 1129 1129 qdev->monitors_config_bo = gem_to_qxl_bo(gobj); 1130 1130 1131 - ret = qxl_bo_reserve(qdev->monitors_config_bo, false); 1131 + ret = qxl_bo_pin(qdev->monitors_config_bo, QXL_GEM_DOMAIN_VRAM, NULL); 1132 1132 if (ret) 1133 1133 return ret; 1134 - 1135 - ret = qxl_bo_pin(qdev->monitors_config_bo, QXL_GEM_DOMAIN_VRAM, NULL); 1136 - if (ret) { 1137 - qxl_bo_unreserve(qdev->monitors_config_bo); 1138 - return ret; 1139 - } 1140 - 1141 - qxl_bo_unreserve(qdev->monitors_config_bo); 1142 1134 1143 1135 qxl_bo_kmap(qdev->monitors_config_bo, NULL); 1144 1136 ··· 1151 1159 qdev->ram_header->monitors_config = 0; 1152 1160 1153 1161 qxl_bo_kunmap(qdev->monitors_config_bo); 1154 - ret = qxl_bo_reserve(qdev->monitors_config_bo, false); 1162 + ret = qxl_bo_unpin(qdev->monitors_config_bo); 1155 1163 if (ret) 1156 1164 return ret; 1157 - 1158 - qxl_bo_unpin(qdev->monitors_config_bo); 1159 - qxl_bo_unreserve(qdev->monitors_config_bo); 1160 1165 1161 1166 qxl_bo_unref(&qdev->monitors_config_bo); 1162 1167 return 0; ··· 1173 1184 qdev->ddev.mode_config.funcs = (void *)&qxl_mode_funcs; 1174 1185 1175 1186 /* modes will be validated against the framebuffer size */ 1176 - qdev->ddev.mode_config.min_width = 320; 1177 - qdev->ddev.mode_config.min_height = 200; 1187 + qdev->ddev.mode_config.min_width = 0; 1188 + qdev->ddev.mode_config.min_height = 0; 1178 1189 qdev->ddev.mode_config.max_width = 8192; 1179 1190 qdev->ddev.mode_config.max_height = 8192; 1180 1191 ··· 1189 1200 } 1190 1201 1191 1202 qdev->mode_info.mode_config_initialized = true; 1203 + 1204 + drm_mode_config_reset(&qdev->ddev); 1192 1205 1193 1206 /* primary surface must be created by this point, 
to allow 1194 1207 * issuing command queue commands and having them read by
+4 -28
drivers/gpu/drm/qxl/qxl_drv.c
···
 	if (ret)
 		goto free_dev;
 
-	ret = qxl_device_init(qdev, &qxl_driver, pdev, ent->driver_data);
+	ret = qxl_device_init(qdev, &qxl_driver, pdev);
 	if (ret)
 		goto disable_pci;
 
-	ret = drm_vblank_init(&qdev->ddev, 1);
-	if (ret)
-		goto unload;
-
 	ret = qxl_modeset_init(qdev);
 	if (ret)
-		goto vblank_cleanup;
+		goto unload;
 
 	drm_kms_helper_poll_init(&qdev->ddev);
 
···
 
 modeset_cleanup:
 	qxl_modeset_fini(qdev);
-vblank_cleanup:
-	drm_vblank_cleanup(&qdev->ddev);
 unload:
 	qxl_device_fini(qdev);
 disable_pci:
···
 	return qxl_drm_resume(drm_dev, false);
 }
 
-static u32 qxl_noop_get_vblank_counter(struct drm_device *dev,
-				       unsigned int pipe)
-{
-	return 0;
-}
-
-static int qxl_noop_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	return 0;
-}
-
-static void qxl_noop_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-}
-
 static const struct dev_pm_ops qxl_pm_ops = {
 	.suspend = qxl_pm_suspend,
 	.resume = qxl_pm_resume,
···
 
 static struct drm_driver qxl_driver = {
 	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
-			   DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED,
-	.get_vblank_counter = qxl_noop_get_vblank_counter,
-	.enable_vblank = qxl_noop_enable_vblank,
-	.disable_vblank = qxl_noop_disable_vblank,
+			   DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED |
+			   DRIVER_ATOMIC,
 
 	.set_busid = drm_pci_set_busid,
 
···
 	.dumb_destroy = drm_gem_dumb_destroy,
 #if defined(CONFIG_DEBUG_FS)
 	.debugfs_init = qxl_debugfs_init,
-	.debugfs_cleanup = qxl_debugfs_takedown,
 #endif
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+1 -8
drivers/gpu/drm/qxl/qxl_drv.h
···
 struct qxl_crtc {
 	struct drm_crtc base;
 	int index;
-	int cur_x;
-	int cur_y;
-	int hot_spot_x;
-	int hot_spot_y;
-	struct qxl_bo *cursor_bo;
 };
 
 struct qxl_output {
···
 
 struct qxl_device {
 	struct drm_device ddev;
-	unsigned long flags;
 
 	resource_size_t vram_base, vram_size;
 	resource_size_t surfaceram_base, surfaceram_size;
···
 extern int qxl_max_ioctl;
 
 int qxl_device_init(struct qxl_device *qdev, struct drm_driver *drv,
-		    struct pci_dev *pdev, unsigned long flags);
+		    struct pci_dev *pdev);
 void qxl_device_fini(struct qxl_device *qdev);
 
 int qxl_modeset_init(struct qxl_device *qdev);
···
 /* debugfs */
 
 int qxl_debugfs_init(struct drm_minor *minor);
-void qxl_debugfs_takedown(struct drm_minor *minor);
 int qxl_ttm_debugfs_init(struct qxl_device *qdev);
 
 /* qxl_prime.c */
+8 -22
drivers/gpu/drm/qxl/qxl_fb.c
···
 static void qxlfb_destroy_pinned_object(struct drm_gem_object *gobj)
 {
 	struct qxl_bo *qbo = gem_to_qxl_bo(gobj);
-	int ret;
 
-	ret = qxl_bo_reserve(qbo, false);
-	if (likely(ret == 0)) {
-		qxl_bo_kunmap(qbo);
-		qxl_bo_unpin(qbo);
-		qxl_bo_unreserve(qbo);
-	}
+	qxl_bo_kunmap(qbo);
+	qxl_bo_unpin(qbo);
+
 	drm_gem_object_unreference_unlocked(gobj);
 }
 
···
 	qbo->surf.height = mode_cmd->height;
 	qbo->surf.stride = mode_cmd->pitches[0];
 	qbo->surf.format = SPICE_SURFACE_FMT_32_xRGB;
-	ret = qxl_bo_reserve(qbo, false);
-	if (unlikely(ret != 0))
-		goto out_unref;
+
 	ret = qxl_bo_pin(qbo, QXL_GEM_DOMAIN_SURFACE, NULL);
 	if (ret) {
-		qxl_bo_unreserve(qbo);
 		goto out_unref;
 	}
 	ret = qxl_bo_kmap(qbo, NULL);
-	qxl_bo_unreserve(qbo); /* unreserve, will be mmaped */
+
 	if (ret)
 		goto out_unref;
 
···
 
 	if (info->screen_base == NULL) {
 		ret = -ENOSPC;
-		goto out_destroy_fbi;
+		goto out_unref;
 	}
 
 #ifdef CONFIG_DRM_FBDEV_EMULATION
···
 		fb->format->depth, fb->pitches[0], fb->width, fb->height);
 	return 0;
 
-out_destroy_fbi:
-	drm_fb_helper_release_fbi(&qfbdev->helper);
 out_unref:
 	if (qbo) {
-		ret = qxl_bo_reserve(qbo, false);
-		if (likely(ret == 0)) {
-			qxl_bo_kunmap(qbo);
-			qxl_bo_unpin(qbo);
-			qxl_bo_unreserve(qbo);
-		}
+		qxl_bo_kunmap(qbo);
+		qxl_bo_unpin(qbo);
 	}
 	if (fb && ret) {
 		drm_gem_object_unreference_unlocked(gobj);
···
 	struct qxl_framebuffer *qfb = &qfbdev->qfb;
 
 	drm_fb_helper_unregister_fbi(&qfbdev->helper);
-	drm_fb_helper_release_fbi(&qfbdev->helper);
 
 	if (qfb->obj) {
 		qxlfb_destroy_pinned_object(qfb->obj);
+1 -4
drivers/gpu/drm/qxl/qxl_kms.c
···
 
 int qxl_device_init(struct qxl_device *qdev,
 		    struct drm_driver *drv,
-		    struct pci_dev *pdev,
-		    unsigned long flags)
+		    struct pci_dev *pdev)
 {
 	int r, sb;
 
···
 	qdev->ddev.pdev = pdev;
 	pci_set_drvdata(pdev, &qdev->ddev);
 	qdev->ddev.dev_private = qdev;
-
-	qdev->flags = flags;
 
 	mutex_init(&qdev->gem.mutex);
 	mutex_init(&qdev->update_area_mutex);
+39 -2
drivers/gpu/drm/qxl/qxl_object.c
···
 	return bo;
 }
 
-int qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
+int __qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
 {
 	struct drm_device *ddev = bo->gem_base.dev;
 	int r;
···
 	return r;
 }
 
-int qxl_bo_unpin(struct qxl_bo *bo)
+int __qxl_bo_unpin(struct qxl_bo *bo)
 {
 	struct drm_device *ddev = bo->gem_base.dev;
 	int r, i;
···
 	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
 	if (unlikely(r != 0))
 		dev_err(ddev->dev, "%p validate failed for unpin\n", bo);
+	return r;
+}
+
+/*
+ * Reserve the BO before pinning the object. If the BO was reserved
+ * beforehand, use the internal version __qxl_bo_pin directly.
+ */
+int qxl_bo_pin(struct qxl_bo *bo, u32 domain, u64 *gpu_addr)
+{
+	int r;
+
+	r = qxl_bo_reserve(bo, false);
+	if (r)
+		return r;
+
+	r = __qxl_bo_pin(bo, bo->type, NULL);
+	qxl_bo_unreserve(bo);
+	return r;
+}
+
+/*
+ * Reserve the BO before unpinning the object. If the BO was reserved
+ * beforehand, use the internal version __qxl_bo_unpin directly.
+ */
+int qxl_bo_unpin(struct qxl_bo *bo)
+{
+	int r;
+
+	r = qxl_bo_reserve(bo, false);
+	if (r)
+		return r;
+
+	r = __qxl_bo_unpin(bo);
+	qxl_bo_unreserve(bo);
 	return r;
 }
+3 -4
drivers/gpu/drm/r128/r128_cce.c
···
 
 	pdev = platform_device_register_simple("r128_cce", 0, NULL, 0);
 	if (IS_ERR(pdev)) {
-		printk(KERN_ERR "r128_cce: Failed to register firmware\n");
+		pr_err("r128_cce: Failed to register firmware\n");
 		return PTR_ERR(pdev);
 	}
 	rc = request_firmware(&fw, FIRMWARE_NAME, &pdev->dev);
 	platform_device_unregister(pdev);
 	if (rc) {
-		printk(KERN_ERR "r128_cce: Failed to load firmware \"%s\"\n",
+		pr_err("r128_cce: Failed to load firmware \"%s\"\n",
 		       FIRMWARE_NAME);
 		return rc;
 	}
 
 	if (fw->size != 256 * 8) {
-		printk(KERN_ERR
-		       "r128_cce: Bogus length %zu in firmware \"%s\"\n",
+		pr_err("r128_cce: Bogus length %zu in firmware \"%s\"\n",
 		       fw->size, FIRMWARE_NAME);
 		rc = -EINVAL;
 		goto out_release;
+4 -7
drivers/gpu/drm/radeon/radeon_fb.c
···
 	info = drm_fb_helper_alloc_fbi(helper);
 	if (IS_ERR(info)) {
 		ret = PTR_ERR(info);
-		goto out_unref;
+		goto out;
 	}
 
 	info->par = rfbdev;
···
 	ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
 	if (ret) {
 		DRM_ERROR("failed to initialize framebuffer %d\n", ret);
-		goto out_destroy_fbi;
+		goto out;
 	}
 
 	fb = &rfbdev->rfb.base;
···
 
 	if (info->screen_base == NULL) {
 		ret = -ENOSPC;
-		goto out_destroy_fbi;
+		goto out;
 	}
 
 	DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
···
 	vga_switcheroo_client_fb_set(rdev->ddev->pdev, info);
 	return 0;
 
-out_destroy_fbi:
-	drm_fb_helper_release_fbi(helper);
-out_unref:
+out:
 	if (rbo) {
 
 	}
···
 	struct radeon_framebuffer *rfb = &rfbdev->rfb;
 
 	drm_fb_helper_unregister_fbi(&rfbdev->helper);
-	drm_fb_helper_release_fbi(&rfbdev->helper);
 
 	if (rfb->obj) {
 		radeonfb_destroy_pinned_object(rfb->obj);
+19 -10
drivers/gpu/drm/rcar-du/rcar_du_crtc.c
···
 	.atomic_flush = rcar_du_crtc_atomic_flush,
 };
 
+static int rcar_du_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+	rcar_du_crtc_write(rcrtc, DSRCR, DSRCR_VBCL);
+	rcar_du_crtc_set(rcrtc, DIER, DIER_VBE);
+
+	return 0;
+}
+
+static void rcar_du_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+	rcar_du_crtc_clr(rcrtc, DIER, DIER_VBE);
+}
+
 static const struct drm_crtc_funcs crtc_funcs = {
 	.reset = drm_atomic_helper_crtc_reset,
 	.destroy = drm_crtc_cleanup,
···
 	.page_flip = drm_atomic_helper_page_flip,
 	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+	.enable_vblank = rcar_du_crtc_enable_vblank,
+	.disable_vblank = rcar_du_crtc_disable_vblank,
 };
 
 /* -----------------------------------------------------------------------------
···
 	}
 
 	return 0;
-}
-
-void rcar_du_crtc_enable_vblank(struct rcar_du_crtc *rcrtc, bool enable)
-{
-	if (enable) {
-		rcar_du_crtc_write(rcrtc, DSRCR, DSRCR_VBCL);
-		rcar_du_crtc_set(rcrtc, DIER, DIER_VBE);
-	} else {
-		rcar_du_crtc_clr(rcrtc, DIER, DIER_VBE);
-	}
 }
-1
drivers/gpu/drm/rcar-du/rcar_du_crtc.h
···
 };
 
 int rcar_du_crtc_create(struct rcar_du_group *rgrp, unsigned int index);
-void rcar_du_crtc_enable_vblank(struct rcar_du_crtc *rcrtc, bool enable);
 void rcar_du_crtc_suspend(struct rcar_du_crtc *rcrtc);
 void rcar_du_crtc_resume(struct rcar_du_crtc *rcrtc);
 
-20
drivers/gpu/drm/rcar-du/rcar_du_drv.c
···
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 
-#include "rcar_du_crtc.h"
 #include "rcar_du_drv.h"
 #include "rcar_du_kms.h"
 #include "rcar_du_regs.h"
···
 	drm_fbdev_cma_restore_mode(rcdu->fbdev);
 }
 
-static int rcar_du_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct rcar_du_device *rcdu = dev->dev_private;
-
-	rcar_du_crtc_enable_vblank(&rcdu->crtcs[pipe], true);
-
-	return 0;
-}
-
-static void rcar_du_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct rcar_du_device *rcdu = dev->dev_private;
-
-	rcar_du_crtc_enable_vblank(&rcdu->crtcs[pipe], false);
-}
-
 static const struct file_operations rcar_du_fops = {
 	.owner = THIS_MODULE,
 	.open = drm_open,
···
 	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME
 			 | DRIVER_ATOMIC,
 	.lastclose = rcar_du_lastclose,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = rcar_du_enable_vblank,
-	.disable_vblank = rcar_du_disable_vblank,
 	.gem_free_object_unlocked = drm_gem_cma_free_object,
 	.gem_vm_ops = &drm_gem_cma_vm_ops,
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+316 -177
drivers/gpu/drm/rockchip/dw-mipi-dsi.c
···
 #include <linux/math64.h>
 #include <linux/module.h>
 #include <linux/of_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
+#include <linux/reset.h>
 #include <linux/mfd/syscon.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
···
 
 #define DRIVER_NAME "dw-mipi-dsi"
 
-#define GRF_SOC_CON6			0x025c
-#define DSI0_SEL_VOP_LIT		(1 << 6)
-#define DSI1_SEL_VOP_LIT		(1 << 9)
+#define RK3288_GRF_SOC_CON6		0x025c
+#define RK3288_DSI0_SEL_VOP_LIT		BIT(6)
+#define RK3288_DSI1_SEL_VOP_LIT		BIT(9)
+
+#define RK3399_GRF_SOC_CON19		0x6250
+#define RK3399_DSI0_SEL_VOP_LIT		BIT(0)
+#define RK3399_DSI1_SEL_VOP_LIT		BIT(4)
+
+/* disable turnrequest, turndisable, forcetxstopmode, forcerxmode */
+#define RK3399_GRF_SOC_CON22		0x6258
+#define RK3399_GRF_DSI_MODE		0xffff0000
 
 #define DSI_VERSION			0x00
 #define DSI_PWR_UP			0x04
···
 #define FRAME_BTA_ACK			BIT(14)
 #define ENABLE_LOW_POWER		(0x3f << 8)
 #define ENABLE_LOW_POWER_MASK		(0x3f << 8)
-#define VID_MODE_TYPE_BURST_SYNC_PULSES		0x2
+#define VID_MODE_TYPE_NON_BURST_SYNC_PULSES	0x0
+#define VID_MODE_TYPE_NON_BURST_SYNC_EVENTS	0x1
+#define VID_MODE_TYPE_BURST			0x2
 #define VID_MODE_TYPE_MASK			0x3
 
 #define DSI_VID_PKT_SIZE		0x3c
···
 #define LPRX_TO_CNT(p)			((p) & 0xffff)
 
 #define DSI_BTA_TO_CNT			0x8c
-
 #define DSI_LPCLK_CTRL			0x94
 #define AUTO_CLKLANE_CTRL		BIT(1)
 #define PHY_TXREQUESTCLKHS		BIT(0)
···
 
 #define HSFREQRANGE_SEL(val)	(((val) & 0x3f) << 1)
 
-#define INPUT_DIVIDER(val)	((val - 1) & 0x7f)
+#define INPUT_DIVIDER(val)	(((val) - 1) & 0x7f)
 #define LOW_PROGRAM_EN		0
 #define HIGH_PROGRAM_EN		BIT(7)
-#define LOOP_DIV_LOW_SEL(val)	((val - 1) & 0x1f)
-#define LOOP_DIV_HIGH_SEL(val)	(((val - 1) >> 5) & 0x1f)
+#define LOOP_DIV_LOW_SEL(val)	(((val) - 1) & 0x1f)
+#define LOOP_DIV_HIGH_SEL(val)	((((val) - 1) >> 5) & 0x1f)
 #define PLL_LOOP_DIV_EN		BIT(5)
 #define PLL_INPUT_DIV_EN	BIT(4)
···
 };
 
 struct dw_mipi_dsi_plat_data {
+	u32 dsi0_en_bit;
+	u32 dsi1_en_bit;
+	u32 grf_switch_reg;
+	u32 grf_dsi0_mode;
+	u32 grf_dsi0_mode_reg;
 	unsigned int max_data_lanes;
-	enum drm_mode_status (*mode_valid)(struct drm_connector *connector,
-					   struct drm_display_mode *mode);
 };
 
 struct dw_mipi_dsi {
···
 
 	struct clk *pllref_clk;
 	struct clk *pclk;
+	struct clk *phy_cfg_clk;
 
+	int dpms_mode;
 	unsigned int lane_mbps; /* per lane */
 	u32 channel;
 	u32 lanes;
 	u32 format;
 	u16 input_div;
 	u16 feedback_div;
-	struct drm_display_mode *mode;
+	unsigned long mode_flags;
 
 	const struct dw_mipi_dsi_plat_data *pdata;
 };
···
  * The controller should generate 2 frames before
  * preparing the peripheral.
  */
-static void dw_mipi_dsi_wait_for_two_frames(struct dw_mipi_dsi *dsi)
+static void dw_mipi_dsi_wait_for_two_frames(struct drm_display_mode *mode)
 {
 	int refresh, two_frames;
 
-	refresh = drm_mode_vrefresh(dsi->mode);
+	refresh = drm_mode_vrefresh(mode);
 	two_frames = DIV_ROUND_UP(MSEC_PER_SEC, refresh) * 2;
 	msleep(two_frames);
 }
···
 {
 	return container_of(encoder, struct dw_mipi_dsi, encoder);
 }
+
 static inline void dsi_write(struct dw_mipi_dsi *dsi, u32 reg, u32 val)
 {
 	writel(val, dsi->base + reg);
···
 }
 
 static void dw_mipi_dsi_phy_write(struct dw_mipi_dsi *dsi, u8 test_code,
-				 u8 test_data)
+				  u8 test_data)
 {
 	/*
 	 * With the falling edge on TESTCLK, the TESTDIN[7:0] signal content
···
 	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_TESTCLK | PHY_UNTESTCLR);
 }
 
+/**
+ * ns2bc - Nanoseconds to byte clock cycles
+ */
+static inline unsigned int ns2bc(struct dw_mipi_dsi *dsi, int ns)
+{
+	return DIV_ROUND_UP(ns * dsi->lane_mbps / 8, 1000);
+}
+
+/**
+ * ns2ui - Nanoseconds to UI time periods
+ */
+static inline unsigned int ns2ui(struct dw_mipi_dsi *dsi, int ns)
+{
+	return DIV_ROUND_UP(ns * dsi->lane_mbps, 1000);
+}
+
 static int dw_mipi_dsi_phy_init(struct dw_mipi_dsi *dsi)
 {
 	int ret, testdin, vco, val;
···
 		return testdin;
 	}
 
-	dsi_write(dsi, DSI_PWR_UP, POWERUP);
+	/* Start by clearing PHY state */
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_UNTESTCLR);
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_TESTCLR);
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_UNTESTCLR);
+
+	ret = clk_prepare_enable(dsi->phy_cfg_clk);
+	if (ret) {
+		dev_err(dsi->dev, "Failed to enable phy_cfg_clk\n");
+		return ret;
+	}
 
 	dw_mipi_dsi_phy_write(dsi, 0x10, BYPASS_VCO_RANGE |
 					 VCO_RANGE_CON_SEL(vco) |
···
 
 	dw_mipi_dsi_phy_write(dsi, 0x44, HSFREQRANGE_SEL(testdin));
 
-	dw_mipi_dsi_phy_write(dsi, 0x19, PLL_LOOP_DIV_EN | PLL_INPUT_DIV_EN);
 	dw_mipi_dsi_phy_write(dsi, 0x17, INPUT_DIVIDER(dsi->input_div));
 	dw_mipi_dsi_phy_write(dsi, 0x18, LOOP_DIV_LOW_SEL(dsi->feedback_div) |
 					 LOW_PROGRAM_EN);
 	dw_mipi_dsi_phy_write(dsi, 0x18, LOOP_DIV_HIGH_SEL(dsi->feedback_div) |
 					 HIGH_PROGRAM_EN);
+	dw_mipi_dsi_phy_write(dsi, 0x19, PLL_LOOP_DIV_EN | PLL_INPUT_DIV_EN);
+
+	dw_mipi_dsi_phy_write(dsi, 0x22, LOW_PROGRAM_EN |
+					 BIASEXTR_SEL(BIASEXTR_127_7));
+	dw_mipi_dsi_phy_write(dsi, 0x22, HIGH_PROGRAM_EN |
+					 BANDGAP_SEL(BANDGAP_96_10));
 
 	dw_mipi_dsi_phy_write(dsi, 0x20, POWER_CONTROL | INTERNAL_REG_CURRENT |
 					 BIAS_BLOCK_ON | BANDGAP_ON);
···
 					 SETRD_MAX | POWER_MANAGE |
 					 TER_RESISTORS_ON);
 
-	dw_mipi_dsi_phy_write(dsi, 0x22, LOW_PROGRAM_EN |
-					 BIASEXTR_SEL(BIASEXTR_127_7));
-	dw_mipi_dsi_phy_write(dsi, 0x22, HIGH_PROGRAM_EN |
-					 BANDGAP_SEL(BANDGAP_96_10));
+	dw_mipi_dsi_phy_write(dsi, 0x60, TLP_PROGRAM_EN | ns2bc(dsi, 500));
+	dw_mipi_dsi_phy_write(dsi, 0x61, THS_PRE_PROGRAM_EN | ns2ui(dsi, 40));
+	dw_mipi_dsi_phy_write(dsi, 0x62, THS_ZERO_PROGRAM_EN | ns2bc(dsi, 300));
+	dw_mipi_dsi_phy_write(dsi, 0x63, THS_PRE_PROGRAM_EN | ns2ui(dsi, 100));
+	dw_mipi_dsi_phy_write(dsi, 0x64, BIT(5) | ns2bc(dsi, 100));
+	dw_mipi_dsi_phy_write(dsi, 0x65, BIT(5) | (ns2bc(dsi, 60) + 7));
 
-	dw_mipi_dsi_phy_write(dsi, 0x70, TLP_PROGRAM_EN | 0xf);
-	dw_mipi_dsi_phy_write(dsi, 0x71, THS_PRE_PROGRAM_EN | 0x55);
-	dw_mipi_dsi_phy_write(dsi, 0x72, THS_ZERO_PROGRAM_EN | 0xa);
+	dw_mipi_dsi_phy_write(dsi, 0x70, TLP_PROGRAM_EN | ns2bc(dsi, 500));
+	dw_mipi_dsi_phy_write(dsi, 0x71,
+			      THS_PRE_PROGRAM_EN | (ns2ui(dsi, 50) + 5));
+	dw_mipi_dsi_phy_write(dsi, 0x72,
+			      THS_ZERO_PROGRAM_EN | (ns2bc(dsi, 140) + 2));
+	dw_mipi_dsi_phy_write(dsi, 0x73,
+			      THS_PRE_PROGRAM_EN | (ns2ui(dsi, 60) + 8));
+	dw_mipi_dsi_phy_write(dsi, 0x74, BIT(5) | ns2bc(dsi, 100));
 
 	dsi_write(dsi, DSI_PHY_RSTZ, PHY_ENFORCEPLL | PHY_ENABLECLK |
 				     PHY_UNRSTZ | PHY_UNSHUTDOWNZ);
 
-
-	ret = readx_poll_timeout(readl, dsi->base + DSI_PHY_STATUS,
+	ret = readl_poll_timeout(dsi->base + DSI_PHY_STATUS,
 				 val, val & LOCK, 1000, PHY_STATUS_TIMEOUT_US);
 	if (ret < 0) {
 		dev_err(dsi->dev, "failed to wait for phy lock state\n");
-		return ret;
+		goto phy_init_end;
 	}
 
-	ret = readx_poll_timeout(readl, dsi->base + DSI_PHY_STATUS,
+	ret = readl_poll_timeout(dsi->base + DSI_PHY_STATUS,
 				 val, val & STOP_STATE_CLK_LANE, 1000,
 				 PHY_STATUS_TIMEOUT_US);
-	if (ret < 0) {
+	if (ret < 0)
 		dev_err(dsi->dev,
 			"failed to wait for phy clk lane stop state\n");
-		return ret;
-	}
+
+phy_init_end:
+	clk_disable_unprepare(dsi->phy_cfg_clk);
 
 	return ret;
 }
 
-static int dw_mipi_dsi_get_lane_bps(struct dw_mipi_dsi *dsi)
+static int dw_mipi_dsi_get_lane_bps(struct dw_mipi_dsi *dsi,
+				    struct drm_display_mode *mode)
 {
 	unsigned int i, pre;
 	unsigned long mpclk, pllref, tmp;
···
 		return bpp;
 	}
 
-	mpclk = DIV_ROUND_UP(dsi->mode->clock, MSEC_PER_SEC);
+	mpclk = DIV_ROUND_UP(mode->clock, MSEC_PER_SEC);
 	if (mpclk) {
-		/* take 1 / 0.9, since mbps must big than bandwidth of RGB */
-		tmp = mpclk * (bpp / dsi->lanes) * 10 / 9;
+		/* take 1 / 0.8, since mbps must be bigger than bandwidth of RGB */
+		tmp = mpclk * (bpp / dsi->lanes) * 10 / 8;
 		if (tmp < max_mbps)
 			target_mbps = tmp;
 		else
···
 	pllref = DIV_ROUND_UP(clk_get_rate(dsi->pllref_clk), USEC_PER_SEC);
 	tmp = pllref;
 
-	for (i = 1; i < 6; i++) {
+	/*
+	 * The limits on the PLL divisor are:
+	 *
+	 *	5MHz <= (pllref / n) <= 40MHz
+	 *
+	 * we walk over these values in decreasing order so that if we hit
+	 * an exact match for target_mbps it is more likely that "m" will be
+	 * even.
+	 *
+	 * TODO: ensure that "m" is even after this loop.
+	 */
+	for (i = pllref / 5; i > (pllref / 40); i--) {
 		pre = pllref / i;
 		if ((tmp > (target_mbps % pre)) && (target_mbps / pre < 512)) {
 			tmp = target_mbps % pre;
···
 
 	if (device->lanes > dsi->pdata->max_data_lanes) {
 		dev_err(dsi->dev, "the number of data lanes(%u) is too many\n",
-			device->lanes);
-		return -EINVAL;
-	}
-
-	if (!(device->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) ||
-	    !(device->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)) {
-		dev_err(dsi->dev, "device mode is unsupported\n");
+			device->lanes);
 		return -EINVAL;
 	}
 
 	dsi->lanes = device->lanes;
 	dsi->channel = device->channel;
 	dsi->format = device->format;
+	dsi->mode_flags = device->mode_flags;
 	dsi->panel = of_drm_find_panel(device->dev.of_node);
 	if (dsi->panel)
 		return drm_panel_attach(dsi->panel, &dsi->connector);
···
 	return 0;
 }
 
-static int dw_mipi_dsi_gen_pkt_hdr_write(struct dw_mipi_dsi *dsi, u32 val)
+static void dw_mipi_message_config(struct dw_mipi_dsi *dsi,
+				   const struct mipi_dsi_msg *msg)
+{
+	bool lpm = msg->flags & MIPI_DSI_MSG_USE_LPM;
+	u32 val = 0;
+
+	if (msg->flags & MIPI_DSI_MSG_REQ_ACK)
+		val |= EN_ACK_RQST;
+	if (lpm)
+		val |= CMD_MODE_ALL_LP;
+
+	dsi_write(dsi, DSI_LPCLK_CTRL, lpm ? 0 : PHY_TXREQUESTCLKHS);
+	dsi_write(dsi, DSI_CMD_MODE_CFG, val);
+}
+
+static int dw_mipi_dsi_gen_pkt_hdr_write(struct dw_mipi_dsi *dsi, u32 hdr_val)
 {
 	int ret;
+	u32 val, mask;
 
-	ret = readx_poll_timeout(readl, dsi->base + DSI_CMD_PKT_STATUS,
+	ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS,
 				 val, !(val & GEN_CMD_FULL), 1000,
 				 CMD_PKT_STATUS_TIMEOUT_US);
 	if (ret < 0) {
···
 		return ret;
 	}
 
-	dsi_write(dsi, DSI_GEN_HDR, val);
+	dsi_write(dsi, DSI_GEN_HDR, hdr_val);
 
-	ret = readx_poll_timeout(readl, dsi->base + DSI_CMD_PKT_STATUS,
-				 val, val & (GEN_CMD_EMPTY | GEN_PLD_W_EMPTY),
+	mask = GEN_CMD_EMPTY | GEN_PLD_W_EMPTY;
+	ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS,
+				 val, (val & mask) == mask,
 				 1000, CMD_PKT_STATUS_TIMEOUT_US);
 	if (ret < 0) {
 		dev_err(dsi->dev, "failed to write command FIFO\n");
···
 static int dw_mipi_dsi_dcs_short_write(struct dw_mipi_dsi *dsi,
 				       const struct mipi_dsi_msg *msg)
 {
-	const u16 *tx_buf = msg->tx_buf;
-	u32 val = GEN_HDATA(*tx_buf) | GEN_HTYPE(msg->type);
+	const u8 *tx_buf = msg->tx_buf;
+	u16 data = 0;
+	u32 val;
+
+	if (msg->tx_len > 0)
+		data |= tx_buf[0];
+	if (msg->tx_len > 1)
+		data |= tx_buf[1] << 8;
 
 	if (msg->tx_len > 2) {
 		dev_err(dsi->dev, "too long tx buf length %zu for short write\n",
···
 		return -EINVAL;
 	}
 
+	val = GEN_HDATA(data) | GEN_HTYPE(msg->type);
 	return dw_mipi_dsi_gen_pkt_hdr_write(dsi, val);
 }
 
 static int dw_mipi_dsi_dcs_long_write(struct dw_mipi_dsi *dsi,
 				      const struct mipi_dsi_msg *msg)
 {
-	const u32 *tx_buf = msg->tx_buf;
-	int len = msg->tx_len, pld_data_bytes = sizeof(*tx_buf), ret;
-	u32 val = GEN_HDATA(msg->tx_len) | GEN_HTYPE(msg->type);
-	u32 remainder = 0;
+ const u8 *tx_buf = msg->tx_buf; 590 + int len = msg->tx_len, pld_data_bytes = sizeof(u32), ret; 591 + u32 hdr_val = GEN_HDATA(msg->tx_len) | GEN_HTYPE(msg->type); 592 + u32 remainder; 593 + u32 val; 676 594 677 595 if (msg->tx_len < 3) { 678 596 dev_err(dsi->dev, "wrong tx buf length %zu for long write\n", ··· 684 598 685 599 while (DIV_ROUND_UP(len, pld_data_bytes)) { 686 600 if (len < pld_data_bytes) { 601 + remainder = 0; 687 602 memcpy(&remainder, tx_buf, len); 688 603 dsi_write(dsi, DSI_GEN_PLD_DATA, remainder); 689 604 len = 0; 690 605 } else { 691 - dsi_write(dsi, DSI_GEN_PLD_DATA, *tx_buf); 692 - tx_buf++; 606 + memcpy(&remainder, tx_buf, pld_data_bytes); 607 + dsi_write(dsi, DSI_GEN_PLD_DATA, remainder); 608 + tx_buf += pld_data_bytes; 693 609 len -= pld_data_bytes; 694 610 } 695 611 696 - ret = readx_poll_timeout(readl, dsi->base + DSI_CMD_PKT_STATUS, 612 + ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS, 697 613 val, !(val & GEN_PLD_W_FULL), 1000, 698 614 CMD_PKT_STATUS_TIMEOUT_US); 699 615 if (ret < 0) { ··· 705 617 } 706 618 } 707 619 708 - return dw_mipi_dsi_gen_pkt_hdr_write(dsi, val); 620 + return dw_mipi_dsi_gen_pkt_hdr_write(dsi, hdr_val); 709 621 } 710 622 711 623 static ssize_t dw_mipi_dsi_host_transfer(struct mipi_dsi_host *host, ··· 713 625 { 714 626 struct dw_mipi_dsi *dsi = host_to_dsi(host); 715 627 int ret; 628 + 629 + dw_mipi_message_config(dsi, msg); 716 630 717 631 switch (msg->type) { 718 632 case MIPI_DSI_DCS_SHORT_WRITE: ··· 726 636 ret = dw_mipi_dsi_dcs_long_write(dsi, msg); 727 637 break; 728 638 default: 729 - dev_err(dsi->dev, "unsupported message type\n"); 639 + dev_err(dsi->dev, "unsupported message type 0x%02x\n", 640 + msg->type); 730 641 ret = -EINVAL; 731 642 } 732 643 ··· 744 653 { 745 654 u32 val; 746 655 747 - val = VID_MODE_TYPE_BURST_SYNC_PULSES | ENABLE_LOW_POWER; 656 + val = ENABLE_LOW_POWER; 657 + 658 + if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) 659 + val |= VID_MODE_TYPE_BURST; 660 + else if 
(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE) 661 + val |= VID_MODE_TYPE_NON_BURST_SYNC_PULSES; 662 + else 663 + val |= VID_MODE_TYPE_NON_BURST_SYNC_EVENTS; 748 664 749 665 dsi_write(dsi, DSI_VID_MODE_CFG, val); 750 666 } ··· 767 669 dsi_write(dsi, DSI_PWR_UP, RESET); 768 670 dsi_write(dsi, DSI_MODE_CFG, ENABLE_VIDEO_MODE); 769 671 dw_mipi_dsi_video_mode_config(dsi); 672 + dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS); 770 673 dsi_write(dsi, DSI_PWR_UP, POWERUP); 771 674 } 772 675 } ··· 780 681 781 682 static void dw_mipi_dsi_init(struct dw_mipi_dsi *dsi) 782 683 { 684 + /* 685 + * The maximum permitted escape clock is 20MHz and it is derived from 686 + * lanebyteclk, which is running at "lane_mbps / 8". Thus we want: 687 + * 688 + * (lane_mbps >> 3) / esc_clk_division < 20 689 + * which is: 690 + * (lane_mbps >> 3) / 20 > esc_clk_division 691 + */ 692 + u32 esc_clk_division = (dsi->lane_mbps >> 3) / 20 + 1; 693 + 783 694 dsi_write(dsi, DSI_PWR_UP, RESET); 784 695 dsi_write(dsi, DSI_PHY_RSTZ, PHY_DISFORCEPLL | PHY_DISABLECLK 785 696 | PHY_RSTZ | PHY_SHUTDOWNZ); 786 697 dsi_write(dsi, DSI_CLKMGR_CFG, TO_CLK_DIVIDSION(10) | 787 - TX_ESC_CLK_DIVIDSION(7)); 788 - dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS); 698 + TX_ESC_CLK_DIVIDSION(esc_clk_division)); 789 699 } 790 700 791 701 static void dw_mipi_dsi_dpi_config(struct dw_mipi_dsi *dsi, ··· 817 709 break; 818 710 } 819 711 820 - if (!(mode->flags & DRM_MODE_FLAG_PVSYNC)) 712 + if (mode->flags & DRM_MODE_FLAG_NVSYNC) 821 713 val |= VSYNC_ACTIVE_LOW; 822 - if (!(mode->flags & DRM_MODE_FLAG_PHSYNC)) 714 + if (mode->flags & DRM_MODE_FLAG_NHSYNC) 823 715 val |= HSYNC_ACTIVE_LOW; 824 716 825 717 dsi_write(dsi, DSI_DPI_VCID, DPI_VID(dsi->channel)); ··· 844 736 { 845 737 dsi_write(dsi, DSI_TO_CNT_CFG, HSTX_TO_CNT(1000) | LPRX_TO_CNT(1000)); 846 738 dsi_write(dsi, DSI_BTA_TO_CNT, 0xd00); 847 - dsi_write(dsi, DSI_CMD_MODE_CFG, CMD_MODE_ALL_LP); 848 739 dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE); 849 740 } 
850 741 851 742 /* Get lane byte clock cycles. */ 852 743 static u32 dw_mipi_dsi_get_hcomponent_lbcc(struct dw_mipi_dsi *dsi, 744 + struct drm_display_mode *mode, 853 745 u32 hcomponent) 854 746 { 855 747 u32 frac, lbcc; 856 748 857 749 lbcc = hcomponent * dsi->lane_mbps * MSEC_PER_SEC / 8; 858 750 859 - frac = lbcc % dsi->mode->clock; 860 - lbcc = lbcc / dsi->mode->clock; 751 + frac = lbcc % mode->clock; 752 + lbcc = lbcc / mode->clock; 861 753 if (frac) 862 754 lbcc++; 863 755 864 756 return lbcc; 865 757 } 866 758 867 - static void dw_mipi_dsi_line_timer_config(struct dw_mipi_dsi *dsi) 759 + static void dw_mipi_dsi_line_timer_config(struct dw_mipi_dsi *dsi, 760 + struct drm_display_mode *mode) 868 761 { 869 762 u32 htotal, hsa, hbp, lbcc; 870 - struct drm_display_mode *mode = dsi->mode; 871 763 872 764 htotal = mode->htotal; 873 765 hsa = mode->hsync_end - mode->hsync_start; 874 766 hbp = mode->htotal - mode->hsync_end; 875 767 876 - lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, htotal); 768 + lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, htotal); 877 769 dsi_write(dsi, DSI_VID_HLINE_TIME, lbcc); 878 770 879 - lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, hsa); 771 + lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, hsa); 880 772 dsi_write(dsi, DSI_VID_HSA_TIME, lbcc); 881 773 882 - lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, hbp); 774 + lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, hbp); 883 775 dsi_write(dsi, DSI_VID_HBP_TIME, lbcc); 884 776 } 885 777 886 - static void dw_mipi_dsi_vertical_timing_config(struct dw_mipi_dsi *dsi) 778 + static void dw_mipi_dsi_vertical_timing_config(struct dw_mipi_dsi *dsi, 779 + struct drm_display_mode *mode) 887 780 { 888 781 u32 vactive, vsa, vfp, vbp; 889 - struct drm_display_mode *mode = dsi->mode; 890 782 891 783 vactive = mode->vdisplay; 892 784 vsa = mode->vsync_end - mode->vsync_start; ··· 922 814 dsi_write(dsi, DSI_INT_MSK1, 0); 923 815 } 924 816 925 - static void dw_mipi_dsi_encoder_mode_set(struct drm_encoder 
*encoder, 926 - struct drm_display_mode *mode, 927 - struct drm_display_mode *adjusted_mode) 817 + static void dw_mipi_dsi_encoder_disable(struct drm_encoder *encoder) 928 818 { 929 819 struct dw_mipi_dsi *dsi = encoder_to_dsi(encoder); 930 - int ret; 931 820 932 - dsi->mode = adjusted_mode; 933 - 934 - ret = dw_mipi_dsi_get_lane_bps(dsi); 935 - if (ret < 0) 821 + if (dsi->dpms_mode != DRM_MODE_DPMS_ON) 936 822 return; 937 823 938 824 if (clk_prepare_enable(dsi->pclk)) { ··· 934 832 return; 935 833 } 936 834 835 + drm_panel_disable(dsi->panel); 836 + 837 + dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_CMD_MODE); 838 + drm_panel_unprepare(dsi->panel); 839 + 840 + dw_mipi_dsi_disable(dsi); 841 + pm_runtime_put(dsi->dev); 842 + clk_disable_unprepare(dsi->pclk); 843 + dsi->dpms_mode = DRM_MODE_DPMS_OFF; 844 + } 845 + 846 + static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder) 847 + { 848 + struct dw_mipi_dsi *dsi = encoder_to_dsi(encoder); 849 + struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode; 850 + const struct dw_mipi_dsi_plat_data *pdata = dsi->pdata; 851 + int mux = drm_of_encoder_active_endpoint_id(dsi->dev->of_node, encoder); 852 + u32 val; 853 + int ret; 854 + 855 + ret = dw_mipi_dsi_get_lane_bps(dsi, mode); 856 + if (ret < 0) 857 + return; 858 + 859 + if (dsi->dpms_mode == DRM_MODE_DPMS_ON) 860 + return; 861 + 862 + if (clk_prepare_enable(dsi->pclk)) { 863 + dev_err(dsi->dev, "%s: Failed to enable pclk\n", __func__); 864 + return; 865 + } 866 + 867 + pm_runtime_get_sync(dsi->dev); 937 868 dw_mipi_dsi_init(dsi); 938 869 dw_mipi_dsi_dpi_config(dsi, mode); 939 870 dw_mipi_dsi_packet_handler_config(dsi); 940 871 dw_mipi_dsi_video_mode_config(dsi); 941 872 dw_mipi_dsi_video_packet_config(dsi, mode); 942 873 dw_mipi_dsi_command_mode_config(dsi); 943 - dw_mipi_dsi_line_timer_config(dsi); 944 - dw_mipi_dsi_vertical_timing_config(dsi); 874 + dw_mipi_dsi_line_timer_config(dsi, mode); 875 + dw_mipi_dsi_vertical_timing_config(dsi, mode); 945 876 
dw_mipi_dsi_dphy_timing_config(dsi); 946 877 dw_mipi_dsi_dphy_interface_config(dsi); 947 878 dw_mipi_dsi_clear_err(dsi); 948 - if (drm_panel_prepare(dsi->panel)) 949 - dev_err(dsi->dev, "failed to prepare panel\n"); 950 879 951 - clk_disable_unprepare(dsi->pclk); 952 - } 953 - 954 - static void dw_mipi_dsi_encoder_disable(struct drm_encoder *encoder) 955 - { 956 - struct dw_mipi_dsi *dsi = encoder_to_dsi(encoder); 957 - 958 - drm_panel_disable(dsi->panel); 959 - 960 - if (clk_prepare_enable(dsi->pclk)) { 961 - dev_err(dsi->dev, "%s: Failed to enable pclk\n", __func__); 962 - return; 963 - } 964 - 965 - dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_CMD_MODE); 966 - drm_panel_unprepare(dsi->panel); 967 - dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_VID_MODE); 968 - 969 - /* 970 - * This is necessary to make sure the peripheral will be driven 971 - * normally when the display is enabled again later. 972 - */ 973 - msleep(120); 974 - 975 - dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_CMD_MODE); 976 - dw_mipi_dsi_disable(dsi); 977 - clk_disable_unprepare(dsi->pclk); 978 - } 979 - 980 - static void dw_mipi_dsi_encoder_commit(struct drm_encoder *encoder) 981 - { 982 - struct dw_mipi_dsi *dsi = encoder_to_dsi(encoder); 983 - int mux = drm_of_encoder_active_endpoint_id(dsi->dev->of_node, encoder); 984 - u32 val; 985 - 986 - if (clk_prepare_enable(dsi->pclk)) { 987 - dev_err(dsi->dev, "%s: Failed to enable pclk\n", __func__); 988 - return; 989 - } 880 + if (pdata->grf_dsi0_mode_reg) 881 + regmap_write(dsi->grf_regmap, pdata->grf_dsi0_mode_reg, 882 + pdata->grf_dsi0_mode); 990 883 991 884 dw_mipi_dsi_phy_init(dsi); 992 - dw_mipi_dsi_wait_for_two_frames(dsi); 885 + dw_mipi_dsi_wait_for_two_frames(mode); 886 + 887 + dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_CMD_MODE); 888 + if (drm_panel_prepare(dsi->panel)) 889 + dev_err(dsi->dev, "failed to prepare panel\n"); 993 890 994 891 dw_mipi_dsi_set_mode(dsi, DW_MIPI_DSI_VID_MODE); 995 892 drm_panel_enable(dsi->panel); ··· 996 895 
clk_disable_unprepare(dsi->pclk); 997 896 998 897 if (mux) 999 - val = DSI0_SEL_VOP_LIT | (DSI0_SEL_VOP_LIT << 16); 898 + val = pdata->dsi0_en_bit | (pdata->dsi0_en_bit << 16); 1000 899 else 1001 - val = DSI0_SEL_VOP_LIT << 16; 900 + val = pdata->dsi0_en_bit << 16; 1002 901 1003 - regmap_write(dsi->grf_regmap, GRF_SOC_CON6, val); 902 + regmap_write(dsi->grf_regmap, pdata->grf_switch_reg, val); 1004 903 dev_dbg(dsi->dev, "vop %s output to dsi0\n", (mux) ? "LIT" : "BIG"); 904 + dsi->dpms_mode = DRM_MODE_DPMS_ON; 1005 905 } 1006 906 1007 907 static int ··· 1033 931 return 0; 1034 932 } 1035 933 1036 - static struct drm_encoder_helper_funcs 934 + static const struct drm_encoder_helper_funcs 1037 935 dw_mipi_dsi_encoder_helper_funcs = { 1038 - .commit = dw_mipi_dsi_encoder_commit, 1039 - .mode_set = dw_mipi_dsi_encoder_mode_set, 936 + .enable = dw_mipi_dsi_encoder_enable, 1040 937 .disable = dw_mipi_dsi_encoder_disable, 1041 938 .atomic_check = dw_mipi_dsi_encoder_atomic_check, 1042 939 }; 1043 940 1044 - static struct drm_encoder_funcs dw_mipi_dsi_encoder_funcs = { 941 + static const struct drm_encoder_funcs dw_mipi_dsi_encoder_funcs = { 1045 942 .destroy = drm_encoder_cleanup, 1046 943 }; 1047 944 ··· 1051 950 return drm_panel_get_modes(dsi->panel); 1052 951 } 1053 952 1054 - static enum drm_mode_status dw_mipi_dsi_mode_valid( 1055 - struct drm_connector *connector, 1056 - struct drm_display_mode *mode) 1057 - { 1058 - struct dw_mipi_dsi *dsi = con_to_dsi(connector); 1059 - 1060 - enum drm_mode_status mode_status = MODE_OK; 1061 - 1062 - if (dsi->pdata->mode_valid) 1063 - mode_status = dsi->pdata->mode_valid(connector, mode); 1064 - 1065 - return mode_status; 1066 - } 1067 - 1068 953 static struct drm_connector_helper_funcs dw_mipi_dsi_connector_helper_funcs = { 1069 954 .get_modes = dw_mipi_dsi_connector_get_modes, 1070 - .mode_valid = dw_mipi_dsi_mode_valid, 1071 955 }; 1072 956 1073 957 static void dw_mipi_dsi_drm_connector_destroy(struct drm_connector *connector) 
··· 1061 975 drm_connector_cleanup(connector); 1062 976 } 1063 977 1064 - static struct drm_connector_funcs dw_mipi_dsi_atomic_connector_funcs = { 978 + static const struct drm_connector_funcs dw_mipi_dsi_atomic_connector_funcs = { 1065 979 .dpms = drm_atomic_helper_connector_dpms, 1066 980 .fill_modes = drm_helper_probe_single_connector_modes, 1067 981 .destroy = dw_mipi_dsi_drm_connector_destroy, ··· 1071 985 }; 1072 986 1073 987 static int dw_mipi_dsi_register(struct drm_device *drm, 1074 - struct dw_mipi_dsi *dsi) 988 + struct dw_mipi_dsi *dsi) 1075 989 { 1076 990 struct drm_encoder *encoder = &dsi->encoder; 1077 991 struct drm_connector *connector = &dsi->connector; ··· 1092 1006 drm_encoder_helper_add(&dsi->encoder, 1093 1007 &dw_mipi_dsi_encoder_helper_funcs); 1094 1008 ret = drm_encoder_init(drm, &dsi->encoder, &dw_mipi_dsi_encoder_funcs, 1095 - DRM_MODE_ENCODER_DSI, NULL); 1009 + DRM_MODE_ENCODER_DSI, NULL); 1096 1010 if (ret) { 1097 1011 dev_err(dev, "Failed to initialize encoder with drm\n"); 1098 1012 return ret; 1099 1013 } 1100 1014 1101 1015 drm_connector_helper_add(connector, 1102 - &dw_mipi_dsi_connector_helper_funcs); 1016 + &dw_mipi_dsi_connector_helper_funcs); 1103 1017 1104 1018 drm_connector_init(drm, &dsi->connector, 1105 1019 &dw_mipi_dsi_atomic_connector_funcs, ··· 1123 1037 return 0; 1124 1038 } 1125 1039 1126 - static enum drm_mode_status rk3288_mipi_dsi_mode_valid( 1127 - struct drm_connector *connector, 1128 - struct drm_display_mode *mode) 1129 - { 1130 - /* 1131 - * The VID_PKT_SIZE field in the DSI_VID_PKT_CFG 1132 - * register is 11-bit. 1133 - */ 1134 - if (mode->hdisplay > 0x7ff) 1135 - return MODE_BAD_HVALUE; 1136 - 1137 - /* 1138 - * The V_ACTIVE_LINES field in the DSI_VTIMING_CFG 1139 - * register is 11-bit. 
1140 - */ 1141 - if (mode->vdisplay > 0x7ff) 1142 - return MODE_BAD_VVALUE; 1143 - 1144 - return MODE_OK; 1145 - } 1146 - 1147 1040 static struct dw_mipi_dsi_plat_data rk3288_mipi_dsi_drv_data = { 1041 + .dsi0_en_bit = RK3288_DSI0_SEL_VOP_LIT, 1042 + .dsi1_en_bit = RK3288_DSI1_SEL_VOP_LIT, 1043 + .grf_switch_reg = RK3288_GRF_SOC_CON6, 1148 1044 .max_data_lanes = 4, 1149 - .mode_valid = rk3288_mipi_dsi_mode_valid, 1045 + }; 1046 + 1047 + static struct dw_mipi_dsi_plat_data rk3399_mipi_dsi_drv_data = { 1048 + .dsi0_en_bit = RK3399_DSI0_SEL_VOP_LIT, 1049 + .dsi1_en_bit = RK3399_DSI1_SEL_VOP_LIT, 1050 + .grf_switch_reg = RK3399_GRF_SOC_CON19, 1051 + .grf_dsi0_mode = RK3399_GRF_DSI_MODE, 1052 + .grf_dsi0_mode_reg = RK3399_GRF_SOC_CON22, 1053 + .max_data_lanes = 4, 1150 1054 }; 1151 1055 1152 1056 static const struct of_device_id dw_mipi_dsi_dt_ids[] = { 1153 1057 { 1154 1058 .compatible = "rockchip,rk3288-mipi-dsi", 1155 1059 .data = &rk3288_mipi_dsi_drv_data, 1060 + }, { 1061 + .compatible = "rockchip,rk3399-mipi-dsi", 1062 + .data = &rk3399_mipi_dsi_drv_data, 1156 1063 }, 1157 1064 { /* sentinel */ } 1158 1065 }; 1159 1066 MODULE_DEVICE_TABLE(of, dw_mipi_dsi_dt_ids); 1160 1067 1161 1068 static int dw_mipi_dsi_bind(struct device *dev, struct device *master, 1162 - void *data) 1069 + void *data) 1163 1070 { 1164 1071 const struct of_device_id *of_id = 1165 1072 of_match_device(dw_mipi_dsi_dt_ids, dev); 1166 1073 const struct dw_mipi_dsi_plat_data *pdata = of_id->data; 1167 1074 struct platform_device *pdev = to_platform_device(dev); 1075 + struct reset_control *apb_rst; 1168 1076 struct drm_device *drm = data; 1169 1077 struct dw_mipi_dsi *dsi; 1170 1078 struct resource *res; ··· 1170 1090 1171 1091 dsi->dev = dev; 1172 1092 dsi->pdata = pdata; 1093 + dsi->dpms_mode = DRM_MODE_DPMS_OFF; 1173 1094 1174 1095 ret = rockchip_mipi_parse_dt(dsi); 1175 1096 if (ret) ··· 1198 1117 return ret; 1199 1118 } 1200 1119 1120 + /* 1121 + * Note that the reset was not defined in the 
initial device tree, so 1122 + * we have to be prepared for it not being found. 1123 + */ 1124 + apb_rst = devm_reset_control_get(dev, "apb"); 1125 + if (IS_ERR(apb_rst)) { 1126 + ret = PTR_ERR(apb_rst); 1127 + if (ret == -ENOENT) { 1128 + apb_rst = NULL; 1129 + } else { 1130 + dev_err(dev, "Unable to get reset control: %d\n", ret); 1131 + return ret; 1132 + } 1133 + } 1134 + 1135 + if (apb_rst) { 1136 + ret = clk_prepare_enable(dsi->pclk); 1137 + if (ret) { 1138 + dev_err(dev, "%s: Failed to enable pclk\n", __func__); 1139 + return ret; 1140 + } 1141 + 1142 + reset_control_assert(apb_rst); 1143 + usleep_range(10, 20); 1144 + reset_control_deassert(apb_rst); 1145 + 1146 + clk_disable_unprepare(dsi->pclk); 1147 + } 1148 + 1149 + dsi->phy_cfg_clk = devm_clk_get(dev, "phy_cfg"); 1150 + if (IS_ERR(dsi->phy_cfg_clk)) { 1151 + ret = PTR_ERR(dsi->phy_cfg_clk); 1152 + if (ret != -ENOENT) { 1153 + dev_err(dev, "Unable to get phy_cfg_clk: %d\n", ret); 1154 + return ret; 1155 + } 1156 + dsi->phy_cfg_clk = NULL; 1157 + dev_dbg(dev, "have not phy_cfg_clk\n"); 1158 + } 1159 + 1201 1160 ret = clk_prepare_enable(dsi->pllref_clk); 1202 1161 if (ret) { 1203 1162 dev_err(dev, "%s: Failed to enable pllref_clk\n", __func__); ··· 1250 1129 goto err_pllref; 1251 1130 } 1252 1131 1253 - dev_set_drvdata(dev, dsi); 1132 + pm_runtime_enable(dev); 1254 1133 1255 1134 dsi->dsi_host.ops = &dw_mipi_dsi_host_ops; 1256 1135 dsi->dsi_host.dev = dev; 1257 - return mipi_dsi_host_register(&dsi->dsi_host); 1136 + ret = mipi_dsi_host_register(&dsi->dsi_host); 1137 + if (ret) { 1138 + dev_err(dev, "Failed to register MIPI host: %d\n", ret); 1139 + goto err_cleanup; 1140 + } 1258 1141 1142 + if (!dsi->panel) { 1143 + ret = -EPROBE_DEFER; 1144 + goto err_mipi_dsi_host; 1145 + } 1146 + 1147 + dev_set_drvdata(dev, dsi); 1148 + return 0; 1149 + 1150 + err_mipi_dsi_host: 1151 + mipi_dsi_host_unregister(&dsi->dsi_host); 1152 + err_cleanup: 1153 + drm_encoder_cleanup(&dsi->encoder); 1154 + 
drm_connector_cleanup(&dsi->connector); 1259 1155 err_pllref: 1260 1156 clk_disable_unprepare(dsi->pllref_clk); 1261 1157 return ret; 1262 1158 } 1263 1159 1264 1160 static void dw_mipi_dsi_unbind(struct device *dev, struct device *master, 1265 - void *data) 1161 + void *data) 1266 1162 { 1267 1163 struct dw_mipi_dsi *dsi = dev_get_drvdata(dev); 1268 1164 1269 1165 mipi_dsi_host_unregister(&dsi->dsi_host); 1166 + pm_runtime_disable(dev); 1270 1167 clk_disable_unprepare(dsi->pllref_clk); 1271 1168 } 1272 1169
drivers/gpu/drm/rockchip/rockchip_drm_drv.c (-52)
···
 	iommu_detach_device(domain, dev);
 }
 
-int rockchip_register_crtc_funcs(struct drm_crtc *crtc,
-				 const struct rockchip_crtc_funcs *crtc_funcs)
-{
-	int pipe = drm_crtc_index(crtc);
-	struct rockchip_drm_private *priv = crtc->dev->dev_private;
-
-	if (pipe >= ROCKCHIP_MAX_CRTC)
-		return -EINVAL;
-
-	priv->crtc_funcs[pipe] = crtc_funcs;
-
-	return 0;
-}
-
-void rockchip_unregister_crtc_funcs(struct drm_crtc *crtc)
-{
-	int pipe = drm_crtc_index(crtc);
-	struct rockchip_drm_private *priv = crtc->dev->dev_private;
-
-	if (pipe >= ROCKCHIP_MAX_CRTC)
-		return;
-
-	priv->crtc_funcs[pipe] = NULL;
-}
-
-static int rockchip_drm_crtc_enable_vblank(struct drm_device *dev,
-					   unsigned int pipe)
-{
-	struct rockchip_drm_private *priv = dev->dev_private;
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
-
-	if (crtc && priv->crtc_funcs[pipe] &&
-	    priv->crtc_funcs[pipe]->enable_vblank)
-		return priv->crtc_funcs[pipe]->enable_vblank(crtc);
-
-	return 0;
-}
-
-static void rockchip_drm_crtc_disable_vblank(struct drm_device *dev,
-					     unsigned int pipe)
-{
-	struct rockchip_drm_private *priv = dev->dev_private;
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
-
-	if (crtc && priv->crtc_funcs[pipe] &&
-	    priv->crtc_funcs[pipe]->enable_vblank)
-		priv->crtc_funcs[pipe]->disable_vblank(crtc);
-}
-
 static int rockchip_drm_init_iommu(struct drm_device *drm_dev)
 {
 	struct rockchip_drm_private *private = drm_dev->dev_private;
···
 	.driver_features	= DRIVER_MODESET | DRIVER_GEM |
 				  DRIVER_PRIME | DRIVER_ATOMIC,
 	.lastclose		= rockchip_drm_lastclose,
-	.get_vblank_counter	= drm_vblank_no_hw_counter,
-	.enable_vblank		= rockchip_drm_crtc_enable_vblank,
-	.disable_vblank		= rockchip_drm_crtc_disable_vblank,
 	.gem_vm_ops		= &drm_gem_cma_vm_ops,
 	.gem_free_object_unlocked = rockchip_gem_free_object,
 	.dumb_create		= rockchip_gem_dumb_create,
drivers/gpu/drm/rockchip/rockchip_drm_drv.h (-14)
···
 struct drm_connector;
 struct iommu_domain;
 
-/*
- * Rockchip drm private crtc funcs.
- * @enable_vblank: enable crtc vblank irq.
- * @disable_vblank: disable crtc vblank irq.
- */
-struct rockchip_crtc_funcs {
-	int (*enable_vblank)(struct drm_crtc *crtc);
-	void (*disable_vblank)(struct drm_crtc *crtc);
-};
-
 struct rockchip_crtc_state {
 	struct drm_crtc_state base;
 	int output_type;
···
 struct rockchip_drm_private {
 	struct drm_fb_helper fbdev_helper;
 	struct drm_gem_object *fbdev_bo;
-	const struct rockchip_crtc_funcs *crtc_funcs[ROCKCHIP_MAX_CRTC];
 	struct drm_atomic_state *state;
 	struct iommu_domain *domain;
 	/* protect drm_mm on multi-threads */
···
 	spinlock_t psr_list_lock;
 };
 
-int rockchip_register_crtc_funcs(struct drm_crtc *crtc,
-				 const struct rockchip_crtc_funcs *crtc_funcs);
-void rockchip_unregister_crtc_funcs(struct drm_crtc *crtc);
 int rockchip_drm_dma_attach_device(struct drm_device *drm_dev,
 				   struct device *dev);
 void rockchip_drm_dma_detach_device(struct drm_device *drm_dev,
drivers/gpu/drm/rockchip/rockchip_drm_fb.c (+1 -1)
···
 	drm_atomic_helper_cleanup_planes(dev, state);
 }
 
-static struct drm_mode_config_helper_funcs rockchip_mode_config_helpers = {
+static const struct drm_mode_config_helper_funcs rockchip_mode_config_helpers = {
 	.atomic_commit_tail = rockchip_atomic_commit_tail,
 };
drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c (+3 -6)
···
 	if (IS_ERR(fbi)) {
 		dev_err(dev->dev, "Failed to create framebuffer info.\n");
 		ret = PTR_ERR(fbi);
-		goto err_rockchip_gem_free_object;
+		goto out;
 	}
 
 	helper->fb = rockchip_drm_framebuffer_init(dev, &mode_cmd,
···
 	if (IS_ERR(helper->fb)) {
 		dev_err(dev->dev, "Failed to allocate DRM framebuffer.\n");
 		ret = PTR_ERR(helper->fb);
-		goto err_release_fbi;
+		goto out;
 	}
 
 	fbi->par = helper;
···
 
 	return 0;
 
-err_release_fbi:
-	drm_fb_helper_release_fbi(helper);
-err_rockchip_gem_free_object:
+out:
 	rockchip_gem_free_object(&rk_obj->base);
 	return ret;
 }
···
 	helper = &private->fbdev_helper;
 
 	drm_fb_helper_unregister_fbi(helper);
-	drm_fb_helper_release_fbi(helper);
 
 	if (helper->fb)
 		drm_framebuffer_unreference(helper->fb);
drivers/gpu/drm/rockchip/rockchip_drm_vop.c (+6 -11)
···
 	spin_unlock_irqrestore(&vop->irq_lock, flags);
 }
 
-static const struct rockchip_crtc_funcs private_crtc_funcs = {
-	.enable_vblank = vop_crtc_enable_vblank,
-	.disable_vblank = vop_crtc_disable_vblank,
-};
-
 static bool vop_crtc_mode_fixup(struct drm_crtc *crtc,
 				const struct drm_display_mode *mode,
 				struct drm_display_mode *adjusted_mode)
···
 	}
 
 	pin_pol = BIT(DCLK_INVERT);
-	pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC) ?
-		   0 : BIT(HSYNC_POSITIVE);
-	pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC) ?
-		   0 : BIT(VSYNC_POSITIVE);
+	pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) ?
+		   BIT(HSYNC_POSITIVE) : 0;
+	pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) ?
+		   BIT(VSYNC_POSITIVE) : 0;
 	VOP_CTRL_SET(vop, pin_pol, pin_pol);
 
 	switch (s->output_type) {
···
 	.reset = vop_crtc_reset,
 	.atomic_duplicate_state = vop_crtc_duplicate_state,
 	.atomic_destroy_state = vop_crtc_destroy_state,
+	.enable_vblank = vop_crtc_enable_vblank,
+	.disable_vblank = vop_crtc_disable_vblank,
 };
 
 static void vop_fb_unref_worker(struct drm_flip_work *work, void *val)
···
 	init_completion(&vop->dsp_hold_completion);
 	init_completion(&vop->line_flag_completion);
 	crtc->port = port;
-	rockchip_register_crtc_funcs(crtc, &private_crtc_funcs);
 
 	return 0;
 
···
 	struct drm_device *drm_dev = vop->drm_dev;
 	struct drm_plane *plane, *tmp;
 
-	rockchip_unregister_crtc_funcs(crtc);
 	of_node_put(crtc->port);
 
 	/*
drivers/gpu/drm/selftests/test-drm_mm.c (+7 -5)
···
 	}
 
 	if (misalignment(node, alignment)) {
-		pr_err("node is misalinged, start %llx rem %llu, expected alignment %llu\n",
+		pr_err("node is misaligned, start %llx rem %llu, expected alignment %llu\n",
 		       node->start, misalignment(node, alignment), alignment);
 		ok = false;
 	}
···
 		n++;
 	}
 
-	drm_mm_for_each_node_in_range(node, mm, 0, start) {
-		if (node) {
+	if (start > 0) {
+		node = __drm_mm_interval_first(mm, 0, start - 1);
+		if (node->allocated) {
 			pr_err("node before start: node=%llx+%llu, start=%llx\n",
 			       node->start, node->size, start);
 			return false;
 		}
 	}
 
-	drm_mm_for_each_node_in_range(node, mm, end, U64_MAX) {
-		if (node) {
+	if (end < U64_MAX) {
+		node = __drm_mm_interval_first(mm, end, U64_MAX);
+		if (node->allocated) {
 			pr_err("node after end: node=%llx+%llu, end=%llx\n",
 			       node->start, node->size, end);
 			return false;
drivers/gpu/drm/shmobile/shmob_drm_crtc.c (+35 -16)
···
 	return 0;
 }
 
+static void shmob_drm_crtc_enable_vblank(struct shmob_drm_device *sdev,
+					 bool enable)
+{
+	unsigned long flags;
+	u32 ldintr;
+
+	/* Be careful not to acknowledge any pending interrupt. */
+	spin_lock_irqsave(&sdev->irq_lock, flags);
+	ldintr = lcdc_read(sdev, LDINTR) | LDINTR_STATUS_MASK;
+	if (enable)
+		ldintr |= LDINTR_VEE;
+	else
+		ldintr &= ~LDINTR_VEE;
+	lcdc_write(sdev, LDINTR, ldintr);
+	spin_unlock_irqrestore(&sdev->irq_lock, flags);
+}
+
+static int shmob_drm_enable_vblank(struct drm_crtc *crtc)
+{
+	struct shmob_drm_device *sdev = crtc->dev->dev_private;
+
+	shmob_drm_crtc_enable_vblank(sdev, true);
+
+	return 0;
+}
+
+static void shmob_drm_disable_vblank(struct drm_crtc *crtc)
+{
+	struct shmob_drm_device *sdev = crtc->dev->dev_private;
+
+	shmob_drm_crtc_enable_vblank(sdev, false);
+}
+
 static const struct drm_crtc_funcs crtc_funcs = {
 	.destroy = drm_crtc_cleanup,
 	.set_config = drm_crtc_helper_set_config,
 	.page_flip = shmob_drm_crtc_page_flip,
+	.enable_vblank = shmob_drm_enable_vblank,
+	.disable_vblank = shmob_drm_disable_vblank,
 };
 
 int shmob_drm_crtc_create(struct shmob_drm_device *sdev)
···
 	drm_encoder_helper_add(encoder, &encoder_helper_funcs);
 
 	return 0;
-}
-
-void shmob_drm_crtc_enable_vblank(struct shmob_drm_device *sdev, bool enable)
-{
-	unsigned long flags;
-	u32 ldintr;
-
-	/* Be careful not to acknowledge any pending interrupt. */
-	spin_lock_irqsave(&sdev->irq_lock, flags);
-	ldintr = lcdc_read(sdev, LDINTR) | LDINTR_STATUS_MASK;
-	if (enable)
-		ldintr |= LDINTR_VEE;
-	else
-		ldintr &= ~LDINTR_VEE;
-	lcdc_write(sdev, LDINTR, ldintr);
-	spin_unlock_irqrestore(&sdev->irq_lock, flags);
 }
 
 /* -----------------------------------------------------------------------------
drivers/gpu/drm/shmobile/shmob_drm_crtc.h (-1)
···
 };
 
 int shmob_drm_crtc_create(struct shmob_drm_device *sdev);
-void shmob_drm_crtc_enable_vblank(struct shmob_drm_device *sdev, bool enable);
 void shmob_drm_crtc_finish_page_flip(struct shmob_drm_crtc *scrtc);
 void shmob_drm_crtc_suspend(struct shmob_drm_crtc *scrtc);
 void shmob_drm_crtc_resume(struct shmob_drm_crtc *scrtc);
drivers/gpu/drm/shmobile/shmob_drm_drv.c (-20)
···
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 
-#include "shmob_drm_crtc.h"
 #include "shmob_drm_drv.h"
 #include "shmob_drm_kms.h"
 #include "shmob_drm_plane.h"
···
 	return IRQ_HANDLED;
 }
 
-static int shmob_drm_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct shmob_drm_device *sdev = dev->dev_private;
-
-	shmob_drm_crtc_enable_vblank(sdev, true);
-
-	return 0;
-}
-
-static void shmob_drm_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	struct shmob_drm_device *sdev = dev->dev_private;
-
-	shmob_drm_crtc_enable_vblank(sdev, false);
-}
-
 static const struct file_operations shmob_drm_fops = {
 	.owner = THIS_MODULE,
 	.open = drm_open,
···
 	.load = shmob_drm_load,
 	.unload = shmob_drm_unload,
 	.irq_handler = shmob_drm_irq,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank = shmob_drm_enable_vblank,
-	.disable_vblank = shmob_drm_disable_vblank,
 	.gem_free_object_unlocked = drm_gem_cma_free_object,
 	.gem_vm_ops = &drm_gem_cma_vm_ops,
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
drivers/gpu/drm/sti/sti_drv.c (+1 -3)
···
 	.dumb_destroy = drm_gem_dumb_destroy,
 	.fops = &sti_driver_fops,
 
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 	.enable_vblank = sti_crtc_enable_vblank,
 	.disable_vblank = sti_crtc_disable_vblank,
 
···
 
 	dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
 
-	of_platform_populate(node, NULL, NULL, dev);
+	devm_of_platform_populate(dev);
 
 	child_np = of_get_next_available_child(node, NULL);
 
···
 static int sti_platform_remove(struct platform_device *pdev)
 {
 	component_master_del(&pdev->dev, &sti_ops);
-	of_platform_depopulate(&pdev->dev);
 
 	return 0;
 }
+24
drivers/gpu/drm/sun4i/sun4i_crtc.c
···
 	.enable		= sun4i_crtc_enable,
 };

+static int sun4i_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	struct sun4i_crtc *scrtc = drm_crtc_to_sun4i_crtc(crtc);
+	struct sun4i_drv *drv = scrtc->drv;
+
+	DRM_DEBUG_DRIVER("Enabling VBLANK on crtc %p\n", crtc);
+
+	sun4i_tcon_enable_vblank(drv->tcon, true);
+
+	return 0;
+}
+
+static void sun4i_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+	struct sun4i_crtc *scrtc = drm_crtc_to_sun4i_crtc(crtc);
+	struct sun4i_drv *drv = scrtc->drv;
+
+	DRM_DEBUG_DRIVER("Disabling VBLANK on crtc %p\n", crtc);
+
+	sun4i_tcon_enable_vblank(drv->tcon, false);
+}
+
 static const struct drm_crtc_funcs sun4i_crtc_funcs = {
 	.atomic_destroy_state	= drm_atomic_helper_crtc_destroy_state,
 	.atomic_duplicate_state	= drm_atomic_helper_crtc_duplicate_state,
···
 	.page_flip		= drm_atomic_helper_page_flip,
 	.reset			= drm_atomic_helper_crtc_reset,
 	.set_config		= drm_atomic_helper_set_config,
+	.enable_vblank		= sun4i_crtc_enable_vblank,
+	.disable_vblank		= sun4i_crtc_disable_vblank,
 };

 struct sun4i_crtc *sun4i_crtc_init(struct drm_device *drm)
-28
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 24 24 #include "sun4i_drv.h" 25 25 #include "sun4i_framebuffer.h" 26 26 #include "sun4i_layer.h" 27 - #include "sun4i_tcon.h" 28 - 29 - static int sun4i_drv_enable_vblank(struct drm_device *drm, unsigned int pipe) 30 - { 31 - struct sun4i_drv *drv = drm->dev_private; 32 - struct sun4i_tcon *tcon = drv->tcon; 33 - 34 - DRM_DEBUG_DRIVER("Enabling VBLANK on pipe %d\n", pipe); 35 - 36 - sun4i_tcon_enable_vblank(tcon, true); 37 - 38 - return 0; 39 - } 40 - 41 - static void sun4i_drv_disable_vblank(struct drm_device *drm, unsigned int pipe) 42 - { 43 - struct sun4i_drv *drv = drm->dev_private; 44 - struct sun4i_tcon *tcon = drv->tcon; 45 - 46 - DRM_DEBUG_DRIVER("Disabling VBLANK on pipe %d\n", pipe); 47 - 48 - sun4i_tcon_enable_vblank(tcon, false); 49 - } 50 27 51 28 static const struct file_operations sun4i_drv_fops = { 52 29 .owner = THIS_MODULE, ··· 67 90 .gem_prime_mmap = drm_gem_cma_prime_mmap, 68 91 69 92 /* Frame Buffer Operations */ 70 - 71 - /* VBlank Operations */ 72 - .get_vblank_counter = drm_vblank_no_hw_counter, 73 - .enable_vblank = sun4i_drv_enable_vblank, 74 - .disable_vblank = sun4i_drv_disable_vblank, 75 93 }; 76 94 77 95 static void sun4i_remove_framebuffers(void)
+12 -3
drivers/gpu/drm/tegra/dc.c
···
 	return 0;
 }

-u32 tegra_dc_get_vblank_counter(struct tegra_dc *dc)
+static u32 tegra_dc_get_vblank_counter(struct drm_crtc *crtc)
 {
+	struct tegra_dc *dc = to_tegra_dc(crtc);
+
 	if (dc->syncpt)
 		return host1x_syncpt_read(dc->syncpt);
···
 	return drm_crtc_vblank_count(&dc->base);
 }

-void tegra_dc_enable_vblank(struct tegra_dc *dc)
+static int tegra_dc_enable_vblank(struct drm_crtc *crtc)
 {
+	struct tegra_dc *dc = to_tegra_dc(crtc);
 	unsigned long value, flags;

 	spin_lock_irqsave(&dc->lock, flags);
···
 	tegra_dc_writel(dc, value, DC_CMD_INT_MASK);

 	spin_unlock_irqrestore(&dc->lock, flags);
+
+	return 0;
 }

-void tegra_dc_disable_vblank(struct tegra_dc *dc)
+static void tegra_dc_disable_vblank(struct drm_crtc *crtc)
 {
+	struct tegra_dc *dc = to_tegra_dc(crtc);
 	unsigned long value, flags;

 	spin_lock_irqsave(&dc->lock, flags);
···
 	.reset = tegra_crtc_reset,
 	.atomic_duplicate_state = tegra_crtc_atomic_duplicate_state,
 	.atomic_destroy_state = tegra_crtc_atomic_destroy_state,
+	.get_vblank_counter = tegra_dc_get_vblank_counter,
+	.enable_vblank = tegra_dc_enable_vblank,
+	.disable_vblank = tegra_dc_disable_vblank,
 };

 static int tegra_dc_set_timings(struct tegra_dc *dc,
-38
drivers/gpu/drm/tegra/drm.c
···
 	.llseek = noop_llseek,
 };

-static u32 tegra_drm_get_vblank_counter(struct drm_device *drm,
-					unsigned int pipe)
-{
-	struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
-	struct tegra_dc *dc = to_tegra_dc(crtc);
-
-	if (!crtc)
-		return 0;
-
-	return tegra_dc_get_vblank_counter(dc);
-}
-
-static int tegra_drm_enable_vblank(struct drm_device *drm, unsigned int pipe)
-{
-	struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
-	struct tegra_dc *dc = to_tegra_dc(crtc);
-
-	if (!crtc)
-		return -ENODEV;
-
-	tegra_dc_enable_vblank(dc);
-
-	return 0;
-}
-
-static void tegra_drm_disable_vblank(struct drm_device *drm, unsigned int pipe)
-{
-	struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
-	struct tegra_dc *dc = to_tegra_dc(crtc);
-
-	if (crtc)
-		tegra_dc_disable_vblank(dc);
-}
-
 static void tegra_drm_preclose(struct drm_device *drm, struct drm_file *file)
 {
 	struct tegra_drm_file *fpriv = file->driver_priv;
···
 	.open = tegra_drm_open,
 	.preclose = tegra_drm_preclose,
 	.lastclose = tegra_drm_lastclose,
-
-	.get_vblank_counter = tegra_drm_get_vblank_counter,
-	.enable_vblank = tegra_drm_enable_vblank,
-	.disable_vblank = tegra_drm_disable_vblank,

 #if defined(CONFIG_DEBUG_FS)
 	.debugfs_init = tegra_debugfs_init,
-3
drivers/gpu/drm/tegra/drm.h
···
 };

 /* from dc.c */
-u32 tegra_dc_get_vblank_counter(struct tegra_dc *dc);
-void tegra_dc_enable_vblank(struct tegra_dc *dc);
-void tegra_dc_disable_vblank(struct tegra_dc *dc);
 void tegra_dc_commit(struct tegra_dc *dc);
 int tegra_dc_state_setup_clock(struct tegra_dc *dc,
 			       struct drm_crtc_state *crtc_state,
+1 -4
drivers/gpu/drm/tegra/fb.c
···
 		dev_err(drm->dev, "failed to allocate DRM framebuffer: %d\n",
 			err);
 		drm_gem_object_unreference_unlocked(&bo->gem);
-		goto release;
+		return PTR_ERR(fbdev->fb);
 	}

 	fb = &fbdev->fb->base;
···

 destroy:
 	drm_framebuffer_remove(fb);
-release:
-	drm_fb_helper_release_fbi(helper);
 	return err;
 }
···
 static void tegra_fbdev_exit(struct tegra_fbdev *fbdev)
 {
 	drm_fb_helper_unregister_fbi(&fbdev->base);
-	drm_fb_helper_release_fbi(&fbdev->base);

 	if (fbdev->fb)
 		drm_framebuffer_remove(&fbdev->fb->base);
+11
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
···
 	return 0;
 }

+static int tilcdc_crtc_enable_vblank(struct drm_crtc *crtc)
+{
+	return 0;
+}
+
+static void tilcdc_crtc_disable_vblank(struct drm_crtc *crtc)
+{
+}
+
 static const struct drm_crtc_funcs tilcdc_crtc_funcs = {
 	.destroy        = tilcdc_crtc_destroy,
 	.set_config     = drm_atomic_helper_set_config,
···
 	.reset		= drm_atomic_helper_crtc_reset,
 	.atomic_duplicate_state	= drm_atomic_helper_crtc_duplicate_state,
 	.atomic_destroy_state	= drm_atomic_helper_crtc_destroy_state,
+	.enable_vblank	= tilcdc_crtc_enable_vblank,
+	.disable_vblank	= tilcdc_crtc_disable_vblank,
 };

 static const struct drm_crtc_helper_funcs tilcdc_crtc_helper_funcs = {
-13
drivers/gpu/drm/tilcdc/tilcdc_drv.c
···
 	return tilcdc_crtc_irq(priv->crtc);
 }

-static int tilcdc_enable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	return 0;
-}
-
-static void tilcdc_disable_vblank(struct drm_device *dev, unsigned int pipe)
-{
-	return;
-}
-
 #if defined(CONFIG_DEBUG_FS)
 static const struct {
 	const char *name;
···
 			    DRIVER_PRIME | DRIVER_ATOMIC),
 	.lastclose          = tilcdc_lastclose,
 	.irq_handler        = tilcdc_irq,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
-	.enable_vblank      = tilcdc_enable_vblank,
-	.disable_vblank     = tilcdc_disable_vblank,
 	.gem_free_object_unlocked  = drm_gem_cma_free_object,
 	.gem_vm_ops         = &drm_gem_cma_vm_ops,
 	.dumb_create        = drm_gem_cma_dumb_create,
+1 -1
drivers/gpu/drm/tinydrm/core/tinydrm-helpers.c
···
 		ret = spi_sync(spi, &m);
 		if (ret)
 			return ret;
-	};
+	}

 	return 0;
 }
+1 -1
drivers/gpu/drm/ttm/ttm_bo.c
···
 	}

 	if (!type_found) {
-		printk(KERN_ERR TTM_PFX "No compatible memory type found.\n");
+		pr_err(TTM_PFX "No compatible memory type found\n");
 		return -EINVAL;
 	}
+1 -4
drivers/gpu/drm/udl/udl_fb.c
···

 	ret = udl_framebuffer_init(dev, &ufbdev->ufb, &mode_cmd, obj);
 	if (ret)
-		goto out_destroy_fbi;
+		goto out_gfree;

 	fb = &ufbdev->ufb.base;
···
 	       ufbdev->ufb.obj->vmapping);

 	return ret;
-out_destroy_fbi:
-	drm_fb_helper_release_fbi(helper);
 out_gfree:
 	drm_gem_object_unreference_unlocked(&ufbdev->ufb.obj->base);
 out:
···
 			      struct udl_fbdev *ufbdev)
 {
 	drm_fb_helper_unregister_fbi(&ufbdev->helper);
-	drm_fb_helper_release_fbi(&ufbdev->helper);
 	drm_fb_helper_fini(&ufbdev->helper);
 	drm_framebuffer_unregister_private(&ufbdev->ufb.base);
 	drm_framebuffer_cleanup(&ufbdev->ufb.base);
+19 -7
drivers/gpu/drm/vc4/vc4_bo.c
···
 *  published by the Free Software Foundation.
 */

-/* DOC: VC4 GEM BO management support.
+/**
+ * DOC: VC4 GEM BO management support
 *
 * The VC4 GPU architecture (both scanout and rendering) has direct
 * access to system memory with no MMU in between.  To support it, we
···

 /**
  * vc4_gem_create_object - Implementation of driver->gem_create_object.
+ * @dev: DRM device
+ * @size: Size in bytes of the memory the object will reference
  *
  * This lets the CMA helpers allocate object structs for us, and keep
  * our BO stats correct.
···
 }

 struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
-			     bool from_cache)
+			     bool allow_unzeroed)
 {
 	size_t size = roundup(unaligned_size, PAGE_SIZE);
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct drm_gem_cma_object *cma_obj;
+	struct vc4_bo *bo;

 	if (size == 0)
 		return ERR_PTR(-EINVAL);

 	/* First, try to get a vc4_bo from the kernel BO cache. */
-	if (from_cache) {
-		struct vc4_bo *bo = vc4_bo_get_from_cache(dev, size);
-
-		if (bo)
-			return bo;
+	bo = vc4_bo_get_from_cache(dev, size);
+	if (bo) {
+		if (!allow_unzeroed)
+			memset(bo->base.vaddr, 0, bo->base.base.size);
+		return bo;
 	}

 	cma_obj = drm_gem_cma_create(dev, size);
···

 	/* Don't cache if it was publicly named. */
 	if (gem_bo->name) {
+		vc4_bo_destroy(bo);
+		goto out;
+	}
+
+	/* If this object was partially constructed but CMA allocation
+	 * had failed, just free it.
+	 */
+	if (!bo->base.vaddr) {
 		vc4_bo_destroy(bo);
 		goto out;
 	}
+8 -7
drivers/gpu/drm/vc4/vc4_crtc.c
···
 *
 * In VC4, the Pixel Valve is what most closely corresponds to the
 * DRM's concept of a CRTC.  The PV generates video timings from the
- * output's clock plus its configuration.  It pulls scaled pixels from
+ * encoder's clock plus its configuration.  It pulls scaled pixels from
 * the HVS at that timing, and feeds it to the encoder.
 *
 * However, the DRM CRTC also collects the configuration of all the
- * DRM planes attached to it.  As a result, this file also manages
- * setup of the VC4 HVS's display elements on the CRTC.
+ * DRM planes attached to it.  As a result, the CRTC is also
+ * responsible for writing the display list for the HVS channel that
+ * the CRTC will use.
 *
 * The 2835 has 3 different pixel valves.  pv0 in the audio power
 * domain feeds DSI0 or DPI, while pv1 feeds DS1 or SMI.  pv2 in the
···
 	}
 }

-int vc4_enable_vblank(struct drm_device *dev, unsigned int crtc_id)
+static int vc4_enable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, crtc_id);
 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);

 	CRTC_WRITE(PV_INTEN, PV_INT_VFP_START);
···
 	return 0;
 }

-void vc4_disable_vblank(struct drm_device *dev, unsigned int crtc_id)
+static void vc4_disable_vblank(struct drm_crtc *crtc)
 {
-	struct drm_crtc *crtc = drm_crtc_from_index(dev, crtc_id);
 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);

 	CRTC_WRITE(PV_INTEN, 0);
···
 	.atomic_duplicate_state = vc4_crtc_duplicate_state,
 	.atomic_destroy_state = vc4_crtc_destroy_state,
 	.gamma_set = vc4_crtc_gamma_set,
+	.enable_vblank = vc4_enable_vblank,
+	.disable_vblank = vc4_disable_vblank,
 };

 static const struct drm_crtc_helper_funcs vc4_crtc_helper_funcs = {
+2 -14
drivers/gpu/drm/vc4/vc4_dpi.c
···
 * DOC: VC4 DPI module
 *
 * The VC4 DPI hardware supports MIPI DPI type 4 and Nokia ViSSI
- * signals, which are routed out to GPIO0-27 with the ALT2 function.
+ * signals.  On BCM2835, these can be routed out to GPIO0-27 with the
+ * ALT2 function.
 */

 #include "drm_atomic_helper.h"
···
 	DPI_REG(DPI_C),
 	DPI_REG(DPI_ID),
 };
-
-static void vc4_dpi_dump_regs(struct vc4_dpi *dpi)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(dpi_regs); i++) {
-		DRM_INFO("0x%04x (%s): 0x%08x\n",
-			 dpi_regs[i].reg, dpi_regs[i].name,
-			 DPI_READ(dpi_regs[i].reg));
-	}
-}

 #ifdef CONFIG_DEBUG_FS
 int vc4_dpi_debugfs_regs(struct seq_file *m, void *unused)
···
 	dpi->regs = vc4_ioremap_regs(pdev, 0);
 	if (IS_ERR(dpi->regs))
 		return PTR_ERR(dpi->regs);
-
-	vc4_dpi_dump_regs(dpi);

 	if (DPI_READ(DPI_ID) != DPI_ID_VALUE) {
 		dev_err(dev, "Port returned 0x%08x for ID instead of 0x%08x\n",
+16 -3
drivers/gpu/drm/vc4/vc4_drv.c
···
 *  published by the Free Software Foundation.
 */

+/**
+ * DOC: Broadcom VC4 Graphics Driver
+ *
+ * The Broadcom VideoCore 4 (present in the Raspberry Pi) contains a
+ * OpenGL ES 2.0-compatible 3D engine called V3D, and a highly
+ * configurable display output pipeline that supports HDMI, DSI, DPI,
+ * and Composite TV output.
+ *
+ * The 3D engine also has an interface for submitting arbitrary
+ * compute shader-style jobs using the same shader processor as is
+ * used for vertex and fragment shaders in GLES 2.0.  However, given
+ * that the hardware isn't able to expose any standard interfaces like
+ * OpenGL compute shaders or OpenCL, it isn't supported by this
+ * driver.
+ */
+
 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/device.h>
···
 	.irq_postinstall = vc4_irq_postinstall,
 	.irq_uninstall = vc4_irq_uninstall,

-	.enable_vblank = vc4_enable_vblank,
-	.disable_vblank = vc4_disable_vblank,
-	.get_vblank_counter = drm_vblank_no_hw_counter,
 	.get_scanout_position = vc4_crtc_get_scanoutpos,
 	.get_vblank_timestamp = vc4_crtc_get_vblank_timestamp,
-2
drivers/gpu/drm/vc4/vc4_drv.h
···

 /* vc4_crtc.c */
 extern struct platform_driver vc4_crtc_driver;
-int vc4_enable_vblank(struct drm_device *dev, unsigned int crtc_id);
-void vc4_disable_vblank(struct drm_device *dev, unsigned int crtc_id);
 bool vc4_event_pending(struct drm_crtc *crtc);
 int vc4_crtc_debugfs_regs(struct seq_file *m, void *arg);
 int vc4_crtc_get_scanoutpos(struct drm_device *dev, unsigned int crtc_id,
+7 -14
drivers/gpu/drm/vc4/vc4_dsi.c
···
 static struct drm_connector *vc4_dsi_connector_init(struct drm_device *dev,
						    struct vc4_dsi *dsi)
 {
-	struct drm_connector *connector = NULL;
+	struct drm_connector *connector;
 	struct vc4_dsi_connector *dsi_connector;
-	int ret = 0;

 	dsi_connector = devm_kzalloc(dev->dev, sizeof(*dsi_connector),
				     GFP_KERNEL);
-	if (!dsi_connector) {
-		ret = -ENOMEM;
-		goto fail;
-	}
+	if (!dsi_connector)
+		return ERR_PTR(-ENOMEM);
+
 	connector = &dsi_connector->base;

 	dsi_connector->dsi = dsi;
···
 	drm_mode_connector_attach_encoder(connector, dsi->encoder);

 	return connector;
-
-fail:
-	if (connector)
-		vc4_dsi_connector_destroy(connector);
-
-	return ERR_PTR(ret);
 }

 static void vc4_dsi_encoder_destroy(struct drm_encoder *encoder)
···
 }

 /**
- * Exposes clocks generated by the analog PHY that are consumed by
- * CPRMAN (clk-bcm2835.c).
+ * vc4_dsi_init_phy_clocks - Exposes clocks generated by the analog
+ * PHY that are consumed by CPRMAN (clk-bcm2835.c).
+ * @dsi: DSI encoder
 */
static int
vc4_dsi_init_phy_clocks(struct vc4_dsi *dsi)
+21 -5
drivers/gpu/drm/vc4/vc4_gem.c
···
 }

 /**
- * Looks up a bunch of GEM handles for BOs and stores the array for
- * use in the command validator that actually writes relocated
- * addresses pointing to them.
+ * vc4_cl_lookup_bos() - Sets up exec->bo[] with the GEM objects
+ * referenced by the job.
+ * @dev: DRM device
+ * @file_priv: DRM file for this fd
+ * @exec: V3D job being set up
+ *
+ * The command validator needs to reference BOs by their index within
+ * the submitted job's BO list.  This does the validation of the job's
+ * BO list and reference counting for the lifetime of the job.
+ *
+ * Note that this function doesn't need to unreference the BOs on
+ * failure, because that will happen at vc4_complete_exec() time.
 */
 static int
 vc4_cl_lookup_bos(struct drm_device *dev,
···
 }

 /**
- * Submits a command list to the VC4.
+ * vc4_submit_cl_ioctl() - Submits a job (frame) to the VC4.
+ * @dev: DRM device
+ * @data: ioctl argument
+ * @file_priv: DRM file for this fd
 *
- * This is what is called batchbuffer emitting on other hardware.
+ * This is the main entrypoint for userspace to submit a 3D frame to
+ * the GPU.  Userspace provides the binner command list (if
+ * applicable), and the kernel sets up the render command list to draw
+ * to the framebuffer described in the ioctl, using the command lists
+ * that the 3D engine's binner will produce.
 */
 int
 vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
+20 -3
drivers/gpu/drm/vc4/vc4_hdmi.c
···
 /**
 * DOC: VC4 Falcon HDMI module
 *
- * The HDMI core has a state machine and a PHY.  Most of the unit
- * operates off of the HSM clock from CPRMAN.  It also internally uses
- * the PLLH_PIX clock for the PHY.
+ * The HDMI core has a state machine and a PHY.  On BCM2835, most of
+ * the unit operates off of the HSM clock from CPRMAN.  It also
+ * internally uses the PLLH_PIX clock for the PHY.
+ *
+ * HDMI infoframes are kept within a small packet ram, where each
+ * packet can be individually enabled for including in a frame.
+ *
+ * HDMI audio is implemented entirely within the HDMI IP block.  A
+ * register in the HDMI encoder takes SPDIF frames from the DMA engine
+ * and transfers them over an internal MAI (multi-channel audio
+ * interconnect) bus to the encoder side for insertion into the video
+ * blank regions.
+ *
+ * The driver's HDMI encoder does not yet support power management.
+ * The HDMI encoder's power domain and the HSM/pixel clocks are kept
+ * continuously running, and only the HDMI logic and packet ram are
+ * powered off/on at disable/enable time.
+ *
+ * The driver does not yet support CEC control, though the HDMI
+ * encoder block has CEC support.
 */

 #include "drm_atomic_helper.h"
+6 -6
drivers/gpu/drm/vc4/vc4_hvs.c
···
 /**
 * DOC: VC4 HVS module.
 *
- * The HVS is the piece of hardware that does translation, scaling,
- * colorspace conversion, and compositing of pixels stored in
- * framebuffers into a FIFO of pixels going out to the Pixel Valve
- * (CRTC).  It operates at the system clock rate (the system audio
- * clock gate, specifically), which is much higher than the pixel
- * clock rate.
+ * The Hardware Video Scaler (HVS) is the piece of hardware that does
+ * translation, scaling, colorspace conversion, and compositing of
+ * pixels stored in framebuffers into a FIFO of pixels going out to
+ * the Pixel Valve (CRTC).  It operates at the system clock rate (the
+ * system audio clock gate, specifically), which is much higher than
+ * the pixel clock rate.
 *
 * There is a single global HVS, with multiple output FIFOs that can
 * be consumed by the PVs.  This file just manages the resources for
+2 -1
drivers/gpu/drm/vc4/vc4_irq.c
···
 * IN THE SOFTWARE.
 */

-/** DOC: Interrupt management for the V3D engine.
+/**
+ * DOC: Interrupt management for the V3D engine
 *
 * We have an interrupt status register (V3D_INTCTL) which reports
 * interrupts, and where writing 1 bits clears those interrupts.
+6 -6
drivers/gpu/drm/vc4/vc4_plane.c
···

 #include "vc4_drv.h"
 #include "vc4_regs.h"
+#include "drm_atomic.h"
 #include "drm_atomic_helper.h"
 #include "drm_fb_cma_helper.h"
 #include "drm_plane_helper.h"
···
 	if (!plane_state)
 		goto out;

-	/* If we're changing the cursor contents, do that in the
-	 * normal vblank-synced atomic path.
-	 */
-	if (fb != plane_state->fb)
-		goto out;
-
 	/* No configuring new scaling in the fast path. */
 	if (crtc_w != plane_state->crtc_w ||
 	    crtc_h != plane_state->crtc_h ||
 	    src_w != plane_state->src_w ||
 	    src_h != plane_state->src_h) {
 		goto out;
+	}
+
+	if (fb != plane_state->fb) {
+		drm_atomic_set_fb_for_plane(plane->state, fb);
+		vc4_plane_async_set_fb(plane, fb);
 	}

 	/* Set the cursor's position on the screen.  This is the
+4
drivers/gpu/drm/vc4/vc4_render_cl.c
···
 /**
 * DOC: Render command list generation
 *
+ * In the V3D hardware, render command lists are what load and store
+ * tiles of a framebuffer and optionally call out to binner-generated
+ * command lists to do the 3D drawing for that tile.
+ *
 * In the VC4 driver, render command list generation is performed by the
 * kernel instead of userspace.  We do this because validating a
 * user-submitted command list is hard to get right and has high CPU overhead,
+21 -13
drivers/gpu/drm/vc4/vc4_validate.c
···
 */

 /**
- * Command list validator for VC4.
+ * DOC: Command list validator for VC4.
 *
- * The VC4 has no IOMMU between it and system memory.  So, a user with
- * access to execute command lists could escalate privilege by
+ * Since the VC4 has no IOMMU between it and system memory, a user
+ * with access to execute command lists could escalate privilege by
 * overwriting system memory (drawing to it as a framebuffer) or
- * reading system memory it shouldn't (reading it as a texture, or
- * uniform data, or vertex data).
+ * reading system memory it shouldn't (reading it as a vertex buffer
+ * or index buffer)
 *
- * This validates command lists to ensure that all accesses are within
- * the bounds of the GEM objects referenced.  It explicitly whitelists
- * packets, and looks at the offsets in any address fields to make
- * sure they're constrained within the BOs they reference.
+ * We validate binner command lists to ensure that all accesses are
+ * within the bounds of the GEM objects referenced by the submitted
+ * job.  It explicitly whitelists packets, and looks at the offsets in
+ * any address fields to make sure they're contained within the BOs
+ * they reference.
 *
- * Note that because of the validation that's happening anyway, this
- * is where GEM relocation processing happens.
+ * Note that because CL validation is already reading the
+ * user-submitted CL and writing the validated copy out to the memory
+ * that the GPU will actually read, this is also where GEM relocation
+ * processing (turning BO references into actual addresses for the GPU
+ * to use) happens.
 */

 #include "uapi/drm/vc4_drm.h"
···
 }

 /**
- * The texture unit decides what tiling format a particular miplevel is using
- * this function, so we lay out our miptrees accordingly.
+ * size_is_lt() - Returns whether a miplevel of the given size will
+ * use the lineartile (LT) tiling layout rather than the normal T
+ * tiling layout.
+ * @width: Width in pixels of the miplevel
+ * @height: Height in pixels of the miplevel
+ * @cpp: Bytes per pixel of the pixel format
 */
 static bool
 size_is_lt(uint32_t width, uint32_t height, int cpp)
+13 -8
drivers/gpu/drm/vc4/vc4_validate_shaders.c
···
 /**
 * DOC: Shader validator for VC4.
 *
- * The VC4 has no IOMMU between it and system memory, so a user with
- * access to execute shaders could escalate privilege by overwriting
- * system memory (using the VPM write address register in the
- * general-purpose DMA mode) or reading system memory it shouldn't
- * (reading it as a texture, or uniform data, or vertex data).
+ * Since the VC4 has no IOMMU between it and system memory, a user
+ * with access to execute shaders could escalate privilege by
+ * overwriting system memory (using the VPM write address register in
+ * the general-purpose DMA mode) or reading system memory it shouldn't
+ * (reading it as a texture, uniform data, or direct-addressed TMU
+ * lookup).
 *
- * This walks over a shader BO, ensuring that its accesses are
- * appropriately bounded, and recording how many texture accesses are
- * made and where so that we can do relocations for them in the
+ * The shader validator walks over a shader's BO, ensuring that its
+ * accesses are appropriately bounded, and recording where texture
+ * accesses are made so that we can do relocations for them in the
 * uniform stream.
+ *
+ * Shader BO are immutable for their lifetimes (enforced by not
+ * allowing mmaps, GEM prime export, or rendering to from a CL), so
+ * this validation is only performed at BO creation time.
 */

 #include "vc4_drv.h"
+6
drivers/gpu/drm/vc4/vc4_vec.c
···

 /**
 * DOC: VC4 SDTV module
+ *
+ * The VEC encoder generates PAL or NTSC composite video output.
+ *
+ * TV mode selection is done by an atomic property on the encoder,
+ * because a drm_mode_modeinfo is insufficient to distinguish between
+ * PAL and PAL-M or NTSC and NTSC-J.
 */

 #include <drm/drm_atomic_helper.h>
+3 -7
drivers/gpu/drm/via/via_dmablit.c
···
 	vsg->pages = vzalloc(sizeof(struct page *) * vsg->num_pages);
 	if (NULL == vsg->pages)
 		return -ENOMEM;
-	down_read(&current->mm->mmap_sem);
-	ret = get_user_pages((unsigned long)xfer->mem_addr,
-			     vsg->num_pages,
-			     (vsg->direction == DMA_FROM_DEVICE) ? FOLL_WRITE : 0,
-			     vsg->pages, NULL);
-
-	up_read(&current->mm->mmap_sem);
+	ret = get_user_pages_unlocked((unsigned long)xfer->mem_addr,
+			vsg->num_pages, vsg->pages,
+			(vsg->direction == DMA_FROM_DEVICE) ? FOLL_WRITE : 0);
 	if (ret != vsg->num_pages) {
 		if (ret < 0)
 			return ret;
-8
drivers/gpu/drm/virtio/virtgpu_debugfs.c
···
 				 minor->debugfs_root, minor);
 	return 0;
 }
-
-void
-virtio_gpu_debugfs_takedown(struct drm_minor *minor)
-{
-	drm_debugfs_remove_files(virtio_gpu_debugfs_list,
-				 VIRTIO_GPU_DEBUGFS_ENTRIES,
-				 minor);
-}
+1 -1
drivers/gpu/drm/virtio/virtgpu_display.c
···
 	drm_atomic_helper_cleanup_planes(dev, state);
 }

-static struct drm_mode_config_helper_funcs virtio_mode_config_helpers = {
+static const struct drm_mode_config_helper_funcs virtio_mode_config_helpers = {
 	.atomic_commit_tail = vgdev_atomic_commit_tail,
 };
-1
drivers/gpu/drm/virtio/virtgpu_drv.c
···

 #if defined(CONFIG_DEBUG_FS)
 	.debugfs_init = virtio_gpu_debugfs_init,
-	.debugfs_cleanup = virtio_gpu_debugfs_takedown,
 #endif
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-1
drivers/gpu/drm/virtio/virtgpu_drv.h
···

 /* virgl debufs */
 int virtio_gpu_debugfs_init(struct drm_minor *minor);
-void virtio_gpu_debugfs_takedown(struct drm_minor *minor);

 #endif
+1 -4
drivers/gpu/drm/virtio/virtgpu_fb.c
···
 	ret = virtio_gpu_framebuffer_init(dev, &vfbdev->vgfb,
					  &mode_cmd, &obj->gem_base);
 	if (ret)
-		goto err_fb_init;
+		goto err_fb_alloc;

 	fb = &vfbdev->vgfb.base;
···
 	info->fix.mmio_len = 0;
 	return 0;

-err_fb_init:
-	drm_fb_helper_release_fbi(helper);
 err_fb_alloc:
 	virtio_gpu_cmd_resource_inval_backing(vgdev, resid);
 err_obj_attach:
···
 	struct virtio_gpu_framebuffer *vgfb = &vgfbdev->vgfb;

 	drm_fb_helper_unregister_fbi(&vgfbdev->helper);
-	drm_fb_helper_release_fbi(&vgfbdev->helper);

 	if (vgfb->obj)
 		vgfb->obj = NULL;
+1
drivers/gpu/drm/virtio/virtgpu_plane.c
···

 static void virtio_gpu_plane_destroy(struct drm_plane *plane)
 {
+	drm_plane_cleanup(plane);
 	kfree(plane);
 }
+2 -4
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
···

 	base = ttm_base_object_lookup(tfile, arg->handle);
 	if (unlikely(base == NULL)) {
-		printk(KERN_ERR "Wait invalid fence object handle "
-		       "0x%08lx.\n",
+		pr_err("Wait invalid fence object handle 0x%08lx\n",
		       (unsigned long)arg->handle);
 		return -EINVAL;
 	}
···

 	base = ttm_base_object_lookup(tfile, arg->handle);
 	if (unlikely(base == NULL)) {
-		printk(KERN_ERR "Fence signaled invalid fence object handle "
-		       "0x%08lx.\n",
+		pr_err("Fence signaled invalid fence object handle 0x%08lx\n",
		       (unsigned long)arg->handle);
 		return -EINVAL;
 	}
+1 -2
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
···
 static void vmw_gmrid_man_debug(struct ttm_mem_type_manager *man,
				const char *prefix)
 {
-	printk(KERN_INFO "%s: No debug info available for the GMR "
-	       "id manager.\n", prefix);
+	pr_info("%s: No debug info available for the GMR id manager\n", prefix);
 }

 const struct ttm_mem_type_manager_func vmw_gmrid_manager_func = {
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
···

 	base = ttm_base_object_lookup(tfile, handle);
 	if (unlikely(base == NULL)) {
-		printk(KERN_ERR "Invalid buffer object handle 0x%08lx.\n",
+		pr_err("Invalid buffer object handle 0x%08lx\n",
		       (unsigned long)handle);
 		return -ESRCH;
 	}

 	if (unlikely(ttm_base_object_type(base) != ttm_buffer_type)) {
 		ttm_base_object_unref(&base);
-		printk(KERN_ERR "Invalid buffer object handle 0x%08lx.\n",
+		pr_err("Invalid buffer object handle 0x%08lx\n",
		       (unsigned long)handle);
 		return -EINVAL;
 	}
-3
drivers/gpu/drm/zte/zx_drm_drv.c
··· 71 71 .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | 72 72 DRIVER_ATOMIC, 73 73 .lastclose = zx_drm_lastclose, 74 - .get_vblank_counter = drm_vblank_no_hw_counter, 75 - .enable_vblank = zx_vou_enable_vblank, 76 - .disable_vblank = zx_vou_disable_vblank, 77 74 .gem_free_object = drm_gem_cma_free_object, 78 75 .gem_vm_ops = &drm_gem_cma_vm_ops, 79 76 .dumb_create = drm_gem_cma_dumb_create,
+23 -38
drivers/gpu/drm/zte/zx_vou.c
··· 470 470 .atomic_flush = zx_crtc_atomic_flush, 471 471 }; 472 472 473 + static int zx_vou_enable_vblank(struct drm_crtc *crtc) 474 + { 475 + struct zx_crtc *zcrtc = to_zx_crtc(crtc); 476 + struct zx_vou_hw *vou = crtc_to_vou(crtc); 477 + u32 int_frame_mask = zcrtc->bits->int_frame_mask; 478 + 479 + zx_writel_mask(vou->timing + TIMING_INT_CTRL, int_frame_mask, 480 + int_frame_mask); 481 + 482 + return 0; 483 + } 484 + 485 + static void zx_vou_disable_vblank(struct drm_crtc *crtc) 486 + { 487 + struct zx_crtc *zcrtc = to_zx_crtc(crtc); 488 + struct zx_vou_hw *vou = crtc_to_vou(crtc); 489 + 490 + zx_writel_mask(vou->timing + TIMING_INT_CTRL, 491 + zcrtc->bits->int_frame_mask, 0); 492 + } 493 + 473 494 static const struct drm_crtc_funcs zx_crtc_funcs = { 474 495 .destroy = drm_crtc_cleanup, 475 496 .set_config = drm_atomic_helper_set_config, ··· 498 477 .reset = drm_atomic_helper_crtc_reset, 499 478 .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 500 479 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 480 + .enable_vblank = zx_vou_enable_vblank, 481 + .disable_vblank = zx_vou_disable_vblank, 501 482 }; 502 483 503 484 static int zx_crtc_init(struct drm_device *drm, struct zx_vou_hw *vou, ··· 574 551 vou->aux_crtc = zcrtc; 575 552 576 553 return 0; 577 - } 578 - 579 - int zx_vou_enable_vblank(struct drm_device *drm, unsigned int pipe) 580 - { 581 - struct drm_crtc *crtc; 582 - struct zx_crtc *zcrtc; 583 - struct zx_vou_hw *vou; 584 - u32 int_frame_mask; 585 - 586 - crtc = drm_crtc_from_index(drm, pipe); 587 - if (!crtc) 588 - return 0; 589 - 590 - vou = crtc_to_vou(crtc); 591 - zcrtc = to_zx_crtc(crtc); 592 - int_frame_mask = zcrtc->bits->int_frame_mask; 593 - 594 - zx_writel_mask(vou->timing + TIMING_INT_CTRL, int_frame_mask, 595 - int_frame_mask); 596 - 597 - return 0; 598 - } 599 - 600 - void zx_vou_disable_vblank(struct drm_device *drm, unsigned int pipe) 601 - { 602 - struct drm_crtc *crtc; 603 - struct zx_crtc *zcrtc; 604 - 
struct zx_vou_hw *vou; 605 - 606 - crtc = drm_crtc_from_index(drm, pipe); 607 - if (!crtc) 608 - return; 609 - 610 - vou = crtc_to_vou(crtc); 611 - zcrtc = to_zx_crtc(crtc); 612 - 613 - zx_writel_mask(vou->timing + TIMING_INT_CTRL, 614 - zcrtc->bits->int_frame_mask, 0); 615 554 } 616 555 617 556 void zx_vou_layer_enable(struct drm_plane *plane)
-3
drivers/gpu/drm/zte/zx_vou.h
··· 61 61 void zx_vou_config_dividers(struct drm_crtc *crtc, 62 62 struct vou_div_config *configs, int num); 63 63 64 - int zx_vou_enable_vblank(struct drm_device *drm, unsigned int pipe); 65 - void zx_vou_disable_vblank(struct drm_device *drm, unsigned int pipe); 66 - 67 64 void zx_vou_layer_enable(struct drm_plane *plane); 68 65 void zx_vou_layer_disable(struct drm_plane *plane); 69 66
+15 -13
drivers/gpu/vga/vga_switcheroo.c
··· 95 95 * @pwr_state: current power state 96 96 * @ops: client callbacks 97 97 * @id: client identifier. Determining the id requires the handler, 98 - * so gpus are initially assigned VGA_SWITCHEROO_UNKNOWN_ID 99 - * and later given their true id in vga_switcheroo_enable() 98 + * so gpus are initially assigned VGA_SWITCHEROO_UNKNOWN_ID 99 + * and later given their true id in vga_switcheroo_enable() 100 100 * @active: whether the outputs are currently switched to this client 101 101 * @driver_power_control: whether power state is controlled by the driver's 102 - * runtime pm. If true, writing ON and OFF to the vga_switcheroo debugfs 103 - * interface is a no-op so as not to interfere with runtime pm 102 + * runtime pm. If true, writing ON and OFF to the vga_switcheroo debugfs 103 + * interface is a no-op so as not to interfere with runtime pm 104 104 * @list: client list 105 105 * 106 106 * Registered client. A client can be either a GPU or an audio device on a GPU. ··· 126 126 /** 127 127 * struct vgasr_priv - vga_switcheroo private data 128 128 * @active: whether vga_switcheroo is enabled. 129 - * Prerequisite is the registration of two GPUs and a handler 129 + * Prerequisite is the registration of two GPUs and a handler 130 130 * @delayed_switch_active: whether a delayed switch is pending 131 131 * @delayed_client_id: client to which a delayed switch is pending 132 132 * @debugfs_root: directory for vga_switcheroo debugfs interface 133 133 * @switch_file: file for vga_switcheroo debugfs interface 134 134 * @registered_clients: number of registered GPUs 135 - * (counting only vga clients, not audio clients) 135 + * (counting only vga clients, not audio clients) 136 136 * @clients: list of registered clients 137 137 * @handler: registered handler 138 138 * @handler_flags: flags of registered handler ··· 214 214 * 215 215 * Return: 0 on success, -EINVAL if a handler was already registered. 
216 216 */ 217 - int vga_switcheroo_register_handler(const struct vga_switcheroo_handler *handler, 218 - enum vga_switcheroo_handler_flags_t handler_flags) 217 + int vga_switcheroo_register_handler( 218 + const struct vga_switcheroo_handler *handler, 219 + enum vga_switcheroo_handler_flags_t handler_flags) 219 220 { 220 221 mutex_lock(&vgasr_mutex); 221 222 if (vgasr_priv.handler) { ··· 306 305 * @pdev: client pci device 307 306 * @ops: client callbacks 308 307 * @driver_power_control: whether power state is controlled by the driver's 309 - * runtime pm 308 + * runtime pm 310 309 * 311 310 * Register vga client (GPU). Enable vga_switcheroo if another GPU and a 312 311 * handler have already registered. The power state of the client is assumed ··· 338 337 * Return: 0 on success, -ENOMEM on memory allocation error. 339 338 */ 340 339 int vga_switcheroo_register_audio_client(struct pci_dev *pdev, 341 - const struct vga_switcheroo_client_ops *ops, 342 - enum vga_switcheroo_client_id id) 340 + const struct vga_switcheroo_client_ops *ops, 341 + enum vga_switcheroo_client_id id) 343 342 { 344 343 return register_client(pdev, ops, id | ID_BIT_AUDIO, false, false); 345 344 } ··· 1085 1084 int ret; 1086 1085 1087 1086 /* we need to check if we have to switch back on the video 1088 - device so the audio device can come back */ 1087 + * device so the audio device can come back 1088 + */ 1089 1089 mutex_lock(&vgasr_mutex); 1090 1090 list_for_each_entry(client, &vgasr_priv.clients, list) { 1091 1091 if (PCI_SLOT(client->pdev->devfn) == PCI_SLOT(pdev->devfn) && ··· 1114 1112 1115 1113 /** 1116 1114 * vga_switcheroo_init_domain_pm_optimus_hdmi_audio() - helper for driver 1117 - * power control 1115 + * power control 1118 1116 * @dev: audio client device 1119 1117 * @domain: power domain 1120 1118 *
+71
drivers/of/platform.c
··· 571 571 } 572 572 EXPORT_SYMBOL_GPL(of_platform_depopulate); 573 573 574 + static void devm_of_platform_populate_release(struct device *dev, void *res) 575 + { 576 + of_platform_depopulate(*(struct device **)res); 577 + } 578 + 579 + /** 580 + * devm_of_platform_populate() - Populate platform_devices from device tree data 581 + * @dev: device that requested to populate from device tree data 582 + * 583 + * Similar to of_platform_populate(), but will automatically call 584 + * of_platform_depopulate() when the device is unbound from the bus. 585 + * 586 + * Returns 0 on success, < 0 on failure. 587 + */ 588 + int devm_of_platform_populate(struct device *dev) 589 + { 590 + struct device **ptr; 591 + int ret; 592 + 593 + if (!dev) 594 + return -EINVAL; 595 + 596 + ptr = devres_alloc(devm_of_platform_populate_release, 597 + sizeof(*ptr), GFP_KERNEL); 598 + if (!ptr) 599 + return -ENOMEM; 600 + 601 + ret = of_platform_populate(dev->of_node, NULL, NULL, dev); 602 + if (ret) { 603 + devres_free(ptr); 604 + } else { 605 + *ptr = dev; 606 + devres_add(dev, ptr); 607 + } 608 + 609 + return ret; 610 + } 611 + EXPORT_SYMBOL_GPL(devm_of_platform_populate); 612 + 613 + static int devm_of_platform_match(struct device *dev, void *res, void *data) 614 + { 615 + struct device **ptr = res; 616 + 617 + if (!ptr) { 618 + WARN_ON(!ptr); 619 + return 0; 620 + } 621 + 622 + return *ptr == data; 623 + } 624 + 625 + /** 626 + * devm_of_platform_depopulate() - Remove devices populated from device tree 627 + * @dev: device that requested to depopulate from device tree data 628 + * 629 + * Complementary to devm_of_platform_populate(), this function removes children 630 + * of the given device (and, recurrently, their children) that have been 631 + * created from their respective device tree nodes (and only those, 632 + * leaving others - eg. manually created - unharmed). 
633 + */ 634 + void devm_of_platform_depopulate(struct device *dev) 635 + { 636 + int ret; 637 + 638 + ret = devres_release(dev, devm_of_platform_populate_release, 639 + devm_of_platform_match, dev); 640 + 641 + WARN_ON(ret); 642 + } 643 + EXPORT_SYMBOL_GPL(devm_of_platform_depopulate); 644 + 574 645 #ifdef CONFIG_OF_DYNAMIC 575 646 static int of_platform_notify(struct notifier_block *nb, 576 647 unsigned long action, void *arg)
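The devm_of_platform_populate() helper above follows the standard devres pattern: do the work, allocate a resource node that remembers how to undo it, and let the driver core run the release callback at unbind. A minimal self-contained sketch of that pattern in plain C (hypothetical miniature types, not the real devres API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature devres: each resource carries a release callback
 * that runs automatically when the owning device is "unbound". */
struct devres {
	void (*release)(void *data);
	void *data;
	struct devres *next;
};

struct device {
	struct devres *res_head;
	int populated;		/* stands in for the created child devices */
};

static int devres_add(struct device *dev, void (*release)(void *), void *data)
{
	struct devres *dr = malloc(sizeof(*dr));

	if (!dr)
		return -1;
	dr->release = release;
	dr->data = data;
	dr->next = dev->res_head;
	dev->res_head = dr;
	return 0;
}

/* Release callback: mirrors devm_of_platform_populate_release() calling
 * of_platform_depopulate() on the saved device pointer. */
static void populate_release(void *data)
{
	struct device *dev = data;

	dev->populated = 0;
}

/* Mirrors devm_of_platform_populate(): register the undo action only if
 * the populate step itself succeeded, otherwise undo by hand. */
static int devm_populate(struct device *dev)
{
	dev->populated = 1;
	if (devres_add(dev, populate_release, dev) != 0) {
		dev->populated = 0;
		return -1;
	}
	return 0;
}

/* Driver-core side: run (and free) all resources on unbind. */
static void device_unbind(struct device *dev)
{
	struct devres *dr = dev->res_head;

	while (dr) {
		struct devres *next = dr->next;

		dr->release(dr->data);
		free(dr);
		dr = next;
	}
	dev->res_head = NULL;
}
```

The real helper uses devres_alloc()/devres_free() so the allocation can fail before any work is done; the sketch keeps only the ordering that matters, namely that the undo action is armed exactly when the populate succeeded.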
+78 -3
include/drm/drm_atomic.h
··· 138 138 139 139 struct __drm_planes_state { 140 140 struct drm_plane *ptr; 141 - struct drm_plane_state *state; 141 + struct drm_plane_state *state, *old_state, *new_state; 142 142 }; 143 143 144 144 struct __drm_crtcs_state { 145 145 struct drm_crtc *ptr; 146 - struct drm_crtc_state *state; 146 + struct drm_crtc_state *state, *old_state, *new_state; 147 147 struct drm_crtc_commit *commit; 148 148 s32 __user *out_fence_ptr; 149 149 unsigned last_vblank_count; ··· 151 151 152 152 struct __drm_connnectors_state { 153 153 struct drm_connector *ptr; 154 - struct drm_connector_state *state; 154 + struct drm_connector_state *state, *old_state, *new_state; 155 155 }; 156 156 157 157 /** ··· 398 398 (__i)++) \ 399 399 for_each_if (connector) 400 400 401 + #define for_each_oldnew_connector_in_state(__state, connector, old_connector_state, new_connector_state, __i) \ 402 + for ((__i) = 0; \ 403 + (__i) < (__state)->num_connector && \ 404 + ((connector) = (__state)->connectors[__i].ptr, \ 405 + (old_connector_state) = (__state)->connectors[__i].old_state, \ 406 + (new_connector_state) = (__state)->connectors[__i].new_state, 1); \ 407 + (__i)++) \ 408 + for_each_if (connector) 409 + 410 + #define for_each_old_connector_in_state(__state, connector, old_connector_state, __i) \ 411 + for ((__i) = 0; \ 412 + (__i) < (__state)->num_connector && \ 413 + ((connector) = (__state)->connectors[__i].ptr, \ 414 + (old_connector_state) = (__state)->connectors[__i].old_state, 1); \ 415 + (__i)++) \ 416 + for_each_if (connector) 417 + 418 + #define for_each_new_connector_in_state(__state, connector, new_connector_state, __i) \ 419 + for ((__i) = 0; \ 420 + (__i) < (__state)->num_connector && \ 421 + ((connector) = (__state)->connectors[__i].ptr, \ 422 + (new_connector_state) = (__state)->connectors[__i].new_state, 1); \ 423 + (__i)++) \ 424 + for_each_if (connector) 425 + 401 426 #define for_each_crtc_in_state(__state, crtc, crtc_state, __i) \ 402 427 for ((__i) = 0; \ 403 428 (__i) < 
(__state)->dev->mode_config.num_crtc && \ ··· 431 406 (__i)++) \ 432 407 for_each_if (crtc_state) 433 408 409 + #define for_each_oldnew_crtc_in_state(__state, crtc, old_crtc_state, new_crtc_state, __i) \ 410 + for ((__i) = 0; \ 411 + (__i) < (__state)->dev->mode_config.num_crtc && \ 412 + ((crtc) = (__state)->crtcs[__i].ptr, \ 413 + (old_crtc_state) = (__state)->crtcs[__i].old_state, \ 414 + (new_crtc_state) = (__state)->crtcs[__i].new_state, 1); \ 415 + (__i)++) \ 416 + for_each_if (crtc) 417 + 418 + #define for_each_old_crtc_in_state(__state, crtc, old_crtc_state, __i) \ 419 + for ((__i) = 0; \ 420 + (__i) < (__state)->dev->mode_config.num_crtc && \ 421 + ((crtc) = (__state)->crtcs[__i].ptr, \ 422 + (old_crtc_state) = (__state)->crtcs[__i].old_state, 1); \ 423 + (__i)++) \ 424 + for_each_if (crtc) 425 + 426 + #define for_each_new_crtc_in_state(__state, crtc, new_crtc_state, __i) \ 427 + for ((__i) = 0; \ 428 + (__i) < (__state)->dev->mode_config.num_crtc && \ 429 + ((crtc) = (__state)->crtcs[__i].ptr, \ 430 + (new_crtc_state) = (__state)->crtcs[__i].new_state, 1); \ 431 + (__i)++) \ 432 + for_each_if (crtc) 433 + 434 434 #define for_each_plane_in_state(__state, plane, plane_state, __i) \ 435 435 for ((__i) = 0; \ 436 436 (__i) < (__state)->dev->mode_config.num_total_plane && \ ··· 463 413 (plane_state) = (__state)->planes[__i].state, 1); \ 464 414 (__i)++) \ 465 415 for_each_if (plane_state) 416 + 417 + #define for_each_oldnew_plane_in_state(__state, plane, old_plane_state, new_plane_state, __i) \ 418 + for ((__i) = 0; \ 419 + (__i) < (__state)->dev->mode_config.num_total_plane && \ 420 + ((plane) = (__state)->planes[__i].ptr, \ 421 + (old_plane_state) = (__state)->planes[__i].old_state, \ 422 + (new_plane_state) = (__state)->planes[__i].new_state, 1); \ 423 + (__i)++) \ 424 + for_each_if (plane) 425 + 426 + #define for_each_old_plane_in_state(__state, plane, old_plane_state, __i) \ 427 + for ((__i) = 0; \ 428 + (__i) < (__state)->dev->mode_config.num_total_plane 
&& \ 429 + ((plane) = (__state)->planes[__i].ptr, \ 430 + (old_plane_state) = (__state)->planes[__i].old_state, 1); \ 431 + (__i)++) \ 432 + for_each_if (plane) 433 + 434 + #define for_each_new_plane_in_state(__state, plane, new_plane_state, __i) \ 435 + for ((__i) = 0; \ 436 + (__i) < (__state)->dev->mode_config.num_total_plane && \ 437 + ((plane) = (__state)->planes[__i].ptr, \ 438 + (new_plane_state) = (__state)->planes[__i].new_state, 1); \ 439 + (__i)++) \ 440 + for_each_if (plane) 466 441 467 442 /** 468 443 * drm_atomic_crtc_needs_modeset - compute combined modeset need
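The new for_each_oldnew_*_in_state() macros above resolve the old-vs-new confusion by reading the explicit old_state/new_state slots instead of the single state pointer. The comma-operator trick in the loop condition can be sketched standalone (hypothetical miniature types, and a plain `if` where the kernel uses for_each_if to avoid dangling-else issues):

```c
#include <assert.h>
#include <stddef.h>

struct plane { int id; };
struct plane_state { int fb; };

/* Mirrors struct __drm_planes_state: a pointer slot plus explicit
 * old/new state slots. */
struct planes_state {
	struct plane *ptr;
	struct plane_state *old_state, *new_state;
};

struct atomic_state {
	int num_plane;
	struct planes_state planes[4];
};

/* Same shape as for_each_oldnew_plane_in_state(): the comma expression
 * loads all three slots and always evaluates to 1, so the loop only
 * stops at num_plane; the trailing filter skips NULL plane slots. */
#define for_each_oldnew_plane(__state, plane, old_s, new_s, __i)	\
	for ((__i) = 0;							\
	     (__i) < (__state)->num_plane &&				\
	     ((plane) = (__state)->planes[__i].ptr,			\
	      (old_s) = (__state)->planes[__i].old_state,		\
	      (new_s) = (__state)->planes[__i].new_state, 1);		\
	     (__i)++)							\
		if (plane)

static int count_fb_changes(struct atomic_state *state)
{
	struct plane *plane;
	struct plane_state *old_s, *new_s;
	int i, changes = 0;

	for_each_oldnew_plane(state, plane, old_s, new_s, i) {
		if (old_s->fb != new_s->fb)
			changes++;
	}
	return changes;
}
```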
+2
include/drm/drm_atomic_helper.h
··· 105 105 int drm_atomic_helper_disable_all(struct drm_device *dev, 106 106 struct drm_modeset_acquire_ctx *ctx); 107 107 struct drm_atomic_state *drm_atomic_helper_suspend(struct drm_device *dev); 108 + int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state, 109 + struct drm_modeset_acquire_ctx *ctx); 108 110 int drm_atomic_helper_resume(struct drm_device *dev, 109 111 struct drm_atomic_state *state); 110 112
+63 -14
include/drm/drm_connector.h
··· 90 90 }; 91 91 92 92 /** 93 + * enum drm_link_status - connector's link_status property value 94 + * 95 + * This enum is used as the connector's link status property value. 96 + * It is set to the values defined in uapi. 97 + * 98 + * @DRM_LINK_STATUS_GOOD: DP Link is Good as a result of successful 99 + * link training 100 + * @DRM_LINK_STATUS_BAD: DP Link is BAD as a result of link training 101 + * failure 102 + */ 103 + enum drm_link_status { 104 + DRM_LINK_STATUS_GOOD = DRM_MODE_LINK_STATUS_GOOD, 105 + DRM_LINK_STATUS_BAD = DRM_MODE_LINK_STATUS_BAD, 106 + }; 107 + 108 + /** 93 109 * struct drm_display_info - runtime data about the connected sink 94 110 * 95 111 * Describes a given display (e.g. CRT or flat panel) and its limitations. For ··· 258 242 struct drm_crtc *crtc; 259 243 260 244 struct drm_encoder *best_encoder; 245 + 246 + /** 247 + * @link_status: Connector link_status to keep track of whether link is 248 + * GOOD or BAD to notify userspace if retraining is necessary. 249 + */ 250 + enum drm_link_status link_status; 261 251 262 252 struct drm_atomic_state *state; 263 253 ··· 817 795 } 818 796 819 797 /** 820 - * drm_connector_reference - incr the connector refcnt 821 - * @connector: connector 798 + * drm_connector_get - acquire a connector reference 799 + * @connector: DRM connector 822 800 * 823 801 * This function increments the connector's refcount. 824 802 */ 825 - static inline void drm_connector_reference(struct drm_connector *connector) 803 + static inline void drm_connector_get(struct drm_connector *connector) 826 804 { 827 - drm_mode_object_reference(&connector->base); 805 + drm_mode_object_get(&connector->base); 828 806 } 829 807 830 808 /** 831 - * drm_connector_unreference - unref a connector 832 - * @connector: connector to unref 809 + * drm_connector_put - release a connector reference 810 + * @connector: DRM connector 833 811 * 834 - * This function decrements the connector's refcount and frees it if it drops to zero. 
812 + * This function decrements the connector's reference count and frees the 813 + * object if the reference count drops to zero. 814 + */ 815 + static inline void drm_connector_put(struct drm_connector *connector) 816 + { 817 + drm_mode_object_put(&connector->base); 818 + } 819 + 820 + /** 821 + * drm_connector_reference - acquire a connector reference 822 + * @connector: DRM connector 823 + * 824 + * This is a compatibility alias for drm_connector_get() and should not be 825 + * used by new code. 826 + */ 827 + static inline void drm_connector_reference(struct drm_connector *connector) 828 + { 829 + drm_connector_get(connector); 830 + } 831 + 832 + /** 833 + * drm_connector_unreference - release a connector reference 834 + * @connector: DRM connector 835 + * 836 + * This is a compatibility alias for drm_connector_put() and should not be 837 + * used by new code. 835 838 */ 836 839 static inline void drm_connector_unreference(struct drm_connector *connector) 837 840 { 838 - drm_mode_object_unreference(&connector->base); 841 + drm_connector_put(connector); 839 842 } 840 843 841 844 const char *drm_get_connector_status_name(enum drm_connector_status status); ··· 884 837 int drm_mode_connector_set_tile_property(struct drm_connector *connector); 885 838 int drm_mode_connector_update_edid_property(struct drm_connector *connector, 886 839 const struct edid *edid); 840 + void drm_mode_connector_set_link_status_property(struct drm_connector *connector, 841 + uint64_t link_status); 887 842 888 843 /** 889 844 * struct drm_tile_group - Tile group metadata ··· 931 882 * 932 883 * This iterator tracks state needed to be able to walk the connector_list 933 884 * within struct drm_mode_config. Only use together with 934 - * drm_connector_list_iter_get(), drm_connector_list_iter_put() and 885 + * drm_connector_list_iter_begin(), drm_connector_list_iter_end() and 935 886 * drm_connector_list_iter_next() respectively the convenience macro 936 887 * drm_for_each_connector_iter(). 
937 888 */ ··· 941 892 struct drm_connector *conn; 942 893 }; 943 894 944 - void drm_connector_list_iter_get(struct drm_device *dev, 945 - struct drm_connector_list_iter *iter); 895 + void drm_connector_list_iter_begin(struct drm_device *dev, 896 + struct drm_connector_list_iter *iter); 946 897 struct drm_connector * 947 898 drm_connector_list_iter_next(struct drm_connector_list_iter *iter); 948 - void drm_connector_list_iter_put(struct drm_connector_list_iter *iter); 899 + void drm_connector_list_iter_end(struct drm_connector_list_iter *iter); 949 900 950 901 /** 951 902 * drm_for_each_connector_iter - connector_list iterator macro ··· 953 904 * @iter: &struct drm_connector_list_iter 954 905 * 955 906 * Note that @connector is only valid within the list body, if you want to use 956 - * @connector after calling drm_connector_list_iter_put() then you need to grab 957 - * your own reference first using drm_connector_reference(). 907 + * @connector after calling drm_connector_list_iter_end() then you need to grab 908 + * your own reference first using drm_connector_get(). 958 909 */ 959 910 #define drm_for_each_connector_iter(connector, iter) \ 960 911 while ((connector = drm_connector_list_iter_next(iter)))
+52 -1
include/drm/drm_crtc.h
··· 155 155 * Target vertical blank period when a page flip 156 156 * should take effect. 157 157 */ 158 - 159 158 u32 target_vblank; 159 + 160 + /** 161 + * @pageflip_flags: 162 + * 163 + * DRM_MODE_PAGE_FLIP_* flags, as passed to the page flip ioctl. 164 + * Zero in any other case. 165 + */ 166 + u32 pageflip_flags; 160 167 161 168 /** 162 169 * @event: ··· 608 601 */ 609 602 void (*atomic_print_state)(struct drm_printer *p, 610 603 const struct drm_crtc_state *state); 604 + 605 + /** 606 + * @get_vblank_counter: 607 + * 608 + * Driver callback for fetching a raw hardware vblank counter for the 609 + * CRTC. It's meant to be used by new drivers as the replacement of 610 + * &drm_driver.get_vblank_counter hook. 611 + * 612 + * This callback is optional. If a device doesn't have a hardware 613 + * counter, the driver can simply leave the hook as NULL. The DRM core 614 + * will account for missed vblank events while interrupts where disabled 615 + * based on system timestamps. 616 + * 617 + * Wraparound handling and loss of events due to modesetting is dealt 618 + * with in the DRM core code, as long as drivers call 619 + * drm_crtc_vblank_off() and drm_crtc_vblank_on() when disabling or 620 + * enabling a CRTC. 621 + * 622 + * Returns: 623 + * 624 + * Raw vblank counter value. 625 + */ 626 + u32 (*get_vblank_counter)(struct drm_crtc *crtc); 627 + 628 + /** 629 + * @enable_vblank: 630 + * 631 + * Enable vblank interrupts for the CRTC. It's meant to be used by 632 + * new drivers as the replacement of &drm_driver.enable_vblank hook. 633 + * 634 + * Returns: 635 + * 636 + * Zero on success, appropriate errno if the vblank interrupt cannot 637 + * be enabled. 638 + */ 639 + int (*enable_vblank)(struct drm_crtc *crtc); 640 + 641 + /** 642 + * @disable_vblank: 643 + * 644 + * Disable vblank interrupts for the CRTC. It's meant to be used by 645 + * new drivers as the replacement of &drm_driver.disable_vblank hook. 
646 + */ 647 + void (*disable_vblank)(struct drm_crtc *crtc); 611 648 }; 612 649 613 650 /**
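With @enable_vblank/@disable_vblank moved into drm_crtc_funcs, the core can prefer the per-CRTC hook and only fall back to the deprecated per-device one. A self-contained sketch of that dispatch, with hypothetical miniature types standing in for the DRM structs:

```c
#include <assert.h>
#include <stddef.h>

struct crtc;

struct crtc_funcs {
	int (*enable_vblank)(struct crtc *crtc);	/* new per-CRTC hook */
	void (*disable_vblank)(struct crtc *crtc);
};

struct crtc {
	const struct crtc_funcs *funcs;
	unsigned int pipe;
};

struct driver {
	/* deprecated per-device hook, keyed by pipe index */
	int (*enable_vblank)(struct driver *drv, unsigned int pipe);
};

/* Core-side dispatch: prefer &crtc_funcs.enable_vblank, fall back to
 * the legacy &driver.enable_vblank, else report no vblank support. */
static int core_enable_vblank(struct driver *drv, struct crtc *crtc)
{
	if (crtc->funcs && crtc->funcs->enable_vblank)
		return crtc->funcs->enable_vblank(crtc);
	if (drv->enable_vblank)
		return drv->enable_vblank(drv, crtc->pipe);
	return -22;	/* stands in for -EINVAL */
}

static int calls_new, calls_legacy;

static int new_hook(struct crtc *crtc)
{
	(void)crtc;
	calls_new++;
	return 0;
}

static int legacy_hook(struct driver *drv, unsigned int pipe)
{
	(void)drv;
	(void)pipe;
	calls_legacy++;
	return 0;
}
```

This is why the zx_vou conversion above can drop the pipe-to-crtc lookup entirely: the per-CRTC hook already receives the crtc it belongs to.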
+12 -4
include/drm/drm_drv.h
··· 120 120 * 121 121 * Driver callback for fetching a raw hardware vblank counter for the 122 122 * CRTC specified with the pipe argument. If a device doesn't have a 123 - * hardware counter, the driver can simply use 124 - * drm_vblank_no_hw_counter() function. The DRM core will account for 125 - * missed vblank events while interrupts where disabled based on system 126 - * timestamps. 123 + * hardware counter, the driver can simply leave the hook as NULL. 124 + * The DRM core will account for missed vblank events while interrupts 125 + * where disabled based on system timestamps. 127 126 * 128 127 * Wraparound handling and loss of events due to modesetting is dealt 129 128 * with in the DRM core code, as long as drivers call 130 129 * drm_crtc_vblank_off() and drm_crtc_vblank_on() when disabling or 131 130 * enabling a CRTC. 131 + * 132 + * This is deprecated and should not be used by new drivers. 133 + * Use &drm_crtc_funcs.get_vblank_counter instead. 132 134 * 133 135 * Returns: 134 136 * ··· 144 142 * Enable vblank interrupts for the CRTC specified with the pipe 145 143 * argument. 146 144 * 145 + * This is deprecated and should not be used by new drivers. 146 + * Use &drm_crtc_funcs.enable_vblank instead. 147 + * 147 148 * Returns: 148 149 * 149 150 * Zero on success, appropriate errno if the given @crtc's vblank ··· 159 154 * 160 155 * Disable vblank interrupts for the CRTC specified with the pipe 161 156 * argument. 157 + * 158 + * This is deprecated and should not be used by new drivers. 159 + * Use &drm_crtc_funcs.disable_vblank instead. 162 160 */ 163 161 void (*disable_vblank) (struct drm_device *dev, unsigned int pipe); 164 162
+4 -3
include/drm/drm_edid.h
··· 332 332 const struct drm_display_mode *mode); 333 333 334 334 #ifdef CONFIG_DRM_LOAD_EDID_FIRMWARE 335 - int drm_load_edid_firmware(struct drm_connector *connector); 335 + struct edid *drm_load_edid_firmware(struct drm_connector *connector); 336 336 #else 337 - static inline int drm_load_edid_firmware(struct drm_connector *connector) 337 + static inline struct edid * 338 + drm_load_edid_firmware(struct drm_connector *connector) 338 339 { 339 - return 0; 340 + return ERR_PTR(-ENOENT); 340 341 } 341 342 #endif 342 343
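The drm_load_edid_firmware() stub above now returns ERR_PTR(-ENOENT) instead of 0, so a caller can distinguish "no firmware EDID configured" from a valid EDID pointer with one IS_ERR() check. The ERR_PTR encoding packs a small negative errno into the top page of the address space, which is never a valid object address; a standalone sketch of the idea (hypothetical stub name, minimal re-implementation of the err.h helpers):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	/* A small negative errno wraps to the very top of the address
	 * space when cast to a pointer-sized unsigned value. */
	return (void *)(uintptr_t)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

struct edid { int dummy; };

/* Stub mirroring the !CONFIG_DRM_LOAD_EDID_FIRMWARE case above. */
static struct edid *load_edid_firmware_stub(void)
{
	return ERR_PTR(-2 /* stands in for -ENOENT */);
}
```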
+11 -5
include/drm/drm_fb_helper.h
··· 230 230 .fb_blank = drm_fb_helper_blank, \ 231 231 .fb_pan_display = drm_fb_helper_pan_display, \ 232 232 .fb_debug_enter = drm_fb_helper_debug_enter, \ 233 - .fb_debug_leave = drm_fb_helper_debug_leave 233 + .fb_debug_leave = drm_fb_helper_debug_leave, \ 234 + .fb_ioctl = drm_fb_helper_ioctl 234 235 235 236 #ifdef CONFIG_DRM_FBDEV_EMULATION 236 237 void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper, ··· 250 249 251 250 struct fb_info *drm_fb_helper_alloc_fbi(struct drm_fb_helper *fb_helper); 252 251 void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper); 253 - void drm_fb_helper_release_fbi(struct drm_fb_helper *fb_helper); 254 252 void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helper, 255 253 uint32_t fb_width, uint32_t fb_height); 256 254 void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch, ··· 284 284 bool suspend); 285 285 286 286 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info); 287 + 288 + int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd, 289 + unsigned long arg); 287 290 288 291 int drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper); 289 292 int drm_fb_helper_initial_config(struct drm_fb_helper *fb_helper, int bpp_sel); ··· 357 354 static inline void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper) 358 355 { 359 356 } 360 - static inline void drm_fb_helper_release_fbi(struct drm_fb_helper *fb_helper) 361 - { 362 - } 363 357 364 358 static inline void drm_fb_helper_fill_var(struct fb_info *info, 365 359 struct drm_fb_helper *fb_helper, ··· 371 371 372 372 static inline int drm_fb_helper_setcmap(struct fb_cmap *cmap, 373 373 struct fb_info *info) 374 + { 375 + return 0; 376 + } 377 + 378 + static inline int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd, 379 + unsigned long arg) 374 380 { 375 381 return 0; 376 382 }
+38 -13
include/drm/drm_framebuffer.h
··· 101 101 * cleanup (like releasing the reference(s) on the backing GEM bo(s)) 102 102 * should be deferred. In cases like this, the driver would like to 103 103 * hold a ref to the fb even though it has already been removed from 104 - * userspace perspective. See drm_framebuffer_reference() and 105 - * drm_framebuffer_unreference(). 104 + * userspace perspective. See drm_framebuffer_get() and 105 + * drm_framebuffer_put(). 106 106 * 107 107 * The refcount is stored inside the mode object @base. 108 108 */ ··· 204 204 void drm_framebuffer_unregister_private(struct drm_framebuffer *fb); 205 205 206 206 /** 207 - * drm_framebuffer_reference - incr the fb refcnt 208 - * @fb: framebuffer 207 + * drm_framebuffer_get - acquire a framebuffer reference 208 + * @fb: DRM framebuffer 209 209 * 210 - * This functions increments the fb's refcount. 210 + * This function increments the framebuffer's reference count. 211 211 */ 212 - static inline void drm_framebuffer_reference(struct drm_framebuffer *fb) 212 + static inline void drm_framebuffer_get(struct drm_framebuffer *fb) 213 213 { 214 - drm_mode_object_reference(&fb->base); 214 + drm_mode_object_get(&fb->base); 215 215 } 216 216 217 217 /** 218 - * drm_framebuffer_unreference - unref a framebuffer 219 - * @fb: framebuffer to unref 218 + * drm_framebuffer_put - release a framebuffer reference 219 + * @fb: DRM framebuffer 220 220 * 221 - * This functions decrements the fb's refcount and frees it if it drops to zero. 221 + * This function decrements the framebuffer's reference count and frees the 222 + * framebuffer if the reference count drops to zero. 223 + */ 224 + static inline void drm_framebuffer_put(struct drm_framebuffer *fb) 225 + { 226 + drm_mode_object_put(&fb->base); 227 + } 228 + 229 + /** 230 + * drm_framebuffer_reference - acquire a framebuffer reference 231 + * @fb: DRM framebuffer 232 + * 233 + * This is a compatibility alias for drm_framebuffer_get() and should not be 234 + * used by new code. 
235 + */ 236 + static inline void drm_framebuffer_reference(struct drm_framebuffer *fb) 237 + { 238 + drm_framebuffer_get(fb); 239 + } 240 + 241 + /** 242 + * drm_framebuffer_unreference - release a framebuffer reference 243 + * @fb: DRM framebuffer 244 + * 245 + * This is a compatibility alias for drm_framebuffer_put() and should not be 246 + * used by new code. 222 247 */ 223 248 static inline void drm_framebuffer_unreference(struct drm_framebuffer *fb) 224 249 { 225 - drm_mode_object_unreference(&fb->base); 250 + drm_framebuffer_put(fb); 226 251 } 227 252 228 253 /** ··· 273 248 struct drm_framebuffer *fb) 274 249 { 275 250 if (fb) 276 - drm_framebuffer_reference(fb); 251 + drm_framebuffer_get(fb); 277 252 if (*p) 278 - drm_framebuffer_unreference(*p); 253 + drm_framebuffer_put(*p); 279 254 *p = fb; 280 255 } 281 256
+64 -16
include/drm/drm_gem.h
··· 48 48 * 49 49 * Reference count of this object 50 50 * 51 - * Please use drm_gem_object_reference() to acquire and 52 - * drm_gem_object_unreference() or drm_gem_object_unreference_unlocked() 53 - * to release a reference to a GEM buffer object. 51 + * Please use drm_gem_object_get() to acquire and drm_gem_object_put() 52 + * or drm_gem_object_put_unlocked() to release a reference to a GEM 53 + * buffer object. 54 54 */ 55 55 struct kref refcount; 56 56 ··· 187 187 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 188 188 189 189 /** 190 - * drm_gem_object_reference - acquire a GEM BO reference 190 + * drm_gem_object_get - acquire a GEM buffer object reference 191 191 * @obj: GEM buffer object 192 192 * 193 - * This acquires additional reference to @obj. It is illegal to call this 194 - * without already holding a reference. No locks required. 193 + * This function acquires an additional reference to @obj. It is illegal to 194 + * call this without already holding a reference. No locks required. 195 195 */ 196 - static inline void 197 - drm_gem_object_reference(struct drm_gem_object *obj) 196 + static inline void drm_gem_object_get(struct drm_gem_object *obj) 198 197 { 199 198 kref_get(&obj->refcount); 200 199 } 201 200 202 201 /** 203 - * __drm_gem_object_unreference - raw function to release a GEM BO reference 202 + * __drm_gem_object_put - raw function to release a GEM buffer object reference 204 203 * @obj: GEM buffer object 205 204 * 206 205 * This function is meant to be used by drivers which are not encumbered with 207 206 * &drm_device.struct_mutex legacy locking and which are using the 208 207 * gem_free_object_unlocked callback. It avoids all the locking checks and 209 - * locking overhead of drm_gem_object_unreference() and 210 - * drm_gem_object_unreference_unlocked(). 208 + * locking overhead of drm_gem_object_put() and drm_gem_object_put_unlocked(). 211 209 * 212 210 * Drivers should never call this directly in their code. 
Instead they should 213 - * wrap it up into a ``driver_gem_object_unreference(struct driver_gem_object 214 - * *obj)`` wrapper function, and use that. Shared code should never call this, to 211 + * wrap it up into a ``driver_gem_object_put(struct driver_gem_object *obj)`` 212 + * wrapper function, and use that. Shared code should never call this, to 215 213 * avoid breaking drivers by accident which still depend upon 216 214 * &drm_device.struct_mutex locking. 217 215 */ 218 216 static inline void 219 - __drm_gem_object_unreference(struct drm_gem_object *obj) 217 + __drm_gem_object_put(struct drm_gem_object *obj) 220 218 { 221 219 kref_put(&obj->refcount, drm_gem_object_free); 222 220 } 223 221 224 - void drm_gem_object_unreference_unlocked(struct drm_gem_object *obj); 225 - void drm_gem_object_unreference(struct drm_gem_object *obj); 222 + void drm_gem_object_put_unlocked(struct drm_gem_object *obj); 223 + void drm_gem_object_put(struct drm_gem_object *obj); 224 + 225 + /** 226 + * drm_gem_object_reference - acquire a GEM buffer object reference 227 + * @obj: GEM buffer object 228 + * 229 + * This is a compatibility alias for drm_gem_object_get() and should not be 230 + * used by new code. 231 + */ 232 + static inline void drm_gem_object_reference(struct drm_gem_object *obj) 233 + { 234 + drm_gem_object_get(obj); 235 + } 236 + 237 + /** 238 + * __drm_gem_object_unreference - raw function to release a GEM buffer object 239 + * reference 240 + * @obj: GEM buffer object 241 + * 242 + * This is a compatibility alias for __drm_gem_object_put() and should not be 243 + * used by new code. 
244 + */ 245 + static inline void __drm_gem_object_unreference(struct drm_gem_object *obj) 246 + { 247 + __drm_gem_object_put(obj); 248 + } 249 + 250 + /** 251 + * drm_gem_object_unreference_unlocked - release a GEM buffer object reference 252 + * @obj: GEM buffer object 253 + * 254 + * This is a compatibility alias for drm_gem_object_put_unlocked() and should 255 + * not be used by new code. 256 + */ 257 + static inline void 258 + drm_gem_object_unreference_unlocked(struct drm_gem_object *obj) 259 + { 260 + drm_gem_object_put_unlocked(obj); 261 + } 262 + 263 + /** 264 + * drm_gem_object_unreference - release a GEM buffer object reference 265 + * @obj: GEM buffer object 266 + * 267 + * This is a compatibility alias for drm_gem_object_put() and should not be 268 + * used by new code. 269 + */ 270 + static inline void drm_gem_object_unreference(struct drm_gem_object *obj) 271 + { 272 + drm_gem_object_put(obj); 273 + } 226 274 227 275 int drm_gem_handle_create(struct drm_file *file_priv, 228 276 struct drm_gem_object *obj,
include/drm/drm_irq.h (-1)
··· 152 152 void drm_crtc_vblank_on(struct drm_crtc *crtc); 153 153 void drm_vblank_cleanup(struct drm_device *dev); 154 154 u32 drm_accurate_vblank_count(struct drm_crtc *crtc); 155 - u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe); 156 155 157 156 int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, 158 157 unsigned int pipe, int *max_error,
include/drm/drm_mm.h (+4 -1)
··· 460 460 * but using the internal interval tree to accelerate the search for the 461 461 * starting node, and so not safe against removal of elements. It assumes 462 462 * that @end is within (or is the upper limit of) the drm_mm allocator. 463 + * If [@start, @end] are beyond the range of the drm_mm, the iterator may walk 464 + * over the special _unallocated_ &drm_mm.head_node, and may even continue 465 + * indefinitely. 463 466 */ 464 467 #define drm_mm_for_each_node_in_range(node__, mm__, start__, end__) \ 465 468 for (node__ = __drm_mm_interval_first((mm__), (start__), (end__)-1); \ 466 - node__ && node__->start < (end__); \ 469 + node__->start < (end__); \ 467 470 node__ = list_next_entry(node__, node_list)) 468 471 469 472 void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
include/drm/drm_mode_config.h (+9 -4)
··· 267 267 * passed-in &drm_atomic_state. This hook is called when the caller 268 268 * encountered a &drm_modeset_lock deadlock and needs to drop all 269 269 * already acquired locks as part of the deadlock avoidance dance 270 - * implemented in drm_modeset_lock_backoff(). 270 + * implemented in drm_modeset_backoff(). 271 271 * 272 272 * Any duplicated state must be invalidated since a concurrent atomic 273 273 * update might change it, and the drm atomic interfaces always apply ··· 285 285 * itself. Note that the core first calls drm_atomic_state_clear() to 286 286 * avoid code duplicate between the clear and free hooks. 287 287 * 288 - * Drivers that implement this must call drm_atomic_state_default_free() 289 - * to release common resources. 288 + * Drivers that implement this must call 289 + * drm_atomic_state_default_release() to release common resources. 290 290 */ 291 291 void (*atomic_state_free)(struct drm_atomic_state *state); 292 292 }; ··· 438 438 * multiple CRTCs. 439 439 */ 440 440 struct drm_property *tile_property; 441 + /** 442 + * @link_status_property: Default connector property for link status 443 + * of a connector 444 + */ 445 + struct drm_property *link_status_property; 441 446 /** 442 447 * @plane_type_property: Default plane property to differentiate 443 448 * CURSOR, PRIMARY and OVERLAY legacy uses of planes. ··· 666 661 /* cursor size */ 667 662 uint32_t cursor_width, cursor_height; 668 663 669 - struct drm_mode_config_helper_funcs *helper_private; 664 + const struct drm_mode_config_helper_funcs *helper_private; 670 665 }; 671 666 672 667 void drm_mode_config_init(struct drm_device *dev);
include/drm/drm_mode_object.h (+30 -6)
··· 45 45 * drm_object_attach_property() before the object is visible to userspace. 46 46 * 47 47 * - For objects with dynamic lifetimes (as indicated by a non-NULL @free_cb) it 48 - * provides reference counting through drm_mode_object_reference() and 49 - * drm_mode_object_unreference(). This is used by &drm_framebuffer, 50 - * &drm_connector and &drm_property_blob. These objects provide specialized 51 - * reference counting wrappers. 48 + * provides reference counting through drm_mode_object_get() and 49 + * drm_mode_object_put(). This is used by &drm_framebuffer, &drm_connector 50 + * and &drm_property_blob. These objects provide specialized reference 51 + * counting wrappers. 52 52 */ 53 53 struct drm_mode_object { 54 54 uint32_t id; ··· 114 114 115 115 struct drm_mode_object *drm_mode_object_find(struct drm_device *dev, 116 116 uint32_t id, uint32_t type); 117 - void drm_mode_object_reference(struct drm_mode_object *obj); 118 - void drm_mode_object_unreference(struct drm_mode_object *obj); 117 + void drm_mode_object_get(struct drm_mode_object *obj); 118 + void drm_mode_object_put(struct drm_mode_object *obj); 119 + 120 + /** 121 + * drm_mode_object_reference - acquire a mode object reference 122 + * @obj: DRM mode object 123 + * 124 + * This is a compatibility alias for drm_mode_object_get() and should not be 125 + * used by new code. 126 + */ 127 + static inline void drm_mode_object_reference(struct drm_mode_object *obj) 128 + { 129 + drm_mode_object_get(obj); 130 + } 131 + 132 + /** 133 + * drm_mode_object_unreference - release a mode object reference 134 + * @obj: DRM mode object 135 + * 136 + * This is a compatibility alias for drm_mode_object_put() and should not be 137 + * used by new code. 138 + */ 139 + static inline void drm_mode_object_unreference(struct drm_mode_object *obj) 140 + { 141 + drm_mode_object_put(obj); 142 + } 119 143 120 144 int drm_object_property_set_value(struct drm_mode_object *obj, 121 145 struct drm_property *property,
include/drm/drm_print.h (+3)
··· 26 26 #ifndef DRM_PRINT_H_ 27 27 #define DRM_PRINT_H_ 28 28 29 + #include <linux/compiler.h> 30 + #include <linux/printk.h> 29 31 #include <linux/seq_file.h> 30 32 #include <linux/device.h> 31 33 ··· 77 75 void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf); 78 76 void __drm_printfn_debug(struct drm_printer *p, struct va_format *vaf); 79 77 78 + __printf(2, 3) 80 79 void drm_printf(struct drm_printer *p, const char *f, ...); 81 80 82 81
include/drm/drm_property.h (+30 -5)
··· 200 200 * Blobs are used to store bigger values than what fits directly into the 64 201 201 * bits available for a &drm_property. 202 202 * 203 - * Blobs are reference counted using drm_property_reference_blob() and 204 - * drm_property_unreference_blob(). They are created using 205 - * drm_property_create_blob(). 203 + * Blobs are reference counted using drm_property_blob_get() and 204 + * drm_property_blob_put(). They are created using drm_property_create_blob(). 206 205 */ 207 206 struct drm_property_blob { 208 207 struct drm_mode_object base; ··· 273 274 const void *data, 274 275 struct drm_mode_object *obj_holds_id, 275 276 struct drm_property *prop_holds_id); 276 - struct drm_property_blob *drm_property_reference_blob(struct drm_property_blob *blob); 277 - void drm_property_unreference_blob(struct drm_property_blob *blob); 277 + struct drm_property_blob *drm_property_blob_get(struct drm_property_blob *blob); 278 + void drm_property_blob_put(struct drm_property_blob *blob); 279 + 280 + /** 281 + * drm_property_reference_blob - acquire a blob property reference 282 + * @blob: DRM blob property 283 + * 284 + * This is a compatibility alias for drm_property_blob_get() and should not be 285 + * used by new code. 286 + */ 287 + static inline struct drm_property_blob * 288 + drm_property_reference_blob(struct drm_property_blob *blob) 289 + { 290 + return drm_property_blob_get(blob); 291 + } 292 + 293 + /** 294 + * drm_property_unreference_blob - release a blob property reference 295 + * @blob: DRM blob property 296 + * 297 + * This is a compatibility alias for drm_property_blob_put() and should not be 298 + * used by new code. 299 + */ 300 + static inline void 301 + drm_property_unreference_blob(struct drm_property_blob *blob) 302 + { 303 + drm_property_blob_put(blob); 304 + } 278 305 279 306 /** 280 307 * drm_connector_find - find property object
include/linux/of_platform.h (+11)
··· 76 76 const struct of_dev_auxdata *lookup, 77 77 struct device *parent); 78 78 extern void of_platform_depopulate(struct device *parent); 79 + 80 + extern int devm_of_platform_populate(struct device *dev); 81 + 82 + extern void devm_of_platform_depopulate(struct device *dev); 79 83 #else 80 84 static inline int of_platform_populate(struct device_node *root, 81 85 const struct of_device_id *matches, ··· 95 91 return -ENODEV; 96 92 } 97 93 static inline void of_platform_depopulate(struct device *parent) { } 94 + 95 + static inline int devm_of_platform_populate(struct device *dev) 96 + { 97 + return -ENODEV; 98 + } 99 + 100 + static inline void devm_of_platform_depopulate(struct device *dev) { } 98 101 #endif 99 102 100 103 #if defined(CONFIG_OF_DYNAMIC) && defined(CONFIG_OF_ADDRESS)
include/linux/reservation.h (+20)
··· 167 167 } 168 168 169 169 /** 170 + * reservation_object_trylock - trylock the reservation object 171 + * @obj: the reservation object 172 + * 173 + * Tries to lock the reservation object for exclusive access and modification. 174 + * Note, that the lock is only against other writers, readers will run 175 + * concurrently with a writer under RCU. The seqlock is used to notify readers 176 + * if they overlap with a writer. 177 + * 178 + * Also note that since no context is provided, no deadlock protection is 179 + * possible. 180 + * 181 + * Returns true if the lock was acquired, false otherwise. 182 + */ 183 + static inline bool __must_check 184 + reservation_object_trylock(struct reservation_object *obj) 185 + { 186 + return ww_mutex_trylock(&obj->lock); 187 + } 188 + 189 + /** 170 190 * reservation_object_unlock - unlock the reservation object 171 191 * @obj: the reservation object 172 192 *
include/uapi/drm/drm_mode.h (+4)
··· 123 123 #define DRM_MODE_DIRTY_ON 1 124 124 #define DRM_MODE_DIRTY_ANNOTATE 2 125 125 126 + /* Link Status options */ 127 + #define DRM_MODE_LINK_STATUS_GOOD 0 128 + #define DRM_MODE_LINK_STATUS_BAD 1 129 + 126 130 struct drm_mode_modeinfo { 127 131 __u32 clock; 128 132 __u16 hdisplay;
scripts/coccinelle/api/drm-get-put.cocci (+92)
···
1 + ///
2 + /// Use drm_*_get() and drm_*_put() helpers instead of drm_*_reference() and
3 + /// drm_*_unreference() helpers.
4 + ///
5 + // Confidence: High
6 + // Copyright: (C) 2017 NVIDIA Corporation
7 + // Options: --no-includes --include-headers
8 + //
9 +
10 + virtual patch
11 + virtual report
12 +
13 + @depends on patch@
14 + expression object;
15 + @@
16 +
17 + (
18 + - drm_mode_object_reference(object)
19 + + drm_mode_object_get(object)
20 + |
21 + - drm_mode_object_unreference(object)
22 + + drm_mode_object_put(object)
23 + |
24 + - drm_connector_reference(object)
25 + + drm_connector_get(object)
26 + |
27 + - drm_connector_unreference(object)
28 + + drm_connector_put(object)
29 + |
30 + - drm_framebuffer_reference(object)
31 + + drm_framebuffer_get(object)
32 + |
33 + - drm_framebuffer_unreference(object)
34 + + drm_framebuffer_put(object)
35 + |
36 + - drm_gem_object_reference(object)
37 + + drm_gem_object_get(object)
38 + |
39 + - drm_gem_object_unreference(object)
40 + + drm_gem_object_put(object)
41 + |
42 + - __drm_gem_object_unreference(object)
43 + + __drm_gem_object_put(object)
44 + |
45 + - drm_gem_object_unreference_unlocked(object)
46 + + drm_gem_object_put_unlocked(object)
47 + |
48 + - drm_property_reference_blob(object)
49 + + drm_property_blob_get(object)
50 + |
51 + - drm_property_unreference_blob(object)
52 + + drm_property_blob_put(object)
53 + )
54 +
55 + @r depends on report@
56 + expression object;
57 + position p;
58 + @@
59 +
60 + (
61 + drm_mode_object_unreference@p(object)
62 + |
63 + drm_mode_object_reference@p(object)
64 + |
65 + drm_connector_unreference@p(object)
66 + |
67 + drm_connector_reference@p(object)
68 + |
69 + drm_framebuffer_unreference@p(object)
70 + |
71 + drm_framebuffer_reference@p(object)
72 + |
73 + drm_gem_object_unreference@p(object)
74 + |
75 + drm_gem_object_reference@p(object)
76 + |
77 + __drm_gem_object_unreference@p(object)
78 + |
79 + drm_gem_object_unreference_unlocked@p(object)
80 + |
81 + drm_property_unreference_blob@p(object)
82 + |
83 + drm_property_reference_blob@p(object)
84 + )
85 +
86 + @script:python depends on report@
87 + object << r.object;
88 + p << r.p;
89 + @@
90 +
91 + msg="WARNING: use get/put helpers to reference and dereference %s" % (object)
92 + coccilib.report.print_report(p[0], msg)