Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'topic/drm-misc-2016-05-04' of git://anongit.freedesktop.org/drm-intel into drm-next

Of course I promised just a few leftovers for drm-misc, and somehow it's the
biggest pull. But it's really mostly trivial stuff:
- MAINTAINERS updates from Emil
- rename async to nonblock in atomic_commit to avoid confusion between the
  nonblocking ioctl and async flips (= not vblank synced), from Maarten.
  Needs to be regenerated with newer drivers, but probably only after -rc1 to
  catch them all.
- actually lockless gem_object_free, plus acked driver conversion patches.
  All the trickier prep stuff is already in drm-next.
- Noralf's nice work for generic defio support in our fbdev emulation.
Keeps the udl hack, and qxl is tested by Gerd.

* tag 'topic/drm-misc-2016-05-04' of git://anongit.freedesktop.org/drm-intel: (47 commits)
drm: Fixup locking WARN_ON mistake around gem_object_free_unlocked
drm/etnaviv: Use lockless gem BO free callback
drm/imx: Use lockless gem BO free callback
drm/radeon: Use lockless gem BO free callback
drm/amdgpu: Use lockless gem BO free callback
drm/gem: support BO freeing without dev->struct_mutex
MAINTAINERS: Add myself for the new VC4 (RPi GPU) graphics driver.
MAINTAINERS: Add a bunch of legacy (UMS) DRM drivers
MAINTAINERS: Add a few DRM drivers by Dave Airlie
MAINTAINERS: List the correct git repo for the Renesas DRM drivers
MAINTAINERS: Update the files list for the Renesas DRM drivers
MAINTAINERS: Update the files list for the Armada DRM driver
MAINTAINERS: Update the files list for the Rockchip DRM driver
MAINTAINERS: Update the files list for the Exynos DRM driver
MAINTAINERS: Add maintainer entry for the VMWGFX DRM driver
MAINTAINERS: Add maintainer entry for the MSM DRM driver
MAINTAINERS: Add maintainer entry for the Nouveau DRM driver
MAINTAINERS: Update the files list for the Etnaviv DRM driver
MAINTAINERS: Remove unneded wildcard for the i915 DRM driver
drm/atomic: Add WARN_ON when state->acquire_ctx is not set.
...

+657 -462
+10 -12
Documentation/DocBook/gpu.tmpl
···
 </tr>
 <tr>
 <td rowspan="42" valign="top" >DRM</td>
-<td valign="top" >Generic</td>
+<td rowspan="2" valign="top" >Generic</td>
 <td valign="top" >“rotation”</td>
 <td valign="top" >BITMASK</td>
 <td valign="top" >{ 0, "rotate-0" },
···
 <td valign="top" >rotate-(degrees) rotates the image by the specified amount in degrees
 in counter clockwise direction. reflect-x and reflect-y reflects the
 image along the specified axis prior to rotation</td>
+</tr>
+<tr>
+<td valign="top" >“scaling mode”</td>
+<td valign="top" >ENUM</td>
+<td valign="top" >{ "None", "Full", "Center", "Full aspect" }</td>
+<td valign="top" >Connector</td>
+<td valign="top" >Supported by: amdgpu, gma500, i915, nouveau and radeon.</td>
 </tr>
 <tr>
 <td rowspan="5" valign="top" >Connector</td>
···
 <td valign="top" >property to suggest an Y offset for a connector</td>
 </tr>
 <tr>
-<td rowspan="8" valign="top" >Optional</td>
-<td valign="top" >“scaling mode”</td>
-<td valign="top" >ENUM</td>
-<td valign="top" >{ "None", "Full", "Center", "Full aspect" }</td>
-<td valign="top" >Connector</td>
-<td valign="top" >TBD</td>
-</tr>
-<tr>
+<td rowspan="7" valign="top" >Optional</td>
 <td valign="top" >"aspect ratio"</td>
 <td valign="top" >ENUM</td>
 <td valign="top" >{ "None", "4:3", "16:9" }</td>
 <td valign="top" >Connector</td>
-<td valign="top" >DRM property to set aspect ratio from user space app.
-This enum is made generic to allow addition of custom aspect
-ratios.</td>
+<td valign="top" >TDB</td>
 </tr>
 <tr>
 <td valign="top" >“dirty”</td>
+110 -7
MAINTAINERS
···
 F:	include/drm/
 F:	include/uapi/drm/
 
+DRM DRIVER FOR AST SERVER GRAPHICS CHIPS
+M:	Dave Airlie <airlied@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/ast/
+
+DRM DRIVER FOR BOCHS VIRTUAL GPU
+M:	Gerd Hoffmann <kraxel@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/bochs/
+
+DRM DRIVER FOR QEMU'S CIRRUS DEVICE
+M:	Dave Airlie <airlied@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/cirrus/
+
 RADEON and AMDGPU DRM DRIVERS
 M:	Alex Deucher <alexander.deucher@amd.com>
 M:	Christian König <christian.koenig@amd.com>
···
 S:	Supported
 F:	drivers/gpu/drm/i915/
 F:	include/drm/i915*
-F:	include/uapi/drm/i915*
+F:	include/uapi/drm/i915_drm.h
 
 DRM DRIVERS FOR ATMEL HLCDC
 M:	Boris Brezillon <boris.brezillon@free-electrons.com>
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos.git
 S:	Supported
 F:	drivers/gpu/drm/exynos/
-F:	include/drm/exynos*
-F:	include/uapi/drm/exynos*
+F:	include/uapi/drm/exynos_drm.h
+F:	Documentation/devicetree/bindings/display/exynos/
 
 DRM DRIVERS FOR FREESCALE DCU
 M:	Stefan Agner <stefan@agner.ch>
···
 F:	drivers/gpu/drm/hisilicon/
 F:	Documentation/devicetree/bindings/display/hisilicon/
 
+DRM DRIVER FOR INTEL I810 VIDEO CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/i810/
+F:	include/uapi/drm/i810_drm.h
+
+DRM DRIVER FOR MSM ADRENO GPU
+M:	Rob Clark <robdclark@gmail.com>
+L:	linux-arm-msm@vger.kernel.org
+L:	dri-devel@lists.freedesktop.org
+L:	freedreno@lists.freedesktop.org
+T:	git git://people.freedesktop.org/~robclark/linux
+S:	Maintained
+F:	drivers/gpu/drm/msm/
+F:	include/uapi/drm/msm_drm.h
+F:	Documentation/devicetree/bindings/display/msm/
+
+DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS
+M:	Ben Skeggs <bskeggs@redhat.com>
+L:	dri-devel@lists.freedesktop.org
+L:	nouveau@lists.freedesktop.org
+T:	git git://github.com/skeggsb/linux
+S:	Supported
+F:	drivers/gpu/drm/nouveau/
+F:	include/uapi/drm/nouveau_drm.h
+
 DRM DRIVERS FOR NVIDIA TEGRA
 M:	Thierry Reding <thierry.reding@gmail.com>
 M:	Terje Bergström <tbergstrom@nvidia.com>
···
 F:	include/uapi/drm/tegra_drm.h
 F:	Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
 
+DRM DRIVER FOR MATROX G200/G400 GRAPHICS CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/mga/
+F:	include/uapi/drm/mga_drm.h
+
+DRM DRIVER FOR MGA G200 SERVER GRAPHICS CHIPS
+M:	Dave Airlie <airlied@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/mgag200/
+
+DRM DRIVER FOR RAGE 128 VIDEO CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/r128/
+F:	include/uapi/drm/r128_drm.h
+
 DRM DRIVERS FOR RENESAS
 M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
 L:	dri-devel@lists.freedesktop.org
 L:	linux-renesas-soc@vger.kernel.org
-T:	git git://people.freedesktop.org/~airlied/linux
+T:	git git://linuxtv.org/pinchartl/fbdev
 S:	Supported
 F:	drivers/gpu/drm/rcar-du/
 F:	drivers/gpu/drm/shmobile/
 F:	include/linux/platform_data/shmob_drm.h
+F:	Documentation/devicetree/bindings/display/renesas,du.txt
+
+DRM DRIVER FOR QXL VIRTUAL GPU
+M:	Dave Airlie <airlied@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/qxl/
+F:	include/uapi/drm/qxl_drm.h
 
 DRM DRIVERS FOR ROCKCHIP
 M:	Mark Yao <mark.yao@rock-chips.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 F:	drivers/gpu/drm/rockchip/
-F:	Documentation/devicetree/bindings/display/rockchip*
+F:	Documentation/devicetree/bindings/display/rockchip/
+
+DRM DRIVER FOR SAVAGE VIDEO CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/savage/
+F:	include/uapi/drm/savage_drm.h
+
+DRM DRIVER FOR SIS VIDEO CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/sis/
+F:	include/uapi/drm/sis_drm.h
 
 DRM DRIVERS FOR STI
 M:	Benjamin Gaignard <benjamin.gaignard@linaro.org>
···
 F:	drivers/gpu/drm/sti
 F:	Documentation/devicetree/bindings/display/st,stih4xx.txt
 
+DRM DRIVER FOR TDFX VIDEO CARDS
+S:	Orphan / Obsolete
+F:	drivers/gpu/drm/tdfx/
+
+DRM DRIVER FOR USB DISPLAYLINK VIDEO ADAPTERS
+M:	Dave Airlie <airlied@redhat.com>
+S:	Odd Fixes
+F:	drivers/gpu/drm/udl/
+
 DRM DRIVERS FOR VIVANTE GPU IP
 M:	Lucas Stach <l.stach@pengutronix.de>
 R:	Russell King <linux+etnaviv@arm.linux.org.uk>
 R:	Christian Gmeiner <christian.gmeiner@gmail.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
-F:	drivers/gpu/drm/etnaviv
-F:	Documentation/devicetree/bindings/display/etnaviv
+F:	drivers/gpu/drm/etnaviv/
+F:	include/uapi/drm/etnaviv_drm.h
+F:	Documentation/devicetree/bindings/display/etnaviv/
+
+DRM DRIVER FOR VMWARE VIRTUAL GPU
+M:	"VMware Graphics" <linux-graphics-maintainer@vmware.com>
+M:	Sinclair Yeh <syeh@vmware.com>
+M:	Thomas Hellstrom <thellstrom@vmware.com>
+L:	dri-devel@lists.freedesktop.org
+T:	git git://people.freedesktop.org/~syeh/repos_linux
+T:	git git://people.freedesktop.org/~thomash/linux
+S:	Supported
+F:	drivers/gpu/drm/vmwgfx/
+F:	include/uapi/drm/vmwgfx_drm.h
+
+DRM DRIVERS FOR VC4
+M:	Eric Anholt <eric@anholt.net>
+T:	git git://github.com/anholt/linux
+S:	Supported
+F:	drivers/gpu/drm/vc4/
+F:	include/uapi/drm/vc4_drm.h
+F:	Documentation/devicetree/bindings/display/brcm,bcm-vc4.txt
 
 DSBR100 USB FM RADIO DRIVER
 M:	Alexey Klimov <klimov.linux@gmail.com>
···
 M:	Russell King <rmk+kernel@arm.linux.org.uk>
 S:	Maintained
 F:	drivers/gpu/drm/armada/
+F:	include/uapi/drm/armada_drm.h
+F:	Documentation/devicetree/bindings/display/armada/
 
 MARVELL 88E6352 DSA support
 M:	Guenter Roeck <linux@roeck-us.net>
+1
drivers/gpu/drm/Kconfig
···
 	select FB_CFB_FILLRECT
 	select FB_CFB_COPYAREA
 	select FB_CFB_IMAGEBLIT
+	select FB_DEFERRED_IO
 	help
 	  FBDEV helpers for KMS drivers.
 
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
 	.irq_uninstall = amdgpu_irq_uninstall,
 	.irq_handler = amdgpu_irq_handler,
 	.ioctls = amdgpu_ioctls_kms,
-	.gem_free_object = amdgpu_gem_object_free,
+	.gem_free_object_unlocked = amdgpu_gem_object_free,
 	.gem_open_object = amdgpu_gem_object_open,
 	.gem_close_object = amdgpu_gem_object_close,
 	.dumb_create = amdgpu_mode_dumb_create,
+1 -1
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
···
 
 	/* wakeup usersapce */
 	if (works->event)
-		drm_send_vblank_event(adev->ddev, crtc_id, works->event);
+		drm_crtc_send_vblank_event(&amdgpu_crtc->base, works->event);
 
 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
 
+1 -1
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
···
 
 	/* wakeup usersapce */
 	if(works->event)
-		drm_send_vblank_event(adev->ddev, crtc_id, works->event);
+		drm_crtc_send_vblank_event(&amdgpu_crtc->base, works->event);
 
 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
 
+1 -1
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
···
 
 	/* wakeup usersapce */
 	if (works->event)
-		drm_send_vblank_event(adev->ddev, crtc_id, works->event);
+		drm_crtc_send_vblank_event(&amdgpu_crtc->base, works->event);
 
 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
 
+1 -1
drivers/gpu/drm/arm/hdlcd_drv.c
···
 }
 
 static int hdlcd_atomic_commit(struct drm_device *dev,
-			       struct drm_atomic_state *state, bool async)
+			       struct drm_atomic_state *state, bool nonblock)
 {
 	return drm_atomic_helper_commit(dev, state, false);
 }
+12 -6
drivers/gpu/drm/drm_atomic.c
···
 			continue;
 
 		/*
-		 * FIXME: Async commits can race with connector unplugging and
+		 * FIXME: Nonblocking commits can race with connector unplugging and
 		 * there's currently nothing that prevents cleanup up state for
 		 * deleted connectors. As long as the callback doesn't look at
 		 * the connector we'll be fine though, so make sure that's the
···
 {
 	int ret, index = drm_crtc_index(crtc);
 	struct drm_crtc_state *crtc_state;
+
+	WARN_ON(!state->acquire_ctx);
 
 	crtc_state = drm_atomic_get_existing_crtc_state(state, crtc);
 	if (crtc_state)
···
 	int ret, index = drm_plane_index(plane);
 	struct drm_plane_state *plane_state;
 
+	WARN_ON(!state->acquire_ctx);
+
 	plane_state = drm_atomic_get_existing_plane_state(state, plane);
 	if (plane_state)
 		return plane_state;
···
 	int ret, index;
 	struct drm_mode_config *config = &connector->dev->mode_config;
 	struct drm_connector_state *connector_state;
+
+	WARN_ON(!state->acquire_ctx);
 
 	ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
 	if (ret)
···
 EXPORT_SYMBOL(drm_atomic_commit);
 
 /**
- * drm_atomic_async_commit - atomic&async configuration commit
+ * drm_atomic_nonblocking_commit - atomic&nonblocking configuration commit
  * @state: atomic configuration to check
  *
  * Note that this function can return -EDEADLK if the driver needed to acquire
···
  * Returns:
  * 0 on success, negative error code on failure.
  */
-int drm_atomic_async_commit(struct drm_atomic_state *state)
+int drm_atomic_nonblocking_commit(struct drm_atomic_state *state)
 {
 	struct drm_mode_config *config = &state->dev->mode_config;
 	int ret;
···
 	if (ret)
 		return ret;
 
-	DRM_DEBUG_ATOMIC("commiting %p asynchronously\n", state);
+	DRM_DEBUG_ATOMIC("commiting %p nonblocking\n", state);
 
 	return config->funcs->atomic_commit(state->dev, state, true);
 }
-EXPORT_SYMBOL(drm_atomic_async_commit);
+EXPORT_SYMBOL(drm_atomic_nonblocking_commit);
 
 /*
  * The big monstor ioctl
···
 		 */
 		ret = drm_atomic_check_only(state);
 	} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
-		ret = drm_atomic_async_commit(state);
+		ret = drm_atomic_nonblocking_commit(state);
 	} else {
 		ret = drm_atomic_commit(state);
 	}
+17 -17
drivers/gpu/drm/drm_atomic_helper.c
···
  * drm_atomic_helper_commit - commit validated state object
  * @dev: DRM device
  * @state: the driver state object
- * @async: asynchronous commit
+ * @nonblocking: whether nonblocking behavior is requested.
  *
  * This function commits a with drm_atomic_helper_check() pre-validated state
  * object. This can still fail when e.g. the framebuffer reservation fails. For
- * now this doesn't implement asynchronous commits.
+ * now this doesn't implement nonblocking commits.
  *
- * Note that right now this function does not support async commits, and hence
+ * Note that right now this function does not support nonblocking commits, hence
  * driver writers must implement their own version for now. Also note that the
  * default ordering of how the various stages are called is to match the legacy
  * modeset helper library closest. One peculiarity of that is that it doesn't
···
  */
 int drm_atomic_helper_commit(struct drm_device *dev,
 			     struct drm_atomic_state *state,
-			     bool async)
+			     bool nonblock)
 {
 	int ret;
 
-	if (async)
+	if (nonblock)
 		return -EBUSY;
 
 	ret = drm_atomic_helper_prepare_planes(dev, state);
···
 EXPORT_SYMBOL(drm_atomic_helper_commit);
 
 /**
- * DOC: implementing async commit
+ * DOC: implementing nonblocking commit
  *
- * For now the atomic helpers don't support async commit directly. If there is
- * real need it could be added though, using the dma-buf fence infrastructure
- * for generic synchronization with outstanding rendering.
+ * For now the atomic helpers don't support nonblocking commit directly. If
+ * there is real need it could be added though, using the dma-buf fence
+ * infrastructure for generic synchronization with outstanding rendering.
  *
- * For now drivers have to implement async commit themselves, with the following
- * sequence being the recommended one:
+ * For now drivers have to implement nonblocking commit themselves, with the
+ * following sequence being the recommended one:
  *
  * 1. Run drm_atomic_helper_prepare_planes() first. This is the only function
  * which commit needs to call which can fail, so we want to run it first and
  * synchronously.
  *
- * 2. Synchronize with any outstanding asynchronous commit worker threads which
+ * 2. Synchronize with any outstanding nonblocking commit worker threads which
  * might be affected the new state update. This can be done by either cancelling
  * or flushing the work items, depending upon whether the driver can deal with
  * cancelled updates. Note that it is important to ensure that the framebuffer
···
  * 3. The software state is updated synchronously with
  * drm_atomic_helper_swap_state(). Doing this under the protection of all modeset
  * locks means concurrent callers never see inconsistent state. And doing this
- * while it's guaranteed that no relevant async worker runs means that async
- * workers do not need grab any locks. Actually they must not grab locks, for
- * otherwise the work flushing will deadlock.
+ * while it's guaranteed that no relevant nonblocking worker runs means that
+ * nonblocking workers do not need grab any locks. Actually they must not grab
+ * locks, for otherwise the work flushing will deadlock.
  *
  * 4. Schedule a work item to do all subsequent steps, using the split-out
  * commit helpers: a) pre-plane commit b) plane commit c) post-plane commit and
···
 		goto fail;
 	}
 
-	ret = drm_atomic_async_commit(state);
+	ret = drm_atomic_nonblocking_commit(state);
 	if (ret != 0)
 		goto fail;
 
-	/* Driver takes ownership of state on successful async commit. */
+	/* Driver takes ownership of state on successful commit. */
 	return 0;
 fail:
 	if (ret == -EDEADLK)
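The four-step sequence in the DOC comment above can be sketched as a small user-space model. This is an assumption-laden illustration, not kernel code: plain pthreads stand in for the commit worker, and the names (`nonblocking_commit`, `apply_worker`) are invented for the sketch. The point it demonstrates is the ordering rule: flush the previous worker (step 2) before swapping the software state (step 3), then hand hardware programming to a fresh worker (step 4).

```c
#include <assert.h>
#include <pthread.h>

/* User-space model of the recommended nonblocking-commit ordering.
 * hw_state models what the hardware shows, sw_state the committed
 * software state that workers read. */
static int hw_state;
static int sw_state;

static void *apply_worker(void *arg)
{
	hw_state = (int)(long)arg;	/* pretend to program the hardware */
	return NULL;
}

static int nonblocking_commit(pthread_t *prev, pthread_t *worker, int new_state)
{
	if (prev)
		pthread_join(*prev, NULL);	/* step 2: flush older worker */
	sw_state = new_state;			/* step 3: swap, no worker runs */
	return pthread_create(worker, NULL,	/* step 4: schedule the work */
			      apply_worker, (void *)(long)new_state);
}

static int run_two_commits(void)
{
	pthread_t w1, w2;

	nonblocking_commit(NULL, &w1, 1);
	nonblocking_commit(&w1, &w2, 2);	/* flushes w1 before swapping */
	pthread_join(w2, NULL);
	return hw_state;
}
```

Because each commit flushes the previous worker before swapping, no worker ever observes a half-swapped state, which is exactly why the real workers may not (and must not) take the modeset locks.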
+166 -12
drivers/gpu/drm/drm_fb_cma_helper.c
···
 #include <drm/drm_fb_cma_helper.h>
 #include <linux/module.h>
 
+#define DEFAULT_FBDEFIO_DELAY_MS 50
+
 struct drm_fb_cma {
 	struct drm_framebuffer fb;
 	struct drm_gem_cma_object *obj[4];
···
 	struct drm_fb_helper fb_helper;
 	struct drm_fb_cma *fb;
 };
+
+/**
+ * DOC: framebuffer cma helper functions
+ *
+ * Provides helper functions for creating a cma (contiguous memory allocator)
+ * backed framebuffer.
+ *
+ * drm_fb_cma_create() is used in the
+ * (struct drm_mode_config_funcs *)->fb_create callback function to create the
+ * cma backed framebuffer.
+ *
+ * An fbdev framebuffer backed by cma is also available by calling
+ * drm_fbdev_cma_init(). drm_fbdev_cma_fini() tears it down.
+ * If CONFIG_FB_DEFERRED_IO is enabled and the callback
+ * (struct drm_framebuffer_funcs)->dirty is set, fb_deferred_io
+ * will be set up automatically. dirty() is called by
+ * drm_fb_helper_deferred_io() in process context (struct delayed_work).
+ *
+ * Example fbdev deferred io code:
+ *
+ * static int driver_fbdev_fb_dirty(struct drm_framebuffer *fb,
+ *                                  struct drm_file *file_priv,
+ *                                  unsigned flags, unsigned color,
+ *                                  struct drm_clip_rect *clips,
+ *                                  unsigned num_clips)
+ * {
+ *     struct drm_gem_cma_object *cma = drm_fb_cma_get_gem_obj(fb, 0);
+ *     ... push changes ...
+ *     return 0;
+ * }
+ *
+ * static struct drm_framebuffer_funcs driver_fbdev_fb_funcs = {
+ *     .destroy       = drm_fb_cma_destroy,
+ *     .create_handle = drm_fb_cma_create_handle,
+ *     .dirty         = driver_fbdev_fb_dirty,
+ * };
+ *
+ * static int driver_fbdev_create(struct drm_fb_helper *helper,
+ *         struct drm_fb_helper_surface_size *sizes)
+ * {
+ *     return drm_fbdev_cma_create_with_funcs(helper, sizes,
+ *                                            &driver_fbdev_fb_funcs);
+ * }
+ *
+ * static const struct drm_fb_helper_funcs driver_fb_helper_funcs = {
+ *     .fb_probe = driver_fbdev_create,
+ * };
+ *
+ * Initialize:
+ * fbdev = drm_fbdev_cma_init_with_funcs(dev, 16,
+ *                                       dev->mode_config.num_crtc,
+ *                                       dev->mode_config.num_connector,
+ *                                       &driver_fb_helper_funcs);
+ */
 
 static inline struct drm_fbdev_cma *to_fbdev_cma(struct drm_fb_helper *helper)
 {
···
 	return container_of(fb, struct drm_fb_cma, fb);
 }
 
-static void drm_fb_cma_destroy(struct drm_framebuffer *fb)
+void drm_fb_cma_destroy(struct drm_framebuffer *fb)
 {
 	struct drm_fb_cma *fb_cma = to_fb_cma(fb);
 	int i;
···
 	drm_framebuffer_cleanup(fb);
 	kfree(fb_cma);
 }
+EXPORT_SYMBOL(drm_fb_cma_destroy);
 
-static int drm_fb_cma_create_handle(struct drm_framebuffer *fb,
+int drm_fb_cma_create_handle(struct drm_framebuffer *fb,
 	struct drm_file *file_priv, unsigned int *handle)
 {
 	struct drm_fb_cma *fb_cma = to_fb_cma(fb);
···
 	return drm_gem_handle_create(file_priv,
 			&fb_cma->obj[0]->base, handle);
 }
+EXPORT_SYMBOL(drm_fb_cma_create_handle);
 
 static struct drm_framebuffer_funcs drm_fb_cma_funcs = {
 	.destroy = drm_fb_cma_destroy,
···
 static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev,
 	const struct drm_mode_fb_cmd2 *mode_cmd,
 	struct drm_gem_cma_object **obj,
-	unsigned int num_planes)
+	unsigned int num_planes, struct drm_framebuffer_funcs *funcs)
 {
 	struct drm_fb_cma *fb_cma;
 	int ret;
···
 	for (i = 0; i < num_planes; i++)
 		fb_cma->obj[i] = obj[i];
 
-	ret = drm_framebuffer_init(dev, &fb_cma->fb, &drm_fb_cma_funcs);
+	ret = drm_framebuffer_init(dev, &fb_cma->fb, funcs);
 	if (ret) {
 		dev_err(dev->dev, "Failed to initialize framebuffer: %d\n", ret);
 		kfree(fb_cma);
···
 		objs[i] = to_drm_gem_cma_obj(obj);
 	}
 
-	fb_cma = drm_fb_cma_alloc(dev, mode_cmd, objs, i);
+	fb_cma = drm_fb_cma_alloc(dev, mode_cmd, objs, i, &drm_fb_cma_funcs);
 	if (IS_ERR(fb_cma)) {
 		ret = PTR_ERR(fb_cma);
 		goto err_gem_object_unreference;
···
 	.fb_setcmap = drm_fb_helper_setcmap,
 };
 
-static int drm_fbdev_cma_create(struct drm_fb_helper *helper,
-	struct drm_fb_helper_surface_size *sizes)
+static int drm_fbdev_cma_deferred_io_mmap(struct fb_info *info,
+					  struct vm_area_struct *vma)
+{
+	fb_deferred_io_mmap(info, vma);
+	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+	return 0;
+}
+
+static int drm_fbdev_cma_defio_init(struct fb_info *fbi,
+				    struct drm_gem_cma_object *cma_obj)
+{
+	struct fb_deferred_io *fbdefio;
+	struct fb_ops *fbops;
+
+	/*
+	 * Per device structures are needed because:
+	 * fbops: fb_deferred_io_cleanup() clears fbops.fb_mmap
+	 * fbdefio: individual delays
+	 */
+	fbdefio = kzalloc(sizeof(*fbdefio), GFP_KERNEL);
+	fbops = kzalloc(sizeof(*fbops), GFP_KERNEL);
+	if (!fbdefio || !fbops) {
+		kfree(fbdefio);
+		return -ENOMEM;
+	}
+
+	/* can't be offset from vaddr since dirty() uses cma_obj */
+	fbi->screen_buffer = cma_obj->vaddr;
+	/* fb_deferred_io_fault() needs a physical address */
+	fbi->fix.smem_start = page_to_phys(virt_to_page(fbi->screen_buffer));
+
+	*fbops = *fbi->fbops;
+	fbi->fbops = fbops;
+
+	fbdefio->delay = msecs_to_jiffies(DEFAULT_FBDEFIO_DELAY_MS);
+	fbdefio->deferred_io = drm_fb_helper_deferred_io;
+	fbi->fbdefio = fbdefio;
+	fb_deferred_io_init(fbi);
+	fbi->fbops->fb_mmap = drm_fbdev_cma_deferred_io_mmap;
+
+	return 0;
+}
+
+static void drm_fbdev_cma_defio_fini(struct fb_info *fbi)
+{
+	if (!fbi->fbdefio)
+		return;
+
+	fb_deferred_io_cleanup(fbi);
+	kfree(fbi->fbdefio);
+	kfree(fbi->fbops);
+}
+
+/*
+ * For use in a (struct drm_fb_helper_funcs *)->fb_probe callback function that
+ * needs custom struct drm_framebuffer_funcs, like dirty() for deferred_io use.
+ */
+int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
+	struct drm_fb_helper_surface_size *sizes,
+	struct drm_framebuffer_funcs *funcs)
 {
 	struct drm_fbdev_cma *fbdev_cma = to_fbdev_cma(helper);
 	struct drm_mode_fb_cmd2 mode_cmd = { 0 };
···
 		goto err_gem_free_object;
 	}
 
-	fbdev_cma->fb = drm_fb_cma_alloc(dev, &mode_cmd, &obj, 1);
+	fbdev_cma->fb = drm_fb_cma_alloc(dev, &mode_cmd, &obj, 1, funcs);
 	if (IS_ERR(fbdev_cma->fb)) {
 		dev_err(dev->dev, "Failed to allocate DRM framebuffer.\n");
 		ret = PTR_ERR(fbdev_cma->fb);
···
 	fbi->screen_size = size;
 	fbi->fix.smem_len = size;
 
+	if (funcs->dirty) {
+		ret = drm_fbdev_cma_defio_init(fbi, obj);
+		if (ret)
+			goto err_cma_destroy;
+	}
+
 	return 0;
 
+err_cma_destroy:
+	drm_framebuffer_unregister_private(&fbdev_cma->fb->fb);
+	drm_fb_cma_destroy(&fbdev_cma->fb->fb);
 err_fb_info_destroy:
 	drm_fb_helper_release_fbi(helper);
 err_gem_free_object:
 	dev->driver->gem_free_object(&obj->base);
 	return ret;
+}
+EXPORT_SYMBOL(drm_fbdev_cma_create_with_funcs);
+
+static int drm_fbdev_cma_create(struct drm_fb_helper *helper,
+	struct drm_fb_helper_surface_size *sizes)
+{
+	return drm_fbdev_cma_create_with_funcs(helper, sizes, &drm_fb_cma_funcs);
 }
 
 static const struct drm_fb_helper_funcs drm_fb_cma_helper_funcs = {
···
 };
 
 /**
- * drm_fbdev_cma_init() - Allocate and initializes a drm_fbdev_cma struct
+ * drm_fbdev_cma_init_with_funcs() - Allocate and initializes a drm_fbdev_cma struct
  * @dev: DRM device
  * @preferred_bpp: Preferred bits per pixel for the device
  * @num_crtc: Number of CRTCs
  * @max_conn_count: Maximum number of connectors
+ * @funcs: fb helper functions, in particular fb_probe()
  *
  * Returns a newly allocated drm_fbdev_cma struct or a ERR_PTR.
  */
-struct drm_fbdev_cma *drm_fbdev_cma_init(struct drm_device *dev,
+struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev,
 	unsigned int preferred_bpp, unsigned int num_crtc,
-	unsigned int max_conn_count)
+	unsigned int max_conn_count, const struct drm_fb_helper_funcs *funcs)
 {
 	struct drm_fbdev_cma *fbdev_cma;
 	struct drm_fb_helper *helper;
···
 
 	helper = &fbdev_cma->fb_helper;
 
-	drm_fb_helper_prepare(dev, helper, &drm_fb_cma_helper_funcs);
+	drm_fb_helper_prepare(dev, helper, funcs);
 
 	ret = drm_fb_helper_init(dev, helper, num_crtc, max_conn_count);
 	if (ret < 0) {
···
 
 	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(drm_fbdev_cma_init_with_funcs);
+
+/**
+ * drm_fbdev_cma_init() - Allocate and initializes a drm_fbdev_cma struct
+ * @dev: DRM device
+ * @preferred_bpp: Preferred bits per pixel for the device
+ * @num_crtc: Number of CRTCs
+ * @max_conn_count: Maximum number of connectors
+ *
+ * Returns a newly allocated drm_fbdev_cma struct or a ERR_PTR.
+ */
+struct drm_fbdev_cma *drm_fbdev_cma_init(struct drm_device *dev,
+	unsigned int preferred_bpp, unsigned int num_crtc,
+	unsigned int max_conn_count)
+{
+	return drm_fbdev_cma_init_with_funcs(dev, preferred_bpp, num_crtc,
+				max_conn_count, &drm_fb_cma_helper_funcs);
+}
 EXPORT_SYMBOL_GPL(drm_fbdev_cma_init);
 
 /**
···
 void drm_fbdev_cma_fini(struct drm_fbdev_cma *fbdev_cma)
 {
 	drm_fb_helper_unregister_fbi(&fbdev_cma->fb_helper);
+	drm_fbdev_cma_defio_fini(fbdev_cma->fb_helper.fbdev);
 	drm_fb_helper_release_fbi(&fbdev_cma->fb_helper);
 
 	if (fbdev_cma->fb) {
+102 -1
drivers/gpu/drm/drm_fb_helper.c
···
  * and set up an initial configuration using the detected hardware, drivers
  * should call drm_fb_helper_single_add_all_connectors() followed by
  * drm_fb_helper_initial_config().
+ *
+ * If CONFIG_FB_DEFERRED_IO is enabled and &drm_framebuffer_funcs ->dirty is
+ * set, the drm_fb_helper_{cfb,sys}_{write,fillrect,copyarea,imageblit}
+ * functions will accumulate changes and schedule &fb_helper .dirty_work to run
+ * right away. This worker then calls the dirty() function ensuring that it
+ * will always run in process context since the fb_*() function could be
+ * running in atomic context. If drm_fb_helper_deferred_io() is used as the
+ * deferred_io callback it will also schedule dirty_work with the damage
+ * collected from the mmap page writes.
  */
 
 /**
···
 	kfree(helper->crtc_info);
 }
 
+static void drm_fb_helper_dirty_work(struct work_struct *work)
+{
+	struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper,
+						    dirty_work);
+	struct drm_clip_rect *clip = &helper->dirty_clip;
+	struct drm_clip_rect clip_copy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&helper->dirty_lock, flags);
+	clip_copy = *clip;
+	clip->x1 = clip->y1 = ~0;
+	clip->x2 = clip->y2 = 0;
+	spin_unlock_irqrestore(&helper->dirty_lock, flags);
+
+	helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
+}
+
 /**
  * drm_fb_helper_prepare - setup a drm_fb_helper structure
  * @dev: DRM device
···
 	const struct drm_fb_helper_funcs *funcs)
 {
 	INIT_LIST_HEAD(&helper->kernel_fb_list);
+	spin_lock_init(&helper->dirty_lock);
+	INIT_WORK(&helper->dirty_work, drm_fb_helper_dirty_work);
+	helper->dirty_clip.x1 = helper->dirty_clip.y1 = ~0;
 	helper->funcs = funcs;
 	helper->dev = dev;
 }
···
 }
 EXPORT_SYMBOL(drm_fb_helper_unlink_fbi);
 
+static void drm_fb_helper_dirty(struct fb_info *info, u32 x, u32 y,
+				u32 width, u32 height)
+{
+	struct drm_fb_helper *helper = info->par;
+	struct drm_clip_rect *clip = &helper->dirty_clip;
+	unsigned long flags;
+
+	if (!helper->fb->funcs->dirty)
+		return;
+
+	spin_lock_irqsave(&helper->dirty_lock, flags);
+	clip->x1 = min_t(u32, clip->x1, x);
+	clip->y1 = min_t(u32, clip->y1, y);
+	clip->x2 = max_t(u32, clip->x2, x + width);
+	clip->y2 = max_t(u32, clip->y2, y + height);
+	spin_unlock_irqrestore(&helper->dirty_lock, flags);
+
+	schedule_work(&helper->dirty_work);
+}
+
+/**
+ * drm_fb_helper_deferred_io() - fbdev deferred_io callback function
+ * @info: fb_info struct pointer
+ * @pagelist: list of dirty mmap framebuffer pages
+ *
+ * This function is used as the &fb_deferred_io ->deferred_io
+ * callback function for flushing the fbdev mmap writes.
+ */
+void drm_fb_helper_deferred_io(struct fb_info *info,
+			       struct list_head *pagelist)
+{
+	unsigned long start, end, min, max;
+	struct page *page;
+	u32 y1, y2;
+
+	min = ULONG_MAX;
+	max = 0;
+	list_for_each_entry(page, pagelist, lru) {
+		start = page->index << PAGE_SHIFT;
+		end = start + PAGE_SIZE - 1;
+		min = min(min, start);
+		max = max(max, end);
+	}
+
+	if (min < max) {
+		y1 = min / info->fix.line_length;
+		y2 = min_t(u32, DIV_ROUND_UP(max, info->fix.line_length),
+			   info->var.yres);
+		drm_fb_helper_dirty(info, 0, y1, info->var.xres, y2 - y1);
+	}
+}
+EXPORT_SYMBOL(drm_fb_helper_deferred_io);
+
 /**
  * drm_fb_helper_sys_read - wrapper around fb_sys_read
  * @info: fb_info struct pointer
···
 ssize_t drm_fb_helper_sys_write(struct fb_info *info, const char __user *buf,
 				size_t count, loff_t *ppos)
 {
-	return fb_sys_write(info, buf, count, ppos);
+	ssize_t ret;
+
+	ret = fb_sys_write(info, buf, count, ppos);
+	if (ret > 0)
+		drm_fb_helper_dirty(info, 0, 0, info->var.xres,
+				    info->var.yres);
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_write);
···
 			    const struct fb_fillrect *rect)
 {
 	sys_fillrect(info, rect);
+	drm_fb_helper_dirty(info, rect->dx, rect->dy,
+			    rect->width, rect->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_fillrect);
···
 			    const struct fb_copyarea *area)
 {
 	sys_copyarea(info, area);
+	drm_fb_helper_dirty(info, area->dx, area->dy,
+			    area->width, area->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_copyarea);
···
 			     const struct fb_image *image)
 {
 	sys_imageblit(info, image);
+	drm_fb_helper_dirty(info, image->dx, image->dy,
+			    image->width, image->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
···
 			    const struct fb_fillrect *rect)
 {
 	cfb_fillrect(info, rect);
+	drm_fb_helper_dirty(info, rect->dx, rect->dy,
+			    rect->width, rect->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_fillrect);
···
 			    const struct fb_copyarea *area)
 {
 	cfb_copyarea(info, area);
+	drm_fb_helper_dirty(info, area->dx, area->dy,
+			    area->width, area->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_copyarea);
···
 			     const struct fb_image *image)
 {
 	cfb_imageblit(info, image);
+	drm_fb_helper_dirty(info, image->dx, image->dy,
+			    image->width, image->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_imageblit);
 
+55 -2
drivers/gpu/drm/drm_gem.c
··· 804 804 container_of(kref, struct drm_gem_object, refcount); 805 805 struct drm_device *dev = obj->dev; 806 806 807 - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 807 + if (dev->driver->gem_free_object_unlocked) { 808 + dev->driver->gem_free_object_unlocked(obj); 809 + } else if (dev->driver->gem_free_object) { 810 + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 808 811 809 - if (dev->driver->gem_free_object != NULL) 810 812 dev->driver->gem_free_object(obj); 813 + } 811 814 } 812 815 EXPORT_SYMBOL(drm_gem_object_free); 816 + 817 + /** 818 + * drm_gem_object_unreference_unlocked - release a GEM BO reference 819 + * @obj: GEM buffer object 820 + * 821 + * This releases a reference to @obj. Callers must not hold the 822 + * dev->struct_mutex lock when calling this function. 823 + * 824 + * See also __drm_gem_object_unreference(). 825 + */ 826 + void 827 + drm_gem_object_unreference_unlocked(struct drm_gem_object *obj) 828 + { 829 + struct drm_device *dev; 830 + 831 + if (!obj) 832 + return; 833 + 834 + dev = obj->dev; 835 + might_lock(&dev->struct_mutex); 836 + 837 + if (dev->driver->gem_free_object_unlocked) 838 + kref_put(&obj->refcount, drm_gem_object_free); 839 + else if (kref_put_mutex(&obj->refcount, drm_gem_object_free, 840 + &dev->struct_mutex)) 841 + mutex_unlock(&dev->struct_mutex); 842 + } 843 + EXPORT_SYMBOL(drm_gem_object_unreference_unlocked); 844 + 845 + /** 846 + * drm_gem_object_unreference - release a GEM BO reference 847 + * @obj: GEM buffer object 848 + * 849 + * This releases a reference to @obj. Callers must hold the dev->struct_mutex 850 + * lock when calling this function, even when the driver doesn't use 851 + * dev->struct_mutex for anything. 852 + * 853 + * For drivers not encumbered with legacy locking use 854 + * drm_gem_object_unreference_unlocked() instead. 
855 + */ 856 + void 857 + drm_gem_object_unreference(struct drm_gem_object *obj) 858 + { 859 + if (obj) { 860 + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); 861 + 862 + kref_put(&obj->refcount, drm_gem_object_free); 863 + } 864 + } 865 + EXPORT_SYMBOL(drm_gem_object_unreference); 813 866 814 867 /** 815 868 * drm_gem_vm_open - vma->ops->open implementation for GEM
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_drv.c
··· 497 497 .open = etnaviv_open, 498 498 .preclose = etnaviv_preclose, 499 499 .set_busid = drm_platform_set_busid, 500 - .gem_free_object = etnaviv_gem_free_object, 500 + .gem_free_object_unlocked = etnaviv_gem_free_object, 501 501 .gem_vm_ops = &vm_ops, 502 502 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 503 503 .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+2 -2
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 270 270 } 271 271 272 272 int exynos_atomic_commit(struct drm_device *dev, struct drm_atomic_state *state, 273 - bool async) 273 + bool nonblock) 274 274 { 275 275 struct exynos_drm_private *priv = dev->dev_private; 276 276 struct exynos_atomic_commit *commit; ··· 308 308 309 309 drm_atomic_helper_swap_state(dev, state); 310 310 311 - if (async) 311 + if (nonblock) 312 312 schedule_work(&commit->work); 313 313 else 314 314 exynos_atomic_commit_complete(commit);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 308 308 #endif 309 309 310 310 int exynos_atomic_commit(struct drm_device *dev, struct drm_atomic_state *state, 311 - bool async); 311 + bool nonblock); 312 312 313 313 314 314 extern struct platform_driver fimd_driver;
+1 -1
drivers/gpu/drm/fsl-dcu/Kconfig
··· 1 1 config DRM_FSL_DCU 2 2 tristate "DRM Support for Freescale DCU" 3 - depends on DRM && OF && ARM 3 + depends on DRM && OF && ARM && COMMON_CLK 4 4 select BACKLIGHT_CLASS_DEVICE 5 5 select BACKLIGHT_LCD_SUPPORT 6 6 select DRM_KMS_HELPER
+8 -8
drivers/gpu/drm/i915/intel_display.c
··· 13431 13431 13432 13432 static int intel_atomic_prepare_commit(struct drm_device *dev, 13433 13433 struct drm_atomic_state *state, 13434 - bool async) 13434 + bool nonblock) 13435 13435 { 13436 13436 struct drm_i915_private *dev_priv = dev->dev_private; 13437 13437 struct drm_plane_state *plane_state; ··· 13440 13440 struct drm_crtc *crtc; 13441 13441 int i, ret; 13442 13442 13443 - if (async) { 13444 - DRM_DEBUG_KMS("i915 does not yet support async commit\n"); 13443 + if (nonblock) { 13444 + DRM_DEBUG_KMS("i915 does not yet support nonblocking commit\n"); 13445 13445 return -EINVAL; 13446 13446 } 13447 13447 ··· 13464 13464 ret = drm_atomic_helper_prepare_planes(dev, state); 13465 13465 mutex_unlock(&dev->struct_mutex); 13466 13466 13467 - if (!ret && !async) { 13467 + if (!ret && !nonblock) { 13468 13468 for_each_plane_in_state(state, plane, plane_state, i) { 13469 13469 struct intel_plane_state *intel_plane_state = 13470 13470 to_intel_plane_state(plane_state); ··· 13557 13557 * intel_atomic_commit - commit validated state object 13558 13558 * @dev: DRM device 13559 13559 * @state: the top-level driver state object 13560 - * @async: asynchronous commit 13560 + * @nonblock: nonblocking commit 13561 13561 * 13562 13562 * This function commits a top-level state object that has been validated 13563 13563 * with drm_atomic_helper_check(). 13564 13564 * 13565 13565 * FIXME: Atomic modeset support for i915 is not yet complete. At the moment 13566 13566 * we can only handle plane-related operations and do not yet support 13567 - * asynchronous commit. 13567 + * nonblocking commit. 13568 13568 * 13569 13569 * RETURNS 13570 13570 * Zero for success or -errno. 
13571 13571 */ 13572 13572 static int intel_atomic_commit(struct drm_device *dev, 13573 13573 struct drm_atomic_state *state, 13574 - bool async) 13574 + bool nonblock) 13575 13575 { 13576 13576 struct intel_atomic_state *intel_state = to_intel_atomic_state(state); 13577 13577 struct drm_i915_private *dev_priv = dev->dev_private; ··· 13583 13583 unsigned long put_domains[I915_MAX_PIPES] = {}; 13584 13584 unsigned crtc_vblank_mask = 0; 13585 13585 13586 - ret = intel_atomic_prepare_commit(dev, state, async); 13586 + ret = intel_atomic_prepare_commit(dev, state, nonblock); 13587 13587 if (ret) { 13588 13588 DRM_DEBUG_ATOMIC("Preparing state failed with %i\n", ret); 13589 13589 return ret;
+1 -1
drivers/gpu/drm/imx/imx-drm-core.c
··· 411 411 .unload = imx_drm_driver_unload, 412 412 .lastclose = imx_drm_driver_lastclose, 413 413 .set_busid = drm_platform_set_busid, 414 - .gem_free_object = drm_gem_cma_free_object, 414 + .gem_free_object_unlocked = drm_gem_cma_free_object, 415 415 .gem_vm_ops = &drm_gem_cma_vm_ops, 416 416 .dumb_create = drm_gem_cma_dumb_create, 417 417 .dumb_map_offset = drm_gem_cma_dumb_map_offset,
+1 -1
drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
··· 121 121 if (!file || (event->base.file_priv == file)) { 122 122 mdp4_crtc->event = NULL; 123 123 DBG("%s: send event: %p", mdp4_crtc->name, event); 124 - drm_send_vblank_event(dev, mdp4_crtc->id, event); 124 + drm_crtc_send_vblank_event(crtc, event); 125 125 } 126 126 } 127 127 spin_unlock_irqrestore(&dev->event_lock, flags);
+1 -1
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
··· 149 149 if (!file || (event->base.file_priv == file)) { 150 150 mdp5_crtc->event = NULL; 151 151 DBG("%s: send event: %p", mdp5_crtc->name, event); 152 - drm_send_vblank_event(dev, mdp5_crtc->id, event); 152 + drm_crtc_send_vblank_event(crtc, event); 153 153 } 154 154 } 155 155 spin_unlock_irqrestore(&dev->event_lock, flags);
+4 -5
drivers/gpu/drm/msm/msm_atomic.c
··· 190 190 * drm_atomic_helper_commit - commit validated state object 191 191 * @dev: DRM device 192 192 * @state: the driver state object 193 - * @async: asynchronous commit 193 + * @nonblock: nonblocking commit 194 194 * 195 195 * This function commits a with drm_atomic_helper_check() pre-validated state 196 - * object. This can still fail when e.g. the framebuffer reservation fails. For 197 - * now this doesn't implement asynchronous commits. 196 + * object. This can still fail when e.g. the framebuffer reservation fails. 198 197 * 199 198 * RETURNS 200 199 * Zero for success or -errno. 201 200 */ 202 201 int msm_atomic_commit(struct drm_device *dev, 203 - struct drm_atomic_state *state, bool async) 202 + struct drm_atomic_state *state, bool nonblock) 204 203 { 205 204 int nplanes = dev->mode_config.num_total_plane; 206 205 int ncrtcs = dev->mode_config.num_crtc; ··· 275 276 * current layout. 276 277 */ 277 278 278 - if (async) { 279 + if (nonblock) { 279 280 msm_queue_fence_cb(dev, &c->fence_cb, c->fence); 280 281 return 0; 281 282 }
+1 -1
drivers/gpu/drm/msm/msm_drv.h
··· 174 174 int msm_atomic_check(struct drm_device *dev, 175 175 struct drm_atomic_state *state); 176 176 int msm_atomic_commit(struct drm_device *dev, 177 - struct drm_atomic_state *state, bool async); 177 + struct drm_atomic_state *state, bool nonblock); 178 178 179 179 int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); 180 180
+2 -2
drivers/gpu/drm/omapdrm/omap_drv.c
··· 138 138 } 139 139 140 140 static int omap_atomic_commit(struct drm_device *dev, 141 - struct drm_atomic_state *state, bool async) 141 + struct drm_atomic_state *state, bool nonblock) 142 142 { 143 143 struct omap_drm_private *priv = dev->dev_private; 144 144 struct omap_atomic_state_commit *commit; ··· 177 177 /* Swap the state, this is the point of no return. */ 178 178 drm_atomic_helper_swap_state(dev, state); 179 179 180 - if (async) 180 + if (nonblock) 181 181 schedule_work(&commit->work); 182 182 else 183 183 omap_atomic_complete(commit);
+5 -4
drivers/gpu/drm/qxl/qxl_display.c
··· 460 460 .page_flip = qxl_crtc_page_flip, 461 461 }; 462 462 463 - static void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb) 463 + void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb) 464 464 { 465 465 struct qxl_framebuffer *qxl_fb = to_qxl_framebuffer(fb); 466 466 ··· 522 522 qxl_framebuffer_init(struct drm_device *dev, 523 523 struct qxl_framebuffer *qfb, 524 524 const struct drm_mode_fb_cmd2 *mode_cmd, 525 - struct drm_gem_object *obj) 525 + struct drm_gem_object *obj, 526 + const struct drm_framebuffer_funcs *funcs) 526 527 { 527 528 int ret; 528 529 529 530 qfb->obj = obj; 530 - ret = drm_framebuffer_init(dev, &qfb->base, &qxl_fb_funcs); 531 + ret = drm_framebuffer_init(dev, &qfb->base, funcs); 531 532 if (ret) { 532 533 qfb->obj = NULL; 533 534 return ret; ··· 995 994 if (qxl_fb == NULL) 996 995 return NULL; 997 996 998 - ret = qxl_framebuffer_init(dev, qxl_fb, mode_cmd, obj); 997 + ret = qxl_framebuffer_init(dev, qxl_fb, mode_cmd, obj, &qxl_fb_funcs); 999 998 if (ret) { 1000 999 kfree(qxl_fb); 1001 1000 drm_gem_object_unreference_unlocked(obj);
+3 -4
drivers/gpu/drm/qxl/qxl_drv.h
··· 322 322 struct workqueue_struct *gc_queue; 323 323 struct work_struct gc_work; 324 324 325 - struct work_struct fb_work; 326 - 327 325 struct drm_property *hotplug_mode_update_property; 328 326 int monitors_config_width; 329 327 int monitors_config_height; ··· 385 387 void qxl_fbdev_set_suspend(struct qxl_device *qdev, int state); 386 388 387 389 /* qxl_display.c */ 390 + void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb); 388 391 int 389 392 qxl_framebuffer_init(struct drm_device *dev, 390 393 struct qxl_framebuffer *rfb, 391 394 const struct drm_mode_fb_cmd2 *mode_cmd, 392 - struct drm_gem_object *obj); 395 + struct drm_gem_object *obj, 396 + const struct drm_framebuffer_funcs *funcs); 393 397 void qxl_display_read_client_monitors_config(struct qxl_device *qdev); 394 398 void qxl_send_monitors_config(struct qxl_device *qdev); 395 399 int qxl_create_monitors_object(struct qxl_device *qdev); ··· 551 551 irqreturn_t qxl_irq_handler(int irq, void *arg); 552 552 553 553 /* qxl_fb.c */ 554 - int qxl_fb_init(struct qxl_device *qdev); 555 554 bool qxl_fbdev_qobj_is_fb(struct qxl_device *qdev, struct qxl_bo *qobj); 556 555 557 556 int qxl_debugfs_add_files(struct qxl_device *qdev,
+57 -166
drivers/gpu/drm/qxl/qxl_fb.c
··· 46 46 struct list_head delayed_ops; 47 47 void *shadow; 48 48 int size; 49 - 50 - /* dirty memory logging */ 51 - struct { 52 - spinlock_t lock; 53 - unsigned x1; 54 - unsigned y1; 55 - unsigned x2; 56 - unsigned y2; 57 - } dirty; 58 49 }; 59 50 60 51 static void qxl_fb_image_init(struct qxl_fb_image *qxl_fb_image, ··· 73 82 } 74 83 } 75 84 76 - static void qxl_fb_dirty_flush(struct fb_info *info) 77 - { 78 - struct qxl_fbdev *qfbdev = info->par; 79 - struct qxl_device *qdev = qfbdev->qdev; 80 - struct qxl_fb_image qxl_fb_image; 81 - struct fb_image *image = &qxl_fb_image.fb_image; 82 - unsigned long flags; 83 - u32 x1, x2, y1, y2; 84 - 85 - /* TODO: hard coding 32 bpp */ 86 - int stride = qfbdev->qfb.base.pitches[0]; 87 - 88 - spin_lock_irqsave(&qfbdev->dirty.lock, flags); 89 - 90 - x1 = qfbdev->dirty.x1; 91 - x2 = qfbdev->dirty.x2; 92 - y1 = qfbdev->dirty.y1; 93 - y2 = qfbdev->dirty.y2; 94 - qfbdev->dirty.x1 = 0; 95 - qfbdev->dirty.x2 = 0; 96 - qfbdev->dirty.y1 = 0; 97 - qfbdev->dirty.y2 = 0; 98 - 99 - spin_unlock_irqrestore(&qfbdev->dirty.lock, flags); 100 - 101 - /* 102 - * we are using a shadow draw buffer, at qdev->surface0_shadow 103 - */ 104 - qxl_io_log(qdev, "dirty x[%d, %d], y[%d, %d]", x1, x2, y1, y2); 105 - image->dx = x1; 106 - image->dy = y1; 107 - image->width = x2 - x1 + 1; 108 - image->height = y2 - y1 + 1; 109 - image->fg_color = 0xffffffff; /* unused, just to avoid uninitialized 110 - warnings */ 111 - image->bg_color = 0; 112 - image->depth = 32; /* TODO: take from somewhere? 
*/ 113 - image->cmap.start = 0; 114 - image->cmap.len = 0; 115 - image->cmap.red = NULL; 116 - image->cmap.green = NULL; 117 - image->cmap.blue = NULL; 118 - image->cmap.transp = NULL; 119 - image->data = qfbdev->shadow + (x1 * 4) + (stride * y1); 120 - 121 - qxl_fb_image_init(&qxl_fb_image, qdev, info, NULL); 122 - qxl_draw_opaque_fb(&qxl_fb_image, stride); 123 - } 124 - 125 - static void qxl_dirty_update(struct qxl_fbdev *qfbdev, 126 - int x, int y, int width, int height) 127 - { 128 - struct qxl_device *qdev = qfbdev->qdev; 129 - unsigned long flags; 130 - int x2, y2; 131 - 132 - x2 = x + width - 1; 133 - y2 = y + height - 1; 134 - 135 - spin_lock_irqsave(&qfbdev->dirty.lock, flags); 136 - 137 - if ((qfbdev->dirty.y2 - qfbdev->dirty.y1) && 138 - (qfbdev->dirty.x2 - qfbdev->dirty.x1)) { 139 - if (qfbdev->dirty.y1 < y) 140 - y = qfbdev->dirty.y1; 141 - if (qfbdev->dirty.y2 > y2) 142 - y2 = qfbdev->dirty.y2; 143 - if (qfbdev->dirty.x1 < x) 144 - x = qfbdev->dirty.x1; 145 - if (qfbdev->dirty.x2 > x2) 146 - x2 = qfbdev->dirty.x2; 147 - } 148 - 149 - qfbdev->dirty.x1 = x; 150 - qfbdev->dirty.x2 = x2; 151 - qfbdev->dirty.y1 = y; 152 - qfbdev->dirty.y2 = y2; 153 - 154 - spin_unlock_irqrestore(&qfbdev->dirty.lock, flags); 155 - 156 - schedule_work(&qdev->fb_work); 157 - } 158 - 159 - static void qxl_deferred_io(struct fb_info *info, 160 - struct list_head *pagelist) 161 - { 162 - struct qxl_fbdev *qfbdev = info->par; 163 - unsigned long start, end, min, max; 164 - struct page *page; 165 - int y1, y2; 166 - 167 - min = ULONG_MAX; 168 - max = 0; 169 - list_for_each_entry(page, pagelist, lru) { 170 - start = page->index << PAGE_SHIFT; 171 - end = start + PAGE_SIZE - 1; 172 - min = min(min, start); 173 - max = max(max, end); 174 - } 175 - 176 - if (min < max) { 177 - y1 = min / info->fix.line_length; 178 - y2 = (max / info->fix.line_length) + 1; 179 - qxl_dirty_update(qfbdev, 0, y1, info->var.xres, y2 - y1); 180 - } 181 - }; 182 - 183 85 static struct fb_deferred_io 
qxl_defio = { 184 86 .delay = QXL_DIRTY_DELAY, 185 - .deferred_io = qxl_deferred_io, 87 + .deferred_io = drm_fb_helper_deferred_io, 186 88 }; 187 - 188 - static void qxl_fb_fillrect(struct fb_info *info, 189 - const struct fb_fillrect *rect) 190 - { 191 - struct qxl_fbdev *qfbdev = info->par; 192 - 193 - drm_fb_helper_sys_fillrect(info, rect); 194 - qxl_dirty_update(qfbdev, rect->dx, rect->dy, rect->width, 195 - rect->height); 196 - } 197 - 198 - static void qxl_fb_copyarea(struct fb_info *info, 199 - const struct fb_copyarea *area) 200 - { 201 - struct qxl_fbdev *qfbdev = info->par; 202 - 203 - drm_fb_helper_sys_copyarea(info, area); 204 - qxl_dirty_update(qfbdev, area->dx, area->dy, area->width, 205 - area->height); 206 - } 207 - 208 - static void qxl_fb_imageblit(struct fb_info *info, 209 - const struct fb_image *image) 210 - { 211 - struct qxl_fbdev *qfbdev = info->par; 212 - 213 - drm_fb_helper_sys_imageblit(info, image); 214 - qxl_dirty_update(qfbdev, image->dx, image->dy, image->width, 215 - image->height); 216 - } 217 - 218 - static void qxl_fb_work(struct work_struct *work) 219 - { 220 - struct qxl_device *qdev = container_of(work, struct qxl_device, fb_work); 221 - struct qxl_fbdev *qfbdev = qdev->mode_info.qfbdev; 222 - 223 - qxl_fb_dirty_flush(qfbdev->helper.fbdev); 224 - } 225 - 226 - int qxl_fb_init(struct qxl_device *qdev) 227 - { 228 - INIT_WORK(&qdev->fb_work, qxl_fb_work); 229 - return 0; 230 - } 231 89 232 90 static struct fb_ops qxlfb_ops = { 233 91 .owner = THIS_MODULE, 234 92 .fb_check_var = drm_fb_helper_check_var, 235 93 .fb_set_par = drm_fb_helper_set_par, /* TODO: copy vmwgfx */ 236 - .fb_fillrect = qxl_fb_fillrect, 237 - .fb_copyarea = qxl_fb_copyarea, 238 - .fb_imageblit = qxl_fb_imageblit, 94 + .fb_fillrect = drm_fb_helper_sys_fillrect, 95 + .fb_copyarea = drm_fb_helper_sys_copyarea, 96 + .fb_imageblit = drm_fb_helper_sys_imageblit, 239 97 .fb_pan_display = drm_fb_helper_pan_display, 240 98 .fb_blank = drm_fb_helper_blank, 241 99 
.fb_setcmap = drm_fb_helper_setcmap, ··· 178 338 return ret; 179 339 } 180 340 341 + /* 342 + * FIXME 343 + * It should not be necessary to have a special dirty() callback for fbdev. 344 + */ 345 + static int qxlfb_framebuffer_dirty(struct drm_framebuffer *fb, 346 + struct drm_file *file_priv, 347 + unsigned flags, unsigned color, 348 + struct drm_clip_rect *clips, 349 + unsigned num_clips) 350 + { 351 + struct qxl_device *qdev = fb->dev->dev_private; 352 + struct fb_info *info = qdev->fbdev_info; 353 + struct qxl_fbdev *qfbdev = info->par; 354 + struct qxl_fb_image qxl_fb_image; 355 + struct fb_image *image = &qxl_fb_image.fb_image; 356 + 357 + /* TODO: hard coding 32 bpp */ 358 + int stride = qfbdev->qfb.base.pitches[0]; 359 + 360 + /* 361 + * we are using a shadow draw buffer, at qdev->surface0_shadow 362 + */ 363 + qxl_io_log(qdev, "dirty x[%d, %d], y[%d, %d]", clips->x1, clips->x2, 364 + clips->y1, clips->y2); 365 + image->dx = clips->x1; 366 + image->dy = clips->y1; 367 + image->width = clips->x2 - clips->x1; 368 + image->height = clips->y2 - clips->y1; 369 + image->fg_color = 0xffffffff; /* unused, just to avoid uninitialized 370 + warnings */ 371 + image->bg_color = 0; 372 + image->depth = 32; /* TODO: take from somewhere? 
*/ 373 + image->cmap.start = 0; 374 + image->cmap.len = 0; 375 + image->cmap.red = NULL; 376 + image->cmap.green = NULL; 377 + image->cmap.blue = NULL; 378 + image->cmap.transp = NULL; 379 + image->data = qfbdev->shadow + (clips->x1 * 4) + (stride * clips->y1); 380 + 381 + qxl_fb_image_init(&qxl_fb_image, qdev, info, NULL); 382 + qxl_draw_opaque_fb(&qxl_fb_image, stride); 383 + 384 + return 0; 385 + } 386 + 387 + static const struct drm_framebuffer_funcs qxlfb_fb_funcs = { 388 + .destroy = qxl_user_framebuffer_destroy, 389 + .dirty = qxlfb_framebuffer_dirty, 390 + }; 391 + 181 392 static int qxlfb_create(struct qxl_fbdev *qfbdev, 182 393 struct drm_fb_helper_surface_size *sizes) 183 394 { ··· 274 383 275 384 info->par = qfbdev; 276 385 277 - qxl_framebuffer_init(qdev->ddev, &qfbdev->qfb, &mode_cmd, gobj); 386 + qxl_framebuffer_init(qdev->ddev, &qfbdev->qfb, &mode_cmd, gobj, 387 + &qxlfb_fb_funcs); 278 388 279 389 fb = &qfbdev->qfb.base; 280 390 ··· 396 504 qfbdev->qdev = qdev; 397 505 qdev->mode_info.qfbdev = qfbdev; 398 506 spin_lock_init(&qfbdev->delayed_ops_lock); 399 - spin_lock_init(&qfbdev->dirty.lock); 400 507 INIT_LIST_HEAD(&qfbdev->delayed_ops); 401 508 402 509 drm_fb_helper_prepare(qdev->ddev, &qfbdev->helper,
-4
drivers/gpu/drm/qxl/qxl_kms.c
··· 261 261 qdev->gc_queue = create_singlethread_workqueue("qxl_gc"); 262 262 INIT_WORK(&qdev->gc_work, qxl_gc_work); 263 263 264 - r = qxl_fb_init(qdev); 265 - if (r) 266 - return r; 267 - 268 264 return 0; 269 265 } 270 266
+1 -1
drivers/gpu/drm/radeon/radeon_display.c
··· 377 377 378 378 /* wakeup userspace */ 379 379 if (work->event) 380 - drm_send_vblank_event(rdev->ddev, crtc_id, work->event); 380 + drm_crtc_send_vblank_event(&radeon_crtc->base, work->event); 381 381 382 382 spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); 383 383
+1 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 525 525 .irq_uninstall = radeon_driver_irq_uninstall_kms, 526 526 .irq_handler = radeon_driver_irq_handler_kms, 527 527 .ioctls = radeon_ioctls_kms, 528 - .gem_free_object = radeon_gem_object_free, 528 + .gem_free_object_unlocked = radeon_gem_object_free, 529 529 .gem_open_object = radeon_gem_object_open, 530 530 .gem_close_object = radeon_gem_object_close, 531 531 .dumb_create = radeon_mode_dumb_create,
+1 -1
drivers/gpu/drm/rcar-du/rcar_du_crtc.c
··· 314 314 return; 315 315 316 316 spin_lock_irqsave(&dev->event_lock, flags); 317 - drm_send_vblank_event(dev, rcrtc->index, event); 317 + drm_crtc_send_vblank_event(&rcrtc->crtc, event); 318 318 wake_up(&rcrtc->flip_wait); 319 319 spin_unlock_irqrestore(&dev->event_lock, flags); 320 320
+3 -2
drivers/gpu/drm/rcar-du/rcar_du_kms.c
··· 283 283 } 284 284 285 285 static int rcar_du_atomic_commit(struct drm_device *dev, 286 - struct drm_atomic_state *state, bool async) 286 + struct drm_atomic_state *state, 287 + bool nonblock) 287 288 { 288 289 struct rcar_du_device *rcdu = dev->dev_private; 289 290 struct rcar_du_commit *commit; ··· 329 328 /* Swap the state, this is the point of no return. */ 330 329 drm_atomic_helper_swap_state(dev, state); 331 330 332 - if (async) 331 + if (nonblock) 333 332 schedule_work(&commit->work); 334 333 else 335 334 rcar_du_atomic_complete(commit);
+3 -3
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
··· 276 276 277 277 int rockchip_drm_atomic_commit(struct drm_device *dev, 278 278 struct drm_atomic_state *state, 279 - bool async) 279 + bool nonblock) 280 280 { 281 281 struct rockchip_drm_private *private = dev->dev_private; 282 282 struct rockchip_atomic_commit *commit = &private->commit; ··· 286 286 if (ret) 287 287 return ret; 288 288 289 - /* serialize outstanding asynchronous commits */ 289 + /* serialize outstanding nonblocking commits */ 290 290 mutex_lock(&commit->lock); 291 291 flush_work(&commit->work); 292 292 ··· 295 295 commit->dev = dev; 296 296 commit->state = state; 297 297 298 - if (async) 298 + if (nonblock) 299 299 schedule_work(&commit->work); 300 300 else 301 301 rockchip_atomic_commit_complete(commit);
+1 -1
drivers/gpu/drm/shmobile/shmob_drm_crtc.c
··· 440 440 event = scrtc->event; 441 441 scrtc->event = NULL; 442 442 if (event) { 443 - drm_send_vblank_event(dev, 0, event); 443 + drm_crtc_send_vblank_event(&scrtc->crtc, event); 444 444 drm_vblank_put(dev, 0); 445 445 } 446 446 spin_unlock_irqrestore(&dev->event_lock, flags);
+3 -3
drivers/gpu/drm/sti/sti_drv.c
··· 202 202 } 203 203 204 204 static int sti_atomic_commit(struct drm_device *drm, 205 - struct drm_atomic_state *state, bool async) 205 + struct drm_atomic_state *state, bool nonblock) 206 206 { 207 207 struct sti_private *private = drm->dev_private; 208 208 int err; ··· 211 211 if (err) 212 212 return err; 213 213 214 - /* serialize outstanding asynchronous commits */ 214 + /* serialize outstanding nonblocking commits */ 215 215 mutex_lock(&private->commit.lock); 216 216 flush_work(&private->commit.work); 217 217 ··· 223 223 224 224 drm_atomic_helper_swap_state(drm, state); 225 225 226 - if (async) 226 + if (nonblock) 227 227 sti_atomic_schedule(private, state); 228 228 else 229 229 sti_atomic_complete(private, state);
+3 -3
drivers/gpu/drm/tegra/drm.c
··· 74 74 } 75 75 76 76 static int tegra_atomic_commit(struct drm_device *drm, 77 - struct drm_atomic_state *state, bool async) 77 + struct drm_atomic_state *state, bool nonblock) 78 78 { 79 79 struct tegra_drm *tegra = drm->dev_private; 80 80 int err; ··· 83 83 if (err) 84 84 return err; 85 85 86 - /* serialize outstanding asynchronous commits */ 86 + /* serialize outstanding nonblocking commits */ 87 87 mutex_lock(&tegra->commit.lock); 88 88 flush_work(&tegra->commit.work); 89 89 ··· 95 95 96 96 drm_atomic_helper_swap_state(drm, state); 97 97 98 - if (async) 98 + if (nonblock) 99 99 tegra_atomic_schedule(tegra, state); 100 100 else 101 101 tegra_atomic_complete(tegra, state);
+1 -1
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
··· 707 707 event = tilcdc_crtc->event; 708 708 tilcdc_crtc->event = NULL; 709 709 if (event) 710 - drm_send_vblank_event(dev, 0, event); 710 + drm_crtc_send_vblank_event(crtc, event); 711 711 712 712 spin_unlock_irqrestore(&dev->event_lock, flags); 713 713 }
-2
drivers/gpu/drm/udl/udl_drv.h
··· 81 81 struct drm_framebuffer base; 82 82 struct udl_gem_object *obj; 83 83 bool active_16; /* active on the 16-bit channel */ 84 - int x1, y1, x2, y2; /* dirty rect */ 85 - spinlock_t dirty_lock; 86 84 }; 87 85 88 86 #define to_udl_fb(x) container_of(x, struct udl_framebuffer, base)
+6 -134
drivers/gpu/drm/udl/udl_fb.c
··· 77 77 } 78 78 #endif 79 79 80 - /* 81 - * NOTE: fb_defio.c is holding info->fbdefio.mutex 82 - * Touching ANY framebuffer memory that triggers a page fault 83 - * in fb_defio will cause a deadlock, when it also tries to 84 - * grab the same mutex. 85 - */ 86 - static void udlfb_dpy_deferred_io(struct fb_info *info, 87 - struct list_head *pagelist) 88 - { 89 - struct page *cur; 90 - struct fb_deferred_io *fbdefio = info->fbdefio; 91 - struct udl_fbdev *ufbdev = info->par; 92 - struct drm_device *dev = ufbdev->ufb.base.dev; 93 - struct udl_device *udl = dev->dev_private; 94 - struct urb *urb; 95 - char *cmd; 96 - cycles_t start_cycles, end_cycles; 97 - int bytes_sent = 0; 98 - int bytes_identical = 0; 99 - int bytes_rendered = 0; 100 - 101 - if (!fb_defio) 102 - return; 103 - 104 - start_cycles = get_cycles(); 105 - 106 - urb = udl_get_urb(dev); 107 - if (!urb) 108 - return; 109 - 110 - cmd = urb->transfer_buffer; 111 - 112 - /* walk the written page list and render each to device */ 113 - list_for_each_entry(cur, &fbdefio->pagelist, lru) { 114 - 115 - if (udl_render_hline(dev, (ufbdev->ufb.base.bits_per_pixel / 8), 116 - &urb, (char *) info->fix.smem_start, 117 - &cmd, cur->index << PAGE_SHIFT, 118 - cur->index << PAGE_SHIFT, 119 - PAGE_SIZE, &bytes_identical, &bytes_sent)) 120 - goto error; 121 - bytes_rendered += PAGE_SIZE; 122 - } 123 - 124 - if (cmd > (char *) urb->transfer_buffer) { 125 - /* Send partial buffer remaining before exiting */ 126 - int len = cmd - (char *) urb->transfer_buffer; 127 - udl_submit_urb(dev, urb, len); 128 - bytes_sent += len; 129 - } else 130 - udl_urb_completion(urb); 131 - 132 - error: 133 - atomic_add(bytes_sent, &udl->bytes_sent); 134 - atomic_add(bytes_identical, &udl->bytes_identical); 135 - atomic_add(bytes_rendered, &udl->bytes_rendered); 136 - end_cycles = get_cycles(); 137 - atomic_add(((unsigned int) ((end_cycles - start_cycles) 138 - >> 10)), /* Kcycles */ 139 - &udl->cpu_kcycles_used); 140 - } 141 - 142 80 int 
udl_handle_damage(struct udl_framebuffer *fb, int x, int y, 143 81 int width, int height) 144 82 { ··· 90 152 struct urb *urb; 91 153 int aligned_x; 92 154 int bpp = (fb->base.bits_per_pixel / 8); 93 - int x2, y2; 94 - bool store_for_later = false; 95 - unsigned long flags; 96 155 97 156 if (!fb->active_16) 98 157 return 0; ··· 115 180 (y + height > fb->base.height)) 116 181 return -EINVAL; 117 182 118 - /* if we are in atomic just store the info 119 - can't test inside spin lock */ 120 - if (in_atomic()) 121 - store_for_later = true; 122 - 123 - x2 = x + width - 1; 124 - y2 = y + height - 1; 125 - 126 - spin_lock_irqsave(&fb->dirty_lock, flags); 127 - 128 - if (fb->y1 < y) 129 - y = fb->y1; 130 - if (fb->y2 > y2) 131 - y2 = fb->y2; 132 - if (fb->x1 < x) 133 - x = fb->x1; 134 - if (fb->x2 > x2) 135 - x2 = fb->x2; 136 - 137 - if (store_for_later) { 138 - fb->x1 = x; 139 - fb->x2 = x2; 140 - fb->y1 = y; 141 - fb->y2 = y2; 142 - spin_unlock_irqrestore(&fb->dirty_lock, flags); 143 - return 0; 144 - } 145 - 146 - fb->x1 = fb->y1 = INT_MAX; 147 - fb->x2 = fb->y2 = 0; 148 - 149 - spin_unlock_irqrestore(&fb->dirty_lock, flags); 150 183 start_cycles = get_cycles(); 151 184 152 185 urb = udl_get_urb(dev); ··· 122 219 return 0; 123 220 cmd = urb->transfer_buffer; 124 221 125 - for (i = y; i <= y2 ; i++) { 222 + for (i = y; i < y + height ; i++) { 126 223 const int line_offset = fb->base.pitches[0] * i; 127 224 const int byte_offset = line_offset + (x * bpp); 128 225 const int dev_byte_offset = (fb->base.width * bpp * i) + (x * bpp); 129 226 if (udl_render_hline(dev, bpp, &urb, 130 227 (char *) fb->obj->vmapping, 131 228 &cmd, byte_offset, dev_byte_offset, 132 - (x2 - x + 1) * bpp, 229 + width * bpp, 133 230 &bytes_identical, &bytes_sent)) 134 231 goto error; 135 232 } ··· 186 283 return 0; 187 284 } 189 - static void udl_fb_fillrect(struct fb_info *info, const struct fb_fillrect *rect) 190 - { 191 - struct udl_fbdev *ufbdev = info->par; 192 - 193 - 
drm_fb_helper_sys_fillrect(info, rect); 194 - 195 - udl_handle_damage(&ufbdev->ufb, rect->dx, rect->dy, rect->width, 196 - rect->height); 197 - } 198 - 199 - static void udl_fb_copyarea(struct fb_info *info, const struct fb_copyarea *region) 200 - { 201 - struct udl_fbdev *ufbdev = info->par; 202 - 203 - drm_fb_helper_sys_copyarea(info, region); 204 - 205 - udl_handle_damage(&ufbdev->ufb, region->dx, region->dy, region->width, 206 - region->height); 207 - } 208 - 209 - static void udl_fb_imageblit(struct fb_info *info, const struct fb_image *image) 210 - { 211 - struct udl_fbdev *ufbdev = info->par; 212 - 213 - drm_fb_helper_sys_imageblit(info, image); 214 - 215 - udl_handle_damage(&ufbdev->ufb, image->dx, image->dy, image->width, 216 - image->height); 217 - } 218 - 219 286 /* 220 287 * It's common for several clients to have framebuffer open simultaneously. 221 288 * e.g. both fbcon and X. Makes things interesting. ··· 212 339 213 340 if (fbdefio) { 214 341 fbdefio->delay = DL_DEFIO_WRITE_DELAY; 215 - fbdefio->deferred_io = udlfb_dpy_deferred_io; 342 + fbdefio->deferred_io = drm_fb_helper_deferred_io; 216 343 } 217 344 218 345 info->fbdefio = fbdefio; ··· 252 379 .owner = THIS_MODULE, 253 380 .fb_check_var = drm_fb_helper_check_var, 254 381 .fb_set_par = drm_fb_helper_set_par, 255 - .fb_fillrect = udl_fb_fillrect, 256 - .fb_copyarea = udl_fb_copyarea, 257 - .fb_imageblit = udl_fb_imageblit, 382 + .fb_fillrect = drm_fb_helper_sys_fillrect, 383 + .fb_copyarea = drm_fb_helper_sys_copyarea, 384 + .fb_imageblit = drm_fb_helper_sys_imageblit, 258 385 .fb_pan_display = drm_fb_helper_pan_display, 259 386 .fb_blank = drm_fb_helper_blank, 260 387 .fb_setcmap = drm_fb_helper_setcmap, ··· 331 458 { 332 459 int ret; 333 460 334 - spin_lock_init(&ufb->dirty_lock); 335 461 ufb->obj = obj; 336 462 drm_helper_mode_fill_fb_struct(&ufb->base, mode_cmd); 337 463 ret = drm_framebuffer_init(dev, &ufb->base, &udlfb_funcs);
+3 -3
drivers/gpu/drm/vc4/vc4_kms.c
···
 * vc4_atomic_commit - commit validated state object
 * @dev: DRM device
 * @state: the driver state object
- * @async: asynchronous commit
+ * @nonblock: nonblocking commit
 *
 * This function commits a with drm_atomic_helper_check() pre-validated state
 * object. This can still fail when e.g. the framebuffer reservation fails. For
···
 */
static int vc4_atomic_commit(struct drm_device *dev,
			     struct drm_atomic_state *state,
-			     bool async)
+			     bool nonblock)
{
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	int ret;
···
	 * current layout.
	 */

-	if (async) {
+	if (nonblock) {
		vc4_queue_seqno_cb(dev, &c->cb, wait_seqno,
				   vc4_atomic_complete_commit_seqno_cb);
	} else {
+2 -1
drivers/video/fbdev/core/fb_defio.c
···
	.set_page_dirty = fb_deferred_io_set_page_dirty,
};

-static int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
+int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
	vma->vm_ops = &fb_deferred_io_vm_ops;
	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
···
	vma->vm_private_data = info;
	return 0;
}
+EXPORT_SYMBOL(fb_deferred_io_mmap);

/* workqueue callback */
static void fb_deferred_io_work(struct work_struct *work)
+12 -3
include/drm/drmP.h
···
	void (*debugfs_cleanup)(struct drm_minor *minor);

	/**
-	 * Driver-specific constructor for drm_gem_objects, to set up
-	 * obj->driver_private.
+	 * @gem_free_object: deconstructor for drm_gem_objects
	 *
-	 * Returns 0 on success.
+	 * This is deprecated and should not be used by new drivers. Use
+	 * @gem_free_object_unlocked instead.
	 */
	void (*gem_free_object) (struct drm_gem_object *obj);
+
+	/**
+	 * @gem_free_object_unlocked: deconstructor for drm_gem_objects
+	 *
+	 * This is for drivers which are not encumbered with dev->struct_mutex
+	 * legacy locking schemes. Use this hook instead of @gem_free_object.
+	 */
+	void (*gem_free_object_unlocked) (struct drm_gem_object *obj);
+
	int (*gem_open_object) (struct drm_gem_object *, struct drm_file *);
	void (*gem_close_object) (struct drm_gem_object *, struct drm_file *);
+1 -1
include/drm/drm_atomic.h
···

int __must_check drm_atomic_check_only(struct drm_atomic_state *state);
int __must_check drm_atomic_commit(struct drm_atomic_state *state);
-int __must_check drm_atomic_async_commit(struct drm_atomic_state *state);
+int __must_check drm_atomic_nonblocking_commit(struct drm_atomic_state *state);

#define for_each_connector_in_state(state, connector, connector_state, __i) \
	for ((__i) = 0;							\
+1 -1
include/drm/drm_atomic_helper.h
···
			    struct drm_atomic_state *state);
int drm_atomic_helper_commit(struct drm_device *dev,
			     struct drm_atomic_state *state,
-			     bool async);
+			     bool nonblock);

void drm_atomic_helper_wait_for_fences(struct drm_device *dev,
				       struct drm_atomic_state *state);
+4 -4
include/drm/drm_crtc.h
···
	 * drm_atomic_helper_commit(), or one of the exported sub-functions of
	 * it.
	 *
-	 * Asynchronous commits (as indicated with the async parameter) must
+	 * Nonblocking commits (as indicated with the nonblock parameter) must
	 * do any preparatory work which might result in an unsuccessful commit
	 * in the context of this callback. The only exceptions are hardware
	 * errors resulting in -EIO. But even in that case the driver must
···
	 * The driver must wait for any pending rendering to the new
	 * framebuffers to complete before executing the flip. It should also
	 * wait for any pending rendering from other drivers if the underlying
-	 * buffer is a shared dma-buf. Asynchronous commits must not wait for
+	 * buffer is a shared dma-buf. Nonblocking commits must not wait for
	 * rendering in the context of this callback.
	 *
	 * An application can request to be notified when the atomic commit has
···
	 *
	 * 0 on success or one of the below negative error codes:
	 *
-	 * - -EBUSY, if an asynchronous updated is requested and there is
+	 * - -EBUSY, if a nonblocking updated is requested and there is
	 *   an earlier updated pending. Drivers are allowed to support a queue
	 *   of outstanding updates, but currently no driver supports that.
	 *   Note that drivers must wait for preceding updates to complete if a
···
	 */
	int (*atomic_commit)(struct drm_device *dev,
			     struct drm_atomic_state *state,
-			     bool async);
+			     bool nonblock);

	/**
	 * @atomic_state_alloc:
+14
include/drm/drm_fb_cma_helper.h
···
struct drm_fbdev_cma;
struct drm_gem_cma_object;

+struct drm_fb_helper_surface_size;
+struct drm_framebuffer_funcs;
+struct drm_fb_helper_funcs;
struct drm_framebuffer;
+struct drm_fb_helper;
struct drm_device;
struct drm_file;
struct drm_mode_fb_cmd2;

+struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev,
+	unsigned int preferred_bpp, unsigned int num_crtc,
+	unsigned int max_conn_count, const struct drm_fb_helper_funcs *funcs);
struct drm_fbdev_cma *drm_fbdev_cma_init(struct drm_device *dev,
	unsigned int preferred_bpp, unsigned int num_crtc,
	unsigned int max_conn_count);
···
void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma);
void drm_fbdev_cma_hotplug_event(struct drm_fbdev_cma *fbdev_cma);
+int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
+	struct drm_fb_helper_surface_size *sizes,
+	struct drm_framebuffer_funcs *funcs);
+
+void drm_fb_cma_destroy(struct drm_framebuffer *fb);
+int drm_fb_cma_create_handle(struct drm_framebuffer *fb,
+	struct drm_file *file_priv, unsigned int *handle);

struct drm_framebuffer *drm_fb_cma_create(struct drm_device *dev,
	struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd);
+15
include/drm/drm_fb_helper.h
···
 * @funcs: driver callbacks for fb helper
 * @fbdev: emulated fbdev device info struct
 * @pseudo_palette: fake palette of 16 colors
+ * @dirty_clip: clip rectangle used with deferred_io to accumulate damage to
+ *              the screen buffer
+ * @dirty_lock: spinlock protecting @dirty_clip
+ * @dirty_work: worker used to flush the framebuffer
 *
 * This is the main structure used by the fbdev helpers. Drivers supporting
 * fbdev emulation should embedded this into their overall driver structure.
···
	const struct drm_fb_helper_funcs *funcs;
	struct fb_info *fbdev;
	u32 pseudo_palette[17];
+	struct drm_clip_rect dirty_clip;
+	spinlock_t dirty_lock;
+	struct work_struct dirty_work;

	/**
	 * @kernel_fb_list:
···
			    uint32_t depth);

void drm_fb_helper_unlink_fbi(struct drm_fb_helper *fb_helper);
+
+void drm_fb_helper_deferred_io(struct fb_info *info,
+			       struct list_head *pagelist);

ssize_t drm_fb_helper_sys_read(struct fb_info *info, char __user *buf,
			       size_t count, loff_t *ppos);
···
}

static inline void drm_fb_helper_unlink_fbi(struct drm_fb_helper *fb_helper)
+{
+}
+
+static inline void drm_fb_helper_deferred_io(struct fb_info *info,
+					     struct list_head *pagelist)
{
}
+15 -33
include/drm/drm_gem.h
···
}

/**
- * drm_gem_object_unreference - release a GEM BO reference
+ * __drm_gem_object_unreference - raw function to release a GEM BO reference
 * @obj: GEM buffer object
 *
- * This releases a reference to @obj. Callers must hold the dev->struct_mutex
- * lock when calling this function, even when the driver doesn't use
- * dev->struct_mutex for anything.
+ * This function is meant to be used by drivers which are not encumbered with
+ * dev->struct_mutex legacy locking and which are using the
+ * gem_free_object_unlocked callback. It avoids all the locking checks and
+ * locking overhead of drm_gem_object_unreference() and
+ * drm_gem_object_unreference_unlocked().
 *
- * For drivers not encumbered with legacy locking use
- * drm_gem_object_unreference_unlocked() instead.
+ * Drivers should never call this directly in their code. Instead they should
+ * wrap it up into a driver_gem_object_unreference(struct driver_gem_object
+ * *obj) wrapper function, and use that. Shared code should never call this, to
+ * avoid breaking drivers by accident which still depend upon dev->struct_mutex
+ * locking.
 */
static inline void
-drm_gem_object_unreference(struct drm_gem_object *obj)
+__drm_gem_object_unreference(struct drm_gem_object *obj)
{
-	if (obj != NULL) {
-		WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-
-		kref_put(&obj->refcount, drm_gem_object_free);
-	}
+	kref_put(&obj->refcount, drm_gem_object_free);
}

-/**
- * drm_gem_object_unreference_unlocked - release a GEM BO reference
- * @obj: GEM buffer object
- *
- * This releases a reference to @obj. Callers must not hold the
- * dev->struct_mutex lock when calling this function.
- */
-static inline void
-drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
-{
-	struct drm_device *dev;
-
-	if (!obj)
-		return;
-
-	dev = obj->dev;
-	if (kref_put_mutex(&obj->refcount, drm_gem_object_free, &dev->struct_mutex))
-		mutex_unlock(&dev->struct_mutex);
-	else
-		might_lock(&dev->struct_mutex);
-}
+void drm_gem_object_unreference_unlocked(struct drm_gem_object *obj);
+void drm_gem_object_unreference(struct drm_gem_object *obj);

int drm_gem_handle_create(struct drm_file *file_priv,
			  struct drm_gem_object *obj,
+1
include/linux/fb.h
···
}

/* drivers/video/fb_defio.c */
+int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
extern void fb_deferred_io_init(struct fb_info *info);
extern void fb_deferred_io_open(struct fb_info *info,
				struct inode *inode,