Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2018-06-21' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 4.19:

UAPI Changes:
- Add writeback connector (Brian Starkey/Liviu Dudau)
- Add "content type" property to HDMI connectors (Stanislav Lisovskiy)

Cross-subsystem Changes:
- Some devicetree doc updates
- fix compile breakage on ION due to the dma-buf cleanups (Christian König)

Core Changes:
- Reject over-sized allocation requests early (Chris Wilson)
- gem-fb-helper: Always do implicit sync (Daniel Vetter)
- dma-buf cleanups (Christian König)

Driver Changes:
- Fixes for the otm8009a panel driver (Philippe Cornu)
- Add Innolux TV123WAM panel driver support (Sandeep Panda)
- Move GEM BO to drm_framebuffer in a few drivers (Daniel Stone)
- i915 pinning improvements (Chris Wilson)
- Stop consulting plane->fb/crtc in a few drivers (Ville Syrjälä)

Signed-off-by: Dave Airlie <airlied@redhat.com>

Link: https://patchwork.freedesktop.org/patch/msgid/20180621105428.GA20795@juma

+2282 -1245
+29
Documentation/devicetree/bindings/display/panel/auo,g070vvn01.txt
··· 1 + AU Optronics Corporation 7.0" WVGA (800 x 480) TFT LCD panel 2 + 3 + Required properties: 4 + - compatible: should be "auo,g070vvn01" 5 + - backlight: phandle of the backlight device attached to the panel 6 + - power-supply: single regulator to provide the supply voltage 7 + 8 + Required nodes: 9 + - port: Parallel port mapping to connect this display 10 + 11 + This panel needs a single power supply voltage. Its backlight is controlled 12 + via PWM signal. 13 + 14 + Example: 15 + -------- 16 + 17 + Example device-tree definition when connected to an iMX6Q based board 18 + 19 + lcd_panel: lcd-panel { 20 + compatible = "auo,g070vvn01"; 21 + backlight = <&backlight_lcd>; 22 + power-supply = <&reg_display>; 23 + 24 + port { 25 + lcd_panel_in: endpoint { 26 + remote-endpoint = <&lcd_display_out>; 27 + }; 28 + }; 29 + };
+20
Documentation/devicetree/bindings/display/panel/innolux,tv123wam.txt
··· 1 + Innolux TV123WAM 12.3 inch eDP 2K display panel 2 + 3 + This binding is compatible with the simple-panel binding, which is specified 4 + in simple-panel.txt in this directory. 5 + 6 + Required properties: 7 + - compatible: should be "innolux,tv123wam" 8 + - power-supply: regulator to provide the supply voltage 9 + 10 + Optional properties: 11 + - enable-gpios: GPIO pin to enable or disable the panel 12 + - backlight: phandle of the backlight device attached to the panel 13 + 14 + Example: 15 + panel_edp: panel-edp { 16 + compatible = "innolux,tv123wam"; 17 + enable-gpios = <&msmgpio 31 GPIO_ACTIVE_LOW>; 18 + power-supply = <&pm8916_l2>; 19 + backlight = <&backlight>; 20 + };
+15
Documentation/gpu/drm-kms.rst
··· 373 373 .. kernel-doc:: drivers/gpu/drm/drm_connector.c 374 374 :export: 375 375 376 + Writeback Connectors 377 + -------------------- 378 + 379 + .. kernel-doc:: drivers/gpu/drm/drm_writeback.c 380 + :doc: overview 381 + 382 + .. kernel-doc:: drivers/gpu/drm/drm_writeback.c 383 + :export: 384 + 376 385 Encoder Abstraction 377 386 =================== 378 387 ··· 525 516 526 517 .. kernel-doc:: drivers/gpu/drm/drm_connector.c 527 518 :doc: standard connector properties 519 + 520 + HDMI Specific Connector Properties 521 + ---------------------------------- 522 + 523 + .. kernel-doc:: drivers/gpu/drm/drm_connector.c 524 + :doc: HDMI connector properties 528 525 529 526 Plane Composition Properties 530 527 ----------------------------
+1
Documentation/gpu/kms-properties.csv
··· 17 17 ,Virtual GPU,“suggested X”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest an X offset for a connector 18 18 ,,“suggested Y”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest a Y offset for a connector 19 19 ,Optional,"""aspect ratio""",ENUM,"{ ""None"", ""4:3"", ""16:9"" }",Connector,TBD 20 + ,Optional,"""content type""",ENUM,"{ ""No Data"", ""Graphics"", ""Photo"", ""Cinema"", ""Game"" }",Connector,TBD 20 21 i915,Generic,"""Broadcast RGB""",ENUM,"{ ""Automatic"", ""Full"", ""Limited 16:235"" }",Connector,"When this property is set to Limited 16:235 and CTM is set, the hardware will be programmed with the result of the multiplication of CTM by the limited range matrix to ensure the pixels normally in the range 0..1.0 are remapped to the range 16/255..235/255." 21 22 ,,“audio”,ENUM,"{ ""force-dvi"", ""off"", ""auto"", ""on"" }",Connector,TBD 22 23 ,SDVO-TV,“mode”,ENUM,"{ ""NTSC_M"", ""NTSC_J"", ""NTSC_443"", ""PAL_B"" } etc.",Connector,TBD
+5 -51
drivers/dma-buf/dma-buf.c
··· 405 405 || !exp_info->ops->map_dma_buf 406 406 || !exp_info->ops->unmap_dma_buf 407 407 || !exp_info->ops->release 408 - || !exp_info->ops->map_atomic 409 408 || !exp_info->ops->map 410 409 || !exp_info->ops->mmap)) { 411 410 return ERR_PTR(-EINVAL); ··· 567 568 mutex_lock(&dmabuf->lock); 568 569 569 570 if (dmabuf->ops->attach) { 570 - ret = dmabuf->ops->attach(dmabuf, dev, attach); 571 + ret = dmabuf->ops->attach(dmabuf, attach); 571 572 if (ret) 572 573 goto err_attach; 573 574 } ··· 686 687 * void \*dma_buf_kmap(struct dma_buf \*, unsigned long); 687 688 * void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*); 688 689 * 689 - * There are also atomic variants of these interfaces. Like for kmap they 690 - * facilitate non-blocking fast-paths. Neither the importer nor the exporter 691 - * (in the callback) is allowed to block when using these. 692 - * 693 - * Interfaces:: 694 - * void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long); 695 - * void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*); 696 - * 697 - * For importers all the restrictions of using kmap apply, like the limited 698 - * supply of kmap_atomic slots. Hence an importer shall only hold onto at 699 - * max 2 atomic dma_buf kmaps at the same time (in any given process context). 690 + * Implementing the functions is optional for exporters and for importers all 691 + * the restrictions of using kmap apply. 700 692 * 701 693 * dma_buf kmap calls outside of the range specified in begin_cpu_access are 702 694 * undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on 703 695 * the partial chunks at the beginning and end but may return stale or bogus 704 696 * data outside of the range (in these partial chunks). 705 - * 706 - * Note that these calls need to always succeed. The exporter needs to 707 - * complete any preparations that might fail in begin_cpu_access. 
708 697 * 709 698 * For some cases the overhead of kmap can be too high, a vmap interface 710 699 * is introduced. This interface should be used very carefully, as vmalloc ··· 847 860 EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access); 848 861 849 862 /** 850 - * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address 851 - * space. The same restrictions as for kmap_atomic and friends apply. 852 - * @dmabuf: [in] buffer to map page from. 853 - * @page_num: [in] page in PAGE_SIZE units to map. 854 - * 855 - * This call must always succeed, any necessary preparations that might fail 856 - * need to be done in begin_cpu_access. 857 - */ 858 - void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num) 859 - { 860 - WARN_ON(!dmabuf); 861 - 862 - return dmabuf->ops->map_atomic(dmabuf, page_num); 863 - } 864 - EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic); 865 - 866 - /** 867 - * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic. 868 - * @dmabuf: [in] buffer to unmap page from. 869 - * @page_num: [in] page in PAGE_SIZE units to unmap. 870 - * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic. 871 - * 872 - * This call must always succeed. 873 - */ 874 - void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num, 875 - void *vaddr) 876 - { 877 - WARN_ON(!dmabuf); 878 - 879 - if (dmabuf->ops->unmap_atomic) 880 - dmabuf->ops->unmap_atomic(dmabuf, page_num, vaddr); 881 - } 882 - EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic); 883 - 884 - /** 885 863 * dma_buf_kmap - Map a page of the buffer object into kernel address space. The 886 864 * same restrictions as for kmap and friends apply. 887 865 * @dmabuf: [in] buffer to map page from. ··· 859 907 { 860 908 WARN_ON(!dmabuf); 861 909 910 + if (!dmabuf->ops->map) 911 + return NULL; 862 912 return dmabuf->ops->map(dmabuf, page_num); 863 913 } 864 914 EXPORT_SYMBOL_GPL(dma_buf_kmap);
+1 -1
drivers/gpu/drm/Makefile
··· 18 18 drm_encoder.o drm_mode_object.o drm_property.o \ 19 19 drm_plane.o drm_color_mgmt.o drm_print.o \ 20 20 drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \ 21 - drm_syncobj.o drm_lease.o 21 + drm_syncobj.o drm_lease.o drm_writeback.o 22 22 23 23 drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o 24 24 drm-$(CONFIG_DRM_VM) += drm_vm.o
+1 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
··· 133 133 } 134 134 135 135 static int amdgpu_gem_map_attach(struct dma_buf *dma_buf, 136 - struct device *target_dev, 137 136 struct dma_buf_attachment *attach) 138 137 { 139 138 struct drm_gem_object *obj = dma_buf->priv; ··· 140 141 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 141 142 long r; 142 143 143 - r = drm_gem_map_attach(dma_buf, target_dev, attach); 144 + r = drm_gem_map_attach(dma_buf, attach); 144 145 if (r) 145 146 return r; 146 147 ··· 244 245 .release = drm_gem_dmabuf_release, 245 246 .begin_cpu_access = amdgpu_gem_begin_cpu_access, 246 247 .map = drm_gem_dmabuf_kmap, 247 - .map_atomic = drm_gem_dmabuf_kmap_atomic, 248 248 .unmap = drm_gem_dmabuf_kunmap, 249 - .unmap_atomic = drm_gem_dmabuf_kunmap_atomic, 250 249 .mmap = drm_gem_dmabuf_mmap, 251 250 .vmap = drm_gem_dmabuf_vmap, 252 251 .vunmap = drm_gem_dmabuf_vunmap,
-2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3914 3914 3915 3915 /* Flip */ 3916 3916 spin_lock_irqsave(&crtc->dev->event_lock, flags); 3917 - /* update crtc fb */ 3918 - crtc->primary->fb = fb; 3919 3917 3920 3918 WARN_ON(acrtc->pflip_status != AMDGPU_FLIP_NONE); 3921 3919 WARN_ON(!acrtc_state->stream);
-3
drivers/gpu/drm/arc/arcpgu_crtc.c
··· 136 136 { 137 137 struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc); 138 138 139 - if (!crtc->primary->fb) 140 - return; 141 - 142 139 clk_disable_unprepare(arcpgu->clk); 143 140 arc_pgu_write(arcpgu, ARCPGU_REG_CTRL, 144 141 arc_pgu_read(arcpgu, ARCPGU_REG_CTRL) &
+4 -19
drivers/gpu/drm/armada/armada_fb.c
··· 7 7 */ 8 8 #include <drm/drm_crtc_helper.h> 9 9 #include <drm/drm_fb_helper.h> 10 + #include <drm/drm_gem_framebuffer_helper.h> 10 11 #include "armada_drm.h" 11 12 #include "armada_fb.h" 12 13 #include "armada_gem.h" 13 14 #include "armada_hw.h" 14 15 15 - static void armada_fb_destroy(struct drm_framebuffer *fb) 16 - { 17 - struct armada_framebuffer *dfb = drm_fb_to_armada_fb(fb); 18 - 19 - drm_framebuffer_cleanup(&dfb->fb); 20 - drm_gem_object_put_unlocked(&dfb->obj->obj); 21 - kfree(dfb); 22 - } 23 - 24 - static int armada_fb_create_handle(struct drm_framebuffer *fb, 25 - struct drm_file *dfile, unsigned int *handle) 26 - { 27 - struct armada_framebuffer *dfb = drm_fb_to_armada_fb(fb); 28 - return drm_gem_handle_create(dfile, &dfb->obj->obj, handle); 29 - } 30 - 31 16 static const struct drm_framebuffer_funcs armada_fb_funcs = { 32 - .destroy = armada_fb_destroy, 33 - .create_handle = armada_fb_create_handle, 17 + .destroy = drm_gem_fb_destroy, 18 + .create_handle = drm_gem_fb_create_handle, 34 19 }; 35 20 36 21 struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev, ··· 63 78 64 79 dfb->fmt = format; 65 80 dfb->mod = config; 66 - dfb->obj = obj; 81 + dfb->fb.obj[0] = &obj->obj; 67 82 68 83 drm_helper_mode_fill_fb_struct(dev, &dfb->fb, mode); 69 84
+1 -2
drivers/gpu/drm/armada/armada_fb.h
··· 10 10 11 11 struct armada_framebuffer { 12 12 struct drm_framebuffer fb; 13 - struct armada_gem_object *obj; 14 13 uint8_t fmt; 15 14 uint8_t mod; 16 15 }; 17 16 #define drm_fb_to_armada_fb(dfb) \ 18 17 container_of(dfb, struct armada_framebuffer, fb) 19 - #define drm_fb_obj(fb) drm_fb_to_armada_fb(fb)->obj 18 + #define drm_fb_obj(fb) drm_to_armada_gem((fb)->obj[0]) 20 19 21 20 struct armada_framebuffer *armada_framebuffer_create(struct drm_device *, 22 21 const struct drm_mode_fb_cmd2 *, struct armada_gem_object *);
-2
drivers/gpu/drm/armada/armada_gem.c
··· 490 490 .map_dma_buf = armada_gem_prime_map_dma_buf, 491 491 .unmap_dma_buf = armada_gem_prime_unmap_dma_buf, 492 492 .release = drm_gem_dmabuf_release, 493 - .map_atomic = armada_gem_dmabuf_no_kmap, 494 - .unmap_atomic = armada_gem_dmabuf_no_kunmap, 495 493 .map = armada_gem_dmabuf_no_kmap, 496 494 .unmap = armada_gem_dmabuf_no_kunmap, 497 495 .mmap = armada_gem_dmabuf_mmap,
+1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
··· 681 681 drm_fb_cma_fbdev_fini(dev); 682 682 flush_workqueue(dc->wq); 683 683 drm_kms_helper_poll_fini(dev); 684 + drm_atomic_helper_shutdown(dev); 684 685 drm_mode_config_cleanup(dev); 685 686 686 687 pm_runtime_get_sync(dev->dev);
+5 -14
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 412 412 ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg); 413 413 } 414 414 415 - static void atmel_hlcdc_plane_update_clut(struct atmel_hlcdc_plane *plane) 415 + static void atmel_hlcdc_plane_update_clut(struct atmel_hlcdc_plane *plane, 416 + struct atmel_hlcdc_plane_state *state) 416 417 { 417 - struct drm_crtc *crtc = plane->base.crtc; 418 + struct drm_crtc *crtc = state->base.crtc; 418 419 struct drm_color_lut *lut; 419 420 int idx; 420 421 ··· 780 779 atmel_hlcdc_plane_update_pos_and_size(plane, state); 781 780 atmel_hlcdc_plane_update_general_settings(plane, state); 782 781 atmel_hlcdc_plane_update_format(plane, state); 783 - atmel_hlcdc_plane_update_clut(plane); 782 + atmel_hlcdc_plane_update_clut(plane, state); 784 783 atmel_hlcdc_plane_update_buffers(plane, state); 785 784 atmel_hlcdc_plane_update_disc_area(plane, state); 786 785 ··· 815 814 816 815 /* Clear all pending interrupts */ 817 816 atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR); 818 - } 819 - 820 - static void atmel_hlcdc_plane_destroy(struct drm_plane *p) 821 - { 822 - struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p); 823 - 824 - if (plane->base.fb) 825 - drm_framebuffer_put(plane->base.fb); 826 - 827 - drm_plane_cleanup(p); 828 817 } 829 818 830 819 static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane) ··· 993 1002 static const struct drm_plane_funcs layer_plane_funcs = { 994 1003 .update_plane = drm_atomic_helper_update_plane, 995 1004 .disable_plane = drm_atomic_helper_disable_plane, 996 - .destroy = atmel_hlcdc_plane_destroy, 1005 + .destroy = drm_plane_cleanup, 997 1006 .reset = atmel_hlcdc_plane_reset, 998 1007 .atomic_duplicate_state = atmel_hlcdc_plane_atomic_duplicate_state, 999 1008 .atomic_destroy_state = atmel_hlcdc_plane_atomic_destroy_state,
+3 -1
drivers/gpu/drm/bridge/Kconfig
··· 82 82 83 83 config DRM_SIL_SII8620 84 84 tristate "Silicon Image SII8620 HDMI/MHL bridge" 85 - depends on OF && RC_CORE 85 + depends on OF 86 86 select DRM_KMS_HELPER 87 87 imply EXTCON 88 + select INPUT 89 + select RC_CORE 88 90 help 89 91 Silicon Image SII8620 HDMI/MHL bridge chip driver. 90 92
+2 -2
drivers/gpu/drm/bridge/cdns-dsi.c
··· 1337 1337 .transfer = cdns_dsi_transfer, 1338 1338 }; 1339 1339 1340 - static int cdns_dsi_resume(struct device *dev) 1340 + static int __maybe_unused cdns_dsi_resume(struct device *dev) 1341 1341 { 1342 1342 struct cdns_dsi *dsi = dev_get_drvdata(dev); 1343 1343 ··· 1350 1350 return 0; 1351 1351 } 1352 1352 1353 - static int cdns_dsi_suspend(struct device *dev) 1353 + static int __maybe_unused cdns_dsi_suspend(struct device *dev) 1354 1354 { 1355 1355 struct cdns_dsi *dsi = dev_get_drvdata(dev); 1356 1356
+2 -8
drivers/gpu/drm/cirrus/cirrus_drv.h
··· 92 92 93 93 #define to_cirrus_crtc(x) container_of(x, struct cirrus_crtc, base) 94 94 #define to_cirrus_encoder(x) container_of(x, struct cirrus_encoder, base) 95 - #define to_cirrus_framebuffer(x) container_of(x, struct cirrus_framebuffer, base) 96 95 97 96 struct cirrus_crtc { 98 97 struct drm_crtc base; ··· 114 115 115 116 struct cirrus_connector { 116 117 struct drm_connector base; 117 - }; 118 - 119 - struct cirrus_framebuffer { 120 - struct drm_framebuffer base; 121 - struct drm_gem_object *obj; 122 118 }; 123 119 124 120 struct cirrus_mc { ··· 146 152 147 153 struct cirrus_fbdev { 148 154 struct drm_fb_helper helper; 149 - struct cirrus_framebuffer gfb; 155 + struct drm_framebuffer gfb; 150 156 void *sysram; 151 157 int size; 152 158 int x1, y1, x2, y2; /* dirty rect */ ··· 192 198 struct drm_mode_create_dumb *args); 193 199 194 200 int cirrus_framebuffer_init(struct drm_device *dev, 195 - struct cirrus_framebuffer *gfb, 201 + struct drm_framebuffer *gfb, 196 202 const struct drm_mode_fb_cmd2 *mode_cmd, 197 203 struct drm_gem_object *obj); 198 204
+10 -10
drivers/gpu/drm/cirrus/cirrus_fbdev.c
··· 22 22 struct drm_gem_object *obj; 23 23 struct cirrus_bo *bo; 24 24 int src_offset, dst_offset; 25 - int bpp = afbdev->gfb.base.format->cpp[0]; 25 + int bpp = afbdev->gfb.format->cpp[0]; 26 26 int ret = -EBUSY; 27 27 bool unmap = false; 28 28 bool store_for_later = false; 29 29 int x2, y2; 30 30 unsigned long flags; 31 31 32 - obj = afbdev->gfb.obj; 32 + obj = afbdev->gfb.obj[0]; 33 33 bo = gem_to_cirrus_bo(obj); 34 34 35 35 /* ··· 82 82 } 83 83 for (i = y; i < y + height; i++) { 84 84 /* assume equal stride for now */ 85 - src_offset = dst_offset = i * afbdev->gfb.base.pitches[0] + (x * bpp); 85 + src_offset = dst_offset = i * afbdev->gfb.pitches[0] + (x * bpp); 86 86 memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, width * bpp); 87 87 88 88 } ··· 204 204 gfbdev->sysram = sysram; 205 205 gfbdev->size = size; 206 206 207 - fb = &gfbdev->gfb.base; 207 + fb = &gfbdev->gfb; 208 208 if (!fb) { 209 209 DRM_INFO("fb is NULL\n"); 210 210 return -EINVAL; ··· 246 246 static int cirrus_fbdev_destroy(struct drm_device *dev, 247 247 struct cirrus_fbdev *gfbdev) 248 248 { 249 - struct cirrus_framebuffer *gfb = &gfbdev->gfb; 249 + struct drm_framebuffer *gfb = &gfbdev->gfb; 250 250 251 251 drm_fb_helper_unregister_fbi(&gfbdev->helper); 252 252 253 - if (gfb->obj) { 254 - drm_gem_object_put_unlocked(gfb->obj); 255 - gfb->obj = NULL; 253 + if (gfb->obj[0]) { 254 + drm_gem_object_put_unlocked(gfb->obj[0]); 255 + gfb->obj[0] = NULL; 256 256 } 257 257 258 258 vfree(gfbdev->sysram); 259 259 drm_fb_helper_fini(&gfbdev->helper); 260 - drm_framebuffer_unregister_private(&gfb->base); 261 - drm_framebuffer_cleanup(&gfb->base); 260 + drm_framebuffer_unregister_private(gfb); 261 + drm_framebuffer_cleanup(gfb); 262 262 263 263 return 0; 264 264 }
+13 -30
drivers/gpu/drm/cirrus/cirrus_main.c
··· 10 10 */ 11 11 #include <drm/drmP.h> 12 12 #include <drm/drm_crtc_helper.h> 13 + #include <drm/drm_gem_framebuffer_helper.h> 13 14 14 15 #include "cirrus_drv.h" 15 16 16 - static int cirrus_create_handle(struct drm_framebuffer *fb, 17 - struct drm_file* file_priv, 18 - unsigned int* handle) 19 - { 20 - struct cirrus_framebuffer *cirrus_fb = to_cirrus_framebuffer(fb); 21 - 22 - return drm_gem_handle_create(file_priv, cirrus_fb->obj, handle); 23 - } 24 - 25 - static void cirrus_user_framebuffer_destroy(struct drm_framebuffer *fb) 26 - { 27 - struct cirrus_framebuffer *cirrus_fb = to_cirrus_framebuffer(fb); 28 - 29 - drm_gem_object_put_unlocked(cirrus_fb->obj); 30 - drm_framebuffer_cleanup(fb); 31 - kfree(fb); 32 - } 33 - 34 17 static const struct drm_framebuffer_funcs cirrus_fb_funcs = { 35 - .create_handle = cirrus_create_handle, 36 - .destroy = cirrus_user_framebuffer_destroy, 18 + .create_handle = drm_gem_fb_create_handle, 19 + .destroy = drm_gem_fb_destroy, 37 20 }; 38 21 39 22 int cirrus_framebuffer_init(struct drm_device *dev, 40 - struct cirrus_framebuffer *gfb, 23 + struct drm_framebuffer *gfb, 41 24 const struct drm_mode_fb_cmd2 *mode_cmd, 42 25 struct drm_gem_object *obj) 43 26 { 44 27 int ret; 45 28 46 - drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd); 47 - gfb->obj = obj; 48 - ret = drm_framebuffer_init(dev, &gfb->base, &cirrus_fb_funcs); 29 + drm_helper_mode_fill_fb_struct(dev, gfb, mode_cmd); 30 + gfb->obj[0] = obj; 31 + ret = drm_framebuffer_init(dev, gfb, &cirrus_fb_funcs); 49 32 if (ret) { 50 33 DRM_ERROR("drm_framebuffer_init failed: %d\n", ret); 51 34 return ret; ··· 43 60 { 44 61 struct cirrus_device *cdev = dev->dev_private; 45 62 struct drm_gem_object *obj; 46 - struct cirrus_framebuffer *cirrus_fb; 63 + struct drm_framebuffer *fb; 47 64 u32 bpp; 48 65 int ret; 49 66 ··· 57 74 if (obj == NULL) 58 75 return ERR_PTR(-ENOENT); 59 76 60 - cirrus_fb = kzalloc(sizeof(*cirrus_fb), GFP_KERNEL); 61 - if (!cirrus_fb) { 77 + fb = 
kzalloc(sizeof(*fb), GFP_KERNEL); 78 + if (!fb) { 62 79 drm_gem_object_put_unlocked(obj); 63 80 return ERR_PTR(-ENOMEM); 64 81 } 65 82 66 - ret = cirrus_framebuffer_init(dev, cirrus_fb, mode_cmd, obj); 83 + ret = cirrus_framebuffer_init(dev, fb, mode_cmd, obj); 67 84 if (ret) { 68 85 drm_gem_object_put_unlocked(obj); 69 - kfree(cirrus_fb); 86 + kfree(fb); 70 87 return ERR_PTR(ret); 71 88 } 72 - return &cirrus_fb->base; 89 + return fb; 73 90 } 74 91 75 92 static const struct drm_mode_config_funcs cirrus_mode_funcs = {
+3 -9
drivers/gpu/drm/cirrus/cirrus_mode.c
··· 101 101 int x, int y, int atomic) 102 102 { 103 103 struct cirrus_device *cdev = crtc->dev->dev_private; 104 - struct drm_gem_object *obj; 105 - struct cirrus_framebuffer *cirrus_fb; 106 104 struct cirrus_bo *bo; 107 105 int ret; 108 106 u64 gpu_addr; 109 107 110 108 /* push the previous fb to system ram */ 111 109 if (!atomic && fb) { 112 - cirrus_fb = to_cirrus_framebuffer(fb); 113 - obj = cirrus_fb->obj; 114 - bo = gem_to_cirrus_bo(obj); 110 + bo = gem_to_cirrus_bo(fb->obj[0]); 115 111 ret = cirrus_bo_reserve(bo, false); 116 112 if (ret) 117 113 return ret; ··· 115 119 cirrus_bo_unreserve(bo); 116 120 } 117 121 118 - cirrus_fb = to_cirrus_framebuffer(crtc->primary->fb); 119 - obj = cirrus_fb->obj; 120 - bo = gem_to_cirrus_bo(obj); 122 + bo = gem_to_cirrus_bo(crtc->primary->fb->obj[0]); 121 123 122 124 ret = cirrus_bo_reserve(bo, false); 123 125 if (ret) ··· 127 133 return ret; 128 134 } 129 135 130 - if (&cdev->mode_info.gfbdev->gfb == cirrus_fb) { 136 + if (&cdev->mode_info.gfbdev->gfb == crtc->primary->fb) { 131 137 /* if pushing console in kmap it */ 132 138 ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 133 139 if (ret)
+302 -89
drivers/gpu/drm/drm_atomic.c
··· 30 30 #include <drm/drm_atomic.h> 31 31 #include <drm/drm_mode.h> 32 32 #include <drm/drm_print.h> 33 + #include <drm/drm_writeback.h> 33 34 #include <linux/sync_file.h> 34 35 35 36 #include "drm_crtc_internal.h" ··· 326 325 return fence_ptr; 327 326 } 328 327 328 + static int set_out_fence_for_connector(struct drm_atomic_state *state, 329 + struct drm_connector *connector, 330 + s32 __user *fence_ptr) 331 + { 332 + unsigned int index = drm_connector_index(connector); 333 + 334 + if (!fence_ptr) 335 + return 0; 336 + 337 + if (put_user(-1, fence_ptr)) 338 + return -EFAULT; 339 + 340 + state->connectors[index].out_fence_ptr = fence_ptr; 341 + 342 + return 0; 343 + } 344 + 345 + static s32 __user *get_out_fence_for_connector(struct drm_atomic_state *state, 346 + struct drm_connector *connector) 347 + { 348 + unsigned int index = drm_connector_index(connector); 349 + s32 __user *fence_ptr; 350 + 351 + fence_ptr = state->connectors[index].out_fence_ptr; 352 + state->connectors[index].out_fence_ptr = NULL; 353 + 354 + return fence_ptr; 355 + } 356 + 329 357 /** 330 358 * drm_atomic_set_mode_for_crtc - set mode for CRTC 331 359 * @state: the CRTC whose incoming state to update ··· 369 339 int drm_atomic_set_mode_for_crtc(struct drm_crtc_state *state, 370 340 const struct drm_display_mode *mode) 371 341 { 342 + struct drm_crtc *crtc = state->crtc; 372 343 struct drm_mode_modeinfo umode; 373 344 374 345 /* Early return for no change. 
*/ ··· 390 359 391 360 drm_mode_copy(&state->mode, mode); 392 361 state->enable = true; 393 - DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n", 394 - mode->name, state); 362 + DRM_DEBUG_ATOMIC("Set [MODE:%s] for [CRTC:%d:%s] state %p\n", 363 + mode->name, crtc->base.id, crtc->name, state); 395 364 } else { 396 365 memset(&state->mode, 0, sizeof(state->mode)); 397 366 state->enable = false; 398 - DRM_DEBUG_ATOMIC("Set [NOMODE] for CRTC state %p\n", 399 - state); 367 + DRM_DEBUG_ATOMIC("Set [NOMODE] for [CRTC:%d:%s] state %p\n", 368 + crtc->base.id, crtc->name, state); 400 369 } 401 370 402 371 return 0; ··· 419 388 int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state, 420 389 struct drm_property_blob *blob) 421 390 { 391 + struct drm_crtc *crtc = state->crtc; 392 + 422 393 if (blob == state->mode_blob) 423 394 return 0; 424 395 ··· 430 397 memset(&state->mode, 0, sizeof(state->mode)); 431 398 432 399 if (blob) { 433 - if (blob->length != sizeof(struct drm_mode_modeinfo) || 434 - drm_mode_convert_umode(state->crtc->dev, &state->mode, 435 - blob->data)) 400 + int ret; 401 + 402 + if (blob->length != sizeof(struct drm_mode_modeinfo)) { 403 + DRM_DEBUG_ATOMIC("[CRTC:%d:%s] bad mode blob length: %zu\n", 404 + crtc->base.id, crtc->name, 405 + blob->length); 436 406 return -EINVAL; 407 + } 408 + 409 + ret = drm_mode_convert_umode(crtc->dev, 410 + &state->mode, blob->data); 411 + if (ret) { 412 + DRM_DEBUG_ATOMIC("[CRTC:%d:%s] invalid mode (ret=%d, status=%s):\n", 413 + crtc->base.id, crtc->name, 414 + ret, drm_get_mode_status_name(state->mode.status)); 415 + drm_mode_debug_printmodeline(&state->mode); 416 + return -EINVAL; 417 + } 437 418 438 419 state->mode_blob = drm_property_blob_get(blob); 439 420 state->enable = true; 440 - DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n", 441 - state->mode.name, state); 421 + DRM_DEBUG_ATOMIC("Set [MODE:%s] for [CRTC:%d:%s] state %p\n", 422 + state->mode.name, crtc->base.id, crtc->name, 423 + state); 442 424 } 
else { 443 425 state->enable = false; 444 - DRM_DEBUG_ATOMIC("Set [NOMODE] for CRTC state %p\n", 445 - state); 426 + DRM_DEBUG_ATOMIC("Set [NOMODE] for [CRTC:%d:%s] state %p\n", 427 + crtc->base.id, crtc->name, state); 446 428 } 447 429 448 430 return 0; ··· 587 539 return -EFAULT; 588 540 589 541 set_out_fence_for_crtc(state->state, crtc, fence_ptr); 590 - } else if (crtc->funcs->atomic_set_property) 542 + } else if (crtc->funcs->atomic_set_property) { 591 543 return crtc->funcs->atomic_set_property(crtc, state, property, val); 592 - else 544 + } else { 545 + DRM_DEBUG_ATOMIC("[CRTC:%d:%s] unknown property [PROP:%d:%s]]\n", 546 + crtc->base.id, crtc->name, 547 + property->base.id, property->name); 593 548 return -EINVAL; 549 + } 594 550 595 551 return 0; 596 552 } ··· 729 677 } 730 678 731 679 /** 680 + * drm_atomic_connector_check - check connector state 681 + * @connector: connector to check 682 + * @state: connector state to check 683 + * 684 + * Provides core sanity checks for connector state. 
685 + * 686 + * RETURNS: 687 + * Zero on success, error code on failure 688 + */ 689 + static int drm_atomic_connector_check(struct drm_connector *connector, 690 + struct drm_connector_state *state) 691 + { 692 + struct drm_crtc_state *crtc_state; 693 + struct drm_writeback_job *writeback_job = state->writeback_job; 694 + 695 + if ((connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK) || !writeback_job) 696 + return 0; 697 + 698 + if (writeback_job->fb && !state->crtc) { 699 + DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] framebuffer without CRTC\n", 700 + connector->base.id, connector->name); 701 + return -EINVAL; 702 + } 703 + 704 + if (state->crtc) 705 + crtc_state = drm_atomic_get_existing_crtc_state(state->state, 706 + state->crtc); 707 + 708 + if (writeback_job->fb && !crtc_state->active) { 709 + DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] has framebuffer, but [CRTC:%d] is off\n", 710 + connector->base.id, connector->name, 711 + state->crtc->base.id); 712 + return -EINVAL; 713 + } 714 + 715 + if (writeback_job->out_fence && !writeback_job->fb) { 716 + DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] requesting out-fence without framebuffer\n", 717 + connector->base.id, connector->name); 718 + return -EINVAL; 719 + } 720 + 721 + return 0; 722 + } 723 + 724 + /** 732 725 * drm_atomic_get_plane_state - get plane state 733 726 * @state: global atomic state object 734 727 * @plane: plane to get state object for ··· 796 699 struct drm_plane_state *plane_state; 797 700 798 701 WARN_ON(!state->acquire_ctx); 702 + 703 + /* the legacy pointers should never be set */ 704 + WARN_ON(plane->fb); 705 + WARN_ON(plane->old_fb); 706 + WARN_ON(plane->crtc); 799 707 800 708 plane_state = drm_atomic_get_existing_plane_state(state, plane); 801 709 if (plane_state) ··· 896 794 } else if (property == plane->alpha_property) { 897 795 state->alpha = val; 898 796 } else if (property == plane->rotation_property) { 899 - if (!is_power_of_2(val & DRM_MODE_ROTATE_MASK)) 797 + if (!is_power_of_2(val & 
DRM_MODE_ROTATE_MASK)) { 798 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] bad rotation bitmask: 0x%llx\n", 799 + plane->base.id, plane->name, val); 900 800 return -EINVAL; 801 + } 901 802 state->rotation = val; 902 803 } else if (property == plane->zpos_property) { 903 804 state->zpos = val; ··· 912 807 return plane->funcs->atomic_set_property(plane, state, 913 808 property, val); 914 809 } else { 810 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] unknown property [PROP:%d:%s]]\n", 811 + plane->base.id, plane->name, 812 + property->base.id, property->name); 915 813 return -EINVAL; 916 814 } 917 815 ··· 1022 914 1023 915 /* either *both* CRTC and FB must be set, or neither */ 1024 916 if (state->crtc && !state->fb) { 1025 - DRM_DEBUG_ATOMIC("CRTC set but no FB\n"); 917 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] CRTC set but no FB\n", 918 + plane->base.id, plane->name); 1026 919 return -EINVAL; 1027 920 } else if (state->fb && !state->crtc) { 1028 - DRM_DEBUG_ATOMIC("FB set but no CRTC\n"); 921 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] FB set but no CRTC\n", 922 + plane->base.id, plane->name); 1029 923 return -EINVAL; 1030 924 } 1031 925 ··· 1037 927 1038 928 /* Check whether this plane is usable on this CRTC */ 1039 929 if (!(plane->possible_crtcs & drm_crtc_mask(state->crtc))) { 1040 - DRM_DEBUG_ATOMIC("Invalid crtc for plane\n"); 930 + DRM_DEBUG_ATOMIC("Invalid [CRTC:%d:%s] for [PLANE:%d:%s]\n", 931 + state->crtc->base.id, state->crtc->name, 932 + plane->base.id, plane->name); 1041 933 return -EINVAL; 1042 934 } 1043 935 ··· 1048 936 state->fb->modifier); 1049 937 if (ret) { 1050 938 struct drm_format_name_buf format_name; 1051 - DRM_DEBUG_ATOMIC("Invalid pixel format %s, modifier 0x%llx\n", 939 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid pixel format %s, modifier 0x%llx\n", 940 + plane->base.id, plane->name, 1052 941 drm_get_format_name(state->fb->format->format, 1053 942 &format_name), 1054 943 state->fb->modifier); ··· 1061 948 state->crtc_x > INT_MAX - (int32_t) state->crtc_w || 1062 949 state->crtc_h 
> INT_MAX || 1063 950 state->crtc_y > INT_MAX - (int32_t) state->crtc_h) { 1064 - DRM_DEBUG_ATOMIC("Invalid CRTC coordinates %ux%u+%d+%d\n", 951 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid CRTC coordinates %ux%u+%d+%d\n", 952 + plane->base.id, plane->name, 1065 953 state->crtc_w, state->crtc_h, 1066 954 state->crtc_x, state->crtc_y); 1067 955 return -ERANGE; ··· 1076 962 state->src_x > fb_width - state->src_w || 1077 963 state->src_h > fb_height || 1078 964 state->src_y > fb_height - state->src_h) { 1079 - DRM_DEBUG_ATOMIC("Invalid source coordinates " 965 + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] invalid source coordinates " 1080 966 "%u.%06ux%u.%06u+%u.%06u+%u.%06u (fb %ux%u)\n", 967 + plane->base.id, plane->name, 1081 968 state->src_w >> 16, ((state->src_w & 0xffff) * 15625) >> 10, 1082 969 state->src_h >> 16, ((state->src_h & 0xffff) * 15625) >> 10, 1083 970 state->src_x >> 16, ((state->src_x & 0xffff) * 15625) >> 10, ··· 1235 1120 state->private_objs[index].old_state = obj->state; 1236 1121 state->private_objs[index].new_state = obj_state; 1237 1122 state->private_objs[index].ptr = obj; 1123 + obj_state->state = state; 1238 1124 1239 1125 state->num_private_objs = num_objs; 1240 1126 ··· 1394 1278 state->link_status = val; 1395 1279 } else if (property == config->aspect_ratio_property) { 1396 1280 state->picture_aspect_ratio = val; 1281 + } else if (property == config->content_type_property) { 1282 + state->content_type = val; 1397 1283 } else if (property == connector->scaling_mode_property) { 1398 1284 state->scaling_mode = val; 1399 1285 } else if (property == connector->content_protection_property) { ··· 1404 1286 return -EINVAL; 1405 1287 } 1406 1288 state->content_protection = val; 1289 + } else if (property == config->writeback_fb_id_property) { 1290 + struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, NULL, val); 1291 + int ret = drm_atomic_set_writeback_fb_for_connector(state, fb); 1292 + if (fb) 1293 + drm_framebuffer_put(fb); 1294 + return ret; 1295 + } 
else if (property == config->writeback_out_fence_ptr_property) { 1296 + s32 __user *fence_ptr = u64_to_user_ptr(val); 1297 + 1298 + return set_out_fence_for_connector(state->state, connector, 1299 + fence_ptr); 1407 1300 } else if (connector->funcs->atomic_set_property) { 1408 1301 return connector->funcs->atomic_set_property(connector, 1409 1302 state, property, val); 1410 1303 } else { 1304 + DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] unknown property [PROP:%d:%s]\n", 1305 + connector->base.id, connector->name, 1306 + property->base.id, property->name); 1411 1307 return -EINVAL; 1412 1308 } 1413 1309 ··· 1495 1363 *val = state->link_status; 1496 1364 } else if (property == config->aspect_ratio_property) { 1497 1365 *val = state->picture_aspect_ratio; 1366 + } else if (property == config->content_type_property) { 1367 + *val = state->content_type; 1498 1368 } else if (property == connector->scaling_mode_property) { 1499 1369 *val = state->scaling_mode; 1500 1370 } else if (property == connector->content_protection_property) { 1501 1371 *val = state->content_protection; 1372 + } else if (property == config->writeback_fb_id_property) { 1373 + /* Writeback framebuffer is one-shot, write and forget */ 1374 + *val = 0; 1375 + } else if (property == config->writeback_out_fence_ptr_property) { 1376 + *val = 0; 1502 1377 } else if (connector->funcs->atomic_get_property) { 1503 1378 return connector->funcs->atomic_get_property(connector, 1504 1379 state, property, val); ··· 1595 1456 } 1596 1457 1597 1458 if (crtc) 1598 - DRM_DEBUG_ATOMIC("Link plane state %p to [CRTC:%d:%s]\n", 1599 - plane_state, crtc->base.id, crtc->name); 1459 + DRM_DEBUG_ATOMIC("Link [PLANE:%d:%s] state %p to [CRTC:%d:%s]\n", 1460 + plane->base.id, plane->name, plane_state, 1461 + crtc->base.id, crtc->name); 1600 1462 else 1601 - DRM_DEBUG_ATOMIC("Link plane state %p to [NOCRTC]\n", 1602 - plane_state); 1463 + DRM_DEBUG_ATOMIC("Link [PLANE:%d:%s] state %p to [NOCRTC]\n", 1464 + plane->base.id, plane->name,
plane_state); 1603 1465 1604 1466 return 0; 1605 1467 } ··· 1620 1480 drm_atomic_set_fb_for_plane(struct drm_plane_state *plane_state, 1621 1481 struct drm_framebuffer *fb) 1622 1482 { 1483 + struct drm_plane *plane = plane_state->plane; 1484 + 1623 1485 if (fb) 1624 - DRM_DEBUG_ATOMIC("Set [FB:%d] for plane state %p\n", 1625 - fb->base.id, plane_state); 1626 - else 1627 - DRM_DEBUG_ATOMIC("Set [NOFB] for plane state %p\n", 1486 + DRM_DEBUG_ATOMIC("Set [FB:%d] for [PLANE:%d:%s] state %p\n", 1487 + fb->base.id, plane->base.id, plane->name, 1628 1488 plane_state); 1489 + else 1490 + DRM_DEBUG_ATOMIC("Set [NOFB] for [PLANE:%d:%s] state %p\n", 1491 + plane->base.id, plane->name, plane_state); 1629 1492 1630 1493 drm_framebuffer_assign(&plane_state->fb, fb); 1631 1494 } ··· 1689 1546 drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state, 1690 1547 struct drm_crtc *crtc) 1691 1548 { 1549 + struct drm_connector *connector = conn_state->connector; 1692 1550 struct drm_crtc_state *crtc_state; 1693 1551 1694 1552 if (conn_state->crtc == crtc) ··· 1717 1573 drm_connector_get(conn_state->connector); 1718 1574 conn_state->crtc = crtc; 1719 1575 1720 - DRM_DEBUG_ATOMIC("Link connector state %p to [CRTC:%d:%s]\n", 1576 + DRM_DEBUG_ATOMIC("Link [CONNECTOR:%d:%s] state %p to [CRTC:%d:%s]\n", 1577 + connector->base.id, connector->name, 1721 1578 conn_state, crtc->base.id, crtc->name); 1722 1579 } else { 1723 - DRM_DEBUG_ATOMIC("Link connector state %p to [NOCRTC]\n", 1580 + DRM_DEBUG_ATOMIC("Link [CONNECTOR:%d:%s] state %p to [NOCRTC]\n", 1581 + connector->base.id, connector->name, 1724 1582 conn_state); 1725 1583 } 1726 1584 1727 1585 return 0; 1728 1586 } 1729 1587 EXPORT_SYMBOL(drm_atomic_set_crtc_for_connector); 1588 + 1589 + /* 1590 + * drm_atomic_get_writeback_job - return or allocate a writeback job 1591 + * @conn_state: Connector state to get the job for 1592 + * 1593 + * Writeback jobs have a different lifetime to the atomic state they are 1594 + * 
associated with. This convenience function takes care of allocating a job 1595 + * if there isn't yet one associated with the connector state, otherwise 1596 + * it just returns the existing job. 1597 + * 1598 + * Returns: The writeback job for the given connector state 1599 + */ 1600 + static struct drm_writeback_job * 1601 + drm_atomic_get_writeback_job(struct drm_connector_state *conn_state) 1602 + { 1603 + WARN_ON(conn_state->connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK); 1604 + 1605 + if (!conn_state->writeback_job) 1606 + conn_state->writeback_job = 1607 + kzalloc(sizeof(*conn_state->writeback_job), GFP_KERNEL); 1608 + 1609 + return conn_state->writeback_job; 1610 + } 1611 + 1612 + /** 1613 + * drm_atomic_set_writeback_fb_for_connector - set writeback framebuffer 1614 + * @conn_state: atomic state object for the connector 1615 + * @fb: fb to use for the connector 1616 + * 1617 + * This is used to set the framebuffer for a writeback connector, which outputs 1618 + * to a buffer instead of an actual physical connector. 1619 + * Changing the assigned framebuffer requires us to grab a reference to the new 1620 + * fb and drop the reference to the old fb, if there is one. This function 1621 + * takes care of all these details besides updating the pointer in the 1622 + * state object itself. 1623 + * 1624 + * Note: The only way conn_state can already have an fb set is if the commit 1625 + * sets the property more than once. 
1626 + * 1627 + * See also: drm_writeback_connector_init() 1628 + * 1629 + * Returns: 0 on success 1630 + */ 1631 + int drm_atomic_set_writeback_fb_for_connector( 1632 + struct drm_connector_state *conn_state, 1633 + struct drm_framebuffer *fb) 1634 + { 1635 + struct drm_writeback_job *job = 1636 + drm_atomic_get_writeback_job(conn_state); 1637 + if (!job) 1638 + return -ENOMEM; 1639 + 1640 + drm_framebuffer_assign(&job->fb, fb); 1641 + 1642 + if (fb) 1643 + DRM_DEBUG_ATOMIC("Set [FB:%d] for connector state %p\n", 1644 + fb->base.id, conn_state); 1645 + else 1646 + DRM_DEBUG_ATOMIC("Set [NOFB] for connector state %p\n", 1647 + conn_state); 1648 + 1649 + return 0; 1650 + } 1651 + EXPORT_SYMBOL(drm_atomic_set_writeback_fb_for_connector); 1730 1652 1731 1653 /** 1732 1654 * drm_atomic_add_affected_connectors - add connectors for crtc ··· 1882 1672 1883 1673 WARN_ON(!drm_atomic_get_new_crtc_state(state, crtc)); 1884 1674 1675 + DRM_DEBUG_ATOMIC("Adding all current planes for [CRTC:%d:%s] to %p\n", 1676 + crtc->base.id, crtc->name, state); 1677 + 1885 1678 drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) { 1886 1679 struct drm_plane_state *plane_state = 1887 1680 drm_atomic_get_plane_state(state, plane); ··· 1915 1702 struct drm_plane_state *plane_state; 1916 1703 struct drm_crtc *crtc; 1917 1704 struct drm_crtc_state *crtc_state; 1705 + struct drm_connector *conn; 1706 + struct drm_connector_state *conn_state; 1918 1707 int i, ret = 0; 1919 1708 1920 1709 DRM_DEBUG_ATOMIC("checking %p\n", state); ··· 1935 1720 if (ret) { 1936 1721 DRM_DEBUG_ATOMIC("[CRTC:%d:%s] atomic core check failed\n", 1937 1722 crtc->base.id, crtc->name); 1723 + return ret; 1724 + } 1725 + } 1726 + 1727 + for_each_new_connector_in_state(state, conn, conn_state, i) { 1728 + ret = drm_atomic_connector_check(conn, conn_state); 1729 + if (ret) { 1730 + DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] atomic core check failed\n", 1731 + conn->base.id, conn->name); 1938 1732 return ret; 1939 1733 
} 1940 1734 } ··· 2272 2048 } 2273 2049 2274 2050 /** 2275 - * drm_atomic_clean_old_fb -- Unset old_fb pointers and set plane->fb pointers. 2276 - * 2277 - * @dev: drm device to check. 2278 - * @plane_mask: plane mask for planes that were updated. 2279 - * @ret: return value, can be -EDEADLK for a retry. 2280 - * 2281 - * Before doing an update &drm_plane.old_fb is set to &drm_plane.fb, but before 2282 - * dropping the locks old_fb needs to be set to NULL and plane->fb updated. This 2283 - * is a common operation for each atomic update, so this call is split off as a 2284 - * helper. 2285 - */ 2286 - void drm_atomic_clean_old_fb(struct drm_device *dev, 2287 - unsigned plane_mask, 2288 - int ret) 2289 - { 2290 - struct drm_plane *plane; 2291 - 2292 - /* if succeeded, fixup legacy plane crtc/fb ptrs before dropping 2293 - * locks (ie. while it is still safe to deref plane->state). We 2294 - * need to do this here because the driver entry points cannot 2295 - * distinguish between legacy and atomic ioctls. 
2296 - */ 2297 - drm_for_each_plane_mask(plane, dev, plane_mask) { 2298 - if (ret == 0) { 2299 - struct drm_framebuffer *new_fb = plane->state->fb; 2300 - if (new_fb) 2301 - drm_framebuffer_get(new_fb); 2302 - plane->fb = new_fb; 2303 - plane->crtc = plane->state->crtc; 2304 - 2305 - if (plane->old_fb) 2306 - drm_framebuffer_put(plane->old_fb); 2307 - } 2308 - plane->old_fb = NULL; 2309 - } 2310 - } 2311 - EXPORT_SYMBOL(drm_atomic_clean_old_fb); 2312 - 2313 - /** 2314 2051 * DOC: explicit fencing properties 2315 2052 * 2316 2053 * Explicit fencing allows userspace to control the buffer synchronization ··· 2346 2161 return 0; 2347 2162 } 2348 2163 2349 - static int prepare_crtc_signaling(struct drm_device *dev, 2164 + static int prepare_signaling(struct drm_device *dev, 2350 2165 struct drm_atomic_state *state, 2351 2166 struct drm_mode_atomic *arg, 2352 2167 struct drm_file *file_priv, ··· 2355 2170 { 2356 2171 struct drm_crtc *crtc; 2357 2172 struct drm_crtc_state *crtc_state; 2173 + struct drm_connector *conn; 2174 + struct drm_connector_state *conn_state; 2358 2175 int i, c = 0, ret; 2359 2176 2360 2177 if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) ··· 2422 2235 c++; 2423 2236 } 2424 2237 2238 + for_each_new_connector_in_state(state, conn, conn_state, i) { 2239 + struct drm_writeback_job *job; 2240 + struct drm_out_fence_state *f; 2241 + struct dma_fence *fence; 2242 + s32 __user *fence_ptr; 2243 + 2244 + fence_ptr = get_out_fence_for_connector(state, conn); 2245 + if (!fence_ptr) 2246 + continue; 2247 + 2248 + job = drm_atomic_get_writeback_job(conn_state); 2249 + if (!job) 2250 + return -ENOMEM; 2251 + 2252 + f = krealloc(*fence_state, sizeof(**fence_state) * 2253 + (*num_fences + 1), GFP_KERNEL); 2254 + if (!f) 2255 + return -ENOMEM; 2256 + 2257 + memset(&f[*num_fences], 0, sizeof(*f)); 2258 + 2259 + f[*num_fences].out_fence_ptr = fence_ptr; 2260 + *fence_state = f; 2261 + 2262 + fence = drm_writeback_get_out_fence((struct drm_writeback_connector *)conn); 2263 
+ if (!fence) 2264 + return -ENOMEM; 2265 + 2266 + ret = setup_out_fence(&f[(*num_fences)++], fence); 2267 + if (ret) { 2268 + dma_fence_put(fence); 2269 + return ret; 2270 + } 2271 + 2272 + job->out_fence = fence; 2273 + } 2274 + 2425 2275 /* 2426 2276 * Having this flag means user mode pends on event which will never 2427 2277 * reach due to lack of at least one CRTC for signaling ··· 2469 2245 return 0; 2470 2246 } 2471 2247 2472 - static void complete_crtc_signaling(struct drm_device *dev, 2473 - struct drm_atomic_state *state, 2474 - struct drm_out_fence_state *fence_state, 2475 - unsigned int num_fences, 2476 - bool install_fds) 2248 + static void complete_signaling(struct drm_device *dev, 2249 + struct drm_atomic_state *state, 2250 + struct drm_out_fence_state *fence_state, 2251 + unsigned int num_fences, 2252 + bool install_fds) 2477 2253 { 2478 2254 struct drm_crtc *crtc; 2479 2255 struct drm_crtc_state *crtc_state; ··· 2530 2306 unsigned int copied_objs, copied_props; 2531 2307 struct drm_atomic_state *state; 2532 2308 struct drm_modeset_acquire_ctx ctx; 2533 - struct drm_plane *plane; 2534 2309 struct drm_out_fence_state *fence_state; 2535 - unsigned plane_mask; 2536 2310 int ret = 0; 2537 2311 unsigned int i, j, num_fences; 2538 2312 ··· 2570 2348 state->allow_modeset = !!(arg->flags & DRM_MODE_ATOMIC_ALLOW_MODESET); 2571 2349 2572 2350 retry: 2573 - plane_mask = 0; 2574 2351 copied_objs = 0; 2575 2352 copied_props = 0; 2576 2353 fence_state = NULL; ··· 2640 2419 copied_props++; 2641 2420 } 2642 2421 2643 - if (obj->type == DRM_MODE_OBJECT_PLANE && count_props && 2644 - !(arg->flags & DRM_MODE_ATOMIC_TEST_ONLY)) { 2645 - plane = obj_to_plane(obj); 2646 - plane_mask |= (1 << drm_plane_index(plane)); 2647 - plane->old_fb = plane->fb; 2648 - } 2649 2422 drm_mode_object_put(obj); 2650 2423 } 2651 2424 2652 - ret = prepare_crtc_signaling(dev, state, arg, file_priv, &fence_state, 2653 - &num_fences); 2425 + ret = prepare_signaling(dev, state, arg, file_priv, 
&fence_state, 2426 + &num_fences); 2654 2427 if (ret) 2655 2428 goto out; 2656 2429 ··· 2660 2445 } 2661 2446 2662 2447 out: 2663 - drm_atomic_clean_old_fb(dev, plane_mask, ret); 2664 - 2665 - complete_crtc_signaling(dev, state, fence_state, num_fences, !ret); 2448 + complete_signaling(dev, state, fence_state, num_fences, !ret); 2666 2449 2667 2450 if (ret == -EDEADLK) { 2668 2451 drm_atomic_state_clear(state);
+26 -14
drivers/gpu/drm/drm_atomic_helper.c
··· 30 30 #include <drm/drm_plane_helper.h> 31 31 #include <drm/drm_crtc_helper.h> 32 32 #include <drm/drm_atomic_helper.h> 33 + #include <drm/drm_writeback.h> 33 34 #include <linux/dma-fence.h> 34 35 35 36 #include "drm_crtc_helper_internal.h" ··· 1173 1172 } 1174 1173 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables); 1175 1174 1175 + static void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1176 + struct drm_atomic_state *old_state) 1177 + { 1178 + struct drm_connector *connector; 1179 + struct drm_connector_state *new_conn_state; 1180 + int i; 1181 + 1182 + for_each_new_connector_in_state(old_state, connector, new_conn_state, i) { 1183 + const struct drm_connector_helper_funcs *funcs; 1184 + 1185 + funcs = connector->helper_private; 1186 + 1187 + if (new_conn_state->writeback_job && new_conn_state->writeback_job->fb) { 1188 + WARN_ON(connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK); 1189 + funcs->atomic_commit(connector, new_conn_state->writeback_job); 1190 + } 1191 + } 1192 + } 1193 + 1176 1194 /** 1177 1195 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs 1178 1196 * @dev: DRM device ··· 1271 1251 1272 1252 drm_bridge_enable(encoder->bridge); 1273 1253 } 1254 + 1255 + drm_atomic_helper_commit_writebacks(dev, old_state); 1274 1256 } 1275 1257 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables); 1276 1258 ··· 2936 2914 struct drm_plane *plane; 2937 2915 struct drm_crtc_state *crtc_state; 2938 2916 struct drm_crtc *crtc; 2939 - unsigned plane_mask = 0; 2940 2917 int ret, i; 2941 2918 2942 2919 state = drm_atomic_state_alloc(dev); ··· 2978 2957 goto free; 2979 2958 2980 2959 drm_atomic_set_fb_for_plane(plane_state, NULL); 2981 - 2982 - if (clean_old_fbs) { 2983 - plane->old_fb = plane->fb; 2984 - plane_mask |= BIT(drm_plane_index(plane)); 2985 - } 2986 2960 } 2987 2961 2988 2962 ret = drm_atomic_commit(state); 2989 2963 free: 2990 - if (plane_mask) 2991 - drm_atomic_clean_old_fb(dev, plane_mask, 
ret); 2992 2964 drm_atomic_state_put(state); 2993 2965 return ret; 2994 2966 } ··· 3143 3129 3144 3130 state->acquire_ctx = ctx; 3145 3131 3146 - for_each_new_plane_in_state(state, plane, new_plane_state, i) { 3147 - WARN_ON(plane->crtc != new_plane_state->crtc); 3148 - WARN_ON(plane->fb != new_plane_state->fb); 3149 - WARN_ON(plane->old_fb); 3150 - 3132 + for_each_new_plane_in_state(state, plane, new_plane_state, i) 3151 3133 state->planes[i].old_state = plane->state; 3152 - } 3153 3134 3154 3135 for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) 3155 3136 state->crtcs[i].old_state = crtc->state; ··· 3669 3660 if (state->crtc) 3670 3661 drm_connector_get(connector); 3671 3662 state->commit = NULL; 3663 + 3664 + /* Don't copy over a writeback job, they are used only once */ 3665 + state->writeback_job = NULL; 3672 3666 } 3673 3667 EXPORT_SYMBOL(__drm_atomic_helper_connector_duplicate_state); 3674 3668
+120 -1
drivers/gpu/drm/drm_connector.c
··· 87 87 { DRM_MODE_CONNECTOR_VIRTUAL, "Virtual" }, 88 88 { DRM_MODE_CONNECTOR_DSI, "DSI" }, 89 89 { DRM_MODE_CONNECTOR_DPI, "DPI" }, 90 + { DRM_MODE_CONNECTOR_WRITEBACK, "Writeback" }, 90 91 }; 91 92 92 93 void drm_connector_ida_init(void) ··· 196 195 struct ida *connector_ida = 197 196 &drm_connector_enum_list[connector_type].ida; 198 197 198 + WARN_ON(drm_drv_uses_atomic_modeset(dev) && 199 + (!funcs->atomic_destroy_state || 200 + !funcs->atomic_duplicate_state)); 201 + 199 202 ret = __drm_mode_object_add(dev, &connector->base, 200 203 DRM_MODE_OBJECT_CONNECTOR, 201 204 false, drm_connector_free); ··· 254 249 config->num_connector++; 255 250 spin_unlock_irq(&config->connector_list_lock); 256 251 257 - if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL) 252 + if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL && 253 + connector_type != DRM_MODE_CONNECTOR_WRITEBACK) 258 254 drm_object_attach_property(&connector->base, 259 255 config->edid_property, 260 256 0); ··· 726 720 { DRM_MODE_PICTURE_ASPECT_16_9, "16:9" }, 727 721 }; 728 722 723 + static const struct drm_prop_enum_list drm_content_type_enum_list[] = { 724 + { DRM_MODE_CONTENT_TYPE_NO_DATA, "No Data" }, 725 + { DRM_MODE_CONTENT_TYPE_GRAPHICS, "Graphics" }, 726 + { DRM_MODE_CONTENT_TYPE_PHOTO, "Photo" }, 727 + { DRM_MODE_CONTENT_TYPE_CINEMA, "Cinema" }, 728 + { DRM_MODE_CONTENT_TYPE_GAME, "Game" }, 729 + }; 730 + 729 731 static const struct drm_prop_enum_list drm_panel_orientation_enum_list[] = { 730 732 { DRM_MODE_PANEL_ORIENTATION_NORMAL, "Normal" }, 731 733 { DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP, "Upside Down" }, ··· 1011 997 EXPORT_SYMBOL(drm_mode_create_dvi_i_properties); 1012 998 1013 999 /** 1000 + * DOC: HDMI connector properties 1001 + * 1002 + * content type (HDMI specific): 1003 + * Indicates content type setting to be used in HDMI infoframes to indicate 1004 + * content type for the external device, so that it adjusts its display 1005 + * settings accordingly. 
1006 + * 1007 + * The value of this property can be one of the following: 1008 + * 1009 + * No Data: 1010 + * Content type is unknown 1011 + * Graphics: 1012 + * Content type is graphics 1013 + * Photo: 1014 + * Content type is photo 1015 + * Cinema: 1016 + * Content type is cinema 1017 + * Game: 1018 + * Content type is game 1019 + * 1020 + * Drivers can set up this property by calling 1021 + * drm_connector_attach_content_type_property(). Decoding to 1022 + * infoframe values is done through 1023 + * drm_hdmi_get_content_type_from_property() and 1024 + * drm_hdmi_get_itc_bit_from_property(). 1025 + */ 1026 + 1027 + /** 1028 + * drm_connector_attach_content_type_property - attach content-type property 1029 + * @connector: connector to attach content type property on. 1030 + * 1031 + * Called by a driver the first time an HDMI connector is made. 1032 + */ 1033 + int drm_connector_attach_content_type_property(struct drm_connector *connector) 1034 + { 1035 + if (!drm_mode_create_content_type_property(connector->dev)) 1036 + drm_object_attach_property(&connector->base, 1037 + connector->dev->mode_config.content_type_property, 1038 + DRM_MODE_CONTENT_TYPE_NO_DATA); 1039 + return 0; 1040 + } 1041 + EXPORT_SYMBOL(drm_connector_attach_content_type_property); 1042 + 1043 + 1044 + /** 1045 + * drm_hdmi_avi_infoframe_content_type() - fill the HDMI AVI infoframe 1046 + * content type information, based 1047 + * on the corresponding DRM property. 
1048 + * @frame: HDMI AVI infoframe 1049 + * @conn_state: DRM display connector state 1050 + * 1051 + */ 1052 + void drm_hdmi_avi_infoframe_content_type(struct hdmi_avi_infoframe *frame, 1053 + const struct drm_connector_state *conn_state) 1054 + { 1055 + switch (conn_state->content_type) { 1056 + case DRM_MODE_CONTENT_TYPE_GRAPHICS: 1057 + frame->content_type = HDMI_CONTENT_TYPE_GRAPHICS; 1058 + break; 1059 + case DRM_MODE_CONTENT_TYPE_CINEMA: 1060 + frame->content_type = HDMI_CONTENT_TYPE_CINEMA; 1061 + break; 1062 + case DRM_MODE_CONTENT_TYPE_GAME: 1063 + frame->content_type = HDMI_CONTENT_TYPE_GAME; 1064 + break; 1065 + case DRM_MODE_CONTENT_TYPE_PHOTO: 1066 + frame->content_type = HDMI_CONTENT_TYPE_PHOTO; 1067 + break; 1068 + default: 1069 + /* Graphics is the default(0) */ 1070 + frame->content_type = HDMI_CONTENT_TYPE_GRAPHICS; 1071 + } 1072 + 1073 + frame->itc = conn_state->content_type != DRM_MODE_CONTENT_TYPE_NO_DATA; 1074 + } 1075 + EXPORT_SYMBOL(drm_hdmi_avi_infoframe_content_type); 1076 + 1077 + /** 1014 1078 * drm_create_tv_properties - create TV specific connector properties 1015 1079 * @dev: DRM device 1016 1080 * @num_modes: number of different TV formats (modes) supported ··· 1351 1259 return 0; 1352 1260 } 1353 1261 EXPORT_SYMBOL(drm_mode_create_aspect_ratio_property); 1262 + 1263 + /** 1264 + * drm_mode_create_content_type_property - create content type property 1265 + * @dev: DRM device 1266 + * 1267 + * Called by a driver the first time it's needed, must be attached to desired 1268 + * connectors. 1269 + * 1270 + * Returns: 1271 + * Zero on success, negative errno on failure. 
1272 + */ 1273 + int drm_mode_create_content_type_property(struct drm_device *dev) 1274 + { 1275 + if (dev->mode_config.content_type_property) 1276 + return 0; 1277 + 1278 + dev->mode_config.content_type_property = 1279 + drm_property_create_enum(dev, 0, "content type", 1280 + drm_content_type_enum_list, 1281 + ARRAY_SIZE(drm_content_type_enum_list)); 1282 + 1283 + if (dev->mode_config.content_type_property == NULL) 1284 + return -ENOMEM; 1285 + 1286 + return 0; 1287 + } 1288 + EXPORT_SYMBOL(drm_mode_create_content_type_property); 1354 1289 1355 1290 /** 1356 1291 * drm_mode_create_suggested_offset_properties - create suggests offset properties
+25 -10
drivers/gpu/drm/drm_crtc.c
··· 286 286 if (WARN_ON(config->num_crtc >= 32)) 287 287 return -EINVAL; 288 288 289 + WARN_ON(drm_drv_uses_atomic_modeset(dev) && 290 + (!funcs->atomic_destroy_state || 291 + !funcs->atomic_duplicate_state)); 292 + 289 293 crtc->dev = dev; 290 294 crtc->funcs = funcs; 291 295 ··· 473 469 * connectors from it), hence we need to refcount the fbs across all 474 470 * crtcs. Atomic modeset will have saner semantics ... 475 471 */ 476 - drm_for_each_crtc(tmp, crtc->dev) 477 - tmp->primary->old_fb = tmp->primary->fb; 472 + drm_for_each_crtc(tmp, crtc->dev) { 473 + struct drm_plane *plane = tmp->primary; 474 + 475 + plane->old_fb = plane->fb; 476 + } 478 477 479 478 fb = set->fb; 480 479 481 480 ret = crtc->funcs->set_config(set, ctx); 482 481 if (ret == 0) { 483 - crtc->primary->crtc = fb ? crtc : NULL; 484 - crtc->primary->fb = fb; 482 + struct drm_plane *plane = crtc->primary; 483 + 484 + if (!plane->state) { 485 + plane->crtc = fb ? crtc : NULL; 486 + plane->fb = fb; 487 + } 485 488 } 486 489 487 490 drm_for_each_crtc(tmp, crtc->dev) { 488 - if (tmp->primary->fb) 489 - drm_framebuffer_get(tmp->primary->fb); 490 - if (tmp->primary->old_fb) 491 - drm_framebuffer_put(tmp->primary->old_fb); 492 - tmp->primary->old_fb = NULL; 491 + struct drm_plane *plane = tmp->primary; 492 + 493 + if (plane->fb) 494 + drm_framebuffer_get(plane->fb); 495 + if (plane->old_fb) 496 + drm_framebuffer_put(plane->old_fb); 497 + plane->old_fb = NULL; 493 498 } 494 499 495 500 return ret; ··· 653 640 654 641 ret = drm_mode_convert_umode(dev, mode, &crtc_req->mode); 655 642 if (ret) { 656 - DRM_DEBUG_KMS("Invalid mode\n"); 643 + DRM_DEBUG_KMS("Invalid mode (ret=%d, status=%s)\n", 644 + ret, drm_get_mode_status_name(mode->status)); 645 + drm_mode_debug_printmodeline(mode); 657 646 goto out; 658 647 } 659 648
+3
drivers/gpu/drm/drm_crtc_internal.h
··· 56 56 int drm_modeset_register_all(struct drm_device *dev); 57 57 void drm_modeset_unregister_all(struct drm_device *dev); 58 58 59 + /* drm_modes.c */ 60 + const char *drm_get_mode_status_name(enum drm_mode_status status); 61 + 59 62 /* IOCTLs */ 60 63 int drm_mode_getresources(struct drm_device *dev, 61 64 void *data, struct drm_file *file_priv);
+144 -135
drivers/gpu/drm/drm_edid.c
··· 163 163 /* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/ 164 164 { "ETR", 13896, EDID_QUIRK_FORCE_8BPC }, 165 165 166 - /* HTC Vive VR Headset */ 166 + /* HTC Vive and Vive Pro VR Headsets */ 167 167 { "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP }, 168 + { "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP }, 168 169 169 170 /* Oculus Rift DK1, DK2, and CV1 VR Headsets */ 170 171 { "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP }, ··· 688 687 static const struct drm_display_mode edid_cea_modes[] = { 689 688 /* 0 - dummy, VICs start at 1 */ 690 689 { }, 691 - /* 1 - 640x480@60Hz */ 690 + /* 1 - 640x480@60Hz 4:3 */ 692 691 { DRM_MODE("640x480", DRM_MODE_TYPE_DRIVER, 25175, 640, 656, 693 692 752, 800, 0, 480, 490, 492, 525, 0, 694 693 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 695 694 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 696 - /* 2 - 720x480@60Hz */ 695 + /* 2 - 720x480@60Hz 4:3 */ 697 696 { DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 27000, 720, 736, 698 697 798, 858, 0, 480, 489, 495, 525, 0, 699 698 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 700 699 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 701 - /* 3 - 720x480@60Hz */ 700 + /* 3 - 720x480@60Hz 16:9 */ 702 701 { DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 27000, 720, 736, 703 702 798, 858, 0, 480, 489, 495, 525, 0, 704 703 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 705 704 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 706 - /* 4 - 1280x720@60Hz */ 705 + /* 4 - 1280x720@60Hz 16:9 */ 707 706 { DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1390, 708 707 1430, 1650, 0, 720, 725, 730, 750, 0, 709 708 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 710 709 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 711 - /* 5 - 1920x1080i@60Hz */ 710 + /* 5 - 1920x1080i@60Hz 16:9 */ 712 711 { DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008, 713 712 2052, 2200, 0, 1080, 1084, 1094, 1125, 0, 714 713 
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC | 715 - DRM_MODE_FLAG_INTERLACE), 714 + DRM_MODE_FLAG_INTERLACE), 716 715 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 717 - /* 6 - 720(1440)x480i@60Hz */ 716 + /* 6 - 720(1440)x480i@60Hz 4:3 */ 718 717 { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 13500, 720, 739, 719 718 801, 858, 0, 480, 488, 494, 525, 0, 720 719 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 721 - DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 720 + DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 722 721 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 723 - /* 7 - 720(1440)x480i@60Hz */ 722 + /* 7 - 720(1440)x480i@60Hz 16:9 */ 724 723 { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 13500, 720, 739, 725 724 801, 858, 0, 480, 488, 494, 525, 0, 726 725 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 727 - DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 726 + DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 728 727 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 729 - /* 8 - 720(1440)x240@60Hz */ 728 + /* 8 - 720(1440)x240@60Hz 4:3 */ 730 729 { DRM_MODE("720x240", DRM_MODE_TYPE_DRIVER, 13500, 720, 739, 731 730 801, 858, 0, 240, 244, 247, 262, 0, 732 731 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 733 - DRM_MODE_FLAG_DBLCLK), 732 + DRM_MODE_FLAG_DBLCLK), 734 733 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 735 - /* 9 - 720(1440)x240@60Hz */ 734 + /* 9 - 720(1440)x240@60Hz 16:9 */ 736 735 { DRM_MODE("720x240", DRM_MODE_TYPE_DRIVER, 13500, 720, 739, 737 736 801, 858, 0, 240, 244, 247, 262, 0, 738 737 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 739 - DRM_MODE_FLAG_DBLCLK), 738 + DRM_MODE_FLAG_DBLCLK), 740 739 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 741 - /* 10 - 2880x480i@60Hz */ 740 + /* 10 - 2880x480i@60Hz 4:3 */ 742 741 { DRM_MODE("2880x480i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956, 743 742 3204, 3432, 0, 480, 488, 494, 525, 0, 744 743 
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 745 - DRM_MODE_FLAG_INTERLACE), 744 + DRM_MODE_FLAG_INTERLACE), 746 745 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 747 - /* 11 - 2880x480i@60Hz */ 746 + /* 11 - 2880x480i@60Hz 16:9 */ 748 747 { DRM_MODE("2880x480i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956, 749 748 3204, 3432, 0, 480, 488, 494, 525, 0, 750 749 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 751 - DRM_MODE_FLAG_INTERLACE), 750 + DRM_MODE_FLAG_INTERLACE), 752 751 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 753 - /* 12 - 2880x240@60Hz */ 752 + /* 12 - 2880x240@60Hz 4:3 */ 754 753 { DRM_MODE("2880x240", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956, 755 754 3204, 3432, 0, 240, 244, 247, 262, 0, 756 755 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 757 756 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 758 - /* 13 - 2880x240@60Hz */ 757 + /* 13 - 2880x240@60Hz 16:9 */ 759 758 { DRM_MODE("2880x240", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956, 760 759 3204, 3432, 0, 240, 244, 247, 262, 0, 761 760 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 762 761 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 763 - /* 14 - 1440x480@60Hz */ 762 + /* 14 - 1440x480@60Hz 4:3 */ 764 763 { DRM_MODE("1440x480", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1472, 765 764 1596, 1716, 0, 480, 489, 495, 525, 0, 766 765 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 767 766 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 768 - /* 15 - 1440x480@60Hz */ 767 + /* 15 - 1440x480@60Hz 16:9 */ 769 768 { DRM_MODE("1440x480", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1472, 770 769 1596, 1716, 0, 480, 489, 495, 525, 0, 771 770 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 772 771 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 773 - /* 16 - 1920x1080@60Hz */ 772 + /* 16 - 1920x1080@60Hz 16:9 */ 774 773 { DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008, 775 774 2052, 2200, 0, 
1080, 1084, 1089, 1125, 0, 776 775 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 777 776 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 778 - /* 17 - 720x576@50Hz */ 777 + /* 17 - 720x576@50Hz 4:3 */ 779 778 { DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 27000, 720, 732, 780 779 796, 864, 0, 576, 581, 586, 625, 0, 781 780 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 782 781 .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 783 - /* 18 - 720x576@50Hz */ 782 + /* 18 - 720x576@50Hz 16:9 */ 784 783 { DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 27000, 720, 732, 785 784 796, 864, 0, 576, 581, 586, 625, 0, 786 785 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC), 787 786 .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 788 - /* 19 - 1280x720@50Hz */ 787 + /* 19 - 1280x720@50Hz 16:9 */ 789 788 { DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1720, 790 789 1760, 1980, 0, 720, 725, 730, 750, 0, 791 790 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 792 791 .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 793 - /* 20 - 1920x1080i@50Hz */ 792 + /* 20 - 1920x1080i@50Hz 16:9 */ 794 793 { DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448, 795 794 2492, 2640, 0, 1080, 1084, 1094, 1125, 0, 796 795 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC | 797 - DRM_MODE_FLAG_INTERLACE), 796 + DRM_MODE_FLAG_INTERLACE), 798 797 .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 799 - /* 21 - 720(1440)x576i@50Hz */ 798 + /* 21 - 720(1440)x576i@50Hz 4:3 */ 800 799 { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 13500, 720, 732, 801 800 795, 864, 0, 576, 580, 586, 625, 0, 802 801 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC | 803 - DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 802 + DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK), 804 803 .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, }, 805 - /* 22 - 720(1440)x576i@50Hz */ 804 + /* 22 - 720(1440)x576i@50Hz 16:9 */ 806 805 
 	{ DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
 		   795, 864, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 23 - 720(1440)x288@50Hz */
+	/* 23 - 720(1440)x288@50Hz 4:3 */
 	{ DRM_MODE("720x288", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
 		   795, 864, 0, 288, 290, 293, 312, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 24 - 720(1440)x288@50Hz */
+	/* 24 - 720(1440)x288@50Hz 16:9 */
 	{ DRM_MODE("720x288", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
 		   795, 864, 0, 288, 290, 293, 312, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 25 - 2880x576i@50Hz */
+	/* 25 - 2880x576i@50Hz 4:3 */
 	{ DRM_MODE("2880x576i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
 		   3180, 3456, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE),
+		   DRM_MODE_FLAG_INTERLACE),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 26 - 2880x576i@50Hz */
+	/* 26 - 2880x576i@50Hz 16:9 */
 	{ DRM_MODE("2880x576i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
 		   3180, 3456, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE),
+		   DRM_MODE_FLAG_INTERLACE),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 27 - 2880x288@50Hz */
+	/* 27 - 2880x288@50Hz 4:3 */
 	{ DRM_MODE("2880x288", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
 		   3180, 3456, 0, 288, 290, 293, 312, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 28 - 2880x288@50Hz */
+	/* 28 - 2880x288@50Hz 16:9 */
 	{ DRM_MODE("2880x288", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
 		   3180, 3456, 0, 288, 290, 293, 312, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 29 - 1440x576@50Hz */
+	/* 29 - 1440x576@50Hz 4:3 */
 	{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
 		   1592, 1728, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 30 - 1440x576@50Hz */
+	/* 30 - 1440x576@50Hz 16:9 */
 	{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
 		   1592, 1728, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 31 - 1920x1080@50Hz */
+	/* 31 - 1920x1080@50Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 32 - 1920x1080@24Hz */
+	/* 32 - 1920x1080@24Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2558,
 		   2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 33 - 1920x1080@25Hz */
+	/* 33 - 1920x1080@25Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 34 - 1920x1080@30Hz */
+	/* 34 - 1920x1080@30Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 35 - 2880x480@60Hz */
+	/* 35 - 2880x480@60Hz 4:3 */
 	{ DRM_MODE("2880x480", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2944,
 		   3192, 3432, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 36 - 2880x480@60Hz */
+	/* 36 - 2880x480@60Hz 16:9 */
 	{ DRM_MODE("2880x480", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2944,
 		   3192, 3432, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 37 - 2880x576@50Hz */
+	/* 37 - 2880x576@50Hz 4:3 */
 	{ DRM_MODE("2880x576", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2928,
 		   3184, 3456, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 38 - 2880x576@50Hz */
+	/* 38 - 2880x576@50Hz 16:9 */
 	{ DRM_MODE("2880x576", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2928,
 		   3184, 3456, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 39 - 1920x1080i@50Hz */
+	/* 39 - 1920x1080i@50Hz 16:9 */
 	{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 72000, 1920, 1952,
 		   2120, 2304, 0, 1080, 1126, 1136, 1250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE),
+		   DRM_MODE_FLAG_INTERLACE),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 40 - 1920x1080i@100Hz */
+	/* 40 - 1920x1080i@100Hz 16:9 */
 	{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1094, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
-		   DRM_MODE_FLAG_INTERLACE),
+		   DRM_MODE_FLAG_INTERLACE),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 41 - 1280x720@100Hz */
+	/* 41 - 1280x720@100Hz 16:9 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1720,
 		   1760, 1980, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 42 - 720x576@100Hz */
+	/* 42 - 720x576@100Hz 4:3 */
 	{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
 		   796, 864, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 43 - 720x576@100Hz */
+	/* 43 - 720x576@100Hz 16:9 */
 	{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
 		   796, 864, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 44 - 720(1440)x576i@100Hz */
+	/* 44 - 720(1440)x576i@100Hz 4:3 */
 	{ DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
 		   795, 864, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 45 - 720(1440)x576i@100Hz */
+	/* 45 - 720(1440)x576i@100Hz 16:9 */
 	{ DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
 		   795, 864, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 46 - 1920x1080i@120Hz */
+	/* 46 - 1920x1080i@120Hz 16:9 */
 	{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1094, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
-		   DRM_MODE_FLAG_INTERLACE),
+		   DRM_MODE_FLAG_INTERLACE),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 47 - 1280x720@120Hz */
+	/* 47 - 1280x720@120Hz 16:9 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1390,
 		   1430, 1650, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 48 - 720x480@120Hz */
+	/* 48 - 720x480@120Hz 4:3 */
 	{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 54000, 720, 736,
 		   798, 858, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 49 - 720x480@120Hz */
+	/* 49 - 720x480@120Hz 16:9 */
 	{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 54000, 720, 736,
 		   798, 858, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 50 - 720(1440)x480i@120Hz */
+	/* 50 - 720(1440)x480i@120Hz 4:3 */
 	{ DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 27000, 720, 739,
 		   801, 858, 0, 480, 488, 494, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 51 - 720(1440)x480i@120Hz */
+	/* 51 - 720(1440)x480i@120Hz 16:9 */
 	{ DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 27000, 720, 739,
 		   801, 858, 0, 480, 488, 494, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 52 - 720x576@200Hz */
+	/* 52 - 720x576@200Hz 4:3 */
 	{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 108000, 720, 732,
 		   796, 864, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 53 - 720x576@200Hz */
+	/* 53 - 720x576@200Hz 16:9 */
 	{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 108000, 720, 732,
 		   796, 864, 0, 576, 581, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 54 - 720(1440)x576i@200Hz */
+	/* 54 - 720(1440)x576i@200Hz 4:3 */
 	{ DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
 		   795, 864, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 55 - 720(1440)x576i@200Hz */
+	/* 55 - 720(1440)x576i@200Hz 16:9 */
 	{ DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
 		   795, 864, 0, 576, 580, 586, 625, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 56 - 720x480@240Hz */
+	/* 56 - 720x480@240Hz 4:3 */
 	{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 108000, 720, 736,
 		   798, 858, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 57 - 720x480@240Hz */
+	/* 57 - 720x480@240Hz 16:9 */
 	{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 108000, 720, 736,
 		   798, 858, 0, 480, 489, 495, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
 	  .vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 58 - 720(1440)x480i@240Hz */
+	/* 58 - 720(1440)x480i@240Hz 4:3 */
 	{ DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 54000, 720, 739,
 		   801, 858, 0, 480, 488, 494, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
-	/* 59 - 720(1440)x480i@240Hz */
+	/* 59 - 720(1440)x480i@240Hz 16:9 */
 	{ DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 54000, 720, 739,
 		   801, 858, 0, 480, 488, 494, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
-		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
+		   DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
 	  .vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 60 - 1280x720@24Hz */
+	/* 60 - 1280x720@24Hz 16:9 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 59400, 1280, 3040,
 		   3080, 3300, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 61 - 1280x720@25Hz */
+	/* 61 - 1280x720@25Hz 16:9 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3700,
 		   3740, 3960, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 62 - 1280x720@30Hz */
+	/* 62 - 1280x720@30Hz 16:9 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3040,
 		   3080, 3300, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 63 - 1920x1080@120Hz */
+	/* 63 - 1920x1080@120Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
-	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 64 - 1920x1080@100Hz */
+	/* 64 - 1920x1080@100Hz 16:9 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
-	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 65 - 1280x720@24Hz */
+	/* 65 - 1280x720@24Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 59400, 1280, 3040,
 		   3080, 3300, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 66 - 1280x720@25Hz */
+	/* 66 - 1280x720@25Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3700,
 		   3740, 3960, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 67 - 1280x720@30Hz */
+	/* 67 - 1280x720@30Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3040,
 		   3080, 3300, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 68 - 1280x720@50Hz */
+	/* 68 - 1280x720@50Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1720,
 		   1760, 1980, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 69 - 1280x720@60Hz */
+	/* 69 - 1280x720@60Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1390,
 		   1430, 1650, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 70 - 1280x720@100Hz */
+	/* 70 - 1280x720@100Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1720,
 		   1760, 1980, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 71 - 1280x720@120Hz */
+	/* 71 - 1280x720@120Hz 64:27 */
 	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1390,
 		   1430, 1650, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 72 - 1920x1080@24Hz */
+	/* 72 - 1920x1080@24Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2558,
 		   2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 73 - 1920x1080@25Hz */
+	/* 73 - 1920x1080@25Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 74 - 1920x1080@30Hz */
+	/* 74 - 1920x1080@30Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 75 - 1920x1080@50Hz */
+	/* 75 - 1920x1080@50Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 76 - 1920x1080@60Hz */
+	/* 76 - 1920x1080@60Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 77 - 1920x1080@100Hz */
+	/* 77 - 1920x1080@100Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2448,
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 78 - 1920x1080@120Hz */
+	/* 78 - 1920x1080@120Hz 64:27 */
 	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2008,
 		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 79 - 1680x720@24Hz */
+	/* 79 - 1680x720@24Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 3040,
 		   3080, 3300, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 80 - 1680x720@25Hz */
+	/* 80 - 1680x720@25Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 2908,
 		   2948, 3168, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 81 - 1680x720@30Hz */
+	/* 81 - 1680x720@30Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 2380,
 		   2420, 2640, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 82 - 1680x720@50Hz */
+	/* 82 - 1680x720@50Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 82500, 1680, 1940,
 		   1980, 2200, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 83 - 1680x720@60Hz */
+	/* 83 - 1680x720@60Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 99000, 1680, 1940,
 		   1980, 2200, 0, 720, 725, 730, 750, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 84 - 1680x720@100Hz */
+	/* 84 - 1680x720@100Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 165000, 1680, 1740,
 		   1780, 2000, 0, 720, 725, 730, 825, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 85 - 1680x720@120Hz */
+	/* 85 - 1680x720@120Hz 64:27 */
 	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 198000, 1680, 1740,
 		   1780, 2000, 0, 720, 725, 730, 825, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 86 - 2560x1080@24Hz */
+	/* 86 - 2560x1080@24Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 99000, 2560, 3558,
 		   3602, 3750, 0, 1080, 1084, 1089, 1100, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 87 - 2560x1080@25Hz */
+	/* 87 - 2560x1080@25Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 90000, 2560, 3008,
 		   3052, 3200, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 88 - 2560x1080@30Hz */
+	/* 88 - 2560x1080@30Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 118800, 2560, 3328,
 		   3372, 3520, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 89 - 2560x1080@50Hz */
+	/* 89 - 2560x1080@50Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 185625, 2560, 3108,
 		   3152, 3300, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 90 - 2560x1080@60Hz */
+	/* 90 - 2560x1080@60Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 198000, 2560, 2808,
 		   2852, 3000, 0, 1080, 1084, 1089, 1100, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 91 - 2560x1080@100Hz */
+	/* 91 - 2560x1080@100Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 371250, 2560, 2778,
 		   2822, 2970, 0, 1080, 1084, 1089, 1250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 92 - 2560x1080@120Hz */
+	/* 92 - 2560x1080@120Hz 64:27 */
 	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 495000, 2560, 3108,
 		   3152, 3300, 0, 1080, 1084, 1089, 1250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 93 - 3840x2160p@24Hz 16:9 */
+	/* 93 - 3840x2160@24Hz 16:9 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 5116,
 		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 94 - 3840x2160p@25Hz 16:9 */
+	/* 94 - 3840x2160@25Hz 16:9 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4896,
 		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 95 - 3840x2160p@30Hz 16:9 */
+	/* 95 - 3840x2160@30Hz 16:9 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4016,
 		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 96 - 3840x2160p@50Hz 16:9 */
+	/* 96 - 3840x2160@50Hz 16:9 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4896,
 		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 97 - 3840x2160p@60Hz 16:9 */
+	/* 97 - 3840x2160@60Hz 16:9 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4016,
 		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
-	/* 98 - 4096x2160p@24Hz 256:135 */
+	/* 98 - 4096x2160@24Hz 256:135 */
 	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 5116,
 		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
-	/* 99 - 4096x2160p@25Hz 256:135 */
+	/* 99 - 4096x2160@25Hz 256:135 */
 	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 5064,
 		   5152, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
-	/* 100 - 4096x2160p@30Hz 256:135 */
+	/* 100 - 4096x2160@30Hz 256:135 */
 	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 4184,
 		   4272, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
-	/* 101 - 4096x2160p@50Hz 256:135 */
+	/* 101 - 4096x2160@50Hz 256:135 */
 	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 5064,
 		   5152, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
-	/* 102 - 4096x2160p@60Hz 256:135 */
+	/* 102 - 4096x2160@60Hz 256:135 */
 	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 4184,
 		   4272, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
-	/* 103 - 3840x2160p@24Hz 64:27 */
+	/* 103 - 3840x2160@24Hz 64:27 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 5116,
 		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 104 - 3840x2160p@25Hz 64:27 */
+	/* 104 - 3840x2160@25Hz 64:27 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4896,
 		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 105 - 3840x2160p@30Hz 64:27 */
+	/* 105 - 3840x2160@30Hz 64:27 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4016,
 		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 106 - 3840x2160p@50Hz 64:27 */
+	/* 106 - 3840x2160@50Hz 64:27 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4896,
 		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
-	/* 107 - 3840x2160p@60Hz 64:27 */
+	/* 107 - 3840x2160@60Hz 64:27 */
 	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4016,
 		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
···
 }
 
 	frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE;
+
+	/*
+	 * As some drivers don't support atomic, we can't use connector state.
+	 * So just initialize the frame with default values, just the same way
+	 * as it's done with other properties here.
+	 */
+	frame->content_type = HDMI_CONTENT_TYPE_GRAPHICS;
+	frame->itc = 0;
 
 	/*
 	 * Populate picture aspect ratio from either
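The hunk above defaults the AVI infoframe's content type to "Graphics" with the IT-content bit clear whenever connector state is unavailable. A minimal userspace mirror of that defaulting logic, using simplified stand-ins for the kernel's `enum hdmi_content_type` and `struct hdmi_avi_infoframe` (the real definitions live in `linux/hdmi.h`; the trimmed struct here is illustrative only):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's enum hdmi_content_type;
 * the enumerator order matches linux/hdmi.h (GRAPHICS == 0). */
enum hdmi_content_type {
	HDMI_CONTENT_TYPE_GRAPHICS,
	HDMI_CONTENT_TYPE_PHOTO,
	HDMI_CONTENT_TYPE_CINEMA,
	HDMI_CONTENT_TYPE_GAME,
};

/* Trimmed, hypothetical stand-in for struct hdmi_avi_infoframe:
 * only the two fields this hunk touches. */
struct avi_infoframe {
	enum hdmi_content_type content_type;
	int itc;
};

/* Mirror of the new default: with no connector state to consult,
 * report "Graphics" and leave the IT-content (itc) bit clear, the
 * same way the other infoframe properties are defaulted here. */
static void set_default_content_type(struct avi_infoframe *frame)
{
	frame->content_type = HDMI_CONTENT_TYPE_GRAPHICS;
	frame->itc = 0;
}
```

Non-atomic drivers thus always advertise a neutral content type, and only atomic paths that do have connector state can override it later.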
+1 -8
drivers/gpu/drm/drm_fb_helper.c
···
 	struct drm_plane *plane;
 	struct drm_atomic_state *state;
 	int i, ret;
-	unsigned int plane_mask;
 	struct drm_modeset_acquire_ctx ctx;
 
 	drm_modeset_acquire_init(&ctx, 0);
···
 
 	state->acquire_ctx = &ctx;
 retry:
-	plane_mask = 0;
 	drm_for_each_plane(plane, dev) {
 		plane_state = drm_atomic_get_plane_state(state, plane);
 		if (IS_ERR(plane_state)) {
···
 		}
 
 		plane_state->rotation = DRM_MODE_ROTATE_0;
-
-		plane->old_fb = plane->fb;
-		plane_mask |= 1 << drm_plane_index(plane);
 
 		/* disable non-primary: */
 		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
···
 	ret = drm_atomic_commit(state);
 
 out_state:
-	drm_atomic_clean_old_fb(dev, plane_mask, ret);
-
 	if (ret == -EDEADLK)
 		goto backoff;
 
···
  * @info: fbdev registered by the helper
  * @rect: info about rectangle to fill
  *
- * A wrapper around cfb_imageblit implemented by fbdev core
+ * A wrapper around cfb_fillrect implemented by fbdev core
  */
 void drm_fb_helper_cfb_fillrect(struct fb_info *info,
 				const struct fb_fillrect *rect)
-5
drivers/gpu/drm/drm_framebuffer.c
···
 			goto unlock;
 
 		plane_mask |= BIT(drm_plane_index(plane));
-
-		plane->old_fb = plane->fb;
 	}
 
 	/* This list is only filled when disable_crtcs is set. */
···
 	ret = drm_atomic_commit(state);
 
 unlock:
-	if (plane_mask)
-		drm_atomic_clean_old_fb(dev, plane_mask, ret);
-
 	if (ret == -EDEADLK) {
 		drm_atomic_state_clear(state);
 		drm_modeset_backoff(&ctx);
+1 -1
drivers/gpu/drm/drm_gem_framebuffer_helper.c
···
 	struct dma_buf *dma_buf;
 	struct dma_fence *fence;
 
-	if (plane->state->fb == state->fb || !state->fb)
+	if (!state->fb)
 		return 0;
 
 	dma_buf = drm_gem_fb_get_obj(state->fb, 0)->dma_buf;
+7
drivers/gpu/drm/drm_ioctl.c
···
 			return -EINVAL;
 		file_priv->aspect_ratio_allowed = req->value;
 		break;
+	case DRM_CLIENT_CAP_WRITEBACK_CONNECTORS:
+		if (!file_priv->atomic)
+			return -EINVAL;
+		if (req->value > 1)
+			return -EINVAL;
+		file_priv->writeback_connectors = req->value;
+		break;
 	default:
 		return -EINVAL;
 	}
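The new client cap only lets atomic clients opt in to seeing writeback connectors, and treats the value as a strict boolean. A self-contained sketch of that validation, using a hypothetical `drm_client_state` stand-in for the relevant `drm_file` bits (the cap's uAPI value, 5, is taken from include/uapi/drm/drm.h):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Value from the DRM uAPI (include/uapi/drm/drm.h). */
#define DRM_CLIENT_CAP_WRITEBACK_CONNECTORS 5

/* Hypothetical stand-in for the bits of struct drm_file this cap touches. */
struct drm_client_state {
	bool atomic;
	bool writeback_connectors;
};

/* Mirror of the validation added to drm_setclientcap(): the cap is
 * boolean, and only atomic clients may opt in to writeback connectors. */
static int set_writeback_cap(struct drm_client_state *file, uint64_t value)
{
	if (!file->atomic)
		return -EINVAL;
	if (value > 1)
		return -EINVAL;
	file->writeback_connectors = value;
	return 0;
}
```

Gating on `file_priv->atomic` keeps legacy (non-atomic) userspace from ever enumerating writeback connectors it could not drive correctly.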
+64 -27
drivers/gpu/drm/drm_mm.c
··· 239 239 #define HOLE_SIZE(NODE) ((NODE)->hole_size) 240 240 #define HOLE_ADDR(NODE) (__drm_mm_hole_node_start(NODE)) 241 241 242 + static u64 rb_to_hole_size(struct rb_node *rb) 243 + { 244 + return rb_entry(rb, struct drm_mm_node, rb_hole_size)->hole_size; 245 + } 246 + 247 + static void insert_hole_size(struct rb_root_cached *root, 248 + struct drm_mm_node *node) 249 + { 250 + struct rb_node **link = &root->rb_root.rb_node, *rb = NULL; 251 + u64 x = node->hole_size; 252 + bool first = true; 253 + 254 + while (*link) { 255 + rb = *link; 256 + if (x > rb_to_hole_size(rb)) { 257 + link = &rb->rb_left; 258 + } else { 259 + link = &rb->rb_right; 260 + first = false; 261 + } 262 + } 263 + 264 + rb_link_node(&node->rb_hole_size, rb, link); 265 + rb_insert_color_cached(&node->rb_hole_size, root, first); 266 + } 267 + 242 268 static void add_hole(struct drm_mm_node *node) 243 269 { 244 270 struct drm_mm *mm = node->mm; ··· 273 247 __drm_mm_hole_node_end(node) - __drm_mm_hole_node_start(node); 274 248 DRM_MM_BUG_ON(!drm_mm_hole_follows(node)); 275 249 276 - RB_INSERT(mm->holes_size, rb_hole_size, HOLE_SIZE); 250 + insert_hole_size(&mm->holes_size, node); 277 251 RB_INSERT(mm->holes_addr, rb_hole_addr, HOLE_ADDR); 278 252 279 253 list_add(&node->hole_stack, &mm->hole_stack); ··· 284 258 DRM_MM_BUG_ON(!drm_mm_hole_follows(node)); 285 259 286 260 list_del(&node->hole_stack); 287 - rb_erase(&node->rb_hole_size, &node->mm->holes_size); 261 + rb_erase_cached(&node->rb_hole_size, &node->mm->holes_size); 288 262 rb_erase(&node->rb_hole_addr, &node->mm->holes_addr); 289 263 node->hole_size = 0; 290 264 ··· 308 282 309 283 static struct drm_mm_node *best_hole(struct drm_mm *mm, u64 size) 310 284 { 311 - struct rb_node *best = NULL; 312 - struct rb_node **link = &mm->holes_size.rb_node; 285 + struct rb_node *rb = mm->holes_size.rb_root.rb_node; 286 + struct drm_mm_node *best = NULL; 313 287 314 - while (*link) { 315 - struct rb_node *rb = *link; 288 + do { 289 + struct 
drm_mm_node *node = 290 + rb_entry(rb, struct drm_mm_node, rb_hole_size); 316 291 317 - if (size <= rb_hole_size(rb)) { 318 - link = &rb->rb_left; 319 - best = rb; 292 + if (size <= node->hole_size) { 293 + best = node; 294 + rb = rb->rb_right; 320 295 } else { 321 - link = &rb->rb_right; 296 + rb = rb->rb_left; 322 297 } 323 - } 298 + } while (rb); 324 299 325 - return rb_hole_size_to_node(best); 300 + return best; 326 301 } 327 302 328 303 static struct drm_mm_node *find_hole(struct drm_mm *mm, u64 addr) 329 304 { 305 + struct rb_node *rb = mm->holes_addr.rb_node; 330 306 struct drm_mm_node *node = NULL; 331 - struct rb_node **link = &mm->holes_addr.rb_node; 332 307 333 - while (*link) { 308 + while (rb) { 334 309 u64 hole_start; 335 310 336 - node = rb_hole_addr_to_node(*link); 311 + node = rb_hole_addr_to_node(rb); 337 312 hole_start = __drm_mm_hole_node_start(node); 338 313 339 314 if (addr < hole_start) 340 - link = &node->rb_hole_addr.rb_left; 315 + rb = node->rb_hole_addr.rb_left; 341 316 else if (addr > hole_start + node->hole_size) 342 - link = &node->rb_hole_addr.rb_right; 317 + rb = node->rb_hole_addr.rb_right; 343 318 else 344 319 break; 345 320 } ··· 353 326 u64 start, u64 end, u64 size, 354 327 enum drm_mm_insert_mode mode) 355 328 { 356 - if (RB_EMPTY_ROOT(&mm->holes_size)) 357 - return NULL; 358 - 359 329 switch (mode) { 360 330 default: 361 331 case DRM_MM_INSERT_BEST: ··· 379 355 switch (mode) { 380 356 default: 381 357 case DRM_MM_INSERT_BEST: 382 - return rb_hole_size_to_node(rb_next(&node->rb_hole_size)); 358 + return rb_hole_size_to_node(rb_prev(&node->rb_hole_size)); 383 359 384 360 case DRM_MM_INSERT_LOW: 385 361 return rb_hole_addr_to_node(rb_next(&node->rb_hole_addr)); ··· 450 426 } 451 427 EXPORT_SYMBOL(drm_mm_reserve_node); 452 428 429 + static u64 rb_to_hole_size_or_zero(struct rb_node *rb) 430 + { 431 + return rb ? 
rb_to_hole_size(rb) : 0; 432 + } 433 + 453 434 /** 454 435 * drm_mm_insert_node_in_range - ranged search for space and insert @node 455 436 * @mm: drm_mm to allocate from ··· 480 451 { 481 452 struct drm_mm_node *hole; 482 453 u64 remainder_mask; 454 + bool once; 483 455 484 456 DRM_MM_BUG_ON(range_start >= range_end); 485 457 486 458 if (unlikely(size == 0 || range_end - range_start < size)) 487 459 return -ENOSPC; 488 460 461 + if (rb_to_hole_size_or_zero(rb_first_cached(&mm->holes_size)) < size) 462 + return -ENOSPC; 463 + 489 464 if (alignment <= 1) 490 465 alignment = 0; 491 466 467 + once = mode & DRM_MM_INSERT_ONCE; 468 + mode &= ~DRM_MM_INSERT_ONCE; 469 + 492 470 remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0; 493 - for (hole = first_hole(mm, range_start, range_end, size, mode); hole; 494 - hole = next_hole(mm, hole, mode)) { 471 + for (hole = first_hole(mm, range_start, range_end, size, mode); 472 + hole; 473 + hole = once ? NULL : next_hole(mm, hole, mode)) { 495 474 u64 hole_start = __drm_mm_hole_node_start(hole); 496 475 u64 hole_end = hole_start + hole->hole_size; 497 476 u64 adj_start, adj_end; ··· 624 587 625 588 if (drm_mm_hole_follows(old)) { 626 589 list_replace(&old->hole_stack, &new->hole_stack); 627 - rb_replace_node(&old->rb_hole_size, 628 - &new->rb_hole_size, 629 - &mm->holes_size); 590 + rb_replace_node_cached(&old->rb_hole_size, 591 + &new->rb_hole_size, 592 + &mm->holes_size); 630 593 rb_replace_node(&old->rb_hole_addr, 631 594 &new->rb_hole_addr, 632 595 &mm->holes_addr); ··· 922 885 923 886 INIT_LIST_HEAD(&mm->hole_stack); 924 887 mm->interval_tree = RB_ROOT_CACHED; 925 - mm->holes_size = RB_ROOT; 888 + mm->holes_size = RB_ROOT_CACHED; 926 889 mm->holes_addr = RB_ROOT; 927 890 928 891 /* Clever trick to avoid a special case in the free hole tracking. */
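The drm_mm hunk above converts the size-sorted hole tree to an `rb_root_cached` with larger holes kept to the left, so the cached leftmost node is always the largest hole: `drm_mm_insert_node_in_range()` can then reject an over-sized request in O(1) via `rb_first_cached()`, and `best_hole()` descends rightward to find the tightest fit. The ordering discipline can be sketched in userspace with toy types (a plain binary tree standing in for the kernel rbtree API — not the real `drm_mm` structures):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for struct drm_mm_node: only the hole size matters here. */
struct hole {
	uint64_t size;
	struct hole *left, *right;
};

/* Toy stand-in for rb_root_cached: tree plus a cached largest-hole pointer. */
struct hole_tree {
	struct hole *root;
	struct hole *largest;	/* cached leftmost node, i.e. the biggest hole */
};

/* Mirrors insert_hole_size(): bigger holes go left; if we never turned
 * right, the new node is the new leftmost and the cache is updated. */
static void insert_hole(struct hole_tree *t, struct hole *n)
{
	struct hole **link = &t->root;
	int leftmost = 1;

	n->left = n->right = NULL;
	while (*link) {
		if (n->size > (*link)->size) {
			link = &(*link)->left;
		} else {
			link = &(*link)->right;
			leftmost = 0;
		}
	}
	*link = n;
	if (leftmost)
		t->largest = n;
}

/* Mirrors best_hole(): walk down, remembering the smallest hole that
 * still fits (best fit), moving right toward tighter candidates. */
static struct hole *best_hole(struct hole_tree *t, uint64_t size)
{
	struct hole *node = t->root, *best = NULL;

	while (node) {
		if (size <= node->size) {
			best = node;
			node = node->right;	/* try a tighter fit */
		} else {
			node = node->left;	/* too small, go bigger */
		}
	}
	return best;
}
```

The early-reject in the real patch is then just `t->largest->size < size` before any tree walk, which is what `rb_to_hole_size_or_zero(rb_first_cached(...))` implements.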
+5
drivers/gpu/drm/drm_mode_config.c
··· 145 145 count = 0; 146 146 connector_id = u64_to_user_ptr(card_res->connector_id_ptr); 147 147 drm_for_each_connector_iter(connector, &conn_iter) { 148 + /* only expose writeback connectors if userspace understands them */ 149 + if (!file_priv->writeback_connectors && 150 + (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)) 151 + continue; 152 + 148 153 if (drm_lease_held(file_priv, connector->base.id)) { 149 154 if (count < card_res->count_connectors && 150 155 put_user(connector->base.id, connector_id + count)) {
+1 -1
drivers/gpu/drm/drm_modes.c
··· 1257 1257 1258 1258 #undef MODE_STATUS 1259 1259 1260 - static const char *drm_get_mode_status_name(enum drm_mode_status status) 1260 + const char *drm_get_mode_status_name(enum drm_mode_status status) 1261 1261 { 1262 1262 int index = status + 3; 1263 1263
+16
drivers/gpu/drm/drm_panel.c
··· 24 24 #include <linux/err.h> 25 25 #include <linux/module.h> 26 26 27 + #include <drm/drm_device.h> 27 28 #include <drm/drm_crtc.h> 28 29 #include <drm/drm_panel.h> 29 30 ··· 95 94 * 96 95 * An error is returned if the panel is already attached to another connector. 97 96 * 97 + * When unloading, the driver should detach from the panel by calling 98 + * drm_panel_detach(). 99 + * 98 100 * Return: 0 on success or a negative error code on failure. 99 101 */ 100 102 int drm_panel_attach(struct drm_panel *panel, struct drm_connector *connector) 101 103 { 102 104 if (panel->connector) 103 105 return -EBUSY; 106 + 107 + panel->link = device_link_add(connector->dev->dev, panel->dev, 0); 108 + if (!panel->link) { 109 + dev_err(panel->dev, "failed to link panel to %s\n", 110 + dev_name(connector->dev->dev)); 111 + return -EINVAL; 112 + } 104 113 105 114 panel->connector = connector; 106 115 panel->drm = connector->dev; ··· 126 115 * Detaches a panel from the connector it is attached to. If a panel is not 127 116 * attached to any connector this is effectively a no-op. 128 117 * 118 + * This function should not be called by the panel device itself. It 119 + * is only for the drm device that called drm_panel_attach(). 120 + * 129 121 * Return: 0 on success or a negative error code on failure. 130 122 */ 131 123 int drm_panel_detach(struct drm_panel *panel) 132 124 { 125 + device_link_del(panel->link); 126 + 133 127 panel->connector = NULL; 134 128 panel->drm = NULL; 135 129
+25 -16
drivers/gpu/drm/drm_plane.c
··· 177 177 if (WARN_ON(config->num_total_plane >= 32)) 178 178 return -EINVAL; 179 179 180 + WARN_ON(drm_drv_uses_atomic_modeset(dev) && 181 + (!funcs->atomic_destroy_state || 182 + !funcs->atomic_duplicate_state)); 183 + 180 184 ret = drm_mode_object_add(dev, &plane->base, DRM_MODE_OBJECT_PLANE); 181 185 if (ret) 182 186 return ret; ··· 565 561 if (i == plane->format_count) 566 562 return -EINVAL; 567 563 568 - if (!plane->modifier_count) 569 - return 0; 564 + if (plane->funcs->format_mod_supported) { 565 + if (!plane->funcs->format_mod_supported(plane, format, modifier)) 566 + return -EINVAL; 567 + } else { 568 + if (!plane->modifier_count) 569 + return 0; 570 570 571 - for (i = 0; i < plane->modifier_count; i++) { 572 - if (modifier == plane->modifiers[i]) 573 - break; 571 + for (i = 0; i < plane->modifier_count; i++) { 572 + if (modifier == plane->modifiers[i]) 573 + break; 574 + } 575 + if (i == plane->modifier_count) 576 + return -EINVAL; 574 577 } 575 - if (i == plane->modifier_count) 576 - return -EINVAL; 577 - 578 - if (plane->funcs->format_mod_supported && 579 - !plane->funcs->format_mod_supported(plane, format, modifier)) 580 - return -EINVAL; 581 578 582 579 return 0; 583 580 } ··· 655 650 crtc_x, crtc_y, crtc_w, crtc_h, 656 651 src_x, src_y, src_w, src_h, ctx); 657 652 if (!ret) { 658 - plane->crtc = crtc; 659 - plane->fb = fb; 660 - drm_framebuffer_get(plane->fb); 653 + if (!plane->state) { 654 + plane->crtc = crtc; 655 + plane->fb = fb; 656 + drm_framebuffer_get(plane->fb); 657 + } 661 658 } else { 662 659 plane->old_fb = NULL; 663 660 } ··· 1099 1092 /* Keep the old fb, don't unref it. */ 1100 1093 plane->old_fb = NULL; 1101 1094 } else { 1102 - plane->fb = fb; 1103 - drm_framebuffer_get(fb); 1095 + if (!plane->state) { 1096 + plane->fb = fb; 1097 + drm_framebuffer_get(fb); 1098 + } 1104 1099 } 1105 1100 1106 1101 out:
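The reordered check in drm_plane.c makes a driver's `format_mod_supported()` hook authoritative when present: the hook now fully decides whether a (format, modifier) pair is valid, and only in its absence is the static `modifiers[]` array consulted (an empty array accepting everything). A minimal sketch of that precedence, using hypothetical toy types rather than the real `struct drm_plane`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy plane carrying the two validation paths. */
struct plane {
	bool (*format_mod_supported)(uint32_t format, uint64_t modifier);
	const uint64_t *modifiers;
	size_t modifier_count;
};

/* Mirrors the reworked check: hook first and alone if implemented,
 * otherwise the static list (empty list == no restriction). */
static int check_modifier(const struct plane *p, uint32_t format,
			  uint64_t modifier)
{
	size_t i;

	if (p->format_mod_supported)
		return p->format_mod_supported(format, modifier) ? 0 : -1;

	if (!p->modifier_count)
		return 0;

	for (i = 0; i < p->modifier_count; i++)
		if (modifier == p->modifiers[i])
			return 0;
	return -1;
}

/* Example hook: accept only modifier 0 (standing in for LINEAR). */
static bool only_linear(uint32_t format, uint64_t modifier)
{
	(void)format;
	return modifier == 0;
}
```

Note the behavioral difference from the old ordering: with a hook installed, a modifier present in the static list can still be rejected, and one absent from it can still be accepted.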
+3 -1
drivers/gpu/drm/drm_plane_helper.c
··· 502 502 int drm_plane_helper_disable(struct drm_plane *plane) 503 503 { 504 504 struct drm_plane_state *plane_state; 505 + struct drm_framebuffer *old_fb; 505 506 506 507 /* crtc helpers love to call disable functions for already disabled hw 507 508 * functions. So cope with that. */ ··· 522 521 plane_state->plane = plane; 523 522 524 523 plane_state->crtc = NULL; 524 + old_fb = plane_state->fb; 525 525 drm_atomic_set_fb_for_plane(plane_state, NULL); 526 526 527 - return drm_plane_helper_commit(plane, plane_state, plane->fb); 527 + return drm_plane_helper_commit(plane, plane_state, old_fb); 528 528 } 529 529 EXPORT_SYMBOL(drm_plane_helper_disable);
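The drm_plane_helper.c fix is an ordering bug: the outgoing framebuffer must be read from the plane state *before* `drm_atomic_set_fb_for_plane(plane_state, NULL)` clears it, since the legacy `plane->fb` pointer consulted previously is not reliable for atomic drivers. The snapshot-before-clear pattern can be illustrated with a toy refcounted framebuffer (hypothetical types, not the real `drm_framebuffer`/`drm_plane_state`):

```c
#include <assert.h>
#include <stddef.h>

/* Toy refcounted framebuffer attached to a plane state. */
struct fb { int refs; };

struct plane_state { struct fb *fb; };

/* Analog of drm_atomic_set_fb_for_plane(): swaps the state's reference. */
static void set_fb(struct plane_state *s, struct fb *fb)
{
	if (fb)
		fb->refs++;
	if (s->fb)
		s->fb->refs--;
	s->fb = fb;
}

/* Analog of the fixed drm_plane_helper_disable(): capture old_fb first,
 * then clear the state, then hand old_fb to the commit step. Reading
 * the state after clearing would yield NULL and the old fb would never
 * be cleaned up. */
static struct fb *disable_plane(struct plane_state *s)
{
	struct fb *old_fb = s->fb;	/* snapshot before clearing (the fix) */

	set_fb(s, NULL);		/* state no longer points at old_fb */
	return old_fb;			/* commit path cleans this up */
}
```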
+1 -33
drivers/gpu/drm/drm_prime.c
··· 186 186 /** 187 187 * drm_gem_map_attach - dma_buf attach implementation for GEM 188 188 * @dma_buf: buffer to attach device to 189 - * @target_dev: not used 190 189 * @attach: buffer attachment data 191 190 * 192 191 * Allocates &drm_prime_attachment and calls &drm_driver.gem_prime_pin for ··· 194 195 * 195 196 * Returns 0 on success, negative error code on failure. 196 197 */ 197 - int drm_gem_map_attach(struct dma_buf *dma_buf, struct device *target_dev, 198 + int drm_gem_map_attach(struct dma_buf *dma_buf, 198 199 struct dma_buf_attachment *attach) 199 200 { 200 201 struct drm_prime_attachment *prime_attach; ··· 434 435 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); 435 436 436 437 /** 437 - * drm_gem_dmabuf_kmap_atomic - map_atomic implementation for GEM 438 - * @dma_buf: buffer to be mapped 439 - * @page_num: page number within the buffer 440 - * 441 - * Not implemented. This can be used as the &dma_buf_ops.map_atomic callback. 442 - */ 443 - void *drm_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf, 444 - unsigned long page_num) 445 - { 446 - return NULL; 447 - } 448 - EXPORT_SYMBOL(drm_gem_dmabuf_kmap_atomic); 449 - 450 - /** 451 - * drm_gem_dmabuf_kunmap_atomic - unmap_atomic implementation for GEM 452 - * @dma_buf: buffer to be unmapped 453 - * @page_num: page number within the buffer 454 - * @addr: virtual address of the buffer 455 - * 456 - * Not implemented. This can be used as the &dma_buf_ops.unmap_atomic callback. 
457 - */ 458 - void drm_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, 459 - unsigned long page_num, void *addr) 460 - { 461 - 462 - } 463 - EXPORT_SYMBOL(drm_gem_dmabuf_kunmap_atomic); 464 - 465 - /** 466 438 * drm_gem_dmabuf_kmap - map implementation for GEM 467 439 * @dma_buf: buffer to be mapped 468 440 * @page_num: page number within the buffer ··· 490 520 .unmap_dma_buf = drm_gem_unmap_dma_buf, 491 521 .release = drm_gem_dmabuf_release, 492 522 .map = drm_gem_dmabuf_kmap, 493 - .map_atomic = drm_gem_dmabuf_kmap_atomic, 494 523 .unmap = drm_gem_dmabuf_kunmap, 495 - .unmap_atomic = drm_gem_dmabuf_kunmap_atomic, 496 524 .mmap = drm_gem_dmabuf_mmap, 497 525 .vmap = drm_gem_dmabuf_vmap, 498 526 .vunmap = drm_gem_dmabuf_vunmap,
+5 -5
drivers/gpu/drm/drm_vm.c
··· 100 100 * map, get the page, increment the use count and return it. 101 101 */ 102 102 #if IS_ENABLED(CONFIG_AGP) 103 - static int drm_vm_fault(struct vm_fault *vmf) 103 + static vm_fault_t drm_vm_fault(struct vm_fault *vmf) 104 104 { 105 105 struct vm_area_struct *vma = vmf->vma; 106 106 struct drm_file *priv = vma->vm_file->private_data; ··· 173 173 return VM_FAULT_SIGBUS; /* Disallow mremap */ 174 174 } 175 175 #else 176 - static int drm_vm_fault(struct vm_fault *vmf) 176 + static vm_fault_t drm_vm_fault(struct vm_fault *vmf) 177 177 { 178 178 return VM_FAULT_SIGBUS; 179 179 } ··· 189 189 * Get the mapping, find the real physical page to map, get the page, and 190 190 * return it. 191 191 */ 192 - static int drm_vm_shm_fault(struct vm_fault *vmf) 192 + static vm_fault_t drm_vm_shm_fault(struct vm_fault *vmf) 193 193 { 194 194 struct vm_area_struct *vma = vmf->vma; 195 195 struct drm_local_map *map = vma->vm_private_data; ··· 291 291 * 292 292 * Determine the page number from the page offset and get it from drm_device_dma::pagelist. 293 293 */ 294 - static int drm_vm_dma_fault(struct vm_fault *vmf) 294 + static vm_fault_t drm_vm_dma_fault(struct vm_fault *vmf) 295 295 { 296 296 struct vm_area_struct *vma = vmf->vma; 297 297 struct drm_file *priv = vma->vm_file->private_data; ··· 326 326 * 327 327 * Determine the map offset from the page offset and get it from drm_sg_mem::pagelist. 328 328 */ 329 - static int drm_vm_sg_fault(struct vm_fault *vmf) 329 + static vm_fault_t drm_vm_sg_fault(struct vm_fault *vmf) 330 330 { 331 331 struct vm_area_struct *vma = vmf->vma; 332 332 struct drm_local_map *map = vma->vm_private_data;
+350
drivers/gpu/drm/drm_writeback.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * (C) COPYRIGHT 2016 ARM Limited. All rights reserved. 4 + * Author: Brian Starkey <brian.starkey@arm.com> 5 + * 6 + * This program is free software and is provided to you under the terms of the 7 + * GNU General Public License version 2 as published by the Free Software 8 + * Foundation, and any use by you of this program is subject to the terms 9 + * of such GNU licence. 10 + */ 11 + 12 + #include <drm/drm_crtc.h> 13 + #include <drm/drm_modeset_helper_vtables.h> 14 + #include <drm/drm_property.h> 15 + #include <drm/drm_writeback.h> 16 + #include <drm/drmP.h> 17 + #include <linux/dma-fence.h> 18 + 19 + /** 20 + * DOC: overview 21 + * 22 + * Writeback connectors are used to expose hardware which can write the output 23 + * from a CRTC to a memory buffer. They are used and act similarly to other 24 + * types of connectors, with some important differences: 25 + * - Writeback connectors don't provide a way to output visually to the user. 26 + * - Writeback connectors should always report as "disconnected" (so that 27 + * clients which don't understand them will ignore them). 28 + * - Writeback connectors don't have EDID. 29 + * 30 + * A framebuffer may only be attached to a writeback connector when the 31 + * connector is attached to a CRTC. The WRITEBACK_FB_ID property which sets the 32 + * framebuffer applies only to a single commit (see below). A framebuffer may 33 + * not be attached while the CRTC is off. 34 + * 35 + * Unlike with planes, when a writeback framebuffer is removed by userspace DRM 36 + * makes no attempt to remove it from active use by the connector. This is 37 + * because no method is provided to abort a writeback operation, and in any 38 + * case making a new commit whilst a writeback is ongoing is undefined (see 39 + * WRITEBACK_OUT_FENCE_PTR below). As soon as the current writeback is finished, 40 + * the framebuffer will automatically no longer be in active use. 
As it will 41 + * also have already been removed from the framebuffer list, there will be no 42 + * way for any userspace application to retrieve a reference to it in the 43 + * intervening period. 44 + * 45 + * Writeback connectors have some additional properties, which userspace 46 + * can use to query and control them: 47 + * 48 + * "WRITEBACK_FB_ID": 49 + * Write-only object property storing a DRM_MODE_OBJECT_FB: it stores the 50 + * framebuffer to be written by the writeback connector. This property is 51 + * similar to the FB_ID property on planes, but will always read as zero 52 + * and is not preserved across commits. 53 + * Userspace must set this property to an output buffer every time it 54 + * wishes the buffer to get filled. 55 + * 56 + * "WRITEBACK_PIXEL_FORMATS": 57 + * Immutable blob property to store the supported pixel formats table. The 58 + * data is an array of u32 DRM_FORMAT_* fourcc values. 59 + * Userspace can use this blob to find out what pixel formats are supported 60 + * by the connector's writeback engine. 61 + * 62 + * "WRITEBACK_OUT_FENCE_PTR": 63 + * Userspace can use this property to provide a pointer for the kernel to 64 + * fill with a sync_file file descriptor, which will signal once the 65 + * writeback is finished. The value should be the address of a 32-bit 66 + * signed integer, cast to a u64. 67 + * Userspace should wait for this fence to signal before making another 68 + * commit affecting any of the same CRTCs, Planes or Connectors. 69 + * **Failure to do so will result in undefined behaviour.** 70 + * For this reason it is strongly recommended that all userspace 71 + * applications making use of writeback connectors *always* retrieve an 72 + * out-fence for the commit and use it appropriately. 73 + * From userspace, this property will always read as zero. 
74 + */ 75 + 76 + #define fence_to_wb_connector(x) container_of(x->lock, \ 77 + struct drm_writeback_connector, \ 78 + fence_lock) 79 + 80 + static const char *drm_writeback_fence_get_driver_name(struct dma_fence *fence) 81 + { 82 + struct drm_writeback_connector *wb_connector = 83 + fence_to_wb_connector(fence); 84 + 85 + return wb_connector->base.dev->driver->name; 86 + } 87 + 88 + static const char * 89 + drm_writeback_fence_get_timeline_name(struct dma_fence *fence) 90 + { 91 + struct drm_writeback_connector *wb_connector = 92 + fence_to_wb_connector(fence); 93 + 94 + return wb_connector->timeline_name; 95 + } 96 + 97 + static bool drm_writeback_fence_enable_signaling(struct dma_fence *fence) 98 + { 99 + return true; 100 + } 101 + 102 + static const struct dma_fence_ops drm_writeback_fence_ops = { 103 + .get_driver_name = drm_writeback_fence_get_driver_name, 104 + .get_timeline_name = drm_writeback_fence_get_timeline_name, 105 + .enable_signaling = drm_writeback_fence_enable_signaling, 106 + .wait = dma_fence_default_wait, 107 + }; 108 + 109 + static int create_writeback_properties(struct drm_device *dev) 110 + { 111 + struct drm_property *prop; 112 + 113 + if (!dev->mode_config.writeback_fb_id_property) { 114 + prop = drm_property_create_object(dev, DRM_MODE_PROP_ATOMIC, 115 + "WRITEBACK_FB_ID", 116 + DRM_MODE_OBJECT_FB); 117 + if (!prop) 118 + return -ENOMEM; 119 + dev->mode_config.writeback_fb_id_property = prop; 120 + } 121 + 122 + if (!dev->mode_config.writeback_pixel_formats_property) { 123 + prop = drm_property_create(dev, DRM_MODE_PROP_BLOB | 124 + DRM_MODE_PROP_ATOMIC | 125 + DRM_MODE_PROP_IMMUTABLE, 126 + "WRITEBACK_PIXEL_FORMATS", 0); 127 + if (!prop) 128 + return -ENOMEM; 129 + dev->mode_config.writeback_pixel_formats_property = prop; 130 + } 131 + 132 + if (!dev->mode_config.writeback_out_fence_ptr_property) { 133 + prop = drm_property_create_range(dev, DRM_MODE_PROP_ATOMIC, 134 + "WRITEBACK_OUT_FENCE_PTR", 0, 135 + U64_MAX); 136 + if (!prop) 137 + 
return -ENOMEM; 138 + dev->mode_config.writeback_out_fence_ptr_property = prop; 139 + } 140 + 141 + return 0; 142 + } 143 + 144 + static const struct drm_encoder_funcs drm_writeback_encoder_funcs = { 145 + .destroy = drm_encoder_cleanup, 146 + }; 147 + 148 + /** 149 + * drm_writeback_connector_init - Initialize a writeback connector and its properties 150 + * @dev: DRM device 151 + * @wb_connector: Writeback connector to initialize 152 + * @con_funcs: Connector funcs vtable 153 + * @enc_helper_funcs: Encoder helper funcs vtable to be used by the internal encoder 154 + * @formats: Array of supported pixel formats for the writeback engine 155 + * @n_formats: Length of the formats array 156 + * 157 + * This function creates the writeback-connector-specific properties if they 158 + * have not been already created, initializes the connector as 159 + * type DRM_MODE_CONNECTOR_WRITEBACK, and correctly initializes the property 160 + * values. It will also create an internal encoder associated with the 161 + * drm_writeback_connector and set it to use the @enc_helper_funcs vtable for 162 + * the encoder helper. 163 + * 164 + * Drivers should always use this function instead of drm_connector_init() to 165 + * set up writeback connectors. 
166 + * 167 + * Returns: 0 on success, or a negative error code 168 + */ 169 + int drm_writeback_connector_init(struct drm_device *dev, 170 + struct drm_writeback_connector *wb_connector, 171 + const struct drm_connector_funcs *con_funcs, 172 + const struct drm_encoder_helper_funcs *enc_helper_funcs, 173 + const u32 *formats, int n_formats) 174 + { 175 + struct drm_property_blob *blob; 176 + struct drm_connector *connector = &wb_connector->base; 177 + struct drm_mode_config *config = &dev->mode_config; 178 + int ret = create_writeback_properties(dev); 179 + 180 + if (ret != 0) 181 + return ret; 182 + 183 + blob = drm_property_create_blob(dev, n_formats * sizeof(*formats), 184 + formats); 185 + if (IS_ERR(blob)) 186 + return PTR_ERR(blob); 187 + 188 + drm_encoder_helper_add(&wb_connector->encoder, enc_helper_funcs); 189 + ret = drm_encoder_init(dev, &wb_connector->encoder, 190 + &drm_writeback_encoder_funcs, 191 + DRM_MODE_ENCODER_VIRTUAL, NULL); 192 + if (ret) 193 + goto fail; 194 + 195 + connector->interlace_allowed = 0; 196 + 197 + ret = drm_connector_init(dev, connector, con_funcs, 198 + DRM_MODE_CONNECTOR_WRITEBACK); 199 + if (ret) 200 + goto connector_fail; 201 + 202 + ret = drm_mode_connector_attach_encoder(connector, 203 + &wb_connector->encoder); 204 + if (ret) 205 + goto attach_fail; 206 + 207 + INIT_LIST_HEAD(&wb_connector->job_queue); 208 + spin_lock_init(&wb_connector->job_lock); 209 + 210 + wb_connector->fence_context = dma_fence_context_alloc(1); 211 + spin_lock_init(&wb_connector->fence_lock); 212 + snprintf(wb_connector->timeline_name, 213 + sizeof(wb_connector->timeline_name), 214 + "CONNECTOR:%d-%s", connector->base.id, connector->name); 215 + 216 + drm_object_attach_property(&connector->base, 217 + config->writeback_out_fence_ptr_property, 0); 218 + 219 + drm_object_attach_property(&connector->base, 220 + config->writeback_fb_id_property, 0); 221 + 222 + drm_object_attach_property(&connector->base, 223 + config->writeback_pixel_formats_property, 
224 + blob->base.id); 225 + wb_connector->pixel_formats_blob_ptr = blob; 226 + 227 + return 0; 228 + 229 + attach_fail: 230 + drm_connector_cleanup(connector); 231 + connector_fail: 232 + drm_encoder_cleanup(&wb_connector->encoder); 233 + fail: 234 + drm_property_blob_put(blob); 235 + return ret; 236 + } 237 + EXPORT_SYMBOL(drm_writeback_connector_init); 238 + 239 + /** 240 + * drm_writeback_queue_job - Queue a writeback job for later signalling 241 + * @wb_connector: The writeback connector to queue a job on 242 + * @job: The job to queue 243 + * 244 + * This function adds a job to the job_queue for a writeback connector. It 245 + * should be considered to take ownership of the writeback job, and so any other 246 + * references to the job must be cleared after calling this function. 247 + * 248 + * Drivers must ensure that for a given writeback connector, jobs are queued in 249 + * exactly the same order as they will be completed by the hardware (and 250 + * signaled via drm_writeback_signal_completion). 251 + * 252 + * For every call to drm_writeback_queue_job() there must be exactly one call to 253 + * drm_writeback_signal_completion() 254 + * 255 + * See also: drm_writeback_signal_completion() 256 + */ 257 + void drm_writeback_queue_job(struct drm_writeback_connector *wb_connector, 258 + struct drm_writeback_job *job) 259 + { 260 + unsigned long flags; 261 + 262 + spin_lock_irqsave(&wb_connector->job_lock, flags); 263 + list_add_tail(&job->list_entry, &wb_connector->job_queue); 264 + spin_unlock_irqrestore(&wb_connector->job_lock, flags); 265 + } 266 + EXPORT_SYMBOL(drm_writeback_queue_job); 267 + 268 + /* 269 + * @cleanup_work: deferred cleanup of a writeback job 270 + * 271 + * The job cannot be cleaned up directly in drm_writeback_signal_completion, 272 + * because it may be called in interrupt context. Dropping the framebuffer 273 + * reference can sleep, and so the cleanup is deferred to a workqueue. 
274 + */ 275 + static void cleanup_work(struct work_struct *work) 276 + { 277 + struct drm_writeback_job *job = container_of(work, 278 + struct drm_writeback_job, 279 + cleanup_work); 280 + drm_framebuffer_put(job->fb); 281 + kfree(job); 282 + } 283 + 284 + 285 + /** 286 + * drm_writeback_signal_completion - Signal the completion of a writeback job 287 + * @wb_connector: The writeback connector whose job is complete 288 + * @status: Status code to set in the writeback out_fence (0 for success) 289 + * 290 + * Drivers should call this to signal the completion of a previously queued 291 + * writeback job. It should be called as soon as possible after the hardware 292 + * has finished writing, and may be called from interrupt context. 293 + * It is the driver's responsibility to ensure that for a given connector, the 294 + * hardware completes writeback jobs in the same order as they are queued. 295 + * 296 + * Unless the driver is holding its own reference to the framebuffer, it must 297 + * not be accessed after calling this function. 
298 + * 299 + * See also: drm_writeback_queue_job() 300 + */ 301 + void 302 + drm_writeback_signal_completion(struct drm_writeback_connector *wb_connector, 303 + int status) 304 + { 305 + unsigned long flags; 306 + struct drm_writeback_job *job; 307 + 308 + spin_lock_irqsave(&wb_connector->job_lock, flags); 309 + job = list_first_entry_or_null(&wb_connector->job_queue, 310 + struct drm_writeback_job, 311 + list_entry); 312 + if (job) { 313 + list_del(&job->list_entry); 314 + if (job->out_fence) { 315 + if (status) 316 + dma_fence_set_error(job->out_fence, status); 317 + dma_fence_signal(job->out_fence); 318 + dma_fence_put(job->out_fence); 319 + } 320 + } 321 + spin_unlock_irqrestore(&wb_connector->job_lock, flags); 322 + 323 + if (WARN_ON(!job)) 324 + return; 325 + 326 + INIT_WORK(&job->cleanup_work, cleanup_work); 327 + queue_work(system_long_wq, &job->cleanup_work); 328 + } 329 + EXPORT_SYMBOL(drm_writeback_signal_completion); 330 + 331 + struct dma_fence * 332 + drm_writeback_get_out_fence(struct drm_writeback_connector *wb_connector) 333 + { 334 + struct dma_fence *fence; 335 + 336 + if (WARN_ON(wb_connector->base.connector_type != 337 + DRM_MODE_CONNECTOR_WRITEBACK)) 338 + return NULL; 339 + 340 + fence = kzalloc(sizeof(*fence), GFP_KERNEL); 341 + if (!fence) 342 + return NULL; 343 + 344 + dma_fence_init(fence, &drm_writeback_fence_ops, 345 + &wb_connector->fence_lock, wb_connector->fence_context, 346 + ++wb_connector->fence_seqno); 347 + 348 + return fence; 349 + } 350 + EXPORT_SYMBOL(drm_writeback_get_out_fence);
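The new drm_writeback.c API imposes a strict FIFO contract: `drm_writeback_queue_job()` enqueues jobs in the order the hardware will complete them, and each `drm_writeback_signal_completion()` pops exactly the oldest job, propagating an error status into its out-fence. That discipline can be modelled in userspace with toy types (a plain singly linked queue, no locking or fences — not the kernel structures):

```c
#include <assert.h>
#include <stddef.h>

/* Toy analog of struct drm_writeback_job. */
struct wb_job {
	int id;
	int status;		/* 0 on success, negative error otherwise */
	struct wb_job *next;
};

struct wb_queue {
	struct wb_job *head, *tail;
};

/* Analog of drm_writeback_queue_job(): takes ownership of @job and
 * appends it, preserving submission order. */
static void queue_job(struct wb_queue *q, struct wb_job *job)
{
	job->next = NULL;
	if (q->tail)
		q->tail->next = job;
	else
		q->head = job;
	q->tail = job;
}

/* Analog of drm_writeback_signal_completion(): completes the oldest
 * queued job, recording @status as the out-fence error would be.
 * Returns NULL if nothing was queued (the WARN_ON case). */
static struct wb_job *signal_completion(struct wb_queue *q, int status)
{
	struct wb_job *job = q->head;

	if (!job)
		return NULL;
	q->head = job->next;
	if (!q->head)
		q->tail = NULL;
	job->status = status;
	return job;
}
```

This is why the kernel-doc insists completions arrive in queue order: the signal path has no job identifier, it simply assumes the head of the queue is the one the hardware just finished.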
-2
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 263 263 if (!state->crtc) 264 264 return; 265 265 266 - plane->crtc = state->crtc; 267 - 268 266 if (exynos_crtc->ops->update_plane) 269 267 exynos_crtc->ops->update_plane(exynos_crtc, exynos_plane); 270 268 }
+1 -1
drivers/gpu/drm/gma500/accel_2d.c
··· 251 251 if (!fb) 252 252 return; 253 253 254 - offset = psbfb->gtt->offset; 254 + offset = to_gtt_range(fb->obj[0])->offset; 255 255 stride = fb->pitches[0]; 256 256 257 257 switch (fb->format->depth) {
+11 -51
drivers/gpu/drm/gma500/framebuffer.c
··· 33 33 #include <drm/drm.h> 34 34 #include <drm/drm_crtc.h> 35 35 #include <drm/drm_fb_helper.h> 36 + #include <drm/drm_gem_framebuffer_helper.h> 36 37 37 38 #include "psb_drv.h" 38 39 #include "psb_intel_reg.h" ··· 41 40 #include "framebuffer.h" 42 41 #include "gtt.h" 43 42 44 - static void psb_user_framebuffer_destroy(struct drm_framebuffer *fb); 45 - static int psb_user_framebuffer_create_handle(struct drm_framebuffer *fb, 46 - struct drm_file *file_priv, 47 - unsigned int *handle); 48 - 49 43 static const struct drm_framebuffer_funcs psb_fb_funcs = { 50 - .destroy = psb_user_framebuffer_destroy, 51 - .create_handle = psb_user_framebuffer_create_handle, 44 + .destroy = drm_gem_fb_destroy, 45 + .create_handle = drm_gem_fb_create_handle, 52 46 }; 53 47 54 48 #define CMAP_TOHW(_val, _width) ((((_val) << (_width)) + 0x7FFF - (_val)) >> 16) ··· 92 96 struct psb_fbdev *fbdev = info->par; 93 97 struct psb_framebuffer *psbfb = &fbdev->pfb; 94 98 struct drm_device *dev = psbfb->base.dev; 99 + struct gtt_range *gtt = to_gtt_range(psbfb->base.obj[0]); 95 100 96 101 /* 97 102 * We have to poke our nose in here. The core fb code assumes 98 103 * panning is part of the hardware that can be invoked before 99 104 * the actual fb is mapped. In our case that isn't quite true. 
100 105 */ 101 - if (psbfb->gtt->npage) { 106 + if (gtt->npage) { 102 107 /* GTT roll shifts in 4K pages, we need to shift the right 103 108 number of pages */ 104 109 int pages = info->fix.line_length >> 12; 105 - psb_gtt_roll(dev, psbfb->gtt, var->yoffset * pages); 110 + psb_gtt_roll(dev, gtt, var->yoffset * pages); 106 111 } 107 112 return 0; 108 113 } ··· 114 117 struct psb_framebuffer *psbfb = vma->vm_private_data; 115 118 struct drm_device *dev = psbfb->base.dev; 116 119 struct drm_psb_private *dev_priv = dev->dev_private; 120 + struct gtt_range *gtt = to_gtt_range(psbfb->base.obj[0]); 117 121 int page_num; 118 122 int i; 119 123 unsigned long address; 120 124 int ret; 121 125 unsigned long pfn; 122 126 unsigned long phys_addr = (unsigned long)dev_priv->stolen_base + 123 - psbfb->gtt->offset; 127 + gtt->offset; 124 128 125 129 page_num = vma_pages(vma); 126 130 address = vmf->address - (vmf->pgoff << PAGE_SHIFT); ··· 244 246 return -EINVAL; 245 247 246 248 drm_helper_mode_fill_fb_struct(dev, &fb->base, mode_cmd); 247 - fb->gtt = gt; 249 + fb->base.obj[0] = &gt->gem; 248 250 ret = drm_framebuffer_init(dev, &fb->base, &psb_fb_funcs); 249 251 if (ret) { 250 252 dev_err(dev->dev, "framebuffer init failed: %d\n", ret); ··· 516 518 drm_framebuffer_unregister_private(&psbfb->base); 517 519 drm_framebuffer_cleanup(&psbfb->base); 518 520 519 - if (psbfb->gtt) 520 - drm_gem_object_unreference_unlocked(&psbfb->gtt->gem); 521 + if (psbfb->base.obj[0]) 522 + drm_gem_object_unreference_unlocked(psbfb->base.obj[0]); 521 523 return 0; 522 524 } 523 525 ··· 572 574 psb_fbdev_destroy(dev, dev_priv->fbdev); 573 575 kfree(dev_priv->fbdev); 574 576 dev_priv->fbdev = NULL; 575 - } 576 - 577 - /** 578 - * psb_user_framebuffer_create_handle - add hamdle to a framebuffer 579 - * @fb: framebuffer 580 - * @file_priv: our DRM file 581 - * @handle: returned handle 582 - * 583 - * Our framebuffer object is a GTT range which also contains a GEM 584 - * object. 
We need to turn it into a handle for userspace. GEM will do 585 - * the work for us 586 - */ 587 - static int psb_user_framebuffer_create_handle(struct drm_framebuffer *fb, 588 - struct drm_file *file_priv, 589 - unsigned int *handle) 590 - { 591 - struct psb_framebuffer *psbfb = to_psb_fb(fb); 592 - struct gtt_range *r = psbfb->gtt; 593 - return drm_gem_handle_create(file_priv, &r->gem, handle); 594 - } 595 - 596 - /** 597 - * psb_user_framebuffer_destroy - destruct user created fb 598 - * @fb: framebuffer 599 - * 600 - * User framebuffers are backed by GEM objects so all we have to do is 601 - * clean up a bit and drop the reference, GEM will handle the fallout 602 - */ 603 - static void psb_user_framebuffer_destroy(struct drm_framebuffer *fb) 604 - { 605 - struct psb_framebuffer *psbfb = to_psb_fb(fb); 606 - struct gtt_range *r = psbfb->gtt; 607 - 608 - /* Let DRM do its clean up */ 609 - drm_framebuffer_cleanup(fb); 610 - /* We are no longer using the resource in GEM */ 611 - drm_gem_object_unreference_unlocked(&r->gem); 612 - kfree(fb); 613 577 } 614 578 615 579 static const struct drm_mode_config_funcs psb_mode_funcs = {
-1
drivers/gpu/drm/gma500/framebuffer.h
··· 31 31 struct drm_framebuffer base; 32 32 struct address_space *addr_space; 33 33 struct fb_info *fbdev; 34 - struct gtt_range *gtt; 35 34 }; 36 35 37 36 struct psb_fbdev {
+5 -5
drivers/gpu/drm/gma500/gma_display.c
··· 60 60 struct drm_psb_private *dev_priv = dev->dev_private; 61 61 struct gma_crtc *gma_crtc = to_gma_crtc(crtc); 62 62 struct drm_framebuffer *fb = crtc->primary->fb; 63 - struct psb_framebuffer *psbfb = to_psb_fb(fb); 63 + struct gtt_range *gtt = to_gtt_range(fb->obj[0]); 64 64 int pipe = gma_crtc->pipe; 65 65 const struct psb_offset *map = &dev_priv->regmap[pipe]; 66 66 unsigned long start, offset; ··· 78 78 79 79 /* We are displaying this buffer, make sure it is actually loaded 80 80 into the GTT */ 81 - ret = psb_gtt_pin(psbfb->gtt); 81 + ret = psb_gtt_pin(gtt); 82 82 if (ret < 0) 83 83 goto gma_pipe_set_base_exit; 84 - start = psbfb->gtt->offset; 84 + start = gtt->offset; 85 85 offset = y * fb->pitches[0] + x * fb->format->cpp[0]; 86 86 87 87 REG_WRITE(map->stride, fb->pitches[0]); ··· 129 129 gma_pipe_cleaner: 130 130 /* If there was a previous display we can now unpin it */ 131 131 if (old_fb) 132 - psb_gtt_unpin(to_psb_fb(old_fb)->gtt); 132 + psb_gtt_unpin(to_gtt_range(old_fb->obj[0])); 133 133 134 134 gma_pipe_set_base_exit: 135 135 gma_power_end(dev); ··· 491 491 crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 492 492 493 493 if (crtc->primary->fb) { 494 - gt = to_psb_fb(crtc->primary->fb)->gtt; 494 + gt = to_gtt_range(crtc->primary->fb->obj[0]); 495 495 psb_gtt_unpin(gt); 496 496 } 497 497 }
+2
drivers/gpu/drm/gma500/gtt.h
··· 53 53 int roll; /* Roll applied to the GTT entries */ 54 54 }; 55 55 56 + #define to_gtt_range(x) container_of(x, struct gtt_range, gem) 57 + 56 58 extern struct gtt_range *psb_gtt_alloc_range(struct drm_device *dev, int len, 57 59 const char *name, int backed, 58 60 u32 align);
+1 -1
drivers/gpu/drm/gma500/mdfld_intel_display.c
··· 196 196 if (!gma_power_begin(dev, true)) 197 197 return 0; 198 198 199 - start = psbfb->gtt->offset; 199 + start = to_gtt_range(fb->obj[0])->offset; 200 200 offset = y * fb->pitches[0] + x * fb->format->cpp[0]; 201 201 202 202 REG_WRITE(map->stride, fb->pitches[0]);
+1 -2
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 600 600 struct drm_psb_private *dev_priv = dev->dev_private; 601 601 struct gma_crtc *gma_crtc = to_gma_crtc(crtc); 602 602 struct drm_framebuffer *fb = crtc->primary->fb; 603 - struct psb_framebuffer *psbfb = to_psb_fb(fb); 604 603 int pipe = gma_crtc->pipe; 605 604 const struct psb_offset *map = &dev_priv->regmap[pipe]; 606 605 unsigned long start, offset; ··· 616 617 if (!gma_power_begin(dev, true)) 617 618 return 0; 618 619 619 - start = psbfb->gtt->offset; 620 + start = to_gtt_range(fb->obj[0])->offset; 620 621 offset = y * fb->pitches[0] + x * fb->format->cpp[0]; 621 622 622 623 REG_WRITE(map->stride, fb->pitches[0]);
+9 -2
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 429 429 "Scaling not supported" 430 430 }; 431 431 432 + #define MAX_ARG_LEN 32 433 + 432 434 static bool psb_intel_sdvo_write_cmd(struct psb_intel_sdvo *psb_intel_sdvo, u8 cmd, 433 435 const void *args, int args_len) 434 436 { 435 - u8 buf[args_len*2 + 2], status; 436 - struct i2c_msg msgs[args_len + 3]; 437 + u8 buf[MAX_ARG_LEN*2 + 2], status; 438 + struct i2c_msg msgs[MAX_ARG_LEN + 3]; 437 439 int i, ret; 440 + 441 + if (args_len > MAX_ARG_LEN) { 442 + DRM_ERROR("Need to increase arg length\n"); 443 + return false; 444 + } 438 445 439 446 psb_intel_sdvo_debug_write(psb_intel_sdvo, cmd, args, args_len); 440 447
+11 -2
drivers/gpu/drm/i2c/tda998x_drv.c
··· 589 589 return ret; 590 590 } 591 591 592 + #define MAX_WRITE_RANGE_BUF 32 593 + 592 594 static void 593 595 reg_write_range(struct tda998x_priv *priv, u16 reg, u8 *p, int cnt) 594 596 { 595 597 struct i2c_client *client = priv->hdmi; 596 - u8 buf[cnt+1]; 598 + /* This is the maximum size of the buffer passed in */ 599 + u8 buf[MAX_WRITE_RANGE_BUF + 1]; 597 600 int ret; 601 + 602 + if (cnt > MAX_WRITE_RANGE_BUF) { 603 + dev_err(&client->dev, "Fixed write buffer too small (%d)\n", 604 + MAX_WRITE_RANGE_BUF); 605 + return; 606 + } 598 607 599 608 buf[0] = REG2ADDR(reg); 600 609 memcpy(&buf[1], p, cnt); ··· 814 805 tda998x_write_if(struct tda998x_priv *priv, u8 bit, u16 addr, 815 806 union hdmi_infoframe *frame) 816 807 { 817 - u8 buf[32]; 808 + u8 buf[MAX_WRITE_RANGE_BUF]; 818 809 ssize_t len; 819 810 820 811 len = hdmi_infoframe_pack(frame, buf, sizeof(buf));
-11
drivers/gpu/drm/i915/i915_gem_dmabuf.c
··· 111 111 i915_gem_object_unpin_map(obj); 112 112 } 113 113 114 - static void *i915_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf, unsigned long page_num) 115 - { 116 - return NULL; 117 - } 118 - 119 - static void i915_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, unsigned long page_num, void *addr) 120 - { 121 - 122 - } 123 114 static void *i915_gem_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num) 124 115 { 125 116 struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); ··· 216 225 .unmap_dma_buf = i915_gem_unmap_dma_buf, 217 226 .release = drm_gem_dmabuf_release, 218 227 .map = i915_gem_dmabuf_kmap, 219 - .map_atomic = i915_gem_dmabuf_kmap_atomic, 220 228 .unmap = i915_gem_dmabuf_kunmap, 221 - .unmap_atomic = i915_gem_dmabuf_kunmap_atomic, 222 229 .mmap = i915_gem_dmabuf_mmap, 223 230 .vmap = i915_gem_dmabuf_vmap, 224 231 .vunmap = i915_gem_dmabuf_vunmap,
+10 -1
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 3945 3945 3946 3946 mode = DRM_MM_INSERT_BEST; 3947 3947 if (flags & PIN_HIGH) 3948 - mode = DRM_MM_INSERT_HIGH; 3948 + mode = DRM_MM_INSERT_HIGHEST; 3949 3949 if (flags & PIN_MAPPABLE) 3950 3950 mode = DRM_MM_INSERT_LOW; 3951 3951 ··· 3964 3964 start, end, mode); 3965 3965 if (err != -ENOSPC) 3966 3966 return err; 3967 + 3968 + if (mode & DRM_MM_INSERT_ONCE) { 3969 + err = drm_mm_insert_node_in_range(&vm->mm, node, 3970 + size, alignment, color, 3971 + start, end, 3972 + DRM_MM_INSERT_BEST); 3973 + if (err != -ENOSPC) 3974 + return err; 3975 + } 3967 3976 3968 3977 if (flags & PIN_NOEVICT) 3969 3978 return -ENOSPC;
+1
drivers/gpu/drm/i915/intel_atomic.c
··· 124 124 if (new_conn_state->force_audio != old_conn_state->force_audio || 125 125 new_conn_state->broadcast_rgb != old_conn_state->broadcast_rgb || 126 126 new_conn_state->base.picture_aspect_ratio != old_conn_state->base.picture_aspect_ratio || 127 + new_conn_state->base.content_type != old_conn_state->base.content_type || 127 128 new_conn_state->base.scaling_mode != old_conn_state->base.scaling_mode) 128 129 crtc_state->mode_changed = true; 129 130
-12
drivers/gpu/drm/i915/intel_atomic_plane.c
··· 120 120 &crtc_state->base.adjusted_mode; 121 121 int ret; 122 122 123 - /* 124 - * Both crtc and plane->crtc could be NULL if we're updating a 125 - * property while the plane is disabled. We don't actually have 126 - * anything driver-specific we need to test in that case, so 127 - * just return success. 128 - */ 129 123 if (!intel_state->base.crtc && !old_plane_state->base.crtc) 130 124 return 0; 131 125 ··· 203 209 const struct drm_crtc_state *old_crtc_state; 204 210 struct drm_crtc_state *new_crtc_state; 205 211 206 - /* 207 - * Both crtc and plane->crtc could be NULL if we're updating a 208 - * property while the plane is disabled. We don't actually have 209 - * anything driver-specific we need to test in that case, so 210 - * just return success. 211 - */ 212 212 if (!crtc) 213 213 return 0; 214 214
+85 -42
drivers/gpu/drm/i915/intel_display.c
··· 1022 1022 * We can ditch the adjusted_mode.crtc_clock check as soon 1023 1023 * as Haswell has gained clock readout/fastboot support. 1024 1024 * 1025 - * We can ditch the crtc->primary->fb check as soon as we can 1025 + * We can ditch the crtc->primary->state->fb check as soon as we can 1026 1026 * properly reconstruct framebuffers. 1027 1027 * 1028 1028 * FIXME: The intel_crtc->active here should be switched to ··· 2882 2882 if (i915_gem_object_is_tiled(obj)) 2883 2883 dev_priv->preserve_bios_swizzle = true; 2884 2884 2885 - drm_framebuffer_get(fb); 2886 - primary->fb = primary->state->fb = fb; 2887 - primary->crtc = primary->state->crtc = &intel_crtc->base; 2885 + plane_state->fb = fb; 2886 + plane_state->crtc = &intel_crtc->base; 2888 2887 2889 2888 intel_set_plane_visible(to_intel_crtc_state(crtc_state), 2890 2889 to_intel_plane_state(plane_state), ··· 13240 13241 kfree(to_intel_plane(plane)); 13241 13242 } 13242 13243 13243 - static bool i8xx_mod_supported(uint32_t format, uint64_t modifier) 13244 + static bool i8xx_plane_format_mod_supported(struct drm_plane *_plane, 13245 + u32 format, u64 modifier) 13244 13246 { 13247 + switch (modifier) { 13248 + case DRM_FORMAT_MOD_LINEAR: 13249 + case I915_FORMAT_MOD_X_TILED: 13250 + break; 13251 + default: 13252 + return false; 13253 + } 13254 + 13245 13255 switch (format) { 13246 13256 case DRM_FORMAT_C8: 13247 13257 case DRM_FORMAT_RGB565: ··· 13263 13255 } 13264 13256 } 13265 13257 13266 - static bool i965_mod_supported(uint32_t format, uint64_t modifier) 13258 + static bool i965_plane_format_mod_supported(struct drm_plane *_plane, 13259 + u32 format, u64 modifier) 13267 13260 { 13261 + switch (modifier) { 13262 + case DRM_FORMAT_MOD_LINEAR: 13263 + case I915_FORMAT_MOD_X_TILED: 13264 + break; 13265 + default: 13266 + return false; 13267 + } 13268 + 13268 13269 switch (format) { 13269 13270 case DRM_FORMAT_C8: 13270 13271 case DRM_FORMAT_RGB565: ··· 13288 13271 } 13289 13272 } 13290 13273 13291 - static bool 
skl_mod_supported(uint32_t format, uint64_t modifier) 13274 + static bool skl_plane_format_mod_supported(struct drm_plane *_plane, 13275 + u32 format, u64 modifier) 13292 13276 { 13277 + struct intel_plane *plane = to_intel_plane(_plane); 13278 + 13279 + switch (modifier) { 13280 + case DRM_FORMAT_MOD_LINEAR: 13281 + case I915_FORMAT_MOD_X_TILED: 13282 + case I915_FORMAT_MOD_Y_TILED: 13283 + case I915_FORMAT_MOD_Yf_TILED: 13284 + break; 13285 + case I915_FORMAT_MOD_Y_TILED_CCS: 13286 + case I915_FORMAT_MOD_Yf_TILED_CCS: 13287 + if (!plane->has_ccs) 13288 + return false; 13289 + break; 13290 + default: 13291 + return false; 13292 + } 13293 + 13293 13294 switch (format) { 13294 13295 case DRM_FORMAT_XRGB8888: 13295 13296 case DRM_FORMAT_XBGR8888: ··· 13339 13304 } 13340 13305 } 13341 13306 13342 - static bool intel_primary_plane_format_mod_supported(struct drm_plane *plane, 13343 - uint32_t format, 13344 - uint64_t modifier) 13307 + static bool intel_cursor_format_mod_supported(struct drm_plane *_plane, 13308 + u32 format, u64 modifier) 13345 13309 { 13346 - struct drm_i915_private *dev_priv = to_i915(plane->dev); 13347 - 13348 - if (WARN_ON(modifier == DRM_FORMAT_MOD_INVALID)) 13349 - return false; 13350 - 13351 - if ((modifier >> 56) != DRM_FORMAT_MOD_VENDOR_INTEL && 13352 - modifier != DRM_FORMAT_MOD_LINEAR) 13353 - return false; 13354 - 13355 - if (INTEL_GEN(dev_priv) >= 9) 13356 - return skl_mod_supported(format, modifier); 13357 - else if (INTEL_GEN(dev_priv) >= 4) 13358 - return i965_mod_supported(format, modifier); 13359 - else 13360 - return i8xx_mod_supported(format, modifier); 13310 + return modifier == DRM_FORMAT_MOD_LINEAR && 13311 + format == DRM_FORMAT_ARGB8888; 13361 13312 } 13362 13313 13363 - static bool intel_cursor_plane_format_mod_supported(struct drm_plane *plane, 13364 - uint32_t format, 13365 - uint64_t modifier) 13366 - { 13367 - if (WARN_ON(modifier == DRM_FORMAT_MOD_INVALID)) 13368 - return false; 13369 - 13370 - return modifier == 
DRM_FORMAT_MOD_LINEAR && format == DRM_FORMAT_ARGB8888; 13371 - } 13372 - 13373 - static struct drm_plane_funcs intel_plane_funcs = { 13314 + static struct drm_plane_funcs skl_plane_funcs = { 13374 13315 .update_plane = drm_atomic_helper_update_plane, 13375 13316 .disable_plane = drm_atomic_helper_disable_plane, 13376 13317 .destroy = intel_plane_destroy, ··· 13354 13343 .atomic_set_property = intel_plane_atomic_set_property, 13355 13344 .atomic_duplicate_state = intel_plane_duplicate_state, 13356 13345 .atomic_destroy_state = intel_plane_destroy_state, 13357 - .format_mod_supported = intel_primary_plane_format_mod_supported, 13346 + .format_mod_supported = skl_plane_format_mod_supported, 13347 + }; 13348 + 13349 + static struct drm_plane_funcs i965_plane_funcs = { 13350 + .update_plane = drm_atomic_helper_update_plane, 13351 + .disable_plane = drm_atomic_helper_disable_plane, 13352 + .destroy = intel_plane_destroy, 13353 + .atomic_get_property = intel_plane_atomic_get_property, 13354 + .atomic_set_property = intel_plane_atomic_set_property, 13355 + .atomic_duplicate_state = intel_plane_duplicate_state, 13356 + .atomic_destroy_state = intel_plane_destroy_state, 13357 + .format_mod_supported = i965_plane_format_mod_supported, 13358 + }; 13359 + 13360 + static struct drm_plane_funcs i8xx_plane_funcs = { 13361 + .update_plane = drm_atomic_helper_update_plane, 13362 + .disable_plane = drm_atomic_helper_disable_plane, 13363 + .destroy = intel_plane_destroy, 13364 + .atomic_get_property = intel_plane_atomic_get_property, 13365 + .atomic_set_property = intel_plane_atomic_set_property, 13366 + .atomic_duplicate_state = intel_plane_duplicate_state, 13367 + .atomic_destroy_state = intel_plane_destroy_state, 13368 + .format_mod_supported = i8xx_plane_format_mod_supported, 13358 13369 }; 13359 13370 13360 13371 static int ··· 13501 13468 .atomic_set_property = intel_plane_atomic_set_property, 13502 13469 .atomic_duplicate_state = intel_plane_duplicate_state, 13503 13470 
.atomic_destroy_state = intel_plane_destroy_state, 13504 - .format_mod_supported = intel_cursor_plane_format_mod_supported, 13471 + .format_mod_supported = intel_cursor_format_mod_supported, 13505 13472 }; 13506 13473 13507 13474 static bool i9xx_plane_has_fbc(struct drm_i915_private *dev_priv, ··· 13559 13526 { 13560 13527 struct intel_plane *primary = NULL; 13561 13528 struct intel_plane_state *state = NULL; 13529 + const struct drm_plane_funcs *plane_funcs; 13562 13530 const uint32_t *intel_primary_formats; 13563 13531 unsigned int supported_rotations; 13564 13532 unsigned int num_formats; ··· 13615 13581 primary->check_plane = intel_check_primary_plane; 13616 13582 13617 13583 if (INTEL_GEN(dev_priv) >= 9) { 13584 + primary->has_ccs = skl_plane_has_ccs(dev_priv, pipe, 13585 + PLANE_PRIMARY); 13586 + 13618 13587 if (skl_plane_has_planar(dev_priv, pipe, PLANE_PRIMARY)) { 13619 13588 intel_primary_formats = skl_pri_planar_formats; 13620 13589 num_formats = ARRAY_SIZE(skl_pri_planar_formats); ··· 13626 13589 num_formats = ARRAY_SIZE(skl_primary_formats); 13627 13590 } 13628 13591 13629 - if (skl_plane_has_ccs(dev_priv, pipe, PLANE_PRIMARY)) 13592 + if (primary->has_ccs) 13630 13593 modifiers = skl_format_modifiers_ccs; 13631 13594 else 13632 13595 modifiers = skl_format_modifiers_noccs; ··· 13634 13597 primary->update_plane = skl_update_plane; 13635 13598 primary->disable_plane = skl_disable_plane; 13636 13599 primary->get_hw_state = skl_plane_get_hw_state; 13600 + 13601 + plane_funcs = &skl_plane_funcs; 13637 13602 } else if (INTEL_GEN(dev_priv) >= 4) { 13638 13603 intel_primary_formats = i965_primary_formats; 13639 13604 num_formats = ARRAY_SIZE(i965_primary_formats); ··· 13644 13605 primary->update_plane = i9xx_update_plane; 13645 13606 primary->disable_plane = i9xx_disable_plane; 13646 13607 primary->get_hw_state = i9xx_plane_get_hw_state; 13608 + 13609 + plane_funcs = &i965_plane_funcs; 13647 13610 } else { 13648 13611 intel_primary_formats = 
i8xx_primary_formats; 13649 13612 num_formats = ARRAY_SIZE(i8xx_primary_formats); ··· 13654 13613 primary->update_plane = i9xx_update_plane; 13655 13614 primary->disable_plane = i9xx_disable_plane; 13656 13615 primary->get_hw_state = i9xx_plane_get_hw_state; 13616 + 13617 + plane_funcs = &i8xx_plane_funcs; 13657 13618 } 13658 13619 13659 13620 if (INTEL_GEN(dev_priv) >= 9) 13660 13621 ret = drm_universal_plane_init(&dev_priv->drm, &primary->base, 13661 - 0, &intel_plane_funcs, 13622 + 0, plane_funcs, 13662 13623 intel_primary_formats, num_formats, 13663 13624 modifiers, 13664 13625 DRM_PLANE_TYPE_PRIMARY, 13665 13626 "plane 1%c", pipe_name(pipe)); 13666 13627 else if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) 13667 13628 ret = drm_universal_plane_init(&dev_priv->drm, &primary->base, 13668 - 0, &intel_plane_funcs, 13629 + 0, plane_funcs, 13669 13630 intel_primary_formats, num_formats, 13670 13631 modifiers, 13671 13632 DRM_PLANE_TYPE_PRIMARY, 13672 13633 "primary %c", pipe_name(pipe)); 13673 13634 else 13674 13635 ret = drm_universal_plane_init(&dev_priv->drm, &primary->base, 13675 - 0, &intel_plane_funcs, 13636 + 0, plane_funcs, 13676 13637 intel_primary_formats, num_formats, 13677 13638 modifiers, 13678 13639 DRM_PLANE_TYPE_PRIMARY,
+1
drivers/gpu/drm/i915/intel_drv.h
··· 952 952 enum pipe pipe; 953 953 bool can_scale; 954 954 bool has_fbc; 955 + bool has_ccs; 955 956 int max_downscale; 956 957 uint32_t frontbuffer_bit; 957 958
+11 -6
drivers/gpu/drm/i915/intel_hdmi.c
··· 461 461 } 462 462 463 463 static void intel_hdmi_set_avi_infoframe(struct drm_encoder *encoder, 464 - const struct intel_crtc_state *crtc_state) 464 + const struct intel_crtc_state *crtc_state, 465 + const struct drm_connector_state *conn_state) 465 466 { 466 467 struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder); 467 468 const struct drm_display_mode *adjusted_mode = ··· 491 490 HDMI_QUANTIZATION_RANGE_FULL, 492 491 intel_hdmi->rgb_quant_range_selectable, 493 492 is_hdmi2_sink); 493 + 494 + drm_hdmi_avi_infoframe_content_type(&frame.avi, 495 + conn_state); 494 496 495 497 /* TODO: handle pixel repetition for YCBCR420 outputs */ 496 498 intel_write_infoframe(encoder, crtc_state, &frame); ··· 590 586 I915_WRITE(reg, val); 591 587 POSTING_READ(reg); 592 588 593 - intel_hdmi_set_avi_infoframe(encoder, crtc_state); 589 + intel_hdmi_set_avi_infoframe(encoder, crtc_state, conn_state); 594 590 intel_hdmi_set_spd_infoframe(encoder, crtc_state); 595 591 intel_hdmi_set_hdmi_infoframe(encoder, crtc_state, conn_state); 596 592 } ··· 731 727 I915_WRITE(reg, val); 732 728 POSTING_READ(reg); 733 729 734 - intel_hdmi_set_avi_infoframe(encoder, crtc_state); 730 + intel_hdmi_set_avi_infoframe(encoder, crtc_state, conn_state); 735 731 intel_hdmi_set_spd_infoframe(encoder, crtc_state); 736 732 intel_hdmi_set_hdmi_infoframe(encoder, crtc_state, conn_state); 737 733 } ··· 774 770 I915_WRITE(reg, val); 775 771 POSTING_READ(reg); 776 772 777 - intel_hdmi_set_avi_infoframe(encoder, crtc_state); 773 + intel_hdmi_set_avi_infoframe(encoder, crtc_state, conn_state); 778 774 intel_hdmi_set_spd_infoframe(encoder, crtc_state); 779 775 intel_hdmi_set_hdmi_infoframe(encoder, crtc_state, conn_state); 780 776 } ··· 827 823 I915_WRITE(reg, val); 828 824 POSTING_READ(reg); 829 825 830 - intel_hdmi_set_avi_infoframe(encoder, crtc_state); 826 + intel_hdmi_set_avi_infoframe(encoder, crtc_state, conn_state); 831 827 intel_hdmi_set_spd_infoframe(encoder, crtc_state); 832 828 
intel_hdmi_set_hdmi_infoframe(encoder, crtc_state, conn_state); 833 829 } ··· 860 856 I915_WRITE(reg, val); 861 857 POSTING_READ(reg); 862 858 863 - intel_hdmi_set_avi_infoframe(encoder, crtc_state); 859 + intel_hdmi_set_avi_infoframe(encoder, crtc_state, conn_state); 864 860 intel_hdmi_set_spd_infoframe(encoder, crtc_state); 865 861 intel_hdmi_set_hdmi_infoframe(encoder, crtc_state, conn_state); 866 862 } ··· 2052 2048 intel_attach_force_audio_property(connector); 2053 2049 intel_attach_broadcast_rgb_property(connector); 2054 2050 intel_attach_aspect_ratio_property(connector); 2051 + drm_connector_attach_content_type_property(connector); 2055 2052 connector->state->picture_aspect_ratio = HDMI_PICTURE_ASPECT_NONE; 2056 2053 } 2057 2054
+2
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 1049 1049 flags |= PIN_OFFSET_BIAS | offset_bias; 1050 1050 if (vma->obj->stolen) 1051 1051 flags |= PIN_MAPPABLE; 1052 + else 1053 + flags |= PIN_HIGH; 1052 1054 1053 1055 if (!(vma->flags & I915_VMA_GLOBAL_BIND)) { 1054 1056 if (flags & PIN_MAPPABLE || map == I915_MAP_WC)
+101 -32
drivers/gpu/drm/i915/intel_sprite.c
··· 1241 1241 DRM_FORMAT_MOD_INVALID 1242 1242 }; 1243 1243 1244 - static bool g4x_mod_supported(uint32_t format, uint64_t modifier) 1244 + static bool g4x_sprite_format_mod_supported(struct drm_plane *_plane, 1245 + u32 format, u64 modifier) 1245 1246 { 1247 + switch (modifier) { 1248 + case DRM_FORMAT_MOD_LINEAR: 1249 + case I915_FORMAT_MOD_X_TILED: 1250 + break; 1251 + default: 1252 + return false; 1253 + } 1254 + 1246 1255 switch (format) { 1247 1256 case DRM_FORMAT_XRGB8888: 1248 1257 case DRM_FORMAT_YUYV: ··· 1267 1258 } 1268 1259 } 1269 1260 1270 - static bool snb_mod_supported(uint32_t format, uint64_t modifier) 1261 + static bool snb_sprite_format_mod_supported(struct drm_plane *_plane, 1262 + u32 format, u64 modifier) 1271 1263 { 1264 + switch (modifier) { 1265 + case DRM_FORMAT_MOD_LINEAR: 1266 + case I915_FORMAT_MOD_X_TILED: 1267 + break; 1268 + default: 1269 + return false; 1270 + } 1271 + 1272 1272 switch (format) { 1273 1273 case DRM_FORMAT_XRGB8888: 1274 1274 case DRM_FORMAT_XBGR8888: ··· 1294 1276 } 1295 1277 } 1296 1278 1297 - static bool vlv_mod_supported(uint32_t format, uint64_t modifier) 1279 + static bool vlv_sprite_format_mod_supported(struct drm_plane *_plane, 1280 + u32 format, u64 modifier) 1298 1281 { 1282 + switch (modifier) { 1283 + case DRM_FORMAT_MOD_LINEAR: 1284 + case I915_FORMAT_MOD_X_TILED: 1285 + break; 1286 + default: 1287 + return false; 1288 + } 1289 + 1299 1290 switch (format) { 1300 1291 case DRM_FORMAT_RGB565: 1301 1292 case DRM_FORMAT_ABGR8888: ··· 1326 1299 } 1327 1300 } 1328 1301 1329 - static bool skl_mod_supported(uint32_t format, uint64_t modifier) 1302 + static bool skl_plane_format_mod_supported(struct drm_plane *_plane, 1303 + u32 format, u64 modifier) 1330 1304 { 1305 + struct intel_plane *plane = to_intel_plane(_plane); 1306 + 1307 + switch (modifier) { 1308 + case DRM_FORMAT_MOD_LINEAR: 1309 + case I915_FORMAT_MOD_X_TILED: 1310 + case I915_FORMAT_MOD_Y_TILED: 1311 + case I915_FORMAT_MOD_Yf_TILED: 1312 + break; 
1313 + case I915_FORMAT_MOD_Y_TILED_CCS: 1314 + case I915_FORMAT_MOD_Yf_TILED_CCS: 1315 + if (!plane->has_ccs) 1316 + return false; 1317 + break; 1318 + default: 1319 + return false; 1320 + } 1321 + 1331 1322 switch (format) { 1332 1323 case DRM_FORMAT_XRGB8888: 1333 1324 case DRM_FORMAT_XBGR8888: ··· 1377 1332 } 1378 1333 } 1379 1334 1380 - static bool intel_sprite_plane_format_mod_supported(struct drm_plane *plane, 1381 - uint32_t format, 1382 - uint64_t modifier) 1383 - { 1384 - struct drm_i915_private *dev_priv = to_i915(plane->dev); 1385 - 1386 - if (WARN_ON(modifier == DRM_FORMAT_MOD_INVALID)) 1387 - return false; 1388 - 1389 - if ((modifier >> 56) != DRM_FORMAT_MOD_VENDOR_INTEL && 1390 - modifier != DRM_FORMAT_MOD_LINEAR) 1391 - return false; 1392 - 1393 - if (INTEL_GEN(dev_priv) >= 9) 1394 - return skl_mod_supported(format, modifier); 1395 - else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) 1396 - return vlv_mod_supported(format, modifier); 1397 - else if (INTEL_GEN(dev_priv) >= 6) 1398 - return snb_mod_supported(format, modifier); 1399 - else 1400 - return g4x_mod_supported(format, modifier); 1401 - } 1402 - 1403 - static const struct drm_plane_funcs intel_sprite_plane_funcs = { 1335 + static const struct drm_plane_funcs g4x_sprite_funcs = { 1404 1336 .update_plane = drm_atomic_helper_update_plane, 1405 1337 .disable_plane = drm_atomic_helper_disable_plane, 1406 1338 .destroy = intel_plane_destroy, ··· 1385 1363 .atomic_set_property = intel_plane_atomic_set_property, 1386 1364 .atomic_duplicate_state = intel_plane_duplicate_state, 1387 1365 .atomic_destroy_state = intel_plane_destroy_state, 1388 - .format_mod_supported = intel_sprite_plane_format_mod_supported, 1366 + .format_mod_supported = g4x_sprite_format_mod_supported, 1367 + }; 1368 + 1369 + static const struct drm_plane_funcs snb_sprite_funcs = { 1370 + .update_plane = drm_atomic_helper_update_plane, 1371 + .disable_plane = drm_atomic_helper_disable_plane, 1372 + .destroy = 
intel_plane_destroy, 1373 + .atomic_get_property = intel_plane_atomic_get_property, 1374 + .atomic_set_property = intel_plane_atomic_set_property, 1375 + .atomic_duplicate_state = intel_plane_duplicate_state, 1376 + .atomic_destroy_state = intel_plane_destroy_state, 1377 + .format_mod_supported = snb_sprite_format_mod_supported, 1378 + }; 1379 + 1380 + static const struct drm_plane_funcs vlv_sprite_funcs = { 1381 + .update_plane = drm_atomic_helper_update_plane, 1382 + .disable_plane = drm_atomic_helper_disable_plane, 1383 + .destroy = intel_plane_destroy, 1384 + .atomic_get_property = intel_plane_atomic_get_property, 1385 + .atomic_set_property = intel_plane_atomic_set_property, 1386 + .atomic_duplicate_state = intel_plane_duplicate_state, 1387 + .atomic_destroy_state = intel_plane_destroy_state, 1388 + .format_mod_supported = vlv_sprite_format_mod_supported, 1389 + }; 1390 + 1391 + static const struct drm_plane_funcs skl_plane_funcs = { 1392 + .update_plane = drm_atomic_helper_update_plane, 1393 + .disable_plane = drm_atomic_helper_disable_plane, 1394 + .destroy = intel_plane_destroy, 1395 + .atomic_get_property = intel_plane_atomic_get_property, 1396 + .atomic_set_property = intel_plane_atomic_set_property, 1397 + .atomic_duplicate_state = intel_plane_duplicate_state, 1398 + .atomic_destroy_state = intel_plane_destroy_state, 1399 + .format_mod_supported = skl_plane_format_mod_supported, 1389 1400 }; 1390 1401 1391 1402 bool skl_plane_has_ccs(struct drm_i915_private *dev_priv, ··· 1444 1389 { 1445 1390 struct intel_plane *intel_plane = NULL; 1446 1391 struct intel_plane_state *state = NULL; 1392 + const struct drm_plane_funcs *plane_funcs; 1447 1393 unsigned long possible_crtcs; 1448 1394 const uint32_t *plane_formats; 1449 1395 const uint64_t *modifiers; ··· 1469 1413 intel_plane->can_scale = true; 1470 1414 state->scaler_id = -1; 1471 1415 1416 + intel_plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, 1417 + PLANE_SPRITE0 + plane); 1418 + 1472 1419 
intel_plane->update_plane = skl_update_plane; 1473 1420 intel_plane->disable_plane = skl_disable_plane; 1474 1421 intel_plane->get_hw_state = skl_plane_get_hw_state; ··· 1485 1426 num_plane_formats = ARRAY_SIZE(skl_plane_formats); 1486 1427 } 1487 1428 1488 - if (skl_plane_has_ccs(dev_priv, pipe, PLANE_SPRITE0 + plane)) 1429 + if (intel_plane->has_ccs) 1489 1430 modifiers = skl_plane_format_modifiers_ccs; 1490 1431 else 1491 1432 modifiers = skl_plane_format_modifiers_noccs; 1433 + 1434 + plane_funcs = &skl_plane_funcs; 1492 1435 } else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { 1493 1436 intel_plane->can_scale = false; 1494 1437 intel_plane->max_downscale = 1; ··· 1502 1441 plane_formats = vlv_plane_formats; 1503 1442 num_plane_formats = ARRAY_SIZE(vlv_plane_formats); 1504 1443 modifiers = i9xx_plane_format_modifiers; 1444 + 1445 + plane_funcs = &vlv_sprite_funcs; 1505 1446 } else if (INTEL_GEN(dev_priv) >= 7) { 1506 1447 if (IS_IVYBRIDGE(dev_priv)) { 1507 1448 intel_plane->can_scale = true; ··· 1520 1457 plane_formats = snb_plane_formats; 1521 1458 num_plane_formats = ARRAY_SIZE(snb_plane_formats); 1522 1459 modifiers = i9xx_plane_format_modifiers; 1460 + 1461 + plane_funcs = &snb_sprite_funcs; 1523 1462 } else { 1524 1463 intel_plane->can_scale = true; 1525 1464 intel_plane->max_downscale = 16; ··· 1534 1469 if (IS_GEN6(dev_priv)) { 1535 1470 plane_formats = snb_plane_formats; 1536 1471 num_plane_formats = ARRAY_SIZE(snb_plane_formats); 1472 + 1473 + plane_funcs = &snb_sprite_funcs; 1537 1474 } else { 1538 1475 plane_formats = g4x_plane_formats; 1539 1476 num_plane_formats = ARRAY_SIZE(g4x_plane_formats); 1477 + 1478 + plane_funcs = &g4x_sprite_funcs; 1540 1479 } 1541 1480 } 1542 1481 ··· 1567 1498 1568 1499 if (INTEL_GEN(dev_priv) >= 9) 1569 1500 ret = drm_universal_plane_init(&dev_priv->drm, &intel_plane->base, 1570 - possible_crtcs, &intel_sprite_plane_funcs, 1501 + possible_crtcs, plane_funcs, 1571 1502 plane_formats, num_plane_formats, 1572 
1503 modifiers, 1573 1504 DRM_PLANE_TYPE_OVERLAY, 1574 1505 "plane %d%c", plane + 2, pipe_name(pipe)); 1575 1506 else 1576 1507 ret = drm_universal_plane_init(&dev_priv->drm, &intel_plane->base, 1577 - possible_crtcs, &intel_sprite_plane_funcs, 1508 + possible_crtcs, plane_funcs, 1578 1509 plane_formats, num_plane_formats, 1579 1510 modifiers, 1580 1511 DRM_PLANE_TYPE_OVERLAY,
-14
drivers/gpu/drm/i915/selftests/mock_dmabuf.c
··· 94 94 vm_unmap_ram(vaddr, mock->npages); 95 95 } 96 96 97 - static void *mock_dmabuf_kmap_atomic(struct dma_buf *dma_buf, unsigned long page_num) 98 - { 99 - struct mock_dmabuf *mock = to_mock(dma_buf); 100 - 101 - return kmap_atomic(mock->pages[page_num]); 102 - } 103 - 104 - static void mock_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, unsigned long page_num, void *addr) 105 - { 106 - kunmap_atomic(addr); 107 - } 108 - 109 97 static void *mock_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num) 110 98 { 111 99 struct mock_dmabuf *mock = to_mock(dma_buf); ··· 118 130 .unmap_dma_buf = mock_unmap_dma_buf, 119 131 .release = mock_dmabuf_release, 120 132 .map = mock_dmabuf_kmap, 121 - .map_atomic = mock_dmabuf_kmap_atomic, 122 133 .unmap = mock_dmabuf_kunmap, 123 - .unmap_atomic = mock_dmabuf_kunmap_atomic, 124 134 .mmap = mock_dmabuf_mmap, 125 135 .vmap = mock_dmabuf_vmap, 126 136 .vunmap = mock_dmabuf_vunmap,
+18 -58
drivers/gpu/drm/mediatek/mtk_drm_fb.c
··· 15 15 #include <drm/drm_crtc_helper.h> 16 16 #include <drm/drm_fb_helper.h> 17 17 #include <drm/drm_gem.h> 18 + #include <drm/drm_gem_framebuffer_helper.h> 18 19 #include <linux/dma-buf.h> 19 20 #include <linux/reservation.h> 20 21 ··· 23 22 #include "mtk_drm_fb.h" 24 23 #include "mtk_drm_gem.h" 25 24 26 - /* 27 - * mtk specific framebuffer structure. 28 - * 29 - * @fb: drm framebuffer object. 30 - * @gem_obj: array of gem objects. 31 - */ 32 - struct mtk_drm_fb { 33 - struct drm_framebuffer base; 34 - /* For now we only support a single plane */ 35 - struct drm_gem_object *gem_obj; 36 - }; 37 - 38 - #define to_mtk_fb(x) container_of(x, struct mtk_drm_fb, base) 39 - 40 - struct drm_gem_object *mtk_fb_get_gem_obj(struct drm_framebuffer *fb) 41 - { 42 - struct mtk_drm_fb *mtk_fb = to_mtk_fb(fb); 43 - 44 - return mtk_fb->gem_obj; 45 - } 46 - 47 - static int mtk_drm_fb_create_handle(struct drm_framebuffer *fb, 48 - struct drm_file *file_priv, 49 - unsigned int *handle) 50 - { 51 - struct mtk_drm_fb *mtk_fb = to_mtk_fb(fb); 52 - 53 - return drm_gem_handle_create(file_priv, mtk_fb->gem_obj, handle); 54 - } 55 - 56 - static void mtk_drm_fb_destroy(struct drm_framebuffer *fb) 57 - { 58 - struct mtk_drm_fb *mtk_fb = to_mtk_fb(fb); 59 - 60 - drm_framebuffer_cleanup(fb); 61 - 62 - drm_gem_object_put_unlocked(mtk_fb->gem_obj); 63 - 64 - kfree(mtk_fb); 65 - } 66 - 67 25 static const struct drm_framebuffer_funcs mtk_drm_fb_funcs = { 68 - .create_handle = mtk_drm_fb_create_handle, 69 - .destroy = mtk_drm_fb_destroy, 26 + .create_handle = drm_gem_fb_create_handle, 27 + .destroy = drm_gem_fb_destroy, 70 28 }; 71 29 72 - static struct mtk_drm_fb *mtk_drm_framebuffer_init(struct drm_device *dev, 30 + static struct drm_framebuffer *mtk_drm_framebuffer_init(struct drm_device *dev, 73 31 const struct drm_mode_fb_cmd2 *mode, 74 32 struct drm_gem_object *obj) 75 33 { 76 - struct mtk_drm_fb *mtk_fb; 34 + struct drm_framebuffer *fb; 77 35 int ret; 78 36 79 37 if 
(drm_format_num_planes(mode->pixel_format) != 1) 80 38 return ERR_PTR(-EINVAL); 81 39 82 - mtk_fb = kzalloc(sizeof(*mtk_fb), GFP_KERNEL); 83 - if (!mtk_fb) 40 + fb = kzalloc(sizeof(*fb), GFP_KERNEL); 41 + if (!fb) 84 42 return ERR_PTR(-ENOMEM); 85 43 86 - drm_helper_mode_fill_fb_struct(dev, &mtk_fb->base, mode); 44 + drm_helper_mode_fill_fb_struct(dev, fb, mode); 87 45 88 - mtk_fb->gem_obj = obj; 46 + fb->obj[0] = obj; 89 47 90 - ret = drm_framebuffer_init(dev, &mtk_fb->base, &mtk_drm_fb_funcs); 48 + ret = drm_framebuffer_init(dev, fb, &mtk_drm_fb_funcs); 91 49 if (ret) { 92 50 DRM_ERROR("failed to initialize framebuffer\n"); 93 - kfree(mtk_fb); 51 + kfree(fb); 94 52 return ERR_PTR(ret); 95 53 } 96 54 97 - return mtk_fb; 55 + return fb; 98 56 } 99 57 100 58 /* ··· 70 110 if (!fb) 71 111 return 0; 72 112 73 - gem = mtk_fb_get_gem_obj(fb); 113 + gem = fb->obj[0]; 74 114 if (!gem || !gem->dma_buf || !gem->dma_buf->resv) 75 115 return 0; 76 116 ··· 88 128 struct drm_file *file, 89 129 const struct drm_mode_fb_cmd2 *cmd) 90 130 { 91 - struct mtk_drm_fb *mtk_fb; 131 + struct drm_framebuffer *fb; 92 132 struct drm_gem_object *gem; 93 133 unsigned int width = cmd->width; 94 134 unsigned int height = cmd->height; ··· 111 151 goto unreference; 112 152 } 113 153 114 - mtk_fb = mtk_drm_framebuffer_init(dev, cmd, gem); 115 - if (IS_ERR(mtk_fb)) { 116 - ret = PTR_ERR(mtk_fb); 154 + fb = mtk_drm_framebuffer_init(dev, cmd, gem); 155 + if (IS_ERR(fb)) { 156 + ret = PTR_ERR(fb); 117 157 goto unreference; 118 158 } 119 159 120 - return &mtk_fb->base; 160 + return fb; 121 161 122 162 unreference: 123 163 drm_gem_object_put_unlocked(gem);
-1
drivers/gpu/drm/mediatek/mtk_drm_fb.h
··· 14 14 #ifndef MTK_DRM_FB_H 15 15 #define MTK_DRM_FB_H 16 16 17 - struct drm_gem_object *mtk_fb_get_gem_obj(struct drm_framebuffer *fb); 18 17 int mtk_fb_wait(struct drm_framebuffer *fb); 19 18 struct drm_framebuffer *mtk_drm_mode_fb_create(struct drm_device *dev, 20 19 struct drm_file *file,
+1 -6
drivers/gpu/drm/mediatek/mtk_drm_plane.c
··· 95 95 if (!fb) 96 96 return 0; 97 97 98 - if (!mtk_fb_get_gem_obj(fb)) { 99 - DRM_DEBUG_KMS("buffer is null\n"); 100 - return -EFAULT; 101 - } 102 - 103 98 if (!state->crtc) 104 99 return 0; 105 100 ··· 122 127 if (!crtc || WARN_ON(!fb)) 123 128 return; 124 129 125 - gem = mtk_fb_get_gem_obj(fb); 130 + gem = fb->obj[0]; 126 131 mtk_gem = to_mtk_gem_obj(gem); 127 132 addr = mtk_gem->dma_addr; 128 133 pitch = fb->pitches[0];
+1 -2
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
···
 		int idx = idxs[pipe_id];
 		if (idx > 0) {
 			const struct mdp_format *format =
-					to_mdp_format(msm_framebuffer_format(plane->fb));
+					to_mdp_format(msm_framebuffer_format(plane->state->fb));
 			alpha[idx-1] = format->alpha_enable;
 		}
 	}
···
 	drm_crtc_init_with_planes(dev, crtc, plane, NULL, &mdp4_crtc_funcs,
 				  NULL);
 	drm_crtc_helper_add(crtc, &mdp4_crtc_helper_funcs);
-	plane->crtc = crtc;
 
 	return crtc;
 }
-2
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
···
 			msm_framebuffer_iova(fb, kms->aspace, 2));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe),
 			msm_framebuffer_iova(fb, kms->aspace, 3));
-
-	plane->fb = fb;
 }
 
 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms,
-1
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
···
 			"unref cursor", unref_cursor_worker);
 
 	drm_crtc_helper_add(crtc, &mdp5_crtc_helper_funcs);
-	plane->crtc = crtc;
 
 	return crtc;
 }
+1 -3
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
···
 	if (plane_enabled(new_state)) {
 		struct mdp5_ctl *ctl;
 		struct mdp5_pipeline *pipeline =
-					mdp5_crtc_get_pipeline(plane->crtc);
+					mdp5_crtc_get_pipeline(new_state->crtc);
 		int ret;
 
 		ret = mdp5_plane_mode_set(plane, new_state->crtc, new_state->fb,
···
 			crtc_x + crtc_w, crtc_y, crtc_w, crtc_h,
 			src_img_w, src_img_h,
 			src_x + src_w, src_y, src_w, src_h);
-
-	plane->fb = fb;
 
 	return ret;
 }
+11 -43
drivers/gpu/drm/msm/msm_fb.c
···
 
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_gem_framebuffer_helper.h>
 
 #include "msm_drv.h"
 #include "msm_kms.h"
···
 struct msm_framebuffer {
 	struct drm_framebuffer base;
 	const struct msm_format *format;
-	struct drm_gem_object *planes[MAX_PLANE];
 };
 #define to_msm_framebuffer(x) container_of(x, struct msm_framebuffer, base)
 
 static struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev,
 		const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos);
 
-static int msm_framebuffer_create_handle(struct drm_framebuffer *fb,
-		struct drm_file *file_priv,
-		unsigned int *handle)
-{
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	return drm_gem_handle_create(file_priv,
-			msm_fb->planes[0], handle);
-}
-
-static void msm_framebuffer_destroy(struct drm_framebuffer *fb)
-{
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	int i, n = fb->format->num_planes;
-
-	DBG("destroy: FB ID: %d (%p)", fb->base.id, fb);
-
-	drm_framebuffer_cleanup(fb);
-
-	for (i = 0; i < n; i++) {
-		struct drm_gem_object *bo = msm_fb->planes[i];
-
-		drm_gem_object_put_unlocked(bo);
-	}
-
-	kfree(msm_fb);
-}
-
 static const struct drm_framebuffer_funcs msm_framebuffer_funcs = {
-	.create_handle = msm_framebuffer_create_handle,
-	.destroy = msm_framebuffer_destroy,
+	.create_handle = drm_gem_fb_create_handle,
+	.destroy = drm_gem_fb_destroy,
 };
 
 #ifdef CONFIG_DEBUG_FS
 void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
 	seq_printf(m, "fb: %dx%d@%4.4s (%2d, ID:%d)\n",
···
 	for (i = 0; i < n; i++) {
 		seq_printf(m, " %d: offset=%d pitch=%d, obj: ",
 				i, fb->offsets[i], fb->pitches[i]);
-		msm_gem_describe(msm_fb->planes[i], m);
+		msm_gem_describe(fb->obj[i], m);
 	}
 }
 #endif
···
 int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 		struct msm_gem_address_space *aspace)
 {
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 	uint64_t iova;
 
 	for (i = 0; i < n; i++) {
-		ret = msm_gem_get_iova(msm_fb->planes[i], aspace, &iova);
+		ret = msm_gem_get_iova(fb->obj[i], aspace, &iova);
 		DBG("FB[%u]: iova[%d]: %08llx (%d)", fb->base.id, i, iova, ret);
 		if (ret)
 			return ret;
···
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 		struct msm_gem_address_space *aspace)
 {
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
 	for (i = 0; i < n; i++)
-		msm_gem_put_iova(msm_fb->planes[i], aspace);
+		msm_gem_put_iova(fb->obj[i], aspace);
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
 		struct msm_gem_address_space *aspace, int plane)
 {
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	if (!msm_fb->planes[plane])
+	if (!fb->obj[plane])
 		return 0;
-	return msm_gem_iova(msm_fb->planes[plane], aspace) + fb->offsets[plane];
+	return msm_gem_iova(fb->obj[plane], aspace) + fb->offsets[plane];
 }
 
 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane)
 {
-	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
-	return msm_fb->planes[plane];
+	return drm_gem_fb_get_obj(fb, plane);
 }
 
 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb)
···
 
 	msm_fb->format = format;
 
-	if (n > ARRAY_SIZE(msm_fb->planes)) {
+	if (n > ARRAY_SIZE(fb->obj)) {
 		ret = -EINVAL;
 		goto fail;
 	}
···
 			goto fail;
 		}
 
-		msm_fb->planes[i] = bos[i];
+		msm_fb->base.obj[i] = bos[i];
 	}
 
 	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);
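Even after this conversion, msm still wraps drm_framebuffer in its own msm_framebuffer to carry the driver-specific format pointer, and to_msm_framebuffer() recovers the wrapper with container_of(). A self-contained userspace sketch of that pattern (toy struct definitions standing in for the real DRM types):

```c
#include <stddef.h>

/* Simplified container_of(): subtract the member's offset from the
 * member pointer to recover the enclosing structure. The kernel's
 * version adds type checking, but the arithmetic is the same. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct drm_framebuffer { int id; };	/* toy stand-in */

struct msm_framebuffer {
	struct drm_framebuffer base;	/* embedded base object */
	int format;			/* toy stand-in for the format pointer */
};

#define to_msm_framebuffer(x) container_of(x, struct msm_framebuffer, base)
```

Core DRM code only ever sees `&msm_fb->base`; the driver casts back with to_msm_framebuffer() whenever it needs its private fields.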
+24 -85
drivers/gpu/drm/omapdrm/omap_fb.c
···
 
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_gem_framebuffer_helper.h>
 
 #include "omap_dmm_tiler.h"
 #include "omap_drv.h"
···
 
 /* per-plane info for the fb: */
 struct plane {
-	struct drm_gem_object *bo;
-	u32 pitch;
-	u32 offset;
 	dma_addr_t dma_addr;
 };
 
···
 	struct mutex lock;
 };
 
-static int omap_framebuffer_create_handle(struct drm_framebuffer *fb,
-		struct drm_file *file_priv,
-		unsigned int *handle)
-{
-	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	return drm_gem_handle_create(file_priv,
-			omap_fb->planes[0].bo, handle);
-}
-
-static void omap_framebuffer_destroy(struct drm_framebuffer *fb)
-{
-	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int i, n = fb->format->num_planes;
-
-	DBG("destroy: FB ID: %d (%p)", fb->base.id, fb);
-
-	drm_framebuffer_cleanup(fb);
-
-	for (i = 0; i < n; i++) {
-		struct plane *plane = &omap_fb->planes[i];
-
-		drm_gem_object_unreference_unlocked(plane->bo);
-	}
-
-	kfree(omap_fb);
-}
-
 static const struct drm_framebuffer_funcs omap_framebuffer_funcs = {
-	.create_handle = omap_framebuffer_create_handle,
-	.destroy = omap_framebuffer_destroy,
+	.create_handle = drm_gem_fb_create_handle,
+	.destroy = drm_gem_fb_destroy,
 };
 
-static u32 get_linear_addr(struct plane *plane,
+static u32 get_linear_addr(struct drm_framebuffer *fb,
 		const struct drm_format_info *format, int n, int x, int y)
 {
+	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
+	struct plane *plane = &omap_fb->planes[n];
 	u32 offset;
 
-	offset = plane->offset
+	offset = fb->offsets[n]
 	       + (x * format->cpp[n] / (n == 0 ? 1 : format->hsub))
-	       + (y * plane->pitch / (n == 0 ? 1 : format->vsub));
+	       + (y * fb->pitches[n] / (n == 0 ? 1 : format->vsub));
 
 	return plane->dma_addr + offset;
 }
 
 bool omap_framebuffer_supports_rotation(struct drm_framebuffer *fb)
 {
-	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	struct plane *plane = &omap_fb->planes[0];
-
-	return omap_gem_flags(plane->bo) & OMAP_BO_TILED;
+	return omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED;
 }
 
 /* Note: DRM rotates counter-clockwise, TILER & DSS rotates clockwise */
···
 	x = state->src_x >> 16;
 	y = state->src_y >> 16;
 
-	if (omap_gem_flags(plane->bo) & OMAP_BO_TILED) {
+	if (omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED) {
 		u32 w = state->src_w >> 16;
 		u32 h = state->src_h >> 16;
 
···
 			x += w - 1;
 
 		/* Note: x and y are in TILER units, not pixels */
-		omap_gem_rotated_dma_addr(plane->bo, orient, x, y,
+		omap_gem_rotated_dma_addr(fb->obj[0], orient, x, y,
 					  &info->paddr);
 		info->rotation_type = OMAP_DSS_ROT_TILER;
 		info->rotation = state->rotation ?: DRM_MODE_ROTATE_0;
 		/* Note: stride in TILER units, not pixels */
-		info->screen_width = omap_gem_tiled_stride(plane->bo, orient);
+		info->screen_width = omap_gem_tiled_stride(fb->obj[0], orient);
 	} else {
 		switch (state->rotation & DRM_MODE_ROTATE_MASK) {
 		case 0:
···
 			break;
 		}
 
-		info->paddr = get_linear_addr(plane, format, 0, x, y);
+		info->paddr = get_linear_addr(fb, format, 0, x, y);
 		info->rotation_type = OMAP_DSS_ROT_NONE;
 		info->rotation = DRM_MODE_ROTATE_0;
-		info->screen_width = plane->pitch;
+		info->screen_width = fb->pitches[0];
 	}
 
 	/* convert to pixels: */
···
 		plane = &omap_fb->planes[1];
 
 		if (info->rotation_type == OMAP_DSS_ROT_TILER) {
-			WARN_ON(!(omap_gem_flags(plane->bo) & OMAP_BO_TILED));
-			omap_gem_rotated_dma_addr(plane->bo, orient, x/2, y/2,
+			WARN_ON(!(omap_gem_flags(fb->obj[1]) & OMAP_BO_TILED));
+			omap_gem_rotated_dma_addr(fb->obj[1], orient, x/2, y/2,
 						  &info->p_uv_addr);
 		} else {
-			info->p_uv_addr = get_linear_addr(plane, format, 1, x, y);
+			info->p_uv_addr = get_linear_addr(fb, format, 1, x, y);
 		}
 	} else {
 		info->p_uv_addr = 0;
···
 
 	for (i = 0; i < n; i++) {
 		struct plane *plane = &omap_fb->planes[i];
-		ret = omap_gem_pin(plane->bo, &plane->dma_addr);
+		ret = omap_gem_pin(fb->obj[i], &plane->dma_addr);
 		if (ret)
 			goto fail;
-		omap_gem_dma_sync_buffer(plane->bo, DMA_TO_DEVICE);
+		omap_gem_dma_sync_buffer(fb->obj[i], DMA_TO_DEVICE);
 	}
 
 	omap_fb->pin_count++;
···
 fail:
 	for (i--; i >= 0; i--) {
 		struct plane *plane = &omap_fb->planes[i];
-		omap_gem_unpin(plane->bo);
+		omap_gem_unpin(fb->obj[i]);
 		plane->dma_addr = 0;
 	}
 
···
 
 	for (i = 0; i < n; i++) {
 		struct plane *plane = &omap_fb->planes[i];
-		omap_gem_unpin(plane->bo);
+		omap_gem_unpin(fb->obj[i]);
 		plane->dma_addr = 0;
 	}
 
 	mutex_unlock(&omap_fb->lock);
 }
 
-/* iterate thru all the connectors, returning ones that are attached
- * to the same fb..
- */
-struct drm_connector *omap_framebuffer_get_next_connector(
-		struct drm_framebuffer *fb, struct drm_connector *from)
-{
-	struct drm_device *dev = fb->dev;
-	struct list_head *connector_list = &dev->mode_config.connector_list;
-	struct drm_connector *connector = from;
-
-	if (!from)
-		return list_first_entry_or_null(connector_list, typeof(*from),
-						head);
-
-	list_for_each_entry_from(connector, connector_list, head) {
-		if (connector != from) {
-			struct drm_encoder *encoder = connector->encoder;
-			struct drm_crtc *crtc = encoder ? encoder->crtc : NULL;
-			if (crtc && crtc->primary->fb == fb)
-				return connector;
-
-		}
-	}
-
-	return NULL;
-}
-
 #ifdef CONFIG_DEBUG_FS
 void omap_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
-	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
 	seq_printf(m, "fb: %dx%d@%4.4s\n", fb->width, fb->height,
 			(char *)&fb->format->format);
 
 	for (i = 0; i < n; i++) {
-		struct plane *plane = &omap_fb->planes[i];
 		seq_printf(m, " %d: offset=%d pitch=%d, obj: ",
-				i, plane->offset, plane->pitch);
-		omap_gem_describe(plane->bo, m);
+				i, fb->offsets[n], fb->pitches[i]);
+		omap_gem_describe(fb->obj[i], m);
 	}
 }
 #endif
···
 			goto fail;
 		}
 
-		plane->bo = bos[i];
-		plane->offset = mode_cmd->offsets[i];
-		plane->pitch = pitch;
+		fb->obj[i] = bos[i];
 		plane->dma_addr = 0;
 	}
 
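get_linear_addr() above now reads the per-plane offset and pitch straight out of drm_framebuffer. The byte-offset arithmetic itself is worth spelling out: for a chroma plane of a subsampled format, x is divided by the horizontal subsampling factor and the y term by the vertical one. A standalone sketch of just that math (hypothetical helper name; parameters mirror the fields used above):

```c
#include <stdint.h>

/* Byte offset of pixel (x, y) within plane `plane` of a framebuffer,
 * in the style of get_linear_addr(): plane 0 is full resolution,
 * chroma planes (plane > 0) are divided by the format's horizontal
 * (hsub) and vertical (vsub) subsampling factors. */
static uint32_t linear_offset(uint32_t fb_offset, uint32_t pitch,
			      uint32_t cpp, uint32_t hsub, uint32_t vsub,
			      int plane, uint32_t x, uint32_t y)
{
	return fb_offset
	     + (x * cpp / (plane == 0 ? 1 : hsub))
	     + (y * pitch / (plane == 0 ? 1 : vsub));
}
```

For an NV12-like layout (hsub = vsub = 2, interleaved CbCr with cpp = 2), pixel (4, 2) in plane 1 with a 64-byte pitch lands at 4*2/2 + 2*64/2 = 68 bytes past the plane's base offset.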
-2
drivers/gpu/drm/omapdrm/omap_fb.h
···
 void omap_framebuffer_unpin(struct drm_framebuffer *fb);
 void omap_framebuffer_update_scanout(struct drm_framebuffer *fb,
 		struct drm_plane_state *state, struct omap_overlay_info *info);
-struct drm_connector *omap_framebuffer_get_next_connector(
-		struct drm_framebuffer *fb, struct drm_connector *from);
 bool omap_framebuffer_supports_rotation(struct drm_framebuffer *fb);
 void omap_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m);
 
-2
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
···
 	.release = drm_gem_dmabuf_release,
 	.begin_cpu_access = omap_gem_dmabuf_begin_cpu_access,
 	.end_cpu_access = omap_gem_dmabuf_end_cpu_access,
-	.map_atomic = omap_gem_dmabuf_kmap_atomic,
-	.unmap_atomic = omap_gem_dmabuf_kunmap_atomic,
 	.map = omap_gem_dmabuf_kmap,
 	.unmap = omap_gem_dmabuf_kunmap,
 	.mmap = omap_gem_dmabuf_mmap,
-1
drivers/gpu/drm/panel/panel-innolux-p079zca.c
···
 		DRM_DEV_ERROR(&dsi->dev, "failed to detach from DSI host: %d\n",
 			      err);
 
-	drm_panel_detach(&innolux->base);
 	innolux_panel_del(innolux);
 
 	return 0;
-1
drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
···
 		dev_err(&dsi->dev, "failed to detach from DSI host: %d\n",
 			ret);
 
-	drm_panel_detach(&jdi->base);
 	jdi_panel_del(jdi);
 
 	return 0;
-1
drivers/gpu/drm/panel/panel-lvds.c
···
 {
 	struct panel_lvds *lvds = dev_get_drvdata(&pdev->dev);
 
-	drm_panel_detach(&lvds->panel);
 	drm_panel_remove(&lvds->panel);
 
 	panel_lvds_disable(&lvds->panel);
+30 -28
drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
···
 #include <linux/regulator/consumer.h>
 #include <video/mipi_display.h>
 
-#define DRV_NAME "orisetech_otm8009a"
-
 #define OTM8009A_BACKLIGHT_DEFAULT	240
 #define OTM8009A_BACKLIGHT_MAX		255
 
···
 
 	if (mipi_dsi_dcs_write_buffer(dsi, data, len) < 0)
 		DRM_WARN("mipi dsi dcs write buffer failed\n");
+}
+
+static void otm8009a_dcs_write_buf_hs(struct otm8009a *ctx, const void *data,
+				      size_t len)
+{
+	struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
+
+	/* data will be sent in dsi hs mode (ie. no lpm) */
+	dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
+
+	otm8009a_dcs_write_buf(ctx, data, len);
+
+	/* restore back the dsi lpm mode */
+	dsi->mode_flags |= MIPI_DSI_MODE_LPM;
 }
 
 #define dcs_write_seq(ctx, seq...) \
···
 	if (!ctx->enabled)
 		return 0; /* This is not an issue so we return 0 here */
 
-	/* Power off the backlight. Note: end-user still controls brightness */
-	ctx->bl_dev->props.power = FB_BLANK_POWERDOWN;
-	ret = backlight_update_status(ctx->bl_dev);
-	if (ret)
-		return ret;
+	backlight_disable(ctx->bl_dev);
 
 	ret = mipi_dsi_dcs_set_display_off(dsi);
 	if (ret)
···
 
 	ctx->prepared = true;
 
-	/*
-	 * Power on the backlight. Note: end-user still controls brightness
-	 */
-	ctx->bl_dev->props.power = FB_BLANK_UNBLANK;
-	backlight_update_status(ctx->bl_dev);
-
 	return 0;
 }
 
 static int otm8009a_enable(struct drm_panel *panel)
 {
 	struct otm8009a *ctx = panel_to_otm8009a(panel);
+
+	if (ctx->enabled)
+		return 0;
+
+	backlight_enable(ctx->bl_dev);
 
 	ctx->enabled = true;
 
···
 	 */
 	data[0] = MIPI_DCS_SET_DISPLAY_BRIGHTNESS;
 	data[1] = bd->props.brightness;
-	otm8009a_dcs_write_buf(ctx, data, ARRAY_SIZE(data));
+	otm8009a_dcs_write_buf_hs(ctx, data, ARRAY_SIZE(data));
 
 	/* set Brightness Control & Backlight on */
 	data[1] = 0x24;
···
 
 	/* Update Brightness Control & Backlight */
 	data[0] = MIPI_DCS_WRITE_CONTROL_DISPLAY;
-	otm8009a_dcs_write_buf(ctx, data, ARRAY_SIZE(data));
+	otm8009a_dcs_write_buf_hs(ctx, data, ARRAY_SIZE(data));
 
 	return 0;
 }
···
 	ctx->panel.dev = dev;
 	ctx->panel.funcs = &otm8009a_drm_funcs;
 
-	ctx->bl_dev = backlight_device_register(DRV_NAME "_backlight", dev, ctx,
-						&otm8009a_backlight_ops, NULL);
+	ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
+						     dsi->host->dev, ctx,
+						     &otm8009a_backlight_ops,
+						     NULL);
 	if (IS_ERR(ctx->bl_dev)) {
-		dev_err(dev, "failed to register backlight device\n");
-		return PTR_ERR(ctx->bl_dev);
+		ret = PTR_ERR(ctx->bl_dev);
+		dev_err(dev, "failed to register backlight: %d\n", ret);
+		return ret;
 	}
 
 	ctx->bl_dev->props.max_brightness = OTM8009A_BACKLIGHT_MAX;
···
 		return ret;
 	}
 
-	DRM_INFO(DRV_NAME "_panel %ux%u@%u %ubpp dsi %udl - ready\n",
-		 default_mode.hdisplay, default_mode.vdisplay,
-		 default_mode.vrefresh,
-		 mipi_dsi_pixel_format_to_bpp(dsi->format), dsi->lanes);
-
 	return 0;
 }
···
 
 	mipi_dsi_detach(dsi);
 	drm_panel_remove(&ctx->panel);
-
-	backlight_device_unregister(ctx->bl_dev);
 
 	return 0;
 }
···
 	.probe = otm8009a_probe,
 	.remove = otm8009a_remove,
 	.driver = {
-		.name = DRV_NAME "_panel",
+		.name = "panel-orisetech-otm8009a",
 		.of_match_table = orisetech_otm8009a_of_match,
 	},
 };
-1
drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
··· 299 299 if (ret < 0) 300 300 dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret); 301 301 302 - drm_panel_detach(&wuxga_nt->base); 303 302 wuxga_nt_panel_del(wuxga_nt); 304 303 305 304 return 0;
-1
drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
··· 292 292 { 293 293 struct seiko_panel *panel = dev_get_drvdata(&pdev->dev); 294 294 295 - drm_panel_detach(&panel->base); 296 295 drm_panel_remove(&panel->base); 297 296 298 297 seiko_panel_disable(&panel->base);
-1
drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c
··· 418 418 if (err < 0) 419 419 dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", err); 420 420 421 - drm_panel_detach(&sharp->base); 422 421 sharp_panel_del(sharp); 423 422 424 423 return 0;
-1
drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
···
 	if (ret < 0)
 		dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret);
 
-	drm_panel_detach(&sharp_nt->base);
 	sharp_nt_panel_del(sharp_nt);
 
 	return 0;
+61 -4
drivers/gpu/drm/panel/panel-simple.c
···
 {
 	struct panel_simple *panel = dev_get_drvdata(dev);
 
-	drm_panel_detach(&panel->base);
 	drm_panel_remove(&panel->base);
 
 	panel_simple_disable(&panel->base);
···
 	},
 };
 
+static const struct display_timing auo_g070vvn01_timings = {
+	.pixelclock = { 33300000, 34209000, 45000000 },
+	.hactive = { 800, 800, 800 },
+	.hfront_porch = { 20, 40, 200 },
+	.hback_porch = { 87, 40, 1 },
+	.hsync_len = { 1, 48, 87 },
+	.vactive = { 480, 480, 480 },
+	.vfront_porch = { 5, 13, 200 },
+	.vback_porch = { 31, 31, 29 },
+	.vsync_len = { 1, 1, 3 },
+};
+
+static const struct panel_desc auo_g070vvn01 = {
+	.timings = &auo_g070vvn01_timings,
+	.num_timings = 1,
+	.bpc = 8,
+	.size = {
+		.width = 152,
+		.height = 91,
+	},
+	.delay = {
+		.prepare = 200,
+		.enable = 50,
+		.disable = 50,
+		.unprepare = 1000,
+	},
+};
+
 static const struct drm_display_mode auo_g104sn02_mode = {
 	.clock = 40000,
 	.hdisplay = 800,
···
 		.enable = 450,
 		.unprepare = 500,
 	},
-	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA,
+	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
 };
 
 static const struct drm_display_mode auo_t215hvn01_mode = {
···
 	},
 };
 
+static const struct drm_display_mode innolux_tv123wam_mode = {
+	.clock = 206016,
+	.hdisplay = 2160,
+	.hsync_start = 2160 + 48,
+	.hsync_end = 2160 + 48 + 32,
+	.htotal = 2160 + 48 + 32 + 80,
+	.vdisplay = 1440,
+	.vsync_start = 1440 + 3,
+	.vsync_end = 1440 + 3 + 10,
+	.vtotal = 1440 + 3 + 10 + 27,
+	.vrefresh = 60,
+	.flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
+};
+
+static const struct panel_desc innolux_tv123wam = {
+	.modes = &innolux_tv123wam_mode,
+	.num_modes = 1,
+	.bpc = 8,
+	.size = {
+		.width = 259,
+		.height = 173,
+	},
+};
+
 static const struct drm_display_mode innolux_zj070na_01p_mode = {
 	.clock = 51501,
 	.hdisplay = 1024,
···
 	.hback_porch = { 16, 36, 56 },
 	.hsync_len = { 8, 8, 8 },
 	.vactive = { 480, 480, 480 },
-	.vfront_porch = { 6, 21, 33.5 },
-	.vback_porch = { 6, 21, 33.5 },
+	.vfront_porch = { 6, 21, 33 },
+	.vback_porch = { 6, 21, 33 },
 	.vsync_len = { 8, 8, 8 },
 	.flags = DISPLAY_FLAGS_DE_HIGH,
 };
···
 		.compatible = "auo,b133xtn01",
 		.data = &auo_b133xtn01,
 	}, {
+		.compatible = "auo,g070vvn01",
+		.data = &auo_g070vvn01,
+	}, {
 		.compatible = "auo,g104sn02",
 		.data = &auo_g104sn02,
 	}, {
···
 	}, {
 		.compatible = "innolux,n156bge-l21",
 		.data = &innolux_n156bge_l21,
+	}, {
+		.compatible = "innolux,tv123wam",
+		.data = &innolux_tv123wam,
 	}, {
 		.compatible = "innolux,zj070na-01p",
 		.data = &innolux_zj070na_01p,
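Each field of a display_timing is a {min, typ, max} triple, and a consistent entry must satisfy min <= typ <= max. A toy checker for that invariant (hypothetical helper, mirroring the kernel's struct timing_entry layout) is a quick way to spot suspect entries; note, for instance, that the hback_porch triple in auo_g070vvn01_timings above reads { 87, 40, 1 }, which does not satisfy the ordering:

```c
/* {min, typ, max} triple, as in the kernel's <video/display_timing.h>
 * (simplified re-declaration for this standalone sketch). */
struct timing_entry {
	unsigned int min;
	unsigned int typ;
	unsigned int max;
};

/* A timing entry is self-consistent when min <= typ <= max. */
static int timing_entry_valid(const struct timing_entry *t)
{
	return t->min <= t->typ && t->typ <= t->max;
}
```

Running such a check over a new panel_desc before submitting it catches transposed triples early.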
-1
drivers/gpu/drm/panel/panel-sitronix-st7789v.c
··· 419 419 { 420 420 struct st7789v *ctx = spi_get_drvdata(spi); 421 421 422 - drm_panel_detach(&ctx->panel); 423 422 drm_panel_remove(&ctx->panel); 424 423 425 424 if (ctx->backlight)
+1 -15
drivers/gpu/drm/rockchip/cdn-dp-reg.c
···
 
 int cdn_dp_audio_stop(struct cdn_dp_device *dp, struct audio_info *audio)
 {
-	u32 val;
 	int ret;
 
 	ret = cdn_dp_reg_write(dp, AUDIO_PACK_CONTROL, 0);
···
 		return ret;
 	}
 
-	val = SPDIF_AVG_SEL | SPDIF_JITTER_BYPASS;
-	val |= SPDIF_FIFO_MID_RANGE(0xe0);
-	val |= SPDIF_JITTER_THRSH(0xe0);
-	val |= SPDIF_JITTER_AVG_WIN(7);
-	writel(val, dp->regs + SPDIF_CTRL_ADDR);
+	writel(0, dp->regs + SPDIF_CTRL_ADDR);
 
 	/* clearn the audio config and reset */
 	writel(0, dp->regs + AUDIO_SRC_CNTL);
···
 {
 	u32 val;
 
-	val = SPDIF_AVG_SEL | SPDIF_JITTER_BYPASS;
-	val |= SPDIF_FIFO_MID_RANGE(0xe0);
-	val |= SPDIF_JITTER_THRSH(0xe0);
-	val |= SPDIF_JITTER_AVG_WIN(7);
-	writel(val, dp->regs + SPDIF_CTRL_ADDR);
-
 	writel(SYNC_WR_TO_CH_ZERO, dp->regs + FIFO_CNTL);
 
 	val = MAX_NUM_CH(2) | AUDIO_TYPE_LPCM | CFG_SUB_PCKT_NUM(4);
···
 	writel(SMPL2PKT_EN, dp->regs + SMPL2PKT_CNTL);
 
 	val = SPDIF_ENABLE | SPDIF_AVG_SEL | SPDIF_JITTER_BYPASS;
-	val |= SPDIF_FIFO_MID_RANGE(0xe0);
-	val |= SPDIF_JITTER_THRSH(0xe0);
-	val |= SPDIF_JITTER_AVG_WIN(7);
 	writel(val, dp->regs + SPDIF_CTRL_ADDR);
 
 	clk_prepare_enable(dp->spdif_clk);
+23 -63
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
···
 #include <drm/drm_atomic.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_gem_framebuffer_helper.h>
 
 #include "rockchip_drm_drv.h"
 #include "rockchip_drm_fb.h"
 #include "rockchip_drm_gem.h"
 #include "rockchip_drm_psr.h"
-
-#define to_rockchip_fb(x) container_of(x, struct rockchip_drm_fb, fb)
-
-struct rockchip_drm_fb {
-	struct drm_framebuffer fb;
-	struct drm_gem_object *obj[ROCKCHIP_MAX_FB_BUFFER];
-};
-
-struct drm_gem_object *rockchip_fb_get_gem_obj(struct drm_framebuffer *fb,
-					       unsigned int plane)
-{
-	struct rockchip_drm_fb *rk_fb = to_rockchip_fb(fb);
-
-	if (plane >= ROCKCHIP_MAX_FB_BUFFER)
-		return NULL;
-
-	return rk_fb->obj[plane];
-}
-
-static void rockchip_drm_fb_destroy(struct drm_framebuffer *fb)
-{
-	struct rockchip_drm_fb *rockchip_fb = to_rockchip_fb(fb);
-	int i;
-
-	for (i = 0; i < ROCKCHIP_MAX_FB_BUFFER; i++)
-		drm_gem_object_put_unlocked(rockchip_fb->obj[i]);
-
-	drm_framebuffer_cleanup(fb);
-	kfree(rockchip_fb);
-}
-
-static int rockchip_drm_fb_create_handle(struct drm_framebuffer *fb,
-					 struct drm_file *file_priv,
-					 unsigned int *handle)
-{
-	struct rockchip_drm_fb *rockchip_fb = to_rockchip_fb(fb);
-
-	return drm_gem_handle_create(file_priv,
-				     rockchip_fb->obj[0], handle);
-}
 
 static int rockchip_drm_fb_dirty(struct drm_framebuffer *fb,
 				 struct drm_file *file,
···
 }
 
 static const struct drm_framebuffer_funcs rockchip_drm_fb_funcs = {
-	.destroy	= rockchip_drm_fb_destroy,
-	.create_handle	= rockchip_drm_fb_create_handle,
-	.dirty		= rockchip_drm_fb_dirty,
+	.destroy	= drm_gem_fb_destroy,
+	.create_handle	= drm_gem_fb_create_handle,
+	.dirty		= rockchip_drm_fb_dirty,
 };
 
-static struct rockchip_drm_fb *
+static struct drm_framebuffer *
 rockchip_fb_alloc(struct drm_device *dev, const struct drm_mode_fb_cmd2 *mode_cmd,
 		  struct drm_gem_object **obj, unsigned int num_planes)
 {
-	struct rockchip_drm_fb *rockchip_fb;
+	struct drm_framebuffer *fb;
 	int ret;
 	int i;
 
-	rockchip_fb = kzalloc(sizeof(*rockchip_fb), GFP_KERNEL);
-	if (!rockchip_fb)
+	fb = kzalloc(sizeof(*fb), GFP_KERNEL);
+	if (!fb)
 		return ERR_PTR(-ENOMEM);
 
-	drm_helper_mode_fill_fb_struct(dev, &rockchip_fb->fb, mode_cmd);
+	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);
 
 	for (i = 0; i < num_planes; i++)
-		rockchip_fb->obj[i] = obj[i];
+		fb->obj[i] = obj[i];
 
-	ret = drm_framebuffer_init(dev, &rockchip_fb->fb,
-				   &rockchip_drm_fb_funcs);
+	ret = drm_framebuffer_init(dev, fb, &rockchip_drm_fb_funcs);
 	if (ret) {
 		DRM_DEV_ERROR(dev->dev,
 			      "Failed to initialize framebuffer: %d\n",
 			      ret);
-		kfree(rockchip_fb);
+		kfree(fb);
 		return ERR_PTR(ret);
 	}
 
-	return rockchip_fb;
+	return fb;
 }
 
 static struct drm_framebuffer *
 rockchip_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
 			const struct drm_mode_fb_cmd2 *mode_cmd)
 {
-	struct rockchip_drm_fb *rockchip_fb;
+	struct drm_framebuffer *fb;
 	struct drm_gem_object *objs[ROCKCHIP_MAX_FB_BUFFER];
 	struct drm_gem_object *obj;
 	unsigned int hsub;
···
 		objs[i] = obj;
 	}
 
-	rockchip_fb = rockchip_fb_alloc(dev, mode_cmd, objs, i);
-	if (IS_ERR(rockchip_fb)) {
-		ret = PTR_ERR(rockchip_fb);
+	fb = rockchip_fb_alloc(dev, mode_cmd, objs, i);
+	if (IS_ERR(fb)) {
+		ret = PTR_ERR(fb);
 		goto err_gem_object_unreference;
 	}
 
-	return &rockchip_fb->fb;
+	return fb;
 
 err_gem_object_unreference:
 	for (i--; i >= 0; i--)
···
 			    const struct drm_mode_fb_cmd2 *mode_cmd,
 			    struct drm_gem_object *obj)
 {
-	struct rockchip_drm_fb *rockchip_fb;
+	struct drm_framebuffer *fb;
 
-	rockchip_fb = rockchip_fb_alloc(dev, mode_cmd, &obj, 1);
-	if (IS_ERR(rockchip_fb))
-		return ERR_CAST(rockchip_fb);
+	fb = rockchip_fb_alloc(dev, mode_cmd, &obj, 1);
+	if (IS_ERR(fb))
+		return ERR_CAST(fb);
 
-	return &rockchip_fb->fb;
+	return fb;
 }
 
 void rockchip_drm_mode_config_init(struct drm_device *dev)
-3
drivers/gpu/drm/rockchip/rockchip_drm_fb.h
···
 void rockchip_drm_framebuffer_fini(struct drm_framebuffer *fb);
 
 void rockchip_drm_mode_config_init(struct drm_device *dev);
-
-struct drm_gem_object *rockchip_fb_get_gem_obj(struct drm_framebuffer *fb,
-					       unsigned int plane);
 #endif /* _ROCKCHIP_DRM_FB_H */
+50 -23
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
···
 	spin_unlock_irqrestore(&vop->irq_lock, flags);
 }
 
+static int vop_core_clks_enable(struct vop *vop)
+{
+	int ret;
+
+	ret = clk_enable(vop->hclk);
+	if (ret < 0)
+		return ret;
+
+	ret = clk_enable(vop->aclk);
+	if (ret < 0)
+		goto err_disable_hclk;
+
+	return 0;
+
+err_disable_hclk:
+	clk_disable(vop->hclk);
+	return ret;
+}
+
+static void vop_core_clks_disable(struct vop *vop)
+{
+	clk_disable(vop->aclk);
+	clk_disable(vop->hclk);
+}
+
 static int vop_enable(struct drm_crtc *crtc)
 {
 	struct vop *vop = to_vop(crtc);
···
 		return ret;
 	}
 
-	ret = clk_enable(vop->hclk);
+	ret = vop_core_clks_enable(vop);
 	if (WARN_ON(ret < 0))
 		goto err_put_pm_runtime;
 
 	ret = clk_enable(vop->dclk);
 	if (WARN_ON(ret < 0))
-		goto err_disable_hclk;
-
-	ret = clk_enable(vop->aclk);
-	if (WARN_ON(ret < 0))
-		goto err_disable_dclk;
+		goto err_disable_core;
 
 	/*
 	 * Slave iommu shares power, irq and clock with vop. It was associated
···
 	if (ret) {
 		DRM_DEV_ERROR(vop->dev,
 			      "failed to attach dma mapping, %d\n", ret);
-		goto err_disable_aclk;
+		goto err_disable_dclk;
 	}
 
 	spin_lock(&vop->reg_lock);
···
 
 	spin_unlock(&vop->reg_lock);
 
-	enable_irq(vop->irq);
-
 	drm_crtc_vblank_on(crtc);
 
 	return 0;
 
-err_disable_aclk:
-	clk_disable(vop->aclk);
 err_disable_dclk:
 	clk_disable(vop->dclk);
-err_disable_hclk:
-	clk_disable(vop->hclk);
+err_disable_core:
+	vop_core_clks_disable(vop);
 err_put_pm_runtime:
 	pm_runtime_put_sync(vop->dev);
 	return ret;
···
 
 	vop_dsp_hold_valid_irq_disable(vop);
 
-	disable_irq(vop->irq);
-
 	vop->is_enabled = false;
 
 	/*
···
 	rockchip_drm_dma_detach_device(vop->drm_dev, vop->dev);
 
 	clk_disable(vop->dclk);
-	clk_disable(vop->aclk);
-	clk_disable(vop->hclk);
+	vop_core_clks_disable(vop);
 	pm_runtime_put(vop->dev);
 	mutex_unlock(&vop->vop_lock);
 
···
 		return;
 	}
 
-	obj = rockchip_fb_get_gem_obj(fb, 0);
+	obj = fb->obj[0];
 	rk_obj = to_rockchip_obj(obj);
 
 	actual_w = drm_rect_width(src) >> 16;
···
 	int vsub = drm_format_vert_chroma_subsampling(fb->format->format);
 	int bpp = fb->format->cpp[1];
 
-	uv_obj = rockchip_fb_get_gem_obj(fb, 1);
+	uv_obj = fb->obj[1];
 	rk_uv_obj = to_rockchip_obj(uv_obj);
 
 	offset = (src->x1 >> 16) * bpp / hsub;
···
 	int ret = IRQ_NONE;
 
 	/*
+	 * The irq is shared with the iommu. If the runtime-pm state of the
+	 * vop-device is disabled the irq has to be targeted at the iommu.
+	 */
+	if (!pm_runtime_get_if_in_use(vop->dev))
+		return IRQ_NONE;
+
+	if (vop_core_clks_enable(vop)) {
+		DRM_DEV_ERROR_RATELIMITED(vop->dev, "couldn't enable clocks\n");
+		goto out;
+	}
+
+	/*
 	 * interrupt register has interrupt status, enable and clear bits, we
 	 * must hold irq_lock to avoid a race with enable/disable_vblank().
 	 */
···
 
 	/* This is expected for vop iommu irqs, since the irq is shared */
 	if (!active_irqs)
-		return IRQ_NONE;
+		goto out_disable;
 
 	if (active_irqs & DSP_HOLD_VALID_INTR) {
 		complete(&vop->dsp_hold_completion);
···
 		DRM_DEV_ERROR(vop->dev, "Unknown VOP IRQs: %#02x\n",
 			      active_irqs);
 
+out_disable:
+	vop_core_clks_disable(vop);
+out:
+	pm_runtime_put(vop->dev);
 	return ret;
 }
···
 			       IRQF_SHARED, dev_name(dev), vop);
 	if (ret)
 		goto err_disable_pm_runtime;
-
-	/* IRQ is initially disabled; it gets enabled in power_on */
-	disable_irq(vop->irq);
 
 	return 0;
 
+3 -3
drivers/gpu/drm/rockchip/rockchip_lvds.c
··· 363 363 of_property_read_u32(endpoint, "reg", &endpoint_id); 364 364 ret = drm_of_find_panel_or_bridge(dev->of_node, 1, endpoint_id, 365 365 &lvds->panel, &lvds->bridge); 366 - if (!ret) 366 + if (!ret) { 367 + of_node_put(endpoint); 367 368 break; 369 + } 368 370 } 369 371 if (!child_count) { 370 372 DRM_DEV_ERROR(dev, "lvds port does not have any children\n"); ··· 448 446 goto err_free_connector; 449 447 } 450 448 } else { 451 - lvds->bridge->encoder = encoder; 452 449 ret = drm_bridge_attach(encoder, lvds->bridge, NULL); 453 450 if (ret) { 454 451 DRM_DEV_ERROR(drm_dev->dev, 455 452 "failed to attach bridge: %d\n", ret); 456 453 goto err_free_encoder; 457 454 } 458 - encoder->bridge = lvds->bridge; 459 455 } 460 456 461 457 pm_runtime_enable(dev);
+2
drivers/gpu/drm/selftests/drm_mm_selftests.h
··· 19 19 selftest(evict, igt_evict) 20 20 selftest(evict_range, igt_evict_range) 21 21 selftest(bottomup, igt_bottomup) 22 + selftest(lowest, igt_lowest) 22 23 selftest(topdown, igt_topdown) 24 + selftest(highest, igt_highest) 23 25 selftest(color, igt_color) 24 26 selftest(color_evict, igt_color_evict) 25 27 selftest(color_evict_range, igt_color_evict_range)
+71
drivers/gpu/drm/selftests/test-drm_mm.c
··· 1825 1825 return ret; 1826 1826 } 1827 1827 1828 + static int __igt_once(unsigned int mode) 1829 + { 1830 + struct drm_mm mm; 1831 + struct drm_mm_node rsvd_lo, rsvd_hi, node; 1832 + int err; 1833 + 1834 + drm_mm_init(&mm, 0, 7); 1835 + 1836 + memset(&rsvd_lo, 0, sizeof(rsvd_lo)); 1837 + rsvd_lo.start = 1; 1838 + rsvd_lo.size = 1; 1839 + err = drm_mm_reserve_node(&mm, &rsvd_lo); 1840 + if (err) { 1841 + pr_err("Could not reserve low node\n"); 1842 + goto err; 1843 + } 1844 + 1845 + memset(&rsvd_hi, 0, sizeof(rsvd_hi)); 1846 + rsvd_hi.start = 5; 1847 + rsvd_hi.size = 1; 1848 + err = drm_mm_reserve_node(&mm, &rsvd_hi); 1849 + if (err) { 1850 + pr_err("Could not reserve high node\n"); 1851 + goto err_lo; 1852 + } 1853 + 1854 + if (!drm_mm_hole_follows(&rsvd_lo) || !drm_mm_hole_follows(&rsvd_hi)) { 1855 + pr_err("Expected a hole after lo and high nodes!\n"); 1856 + err = -EINVAL; 1857 + goto err_hi; 1858 + } 1859 + 1860 + memset(&node, 0, sizeof(node)); 1861 + err = drm_mm_insert_node_generic(&mm, &node, 1862 + 2, 0, 0, 1863 + mode | DRM_MM_INSERT_ONCE); 1864 + if (!err) { 1865 + pr_err("Unexpectedly inserted the node into the wrong hole: node.start=%llx\n", 1866 + node.start); 1867 + err = -EINVAL; 1868 + goto err_node; 1869 + } 1870 + 1871 + err = drm_mm_insert_node_generic(&mm, &node, 2, 0, 0, mode); 1872 + if (err) { 1873 + pr_err("Could not insert the node into the available hole!\n"); 1874 + err = -EINVAL; 1875 + goto err_hi; 1876 + } 1877 + 1878 + err_node: 1879 + drm_mm_remove_node(&node); 1880 + err_hi: 1881 + drm_mm_remove_node(&rsvd_hi); 1882 + err_lo: 1883 + drm_mm_remove_node(&rsvd_lo); 1884 + err: 1885 + drm_mm_takedown(&mm); 1886 + return err; 1887 + } 1888 + 1889 + static int igt_lowest(void *ignored) 1890 + { 1891 + return __igt_once(DRM_MM_INSERT_LOW); 1892 + } 1893 + 1894 + static int igt_highest(void *ignored) 1895 + { 1896 + return __igt_once(DRM_MM_INSERT_HIGH); 1897 + } 1898 + 1828 1899 static void separate_adjacent_colors(const struct
drm_mm_node *node, 1829 1900 unsigned long color, 1830 1901 u64 *start,
+5 -1
drivers/gpu/drm/sti/sti_gdp.c
··· 211 211 struct drm_info_node *node = s->private; 212 212 struct sti_gdp *gdp = (struct sti_gdp *)node->info_ent->data; 213 213 struct drm_plane *drm_plane = &gdp->plane.drm_plane; 214 - struct drm_crtc *crtc = drm_plane->crtc; 214 + struct drm_crtc *crtc; 215 + 216 + drm_modeset_lock(&drm_plane->mutex, NULL); 217 + crtc = drm_plane->state->crtc; 218 + drm_modeset_unlock(&drm_plane->mutex); 215 219 216 220 seq_printf(s, "%s: (vaddr = 0x%p)", 217 221 sti_plane_to_str(&gdp->plane), gdp->regs);
+2 -2
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
··· 1040 1040 return 0; 1041 1041 } 1042 1042 1043 - static int sun6i_dsi_runtime_resume(struct device *dev) 1043 + static int __maybe_unused sun6i_dsi_runtime_resume(struct device *dev) 1044 1044 { 1045 1045 struct sun6i_dsi *dsi = dev_get_drvdata(dev); 1046 1046 ··· 1069 1069 return 0; 1070 1070 } 1071 1071 1072 - static int sun6i_dsi_runtime_suspend(struct device *dev) 1072 + static int __maybe_unused sun6i_dsi_runtime_suspend(struct device *dev) 1073 1073 { 1074 1074 struct sun6i_dsi *dsi = dev_get_drvdata(dev); 1075 1075
-14
drivers/gpu/drm/tegra/gem.c
··· 582 582 return 0; 583 583 } 584 584 585 - static void *tegra_gem_prime_kmap_atomic(struct dma_buf *buf, 586 - unsigned long page) 587 - { 588 - return NULL; 589 - } 590 - 591 - static void tegra_gem_prime_kunmap_atomic(struct dma_buf *buf, 592 - unsigned long page, 593 - void *addr) 594 - { 595 - } 596 - 597 585 static void *tegra_gem_prime_kmap(struct dma_buf *buf, unsigned long page) 598 586 { 599 587 return NULL; ··· 622 634 .release = tegra_gem_prime_release, 623 635 .begin_cpu_access = tegra_gem_prime_begin_cpu_access, 624 636 .end_cpu_access = tegra_gem_prime_end_cpu_access, 625 - .map_atomic = tegra_gem_prime_kmap_atomic, 626 - .unmap_atomic = tegra_gem_prime_kunmap_atomic, 627 637 .map = tegra_gem_prime_kmap, 628 638 .unmap = tegra_gem_prime_kunmap, 629 639 .mmap = tegra_gem_prime_mmap,
-18
drivers/gpu/drm/udl/udl_dmabuf.c
··· 29 29 }; 30 30 31 31 static int udl_attach_dma_buf(struct dma_buf *dmabuf, 32 - struct device *dev, 33 32 struct dma_buf_attachment *attach) 34 33 { 35 34 struct udl_drm_dmabuf_attachment *udl_attach; ··· 157 158 return NULL; 158 159 } 159 160 160 - static void *udl_dmabuf_kmap_atomic(struct dma_buf *dma_buf, 161 - unsigned long page_num) 162 - { 163 - /* TODO */ 164 - 165 - return NULL; 166 - } 167 - 168 161 static void udl_dmabuf_kunmap(struct dma_buf *dma_buf, 169 162 unsigned long page_num, void *addr) 170 - { 171 - /* TODO */ 172 - } 173 - 174 - static void udl_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, 175 - unsigned long page_num, 176 - void *addr) 177 163 { 178 164 /* TODO */ 179 165 } ··· 177 193 .map_dma_buf = udl_map_dma_buf, 178 194 .unmap_dma_buf = udl_unmap_dma_buf, 179 195 .map = udl_dmabuf_kmap, 180 - .map_atomic = udl_dmabuf_kmap_atomic, 181 196 .unmap = udl_dmabuf_kunmap, 182 - .unmap_atomic = udl_dmabuf_kunmap_atomic, 183 197 .mmap = udl_dmabuf_mmap, 184 198 .release = drm_gem_dmabuf_release, 185 199 };
+2 -1
drivers/gpu/drm/udl/udl_drv.h
··· 16 16 17 17 #include <linux/usb.h> 18 18 #include <drm/drm_gem.h> 19 + #include <linux/mm_types.h> 19 20 20 21 #define DRIVER_NAME "udl" 21 22 #define DRIVER_DESC "DisplayLink" ··· 137 136 int udl_gem_vmap(struct udl_gem_object *obj); 138 137 void udl_gem_vunmap(struct udl_gem_object *obj); 139 138 int udl_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 140 - int udl_gem_fault(struct vm_fault *vmf); 139 + vm_fault_t udl_gem_fault(struct vm_fault *vmf); 141 140 142 141 int udl_handle_damage(struct udl_framebuffer *fb, int x, int y, 143 142 int width, int height);
+2 -13
drivers/gpu/drm/udl/udl_gem.c
··· 100 100 return ret; 101 101 } 102 102 103 - int udl_gem_fault(struct vm_fault *vmf) 103 + vm_fault_t udl_gem_fault(struct vm_fault *vmf) 104 104 { 105 105 struct vm_area_struct *vma = vmf->vma; 106 106 struct udl_gem_object *obj = to_udl_bo(vma->vm_private_data); 107 107 struct page *page; 108 108 unsigned int page_offset; 109 - int ret = 0; 110 109 111 110 page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; 112 111 ··· 113 114 return VM_FAULT_SIGBUS; 114 115 115 116 page = obj->pages[page_offset]; 116 - ret = vm_insert_page(vma, vmf->address, page); 117 - switch (ret) { 118 - case -EAGAIN: 119 - case 0: 120 - case -ERESTARTSYS: 121 - return VM_FAULT_NOPAGE; 122 - case -ENOMEM: 123 - return VM_FAULT_OOM; 124 - default: 125 - return VM_FAULT_SIGBUS; 126 - } 117 + return vmf_insert_page(vma, vmf->address, page); 127 118 } 128 119 129 120 int udl_gem_get_pages(struct udl_gem_object *obj)
+2 -2
drivers/gpu/drm/v3d/v3d_sched.c
··· 114 114 v3d_invalidate_caches(v3d); 115 115 116 116 fence = v3d_fence_create(v3d, q); 117 - if (!fence) 118 - return fence; 117 + if (IS_ERR(fence)) 118 + return NULL; 119 119 120 120 if (job->done_fence) 121 121 dma_fence_put(job->done_fence);
-3
drivers/gpu/drm/vc4/vc4_crtc.c
··· 862 862 * is released. 863 863 */ 864 864 drm_atomic_set_fb_for_plane(plane->state, fb); 865 - plane->fb = fb; 866 865 867 866 vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno, 868 867 vc4_async_page_flip_complete); ··· 1056 1057 drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL, 1057 1058 &vc4_crtc_funcs, NULL); 1058 1059 drm_crtc_helper_add(crtc, &vc4_crtc_helper_funcs); 1059 - primary_plane->crtc = crtc; 1060 1060 vc4_crtc->channel = vc4_crtc->data->hvs_channel; 1061 1061 drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r)); 1062 1062 drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size); ··· 1091 1093 cursor_plane = vc4_plane_init(drm, DRM_PLANE_TYPE_CURSOR); 1092 1094 if (!IS_ERR(cursor_plane)) { 1093 1095 cursor_plane->possible_crtcs = 1 << drm_crtc_index(crtc); 1094 - cursor_plane->crtc = crtc; 1095 1096 crtc->cursor = cursor_plane; 1096 1097 } 1097 1098
+86 -10
drivers/gpu/drm/vc4/vc4_plane.c
··· 467 467 struct drm_framebuffer *fb = state->fb; 468 468 u32 ctl0_offset = vc4_state->dlist_count; 469 469 const struct hvs_format *format = vc4_get_hvs_format(fb->format->format); 470 + u64 base_format_mod = fourcc_mod_broadcom_mod(fb->modifier); 470 471 int num_planes = drm_format_num_planes(format->drm); 471 472 bool mix_plane_alpha; 472 473 bool covers_screen; 473 474 u32 scl0, scl1, pitch0; 474 475 u32 lbm_size, tiling; 475 476 unsigned long irqflags; 477 + u32 hvs_format = format->hvs; 476 478 int ret, i; 477 479 478 480 ret = vc4_plane_setup_clipping_and_scaling(state); ··· 514 512 scl1 = vc4_get_scl_field(state, 0); 515 513 } 516 514 517 - switch (fb->modifier) { 515 + switch (base_format_mod) { 518 516 case DRM_FORMAT_MOD_LINEAR: 519 517 tiling = SCALER_CTL0_TILING_LINEAR; 520 518 pitch0 = VC4_SET_FIELD(fb->pitches[0], SCALER_SRC_PITCH); ··· 537 535 break; 538 536 } 539 537 538 + case DRM_FORMAT_MOD_BROADCOM_SAND64: 539 + case DRM_FORMAT_MOD_BROADCOM_SAND128: 540 + case DRM_FORMAT_MOD_BROADCOM_SAND256: { 541 + uint32_t param = fourcc_mod_broadcom_param(fb->modifier); 542 + 543 + /* Column-based NV12 or RGBA. 
544 + */ 545 + if (fb->format->num_planes > 1) { 546 + if (hvs_format != HVS_PIXEL_FORMAT_YCBCR_YUV420_2PLANE) { 547 + DRM_DEBUG_KMS("SAND format only valid for NV12/21"); 548 + return -EINVAL; 549 + } 550 + hvs_format = HVS_PIXEL_FORMAT_H264; 551 + } else { 552 + if (base_format_mod == DRM_FORMAT_MOD_BROADCOM_SAND256) { 553 + DRM_DEBUG_KMS("SAND256 format only valid for H.264"); 554 + return -EINVAL; 555 + } 556 + } 557 + 558 + switch (base_format_mod) { 559 + case DRM_FORMAT_MOD_BROADCOM_SAND64: 560 + tiling = SCALER_CTL0_TILING_64B; 561 + break; 562 + case DRM_FORMAT_MOD_BROADCOM_SAND128: 563 + tiling = SCALER_CTL0_TILING_128B; 564 + break; 565 + case DRM_FORMAT_MOD_BROADCOM_SAND256: 566 + tiling = SCALER_CTL0_TILING_256B_OR_T; 567 + break; 568 + default: 569 + break; 570 + } 571 + 572 + if (param > SCALER_TILE_HEIGHT_MASK) { 573 + DRM_DEBUG_KMS("SAND height too large (%d)\n", param); 574 + return -EINVAL; 575 + } 576 + 577 + pitch0 = VC4_SET_FIELD(param, SCALER_TILE_HEIGHT); 578 + break; 579 + } 580 + 540 581 default: 541 582 DRM_DEBUG_KMS("Unsupported FB tiling flag 0x%16llx", 542 583 (long long)fb->modifier); ··· 589 544 /* Control word */ 590 545 vc4_dlist_write(vc4_state, 591 546 SCALER_CTL0_VALID | 547 + VC4_SET_FIELD(SCALER_CTL0_RGBA_EXPAND_ROUND, SCALER_CTL0_RGBA_EXPAND) | 592 548 (format->pixel_order << SCALER_CTL0_ORDER_SHIFT) | 593 - (format->hvs << SCALER_CTL0_PIXEL_FORMAT_SHIFT) | 549 + (hvs_format << SCALER_CTL0_PIXEL_FORMAT_SHIFT) | 594 550 VC4_SET_FIELD(tiling, SCALER_CTL0_TILING) | 595 551 (vc4_state->is_unity ? 
SCALER_CTL0_UNITY : 0) | 596 552 VC4_SET_FIELD(scl0, SCALER_CTL0_SCL0) | ··· 653 607 654 608 /* Pitch word 1/2 */ 655 609 for (i = 1; i < num_planes; i++) { 656 - vc4_dlist_write(vc4_state, 657 - VC4_SET_FIELD(fb->pitches[i], SCALER_SRC_PITCH)); 610 + if (hvs_format != HVS_PIXEL_FORMAT_H264) { 611 + vc4_dlist_write(vc4_state, 612 + VC4_SET_FIELD(fb->pitches[i], 613 + SCALER_SRC_PITCH)); 614 + } else { 615 + vc4_dlist_write(vc4_state, pitch0); 616 + } 658 617 } 659 618 660 619 /* Colorspace conversion words */ ··· 861 810 struct dma_fence *fence; 862 811 int ret; 863 812 864 - if ((plane->state->fb == state->fb) || !state->fb) 813 + if (!state->fb) 865 814 return 0; 866 815 867 816 bo = to_vc4_bo(&drm_fb_cma_get_gem_obj(state->fb, 0)->base); 868 817 818 + fence = reservation_object_get_excl_rcu(bo->resv); 819 + drm_atomic_set_fence_for_plane(state, fence); 820 + 821 + if (plane->state->fb == state->fb) 822 + return 0; 823 + 869 824 ret = vc4_bo_inc_usecnt(bo); 870 825 if (ret) 871 826 return ret; 872 - 873 - fence = reservation_object_get_excl_rcu(bo->resv); 874 - drm_atomic_set_fence_for_plane(state, fence); 875 827 876 828 return 0; 877 829 } ··· 920 866 case DRM_FORMAT_BGR565: 921 867 case DRM_FORMAT_ARGB1555: 922 868 case DRM_FORMAT_XRGB1555: 923 - return true; 869 + switch (fourcc_mod_broadcom_mod(modifier)) { 870 + case DRM_FORMAT_MOD_LINEAR: 871 + case DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED: 872 + case DRM_FORMAT_MOD_BROADCOM_SAND64: 873 + case DRM_FORMAT_MOD_BROADCOM_SAND128: 874 + return true; 875 + default: 876 + return false; 877 + } 878 + case DRM_FORMAT_NV12: 879 + case DRM_FORMAT_NV21: 880 + switch (fourcc_mod_broadcom_mod(modifier)) { 881 + case DRM_FORMAT_MOD_LINEAR: 882 + case DRM_FORMAT_MOD_BROADCOM_SAND64: 883 + case DRM_FORMAT_MOD_BROADCOM_SAND128: 884 + case DRM_FORMAT_MOD_BROADCOM_SAND256: 885 + return true; 886 + default: 887 + return false; 888 + } 924 889 case DRM_FORMAT_YUV422: 925 890 case DRM_FORMAT_YVU422: 926 891 case DRM_FORMAT_YUV420: 
927 892 case DRM_FORMAT_YVU420: 928 - case DRM_FORMAT_NV12: 929 893 case DRM_FORMAT_NV16: 894 + case DRM_FORMAT_NV61: 930 895 default: 931 896 return (modifier == DRM_FORMAT_MOD_LINEAR); 932 897 } ··· 973 900 unsigned i; 974 901 static const uint64_t modifiers[] = { 975 902 DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED, 903 + DRM_FORMAT_MOD_BROADCOM_SAND128, 904 + DRM_FORMAT_MOD_BROADCOM_SAND64, 905 + DRM_FORMAT_MOD_BROADCOM_SAND256, 976 906 DRM_FORMAT_MOD_LINEAR, 977 907 DRM_FORMAT_MOD_INVALID 978 908 };
+6
drivers/gpu/drm/vc4/vc4_regs.h
··· 1031 1031 #define SCALER_SRC_PITCH_MASK VC4_MASK(15, 0) 1032 1032 #define SCALER_SRC_PITCH_SHIFT 0 1033 1033 1034 + /* PITCH0/1/2 fields for tiled (SAND). */ 1035 + #define SCALER_TILE_SKIP_0_MASK VC4_MASK(18, 16) 1036 + #define SCALER_TILE_SKIP_0_SHIFT 16 1037 + #define SCALER_TILE_HEIGHT_MASK VC4_MASK(15, 0) 1038 + #define SCALER_TILE_HEIGHT_SHIFT 0 1039 + 1034 1040 /* PITCH0 fields for T-tiled. */ 1035 1041 #define SCALER_PITCH0_TILE_WIDTH_L_MASK VC4_MASK(22, 16) 1036 1042 #define SCALER_PITCH0_TILE_WIDTH_L_SHIFT 16
+2 -3
drivers/gpu/drm/vgem/vgem_drv.c
··· 61 61 kfree(vgem_obj); 62 62 } 63 63 64 - static int vgem_gem_fault(struct vm_fault *vmf) 64 + static vm_fault_t vgem_gem_fault(struct vm_fault *vmf) 65 65 { 66 66 struct vm_area_struct *vma = vmf->vma; 67 67 struct drm_vgem_gem_object *obj = vma->vm_private_data; 68 68 /* We don't use vmf->pgoff since that has the fake offset */ 69 69 unsigned long vaddr = vmf->address; 70 - int ret; 70 + vm_fault_t ret = VM_FAULT_SIGBUS; 71 71 loff_t num_pages; 72 72 pgoff_t page_offset; 73 73 page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT; ··· 77 77 if (page_offset > num_pages) 78 78 return VM_FAULT_SIGBUS; 79 79 80 - ret = -ENOENT; 81 80 mutex_lock(&obj->pages_lock); 82 81 if (obj->pages) { 83 82 get_page(obj->pages[page_offset]);
+5 -27
drivers/gpu/drm/virtio/virtgpu_display.c
··· 28 28 #include "virtgpu_drv.h" 29 29 #include <drm/drm_crtc_helper.h> 30 30 #include <drm/drm_atomic_helper.h> 31 + #include <drm/drm_gem_framebuffer_helper.h> 31 32 32 33 #define XRES_MIN 32 33 34 #define YRES_MIN 32 ··· 49 48 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 50 49 }; 51 50 52 - static void virtio_gpu_user_framebuffer_destroy(struct drm_framebuffer *fb) 53 - { 54 - struct virtio_gpu_framebuffer *virtio_gpu_fb 55 - = to_virtio_gpu_framebuffer(fb); 56 - 57 - drm_gem_object_put_unlocked(virtio_gpu_fb->obj); 58 - drm_framebuffer_cleanup(fb); 59 - kfree(virtio_gpu_fb); 60 - } 61 - 62 51 static int 63 52 virtio_gpu_framebuffer_surface_dirty(struct drm_framebuffer *fb, 64 53 struct drm_file *file_priv, ··· 62 71 return virtio_gpu_surface_dirty(virtio_gpu_fb, clips, num_clips); 63 72 } 64 73 65 - static int 66 - virtio_gpu_framebuffer_create_handle(struct drm_framebuffer *fb, 67 - struct drm_file *file_priv, 68 - unsigned int *handle) 69 - { 70 - struct virtio_gpu_framebuffer *virtio_gpu_fb = 71 - to_virtio_gpu_framebuffer(fb); 72 - 73 - return drm_gem_handle_create(file_priv, virtio_gpu_fb->obj, handle); 74 - } 75 - 76 74 static const struct drm_framebuffer_funcs virtio_gpu_fb_funcs = { 77 - .create_handle = virtio_gpu_framebuffer_create_handle, 78 - .destroy = virtio_gpu_user_framebuffer_destroy, 75 + .create_handle = drm_gem_fb_create_handle, 76 + .destroy = drm_gem_fb_destroy, 79 77 .dirty = virtio_gpu_framebuffer_surface_dirty, 80 78 }; 81 79 ··· 77 97 int ret; 78 98 struct virtio_gpu_object *bo; 79 99 80 - vgfb->obj = obj; 100 + vgfb->base.obj[0] = obj; 81 101 82 102 bo = gem_to_virtio_gpu_obj(obj); 83 103 ··· 85 105 86 106 ret = drm_framebuffer_init(dev, &vgfb->base, &virtio_gpu_fb_funcs); 87 107 if (ret) { 88 - vgfb->obj = NULL; 108 + vgfb->base.obj[0] = NULL; 89 109 return ret; 90 110 } 91 111 ··· 282 302 drm_crtc_init_with_planes(dev, crtc, primary, cursor, 283 303 &virtio_gpu_crtc_funcs, NULL); 284 304 drm_crtc_helper_add(crtc, 
&virtio_gpu_crtc_helper_funcs); 285 - primary->crtc = crtc; 286 - cursor->crtc = crtc; 287 305 288 306 drm_connector_init(dev, connector, &virtio_gpu_connector_funcs, 289 307 DRM_MODE_CONNECTOR_VIRTUAL);
-1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 124 124 125 125 struct virtio_gpu_framebuffer { 126 126 struct drm_framebuffer base; 127 - struct drm_gem_object *obj; 128 127 int x1, y1, x2, y2; /* dirty rect */ 129 128 spinlock_t dirty_lock; 130 129 uint32_t hw_res_handle;
+4 -4
drivers/gpu/drm/virtio/virtgpu_fb.c
··· 46 46 int bpp = fb->base.format->cpp[0]; 47 47 int x2, y2; 48 48 unsigned long flags; 49 - struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(fb->obj); 49 + struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(fb->base.obj[0]); 50 50 51 51 if ((width <= 0) || 52 52 (x + width > fb->base.width) || ··· 121 121 unsigned int num_clips) 122 122 { 123 123 struct virtio_gpu_device *vgdev = vgfb->base.dev->dev_private; 124 - struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(vgfb->obj); 124 + struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); 125 125 struct drm_clip_rect norect; 126 126 struct drm_clip_rect *clips_ptr; 127 127 int left, right, top, bottom; ··· 305 305 306 306 drm_fb_helper_unregister_fbi(&vgfbdev->helper); 307 307 308 - if (vgfb->obj) 309 - vgfb->obj = NULL; 308 + if (vgfb->base.obj[0]) 309 + vgfb->base.obj[0] = NULL; 310 310 drm_fb_helper_fini(&vgfbdev->helper); 311 311 drm_framebuffer_cleanup(&vgfb->base); 312 312
+2 -2
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 154 154 155 155 if (plane->state->fb) { 156 156 vgfb = to_virtio_gpu_framebuffer(plane->state->fb); 157 - bo = gem_to_virtio_gpu_obj(vgfb->obj); 157 + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); 158 158 handle = bo->hw_res_handle; 159 159 if (bo->dumb) { 160 160 virtio_gpu_cmd_transfer_to_host_2d ··· 208 208 209 209 if (plane->state->fb) { 210 210 vgfb = to_virtio_gpu_framebuffer(plane->state->fb); 211 - bo = gem_to_virtio_gpu_obj(vgfb->obj); 211 + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); 212 212 handle = bo->hw_res_handle; 213 213 } else { 214 214 handle = 0;
-25
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 439 439 static int vmwgfx_set_config_internal(struct drm_mode_set *set) 440 440 { 441 441 struct drm_crtc *crtc = set->crtc; 442 - struct drm_framebuffer *fb; 443 - struct drm_crtc *tmp; 444 - struct drm_device *dev = set->crtc->dev; 445 442 struct drm_modeset_acquire_ctx ctx; 446 443 int ret; 447 444 448 445 drm_modeset_acquire_init(&ctx, 0); 449 446 450 447 restart: 451 - /* 452 - * NOTE: ->set_config can also disable other crtcs (if we steal all 453 - * connectors from it), hence we need to refcount the fbs across all 454 - * crtcs. Atomic modeset will have saner semantics ... 455 - */ 456 - drm_for_each_crtc(tmp, dev) 457 - tmp->primary->old_fb = tmp->primary->fb; 458 - 459 - fb = set->fb; 460 - 461 448 ret = crtc->funcs->set_config(set, &ctx); 462 - if (ret == 0) { 463 - crtc->primary->crtc = crtc; 464 - crtc->primary->fb = fb; 465 - } 466 - 467 - drm_for_each_crtc(tmp, dev) { 468 - if (tmp->primary->fb) 469 - drm_framebuffer_get(tmp->primary->fb); 470 - if (tmp->primary->old_fb) 471 - drm_framebuffer_put(tmp->primary->old_fb); 472 - tmp->primary->old_fb = NULL; 473 - } 474 449 475 450 if (ret == -EDEADLK) { 476 451 drm_modeset_backoff(&ctx);
+13 -7
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1536 1536 unsigned long requested_bb_mem = 0; 1537 1537 1538 1538 if (dev_priv->active_display_unit == vmw_du_screen_target) { 1539 - if (crtc->primary->fb) { 1540 - int cpp = crtc->primary->fb->pitches[0] / 1541 - crtc->primary->fb->width; 1539 + struct drm_plane *plane = crtc->primary; 1540 + struct drm_plane_state *plane_state; 1541 + 1542 + plane_state = drm_atomic_get_new_plane_state(state, plane); 1543 + 1544 + if (plane_state && plane_state->fb) { 1545 + int cpp = plane_state->fb->format->cpp[0]; 1542 1546 1543 1547 requested_bb_mem += crtc->mode.hdisplay * cpp * 1544 1548 crtc->mode.vdisplay; ··· 2326 2322 } else { 2327 2323 list_for_each_entry(crtc, &dev_priv->dev->mode_config.crtc_list, 2328 2324 head) { 2329 - if (crtc->primary->fb != &framebuffer->base) 2330 - continue; 2331 - units[num_units++] = vmw_crtc_to_du(crtc); 2325 + struct drm_plane *plane = crtc->primary; 2326 + 2327 + if (plane->state->fb == &framebuffer->base) 2328 + units[num_units++] = vmw_crtc_to_du(crtc); 2332 2329 } 2333 2330 } 2334 2331 ··· 2811 2806 struct drm_crtc *crtc) 2812 2807 { 2813 2808 struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 2809 + struct drm_plane *plane = crtc->primary; 2814 2810 struct vmw_framebuffer *vfb; 2815 2811 2816 2812 mutex_lock(&dev_priv->global_kms_state_mutex); ··· 2819 2813 if (!du->is_implicit) 2820 2814 goto out_unlock; 2821 2815 2822 - vfb = vmw_framebuffer_to_vfb(crtc->primary->fb); 2816 + vfb = vmw_framebuffer_to_vfb(plane->state->fb); 2823 2817 WARN_ON_ONCE(dev_priv->num_implicit != 1 && 2824 2818 dev_priv->implicit_fb != vfb); 2825 2819
-14
drivers/gpu/drm/vmwgfx/vmwgfx_prime.c
··· 40 40 */ 41 41 42 42 static int vmw_prime_map_attach(struct dma_buf *dma_buf, 43 - struct device *target_dev, 44 43 struct dma_buf_attachment *attach) 45 44 { 46 45 return -ENOSYS; ··· 71 72 { 72 73 } 73 74 74 - static void *vmw_prime_dmabuf_kmap_atomic(struct dma_buf *dma_buf, 75 - unsigned long page_num) 76 - { 77 - return NULL; 78 - } 79 - 80 - static void vmw_prime_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, 81 - unsigned long page_num, void *addr) 82 - { 83 - 84 - } 85 75 static void *vmw_prime_dmabuf_kmap(struct dma_buf *dma_buf, 86 76 unsigned long page_num) 87 77 { ··· 97 109 .unmap_dma_buf = vmw_prime_unmap_dma_buf, 98 110 .release = NULL, 99 111 .map = vmw_prime_dmabuf_kmap, 100 - .map_atomic = vmw_prime_dmabuf_kmap_atomic, 101 112 .unmap = vmw_prime_dmabuf_kunmap, 102 - .unmap_atomic = vmw_prime_dmabuf_kunmap_atomic, 103 113 .mmap = vmw_prime_dmabuf_mmap, 104 114 .vmap = vmw_prime_dmabuf_vmap, 105 115 .vunmap = vmw_prime_dmabuf_vunmap,
-2
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
··· 527 527 */ 528 528 if (ret != 0) 529 529 DRM_ERROR("Failed to update screen.\n"); 530 - 531 - crtc->primary->fb = plane->state->fb; 532 530 } else { 533 531 /* 534 532 * When disabling a plane, CRTC and FB should always be NULL
+2 -3
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 414 414 static void vmw_stdu_crtc_atomic_enable(struct drm_crtc *crtc, 415 415 struct drm_crtc_state *old_state) 416 416 { 417 + struct drm_plane_state *plane_state = crtc->primary->state; 417 418 struct vmw_private *dev_priv; 418 419 struct vmw_screen_target_display_unit *stdu; 419 420 struct vmw_framebuffer *vfb; ··· 423 422 424 423 stdu = vmw_crtc_to_stdu(crtc); 425 424 dev_priv = vmw_priv(crtc->dev); 426 - fb = crtc->primary->fb; 425 + fb = plane_state->fb; 427 426 428 427 vfb = (fb) ? vmw_framebuffer_to_vfb(fb) : NULL; 429 428 ··· 1286 1285 1, 1, NULL, crtc); 1287 1286 if (ret) 1288 1287 DRM_ERROR("Failed to update STDU.\n"); 1289 - 1290 - crtc->primary->fb = plane->state->fb; 1291 1288 } else { 1292 1289 crtc = old_state->crtc; 1293 1290 stdu = vmw_crtc_to_stdu(crtc);
+1 -1
drivers/gpu/drm/xen/xen_drm_front.c
··· 623 623 if (ret < 0) 624 624 return ret; 625 625 626 - DRM_INFO("Have %d conector(s)\n", cfg->num_connectors); 626 + DRM_INFO("Have %d connector(s)\n", cfg->num_connectors); 627 627 /* Create event channels for all connectors and publish */ 628 628 ret = xen_drm_front_evtchnl_create_all(front_info); 629 629 if (ret < 0)
+2 -2
drivers/gpu/drm/xen/xen_drm_front.h
··· 126 126 127 127 static inline u64 xen_drm_front_fb_to_cookie(struct drm_framebuffer *fb) 128 128 { 129 - return (u64)fb; 129 + return (uintptr_t)fb; 130 130 } 131 131 132 132 static inline u64 xen_drm_front_dbuf_to_cookie(struct drm_gem_object *gem_obj) 133 133 { 134 - return (u64)gem_obj; 134 + return (uintptr_t)gem_obj; 135 135 } 136 136 137 137 int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
+1 -1
drivers/gpu/drm/xen/xen_drm_front_shbuf.c
··· 122 122 } 123 123 124 124 #define xen_page_to_vaddr(page) \ 125 - ((phys_addr_t)pfn_to_kaddr(page_to_xen_pfn(page))) 125 + ((uintptr_t)pfn_to_kaddr(page_to_xen_pfn(page))) 126 126 127 127 static int backend_unmap(struct xen_drm_front_shbuf *buf) 128 128 {
+1 -2
drivers/media/common/videobuf2/videobuf2-dma-contig.c
··· 222 222 enum dma_data_direction dma_dir; 223 223 }; 224 224 225 - static int vb2_dc_dmabuf_ops_attach(struct dma_buf *dbuf, struct device *dev, 225 + static int vb2_dc_dmabuf_ops_attach(struct dma_buf *dbuf, 226 226 struct dma_buf_attachment *dbuf_attach) 227 227 { 228 228 struct vb2_dc_attachment *attach; ··· 358 358 .map_dma_buf = vb2_dc_dmabuf_ops_map, 359 359 .unmap_dma_buf = vb2_dc_dmabuf_ops_unmap, 360 360 .map = vb2_dc_dmabuf_ops_kmap, 361 - .map_atomic = vb2_dc_dmabuf_ops_kmap, 362 361 .vmap = vb2_dc_dmabuf_ops_vmap, 363 362 .mmap = vb2_dc_dmabuf_ops_mmap, 364 363 .release = vb2_dc_dmabuf_ops_release,
+1 -2
drivers/media/common/videobuf2/videobuf2-dma-sg.c
··· 371 371 enum dma_data_direction dma_dir; 372 372 }; 373 373 374 - static int vb2_dma_sg_dmabuf_ops_attach(struct dma_buf *dbuf, struct device *dev, 374 + static int vb2_dma_sg_dmabuf_ops_attach(struct dma_buf *dbuf, 375 375 struct dma_buf_attachment *dbuf_attach) 376 376 { 377 377 struct vb2_dma_sg_attachment *attach; ··· 507 507 .map_dma_buf = vb2_dma_sg_dmabuf_ops_map, 508 508 .unmap_dma_buf = vb2_dma_sg_dmabuf_ops_unmap, 509 509 .map = vb2_dma_sg_dmabuf_ops_kmap, 510 - .map_atomic = vb2_dma_sg_dmabuf_ops_kmap, 511 510 .vmap = vb2_dma_sg_dmabuf_ops_vmap, 512 511 .mmap = vb2_dma_sg_dmabuf_ops_mmap, 513 512 .release = vb2_dma_sg_dmabuf_ops_release,
+1 -2
drivers/media/common/videobuf2/videobuf2-vmalloc.c
··· 209 209 enum dma_data_direction dma_dir; 210 210 }; 211 211 212 - static int vb2_vmalloc_dmabuf_ops_attach(struct dma_buf *dbuf, struct device *dev, 212 + static int vb2_vmalloc_dmabuf_ops_attach(struct dma_buf *dbuf, 213 213 struct dma_buf_attachment *dbuf_attach) 214 214 { 215 215 struct vb2_vmalloc_attachment *attach; ··· 346 346 .map_dma_buf = vb2_vmalloc_dmabuf_ops_map, 347 347 .unmap_dma_buf = vb2_vmalloc_dmabuf_ops_unmap, 348 348 .map = vb2_vmalloc_dmabuf_ops_kmap, 349 - .map_atomic = vb2_vmalloc_dmabuf_ops_kmap, 350 349 .vmap = vb2_vmalloc_dmabuf_ops_vmap, 351 350 .mmap = vb2_vmalloc_dmabuf_ops_mmap, 352 351 .release = vb2_vmalloc_dmabuf_ops_release,
+2 -4
drivers/staging/android/ion/ion.c
··· 201 201 struct list_head list; 202 202 }; 203 203 204 - static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev, 204 + static int ion_dma_buf_attach(struct dma_buf *dmabuf, 205 205 struct dma_buf_attachment *attachment) 206 206 { 207 207 struct ion_dma_buf_attachment *a; ··· 219 219 } 220 220 221 221 a->table = table; 222 - a->dev = dev; 222 + a->dev = attachment->dev; 223 223 INIT_LIST_HEAD(&a->list); 224 224 225 225 attachment->priv = a; ··· 375 375 .detach = ion_dma_buf_detatch, 376 376 .begin_cpu_access = ion_dma_buf_begin_cpu_access, 377 377 .end_cpu_access = ion_dma_buf_end_cpu_access, 378 - .map_atomic = ion_dma_buf_kmap, 379 - .unmap_atomic = ion_dma_buf_kunmap, 380 378 .map = ion_dma_buf_kmap, 381 379 .unmap = ion_dma_buf_kunmap, 382 380 };
-6
drivers/tee/tee_shm.c
··· 80 80 tee_shm_release(shm); 81 81 } 82 82 83 - static void *tee_shm_op_map_atomic(struct dma_buf *dmabuf, unsigned long pgnum) 84 - { 85 - return NULL; 86 - } 87 - 88 83 static void *tee_shm_op_map(struct dma_buf *dmabuf, unsigned long pgnum) 89 84 { 90 85 return NULL; ··· 102 107 .map_dma_buf = tee_shm_op_map_dma_buf, 103 108 .unmap_dma_buf = tee_shm_op_unmap_dma_buf, 104 109 .release = tee_shm_op_release, 105 - .map_atomic = tee_shm_op_map_atomic, 106 110 .map = tee_shm_op_map, 107 111 .mmap = tee_shm_op_mmap, 108 112 };
+11 -3
include/drm/drm_atomic.h
··· 160 160 struct __drm_connnectors_state { 161 161 struct drm_connector *ptr; 162 162 struct drm_connector_state *state, *old_state, *new_state; 163 + /** 164 + * @out_fence_ptr: 165 + * 166 + * User-provided pointer which the kernel uses to return a sync_file 167 + * file descriptor. Used by writeback connectors to signal completion of 168 + * the writeback. 169 + */ 170 + s32 __user *out_fence_ptr; 163 171 }; 164 172 165 173 struct drm_private_obj; ··· 602 594 int __must_check 603 595 drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state, 604 596 struct drm_crtc *crtc); 597 + int drm_atomic_set_writeback_fb_for_connector( 598 + struct drm_connector_state *conn_state, 599 + struct drm_framebuffer *fb); 605 600 int __must_check 606 601 drm_atomic_add_affected_connectors(struct drm_atomic_state *state, 607 602 struct drm_crtc *crtc); 608 603 int __must_check 609 604 drm_atomic_add_affected_planes(struct drm_atomic_state *state, 610 605 struct drm_crtc *crtc); 611 - 612 - void 613 - drm_atomic_clean_old_fb(struct drm_device *dev, unsigned plane_mask, int ret); 614 606 615 607 int __must_check drm_atomic_check_only(struct drm_atomic_state *state); 616 608 int __must_check drm_atomic_commit(struct drm_atomic_state *state);
+21 -5
include/drm/drm_bridge.h
··· 97 97 /** 98 98 * @mode_fixup: 99 99 * 100 - * This callback is used to validate and adjust a mode. The paramater 100 + * This callback is used to validate and adjust a mode. The parameter 101 101 * mode is the display mode that should be fed to the next element in 102 102 * the display chain, either the final &drm_connector or the next 103 103 * &drm_bridge. The parameter adjusted_mode is the input mode the bridge ··· 178 178 * then this would be &drm_encoder_helper_funcs.mode_set. The display 179 179 * pipe (i.e. clocks and timing signals) is off when this function is 180 180 * called. 181 + * 182 + * The adjusted_mode parameter is the mode output by the CRTC for the 183 + * first bridge in the chain. It can be different from the mode 184 + * parameter that contains the desired mode for the connector at the end 185 + * of the bridges chain, for instance when the first bridge in the chain 186 + * performs scaling. The adjusted mode is mostly useful for the first 187 + * bridge in the chain and is likely irrelevant for the other bridges. 188 + * 189 + * For atomic drivers the adjusted_mode is the mode stored in 190 + * &drm_crtc_state.adjusted_mode. 191 + * 192 + * NOTE: 193 + * 194 + * If a need arises to store and access modes adjusted for other 195 + * locations than the connection between the CRTC and the first bridge, 196 + * the DRM framework will have to be extended with DRM bridge states. 
181 197 */ 182 198 void (*mode_set)(struct drm_bridge *bridge, 183 199 struct drm_display_mode *mode, ··· 301 285 struct drm_bridge *previous); 302 286 303 287 bool drm_bridge_mode_fixup(struct drm_bridge *bridge, 304 - const struct drm_display_mode *mode, 305 - struct drm_display_mode *adjusted_mode); 288 + const struct drm_display_mode *mode, 289 + struct drm_display_mode *adjusted_mode); 306 290 enum drm_mode_status drm_bridge_mode_valid(struct drm_bridge *bridge, 307 291 const struct drm_display_mode *mode); 308 292 void drm_bridge_disable(struct drm_bridge *bridge); 309 293 void drm_bridge_post_disable(struct drm_bridge *bridge); 310 294 void drm_bridge_mode_set(struct drm_bridge *bridge, 311 - struct drm_display_mode *mode, 312 - struct drm_display_mode *adjusted_mode); 295 + struct drm_display_mode *mode, 296 + struct drm_display_mode *adjusted_mode); 313 297 void drm_bridge_pre_enable(struct drm_bridge *bridge); 314 298 void drm_bridge_enable(struct drm_bridge *bridge); 315 299
+30
include/drm/drm_connector.h
··· 419 419 enum hdmi_picture_aspect picture_aspect_ratio; 420 420 421 421 /** 422 + * @content_type: Connector property to control the 423 + * HDMI infoframe content type setting. 424 + * The value must be one of the 425 + * %DRM_MODE_CONTENT_TYPE_\* values. 426 + */ 427 + unsigned int content_type; 428 + 429 + /** 422 430 * @scaling_mode: Connector property to control the 423 431 * upscaling, mostly used for built-in panels. 424 432 */ ··· 437 429 * protection. This is most commonly used for HDCP. 438 430 */ 439 431 unsigned int content_protection; 432 + 433 + /** 434 + * @writeback_job: Writeback job for writeback connectors 435 + * 436 + * Holds the framebuffer and out-fence for a writeback connector. As 437 + * the writeback completion may be asynchronous to the normal commit 438 + * cycle, the writeback job lifetime is managed separately from the 439 + * normal atomic state by this object. 440 + * 441 + * See also: drm_writeback_queue_job() and 442 + * drm_writeback_signal_completion() 443 + */ 444 + struct drm_writeback_job *writeback_job; 440 445 }; 441 446 442 447 /** ··· 629 608 * cleaned up by calling the @atomic_destroy_state hook in this 630 609 * structure. 631 610 * 611 + * This callback is mandatory for atomic drivers. 612 + * 632 613 * Atomic drivers which don't subclass &struct drm_connector_state should use 633 614 * drm_atomic_helper_connector_duplicate_state(). Drivers that subclass the 634 615 * state structure to extend it with driver-private state should use ··· 657 634 * 658 635 * Destroy a state duplicated with @atomic_duplicate_state and release 659 636 * or unreference all resources it references 637 + * 638 + * This callback is mandatory for atomic drivers. 
660 639 */ 661 640 void (*atomic_destroy_state)(struct drm_connector *connector, 662 641 struct drm_connector_state *state); ··· 1114 1089 unsigned int num_modes, 1115 1090 const char * const modes[]); 1116 1091 int drm_mode_create_scaling_mode_property(struct drm_device *dev); 1092 + int drm_connector_attach_content_type_property(struct drm_connector *dev); 1117 1093 int drm_connector_attach_scaling_mode_property(struct drm_connector *connector, 1118 1094 u32 scaling_mode_mask); 1119 1095 int drm_connector_attach_content_protection_property( 1120 1096 struct drm_connector *connector); 1121 1097 int drm_mode_create_aspect_ratio_property(struct drm_device *dev); 1098 + int drm_mode_create_content_type_property(struct drm_device *dev); 1099 + void drm_hdmi_avi_infoframe_content_type(struct hdmi_avi_infoframe *frame, 1100 + const struct drm_connector_state *conn_state); 1101 + 1122 1102 int drm_mode_create_suggested_offset_properties(struct drm_device *dev); 1123 1103 1124 1104 int drm_mode_connector_set_path_property(struct drm_connector *connector,
+11 -4
include/drm/drm_crtc.h
··· 134 134 * 135 135 * Internal display timings which can be used by the driver to handle 136 136 * differences between the mode requested by userspace in @mode and what 137 - * is actually programmed into the hardware. It is purely driver 138 - * implementation defined what exactly this adjusted mode means. Usually 139 - * it is used to store the hardware display timings used between the 140 - * CRTC and encoder blocks. 137 + * is actually programmed into the hardware. 138 + * 139 + * For drivers using drm_bridge, this stores hardware display timings 140 + * used between the CRTC and the first bridge. For other drivers, the 141 + * meaning of the adjusted_mode field is purely driver implementation 142 + * defined information, and will usually be used to store the hardware 143 + * display timings used between the CRTC and encoder blocks. 141 144 */ 142 145 struct drm_display_mode adjusted_mode; 143 146 ··· 506 503 * cleaned up by calling the @atomic_destroy_state hook in this 507 504 * structure. 508 505 * 506 + * This callback is mandatory for atomic drivers. 507 + * 509 508 * Atomic drivers which don't subclass &struct drm_crtc_state should use 510 509 * drm_atomic_helper_crtc_duplicate_state(). Drivers that subclass the 511 510 * state structure to extend it with driver-private state should use ··· 534 529 * 535 530 * Destroy a state duplicated with @atomic_duplicate_state and release 536 531 * or unreference all resources it references 532 + * 533 + * This callback is mandatory for atomic drivers. 537 534 */ 538 535 void (*atomic_destroy_state)(struct drm_crtc *crtc, 539 536 struct drm_crtc_state *state);
+7
include/drm/drm_file.h
··· 193 193 unsigned aspect_ratio_allowed:1; 194 194 195 195 /** 196 + * @writeback_connectors: 197 + * 198 + * True if client understands writeback connectors 199 + */ 200 + unsigned writeback_connectors:1; 201 + 202 + /** 196 203 * @is_master: 197 204 * 198 205 * This client is the creator of @master. Protected by struct
+33 -1
include/drm/drm_mm.h
··· 109 109 * Allocates the node from the bottom of the found hole. 110 110 */ 111 111 DRM_MM_INSERT_EVICT, 112 + 113 + /** 114 + * @DRM_MM_INSERT_ONCE: 115 + * 116 + * Only check the first hole for suitability and report -ENOSPC 117 + * immediately otherwise, rather than checking every hole until a 118 + * suitable one is found. Can only be used in conjunction with another 119 + * search method such as DRM_MM_INSERT_HIGH or DRM_MM_INSERT_LOW. 120 + */ 121 + DRM_MM_INSERT_ONCE = BIT(31), 122 + 123 + /** 124 + * @DRM_MM_INSERT_HIGHEST: 125 + * 126 + * Only check the highest hole (the hole with the largest address) and 127 + * insert the node at the top of the hole or report -ENOSPC if 128 + * unsuitable. 129 + * 130 + * Does not search all holes. 131 + */ 132 + DRM_MM_INSERT_HIGHEST = DRM_MM_INSERT_HIGH | DRM_MM_INSERT_ONCE, 133 + 134 + /** 135 + * @DRM_MM_INSERT_LOWEST: 136 + * 137 + * Only check the lowest hole (the hole with the smallest address) and 138 + * insert the node at the bottom of the hole or report -ENOSPC if 139 + * unsuitable. 140 + * 141 + * Does not search all holes. 142 + */ 143 + DRM_MM_INSERT_LOWEST = DRM_MM_INSERT_LOW | DRM_MM_INSERT_ONCE, 112 144 }; 113 145 114 146 /** ··· 205 173 struct drm_mm_node head_node; 206 174 /* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */ 207 175 struct rb_root_cached interval_tree; 208 - struct rb_root holes_size; 176 + struct rb_root_cached holes_size; 209 177 struct rb_root holes_addr; 210 178 211 179 unsigned long scan_active;
+28
include/drm/drm_mode_config.h
··· 727 727 */ 728 728 struct drm_property *aspect_ratio_property; 729 729 /** 730 + * @content_type_property: Optional connector property to control the 731 + * HDMI infoframe content type setting. 732 + */ 733 + struct drm_property *content_type_property; 734 + /** 730 735 * @degamma_lut_property: Optional CRTC property to set the LUT used to 731 736 * convert the framebuffer's colors to linear gamma. 732 737 */ ··· 783 778 * upside-down). 784 779 */ 785 780 struct drm_property *panel_orientation_property; 781 + 782 + /** 783 + * @writeback_fb_id_property: Property for writeback connectors, storing 784 + * the ID of the output framebuffer. 785 + * See also: drm_writeback_connector_init() 786 + */ 787 + struct drm_property *writeback_fb_id_property; 788 + 789 + /** 790 + * @writeback_pixel_formats_property: Property for writeback connectors, 791 + * storing an array of the supported pixel formats for the writeback 792 + * engine (read-only). 793 + * See also: drm_writeback_connector_init() 794 + */ 795 + struct drm_property *writeback_pixel_formats_property; 796 + /** 797 + * @writeback_out_fence_ptr_property: Property for writeback connectors, 798 + * fd pointer representing the outgoing fences for a writeback 799 + * connector. Userspace should provide a pointer to a value of type s32, 800 + * and then cast that pointer to u64. 801 + * See also: drm_writeback_connector_init() 802 + */ 803 + struct drm_property *writeback_out_fence_ptr_property; 786 804 787 805 /* dumb ioctl parameters */ 788 806 uint32_t preferred_depth, prefer_shadow;
+11
include/drm/drm_modeset_helper_vtables.h
··· 974 974 */ 975 975 int (*atomic_check)(struct drm_connector *connector, 976 976 struct drm_connector_state *state); 977 + 978 + /** 979 + * @atomic_commit: 980 + * 981 + * This hook is to be used by drivers implementing writeback connectors 982 + * that need a point when to commit the writeback job to the hardware. 983 + * 984 + * This callback is used by the atomic modeset helpers. 985 + */ 986 + void (*atomic_commit)(struct drm_connector *connector, 987 + struct drm_writeback_job *writeback_job); 977 988 }; 978 989 979 990 /**
+1
include/drm/drm_panel.h
··· 89 89 struct drm_device *drm; 90 90 struct drm_connector *connector; 91 91 struct device *dev; 92 + struct device_link *link; 92 93 93 94 const struct drm_panel_funcs *funcs; 94 95
+8 -1
include/drm/drm_plane.h
··· 288 288 * cleaned up by calling the @atomic_destroy_state hook in this 289 289 * structure. 290 290 * 291 + * This callback is mandatory for atomic drivers. 292 + * 291 293 * Atomic drivers which don't subclass &struct drm_plane_state should use 292 294 * drm_atomic_helper_plane_duplicate_state(). Drivers that subclass the 293 295 * state structure to extend it with driver-private state should use ··· 316 314 * 317 315 * Destroy a state duplicated with @atomic_duplicate_state and release 318 316 * or unreference all resources it references 317 + * 318 + * This callback is mandatory for atomic drivers. 319 319 */ 320 320 void (*atomic_destroy_state)(struct drm_plane *plane, 321 321 struct drm_plane_state *state); ··· 435 431 * This optional hook is used for the DRM to determine if the given 436 432 * format/modifier combination is valid for the plane. This allows the 437 433 * DRM to generate the correct format bitmask (which formats apply to 438 - * which modifier). 434 + * which modifier), and to validate modifiers at atomic_check time. 435 + * 436 + * If not present, then any modifier in the plane's modifier 437 + * list is allowed with any of the plane's formats. 439 438 * 440 439 * Returns: 441 440 *
+1 -5
include/drm/drm_prime.h
··· 82 82 struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev, 83 83 struct dma_buf_export_info *exp_info); 84 84 void drm_gem_dmabuf_release(struct dma_buf *dma_buf); 85 - int drm_gem_map_attach(struct dma_buf *dma_buf, struct device *target_dev, 85 + int drm_gem_map_attach(struct dma_buf *dma_buf, 86 86 struct dma_buf_attachment *attach); 87 87 void drm_gem_map_detach(struct dma_buf *dma_buf, 88 88 struct dma_buf_attachment *attach); ··· 93 93 enum dma_data_direction dir); 94 94 void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf); 95 95 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr); 96 - void *drm_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf, 97 - unsigned long page_num); 98 - void drm_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, 99 - unsigned long page_num, void *addr); 100 96 void *drm_gem_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num); 101 97 void drm_gem_dmabuf_kunmap(struct dma_buf *dma_buf, unsigned long page_num, 102 98 void *addr);
+130
include/drm/drm_writeback.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * (C) COPYRIGHT 2016 ARM Limited. All rights reserved. 4 + * Author: Brian Starkey <brian.starkey@arm.com> 5 + * 6 + * This program is free software and is provided to you under the terms of the 7 + * GNU General Public License version 2 as published by the Free Software 8 + * Foundation, and any use by you of this program is subject to the terms 9 + * of such GNU licence. 10 + */ 11 + 12 + #ifndef __DRM_WRITEBACK_H__ 13 + #define __DRM_WRITEBACK_H__ 14 + #include <drm/drm_connector.h> 15 + #include <drm/drm_encoder.h> 16 + #include <linux/workqueue.h> 17 + 18 + struct drm_writeback_connector { 19 + struct drm_connector base; 20 + 21 + /** 22 + * @encoder: Internal encoder used by the connector to fulfill 23 + * the DRM framework requirements. The users of the 24 + * @drm_writeback_connector control the behaviour of the @encoder 25 + * by passing the @enc_funcs parameter to drm_writeback_connector_init() 26 + * function. 27 + */ 28 + struct drm_encoder encoder; 29 + 30 + /** 31 + * @pixel_formats_blob_ptr: 32 + * 33 + * DRM blob property data for the pixel formats list on writeback 34 + * connectors 35 + * See also drm_writeback_connector_init() 36 + */ 37 + struct drm_property_blob *pixel_formats_blob_ptr; 38 + 39 + /** @job_lock: Protects job_queue */ 40 + spinlock_t job_lock; 41 + 42 + /** 43 + * @job_queue: 44 + * 45 + * Holds a list of a connector's writeback jobs; the last item is the 46 + * most recent. The first item may be either waiting for the hardware 47 + * to begin writing, or currently being written. 48 + * 49 + * See also: drm_writeback_queue_job() and 50 + * drm_writeback_signal_completion() 51 + */ 52 + struct list_head job_queue; 53 + 54 + /** 55 + * @fence_context: 56 + * 57 + * timeline context used for fence operations. 58 + */ 59 + unsigned int fence_context; 60 + /** 61 + * @fence_lock: 62 + * 63 + * spinlock to protect the fences in the fence_context. 
64 + */ 65 + spinlock_t fence_lock; 66 + /** 67 + * @fence_seqno: 68 + * 69 + * Seqno variable used as monotonic counter for the fences 70 + * created on the connector's timeline. 71 + */ 72 + unsigned long fence_seqno; 73 + /** 74 + * @timeline_name: 75 + * 76 + * The name of the connector's fence timeline. 77 + */ 78 + char timeline_name[32]; 79 + }; 80 + 81 + struct drm_writeback_job { 82 + /** 83 + * @cleanup_work: 84 + * 85 + * Used to allow drm_writeback_signal_completion to defer dropping the 86 + * framebuffer reference to a workqueue 87 + */ 88 + struct work_struct cleanup_work; 89 + 90 + /** 91 + * @list_entry: 92 + * 93 + * List item for the writeback connector's @job_queue 94 + */ 95 + struct list_head list_entry; 96 + 97 + /** 98 + * @fb: 99 + * 100 + * Framebuffer to be written to by the writeback connector. Do not set 101 + * directly, use drm_atomic_set_writeback_fb_for_connector() 102 + */ 103 + struct drm_framebuffer *fb; 104 + 105 + /** 106 + * @out_fence: 107 + * 108 + * Fence which will signal once the writeback has completed 109 + */ 110 + struct dma_fence *out_fence; 111 + }; 112 + 113 + int drm_writeback_connector_init(struct drm_device *dev, 114 + struct drm_writeback_connector *wb_connector, 115 + const struct drm_connector_funcs *con_funcs, 116 + const struct drm_encoder_helper_funcs *enc_helper_funcs, 117 + const u32 *formats, int n_formats); 118 + 119 + void drm_writeback_queue_job(struct drm_writeback_connector *wb_connector, 120 + struct drm_writeback_job *job); 121 + 122 + void drm_writeback_cleanup_job(struct drm_writeback_job *job); 123 + 124 + void 125 + drm_writeback_signal_completion(struct drm_writeback_connector *wb_connector, 126 + int status); 127 + 128 + struct dma_fence * 129 + drm_writeback_get_out_fence(struct drm_writeback_connector *wb_connector); 130 + #endif
+8 -13
include/linux/dma-buf.h
··· 39 39 40 40 /** 41 41 * struct dma_buf_ops - operations possible on struct dma_buf 42 - * @map_atomic: maps a page from the buffer into kernel address 42 + * @map_atomic: [optional] maps a page from the buffer into kernel address 43 43 * space, users may not block until the subsequent unmap call. 44 44 * This callback must not sleep. 45 45 * @unmap_atomic: [optional] unmaps a atomically mapped page from the buffer. 46 46 * This Callback must not sleep. 47 - * @map: maps a page from the buffer into kernel address space. 47 + * @map: [optional] maps a page from the buffer into kernel address space. 48 48 * @unmap: [optional] unmaps a page from the buffer. 49 49 * @vmap: [optional] creates a virtual mapping for the buffer into kernel 50 50 * address space. Same restrictions as for vmap and friends apply. ··· 55 55 * @attach: 56 56 * 57 57 * This is called from dma_buf_attach() to make sure that a given 58 - * &device can access the provided &dma_buf. Exporters which support 59 - * buffer objects in special locations like VRAM or device-specific 60 - * carveout areas should check whether the buffer could be move to 61 - * system memory (or directly accessed by the provided device), and 62 - * otherwise need to fail the attach operation. 58 + * &dma_buf_attachment.dev can access the provided &dma_buf. Exporters 59 + * which support buffer objects in special locations like VRAM or 60 + * device-specific carveout areas should check whether the buffer could 61 + * be moved to system memory (or directly accessed by the provided 62 + * device), and otherwise need to fail the attach operation. 63 63 * 64 64 * The exporter should also in general check whether the current 65 65 * allocation fullfills the DMA constraints of the new device. If this ··· 77 77 * to signal that backing storage is already allocated and incompatible 78 78 * with the requirements of requesting device. 
79 79 */ 80 - int (*attach)(struct dma_buf *, struct device *, 81 - struct dma_buf_attachment *); 80 + int (*attach)(struct dma_buf *, struct dma_buf_attachment *); 82 81 83 82 /** 84 83 * @detach: ··· 205 206 * to be restarted. 206 207 */ 207 208 int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction); 208 - void *(*map_atomic)(struct dma_buf *, unsigned long); 209 - void (*unmap_atomic)(struct dma_buf *, unsigned long, void *); 210 209 void *(*map)(struct dma_buf *, unsigned long); 211 210 void (*unmap)(struct dma_buf *, unsigned long, void *); 212 211 ··· 392 395 enum dma_data_direction dir); 393 396 int dma_buf_end_cpu_access(struct dma_buf *dma_buf, 394 397 enum dma_data_direction dir); 395 - void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long); 396 - void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *); 397 398 void *dma_buf_kmap(struct dma_buf *, unsigned long); 398 399 void dma_buf_kunmap(struct dma_buf *, unsigned long, void *); 399 400
+9
include/uapi/drm/drm.h
··· 687 687 */ 688 688 #define DRM_CLIENT_CAP_ASPECT_RATIO 4 689 689 690 + /** 691 + * DRM_CLIENT_CAP_WRITEBACK_CONNECTORS 692 + * 693 + * If set to 1, the DRM core will expose special connectors to be used for 694 + * writing back to memory the scene setup in the commit. Depends on client 695 + * also supporting DRM_CLIENT_CAP_ATOMIC 696 + */ 697 + #define DRM_CLIENT_CAP_WRITEBACK_CONNECTORS 5 698 + 690 699 /** DRM_IOCTL_SET_CLIENT_CAP ioctl argument type */ 691 700 struct drm_set_client_cap { 692 701 __u64 capability;
+59
include/uapi/drm/drm_fourcc.h
··· 385 385 fourcc_mod_code(NVIDIA, 0x15) 386 386 387 387 /* 388 + * Some Broadcom modifiers take parameters, for example the number of 389 + * vertical lines in the image. Reserve the lower 8 bits for the 390 + * modifier type, and the next 48 bits for parameters. Top 8 bits are 391 + * the vendor code. 392 + */ 393 + #define __fourcc_mod_broadcom_param_shift 8 394 + #define __fourcc_mod_broadcom_param_bits 48 395 + #define fourcc_mod_broadcom_code(val, params) \ 396 + fourcc_mod_code(BROADCOM, ((((__u64)params) << __fourcc_mod_broadcom_param_shift) | val)) 397 + #define fourcc_mod_broadcom_param(m) \ 398 + ((int)(((m) >> __fourcc_mod_broadcom_param_shift) & \ 399 + ((1ULL << __fourcc_mod_broadcom_param_bits) - 1))) 400 + #define fourcc_mod_broadcom_mod(m) \ 401 + ((m) & ~(((1ULL << __fourcc_mod_broadcom_param_bits) - 1) << \ 402 + __fourcc_mod_broadcom_param_shift)) 403 + 404 + /* 388 405 * Broadcom VC4 "T" format 389 406 * 390 407 * This is the primary layout that the V3D GPU can texture from (it ··· 421 404 * tiles) or right-to-left (odd rows of 4k tiles). 422 405 */ 423 406 #define DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED fourcc_mod_code(BROADCOM, 1) 407 + 408 + /* 409 + * Broadcom SAND format 410 + * 411 + * This is the native format that the H.264 codec block uses. For VC4 412 + * HVS, it is only valid for H.264 (NV12/21) and RGBA modes. 413 + * 414 + * The image can be considered to be split into columns, and the 415 + * columns are placed consecutively into memory. The width of those 416 + * columns can be either 32, 64, 128, or 256 pixels, but in practice 417 + * only 128 pixel columns are used. 418 + * 419 + * The pitch between the start of each column is set to optimally 420 + * switch between SDRAM banks. This is passed as the number of lines 421 + * of column width in the modifier (we can't use the stride value due 422 + * to various core checks that look at it, so you should set the 423 + * stride to width*cpp). 
424 + * 425 + * Note that the column height for this format modifier is the same 426 + * for all of the planes, assuming that each column contains both Y 427 + * and UV. Some SAND-using hardware stores UV in a separate tiled 428 + * image from Y to reduce the column height, which is not supported 429 + * with these modifiers. 430 + */ 431 + 432 + #define DRM_FORMAT_MOD_BROADCOM_SAND32_COL_HEIGHT(v) \ 433 + fourcc_mod_broadcom_code(2, v) 434 + #define DRM_FORMAT_MOD_BROADCOM_SAND64_COL_HEIGHT(v) \ 435 + fourcc_mod_broadcom_code(3, v) 436 + #define DRM_FORMAT_MOD_BROADCOM_SAND128_COL_HEIGHT(v) \ 437 + fourcc_mod_broadcom_code(4, v) 438 + #define DRM_FORMAT_MOD_BROADCOM_SAND256_COL_HEIGHT(v) \ 439 + fourcc_mod_broadcom_code(5, v) 440 + 441 + #define DRM_FORMAT_MOD_BROADCOM_SAND32 \ 442 + DRM_FORMAT_MOD_BROADCOM_SAND32_COL_HEIGHT(0) 443 + #define DRM_FORMAT_MOD_BROADCOM_SAND64 \ 444 + DRM_FORMAT_MOD_BROADCOM_SAND64_COL_HEIGHT(0) 445 + #define DRM_FORMAT_MOD_BROADCOM_SAND128 \ 446 + DRM_FORMAT_MOD_BROADCOM_SAND128_COL_HEIGHT(0) 447 + #define DRM_FORMAT_MOD_BROADCOM_SAND256 \ 448 + DRM_FORMAT_MOD_BROADCOM_SAND256_COL_HEIGHT(0) 424 449 425 450 #if defined(__cplusplus) 426 451 }
+8
include/uapi/drm/drm_mode.h
··· 96 96 #define DRM_MODE_PICTURE_ASPECT_64_27 3 97 97 #define DRM_MODE_PICTURE_ASPECT_256_135 4 98 98 99 + /* Content type options */ 100 + #define DRM_MODE_CONTENT_TYPE_NO_DATA 0 101 + #define DRM_MODE_CONTENT_TYPE_GRAPHICS 1 102 + #define DRM_MODE_CONTENT_TYPE_PHOTO 2 103 + #define DRM_MODE_CONTENT_TYPE_CINEMA 3 104 + #define DRM_MODE_CONTENT_TYPE_GAME 4 105 + 99 106 /* Aspect ratio flag bitmask (4 bits 22:19) */ 100 107 #define DRM_MODE_FLAG_PIC_AR_MASK (0x0F<<19) 101 108 #define DRM_MODE_FLAG_PIC_AR_NONE \ ··· 351 344 #define DRM_MODE_CONNECTOR_VIRTUAL 15 352 345 #define DRM_MODE_CONNECTOR_DSI 16 353 346 #define DRM_MODE_CONNECTOR_DPI 17 347 + #define DRM_MODE_CONNECTOR_WRITEBACK 18 354 348 355 349 struct drm_mode_get_connector { 356 350