Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2022-10-20' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 6.2:

UAPI Changes:
- Documentation for page-flip flags

Cross-subsystem Changes:
- dma-buf: Add unlocked variants of the vmapping and attachment-mapping
functions

Core Changes:
- atomic-helpers: CRTC primary plane test fixes
- connector: TV API consistency improvements, cmdline parsing
improvements
- crtc-helpers: Introduce drm_crtc_helper_atomic_check() helper
- edid: Fixes for HFVSDB parsing
- fourcc: Addition of the Vivante tiled modifier
- makefile: Sort and reorganize the object files
- mode_config: Remove fb_base from drm_mode_config_funcs
- sched: Add a module parameter to change the scheduling policy,
refcounting fix for fences
- tests: Sort the KUnit tests in the Makefile, improvements to the
DP-MST tests
- ttm: Remove unnecessary drm_mm_clean() call

Driver Changes:
- New driver: ofdrm
- Move all drivers to a common dma-buf locking convention
- bridge:
- adv7533: Remove dynamic lane switching
- it6505: Runtime PM support
- ps8640: Handle AUX defer messages
- tc358775: Drop soft-reset over I2C
- ast: Atomic Gamma LUT Support, Convert to SHMEM, various
improvements
- lcdif: Support for YUV planes
- mgag200: Fix PLL Setup on some revisions
- udl: Modesetting improvements, hot-unplug support
- vc4: Fix support for PAL-M

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20221020072405.g3o4hxuk75gmeumw@houat

+3854 -1358
+6
Documentation/driver-api/dma-buf.rst
··· 119 119 120 120 .. kernel-doc:: include/uapi/linux/dma-buf.h 121 121 122 + DMA-BUF locking convention 123 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 124 + 125 + .. kernel-doc:: drivers/dma-buf/dma-buf.c 126 + :doc: locking convention 127 + 122 128 Kernel Functions and Structures Reference 123 129 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 124 130
+1
MAINTAINERS
··· 6692 6692 S: Maintained 6693 6693 T: git git://anongit.freedesktop.org/drm/drm-misc 6694 6694 F: drivers/gpu/drm/drm_aperture.c 6695 + F: drivers/gpu/drm/tiny/ofdrm.c 6695 6696 F: drivers/gpu/drm/tiny/simpledrm.c 6696 6697 F: drivers/video/aperture.c 6697 6698 F: include/drm/drm_aperture.h
+185 -31
drivers/dma-buf/dma-buf.c
··· 657 657 658 658 dmabuf->file = file; 659 659 660 - mutex_init(&dmabuf->lock); 661 660 INIT_LIST_HEAD(&dmabuf->attachments); 662 661 663 662 mutex_lock(&db_list.lock); ··· 795 796 } 796 797 797 798 /** 799 + * DOC: locking convention 800 + * 801 + * In order to avoid deadlock situations between dma-buf exports and importers, 802 + * all dma-buf API users must follow the common dma-buf locking convention. 803 + * 804 + * Convention for importers 805 + * 806 + * 1. Importers must hold the dma-buf reservation lock when calling these 807 + * functions: 808 + * 809 + * - dma_buf_pin() 810 + * - dma_buf_unpin() 811 + * - dma_buf_map_attachment() 812 + * - dma_buf_unmap_attachment() 813 + * - dma_buf_vmap() 814 + * - dma_buf_vunmap() 815 + * 816 + * 2. Importers must not hold the dma-buf reservation lock when calling these 817 + * functions: 818 + * 819 + * - dma_buf_attach() 820 + * - dma_buf_dynamic_attach() 821 + * - dma_buf_detach() 822 + * - dma_buf_export() 823 + * - dma_buf_fd() 824 + * - dma_buf_get() 825 + * - dma_buf_put() 826 + * - dma_buf_mmap() 827 + * - dma_buf_begin_cpu_access() 828 + * - dma_buf_end_cpu_access() 829 + * - dma_buf_map_attachment_unlocked() 830 + * - dma_buf_unmap_attachment_unlocked() 831 + * - dma_buf_vmap_unlocked() 832 + * - dma_buf_vunmap_unlocked() 833 + * 834 + * Convention for exporters 835 + * 836 + * 1. These &dma_buf_ops callbacks are invoked with unlocked dma-buf 837 + * reservation and exporter can take the lock: 838 + * 839 + * - &dma_buf_ops.attach() 840 + * - &dma_buf_ops.detach() 841 + * - &dma_buf_ops.release() 842 + * - &dma_buf_ops.begin_cpu_access() 843 + * - &dma_buf_ops.end_cpu_access() 844 + * 845 + * 2.
These &dma_buf_ops callbacks are invoked with locked dma-buf 846 + * reservation and exporter can't take the lock: 847 + * 848 + * - &dma_buf_ops.pin() 849 + * - &dma_buf_ops.unpin() 850 + * - &dma_buf_ops.map_dma_buf() 851 + * - &dma_buf_ops.unmap_dma_buf() 852 + * - &dma_buf_ops.mmap() 853 + * - &dma_buf_ops.vmap() 854 + * - &dma_buf_ops.vunmap() 855 + * 856 + * 3. Exporters must hold the dma-buf reservation lock when calling these 857 + * functions: 858 + * 859 + * - dma_buf_move_notify() 860 + */ 861 + 862 + /** 798 863 * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list 799 864 * @dmabuf: [in] buffer to attach device to. 800 865 * @dev: [in] device to be attached. ··· 922 859 dma_buf_is_dynamic(dmabuf)) { 923 860 struct sg_table *sgt; 924 861 862 + dma_resv_lock(attach->dmabuf->resv, NULL); 925 863 if (dma_buf_is_dynamic(attach->dmabuf)) { 926 - dma_resv_lock(attach->dmabuf->resv, NULL); 927 864 ret = dmabuf->ops->pin(attach); 928 865 if (ret) 929 866 goto err_unlock; ··· 936 873 ret = PTR_ERR(sgt); 937 874 goto err_unpin; 938 875 } 939 - if (dma_buf_is_dynamic(attach->dmabuf)) 940 - dma_resv_unlock(attach->dmabuf->resv); 876 + dma_resv_unlock(attach->dmabuf->resv); 941 877 attach->sgt = sgt; 942 878 attach->dir = DMA_BIDIRECTIONAL; 943 879 } ··· 952 890 dmabuf->ops->unpin(attach); 953 891 954 892 err_unlock: 955 - if (dma_buf_is_dynamic(attach->dmabuf)) 956 - dma_resv_unlock(attach->dmabuf->resv); 893 + dma_resv_unlock(attach->dmabuf->resv); 957 894 958 895 dma_buf_detach(dmabuf, attach); 959 896 return ERR_PTR(ret); ··· 998 937 if (WARN_ON(!dmabuf || !attach)) 999 938 return; 1000 939 940 + dma_resv_lock(attach->dmabuf->resv, NULL); 941 + 1001 942 if (attach->sgt) { 1002 - if (dma_buf_is_dynamic(attach->dmabuf)) 1003 - dma_resv_lock(attach->dmabuf->resv, NULL); 1004 943 1005 944 __unmap_dma_buf(attach, attach->sgt, attach->dir); 1006 945 1007 - if (dma_buf_is_dynamic(attach->dmabuf)) { 946 + if (dma_buf_is_dynamic(attach->dmabuf)) 1008 
947 dmabuf->ops->unpin(attach); 1009 - dma_resv_unlock(attach->dmabuf->resv); 1010 - } 1011 948 } 1012 - 1013 - dma_resv_lock(dmabuf->resv, NULL); 1014 949 list_del(&attach->node); 950 + 1015 951 dma_resv_unlock(dmabuf->resv); 952 + 1016 953 if (dmabuf->ops->detach) 1017 954 dmabuf->ops->detach(dmabuf, attach); 1018 955 ··· 1101 1042 if (WARN_ON(!attach || !attach->dmabuf)) 1102 1043 return ERR_PTR(-EINVAL); 1103 1044 1104 - if (dma_buf_attachment_is_dynamic(attach)) 1105 - dma_resv_assert_held(attach->dmabuf->resv); 1045 + dma_resv_assert_held(attach->dmabuf->resv); 1106 1046 1107 1047 if (attach->sgt) { 1108 1048 /* ··· 1116 1058 } 1117 1059 1118 1060 if (dma_buf_is_dynamic(attach->dmabuf)) { 1119 - dma_resv_assert_held(attach->dmabuf->resv); 1120 1061 if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { 1121 1062 r = attach->dmabuf->ops->pin(attach); 1122 1063 if (r) ··· 1158 1101 EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); 1159 1102 1160 1103 /** 1104 + * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment; 1105 + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the 1106 + * dma_buf_ops. 1107 + * @attach: [in] attachment whose scatterlist is to be returned 1108 + * @direction: [in] direction of DMA transfer 1109 + * 1110 + * Unlocked variant of dma_buf_map_attachment(). 
1111 + */ 1112 + struct sg_table * 1113 + dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, 1114 + enum dma_data_direction direction) 1115 + { 1116 + struct sg_table *sg_table; 1117 + 1118 + might_sleep(); 1119 + 1120 + if (WARN_ON(!attach || !attach->dmabuf)) 1121 + return ERR_PTR(-EINVAL); 1122 + 1123 + dma_resv_lock(attach->dmabuf->resv, NULL); 1124 + sg_table = dma_buf_map_attachment(attach, direction); 1125 + dma_resv_unlock(attach->dmabuf->resv); 1126 + 1127 + return sg_table; 1128 + } 1129 + EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF); 1130 + 1131 + /** 1161 1132 * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might 1162 1133 * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of 1163 1134 * dma_buf_ops. ··· 1204 1119 if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) 1205 1120 return; 1206 1121 1207 - if (dma_buf_attachment_is_dynamic(attach)) 1208 - dma_resv_assert_held(attach->dmabuf->resv); 1122 + dma_resv_assert_held(attach->dmabuf->resv); 1209 1123 1210 1124 if (attach->sgt == sg_table) 1211 1125 return; 1212 - 1213 - if (dma_buf_is_dynamic(attach->dmabuf)) 1214 - dma_resv_assert_held(attach->dmabuf->resv); 1215 1126 1216 1127 __unmap_dma_buf(attach, sg_table, direction); 1217 1128 ··· 1216 1135 dma_buf_unpin(attach); 1217 1136 } 1218 1137 EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); 1138 + 1139 + /** 1140 + * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might 1141 + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of 1142 + * dma_buf_ops. 1143 + * @attach: [in] attachment to unmap buffer from 1144 + * @sg_table: [in] scatterlist info of the buffer to unmap 1145 + * @direction: [in] direction of DMA transfer 1146 + * 1147 + * Unlocked variant of dma_buf_unmap_attachment(). 
1148 + */ 1149 + void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, 1150 + struct sg_table *sg_table, 1151 + enum dma_data_direction direction) 1152 + { 1153 + might_sleep(); 1154 + 1155 + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) 1156 + return; 1157 + 1158 + dma_resv_lock(attach->dmabuf->resv, NULL); 1159 + dma_buf_unmap_attachment(attach, sg_table, direction); 1160 + dma_resv_unlock(attach->dmabuf->resv); 1161 + } 1162 + EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF); 1219 1163 1220 1164 /** 1221 1165 * dma_buf_move_notify - notify attachments that DMA-buf is moving ··· 1453 1347 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, 1454 1348 unsigned long pgoff) 1455 1349 { 1350 + int ret; 1351 + 1456 1352 if (WARN_ON(!dmabuf || !vma)) 1457 1353 return -EINVAL; 1458 1354 ··· 1475 1367 vma_set_file(vma, dmabuf->file); 1476 1368 vma->vm_pgoff = pgoff; 1477 1369 1478 - return dmabuf->ops->mmap(dmabuf, vma); 1370 + dma_resv_lock(dmabuf->resv, NULL); 1371 + ret = dmabuf->ops->mmap(dmabuf, vma); 1372 + dma_resv_unlock(dmabuf->resv); 1373 + 1374 + return ret; 1479 1375 } 1480 1376 EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); 1481 1377 ··· 1502 1390 int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) 1503 1391 { 1504 1392 struct iosys_map ptr; 1505 - int ret = 0; 1393 + int ret; 1506 1394 1507 1395 iosys_map_clear(map); 1508 1396 1509 1397 if (WARN_ON(!dmabuf)) 1510 1398 return -EINVAL; 1511 1399 1400 + dma_resv_assert_held(dmabuf->resv); 1401 + 1512 1402 if (!dmabuf->ops->vmap) 1513 1403 return -EINVAL; 1514 1404 1515 - mutex_lock(&dmabuf->lock); 1516 1405 if (dmabuf->vmapping_counter) { 1517 1406 dmabuf->vmapping_counter++; 1518 1407 BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); 1519 1408 *map = dmabuf->vmap_ptr; 1520 - goto out_unlock; 1409 + return 0; 1521 1410 } 1522 1411 1523 1412 BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); 1524 1413 1525 1414 ret = dmabuf->ops->vmap(dmabuf, &ptr); 
1526 1415 if (WARN_ON_ONCE(ret)) 1527 - goto out_unlock; 1416 + return ret; 1528 1417 1529 1418 dmabuf->vmap_ptr = ptr; 1530 1419 dmabuf->vmapping_counter = 1; 1531 1420 1532 1421 *map = dmabuf->vmap_ptr; 1533 1422 1534 - out_unlock: 1535 - mutex_unlock(&dmabuf->lock); 1536 - return ret; 1423 + return 0; 1537 1424 } 1538 1425 EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); 1426 + 1427 + /** 1428 + * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel 1429 + * address space. Same restrictions as for vmap and friends apply. 1430 + * @dmabuf: [in] buffer to vmap 1431 + * @map: [out] returns the vmap pointer 1432 + * 1433 + * Unlocked version of dma_buf_vmap() 1434 + * 1435 + * Returns 0 on success, or a negative errno code otherwise. 1436 + */ 1437 + int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map) 1438 + { 1439 + int ret; 1440 + 1441 + iosys_map_clear(map); 1442 + 1443 + if (WARN_ON(!dmabuf)) 1444 + return -EINVAL; 1445 + 1446 + dma_resv_lock(dmabuf->resv, NULL); 1447 + ret = dma_buf_vmap(dmabuf, map); 1448 + dma_resv_unlock(dmabuf->resv); 1449 + 1450 + return ret; 1451 + } 1452 + EXPORT_SYMBOL_NS_GPL(dma_buf_vmap_unlocked, DMA_BUF); 1539 1453 1540 1454 /** 1541 1455 * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. 
··· 1573 1435 if (WARN_ON(!dmabuf)) 1574 1436 return; 1575 1437 1438 + dma_resv_assert_held(dmabuf->resv); 1439 + 1576 1440 BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); 1577 1441 BUG_ON(dmabuf->vmapping_counter == 0); 1578 1442 BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); 1579 1443 1580 - mutex_lock(&dmabuf->lock); 1581 1444 if (--dmabuf->vmapping_counter == 0) { 1582 1445 if (dmabuf->ops->vunmap) 1583 1446 dmabuf->ops->vunmap(dmabuf, map); 1584 1447 iosys_map_clear(&dmabuf->vmap_ptr); 1585 1448 } 1586 - mutex_unlock(&dmabuf->lock); 1587 1449 } 1588 1450 EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); 1451 + 1452 + /** 1453 + * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap. 1454 + * @dmabuf: [in] buffer to vunmap 1455 + * @map: [in] vmap pointer to vunmap 1456 + */ 1457 + void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map) 1458 + { 1459 + if (WARN_ON(!dmabuf)) 1460 + return; 1461 + 1462 + dma_resv_lock(dmabuf->resv, NULL); 1463 + dma_buf_vunmap(dmabuf, map); 1464 + dma_resv_unlock(dmabuf->resv); 1465 + } 1466 + EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF); 1589 1467 1590 1468 #ifdef CONFIG_DEBUG_FS 1591 1469 static int dma_buf_debug_show(struct seq_file *s, void *unused)
+77 -30
drivers/gpu/drm/Makefile
··· 5 5 6 6 CFLAGS-$(CONFIG_DRM_USE_DYNAMIC_DEBUG) += -DDYNAMIC_DEBUG_MODULE 7 7 8 - drm-y := drm_aperture.o drm_auth.o drm_cache.o \ 9 - drm_file.o drm_gem.o drm_ioctl.o \ 10 - drm_drv.o \ 11 - drm_sysfs.o drm_mm.o \ 12 - drm_crtc.o drm_fourcc.o drm_modes.o drm_edid.o drm_displayid.o \ 13 - drm_trace_points.o drm_prime.o \ 14 - drm_vma_manager.o \ 15 - drm_modeset_lock.o drm_atomic.o drm_bridge.o \ 16 - drm_framebuffer.o drm_connector.o drm_blend.o \ 17 - drm_encoder.o drm_mode_object.o drm_property.o \ 18 - drm_plane.o drm_color_mgmt.o drm_print.o \ 19 - drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \ 20 - drm_syncobj.o drm_lease.o drm_writeback.o drm_client.o \ 21 - drm_client_modeset.o drm_atomic_uapi.o \ 22 - drm_managed.o drm_vblank_work.o 23 - drm-$(CONFIG_DRM_LEGACY) += drm_agpsupport.o drm_bufs.o drm_context.o drm_dma.o \ 24 - drm_hashtab.o drm_irq.o drm_legacy_misc.o drm_lock.o \ 25 - drm_memory.o drm_scatter.o drm_vm.o 8 + drm-y := \ 9 + drm_aperture.o \ 10 + drm_atomic.o \ 11 + drm_atomic_uapi.o \ 12 + drm_auth.o \ 13 + drm_blend.o \ 14 + drm_bridge.o \ 15 + drm_cache.o \ 16 + drm_client.o \ 17 + drm_client_modeset.o \ 18 + drm_color_mgmt.o \ 19 + drm_connector.o \ 20 + drm_crtc.o \ 21 + drm_displayid.o \ 22 + drm_drv.o \ 23 + drm_dumb_buffers.o \ 24 + drm_edid.o \ 25 + drm_encoder.o \ 26 + drm_file.o \ 27 + drm_fourcc.o \ 28 + drm_framebuffer.o \ 29 + drm_gem.o \ 30 + drm_ioctl.o \ 31 + drm_lease.o \ 32 + drm_managed.o \ 33 + drm_mm.o \ 34 + drm_mode_config.o \ 35 + drm_mode_object.o \ 36 + drm_modes.o \ 37 + drm_modeset_lock.o \ 38 + drm_plane.o \ 39 + drm_prime.o \ 40 + drm_print.o \ 41 + drm_property.o \ 42 + drm_syncobj.o \ 43 + drm_sysfs.o \ 44 + drm_trace_points.o \ 45 + drm_vblank.o \ 46 + drm_vblank_work.o \ 47 + drm_vma_manager.o \ 48 + drm_writeback.o 49 + drm-$(CONFIG_DRM_LEGACY) += \ 50 + drm_agpsupport.o \ 51 + drm_bufs.o \ 52 + drm_context.o \ 53 + drm_dma.o \ 54 + drm_hashtab.o \ 55 + drm_irq.o \ 56 + drm_legacy_misc.o \ 57 + 
drm_lock.o \ 58 + drm_memory.o \ 59 + drm_scatter.o \ 60 + drm_vm.o 26 61 drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o 27 62 drm-$(CONFIG_COMPAT) += drm_ioc32.o 28 63 drm-$(CONFIG_DRM_PANEL) += drm_panel.o 29 64 drm-$(CONFIG_OF) += drm_of.o 30 65 drm-$(CONFIG_PCI) += drm_pci.o 31 - drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o 66 + drm-$(CONFIG_DEBUG_FS) += \ 67 + drm_debugfs.o \ 68 + drm_debugfs_crc.o 32 69 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o 33 - drm-$(CONFIG_DRM_PRIVACY_SCREEN) += drm_privacy_screen.o drm_privacy_screen_x86.o 70 + drm-$(CONFIG_DRM_PRIVACY_SCREEN) += \ 71 + drm_privacy_screen.o \ 72 + drm_privacy_screen_x86.o 34 73 obj-$(CONFIG_DRM) += drm.o 35 74 36 75 obj-$(CONFIG_DRM_NOMODESET) += drm_nomodeset.o ··· 98 59 # Modesetting helpers 99 60 # 100 61 101 - drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o \ 102 - drm_encoder_slave.o drm_flip_work.o \ 103 - drm_probe_helper.o \ 104 - drm_plane_helper.o drm_atomic_helper.o \ 105 - drm_kms_helper_common.o \ 106 - drm_simple_kms_helper.o drm_modeset_helper.o \ 107 - drm_gem_atomic_helper.o \ 108 - drm_gem_framebuffer_helper.o \ 109 - drm_atomic_state_helper.o drm_damage_helper.o \ 110 - drm_format_helper.o drm_self_refresh_helper.o drm_rect.o 62 + drm_kms_helper-y := \ 63 + drm_atomic_helper.o \ 64 + drm_atomic_state_helper.o \ 65 + drm_bridge_connector.o \ 66 + drm_crtc_helper.o \ 67 + drm_damage_helper.o \ 68 + drm_encoder_slave.o \ 69 + drm_flip_work.o \ 70 + drm_format_helper.o \ 71 + drm_gem_atomic_helper.o \ 72 + drm_gem_framebuffer_helper.o \ 73 + drm_kms_helper_common.o \ 74 + drm_modeset_helper.o \ 75 + drm_plane_helper.o \ 76 + drm_probe_helper.o \ 77 + drm_rect.o \ 78 + drm_self_refresh_helper.o \ 79 + drm_simple_kms_helper.o 111 80 drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o 112 81 drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o 113 82 obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
-2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
··· 498 498 adev_to_drm(adev)->mode_config.preferred_depth = 24; 499 499 adev_to_drm(adev)->mode_config.prefer_shadow = 1; 500 500 501 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 502 - 503 501 r = amdgpu_display_modeset_create_props(adev); 504 502 if (r) 505 503 return r;
-2
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 2800 2800 2801 2801 adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true; 2802 2802 2803 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 2804 - 2805 2803 r = amdgpu_display_modeset_create_props(adev); 2806 2804 if (r) 2807 2805 return r;
-2
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 2918 2918 2919 2919 adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true; 2920 2920 2921 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 2922 - 2923 2921 r = amdgpu_display_modeset_create_props(adev); 2924 2922 if (r) 2925 2923 return r;
-1
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 2675 2675 adev_to_drm(adev)->mode_config.preferred_depth = 24; 2676 2676 adev_to_drm(adev)->mode_config.prefer_shadow = 1; 2677 2677 adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true; 2678 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 2679 2678 2680 2679 r = amdgpu_display_modeset_create_props(adev); 2681 2680 if (r)
-2
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 2701 2701 2702 2702 adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true; 2703 2703 2704 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 2705 - 2706 2704 r = amdgpu_display_modeset_create_props(adev); 2707 2705 if (r) 2708 2706 return r;
-2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3816 3816 /* indicates support for immediate flip */ 3817 3817 adev_to_drm(adev)->mode_config.async_page_flip = true; 3818 3818 3819 - adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base; 3820 - 3821 3819 state = kzalloc(sizeof(*state), GFP_KERNEL); 3822 3820 if (!state) 3823 3821 return -ENOMEM;
+4 -4
drivers/gpu/drm/armada/armada_gem.c
··· 66 66 if (dobj->obj.import_attach) { 67 67 /* We only ever display imported data */ 68 68 if (dobj->sgt) 69 - dma_buf_unmap_attachment(dobj->obj.import_attach, 70 - dobj->sgt, DMA_TO_DEVICE); 69 + dma_buf_unmap_attachment_unlocked(dobj->obj.import_attach, 70 + dobj->sgt, DMA_TO_DEVICE); 71 71 drm_prime_gem_destroy(&dobj->obj, NULL); 72 72 } 73 73 ··· 539 539 { 540 540 int ret; 541 541 542 - dobj->sgt = dma_buf_map_attachment(dobj->obj.import_attach, 543 - DMA_TO_DEVICE); 542 + dobj->sgt = dma_buf_map_attachment_unlocked(dobj->obj.import_attach, 543 + DMA_TO_DEVICE); 544 544 if (IS_ERR(dobj->sgt)) { 545 545 ret = PTR_ERR(dobj->sgt); 546 546 dobj->sgt = NULL;
+1 -3
drivers/gpu/drm/ast/Kconfig
··· 2 2 config DRM_AST 3 3 tristate "AST server chips" 4 4 depends on DRM && PCI && MMU 5 + select DRM_GEM_SHMEM_HELPER 5 6 select DRM_KMS_HELPER 6 - select DRM_VRAM_HELPER 7 - select DRM_TTM 8 - select DRM_TTM_HELPER 9 7 help 10 8 Say yes for experimental AST GPU driver. Do not enable 11 9 this driver without having a working -modesetting,
+2 -2
drivers/gpu/drm/ast/ast_drv.c
··· 33 33 #include <drm/drm_atomic_helper.h> 34 34 #include <drm/drm_crtc_helper.h> 35 35 #include <drm/drm_drv.h> 36 - #include <drm/drm_gem_vram_helper.h> 36 + #include <drm/drm_gem_shmem_helper.h> 37 37 #include <drm/drm_module.h> 38 38 #include <drm/drm_probe_helper.h> 39 39 ··· 63 63 .minor = DRIVER_MINOR, 64 64 .patchlevel = DRIVER_PATCHLEVEL, 65 65 66 - DRM_GEM_VRAM_DRIVER 66 + DRM_GEM_SHMEM_DRIVER_OPS 67 67 }; 68 68 69 69 /*
+18 -16
drivers/gpu/drm/ast/ast_drv.h
··· 87 87 #define AST_DRAM_8Gx16 8 88 88 89 89 /* 90 - * Cursor plane 90 + * Hardware cursor 91 91 */ 92 92 93 93 #define AST_MAX_HWC_WIDTH 64 ··· 95 95 96 96 #define AST_HWC_SIZE (AST_MAX_HWC_WIDTH * AST_MAX_HWC_HEIGHT * 2) 97 97 #define AST_HWC_SIGNATURE_SIZE 32 98 - 99 - #define AST_DEFAULT_HWC_NUM 2 100 98 101 99 /* define for signature structure */ 102 100 #define AST_HWC_SIGNATURE_CHECKSUM 0x00 ··· 105 107 #define AST_HWC_SIGNATURE_HOTSPOTX 0x14 106 108 #define AST_HWC_SIGNATURE_HOTSPOTY 0x18 107 109 108 - struct ast_cursor_plane { 110 + /* 111 + * Planes 112 + */ 113 + 114 + struct ast_plane { 109 115 struct drm_plane base; 110 116 111 - struct { 112 - struct drm_gem_vram_object *gbo; 113 - struct iosys_map map; 114 - u64 off; 115 - } hwc[AST_DEFAULT_HWC_NUM]; 116 - 117 - unsigned int next_hwc_index; 117 + void __iomem *vaddr; 118 + u64 offset; 119 + unsigned long size; 118 120 }; 119 121 120 - static inline struct ast_cursor_plane * 121 - to_ast_cursor_plane(struct drm_plane *plane) 122 + static inline struct ast_plane *to_ast_plane(struct drm_plane *plane) 122 123 { 123 - return container_of(plane, struct ast_cursor_plane, base); 124 + return container_of(plane, struct ast_plane, base); 124 125 } 125 126 126 127 /* ··· 172 175 uint32_t dram_type; 173 176 uint32_t mclk; 174 177 175 - struct drm_plane primary_plane; 176 - struct ast_cursor_plane cursor_plane; 178 + void __iomem *vram; 179 + unsigned long vram_base; 180 + unsigned long vram_size; 181 + unsigned long vram_fb_available; 182 + 183 + struct ast_plane primary_plane; 184 + struct ast_plane cursor_plane; 177 185 struct drm_crtc crtc; 178 186 struct { 179 187 struct {
+2 -3
drivers/gpu/drm/ast/ast_main.c
··· 32 32 #include <drm/drm_crtc_helper.h> 33 33 #include <drm/drm_drv.h> 34 34 #include <drm/drm_gem.h> 35 - #include <drm/drm_gem_vram_helper.h> 36 35 #include <drm/drm_managed.h> 37 36 38 37 #include "ast_drv.h" ··· 460 461 461 462 /* map reserved buffer */ 462 463 ast->dp501_fw_buf = NULL; 463 - if (dev->vram_mm->vram_size < pci_resource_len(pdev, 0)) { 464 - ast->dp501_fw_buf = pci_iomap_range(pdev, 0, dev->vram_mm->vram_size, 0); 464 + if (ast->vram_size < pci_resource_len(pdev, 0)) { 465 + ast->dp501_fw_buf = pci_iomap_range(pdev, 0, ast->vram_size, 0); 465 466 if (!ast->dp501_fw_buf) 466 467 drm_info(dev, "failed to map reserved buffer!\n"); 467 468 }
+7 -7
drivers/gpu/drm/ast/ast_mm.c
··· 28 28 29 29 #include <linux/pci.h> 30 30 31 - #include <drm/drm_gem_vram_helper.h> 32 31 #include <drm/drm_managed.h> 33 32 #include <drm/drm_print.h> 34 33 ··· 79 80 struct pci_dev *pdev = to_pci_dev(dev->dev); 80 81 resource_size_t base, size; 81 82 u32 vram_size; 82 - int ret; 83 83 84 84 base = pci_resource_start(pdev, 0); 85 85 size = pci_resource_len(pdev, 0); ··· 89 91 90 92 vram_size = ast_get_vram_size(ast); 91 93 92 - ret = drmm_vram_helper_init(dev, base, vram_size); 93 - if (ret) { 94 - drm_err(dev, "Error initializing VRAM MM; %d\n", ret); 95 - return ret; 96 - } 94 + ast->vram = devm_ioremap_wc(dev->dev, base, vram_size); 95 + if (!ast->vram) 96 + return -ENOMEM; 97 + 98 + ast->vram_base = base; 99 + ast->vram_size = vram_size; 100 + ast->vram_fb_available = vram_size; 97 101 98 102 return 0; 99 103 }
+265 -234
drivers/gpu/drm/ast/ast_mode.c
··· 36 36 #include <drm/drm_atomic_state_helper.h> 37 37 #include <drm/drm_crtc.h> 38 38 #include <drm/drm_crtc_helper.h> 39 + #include <drm/drm_damage_helper.h> 39 40 #include <drm/drm_edid.h> 41 + #include <drm/drm_format_helper.h> 40 42 #include <drm/drm_fourcc.h> 41 43 #include <drm/drm_gem_atomic_helper.h> 42 44 #include <drm/drm_gem_framebuffer_helper.h> 43 - #include <drm/drm_gem_vram_helper.h> 45 + #include <drm/drm_gem_shmem_helper.h> 44 46 #include <drm/drm_managed.h> 45 47 #include <drm/drm_probe_helper.h> 46 48 #include <drm/drm_simple_kms_helper.h> 47 49 48 50 #include "ast_drv.h" 49 51 #include "ast_tables.h" 52 + 53 + #define AST_LUT_SIZE 256 50 54 51 55 static inline void ast_load_palette_index(struct ast_private *ast, 52 56 u8 index, u8 red, u8 green, ··· 66 62 ast_io_read8(ast, AST_IO_SEQ_PORT); 67 63 } 68 64 69 - static void ast_crtc_load_lut(struct ast_private *ast, struct drm_crtc *crtc) 65 + static void ast_crtc_set_gamma_linear(struct ast_private *ast, 66 + const struct drm_format_info *format) 70 67 { 71 - u16 *r, *g, *b; 72 68 int i; 73 69 74 - if (!crtc->enabled) 75 - return; 70 + switch (format->format) { 71 + case DRM_FORMAT_C8: /* In this case, gamma table is used as color palette */ 72 + case DRM_FORMAT_RGB565: 73 + case DRM_FORMAT_XRGB8888: 74 + for (i = 0; i < AST_LUT_SIZE; i++) 75 + ast_load_palette_index(ast, i, i, i, i); 76 + break; 77 + default: 78 + drm_warn_once(&ast->base, "Unsupported format %p4cc for gamma correction\n", 79 + &format->format); 80 + break; 81 + } 82 + } 76 83 77 - r = crtc->gamma_store; 78 - g = r + crtc->gamma_size; 79 - b = g + crtc->gamma_size; 84 + static void ast_crtc_set_gamma(struct ast_private *ast, 85 + const struct drm_format_info *format, 86 + struct drm_color_lut *lut) 87 + { 88 + int i; 80 89 81 - for (i = 0; i < 256; i++) 82 - ast_load_palette_index(ast, i, *r++ >> 8, *g++ >> 8, *b++ >> 8); 90 + switch (format->format) { 91 + case DRM_FORMAT_C8: /* In this case, gamma table is used as color 
palette */ 92 + case DRM_FORMAT_RGB565: 93 + case DRM_FORMAT_XRGB8888: 94 + for (i = 0; i < AST_LUT_SIZE; i++) 95 + ast_load_palette_index(ast, i, 96 + lut[i].red >> 8, 97 + lut[i].green >> 8, 98 + lut[i].blue >> 8); 99 + break; 100 + default: 101 + drm_warn_once(&ast->base, "Unsupported format %p4cc for gamma correction\n", 102 + &format->format); 103 + break; 104 + } 83 105 } 84 106 85 107 static bool ast_get_vbios_mode_info(const struct drm_format_info *format, ··· 568 538 } 569 539 570 540 /* 541 + * Planes 542 + */ 543 + 544 + static int ast_plane_init(struct drm_device *dev, struct ast_plane *ast_plane, 545 + void __iomem *vaddr, u64 offset, unsigned long size, 546 + uint32_t possible_crtcs, 547 + const struct drm_plane_funcs *funcs, 548 + const uint32_t *formats, unsigned int format_count, 549 + const uint64_t *format_modifiers, 550 + enum drm_plane_type type) 551 + { 552 + struct drm_plane *plane = &ast_plane->base; 553 + 554 + ast_plane->vaddr = vaddr; 555 + ast_plane->offset = offset; 556 + ast_plane->size = size; 557 + 558 + return drm_universal_plane_init(dev, plane, possible_crtcs, funcs, 559 + formats, format_count, format_modifiers, 560 + type, NULL); 561 + } 562 + 563 + /* 571 564 * Primary plane 572 565 */ 573 566 ··· 603 550 static int ast_primary_plane_helper_atomic_check(struct drm_plane *plane, 604 551 struct drm_atomic_state *state) 605 552 { 606 - struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 607 - plane); 608 - struct drm_crtc_state *crtc_state; 609 - struct ast_crtc_state *ast_crtc_state; 553 + struct drm_device *dev = plane->dev; 554 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, plane); 555 + struct drm_crtc_state *new_crtc_state = NULL; 556 + struct ast_crtc_state *new_ast_crtc_state; 610 557 int ret; 611 558 612 - if (!new_plane_state->crtc) 613 - return 0; 559 + if (new_plane_state->crtc) 560 + new_crtc_state = drm_atomic_get_new_crtc_state(state, 
						 new_plane_state->crtc);

-	crtc_state = drm_atomic_get_new_crtc_state(state,
-						   new_plane_state->crtc);
-
-	ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state,
+	ret = drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
						  DRM_PLANE_NO_SCALING,
						  DRM_PLANE_NO_SCALING,
						  false, true);
-	if (ret)
+	if (ret) {
		return ret;
+	} else if (!new_plane_state->visible) {
+		if (drm_WARN_ON(dev, new_plane_state->crtc)) /* cannot legally happen */
+			return -EINVAL;
+		else
+			return 0;
+	}

-	if (!new_plane_state->visible)
-		return 0;
+	new_ast_crtc_state = to_ast_crtc_state(new_crtc_state);

-	ast_crtc_state = to_ast_crtc_state(crtc_state);
-
-	ast_crtc_state->format = new_plane_state->fb->format;
+	new_ast_crtc_state->format = new_plane_state->fb->format;

	return 0;
}

-static void
-ast_primary_plane_helper_atomic_update(struct drm_plane *plane,
-				       struct drm_atomic_state *state)
+static void ast_handle_damage(struct ast_plane *ast_plane, struct iosys_map *src,
+			      struct drm_framebuffer *fb,
+			      const struct drm_rect *clip)
{
-	struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
-									   plane);
+	struct iosys_map dst = IOSYS_MAP_INIT_VADDR(ast_plane->vaddr);
+
+	iosys_map_incr(&dst, drm_fb_clip_offset(fb->pitches[0], fb->format, clip));
+	drm_fb_memcpy(&dst, fb->pitches, src, fb, clip);
+}
+
+static void ast_primary_plane_helper_atomic_update(struct drm_plane *plane,
+						   struct drm_atomic_state *state)
+{
	struct drm_device *dev = plane->dev;
	struct ast_private *ast = to_ast_private(dev);
-	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
-									   plane);
-	struct drm_gem_vram_object *gbo;
-	s64 gpu_addr;
-	struct drm_framebuffer *fb = new_state->fb;
-	struct drm_framebuffer *old_fb = old_state->fb;
+	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
+	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
+	struct drm_framebuffer *fb = plane_state->fb;
+	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane);
+	struct drm_framebuffer *old_fb = old_plane_state->fb;
+	struct ast_plane *ast_plane = to_ast_plane(plane);
+	struct drm_rect damage;
+	struct drm_atomic_helper_damage_iter iter;

	if (!old_fb || (fb->format != old_fb->format)) {
-		struct drm_crtc_state *crtc_state = new_state->crtc->state;
+		struct drm_crtc *crtc = plane_state->crtc;
+		struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
		struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
		struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info;
···
		ast_set_vbios_color_reg(ast, fb->format, vbios_mode_info);
	}

-	gbo = drm_gem_vram_of_gem(fb->obj[0]);
-	gpu_addr = drm_gem_vram_offset(gbo);
-	if (drm_WARN_ON_ONCE(dev, gpu_addr < 0))
-		return; /* Bug: we didn't pin the BO to VRAM in prepare_fb. */
+	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
+	drm_atomic_for_each_plane_damage(&iter, &damage) {
+		ast_handle_damage(ast_plane, shadow_plane_state->data, fb, &damage);
+	}

-	ast_set_offset_reg(ast, fb);
-	ast_set_start_address_crt1(ast, (u32)gpu_addr);
-
-	ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0x00);
+	/*
+	 * Some BMCs stop scanning out the video signal after the driver
+	 * reprogrammed the offset or scanout address. This stalls display
+	 * output for several seconds and makes the display unusable.
+	 * Therefore only update the offset if it changes and reprogram the
+	 * address after enabling the plane.
+	 */
+	if (!old_fb || old_fb->pitches[0] != fb->pitches[0])
+		ast_set_offset_reg(ast, fb);
+	if (!old_fb) {
+		ast_set_start_address_crt1(ast, (u32)ast_plane->offset);
+		ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0x00);
+	}
}

-static void
-ast_primary_plane_helper_atomic_disable(struct drm_plane *plane,
-					struct drm_atomic_state *state)
+static void ast_primary_plane_helper_atomic_disable(struct drm_plane *plane,
+						    struct drm_atomic_state *state)
{
	struct ast_private *ast = to_ast_private(plane->dev);
···
}

static const struct drm_plane_helper_funcs ast_primary_plane_helper_funcs = {
-	DRM_GEM_VRAM_PLANE_HELPER_FUNCS,
+	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
	.atomic_check = ast_primary_plane_helper_atomic_check,
	.atomic_update = ast_primary_plane_helper_atomic_update,
	.atomic_disable = ast_primary_plane_helper_atomic_disable,
···
	.update_plane = drm_atomic_helper_update_plane,
	.disable_plane = drm_atomic_helper_disable_plane,
	.destroy = drm_plane_cleanup,
-	.reset = drm_atomic_helper_plane_reset,
-	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
-	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+	DRM_GEM_SHADOW_PLANE_FUNCS,
};

static int ast_primary_plane_init(struct ast_private *ast)
{
	struct drm_device *dev = &ast->base;
-	struct drm_plane *primary_plane = &ast->primary_plane;
+	struct ast_plane *ast_primary_plane = &ast->primary_plane;
+	struct drm_plane *primary_plane = &ast_primary_plane->base;
+	void __iomem *vaddr = ast->vram;
+	u64 offset = ast->vram_base;
+	unsigned long cursor_size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
+	unsigned long size = ast->vram_fb_available - cursor_size;
	int ret;

-	ret = drm_universal_plane_init(dev, primary_plane, 0x01,
-				       &ast_primary_plane_funcs,
-				       ast_primary_plane_formats,
-				       ARRAY_SIZE(ast_primary_plane_formats),
-				       NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
+	ret = ast_plane_init(dev, ast_primary_plane, vaddr, offset, size,
+			     0x01, &ast_primary_plane_funcs,
+			     ast_primary_plane_formats, ARRAY_SIZE(ast_primary_plane_formats),
+			     NULL, DRM_PLANE_TYPE_PRIMARY);
	if (ret) {
-		drm_err(dev, "drm_universal_plane_init() failed: %d\n", ret);
+		drm_err(dev, "ast_plane_init() failed: %d\n", ret);
		return ret;
	}
	drm_plane_helper_add(primary_plane, &ast_primary_plane_helper_funcs);
+	drm_plane_enable_fb_damage_clips(primary_plane);

	return 0;
}
···
static int ast_cursor_plane_helper_atomic_check(struct drm_plane *plane,
						struct drm_atomic_state *state)
{
-	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
-										 plane);
-	struct drm_framebuffer *fb = new_plane_state->fb;
-	struct drm_crtc_state *crtc_state;
+	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, plane);
+	struct drm_framebuffer *new_fb = new_plane_state->fb;
+	struct drm_crtc_state *new_crtc_state = NULL;
	int ret;

-	if (!new_plane_state->crtc)
-		return 0;
+	if (new_plane_state->crtc)
+		new_crtc_state = drm_atomic_get_new_crtc_state(state, new_plane_state->crtc);

-	crtc_state = drm_atomic_get_new_crtc_state(state,
-						   new_plane_state->crtc);
-
-	ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state,
+	ret = drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
						  DRM_PLANE_NO_SCALING,
						  DRM_PLANE_NO_SCALING,
						  true, true);
-	if (ret)
+	if (ret || !new_plane_state->visible)
		return ret;

-	if (!new_plane_state->visible)
-		return 0;
-
-	if (fb->width > AST_MAX_HWC_WIDTH || fb->height > AST_MAX_HWC_HEIGHT)
+	if (new_fb->width > AST_MAX_HWC_WIDTH || new_fb->height > AST_MAX_HWC_HEIGHT)
		return -EINVAL;

	return 0;
}

-static void
-ast_cursor_plane_helper_atomic_update(struct drm_plane *plane,
-				      struct drm_atomic_state *state)
+static void ast_cursor_plane_helper_atomic_update(struct drm_plane *plane,
+						  struct drm_atomic_state *state)
{
-	struct ast_cursor_plane *ast_cursor_plane = to_ast_cursor_plane(plane);
-	struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
-									   plane);
-	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
-									   plane);
-	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(new_state);
-	struct drm_framebuffer *fb = new_state->fb;
+	struct ast_plane *ast_plane = to_ast_plane(plane);
+	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
+	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
+	struct drm_framebuffer *fb = plane_state->fb;
+	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane);
	struct ast_private *ast = to_ast_private(plane->dev);
-	struct iosys_map dst_map =
-		ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].map;
-	u64 dst_off =
-		ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].off;
	struct iosys_map src_map = shadow_plane_state->data[0];
+	struct drm_rect damage;
+	const u8 *src = src_map.vaddr; /* TODO: Use mapping abstraction properly */
+	u64 dst_off = ast_plane->offset;
+	u8 __iomem *dst = ast_plane->vaddr; /* TODO: Use mapping abstraction properly */
+	u8 __iomem *sig = dst + AST_HWC_SIZE; /* TODO: Use mapping abstraction properly */
	unsigned int offset_x, offset_y;
	u16 x, y;
	u8 x_offset, y_offset;
-	u8 __iomem *dst;
-	u8 __iomem *sig;
-	const u8 *src;
-
-	src = src_map.vaddr; /* TODO: Use mapping abstraction properly */
-	dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
-	sig = dst + AST_HWC_SIZE; /* TODO: Use mapping abstraction properly */

	/*
-	 * Do data transfer to HW cursor BO. If a new cursor image was installed,
-	 * point the scanout engine to dst_gbo's offset and page-flip the HWC buffers.
+	 * Do data transfer to hardware buffer and point the scanout
+	 * engine to the offset.
	 */

-	ast_update_cursor_image(dst, src, fb->width, fb->height);
-
-	if (new_state->fb != old_state->fb) {
+	if (drm_atomic_helper_damage_merged(old_plane_state, plane_state, &damage)) {
+		ast_update_cursor_image(dst, src, fb->width, fb->height);
		ast_set_cursor_base(ast, dst_off);
-
-		++ast_cursor_plane->next_hwc_index;
-		ast_cursor_plane->next_hwc_index %= ARRAY_SIZE(ast_cursor_plane->hwc);
	}

	/*
	 * Update location in HWC signature and registers.
	 */

-	writel(new_state->crtc_x, sig + AST_HWC_SIGNATURE_X);
-	writel(new_state->crtc_y, sig + AST_HWC_SIGNATURE_Y);
+	writel(plane_state->crtc_x, sig + AST_HWC_SIGNATURE_X);
+	writel(plane_state->crtc_y, sig + AST_HWC_SIGNATURE_Y);

	offset_x = AST_MAX_HWC_WIDTH - fb->width;
	offset_y = AST_MAX_HWC_HEIGHT - fb->height;

-	if (new_state->crtc_x < 0) {
-		x_offset = (-new_state->crtc_x) + offset_x;
+	if (plane_state->crtc_x < 0) {
+		x_offset = (-plane_state->crtc_x) + offset_x;
		x = 0;
	} else {
		x_offset = offset_x;
-		x = new_state->crtc_x;
+		x = plane_state->crtc_x;
	}
-	if (new_state->crtc_y < 0) {
-		y_offset = (-new_state->crtc_y) + offset_y;
+	if (plane_state->crtc_y < 0) {
+		y_offset = (-plane_state->crtc_y) + offset_y;
		y = 0;
	} else {
		y_offset = offset_y;
-		y = new_state->crtc_y;
+		y = plane_state->crtc_y;
	}

	ast_set_cursor_location(ast, x, y, x_offset, y_offset);
···
	ast_set_cursor_enabled(ast, true);
}

-static void
-ast_cursor_plane_helper_atomic_disable(struct drm_plane *plane,
-				       struct drm_atomic_state *state)
+static void ast_cursor_plane_helper_atomic_disable(struct drm_plane *plane,
+						   struct drm_atomic_state *state)
{
	struct ast_private *ast = to_ast_private(plane->dev);
···
	.atomic_disable = ast_cursor_plane_helper_atomic_disable,
};

-static void ast_cursor_plane_destroy(struct drm_plane *plane)
-{
-	struct ast_cursor_plane *ast_cursor_plane = to_ast_cursor_plane(plane);
-	size_t i;
-	struct drm_gem_vram_object *gbo;
-	struct iosys_map map;
-
-	for (i = 0; i < ARRAY_SIZE(ast_cursor_plane->hwc); ++i) {
-		gbo = ast_cursor_plane->hwc[i].gbo;
-		map = ast_cursor_plane->hwc[i].map;
-		drm_gem_vram_vunmap(gbo, &map);
-		drm_gem_vram_unpin(gbo);
-		drm_gem_vram_put(gbo);
-	}
-
-	drm_plane_cleanup(plane);
-}
-
static const struct drm_plane_funcs ast_cursor_plane_funcs = {
	.update_plane = drm_atomic_helper_update_plane,
	.disable_plane = drm_atomic_helper_disable_plane,
-	.destroy = ast_cursor_plane_destroy,
+	.destroy = drm_plane_cleanup,
	DRM_GEM_SHADOW_PLANE_FUNCS,
};

static int ast_cursor_plane_init(struct ast_private *ast)
{
	struct drm_device *dev = &ast->base;
-	struct ast_cursor_plane *ast_cursor_plane = &ast->cursor_plane;
+	struct ast_plane *ast_cursor_plane = &ast->cursor_plane;
	struct drm_plane *cursor_plane = &ast_cursor_plane->base;
-	size_t size, i;
-	struct drm_gem_vram_object *gbo;
-	struct iosys_map map;
+	size_t size;
+	void __iomem *vaddr;
+	u64 offset;
	int ret;
-	s64 off;

	/*
	 * Allocate backing storage for cursors. The BOs are permanently
···

	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);

-	for (i = 0; i < ARRAY_SIZE(ast_cursor_plane->hwc); ++i) {
-		gbo = drm_gem_vram_create(dev, size, 0);
-		if (IS_ERR(gbo)) {
-			ret = PTR_ERR(gbo);
-			goto err_hwc;
-		}
-		ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM |
-					    DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
-		if (ret)
-			goto err_drm_gem_vram_put;
-		ret = drm_gem_vram_vmap(gbo, &map);
-		if (ret)
-			goto err_drm_gem_vram_unpin;
-		off = drm_gem_vram_offset(gbo);
-		if (off < 0) {
-			ret = off;
-			goto err_drm_gem_vram_vunmap;
-		}
-		ast_cursor_plane->hwc[i].gbo = gbo;
-		ast_cursor_plane->hwc[i].map = map;
-		ast_cursor_plane->hwc[i].off = off;
-	}
+	if (ast->vram_fb_available < size)
+		return -ENOMEM;

-	/*
-	 * Create the cursor plane. The plane's destroy callback will release
-	 * the backing storages' BO memory.
-	 */
+	vaddr = ast->vram + ast->vram_fb_available - size;
+	offset = ast->vram_base + ast->vram_fb_available - size;

-	ret = drm_universal_plane_init(dev, cursor_plane, 0x01,
-				       &ast_cursor_plane_funcs,
-				       ast_cursor_plane_formats,
-				       ARRAY_SIZE(ast_cursor_plane_formats),
-				       NULL, DRM_PLANE_TYPE_CURSOR, NULL);
+	ret = ast_plane_init(dev, ast_cursor_plane, vaddr, offset, size,
+			     0x01, &ast_cursor_plane_funcs,
+			     ast_cursor_plane_formats, ARRAY_SIZE(ast_cursor_plane_formats),
+			     NULL, DRM_PLANE_TYPE_CURSOR);
	if (ret) {
-		drm_err(dev, "drm_universal_plane failed(): %d\n", ret);
-		goto err_hwc;
+		drm_err(dev, "ast_plane_init() failed: %d\n", ret);
+		return ret;
	}
	drm_plane_helper_add(cursor_plane, &ast_cursor_plane_helper_funcs);
+	drm_plane_enable_fb_damage_clips(cursor_plane);
+
+	ast->vram_fb_available -= size;

	return 0;
-
-err_hwc:
-	while (i) {
-		--i;
-		gbo = ast_cursor_plane->hwc[i].gbo;
-		map = ast_cursor_plane->hwc[i].map;
-err_drm_gem_vram_vunmap:
-		drm_gem_vram_vunmap(gbo, &map);
-err_drm_gem_vram_unpin:
-		drm_gem_vram_unpin(gbo);
-err_drm_gem_vram_put:
-		drm_gem_vram_put(gbo);
-	}
-	return ret;
}

/*
···
		ast_set_color_reg(ast, format);
		ast_set_vbios_color_reg(ast, format, vbios_mode_info);
+		if (crtc->state->gamma_lut)
+			ast_crtc_set_gamma(ast, format, crtc->state->gamma_lut->data);
+		else
+			ast_crtc_set_gamma_linear(ast, format);
	}
-
-	ast_crtc_load_lut(ast, crtc);
	break;
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
···
			       struct drm_atomic_state *state)
{
	struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
+	struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
+	struct ast_crtc_state *old_ast_crtc_state = to_ast_crtc_state(old_crtc_state);
	struct drm_device *dev = crtc->dev;
	struct ast_crtc_state *ast_state;
	const struct drm_format_info *format;
	bool succ;
	int ret;

-	ret = drm_atomic_helper_check_crtc_state(crtc_state, false);
+	if (!crtc_state->enable)
+		return 0;
+
+	ret = drm_atomic_helper_check_crtc_primary_plane(crtc_state);
	if (ret)
		return ret;
-
-	if (!crtc_state->enable)
-		goto out;

	ast_state = to_ast_crtc_state(crtc_state);
···
	if (drm_WARN_ON_ONCE(dev, !format))
		return -EINVAL; /* BUG: We didn't set format in primary check(). */

+	/*
+	 * The gamma LUT has to be reloaded after changing the primary
+	 * plane's color format.
+	 */
+	if (old_ast_crtc_state->format != format)
+		crtc_state->color_mgmt_changed = true;
+
+	if (crtc_state->color_mgmt_changed && crtc_state->gamma_lut) {
+		if (crtc_state->gamma_lut->length !=
+		    AST_LUT_SIZE * sizeof(struct drm_color_lut)) {
+			drm_err(dev, "Wrong size for gamma_lut %zu\n",
+				crtc_state->gamma_lut->length);
+			return -EINVAL;
+		}
+	}
+
	succ = ast_get_vbios_mode_info(format, &crtc_state->mode,
				       &crtc_state->adjusted_mode,
				       &ast_state->vbios_mode_info);
	if (!succ)
		return -EINVAL;

-out:
-	return drm_atomic_add_affected_planes(state, crtc);
-}
-
-static void ast_crtc_helper_atomic_begin(struct drm_crtc *crtc, struct drm_atomic_state *state)
-{
-	struct drm_device *dev = crtc->dev;
-	struct ast_private *ast = to_ast_private(dev);
-
-	/*
-	 * Concurrent operations could possibly trigger a call to
-	 * drm_connector_helper_funcs.get_modes by trying to read the
-	 * display modes. Protect access to I/O registers by acquiring
-	 * the I/O-register lock. Released in atomic_flush().
-	 */
-	mutex_lock(&ast->ioregs_lock);
+	return 0;
}

static void
···
{
	struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state,
									  crtc);
-	struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state,
-									      crtc);
	struct drm_device *dev = crtc->dev;
	struct ast_private *ast = to_ast_private(dev);
	struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
-	struct ast_crtc_state *old_ast_crtc_state = to_ast_crtc_state(old_crtc_state);
	struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info;

	/*
	 * The gamma LUT has to be reloaded after changing the primary
	 * plane's color format.
	 */
-	if (old_ast_crtc_state->format != ast_crtc_state->format)
-		ast_crtc_load_lut(ast, crtc);
+	if (crtc_state->enable && crtc_state->color_mgmt_changed) {
+		if (crtc_state->gamma_lut)
+			ast_crtc_set_gamma(ast,
+					   ast_crtc_state->format,
+					   crtc_state->gamma_lut->data);
+		else
+			ast_crtc_set_gamma_linear(ast, ast_crtc_state->format);
+	}

	//Set Aspeed Display-Port
	if (ast->tx_chip_types & AST_TX_ASTDP_BIT)
		ast_dp_set_mode(crtc, vbios_mode_info);
-
-	mutex_unlock(&ast->ioregs_lock);
}

-static void
-ast_crtc_helper_atomic_enable(struct drm_crtc *crtc,
-			      struct drm_atomic_state *state)
+static void ast_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state)
{
	struct drm_device *dev = crtc->dev;
	struct ast_private *ast = to_ast_private(dev);
-	struct drm_crtc_state *crtc_state = crtc->state;
+	struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
	struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
	struct ast_vbios_mode_info *vbios_mode_info =
		&ast_crtc_state->vbios_mode_info;
···
	ast_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
}

-static void
-ast_crtc_helper_atomic_disable(struct drm_crtc *crtc,
-			       struct drm_atomic_state *state)
+static void ast_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state)
{
-	struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state,
-									      crtc);
+	struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
	struct drm_device *dev = crtc->dev;
	struct ast_private *ast = to_ast_private(dev);
···
static const struct drm_crtc_helper_funcs ast_crtc_helper_funcs = {
	.mode_valid = ast_crtc_helper_mode_valid,
	.atomic_check = ast_crtc_helper_atomic_check,
-	.atomic_begin = ast_crtc_helper_atomic_begin,
	.atomic_flush = ast_crtc_helper_atomic_flush,
	.atomic_enable = ast_crtc_helper_atomic_enable,
	.atomic_disable = ast_crtc_helper_atomic_disable,
···
	struct drm_crtc *crtc = &ast->crtc;
	int ret;

-	ret = drm_crtc_init_with_planes(dev, crtc, &ast->primary_plane,
+	ret = drm_crtc_init_with_planes(dev, crtc, &ast->primary_plane.base,
					&ast->cursor_plane.base, &ast_crtc_funcs,
					NULL);
	if (ret)
		return ret;

-	drm_mode_crtc_set_gamma_size(crtc, 256);
+	drm_mode_crtc_set_gamma_size(crtc, AST_LUT_SIZE);
+	drm_crtc_enable_color_mgmt(crtc, 0, false, AST_LUT_SIZE);
+
	drm_crtc_helper_add(crtc, &ast_crtc_helper_funcs);

	return 0;
···
 * Mode config
 */

+static void ast_mode_config_helper_atomic_commit_tail(struct drm_atomic_state *state)
+{
+	struct ast_private *ast = to_ast_private(state->dev);
+
+	/*
+	 * Concurrent operations could possibly trigger a call to
+	 * drm_connector_helper_funcs.get_modes by trying to read the
+	 * display modes. Protect access to I/O registers by acquiring
+	 * the I/O-register lock. Released in atomic_flush().
+	 */
+	mutex_lock(&ast->ioregs_lock);
+	drm_atomic_helper_commit_tail_rpm(state);
+	mutex_unlock(&ast->ioregs_lock);
+}
+
static const struct drm_mode_config_helper_funcs ast_mode_config_helper_funcs = {
-	.atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
+	.atomic_commit_tail = ast_mode_config_helper_atomic_commit_tail,
};

+static enum drm_mode_status ast_mode_config_mode_valid(struct drm_device *dev,
+						       const struct drm_display_mode *mode)
+{
+	static const unsigned long max_bpp = 4; /* DRM_FORMAT_XRGB8888 */
+	struct ast_private *ast = to_ast_private(dev);
+	unsigned long fbsize, fbpages, max_fbpages;
+
+	max_fbpages = (ast->vram_fb_available) >> PAGE_SHIFT;
+
+	fbsize = mode->hdisplay * mode->vdisplay * max_bpp;
+	fbpages = DIV_ROUND_UP(fbsize, PAGE_SIZE);
+
+	if (fbpages > max_fbpages)
+		return MODE_MEM;
+
+	return MODE_OK;
+}
+
static const struct drm_mode_config_funcs ast_mode_config_funcs = {
-	.fb_create = drm_gem_fb_create,
-	.mode_valid = drm_vram_helper_mode_valid,
+	.fb_create = drm_gem_fb_create_with_dirty,
+	.mode_valid = ast_mode_config_mode_valid,
	.atomic_check = drm_atomic_helper_check,
	.atomic_commit = drm_atomic_helper_commit,
};
···
int ast_mode_config_init(struct ast_private *ast)
{
	struct drm_device *dev = &ast->base;
-	struct pci_dev *pdev = to_pci_dev(dev->dev);
	int ret;

	ret = drmm_mode_config_init(dev);
···
	dev->mode_config.min_width = 0;
	dev->mode_config.min_height = 0;
	dev->mode_config.preferred_depth = 24;
-	dev->mode_config.prefer_shadow = 1;
-	dev->mode_config.fb_base = pci_resource_start(pdev, 0);

	if (ast->chip == AST2100 ||
	    ast->chip == AST2200 ||
···
	}

	dev->mode_config.helper_private = &ast_mode_config_helper_funcs;
-

	ret = ast_primary_plane_init(ast);
	if (ret)
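The ast_handle_damage() helper in the hunk above leans on two framebuffer helpers: drm_fb_clip_offset() to find the first damaged byte and drm_fb_memcpy() to copy only the damaged rows. As a rough userspace sketch of that arithmetic — the struct and function names here are illustrative stand-ins, not the kernel API, and a 4-byte XRGB8888 pixel is assumed:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for struct drm_rect: [x1, x2) x [y1, y2). */
struct clip_rect { int x1, y1, x2, y2; };

/* Byte offset of the clip's top-left corner within the framebuffer,
 * in the spirit of drm_fb_clip_offset(): full rows above the clip
 * plus the pixels to its left. */
static size_t clip_offset(unsigned int pitch, unsigned int cpp,
			  const struct clip_rect *clip)
{
	return (size_t)clip->y1 * pitch + (size_t)clip->x1 * cpp;
}

/* Copy only the damaged rows, one memcpy per scanline, roughly what
 * drm_fb_memcpy() does for a single-plane format. */
static void copy_damage(uint8_t *dst, const uint8_t *src, unsigned int pitch,
			unsigned int cpp, const struct clip_rect *clip)
{
	size_t off = clip_offset(pitch, cpp, clip);
	size_t len = (size_t)(clip->x2 - clip->x1) * cpp;
	int y;

	for (y = clip->y1; y < clip->y2; y++) {
		memcpy(dst + off, src + off, len);
		off += pitch;
	}
}
```

The point of the scheme is that a small cursor move or partial redraw only touches the damaged scanlines of the slow VRAM aperture instead of re-uploading the whole shadow buffer.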
+2 -1
drivers/gpu/drm/bridge/adv7511/adv7511.h
···

 void adv7533_dsi_power_on(struct adv7511 *adv);
 void adv7533_dsi_power_off(struct adv7511 *adv);
-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode);
+enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
+					const struct drm_display_mode *mode);
 int adv7533_patch_registers(struct adv7511 *adv);
 int adv7533_patch_cec_registers(struct adv7511 *adv);
 int adv7533_attach_dsi(struct adv7511 *adv);
+14 -4
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
···
}

static enum drm_mode_status adv7511_mode_valid(struct adv7511 *adv7511,
-					       struct drm_display_mode *mode)
+					       const struct drm_display_mode *mode)
{
	if (mode->clock > 165000)
		return MODE_CLOCK_HIGH;
···
			   0x6, low_refresh_rate << 1);
	regmap_update_bits(adv7511->regmap, 0x17,
			   0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
-
-	if (adv7511->type == ADV7533 || adv7511->type == ADV7535)
-		adv7533_mode_set(adv7511, adj_mode);

	drm_mode_copy(&adv7511->curr_mode, adj_mode);
···
	adv7511_mode_set(adv, mode, adj_mode);
}

+static enum drm_mode_status adv7511_bridge_mode_valid(struct drm_bridge *bridge,
+						      const struct drm_display_info *info,
+						      const struct drm_display_mode *mode)
+{
+	struct adv7511 *adv = bridge_to_adv7511(bridge);
+
+	if (adv->type == ADV7533 || adv->type == ADV7535)
+		return adv7533_mode_valid(adv, mode);
+	else
+		return adv7511_mode_valid(adv, mode);
+}
+
static int adv7511_bridge_attach(struct drm_bridge *bridge,
				 enum drm_bridge_attach_flags flags)
{
···
	.enable = adv7511_bridge_enable,
	.disable = adv7511_bridge_disable,
	.mode_set = adv7511_bridge_mode_set,
+	.mode_valid = adv7511_bridge_mode_valid,
	.attach = adv7511_bridge_attach,
	.detect = adv7511_bridge_detect,
	.get_edid = adv7511_bridge_get_edid,
+13 -12
drivers/gpu/drm/bridge/adv7511/adv7533.c
···
	regmap_write(adv->regmap_cec, 0x27, 0x0b);
}

-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode)
+enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
+					const struct drm_display_mode *mode)
{
+	int lanes;
	struct mipi_dsi_device *dsi = adv->dsi;
-	int lanes, ret;
-
-	if (adv->num_dsi_lanes != 4)
-		return;

	if (mode->clock > 80000)
		lanes = 4;
	else
		lanes = 3;

-	if (lanes != dsi->lanes) {
-		mipi_dsi_detach(dsi);
-		dsi->lanes = lanes;
-		ret = mipi_dsi_attach(dsi);
-		if (ret)
-			dev_err(&dsi->dev, "failed to change host lanes\n");
-	}
+	/*
+	 * TODO: add support for dynamic switching of lanes
+	 * by using the bridge pre_enable() op . Till then filter
+	 * out the modes which shall need different number of lanes
+	 * than what was configured in the device tree.
+	 */
+	if (lanes != dsi->lanes)
+		return MODE_BAD;
+
+	return MODE_OK;
}

int adv7533_patch_registers(struct adv7511 *adv)
+48 -10
drivers/gpu/drm/bridge/ite-it6505.c
···
	struct notifier_block event_nb;
	struct extcon_dev *extcon;
	struct work_struct extcon_wq;
+	int extcon_state;
	enum drm_connector_status connector_status;
	enum link_train_status link_state;
	struct work_struct link_works;
···
{
	struct it6505 *it6505 = container_of(work, struct it6505, extcon_wq);
	struct device *dev = &it6505->client->dev;
-	int state = extcon_get_state(it6505->extcon, EXTCON_DISP_DP);
-	unsigned int pwroffretry = 0;
+	int state, ret;

	if (it6505->enable_drv_hold)
		return;

	mutex_lock(&it6505->extcon_lock);

+	state = extcon_get_state(it6505->extcon, EXTCON_DISP_DP);
	DRM_DEV_DEBUG_DRIVER(dev, "EXTCON_DISP_DP = 0x%02x", state);
-	if (state > 0) {
+
+	if (state == it6505->extcon_state || unlikely(state < 0))
+		goto unlock;
+	it6505->extcon_state = state;
+	if (state) {
		DRM_DEV_DEBUG_DRIVER(dev, "start to power on");
		msleep(100);
-		it6505_poweron(it6505);
+		ret = pm_runtime_get_sync(dev);
+
+		/*
+		 * On system resume, extcon_work can be triggered before
+		 * pm_runtime_force_resume re-enables runtime power management.
+		 * Handling the error here to make sure the bridge is powered on.
+		 */
+		if (ret)
+			it6505_poweron(it6505);
	} else {
		DRM_DEV_DEBUG_DRIVER(dev, "start to power off");
-		while (it6505_poweroff(it6505) && pwroffretry++ < 5) {
-			DRM_DEV_DEBUG_DRIVER(dev, "power off fail %d times",
-					     pwroffretry);
-		}
+		pm_runtime_put_sync(dev);

		drm_helper_hpd_irq_event(it6505->bridge.dev);
		memset(it6505->dpcd, 0, sizeof(it6505->dpcd));
		DRM_DEV_DEBUG_DRIVER(dev, "power off it6505 success!");
	}

+unlock:
	mutex_unlock(&it6505->extcon_lock);
}
···
	}
}

+static void it6505_bridge_atomic_pre_enable(struct drm_bridge *bridge,
+					    struct drm_bridge_state *old_state)
+{
+	struct it6505 *it6505 = bridge_to_it6505(bridge);
+	struct device *dev = &it6505->client->dev;
+
+	DRM_DEV_DEBUG_DRIVER(dev, "start");
+
+	pm_runtime_get_sync(dev);
+}
+
+static void it6505_bridge_atomic_post_disable(struct drm_bridge *bridge,
+					      struct drm_bridge_state *old_state)
+{
+	struct it6505 *it6505 = bridge_to_it6505(bridge);
+	struct device *dev = &it6505->client->dev;
+
+	DRM_DEV_DEBUG_DRIVER(dev, "start");
+
+	pm_runtime_put_sync(dev);
+}
+
static enum drm_connector_status
it6505_bridge_detect(struct drm_bridge *bridge)
{
···
	.mode_valid = it6505_bridge_mode_valid,
	.atomic_enable = it6505_bridge_atomic_enable,
	.atomic_disable = it6505_bridge_atomic_disable,
+	.atomic_pre_enable = it6505_bridge_atomic_pre_enable,
+	.atomic_post_disable = it6505_bridge_atomic_post_disable,
	.detect = it6505_bridge_detect,
	.get_edid = it6505_bridge_get_edid,
};
···
	return it6505_poweroff(it6505);
}

-static SIMPLE_DEV_PM_OPS(it6505_bridge_pm_ops, it6505_bridge_suspend,
-			 it6505_bridge_resume);
+static const struct dev_pm_ops it6505_bridge_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+	SET_RUNTIME_PM_OPS(it6505_bridge_suspend, it6505_bridge_resume, NULL)
+};

static int it6505_init_pdata(struct it6505 *it6505)
{
···
	DRM_DEV_DEBUG_DRIVER(dev, "it6505 device name: %s", dev_name(dev));
	debugfs_init(it6505);
+	pm_runtime_enable(dev);

	it6505->bridge.funcs = &it6505_bridge_funcs;
	it6505->bridge.type = DRM_MODE_CONNECTOR_DisplayPort;
+8 -1
drivers/gpu/drm/bridge/parade-ps8640.c
···
	}

	switch (data & SWAUX_STATUS_MASK) {
-	/* Ignore the DEFER cases as they are already handled in hardware */
	case SWAUX_STATUS_NACK:
	case SWAUX_STATUS_I2C_NACK:
		/*
···

		fallthrough;
	case SWAUX_STATUS_ACKM:
+		len = data & SWAUX_M_MASK;
+		break;
+	case SWAUX_STATUS_DEFER:
+	case SWAUX_STATUS_I2C_DEFER:
+		if (is_native_aux)
+			msg->reply |= DP_AUX_NATIVE_REPLY_DEFER;
+		else
+			msg->reply |= DP_AUX_I2C_REPLY_DEFER;
		len = data & SWAUX_M_MASK;
		break;
	case SWAUX_STATUS_INVALID:
+1 -1
drivers/gpu/drm/bridge/tc358775.c
···
		  (val >> 8) & 0xFF, val & 0xFF);

	d2l_write(tc->i2c, SYSRST, SYS_RST_REG | SYS_RST_DSIRX | SYS_RST_BM |
-		  SYS_RST_LCD | SYS_RST_I2CM | SYS_RST_I2CS);
+		  SYS_RST_LCD | SYS_RST_I2CM);
	usleep_range(30000, 40000);

	d2l_write(tc->i2c, PPI_TX_RX_TA, TTA_GET | TTA_SURE);
+8 -6
drivers/gpu/drm/display/Makefile
···
obj-$(CONFIG_DRM_DP_AUX_BUS) += drm_dp_aux_bus.o

drm_display_helper-y := drm_display_helper_mod.o
-drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += drm_dp_dual_mode_helper.o \
-	drm_dp_helper.o \
-	drm_dp_mst_topology.o \
-	drm_dsc_helper.o
+drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \
+	drm_dp_dual_mode_helper.o \
+	drm_dp_helper.o \
+	drm_dp_mst_topology.o \
+	drm_dsc_helper.o
drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o
-drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += drm_hdmi_helper.o \
-	drm_scdc_helper.o
+drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
+	drm_hdmi_helper.o \
+	drm_scdc_helper.o
drm_display_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o
drm_display_helper-$(CONFIG_DRM_DP_CEC) += drm_dp_cec.o
+18 -42
drivers/gpu/drm/drm_atomic_helper.c
···
EXPORT_SYMBOL(drm_atomic_helper_check_plane_state);

/**
- * drm_atomic_helper_check_crtc_state() - Check CRTC state for validity
+ * drm_atomic_helper_check_crtc_primary_plane() - Check CRTC state for primary plane
 * @crtc_state: CRTC state to check
- * @can_disable_primary_planes: can the CRTC be enabled without a primary plane?
 *
- * Checks that a desired CRTC update is valid. Drivers that provide
- * their own CRTC handling rather than helper-provided implementations may
- * still wish to call this function to avoid duplication of error checking
- * code.
- *
- * Note that @can_disable_primary_planes only tests if the CRTC can be
- * enabled without a primary plane. To test if a primary plane can be updated
- * without a CRTC, use drm_atomic_helper_check_plane_state() in the plane's
- * atomic check.
+ * Checks that a CRTC has at least one primary plane attached to it, which is
+ * a requirement on some hardware. Note that this only involves the CRTC side
+ * of the test. To test if the primary plane is visible or if it can be updated
+ * without the CRTC being enabled, use drm_atomic_helper_check_plane_state() in
+ * the plane's atomic check.
 *
 * RETURNS:
- * Zero if update appears valid, error code on failure
+ * 0 if a primary plane is attached to the CRTC, or an error code otherwise
 */
-int drm_atomic_helper_check_crtc_state(struct drm_crtc_state *crtc_state,
-				       bool can_disable_primary_planes)
+int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state)
{
-	struct drm_device *dev = crtc_state->crtc->dev;
-	struct drm_atomic_state *state = crtc_state->state;
-
-	if (!crtc_state->enable)
-		return 0;
+	struct drm_crtc *crtc = crtc_state->crtc;
+	struct drm_device *dev = crtc->dev;
+	struct drm_plane *plane;

	/* needs at least one primary plane to be enabled */
-	if (!can_disable_primary_planes) {
-		bool has_primary_plane = false;
-		struct drm_plane *plane;
-
-		drm_for_each_plane_mask(plane, dev, crtc_state->plane_mask) {
-			struct drm_plane_state *plane_state;
-
-			if (plane->type != DRM_PLANE_TYPE_PRIMARY)
-				continue;
-			plane_state = drm_atomic_get_plane_state(state, plane);
-			if (IS_ERR(plane_state))
-				return PTR_ERR(plane_state);
-			if (plane_state->fb && plane_state->crtc) {
-				has_primary_plane = true;
-				break;
-			}
-		}
-		if (!has_primary_plane) {
-			drm_dbg_kms(dev, "Cannot enable CRTC without a primary plane.\n");
-			return -EINVAL;
-		}
+	drm_for_each_plane_mask(plane, dev, crtc_state->plane_mask) {
+		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
+			return 0;
	}

-	return 0;
+	drm_dbg_atomic(dev, "[CRTC:%d:%s] primary plane missing\n", crtc->base.id, crtc->name);
+
+	return -EINVAL;
}
-EXPORT_SYMBOL(drm_atomic_helper_check_crtc_state);
+EXPORT_SYMBOL(drm_atomic_helper_check_crtc_primary_plane);

/**
 * drm_atomic_helper_check_planes - validate state object for planes changes
+3 -3
drivers/gpu/drm/drm_atomic_state_helper.c
··· 464 464 EXPORT_SYMBOL(drm_atomic_helper_connector_reset); 465 465 466 466 /** 467 - * drm_atomic_helper_connector_tv_reset - Resets TV connector properties 467 + * drm_atomic_helper_connector_tv_margins_reset - Resets TV connector properties 468 468 * @connector: DRM connector 469 469 * 470 470 * Resets the TV-related properties attached to a connector. 471 471 */ 472 - void drm_atomic_helper_connector_tv_reset(struct drm_connector *connector) 472 + void drm_atomic_helper_connector_tv_margins_reset(struct drm_connector *connector) 473 473 { 474 474 struct drm_cmdline_mode *cmdline = &connector->cmdline_mode; 475 475 struct drm_connector_state *state = connector->state; ··· 479 479 state->tv.margins.top = cmdline->tv_margins.top; 480 480 state->tv.margins.bottom = cmdline->tv_margins.bottom; 481 481 } 482 - EXPORT_SYMBOL(drm_atomic_helper_connector_tv_reset); 482 + EXPORT_SYMBOL(drm_atomic_helper_connector_tv_margins_reset); 483 483 484 484 /** 485 485 * __drm_atomic_helper_connector_duplicate_state - copy atomic connector state
+4
drivers/gpu/drm/drm_atomic_uapi.c
··· 687 687 */ 688 688 return -EINVAL; 689 689 } else if (property == config->tv_select_subconnector_property) { 690 + state->tv.select_subconnector = val; 691 + } else if (property == config->tv_subconnector_property) { 690 692 state->tv.subconnector = val; 691 693 } else if (property == config->tv_left_margin_property) { 692 694 state->tv.margins.left = val; ··· 797 795 else 798 796 *val = connector->dpms; 799 797 } else if (property == config->tv_select_subconnector_property) { 798 + *val = state->tv.select_subconnector; 799 + } else if (property == config->tv_subconnector_property) { 800 800 *val = state->tv.subconnector; 801 801 } else if (property == config->tv_left_margin_property) { 802 802 *val = state->tv.margins.left;
+2 -2
drivers/gpu/drm/drm_client.c
··· 323 323 * fd_install step out of the driver backend hooks, to make that 324 324 * final step optional for internal users. 325 325 */ 326 - ret = drm_gem_vmap(buffer->gem, map); 326 + ret = drm_gem_vmap_unlocked(buffer->gem, map); 327 327 if (ret) 328 328 return ret; 329 329 ··· 345 345 { 346 346 struct iosys_map *map = &buffer->map; 347 347 348 - drm_gem_vunmap(buffer->gem, map); 348 + drm_gem_vunmap_unlocked(buffer->gem, map); 349 349 } 350 350 EXPORT_SYMBOL(drm_client_buffer_vunmap); 351 351
+26
drivers/gpu/drm/drm_crtc_helper.c
··· 434 434 } 435 435 EXPORT_SYMBOL(drm_crtc_helper_set_mode); 436 436 437 + /** 438 + * drm_crtc_helper_atomic_check() - Helper to check CRTC atomic-state 439 + * @crtc: CRTC to check 440 + * @state: atomic state object 441 + * 442 + * Provides a default CRTC-state check handler for CRTCs that only have 443 + * one primary plane attached to them. 444 + * 445 + * This is often the case for the CRTC of simple framebuffers. See also 446 + * drm_plane_helper_atomic_check() for the respective plane-state check 447 + * helper function. 448 + * 449 + * RETURNS: 450 + * Zero on success, or an errno code otherwise. 451 + */ 452 + int drm_crtc_helper_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state) 453 + { 454 + struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 455 + 456 + if (!new_crtc_state->enable) 457 + return 0; 458 + 459 + return drm_atomic_helper_check_crtc_primary_plane(new_crtc_state); 460 + } 461 + EXPORT_SYMBOL(drm_crtc_helper_atomic_check); 462 + 437 463 static void 438 464 drm_crtc_helper_disable(struct drm_crtc *crtc) 439 465 {
+222 -124
drivers/gpu/drm/drm_edid.c
··· 1572 1572 const struct edid *edid; 1573 1573 }; 1574 1574 1575 - static bool version_greater(const struct drm_edid *drm_edid, 1576 - u8 version, u8 revision) 1577 - { 1578 - const struct edid *edid = drm_edid->edid; 1579 - 1580 - return edid->version > version || 1581 - (edid->version == version && edid->revision > revision); 1582 - } 1583 - 1584 1575 static int edid_hfeeodb_extension_block_count(const struct edid *edid); 1585 1576 1586 1577 static int edid_hfeeodb_block_count(const struct edid *edid) ··· 2975 2984 BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.cvt.flags) != 15); 2976 2985 2977 2986 if (descriptor->data.other_data.data.range.flags == DRM_EDID_CVT_SUPPORT_FLAG && 2978 - descriptor->data.other_data.data.range.formula.cvt.flags & 0x10) 2987 + descriptor->data.other_data.data.range.formula.cvt.flags & DRM_EDID_CVT_FLAGS_REDUCED_BLANKING) 2979 2988 *res = true; 2980 2989 } 2981 2990 ··· 3003 3012 3004 3013 BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.flags) != 10); 3005 3014 3006 - if (descriptor->data.other_data.data.range.flags == 0x02) 3015 + if (descriptor->data.other_data.data.range.flags == DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG) 3007 3016 *res = descriptor; 3008 3017 } 3009 3018 ··· 3068 3077 return descriptor ? 
descriptor->data.other_data.data.range.formula.gtf2.j : 0; 3069 3078 } 3070 3079 3080 + static void 3081 + get_timing_level(const struct detailed_timing *descriptor, void *data) 3082 + { 3083 + int *res = data; 3084 + 3085 + if (!is_display_descriptor(descriptor, EDID_DETAIL_MONITOR_RANGE)) 3086 + return; 3087 + 3088 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.flags) != 10); 3089 + 3090 + switch (descriptor->data.other_data.data.range.flags) { 3091 + case DRM_EDID_DEFAULT_GTF_SUPPORT_FLAG: 3092 + *res = LEVEL_GTF; 3093 + break; 3094 + case DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG: 3095 + *res = LEVEL_GTF2; 3096 + break; 3097 + case DRM_EDID_CVT_SUPPORT_FLAG: 3098 + *res = LEVEL_CVT; 3099 + break; 3100 + default: 3101 + break; 3102 + } 3103 + } 3104 + 3071 3105 /* Get standard timing level (CVT/GTF/DMT). */ 3072 3106 static int standard_timing_level(const struct drm_edid *drm_edid) 3073 3107 { 3074 3108 const struct edid *edid = drm_edid->edid; 3075 3109 3076 - if (edid->revision >= 2) { 3077 - if (edid->revision >= 4 && (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF)) 3078 - return LEVEL_CVT; 3079 - if (drm_gtf2_hbreak(drm_edid)) 3080 - return LEVEL_GTF2; 3081 - if (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF) 3082 - return LEVEL_GTF; 3110 + if (edid->revision >= 4) { 3111 + /* 3112 + * If the range descriptor doesn't 3113 + * indicate otherwise default to CVT 3114 + */ 3115 + int ret = LEVEL_CVT; 3116 + 3117 + drm_for_each_detailed_block(drm_edid, get_timing_level, &ret); 3118 + 3119 + return ret; 3120 + } else if (edid->revision >= 3 && drm_gtf2_hbreak(drm_edid)) { 3121 + return LEVEL_GTF2; 3122 + } else if (edid->revision >= 2) { 3123 + return LEVEL_GTF; 3124 + } else { 3125 + return LEVEL_DMT; 3083 3126 } 3084 - return LEVEL_DMT; 3085 3127 } 3086 3128 3087 3129 /* ··· 3135 3111 return 0; 3136 3112 3137 3113 return DIV_ROUND_CLOSEST(mode->clock, mode->htotal); 3114 + } 3115 + 3116 + static struct drm_display_mode * 3117 + 
drm_gtf2_mode(struct drm_device *dev, 3118 + const struct drm_edid *drm_edid, 3119 + int hsize, int vsize, int vrefresh_rate) 3120 + { 3121 + struct drm_display_mode *mode; 3122 + 3123 + /* 3124 + * This is potentially wrong if there's ever a monitor with 3125 + * more than one ranges section, each claiming a different 3126 + * secondary GTF curve. Please don't do that. 3127 + */ 3128 + mode = drm_gtf_mode(dev, hsize, vsize, vrefresh_rate, 0, 0); 3129 + if (!mode) 3130 + return NULL; 3131 + 3132 + if (drm_mode_hsync(mode) > drm_gtf2_hbreak(drm_edid)) { 3133 + drm_mode_destroy(dev, mode); 3134 + mode = drm_gtf_mode_complex(dev, hsize, vsize, 3135 + vrefresh_rate, 0, 0, 3136 + drm_gtf2_m(drm_edid), 3137 + drm_gtf2_2c(drm_edid), 3138 + drm_gtf2_k(drm_edid), 3139 + drm_gtf2_2j(drm_edid)); 3140 + } 3141 + 3142 + return mode; 3138 3143 } 3139 3144 3140 3145 /* ··· 3254 3201 mode = drm_gtf_mode(dev, hsize, vsize, vrefresh_rate, 0, 0); 3255 3202 break; 3256 3203 case LEVEL_GTF2: 3257 - /* 3258 - * This is potentially wrong if there's ever a monitor with 3259 - * more than one ranges section, each claiming a different 3260 - * secondary GTF curve. Please don't do that. 
3261 - */ 3262 - mode = drm_gtf_mode(dev, hsize, vsize, vrefresh_rate, 0, 0); 3263 - if (!mode) 3264 - return NULL; 3265 - if (drm_mode_hsync(mode) > drm_gtf2_hbreak(drm_edid)) { 3266 - drm_mode_destroy(dev, mode); 3267 - mode = drm_gtf_mode_complex(dev, hsize, vsize, 3268 - vrefresh_rate, 0, 0, 3269 - drm_gtf2_m(drm_edid), 3270 - drm_gtf2_2c(drm_edid), 3271 - drm_gtf2_k(drm_edid), 3272 - drm_gtf2_2j(drm_edid)); 3273 - } 3204 + mode = drm_gtf2_mode(dev, drm_edid, hsize, vsize, vrefresh_rate); 3274 3205 break; 3275 3206 case LEVEL_CVT: 3276 3207 mode = drm_cvt_mode(dev, hsize, vsize, vrefresh_rate, 0, 0, ··· 3452 3415 return 0; 3453 3416 3454 3417 /* 1.4 with CVT support gives us real precision, yay */ 3455 - if (edid->revision >= 4 && t[10] == 0x04) 3418 + if (edid->revision >= 4 && t[10] == DRM_EDID_CVT_SUPPORT_FLAG) 3456 3419 return (t[9] * 10000) - ((t[12] >> 2) * 250); 3457 3420 3458 3421 /* 1.3 is pathetic, so fuzz up a bit */ ··· 3478 3441 return false; 3479 3442 3480 3443 /* 1.4 max horizontal check */ 3481 - if (edid->revision >= 4 && t[10] == 0x04) 3444 + if (edid->revision >= 4 && t[10] == DRM_EDID_CVT_SUPPORT_FLAG) 3482 3445 if (t[13] && mode->hdisplay > 8 * (t[13] + (256 * (t[12]&0x3)))) 3483 3446 return false; 3484 3447 ··· 3570 3533 return modes; 3571 3534 } 3572 3535 3536 + static int drm_gtf2_modes_for_range(struct drm_connector *connector, 3537 + const struct drm_edid *drm_edid, 3538 + const struct detailed_timing *timing) 3539 + { 3540 + int i, modes = 0; 3541 + struct drm_display_mode *newmode; 3542 + struct drm_device *dev = connector->dev; 3543 + 3544 + for (i = 0; i < ARRAY_SIZE(extra_modes); i++) { 3545 + const struct minimode *m = &extra_modes[i]; 3546 + 3547 + newmode = drm_gtf2_mode(dev, drm_edid, m->w, m->h, m->r); 3548 + if (!newmode) 3549 + return modes; 3550 + 3551 + drm_mode_fixup_1366x768(newmode); 3552 + if (!mode_in_range(newmode, drm_edid, timing) || 3553 + !valid_inferred_mode(connector, newmode)) { 3554 + drm_mode_destroy(dev, 
newmode); 3555 + continue; 3556 + } 3557 + 3558 + drm_mode_probed_add(connector, newmode); 3559 + modes++; 3560 + } 3561 + 3562 + return modes; 3563 + } 3564 + 3573 3565 static int drm_cvt_modes_for_range(struct drm_connector *connector, 3574 3566 const struct drm_edid *drm_edid, 3575 3567 const struct detailed_timing *timing) ··· 3643 3577 closure->drm_edid, 3644 3578 timing); 3645 3579 3646 - if (!version_greater(closure->drm_edid, 1, 1)) 3580 + if (closure->drm_edid->edid->revision < 2) 3647 3581 return; /* GTF not defined yet */ 3648 3582 3649 3583 switch (range->flags) { 3650 - case 0x02: /* secondary gtf, XXX could do more */ 3651 - case 0x00: /* default gtf */ 3584 + case DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG: 3585 + closure->modes += drm_gtf2_modes_for_range(closure->connector, 3586 + closure->drm_edid, 3587 + timing); 3588 + break; 3589 + case DRM_EDID_DEFAULT_GTF_SUPPORT_FLAG: 3652 3590 closure->modes += drm_gtf_modes_for_range(closure->connector, 3653 3591 closure->drm_edid, 3654 3592 timing); 3655 3593 break; 3656 - case 0x04: /* cvt, only in 1.4+ */ 3657 - if (!version_greater(closure->drm_edid, 1, 3)) 3594 + case DRM_EDID_CVT_SUPPORT_FLAG: 3595 + if (closure->drm_edid->edid->revision < 4) 3658 3596 break; 3659 3597 3660 3598 closure->modes += drm_cvt_modes_for_range(closure->connector, 3661 3599 closure->drm_edid, 3662 3600 timing); 3663 3601 break; 3664 - case 0x01: /* just the ranges, no formula */ 3602 + case DRM_EDID_RANGE_LIMITS_ONLY_FLAG: 3665 3603 default: 3666 3604 break; 3667 3605 } ··· 3679 3609 .drm_edid = drm_edid, 3680 3610 }; 3681 3611 3682 - if (version_greater(drm_edid, 1, 0)) 3612 + if (drm_edid->edid->revision >= 1) 3683 3613 drm_for_each_detailed_block(drm_edid, do_inferred_modes, &closure); 3684 3614 3685 3615 return closure.modes; ··· 3756 3686 } 3757 3687 } 3758 3688 3759 - if (version_greater(drm_edid, 1, 0)) 3689 + if (edid->revision >= 1) 3760 3690 drm_for_each_detailed_block(drm_edid, do_established_modes, 3761 3691 &closure); 
3762 3692 ··· 3811 3741 } 3812 3742 } 3813 3743 3814 - if (version_greater(drm_edid, 1, 0)) 3744 + if (drm_edid->edid->revision >= 1) 3815 3745 drm_for_each_detailed_block(drm_edid, do_standard_modes, 3816 3746 &closure); 3817 3747 ··· 3891 3821 .drm_edid = drm_edid, 3892 3822 }; 3893 3823 3894 - if (version_greater(drm_edid, 1, 2)) 3824 + if (drm_edid->edid->revision >= 3) 3895 3825 drm_for_each_detailed_block(drm_edid, do_cvt_mode, &closure); 3896 3826 3897 3827 /* XXX should also look for CVT codes in VTB blocks */ ··· 3943 3873 struct detailed_mode_closure closure = { 3944 3874 .connector = connector, 3945 3875 .drm_edid = drm_edid, 3946 - .preferred = true, 3947 3876 .quirks = quirks, 3948 3877 }; 3949 3878 3950 - if (closure.preferred && !version_greater(drm_edid, 1, 3)) 3879 + if (drm_edid->edid->revision >= 4) 3880 + closure.preferred = true; /* first detailed timing is always preferred */ 3881 + else 3951 3882 closure.preferred = 3952 - (drm_edid->edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING); 3883 + drm_edid->edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING; 3953 3884 3954 3885 drm_for_each_detailed_block(drm_edid, do_detailed_mode, &closure); 3955 3886 ··· 5823 5752 hdmi->y420_dc_modes = dc_mask; 5824 5753 } 5825 5754 5755 + static void drm_parse_dsc_info(struct drm_hdmi_dsc_cap *hdmi_dsc, 5756 + const u8 *hf_scds) 5757 + { 5758 + hdmi_dsc->v_1p2 = hf_scds[11] & DRM_EDID_DSC_1P2; 5759 + 5760 + if (!hdmi_dsc->v_1p2) 5761 + return; 5762 + 5763 + hdmi_dsc->native_420 = hf_scds[11] & DRM_EDID_DSC_NATIVE_420; 5764 + hdmi_dsc->all_bpp = hf_scds[11] & DRM_EDID_DSC_ALL_BPP; 5765 + 5766 + if (hf_scds[11] & DRM_EDID_DSC_16BPC) 5767 + hdmi_dsc->bpc_supported = 16; 5768 + else if (hf_scds[11] & DRM_EDID_DSC_12BPC) 5769 + hdmi_dsc->bpc_supported = 12; 5770 + else if (hf_scds[11] & DRM_EDID_DSC_10BPC) 5771 + hdmi_dsc->bpc_supported = 10; 5772 + else 5773 + /* Supports min 8 BPC if DSC 1.2 is supported*/ 5774 + hdmi_dsc->bpc_supported = 8; 5775 + 5776 + if 
(cea_db_payload_len(hf_scds) >= 12 && hf_scds[12]) { 5777 + u8 dsc_max_slices; 5778 + u8 dsc_max_frl_rate; 5779 + 5780 + dsc_max_frl_rate = (hf_scds[12] & DRM_EDID_DSC_MAX_FRL_RATE_MASK) >> 4; 5781 + drm_get_max_frl_rate(dsc_max_frl_rate, &hdmi_dsc->max_lanes, 5782 + &hdmi_dsc->max_frl_rate_per_lane); 5783 + 5784 + dsc_max_slices = hf_scds[12] & DRM_EDID_DSC_MAX_SLICES; 5785 + 5786 + switch (dsc_max_slices) { 5787 + case 1: 5788 + hdmi_dsc->max_slices = 1; 5789 + hdmi_dsc->clk_per_slice = 340; 5790 + break; 5791 + case 2: 5792 + hdmi_dsc->max_slices = 2; 5793 + hdmi_dsc->clk_per_slice = 340; 5794 + break; 5795 + case 3: 5796 + hdmi_dsc->max_slices = 4; 5797 + hdmi_dsc->clk_per_slice = 340; 5798 + break; 5799 + case 4: 5800 + hdmi_dsc->max_slices = 8; 5801 + hdmi_dsc->clk_per_slice = 340; 5802 + break; 5803 + case 5: 5804 + hdmi_dsc->max_slices = 8; 5805 + hdmi_dsc->clk_per_slice = 400; 5806 + break; 5807 + case 6: 5808 + hdmi_dsc->max_slices = 12; 5809 + hdmi_dsc->clk_per_slice = 400; 5810 + break; 5811 + case 7: 5812 + hdmi_dsc->max_slices = 16; 5813 + hdmi_dsc->clk_per_slice = 400; 5814 + break; 5815 + case 0: 5816 + default: 5817 + hdmi_dsc->max_slices = 0; 5818 + hdmi_dsc->clk_per_slice = 0; 5819 + } 5820 + } 5821 + 5822 + if (cea_db_payload_len(hf_scds) >= 13 && hf_scds[13]) 5823 + hdmi_dsc->total_chunk_kbytes = hf_scds[13] & DRM_EDID_DSC_TOTAL_CHUNK_KBYTES; 5824 + } 5825 + 5826 5826 /* Sink Capability Data Structure */ 5827 5827 static void drm_parse_hdmi_forum_scds(struct drm_connector *connector, 5828 5828 const u8 *hf_scds) 5829 5829 { 5830 5830 struct drm_display_info *display = &connector->display_info; 5831 5831 struct drm_hdmi_info *hdmi = &display->hdmi; 5832 + struct drm_hdmi_dsc_cap *hdmi_dsc = &hdmi->dsc_cap; 5833 + int max_tmds_clock = 0; 5834 + u8 max_frl_rate = 0; 5835 + bool dsc_support = false; 5832 5836 5833 5837 display->has_hdmi_infoframe = true; 5834 5838 ··· 5923 5777 */ 5924 5778 5925 5779 if (hf_scds[5]) { 5926 - /* max clock is 5000 
KHz times block value */ 5927 - u32 max_tmds_clock = hf_scds[5] * 5000; 5928 5780 struct drm_scdc *scdc = &hdmi->scdc; 5781 + 5782 + /* max clock is 5000 KHz times block value */ 5783 + max_tmds_clock = hf_scds[5] * 5000; 5929 5784 5930 5785 if (max_tmds_clock > 340000) { 5931 5786 display->max_tmds_clock = max_tmds_clock; 5932 - DRM_DEBUG_KMS("HF-VSDB: max TMDS clock %d kHz\n", 5933 - display->max_tmds_clock); 5934 5787 } 5935 5788 5936 5789 if (scdc->supported) { ··· 5942 5797 } 5943 5798 5944 5799 if (hf_scds[7]) { 5945 - u8 max_frl_rate; 5946 - u8 dsc_max_frl_rate; 5947 - u8 dsc_max_slices; 5948 - struct drm_hdmi_dsc_cap *hdmi_dsc = &hdmi->dsc_cap; 5949 - 5950 - DRM_DEBUG_KMS("hdmi_21 sink detected. parsing edid\n"); 5951 5800 max_frl_rate = (hf_scds[7] & DRM_EDID_MAX_FRL_RATE_MASK) >> 4; 5952 5801 drm_get_max_frl_rate(max_frl_rate, &hdmi->max_lanes, 5953 5802 &hdmi->max_frl_rate_per_lane); 5954 - hdmi_dsc->v_1p2 = hf_scds[11] & DRM_EDID_DSC_1P2; 5955 - 5956 - if (hdmi_dsc->v_1p2) { 5957 - hdmi_dsc->native_420 = hf_scds[11] & DRM_EDID_DSC_NATIVE_420; 5958 - hdmi_dsc->all_bpp = hf_scds[11] & DRM_EDID_DSC_ALL_BPP; 5959 - 5960 - if (hf_scds[11] & DRM_EDID_DSC_16BPC) 5961 - hdmi_dsc->bpc_supported = 16; 5962 - else if (hf_scds[11] & DRM_EDID_DSC_12BPC) 5963 - hdmi_dsc->bpc_supported = 12; 5964 - else if (hf_scds[11] & DRM_EDID_DSC_10BPC) 5965 - hdmi_dsc->bpc_supported = 10; 5966 - else 5967 - hdmi_dsc->bpc_supported = 0; 5968 - 5969 - dsc_max_frl_rate = (hf_scds[12] & DRM_EDID_DSC_MAX_FRL_RATE_MASK) >> 4; 5970 - drm_get_max_frl_rate(dsc_max_frl_rate, &hdmi_dsc->max_lanes, 5971 - &hdmi_dsc->max_frl_rate_per_lane); 5972 - hdmi_dsc->total_chunk_kbytes = hf_scds[13] & DRM_EDID_DSC_TOTAL_CHUNK_KBYTES; 5973 - 5974 - dsc_max_slices = hf_scds[12] & DRM_EDID_DSC_MAX_SLICES; 5975 - switch (dsc_max_slices) { 5976 - case 1: 5977 - hdmi_dsc->max_slices = 1; 5978 - hdmi_dsc->clk_per_slice = 340; 5979 - break; 5980 - case 2: 5981 - hdmi_dsc->max_slices = 2; 5982 - 
hdmi_dsc->clk_per_slice = 340; 5983 - break; 5984 - case 3: 5985 - hdmi_dsc->max_slices = 4; 5986 - hdmi_dsc->clk_per_slice = 340; 5987 - break; 5988 - case 4: 5989 - hdmi_dsc->max_slices = 8; 5990 - hdmi_dsc->clk_per_slice = 340; 5991 - break; 5992 - case 5: 5993 - hdmi_dsc->max_slices = 8; 5994 - hdmi_dsc->clk_per_slice = 400; 5995 - break; 5996 - case 6: 5997 - hdmi_dsc->max_slices = 12; 5998 - hdmi_dsc->clk_per_slice = 400; 5999 - break; 6000 - case 7: 6001 - hdmi_dsc->max_slices = 16; 6002 - hdmi_dsc->clk_per_slice = 400; 6003 - break; 6004 - case 0: 6005 - default: 6006 - hdmi_dsc->max_slices = 0; 6007 - hdmi_dsc->clk_per_slice = 0; 6008 - } 6009 - } 6010 5803 } 6011 5804 6012 5805 drm_parse_ycbcr420_deep_color_info(connector, hf_scds); 5806 + 5807 + if (cea_db_payload_len(hf_scds) >= 11 && hf_scds[11]) { 5808 + drm_parse_dsc_info(hdmi_dsc, hf_scds); 5809 + dsc_support = true; 5810 + } 5811 + 5812 + drm_dbg_kms(connector->dev, 5813 + "HF-VSDB: max TMDS clock: %d KHz, HDMI 2.1 support: %s, DSC 1.2 support: %s\n", 5814 + max_tmds_clock, str_yes_no(max_frl_rate), str_yes_no(dsc_support)); 6013 5815 } 6014 5816 6015 5817 static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector, ··· 6125 6033 return; 6126 6034 6127 6035 /* 6128 - * Check for flag range limits only. If flag == 1 then 6129 - * no additional timing information provided. 6130 - * Default GTF, GTF Secondary curve and CVT are not 6131 - * supported 6036 + * These limits are used to determine the VRR refresh 6037 + * rate range. Only the "range limits only" variant 6038 + * of the range descriptor seems to guarantee that 6039 + * any and all timings are accepted by the sink, as 6040 + * opposed to just timings conforming to the indicated 6041 + * formula (GTF/GTF2/CVT). Thus other variants of the 6042 + * range descriptor are not accepted here. 
6132 6043 */ 6133 6044 if (range->flags != DRM_EDID_RANGE_LIMITS_ONLY_FLAG) 6134 6045 return; ··· 6156 6061 .drm_edid = drm_edid, 6157 6062 }; 6158 6063 6159 - if (!version_greater(drm_edid, 1, 1)) 6064 + if (drm_edid->edid->revision < 4) 6065 + return; 6066 + 6067 + if (!(drm_edid->edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ)) 6160 6068 return; 6161 6069 6162 6070 drm_for_each_detailed_block(drm_edid, get_monitor_range, &closure); ··· 6488 6390 num_modes += add_cea_modes(connector, drm_edid); 6489 6391 num_modes += add_alternate_cea_modes(connector, drm_edid); 6490 6392 num_modes += add_displayid_detailed_modes(connector, drm_edid); 6491 - if (drm_edid->edid->features & DRM_EDID_FEATURE_DEFAULT_GTF) 6393 + if (drm_edid->edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ) 6492 6394 num_modes += add_inferred_modes(connector, drm_edid); 6493 6395 6494 6396 if (quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75)) ··· 6935 6837 * by non-zero YQ when receiving RGB. There doesn't seem to be any 6936 6838 * good way to tell which version of CEA-861 the sink supports, so 6937 6839 * we limit non-zero YQ to HDMI 2.0 sinks only as HDMI 2.0 is based 6938 - * on on CEA-861-F. 6840 + * on CEA-861-F. 6939 6841 */ 6940 6842 if (!is_hdmi2_sink(connector) || 6941 6843 rgb_quant_range == HDMI_QUANTIZATION_RANGE_LIMITED)
+10
drivers/gpu/drm/drm_format_helper.c
··· 660 660 drm_fb_xrgb8888_to_rgb565(dst, dst_pitch, src, fb, clip, false); 661 661 return 0; 662 662 } 663 + } else if (dst_format == (DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN)) { 664 + if (fb_format == DRM_FORMAT_RGB565) { 665 + drm_fb_swab(dst, dst_pitch, src, fb, clip, false); 666 + return 0; 667 + } 663 668 } else if (dst_format == DRM_FORMAT_RGB888) { 664 669 if (fb_format == DRM_FORMAT_XRGB8888) { 665 670 drm_fb_xrgb8888_to_rgb888(dst, dst_pitch, src, fb, clip); ··· 681 676 } else if (dst_format == DRM_FORMAT_XRGB2101010) { 682 677 if (fb_format == DRM_FORMAT_XRGB8888) { 683 678 drm_fb_xrgb8888_to_xrgb2101010(dst, dst_pitch, src, fb, clip); 679 + return 0; 680 + } 681 + } else if (dst_format == DRM_FORMAT_BGRX8888) { 682 + if (fb_format == DRM_FORMAT_XRGB8888) { 683 + drm_fb_swab(dst, dst_pitch, src, fb, clip, false); 684 684 return 0; 685 685 } 686 686 }
+24
drivers/gpu/drm/drm_gem.c
··· 1158 1158 { 1159 1159 int ret; 1160 1160 1161 + dma_resv_assert_held(obj->resv); 1162 + 1161 1163 if (!obj->funcs->vmap) 1162 1164 return -EOPNOTSUPP; 1163 1165 ··· 1175 1173 1176 1174 void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 1177 1175 { 1176 + dma_resv_assert_held(obj->resv); 1177 + 1178 1178 if (iosys_map_is_null(map)) 1179 1179 return; 1180 1180 ··· 1187 1183 iosys_map_clear(map); 1188 1184 } 1189 1185 EXPORT_SYMBOL(drm_gem_vunmap); 1186 + 1187 + int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) 1188 + { 1189 + int ret; 1190 + 1191 + dma_resv_lock(obj->resv, NULL); 1192 + ret = drm_gem_vmap(obj, map); 1193 + dma_resv_unlock(obj->resv); 1194 + 1195 + return ret; 1196 + } 1197 + EXPORT_SYMBOL(drm_gem_vmap_unlocked); 1198 + 1199 + void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) 1200 + { 1201 + dma_resv_lock(obj->resv, NULL); 1202 + drm_gem_vunmap(obj, map); 1203 + dma_resv_unlock(obj->resv); 1204 + } 1205 + EXPORT_SYMBOL(drm_gem_vunmap_unlocked); 1190 1206 1191 1207 /** 1192 1208 * drm_gem_lock_reservations - Sets up the ww context and acquires
+3 -3
drivers/gpu/drm/drm_gem_dma_helper.c
··· 230 230 231 231 if (gem_obj->import_attach) { 232 232 if (dma_obj->vaddr) 233 - dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map); 233 + dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map); 234 234 drm_prime_gem_destroy(gem_obj, dma_obj->sgt); 235 235 } else if (dma_obj->vaddr) { 236 236 if (dma_obj->map_noncoherent) ··· 581 581 struct iosys_map map; 582 582 int ret; 583 583 584 - ret = dma_buf_vmap(attach->dmabuf, &map); 584 + ret = dma_buf_vmap_unlocked(attach->dmabuf, &map); 585 585 if (ret) { 586 586 DRM_ERROR("Failed to vmap PRIME buffer\n"); 587 587 return ERR_PTR(ret); ··· 589 589 590 590 obj = drm_gem_dma_prime_import_sg_table(dev, attach, sgt); 591 591 if (IS_ERR(obj)) { 592 - dma_buf_vunmap(attach->dmabuf, &map); 592 + dma_buf_vunmap_unlocked(attach->dmabuf, &map); 593 593 return obj; 594 594 } 595 595
+3 -3
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 354 354 ret = -EINVAL; 355 355 goto err_drm_gem_vunmap; 356 356 } 357 - ret = drm_gem_vmap(obj, &map[i]); 357 + ret = drm_gem_vmap_unlocked(obj, &map[i]); 358 358 if (ret) 359 359 goto err_drm_gem_vunmap; 360 360 } ··· 376 376 obj = drm_gem_fb_get_obj(fb, i); 377 377 if (!obj) 378 378 continue; 379 - drm_gem_vunmap(obj, &map[i]); 379 + drm_gem_vunmap_unlocked(obj, &map[i]); 380 380 } 381 381 return ret; 382 382 } ··· 403 403 continue; 404 404 if (iosys_map_is_null(&map[i])) 405 405 continue; 406 - drm_gem_vunmap(obj, &map[i]); 406 + drm_gem_vunmap_unlocked(obj, &map[i]); 407 407 } 408 408 } 409 409 EXPORT_SYMBOL(drm_gem_fb_vunmap);
+1 -8
drivers/gpu/drm/drm_gem_ttm_helper.c
··· 64 64 struct iosys_map *map) 65 65 { 66 66 struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 67 - int ret; 68 67 69 - dma_resv_lock(gem->resv, NULL); 70 - ret = ttm_bo_vmap(bo, map); 71 - dma_resv_unlock(gem->resv); 72 - 73 - return ret; 68 + return ttm_bo_vmap(bo, map); 74 69 } 75 70 EXPORT_SYMBOL(drm_gem_ttm_vmap); 76 71 ··· 82 87 { 83 88 struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 84 89 85 - dma_resv_lock(gem->resv, NULL); 86 90 ttm_bo_vunmap(bo, map); 87 - dma_resv_unlock(gem->resv); 88 91 } 89 92 EXPORT_SYMBOL(drm_gem_ttm_vunmap); 90 93
+13 -9
drivers/gpu/drm/drm_modes.c
··· 1801 1801 1802 1802 name = mode_option; 1803 1803 1804 - /* Try to locate the bpp and refresh specifiers, if any */ 1805 - bpp_ptr = strchr(name, '-'); 1806 - if (bpp_ptr) 1807 - bpp_off = bpp_ptr - name; 1808 - 1809 - refresh_ptr = strchr(name, '@'); 1810 - if (refresh_ptr) 1811 - refresh_off = refresh_ptr - name; 1812 - 1813 1804 /* Locate the start of named options */ 1814 1805 options_ptr = strchr(name, ','); 1815 1806 if (options_ptr) 1816 1807 options_off = options_ptr - name; 1808 + else 1809 + options_off = strlen(name); 1810 + 1811 + /* Try to locate the bpp and refresh specifiers, if any */ 1812 + bpp_ptr = strnchr(name, options_off, '-'); 1813 + while (bpp_ptr && !isdigit(bpp_ptr[1])) 1814 + bpp_ptr = strnchr(bpp_ptr + 1, options_off, '-'); 1815 + if (bpp_ptr) 1816 + bpp_off = bpp_ptr - name; 1817 + 1818 + refresh_ptr = strnchr(name, options_off, '@'); 1819 + if (refresh_ptr) 1820 + refresh_off = refresh_ptr - name; 1817 1821 1818 1822 /* Locate the end of the name / resolution, and parse it */ 1819 1823 if (bpp_ptr) {
+3 -1
drivers/gpu/drm/drm_plane_helper.c
··· 298 298 * scale and positioning are not expected to change since the plane is always 299 299 * a fullscreen scanout buffer. 300 300 * 301 - * This is often the case for the primary plane of simple framebuffers. 301 + * This is often the case for the primary plane of simple framebuffers. See 302 + * also drm_crtc_helper_atomic_check() for the respective CRTC-state check 303 + * helper function. 302 304 * 303 305 * RETURNS: 304 306 * Zero on success, or an errno code otherwise.
+3 -3
drivers/gpu/drm/drm_prime.c
··· 940 940 941 941 get_dma_buf(dma_buf); 942 942 943 - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); 943 + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); 944 944 if (IS_ERR(sgt)) { 945 945 ret = PTR_ERR(sgt); 946 946 goto fail_detach; ··· 958 958 return obj; 959 959 960 960 fail_unmap: 961 - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); 961 + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); 962 962 fail_detach: 963 963 dma_buf_detach(dma_buf, attach); 964 964 dma_buf_put(dma_buf); ··· 1056 1056 1057 1057 attach = obj->import_attach; 1058 1058 if (sg) 1059 - dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL); 1059 + dma_buf_unmap_attachment_unlocked(attach, sg, DMA_BIDIRECTIONAL); 1060 1060 dma_buf = attach->dmabuf; 1061 1061 dma_buf_detach(attach->dmabuf, attach); 1062 1062 /* remove the reference */
+5 -1
drivers/gpu/drm/drm_simple_kms_helper.c
··· 102 102 struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 103 103 int ret; 104 104 105 - ret = drm_atomic_helper_check_crtc_state(crtc_state, false); 105 + if (!crtc_state->enable) 106 + goto out; 107 + 108 + ret = drm_atomic_helper_check_crtc_primary_plane(crtc_state); 106 109 if (ret) 107 110 return ret; 108 111 112 + out: 109 113 return drm_atomic_add_affected_planes(state, crtc); 110 114 } 111 115
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
··· 65 65 struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr); 66 66 67 67 if (etnaviv_obj->vaddr) 68 - dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map); 68 + dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map); 69 69 70 70 /* Don't drop the pages for imported dmabuf, as they are not 71 71 * ours, just free the array we allocated:
+3 -3
drivers/gpu/drm/gma500/framebuffer.c
··· 286 286 287 287 info->fbops = &psbfb_unaccel_ops; 288 288 289 - info->fix.smem_start = dev->mode_config.fb_base; 289 + info->fix.smem_start = dev_priv->fb_base; 290 290 info->fix.smem_len = size; 291 291 info->fix.ywrapstep = 0; 292 292 info->fix.ypanstep = 0; ··· 296 296 info->screen_size = size; 297 297 298 298 if (dev_priv->gtt.stolen_size) { 299 - info->apertures->ranges[0].base = dev->mode_config.fb_base; 299 + info->apertures->ranges[0].base = dev_priv->fb_base; 300 300 info->apertures->ranges[0].size = dev_priv->gtt.stolen_size; 301 301 } 302 302 ··· 527 527 528 528 /* set memory base */ 529 529 /* Oaktrail and Poulsbo should use BAR 2*/ 530 - pci_read_config_dword(pdev, PSB_BSM, (u32 *)&(dev->mode_config.fb_base)); 530 + pci_read_config_dword(pdev, PSB_BSM, (u32 *)&(dev_priv->fb_base)); 531 531 532 532 /* num pipes is 2 for PSB but 1 for Mrst */ 533 533 for (i = 0; i < dev_priv->num_pipe; i++)
+1
drivers/gpu/drm/gma500/psb_drv.h
··· 523 523 uint32_t blc_adj2; 524 524 525 525 struct drm_fb_helper *fb_helper; 526 + resource_size_t fb_base; 526 527 527 528 bool dsr_enable; 528 529 u32 dsr_fb_update;
+1 -1
drivers/gpu/drm/gud/gud_connector.c
··· 355 355 drm_atomic_helper_connector_reset(connector); 356 356 connector->state->tv = gconn->initial_tv_state; 357 357 /* Set margins from command line */ 358 - drm_atomic_helper_connector_tv_reset(connector); 358 + drm_atomic_helper_connector_tv_margins_reset(connector); 359 359 if (gconn->initial_brightness >= 0) 360 360 connector->state->tv.brightness = gconn->initial_brightness; 361 361 }
+3 -13
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 105 105 dev->mode_config.max_width = 1920; 106 106 dev->mode_config.max_height = 1200; 107 107 108 - dev->mode_config.fb_base = priv->fb_base; 109 108 dev->mode_config.preferred_depth = 32; 110 109 dev->mode_config.prefer_shadow = 1; 111 110 ··· 211 212 { 212 213 struct drm_device *dev = &priv->dev; 213 214 struct pci_dev *pdev = to_pci_dev(dev->dev); 214 - resource_size_t addr, size, ioaddr, iosize; 215 + resource_size_t ioaddr, iosize; 215 216 216 217 ioaddr = pci_resource_start(pdev, 1); 217 218 iosize = pci_resource_len(pdev, 1); ··· 220 221 drm_err(dev, "Cannot map mmio region\n"); 221 222 return -ENOMEM; 222 223 } 223 - 224 - addr = pci_resource_start(pdev, 0); 225 - size = pci_resource_len(pdev, 0); 226 - priv->fb_map = devm_ioremap(dev->dev, addr, size); 227 - if (!priv->fb_map) { 228 - drm_err(dev, "Cannot map framebuffer\n"); 229 - return -ENOMEM; 230 - } 231 - priv->fb_base = addr; 232 - priv->fb_size = size; 233 224 234 225 return 0; 235 226 } ··· 260 271 if (ret) 261 272 goto err; 262 273 263 - ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0), priv->fb_size); 274 + ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0), 275 + pci_resource_len(pdev, 0)); 264 276 if (ret) { 265 277 drm_err(dev, "Error initializing VRAM MM; %d\n", ret); 266 278 goto err;
-3
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
··· 32 32 struct hibmc_drm_private { 33 33 /* hw */ 34 34 void __iomem *mmio; 35 - void __iomem *fb_map; 36 - resource_size_t fb_base; 37 - resource_size_t fb_size; 38 35 39 36 /* drm */ 40 37 struct drm_device dev;
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
··· 72 72 struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); 73 73 void *vaddr; 74 74 75 - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); 75 + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); 76 76 if (IS_ERR(vaddr)) 77 77 return PTR_ERR(vaddr); 78 78
+14
drivers/gpu/drm/i915/gem/i915_gem_object.c
··· 290 290 __i915_gem_object_free_mmaps(obj); 291 291 292 292 atomic_set(&obj->mm.pages_pin_count, 0); 293 + 294 + /* 295 + * dma_buf_unmap_attachment() requires reservation to be 296 + * locked. The imported GEM shouldn't share reservation lock 297 + * and ttm_bo_cleanup_memtype_use() shouldn't be invoked for 298 + * dma-buf, so it's safe to take the lock. 299 + */ 300 + if (obj->base.import_attach) 301 + i915_gem_object_lock(obj, NULL); 302 + 293 303 __i915_gem_object_put_pages(obj); 304 + 305 + if (obj->base.import_attach) 306 + i915_gem_object_unlock(obj); 307 + 294 308 GEM_BUG_ON(i915_gem_object_has_pages(obj)); 295 309 } 296 310
+8 -8
drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
··· 213 213 goto out_import; 214 214 } 215 215 216 - st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL); 216 + st = dma_buf_map_attachment_unlocked(import_attach, DMA_BIDIRECTIONAL); 217 217 if (IS_ERR(st)) { 218 218 err = PTR_ERR(st); 219 219 goto out_detach; ··· 226 226 timeout = -ETIME; 227 227 } 228 228 err = timeout > 0 ? 0 : timeout; 229 - dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL); 229 + dma_buf_unmap_attachment_unlocked(import_attach, st, DMA_BIDIRECTIONAL); 230 230 out_detach: 231 231 dma_buf_detach(dmabuf, import_attach); 232 232 out_import: ··· 296 296 goto out_obj; 297 297 } 298 298 299 - err = dma_buf_vmap(dmabuf, &map); 299 + err = dma_buf_vmap_unlocked(dmabuf, &map); 300 300 dma_map = err ? NULL : map.vaddr; 301 301 if (!dma_map) { 302 302 pr_err("dma_buf_vmap failed\n"); ··· 337 337 338 338 err = 0; 339 339 out_dma_map: 340 - dma_buf_vunmap(dmabuf, &map); 340 + dma_buf_vunmap_unlocked(dmabuf, &map); 341 341 out_obj: 342 342 i915_gem_object_put(obj); 343 343 out_dmabuf: ··· 358 358 if (IS_ERR(dmabuf)) 359 359 return PTR_ERR(dmabuf); 360 360 361 - err = dma_buf_vmap(dmabuf, &map); 361 + err = dma_buf_vmap_unlocked(dmabuf, &map); 362 362 ptr = err ? NULL : map.vaddr; 363 363 if (!ptr) { 364 364 pr_err("dma_buf_vmap failed\n"); ··· 367 367 } 368 368 369 369 memset(ptr, 0xc5, PAGE_SIZE); 370 - dma_buf_vunmap(dmabuf, &map); 370 + dma_buf_vunmap_unlocked(dmabuf, &map); 371 371 372 372 obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf)); 373 373 if (IS_ERR(obj)) { ··· 418 418 } 419 419 i915_gem_object_put(obj); 420 420 421 - err = dma_buf_vmap(dmabuf, &map); 421 + err = dma_buf_vmap_unlocked(dmabuf, &map); 422 422 ptr = err ? NULL : map.vaddr; 423 423 if (!ptr) { 424 424 pr_err("dma_buf_vmap failed\n"); ··· 435 435 memset(ptr, 0xc5, dmabuf->size); 436 436 437 437 err = 0; 438 - dma_buf_vunmap(dmabuf, &map); 438 + dma_buf_vunmap_unlocked(dmabuf, &map); 439 439 out: 440 440 dma_buf_put(dmabuf); 441 441 return err;
+2 -2
drivers/gpu/drm/lima/lima_sched.c
··· 371 371 } else { 372 372 buffer_chunk->size = lima_bo_size(bo); 373 373 374 - ret = drm_gem_shmem_vmap(&bo->base, &map); 374 + ret = drm_gem_vmap_unlocked(&bo->base.base, &map); 375 375 if (ret) { 376 376 kvfree(et); 377 377 goto out; ··· 379 379 380 380 memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); 381 381 382 - drm_gem_shmem_vunmap(&bo->base, &map); 382 + drm_gem_vunmap_unlocked(&bo->base.base, &map); 383 383 } 384 384 385 385 buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
+2 -1
drivers/gpu/drm/mgag200/mgag200_g200se.c
··· 284 284 pixpllcp = pixpllc->p - 1; 285 285 pixpllcs = pixpllc->s; 286 286 287 - xpixpllcm = pixpllcm | ((pixpllcn & BIT(8)) >> 1); 287 + // For G200SE A, BIT(7) should be set unconditionally. 288 + xpixpllcm = BIT(7) | pixpllcm; 288 289 xpixpllcn = pixpllcn; 289 290 xpixpllcp = (pixpllcs << 3) | pixpllcp; 290 291
+5 -6
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 579 579 struct drm_property_blob *new_gamma_lut = new_crtc_state->gamma_lut; 580 580 int ret; 581 581 582 - ret = drm_atomic_helper_check_crtc_state(new_crtc_state, false); 583 - if (ret) 584 - return ret; 585 - 586 582 if (!new_crtc_state->enable) 587 583 return 0; 584 + 585 + ret = drm_atomic_helper_check_crtc_primary_plane(new_crtc_state); 586 + if (ret) 587 + return ret; 588 588 589 589 if (new_crtc_state->mode_changed) { 590 590 if (funcs->pixpllc_atomic_check) { ··· 601 601 } 602 602 } 603 603 604 - return drm_atomic_add_affected_planes(new_state, crtc); 604 + return 0; 605 605 } 606 606 607 607 void mgag200_crtc_helper_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *old_state) ··· 824 824 dev->mode_config.max_width = MGAG200_MAX_FB_WIDTH; 825 825 dev->mode_config.max_height = MGAG200_MAX_FB_HEIGHT; 826 826 dev->mode_config.preferred_depth = 24; 827 - dev->mode_config.fb_base = mdev->vram_res->start; 828 827 dev->mode_config.funcs = &mgag200_mode_config_funcs; 829 828 dev->mode_config.helper_private = &mgag200_mode_config_helper_funcs; 830 829
-2
drivers/gpu/drm/msm/msm_fbdev.c
··· 109 109 110 110 drm_fb_helper_fill_info(fbi, helper, sizes); 111 111 112 - dev->mode_config.fb_base = paddr; 113 - 114 112 fbi->screen_base = msm_gem_get_vaddr(bo); 115 113 if (IS_ERR(fbi->screen_base)) { 116 114 ret = PTR_ERR(fbi->screen_base);
+216 -23
drivers/gpu/drm/mxsfb/lcdif_kms.c
··· 15 15 #include <drm/drm_atomic.h> 16 16 #include <drm/drm_atomic_helper.h> 17 17 #include <drm/drm_bridge.h> 18 + #include <drm/drm_color_mgmt.h> 18 19 #include <drm/drm_crtc.h> 19 20 #include <drm/drm_encoder.h> 20 21 #include <drm/drm_fb_dma_helper.h>
··· 32 31 /* ----------------------------------------------------------------------------- 33 32 * CRTC 34 33 */ 34 + 35 + /* 36 + * For conversion from YCbCr to RGB, the CSC operates as follows: 37 + * 38 + * |R| |A1 A2 A3| |Y + D1| 39 + * |G| = |B1 B2 B3| * |Cb + D2| 40 + * |B| |C1 C2 C3| |Cr + D3| 41 + * 42 + * The A, B and C coefficients are expressed as Q2.8 fixed point values, and 43 + * the D coefficients as Q0.8. Despite the reference manual stating the 44 + * opposite, the D1, D2 and D3 offset values are added to Y, Cb and Cr, not 45 + * subtracted. They must thus be programmed with negative values. 46 + */ 47 + static const u32 lcdif_yuv2rgb_coeffs[3][2][6] = { 48 + [DRM_COLOR_YCBCR_BT601] = { 49 + [DRM_COLOR_YCBCR_LIMITED_RANGE] = { 50 + /* 51 + * BT.601 limited range: 52 + * 53 + * |R| |1.1644 0.0000 1.5960| |Y - 16 | 54 + * |G| = |1.1644 -0.3917 -0.8129| * |Cb - 128| 55 + * |B| |1.1644 2.0172 0.0000| |Cr - 128| 56 + */ 57 + CSC0_COEF0_A1(0x12a) | CSC0_COEF0_A2(0x000), 58 + CSC0_COEF1_A3(0x199) | CSC0_COEF1_B1(0x12a), 59 + CSC0_COEF2_B2(0x79c) | CSC0_COEF2_B3(0x730), 60 + CSC0_COEF3_C1(0x12a) | CSC0_COEF3_C2(0x204), 61 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x1f0), 62 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 63 + }, 64 + [DRM_COLOR_YCBCR_FULL_RANGE] = { 65 + /* 66 + * BT.601 full range: 67 + * 68 + * |R| |1.0000 0.0000 1.4020| |Y - 0 | 69 + * |G| = |1.0000 -0.3441 -0.7141| * |Cb - 128| 70 + * |B| |1.0000 1.7720 0.0000| |Cr - 128| 71 + */ 72 + CSC0_COEF0_A1(0x100) | CSC0_COEF0_A2(0x000), 73 + CSC0_COEF1_A3(0x167) | CSC0_COEF1_B1(0x100), 74 + CSC0_COEF2_B2(0x7a8) | CSC0_COEF2_B3(0x749), 75 + CSC0_COEF3_C1(0x100) | CSC0_COEF3_C2(0x1c6), 76 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x000), 77 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 78 + }, 79 + }, 80 + [DRM_COLOR_YCBCR_BT709] = { 81 + [DRM_COLOR_YCBCR_LIMITED_RANGE] = { 82 + /* 83 + * Rec.709 limited range: 84 + * 85 + * |R| |1.1644 0.0000 1.7927| |Y - 16 | 86 + * |G| = |1.1644 -0.2132 -0.5329| * |Cb - 128| 87 + * |B| |1.1644 2.1124 0.0000| |Cr - 128| 88 + */ 89 + CSC0_COEF0_A1(0x12a) | CSC0_COEF0_A2(0x000), 90 + CSC0_COEF1_A3(0x1cb) | CSC0_COEF1_B1(0x12a), 91 + CSC0_COEF2_B2(0x7c9) | CSC0_COEF2_B3(0x778), 92 + CSC0_COEF3_C1(0x12a) | CSC0_COEF3_C2(0x21d), 93 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x1f0), 94 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 95 + }, 96 + [DRM_COLOR_YCBCR_FULL_RANGE] = { 97 + /* 98 + * Rec.709 full range: 99 + * 100 + * |R| |1.0000 0.0000 1.5748| |Y - 0 | 101 + * |G| = |1.0000 -0.1873 -0.4681| * |Cb - 128| 102 + * |B| |1.0000 1.8556 0.0000| |Cr - 128| 103 + */ 104 + CSC0_COEF0_A1(0x100) | CSC0_COEF0_A2(0x000), 105 + CSC0_COEF1_A3(0x193) | CSC0_COEF1_B1(0x100), 106 + CSC0_COEF2_B2(0x7d0) | CSC0_COEF2_B3(0x788), 107 + CSC0_COEF3_C1(0x100) | CSC0_COEF3_C2(0x1db), 108 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x000), 109 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 110 + }, 111 + }, 112 + [DRM_COLOR_YCBCR_BT2020] = { 113 + [DRM_COLOR_YCBCR_LIMITED_RANGE] = { 114 + /* 115 + * BT.2020 limited range: 116 + * 117 + * |R| |1.1644 0.0000 1.6787| |Y - 16 | 118 + * |G| = |1.1644 -0.1874 -0.6505| * |Cb - 128| 119 + * |B| |1.1644 2.1418 0.0000| |Cr - 128| 120 + */ 121 + CSC0_COEF0_A1(0x12a) | CSC0_COEF0_A2(0x000), 122 + CSC0_COEF1_A3(0x1ae) | CSC0_COEF1_B1(0x12a), 123 + CSC0_COEF2_B2(0x7d0) | CSC0_COEF2_B3(0x759), 124 + CSC0_COEF3_C1(0x12a) | CSC0_COEF3_C2(0x224), 125 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x1f0), 126 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 127 + }, 128 + [DRM_COLOR_YCBCR_FULL_RANGE] = { 129 + /* 130 + * BT.2020 full range: 131 + * 132 + * |R| |1.0000 0.0000 1.4746| |Y - 0 | 133 + * |G| = |1.0000 -0.1646 -0.5714| * |Cb - 128| 134 + * |B| |1.0000 1.8814 0.0000| |Cr - 128| 135 + */ 136 + CSC0_COEF0_A1(0x100) | CSC0_COEF0_A2(0x000), 137 + CSC0_COEF1_A3(0x179) | CSC0_COEF1_B1(0x100), 138 + CSC0_COEF2_B2(0x7d6) | CSC0_COEF2_B3(0x76e), 139 + CSC0_COEF3_C1(0x100) | CSC0_COEF3_C2(0x1e2), 140 + CSC0_COEF4_C3(0x000) | CSC0_COEF4_D1(0x000), 141 + CSC0_COEF5_D2(0x180) | CSC0_COEF5_D3(0x180), 142 + }, 143 + }, 144 + }; 145 + 35 146 static void lcdif_set_formats(struct lcdif_drm_private *lcdif, 147 + struct drm_plane_state *plane_state, 36 148 const u32 bus_format) 37 149 { 38 150 struct drm_device *drm = lcdif->drm; 39 - const u32 format = lcdif->crtc.primary->state->fb->format->format; 40 - 41 - writel(CSC0_CTRL_BYPASS, lcdif->base + LCDC_V8_CSC0_CTRL); 151 + const u32 format = plane_state->fb->format->format; 152 + bool in_yuv = false; 153 + bool out_yuv = false; 42 154 43 155 switch (bus_format) { 44 156 case MEDIA_BUS_FMT_RGB565_1X16:
··· 165 51 case MEDIA_BUS_FMT_UYVY8_1X16: 166 52 writel(DISP_PARA_LINE_PATTERN_UYVY_H, 167 53 lcdif->base + LCDC_V8_DISP_PARA); 168 - 169 - /* CSC: BT.601 Full Range RGB to YCbCr coefficients. */ 170 - writel(CSC0_COEF0_A2(0x096) | CSC0_COEF0_A1(0x04c), 171 - lcdif->base + LCDC_V8_CSC0_COEF0); 172 - writel(CSC0_COEF1_B1(0x7d5) | CSC0_COEF1_A3(0x01d), 173 - lcdif->base + LCDC_V8_CSC0_COEF1); 174 - writel(CSC0_COEF2_B3(0x080) | CSC0_COEF2_B2(0x7ac), 175 - lcdif->base + LCDC_V8_CSC0_COEF2); 176 - writel(CSC0_COEF3_C2(0x795) | CSC0_COEF3_C1(0x080), 177 - lcdif->base + LCDC_V8_CSC0_COEF3); 178 - writel(CSC0_COEF4_D1(0x000) | CSC0_COEF4_C3(0x7ec), 179 - lcdif->base + LCDC_V8_CSC0_COEF4); 180 - writel(CSC0_COEF5_D3(0x080) | CSC0_COEF5_D2(0x080), 181 - lcdif->base + LCDC_V8_CSC0_COEF5); 182 - 183 - writel(CSC0_CTRL_CSC_MODE_RGB2YCbCr, 184 - lcdif->base + LCDC_V8_CSC0_CTRL); 185 - 54 + out_yuv = true; 186 55 break; 187 56 default: 188 57 dev_err(drm->dev, "Unknown media bus format 0x%x\n", bus_format);
··· 173 76 } 174 77 175 78 switch (format) { 79 + /* RGB Formats */ 176 80 case DRM_FORMAT_RGB565: 177 81 writel(CTRLDESCL0_5_BPP_16_RGB565, 178 82 lcdif->base + LCDC_V8_CTRLDESCL0_5);
··· 198 100 writel(CTRLDESCL0_5_BPP_32_ARGB8888, 199 101 lcdif->base + LCDC_V8_CTRLDESCL0_5); 200 102 break; 103 + 104 + /* YUV Formats */ 105 + case DRM_FORMAT_YUYV: 106 + writel(CTRLDESCL0_5_BPP_YCbCr422 | CTRLDESCL0_5_YUV_FORMAT_VY2UY1, 107 + lcdif->base + LCDC_V8_CTRLDESCL0_5); 108 + in_yuv = true; 109 + break; 110 + case DRM_FORMAT_YVYU: 111 + writel(CTRLDESCL0_5_BPP_YCbCr422 | CTRLDESCL0_5_YUV_FORMAT_UY2VY1, 112 + lcdif->base + LCDC_V8_CTRLDESCL0_5); 113 + in_yuv = true; 114 + break; 115 + case DRM_FORMAT_UYVY: 116 + writel(CTRLDESCL0_5_BPP_YCbCr422 | CTRLDESCL0_5_YUV_FORMAT_Y2VY1U, 117 + lcdif->base + LCDC_V8_CTRLDESCL0_5); 118 + in_yuv = true; 119 + break; 120 + case DRM_FORMAT_VYUY: 121 + writel(CTRLDESCL0_5_BPP_YCbCr422 | CTRLDESCL0_5_YUV_FORMAT_Y2UY1V, 122 + lcdif->base + LCDC_V8_CTRLDESCL0_5); 123 + in_yuv = true; 124 + break; 125 + 201 126 default: 202 127 dev_err(drm->dev, "Unknown pixel format 0x%x\n", format); 203 128 break; 129 + } 130 + 131 + /* 132 + * The CSC differentiates between "YCbCr" and "YUV", but the reference 133 + * manual doesn't detail how they differ. Experiments showed that the 134 + * luminance value is unaffected, only the calculations involving chroma 135 + * values differ. The YCbCr mode behaves as expected, with chroma values 136 + * being offset by 128. The YUV mode isn't fully understood. 137 + */ 138 + if (!in_yuv && out_yuv) { 139 + /* RGB -> YCbCr */ 140 + writel(CSC0_CTRL_CSC_MODE_RGB2YCbCr, 141 + lcdif->base + LCDC_V8_CSC0_CTRL); 142 + 143 + /* 144 + * CSC: BT.601 Limited Range RGB to YCbCr coefficients. 145 + * 146 + * |Y | | 0.2568 0.5041 0.0979| |R| |16 | 147 + * |Cb| = |-0.1482 -0.2910 0.4392| * |G| + |128| 148 + * |Cr| | 0.4392 0.4392 -0.3678| |B| |128| 149 + */ 150 + writel(CSC0_COEF0_A2(0x081) | CSC0_COEF0_A1(0x041), 151 + lcdif->base + LCDC_V8_CSC0_COEF0); 152 + writel(CSC0_COEF1_B1(0x7db) | CSC0_COEF1_A3(0x019), 153 + lcdif->base + LCDC_V8_CSC0_COEF1); 154 + writel(CSC0_COEF2_B3(0x070) | CSC0_COEF2_B2(0x7b6), 155 + lcdif->base + LCDC_V8_CSC0_COEF2); 156 + writel(CSC0_COEF3_C2(0x7a2) | CSC0_COEF3_C1(0x070), 157 + lcdif->base + LCDC_V8_CSC0_COEF3); 158 + writel(CSC0_COEF4_D1(0x010) | CSC0_COEF4_C3(0x7ee), 159 + lcdif->base + LCDC_V8_CSC0_COEF4); 160 + writel(CSC0_COEF5_D3(0x080) | CSC0_COEF5_D2(0x080), 161 + lcdif->base + LCDC_V8_CSC0_COEF5); 162 + } else if (in_yuv && !out_yuv) { 163 + /* YCbCr -> RGB */ 164 + const u32 *coeffs = 165 + lcdif_yuv2rgb_coeffs[plane_state->color_encoding] 166 + [plane_state->color_range]; 167 + 168 + writel(CSC0_CTRL_CSC_MODE_YCbCr2RGB, 169 + lcdif->base + LCDC_V8_CSC0_CTRL); 170 + 171 + writel(coeffs[0], lcdif->base + LCDC_V8_CSC0_COEF0); 172 + writel(coeffs[1], lcdif->base + LCDC_V8_CSC0_COEF1); 173 + writel(coeffs[2], lcdif->base + LCDC_V8_CSC0_COEF2); 174 + writel(coeffs[3], lcdif->base + LCDC_V8_CSC0_COEF3); 175 + writel(coeffs[4], lcdif->base + LCDC_V8_CSC0_COEF4); 176 + writel(coeffs[5], lcdif->base + LCDC_V8_CSC0_COEF5); 177 + } else { 178 + /* RGB -> RGB, YCbCr -> YCbCr: bypass colorspace converter. */ 179 + writel(CSC0_CTRL_BYPASS, lcdif->base + LCDC_V8_CSC0_CTRL); 204 180 } 205 181 } 206 182
··· 360 188 } 361 189 362 190 static void lcdif_crtc_mode_set_nofb(struct lcdif_drm_private *lcdif, 191 + struct drm_plane_state *plane_state, 363 192 struct drm_bridge_state *bridge_state, 364 193 const u32 bus_format) 365 194 {
··· 383 210 /* Mandatory eLCDIF reset as per the Reference Manual */ 384 211 lcdif_reset_block(lcdif); 385 212 386 - lcdif_set_formats(lcdif, bus_format); 213 + lcdif_set_formats(lcdif, plane_state, bus_format); 387 214 388 215 lcdif_set_mode(lcdif, bus_flags); 389 216 }
··· 466 293 467 294 pm_runtime_get_sync(drm->dev); 468 295 469 - lcdif_crtc_mode_set_nofb(lcdif, bridge_state, bus_format); 296 + lcdif_crtc_mode_set_nofb(lcdif, new_pstate, bridge_state, bus_format); 470 297 471 298 /* Write cur_buf as well to avoid an initial corrupt frame */ 472 299 paddr = drm_fb_dma_get_gem_addr(new_pstate->fb, new_pstate, 0);
··· 610 437 }; 611 438 612 439 static const u32 lcdif_primary_plane_formats[] = { 440 + /* RGB */ 613 441 DRM_FORMAT_RGB565, 614 442 DRM_FORMAT_RGB888, 615 443 DRM_FORMAT_XBGR8888, 616 444 DRM_FORMAT_XRGB1555, 617 445 DRM_FORMAT_XRGB4444, 618 446 DRM_FORMAT_XRGB8888, 447 + 448 + /* Packed YCbCr */ 449 + DRM_FORMAT_YUYV, 450 + DRM_FORMAT_YVYU, 451 + DRM_FORMAT_UYVY, 452 + DRM_FORMAT_VYUY, 619 453 }; 620 454 621 455 static const u64 lcdif_modifiers[] = {
··· 636 456 637 457 int lcdif_kms_init(struct lcdif_drm_private *lcdif) 638 458 { 459 + const u32 supported_encodings = BIT(DRM_COLOR_YCBCR_BT601) | 460 + BIT(DRM_COLOR_YCBCR_BT709) | 461 + BIT(DRM_COLOR_YCBCR_BT2020); 462 + const u32 supported_ranges = BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) | 463 + BIT(DRM_COLOR_YCBCR_FULL_RANGE); 639 464 struct drm_encoder *encoder = &lcdif->encoder; 640 465 struct drm_crtc *crtc = &lcdif->crtc; 641 466 int ret;
··· 653 468 ARRAY_SIZE(lcdif_primary_plane_formats), 654 469 lcdif_modifiers, DRM_PLANE_TYPE_PRIMARY, 655 470 NULL); 471 + if (ret) 472 + return ret; 473 + 474 + ret = drm_plane_create_color_properties(&lcdif->planes.primary, 475 + supported_encodings, 476 + supported_ranges, 477 + DRM_COLOR_YCBCR_BT601, 478 + DRM_COLOR_YCBCR_LIMITED_RANGE); 656 479 if (ret) 657 480 return ret; 658 481
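The register values in the lcdif_yuv2rgb_coeffs table above follow directly from the floating-point matrices in the comments: each coefficient is scaled by 256 and stored in two's complement. A quick sketch to check the encoding — note the 11-bit width assumed here for the A/B/C fields and the 9-bit width for the D offsets are inferred from the values, not stated in the diff:

```python
import math

def quantize(coeff, width, frac_bits=8):
    """Encode a coefficient as a two's-complement fixed-point field."""
    raw = math.floor(coeff * (1 << frac_bits) + 0.5)  # round to nearest
    return raw & ((1 << width) - 1)                   # wrap into the field

# A/B/C terms, Q2.8 (BT.601 limited range from the table above)
print(hex(quantize(1.1644, 11)))   # 0x12a
print(hex(quantize(1.5960, 11)))   # 0x199
print(hex(quantize(-0.3917, 11)))  # 0x79c

# D offsets, programmed as negated integer offsets per the comment
print(hex(quantize(-16, 9, frac_bits=0)))   # 0x1f0
print(hex(quantize(-128, 9, frac_bits=0)))  # 0x180
```

Running the helper over any row of the comment matrices reproduces the corresponding CSC0_COEF* value, which is a handy way to audit the table.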
+20 -17
drivers/gpu/drm/mxsfb/lcdif_regs.h
··· 130 130 #define CTRL_FETCH_START_OPTION_BPV BIT(9) 131 131 #define CTRL_FETCH_START_OPTION_RESV GENMASK(9, 8) 132 132 #define CTRL_FETCH_START_OPTION_MASK GENMASK(9, 8) 133 - #define CTRL_NEG BIT(4) 133 + #define CTRL_NEG BIT(4) 134 134 #define CTRL_INV_PXCK BIT(3) 135 135 #define CTRL_INV_DE BIT(2) 136 136 #define CTRL_INV_VS BIT(1)
··· 138 138 139 139 #define DISP_PARA_DISP_ON BIT(31) 140 140 #define DISP_PARA_SWAP_EN BIT(30) 141 - #define DISP_PARA_LINE_PATTERN_UYVY_H (GENMASK(29, 28) | BIT(26)) 142 - #define DISP_PARA_LINE_PATTERN_RGB565 GENMASK(28, 26) 143 - #define DISP_PARA_LINE_PATTERN_RGB888 0 141 + #define DISP_PARA_LINE_PATTERN_UYVY_H (0xd << 26) 142 + #define DISP_PARA_LINE_PATTERN_RGB565 (0x7 << 26) 143 + #define DISP_PARA_LINE_PATTERN_RGB888 (0x0 << 26) 144 144 #define DISP_PARA_LINE_PATTERN_MASK GENMASK(29, 26) 145 145 #define DISP_PARA_DISP_MODE_MASK GENMASK(25, 24) 146 146 #define DISP_PARA_BGND_R_MASK GENMASK(23, 16)
··· 186 186 #define INT_ENABLE_D1_PLANE_PANIC_EN BIT(0) 187 187 188 188 #define CTRLDESCL0_1_HEIGHT(n) (((n) & 0xffff) << 16) 189 - #define CTRLDESCL0_1_HEIGHT_MASK GENMASK(31, 16) 189 + #define CTRLDESCL0_1_HEIGHT_MASK GENMASK(31, 16) 190 190 #define CTRLDESCL0_1_WIDTH(n) ((n) & 0xffff) 191 191 #define CTRLDESCL0_1_WIDTH_MASK GENMASK(15, 0) 192 192
··· 198 198 199 199 #define CTRLDESCL0_5_EN BIT(31) 200 200 #define CTRLDESCL0_5_SHADOW_LOAD_EN BIT(30) 201 - #define CTRLDESCL0_5_BPP_16_RGB565 BIT(26) 202 - #define CTRLDESCL0_5_BPP_16_ARGB1555 (BIT(26) | BIT(24)) 203 - #define CTRLDESCL0_5_BPP_16_ARGB4444 (BIT(26) | BIT(25)) 204 - #define CTRLDESCL0_5_BPP_YCbCr422 (BIT(26) | BIT(25) | BIT(24)) 205 - #define CTRLDESCL0_5_BPP_24_RGB888 BIT(27) 206 - #define CTRLDESCL0_5_BPP_32_ARGB8888 (BIT(27) | BIT(24)) 207 - #define CTRLDESCL0_5_BPP_32_ABGR8888 (BIT(27) | BIT(25)) 201 + #define CTRLDESCL0_5_BPP_16_RGB565 (0x4 << 24) 202 + #define CTRLDESCL0_5_BPP_16_ARGB1555 (0x5 << 24) 203 + #define CTRLDESCL0_5_BPP_16_ARGB4444 (0x6 << 24) 204 + #define CTRLDESCL0_5_BPP_YCbCr422 (0x7 << 24) 205 + #define CTRLDESCL0_5_BPP_24_RGB888 (0x8 << 24) 206 + #define CTRLDESCL0_5_BPP_32_ARGB8888 (0x9 << 24) 207 + #define CTRLDESCL0_5_BPP_32_ABGR8888 (0xa << 24) 208 208 #define CTRLDESCL0_5_BPP_MASK GENMASK(27, 24) 209 - #define CTRLDESCL0_5_YUV_FORMAT_Y2VY1U 0 210 - #define CTRLDESCL0_5_YUV_FORMAT_Y2UY1V BIT(14) 211 - #define CTRLDESCL0_5_YUV_FORMAT_VY2UY1 BIT(15) 212 - #define CTRLDESCL0_5_YUV_FORMAT_UY2VY1 (BIT(15) | BIT(14)) 209 + #define CTRLDESCL0_5_YUV_FORMAT_Y2VY1U (0x0 << 14) 210 + #define CTRLDESCL0_5_YUV_FORMAT_Y2UY1V (0x1 << 14) 211 + #define CTRLDESCL0_5_YUV_FORMAT_VY2UY1 (0x2 << 14) 212 + #define CTRLDESCL0_5_YUV_FORMAT_UY2VY1 (0x3 << 14) 213 213 #define CTRLDESCL0_5_YUV_FORMAT_MASK GENMASK(15, 14) 214 214 215 - #define CSC0_CTRL_CSC_MODE_RGB2YCbCr GENMASK(2, 1) 215 + #define CSC0_CTRL_CSC_MODE_YUV2RGB (0x0 << 1) 216 + #define CSC0_CTRL_CSC_MODE_YCbCr2RGB (0x1 << 1) 217 + #define CSC0_CTRL_CSC_MODE_RGB2YUV (0x2 << 1) 218 + #define CSC0_CTRL_CSC_MODE_RGB2YCbCr (0x3 << 1) 216 219 #define CSC0_CTRL_CSC_MODE_MASK GENMASK(2, 1) 217 220 #define CSC0_CTRL_BYPASS BIT(0) 218 221
+2 -2
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 131 131 { 132 132 struct nv50_dmac *dmac = container_of(push, typeof(*dmac), _push); 133 133 134 - dmac->cur = push->cur - (u32 *)dmac->_push.mem.object.map.ptr; 134 + dmac->cur = push->cur - (u32 __iomem *)dmac->_push.mem.object.map.ptr; 135 135 if (dmac->put != dmac->cur) { 136 136 /* Push buffer fetches are not coherent with BAR1, we need to ensure 137 137 * writes have been flushed right through to VRAM before writing PUT. ··· 194 194 if (WARN_ON(size > dmac->max)) 195 195 return -EINVAL; 196 196 197 - dmac->cur = push->cur - (u32 *)dmac->_push.mem.object.map.ptr; 197 + dmac->cur = push->cur - (u32 __iomem *)dmac->_push.mem.object.map.ptr; 198 198 if (dmac->cur + size >= dmac->max) { 199 199 int ret = nv50_dmac_wind(dmac); 200 200 if (ret)
-1
drivers/gpu/drm/nouveau/nouveau_display.c
··· 672 672 drm_mode_create_dvi_i_properties(dev); 673 673 674 674 dev->mode_config.funcs = &nouveau_mode_config_funcs; 675 - dev->mode_config.fb_base = device->func->resource_addr(device, 1); 676 675 677 676 dev->mode_config.min_width = 0; 678 677 dev->mode_config.min_height = 0;
+4 -2
drivers/gpu/drm/nouveau/nv04_fbcon.c
··· 137 137 struct nouveau_channel *chan = drm->channel; 138 138 struct nvif_device *device = &drm->client.device; 139 139 struct nvif_push *push = chan->chan.push; 140 + struct nvkm_device *nvkm_device = nvxx_device(&drm->client.device); 141 + resource_size_t fb_base = nvkm_device->func->resource_addr(nvkm_device, 1); 140 142 int surface_fmt, pattern_fmt, rect_fmt; 141 143 int ret; 142 144 ··· 212 210 0x0188, chan->vram.handle); 213 211 PUSH_NVSQ(push, NV042, 0x0300, surface_fmt, 214 212 0x0304, info->fix.line_length | (info->fix.line_length << 16), 215 - 0x0308, info->fix.smem_start - dev->mode_config.fb_base, 216 - 0x030c, info->fix.smem_start - dev->mode_config.fb_base); 213 + 0x0308, info->fix.smem_start - fb_base, 214 + 0x030c, info->fix.smem_start - fb_base); 217 215 218 216 PUSH_NVSQ(push, NV043, 0x0000, nfbdev->rop.handle); 219 217 PUSH_NVSQ(push, NV043, 0x0300, 0x55);
-2
drivers/gpu/drm/omapdrm/omap_fbdev.c
··· 177 177 178 178 drm_fb_helper_fill_info(fbi, helper, sizes); 179 179 180 - dev->mode_config.fb_base = dma_addr; 181 - 182 180 fbi->screen_buffer = omap_gem_vaddr(fbdev->bo); 183 181 fbi->screen_size = fbdev->bo->size; 184 182 fbi->fix.smem_start = dma_addr;
+2 -2
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
··· 125 125 126 126 get_dma_buf(dma_buf); 127 127 128 - sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE); 128 + sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); 129 129 if (IS_ERR(sgt)) { 130 130 ret = PTR_ERR(sgt); 131 131 goto fail_detach; ··· 142 142 return obj; 143 143 144 144 fail_unmap: 145 - dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE); 145 + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE); 146 146 fail_detach: 147 147 dma_buf_detach(dma_buf, attach); 148 148 dma_buf_put(dma_buf);
+7
drivers/gpu/drm/panel/panel-samsung-db7430.c
··· 331 331 }; 332 332 MODULE_DEVICE_TABLE(of, db7430_match); 333 333 334 + static const struct spi_device_id db7430_ids[] = { 335 + { "lms397kf04" }, 336 + { }, 337 + }; 338 + MODULE_DEVICE_TABLE(spi, db7430_ids); 339 + 334 340 static struct spi_driver db7430_driver = { 335 341 .probe = db7430_probe, 336 342 .remove = db7430_remove, 343 + .id_table = db7430_ids, 337 344 .driver = { 338 345 .name = "db7430-panel", 339 346 .of_match_table = db7430_match,
+7
drivers/gpu/drm/panel/panel-tpo-tpg110.c
··· 463 463 }; 464 464 MODULE_DEVICE_TABLE(of, tpg110_match); 465 465 466 + static const struct spi_device_id tpg110_ids[] = { 467 + { "tpg110" }, 468 + { }, 469 + }; 470 + MODULE_DEVICE_TABLE(spi, tpg110_ids); 471 + 466 472 static struct spi_driver tpg110_driver = { 467 473 .probe = tpg110_probe, 468 474 .remove = tpg110_remove, 475 + .id_table = tpg110_ids, 469 476 .driver = { 470 477 .name = "tpo-tpg110-panel", 471 478 .of_match_table = tpg110_match,
+7
drivers/gpu/drm/panel/panel-widechips-ws2401.c
··· 425 425 }; 426 426 MODULE_DEVICE_TABLE(of, ws2401_match); 427 427 428 + static const struct spi_device_id ws2401_ids[] = { 429 + { "lms380kf01" }, 430 + { }, 431 + }; 432 + MODULE_DEVICE_TABLE(spi, ws2401_ids); 433 + 428 434 static struct spi_driver ws2401_driver = { 429 435 .probe = ws2401_probe, 430 436 .remove = ws2401_remove, 437 + .id_table = ws2401_ids, 431 438 .driver = { 432 439 .name = "ws2401-panel", 433 440 .of_match_table = ws2401_match,
+2 -2
drivers/gpu/drm/panfrost/panfrost_dump.c
··· 209 209 goto dump_header; 210 210 } 211 211 212 - ret = drm_gem_shmem_vmap(&bo->base, &map); 212 + ret = drm_gem_vmap_unlocked(&bo->base.base, &map); 213 213 if (ret) { 214 214 dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n"); 215 215 iter.hdr->bomap.valid = 0; ··· 236 236 vaddr = map.vaddr; 237 237 memcpy(iter.data, vaddr, bo->base.base.size); 238 238 239 - drm_gem_shmem_vunmap(&bo->base, &map); 239 + drm_gem_vunmap_unlocked(&bo->base.base, &map); 240 240 241 241 iter.hdr->bomap.valid = 1; 242 242
+3 -3
drivers/gpu/drm/panfrost/panfrost_perfcnt.c
··· 106 106 goto err_close_bo; 107 107 } 108 108 109 - ret = drm_gem_shmem_vmap(bo, &map); 109 + ret = drm_gem_vmap_unlocked(&bo->base, &map); 110 110 if (ret) 111 111 goto err_put_mapping; 112 112 perfcnt->buf = map.vaddr; ··· 165 165 return 0; 166 166 167 167 err_vunmap: 168 - drm_gem_shmem_vunmap(bo, &map); 168 + drm_gem_vunmap_unlocked(&bo->base, &map); 169 169 err_put_mapping: 170 170 panfrost_gem_mapping_put(perfcnt->mapping); 171 171 err_close_bo: ··· 195 195 GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); 196 196 197 197 perfcnt->user = NULL; 198 - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base, &map); 198 + drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map); 199 199 perfcnt->buf = NULL; 200 200 panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); 201 201 panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
-2
drivers/gpu/drm/qxl/qxl_display.c
··· 1261 1261 qdev->ddev.mode_config.max_width = 8192; 1262 1262 qdev->ddev.mode_config.max_height = 8192; 1263 1263 1264 - qdev->ddev.mode_config.fb_base = qdev->vram_base; 1265 - 1266 1264 drm_mode_create_suggested_offset_properties(&qdev->ddev); 1267 1265 qxl_mode_create_hotplug_mode_update_property(qdev); 1268 1266
+9 -8
drivers/gpu/drm/qxl/qxl_object.c
··· 168 168 bo->map_count++; 169 169 goto out; 170 170 } 171 - r = ttm_bo_vmap(&bo->tbo, &bo->map); 171 + 172 + r = __qxl_bo_pin(bo); 172 173 if (r) 173 174 return r; 175 + 176 + r = ttm_bo_vmap(&bo->tbo, &bo->map); 177 + if (r) { 178 + __qxl_bo_unpin(bo); 179 + return r; 180 + } 174 181 bo->map_count = 1; 175 182 176 183 /* TODO: Remove kptr in favor of map everywhere. */ ··· 198 191 r = qxl_bo_reserve(bo); 199 192 if (r) 200 193 return r; 201 - 202 - r = __qxl_bo_pin(bo); 203 - if (r) { 204 - qxl_bo_unreserve(bo); 205 - return r; 206 - } 207 194 208 195 r = qxl_bo_vmap_locked(bo, map); 209 196 qxl_bo_unreserve(bo); ··· 248 247 return; 249 248 bo->kptr = NULL; 250 249 ttm_bo_vunmap(&bo->tbo, &bo->map); 250 + __qxl_bo_unpin(bo); 251 251 } 252 252 253 253 int qxl_bo_vunmap(struct qxl_bo *bo) ··· 260 258 return r; 261 259 262 260 qxl_bo_vunmap_locked(bo); 263 - __qxl_bo_unpin(bo); 264 261 qxl_bo_unreserve(bo); 265 262 return 0; 266 263 }
+2 -2
drivers/gpu/drm/qxl/qxl_prime.c
··· 59 59 struct qxl_bo *bo = gem_to_qxl_bo(obj); 60 60 int ret; 61 61 62 - ret = qxl_bo_vmap(bo, map); 62 + ret = qxl_bo_vmap_locked(bo, map); 63 63 if (ret < 0) 64 64 return ret; 65 65 ··· 71 71 { 72 72 struct qxl_bo *bo = gem_to_qxl_bo(obj); 73 73 74 - qxl_bo_vunmap(bo); 74 + qxl_bo_vunmap_locked(bo); 75 75 }
-2
drivers/gpu/drm/radeon/radeon_display.c
··· 1604 1604 1605 1605 rdev->ddev->mode_config.fb_modifiers_not_supported = true; 1606 1606 1607 - rdev->ddev->mode_config.fb_base = rdev->mc.aper_base; 1608 - 1609 1607 ret = radeon_modeset_create_props(rdev); 1610 1608 if (ret) { 1611 1609 return ret;
+1 -1
drivers/gpu/drm/radeon/radeon_fb.c
··· 276 276 drm_fb_helper_fill_info(info, &rfbdev->helper, sizes); 277 277 278 278 /* setup aperture base/size for vesafb takeover */ 279 - info->apertures->ranges[0].base = rdev->ddev->mode_config.fb_base; 279 + info->apertures->ranges[0].base = rdev->mc.aper_base; 280 280 info->apertures->ranges[0].size = rdev->mc.aper_size; 281 281 282 282 /* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
+25 -1
drivers/gpu/drm/scheduler/sched_entity.c
··· 73 73 entity->priority = priority; 74 74 entity->sched_list = num_sched_list > 1 ? sched_list : NULL; 75 75 entity->last_scheduled = NULL; 76 + RB_CLEAR_NODE(&entity->rb_tree_node); 76 77 77 78 if(num_sched_list) 78 79 entity->rq = &sched_list[0]->sched_rq[entity->priority];
··· 208 207 struct drm_sched_job *job = container_of(cb, struct drm_sched_job, 209 208 finish_cb); 210 209 210 + dma_fence_put(f); 211 211 INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work); 212 212 schedule_work(&job->work); 213 213 }
··· 236 234 struct drm_sched_fence *s_fence = job->s_fence; 237 235 238 236 /* Wait for all dependencies to avoid data corruptions */ 239 - while ((f = drm_sched_job_dependency(job, entity))) 237 + while ((f = drm_sched_job_dependency(job, entity))) { 240 238 dma_fence_wait(f, false); 239 + dma_fence_put(f); 240 + } 241 241 242 242 drm_sched_fence_scheduled(s_fence); 243 243 dma_fence_set_error(&s_fence->finished, -ESRCH);
··· 254 250 continue; 255 251 } 256 252 253 + dma_fence_get(entity->last_scheduled); 257 254 r = dma_fence_add_callback(entity->last_scheduled, 258 255 &job->finish_cb, 259 256 drm_sched_entity_kill_jobs_cb);
··· 449 444 smp_wmb(); 450 445 451 446 spsc_queue_pop(&entity->job_queue); 447 + 448 + /* 449 + * Update the entity's location in the min heap according to 450 + * the timestamp of the next job, if any. 451 + */ 452 + if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) { 453 + struct drm_sched_job *next; 454 + 455 + next = to_drm_sched_job(spsc_queue_peek(&entity->job_queue)); 456 + if (next) 457 + drm_sched_rq_update_fifo(entity, next->submit_ts); 458 + } 459 + 452 460 return sched_job; 453 461 } 454 462
··· 526 508 atomic_inc(entity->rq->sched->score); 527 509 WRITE_ONCE(entity->last_user, current->group_leader); 528 510 first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node); 511 + sched_job->submit_ts = ktime_get(); 529 512 530 513 /* first job wakes up scheduler */ 531 514 if (first) {
··· 538 519 DRM_ERROR("Trying to push to a killed entity\n"); 539 520 return; 540 521 } 522 + 541 523 drm_sched_rq_add_entity(entity->rq, entity); 542 524 spin_unlock(&entity->rq_lock); 525 + 526 + if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) 527 + drm_sched_rq_update_fifo(entity, sched_job->submit_ts); 528 + 543 529 drm_sched_wakeup(entity->rq->sched); 544 530 }
+93 -3
drivers/gpu/drm/scheduler/sched_main.c
··· 62 62 #define to_drm_sched_job(sched_job) \ 63 63 container_of((sched_job), struct drm_sched_job, queue_node) 64 64 65 + int drm_sched_policy = DRM_SCHED_POLICY_RR; 66 + 67 + /** 68 + * DOC: sched_policy (int) 69 + * Used to override default entities scheduling policy in a run queue. 70 + */ 71 + MODULE_PARM_DESC(sched_policy, "Specify schedule policy for entities on a runqueue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin (default), " __stringify(DRM_SCHED_POLICY_FIFO) " = use FIFO."); 72 + module_param_named(sched_policy, drm_sched_policy, int, 0444); 73 + 74 + static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a, 75 + const struct rb_node *b) 76 + { 77 + struct drm_sched_entity *ent_a = rb_entry((a), struct drm_sched_entity, rb_tree_node); 78 + struct drm_sched_entity *ent_b = rb_entry((b), struct drm_sched_entity, rb_tree_node); 79 + 80 + return ktime_before(ent_a->oldest_job_waiting, ent_b->oldest_job_waiting); 81 + } 82 + 83 + static inline void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity) 84 + { 85 + struct drm_sched_rq *rq = entity->rq; 86 + 87 + if (!RB_EMPTY_NODE(&entity->rb_tree_node)) { 88 + rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root); 89 + RB_CLEAR_NODE(&entity->rb_tree_node); 90 + } 91 + } 92 + 93 + void drm_sched_rq_update_fifo(struct drm_sched_entity *entity, ktime_t ts) 94 + { 95 + /* 96 + * Both locks need to be grabbed, one to protect from entity->rq change 97 + * for entity from within concurrent drm_sched_entity_select_rq and the 98 + * other to update the rb tree structure. 
99 + */ 100 + spin_lock(&entity->rq_lock); 101 + spin_lock(&entity->rq->lock); 102 + 103 + drm_sched_rq_remove_fifo_locked(entity); 104 + 105 + entity->oldest_job_waiting = ts; 106 + 107 + rb_add_cached(&entity->rb_tree_node, &entity->rq->rb_tree_root, 108 + drm_sched_entity_compare_before); 109 + 110 + spin_unlock(&entity->rq->lock); 111 + spin_unlock(&entity->rq_lock); 112 + } 113 + 65 114 /** 66 115 * drm_sched_rq_init - initialize a given run queue struct 67 116 * ··· 124 75 { 125 76 spin_lock_init(&rq->lock); 126 77 INIT_LIST_HEAD(&rq->entities); 78 + rq->rb_tree_root = RB_ROOT_CACHED; 127 79 rq->current_entity = NULL; 128 80 rq->sched = sched; 129 81 } ··· 142 92 { 143 93 if (!list_empty(&entity->list)) 144 94 return; 95 + 145 96 spin_lock(&rq->lock); 97 + 146 98 atomic_inc(rq->sched->score); 147 99 list_add_tail(&entity->list, &rq->entities); 100 + 148 101 spin_unlock(&rq->lock); 149 102 } 150 103 ··· 164 111 { 165 112 if (list_empty(&entity->list)) 166 113 return; 114 + 167 115 spin_lock(&rq->lock); 116 + 168 117 atomic_dec(rq->sched->score); 169 118 list_del_init(&entity->list); 119 + 170 120 if (rq->current_entity == entity) 171 121 rq->current_entity = NULL; 122 + 123 + if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) 124 + drm_sched_rq_remove_fifo_locked(entity); 125 + 172 126 spin_unlock(&rq->lock); 173 127 } 174 128 175 129 /** 176 - * drm_sched_rq_select_entity - Select an entity which could provide a job to run 130 + * drm_sched_rq_select_entity_rr - Select an entity which could provide a job to run 177 131 * 178 132 * @rq: scheduler run queue to check. 179 133 * 180 134 * Try to find a ready entity, returns NULL if none found. 
181 135 */ 182 136 static struct drm_sched_entity * 183 - drm_sched_rq_select_entity(struct drm_sched_rq *rq) 137 + drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq) 184 138 { 185 139 struct drm_sched_entity *entity; 186 140 ··· 221 161 spin_unlock(&rq->lock); 222 162 223 163 return NULL; 164 + } 165 + 166 + /** 167 + * drm_sched_rq_select_entity_fifo - Select an entity which provides a job to run 168 + * 169 + * @rq: scheduler run queue to check. 170 + * 171 + * Find oldest waiting ready entity, returns NULL if none found. 172 + */ 173 + static struct drm_sched_entity * 174 + drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq) 175 + { 176 + struct rb_node *rb; 177 + 178 + spin_lock(&rq->lock); 179 + for (rb = rb_first_cached(&rq->rb_tree_root); rb; rb = rb_next(rb)) { 180 + struct drm_sched_entity *entity; 181 + 182 + entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node); 183 + if (drm_sched_entity_is_ready(entity)) { 184 + rq->current_entity = entity; 185 + reinit_completion(&entity->entity_idle); 186 + break; 187 + } 188 + } 189 + spin_unlock(&rq->lock); 190 + 191 + return rb ? rb_entry(rb, struct drm_sched_entity, rb_tree_node) : NULL; 224 192 } 225 193 226 194 /** ··· 891 803 892 804 /* Kernel run queue has higher priority than normal run queue*/ 893 805 for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { 894 - entity = drm_sched_rq_select_entity(&sched->sched_rq[i]); 806 + entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? 807 + drm_sched_rq_select_entity_fifo(&sched->sched_rq[i]) : 808 + drm_sched_rq_select_entity_rr(&sched->sched_rq[i]); 895 809 if (entity) 896 810 break; 897 811 }
+14 -23
drivers/gpu/drm/solomon/ssd130x.c
··· 20 20 21 21 #include <drm/drm_atomic.h> 22 22 #include <drm/drm_atomic_helper.h> 23 + #include <drm/drm_crtc_helper.h> 23 24 #include <drm/drm_damage_helper.h> 24 25 #include <drm/drm_edid.h> 25 26 #include <drm/drm_fb_helper.h> ··· 579 578 struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 580 579 struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 581 580 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 581 + struct drm_atomic_helper_damage_iter iter; 582 582 struct drm_device *drm = plane->dev; 583 - struct drm_rect src_clip, dst_clip; 583 + struct drm_rect dst_clip; 584 + struct drm_rect damage; 584 585 int idx; 585 - 586 - if (!drm_atomic_helper_damage_merged(old_plane_state, plane_state, &src_clip)) 587 - return; 588 - 589 - dst_clip = plane_state->dst; 590 - if (!drm_rect_intersect(&dst_clip, &src_clip)) 591 - return; 592 586 593 587 if (!drm_dev_enter(drm, &idx)) 594 588 return; 595 589 596 - ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &dst_clip); 590 + drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state); 591 + drm_atomic_for_each_plane_damage(&iter, &damage) { 592 + dst_clip = plane_state->dst; 593 + 594 + if (!drm_rect_intersect(&dst_clip, &damage)) 595 + continue; 596 + 597 + ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &dst_clip); 598 + } 597 599 598 600 drm_dev_exit(idx); 599 601 } ··· 646 642 return MODE_OK; 647 643 } 648 644 649 - static int ssd130x_crtc_helper_atomic_check(struct drm_crtc *crtc, 650 - struct drm_atomic_state *new_state) 651 - { 652 - struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc); 653 - int ret; 654 - 655 - ret = drm_atomic_helper_check_crtc_state(new_crtc_state, false); 656 - if (ret) 657 - return ret; 658 - 659 - return drm_atomic_add_affected_planes(new_state, crtc); 660 - } 661 - 662 645 /* 663 646 * The CRTC 
is always enabled. Screen updates are performed by 664 647 * the primary plane's atomic_update function. Disabling clears ··· 653 662 */ 654 663 static const struct drm_crtc_helper_funcs ssd130x_crtc_helper_funcs = { 655 664 .mode_valid = ssd130x_crtc_helper_mode_valid, 656 - .atomic_check = ssd130x_crtc_helper_atomic_check, 665 + .atomic_check = drm_crtc_helper_atomic_check, 657 666 }; 658 667 659 668 static void ssd130x_crtc_reset(struct drm_crtc *crtc)
-1
drivers/gpu/drm/tegra/fb.c
··· 280 280 } 281 281 } 282 282 283 - drm->mode_config.fb_base = (resource_size_t)bo->iova; 284 283 info->screen_base = (void __iomem *)bo->vaddr + offset; 285 284 info->screen_size = size; 286 285 info->fix.smem_start = (unsigned long)(bo->iova + offset);
+9 -8
drivers/gpu/drm/tegra/gem.c
··· 84 84 goto free; 85 85 } 86 86 87 - map->sgt = dma_buf_map_attachment(map->attach, direction); 87 + map->sgt = dma_buf_map_attachment_unlocked(map->attach, direction); 88 88 if (IS_ERR(map->sgt)) { 89 89 dma_buf_detach(buf, map->attach); 90 90 err = PTR_ERR(map->sgt); ··· 160 160 static void tegra_bo_unpin(struct host1x_bo_mapping *map) 161 161 { 162 162 if (map->attach) { 163 - dma_buf_unmap_attachment(map->attach, map->sgt, map->direction); 163 + dma_buf_unmap_attachment_unlocked(map->attach, map->sgt, 164 + map->direction); 164 165 dma_buf_detach(map->attach->dmabuf, map->attach); 165 166 } else { 166 167 dma_unmap_sgtable(map->dev, map->sgt, map->direction, 0); ··· 182 181 if (obj->vaddr) { 183 182 return obj->vaddr; 184 183 } else if (obj->gem.import_attach) { 185 - ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map); 184 + ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map); 186 185 return ret ? NULL : map.vaddr; 187 186 } else { 188 187 return vmap(obj->pages, obj->num_pages, VM_MAP, ··· 198 197 if (obj->vaddr) 199 198 return; 200 199 else if (obj->gem.import_attach) 201 - dma_buf_vunmap(obj->gem.import_attach->dmabuf, &map); 200 + dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map); 202 201 else 203 202 vunmap(addr); 204 203 } ··· 462 461 463 462 get_dma_buf(buf); 464 463 465 - bo->sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE); 464 + bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); 466 465 if (IS_ERR(bo->sgt)) { 467 466 err = PTR_ERR(bo->sgt); 468 467 goto detach; ··· 480 479 481 480 detach: 482 481 if (!IS_ERR_OR_NULL(bo->sgt)) 483 - dma_buf_unmap_attachment(attach, bo->sgt, DMA_TO_DEVICE); 482 + dma_buf_unmap_attachment_unlocked(attach, bo->sgt, DMA_TO_DEVICE); 484 483 485 484 dma_buf_detach(buf, attach); 486 485 dma_buf_put(buf); ··· 509 508 tegra_bo_iommu_unmap(tegra, bo); 510 509 511 510 if (gem->import_attach) { 512 - dma_buf_unmap_attachment(gem->import_attach, bo->sgt, 513 - DMA_TO_DEVICE); 511 
+ dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, 512 + DMA_TO_DEVICE); 514 513 drm_prime_gem_destroy(gem, NULL); 515 514 } else { 516 515 tegra_bo_free(gem->dev, bo);
+11 -3
drivers/gpu/drm/tests/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - obj-$(CONFIG_DRM_KUNIT_TEST) += drm_format_helper_test.o drm_damage_helper_test.o \ 4 - drm_cmdline_parser_test.o drm_rect_test.o drm_format_test.o drm_plane_helper_test.o \ 5 - drm_dp_mst_helper_test.o drm_framebuffer_test.o drm_buddy_test.o drm_mm_test.o 3 + obj-$(CONFIG_DRM_KUNIT_TEST) += \ 4 + drm_buddy_test.o \ 5 + drm_cmdline_parser_test.o \ 6 + drm_damage_helper_test.o \ 7 + drm_dp_mst_helper_test.o \ 8 + drm_format_helper_test.o \ 9 + drm_format_test.o \ 10 + drm_framebuffer_test.o \ 11 + drm_mm_test.o \ 12 + drm_plane_helper_test.o \ 13 + drm_rect_test.o
+295 -150
drivers/gpu/drm/tests/drm_dp_mst_helper_test.c
··· 5 5 * Copyright (c) 2022 Maíra Canal <mairacanal@riseup.net> 6 6 */ 7 7 8 - #define PREFIX_STR "[drm_dp_mst_helper]" 9 - 10 8 #include <kunit/test.h> 11 - 12 - #include <linux/random.h> 13 9 14 10 #include <drm/display/drm_dp_mst_helper.h> 15 11 #include <drm/drm_print.h> 16 12 17 13 #include "../display/drm_dp_mst_topology_internal.h" 18 14 15 + struct drm_dp_mst_calc_pbn_mode_test { 16 + const int clock; 17 + const int bpp; 18 + const bool dsc; 19 + const int expected; 20 + }; 21 + 22 + static const struct drm_dp_mst_calc_pbn_mode_test drm_dp_mst_calc_pbn_mode_cases[] = { 23 + { 24 + .clock = 154000, 25 + .bpp = 30, 26 + .dsc = false, 27 + .expected = 689 28 + }, 29 + { 30 + .clock = 234000, 31 + .bpp = 30, 32 + .dsc = false, 33 + .expected = 1047 34 + }, 35 + { 36 + .clock = 297000, 37 + .bpp = 24, 38 + .dsc = false, 39 + .expected = 1063 40 + }, 41 + { 42 + .clock = 332880, 43 + .bpp = 24, 44 + .dsc = true, 45 + .expected = 50 46 + }, 47 + { 48 + .clock = 324540, 49 + .bpp = 24, 50 + .dsc = true, 51 + .expected = 49 52 + }, 53 + }; 54 + 19 55 static void drm_test_dp_mst_calc_pbn_mode(struct kunit *test) 20 56 { 21 - int pbn, i; 22 - const struct { 23 - int rate; 24 - int bpp; 25 - int expected; 26 - bool dsc; 27 - } test_params[] = { 28 - { 154000, 30, 689, false }, 29 - { 234000, 30, 1047, false }, 30 - { 297000, 24, 1063, false }, 31 - { 332880, 24, 50, true }, 32 - { 324540, 24, 49, true }, 33 - }; 57 + const struct drm_dp_mst_calc_pbn_mode_test *params = test->param_value; 34 58 35 - for (i = 0; i < ARRAY_SIZE(test_params); i++) { 36 - pbn = drm_dp_calc_pbn_mode(test_params[i].rate, 37 - test_params[i].bpp, 38 - test_params[i].dsc); 39 - KUNIT_EXPECT_EQ_MSG(test, pbn, test_params[i].expected, 40 - "Expected PBN %d for clock %d bpp %d, got %d\n", 41 - test_params[i].expected, test_params[i].rate, 42 - test_params[i].bpp, pbn); 43 - } 59 + KUNIT_EXPECT_EQ(test, drm_dp_calc_pbn_mode(params->clock, params->bpp, params->dsc), 60 + params->expected); 44 61 } 
62 + 63 + static void dp_mst_calc_pbn_mode_desc(const struct drm_dp_mst_calc_pbn_mode_test *t, char *desc) 64 + { 65 + sprintf(desc, "Clock %d BPP %d DSC %s", t->clock, t->bpp, t->dsc ? "enabled" : "disabled"); 66 + } 67 + 68 + KUNIT_ARRAY_PARAM(drm_dp_mst_calc_pbn_mode, drm_dp_mst_calc_pbn_mode_cases, 69 + dp_mst_calc_pbn_mode_desc); 70 + 71 + static u8 data[] = { 0xff, 0x00, 0xdd }; 72 + 73 + struct drm_dp_mst_sideband_msg_req_test { 74 + const char *desc; 75 + const struct drm_dp_sideband_msg_req_body in; 76 + }; 77 + 78 + static const struct drm_dp_mst_sideband_msg_req_test drm_dp_mst_sideband_msg_req_cases[] = { 79 + { 80 + .desc = "DP_ENUM_PATH_RESOURCES with port number", 81 + .in = { 82 + .req_type = DP_ENUM_PATH_RESOURCES, 83 + .u.port_num.port_number = 5, 84 + }, 85 + }, 86 + { 87 + .desc = "DP_POWER_UP_PHY with port number", 88 + .in = { 89 + .req_type = DP_POWER_UP_PHY, 90 + .u.port_num.port_number = 5, 91 + }, 92 + }, 93 + { 94 + .desc = "DP_POWER_DOWN_PHY with port number", 95 + .in = { 96 + .req_type = DP_POWER_DOWN_PHY, 97 + .u.port_num.port_number = 5, 98 + }, 99 + }, 100 + { 101 + .desc = "DP_ALLOCATE_PAYLOAD with SDP stream sinks", 102 + .in = { 103 + .req_type = DP_ALLOCATE_PAYLOAD, 104 + .u.allocate_payload.number_sdp_streams = 3, 105 + .u.allocate_payload.sdp_stream_sink = { 1, 2, 3 }, 106 + }, 107 + }, 108 + { 109 + .desc = "DP_ALLOCATE_PAYLOAD with port number", 110 + .in = { 111 + .req_type = DP_ALLOCATE_PAYLOAD, 112 + .u.allocate_payload.port_number = 0xf, 113 + }, 114 + }, 115 + { 116 + .desc = "DP_ALLOCATE_PAYLOAD with VCPI", 117 + .in = { 118 + .req_type = DP_ALLOCATE_PAYLOAD, 119 + .u.allocate_payload.vcpi = 0x7f, 120 + }, 121 + }, 122 + { 123 + .desc = "DP_ALLOCATE_PAYLOAD with PBN", 124 + .in = { 125 + .req_type = DP_ALLOCATE_PAYLOAD, 126 + .u.allocate_payload.pbn = U16_MAX, 127 + }, 128 + }, 129 + { 130 + .desc = "DP_QUERY_PAYLOAD with port number", 131 + .in = { 132 + .req_type = DP_QUERY_PAYLOAD, 133 + .u.query_payload.port_number 
= 0xf, 134 + }, 135 + }, 136 + { 137 + .desc = "DP_QUERY_PAYLOAD with VCPI", 138 + .in = { 139 + .req_type = DP_QUERY_PAYLOAD, 140 + .u.query_payload.vcpi = 0x7f, 141 + }, 142 + }, 143 + { 144 + .desc = "DP_REMOTE_DPCD_READ with port number", 145 + .in = { 146 + .req_type = DP_REMOTE_DPCD_READ, 147 + .u.dpcd_read.port_number = 0xf, 148 + }, 149 + }, 150 + { 151 + .desc = "DP_REMOTE_DPCD_READ with DPCD address", 152 + .in = { 153 + .req_type = DP_REMOTE_DPCD_READ, 154 + .u.dpcd_read.dpcd_address = 0xfedcb, 155 + }, 156 + }, 157 + { 158 + .desc = "DP_REMOTE_DPCD_READ with max number of bytes", 159 + .in = { 160 + .req_type = DP_REMOTE_DPCD_READ, 161 + .u.dpcd_read.num_bytes = U8_MAX, 162 + }, 163 + }, 164 + { 165 + .desc = "DP_REMOTE_DPCD_WRITE with port number", 166 + .in = { 167 + .req_type = DP_REMOTE_DPCD_WRITE, 168 + .u.dpcd_write.port_number = 0xf, 169 + }, 170 + }, 171 + { 172 + .desc = "DP_REMOTE_DPCD_WRITE with DPCD address", 173 + .in = { 174 + .req_type = DP_REMOTE_DPCD_WRITE, 175 + .u.dpcd_write.dpcd_address = 0xfedcb, 176 + }, 177 + }, 178 + { 179 + .desc = "DP_REMOTE_DPCD_WRITE with data array", 180 + .in = { 181 + .req_type = DP_REMOTE_DPCD_WRITE, 182 + .u.dpcd_write.num_bytes = ARRAY_SIZE(data), 183 + .u.dpcd_write.bytes = data, 184 + }, 185 + }, 186 + { 187 + .desc = "DP_REMOTE_I2C_READ with port number", 188 + .in = { 189 + .req_type = DP_REMOTE_I2C_READ, 190 + .u.i2c_read.port_number = 0xf, 191 + }, 192 + }, 193 + { 194 + .desc = "DP_REMOTE_I2C_READ with I2C device ID", 195 + .in = { 196 + .req_type = DP_REMOTE_I2C_READ, 197 + .u.i2c_read.read_i2c_device_id = 0x7f, 198 + }, 199 + }, 200 + { 201 + .desc = "DP_REMOTE_I2C_READ with transactions array", 202 + .in = { 203 + .req_type = DP_REMOTE_I2C_READ, 204 + .u.i2c_read.num_transactions = 3, 205 + .u.i2c_read.num_bytes_read = ARRAY_SIZE(data) * 3, 206 + .u.i2c_read.transactions = { 207 + { .bytes = data, .num_bytes = ARRAY_SIZE(data), .i2c_dev_id = 0x7f, 208 + .i2c_transaction_delay = 0xf, }, 209 + { 
.bytes = data, .num_bytes = ARRAY_SIZE(data), .i2c_dev_id = 0x7e, 210 + .i2c_transaction_delay = 0xe, }, 211 + { .bytes = data, .num_bytes = ARRAY_SIZE(data), .i2c_dev_id = 0x7d, 212 + .i2c_transaction_delay = 0xd, }, 213 + }, 214 + }, 215 + }, 216 + { 217 + .desc = "DP_REMOTE_I2C_WRITE with port number", 218 + .in = { 219 + .req_type = DP_REMOTE_I2C_WRITE, 220 + .u.i2c_write.port_number = 0xf, 221 + }, 222 + }, 223 + { 224 + .desc = "DP_REMOTE_I2C_WRITE with I2C device ID", 225 + .in = { 226 + .req_type = DP_REMOTE_I2C_WRITE, 227 + .u.i2c_write.write_i2c_device_id = 0x7f, 228 + }, 229 + }, 230 + { 231 + .desc = "DP_REMOTE_I2C_WRITE with data array", 232 + .in = { 233 + .req_type = DP_REMOTE_I2C_WRITE, 234 + .u.i2c_write.num_bytes = ARRAY_SIZE(data), 235 + .u.i2c_write.bytes = data, 236 + }, 237 + }, 238 + { 239 + .desc = "DP_QUERY_STREAM_ENC_STATUS with stream ID", 240 + .in = { 241 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 242 + .u.enc_status.stream_id = 1, 243 + }, 244 + }, 245 + { 246 + .desc = "DP_QUERY_STREAM_ENC_STATUS with client ID", 247 + .in = { 248 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 249 + .u.enc_status.client_id = { 0x4f, 0x7f, 0xb4, 0x00, 0x8c, 0x0d, 0x67 }, 250 + }, 251 + }, 252 + { 253 + .desc = "DP_QUERY_STREAM_ENC_STATUS with stream event", 254 + .in = { 255 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 256 + .u.enc_status.stream_event = 3, 257 + }, 258 + }, 259 + { 260 + .desc = "DP_QUERY_STREAM_ENC_STATUS with valid stream event", 261 + .in = { 262 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 263 + .u.enc_status.valid_stream_event = 0, 264 + }, 265 + }, 266 + { 267 + .desc = "DP_QUERY_STREAM_ENC_STATUS with stream behavior", 268 + .in = { 269 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 270 + .u.enc_status.stream_behavior = 3, 271 + }, 272 + }, 273 + { 274 + .desc = "DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior", 275 + .in = { 276 + .req_type = DP_QUERY_STREAM_ENC_STATUS, 277 + .u.enc_status.valid_stream_behavior = 1, 278 + } 279 + }, 280 
+ }; 45 281 46 282 static bool 47 283 sideband_msg_req_equal(const struct drm_dp_sideband_msg_req_body *in, ··· 354 118 return true; 355 119 } 356 120 357 - static bool 358 - sideband_msg_req_encode_decode(struct drm_dp_sideband_msg_req_body *in) 121 + static void drm_test_dp_mst_msg_printf(struct drm_printer *p, struct va_format *vaf) 359 122 { 123 + struct kunit *test = p->arg; 124 + 125 + kunit_err(test, "%pV", vaf); 126 + } 127 + 128 + static void drm_test_dp_mst_sideband_msg_req_decode(struct kunit *test) 129 + { 130 + const struct drm_dp_mst_sideband_msg_req_test *params = test->param_value; 131 + const struct drm_dp_sideband_msg_req_body *in = &params->in; 360 132 struct drm_dp_sideband_msg_req_body *out; 361 - struct drm_printer p = drm_err_printer(PREFIX_STR); 362 133 struct drm_dp_sideband_msg_tx *txmsg; 363 - int i, ret; 364 - bool result = true; 134 + struct drm_printer p = { 135 + .printfn = drm_test_dp_mst_msg_printf, 136 + .arg = test 137 + }; 138 + int i; 365 139 366 - out = kzalloc(sizeof(*out), GFP_KERNEL); 367 - if (!out) 368 - return false; 140 + out = kunit_kzalloc(test, sizeof(*out), GFP_KERNEL); 141 + KUNIT_ASSERT_NOT_NULL(test, out); 369 142 370 - txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL); 371 - if (!txmsg) { 372 - kfree(out); 373 - return false; 374 - } 143 + txmsg = kunit_kzalloc(test, sizeof(*txmsg), GFP_KERNEL); 144 + KUNIT_ASSERT_NOT_NULL(test, txmsg); 375 145 376 146 drm_dp_encode_sideband_req(in, txmsg); 377 - ret = drm_dp_decode_sideband_req(txmsg, out); 378 - if (ret < 0) { 379 - drm_printf(&p, "Failed to decode sideband request: %d\n", 380 - ret); 381 - result = false; 382 - goto out; 383 - } 147 + KUNIT_EXPECT_GE_MSG(test, drm_dp_decode_sideband_req(txmsg, out), 0, 148 + "Failed to decode sideband request"); 384 149 385 150 if (!sideband_msg_req_equal(in, out)) { 386 - drm_printf(&p, "Encode/decode failed, expected:\n"); 151 + KUNIT_FAIL(test, "Encode/decode failed"); 152 + kunit_err(test, "Expected:"); 387 153 
drm_dp_dump_sideband_msg_req_body(in, 1, &p); 388 - drm_printf(&p, "Got:\n"); 154 + kunit_err(test, "Got:"); 389 155 drm_dp_dump_sideband_msg_req_body(out, 1, &p); 390 - result = false; 391 - goto out; 392 156 } 393 157 394 158 switch (in->req_type) { ··· 403 167 kfree(out->u.i2c_write.bytes); 404 168 break; 405 169 } 406 - 407 - /* Clear everything but the req_type for the input */ 408 - memset(&in->u, 0, sizeof(in->u)); 409 - 410 - out: 411 - kfree(out); 412 - kfree(txmsg); 413 - return result; 414 170 } 415 171 416 - static void drm_test_dp_mst_sideband_msg_req_decode(struct kunit *test) 172 + static void 173 + drm_dp_mst_sideband_msg_req_desc(const struct drm_dp_mst_sideband_msg_req_test *t, char *desc) 417 174 { 418 - struct drm_dp_sideband_msg_req_body in = { 0 }; 419 - u8 data[] = { 0xff, 0x0, 0xdd }; 420 - int i; 421 - 422 - in.req_type = DP_ENUM_PATH_RESOURCES; 423 - in.u.port_num.port_number = 5; 424 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 425 - 426 - in.req_type = DP_POWER_UP_PHY; 427 - in.u.port_num.port_number = 5; 428 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 429 - 430 - in.req_type = DP_POWER_DOWN_PHY; 431 - in.u.port_num.port_number = 5; 432 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 433 - 434 - in.req_type = DP_ALLOCATE_PAYLOAD; 435 - in.u.allocate_payload.number_sdp_streams = 3; 436 - for (i = 0; i < in.u.allocate_payload.number_sdp_streams; i++) 437 - in.u.allocate_payload.sdp_stream_sink[i] = i + 1; 438 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 439 - in.u.allocate_payload.port_number = 0xf; 440 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 441 - in.u.allocate_payload.vcpi = 0x7f; 442 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 443 - in.u.allocate_payload.pbn = U16_MAX; 444 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 445 - 446 - in.req_type = DP_QUERY_PAYLOAD; 447 - in.u.query_payload.port_number = 
0xf; 448 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 449 - in.u.query_payload.vcpi = 0x7f; 450 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 451 - 452 - in.req_type = DP_REMOTE_DPCD_READ; 453 - in.u.dpcd_read.port_number = 0xf; 454 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 455 - in.u.dpcd_read.dpcd_address = 0xfedcb; 456 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 457 - in.u.dpcd_read.num_bytes = U8_MAX; 458 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 459 - 460 - in.req_type = DP_REMOTE_DPCD_WRITE; 461 - in.u.dpcd_write.port_number = 0xf; 462 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 463 - in.u.dpcd_write.dpcd_address = 0xfedcb; 464 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 465 - in.u.dpcd_write.num_bytes = ARRAY_SIZE(data); 466 - in.u.dpcd_write.bytes = data; 467 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 468 - 469 - in.req_type = DP_REMOTE_I2C_READ; 470 - in.u.i2c_read.port_number = 0xf; 471 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 472 - in.u.i2c_read.read_i2c_device_id = 0x7f; 473 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 474 - in.u.i2c_read.num_transactions = 3; 475 - in.u.i2c_read.num_bytes_read = ARRAY_SIZE(data) * 3; 476 - for (i = 0; i < in.u.i2c_read.num_transactions; i++) { 477 - in.u.i2c_read.transactions[i].bytes = data; 478 - in.u.i2c_read.transactions[i].num_bytes = ARRAY_SIZE(data); 479 - in.u.i2c_read.transactions[i].i2c_dev_id = 0x7f & ~i; 480 - in.u.i2c_read.transactions[i].i2c_transaction_delay = 0xf & ~i; 481 - } 482 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 483 - 484 - in.req_type = DP_REMOTE_I2C_WRITE; 485 - in.u.i2c_write.port_number = 0xf; 486 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 487 - in.u.i2c_write.write_i2c_device_id = 0x7f; 488 - KUNIT_EXPECT_TRUE(test, 
sideband_msg_req_encode_decode(&in)); 489 - in.u.i2c_write.num_bytes = ARRAY_SIZE(data); 490 - in.u.i2c_write.bytes = data; 491 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 492 - 493 - in.req_type = DP_QUERY_STREAM_ENC_STATUS; 494 - in.u.enc_status.stream_id = 1; 495 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 496 - get_random_bytes(in.u.enc_status.client_id, 497 - sizeof(in.u.enc_status.client_id)); 498 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 499 - in.u.enc_status.stream_event = 3; 500 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 501 - in.u.enc_status.valid_stream_event = 0; 502 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 503 - in.u.enc_status.stream_behavior = 3; 504 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 505 - in.u.enc_status.valid_stream_behavior = 1; 506 - KUNIT_EXPECT_TRUE(test, sideband_msg_req_encode_decode(&in)); 175 + strcpy(desc, t->desc); 507 176 } 177 + 178 + KUNIT_ARRAY_PARAM(drm_dp_mst_sideband_msg_req, drm_dp_mst_sideband_msg_req_cases, 179 + drm_dp_mst_sideband_msg_req_desc); 508 180 509 181 static struct kunit_case drm_dp_mst_helper_tests[] = { 510 - KUNIT_CASE(drm_test_dp_mst_calc_pbn_mode), 511 - KUNIT_CASE(drm_test_dp_mst_sideband_msg_req_decode), 182 + KUNIT_CASE_PARAM(drm_test_dp_mst_calc_pbn_mode, drm_dp_mst_calc_pbn_mode_gen_params), 183 + KUNIT_CASE_PARAM(drm_test_dp_mst_sideband_msg_req_decode, 184 + drm_dp_mst_sideband_msg_req_gen_params), 512 185 { } 513 186 }; 514 187
+13
drivers/gpu/drm/tiny/Kconfig
··· 51 51 This is a KMS driver for projectors which use the GM12U320 chipset 52 52 for video transfer over USB2/3, such as the Acer C120 mini projector. 53 53 54 + config DRM_OFDRM 55 + tristate "Open Firmware display driver" 56 + depends on DRM && OF && (PPC || COMPILE_TEST) 57 + select APERTURE_HELPERS 58 + select DRM_GEM_SHMEM_HELPER 59 + select DRM_KMS_HELPER 60 + help 61 + DRM driver for Open Firmware framebuffers. 62 + 63 + This driver assumes that the display hardware has been initialized 64 + by the Open Firmware before the kernel boots. Scanout buffer, size, 65 + and display format must be provided via device tree. 66 + 54 67 config DRM_PANEL_MIPI_DBI 55 68 tristate "DRM support for MIPI DBI compatible panels" 56 69 depends on DRM && SPI
+1
drivers/gpu/drm/tiny/Makefile
··· 4 4 obj-$(CONFIG_DRM_BOCHS) += bochs.o 5 5 obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus.o 6 6 obj-$(CONFIG_DRM_GM12U320) += gm12u320.o 7 + obj-$(CONFIG_DRM_OFDRM) += ofdrm.o 7 8 obj-$(CONFIG_DRM_PANEL_MIPI_DBI) += panel-mipi-dbi.o 8 9 obj-$(CONFIG_DRM_SIMPLEDRM) += simpledrm.o 9 10 obj-$(CONFIG_TINYDRM_HX8357D) += hx8357d.o
-1
drivers/gpu/drm/tiny/bochs.c
··· 543 543 bochs->dev->mode_config.max_width = 8192; 544 544 bochs->dev->mode_config.max_height = 8192; 545 545 546 - bochs->dev->mode_config.fb_base = bochs->fb_base; 547 546 bochs->dev->mode_config.preferred_depth = 24; 548 547 bochs->dev->mode_config.prefer_shadow = 0; 549 548 bochs->dev->mode_config.prefer_shadow_fbdev = 1;
+1424
drivers/gpu/drm/tiny/ofdrm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #include <linux/of_address.h> 4 + #include <linux/pci.h> 5 + #include <linux/platform_device.h> 6 + 7 + #include <drm/drm_aperture.h> 8 + #include <drm/drm_atomic.h> 9 + #include <drm/drm_atomic_state_helper.h> 10 + #include <drm/drm_connector.h> 11 + #include <drm/drm_damage_helper.h> 12 + #include <drm/drm_device.h> 13 + #include <drm/drm_drv.h> 14 + #include <drm/drm_fb_helper.h> 15 + #include <drm/drm_format_helper.h> 16 + #include <drm/drm_framebuffer.h> 17 + #include <drm/drm_gem_atomic_helper.h> 18 + #include <drm/drm_gem_framebuffer_helper.h> 19 + #include <drm/drm_gem_shmem_helper.h> 20 + #include <drm/drm_managed.h> 21 + #include <drm/drm_modeset_helper_vtables.h> 22 + #include <drm/drm_plane_helper.h> 23 + #include <drm/drm_probe_helper.h> 24 + #include <drm/drm_simple_kms_helper.h> 25 + 26 + #define DRIVER_NAME "ofdrm" 27 + #define DRIVER_DESC "DRM driver for OF platform devices" 28 + #define DRIVER_DATE "20220501" 29 + #define DRIVER_MAJOR 1 30 + #define DRIVER_MINOR 0 31 + 32 + #define PCI_VENDOR_ID_ATI_R520 0x7100 33 + #define PCI_VENDOR_ID_ATI_R600 0x9400 34 + 35 + #define OFDRM_GAMMA_LUT_SIZE 256 36 + 37 + /* Definitions used by the Avivo palette */ 38 + #define AVIVO_DC_LUT_RW_SELECT 0x6480 39 + #define AVIVO_DC_LUT_RW_MODE 0x6484 40 + #define AVIVO_DC_LUT_RW_INDEX 0x6488 41 + #define AVIVO_DC_LUT_SEQ_COLOR 0x648c 42 + #define AVIVO_DC_LUT_PWL_DATA 0x6490 43 + #define AVIVO_DC_LUT_30_COLOR 0x6494 44 + #define AVIVO_DC_LUT_READ_PIPE_SELECT 0x6498 45 + #define AVIVO_DC_LUT_WRITE_EN_MASK 0x649c 46 + #define AVIVO_DC_LUT_AUTOFILL 0x64a0 47 + #define AVIVO_DC_LUTA_CONTROL 0x64c0 48 + #define AVIVO_DC_LUTA_BLACK_OFFSET_BLUE 0x64c4 49 + #define AVIVO_DC_LUTA_BLACK_OFFSET_GREEN 0x64c8 50 + #define AVIVO_DC_LUTA_BLACK_OFFSET_RED 0x64cc 51 + #define AVIVO_DC_LUTA_WHITE_OFFSET_BLUE 0x64d0 52 + #define AVIVO_DC_LUTA_WHITE_OFFSET_GREEN 0x64d4 53 + #define AVIVO_DC_LUTA_WHITE_OFFSET_RED 0x64d8 54 + 
#define AVIVO_DC_LUTB_CONTROL				0x6cc0
#define AVIVO_DC_LUTB_BLACK_OFFSET_BLUE			0x6cc4
#define AVIVO_DC_LUTB_BLACK_OFFSET_GREEN		0x6cc8
#define AVIVO_DC_LUTB_BLACK_OFFSET_RED			0x6ccc
#define AVIVO_DC_LUTB_WHITE_OFFSET_BLUE			0x6cd0
#define AVIVO_DC_LUTB_WHITE_OFFSET_GREEN		0x6cd4
#define AVIVO_DC_LUTB_WHITE_OFFSET_RED			0x6cd8

enum ofdrm_model {
	OFDRM_MODEL_UNKNOWN,
	OFDRM_MODEL_MACH64,	/* ATI Mach64 */
	OFDRM_MODEL_RAGE128,	/* ATI Rage128 */
	OFDRM_MODEL_RAGE_M3A,	/* ATI Rage Mobility M3 Head A */
	OFDRM_MODEL_RAGE_M3B,	/* ATI Rage Mobility M3 Head B */
	OFDRM_MODEL_RADEON,	/* ATI Radeon */
	OFDRM_MODEL_GXT2000,	/* IBM GXT2000 */
	OFDRM_MODEL_AVIVO,	/* ATI R5xx */
	OFDRM_MODEL_QEMU,	/* QEMU VGA */
};

/*
 * Helpers for display nodes
 */

static int display_get_validated_int(struct drm_device *dev, const char *name, uint32_t value)
{
	if (value > INT_MAX) {
		drm_err(dev, "invalid framebuffer %s of %u\n", name, value);
		return -EINVAL;
	}
	return (int)value;
}

static int display_get_validated_int0(struct drm_device *dev, const char *name, uint32_t value)
{
	if (!value) {
		drm_err(dev, "invalid framebuffer %s of %u\n", name, value);
		return -EINVAL;
	}
	return display_get_validated_int(dev, name, value);
}

static const struct drm_format_info *display_get_validated_format(struct drm_device *dev,
								  u32 depth, bool big_endian)
{
	const struct drm_format_info *info;
	u32 format;

	switch (depth) {
	case 8:
		format = drm_mode_legacy_fb_format(8, 8);
		break;
	case 15:
	case 16:
		format = drm_mode_legacy_fb_format(16, depth);
		break;
	case 32:
		format = drm_mode_legacy_fb_format(32, 24);
		break;
	default:
		drm_err(dev, "unsupported framebuffer depth %u\n", depth);
		return ERR_PTR(-EINVAL);
	}

	/*
	 * DRM formats assume little-endian byte order. Update the format
	 * if the scanout buffer uses big-endian ordering.
	 */
	if (big_endian) {
		switch (format) {
		case DRM_FORMAT_XRGB8888:
			format = DRM_FORMAT_BGRX8888;
			break;
		case DRM_FORMAT_ARGB8888:
			format = DRM_FORMAT_BGRA8888;
			break;
		case DRM_FORMAT_RGB565:
			format = DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN;
			break;
		case DRM_FORMAT_XRGB1555:
			format = DRM_FORMAT_XRGB1555 | DRM_FORMAT_BIG_ENDIAN;
			break;
		default:
			break;
		}
	}

	info = drm_format_info(format);
	if (!info) {
		drm_err(dev, "cannot find framebuffer format for depth %u\n", depth);
		return ERR_PTR(-EINVAL);
	}

	return info;
}

static int display_read_u32_of(struct drm_device *dev, struct device_node *of_node,
			       const char *name, u32 *value)
{
	int ret = of_property_read_u32(of_node, name, value);

	if (ret)
		drm_err(dev, "cannot parse framebuffer %s: error %d\n", name, ret);
	return ret;
}

static bool display_get_big_endian_of(struct drm_device *dev, struct device_node *of_node)
{
	bool big_endian;

#ifdef __BIG_ENDIAN
	big_endian = true;
	if (of_get_property(of_node, "little-endian", NULL))
		big_endian = false;
#else
	big_endian = false;
	if (of_get_property(of_node, "big-endian", NULL))
		big_endian = true;
#endif

	return big_endian;
}

static int display_get_width_of(struct drm_device *dev, struct device_node *of_node)
{
	u32 width;
	int ret = display_read_u32_of(dev, of_node, "width", &width);

	if (ret)
		return ret;
	return display_get_validated_int0(dev, "width", width);
}

static int display_get_height_of(struct drm_device *dev, struct device_node *of_node)
{
	u32 height;
	int ret = display_read_u32_of(dev, of_node, "height", &height);

	if (ret)
		return ret;
	return display_get_validated_int0(dev, "height", height);
}

static int display_get_depth_of(struct drm_device *dev, struct device_node *of_node)
{
	u32 depth;
	int ret = display_read_u32_of(dev, of_node, "depth", &depth);

	if (ret)
		return ret;
	return display_get_validated_int0(dev, "depth", depth);
}

static int display_get_linebytes_of(struct drm_device *dev, struct device_node *of_node)
{
	u32 linebytes;
	int ret = display_read_u32_of(dev, of_node, "linebytes", &linebytes);

	if (ret)
		return ret;
	return display_get_validated_int(dev, "linebytes", linebytes);
}

static u64 display_get_address_of(struct drm_device *dev, struct device_node *of_node)
{
	u32 address;
	int ret;

	/*
	 * Not all devices provide an address property, it's not
	 * a bug if this fails. The driver will try to find the
	 * framebuffer base address from the device's memory regions.
	 */
	ret = of_property_read_u32(of_node, "address", &address);
	if (ret)
		return OF_BAD_ADDR;

	return address;
}

static bool is_avivo(u32 vendor, u32 device)
{
	/* This will match most R5xx */
	return (vendor == PCI_VENDOR_ID_ATI) &&
	       ((device >= PCI_VENDOR_ID_ATI_R520 && device < 0x7800) ||
		(device >= PCI_VENDOR_ID_ATI_R600));
}

static enum ofdrm_model display_get_model_of(struct drm_device *dev, struct device_node *of_node)
{
	enum ofdrm_model model = OFDRM_MODEL_UNKNOWN;

	if (of_node_name_prefix(of_node, "ATY,Rage128")) {
		model = OFDRM_MODEL_RAGE128;
	} else if (of_node_name_prefix(of_node, "ATY,RageM3pA") ||
		   of_node_name_prefix(of_node, "ATY,RageM3p12A")) {
		model = OFDRM_MODEL_RAGE_M3A;
	} else if (of_node_name_prefix(of_node, "ATY,RageM3pB")) {
		model = OFDRM_MODEL_RAGE_M3B;
	} else if (of_node_name_prefix(of_node, "ATY,Rage6")) {
		model = OFDRM_MODEL_RADEON;
	} else if (of_node_name_prefix(of_node, "ATY,")) {
		model = OFDRM_MODEL_MACH64;
	} else if (of_device_is_compatible(of_node, "pci1014,b7") ||
		   of_device_is_compatible(of_node, "pci1014,21c")) {
		model = OFDRM_MODEL_GXT2000;
	} else if (of_node_name_prefix(of_node, "vga,Display-")) {
		struct device_node *of_parent;
		const __be32 *vendor_p, *device_p;

		/* Look for AVIVO initialized by SLOF */
		of_parent = of_get_parent(of_node);
		vendor_p = of_get_property(of_parent, "vendor-id", NULL);
		device_p = of_get_property(of_parent, "device-id", NULL);
		if (vendor_p && device_p &&
		    is_avivo(be32_to_cpup(vendor_p), be32_to_cpup(device_p)))
			model = OFDRM_MODEL_AVIVO;
		of_node_put(of_parent);
	} else if (of_device_is_compatible(of_node, "qemu,std-vga")) {
		model = OFDRM_MODEL_QEMU;
	}

	return model;
}

/*
 * Open Firmware display device
 */
struct ofdrm_device;

struct ofdrm_device_funcs {
	void __iomem *(*cmap_ioremap)(struct ofdrm_device *odev,
				      struct device_node *of_node,
				      u64 fb_base);
	void (*cmap_write)(struct ofdrm_device *odev, unsigned char index,
			   unsigned char r, unsigned char g, unsigned char b);
};

struct ofdrm_device {
	struct drm_device dev;
	struct platform_device *pdev;

	const struct ofdrm_device_funcs *funcs;

	/* firmware-buffer settings */
	struct iosys_map screen_base;
	struct drm_display_mode mode;
	const struct drm_format_info *format;
	unsigned int pitch;

	/* colormap */
	void __iomem *cmap_base;

	/* modesetting */
	uint32_t formats[8];
	struct drm_plane primary_plane;
	struct drm_crtc crtc;
	struct drm_encoder encoder;
	struct drm_connector connector;
};

static struct ofdrm_device *ofdrm_device_of_dev(struct drm_device *dev)
{
	return container_of(dev, struct ofdrm_device, dev);
}

/*
 * Hardware
 */

#if defined(CONFIG_PCI)
static struct pci_dev *display_get_pci_dev_of(struct drm_device *dev, struct device_node *of_node)
{
	const __be32 *vendor_p, *device_p;
	u32 vendor, device;
	struct pci_dev *pcidev;

	vendor_p = of_get_property(of_node, "vendor-id", NULL);
	if (!vendor_p)
		return ERR_PTR(-ENODEV);
	vendor = be32_to_cpup(vendor_p);

	device_p = of_get_property(of_node, "device-id", NULL);
	if (!device_p)
		return ERR_PTR(-ENODEV);
	device = be32_to_cpup(device_p);

	pcidev = pci_get_device(vendor, device, NULL);
	if (!pcidev)
		return ERR_PTR(-ENODEV);

	return pcidev;
}

static void ofdrm_pci_release(void *data)
{
	struct pci_dev *pcidev = data;

	pci_disable_device(pcidev);
}

static int ofdrm_device_init_pci(struct ofdrm_device *odev)
{
	struct drm_device *dev = &odev->dev;
	struct platform_device *pdev = to_platform_device(dev->dev);
	struct device_node *of_node = pdev->dev.of_node;
	struct pci_dev *pcidev;
	int ret;

	/*
	 * Never use pcim_ or other managed helpers on the returned PCI
	 * device. Otherwise, probing the native driver will fail for
	 * resource conflicts. PCI-device management has to be tied to
	 * the lifetime of the platform device until the native driver
	 * takes over.
	 */
	pcidev = display_get_pci_dev_of(dev, of_node);
	if (IS_ERR(pcidev))
		return 0; /* no PCI device found; ignore the error */

	ret = pci_enable_device(pcidev);
	if (ret) {
		drm_err(dev, "pci_enable_device(%s) failed: %d\n",
			dev_name(&pcidev->dev), ret);
		return ret;
	}
	ret = devm_add_action_or_reset(&pdev->dev, ofdrm_pci_release, pcidev);
	if (ret)
		return ret;

	return 0;
}
#else
static int ofdrm_device_init_pci(struct ofdrm_device *odev)
{
	return 0;
}
#endif

/*
 * OF display settings
 */

static struct resource *ofdrm_find_fb_resource(struct ofdrm_device *odev,
					       struct resource *fb_res)
{
	struct platform_device *pdev = to_platform_device(odev->dev.dev);
	struct resource *res, *max_res = NULL;
	u32 i;

	for (i = 0; i < pdev->num_resources; ++i) {
		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
		if (!res)
			break; /* all resources processed */
		if (resource_size(res) < resource_size(fb_res))
			continue; /* resource too small */
		if (fb_res->start && resource_contains(res, fb_res))
			return res; /* resource contains framebuffer */
		if (!max_res || resource_size(res) > resource_size(max_res))
			max_res = res; /* store largest resource as fallback */
	}

	return max_res;
}

/*
 * Colormap / Palette
 */

static void __iomem *get_cmap_address_of(struct ofdrm_device *odev, struct device_node *of_node,
					 int bar_no, unsigned long offset, unsigned long size)
{
	struct drm_device *dev = &odev->dev;
	const __be32 *addr_p;
	u64 max_size, address;
	unsigned int flags;
	void __iomem *mem;

	addr_p = of_get_pci_address(of_node, bar_no, &max_size, &flags);
	if (!addr_p)
		addr_p = of_get_address(of_node, bar_no, &max_size, &flags);
	if (!addr_p)
		return ERR_PTR(-ENODEV);

	if ((flags & (IORESOURCE_IO | IORESOURCE_MEM)) == 0)
		return ERR_PTR(-ENODEV);

	if ((offset + size) >= max_size)
		return ERR_PTR(-ENODEV);

	address = of_translate_address(of_node, addr_p);
	if (address == OF_BAD_ADDR)
		return ERR_PTR(-ENODEV);

	mem = devm_ioremap(dev->dev, address + offset, size);
	if (!mem)
		return ERR_PTR(-ENOMEM);

	return mem;
}

static void __iomem *ofdrm_mach64_cmap_ioremap(struct ofdrm_device *odev,
					       struct device_node *of_node,
					       u64 fb_base)
{
	struct drm_device *dev = &odev->dev;
	u64 address;
	void __iomem *cmap_base;

	address = fb_base & 0xff000000ul;
	address += 0x7ff000;

	cmap_base = devm_ioremap(dev->dev, address, 0x1000);
	if (!cmap_base)
		return ERR_PTR(-ENOMEM);

	return cmap_base;
}

static void ofdrm_mach64_cmap_write(struct ofdrm_device *odev, unsigned char index,
				    unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *addr = odev->cmap_base + 0xcc0;
	void __iomem *data = odev->cmap_base + 0xcc0 + 1;

	writeb(index, addr);
	writeb(r, data);
	writeb(g, data);
	writeb(b, data);
}

static void __iomem *ofdrm_rage128_cmap_ioremap(struct ofdrm_device *odev,
						struct device_node *of_node,
						u64 fb_base)
{
	return get_cmap_address_of(odev, of_node, 2, 0, 0x1fff);
}

static void ofdrm_rage128_cmap_write(struct ofdrm_device *odev, unsigned char index,
				     unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *addr = odev->cmap_base + 0xb0;
	void __iomem *data = odev->cmap_base + 0xb4;
	u32 color = (r << 16) | (g << 8) | b;

	writeb(index, addr);
	writel(color, data);
}

static void __iomem *ofdrm_rage_m3a_cmap_ioremap(struct ofdrm_device *odev,
						 struct device_node *of_node,
						 u64 fb_base)
{
	return get_cmap_address_of(odev, of_node, 2, 0, 0x1fff);
}

static void ofdrm_rage_m3a_cmap_write(struct ofdrm_device *odev, unsigned char index,
				      unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *dac_ctl = odev->cmap_base + 0x58;
	void __iomem *addr = odev->cmap_base + 0xb0;
	void __iomem *data = odev->cmap_base + 0xb4;
	u32 color = (r << 16) | (g << 8) | b;
	u32 val;

	/* Clear PALETTE_ACCESS_CNTL in DAC_CNTL */
	val = readl(dac_ctl);
	val &= ~0x20;
	writel(val, dac_ctl);

	/* Set color at palette index */
	writeb(index, addr);
	writel(color, data);
}

static void __iomem *ofdrm_rage_m3b_cmap_ioremap(struct ofdrm_device *odev,
						 struct device_node *of_node,
						 u64 fb_base)
{
	return get_cmap_address_of(odev, of_node, 2, 0, 0x1fff);
}

static void ofdrm_rage_m3b_cmap_write(struct ofdrm_device *odev, unsigned char index,
				      unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *dac_ctl = odev->cmap_base + 0x58;
	void __iomem *addr = odev->cmap_base + 0xb0;
	void __iomem *data = odev->cmap_base + 0xb4;
	u32 color = (r << 16) | (g << 8) | b;
	u32 val;

	/* Set PALETTE_ACCESS_CNTL in DAC_CNTL */
	val = readl(dac_ctl);
	val |= 0x20;
	writel(val, dac_ctl);

	/* Set color at palette index */
	writeb(index, addr);
	writel(color, data);
}

static void __iomem *ofdrm_radeon_cmap_ioremap(struct ofdrm_device *odev,
					       struct device_node *of_node,
					       u64 fb_base)
{
	return get_cmap_address_of(odev, of_node, 1, 0, 0x1fff);
}

static void __iomem *ofdrm_gxt2000_cmap_ioremap(struct ofdrm_device *odev,
						struct device_node *of_node,
						u64 fb_base)
{
	return get_cmap_address_of(odev, of_node, 0, 0x6000, 0x1000);
}

static void ofdrm_gxt2000_cmap_write(struct ofdrm_device *odev, unsigned char index,
				     unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *data = ((unsigned int __iomem *)odev->cmap_base) + index;
	u32 color = (r << 16) | (g << 8) | b;

	writel(color, data);
}

static void __iomem *ofdrm_avivo_cmap_ioremap(struct ofdrm_device *odev,
					      struct device_node *of_node,
					      u64 fb_base)
{
	struct device_node *of_parent;
	void __iomem *cmap_base;

	of_parent = of_get_parent(of_node);
	cmap_base = get_cmap_address_of(odev, of_parent, 0, 0, 0x10000);
	of_node_put(of_parent);

	return cmap_base;
}

static void ofdrm_avivo_cmap_write(struct ofdrm_device *odev, unsigned char index,
				   unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *lutsel = odev->cmap_base + AVIVO_DC_LUT_RW_SELECT;
	void __iomem *addr = odev->cmap_base + AVIVO_DC_LUT_RW_INDEX;
	void __iomem *data = odev->cmap_base + AVIVO_DC_LUT_30_COLOR;
	u32 color = (r << 22) | (g << 12) | (b << 2);

	/* Write to both LUTs for now */

	writel(1, lutsel);
	writeb(index, addr);
	writel(color, data);

	writel(0, lutsel);
	writeb(index, addr);
	writel(color, data);
}
static void __iomem *ofdrm_qemu_cmap_ioremap(struct ofdrm_device *odev,
					     struct device_node *of_node,
					     u64 fb_base)
{
	static const __be32 io_of_addr[3] = {
		cpu_to_be32(0x01000000),
		cpu_to_be32(0x00),
		cpu_to_be32(0x00),
	};

	struct drm_device *dev = &odev->dev;
	u64 address;
	void __iomem *cmap_base;

	address = of_translate_address(of_node, io_of_addr);
	if (address == OF_BAD_ADDR)
		return ERR_PTR(-ENODEV);

	cmap_base = devm_ioremap(dev->dev, address + 0x3c8, 2);
	if (!cmap_base)
		return ERR_PTR(-ENOMEM);

	return cmap_base;
}

static void ofdrm_qemu_cmap_write(struct ofdrm_device *odev, unsigned char index,
				  unsigned char r, unsigned char g, unsigned char b)
{
	void __iomem *addr = odev->cmap_base;
	void __iomem *data = odev->cmap_base + 1;

	writeb(index, addr);
	writeb(r, data);
	writeb(g, data);
	writeb(b, data);
}

static void ofdrm_device_set_gamma_linear(struct ofdrm_device *odev,
					  const struct drm_format_info *format)
{
	struct drm_device *dev = &odev->dev;
	int i;

	switch (format->format) {
	case DRM_FORMAT_RGB565:
	case DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN:
		/* Use better interpolation, to take 32 values from 0 to 255 */
		for (i = 0; i < OFDRM_GAMMA_LUT_SIZE / 8; i++) {
			unsigned char r = i * 8 + i / 4;
			unsigned char g = i * 4 + i / 16;
			unsigned char b = i * 8 + i / 4;

			odev->funcs->cmap_write(odev, i, r, g, b);
		}
		/* Green has one more bit, so add padding with 0 for red and blue. */
		for (i = OFDRM_GAMMA_LUT_SIZE / 8; i < OFDRM_GAMMA_LUT_SIZE / 4; i++) {
			unsigned char r = 0;
			unsigned char g = i * 4 + i / 16;
			unsigned char b = 0;

			odev->funcs->cmap_write(odev, i, r, g, b);
		}
		break;
	case DRM_FORMAT_XRGB8888:
	case DRM_FORMAT_BGRX8888:
		for (i = 0; i < OFDRM_GAMMA_LUT_SIZE; i++)
			odev->funcs->cmap_write(odev, i, i, i, i);
		break;
	default:
		drm_warn_once(dev, "Unsupported format %p4cc for gamma correction\n",
			      &format->format);
		break;
	}
}

static void ofdrm_device_set_gamma(struct ofdrm_device *odev,
				   const struct drm_format_info *format,
				   struct drm_color_lut *lut)
{
	struct drm_device *dev = &odev->dev;
	int i;

	switch (format->format) {
	case DRM_FORMAT_RGB565:
	case DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN:
		/* Use better interpolation, to take 32 values from lut[0] to lut[255] */
		for (i = 0; i < OFDRM_GAMMA_LUT_SIZE / 8; i++) {
			unsigned char r = lut[i * 8 + i / 4].red >> 8;
			unsigned char g = lut[i * 4 + i / 16].green >> 8;
			unsigned char b = lut[i * 8 + i / 4].blue >> 8;

			odev->funcs->cmap_write(odev, i, r, g, b);
		}
		/* Green has one more bit, so add padding with 0 for red and blue. */
		for (i = OFDRM_GAMMA_LUT_SIZE / 8; i < OFDRM_GAMMA_LUT_SIZE / 4; i++) {
			unsigned char r = 0;
			unsigned char g = lut[i * 4 + i / 16].green >> 8;
			unsigned char b = 0;

			odev->funcs->cmap_write(odev, i, r, g, b);
		}
		break;
	case DRM_FORMAT_XRGB8888:
	case DRM_FORMAT_BGRX8888:
		for (i = 0; i < OFDRM_GAMMA_LUT_SIZE; i++) {
			unsigned char r = lut[i].red >> 8;
			unsigned char g = lut[i].green >> 8;
			unsigned char b = lut[i].blue >> 8;

			odev->funcs->cmap_write(odev, i, r, g, b);
		}
		break;
	default:
		drm_warn_once(dev, "Unsupported format %p4cc for gamma correction\n",
			      &format->format);
		break;
	}
}

/*
 * Modesetting
 */

struct ofdrm_crtc_state {
	struct drm_crtc_state base;

	/* Primary-plane format; required for color mgmt. */
	const struct drm_format_info *format;
};

static struct ofdrm_crtc_state *to_ofdrm_crtc_state(struct drm_crtc_state *base)
{
	return container_of(base, struct ofdrm_crtc_state, base);
}

static void ofdrm_crtc_state_destroy(struct ofdrm_crtc_state *ofdrm_crtc_state)
{
	__drm_atomic_helper_crtc_destroy_state(&ofdrm_crtc_state->base);
	kfree(ofdrm_crtc_state);
}

/*
 * Support all formats of OF display and maybe more; in order
 * of preference. The display's update function will do any
 * conversion necessary.
 *
 * TODO: Add blit helpers for remaining formats and uncomment
 * constants.
 */
static const uint32_t ofdrm_primary_plane_formats[] = {
	DRM_FORMAT_XRGB8888,
	DRM_FORMAT_RGB565,
	//DRM_FORMAT_XRGB1555,
	//DRM_FORMAT_C8,
	/* Big-endian formats below */
	DRM_FORMAT_BGRX8888,
	DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN,
};

static const uint64_t ofdrm_primary_plane_format_modifiers[] = {
	DRM_FORMAT_MOD_LINEAR,
	DRM_FORMAT_MOD_INVALID
};

static int ofdrm_primary_plane_helper_atomic_check(struct drm_plane *plane,
						   struct drm_atomic_state *new_state)
{
	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(new_state, plane);
	struct drm_framebuffer *new_fb = new_plane_state->fb;
	struct drm_crtc *new_crtc = new_plane_state->crtc;
	struct drm_crtc_state *new_crtc_state = NULL;
	struct ofdrm_crtc_state *new_ofdrm_crtc_state;
	int ret;

	if (new_crtc)
		new_crtc_state = drm_atomic_get_new_crtc_state(new_state, new_plane_state->crtc);

	ret = drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
						  DRM_PLANE_NO_SCALING,
						  DRM_PLANE_NO_SCALING,
						  false, false);
	if (ret)
		return ret;
	else if (!new_plane_state->visible)
		return 0;

	new_crtc_state = drm_atomic_get_new_crtc_state(new_state, new_plane_state->crtc);

	new_ofdrm_crtc_state = to_ofdrm_crtc_state(new_crtc_state);
	new_ofdrm_crtc_state->format = new_fb->format;

	return 0;
}

static void ofdrm_primary_plane_helper_atomic_update(struct drm_plane *plane,
						     struct drm_atomic_state *state)
{
	struct drm_device *dev = plane->dev;
	struct ofdrm_device *odev = ofdrm_device_of_dev(dev);
	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane);
	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
	struct drm_framebuffer *fb = plane_state->fb;
	unsigned int dst_pitch = odev->pitch;
	const struct drm_format_info *dst_format = odev->format;
	struct drm_atomic_helper_damage_iter iter;
	struct drm_rect damage;
	int ret, idx;

	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
	if (ret)
		return;

	if (!drm_dev_enter(dev, &idx))
		goto out_drm_gem_fb_end_cpu_access;

	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
	drm_atomic_for_each_plane_damage(&iter, &damage) {
		struct iosys_map dst = odev->screen_base;
		struct drm_rect dst_clip = plane_state->dst;

		if (!drm_rect_intersect(&dst_clip, &damage))
			continue;

		iosys_map_incr(&dst, drm_fb_clip_offset(dst_pitch, dst_format, &dst_clip));
		drm_fb_blit(&dst, &dst_pitch, dst_format->format, shadow_plane_state->data, fb,
			    &damage);
	}

	drm_dev_exit(idx);
out_drm_gem_fb_end_cpu_access:
	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
}

static void ofdrm_primary_plane_helper_atomic_disable(struct drm_plane *plane,
						      struct drm_atomic_state *state)
{
	struct drm_device *dev = plane->dev;
	struct ofdrm_device *odev = ofdrm_device_of_dev(dev);
	struct iosys_map dst = odev->screen_base;
	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
	void __iomem *dst_vmap = dst.vaddr_iomem; /* TODO: Use mapping abstraction */
	unsigned int dst_pitch = odev->pitch;
	const struct drm_format_info *dst_format = odev->format;
	struct drm_rect dst_clip;
	unsigned long lines, linepixels, i;
	int idx;

	drm_rect_init(&dst_clip,
		      plane_state->src_x >> 16, plane_state->src_y >> 16,
		      plane_state->src_w >> 16, plane_state->src_h >> 16);

	lines = drm_rect_height(&dst_clip);
	linepixels = drm_rect_width(&dst_clip);

	if (!drm_dev_enter(dev, &idx))
		return;

	/* Clear buffer to black if disabled */
	dst_vmap += drm_fb_clip_offset(dst_pitch, dst_format, &dst_clip);
	for (i = 0; i < lines; ++i) {
		memset_io(dst_vmap, 0, linepixels * dst_format->cpp[0]);
		dst_vmap += dst_pitch;
	}

	drm_dev_exit(idx);
}

static const struct drm_plane_helper_funcs ofdrm_primary_plane_helper_funcs = {
	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
	.atomic_check = ofdrm_primary_plane_helper_atomic_check,
	.atomic_update = ofdrm_primary_plane_helper_atomic_update,
	.atomic_disable = ofdrm_primary_plane_helper_atomic_disable,
};

static const struct drm_plane_funcs ofdrm_primary_plane_funcs = {
	.update_plane = drm_atomic_helper_update_plane,
	.disable_plane = drm_atomic_helper_disable_plane,
	.destroy = drm_plane_cleanup,
	DRM_GEM_SHADOW_PLANE_FUNCS,
};

static enum drm_mode_status ofdrm_crtc_helper_mode_valid(struct drm_crtc *crtc,
							 const struct drm_display_mode *mode)
{
	struct ofdrm_device *odev = ofdrm_device_of_dev(crtc->dev);

	return drm_crtc_helper_mode_valid_fixed(crtc, mode, &odev->mode);
}

static int ofdrm_crtc_helper_atomic_check(struct drm_crtc *crtc,
					  struct drm_atomic_state *new_state)
{
	static const size_t gamma_lut_length = OFDRM_GAMMA_LUT_SIZE * sizeof(struct drm_color_lut);

	struct drm_device *dev = crtc->dev;
	struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc);
	int ret;

	if (!new_crtc_state->enable)
		return 0;

	ret = drm_atomic_helper_check_crtc_primary_plane(new_crtc_state);
	if (ret)
		return ret;

	if (new_crtc_state->color_mgmt_changed) {
		struct drm_property_blob *gamma_lut = new_crtc_state->gamma_lut;

		if (gamma_lut && (gamma_lut->length != gamma_lut_length)) {
			drm_dbg(dev, "Incorrect gamma_lut length %zu\n", gamma_lut->length);
			return -EINVAL;
		}
	}

	return 0;
}

static void ofdrm_crtc_helper_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *state)
{
	struct ofdrm_device *odev = ofdrm_device_of_dev(crtc->dev);
	struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
	struct ofdrm_crtc_state *ofdrm_crtc_state = to_ofdrm_crtc_state(crtc_state);

	if (crtc_state->enable && crtc_state->color_mgmt_changed) {
		const struct drm_format_info *format = ofdrm_crtc_state->format;

		if (crtc_state->gamma_lut)
			ofdrm_device_set_gamma(odev, format, crtc_state->gamma_lut->data);
		else
			ofdrm_device_set_gamma_linear(odev, format);
	}
}

/*
 * The CRTC is always enabled. Screen updates are performed by
 * the primary plane's atomic_update function. Disabling clears
 * the screen in the primary plane's atomic_disable function.
 */
static const struct drm_crtc_helper_funcs ofdrm_crtc_helper_funcs = {
	.mode_valid = ofdrm_crtc_helper_mode_valid,
	.atomic_check = ofdrm_crtc_helper_atomic_check,
	.atomic_flush = ofdrm_crtc_helper_atomic_flush,
};

static void ofdrm_crtc_reset(struct drm_crtc *crtc)
{
	struct ofdrm_crtc_state *ofdrm_crtc_state =
		kzalloc(sizeof(*ofdrm_crtc_state), GFP_KERNEL);

	if (crtc->state)
		ofdrm_crtc_state_destroy(to_ofdrm_crtc_state(crtc->state));

	if (ofdrm_crtc_state)
		__drm_atomic_helper_crtc_reset(crtc, &ofdrm_crtc_state->base);
	else
		__drm_atomic_helper_crtc_reset(crtc, NULL);
}

static struct drm_crtc_state *ofdrm_crtc_atomic_duplicate_state(struct drm_crtc *crtc)
{
	struct drm_device *dev = crtc->dev;
	struct drm_crtc_state *crtc_state = crtc->state;
	struct ofdrm_crtc_state *new_ofdrm_crtc_state;
	struct ofdrm_crtc_state *ofdrm_crtc_state;

	if (drm_WARN_ON(dev, !crtc_state))
		return NULL;

	new_ofdrm_crtc_state = kzalloc(sizeof(*new_ofdrm_crtc_state), GFP_KERNEL);
	if (!new_ofdrm_crtc_state)
		return NULL;

	ofdrm_crtc_state = to_ofdrm_crtc_state(crtc_state);

	__drm_atomic_helper_crtc_duplicate_state(crtc, &new_ofdrm_crtc_state->base);
	new_ofdrm_crtc_state->format = ofdrm_crtc_state->format;

	return &new_ofdrm_crtc_state->base;
}

static void ofdrm_crtc_atomic_destroy_state(struct drm_crtc *crtc,
					    struct drm_crtc_state *crtc_state)
{
	ofdrm_crtc_state_destroy(to_ofdrm_crtc_state(crtc_state));
}

static const struct drm_crtc_funcs ofdrm_crtc_funcs = {
	.reset = ofdrm_crtc_reset,
	.destroy = drm_crtc_cleanup,
	.set_config = drm_atomic_helper_set_config,
	.page_flip = drm_atomic_helper_page_flip,
	.atomic_duplicate_state = ofdrm_crtc_atomic_duplicate_state,
	.atomic_destroy_state = ofdrm_crtc_atomic_destroy_state,
};

static int ofdrm_connector_helper_get_modes(struct drm_connector *connector)
{
	struct ofdrm_device *odev = ofdrm_device_of_dev(connector->dev);

	return drm_connector_helper_get_modes_fixed(connector, &odev->mode);
}

static const struct drm_connector_helper_funcs ofdrm_connector_helper_funcs = {
	.get_modes = ofdrm_connector_helper_get_modes,
};

static const struct drm_connector_funcs ofdrm_connector_funcs = {
	.reset = drm_atomic_helper_connector_reset,
	.fill_modes = drm_helper_probe_single_connector_modes,
	.destroy = drm_connector_cleanup,
	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};

static const struct drm_mode_config_funcs ofdrm_mode_config_funcs = {
	.fb_create = drm_gem_fb_create_with_dirty,
	.atomic_check = drm_atomic_helper_check,
	.atomic_commit = drm_atomic_helper_commit,
};

/*
 * Init / Cleanup
 */

static const struct ofdrm_device_funcs ofdrm_unknown_device_funcs = {
};

static const struct ofdrm_device_funcs ofdrm_mach64_device_funcs = {
	.cmap_ioremap = ofdrm_mach64_cmap_ioremap,
	.cmap_write = ofdrm_mach64_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_rage128_device_funcs = {
	.cmap_ioremap = ofdrm_rage128_cmap_ioremap,
	.cmap_write = ofdrm_rage128_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_rage_m3a_device_funcs = {
	.cmap_ioremap = ofdrm_rage_m3a_cmap_ioremap,
	.cmap_write = ofdrm_rage_m3a_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_rage_m3b_device_funcs = {
	.cmap_ioremap = ofdrm_rage_m3b_cmap_ioremap,
	.cmap_write = ofdrm_rage_m3b_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_radeon_device_funcs = {
	.cmap_ioremap = ofdrm_radeon_cmap_ioremap,
	.cmap_write = ofdrm_rage128_cmap_write, /* same as Rage128 */
};

static const struct ofdrm_device_funcs ofdrm_gxt2000_device_funcs = {
	.cmap_ioremap = ofdrm_gxt2000_cmap_ioremap,
	.cmap_write = ofdrm_gxt2000_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_avivo_device_funcs = {
	.cmap_ioremap = ofdrm_avivo_cmap_ioremap,
	.cmap_write = ofdrm_avivo_cmap_write,
};

static const struct ofdrm_device_funcs ofdrm_qemu_device_funcs = {
	.cmap_ioremap = ofdrm_qemu_cmap_ioremap,
	.cmap_write = ofdrm_qemu_cmap_write,
};

static struct drm_display_mode ofdrm_mode(unsigned int width, unsigned int height)
{
	/*
	 * Assume a monitor resolution of 96 dpi to
	 * get a somewhat reasonable screen size.
	 */
	const struct drm_display_mode mode = {
		DRM_MODE_INIT(60, width, height,
			      DRM_MODE_RES_MM(width, 96ul),
			      DRM_MODE_RES_MM(height, 96ul))
	};

	return mode;
}

static struct ofdrm_device *ofdrm_device_create(struct drm_driver *drv,
						struct platform_device *pdev)
{
	struct device_node *of_node = pdev->dev.of_node;
	struct ofdrm_device *odev;
	struct drm_device *dev;
	enum ofdrm_model model;
	bool big_endian;
	int width, height, depth, linebytes;
	const struct drm_format_info *format;
	u64 address;
	resource_size_t fb_size, fb_base, fb_pgbase, fb_pgsize;
	struct resource *res, *mem;
	void __iomem *screen_base;
	struct drm_plane *primary_plane;
	struct drm_crtc *crtc;
	struct drm_encoder *encoder;
	struct drm_connector *connector;
	unsigned long max_width, max_height;
	size_t nformats;
	int ret;

	odev = devm_drm_dev_alloc(&pdev->dev, drv, struct ofdrm_device, dev);
	if (IS_ERR(odev))
		return ERR_CAST(odev);
	dev = &odev->dev;
	platform_set_drvdata(pdev, dev);

	ret = ofdrm_device_init_pci(odev);
	if (ret)
		return ERR_PTR(ret);

	/*
	 * OF display-node settings
	 */

	model = display_get_model_of(dev, of_node);
	drm_dbg(dev, "detected model %d\n", model);

	switch (model) {
	case OFDRM_MODEL_UNKNOWN:
		odev->funcs = &ofdrm_unknown_device_funcs;
		break;
	case OFDRM_MODEL_MACH64:
		odev->funcs = &ofdrm_mach64_device_funcs;
		break;
	case OFDRM_MODEL_RAGE128:
		odev->funcs = &ofdrm_rage128_device_funcs;
		break;
	case OFDRM_MODEL_RAGE_M3A:
		odev->funcs = &ofdrm_rage_m3a_device_funcs;
		break;
	case OFDRM_MODEL_RAGE_M3B:
		odev->funcs = &ofdrm_rage_m3b_device_funcs;
		break;
	case OFDRM_MODEL_RADEON:
		odev->funcs = &ofdrm_radeon_device_funcs;
		break;
	case OFDRM_MODEL_GXT2000:
		odev->funcs = &ofdrm_gxt2000_device_funcs;
		break;
	case OFDRM_MODEL_AVIVO:
		odev->funcs = &ofdrm_avivo_device_funcs;
		break;
	case OFDRM_MODEL_QEMU:
		odev->funcs = &ofdrm_qemu_device_funcs;
		break;
	}

	big_endian = display_get_big_endian_of(dev, of_node);

	width = display_get_width_of(dev, of_node);
	if (width < 0)
		return ERR_PTR(width);
	height = display_get_height_of(dev, of_node);
	if (height < 0)
		return ERR_PTR(height);
	depth = display_get_depth_of(dev, of_node);
	if (depth < 0)
		return ERR_PTR(depth);
	linebytes = display_get_linebytes_of(dev, of_node);
	if (linebytes < 0)
		return ERR_PTR(linebytes);

	format = display_get_validated_format(dev, depth, big_endian);
	if (IS_ERR(format))
		return ERR_CAST(format);
	if (!linebytes) {
		linebytes = drm_format_info_min_pitch(format, 0, width);
		if (drm_WARN_ON(dev, !linebytes))
			return ERR_PTR(-EINVAL);
	}

	fb_size = linebytes * height;

	/*
	 * Try to figure out the address of the framebuffer. Unfortunately, Open
	 * Firmware doesn't provide a standard way to do so. All we can do is a
	 * dodgy heuristic that happens to work in practice.
	 *
	 * On most machines, the "address" property contains what we need, though
	 * not on Matrox cards found in IBM machines. What appears to give good
	 * results is to go through the PCI ranges and pick one that encloses the
	 * "address" property. If none match, we pick the largest.
	 */
	address = display_get_address_of(dev, of_node);
	if (address != OF_BAD_ADDR) {
		struct resource fb_res = DEFINE_RES_MEM(address, fb_size);

		res = ofdrm_find_fb_resource(odev, &fb_res);
		if (!res)
			return ERR_PTR(-EINVAL);
		if (resource_contains(res, &fb_res))
			fb_base = address;
		else
			fb_base = res->start;
	} else {
		struct resource fb_res = DEFINE_RES_MEM(0u, fb_size);

		res = ofdrm_find_fb_resource(odev, &fb_res);
		if (!res)
			return ERR_PTR(-EINVAL);
		fb_base = res->start;
	}

	/*
	 * I/O resources
	 */

	fb_pgbase = round_down(fb_base, PAGE_SIZE);
	fb_pgsize = fb_base - fb_pgbase + round_up(fb_size, PAGE_SIZE);

	ret = devm_aperture_acquire_from_firmware(dev, fb_pgbase, fb_pgsize);
	if (ret) {
		drm_err(dev, "could not acquire memory range %pr: error %d\n", res, ret);
		return ERR_PTR(ret);
	}

	mem = devm_request_mem_region(&pdev->dev, fb_pgbase, fb_pgsize, drv->name);
	if (!mem) {
		drm_warn(dev, "could not acquire memory region %pr\n", res);
		return ERR_PTR(-ENOMEM);
	}

	screen_base = devm_ioremap(&pdev->dev, mem->start, resource_size(mem));
	if (!screen_base)
		return ERR_PTR(-ENOMEM);

	if (odev->funcs->cmap_ioremap) {
		void __iomem *cmap_base = odev->funcs->cmap_ioremap(odev, of_node, fb_base);

		if (IS_ERR(cmap_base)) {
			/* Don't fail; continue without colormap */
			drm_warn(dev, "could not find colormap: error %ld\n", PTR_ERR(cmap_base));
		} else {
			odev->cmap_base = cmap_base;
		}
	}

	/*
	 * Firmware framebuffer
	 */

	iosys_map_set_vaddr_iomem(&odev->screen_base, screen_base);
	odev->mode = ofdrm_mode(width, height);
	odev->format = format;
	odev->pitch = linebytes;

drm_dbg(dev, "display mode={" DRM_MODE_FMT "}\n", DRM_MODE_ARG(&odev->mode)); 1263 + drm_dbg(dev, "framebuffer format=%p4cc, size=%dx%d, linebytes=%d byte\n", 1264 + &format->format, width, height, linebytes); 1265 + 1266 + /* 1267 + * Mode-setting pipeline 1268 + */ 1269 + 1270 + ret = drmm_mode_config_init(dev); 1271 + if (ret) 1272 + return ERR_PTR(ret); 1273 + 1274 + max_width = max_t(unsigned long, width, DRM_SHADOW_PLANE_MAX_WIDTH); 1275 + max_height = max_t(unsigned long, height, DRM_SHADOW_PLANE_MAX_HEIGHT); 1276 + 1277 + dev->mode_config.min_width = width; 1278 + dev->mode_config.max_width = max_width; 1279 + dev->mode_config.min_height = height; 1280 + dev->mode_config.max_height = max_height; 1281 + dev->mode_config.funcs = &ofdrm_mode_config_funcs; 1282 + switch (depth) { 1283 + case 32: 1284 + dev->mode_config.preferred_depth = 24; 1285 + break; 1286 + default: 1287 + dev->mode_config.preferred_depth = depth; 1288 + break; 1289 + } 1290 + dev->mode_config.quirk_addfb_prefer_host_byte_order = true; 1291 + 1292 + /* Primary plane */ 1293 + 1294 + nformats = drm_fb_build_fourcc_list(dev, &format->format, 1, 1295 + ofdrm_primary_plane_formats, 1296 + ARRAY_SIZE(ofdrm_primary_plane_formats), 1297 + odev->formats, ARRAY_SIZE(odev->formats)); 1298 + 1299 + primary_plane = &odev->primary_plane; 1300 + ret = drm_universal_plane_init(dev, primary_plane, 0, &ofdrm_primary_plane_funcs, 1301 + odev->formats, nformats, 1302 + ofdrm_primary_plane_format_modifiers, 1303 + DRM_PLANE_TYPE_PRIMARY, NULL); 1304 + if (ret) 1305 + return ERR_PTR(ret); 1306 + drm_plane_helper_add(primary_plane, &ofdrm_primary_plane_helper_funcs); 1307 + drm_plane_enable_fb_damage_clips(primary_plane); 1308 + 1309 + /* CRTC */ 1310 + 1311 + crtc = &odev->crtc; 1312 + ret = drm_crtc_init_with_planes(dev, crtc, primary_plane, NULL, 1313 + &ofdrm_crtc_funcs, NULL); 1314 + if (ret) 1315 + return ERR_PTR(ret); 1316 + drm_crtc_helper_add(crtc, &ofdrm_crtc_helper_funcs); 1317 + 1318 + if 
(odev->cmap_base) { 1319 + drm_mode_crtc_set_gamma_size(crtc, OFDRM_GAMMA_LUT_SIZE); 1320 + drm_crtc_enable_color_mgmt(crtc, 0, false, OFDRM_GAMMA_LUT_SIZE); 1321 + } 1322 + 1323 + /* Encoder */ 1324 + 1325 + encoder = &odev->encoder; 1326 + ret = drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_NONE); 1327 + if (ret) 1328 + return ERR_PTR(ret); 1329 + encoder->possible_crtcs = drm_crtc_mask(crtc); 1330 + 1331 + /* Connector */ 1332 + 1333 + connector = &odev->connector; 1334 + ret = drm_connector_init(dev, connector, &ofdrm_connector_funcs, 1335 + DRM_MODE_CONNECTOR_Unknown); 1336 + if (ret) 1337 + return ERR_PTR(ret); 1338 + drm_connector_helper_add(connector, &ofdrm_connector_helper_funcs); 1339 + drm_connector_set_panel_orientation_with_quirk(connector, 1340 + DRM_MODE_PANEL_ORIENTATION_UNKNOWN, 1341 + width, height); 1342 + 1343 + ret = drm_connector_attach_encoder(connector, encoder); 1344 + if (ret) 1345 + return ERR_PTR(ret); 1346 + 1347 + drm_mode_config_reset(dev); 1348 + 1349 + return odev; 1350 + } 1351 + 1352 + /* 1353 + * DRM driver 1354 + */ 1355 + 1356 + DEFINE_DRM_GEM_FOPS(ofdrm_fops); 1357 + 1358 + static struct drm_driver ofdrm_driver = { 1359 + DRM_GEM_SHMEM_DRIVER_OPS, 1360 + .name = DRIVER_NAME, 1361 + .desc = DRIVER_DESC, 1362 + .date = DRIVER_DATE, 1363 + .major = DRIVER_MAJOR, 1364 + .minor = DRIVER_MINOR, 1365 + .driver_features = DRIVER_ATOMIC | DRIVER_GEM | DRIVER_MODESET, 1366 + .fops = &ofdrm_fops, 1367 + }; 1368 + 1369 + /* 1370 + * Platform driver 1371 + */ 1372 + 1373 + static int ofdrm_probe(struct platform_device *pdev) 1374 + { 1375 + struct ofdrm_device *odev; 1376 + struct drm_device *dev; 1377 + int ret; 1378 + 1379 + odev = ofdrm_device_create(&ofdrm_driver, pdev); 1380 + if (IS_ERR(odev)) 1381 + return PTR_ERR(odev); 1382 + dev = &odev->dev; 1383 + 1384 + ret = drm_dev_register(dev, 0); 1385 + if (ret) 1386 + return ret; 1387 + 1388 + /* 1389 + * FIXME: 24-bit color depth does not work reliably with a 32-bpp 1390 + * 
value. Force the bpp value of the scanout buffer's format. 1391 + */ 1392 + drm_fbdev_generic_setup(dev, drm_format_info_bpp(odev->format, 0)); 1393 + 1394 + return 0; 1395 + } 1396 + 1397 + static int ofdrm_remove(struct platform_device *pdev) 1398 + { 1399 + struct drm_device *dev = platform_get_drvdata(pdev); 1400 + 1401 + drm_dev_unplug(dev); 1402 + 1403 + return 0; 1404 + } 1405 + 1406 + static const struct of_device_id ofdrm_of_match_display[] = { 1407 + { .compatible = "display", }, 1408 + { }, 1409 + }; 1410 + MODULE_DEVICE_TABLE(of, ofdrm_of_match_display); 1411 + 1412 + static struct platform_driver ofdrm_platform_driver = { 1413 + .driver = { 1414 + .name = "of-display", 1415 + .of_match_table = ofdrm_of_match_display, 1416 + }, 1417 + .probe = ofdrm_probe, 1418 + .remove = ofdrm_remove, 1419 + }; 1420 + 1421 + module_platform_driver(ofdrm_platform_driver); 1422 + 1423 + MODULE_DESCRIPTION(DRIVER_DESC); 1424 + MODULE_LICENSE("GPL");
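The address heuristic that ofdrm_device_create() documents above — prefer the PCI range that encloses the firmware-reported "address" property, otherwise fall back to the largest range — can be sketched as a small userspace C program. This is a hedged illustration only: `struct range` and `pick_fb_range()` are hypothetical stand-ins for the kernel's `struct resource` and `ofdrm_find_fb_resource()`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's struct resource. */
struct range {
	uint64_t start;
	uint64_t end;	/* inclusive, like struct resource */
};

static int range_contains(const struct range *r, uint64_t addr, uint64_t size)
{
	return addr >= r->start && addr + size - 1 <= r->end;
}

/*
 * Prefer a range that encloses [addr, addr + size); if none does,
 * fall back to the largest range, mirroring ofdrm's heuristic.
 */
static const struct range *pick_fb_range(const struct range *ranges, size_t n,
					 uint64_t addr, uint64_t size)
{
	const struct range *best = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (range_contains(&ranges[i], addr, size))
			return &ranges[i];
		if (!best || ranges[i].end - ranges[i].start >
			      best->end - best->start)
			best = &ranges[i];
	}
	return best;
}
```

As the driver comment notes, this is a dodgy heuristic rather than a guarantee; it merely happens to work on the machines Open Firmware ships on.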
+2 -14
drivers/gpu/drm/tiny/simpledrm.c
···
 11  11   #include <drm/drm_atomic.h>
 12  12   #include <drm/drm_atomic_state_helper.h>
 13  13   #include <drm/drm_connector.h>
 14      + #include <drm/drm_crtc_helper.h>
 14  15   #include <drm/drm_damage_helper.h>
 15  16   #include <drm/drm_device.h>
 16  17   #include <drm/drm_drv.h>
···
 546 545  	return drm_crtc_helper_mode_valid_fixed(crtc, mode, &sdev->mode);
 547 546  }
 548 547  
 549     - static int simpledrm_crtc_helper_atomic_check(struct drm_crtc *crtc,
 550     - 					      struct drm_atomic_state *new_state)
 551     - {
 552     - 	struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc);
 553     - 	int ret;
 554     - 
 555     - 	ret = drm_atomic_helper_check_crtc_state(new_crtc_state, false);
 556     - 	if (ret)
 557     - 		return ret;
 558     - 
 559     - 	return drm_atomic_add_affected_planes(new_state, crtc);
 560     - }
 561     - 
 562 548  /*
 563 549   * The CRTC is always enabled. Screen updates are performed by
 564 550   * the primary plane's atomic_update function. Disabling clears
···
 553 565   */
 554 566  static const struct drm_crtc_helper_funcs simpledrm_crtc_helper_funcs = {
 555 567  	.mode_valid = simpledrm_crtc_helper_mode_valid,
 556     - 	.atomic_check = simpledrm_crtc_helper_atomic_check,
 568     + 	.atomic_check = drm_crtc_helper_atomic_check,
 557 569  };
 558 570  
 559 571  static const struct drm_crtc_funcs simpledrm_crtc_funcs = {
-1
drivers/gpu/drm/ttm/ttm_range_manager.c
···
 229 229  	return ret;
 230 230  
 231 231  	spin_lock(&rman->lock);
 232     - 	drm_mm_clean(mm);
 233 232  	drm_mm_takedown(mm);
 234 233  	spin_unlock(&rman->lock);
 235 234  
+1 -1
drivers/gpu/drm/udl/Makefile
···
 1  1  # SPDX-License-Identifier: GPL-2.0-only
 2     - udl-y := udl_drv.o udl_modeset.o udl_connector.o udl_main.o udl_transfer.o
 2     + udl-y := udl_drv.o udl_modeset.o udl_main.o udl_transfer.o
 3  3  
 4  4  obj-$(CONFIG_DRM_UDL) := udl.o
-139
drivers/gpu/drm/udl/udl_connector.c
···
 1   - // SPDX-License-Identifier: GPL-2.0-only
 2   - /*
 3   -  * Copyright (C) 2012 Red Hat
 4   -  * based in parts on udlfb.c:
 5   -  * Copyright (C) 2009 Roberto De Ioris <roberto@unbit.it>
 6   -  * Copyright (C) 2009 Jaya Kumar <jayakumar.lkml@gmail.com>
 7   -  * Copyright (C) 2009 Bernie Thompson <bernie@plugable.com>
 8   -  */
 9   - 
 10  - #include <drm/drm_atomic_state_helper.h>
 11  - #include <drm/drm_edid.h>
 12  - #include <drm/drm_crtc_helper.h>
 13  - #include <drm/drm_probe_helper.h>
 14  - 
 15  - #include "udl_connector.h"
 16  - #include "udl_drv.h"
 17  - 
 18  - static int udl_get_edid_block(void *data, u8 *buf, unsigned int block,
 19  - 			      size_t len)
 20  - {
 21  - 	int ret, i;
 22  - 	u8 *read_buff;
 23  - 	struct udl_device *udl = data;
 24  - 	struct usb_device *udev = udl_to_usb_device(udl);
 25  - 
 26  - 	read_buff = kmalloc(2, GFP_KERNEL);
 27  - 	if (!read_buff)
 28  - 		return -1;
 29  - 
 30  - 	for (i = 0; i < len; i++) {
 31  - 		int bval = (i + block * EDID_LENGTH) << 8;
 32  - 		ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
 33  - 				      0x02, (0x80 | (0x02 << 5)), bval,
 34  - 				      0xA1, read_buff, 2, 1000);
 35  - 		if (ret < 1) {
 36  - 			DRM_ERROR("Read EDID byte %d failed err %x\n", i, ret);
 37  - 			kfree(read_buff);
 38  - 			return -1;
 39  - 		}
 40  - 		buf[i] = read_buff[1];
 41  - 	}
 42  - 
 43  - 	kfree(read_buff);
 44  - 	return 0;
 45  - }
 46  - 
 47  - static int udl_get_modes(struct drm_connector *connector)
 48  - {
 49  - 	struct udl_drm_connector *udl_connector =
 50  - 		container_of(connector,
 51  - 			     struct udl_drm_connector,
 52  - 			     connector);
 53  - 
 54  - 	drm_connector_update_edid_property(connector, udl_connector->edid);
 55  - 	if (udl_connector->edid)
 56  - 		return drm_add_edid_modes(connector, udl_connector->edid);
 57  - 	return 0;
 58  - }
 59  - 
 60  - static enum drm_mode_status udl_mode_valid(struct drm_connector *connector,
 61  - 					   struct drm_display_mode *mode)
 62  - {
 63  - 	struct udl_device *udl = to_udl(connector->dev);
 64  - 	if (!udl->sku_pixel_limit)
 65  - 		return 0;
 66  - 
 67  - 	if (mode->vdisplay * mode->hdisplay > udl->sku_pixel_limit)
 68  - 		return MODE_VIRTUAL_Y;
 69  - 
 70  - 	return 0;
 71  - }
 72  - 
 73  - static enum drm_connector_status
 74  - udl_detect(struct drm_connector *connector, bool force)
 75  - {
 76  - 	struct udl_device *udl = to_udl(connector->dev);
 77  - 	struct udl_drm_connector *udl_connector =
 78  - 		container_of(connector,
 79  - 			     struct udl_drm_connector,
 80  - 			     connector);
 81  - 
 82  - 	/* cleanup previous edid */
 83  - 	if (udl_connector->edid != NULL) {
 84  - 		kfree(udl_connector->edid);
 85  - 		udl_connector->edid = NULL;
 86  - 	}
 87  - 
 88  - 	udl_connector->edid = drm_do_get_edid(connector, udl_get_edid_block, udl);
 89  - 	if (!udl_connector->edid)
 90  - 		return connector_status_disconnected;
 91  - 
 92  - 	return connector_status_connected;
 93  - }
 94  - 
 95  - static void udl_connector_destroy(struct drm_connector *connector)
 96  - {
 97  - 	struct udl_drm_connector *udl_connector =
 98  - 		container_of(connector,
 99  - 			     struct udl_drm_connector,
 100 - 			     connector);
 101 - 
 102 - 	drm_connector_cleanup(connector);
 103 - 	kfree(udl_connector->edid);
 104 - 	kfree(connector);
 105 - }
 106 - 
 107 - static const struct drm_connector_helper_funcs udl_connector_helper_funcs = {
 108 - 	.get_modes = udl_get_modes,
 109 - 	.mode_valid = udl_mode_valid,
 110 - };
 111 - 
 112 - static const struct drm_connector_funcs udl_connector_funcs = {
 113 - 	.reset = drm_atomic_helper_connector_reset,
 114 - 	.detect = udl_detect,
 115 - 	.fill_modes = drm_helper_probe_single_connector_modes,
 116 - 	.destroy = udl_connector_destroy,
 117 - 	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
 118 - 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
 119 - };
 120 - 
 121 - struct drm_connector *udl_connector_init(struct drm_device *dev)
 122 - {
 123 - 	struct udl_drm_connector *udl_connector;
 124 - 	struct drm_connector *connector;
 125 - 
 126 - 	udl_connector = kzalloc(sizeof(struct udl_drm_connector), GFP_KERNEL);
 127 - 	if (!udl_connector)
 128 - 		return ERR_PTR(-ENOMEM);
 129 - 
 130 - 	connector = &udl_connector->connector;
 131 - 	drm_connector_init(dev, connector, &udl_connector_funcs,
 132 - 			   DRM_MODE_CONNECTOR_VGA);
 133 - 	drm_connector_helper_add(connector, &udl_connector_helper_funcs);
 134 - 
 135 - 	connector->polled = DRM_CONNECTOR_POLL_HPD |
 136 - 		DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
 137 - 
 138 - 	return connector;
 139 - }
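The deleted udl_connector.c fetched the EDID through a block-read callback handed to drm_do_get_edid(), pulling one byte per USB control transfer; the equivalent callback now lives in udl_modeset.c. Below is a hedged userspace sketch of that callback contract, with an in-memory blob standing in for the USB device (`fake_get_edid_block` is hypothetical) and the standard EDID checksum rule: all 128 bytes of a block sum to 0 modulo 256.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EDID_LENGTH 128

/* Callback shape expected by drm_do_get_edid(): fill len bytes of a block. */
typedef int (*get_edid_block_t)(void *data, uint8_t *buf,
				unsigned int block, size_t len);

/* Fake "device": copy from an in-memory EDID blob instead of USB reads. */
static int fake_get_edid_block(void *data, uint8_t *buf,
			       unsigned int block, size_t len)
{
	const uint8_t *edid = data;
	size_t i;

	/* one "transfer" per byte, like udl_get_edid_block() */
	for (i = 0; i < len; i++)
		buf[i] = edid[block * EDID_LENGTH + i];
	return 0;
}

/* An EDID block is valid when its 128 bytes sum to 0 modulo 256. */
static int edid_block_valid(const uint8_t *block)
{
	uint8_t sum = 0;
	size_t i;

	for (i = 0; i < EDID_LENGTH; i++)
		sum += block[i];
	return sum == 0;
}
```

The real driver additionally has to cope with short or failed transfers, which is where the `ret < 1` / `-EIO` handling in the diff comes in.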
-15
drivers/gpu/drm/udl/udl_connector.h
···
 1   - #ifndef __UDL_CONNECTOR_H__
 2   - #define __UDL_CONNECTOR_H__
 3   - 
 4   - #include <drm/drm_crtc.h>
 5   - 
 6   - struct edid;
 7   - 
 8   - struct udl_drm_connector {
 9   - 	struct drm_connector connector;
 10  - 	/* last udl_detect edid */
 11  - 	struct edid *edid;
 12  - };
 13  - 
 14  - 
 15  - #endif //__UDL_CONNECTOR_H__
+18 -24
drivers/gpu/drm/udl/udl_drv.h
···
 14  14   #include <linux/mm_types.h>
 15  15   #include <linux/usb.h>
 16  16   
 17      + #include <drm/drm_connector.h>
 18      + #include <drm/drm_crtc.h>
 17  19   #include <drm/drm_device.h>
 20      + #include <drm/drm_encoder.h>
 18  21   #include <drm/drm_framebuffer.h>
 19  22   #include <drm/drm_gem.h>
 20      - #include <drm/drm_simple_kms_helper.h>
 23      + #include <drm/drm_plane.h>
 21  24   
 22  25   struct drm_mode_create_dumb;
 23  26   
···
 49  46  	size_t size;
 50  47  };
 51  48  
 49      + struct udl_connector {
 50      + 	struct drm_connector connector;
 51      + 	/* last udl_detect edid */
 52      + 	struct edid *edid;
 53      + };
 54      + 
 55      + static inline struct udl_connector *to_udl_connector(struct drm_connector *connector)
 56      + {
 57      + 	return container_of(connector, struct udl_connector, connector);
 58      + }
 59      + 
 52  60  struct udl_device {
 53  61  	struct drm_device drm;
 54  62  	struct device *dev;
 55  63  	struct device *dmadev;
 56  64  
 57      - 	struct drm_simple_display_pipe display_pipe;
 65      + 	struct drm_plane primary_plane;
 66      + 	struct drm_crtc crtc;
 67      + 	struct drm_encoder encoder;
 58  68  
 59  69  	struct mutex gem_lock;
 60  70  
 61  71  	int sku_pixel_limit;
 62  72  
 63  73  	struct urb_list urbs;
 64      - 
 65      - 	char mode_buf[1024];
 66      - 	uint32_t mode_buf_len;
 67  74  };
 68  75  
 69  76  #define to_udl(x) container_of(x, struct udl_device, drm)
···
 101 88  
 102 89  int udl_drop_usb(struct drm_device *dev);
 103 90  int udl_select_std_channel(struct udl_device *udl);
 104     - 
 105     - #define CMD_WRITE_RAW8 "\xAF\x60" /**< 8 bit raw write command. */
 106     - #define CMD_WRITE_RL8 "\xAF\x61" /**< 8 bit run length command. */
 107     - #define CMD_WRITE_COPY8 "\xAF\x62" /**< 8 bit copy command. */
 108     - #define CMD_WRITE_RLX8 "\xAF\x63" /**< 8 bit extended run length command. */
 109     - 
 110     - #define CMD_WRITE_RAW16 "\xAF\x68" /**< 16 bit raw write command. */
 111     - #define CMD_WRITE_RL16 "\xAF\x69" /**< 16 bit run length command. */
 112     - #define CMD_WRITE_COPY16 "\xAF\x6A" /**< 16 bit copy command. */
 113     - #define CMD_WRITE_RLX16 "\xAF\x6B" /**< 16 bit extended run length command. */
 114     - 
 115     - /* On/Off for driving the DisplayLink framebuffer to the display */
 116     - #define UDL_REG_BLANK_MODE 0x1f
 117     - 
 118     - #define UDL_BLANK_MODE_ON 0x00 /* hsync and vsync on, visible */
 119     - #define UDL_BLANK_MODE_BLANKED 0x01 /* hsync and vsync on, blanked */
 120     - #define UDL_BLANK_MODE_VSYNC_OFF 0x03 /* vsync off, blanked */
 121     - #define UDL_BLANK_MODE_HSYNC_OFF 0x05 /* hsync off, blanked */
 122     - #define UDL_BLANK_MODE_POWERDOWN 0x07 /* powered off; requires modeset */
 123 91  
 124 92  #endif
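The new to_udl_connector() helper in udl_drv.h relies on the kernel's container_of() pattern: struct drm_connector is embedded inside the driver's struct udl_connector, and the outer structure is recovered by subtracting the member's offset from the member's address. A minimal userspace sketch of that pattern follows; the `*_like` types are hypothetical stand-ins, and container_of() is re-implemented with offsetof():

```c
#include <assert.h>
#include <stddef.h>

/* Userspace re-implementation of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-in for struct drm_connector. */
struct drm_connector_like {
	int id;
};

/* Mirrors the shape of the new struct udl_connector: DRM object embedded. */
struct udl_connector_like {
	void *edid;
	struct drm_connector_like connector;
};

static struct udl_connector_like *
to_udl_connector(struct drm_connector_like *connector)
{
	return container_of(connector, struct udl_connector_like, connector);
}
```

Because the conversion is pure pointer arithmetic, the helper can be a static inline in the header, exactly as the diff adds it.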
+365 -205
drivers/gpu/drm/udl/udl_modeset.c
··· 8 8 * Copyright (C) 2009 Bernie Thompson <bernie@plugable.com> 9 9 */ 10 10 11 + #include <linux/bitfield.h> 12 + 13 + #include <drm/drm_atomic.h> 11 14 #include <drm/drm_atomic_helper.h> 12 15 #include <drm/drm_crtc_helper.h> 13 16 #include <drm/drm_damage_helper.h> 17 + #include <drm/drm_drv.h> 18 + #include <drm/drm_edid.h> 14 19 #include <drm/drm_fourcc.h> 15 20 #include <drm/drm_gem_atomic_helper.h> 16 21 #include <drm/drm_gem_framebuffer_helper.h> 17 22 #include <drm/drm_gem_shmem_helper.h> 18 23 #include <drm/drm_modeset_helper_vtables.h> 24 + #include <drm/drm_plane_helper.h> 25 + #include <drm/drm_probe_helper.h> 19 26 #include <drm/drm_vblank.h> 20 27 21 28 #include "udl_drv.h" 22 - 23 - #define UDL_COLOR_DEPTH_16BPP 0 29 + #include "udl_proto.h" 24 30 25 31 /* 26 - * All DisplayLink bulk operations start with 0xAF, followed by specific code 27 - * All operations are written to buffers which then later get sent to device 32 + * All DisplayLink bulk operations start with 0xaf (UDL_MSG_BULK), followed by 33 + * a specific command code. All operations are written to a command buffer, which 34 + * the driver sends to the device. 
28 35 */ 29 36 static char *udl_set_register(char *buf, u8 reg, u8 val) 30 37 { 31 - *buf++ = 0xAF; 32 - *buf++ = 0x20; 38 + *buf++ = UDL_MSG_BULK; 39 + *buf++ = UDL_CMD_WRITEREG; 33 40 *buf++ = reg; 34 41 *buf++ = val; 42 + 35 43 return buf; 36 44 } 37 45 38 46 static char *udl_vidreg_lock(char *buf) 39 47 { 40 - return udl_set_register(buf, 0xFF, 0x00); 48 + return udl_set_register(buf, UDL_REG_VIDREG, UDL_VIDREG_LOCK); 41 49 } 42 50 43 51 static char *udl_vidreg_unlock(char *buf) 44 52 { 45 - return udl_set_register(buf, 0xFF, 0xFF); 53 + return udl_set_register(buf, UDL_REG_VIDREG, UDL_VIDREG_UNLOCK); 46 54 } 47 55 48 56 static char *udl_set_blank_mode(char *buf, u8 mode) 49 57 { 50 - return udl_set_register(buf, UDL_REG_BLANK_MODE, mode); 58 + return udl_set_register(buf, UDL_REG_BLANKMODE, mode); 51 59 } 52 60 53 61 static char *udl_set_color_depth(char *buf, u8 selection) 54 62 { 55 - return udl_set_register(buf, 0x00, selection); 63 + return udl_set_register(buf, UDL_REG_COLORDEPTH, selection); 56 64 } 57 65 58 - static char *udl_set_base16bpp(char *wrptr, u32 base) 66 + static char *udl_set_base16bpp(char *buf, u32 base) 59 67 { 60 - /* the base pointer is 16 bits wide, 0x20 is hi byte. */ 61 - wrptr = udl_set_register(wrptr, 0x20, base >> 16); 62 - wrptr = udl_set_register(wrptr, 0x21, base >> 8); 63 - return udl_set_register(wrptr, 0x22, base); 68 + /* the base pointer is 24 bits wide, 0x20 is hi byte. */ 69 + u8 reg20 = FIELD_GET(UDL_BASE_ADDR2_MASK, base); 70 + u8 reg21 = FIELD_GET(UDL_BASE_ADDR1_MASK, base); 71 + u8 reg22 = FIELD_GET(UDL_BASE_ADDR0_MASK, base); 72 + 73 + buf = udl_set_register(buf, UDL_REG_BASE16BPP_ADDR2, reg20); 74 + buf = udl_set_register(buf, UDL_REG_BASE16BPP_ADDR1, reg21); 75 + buf = udl_set_register(buf, UDL_REG_BASE16BPP_ADDR0, reg22); 76 + 77 + return buf; 64 78 } 65 79 66 80 /* 67 81 * DisplayLink HW has separate 16bpp and 8bpp framebuffers. 
68 82 * In 24bpp modes, the low 323 RGB bits go in the 8bpp framebuffer 69 83 */ 70 - static char *udl_set_base8bpp(char *wrptr, u32 base) 84 + static char *udl_set_base8bpp(char *buf, u32 base) 71 85 { 72 - wrptr = udl_set_register(wrptr, 0x26, base >> 16); 73 - wrptr = udl_set_register(wrptr, 0x27, base >> 8); 74 - return udl_set_register(wrptr, 0x28, base); 86 + /* the base pointer is 24 bits wide, 0x26 is hi byte. */ 87 + u8 reg26 = FIELD_GET(UDL_BASE_ADDR2_MASK, base); 88 + u8 reg27 = FIELD_GET(UDL_BASE_ADDR1_MASK, base); 89 + u8 reg28 = FIELD_GET(UDL_BASE_ADDR0_MASK, base); 90 + 91 + buf = udl_set_register(buf, UDL_REG_BASE8BPP_ADDR2, reg26); 92 + buf = udl_set_register(buf, UDL_REG_BASE8BPP_ADDR1, reg27); 93 + buf = udl_set_register(buf, UDL_REG_BASE8BPP_ADDR0, reg28); 94 + 95 + return buf; 75 96 } 76 97 77 98 static char *udl_set_register_16(char *wrptr, u8 reg, u16 value) ··· 143 122 } 144 123 145 124 /* 146 - * This takes a standard fbdev screeninfo struct and all of its monitor mode 147 - * details and converts them into the DisplayLink equivalent register commands. 148 - ERR(vreg(dev, 0x00, (color_depth == 16) ? 
0 : 1)); 149 - ERR(vreg_lfsr16(dev, 0x01, xDisplayStart)); 150 - ERR(vreg_lfsr16(dev, 0x03, xDisplayEnd)); 151 - ERR(vreg_lfsr16(dev, 0x05, yDisplayStart)); 152 - ERR(vreg_lfsr16(dev, 0x07, yDisplayEnd)); 153 - ERR(vreg_lfsr16(dev, 0x09, xEndCount)); 154 - ERR(vreg_lfsr16(dev, 0x0B, hSyncStart)); 155 - ERR(vreg_lfsr16(dev, 0x0D, hSyncEnd)); 156 - ERR(vreg_big_endian(dev, 0x0F, hPixels)); 157 - ERR(vreg_lfsr16(dev, 0x11, yEndCount)); 158 - ERR(vreg_lfsr16(dev, 0x13, vSyncStart)); 159 - ERR(vreg_lfsr16(dev, 0x15, vSyncEnd)); 160 - ERR(vreg_big_endian(dev, 0x17, vPixels)); 161 - ERR(vreg_little_endian(dev, 0x1B, pixelClock5KHz)); 162 - 163 - ERR(vreg(dev, 0x1F, 0)); 164 - 165 - ERR(vbuf(dev, WRITE_VIDREG_UNLOCK, DSIZEOF(WRITE_VIDREG_UNLOCK))); 125 + * Takes a DRM display mode and converts it into the DisplayLink 126 + * equivalent register commands. 166 127 */ 167 - static char *udl_set_vid_cmds(char *wrptr, struct drm_display_mode *mode) 128 + static char *udl_set_display_mode(char *buf, struct drm_display_mode *mode) 168 129 { 169 - u16 xds, yds; 170 - u16 xde, yde; 171 - u16 yec; 130 + u16 reg01 = mode->crtc_htotal - mode->crtc_hsync_start; 131 + u16 reg03 = reg01 + mode->crtc_hdisplay; 132 + u16 reg05 = mode->crtc_vtotal - mode->crtc_vsync_start; 133 + u16 reg07 = reg05 + mode->crtc_vdisplay; 134 + u16 reg09 = mode->crtc_htotal - 1; 135 + u16 reg0b = 1; /* libdlo hardcodes hsync start to 1 */ 136 + u16 reg0d = mode->crtc_hsync_end - mode->crtc_hsync_start + 1; 137 + u16 reg0f = mode->hdisplay; 138 + u16 reg11 = mode->crtc_vtotal; 139 + u16 reg13 = 0; /* libdlo hardcodes vsync start to 0 */ 140 + u16 reg15 = mode->crtc_vsync_end - mode->crtc_vsync_start; 141 + u16 reg17 = mode->crtc_vdisplay; 142 + u16 reg1b = mode->clock / 5; 172 143 173 - /* x display start */ 174 - xds = mode->crtc_htotal - mode->crtc_hsync_start; 175 - wrptr = udl_set_register_lfsr16(wrptr, 0x01, xds); 176 - /* x display end */ 177 - xde = xds + mode->crtc_hdisplay; 178 - wrptr = 
udl_set_register_lfsr16(wrptr, 0x03, xde); 144 + buf = udl_set_register_lfsr16(buf, UDL_REG_XDISPLAYSTART, reg01); 145 + buf = udl_set_register_lfsr16(buf, UDL_REG_XDISPLAYEND, reg03); 146 + buf = udl_set_register_lfsr16(buf, UDL_REG_YDISPLAYSTART, reg05); 147 + buf = udl_set_register_lfsr16(buf, UDL_REG_YDISPLAYEND, reg07); 148 + buf = udl_set_register_lfsr16(buf, UDL_REG_XENDCOUNT, reg09); 149 + buf = udl_set_register_lfsr16(buf, UDL_REG_HSYNCSTART, reg0b); 150 + buf = udl_set_register_lfsr16(buf, UDL_REG_HSYNCEND, reg0d); 151 + buf = udl_set_register_16(buf, UDL_REG_HPIXELS, reg0f); 152 + buf = udl_set_register_lfsr16(buf, UDL_REG_YENDCOUNT, reg11); 153 + buf = udl_set_register_lfsr16(buf, UDL_REG_VSYNCSTART, reg13); 154 + buf = udl_set_register_lfsr16(buf, UDL_REG_VSYNCEND, reg15); 155 + buf = udl_set_register_16(buf, UDL_REG_VPIXELS, reg17); 156 + buf = udl_set_register_16be(buf, UDL_REG_PIXELCLOCK5KHZ, reg1b); 179 157 180 - /* y display start */ 181 - yds = mode->crtc_vtotal - mode->crtc_vsync_start; 182 - wrptr = udl_set_register_lfsr16(wrptr, 0x05, yds); 183 - /* y display end */ 184 - yde = yds + mode->crtc_vdisplay; 185 - wrptr = udl_set_register_lfsr16(wrptr, 0x07, yde); 186 - 187 - /* x end count is active + blanking - 1 */ 188 - wrptr = udl_set_register_lfsr16(wrptr, 0x09, 189 - mode->crtc_htotal - 1); 190 - 191 - /* libdlo hardcodes hsync start to 1 */ 192 - wrptr = udl_set_register_lfsr16(wrptr, 0x0B, 1); 193 - 194 - /* hsync end is width of sync pulse + 1 */ 195 - wrptr = udl_set_register_lfsr16(wrptr, 0x0D, 196 - mode->crtc_hsync_end - mode->crtc_hsync_start + 1); 197 - 198 - /* hpixels is active pixels */ 199 - wrptr = udl_set_register_16(wrptr, 0x0F, mode->hdisplay); 200 - 201 - /* yendcount is vertical active + vertical blanking */ 202 - yec = mode->crtc_vtotal; 203 - wrptr = udl_set_register_lfsr16(wrptr, 0x11, yec); 204 - 205 - /* libdlo hardcodes vsync start to 0 */ 206 - wrptr = udl_set_register_lfsr16(wrptr, 0x13, 0); 207 - 208 - /* vsync 
end is width of vsync pulse */ 209 - wrptr = udl_set_register_lfsr16(wrptr, 0x15, mode->crtc_vsync_end - mode->crtc_vsync_start); 210 - 211 - /* vpixels is active pixels */ 212 - wrptr = udl_set_register_16(wrptr, 0x17, mode->crtc_vdisplay); 213 - 214 - wrptr = udl_set_register_16be(wrptr, 0x1B, 215 - mode->clock / 5); 216 - 217 - return wrptr; 158 + return buf; 218 159 } 219 160 220 161 static char *udl_dummy_render(char *wrptr) 221 162 { 222 - *wrptr++ = 0xAF; 223 - *wrptr++ = 0x6A; /* copy */ 163 + *wrptr++ = UDL_MSG_BULK; 164 + *wrptr++ = UDL_CMD_WRITECOPY16; 224 165 *wrptr++ = 0x00; /* from addr */ 225 166 *wrptr++ = 0x00; 226 167 *wrptr++ = 0x00; ··· 191 208 *wrptr++ = 0x00; 192 209 *wrptr++ = 0x00; 193 210 return wrptr; 194 - } 195 - 196 - static int udl_crtc_write_mode_to_hw(struct drm_crtc *crtc) 197 - { 198 - struct drm_device *dev = crtc->dev; 199 - struct udl_device *udl = to_udl(dev); 200 - struct urb *urb; 201 - char *buf; 202 - int retval; 203 - 204 - if (udl->mode_buf_len == 0) { 205 - DRM_ERROR("No mode set\n"); 206 - return -EINVAL; 207 - } 208 - 209 - urb = udl_get_urb(dev); 210 - if (!urb) 211 - return -ENOMEM; 212 - 213 - buf = (char *)urb->transfer_buffer; 214 - 215 - memcpy(buf, udl->mode_buf, udl->mode_buf_len); 216 - retval = udl_submit_urb(dev, urb, udl->mode_buf_len); 217 - DRM_DEBUG("write mode info %d\n", udl->mode_buf_len); 218 - return retval; 219 211 } 220 212 221 213 static long udl_log_cpp(unsigned int cpp) ··· 216 258 return ret; 217 259 log_bpp = ret; 218 260 219 - ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE); 220 - if (ret) 221 - return ret; 222 - 223 261 urb = udl_get_urb(dev); 224 - if (!urb) { 225 - ret = -ENOMEM; 226 - goto out_drm_gem_fb_end_cpu_access; 227 - } 262 + if (!urb) 263 + return -ENOMEM; 228 264 cmd = urb->transfer_buffer; 229 265 230 266 for (i = clip->y1; i < clip->y2; i++) { ··· 230 278 &cmd, byte_offset, dev_byte_offset, 231 279 byte_width); 232 280 if (ret) 233 - goto out_drm_gem_fb_end_cpu_access; 
281 + return ret; 234 282 } 235 283 236 284 if (cmd > (char *)urb->transfer_buffer) { 237 285 /* Send partial buffer remaining before exiting */ 238 286 int len; 239 287 if (cmd < (char *)urb->transfer_buffer + urb->transfer_buffer_length) 240 - *cmd++ = 0xAF; 288 + *cmd++ = UDL_MSG_BULK; 241 289 len = cmd - (char *)urb->transfer_buffer; 242 290 ret = udl_submit_urb(dev, urb, len); 243 291 } else { 244 292 udl_urb_completion(urb); 245 293 } 246 294 247 - ret = 0; 248 - 249 - out_drm_gem_fb_end_cpu_access: 250 - drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE); 251 - return ret; 295 + return 0; 252 296 } 253 297 254 298 /* 255 - * Simple display pipeline 299 + * Primary plane 256 300 */ 257 301 258 - static const uint32_t udl_simple_display_pipe_formats[] = { 302 + static const uint32_t udl_primary_plane_formats[] = { 259 303 DRM_FORMAT_RGB565, 260 304 DRM_FORMAT_XRGB8888, 261 305 }; 262 306 263 - static enum drm_mode_status 264 - udl_simple_display_pipe_mode_valid(struct drm_simple_display_pipe *pipe, 265 - const struct drm_display_mode *mode) 266 - { 267 - return MODE_OK; 268 - } 307 + static const uint64_t udl_primary_plane_fmtmods[] = { 308 + DRM_FORMAT_MOD_LINEAR, 309 + DRM_FORMAT_MOD_INVALID 310 + }; 269 311 270 - static void 271 - udl_simple_display_pipe_enable(struct drm_simple_display_pipe *pipe, 272 - struct drm_crtc_state *crtc_state, 273 - struct drm_plane_state *plane_state) 312 + static void udl_primary_plane_helper_atomic_update(struct drm_plane *plane, 313 + struct drm_atomic_state *state) 274 314 { 275 - struct drm_crtc *crtc = &pipe->crtc; 276 - struct drm_device *dev = crtc->dev; 277 - struct drm_framebuffer *fb = plane_state->fb; 278 - struct udl_device *udl = to_udl(dev); 279 - struct drm_display_mode *mode = &crtc_state->mode; 315 + struct drm_device *dev = plane->dev; 316 + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 280 317 struct drm_shadow_plane_state *shadow_plane_state = 
to_drm_shadow_plane_state(plane_state); 281 - struct drm_rect clip = DRM_RECT_INIT(0, 0, fb->width, fb->height); 282 - char *buf; 283 - char *wrptr; 284 - int color_depth = UDL_COLOR_DEPTH_16BPP; 318 + struct drm_framebuffer *fb = plane_state->fb; 319 + struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 320 + struct drm_atomic_helper_damage_iter iter; 321 + struct drm_rect damage; 322 + int ret, idx; 285 323 286 - buf = (char *)udl->mode_buf; 324 + if (!fb) 325 + return; /* no framebuffer; plane is disabled */ 287 326 288 - /* This first section has to do with setting the base address on the 289 - * controller associated with the display. There are 2 base 290 - * pointers, currently, we only use the 16 bpp segment. 291 - */ 292 - wrptr = udl_vidreg_lock(buf); 293 - wrptr = udl_set_color_depth(wrptr, color_depth); 294 - /* set base for 16bpp segment to 0 */ 295 - wrptr = udl_set_base16bpp(wrptr, 0); 296 - /* set base for 8bpp segment to end of fb */ 297 - wrptr = udl_set_base8bpp(wrptr, 2 * mode->vdisplay * mode->hdisplay); 327 + ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE); 328 + if (ret) 329 + return; 298 330 299 - wrptr = udl_set_vid_cmds(wrptr, mode); 300 - wrptr = udl_set_blank_mode(wrptr, UDL_BLANK_MODE_ON); 301 - wrptr = udl_vidreg_unlock(wrptr); 331 + if (!drm_dev_enter(dev, &idx)) 332 + goto out_drm_gem_fb_end_cpu_access; 302 333 303 - wrptr = udl_dummy_render(wrptr); 334 + drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state); 335 + drm_atomic_for_each_plane_damage(&iter, &damage) { 336 + udl_handle_damage(fb, &shadow_plane_state->data[0], &damage); 337 + } 304 338 305 - udl->mode_buf_len = wrptr - buf; 339 + drm_dev_exit(idx); 306 340 307 - udl_handle_damage(fb, &shadow_plane_state->data[0], &clip); 308 - 309 - /* enable display */ 310 - udl_crtc_write_mode_to_hw(crtc); 341 + out_drm_gem_fb_end_cpu_access: 342 + drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE); 311 343 } 312 344 313 - static 
void 314 - udl_simple_display_pipe_disable(struct drm_simple_display_pipe *pipe) 345 + static const struct drm_plane_helper_funcs udl_primary_plane_helper_funcs = { 346 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 347 + .atomic_check = drm_plane_helper_atomic_check, 348 + .atomic_update = udl_primary_plane_helper_atomic_update, 349 + }; 350 + 351 + static const struct drm_plane_funcs udl_primary_plane_funcs = { 352 + .update_plane = drm_atomic_helper_update_plane, 353 + .disable_plane = drm_atomic_helper_disable_plane, 354 + .destroy = drm_plane_cleanup, 355 + DRM_GEM_SHADOW_PLANE_FUNCS, 356 + }; 357 + 358 + /* 359 + * CRTC 360 + */ 361 + 362 + static int udl_crtc_helper_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state) 315 363 { 316 - struct drm_crtc *crtc = &pipe->crtc; 364 + struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 365 + 366 + if (!new_crtc_state->enable) 367 + return 0; 368 + 369 + return drm_atomic_helper_check_crtc_primary_plane(new_crtc_state); 370 + } 371 + 372 + static void udl_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state) 373 + { 317 374 struct drm_device *dev = crtc->dev; 375 + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 376 + struct drm_display_mode *mode = &crtc_state->mode; 318 377 struct urb *urb; 319 378 char *buf; 379 + int idx; 380 + 381 + if (!drm_dev_enter(dev, &idx)) 382 + return; 320 383 321 384 urb = udl_get_urb(dev); 322 385 if (!urb) 323 - return; 386 + goto out; 324 387 325 388 buf = (char *)urb->transfer_buffer; 326 389 buf = udl_vidreg_lock(buf); 327 - buf = udl_set_blank_mode(buf, UDL_BLANK_MODE_POWERDOWN); 390 + buf = udl_set_color_depth(buf, UDL_COLORDEPTH_16BPP); 391 + /* set base for 16bpp segment to 0 */ 392 + buf = udl_set_base16bpp(buf, 0); 393 + /* set base for 8bpp segment to end of fb */ 394 + buf = udl_set_base8bpp(buf, 2 * mode->vdisplay * mode->hdisplay); 395 + buf = udl_set_display_mode(buf, mode); 
396 + buf = udl_set_blank_mode(buf, UDL_BLANKMODE_ON); 328 397 buf = udl_vidreg_unlock(buf); 329 398 buf = udl_dummy_render(buf); 330 399 331 400 udl_submit_urb(dev, urb, buf - (char *)urb->transfer_buffer); 401 + 402 + out: 403 + drm_dev_exit(idx); 332 404 } 333 405 334 - static void 335 - udl_simple_display_pipe_update(struct drm_simple_display_pipe *pipe, 336 - struct drm_plane_state *old_plane_state) 406 + static void udl_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state) 337 407 { 338 - struct drm_plane_state *state = pipe->plane.state; 339 - struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state); 340 - struct drm_framebuffer *fb = state->fb; 341 - struct drm_rect rect; 408 + struct drm_device *dev = crtc->dev; 409 + struct urb *urb; 410 + char *buf; 411 + int idx; 342 412 343 - if (!fb) 413 + if (!drm_dev_enter(dev, &idx)) 344 414 return; 345 415 346 - if (drm_atomic_helper_damage_merged(old_plane_state, state, &rect)) 347 - udl_handle_damage(fb, &shadow_plane_state->data[0], &rect); 416 + urb = udl_get_urb(dev); 417 + if (!urb) 418 + goto out; 419 + 420 + buf = (char *)urb->transfer_buffer; 421 + buf = udl_vidreg_lock(buf); 422 + buf = udl_set_blank_mode(buf, UDL_BLANKMODE_POWERDOWN); 423 + buf = udl_vidreg_unlock(buf); 424 + buf = udl_dummy_render(buf); 425 + 426 + udl_submit_urb(dev, urb, buf - (char *)urb->transfer_buffer); 427 + 428 + out: 429 + drm_dev_exit(idx); 348 430 } 349 431 350 - static const struct drm_simple_display_pipe_funcs udl_simple_display_pipe_funcs = { 351 - .mode_valid = udl_simple_display_pipe_mode_valid, 352 - .enable = udl_simple_display_pipe_enable, 353 - .disable = udl_simple_display_pipe_disable, 354 - .update = udl_simple_display_pipe_update, 355 - DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 432 + static const struct drm_crtc_helper_funcs udl_crtc_helper_funcs = { 433 + .atomic_check = udl_crtc_helper_atomic_check, 434 + .atomic_enable = 
udl_crtc_helper_atomic_enable, 435 + .atomic_disable = udl_crtc_helper_atomic_disable, 356 436 }; 437 + 438 + static const struct drm_crtc_funcs udl_crtc_funcs = { 439 + .reset = drm_atomic_helper_crtc_reset, 440 + .destroy = drm_crtc_cleanup, 441 + .set_config = drm_atomic_helper_set_config, 442 + .page_flip = drm_atomic_helper_page_flip, 443 + .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 444 + .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 445 + }; 446 + 447 + /* 448 + * Encoder 449 + */ 450 + 451 + static const struct drm_encoder_funcs udl_encoder_funcs = { 452 + .destroy = drm_encoder_cleanup, 453 + }; 454 + 455 + /* 456 + * Connector 457 + */ 458 + 459 + static int udl_connector_helper_get_modes(struct drm_connector *connector) 460 + { 461 + struct udl_connector *udl_connector = to_udl_connector(connector); 462 + 463 + drm_connector_update_edid_property(connector, udl_connector->edid); 464 + if (udl_connector->edid) 465 + return drm_add_edid_modes(connector, udl_connector->edid); 466 + 467 + return 0; 468 + } 469 + 470 + static const struct drm_connector_helper_funcs udl_connector_helper_funcs = { 471 + .get_modes = udl_connector_helper_get_modes, 472 + }; 473 + 474 + static int udl_get_edid_block(void *data, u8 *buf, unsigned int block, size_t len) 475 + { 476 + struct udl_device *udl = data; 477 + struct drm_device *dev = &udl->drm; 478 + struct usb_device *udev = udl_to_usb_device(udl); 479 + u8 *read_buff; 480 + int ret; 481 + size_t i; 482 + 483 + read_buff = kmalloc(2, GFP_KERNEL); 484 + if (!read_buff) 485 + return -ENOMEM; 486 + 487 + for (i = 0; i < len; i++) { 488 + int bval = (i + block * EDID_LENGTH) << 8; 489 + 490 + ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 491 + 0x02, (0x80 | (0x02 << 5)), bval, 492 + 0xA1, read_buff, 2, USB_CTRL_GET_TIMEOUT); 493 + if (ret < 0) { 494 + drm_err(dev, "Read EDID byte %zu failed err %x\n", i, ret); 495 + goto err_kfree; 496 + } else if (ret < 1) { 497 + ret = -EIO; 
498 + drm_err(dev, "Read EDID byte %zu failed\n", i); 499 + goto err_kfree; 500 + } 501 + 502 + buf[i] = read_buff[1]; 503 + } 504 + 505 + kfree(read_buff); 506 + 507 + return 0; 508 + 509 + err_kfree: 510 + kfree(read_buff); 511 + return ret; 512 + } 513 + 514 + static enum drm_connector_status udl_connector_detect(struct drm_connector *connector, bool force) 515 + { 516 + struct drm_device *dev = connector->dev; 517 + struct udl_device *udl = to_udl(dev); 518 + struct udl_connector *udl_connector = to_udl_connector(connector); 519 + enum drm_connector_status status = connector_status_disconnected; 520 + int idx; 521 + 522 + /* cleanup previous EDID */ 523 + kfree(udl_connector->edid); 524 + udl_connector->edid = NULL; 525 + 526 + if (!drm_dev_enter(dev, &idx)) 527 + return connector_status_disconnected; 528 + 529 + udl_connector->edid = drm_do_get_edid(connector, udl_get_edid_block, udl); 530 + if (udl_connector->edid) 531 + status = connector_status_connected; 532 + 533 + drm_dev_exit(idx); 534 + 535 + return status; 536 + } 537 + 538 + static void udl_connector_destroy(struct drm_connector *connector) 539 + { 540 + struct udl_connector *udl_connector = to_udl_connector(connector); 541 + 542 + drm_connector_cleanup(connector); 543 + kfree(udl_connector->edid); 544 + kfree(udl_connector); 545 + } 546 + 547 + static const struct drm_connector_funcs udl_connector_funcs = { 548 + .reset = drm_atomic_helper_connector_reset, 549 + .detect = udl_connector_detect, 550 + .fill_modes = drm_helper_probe_single_connector_modes, 551 + .destroy = udl_connector_destroy, 552 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 553 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 554 + }; 555 + 556 + struct drm_connector *udl_connector_init(struct drm_device *dev) 557 + { 558 + struct udl_connector *udl_connector; 559 + struct drm_connector *connector; 560 + int ret; 561 + 562 + udl_connector = kzalloc(sizeof(*udl_connector), GFP_KERNEL); 
563 + if (!udl_connector) 564 + return ERR_PTR(-ENOMEM); 565 + 566 + connector = &udl_connector->connector; 567 + ret = drm_connector_init(dev, connector, &udl_connector_funcs, DRM_MODE_CONNECTOR_VGA); 568 + if (ret) 569 + goto err_kfree; 570 + 571 + drm_connector_helper_add(connector, &udl_connector_helper_funcs); 572 + 573 + connector->polled = DRM_CONNECTOR_POLL_HPD | 574 + DRM_CONNECTOR_POLL_CONNECT | 575 + DRM_CONNECTOR_POLL_DISCONNECT; 576 + 577 + return connector; 578 + 579 + err_kfree: 580 + kfree(udl_connector); 581 + return ERR_PTR(ret); 582 + } 357 583 358 584 /* 359 585 * Modesetting 360 586 */ 361 587 362 - static const struct drm_mode_config_funcs udl_mode_funcs = { 588 + static enum drm_mode_status udl_mode_config_mode_valid(struct drm_device *dev, 589 + const struct drm_display_mode *mode) 590 + { 591 + struct udl_device *udl = to_udl(dev); 592 + 593 + if (udl->sku_pixel_limit) { 594 + if (mode->vdisplay * mode->hdisplay > udl->sku_pixel_limit) 595 + return MODE_MEM; 596 + } 597 + 598 + return MODE_OK; 599 + } 600 + 601 + static const struct drm_mode_config_funcs udl_mode_config_funcs = { 363 602 .fb_create = drm_gem_fb_create_with_dirty, 603 + .mode_valid = udl_mode_config_mode_valid, 364 604 .atomic_check = drm_atomic_helper_check, 365 605 .atomic_commit = drm_atomic_helper_commit, 366 606 }; 367 607 368 608 int udl_modeset_init(struct drm_device *dev) 369 609 { 370 - size_t format_count = ARRAY_SIZE(udl_simple_display_pipe_formats); 371 610 struct udl_device *udl = to_udl(dev); 611 + struct drm_plane *primary_plane; 612 + struct drm_crtc *crtc; 613 + struct drm_encoder *encoder; 372 614 struct drm_connector *connector; 373 615 int ret; 374 616 ··· 572 426 573 427 dev->mode_config.min_width = 640; 574 428 dev->mode_config.min_height = 480; 575 - 576 429 dev->mode_config.max_width = 2048; 577 430 dev->mode_config.max_height = 2048; 578 - 579 - dev->mode_config.prefer_shadow = 0; 580 431 dev->mode_config.preferred_depth = 16; 432 + 
dev->mode_config.funcs = &udl_mode_config_funcs; 581 433 582 - dev->mode_config.funcs = &udl_mode_funcs; 434 + primary_plane = &udl->primary_plane; 435 + ret = drm_universal_plane_init(dev, primary_plane, 0, 436 + &udl_primary_plane_funcs, 437 + udl_primary_plane_formats, 438 + ARRAY_SIZE(udl_primary_plane_formats), 439 + udl_primary_plane_fmtmods, 440 + DRM_PLANE_TYPE_PRIMARY, NULL); 441 + if (ret) 442 + return ret; 443 + drm_plane_helper_add(primary_plane, &udl_primary_plane_helper_funcs); 444 + drm_plane_enable_fb_damage_clips(primary_plane); 445 + 446 + crtc = &udl->crtc; 447 + ret = drm_crtc_init_with_planes(dev, crtc, primary_plane, NULL, 448 + &udl_crtc_funcs, NULL); 449 + if (ret) 450 + return ret; 451 + drm_crtc_helper_add(crtc, &udl_crtc_helper_funcs); 452 + 453 + encoder = &udl->encoder; 454 + ret = drm_encoder_init(dev, encoder, &udl_encoder_funcs, DRM_MODE_ENCODER_DAC, NULL); 455 + if (ret) 456 + return ret; 457 + encoder->possible_crtcs = drm_crtc_mask(crtc); 583 458 584 459 connector = udl_connector_init(dev); 585 460 if (IS_ERR(connector)) 586 461 return PTR_ERR(connector); 587 - 588 - format_count = ARRAY_SIZE(udl_simple_display_pipe_formats); 589 - 590 - ret = drm_simple_display_pipe_init(dev, &udl->display_pipe, 591 - &udl_simple_display_pipe_funcs, 592 - udl_simple_display_pipe_formats, 593 - format_count, NULL, connector); 462 + ret = drm_connector_attach_encoder(connector, encoder); 594 463 if (ret) 595 464 return ret; 596 - drm_plane_enable_fb_damage_clips(&udl->display_pipe.plane); 597 465 598 466 drm_mode_config_reset(dev); 599 467
+68
drivers/gpu/drm/udl/udl_proto.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef UDL_PROTO_H 4 + #define UDL_PROTO_H 5 + 6 + #include <linux/bits.h> 7 + 8 + #define UDL_MSG_BULK 0xaf 9 + 10 + /* Register access */ 11 + #define UDL_CMD_WRITEREG 0x20 /* See register constants below */ 12 + 13 + /* Framebuffer access */ 14 + #define UDL_CMD_WRITERAW8 0x60 /* 8 bit raw write command. */ 15 + #define UDL_CMD_WRITERL8 0x61 /* 8 bit run length command. */ 16 + #define UDL_CMD_WRITECOPY8 0x62 /* 8 bit copy command. */ 17 + #define UDL_CMD_WRITERLX8 0x63 /* 8 bit extended run length command. */ 18 + #define UDL_CMD_WRITERAW16 0x68 /* 16 bit raw write command. */ 19 + #define UDL_CMD_WRITERL16 0x69 /* 16 bit run length command. */ 20 + #define UDL_CMD_WRITECOPY16 0x6a /* 16 bit copy command. */ 21 + #define UDL_CMD_WRITERLX16 0x6b /* 16 bit extended run length command. */ 22 + 23 + /* Color depth */ 24 + #define UDL_REG_COLORDEPTH 0x00 25 + #define UDL_COLORDEPTH_16BPP 0 26 + #define UDL_COLORDEPTH_24BPP 1 27 + 28 + /* Display-mode settings */ 29 + #define UDL_REG_XDISPLAYSTART 0x01 30 + #define UDL_REG_XDISPLAYEND 0x03 31 + #define UDL_REG_YDISPLAYSTART 0x05 32 + #define UDL_REG_YDISPLAYEND 0x07 33 + #define UDL_REG_XENDCOUNT 0x09 34 + #define UDL_REG_HSYNCSTART 0x0b 35 + #define UDL_REG_HSYNCEND 0x0d 36 + #define UDL_REG_HPIXELS 0x0f 37 + #define UDL_REG_YENDCOUNT 0x11 38 + #define UDL_REG_VSYNCSTART 0x13 39 + #define UDL_REG_VSYNCEND 0x15 40 + #define UDL_REG_VPIXELS 0x17 41 + #define UDL_REG_PIXELCLOCK5KHZ 0x1b 42 + 43 + /* On/Off for driving the DisplayLink framebuffer to the display */ 44 + #define UDL_REG_BLANKMODE 0x1f 45 + #define UDL_BLANKMODE_ON 0x00 /* hsync and vsync on, visible */ 46 + #define UDL_BLANKMODE_BLANKED 0x01 /* hsync and vsync on, blanked */ 47 + #define UDL_BLANKMODE_VSYNC_OFF 0x03 /* vsync off, blanked */ 48 + #define UDL_BLANKMODE_HSYNC_OFF 0x05 /* hsync off, blanked */ 49 + #define UDL_BLANKMODE_POWERDOWN 0x07 /* powered off; requires modeset */ 50 + 51 + /* 
Framebuffer address */ 52 + #define UDL_REG_BASE16BPP_ADDR2 0x20 53 + #define UDL_REG_BASE16BPP_ADDR1 0x21 54 + #define UDL_REG_BASE16BPP_ADDR0 0x22 55 + #define UDL_REG_BASE8BPP_ADDR2 0x26 56 + #define UDL_REG_BASE8BPP_ADDR1 0x27 57 + #define UDL_REG_BASE8BPP_ADDR0 0x28 58 + 59 + #define UDL_BASE_ADDR0_MASK GENMASK(7, 0) 60 + #define UDL_BASE_ADDR1_MASK GENMASK(15, 8) 61 + #define UDL_BASE_ADDR2_MASK GENMASK(23, 16) 62 + 63 + /* Lock/unlock video registers */ 64 + #define UDL_REG_VIDREG 0xff 65 + #define UDL_VIDREG_LOCK 0x00 66 + #define UDL_VIDREG_UNLOCK 0xff 67 + 68 + #endif
+4 -3
drivers/gpu/drm/udl/udl_transfer.c
··· 10 10 #include <asm/unaligned.h> 11 11 12 12 #include "udl_drv.h" 13 + #include "udl_proto.h" 13 14 14 15 #define MAX_CMD_PIXELS 255 15 16 ··· 90 89 const u8 *cmd_pixel_start, *cmd_pixel_end = NULL; 91 90 uint16_t pixel_val16; 92 91 93 - *cmd++ = 0xaf; 94 - *cmd++ = 0x6b; 92 + *cmd++ = UDL_MSG_BULK; 93 + *cmd++ = UDL_CMD_WRITERLX16; 95 94 *cmd++ = (uint8_t) ((dev_addr >> 16) & 0xFF); 96 95 *cmd++ = (uint8_t) ((dev_addr >> 8) & 0xFF); 97 96 *cmd++ = (uint8_t) ((dev_addr) & 0xFF); ··· 153 152 if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) { 154 153 /* Fill leftover bytes with no-ops */ 155 154 if (cmd_buffer_end > cmd) 156 - memset(cmd, 0xAF, cmd_buffer_end - cmd); 155 + memset(cmd, UDL_MSG_BULK, cmd_buffer_end - cmd); 157 156 cmd = (uint8_t *) cmd_buffer_end; 158 157 } 159 158
+1 -1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 542 542 new_state->base.max_bpc = 8; 543 543 new_state->base.max_requested_bpc = 8; 544 544 new_state->output_format = VC4_HDMI_OUTPUT_RGB; 545 - drm_atomic_helper_connector_tv_reset(connector); 545 + drm_atomic_helper_connector_tv_margins_reset(connector); 546 546 } 547 547 548 548 static struct drm_connector_state *
+4 -4
drivers/gpu/drm/vc4/vc4_vec.c
··· 69 69 #define VEC_CONFIG0_STD_MASK GENMASK(1, 0) 70 70 #define VEC_CONFIG0_NTSC_STD 0 71 71 #define VEC_CONFIG0_PAL_BDGHI_STD 1 72 + #define VEC_CONFIG0_PAL_M_STD 2 72 73 #define VEC_CONFIG0_PAL_N_STD 3 73 74 74 75 #define VEC_SCHPH 0x108 ··· 256 255 .config1 = VEC_CONFIG1_C_CVBS_CVBS, 257 256 }, 258 257 [VC4_VEC_TV_MODE_PAL_M] = { 259 - .mode = &pal_mode, 260 - .config0 = VEC_CONFIG0_PAL_BDGHI_STD, 261 - .config1 = VEC_CONFIG1_C_CVBS_CVBS | VEC_CONFIG1_CUSTOM_FREQ, 262 - .custom_freq = 0x223b61d1, 258 + .mode = &ntsc_mode, 259 + .config0 = VEC_CONFIG0_PAL_M_STD, 260 + .config1 = VEC_CONFIG1_C_CVBS_CVBS, 263 261 }, 264 262 }; 265 263
+4 -3
drivers/infiniband/core/umem_dmabuf.c
··· 26 26 if (umem_dmabuf->sgt) 27 27 goto wait_fence; 28 28 29 - sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL); 29 + sgt = dma_buf_map_attachment_unlocked(umem_dmabuf->attach, 30 + DMA_BIDIRECTIONAL); 30 31 if (IS_ERR(sgt)) 31 32 return PTR_ERR(sgt); 32 33 ··· 103 102 umem_dmabuf->last_sg_trim = 0; 104 103 } 105 104 106 - dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt, 107 - DMA_BIDIRECTIONAL); 105 + dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt, 106 + DMA_BIDIRECTIONAL); 108 107 109 108 umem_dmabuf->sgt = NULL; 110 109 }
+7 -15
drivers/media/common/videobuf2/videobuf2-dma-contig.c
··· 101 101 if (buf->db_attach) { 102 102 struct iosys_map map; 103 103 104 - if (!dma_buf_vmap(buf->db_attach->dmabuf, &map)) 104 + if (!dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map)) 105 105 buf->vaddr = map.vaddr; 106 106 107 107 return buf->vaddr; ··· 382 382 struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) 383 383 { 384 384 struct vb2_dc_attachment *attach = db_attach->priv; 385 - /* stealing dmabuf mutex to serialize map/unmap operations */ 386 - struct mutex *lock = &db_attach->dmabuf->lock; 387 385 struct sg_table *sgt; 388 - 389 - mutex_lock(lock); 390 386 391 387 sgt = &attach->sgt; 392 388 /* return previously mapped sg table */ 393 - if (attach->dma_dir == dma_dir) { 394 - mutex_unlock(lock); 389 + if (attach->dma_dir == dma_dir) 395 390 return sgt; 396 - } 397 391 398 392 /* release any previous cache */ 399 393 if (attach->dma_dir != DMA_NONE) { ··· 403 409 if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 404 410 DMA_ATTR_SKIP_CPU_SYNC)) { 405 411 pr_err("failed to map scatterlist\n"); 406 - mutex_unlock(lock); 407 412 return ERR_PTR(-EIO); 408 413 } 409 414 410 415 attach->dma_dir = dma_dir; 411 - 412 - mutex_unlock(lock); 413 416 414 417 return sgt; 415 418 } ··· 702 711 } 703 712 704 713 /* get the associated scatterlist for this buffer */ 705 - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir); 714 + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); 706 715 if (IS_ERR(sgt)) { 707 716 pr_err("Error getting dmabuf scatterlist\n"); 708 717 return -EINVAL; ··· 713 722 if (contig_size < buf->size) { 714 723 pr_err("contiguous chunk is too small %lu/%lu\n", 715 724 contig_size, buf->size); 716 - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); 725 + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, 726 + buf->dma_dir); 717 727 return -EFAULT; 718 728 } 719 729 ··· 742 750 } 743 751 744 752 if (buf->vaddr) { 745 - dma_buf_vunmap(buf->db_attach->dmabuf, &map); 753 + 
dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); 746 754 buf->vaddr = NULL; 747 755 } 748 - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); 756 + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); 749 757 750 758 buf->dma_addr = 0; 751 759 buf->dma_sgt = NULL;
+5 -14
drivers/media/common/videobuf2/videobuf2-dma-sg.c
··· 309 309 310 310 if (!buf->vaddr) { 311 311 if (buf->db_attach) { 312 - ret = dma_buf_vmap(buf->db_attach->dmabuf, &map); 312 + ret = dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map); 313 313 buf->vaddr = ret ? NULL : map.vaddr; 314 314 } else { 315 315 buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1); ··· 424 424 struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) 425 425 { 426 426 struct vb2_dma_sg_attachment *attach = db_attach->priv; 427 - /* stealing dmabuf mutex to serialize map/unmap operations */ 428 - struct mutex *lock = &db_attach->dmabuf->lock; 429 427 struct sg_table *sgt; 430 - 431 - mutex_lock(lock); 432 428 433 429 sgt = &attach->sgt; 434 430 /* return previously mapped sg table */ 435 - if (attach->dma_dir == dma_dir) { 436 - mutex_unlock(lock); 431 + if (attach->dma_dir == dma_dir) 437 432 return sgt; 438 - } 439 433 440 434 /* release any previous cache */ 441 435 if (attach->dma_dir != DMA_NONE) { ··· 440 446 /* mapping to the client with new direction */ 441 447 if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { 442 448 pr_err("failed to map scatterlist\n"); 443 - mutex_unlock(lock); 444 449 return ERR_PTR(-EIO); 445 450 } 446 451 447 452 attach->dma_dir = dma_dir; 448 - 449 - mutex_unlock(lock); 450 453 451 454 return sgt; 452 455 } ··· 556 565 } 557 566 558 567 /* get the associated scatterlist for this buffer */ 559 - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir); 568 + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); 560 569 if (IS_ERR(sgt)) { 561 570 pr_err("Error getting dmabuf scatterlist\n"); 562 571 return -EINVAL; ··· 585 594 } 586 595 587 596 if (buf->vaddr) { 588 - dma_buf_vunmap(buf->db_attach->dmabuf, &map); 597 + dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); 589 598 buf->vaddr = NULL; 590 599 } 591 - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); 600 + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); 592 601 593 602 
buf->dma_sgt = NULL; 594 603 }
+4 -13
drivers/media/common/videobuf2/videobuf2-vmalloc.c
··· 267 267 struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) 268 268 { 269 269 struct vb2_vmalloc_attachment *attach = db_attach->priv; 270 - /* stealing dmabuf mutex to serialize map/unmap operations */ 271 - struct mutex *lock = &db_attach->dmabuf->lock; 272 270 struct sg_table *sgt; 273 - 274 - mutex_lock(lock); 275 271 276 272 sgt = &attach->sgt; 277 273 /* return previously mapped sg table */ 278 - if (attach->dma_dir == dma_dir) { 279 - mutex_unlock(lock); 274 + if (attach->dma_dir == dma_dir) 280 275 return sgt; 281 - } 282 276 283 277 /* release any previous cache */ 284 278 if (attach->dma_dir != DMA_NONE) { ··· 283 289 /* mapping to the client with new direction */ 284 290 if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { 285 291 pr_err("failed to map scatterlist\n"); 286 - mutex_unlock(lock); 287 292 return ERR_PTR(-EIO); 288 293 } 289 294 290 295 attach->dma_dir = dma_dir; 291 - 292 - mutex_unlock(lock); 293 296 294 297 return sgt; 295 298 } ··· 367 376 struct iosys_map map; 368 377 int ret; 369 378 370 - ret = dma_buf_vmap(buf->dbuf, &map); 379 + ret = dma_buf_vmap_unlocked(buf->dbuf, &map); 371 380 if (ret) 372 381 return -EFAULT; 373 382 buf->vaddr = map.vaddr; ··· 380 389 struct vb2_vmalloc_buf *buf = mem_priv; 381 390 struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 382 391 383 - dma_buf_vunmap(buf->dbuf, &map); 392 + dma_buf_vunmap_unlocked(buf->dbuf, &map); 384 393 buf->vaddr = NULL; 385 394 } 386 395 ··· 390 399 struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 391 400 392 401 if (buf->vaddr) 393 - dma_buf_vunmap(buf->dbuf, &map); 402 + dma_buf_vunmap_unlocked(buf->dbuf, &map); 394 403 395 404 kfree(buf); 396 405 }
+3 -3
drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
··· 38 38 if (entry->vde->domain) 39 39 tegra_vde_iommu_unmap(entry->vde, entry->iova); 40 40 41 - dma_buf_unmap_attachment(entry->a, entry->sgt, entry->dma_dir); 41 + dma_buf_unmap_attachment_unlocked(entry->a, entry->sgt, entry->dma_dir); 42 42 dma_buf_detach(dmabuf, entry->a); 43 43 dma_buf_put(dmabuf); 44 44 ··· 102 102 goto err_unlock; 103 103 } 104 104 105 - sgt = dma_buf_map_attachment(attachment, dma_dir); 105 + sgt = dma_buf_map_attachment_unlocked(attachment, dma_dir); 106 106 if (IS_ERR(sgt)) { 107 107 dev_err(dev, "Failed to get dmabufs sg_table\n"); 108 108 err = PTR_ERR(sgt); ··· 152 152 err_free: 153 153 kfree(entry); 154 154 err_unmap: 155 - dma_buf_unmap_attachment(attachment, sgt, dma_dir); 155 + dma_buf_unmap_attachment_unlocked(attachment, sgt, dma_dir); 156 156 err_detach: 157 157 dma_buf_detach(dmabuf, attachment); 158 158 err_unlock:
+3 -3
drivers/misc/fastrpc.c
··· 310 310 return; 311 311 } 312 312 } 313 - dma_buf_unmap_attachment(map->attach, map->table, 314 - DMA_BIDIRECTIONAL); 313 + dma_buf_unmap_attachment_unlocked(map->attach, map->table, 314 + DMA_BIDIRECTIONAL); 315 315 dma_buf_detach(map->buf, map->attach); 316 316 dma_buf_put(map->buf); 317 317 } ··· 726 726 goto attach_err; 727 727 } 728 728 729 - map->table = dma_buf_map_attachment(map->attach, DMA_BIDIRECTIONAL); 729 + map->table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL); 730 730 if (IS_ERR(map->table)) { 731 731 err = PTR_ERR(map->table); 732 732 goto map_err;
+1
drivers/video/fbdev/Kconfig
··· 455 455 config FB_OF 456 456 bool "Open Firmware frame buffer device support" 457 457 depends on (FB = y) && PPC && (!PPC_PSERIES || PCI) 458 + depends on !DRM_OFDRM 458 459 select APERTURE_HELPERS 459 460 select FB_CFB_FILLRECT 460 461 select FB_CFB_COPYAREA
+4 -4
drivers/xen/gntdev-dmabuf.c
··· 600 600 601 601 gntdev_dmabuf->u.imp.attach = attach; 602 602 603 - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); 603 + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); 604 604 if (IS_ERR(sgt)) { 605 605 ret = ERR_CAST(sgt); 606 606 goto fail_detach; ··· 658 658 fail_end_access: 659 659 dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count); 660 660 fail_unmap: 661 - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); 661 + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); 662 662 fail_detach: 663 663 dma_buf_detach(dma_buf, attach); 664 664 fail_free_obj: ··· 708 708 attach = gntdev_dmabuf->u.imp.attach; 709 709 710 710 if (gntdev_dmabuf->u.imp.sgt) 711 - dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt, 712 - DMA_BIDIRECTIONAL); 711 + dma_buf_unmap_attachment_unlocked(attach, gntdev_dmabuf->u.imp.sgt, 712 + DMA_BIDIRECTIONAL); 713 713 dma_buf = attach->dmabuf; 714 714 dma_buf_detach(attach->dmabuf, attach); 715 715 dma_buf_put(dma_buf);
+1 -2
include/drm/drm_atomic_helper.h
··· 58 58 int max_scale, 59 59 bool can_position, 60 60 bool can_update_disabled); 61 - int drm_atomic_helper_check_crtc_state(struct drm_crtc_state *crtc_state, 62 - bool can_disable_primary_plane); 63 61 int drm_atomic_helper_check_planes(struct drm_device *dev, 64 62 struct drm_atomic_state *state); 63 + int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 65 64 int drm_atomic_helper_check(struct drm_device *dev, 66 65 struct drm_atomic_state *state); 67 66 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state);
+1 -1
include/drm/drm_atomic_state_helper.h
··· 70 70 void __drm_atomic_helper_connector_reset(struct drm_connector *connector, 71 71 struct drm_connector_state *conn_state); 72 72 void drm_atomic_helper_connector_reset(struct drm_connector *connector); 73 - void drm_atomic_helper_connector_tv_reset(struct drm_connector *connector); 73 + void drm_atomic_helper_connector_tv_margins_reset(struct drm_connector *connector); 74 74 void 75 75 __drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector, 76 76 struct drm_connector_state *state);
+3 -1
include/drm/drm_connector.h
··· 692 692 693 693 /** 694 694 * struct drm_tv_connector_state - TV connector related states 695 - * @subconnector: selected subconnector 695 + * @select_subconnector: selected subconnector 696 + * @subconnector: detected subconnector 696 697 * @margins: TV margins 697 698 * @mode: TV mode 698 699 * @brightness: brightness in percent ··· 704 703 * @hue: hue in percent 705 704 */ 706 705 struct drm_tv_connector_state { 706 + enum drm_mode_subconnector select_subconnector; 707 707 enum drm_mode_subconnector subconnector; 708 708 struct drm_connector_tv_margins margins; 709 709 unsigned int mode;
+2
include/drm/drm_crtc_helper.h
··· 50 50 struct drm_display_mode *mode, 51 51 int x, int y, 52 52 struct drm_framebuffer *old_fb); 53 + int drm_crtc_helper_atomic_check(struct drm_crtc *crtc, 54 + struct drm_atomic_state *state); 53 55 bool drm_helper_crtc_in_use(struct drm_crtc *crtc); 54 56 bool drm_helper_encoder_in_use(struct drm_encoder *encoder); 55 57
+9 -5
include/drm/drm_edid.h
··· 97 97 #define DRM_EDID_RANGE_OFFSET_MIN_HFREQ (1 << 2) /* 1.4 */ 98 98 #define DRM_EDID_RANGE_OFFSET_MAX_HFREQ (1 << 3) /* 1.4 */ 99 99 100 - #define DRM_EDID_DEFAULT_GTF_SUPPORT_FLAG 0x00 101 - #define DRM_EDID_RANGE_LIMITS_ONLY_FLAG 0x01 102 - #define DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG 0x02 103 - #define DRM_EDID_CVT_SUPPORT_FLAG 0x04 100 + #define DRM_EDID_DEFAULT_GTF_SUPPORT_FLAG 0x00 /* 1.3 */ 101 + #define DRM_EDID_RANGE_LIMITS_ONLY_FLAG 0x01 /* 1.4 */ 102 + #define DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG 0x02 /* 1.3 */ 103 + #define DRM_EDID_CVT_SUPPORT_FLAG 0x04 /* 1.4 */ 104 + 105 + #define DRM_EDID_CVT_FLAGS_STANDARD_BLANKING (1 << 3) 106 + #define DRM_EDID_CVT_FLAGS_REDUCED_BLANKING (1 << 4) 104 107 105 108 struct detailed_data_monitor_range { 106 109 u8 min_vfreq; ··· 209 206 #define DRM_EDID_DIGITAL_TYPE_DP (5 << 0) /* 1.4 */ 210 207 #define DRM_EDID_DIGITAL_DFP_1_X (1 << 0) /* 1.3 */ 211 208 212 - #define DRM_EDID_FEATURE_DEFAULT_GTF (1 << 0) 209 + #define DRM_EDID_FEATURE_DEFAULT_GTF (1 << 0) /* 1.2 */ 210 + #define DRM_EDID_FEATURE_CONTINUOUS_FREQ (1 << 0) /* 1.4 */ 213 211 #define DRM_EDID_FEATURE_PREFERRED_TIMING (1 << 1) 214 212 #define DRM_EDID_FEATURE_STANDARD_COLOR (1 << 2) 215 213 /* If analog */
+3
include/drm/drm_gem.h
··· 457 457 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages, 458 458 bool dirty, bool accessed); 459 459 460 + int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); 461 + void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); 462 + 460 463 int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles, 461 464 int count, struct drm_gem_object ***objs_out); 462 465 struct drm_gem_object *drm_gem_object_lookup(struct drm_file *filp, u32 handle);
-2
include/drm/drm_mode_config.h
··· 345 345 * @max_width: maximum fb pixel width on this device 346 346 * @max_height: maximum fb pixel height on this device 347 347 * @funcs: core driver provided mode setting functions 348 - * @fb_base: base address of the framebuffer 349 348 * @poll_enabled: track polling support for this device 350 349 * @poll_running: track polling status for this device 351 350 * @delayed_event: track delayed poll uevent deliver for this device ··· 541 542 int min_width, min_height; 542 543 int max_width, max_height; 543 544 const struct drm_mode_config_funcs *funcs; 544 - resource_size_t fb_base; 545 545 546 546 /* output poll support */ 547 547 bool poll_enabled;
+32
include/drm/gpu_scheduler.h
··· 59 59 DRM_SCHED_PRIORITY_UNSET = -2 60 60 }; 61 61 62 + /* Used to chose between FIFO and RR jobs scheduling */ 63 + extern int drm_sched_policy; 64 + 65 + #define DRM_SCHED_POLICY_RR 0 66 + #define DRM_SCHED_POLICY_FIFO 1 67 + 62 68 /** 63 69 * struct drm_sched_entity - A wrapper around a job queue (typically 64 70 * attached to the DRM file_priv). ··· 211 205 * drm_sched_entity_fini(). 212 206 */ 213 207 struct completion entity_idle; 208 + 209 + /** 210 + * @oldest_job_waiting: 211 + * 212 + * Marks earliest job waiting in SW queue 213 + */ 214 + ktime_t oldest_job_waiting; 215 + 216 + /** 217 + * @rb_tree_node: 218 + * 219 + * The node used to insert this entity into time based priority queue 220 + */ 221 + struct rb_node rb_tree_node; 222 + 214 223 }; 215 224 216 225 /** ··· 235 214 * @sched: the scheduler to which this rq belongs to. 236 215 * @entities: list of the entities to be scheduled. 237 216 * @current_entity: the entity which is to be scheduled. 217 + * @rb_tree_root: root of time based priory queue of entities for FIFO scheduling 238 218 * 239 219 * Run queue is a set of entities scheduling command submissions for 240 220 * one specific ring. It implements the scheduling policy that selects ··· 246 224 struct drm_gpu_scheduler *sched; 247 225 struct list_head entities; 248 226 struct drm_sched_entity *current_entity; 227 + struct rb_root_cached rb_tree_root; 249 228 }; 250 229 251 230 /** ··· 346 323 347 324 /** @last_dependency: tracks @dependencies as they signal */ 348 325 unsigned long last_dependency; 326 + 327 + /** 328 + * @submit_ts: 329 + * 330 + * When the job was pushed into the entity queue. 
331 + */ 332 + ktime_t submit_ts; 349 333 }; 350 334 351 335 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job, ··· 541 511 struct drm_sched_entity *entity); 542 512 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, 543 513 struct drm_sched_entity *entity); 514 + 515 + void drm_sched_rq_update_fifo(struct drm_sched_entity *entity, ktime_t ts); 544 516 545 517 int drm_sched_entity_init(struct drm_sched_entity *entity, 546 518 enum drm_sched_priority priority,
+8 -9
include/linux/dma-buf.h
··· 327 327 const struct dma_buf_ops *ops; 328 328 329 329 /** 330 - * @lock: 331 - * 332 - * Used internally to serialize list manipulation, attach/detach and 333 - * vmap/unmap. Note that in many cases this is superseeded by 334 - * dma_resv_lock() on @resv. 335 - */ 336 - struct mutex lock; 337 - 338 - /** 339 330 * @vmapping_counter: 340 331 * 341 332 * Used internally to refcnt the vmaps returned by dma_buf_vmap(). ··· 618 627 enum dma_data_direction dir); 619 628 int dma_buf_end_cpu_access(struct dma_buf *dma_buf, 620 629 enum dma_data_direction dir); 630 + struct sg_table * 631 + dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, 632 + enum dma_data_direction direction); 633 + void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, 634 + struct sg_table *sg_table, 635 + enum dma_data_direction direction); 621 636 622 637 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, 623 638 unsigned long); 624 639 int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map); 625 640 void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map); 641 + int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map); 642 + void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map); 626 643 #endif /* __DMA_BUF_H__ */
+29
include/uapi/drm/drm_fourcc.h
··· 744 744 */ 745 745 #define DRM_FORMAT_MOD_VIVANTE_SPLIT_SUPER_TILED fourcc_mod_code(VIVANTE, 4) 746 746 747 + /* 748 + * Vivante TS (tile-status) buffer modifiers. They can be combined with all of 749 + * the color buffer tiling modifiers defined above. When TS is present it's a 750 + * separate buffer containing the clear/compression status of each tile. The 751 + * modifiers are defined as VIVANTE_MOD_TS_c_s, where c is the color buffer 752 + * tile size in bytes covered by one entry in the status buffer and s is the 753 + * number of status bits per entry. 754 + * We reserve the top 8 bits of the Vivante modifier space for tile status 755 + * clear/compression modifiers, as future cores might add some more TS layout 756 + * variations. 757 + */ 758 + #define VIVANTE_MOD_TS_64_4 (1ULL << 48) 759 + #define VIVANTE_MOD_TS_64_2 (2ULL << 48) 760 + #define VIVANTE_MOD_TS_128_4 (3ULL << 48) 761 + #define VIVANTE_MOD_TS_256_4 (4ULL << 48) 762 + #define VIVANTE_MOD_TS_MASK (0xfULL << 48) 763 + 764 + /* 765 + * Vivante compression modifiers. Those depend on a TS modifier being present 766 + * as the TS bits get reinterpreted as compression tags instead of simple 767 + * clear markers when compression is enabled. 768 + */ 769 + #define VIVANTE_MOD_COMP_DEC400 (1ULL << 52) 770 + #define VIVANTE_MOD_COMP_MASK (0xfULL << 52) 771 + 772 + /* Masking out the extension bits will yield the base modifier. */ 773 + #define VIVANTE_MOD_EXT_MASK (VIVANTE_MOD_TS_MASK | \ 774 + VIVANTE_MOD_COMP_MASK) 775 + 747 776 /* NVIDIA frame buffer modifiers */ 748 777 749 778 /*
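The extension bits compose with the existing base modifiers by bitwise OR, and masking with `VIVANTE_MOD_EXT_MASK` recovers the base modifier, as the comment above says. A small self-contained check of that round trip (macro values copied from the diff; the vendor ID and `fourcc_mod_code()` encoding mirror `drm_fourcc.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors drm_fourcc.h: vendor ID in the top 8 bits, value in the rest. */
#define DRM_FORMAT_MOD_VENDOR_VIVANTE 0x06
#define fourcc_mod_code(vendor, val) \
	((((uint64_t)DRM_FORMAT_MOD_VENDOR_##vendor) << 56) | \
	 ((val) & 0x00ffffffffffffffULL))

#define DRM_FORMAT_MOD_VIVANTE_SPLIT_SUPER_TILED fourcc_mod_code(VIVANTE, 4)

/* Extension bits from the diff above: TS layout in bits 48-51,
 * compression in bits 52-55. */
#define VIVANTE_MOD_TS_64_4     (1ULL << 48)
#define VIVANTE_MOD_TS_MASK     (0xfULL << 48)
#define VIVANTE_MOD_COMP_DEC400 (1ULL << 52)
#define VIVANTE_MOD_COMP_MASK   (0xfULL << 52)
#define VIVANTE_MOD_EXT_MASK    (VIVANTE_MOD_TS_MASK | VIVANTE_MOD_COMP_MASK)
```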
+62 -1
include/uapi/drm/drm_mode.h
··· 935 935 }; 936 936 }; 937 937 938 + /** 939 + * DRM_MODE_PAGE_FLIP_EVENT 940 + * 941 + * Request that the kernel sends back a vblank event (see 942 + * struct drm_event_vblank) with the &DRM_EVENT_FLIP_COMPLETE type when the 943 + * page-flip is done. 944 + */ 938 945 #define DRM_MODE_PAGE_FLIP_EVENT 0x01 946 + /** 947 + * DRM_MODE_PAGE_FLIP_ASYNC 948 + * 949 + * Request that the page-flip is performed as soon as possible, i.e. with no 950 + * delay due to waiting for vblank. This may cause tearing to be visible on 951 + * the screen. 952 + */ 939 953 #define DRM_MODE_PAGE_FLIP_ASYNC 0x02 940 954 #define DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE 0x4 941 955 #define DRM_MODE_PAGE_FLIP_TARGET_RELATIVE 0x8 942 956 #define DRM_MODE_PAGE_FLIP_TARGET (DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE | \ 943 957 DRM_MODE_PAGE_FLIP_TARGET_RELATIVE) 958 + /** 959 + * DRM_MODE_PAGE_FLIP_FLAGS 960 + * 961 + * Bitmask of flags suitable for &drm_mode_crtc_page_flip_target.flags. 962 + */ 944 963 #define DRM_MODE_PAGE_FLIP_FLAGS (DRM_MODE_PAGE_FLIP_EVENT | \ 945 964 DRM_MODE_PAGE_FLIP_ASYNC | \ 946 965 DRM_MODE_PAGE_FLIP_TARGET) ··· 1053 1034 __u32 handle; 1054 1035 }; 1055 1036 1056 - /* page-flip flags are valid, plus: */ 1037 + /** 1038 + * DRM_MODE_ATOMIC_TEST_ONLY 1039 + * 1040 + * Do not apply the atomic commit; instead, check whether the hardware supports 1041 + * this configuration. 1042 + * 1043 + * See &drm_mode_config_funcs.atomic_check for more details on test-only 1044 + * commits. 1045 + */ 1057 1046 #define DRM_MODE_ATOMIC_TEST_ONLY 0x0100 1047 + /** 1048 + * DRM_MODE_ATOMIC_NONBLOCK 1049 + * 1050 + * Do not block while applying the atomic commit. The &DRM_IOCTL_MODE_ATOMIC 1051 + * IOCTL returns immediately instead of waiting for the changes to be applied 1052 + * in hardware. Note that the driver will still check that the update can be 1053 + * applied before returning. 
1054 + */ 1058 1055 #define DRM_MODE_ATOMIC_NONBLOCK 0x0200 1056 + /** 1057 + * DRM_MODE_ATOMIC_ALLOW_MODESET 1058 + * 1059 + * Allow the update to result in temporary or transient visible artifacts while 1060 + * the update is being applied. Applying the update may also take significantly 1061 + * more time than a page flip. All visual artifacts will disappear by the time 1062 + * the update is completed, as signalled through the vblank event's timestamp 1063 + * (see struct drm_event_vblank). 1064 + * 1065 + * This flag must be set when the KMS update might cause visible artifacts. 1066 + * Without this flag, such a KMS update will return an EINVAL error. What kind 1067 + * of update may cause visible artifacts depends on the driver and the hardware. 1068 + * User-space that needs to know beforehand if an update might cause visible 1069 + * artifacts can use &DRM_MODE_ATOMIC_TEST_ONLY without 1070 + * &DRM_MODE_ATOMIC_ALLOW_MODESET to see if it fails. 1071 + * 1072 + * To the best of the driver's knowledge, visual artifacts are guaranteed to 1073 + * not appear when this flag is not set. Some sinks might display visual 1074 + * artifacts outside of the driver's control. 1075 + */ 1059 1076 #define DRM_MODE_ATOMIC_ALLOW_MODESET 0x0400 1060 1077 1078 + /** 1079 + * DRM_MODE_ATOMIC_FLAGS 1080 + * 1081 + * Bitfield of flags accepted by the &DRM_IOCTL_MODE_ATOMIC IOCTL in 1082 + * &drm_mode_atomic.flags. 1083 + */ 1061 1084 #define DRM_MODE_ATOMIC_FLAGS (\ 1062 1085 DRM_MODE_PAGE_FLIP_EVENT |\ 1063 1086 DRM_MODE_PAGE_FLIP_ASYNC |\
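Taken together, the flags now documented above form the flag word the atomic IOCTL validates. A userspace sketch modelling two of the checks `drm_mode_atomic_ioctl()` performs (rejecting unknown flag bits, and rejecting `DRM_MODE_ATOMIC_TEST_ONLY` combined with `DRM_MODE_PAGE_FLIP_EVENT`, since a test-only commit never delivers an event); values copied from the header, the helper itself is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Flag values copied from drm_mode.h. */
#define DRM_MODE_PAGE_FLIP_EVENT      0x01
#define DRM_MODE_PAGE_FLIP_ASYNC      0x02
#define DRM_MODE_ATOMIC_TEST_ONLY     0x0100
#define DRM_MODE_ATOMIC_NONBLOCK      0x0200
#define DRM_MODE_ATOMIC_ALLOW_MODESET 0x0400

#define DRM_MODE_ATOMIC_FLAGS (DRM_MODE_PAGE_FLIP_EVENT |  \
			       DRM_MODE_PAGE_FLIP_ASYNC |  \
			       DRM_MODE_ATOMIC_TEST_ONLY | \
			       DRM_MODE_ATOMIC_NONBLOCK |  \
			       DRM_MODE_ATOMIC_ALLOW_MODESET)

/* Illustrative helper: returns nonzero if the flag word would pass the
 * corresponding checks in the kernel's atomic IOCTL. */
static int atomic_flags_valid(uint32_t flags)
{
	if (flags & ~DRM_MODE_ATOMIC_FLAGS)
		return 0; /* unknown flag bits -> EINVAL */
	if ((flags & DRM_MODE_ATOMIC_TEST_ONLY) &&
	    (flags & DRM_MODE_PAGE_FLIP_EVENT))
		return 0; /* test-only commits deliver no events */
	return 1;
}
```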