Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2022-01-27' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

[airlied: add two missing Kconfig]

drm-misc-next for v5.18:

UAPI Changes:
- Fix invalid IN_FORMATS blob when plane->format_mod_supported is NULL.

Cross-subsystem Changes:
- Assorted dt bindings updates.
- Fix vga16fb vga checking on x86.
- Fix extra semicolon in rwsem.h's _down_write_nest_lock.
- Assorted small fixes to agp and fbdev drivers.
- Fix oops in creating a udmabuf with 0 pages.
- Hot-unplug firmware fb devices on forced removal.
- Request the memory region in simplefb and simpledrm, and don't mark the I/O resource as busy.

Core Changes:
- Mock a drm_plane in drm-plane-helper selftest.
- Assorted bug fixes to device logging, dbi.
- Use DP helper for sink count in mst.
- Assorted documentation fixes.
- Assorted small fixes.
- Move DP headers to drm/dp, and add a drm dp helper module.
- Move the buddy allocator from i915 to common drm.
- Add simple PCI and platform module init macros to remove a lot of boilerplate from some drivers.
- Support the Microsoft extension for HMDs and specialized monitors.
- Improve edid parser's deep color handling.
- Add type 7 timing support to edid parser.
- Add a weak backpointer to the ttm_bo from ttm_resource.
- Add 3 eDP panels.
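The buddy allocator being moved from i915 into common DRM code follows the classic power-of-two split/merge scheme. As a rough, self-contained userspace illustration of that scheme only (made-up names and a tiny fixed arena, not the drm_buddy API):

```c
#include <assert.h>
#include <string.h>

#define MAX_ORDER    4                  /* arena = 2^4 = 16 unit-sized blocks */
#define ARENA_BLOCKS (1u << MAX_ORDER)

/* One free list per order; each entry is a block offset in units. */
static unsigned free_list[MAX_ORDER + 1][ARENA_BLOCKS];
static int free_count[MAX_ORDER + 1];

static void buddy_init(void)
{
    memset(free_count, 0, sizeof(free_count));
    /* Start with a single free block spanning the whole arena. */
    free_list[MAX_ORDER][free_count[MAX_ORDER]++] = 0;
}

/* Allocate a block of 2^order units; returns its offset, or -1 if full. */
static int buddy_alloc(int order)
{
    int o = order;

    while (o <= MAX_ORDER && free_count[o] == 0)
        o++;                            /* find the smallest free block that fits */
    if (o > MAX_ORDER)
        return -1;

    unsigned off = free_list[o][--free_count[o]];
    while (o > order) {                 /* split down, freeing the upper buddy */
        o--;
        free_list[o][free_count[o]++] = off + (1u << o);
    }
    return (int)off;
}

/* Free a block, coalescing with its buddy as long as the buddy is free. */
static void buddy_free(unsigned off, int order)
{
    while (order < MAX_ORDER) {
        unsigned buddy = off ^ (1u << order);
        int i, found = -1;

        for (i = 0; i < free_count[order]; i++)
            if (free_list[order][i] == buddy)
                found = i;
        if (found < 0)
            break;                      /* buddy busy: stop merging */
        /* remove buddy from its free list and merge one order up */
        free_list[order][found] = free_list[order][--free_count[order]];
        off &= ~(1u << order);          /* merged block starts at the lower buddy */
        order++;
    }
    free_list[order][free_count[order]++] = off;
}
```

The point of the buddy scheme is that freeing is cheap to de-fragment: two adjacent power-of-two blocks always merge back into one larger block, which is what makes it attractive for VRAM range management.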

Driver Changes:
- Add support for HDMI and JZ4780 to ingenic.
- Add support for higher DP/eDP bitrates to nouveau.
- Assorted driver fixes to tilcdc, vmwgfx, sn65dsi83, meson, stm, panfrost, v3d, gma500, vc4, virtio, mgag200, ast, radeon, amdgpu, nouveau, various bridge drivers.
- Convert and revert exynos dsi support to bridge driver.
- Add vcc supply regulator support for sn65dsi83.
- More conversion of bridge/chipone-icn6211 to atomic.
- Remove conflicting fbs from stm, and add support for a new hw version.
- Add device link in parade-ps8640 to fix suspend/resume.
- Update Boe-tv110c9m init sequence.
- Add wide screen support to AST2600.
- Fix omapdrm implicit dma_buf fencing.
- Add support for multiple overlay planes to vkms.
- Convert bridge/anx7625 to atomic, add HDCP support, add ELD support for audio, and fix HPD.
- Add driver for ChromeOS privacy screen.
- Handover display from firmware to vc4 more gracefully, and support nomodeset.
- Add flexible and ycbcr pixel formats to stm/ltdc.
- Convert exynos mipi dsi to atomic.
- Add initial dual core group GPUs support to panfrost.
- No longer add exclusive fence in amdgpu as shared fence.
- Add CSC and full range support to vc4.
- Shut down the display on system shutdown and unbind.
- Add Multi-Inno Technology MI0700S4T-6 simple panel.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/456a23c6-7324-7543-0c45-751f30ef83f7@linux.intel.com

+5380 -2616
+1
Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml
··· 39 39 - const: lvds-encoder # Generic LVDS encoder compatible fallback 40 40 - items: 41 41 - enum: 42 + - ti,ds90cf364a # For the DS90CF364A FPD-Link LVDS Receiver 42 43 - ti,ds90cf384a # For the DS90CF384A FPD-Link LVDS Receiver 43 44 - const: lvds-decoder # Generic LVDS decoders compatible fallback 44 45 - enum:
+4 -1
Documentation/devicetree/bindings/display/bridge/ti,sn65dsi83.yaml
··· 32 32 maxItems: 1 33 33 description: GPIO specifier for bridge_en pin (active high). 34 34 35 + vcc-supply: 36 + description: A 1.8V power supply (see regulator/regulator.yaml). 37 + 35 38 ports: 36 39 $ref: /schemas/graph.yaml#/properties/ports 37 40 ··· 94 91 required: 95 92 - compatible 96 93 - reg 97 - - enable-gpios 98 94 - ports 99 95 100 96 allOf: ··· 135 133 reg = <0x2d>; 136 134 137 135 enable-gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>; 136 + vcc-supply = <&reg_sn65dsi83_1v8>; 138 137 139 138 ports { 140 139 #address-cells = <1>;
+2
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
··· 222 222 - logictechno,lttd800480070-l6wh-rt 223 223 # Mitsubishi "AA070MC01 7.0" WVGA TFT LCD panel 224 224 - mitsubishi,aa070mc01-ca1 225 + # Multi-Inno Technology Co.,Ltd MI0700S4T-6 7" 800x480 TFT Resistive Touch Module 226 + - multi-inno,mi0700s4t-6 225 227 # Multi-Inno Technology Co.,Ltd MI1010AIT-1CP 10.1" 1280x800 LVDS IPS Cap Touch Mod. 226 228 - multi-inno,mi1010ait-1cp 227 229 # NEC LCD Technologies, Ltd. 12.1" WXGA (1280x800) LVDS TFT LCD panel
+9 -2
Documentation/devicetree/bindings/display/panel/sony,acx424akp.yaml
··· 4 4 $id: http://devicetree.org/schemas/display/panel/sony,acx424akp.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 - title: Sony ACX424AKP 4" 480x864 AMOLED panel 7 + title: Sony ACX424AKP/ACX424AKM 4" 480x864/480x854 AMOLED panel 8 + 9 + description: The Sony ACX424AKP and ACX424AKM are panels built around 10 + the Novatek NT35560 display controller. The only difference is that 11 + the AKM is configured to use 10 pixels less in the Y axis than the 12 + AKP. 8 13 9 14 maintainers: 10 15 - Linus Walleij <linus.walleij@linaro.org> ··· 19 14 20 15 properties: 21 16 compatible: 22 - const: sony,acx424akp 17 + enum: 18 + - sony,acx424akp 19 + - sony,acx424akm 23 20 reg: true 24 21 reset-gpios: true 25 22 vddi-supply:
+6
Documentation/gpu/drm-internals.rst
··· 75 75 kernel log at initialization time and passes it to userspace through the 76 76 DRM_IOCTL_VERSION ioctl. 77 77 78 + Module Initialization 79 + --------------------- 80 + 81 + .. kernel-doc:: include/drm/drm_module.h 82 + :doc: overview 83 + 78 84 Managing Ownership of the Framebuffer Aperture 79 85 ---------------------------------------------- 80 86
+13 -13
Documentation/gpu/drm-kms-helpers.rst
··· 232 232 Display Port Helper Functions Reference 233 233 ======================================= 234 234 235 - .. kernel-doc:: drivers/gpu/drm/drm_dp_helper.c 235 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp.c 236 236 :doc: dp helpers 237 237 238 - .. kernel-doc:: include/drm/drm_dp_helper.h 238 + .. kernel-doc:: include/drm/dp/drm_dp_helper.h 239 239 :internal: 240 240 241 - .. kernel-doc:: drivers/gpu/drm/drm_dp_helper.c 241 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp.c 242 242 :export: 243 243 244 244 Display Port CEC Helper Functions Reference 245 245 =========================================== 246 246 247 - .. kernel-doc:: drivers/gpu/drm/drm_dp_cec.c 247 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_cec.c 248 248 :doc: dp cec helpers 249 249 250 - .. kernel-doc:: drivers/gpu/drm/drm_dp_cec.c 250 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_cec.c 251 251 :export: 252 252 253 253 Display Port Dual Mode Adaptor Helper Functions Reference 254 254 ========================================================= 255 255 256 - .. kernel-doc:: drivers/gpu/drm/drm_dp_dual_mode_helper.c 256 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_dual_mode_helper.c 257 257 :doc: dp dual mode helpers 258 258 259 - .. kernel-doc:: include/drm/drm_dp_dual_mode_helper.h 259 + .. kernel-doc:: include/drm/dp/drm_dp_dual_mode_helper.h 260 260 :internal: 261 261 262 - .. kernel-doc:: drivers/gpu/drm/drm_dp_dual_mode_helper.c 262 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_dual_mode_helper.c 263 263 :export: 264 264 265 265 Display Port MST Helpers ··· 268 268 Overview 269 269 -------- 270 270 271 - .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c 271 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c 272 272 :doc: dp mst helper 273 273 274 - .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c 274 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c 275 275 :doc: Branch device and port refcounting 276 276 277 277 Functions Reference 278 278 ------------------- 279 279 280 - .. kernel-doc:: include/drm/drm_dp_mst_helper.h 280 + .. kernel-doc:: include/drm/dp/drm_dp_mst_helper.h 281 281 :internal: 282 282 283 - .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c 283 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c 284 284 :export: 285 285 286 286 Topology Lifetime Internals ··· 289 289 These functions aren't exported to drivers, but are documented here to help make 290 290 the MST topology helpers easier to understand 291 291 292 - .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c 292 + .. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c 293 293 :functions: drm_dp_mst_topology_try_get_mstb drm_dp_mst_topology_get_mstb 294 294 drm_dp_mst_topology_put_mstb 295 295 drm_dp_mst_topology_try_get_port drm_dp_mst_topology_get_port
+3 -3
Documentation/gpu/drm-kms.rst
··· 423 423 Writeback Connectors 424 424 -------------------- 425 425 426 - .. kernel-doc:: include/drm/drm_writeback.h 427 - :internal: 428 - 429 426 .. kernel-doc:: drivers/gpu/drm/drm_writeback.c 430 427 :doc: overview 428 + 429 + .. kernel-doc:: include/drm/drm_writeback.h 430 + :internal: 431 431 432 432 .. kernel-doc:: drivers/gpu/drm/drm_writeback.c 433 433 :export:
+1 -1
Documentation/gpu/drm-mm.rst
··· 8 8 efficiently is thus crucial for the graphics stack and plays a central 9 9 role in the DRM infrastructure. 10 10 11 - The DRM core includes two memory managers, namely Translation Table Maps 11 + The DRM core includes two memory managers, namely Translation Table Manager 12 12 (TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory 13 13 manager to be developed and tried to be a one-size-fits-them all 14 14 solution. It provides a single userspace API to accommodate the need of
+15
Documentation/gpu/todo.rst
··· 467 467 468 468 Level: Intermediate 469 469 470 + Request memory regions in all drivers 471 + ------------------------------------- 472 + 473 + Go through all drivers and add code to request the memory regions that the 474 + driver uses. This requires adding calls to request_mem_region(), 475 + pci_request_region() or similar functions. Use helpers for managed cleanup 476 + where possible. 477 + 478 + Drivers are pretty bad at doing this and there used to be conflicts among 479 + DRM and fbdev drivers. Still, it's the correct thing to do. 480 + 481 + Contact: Thomas Zimmermann <tzimmermann@suse.de> 482 + 483 + Level: Starter 484 + 470 485 471 486 Core refactorings 472 487 =================
-2
Documentation/gpu/vkms.rst
··· 124 124 125 125 There's lots of plane features we could add support for: 126 126 127 - - Multiple overlay planes. [Good to get started] 128 - 129 127 - Clearing primary plane: clear primary plane before plane composition (at the 130 128 start) for correctness of pixel blend ops. It also guarantees alpha channel 131 129 is cleared in the target buffer for stable crc. [Good to get started]
+6 -2
drivers/char/agp/ati-agp.c
··· 55 55 56 56 static int ati_create_page_map(struct ati_page_map *page_map) 57 57 { 58 - int i, err = 0; 58 + int i, err; 59 59 60 60 page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL); 61 61 if (page_map->real == NULL) ··· 63 63 64 64 set_memory_uc((unsigned long)page_map->real, 1); 65 65 err = map_page_into_agp(virt_to_page(page_map->real)); 66 + if (err) { 67 + free_page((unsigned long)page_map->real); 68 + return err; 69 + } 66 70 page_map->remapped = page_map->real; 67 71 68 72 for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) { ··· 307 303 for (i = 0, j = pg_start; i < mem->page_count; i++, j++) { 308 304 addr = (j * PAGE_SIZE) + agp_bridge->gart_bus_addr; 309 305 cur_gatt = GET_GATT(addr); 310 - writel(agp_bridge->driver->mask_memory(agp_bridge, 306 + writel(agp_bridge->driver->mask_memory(agp_bridge, 311 307 page_to_phys(mem->pages[i]), 312 308 mem->type), 313 309 cur_gatt+GET_GATT_OFF(addr));
+2
drivers/char/agp/backend.c
··· 62 62 63 63 /** 64 64 * agp_backend_acquire - attempt to acquire an agp backend. 65 + * @pdev: the PCI device 65 66 * 66 67 */ 67 68 struct agp_bridge_data *agp_backend_acquire(struct pci_dev *pdev) ··· 84 83 85 84 /** 86 85 * agp_backend_release - release the lock on the agp backend. 86 + * @bridge: the AGP backend to release 87 87 * 88 88 * The caller must insure that the graphics aperture translation table 89 89 * is read for use by another entity.
+3 -1
drivers/char/agp/frontend.c
··· 39 39 #include <linux/fs.h> 40 40 #include <linux/sched.h> 41 41 #include <linux/uaccess.h> 42 + 42 43 #include "agp.h" 44 + #include "compat_ioctl.h" 43 45 44 46 struct agp_front_data agp_fe; 45 47 ··· 1019 1017 case AGPIOC_UNBIND: 1020 1018 ret_val = agpioc_unbind_wrap(curr_priv, (void __user *) arg); 1021 1019 break; 1022 - 1020 + 1023 1021 case AGPIOC_CHIPSET_FLUSH: 1024 1022 break; 1025 1023 }
+2 -1
drivers/char/agp/nvidia-agp.c
··· 261 261 static void nvidia_tlbflush(struct agp_memory *mem) 262 262 { 263 263 unsigned long end; 264 - u32 wbc_reg, temp; 264 + u32 wbc_reg; 265 + u32 __maybe_unused temp; 265 266 int i; 266 267 267 268 /* flush chipset */
+1 -4
drivers/char/agp/sworks-agp.c
··· 262 262 263 263 static int serverworks_configure(void) 264 264 { 265 - struct aper_size_info_lvl2 *current_size; 266 265 u32 temp; 267 266 u8 enable_reg; 268 267 u16 cap_reg; 269 - 270 - current_size = A_SIZE_LVL2(agp_bridge->current_size); 271 268 272 269 /* Get the memory mapped registers */ 273 270 pci_read_config_dword(agp_bridge->dev, serverworks_private.mm_addr_ofs, &temp); ··· 347 350 for (i = 0, j = pg_start; i < mem->page_count; i++, j++) { 348 351 addr = (j * PAGE_SIZE) + agp_bridge->gart_bus_addr; 349 352 cur_gatt = SVRWRKS_GET_GATT(addr); 350 - writel(agp_bridge->driver->mask_memory(agp_bridge, 353 + writel(agp_bridge->driver->mask_memory(agp_bridge, 351 354 page_to_phys(mem->pages[i]), mem->type), 352 355 cur_gatt+GET_GATT_OFF(addr)); 353 356 }
-3
drivers/char/agp/via-agp.c
··· 128 128 static int via_configure_agp3(void) 129 129 { 130 130 u32 temp; 131 - struct aper_size_info_16 *current_size; 132 - 133 - current_size = A_SIZE_16(agp_bridge->current_size); 134 131 135 132 /* address to map to */ 136 133 agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
+17 -29
drivers/dma-buf/dma-resv.c
··· 542 542 * dma_resv_get_fences - Get an object's shared and exclusive 543 543 * fences without update side lock held 544 544 * @obj: the reservation object 545 - * @fence_excl: the returned exclusive fence (or NULL) 546 - * @shared_count: the number of shared fences returned 547 - * @shared: the array of shared fence ptrs returned (array is krealloc'd to 548 - * the required size, and must be freed by caller) 545 + * @write: true if we should return all fences 546 + * @num_fences: the number of fences returned 547 + * @fences: the array of fence ptrs returned (array is krealloc'd to the 548 + * required size, and must be freed by caller) 549 549 * 550 - * Retrieve all fences from the reservation object. If the pointer for the 551 - * exclusive fence is not specified the fence is put into the array of the 552 - * shared fences as well. Returns either zero or -ENOMEM. 550 + * Retrieve all fences from the reservation object. 551 + * Returns either zero or -ENOMEM. 553 552 */ 554 - int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **fence_excl, 555 - unsigned int *shared_count, struct dma_fence ***shared) 553 + int dma_resv_get_fences(struct dma_resv *obj, bool write, 554 + unsigned int *num_fences, struct dma_fence ***fences) 556 555 { 557 556 struct dma_resv_iter cursor; 558 557 struct dma_fence *fence; 559 558 560 - *shared_count = 0; 561 - *shared = NULL; 559 + *num_fences = 0; 560 + *fences = NULL; 562 561 563 - if (fence_excl) 564 - *fence_excl = NULL; 565 - 566 - dma_resv_iter_begin(&cursor, obj, true); 562 + dma_resv_iter_begin(&cursor, obj, write); 567 563 dma_resv_for_each_fence_unlocked(&cursor, fence) { 568 564 569 565 if (dma_resv_iter_is_restarted(&cursor)) { 570 566 unsigned int count; 571 567 572 - while (*shared_count) 573 - dma_fence_put((*shared)[--(*shared_count)]); 568 + while (*num_fences) 569 + dma_fence_put((*fences)[--(*num_fences)]); 574 570 575 - if (fence_excl) 576 - dma_fence_put(*fence_excl); 577 - 578 - count = cursor.shared_count; 579 - count += fence_excl ? 0 : 1; 571 + count = cursor.shared_count + 1; 580 572 581 573 /* Eventually re-allocate the array */ 582 - *shared = krealloc_array(*shared, count, 574 + *fences = krealloc_array(*fences, count, 583 575 sizeof(void *), 584 576 GFP_KERNEL); 585 - if (count && !*shared) { 577 + if (count && !*fences) { 586 578 dma_resv_iter_end(&cursor); 587 579 return -ENOMEM; 588 580 } 589 581 } 590 582 591 - dma_fence_get(fence); 592 - if (dma_resv_iter_is_exclusive(&cursor) && fence_excl) 593 - *fence_excl = fence; 594 - else 595 - (*shared)[(*shared_count)++] = fence; 583 + (*fences)[(*num_fences)++] = dma_fence_get(fence); 596 584 } 597 585 dma_resv_iter_end(&cursor); 598 586
+5 -21
drivers/dma-buf/st-dma-resv.c
··· 275 275 276 276 static int test_get_fences(void *arg, bool shared) 277 277 { 278 - struct dma_fence *f, *excl = NULL, **fences = NULL; 278 + struct dma_fence *f, **fences = NULL; 279 279 struct dma_resv resv; 280 280 int r, i; 281 281 ··· 304 304 } 305 305 dma_resv_unlock(&resv); 306 306 307 - r = dma_resv_get_fences(&resv, &excl, &i, &fences); 307 + r = dma_resv_get_fences(&resv, shared, &i, &fences); 308 308 if (r) { 309 309 pr_err("get_fences failed\n"); 310 310 goto err_free; 311 311 } 312 312 313 - if (shared) { 314 - if (excl != NULL) { 315 - pr_err("get_fences returned unexpected excl fence\n"); 316 - goto err_free; 317 - } 318 - if (i != 1 || fences[0] != f) { 319 - pr_err("get_fences returned unexpected shared fence\n"); 320 - goto err_free; 321 - } 322 - } else { 323 - if (excl != f) { 324 - pr_err("get_fences returned unexpected excl fence\n"); 325 - goto err_free; 326 - } 327 - if (i != 0) { 328 - pr_err("get_fences returned unexpected shared fence\n"); 329 - goto err_free; 330 - } 313 + if (i != 1 || fences[0] != f) { 314 + pr_err("get_fences returned unexpected fence\n"); 315 + goto err_free; 331 316 } 332 317 333 318 dma_fence_signal(f); 334 319 err_free: 335 - dma_fence_put(excl); 336 320 while (i--) 337 321 dma_fence_put(fences[i]); 338 322 kfree(fences);
+4
drivers/dma-buf/udmabuf.c
··· 190 190 if (ubuf->pagecount > pglimit) 191 191 goto err; 192 192 } 193 + 194 + if (!ubuf->pagecount) 195 + goto err; 196 + 193 197 ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages), 194 198 GFP_KERNEL); 195 199 if (!ubuf->pages) {
+1 -1
drivers/firmware/sysfb_simplefb.c
··· 99 99 100 100 /* setup IORESOURCE_MEM as framebuffer memory */ 101 101 memset(&res, 0, sizeof(res)); 102 - res.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 102 + res.flags = IORESOURCE_MEM; 103 103 res.name = simplefb_resname; 104 104 res.start = base; 105 105 res.end = res.start + length - 1;
+15
drivers/gpu/drm/Kconfig
··· 68 68 depends on DRM 69 69 depends on DEBUG_KERNEL 70 70 select PRIME_NUMBERS 71 + select DRM_DP_HELPER 71 72 select DRM_LIB_RANDOM 72 73 select DRM_KMS_HELPER 73 74 select DRM_EXPORT_FOR_TESTS if m ··· 80 79 developers working on DRM and associated drivers. 81 80 82 81 If in doubt, say "N". 82 + 83 + config DRM_DP_HELPER 84 + tristate 85 + depends on DRM 86 + help 87 + DRM helpers for DisplayPort. 83 88 84 89 config DRM_KMS_HELPER 85 90 tristate ··· 205 198 GPU memory types. Will be enabled automatically if a device driver 206 199 uses it. 207 200 201 + config DRM_BUDDY 202 + tristate 203 + depends on DRM 204 + help 205 + A page based buddy allocator 206 + 208 207 config DRM_VRAM_HELPER 209 208 tristate 210 209 depends on DRM ··· 249 236 depends on DRM && PCI && MMU 250 237 depends on AGP || !AGP 251 238 select FW_LOADER 239 + select DRM_DP_HELPER 252 240 select DRM_KMS_HELPER 253 241 select DRM_TTM 254 242 select DRM_TTM_HELPER ··· 270 256 tristate "AMD GPU" 271 257 depends on DRM && PCI && MMU 272 258 select FW_LOADER 259 + select DRM_DP_HELPER 273 260 select DRM_KMS_HELPER 274 261 select DRM_SCHED 275 262 select DRM_TTM
+6 -8
drivers/gpu/drm/Makefile
··· 31 31 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o 32 32 drm-$(CONFIG_DRM_PRIVACY_SCREEN) += drm_privacy_screen.o drm_privacy_screen_x86.o 33 33 34 - obj-$(CONFIG_DRM_DP_AUX_BUS) += drm_dp_aux_bus.o 35 - 36 34 obj-$(CONFIG_DRM_NOMODESET) += drm_nomodeset.o 37 35 38 36 drm_cma_helper-y := drm_gem_cma_helper.o ··· 40 42 drm_shmem_helper-y := drm_gem_shmem_helper.o 41 43 obj-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_shmem_helper.o 42 44 45 + obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o 46 + 43 47 drm_vram_helper-y := drm_gem_vram_helper.o 44 48 obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o 45 49 46 50 drm_ttm_helper-y := drm_gem_ttm_helper.o 47 51 obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o 48 52 49 - drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o drm_dp_helper.o \ 53 + drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o \ 50 54 drm_dsc.o drm_encoder_slave.o drm_flip_work.o drm_hdcp.o \ 51 55 drm_probe_helper.o \ 52 - drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \ 53 - drm_kms_helper_common.o drm_dp_dual_mode_helper.o \ 56 + drm_plane_helper.o drm_atomic_helper.o \ 57 + drm_kms_helper_common.o \ 54 58 drm_simple_kms_helper.o drm_modeset_helper.o \ 55 59 drm_scdc_helper.o drm_gem_atomic_helper.o \ 56 60 drm_gem_framebuffer_helper.o \ 57 61 drm_atomic_state_helper.o drm_damage_helper.o \ 58 62 drm_format_helper.o drm_self_refresh_helper.o drm_rect.o 59 - 60 63 drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o 61 64 drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o 62 - drm_kms_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o 63 - drm_kms_helper-$(CONFIG_DRM_DP_CEC) += drm_dp_cec.o 64 65 65 66 obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o 66 67 obj-$(CONFIG_DRM_DEBUG_SELFTEST) += selftests/ ··· 69 72 obj-$(CONFIG_DRM_MIPI_DSI) += drm_mipi_dsi.o 70 73 obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o 71 74 obj-y += arm/ 75 + obj-y += dp/ 72 76 obj-$(CONFIG_DRM_TTM) += ttm/ 73 77 obj-$(CONFIG_DRM_SCHED) += scheduler/ 74 78 obj-$(CONFIG_DRM_TDFX) += tdfx/
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
··· 26 26 27 27 #include <drm/drm_edid.h> 28 28 #include <drm/drm_fb_helper.h> 29 - #include <drm/drm_dp_helper.h> 29 + #include <drm/dp/drm_dp_helper.h> 30 30 #include <drm/drm_probe_helper.h> 31 31 #include <drm/amdgpu_drm.h> 32 32 #include "amdgpu.h" ··· 175 175 176 176 /* Check if bpc is within clock limit. Try to degrade gracefully otherwise */ 177 177 if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) { 178 - if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) && 178 + if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) && 179 179 (mode_clock * 5/4 <= max_tmds_clock)) 180 180 bpc = 10; 181 181 else
+1 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 1274 1274 /* 1275 1275 * Work around dma_resv shortcommings by wrapping up the 1276 1276 * submission in a dma_fence_chain and add it as exclusive 1277 - * fence, but first add the submission as shared fence to make 1278 - * sure that shared fences never signal before the exclusive 1279 - * one. 1277 + * fence. 1280 1278 */ 1281 1279 dma_fence_chain_init(chain, dma_resv_excl_fence(resv), 1282 1280 dma_fence_get(p->fence), 1); 1283 1281 1284 - dma_resv_add_shared_fence(resv, p->fence); 1285 1282 rcu_assign_pointer(resv->fence_excl, &chain->base); 1286 1283 e->chain = NULL; 1287 1284 }
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 200 200 goto unpin; 201 201 } 202 202 203 - r = dma_resv_get_fences(new_abo->tbo.base.resv, NULL, 204 - &work->shared_count, &work->shared); 203 + /* TODO: Unify this with other drivers */ 204 + r = dma_resv_get_fences(new_abo->tbo.base.resv, true, 205 + &work->shared_count, 206 + &work->shared); 205 207 if (unlikely(r != 0)) { 206 208 DRM_ERROR("failed to get fences for buffer\n"); 207 209 goto unpin;
-6
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 226 226 if (!amdgpu_vm_ready(vm)) 227 227 goto out_unlock; 228 228 229 - fence = dma_resv_excl_fence(bo->tbo.base.resv); 230 - if (fence) { 231 - amdgpu_bo_fence(bo, fence, true); 232 - fence = NULL; 233 - } 234 - 235 229 r = amdgpu_vm_clear_freed(adev, vm, &fence); 236 230 if (r || !fence) 237 231 goto out_unlock;
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
··· 167 167 return 0; 168 168 169 169 err_free: 170 + ttm_resource_fini(man, &node->base.base); 170 171 kfree(node); 171 172 172 173 err_out: ··· 199 198 if (!(res->placement & TTM_PL_FLAG_TEMPORARY)) 200 199 atomic64_sub(res->num_pages, &mgr->used); 201 200 201 + ttm_resource_fini(man, res); 202 202 kfree(node); 203 203 } 204 204 ··· 288 286 man->use_tt = true; 289 287 man->func = &amdgpu_gtt_mgr_func; 290 288 291 - ttm_resource_manager_init(man, gtt_size >> PAGE_SHIFT); 289 + ttm_resource_manager_init(man, &adev->mman.bdev, 290 + gtt_size >> PAGE_SHIFT); 292 291 293 292 start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS; 294 293 size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
··· 112 112 unsigned count; 113 113 int r; 114 114 115 - r = dma_resv_get_fences(resv, NULL, &count, &fences); 115 + r = dma_resv_get_fences(resv, true, &count, &fences); 116 116 if (r) 117 117 goto fallback; 118 118
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 33 33 #include <drm/drm_crtc.h> 34 34 #include <drm/drm_edid.h> 35 35 #include <drm/drm_encoder.h> 36 - #include <drm/drm_dp_helper.h> 36 + #include <drm/dp/drm_dp_helper.h> 37 37 #include <drm/drm_fixed.h> 38 38 #include <drm/drm_crtc_helper.h> 39 39 #include <drm/drm_fb_helper.h> ··· 44 44 #include <linux/hrtimer.h> 45 45 #include "amdgpu_irq.h" 46 46 47 - #include <drm/drm_dp_mst_helper.h> 47 + #include <drm/dp/drm_dp_mst_helper.h> 48 48 #include "modules/inc/mod_freesync.h" 49 49 #include "amdgpu_dm_irq_params.h" 50 50
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c
··· 95 95 struct amdgpu_preempt_mgr *mgr = to_preempt_mgr(man); 96 96 97 97 atomic64_sub(res->num_pages, &mgr->used); 98 + ttm_resource_fini(man, res); 98 99 kfree(res); 99 100 } 100 101 ··· 153 152 man->use_tt = true; 154 153 man->func = &amdgpu_preempt_mgr_func; 155 154 156 - ttm_resource_manager_init(man, (1 << 30)); 155 + ttm_resource_manager_init(man, &adev->mman.bdev, (1 << 30)); 157 156 158 157 atomic64_set(&mgr->used, 0); 159 158
+5 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 2087 2087 TTM_PL_VRAM); 2088 2088 struct drm_printer p = drm_seq_file_printer(m); 2089 2089 2090 - man->func->debug(man, &p); 2090 + ttm_resource_manager_debug(man, &p); 2091 2091 return 0; 2092 2092 } 2093 2093 ··· 2105 2105 TTM_PL_TT); 2106 2106 struct drm_printer p = drm_seq_file_printer(m); 2107 2107 2108 - man->func->debug(man, &p); 2108 + ttm_resource_manager_debug(man, &p); 2109 2109 return 0; 2110 2110 } 2111 2111 ··· 2116 2116 AMDGPU_PL_GDS); 2117 2117 struct drm_printer p = drm_seq_file_printer(m); 2118 2118 2119 - man->func->debug(man, &p); 2119 + ttm_resource_manager_debug(man, &p); 2120 2120 return 0; 2121 2121 } 2122 2122 ··· 2127 2127 AMDGPU_PL_GWS); 2128 2128 struct drm_printer p = drm_seq_file_printer(m); 2129 2129 2130 - man->func->debug(man, &p); 2130 + ttm_resource_manager_debug(man, &p); 2131 2131 return 0; 2132 2132 } 2133 2133 ··· 2138 2138 AMDGPU_PL_OA); 2139 2139 struct drm_printer p = drm_seq_file_printer(m); 2140 2140 2141 - man->func->debug(man, &p); 2141 + ttm_resource_manager_debug(man, &p); 2142 2142 return 0; 2143 2143 } 2144 2144
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 472 472 while (i--) 473 473 drm_mm_remove_node(&node->mm_nodes[i]); 474 474 spin_unlock(&mgr->lock); 475 + ttm_resource_fini(man, &node->base); 475 476 kvfree(node); 476 477 477 478 error_sub: ··· 512 511 atomic64_sub(usage, &mgr->usage); 513 512 atomic64_sub(vis_usage, &mgr->vis_usage); 514 513 514 + ttm_resource_fini(man, res); 515 515 kvfree(node); 516 516 } 517 517 ··· 691 689 struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr; 692 690 struct ttm_resource_manager *man = &mgr->manager; 693 691 694 - ttm_resource_manager_init(man, adev->gmc.real_vram_size >> PAGE_SHIFT); 692 + ttm_resource_manager_init(man, &adev->mman.bdev, 693 + adev->gmc.real_vram_size >> PAGE_SHIFT); 695 694 696 695 man->func = &amdgpu_vram_mgr_func; 697 696
+1 -1
drivers/gpu/drm/amd/amdgpu/atombios_dp.c
··· 34 34 #include "atombios_dp.h" 35 35 #include "amdgpu_connectors.h" 36 36 #include "amdgpu_atombios.h" 37 - #include <drm/drm_dp_helper.h> 37 + #include <drm/dp/drm_dp_helper.h> 38 38 39 39 /* move these to drm_dp_helper.c/h */ 40 40 #define DP_LINK_CONFIGURATION_SIZE 9
+2 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 76 76 #include <drm/drm_atomic.h> 77 77 #include <drm/drm_atomic_uapi.h> 78 78 #include <drm/drm_atomic_helper.h> 79 - #include <drm/drm_dp_mst_helper.h> 79 + #include <drm/dp/drm_dp_mst_helper.h> 80 80 #include <drm/drm_fb_helper.h> 81 81 #include <drm/drm_fourcc.h> 82 82 #include <drm/drm_edid.h> ··· 5856 5856 else if (drm_mode_is_420_also(info, mode_in) 5857 5857 && aconnector->force_yuv420_output) 5858 5858 timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420; 5859 - else if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB444) 5859 + else if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCBCR444) 5860 5860 && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) 5861 5861 timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444; 5862 5862 else
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 29 29 #include <drm/drm_atomic.h> 30 30 #include <drm/drm_connector.h> 31 31 #include <drm/drm_crtc.h> 32 - #include <drm/drm_dp_mst_helper.h> 32 + #include <drm/dp/drm_dp_mst_helper.h> 33 33 #include <drm/drm_plane.h> 34 34 35 35 /*
+2 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 25 25 26 26 #include <drm/drm_atomic.h> 27 27 #include <drm/drm_atomic_helper.h> 28 - #include <drm/drm_dp_mst_helper.h> 29 - #include <drm/drm_dp_helper.h> 28 + #include <drm/dp/drm_dp_mst_helper.h> 29 + #include <drm/dp/drm_dp_helper.h> 30 30 #include "dm_services.h" 31 31 #include "amdgpu.h" 32 32 #include "amdgpu_dm.h"
+1 -1
drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
··· 25 25 #include <drm/drm_dsc.h> 26 26 #include "dc_hw_types.h" 27 27 #include "dsc.h" 28 - #include <drm/drm_dp_helper.h> 28 + #include <drm/dp/drm_dp_helper.h> 29 29 #include "dc.h" 30 30 #include "rc_calc.h" 31 31 #include "fixed31_32.h"
+1 -1
drivers/gpu/drm/amd/display/dc/os_types.h
··· 36 36 #include <asm/byteorder.h> 37 37 38 38 #include <drm/drm_print.h> 39 - #include <drm/drm_dp_helper.h> 39 + #include <drm/dp/drm_dp_helper.h> 40 40 41 41 #include "cgs_common.h" 42 42
+1 -1
drivers/gpu/drm/amd/display/include/dpcd_defs.h
··· 26 26 #ifndef __DAL_DPCD_DEFS_H__ 27 27 #define __DAL_DPCD_DEFS_H__ 28 28 29 - #include <drm/drm_dp_helper.h> 29 + #include <drm/dp/drm_dp_helper.h> 30 30 #ifndef DP_SINK_HW_REVISION_START // can remove this once the define gets into linux drm_dp_helper.h 31 31 #define DP_SINK_HW_REVISION_START 0x409 32 32 #endif
+1 -1
drivers/gpu/drm/amd/display/modules/hdcp/hdcp.h
··· 30 30 #include "hdcp_log.h" 31 31 32 32 #include <drm/drm_hdcp.h> 33 - #include <drm/drm_dp_helper.h> 33 + #include <drm/dp/drm_dp_helper.h> 34 34 35 35 enum mod_hdcp_trans_input_result { 36 36 UNKNOWN = 0,
+6 -6
drivers/gpu/drm/arm/display/komeda/d71/d71_component.c
··· 1078 1078 mask |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420; 1079 1079 1080 1080 /* config color format */ 1081 - if (st->color_format == DRM_COLOR_FORMAT_YCRCB420) 1081 + if (st->color_format == DRM_COLOR_FORMAT_YCBCR420) 1082 1082 ctrl |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420; 1083 - else if (st->color_format == DRM_COLOR_FORMAT_YCRCB422) 1083 + else if (st->color_format == DRM_COLOR_FORMAT_YCBCR422) 1084 1084 ctrl |= IPS_CTRL_YUV | IPS_CTRL_CHD422; 1085 - else if (st->color_format == DRM_COLOR_FORMAT_YCRCB444) 1085 + else if (st->color_format == DRM_COLOR_FORMAT_YCBCR444) 1086 1086 ctrl |= IPS_CTRL_YUV; 1087 1087 1088 1088 malidp_write32_mask(reg, BLK_CONTROL, mask, ctrl); ··· 1144 1144 improc = to_improc(c); 1145 1145 improc->supported_color_depths = BIT(8) | BIT(10); 1146 1146 improc->supported_color_formats = DRM_COLOR_FORMAT_RGB444 | 1147 - DRM_COLOR_FORMAT_YCRCB444 | 1148 - DRM_COLOR_FORMAT_YCRCB422; 1147 + DRM_COLOR_FORMAT_YCBCR444 | 1148 + DRM_COLOR_FORMAT_YCBCR422; 1149 1149 value = malidp_read32(reg, BLK_INFO); 1150 1150 if (value & IPS_INFO_CHD420) 1151 - improc->supported_color_formats |= DRM_COLOR_FORMAT_YCRCB420; 1151 + improc->supported_color_formats |= DRM_COLOR_FORMAT_YCBCR420; 1152 1152 1153 1153 improc->supports_csc = true; 1154 1154 improc->supports_gamma = true;
+2 -1
drivers/gpu/drm/arm/display/komeda/komeda_drv.c
··· 9 9 #include <linux/platform_device.h> 10 10 #include <linux/component.h> 11 11 #include <linux/pm_runtime.h> 12 + #include <drm/drm_module.h> 12 13 #include <drm/drm_of.h> 13 14 #include "komeda_dev.h" 14 15 #include "komeda_kms.h" ··· 199 198 }, 200 199 }; 201 200 202 - module_platform_driver(komeda_platform_driver); 201 + drm_module_platform_driver(komeda_platform_driver); 203 202 204 203 MODULE_AUTHOR("James.Qian.Wang <james.qian.wang@arm.com>"); 205 204 MODULE_DESCRIPTION("Komeda KMS driver");
+2 -1
drivers/gpu/drm/arm/hdlcd_drv.c
··· 30 30 #include <drm/drm_gem_cma_helper.h> 31 31 #include <drm/drm_gem_framebuffer_helper.h> 32 32 #include <drm/drm_modeset_helper.h> 33 + #include <drm/drm_module.h> 33 34 #include <drm/drm_of.h> 34 35 #include <drm/drm_probe_helper.h> 35 36 #include <drm/drm_vblank.h> ··· 435 434 }, 436 435 }; 437 436 438 - module_platform_driver(hdlcd_platform_driver); 437 + drm_module_platform_driver(hdlcd_platform_driver); 439 438 440 439 MODULE_AUTHOR("Liviu Dudau"); 441 440 MODULE_DESCRIPTION("ARM HDLCD DRM driver");
+2 -1
drivers/gpu/drm/arm/malidp_drv.c
··· 25 25 #include <drm/drm_gem_cma_helper.h> 26 26 #include <drm/drm_gem_framebuffer_helper.h> 27 27 #include <drm/drm_modeset_helper.h> 28 + #include <drm/drm_module.h> 28 29 #include <drm/drm_of.h> 29 30 #include <drm/drm_probe_helper.h> 30 31 #include <drm/drm_vblank.h> ··· 1009 1008 }, 1010 1009 }; 1011 1010 1012 - module_platform_driver(malidp_platform_driver); 1011 + drm_module_platform_driver(malidp_platform_driver); 1013 1012 1014 1013 MODULE_AUTHOR("Liviu Dudau <Liviu.Dudau@arm.com>"); 1015 1014 MODULE_DESCRIPTION("ARM Mali DP DRM driver");
+2 -16
drivers/gpu/drm/ast/ast_drv.c
··· 34 34 #include <drm/drm_crtc_helper.h> 35 35 #include <drm/drm_drv.h> 36 36 #include <drm/drm_gem_vram_helper.h> 37 + #include <drm/drm_module.h> 37 38 #include <drm/drm_probe_helper.h> 38 39 39 40 #include "ast_drv.h" ··· 231 230 .driver.pm = &ast_pm_ops, 232 231 }; 233 232 234 - static int __init ast_init(void) 235 - { 236 - if (drm_firmware_drivers_only() && ast_modeset == -1) 237 - return -EINVAL; 238 - 239 - if (ast_modeset == 0) 240 - return -EINVAL; 241 - return pci_register_driver(&ast_pci_driver); 242 - } 243 - static void __exit ast_exit(void) 244 - { 245 - pci_unregister_driver(&ast_pci_driver); 246 - } 247 - 248 - module_init(ast_init); 249 - module_exit(ast_exit); 233 + drm_module_pci_driver_if_modeset(ast_pci_driver, ast_modeset); 250 234 251 235 MODULE_AUTHOR(DRIVER_AUTHOR); 252 236 MODULE_DESCRIPTION(DRIVER_DESC);
+2
drivers/gpu/drm/ast/ast_main.c
··· 209 209 if (ast->chip == AST2500 && 210 210 scu_rev == 0x100) /* ast2510 */ 211 211 ast->support_wide_screen = true; 212 + if (ast->chip == AST2600) /* ast2600 */ 213 + ast->support_wide_screen = true; 212 214 } 213 215 break; 214 216 }
+4 -1
drivers/gpu/drm/ast/ast_mode.c
··· 471 471 static void ast_set_crtthd_reg(struct ast_private *ast) 472 472 { 473 473 /* Set Threshold */ 474 - if (ast->chip == AST2300 || ast->chip == AST2400 || 474 + if (ast->chip == AST2600) { 475 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0xe0); 476 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0xa0); 477 + } else if (ast->chip == AST2300 || ast->chip == AST2400 || 475 478 ast->chip == AST2500) { 476 479 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0x78); 477 480 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0x60);
+5
drivers/gpu/drm/bridge/Kconfig
··· 30 30 config DRM_CHIPONE_ICN6211 31 31 tristate "Chipone ICN6211 MIPI-DSI/RGB Converter bridge" 32 32 depends on OF 33 + depends on DRM_KMS_HELPER 33 34 select DRM_MIPI_DSI 34 35 select DRM_PANEL_BRIDGE 35 36 help ··· 184 183 tristate "Parade PS8640 MIPI DSI to eDP Converter" 185 184 depends on OF 186 185 select DRM_DP_AUX_BUS 186 + select DRM_DP_HELPER 187 187 select DRM_KMS_HELPER 188 188 select DRM_MIPI_DSI 189 189 select DRM_PANEL ··· 255 253 config DRM_TOSHIBA_TC358767 256 254 tristate "Toshiba TC358767 eDP bridge" 257 255 depends on OF 256 + select DRM_DP_HELPER 258 257 select DRM_KMS_HELPER 259 258 select REGMAP_I2C 260 259 select DRM_PANEL ··· 275 272 config DRM_TOSHIBA_TC358775 276 273 tristate "Toshiba TC358775 DSI/LVDS bridge" 277 274 depends on OF 275 + select DRM_DP_HELPER 278 276 select DRM_KMS_HELPER 279 277 select REGMAP_I2C 280 278 select DRM_PANEL ··· 303 299 config DRM_TI_SN65DSI86 304 300 tristate "TI SN65DSI86 DSI to eDP bridge" 305 301 depends on OF 302 + select DRM_DP_HELPER 306 303 select DRM_KMS_HELPER 307 304 select REGMAP_I2C 308 305 select DRM_PANEL
+1
drivers/gpu/drm/bridge/adv7511/adv7511.h
··· 169 169 #define ADV7511_PACKET_ENABLE_SPARE2 BIT(1) 170 170 #define ADV7511_PACKET_ENABLE_SPARE1 BIT(0) 171 171 172 + #define ADV7535_REG_POWER2_HPD_OVERRIDE BIT(6) 172 173 #define ADV7511_REG_POWER2_HPD_SRC_MASK 0xc0 173 174 #define ADV7511_REG_POWER2_HPD_SRC_BOTH 0x00 174 175 #define ADV7511_REG_POWER2_HPD_SRC_HPD 0x40
+23 -8
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
··· 223 223 config.csc_coefficents = adv7511_csc_ycbcr_to_rgb; 224 224 225 225 if ((connector->display_info.color_formats & 226 - DRM_COLOR_FORMAT_YCRCB422) && 226 + DRM_COLOR_FORMAT_YCBCR422) && 227 227 config.hdmi_mode) { 228 228 config.csc_enable = false; 229 229 config.avi_infoframe.colorspace = ··· 351 351 * from standby or are enabled. When the HPD goes low the adv7511 is 352 352 * reset and the outputs are disabled which might cause the monitor to 353 353 * go to standby again. To avoid this we ignore the HPD pin for the 354 - * first few seconds after enabling the output. 354 + * first few seconds after enabling the output. The adv7535, on the 355 + * other hand, requires the HPD Override bit for proper HPD operation. 355 356 */ 356 - regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 357 - ADV7511_REG_POWER2_HPD_SRC_MASK, 358 - ADV7511_REG_POWER2_HPD_SRC_NONE); 357 + if (adv7511->type == ADV7535) 358 + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 359 + ADV7535_REG_POWER2_HPD_OVERRIDE, 360 + ADV7535_REG_POWER2_HPD_OVERRIDE); 361 + else 362 + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 363 + ADV7511_REG_POWER2_HPD_SRC_MASK, 364 + ADV7511_REG_POWER2_HPD_SRC_NONE); 359 365 } 360 366 361 367 static void adv7511_power_on(struct adv7511 *adv7511) ··· 381 375 static void __adv7511_power_off(struct adv7511 *adv7511) 382 376 { 383 377 /* TODO: setup additional power down modes */ 378 + if (adv7511->type == ADV7535) 379 + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 380 + ADV7535_REG_POWER2_HPD_OVERRIDE, 0); 381 + 384 382 regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER, 385 383 ADV7511_POWER_POWER_DOWN, 386 384 ADV7511_POWER_POWER_DOWN); ··· 682 672 status = connector_status_disconnected; 683 673 } else { 684 674 /* Renable HPD sensing */ 685 - regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 686 - ADV7511_REG_POWER2_HPD_SRC_MASK, 687 - ADV7511_REG_POWER2_HPD_SRC_BOTH); 675 + if (adv7511->type == ADV7535) 676 + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 677 + ADV7535_REG_POWER2_HPD_OVERRIDE, 678 + ADV7535_REG_POWER2_HPD_OVERRIDE); 679 + else 680 + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2, 681 + ADV7511_REG_POWER2_HPD_SRC_MASK, 682 + ADV7511_REG_POWER2_HPD_SRC_BOTH); 688 683 } 689 684 690 685 adv7511->status = status;
+1 -1
drivers/gpu/drm/bridge/adv7511/adv7533.c
··· 29 29 struct mipi_dsi_device *dsi = adv->dsi; 30 30 struct drm_display_mode *mode = &adv->curr_mode; 31 31 unsigned int hsw, hfp, hbp, vsw, vfp, vbp; 32 - u8 clock_div_by_lanes[] = { 6, 4, 3 }; /* 2, 3, 4 lanes */ 32 + static const u8 clock_div_by_lanes[] = { 6, 4, 3 }; /* 2, 3, 4 lanes */ 33 33 34 34 hsw = mode->hsync_end - mode->hsync_start; 35 35 hfp = mode->hsync_start - mode->hdisplay;
+2
drivers/gpu/drm/bridge/analogix/Kconfig
··· 3 3 tristate "Analogix ANX6345 bridge" 4 4 depends on OF 5 5 select DRM_ANALOGIX_DP 6 + select DRM_DP_HELPER 6 7 select DRM_KMS_HELPER 7 8 select REGMAP_I2C 8 9 help ··· 15 14 config DRM_ANALOGIX_ANX78XX 16 15 tristate "Analogix ANX78XX bridge" 17 16 select DRM_ANALOGIX_DP 17 + select DRM_DP_HELPER 18 18 select DRM_KMS_HELPER 19 19 select REGMAP_I2C 20 20 help
+1 -1
drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
··· 22 22 #include <drm/drm_bridge.h> 23 23 #include <drm/drm_crtc.h> 24 24 #include <drm/drm_crtc_helper.h> 25 - #include <drm/drm_dp_helper.h> 25 + #include <drm/dp/drm_dp_helper.h> 26 26 #include <drm/drm_edid.h> 27 27 #include <drm/drm_of.h> 28 28 #include <drm/drm_panel.h>
+1 -1
drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c
··· 21 21 #include <drm/drm_atomic_helper.h> 22 22 #include <drm/drm_bridge.h> 23 23 #include <drm/drm_crtc.h> 24 - #include <drm/drm_dp_helper.h> 24 + #include <drm/dp/drm_dp_helper.h> 25 25 #include <drm/drm_edid.h> 26 26 #include <drm/drm_print.h> 27 27 #include <drm/drm_probe_helper.h>
+1 -1
drivers/gpu/drm/bridge/analogix/analogix-i2c-dptx.c
··· 8 8 #include <linux/regmap.h> 9 9 10 10 #include <drm/drm.h> 11 - #include <drm/drm_dp_helper.h> 11 + #include <drm/dp/drm_dp_helper.h> 12 12 #include <drm/drm_print.h> 13 13 14 14 #include "analogix-i2c-dptx.h"
+2 -2
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 1537 1537 video->color_depth = COLOR_8; 1538 1538 break; 1539 1539 } 1540 - if (display_info->color_formats & DRM_COLOR_FORMAT_YCRCB444) 1540 + if (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR444) 1541 1541 video->color_space = COLOR_YCBCR444; 1542 - else if (display_info->color_formats & DRM_COLOR_FORMAT_YCRCB422) 1542 + else if (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422) 1543 1543 video->color_space = COLOR_YCBCR422; 1544 1544 else 1545 1545 video->color_space = COLOR_RGB;
+1 -1
drivers/gpu/drm/bridge/analogix/analogix_dp_core.h
··· 10 10 #define _ANALOGIX_DP_CORE_H 11 11 12 12 #include <drm/drm_crtc.h> 13 - #include <drm/drm_dp_helper.h> 13 + #include <drm/dp/drm_dp_helper.h> 14 14 15 15 #define DP_TIMEOUT_LOOP_COUNT 100 16 16 #define MAX_CR_LOOP 5
+421 -19
drivers/gpu/drm/bridge/analogix/anx7625.c
··· 24 24 #include <drm/drm_atomic_helper.h> 25 25 #include <drm/drm_bridge.h> 26 26 #include <drm/drm_crtc_helper.h> 27 - #include <drm/drm_dp_helper.h> 27 + #include <drm/dp/drm_dp_helper.h> 28 28 #include <drm/drm_edid.h> 29 + #include <drm/drm_hdcp.h> 29 30 #include <drm/drm_mipi_dsi.h> 30 31 #include <drm/drm_of.h> 31 32 #include <drm/drm_panel.h> ··· 208 207 AP_AUX_CTRL_STATUS); 209 208 if (val < 0 || (val & 0x0F)) { 210 209 DRM_DEV_ERROR(dev, "aux status %02x\n", val); 210 + return -EIO; 211 + } 212 + 213 + return 0; 214 + } 215 + 216 + static int anx7625_aux_dpcd_read(struct anx7625_data *ctx, 217 + u32 address, u8 len, u8 *buf) 218 + { 219 + struct device *dev = &ctx->client->dev; 220 + int ret; 221 + u8 addrh, addrm, addrl; 222 + u8 cmd; 223 + 224 + if (len > MAX_DPCD_BUFFER_SIZE) { 225 + dev_err(dev, "exceed aux buffer len.\n"); 226 + return -EINVAL; 227 + } 228 + 229 + addrl = address & 0xFF; 230 + addrm = (address >> 8) & 0xFF; 231 + addrh = (address >> 16) & 0xFF; 232 + 233 + cmd = DPCD_CMD(len, DPCD_READ); 234 + cmd = ((len - 1) << 4) | 0x09; 235 + 236 + /* Set command and length */ 237 + ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 238 + AP_AUX_COMMAND, cmd); 239 + 240 + /* Set aux access address */ 241 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 242 + AP_AUX_ADDR_7_0, addrl); 243 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 244 + AP_AUX_ADDR_15_8, addrm); 245 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 246 + AP_AUX_ADDR_19_16, addrh); 247 + 248 + /* Enable aux access */ 249 + ret |= anx7625_write_or(ctx, ctx->i2c.rx_p0_client, 250 + AP_AUX_CTRL_STATUS, AP_AUX_CTRL_OP_EN); 251 + 252 + if (ret < 0) { 253 + dev_err(dev, "cannot access aux related register.\n"); 254 + return -EIO; 255 + } 256 + 257 + usleep_range(2000, 2100); 258 + 259 + ret = wait_aux_op_finish(ctx); 260 + if (ret) { 261 + dev_err(dev, "aux IO error: wait aux op finish.\n"); 262 + return ret; 263 + } 264 + 265 + ret = anx7625_reg_block_read(ctx, ctx->i2c.rx_p0_client, 266 + AP_AUX_BUFF_START, len, buf); 267 + if (ret < 0) { 268 + dev_err(dev, "read dpcd register failed\n"); 211 269 return -EIO; 212 270 } 213 271 ··· 729 669 return ret; 730 670 } 731 671 672 + static int anx7625_read_flash_status(struct anx7625_data *ctx) 673 + { 674 + return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, R_RAM_CTRL); 675 + } 676 + 677 + static int anx7625_hdcp_key_probe(struct anx7625_data *ctx) 678 + { 679 + int ret, val; 680 + struct device *dev = &ctx->client->dev; 681 + u8 ident[FLASH_BUF_LEN]; 682 + 683 + ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 684 + FLASH_ADDR_HIGH, 0x91); 685 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 686 + FLASH_ADDR_LOW, 0xA0); 687 + if (ret < 0) { 688 + dev_err(dev, "IO error : set key flash address.\n"); 689 + return ret; 690 + } 691 + 692 + ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 693 + FLASH_LEN_HIGH, (FLASH_BUF_LEN - 1) >> 8); 694 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 695 + FLASH_LEN_LOW, (FLASH_BUF_LEN - 1) & 0xFF); 696 + if (ret < 0) { 697 + dev_err(dev, "IO error : set key flash len.\n"); 698 + return ret; 699 + } 700 + 701 + ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 702 + R_FLASH_RW_CTRL, FLASH_READ); 703 + ret |= readx_poll_timeout(anx7625_read_flash_status, 704 + ctx, val, 705 + ((val & FLASH_DONE) || (val < 0)), 706 + 2000, 707 + 2000 * 150); 708 + if (ret) { 709 + dev_err(dev, "flash read access fail!\n"); 710 + return -EIO; 711 + } 712 + 713 + ret = anx7625_reg_block_read(ctx, ctx->i2c.rx_p0_client, 714 + FLASH_BUF_BASE_ADDR, 715 + FLASH_BUF_LEN, ident); 716 + if (ret < 0) { 717 + dev_err(dev, "read flash data fail!\n"); 718 + return -EIO; 719 + } 720 + 721 + if (ident[29] == 0xFF && ident[30] == 0xFF && ident[31] == 0xFF) 722 + return -EINVAL; 723 + 724 + return 0; 725 + } 726 + 727 + static int anx7625_hdcp_key_load(struct anx7625_data *ctx) 728 + { 729 + int ret; 730 + struct device *dev = &ctx->client->dev; 731 + 732 + /* Select HDCP 1.4 KEY */ 733 + ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 734 + R_BOOT_RETRY, 0x12); 735 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 736 + FLASH_ADDR_HIGH, HDCP14KEY_START_ADDR >> 8); 737 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 738 + FLASH_ADDR_LOW, HDCP14KEY_START_ADDR & 0xFF); 739 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 740 + R_RAM_LEN_H, HDCP14KEY_SIZE >> 12); 741 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 742 + R_RAM_LEN_L, HDCP14KEY_SIZE >> 4); 743 + 744 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 745 + R_RAM_ADDR_H, 0); 746 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 747 + R_RAM_ADDR_L, 0); 748 + /* Enable HDCP 1.4 KEY load */ 749 + ret |= anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 750 + R_RAM_CTRL, DECRYPT_EN | LOAD_START); 751 + dev_dbg(dev, "load HDCP 1.4 key done\n"); 752 + return ret; 753 + } 754 + 755 + static int anx7625_hdcp_disable(struct anx7625_data *ctx) 756 + { 757 + int ret; 758 + struct device *dev = &ctx->client->dev; 759 + 760 + dev_dbg(dev, "disable HDCP 1.4\n"); 761 + 762 + /* Disable HDCP */ 763 + ret = anx7625_write_and(ctx, ctx->i2c.rx_p1_client, 0xee, 0x9f); 764 + /* Try auth flag */ 765 + ret |= anx7625_write_or(ctx, ctx->i2c.rx_p1_client, 0xec, 0x10); 766 + /* Interrupt for DRM */ 767 + ret |= anx7625_write_or(ctx, ctx->i2c.rx_p1_client, 0xff, 0x01); 768 + if (ret < 0) 769 + dev_err(dev, "fail to disable HDCP\n"); 770 + 771 + return anx7625_write_and(ctx, ctx->i2c.tx_p0_client, 772 + TX_HDCP_CTRL0, ~HARD_AUTH_EN & 0xFF); 773 + } 774 + 775 + static int anx7625_hdcp_enable(struct anx7625_data *ctx) 776 + { 777 + u8 bcap; 778 + int ret; 779 + struct device *dev = &ctx->client->dev; 780 + 781 + ret = anx7625_hdcp_key_probe(ctx); 782 + if (ret) { 783 + dev_dbg(dev, "no key found, not to do hdcp\n"); 784 + return ret; 785 + } 786 + 787 + /* Read downstream capability */ 788 + anx7625_aux_dpcd_read(ctx, 0x68028, 1, &bcap); 789 + if (!(bcap & 0x01)) { 790 + pr_warn("downstream not support HDCP 1.4, cap(%x).\n", bcap); 791 + return 0; 792 + } 793 + 794 + dev_dbg(dev, "enable HDCP 1.4\n"); 795 + 796 + /* First clear HDCP state */ 797 + ret = anx7625_reg_write(ctx, ctx->i2c.tx_p0_client, 798 + TX_HDCP_CTRL0, 799 + KSVLIST_VLD | BKSV_SRM_PASS | RE_AUTHEN); 800 + usleep_range(1000, 1100); 801 + /* Second clear HDCP state */ 802 + ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p0_client, 803 + TX_HDCP_CTRL0, 804 + KSVLIST_VLD | BKSV_SRM_PASS | RE_AUTHEN); 805 + 806 + /* Set time for waiting KSVR */ 807 + ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p0_client, 808 + SP_TX_WAIT_KSVR_TIME, 0xc8); 809 + /* Set time for waiting R0 */ 810 + ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p0_client, 811 + SP_TX_WAIT_R0_TIME, 0xb0); 812 + ret |= anx7625_hdcp_key_load(ctx); 813 + if (ret) { 814 + pr_warn("prepare HDCP key failed.\n"); 815 + return ret; 816 + } 817 + 818 + ret = anx7625_write_or(ctx, ctx->i2c.rx_p1_client, 0xee, 0x20); 819 + 820 + /* Try auth flag */ 821 + ret |= anx7625_write_or(ctx, ctx->i2c.rx_p1_client, 0xec, 0x10); 822 + /* Interrupt for DRM */ 823 + ret |= anx7625_write_or(ctx, ctx->i2c.rx_p1_client, 0xff, 0x01); 824 + if (ret < 0) 825 + dev_err(dev, "fail to enable HDCP\n"); 826 + 827 + return anx7625_write_or(ctx, ctx->i2c.tx_p0_client, 828 + TX_HDCP_CTRL0, HARD_AUTH_EN); 829 + } 830 + 732 831 static void anx7625_dp_start(struct anx7625_data *ctx) 733 832 { 734 833 int ret; ··· 898 679 return; 899 680 } 900 681 682 + /* Disable HDCP */ 683 + anx7625_write_and(ctx, ctx->i2c.rx_p1_client, 0xee, 0x9f); 684 + 901 685 if (ctx->pdata.is_dpi) 902 686 ret = anx7625_dpi_config(ctx); 903 687 else ··· 908 686 909 687 if (ret < 0) 910 688 DRM_DEV_ERROR(dev, "MIPI phy setup error.\n"); 689 + 690 + ctx->hdcp_cp = DRM_MODE_CONTENT_PROTECTION_UNDESIRED; 691 + 692 + ctx->dp_en = 1; 911 693 } 912 694 913 695 static void anx7625_dp_stop(struct anx7625_data *ctx) ··· 931 705 ret |= anx7625_video_mute_control(ctx, 1); 932 706 if (ret < 0) 933 707 DRM_DEV_ERROR(dev, "IO error : mute video fail\n"); 708 + 709 + ctx->hdcp_cp = DRM_MODE_CONTENT_PROTECTION_UNDESIRED; 710 + 711 + ctx->dp_en = 0; 934 712 } 935 713 936 714 static int sp_tx_rst_aux(struct anx7625_data *ctx) ··· 1328 1098 /* Gpio for chip power enable */ 1329 1099 platform->pdata.gpio_p_on = 1330 1100 devm_gpiod_get_optional(dev, "enable", GPIOD_OUT_LOW); 1101 + if (IS_ERR_OR_NULL(platform->pdata.gpio_p_on)) { 1102 + DRM_DEV_DEBUG_DRIVER(dev, "no enable gpio found\n"); 1103 + platform->pdata.gpio_p_on = NULL; 1104 + } 1105 + 1331 1106 /* Gpio for chip reset */ 1332 1107 platform->pdata.gpio_reset = 1333 1108 devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW); 1109 + if (IS_ERR_OR_NULL(platform->pdata.gpio_reset)) { 1110 + DRM_DEV_DEBUG_DRIVER(dev, "no reset gpio found\n"); 1111 + platform->pdata.gpio_reset = NULL; 1112 + } 1334 1113 1335 1114 if (platform->pdata.gpio_p_on && platform->pdata.gpio_reset) { 1336 1115 platform->pdata.low_power_mode = 1; ··· 1840 1601 return 0; 1841 1602 } 1842 1603 1604 + static int anx7625_audio_get_eld(struct device *dev, void *data, 1605 + u8 *buf, size_t len) 1606 + { 1607 + struct anx7625_data *ctx = dev_get_drvdata(dev); 1608 + 1609 + if (!ctx->connector) { 1610 + dev_err(dev, "connector not initial\n"); 1611 + return -EINVAL; 1612 + } 1613 + 1614 + dev_dbg(dev, "audio copy eld\n"); 1615 + memcpy(buf, ctx->connector->eld, 1616 + min(sizeof(ctx->connector->eld), len)); 1617 + 1618 + return 0; 1619 + } 1620 + 1843 1621 static const struct hdmi_codec_ops anx7625_codec_ops = { 1844 1622 .hw_params = anx7625_audio_hw_params, 1845 1623 .audio_shutdown = anx7625_audio_shutdown, 1624 + .get_eld = anx7625_audio_get_eld, 1846 1625 .get_dai_id = anx7625_hdmi_i2s_get_dai_id, 1847 1626 .hook_plugged_cb = anx7625_audio_hook_plugged_cb, 1848 1627 }; ··· 1917 1660 host = of_find_mipi_dsi_host_by_node(ctx->pdata.mipi_host_node); 1918 1661 if (!host) { 1919 1662 DRM_DEV_ERROR(dev, "fail to find dsi host.\n"); 1920 - return -EINVAL; 1663 + return -EPROBE_DEFER; 1921 1664 } 1922 1665 1923 1666 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); ··· 1941 1684 ctx->dsi = dsi; 1942 1685 1943 1686 DRM_DEV_DEBUG_DRIVER(dev, "attach dsi succeeded.\n"); 1687 + 1688 + return 0; 1689 + } 1690 + 1691 + static void hdcp_check_work_func(struct work_struct *work) 1692 + { 1693 + u8 status; 1694 + struct delayed_work *dwork; 1695 + struct anx7625_data *ctx; 1696 + struct device *dev; 1697 + struct drm_device *drm_dev; 1698 + 1699 + dwork = to_delayed_work(work); 1700 + ctx = container_of(dwork, struct anx7625_data, hdcp_work); 1701 + dev = &ctx->client->dev; 1702 + 1703 + if (!ctx->connector) { 1704 + dev_err(dev, "HDCP connector is null!"); 1705 + return; 1706 + } 1707 + 1708 + drm_dev = ctx->connector->dev; 1709 + drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL); 1710 + mutex_lock(&ctx->hdcp_wq_lock); 1711 + 1712 + status = anx7625_reg_read(ctx, ctx->i2c.tx_p0_client, 0); 1713 + dev_dbg(dev, "sink HDCP status check: %.02x\n", status); 1714 + if (status & BIT(1)) { 1715 + ctx->hdcp_cp = DRM_MODE_CONTENT_PROTECTION_ENABLED; 1716 + drm_hdcp_update_content_protection(ctx->connector, 1717 + ctx->hdcp_cp); 1718 + dev_dbg(dev, "update CP to ENABLE\n"); 1719 + } 1720 + 1721 + mutex_unlock(&ctx->hdcp_wq_lock); 1722 + drm_modeset_unlock(&drm_dev->mode_config.connection_mutex); 1723 + } 1724 + 1725 + static int anx7625_connector_atomic_check(struct anx7625_data *ctx, 1726 + struct drm_connector_state *state) 1727 + { 1728 + struct device *dev = &ctx->client->dev; 1729 + int cp; 1730 + 1731 + dev_dbg(dev, "hdcp state check\n"); 1732 + cp = state->content_protection; 1733 + 1734 + if (cp == ctx->hdcp_cp) 1735 + return 0; 1736 + 1737 + if (cp == DRM_MODE_CONTENT_PROTECTION_DESIRED) { 1738 + if (ctx->dp_en) { 1739 + dev_dbg(dev, "enable HDCP\n"); 1740 + anx7625_hdcp_enable(ctx); 1741 + 1742 + queue_delayed_work(ctx->hdcp_workqueue, 1743 + &ctx->hdcp_work, 1744 + msecs_to_jiffies(2000)); 1745 + } 1746 + } 1747 + 1748 + if (cp == DRM_MODE_CONTENT_PROTECTION_UNDESIRED) { 1749 + if (ctx->hdcp_cp != DRM_MODE_CONTENT_PROTECTION_ENABLED) { 1750 + dev_err(dev, "current CP is not ENABLED\n"); 1751 + return -EINVAL; 1752 + } 1753 + anx7625_hdcp_disable(ctx); 1754 + ctx->hdcp_cp = DRM_MODE_CONTENT_PROTECTION_UNDESIRED; 1755 + drm_hdcp_update_content_protection(ctx->connector, 1756 + ctx->hdcp_cp); 1757 + dev_dbg(dev, "update CP to UNDESIRE\n"); 1758 + } 1759 + 1760 + if (cp == DRM_MODE_CONTENT_PROTECTION_ENABLED) { 1761 + dev_err(dev, "Userspace illegal set to PROTECTION ENABLE\n"); 1762 + return -EINVAL; 1763 + } 1944 1764 1945 1765 return 0; 1946 1766 } ··· 2236 1902 return true; 2237 1903 } 2238 1904 2239 - static void anx7625_bridge_enable(struct drm_bridge *bridge) 1905 + static int anx7625_bridge_atomic_check(struct drm_bridge *bridge, 1906 + struct drm_bridge_state *bridge_state, 1907 + struct drm_crtc_state *crtc_state, 1908 + struct drm_connector_state *conn_state) 2240 1909 { 2241 1910 struct anx7625_data *ctx = bridge_to_anx7625(bridge); 2242 1911 struct device *dev = &ctx->client->dev; 2243 1912 2244 - DRM_DEV_DEBUG_DRIVER(dev, "drm enable\n"); 1913 + dev_dbg(dev, "drm bridge atomic check\n"); 1914 + 1915 + anx7625_bridge_mode_fixup(bridge, &crtc_state->mode, 1916 + &crtc_state->adjusted_mode); 1917 + 1918 + return anx7625_connector_atomic_check(ctx, conn_state); 1919 + } 1920 + 1921 + static void anx7625_bridge_atomic_enable(struct drm_bridge *bridge, 1922 + struct drm_bridge_state *state) 1923 + { 1924 + struct anx7625_data *ctx = bridge_to_anx7625(bridge); 1925 + struct device *dev = &ctx->client->dev; 1926 + struct drm_connector *connector; 1927 + 1928 + dev_dbg(dev, "drm atomic enable\n"); 1929 + 1930 + if (!bridge->encoder) { 1931 + dev_err(dev, "Parent encoder object not found"); 1932 + return; 1933 + } 1934 + 1935 + connector = drm_atomic_get_new_connector_for_encoder(state->base.state, 1936 + bridge->encoder); 1937 + if (!connector) 1938 + return; 1939 + 1940 + ctx->connector = connector; 2245 1941 2246 1942 pm_runtime_get_sync(dev); 2247 1943 2248 1944 anx7625_dp_start(ctx); 2249 1945 } 2250 1946 2251 - static void anx7625_bridge_disable(struct drm_bridge *bridge) 1947 + static void anx7625_bridge_atomic_disable(struct drm_bridge *bridge, 1948 + struct drm_bridge_state *old) 2252 1949 { 2253 1950 struct anx7625_data *ctx = bridge_to_anx7625(bridge); 2254 1951 struct device *dev = &ctx->client->dev; 2255 1952 2256 - DRM_DEV_DEBUG_DRIVER(dev, "drm disable\n"); 1953 + dev_dbg(dev, "drm atomic disable\n"); 2257 1954 1955 + ctx->connector = NULL; 2258 1956 anx7625_dp_stop(ctx); 2259 1957 2260 1958 pm_runtime_put_sync(dev); ··· 2316 1950 2317 1951 static const struct drm_bridge_funcs anx7625_bridge_funcs = { 2318 1952 .attach = anx7625_bridge_attach, 2319 - .disable = anx7625_bridge_disable, 2320 1953 .mode_valid = anx7625_bridge_mode_valid, 2321 1954 .mode_set = anx7625_bridge_mode_set, 2322 - .mode_fixup = anx7625_bridge_mode_fixup, 2323 - .enable = anx7625_bridge_enable, 1955 + .atomic_check = anx7625_bridge_atomic_check, 1956 + .atomic_enable = anx7625_bridge_atomic_enable, 1957 + .atomic_disable = anx7625_bridge_atomic_disable, 1958 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1959 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1960 + .atomic_reset = drm_atomic_helper_bridge_reset, 2324 1961 .detect = anx7625_bridge_detect, 2325 1962 .get_edid = anx7625_bridge_get_edid, 2326 1963 }; ··· 2331 1962 static int anx7625_register_i2c_dummy_clients(struct anx7625_data *ctx, 2332 1963 struct i2c_client *client) 2333 1964 { 1965 + int err = 0; 1966 + 2334 1967 ctx->i2c.tx_p0_client = i2c_new_dummy_device(client->adapter, 2335 1968 TX_P0_ADDR >> 1); 2336 - if (!ctx->i2c.tx_p0_client) 2337 - return -ENOMEM; 1969 + if (IS_ERR(ctx->i2c.tx_p0_client)) 1970 + return PTR_ERR(ctx->i2c.tx_p0_client); 2338 1971 2339 1972 ctx->i2c.tx_p1_client = i2c_new_dummy_device(client->adapter, 2340 1973 TX_P1_ADDR >> 1); 2341 - if (!ctx->i2c.tx_p1_client) 1974 + if (IS_ERR(ctx->i2c.tx_p1_client)) { 1975 + err = PTR_ERR(ctx->i2c.tx_p1_client); 2342 1976 goto free_tx_p0; 1977 + } 2343 1978 2344 1979 ctx->i2c.tx_p2_client = i2c_new_dummy_device(client->adapter, 2345 1980 TX_P2_ADDR >> 1); 2346 - if (!ctx->i2c.tx_p2_client) 1981 + if (IS_ERR(ctx->i2c.tx_p2_client)) { 1982 + err = PTR_ERR(ctx->i2c.tx_p2_client); 2347 1983 goto free_tx_p1; 1984 + } 2348 1985 2349 1986 ctx->i2c.rx_p0_client = i2c_new_dummy_device(client->adapter, 2350 1987 RX_P0_ADDR >> 1); 2351 - if (!ctx->i2c.rx_p0_client) 1988 + if (IS_ERR(ctx->i2c.rx_p0_client)) { 1989 + err = PTR_ERR(ctx->i2c.rx_p0_client); 2352 1990 goto free_tx_p2; 1991 + } 2353 1992 2354 1993 ctx->i2c.rx_p1_client = i2c_new_dummy_device(client->adapter, 2355 1994 RX_P1_ADDR >> 1); 2356 - if (!ctx->i2c.rx_p1_client) 1995 + if (IS_ERR(ctx->i2c.rx_p1_client)) { 1996 + err = PTR_ERR(ctx->i2c.rx_p1_client); 2357 1997 goto free_rx_p0; 1998 + } 2358 1999 2359 2000 ctx->i2c.rx_p2_client = i2c_new_dummy_device(client->adapter, 2360 2001 RX_P2_ADDR >> 1); 2361 - if (!ctx->i2c.rx_p2_client) 2002 + if (IS_ERR(ctx->i2c.rx_p2_client)) { 2003 + err = PTR_ERR(ctx->i2c.rx_p2_client); 2362 2004 goto free_rx_p1; 2005 + } 2363 2006 2364 2007 ctx->i2c.tcpc_client = i2c_new_dummy_device(client->adapter, 2365 2008 TCPC_INTERFACE_ADDR >> 1); 2366 - if (!ctx->i2c.tcpc_client) 2009 + if (IS_ERR(ctx->i2c.tcpc_client)) { 2010 + err = PTR_ERR(ctx->i2c.tcpc_client); 2367 2011 goto free_rx_p2; 2012 + } 2368 2013 2369 2014 return 0; 2370 2015 ··· 2395 2012 free_tx_p0: 2396 2013 i2c_unregister_device(ctx->i2c.tx_p0_client); 2397 2014 2398 - return -ENOMEM; 2015 + return err; 2399 2016 } 2400 2017 2401 2018 static void anx7625_unregister_i2c_dummy_clients(struct anx7625_data *ctx) ··· 2517 2134 anx7625_init_gpio(platform); 2518 2135 2519 2136 mutex_init(&platform->lock); 2137 + mutex_init(&platform->hdcp_wq_lock); 2138 + 2139 + INIT_DELAYED_WORK(&platform->hdcp_work, hdcp_check_work_func); 2140 + platform->hdcp_workqueue = create_workqueue("hdcp workqueue"); 2141 + if (!platform->hdcp_workqueue) { 2142 + dev_err(dev, "fail to create work queue\n"); 2143 + ret = -ENOMEM; 2144 + goto free_platform; 2145 + } 2520 2146 2521 2147 platform->pdata.intp_irq = client->irq; 2522 2148 if (platform->pdata.intp_irq) { ··· 2535 2143 if (!platform->workqueue) { 2536 2144 DRM_DEV_ERROR(dev, "fail to create work queue\n"); 2537 2145 ret = -ENOMEM; 2538 - goto free_platform; 2146 + goto free_hdcp_wq; 2539 2147 } 2540 2148 2541 2149 ret = devm_request_threaded_irq(dev, platform->pdata.intp_irq, ··· 2605 2213 if (platform->workqueue) 2606 2214 destroy_workqueue(platform->workqueue); 2607 2215 2216 + free_hdcp_wq: 2217 + if (platform->hdcp_workqueue) 2218 + destroy_workqueue(platform->hdcp_workqueue); 2219 + 2608 2220 free_platform: 2609 2221 kfree(platform); 2610 2222 ··· 2623 2227 2624 2228 if (platform->pdata.intp_irq) 2625 2229 destroy_workqueue(platform->workqueue); 2230 + 2231 + if (platform->hdcp_workqueue) { 2232 + cancel_delayed_work(&platform->hdcp_work); 2233 + flush_workqueue(platform->workqueue); 2234 + destroy_workqueue(platform->workqueue); 2235 + } 2626 2236 2627 2237 if (!platform->pdata.low_power_mode) 2628 2238 pm_runtime_put_sync_suspend(&client->dev);
+72 -6
drivers/gpu/drm/bridge/analogix/anx7625.h
··· 59 59 60 60 /***************************************************************/ 61 61 /* Register definition of device address 0x70 */ 62 - #define I2C_ADDR_70_DPTX 0x70 62 + #define TX_HDCP_CTRL0 0x01 63 + #define STORE_AN BIT(7) 64 + #define RX_REPEATER BIT(6) 65 + #define RE_AUTHEN BIT(5) 66 + #define SW_AUTH_OK BIT(4) 67 + #define HARD_AUTH_EN BIT(3) 68 + #define ENC_EN BIT(2) 69 + #define BKSV_SRM_PASS BIT(1) 70 + #define KSVLIST_VLD BIT(0) 63 71 64 - #define SP_TX_LINK_BW_SET_REG 0xA0 65 - #define SP_TX_LANE_COUNT_SET_REG 0xA1 72 + #define SP_TX_WAIT_R0_TIME 0x40 73 + #define SP_TX_WAIT_KSVR_TIME 0x42 74 + #define SP_TX_SYS_CTRL1_REG 0x80 75 + #define HDCP2TX_FW_EN BIT(4) 76 + 77 + #define SP_TX_LINK_BW_SET_REG 0xA0 78 + #define SP_TX_LANE_COUNT_SET_REG 0xA1 66 79 67 80 #define M_VID_0 0xC0 68 81 #define M_VID_1 0xC1 ··· 83 70 #define N_VID_0 0xC3 84 71 #define N_VID_1 0xC4 85 72 #define N_VID_2 0xC5 73 + 74 + #define KEY_START_ADDR 0x9000 75 + #define KEY_RESERVED 416 76 + 77 + #define HDCP14KEY_START_ADDR (KEY_START_ADDR + KEY_RESERVED) 78 + #define HDCP14KEY_SIZE 624 86 79 87 80 /***************************************************************/ 88 81 /* Register definition of device address 0x72 */ ··· 174 155 175 156 #define I2C_ADDR_7E_FLASH_CONTROLLER 0x7E 176 157 158 + #define R_BOOT_RETRY 0x00 159 + #define R_RAM_ADDR_H 0x01 160 + #define R_RAM_ADDR_L 0x02 161 + #define R_RAM_LEN_H 0x03 162 + #define R_RAM_LEN_L 0x04 177 163 #define FLASH_LOAD_STA 0x05 178 164 #define FLASH_LOAD_STA_CHK BIT(7) 165 + 166 + #define R_RAM_CTRL 0x05 167 + /* bit positions */ 168 + #define FLASH_DONE BIT(7) 169 + #define BOOT_LOAD_DONE BIT(6) 170 + #define CRC_OK BIT(5) 171 + #define LOAD_DONE BIT(4) 172 + #define O_RW_DONE BIT(3) 173 + #define FUSE_BUSY BIT(2) 174 + #define DECRYPT_EN BIT(1) 175 + #define LOAD_START BIT(0) 176 + 177 + #define FLASH_ADDR_HIGH 0x0F 178 + #define FLASH_ADDR_LOW 0x10 179 + #define FLASH_LEN_HIGH 0x31 180 + #define FLASH_LEN_LOW 0x32 181 + #define R_FLASH_RW_CTRL 0x33 182 + /* bit positions */ 183 + #define READ_DELAY_SELECT BIT(7) 184 + #define GENERAL_INSTRUCTION_EN BIT(6) 185 + #define FLASH_ERASE_EN BIT(5) 186 + #define RDID_READ_EN BIT(4) 187 + #define REMS_READ_EN BIT(3) 188 + #define WRITE_STATUS_EN BIT(2) 189 + #define FLASH_READ BIT(1) 190 + #define FLASH_WRITE BIT(0) 191 + 192 + #define FLASH_BUF_BASE_ADDR 0x60 193 + #define FLASH_BUF_LEN 0x20 179 194 180 195 #define XTAL_FRQ_SEL 0x3F 181 196 /* bit field positions */ ··· 237 184 #define AP_AUX_CTRL_ADDRONLY 0x20 238 185 239 186 #define AP_AUX_BUFF_START 0x15 240 - #define PIXEL_CLOCK_L 0x25 241 - #define PIXEL_CLOCK_H 0x26 187 + #define PIXEL_CLOCK_L 0x25 188 + #define PIXEL_CLOCK_H 0x26 242 189 243 - #define AP_AUX_COMMAND 0x27 /* com+len */ 190 + #define AP_AUX_COMMAND 0x27 /* com+len */ 191 + #define LENGTH_SHIFT 4 192 + #define DPCD_READ 0x09 193 + #define DPCD_WRITE 0x08 194 + #define DPCD_CMD(len, cmd) ((((len) - 1) << LENGTH_SHIFT) | (cmd)) 195 + 244 196 /* Bit 0&1: 3D video structure */ 245 197 /* 0x01: frame packing, 0x02:Line alternative, 0x03:Side-by-side(full) */ 246 198 #define AP_AV_STATUS 0x28 ··· 450 392 struct platform_device *audio_pdev; 451 393 int hpd_status; 452 394 int hpd_high_cnt; 395 + int dp_en; 396 + int hdcp_cp; 453 397 /* Lock for work queue */ 454 398 struct mutex lock; 455 399 struct i2c_client *client; 456 400 struct anx7625_i2c_client i2c; 457 401 struct i2c_client *last_client; 402 + struct timer_list hdcp_timer; 458 403 struct s_edid_data slimport_edid_p; 459 404 struct device *codec_dev; 460 405 hdmi_codec_plugged_cb plugged_cb; 461 406 struct work_struct work; 462 407 struct workqueue_struct *workqueue; 408 + struct delayed_work hdcp_work; 409 + struct workqueue_struct *hdcp_workqueue; 410 + /* Lock for hdcp work queue */ 411 + struct mutex hdcp_wq_lock; 463 412 char edid_block; 464 413 struct display_timing dt; 465 414 u8 display_timing_valid; 466 415 struct drm_bridge bridge; 467 416 u8 bridge_attached; 417 + struct drm_connector *connector; 468 418 struct mipi_dsi_device *dsi; 469 420
+1
drivers/gpu/drm/bridge/cadence/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config DRM_CDNS_MHDP8546 3 3 tristate "Cadence DPI/DP bridge" 4 + select DRM_DP_HELPER 4 5 select DRM_KMS_HELPER 5 6 select DRM_PANEL_BRIDGE 6 7 depends on OF
+10 -10
drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
··· 41 41 #include <drm/drm_bridge.h> 42 42 #include <drm/drm_connector.h> 43 43 #include <drm/drm_crtc_helper.h> 44 - #include <drm/drm_dp_helper.h> 44 + #include <drm/dp/drm_dp_helper.h> 45 45 #include <drm/drm_hdcp.h> 46 46 #include <drm/drm_modeset_helper_vtables.h> 47 47 #include <drm/drm_print.h> ··· 1553 1553 1554 1554 switch (fmt->color_format) { 1555 1555 case DRM_COLOR_FORMAT_RGB444: 1556 - case DRM_COLOR_FORMAT_YCRCB444: 1556 + case DRM_COLOR_FORMAT_YCBCR444: 1557 1557 bpp = fmt->bpc * 3; 1558 1558 break; 1559 - case DRM_COLOR_FORMAT_YCRCB422: 1559 + case DRM_COLOR_FORMAT_YCBCR422: 1560 1560 bpp = fmt->bpc * 2; 1561 1561 break; 1562 - case DRM_COLOR_FORMAT_YCRCB420: 1562 + case DRM_COLOR_FORMAT_YCBCR420: 1563 1563 bpp = fmt->bpc * 3 / 2; 1564 1564 break; 1565 1565 default: ··· 1767 1767 * If YCBCR supported and stream not SD, use ITU709 1768 1768 * Need to handle ITU version with YCBCR420 when supported 1769 1769 */ 1770 - if ((pxlfmt == DRM_COLOR_FORMAT_YCRCB444 || 1771 - pxlfmt == DRM_COLOR_FORMAT_YCRCB422) && mode->crtc_vdisplay >= 720) 1770 + if ((pxlfmt == DRM_COLOR_FORMAT_YCBCR444 || 1771 + pxlfmt == DRM_COLOR_FORMAT_YCBCR422) && mode->crtc_vdisplay >= 720) 1772 1772 misc0 = DP_YCBCR_COEFFICIENTS_ITU709; 1773 1773 1774 1774 bpp = cdns_mhdp_get_bpp(&mhdp->display_fmt); ··· 1778 1778 pxl_repr = CDNS_DP_FRAMER_RGB << CDNS_DP_FRAMER_PXL_FORMAT; 1779 1779 misc0 |= DP_COLOR_FORMAT_RGB; 1780 1780 break; 1781 - case DRM_COLOR_FORMAT_YCRCB444: 1781 + case DRM_COLOR_FORMAT_YCBCR444: 1782 1782 pxl_repr = CDNS_DP_FRAMER_YCBCR444 << CDNS_DP_FRAMER_PXL_FORMAT; 1783 1783 misc0 |= DP_COLOR_FORMAT_YCbCr444 | DP_TEST_DYNAMIC_RANGE_CEA; 1784 1784 break; 1785 - case DRM_COLOR_FORMAT_YCRCB422: 1785 + case DRM_COLOR_FORMAT_YCBCR422: 1786 1786 pxl_repr = CDNS_DP_FRAMER_YCBCR422 << CDNS_DP_FRAMER_PXL_FORMAT; 1787 1787 misc0 |= DP_COLOR_FORMAT_YCbCr422 | DP_TEST_DYNAMIC_RANGE_CEA; 1788 1788 break; 1789 - case DRM_COLOR_FORMAT_YCRCB420: 1789 + case 
DRM_COLOR_FORMAT_YCBCR420: 1790 1790 pxl_repr = CDNS_DP_FRAMER_YCBCR420 << CDNS_DP_FRAMER_PXL_FORMAT; 1791 1791 break; 1792 1792 default: ··· 1882 1882 if (mhdp->display_fmt.y_only) 1883 1883 misc1 |= CDNS_DP_TEST_COLOR_FORMAT_RAW_Y_ONLY; 1884 1884 /* Use VSC SDP for Y420 */ 1885 - if (pxlfmt == DRM_COLOR_FORMAT_YCRCB420) 1885 + if (pxlfmt == DRM_COLOR_FORMAT_YCBCR420) 1886 1886 misc1 = CDNS_DP_TEST_VSC_SDP; 1887 1887 1888 1888 cdns_mhdp_reg_write(mhdp, CDNS_DP_MSA_MISC(stream_id),
+1 -1
drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.h
··· 17 17 18 18 #include <drm/drm_bridge.h> 19 19 #include <drm/drm_connector.h> 20 - #include <drm/drm_dp_helper.h> 20 + #include <drm/dp/drm_dp_helper.h> 21 21 22 22 struct clk; 23 23 struct device;
+26 -13
drivers/gpu/drm/bridge/chipone-icn6211.c
··· 4 4 * Author: Jagan Teki <jagan@amarulasolutions.com> 5 5 */ 6 6 7 + #include <drm/drm_atomic_helper.h> 7 8 #include <drm/drm_of.h> 8 9 #include <drm/drm_print.h> 9 10 #include <drm/drm_mipi_dsi.h> ··· 31 30 struct chipone { 32 31 struct device *dev; 33 32 struct drm_bridge bridge; 33 + struct drm_display_mode mode; 34 34 struct drm_bridge *panel_bridge; 35 35 struct gpio_desc *enable_gpio; 36 36 struct regulator *vdd1; ··· 42 40 static inline struct chipone *bridge_to_chipone(struct drm_bridge *bridge) 43 41 { 44 42 return container_of(bridge, struct chipone, bridge); 45 - } 46 - 47 - static struct drm_display_mode *bridge_to_mode(struct drm_bridge *bridge) 48 - { 49 - return &bridge->encoder->crtc->state->adjusted_mode; 50 43 } 51 44 52 45 static inline int chipone_dsi_write(struct chipone *icn, const void *seq, ··· 58 61 chipone_dsi_write(icn, d, ARRAY_SIZE(d)); \ 59 62 } 60 63 61 - static void chipone_enable(struct drm_bridge *bridge) 64 + static void chipone_atomic_enable(struct drm_bridge *bridge, 65 + struct drm_bridge_state *old_bridge_state) 62 66 { 63 67 struct chipone *icn = bridge_to_chipone(bridge); 64 - struct drm_display_mode *mode = bridge_to_mode(bridge); 68 + struct drm_display_mode *mode = &icn->mode; 65 69 66 70 ICN6211_DSI(icn, 0x7a, 0xc1); 67 71 ··· 112 114 usleep_range(10000, 11000); 113 115 } 114 116 115 - static void chipone_pre_enable(struct drm_bridge *bridge) 117 + static void chipone_atomic_pre_enable(struct drm_bridge *bridge, 118 + struct drm_bridge_state *old_bridge_state) 116 119 { 117 120 struct chipone *icn = bridge_to_chipone(bridge); 118 121 int ret; ··· 144 145 usleep_range(10000, 11000); 145 146 } 146 147 147 - static void chipone_post_disable(struct drm_bridge *bridge) 148 + static void chipone_atomic_post_disable(struct drm_bridge *bridge, 149 + struct drm_bridge_state *old_bridge_state) 148 150 { 149 151 struct chipone *icn = bridge_to_chipone(bridge); 150 152 ··· 161 161 gpiod_set_value(icn->enable_gpio, 0); 162 162 } 
163 163 164 + static void chipone_mode_set(struct drm_bridge *bridge, 165 + const struct drm_display_mode *mode, 166 + const struct drm_display_mode *adjusted_mode) 167 + { 168 + struct chipone *icn = bridge_to_chipone(bridge); 169 + 170 + drm_mode_copy(&icn->mode, adjusted_mode); 171 + } 172 + 164 173 static int chipone_attach(struct drm_bridge *bridge, enum drm_bridge_attach_flags flags) 165 174 { 166 175 struct chipone *icn = bridge_to_chipone(bridge); ··· 178 169 } 179 170 180 171 static const struct drm_bridge_funcs chipone_bridge_funcs = { 181 - .attach = chipone_attach, 182 - .post_disable = chipone_post_disable, 183 - .pre_enable = chipone_pre_enable, 184 - .enable = chipone_enable, 172 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 173 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 174 + .atomic_reset = drm_atomic_helper_bridge_reset, 175 + .atomic_pre_enable = chipone_atomic_pre_enable, 176 + .atomic_enable = chipone_atomic_enable, 177 + .atomic_post_disable = chipone_atomic_post_disable, 178 + .mode_set = chipone_mode_set, 179 + .attach = chipone_attach, 185 180 }; 186 181 187 182 static int chipone_parse_dt(struct chipone *icn)
-3
drivers/gpu/drm/bridge/ite-it66121.c
··· 936 936 return -EPROBE_DEFER; 937 937 } 938 938 939 - if (!ctx->next_bridge) 940 - return -EPROBE_DEFER; 941 - 942 939 i2c_set_clientdata(client, ctx); 943 940 mutex_init(&ctx->lock); 944 941
+2 -2
drivers/gpu/drm/bridge/lontium-lt9611.c
··· 1090 1090 if (!lt9611) 1091 1091 return -ENOMEM; 1092 1092 1093 - lt9611->dev = &client->dev; 1093 + lt9611->dev = dev; 1094 1094 lt9611->client = client; 1095 1095 lt9611->sleep = false; 1096 1096 ··· 1100 1100 return PTR_ERR(lt9611->regmap); 1101 1101 } 1102 1102 1103 - ret = lt9611_parse_dt(&client->dev, lt9611); 1103 + ret = lt9611_parse_dt(dev, lt9611); 1104 1104 if (ret) { 1105 1105 dev_err(dev, "failed to parse device tree\n"); 1106 1106 return ret;
+2 -2
drivers/gpu/drm/bridge/lontium-lt9611uxc.c
··· 860 860 if (!lt9611uxc) 861 861 return -ENOMEM; 862 862 863 - lt9611uxc->dev = &client->dev; 863 + lt9611uxc->dev = dev; 864 864 lt9611uxc->client = client; 865 865 mutex_init(&lt9611uxc->ocm_lock); 866 866 ··· 870 870 return PTR_ERR(lt9611uxc->regmap); 871 871 } 872 872 873 - ret = lt9611uxc_parse_dt(&client->dev, lt9611uxc); 873 + ret = lt9611uxc_parse_dt(dev, lt9611uxc); 874 874 if (ret) { 875 875 dev_err(dev, "failed to parse device tree\n"); 876 876 return ret;
+3 -5
drivers/gpu/drm/bridge/nwl-dsi.c
··· 65 65 struct nwl_dsi { 66 66 struct drm_bridge bridge; 67 67 struct mipi_dsi_host dsi_host; 68 - struct drm_bridge *panel_bridge; 69 68 struct device *dev; 70 69 struct phy *phy; 71 70 union phy_configure_opts phy_cfg; ··· 923 924 if (IS_ERR(panel_bridge)) 924 925 return PTR_ERR(panel_bridge); 925 926 } 926 - dsi->panel_bridge = panel_bridge; 927 927 928 - if (!dsi->panel_bridge) 928 + if (!panel_bridge) 929 929 return -EPROBE_DEFER; 930 930 931 - return drm_bridge_attach(bridge->encoder, dsi->panel_bridge, bridge, 932 - flags); 931 + return drm_bridge_attach(bridge->encoder, panel_bridge, bridge, flags); 933 932 } 934 933 935 934 static void nwl_dsi_bridge_detach(struct drm_bridge *bridge) ··· 1203 1206 1204 1207 ret = nwl_dsi_select_input(dsi); 1205 1208 if (ret < 0) { 1209 + pm_runtime_disable(dev); 1206 1210 mipi_dsi_host_unregister(&dsi->dsi_host); 1207 1211 return ret; 1208 1212 }
+28 -5
drivers/gpu/drm/bridge/parade-ps8640.c
··· 14 14 #include <linux/regulator/consumer.h> 15 15 16 16 #include <drm/drm_bridge.h> 17 - #include <drm/drm_dp_aux_bus.h> 18 - #include <drm/drm_dp_helper.h> 17 + #include <drm/dp/drm_dp_aux_bus.h> 18 + #include <drm/dp/drm_dp_helper.h> 19 19 #include <drm/drm_mipi_dsi.h> 20 20 #include <drm/drm_of.h> 21 21 #include <drm/drm_panel.h> ··· 102 102 struct regulator_bulk_data supplies[2]; 103 103 struct gpio_desc *gpio_reset; 104 104 struct gpio_desc *gpio_powerdown; 105 + struct device_link *link; 105 106 bool pre_enabled; 106 107 }; 107 108 ··· 457 456 return ret; 458 457 } 459 458 459 + ps_bridge->link = device_link_add(bridge->dev->dev, dev, DL_FLAG_STATELESS); 460 + if (!ps_bridge->link) { 461 + dev_err(dev, "failed to create device link"); 462 + ret = -EINVAL; 463 + goto err_devlink; 464 + } 465 + 460 466 /* Attach the panel-bridge to the dsi bridge */ 461 - return drm_bridge_attach(bridge->encoder, ps_bridge->panel_bridge, 462 - &ps_bridge->bridge, flags); 467 + ret = drm_bridge_attach(bridge->encoder, ps_bridge->panel_bridge, 468 + &ps_bridge->bridge, flags); 469 + if (ret) 470 + goto err_bridge_attach; 471 + 472 + return 0; 473 + 474 + err_bridge_attach: 475 + device_link_del(ps_bridge->link); 476 + err_devlink: 477 + drm_dp_aux_unregister(&ps_bridge->aux); 478 + 479 + return ret; 463 480 } 464 481 465 482 static void ps8640_bridge_detach(struct drm_bridge *bridge) 466 483 { 467 - drm_dp_aux_unregister(&bridge_to_ps8640(bridge)->aux); 484 + struct ps8640 *ps_bridge = bridge_to_ps8640(bridge); 485 + 486 + drm_dp_aux_unregister(&ps_bridge->aux); 487 + if (ps_bridge->link) 488 + device_link_del(ps_bridge->link); 468 489 } 469 490 470 491 static struct edid *ps8640_bridge_get_edid(struct drm_bridge *bridge,
+100 -31
drivers/gpu/drm/bridge/sii902x.c
··· 166 166 struct i2c_client *i2c; 167 167 struct regmap *regmap; 168 168 struct drm_bridge bridge; 169 + struct drm_bridge *next_bridge; 169 170 struct drm_connector connector; 170 171 struct gpio_desc *reset_gpio; 171 172 struct i2c_mux_core *i2cmux; 172 173 struct regulator_bulk_data supplies[2]; 174 + bool sink_is_hdmi; 173 175 /* 174 176 * Mutex protects audio and video functions from interfering 175 177 * each other, by keeping their i2c command sequences atomic. ··· 247 245 gpiod_set_value(sii902x->reset_gpio, 0); 248 246 } 249 247 250 - static enum drm_connector_status 251 - sii902x_connector_detect(struct drm_connector *connector, bool force) 248 + static enum drm_connector_status sii902x_detect(struct sii902x *sii902x) 252 249 { 253 - struct sii902x *sii902x = connector_to_sii902x(connector); 254 250 unsigned int status; 255 251 256 252 mutex_lock(&sii902x->mutex); ··· 261 261 connector_status_connected : connector_status_disconnected; 262 262 } 263 263 264 + static enum drm_connector_status 265 + sii902x_connector_detect(struct drm_connector *connector, bool force) 266 + { 267 + struct sii902x *sii902x = connector_to_sii902x(connector); 268 + 269 + return sii902x_detect(sii902x); 270 + } 271 + 264 272 static const struct drm_connector_funcs sii902x_connector_funcs = { 265 273 .detect = sii902x_connector_detect, 266 274 .fill_modes = drm_helper_probe_single_connector_modes, ··· 278 270 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 279 271 }; 280 272 281 - static int sii902x_get_modes(struct drm_connector *connector) 273 + static struct edid *sii902x_get_edid(struct sii902x *sii902x, 274 + struct drm_connector *connector) 282 275 { 283 - struct sii902x *sii902x = connector_to_sii902x(connector); 284 - u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24; 285 - u8 output_mode = SII902X_SYS_CTRL_OUTPUT_DVI; 286 276 struct edid *edid; 287 - int num = 0, ret; 288 277 289 278 mutex_lock(&sii902x->mutex); 290 279 291 280 edid = 
drm_get_edid(connector, sii902x->i2cmux->adapter[0]); 292 - drm_connector_update_edid_property(connector, edid); 293 281 if (edid) { 294 282 if (drm_detect_hdmi_monitor(edid)) 295 - output_mode = SII902X_SYS_CTRL_OUTPUT_HDMI; 283 + sii902x->sink_is_hdmi = true; 284 + else 285 + sii902x->sink_is_hdmi = false; 286 + } 296 287 288 + mutex_unlock(&sii902x->mutex); 289 + 290 + return edid; 291 + } 292 + 293 + static int sii902x_get_modes(struct drm_connector *connector) 294 + { 295 + struct sii902x *sii902x = connector_to_sii902x(connector); 296 + struct edid *edid; 297 + int num = 0; 298 + 299 + edid = sii902x_get_edid(sii902x, connector); 300 + drm_connector_update_edid_property(connector, edid); 301 + if (edid) { 297 302 num = drm_add_edid_modes(connector, edid); 298 303 kfree(edid); 299 304 } 300 305 301 - ret = drm_display_info_set_bus_formats(&connector->display_info, 302 - &bus_format, 1); 303 - if (ret) 304 - goto error_out; 305 - 306 - ret = regmap_update_bits(sii902x->regmap, SII902X_SYS_CTRL_DATA, 307 - SII902X_SYS_CTRL_OUTPUT_MODE, output_mode); 308 - if (ret) 309 - goto error_out; 310 - 311 - ret = num; 312 - 313 - error_out: 314 - mutex_unlock(&sii902x->mutex); 315 - 316 - return ret; 306 + return num; 317 307 } 318 308 319 309 static enum drm_mode_status sii902x_mode_valid(struct drm_connector *connector, ··· 360 354 const struct drm_display_mode *adj) 361 355 { 362 356 struct sii902x *sii902x = bridge_to_sii902x(bridge); 357 + u8 output_mode = SII902X_SYS_CTRL_OUTPUT_DVI; 363 358 struct regmap *regmap = sii902x->regmap; 364 359 u8 buf[HDMI_INFOFRAME_SIZE(AVI)]; 365 360 struct hdmi_avi_infoframe frame; 366 361 u16 pixel_clock_10kHz = adj->clock / 10; 367 362 int ret; 363 + 364 + if (sii902x->sink_is_hdmi) 365 + output_mode = SII902X_SYS_CTRL_OUTPUT_HDMI; 368 366 369 367 buf[0] = pixel_clock_10kHz & 0xff; 370 368 buf[1] = pixel_clock_10kHz >> 8; ··· 384 374 SII902X_TPI_AVI_INPUT_COLORSPACE_RGB; 385 375 386 376 mutex_lock(&sii902x->mutex); 377 + 378 + ret = 
regmap_update_bits(sii902x->regmap, SII902X_SYS_CTRL_DATA, 379 + SII902X_SYS_CTRL_OUTPUT_MODE, output_mode); 380 + if (ret) 381 + goto out; 387 382 388 383 ret = regmap_bulk_write(regmap, SII902X_TPI_VIDEO_DATA, buf, 10); 389 384 if (ret) ··· 420 405 enum drm_bridge_attach_flags flags) 421 406 { 422 407 struct sii902x *sii902x = bridge_to_sii902x(bridge); 408 + u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24; 423 409 struct drm_device *drm = bridge->dev; 424 410 int ret; 425 411 426 - if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) { 427 - DRM_ERROR("Fix bridge driver to make connector optional!"); 428 - return -EINVAL; 429 - } 412 + if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) 413 + return drm_bridge_attach(bridge->encoder, sii902x->next_bridge, 414 + bridge, flags); 430 415 431 416 drm_connector_helper_add(&sii902x->connector, 432 417 &sii902x_connector_helper_funcs); ··· 448 433 else 449 434 sii902x->connector.polled = DRM_CONNECTOR_POLL_CONNECT; 450 435 436 + ret = drm_display_info_set_bus_formats(&sii902x->connector.display_info, 437 + &bus_format, 1); 438 + if (ret) 439 + return ret; 440 + 451 441 drm_connector_attach_encoder(&sii902x->connector, bridge->encoder); 452 442 453 443 return 0; 444 + } 445 + 446 + static enum drm_connector_status sii902x_bridge_detect(struct drm_bridge *bridge) 447 + { 448 + struct sii902x *sii902x = bridge_to_sii902x(bridge); 449 + 450 + return sii902x_detect(sii902x); 451 + } 452 + 453 + static struct edid *sii902x_bridge_get_edid(struct drm_bridge *bridge, 454 + struct drm_connector *connector) 455 + { 456 + struct sii902x *sii902x = bridge_to_sii902x(bridge); 457 + 458 + return sii902x_get_edid(sii902x, connector); 454 459 } 455 460 456 461 static const struct drm_bridge_funcs sii902x_bridge_funcs = { ··· 478 443 .mode_set = sii902x_bridge_mode_set, 479 444 .disable = sii902x_bridge_disable, 480 445 .enable = sii902x_bridge_enable, 446 + .detect = sii902x_bridge_detect, 447 + .get_edid = sii902x_bridge_get_edid, 481 448 }; 482 449 483 
450 static int sii902x_mute(struct sii902x *sii902x, bool mute) ··· 866 829 867 830 mutex_unlock(&sii902x->mutex); 868 831 869 - if ((status & SII902X_HOTPLUG_EVENT) && sii902x->bridge.dev) 832 + if ((status & SII902X_HOTPLUG_EVENT) && sii902x->bridge.dev) { 870 833 drm_helper_hpd_irq_event(sii902x->bridge.dev); 834 + drm_bridge_hpd_notify(&sii902x->bridge, (status & SII902X_PLUGGED_STATUS) 835 + ? connector_status_connected 836 + : connector_status_disconnected); 837 + } 871 838 872 839 return IRQ_HANDLED; 873 840 } ··· 1042 1001 sii902x->bridge.funcs = &sii902x_bridge_funcs; 1043 1002 sii902x->bridge.of_node = dev->of_node; 1044 1003 sii902x->bridge.timings = &default_sii902x_timings; 1004 + sii902x->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID; 1005 + 1006 + if (sii902x->i2c->irq > 0) 1007 + sii902x->bridge.ops |= DRM_BRIDGE_OP_HPD; 1008 + 1045 1009 drm_bridge_add(&sii902x->bridge); 1046 1010 1047 1011 sii902x_audio_codec_init(sii902x, dev); ··· 1068 1022 const struct i2c_device_id *id) 1069 1023 { 1070 1024 struct device *dev = &client->dev; 1025 + struct device_node *endpoint; 1071 1026 struct sii902x *sii902x; 1072 1027 int ret; 1073 1028 ··· 1094 1047 dev_err(dev, "Failed to retrieve/request reset gpio: %ld\n", 1095 1048 PTR_ERR(sii902x->reset_gpio)); 1096 1049 return PTR_ERR(sii902x->reset_gpio); 1050 + } 1051 + 1052 + endpoint = of_graph_get_endpoint_by_regs(dev->of_node, 1, -1); 1053 + if (endpoint) { 1054 + struct device_node *remote = of_graph_get_remote_port_parent(endpoint); 1055 + 1056 + of_node_put(endpoint); 1057 + if (!remote) { 1058 + dev_err(dev, "Endpoint in port@1 unconnected\n"); 1059 + return -ENODEV; 1060 + } 1061 + 1062 + if (!of_device_is_available(remote)) { 1063 + dev_err(dev, "port@1 remote device is disabled\n"); 1064 + of_node_put(remote); 1065 + return -ENODEV; 1066 + } 1067 + 1068 + sii902x->next_bridge = of_drm_find_bridge(remote); 1069 + of_node_put(remote); 1070 + if (!sii902x->next_bridge) 1071 + return -EPROBE_DEFER; 
1097 1072 } 1098 1073 1099 1074 mutex_init(&sii902x->mutex);
+1 -1
drivers/gpu/drm/bridge/sil-sii8620.c
··· 2120 2120 if (ret) { 2121 2121 dev_err(ctx->dev, "Failed to register RC device\n"); 2122 2122 ctx->error = ret; 2123 - rc_free_device(ctx->rc_dev); 2123 + rc_free_device(rc_dev); 2124 2124 return; 2125 2125 } 2126 2126 ctx->rc_dev = rc_dev;
+8 -8
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 2540 2540 struct drm_display_mode *mode = &crtc_state->mode; 2541 2541 u8 max_bpc = conn_state->max_requested_bpc; 2542 2542 bool is_hdmi2_sink = info->hdmi.scdc.supported || 2543 - (info->color_formats & DRM_COLOR_FORMAT_YCRCB420); 2543 + (info->color_formats & DRM_COLOR_FORMAT_YCBCR420); 2544 2544 u32 *output_fmts; 2545 2545 unsigned int i = 0; 2546 2546 ··· 2594 2594 */ 2595 2595 2596 2596 if (max_bpc >= 16 && info->bpc == 16) { 2597 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444) 2597 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR444) 2598 2598 output_fmts[i++] = MEDIA_BUS_FMT_YUV16_1X48; 2599 2599 2600 2600 output_fmts[i++] = MEDIA_BUS_FMT_RGB161616_1X48; 2601 2601 } 2602 2602 2603 2603 if (max_bpc >= 12 && info->bpc >= 12) { 2604 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB422) 2604 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR422) 2605 2605 output_fmts[i++] = MEDIA_BUS_FMT_UYVY12_1X24; 2606 2606 2607 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444) 2607 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR444) 2608 2608 output_fmts[i++] = MEDIA_BUS_FMT_YUV12_1X36; 2609 2609 2610 2610 output_fmts[i++] = MEDIA_BUS_FMT_RGB121212_1X36; 2611 2611 } 2612 2612 2613 2613 if (max_bpc >= 10 && info->bpc >= 10) { 2614 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB422) 2614 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR422) 2615 2615 output_fmts[i++] = MEDIA_BUS_FMT_UYVY10_1X20; 2616 2616 2617 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444) 2617 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR444) 2618 2618 output_fmts[i++] = MEDIA_BUS_FMT_YUV10_1X30; 2619 2619 2620 2620 output_fmts[i++] = MEDIA_BUS_FMT_RGB101010_1X30; 2621 2621 } 2622 2622 2623 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB422) 2623 + if (info->color_formats & DRM_COLOR_FORMAT_YCBCR422) 2624 2624 output_fmts[i++] = MEDIA_BUS_FMT_UYVY8_1X16; 2625 2625 2626 - if (info->color_formats & DRM_COLOR_FORMAT_YCRCB444) 2626 + if (info->color_formats & 
DRM_COLOR_FORMAT_YCBCR444) 2627 2627 output_fmts[i++] = MEDIA_BUS_FMT_YUV8_1X24; 2628 2628 2629 2629 /* Default 8bit RGB fallback */
+17 -8
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
··· 871 871 dsi_write(dsi, DSI_INT_MSK1, 0); 872 872 } 873 873 874 - static void dw_mipi_dsi_bridge_post_disable(struct drm_bridge *bridge) 874 + static void dw_mipi_dsi_bridge_post_atomic_disable(struct drm_bridge *bridge, 875 + struct drm_bridge_state *old_bridge_state) 875 876 { 876 877 struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge); 877 878 const struct dw_mipi_dsi_phy_ops *phy_ops = dsi->plat_data->phy_ops; ··· 979 978 dw_mipi_dsi_mode_set(dsi->slave, adjusted_mode); 980 979 } 981 980 982 - static void dw_mipi_dsi_bridge_enable(struct drm_bridge *bridge) 981 + static void dw_mipi_dsi_bridge_atomic_enable(struct drm_bridge *bridge, 982 + struct drm_bridge_state *old_bridge_state) 983 983 { 984 984 struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge); 985 985 ··· 1000 998 enum drm_mode_status mode_status = MODE_OK; 1001 999 1002 1000 if (pdata->mode_valid) 1003 - mode_status = pdata->mode_valid(pdata->priv_data, mode); 1001 + mode_status = pdata->mode_valid(pdata->priv_data, mode, 1002 + dsi->mode_flags, 1003 + dw_mipi_dsi_get_lanes(dsi), 1004 + dsi->format); 1004 1005 1005 1006 return mode_status; 1006 1007 } ··· 1037 1032 } 1038 1033 1039 1034 static const struct drm_bridge_funcs dw_mipi_dsi_bridge_funcs = { 1040 - .mode_set = dw_mipi_dsi_bridge_mode_set, 1041 - .enable = dw_mipi_dsi_bridge_enable, 1042 - .post_disable = dw_mipi_dsi_bridge_post_disable, 1043 - .mode_valid = dw_mipi_dsi_bridge_mode_valid, 1044 - .attach = dw_mipi_dsi_bridge_attach, 1035 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1036 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1037 + .atomic_reset = drm_atomic_helper_bridge_reset, 1038 + .atomic_enable = dw_mipi_dsi_bridge_atomic_enable, 1039 + .atomic_post_disable = dw_mipi_dsi_bridge_post_atomic_disable, 1040 + .mode_set = dw_mipi_dsi_bridge_mode_set, 1041 + .mode_valid = dw_mipi_dsi_bridge_mode_valid, 1042 + .attach = dw_mipi_dsi_bridge_attach, 1045 1043 }; 1046 1044 1047 1045 #ifdef 
CONFIG_DEBUG_FS ··· 1207 1199 ret = mipi_dsi_host_register(&dsi->dsi_host); 1208 1200 if (ret) { 1209 1201 dev_err(dev, "Failed to register MIPI host: %d\n", ret); 1202 + pm_runtime_disable(dev); 1210 1203 dw_mipi_dsi_debugfs_remove(dsi); 1211 1204 return ERR_PTR(ret); 1212 1205 }
+1 -1
drivers/gpu/drm/bridge/tc358767.c
··· 27 27 28 28 #include <drm/drm_atomic_helper.h> 29 29 #include <drm/drm_bridge.h> 30 - #include <drm/drm_dp_helper.h> 30 + #include <drm/dp/drm_dp_helper.h> 31 31 #include <drm/drm_edid.h> 32 32 #include <drm/drm_of.h> 33 33 #include <drm/drm_panel.h>
+2 -2
drivers/gpu/drm/bridge/tc358775.c
··· 22 22 #include <drm/drm_atomic_helper.h> 23 23 #include <drm/drm_bridge.h> 24 24 #include <drm/drm_crtc_helper.h> 25 - #include <drm/drm_dp_helper.h> 25 + #include <drm/dp/drm_dp_helper.h> 26 26 #include <drm/drm_mipi_dsi.h> 27 27 #include <drm/drm_of.h> 28 28 #include <drm/drm_panel.h> ··· 241 241 } 242 242 243 243 #define TC358775_LVCFG_LVDLINK__MASK 0x00000002 244 - #define TC358775_LVCFG_LVDLINK__SHIFT 0 244 + #define TC358775_LVCFG_LVDLINK__SHIFT 1 245 245 static inline u32 TC358775_LVCFG_LVDLINK(uint32_t val) 246 246 { 247 247 return ((val) << TC358775_LVCFG_LVDLINK__SHIFT) &
+43 -10
drivers/gpu/drm/bridge/ti-sn65dsi83.c
··· 33 33 #include <linux/of_device.h> 34 34 #include <linux/of_graph.h> 35 35 #include <linux/regmap.h> 36 + #include <linux/regulator/consumer.h> 36 37 37 38 #include <drm/drm_atomic_helper.h> 38 39 #include <drm/drm_bridge.h> ··· 144 143 struct mipi_dsi_device *dsi; 145 144 struct drm_bridge *panel_bridge; 146 145 struct gpio_desc *enable_gpio; 146 + struct regulator *vcc; 147 147 int dsi_lanes; 148 148 bool lvds_dual_link; 149 149 bool lvds_dual_link_even_odd_swap; ··· 339 337 u16 val; 340 338 int ret; 341 339 340 + ret = regulator_enable(ctx->vcc); 341 + if (ret) { 342 + dev_err(ctx->dev, "Failed to enable vcc: %d\n", ret); 343 + return; 344 + } 345 + 342 346 /* Deassert reset */ 343 347 gpiod_set_value(ctx->enable_gpio, 1); 344 348 usleep_range(1000, 1100); ··· 494 486 struct drm_bridge_state *old_bridge_state) 495 487 { 496 488 struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge); 489 + int ret; 497 490 498 491 /* Put the chip in reset, pull EN line low, and assure 10ms reset low timing. 
*/ 499 492 gpiod_set_value(ctx->enable_gpio, 0); 500 493 usleep_range(10000, 11000); 494 + 495 + ret = regulator_disable(ctx->vcc); 496 + if (ret) 497 + dev_err(ctx->dev, "Failed to disable vcc: %d\n", ret); 501 498 502 499 regcache_mark_dirty(ctx->regmap); 503 500 } ··· 573 560 ctx->host_node = of_graph_get_remote_port_parent(endpoint); 574 561 of_node_put(endpoint); 575 562 576 - if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4) 577 - return -EINVAL; 578 - if (!ctx->host_node) 579 - return -ENODEV; 563 + if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4) { 564 + ret = -EINVAL; 565 + goto err_put_node; 566 + } 567 + if (!ctx->host_node) { 568 + ret = -ENODEV; 569 + goto err_put_node; 570 + } 580 571 581 572 ctx->lvds_dual_link = false; 582 573 ctx->lvds_dual_link_even_odd_swap = false; ··· 607 590 608 591 ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, &panel_bridge); 609 592 if (ret < 0) 610 - return ret; 593 + goto err_put_node; 611 594 if (panel) { 612 595 panel_bridge = devm_drm_panel_bridge_add(dev, panel); 613 - if (IS_ERR(panel_bridge)) 614 - return PTR_ERR(panel_bridge); 596 + if (IS_ERR(panel_bridge)) { 597 + ret = PTR_ERR(panel_bridge); 598 + goto err_put_node; 599 + } 615 600 } 616 601 617 602 ctx->panel_bridge = panel_bridge; 618 603 604 + ctx->vcc = devm_regulator_get(dev, "vcc"); 605 + if (IS_ERR(ctx->vcc)) 606 + return dev_err_probe(dev, PTR_ERR(ctx->vcc), 607 + "Failed to get supply 'vcc'\n"); 608 + 619 609 return 0; 610 + 611 + err_put_node: 612 + of_node_put(ctx->host_node); 613 + return ret; 620 614 } 621 615 622 616 static int sn65dsi83_host_attach(struct sn65dsi83 *ctx) ··· 690 662 } 691 663 692 664 /* Put the chip in reset, pull EN line low, and assure 10ms reset low timing. 
*/ 693 - ctx->enable_gpio = devm_gpiod_get(ctx->dev, "enable", GPIOD_OUT_LOW); 665 + ctx->enable_gpio = devm_gpiod_get_optional(ctx->dev, "enable", 666 + GPIOD_OUT_LOW); 694 667 if (IS_ERR(ctx->enable_gpio)) 695 668 return PTR_ERR(ctx->enable_gpio); 696 669 ··· 702 673 return ret; 703 674 704 675 ctx->regmap = devm_regmap_init_i2c(client, &sn65dsi83_regmap_config); 705 - if (IS_ERR(ctx->regmap)) 706 - return PTR_ERR(ctx->regmap); 676 + if (IS_ERR(ctx->regmap)) { 677 + ret = PTR_ERR(ctx->regmap); 678 + goto err_put_node; 679 + } 707 680 708 681 dev_set_drvdata(dev, ctx); 709 682 i2c_set_clientdata(client, ctx); ··· 722 691 723 692 err_remove_bridge: 724 693 drm_bridge_remove(&ctx->bridge); 694 + err_put_node: 695 + of_node_put(ctx->host_node); 725 696 return ret; 726 697 } 727 698
+2 -2
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 26 26 #include <drm/drm_atomic.h> 27 27 #include <drm/drm_atomic_helper.h> 28 28 #include <drm/drm_bridge.h> 29 - #include <drm/drm_dp_aux_bus.h> 30 - #include <drm/drm_dp_helper.h> 29 + #include <drm/dp/drm_dp_aux_bus.h> 30 + #include <drm/dp/drm_dp_helper.h> 31 31 #include <drm/drm_mipi_dsi.h> 32 32 #include <drm/drm_of.h> 33 33 #include <drm/drm_panel.h>
+9
drivers/gpu/drm/dp/Makefile
··· 1 + # SPDX-License-Identifier: MIT 2 + 3 + obj-$(CONFIG_DRM_DP_AUX_BUS) += drm_dp_aux_bus.o 4 + 5 + drm_dp_helper-y := drm_dp.o drm_dp_dual_mode_helper.o drm_dp_helper_mod.o drm_dp_mst_topology.o 6 + drm_dp_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o 7 + drm_dp_helper-$(CONFIG_DRM_DP_CEC) += drm_dp_cec.o 8 + 9 + obj-$(CONFIG_DRM_DP_HELPER) += drm_dp_helper.o
+33
drivers/gpu/drm/dp/drm_dp_helper_internal.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + 3 + #ifndef DRM_DP_HELPER_INTERNAL_H 4 + #define DRM_DP_HELPER_INTERNAL_H 5 + 6 + struct drm_dp_aux; 7 + 8 + #ifdef CONFIG_DRM_DP_AUX_CHARDEV 9 + int drm_dp_aux_dev_init(void); 10 + void drm_dp_aux_dev_exit(void); 11 + int drm_dp_aux_register_devnode(struct drm_dp_aux *aux); 12 + void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux); 13 + #else 14 + static inline int drm_dp_aux_dev_init(void) 15 + { 16 + return 0; 17 + } 18 + 19 + static inline void drm_dp_aux_dev_exit(void) 20 + { 21 + } 22 + 23 + static inline int drm_dp_aux_register_devnode(struct drm_dp_aux *aux) 24 + { 25 + return 0; 26 + } 27 + 28 + static inline void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux) 29 + { 30 + } 31 + #endif 32 + 33 + #endif
+22
drivers/gpu/drm/dp/drm_dp_helper_mod.c
··· 1 + // SPDX-License-Identifier: MIT 2 + 3 + #include <linux/module.h> 4 + 5 + #include "drm_dp_helper_internal.h" 6 + 7 + MODULE_DESCRIPTION("DRM DisplayPort helper"); 8 + MODULE_LICENSE("GPL and additional rights"); 9 + 10 + static int __init drm_dp_helper_module_init(void) 11 + { 12 + return drm_dp_aux_dev_init(); 13 + } 14 + 15 + static void __exit drm_dp_helper_module_exit(void) 16 + { 17 + /* Call exit functions from specific dp helpers here */ 18 + drm_dp_aux_dev_exit(); 19 + } 20 + 21 + module_init(drm_dp_helper_module_init); 22 + module_exit(drm_dp_helper_module_exit);
+535
drivers/gpu/drm/drm_buddy.c
// SPDX-License-Identifier: MIT
/*
 * Copyright © 2021 Intel Corporation
 */

#include <linux/kmemleak.h>
#include <linux/module.h>
#include <linux/sizes.h>

#include <drm/drm_buddy.h>

static struct kmem_cache *slab_blocks;

static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
					       struct drm_buddy_block *parent,
					       unsigned int order,
					       u64 offset)
{
	struct drm_buddy_block *block;

	BUG_ON(order > DRM_BUDDY_MAX_ORDER);

	block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL);
	if (!block)
		return NULL;

	block->header = offset;
	block->header |= order;
	block->parent = parent;

	BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
	return block;
}

static void drm_block_free(struct drm_buddy *mm,
			   struct drm_buddy_block *block)
{
	kmem_cache_free(slab_blocks, block);
}

static void mark_allocated(struct drm_buddy_block *block)
{
	block->header &= ~DRM_BUDDY_HEADER_STATE;
	block->header |= DRM_BUDDY_ALLOCATED;

	list_del(&block->link);
}

static void mark_free(struct drm_buddy *mm,
		      struct drm_buddy_block *block)
{
	block->header &= ~DRM_BUDDY_HEADER_STATE;
	block->header |= DRM_BUDDY_FREE;

	list_add(&block->link,
		 &mm->free_list[drm_buddy_block_order(block)]);
}

static void mark_split(struct drm_buddy_block *block)
{
	block->header &= ~DRM_BUDDY_HEADER_STATE;
	block->header |= DRM_BUDDY_SPLIT;

	list_del(&block->link);
}

/**
 * drm_buddy_init - init memory manager
 *
 * @mm: DRM buddy manager to initialize
 * @size: size in bytes to manage
 * @chunk_size: minimum page size in bytes for our allocations
 *
 * Initializes the memory manager and its resources.
 *
 * Returns:
 * 0 on success, error code on failure.
 */
int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
{
	unsigned int i;
	u64 offset;

	if (size < chunk_size)
		return -EINVAL;

	if (chunk_size < PAGE_SIZE)
		return -EINVAL;

	if (!is_power_of_2(chunk_size))
		return -EINVAL;

	size = round_down(size, chunk_size);

	mm->size = size;
	mm->avail = size;
	mm->chunk_size = chunk_size;
	mm->max_order = ilog2(size) - ilog2(chunk_size);

	BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);

	mm->free_list = kmalloc_array(mm->max_order + 1,
				      sizeof(struct list_head),
				      GFP_KERNEL);
	if (!mm->free_list)
		return -ENOMEM;

	for (i = 0; i <= mm->max_order; ++i)
		INIT_LIST_HEAD(&mm->free_list[i]);

	mm->n_roots = hweight64(size);

	mm->roots = kmalloc_array(mm->n_roots,
				  sizeof(struct drm_buddy_block *),
				  GFP_KERNEL);
	if (!mm->roots)
		goto out_free_list;

	offset = 0;
	i = 0;

	/*
	 * Split into power-of-two blocks, in case we are given a size that is
	 * not itself a power-of-two.
	 */
	do {
		struct drm_buddy_block *root;
		unsigned int order;
		u64 root_size;

		root_size = rounddown_pow_of_two(size);
		order = ilog2(root_size) - ilog2(chunk_size);

		root = drm_block_alloc(mm, NULL, order, offset);
		if (!root)
			goto out_free_roots;

		mark_free(mm, root);

		BUG_ON(i > mm->max_order);
		BUG_ON(drm_buddy_block_size(mm, root) < chunk_size);

		mm->roots[i] = root;

		offset += root_size;
		size -= root_size;
		i++;
	} while (size);

	return 0;

out_free_roots:
	while (i--)
		drm_block_free(mm, mm->roots[i]);
	kfree(mm->roots);
out_free_list:
	kfree(mm->free_list);
	return -ENOMEM;
}
EXPORT_SYMBOL(drm_buddy_init);

/**
 * drm_buddy_fini - tear down the memory manager
 *
 * @mm: DRM buddy manager to free
 *
 * Cleanup memory manager resources and the freelist
 */
void drm_buddy_fini(struct drm_buddy *mm)
{
	int i;

	for (i = 0; i < mm->n_roots; ++i) {
		WARN_ON(!drm_buddy_block_is_free(mm->roots[i]));
		drm_block_free(mm, mm->roots[i]);
	}

	WARN_ON(mm->avail != mm->size);

	kfree(mm->roots);
	kfree(mm->free_list);
}
EXPORT_SYMBOL(drm_buddy_fini);

static int split_block(struct drm_buddy *mm,
		       struct drm_buddy_block *block)
{
	unsigned int block_order = drm_buddy_block_order(block) - 1;
	u64 offset = drm_buddy_block_offset(block);

	BUG_ON(!drm_buddy_block_is_free(block));
	BUG_ON(!drm_buddy_block_order(block));

	block->left = drm_block_alloc(mm, block, block_order, offset);
	if (!block->left)
		return -ENOMEM;

	block->right = drm_block_alloc(mm, block, block_order,
				       offset + (mm->chunk_size << block_order));
	if (!block->right) {
		drm_block_free(mm, block->left);
		return -ENOMEM;
	}

	mark_free(mm, block->left);
	mark_free(mm, block->right);

	mark_split(block);

	return 0;
}

static struct drm_buddy_block *
get_buddy(struct drm_buddy_block *block)
{
	struct drm_buddy_block *parent;

	parent = block->parent;
	if (!parent)
		return NULL;

	if (parent->left == block)
		return parent->right;

	return parent->left;
}

static void __drm_buddy_free(struct drm_buddy *mm,
			     struct drm_buddy_block *block)
{
	struct drm_buddy_block *parent;

	while ((parent = block->parent)) {
		struct drm_buddy_block *buddy;

		buddy = get_buddy(block);

		if (!drm_buddy_block_is_free(buddy))
			break;

		list_del(&buddy->link);

		drm_block_free(mm, block);
		drm_block_free(mm, buddy);

		block = parent;
	}

	mark_free(mm, block);
}

/**
 * drm_buddy_free_block - free a block
 *
 * @mm: DRM buddy manager
 * @block: block to be freed
 */
void drm_buddy_free_block(struct drm_buddy *mm,
			  struct drm_buddy_block *block)
{
	BUG_ON(!drm_buddy_block_is_allocated(block));
	mm->avail += drm_buddy_block_size(mm, block);
	__drm_buddy_free(mm, block);
}
EXPORT_SYMBOL(drm_buddy_free_block);

/**
 * drm_buddy_free_list - free blocks
 *
 * @mm: DRM buddy manager
 * @objects: input list head to free blocks
 */
void drm_buddy_free_list(struct drm_buddy *mm, struct list_head *objects)
{
	struct drm_buddy_block *block, *on;

	list_for_each_entry_safe(block, on, objects, link) {
		drm_buddy_free_block(mm, block);
		cond_resched();
	}
	INIT_LIST_HEAD(objects);
}
EXPORT_SYMBOL(drm_buddy_free_list);

/**
 * drm_buddy_alloc_blocks - allocate power-of-two blocks
 *
 * @mm: DRM buddy manager to allocate from
 * @order: size of the allocation
 *
 * The order value here translates to:
 *
 * 0 = 2^0 * mm->chunk_size
 * 1 = 2^1 * mm->chunk_size
 * 2 = 2^2 * mm->chunk_size
 *
 * Returns:
 * allocated ptr to the &drm_buddy_block on success
 */
struct drm_buddy_block *
drm_buddy_alloc_blocks(struct drm_buddy *mm, unsigned int order)
{
	struct drm_buddy_block *block = NULL;
	unsigned int i;
	int err;

	for (i = order; i <= mm->max_order; ++i) {
		block = list_first_entry_or_null(&mm->free_list[i],
						 struct drm_buddy_block,
						 link);
		if (block)
			break;
	}

	if (!block)
		return ERR_PTR(-ENOSPC);

	BUG_ON(!drm_buddy_block_is_free(block));

	while (i != order) {
		err = split_block(mm, block);
		if (unlikely(err))
			goto out_free;

		/* Go low */
		block = block->left;
		i--;
	}

	mark_allocated(block);
	mm->avail -= drm_buddy_block_size(mm, block);
	kmemleak_update_trace(block);
	return block;

out_free:
	if (i != order)
		__drm_buddy_free(mm, block);
	return ERR_PTR(err);
}
EXPORT_SYMBOL(drm_buddy_alloc_blocks);

static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2)
{
	return s1 <= e2 && e1 >= s2;
}

static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
{
	return s1 <= s2 && e1 >= e2;
}

/**
 * drm_buddy_alloc_range - allocate range
 *
 * @mm: DRM buddy manager to allocate from
 * @blocks: output list head to add allocated blocks
 * @start: start of the allowed range for this block
 * @size: size of the allocation
 *
 * Intended for pre-allocating portions of the address space, for example to
 * reserve a block for the initial framebuffer or similar, hence the expectation
 * here is that drm_buddy_alloc_blocks() is still the main vehicle for
 * allocations, so if that's not the case then the drm_mm range allocator is
 * probably a much better fit, and so you should probably go use that instead.
 *
 * Note that it's safe to chain together multiple alloc_ranges
 * with the same blocks list
 *
 * Returns:
 * 0 on success, error code on failure.
 */
int drm_buddy_alloc_range(struct drm_buddy *mm,
			  struct list_head *blocks,
			  u64 start, u64 size)
{
	struct drm_buddy_block *block;
	struct drm_buddy_block *buddy;
	LIST_HEAD(allocated);
	LIST_HEAD(dfs);
	u64 end;
	int err;
	int i;

	if (size < mm->chunk_size)
		return -EINVAL;

	if (!IS_ALIGNED(size | start, mm->chunk_size))
		return -EINVAL;

	if (range_overflows(start, size, mm->size))
		return -EINVAL;

	for (i = 0; i < mm->n_roots; ++i)
		list_add_tail(&mm->roots[i]->tmp_link, &dfs);

	end = start + size - 1;

	do {
		u64 block_start;
		u64 block_end;

		block = list_first_entry_or_null(&dfs,
						 struct drm_buddy_block,
						 tmp_link);
		if (!block)
			break;

		list_del(&block->tmp_link);

		block_start = drm_buddy_block_offset(block);
		block_end = block_start + drm_buddy_block_size(mm, block) - 1;

		if (!overlaps(start, end, block_start, block_end))
			continue;

		if (drm_buddy_block_is_allocated(block)) {
			err = -ENOSPC;
			goto err_free;
		}

		if (contains(start, end, block_start, block_end)) {
			if (!drm_buddy_block_is_free(block)) {
				err = -ENOSPC;
				goto err_free;
			}

			mark_allocated(block);
			mm->avail -= drm_buddy_block_size(mm, block);
			list_add_tail(&block->link, &allocated);
			continue;
		}

		if (!drm_buddy_block_is_split(block)) {
			err = split_block(mm, block);
			if (unlikely(err))
				goto err_undo;
		}

		list_add(&block->right->tmp_link, &dfs);
		list_add(&block->left->tmp_link, &dfs);
	} while (1);

	list_splice_tail(&allocated, blocks);
	return 0;

err_undo:
	/*
	 * We really don't want to leave around a bunch of split blocks, since
	 * bigger is better, so make sure we merge everything back before we
	 * free the allocated blocks.
	 */
	buddy = get_buddy(block);
	if (buddy &&
	    (drm_buddy_block_is_free(block) &&
	     drm_buddy_block_is_free(buddy)))
		__drm_buddy_free(mm, block);

err_free:
	drm_buddy_free_list(mm, &allocated);
	return err;
}
EXPORT_SYMBOL(drm_buddy_alloc_range);

/**
 * drm_buddy_block_print - print block information
 *
 * @mm: DRM buddy manager
 * @block: DRM buddy block
 * @p: DRM printer to use
 */
void drm_buddy_block_print(struct drm_buddy *mm,
			   struct drm_buddy_block *block,
			   struct drm_printer *p)
{
	u64 start = drm_buddy_block_offset(block);
	u64 size = drm_buddy_block_size(mm, block);

	drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size);
}
EXPORT_SYMBOL(drm_buddy_block_print);

/**
 * drm_buddy_print - print allocator state
 *
 * @mm: DRM buddy manager
 * @p: DRM printer to use
 */
void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
{
	int order;

	drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB\n",
		   mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20);

	for (order = mm->max_order; order >= 0; order--) {
		struct drm_buddy_block *block;
		u64 count = 0, free;

		list_for_each_entry(block, &mm->free_list[order], link) {
			BUG_ON(!drm_buddy_block_is_free(block));
			count++;
		}

		drm_printf(p, "order-%d ", order);

		free = count * (mm->chunk_size << order);
		if (free < SZ_1M)
			drm_printf(p, "free: %lluKiB", free >> 10);
		else
			drm_printf(p, "free: %lluMiB", free >> 20);

		drm_printf(p, ", pages: %llu\n", count);
	}
}
EXPORT_SYMBOL(drm_buddy_print);

static void drm_buddy_module_exit(void)
{
	kmem_cache_destroy(slab_blocks);
}

static int __init drm_buddy_module_init(void)
{
	slab_blocks = KMEM_CACHE(drm_buddy_block, 0);
	if (!slab_blocks)
		return -ENOMEM;

	return 0;
}

module_init(drm_buddy_module_init);
module_exit(drm_buddy_module_exit);

MODULE_DESCRIPTION("DRM Buddy Allocator");
MODULE_LICENSE("Dual MIT/GPL");
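The only subtle arithmetic in the new file is the root-carving loop in drm_buddy_init(): a managed size that is not itself a power of two gets covered by a descending series of power-of-two root blocks. A minimal userspace sketch of just that loop (the helper names below are made up for illustration, not the kernel API):

```c
#include <stdint.h>

/* Largest power of two <= x, mirroring the kernel's rounddown_pow_of_two(). */
static uint64_t rounddown_pow2_u64(uint64_t x)
{
	uint64_t r = 1;

	while (r <= x >> 1)
		r <<= 1;
	return r;
}

/* Sketch of drm_buddy_init()'s root carving: peel off the largest
 * power-of-two block that still fits until the size is consumed.
 * Writes the root sizes to roots[] and returns how many were needed. */
static int carve_roots(uint64_t size, uint64_t *roots, int max_roots)
{
	int n = 0;

	while (size && n < max_roots) {
		uint64_t root_size = rounddown_pow2_u64(size);

		roots[n++] = root_size;
		size -= root_size;
	}
	return n;
}
```

A 12 MiB region, for example, decomposes into an 8 MiB root followed by a 4 MiB root, which is also why the kernel code can set mm->n_roots to hweight64(size): there is exactly one root per set bit of the (chunk-aligned) size.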
+4
drivers/gpu/drm/drm_color_mgmt.c
···
  * driver boot-up state too. Drivers can access this blob through
  * &drm_crtc_state.gamma_lut.
  *
+ * Note that for mostly historical reasons stemming from Xorg heritage,
+ * this is also used to store the color map (also sometimes color lut, CLUT
+ * or color palette) for indexed formats like DRM_FORMAT_C8.
+ *
  * “GAMMA_LUT_SIZE”:
  *	Unsigned range property to give the size of the lookup table to be set
  *	on the GAMMA_LUT property (the size depends on the underlying hardware).
-27
drivers/gpu/drm/drm_crtc_helper_internal.h
···
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_dp_helper.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_modes.h>
-
-/* drm_dp_aux_dev.c */
-#ifdef CONFIG_DRM_DP_AUX_CHARDEV
-int drm_dp_aux_dev_init(void);
-void drm_dp_aux_dev_exit(void);
-int drm_dp_aux_register_devnode(struct drm_dp_aux *aux);
-void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux);
-#else
-static inline int drm_dp_aux_dev_init(void)
-{
-	return 0;
-}
-
-static inline void drm_dp_aux_dev_exit(void)
-{
-}
-
-static inline int drm_dp_aux_register_devnode(struct drm_dp_aux *aux)
-{
-	return 0;
-}
-
-static inline void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux)
-{
-}
-#endif
 
 /* drm_probe_helper.c */
 enum drm_mode_status drm_crtc_mode_valid(struct drm_crtc *crtc,
+2 -2
drivers/gpu/drm/drm_dp_aux_bus.c → drivers/gpu/drm/dp/drm_dp_aux_bus.c
···
 #include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
 
-#include <drm/drm_dp_aux_bus.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_aux_bus.h>
+#include <drm/dp/drm_dp_helper.h>
 
 /**
  * dp_aux_ep_match() - The match function for the dp_aux_bus.
+3 -3
drivers/gpu/drm/drm_dp_aux_dev.c → drivers/gpu/drm/dp/drm_dp_aux_dev.c
···
 #include <linux/uio.h>
 
 #include <drm/drm_crtc.h>
-#include <drm/drm_dp_helper.h>
-#include <drm/drm_dp_mst_helper.h>
+#include <drm/dp/drm_dp_helper.h>
+#include <drm/dp/drm_dp_mst_helper.h>
 #include <drm/drm_print.h>
 
-#include "drm_crtc_helper_internal.h"
+#include "drm_dp_helper_internal.h"
 
 struct drm_dp_aux_dev {
 	unsigned index;
+1 -1
drivers/gpu/drm/drm_dp_cec.c → drivers/gpu/drm/dp/drm_dp_cec.c
···
 #include <drm/drm_connector.h>
 #include <drm/drm_device.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 
 /*
  * Unfortunately it turns out that we have a chicken-and-egg situation
+1 -1
drivers/gpu/drm/drm_dp_dual_mode_helper.c → drivers/gpu/drm/dp/drm_dp_dual_mode_helper.c
···
 #include <linux/string.h>
 
 #include <drm/drm_device.h>
-#include <drm/drm_dp_dual_mode_helper.h>
+#include <drm/dp/drm_dp_dual_mode_helper.h>
 #include <drm/drm_print.h>
 
 /**
+3 -3
drivers/gpu/drm/drm_dp_helper.c → drivers/gpu/drm/dp/drm_dp.c
···
 #include <linux/sched.h>
 #include <linux/seq_file.h>
 
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_print.h>
 #include <drm/drm_vblank.h>
-#include <drm/drm_dp_mst_helper.h>
+#include <drm/dp/drm_dp_mst_helper.h>
 #include <drm/drm_panel.h>
 
-#include "drm_crtc_helper_internal.h"
+#include "drm_dp_helper_internal.h"
 
 struct dp_aux_backlight {
 	struct backlight_device *base;
+4 -4
drivers/gpu/drm/drm_dp_mst_topology.c → drivers/gpu/drm/dp/drm_dp_mst_topology.c
···
 #include <linux/math64.h>
 #endif
 
+#include <drm/dp/drm_dp_mst_helper.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
-#include <drm/drm_dp_mst_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
 
-#include "drm_crtc_helper_internal.h"
+#include "drm_dp_helper_internal.h"
 #include "drm_dp_mst_topology_internal.h"
 
 /**
···
 	int ret = 0;
 	int sc;
 	*handled = false;
-	sc = esi[0] & 0x3f;
+	sc = DP_GET_SINK_COUNT(esi[0]);
 
 	if (sc != mgr->sink_count) {
 		mgr->sink_count = sc;
···
 
 	seq_printf(m, "%smstb - [%p]: num_ports: %d\n", prefix, mstb, mstb->num_ports);
 	list_for_each_entry(port, &mstb->ports, next) {
-		seq_printf(m, "%sport %d - [%p] (%s - %s): ddps: %d, ldps: %d, sdp: %d/%d, fec: %s, conn: %p\n",
+		seq_printf(m, "%sport %d - [%p] (%s - %s): ddps: %d, ldps: %d, sdp: %d/%d, fec: %s, conn: %p\n",
 			   prefix,
 			   port->port_num,
 			   port,
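The esi[0] hunk above is the "use DP helper for sink count in mst" change from the changelog: the DPCD SINK_COUNT field keeps bits 5:0 of the count in register bits 5:0 but carries bit 6 of the count in register bit 7, so a plain `& 0x3f` silently truncates sink counts above 63. A standalone mirror of the decode done by the kernel's DP_GET_SINK_COUNT() macro:

```c
/* Decode a DPCD SINK_COUNT byte: count bits 5:0 live in register bits
 * 5:0, and count bit 6 lives in register bit 7 (bit 6 of the register
 * itself is a separate flag), so bit 7 must be shifted down one place. */
static int dp_get_sink_count(unsigned char reg)
{
	return ((reg & 0x80) >> 1) | (reg & 0x3f);
}
```

With this decode, a register value of 0x80 correctly reads back as 64 sinks instead of 0.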
+1 -1
drivers/gpu/drm/drm_dp_mst_topology_internal.h → drivers/gpu/drm/dp/drm_dp_mst_topology_internal.h
···
 #ifndef _DRM_DP_MST_HELPER_INTERNAL_H_
 #define _DRM_DP_MST_HELPER_INTERNAL_H_
 
-#include <drm/drm_dp_mst_helper.h>
+#include <drm/dp/drm_dp_mst_helper.h>
 
 void
 drm_dp_encode_sideband_req(const struct drm_dp_sideband_msg_req_body *req,
+1 -1
drivers/gpu/drm/drm_dsc.c
···
 #include <linux/errno.h>
 #include <linux/byteorder/generic.h>
 #include <drm/drm_print.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_dsc.h>
 
 /**
+68 -36
drivers/gpu/drm/drm_edid.c
···
 /* Non desktop display (i.e. HMD) */
 #define EDID_QUIRK_NON_DESKTOP (1 << 12)
 
+#define MICROSOFT_IEEE_OUI 0xca125c
+
 struct detailed_mode_closure {
 	struct drm_connector *connector;
 	struct edid *edid;
···
 
 	/* Windows Mixed Reality Headsets */
 	EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
-	EDID_QUIRK('H', 'P', 'N', 0x3515, EDID_QUIRK_NON_DESKTOP),
 	EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
-	EDID_QUIRK('L', 'E', 'N', 0xb800, EDID_QUIRK_NON_DESKTOP),
 	EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
 	EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
 	EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
···
 	}
 
 	if (modes > 0)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR420;
 	return modes;
 }
···
 	return oui(db[3], db[2], db[1]) == HDMI_FORUM_IEEE_OUI;
 }
 
+static bool cea_db_is_microsoft_vsdb(const u8 *db)
+{
+	if (cea_db_tag(db) != VENDOR_BLOCK)
+		return false;
+
+	if (cea_db_payload_len(db) != 21)
+		return false;
+
+	return oui(db[3], db[2], db[1]) == MICROSOFT_IEEE_OUI;
+}
+
 static bool cea_db_is_vcdb(const u8 *db)
 {
 	if (cea_db_tag(db) != USE_EXTENDED_TAG)
···
 	if (map_len == 0) {
 		/* All CEA modes support ycbcr420 sampling also.*/
 		hdmi->y420_cmdb_map = U64_MAX;
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR420;
 		return;
 	}
···
 		map |= (u64)db[2 + count] << (8 * count);
 
 	if (map)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR420;
 
 	hdmi->y420_cmdb_map = map;
 }
···
 
 	if (hdmi[6] & DRM_EDID_HDMI_DC_30) {
 		dc_bpc = 10;
-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_30;
+		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_30;
 		DRM_DEBUG("%s: HDMI sink does deep color 30.\n",
 			  connector->name);
 	}
 
 	if (hdmi[6] & DRM_EDID_HDMI_DC_36) {
 		dc_bpc = 12;
-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_36;
+		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_36;
 		DRM_DEBUG("%s: HDMI sink does deep color 36.\n",
 			  connector->name);
 	}
 
 	if (hdmi[6] & DRM_EDID_HDMI_DC_48) {
 		dc_bpc = 16;
-		info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_48;
+		info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_48;
 		DRM_DEBUG("%s: HDMI sink does deep color 48.\n",
 			  connector->name);
 	}
···
 		   connector->name, dc_bpc);
 	info->bpc = dc_bpc;
 
-	/*
-	 * Deep color support mandates RGB444 support for all video
-	 * modes and forbids YCRCB422 support for all video modes per
-	 * HDMI 1.3 spec.
-	 */
-	info->color_formats = DRM_COLOR_FORMAT_RGB444;
-
 	/* YCRCB444 is optional according to spec. */
 	if (hdmi[6] & DRM_EDID_HDMI_DC_Y444) {
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+		info->edid_hdmi_ycbcr444_dc_modes = info->edid_hdmi_rgb444_dc_modes;
 		DRM_DEBUG("%s: HDMI sink does YCRCB444 in deep color.\n",
 			  connector->name);
 	}
···
 	drm_parse_hdmi_deep_color_info(connector, db);
 }
 
+/*
+ * See EDID extension for head-mounted and specialized monitors, specified at:
+ * https://docs.microsoft.com/en-us/windows-hardware/drivers/display/specialized-monitors-edid-extension
+ */
+static void drm_parse_microsoft_vsdb(struct drm_connector *connector,
+				     const u8 *db)
+{
+	struct drm_display_info *info = &connector->display_info;
+	u8 version = db[4];
+	bool desktop_usage = db[5] & BIT(6);
+
+	/* Version 1 and 2 for HMDs, version 3 flags desktop usage explicitly */
+	if (version == 1 || version == 2 || (version == 3 && !desktop_usage))
+		info->non_desktop = true;
+
+	drm_dbg_kms(connector->dev, "HMD or specialized display VSDB version %u: 0x%02x\n",
+		    version, db[5]);
+}
+
 static void drm_parse_cea_ext(struct drm_connector *connector,
 			      const struct edid *edid)
 {
···
 	/* The existence of a CEA block should imply RGB support */
 	info->color_formats = DRM_COLOR_FORMAT_RGB444;
 	if (edid_ext[3] & EDID_CEA_YCRCB444)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR444;
 	if (edid_ext[3] & EDID_CEA_YCRCB422)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB422;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR422;
 
 	if (cea_db_offsets(edid_ext, &start, &end))
 		return;
···
 		if (cea_db_is_hdmi_vsdb(db))
 			drm_parse_hdmi_vsdb_video(connector, db);
 		if (cea_db_is_hdmi_forum_vsdb(db))
 			drm_parse_hdmi_forum_vsdb(connector, db);
+		if (cea_db_is_microsoft_vsdb(db))
+			drm_parse_microsoft_vsdb(connector, db);
 		if (cea_db_is_y420cmdb(db))
 			drm_parse_y420cmdb_bitmap(connector, db);
 		if (cea_db_is_vcdb(db))
···
 	info->width_mm = edid->width_cm * 10;
 	info->height_mm = edid->height_cm * 10;
 
-	info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP);
-
 	drm_get_monitor_range(connector, edid);
 
-	DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop);
-
 	if (edid->revision < 3)
-		return quirks;
+		goto out;
 
 	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
-		return quirks;
+		goto out;
 
 	drm_parse_cea_ext(connector, edid);
···
 
 	/* Only defined for 1.4 with digital displays */
 	if (edid->revision < 4)
-		return quirks;
+		goto out;
 
 	switch (edid->input & DRM_EDID_DIGITAL_DEPTH_MASK) {
 	case DRM_EDID_DIGITAL_DEPTH_6:
···
 
 	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR444;
 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
-		info->color_formats |= DRM_COLOR_FORMAT_YCRCB422;
+		info->color_formats |= DRM_COLOR_FORMAT_YCBCR422;
 
 	drm_update_mso(connector, edid);
+
+out:
+	if (quirks & EDID_QUIRK_NON_DESKTOP) {
+		drm_dbg_kms(connector->dev, "Non-desktop display%s\n",
+			    info->non_desktop ? " (redundant quirk)" : "");
+		info->non_desktop = true;
+	}
 
 	return quirks;
 }
 
 static struct drm_display_mode *drm_mode_displayid_detailed(struct drm_device *dev,
-							    struct displayid_detailed_timings_1 *timings)
+							    struct displayid_detailed_timings_1 *timings,
+							    bool type_7)
 {
 	struct drm_display_mode *mode;
 	unsigned pixel_clock = (timings->pixel_clock[0] |
···
 	if (!mode)
 		return NULL;
 
-	mode->clock = pixel_clock * 10;
+	/* resolution is kHz for type VII, and 10 kHz for type I */
+	mode->clock = type_7 ? pixel_clock : pixel_clock * 10;
 	mode->hdisplay = hactive;
 	mode->hsync_start = mode->hdisplay + hsync;
 	mode->hsync_end = mode->hsync_start + hsync_width;
···
 	int num_timings;
 	struct drm_display_mode *newmode;
 	int num_modes = 0;
+	bool type_7 = block->tag == DATA_BLOCK_2_TYPE_7_DETAILED_TIMING;
 	/* blocks must be multiple of 20 bytes length */
 	if (block->num_bytes % 20)
 		return 0;
···
 	for (i = 0; i < num_timings; i++) {
 		struct displayid_detailed_timings_1 *timings = &det->timings[i];
 
-		newmode = drm_mode_displayid_detailed(connector->dev, timings);
+		newmode = drm_mode_displayid_detailed(connector->dev, timings, type_7);
 		if (!newmode)
 			continue;
···
 
 	displayid_iter_edid_begin(edid, &iter);
 	displayid_iter_for_each(block, &iter) {
-		if (block->tag == DATA_BLOCK_TYPE_1_DETAILED_TIMING)
+		if (block->tag == DATA_BLOCK_TYPE_1_DETAILED_TIMING ||
+		    block->tag == DATA_BLOCK_2_TYPE_7_DETAILED_TIMING)
 			num_modes += add_displayid_detailed_1_modes(connector, block);
 	}
 	displayid_iter_end(&iter);
···
 		return true;
 
 	return connector->display_info.hdmi.scdc.supported ||
-	       connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB420;
+	       connector->display_info.color_formats & DRM_COLOR_FORMAT_YCBCR420;
 }
 
 static inline bool is_eotf_supported(u8 output_eotf, u8 sink_eotf)
···
 #undef ACE
 
 /**
- * drm_hdmi_avi_infoframe_colorspace() - fill the HDMI AVI infoframe
- *                                       colorspace information
+ * drm_hdmi_avi_infoframe_colorimetry() - fill the HDMI AVI infoframe
+ *                                        colorimetry information
  * @frame: HDMI AVI infoframe
  * @conn_state: connector state
  */
 void
-drm_hdmi_avi_infoframe_colorspace(struct hdmi_avi_infoframe *frame,
+drm_hdmi_avi_infoframe_colorimetry(struct hdmi_avi_infoframe *frame,
 				  const struct drm_connector_state *conn_state)
 {
 	u32 colorimetry_val;
···
 	frame->extended_colorimetry = (colorimetry_val >> 2) &
 			EXTENDED_COLORIMETRY_MASK;
 }
-EXPORT_SYMBOL(drm_hdmi_avi_infoframe_colorspace);
+EXPORT_SYMBOL(drm_hdmi_avi_infoframe_colorimetry);
 
 /**
  * drm_hdmi_avi_infoframe_quant_range() - fill the HDMI AVI infoframe
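The new cea_db_is_microsoft_vsdb() above matches the HMD extension via the existing oui(db[3], db[2], db[1]) idiom: CTA-861 vendor-specific data blocks store the IEEE OUI in little-endian byte order, so the three payload bytes have to be reassembled most-significant-byte-last before comparing against MICROSOFT_IEEE_OUI (0xca125c). A standalone sketch of that reassembly:

```c
/* Reassemble the byte-reversed IEEE OUI from a CTA-861 vendor-specific
 * data block, as drm_edid.c's oui() macro does: db[0] is the tag/length
 * byte, and db[1..3] hold the OUI least-significant byte first. */
static unsigned int cea_vsdb_oui(const unsigned char *db)
{
	return ((unsigned int)db[3] << 16) |
	       ((unsigned int)db[2] << 8) |
	       db[1];
}
```

So a block whose payload begins 0x5c 0x12 0xca is recognized as Microsoft's 0xca125c.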
-14
drivers/gpu/drm/drm_kms_helper_common.c
···
 		 "DEPRECATED. Use drm.edid_firmware module parameter instead.");
 
 #endif
-
-static int __init drm_kms_helper_init(void)
-{
-	return drm_dp_aux_dev_init();
-}
-
-static void __exit drm_kms_helper_exit(void)
-{
-	/* Call exit functions from specific kms helpers here */
-	drm_dp_aux_dev_exit();
-}
-
-module_init(drm_kms_helper_init);
-module_exit(drm_kms_helper_exit);
+2 -7
drivers/gpu/drm/drm_plane.c
···
 
 	memcpy(formats_ptr(blob_data), plane->format_types, formats_size);
 
-	/* If we can't determine support, just bail */
-	if (!plane->funcs->format_mod_supported)
-		goto done;
-
 	mod = modifiers_ptr(blob_data);
 	for (i = 0; i < plane->modifier_count; i++) {
 		for (j = 0; j < plane->format_count; j++) {
-			if (plane->funcs->format_mod_supported(plane,
+			if (!plane->funcs->format_mod_supported ||
+			    plane->funcs->format_mod_supported(plane,
 							       plane->format_types[j],
 							       plane->modifiers[i])) {
-
 				mod->formats |= 1ULL << j;
 			}
 		}
···
 		mod++;
 	}
 
-done:
 	drm_object_attach_property(&plane->base, config->modifiers_property,
 				   blob->base.id);
 
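This drm_plane.c hunk is the "fix invalid IN_FORMATS blob when plane->format_mod_supported is NULL" change from the changelog: previously the function bailed early and left the per-modifier format masks zeroed, producing a blob that advertised no valid format/modifier pairs; now a missing callback means every listed format is assumed supported. A simplified standalone model of the new inner loop (function and type names here are illustrative, not the kernel API):

```c
#include <stdint.h>
#include <stddef.h>

typedef int (*format_mod_supported_fn)(uint32_t format, uint64_t modifier);

/* Build the per-modifier format bitmask as stored in an IN_FORMATS
 * blob: bit j is set when format j works with the given modifier.
 * With no callback available, every format is treated as supported
 * instead of none. */
static uint64_t in_formats_mask(const uint32_t *formats, size_t nformats,
				uint64_t modifier,
				format_mod_supported_fn supported)
{
	uint64_t mask = 0;
	size_t j;

	for (j = 0; j < nformats; j++) {
		if (!supported || supported(formats[j], modifier))
			mask |= 1ULL << j;
	}
	return mask;
}
```

With a NULL callback and three formats, the mask comes out as 0b111 rather than the empty mask the old early-return path produced.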
+4 -1
drivers/gpu/drm/drm_privacy_screen.c
···
  * * An ERR_PTR(errno) on failure.
  */
 struct drm_privacy_screen *drm_privacy_screen_register(
-	struct device *parent, const struct drm_privacy_screen_ops *ops)
+	struct device *parent, const struct drm_privacy_screen_ops *ops,
+	void *data)
 {
 	struct drm_privacy_screen *priv;
 	int ret;
···
 	priv->dev.parent = parent;
 	priv->dev.release = drm_privacy_screen_device_release;
 	dev_set_name(&priv->dev, "privacy_screen-%s", dev_name(parent));
+	priv->drvdata = data;
 	priv->ops = ops;
 
 	priv->ops->get_hw_state(priv);
···
 	mutex_unlock(&drm_privacy_screen_devs_lock);
 
 	mutex_lock(&priv->lock);
+	priv->drvdata = NULL;
 	priv->ops = NULL;
 	mutex_unlock(&priv->lock);
 
+17
drivers/gpu/drm/drm_privacy_screen_x86.c
···
 }
 #endif
 
+#if IS_ENABLED(CONFIG_CHROMEOS_PRIVACY_SCREEN)
+static bool __init detect_chromeos_privacy_screen(void)
+{
+	return acpi_dev_present("GOOG0010", NULL, -1);
+}
+#endif
+
 static const struct arch_init_data arch_init_data[] __initconst = {
 #if IS_ENABLED(CONFIG_THINKPAD_ACPI)
 	{
···
 			.provider = "privacy_screen-thinkpad_acpi",
 		},
 		.detect = detect_thinkpad_privacy_screen,
+	},
+#endif
+#if IS_ENABLED(CONFIG_CHROMEOS_PRIVACY_SCREEN)
+	{
+		.lookup = {
+			.dev_id = NULL,
+			.con_id = NULL,
+			.provider = "privacy_screen-GOOG0010:00",
+		},
+		.detect = detect_chromeos_privacy_screen,
 	},
 #endif
 };
+1 -2
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
···
 			continue;
 
 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = dma_resv_get_fences(robj, NULL,
-						  &bo->nr_shared,
+			ret = dma_resv_get_fences(robj, true, &bo->nr_shared,
 						  &bo->shared);
 			if (ret)
 				return ret;
+1
drivers/gpu/drm/exynos/Kconfig
···
 	bool "Exynos specific extensions for Analogix DP driver"
 	depends on DRM_EXYNOS_FIMD || DRM_EXYNOS7_DECON
 	select DRM_ANALOGIX_DP
+	select DRM_DP_HELPER
 	default DRM_EXYNOS
 	select DRM_PANEL
 	help
+12 -1
drivers/gpu/drm/exynos/exynos_drm_dsi.c
···
 	struct list_head bridge_chain;
 	struct drm_bridge *out_bridge;
 	struct device *dev;
+	struct drm_display_mode mode;
 
 	void __iomem *reg_base;
 	struct phy *phy;
···
 
 static void exynos_dsi_set_display_mode(struct exynos_dsi *dsi)
 {
-	struct drm_display_mode *m = &dsi->encoder.crtc->state->adjusted_mode;
+	struct drm_display_mode *m = &dsi->mode;
 	unsigned int num_bits_resol = dsi->driver_data->num_bits_resol;
 	u32 reg;
···
 	pm_runtime_put_sync(dsi->dev);
 }
 
+static void exynos_dsi_mode_set(struct drm_encoder *encoder,
+				struct drm_display_mode *mode,
+				struct drm_display_mode *adjusted_mode)
+{
+	struct exynos_dsi *dsi = encoder_to_dsi(encoder);
+
+	drm_mode_copy(&dsi->mode, adjusted_mode);
+}
+
 static enum drm_connector_status
 exynos_dsi_detect(struct drm_connector *connector, bool force)
 {
···
 static const struct drm_encoder_helper_funcs exynos_dsi_encoder_helper_funcs = {
 	.enable = exynos_dsi_enable,
 	.disable = exynos_dsi_disable,
+	.mode_set = exynos_dsi_mode_set,
 };
 
 MODULE_DEVICE_TABLE(of, exynos_dsi_of_match);
+4 -10
drivers/gpu/drm/gma500/cdv_intel_dp.c
··· 31 31 32 32 #include <drm/drm_crtc.h> 33 33 #include <drm/drm_crtc_helper.h> 34 - #include <drm/drm_dp_helper.h> 34 + #include <drm/dp/drm_dp_helper.h> 35 35 #include <drm/drm_simple_kms_helper.h> 36 36 37 37 #include "gma_display.h" ··· 82 82 { 83 83 struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data; 84 84 int mode = MODE_I2C_START; 85 - int ret; 86 85 87 86 if (reading) 88 87 mode |= MODE_I2C_READ; ··· 89 90 mode |= MODE_I2C_WRITE; 90 91 algo_data->address = address; 91 92 algo_data->running = true; 92 - ret = i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL); 93 - return ret; 93 + return i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL); 94 94 } 95 95 96 96 /* ··· 120 122 i2c_algo_dp_aux_put_byte(struct i2c_adapter *adapter, u8 byte) 121 123 { 122 124 struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data; 123 - int ret; 124 125 125 126 if (!algo_data->running) 126 127 return -EIO; 127 128 128 - ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_WRITE, byte, NULL); 129 - return ret; 129 + return i2c_algo_dp_aux_transaction(adapter, MODE_I2C_WRITE, byte, NULL); 130 130 } 131 131 132 132 /* ··· 135 139 i2c_algo_dp_aux_get_byte(struct i2c_adapter *adapter, u8 *byte_ret) 136 140 { 137 141 struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data; 138 - int ret; 139 142 140 143 if (!algo_data->running) 141 144 return -EIO; 142 145 143 - ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_READ, 0, byte_ret); 144 - return ret; 146 + return i2c_algo_dp_aux_transaction(adapter, MODE_I2C_READ, 0, byte_ret); 145 147 } 146 148 147 149 static int
+2 -4
drivers/gpu/drm/gma500/gma_display.c
··· 335 335 struct psb_gem_object *pobj; 336 336 struct psb_gem_object *cursor_pobj = gma_crtc->cursor_pobj; 337 337 struct drm_gem_object *obj; 338 - void *tmp_dst, *tmp_src; 338 + void *tmp_dst; 339 339 int ret = 0, i, cursor_pages; 340 340 341 341 /* If we didn't get a handle then turn the cursor off */ ··· 400 400 /* Copy the cursor to cursor mem */ 401 401 tmp_dst = dev_priv->vram_addr + cursor_pobj->offset; 402 402 for (i = 0; i < cursor_pages; i++) { 403 - tmp_src = kmap(pobj->pages[i]); 404 - memcpy(tmp_dst, tmp_src, PAGE_SIZE); 405 - kunmap(pobj->pages[i]); 403 + memcpy_from_page(tmp_dst, pobj->pages[i], 0, PAGE_SIZE); 406 404 tmp_dst += PAGE_SIZE; 407 405 } 408 406
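The gma500 hunk above replaces an open-coded `kmap`/`memcpy`/`kunmap` triple with `memcpy_from_page()`, which bundles the map, bounded copy, and unmap into one call. A userspace analogue of the semantics (a "page" here is just a buffer, so map/unmap collapse to pointer arithmetic; this is a sketch, not the kernel helper):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Copy `len` bytes starting at `offset` within a single page into dst,
 * mirroring what memcpy_from_page(dst, page, offset, len) provides. */
static void fake_memcpy_from_page(void *dst, const unsigned char *page,
				  size_t offset, size_t len)
{
	assert(offset + len <= PAGE_SIZE);	/* must stay within one page */
	memcpy(dst, page + offset, len);
}

static int demo_page_copy(void)
{
	unsigned char page[PAGE_SIZE] = { 0 };
	unsigned char out[4];

	page[10] = 42;
	fake_memcpy_from_page(out, page, 8, 4);
	return out[2];	/* byte that sat at page offset 10 */
}
```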
+1 -1
drivers/gpu/drm/gma500/intel_bios.c
··· 6 6 * Eric Anholt <eric@anholt.net> 7 7 */ 8 8 #include <drm/drm.h> 9 - #include <drm/drm_dp_helper.h> 9 + #include <drm/dp/drm_dp_helper.h> 10 10 11 11 #include "intel_bios.h" 12 12 #include "psb_drv.h"
+4 -4
drivers/gpu/drm/gma500/mmu.c
··· 184 184 pd->invalid_pte = 0; 185 185 } 186 186 187 - v = kmap(pd->dummy_pt); 187 + v = kmap_local_page(pd->dummy_pt); 188 188 for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i) 189 189 v[i] = pd->invalid_pte; 190 190 191 - kunmap(pd->dummy_pt); 191 + kunmap_local(v); 192 192 193 - v = kmap(pd->p); 193 + v = kmap_local_page(pd->p); 194 194 for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i) 195 195 v[i] = pd->invalid_pde; 196 196 197 - kunmap(pd->p); 197 + kunmap_local(v); 198 198 199 199 clear_page(kmap(pd->dummy_page)); 200 200 kunmap(pd->dummy_page);
+2 -1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 20 20 #include <drm/drm_gem_framebuffer_helper.h> 21 21 #include <drm/drm_gem_vram_helper.h> 22 22 #include <drm/drm_managed.h> 23 + #include <drm/drm_module.h> 23 24 #include <drm/drm_vblank.h> 24 25 25 26 #include "hibmc_drm_drv.h" ··· 380 379 .driver.pm = &hibmc_pm_ops, 381 380 }; 382 381 383 - module_pci_driver(hibmc_pci_driver); 382 + drm_module_pci_driver(hibmc_pci_driver); 384 383 385 384 MODULE_DEVICE_TABLE(pci, hibmc_pci_table); 386 385 MODULE_AUTHOR("RongrongZou <zourongrong@huawei.com>");
+2
drivers/gpu/drm/i915/Kconfig
··· 9 9 # the shmem_readpage() which depends upon tmpfs 10 10 select SHMEM 11 11 select TMPFS 12 + select DRM_DP_HELPER 12 13 select DRM_KMS_HELPER 13 14 select DRM_PANEL 14 15 select DRM_MIPI_DSI ··· 28 27 select CEC_CORE if CEC_NOTIFIER 29 28 select VMAP_PFN 30 29 select DRM_TTM 30 + select DRM_BUDDY 31 31 help 32 32 Choose this option if you have a system that has "Intel Graphics 33 33 Media Accelerator" or "HD Graphics" integrated graphics,
-1
drivers/gpu/drm/i915/Makefile
··· 161 161 i915-y += \ 162 162 $(gem-y) \ 163 163 i915_active.o \ 164 - i915_buddy.o \ 165 164 i915_cmd_parser.o \ 166 165 i915_deps.o \ 167 166 i915_gem_evict.o \
+1 -1
drivers/gpu/drm/i915/display/intel_bios.c
··· 25 25 * 26 26 */ 27 27 28 - #include <drm/drm_dp_helper.h> 28 + #include <drm/dp/drm_dp_helper.h> 29 29 30 30 #include "display/intel_display.h" 31 31 #include "display/intel_display_types.h"
+1 -1
drivers/gpu/drm/i915/display/intel_display.c
··· 38 38 #include <drm/drm_atomic_helper.h> 39 39 #include <drm/drm_atomic_uapi.h> 40 40 #include <drm/drm_damage_helper.h> 41 - #include <drm/drm_dp_helper.h> 41 + #include <drm/dp/drm_dp_helper.h> 42 42 #include <drm/drm_edid.h> 43 43 #include <drm/drm_fourcc.h> 44 44 #include <drm/drm_plane_helper.h>
+2 -2
drivers/gpu/drm/i915/display/intel_display_types.h
··· 32 32 #include <linux/pwm.h> 33 33 #include <linux/sched/clock.h> 34 34 35 + #include <drm/dp/drm_dp_dual_mode_helper.h> 36 + #include <drm/dp/drm_dp_mst_helper.h> 35 37 #include <drm/drm_atomic.h> 36 38 #include <drm/drm_crtc.h> 37 - #include <drm/drm_dp_dual_mode_helper.h> 38 - #include <drm/drm_dp_mst_helper.h> 39 39 #include <drm/drm_dsc.h> 40 40 #include <drm/drm_encoder.h> 41 41 #include <drm/drm_fb_helper.h>
+1 -1
drivers/gpu/drm/i915/display/intel_dp.c
··· 36 36 37 37 #include <drm/drm_atomic_helper.h> 38 38 #include <drm/drm_crtc.h> 39 - #include <drm/drm_dp_helper.h> 39 + #include <drm/dp/drm_dp_helper.h> 40 40 #include <drm/drm_edid.h> 41 41 #include <drm/drm_probe_helper.h> 42 42
+2 -2
drivers/gpu/drm/i915/display/intel_dp_hdcp.c
··· 6 6 * Sean Paul <seanpaul@chromium.org> 7 7 */ 8 8 9 - #include <drm/drm_dp_helper.h> 10 - #include <drm/drm_dp_mst_helper.h> 9 + #include <drm/dp/drm_dp_helper.h> 10 + #include <drm/dp/drm_dp_mst_helper.h> 11 11 #include <drm/drm_hdcp.h> 12 12 #include <drm/drm_print.h> 13 13
+3 -3
drivers/gpu/drm/i915/display/intel_hdmi.c
··· 730 730 else 731 731 frame->colorspace = HDMI_COLORSPACE_RGB; 732 732 733 - drm_hdmi_avi_infoframe_colorspace(frame, conn_state); 733 + drm_hdmi_avi_infoframe_colorimetry(frame, conn_state); 734 734 735 735 /* nonsense combination */ 736 736 drm_WARN_ON(encoder->base.dev, crtc_state->limited_color_range && ··· 1912 1912 if (ycbcr420_output) 1913 1913 return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_36; 1914 1914 else 1915 - return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36; 1915 + return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_36; 1916 1916 case 10: 1917 1917 if (!has_hdmi_sink) 1918 1918 return false; ··· 1920 1920 if (ycbcr420_output) 1921 1921 return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_30; 1922 1922 else 1923 - return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30; 1923 + return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30; 1924 1924 case 8: 1925 1925 return true; 1926 1926 default:
+2 -2
drivers/gpu/drm/i915/display/intel_lspcon.c
··· 24 24 */ 25 25 26 26 #include <drm/drm_atomic_helper.h> 27 - #include <drm/drm_dp_dual_mode_helper.h> 27 + #include <drm/dp/drm_dp_dual_mode_helper.h> 28 28 #include <drm/drm_edid.h> 29 29 30 30 #include "intel_de.h" ··· 537 537 frame.avi.colorspace = HDMI_COLORSPACE_RGB; 538 538 539 539 /* Set the Colorspace as per the HDMI spec */ 540 - drm_hdmi_avi_infoframe_colorspace(&frame.avi, conn_state); 540 + drm_hdmi_avi_infoframe_colorimetry(&frame.avi, conn_state); 541 541 542 542 /* nonsense combination */ 543 543 drm_WARN_ON(encoder->base.dev, crtc_state->limited_color_range &&
-466
drivers/gpu/drm/i915/i915_buddy.c
··· 1 - // SPDX-License-Identifier: MIT 2 - /* 3 - * Copyright © 2021 Intel Corporation 4 - */ 5 - 6 - #include <linux/kmemleak.h> 7 - #include <linux/sizes.h> 8 - 9 - #include "i915_buddy.h" 10 - 11 - #include "i915_gem.h" 12 - #include "i915_utils.h" 13 - 14 - static struct kmem_cache *slab_blocks; 15 - 16 - static struct i915_buddy_block *i915_block_alloc(struct i915_buddy_mm *mm, 17 - struct i915_buddy_block *parent, 18 - unsigned int order, 19 - u64 offset) 20 - { 21 - struct i915_buddy_block *block; 22 - 23 - GEM_BUG_ON(order > I915_BUDDY_MAX_ORDER); 24 - 25 - block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL); 26 - if (!block) 27 - return NULL; 28 - 29 - block->header = offset; 30 - block->header |= order; 31 - block->parent = parent; 32 - 33 - GEM_BUG_ON(block->header & I915_BUDDY_HEADER_UNUSED); 34 - return block; 35 - } 36 - 37 - static void i915_block_free(struct i915_buddy_mm *mm, 38 - struct i915_buddy_block *block) 39 - { 40 - kmem_cache_free(slab_blocks, block); 41 - } 42 - 43 - static void mark_allocated(struct i915_buddy_block *block) 44 - { 45 - block->header &= ~I915_BUDDY_HEADER_STATE; 46 - block->header |= I915_BUDDY_ALLOCATED; 47 - 48 - list_del(&block->link); 49 - } 50 - 51 - static void mark_free(struct i915_buddy_mm *mm, 52 - struct i915_buddy_block *block) 53 - { 54 - block->header &= ~I915_BUDDY_HEADER_STATE; 55 - block->header |= I915_BUDDY_FREE; 56 - 57 - list_add(&block->link, 58 - &mm->free_list[i915_buddy_block_order(block)]); 59 - } 60 - 61 - static void mark_split(struct i915_buddy_block *block) 62 - { 63 - block->header &= ~I915_BUDDY_HEADER_STATE; 64 - block->header |= I915_BUDDY_SPLIT; 65 - 66 - list_del(&block->link); 67 - } 68 - 69 - int i915_buddy_init(struct i915_buddy_mm *mm, u64 size, u64 chunk_size) 70 - { 71 - unsigned int i; 72 - u64 offset; 73 - 74 - if (size < chunk_size) 75 - return -EINVAL; 76 - 77 - if (chunk_size < PAGE_SIZE) 78 - return -EINVAL; 79 - 80 - if (!is_power_of_2(chunk_size)) 81 - return -EINVAL; 82 - 
83 - size = round_down(size, chunk_size); 84 - 85 - mm->size = size; 86 - mm->avail = size; 87 - mm->chunk_size = chunk_size; 88 - mm->max_order = ilog2(size) - ilog2(chunk_size); 89 - 90 - GEM_BUG_ON(mm->max_order > I915_BUDDY_MAX_ORDER); 91 - 92 - mm->free_list = kmalloc_array(mm->max_order + 1, 93 - sizeof(struct list_head), 94 - GFP_KERNEL); 95 - if (!mm->free_list) 96 - return -ENOMEM; 97 - 98 - for (i = 0; i <= mm->max_order; ++i) 99 - INIT_LIST_HEAD(&mm->free_list[i]); 100 - 101 - mm->n_roots = hweight64(size); 102 - 103 - mm->roots = kmalloc_array(mm->n_roots, 104 - sizeof(struct i915_buddy_block *), 105 - GFP_KERNEL); 106 - if (!mm->roots) 107 - goto out_free_list; 108 - 109 - offset = 0; 110 - i = 0; 111 - 112 - /* 113 - * Split into power-of-two blocks, in case we are given a size that is 114 - * not itself a power-of-two. 115 - */ 116 - do { 117 - struct i915_buddy_block *root; 118 - unsigned int order; 119 - u64 root_size; 120 - 121 - root_size = rounddown_pow_of_two(size); 122 - order = ilog2(root_size) - ilog2(chunk_size); 123 - 124 - root = i915_block_alloc(mm, NULL, order, offset); 125 - if (!root) 126 - goto out_free_roots; 127 - 128 - mark_free(mm, root); 129 - 130 - GEM_BUG_ON(i > mm->max_order); 131 - GEM_BUG_ON(i915_buddy_block_size(mm, root) < chunk_size); 132 - 133 - mm->roots[i] = root; 134 - 135 - offset += root_size; 136 - size -= root_size; 137 - i++; 138 - } while (size); 139 - 140 - return 0; 141 - 142 - out_free_roots: 143 - while (i--) 144 - i915_block_free(mm, mm->roots[i]); 145 - kfree(mm->roots); 146 - out_free_list: 147 - kfree(mm->free_list); 148 - return -ENOMEM; 149 - } 150 - 151 - void i915_buddy_fini(struct i915_buddy_mm *mm) 152 - { 153 - int i; 154 - 155 - for (i = 0; i < mm->n_roots; ++i) { 156 - GEM_WARN_ON(!i915_buddy_block_is_free(mm->roots[i])); 157 - i915_block_free(mm, mm->roots[i]); 158 - } 159 - 160 - GEM_WARN_ON(mm->avail != mm->size); 161 - 162 - kfree(mm->roots); 163 - kfree(mm->free_list); 164 - } 165 - 166 - 
static int split_block(struct i915_buddy_mm *mm, 167 - struct i915_buddy_block *block) 168 - { 169 - unsigned int block_order = i915_buddy_block_order(block) - 1; 170 - u64 offset = i915_buddy_block_offset(block); 171 - 172 - GEM_BUG_ON(!i915_buddy_block_is_free(block)); 173 - GEM_BUG_ON(!i915_buddy_block_order(block)); 174 - 175 - block->left = i915_block_alloc(mm, block, block_order, offset); 176 - if (!block->left) 177 - return -ENOMEM; 178 - 179 - block->right = i915_block_alloc(mm, block, block_order, 180 - offset + (mm->chunk_size << block_order)); 181 - if (!block->right) { 182 - i915_block_free(mm, block->left); 183 - return -ENOMEM; 184 - } 185 - 186 - mark_free(mm, block->left); 187 - mark_free(mm, block->right); 188 - 189 - mark_split(block); 190 - 191 - return 0; 192 - } 193 - 194 - static struct i915_buddy_block * 195 - get_buddy(struct i915_buddy_block *block) 196 - { 197 - struct i915_buddy_block *parent; 198 - 199 - parent = block->parent; 200 - if (!parent) 201 - return NULL; 202 - 203 - if (parent->left == block) 204 - return parent->right; 205 - 206 - return parent->left; 207 - } 208 - 209 - static void __i915_buddy_free(struct i915_buddy_mm *mm, 210 - struct i915_buddy_block *block) 211 - { 212 - struct i915_buddy_block *parent; 213 - 214 - while ((parent = block->parent)) { 215 - struct i915_buddy_block *buddy; 216 - 217 - buddy = get_buddy(block); 218 - 219 - if (!i915_buddy_block_is_free(buddy)) 220 - break; 221 - 222 - list_del(&buddy->link); 223 - 224 - i915_block_free(mm, block); 225 - i915_block_free(mm, buddy); 226 - 227 - block = parent; 228 - } 229 - 230 - mark_free(mm, block); 231 - } 232 - 233 - void i915_buddy_free(struct i915_buddy_mm *mm, 234 - struct i915_buddy_block *block) 235 - { 236 - GEM_BUG_ON(!i915_buddy_block_is_allocated(block)); 237 - mm->avail += i915_buddy_block_size(mm, block); 238 - __i915_buddy_free(mm, block); 239 - } 240 - 241 - void i915_buddy_free_list(struct i915_buddy_mm *mm, struct list_head *objects) 242 - 
{ 243 - struct i915_buddy_block *block, *on; 244 - 245 - list_for_each_entry_safe(block, on, objects, link) { 246 - i915_buddy_free(mm, block); 247 - cond_resched(); 248 - } 249 - INIT_LIST_HEAD(objects); 250 - } 251 - 252 - /* 253 - * Allocate power-of-two block. The order value here translates to: 254 - * 255 - * 0 = 2^0 * mm->chunk_size 256 - * 1 = 2^1 * mm->chunk_size 257 - * 2 = 2^2 * mm->chunk_size 258 - * ... 259 - */ 260 - struct i915_buddy_block * 261 - i915_buddy_alloc(struct i915_buddy_mm *mm, unsigned int order) 262 - { 263 - struct i915_buddy_block *block = NULL; 264 - unsigned int i; 265 - int err; 266 - 267 - for (i = order; i <= mm->max_order; ++i) { 268 - block = list_first_entry_or_null(&mm->free_list[i], 269 - struct i915_buddy_block, 270 - link); 271 - if (block) 272 - break; 273 - } 274 - 275 - if (!block) 276 - return ERR_PTR(-ENOSPC); 277 - 278 - GEM_BUG_ON(!i915_buddy_block_is_free(block)); 279 - 280 - while (i != order) { 281 - err = split_block(mm, block); 282 - if (unlikely(err)) 283 - goto out_free; 284 - 285 - /* Go low */ 286 - block = block->left; 287 - i--; 288 - } 289 - 290 - mark_allocated(block); 291 - mm->avail -= i915_buddy_block_size(mm, block); 292 - kmemleak_update_trace(block); 293 - return block; 294 - 295 - out_free: 296 - if (i != order) 297 - __i915_buddy_free(mm, block); 298 - return ERR_PTR(err); 299 - } 300 - 301 - static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2) 302 - { 303 - return s1 <= e2 && e1 >= s2; 304 - } 305 - 306 - static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2) 307 - { 308 - return s1 <= s2 && e1 >= e2; 309 - } 310 - 311 - /* 312 - * Allocate range. Note that it's safe to chain together multiple alloc_ranges 313 - * with the same blocks list. 
314 - * 315 - * Intended for pre-allocating portions of the address space, for example to 316 - * reserve a block for the initial framebuffer or similar, hence the expectation 317 - * here is that i915_buddy_alloc() is still the main vehicle for 318 - * allocations, so if that's not the case then the drm_mm range allocator is 319 - * probably a much better fit, and so you should probably go use that instead. 320 - */ 321 - int i915_buddy_alloc_range(struct i915_buddy_mm *mm, 322 - struct list_head *blocks, 323 - u64 start, u64 size) 324 - { 325 - struct i915_buddy_block *block; 326 - struct i915_buddy_block *buddy; 327 - LIST_HEAD(allocated); 328 - LIST_HEAD(dfs); 329 - u64 end; 330 - int err; 331 - int i; 332 - 333 - if (size < mm->chunk_size) 334 - return -EINVAL; 335 - 336 - if (!IS_ALIGNED(size | start, mm->chunk_size)) 337 - return -EINVAL; 338 - 339 - if (range_overflows(start, size, mm->size)) 340 - return -EINVAL; 341 - 342 - for (i = 0; i < mm->n_roots; ++i) 343 - list_add_tail(&mm->roots[i]->tmp_link, &dfs); 344 - 345 - end = start + size - 1; 346 - 347 - do { 348 - u64 block_start; 349 - u64 block_end; 350 - 351 - block = list_first_entry_or_null(&dfs, 352 - struct i915_buddy_block, 353 - tmp_link); 354 - if (!block) 355 - break; 356 - 357 - list_del(&block->tmp_link); 358 - 359 - block_start = i915_buddy_block_offset(block); 360 - block_end = block_start + i915_buddy_block_size(mm, block) - 1; 361 - 362 - if (!overlaps(start, end, block_start, block_end)) 363 - continue; 364 - 365 - if (i915_buddy_block_is_allocated(block)) { 366 - err = -ENOSPC; 367 - goto err_free; 368 - } 369 - 370 - if (contains(start, end, block_start, block_end)) { 371 - if (!i915_buddy_block_is_free(block)) { 372 - err = -ENOSPC; 373 - goto err_free; 374 - } 375 - 376 - mark_allocated(block); 377 - mm->avail -= i915_buddy_block_size(mm, block); 378 - list_add_tail(&block->link, &allocated); 379 - continue; 380 - } 381 - 382 - if (!i915_buddy_block_is_split(block)) { 383 - err = 
split_block(mm, block); 384 - if (unlikely(err)) 385 - goto err_undo; 386 - } 387 - 388 - list_add(&block->right->tmp_link, &dfs); 389 - list_add(&block->left->tmp_link, &dfs); 390 - } while (1); 391 - 392 - list_splice_tail(&allocated, blocks); 393 - return 0; 394 - 395 - err_undo: 396 - /* 397 - * We really don't want to leave around a bunch of split blocks, since 398 - * bigger is better, so make sure we merge everything back before we 399 - * free the allocated blocks. 400 - */ 401 - buddy = get_buddy(block); 402 - if (buddy && 403 - (i915_buddy_block_is_free(block) && 404 - i915_buddy_block_is_free(buddy))) 405 - __i915_buddy_free(mm, block); 406 - 407 - err_free: 408 - i915_buddy_free_list(mm, &allocated); 409 - return err; 410 - } 411 - 412 - void i915_buddy_block_print(struct i915_buddy_mm *mm, 413 - struct i915_buddy_block *block, 414 - struct drm_printer *p) 415 - { 416 - u64 start = i915_buddy_block_offset(block); 417 - u64 size = i915_buddy_block_size(mm, block); 418 - 419 - drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size); 420 - } 421 - 422 - void i915_buddy_print(struct i915_buddy_mm *mm, struct drm_printer *p) 423 - { 424 - int order; 425 - 426 - drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB\n", 427 - mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20); 428 - 429 - for (order = mm->max_order; order >= 0; order--) { 430 - struct i915_buddy_block *block; 431 - u64 count = 0, free; 432 - 433 - list_for_each_entry(block, &mm->free_list[order], link) { 434 - GEM_BUG_ON(!i915_buddy_block_is_free(block)); 435 - count++; 436 - } 437 - 438 - drm_printf(p, "order-%d ", order); 439 - 440 - free = count * (mm->chunk_size << order); 441 - if (free < SZ_1M) 442 - drm_printf(p, "free: %lluKiB", free >> 10); 443 - else 444 - drm_printf(p, "free: %lluMiB", free >> 20); 445 - 446 - drm_printf(p, ", pages: %llu\n", count); 447 - } 448 - } 449 - 450 - #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) 451 - #include 
"selftests/i915_buddy.c" 452 - #endif 453 - 454 - void i915_buddy_module_exit(void) 455 - { 456 - kmem_cache_destroy(slab_blocks); 457 - } 458 - 459 - int __init i915_buddy_module_init(void) 460 - { 461 - slab_blocks = KMEM_CACHE(i915_buddy_block, 0); 462 - if (!slab_blocks) 463 - return -ENOMEM; 464 - 465 - return 0; 466 - }
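The removed `i915_buddy_init()` above splits a size that is not a power of two into one root block per set bit, each a power-of-two multiple of `chunk_size` (hence `n_roots = hweight64(size)`). That root-splitting arithmetic, extracted into a self-contained userspace sketch:

```c
#include <assert.h>
#include <stdint.h>

static unsigned int ilog2_u64(uint64_t v)	/* like the kernel's ilog2() */
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Decompose `size` into power-of-two roots, largest first, recording
 * each root's order relative to chunk_size.  Returns the root count. */
static unsigned int split_into_roots(uint64_t size, uint64_t chunk_size,
				     unsigned int *orders, unsigned int max)
{
	unsigned int n = 0;

	size &= ~(chunk_size - 1);		/* round_down(size, chunk_size) */
	while (size && n < max) {
		uint64_t root_size = UINT64_C(1) << ilog2_u64(size);

		orders[n++] = ilog2_u64(root_size) - ilog2_u64(chunk_size);
		size -= root_size;
	}
	return n;
}

static int demo_roots(void)
{
	unsigned int orders[8];
	/* 12 KiB with 4 KiB chunks -> an 8 KiB root (order 1) + a 4 KiB
	 * root (order 0), matching hweight64(12 KiB) == 2 roots. */
	unsigned int n = split_into_roots(12288, 4096, orders, 8);

	return n == 2 && orders[0] == 1 && orders[1] == 0;
}
```

The same logic now lives in the shared `drm_buddy` code the series moves this file into.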
-143
drivers/gpu/drm/i915/i915_buddy.h
··· 1 - /* SPDX-License-Identifier: MIT */ 2 - /* 3 - * Copyright © 2021 Intel Corporation 4 - */ 5 - 6 - #ifndef __I915_BUDDY_H__ 7 - #define __I915_BUDDY_H__ 8 - 9 - #include <linux/bitops.h> 10 - #include <linux/list.h> 11 - #include <linux/slab.h> 12 - 13 - #include <drm/drm_print.h> 14 - 15 - struct i915_buddy_block { 16 - #define I915_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12) 17 - #define I915_BUDDY_HEADER_STATE GENMASK_ULL(11, 10) 18 - #define I915_BUDDY_ALLOCATED (1 << 10) 19 - #define I915_BUDDY_FREE (2 << 10) 20 - #define I915_BUDDY_SPLIT (3 << 10) 21 - /* Free to be used, if needed in the future */ 22 - #define I915_BUDDY_HEADER_UNUSED GENMASK_ULL(9, 6) 23 - #define I915_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0) 24 - u64 header; 25 - 26 - struct i915_buddy_block *left; 27 - struct i915_buddy_block *right; 28 - struct i915_buddy_block *parent; 29 - 30 - void *private; /* owned by creator */ 31 - 32 - /* 33 - * While the block is allocated by the user through i915_buddy_alloc*, 34 - * the user has ownership of the link, for example to maintain within 35 - * a list, if so desired. As soon as the block is freed with 36 - * i915_buddy_free* ownership is given back to the mm. 37 - */ 38 - struct list_head link; 39 - struct list_head tmp_link; 40 - }; 41 - 42 - /* Order-zero must be at least PAGE_SIZE */ 43 - #define I915_BUDDY_MAX_ORDER (63 - PAGE_SHIFT) 44 - 45 - /* 46 - * Binary Buddy System. 47 - * 48 - * Locking should be handled by the user, a simple mutex around 49 - * i915_buddy_alloc* and i915_buddy_free* should suffice. 50 - */ 51 - struct i915_buddy_mm { 52 - /* Maintain a free list for each order. */ 53 - struct list_head *free_list; 54 - 55 - /* 56 - * Maintain explicit binary tree(s) to track the allocation of the 57 - * address space. This gives us a simple way of finding a buddy block 58 - * and performing the potentially recursive merge step when freeing a 59 - * block. 
Nodes are either allocated or free, in which case they will 60 - * also exist on the respective free list. 61 - */ 62 - struct i915_buddy_block **roots; 63 - 64 - /* 65 - * Anything from here is public, and remains static for the lifetime of 66 - * the mm. Everything above is considered do-not-touch. 67 - */ 68 - unsigned int n_roots; 69 - unsigned int max_order; 70 - 71 - /* Must be at least PAGE_SIZE */ 72 - u64 chunk_size; 73 - u64 size; 74 - u64 avail; 75 - }; 76 - 77 - static inline u64 78 - i915_buddy_block_offset(struct i915_buddy_block *block) 79 - { 80 - return block->header & I915_BUDDY_HEADER_OFFSET; 81 - } 82 - 83 - static inline unsigned int 84 - i915_buddy_block_order(struct i915_buddy_block *block) 85 - { 86 - return block->header & I915_BUDDY_HEADER_ORDER; 87 - } 88 - 89 - static inline unsigned int 90 - i915_buddy_block_state(struct i915_buddy_block *block) 91 - { 92 - return block->header & I915_BUDDY_HEADER_STATE; 93 - } 94 - 95 - static inline bool 96 - i915_buddy_block_is_allocated(struct i915_buddy_block *block) 97 - { 98 - return i915_buddy_block_state(block) == I915_BUDDY_ALLOCATED; 99 - } 100 - 101 - static inline bool 102 - i915_buddy_block_is_free(struct i915_buddy_block *block) 103 - { 104 - return i915_buddy_block_state(block) == I915_BUDDY_FREE; 105 - } 106 - 107 - static inline bool 108 - i915_buddy_block_is_split(struct i915_buddy_block *block) 109 - { 110 - return i915_buddy_block_state(block) == I915_BUDDY_SPLIT; 111 - } 112 - 113 - static inline u64 114 - i915_buddy_block_size(struct i915_buddy_mm *mm, 115 - struct i915_buddy_block *block) 116 - { 117 - return mm->chunk_size << i915_buddy_block_order(block); 118 - } 119 - 120 - int i915_buddy_init(struct i915_buddy_mm *mm, u64 size, u64 chunk_size); 121 - 122 - void i915_buddy_fini(struct i915_buddy_mm *mm); 123 - 124 - struct i915_buddy_block * 125 - i915_buddy_alloc(struct i915_buddy_mm *mm, unsigned int order); 126 - 127 - int i915_buddy_alloc_range(struct i915_buddy_mm *mm, 
128 - struct list_head *blocks, 129 - u64 start, u64 size); 130 - 131 - void i915_buddy_free(struct i915_buddy_mm *mm, struct i915_buddy_block *block); 132 - 133 - void i915_buddy_free_list(struct i915_buddy_mm *mm, struct list_head *objects); 134 - 135 - void i915_buddy_print(struct i915_buddy_mm *mm, struct drm_printer *p); 136 - void i915_buddy_block_print(struct i915_buddy_mm *mm, 137 - struct i915_buddy_block *block, 138 - struct drm_printer *p); 139 - 140 - void i915_buddy_module_exit(void); 141 - int i915_buddy_module_init(void); 142 - 143 - #endif
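The removed header above packs a block's offset, state, and order into a single `u64` (`I915_BUDDY_HEADER_OFFSET` in bits 63:12, state in 11:10, order in 5:0). A sketch of that packing with the `GENMASK_ULL` values spelled out as constants (the mask names here are local stand-ins, not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

#define HDR_OFFSET_MASK	0xfffffffffffff000ULL	/* GENMASK_ULL(63, 12) */
#define HDR_STATE_MASK	0x0000000000000c00ULL	/* GENMASK_ULL(11, 10) */
#define HDR_ORDER_MASK	0x000000000000003fULL	/* GENMASK_ULL(5, 0) */
#define STATE_ALLOCATED	(1ULL << 10)		/* I915_BUDDY_ALLOCATED */

/* Offsets are page-aligned, so the low 12 bits are free for metadata:
 * one word carries offset, state, and order together. */
static uint64_t pack_header(uint64_t offset, uint64_t state, unsigned int order)
{
	return (offset & HDR_OFFSET_MASK) | (state & HDR_STATE_MASK) |
	       ((uint64_t)order & HDR_ORDER_MASK);
}

static uint64_t header_offset(uint64_t h) { return h & HDR_OFFSET_MASK; }
static unsigned int header_order(uint64_t h) { return h & HDR_ORDER_MASK; }
```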
-3
drivers/gpu/drm/i915/i915_module.c
··· 9 9 #include "gem/i915_gem_context.h" 10 10 #include "gem/i915_gem_object.h" 11 11 #include "i915_active.h" 12 - #include "i915_buddy.h" 13 12 #include "i915_params.h" 14 13 #include "i915_pci.h" 15 14 #include "i915_perf.h" ··· 49 50 { .init = i915_check_nomodeset }, 50 51 { .init = i915_active_module_init, 51 52 .exit = i915_active_module_exit }, 52 - { .init = i915_buddy_module_init, 53 - .exit = i915_buddy_module_exit }, 54 53 { .init = i915_context_module_init, 55 54 .exit = i915_context_module_exit }, 56 55 { .init = i915_gem_context_module_init,
+5 -6
drivers/gpu/drm/i915/i915_scatterlist.c
··· 5 5 */ 6 6 7 7 #include "i915_scatterlist.h" 8 - 9 - #include "i915_buddy.h" 10 8 #include "i915_ttm_buddy_manager.h" 11 9 10 + #include <drm/drm_buddy.h> 12 11 #include <drm/drm_mm.h> 13 12 14 13 #include <linux/slab.h> ··· 152 153 struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); 153 154 const u64 size = res->num_pages << PAGE_SHIFT; 154 155 const u64 max_segment = rounddown(UINT_MAX, PAGE_SIZE); 155 - struct i915_buddy_mm *mm = bman_res->mm; 156 + struct drm_buddy *mm = bman_res->mm; 156 157 struct list_head *blocks = &bman_res->blocks; 157 - struct i915_buddy_block *block; 158 + struct drm_buddy_block *block; 158 159 struct i915_refct_sgt *rsgt; 159 160 struct scatterlist *sg; 160 161 struct sg_table *st; ··· 180 181 list_for_each_entry(block, blocks, link) { 181 182 u64 block_size, offset; 182 183 183 - block_size = min_t(u64, size, i915_buddy_block_size(mm, block)); 184 - offset = i915_buddy_block_offset(block); 184 + block_size = min_t(u64, size, drm_buddy_block_size(mm, block)); 185 + offset = drm_buddy_block_offset(block); 185 186 186 187 while (block_size) { 187 188 u64 len;
+20 -17
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
··· 8 8 #include <drm/ttm/ttm_bo_driver.h> 9 9 #include <drm/ttm/ttm_placement.h> 10 10 11 + #include <drm/drm_buddy.h> 12 + 11 13 #include "i915_ttm_buddy_manager.h" 12 14 13 - #include "i915_buddy.h" 14 15 #include "i915_gem.h" 15 16 16 17 struct i915_ttm_buddy_manager { 17 18 struct ttm_resource_manager manager; 18 - struct i915_buddy_mm mm; 19 + struct drm_buddy mm; 19 20 struct list_head reserved; 20 21 struct mutex lock; 21 22 u64 default_page_size; ··· 35 34 { 36 35 struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 37 36 struct i915_ttm_buddy_resource *bman_res; 38 - struct i915_buddy_mm *mm = &bman->mm; 37 + struct drm_buddy *mm = &bman->mm; 39 38 unsigned long n_pages; 40 39 unsigned int min_order; 41 40 u64 min_page_size; ··· 74 73 n_pages = size >> ilog2(mm->chunk_size); 75 74 76 75 do { 77 - struct i915_buddy_block *block; 76 + struct drm_buddy_block *block; 78 77 unsigned int order; 79 78 80 79 order = fls(n_pages) - 1; ··· 83 82 84 83 do { 85 84 mutex_lock(&bman->lock); 86 - block = i915_buddy_alloc(mm, order); 85 + block = drm_buddy_alloc_blocks(mm, order); 87 86 mutex_unlock(&bman->lock); 88 87 if (!IS_ERR(block)) 89 88 break; ··· 107 106 108 107 err_free_blocks: 109 108 mutex_lock(&bman->lock); 110 - i915_buddy_free_list(mm, &bman_res->blocks); 109 + drm_buddy_free_list(mm, &bman_res->blocks); 111 110 mutex_unlock(&bman->lock); 112 111 err_free_res: 112 + ttm_resource_fini(man, &bman_res->base); 113 113 kfree(bman_res); 114 114 return err; 115 115 } ··· 122 120 struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 123 121 124 122 mutex_lock(&bman->lock); 125 - i915_buddy_free_list(&bman->mm, &bman_res->blocks); 123 + drm_buddy_free_list(&bman->mm, &bman_res->blocks); 126 124 mutex_unlock(&bman->lock); 127 125 126 + ttm_resource_fini(man, res); 128 127 kfree(bman_res); 129 128 } 130 129 ··· 133 130 struct drm_printer *printer) 134 131 { 135 132 struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 136 - struct 
i915_buddy_block *block; 133 + struct drm_buddy_block *block; 137 134 138 135 mutex_lock(&bman->lock); 139 136 drm_printf(printer, "default_page_size: %lluKiB\n", 140 137 bman->default_page_size >> 10); 141 138 142 - i915_buddy_print(&bman->mm, printer); 139 + drm_buddy_print(&bman->mm, printer); 143 140 144 141 drm_printf(printer, "reserved:\n"); 145 142 list_for_each_entry(block, &bman->reserved, link) 146 - i915_buddy_block_print(&bman->mm, block, printer); 143 + drm_buddy_block_print(&bman->mm, block, printer); 147 144 mutex_unlock(&bman->lock); 148 145 } 149 146 ··· 193 190 if (!bman) 194 191 return -ENOMEM; 195 192 196 - err = i915_buddy_init(&bman->mm, size, chunk_size); 193 + err = drm_buddy_init(&bman->mm, size, chunk_size); 197 194 if (err) 198 195 goto err_free_bman; 199 196 ··· 205 202 man = &bman->manager; 206 203 man->use_tt = use_tt; 207 204 man->func = &i915_ttm_buddy_manager_func; 208 - ttm_resource_manager_init(man, bman->mm.size >> PAGE_SHIFT); 205 + ttm_resource_manager_init(man, bdev, bman->mm.size >> PAGE_SHIFT); 209 206 210 207 ttm_resource_manager_set_used(man, true); 211 208 ttm_set_driver_manager(bdev, type, man); ··· 231 228 { 232 229 struct ttm_resource_manager *man = ttm_manager_type(bdev, type); 233 230 struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 234 - struct i915_buddy_mm *mm = &bman->mm; 231 + struct drm_buddy *mm = &bman->mm; 235 232 int ret; 236 233 237 234 ttm_resource_manager_set_used(man, false); ··· 243 240 ttm_set_driver_manager(bdev, type, NULL); 244 241 245 242 mutex_lock(&bman->lock); 246 - i915_buddy_free_list(mm, &bman->reserved); 247 - i915_buddy_fini(mm); 243 + drm_buddy_free_list(mm, &bman->reserved); 244 + drm_buddy_fini(mm); 248 245 mutex_unlock(&bman->lock); 249 246 250 247 ttm_resource_manager_cleanup(man); ··· 267 264 u64 start, u64 size) 268 265 { 269 266 struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 270 - struct i915_buddy_mm *mm = &bman->mm; 267 + struct drm_buddy *mm = 
&bman->mm; 271 268 int ret; 272 269 273 270 mutex_lock(&bman->lock); 274 - ret = i915_buddy_alloc_range(mm, &bman->reserved, start, size); 271 + ret = drm_buddy_alloc_range(mm, &bman->reserved, start, size); 275 272 mutex_unlock(&bman->lock); 276 273 277 274 return ret;
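The allocation loop in the manager above sizes each request greedily: every pass asks for the largest power-of-two order that fits the remaining page count (`order = fls(n_pages) - 1`) until everything is covered, so a request decomposes into one block per set bit of `n_pages`. A userspace sketch of that sizing loop:

```c
#include <assert.h>

static unsigned int fls_u32(unsigned int v)	/* like the kernel's fls() */
{
	unsigned int r = 0;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

/* Count how many buddy blocks a request of n_pages decomposes into,
 * following the greedy order = fls(n_pages) - 1 strategy above. */
static unsigned int count_alloc_steps(unsigned int n_pages)
{
	unsigned int steps = 0;

	while (n_pages) {
		unsigned int order = fls_u32(n_pages) - 1;

		n_pages -= 1u << order;	/* this pass covers 2^order pages */
		steps++;
	}
	return steps;
}
```

So 7 pages becomes three blocks (4 + 2 + 1) while 8 pages is satisfied by a single order-3 block, which is why power-of-two sized buffers fragment least under this scheme.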
+2 -2
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
··· 13 13 14 14 struct ttm_device; 15 15 struct ttm_resource_manager; 16 - struct i915_buddy_mm; 16 + struct drm_buddy; 17 17 18 18 /** 19 19 * struct i915_ttm_buddy_resource ··· 28 28 struct i915_ttm_buddy_resource { 29 29 struct ttm_resource base; 30 30 struct list_head blocks; 31 - struct i915_buddy_mm *mm; 31 + struct drm_buddy *mm; 32 32 }; 33 33 34 34 /**
-787
drivers/gpu/drm/i915/selftests/i915_buddy.c
··· 1 - // SPDX-License-Identifier: MIT 2 - /* 3 - * Copyright © 2019 Intel Corporation 4 - */ 5 - 6 - #include <linux/prime_numbers.h> 7 - 8 - #include "../i915_selftest.h" 9 - #include "i915_random.h" 10 - 11 - static void __igt_dump_block(struct i915_buddy_mm *mm, 12 - struct i915_buddy_block *block, 13 - bool buddy) 14 - { 15 - pr_err("block info: header=%llx, state=%u, order=%d, offset=%llx size=%llx root=%s buddy=%s\n", 16 - block->header, 17 - i915_buddy_block_state(block), 18 - i915_buddy_block_order(block), 19 - i915_buddy_block_offset(block), 20 - i915_buddy_block_size(mm, block), 21 - yesno(!block->parent), 22 - yesno(buddy)); 23 - } 24 - 25 - static void igt_dump_block(struct i915_buddy_mm *mm, 26 - struct i915_buddy_block *block) 27 - { 28 - struct i915_buddy_block *buddy; 29 - 30 - __igt_dump_block(mm, block, false); 31 - 32 - buddy = get_buddy(block); 33 - if (buddy) 34 - __igt_dump_block(mm, buddy, true); 35 - } 36 - 37 - static int igt_check_block(struct i915_buddy_mm *mm, 38 - struct i915_buddy_block *block) 39 - { 40 - struct i915_buddy_block *buddy; 41 - unsigned int block_state; 42 - u64 block_size; 43 - u64 offset; 44 - int err = 0; 45 - 46 - block_state = i915_buddy_block_state(block); 47 - 48 - if (block_state != I915_BUDDY_ALLOCATED && 49 - block_state != I915_BUDDY_FREE && 50 - block_state != I915_BUDDY_SPLIT) { 51 - pr_err("block state mismatch\n"); 52 - err = -EINVAL; 53 - } 54 - 55 - block_size = i915_buddy_block_size(mm, block); 56 - offset = i915_buddy_block_offset(block); 57 - 58 - if (block_size < mm->chunk_size) { 59 - pr_err("block size smaller than min size\n"); 60 - err = -EINVAL; 61 - } 62 - 63 - if (!is_power_of_2(block_size)) { 64 - pr_err("block size not power of two\n"); 65 - err = -EINVAL; 66 - } 67 - 68 - if (!IS_ALIGNED(block_size, mm->chunk_size)) { 69 - pr_err("block size not aligned to min size\n"); 70 - err = -EINVAL; 71 - } 72 - 73 - if (!IS_ALIGNED(offset, mm->chunk_size)) { 74 - pr_err("block offset not aligned to 
min size\n"); 75 - err = -EINVAL; 76 - } 77 - 78 - if (!IS_ALIGNED(offset, block_size)) { 79 - pr_err("block offset not aligned to block size\n"); 80 - err = -EINVAL; 81 - } 82 - 83 - buddy = get_buddy(block); 84 - 85 - if (!buddy && block->parent) { 86 - pr_err("buddy has gone fishing\n"); 87 - err = -EINVAL; 88 - } 89 - 90 - if (buddy) { 91 - if (i915_buddy_block_offset(buddy) != (offset ^ block_size)) { 92 - pr_err("buddy has wrong offset\n"); 93 - err = -EINVAL; 94 - } 95 - 96 - if (i915_buddy_block_size(mm, buddy) != block_size) { 97 - pr_err("buddy size mismatch\n"); 98 - err = -EINVAL; 99 - } 100 - 101 - if (i915_buddy_block_state(buddy) == block_state && 102 - block_state == I915_BUDDY_FREE) { 103 - pr_err("block and its buddy are free\n"); 104 - err = -EINVAL; 105 - } 106 - } 107 - 108 - return err; 109 - } 110 - 111 - static int igt_check_blocks(struct i915_buddy_mm *mm, 112 - struct list_head *blocks, 113 - u64 expected_size, 114 - bool is_contiguous) 115 - { 116 - struct i915_buddy_block *block; 117 - struct i915_buddy_block *prev; 118 - u64 total; 119 - int err = 0; 120 - 121 - block = NULL; 122 - prev = NULL; 123 - total = 0; 124 - 125 - list_for_each_entry(block, blocks, link) { 126 - err = igt_check_block(mm, block); 127 - 128 - if (!i915_buddy_block_is_allocated(block)) { 129 - pr_err("block not allocated\n"), 130 - err = -EINVAL; 131 - } 132 - 133 - if (is_contiguous && prev) { 134 - u64 prev_block_size; 135 - u64 prev_offset; 136 - u64 offset; 137 - 138 - prev_offset = i915_buddy_block_offset(prev); 139 - prev_block_size = i915_buddy_block_size(mm, prev); 140 - offset = i915_buddy_block_offset(block); 141 - 142 - if (offset != (prev_offset + prev_block_size)) { 143 - pr_err("block offset mismatch\n"); 144 - err = -EINVAL; 145 - } 146 - } 147 - 148 - if (err) 149 - break; 150 - 151 - total += i915_buddy_block_size(mm, block); 152 - prev = block; 153 - } 154 - 155 - if (!err) { 156 - if (total != expected_size) { 157 - pr_err("size mismatch, 
expected=%llx, found=%llx\n", 158 - expected_size, total); 159 - err = -EINVAL; 160 - } 161 - return err; 162 - } 163 - 164 - if (prev) { 165 - pr_err("prev block, dump:\n"); 166 - igt_dump_block(mm, prev); 167 - } 168 - 169 - pr_err("bad block, dump:\n"); 170 - igt_dump_block(mm, block); 171 - 172 - return err; 173 - } 174 - 175 - static int igt_check_mm(struct i915_buddy_mm *mm) 176 - { 177 - struct i915_buddy_block *root; 178 - struct i915_buddy_block *prev; 179 - unsigned int i; 180 - u64 total; 181 - int err = 0; 182 - 183 - if (!mm->n_roots) { 184 - pr_err("n_roots is zero\n"); 185 - return -EINVAL; 186 - } 187 - 188 - if (mm->n_roots != hweight64(mm->size)) { 189 - pr_err("n_roots mismatch, n_roots=%u, expected=%lu\n", 190 - mm->n_roots, hweight64(mm->size)); 191 - return -EINVAL; 192 - } 193 - 194 - root = NULL; 195 - prev = NULL; 196 - total = 0; 197 - 198 - for (i = 0; i < mm->n_roots; ++i) { 199 - struct i915_buddy_block *block; 200 - unsigned int order; 201 - 202 - root = mm->roots[i]; 203 - if (!root) { 204 - pr_err("root(%u) is NULL\n", i); 205 - err = -EINVAL; 206 - break; 207 - } 208 - 209 - err = igt_check_block(mm, root); 210 - 211 - if (!i915_buddy_block_is_free(root)) { 212 - pr_err("root not free\n"); 213 - err = -EINVAL; 214 - } 215 - 216 - order = i915_buddy_block_order(root); 217 - 218 - if (!i) { 219 - if (order != mm->max_order) { 220 - pr_err("max order root missing\n"); 221 - err = -EINVAL; 222 - } 223 - } 224 - 225 - if (prev) { 226 - u64 prev_block_size; 227 - u64 prev_offset; 228 - u64 offset; 229 - 230 - prev_offset = i915_buddy_block_offset(prev); 231 - prev_block_size = i915_buddy_block_size(mm, prev); 232 - offset = i915_buddy_block_offset(root); 233 - 234 - if (offset != (prev_offset + prev_block_size)) { 235 - pr_err("root offset mismatch\n"); 236 - err = -EINVAL; 237 - } 238 - } 239 - 240 - block = list_first_entry_or_null(&mm->free_list[order], 241 - struct i915_buddy_block, 242 - link); 243 - if (block != root) { 244 - 
pr_err("root mismatch at order=%u\n", order); 245 - err = -EINVAL; 246 - } 247 - 248 - if (err) 249 - break; 250 - 251 - prev = root; 252 - total += i915_buddy_block_size(mm, root); 253 - } 254 - 255 - if (!err) { 256 - if (total != mm->size) { 257 - pr_err("expected mm size=%llx, found=%llx\n", mm->size, 258 - total); 259 - err = -EINVAL; 260 - } 261 - return err; 262 - } 263 - 264 - if (prev) { 265 - pr_err("prev root(%u), dump:\n", i - 1); 266 - igt_dump_block(mm, prev); 267 - } 268 - 269 - if (root) { 270 - pr_err("bad root(%u), dump:\n", i); 271 - igt_dump_block(mm, root); 272 - } 273 - 274 - return err; 275 - } 276 - 277 - static void igt_mm_config(u64 *size, u64 *chunk_size) 278 - { 279 - I915_RND_STATE(prng); 280 - u32 s, ms; 281 - 282 - /* Nothing fancy, just try to get an interesting bit pattern */ 283 - 284 - prandom_seed_state(&prng, i915_selftest.random_seed); 285 - 286 - /* Let size be a random number of pages up to 8 GB (2M pages) */ 287 - s = 1 + i915_prandom_u32_max_state((BIT(33 - 12)) - 1, &prng); 288 - /* Let the chunk size be a random power of 2 less than size */ 289 - ms = BIT(i915_prandom_u32_max_state(ilog2(s), &prng)); 290 - /* Round size down to the chunk size */ 291 - s &= -ms; 292 - 293 - /* Convert from pages to bytes */ 294 - *chunk_size = (u64)ms << 12; 295 - *size = (u64)s << 12; 296 - } 297 - 298 - static int igt_buddy_alloc_smoke(void *arg) 299 - { 300 - struct i915_buddy_mm mm; 301 - IGT_TIMEOUT(end_time); 302 - I915_RND_STATE(prng); 303 - u64 chunk_size; 304 - u64 mm_size; 305 - int *order; 306 - int err, i; 307 - 308 - igt_mm_config(&mm_size, &chunk_size); 309 - 310 - pr_info("buddy_init with size=%llx, chunk_size=%llx\n", mm_size, chunk_size); 311 - 312 - err = i915_buddy_init(&mm, mm_size, chunk_size); 313 - if (err) { 314 - pr_err("buddy_init failed(%d)\n", err); 315 - return err; 316 - } 317 - 318 - order = i915_random_order(mm.max_order + 1, &prng); 319 - if (!order) 320 - goto out_fini; 321 - 322 - for (i = 0; i <= 
mm.max_order; ++i) { 323 - struct i915_buddy_block *block; 324 - int max_order = order[i]; 325 - bool timeout = false; 326 - LIST_HEAD(blocks); 327 - int order; 328 - u64 total; 329 - 330 - err = igt_check_mm(&mm); 331 - if (err) { 332 - pr_err("pre-mm check failed, abort\n"); 333 - break; 334 - } 335 - 336 - pr_info("filling from max_order=%u\n", max_order); 337 - 338 - order = max_order; 339 - total = 0; 340 - 341 - do { 342 - retry: 343 - block = i915_buddy_alloc(&mm, order); 344 - if (IS_ERR(block)) { 345 - err = PTR_ERR(block); 346 - if (err == -ENOMEM) { 347 - pr_info("buddy_alloc hit -ENOMEM with order=%d\n", 348 - order); 349 - } else { 350 - if (order--) { 351 - err = 0; 352 - goto retry; 353 - } 354 - 355 - pr_err("buddy_alloc with order=%d failed(%d)\n", 356 - order, err); 357 - } 358 - 359 - break; 360 - } 361 - 362 - list_add_tail(&block->link, &blocks); 363 - 364 - if (i915_buddy_block_order(block) != order) { 365 - pr_err("buddy_alloc order mismatch\n"); 366 - err = -EINVAL; 367 - break; 368 - } 369 - 370 - total += i915_buddy_block_size(&mm, block); 371 - 372 - if (__igt_timeout(end_time, NULL)) { 373 - timeout = true; 374 - break; 375 - } 376 - } while (total < mm.size); 377 - 378 - if (!err) 379 - err = igt_check_blocks(&mm, &blocks, total, false); 380 - 381 - i915_buddy_free_list(&mm, &blocks); 382 - 383 - if (!err) { 384 - err = igt_check_mm(&mm); 385 - if (err) 386 - pr_err("post-mm check failed\n"); 387 - } 388 - 389 - if (err || timeout) 390 - break; 391 - 392 - cond_resched(); 393 - } 394 - 395 - if (err == -ENOMEM) 396 - err = 0; 397 - 398 - kfree(order); 399 - out_fini: 400 - i915_buddy_fini(&mm); 401 - 402 - return err; 403 - } 404 - 405 - static int igt_buddy_alloc_pessimistic(void *arg) 406 - { 407 - const unsigned int max_order = 16; 408 - struct i915_buddy_block *block, *bn; 409 - struct i915_buddy_mm mm; 410 - unsigned int order; 411 - LIST_HEAD(blocks); 412 - int err; 413 - 414 - /* 415 - * Create a pot-sized mm, then allocate one 
of each possible 416 - * order within. This should leave the mm with exactly one 417 - * page left. 418 - */ 419 - 420 - err = i915_buddy_init(&mm, PAGE_SIZE << max_order, PAGE_SIZE); 421 - if (err) { 422 - pr_err("buddy_init failed(%d)\n", err); 423 - return err; 424 - } 425 - GEM_BUG_ON(mm.max_order != max_order); 426 - 427 - for (order = 0; order < max_order; order++) { 428 - block = i915_buddy_alloc(&mm, order); 429 - if (IS_ERR(block)) { 430 - pr_info("buddy_alloc hit -ENOMEM with order=%d\n", 431 - order); 432 - err = PTR_ERR(block); 433 - goto err; 434 - } 435 - 436 - list_add_tail(&block->link, &blocks); 437 - } 438 - 439 - /* And now the last remaining block available */ 440 - block = i915_buddy_alloc(&mm, 0); 441 - if (IS_ERR(block)) { 442 - pr_info("buddy_alloc hit -ENOMEM on final alloc\n"); 443 - err = PTR_ERR(block); 444 - goto err; 445 - } 446 - list_add_tail(&block->link, &blocks); 447 - 448 - /* Should be completely full! */ 449 - for (order = max_order; order--; ) { 450 - block = i915_buddy_alloc(&mm, order); 451 - if (!IS_ERR(block)) { 452 - pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!", 453 - order); 454 - list_add_tail(&block->link, &blocks); 455 - err = -EINVAL; 456 - goto err; 457 - } 458 - } 459 - 460 - block = list_last_entry(&blocks, typeof(*block), link); 461 - list_del(&block->link); 462 - i915_buddy_free(&mm, block); 463 - 464 - /* As we free in increasing size, we make available larger blocks */ 465 - order = 1; 466 - list_for_each_entry_safe(block, bn, &blocks, link) { 467 - list_del(&block->link); 468 - i915_buddy_free(&mm, block); 469 - 470 - block = i915_buddy_alloc(&mm, order); 471 - if (IS_ERR(block)) { 472 - pr_info("buddy_alloc (realloc) hit -ENOMEM with order=%d\n", 473 - order); 474 - err = PTR_ERR(block); 475 - goto err; 476 - } 477 - i915_buddy_free(&mm, block); 478 - order++; 479 - } 480 - 481 - /* To confirm, now the whole mm should be available */ 482 - block = i915_buddy_alloc(&mm, 
max_order); 483 - if (IS_ERR(block)) { 484 - pr_info("buddy_alloc (realloc) hit -ENOMEM with order=%d\n", 485 - max_order); 486 - err = PTR_ERR(block); 487 - goto err; 488 - } 489 - i915_buddy_free(&mm, block); 490 - 491 - err: 492 - i915_buddy_free_list(&mm, &blocks); 493 - i915_buddy_fini(&mm); 494 - return err; 495 - } 496 - 497 - static int igt_buddy_alloc_optimistic(void *arg) 498 - { 499 - const int max_order = 16; 500 - struct i915_buddy_block *block; 501 - struct i915_buddy_mm mm; 502 - LIST_HEAD(blocks); 503 - int order; 504 - int err; 505 - 506 - /* 507 - * Create a mm with one block of each order available, and 508 - * try to allocate them all. 509 - */ 510 - 511 - err = i915_buddy_init(&mm, 512 - PAGE_SIZE * ((1 << (max_order + 1)) - 1), 513 - PAGE_SIZE); 514 - if (err) { 515 - pr_err("buddy_init failed(%d)\n", err); 516 - return err; 517 - } 518 - GEM_BUG_ON(mm.max_order != max_order); 519 - 520 - for (order = 0; order <= max_order; order++) { 521 - block = i915_buddy_alloc(&mm, order); 522 - if (IS_ERR(block)) { 523 - pr_info("buddy_alloc hit -ENOMEM with order=%d\n", 524 - order); 525 - err = PTR_ERR(block); 526 - goto err; 527 - } 528 - 529 - list_add_tail(&block->link, &blocks); 530 - } 531 - 532 - /* Should be completely full! */ 533 - block = i915_buddy_alloc(&mm, 0); 534 - if (!IS_ERR(block)) { 535 - pr_info("buddy_alloc unexpectedly succeeded, it should be full!"); 536 - list_add_tail(&block->link, &blocks); 537 - err = -EINVAL; 538 - goto err; 539 - } 540 - 541 - err: 542 - i915_buddy_free_list(&mm, &blocks); 543 - i915_buddy_fini(&mm); 544 - return err; 545 - } 546 - 547 - static int igt_buddy_alloc_pathological(void *arg) 548 - { 549 - const int max_order = 16; 550 - struct i915_buddy_block *block; 551 - struct i915_buddy_mm mm; 552 - LIST_HEAD(blocks); 553 - LIST_HEAD(holes); 554 - int order, top; 555 - int err; 556 - 557 - /* 558 - * Create a pot-sized mm, then allocate one of each possible 559 - * order within. 
This should leave the mm with exactly one 560 - * page left. Free the largest block, then whittle down again. 561 - * Eventually we will have a fully 50% fragmented mm. 562 - */ 563 - 564 - err = i915_buddy_init(&mm, PAGE_SIZE << max_order, PAGE_SIZE); 565 - if (err) { 566 - pr_err("buddy_init failed(%d)\n", err); 567 - return err; 568 - } 569 - GEM_BUG_ON(mm.max_order != max_order); 570 - 571 - for (top = max_order; top; top--) { 572 - /* Make room by freeing the largest allocated block */ 573 - block = list_first_entry_or_null(&blocks, typeof(*block), link); 574 - if (block) { 575 - list_del(&block->link); 576 - i915_buddy_free(&mm, block); 577 - } 578 - 579 - for (order = top; order--; ) { 580 - block = i915_buddy_alloc(&mm, order); 581 - if (IS_ERR(block)) { 582 - pr_info("buddy_alloc hit -ENOMEM with order=%d, top=%d\n", 583 - order, top); 584 - err = PTR_ERR(block); 585 - goto err; 586 - } 587 - list_add_tail(&block->link, &blocks); 588 - } 589 - 590 - /* There should be one final page for this sub-allocation */ 591 - block = i915_buddy_alloc(&mm, 0); 592 - if (IS_ERR(block)) { 593 - pr_info("buddy_alloc hit -ENOMEM for hole\n"); 594 - err = PTR_ERR(block); 595 - goto err; 596 - } 597 - list_add_tail(&block->link, &holes); 598 - 599 - block = i915_buddy_alloc(&mm, top); 600 - if (!IS_ERR(block)) { 601 - pr_info("buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!", 602 - top, max_order); 603 - list_add_tail(&block->link, &blocks); 604 - err = -EINVAL; 605 - goto err; 606 - } 607 - } 608 - 609 - i915_buddy_free_list(&mm, &holes); 610 - 611 - /* Nothing larger than blocks of chunk_size now available */ 612 - for (order = 1; order <= max_order; order++) { 613 - block = i915_buddy_alloc(&mm, order); 614 - if (!IS_ERR(block)) { 615 - pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!", 616 - order); 617 - list_add_tail(&block->link, &blocks); 618 - err = -EINVAL; 619 - goto err; 620 - } 621 - } 622 - 623 - err: 624 - 
list_splice_tail(&holes, &blocks); 625 - i915_buddy_free_list(&mm, &blocks); 626 - i915_buddy_fini(&mm); 627 - return err; 628 - } 629 - 630 - static int igt_buddy_alloc_range(void *arg) 631 - { 632 - struct i915_buddy_mm mm; 633 - unsigned long page_num; 634 - LIST_HEAD(blocks); 635 - u64 chunk_size; 636 - u64 offset; 637 - u64 size; 638 - u64 rem; 639 - int err; 640 - 641 - igt_mm_config(&size, &chunk_size); 642 - 643 - pr_info("buddy_init with size=%llx, chunk_size=%llx\n", size, chunk_size); 644 - 645 - err = i915_buddy_init(&mm, size, chunk_size); 646 - if (err) { 647 - pr_err("buddy_init failed(%d)\n", err); 648 - return err; 649 - } 650 - 651 - err = igt_check_mm(&mm); 652 - if (err) { 653 - pr_err("pre-mm check failed, abort, abort, abort!\n"); 654 - goto err_fini; 655 - } 656 - 657 - rem = mm.size; 658 - offset = 0; 659 - 660 - for_each_prime_number_from(page_num, 1, ULONG_MAX - 1) { 661 - struct i915_buddy_block *block; 662 - LIST_HEAD(tmp); 663 - 664 - size = min(page_num * mm.chunk_size, rem); 665 - 666 - err = i915_buddy_alloc_range(&mm, &tmp, offset, size); 667 - if (err) { 668 - if (err == -ENOMEM) { 669 - pr_info("alloc_range hit -ENOMEM with size=%llx\n", 670 - size); 671 - } else { 672 - pr_err("alloc_range with offset=%llx, size=%llx failed(%d)\n", 673 - offset, size, err); 674 - } 675 - 676 - break; 677 - } 678 - 679 - block = list_first_entry_or_null(&tmp, 680 - struct i915_buddy_block, 681 - link); 682 - if (!block) { 683 - pr_err("alloc_range has no blocks\n"); 684 - err = -EINVAL; 685 - break; 686 - } 687 - 688 - if (i915_buddy_block_offset(block) != offset) { 689 - pr_err("alloc_range start offset mismatch, found=%llx, expected=%llx\n", 690 - i915_buddy_block_offset(block), offset); 691 - err = -EINVAL; 692 - } 693 - 694 - if (!err) 695 - err = igt_check_blocks(&mm, &tmp, size, true); 696 - 697 - list_splice_tail(&tmp, &blocks); 698 - 699 - if (err) 700 - break; 701 - 702 - offset += size; 703 - 704 - rem -= size; 705 - if (!rem) 706 - 
break; 707 - 708 - cond_resched(); 709 - } 710 - 711 - if (err == -ENOMEM) 712 - err = 0; 713 - 714 - i915_buddy_free_list(&mm, &blocks); 715 - 716 - if (!err) { 717 - err = igt_check_mm(&mm); 718 - if (err) 719 - pr_err("post-mm check failed\n"); 720 - } 721 - 722 - err_fini: 723 - i915_buddy_fini(&mm); 724 - 725 - return err; 726 - } 727 - 728 - static int igt_buddy_alloc_limit(void *arg) 729 - { 730 - struct i915_buddy_block *block; 731 - struct i915_buddy_mm mm; 732 - const u64 size = U64_MAX; 733 - int err; 734 - 735 - err = i915_buddy_init(&mm, size, PAGE_SIZE); 736 - if (err) 737 - return err; 738 - 739 - if (mm.max_order != I915_BUDDY_MAX_ORDER) { 740 - pr_err("mm.max_order(%d) != %d\n", 741 - mm.max_order, I915_BUDDY_MAX_ORDER); 742 - err = -EINVAL; 743 - goto out_fini; 744 - } 745 - 746 - block = i915_buddy_alloc(&mm, mm.max_order); 747 - if (IS_ERR(block)) { 748 - err = PTR_ERR(block); 749 - goto out_fini; 750 - } 751 - 752 - if (i915_buddy_block_order(block) != mm.max_order) { 753 - pr_err("block order(%d) != %d\n", 754 - i915_buddy_block_order(block), mm.max_order); 755 - err = -EINVAL; 756 - goto out_free; 757 - } 758 - 759 - if (i915_buddy_block_size(&mm, block) != 760 - BIT_ULL(mm.max_order) * PAGE_SIZE) { 761 - pr_err("block size(%llu) != %llu\n", 762 - i915_buddy_block_size(&mm, block), 763 - BIT_ULL(mm.max_order) * PAGE_SIZE); 764 - err = -EINVAL; 765 - goto out_free; 766 - } 767 - 768 - out_free: 769 - i915_buddy_free(&mm, block); 770 - out_fini: 771 - i915_buddy_fini(&mm); 772 - return err; 773 - } 774 - 775 - int i915_buddy_mock_selftests(void) 776 - { 777 - static const struct i915_subtest tests[] = { 778 - SUBTEST(igt_buddy_alloc_pessimistic), 779 - SUBTEST(igt_buddy_alloc_optimistic), 780 - SUBTEST(igt_buddy_alloc_pathological), 781 - SUBTEST(igt_buddy_alloc_smoke), 782 - SUBTEST(igt_buddy_alloc_range), 783 - SUBTEST(igt_buddy_alloc_limit), 784 - }; 785 - 786 - return i915_subtests(tests, NULL); 787 - }
-1
drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
··· 33 33 selftest(gtt, i915_gem_gtt_mock_selftests) 34 34 selftest(hugepages, i915_gem_huge_page_mock_selftests) 35 35 selftest(memory_region, intel_memory_region_mock_selftests) 36 - selftest(buddy, i915_buddy_mock_selftests)
+7 -6
drivers/gpu/drm/i915/selftests/intel_memory_region.c
··· 6 6 #include <linux/prime_numbers.h> 7 7 #include <linux/sort.h> 8 8 9 + #include <drm/drm_buddy.h> 10 + 9 11 #include "../i915_selftest.h" 10 12 11 13 #include "mock_drm.h" ··· 22 20 #include "gt/intel_engine_pm.h" 23 21 #include "gt/intel_engine_user.h" 24 22 #include "gt/intel_gt.h" 25 - #include "i915_buddy.h" 26 23 #include "gt/intel_migrate.h" 27 24 #include "i915_memcpy.h" 28 25 #include "i915_ttm_buddy_manager.h" ··· 370 369 struct drm_i915_private *i915 = mem->i915; 371 370 struct i915_ttm_buddy_resource *res; 372 371 struct drm_i915_gem_object *obj; 373 - struct i915_buddy_mm *mm; 372 + struct drm_buddy *mm; 374 373 unsigned int expected_order; 375 374 LIST_HEAD(objects); 376 375 u64 size; ··· 455 454 struct drm_i915_private *i915 = mem->i915; 456 455 struct i915_ttm_buddy_resource *res; 457 456 struct drm_i915_gem_object *obj; 458 - struct i915_buddy_block *block; 459 - struct i915_buddy_mm *mm; 457 + struct drm_buddy_block *block; 458 + struct drm_buddy *mm; 460 459 struct list_head *blocks; 461 460 struct scatterlist *sg; 462 461 LIST_HEAD(objects); ··· 486 485 mm = res->mm; 487 486 size = 0; 488 487 list_for_each_entry(block, blocks, link) { 489 - if (i915_buddy_block_size(mm, block) > size) 490 - size = i915_buddy_block_size(mm, block); 488 + if (drm_buddy_block_size(mm, block) > size) 489 + size = drm_buddy_block_size(mm, block); 491 490 } 492 491 if (size < max_segment) { 493 492 pr_err("%s: Failed to create a huge contiguous block [> %u], largest block %lld\n",
+2 -1
drivers/gpu/drm/imx/dcss/dcss-drv.c
··· 6 6 #include <linux/module.h> 7 7 #include <linux/kernel.h> 8 8 #include <linux/platform_device.h> 9 + #include <drm/drm_module.h> 9 10 #include <drm/drm_of.h> 10 11 11 12 #include "dcss-dev.h" ··· 132 131 }, 133 132 }; 134 133 135 - module_platform_driver(dcss_platform_driver); 134 + drm_module_platform_driver(dcss_platform_driver); 136 135 137 136 MODULE_AUTHOR("Laurentiu Palcu <laurentiu.palcu@nxp.com>"); 138 137 MODULE_DESCRIPTION("DCSS driver for i.MX8MQ");
+59 -3
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
··· 6 6 7 7 #include "ingenic-drm.h" 8 8 9 + #include <linux/bitfield.h> 9 10 #include <linux/component.h> 10 11 #include <linux/clk.h> 11 12 #include <linux/dma-mapping.h> ··· 50 49 u32 addr; 51 50 u32 id; 52 51 u32 cmd; 52 + /* extended hw descriptor for jz4780 */ 53 + u32 offsize; 54 + u32 pagewidth; 55 + u32 cpos; 56 + u32 dessize; 53 57 } __aligned(16); 54 58 55 59 struct ingenic_dma_hwdescs { ··· 66 60 bool needs_dev_clk; 67 61 bool has_osd; 68 62 bool map_noncoherent; 63 + bool use_extended_hwdesc; 69 64 unsigned int max_width, max_height; 70 65 const u32 *formats_f0, *formats_f1; 71 66 unsigned int num_formats_f0, num_formats_f1; ··· 180 173 .val_bits = 32, 181 174 .reg_stride = 4, 182 175 183 - .max_register = JZ_REG_LCD_SIZE1, 184 176 .writeable_reg = ingenic_drm_writeable_reg, 185 177 }; 186 178 ··· 453 447 if (!crtc) 454 448 return 0; 455 449 450 + if (plane == &priv->f0) 451 + return -EINVAL; 452 + 456 453 crtc_state = drm_atomic_get_existing_crtc_state(state, 457 454 crtc); 458 455 if (WARN_ON(!crtc_state)) ··· 672 663 hwdesc->cmd = JZ_LCD_CMD_EOF_IRQ | (width * height * cpp / 4); 673 664 hwdesc->next = dma_hwdesc_addr(priv, next_id); 674 665 666 + if (priv->soc_info->use_extended_hwdesc) { 667 + hwdesc->cmd |= JZ_LCD_CMD_FRM_ENABLE; 668 + 669 + /* Extended 8-byte descriptor */ 670 + hwdesc->cpos = 0; 671 + hwdesc->offsize = 0; 672 + hwdesc->pagewidth = 0; 673 + 674 + switch (newstate->fb->format->format) { 675 + case DRM_FORMAT_XRGB1555: 676 + hwdesc->cpos |= JZ_LCD_CPOS_RGB555; 677 + fallthrough; 678 + case DRM_FORMAT_RGB565: 679 + hwdesc->cpos |= JZ_LCD_CPOS_BPP_15_16; 680 + break; 681 + case DRM_FORMAT_XRGB8888: 682 + hwdesc->cpos |= JZ_LCD_CPOS_BPP_18_24; 683 + break; 684 + } 685 + hwdesc->cpos |= (JZ_LCD_CPOS_COEFFICIENT_1 << 686 + JZ_LCD_CPOS_COEFFICIENT_OFFSET); 687 + hwdesc->dessize = 688 + (0xff << JZ_LCD_DESSIZE_ALPHA_OFFSET) | 689 + FIELD_PREP(JZ_LCD_DESSIZE_HEIGHT_MASK, height - 1) | 690 + FIELD_PREP(JZ_LCD_DESSIZE_WIDTH_MASK, width - 1); 
691 + } 692 + 675 693 if (drm_atomic_crtc_needs_modeset(crtc_state)) { 676 694 fourcc = newstate->fb->format->format; 677 695 ··· 729 693 cfg = JZ_LCD_CFG_PS_DISABLE | JZ_LCD_CFG_CLS_DISABLE 730 694 | JZ_LCD_CFG_SPL_DISABLE | JZ_LCD_CFG_REV_DISABLE; 731 695 } 696 + 697 + if (priv->soc_info->use_extended_hwdesc) 698 + cfg |= JZ_LCD_CFG_DESCRIPTOR_8; 732 699 733 700 if (mode->flags & DRM_MODE_FLAG_NHSYNC) 734 701 cfg |= JZ_LCD_CFG_HSYNC_ACTIVE_LOW; ··· 1050 1011 struct ingenic_drm_bridge *ib; 1051 1012 struct drm_device *drm; 1052 1013 void __iomem *base; 1014 + struct resource *res; 1015 + struct regmap_config regmap_config; 1053 1016 long parent_rate; 1054 1017 unsigned int i, clone_mask = 0; 1055 1018 int ret, irq; ··· 1097 1056 drm->mode_config.funcs = &ingenic_drm_mode_config_funcs; 1098 1057 drm->mode_config.helper_private = &ingenic_drm_mode_config_helpers; 1099 1058 1100 - base = devm_platform_ioremap_resource(pdev, 0); 1059 + base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 1101 1060 if (IS_ERR(base)) { 1102 1061 dev_err(dev, "Failed to get memory resource\n"); 1103 1062 return PTR_ERR(base); 1104 1063 } 1105 1064 1065 + regmap_config = ingenic_drm_regmap_config; 1066 + regmap_config.max_register = res->end - res->start; 1106 1067 priv->map = devm_regmap_init_mmio(dev, base, 1107 - &ingenic_drm_regmap_config); 1068 + &regmap_config); 1108 1069 if (IS_ERR(priv->map)) { 1109 1070 dev_err(dev, "Failed to create regmap\n"); 1110 1071 return PTR_ERR(priv->map); ··· 1508 1465 .num_formats_f0 = ARRAY_SIZE(jz4770_formats_f0), 1509 1466 }; 1510 1467 1468 + static const struct jz_soc_info jz4780_soc_info = { 1469 + .needs_dev_clk = true, 1470 + .has_osd = true, 1471 + .use_extended_hwdesc = true, 1472 + .max_width = 4096, 1473 + .max_height = 2048, 1474 + .formats_f1 = jz4770_formats_f1, 1475 + .num_formats_f1 = ARRAY_SIZE(jz4770_formats_f1), 1476 + .formats_f0 = jz4770_formats_f0, 1477 + .num_formats_f0 = ARRAY_SIZE(jz4770_formats_f0), 1478 + }; 1479 + 
1511 1480 static const struct of_device_id ingenic_drm_of_match[] = { 1512 1481 { .compatible = "ingenic,jz4740-lcd", .data = &jz4740_soc_info }, 1513 1482 { .compatible = "ingenic,jz4725b-lcd", .data = &jz4725b_soc_info }, 1514 1483 { .compatible = "ingenic,jz4770-lcd", .data = &jz4770_soc_info }, 1484 + { .compatible = "ingenic,jz4780-lcd", .data = &jz4780_soc_info }, 1515 1485 { /* sentinel */ }, 1516 1486 }; 1517 1487 MODULE_DEVICE_TABLE(of, ingenic_drm_of_match);
+38
drivers/gpu/drm/ingenic/ingenic-drm.h
··· 44 44 #define JZ_REG_LCD_XYP1 0x124 45 45 #define JZ_REG_LCD_SIZE0 0x128 46 46 #define JZ_REG_LCD_SIZE1 0x12c 47 + #define JZ_REG_LCD_PCFG 0x2c0 47 48 48 49 #define JZ_LCD_CFG_SLCD BIT(31) 50 + #define JZ_LCD_CFG_DESCRIPTOR_8 BIT(28) 51 + #define JZ_LCD_CFG_RECOVER_FIFO_UNDERRUN BIT(25) 49 52 #define JZ_LCD_CFG_PS_DISABLE BIT(23) 50 53 #define JZ_LCD_CFG_CLS_DISABLE BIT(22) 51 54 #define JZ_LCD_CFG_SPL_DISABLE BIT(21) ··· 66 63 #define JZ_LCD_CFG_DE_ACTIVE_LOW BIT(9) 67 64 #define JZ_LCD_CFG_VSYNC_ACTIVE_LOW BIT(8) 68 65 #define JZ_LCD_CFG_18_BIT BIT(7) 66 + #define JZ_LCD_CFG_24_BIT BIT(6) 69 67 #define JZ_LCD_CFG_PDW (BIT(5) | BIT(4)) 70 68 71 69 #define JZ_LCD_CFG_MODE_GENERIC_16BIT 0 ··· 136 132 #define JZ_LCD_CMD_SOF_IRQ BIT(31) 137 133 #define JZ_LCD_CMD_EOF_IRQ BIT(30) 138 134 #define JZ_LCD_CMD_ENABLE_PAL BIT(28) 135 + #define JZ_LCD_CMD_FRM_ENABLE BIT(26) 139 136 140 137 #define JZ_LCD_SYNC_MASK 0x3ff 141 138 ··· 158 153 #define JZ_LCD_RGBC_EVEN_BGR (0x5 << 0) 159 154 160 155 #define JZ_LCD_OSDC_OSDEN BIT(0) 156 + #define JZ_LCD_OSDC_ALPHAEN BIT(2) 161 157 #define JZ_LCD_OSDC_F0EN BIT(3) 162 158 #define JZ_LCD_OSDC_F1EN BIT(4) 163 159 ··· 181 175 182 176 #define JZ_LCD_SIZE01_WIDTH_LSB 0 183 177 #define JZ_LCD_SIZE01_HEIGHT_LSB 16 178 + 179 + #define JZ_LCD_DESSIZE_ALPHA_OFFSET 24 180 + #define JZ_LCD_DESSIZE_HEIGHT_MASK GENMASK(23, 12) 181 + #define JZ_LCD_DESSIZE_WIDTH_MASK GENMASK(11, 0) 182 + 183 + #define JZ_LCD_CPOS_BPP_15_16 (4 << 27) 184 + #define JZ_LCD_CPOS_BPP_18_24 (5 << 27) 185 + #define JZ_LCD_CPOS_BPP_30 (7 << 27) 186 + #define JZ_LCD_CPOS_RGB555 BIT(30) 187 + #define JZ_LCD_CPOS_PREMULTIPLY_LCD BIT(26) 188 + #define JZ_LCD_CPOS_COEFFICIENT_OFFSET 24 189 + #define JZ_LCD_CPOS_COEFFICIENT_0 0 190 + #define JZ_LCD_CPOS_COEFFICIENT_1 1 191 + #define JZ_LCD_CPOS_COEFFICIENT_ALPHA1 2 192 + #define JZ_LCD_CPOS_COEFFICIENT_1_ALPHA1 3 193 + 194 + #define JZ_LCD_RGBC_RGB_PADDING BIT(15) 195 + #define JZ_LCD_RGBC_RGB_PADDING_FIRST BIT(14) 196 + 
#define JZ_LCD_RGBC_422 BIT(8) 197 + #define JZ_LCD_RGBC_RGB_FORMAT_ENABLE BIT(7) 198 + 199 + #define JZ_LCD_PCFG_PRI_MODE BIT(31) 200 + #define JZ_LCD_PCFG_HP_BST_4 (0 << 28) 201 + #define JZ_LCD_PCFG_HP_BST_8 (1 << 28) 202 + #define JZ_LCD_PCFG_HP_BST_16 (2 << 28) 203 + #define JZ_LCD_PCFG_HP_BST_32 (3 << 28) 204 + #define JZ_LCD_PCFG_HP_BST_64 (4 << 28) 205 + #define JZ_LCD_PCFG_HP_BST_16_CONT (5 << 28) 206 + #define JZ_LCD_PCFG_HP_BST_DISABLE (7 << 28) 207 + #define JZ_LCD_PCFG_THRESHOLD2_OFFSET 18 208 + #define JZ_LCD_PCFG_THRESHOLD1_OFFSET 9 209 + #define JZ_LCD_PCFG_THRESHOLD0_OFFSET 0 184 210 185 211 struct device; 186 212 struct drm_plane;
+13 -12
drivers/gpu/drm/meson/meson_drv.c
··· 302 302 if (priv->afbcd.ops) { 303 303 ret = priv->afbcd.ops->init(priv); 304 304 if (ret) 305 - return ret; 305 + goto free_drm; 306 306 } 307 307 308 308 /* Encoder Initialization */ 309 309 310 310 ret = meson_encoder_cvbs_init(priv); 311 311 if (ret) 312 - goto free_drm; 312 + goto exit_afbcd; 313 313 314 314 if (has_components) { 315 315 ret = component_bind_all(drm->dev, drm); 316 316 if (ret) { 317 317 dev_err(drm->dev, "Couldn't bind all components\n"); 318 - goto free_drm; 318 + goto exit_afbcd; 319 319 } 320 320 } 321 321 322 322 ret = meson_encoder_hdmi_init(priv); 323 323 if (ret) 324 - goto free_drm; 324 + goto exit_afbcd; 325 325 326 326 ret = meson_plane_create(priv); 327 327 if (ret) 328 - goto free_drm; 328 + goto exit_afbcd; 329 329 330 330 ret = meson_overlay_create(priv); 331 331 if (ret) 332 - goto free_drm; 332 + goto exit_afbcd; 333 333 334 334 ret = meson_crtc_create(priv); 335 335 if (ret) 336 - goto free_drm; 336 + goto exit_afbcd; 337 337 338 338 ret = request_irq(priv->vsync_irq, meson_irq, 0, drm->driver->name, drm); 339 339 if (ret) 340 - goto free_drm; 340 + goto exit_afbcd; 341 341 342 342 drm_mode_config_reset(drm); 343 343 ··· 355 355 356 356 uninstall_irq: 357 357 free_irq(priv->vsync_irq, drm); 358 + exit_afbcd: 359 + if (priv->afbcd.ops) 360 + priv->afbcd.ops->exit(priv); 358 361 free_drm: 359 362 drm_dev_put(drm); 360 363 ··· 388 385 free_irq(priv->vsync_irq, drm); 389 386 drm_dev_put(drm); 390 387 391 - if (priv->afbcd.ops) { 392 - priv->afbcd.ops->reset(priv); 393 - meson_rdma_free(priv); 394 - } 388 + if (priv->afbcd.ops) 389 + priv->afbcd.ops->exit(priv); 395 390 } 396 391 397 392 static const struct component_master_ops meson_drv_master_ops = {
+27 -14
drivers/gpu/drm/meson/meson_osd_afbcd.c
··· 79 79 return meson_gxm_afbcd_pixel_fmt(modifier, format) >= 0; 80 80 } 81 81 82 - static int meson_gxm_afbcd_init(struct meson_drm *priv) 83 - { 84 - return 0; 85 - } 86 - 87 82 static int meson_gxm_afbcd_reset(struct meson_drm *priv) 88 83 { 89 84 writel_relaxed(VIU_SW_RESET_OSD1_AFBCD, ··· 86 91 writel_relaxed(0, priv->io_base + _REG(VIU_SW_RESET)); 87 92 88 93 return 0; 94 + } 95 + 96 + static int meson_gxm_afbcd_init(struct meson_drm *priv) 97 + { 98 + return 0; 99 + } 100 + 101 + static void meson_gxm_afbcd_exit(struct meson_drm *priv) 102 + { 103 + meson_gxm_afbcd_reset(priv); 89 104 } 90 105 91 106 static int meson_gxm_afbcd_enable(struct meson_drm *priv) ··· 177 172 178 173 struct meson_afbcd_ops meson_afbcd_gxm_ops = { 179 174 .init = meson_gxm_afbcd_init, 175 + .exit = meson_gxm_afbcd_exit, 180 176 .reset = meson_gxm_afbcd_reset, 181 177 .enable = meson_gxm_afbcd_enable, 182 178 .disable = meson_gxm_afbcd_disable, ··· 275 269 return meson_g12a_afbcd_pixel_fmt(modifier, format) >= 0; 276 270 } 277 271 272 + static int meson_g12a_afbcd_reset(struct meson_drm *priv) 273 + { 274 + meson_rdma_reset(priv); 275 + 276 + meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB | 277 + VIU_SW_RESET_G12A_OSD1_AFBCD, 278 + VIU_SW_RESET); 279 + meson_rdma_writel_sync(priv, 0, VIU_SW_RESET); 280 + 281 + return 0; 282 + } 283 + 278 284 static int meson_g12a_afbcd_init(struct meson_drm *priv) 279 285 { 280 286 int ret; ··· 304 286 return 0; 305 287 } 306 288 307 - static int meson_g12a_afbcd_reset(struct meson_drm *priv) 289 + static void meson_g12a_afbcd_exit(struct meson_drm *priv) 308 290 { 309 - meson_rdma_reset(priv); 310 - 311 - meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB | 312 - VIU_SW_RESET_G12A_OSD1_AFBCD, 313 - VIU_SW_RESET); 314 - meson_rdma_writel_sync(priv, 0, VIU_SW_RESET); 315 - 316 - return 0; 291 + meson_g12a_afbcd_reset(priv); 292 + meson_rdma_free(priv); 317 293 } 318 294 319 295 static int meson_g12a_afbcd_enable(struct meson_drm *priv) 
··· 392 380 393 381 struct meson_afbcd_ops meson_afbcd_g12a_ops = { 394 382 .init = meson_g12a_afbcd_init, 383 + .exit = meson_g12a_afbcd_exit, 395 384 .reset = meson_g12a_afbcd_reset, 396 385 .enable = meson_g12a_afbcd_enable, 397 386 .disable = meson_g12a_afbcd_disable,
+1
drivers/gpu/drm/meson/meson_osd_afbcd.h
··· 14 14 15 15 struct meson_afbcd_ops { 16 16 int (*init)(struct meson_drm *priv); 17 + void (*exit)(struct meson_drm *priv); 17 18 int (*reset)(struct meson_drm *priv); 18 19 int (*enable)(struct meson_drm *priv); 19 20 int (*disable)(struct meson_drm *priv);
+4 -1
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 529 529 WREG_GFX(3, 0x00); 530 530 WREG_GFX(4, 0x00); 531 531 WREG_GFX(5, 0x40); 532 - WREG_GFX(6, 0x05); 532 + /* GCTL6 should be 0x05, but we configure memmapsl to 0xb8000 (text mode), 533 + * so that it doesn't hang when running kexec/kdump on G200_SE rev42. 534 + */ 535 + WREG_GFX(6, 0x0d); 533 536 WREG_GFX(7, 0x0f); 534 537 WREG_GFX(8, 0x0f); 535 538
+1
drivers/gpu/drm/msm/Kconfig
···
 	select IOMMU_IO_PGTABLE
 	select QCOM_MDT_LOADER if ARCH_QCOM
 	select REGULATOR
+	select DRM_DP_HELPER
 	select DRM_KMS_HELPER
 	select DRM_PANEL
 	select DRM_BRIDGE
+1 -1
drivers/gpu/drm/msm/dp/dp_audio.c
···
 
 #include <linux/of_platform.h>
 
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_edid.h>
 
 #include "dp_catalog.h"
+1 -1
drivers/gpu/drm/msm/dp/dp_aux.h
···
 #define _DP_AUX_H_
 
 #include "dp_catalog.h"
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 
 int dp_aux_register(struct drm_dp_aux *dp_aux);
 void dp_aux_unregister(struct drm_dp_aux *dp_aux);
+1 -1
drivers/gpu/drm/msm/dp/dp_catalog.c
···
 #include <linux/phy/phy.h>
 #include <linux/phy/phy-dp.h>
 #include <linux/rational.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_print.h>
 
 #include "dp_catalog.h"
+1 -1
drivers/gpu/drm/msm/dp/dp_ctrl.c
···
 #include <linux/phy/phy-dp.h>
 #include <linux/pm_opp.h>
 #include <drm/drm_fixed.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_print.h>
 
 #include "dp_reg.h"
+77
drivers/gpu/drm/msm/edp/edp.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __EDP_CONNECTOR_H__
+#define __EDP_CONNECTOR_H__
+
+#include <linux/i2c.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <drm/dp/drm_dp_helper.h>
+#include <drm/drm_bridge.h>
+#include <drm/drm_crtc.h>
+
+#include "msm_drv.h"
+
+#define edp_read(offset) msm_readl((offset))
+#define edp_write(offset, data) msm_writel((data), (offset))
+
+struct edp_ctrl;
+struct edp_aux;
+struct edp_phy;
+
+struct msm_edp {
+	struct drm_device *dev;
+	struct platform_device *pdev;
+
+	struct drm_connector *connector;
+	struct drm_bridge *bridge;
+
+	/* the encoder we are hooked to (outside of eDP block) */
+	struct drm_encoder *encoder;
+
+	struct edp_ctrl *ctrl;
+
+	int irq;
+};
+
+/* eDP bridge */
+struct drm_bridge *msm_edp_bridge_init(struct msm_edp *edp);
+void edp_bridge_destroy(struct drm_bridge *bridge);
+
+/* eDP connector */
+struct drm_connector *msm_edp_connector_init(struct msm_edp *edp);
+
+/* AUX */
+void *msm_edp_aux_init(struct msm_edp *edp, void __iomem *regbase, struct drm_dp_aux **drm_aux);
+void msm_edp_aux_destroy(struct device *dev, struct edp_aux *aux);
+irqreturn_t msm_edp_aux_irq(struct edp_aux *aux, u32 isr);
+void msm_edp_aux_ctrl(struct edp_aux *aux, int enable);
+
+/* Phy */
+bool msm_edp_phy_ready(struct edp_phy *phy);
+void msm_edp_phy_ctrl(struct edp_phy *phy, int enable);
+void msm_edp_phy_vm_pe_init(struct edp_phy *phy);
+void msm_edp_phy_vm_pe_cfg(struct edp_phy *phy, u32 v0, u32 v1);
+void msm_edp_phy_lane_power_ctrl(struct edp_phy *phy, bool up, u32 max_lane);
+void *msm_edp_phy_init(struct device *dev, void __iomem *regbase);
+
+/* Ctrl */
+irqreturn_t msm_edp_ctrl_irq(struct edp_ctrl *ctrl);
+void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on);
+int msm_edp_ctrl_init(struct msm_edp *edp);
+void msm_edp_ctrl_destroy(struct edp_ctrl *ctrl);
+bool msm_edp_ctrl_panel_connected(struct edp_ctrl *ctrl);
+int msm_edp_ctrl_get_panel_info(struct edp_ctrl *ctrl,
+		struct drm_connector *connector, struct edid **edid);
+int msm_edp_ctrl_timing_cfg(struct edp_ctrl *ctrl,
+		const struct drm_display_mode *mode,
+		const struct drm_display_info *info);
+/* @pixel_rate is in kHz */
+bool msm_edp_ctrl_pixel_clock_valid(struct edp_ctrl *ctrl,
+		u32 pixel_rate, u32 *pm, u32 *pn);
+
+#endif /* __EDP_CONNECTOR_H__ */
+1373
drivers/gpu/drm/msm/edp/edp_ctrl.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/clk.h>
+#include <linux/gpio/consumer.h>
+#include <linux/regulator/consumer.h>
+#include <drm/dp/drm_dp_helper.h>
+#include <drm/drm_crtc.h>
+#include <drm/drm_edid.h>
+
+#include "edp.h"
+#include "edp.xml.h"
+
+#define VDDA_UA_ON_LOAD		100000	/* uA units */
+#define VDDA_UA_OFF_LOAD	100	/* uA units */
+
+#define DPCD_LINK_VOLTAGE_MAX		4
+#define DPCD_LINK_PRE_EMPHASIS_MAX	4
+
+#define EDP_LINK_BW_MAX		DP_LINK_BW_2_7
+
+/* Link training return value */
+#define EDP_TRAIN_FAIL		-1
+#define EDP_TRAIN_SUCCESS	0
+#define EDP_TRAIN_RECONFIG	1
+
+#define EDP_CLK_MASK_AHB	BIT(0)
+#define EDP_CLK_MASK_AUX	BIT(1)
+#define EDP_CLK_MASK_LINK	BIT(2)
+#define EDP_CLK_MASK_PIXEL	BIT(3)
+#define EDP_CLK_MASK_MDP_CORE	BIT(4)
+#define EDP_CLK_MASK_LINK_CHAN	(EDP_CLK_MASK_LINK | EDP_CLK_MASK_PIXEL)
+#define EDP_CLK_MASK_AUX_CHAN	\
+	(EDP_CLK_MASK_AHB | EDP_CLK_MASK_AUX | EDP_CLK_MASK_MDP_CORE)
+#define EDP_CLK_MASK_ALL	(EDP_CLK_MASK_AUX_CHAN | EDP_CLK_MASK_LINK_CHAN)
+
+#define EDP_BACKLIGHT_MAX	255
+
+#define EDP_INTR_STATUS1	\
+	(EDP_INTERRUPT_REG_1_HPD | EDP_INTERRUPT_REG_1_AUX_I2C_DONE | \
+	EDP_INTERRUPT_REG_1_WRONG_ADDR | EDP_INTERRUPT_REG_1_TIMEOUT | \
+	EDP_INTERRUPT_REG_1_NACK_DEFER | EDP_INTERRUPT_REG_1_WRONG_DATA_CNT | \
+	EDP_INTERRUPT_REG_1_I2C_NACK | EDP_INTERRUPT_REG_1_I2C_DEFER | \
+	EDP_INTERRUPT_REG_1_PLL_UNLOCK | EDP_INTERRUPT_REG_1_AUX_ERROR)
+#define EDP_INTR_MASK1		(EDP_INTR_STATUS1 << 2)
+#define EDP_INTR_STATUS2	\
+	(EDP_INTERRUPT_REG_2_READY_FOR_VIDEO | \
+	EDP_INTERRUPT_REG_2_IDLE_PATTERNs_SENT | \
+	EDP_INTERRUPT_REG_2_FRAME_END | EDP_INTERRUPT_REG_2_CRC_UPDATED)
+#define EDP_INTR_MASK2		(EDP_INTR_STATUS2 << 2)
+
+struct edp_ctrl {
+	struct platform_device *pdev;
+
+	void __iomem *base;
+
+	/* regulators */
+	struct regulator *vdda_vreg;	/* 1.8 V */
+	struct regulator *lvl_vreg;
+
+	/* clocks */
+	struct clk *aux_clk;
+	struct clk *pixel_clk;
+	struct clk *ahb_clk;
+	struct clk *link_clk;
+	struct clk *mdp_core_clk;
+
+	/* gpios */
+	struct gpio_desc *panel_en_gpio;
+	struct gpio_desc *panel_hpd_gpio;
+
+	/* completion and mutex */
+	struct completion idle_comp;
+	struct mutex dev_mutex; /* To protect device power status */
+
+	/* work queue */
+	struct work_struct on_work;
+	struct work_struct off_work;
+	struct workqueue_struct *workqueue;
+
+	/* Interrupt register lock */
+	spinlock_t irq_lock;
+
+	bool edp_connected;
+	bool power_on;
+
+	/* edid raw data */
+	struct edid *edid;
+
+	struct drm_dp_aux *drm_aux;
+
+	/* dpcd raw data */
+	u8 dpcd[DP_RECEIVER_CAP_SIZE];
+
+	/* Link status */
+	u8 link_rate;
+	u8 lane_cnt;
+	u8 v_level;
+	u8 p_level;
+
+	/* Timing status */
+	u8 interlaced;
+	u32 pixel_rate; /* in kHz */
+	u32 color_depth;
+
+	struct edp_aux *aux;
+	struct edp_phy *phy;
+};
+
+struct edp_pixel_clk_div {
+	u32 rate; /* in kHz */
+	u32 m;
+	u32 n;
+};
+
+#define EDP_PIXEL_CLK_NUM 8
+static const struct edp_pixel_clk_div clk_divs[2][EDP_PIXEL_CLK_NUM] = {
+	{ /* Link clock = 162MHz, source clock = 810MHz */
+		{119000, 31,  211}, /* WSXGA+ 1680x1050@60Hz CVT */
+		{130250, 32,  199}, /* UXGA 1600x1200@60Hz CVT */
+		{148500, 11,  60},  /* FHD 1920x1080@60Hz */
+		{154000, 50,  263}, /* WUXGA 1920x1200@60Hz CVT */
+		{209250, 31,  120}, /* QXGA 2048x1536@60Hz CVT */
+		{268500, 119, 359}, /* WQXGA 2560x1600@60Hz CVT */
+		{138530, 33,  193}, /* AUO B116HAN03.0 Panel */
+		{141400, 48,  275}, /* AUO B133HTN01.2 Panel */
+	},
+	{ /* Link clock = 270MHz, source clock = 675MHz */
+		{119000, 52,  295}, /* WSXGA+ 1680x1050@60Hz CVT */
+		{130250, 11,  57},  /* UXGA 1600x1200@60Hz CVT */
+		{148500, 11,  50},  /* FHD 1920x1080@60Hz */
+		{154000, 47,  206}, /* WUXGA 1920x1200@60Hz CVT */
+		{209250, 31,  100}, /* QXGA 2048x1536@60Hz CVT */
+		{268500, 107, 269}, /* WQXGA 2560x1600@60Hz CVT */
+		{138530, 63,  307}, /* AUO B116HAN03.0 Panel */
+		{141400, 53,  253}, /* AUO B133HTN01.2 Panel */
+	},
+};
+
+static int edp_clk_init(struct edp_ctrl *ctrl)
+{
+	struct platform_device *pdev = ctrl->pdev;
+	int ret;
+
+	ctrl->aux_clk = msm_clk_get(pdev, "core");
+	if (IS_ERR(ctrl->aux_clk)) {
+		ret = PTR_ERR(ctrl->aux_clk);
+		pr_err("%s: Can't find core clock, %d\n", __func__, ret);
+		ctrl->aux_clk = NULL;
+		return ret;
+	}
+
+	ctrl->pixel_clk = msm_clk_get(pdev, "pixel");
+	if (IS_ERR(ctrl->pixel_clk)) {
+		ret = PTR_ERR(ctrl->pixel_clk);
+		pr_err("%s: Can't find pixel clock, %d\n", __func__, ret);
+		ctrl->pixel_clk = NULL;
+		return ret;
+	}
+
+	ctrl->ahb_clk = msm_clk_get(pdev, "iface");
+	if (IS_ERR(ctrl->ahb_clk)) {
+		ret = PTR_ERR(ctrl->ahb_clk);
+		pr_err("%s: Can't find iface clock, %d\n", __func__, ret);
+		ctrl->ahb_clk = NULL;
+		return ret;
+	}
+
+	ctrl->link_clk = msm_clk_get(pdev, "link");
+	if (IS_ERR(ctrl->link_clk)) {
+		ret = PTR_ERR(ctrl->link_clk);
+		pr_err("%s: Can't find link clock, %d\n", __func__, ret);
+		ctrl->link_clk = NULL;
+		return ret;
+	}
+
+	/* need mdp core clock to receive irq */
+	ctrl->mdp_core_clk = msm_clk_get(pdev, "mdp_core");
+	if (IS_ERR(ctrl->mdp_core_clk)) {
+		ret = PTR_ERR(ctrl->mdp_core_clk);
+		pr_err("%s: Can't find mdp_core clock, %d\n", __func__, ret);
+		ctrl->mdp_core_clk = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+static int edp_clk_enable(struct edp_ctrl *ctrl, u32 clk_mask)
+{
+	int ret;
+
+	DBG("mask=%x", clk_mask);
+	/* ahb_clk should be enabled first */
+	if (clk_mask & EDP_CLK_MASK_AHB) {
+		ret = clk_prepare_enable(ctrl->ahb_clk);
+		if (ret) {
+			pr_err("%s: Failed to enable ahb clk\n", __func__);
+			goto f0;
+		}
+	}
+	if (clk_mask & EDP_CLK_MASK_AUX) {
+		ret = clk_set_rate(ctrl->aux_clk, 19200000);
+		if (ret) {
+			pr_err("%s: Failed to set rate aux clk\n", __func__);
+			goto f1;
+		}
+		ret = clk_prepare_enable(ctrl->aux_clk);
+		if (ret) {
+			pr_err("%s: Failed to enable aux clk\n", __func__);
+			goto f1;
+		}
+	}
+	/* Need to set rate and enable link_clk prior to pixel_clk */
+	if (clk_mask & EDP_CLK_MASK_LINK) {
+		DBG("edp->link_clk, set_rate %ld",
+				(unsigned long)ctrl->link_rate * 27000000);
+		ret = clk_set_rate(ctrl->link_clk,
+				(unsigned long)ctrl->link_rate * 27000000);
+		if (ret) {
+			pr_err("%s: Failed to set rate to link clk\n",
+				__func__);
+			goto f2;
+		}
+
+		ret = clk_prepare_enable(ctrl->link_clk);
+		if (ret) {
+			pr_err("%s: Failed to enable link clk\n", __func__);
+			goto f2;
+		}
+	}
+	if (clk_mask & EDP_CLK_MASK_PIXEL) {
+		DBG("edp->pixel_clk, set_rate %ld",
+				(unsigned long)ctrl->pixel_rate * 1000);
+		ret = clk_set_rate(ctrl->pixel_clk,
+				(unsigned long)ctrl->pixel_rate * 1000);
+		if (ret) {
+			pr_err("%s: Failed to set rate to pixel clk\n",
+				__func__);
+			goto f3;
+		}
+
+		ret = clk_prepare_enable(ctrl->pixel_clk);
+		if (ret) {
+			pr_err("%s: Failed to enable pixel clk\n", __func__);
+			goto f3;
+		}
+	}
+	if (clk_mask & EDP_CLK_MASK_MDP_CORE) {
+		ret = clk_prepare_enable(ctrl->mdp_core_clk);
+		if (ret) {
+			pr_err("%s: Failed to enable mdp core clk\n", __func__);
+			goto f4;
+		}
+	}
+
+	return 0;
+
+f4:
+	if (clk_mask & EDP_CLK_MASK_PIXEL)
+		clk_disable_unprepare(ctrl->pixel_clk);
+f3:
+	if (clk_mask & EDP_CLK_MASK_LINK)
+		clk_disable_unprepare(ctrl->link_clk);
+f2:
+	if (clk_mask & EDP_CLK_MASK_AUX)
+		clk_disable_unprepare(ctrl->aux_clk);
+f1:
+	if (clk_mask & EDP_CLK_MASK_AHB)
+		clk_disable_unprepare(ctrl->ahb_clk);
+f0:
+	return ret;
+}
+
+static void edp_clk_disable(struct edp_ctrl *ctrl, u32 clk_mask)
+{
+	if (clk_mask & EDP_CLK_MASK_MDP_CORE)
+		clk_disable_unprepare(ctrl->mdp_core_clk);
+	if (clk_mask & EDP_CLK_MASK_PIXEL)
+		clk_disable_unprepare(ctrl->pixel_clk);
+	if (clk_mask & EDP_CLK_MASK_LINK)
+		clk_disable_unprepare(ctrl->link_clk);
+	if (clk_mask & EDP_CLK_MASK_AUX)
+		clk_disable_unprepare(ctrl->aux_clk);
+	if (clk_mask & EDP_CLK_MASK_AHB)
+		clk_disable_unprepare(ctrl->ahb_clk);
+}
+
+static int edp_regulator_init(struct edp_ctrl *ctrl)
+{
+	struct device *dev = &ctrl->pdev->dev;
+	int ret;
+
+	DBG("");
+	ctrl->vdda_vreg = devm_regulator_get(dev, "vdda");
+	ret = PTR_ERR_OR_ZERO(ctrl->vdda_vreg);
+	if (ret) {
+		pr_err("%s: Could not get vdda reg, ret = %d\n", __func__,
+				ret);
+		ctrl->vdda_vreg = NULL;
+		return ret;
+	}
+	ctrl->lvl_vreg = devm_regulator_get(dev, "lvl-vdd");
+	ret = PTR_ERR_OR_ZERO(ctrl->lvl_vreg);
+	if (ret) {
+		pr_err("%s: Could not get lvl-vdd reg, ret = %d\n", __func__,
+				ret);
+		ctrl->lvl_vreg = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+static int edp_regulator_enable(struct edp_ctrl *ctrl)
+{
+	int ret;
+
+	ret = regulator_set_load(ctrl->vdda_vreg, VDDA_UA_ON_LOAD);
+	if (ret < 0) {
+		pr_err("%s: vdda_vreg set regulator mode failed.\n", __func__);
+		goto vdda_set_fail;
+	}
+
+	ret = regulator_enable(ctrl->vdda_vreg);
+	if (ret) {
+		pr_err("%s: Failed to enable vdda_vreg regulator.\n", __func__);
+		goto vdda_enable_fail;
+	}
+
+	ret = regulator_enable(ctrl->lvl_vreg);
+	if (ret) {
+		pr_err("Failed to enable lvl-vdd reg regulator, %d", ret);
+		goto lvl_enable_fail;
+	}
+
+	DBG("exit");
+	return 0;
+
+lvl_enable_fail:
+	regulator_disable(ctrl->vdda_vreg);
+vdda_enable_fail:
+	regulator_set_load(ctrl->vdda_vreg, VDDA_UA_OFF_LOAD);
+vdda_set_fail:
+	return ret;
+}
+
+static void edp_regulator_disable(struct edp_ctrl *ctrl)
+{
+	regulator_disable(ctrl->lvl_vreg);
+	regulator_disable(ctrl->vdda_vreg);
+	regulator_set_load(ctrl->vdda_vreg, VDDA_UA_OFF_LOAD);
+}
+
+static int edp_gpio_config(struct edp_ctrl *ctrl)
+{
+	struct device *dev = &ctrl->pdev->dev;
+	int ret;
+
+	ctrl->panel_hpd_gpio = devm_gpiod_get(dev, "panel-hpd", GPIOD_IN);
+	if (IS_ERR(ctrl->panel_hpd_gpio)) {
+		ret = PTR_ERR(ctrl->panel_hpd_gpio);
+		ctrl->panel_hpd_gpio = NULL;
+		pr_err("%s: cannot get panel-hpd-gpios, %d\n", __func__, ret);
+		return ret;
+	}
+
+	ctrl->panel_en_gpio = devm_gpiod_get(dev, "panel-en", GPIOD_OUT_LOW);
+	if (IS_ERR(ctrl->panel_en_gpio)) {
+		ret = PTR_ERR(ctrl->panel_en_gpio);
+		ctrl->panel_en_gpio = NULL;
+		pr_err("%s: cannot get panel-en-gpios, %d\n", __func__, ret);
+		return ret;
+	}
+
+	DBG("gpio on");
+
+	return 0;
+}
+
+static void edp_ctrl_irq_enable(struct edp_ctrl *ctrl, int enable)
+{
+	unsigned long flags;
+
+	DBG("%d", enable);
+	spin_lock_irqsave(&ctrl->irq_lock, flags);
+	if (enable) {
+		edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_1, EDP_INTR_MASK1);
+		edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_2, EDP_INTR_MASK2);
+	} else {
+		edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_1, 0x0);
+		edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_2, 0x0);
+	}
+	spin_unlock_irqrestore(&ctrl->irq_lock, flags);
+	DBG("exit");
+}
+
+static void edp_fill_link_cfg(struct edp_ctrl *ctrl)
+{
+	u32 prate;
+	u32 lrate;
+	u32 bpp;
+	u8 max_lane = drm_dp_max_lane_count(ctrl->dpcd);
+	u8 lane;
+
+	prate = ctrl->pixel_rate;
+	bpp = ctrl->color_depth * 3;
+
+	/*
+	 * By default, use the maximum link rate and minimum lane count,
+	 * so that we can do rate down shift during link training.
+	 */
+	ctrl->link_rate = ctrl->dpcd[DP_MAX_LINK_RATE];
+
+	prate *= bpp;
+	prate /= 8; /* in kByte */
+
+	lrate = 270000; /* in kHz */
+	lrate *= ctrl->link_rate;
+	lrate /= 10; /* in kByte, 10 bits --> 8 bits */
+
+	for (lane = 1; lane <= max_lane; lane <<= 1) {
+		if (lrate >= prate)
+			break;
+		lrate <<= 1;
+	}
+
+	ctrl->lane_cnt = lane;
+	DBG("rate=%d lane=%d", ctrl->link_rate, ctrl->lane_cnt);
+}
+
+static void edp_config_ctrl(struct edp_ctrl *ctrl)
+{
+	u32 data;
+	enum edp_color_depth depth;
+
+	data = EDP_CONFIGURATION_CTRL_LANES(ctrl->lane_cnt - 1);
+
+	if (drm_dp_enhanced_frame_cap(ctrl->dpcd))
+		data |= EDP_CONFIGURATION_CTRL_ENHANCED_FRAMING;
+
+	depth = EDP_6BIT;
+	if (ctrl->color_depth == 8)
+		depth = EDP_8BIT;
+
+	data |= EDP_CONFIGURATION_CTRL_COLOR(depth);
+
+	if (!ctrl->interlaced)	/* progressive */
+		data |= EDP_CONFIGURATION_CTRL_PROGRESSIVE;
+
+	data |= (EDP_CONFIGURATION_CTRL_SYNC_CLK |
+		EDP_CONFIGURATION_CTRL_STATIC_MVID);
+
+	edp_write(ctrl->base + REG_EDP_CONFIGURATION_CTRL, data);
+}
+
+static void edp_state_ctrl(struct edp_ctrl *ctrl, u32 state)
+{
+	edp_write(ctrl->base + REG_EDP_STATE_CTRL, state);
+	/* Make sure H/W status is set */
+	wmb();
+}
+
+static int edp_lane_set_write(struct edp_ctrl *ctrl,
+	u8 voltage_level, u8 pre_emphasis_level)
+{
+	int i;
+	u8 buf[4];
+
+	if (voltage_level >= DPCD_LINK_VOLTAGE_MAX)
+		voltage_level |= 0x04;
+
+	if (pre_emphasis_level >= DPCD_LINK_PRE_EMPHASIS_MAX)
+		pre_emphasis_level |= 0x04;
+
+	pre_emphasis_level <<= 3;
+
+	for (i = 0; i < 4; i++)
+		buf[i] = voltage_level | pre_emphasis_level;
+
+	DBG("%s: p|v=0x%x", __func__, voltage_level | pre_emphasis_level);
+	if (drm_dp_dpcd_write(ctrl->drm_aux, 0x103, buf, 4) < 4) {
+		pr_err("%s: Set sw/pe to panel failed\n", __func__);
+		return -ENOLINK;
+	}
+
+	return 0;
+}
+
+static int edp_train_pattern_set_write(struct edp_ctrl *ctrl, u8 pattern)
+{
+	u8 p = pattern;
+
+	DBG("pattern=%x", p);
+	if (drm_dp_dpcd_write(ctrl->drm_aux,
+				DP_TRAINING_PATTERN_SET, &p, 1) < 1) {
+		pr_err("%s: Set training pattern to panel failed\n", __func__);
+		return -ENOLINK;
+	}
+
+	return 0;
+}
+
+static void edp_sink_train_set_adjust(struct edp_ctrl *ctrl,
+	const u8 *link_status)
+{
+	int i;
+	u8 max = 0;
+	u8 data;
+
+	/* use the max level across lanes */
+	for (i = 0; i < ctrl->lane_cnt; i++) {
+		data = drm_dp_get_adjust_request_voltage(link_status, i);
+		DBG("lane=%d req_voltage_swing=0x%x", i, data);
+		if (max < data)
+			max = data;
+	}
+
+	ctrl->v_level = max >> DP_TRAIN_VOLTAGE_SWING_SHIFT;
+
+	/* use the max level across lanes */
+	max = 0;
+	for (i = 0; i < ctrl->lane_cnt; i++) {
+		data = drm_dp_get_adjust_request_pre_emphasis(link_status, i);
+		DBG("lane=%d req_pre_emphasis=0x%x", i, data);
+		if (max < data)
+			max = data;
+	}
+
+	ctrl->p_level = max >> DP_TRAIN_PRE_EMPHASIS_SHIFT;
+	DBG("v_level=%d, p_level=%d", ctrl->v_level, ctrl->p_level);
+}
+
+static void edp_host_train_set(struct edp_ctrl *ctrl, u32 train)
+{
+	int cnt = 10;
+	u32 data;
+	u32 shift = train - 1;
+
+	DBG("train=%d", train);
+
+	edp_state_ctrl(ctrl, EDP_STATE_CTRL_TRAIN_PATTERN_1 << shift);
+	while (--cnt) {
+		data = edp_read(ctrl->base + REG_EDP_MAINLINK_READY);
+		if (data & (EDP_MAINLINK_READY_TRAIN_PATTERN_1_READY << shift))
+			break;
+	}
+
+	if (cnt == 0)
+		pr_err("%s: set link_train=%d failed\n", __func__, train);
+}
+
+static const u8 vm_pre_emphasis[4][4] = {
+	{0x03, 0x06, 0x09, 0x0C},	/* pe0, 0 db */
+	{0x03, 0x06, 0x09, 0xFF},	/* pe1, 3.5 db */
+	{0x03, 0x06, 0xFF, 0xFF},	/* pe2, 6.0 db */
+	{0x03, 0xFF, 0xFF, 0xFF}	/* pe3, 9.5 db */
+};
+
+/* voltage swing, 0.2v and 1.0v are not support */
+static const u8 vm_voltage_swing[4][4] = {
+	{0x14, 0x18, 0x1A, 0x1E},	/* sw0, 0.4v  */
+	{0x18, 0x1A, 0x1E, 0xFF},	/* sw1, 0.6 v */
+	{0x1A, 0x1E, 0xFF, 0xFF},	/* sw1, 0.8 v */
+	{0x1E, 0xFF, 0xFF, 0xFF}	/* sw1, 1.2 v, optional */
+};
+
+static int edp_voltage_pre_emphasise_set(struct edp_ctrl *ctrl)
+{
+	u32 value0;
+	u32 value1;
+
+	DBG("v=%d p=%d", ctrl->v_level, ctrl->p_level);
+
+	value0 = vm_pre_emphasis[(int)(ctrl->v_level)][(int)(ctrl->p_level)];
+	value1 = vm_voltage_swing[(int)(ctrl->v_level)][(int)(ctrl->p_level)];
+
+	/* Configure host and panel only if both values are allowed */
+	if (value0 != 0xFF && value1 != 0xFF) {
+		msm_edp_phy_vm_pe_cfg(ctrl->phy, value0, value1);
+		return edp_lane_set_write(ctrl, ctrl->v_level, ctrl->p_level);
+	}
+
+	return -EINVAL;
+}
+
+static int edp_start_link_train_1(struct edp_ctrl *ctrl)
+{
+	u8 link_status[DP_LINK_STATUS_SIZE];
+	u8 old_v_level;
+	int tries;
+	int ret;
+	int rlen;
+
+	DBG("");
+
+	edp_host_train_set(ctrl, DP_TRAINING_PATTERN_1);
+	ret = edp_voltage_pre_emphasise_set(ctrl);
+	if (ret)
+		return ret;
+	ret = edp_train_pattern_set_write(ctrl,
+			DP_TRAINING_PATTERN_1 | DP_RECOVERED_CLOCK_OUT_EN);
+	if (ret)
+		return ret;
+
+	tries = 0;
+	old_v_level = ctrl->v_level;
+	while (1) {
+		drm_dp_link_train_clock_recovery_delay(ctrl->drm_aux, ctrl->dpcd);
+
+		rlen = drm_dp_dpcd_read_link_status(ctrl->drm_aux, link_status);
+		if (rlen < DP_LINK_STATUS_SIZE) {
+			pr_err("%s: read link status failed\n", __func__);
+			return -ENOLINK;
+		}
+		if (drm_dp_clock_recovery_ok(link_status, ctrl->lane_cnt)) {
+			ret = 0;
+			break;
+		}
+
+		if (ctrl->v_level == DPCD_LINK_VOLTAGE_MAX) {
+			ret = -1;
+			break;
+		}
+
+		if (old_v_level == ctrl->v_level) {
+			tries++;
+			if (tries >= 5) {
+				ret = -1;
+				break;
+			}
+		} else {
+			tries = 0;
+			old_v_level = ctrl->v_level;
+		}
+
+		edp_sink_train_set_adjust(ctrl, link_status);
+		ret = edp_voltage_pre_emphasise_set(ctrl);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static int edp_start_link_train_2(struct edp_ctrl *ctrl)
+{
+	u8 link_status[DP_LINK_STATUS_SIZE];
+	int tries = 0;
+	int ret;
+	int rlen;
+
+	DBG("");
+
+	edp_host_train_set(ctrl, DP_TRAINING_PATTERN_2);
+	ret = edp_voltage_pre_emphasise_set(ctrl);
+	if (ret)
+		return ret;
+
+	ret = edp_train_pattern_set_write(ctrl,
+			DP_TRAINING_PATTERN_2 | DP_RECOVERED_CLOCK_OUT_EN);
+	if (ret)
+		return ret;
+
+	while (1) {
+		drm_dp_link_train_channel_eq_delay(ctrl->drm_aux, ctrl->dpcd);
+
+		rlen = drm_dp_dpcd_read_link_status(ctrl->drm_aux, link_status);
+		if (rlen < DP_LINK_STATUS_SIZE) {
+			pr_err("%s: read link status failed\n", __func__);
+			return -ENOLINK;
+		}
+		if (drm_dp_channel_eq_ok(link_status, ctrl->lane_cnt)) {
+			ret = 0;
+			break;
+		}
+
+		tries++;
+		if (tries > 10) {
+			ret = -1;
+			break;
+		}
+
+		edp_sink_train_set_adjust(ctrl, link_status);
+		ret = edp_voltage_pre_emphasise_set(ctrl);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static int edp_link_rate_down_shift(struct edp_ctrl *ctrl)
+{
+	u32 prate, lrate, bpp;
+	u8 rate, lane, max_lane;
+	int changed = 0;
+
+	rate = ctrl->link_rate;
+	lane = ctrl->lane_cnt;
+	max_lane = drm_dp_max_lane_count(ctrl->dpcd);
+
+	bpp = ctrl->color_depth * 3;
+	prate = ctrl->pixel_rate;
+	prate *= bpp;
+	prate /= 8; /* in kByte */
+
+	if (rate > DP_LINK_BW_1_62 && rate <= EDP_LINK_BW_MAX) {
+		rate -= 4;	/* reduce rate */
+		changed++;
+	}
+
+	if (changed) {
+		if (lane >= 1 && lane < max_lane)
+			lane <<= 1;	/* increase lane */
+
+		lrate = 270000; /* in kHz */
+		lrate *= rate;
+		lrate /= 10; /* kByte, 10 bits --> 8 bits */
+		lrate *= lane;
+
+		DBG("new lrate=%u prate=%u(kHz) rate=%d lane=%d p=%u b=%d",
+			lrate, prate, rate, lane,
+			ctrl->pixel_rate,
+			bpp);
+
+		if (lrate > prate) {
+			ctrl->link_rate = rate;
+			ctrl->lane_cnt = lane;
+			DBG("new rate=%d %d", rate, lane);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+static int edp_clear_training_pattern(struct edp_ctrl *ctrl)
+{
+	int ret;
+
+	ret = edp_train_pattern_set_write(ctrl, 0);
+
+	drm_dp_link_train_channel_eq_delay(ctrl->drm_aux, ctrl->dpcd);
+
+	return ret;
+}
+
+static int edp_do_link_train(struct edp_ctrl *ctrl)
+{
+	u8 values[2];
+	int ret;
+
+	DBG("");
+	/*
+	 * Set the current link rate and lane cnt to panel. They may have been
+	 * adjusted and the values are different from them in DPCD CAP
+	 */
+	values[0] = ctrl->lane_cnt;
+	values[1] = ctrl->link_rate;
+
+	if (drm_dp_enhanced_frame_cap(ctrl->dpcd))
+		values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
+
+	if (drm_dp_dpcd_write(ctrl->drm_aux, DP_LINK_BW_SET, values,
+			      sizeof(values)) < 0)
+		return EDP_TRAIN_FAIL;
+
+	ctrl->v_level = 0; /* start from default level */
+	ctrl->p_level = 0;
+
+	edp_state_ctrl(ctrl, 0);
+	if (edp_clear_training_pattern(ctrl))
+		return EDP_TRAIN_FAIL;
+
+	ret = edp_start_link_train_1(ctrl);
+	if (ret < 0) {
+		if (edp_link_rate_down_shift(ctrl) == 0) {
+			DBG("link reconfig");
+			ret = EDP_TRAIN_RECONFIG;
+			goto clear;
+		} else {
+			pr_err("%s: Training 1 failed", __func__);
+			ret = EDP_TRAIN_FAIL;
+			goto clear;
+		}
+	}
+	DBG("Training 1 completed successfully");
+
+	edp_state_ctrl(ctrl, 0);
+	if (edp_clear_training_pattern(ctrl))
+		return EDP_TRAIN_FAIL;
+
+	ret = edp_start_link_train_2(ctrl);
+	if (ret < 0) {
+		if (edp_link_rate_down_shift(ctrl) == 0) {
+			DBG("link reconfig");
+			ret = EDP_TRAIN_RECONFIG;
+			goto clear;
+		} else {
+			pr_err("%s: Training 2 failed", __func__);
+			ret = EDP_TRAIN_FAIL;
+			goto clear;
+		}
+	}
+	DBG("Training 2 completed successfully");
+
+	edp_state_ctrl(ctrl, EDP_STATE_CTRL_SEND_VIDEO);
+clear:
+	edp_clear_training_pattern(ctrl);
+
+	return ret;
+}
+
+static void edp_clock_synchrous(struct edp_ctrl *ctrl, int sync)
+{
+	u32 data;
+	enum edp_color_depth depth;
+
+	data = edp_read(ctrl->base + REG_EDP_MISC1_MISC0);
+
+	if (sync)
+		data |= EDP_MISC1_MISC0_SYNC;
+	else
+		data &= ~EDP_MISC1_MISC0_SYNC;
+
+	/* only legacy rgb mode supported */
+	depth = EDP_6BIT; /* Default */
+	if (ctrl->color_depth == 8)
+		depth = EDP_8BIT;
+	else if (ctrl->color_depth == 10)
+		depth = EDP_10BIT;
+	else if (ctrl->color_depth == 12)
+		depth = EDP_12BIT;
+	else if (ctrl->color_depth == 16)
+		depth = EDP_16BIT;
+
+	data |= EDP_MISC1_MISC0_COLOR(depth);
+
+	edp_write(ctrl->base + REG_EDP_MISC1_MISC0, data);
+}
+
+static int edp_sw_mvid_nvid(struct edp_ctrl *ctrl, u32 m, u32 n)
+{
+	u32 n_multi, m_multi = 5;
+
+	if (ctrl->link_rate == DP_LINK_BW_1_62) {
+		n_multi = 1;
+	} else if (ctrl->link_rate == DP_LINK_BW_2_7) {
+		n_multi = 2;
+	} else {
+		pr_err("%s: Invalid link rate, %d\n", __func__,
+			ctrl->link_rate);
+		return -EINVAL;
+	}
+
+	edp_write(ctrl->base + REG_EDP_SOFTWARE_MVID, m * m_multi);
+	edp_write(ctrl->base + REG_EDP_SOFTWARE_NVID, n * n_multi);
+
+	return 0;
+}
+
+static void edp_mainlink_ctrl(struct edp_ctrl *ctrl, int enable)
+{
+	u32 data = 0;
+
+	edp_write(ctrl->base + REG_EDP_MAINLINK_CTRL, EDP_MAINLINK_CTRL_RESET);
+	/* Make sure fully reset */
+	wmb();
+	usleep_range(500, 1000);
+
+	if (enable)
+		data |= EDP_MAINLINK_CTRL_ENABLE;
+
+	edp_write(ctrl->base + REG_EDP_MAINLINK_CTRL, data);
+}
+
+static void edp_ctrl_phy_aux_enable(struct edp_ctrl *ctrl, int enable)
+{
+	if (enable) {
+		edp_regulator_enable(ctrl);
+		edp_clk_enable(ctrl, EDP_CLK_MASK_AUX_CHAN);
+		msm_edp_phy_ctrl(ctrl->phy, 1);
+		msm_edp_aux_ctrl(ctrl->aux, 1);
+		gpiod_set_value(ctrl->panel_en_gpio, 1);
+	} else {
+		gpiod_set_value(ctrl->panel_en_gpio, 0);
+		msm_edp_aux_ctrl(ctrl->aux, 0);
+		msm_edp_phy_ctrl(ctrl->phy, 0);
+		edp_clk_disable(ctrl, EDP_CLK_MASK_AUX_CHAN);
+		edp_regulator_disable(ctrl);
+	}
+}
+
+static void edp_ctrl_link_enable(struct edp_ctrl *ctrl, int enable)
+{
+	u32 m, n;
+
+	if (enable) {
+		/* Enable link channel clocks */
+		edp_clk_enable(ctrl, EDP_CLK_MASK_LINK_CHAN);
+
+		msm_edp_phy_lane_power_ctrl(ctrl->phy, true, ctrl->lane_cnt);
+
+		msm_edp_phy_vm_pe_init(ctrl->phy);
+
+		/* Make sure phy is programed */
+		wmb();
+		msm_edp_phy_ready(ctrl->phy);
+
+		edp_config_ctrl(ctrl);
+		msm_edp_ctrl_pixel_clock_valid(ctrl, ctrl->pixel_rate, &m, &n);
+		edp_sw_mvid_nvid(ctrl, m, n);
+		edp_mainlink_ctrl(ctrl, 1);
+	} else {
+		edp_mainlink_ctrl(ctrl, 0);
+
+		msm_edp_phy_lane_power_ctrl(ctrl->phy, false, 0);
+		edp_clk_disable(ctrl, EDP_CLK_MASK_LINK_CHAN);
+	}
+}
+
+static int edp_ctrl_training(struct edp_ctrl *ctrl)
+{
+	int ret;
+
+	/* Do link training only when power is on */
+	if (!ctrl->power_on)
+		return -EINVAL;
+
+train_start:
+	ret = edp_do_link_train(ctrl);
+	if (ret == EDP_TRAIN_RECONFIG) {
+		/* Re-configure main link */
+		edp_ctrl_irq_enable(ctrl, 0);
+		edp_ctrl_link_enable(ctrl, 0);
+		msm_edp_phy_ctrl(ctrl->phy, 0);
+
+		/* Make sure link is fully disabled */
+		wmb();
+		usleep_range(500, 1000);
+
+		msm_edp_phy_ctrl(ctrl->phy, 1);
+		edp_ctrl_link_enable(ctrl, 1);
+		edp_ctrl_irq_enable(ctrl, 1);
+		goto train_start;
+	}
+
+	return ret;
+}
+
+static void edp_ctrl_on_worker(struct work_struct *work)
+{
+	struct edp_ctrl *ctrl = container_of(
+				work, struct edp_ctrl, on_work);
+	u8 value;
+	int ret;
+
+	mutex_lock(&ctrl->dev_mutex);
+
+	if (ctrl->power_on) {
+		DBG("already on");
+		goto unlock_ret;
+	}
+
+	edp_ctrl_phy_aux_enable(ctrl, 1);
+	edp_ctrl_link_enable(ctrl, 1);
+
+	edp_ctrl_irq_enable(ctrl, 1);
+
+	/* DP_SET_POWER register is only available on DPCD v1.1 and later */
+	if (ctrl->dpcd[DP_DPCD_REV] >= 0x11) {
+		ret = drm_dp_dpcd_readb(ctrl->drm_aux, DP_SET_POWER, &value);
+		if (ret < 0)
+			goto fail;
+
+		value &= ~DP_SET_POWER_MASK;
+		value |= DP_SET_POWER_D0;
+
+		ret = drm_dp_dpcd_writeb(ctrl->drm_aux, DP_SET_POWER, value);
+		if (ret < 0)
+			goto fail;
+
+		/*
+		 * According to the DP 1.1 specification, a "Sink Device must
+		 * exit the power saving state within 1 ms" (Section 2.5.3.1,
+		 * Table 5-52, "Sink Control Field" (register 0x600).
+		 */
+		usleep_range(1000, 2000);
+	}
+
+	ctrl->power_on = true;
+
+	/* Start link training */
+	ret = edp_ctrl_training(ctrl);
+	if (ret != EDP_TRAIN_SUCCESS)
+		goto fail;
+
+	DBG("DONE");
+	goto unlock_ret;
+
+fail:
+	edp_ctrl_irq_enable(ctrl, 0);
+	edp_ctrl_link_enable(ctrl, 0);
+	edp_ctrl_phy_aux_enable(ctrl, 0);
+	ctrl->power_on = false;
+unlock_ret:
+	mutex_unlock(&ctrl->dev_mutex);
+}
+
+static void edp_ctrl_off_worker(struct work_struct *work)
+{
+	struct edp_ctrl *ctrl = container_of(
+				work, struct edp_ctrl, off_work);
+	unsigned long time_left;
+
+	mutex_lock(&ctrl->dev_mutex);
+
+	if (!ctrl->power_on) {
+		DBG("already off");
+		goto unlock_ret;
+	}
+
+	reinit_completion(&ctrl->idle_comp);
+	edp_state_ctrl(ctrl, EDP_STATE_CTRL_PUSH_IDLE);
+
+	time_left = wait_for_completion_timeout(&ctrl->idle_comp,
+						msecs_to_jiffies(500));
+	if (!time_left)
+		DBG("%s: idle pattern timedout\n", __func__);
+
+	edp_state_ctrl(ctrl, 0);
+
+	/* DP_SET_POWER register is only available on DPCD v1.1 and later */
+	if (ctrl->dpcd[DP_DPCD_REV] >= 0x11) {
+		u8 value;
+		int ret;
+
+		ret = drm_dp_dpcd_readb(ctrl->drm_aux, DP_SET_POWER, &value);
+		if (ret > 0) {
+			value &= ~DP_SET_POWER_MASK;
+			value |= DP_SET_POWER_D3;
+
+			drm_dp_dpcd_writeb(ctrl->drm_aux, DP_SET_POWER, value);
+		}
+	}
+
+	edp_ctrl_irq_enable(ctrl, 0);
+
+	edp_ctrl_link_enable(ctrl, 0);
+
+	edp_ctrl_phy_aux_enable(ctrl, 0);
+
+	ctrl->power_on = false;
+
+unlock_ret:
+	mutex_unlock(&ctrl->dev_mutex);
+}
+
+irqreturn_t msm_edp_ctrl_irq(struct edp_ctrl *ctrl)
+{
+	u32 isr1, isr2, mask1, mask2;
+	u32 ack;
+
+	DBG("");
+	spin_lock(&ctrl->irq_lock);
+	isr1 = edp_read(ctrl->base + REG_EDP_INTERRUPT_REG_1);
+	isr2 = edp_read(ctrl->base + REG_EDP_INTERRUPT_REG_2);
+
+	mask1 = isr1 & EDP_INTR_MASK1;
+	mask2 = isr2 & EDP_INTR_MASK2;
+
+	isr1 &= ~mask1;	/* remove masks bit */
+	isr2 &= ~mask2;
+
+	DBG("isr=%x mask=%x isr2=%x mask2=%x",
+			isr1, mask1, isr2, mask2);
+
+	ack = isr1 & EDP_INTR_STATUS1;
+	ack <<= 1;	/* ack bits */
+	ack |= mask1;
+	edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_1, ack);
+
+	ack = isr2 & EDP_INTR_STATUS2;
+	ack <<= 1;	/* ack bits */
+	ack |= mask2;
+	edp_write(ctrl->base + REG_EDP_INTERRUPT_REG_2, ack);
+	spin_unlock(&ctrl->irq_lock);
+
+	if (isr1 & EDP_INTERRUPT_REG_1_HPD)
+		DBG("edp_hpd");
+
+	if (isr2 & EDP_INTERRUPT_REG_2_READY_FOR_VIDEO)
+		DBG("edp_video_ready");
+
+	if (isr2 & EDP_INTERRUPT_REG_2_IDLE_PATTERNs_SENT) {
+		DBG("idle_patterns_sent");
+		complete(&ctrl->idle_comp);
+	}
+
+	msm_edp_aux_irq(ctrl->aux, isr1);
+
+	return IRQ_HANDLED;
+}
+
+void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on)
+{
+	if (on)
+		queue_work(ctrl->workqueue, &ctrl->on_work);
+	else
+		queue_work(ctrl->workqueue, &ctrl->off_work);
+}
+
+int msm_edp_ctrl_init(struct msm_edp *edp)
+{
+	struct edp_ctrl *ctrl = NULL;
+	struct device *dev;
+	int
ret; 1121 + 1122 + if (!edp) { 1123 + pr_err("%s: edp is NULL!\n", __func__); 1124 + return -EINVAL; 1125 + } 1126 + 1127 + dev = &edp->pdev->dev; 1128 + ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL); 1129 + if (!ctrl) 1130 + return -ENOMEM; 1131 + 1132 + edp->ctrl = ctrl; 1133 + ctrl->pdev = edp->pdev; 1134 + 1135 + ctrl->base = msm_ioremap(ctrl->pdev, "edp", "eDP"); 1136 + if (IS_ERR(ctrl->base)) 1137 + return PTR_ERR(ctrl->base); 1138 + 1139 + /* Get regulator, clock, gpio, pwm */ 1140 + ret = edp_regulator_init(ctrl); 1141 + if (ret) { 1142 + pr_err("%s:regulator init fail\n", __func__); 1143 + return ret; 1144 + } 1145 + ret = edp_clk_init(ctrl); 1146 + if (ret) { 1147 + pr_err("%s:clk init fail\n", __func__); 1148 + return ret; 1149 + } 1150 + ret = edp_gpio_config(ctrl); 1151 + if (ret) { 1152 + pr_err("%s:failed to configure GPIOs: %d", __func__, ret); 1153 + return ret; 1154 + } 1155 + 1156 + /* Init aux and phy */ 1157 + ctrl->aux = msm_edp_aux_init(edp, ctrl->base, &ctrl->drm_aux); 1158 + if (!ctrl->aux || !ctrl->drm_aux) { 1159 + pr_err("%s:failed to init aux\n", __func__); 1160 + return -ENOMEM; 1161 + } 1162 + 1163 + ctrl->phy = msm_edp_phy_init(dev, ctrl->base); 1164 + if (!ctrl->phy) { 1165 + pr_err("%s:failed to init phy\n", __func__); 1166 + ret = -ENOMEM; 1167 + goto err_destory_aux; 1168 + } 1169 + 1170 + spin_lock_init(&ctrl->irq_lock); 1171 + mutex_init(&ctrl->dev_mutex); 1172 + init_completion(&ctrl->idle_comp); 1173 + 1174 + /* setup workqueue */ 1175 + ctrl->workqueue = alloc_ordered_workqueue("edp_drm_work", 0); 1176 + INIT_WORK(&ctrl->on_work, edp_ctrl_on_worker); 1177 + INIT_WORK(&ctrl->off_work, edp_ctrl_off_worker); 1178 + 1179 + return 0; 1180 + 1181 + err_destory_aux: 1182 + msm_edp_aux_destroy(dev, ctrl->aux); 1183 + ctrl->aux = NULL; 1184 + return ret; 1185 + } 1186 + 1187 + void msm_edp_ctrl_destroy(struct edp_ctrl *ctrl) 1188 + { 1189 + if (!ctrl) 1190 + return; 1191 + 1192 + if (ctrl->workqueue) { 1193 + 
destroy_workqueue(ctrl->workqueue); 1194 + ctrl->workqueue = NULL; 1195 + } 1196 + 1197 + if (ctrl->aux) { 1198 + msm_edp_aux_destroy(&ctrl->pdev->dev, ctrl->aux); 1199 + ctrl->aux = NULL; 1200 + } 1201 + 1202 + kfree(ctrl->edid); 1203 + ctrl->edid = NULL; 1204 + 1205 + mutex_destroy(&ctrl->dev_mutex); 1206 + } 1207 + 1208 + bool msm_edp_ctrl_panel_connected(struct edp_ctrl *ctrl) 1209 + { 1210 + mutex_lock(&ctrl->dev_mutex); 1211 + DBG("connect status = %d", ctrl->edp_connected); 1212 + if (ctrl->edp_connected) { 1213 + mutex_unlock(&ctrl->dev_mutex); 1214 + return true; 1215 + } 1216 + 1217 + if (!ctrl->power_on) { 1218 + edp_ctrl_phy_aux_enable(ctrl, 1); 1219 + edp_ctrl_irq_enable(ctrl, 1); 1220 + } 1221 + 1222 + if (drm_dp_dpcd_read(ctrl->drm_aux, DP_DPCD_REV, ctrl->dpcd, 1223 + DP_RECEIVER_CAP_SIZE) < DP_RECEIVER_CAP_SIZE) { 1224 + pr_err("%s: AUX channel is NOT ready\n", __func__); 1225 + memset(ctrl->dpcd, 0, DP_RECEIVER_CAP_SIZE); 1226 + } else { 1227 + ctrl->edp_connected = true; 1228 + } 1229 + 1230 + if (!ctrl->power_on) { 1231 + edp_ctrl_irq_enable(ctrl, 0); 1232 + edp_ctrl_phy_aux_enable(ctrl, 0); 1233 + } 1234 + 1235 + DBG("exit: connect status=%d", ctrl->edp_connected); 1236 + 1237 + mutex_unlock(&ctrl->dev_mutex); 1238 + 1239 + return ctrl->edp_connected; 1240 + } 1241 + 1242 + int msm_edp_ctrl_get_panel_info(struct edp_ctrl *ctrl, 1243 + struct drm_connector *connector, struct edid **edid) 1244 + { 1245 + mutex_lock(&ctrl->dev_mutex); 1246 + 1247 + if (ctrl->edid) { 1248 + if (edid) { 1249 + DBG("Just return edid buffer"); 1250 + *edid = ctrl->edid; 1251 + } 1252 + goto unlock_ret; 1253 + } 1254 + 1255 + if (!ctrl->power_on) { 1256 + edp_ctrl_phy_aux_enable(ctrl, 1); 1257 + edp_ctrl_irq_enable(ctrl, 1); 1258 + } 1259 + 1260 + /* Initialize link rate as panel max link rate */ 1261 + ctrl->link_rate = ctrl->dpcd[DP_MAX_LINK_RATE]; 1262 + 1263 + ctrl->edid = drm_get_edid(connector, &ctrl->drm_aux->ddc); 1264 + if (!ctrl->edid) { 1265 + pr_err("%s: 
edid read fail\n", __func__); 1266 + goto disable_ret; 1267 + } 1268 + 1269 + if (edid) 1270 + *edid = ctrl->edid; 1271 + 1272 + disable_ret: 1273 + if (!ctrl->power_on) { 1274 + edp_ctrl_irq_enable(ctrl, 0); 1275 + edp_ctrl_phy_aux_enable(ctrl, 0); 1276 + } 1277 + unlock_ret: 1278 + mutex_unlock(&ctrl->dev_mutex); 1279 + return 0; 1280 + } 1281 + 1282 + int msm_edp_ctrl_timing_cfg(struct edp_ctrl *ctrl, 1283 + const struct drm_display_mode *mode, 1284 + const struct drm_display_info *info) 1285 + { 1286 + u32 hstart_from_sync, vstart_from_sync; 1287 + u32 data; 1288 + int ret = 0; 1289 + 1290 + mutex_lock(&ctrl->dev_mutex); 1291 + /* 1292 + * Need to keep color depth, pixel rate and 1293 + * interlaced information in ctrl context 1294 + */ 1295 + ctrl->color_depth = info->bpc; 1296 + ctrl->pixel_rate = mode->clock; 1297 + ctrl->interlaced = !!(mode->flags & DRM_MODE_FLAG_INTERLACE); 1298 + 1299 + /* Fill initial link config based on passed in timing */ 1300 + edp_fill_link_cfg(ctrl); 1301 + 1302 + if (edp_clk_enable(ctrl, EDP_CLK_MASK_AHB)) { 1303 + pr_err("%s, fail to prepare enable ahb clk\n", __func__); 1304 + ret = -EINVAL; 1305 + goto unlock_ret; 1306 + } 1307 + edp_clock_synchrous(ctrl, 1); 1308 + 1309 + /* Configure eDP timing to HW */ 1310 + edp_write(ctrl->base + REG_EDP_TOTAL_HOR_VER, 1311 + EDP_TOTAL_HOR_VER_HORIZ(mode->htotal) | 1312 + EDP_TOTAL_HOR_VER_VERT(mode->vtotal)); 1313 + 1314 + vstart_from_sync = mode->vtotal - mode->vsync_start; 1315 + hstart_from_sync = mode->htotal - mode->hsync_start; 1316 + edp_write(ctrl->base + REG_EDP_START_HOR_VER_FROM_SYNC, 1317 + EDP_START_HOR_VER_FROM_SYNC_HORIZ(hstart_from_sync) | 1318 + EDP_START_HOR_VER_FROM_SYNC_VERT(vstart_from_sync)); 1319 + 1320 + data = EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT( 1321 + mode->vsync_end - mode->vsync_start); 1322 + data |= EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ( 1323 + mode->hsync_end - mode->hsync_start); 1324 + if (mode->flags & DRM_MODE_FLAG_NVSYNC) 1325 + data |= 
EDP_HSYNC_VSYNC_WIDTH_POLARITY_NVSYNC; 1326 + if (mode->flags & DRM_MODE_FLAG_NHSYNC) 1327 + data |= EDP_HSYNC_VSYNC_WIDTH_POLARITY_NHSYNC; 1328 + edp_write(ctrl->base + REG_EDP_HSYNC_VSYNC_WIDTH_POLARITY, data); 1329 + 1330 + edp_write(ctrl->base + REG_EDP_ACTIVE_HOR_VER, 1331 + EDP_ACTIVE_HOR_VER_HORIZ(mode->hdisplay) | 1332 + EDP_ACTIVE_HOR_VER_VERT(mode->vdisplay)); 1333 + 1334 + edp_clk_disable(ctrl, EDP_CLK_MASK_AHB); 1335 + 1336 + unlock_ret: 1337 + mutex_unlock(&ctrl->dev_mutex); 1338 + return ret; 1339 + } 1340 + 1341 + bool msm_edp_ctrl_pixel_clock_valid(struct edp_ctrl *ctrl, 1342 + u32 pixel_rate, u32 *pm, u32 *pn) 1343 + { 1344 + const struct edp_pixel_clk_div *divs; 1345 + u32 err = 1; /* 1% error tolerance */ 1346 + u32 clk_err; 1347 + int i; 1348 + 1349 + if (ctrl->link_rate == DP_LINK_BW_1_62) { 1350 + divs = clk_divs[0]; 1351 + } else if (ctrl->link_rate == DP_LINK_BW_2_7) { 1352 + divs = clk_divs[1]; 1353 + } else { 1354 + pr_err("%s: Invalid link rate,%d\n", __func__, ctrl->link_rate); 1355 + return false; 1356 + } 1357 + 1358 + for (i = 0; i < EDP_PIXEL_CLK_NUM; i++) { 1359 + clk_err = abs(divs[i].rate - pixel_rate); 1360 + if ((divs[i].rate * err / 100) >= clk_err) { 1361 + if (pm) 1362 + *pm = divs[i].m; 1363 + if (pn) 1364 + *pn = divs[i].n; 1365 + return true; 1366 + } 1367 + } 1368 + 1369 + DBG("pixel clock %d(kHz) not supported", pixel_rate); 1370 + 1371 + return false; 1372 + } 1373 +
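The `msm_edp_ctrl_pixel_clock_valid()` loop at the end of the hunk above accepts a mode only when some entry in the link-rate-specific divider table lands within a 1% error tolerance of the requested pixel clock. A standalone sketch of that check, with hypothetical table values (the real `clk_divs` tables and `struct edp_pixel_clk_div` live elsewhere in the driver):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical divider table entry, mirroring struct edp_pixel_clk_div
 * from the driver: a candidate pixel clock (kHz) with its M/N pair. */
struct pix_clk_div {
	uint32_t rate;
	uint32_t m;
	uint32_t n;
};

/* Accept a requested pixel clock only if some table entry is within the
 * driver's 1% error tolerance; report that entry's M/N on success. */
static bool pixel_clock_valid(const struct pix_clk_div *divs, int ndivs,
			      uint32_t pixel_rate, uint32_t *pm, uint32_t *pn)
{
	const uint32_t err_pct = 1;	/* 1% tolerance, as in the driver */

	for (int i = 0; i < ndivs; i++) {
		uint32_t clk_err = (uint32_t)abs((int)divs[i].rate -
						 (int)pixel_rate);

		if (divs[i].rate * err_pct / 100 >= clk_err) {
			if (pm)
				*pm = divs[i].m;
			if (pn)
				*pn = divs[i].n;
			return true;
		}
	}
	return false;
}
```

Note the integer arithmetic: `rate * err_pct / 100` deliberately truncates, so the tolerance band is at most 1% of the candidate rate.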
+9
drivers/gpu/drm/mxsfb/mxsfb_drv.c
··· 374 374 struct drm_device *drm = platform_get_drvdata(pdev); 375 375 376 376 drm_dev_unregister(drm); 377 + drm_atomic_helper_shutdown(drm); 377 378 mxsfb_unload(drm); 378 379 drm_dev_put(drm); 379 380 380 381 return 0; 382 + } 383 + 384 + static void mxsfb_shutdown(struct platform_device *pdev) 385 + { 386 + struct drm_device *drm = platform_get_drvdata(pdev); 387 + 388 + drm_atomic_helper_shutdown(drm); 381 389 } 382 390 383 391 #ifdef CONFIG_PM_SLEEP ··· 411 403 static struct platform_driver mxsfb_platform_driver = { 412 404 .probe = mxsfb_probe, 413 405 .remove = mxsfb_remove, 406 + .shutdown = mxsfb_shutdown, 414 407 .driver = { 415 408 .name = "mxsfb", 416 409 .of_match_table = mxsfb_dt_ids,
+1
drivers/gpu/drm/nouveau/Kconfig
··· 4 4 depends on DRM && PCI && MMU 5 5 select IOMMU_API 6 6 select FW_LOADER 7 + select DRM_DP_HELPER 7 8 select DRM_KMS_HELPER 8 9 select DRM_TTM 9 10 select DRM_TTM_HELPER
+1 -1
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 35 35 36 36 #include <drm/drm_atomic.h> 37 37 #include <drm/drm_atomic_helper.h> 38 - #include <drm/drm_dp_helper.h> 38 + #include <drm/dp/drm_dp_helper.h> 39 39 #include <drm/drm_edid.h> 40 40 #include <drm/drm_fb_helper.h> 41 41 #include <drm/drm_plane_helper.h>
+1 -1
drivers/gpu/drm/nouveau/nouveau_connector.h
··· 36 36 #include <drm/drm_crtc.h> 37 37 #include <drm/drm_edid.h> 38 38 #include <drm/drm_encoder.h> 39 - #include <drm/drm_dp_helper.h> 39 + #include <drm/dp/drm_dp_helper.h> 40 40 #include <drm/drm_util.h> 41 41 42 42 #include "nouveau_crtc.h"
+16 -1
drivers/gpu/drm/nouveau/nouveau_dp.c
··· 22 22 * Authors: Ben Skeggs 23 23 */ 24 24 25 - #include <drm/drm_dp_helper.h> 25 + #include <drm/dp/drm_dp_helper.h> 26 26 27 27 #include "nouveau_drv.h" 28 28 #include "nouveau_connector.h" ··· 146 146 nv_encoder->dp.link_bw = 27000 * dpcd[DP_MAX_LINK_RATE]; 147 147 nv_encoder->dp.link_nr = 148 148 dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK; 149 + 150 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP && dpcd[DP_DPCD_REV] >= 0x13) { 151 + struct drm_dp_aux *aux = &nv_connector->aux; 152 + int ret, i; 153 + u8 sink_rates[16]; 154 + 155 + ret = drm_dp_dpcd_read(aux, DP_SUPPORTED_LINK_RATES, sink_rates, sizeof(sink_rates)); 156 + if (ret == sizeof(sink_rates)) { 157 + for (i = 0; i < ARRAY_SIZE(sink_rates); i += 2) { 158 + int val = ((sink_rates[i + 1] << 8) | sink_rates[i]) * 200 / 10; 159 + if (val && (i == 0 || val > nv_encoder->dp.link_bw)) 160 + nv_encoder->dp.link_bw = val; 161 + } 162 + } 163 + } 149 164 150 165 NV_DEBUG(drm, "display: %dx%d dpcd 0x%02x\n", 151 166 nv_encoder->dp.link_nr, nv_encoder->dp.link_bw,
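The new eDP block in `nouveau_dp.c` above parses the DPCD `DP_SUPPORTED_LINK_RATES` table introduced in eDP 1.4: up to eight little-endian 16-bit entries in units of 200 kHz, with a zero entry terminating the list. A minimal userspace sketch of the same decoding, returning the highest rate in the 10 kHz units nouveau keeps in `dp.link_bw` (so 270000 corresponds to 2.7 GHz):

```c
#include <stdint.h>

/* Decode a 16-byte DP_SUPPORTED_LINK_RATES dump: up to eight LE16
 * entries in units of 200 kHz, zero-terminated. Returns the highest
 * rate in units of 10 kHz, or 0 if the table is empty. */
static int max_sink_link_bw(const uint8_t raw[16])
{
	int best = 0;

	for (int i = 0; i < 16; i += 2) {
		int val = ((raw[i + 1] << 8) | raw[i]) * 200 / 10;

		if (!val)	/* a zero entry terminates the table */
			break;
		if (val > best)
			best = val;
	}
	return best;
}
```

The diff's guard on `dpcd[DP_DPCD_REV] >= 0x13` matters: on older sinks these DPCD offsets are reserved and the table must not be trusted.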
+2 -2
drivers/gpu/drm/nouveau/nouveau_encoder.h
··· 30 30 #include <subdev/bios/dcb.h> 31 31 32 32 #include <drm/drm_encoder_slave.h> 33 - #include <drm/drm_dp_helper.h> 34 - #include <drm/drm_dp_mst_helper.h> 33 + #include <drm/dp/drm_dp_helper.h> 34 + #include <drm/dp/drm_dp_mst_helper.h> 35 35 #include "dispnv04/disp.h" 36 36 struct nv50_head_atom; 37 37 struct nouveau_connector;
+2 -1
drivers/gpu/drm/nouveau/nouveau_mem.c
··· 162 162 } 163 163 164 164 void 165 - nouveau_mem_del(struct ttm_resource *reg) 165 + nouveau_mem_del(struct ttm_resource_manager *man, struct ttm_resource *reg) 166 166 { 167 167 struct nouveau_mem *mem = nouveau_mem(reg); 168 168 169 169 nouveau_mem_fini(mem); 170 + ttm_resource_fini(man, reg); 170 171 kfree(mem); 171 172 } 172 173
+2 -1
drivers/gpu/drm/nouveau/nouveau_mem.h
··· 23 23 24 24 int nouveau_mem_new(struct nouveau_cli *, u8 kind, u8 comp, 25 25 struct ttm_resource **); 26 - void nouveau_mem_del(struct ttm_resource *); 26 + void nouveau_mem_del(struct ttm_resource_manager *man, 27 + struct ttm_resource *); 27 28 int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page); 28 29 int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *); 29 30 void nouveau_mem_fini(struct nouveau_mem *);
+7 -6
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 36 36 #include <core/tegra.h> 37 37 38 38 static void 39 - nouveau_manager_del(struct ttm_resource_manager *man, struct ttm_resource *reg) 39 + nouveau_manager_del(struct ttm_resource_manager *man, 40 + struct ttm_resource *reg) 40 41 { 41 - nouveau_mem_del(reg); 42 + nouveau_mem_del(man, reg); 42 43 } 43 44 44 45 static int ··· 63 62 64 63 ret = nouveau_mem_vram(*res, nvbo->contig, nvbo->page); 65 64 if (ret) { 66 - nouveau_mem_del(*res); 65 + nouveau_mem_del(man, *res); 67 66 return ret; 68 67 } 69 68 ··· 119 118 ret = nvif_vmm_get(&mem->cli->vmm.vmm, PTES, false, 12, 0, 120 119 (long)(*res)->num_pages << PAGE_SHIFT, &mem->vma[0]); 121 120 if (ret) { 122 - nouveau_mem_del(*res); 121 + nouveau_mem_del(man, *res); 123 122 return ret; 124 123 } 125 124 ··· 164 163 165 164 man->func = &nouveau_vram_manager; 166 165 167 - ttm_resource_manager_init(man, 166 + ttm_resource_manager_init(man, &drm->ttm.bdev, 168 167 drm->gem.vram_available >> PAGE_SHIFT); 169 168 ttm_set_driver_manager(&drm->ttm.bdev, TTM_PL_VRAM, man); 170 169 ttm_resource_manager_set_used(man, true); ··· 211 210 212 211 man->func = func; 213 212 man->use_tt = true; 214 - ttm_resource_manager_init(man, size_pages); 213 + ttm_resource_manager_init(man, &drm->ttm.bdev, size_pages); 215 214 ttm_set_driver_manager(&drm->ttm.bdev, TTM_PL_TT, man); 216 215 ttm_resource_manager_set_used(man, true); 217 216 return 0;
+227 -96
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
··· 41 41 42 42 struct lt_state { 43 43 struct nvkm_dp *dp; 44 + 45 + int repeaters; 46 + int repeater; 47 + 44 48 u8 stat[6]; 45 49 u8 conf[4]; 46 50 bool pc2; ··· 56 52 nvkm_dp_train_sense(struct lt_state *lt, bool pc, u32 delay) 57 53 { 58 54 struct nvkm_dp *dp = lt->dp; 55 + u32 addr; 59 56 int ret; 60 57 61 - if (dp->dpcd[DPCD_RC0E_AUX_RD_INTERVAL]) 62 - mdelay(dp->dpcd[DPCD_RC0E_AUX_RD_INTERVAL] * 4); 63 - else 64 - udelay(delay); 58 + usleep_range(delay, delay * 2); 65 59 66 - ret = nvkm_rdaux(dp->aux, DPCD_LS02, lt->stat, 6); 60 + if (lt->repeater) 61 + addr = DPCD_LTTPR_LANE0_1_STATUS(lt->repeater); 62 + else 63 + addr = DPCD_LS02; 64 + 65 + ret = nvkm_rdaux(dp->aux, addr, &lt->stat[0], 3); 66 + if (ret) 67 + return ret; 68 + 69 + if (lt->repeater) 70 + addr = DPCD_LTTPR_LANE0_1_ADJUST(lt->repeater); 71 + else 72 + addr = DPCD_LS06; 73 + 74 + ret = nvkm_rdaux(dp->aux, addr, &lt->stat[4], 2); 67 75 if (ret) 68 76 return ret; 69 77 ··· 101 85 struct nvbios_dpout info; 102 86 struct nvbios_dpcfg ocfg; 103 87 u8 ver, hdr, cnt, len; 88 + u32 addr; 104 89 u32 data; 105 90 int ret, i; 106 91 ··· 130 113 OUTP_TRACE(&dp->outp, "config lane %d %02x %02x", 131 114 i, lt->conf[i], lpc2); 132 115 116 + if (lt->repeater != lt->repeaters) 117 + continue; 118 + 133 119 data = nvbios_dpout_match(bios, dp->outp.info.hasht, 134 120 dp->outp.info.hashm, 135 121 &ver, &hdr, &cnt, &len, &info); ··· 149 129 ocfg.pe, ocfg.tx_pu); 150 130 } 151 131 152 - ret = nvkm_wraux(dp->aux, DPCD_LC03(0), lt->conf, 4); 132 + if (lt->repeater) 133 + addr = DPCD_LTTPR_LANE0_SET(lt->repeater); 134 + else 135 + addr = DPCD_LC03(0); 136 + 137 + ret = nvkm_wraux(dp->aux, addr, lt->conf, 4); 153 138 if (ret) 154 139 return ret; 155 140 ··· 171 146 nvkm_dp_train_pattern(struct lt_state *lt, u8 pattern) 172 147 { 173 148 struct nvkm_dp *dp = lt->dp; 149 + u32 addr; 174 150 u8 sink_tp; 175 151 176 152 OUTP_TRACE(&dp->outp, "training pattern %d", pattern); 177 153 
dp->outp.ior->func->dp.pattern(dp->outp.ior, pattern); 178 154 179 - nvkm_rdaux(dp->aux, DPCD_LC02, &sink_tp, 1); 155 + if (lt->repeater) 156 + addr = DPCD_LTTPR_PATTERN_SET(lt->repeater); 157 + else 158 + addr = DPCD_LC02; 159 + 160 + nvkm_rdaux(dp->aux, addr, &sink_tp, 1); 180 161 sink_tp &= ~DPCD_LC02_TRAINING_PATTERN_SET; 181 - sink_tp |= pattern; 182 - nvkm_wraux(dp->aux, DPCD_LC02, &sink_tp, 1); 162 + sink_tp |= (pattern != 4) ? pattern : 7; 163 + 164 + if (pattern != 0) 165 + sink_tp |= DPCD_LC02_SCRAMBLING_DISABLE; 166 + else 167 + sink_tp &= ~DPCD_LC02_SCRAMBLING_DISABLE; 168 + nvkm_wraux(dp->aux, addr, &sink_tp, 1); 183 169 } 184 170 185 171 static int 186 172 nvkm_dp_train_eq(struct lt_state *lt) 187 173 { 174 + struct nvkm_i2c_aux *aux = lt->dp->aux; 188 175 bool eq_done = false, cr_done = true; 189 - int tries = 0, i; 176 + int tries = 0, usec = 0, i; 177 + u8 data; 190 178 191 - if (lt->dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED) 192 - nvkm_dp_train_pattern(lt, 3); 193 - else 194 - nvkm_dp_train_pattern(lt, 2); 179 + if (lt->repeater) { 180 + if (!nvkm_rdaux(aux, DPCD_LTTPR_AUX_RD_INTERVAL(lt->repeater), &data, sizeof(data))) 181 + usec = (data & DPCD_RC0E_AUX_RD_INTERVAL) * 4000; 182 + 183 + nvkm_dp_train_pattern(lt, 4); 184 + } else { 185 + if (lt->dp->dpcd[DPCD_RC00_DPCD_REV] >= 0x14 && 186 + lt->dp->dpcd[DPCD_RC03] & DPCD_RC03_TPS4_SUPPORTED) 187 + nvkm_dp_train_pattern(lt, 4); 188 + else 189 + if (lt->dp->dpcd[DPCD_RC00_DPCD_REV] >= 0x12 && 190 + lt->dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED) 191 + nvkm_dp_train_pattern(lt, 3); 192 + else 193 + nvkm_dp_train_pattern(lt, 2); 194 + 195 + usec = (lt->dp->dpcd[DPCD_RC0E] & DPCD_RC0E_AUX_RD_INTERVAL) * 4000; 196 + } 195 197 196 198 do { 197 199 if ((tries && 198 200 nvkm_dp_train_drive(lt, lt->pc2)) || 199 - nvkm_dp_train_sense(lt, lt->pc2, 400)) 201 + nvkm_dp_train_sense(lt, lt->pc2, usec ? 
usec : 400)) 200 202 break; 201 203 202 204 eq_done = !!(lt->stat[2] & DPCD_LS04_INTERLANE_ALIGN_DONE); ··· 245 193 { 246 194 bool cr_done = false, abort = false; 247 195 int voltage = lt->conf[0] & DPCD_LC03_VOLTAGE_SWING_SET; 248 - int tries = 0, i; 196 + int tries = 0, usec = 0, i; 249 197 250 198 nvkm_dp_train_pattern(lt, 1); 251 199 200 + if (lt->dp->dpcd[DPCD_RC00_DPCD_REV] < 0x14 && !lt->repeater) 201 + usec = (lt->dp->dpcd[DPCD_RC0E] & DPCD_RC0E_AUX_RD_INTERVAL) * 4000; 202 + 252 203 do { 253 204 if (nvkm_dp_train_drive(lt, false) || 254 - nvkm_dp_train_sense(lt, false, 100)) 205 + nvkm_dp_train_sense(lt, false, usec ? usec : 100)) 255 206 break; 256 207 257 208 cr_done = true; ··· 278 223 } 279 224 280 225 static int 281 - nvkm_dp_train_links(struct nvkm_dp *dp) 226 + nvkm_dp_train_links(struct nvkm_dp *dp, int rate) 282 227 { 283 228 struct nvkm_ior *ior = dp->outp.ior; 284 229 struct nvkm_disp *disp = dp->outp.disp; ··· 288 233 .dp = dp, 289 234 }; 290 235 u32 lnkcmp; 291 - u8 sink[2]; 236 + u8 sink[2], data; 292 237 int ret; 293 238 294 239 OUTP_DBG(&dp->outp, "training %d x %d MB/s", 295 240 ior->dp.nr, ior->dp.bw * 27); 296 241 297 242 /* Intersect misc. capabilities of the OR and sink. */ 243 + if (disp->engine.subdev.device->chipset < 0x110) 244 + dp->dpcd[DPCD_RC03] &= ~DPCD_RC03_TPS4_SUPPORTED; 298 245 if (disp->engine.subdev.device->chipset < 0xd0) 299 246 dp->dpcd[DPCD_RC02] &= ~DPCD_RC02_TPS3_SUPPORTED; 300 247 lt.pc2 = dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED; ··· 344 287 345 288 ior->func->dp.power(ior, ior->dp.nr); 346 289 290 + /* Select LTTPR non-transparent mode if we have a valid configuration, 291 + * use transparent mode otherwise. 
292 + */ 293 + if (dp->lttpr[0] >= 0x14) { 294 + data = DPCD_LTTPR_MODE_TRANSPARENT; 295 + nvkm_wraux(dp->aux, DPCD_LTTPR_MODE, &data, sizeof(data)); 296 + 297 + if (dp->lttprs) { 298 + data = DPCD_LTTPR_MODE_NON_TRANSPARENT; 299 + nvkm_wraux(dp->aux, DPCD_LTTPR_MODE, &data, sizeof(data)); 300 + lt.repeaters = dp->lttprs; 301 + } 302 + } 303 + 347 304 /* Set desired link configuration on the sink. */ 348 - sink[0] = ior->dp.bw; 305 + sink[0] = (dp->rate[rate].dpcd < 0) ? ior->dp.bw : 0; 349 306 sink[1] = ior->dp.nr; 350 307 if (ior->dp.ef) 351 308 sink[1] |= DPCD_LC01_ENHANCED_FRAME_EN; ··· 368 297 if (ret) 369 298 return ret; 370 299 300 + if (dp->rate[rate].dpcd >= 0) { 301 + ret = nvkm_rdaux(dp->aux, DPCD_LC15_LINK_RATE_SET, &sink[0], sizeof(sink[0])); 302 + if (ret) 303 + return ret; 304 + 305 + sink[0] &= ~DPCD_LC15_LINK_RATE_SET_MASK; 306 + sink[0] |= dp->rate[rate].dpcd; 307 + 308 + ret = nvkm_wraux(dp->aux, DPCD_LC15_LINK_RATE_SET, &sink[0], sizeof(sink[0])); 309 + if (ret) 310 + return ret; 311 + } 312 + 371 313 /* Attempt to train the link in this configuration. 
*/ 372 - memset(lt.stat, 0x00, sizeof(lt.stat)); 373 - ret = nvkm_dp_train_cr(&lt); 374 - if (ret == 0) 375 - ret = nvkm_dp_train_eq(&lt); 376 - nvkm_dp_train_pattern(&lt, 0); 314 + for (lt.repeater = lt.repeaters; lt.repeater >= 0; lt.repeater--) { 315 + if (lt.repeater) 316 + OUTP_DBG(&dp->outp, "training LTTPR%d", lt.repeater); 317 + else 318 + OUTP_DBG(&dp->outp, "training sink"); 319 + 320 + memset(lt.stat, 0x00, sizeof(lt.stat)); 321 + ret = nvkm_dp_train_cr(&lt); 322 + if (ret == 0) 323 + ret = nvkm_dp_train_eq(&lt); 324 + nvkm_dp_train_pattern(&lt, 0); 325 + } 326 + 377 327 return ret; 378 328 } 379 329 ··· 437 345 } 438 346 } 439 347 440 - static const struct dp_rates { 441 - u32 rate; 442 - u8 bw; 443 - u8 nr; 444 - } nvkm_dp_rates[] = { 445 - { 2160000, 0x14, 4 }, 446 - { 1080000, 0x0a, 4 }, 447 - { 1080000, 0x14, 2 }, 448 - { 648000, 0x06, 4 }, 449 - { 540000, 0x0a, 2 }, 450 - { 540000, 0x14, 1 }, 451 - { 324000, 0x06, 2 }, 452 - { 270000, 0x0a, 1 }, 453 - { 162000, 0x06, 1 }, 454 - {} 455 - }; 456 - 457 348 static int 458 349 nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps) 459 350 { 460 351 struct nvkm_ior *ior = dp->outp.ior; 461 - const u8 sink_nr = dp->dpcd[DPCD_RC02] & DPCD_RC02_MAX_LANE_COUNT; 462 - const u8 sink_bw = dp->dpcd[DPCD_RC01_MAX_LINK_RATE]; 463 - const u8 outp_nr = dp->outp.info.dpconf.link_nr; 464 - const u8 outp_bw = dp->outp.info.dpconf.link_bw; 465 - const struct dp_rates *failsafe = NULL, *cfg; 466 - int ret = -EINVAL; 352 + int ret = -EINVAL, nr, rate; 467 353 u8 pwr; 468 - 469 - /* Find the lowest configuration of the OR that can support 470 - * the required link rate. 471 - * 472 - * We will refuse to program the OR to lower rates, even if 473 - * link training fails at higher rates (or even if the sink 474 - * can't support the rate at all, though the DD is supposed 475 - * to prevent such situations from happening). 
476 - * 477 - * Attempting to do so can cause the entire display to hang, 478 - * and it's better to have a failed modeset than that. 479 - */ 480 - for (cfg = nvkm_dp_rates; cfg->rate; cfg++) { 481 - if (cfg->nr <= outp_nr && cfg->bw <= outp_bw) { 482 - /* Try to respect sink limits too when selecting 483 - * lowest link configuration. 484 - */ 485 - if (!failsafe || 486 - (cfg->nr <= sink_nr && cfg->bw <= sink_bw)) 487 - failsafe = cfg; 488 - } 489 - 490 - if (failsafe && cfg[1].rate < dataKBps) 491 - break; 492 - } 493 - 494 - if (WARN_ON(!failsafe)) 495 - return ret; 496 354 497 355 /* Ensure sink is not in a low-power state. */ 498 356 if (!nvkm_rdaux(dp->aux, DPCD_SC00, &pwr, 1)) { ··· 453 411 } 454 412 } 455 413 456 - /* Link training. */ 457 - OUTP_DBG(&dp->outp, "training (min: %d x %d MB/s)", 458 - failsafe->nr, failsafe->bw * 27); 459 - nvkm_dp_train_init(dp); 460 - for (cfg = nvkm_dp_rates; ret < 0 && cfg <= failsafe; cfg++) { 461 - /* Skip configurations not supported by both OR and sink. */ 462 - if ((cfg->nr > outp_nr || cfg->bw > outp_bw || 463 - cfg->nr > sink_nr || cfg->bw > sink_bw)) { 464 - if (cfg != failsafe) 465 - continue; 466 - OUTP_ERR(&dp->outp, "link rate unsupported by sink"); 467 - } 468 - ior->dp.mst = dp->lt.mst; 469 - ior->dp.ef = dp->dpcd[DPCD_RC02] & DPCD_RC02_ENHANCED_FRAME_CAP; 470 - ior->dp.bw = cfg->bw; 471 - ior->dp.nr = cfg->nr; 414 + ior->dp.mst = dp->lt.mst; 415 + ior->dp.ef = dp->dpcd[DPCD_RC02] & DPCD_RC02_ENHANCED_FRAME_CAP; 416 + ior->dp.nr = 0; 472 417 473 - /* Program selected link configuration. */ 474 - ret = nvkm_dp_train_links(dp); 418 + /* Link training. */ 419 + OUTP_DBG(&dp->outp, "training"); 420 + nvkm_dp_train_init(dp); 421 + for (nr = dp->links; ret < 0 && nr; nr >>= 1) { 422 + for (rate = 0; ret < 0 && rate < dp->rates; rate++) { 423 + if (dp->rate[rate].rate * nr >= dataKBps || WARN_ON(!ior->dp.nr)) { 424 + /* Program selected link configuration. 
*/ 425 + ior->dp.bw = dp->rate[rate].rate / 27000; 426 + ior->dp.nr = nr; 427 + ret = nvkm_dp_train_links(dp, rate); 428 + } 429 + } 475 430 } 476 431 nvkm_dp_train_fini(dp); 477 432 if (ret < 0) ··· 566 527 } 567 528 568 529 static bool 530 + nvkm_dp_enable_supported_link_rates(struct nvkm_dp *dp) 531 + { 532 + u8 sink_rates[DPCD_RC10_SUPPORTED_LINK_RATES__SIZE]; 533 + int i, j, k; 534 + 535 + if (dp->outp.conn->info.type != DCB_CONNECTOR_eDP || 536 + dp->dpcd[DPCD_RC00_DPCD_REV] < 0x13 || 537 + nvkm_rdaux(dp->aux, DPCD_RC10_SUPPORTED_LINK_RATES(0), sink_rates, sizeof(sink_rates))) 538 + return false; 539 + 540 + for (i = 0; i < ARRAY_SIZE(sink_rates); i += 2) { 541 + const u32 rate = ((sink_rates[i + 1] << 8) | sink_rates[i]) * 200 / 10; 542 + 543 + if (!rate || WARN_ON(dp->rates == ARRAY_SIZE(dp->rate))) 544 + break; 545 + 546 + if (rate > dp->outp.info.dpconf.link_bw * 27000) { 547 + OUTP_DBG(&dp->outp, "rate %d !outp", rate); 548 + continue; 549 + } 550 + 551 + for (j = 0; j < dp->rates; j++) { 552 + if (rate > dp->rate[j].rate) { 553 + for (k = dp->rates; k > j; k--) 554 + dp->rate[k] = dp->rate[k - 1]; 555 + break; 556 + } 557 + } 558 + 559 + dp->rate[j].dpcd = i / 2; 560 + dp->rate[j].rate = rate; 561 + dp->rates++; 562 + } 563 + 564 + for (i = 0; i < dp->rates; i++) 565 + OUTP_DBG(&dp->outp, "link_rate[%d] = %d", dp->rate[i].dpcd, dp->rate[i].rate); 566 + 567 + return dp->rates != 0; 568 + } 569 + 570 + static bool 569 571 nvkm_dp_enable(struct nvkm_dp *dp, bool enable) 570 572 { 571 573 struct nvkm_i2c_aux *aux = dp->aux; ··· 618 538 dp->present = true; 619 539 } 620 540 621 - if (!nvkm_rdaux(aux, DPCD_RC00_DPCD_REV, dp->dpcd, 622 - sizeof(dp->dpcd))) 541 + /* Detect any LTTPRs before reading DPCD receiver caps. 
*/ 542 + if (!nvkm_rdaux(aux, DPCD_LTTPR_REV, dp->lttpr, sizeof(dp->lttpr)) && 543 + dp->lttpr[0] >= 0x14 && dp->lttpr[2]) { 544 + switch (dp->lttpr[2]) { 545 + case 0x80: dp->lttprs = 1; break; 546 + case 0x40: dp->lttprs = 2; break; 547 + case 0x20: dp->lttprs = 3; break; 548 + case 0x10: dp->lttprs = 4; break; 549 + case 0x08: dp->lttprs = 5; break; 550 + case 0x04: dp->lttprs = 6; break; 551 + case 0x02: dp->lttprs = 7; break; 552 + case 0x01: dp->lttprs = 8; break; 553 + default: 554 + /* Unknown LTTPR count, we'll switch to transparent mode. */ 555 + WARN_ON(1); 556 + dp->lttprs = 0; 557 + break; 558 + } 559 + } else { 560 + /* No LTTPR support, or zero LTTPR count - don't touch it at all. */ 561 + memset(dp->lttpr, 0x00, sizeof(dp->lttpr)); 562 + } 563 + 564 + if (!nvkm_rdaux(aux, DPCD_RC00_DPCD_REV, dp->dpcd, sizeof(dp->dpcd))) { 565 + const u8 rates[] = { 0x1e, 0x14, 0x0a, 0x06, 0 }; 566 + const u8 *rate; 567 + int rate_max; 568 + 569 + dp->rates = 0; 570 + dp->links = dp->dpcd[DPCD_RC02] & DPCD_RC02_MAX_LANE_COUNT; 571 + dp->links = min(dp->links, dp->outp.info.dpconf.link_nr); 572 + if (dp->lttprs && dp->lttpr[4]) 573 + dp->links = min_t(int, dp->links, dp->lttpr[4]); 574 + 575 + rate_max = dp->dpcd[DPCD_RC01_MAX_LINK_RATE]; 576 + rate_max = min(rate_max, dp->outp.info.dpconf.link_bw); 577 + if (dp->lttprs && dp->lttpr[1]) 578 + rate_max = min_t(int, rate_max, dp->lttpr[1]); 579 + 580 + if (!nvkm_dp_enable_supported_link_rates(dp)) { 581 + for (rate = rates; *rate; rate++) { 582 + if (*rate <= rate_max) { 583 + if (WARN_ON(dp->rates == ARRAY_SIZE(dp->rate))) 584 + break; 585 + 586 + dp->rate[dp->rates].dpcd = -1; 587 + dp->rate[dp->rates].rate = *rate * 27000; 588 + dp->rates++; 589 + } 590 + } 591 + } 592 + 623 593 return true; 594 + } 624 595 } 625 596 626 597 if (dp->present) {
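The switch over `dp->lttpr[2]` above maps the one-hot PHY_REPEATER_CNT encoding (0x80 means one repeater, down to 0x01 meaning eight) to a plain count. An equivalent closed-form sketch of that mapping, returning 0 for any value that is not exactly one set bit, matching the diff's fallback to transparent mode on unknown counts:

```c
#include <stdint.h>

/* DPCD 0xf0002 (PHY_REPEATER_CNT) encodes the LTTPR count one-hot:
 * 0x80 -> 1 repeater, 0x40 -> 2, ... 0x01 -> 8. Returns 0 (treat as
 * "no repeaters") unless exactly one bit is set. */
static int lttpr_count(uint8_t cnt)
{
	int n;

	if (cnt == 0 || (cnt & (cnt - 1)) != 0)
		return 0;

	for (n = 1; !(cnt & 0x80); n++)
		cnt <<= 1;
	return n;
}
```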
+29 -6
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
··· 9 9 #include <subdev/bios/dp.h> 10 10 11 11 struct nvkm_dp { 12 - union { 13 - struct nvkm_outp base; 14 - struct nvkm_outp outp; 15 - }; 12 + struct nvkm_outp outp; 16 13 17 14 struct nvbios_dpout info; 18 15 u8 version; ··· 18 21 19 22 struct nvkm_notify hpd; 20 23 bool present; 24 + u8 lttpr[6]; 25 + u8 lttprs; 21 26 u8 dpcd[16]; 27 + 28 + struct { 29 + int dpcd; /* -1, or index into SUPPORTED_LINK_RATES table */ 30 + u32 rate; 31 + } rate[8]; 32 + int rates; 33 + int links; 22 34 23 35 struct mutex mutex; 24 36 struct { ··· 48 42 #define DPCD_RC02_TPS3_SUPPORTED 0x40 49 43 #define DPCD_RC02_MAX_LANE_COUNT 0x1f 50 44 #define DPCD_RC03 0x00003 45 + #define DPCD_RC03_TPS4_SUPPORTED 0x80 51 46 #define DPCD_RC03_MAX_DOWNSPREAD 0x01 52 - #define DPCD_RC0E_AUX_RD_INTERVAL 0x0000e 47 + #define DPCD_RC0E 0x0000e 48 + #define DPCD_RC0E_AUX_RD_INTERVAL 0x7f 49 + #define DPCD_RC10_SUPPORTED_LINK_RATES(i) 0x00010 50 + #define DPCD_RC10_SUPPORTED_LINK_RATES__SIZE 16 53 51 54 52 /* DPCD Link Configuration */ 55 53 #define DPCD_LC00_LINK_BW_SET 0x00100 ··· 61 51 #define DPCD_LC01_ENHANCED_FRAME_EN 0x80 62 52 #define DPCD_LC01_LANE_COUNT_SET 0x1f 63 53 #define DPCD_LC02 0x00102 64 - #define DPCD_LC02_TRAINING_PATTERN_SET 0x03 54 + #define DPCD_LC02_TRAINING_PATTERN_SET 0x0f 55 + #define DPCD_LC02_SCRAMBLING_DISABLE 0x20 65 56 #define DPCD_LC03(l) ((l) + 0x00103) 66 57 #define DPCD_LC03_MAX_PRE_EMPHASIS_REACHED 0x20 67 58 #define DPCD_LC03_PRE_EMPHASIS_SET 0x18 ··· 78 67 #define DPCD_LC10_LANE3_POST_CURSOR2_SET 0x30 79 68 #define DPCD_LC10_LANE2_MAX_POST_CURSOR2_REACHED 0x04 80 69 #define DPCD_LC10_LANE2_POST_CURSOR2_SET 0x03 70 + #define DPCD_LC15_LINK_RATE_SET 0x00115 71 + #define DPCD_LC15_LINK_RATE_SET_MASK 0x07 81 72 82 73 /* DPCD Link/Sink Status */ 83 74 #define DPCD_LS02 0x00202 ··· 121 108 #define DPCD_SC00_SET_POWER 0x03 122 109 #define DPCD_SC00_SET_POWER_D0 0x01 123 110 #define DPCD_SC00_SET_POWER_D3 0x03 111 + 112 + #define DPCD_LTTPR_REV 0xf0000 113 + #define 
DPCD_LTTPR_MODE 0xf0003 114 + #define DPCD_LTTPR_MODE_TRANSPARENT 0x55 115 + #define DPCD_LTTPR_MODE_NON_TRANSPARENT 0xaa 116 + #define DPCD_LTTPR_PATTERN_SET(i) ((i - 1) * 0x50 + 0xf0010) 117 + #define DPCD_LTTPR_LANE0_SET(i) ((i - 1) * 0x50 + 0xf0011) 118 + #define DPCD_LTTPR_AUX_RD_INTERVAL(i) ((i - 1) * 0x50 + 0xf0020) 119 + #define DPCD_LTTPR_LANE0_1_STATUS(i) ((i - 1) * 0x50 + 0xf0030) 120 + #define DPCD_LTTPR_LANE0_1_ADJUST(i) ((i - 1) * 0x50 + 0xf0033) 124 121 #endif
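The `DPCD_LTTPR_*(i)` macros above all share one shape: each repeater owns a 0x50-byte block of DPCD registers, and the 1-based repeater index selects the block. A tiny helper illustrating the address arithmetic:

```c
#include <stdint.h>

/* Per-repeater LTTPR DPCD address: the repeater-1 base offset plus
 * (i - 1) * 0x50, exactly as in the DPCD_LTTPR_*(i) macros. */
static uint32_t lttpr_reg(uint32_t base, int repeater)
{
	return base + (uint32_t)(repeater - 1) * 0x50;
}
```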
+12 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorg94.c
··· 77 77 { 78 78 struct nvkm_device *device = sor->disp->engine.subdev.device; 79 79 const u32 loff = nv50_sor_link(sor); 80 - nvkm_mask(device, 0x61c10c + loff, 0x0f000000, pattern << 24); 80 + u32 data; 81 + 82 + switch (pattern) { 83 + case 0: data = 0x00001000; break; 84 + case 1: data = 0x01000000; break; 85 + case 2: data = 0x02000000; break; 86 + default: 87 + WARN_ON(1); 88 + return; 89 + } 90 + 91 + nvkm_mask(device, 0x61c10c + loff, 0x0f001000, data); 81 92 } 82 93 83 94 void
+4
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorga102.c
··· 37 37 case 0x0a: clksor |= 0x00040000; break; 38 38 case 0x14: clksor |= 0x00080000; break; 39 39 case 0x1e: clksor |= 0x000c0000; break; 40 + case 0x08: clksor |= 0x00100000; break; 41 + case 0x09: clksor |= 0x00140000; break; 42 + case 0x0c: clksor |= 0x00180000; break; 43 + case 0x10: clksor |= 0x001c0000; break; 40 44 default: 41 45 WARN_ON(1); 42 46 return -EINVAL;
+13 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorgf119.c
··· 92 92 { 93 93 struct nvkm_device *device = sor->disp->engine.subdev.device; 94 94 const u32 soff = nv50_ior_base(sor); 95 - nvkm_mask(device, 0x61c110 + soff, 0x0f0f0f0f, 0x01010101 * pattern); 95 + u32 data; 96 + 97 + switch (pattern) { 98 + case 0: data = 0x10101010; break; 99 + case 1: data = 0x01010101; break; 100 + case 2: data = 0x02020202; break; 101 + case 3: data = 0x03030303; break; 102 + default: 103 + WARN_ON(1); 104 + return; 105 + } 106 + 107 + nvkm_mask(device, 0x61c110 + soff, 0x1f1f1f1f, data); 96 108 } 97 109 98 110 int
+15 -3
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorgm107.c
··· 28 28 { 29 29 struct nvkm_device *device = sor->disp->engine.subdev.device; 30 30 const u32 soff = nv50_ior_base(sor); 31 - const u32 data = 0x01010101 * pattern; 31 + u32 mask = 0x1f1f1f1f, data; 32 + 33 + switch (pattern) { 34 + case 0: data = 0x10101010; break; 35 + case 1: data = 0x01010101; break; 36 + case 2: data = 0x02020202; break; 37 + case 3: data = 0x03030303; break; 38 + case 4: data = 0x1b1b1b1b; break; 39 + default: 40 + WARN_ON(1); 41 + return; 42 + } 43 + 32 44 if (sor->asy.link & 1) 33 - nvkm_mask(device, 0x61c110 + soff, 0x0f0f0f0f, data); 45 + nvkm_mask(device, 0x61c110 + soff, mask, data); 34 46 else 35 - nvkm_mask(device, 0x61c12c + soff, 0x0f0f0f0f, data); 47 + nvkm_mask(device, 0x61c12c + soff, mask, data); 36 48 } 37 49 38 50 static const struct nvkm_ior_func
+5 -4
drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
··· 142 142 143 143 hsfw->imem_size = desc->code_size; 144 144 hsfw->imem_tag = desc->start_tag; 145 - hsfw->imem = kmalloc(desc->code_size, GFP_KERNEL); 146 - memcpy(hsfw->imem, data + desc->code_off, desc->code_size); 147 - 145 + hsfw->imem = kmemdup(data + desc->code_off, desc->code_size, GFP_KERNEL); 148 146 nvkm_firmware_put(fw); 149 - return 0; 147 + if (!hsfw->imem) 148 + return -ENOMEM; 149 + else 150 + return 0; 150 151 } 151 152 152 153 int
+1
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
··· 93 93 exp_info.size = omap_gem_mmap_size(obj); 94 94 exp_info.flags = flags; 95 95 exp_info.priv = obj; 96 + exp_info.resv = obj->resv; 96 97 97 98 return drm_gem_dmabuf_export(obj->dev, &exp_info); 98 99 }
+4 -4
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 86 86 _INIT_DCS_CMD(0x0F, 0x73), 87 87 _INIT_DCS_CMD(0x95, 0xE6), 88 88 _INIT_DCS_CMD(0x96, 0xF0), 89 - _INIT_DCS_CMD(0x30, 0x11), 89 + _INIT_DCS_CMD(0x30, 0x00), 90 90 _INIT_DCS_CMD(0x6D, 0x66), 91 91 _INIT_DCS_CMD(0x75, 0xA2), 92 92 _INIT_DCS_CMD(0x77, 0x3B), ··· 112 112 _INIT_DCS_CMD(0xB1, 0x00, 0xD2, 0x01, 0x0B, 0x01, 0x34, 0x01, 0x76, 0x01, 0xA3, 0x01, 0xEF, 0x02, 0x27, 0x02, 0x29), 113 113 _INIT_DCS_CMD(0xB2, 0x02, 0x5F, 0x02, 0x9E, 0x02, 0xC9, 0x03, 0x00, 0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73), 114 114 115 - _INIT_DCS_CMD(0xB3, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xA7, 0x03, 0xCF, 0x03, 0xDE, 0x03, 0xE0), 115 + _INIT_DCS_CMD(0xB3, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xAF, 0x03, 0xDF, 0x03, 0xF5, 0x03, 0xE0), 116 116 _INIT_DCS_CMD(0xB4, 0x00, 0x00, 0x00, 0x1B, 0x00, 0x45, 0x00, 0x65, 0x00, 0x81, 0x00, 0x99, 0x00, 0xAE, 0x00, 0xC1), 117 117 _INIT_DCS_CMD(0xB5, 0x00, 0xD2, 0x01, 0x0B, 0x01, 0x34, 0x01, 0x76, 0x01, 0xA3, 0x01, 0xEF, 0x02, 0x27, 0x02, 0x29), 118 118 _INIT_DCS_CMD(0xB6, 0x02, 0x5F, 0x02, 0x9E, 0x02, 0xC9, 0x03, 0x00, 0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73), 119 - _INIT_DCS_CMD(0xB7, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xA7, 0x03, 0xCF, 0x03, 0xDE, 0x03, 0xE0), 119 + _INIT_DCS_CMD(0xB7, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xAF, 0x03, 0xDF, 0x03, 0xF5, 0x03, 0xE0), 120 120 121 121 _INIT_DCS_CMD(0xB8, 0x00, 0x00, 0x00, 0x1B, 0x00, 0x45, 0x00, 0x65, 0x00, 0x81, 0x00, 0x99, 0x00, 0xAE, 0x00, 0xC1), 122 122 _INIT_DCS_CMD(0xB9, 0x00, 0xD2, 0x01, 0x0B, 0x01, 0x34, 0x01, 0x76, 0x01, 0xA3, 0x01, 0xEF, 0x02, 0x27, 0x02, 0x29), 123 123 _INIT_DCS_CMD(0xBA, 0x02, 0x5F, 0x02, 0x9E, 0x02, 0xC9, 0x03, 0x00, 0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73), 124 124 125 - _INIT_DCS_CMD(0xBB, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xA7, 0x03, 0xCF, 0x03, 0xDE, 0x03, 0xE0), 125 + _INIT_DCS_CMD(0xBB, 0x03, 0x86, 0x03, 0x9A, 0x03, 0xAF, 0x03, 0xDF, 0x03, 0xF5, 0x03, 0xE0), 126 126 _INIT_DCS_CMD(0xFF, 0x24), 127 127 _INIT_DCS_CMD(0xFB, 0x01), 128 128
+19 -2
drivers/gpu/drm/panel/panel-edp.c
··· 36 36 37 37 #include <drm/drm_crtc.h> 38 38 #include <drm/drm_device.h> 39 - #include <drm/drm_dp_aux_bus.h> 40 - #include <drm/drm_dp_helper.h> 39 + #include <drm/dp/drm_dp_aux_bus.h> 40 + #include <drm/dp/drm_dp_helper.h> 41 41 #include <drm/drm_panel.h> 42 42 43 43 /** ··· 1745 1745 .enable = 50, 1746 1746 }; 1747 1747 1748 + static const struct panel_delay delay_200_500_e80_d50 = { 1749 + .hpd_absent = 200, 1750 + .unprepare = 500, 1751 + .enable = 80, 1752 + .disable = 50, 1753 + }; 1754 + 1755 + static const struct panel_delay delay_100_500_e200 = { 1756 + .hpd_absent = 100, 1757 + .unprepare = 500, 1758 + .enable = 200, 1759 + }; 1760 + 1748 1761 #define EDP_PANEL_ENTRY(vend_chr_0, vend_chr_1, vend_chr_2, product_id, _delay, _name) \ 1749 1762 { \ 1750 1763 .name = _name, \ ··· 1781 1768 EDP_PANEL_ENTRY('B', 'O', 'E', 0x07d1, &boe_nv133fhm_n61.delay, "NV133FHM-N61"), 1782 1769 EDP_PANEL_ENTRY('B', 'O', 'E', 0x082d, &boe_nv133fhm_n61.delay, "NV133FHM-N62"), 1783 1770 EDP_PANEL_ENTRY('B', 'O', 'E', 0x098d, &boe_nv110wtm_n61.delay, "NV110WTM-N61"), 1771 + EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"), 1784 1772 1785 1773 EDP_PANEL_ENTRY('C', 'M', 'N', 0x114c, &innolux_n116bca_ea1.delay, "N116BCA-EA1"), 1786 1774 1787 1775 EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"), 1776 + EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"), 1788 1777 1789 1778 EDP_PANEL_ENTRY('S', 'H', 'P', 0x154c, &delay_200_500_p2e100, "LQ116M1JW10"), 1779 + 1780 + EDP_PANEL_ENTRY('S', 'T', 'A', 0x0100, &delay_100_500_e200, "2081116HHD028001-51D"), 1790 1781 1791 1782 { /* sentinal */ } 1792 1783 };
+2 -2
drivers/gpu/drm/panel/panel-samsung-atna33xc20.c
··· 14 14 #include <linux/pm_runtime.h> 15 15 #include <linux/regulator/consumer.h> 16 16 17 - #include <drm/drm_dp_aux_bus.h> 18 - #include <drm/drm_dp_helper.h> 17 + #include <drm/dp/drm_dp_aux_bus.h> 18 + #include <drm/dp/drm_dp_helper.h> 19 19 #include <drm/drm_edid.h> 20 20 #include <drm/drm_panel.h> 21 21
+33
drivers/gpu/drm/panel/panel-simple.c
··· 2525 2525 .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2526 2526 }; 2527 2527 2528 + static const struct display_timing multi_inno_mi0700s4t_6_timing = { 2529 + .pixelclock = { 29000000, 33000000, 38000000 }, 2530 + .hactive = { 800, 800, 800 }, 2531 + .hfront_porch = { 180, 210, 240 }, 2532 + .hback_porch = { 16, 16, 16 }, 2533 + .hsync_len = { 30, 30, 30 }, 2534 + .vactive = { 480, 480, 480 }, 2535 + .vfront_porch = { 12, 22, 32 }, 2536 + .vback_porch = { 10, 10, 10 }, 2537 + .vsync_len = { 13, 13, 13 }, 2538 + .flags = DISPLAY_FLAGS_HSYNC_LOW | DISPLAY_FLAGS_VSYNC_LOW | 2539 + DISPLAY_FLAGS_DE_HIGH | DISPLAY_FLAGS_PIXDATA_POSEDGE | 2540 + DISPLAY_FLAGS_SYNC_POSEDGE, 2541 + }; 2542 + 2543 + static const struct panel_desc multi_inno_mi0700s4t_6 = { 2544 + .timings = &multi_inno_mi0700s4t_6_timing, 2545 + .num_timings = 1, 2546 + .bpc = 8, 2547 + .size = { 2548 + .width = 154, 2549 + .height = 86, 2550 + }, 2551 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2552 + .bus_flags = DRM_BUS_FLAG_DE_HIGH | 2553 + DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE | 2554 + DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE, 2555 + .connector_type = DRM_MODE_CONNECTOR_DPI, 2556 + }; 2557 + 2528 2558 static const struct display_timing multi_inno_mi1010ait_1cp_timing = { 2529 2559 .pixelclock = { 68900000, 70000000, 73400000 }, 2530 2560 .hactive = { 1280, 1280, 1280 }, ··· 3901 3871 }, { 3902 3872 .compatible = "mitsubishi,aa070mc01-ca1", 3903 3873 .data = &mitsubishi_aa070mc01, 3874 + }, { 3875 + .compatible = "multi-inno,mi0700s4t-6", 3876 + .data = &multi_inno_mi0700s4t_6, 3904 3877 }, { 3905 3878 .compatible = "multi-inno,mi1010ait-1cp", 3906 3879 .data = &multi_inno_mi1010ait_1cp,
+7 -205
drivers/gpu/drm/panfrost/panfrost_features.h
··· 12 12 HW_FEATURE_JOBCHAIN_DISAMBIGUATION, 13 13 HW_FEATURE_PWRON_DURING_PWROFF_TRANS, 14 14 HW_FEATURE_XAFFINITY, 15 - HW_FEATURE_OUT_OF_ORDER_EXEC, 16 - HW_FEATURE_MRT, 17 - HW_FEATURE_BRNDOUT_CC, 18 - HW_FEATURE_INTERPIPE_REG_ALIASING, 19 - HW_FEATURE_LD_ST_TILEBUFFER, 20 - HW_FEATURE_MSAA_16X, 21 - HW_FEATURE_32_BIT_UNIFORM_ADDRESS, 22 - HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL, 23 - HW_FEATURE_OPTIMIZED_COVERAGE_MASK, 24 - HW_FEATURE_T7XX_PAIRING_RULES, 25 - HW_FEATURE_LD_ST_LEA_TEX, 26 - HW_FEATURE_LINEAR_FILTER_FLOAT, 27 - HW_FEATURE_WORKGROUP_ROUND_MULTIPLE_OF_4, 28 - HW_FEATURE_IMAGES_IN_FRAGMENT_SHADERS, 29 - HW_FEATURE_TEST4_DATUM_MODE, 30 - HW_FEATURE_NEXT_INSTRUCTION_TYPE, 31 - HW_FEATURE_BRNDOUT_KILL, 32 - HW_FEATURE_WARPING, 33 15 HW_FEATURE_V4, 34 16 HW_FEATURE_FLUSH_REDUCTION, 35 17 HW_FEATURE_PROTECTED_MODE, ··· 24 42 }; 25 43 26 44 #define hw_features_t600 (\ 27 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 28 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 29 45 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 30 46 BIT_ULL(HW_FEATURE_V4)) 31 47 32 - #define hw_features_t620 (\ 33 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 34 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 35 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 36 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 37 - BIT_ULL(HW_FEATURE_V4)) 48 + #define hw_features_t620 hw_features_t600 38 49 39 - #define hw_features_t720 (\ 40 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 41 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 42 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 43 - BIT_ULL(HW_FEATURE_OPTIMIZED_COVERAGE_MASK) | \ 44 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 45 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 46 - BIT_ULL(HW_FEATURE_WORKGROUP_ROUND_MULTIPLE_OF_4) | \ 47 - BIT_ULL(HW_FEATURE_WARPING) | \ 48 - BIT_ULL(HW_FEATURE_V4)) 49 + #define hw_features_t720 hw_features_t600 50 51 51 52 #define hw_features_t760 (\ 52 53 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 53 54 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 54 55 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 55 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 56 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 57 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 58 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 59 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 60 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 61 - BIT_ULL(HW_FEATURE_MRT) | \ 62 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 63 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 64 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 65 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 66 56 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT)) 67 57 68 - // T860 69 - #define hw_features_t860 (\ 70 - BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 71 - BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 72 - BIT_ULL(HW_FEATURE_XAFFINITY) | \ 73 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 74 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 75 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 76 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 77 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 78 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 79 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 80 - BIT_ULL(HW_FEATURE_MRT) | \ 81 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 82 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 83 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 84 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 85 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 86 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT)) 58 + #define hw_features_t860 hw_features_t760 87 59 88 - #define hw_features_t880 hw_features_t860 60 + #define hw_features_t880 hw_features_t760 89 61 90 - #define hw_features_t830 (\ 91 - BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 92 - BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 93 - BIT_ULL(HW_FEATURE_XAFFINITY) | \ 94 - BIT_ULL(HW_FEATURE_WARPING) | \ 95 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 96 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 97 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 98 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 99 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 100 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 101 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 102 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 103 - BIT_ULL(HW_FEATURE_MRT) | \ 104 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 105 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 106 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 107 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 108 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT)) 62 + #define hw_features_t830 hw_features_t760 109 63 110 - #define hw_features_t820 (\ 111 - BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 112 - BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 113 - BIT_ULL(HW_FEATURE_XAFFINITY) | \ 114 - BIT_ULL(HW_FEATURE_WARPING) | \ 115 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 116 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 117 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 118 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 119 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 120 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 121 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 122 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 123 - BIT_ULL(HW_FEATURE_MRT) | \ 124 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 125 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 126 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 127 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 128 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT)) 64 + #define hw_features_t820 hw_features_t760 129 65 130 66 #define hw_features_g71 (\ 131 67 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 132 68 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 133 69 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 134 - BIT_ULL(HW_FEATURE_WARPING) | \ 135 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 136 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 137 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 138 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 139 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 140 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 141 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 142 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 143 - BIT_ULL(HW_FEATURE_MRT) | \ 144 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 145 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 146 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 147 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 148 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 149 70 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 150 71 BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 151 72 BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \ ··· 58 173 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 59 174 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 60 175 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 61 - BIT_ULL(HW_FEATURE_WARPING) | \ 62 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 63 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 64 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 65 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 66 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 67 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 68 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 69 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 70 - BIT_ULL(HW_FEATURE_MRT) | \ 71 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 72 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 73 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 74 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 75 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 76 176 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 77 177 BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 78 178 BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \ 79 179 BIT_ULL(HW_FEATURE_PROTECTED_DEBUG_MODE) | \ 80 180 BIT_ULL(HW_FEATURE_COHERENCY_REG)) 81 181 82 - #define hw_features_g51 (\ 83 - BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 84 - BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 85 - BIT_ULL(HW_FEATURE_XAFFINITY) | \ 86 - BIT_ULL(HW_FEATURE_WARPING) | \ 87 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 88 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 89 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 90 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 91 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 92 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 93 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 94 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 95 - BIT_ULL(HW_FEATURE_MRT) | \ 96 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 97 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 98 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 99 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 100 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 101 - BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 102 - BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 103 - BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \ 104 - BIT_ULL(HW_FEATURE_PROTECTED_DEBUG_MODE) | \ 105 - BIT_ULL(HW_FEATURE_COHERENCY_REG)) 182 + #define hw_features_g51 hw_features_g72 106 183 107 184 #define hw_features_g52 (\ 108 185 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 109 186 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 110 187 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 111 - BIT_ULL(HW_FEATURE_WARPING) | \ 112 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 113 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 114 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 115 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 116 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 117 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 118 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 119 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 120 - BIT_ULL(HW_FEATURE_MRT) | \ 121 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 122 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 123 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 124 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 125 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 126 188 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 127 189 BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 128 190 BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \ ··· 80 248 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 81 249 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 82 250 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 83 - BIT_ULL(HW_FEATURE_WARPING) | \ 84 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 85 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 86 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 87 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 88 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 89 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 90 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 91 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 92 - BIT_ULL(HW_FEATURE_MRT) | \ 93 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 94 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 95 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 96 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 97 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 98 251 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 99 252 BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 100 253 BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \ ··· 93 276 BIT_ULL(HW_FEATURE_JOBCHAIN_DISAMBIGUATION) | \ 94 277 BIT_ULL(HW_FEATURE_PWRON_DURING_PWROFF_TRANS) | \ 95 278 BIT_ULL(HW_FEATURE_XAFFINITY) | \ 96 - BIT_ULL(HW_FEATURE_WARPING) | \ 97 - BIT_ULL(HW_FEATURE_INTERPIPE_REG_ALIASING) | \ 98 - BIT_ULL(HW_FEATURE_32_BIT_UNIFORM_ADDRESS) | \ 99 - BIT_ULL(HW_FEATURE_ATTR_AUTO_TYPE_INFERRAL) | \ 100 - BIT_ULL(HW_FEATURE_BRNDOUT_CC) | \ 101 - BIT_ULL(HW_FEATURE_BRNDOUT_KILL) | \ 102 - BIT_ULL(HW_FEATURE_LD_ST_LEA_TEX) | \ 103 - BIT_ULL(HW_FEATURE_LD_ST_TILEBUFFER) | \ 104 - BIT_ULL(HW_FEATURE_LINEAR_FILTER_FLOAT) | \ 105 - BIT_ULL(HW_FEATURE_MRT) | \ 106 - BIT_ULL(HW_FEATURE_MSAA_16X) | \ 107 - BIT_ULL(HW_FEATURE_NEXT_INSTRUCTION_TYPE) | \ 108 - BIT_ULL(HW_FEATURE_OUT_OF_ORDER_EXEC) | \ 109 - BIT_ULL(HW_FEATURE_T7XX_PAIRING_RULES) | \ 110 - BIT_ULL(HW_FEATURE_TEST4_DATUM_MODE) | \ 111 279 BIT_ULL(HW_FEATURE_THREAD_GROUP_SPLIT) | \ 112 280 BIT_ULL(HW_FEATURE_FLUSH_REDUCTION) | \ 113 281 BIT_ULL(HW_FEATURE_PROTECTED_MODE) | \
+26 -6
drivers/gpu/drm/panfrost/panfrost_gpu.c
··· 320 320 { 321 321 int ret; 322 322 u32 val; 323 + u64 core_mask = U64_MAX; 323 324 324 325 panfrost_gpu_init_quirks(pfdev); 325 326 326 - /* Just turn on everything for now */ 327 - gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present); 327 + if (pfdev->features.l2_present != 1) { 328 + /* 329 + * Only support one core group now. 330 + * ~(l2_present - 1) unsets all bits in l2_present except 331 + * the bottom bit. (l2_present - 2) has all the bits in 332 + * the first core group set. AND them together to generate 333 + * a mask of cores in the first core group. 334 + */ 335 + core_mask = ~(pfdev->features.l2_present - 1) & 336 + (pfdev->features.l2_present - 2); 337 + dev_info_once(pfdev->dev, "using only 1st core group (%lu cores from %lu)\n", 338 + hweight64(core_mask), 339 + hweight64(pfdev->features.shader_present)); 340 + } 341 + gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present & core_mask); 328 342 ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO, 329 343 val, val == (pfdev->features.l2_present & core_mask), 344 + 100, 20000); 330 345 if (ret) 331 346 dev_err(pfdev->dev, "error powering up gpu L2"); 332 347 333 - gpu_write(pfdev, SHADER_PWRON_LO, pfdev->features.shader_present); 348 + gpu_write(pfdev, SHADER_PWRON_LO, 349 + pfdev->features.shader_present & core_mask); 334 350 ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_READY_LO, 335 351 val, val == (pfdev->features.shader_present & core_mask), 352 + 100, 20000); 336 353 if (ret) 337 354 dev_err(pfdev->dev, "error powering up gpu shader"); 338 355 ··· 377 360 378 361 panfrost_gpu_init_features(pfdev); 379 362 380 - dma_set_mask_and_coherent(pfdev->dev, 363 + err = dma_set_mask_and_coherent(pfdev->dev, 381 364 DMA_BIT_MASK(FIELD_GET(0xff00, pfdev->features.mmu_features))); 365 + if (err) 366 + return err; 367 + 382 368 dma_set_max_seg_size(pfdev->dev, UINT_MAX); 383 369 384 370 irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu");
+1 -1
drivers/gpu/drm/radeon/atombios_dp.c
··· 30 30 31 31 #include "atom.h" 32 32 #include "atom-bits.h" 33 - #include <drm/drm_dp_helper.h> 33 + #include <drm/dp/drm_dp_helper.h> 34 34 35 35 /* move these to drm_dp_helper.c/h */ 36 36 #define DP_LINK_CONFIGURATION_SIZE 9
+2 -2
drivers/gpu/drm/radeon/radeon_connectors.c
··· 27 27 #include <drm/drm_edid.h> 28 28 #include <drm/drm_crtc_helper.h> 29 29 #include <drm/drm_fb_helper.h> 30 - #include <drm/drm_dp_mst_helper.h> 30 + #include <drm/dp/drm_dp_mst_helper.h> 31 31 #include <drm/drm_probe_helper.h> 32 32 #include <drm/radeon_drm.h> 33 33 #include "radeon.h" ··· 204 204 205 205 /* Check if bpc is within clock limit. Try to degrade gracefully otherwise */ 206 206 if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) { 207 - if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) && 207 + if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) && 208 208 (mode_clock * 5/4 <= max_tmds_clock)) 209 209 bpc = 10; 210 210 else
+1 -1
drivers/gpu/drm/radeon/radeon_dp_mst.c
··· 1 1 // SPDX-License-Identifier: MIT 2 2 3 - #include <drm/drm_dp_mst_helper.h> 3 + #include <drm/dp/drm_dp_mst_helper.h> 4 4 #include <drm/drm_fb_helper.h> 5 5 #include <drm/drm_file.h> 6 6 #include <drm/drm_probe_helper.h>
+2 -2
drivers/gpu/drm/radeon/radeon_mode.h
··· 33 33 #include <drm/drm_crtc.h> 34 34 #include <drm/drm_edid.h> 35 35 #include <drm/drm_encoder.h> 36 - #include <drm/drm_dp_helper.h> 37 - #include <drm/drm_dp_mst_helper.h> 36 + #include <drm/dp/drm_dp_helper.h> 37 + #include <drm/dp/drm_dp_mst_helper.h> 38 38 #include <drm/drm_fixed.h> 39 39 #include <drm/drm_crtc_helper.h> 40 40 #include <linux/i2c.h>
+2 -2
drivers/gpu/drm/radeon/radeon_ttm.c
··· 802 802 TTM_PL_VRAM); 803 803 struct drm_printer p = drm_seq_file_printer(m); 804 804 805 - man->func->debug(man, &p); 805 + ttm_resource_manager_debug(man, &p); 806 806 return 0; 807 807 } 808 808 ··· 820 820 TTM_PL_TT); 821 821 struct drm_printer p = drm_seq_file_printer(m); 822 822 823 - man->func->debug(man, &p); 823 + ttm_resource_manager_debug(man, &p); 824 824 return 0; 825 825 } 826 826
+2
drivers/gpu/drm/rockchip/Kconfig
··· 2 2 config DRM_ROCKCHIP 3 3 tristate "DRM Support for Rockchip" 4 4 depends on DRM && ROCKCHIP_IOMMU 5 + select DRM_DP_HELPER 5 6 select DRM_GEM_CMA_HELPER 6 7 select DRM_KMS_HELPER 7 8 select DRM_PANEL 8 9 select VIDEOMODE_HELPERS 9 10 select DRM_ANALOGIX_DP if ROCKCHIP_ANALOGIX_DP 11 + select DRM_DP_HELPER if ROCKCHIP_ANALOGIX_DP 10 12 select DRM_DW_HDMI if ROCKCHIP_DW_HDMI 11 13 select DRM_DW_MIPI_DSI if ROCKCHIP_DW_MIPI_DSI 12 14 select GENERIC_PHY if ROCKCHIP_DW_MIPI_DSI
+2 -2
drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
··· 22 22 #include <drm/drm_atomic.h> 23 23 #include <drm/drm_atomic_helper.h> 24 24 #include <drm/bridge/analogix_dp.h> 25 - #include <drm/drm_dp_helper.h> 25 + #include <drm/dp/drm_dp_helper.h> 26 26 #include <drm/drm_of.h> 27 27 #include <drm/drm_panel.h> 28 28 #include <drm/drm_probe_helper.h> ··· 117 117 { 118 118 struct drm_display_info *di = &connector->display_info; 119 119 /* VOP couldn't output YUV video format for eDP rightly */ 120 - u32 mask = DRM_COLOR_FORMAT_YCRCB444 | DRM_COLOR_FORMAT_YCRCB422; 120 + u32 mask = DRM_COLOR_FORMAT_YCBCR444 | DRM_COLOR_FORMAT_YCBCR422; 121 121 122 122 if ((di->color_formats & mask)) { 123 123 DRM_DEBUG_KMS("Swapping display color format from YUV to RGB\n");
+1 -1
drivers/gpu/drm/rockchip/cdn-dp-core.c
··· 16 16 #include <sound/hdmi-codec.h> 17 17 18 18 #include <drm/drm_atomic_helper.h> 19 - #include <drm/drm_dp_helper.h> 19 + #include <drm/dp/drm_dp_helper.h> 20 20 #include <drm/drm_edid.h> 21 21 #include <drm/drm_of.h> 22 22 #include <drm/drm_probe_helper.h>
+1 -1
drivers/gpu/drm/rockchip/cdn-dp-core.h
··· 7 7 #ifndef _CDN_DP_CORE_H 8 8 #define _CDN_DP_CORE_H 9 9 10 - #include <drm/drm_dp_helper.h> 10 + #include <drm/dp/drm_dp_helper.h> 11 11 #include <drm/drm_panel.h> 12 12 #include <drm/drm_probe_helper.h> 13 13
+1 -1
drivers/gpu/drm/rockchip/rockchip_lvds.c
··· 20 20 #include <drm/drm_atomic_helper.h> 21 21 #include <drm/drm_bridge.h> 22 22 #include <drm/drm_bridge_connector.h> 23 - #include <drm/drm_dp_helper.h> 23 + #include <drm/dp/drm_dp_helper.h> 24 24 #include <drm/drm_of.h> 25 25 #include <drm/drm_panel.h> 26 26 #include <drm/drm_probe_helper.h>
+1 -1
drivers/gpu/drm/rockchip/rockchip_rgb.c
··· 11 11 #include <drm/drm_atomic_helper.h> 12 12 #include <drm/drm_bridge.h> 13 13 #include <drm/drm_bridge_connector.h> 14 - #include <drm/drm_dp_helper.h> 14 + #include <drm/dp/drm_dp_helper.h> 15 15 #include <drm/drm_of.h> 16 16 #include <drm/drm_panel.h> 17 17 #include <drm/drm_probe_helper.h>
+5 -3
drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
··· 7 7 8 8 #include <linux/random.h> 9 9 10 - #include <drm/drm_dp_mst_helper.h> 10 + #include <drm/dp/drm_dp_mst_helper.h> 11 11 #include <drm/drm_print.h> 12 12 13 - #include "../drm_dp_mst_topology_internal.h" 13 + #include "../dp/drm_dp_mst_topology_internal.h" 14 14 #include "test-drm_modeset_common.h" 15 15 16 16 int igt_dp_mst_calc_pbn_mode(void *ignored) ··· 131 131 return false; 132 132 133 133 txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL); 134 - if (!txmsg) 134 + if (!txmsg) { 135 + kfree(out); 135 136 return false; 137 + } 136 138 137 139 drm_dp_encode_sideband_req(in, txmsg); 138 140 ret = drm_dp_decode_sideband_req(txmsg, out);
+4
drivers/gpu/drm/selftests/test-drm_plane_helper.c
··· 87 87 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC) 88 88 }, 89 89 }; 90 + struct drm_plane plane = { 91 + .dev = NULL 92 + }; 90 93 struct drm_framebuffer fb = { 91 94 .width = 2048, 92 95 .height = 2048 93 96 }; 94 97 struct drm_plane_state plane_state = { 98 + .plane = &plane, 95 99 .crtc = ZERO_SIZE_PTR, 96 100 .fb = &fb, 97 101 .rotation = DRM_MODE_ROTATE_0
+5
drivers/gpu/drm/stm/drv.c
··· 14 14 #include <linux/of_platform.h> 15 15 #include <linux/pm_runtime.h> 16 16 17 + #include <drm/drm_aperture.h> 17 18 #include <drm/drm_atomic.h> 18 19 #include <drm/drm_atomic_helper.h> 19 20 #include <drm/drm_drv.h> ··· 183 182 int ret; 184 183 185 184 DRM_DEBUG("%s\n", __func__); 185 + 186 + ret = drm_aperture_remove_framebuffers(false, &drv_driver); 187 + if (ret) 188 + return ret; 186 189 187 190 dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 188 191
+106 -8
drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
··· 247 247 int ret, bpp; 248 248 u32 val; 249 249 250 - /* Update lane capabilities according to hw version */ 251 - dsi->lane_min_kbps = LANE_MIN_KBPS; 252 - dsi->lane_max_kbps = LANE_MAX_KBPS; 253 - if (dsi->hw_version == HWVER_131) { 254 - dsi->lane_min_kbps *= 2; 255 - dsi->lane_max_kbps *= 2; 256 - } 257 - 258 250 pll_in_khz = (unsigned int)(clk_get_rate(dsi->pllref_clk) / 1000); 259 251 260 252 /* Compute requested pll out */ ··· 322 330 return 0; 323 331 } 324 332 333 + #define CLK_TOLERANCE_HZ 50 334 + 335 + static enum drm_mode_status 336 + dw_mipi_dsi_stm_mode_valid(void *priv_data, 337 + const struct drm_display_mode *mode, 338 + unsigned long mode_flags, u32 lanes, u32 format) 339 + { 340 + struct dw_mipi_dsi_stm *dsi = priv_data; 341 + unsigned int idf, ndiv, odf, pll_in_khz, pll_out_khz; 342 + int ret, bpp; 343 + 344 + bpp = mipi_dsi_pixel_format_to_bpp(format); 345 + if (bpp < 0) 346 + return MODE_BAD; 347 + 348 + /* Compute requested pll out */ 349 + pll_out_khz = mode->clock * bpp / lanes; 350 + 351 + if (pll_out_khz > dsi->lane_max_kbps) 352 + return MODE_CLOCK_HIGH; 353 + 354 + if (mode_flags & MIPI_DSI_MODE_VIDEO_BURST) { 355 + /* Add 20% to pll out to be higher than pixel bw */ 356 + pll_out_khz = (pll_out_khz * 12) / 10; 357 + } else { 358 + if (pll_out_khz < dsi->lane_min_kbps) 359 + return MODE_CLOCK_LOW; 360 + } 361 + 362 + /* Compute best pll parameters */ 363 + idf = 0; 364 + ndiv = 0; 365 + odf = 0; 366 + pll_in_khz = clk_get_rate(dsi->pllref_clk) / 1000; 367 + ret = dsi_pll_get_params(dsi, pll_in_khz, pll_out_khz, &idf, &ndiv, &odf); 368 + if (ret) { 369 + DRM_WARN("Warning dsi_pll_get_params(): bad params\n"); 370 + return MODE_ERROR; 371 + } 372 + 373 + if (!(mode_flags & MIPI_DSI_MODE_VIDEO_BURST)) { 374 + unsigned int px_clock_hz, target_px_clock_hz, lane_mbps; 375 + int dsi_short_packet_size_px, hfp, hsync, hbp, delay_to_lp; 376 + struct dw_mipi_dsi_dphy_timing dphy_timing; 377 + 378 + /* Get the adjusted pll out value */ 379 + pll_out_khz = dsi_pll_get_clkout_khz(pll_in_khz, idf, ndiv, odf); 380 + 381 + px_clock_hz = DIV_ROUND_CLOSEST_ULL(1000ULL * pll_out_khz * lanes, bpp); 382 + target_px_clock_hz = mode->clock * 1000; 383 + /* 384 + * Filter modes according to the clock value, particularly useful for 385 + * hdmi modes that require precise pixel clocks. 386 + */ 387 + if (px_clock_hz < target_px_clock_hz - CLK_TOLERANCE_HZ || 388 + px_clock_hz > target_px_clock_hz + CLK_TOLERANCE_HZ) 389 + return MODE_CLOCK_RANGE; 390 + 391 + /* sync packets are codes as DSI short packets (4 bytes) */ 392 + dsi_short_packet_size_px = DIV_ROUND_UP(4 * BITS_PER_BYTE, bpp); 393 + 394 + hfp = mode->hsync_start - mode->hdisplay; 395 + hsync = mode->hsync_end - mode->hsync_start; 396 + hbp = mode->htotal - mode->hsync_end; 397 + 398 + /* hsync must be longer than 4 bytes HSS packets */ 399 + if (hsync < dsi_short_packet_size_px) 400 + return MODE_HSYNC_NARROW; 401 + 402 + if (mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE) { 403 + /* HBP must be longer than 4 bytes HSE packets */ 404 + if (hbp < dsi_short_packet_size_px) 405 + return MODE_HSYNC_NARROW; 406 + hbp -= dsi_short_packet_size_px; 407 + } else { 408 + /* With sync events HBP extends in the hsync */ 409 + hbp += hsync - dsi_short_packet_size_px; 410 + } 411 + 412 + lane_mbps = pll_out_khz / 1000; 413 + ret = dw_mipi_dsi_phy_get_timing(priv_data, lane_mbps, &dphy_timing); 414 + if (ret) 415 + return MODE_ERROR; 416 + /* 417 + * In non-burst mode DSI has to enter in LP during HFP 418 + * (horizontal front porch) or HBP (horizontal back porch) to 419 + * resync with LTDC pixel clock. 420 + */ 421 + delay_to_lp = DIV_ROUND_UP((dphy_timing.data_hs2lp + dphy_timing.data_lp2hs) * 422 + lanes * BITS_PER_BYTE, bpp); 423 + if (hfp < delay_to_lp && hbp < delay_to_lp) 424 + return MODE_HSYNC; 425 + } 426 + 427 + return MODE_OK; 428 + } 429 + 325 430 static const struct dw_mipi_dsi_phy_ops dw_mipi_dsi_stm_phy_ops = { 326 431 .init = dw_mipi_dsi_phy_init, 327 432 .power_on = dw_mipi_dsi_phy_power_on, ··· 429 340 430 341 static struct dw_mipi_dsi_plat_data dw_mipi_dsi_stm_plat_data = { 431 342 .max_data_lanes = 2, 343 + .mode_valid = dw_mipi_dsi_stm_mode_valid, 432 344 .phy_ops = &dw_mipi_dsi_stm_phy_ops, 433 345 }; 434 346 ··· 505 415 ret = -ENODEV; 506 416 DRM_ERROR("bad dsi hardware version\n"); 507 417 goto err_dsi_probe; 418 + } 419 + 420 + /* set lane capabilities according to hw version */ 421 + dsi->lane_min_kbps = LANE_MIN_KBPS; 422 + dsi->lane_max_kbps = LANE_MAX_KBPS; 423 + if (dsi->hw_version == HWVER_131) { 424 + dsi->lane_min_kbps *= 2; 425 + dsi->lane_max_kbps *= 2; 508 426 } 509 427 510 428 dw_mipi_dsi_stm_plat_data.base = dsi->base;
+647 -163
drivers/gpu/drm/stm/ltdc.c
··· 18 18 #include <linux/pinctrl/consumer.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/pm_runtime.h> 21 + #include <linux/regmap.h> 21 22 #include <linux/reset.h> 22 23 23 24 #include <drm/drm_atomic.h> ··· 47 46 #define HWVER_10200 0x010200 48 47 #define HWVER_10300 0x010300 49 48 #define HWVER_20101 0x020101 49 + #define HWVER_40100 0x040100 50 50 51 51 /* 52 52 * The address of some registers depends on the HW version: such registers have 53 - * an extra offset specified with reg_ofs. 53 + * an extra offset specified with layer_ofs. 54 54 */ 55 - #define REG_OFS_NONE 0 56 - #define REG_OFS_4 4 /* Insertion of "Layer Conf. 2" reg */ 57 - #define REG_OFS (ldev->caps.reg_ofs) 58 - #define LAY_OFS 0x80 /* Register Offset between 2 layers */ 55 + #define LAY_OFS_0 0x80 56 + #define LAY_OFS_1 0x100 57 + #define LAY_OFS (ldev->caps.layer_ofs) 59 58 60 59 /* Global register offsets */ 61 60 #define LTDC_IDR 0x0000 /* IDentification */ ··· 76 75 #define LTDC_LIPCR 0x0040 /* Line Interrupt Position Conf. 
*/ 77 76 #define LTDC_CPSR 0x0044 /* Current Position Status */ 78 77 #define LTDC_CDSR 0x0048 /* Current Display Status */ 78 + #define LTDC_EDCR 0x0060 /* External Display Control */ 79 + #define LTDC_FUT 0x0090 /* Fifo underrun Threshold */ 79 80 80 81 /* Layer register offsets */ 81 - #define LTDC_L1LC1R (0x80) /* L1 Layer Configuration 1 */ 82 - #define LTDC_L1LC2R (0x84) /* L1 Layer Configuration 2 */ 83 - #define LTDC_L1CR (0x84 + REG_OFS)/* L1 Control */ 84 - #define LTDC_L1WHPCR (0x88 + REG_OFS)/* L1 Window Hor Position Config */ 85 - #define LTDC_L1WVPCR (0x8C + REG_OFS)/* L1 Window Vert Position Config */ 86 - #define LTDC_L1CKCR (0x90 + REG_OFS)/* L1 Color Keying Configuration */ 87 - #define LTDC_L1PFCR (0x94 + REG_OFS)/* L1 Pixel Format Configuration */ 88 - #define LTDC_L1CACR (0x98 + REG_OFS)/* L1 Constant Alpha Config */ 89 - #define LTDC_L1DCCR (0x9C + REG_OFS)/* L1 Default Color Configuration */ 90 - #define LTDC_L1BFCR (0xA0 + REG_OFS)/* L1 Blend Factors Configuration */ 91 - #define LTDC_L1FBBCR (0xA4 + REG_OFS)/* L1 FrameBuffer Bus Control */ 92 - #define LTDC_L1AFBCR (0xA8 + REG_OFS)/* L1 AuxFB Control */ 93 - #define LTDC_L1CFBAR (0xAC + REG_OFS)/* L1 Color FrameBuffer Address */ 94 - #define LTDC_L1CFBLR (0xB0 + REG_OFS)/* L1 Color FrameBuffer Length */ 95 - #define LTDC_L1CFBLNR (0xB4 + REG_OFS)/* L1 Color FrameBuffer Line Nb */ 96 - #define LTDC_L1AFBAR (0xB8 + REG_OFS)/* L1 AuxFB Address */ 97 - #define LTDC_L1AFBLR (0xBC + REG_OFS)/* L1 AuxFB Length */ 98 - #define LTDC_L1AFBLNR (0xC0 + REG_OFS)/* L1 AuxFB Line Number */ 99 - #define LTDC_L1CLUTWR (0xC4 + REG_OFS)/* L1 CLUT Write */ 100 - #define LTDC_L1YS1R (0xE0 + REG_OFS)/* L1 YCbCr Scale 1 */ 101 - #define LTDC_L1YS2R (0xE4 + REG_OFS)/* L1 YCbCr Scale 2 */ 82 + #define LTDC_L1C0R (ldev->caps.layer_regs[0]) /* L1 configuration 0 */ 83 + #define LTDC_L1C1R (ldev->caps.layer_regs[1]) /* L1 configuration 1 */ 84 + #define LTDC_L1RCR (ldev->caps.layer_regs[2]) /* L1 reload control */ 85 
+ #define LTDC_L1CR (ldev->caps.layer_regs[3]) /* L1 control register */ 86 + #define LTDC_L1WHPCR (ldev->caps.layer_regs[4]) /* L1 window horizontal position configuration */ 87 + #define LTDC_L1WVPCR (ldev->caps.layer_regs[5]) /* L1 window vertical position configuration */ 88 + #define LTDC_L1CKCR (ldev->caps.layer_regs[6]) /* L1 color keying configuration */ 89 + #define LTDC_L1PFCR (ldev->caps.layer_regs[7]) /* L1 pixel format configuration */ 90 + #define LTDC_L1CACR (ldev->caps.layer_regs[8]) /* L1 constant alpha configuration */ 91 + #define LTDC_L1DCCR (ldev->caps.layer_regs[9]) /* L1 default color configuration */ 92 + #define LTDC_L1BFCR (ldev->caps.layer_regs[10]) /* L1 blending factors configuration */ 93 + #define LTDC_L1BLCR (ldev->caps.layer_regs[11]) /* L1 burst length configuration */ 94 + #define LTDC_L1PCR (ldev->caps.layer_regs[12]) /* L1 planar configuration */ 95 + #define LTDC_L1CFBAR (ldev->caps.layer_regs[13]) /* L1 color frame buffer address */ 96 + #define LTDC_L1CFBLR (ldev->caps.layer_regs[14]) /* L1 color frame buffer length */ 97 + #define LTDC_L1CFBLNR (ldev->caps.layer_regs[15]) /* L1 color frame buffer line number */ 98 + #define LTDC_L1AFBA0R (ldev->caps.layer_regs[16]) /* L1 auxiliary frame buffer address 0 */ 99 + #define LTDC_L1AFBA1R (ldev->caps.layer_regs[17]) /* L1 auxiliary frame buffer address 1 */ 100 + #define LTDC_L1AFBLR (ldev->caps.layer_regs[18]) /* L1 auxiliary frame buffer length */ 101 + #define LTDC_L1AFBLNR (ldev->caps.layer_regs[19]) /* L1 auxiliary frame buffer line number */ 102 + #define LTDC_L1CLUTWR (ldev->caps.layer_regs[20]) /* L1 CLUT write */ 103 + #define LTDC_L1CYR0R (ldev->caps.layer_regs[21]) /* L1 Conversion YCbCr RGB 0 */ 104 + #define LTDC_L1CYR1R (ldev->caps.layer_regs[22]) /* L1 Conversion YCbCr RGB 1 */ 105 + #define LTDC_L1FPF0R (ldev->caps.layer_regs[23]) /* L1 Flexible Pixel Format 0 */ 106 + #define LTDC_L1FPF1R (ldev->caps.layer_regs[24]) /* L1 Flexible Pixel Format 1 */ 102 107 103 108 
/* Bit definitions */ 104 109 #define SSCR_VSH GENMASK(10, 0) /* Vertical Synchronization Height */ ··· 171 164 #define ISR_TERRIF BIT(2) /* Transfer ERRor Interrupt Flag */ 172 165 #define ISR_RRIF BIT(3) /* Register Reload Interrupt Flag */ 173 166 167 + #define EDCR_OCYEN BIT(25) /* Output Conversion to YCbCr 422: ENable */ 168 + #define EDCR_OCYSEL BIT(26) /* Output Conversion to YCbCr 422: SELection of the CCIR */ 169 + #define EDCR_OCYCO BIT(27) /* Output Conversion to YCbCr 422: Chrominance Order */ 170 + 174 171 #define LXCR_LEN BIT(0) /* Layer ENable */ 175 172 #define LXCR_COLKEN BIT(1) /* Color Keying Enable */ 176 173 #define LXCR_CLUTEN BIT(4) /* Color Look-Up Table ENable */ ··· 186 175 #define LXWVPCR_WVSPPOS GENMASK(26, 16) /* Window Vertical StoP POSition */ 187 176 188 177 #define LXPFCR_PF GENMASK(2, 0) /* Pixel Format */ 178 + #define PF_FLEXIBLE 0x7 /* Flexible Pixel Format selected */ 190 180 #define LXCACR_CONSTA GENMASK(7, 0) /* CONSTant Alpha */ 191 181 ··· 197 185 #define LXCFBLR_CFBP GENMASK(28, 16) /* Color Frame Buffer Pitch in bytes */ 198 186 199 187 #define LXCFBLNR_CFBLN GENMASK(10, 0) /* Color Frame Buffer Line Number */ 188 + 189 + #define LXCR_C1R_YIA BIT(0) /* Ycbcr 422 Interleaved Ability */ 190 + #define LXCR_C1R_YSPA BIT(1) /* Ycbcr 420 Semi-Planar Ability */ 191 + #define LXCR_C1R_YFPA BIT(2) /* Ycbcr 420 Full-Planar Ability */ 192 + #define LXCR_C1R_SCA BIT(31) /* SCaling Ability */ 193 + 194 + #define LxPCR_YREN BIT(9) /* Y Rescale Enable for the color dynamic range */ 195 + #define LxPCR_OF BIT(8) /* Odd pixel First */ 196 + #define LxPCR_CBF BIT(7) /* CB component First */ 197 + #define LxPCR_YF BIT(6) /* Y component First */ 198 + #define LxPCR_YCM GENMASK(5, 4) /* Ycbcr Conversion Mode */ 199 + #define YCM_I 0x0 /* Interleaved 422 */ 200 + #define YCM_SP 0x1 /* Semi-Planar 420 */ 201 + #define YCM_FP 0x2 /* Full-Planar 420 */ 202 + #define LxPCR_YCEN BIT(3) /* YCbCr-to-RGB Conversion Enable */ 203 + 204 + 
#define LXRCR_IMR BIT(0) /* IMmediate Reload */ 205 + #define LXRCR_VBR BIT(1) /* Vertical Blanking Reload */ 206 + #define LXRCR_GRMSK BIT(2) /* Global (centralized) Reload MaSKed */ 200 207 201 208 #define CLUT_SIZE 256 202 209 ··· 232 201 /* RGB formats */ 233 202 PF_ARGB8888, /* ARGB [32 bits] */ 234 203 PF_RGBA8888, /* RGBA [32 bits] */ 204 + PF_ABGR8888, /* ABGR [32 bits] */ 205 + PF_BGRA8888, /* BGRA [32 bits] */ 235 206 PF_RGB888, /* RGB [24 bits] */ 207 + PF_BGR888, /* BGR [24 bits] */ 236 208 PF_RGB565, /* RGB [16 bits] */ 209 + PF_BGR565, /* BGR [16 bits] */ 237 210 PF_ARGB1555, /* ARGB A:1 bit RGB:15 bits [16 bits] */ 238 211 PF_ARGB4444, /* ARGB A:4 bits R/G/B: 4 bits each [16 bits] */ 239 212 /* Indexed formats */ ··· 269 234 PF_ARGB4444 /* 0x07 */ 270 235 }; 271 236 237 + static const enum ltdc_pix_fmt ltdc_pix_fmt_a2[NB_PF] = { 238 + PF_ARGB8888, /* 0x00 */ 239 + PF_ABGR8888, /* 0x01 */ 240 + PF_RGBA8888, /* 0x02 */ 241 + PF_BGRA8888, /* 0x03 */ 242 + PF_RGB565, /* 0x04 */ 243 + PF_BGR565, /* 0x05 */ 244 + PF_RGB888, /* 0x06 */ 245 + PF_NONE /* 0x07 */ 246 + }; 247 + 248 + static const u32 ltdc_drm_fmt_a0[] = { 249 + DRM_FORMAT_ARGB8888, 250 + DRM_FORMAT_XRGB8888, 251 + DRM_FORMAT_RGB888, 252 + DRM_FORMAT_RGB565, 253 + DRM_FORMAT_ARGB1555, 254 + DRM_FORMAT_XRGB1555, 255 + DRM_FORMAT_ARGB4444, 256 + DRM_FORMAT_XRGB4444, 257 + DRM_FORMAT_C8 258 + }; 259 + 260 + static const u32 ltdc_drm_fmt_a1[] = { 261 + DRM_FORMAT_ARGB8888, 262 + DRM_FORMAT_XRGB8888, 263 + DRM_FORMAT_RGB888, 264 + DRM_FORMAT_RGB565, 265 + DRM_FORMAT_RGBA8888, 266 + DRM_FORMAT_RGBX8888, 267 + DRM_FORMAT_ARGB1555, 268 + DRM_FORMAT_XRGB1555, 269 + DRM_FORMAT_ARGB4444, 270 + DRM_FORMAT_XRGB4444, 271 + DRM_FORMAT_C8 272 + }; 273 + 274 + static const u32 ltdc_drm_fmt_a2[] = { 275 + DRM_FORMAT_ARGB8888, 276 + DRM_FORMAT_XRGB8888, 277 + DRM_FORMAT_ABGR8888, 278 + DRM_FORMAT_XBGR8888, 279 + DRM_FORMAT_RGBA8888, 280 + DRM_FORMAT_RGBX8888, 281 + DRM_FORMAT_BGRA8888, 282 + DRM_FORMAT_BGRX8888, 
283 + DRM_FORMAT_RGB565, 284 + DRM_FORMAT_BGR565, 285 + DRM_FORMAT_RGB888, 286 + DRM_FORMAT_BGR888, 287 + DRM_FORMAT_ARGB1555, 288 + DRM_FORMAT_XRGB1555, 289 + DRM_FORMAT_ARGB4444, 290 + DRM_FORMAT_XRGB4444, 291 + DRM_FORMAT_C8 292 + }; 293 + 294 + static const u32 ltdc_drm_fmt_ycbcr_cp[] = { 295 + DRM_FORMAT_YUYV, 296 + DRM_FORMAT_YVYU, 297 + DRM_FORMAT_UYVY, 298 + DRM_FORMAT_VYUY 299 + }; 300 + 301 + static const u32 ltdc_drm_fmt_ycbcr_sp[] = { 302 + DRM_FORMAT_NV12, 303 + DRM_FORMAT_NV21 304 + }; 305 + 306 + static const u32 ltdc_drm_fmt_ycbcr_fp[] = { 307 + DRM_FORMAT_YUV420, 308 + DRM_FORMAT_YVU420 309 + }; 310 + 311 + /* Layer register offsets */ 312 + static const u32 ltdc_layer_regs_a0[] = { 313 + 0x80, /* L1 configuration 0 */ 314 + 0x00, /* not available */ 315 + 0x00, /* not available */ 316 + 0x84, /* L1 control register */ 317 + 0x88, /* L1 window horizontal position configuration */ 318 + 0x8c, /* L1 window vertical position configuration */ 319 + 0x90, /* L1 color keying configuration */ 320 + 0x94, /* L1 pixel format configuration */ 321 + 0x98, /* L1 constant alpha configuration */ 322 + 0x9c, /* L1 default color configuration */ 323 + 0xa0, /* L1 blending factors configuration */ 324 + 0x00, /* not available */ 325 + 0x00, /* not available */ 326 + 0xac, /* L1 color frame buffer address */ 327 + 0xb0, /* L1 color frame buffer length */ 328 + 0xb4, /* L1 color frame buffer line number */ 329 + 0x00, /* not available */ 330 + 0x00, /* not available */ 331 + 0x00, /* not available */ 332 + 0x00, /* not available */ 333 + 0xc4, /* L1 CLUT write */ 334 + 0x00, /* not available */ 335 + 0x00, /* not available */ 336 + 0x00, /* not available */ 337 + 0x00 /* not available */ 338 + }; 339 + 340 + static const u32 ltdc_layer_regs_a1[] = { 341 + 0x80, /* L1 configuration 0 */ 342 + 0x84, /* L1 configuration 1 */ 343 + 0x00, /* L1 reload control */ 344 + 0x88, /* L1 control register */ 345 + 0x8c, /* L1 window horizontal position configuration */ 346 + 0x90, 
/* L1 window vertical position configuration */ 347 + 0x94, /* L1 color keying configuration */ 348 + 0x98, /* L1 pixel format configuration */ 349 + 0x9c, /* L1 constant alpha configuration */ 350 + 0xa0, /* L1 default color configuration */ 351 + 0xa4, /* L1 blending factors configuration */ 352 + 0xa8, /* L1 burst length configuration */ 353 + 0x00, /* not available */ 354 + 0xac, /* L1 color frame buffer address */ 355 + 0xb0, /* L1 color frame buffer length */ 356 + 0xb4, /* L1 color frame buffer line number */ 357 + 0xb8, /* L1 auxiliary frame buffer address 0 */ 358 + 0xbc, /* L1 auxiliary frame buffer address 1 */ 359 + 0xc0, /* L1 auxiliary frame buffer length */ 360 + 0xc4, /* L1 auxiliary frame buffer line number */ 361 + 0xc8, /* L1 CLUT write */ 362 + 0x00, /* not available */ 363 + 0x00, /* not available */ 364 + 0x00, /* not available */ 365 + 0x00 /* not available */ 366 + }; 367 + 368 + static const u32 ltdc_layer_regs_a2[] = { 369 + 0x100, /* L1 configuration 0 */ 370 + 0x104, /* L1 configuration 1 */ 371 + 0x108, /* L1 reload control */ 372 + 0x10c, /* L1 control register */ 373 + 0x110, /* L1 window horizontal position configuration */ 374 + 0x114, /* L1 window vertical position configuration */ 375 + 0x118, /* L1 color keying configuration */ 376 + 0x11c, /* L1 pixel format configuration */ 377 + 0x120, /* L1 constant alpha configuration */ 378 + 0x124, /* L1 default color configuration */ 379 + 0x128, /* L1 blending factors configuration */ 380 + 0x12c, /* L1 burst length configuration */ 381 + 0x130, /* L1 planar configuration */ 382 + 0x134, /* L1 color frame buffer address */ 383 + 0x138, /* L1 color frame buffer length */ 384 + 0x13c, /* L1 color frame buffer line number */ 385 + 0x140, /* L1 auxiliary frame buffer address 0 */ 386 + 0x144, /* L1 auxiliary frame buffer address 1 */ 387 + 0x148, /* L1 auxiliary frame buffer length */ 388 + 0x14c, /* L1 auxiliary frame buffer line number */ 389 + 0x150, /* L1 CLUT write */ 390 + 0x16c, /* L1 
Conversion YCbCr RGB 0 */ 391 + 0x170, /* L1 Conversion YCbCr RGB 1 */ 392 + 0x174, /* L1 Flexible Pixel Format 0 */ 393 + 0x178 /* L1 Flexible Pixel Format 1 */ 394 + }; 395 + 272 396 static const u64 ltdc_format_modifiers[] = { 273 397 DRM_FORMAT_MOD_LINEAR, 274 398 DRM_FORMAT_MOD_INVALID 275 399 }; 276 400 277 - static inline u32 reg_read(void __iomem *base, u32 reg) 278 - { 279 - return readl_relaxed(base + reg); 280 - } 401 + static const struct regmap_config stm32_ltdc_regmap_cfg = { 402 + .reg_bits = 32, 403 + .val_bits = 32, 404 + .reg_stride = sizeof(u32), 405 + .max_register = 0x400, 406 + .use_relaxed_mmio = true, 407 + .cache_type = REGCACHE_NONE, 408 + }; 281 409 282 - static inline void reg_write(void __iomem *base, u32 reg, u32 val) 283 - { 284 - writel_relaxed(val, base + reg); 285 - } 286 - 287 - static inline void reg_set(void __iomem *base, u32 reg, u32 mask) 288 - { 289 - reg_write(base, reg, reg_read(base, reg) | mask); 290 - } 291 - 292 - static inline void reg_clear(void __iomem *base, u32 reg, u32 mask) 293 - { 294 - reg_write(base, reg, reg_read(base, reg) & ~mask); 295 - } 296 - 297 - static inline void reg_update_bits(void __iomem *base, u32 reg, u32 mask, 298 - u32 val) 299 - { 300 - reg_write(base, reg, (reg_read(base, reg) & ~mask) | val); 301 - } 410 + static const u32 ltdc_ycbcr2rgb_coeffs[DRM_COLOR_ENCODING_MAX][DRM_COLOR_RANGE_MAX][2] = { 411 + [DRM_COLOR_YCBCR_BT601][DRM_COLOR_YCBCR_LIMITED_RANGE] = { 412 + 0x02040199, /* (b_cb = 516 / r_cr = 409) */ 413 + 0x006400D0 /* (g_cb = 100 / g_cr = 208) */ 414 + }, 415 + [DRM_COLOR_YCBCR_BT601][DRM_COLOR_YCBCR_FULL_RANGE] = { 416 + 0x01C60167, /* (b_cb = 454 / r_cr = 359) */ 417 + 0x005800B7 /* (g_cb = 88 / g_cr = 183) */ 418 + }, 419 + [DRM_COLOR_YCBCR_BT709][DRM_COLOR_YCBCR_LIMITED_RANGE] = { 420 + 0x021D01CB, /* (b_cb = 541 / r_cr = 459) */ 421 + 0x00370089 /* (g_cb = 55 / g_cr = 137) */ 422 + }, 423 + [DRM_COLOR_YCBCR_BT709][DRM_COLOR_YCBCR_FULL_RANGE] = { 424 + 0x01DB0193, /* (b_cb = 
475 / r_cr = 403) */ 425 + 0x00300078 /* (g_cb = 48 / g_cr = 120) */ 426 + } 427 + /* BT2020 not supported */ 428 + }; 302 429 303 430 static inline struct ltdc_device *crtc_to_ltdc(struct drm_crtc *crtc) 304 431 { ··· 486 289 case DRM_FORMAT_XRGB8888: 487 290 pf = PF_ARGB8888; 488 291 break; 292 + case DRM_FORMAT_ABGR8888: 293 + case DRM_FORMAT_XBGR8888: 294 + pf = PF_ABGR8888; 295 + break; 489 296 case DRM_FORMAT_RGBA8888: 490 297 case DRM_FORMAT_RGBX8888: 491 298 pf = PF_RGBA8888; 492 299 break; 300 + case DRM_FORMAT_BGRA8888: 301 + case DRM_FORMAT_BGRX8888: 302 + pf = PF_BGRA8888; 303 + break; 493 304 case DRM_FORMAT_RGB888: 494 305 pf = PF_RGB888; 495 306 break; 307 + case DRM_FORMAT_BGR888: 308 + pf = PF_BGR888; 309 + break; 496 310 case DRM_FORMAT_RGB565: 497 311 pf = PF_RGB565; 312 + break; 313 + case DRM_FORMAT_BGR565: 314 + pf = PF_BGR565; 498 315 break; 499 316 case DRM_FORMAT_ARGB1555: 500 317 case DRM_FORMAT_XRGB1555: ··· 530 319 return pf; 531 320 } 532 321 533 - static inline u32 to_drm_pixelformat(enum ltdc_pix_fmt pf) 322 + static inline u32 ltdc_set_flexible_pixel_format(struct drm_plane *plane, enum ltdc_pix_fmt pix_fmt) 534 323 { 535 - switch (pf) { 536 - case PF_ARGB8888: 537 - return DRM_FORMAT_ARGB8888; 538 - case PF_RGBA8888: 539 - return DRM_FORMAT_RGBA8888; 540 - case PF_RGB888: 541 - return DRM_FORMAT_RGB888; 542 - case PF_RGB565: 543 - return DRM_FORMAT_RGB565; 324 + struct ltdc_device *ldev = plane_to_ltdc(plane); 325 + u32 lofs = plane->index * LAY_OFS, ret = PF_FLEXIBLE; 326 + int psize, alen, apos, rlen, rpos, glen, gpos, blen, bpos; 327 + 328 + switch (pix_fmt) { 329 + case PF_BGR888: 330 + psize = 3; 331 + alen = 0; apos = 0; rlen = 8; rpos = 0; 332 + glen = 8; gpos = 8; blen = 8; bpos = 16; 333 + break; 544 334 case PF_ARGB1555: 545 - return DRM_FORMAT_ARGB1555; 335 + psize = 2; 336 + alen = 1; apos = 15; rlen = 5; rpos = 10; 337 + glen = 5; gpos = 5; blen = 5; bpos = 0; 338 + break; 546 339 case PF_ARGB4444: 547 - return 
DRM_FORMAT_ARGB4444; 340 + psize = 2; 341 + alen = 4; apos = 12; rlen = 4; rpos = 8; 342 + glen = 4; gpos = 4; blen = 4; bpos = 0; 343 + break; 548 344 case PF_L8: 549 - return DRM_FORMAT_C8; 550 - case PF_AL44: /* No DRM support */ 551 - case PF_AL88: /* No DRM support */ 552 - case PF_NONE: 345 + psize = 1; 346 + alen = 0; apos = 0; rlen = 8; rpos = 0; 347 + glen = 8; gpos = 0; blen = 8; bpos = 0; 348 + break; 349 + case PF_AL44: 350 + psize = 1; 351 + alen = 4; apos = 4; rlen = 4; rpos = 0; 352 + glen = 4; gpos = 0; blen = 4; bpos = 0; 353 + break; 354 + case PF_AL88: 355 + psize = 2; 356 + alen = 8; apos = 8; rlen = 8; rpos = 0; 357 + glen = 8; gpos = 0; blen = 8; bpos = 0; 358 + break; 553 359 default: 554 - return 0; 360 + ret = NB_PF; /* error case, trace msg is handled by the caller */ 361 + break; 555 362 } 363 + 364 + if (ret == PF_FLEXIBLE) { 365 + regmap_write(ldev->regmap, LTDC_L1FPF0R + lofs, 366 + (rlen << 14) + (rpos << 9) + (alen << 5) + apos); 367 + 368 + regmap_write(ldev->regmap, LTDC_L1FPF1R + lofs, 369 + (psize << 18) + (blen << 14) + (bpos << 9) + (glen << 5) + gpos); 370 + } 371 + 372 + return ret; 556 373 } 557 374 558 - static inline u32 get_pixelformat_without_alpha(u32 drm) 375 + /* 376 + * All non-alpha color formats derived from native alpha color formats are 377 + * characterized by an 'X' in their FourCC format code 378 + */ 379 + static inline u32 is_xrgb(u32 drm) 559 380 { 560 - switch (drm) { 561 - case DRM_FORMAT_ARGB4444: 562 - return DRM_FORMAT_XRGB4444; 563 - case DRM_FORMAT_RGBA4444: 564 - return DRM_FORMAT_RGBX4444; 565 - case DRM_FORMAT_ARGB1555: 566 - return DRM_FORMAT_XRGB1555; 567 - case DRM_FORMAT_RGBA5551: 568 - return DRM_FORMAT_RGBX5551; 569 - case DRM_FORMAT_ARGB8888: 570 - return DRM_FORMAT_XRGB8888; 571 - case DRM_FORMAT_RGBA8888: 572 - return DRM_FORMAT_RGBX8888; 381 + return ((drm & 0xFF) == 'X' || ((drm >> 8) & 0xFF) == 'X'); 382 + } 383 + 384 + static inline void ltdc_set_ycbcr_config(struct drm_plane *plane, u32 
drm_pix_fmt) 385 + { 386 + struct ltdc_device *ldev = plane_to_ltdc(plane); 387 + struct drm_plane_state *state = plane->state; 388 + u32 lofs = plane->index * LAY_OFS; 389 + u32 val; 390 + 391 + switch (drm_pix_fmt) { 392 + case DRM_FORMAT_YUYV: 393 + val = (YCM_I << 4) | LxPCR_YF | LxPCR_CBF; 394 + break; 395 + case DRM_FORMAT_YVYU: 396 + val = (YCM_I << 4) | LxPCR_YF; 397 + break; 398 + case DRM_FORMAT_UYVY: 399 + val = (YCM_I << 4) | LxPCR_CBF; 400 + break; 401 + case DRM_FORMAT_VYUY: 402 + val = (YCM_I << 4); 403 + break; 404 + case DRM_FORMAT_NV12: 405 + val = (YCM_SP << 4) | LxPCR_CBF; 406 + break; 407 + case DRM_FORMAT_NV21: 408 + val = (YCM_SP << 4); 409 + break; 410 + case DRM_FORMAT_YUV420: 411 + case DRM_FORMAT_YVU420: 412 + val = (YCM_FP << 4); 413 + break; 573 414 default: 574 - return 0; 415 + /* RGB or not a YCbCr supported format */ 416 + break; 575 417 } 418 + 419 + /* Enable limited range */ 420 + if (state->color_range == DRM_COLOR_YCBCR_LIMITED_RANGE) 421 + val |= LxPCR_YREN; 422 + 423 + /* enable ycbcr conversion */ 424 + val |= LxPCR_YCEN; 425 + 426 + regmap_write(ldev->regmap, LTDC_L1PCR + lofs, val); 427 + } 428 + 429 + static inline void ltdc_set_ycbcr_coeffs(struct drm_plane *plane) 430 + { 431 + struct ltdc_device *ldev = plane_to_ltdc(plane); 432 + struct drm_plane_state *state = plane->state; 433 + enum drm_color_encoding enc = state->color_encoding; 434 + enum drm_color_range ran = state->color_range; 435 + u32 lofs = plane->index * LAY_OFS; 436 + 437 + if (enc != DRM_COLOR_YCBCR_BT601 && enc != DRM_COLOR_YCBCR_BT709) { 438 + DRM_ERROR("color encoding %d not supported, use bt601 by default\n", enc); 439 + /* set by default color encoding to DRM_COLOR_YCBCR_BT601 */ 440 + enc = DRM_COLOR_YCBCR_BT601; 441 + } 442 + 443 + if (ran != DRM_COLOR_YCBCR_LIMITED_RANGE && ran != DRM_COLOR_YCBCR_FULL_RANGE) { 444 + DRM_ERROR("color range %d not supported, use limited range by default\n", ran); 445 + /* set by default color range to 
DRM_COLOR_YCBCR_LIMITED_RANGE */ 446 + ran = DRM_COLOR_YCBCR_LIMITED_RANGE; 447 + } 448 + 449 + DRM_DEBUG_DRIVER("Color encoding=%d, range=%d\n", enc, ran); 450 + regmap_write(ldev->regmap, LTDC_L1CYR0R + lofs, 451 + ltdc_ycbcr2rgb_coeffs[enc][ran][0]); 452 + regmap_write(ldev->regmap, LTDC_L1CYR1R + lofs, 453 + ltdc_ycbcr2rgb_coeffs[enc][ran][1]); 576 454 } 577 455 578 456 static irqreturn_t ltdc_irq_thread(int irq, void *arg) ··· 690 390 struct drm_device *ddev = arg; 691 391 struct ltdc_device *ldev = ddev->dev_private; 692 392 693 - /* Read & Clear the interrupt status */ 694 - ldev->irq_status = reg_read(ldev->regs, LTDC_ISR); 695 - reg_write(ldev->regs, LTDC_ICR, ldev->irq_status); 393 + /* 394 + * Read & Clear the interrupt status 395 + * In order to write / read registers in this critical section 396 + * very quickly, the regmap functions are not used. 397 + */ 398 + ldev->irq_status = readl_relaxed(ldev->regs + LTDC_ISR); 399 + writel_relaxed(ldev->irq_status, ldev->regs + LTDC_ICR); 696 400 697 401 return IRQ_WAKE_THREAD; 698 402 } ··· 720 416 for (i = 0; i < CLUT_SIZE; i++, lut++) { 721 417 val = ((lut->red << 8) & 0xff0000) | (lut->green & 0xff00) | 722 418 (lut->blue >> 8) | (i << 24); 723 - reg_write(ldev->regs, LTDC_L1CLUTWR, val); 419 + regmap_write(ldev->regmap, LTDC_L1CLUTWR, val); 724 420 } 725 421 } 726 422 ··· 735 431 pm_runtime_get_sync(ddev->dev); 736 432 737 433 /* Sets the background color value */ 738 - reg_write(ldev->regs, LTDC_BCCR, BCCR_BCBLACK); 434 + regmap_write(ldev->regmap, LTDC_BCCR, BCCR_BCBLACK); 739 435 740 436 /* Enable IRQ */ 741 - reg_set(ldev->regs, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE); 437 + regmap_set_bits(ldev->regmap, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE); 742 438 743 439 /* Commit shadow registers = update planes at next vblank */ 744 - reg_set(ldev->regs, LTDC_SRCR, SRCR_VBR); 440 + if (!ldev->caps.plane_reg_shadow) 441 + regmap_set_bits(ldev->regmap, LTDC_SRCR, SRCR_VBR); 745 442 746 443 
drm_crtc_vblank_on(crtc); 747 444 } ··· 758 453 drm_crtc_vblank_off(crtc); 759 454 760 455 /* disable IRQ */ 761 - reg_clear(ldev->regs, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE); 456 + regmap_clear_bits(ldev->regmap, LTDC_IER, IER_RRIE | IER_FUIE | IER_TERRIE); 762 457 763 458 /* immediately commit disable of layers before switching off LTDC */ 764 - reg_set(ldev->regs, LTDC_SRCR, SRCR_IMR); 459 + if (!ldev->caps.plane_reg_shadow) 460 + regmap_set_bits(ldev->regmap, LTDC_SRCR, SRCR_IMR); 765 461 766 462 pm_runtime_put_sync(ddev->dev); 767 463 } ··· 839 533 struct drm_display_mode *mode = &crtc->state->adjusted_mode; 840 534 u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h; 841 535 u32 total_width, total_height; 536 + u32 bus_formats = MEDIA_BUS_FMT_RGB888_1X24; 842 537 u32 bus_flags = 0; 843 538 u32 val; 844 539 int ret; ··· 865 558 866 559 if (bridge && bridge->timings) 867 560 bus_flags = bridge->timings->input_bus_flags; 868 - else if (connector) 561 + else if (connector) { 869 562 bus_flags = connector->display_info.bus_flags; 563 + if (connector->display_info.num_bus_formats) 564 + bus_formats = connector->display_info.bus_formats[0]; 565 + } 870 566 871 567 if (!pm_runtime_active(ddev->dev)) { 872 568 ret = pm_runtime_get_sync(ddev->dev); ··· 914 604 if (bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE) 915 605 val |= GCR_PCPOL; 916 606 917 - reg_update_bits(ldev->regs, LTDC_GCR, 918 - GCR_HSPOL | GCR_VSPOL | GCR_DEPOL | GCR_PCPOL, val); 607 + regmap_update_bits(ldev->regmap, LTDC_GCR, 608 + GCR_HSPOL | GCR_VSPOL | GCR_DEPOL | GCR_PCPOL, val); 919 609 920 610 /* Set Synchronization size */ 921 611 val = (hsync << 16) | vsync; 922 - reg_update_bits(ldev->regs, LTDC_SSCR, SSCR_VSH | SSCR_HSW, val); 612 + regmap_update_bits(ldev->regmap, LTDC_SSCR, SSCR_VSH | SSCR_HSW, val); 923 613 924 614 /* Set Accumulated Back porch */ 925 615 val = (accum_hbp << 16) | accum_vbp; 926 - reg_update_bits(ldev->regs, LTDC_BPCR, BPCR_AVBP | BPCR_AHBP, val); 616 + 
regmap_update_bits(ldev->regmap, LTDC_BPCR, BPCR_AVBP | BPCR_AHBP, val); 927 617 928 618 /* Set Accumulated Active Width */ 929 619 val = (accum_act_w << 16) | accum_act_h; 930 - reg_update_bits(ldev->regs, LTDC_AWCR, AWCR_AAW | AWCR_AAH, val); 620 + regmap_update_bits(ldev->regmap, LTDC_AWCR, AWCR_AAW | AWCR_AAH, val); 931 621 932 622 /* Set total width & height */ 933 623 val = (total_width << 16) | total_height; 934 - reg_update_bits(ldev->regs, LTDC_TWCR, TWCR_TOTALH | TWCR_TOTALW, val); 624 + regmap_update_bits(ldev->regmap, LTDC_TWCR, TWCR_TOTALH | TWCR_TOTALW, val); 935 625 936 - reg_write(ldev->regs, LTDC_LIPCR, (accum_act_h + 1)); 626 + regmap_write(ldev->regmap, LTDC_LIPCR, (accum_act_h + 1)); 627 + 628 + /* Configure the output format (hw version dependent) */ 629 + if (ldev->caps.ycbcr_output) { 630 + /* Input video dynamic_range & colorimetry */ 631 + int vic = drm_match_cea_mode(mode); 632 + u32 val; 633 + 634 + if (vic == 6 || vic == 7 || vic == 21 || vic == 22 || 635 + vic == 2 || vic == 3 || vic == 17 || vic == 18) 636 + /* ITU-R BT.601 */ 637 + val = 0; 638 + else 639 + /* ITU-R BT.709 */ 640 + val = EDCR_OCYSEL; 641 + 642 + switch (bus_formats) { 643 + case MEDIA_BUS_FMT_YUYV8_1X16: 644 + /* enable ycbcr output converter */ 645 + regmap_write(ldev->regmap, LTDC_EDCR, EDCR_OCYEN | val); 646 + break; 647 + case MEDIA_BUS_FMT_YVYU8_1X16: 648 + /* enable ycbcr output converter & invert chrominance order */ 649 + regmap_write(ldev->regmap, LTDC_EDCR, EDCR_OCYEN | EDCR_OCYCO | val); 650 + break; 651 + default: 652 + /* disable ycbcr output converter */ 653 + regmap_write(ldev->regmap, LTDC_EDCR, 0); 654 + break; 655 + } 656 + } 937 657 } 938 658 939 659 static void ltdc_crtc_atomic_flush(struct drm_crtc *crtc, ··· 978 638 ltdc_crtc_update_clut(crtc); 979 639 980 640 /* Commit shadow registers = update planes at next vblank */ 981 - reg_set(ldev->regs, LTDC_SRCR, SRCR_VBR); 641 + if (!ldev->caps.plane_reg_shadow) 642 + regmap_set_bits(ldev->regmap, 
LTDC_SRCR, SRCR_VBR); 982 643 983 644 if (event) { 984 645 crtc->state->event = NULL; ··· 1021 680 * simplify the code and only test if line > vactive_end 1022 681 */ 1023 682 if (pm_runtime_active(ddev->dev)) { 1024 - line = reg_read(ldev->regs, LTDC_CPSR) & CPSR_CYPOS; 1025 - vactive_start = reg_read(ldev->regs, LTDC_BPCR) & BPCR_AVBP; 1026 - vactive_end = reg_read(ldev->regs, LTDC_AWCR) & AWCR_AAH; 1027 - vtotal = reg_read(ldev->regs, LTDC_TWCR) & TWCR_TOTALH; 683 + regmap_read(ldev->regmap, LTDC_CPSR, &line); 684 + line &= CPSR_CYPOS; 685 + regmap_read(ldev->regmap, LTDC_BPCR, &vactive_start); 686 + vactive_start &= BPCR_AVBP; 687 + regmap_read(ldev->regmap, LTDC_AWCR, &vactive_end); 688 + vactive_end &= AWCR_AAH; 689 + regmap_read(ldev->regmap, LTDC_TWCR, &vtotal); 690 + vtotal &= TWCR_TOTALH; 1028 691 1029 692 if (line > vactive_end) 1030 693 *vpos = line - vtotal - vactive_start; ··· 1064 719 DRM_DEBUG_DRIVER("\n"); 1065 720 1066 721 if (state->enable) 1067 - reg_set(ldev->regs, LTDC_IER, IER_LIE); 722 + regmap_set_bits(ldev->regmap, LTDC_IER, IER_LIE); 1068 723 else 1069 724 return -EPERM; 1070 725 ··· 1076 731 struct ltdc_device *ldev = crtc_to_ltdc(crtc); 1077 732 1078 733 DRM_DEBUG_DRIVER("\n"); 1079 - reg_clear(ldev->regs, LTDC_IER, IER_LIE); 734 + regmap_clear_bits(ldev->regmap, LTDC_IER, IER_LIE); 1080 735 } 1081 736 1082 737 static const struct drm_crtc_funcs ltdc_crtc_funcs = { ··· 1134 789 u32 y0 = newstate->crtc_y; 1135 790 u32 y1 = newstate->crtc_y + newstate->crtc_h - 1; 1136 791 u32 src_x, src_y, src_w, src_h; 1137 - u32 val, pitch_in_bytes, line_length, paddr, ahbp, avbp, bpcr; 792 + u32 val, pitch_in_bytes, line_length, line_number, paddr, ahbp, avbp, bpcr; 1138 793 enum ltdc_pix_fmt pf; 1139 794 1140 795 if (!newstate->crtc || !fb) { ··· 1154 809 newstate->crtc_w, newstate->crtc_h, 1155 810 newstate->crtc_x, newstate->crtc_y); 1156 811 1157 - bpcr = reg_read(ldev->regs, LTDC_BPCR); 812 + regmap_read(ldev->regmap, LTDC_BPCR, &bpcr); 813 + 
 	ahbp = (bpcr & BPCR_AHBP) >> 16;
 	avbp = bpcr & BPCR_AVBP;

 	/* Configures the horizontal start and stop position */
 	val = ((x1 + 1 + ahbp) << 16) + (x0 + 1 + ahbp);
-	reg_update_bits(ldev->regs, LTDC_L1WHPCR + lofs,
-			LXWHPCR_WHSTPOS | LXWHPCR_WHSPPOS, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1WHPCR + lofs,
+			  LXWHPCR_WHSTPOS | LXWHPCR_WHSPPOS, val);

 	/* Configures the vertical start and stop position */
 	val = ((y1 + 1 + avbp) << 16) + (y0 + 1 + avbp);
-	reg_update_bits(ldev->regs, LTDC_L1WVPCR + lofs,
-			LXWVPCR_WVSTPOS | LXWVPCR_WVSPPOS, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1WVPCR + lofs,
+			  LXWVPCR_WVSTPOS | LXWVPCR_WVSPPOS, val);

 	/* Specifies the pixel format */
 	pf = to_ltdc_pixelformat(fb->format->format);
···
 		if (ldev->caps.pix_fmt_hw[val] == pf)
 			break;

+	/* Use the flexible color format feature if necessary and available */
+	if (ldev->caps.pix_fmt_flex && val == NB_PF)
+		val = ltdc_set_flexible_pixel_format(plane, pf);
+
 	if (val == NB_PF) {
 		DRM_ERROR("Pixel format %.4s not supported\n",
 			  (char *)&fb->format->format);
 		val = 0;	/* set by default ARGB 32 bits */
 	}
-	reg_update_bits(ldev->regs, LTDC_L1PFCR + lofs, LXPFCR_PF, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1PFCR + lofs, LXPFCR_PF, val);

 	/* Configures the color frame buffer pitch in bytes & line length */
 	pitch_in_bytes = fb->pitches[0];
 	line_length = fb->format->cpp[0] *
 		      (x1 - x0 + 1) + (ldev->caps.bus_width >> 3) - 1;
 	val = ((pitch_in_bytes << 16) | line_length);
-	reg_update_bits(ldev->regs, LTDC_L1CFBLR + lofs,
-			LXCFBLR_CFBLL | LXCFBLR_CFBP, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1CFBLR + lofs, LXCFBLR_CFBLL | LXCFBLR_CFBP, val);

 	/* Specifies the constant alpha value */
 	val = newstate->alpha >> 8;
-	reg_update_bits(ldev->regs, LTDC_L1CACR + lofs, LXCACR_CONSTA, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1CACR + lofs, LXCACR_CONSTA, val);

 	/* Specifies the blending factors */
 	val = BF1_PAXCA | BF2_1PAXCA;
···
 	    plane->type != DRM_PLANE_TYPE_PRIMARY)
 		val = BF1_PAXCA | BF2_1PAXCA;

-	reg_update_bits(ldev->regs, LTDC_L1BFCR + lofs,
-			LXBFCR_BF2 | LXBFCR_BF1, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1BFCR + lofs, LXBFCR_BF2 | LXBFCR_BF1, val);

 	/* Configures the frame buffer line number */
-	val = y1 - y0 + 1;
-	reg_update_bits(ldev->regs, LTDC_L1CFBLNR + lofs, LXCFBLNR_CFBLN, val);
+	line_number = y1 - y0 + 1;
+	regmap_write_bits(ldev->regmap, LTDC_L1CFBLNR + lofs, LXCFBLNR_CFBLN, line_number);

 	/* Sets the FB address */
 	paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 0);

 	DRM_DEBUG_DRIVER("fb: phys 0x%08x", paddr);
-	reg_write(ldev->regs, LTDC_L1CFBAR + lofs, paddr);
+	regmap_write(ldev->regmap, LTDC_L1CFBAR + lofs, paddr);
+
+	if (ldev->caps.ycbcr_input) {
+		if (fb->format->is_yuv) {
+			switch (fb->format->format) {
+			case DRM_FORMAT_NV12:
+			case DRM_FORMAT_NV21:
+				/* Configure the auxiliary frame buffer address 0 & 1 */
+				paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBA0R + lofs, paddr);
+				regmap_write(ldev->regmap, LTDC_L1AFBA1R + lofs, paddr + 1);
+
+				/* Configure the buffer length */
+				val = ((pitch_in_bytes << 16) | line_length);
+				regmap_write(ldev->regmap, LTDC_L1AFBLR + lofs, val);
+
+				/* Configure the frame buffer line number */
+				val = (line_number >> 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBLNR + lofs, val);
+				break;
+			case DRM_FORMAT_YUV420:
+				/* Configure the auxiliary frame buffer address 0 */
+				paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBA0R + lofs, paddr);
+
+				/* Configure the auxiliary frame buffer address 1 */
+				paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 2);
+				regmap_write(ldev->regmap, LTDC_L1AFBA1R + lofs, paddr);
+
+				line_length = ((fb->format->cpp[0] * (x1 - x0 + 1)) >> 1) +
+					      (ldev->caps.bus_width >> 3) - 1;
+
+				/* Configure the buffer length */
+				val = (((pitch_in_bytes >> 1) << 16) | line_length);
+				regmap_write(ldev->regmap, LTDC_L1AFBLR + lofs, val);
+
+				/* Configure the frame buffer line number */
+				val = (line_number >> 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBLNR + lofs, val);
+				break;
+			case DRM_FORMAT_YVU420:
+				/* Configure the auxiliary frame buffer address 0 */
+				paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 2);
+				regmap_write(ldev->regmap, LTDC_L1AFBA0R + lofs, paddr);
+
+				/* Configure the auxiliary frame buffer address 1 */
+				paddr = (u32)drm_fb_cma_get_gem_addr(fb, newstate, 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBA1R + lofs, paddr);
+
+				line_length = ((fb->format->cpp[0] * (x1 - x0 + 1)) >> 1) +
+					      (ldev->caps.bus_width >> 3) - 1;
+
+				/* Configure the buffer length */
+				val = (((pitch_in_bytes >> 1) << 16) | line_length);
+				regmap_write(ldev->regmap, LTDC_L1AFBLR + lofs, val);
+
+				/* Configure the frame buffer line number */
+				val = (line_number >> 1);
+				regmap_write(ldev->regmap, LTDC_L1AFBLNR + lofs, val);
+				break;
+			}
+
+			/* Configure YCbCr conversion coefficient */
+			ltdc_set_ycbcr_coeffs(plane);
+
+			/* Configure YCbCr format and enable/disable conversion */
+			ltdc_set_ycbcr_config(plane, fb->format->format);
+		} else {
+			/* disable ycbcr conversion */
+			regmap_write(ldev->regmap, LTDC_L1PCR + lofs, 0);
+		}
+	}

 	/* Enable layer and CLUT if needed */
 	val = fb->format->format == DRM_FORMAT_C8 ? LXCR_CLUTEN : 0;
 	val |= LXCR_LEN;
-	reg_update_bits(ldev->regs, LTDC_L1CR + lofs,
-			LXCR_LEN | LXCR_CLUTEN, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN | LXCR_CLUTEN, val);
+
+	/* Commit shadow registers = update plane at next vblank */
+	if (ldev->caps.plane_reg_shadow)
+		regmap_write_bits(ldev->regmap, LTDC_L1RCR + lofs,
+				  LXRCR_IMR | LXRCR_VBR | LXRCR_GRMSK, LXRCR_VBR);

 	ldev->plane_fpsi[plane->index].counter++;

···
 	u32 lofs = plane->index * LAY_OFS;

 	/* disable layer */
-	reg_clear(ldev->regs, LTDC_L1CR + lofs, LXCR_LEN);
+	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN, 0);
+
+	/* Commit shadow registers = update plane at next vblank */
+	if (ldev->caps.plane_reg_shadow)
+		regmap_write_bits(ldev->regmap, LTDC_L1RCR + lofs,
+				  LXRCR_IMR | LXRCR_VBR | LXRCR_GRMSK, LXRCR_VBR);

 	DRM_DEBUG_DRIVER("CRTC:%d plane:%d\n",
 			 oldstate->crtc->base.id, plane->base.id);
···
 	fpsi->counter = 0;
 }

-static bool ltdc_plane_format_mod_supported(struct drm_plane *plane,
-					    u32 format,
-					    u64 modifier)
-{
-	if (modifier == DRM_FORMAT_MOD_LINEAR)
-		return true;
-
-	return false;
-}
-
 static const struct drm_plane_funcs ltdc_plane_funcs = {
 	.update_plane = drm_atomic_helper_update_plane,
 	.disable_plane = drm_atomic_helper_disable_plane,
···
 	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
 	.atomic_print_state = ltdc_plane_atomic_print_state,
-	.format_mod_supported = ltdc_plane_format_mod_supported,
 };

 static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
···
 };

 static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
-					   enum drm_plane_type type)
+					   enum drm_plane_type type,
+					   int index)
 {
 	unsigned long possible_crtcs = CRTC_MASK;
 	struct ltdc_device *ldev = ddev->dev_private;
 	struct device *dev = ddev->dev;
 	struct drm_plane *plane;
 	unsigned int i, nb_fmt = 0;
-	u32 formats[NB_PF * 2];
-	u32 drm_fmt, drm_fmt_no_alpha;
+	u32 *formats;
+	u32 drm_fmt;
 	const u64 *modifiers = ltdc_format_modifiers;
+	u32 lofs = index * LAY_OFS;
+	u32 val;
 	int ret;

-	/* Get supported pixel formats */
-	for (i = 0; i < NB_PF; i++) {
-		drm_fmt = to_drm_pixelformat(ldev->caps.pix_fmt_hw[i]);
-		if (!drm_fmt)
-			continue;
-		formats[nb_fmt++] = drm_fmt;
+	/* Allocate the biggest size according to supported color formats */
+	formats = devm_kzalloc(dev, (ldev->caps.pix_fmt_nb +
+				     ARRAY_SIZE(ltdc_drm_fmt_ycbcr_cp) +
+				     ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp) +
+				     ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp)) *
+				    sizeof(*formats), GFP_KERNEL);

-		/* Add the no-alpha related format if any & supported */
-		drm_fmt_no_alpha = get_pixelformat_without_alpha(drm_fmt);
-		if (!drm_fmt_no_alpha)
-			continue;
+	for (i = 0; i < ldev->caps.pix_fmt_nb; i++) {
+		drm_fmt = ldev->caps.pix_fmt_drm[i];

 		/* Manage hw-specific capabilities */
-		if (ldev->caps.non_alpha_only_l1 &&
-		    type != DRM_PLANE_TYPE_PRIMARY)
-			continue;
+		if (ldev->caps.non_alpha_only_l1)
+			/* XR24 & RX24 like formats supported only on primary layer */
+			if (type != DRM_PLANE_TYPE_PRIMARY && is_xrgb(drm_fmt))
+				continue;

-		formats[nb_fmt++] = drm_fmt_no_alpha;
+		formats[nb_fmt++] = drm_fmt;
+	}
+
+	/* Add YCbCr supported pixel formats */
+	if (ldev->caps.ycbcr_input) {
+		regmap_read(ldev->regmap, LTDC_L1C1R + lofs, &val);
+		if (val & LXCR_C1R_YIA) {
+			memcpy(&formats[nb_fmt], ltdc_drm_fmt_ycbcr_cp,
+			       ARRAY_SIZE(ltdc_drm_fmt_ycbcr_cp) * sizeof(*formats));
+			nb_fmt += ARRAY_SIZE(ltdc_drm_fmt_ycbcr_cp);
+		}
+		if (val & LXCR_C1R_YSPA) {
+			memcpy(&formats[nb_fmt], ltdc_drm_fmt_ycbcr_sp,
+			       ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp) * sizeof(*formats));
+			nb_fmt += ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp);
+		}
+		if (val & LXCR_C1R_YFPA) {
+			memcpy(&formats[nb_fmt], ltdc_drm_fmt_ycbcr_fp,
+			       ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp) * sizeof(*formats));
+			nb_fmt += ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp);
+		}
 	}

 	plane = devm_kzalloc(dev, sizeof(*plane), GFP_KERNEL);
···
 			       modifiers, type, NULL);
 	if (ret < 0)
 		return NULL;
+
+	if (ldev->caps.ycbcr_input) {
+		if (val & (LXCR_C1R_YIA | LXCR_C1R_YSPA | LXCR_C1R_YFPA))
+			drm_plane_create_color_properties(plane,
+							  BIT(DRM_COLOR_YCBCR_BT601) |
+							  BIT(DRM_COLOR_YCBCR_BT709),
+							  BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) |
+							  BIT(DRM_COLOR_YCBCR_FULL_RANGE),
+							  DRM_COLOR_YCBCR_BT601,
+							  DRM_COLOR_YCBCR_LIMITED_RANGE);
+	}

 	drm_plane_helper_add(plane, &ltdc_plane_helper_funcs);
···
 	unsigned int i;
 	int ret;

-	primary = ltdc_plane_create(ddev, DRM_PLANE_TYPE_PRIMARY);
+	primary = ltdc_plane_create(ddev, DRM_PLANE_TYPE_PRIMARY, 0);
 	if (!primary) {
 		DRM_ERROR("Can not create primary plane\n");
 		return -EINVAL;
···
 	/* Add planes. Note : the first layer is used by primary plane */
 	for (i = 1; i < ldev->caps.nb_layers; i++) {
-		overlay = ltdc_plane_create(ddev, DRM_PLANE_TYPE_OVERLAY);
+		overlay = ltdc_plane_create(ddev, DRM_PLANE_TYPE_OVERLAY, i);
 		if (!overlay) {
 			ret = -ENOMEM;
 			DRM_ERROR("Can not create overlay plane %d\n", i);
···
 	DRM_DEBUG_DRIVER("\n");

 	/* Disable LTDC */
-	reg_clear(ldev->regs, LTDC_GCR, GCR_LTDCEN);
+	regmap_clear_bits(ldev->regmap, LTDC_GCR, GCR_LTDCEN);

 	/* Set to sleep state the pinctrl whatever type of encoder */
 	pinctrl_pm_select_sleep_state(ddev->dev);
···
 	DRM_DEBUG_DRIVER("\n");

 	/* Enable LTDC */
-	reg_set(ldev->regs, LTDC_GCR, GCR_LTDCEN);
+	regmap_set_bits(ldev->regmap, LTDC_GCR, GCR_LTDCEN);
 }

 static void ltdc_encoder_mode_set(struct drm_encoder *encoder,
···
 	 * at least 1 layer must be managed & the number of layers
 	 * must not exceed LTDC_MAX_LAYER
 	 */
-	lcr = reg_read(ldev->regs, LTDC_LCR);
+	regmap_read(ldev->regmap, LTDC_LCR, &lcr);

 	ldev->caps.nb_layers = clamp((int)lcr, 1, LTDC_MAX_LAYER);

 	/* set data bus width */
-	gc2r = reg_read(ldev->regs, LTDC_GC2R);
+	regmap_read(ldev->regmap, LTDC_GC2R, &gc2r);
 	bus_width_log2 = (gc2r & GC2R_BW) >> 4;
 	ldev->caps.bus_width = 8 << bus_width_log2;
-	ldev->caps.hw_version = reg_read(ldev->regs, LTDC_IDR);
+	regmap_read(ldev->regmap, LTDC_IDR, &ldev->caps.hw_version);

 	switch (ldev->caps.hw_version) {
 	case HWVER_10200:
 	case HWVER_10300:
-		ldev->caps.reg_ofs = REG_OFS_NONE;
+		ldev->caps.layer_ofs = LAY_OFS_0;
+		ldev->caps.layer_regs = ltdc_layer_regs_a0;
 		ldev->caps.pix_fmt_hw = ltdc_pix_fmt_a0;
+		ldev->caps.pix_fmt_drm = ltdc_drm_fmt_a0;
+		ldev->caps.pix_fmt_nb = ARRAY_SIZE(ltdc_drm_fmt_a0);
+		ldev->caps.pix_fmt_flex = false;
 		/*
 		 * Hw older versions support non-alpha color formats derived
 		 * from native alpha color formats only on the primary layer.
···
 		if (ldev->caps.hw_version == HWVER_10200)
 			ldev->caps.pad_max_freq_hz = 65000000;
 		ldev->caps.nb_irq = 2;
+		ldev->caps.ycbcr_input = false;
+		ldev->caps.ycbcr_output = false;
+		ldev->caps.plane_reg_shadow = false;
 		break;
 	case HWVER_20101:
-		ldev->caps.reg_ofs = REG_OFS_4;
+		ldev->caps.layer_ofs = LAY_OFS_0;
+		ldev->caps.layer_regs = ltdc_layer_regs_a1;
 		ldev->caps.pix_fmt_hw = ltdc_pix_fmt_a1;
+		ldev->caps.pix_fmt_drm = ltdc_drm_fmt_a1;
+		ldev->caps.pix_fmt_nb = ARRAY_SIZE(ltdc_drm_fmt_a1);
+		ldev->caps.pix_fmt_flex = false;
 		ldev->caps.non_alpha_only_l1 = false;
 		ldev->caps.pad_max_freq_hz = 150000000;
 		ldev->caps.nb_irq = 4;
+		ldev->caps.ycbcr_input = false;
+		ldev->caps.ycbcr_output = false;
+		ldev->caps.plane_reg_shadow = false;
+		break;
+	case HWVER_40100:
+		ldev->caps.layer_ofs = LAY_OFS_1;
+		ldev->caps.layer_regs = ltdc_layer_regs_a2;
+		ldev->caps.pix_fmt_hw = ltdc_pix_fmt_a2;
+		ldev->caps.pix_fmt_drm = ltdc_drm_fmt_a2;
+		ldev->caps.pix_fmt_nb = ARRAY_SIZE(ltdc_drm_fmt_a2);
+		ldev->caps.pix_fmt_flex = true;
+		ldev->caps.non_alpha_only_l1 = false;
+		ldev->caps.pad_max_freq_hz = 90000000;
+		ldev->caps.nb_irq = 2;
+		ldev->caps.ycbcr_input = true;
+		ldev->caps.ycbcr_output = true;
+		ldev->caps.plane_reg_shadow = true;
 		break;
 	default:
 		return -ENODEV;
···
 		goto err;
 	}

+	ldev->regmap = devm_regmap_init_mmio(&pdev->dev, ldev->regs, &stm32_ltdc_regmap_cfg);
+	if (IS_ERR(ldev->regmap)) {
+		DRM_ERROR("Unable to regmap ltdc registers\n");
+		ret = PTR_ERR(ldev->regmap);
+		goto err;
+	}
+
 	/* Disable interrupts */
-	reg_clear(ldev->regs, LTDC_IER,
-		  IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE);
+	regmap_clear_bits(ldev->regmap, LTDC_IER, IER_LIE | IER_RRIE | IER_FUIE | IER_TERRIE);

 	ret = ltdc_get_caps(ddev);
 	if (ret) {
+10 -2
drivers/gpu/drm/stm/ltdc.h
···
 struct ltdc_caps {
 	u32 hw_version;		/* hardware version */
 	u32 nb_layers;		/* number of supported layers */
-	u32 reg_ofs;		/* register offset for applicable regs */
+	u32 layer_ofs;		/* layer offset for applicable regs */
+	const u32 *layer_regs;	/* layer register offset */
 	u32 bus_width;		/* bus width (32 or 64 bits) */
-	const u32 *pix_fmt_hw;	/* supported pixel formats */
+	const u32 *pix_fmt_hw;	/* supported hw pixel formats */
+	const u32 *pix_fmt_drm;	/* supported drm pixel formats */
+	int pix_fmt_nb;		/* number of pixel formats */
+	bool pix_fmt_flex;	/* pixel format flexibility supported */
 	bool non_alpha_only_l1;	/* non-native no-alpha formats on layer 1 */
 	int pad_max_freq_hz;	/* max frequency supported by pad */
 	int nb_irq;		/* number of hardware interrupts */
+	bool ycbcr_input;	/* ycbcr input converter supported */
+	bool ycbcr_output;	/* ycbcr output converter supported */
+	bool plane_reg_shadow;	/* plane shadow registers ability */
 };

 #define LTDC_MAX_LAYER	4
···
 struct ltdc_device {
 	void __iomem *regs;
+	struct regmap *regmap;
 	struct clk *pixel_clk;	/* lcd pixel clock */
 	struct mutex err_lock;	/* protecting error_status */
 	struct ltdc_caps caps;
+1
drivers/gpu/drm/tegra/Kconfig
···
 	depends on COMMON_CLK
 	depends on DRM
 	depends on OF
+	select DRM_DP_HELPER
 	select DRM_KMS_HELPER
 	select DRM_MIPI_DSI
 	select DRM_PANEL
+1 -1
drivers/gpu/drm/tegra/dp.c
···
  */

 #include <drm/drm_crtc.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_print.h>

 #include "dp.h"
+1 -1
drivers/gpu/drm/tegra/dpaux.c
···
 #include <linux/reset.h>
 #include <linux/workqueue.h>

-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_panel.h>

 #include "dp.h"
+1 -1
drivers/gpu/drm/tegra/sor.c
···

 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_debugfs.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_file.h>
 #include <drm/drm_panel.h>
 #include <drm/drm_scdc_helper.h>
+1 -3
drivers/gpu/drm/tilcdc/tilcdc_drv.c
···
 	list_del(&mod->list);
 }

-static struct of_device_id tilcdc_of_match[];
-
 static int tilcdc_atomic_check(struct drm_device *dev,
 			       struct drm_atomic_state *state)
 {
···
 	return 0;
 }

-static struct of_device_id tilcdc_of_match[] = {
+static const struct of_device_id tilcdc_of_match[] = {
 	{ .compatible = "ti,am33xx-tilcdc", },
 	{ .compatible = "ti,da850-tilcdc", },
 	{ },
+2 -18
drivers/gpu/drm/tiny/bochs.c
···
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_gem_vram_helper.h>
 #include <drm/drm_managed.h>
+#include <drm/drm_module.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_simple_kms_helper.h>
···
 /* ---------------------------------------------------------------------- */
 /* module init/exit */

-static int __init bochs_init(void)
-{
-	if (drm_firmware_drivers_only() && bochs_modeset == -1)
-		return -EINVAL;
-
-	if (bochs_modeset == 0)
-		return -EINVAL;
-
-	return pci_register_driver(&bochs_pci_driver);
-}
-
-static void __exit bochs_exit(void)
-{
-	pci_unregister_driver(&bochs_pci_driver);
-}
-
-module_init(bochs_init);
-module_exit(bochs_exit);
+drm_module_pci_driver_if_modeset(bochs_pci_driver, bochs_modeset);

 MODULE_DEVICE_TABLE(pci, bochs_pci_tbl);
 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>");
+2 -15
drivers/gpu/drm/tiny/cirrus.c
···
 #include <drm/drm_ioctl.h>
 #include <drm/drm_managed.h>
 #include <drm/drm_modeset_helper_vtables.h>
+#include <drm/drm_module.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_simple_kms_helper.h>
···
 	.remove = cirrus_pci_remove,
 };

-static int __init cirrus_init(void)
-{
-	if (drm_firmware_drivers_only())
-		return -EINVAL;
-
-	return pci_register_driver(&cirrus_pci_driver);
-}
-
-static void __exit cirrus_exit(void)
-{
-	pci_unregister_driver(&cirrus_pci_driver);
-}
-
-module_init(cirrus_init);
-module_exit(cirrus_exit);
+drm_module_pci_driver(cirrus_pci_driver)

 MODULE_DEVICE_TABLE(pci, pciidlist);
 MODULE_LICENSE("GPL");
+17 -5
drivers/gpu/drm/tiny/simpledrm.c
···
 {
 	struct drm_device *dev = &sdev->dev;
 	struct platform_device *pdev = sdev->pdev;
-	struct resource *mem;
+	struct resource *res, *mem;
 	void __iomem *screen_base;
 	int ret;

-	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!mem)
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
 		return -EINVAL;

-	ret = devm_aperture_acquire_from_firmware(dev, mem->start, resource_size(mem));
+	ret = devm_aperture_acquire_from_firmware(dev, res->start, resource_size(res));
 	if (ret) {
 		drm_err(dev, "could not acquire memory range %pr: error %d\n",
-			mem, ret);
+			res, ret);
 		return ret;
+	}
+
+	mem = devm_request_mem_region(&pdev->dev, res->start, resource_size(res),
+				      sdev->dev.driver->name);
+	if (!mem) {
+		/*
+		 * We cannot make this fatal. Sometimes this comes from magic
+		 * spaces our resource handlers simply don't know about. Use
+		 * the I/O-memory resource as-is and try to map that instead.
+		 */
+		drm_warn(dev, "could not acquire memory region %pr\n", res);
+		mem = res;
 	}

 	screen_base = devm_ioremap_wc(&pdev->dev, mem->start,
+5 -2
drivers/gpu/drm/ttm/ttm_bo_util.c
···
 	if (bo->type != ttm_bo_type_sg)
 		fbo->base.base.resv = &fbo->base.base._resv;

+	if (fbo->base.resource) {
+		ttm_resource_set_bo(fbo->base.resource, &fbo->base);
+		bo->resource = NULL;
+	}
+
 	dma_resv_init(&fbo->base.base._resv);
 	fbo->base.base.dev = NULL;
 	ret = dma_resv_trylock(&fbo->base.base._resv);
···
 		ghost_obj->ttm = NULL;
 	else
 		bo->ttm = NULL;
-	bo->resource = NULL;

 	dma_resv_unlock(&ghost_obj->base._resv);
 	ttm_bo_put(ghost_obj);
···
 	dma_resv_unlock(&ghost->base._resv);
 	ttm_bo_put(ghost);
 	bo->ttm = ttm;
-	bo->resource = NULL;
 	ttm_bo_assign_mem(bo, sys_res);
 	return 0;
+3 -1
drivers/gpu/drm/ttm/ttm_range_manager.c
···
 	spin_unlock(&rman->lock);

 	if (unlikely(ret)) {
+		ttm_resource_fini(man, *res);
 		kfree(node);
 		return ret;
 	}
···
 	drm_mm_remove_node(&node->mm_nodes[0]);
 	spin_unlock(&rman->lock);

+	ttm_resource_fini(man, res);
 	kfree(node);
 }
···
 	man->func = &ttm_range_manager_func;

-	ttm_resource_manager_init(man, p_size);
+	ttm_resource_manager_init(man, bdev, p_size);

 	drm_mm_init(&rman->mm, 0, p_size);
 	spin_lock_init(&rman->lock);
+35
drivers/gpu/drm/ttm/ttm_resource.c
···
 #include <drm/ttm/ttm_resource.h>
 #include <drm/ttm/ttm_bo_driver.h>

+/**
+ * ttm_resource_init - resource object constructor
+ * @bo: buffer object this resource is allocated for
+ * @place: placement of the resource
+ * @res: the resource object to initialize
+ *
+ * Initialize a new resource object. Counterpart of &ttm_resource_fini.
+ */
 void ttm_resource_init(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
 		       struct ttm_resource *res)
···
 	res->bus.offset = 0;
 	res->bus.is_iomem = false;
 	res->bus.caching = ttm_cached;
+	res->bo = bo;
 }
 EXPORT_SYMBOL(ttm_resource_init);
+
+/**
+ * ttm_resource_fini - resource destructor
+ * @man: the resource manager this resource belongs to
+ * @res: the resource to clean up
+ *
+ * Should be used by resource manager backends to clean up the TTM resource
+ * objects before freeing the underlying structure. Counterpart of
+ * &ttm_resource_init.
+ */
+void ttm_resource_fini(struct ttm_resource_manager *man,
+		       struct ttm_resource *res)
+{
+}
+EXPORT_SYMBOL(ttm_resource_fini);

 int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
···
 }
 EXPORT_SYMBOL(ttm_resource_compat);

+void ttm_resource_set_bo(struct ttm_resource *res,
+			 struct ttm_buffer_object *bo)
+{
+	spin_lock(&bo->bdev->lru_lock);
+	res->bo = bo;
+	spin_unlock(&bo->bdev->lru_lock);
+}
+
 /**
  * ttm_resource_manager_init
  *
  * @man: memory manager object to init
+ * @bdev: ttm device this manager belongs to
  * @p_size: size managed area in pages.
  *
  * Initialise core parts of a manager object.
  */
 void ttm_resource_manager_init(struct ttm_resource_manager *man,
+			       struct ttm_device *bdev,
 			       unsigned long p_size)
 {
 	unsigned i;

 	spin_lock_init(&man->move_lock);
+	man->bdev = bdev;
 	man->size = p_size;

 	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
+2 -1
drivers/gpu/drm/ttm/ttm_sys_manager.c
···
 static void ttm_sys_man_free(struct ttm_resource_manager *man,
 			     struct ttm_resource *res)
 {
+	ttm_resource_fini(man, res);
 	kfree(res);
 }
···
 	man->use_tt = true;
 	man->func = &ttm_sys_manager_func;

-	ttm_resource_manager_init(man, 0);
+	ttm_resource_manager_init(man, bdev, 0);
 	ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, man);
 	ttm_resource_manager_set_used(man, true);
 }
+6 -2
drivers/gpu/drm/v3d/v3d_drv.c
···
 	int ret;
 	u32 mmu_debug;
 	u32 ident1;
+	u64 mask;

 	v3d = devm_drm_dev_alloc(dev, &v3d_drm_driver, struct v3d_dev, drm);
 	if (IS_ERR(v3d))
···
 		return ret;

 	mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
-	dma_set_mask_and_coherent(dev,
-		DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH)));
+	mask = DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH));
+	ret = dma_set_mask_and_coherent(dev, mask);
+	if (ret)
+		return ret;
+
 	v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);

 	ident1 = V3D_READ(V3D_HUB_IDENT1);
-2
drivers/gpu/drm/vc4/vc4_bo.c
···
 	uint32_t page_index = bo_page_index(size);
 	struct vc4_bo *bo = NULL;

-	size = roundup(size, PAGE_SIZE);
-
 	mutex_lock(&vc4->bo_lock);
 	if (page_index >= vc4->bo_cache.size_list_size)
 		goto out;
+29 -4
drivers/gpu/drm/vc4/vc4_drv.c
···
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_vblank.h>

+#include <soc/bcm2835/raspberrypi-firmware.h>
+
 #include "uapi/drm/vc4_drm.h"

 #include "vc4_drv.h"
···
 static int vc4_drm_bind(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
+	struct rpi_firmware *firmware = NULL;
 	struct drm_device *drm;
 	struct vc4_dev *vc4;
 	struct device_node *node;
···
 	if (ret)
 		return ret;

+	node = of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
+	if (node) {
+		firmware = rpi_firmware_get(node);
+		of_node_put(node);
+
+		if (!firmware)
+			return -EPROBE_DEFER;
+	}
+
+	ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver);
+	if (ret)
+		return ret;
+
+	if (firmware) {
+		ret = rpi_firmware_property(firmware,
+					    RPI_FIRMWARE_NOTIFY_DISPLAY_DONE,
+					    NULL, 0);
+		if (ret)
+			drm_warn(drm, "Couldn't stop firmware display driver: %d\n", ret);
+
+		rpi_firmware_put(firmware);
+	}
+
 	ret = component_bind_all(dev, drm);
 	if (ret)
 		return ret;

 	ret = vc4_plane_create_additional_planes(drm);
-	if (ret)
-		goto unbind_all;
-
-	ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver);
 	if (ret)
 		goto unbind_all;
···
 static int __init vc4_drm_register(void)
 {
 	int ret;
+
+	if (drm_firmware_drivers_only())
+		return -ENODEV;

 	ret = platform_register_drivers(component_drivers,
 					ARRAY_SIZE(component_drivers));
+78 -54
drivers/gpu/drm/vc4/vc4_hdmi.c
···
 	return (mode->clock * 1000) > HDMI_14_MAX_TMDS_CLK;
 }

+static bool vc4_hdmi_is_full_range_rgb(struct vc4_hdmi *vc4_hdmi,
+				       const struct drm_display_mode *mode)
+{
+	struct vc4_hdmi_encoder *vc4_encoder = &vc4_hdmi->encoder;
+
+	return !vc4_encoder->hdmi_monitor ||
+		drm_default_rgb_quant_range(mode) == HDMI_QUANTIZATION_RANGE_FULL;
+}
+
 static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
 {
 	struct drm_info_node *node = (struct drm_info_node *)m->private;
···
 static void vc4_hdmi_set_avi_infoframe(struct drm_encoder *encoder)
 {
 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
-	struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder);
 	struct drm_connector *connector = &vc4_hdmi->connector;
 	struct drm_connector_state *cstate = connector->state;
 	const struct drm_display_mode *mode = &vc4_hdmi->saved_adjusted_mode;
···
 	drm_hdmi_avi_infoframe_quant_range(&frame.avi,
 					   connector, mode,
-					   vc4_encoder->limited_rgb_range ?
-					   HDMI_QUANTIZATION_RANGE_LIMITED :
-					   HDMI_QUANTIZATION_RANGE_FULL);
-	drm_hdmi_avi_infoframe_colorspace(&frame.avi, cstate);
+					   vc4_hdmi_is_full_range_rgb(vc4_hdmi, mode) ?
+					   HDMI_QUANTIZATION_RANGE_FULL :
+					   HDMI_QUANTIZATION_RANGE_LIMITED);
+	drm_hdmi_avi_infoframe_colorimetry(&frame.avi, cstate);
 	drm_hdmi_avi_infoframe_bars(&frame.avi, cstate);

 	vc4_hdmi_write_infoframe(encoder, &frame);
···
 	mutex_unlock(&vc4_hdmi->mutex);
 }

-static void vc4_hdmi_csc_setup(struct vc4_hdmi *vc4_hdmi, bool enable)
+static void vc4_hdmi_csc_setup(struct vc4_hdmi *vc4_hdmi,
+			       struct drm_connector_state *state,
+			       const struct drm_display_mode *mode)
 {
 	unsigned long flags;
 	u32 csc_ctl;
···
 	csc_ctl = VC4_SET_FIELD(VC4_HD_CSC_CTL_ORDER_BGR,
 				VC4_HD_CSC_CTL_ORDER);

-	if (enable) {
+	if (!vc4_hdmi_is_full_range_rgb(vc4_hdmi, mode)) {
 		/* CEA VICs other than #1 requre limited range RGB
 		 * output unless overridden by an AVI infoframe.
 		 * Apply a colorspace conversion to squash 0-255 down
···
 	spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags);
 }

-static void vc5_hdmi_csc_setup(struct vc4_hdmi *vc4_hdmi, bool enable)
+/*
+ * If we need to output Full Range RGB, then use the unity matrix
+ *
+ * [ 1      0      0      0]
+ * [ 0      1      0      0]
+ * [ 0      0      1      0]
+ *
+ * Matrix is signed 2p13 fixed point, with signed 9p6 offsets
+ */
+static const u16 vc5_hdmi_csc_full_rgb_unity[3][4] = {
+	{ 0x2000, 0x0000, 0x0000, 0x0000 },
+	{ 0x0000, 0x2000, 0x0000, 0x0000 },
+	{ 0x0000, 0x0000, 0x2000, 0x0000 },
+};
+
+/*
+ * CEA VICs other than #1 require limited range RGB output unless
+ * overridden by an AVI infoframe. Apply a colorspace conversion to
+ * squash 0-255 down to 16-235. The matrix here is:
+ *
+ * [ 0.8594 0      0      16]
+ * [ 0      0.8594 0      16]
+ * [ 0      0      0.8594 16]
+ *
+ * Matrix is signed 2p13 fixed point, with signed 9p6 offsets
+ */
+static const u16 vc5_hdmi_csc_full_rgb_to_limited_rgb[3][4] = {
+	{ 0x1b80, 0x0000, 0x0000, 0x0400 },
+	{ 0x0000, 0x1b80, 0x0000, 0x0400 },
+	{ 0x0000, 0x0000, 0x1b80, 0x0400 },
+};
+
+static void vc5_hdmi_set_csc_coeffs(struct vc4_hdmi *vc4_hdmi,
+				    const u16 coeffs[3][4])
+{
+	lockdep_assert_held(&vc4_hdmi->hw_lock);
+
+	HDMI_WRITE(HDMI_CSC_12_11, (coeffs[0][1] << 16) | coeffs[0][0]);
+	HDMI_WRITE(HDMI_CSC_14_13, (coeffs[0][3] << 16) | coeffs[0][2]);
+	HDMI_WRITE(HDMI_CSC_22_21, (coeffs[1][1] << 16) | coeffs[1][0]);
+	HDMI_WRITE(HDMI_CSC_24_23, (coeffs[1][3] << 16) | coeffs[1][2]);
+	HDMI_WRITE(HDMI_CSC_32_31, (coeffs[2][1] << 16) | coeffs[2][0]);
+	HDMI_WRITE(HDMI_CSC_34_33, (coeffs[2][3] << 16) | coeffs[2][2]);
+}
+
+static void vc5_hdmi_csc_setup(struct vc4_hdmi *vc4_hdmi,
+			       struct drm_connector_state *state,
+			       const struct drm_display_mode *mode)
 {
 	unsigned long flags;
-	u32 csc_ctl;
-
-	csc_ctl = 0x07;	/* RGB_CONVERT_MODE = custom matrix, || USE_RGB_TO_YCBCR */
+	u32 csc_ctl = VC5_MT_CP_CSC_CTL_ENABLE | VC4_SET_FIELD(VC4_HD_CSC_CTL_MODE_CUSTOM,
+							       VC5_MT_CP_CSC_CTL_MODE);

 	spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);

-	if (enable) {
-		/* CEA VICs other than #1 requre limited range RGB
-		 * output unless overridden by an AVI infoframe.
-		 * Apply a colorspace conversion to squash 0-255 down
-		 * to 16-235. The matrix here is:
-		 *
-		 * [ 0.8594 0      0      16]
-		 * [ 0      0.8594 0      16]
-		 * [ 0      0      0.8594 16]
-		 * [ 0      0      0       1]
-		 * Matrix is signed 2p13 fixed point, with signed 9p6 offsets
-		 */
-		HDMI_WRITE(HDMI_CSC_12_11, (0x0000 << 16) | 0x1b80);
-		HDMI_WRITE(HDMI_CSC_14_13, (0x0400 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_22_21, (0x1b80 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_24_23, (0x0400 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_32_31, (0x0000 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_34_33, (0x0400 << 16) | 0x1b80);
-	} else {
-		/* Still use the matrix for full range, but make it unity.
-		 * Matrix is signed 2p13 fixed point, with signed 9p6 offsets
-		 */
-		HDMI_WRITE(HDMI_CSC_12_11, (0x0000 << 16) | 0x2000);
-		HDMI_WRITE(HDMI_CSC_14_13, (0x0000 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_22_21, (0x2000 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_24_23, (0x0000 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_32_31, (0x0000 << 16) | 0x0000);
-		HDMI_WRITE(HDMI_CSC_34_33, (0x0000 << 16) | 0x2000);
-	}
+	HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, 0x354021);
+
+	if (!vc4_hdmi_is_full_range_rgb(vc4_hdmi, mode))
+		vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_to_limited_rgb);
+	else
+		vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_unity);

 	HDMI_WRITE(HDMI_CSC_CTL, csc_ctl);
···
 	spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);

-	HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, 0x354021);
 	HDMI_WRITE(HDMI_HORZA,
 		   (vsync_pos ? VC5_HDMI_HORZA_VPOS : 0) |
 		   (hsync_pos ? VC5_HDMI_HORZA_HPOS : 0) |
···
 				    struct drm_atomic_state *state)
 {
 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+	struct drm_connector *connector = &vc4_hdmi->connector;
 	struct drm_display_mode *mode = &vc4_hdmi->saved_adjusted_mode;
-	struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder);
+	struct drm_connector_state *conn_state =
+		drm_atomic_get_new_connector_state(state, connector);
 	unsigned long flags;

 	mutex_lock(&vc4_hdmi->mutex);

-	if (vc4_encoder->hdmi_monitor &&
-	    drm_default_rgb_quant_range(mode) == HDMI_QUANTIZATION_RANGE_LIMITED) {
-		if (vc4_hdmi->variant->csc_setup)
-			vc4_hdmi->variant->csc_setup(vc4_hdmi, true);
-
-		vc4_encoder->limited_rgb_range = true;
-	} else {
-		if (vc4_hdmi->variant->csc_setup)
-			vc4_hdmi->variant->csc_setup(vc4_hdmi, false);
-
-		vc4_encoder->limited_rgb_range = false;
-	}
+	if (vc4_hdmi->variant->csc_setup)
+		vc4_hdmi->variant->csc_setup(vc4_hdmi, conn_state, mode);

 	spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
 	HDMI_WRITE(HDMI_FIFO_CTL, VC4_HDMI_FIFO_CTL_MASTER_SLAVE_N);
+3 -2
drivers/gpu/drm/vc4/vc4_hdmi.h
···
 struct vc4_hdmi_encoder {
         struct vc4_encoder base;
         bool hdmi_monitor;
-        bool limited_rgb_range;
 };
 
 static inline struct vc4_hdmi_encoder *
···
         void (*reset)(struct vc4_hdmi *vc4_hdmi);
 
         /* Callback to enable / disable the CSC */
-        void (*csc_setup)(struct vc4_hdmi *vc4_hdmi, bool enable);
+        void (*csc_setup)(struct vc4_hdmi *vc4_hdmi,
+                          struct drm_connector_state *state,
+                          const struct drm_display_mode *mode);
 
         /* Callback to configure the video timings in the HDMI block */
         void (*set_timings)(struct vc4_hdmi *vc4_hdmi,
+3
drivers/gpu/drm/vc4/vc4_regs.h
···
 # define VC4_HD_CSC_CTL_RGB2YCC BIT(1)
 # define VC4_HD_CSC_CTL_ENABLE BIT(0)
 
+# define VC5_MT_CP_CSC_CTL_ENABLE BIT(2)
+# define VC5_MT_CP_CSC_CTL_MODE_MASK VC4_MASK(1, 0)
+
 # define VC4_DVP_HT_CLOCK_STOP_PIXEL BIT(1)
 
 /* HVS display list information. */
+3
drivers/gpu/drm/virtio/virtgpu_gem.c
···
 {
         u32 i;
 
+        if (!objs)
+                return;
+
         for (i = 0; i < objs->nents; i++)
                 drm_gem_object_put(objs->objs[i]);
         virtio_gpu_array_free(objs);
+2
drivers/gpu/drm/vkms/vkms_drv.h
···
 #define XRES_MAX 8192
 #define YRES_MAX 8192
 
+#define NUM_OVERLAY_PLANES 8
+
 struct vkms_writeback_job {
         struct dma_buf_map map[DRM_FORMAT_MAX_PLANES];
         struct dma_buf_map data[DRM_FORMAT_MAX_PLANES];
+22 -7
drivers/gpu/drm/vkms/vkms_output.c
···
         .get_modes = vkms_conn_get_modes,
 };
 
+static int vkms_add_overlay_plane(struct vkms_device *vkmsdev, int index,
+                                  struct drm_crtc *crtc)
+{
+        struct vkms_plane *overlay;
+
+        overlay = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_OVERLAY, index);
+        if (IS_ERR(overlay))
+                return PTR_ERR(overlay);
+
+        if (!overlay->base.possible_crtcs)
+                overlay->base.possible_crtcs = drm_crtc_mask(crtc);
+
+        return 0;
+}
+
 int vkms_output_init(struct vkms_device *vkmsdev, int index)
 {
         struct vkms_output *output = &vkmsdev->output;
···
         struct drm_connector *connector = &output->connector;
         struct drm_encoder *encoder = &output->encoder;
         struct drm_crtc *crtc = &output->crtc;
-        struct vkms_plane *primary, *cursor = NULL, *overlay = NULL;
+        struct vkms_plane *primary, *cursor = NULL;
         int ret;
         int writeback;
+        unsigned int n;
 
         primary = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_PRIMARY, index);
         if (IS_ERR(primary))
                 return PTR_ERR(primary);
 
         if (vkmsdev->config->overlay) {
-                overlay = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_OVERLAY, index);
-                if (IS_ERR(overlay))
-                        return PTR_ERR(overlay);
-
-                if (!overlay->base.possible_crtcs)
-                        overlay->base.possible_crtcs = drm_crtc_mask(crtc);
+                for (n = 0; n < NUM_OVERLAY_PLANES; n++) {
+                        ret = vkms_add_overlay_plane(vkmsdev, index, crtc);
+                        if (ret)
+                                return ret;
+                }
         }
 
         if (vkmsdev->config->cursor) {
+3 -1
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
···
         gman->used_gmr_pages -= (*res)->num_pages;
         spin_unlock(&gman->lock);
         ida_free(&gman->gmr_ida, id);
+        ttm_resource_fini(man, *res);
         kfree(*res);
         return -ENOSPC;
 }
···
         spin_lock(&gman->lock);
         gman->used_gmr_pages -= res->num_pages;
         spin_unlock(&gman->lock);
+        ttm_resource_fini(man, res);
         kfree(res);
 }
 
···
 
         man->func = &vmw_gmrid_manager_func;
         man->use_tt = true;
-        ttm_resource_manager_init(man, 0);
+        ttm_resource_manager_init(man, &dev_priv->bdev, 0);
         spin_lock_init(&gman->lock);
         gman->used_gmr_pages = 0;
         ida_init(&gman->gmr_ida);
+2 -1
drivers/gpu/drm/vmwgfx/vmwgfx_system_manager.c
···
 static void vmw_sys_man_free(struct ttm_resource_manager *man,
                              struct ttm_resource *res)
 {
+        ttm_resource_fini(man, res);
         kfree(res);
 }
 
···
         man->use_tt = true;
         man->func = &vmw_sys_manager_func;
 
-        ttm_resource_manager_init(man, 0);
+        ttm_resource_manager_init(man, bdev, 0);
         ttm_set_driver_manager(bdev, VMW_PL_SYSTEM, man);
         ttm_resource_manager_set_used(man, true);
         return 0;
+1
drivers/gpu/drm/xlnx/Kconfig
···
         depends on PHY_XILINX_ZYNQMP
         depends on XILINX_ZYNQMP_DPDMA
         select DMA_ENGINE
+        select DRM_DP_HELPER
         select DRM_GEM_CMA_HELPER
         select DRM_KMS_HELPER
         select GENERIC_PHY
+1 -1
drivers/gpu/drm/xlnx/zynqmp_dp.c
···
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_device.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_managed.h>
+11
drivers/platform/chrome/Kconfig
···
           To compile this driver as a module, choose M here: the
           module will be called cros_usbpd_notify.
 
+config CHROMEOS_PRIVACY_SCREEN
+        tristate "ChromeOS Privacy Screen support"
+        depends on ACPI
+        depends on DRM
+        select DRM_PRIVACY_SCREEN
+        help
+          This driver provides the support needed for the in-built electronic
+          privacy screen that is present on some ChromeOS devices. When enabled,
+          this should probably always be built into the kernel to avoid or
+          minimize drm probe deferral.
+
 source "drivers/platform/chrome/wilco_ec/Kconfig"
 
 endif # CHROMEOS_PLATFORMS
+1
drivers/platform/chrome/Makefile
···
 CFLAGS_cros_ec_trace.o:= -I$(src)
 
 obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o
+obj-$(CONFIG_CHROMEOS_PRIVACY_SCREEN) += chromeos_privacy_screen.o
 obj-$(CONFIG_CHROMEOS_PSTORE) += chromeos_pstore.o
 obj-$(CONFIG_CHROMEOS_TBMC) += chromeos_tbmc.o
 obj-$(CONFIG_CROS_EC) += cros_ec.o
+153
drivers/platform/chrome/chromeos_privacy_screen.c
···
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * ChromeOS Privacy Screen support
+ *
+ * Copyright (C) 2022 Google LLC
+ *
+ * This is the Chromeos privacy screen provider, present on certain chromebooks,
+ * represented by a GOOG0010 device in the ACPI. This ACPI device, if present,
+ * will cause the i915 drm driver to probe defer until this driver registers
+ * the privacy-screen.
+ */
+
+#include <linux/acpi.h>
+#include <drm/drm_privacy_screen_driver.h>
+
+/*
+ * The DSM (Device Specific Method) constants below are the agreed API with
+ * the firmware team, on how to control privacy screen using ACPI methods.
+ */
+#define PRIV_SCRN_DSM_REVID 1 /* DSM version */
+#define PRIV_SCRN_DSM_FN_GET_STATUS 1 /* Get privacy screen status */
+#define PRIV_SCRN_DSM_FN_ENABLE 2 /* Enable privacy screen */
+#define PRIV_SCRN_DSM_FN_DISABLE 3 /* Disable privacy screen */
+
+static const guid_t chromeos_privacy_screen_dsm_guid =
+        GUID_INIT(0xc7033113, 0x8720, 0x4ceb,
+                  0x90, 0x90, 0x9d, 0x52, 0xb3, 0xe5, 0x2d, 0x73);
+
+static void
+chromeos_privacy_screen_get_hw_state(struct drm_privacy_screen
+                                     *drm_privacy_screen)
+{
+        union acpi_object *obj;
+        acpi_handle handle;
+        struct device *privacy_screen =
+                drm_privacy_screen_get_drvdata(drm_privacy_screen);
+
+        handle = acpi_device_handle(to_acpi_device(privacy_screen));
+        obj = acpi_evaluate_dsm(handle, &chromeos_privacy_screen_dsm_guid,
+                                PRIV_SCRN_DSM_REVID,
+                                PRIV_SCRN_DSM_FN_GET_STATUS, NULL);
+        if (!obj) {
+                dev_err(privacy_screen,
+                        "_DSM failed to get privacy-screen state\n");
+                return;
+        }
+
+        if (obj->type != ACPI_TYPE_INTEGER)
+                dev_err(privacy_screen,
+                        "Bad _DSM to get privacy-screen state\n");
+        else if (obj->integer.value == 1)
+                drm_privacy_screen->hw_state = drm_privacy_screen->sw_state =
+                        PRIVACY_SCREEN_ENABLED;
+        else
+                drm_privacy_screen->hw_state = drm_privacy_screen->sw_state =
+                        PRIVACY_SCREEN_DISABLED;
+
+        ACPI_FREE(obj);
+}
+
+static int
+chromeos_privacy_screen_set_sw_state(struct drm_privacy_screen
+                                     *drm_privacy_screen,
+                                     enum drm_privacy_screen_status state)
+{
+        union acpi_object *obj = NULL;
+        acpi_handle handle;
+        struct device *privacy_screen =
+                drm_privacy_screen_get_drvdata(drm_privacy_screen);
+
+        handle = acpi_device_handle(to_acpi_device(privacy_screen));
+
+        if (state == PRIVACY_SCREEN_DISABLED) {
+                obj = acpi_evaluate_dsm(handle,
+                                        &chromeos_privacy_screen_dsm_guid,
+                                        PRIV_SCRN_DSM_REVID,
+                                        PRIV_SCRN_DSM_FN_DISABLE, NULL);
+        } else if (state == PRIVACY_SCREEN_ENABLED) {
+                obj = acpi_evaluate_dsm(handle,
+                                        &chromeos_privacy_screen_dsm_guid,
+                                        PRIV_SCRN_DSM_REVID,
+                                        PRIV_SCRN_DSM_FN_ENABLE, NULL);
+        } else {
+                dev_err(privacy_screen,
+                        "Bad attempt to set privacy-screen status to %u\n",
+                        state);
+                return -EINVAL;
+        }
+
+        if (!obj) {
+                dev_err(privacy_screen,
+                        "_DSM failed to set privacy-screen state\n");
+                return -EIO;
+        }
+
+        drm_privacy_screen->hw_state = drm_privacy_screen->sw_state = state;
+        ACPI_FREE(obj);
+        return 0;
+}
+
+static const struct drm_privacy_screen_ops chromeos_privacy_screen_ops = {
+        .get_hw_state = chromeos_privacy_screen_get_hw_state,
+        .set_sw_state = chromeos_privacy_screen_set_sw_state,
+};
+
+static int chromeos_privacy_screen_add(struct acpi_device *adev)
+{
+        struct drm_privacy_screen *drm_privacy_screen =
+                drm_privacy_screen_register(&adev->dev,
+                                            &chromeos_privacy_screen_ops,
+                                            &adev->dev);
+
+        if (IS_ERR(drm_privacy_screen)) {
+                dev_err(&adev->dev, "Error registering privacy-screen\n");
+                return PTR_ERR(drm_privacy_screen);
+        }
+
+        adev->driver_data = drm_privacy_screen;
+        dev_info(&adev->dev, "registered privacy-screen '%s'\n",
+                 dev_name(&drm_privacy_screen->dev));
+
+        return 0;
+}
+
+static int chromeos_privacy_screen_remove(struct acpi_device *adev)
+{
+        struct drm_privacy_screen *drm_privacy_screen = acpi_driver_data(adev);
+
+        drm_privacy_screen_unregister(drm_privacy_screen);
+        return 0;
+}
+
+static const struct acpi_device_id chromeos_privacy_screen_device_ids[] = {
+        {"GOOG0010", 0}, /* Google's electronic privacy screen for eDP-1 */
+        {}
+};
+MODULE_DEVICE_TABLE(acpi, chromeos_privacy_screen_device_ids);
+
+static struct acpi_driver chromeos_privacy_screen_driver = {
+        .name = "chromeos_privacy_screen_driver",
+        .class = "ChromeOS",
+        .ids = chromeos_privacy_screen_device_ids,
+        .ops = {
+                .add = chromeos_privacy_screen_add,
+                .remove = chromeos_privacy_screen_remove,
+        },
+};
+
+module_acpi_driver(chromeos_privacy_screen_driver);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("ChromeOS ACPI Privacy Screen driver");
+MODULE_AUTHOR("Rajat Jain <rajatja@google.com>");
+1 -1
drivers/platform/x86/thinkpad_acpi.c
···
                 return 0;
 
         lcdshadow_dev = drm_privacy_screen_register(&tpacpi_pdev->dev,
-                                                    &lcdshadow_ops);
+                                                    &lcdshadow_ops, NULL);
         if (IS_ERR(lcdshadow_dev))
                 return PTR_ERR(lcdshadow_dev);
 
+1 -1
drivers/video/fbdev/asiliantfb.c
···
 static void asiliant_calc_dclk2(u32 *ppixclock, u8 *dclk2_m, u8 *dclk2_n, u8 *dclk2_div)
 {
         unsigned pixclock = *ppixclock;
-        unsigned Ftarget = 1000000 * (1000000 / pixclock);
+        unsigned Ftarget;
         unsigned n;
         unsigned best_error = 0xffffffff;
         unsigned best_m = 0xffffffff,
+26 -3
drivers/video/fbdev/core/fbmem.c
···
 #include <linux/init.h>
 #include <linux/linux_logo.h>
 #include <linux/proc_fs.h>
+#include <linux/platform_device.h>
 #include <linux/seq_file.h>
 #include <linux/console.h>
 #include <linux/kmod.h>
···
         /* check all firmware fbs and kick off if the base addr overlaps */
         for_each_registered_fb(i) {
                 struct apertures_struct *gen_aper;
+                struct device *device;
 
                 if (!(registered_fb[i]->flags & FBINFO_MISC_FIRMWARE))
                         continue;
 
                 gen_aper = registered_fb[i]->apertures;
+                device = registered_fb[i]->device;
                 if (fb_do_apertures_overlap(gen_aper, a) ||
                     (primary && gen_aper && gen_aper->count &&
                      gen_aper->ranges[0].base == VGA_FB_PHYS)) {
 
                         printk(KERN_INFO "fb%d: switching to %s from %s\n",
                                i, name, registered_fb[i]->fix.id);
-                        do_unregister_framebuffer(registered_fb[i]);
+
+                        /*
+                         * If we kick-out a firmware driver, we also want to remove
+                         * the underlying platform device, such as simple-framebuffer,
+                         * VESA, EFI, etc. A native driver will then be able to
+                         * allocate the memory range.
+                         *
+                         * If it's not a platform device, at least print a warning. A
+                         * fix would add code to remove the device from the system.
+                         */
+                        if (dev_is_platform(device)) {
+                                registered_fb[i]->forced_out = true;
+                                platform_device_unregister(to_platform_device(device));
+                        } else {
+                                pr_warn("fb%d: cannot remove device\n", i);
+                                do_unregister_framebuffer(registered_fb[i]);
+                        }
                 }
         }
 }
···
 void
 unregister_framebuffer(struct fb_info *fb_info)
 {
-        mutex_lock(&registration_lock);
+        bool forced_out = fb_info->forced_out;
+
+        if (!forced_out)
+                mutex_lock(&registration_lock);
         do_unregister_framebuffer(fb_info);
-        mutex_unlock(&registration_lock);
+        if (!forced_out)
+                mutex_unlock(&registration_lock);
 }
 EXPORT_SYMBOL(unregister_framebuffer);
 
+1 -1
drivers/video/fbdev/s3c-fb.c
···
         struct s3c_fb_win *win = info->par;
         struct s3c_fb *sfb = win->parent;
         void __iomem *regs = sfb->regs;
-        void __iomem *buf = regs;
+        void __iomem *buf;
         int win_no = win->index;
         u32 alpha = 0;
         u32 data;
+45 -20
drivers/video/fbdev/simplefb.c
···
         return 0;
 }
 
-struct simplefb_par;
+struct simplefb_par {
+        u32 palette[PSEUDO_PALETTE_SIZE];
+        struct resource *mem;
+#if defined CONFIG_OF && defined CONFIG_COMMON_CLK
+        bool clks_enabled;
+        unsigned int clk_count;
+        struct clk **clks;
+#endif
+#if defined CONFIG_OF && defined CONFIG_REGULATOR
+        bool regulators_enabled;
+        u32 regulator_count;
+        struct regulator **regulators;
+#endif
+};
+
 static void simplefb_clocks_destroy(struct simplefb_par *par);
 static void simplefb_regulators_destroy(struct simplefb_par *par);
 
 static void simplefb_destroy(struct fb_info *info)
 {
+        struct simplefb_par *par = info->par;
+        struct resource *mem = par->mem;
+
         simplefb_regulators_destroy(info->par);
         simplefb_clocks_destroy(info->par);
         if (info->screen_base)
                 iounmap(info->screen_base);
+
+        if (mem)
+                release_mem_region(mem->start, resource_size(mem));
 }
 
 static const struct fb_ops simplefb_ops = {
···
 
         return 0;
 }
-
-struct simplefb_par {
-        u32 palette[PSEUDO_PALETTE_SIZE];
-#if defined CONFIG_OF && defined CONFIG_COMMON_CLK
-        bool clks_enabled;
-        unsigned int clk_count;
-        struct clk **clks;
-#endif
-#if defined CONFIG_OF && defined CONFIG_REGULATOR
-        bool regulators_enabled;
-        u32 regulator_count;
-        struct regulator **regulators;
-#endif
-};
 
 #if defined CONFIG_OF && defined CONFIG_COMMON_CLK
 /*
···
         struct simplefb_params params;
         struct fb_info *info;
         struct simplefb_par *par;
-        struct resource *mem;
+        struct resource *res, *mem;
 
         /*
          * Generic drivers must not be registered if a framebuffer exists.
···
         if (ret)
                 return ret;
 
-        mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-        if (!mem) {
+        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+        if (!res) {
                 dev_err(&pdev->dev, "No memory resource\n");
                 return -EINVAL;
         }
 
+        mem = request_mem_region(res->start, resource_size(res), "simplefb");
+        if (!mem) {
+                /*
+                 * We cannot make this fatal. Sometimes this comes from magic
+                 * spaces our resource handlers simply don't know about. Use
+                 * the I/O-memory resource as-is and try to map that instead.
+                 */
+                dev_warn(&pdev->dev, "simplefb: cannot reserve video memory at %pR\n", res);
+                mem = res;
+        }
+
         info = framebuffer_alloc(sizeof(struct simplefb_par), &pdev->dev);
-        if (!info)
-                return -ENOMEM;
+        if (!info) {
+                ret = -ENOMEM;
+                goto error_release_mem_region;
+        }
         platform_set_drvdata(pdev, info);
 
         par = info->par;
···
                 info->var.xres, info->var.yres,
                 info->var.bits_per_pixel, info->fix.line_length);
 
+        if (mem != res)
+                par->mem = mem; /* release in clean-up handler */
+
         ret = register_framebuffer(info);
         if (ret < 0) {
                 dev_err(&pdev->dev, "Unable to register simplefb: %d\n", ret);
···
         iounmap(info->screen_base);
 error_fb_release:
         framebuffer_release(info);
+error_release_mem_region:
+        if (mem != res)
+                release_mem_region(mem->start, resource_size(mem));
         return ret;
 }
 
+5
drivers/video/fbdev/vga16fb.c
···
         printk(KERN_INFO "vga16fb: mapped to 0x%p\n", info->screen_base);
         par = info->par;
 
+#if defined(CONFIG_X86)
+        par->isVGA = screen_info.orig_video_isVGA == VIDEO_TYPE_VGAC;
+#else
+        /* non-x86 architectures treat orig_video_isVGA as a boolean flag */
         par->isVGA = screen_info.orig_video_isVGA;
+#endif
         par->palette_blanked = 0;
         par->vesa_blanked = 0;
 
+3 -1
include/drm/bridge/dw_mipi_dsi.h
···
         unsigned int max_data_lanes;
 
         enum drm_mode_status (*mode_valid)(void *priv_data,
-                                           const struct drm_display_mode *mode);
+                                           const struct drm_display_mode *mode,
+                                           unsigned long mode_flags,
+                                           u32 lanes, u32 format);
 
         const struct dw_mipi_dsi_phy_ops *phy_ops;
         const struct dw_mipi_dsi_host_ops *host_ops;
+150
include/drm/drm_buddy.h
···
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __DRM_BUDDY_H__
+#define __DRM_BUDDY_H__
+
+#include <linux/bitops.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+
+#include <drm/drm_print.h>
+
+#define range_overflows(start, size, max) ({ \
+        typeof(start) start__ = (start); \
+        typeof(size) size__ = (size); \
+        typeof(max) max__ = (max); \
+        (void)(&start__ == &size__); \
+        (void)(&start__ == &max__); \
+        start__ >= max__ || size__ > max__ - start__; \
+})
+
+struct drm_buddy_block {
+#define DRM_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
+#define DRM_BUDDY_HEADER_STATE GENMASK_ULL(11, 10)
+#define DRM_BUDDY_ALLOCATED (1 << 10)
+#define DRM_BUDDY_FREE (2 << 10)
+#define DRM_BUDDY_SPLIT (3 << 10)
+/* Free to be used, if needed in the future */
+#define DRM_BUDDY_HEADER_UNUSED GENMASK_ULL(9, 6)
+#define DRM_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0)
+        u64 header;
+
+        struct drm_buddy_block *left;
+        struct drm_buddy_block *right;
+        struct drm_buddy_block *parent;
+
+        void *private; /* owned by creator */
+
+        /*
+         * While the block is allocated by the user through drm_buddy_alloc*,
+         * the user has ownership of the link, for example to maintain within
+         * a list, if so desired. As soon as the block is freed with
+         * drm_buddy_free* ownership is given back to the mm.
+         */
+        struct list_head link;
+        struct list_head tmp_link;
+};
+
+/* Order-zero must be at least PAGE_SIZE */
+#define DRM_BUDDY_MAX_ORDER (63 - PAGE_SHIFT)
+
+/*
+ * Binary Buddy System.
+ *
+ * Locking should be handled by the user, a simple mutex around
+ * drm_buddy_alloc* and drm_buddy_free* should suffice.
+ */
+struct drm_buddy {
+        /* Maintain a free list for each order. */
+        struct list_head *free_list;
+
+        /*
+         * Maintain explicit binary tree(s) to track the allocation of the
+         * address space. This gives us a simple way of finding a buddy block
+         * and performing the potentially recursive merge step when freeing a
+         * block. Nodes are either allocated or free, in which case they will
+         * also exist on the respective free list.
+         */
+        struct drm_buddy_block **roots;
+
+        /*
+         * Anything from here is public, and remains static for the lifetime of
+         * the mm. Everything above is considered do-not-touch.
+         */
+        unsigned int n_roots;
+        unsigned int max_order;
+
+        /* Must be at least PAGE_SIZE */
+        u64 chunk_size;
+        u64 size;
+        u64 avail;
+};
+
+static inline u64
+drm_buddy_block_offset(struct drm_buddy_block *block)
+{
+        return block->header & DRM_BUDDY_HEADER_OFFSET;
+}
+
+static inline unsigned int
+drm_buddy_block_order(struct drm_buddy_block *block)
+{
+        return block->header & DRM_BUDDY_HEADER_ORDER;
+}
+
+static inline unsigned int
+drm_buddy_block_state(struct drm_buddy_block *block)
+{
+        return block->header & DRM_BUDDY_HEADER_STATE;
+}
+
+static inline bool
+drm_buddy_block_is_allocated(struct drm_buddy_block *block)
+{
+        return drm_buddy_block_state(block) == DRM_BUDDY_ALLOCATED;
+}
+
+static inline bool
+drm_buddy_block_is_free(struct drm_buddy_block *block)
+{
+        return drm_buddy_block_state(block) == DRM_BUDDY_FREE;
+}
+
+static inline bool
+drm_buddy_block_is_split(struct drm_buddy_block *block)
+{
+        return drm_buddy_block_state(block) == DRM_BUDDY_SPLIT;
+}
+
+static inline u64
+drm_buddy_block_size(struct drm_buddy *mm,
+                     struct drm_buddy_block *block)
+{
+        return mm->chunk_size << drm_buddy_block_order(block);
+}
+
+int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size);
+
+void drm_buddy_fini(struct drm_buddy *mm);
+
+struct drm_buddy_block *
+drm_buddy_alloc_blocks(struct drm_buddy *mm, unsigned int order);
+
+int drm_buddy_alloc_range(struct drm_buddy *mm,
+                          struct list_head *blocks,
+                          u64 start, u64 size);
+
+void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
+
+void drm_buddy_free_list(struct drm_buddy *mm, struct list_head *objects);
+
+void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p);
+void drm_buddy_block_print(struct drm_buddy *mm,
+                           struct drm_buddy_block *block,
+                           struct drm_printer *p);
+
+#endif
+12 -6
include/drm/drm_connector.h
···
         enum subpixel_order subpixel_order;
 
 #define DRM_COLOR_FORMAT_RGB444 (1<<0)
-#define DRM_COLOR_FORMAT_YCRCB444 (1<<1)
-#define DRM_COLOR_FORMAT_YCRCB422 (1<<2)
-#define DRM_COLOR_FORMAT_YCRCB420 (1<<3)
+#define DRM_COLOR_FORMAT_YCBCR444 (1<<1)
+#define DRM_COLOR_FORMAT_YCBCR422 (1<<2)
+#define DRM_COLOR_FORMAT_YCBCR420 (1<<3)
 
         /**
          * @panel_orientation: Read only connector property for built-in panels,
···
         bool rgb_quant_range_selectable;
 
         /**
-         * @edid_hdmi_dc_modes: Mask of supported hdmi deep color modes. Even
-         * more stuff redundant with @bus_formats.
+         * @edid_hdmi_rgb444_dc_modes: Mask of supported hdmi deep color modes
+         * in RGB 4:4:4. Even more stuff redundant with @bus_formats.
          */
-        u8 edid_hdmi_dc_modes;
+        u8 edid_hdmi_rgb444_dc_modes;
+
+        /**
+         * @edid_hdmi_ycbcr444_dc_modes: Mask of supported hdmi deep color
+         * modes in YCbCr 4:4:4. Even more stuff redundant with @bus_formats.
+         */
+        u8 edid_hdmi_ycbcr444_dc_modes;
 
         /**
          * @cea_rev: CEA revision of the HDMI sink.
+10
include/drm/drm_crtc.h
···
          * Lookup table for converting pixel data after the color conversion
          * matrix @ctm. See drm_crtc_enable_color_mgmt(). The blob (if not
          * NULL) is an array of &struct drm_color_lut.
+         *
+         * Note that for mostly historical reasons stemming from Xorg heritage,
+         * this is also used to store the color map (also sometimes color lut,
+         * CLUT or color palette) for indexed formats like DRM_FORMAT_C8.
          */
         struct drm_property_blob *gamma_lut;
 
···
         /**
          * @gamma_size: Size of legacy gamma ramp reported to userspace. Set up
          * by calling drm_mode_crtc_set_gamma_size().
+         *
+         * Note that atomic drivers need to instead use
+         * &drm_crtc_state.gamma_lut. See drm_crtc_enable_color_mgmt().
          */
         uint32_t gamma_size;
 
         /**
          * @gamma_store: Gamma ramp values used by the legacy SETGAMMA and
          * GETGAMMA IOCTls. Set up by calling drm_mode_crtc_set_gamma_size().
+         *
+         * Note that atomic drivers need to instead use
+         * &drm_crtc_state.gamma_lut. See drm_crtc_enable_color_mgmt().
          */
         uint16_t *gamma_store;
 
include/drm/drm_dp_aux_bus.h → include/drm/dp/drm_dp_aux_bus.h
include/drm/drm_dp_dual_mode_helper.h → include/drm/dp/drm_dp_dual_mode_helper.h
+2 -5
include/drm/drm_dp_helper.h → include/drm/dp/drm_dp_helper.h
···
 #define DP_SIDEBAND_MSG_UP_REQ_BASE 0x1600 /* 1.2 MST */
 
 /* DPRX Event Status Indicator */
-#define DP_SINK_COUNT_ESI 0x2002 /* 1.2 */
-/* 0-5 sink count */
-# define DP_SINK_COUNT_CP_READY (1 << 6)
-
-#define DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0 0x2003 /* 1.2 */
+#define DP_SINK_COUNT_ESI 0x2002 /* same as 0x200 */
+#define DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0 0x2003 /* same as 0x201 */
 
 #define DP_DEVICE_SERVICE_IRQ_VECTOR_ESI1 0x2004 /* 1.2 */
 # define DP_RX_GTC_MSTR_REQ_STATUS_CHANGE (1 << 0)
+1 -1
include/drm/drm_dp_mst_helper.h → include/drm/dp/drm_dp_mst_helper.h
···
 #define _DRM_DP_MST_HELPER_H_
 
 #include <linux/types.h>
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 #include <drm/drm_atomic.h>
 
 #if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
+1 -1
include/drm/drm_dsc.h
···
 #ifndef DRM_DSC_H_
 #define DRM_DSC_H_
 
-#include <drm/drm_dp_helper.h>
+#include <drm/dp/drm_dp_helper.h>
 
 /* VESA Display Stream Compression DSC 1.2 constants */
 #define DSC_NUM_BUF_RANGES 15
+2 -2
include/drm/drm_edid.h
···
                                    const struct drm_display_mode *mode);
 
 void
-drm_hdmi_avi_infoframe_colorspace(struct hdmi_avi_infoframe *frame,
-                                  const struct drm_connector_state *conn_state);
+drm_hdmi_avi_infoframe_colorimetry(struct hdmi_avi_infoframe *frame,
+                                   const struct drm_connector_state *conn_state);
 
 void
 drm_hdmi_avi_infoframe_bars(struct hdmi_avi_infoframe *frame,
+1 -1
include/drm/drm_mipi_dbi.h
···
 #ifdef CONFIG_DEBUG_FS
 void mipi_dbi_debugfs_init(struct drm_minor *minor);
 #else
-#define mipi_dbi_debugfs_init NULL
+static inline void mipi_dbi_debugfs_init(struct drm_minor *minor) {}
 #endif
 
 #endif /* __LINUX_MIPI_DBI_H */
+1
include/drm/drm_modeset_lock.h
···
  * struct drm_modeset_acquire_ctx - locking context (see ww_acquire_ctx)
  * @ww_ctx: base acquire ctx
  * @contended: used internally for -EDEADLK handling
+ * @stack_depot: used internally for contention debugging
  * @locked: list of held locks
  * @trylock_only: trylock mode used in atomic contexts/panic notifiers
  * @interruptible: whether interruptible locking should be used.
+125
include/drm/drm_module.h
···
+/* SPDX-License-Identifier: MIT */
+
+#ifndef DRM_MODULE_H
+#define DRM_MODULE_H
+
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+
+#include <drm/drm_drv.h>
+
+/**
+ * DOC: overview
+ *
+ * This library provides helpers registering DRM drivers during module
+ * initialization and shutdown. The provided helpers act like bus-specific
+ * module helpers, such as module_pci_driver(), but respect additional
+ * parameters that control DRM driver registration.
+ *
+ * Below is an example of initializing a DRM driver for a device on the
+ * PCI bus.
+ *
+ * .. code-block:: c
+ *
+ *      struct pci_driver my_pci_drv = {
+ *      };
+ *
+ *      drm_module_pci_driver(my_pci_drv);
+ *
+ * The generated code will test if DRM drivers are enabled and register
+ * the PCI driver my_pci_drv. For more complex module initialization, you
+ * can still use module_init() and module_exit() in your driver.
+ */
+
+/*
+ * PCI drivers
+ */
+
+static inline int __init drm_pci_register_driver(struct pci_driver *pci_drv)
+{
+        if (drm_firmware_drivers_only())
+                return -ENODEV;
+
+        return pci_register_driver(pci_drv);
+}
+
+/**
+ * drm_module_pci_driver - Register a DRM driver for PCI-based devices
+ * @__pci_drv: the PCI driver structure
+ *
+ * Registers a DRM driver for devices on the PCI bus. The helper
+ * macro behaves like module_pci_driver() but tests the state of
+ * drm_firmware_drivers_only(). For more complex module initialization,
+ * use module_init() and module_exit() directly.
+ *
+ * Each module may only use this macro once. Calling it replaces
+ * module_init() and module_exit().
+ */
+#define drm_module_pci_driver(__pci_drv) \
+        module_driver(__pci_drv, drm_pci_register_driver, pci_unregister_driver)
+
+static inline int __init
+drm_pci_register_driver_if_modeset(struct pci_driver *pci_drv, int modeset)
+{
+        if (drm_firmware_drivers_only() && modeset == -1)
+                return -ENODEV;
+        if (modeset == 0)
+                return -ENODEV;
+
+        return pci_register_driver(pci_drv);
+}
+
+static inline void __exit
+drm_pci_unregister_driver_if_modeset(struct pci_driver *pci_drv, int modeset)
+{
+        pci_unregister_driver(pci_drv);
+}
+
+/**
+ * drm_module_pci_driver_if_modeset - Register a DRM driver for PCI-based devices
+ * @__pci_drv: the PCI driver structure
+ * @__modeset: an additional parameter that disables the driver
+ *
+ * This macro is deprecated and only provided for existing drivers. For
+ * new drivers, use drm_module_pci_driver().
+ *
+ * Registers a DRM driver for devices on the PCI bus. The helper macro
+ * behaves like drm_module_pci_driver() with an additional driver-specific
+ * flag. If __modeset is 0, the driver has been disabled, if __modeset is
+ * -1 the driver state depends on the global DRM state. For all other
+ * values, the PCI driver has been enabled. The default should be -1.
+ */
+#define drm_module_pci_driver_if_modeset(__pci_drv, __modeset) \
+        module_driver(__pci_drv, drm_pci_register_driver_if_modeset, \
+                      drm_pci_unregister_driver_if_modeset, __modeset)
+
+/*
+ * Platform drivers
+ */
+
+static inline int __init
+drm_platform_driver_register(struct platform_driver *platform_drv)
+{
+        if (drm_firmware_drivers_only())
+                return -ENODEV;
+
+        return platform_driver_register(platform_drv);
+}
+
+/**
+ * drm_module_platform_driver - Register a DRM driver for platform devices
+ * @__platform_drv: the platform driver structure
+ *
+ * Registers a DRM driver for devices on the platform bus. The helper
+ * macro behaves like module_platform_driver() but tests the state of
+ * drm_firmware_drivers_only(). For more complex module initialization,
+ * use module_init() and module_exit() directly.
+ *
+ * Each module may only use this macro once. Calling it replaces
+ * module_init() and module_exit().
+ */
+#define drm_module_platform_driver(__platform_drv) \
+        module_driver(__platform_drv, drm_platform_driver_register, \
+                      platform_driver_unregister)
+
+#endif
+1 -1
include/drm/drm_plane.h
··· 516 516 * This optional hook is used for the DRM to determine if the given 517 517 * format/modifier combination is valid for the plane. This allows the 518 518 * DRM to generate the correct format bitmask (which formats apply to 519 - * which modifier), and to valdiate modifiers at atomic_check time. 519 + * which modifier), and to validate modifiers at atomic_check time. 520 520 * 521 521 * If not present, then any modifier in the plane's modifier 522 522 * list is allowed with any of the plane's formats.
+12 -1
include/drm/drm_privacy_screen_driver.h
··· 73 73 * for more info. 74 74 */ 75 75 enum drm_privacy_screen_status hw_state; 76 + /** 77 + * @drvdata: Private data owned by the privacy screen provider 78 + */ 79 + void *drvdata; 76 80 }; 77 81 82 + static inline 83 + void *drm_privacy_screen_get_drvdata(struct drm_privacy_screen *priv) 84 + { 85 + return priv->drvdata; 86 + } 87 + 78 88 struct drm_privacy_screen *drm_privacy_screen_register( 79 - struct device *parent, const struct drm_privacy_screen_ops *ops); 89 + struct device *parent, const struct drm_privacy_screen_ops *ops, 90 + void *data); 80 91 void drm_privacy_screen_unregister(struct drm_privacy_screen *priv); 81 92 82 93 void drm_privacy_screen_call_notifier_chain(struct drm_privacy_screen *priv);
+16 -7
include/drm/ttm/ttm_resource.h
··· 105 105 * @use_type: The memory type is enabled. 106 106 * @use_tt: If a TT object should be used for the backing store. 107 107 * @size: Size of the managed region. 108 + * @bdev: ttm device this manager belongs to 108 109 * @func: structure pointer implementing the range manager. See above 109 110 * @move_lock: lock for move fence 110 - * static information. bdev::driver::io_mem_free is never used. 111 - * @lru: The lru list for this memory type. 112 111 * @move: The fence of the last pipelined move operation. 112 + * @lru: The lru list for this memory type. 113 113 * 114 114 * This structure is used to identify and manage memory types for a device. 115 115 */ ··· 119 119 */ 120 120 bool use_type; 121 121 bool use_tt; 122 + struct ttm_device *bdev; 122 123 uint64_t size; 123 124 const struct ttm_resource_manager_func *func; 124 125 spinlock_t move_lock; 126 + 127 + /* 128 + * Protected by @move_lock. 129 + */ 130 + struct dma_fence *move; 125 131 126 132 /* 127 133 * Protected by the global->lru_lock. 128 134 */ 129 135 130 136 struct list_head lru[TTM_MAX_BO_PRIORITY]; 131 - 132 - /* 133 - * Protected by @move_lock. 134 - */ 135 - struct dma_fence *move; 136 137 }; 137 138 138 139 /** ··· 161 160 * @mem_type: Resource type of the allocation. 162 161 * @placement: Placement flags. 163 162 * @bus: Placement on io bus accessible to the CPU 163 + * @bo: weak reference to the BO, protected by ttm_device::lru_lock 164 164 * 165 165 * Structure indicating the placement and space resources used by a 166 166 * buffer object. 
··· 172 170 uint32_t mem_type; 173 171 uint32_t placement; 174 172 struct ttm_bus_placement bus; 173 + struct ttm_buffer_object *bo; 175 174 }; 176 175 177 176 /** ··· 264 261 void ttm_resource_init(struct ttm_buffer_object *bo, 265 262 const struct ttm_place *place, 266 263 struct ttm_resource *res); 264 + void ttm_resource_fini(struct ttm_resource_manager *man, 265 + struct ttm_resource *res); 266 + 267 267 int ttm_resource_alloc(struct ttm_buffer_object *bo, 268 268 const struct ttm_place *place, 269 269 struct ttm_resource **res); 270 270 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res); 271 271 bool ttm_resource_compat(struct ttm_resource *res, 272 272 struct ttm_placement *placement); 273 + void ttm_resource_set_bo(struct ttm_resource *res, 274 + struct ttm_buffer_object *bo); 273 275 274 276 void ttm_resource_manager_init(struct ttm_resource_manager *man, 277 + struct ttm_device *bdev, 275 278 unsigned long p_size); 276 279 277 280 int ttm_resource_manager_evict_all(struct ttm_device *bdev,
+2 -2
include/linux/dma-buf-map.h
··· 52 52 * 53 53 * struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(0xdeadbeaf); 54 54 * 55 - * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); 55 + * dma_buf_map_set_vaddr(&map, 0xdeadbeaf); 56 56 * 57 57 * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). 58 58 * 59 59 * .. code-block:: c 60 60 * 61 - * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); 61 + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf); 62 62 * 63 63 * Instances of struct dma_buf_map do not have to be cleaned up, but 64 64 * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+2 -2
include/linux/dma-resv.h
··· 458 458 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences); 459 459 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence); 460 460 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); 461 - int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **pfence_excl, 462 - unsigned *pshared_count, struct dma_fence ***pshared); 461 + int dma_resv_get_fences(struct dma_resv *obj, bool write, 462 + unsigned int *num_fences, struct dma_fence ***fences); 463 463 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); 464 464 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, 465 465 unsigned long timeout);
+1
include/linux/fb.h
··· 502 502 } *apertures; 503 503 504 504 bool skip_vt_switch; /* no VT switch on suspend/resume required */ 505 + bool forced_out; /* set when being removed by another driver */ 505 506 }; 506 507 507 508 static inline struct apertures_struct *alloc_apertures(unsigned int max_num) {
+1 -1
include/linux/rwsem.h
··· 230 230 do { \ 231 231 typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ 232 232 _down_write_nest_lock(sem, &(nest_lock)->dep_map); \ 233 - } while (0); 233 + } while (0) 234 234 235 235 /* 236 236 * Take/release a lock when not the owner will release it.
+1
include/soc/bcm2835/raspberrypi-firmware.h
··· 91 91 RPI_FIRMWARE_GET_POE_HAT_VAL = 0x00030049, 92 92 RPI_FIRMWARE_SET_POE_HAT_VAL = 0x00030050, 93 93 RPI_FIRMWARE_NOTIFY_XHCI_RESET = 0x00030058, 94 + RPI_FIRMWARE_NOTIFY_DISPLAY_DONE = 0x00030066, 94 95 95 96 /* Dispmanx TAGS */ 96 97 RPI_FIRMWARE_FRAMEBUFFER_ALLOCATE = 0x00040001,
+2 -2
include/uapi/drm/panfrost_drm.h
··· 84 84 __s64 timeout_ns; /* absolute */ 85 85 }; 86 86 87 + /* Valid flags to pass to drm_panfrost_create_bo */ 87 88 #define PANFROST_BO_NOEXEC 1 88 89 #define PANFROST_BO_HEAP 2 89 90 90 91 /** 91 92 * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs. 92 93 * 93 - * There are currently no values for the flags argument, but it may be 94 - * used in a future extension. 94 + * The flags argument is a bit mask of PANFROST_BO_* flags. 95 95 */ 96 96 struct drm_panfrost_create_bo { 97 97 __u32 size;