Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2022-04-07' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.19:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:
- atomic: Add atomic_print_state to private objects
- edid: Constify and rework the EDID parsing API
- dma-buf: Add dma_resv_replace_fences, dma_resv_get_singleton, make
dma_resv_excl_fence private
- format: Support monochrome formats
- fbdev: fixes for cfb_imageblit and sys_imageblit, pagelist
corruption fix
- selftests: several small fixes
- ttm: Rework bulk move handling

Driver Changes:
- Switch all relevant drivers to drm_mode_copy or drm_mode_duplicate
- bridge: conversions to devm_drm_of_get_bridge and panel_bridge,
autosuspend for analogix_dp, audio support for it66121, DSI to DPI
support for tc358767, PLL fixes and I2C support for icn6211
- bridge_connector: Enable HPD if supported
- etnaviv: fencing improvements
- gma500: GEM and GTT improvements, connector handling fixes
- komeda: switch to plane reset helper
- mediatek: MIPI DSI improvements
- omapdrm: GEM improvements
- panel: DT bindings fixes for st7735r, a few fixes for ssd130x, new
  panels: ltk035c5444t, B133UAN01, NV3052C
- qxl: Allow running on arm64
- sysfb: Kconfig rework, support for VESA graphic mode selection
- vc4: Add a tracepoint for CL submissions, HDMI YUV output,
HDMI and clock improvements
- virtio: Remove restriction of non-zero blob_flags
- vmwgfx: support for CursorMob and CursorBypass 4, various
improvements and small fixes

[airlied: fixup conflict with newvision panel callbacks]
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20220407085940.pnflvjojs4qw4b77@houat

+5795 -2790
+3
Documentation/devicetree/bindings/display/bridge/ite,it66121.yaml
···
   interrupts:
     maxItems: 1

+  "#sound-dai-cells":
+    const: 0
+
   ports:
     $ref: /schemas/graph.yaml#/properties/ports

+19 -3
Documentation/devicetree/bindings/display/bridge/toshiba,tc358767.yaml
···

   properties:
     port@0:
-      $ref: /schemas/graph.yaml#/properties/port
+      $ref: /schemas/graph.yaml#/$defs/port-base
+      unevaluatedProperties: false
       description: |
         DSI input port. The remote endpoint phandle should be a
         reference to a valid DSI output endpoint node

+      properties:
+        endpoint:
+          $ref: /schemas/media/video-interfaces.yaml#
+          unevaluatedProperties: false
+
+          properties:
+            data-lanes:
+              description: array of physical DSI data lane indexes.
+              minItems: 1
+              items:
+                - const: 1
+                - const: 2
+                - const: 3
+                - const: 4
+
     port@1:
       $ref: /schemas/graph.yaml#/properties/port
       description: |
-        DPI input port. The remote endpoint phandle should be a
-        reference to a valid DPI output endpoint node
+        DPI input/output port. The remote endpoint phandle should be a
+        reference to a valid DPI output or input endpoint node.

     port@2:
       $ref: /schemas/graph.yaml#/properties/port
+59
Documentation/devicetree/bindings/display/panel/leadtek,ltk035c5444t.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/panel/leadtek,ltk035c5444t.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Leadtek ltk035c5444t 3.5" (640x480 pixels) 24-bit IPS LCD panel
+
+maintainers:
+  - Paul Cercueil <paul@crapouillou.net>
+  - Christophe Branchereau <cbranchereau@gmail.com>
+
+allOf:
+  - $ref: panel-common.yaml#
+  - $ref: /schemas/spi/spi-peripheral-props.yaml#
+
+properties:
+  compatible:
+    const: leadtek,ltk035c5444t
+
+  backlight: true
+  port: true
+  power-supply: true
+  reg: true
+  reset-gpios: true
+
+required:
+  - compatible
+  - power-supply
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+
+    spi {
+        #address-cells = <1>;
+        #size-cells = <0>;
+        panel@0 {
+            compatible = "leadtek,ltk035c5444t";
+            reg = <0>;
+
+            spi-3wire;
+            spi-max-frequency = <3125000>;
+
+            reset-gpios = <&gpe 2 GPIO_ACTIVE_LOW>;
+
+            backlight = <&backlight>;
+            power-supply = <&vcc>;
+
+            port {
+                panel_input: endpoint {
+                    remote-endpoint = <&panel_output>;
+                };
+            };
+        };
+    };
+2 -4
Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
···
           - okaya,rh128128t
       - const: sitronix,st7715r

-  spi-max-frequency:
-    maximum: 32000000
-
   dc-gpios:
     maxItems: 1
     description: Display data/command selection (D/CX)

   backlight: true
   reg: true
+  spi-max-frequency: true
   reset-gpios: true
   rotation: true
···
   - compatible
   - reg
   - dc-gpios
-  - reset-gpios

 additionalProperties: false
···
         dc-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>;
         reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>;
         rotation = <270>;
+        backlight = <&backlight>;
     };
 };

+9
Documentation/gpu/drm-mm.rst
···
 .. kernel-doc:: drivers/gpu/drm/drm_mm.c
    :export:

+DRM Buddy Allocator
+===================
+
+DRM Buddy Function References
+-----------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
+   :export:
+
 DRM Cache Handling and Fast WC memcpy()
 =======================================

+3 -1
Documentation/gpu/drm-uapi.rst
···
 If a driver advertises render node support, DRM core will create a
 separate render node called renderD<num>. There will be one render node
 per device. No ioctls except PRIME-related ioctls will be allowed on
-this node. Especially GEM_OPEN will be explicitly prohibited. Render
+this node. Especially GEM_OPEN will be explicitly prohibited. For a
+complete list of driver-independent ioctls that can be used on render
+nodes, see the ioctls marked DRM_RENDER_ALLOW in drm_ioctl.c. Render
 nodes are designed to avoid the buffer-leaks, which occur if clients
 guess the flink names or mmap offsets on the legacy interface.
 Additionally to this basic interface, drivers must mark their
+11
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/display/panel/olimex,lcd-olinuxino.yaml
 F:	drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c

+DRM DRIVER FOR PARADE PS8640 BRIDGE CHIP
+R:	Douglas Anderson <dianders@chromium.org>
+F:	Documentation/devicetree/bindings/display/bridge/ps8640.yaml
+F:	drivers/gpu/drm/bridge/parade-ps8640.c
+
 DRM DRIVER FOR PERVASIVE DISPLAYS REPAPER PANELS
 M:	Noralf Trønnes <noralf@tronnes.org>
 S:	Maintained
···
 DRM DRIVER FOR TDFX VIDEO CARDS
 S:	Orphan / Obsolete
 F:	drivers/gpu/drm/tdfx/
+
+DRM DRIVER FOR TI SN65DSI86 BRIDGE CHIP
+R:	Douglas Anderson <dianders@chromium.org>
+F:	Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
+F:	drivers/gpu/drm/bridge/ti-sn65dsi86.c

 DRM DRIVER FOR TPO TPG110 PANELS
 M:	Linus Walleij <linus.walleij@linaro.org>
···
 R:	Jernej Skrabec <jernej.skrabec@gmail.com>
 S:	Maintained
 T:	git git://anongit.freedesktop.org/drm/drm-misc
+F:	Documentation/devicetree/bindings/display/bridge/
 F:	drivers/gpu/drm/bridge/

 DRM DRIVERS FOR EXYNOS
+6
arch/x86/Kconfig
···

	  If unsure, say Y.

+config BOOT_VESA_SUPPORT
+	bool
+	help
+	  If true, at least one selected framebuffer driver can take advantage
+	  of VESA video modes set at an early boot stage via the vga= parameter.
+
 config MAXSMP
	bool "Enable Maximum number of SMP Processors and NUMA Nodes"
	depends on X86_64 && SMP && DEBUG_KERNEL
+2 -2
arch/x86/boot/video-vesa.c
···
		   (vminfo.memory_layout == 4 ||
		    vminfo.memory_layout == 6) &&
		   vminfo.memory_planes == 1) {
-#ifdef CONFIG_FB_BOOT_VESA_SUPPORT
+#ifdef CONFIG_BOOT_VESA_SUPPORT
			/* Graphics mode, color, linear frame buffer
			   supported.  Only register the mode if
			   if framebuffer is configured, however,
···
		if ((vminfo.mode_attr & 0x15) == 0x05) {
			/* It's a supported text mode */
			is_graphic = 0;
-#ifdef CONFIG_FB_BOOT_VESA_SUPPORT
+#ifdef CONFIG_BOOT_VESA_SUPPORT
		} else if ((vminfo.mode_attr & 0x99) == 0x99) {
			/* It's a graphics mode with linear frame buffer */
			is_graphic = 1;
+1 -1
drivers/dma-buf/dma-buf.c
···
  *   as a file descriptor by calling dma_buf_fd().
  *
  * 2. Userspace passes this file-descriptors to all drivers it wants this buffer
- *    to share with: First the filedescriptor is converted to a &dma_buf using
+ *    to share with: First the file descriptor is converted to a &dma_buf using
  *    dma_buf_get(). Then the buffer is attached to the device using
  *    dma_buf_attach().
  *
+122 -20
drivers/dma-buf/dma-resv.c
···
  */

 #include <linux/dma-resv.h>
+#include <linux/dma-fence-array.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
···

 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
+
+struct dma_resv_list {
+	struct rcu_head rcu;
+	u32 shared_count, shared_max;
+	struct dma_fence __rcu *shared[];
+};

 /**
  * dma_resv_list_alloc - allocate fence list
···
 }
 EXPORT_SYMBOL(dma_resv_fini);

+static inline struct dma_fence *
+dma_resv_excl_fence(struct dma_resv *obj)
+{
+	return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj));
+}
+
+static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj)
+{
+	return rcu_dereference_check(obj->fence, dma_resv_held(obj));
+}
+
 /**
- * dma_resv_reserve_shared - Reserve space to add shared fences to
+ * dma_resv_reserve_fences - Reserve space to add shared fences to
  * a dma_resv.
  * @obj: reservation object
  * @num_fences: number of fences we want to add
···
  * RETURNS
  * Zero for success, or -errno
  */
-int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)
+int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences)
 {
	struct dma_resv_list *old, *new;
	unsigned int i, j, k, max;
···

	return 0;
 }
-EXPORT_SYMBOL(dma_resv_reserve_shared);
+EXPORT_SYMBOL(dma_resv_reserve_fences);

 #ifdef CONFIG_DEBUG_MUTEXES
 /**
···
  * @obj: the dma_resv object to reset
  *
  * Reset the number of pre-reserved shared slots to test that drivers do
- * correct slot allocation using dma_resv_reserve_shared(). See also
+ * correct slot allocation using dma_resv_reserve_fences(). See also
  * &dma_resv_list.shared_max.
  */
 void dma_resv_reset_shared_max(struct dma_resv *obj)
···
  * @fence: the shared fence to add
  *
  * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
- * dma_resv_reserve_shared() has been called.
+ * dma_resv_reserve_fences() has been called.
  *
  * See also &dma_resv.fence for a discussion of the semantics.
  */
···
 }
 EXPORT_SYMBOL(dma_resv_add_shared_fence);

 /**
+ * dma_resv_replace_fences - replace fences in the dma_resv obj
+ * @obj: the reservation object
+ * @context: the context of the fences to replace
+ * @replacement: the new fence to use instead
+ *
+ * Replace fences with a specified context with a new fence. Only valid if the
+ * operation represented by the original fence has no longer access to the
+ * resources represented by the dma_resv object when the new fence completes.
+ *
+ * An example for using this is replacing a preemption fence with a page table
+ * update fence which makes the resource inaccessible.
+ */
+void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
+			     struct dma_fence *replacement)
+{
+	struct dma_resv_list *list;
+	struct dma_fence *old;
+	unsigned int i;
+
+	dma_resv_assert_held(obj);
+
+	write_seqcount_begin(&obj->seq);
+
+	old = dma_resv_excl_fence(obj);
+	if (old->context == context) {
+		RCU_INIT_POINTER(obj->fence_excl, dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	list = dma_resv_shared_list(obj);
+	for (i = 0; list && i < list->shared_count; ++i) {
+		old = rcu_dereference_protected(list->shared[i],
+						dma_resv_held(obj));
+		if (old->context != context)
+			continue;
+
+		rcu_assign_pointer(list->shared[i], dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	write_seqcount_end(&obj->seq);
+}
+EXPORT_SYMBOL(dma_resv_replace_fences);
+
+/**
  * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
  * @fence: the exclusive fence to add
  *
  * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
- * Note that this function replaces all fences attached to @obj, see also
- * &dma_resv.fence_excl for a discussion of the semantics.
+ * See also &dma_resv.fence_excl for a discussion of the semantics.
  */
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
	struct dma_fence *old_fence = dma_resv_excl_fence(obj);
-	struct dma_resv_list *old;
-	u32 i = 0;

	dma_resv_assert_held(obj);
-
-	old = dma_resv_shared_list(obj);
-	if (old)
-		i = old->shared_count;

	dma_fence_get(fence);

	write_seqcount_begin(&obj->seq);
	/* write_seqcount_begin provides the necessary memory barrier */
	RCU_INIT_POINTER(obj->fence_excl, fence);
-	if (old)
-		old->shared_count = 0;
	write_seqcount_end(&obj->seq);
-
-	/* inplace update, no shared fences */
-	while (i--)
-		dma_fence_put(rcu_dereference_protected(old->shared[i],
-							dma_resv_held(obj)));

	dma_fence_put(old_fence);
 }
···
	return 0;
 }
 EXPORT_SYMBOL_GPL(dma_resv_get_fences);
+
+/**
+ * dma_resv_get_singleton - Get a single fence for all the fences
+ * @obj: the reservation object
+ * @write: true if we should return all fences
+ * @fence: the resulting fence
+ *
+ * Get a single fence representing all the fences inside the resv object.
+ * Returns either 0 for success or -ENOMEM.
+ *
+ * Warning: This can't be used like this when adding the fence back to the resv
+ * object since that can lead to stack corruption when finalizing the
+ * dma_fence_array.
+ *
+ * Returns 0 on success and negative error values on failure.
+ */
+int dma_resv_get_singleton(struct dma_resv *obj, bool write,
+			   struct dma_fence **fence)
+{
+	struct dma_fence_array *array;
+	struct dma_fence **fences;
+	unsigned count;
+	int r;
+
+	r = dma_resv_get_fences(obj, write, &count, &fences);
+	if (r)
+		return r;
+
+	if (count == 0) {
+		*fence = NULL;
+		return 0;
+	}
+
+	if (count == 1) {
+		*fence = fences[0];
+		kfree(fences);
+		return 0;
+	}
+
+	array = dma_fence_array_create(count, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (!array) {
+		while (count--)
+			dma_fence_put(fences[count]);
+		kfree(fences);
+		return -ENOMEM;
+	}
+
+	*fence = &array->base;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_resv_get_singleton);

 /**
  * dma_resv_wait_timeout - Wait on reservation's objects
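The replace-by-context semantics that dma_resv_replace_fences() introduces above can be illustrated outside the kernel with a toy model (plain refcounted "fences" in a flat array; every name here is an illustrative userspace stand-in, not the kernel API, and the RCU/seqcount machinery is deliberately omitted):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for a fence: just a context id and a refcount. */
struct toy_fence {
	uint64_t context;
	int refcount;
};

static void toy_fence_get(struct toy_fence *f) { f->refcount++; }
static void toy_fence_put(struct toy_fence *f) { f->refcount--; }

/*
 * Mirror of the replace loop: every slot holding a fence from
 * @context is swapped for @replacement, taking a new reference on the
 * replacement and dropping the reference on the old fence. Slots from
 * other contexts are left untouched.
 */
static void toy_replace_fences(struct toy_fence **slots, size_t n,
			       uint64_t context,
			       struct toy_fence *replacement)
{
	for (size_t i = 0; i < n; i++) {
		if (slots[i]->context != context)
			continue;
		toy_fence_get(replacement);
		toy_fence_put(slots[i]);
		slots[i] = replacement;
	}
}
```

In the kernel the same swap additionally has to run under the object lock and publish each slot with RCU-safe stores, which is what the seqcount and rcu_assign_pointer() calls in the real function provide.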
+38 -42
drivers/dma-buf/st-dma-resv.c
···
		goto err_free;
	}

-	if (shared) {
-		r = dma_resv_reserve_shared(&resv, 1);
-		if (r) {
-			pr_err("Resv shared slot allocation failed\n");
-			goto err_unlock;
-		}
-
-		dma_resv_add_shared_fence(&resv, f);
-	} else {
-		dma_resv_add_excl_fence(&resv, f);
+	r = dma_resv_reserve_fences(&resv, 1);
+	if (r) {
+		pr_err("Resv shared slot allocation failed\n");
+		goto err_unlock;
	}
+
+	if (shared)
+		dma_resv_add_shared_fence(&resv, f);
+	else
+		dma_resv_add_excl_fence(&resv, f);

	if (dma_resv_test_signaled(&resv, shared)) {
		pr_err("Resv unexpectedly signaled\n");
···
		goto err_free;
	}

-	if (shared) {
-		r = dma_resv_reserve_shared(&resv, 1);
-		if (r) {
-			pr_err("Resv shared slot allocation failed\n");
-			goto err_unlock;
-		}
-
-		dma_resv_add_shared_fence(&resv, f);
-	} else {
-		dma_resv_add_excl_fence(&resv, f);
+	r = dma_resv_reserve_fences(&resv, 1);
+	if (r) {
+		pr_err("Resv shared slot allocation failed\n");
+		goto err_unlock;
	}
+
+	if (shared)
+		dma_resv_add_shared_fence(&resv, f);
+	else
+		dma_resv_add_excl_fence(&resv, f);

	r = -ENOENT;
	dma_resv_for_each_fence(&cursor, &resv, shared, fence) {
···
		goto err_free;
	}

-	if (shared) {
-		r = dma_resv_reserve_shared(&resv, 1);
-		if (r) {
-			pr_err("Resv shared slot allocation failed\n");
-			dma_resv_unlock(&resv);
-			goto err_free;
-		}
-
-		dma_resv_add_shared_fence(&resv, f);
-	} else {
-		dma_resv_add_excl_fence(&resv, f);
+	r = dma_resv_reserve_fences(&resv, 1);
+	if (r) {
+		pr_err("Resv shared slot allocation failed\n");
+		dma_resv_unlock(&resv);
+		goto err_free;
	}
+
+	if (shared)
+		dma_resv_add_shared_fence(&resv, f);
+	else
+		dma_resv_add_excl_fence(&resv, f);
	dma_resv_unlock(&resv);

	r = -ENOENT;
···
		goto err_resv;
	}

-	if (shared) {
-		r = dma_resv_reserve_shared(&resv, 1);
-		if (r) {
-			pr_err("Resv shared slot allocation failed\n");
-			dma_resv_unlock(&resv);
-			goto err_resv;
-		}
-
-		dma_resv_add_shared_fence(&resv, f);
-	} else {
-		dma_resv_add_excl_fence(&resv, f);
+	r = dma_resv_reserve_fences(&resv, 1);
+	if (r) {
+		pr_err("Resv shared slot allocation failed\n");
+		dma_resv_unlock(&resv);
+		goto err_resv;
	}
+
+	if (shared)
+		dma_resv_add_shared_fence(&resv, f);
+	else
+		dma_resv_add_excl_fence(&resv, f);
	dma_resv_unlock(&resv);

	r = dma_resv_get_fences(&resv, shared, &i, &fences);
+3 -3
drivers/firmware/Kconfig
···

 config SYSFB
	bool
-	default y
-	depends on X86 || EFI
+	select BOOT_VESA_SUPPORT

 config SYSFB_SIMPLEFB
	bool "Mark VGA/VBE/EFI FB as generic system framebuffer"
-	depends on SYSFB
+	depends on X86 || EFI
+	select SYSFB
	help
	  Firmwares often provide initial graphics framebuffers so the BIOS,
	  bootloader or kernel can show basic video-output during boot for
+9 -44
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
					struct amdgpu_amdkfd_fence *ef)
 {
-	struct dma_resv *resv = bo->tbo.base.resv;
-	struct dma_resv_list *old, *new;
-	unsigned int i, j, k;
+	struct dma_fence *replacement;

	if (!ef)
		return -EINVAL;

-	old = dma_resv_shared_list(resv);
-	if (!old)
-		return 0;
-
-	new = kmalloc(struct_size(new, shared, old->shared_max), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	/* Go through all the shared fences in the resevation object and sort
-	 * the interesting ones to the end of the list.
+	/* TODO: Instead of block before we should use the fence of the page
+	 * table update and TLB flush here directly.
	 */
-	for (i = 0, j = old->shared_count, k = 0; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(old->shared[i],
-					      dma_resv_held(resv));
-
-		if (f->context == ef->base.context)
-			RCU_INIT_POINTER(new->shared[--j], f);
-		else
-			RCU_INIT_POINTER(new->shared[k++], f);
-	}
-	new->shared_max = old->shared_max;
-	new->shared_count = k;
-
-	/* Install the new fence list, seqcount provides the barriers */
-	write_seqcount_begin(&resv->seq);
-	RCU_INIT_POINTER(resv->fence, new);
-	write_seqcount_end(&resv->seq);
-
-	/* Drop the references to the removed fences or move them to ef_list */
-	for (i = j; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(new->shared[i],
-					      dma_resv_held(resv));
-		dma_fence_put(f);
-	}
-	kfree_rcu(old, rcu);
-
+	replacement = dma_fence_get_stub();
+	dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
+				replacement);
+	dma_fence_put(replacement);
	return 0;
 }
···
					AMDGPU_FENCE_OWNER_KFD, false);
	if (ret)
		goto wait_pd_fail;
-	ret = dma_resv_reserve_shared(vm->root.bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_fences(vm->root.bo->tbo.base.resv, 1);
	if (ret)
		goto reserve_shared_fail;
	amdgpu_bo_fence(vm->root.bo,
···
	 * Add process eviction fence to bo so they can
	 * evict each other.
	 */
-	ret = dma_resv_reserve_shared(gws_bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_fences(gws_bo->tbo.base.resv, 1);
	if (ret)
		goto reserve_shared_fail;
	amdgpu_bo_fence(gws_bo, &process_info->eviction_fence->base, true);
+10 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
···
	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
		struct dma_resv *resv = e->tv.bo->base.resv;
		struct dma_fence_chain *chain = e->chain;
+		struct dma_resv_iter cursor;
+		struct dma_fence *fence;

		if (!chain)
			continue;

		/*
-		 * Work around dma_resv shortcomings by wrapping up the
-		 * submission in a dma_fence_chain and add it as exclusive
+		 * Temporary workaround dma_resv shortcommings by wrapping up
+		 * the submission in a dma_fence_chain and add it as exclusive
		 * fence.
+		 *
+		 * TODO: Remove together with dma_resv rework.
		 */
-		dma_fence_chain_init(chain, dma_resv_excl_fence(resv),
-				     dma_fence_get(p->fence), 1);
-
+		dma_resv_for_each_fence(&cursor, resv, false, fence) {
+			break;
+		}
+		dma_fence_chain_init(chain, fence, dma_fence_get(p->fence), 1);
		rcu_assign_pointer(resv->fence_excl, &chain->base);
		e->chain = NULL;
	}
+18 -31
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
···

 #include "amdgpu.h"

-struct amdgpu_gtt_node {
-	struct ttm_buffer_object *tbo;
-	struct ttm_range_mgr_node base;
-};
-
 static inline struct amdgpu_gtt_mgr *
 to_gtt_mgr(struct ttm_resource_manager *man)
 {
	return container_of(man, struct amdgpu_gtt_mgr, manager);
-}
-
-static inline struct amdgpu_gtt_node *
-to_amdgpu_gtt_node(struct ttm_resource *res)
-{
-	return container_of(res, struct amdgpu_gtt_node, base.base);
 }

 /**
···
  */
 bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *res)
 {
-	struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(res);
+	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);

-	return drm_mm_node_allocated(&node->base.mm_nodes[0]);
+	return drm_mm_node_allocated(&node->mm_nodes[0]);
 }

 /**
···
 {
	struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
	uint32_t num_pages = PFN_UP(tbo->base.size);
-	struct amdgpu_gtt_node *node;
+	struct ttm_range_mgr_node *node;
	int r;

-	node = kzalloc(struct_size(node, base.mm_nodes, 1), GFP_KERNEL);
+	node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
	if (!node)
		return -ENOMEM;

-	node->tbo = tbo;
-	ttm_resource_init(tbo, place, &node->base.base);
+	ttm_resource_init(tbo, place, &node->base);
	if (!(place->flags & TTM_PL_FLAG_TEMPORARY) &&
	    ttm_resource_manager_usage(man) > man->size) {
		r = -ENOSPC;
···

	if (place->lpfn) {
		spin_lock(&mgr->lock);
-		r = drm_mm_insert_node_in_range(&mgr->mm,
-						&node->base.mm_nodes[0],
+		r = drm_mm_insert_node_in_range(&mgr->mm, &node->mm_nodes[0],
						num_pages, tbo->page_alignment,
						0, place->fpfn, place->lpfn,
						DRM_MM_INSERT_BEST);
···
		if (unlikely(r))
			goto err_free;

-		node->base.base.start = node->base.mm_nodes[0].start;
+		node->base.start = node->mm_nodes[0].start;
	} else {
-		node->base.mm_nodes[0].start = 0;
-		node->base.mm_nodes[0].size = node->base.base.num_pages;
-		node->base.base.start = AMDGPU_BO_INVALID_OFFSET;
+		node->mm_nodes[0].start = 0;
+		node->mm_nodes[0].size = node->base.num_pages;
+		node->base.start = AMDGPU_BO_INVALID_OFFSET;
	}

-	*res = &node->base.base;
+	*res = &node->base;
	return 0;

 err_free:
-	ttm_resource_fini(man, &node->base.base);
+	ttm_resource_fini(man, &node->base);
	kfree(node);
	return r;
 }
···
 static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
			       struct ttm_resource *res)
 {
-	struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(res);
+	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
	struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);

	spin_lock(&mgr->lock);
-	if (drm_mm_node_allocated(&node->base.mm_nodes[0]))
-		drm_mm_remove_node(&node->base.mm_nodes[0]);
+	if (drm_mm_node_allocated(&node->mm_nodes[0]))
+		drm_mm_remove_node(&node->mm_nodes[0]);
	spin_unlock(&mgr->lock);

	ttm_resource_fini(man, res);
···
  */
 void amdgpu_gtt_mgr_recover(struct amdgpu_gtt_mgr *mgr)
 {
-	struct amdgpu_gtt_node *node;
+	struct ttm_range_mgr_node *node;
	struct drm_mm_node *mm_node;
	struct amdgpu_device *adev;

	adev = container_of(mgr, typeof(*adev), mman.gtt_mgr);
	spin_lock(&mgr->lock);
	drm_mm_for_each_node(mm_node, &mgr->mm) {
-		node = container_of(mm_node, typeof(*node), base.mm_nodes[0]);
-		amdgpu_ttm_recover_gart(node->tbo);
+		node = container_of(mm_node, typeof(*node), mm_nodes[0]);
+		amdgpu_ttm_recover_gart(node->base.bo);
	}
	spin_unlock(&mgr->lock);

+3 -20
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
···
 void amdgpu_pasid_free_delayed(struct dma_resv *resv,
			       u32 pasid)
 {
-	struct dma_fence *fence, **fences;
	struct amdgpu_pasid_cb *cb;
-	unsigned count;
+	struct dma_fence *fence;
	int r;

-	r = dma_resv_get_fences(resv, true, &count, &fences);
+	r = dma_resv_get_singleton(resv, true, &fence);
	if (r)
		goto fallback;

-	if (count == 0) {
+	if (!fence) {
		amdgpu_pasid_free(pasid);
		return;
-	}
-
-	if (count == 1) {
-		fence = fences[0];
-		kfree(fences);
-	} else {
-		uint64_t context = dma_fence_context_alloc(1);
-		struct dma_fence_array *array;
-
-		array = dma_fence_array_create(count, fences, context,
-					       1, false);
-		if (!array) {
-			kfree(fences);
-			goto fallback;
-		}
-		fence = &array->base;
	}

	cb = kmalloc(sizeof(*cb), GFP_KERNEL);
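The open-coded zero/one/many handling removed here is exactly what dma_resv_get_singleton() now centralizes. Its selection logic can be sketched as a userspace miniature (toy types only; the real function builds a dma_fence_array and handles refcounting and -ENOMEM):

```c
#include <stdlib.h>

struct toy_fence { int signaled; };

/* Aggregate standing in for a dma_fence_array over several fences. */
struct toy_fence_array {
	size_t count;
	struct toy_fence **fences;
};

/*
 * Mirror of the singleton selection: zero fences yields no fence at
 * all, exactly one fence is handed back directly, and several fences
 * are wrapped in one aggregate container. Returns 0 on success and
 * -1 on allocation failure.
 */
static int toy_get_singleton(struct toy_fence **fences, size_t count,
			     struct toy_fence **single,
			     struct toy_fence_array **aggregate)
{
	*single = NULL;
	*aggregate = NULL;

	if (count == 0)
		return 0;

	if (count == 1) {
		*single = fences[0];
		return 0;
	}

	*aggregate = malloc(sizeof(**aggregate));
	if (!*aggregate)
		return -1;
	(*aggregate)->count = count;
	(*aggregate)->fences = fences;
	return 0;
}
```

Callers such as amdgpu_pasid_free_delayed() then only need to distinguish "no fence" from "some fence", instead of re-implementing the merge themselves.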
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
···
		     bool shared)
 {
	struct dma_resv *resv = bo->tbo.base.resv;
+	int r;
+
+	r = dma_resv_reserve_fences(resv, 1);
+	if (r) {
+		/* As last resort on OOM we block for the fence */
+		dma_fence_wait(fence, false);
+		return;
+	}

	if (shared)
		dma_resv_add_shared_fence(resv, fence);
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···
	.io_mem_reserve = &amdgpu_ttm_io_mem_reserve,
	.io_mem_pfn = amdgpu_ttm_io_mem_pfn,
	.access_memory = &amdgpu_ttm_access_memory,
-	.del_from_lru_notify = &amdgpu_vm_del_from_lru_notify
 };

 /*
+12 -66
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···

	dma_resv_assert_held(vm->root.bo->tbo.base.resv);

-	vm->bulk_moveable = false;
+	ttm_bo_set_bulk_move(&bo->tbo, &vm->lru_bulk_move);
	if (bo->tbo.type == ttm_bo_type_kernel && bo->parent)
		amdgpu_vm_bo_relocated(base);
	else
···
 }

 /**
- * amdgpu_vm_del_from_lru_notify - update bulk_moveable flag
- *
- * @bo: BO which was removed from the LRU
- *
- * Make sure the bulk_moveable flag is updated when a BO is removed from the
- * LRU.
- */
-void amdgpu_vm_del_from_lru_notify(struct ttm_buffer_object *bo)
-{
-	struct amdgpu_bo *abo;
-	struct amdgpu_vm_bo_base *bo_base;
-
-	if (!amdgpu_bo_is_amdgpu_bo(bo))
-		return;
-
-	if (bo->pin_count)
-		return;
-
-	abo = ttm_to_amdgpu_bo(bo);
-	if (!abo->parent)
-		return;
-	for (bo_base = abo->vm_bo; bo_base; bo_base = bo_base->next) {
-		struct amdgpu_vm *vm = bo_base->vm;
-
-		if (abo->tbo.base.resv == vm->root.bo->tbo.base.resv)
-			vm->bulk_moveable = false;
-	}
-
-}
-/**
  * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
  *
  * @adev: amdgpu device pointer
···
 void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
				struct amdgpu_vm *vm)
 {
-	struct amdgpu_vm_bo_base *bo_base;
-
-	if (vm->bulk_moveable) {
-		spin_lock(&adev->mman.bdev.lru_lock);
-		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
-		spin_unlock(&adev->mman.bdev.lru_lock);
-		return;
-	}
-
-	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
-
	spin_lock(&adev->mman.bdev.lru_lock);
-	list_for_each_entry(bo_base, &vm->idle, vm_status) {
-		struct amdgpu_bo *bo = bo_base->bo;
-		struct amdgpu_bo *shadow = amdgpu_bo_shadowed(bo);
-
-		if (!bo->parent)
-			continue;
-
-		ttm_bo_move_to_lru_tail(&bo->tbo, bo->tbo.resource,
-					&vm->lru_bulk_move);
-		if (shadow)
-			ttm_bo_move_to_lru_tail(&shadow->tbo,
-						shadow->tbo.resource,
-						&vm->lru_bulk_move);
-	}
+	ttm_lru_bulk_move_tail(&vm->lru_bulk_move);
	spin_unlock(&adev->mman.bdev.lru_lock);
-
-	vm->bulk_moveable = true;
 }

 /**
···
 {
	struct amdgpu_vm_bo_base *bo_base, *tmp;
	int r;
-
-	vm->bulk_moveable &= list_empty(&vm->evicted);

	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
		struct amdgpu_bo *bo = bo_base->bo;
···

	if (!entry->bo)
		return;
+
	shadow = amdgpu_bo_shadowed(entry->bo);
+	if (shadow) {
+		ttm_bo_set_bulk_move(&shadow->tbo, NULL);
+		amdgpu_bo_unref(&shadow);
+	}
+
+	ttm_bo_set_bulk_move(&entry->bo->tbo, NULL);
	entry->bo->vm_bo = NULL;
	list_del(&entry->vm_status);
-	amdgpu_bo_unref(&shadow);
	amdgpu_bo_unref(&entry->bo);
 }
···
 {
	struct amdgpu_vm_pt_cursor cursor;
	struct amdgpu_vm_bo_base *entry;
-
-	vm->bulk_moveable = false;

	for_each_amdgpu_vm_pt_dfs_safe(adev, vm, start, cursor, entry)
		amdgpu_vm_free_table(entry);
···
	if (bo) {
		dma_resv_assert_held(bo->tbo.base.resv);
		if (bo->tbo.base.resv == vm->root.bo->tbo.base.resv)
-			vm->bulk_moveable = false;
+			ttm_bo_set_bulk_move(&bo->tbo, NULL);

		for (base = &bo_va->base.bo->vm_bo; *base;
		     base = &(*base)->next) {
···
	if (r)
		goto error_free_root;

-	r = dma_resv_reserve_shared(root_bo->tbo.base.resv, 1);
+	r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
	if (r)
		goto error_unreserve;

···
		value = 0;
	}

-	r = dma_resv_reserve_shared(root->tbo.base.resv, 1);
+	r = dma_resv_reserve_fences(root->tbo.base.resv, 1);
	if (r) {
		pr_debug("failed %d to reserve fence slot\n", r);
		goto error_unlock;
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 317 317 318 318 /* Store positions of group of BOs */ 319 319 struct ttm_lru_bulk_move lru_bulk_move; 320 - /* mark whether can do the bulk move */ 321 - bool bulk_moveable; 322 320 /* Flag to indicate if VM is used for compute */ 323 321 bool is_compute_context; 324 322 }; ··· 452 454 453 455 void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, 454 456 struct amdgpu_vm *vm); 455 - void amdgpu_vm_del_from_lru_notify(struct ttm_buffer_object *bo); 456 457 void amdgpu_vm_get_memory(struct amdgpu_vm *vm, uint64_t *vram_mem, 457 458 uint64_t *gtt_mem, uint64_t *cpu_mem); 458 459
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 548 548 goto reserve_bo_failed; 549 549 } 550 550 551 - r = dma_resv_reserve_shared(bo->tbo.base.resv, 1); 551 + r = dma_resv_reserve_fences(bo->tbo.base.resv, 1); 552 552 if (r) { 553 553 pr_debug("failed %d to reserve bo\n", r); 554 554 amdgpu_bo_unreserve(bo);
+2 -11
drivers/gpu/drm/arm/display/komeda/komeda_plane.c
··· 135 135 static void komeda_plane_reset(struct drm_plane *plane) 136 136 { 137 137 struct komeda_plane_state *state; 138 - struct komeda_plane *kplane = to_kplane(plane); 139 138 140 139 if (plane->state) 141 140 __drm_atomic_helper_plane_destroy_state(plane->state); ··· 143 144 plane->state = NULL; 144 145 145 146 state = kzalloc(sizeof(*state), GFP_KERNEL); 146 - if (state) { 147 - state->base.rotation = DRM_MODE_ROTATE_0; 148 - state->base.pixel_blend_mode = DRM_MODE_BLEND_PREMULTI; 149 - state->base.alpha = DRM_BLEND_ALPHA_OPAQUE; 150 - state->base.zpos = kplane->layer->base.id; 151 - state->base.color_encoding = DRM_COLOR_YCBCR_BT601; 152 - state->base.color_range = DRM_COLOR_YCBCR_LIMITED_RANGE; 153 - plane->state = &state->base; 154 - plane->state->plane = plane; 155 - } 147 + if (state) 148 + __drm_atomic_helper_plane_reset(plane, &state->base); 156 149 } 157 150 158 151 static struct drm_plane_state *
+2
drivers/gpu/drm/bridge/Kconfig
··· 77 77 config DRM_ITE_IT6505 78 78 tristate "ITE IT6505 DisplayPort bridge" 79 79 depends on OF 80 + select DRM_DP_HELPER 80 81 select DRM_KMS_HELPER 81 82 select EXTCON 82 83 help ··· 266 265 select DRM_DP_HELPER 267 266 select DRM_KMS_HELPER 268 267 select REGMAP_I2C 268 + select DRM_MIPI_DSI 269 269 select DRM_PANEL 270 270 help 271 271 Toshiba TC358767 eDP bridge chip driver.
+1
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
··· 1313 1313 adv7511_audio_exit(adv7511); 1314 1314 drm_bridge_remove(&adv7511->bridge); 1315 1315 err_unregister_cec: 1316 + cec_unregister_adapter(adv7511->cec_adap); 1316 1317 i2c_unregister_device(adv7511->i2c_cec); 1317 1318 clk_disable_unprepare(adv7511->cec_clk); 1318 1319 err_i2c_unregister_packet:
+17 -3
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 1119 1119 return 0; 1120 1120 } 1121 1121 1122 - pm_runtime_get_sync(dp->dev); 1123 1122 edid = drm_get_edid(connector, &dp->aux.ddc); 1124 - pm_runtime_put(dp->dev); 1125 1123 if (edid) { 1126 1124 drm_connector_update_edid_property(&dp->connector, 1127 1125 edid); ··· 1630 1632 struct drm_dp_aux_msg *msg) 1631 1633 { 1632 1634 struct analogix_dp_device *dp = to_dp(aux); 1635 + int ret; 1633 1636 1634 - return analogix_dp_transfer(dp, msg); 1637 + pm_runtime_get_sync(dp->dev); 1638 + 1639 + ret = analogix_dp_detect_hpd(dp); 1640 + if (ret) 1641 + goto out; 1642 + 1643 + ret = analogix_dp_transfer(dp, msg); 1644 + out: 1645 + pm_runtime_mark_last_busy(dp->dev); 1646 + pm_runtime_put_autosuspend(dp->dev); 1647 + 1648 + return ret; 1635 1649 } 1636 1650 1637 1651 struct analogix_dp_device * ··· 1774 1764 if (ret) 1775 1765 return ret; 1776 1766 1767 + pm_runtime_use_autosuspend(dp->dev); 1768 + pm_runtime_set_autosuspend_delay(dp->dev, 100); 1777 1769 pm_runtime_enable(dp->dev); 1778 1770 1779 1771 ret = analogix_dp_create_bridge(drm_dev, dp); ··· 1787 1775 return 0; 1788 1776 1789 1777 err_disable_pm_runtime: 1778 + pm_runtime_dont_use_autosuspend(dp->dev); 1790 1779 pm_runtime_disable(dp->dev); 1791 1780 drm_dp_aux_unregister(&dp->aux); 1792 1781 ··· 1806 1793 } 1807 1794 1808 1795 drm_dp_aux_unregister(&dp->aux); 1796 + pm_runtime_dont_use_autosuspend(dp->dev); 1809 1797 pm_runtime_disable(dp->dev); 1810 1798 } 1811 1799 EXPORT_SYMBOL_GPL(analogix_dp_unbind);
+21 -15
drivers/gpu/drm/bridge/analogix/anx7625.c
··· 874 874 } 875 875 876 876 /* Read downstream capability */ 877 - anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap); 877 + ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap); 878 + if (ret < 0) 879 + return ret; 880 + 878 881 if (!(bcap & 0x01)) { 879 882 pr_warn("downstream not support HDCP 1.4, cap(%x).\n", bcap); 880 883 return 0; ··· 924 921 { 925 922 int ret; 926 923 struct device *dev = &ctx->client->dev; 924 + u8 data; 927 925 928 926 if (!ctx->display_timing_valid) { 929 927 DRM_DEV_ERROR(dev, "mipi not set display timing yet.\n"); 930 928 return; 931 929 } 930 + 931 + dev_dbg(dev, "set downstream sink into normal\n"); 932 + /* Downstream sink enter into normal mode */ 933 + data = 1; 934 + ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, 0x000600, 1, &data); 935 + if (ret < 0) 936 + dev_err(dev, "IO error : set sink into normal mode fail\n"); 932 937 933 938 /* Disable HDCP */ 934 939 anx7625_write_and(ctx, ctx->i2c.rx_p1_client, 0xee, 0x9f); ··· 1619 1608 struct anx7625_platform_data *pdata) 1620 1609 { 1621 1610 struct device_node *np = dev->of_node, *ep0; 1622 - struct drm_panel *panel; 1623 - int ret; 1624 1611 int bus_type, mipi_lanes; 1625 1612 1626 1613 anx7625_get_swing_setting(dev, pdata); ··· 1655 1646 if (of_property_read_bool(np, "analogix,audio-enable")) 1656 1647 pdata->audio_en = 1; 1657 1648 1658 - ret = drm_of_find_panel_or_bridge(np, 1, 0, &panel, NULL); 1659 - if (ret < 0) { 1660 - if (ret == -ENODEV) 1649 + pdata->panel_bridge = devm_drm_of_get_bridge(dev, np, 1, 0); 1650 + if (IS_ERR(pdata->panel_bridge)) { 1651 + if (PTR_ERR(pdata->panel_bridge) == -ENODEV) 1661 1652 return 0; 1662 - return ret; 1663 - } 1664 - if (!panel) 1665 - return -ENODEV; 1666 1653 1667 - pdata->panel_bridge = devm_drm_panel_bridge_add(dev, panel); 1668 - if (IS_ERR(pdata->panel_bridge)) 1669 1654 return PTR_ERR(pdata->panel_bridge); 1655 + } 1656 + 1670 1657 DRM_DEV_DEBUG_DRIVER(dev, "get panel node.\n"); 1671 1658 1672 1659 
return 0; ··· 2016 2011 dsi->format = MIPI_DSI_FMT_RGB888; 2017 2012 dsi->mode_flags = MIPI_DSI_MODE_VIDEO | 2018 2013 MIPI_DSI_MODE_VIDEO_SYNC_PULSE | 2019 - MIPI_DSI_MODE_VIDEO_HSE; 2014 + MIPI_DSI_MODE_VIDEO_HSE | 2015 + MIPI_DSI_HS_PKT_END_ALIGNED; 2020 2016 2021 2017 ret = devm_mipi_dsi_attach(dev, dsi); 2022 2018 if (ret) { ··· 2660 2654 if (ret) { 2661 2655 if (ret != -EPROBE_DEFER) 2662 2656 DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret); 2663 - return ret; 2657 + goto free_wq; 2664 2658 } 2665 2659 2666 2660 if (anx7625_register_i2c_dummy_clients(platform, client) != 0) { ··· 2675 2669 pm_suspend_ignore_children(dev, true); 2676 2670 ret = devm_add_action_or_reset(dev, anx7625_runtime_disable, dev); 2677 2671 if (ret) 2678 - return ret; 2672 + goto free_wq; 2679 2673 2680 2674 if (!platform->pdata.low_power_mode) { 2681 2675 anx7625_disable_pd_protocol(platform);
+448 -59
drivers/gpu/drm/bridge/chipone-icn6211.c
··· 11 11 12 12 #include <linux/delay.h> 13 13 #include <linux/gpio/consumer.h> 14 + #include <linux/i2c.h> 14 15 #include <linux/module.h> 15 16 #include <linux/of_device.h> 16 17 #include <linux/regulator/consumer.h> 17 18 18 - #include <video/mipi_display.h> 19 - 19 + #define VENDOR_ID 0x00 20 + #define DEVICE_ID_H 0x01 21 + #define DEVICE_ID_L 0x02 22 + #define VERSION_ID 0x03 23 + #define FIRMWARE_VERSION 0x08 24 + #define CONFIG_FINISH 0x09 25 + #define PD_CTRL(n) (0x0a + ((n) & 0x3)) /* 0..3 */ 26 + #define RST_CTRL(n) (0x0e + ((n) & 0x1)) /* 0..1 */ 27 + #define SYS_CTRL(n) (0x10 + ((n) & 0x7)) /* 0..4 */ 28 + #define RGB_DRV(n) (0x18 + ((n) & 0x3)) /* 0..3 */ 29 + #define RGB_DLY(n) (0x1c + ((n) & 0x1)) /* 0..1 */ 30 + #define RGB_TEST_CTRL 0x1e 31 + #define ATE_PLL_EN 0x1f 20 32 #define HACTIVE_LI 0x20 21 33 #define VACTIVE_LI 0x21 22 34 #define VACTIVE_HACTIVE_HI 0x22 ··· 36 24 #define HSYNC_LI 0x24 37 25 #define HBP_LI 0x25 38 26 #define HFP_HSW_HBP_HI 0x26 27 + #define HFP_HSW_HBP_HI_HFP(n) (((n) & 0x300) >> 4) 28 + #define HFP_HSW_HBP_HI_HS(n) (((n) & 0x300) >> 6) 29 + #define HFP_HSW_HBP_HI_HBP(n) (((n) & 0x300) >> 8) 39 30 #define VFP 0x27 40 31 #define VSYNC 0x28 41 32 #define VBP 0x29 33 + #define BIST_POL 0x2a 34 + #define BIST_POL_BIST_MODE(n) (((n) & 0xf) << 4) 35 + #define BIST_POL_BIST_GEN BIT(3) 36 + #define BIST_POL_HSYNC_POL BIT(2) 37 + #define BIST_POL_VSYNC_POL BIT(1) 38 + #define BIST_POL_DE_POL BIT(0) 39 + #define BIST_RED 0x2b 40 + #define BIST_GREEN 0x2c 41 + #define BIST_BLUE 0x2d 42 + #define BIST_CHESS_X 0x2e 43 + #define BIST_CHESS_Y 0x2f 44 + #define BIST_CHESS_XY_H 0x30 45 + #define BIST_FRAME_TIME_L 0x31 46 + #define BIST_FRAME_TIME_H 0x32 47 + #define FIFO_MAX_ADDR_LOW 0x33 48 + #define SYNC_EVENT_DLY 0x34 49 + #define HSW_MIN 0x35 50 + #define HFP_MIN 0x36 51 + #define LOGIC_RST_NUM 0x37 52 + #define OSC_CTRL(n) (0x48 + ((n) & 0x7)) /* 0..5 */ 53 + #define BG_CTRL 0x4e 54 + #define LDO_PLL 0x4f 55 + #define PLL_CTRL(n) (0x50 
+ ((n) & 0xf)) /* 0..15 */ 56 + #define PLL_CTRL_6_EXTERNAL 0x90 57 + #define PLL_CTRL_6_MIPI_CLK 0x92 58 + #define PLL_CTRL_6_INTERNAL 0x93 59 + #define PLL_REM(n) (0x60 + ((n) & 0x3)) /* 0..2 */ 60 + #define PLL_DIV(n) (0x63 + ((n) & 0x3)) /* 0..2 */ 61 + #define PLL_FRAC(n) (0x66 + ((n) & 0x3)) /* 0..2 */ 62 + #define PLL_INT(n) (0x69 + ((n) & 0x1)) /* 0..1 */ 63 + #define PLL_REF_DIV 0x6b 64 + #define PLL_REF_DIV_P(n) ((n) & 0xf) 65 + #define PLL_REF_DIV_Pe BIT(4) 66 + #define PLL_REF_DIV_S(n) (((n) & 0x7) << 5) 67 + #define PLL_SSC_P(n) (0x6c + ((n) & 0x3)) /* 0..2 */ 68 + #define PLL_SSC_STEP(n) (0x6f + ((n) & 0x3)) /* 0..2 */ 69 + #define PLL_SSC_OFFSET(n) (0x72 + ((n) & 0x3)) /* 0..3 */ 70 + #define GPIO_OEN 0x79 71 + #define MIPI_CFG_PW 0x7a 72 + #define MIPI_CFG_PW_CONFIG_DSI 0xc1 73 + #define MIPI_CFG_PW_CONFIG_I2C 0x3e 74 + #define GPIO_SEL(n) (0x7b + ((n) & 0x1)) /* 0..1 */ 75 + #define IRQ_SEL 0x7d 76 + #define DBG_SEL 0x7e 77 + #define DBG_SIGNAL 0x7f 78 + #define MIPI_ERR_VECTOR_L 0x80 79 + #define MIPI_ERR_VECTOR_H 0x81 80 + #define MIPI_ERR_VECTOR_EN_L 0x82 81 + #define MIPI_ERR_VECTOR_EN_H 0x83 82 + #define MIPI_MAX_SIZE_L 0x84 83 + #define MIPI_MAX_SIZE_H 0x85 84 + #define DSI_CTRL 0x86 85 + #define DSI_CTRL_UNKNOWN 0x28 86 + #define DSI_CTRL_DSI_LANES(n) ((n) & 0x3) 87 + #define MIPI_PN_SWAP 0x87 88 + #define MIPI_PN_SWAP_CLK BIT(4) 89 + #define MIPI_PN_SWAP_D(n) BIT((n) & 0x3) 90 + #define MIPI_SOT_SYNC_BIT_(n) (0x88 + ((n) & 0x1)) /* 0..1 */ 91 + #define MIPI_ULPS_CTRL 0x8a 92 + #define MIPI_CLK_CHK_VAR 0x8e 93 + #define MIPI_CLK_CHK_INI 0x8f 94 + #define MIPI_T_TERM_EN 0x90 95 + #define MIPI_T_HS_SETTLE 0x91 96 + #define MIPI_T_TA_SURE_PRE 0x92 97 + #define MIPI_T_LPX_SET 0x94 98 + #define MIPI_T_CLK_MISS 0x95 99 + #define MIPI_INIT_TIME_L 0x96 100 + #define MIPI_INIT_TIME_H 0x97 101 + #define MIPI_T_CLK_TERM_EN 0x99 102 + #define MIPI_T_CLK_SETTLE 0x9a 103 + #define MIPI_TO_HS_RX_L 0x9e 104 + #define MIPI_TO_HS_RX_H 0x9f 105 + #define 
MIPI_PHY_(n) (0xa0 + ((n) & 0x7)) /* 0..5 */ 106 + #define MIPI_PD_RX 0xb0 107 + #define MIPI_PD_TERM 0xb1 108 + #define MIPI_PD_HSRX 0xb2 109 + #define MIPI_PD_LPTX 0xb3 110 + #define MIPI_PD_LPRX 0xb4 111 + #define MIPI_PD_CK_LANE 0xb5 112 + #define MIPI_FORCE_0 0xb6 113 + #define MIPI_RST_CTRL 0xb7 114 + #define MIPI_RST_NUM 0xb8 115 + #define MIPI_DBG_SET_(n) (0xc0 + ((n) & 0xf)) /* 0..9 */ 116 + #define MIPI_DBG_SEL 0xe0 117 + #define MIPI_DBG_DATA 0xe1 118 + #define MIPI_ATE_TEST_SEL 0xe2 119 + #define MIPI_ATE_STATUS_(n) (0xe3 + ((n) & 0x1)) /* 0..1 */ 120 + #define MIPI_ATE_STATUS_1 0xe4 121 + #define ICN6211_MAX_REGISTER MIPI_ATE_STATUS(1) 42 122 43 123 struct chipone { 44 124 struct device *dev; 125 + struct i2c_client *client; 45 126 struct drm_bridge bridge; 46 127 struct drm_display_mode mode; 47 128 struct drm_bridge *panel_bridge; 129 + struct mipi_dsi_device *dsi; 48 130 struct gpio_desc *enable_gpio; 49 131 struct regulator *vdd1; 50 132 struct regulator *vdd2; 51 133 struct regulator *vdd3; 134 + bool interface_i2c; 52 135 }; 53 136 54 137 static inline struct chipone *bridge_to_chipone(struct drm_bridge *bridge) ··· 151 44 return container_of(bridge, struct chipone, bridge); 152 45 } 153 46 154 - static inline int chipone_dsi_write(struct chipone *icn, const void *seq, 155 - size_t len) 47 + static void chipone_readb(struct chipone *icn, u8 reg, u8 *val) 156 48 { 157 - struct mipi_dsi_device *dsi = to_mipi_dsi_device(icn->dev); 158 - 159 - return mipi_dsi_generic_write(dsi, seq, len); 49 + if (icn->interface_i2c) 50 + *val = i2c_smbus_read_byte_data(icn->client, reg); 51 + else 52 + mipi_dsi_generic_read(icn->dsi, (u8[]){reg, 1}, 2, val, 1); 160 53 } 161 54 162 - #define ICN6211_DSI(icn, seq...) 
\ 163 - { \ 164 - const u8 d[] = { seq }; \ 165 - chipone_dsi_write(icn, d, ARRAY_SIZE(d)); \ 55 + static int chipone_writeb(struct chipone *icn, u8 reg, u8 val) 56 + { 57 + if (icn->interface_i2c) 58 + return i2c_smbus_write_byte_data(icn->client, reg, val); 59 + else 60 + return mipi_dsi_generic_write(icn->dsi, (u8[]){reg, val}, 2); 61 + } 62 + 63 + static void chipone_configure_pll(struct chipone *icn, 64 + const struct drm_display_mode *mode) 65 + { 66 + unsigned int best_p = 0, best_m = 0, best_s = 0; 67 + unsigned int mode_clock = mode->clock * 1000; 68 + unsigned int delta, min_delta = 0xffffffff; 69 + unsigned int freq_p, freq_s, freq_out; 70 + unsigned int p_min, p_max; 71 + unsigned int p, m, s; 72 + unsigned int fin; 73 + bool best_p_pot; 74 + u8 ref_div; 75 + 76 + /* 77 + * DSI byte clock frequency (input into PLL) is calculated as: 78 + * DSI_CLK = mode clock * bpp / dsi_data_lanes / 8 79 + * 80 + * DPI pixel clock frequency (output from PLL) is mode clock. 81 + * 82 + * The chip contains fractional PLL which works as follows: 83 + * DPI_CLK = ((DSI_CLK / P) * M) / S 84 + * P is pre-divider, register PLL_REF_DIV[3:0] is 1:n divider 85 + * register PLL_REF_DIV[4] is extra 1:2 divider 86 + * M is integer multiplier, register PLL_INT(0) is multiplier 87 + * S is post-divider, register PLL_REF_DIV[7:5] is 2^(n+1) divider 88 + * 89 + * It seems the PLL input clock after applying P pre-divider have 90 + * to be lower than 20 MHz. 
91 + */ 92 + fin = mode_clock * mipi_dsi_pixel_format_to_bpp(icn->dsi->format) / 93 + icn->dsi->lanes / 8; /* in Hz */ 94 + 95 + /* Minimum value of P predivider for PLL input in 5..20 MHz */ 96 + p_min = clamp(DIV_ROUND_UP(fin, 20000000), 1U, 31U); 97 + p_max = clamp(fin / 5000000, 1U, 31U); 98 + 99 + for (p = p_min; p < p_max; p++) { /* PLL_REF_DIV[4,3:0] */ 100 + if (p > 16 && p & 1) /* P > 16 uses extra /2 */ 101 + continue; 102 + freq_p = fin / p; 103 + if (freq_p == 0) /* Divider too high */ 104 + break; 105 + 106 + for (s = 0; s < 0x7; s++) { /* PLL_REF_DIV[7:5] */ 107 + freq_s = freq_p / BIT(s + 1); 108 + if (freq_s == 0) /* Divider too high */ 109 + break; 110 + 111 + m = mode_clock / freq_s; 112 + 113 + /* Multiplier is 8 bit */ 114 + if (m > 0xff) 115 + continue; 116 + 117 + /* Limit PLL VCO frequency to 1 GHz */ 118 + freq_out = (fin * m) / p; 119 + if (freq_out > 1000000000) 120 + continue; 121 + 122 + /* Apply post-divider */ 123 + freq_out /= BIT(s + 1); 124 + 125 + delta = abs(mode_clock - freq_out); 126 + if (delta < min_delta) { 127 + best_p = p; 128 + best_m = m; 129 + best_s = s; 130 + min_delta = delta; 131 + } 132 + } 166 133 } 134 + 135 + best_p_pot = !(best_p & 1); 136 + 137 + dev_dbg(icn->dev, 138 + "PLL: P[3:0]=%d P[4]=2*%d M=%d S[7:5]=2^%d delta=%d => DSI f_in=%d Hz ; DPI f_out=%d Hz\n", 139 + best_p >> best_p_pot, best_p_pot, best_m, best_s + 1, 140 + min_delta, fin, (fin * best_m) / (best_p << (best_s + 1))); 141 + 142 + ref_div = PLL_REF_DIV_P(best_p >> best_p_pot) | PLL_REF_DIV_S(best_s); 143 + if (best_p_pot) /* Prefer /2 pre-divider */ 144 + ref_div |= PLL_REF_DIV_Pe; 145 + 146 + /* Clock source selection fixed to MIPI DSI clock lane */ 147 + chipone_writeb(icn, PLL_CTRL(6), PLL_CTRL_6_MIPI_CLK); 148 + chipone_writeb(icn, PLL_REF_DIV, ref_div); 149 + chipone_writeb(icn, PLL_INT(0), best_m); 150 + } 167 151 168 152 static void chipone_atomic_enable(struct drm_bridge *bridge, 169 153 struct drm_bridge_state *old_bridge_state) 170 154 
{ 171 155 struct chipone *icn = bridge_to_chipone(bridge); 156 + struct drm_atomic_state *state = old_bridge_state->base.state; 172 157 struct drm_display_mode *mode = &icn->mode; 158 + const struct drm_bridge_state *bridge_state; 159 + u16 hfp, hbp, hsync; 160 + u32 bus_flags; 161 + u8 pol, id[4]; 173 162 174 - ICN6211_DSI(icn, 0x7a, 0xc1); 163 + chipone_readb(icn, VENDOR_ID, id); 164 + chipone_readb(icn, DEVICE_ID_H, id + 1); 165 + chipone_readb(icn, DEVICE_ID_L, id + 2); 166 + chipone_readb(icn, VERSION_ID, id + 3); 175 167 176 - ICN6211_DSI(icn, HACTIVE_LI, mode->hdisplay & 0xff); 168 + dev_dbg(icn->dev, 169 + "Chip IDs: Vendor=0x%02x Device=0x%02x:0x%02x Version=0x%02x\n", 170 + id[0], id[1], id[2], id[3]); 177 171 178 - ICN6211_DSI(icn, VACTIVE_LI, mode->vdisplay & 0xff); 172 + if (id[0] != 0xc1 || id[1] != 0x62 || id[2] != 0x11) { 173 + dev_dbg(icn->dev, "Invalid Chip IDs, aborting configuration\n"); 174 + return; 175 + } 179 176 180 - /** 177 + /* Get the DPI flags from the bridge state. 
*/ 178 + bridge_state = drm_atomic_get_new_bridge_state(state, bridge); 179 + bus_flags = bridge_state->output_bus_cfg.flags; 180 + 181 + if (icn->interface_i2c) 182 + chipone_writeb(icn, MIPI_CFG_PW, MIPI_CFG_PW_CONFIG_I2C); 183 + else 184 + chipone_writeb(icn, MIPI_CFG_PW, MIPI_CFG_PW_CONFIG_DSI); 185 + 186 + chipone_writeb(icn, HACTIVE_LI, mode->hdisplay & 0xff); 187 + 188 + chipone_writeb(icn, VACTIVE_LI, mode->vdisplay & 0xff); 189 + 190 + /* 181 191 * lsb nibble: 2nd nibble of hdisplay 182 192 * msb nibble: 2nd nibble of vdisplay 183 193 */ 184 - ICN6211_DSI(icn, VACTIVE_HACTIVE_HI, 185 - ((mode->hdisplay >> 8) & 0xf) | 186 - (((mode->vdisplay >> 8) & 0xf) << 4)); 194 + chipone_writeb(icn, VACTIVE_HACTIVE_HI, 195 + ((mode->hdisplay >> 8) & 0xf) | 196 + (((mode->vdisplay >> 8) & 0xf) << 4)); 187 197 188 - ICN6211_DSI(icn, HFP_LI, mode->hsync_start - mode->hdisplay); 198 + hfp = mode->hsync_start - mode->hdisplay; 199 + hsync = mode->hsync_end - mode->hsync_start; 200 + hbp = mode->htotal - mode->hsync_end; 189 201 190 - ICN6211_DSI(icn, HSYNC_LI, mode->hsync_end - mode->hsync_start); 202 + chipone_writeb(icn, HFP_LI, hfp & 0xff); 203 + chipone_writeb(icn, HSYNC_LI, hsync & 0xff); 204 + chipone_writeb(icn, HBP_LI, hbp & 0xff); 205 + /* Top two bits of Horizontal Front porch/Sync/Back porch */ 206 + chipone_writeb(icn, HFP_HSW_HBP_HI, 207 + HFP_HSW_HBP_HI_HFP(hfp) | 208 + HFP_HSW_HBP_HI_HS(hsync) | 209 + HFP_HSW_HBP_HI_HBP(hbp)); 191 210 192 - ICN6211_DSI(icn, HBP_LI, mode->htotal - mode->hsync_end); 211 + chipone_writeb(icn, VFP, mode->vsync_start - mode->vdisplay); 193 212 194 - ICN6211_DSI(icn, HFP_HSW_HBP_HI, 0x00); 213 + chipone_writeb(icn, VSYNC, mode->vsync_end - mode->vsync_start); 195 214 196 - ICN6211_DSI(icn, VFP, mode->vsync_start - mode->vdisplay); 197 - 198 - ICN6211_DSI(icn, VSYNC, mode->vsync_end - mode->vsync_start); 199 - 200 - ICN6211_DSI(icn, VBP, mode->vtotal - mode->vsync_end); 215 + chipone_writeb(icn, VBP, mode->vtotal - mode->vsync_end); 
201 216 202 217 /* dsi specific sequence */ 203 - ICN6211_DSI(icn, MIPI_DCS_SET_TEAR_OFF, 0x80); 204 - ICN6211_DSI(icn, MIPI_DCS_SET_ADDRESS_MODE, 0x28); 205 - ICN6211_DSI(icn, 0xb5, 0xa0); 206 - ICN6211_DSI(icn, 0x5c, 0xff); 207 - ICN6211_DSI(icn, MIPI_DCS_SET_COLUMN_ADDRESS, 0x01); 208 - ICN6211_DSI(icn, MIPI_DCS_GET_POWER_SAVE, 0x92); 209 - ICN6211_DSI(icn, 0x6b, 0x71); 210 - ICN6211_DSI(icn, 0x69, 0x2b); 211 - ICN6211_DSI(icn, MIPI_DCS_ENTER_SLEEP_MODE, 0x40); 212 - ICN6211_DSI(icn, MIPI_DCS_EXIT_SLEEP_MODE, 0x98); 218 + chipone_writeb(icn, SYNC_EVENT_DLY, 0x80); 219 + chipone_writeb(icn, HFP_MIN, hfp & 0xff); 220 + chipone_writeb(icn, MIPI_PD_CK_LANE, 0xa0); 221 + chipone_writeb(icn, PLL_CTRL(12), 0xff); 222 + chipone_writeb(icn, MIPI_PN_SWAP, 0x00); 223 + 224 + /* DPI HS/VS/DE polarity */ 225 + pol = ((mode->flags & DRM_MODE_FLAG_PHSYNC) ? BIST_POL_HSYNC_POL : 0) | 226 + ((mode->flags & DRM_MODE_FLAG_PVSYNC) ? BIST_POL_VSYNC_POL : 0) | 227 + ((bus_flags & DRM_BUS_FLAG_DE_HIGH) ? BIST_POL_DE_POL : 0); 228 + chipone_writeb(icn, BIST_POL, pol); 229 + 230 + /* Configure PLL settings */ 231 + chipone_configure_pll(icn, mode); 232 + 233 + chipone_writeb(icn, SYS_CTRL(0), 0x40); 234 + chipone_writeb(icn, SYS_CTRL(1), 0x88); 213 235 214 236 /* icn6211 specific sequence */ 215 - ICN6211_DSI(icn, 0xb6, 0x20); 216 - ICN6211_DSI(icn, 0x51, 0x20); 217 - ICN6211_DSI(icn, 0x09, 0x10); 237 + chipone_writeb(icn, MIPI_FORCE_0, 0x20); 238 + chipone_writeb(icn, PLL_CTRL(1), 0x20); 239 + chipone_writeb(icn, CONFIG_FINISH, 0x10); 218 240 219 241 usleep_range(10000, 11000); 220 242 } ··· 404 168 struct chipone *icn = bridge_to_chipone(bridge); 405 169 406 170 drm_mode_copy(&icn->mode, adjusted_mode); 171 + }; 172 + 173 + static int chipone_dsi_attach(struct chipone *icn) 174 + { 175 + struct mipi_dsi_device *dsi = icn->dsi; 176 + int ret; 177 + 178 + dsi->lanes = 4; 179 + dsi->format = MIPI_DSI_FMT_RGB888; 180 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 
181 + MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET; 182 + 183 + ret = mipi_dsi_attach(dsi); 184 + if (ret < 0) 185 + dev_err(icn->dev, "failed to attach dsi\n"); 186 + 187 + return ret; 188 + } 189 + 190 + static int chipone_dsi_host_attach(struct chipone *icn) 191 + { 192 + struct device *dev = icn->dev; 193 + struct device_node *host_node; 194 + struct device_node *endpoint; 195 + struct mipi_dsi_device *dsi; 196 + struct mipi_dsi_host *host; 197 + int ret = 0; 198 + 199 + const struct mipi_dsi_device_info info = { 200 + .type = "chipone", 201 + .channel = 0, 202 + .node = NULL, 203 + }; 204 + 205 + endpoint = of_graph_get_endpoint_by_regs(dev->of_node, 0, 0); 206 + host_node = of_graph_get_remote_port_parent(endpoint); 207 + of_node_put(endpoint); 208 + 209 + if (!host_node) 210 + return -EINVAL; 211 + 212 + host = of_find_mipi_dsi_host_by_node(host_node); 213 + of_node_put(host_node); 214 + if (!host) { 215 + dev_err(dev, "failed to find dsi host\n"); 216 + return -EPROBE_DEFER; 217 + } 218 + 219 + dsi = mipi_dsi_device_register_full(host, &info); 220 + if (IS_ERR(dsi)) { 221 + return dev_err_probe(dev, PTR_ERR(dsi), 222 + "failed to create dsi device\n"); 223 + } 224 + 225 + icn->dsi = dsi; 226 + 227 + ret = chipone_dsi_attach(icn); 228 + if (ret < 0) 229 + mipi_dsi_device_unregister(dsi); 230 + 231 + return ret; 407 232 } 408 233 409 234 static int chipone_attach(struct drm_bridge *bridge, enum drm_bridge_attach_flags flags) ··· 472 175 struct chipone *icn = bridge_to_chipone(bridge); 473 176 474 177 return drm_bridge_attach(bridge->encoder, icn->panel_bridge, bridge, flags); 178 + } 179 + 180 + #define MAX_INPUT_SEL_FORMATS 1 181 + 182 + static u32 * 183 + chipone_atomic_get_input_bus_fmts(struct drm_bridge *bridge, 184 + struct drm_bridge_state *bridge_state, 185 + struct drm_crtc_state *crtc_state, 186 + struct drm_connector_state *conn_state, 187 + u32 output_fmt, 188 + unsigned int *num_input_fmts) 189 + { 190 + u32 *input_fmts; 191 + 192 + 
*num_input_fmts = 0; 193 + 194 + input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts), 195 + GFP_KERNEL); 196 + if (!input_fmts) 197 + return NULL; 198 + 199 + /* This is the DSI-end bus format */ 200 + input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24; 201 + *num_input_fmts = 1; 202 + 203 + return input_fmts; 475 204 } 476 205 477 206 static const struct drm_bridge_funcs chipone_bridge_funcs = { ··· 509 186 .atomic_post_disable = chipone_atomic_post_disable, 510 187 .mode_set = chipone_mode_set, 511 188 .attach = chipone_attach, 189 + .atomic_get_input_bus_fmts = chipone_atomic_get_input_bus_fmts, 512 190 }; 513 191 514 192 static int chipone_parse_dt(struct chipone *icn) ··· 557 233 return 0; 558 234 } 559 235 560 - static int chipone_probe(struct mipi_dsi_device *dsi) 236 + static int chipone_common_probe(struct device *dev, struct chipone **icnr) 561 237 { 562 - struct device *dev = &dsi->dev; 563 238 struct chipone *icn; 564 239 int ret; 565 240 ··· 566 243 if (!icn) 567 244 return -ENOMEM; 568 245 569 - mipi_dsi_set_drvdata(dsi, icn); 570 246 icn->dev = dev; 571 247 572 248 ret = chipone_parse_dt(icn); ··· 576 254 icn->bridge.type = DRM_MODE_CONNECTOR_DPI; 577 255 icn->bridge.of_node = dev->of_node; 578 256 579 - drm_bridge_add(&icn->bridge); 580 - 581 - dsi->lanes = 4; 582 - dsi->format = MIPI_DSI_FMT_RGB888; 583 - dsi->mode_flags = MIPI_DSI_MODE_VIDEO_SYNC_PULSE; 584 - 585 - ret = mipi_dsi_attach(dsi); 586 - if (ret < 0) { 587 - drm_bridge_remove(&icn->bridge); 588 - dev_err(dev, "failed to attach dsi\n"); 589 - } 257 + *icnr = icn; 590 258 591 259 return ret; 592 260 } 593 261 594 - static int chipone_remove(struct mipi_dsi_device *dsi) 262 + static int chipone_dsi_probe(struct mipi_dsi_device *dsi) 263 + { 264 + struct device *dev = &dsi->dev; 265 + struct chipone *icn; 266 + int ret; 267 + 268 + ret = chipone_common_probe(dev, &icn); 269 + if (ret) 270 + return ret; 271 + 272 + icn->interface_i2c = false; 273 + icn->dsi = dsi; 274 + 275 + 
mipi_dsi_set_drvdata(dsi, icn); 276 + 277 + drm_bridge_add(&icn->bridge); 278 + 279 + ret = chipone_dsi_attach(icn); 280 + if (ret) 281 + drm_bridge_remove(&icn->bridge); 282 + 283 + return ret; 284 + } 285 + 286 + static int chipone_i2c_probe(struct i2c_client *client, 287 + const struct i2c_device_id *id) 288 + { 289 + struct device *dev = &client->dev; 290 + struct chipone *icn; 291 + int ret; 292 + 293 + ret = chipone_common_probe(dev, &icn); 294 + if (ret) 295 + return ret; 296 + 297 + icn->interface_i2c = true; 298 + icn->client = client; 299 + dev_set_drvdata(dev, icn); 300 + i2c_set_clientdata(client, icn); 301 + 302 + drm_bridge_add(&icn->bridge); 303 + 304 + return chipone_dsi_host_attach(icn); 305 + } 306 + 307 + static int chipone_dsi_remove(struct mipi_dsi_device *dsi) 595 308 { 596 309 struct chipone *icn = mipi_dsi_get_drvdata(dsi); 597 310 ··· 642 285 }; 643 286 MODULE_DEVICE_TABLE(of, chipone_of_match); 644 287 645 - static struct mipi_dsi_driver chipone_driver = { 646 - .probe = chipone_probe, 647 - .remove = chipone_remove, 288 + static struct mipi_dsi_driver chipone_dsi_driver = { 289 + .probe = chipone_dsi_probe, 290 + .remove = chipone_dsi_remove, 648 291 .driver = { 649 292 .name = "chipone-icn6211", 650 293 .owner = THIS_MODULE, 651 294 .of_match_table = chipone_of_match, 652 295 }, 653 296 }; 654 - module_mipi_dsi_driver(chipone_driver); 297 + 298 + static struct i2c_device_id chipone_i2c_id[] = { 299 + { "chipone,icn6211" }, 300 + {}, 301 + }; 302 + MODULE_DEVICE_TABLE(i2c, chipone_i2c_id); 303 + 304 + static struct i2c_driver chipone_i2c_driver = { 305 + .probe = chipone_i2c_probe, 306 + .id_table = chipone_i2c_id, 307 + .driver = { 308 + .name = "chipone-icn6211-i2c", 309 + .of_match_table = chipone_of_match, 310 + }, 311 + }; 312 + 313 + static int __init chipone_init(void) 314 + { 315 + if (IS_ENABLED(CONFIG_DRM_MIPI_DSI)) 316 + mipi_dsi_driver_register(&chipone_dsi_driver); 317 + 318 + return i2c_add_driver(&chipone_i2c_driver); 319 + 
} 320 + module_init(chipone_init); 321 + 322 + static void __exit chipone_exit(void) 323 + { 324 + i2c_del_driver(&chipone_i2c_driver); 325 + 326 + if (IS_ENABLED(CONFIG_DRM_MIPI_DSI)) 327 + mipi_dsi_driver_unregister(&chipone_dsi_driver); 328 + } 329 + module_exit(chipone_exit); 655 330 656 331 MODULE_AUTHOR("Jagan Teki <jagan@amarulasolutions.com>"); 657 332 MODULE_DESCRIPTION("Chipone ICN6211 MIPI-DSI to RGB Converter Bridge");
+628 -1
drivers/gpu/drm/bridge/ite-it66121.c
··· 27 27 #include <drm/drm_print.h> 28 28 #include <drm/drm_probe_helper.h> 29 29 30 + #include <sound/hdmi-codec.h> 31 + 30 32 #define IT66121_VENDOR_ID0_REG 0x00 31 33 #define IT66121_VENDOR_ID1_REG 0x01 32 34 #define IT66121_DEVICE_ID0_REG 0x02 ··· 157 155 #define IT66121_AV_MUTE_ON BIT(0) 158 156 #define IT66121_AV_MUTE_BLUESCR BIT(1) 159 157 158 + #define IT66121_PKT_CTS_CTRL_REG 0xC5 159 + #define IT66121_PKT_CTS_CTRL_SEL BIT(1) 160 + 160 161 #define IT66121_PKT_GEN_CTRL_REG 0xC6 161 162 #define IT66121_PKT_GEN_CTRL_ON BIT(0) 162 163 #define IT66121_PKT_GEN_CTRL_RPT BIT(1) ··· 207 202 #define IT66121_EDID_SLEEP_US 20000 208 203 #define IT66121_EDID_TIMEOUT_US 200000 209 204 #define IT66121_EDID_FIFO_SIZE 32 205 + 206 + #define IT66121_CLK_CTRL0_REG 0x58 207 + #define IT66121_CLK_CTRL0_AUTO_OVER_SAMPLING BIT(4) 208 + #define IT66121_CLK_CTRL0_EXT_MCLK_MASK GENMASK(3, 2) 209 + #define IT66121_CLK_CTRL0_EXT_MCLK_128FS (0 << 2) 210 + #define IT66121_CLK_CTRL0_EXT_MCLK_256FS BIT(2) 211 + #define IT66121_CLK_CTRL0_EXT_MCLK_512FS (2 << 2) 212 + #define IT66121_CLK_CTRL0_EXT_MCLK_1024FS (3 << 2) 213 + #define IT66121_CLK_CTRL0_AUTO_IPCLK BIT(0) 214 + #define IT66121_CLK_STATUS1_REG 0x5E 215 + #define IT66121_CLK_STATUS2_REG 0x5F 216 + 217 + #define IT66121_AUD_CTRL0_REG 0xE0 218 + #define IT66121_AUD_SWL (3 << 6) 219 + #define IT66121_AUD_16BIT (0 << 6) 220 + #define IT66121_AUD_18BIT BIT(6) 221 + #define IT66121_AUD_20BIT (2 << 6) 222 + #define IT66121_AUD_24BIT (3 << 6) 223 + #define IT66121_AUD_SPDIFTC BIT(5) 224 + #define IT66121_AUD_SPDIF BIT(4) 225 + #define IT66121_AUD_I2S (0 << 4) 226 + #define IT66121_AUD_EN_I2S3 BIT(3) 227 + #define IT66121_AUD_EN_I2S2 BIT(2) 228 + #define IT66121_AUD_EN_I2S1 BIT(1) 229 + #define IT66121_AUD_EN_I2S0 BIT(0) 230 + #define IT66121_AUD_CTRL0_AUD_SEL BIT(4) 231 + 232 + #define IT66121_AUD_CTRL1_REG 0xE1 233 + #define IT66121_AUD_FIFOMAP_REG 0xE2 234 + #define IT66121_AUD_CTRL3_REG 0xE3 235 + #define 
IT66121_AUD_SRCVALID_FLAT_REG 0xE4 236 + #define IT66121_AUD_FLAT_SRC0 BIT(4) 237 + #define IT66121_AUD_FLAT_SRC1 BIT(5) 238 + #define IT66121_AUD_FLAT_SRC2 BIT(6) 239 + #define IT66121_AUD_FLAT_SRC3 BIT(7) 240 + #define IT66121_AUD_HDAUDIO_REG 0xE5 241 + 242 + #define IT66121_AUD_PKT_CTS0_REG 0x130 243 + #define IT66121_AUD_PKT_CTS1_REG 0x131 244 + #define IT66121_AUD_PKT_CTS2_REG 0x132 245 + #define IT66121_AUD_PKT_N0_REG 0x133 246 + #define IT66121_AUD_PKT_N1_REG 0x134 247 + #define IT66121_AUD_PKT_N2_REG 0x135 248 + 249 + #define IT66121_AUD_CHST_MODE_REG 0x191 250 + #define IT66121_AUD_CHST_CAT_REG 0x192 251 + #define IT66121_AUD_CHST_SRCNUM_REG 0x193 252 + #define IT66121_AUD_CHST_CHTNUM_REG 0x194 253 + #define IT66121_AUD_CHST_CA_FS_REG 0x198 254 + #define IT66121_AUD_CHST_OFS_WL_REG 0x199 255 + 256 + #define IT66121_AUD_PKT_CTS_CNT0_REG 0x1A0 257 + #define IT66121_AUD_PKT_CTS_CNT1_REG 0x1A1 258 + #define IT66121_AUD_PKT_CTS_CNT2_REG 0x1A2 259 + 260 + #define IT66121_AUD_FS_22P05K 0x4 261 + #define IT66121_AUD_FS_44P1K 0x0 262 + #define IT66121_AUD_FS_88P2K 0x8 263 + #define IT66121_AUD_FS_176P4K 0xC 264 + #define IT66121_AUD_FS_24K 0x6 265 + #define IT66121_AUD_FS_48K 0x2 266 + #define IT66121_AUD_FS_96K 0xA 267 + #define IT66121_AUD_FS_192K 0xE 268 + #define IT66121_AUD_FS_768K 0x9 269 + #define IT66121_AUD_FS_32K 0x3 270 + #define IT66121_AUD_FS_OTHER 0x1 271 + 272 + #define IT66121_AUD_SWL_21BIT 0xD 273 + #define IT66121_AUD_SWL_24BIT 0xB 274 + #define IT66121_AUD_SWL_23BIT 0x9 275 + #define IT66121_AUD_SWL_22BIT 0x5 276 + #define IT66121_AUD_SWL_20BIT 0x3 277 + #define IT66121_AUD_SWL_17BIT 0xC 278 + #define IT66121_AUD_SWL_19BIT 0x8 279 + #define IT66121_AUD_SWL_18BIT 0x4 280 + #define IT66121_AUD_SWL_16BIT 0x2 281 + #define IT66121_AUD_SWL_NOT_INDICATED 0x0 282 + 283 + #define IT66121_VENDOR_ID0 0x54 284 + #define IT66121_VENDOR_ID1 0x49 285 + #define IT66121_DEVICE_ID0 0x12 286 + #define IT66121_DEVICE_ID1 0x06 287 + #define IT66121_DEVICE_MASK 0x0F 
  #define IT66121_AFE_CLK_HIGH	80000 /* Khz */

  struct it66121_ctx {
···
  	u32 bus_width;
  	struct mutex lock; /* Protects fields below and device registers */
  	struct hdmi_avi_infoframe hdmi_avi_infoframe;
+ 	struct {
+ 		struct platform_device *pdev;
+ 		u8 ch_enable;
+ 		u8 fs;
+ 		u8 swl;
+ 		bool auto_cts;
+ 	} audio;
  };

  static const struct regmap_range_cfg it66121_regmap_banks[] = {
···
  		.selector_mask = 0x1,
  		.selector_shift = 0,
  		.window_start = 0x00,
- 		.window_len = 0x130,
+ 		.window_len = 0x100,
  	},
  };
···
  	return IRQ_HANDLED;
  }

+ static int it661221_set_chstat(struct it66121_ctx *ctx, u8 iec60958_chstat[])
+ {
+ 	int ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CHST_MODE_REG, iec60958_chstat[0] & 0x7C);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CHST_CAT_REG, iec60958_chstat[1]);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CHST_SRCNUM_REG, iec60958_chstat[2] & 0x0F);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CHST_CHTNUM_REG,
+ 			   (iec60958_chstat[2] >> 4) & 0x0F);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CHST_CA_FS_REG, iec60958_chstat[3]);
+ 	if (ret)
+ 		return ret;
+
+ 	return regmap_write(ctx->regmap, IT66121_AUD_CHST_OFS_WL_REG, iec60958_chstat[4]);
+ }
+
+ static int it661221_set_lpcm_audio(struct it66121_ctx *ctx, u8 audio_src_num, u8 audio_swl)
+ {
+ 	int ret;
+ 	unsigned int audio_enable = 0;
+ 	unsigned int audio_format = 0;
+
+ 	switch (audio_swl) {
+ 	case 16:
+ 		audio_enable |= IT66121_AUD_16BIT;
+ 		break;
+ 	case 18:
+ 		audio_enable |= IT66121_AUD_18BIT;
+ 		break;
+ 	case 20:
+ 		audio_enable |= IT66121_AUD_20BIT;
+ 		break;
+ 	case 24:
+ 	default:
+ 		audio_enable |= IT66121_AUD_24BIT;
+ 		break;
+ 	}
+
+ 	audio_format |= 0x40;
+ 	switch (audio_src_num) {
+ 	case 4:
+ 		audio_enable |= IT66121_AUD_EN_I2S3 | IT66121_AUD_EN_I2S2 |
+ 				IT66121_AUD_EN_I2S1 | IT66121_AUD_EN_I2S0;
+ 		break;
+ 	case 3:
+ 		audio_enable |= IT66121_AUD_EN_I2S2 | IT66121_AUD_EN_I2S1 |
+ 				IT66121_AUD_EN_I2S0;
+ 		break;
+ 	case 2:
+ 		audio_enable |= IT66121_AUD_EN_I2S1 | IT66121_AUD_EN_I2S0;
+ 		break;
+ 	case 1:
+ 	default:
+ 		audio_format &= ~0x40;
+ 		audio_enable |= IT66121_AUD_EN_I2S0;
+ 		break;
+ 	}
+
+ 	audio_format |= 0x01;
+ 	ctx->audio.ch_enable = audio_enable;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CTRL0_REG, audio_enable & 0xF0);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CTRL1_REG, audio_format);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_FIFOMAP_REG, 0xE4);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_CTRL3_REG, 0x00);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_SRCVALID_FLAT_REG, 0x00);
+ 	if (ret)
+ 		return ret;
+
+ 	return regmap_write(ctx->regmap, IT66121_AUD_HDAUDIO_REG, 0x00);
+ }
+
+ static int it661221_set_ncts(struct it66121_ctx *ctx, u8 fs)
+ {
+ 	int ret;
+ 	unsigned int n;
+
+ 	switch (fs) {
+ 	case IT66121_AUD_FS_32K:
+ 		n = 4096;
+ 		break;
+ 	case IT66121_AUD_FS_44P1K:
+ 		n = 6272;
+ 		break;
+ 	case IT66121_AUD_FS_48K:
+ 		n = 6144;
+ 		break;
+ 	case IT66121_AUD_FS_88P2K:
+ 		n = 12544;
+ 		break;
+ 	case IT66121_AUD_FS_96K:
+ 		n = 12288;
+ 		break;
+ 	case IT66121_AUD_FS_176P4K:
+ 		n = 25088;
+ 		break;
+ 	case IT66121_AUD_FS_192K:
+ 		n = 24576;
+ 		break;
+ 	case IT66121_AUD_FS_768K:
+ 		n = 24576;
+ 		break;
+ 	default:
+ 		n = 6144;
+ 		break;
+ 	}
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_PKT_N0_REG, (u8)((n) & 0xFF));
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_PKT_N1_REG, (u8)((n >> 8) & 0xFF));
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, IT66121_AUD_PKT_N2_REG, (u8)((n >> 16) & 0xF));
+ 	if (ret)
+ 		return ret;
+
+ 	if (ctx->audio.auto_cts) {
+ 		u8 loop_cnt = 255;
+ 		u8 cts_stable_cnt = 0;
+ 		unsigned int sum_cts = 0;
+ 		unsigned int cts = 0;
+ 		unsigned int last_cts = 0;
+ 		unsigned int diff;
+ 		unsigned int val;
+
+ 		while (loop_cnt--) {
+ 			msleep(30);
+ 			regmap_read(ctx->regmap, IT66121_AUD_PKT_CTS_CNT2_REG, &val);
+ 			cts = val << 12;
+ 			regmap_read(ctx->regmap, IT66121_AUD_PKT_CTS_CNT1_REG, &val);
+ 			cts |= val << 4;
+ 			regmap_read(ctx->regmap, IT66121_AUD_PKT_CTS_CNT0_REG, &val);
+ 			cts |= val >> 4;
+ 			if (cts == 0) {
+ 				continue;
+ 			} else {
+ 				if (last_cts > cts)
+ 					diff = last_cts - cts;
+ 				else
+ 					diff = cts - last_cts;
+ 				last_cts = cts;
+ 				if (diff < 5) {
+ 					cts_stable_cnt++;
+ 					sum_cts += cts;
+ 				} else {
+ 					cts_stable_cnt = 0;
+ 					sum_cts = 0;
+ 					continue;
+ 				}
+
+ 				if (cts_stable_cnt >= 32) {
+ 					last_cts = (sum_cts >> 5);
+ 					break;
+ 				}
+ 			}
+ 		}
+
+ 		regmap_write(ctx->regmap, IT66121_AUD_PKT_CTS0_REG, (u8)((last_cts) & 0xFF));
+ 		regmap_write(ctx->regmap, IT66121_AUD_PKT_CTS1_REG, (u8)((last_cts >> 8) & 0xFF));
+ 		regmap_write(ctx->regmap, IT66121_AUD_PKT_CTS2_REG, (u8)((last_cts >> 16) & 0x0F));
+ 	}
+
+ 	ret = regmap_write(ctx->regmap, 0xF8, 0xC3);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = regmap_write(ctx->regmap, 0xF8, 0xA5);
+ 	if (ret)
+ 		return ret;
+
+ 	if (ctx->audio.auto_cts) {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_PKT_CTS_CTRL_REG,
+ 					IT66121_PKT_CTS_CTRL_SEL,
+ 					1);
+ 	} else {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_PKT_CTS_CTRL_REG,
+ 					IT66121_PKT_CTS_CTRL_SEL,
+ 					0);
+ 	}
+
+ 	if (ret)
+ 		return ret;
+
+ 	return regmap_write(ctx->regmap, 0xF8, 0xFF);
+ }
+
+ static int it661221_audio_output_enable(struct it66121_ctx *ctx, bool enable)
+ {
+ 	int ret;
+
+ 	if (enable) {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_SW_RST_REG,
+ 					IT66121_SW_RST_AUD | IT66121_SW_RST_AREF,
+ 					0);
+ 		if (ret)
+ 			return ret;
+
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_AUD_CTRL0_REG,
+ 					IT66121_AUD_EN_I2S3 | IT66121_AUD_EN_I2S2 |
+ 					IT66121_AUD_EN_I2S1 | IT66121_AUD_EN_I2S0,
+ 					ctx->audio.ch_enable);
+ 	} else {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_AUD_CTRL0_REG,
+ 					IT66121_AUD_EN_I2S3 | IT66121_AUD_EN_I2S2 |
+ 					IT66121_AUD_EN_I2S1 | IT66121_AUD_EN_I2S0,
+ 					ctx->audio.ch_enable & 0xF0);
+ 		if (ret)
+ 			return ret;
+
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_SW_RST_REG,
+ 					IT66121_SW_RST_AUD | IT66121_SW_RST_AREF,
+ 					IT66121_SW_RST_AUD | IT66121_SW_RST_AREF);
+ 	}
+
+ 	return ret;
+ }
+
+ static int it661221_audio_ch_enable(struct it66121_ctx *ctx, bool enable)
+ {
+ 	int ret;
+
+ 	if (enable) {
+ 		ret = regmap_write(ctx->regmap, IT66121_AUD_SRCVALID_FLAT_REG, 0);
+ 		if (ret)
+ 			return ret;
+
+ 		ret = regmap_write(ctx->regmap, IT66121_AUD_CTRL0_REG, ctx->audio.ch_enable);
+ 	} else {
+ 		ret = regmap_write(ctx->regmap, IT66121_AUD_CTRL0_REG, ctx->audio.ch_enable & 0xF0);
+ 	}
+
+ 	return ret;
+ }
+
+ static int it66121_audio_hw_params(struct device *dev, void *data,
+ 				   struct hdmi_codec_daifmt *daifmt,
+ 				   struct hdmi_codec_params *params)
+ {
+ 	u8 fs;
+ 	u8 swl;
+ 	int ret;
+ 	struct it66121_ctx *ctx = dev_get_drvdata(dev);
+ 	static u8 iec60958_chstat[5];
+ 	unsigned int channels = params->channels;
+ 	unsigned int sample_rate = params->sample_rate;
+ 	unsigned int sample_width = params->sample_width;
+
+ 	mutex_lock(&ctx->lock);
+ 	dev_dbg(dev, "%s: %u, %u, %u, %u\n", __func__,
+ 		daifmt->fmt, sample_rate, sample_width, channels);
+
+ 	switch (daifmt->fmt) {
+ 	case HDMI_I2S:
+ 		dev_dbg(dev, "Using HDMI I2S\n");
+ 		break;
+ 	default:
+ 		dev_err(dev, "Invalid or unsupported DAI format %d\n", daifmt->fmt);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+
+ 	// Set audio clock recovery (N/CTS)
+ 	ret = regmap_write(ctx->regmap, IT66121_CLK_CTRL0_REG,
+ 			   IT66121_CLK_CTRL0_AUTO_OVER_SAMPLING |
+ 			   IT66121_CLK_CTRL0_EXT_MCLK_256FS |
+ 			   IT66121_CLK_CTRL0_AUTO_IPCLK);
+ 	if (ret)
+ 		goto out;
+
+ 	ret = regmap_write_bits(ctx->regmap, IT66121_AUD_CTRL0_REG,
+ 				IT66121_AUD_CTRL0_AUD_SEL, 0); // remove spdif selection
+ 	if (ret)
+ 		goto out;
+
+ 	switch (sample_rate) {
+ 	case 44100L:
+ 		fs = IT66121_AUD_FS_44P1K;
+ 		break;
+ 	case 88200L:
+ 		fs = IT66121_AUD_FS_88P2K;
+ 		break;
+ 	case 176400L:
+ 		fs = IT66121_AUD_FS_176P4K;
+ 		break;
+ 	case 32000L:
+ 		fs = IT66121_AUD_FS_32K;
+ 		break;
+ 	case 48000L:
+ 		fs = IT66121_AUD_FS_48K;
+ 		break;
+ 	case 96000L:
+ 		fs = IT66121_AUD_FS_96K;
+ 		break;
+ 	case 192000L:
+ 		fs = IT66121_AUD_FS_192K;
+ 		break;
+ 	case 768000L:
+ 		fs = IT66121_AUD_FS_768K;
+ 		break;
+ 	default:
+ 		fs = IT66121_AUD_FS_48K;
+ 		break;
+ 	}
+
+ 	ctx->audio.fs = fs;
+ 	ret = it661221_set_ncts(ctx, fs);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to set N/CTS: %d\n", ret);
+ 		goto out;
+ 	}
+
+ 	// Set audio format register (except audio channel enable)
+ 	ret = it661221_set_lpcm_audio(ctx, (channels + 1) / 2, sample_width);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to set LPCM audio: %d\n", ret);
+ 		goto out;
+ 	}
+
+ 	// Set audio channel status
+ 	iec60958_chstat[0] = 0;
+ 	if ((channels + 1) / 2 == 1)
+ 		iec60958_chstat[0] |= 0x1;
+ 	iec60958_chstat[0] &= ~(1 << 1);
+ 	iec60958_chstat[1] = 0;
+ 	iec60958_chstat[2] = (channels + 1) / 2;
+ 	iec60958_chstat[2] |= (channels << 4) & 0xF0;
+ 	iec60958_chstat[3] = fs;
+
+ 	switch (sample_width) {
+ 	case 21L:
+ 		swl = IT66121_AUD_SWL_21BIT;
+ 		break;
+ 	case 24L:
+ 		swl = IT66121_AUD_SWL_24BIT;
+ 		break;
+ 	case 23L:
+ 		swl = IT66121_AUD_SWL_23BIT;
+ 		break;
+ 	case 22L:
+ 		swl = IT66121_AUD_SWL_22BIT;
+ 		break;
+ 	case 20L:
+ 		swl = IT66121_AUD_SWL_20BIT;
+ 		break;
+ 	case 17L:
+ 		swl = IT66121_AUD_SWL_17BIT;
+ 		break;
+ 	case 19L:
+ 		swl = IT66121_AUD_SWL_19BIT;
+ 		break;
+ 	case 18L:
+ 		swl = IT66121_AUD_SWL_18BIT;
+ 		break;
+ 	case 16L:
+ 		swl = IT66121_AUD_SWL_16BIT;
+ 		break;
+ 	default:
+ 		swl = IT66121_AUD_SWL_NOT_INDICATED;
+ 		break;
+ 	}
+
+ 	iec60958_chstat[4] = (((~fs) << 4) & 0xF0) | swl;
+ 	ret = it661221_set_chstat(ctx, iec60958_chstat);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to set channel status: %d\n", ret);
+ 		goto out;
+ 	}
+
+ 	// Enable audio channel enable while input clock stable (if SPDIF).
+ 	ret = it661221_audio_ch_enable(ctx, true);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to enable audio channel: %d\n", ret);
+ 		goto out;
+ 	}
+
+ 	ret = regmap_write_bits(ctx->regmap, IT66121_INT_MASK1_REG,
+ 				IT66121_INT_MASK1_AUD_OVF,
+ 				0);
+ 	if (ret)
+ 		goto out;
+
+ 	dev_dbg(dev, "HDMI audio enabled.\n");
+ out:
+ 	mutex_unlock(&ctx->lock);
+
+ 	return ret;
+ }
+
+ static int it66121_audio_startup(struct device *dev, void *data)
+ {
+ 	int ret;
+ 	struct it66121_ctx *ctx = dev_get_drvdata(dev);
+
+ 	dev_dbg(dev, "%s\n", __func__);
+
+ 	mutex_lock(&ctx->lock);
+ 	ret = it661221_audio_output_enable(ctx, true);
+ 	if (ret)
+ 		dev_err(dev, "Failed to enable audio output: %d\n", ret);
+
+ 	mutex_unlock(&ctx->lock);
+
+ 	return ret;
+ }
+
+ static void it66121_audio_shutdown(struct device *dev, void *data)
+ {
+ 	int ret;
+ 	struct it66121_ctx *ctx = dev_get_drvdata(dev);
+
+ 	dev_dbg(dev, "%s\n", __func__);
+
+ 	mutex_lock(&ctx->lock);
+ 	ret = it661221_audio_output_enable(ctx, false);
+ 	if (ret)
+ 		dev_err(dev, "Failed to disable audio output: %d\n", ret);
+
+ 	mutex_unlock(&ctx->lock);
+ }
+
+ static int it66121_audio_mute(struct device *dev, void *data,
+ 			      bool enable, int direction)
+ {
+ 	int ret;
+ 	struct it66121_ctx *ctx = dev_get_drvdata(dev);
+
+ 	dev_dbg(dev, "%s: enable=%s, direction=%d\n",
+ 		__func__, enable ? "true" : "false", direction);
+
+ 	mutex_lock(&ctx->lock);
+
+ 	if (enable) {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_AUD_SRCVALID_FLAT_REG,
+ 					IT66121_AUD_FLAT_SRC0 | IT66121_AUD_FLAT_SRC1 |
+ 					IT66121_AUD_FLAT_SRC2 | IT66121_AUD_FLAT_SRC3,
+ 					IT66121_AUD_FLAT_SRC0 | IT66121_AUD_FLAT_SRC1 |
+ 					IT66121_AUD_FLAT_SRC2 | IT66121_AUD_FLAT_SRC3);
+ 	} else {
+ 		ret = regmap_write_bits(ctx->regmap, IT66121_AUD_SRCVALID_FLAT_REG,
+ 					IT66121_AUD_FLAT_SRC0 | IT66121_AUD_FLAT_SRC1 |
+ 					IT66121_AUD_FLAT_SRC2 | IT66121_AUD_FLAT_SRC3,
+ 					0);
+ 	}
+
+ 	mutex_unlock(&ctx->lock);
+
+ 	return ret;
+ }
+
+ static int it66121_audio_get_eld(struct device *dev, void *data,
+ 				 u8 *buf, size_t len)
+ {
+ 	struct it66121_ctx *ctx = dev_get_drvdata(dev);
+
+ 	mutex_lock(&ctx->lock);
+
+ 	memcpy(buf, ctx->connector->eld,
+ 	       min(sizeof(ctx->connector->eld), len));
+
+ 	mutex_unlock(&ctx->lock);
+
+ 	return 0;
+ }
+
+ static const struct hdmi_codec_ops it66121_audio_codec_ops = {
+ 	.hw_params = it66121_audio_hw_params,
+ 	.audio_startup = it66121_audio_startup,
+ 	.audio_shutdown = it66121_audio_shutdown,
+ 	.mute_stream = it66121_audio_mute,
+ 	.get_eld = it66121_audio_get_eld,
+ 	.no_capture_mute = 1,
+ };
+
+ static int it66121_audio_codec_init(struct it66121_ctx *ctx, struct device *dev)
+ {
+ 	struct hdmi_codec_pdata codec_data = {
+ 		.ops = &it66121_audio_codec_ops,
+ 		.i2s = 1, /* Only i2s support for now */
+ 		.spdif = 0,
+ 		.max_i2s_channels = 8,
+ 	};
+
+ 	dev_dbg(dev, "%s\n", __func__);
+
+ 	if (!of_property_read_bool(dev->of_node, "#sound-dai-cells")) {
+ 		dev_info(dev, "No \"#sound-dai-cells\", no audio\n");
+ 		return 0;
+ 	}
+
+ 	ctx->audio.pdev = platform_device_register_data(dev,
+ 							HDMI_CODEC_DRV_NAME,
+ 							PLATFORM_DEVID_AUTO,
+ 							&codec_data,
+ 							sizeof(codec_data));
+
+ 	if (IS_ERR(ctx->audio.pdev)) {
+ 		dev_err(dev, "Failed to initialize HDMI audio codec: %d\n",
+ 			PTR_ERR_OR_ZERO(ctx->audio.pdev));
+ 	}
+
+ 	return PTR_ERR_OR_ZERO(ctx->audio.pdev);
+ }
+
  static int it66121_probe(struct i2c_client *client,
  			 const struct i2c_device_id *id)
  {
···
  		ite66121_power_off(ctx);
  		return ret;
  	}
+
+ 	it66121_audio_codec_init(ctx, dev);

  	drm_bridge_add(&ctx->bridge);
+43 -6
drivers/gpu/drm/bridge/lontium-lt9611.c
···
  }

  /* bridge funcs */
- static void lt9611_bridge_enable(struct drm_bridge *bridge)
+ static void
+ lt9611_bridge_atomic_enable(struct drm_bridge *bridge,
+ 			    struct drm_bridge_state *old_bridge_state)
  {
  	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
···
  	regmap_write(lt9611->regmap, 0x8130, 0xea);
  }

- static void lt9611_bridge_disable(struct drm_bridge *bridge)
+ static void
+ lt9611_bridge_atomic_disable(struct drm_bridge *bridge,
+ 			     struct drm_bridge_state *old_bridge_state)
  {
  	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
  	int ret;
···
  	lt9611->sleep = false;
  }

- static void lt9611_bridge_post_disable(struct drm_bridge *bridge)
+ static void
+ lt9611_bridge_atomic_post_disable(struct drm_bridge *bridge,
+ 				  struct drm_bridge_state *old_bridge_state)
  {
  	struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
···
  	lt9611_enable_hpd_interrupts(lt9611);
  }

+ #define MAX_INPUT_SEL_FORMATS	1
+
+ static u32 *
+ lt9611_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
+ 				 struct drm_bridge_state *bridge_state,
+ 				 struct drm_crtc_state *crtc_state,
+ 				 struct drm_connector_state *conn_state,
+ 				 u32 output_fmt,
+ 				 unsigned int *num_input_fmts)
+ {
+ 	u32 *input_fmts;
+
+ 	*num_input_fmts = 0;
+
+ 	input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts),
+ 			     GFP_KERNEL);
+ 	if (!input_fmts)
+ 		return NULL;
+
+ 	/* This is the DSI-end bus format */
+ 	input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
+ 	*num_input_fmts = 1;
+
+ 	return input_fmts;
+ }
+
  static const struct drm_bridge_funcs lt9611_bridge_funcs = {
  	.attach = lt9611_bridge_attach,
  	.mode_valid = lt9611_bridge_mode_valid,
- 	.enable = lt9611_bridge_enable,
- 	.disable = lt9611_bridge_disable,
- 	.post_disable = lt9611_bridge_post_disable,
  	.mode_set = lt9611_bridge_mode_set,
  	.detect = lt9611_bridge_detect,
  	.get_edid = lt9611_bridge_get_edid,
  	.hpd_enable = lt9611_bridge_hpd_enable,
+
+ 	.atomic_enable = lt9611_bridge_atomic_enable,
+ 	.atomic_disable = lt9611_bridge_atomic_disable,
+ 	.atomic_post_disable = lt9611_bridge_atomic_post_disable,
+ 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+ 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+ 	.atomic_reset = drm_atomic_helper_bridge_reset,
+ 	.atomic_get_input_bus_fmts = lt9611_atomic_get_input_bus_fmts,
  };

  static int lt9611_parse_dt(struct device *dev,
+5 -25
drivers/gpu/drm/bridge/nwl-dsi.c
···
  #include <drm/drm_bridge.h>
  #include <drm/drm_mipi_dsi.h>
  #include <drm/drm_of.h>
- #include <drm/drm_panel.h>
  #include <drm/drm_print.h>

  #include <video/mipi_display.h>
···
  	/* Save the new desired phy config */
  	memcpy(&dsi->phy_cfg, &new_cfg, sizeof(new_cfg));

- 	memcpy(&dsi->mode, adjusted_mode, sizeof(dsi->mode));
+ 	drm_mode_copy(&dsi->mode, adjusted_mode);
  	drm_mode_debug_printmodeline(adjusted_mode);

  	if (pm_runtime_resume_and_get(dev) < 0)
···
  {
  	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
  	struct drm_bridge *panel_bridge;
- 	struct drm_panel *panel;
- 	int ret;

- 	ret = drm_of_find_panel_or_bridge(dsi->dev->of_node, 1, 0, &panel,
- 					  &panel_bridge);
- 	if (ret)
- 		return ret;
-
- 	if (panel) {
- 		panel_bridge = drm_panel_bridge_add(panel);
- 		if (IS_ERR(panel_bridge))
- 			return PTR_ERR(panel_bridge);
- 	}
-
- 	if (!panel_bridge)
- 		return -EPROBE_DEFER;
+ 	panel_bridge = devm_drm_of_get_bridge(dsi->dev, dsi->dev->of_node, 1, 0);
+ 	if (IS_ERR(panel_bridge))
+ 		return PTR_ERR(panel_bridge);

  	return drm_bridge_attach(bridge->encoder, panel_bridge, bridge, flags);
- }
-
- static void nwl_dsi_bridge_detach(struct drm_bridge *bridge)
- {	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
-
- 	drm_of_panel_bridge_remove(dsi->dev->of_node, 1, 0);
  }

  static u32 *nwl_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
···
  	.mode_set = nwl_dsi_bridge_mode_set,
  	.mode_valid = nwl_dsi_bridge_mode_valid,
  	.attach = nwl_dsi_bridge_attach,
- 	.detach = nwl_dsi_bridge_detach,
  };

  static int nwl_dsi_parse_dt(struct nwl_dsi *dsi)
···
  static const struct soc_device_attribute nwl_dsi_quirks_match[] = {
  	{ .soc_id = "i.MX8MQ", .revision = "2.0",
  	  .data = (void *)E11418_HS_MODE_QUIRK },
- 	{ /* sentinel. */ },
+ 	{ /* sentinel. */ }
  };

  static int nwl_dsi_probe(struct platform_device *pdev)
+1 -6
drivers/gpu/drm/bridge/nxp-ptn3460.c
···
  	struct device *dev = &client->dev;
  	struct ptn3460_bridge *ptn_bridge;
  	struct drm_bridge *panel_bridge;
- 	struct drm_panel *panel;
  	int ret;

  	ptn_bridge = devm_kzalloc(dev, sizeof(*ptn_bridge), GFP_KERNEL);
···
  		return -ENOMEM;
  	}

- 	ret = drm_of_find_panel_or_bridge(dev->of_node, 0, 0, &panel, NULL);
- 	if (ret)
- 		return ret;
-
- 	panel_bridge = devm_drm_panel_bridge_add(dev, panel);
+ 	panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 0, 0);
  	if (IS_ERR(panel_bridge))
  		return PTR_ERR(panel_bridge);
+3
drivers/gpu/drm/bridge/panel.c
···
  	drm_connector_attach_encoder(&panel_bridge->connector,
  				     bridge->encoder);

+ 	if (connector->funcs->reset)
+ 		connector->funcs->reset(connector);
+
  	return 0;
  }
+1 -6
drivers/gpu/drm/bridge/parade-ps8622.c
···
  	struct device *dev = &client->dev;
  	struct ps8622_bridge *ps8622;
  	struct drm_bridge *panel_bridge;
- 	struct drm_panel *panel;
  	int ret;

  	ps8622 = devm_kzalloc(dev, sizeof(*ps8622), GFP_KERNEL);
  	if (!ps8622)
  		return -ENOMEM;

- 	ret = drm_of_find_panel_or_bridge(dev->of_node, 0, 0, &panel, NULL);
- 	if (ret)
- 		return ret;
-
- 	panel_bridge = devm_drm_panel_bridge_add(dev, panel);
+ 	panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 0, 0);
  	if (IS_ERR(panel_bridge))
  		return PTR_ERR(panel_bridge);
+1 -8
drivers/gpu/drm/bridge/parade-ps8640.c
···
  	struct device *dev = &client->dev;
  	struct device_node *np = dev->of_node;
  	struct ps8640 *ps_bridge;
- 	struct drm_panel *panel;
  	int ret;
  	u32 i;
···
  	devm_of_dp_aux_populate_ep_devices(&ps_bridge->aux);

  	/* port@1 is ps8640 output port */
- 	ret = drm_of_find_panel_or_bridge(np, 1, 0, &panel, NULL);
- 	if (ret < 0)
- 		return ret;
- 	if (!panel)
- 		return -ENODEV;
-
- 	ps_bridge->panel_bridge = devm_drm_panel_bridge_add(dev, panel);
+ 	ps_bridge->panel_bridge = devm_drm_of_get_bridge(dev, np, 1, 0);
  	if (IS_ERR(ps_bridge->panel_bridge))
  		return PTR_ERR(ps_bridge->panel_bridge);
+1 -1
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
···
  	mutex_lock(&hdmi->mutex);

  	/* Store the display mode for plugin/DKMS poweron events */
- 	memcpy(&hdmi->previous_mode, mode, sizeof(hdmi->previous_mode));
+ 	drm_mode_copy(&hdmi->previous_mode, mode);

  	mutex_unlock(&hdmi->mutex);
  }
+7 -44
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
···
  	struct clk *pclk;

- 	bool device_found;
  	unsigned int lane_mbps; /* per lane */
  	u32 channel;
  	u32 lanes;
···
  	return readl(dsi->base + reg);
  }

- static int dw_mipi_dsi_panel_or_bridge(struct dw_mipi_dsi *dsi,
- 				       struct device_node *node)
- {
- 	struct drm_bridge *bridge;
- 	struct drm_panel *panel;
- 	int ret;
-
- 	ret = drm_of_find_panel_or_bridge(node, 1, 0, &panel, &bridge);
- 	if (ret)
- 		return ret;
-
- 	if (panel) {
- 		bridge = drm_panel_bridge_add_typed(panel,
- 						    DRM_MODE_CONNECTOR_DSI);
- 		if (IS_ERR(bridge))
- 			return PTR_ERR(bridge);
- 	}
-
- 	dsi->panel_bridge = bridge;
-
- 	if (!dsi->panel_bridge)
- 		return -EPROBE_DEFER;
-
- 	return 0;
- }
-
  static int dw_mipi_dsi_host_attach(struct mipi_dsi_host *host,
  				   struct mipi_dsi_device *device)
  {
  	struct dw_mipi_dsi *dsi = host_to_dsi(host);
  	const struct dw_mipi_dsi_plat_data *pdata = dsi->plat_data;
+ 	struct drm_bridge *bridge;
  	int ret;

  	if (device->lanes > dsi->plat_data->max_data_lanes) {
···
  	dsi->format = device->format;
  	dsi->mode_flags = device->mode_flags;

- 	if (!dsi->device_found) {
- 		ret = dw_mipi_dsi_panel_or_bridge(dsi, host->dev->of_node);
- 		if (ret)
- 			return ret;
+ 	bridge = devm_drm_of_get_bridge(dsi->dev, dsi->dev->of_node, 1, 0);
+ 	if (IS_ERR(bridge))
+ 		return PTR_ERR(bridge);

- 		dsi->device_found = true;
- 	}
+ 	dsi->panel_bridge = bridge;
+
+ 	drm_bridge_add(&dsi->bridge);

  	if (pdata->host_ops && pdata->host_ops->attach) {
  		ret = pdata->host_ops->attach(pdata->priv_data, device);
···
  	/* Set the encoder type as caller does not know it */
  	bridge->encoder->encoder_type = DRM_MODE_ENCODER_DSI;

- 	if (!dsi->device_found) {
- 		int ret;
-
- 		ret = dw_mipi_dsi_panel_or_bridge(dsi, dsi->dev->of_node);
- 		if (ret)
- 			return ret;
-
- 		dsi->device_found = true;
- 	}
-
  	/* Attach the panel-bridge to the dsi bridge */
  	return drm_bridge_attach(bridge->encoder, dsi->panel_bridge, bridge,
  				 flags);
···
  #ifdef CONFIG_OF
  	dsi->bridge.of_node = pdev->dev.of_node;
  #endif
- 	drm_bridge_add(&dsi->bridge);

  	return dsi;
  }
+1 -8
drivers/gpu/drm/bridge/tc358762.c
···
  {
  	struct drm_bridge *panel_bridge;
  	struct device *dev = ctx->dev;
- 	struct drm_panel *panel;
- 	int ret;

- 	ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, NULL);
- 	if (ret)
- 		return ret;
-
- 	panel_bridge = devm_drm_panel_bridge_add(dev, panel);
-
+ 	panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0);
  	if (IS_ERR(panel_bridge))
  		return PTR_ERR(panel_bridge);
+6 -98
drivers/gpu/drm/bridge/tc358764.c
···
  #include <video/mipi_display.h>

  #include <drm/drm_atomic_helper.h>
- #include <drm/drm_bridge.h>
- #include <drm/drm_crtc.h>
- #include <drm/drm_fb_helper.h>
  #include <drm/drm_mipi_dsi.h>
  #include <drm/drm_of.h>
- #include <drm/drm_panel.h>
  #include <drm/drm_print.h>
- #include <drm/drm_probe_helper.h>

  #define FLD_MASK(start, end)	(((1 << ((start) - (end) + 1)) - 1) << (end))
  #define FLD_VAL(val, start, end)	(((val) << (end)) & FLD_MASK(start, end))
···
  struct tc358764 {
  	struct device *dev;
  	struct drm_bridge bridge;
- 	struct drm_connector connector;
+ 	struct drm_bridge *next_bridge;
  	struct regulator_bulk_data supplies[ARRAY_SIZE(tc358764_supplies)];
  	struct gpio_desc *gpio_reset;
- 	struct drm_panel *panel;
  	int error;
  };
···
  static inline struct tc358764 *bridge_to_tc358764(struct drm_bridge *bridge)
  {
  	return container_of(bridge, struct tc358764, bridge);
- }
-
- static inline
- struct tc358764 *connector_to_tc358764(struct drm_connector *connector)
- {
- 	return container_of(connector, struct tc358764, connector);
  }

  static int tc358764_init(struct tc358764 *ctx)
···
  	usleep_range(1000, 2000);
  }

- static int tc358764_get_modes(struct drm_connector *connector)
- {
- 	struct tc358764 *ctx = connector_to_tc358764(connector);
-
- 	return drm_panel_get_modes(ctx->panel, connector);
- }
-
- static const
- struct drm_connector_helper_funcs tc358764_connector_helper_funcs = {
- 	.get_modes = tc358764_get_modes,
- };
-
- static const struct drm_connector_funcs tc358764_connector_funcs = {
- 	.fill_modes = drm_helper_probe_single_connector_modes,
- 	.destroy = drm_connector_cleanup,
- 	.reset = drm_atomic_helper_connector_reset,
- 	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
- 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
- };
-
- static void tc358764_disable(struct drm_bridge *bridge)
- {
- 	struct tc358764 *ctx = bridge_to_tc358764(bridge);
- 	int ret = drm_panel_disable(bridge_to_tc358764(bridge)->panel);
-
- 	if (ret < 0)
- 		dev_err(ctx->dev, "error disabling panel (%d)\n", ret);
- }
-
  static void tc358764_post_disable(struct drm_bridge *bridge)
  {
  	struct tc358764 *ctx = bridge_to_tc358764(bridge);
  	int ret;

- 	ret = drm_panel_unprepare(ctx->panel);
- 	if (ret < 0)
- 		dev_err(ctx->dev, "error unpreparing panel (%d)\n", ret);
  	tc358764_reset(ctx);
  	usleep_range(10000, 15000);
  	ret = regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
···
  	ret = tc358764_init(ctx);
  	if (ret < 0)
  		dev_err(ctx->dev, "error initializing bridge (%d)\n", ret);
- 	ret = drm_panel_prepare(ctx->panel);
- 	if (ret < 0)
- 		dev_err(ctx->dev, "error preparing panel (%d)\n", ret);
- }
-
- static void tc358764_enable(struct drm_bridge *bridge)
- {
- 	struct tc358764 *ctx = bridge_to_tc358764(bridge);
- 	int ret = drm_panel_enable(ctx->panel);
-
- 	if (ret < 0)
- 		dev_err(ctx->dev, "error enabling panel (%d)\n", ret);
  }

  static int tc358764_attach(struct drm_bridge *bridge,
  			   enum drm_bridge_attach_flags flags)
  {
  	struct tc358764 *ctx = bridge_to_tc358764(bridge);
- 	struct drm_device *drm = bridge->dev;
- 	int ret;

- 	if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) {
- 		DRM_ERROR("Fix bridge driver to make connector optional!");
- 		return -EINVAL;
- 	}
-
- 	ctx->connector.polled = DRM_CONNECTOR_POLL_HPD;
- 	ret = drm_connector_init(drm, &ctx->connector,
- 				 &tc358764_connector_funcs,
- 				 DRM_MODE_CONNECTOR_LVDS);
- 	if (ret) {
- 		DRM_ERROR("Failed to initialize connector\n");
- 		return ret;
- 	}
-
- 	drm_connector_helper_add(&ctx->connector,
- 				 &tc358764_connector_helper_funcs);
- 	drm_connector_attach_encoder(&ctx->connector, bridge->encoder);
- 	ctx->connector.funcs->reset(&ctx->connector);
- 	drm_connector_register(&ctx->connector);
-
- 	return 0;
- }
-
- static void tc358764_detach(struct drm_bridge *bridge)
- {
- 	struct tc358764 *ctx = bridge_to_tc358764(bridge);
-
- 	drm_connector_unregister(&ctx->connector);
- 	ctx->panel = NULL;
- 	drm_connector_put(&ctx->connector);
+ 	return drm_bridge_attach(bridge->encoder, ctx->next_bridge, bridge, flags);
  }

  static const struct drm_bridge_funcs tc358764_bridge_funcs = {
- 	.disable = tc358764_disable,
  	.post_disable = tc358764_post_disable,
- 	.enable = tc358764_enable,
  	.pre_enable = tc358764_pre_enable,
  	.attach = tc358764_attach,
- 	.detach = tc358764_detach,
  };

  static int tc358764_parse_dt(struct tc358764 *ctx)
  {
  	struct device *dev = ctx->dev;
- 	int ret;

  	ctx->gpio_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
  	if (IS_ERR(ctx->gpio_reset)) {
···
  		return PTR_ERR(ctx->gpio_reset);
  	}

- 	ret = drm_of_find_panel_or_bridge(ctx->dev->of_node, 1, 0, &ctx->panel,
- 					  NULL);
- 	if (ret && ret != -EPROBE_DEFER)
- 		dev_err(dev, "cannot find panel (%d)\n", ret);
+ 	ctx->next_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0);
+ 	if (IS_ERR(ctx->next_bridge))
+ 		return PTR_ERR(ctx->next_bridge);

- 	return ret;
+ 	return 0;
  }

  static int tc358764_configure_regulators(struct tc358764 *ctx)
+511 -74
drivers/gpu/drm/bridge/tc358767.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 - * tc358767 eDP bridge driver 3 + * TC358767/TC358867/TC9595 DSI/DPI-to-DPI/(e)DP bridge driver 4 + * 5 + * The TC358767/TC358867/TC9595 can operate in multiple modes. 6 + * The following modes are supported: 7 + * DPI->(e)DP -- supported 8 + * DSI->DPI .... supported 9 + * DSI->(e)DP .. NOT supported 4 10 * 5 11 * Copyright (C) 2016 CogentEmbedded Inc 6 12 * Author: Andrey Gusakov <andrey.gusakov@cogentembedded.com> ··· 35 29 #include <drm/drm_bridge.h> 36 30 #include <drm/dp/drm_dp_helper.h> 37 31 #include <drm/drm_edid.h> 32 + #include <drm/drm_mipi_dsi.h> 38 33 #include <drm/drm_of.h> 39 34 #include <drm/drm_panel.h> 40 35 #include <drm/drm_print.h> ··· 43 36 44 37 /* Registers */ 45 38 46 - /* Display Parallel Interface */ 39 + /* PPI layer registers */ 40 + #define PPI_STARTPPI 0x0104 /* START control bit */ 41 + #define PPI_LPTXTIMECNT 0x0114 /* LPTX timing signal */ 42 + #define LPX_PERIOD 3 43 + #define PPI_LANEENABLE 0x0134 44 + #define PPI_TX_RX_TA 0x013c 45 + #define TTA_GET 0x40000 46 + #define TTA_SURE 6 47 + #define PPI_D0S_ATMR 0x0144 48 + #define PPI_D1S_ATMR 0x0148 49 + #define PPI_D0S_CLRSIPOCOUNT 0x0164 /* Assertion timer for Lane 0 */ 50 + #define PPI_D1S_CLRSIPOCOUNT 0x0168 /* Assertion timer for Lane 1 */ 51 + #define PPI_D2S_CLRSIPOCOUNT 0x016c /* Assertion timer for Lane 2 */ 52 + #define PPI_D3S_CLRSIPOCOUNT 0x0170 /* Assertion timer for Lane 3 */ 53 + #define PPI_START_FUNCTION BIT(0) 54 + 55 + /* DSI layer registers */ 56 + #define DSI_STARTDSI 0x0204 /* START control bit of DSI-TX */ 57 + #define DSI_LANEENABLE 0x0210 /* Enables each lane */ 58 + #define DSI_RX_START BIT(0) 59 + 60 + /* Lane enable PPI and DSI register bits */ 61 + #define LANEENABLE_CLEN BIT(0) 62 + #define LANEENABLE_L0EN BIT(1) 63 + #define LANEENABLE_L1EN BIT(2) 64 + #define LANEENABLE_L2EN BIT(1) 65 + #define LANEENABLE_L3EN BIT(2) 66 + 67 + /* Display Parallel Input Interface */ 47 68 #define DPIPXLFMT 
0x0440 48 69 #define VS_POL_ACTIVE_LOW (1 << 10) 49 70 #define HS_POL_ACTIVE_LOW (1 << 9) ··· 82 47 #define DPI_BPP_RGB888 (0 << 0) 83 48 #define DPI_BPP_RGB666 (1 << 0) 84 49 #define DPI_BPP_RGB565 (2 << 0) 50 + 51 + /* Display Parallel Output Interface */ 52 + #define POCTRL 0x0448 53 + #define POCTRL_S2P BIT(7) 54 + #define POCTRL_PCLK_POL BIT(3) 55 + #define POCTRL_VS_POL BIT(2) 56 + #define POCTRL_HS_POL BIT(1) 57 + #define POCTRL_DE_POL BIT(0) 85 58 86 59 /* Video Path */ 87 60 #define VPCTRL0 0x0450 ··· 289 246 struct drm_bridge bridge; 290 247 struct drm_bridge *panel_bridge; 291 248 struct drm_connector connector; 249 + 250 + struct mipi_dsi_device *dsi; 251 + u8 dsi_lanes; 292 252 293 253 /* link settings */ 294 254 struct tc_edp_link link; ··· 515 469 int mul, best_mul = 1; 516 470 int delta, best_delta; 517 471 int ext_div[] = {1, 2, 3, 5, 7}; 472 + int clk_min, clk_max; 518 473 int best_pixelclock = 0; 519 474 int vco_hi = 0; 520 475 u32 pxl_pllparam; 476 + 477 + /* 478 + * refclk * mul / (ext_pre_div * pre_div) should be in range: 479 + * - DPI ..... 0 to 100 MHz 480 + * - (e)DP ... 
150 to 650 MHz 481 + */ 482 + if (tc->bridge.type == DRM_MODE_CONNECTOR_DPI) { 483 + clk_min = 0; 484 + clk_max = 100000000; 485 + } else { 486 + clk_min = 150000000; 487 + clk_max = 650000000; 488 + } 521 489 522 490 dev_dbg(tc->dev, "PLL: requested %d pixelclock, ref %d\n", pixelclock, 523 491 refclk); ··· 559 499 continue; 560 500 561 501 clk = (refclk / ext_div[i_pre] / div) * mul; 562 - /* 563 - * refclk * mul / (ext_pre_div * pre_div) 564 - * should be in the 150 to 650 MHz range 565 - */ 566 - if ((clk > 650000000) || (clk < 150000000)) 502 + if ((clk > clk_max) || (clk < clk_min)) 567 503 continue; 568 504 569 505 clk = clk / ext_div[i_post]; ··· 712 656 if (ret) 713 657 goto err; 714 658 659 + /* Register DP AUX channel */ 660 + tc->aux.name = "TC358767 AUX i2c adapter"; 661 + tc->aux.dev = tc->dev; 662 + tc->aux.transfer = tc_aux_transfer; 663 + drm_dp_aux_init(&tc->aux); 664 + 715 665 return 0; 716 666 err: 717 667 dev_err(tc->dev, "tc_aux_link_setup failed: %d\n", ret); ··· 790 728 return ret; 791 729 } 792 730 793 - static int tc_set_video_mode(struct tc_data *tc, 794 - const struct drm_display_mode *mode) 731 + static int tc_set_common_video_mode(struct tc_data *tc, 732 + const struct drm_display_mode *mode) 795 733 { 796 - int ret; 797 - int vid_sync_dly; 798 - int max_tu_symbol; 799 - 800 734 int left_margin = mode->htotal - mode->hsync_end; 801 735 int right_margin = mode->hsync_start - mode->hdisplay; 802 736 int hsync_len = mode->hsync_end - mode->hsync_start; 803 737 int upper_margin = mode->vtotal - mode->vsync_end; 804 738 int lower_margin = mode->vsync_start - mode->vdisplay; 805 739 int vsync_len = mode->vsync_end - mode->vsync_start; 806 - u32 dp0_syncval; 807 - u32 bits_per_pixel = 24; 808 - u32 in_bw, out_bw; 809 - 810 - /* 811 - * Recommended maximum number of symbols transferred in a transfer unit: 812 - * DIV_ROUND_UP((input active video bandwidth in bytes) * tu_size, 813 - * (output active video bandwidth in bytes)) 814 - * Must be 
less than tu_size. 815 - */ 816 - 817 - in_bw = mode->clock * bits_per_pixel / 8; 818 - out_bw = tc->link.num_lanes * tc->link.rate; 819 - max_tu_symbol = DIV_ROUND_UP(in_bw * TU_SIZE_RECOMMENDED, out_bw); 740 + int ret; 820 741 821 742 dev_dbg(tc->dev, "set mode %dx%d\n", 822 743 mode->hdisplay, mode->vdisplay); ··· 857 812 FIELD_PREP(COLOR_B, 99) | 858 813 ENI2CFILTER | 859 814 FIELD_PREP(COLOR_BAR_MODE, COLOR_BAR_MODE_BARS)); 860 - if (ret) 861 - return ret; 815 + 816 + return ret; 817 + } 818 + 819 + static int tc_set_dpi_video_mode(struct tc_data *tc, 820 + const struct drm_display_mode *mode) 821 + { 822 + u32 value = POCTRL_S2P; 823 + 824 + if (tc->mode.flags & DRM_MODE_FLAG_NHSYNC) 825 + value |= POCTRL_HS_POL; 826 + 827 + if (tc->mode.flags & DRM_MODE_FLAG_NVSYNC) 828 + value |= POCTRL_VS_POL; 829 + 830 + return regmap_write(tc->regmap, POCTRL, value); 831 + } 832 + 833 + static int tc_set_edp_video_mode(struct tc_data *tc, 834 + const struct drm_display_mode *mode) 835 + { 836 + int ret; 837 + int vid_sync_dly; 838 + int max_tu_symbol; 839 + 840 + int left_margin = mode->htotal - mode->hsync_end; 841 + int hsync_len = mode->hsync_end - mode->hsync_start; 842 + int upper_margin = mode->vtotal - mode->vsync_end; 843 + int vsync_len = mode->vsync_end - mode->vsync_start; 844 + u32 dp0_syncval; 845 + u32 bits_per_pixel = 24; 846 + u32 in_bw, out_bw; 847 + 848 + /* 849 + * Recommended maximum number of symbols transferred in a transfer unit: 850 + * DIV_ROUND_UP((input active video bandwidth in bytes) * tu_size, 851 + * (output active video bandwidth in bytes)) 852 + * Must be less than tu_size. 
853 + */ 854 + 855 + in_bw = mode->clock * bits_per_pixel / 8; 856 + out_bw = tc->link.num_lanes * tc->link.rate; 857 + max_tu_symbol = DIV_ROUND_UP(in_bw * TU_SIZE_RECOMMENDED, out_bw); 862 858 863 859 /* DP Main Stream Attributes */ 864 860 vid_sync_dly = hsync_len + left_margin + mode->hdisplay; ··· 949 863 FIELD_PREP(MAX_TU_SYMBOL, max_tu_symbol) | 950 864 FIELD_PREP(TU_SIZE, TU_SIZE_RECOMMENDED) | 951 865 BPC_8); 952 - if (ret) 953 - return ret; 954 - 955 - return 0; 866 + return ret; 956 867 } 957 868 958 869 static int tc_wait_link_training(struct tc_data *tc) ··· 1247 1164 return regmap_write(tc->regmap, DP0CTL, 0); 1248 1165 } 1249 1166 1250 - static int tc_stream_enable(struct tc_data *tc) 1167 + static int tc_dpi_stream_enable(struct tc_data *tc) 1168 + { 1169 + int ret; 1170 + u32 value; 1171 + 1172 + dev_dbg(tc->dev, "enable video stream\n"); 1173 + 1174 + /* Setup PLL */ 1175 + ret = tc_set_syspllparam(tc); 1176 + if (ret) 1177 + return ret; 1178 + 1179 + /* 1180 + * Initially PLLs are in bypass. 
Force PLL parameter update, 1181 + * disable PLL bypass, enable PLL 1182 + */ 1183 + ret = tc_pllupdate(tc, DP0_PLLCTRL); 1184 + if (ret) 1185 + return ret; 1186 + 1187 + ret = tc_pllupdate(tc, DP1_PLLCTRL); 1188 + if (ret) 1189 + return ret; 1190 + 1191 + /* Pixel PLL must always be enabled for DPI mode */ 1192 + ret = tc_pxl_pll_en(tc, clk_get_rate(tc->refclk), 1193 + 1000 * tc->mode.clock); 1194 + if (ret) 1195 + return ret; 1196 + 1197 + regmap_write(tc->regmap, PPI_D0S_CLRSIPOCOUNT, 3); 1198 + regmap_write(tc->regmap, PPI_D1S_CLRSIPOCOUNT, 3); 1199 + regmap_write(tc->regmap, PPI_D2S_CLRSIPOCOUNT, 3); 1200 + regmap_write(tc->regmap, PPI_D3S_CLRSIPOCOUNT, 3); 1201 + regmap_write(tc->regmap, PPI_D0S_ATMR, 0); 1202 + regmap_write(tc->regmap, PPI_D1S_ATMR, 0); 1203 + regmap_write(tc->regmap, PPI_TX_RX_TA, TTA_GET | TTA_SURE); 1204 + regmap_write(tc->regmap, PPI_LPTXTIMECNT, LPX_PERIOD); 1205 + 1206 + value = ((LANEENABLE_L0EN << tc->dsi_lanes) - LANEENABLE_L0EN) | 1207 + LANEENABLE_CLEN; 1208 + regmap_write(tc->regmap, PPI_LANEENABLE, value); 1209 + regmap_write(tc->regmap, DSI_LANEENABLE, value); 1210 + 1211 + ret = tc_set_common_video_mode(tc, &tc->mode); 1212 + if (ret) 1213 + return ret; 1214 + 1215 + ret = tc_set_dpi_video_mode(tc, &tc->mode); 1216 + if (ret) 1217 + return ret; 1218 + 1219 + /* Set input interface */ 1220 + value = DP0_AUDSRC_NO_INPUT; 1221 + if (tc_test_pattern) 1222 + value |= DP0_VIDSRC_COLOR_BAR; 1223 + else 1224 + value |= DP0_VIDSRC_DSI_RX; 1225 + ret = regmap_write(tc->regmap, SYSCTRL, value); 1226 + if (ret) 1227 + return ret; 1228 + 1229 + usleep_range(120, 150); 1230 + 1231 + regmap_write(tc->regmap, PPI_STARTPPI, PPI_START_FUNCTION); 1232 + regmap_write(tc->regmap, DSI_STARTDSI, DSI_RX_START); 1233 + 1234 + return 0; 1235 + } 1236 + 1237 + static int tc_dpi_stream_disable(struct tc_data *tc) 1238 + { 1239 + dev_dbg(tc->dev, "disable video stream\n"); 1240 + 1241 + tc_pxl_pll_dis(tc); 1242 + 1243 + return 0; 1244 + } 1245 + 1246 + 
static int tc_edp_stream_enable(struct tc_data *tc) 1251 1247 { 1252 1248 int ret; 1253 1249 u32 value; ··· 1341 1179 return ret; 1342 1180 } 1343 1181 1344 - ret = tc_set_video_mode(tc, &tc->mode); 1182 + ret = tc_set_common_video_mode(tc, &tc->mode); 1183 + if (ret) 1184 + return ret; 1185 + 1186 + ret = tc_set_edp_video_mode(tc, &tc->mode); 1345 1187 if (ret) 1346 1188 return ret; 1347 1189 ··· 1385 1219 return 0; 1386 1220 } 1387 1221 1388 - static int tc_stream_disable(struct tc_data *tc) 1222 + static int tc_edp_stream_disable(struct tc_data *tc) 1389 1223 { 1390 1224 int ret; 1391 1225 ··· 1400 1234 return 0; 1401 1235 } 1402 1236 1403 - static void tc_bridge_enable(struct drm_bridge *bridge) 1237 + static void 1238 + tc_dpi_bridge_atomic_enable(struct drm_bridge *bridge, 1239 + struct drm_bridge_state *old_bridge_state) 1240 + 1241 + { 1242 + struct tc_data *tc = bridge_to_tc(bridge); 1243 + int ret; 1244 + 1245 + ret = tc_dpi_stream_enable(tc); 1246 + if (ret < 0) { 1247 + dev_err(tc->dev, "main link stream start error: %d\n", ret); 1248 + tc_main_link_disable(tc); 1249 + return; 1250 + } 1251 + } 1252 + 1253 + static void 1254 + tc_dpi_bridge_atomic_disable(struct drm_bridge *bridge, 1255 + struct drm_bridge_state *old_bridge_state) 1256 + { 1257 + struct tc_data *tc = bridge_to_tc(bridge); 1258 + int ret; 1259 + 1260 + ret = tc_dpi_stream_disable(tc); 1261 + if (ret < 0) 1262 + dev_err(tc->dev, "main link stream stop error: %d\n", ret); 1263 + } 1264 + 1265 + static void 1266 + tc_edp_bridge_atomic_enable(struct drm_bridge *bridge, 1267 + struct drm_bridge_state *old_bridge_state) 1404 1268 { 1405 1269 struct tc_data *tc = bridge_to_tc(bridge); 1406 1270 int ret; ··· 1447 1251 return; 1448 1252 } 1449 1253 1450 - ret = tc_stream_enable(tc); 1254 + ret = tc_edp_stream_enable(tc); 1451 1255 if (ret < 0) { 1452 1256 dev_err(tc->dev, "main link stream start error: %d\n", ret); 1453 1257 tc_main_link_disable(tc); ··· 1455 1259 } 1456 1260 } 1457 1261 1458 - 
static void tc_bridge_disable(struct drm_bridge *bridge) 1262 + static void 1263 + tc_edp_bridge_atomic_disable(struct drm_bridge *bridge, 1264 + struct drm_bridge_state *old_bridge_state) 1459 1265 { 1460 1266 struct tc_data *tc = bridge_to_tc(bridge); 1461 1267 int ret; 1462 1268 1463 - ret = tc_stream_disable(tc); 1269 + ret = tc_edp_stream_disable(tc); 1464 1270 if (ret < 0) 1465 1271 dev_err(tc->dev, "main link stream stop error: %d\n", ret); 1466 1272 ··· 1483 1285 return true; 1484 1286 } 1485 1287 1486 - static enum drm_mode_status tc_mode_valid(struct drm_bridge *bridge, 1487 - const struct drm_display_info *info, 1488 - const struct drm_display_mode *mode) 1288 + static int tc_common_atomic_check(struct drm_bridge *bridge, 1289 + struct drm_bridge_state *bridge_state, 1290 + struct drm_crtc_state *crtc_state, 1291 + struct drm_connector_state *conn_state, 1292 + const unsigned int max_khz) 1293 + { 1294 + tc_bridge_mode_fixup(bridge, &crtc_state->mode, 1295 + &crtc_state->adjusted_mode); 1296 + 1297 + if (crtc_state->adjusted_mode.clock > max_khz) 1298 + return -EINVAL; 1299 + 1300 + return 0; 1301 + } 1302 + 1303 + static int tc_dpi_atomic_check(struct drm_bridge *bridge, 1304 + struct drm_bridge_state *bridge_state, 1305 + struct drm_crtc_state *crtc_state, 1306 + struct drm_connector_state *conn_state) 1307 + { 1308 + /* DSI->DPI interface clock limitation: upto 100 MHz */ 1309 + return tc_common_atomic_check(bridge, bridge_state, crtc_state, 1310 + conn_state, 100000); 1311 + } 1312 + 1313 + static int tc_edp_atomic_check(struct drm_bridge *bridge, 1314 + struct drm_bridge_state *bridge_state, 1315 + struct drm_crtc_state *crtc_state, 1316 + struct drm_connector_state *conn_state) 1317 + { 1318 + /* DPI->(e)DP interface clock limitation: upto 154 MHz */ 1319 + return tc_common_atomic_check(bridge, bridge_state, crtc_state, 1320 + conn_state, 154000); 1321 + } 1322 + 1323 + static enum drm_mode_status 1324 + tc_dpi_mode_valid(struct drm_bridge *bridge, 
1325 + const struct drm_display_info *info, 1326 + const struct drm_display_mode *mode) 1327 + { 1328 + /* DPI interface clock limitation: upto 100 MHz */ 1329 + if (mode->clock > 100000) 1330 + return MODE_CLOCK_HIGH; 1331 + 1332 + return MODE_OK; 1333 + } 1334 + 1335 + static enum drm_mode_status 1336 + tc_edp_mode_valid(struct drm_bridge *bridge, 1337 + const struct drm_display_info *info, 1338 + const struct drm_display_mode *mode) 1489 1339 { 1490 1340 struct tc_data *tc = bridge_to_tc(bridge); 1491 1341 u32 req, avail; ··· 1558 1312 { 1559 1313 struct tc_data *tc = bridge_to_tc(bridge); 1560 1314 1561 - tc->mode = *mode; 1315 + drm_mode_copy(&tc->mode, mode); 1562 1316 } 1563 1317 1564 1318 static struct edid *tc_get_edid(struct drm_bridge *bridge, ··· 1641 1395 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 1642 1396 }; 1643 1397 1644 - static int tc_bridge_attach(struct drm_bridge *bridge, 1645 - enum drm_bridge_attach_flags flags) 1398 + static int tc_dpi_bridge_attach(struct drm_bridge *bridge, 1399 + enum drm_bridge_attach_flags flags) 1400 + { 1401 + struct tc_data *tc = bridge_to_tc(bridge); 1402 + 1403 + if (!tc->panel_bridge) 1404 + return 0; 1405 + 1406 + return drm_bridge_attach(tc->bridge.encoder, tc->panel_bridge, 1407 + &tc->bridge, flags); 1408 + } 1409 + 1410 + static int tc_edp_bridge_attach(struct drm_bridge *bridge, 1411 + enum drm_bridge_attach_flags flags) 1646 1412 { 1647 1413 u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24; 1648 1414 struct tc_data *tc = bridge_to_tc(bridge); ··· 1706 1448 return ret; 1707 1449 } 1708 1450 1709 - static void tc_bridge_detach(struct drm_bridge *bridge) 1451 + static void tc_edp_bridge_detach(struct drm_bridge *bridge) 1710 1452 { 1711 1453 drm_dp_aux_unregister(&bridge_to_tc(bridge)->aux); 1712 1454 } 1713 1455 1714 - static const struct drm_bridge_funcs tc_bridge_funcs = { 1715 - .attach = tc_bridge_attach, 1716 - .detach = tc_bridge_detach, 1717 - .mode_valid = tc_mode_valid, 1456 + 
#define MAX_INPUT_SEL_FORMATS 1 1457 + 1458 + static u32 * 1459 + tc_dpi_atomic_get_input_bus_fmts(struct drm_bridge *bridge, 1460 + struct drm_bridge_state *bridge_state, 1461 + struct drm_crtc_state *crtc_state, 1462 + struct drm_connector_state *conn_state, 1463 + u32 output_fmt, 1464 + unsigned int *num_input_fmts) 1465 + { 1466 + u32 *input_fmts; 1467 + 1468 + *num_input_fmts = 0; 1469 + 1470 + input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts), 1471 + GFP_KERNEL); 1472 + if (!input_fmts) 1473 + return NULL; 1474 + 1475 + /* This is the DSI-end bus format */ 1476 + input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24; 1477 + *num_input_fmts = 1; 1478 + 1479 + return input_fmts; 1480 + } 1481 + 1482 + static const struct drm_bridge_funcs tc_dpi_bridge_funcs = { 1483 + .attach = tc_dpi_bridge_attach, 1484 + .mode_valid = tc_dpi_mode_valid, 1718 1485 .mode_set = tc_bridge_mode_set, 1719 - .enable = tc_bridge_enable, 1720 - .disable = tc_bridge_disable, 1486 + .atomic_check = tc_dpi_atomic_check, 1487 + .atomic_enable = tc_dpi_bridge_atomic_enable, 1488 + .atomic_disable = tc_dpi_bridge_atomic_disable, 1489 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1490 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1491 + .atomic_reset = drm_atomic_helper_bridge_reset, 1492 + .atomic_get_input_bus_fmts = tc_dpi_atomic_get_input_bus_fmts, 1493 + }; 1494 + 1495 + static const struct drm_bridge_funcs tc_edp_bridge_funcs = { 1496 + .attach = tc_edp_bridge_attach, 1497 + .detach = tc_edp_bridge_detach, 1498 + .mode_valid = tc_edp_mode_valid, 1499 + .mode_set = tc_bridge_mode_set, 1500 + .atomic_check = tc_edp_atomic_check, 1501 + .atomic_enable = tc_edp_bridge_atomic_enable, 1502 + .atomic_disable = tc_edp_bridge_atomic_disable, 1721 1503 .mode_fixup = tc_bridge_mode_fixup, 1722 1504 .detect = tc_bridge_detect, 1723 1505 .get_edid = tc_get_edid, 1506 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1507 + 
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1508 + .atomic_reset = drm_atomic_helper_bridge_reset, 1724 1509 }; 1725 1510 1726 1511 static bool tc_readable_reg(struct device *dev, unsigned int reg) ··· 1850 1549 return IRQ_HANDLED; 1851 1550 } 1852 1551 1853 - static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id) 1552 + static int tc_mipi_dsi_host_attach(struct tc_data *tc) 1854 1553 { 1855 - struct device *dev = &client->dev; 1554 + struct device *dev = tc->dev; 1555 + struct device_node *host_node; 1556 + struct device_node *endpoint; 1557 + struct mipi_dsi_device *dsi; 1558 + struct mipi_dsi_host *host; 1559 + const struct mipi_dsi_device_info info = { 1560 + .type = "tc358767", 1561 + .channel = 0, 1562 + .node = NULL, 1563 + }; 1564 + int dsi_lanes, ret; 1565 + 1566 + endpoint = of_graph_get_endpoint_by_regs(dev->of_node, 0, -1); 1567 + dsi_lanes = of_property_count_u32_elems(endpoint, "data-lanes"); 1568 + host_node = of_graph_get_remote_port_parent(endpoint); 1569 + host = of_find_mipi_dsi_host_by_node(host_node); 1570 + of_node_put(host_node); 1571 + of_node_put(endpoint); 1572 + 1573 + if (dsi_lanes < 0 || dsi_lanes > 4) 1574 + return -EINVAL; 1575 + 1576 + if (!host) 1577 + return -EPROBE_DEFER; 1578 + 1579 + dsi = mipi_dsi_device_register_full(host, &info); 1580 + if (IS_ERR(dsi)) 1581 + return dev_err_probe(dev, PTR_ERR(dsi), 1582 + "failed to create dsi device\n"); 1583 + 1584 + tc->dsi = dsi; 1585 + 1586 + tc->dsi_lanes = dsi_lanes; 1587 + dsi->lanes = tc->dsi_lanes; 1588 + dsi->format = MIPI_DSI_FMT_RGB888; 1589 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE; 1590 + 1591 + ret = mipi_dsi_attach(dsi); 1592 + if (ret < 0) { 1593 + dev_err(dev, "failed to attach dsi to host: %d\n", ret); 1594 + return ret; 1595 + } 1596 + 1597 + return 0; 1598 + } 1599 + 1600 + static int tc_probe_dpi_bridge_endpoint(struct tc_data *tc) 1601 + { 1602 + struct device *dev = tc->dev; 1856 1603 struct 
drm_panel *panel; 1857 - struct tc_data *tc; 1858 1604 int ret; 1859 1605 1860 - tc = devm_kzalloc(dev, sizeof(*tc), GFP_KERNEL); 1861 - if (!tc) 1862 - return -ENOMEM; 1606 + /* port@1 is the DPI input/output port */ 1607 + ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, NULL); 1608 + if (ret && ret != -ENODEV) 1609 + return ret; 1863 1610 1864 - tc->dev = dev; 1611 + if (panel) { 1612 + struct drm_bridge *panel_bridge; 1613 + 1614 + panel_bridge = devm_drm_panel_bridge_add(dev, panel); 1615 + if (IS_ERR(panel_bridge)) 1616 + return PTR_ERR(panel_bridge); 1617 + 1618 + tc->panel_bridge = panel_bridge; 1619 + tc->bridge.type = DRM_MODE_CONNECTOR_DPI; 1620 + tc->bridge.funcs = &tc_dpi_bridge_funcs; 1621 + 1622 + return 0; 1623 + } 1624 + 1625 + return ret; 1626 + } 1627 + 1628 + static int tc_probe_edp_bridge_endpoint(struct tc_data *tc) 1629 + { 1630 + struct device *dev = tc->dev; 1631 + struct drm_panel *panel; 1632 + int ret; 1865 1633 1866 1634 /* port@2 is the output port */ 1867 1635 ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, NULL); ··· 1949 1579 } else { 1950 1580 tc->bridge.type = DRM_MODE_CONNECTOR_DisplayPort; 1951 1581 } 1582 + 1583 + tc->bridge.funcs = &tc_edp_bridge_funcs; 1584 + if (tc->hpd_pin >= 0) 1585 + tc->bridge.ops |= DRM_BRIDGE_OP_DETECT; 1586 + tc->bridge.ops |= DRM_BRIDGE_OP_EDID; 1587 + 1588 + return ret; 1589 + } 1590 + 1591 + static int tc_probe_bridge_endpoint(struct tc_data *tc) 1592 + { 1593 + struct device *dev = tc->dev; 1594 + struct of_endpoint endpoint; 1595 + struct device_node *node = NULL; 1596 + const u8 mode_dpi_to_edp = BIT(1) | BIT(2); 1597 + const u8 mode_dsi_to_edp = BIT(0) | BIT(2); 1598 + const u8 mode_dsi_to_dpi = BIT(0) | BIT(1); 1599 + u8 mode = 0; 1600 + 1601 + /* 1602 + * Determine bridge configuration. 
1603 + * 1604 + * Port allocation: 1605 + * port@0 - DSI input 1606 + * port@1 - DPI input/output 1607 + * port@2 - eDP output 1608 + * 1609 + * Possible connections: 1610 + * DPI -> port@1 -> port@2 -> eDP :: [port@0 is not connected] 1611 + * DSI -> port@0 -> port@2 -> eDP :: [port@1 is not connected] 1612 + * DSI -> port@0 -> port@1 -> DPI :: [port@2 is not connected] 1613 + */ 1614 + 1615 + for_each_endpoint_of_node(dev->of_node, node) { 1616 + of_graph_parse_endpoint(node, &endpoint); 1617 + if (endpoint.port > 2) 1618 + return -EINVAL; 1619 + 1620 + mode |= BIT(endpoint.port); 1621 + } 1622 + 1623 + if (mode == mode_dpi_to_edp) 1624 + return tc_probe_edp_bridge_endpoint(tc); 1625 + else if (mode == mode_dsi_to_dpi) 1626 + return tc_probe_dpi_bridge_endpoint(tc); 1627 + else if (mode == mode_dsi_to_edp) 1628 + dev_warn(dev, "The mode DSI-to-(e)DP is not supported!\n"); 1629 + else 1630 + dev_warn(dev, "Invalid mode (0x%x) is not supported!\n", mode); 1631 + 1632 + return -EINVAL; 1633 + } 1634 + 1635 + static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id) 1636 + { 1637 + struct device *dev = &client->dev; 1638 + struct tc_data *tc; 1639 + int ret; 1640 + 1641 + tc = devm_kzalloc(dev, sizeof(*tc), GFP_KERNEL); 1642 + if (!tc) 1643 + return -ENOMEM; 1644 + 1645 + tc->dev = dev; 1646 + 1647 + ret = tc_probe_bridge_endpoint(tc); 1648 + if (ret) 1649 + return ret; 1952 1650 1953 1651 /* Shut down GPIO is optional */ 1954 1652 tc->sd_gpio = devm_gpiod_get_optional(dev, "shutdown", GPIOD_OUT_HIGH); ··· 2124 1686 } 2125 1687 } 2126 1688 2127 - ret = tc_aux_link_setup(tc); 2128 - if (ret) 2129 - return ret; 2130 - 2131 - /* Register DP AUX channel */ 2132 - tc->aux.name = "TC358767 AUX i2c adapter"; 2133 - tc->aux.dev = tc->dev; 2134 - tc->aux.transfer = tc_aux_transfer; 2135 - drm_dp_aux_init(&tc->aux); 2136 - 2137 - tc->bridge.funcs = &tc_bridge_funcs; 2138 - if (tc->hpd_pin >= 0) 2139 - tc->bridge.ops |= DRM_BRIDGE_OP_DETECT; 2140 - 
tc->bridge.ops |= DRM_BRIDGE_OP_EDID; 1689 + if (tc->bridge.type != DRM_MODE_CONNECTOR_DPI) { /* (e)DP output */ 1690 + ret = tc_aux_link_setup(tc); 1691 + if (ret) 1692 + return ret; 1693 + } 2141 1694 2142 1695 tc->bridge.of_node = dev->of_node; 2143 1696 drm_bridge_add(&tc->bridge); 2144 1697 2145 1698 i2c_set_clientdata(client, tc); 1699 + 1700 + if (tc->bridge.type == DRM_MODE_CONNECTOR_DPI) { /* DPI output */ 1701 + ret = tc_mipi_dsi_host_attach(tc); 1702 + if (ret) { 1703 + drm_bridge_remove(&tc->bridge); 1704 + return ret; 1705 + } 1706 + } 2146 1707 2147 1708 return 0; 2148 1709 }
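The new tc_probe_bridge_endpoint() above classifies the bridge configuration purely from which of the three OF graph ports carry endpoints, accumulating one bit per connected port. That classification is pure bit logic and can be sketched standalone (port numbering and the three modes follow the driver's comment; classify_ports is a hypothetical helper, not the kernel function):

```c
#include <assert.h>

#define BIT(n) (1U << (n))

/* Port allocation, per the tc358767 comment:
 * port 0 - DSI input, port 1 - DPI input/output, port 2 - (e)DP output.
 */
enum tc_mode {
	TC_MODE_INVALID,
	TC_MODE_DPI_TO_EDP,	/* ports 1 and 2 wired */
	TC_MODE_DSI_TO_DPI,	/* ports 0 and 1 wired */
	TC_MODE_DSI_TO_EDP,	/* ports 0 and 2 wired: not supported */
};

static enum tc_mode classify_ports(const int *ports, int nports)
{
	unsigned int mode = 0;
	int i;

	for (i = 0; i < nports; i++) {
		if (ports[i] > 2)
			return TC_MODE_INVALID;
		mode |= BIT(ports[i]);
	}

	if (mode == (BIT(1) | BIT(2)))
		return TC_MODE_DPI_TO_EDP;
	if (mode == (BIT(0) | BIT(1)))
		return TC_MODE_DSI_TO_DPI;
	if (mode == (BIT(0) | BIT(2)))
		return TC_MODE_DSI_TO_EDP;
	return TC_MODE_INVALID;
}
```

The driver then picks tc_edp_bridge_funcs or tc_dpi_bridge_funcs from the result, and rejects the DSI-to-(e)DP combination with a warning.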
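A small aside on the lane-enable value written in tc_dpi_stream_enable(): `((LANEENABLE_L0EN << tc->dsi_lanes) - LANEENABLE_L0EN)` is the usual trick for setting n contiguous bits starting at the L0 position, to which the clock-lane bit is OR-ed. A self-contained check of that arithmetic (only the two base defines from the driver are reused):

```c
#include <assert.h>

#define BIT(n)			(1U << (n))
#define LANEENABLE_CLEN		BIT(0)	/* clock lane */
#define LANEENABLE_L0EN		BIT(1)	/* first data lane */

/* (L0EN << n) - L0EN == n one-bits starting at the L0EN position. */
static unsigned int lane_enable_mask(unsigned int dsi_lanes)
{
	return ((LANEENABLE_L0EN << dsi_lanes) - LANEENABLE_L0EN) |
	       LANEENABLE_CLEN;
}
```

So one, two, and four DSI lanes yield 0x3, 0x7, and 0x1f respectively, matching a clock lane plus that many data lanes.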
+2 -9
drivers/gpu/drm/bridge/tc358775.c
··· 649 649 static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id) 650 650 { 651 651 struct device *dev = &client->dev; 652 - struct drm_panel *panel; 653 652 struct tc_data *tc; 654 653 int ret; 655 654 ··· 659 660 tc->dev = dev; 660 661 tc->i2c = client; 661 662 662 - ret = drm_of_find_panel_or_bridge(dev->of_node, TC358775_LVDS_OUT0, 663 - 0, &panel, NULL); 664 - if (ret < 0) 665 - return ret; 666 - if (!panel) 667 - return -ENODEV; 668 - 669 - tc->panel_bridge = devm_drm_panel_bridge_add(dev, panel); 663 + tc->panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 664 + TC358775_LVDS_OUT0, 0); 670 665 if (IS_ERR(tc->panel_bridge)) 671 666 return PTR_ERR(tc->panel_bridge); 672 667
+8 -9
drivers/gpu/drm/bridge/ti-sn65dsi83.c
··· 488 488 /* Clear all errors that got asserted during initialization. */ 489 489 regmap_read(ctx->regmap, REG_IRQ_STAT, &pval); 490 490 regmap_write(ctx->regmap, REG_IRQ_STAT, pval); 491 + 492 + usleep_range(10000, 12000); 493 + regmap_read(ctx->regmap, REG_IRQ_STAT, &pval); 494 + if (pval) 495 + dev_err(ctx->dev, "Unexpected link status 0x%02x\n", pval); 491 496 } 492 497 493 498 static void sn65dsi83_atomic_disable(struct drm_bridge *bridge, ··· 570 565 struct drm_bridge *panel_bridge; 571 566 struct device *dev = ctx->dev; 572 567 struct device_node *endpoint; 573 - struct drm_panel *panel; 574 568 int ret; 575 569 576 570 endpoint = of_graph_get_endpoint_by_regs(dev->of_node, 0, 0); ··· 609 605 } 610 606 } 611 607 612 - ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, &panel_bridge); 613 - if (ret < 0) 608 + panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 2, 0); 609 + if (IS_ERR(panel_bridge)) { 610 + ret = PTR_ERR(panel_bridge); 614 611 goto err_put_node; 615 - if (panel) { 616 - panel_bridge = devm_drm_panel_bridge_add(dev, panel); 617 - if (IS_ERR(panel_bridge)) { 618 - ret = PTR_ERR(panel_bridge); 619 - goto err_put_node; 620 - } 621 612 } 622 613 623 614 ctx->panel_bridge = panel_bridge;
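The sn65dsi83 hunk above acknowledges REG_IRQ_STAT by writing back the value just read (write-one-to-clear), then re-reads it after a delay so that only errors which re-assert, rather than stale latched bits, trigger the warning. A toy model of that clear-then-recheck pattern against a W1C status register (the register and names here are illustrative, not the SN65DSI83 map):

```c
#include <assert.h>
#include <stdint.h>

/* Toy write-1-to-clear status register, as commonly used for latched
 * error bits: writing a 1 clears the corresponding bit.
 */
struct w1c_reg { uint8_t stat; };

static uint8_t reg_read(const struct w1c_reg *r)    { return r->stat; }
static void reg_write(struct w1c_reg *r, uint8_t v) { r->stat &= ~v; }

/* Ack whatever is latched; report only what re-asserts afterwards. */
static uint8_t clear_then_recheck(struct w1c_reg *r, uint8_t reasserted)
{
	reg_write(r, reg_read(r));	/* clear all latched bits */
	/* ...settle time; usleep_range(10000, 12000) in the driver... */
	r->stat |= reasserted;		/* hardware raises a live error */
	return reg_read(r);
}
```

A non-zero value after the recheck means a genuine post-initialization link problem, which is exactly what the new dev_err() reports.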
+1 -7
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 1188 1188 { 1189 1189 struct ti_sn65dsi86 *pdata = dev_get_drvdata(adev->dev.parent); 1190 1190 struct device_node *np = pdata->dev->of_node; 1191 - struct drm_panel *panel; 1192 1191 int ret; 1193 1192 1194 - ret = drm_of_find_panel_or_bridge(np, 1, 0, &panel, NULL); 1195 - if (ret) 1196 - return dev_err_probe(&adev->dev, ret, 1197 - "could not find any panel node\n"); 1198 - 1199 - pdata->next_bridge = devm_drm_panel_bridge_add(pdata->dev, panel); 1193 + pdata->next_bridge = devm_drm_of_get_bridge(pdata->dev, np, 1, 0); 1200 1194 if (IS_ERR(pdata->next_bridge)) { 1201 1195 DRM_ERROR("failed to create panel bridge\n"); 1202 1196 return PTR_ERR(pdata->next_bridge);
+20
drivers/gpu/drm/drm_atomic.c
··· 789 789 obj->state = state; 790 790 obj->funcs = funcs; 791 791 list_add_tail(&obj->head, &dev->mode_config.privobj_list); 792 + 793 + state->obj = obj; 792 794 } 793 795 EXPORT_SYMBOL(drm_atomic_private_obj_init); 794 796 ··· 1425 1423 int drm_atomic_commit(struct drm_atomic_state *state) 1426 1424 { 1427 1425 struct drm_mode_config *config = &state->dev->mode_config; 1426 + struct drm_printer p = drm_info_printer(state->dev->dev); 1428 1427 int ret; 1428 + 1429 + if (drm_debug_enabled(DRM_UT_STATE)) 1430 + drm_atomic_print_new_state(state, &p); 1429 1431 1430 1432 ret = drm_atomic_check_only(state); 1431 1433 if (ret) ··· 1638 1632 } 1639 1633 EXPORT_SYMBOL(__drm_atomic_helper_set_config); 1640 1634 1635 + static void drm_atomic_private_obj_print_state(struct drm_printer *p, 1636 + const struct drm_private_state *state) 1637 + { 1638 + struct drm_private_obj *obj = state->obj; 1639 + 1640 + if (obj->funcs->atomic_print_state) 1641 + obj->funcs->atomic_print_state(p, state); 1642 + } 1643 + 1641 1644 /** 1642 1645 * drm_atomic_print_new_state - prints drm atomic state 1643 1646 * @state: atomic configuration to check ··· 1667 1652 struct drm_crtc_state *crtc_state; 1668 1653 struct drm_connector *connector; 1669 1654 struct drm_connector_state *connector_state; 1655 + struct drm_private_obj *obj; 1656 + struct drm_private_state *obj_state; 1670 1657 int i; 1671 1658 1672 1659 if (!p) { ··· 1686 1669 1687 1670 for_each_new_connector_in_state(state, connector, connector_state, i) 1688 1671 drm_atomic_connector_print_state(p, connector_state); 1672 + 1673 + for_each_new_private_obj_in_state(state, obj, obj_state, i) 1674 + drm_atomic_private_obj_print_state(p, obj_state); 1689 1675 } 1690 1676 EXPORT_SYMBOL(drm_atomic_print_new_state); 1691 1677
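Two details of the drm_atomic.c change above fit together: drm_atomic_private_obj_init() now stores a back-pointer from the state to its object, and the new drm_atomic_private_obj_print_state() dispatches through that pointer only when the driver actually supplied an atomic_print_state hook. The optional-callback pattern in miniature (all names below are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

struct priv_state;

struct priv_obj_funcs {
	/* Optional: drivers with nothing to dump leave this NULL. */
	void (*print_state)(const struct priv_state *state, int *out);
};

struct priv_obj {
	const struct priv_obj_funcs *funcs;
};

struct priv_state {
	struct priv_obj *obj;	/* back-pointer, set at init time */
	int value;
};

static void print_value(const struct priv_state *state, int *out)
{
	*out = state->value;
}

/* Mirrors drm_atomic_private_obj_print_state(): dispatch if present. */
static void obj_print_state(const struct priv_state *state, int *out)
{
	const struct priv_obj *obj = state->obj;

	if (obj->funcs->print_state)
		obj->funcs->print_state(state, out);
}
```

Without the back-pointer the print loop, which iterates states rather than objects, would have no way to reach the vtable.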
-4
drivers/gpu/drm/drm_atomic_uapi.c
··· 1328 1328 struct drm_out_fence_state *fence_state; 1329 1329 int ret = 0; 1330 1330 unsigned int i, j, num_fences; 1331 - struct drm_printer p = drm_info_printer(dev->dev); 1332 1331 1333 1332 /* disallow for drivers not supporting atomic: */ 1334 1333 if (!drm_core_check_feature(dev, DRIVER_ATOMIC)) ··· 1459 1460 } else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) { 1460 1461 ret = drm_atomic_nonblocking_commit(state); 1461 1462 } else { 1462 - if (drm_debug_enabled(DRM_UT_STATE)) 1463 - drm_atomic_print_new_state(state, &p); 1464 - 1465 1463 ret = drm_atomic_commit(state); 1466 1464 } 1467 1465
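Moving the drm_debug_enabled(DRM_UT_STATE) check out of the ATOMIC ioctl and into drm_atomic_commit() means in-kernel commits (fbdev, self-refresh, etc.) get state dumps too, not just userspace-initiated ones. The gate itself is a plain category bitmask against the drm.debug module parameter; modeled standalone (the 0x40 value matches the kernel's DRM_UT_STATE, the rest is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Subset of the drm.debug category bits (see drm_print.h). */
#define DRM_UT_ATOMIC	0x10
#define DRM_UT_STATE	0x40

static unsigned int __drm_debug;	/* normally the module parameter */

static bool drm_debug_enabled(unsigned int category)
{
	return __drm_debug & category;
}
```

Booting with drm.debug=0x40 is what turns the new drm_atomic_print_new_state() call in drm_atomic_commit() into actual log output.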
+1 -1
drivers/gpu/drm/drm_blend.c
··· 317 317 * DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_180 | 318 318 * DRM_MODE_ROTATE_270 | DRM_MODE_REFLECT_Y); 319 319 * 320 - * to eliminate the DRM_MODE_ROTATE_X flag. Depending on what kind of 320 + * to eliminate the DRM_MODE_REFLECT_X flag. Depending on what kind of 321 321 * transforms the hardware supports, this function may not 322 322 * be able to produce a supported transform, so the caller should 323 323 * check the result afterwards.
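The one-word comment fix above concerns drm_rotation_simplify(): when hardware lacks REFLECT_X, a reflection about the X axis can be rewritten as REFLECT_Y composed with an extra 180° rotation, which is why masking with ROTATE_90|ROTATE_180|ROTATE_270|REFLECT_Y eliminates the REFLECT_X flag (not ROTATE_X, which does not exist). A sketch of that identity using the DRM-style bit layout (rotation angles in bits 0-3, reflections in bits 4-5; drop_reflect_x is illustrative, not the kernel implementation):

```c
#include <assert.h>

#define BIT(n)			(1U << (n))
#define DRM_MODE_ROTATE_0	BIT(0)
#define DRM_MODE_ROTATE_90	BIT(1)
#define DRM_MODE_ROTATE_180	BIT(2)
#define DRM_MODE_ROTATE_270	BIT(3)
#define DRM_MODE_ROTATE_MASK	0x0f
#define DRM_MODE_REFLECT_X	BIT(4)
#define DRM_MODE_REFLECT_Y	BIT(5)

/* reflect-X == reflect-Y composed with a 180 degree rotation, so the
 * X flag can be traded away by toggling both reflect bits and
 * advancing the rotation angle two steps (mod 360).
 */
static unsigned int drop_reflect_x(unsigned int rotation)
{
	unsigned int angle_bit, i;

	if (!(rotation & DRM_MODE_REFLECT_X))
		return rotation;

	for (i = 0; i < 4; i++)
		if (rotation & BIT(i))
			break;
	angle_bit = BIT((i + 2) % 4);	/* +180 degrees */

	return ((rotation ^ (DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y)) &
		~DRM_MODE_ROTATE_MASK) | angle_bit;
}
```

Note the double-reflection case: X and Y reflections together are themselves a 180° rotation, and the XOR handles that for free.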
+3 -1
drivers/gpu/drm/drm_bridge_connector.c
··· 384 384 connector_type, ddc); 385 385 drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs); 386 386 387 - if (bridge_connector->bridge_hpd) 387 + if (bridge_connector->bridge_hpd) { 388 388 connector->polled = DRM_CONNECTOR_POLL_HPD; 389 + drm_bridge_connector_enable_hpd(connector); 390 + } 389 391 else if (bridge_connector->bridge_detect) 390 392 connector->polled = DRM_CONNECTOR_POLL_CONNECT 391 393 | DRM_CONNECTOR_POLL_DISCONNECT;
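The bridge_connector hunk above turns hot-plug detection on at connector creation when the bridge chain provides HPD, and only falls back to connect/disconnect polling when the chain merely supports .detect. The selection logic, standalone (flag values mirror the kernel's DRM_CONNECTOR_POLL_* bits; pick_poll_mode is a hypothetical helper):

```c
#include <assert.h>
#include <stdbool.h>

#define DRM_CONNECTOR_POLL_HPD		(1 << 0)
#define DRM_CONNECTOR_POLL_CONNECT	(1 << 1)
#define DRM_CONNECTOR_POLL_DISCONNECT	(1 << 2)

/* Prefer hardware hot-plug interrupts; poll periodically only when the
 * bridge can detect() but cannot interrupt; otherwise never poll.
 */
static int pick_poll_mode(bool has_hpd, bool has_detect)
{
	if (has_hpd)
		return DRM_CONNECTOR_POLL_HPD;
	if (has_detect)
		return DRM_CONNECTOR_POLL_CONNECT |
		       DRM_CONNECTOR_POLL_DISCONNECT;
	return 0;
}
```

The fix in the patch is that declaring POLL_HPD is not enough by itself: the bridge's HPD machinery also has to be switched on, hence the added drm_bridge_connector_enable_hpd() call.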
+352 -268
drivers/gpu/drm/drm_edid.c
··· 97 97 98 98 struct detailed_mode_closure { 99 99 struct drm_connector *connector; 100 - struct edid *edid; 100 + const struct edid *edid; 101 101 bool preferred; 102 102 u32 quirks; 103 103 int modes; ··· 1572 1572 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 1573 1573 }; 1574 1574 1575 + static void edid_header_fix(void *edid) 1576 + { 1577 + memcpy(edid, edid_header, sizeof(edid_header)); 1578 + } 1579 + 1575 1580 /** 1576 1581 * drm_edid_header_is_valid - sanity check the header of the base EDID block 1577 1582 * @raw_edid: pointer to raw base EDID block ··· 1585 1580 * 1586 1581 * Return: 8 if the header is perfect, down to 0 if it's totally wrong. 1587 1582 */ 1588 - int drm_edid_header_is_valid(const u8 *raw_edid) 1583 + int drm_edid_header_is_valid(const void *_edid) 1589 1584 { 1585 + const struct edid *edid = _edid; 1590 1586 int i, score = 0; 1591 1587 1592 - for (i = 0; i < sizeof(edid_header); i++) 1593 - if (raw_edid[i] == edid_header[i]) 1588 + for (i = 0; i < sizeof(edid_header); i++) { 1589 + if (edid->header[i] == edid_header[i]) 1594 1590 score++; 1591 + } 1595 1592 1596 1593 return score; 1597 1594 } ··· 1604 1597 MODULE_PARM_DESC(edid_fixup, 1605 1598 "Minimum number of valid EDID header bytes (0-8, default 6)"); 1606 1599 1607 - static int drm_edid_block_checksum(const u8 *raw_edid) 1600 + static int edid_block_compute_checksum(const void *_block) 1608 1601 { 1602 + const u8 *block = _block; 1609 1603 int i; 1610 1604 u8 csum = 0, crc = 0; 1611 1605 1612 1606 for (i = 0; i < EDID_LENGTH - 1; i++) 1613 - csum += raw_edid[i]; 1607 + csum += block[i]; 1614 1608 1615 1609 crc = 0x100 - csum; 1616 1610 1617 1611 return crc; 1618 1612 } 1619 1613 1620 - static bool drm_edid_block_checksum_diff(const u8 *raw_edid, u8 real_checksum) 1614 + static int edid_block_get_checksum(const void *_block) 1621 1615 { 1622 - if (raw_edid[EDID_LENGTH - 1] != real_checksum) 1623 - return true; 1624 - else 1625 - return false; 1616 + const struct edid *block = 
_block; 1617 + 1618 + return block->checksum; 1626 1619 } 1627 1620 1628 - static bool drm_edid_is_zero(const u8 *in_edid, int length) 1621 + static int edid_block_tag(const void *_block) 1629 1622 { 1630 - if (memchr_inv(in_edid, 0, length)) 1631 - return false; 1623 + const u8 *block = _block; 1632 1624 1633 - return true; 1625 + return block[0]; 1626 + } 1627 + 1628 + static bool edid_is_zero(const void *edid, int length) 1629 + { 1630 + return !memchr_inv(edid, 0, length); 1634 1631 } 1635 1632 1636 1633 /** ··· 1668 1657 } 1669 1658 EXPORT_SYMBOL(drm_edid_are_equal); 1670 1659 1660 + enum edid_block_status { 1661 + EDID_BLOCK_OK = 0, 1662 + EDID_BLOCK_NULL, 1663 + EDID_BLOCK_HEADER_CORRUPT, 1664 + EDID_BLOCK_HEADER_REPAIR, 1665 + EDID_BLOCK_HEADER_FIXED, 1666 + EDID_BLOCK_CHECKSUM, 1667 + EDID_BLOCK_VERSION, 1668 + }; 1669 + 1670 + static enum edid_block_status edid_block_check(const void *_block, 1671 + bool is_base_block) 1672 + { 1673 + const struct edid *block = _block; 1674 + 1675 + if (!block) 1676 + return EDID_BLOCK_NULL; 1677 + 1678 + if (is_base_block) { 1679 + int score = drm_edid_header_is_valid(block); 1680 + 1681 + if (score < clamp(edid_fixup, 0, 8)) 1682 + return EDID_BLOCK_HEADER_CORRUPT; 1683 + 1684 + if (score < 8) 1685 + return EDID_BLOCK_HEADER_REPAIR; 1686 + } 1687 + 1688 + if (edid_block_compute_checksum(block) != edid_block_get_checksum(block)) 1689 + return EDID_BLOCK_CHECKSUM; 1690 + 1691 + if (is_base_block) { 1692 + if (block->version != 1) 1693 + return EDID_BLOCK_VERSION; 1694 + } 1695 + 1696 + return EDID_BLOCK_OK; 1697 + } 1698 + 1699 + static bool edid_block_status_valid(enum edid_block_status status, int tag) 1700 + { 1701 + return status == EDID_BLOCK_OK || 1702 + status == EDID_BLOCK_HEADER_FIXED || 1703 + (status == EDID_BLOCK_CHECKSUM && tag == CEA_EXT); 1704 + } 1705 + 1706 + static bool edid_block_valid(const void *block, bool base) 1707 + { 1708 + return edid_block_status_valid(edid_block_check(block, base), 1709 + 
edid_block_tag(block)); 1710 + } 1711 + 1671 1712 /** 1672 1713 * drm_edid_block_valid - Sanity check the EDID block (base or extension) 1673 1714 * @raw_edid: pointer to raw EDID block 1674 - * @block: type of block to validate (0 for base, extension otherwise) 1715 + * @block_num: type of block to validate (0 for base, extension otherwise) 1675 1716 * @print_bad_edid: if true, dump bad EDID blocks to the console 1676 1717 * @edid_corrupt: if true, the header or checksum is invalid 1677 1718 * ··· 1732 1669 * 1733 1670 * Return: True if the block is valid, false otherwise. 1734 1671 */ 1735 - bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid, 1672 + bool drm_edid_block_valid(u8 *_block, int block_num, bool print_bad_edid, 1736 1673 bool *edid_corrupt) 1737 1674 { 1738 - u8 csum; 1739 - struct edid *edid = (struct edid *)raw_edid; 1675 + struct edid *block = (struct edid *)_block; 1676 + enum edid_block_status status; 1677 + bool is_base_block = block_num == 0; 1678 + bool valid; 1740 1679 1741 - if (WARN_ON(!raw_edid)) 1680 + if (WARN_ON(!block)) 1742 1681 return false; 1743 1682 1744 - if (edid_fixup > 8 || edid_fixup < 0) 1745 - edid_fixup = 6; 1683 + status = edid_block_check(block, is_base_block); 1684 + if (status == EDID_BLOCK_HEADER_REPAIR) { 1685 + DRM_DEBUG("Fixing EDID header, your hardware may be failing\n"); 1686 + edid_header_fix(block); 1746 1687 1747 - if (block == 0) { 1748 - int score = drm_edid_header_is_valid(raw_edid); 1749 - 1750 - if (score == 8) { 1751 - if (edid_corrupt) 1752 - *edid_corrupt = false; 1753 - } else if (score >= edid_fixup) { 1754 - /* Displayport Link CTS Core 1.2 rev1.1 test 4.2.2.6 1755 - * The corrupt flag needs to be set here otherwise, the 1756 - * fix-up code here will correct the problem, the 1757 - * checksum is correct and the test fails 1758 - */ 1759 - if (edid_corrupt) 1760 - *edid_corrupt = true; 1761 - DRM_DEBUG("Fixing EDID header, your hardware may be failing\n"); 1762 - memcpy(raw_edid, 
edid_header, sizeof(edid_header)); 1763 - } else { 1764 - if (edid_corrupt) 1765 - *edid_corrupt = true; 1766 - goto bad; 1767 - } 1688 + /* Retry with fixed header, update status if that worked. */ 1689 + status = edid_block_check(block, is_base_block); 1690 + if (status == EDID_BLOCK_OK) 1691 + status = EDID_BLOCK_HEADER_FIXED; 1768 1692 } 1769 1693 1770 - csum = drm_edid_block_checksum(raw_edid); 1771 - if (drm_edid_block_checksum_diff(raw_edid, csum)) { 1772 - if (edid_corrupt) 1694 + if (edid_corrupt) { 1695 + /* 1696 + * Unknown major version isn't corrupt but we can't use it. Only 1697 + * the base block can reset edid_corrupt to false. 1698 + */ 1699 + if (is_base_block && 1700 + (status == EDID_BLOCK_OK || status == EDID_BLOCK_VERSION)) 1701 + *edid_corrupt = false; 1702 + else if (status != EDID_BLOCK_OK) 1773 1703 *edid_corrupt = true; 1774 - 1775 - /* allow CEA to slide through, switches mangle this */ 1776 - if (raw_edid[0] == CEA_EXT) { 1777 - DRM_DEBUG("EDID checksum is invalid, remainder is %d\n", csum); 1778 - DRM_DEBUG("Assuming a KVM switch modified the CEA block but left the original checksum\n"); 1779 - } else { 1780 - if (print_bad_edid) 1781 - DRM_NOTE("EDID checksum is invalid, remainder is %d\n", csum); 1782 - 1783 - goto bad; 1784 - } 1785 1704 } 1786 1705 1787 - /* per-block-type checks */ 1788 - switch (raw_edid[0]) { 1789 - case 0: /* base */ 1790 - if (edid->version != 1) { 1791 - DRM_NOTE("EDID has major version %d, instead of 1\n", edid->version); 1792 - goto bad; 1706 + /* Determine whether we can use this block with this status. */ 1707 + valid = edid_block_status_valid(status, edid_block_tag(block)); 1708 + 1709 + /* Some fairly random status printouts. 
*/ 1710 + if (status == EDID_BLOCK_CHECKSUM) { 1711 + if (valid) { 1712 + DRM_DEBUG("EDID block checksum is invalid, remainder is %d\n", 1713 + edid_block_compute_checksum(block)); 1714 + DRM_DEBUG("Assuming a KVM switch modified the block but left the original checksum\n"); 1715 + } else if (print_bad_edid) { 1716 + DRM_NOTE("EDID block checksum is invalid, remainder is %d\n", 1717 + edid_block_compute_checksum(block)); 1793 1718 } 1794 - 1795 - if (edid->revision > 4) 1796 - DRM_DEBUG("EDID minor > 4, assuming backward compatibility\n"); 1797 - break; 1798 - 1799 - default: 1800 - break; 1719 + } else if (status == EDID_BLOCK_VERSION) { 1720 + DRM_NOTE("EDID has major version %d, instead of 1\n", 1721 + block->version); 1801 1722 } 1802 1723 1803 - return true; 1804 - 1805 - bad: 1806 - if (print_bad_edid) { 1807 - if (drm_edid_is_zero(raw_edid, EDID_LENGTH)) { 1724 + if (!valid && print_bad_edid) { 1725 + if (edid_is_zero(block, EDID_LENGTH)) { 1808 1726 pr_notice("EDID block is all zeroes\n"); 1809 1727 } else { 1810 1728 pr_notice("Raw EDID:\n"); 1811 1729 print_hex_dump(KERN_NOTICE, 1812 1730 " \t", DUMP_PREFIX_NONE, 16, 1, 1813 - raw_edid, EDID_LENGTH, false); 1731 + block, EDID_LENGTH, false); 1814 1732 } 1815 1733 } 1816 - return false; 1734 + 1735 + return valid; 1817 1736 } 1818 1737 EXPORT_SYMBOL(drm_edid_block_valid); 1819 1738 ··· 1822 1777 return true; 1823 1778 } 1824 1779 EXPORT_SYMBOL(drm_edid_is_valid); 1780 + 1781 + static struct edid *edid_filter_invalid_blocks(const struct edid *edid, 1782 + int invalid_blocks) 1783 + { 1784 + struct edid *new, *dest_block; 1785 + int valid_extensions = edid->extensions - invalid_blocks; 1786 + int i; 1787 + 1788 + new = kmalloc_array(valid_extensions + 1, EDID_LENGTH, GFP_KERNEL); 1789 + if (!new) 1790 + goto out; 1791 + 1792 + dest_block = new; 1793 + for (i = 0; i <= edid->extensions; i++) { 1794 + const void *block = edid + i; 1795 + 1796 + if (edid_block_valid(block, i == 0)) 1797 + memcpy(dest_block++, 
block, EDID_LENGTH); 1798 + } 1799 + 1800 + new->extensions = valid_extensions; 1801 + new->checksum = edid_block_compute_checksum(new); 1802 + 1803 + out: 1804 + kfree(edid); 1805 + 1806 + return new; 1807 + } 1825 1808 1826 1809 #define DDC_SEGMENT_ADDR 0x30 1827 1810 /** ··· 1932 1859 /* Calculate real checksum for the last edid extension block data */ 1933 1860 if (last_block < num_blocks) 1934 1861 connector->real_edid_checksum = 1935 - drm_edid_block_checksum(edid + last_block * EDID_LENGTH); 1862 + edid_block_compute_checksum(edid + last_block * EDID_LENGTH); 1936 1863 1937 1864 if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS)) 1938 1865 return; ··· 1942 1869 u8 *block = edid + i * EDID_LENGTH; 1943 1870 char prefix[20]; 1944 1871 1945 - if (drm_edid_is_zero(block, EDID_LENGTH)) 1872 + if (edid_is_zero(block, EDID_LENGTH)) 1946 1873 sprintf(prefix, "\t[%02x] ZERO ", i); 1947 1874 else if (!drm_edid_block_valid(block, i, false, NULL)) 1948 1875 sprintf(prefix, "\t[%02x] BAD ", i); ··· 2007 1934 int *null_edid_counter = connector ? &connector->null_edid_counter : NULL; 2008 1935 bool *edid_corrupt = connector ? 
&connector->edid_corrupt : NULL; 2009 1936 void *edid; 2010 - int i; 1937 + int try; 2011 1938 2012 1939 edid = kmalloc(EDID_LENGTH, GFP_KERNEL); 2013 1940 if (edid == NULL) 2014 1941 return NULL; 2015 1942 2016 1943 /* base block fetch */ 2017 - for (i = 0; i < 4; i++) { 1944 + for (try = 0; try < 4; try++) { 2018 1945 if (get_edid_block(data, edid, 0, EDID_LENGTH)) 2019 1946 goto out; 2020 1947 if (drm_edid_block_valid(edid, 0, false, edid_corrupt)) 2021 1948 break; 2022 - if (i == 0 && drm_edid_is_zero(edid, EDID_LENGTH)) { 1949 + if (try == 0 && edid_is_zero(edid, EDID_LENGTH)) { 2023 1950 if (null_edid_counter) 2024 1951 (*null_edid_counter)++; 2025 1952 goto carp; 2026 1953 } 2027 1954 } 2028 - if (i == 4) 1955 + if (try == 4) 2029 1956 goto carp; 2030 1957 2031 1958 return edid; ··· 2063 1990 size_t len), 2064 1991 void *data) 2065 1992 { 2066 - int i, j = 0, valid_extensions = 0; 2067 - u8 *edid, *new; 2068 - struct edid *override; 1993 + int j, invalid_blocks = 0; 1994 + struct edid *edid, *new, *override; 2069 1995 2070 1996 override = drm_get_override_edid(connector); 2071 1997 if (override) 2072 1998 return override; 2073 1999 2074 - edid = (u8 *)drm_do_get_edid_base_block(connector, get_edid_block, data); 2000 + edid = drm_do_get_edid_base_block(connector, get_edid_block, data); 2075 2001 if (!edid) 2076 2002 return NULL; 2077 2003 2078 - /* if there's no extensions or no connector, we're done */ 2079 - valid_extensions = edid[0x7e]; 2080 - if (valid_extensions == 0) 2081 - return (struct edid *)edid; 2004 + if (edid->extensions == 0) 2005 + return edid; 2082 2006 2083 - new = krealloc(edid, (valid_extensions + 1) * EDID_LENGTH, GFP_KERNEL); 2007 + new = krealloc(edid, (edid->extensions + 1) * EDID_LENGTH, GFP_KERNEL); 2084 2008 if (!new) 2085 2009 goto out; 2086 2010 edid = new; 2087 2011 2088 - for (j = 1; j <= edid[0x7e]; j++) { 2089 - u8 *block = edid + j * EDID_LENGTH; 2012 + for (j = 1; j <= edid->extensions; j++) { 2013 + void *block = edid + j; 
2014 + int try; 2090 2015 2091 - for (i = 0; i < 4; i++) { 2016 + for (try = 0; try < 4; try++) { 2092 2017 if (get_edid_block(data, block, j, EDID_LENGTH)) 2093 2018 goto out; 2094 2019 if (drm_edid_block_valid(block, j, false, NULL)) 2095 2020 break; 2096 2021 } 2097 2022 2098 - if (i == 4) 2099 - valid_extensions--; 2023 + if (try == 4) 2024 + invalid_blocks++; 2100 2025 } 2101 2026 2102 - if (valid_extensions != edid[0x7e]) { 2103 - u8 *base; 2027 + if (invalid_blocks) { 2028 + connector_bad_edid(connector, (u8 *)edid, edid->extensions + 1); 2104 2029 2105 - connector_bad_edid(connector, edid, edid[0x7e] + 1); 2106 - 2107 - edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions; 2108 - edid[0x7e] = valid_extensions; 2109 - 2110 - new = kmalloc_array(valid_extensions + 1, EDID_LENGTH, 2111 - GFP_KERNEL); 2112 - if (!new) 2113 - goto out; 2114 - 2115 - base = new; 2116 - for (i = 0; i <= edid[0x7e]; i++) { 2117 - u8 *block = edid + i * EDID_LENGTH; 2118 - 2119 - if (!drm_edid_block_valid(block, i, false, NULL)) 2120 - continue; 2121 - 2122 - memcpy(base, block, EDID_LENGTH); 2123 - base += EDID_LENGTH; 2124 - } 2125 - 2126 - kfree(edid); 2127 - edid = new; 2030 + edid = edid_filter_invalid_blocks(edid, invalid_blocks); 2128 2031 } 2129 2032 2130 - return (struct edid *)edid; 2033 + return edid; 2131 2034 2132 2035 out: 2133 2036 kfree(edid); ··· 2199 2150 2200 2151 u32 drm_edid_get_panel_id(struct i2c_adapter *adapter) 2201 2152 { 2202 - struct edid *edid; 2153 + const struct edid *edid; 2203 2154 u32 panel_id; 2204 2155 2205 2156 edid = drm_do_get_edid_base_block(NULL, drm_do_probe_ddc_edid, adapter); ··· 2380 2331 } 2381 2332 EXPORT_SYMBOL(drm_mode_find_dmt); 2382 2333 2383 - static bool is_display_descriptor(const u8 d[18], u8 tag) 2334 + static bool is_display_descriptor(const struct detailed_timing *descriptor, u8 type) 2384 2335 { 2385 - return d[0] == 0x00 && d[1] == 0x00 && 2386 - d[2] == 0x00 && d[3] == tag; 2336 + 
BUILD_BUG_ON(offsetof(typeof(*descriptor), pixel_clock) != 0); 2337 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.pad1) != 2); 2338 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.type) != 3); 2339 + 2340 + return descriptor->pixel_clock == 0 && 2341 + descriptor->data.other_data.pad1 == 0 && 2342 + descriptor->data.other_data.type == type; 2387 2343 } 2388 2344 2389 - static bool is_detailed_timing_descriptor(const u8 d[18]) 2345 + static bool is_detailed_timing_descriptor(const struct detailed_timing *descriptor) 2390 2346 { 2391 - return d[0] != 0x00 || d[1] != 0x00; 2347 + BUILD_BUG_ON(offsetof(typeof(*descriptor), pixel_clock) != 0); 2348 + 2349 + return descriptor->pixel_clock != 0; 2392 2350 } 2393 2351 2394 - typedef void detailed_cb(struct detailed_timing *timing, void *closure); 2352 + typedef void detailed_cb(const struct detailed_timing *timing, void *closure); 2395 2353 2396 2354 static void 2397 - cea_for_each_detailed_block(u8 *ext, detailed_cb *cb, void *closure) 2355 + cea_for_each_detailed_block(const u8 *ext, detailed_cb *cb, void *closure) 2398 2356 { 2399 2357 int i, n; 2400 2358 u8 d = ext[0x02]; 2401 - u8 *det_base = ext + d; 2359 + const u8 *det_base = ext + d; 2402 2360 2403 2361 if (d < 4 || d > 127) 2404 2362 return; 2405 2363 2406 2364 n = (127 - d) / 18; 2407 2365 for (i = 0; i < n; i++) 2408 - cb((struct detailed_timing *)(det_base + 18 * i), closure); 2366 + cb((const struct detailed_timing *)(det_base + 18 * i), closure); 2409 2367 } 2410 2368 2411 2369 static void 2412 - vtb_for_each_detailed_block(u8 *ext, detailed_cb *cb, void *closure) 2370 + vtb_for_each_detailed_block(const u8 *ext, detailed_cb *cb, void *closure) 2413 2371 { 2414 2372 unsigned int i, n = min((int)ext[0x02], 6); 2415 - u8 *det_base = ext + 5; 2373 + const u8 *det_base = ext + 5; 2416 2374 2417 2375 if (ext[0x01] != 1) 2418 2376 return; /* unknown version */ 2419 2377 2420 2378 for (i = 0; i < n; i++) 2421 - cb((struct detailed_timing 
*)(det_base + 18 * i), closure); 2379 + cb((const struct detailed_timing *)(det_base + 18 * i), closure); 2422 2380 } 2423 2381 2424 2382 static void 2425 - drm_for_each_detailed_block(u8 *raw_edid, detailed_cb *cb, void *closure) 2383 + drm_for_each_detailed_block(const struct edid *edid, detailed_cb *cb, void *closure) 2426 2384 { 2427 2385 int i; 2428 - struct edid *edid = (struct edid *)raw_edid; 2429 2386 2430 2387 if (edid == NULL) 2431 2388 return; ··· 2439 2384 for (i = 0; i < EDID_DETAILED_TIMINGS; i++) 2440 2385 cb(&(edid->detailed_timings[i]), closure); 2441 2386 2442 - for (i = 1; i <= raw_edid[0x7e]; i++) { 2443 - u8 *ext = raw_edid + (i * EDID_LENGTH); 2387 + for (i = 1; i <= edid->extensions; i++) { 2388 + const u8 *ext = (const u8 *)edid + (i * EDID_LENGTH); 2444 2389 2445 2390 switch (*ext) { 2446 2391 case CEA_EXT: ··· 2456 2401 } 2457 2402 2458 2403 static void 2459 - is_rb(struct detailed_timing *t, void *data) 2404 + is_rb(const struct detailed_timing *descriptor, void *data) 2460 2405 { 2461 - u8 *r = (u8 *)t; 2406 + bool *res = data; 2462 2407 2463 - if (!is_display_descriptor(r, EDID_DETAIL_MONITOR_RANGE)) 2408 + if (!is_display_descriptor(descriptor, EDID_DETAIL_MONITOR_RANGE)) 2464 2409 return; 2465 2410 2466 - if (r[15] & 0x10) 2467 - *(bool *)data = true; 2411 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.flags) != 10); 2412 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.cvt.flags) != 15); 2413 + 2414 + if (descriptor->data.other_data.data.range.flags == DRM_EDID_CVT_SUPPORT_FLAG && 2415 + descriptor->data.other_data.data.range.formula.cvt.flags & 0x10) 2416 + *res = true; 2468 2417 } 2469 2418 2470 2419 /* EDID 1.4 defines this explicitly. For EDID 1.3, we guess, badly. 
*/ 2471 2420 static bool 2472 - drm_monitor_supports_rb(struct edid *edid) 2421 + drm_monitor_supports_rb(const struct edid *edid) 2473 2422 { 2474 2423 if (edid->revision >= 4) { 2475 2424 bool ret = false; 2476 2425 2477 - drm_for_each_detailed_block((u8 *)edid, is_rb, &ret); 2426 + drm_for_each_detailed_block(edid, is_rb, &ret); 2478 2427 return ret; 2479 2428 } 2480 2429 ··· 2486 2427 } 2487 2428 2488 2429 static void 2489 - find_gtf2(struct detailed_timing *t, void *data) 2430 + find_gtf2(const struct detailed_timing *descriptor, void *data) 2490 2431 { 2491 - u8 *r = (u8 *)t; 2432 + const struct detailed_timing **res = data; 2492 2433 2493 - if (!is_display_descriptor(r, EDID_DETAIL_MONITOR_RANGE)) 2434 + if (!is_display_descriptor(descriptor, EDID_DETAIL_MONITOR_RANGE)) 2494 2435 return; 2495 2436 2496 - if (r[10] == 0x02) 2497 - *(u8 **)data = r; 2437 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.flags) != 10); 2438 + 2439 + if (descriptor->data.other_data.data.range.flags == 0x02) 2440 + *res = descriptor; 2498 2441 } 2499 2442 2500 2443 /* Secondary GTF curve kicks in above some break frequency */ 2501 2444 static int 2502 - drm_gtf2_hbreak(struct edid *edid) 2445 + drm_gtf2_hbreak(const struct edid *edid) 2503 2446 { 2504 - u8 *r = NULL; 2447 + const struct detailed_timing *descriptor = NULL; 2505 2448 2506 - drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r); 2507 - return r ? (r[12] * 2) : 0; 2449 + drm_for_each_detailed_block(edid, find_gtf2, &descriptor); 2450 + 2451 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.gtf2.hfreq_start_khz) != 12); 2452 + 2453 + return descriptor ? 
descriptor->data.other_data.data.range.formula.gtf2.hfreq_start_khz * 2 : 0; 2508 2454 } 2509 2455 2510 2456 static int 2511 - drm_gtf2_2c(struct edid *edid) 2457 + drm_gtf2_2c(const struct edid *edid) 2512 2458 { 2513 - u8 *r = NULL; 2459 + const struct detailed_timing *descriptor = NULL; 2514 2460 2515 - drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r); 2516 - return r ? r[13] : 0; 2461 + drm_for_each_detailed_block(edid, find_gtf2, &descriptor); 2462 + 2463 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.gtf2.c) != 13); 2464 + 2465 + return descriptor ? descriptor->data.other_data.data.range.formula.gtf2.c : 0; 2517 2466 } 2518 2467 2519 2468 static int 2520 - drm_gtf2_m(struct edid *edid) 2469 + drm_gtf2_m(const struct edid *edid) 2521 2470 { 2522 - u8 *r = NULL; 2471 + const struct detailed_timing *descriptor = NULL; 2523 2472 2524 - drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r); 2525 - return r ? (r[15] << 8) + r[14] : 0; 2473 + drm_for_each_detailed_block(edid, find_gtf2, &descriptor); 2474 + 2475 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.gtf2.m) != 14); 2476 + 2477 + return descriptor ? le16_to_cpu(descriptor->data.other_data.data.range.formula.gtf2.m) : 0; 2526 2478 } 2527 2479 2528 2480 static int 2529 - drm_gtf2_k(struct edid *edid) 2481 + drm_gtf2_k(const struct edid *edid) 2530 2482 { 2531 - u8 *r = NULL; 2483 + const struct detailed_timing *descriptor = NULL; 2532 2484 2533 - drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r); 2534 - return r ? r[16] : 0; 2485 + drm_for_each_detailed_block(edid, find_gtf2, &descriptor); 2486 + 2487 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.gtf2.k) != 16); 2488 + 2489 + return descriptor ? 
descriptor->data.other_data.data.range.formula.gtf2.k : 0; 2535 2490 } 2536 2491 2537 2492 static int 2538 - drm_gtf2_2j(struct edid *edid) 2493 + drm_gtf2_2j(const struct edid *edid) 2539 2494 { 2540 - u8 *r = NULL; 2495 + const struct detailed_timing *descriptor = NULL; 2541 2496 2542 - drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r); 2543 - return r ? r[17] : 0; 2497 + drm_for_each_detailed_block(edid, find_gtf2, &descriptor); 2498 + 2499 + BUILD_BUG_ON(offsetof(typeof(*descriptor), data.other_data.data.range.formula.gtf2.j) != 17); 2500 + 2501 + return descriptor ? descriptor->data.other_data.data.range.formula.gtf2.j : 0; 2544 2502 } 2545 2503 2546 2504 /** 2547 2505 * standard_timing_level - get std. timing level(CVT/GTF/DMT) 2548 2506 * @edid: EDID block to scan 2549 2507 */ 2550 - static int standard_timing_level(struct edid *edid) 2508 + static int standard_timing_level(const struct edid *edid) 2551 2509 { 2552 2510 if (edid->revision >= 2) { 2553 2511 if (edid->revision >= 4 && (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF)) ··· 2607 2531 * and convert them into a real mode using CVT/GTF/DMT. 2608 2532 */ 2609 2533 static struct drm_display_mode * 2610 - drm_mode_std(struct drm_connector *connector, struct edid *edid, 2611 - struct std_timing *t) 2534 + drm_mode_std(struct drm_connector *connector, const struct edid *edid, 2535 + const struct std_timing *t) 2612 2536 { 2613 2537 struct drm_device *dev = connector->dev; 2614 2538 struct drm_display_mode *m, *mode = NULL; ··· 2726 2650 */ 2727 2651 static void 2728 2652 drm_mode_do_interlace_quirk(struct drm_display_mode *mode, 2729 - struct detailed_pixel_timing *pt) 2653 + const struct detailed_pixel_timing *pt) 2730 2654 { 2731 2655 int i; 2732 2656 static const struct { ··· 2769 2693 * return a new struct drm_display_mode. 
2770 2694 */ 2771 2695 static struct drm_display_mode *drm_mode_detailed(struct drm_device *dev, 2772 - struct edid *edid, 2773 - struct detailed_timing *timing, 2696 + const struct edid *edid, 2697 + const struct detailed_timing *timing, 2774 2698 u32 quirks) 2775 2699 { 2776 2700 struct drm_display_mode *mode; 2777 - struct detailed_pixel_timing *pt = &timing->data.pixel_data; 2701 + const struct detailed_pixel_timing *pt = &timing->data.pixel_data; 2778 2702 unsigned hactive = (pt->hactive_hblank_hi & 0xf0) << 4 | pt->hactive_lo; 2779 2703 unsigned vactive = (pt->vactive_vblank_hi & 0xf0) << 4 | pt->vactive_lo; 2780 2704 unsigned hblank = (pt->hactive_hblank_hi & 0xf) << 8 | pt->hblank_lo; ··· 2816 2740 return NULL; 2817 2741 2818 2742 if (quirks & EDID_QUIRK_135_CLOCK_TOO_HIGH) 2819 - timing->pixel_clock = cpu_to_le16(1088); 2820 - 2821 - mode->clock = le16_to_cpu(timing->pixel_clock) * 10; 2743 + mode->clock = 1088 * 10; 2744 + else 2745 + mode->clock = le16_to_cpu(timing->pixel_clock) * 10; 2822 2746 2823 2747 mode->hdisplay = hactive; 2824 2748 mode->hsync_start = mode->hdisplay + hsync_offset; ··· 2839 2763 drm_mode_do_interlace_quirk(mode, pt); 2840 2764 2841 2765 if (quirks & EDID_QUIRK_DETAILED_SYNC_PP) { 2842 - pt->misc |= DRM_EDID_PT_HSYNC_POSITIVE | DRM_EDID_PT_VSYNC_POSITIVE; 2766 + mode->flags |= DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC; 2767 + } else { 2768 + mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ? 2769 + DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC; 2770 + mode->flags |= (pt->misc & DRM_EDID_PT_VSYNC_POSITIVE) ? 2771 + DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC; 2843 2772 } 2844 - 2845 - mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ? 2846 - DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC; 2847 - mode->flags |= (pt->misc & DRM_EDID_PT_VSYNC_POSITIVE) ? 
2848 - DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC; 2849 2773 2850 2774 set_size: 2851 2775 mode->width_mm = pt->width_mm_lo | (pt->width_height_mm_hi & 0xf0) << 4; ··· 2869 2793 2870 2794 static bool 2871 2795 mode_in_hsync_range(const struct drm_display_mode *mode, 2872 - struct edid *edid, u8 *t) 2796 + const struct edid *edid, const u8 *t) 2873 2797 { 2874 2798 int hsync, hmin, hmax; 2875 2799 ··· 2886 2810 2887 2811 static bool 2888 2812 mode_in_vsync_range(const struct drm_display_mode *mode, 2889 - struct edid *edid, u8 *t) 2813 + const struct edid *edid, const u8 *t) 2890 2814 { 2891 2815 int vsync, vmin, vmax; 2892 2816 ··· 2902 2826 } 2903 2827 2904 2828 static u32 2905 - range_pixel_clock(struct edid *edid, u8 *t) 2829 + range_pixel_clock(const struct edid *edid, const u8 *t) 2906 2830 { 2907 2831 /* unspecified */ 2908 2832 if (t[9] == 0 || t[9] == 255) ··· 2917 2841 } 2918 2842 2919 2843 static bool 2920 - mode_in_range(const struct drm_display_mode *mode, struct edid *edid, 2921 - struct detailed_timing *timing) 2844 + mode_in_range(const struct drm_display_mode *mode, const struct edid *edid, 2845 + const struct detailed_timing *timing) 2922 2846 { 2923 2847 u32 max_clock; 2924 - u8 *t = (u8 *)timing; 2848 + const u8 *t = (const u8 *)timing; 2925 2849 2926 2850 if (!mode_in_hsync_range(mode, edid, t)) 2927 2851 return false; ··· 2963 2887 } 2964 2888 2965 2889 static int 2966 - drm_dmt_modes_for_range(struct drm_connector *connector, struct edid *edid, 2967 - struct detailed_timing *timing) 2890 + drm_dmt_modes_for_range(struct drm_connector *connector, const struct edid *edid, 2891 + const struct detailed_timing *timing) 2968 2892 { 2969 2893 int i, modes = 0; 2970 2894 struct drm_display_mode *newmode; ··· 2998 2922 } 2999 2923 3000 2924 static int 3001 - drm_gtf_modes_for_range(struct drm_connector *connector, struct edid *edid, 3002 - struct detailed_timing *timing) 2925 + drm_gtf_modes_for_range(struct drm_connector *connector, const struct edid 
*edid, 2926 + const struct detailed_timing *timing) 3003 2927 { 3004 2928 int i, modes = 0; 3005 2929 struct drm_display_mode *newmode; ··· 3027 2951 } 3028 2952 3029 2953 static int 3030 - drm_cvt_modes_for_range(struct drm_connector *connector, struct edid *edid, 3031 - struct detailed_timing *timing) 2954 + drm_cvt_modes_for_range(struct drm_connector *connector, const struct edid *edid, 2955 + const struct detailed_timing *timing) 3032 2956 { 3033 2957 int i, modes = 0; 3034 2958 struct drm_display_mode *newmode; ··· 3057 2981 } 3058 2982 3059 2983 static void 3060 - do_inferred_modes(struct detailed_timing *timing, void *c) 2984 + do_inferred_modes(const struct detailed_timing *timing, void *c) 3061 2985 { 3062 2986 struct detailed_mode_closure *closure = c; 3063 - struct detailed_non_pixel *data = &timing->data.other_data; 3064 - struct detailed_data_monitor_range *range = &data->data.range; 2987 + const struct detailed_non_pixel *data = &timing->data.other_data; 2988 + const struct detailed_data_monitor_range *range = &data->data.range; 3065 2989 3066 - if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_MONITOR_RANGE)) 2990 + if (!is_display_descriptor(timing, EDID_DETAIL_MONITOR_RANGE)) 3067 2991 return; 3068 2992 3069 2993 closure->modes += drm_dmt_modes_for_range(closure->connector, ··· 3095 3019 } 3096 3020 3097 3021 static int 3098 - add_inferred_modes(struct drm_connector *connector, struct edid *edid) 3022 + add_inferred_modes(struct drm_connector *connector, const struct edid *edid) 3099 3023 { 3100 3024 struct detailed_mode_closure closure = { 3101 3025 .connector = connector, ··· 3103 3027 }; 3104 3028 3105 3029 if (version_greater(edid, 1, 0)) 3106 - drm_for_each_detailed_block((u8 *)edid, do_inferred_modes, 3107 - &closure); 3030 + drm_for_each_detailed_block(edid, do_inferred_modes, &closure); 3108 3031 3109 3032 return closure.modes; 3110 3033 } 3111 3034 3112 3035 static int 3113 - drm_est3_modes(struct drm_connector *connector, struct 
detailed_timing *timing) 3036 + drm_est3_modes(struct drm_connector *connector, const struct detailed_timing *timing) 3114 3037 { 3115 3038 int i, j, m, modes = 0; 3116 3039 struct drm_display_mode *mode; 3117 - u8 *est = ((u8 *)timing) + 6; 3040 + const u8 *est = ((const u8 *)timing) + 6; 3118 3041 3119 3042 for (i = 0; i < 6; i++) { 3120 3043 for (j = 7; j >= 0; j--) { ··· 3138 3063 } 3139 3064 3140 3065 static void 3141 - do_established_modes(struct detailed_timing *timing, void *c) 3066 + do_established_modes(const struct detailed_timing *timing, void *c) 3142 3067 { 3143 3068 struct detailed_mode_closure *closure = c; 3144 3069 3145 - if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_EST_TIMINGS)) 3070 + if (!is_display_descriptor(timing, EDID_DETAIL_EST_TIMINGS)) 3146 3071 return; 3147 3072 3148 3073 closure->modes += drm_est3_modes(closure->connector, timing); ··· 3157 3082 * (defined above). Tease them out and add them to the global modes list. 3158 3083 */ 3159 3084 static int 3160 - add_established_modes(struct drm_connector *connector, struct edid *edid) 3085 + add_established_modes(struct drm_connector *connector, const struct edid *edid) 3161 3086 { 3162 3087 struct drm_device *dev = connector->dev; 3163 3088 unsigned long est_bits = edid->established_timings.t1 | ··· 3182 3107 } 3183 3108 3184 3109 if (version_greater(edid, 1, 0)) 3185 - drm_for_each_detailed_block((u8 *)edid, 3186 - do_established_modes, &closure); 3110 + drm_for_each_detailed_block(edid, do_established_modes, 3111 + &closure); 3187 3112 3188 3113 return modes + closure.modes; 3189 3114 } 3190 3115 3191 3116 static void 3192 - do_standard_modes(struct detailed_timing *timing, void *c) 3117 + do_standard_modes(const struct detailed_timing *timing, void *c) 3193 3118 { 3194 3119 struct detailed_mode_closure *closure = c; 3195 - struct detailed_non_pixel *data = &timing->data.other_data; 3120 + const struct detailed_non_pixel *data = &timing->data.other_data; 3196 3121 struct 
drm_connector *connector = closure->connector; 3197 - struct edid *edid = closure->edid; 3122 + const struct edid *edid = closure->edid; 3198 3123 int i; 3199 3124 3200 - if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_STD_MODES)) 3125 + if (!is_display_descriptor(timing, EDID_DETAIL_STD_MODES)) 3201 3126 return; 3202 3127 3203 3128 for (i = 0; i < 6; i++) { 3204 - struct std_timing *std = &data->data.timings[i]; 3129 + const struct std_timing *std = &data->data.timings[i]; 3205 3130 struct drm_display_mode *newmode; 3206 3131 3207 3132 newmode = drm_mode_std(connector, edid, std); ··· 3221 3146 * GTF or CVT. Grab them from @edid and add them to the list. 3222 3147 */ 3223 3148 static int 3224 - add_standard_modes(struct drm_connector *connector, struct edid *edid) 3149 + add_standard_modes(struct drm_connector *connector, const struct edid *edid) 3225 3150 { 3226 3151 int i, modes = 0; 3227 3152 struct detailed_mode_closure closure = { ··· 3241 3166 } 3242 3167 3243 3168 if (version_greater(edid, 1, 0)) 3244 - drm_for_each_detailed_block((u8 *)edid, do_standard_modes, 3169 + drm_for_each_detailed_block(edid, do_standard_modes, 3245 3170 &closure); 3246 3171 3247 3172 /* XXX should also look for standard codes in VTB blocks */ ··· 3250 3175 } 3251 3176 3252 3177 static int drm_cvt_modes(struct drm_connector *connector, 3253 - struct detailed_timing *timing) 3178 + const struct detailed_timing *timing) 3254 3179 { 3255 3180 int i, j, modes = 0; 3256 3181 struct drm_display_mode *newmode; 3257 3182 struct drm_device *dev = connector->dev; 3258 - struct cvt_timing *cvt; 3183 + const struct cvt_timing *cvt; 3259 3184 const int rates[] = { 60, 85, 75, 60, 50 }; 3260 3185 const u8 empty[3] = { 0, 0, 0 }; 3261 3186 ··· 3302 3227 } 3303 3228 3304 3229 static void 3305 - do_cvt_mode(struct detailed_timing *timing, void *c) 3230 + do_cvt_mode(const struct detailed_timing *timing, void *c) 3306 3231 { 3307 3232 struct detailed_mode_closure *closure = c; 3308 3233 
3309 - if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_CVT_3BYTE)) 3234 + if (!is_display_descriptor(timing, EDID_DETAIL_CVT_3BYTE)) 3310 3235 return; 3311 3236 3312 3237 closure->modes += drm_cvt_modes(closure->connector, timing); 3313 3238 } 3314 3239 3315 3240 static int 3316 - add_cvt_modes(struct drm_connector *connector, struct edid *edid) 3241 + add_cvt_modes(struct drm_connector *connector, const struct edid *edid) 3317 3242 { 3318 3243 struct detailed_mode_closure closure = { 3319 3244 .connector = connector, ··· 3321 3246 }; 3322 3247 3323 3248 if (version_greater(edid, 1, 2)) 3324 - drm_for_each_detailed_block((u8 *)edid, do_cvt_mode, &closure); 3249 + drm_for_each_detailed_block(edid, do_cvt_mode, &closure); 3325 3250 3326 3251 /* XXX should also look for CVT codes in VTB blocks */ 3327 3252 ··· 3331 3256 static void fixup_detailed_cea_mode_clock(struct drm_display_mode *mode); 3332 3257 3333 3258 static void 3334 - do_detailed_mode(struct detailed_timing *timing, void *c) 3259 + do_detailed_mode(const struct detailed_timing *timing, void *c) 3335 3260 { 3336 3261 struct detailed_mode_closure *closure = c; 3337 3262 struct drm_display_mode *newmode; 3338 3263 3339 - if (!is_detailed_timing_descriptor((const u8 *)timing)) 3264 + if (!is_detailed_timing_descriptor(timing)) 3340 3265 return; 3341 3266 3342 3267 newmode = drm_mode_detailed(closure->connector->dev, ··· 3367 3292 * @quirks: quirks to apply 3368 3293 */ 3369 3294 static int 3370 - add_detailed_modes(struct drm_connector *connector, struct edid *edid, 3295 + add_detailed_modes(struct drm_connector *connector, const struct edid *edid, 3371 3296 u32 quirks) 3372 3297 { 3373 3298 struct detailed_mode_closure closure = { ··· 3381 3306 closure.preferred = 3382 3307 (edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING); 3383 3308 3384 - drm_for_each_detailed_block((u8 *)edid, do_detailed_mode, &closure); 3309 + drm_for_each_detailed_block(edid, do_detailed_mode, &closure); 3385 3310 3386 
3311 return closure.modes; 3387 3312 } ··· 3416 3341 /* Find CEA extension */ 3417 3342 for (i = *ext_index; i < edid->extensions; i++) { 3418 3343 edid_ext = (const u8 *)edid + EDID_LENGTH * (i + 1); 3419 - if (edid_ext[0] == ext_id) 3344 + if (edid_block_tag(edid_ext) == ext_id) 3420 3345 break; 3421 3346 } 3422 3347 ··· 3713 3638 } 3714 3639 3715 3640 static int 3716 - add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid) 3641 + add_alternate_cea_modes(struct drm_connector *connector, const struct edid *edid) 3717 3642 { 3718 3643 struct drm_device *dev = connector->dev; 3719 3644 struct drm_display_mode *mode, *tmp; ··· 4394 4319 } 4395 4320 4396 4321 static int 4397 - add_cea_modes(struct drm_connector *connector, struct edid *edid) 4322 + add_cea_modes(struct drm_connector *connector, const struct edid *edid) 4398 4323 { 4399 4324 const u8 *cea = drm_find_cea_extension(edid); 4400 4325 const u8 *db, *hdmi = NULL, *video = NULL; ··· 4564 4489 } 4565 4490 4566 4491 static void 4567 - monitor_name(struct detailed_timing *t, void *data) 4492 + monitor_name(const struct detailed_timing *timing, void *data) 4568 4493 { 4569 - if (!is_display_descriptor((const u8 *)t, EDID_DETAIL_MONITOR_NAME)) 4494 + const char **res = data; 4495 + 4496 + if (!is_display_descriptor(timing, EDID_DETAIL_MONITOR_NAME)) 4570 4497 return; 4571 4498 4572 - *(u8 **)data = t->data.other_data.data.str.str; 4499 + *res = timing->data.other_data.data.str.str; 4573 4500 } 4574 4501 4575 - static int get_monitor_name(struct edid *edid, char name[13]) 4502 + static int get_monitor_name(const struct edid *edid, char name[13]) 4576 4503 { 4577 - char *edid_name = NULL; 4504 + const char *edid_name = NULL; 4578 4505 int mnl; 4579 4506 4580 4507 if (!edid || !name) 4581 4508 return 0; 4582 4509 4583 - drm_for_each_detailed_block((u8 *)edid, monitor_name, &edid_name); 4510 + drm_for_each_detailed_block(edid, monitor_name, &edid_name); 4584 4511 for (mnl = 0; edid_name && mnl < 
13; mnl++) { 4585 4512 if (edid_name[mnl] == 0x0a) 4586 4513 break; ··· 4600 4523 * @bufsize: The size of the name buffer (should be at least 14 chars.) 4601 4524 * 4602 4525 */ 4603 - void drm_edid_get_monitor_name(struct edid *edid, char *name, int bufsize) 4526 + void drm_edid_get_monitor_name(const struct edid *edid, char *name, int bufsize) 4604 4527 { 4605 4528 int name_length; 4606 4529 char buf[13]; ··· 4634 4557 * Fill the ELD (EDID-Like Data) buffer for passing to the audio driver. The 4635 4558 * HDCP and Port_ID ELD fields are left for the graphics driver to fill in. 4636 4559 */ 4637 - static void drm_edid_to_eld(struct drm_connector *connector, struct edid *edid) 4560 + static void drm_edid_to_eld(struct drm_connector *connector, 4561 + const struct edid *edid) 4638 4562 { 4639 4563 uint8_t *eld = connector->eld; 4640 4564 const u8 *cea; ··· 4731 4653 * 4732 4654 * Return: The number of found SADs or negative number on error. 4733 4655 */ 4734 - int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads) 4656 + int drm_edid_to_sad(const struct edid *edid, struct cea_sad **sads) 4735 4657 { 4736 4658 int count = 0; 4737 4659 int i, start, end, dbl; ··· 4793 4715 * Return: The number of found Speaker Allocation Blocks or negative number on 4794 4716 * error. 4795 4717 */ 4796 - int drm_edid_to_speaker_allocation(struct edid *edid, u8 **sadb) 4718 + int drm_edid_to_speaker_allocation(const struct edid *edid, u8 **sadb) 4797 4719 { 4798 4720 int count = 0; 4799 4721 int i, start, end, dbl; ··· 4888 4810 * 4889 4811 * Return: True if the monitor is HDMI, false if not or unknown. 4890 4812 */ 4891 - bool drm_detect_hdmi_monitor(struct edid *edid) 4813 + bool drm_detect_hdmi_monitor(const struct edid *edid) 4892 4814 { 4893 4815 const u8 *edid_ext; 4894 4816 int i; ··· 4926 4848 * 4927 4849 * Return: True if the monitor supports audio, false otherwise. 
4928 4850 */ 4929 - bool drm_detect_monitor_audio(struct edid *edid) 4851 + bool drm_detect_monitor_audio(const struct edid *edid) 4930 4852 { 4931 4853 const u8 *edid_ext; 4932 4854 int i, j; ··· 5297 5219 } 5298 5220 5299 5221 static 5300 - void get_monitor_range(struct detailed_timing *timing, 5222 + void get_monitor_range(const struct detailed_timing *timing, 5301 5223 void *info_monitor_range) 5302 5224 { 5303 5225 struct drm_monitor_range_info *monitor_range = info_monitor_range; 5304 5226 const struct detailed_non_pixel *data = &timing->data.other_data; 5305 5227 const struct detailed_data_monitor_range *range = &data->data.range; 5306 5228 5307 - if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_MONITOR_RANGE)) 5229 + if (!is_display_descriptor(timing, EDID_DETAIL_MONITOR_RANGE)) 5308 5230 return; 5309 5231 5310 5232 /* ··· 5329 5251 if (!version_greater(edid, 1, 1)) 5330 5252 return; 5331 5253 5332 - drm_for_each_detailed_block((u8 *)edid, get_monitor_range, 5254 + drm_for_each_detailed_block(edid, get_monitor_range, 5333 5255 &info->monitor_range); 5334 5256 5335 5257 DRM_DEBUG_KMS("Supported Monitor Refresh rate range is %d Hz - %d Hz\n", ··· 5593 5515 } 5594 5516 5595 5517 static int add_displayid_detailed_modes(struct drm_connector *connector, 5596 - struct edid *edid) 5518 + const struct edid *edid) 5597 5519 { 5598 5520 const struct displayid_block *block; 5599 5521 struct displayid_iter iter; ··· 5610 5532 return num_modes; 5611 5533 } 5612 5534 5613 - /** 5614 - * drm_add_edid_modes - add modes from EDID data, if available 5615 - * @connector: connector we're probing 5616 - * @edid: EDID data 5617 - * 5618 - * Add the specified modes to the connector's mode list. Also fills out the 5619 - * &drm_display_info structure and ELD in @connector with any information which 5620 - * can be derived from the edid. 5621 - * 5622 - * Return: The number of modes added or 0 if we couldn't find any. 
5623 - */ 5624 - int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid) 5535 + static int drm_edid_connector_update(struct drm_connector *connector, 5536 + const struct edid *edid) 5625 5537 { 5626 5538 int num_modes = 0; 5627 5539 u32 quirks; 5628 5540 5629 5541 if (edid == NULL) { 5630 5542 clear_eld(connector); 5631 - return 0; 5632 - } 5633 - if (!drm_edid_is_valid(edid)) { 5634 - clear_eld(connector); 5635 - drm_warn(connector->dev, "%s: EDID invalid.\n", 5636 - connector->name); 5637 5543 return 0; 5638 5544 } 5639 5545 ··· 5670 5608 connector->display_info.bpc = 12; 5671 5609 5672 5610 return num_modes; 5611 + } 5612 + 5613 + /** 5614 + * drm_add_edid_modes - add modes from EDID data, if available 5615 + * @connector: connector we're probing 5616 + * @edid: EDID data 5617 + * 5618 + * Add the specified modes to the connector's mode list. Also fills out the 5619 + * &drm_display_info structure and ELD in @connector with any information which 5620 + * can be derived from the edid. 5621 + * 5622 + * Return: The number of modes added or 0 if we couldn't find any. 5623 + */ 5624 + int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid) 5625 + { 5626 + if (edid && !drm_edid_is_valid(edid)) { 5627 + drm_warn(connector->dev, "%s: EDID invalid.\n", 5628 + connector->name); 5629 + edid = NULL; 5630 + } 5631 + 5632 + return drm_edid_connector_update(connector, edid); 5673 5633 } 5674 5634 EXPORT_SYMBOL(drm_add_edid_modes); 5675 5635
+28 -48
drivers/gpu/drm/drm_format_helper.c
··· 594 594 } 595 595 EXPORT_SYMBOL(drm_fb_blit_toio); 596 596 597 - static void drm_fb_gray8_to_mono_reversed_line(u8 *dst, const u8 *src, unsigned int pixels, 598 - unsigned int start_offset, unsigned int end_len) 597 + 598 + static void drm_fb_gray8_to_mono_line(u8 *dst, const u8 *src, unsigned int pixels) 599 599 { 600 - unsigned int xb, i; 600 + while (pixels) { 601 + unsigned int i, bits = min(pixels, 8U); 602 + u8 byte = 0; 601 603 602 - for (xb = 0; xb < pixels; xb++) { 603 - unsigned int start = 0, end = 8; 604 - u8 byte = 0x00; 605 - 606 - if (xb == 0 && start_offset) 607 - start = start_offset; 608 - 609 - if (xb == pixels - 1 && end_len) 610 - end = end_len; 611 - 612 - for (i = start; i < end; i++) { 613 - unsigned int x = xb * 8 + i; 614 - 615 - byte >>= 1; 616 - if (src[x] >> 7) 617 - byte |= BIT(7); 604 + for (i = 0; i < bits; i++, pixels--) { 605 + if (*src++ >= 128) 606 + byte |= BIT(i); 618 607 } 619 608 *dst++ = byte; 620 609 } 621 610 } 622 611 623 612 /** 624 - * drm_fb_xrgb8888_to_mono_reversed - Convert XRGB8888 to reversed monochrome 625 - * @dst: reversed monochrome destination buffer 613 + * drm_fb_xrgb8888_to_mono - Convert XRGB8888 to monochrome 614 + * @dst: monochrome destination buffer (0=black, 1=white) 626 615 * @dst_pitch: Number of bytes between two consecutive scanlines within dst 627 - * @src: XRGB8888 source buffer 616 + * @vaddr: XRGB8888 source buffer 628 617 * @fb: DRM framebuffer 629 618 * @clip: Clip rectangle area to copy 630 619 * ··· 622 633 * and use this function to convert to the native format. 623 634 * 624 635 * This function uses drm_fb_xrgb8888_to_gray8() to convert to grayscale and 625 - * then the result is converted from grayscale to reversed monohrome. 636 + * then the result is converted from grayscale to monochrome. 
637 + * 638 + * The first pixel (upper left corner of the clip rectangle) will be converted 639 + * and copied to the first bit (LSB) in the first byte of the monochrome 640 + * destination buffer. 641 + * If the caller requires that the first pixel in a byte must be located at an 642 + * x-coordinate that is a multiple of 8, then the caller must take care itself 643 + * of supplying a suitable clip rectangle. 626 644 */ 627 - void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const void *vaddr, 628 - const struct drm_framebuffer *fb, const struct drm_rect *clip) 645 + void drm_fb_xrgb8888_to_mono(void *dst, unsigned int dst_pitch, const void *vaddr, 646 + const struct drm_framebuffer *fb, const struct drm_rect *clip) 629 647 { 630 648 unsigned int linepixels = drm_rect_width(clip); 631 - unsigned int lines = clip->y2 - clip->y1; 649 + unsigned int lines = drm_rect_height(clip); 632 650 unsigned int cpp = fb->format->cpp[0]; 633 651 unsigned int len_src32 = linepixels * cpp; 634 652 struct drm_device *dev = fb->dev; 635 - unsigned int start_offset, end_len; 636 653 unsigned int y; 637 654 u8 *mono = dst, *gray8; 638 655 u32 *src32; ··· 647 652 return; 648 653 649 654 /* 650 - * The reversed mono destination buffer contains 1 bit per pixel 651 - * and destination scanlines have to be in multiple of 8 pixels. 655 + * The mono destination buffer contains 1 bit per pixel 652 656 */ 653 657 if (!dst_pitch) 654 658 dst_pitch = DIV_ROUND_UP(linepixels, 8); 655 - 656 - drm_WARN_ONCE(dev, dst_pitch % 8 != 0, "dst_pitch is not a multiple of 8\n"); 657 659 658 660 /* 659 661 * The cma memory is write-combined so reads are uncached. 660 662 * Speed up by fetching one line at a time. 661 663 * 662 - * Also, format conversion from XR24 to reversed monochrome 663 - * are done line-by-line but are converted to 8-bit grayscale 664 - * as an intermediate step. 
664 + * Also, format conversion from XR24 to monochrome are done 665 + * line-by-line but are converted to 8-bit grayscale as an 666 + * intermediate step. 665 667 * 666 668 * Allocate a buffer to be used for both copying from the cma 667 669 * memory and to store the intermediate grayscale line pixels. ··· 669 677 670 678 gray8 = (u8 *)src32 + len_src32; 671 679 672 - /* 673 - * For damage handling, it is possible that only parts of the source 674 - * buffer is copied and this could lead to start and end pixels that 675 - * are not aligned to multiple of 8. 676 - * 677 - * Calculate if the start and end pixels are not aligned and set the 678 - * offsets for the reversed mono line conversion function to adjust. 679 - */ 680 - start_offset = clip->x1 % 8; 681 - end_len = clip->x2 % 8; 682 - 683 680 vaddr += clip_offset(clip, fb->pitches[0], cpp); 684 681 for (y = 0; y < lines; y++) { 685 682 src32 = memcpy(src32, vaddr, len_src32); 686 683 drm_fb_xrgb8888_to_gray8_line(gray8, src32, linepixels); 687 - drm_fb_gray8_to_mono_reversed_line(mono, gray8, dst_pitch, 688 - start_offset, end_len); 684 + drm_fb_gray8_to_mono_line(mono, gray8, linepixels); 689 685 vaddr += fb->pitches[0]; 690 686 mono += dst_pitch; 691 687 } 692 688 693 689 kfree(src32); 694 690 } 695 - EXPORT_SYMBOL(drm_fb_xrgb8888_to_mono_reversed); 691 + EXPORT_SYMBOL(drm_fb_xrgb8888_to_mono);
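The reworked `drm_fb_gray8_to_mono_line()` above packs pixels LSB-first (first pixel in bit 0) with a `>= 128` threshold, which is what lets the commit drop the old start/end-offset bookkeeping. A minimal standalone sketch of the same packing, re-implemented for userspace illustration (not the kernel code itself):

```c
#include <stdint.h>
#include <assert.h>

/* Pack 8-bit grayscale pixels into a monochrome bitmap, LSB-first:
 * pixel 0 of each byte lands in bit 0, and a pixel is "white" (1)
 * when its gray value is >= 128 -- mirroring the threshold used by
 * drm_fb_gray8_to_mono_line() in the diff above.
 */
static void gray8_to_mono_line(uint8_t *dst, const uint8_t *src,
			       unsigned int pixels)
{
	while (pixels) {
		unsigned int i, bits = pixels < 8 ? pixels : 8;
		uint8_t byte = 0;

		for (i = 0; i < bits; i++, pixels--) {
			if (*src++ >= 128)
				byte |= 1u << i;
		}
		*dst++ = byte;
	}
}
```

A trailing partial byte (clip width not a multiple of 8) simply leaves the unused high bits zero, which is why the caller no longer needs to align the clip rectangle itself.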
-80
drivers/gpu/drm/drm_gem.c
··· 1273 1273 ww_acquire_fini(acquire_ctx); 1274 1274 } 1275 1275 EXPORT_SYMBOL(drm_gem_unlock_reservations); 1276 - 1277 - /** 1278 - * drm_gem_fence_array_add - Adds the fence to an array of fences to be 1279 - * waited on, deduplicating fences from the same context. 1280 - * 1281 - * @fence_array: array of dma_fence * for the job to block on. 1282 - * @fence: the dma_fence to add to the list of dependencies. 1283 - * 1284 - * This functions consumes the reference for @fence both on success and error 1285 - * cases. 1286 - * 1287 - * Returns: 1288 - * 0 on success, or an error on failing to expand the array. 1289 - */ 1290 - int drm_gem_fence_array_add(struct xarray *fence_array, 1291 - struct dma_fence *fence) 1292 - { 1293 - struct dma_fence *entry; 1294 - unsigned long index; 1295 - u32 id = 0; 1296 - int ret; 1297 - 1298 - if (!fence) 1299 - return 0; 1300 - 1301 - /* Deduplicate if we already depend on a fence from the same context. 1302 - * This lets the size of the array of deps scale with the number of 1303 - * engines involved, rather than the number of BOs. 1304 - */ 1305 - xa_for_each(fence_array, index, entry) { 1306 - if (entry->context != fence->context) 1307 - continue; 1308 - 1309 - if (dma_fence_is_later(fence, entry)) { 1310 - dma_fence_put(entry); 1311 - xa_store(fence_array, index, fence, GFP_KERNEL); 1312 - } else { 1313 - dma_fence_put(fence); 1314 - } 1315 - return 0; 1316 - } 1317 - 1318 - ret = xa_alloc(fence_array, &id, fence, xa_limit_32b, GFP_KERNEL); 1319 - if (ret != 0) 1320 - dma_fence_put(fence); 1321 - 1322 - return ret; 1323 - } 1324 - EXPORT_SYMBOL(drm_gem_fence_array_add); 1325 - 1326 - /** 1327 - * drm_gem_fence_array_add_implicit - Adds the implicit dependencies tracked 1328 - * in the GEM object's reservation object to an array of dma_fences for use in 1329 - * scheduling a rendering job. 
1330 - * 1331 - * This should be called after drm_gem_lock_reservations() on your array of 1332 - * GEM objects used in the job but before updating the reservations with your 1333 - * own fences. 1334 - * 1335 - * @fence_array: array of dma_fence * for the job to block on. 1336 - * @obj: the gem object to add new dependencies from. 1337 - * @write: whether the job might write the object (so we need to depend on 1338 - * shared fences in the reservation object). 1339 - */ 1340 - int drm_gem_fence_array_add_implicit(struct xarray *fence_array, 1341 - struct drm_gem_object *obj, 1342 - bool write) 1343 - { 1344 - struct dma_resv_iter cursor; 1345 - struct dma_fence *fence; 1346 - int ret = 0; 1347 - 1348 - dma_resv_for_each_fence(&cursor, obj->resv, write, fence) { 1349 - ret = drm_gem_fence_array_add(fence_array, fence); 1350 - if (ret) 1351 - break; 1352 - } 1353 - return ret; 1354 - } 1355 - EXPORT_SYMBOL(drm_gem_fence_array_add_implicit);
+7 -11
drivers/gpu/drm/drm_gem_atomic_helper.c
··· 143 143 */ 144 144 int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) 145 145 { 146 - struct dma_resv_iter cursor; 147 146 struct drm_gem_object *obj; 148 147 struct dma_fence *fence; 148 + int ret; 149 149 150 150 if (!state->fb) 151 151 return 0; 152 152 153 153 obj = drm_gem_fb_get_obj(state->fb, 0); 154 - dma_resv_iter_begin(&cursor, obj->resv, false); 155 - dma_resv_for_each_fence_unlocked(&cursor, fence) { 156 - /* TODO: Currently there should be only one write fence, so this 157 - * here works fine. But drm_atomic_set_fence_for_plane() should 158 - * be changed to be able to handle more fences in general for 159 - * multiple BOs per fb anyway. */ 160 - dma_fence_get(fence); 161 - break; 162 - } 163 - dma_resv_iter_end(&cursor); 154 + ret = dma_resv_get_singleton(obj->resv, false, &fence); 155 + if (ret) 156 + return ret; 164 157 158 + /* TODO: drm_atomic_set_fence_for_plane() should be changed to be able 159 + * to handle more fences in general for multiple BOs per fb. 160 + */ 165 161 drm_atomic_set_fence_for_plane(state, fence); 166 162 return 0; 167 163 }
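The hunk above replaces an open-coded fence iteration with `dma_resv_get_singleton()`, which hands back one fence representing all current fences of the reservation object. As a rough userspace analogy only (toy types, single timeline; the real helper must merge fences from many contexts into a `dma_fence_array`):

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Toy fence: just a sequence number. The real struct dma_fence also
 * carries a context, ops table and refcount. */
struct toy_fence {
	uint64_t seqno;
};

/* Illustration of the "get singleton" idea: collapse a set of fences
 * into one fence that signals no earlier than any of them. On a
 * single timeline that is simply the highest seqno. */
static struct toy_fence *toy_get_singleton(struct toy_fence **fences,
					   size_t n)
{
	struct toy_fence *latest = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (!latest || fences[i]->seqno > latest->seqno)
			latest = fences[i];
	}
	return latest;
}
```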
+1 -1
drivers/gpu/drm/drm_gem_vram_helper.c
··· 867 867 if (!tt) 868 868 return NULL; 869 869 870 - ret = ttm_tt_init(tt, bo, page_flags, ttm_cached); 870 + ret = ttm_tt_init(tt, bo, page_flags, ttm_cached, 0); 871 871 if (ret < 0) 872 872 goto err_ttm_tt_init; 873 873
+17
drivers/gpu/drm/drm_modes.c
··· 942 942 EXPORT_SYMBOL(drm_mode_copy); 943 943 944 944 /** 945 + * drm_mode_init - initialize the mode from another mode 946 + * @dst: mode to overwrite 947 + * @src: mode to copy 948 + * 949 + * Copy an existing mode into another mode, zeroing the 950 + * list head of the destination mode. Typically used 951 + * to guarantee the list head is not left with stack 952 + * garbage in on-stack modes. 953 + */ 954 + void drm_mode_init(struct drm_display_mode *dst, const struct drm_display_mode *src) 955 + { 956 + memset(dst, 0, sizeof(*dst)); 957 + drm_mode_copy(dst, src); 958 + } 959 + EXPORT_SYMBOL(drm_mode_init); 960 + 961 + /** 945 962 * drm_mode_duplicate - allocate and duplicate an existing mode 946 963 * @dev: drm_device to allocate the duplicated mode for 947 964 * @mode: mode to duplicate
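The new `drm_mode_init()` zeroes the destination before copying, so an on-stack mode's embedded list head never carries stack garbage, while `drm_mode_copy()` deliberately preserves the destination's list head. A toy userspace sketch of why the `memset()` matters (hypothetical cut-down struct, not the real `struct drm_display_mode`):

```c
#include <string.h>
#include <assert.h>

/* Toy stand-in for a structure with an embedded list head. */
struct toy_list_head {
	struct toy_list_head *next, *prev;
};

struct toy_mode {
	struct toy_list_head head;	/* stack garbage if uninitialized */
	int hdisplay, vdisplay;
};

/* Like drm_mode_copy(): copy payload fields but keep the
 * destination's own list head intact. */
static void toy_mode_copy(struct toy_mode *dst, const struct toy_mode *src)
{
	struct toy_list_head saved = dst->head;

	*dst = *src;
	dst->head = saved;
}

/* Like drm_mode_init(): zero first, then copy, so an on-stack dst
 * ends up with a well-defined (NULL) list head. */
static void toy_mode_init(struct toy_mode *dst, const struct toy_mode *src)
{
	memset(dst, 0, sizeof(*dst));
	toy_mode_copy(dst, src);
}
```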
+1 -4
drivers/gpu/drm/etnaviv/etnaviv_gem.h
··· 80 80 u64 va; 81 81 struct etnaviv_gem_object *obj; 82 82 struct etnaviv_vram_mapping *mapping; 83 - struct dma_fence *excl; 84 - unsigned int nr_shared; 85 - struct dma_fence **shared; 86 83 }; 87 84 88 85 /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, ··· 92 95 struct etnaviv_file_private *ctx; 93 96 struct etnaviv_gpu *gpu; 94 97 struct etnaviv_iommu_context *mmu_context, *prev_mmu_context; 95 - struct dma_fence *out_fence, *in_fence; 98 + struct dma_fence *out_fence; 96 99 int out_fence_id; 97 100 struct list_head node; /* GPU active submit list */ 98 101 struct etnaviv_cmdbuf cmdbuf;
+36 -31
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
··· 179 179 struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; 180 180 struct dma_resv *robj = bo->obj->base.resv; 181 181 182 - if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) { 183 - ret = dma_resv_reserve_shared(robj, 1); 184 - if (ret) 185 - return ret; 186 - } 182 + ret = dma_resv_reserve_fences(robj, 1); 183 + if (ret) 184 + return ret; 187 185 188 186 if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) 189 187 continue; 190 188 191 - if (bo->flags & ETNA_SUBMIT_BO_WRITE) { 192 - ret = dma_resv_get_fences(robj, true, &bo->nr_shared, 193 - &bo->shared); 194 - if (ret) 195 - return ret; 196 - } else { 197 - bo->excl = dma_fence_get(dma_resv_excl_fence(robj)); 198 - } 199 - 189 + ret = drm_sched_job_add_implicit_dependencies(&submit->sched_job, 190 + &bo->obj->base, 191 + bo->flags & ETNA_SUBMIT_BO_WRITE); 192 + if (ret) 193 + return ret; 200 194 } 201 195 202 196 return ret; ··· 396 402 397 403 wake_up_all(&submit->gpu->fence_event); 398 404 399 - if (submit->in_fence) 400 - dma_fence_put(submit->in_fence); 401 405 if (submit->out_fence) { 402 406 /* first remove from IDR, so fence can not be found anymore */ 403 407 mutex_lock(&submit->gpu->fence_lock); ··· 526 534 ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &submit->cmdbuf, 527 535 ALIGN(args->stream_size, 8) + 8); 528 536 if (ret) 529 - goto err_submit_objects; 537 + goto err_submit_put; 530 538 531 539 submit->ctx = file->driver_priv; 532 540 submit->mmu_context = etnaviv_iommu_context_get(submit->ctx->mmu); 533 541 submit->exec_state = args->exec_state; 534 542 submit->flags = args->flags; 535 543 544 + ret = drm_sched_job_init(&submit->sched_job, 545 + &ctx->sched_entity[args->pipe], 546 + submit->ctx); 547 + if (ret) 548 + goto err_submit_put; 549 + 536 550 ret = submit_lookup_objects(submit, file, bos, args->nr_bos); 537 551 if (ret) 538 - goto err_submit_objects; 552 + goto err_submit_job; 539 553 540 554 if ((priv->mmu_global->version != ETNAVIV_IOMMU_V2) && 541 555 !etnaviv_cmd_validate_one(gpu, stream, 
args->stream_size / 4, 542 556 relocs, args->nr_relocs)) { 543 557 ret = -EINVAL; 544 - goto err_submit_objects; 558 + goto err_submit_job; 545 559 } 546 560 547 561 if (args->flags & ETNA_SUBMIT_FENCE_FD_IN) { 548 - submit->in_fence = sync_file_get_fence(args->fence_fd); 549 - if (!submit->in_fence) { 562 + struct dma_fence *in_fence = sync_file_get_fence(args->fence_fd); 563 + if (!in_fence) { 550 564 ret = -EINVAL; 551 - goto err_submit_objects; 565 + goto err_submit_job; 552 566 } 567 + 568 + ret = drm_sched_job_add_dependency(&submit->sched_job, 569 + in_fence); 570 + if (ret) 571 + goto err_submit_job; 553 572 } 554 573 555 574 ret = submit_pin_objects(submit); 556 575 if (ret) 557 - goto err_submit_objects; 576 + goto err_submit_job; 558 577 559 578 ret = submit_reloc(submit, stream, args->stream_size / 4, 560 579 relocs, args->nr_relocs); 561 580 if (ret) 562 - goto err_submit_objects; 581 + goto err_submit_job; 563 582 564 583 ret = submit_perfmon_validate(submit, args->exec_state, pmrs); 565 584 if (ret) 566 - goto err_submit_objects; 585 + goto err_submit_job; 567 586 568 587 memcpy(submit->cmdbuf.vaddr, stream, args->stream_size); 569 588 570 589 ret = submit_lock_objects(submit, &ticket); 571 590 if (ret) 572 - goto err_submit_objects; 591 + goto err_submit_job; 573 592 574 593 ret = submit_fence_sync(submit); 575 594 if (ret) 576 - goto err_submit_objects; 595 + goto err_submit_job; 577 596 578 - ret = etnaviv_sched_push_job(&ctx->sched_entity[args->pipe], submit); 597 + ret = etnaviv_sched_push_job(submit); 579 598 if (ret) 580 - goto err_submit_objects; 599 + goto err_submit_job; 581 600 582 601 submit_attach_object_fences(submit); 583 602 ··· 602 599 sync_file = sync_file_create(submit->out_fence); 603 600 if (!sync_file) { 604 601 ret = -ENOMEM; 605 - goto err_submit_objects; 602 + goto err_submit_job; 606 603 } 607 604 fd_install(out_fence_fd, sync_file->file); 608 605 } ··· 610 607 args->fence_fd = out_fence_fd; 611 608 args->fence = 
submit->out_fence_id; 612 609 613 - err_submit_objects: 610 + err_submit_job: 611 + drm_sched_job_cleanup(&submit->sched_job); 612 + err_submit_put: 614 613 etnaviv_submit_put(submit); 615 614 616 615 err_submit_ww_acquire:
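The submit path above renames its error labels so that `drm_sched_job_cleanup()` runs on every failure after `drm_sched_job_init()`. A minimal userspace sketch of that kernel goto-unwind idiom (hypothetical acquire/release names, purely for illustration):

```c
#include <assert.h>

static int cleanup_calls;	/* counts release steps, for demonstration */

static int acquire_a(void) { return 0; }		/* always succeeds */
static int acquire_b(int fail) { return fail ? -1 : 0; }
static void release_a(void) { cleanup_calls++; }
static void release_b(void) { cleanup_calls++; }

/* Kernel-style unwind: each failure jumps to the label that releases
 * everything acquired so far, in reverse order of acquisition. */
static int do_submit(int fail_b)
{
	int ret;

	ret = acquire_a();
	if (ret)
		goto err_out;

	ret = acquire_b(fail_b);
	if (ret)
		goto err_release_a;

	/* success path: tear down in reverse order too */
	release_b();
	release_a();
	return 0;

err_release_a:
	release_a();
err_out:
	return ret;
}
```

Moving `drm_sched_job_init()` earlier in the function is what makes a single `err_submit_job` label sufficient: every later failure site unwinds through the same cleanup.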
+2 -61
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 17 17 static int etnaviv_hw_jobs_limit = 4; 18 18 module_param_named(hw_job_limit, etnaviv_hw_jobs_limit, int , 0444); 19 19 20 - static struct dma_fence * 21 - etnaviv_sched_dependency(struct drm_sched_job *sched_job, 22 - struct drm_sched_entity *entity) 23 - { 24 - struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); 25 - struct dma_fence *fence; 26 - int i; 27 - 28 - if (unlikely(submit->in_fence)) { 29 - fence = submit->in_fence; 30 - submit->in_fence = NULL; 31 - 32 - if (!dma_fence_is_signaled(fence)) 33 - return fence; 34 - 35 - dma_fence_put(fence); 36 - } 37 - 38 - for (i = 0; i < submit->nr_bos; i++) { 39 - struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; 40 - int j; 41 - 42 - if (bo->excl) { 43 - fence = bo->excl; 44 - bo->excl = NULL; 45 - 46 - if (!dma_fence_is_signaled(fence)) 47 - return fence; 48 - 49 - dma_fence_put(fence); 50 - } 51 - 52 - for (j = 0; j < bo->nr_shared; j++) { 53 - if (!bo->shared[j]) 54 - continue; 55 - 56 - fence = bo->shared[j]; 57 - bo->shared[j] = NULL; 58 - 59 - if (!dma_fence_is_signaled(fence)) 60 - return fence; 61 - 62 - dma_fence_put(fence); 63 - } 64 - kfree(bo->shared); 65 - bo->nr_shared = 0; 66 - bo->shared = NULL; 67 - } 68 - 69 - return NULL; 70 - } 71 - 72 20 static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job) 73 21 { 74 22 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); ··· 90 142 } 91 143 92 144 static const struct drm_sched_backend_ops etnaviv_sched_ops = { 93 - .dependency = etnaviv_sched_dependency, 94 145 .run_job = etnaviv_sched_run_job, 95 146 .timedout_job = etnaviv_sched_timedout_job, 96 147 .free_job = etnaviv_sched_free_job, 97 148 }; 98 149 99 - int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, 100 - struct etnaviv_gem_submit *submit) 150 + int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit) 101 151 { 102 152 int ret = 0; 103 153 104 154 /* 105 155 * Hold the fence lock across the whole operation to avoid 
jobs being 106 156 * pushed out of order with regard to their sched fence seqnos as 107 - * allocated in drm_sched_job_init. 157 + * allocated in drm_sched_job_arm. 108 158 */ 109 159 mutex_lock(&submit->gpu->fence_lock); 110 - 111 - ret = drm_sched_job_init(&submit->sched_job, sched_entity, 112 - submit->ctx); 113 - if (ret) 114 - goto out_unlock; 115 160 116 161 drm_sched_job_arm(&submit->sched_job); 117 162
+1 -2
drivers/gpu/drm/etnaviv/etnaviv_sched.h
··· 18 18 19 19 int etnaviv_sched_init(struct etnaviv_gpu *gpu); 20 20 void etnaviv_sched_fini(struct etnaviv_gpu *gpu); 21 - int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, 22 - struct etnaviv_gem_submit *submit); 21 + int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit); 23 22 24 23 #endif /* __ETNAVIV_SCHED_H__ */
+63 -180
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 24 24 25 25 #include <drm/drm_atomic_helper.h> 26 26 #include <drm/drm_bridge.h> 27 - #include <drm/drm_fb_helper.h> 28 27 #include <drm/drm_mipi_dsi.h> 29 - #include <drm/drm_panel.h> 30 28 #include <drm/drm_print.h> 31 29 #include <drm/drm_probe_helper.h> 32 30 #include <drm/drm_simple_kms_helper.h> ··· 251 253 struct exynos_dsi { 252 254 struct drm_encoder encoder; 253 255 struct mipi_dsi_host dsi_host; 254 - struct drm_connector connector; 255 - struct drm_panel *panel; 256 - struct list_head bridge_chain; 256 + struct drm_bridge bridge; 257 257 struct drm_bridge *out_bridge; 258 258 struct device *dev; 259 259 struct drm_display_mode mode; ··· 281 285 }; 282 286 283 287 #define host_to_dsi(host) container_of(host, struct exynos_dsi, dsi_host) 284 - #define connector_to_dsi(c) container_of(c, struct exynos_dsi, connector) 285 288 286 - static inline struct exynos_dsi *encoder_to_dsi(struct drm_encoder *e) 289 + static inline struct exynos_dsi *bridge_to_dsi(struct drm_bridge *b) 287 290 { 288 - return container_of(e, struct exynos_dsi, encoder); 291 + return container_of(b, struct exynos_dsi, bridge); 289 292 } 290 293 291 294 enum reg_idx { ··· 1360 1365 } 1361 1366 } 1362 1367 1363 - static void exynos_dsi_enable(struct drm_encoder *encoder) 1368 + static void exynos_dsi_atomic_pre_enable(struct drm_bridge *bridge, 1369 + struct drm_bridge_state *old_bridge_state) 1364 1370 { 1365 - struct exynos_dsi *dsi = encoder_to_dsi(encoder); 1366 - struct drm_bridge *iter; 1371 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1367 1372 int ret; 1368 1373 1369 1374 if (dsi->state & DSIM_STATE_ENABLED) ··· 1376 1381 } 1377 1382 1378 1383 dsi->state |= DSIM_STATE_ENABLED; 1384 + } 1379 1385 1380 - if (dsi->panel) { 1381 - ret = drm_panel_prepare(dsi->panel); 1382 - if (ret < 0) 1383 - goto err_put_sync; 1384 - } else { 1385 - list_for_each_entry_reverse(iter, &dsi->bridge_chain, 1386 - chain_node) { 1387 - if (iter->funcs->pre_enable) 1388 - 
iter->funcs->pre_enable(iter); 1389 - } 1390 - } 1386 + static void exynos_dsi_atomic_enable(struct drm_bridge *bridge, 1387 + struct drm_bridge_state *old_bridge_state) 1388 + { 1389 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1391 1390 1392 1391 exynos_dsi_set_display_mode(dsi); 1393 1392 exynos_dsi_set_display_enable(dsi, true); 1394 1393 1395 - if (dsi->panel) { 1396 - ret = drm_panel_enable(dsi->panel); 1397 - if (ret < 0) 1398 - goto err_display_disable; 1399 - } else { 1400 - list_for_each_entry(iter, &dsi->bridge_chain, chain_node) { 1401 - if (iter->funcs->enable) 1402 - iter->funcs->enable(iter); 1403 - } 1404 - } 1405 - 1406 1394 dsi->state |= DSIM_STATE_VIDOUT_AVAILABLE; 1395 + 1407 1396 return; 1408 - 1409 - err_display_disable: 1410 - exynos_dsi_set_display_enable(dsi, false); 1411 - drm_panel_unprepare(dsi->panel); 1412 - 1413 - err_put_sync: 1414 - dsi->state &= ~DSIM_STATE_ENABLED; 1415 - pm_runtime_put(dsi->dev); 1416 1397 } 1417 1398 1418 - static void exynos_dsi_disable(struct drm_encoder *encoder) 1399 + static void exynos_dsi_atomic_disable(struct drm_bridge *bridge, 1400 + struct drm_bridge_state *old_bridge_state) 1419 1401 { 1420 - struct exynos_dsi *dsi = encoder_to_dsi(encoder); 1421 - struct drm_bridge *iter; 1402 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1422 1403 1423 1404 if (!(dsi->state & DSIM_STATE_ENABLED)) 1424 1405 return; 1425 1406 1426 1407 dsi->state &= ~DSIM_STATE_VIDOUT_AVAILABLE; 1408 + } 1427 1409 1428 - drm_panel_disable(dsi->panel); 1429 - 1430 - list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) { 1431 - if (iter->funcs->disable) 1432 - iter->funcs->disable(iter); 1433 - } 1410 + static void exynos_dsi_atomic_post_disable(struct drm_bridge *bridge, 1411 + struct drm_bridge_state *old_bridge_state) 1412 + { 1413 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1434 1414 1435 1415 exynos_dsi_set_display_enable(dsi, false); 1436 - drm_panel_unprepare(dsi->panel); 1437 - 1438 - 
list_for_each_entry(iter, &dsi->bridge_chain, chain_node) { 1439 - if (iter->funcs->post_disable) 1440 - iter->funcs->post_disable(iter); 1441 - } 1442 1416 1443 1417 dsi->state &= ~DSIM_STATE_ENABLED; 1444 1418 pm_runtime_put_sync(dsi->dev); 1445 1419 } 1446 1420 1447 - static void exynos_dsi_mode_set(struct drm_encoder *encoder, 1448 - struct drm_display_mode *mode, 1449 - struct drm_display_mode *adjusted_mode) 1421 + static void exynos_dsi_mode_set(struct drm_bridge *bridge, 1422 + const struct drm_display_mode *mode, 1423 + const struct drm_display_mode *adjusted_mode) 1450 1424 { 1451 - struct exynos_dsi *dsi = encoder_to_dsi(encoder); 1425 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1452 1426 1453 1427 drm_mode_copy(&dsi->mode, adjusted_mode); 1454 1428 } 1455 1429 1456 - static enum drm_connector_status 1457 - exynos_dsi_detect(struct drm_connector *connector, bool force) 1430 + static int exynos_dsi_attach(struct drm_bridge *bridge, 1431 + enum drm_bridge_attach_flags flags) 1458 1432 { 1459 - return connector->status; 1433 + struct exynos_dsi *dsi = bridge_to_dsi(bridge); 1434 + 1435 + return drm_bridge_attach(bridge->encoder, dsi->out_bridge, NULL, flags); 1460 1436 } 1461 1437 1462 - static void exynos_dsi_connector_destroy(struct drm_connector *connector) 1463 - { 1464 - drm_connector_unregister(connector); 1465 - drm_connector_cleanup(connector); 1466 - connector->dev = NULL; 1467 - } 1468 - 1469 - static const struct drm_connector_funcs exynos_dsi_connector_funcs = { 1470 - .detect = exynos_dsi_detect, 1471 - .fill_modes = drm_helper_probe_single_connector_modes, 1472 - .destroy = exynos_dsi_connector_destroy, 1473 - .reset = drm_atomic_helper_connector_reset, 1474 - .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 1475 - .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 1476 - }; 1477 - 1478 - static int exynos_dsi_get_modes(struct drm_connector *connector) 1479 - { 1480 - struct exynos_dsi *dsi = 
connector_to_dsi(connector); 1481 - 1482 - if (dsi->panel) 1483 - return drm_panel_get_modes(dsi->panel, connector); 1484 - 1485 - return 0; 1486 - } 1487 - 1488 - static const struct drm_connector_helper_funcs exynos_dsi_connector_helper_funcs = { 1489 - .get_modes = exynos_dsi_get_modes, 1490 - }; 1491 - 1492 - static int exynos_dsi_create_connector(struct drm_encoder *encoder) 1493 - { 1494 - struct exynos_dsi *dsi = encoder_to_dsi(encoder); 1495 - struct drm_connector *connector = &dsi->connector; 1496 - struct drm_device *drm = encoder->dev; 1497 - int ret; 1498 - 1499 - connector->polled = DRM_CONNECTOR_POLL_HPD; 1500 - 1501 - ret = drm_connector_init(drm, connector, &exynos_dsi_connector_funcs, 1502 - DRM_MODE_CONNECTOR_DSI); 1503 - if (ret) { 1504 - DRM_DEV_ERROR(dsi->dev, 1505 - "Failed to initialize connector with drm\n"); 1506 - return ret; 1507 - } 1508 - 1509 - connector->status = connector_status_disconnected; 1510 - drm_connector_helper_add(connector, &exynos_dsi_connector_helper_funcs); 1511 - drm_connector_attach_encoder(connector, encoder); 1512 - if (!drm->registered) 1513 - return 0; 1514 - 1515 - connector->funcs->reset(connector); 1516 - drm_connector_register(connector); 1517 - return 0; 1518 - } 1519 - 1520 - static const struct drm_encoder_helper_funcs exynos_dsi_encoder_helper_funcs = { 1521 - .enable = exynos_dsi_enable, 1522 - .disable = exynos_dsi_disable, 1523 - .mode_set = exynos_dsi_mode_set, 1438 + static const struct drm_bridge_funcs exynos_dsi_bridge_funcs = { 1439 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1440 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1441 + .atomic_reset = drm_atomic_helper_bridge_reset, 1442 + .atomic_pre_enable = exynos_dsi_atomic_pre_enable, 1443 + .atomic_enable = exynos_dsi_atomic_enable, 1444 + .atomic_disable = exynos_dsi_atomic_disable, 1445 + .atomic_post_disable = exynos_dsi_atomic_post_disable, 1446 + .mode_set = exynos_dsi_mode_set, 1447 + .attach = 
exynos_dsi_attach, 1524 1448 }; 1525 1449 1526 1450 MODULE_DEVICE_TABLE(of, exynos_dsi_of_match); ··· 1448 1534 struct mipi_dsi_device *device) 1449 1535 { 1450 1536 struct exynos_dsi *dsi = host_to_dsi(host); 1537 + struct device *dev = dsi->dev; 1451 1538 struct drm_encoder *encoder = &dsi->encoder; 1452 1539 struct drm_device *drm = encoder->dev; 1453 - struct drm_bridge *out_bridge; 1540 + int ret; 1454 1541 1455 - out_bridge = of_drm_find_bridge(device->dev.of_node); 1456 - if (out_bridge) { 1457 - drm_bridge_attach(encoder, out_bridge, NULL, 0); 1458 - dsi->out_bridge = out_bridge; 1459 - list_splice_init(&encoder->bridge_chain, &dsi->bridge_chain); 1460 - } else { 1461 - int ret = exynos_dsi_create_connector(encoder); 1462 - 1463 - if (ret) { 1464 - DRM_DEV_ERROR(dsi->dev, 1465 - "failed to create connector ret = %d\n", 1466 - ret); 1467 - drm_encoder_cleanup(encoder); 1468 - return ret; 1469 - } 1470 - 1471 - dsi->panel = of_drm_find_panel(device->dev.of_node); 1472 - if (IS_ERR(dsi->panel)) 1473 - dsi->panel = NULL; 1474 - else 1475 - dsi->connector.status = connector_status_connected; 1542 + dsi->out_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0); 1543 + if (IS_ERR(dsi->out_bridge)) { 1544 + ret = PTR_ERR(dsi->out_bridge); 1545 + DRM_DEV_ERROR(dev, "failed to find the bridge: %d\n", ret); 1546 + return ret; 1476 1547 } 1548 + 1549 + DRM_DEV_INFO(dev, "Attached %s device\n", device->name); 1550 + 1551 + drm_bridge_add(&dsi->bridge); 1552 + 1553 + drm_bridge_attach(encoder, &dsi->bridge, NULL, 0); 1477 1554 1478 1555 /* 1479 1556 * This is a temporary solution and should be made by more generic way. ··· 1473 1568 * TE interrupt handler. 
1474 1569 */ 1475 1570 if (!(device->mode_flags & MIPI_DSI_MODE_VIDEO)) { 1476 - int ret = exynos_dsi_register_te_irq(dsi, &device->dev); 1571 + ret = exynos_dsi_register_te_irq(dsi, &device->dev); 1477 1572 if (ret) 1478 1573 return ret; 1479 1574 } ··· 1500 1595 struct exynos_dsi *dsi = host_to_dsi(host); 1501 1596 struct drm_device *drm = dsi->encoder.dev; 1502 1597 1503 - if (dsi->panel) { 1504 - mutex_lock(&drm->mode_config.mutex); 1505 - exynos_dsi_disable(&dsi->encoder); 1506 - dsi->panel = NULL; 1507 - dsi->connector.status = connector_status_disconnected; 1508 - mutex_unlock(&drm->mode_config.mutex); 1509 - } else { 1510 - if (dsi->out_bridge->funcs->detach) 1511 - dsi->out_bridge->funcs->detach(dsi->out_bridge); 1512 - dsi->out_bridge = NULL; 1513 - INIT_LIST_HEAD(&dsi->bridge_chain); 1514 - } 1598 + if (dsi->out_bridge->funcs->detach) 1599 + dsi->out_bridge->funcs->detach(dsi->out_bridge); 1600 + dsi->out_bridge = NULL; 1515 1601 1516 1602 if (drm->mode_config.poll_enabled) 1517 1603 drm_kms_helper_hotplug_event(drm); 1518 1604 1519 1605 exynos_dsi_unregister_te_irq(dsi); 1606 + 1607 + drm_bridge_remove(&dsi->bridge); 1520 1608 1521 1609 return 0; 1522 1610 } ··· 1560 1662 return ret; 1561 1663 } 1562 1664 1563 - enum { 1564 - DSI_PORT_IN, 1565 - DSI_PORT_OUT 1566 - }; 1567 - 1568 1665 static int exynos_dsi_parse_dt(struct exynos_dsi *dsi) 1569 1666 { 1570 1667 struct device *dev = dsi->dev; ··· 1590 1697 struct exynos_dsi *dsi = dev_get_drvdata(dev); 1591 1698 struct drm_encoder *encoder = &dsi->encoder; 1592 1699 struct drm_device *drm_dev = data; 1593 - struct device_node *in_bridge_node; 1594 - struct drm_bridge *in_bridge; 1595 1700 int ret; 1596 1701 1597 1702 drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS); 1598 1703 1599 - drm_encoder_helper_add(encoder, &exynos_dsi_encoder_helper_funcs); 1600 - 1601 1704 ret = exynos_drm_set_possible_crtcs(encoder, EXYNOS_DISPLAY_TYPE_LCD); 1602 1705 if (ret < 0) 1603 1706 return ret; 1604 - 
1605 - in_bridge_node = of_graph_get_remote_node(dev->of_node, DSI_PORT_IN, 0); 1606 - if (in_bridge_node) { 1607 - in_bridge = of_drm_find_bridge(in_bridge_node); 1608 - if (in_bridge) 1609 - drm_bridge_attach(encoder, in_bridge, NULL, 0); 1610 - of_node_put(in_bridge_node); 1611 - } 1612 1707 1613 1708 return mipi_dsi_host_register(&dsi->dsi_host); 1614 1709 } ··· 1605 1724 void *data) 1606 1725 { 1607 1726 struct exynos_dsi *dsi = dev_get_drvdata(dev); 1608 - struct drm_encoder *encoder = &dsi->encoder; 1609 1727 1610 - exynos_dsi_disable(encoder); 1728 + exynos_dsi_atomic_disable(&dsi->bridge, NULL); 1611 1729 1612 1730 mipi_dsi_host_unregister(&dsi->dsi_host); 1613 1731 } ··· 1629 1749 init_completion(&dsi->completed); 1630 1750 spin_lock_init(&dsi->transfer_lock); 1631 1751 INIT_LIST_HEAD(&dsi->transfer_list); 1632 - INIT_LIST_HEAD(&dsi->bridge_chain); 1633 1752 1634 1753 dsi->dsi_host.ops = &exynos_dsi_ops; 1635 1754 dsi->dsi_host.dev = dev; ··· 1695 1816 platform_set_drvdata(pdev, dsi); 1696 1817 1697 1818 pm_runtime_enable(dev); 1819 + 1820 + dsi->bridge.funcs = &exynos_dsi_bridge_funcs; 1821 + dsi->bridge.of_node = dev->of_node; 1822 + dsi->bridge.type = DRM_MODE_CONNECTOR_DSI; 1698 1823 1699 1824 ret = component_add(dev, &exynos_dsi_component_ops); 1700 1825 if (ret)
+22
drivers/gpu/drm/exynos/exynos_drm_mic.c
··· 102 102 struct videomode vm; 103 103 struct drm_encoder *encoder; 104 104 struct drm_bridge bridge; 105 + struct drm_bridge *next_bridge; 105 106 106 107 bool enabled; 107 108 }; ··· 299 298 300 299 static void mic_enable(struct drm_bridge *bridge) { } 301 300 301 + static int mic_attach(struct drm_bridge *bridge, 302 + enum drm_bridge_attach_flags flags) 303 + { 304 + struct exynos_mic *mic = bridge->driver_private; 305 + 306 + return drm_bridge_attach(bridge->encoder, mic->next_bridge, 307 + &mic->bridge, flags); 308 + } 309 + 302 310 static const struct drm_bridge_funcs mic_bridge_funcs = { 303 311 .disable = mic_disable, 304 312 .post_disable = mic_post_disable, 305 313 .mode_set = mic_mode_set, 306 314 .pre_enable = mic_pre_enable, 307 315 .enable = mic_enable, 316 + .attach = mic_attach, 308 317 }; 309 318 310 319 static int exynos_mic_bind(struct device *dev, struct device *master, ··· 388 377 { 389 378 struct device *dev = &pdev->dev; 390 379 struct exynos_mic *mic; 380 + struct device_node *remote; 391 381 struct resource res; 392 382 int ret, i; 393 383 ··· 431 419 goto err; 432 420 } 433 421 } 422 + 423 + remote = of_graph_get_remote_node(dev->of_node, 1, 0); 424 + mic->next_bridge = of_drm_find_bridge(remote); 425 + if (!mic->next_bridge) { 426 + DRM_DEV_ERROR(dev, "mic: Failed to find next bridge\n"); 427 + ret = -EINVAL; 428 + goto err; 429 + } 430 + 431 + of_node_put(remote); 434 432 435 433 platform_set_drvdata(pdev, mic); 436 434
+8 -3
drivers/gpu/drm/gma500/cdv_device.c
··· 262 262 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 263 263 struct pci_dev *pdev = to_pci_dev(dev->dev); 264 264 struct psb_save_area *regs = &dev_priv->regs; 265 + struct drm_connector_list_iter conn_iter; 265 266 struct drm_connector *connector; 266 267 267 268 dev_dbg(dev->dev, "Saving GPU registers.\n"); ··· 299 298 regs->cdv.saveIER = REG_READ(PSB_INT_ENABLE_R); 300 299 regs->cdv.saveIMR = REG_READ(PSB_INT_MASK_R); 301 300 302 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) 301 + drm_connector_list_iter_begin(dev, &conn_iter); 302 + drm_for_each_connector_iter(connector, &conn_iter) 303 303 connector->funcs->dpms(connector, DRM_MODE_DPMS_OFF); 304 + drm_connector_list_iter_end(&conn_iter); 304 305 305 306 return 0; 306 307 } ··· 320 317 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 321 318 struct pci_dev *pdev = to_pci_dev(dev->dev); 322 319 struct psb_save_area *regs = &dev_priv->regs; 320 + struct drm_connector_list_iter conn_iter; 323 321 struct drm_connector *connector; 324 322 u32 temp; 325 323 ··· 377 373 378 374 drm_mode_config_reset(dev); 379 375 380 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) 376 + drm_connector_list_iter_begin(dev, &conn_iter); 377 + drm_for_each_connector_iter(connector, &conn_iter) 381 378 connector->funcs->dpms(connector, DRM_MODE_DPMS_ON); 379 + drm_connector_list_iter_end(&conn_iter); 382 380 383 381 /* Resume the modeset for every activated CRTC */ 384 382 drm_helper_resume_force_mode(dev); ··· 609 603 .errata = cdv_errata, 610 604 611 605 .crtc_helper = &cdv_intel_helper_funcs, 612 - .crtc_funcs = &gma_intel_crtc_funcs, 613 606 .clock_funcs = &cdv_clock_funcs, 614 607 615 608 .output_init = cdv_output_init,
+2 -4
drivers/gpu/drm/gma500/cdv_intel_crt.c
··· 191 191 192 192 static void cdv_intel_crt_destroy(struct drm_connector *connector) 193 193 { 194 + struct gma_connector *gma_connector = to_gma_connector(connector); 194 195 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 195 196 196 197 psb_intel_i2c_destroy(gma_encoder->ddc_bus); 197 - drm_connector_unregister(connector); 198 198 drm_connector_cleanup(connector); 199 - kfree(connector); 199 + kfree(gma_connector); 200 200 } 201 201 202 202 static int cdv_intel_crt_get_modes(struct drm_connector *connector) ··· 280 280 drm_encoder_helper_add(encoder, &cdv_intel_crt_helper_funcs); 281 281 drm_connector_helper_add(connector, 282 282 &cdv_intel_crt_connector_helper_funcs); 283 - 284 - drm_connector_register(connector); 285 283 286 284 return; 287 285 failed_ddc:
+7 -2
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 584 584 bool ok; 585 585 bool is_lvds = false; 586 586 bool is_dp = false; 587 - struct drm_mode_config *mode_config = &dev->mode_config; 587 + struct drm_connector_list_iter conn_iter; 588 588 struct drm_connector *connector; 589 589 const struct gma_limit_t *limit; 590 590 u32 ddi_select = 0; 591 591 bool is_edp = false; 592 592 593 - list_for_each_entry(connector, &mode_config->connector_list, head) { 593 + drm_connector_list_iter_begin(dev, &conn_iter); 594 + drm_for_each_connector_iter(connector, &conn_iter) { 594 595 struct gma_encoder *gma_encoder = 595 596 gma_attached_encoder(connector); 596 597 ··· 614 613 is_edp = true; 615 614 break; 616 615 default: 616 + drm_connector_list_iter_end(&conn_iter); 617 617 DRM_ERROR("invalid output type.\n"); 618 618 return 0; 619 619 } 620 + 621 + break; 620 622 } 623 + drm_connector_list_iter_end(&conn_iter); 621 624 622 625 if (dev_priv->dplla_96mhz) 623 626 /* low-end sku, 96/100 mhz */
+2 -4
drivers/gpu/drm/gma500/cdv_intel_dp.c
··· 1857 1857 static void 1858 1858 cdv_intel_dp_destroy(struct drm_connector *connector) 1859 1859 { 1860 + struct gma_connector *gma_connector = to_gma_connector(connector); 1860 1861 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 1861 1862 struct cdv_intel_dp *intel_dp = gma_encoder->dev_priv; 1862 1863 ··· 1867 1866 intel_dp->panel_fixed_mode = NULL; 1868 1867 } 1869 1868 i2c_del_adapter(&intel_dp->adapter); 1870 - drm_connector_unregister(connector); 1871 1869 drm_connector_cleanup(connector); 1872 - kfree(connector); 1870 + kfree(gma_connector); 1873 1871 } 1874 1872 1875 1873 static const struct drm_encoder_helper_funcs cdv_intel_dp_helper_funcs = { ··· 1989 1989 connector->polled = DRM_CONNECTOR_POLL_HPD; 1990 1990 connector->interlace_allowed = false; 1991 1991 connector->doublescan_allowed = false; 1992 - 1993 - drm_connector_register(connector); 1994 1992 1995 1993 /* Set up the DDC bus. */ 1996 1994 switch (output_reg) {
+2 -3
drivers/gpu/drm/gma500/cdv_intel_hdmi.c
··· 242 242 243 243 static void cdv_hdmi_destroy(struct drm_connector *connector) 244 244 { 245 + struct gma_connector *gma_connector = to_gma_connector(connector); 245 246 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 246 247 247 248 psb_intel_i2c_destroy(gma_encoder->i2c_bus); 248 - drm_connector_unregister(connector); 249 249 drm_connector_cleanup(connector); 250 - kfree(connector); 250 + kfree(gma_connector); 251 251 } 252 252 253 253 static const struct drm_encoder_helper_funcs cdv_hdmi_helper_funcs = { ··· 352 352 353 353 hdmi_priv->hdmi_i2c_adapter = &(gma_encoder->i2c_bus->adapter); 354 354 hdmi_priv->dev = dev; 355 - drm_connector_register(connector); 356 355 return; 357 356 358 357 failed_ddc:
+2 -3
drivers/gpu/drm/gma500/cdv_intel_lvds.c
··· 326 326 */ 327 327 static void cdv_intel_lvds_destroy(struct drm_connector *connector) 328 328 { 329 + struct gma_connector *gma_connector = to_gma_connector(connector); 329 330 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 330 331 331 332 psb_intel_i2c_destroy(gma_encoder->i2c_bus); 332 - drm_connector_unregister(connector); 333 333 drm_connector_cleanup(connector); 334 - kfree(connector); 334 + kfree(gma_connector); 335 335 } 336 336 337 337 static int cdv_intel_lvds_set_property(struct drm_connector *connector, ··· 647 647 648 648 out: 649 649 mutex_unlock(&dev->mode_config.mutex); 650 - drm_connector_register(connector); 651 650 return; 652 651 653 652 failed_find:
+6 -4
drivers/gpu/drm/gma500/framebuffer.c
··· 451 451 static void psb_setup_outputs(struct drm_device *dev) 452 452 { 453 453 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 454 + struct drm_connector_list_iter conn_iter; 454 455 struct drm_connector *connector; 455 456 456 457 drm_mode_create_scaling_mode_property(dev); ··· 462 461 "backlight", 0, 100); 463 462 dev_priv->ops->output_init(dev); 464 463 465 - list_for_each_entry(connector, &dev->mode_config.connector_list, 466 - head) { 464 + drm_connector_list_iter_begin(dev, &conn_iter); 465 + drm_for_each_connector_iter(connector, &conn_iter) { 467 466 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 468 467 struct drm_encoder *encoder = &gma_encoder->base; 469 468 int crtc_mask = 0, clone_mask = 0; ··· 506 505 encoder->possible_clones = 507 506 gma_connector_clones(dev, clone_mask); 508 507 } 508 + drm_connector_list_iter_end(&conn_iter); 509 509 } 510 510 511 511 void psb_modeset_init(struct drm_device *dev) ··· 516 514 struct pci_dev *pdev = to_pci_dev(dev->dev); 517 515 int i; 518 516 519 - drm_mode_config_init(dev); 517 + if (drmm_mode_config_init(dev)) 518 + return; 520 519 521 520 dev->mode_config.min_width = 0; 522 521 dev->mode_config.min_height = 0; ··· 549 546 if (dev_priv->modeset) { 550 547 drm_kms_helper_poll_fini(dev); 551 548 psb_fbdev_fini(dev); 552 - drm_mode_config_cleanup(dev); 553 549 } 554 550 }
+150 -11
drivers/gpu/drm/gma500/gem.c
··· 21 21 #include "gem.h" 22 22 #include "psb_drv.h" 23 23 24 + /* 25 + * PSB GEM object 26 + */ 27 + 24 28 int psb_gem_pin(struct psb_gem_object *pobj) 25 29 { 26 30 struct drm_gem_object *obj = &pobj->base; ··· 35 31 unsigned int npages; 36 32 int ret; 37 33 38 - mutex_lock(&dev_priv->gtt_mutex); 34 + ret = dma_resv_lock(obj->resv, NULL); 35 + if (drm_WARN_ONCE(dev, ret, "dma_resv_lock() failed, ret=%d\n", ret)) 36 + return ret; 39 37 40 38 if (pobj->in_gart || pobj->stolen) 41 39 goto out; /* already mapped */ ··· 45 39 pages = drm_gem_get_pages(obj); 46 40 if (IS_ERR(pages)) { 47 41 ret = PTR_ERR(pages); 48 - goto err_mutex_unlock; 42 + goto err_dma_resv_unlock; 49 43 } 50 44 51 45 npages = obj->size / PAGE_SIZE; ··· 57 51 (gpu_base + pobj->offset), npages, 0, 0, 58 52 PSB_MMU_CACHED_MEMORY); 59 53 60 - pobj->npage = npages; 61 54 pobj->pages = pages; 62 55 63 56 out: 64 57 ++pobj->in_gart; 65 - mutex_unlock(&dev_priv->gtt_mutex); 58 + dma_resv_unlock(obj->resv); 66 59 67 60 return 0; 68 61 69 - err_mutex_unlock: 70 - mutex_unlock(&dev_priv->gtt_mutex); 62 + err_dma_resv_unlock: 63 + dma_resv_unlock(obj->resv); 71 64 return ret; 72 65 } 73 66 ··· 76 71 struct drm_device *dev = obj->dev; 77 72 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 78 73 u32 gpu_base = dev_priv->gtt.gatt_start; 74 + unsigned long npages; 75 + int ret; 79 76 80 - mutex_lock(&dev_priv->gtt_mutex); 77 + ret = dma_resv_lock(obj->resv, NULL); 78 + if (drm_WARN_ONCE(dev, ret, "dma_resv_lock() failed, ret=%d\n", ret)) 79 + return; 81 80 82 81 WARN_ON(!pobj->in_gart); 83 82 ··· 90 81 if (pobj->in_gart || pobj->stolen) 91 82 goto out; 92 83 84 + npages = obj->size / PAGE_SIZE; 85 + 93 86 psb_mmu_remove_pages(psb_mmu_get_default_pd(dev_priv->mmu), 94 - (gpu_base + pobj->offset), pobj->npage, 0, 0); 87 + (gpu_base + pobj->offset), npages, 0, 0); 95 88 psb_gtt_remove_pages(dev_priv, &pobj->resource); 96 89 97 90 /* Reset caching flags */ 98 - set_pages_array_wb(pobj->pages, 
pobj->npage); 91 + set_pages_array_wb(pobj->pages, npages); 99 92 100 93 drm_gem_put_pages(obj, pobj->pages, true, false); 101 94 pobj->pages = NULL; 102 - pobj->npage = 0; 103 95 104 96 out: 105 - mutex_unlock(&dev_priv->gtt_mutex); 97 + dma_resv_unlock(obj->resv); 106 98 } 107 99 108 100 static vm_fault_t psb_gem_fault(struct vm_fault *vmf); ··· 299 289 mutex_unlock(&dev_priv->mmap_mutex); 300 290 301 291 return ret; 292 + } 293 + 294 + /* 295 + * Memory management 296 + */ 297 + 298 + /* Insert vram stolen pages into the GTT. */ 299 + static void psb_gem_mm_populate_stolen(struct drm_psb_private *pdev) 300 + { 301 + struct drm_device *dev = &pdev->dev; 302 + unsigned int pfn_base; 303 + unsigned int i, num_pages; 304 + uint32_t pte; 305 + 306 + pfn_base = pdev->stolen_base >> PAGE_SHIFT; 307 + num_pages = pdev->vram_stolen_size >> PAGE_SHIFT; 308 + 309 + drm_dbg(dev, "Set up %u stolen pages starting at 0x%08x, GTT offset %dK\n", 310 + num_pages, pfn_base << PAGE_SHIFT, 0); 311 + 312 + for (i = 0; i < num_pages; ++i) { 313 + pte = psb_gtt_mask_pte(pfn_base + i, PSB_MMU_CACHED_MEMORY); 314 + iowrite32(pte, pdev->gtt_map + i); 315 + } 316 + 317 + (void)ioread32(pdev->gtt_map + i - 1); 318 + } 319 + 320 + int psb_gem_mm_init(struct drm_device *dev) 321 + { 322 + struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 323 + struct pci_dev *pdev = to_pci_dev(dev->dev); 324 + unsigned long stolen_size, vram_stolen_size; 325 + struct psb_gtt *pg; 326 + int ret; 327 + 328 + mutex_init(&dev_priv->mmap_mutex); 329 + 330 + pg = &dev_priv->gtt; 331 + 332 + pci_read_config_dword(pdev, PSB_BSM, &dev_priv->stolen_base); 333 + vram_stolen_size = pg->gtt_phys_start - dev_priv->stolen_base - PAGE_SIZE; 334 + 335 + stolen_size = vram_stolen_size; 336 + 337 + dev_dbg(dev->dev, "Stolen memory base 0x%x, size %luK\n", 338 + dev_priv->stolen_base, vram_stolen_size / 1024); 339 + 340 + pg->stolen_size = stolen_size; 341 + dev_priv->vram_stolen_size = vram_stolen_size; 342 + 343 + 
dev_priv->vram_addr = ioremap_wc(dev_priv->stolen_base, stolen_size); 344 + if (!dev_priv->vram_addr) { 345 + dev_err(dev->dev, "Failure to map stolen base.\n"); 346 + ret = -ENOMEM; 347 + goto err_mutex_destroy; 348 + } 349 + 350 + psb_gem_mm_populate_stolen(dev_priv); 351 + 352 + return 0; 353 + 354 + err_mutex_destroy: 355 + mutex_destroy(&dev_priv->mmap_mutex); 356 + return ret; 357 + } 358 + 359 + void psb_gem_mm_fini(struct drm_device *dev) 360 + { 361 + struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 362 + 363 + iounmap(dev_priv->vram_addr); 364 + 365 + mutex_destroy(&dev_priv->mmap_mutex); 366 + } 367 + 368 + /* Re-insert all pinned GEM objects into GTT. */ 369 + static void psb_gem_mm_populate_resources(struct drm_psb_private *pdev) 370 + { 371 + unsigned int restored = 0, total = 0, size = 0; 372 + struct resource *r = pdev->gtt_mem->child; 373 + struct drm_device *dev = &pdev->dev; 374 + struct psb_gem_object *pobj; 375 + 376 + while (r) { 377 + /* 378 + * TODO: GTT restoration needs a refactoring, so that we don't have to touch 379 + * struct psb_gem_object here. The type represents a GEM object and is 380 + * not related to the GTT itself. 
381 + */ 382 + pobj = container_of(r, struct psb_gem_object, resource); 383 + if (pobj->pages) { 384 + psb_gtt_insert_pages(pdev, &pobj->resource, pobj->pages); 385 + size += resource_size(&pobj->resource); 386 + ++restored; 387 + } 388 + r = r->sibling; 389 + ++total; 390 + } 391 + 392 + drm_dbg(dev, "Restored %u of %u gtt ranges (%u KB)", restored, total, (size / 1024)); 393 + } 394 + 395 + int psb_gem_mm_resume(struct drm_device *dev) 396 + { 397 + struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 398 + struct pci_dev *pdev = to_pci_dev(dev->dev); 399 + unsigned long stolen_size, vram_stolen_size; 400 + struct psb_gtt *pg; 401 + 402 + pg = &dev_priv->gtt; 403 + 404 + pci_read_config_dword(pdev, PSB_BSM, &dev_priv->stolen_base); 405 + vram_stolen_size = pg->gtt_phys_start - dev_priv->stolen_base - PAGE_SIZE; 406 + 407 + stolen_size = vram_stolen_size; 408 + 409 + dev_dbg(dev->dev, "Stolen memory base 0x%x, size %luK\n", dev_priv->stolen_base, 410 + vram_stolen_size / 1024); 411 + 412 + if (stolen_size != pg->stolen_size) { 413 + dev_err(dev->dev, "GTT resume error.\n"); 414 + return -EINVAL; 415 + } 416 + 417 + psb_gem_mm_populate_stolen(dev_priv); 418 + psb_gem_mm_populate_resources(dev_priv); 419 + 420 + return 0; 302 421 }
+12 -1
drivers/gpu/drm/gma500/gem.h
··· 14 14 15 15 struct drm_device; 16 16 17 + /* 18 + * PSB GEM object 19 + */ 20 + 17 21 struct psb_gem_object { 18 22 struct drm_gem_object base; 19 23 ··· 27 23 bool stolen; /* Backed from stolen RAM */ 28 24 bool mmapping; /* Is mmappable */ 29 25 struct page **pages; /* Backing pages if present */ 30 - int npage; /* Number of backing pages */ 31 26 }; 32 27 33 28 static inline struct psb_gem_object *to_psb_gem_object(struct drm_gem_object *obj) ··· 39 36 40 37 int psb_gem_pin(struct psb_gem_object *pobj); 41 38 void psb_gem_unpin(struct psb_gem_object *pobj); 39 + 40 + /* 41 + * Memory management 42 + */ 43 + 44 + int psb_gem_mm_init(struct drm_device *dev); 45 + void psb_gem_mm_fini(struct drm_device *dev); 46 + int psb_gem_mm_resume(struct drm_device *dev); 42 47 43 48 #endif
+33 -20
drivers/gpu/drm/gma500/gma_display.c
··· 17 17 #include "framebuffer.h" 18 18 #include "gem.h" 19 19 #include "gma_display.h" 20 - #include "psb_drv.h" 20 + #include "psb_irq.h" 21 21 #include "psb_intel_drv.h" 22 22 #include "psb_intel_reg.h" 23 23 ··· 27 27 bool gma_pipe_has_type(struct drm_crtc *crtc, int type) 28 28 { 29 29 struct drm_device *dev = crtc->dev; 30 - struct drm_mode_config *mode_config = &dev->mode_config; 31 - struct drm_connector *l_entry; 30 + struct drm_connector_list_iter conn_iter; 31 + struct drm_connector *connector; 32 32 33 - list_for_each_entry(l_entry, &mode_config->connector_list, head) { 34 - if (l_entry->encoder && l_entry->encoder->crtc == crtc) { 33 + drm_connector_list_iter_begin(dev, &conn_iter); 34 + drm_for_each_connector_iter(connector, &conn_iter) { 35 + if (connector->encoder && connector->encoder->crtc == crtc) { 35 36 struct gma_encoder *gma_encoder = 36 - gma_attached_encoder(l_entry); 37 - if (gma_encoder->type == type) 37 + gma_attached_encoder(connector); 38 + if (gma_encoder->type == type) { 39 + drm_connector_list_iter_end(&conn_iter); 38 40 return true; 41 + } 39 42 } 40 43 } 44 + drm_connector_list_iter_end(&conn_iter); 41 45 42 46 return false; 43 47 } ··· 176 172 } 177 173 } 178 174 179 - int gma_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green, u16 *blue, 180 - u32 size, 181 - struct drm_modeset_acquire_ctx *ctx) 175 + static int gma_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green, 176 + u16 *blue, u32 size, 177 + struct drm_modeset_acquire_ctx *ctx) 182 178 { 183 179 gma_crtc_load_lut(crtc); 184 180 ··· 323 319 REG_WRITE(DSPARB, 0x3F3E); 324 320 } 325 321 326 - int gma_crtc_cursor_set(struct drm_crtc *crtc, 327 - struct drm_file *file_priv, 328 - uint32_t handle, 329 - uint32_t width, uint32_t height) 322 + static int gma_crtc_cursor_set(struct drm_crtc *crtc, 323 + struct drm_file *file_priv, uint32_t handle, 324 + uint32_t width, uint32_t height) 330 325 { 331 326 struct drm_device *dev = crtc->dev; 332 327 struct 
drm_psb_private *dev_priv = to_drm_psb_private(dev); ··· 394 391 goto unref_cursor; 395 392 } 396 393 397 - /* Prevent overflow */ 398 - if (pobj->npage > 4) 399 - cursor_pages = 4; 400 - else 401 - cursor_pages = pobj->npage; 394 + cursor_pages = obj->size / PAGE_SIZE; 395 + if (cursor_pages > 4) 396 + cursor_pages = 4; /* Prevent overflow */ 402 397 403 398 /* Copy the cursor to cursor mem */ 404 399 tmp_dst = dev_priv->vram_addr + cursor_pobj->offset; ··· 438 437 return ret; 439 438 } 440 439 441 - int gma_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) 440 + static int gma_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) 442 441 { 443 442 struct drm_device *dev = crtc->dev; 444 443 struct gma_crtc *gma_crtc = to_gma_crtc(crtc); ··· 567 566 568 567 return ret; 569 568 } 569 + 570 + const struct drm_crtc_funcs gma_crtc_funcs = { 571 + .cursor_set = gma_crtc_cursor_set, 572 + .cursor_move = gma_crtc_cursor_move, 573 + .gamma_set = gma_crtc_gamma_set, 574 + .set_config = gma_crtc_set_config, 575 + .destroy = gma_crtc_destroy, 576 + .page_flip = gma_crtc_page_flip, 577 + .enable_vblank = gma_crtc_enable_vblank, 578 + .disable_vblank = gma_crtc_disable_vblank, 579 + .get_vblank_counter = gma_crtc_get_vblank_counter, 580 + }; 570 581 571 582 /* 572 583 * Save HW states of given crtc
+2 -8
drivers/gpu/drm/gma500/gma_display.h
··· 58 58 extern void gma_wait_for_vblank(struct drm_device *dev); 59 59 extern int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y, 60 60 struct drm_framebuffer *old_fb); 61 - extern int gma_crtc_cursor_set(struct drm_crtc *crtc, 62 - struct drm_file *file_priv, 63 - uint32_t handle, 64 - uint32_t width, uint32_t height); 65 - extern int gma_crtc_cursor_move(struct drm_crtc *crtc, int x, int y); 66 61 extern void gma_crtc_load_lut(struct drm_crtc *crtc); 67 - extern int gma_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green, 68 - u16 *blue, u32 size, 69 - struct drm_modeset_acquire_ctx *ctx); 70 62 extern void gma_crtc_dpms(struct drm_crtc *crtc, int mode); 71 63 extern void gma_crtc_prepare(struct drm_crtc *crtc); 72 64 extern void gma_crtc_commit(struct drm_crtc *crtc); ··· 74 82 75 83 extern void gma_crtc_save(struct drm_crtc *crtc); 76 84 extern void gma_crtc_restore(struct drm_crtc *crtc); 85 + 86 + extern const struct drm_crtc_funcs gma_crtc_funcs; 77 87 78 88 extern void gma_encoder_prepare(struct drm_encoder *encoder); 79 89 extern void gma_encoder_commit(struct drm_encoder *encoder);
+150 -171
drivers/gpu/drm/gma500/gtt.c
··· 49 49 * 50 50 * Set the GTT entry for the appropriate memory type. 51 51 */ 52 - static inline uint32_t psb_gtt_mask_pte(uint32_t pfn, int type) 52 + uint32_t psb_gtt_mask_pte(uint32_t pfn, int type) 53 53 { 54 54 uint32_t mask = PSB_PTE_VALID; 55 55 ··· 74 74 return pdev->gtt_map + (offset >> PAGE_SHIFT); 75 75 } 76 76 77 - /* 78 - * Take our preallocated GTT range and insert the GEM object into 79 - * the GTT. This is protected via the gtt mutex which the caller 80 - * must hold. 81 - */ 77 + /* Acquires GTT mutex internally. */ 82 78 void psb_gtt_insert_pages(struct drm_psb_private *pdev, const struct resource *res, 83 79 struct page **pages) 84 80 { 85 81 resource_size_t npages, i; 86 82 u32 __iomem *gtt_slot; 87 83 u32 pte; 84 + 85 + mutex_lock(&pdev->gtt_mutex); 88 86 89 87 /* Write our page entries into the GTT itself */ 90 88 ··· 96 98 97 99 /* Make sure all the entries are set before we return */ 98 100 ioread32(gtt_slot - 1); 101 + 102 + mutex_unlock(&pdev->gtt_mutex); 99 103 } 100 104 101 - /* 102 - * Remove a preallocated GTT range from the GTT. Overwrite all the 103 - * page table entries with the dummy page. This is protected via the gtt 104 - * mutex which the caller must hold. 105 - */ 105 + /* Acquires GTT mutex internally. 
*/ 106 106 void psb_gtt_remove_pages(struct drm_psb_private *pdev, const struct resource *res) 107 107 { 108 108 resource_size_t npages, i; 109 109 u32 __iomem *gtt_slot; 110 110 u32 pte; 111 + 112 + mutex_lock(&pdev->gtt_mutex); 111 113 112 114 /* Install scratch page for the resource */ 113 115 ··· 121 123 122 124 /* Make sure all the entries are set before we return */ 123 125 ioread32(gtt_slot - 1); 126 + 127 + mutex_unlock(&pdev->gtt_mutex); 124 128 } 125 129 126 - static void psb_gtt_alloc(struct drm_device *dev) 130 + static int psb_gtt_enable(struct drm_psb_private *dev_priv) 127 131 { 128 - struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 129 - init_rwsem(&dev_priv->gtt.sem); 130 - } 131 - 132 - void psb_gtt_takedown(struct drm_device *dev) 133 - { 134 - struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 132 + struct drm_device *dev = &dev_priv->dev; 135 133 struct pci_dev *pdev = to_pci_dev(dev->dev); 134 + int ret; 136 135 137 - if (dev_priv->gtt_map) { 138 - iounmap(dev_priv->gtt_map); 139 - dev_priv->gtt_map = NULL; 140 - } 141 - if (dev_priv->gtt_initialized) { 142 - pci_write_config_word(pdev, PSB_GMCH_CTRL, 143 - dev_priv->gmch_ctrl); 144 - PSB_WVDC32(dev_priv->pge_ctl, PSB_PGETBL_CTL); 145 - (void) PSB_RVDC32(PSB_PGETBL_CTL); 146 - } 147 - if (dev_priv->vram_addr) 148 - iounmap(dev_priv->gtt_map); 149 - } 150 - 151 - int psb_gtt_init(struct drm_device *dev, int resume) 152 - { 153 - struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 154 - struct pci_dev *pdev = to_pci_dev(dev->dev); 155 - unsigned gtt_pages; 156 - unsigned long stolen_size, vram_stolen_size; 157 - unsigned i, num_pages; 158 - unsigned pfn_base; 159 - struct psb_gtt *pg; 160 - 161 - int ret = 0; 162 - uint32_t pte; 163 - 164 - if (!resume) { 165 - mutex_init(&dev_priv->gtt_mutex); 166 - mutex_init(&dev_priv->mmap_mutex); 167 - psb_gtt_alloc(dev); 168 - } 169 - 170 - pg = &dev_priv->gtt; 171 - 172 - /* Enable the GTT */ 173 - pci_read_config_word(pdev, 
PSB_GMCH_CTRL, &dev_priv->gmch_ctrl); 174 - pci_write_config_word(pdev, PSB_GMCH_CTRL, 175 - dev_priv->gmch_ctrl | _PSB_GMCH_ENABLED); 136 + ret = pci_read_config_word(pdev, PSB_GMCH_CTRL, &dev_priv->gmch_ctrl); 137 + if (ret) 138 + return pcibios_err_to_errno(ret); 139 + ret = pci_write_config_word(pdev, PSB_GMCH_CTRL, dev_priv->gmch_ctrl | _PSB_GMCH_ENABLED); 140 + if (ret) 141 + return pcibios_err_to_errno(ret); 176 142 177 143 dev_priv->pge_ctl = PSB_RVDC32(PSB_PGETBL_CTL); 178 144 PSB_WVDC32(dev_priv->pge_ctl | _PSB_PGETBL_ENABLED, PSB_PGETBL_CTL); 179 - (void) PSB_RVDC32(PSB_PGETBL_CTL); 145 + 146 + (void)PSB_RVDC32(PSB_PGETBL_CTL); 147 + 148 + return 0; 149 + } 150 + 151 + static void psb_gtt_disable(struct drm_psb_private *dev_priv) 152 + { 153 + struct drm_device *dev = &dev_priv->dev; 154 + struct pci_dev *pdev = to_pci_dev(dev->dev); 155 + 156 + pci_write_config_word(pdev, PSB_GMCH_CTRL, dev_priv->gmch_ctrl); 157 + PSB_WVDC32(dev_priv->pge_ctl, PSB_PGETBL_CTL); 158 + 159 + (void)PSB_RVDC32(PSB_PGETBL_CTL); 160 + } 161 + 162 + void psb_gtt_fini(struct drm_device *dev) 163 + { 164 + struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 165 + 166 + iounmap(dev_priv->gtt_map); 167 + psb_gtt_disable(dev_priv); 168 + mutex_destroy(&dev_priv->gtt_mutex); 169 + } 170 + 171 + /* Clear GTT. Use a scratch page to avoid accidents or scribbles. 
*/ 172 + static void psb_gtt_clear(struct drm_psb_private *pdev) 173 + { 174 + resource_size_t pfn_base; 175 + unsigned long i; 176 + uint32_t pte; 177 + 178 + pfn_base = page_to_pfn(pdev->scratch_page); 179 + pte = psb_gtt_mask_pte(pfn_base, PSB_MMU_CACHED_MEMORY); 180 + 181 + for (i = 0; i < pdev->gtt.gtt_pages; ++i) 182 + iowrite32(pte, pdev->gtt_map + i); 183 + 184 + (void)ioread32(pdev->gtt_map + i - 1); 185 + } 186 + 187 + static void psb_gtt_init_ranges(struct drm_psb_private *dev_priv) 188 + { 189 + struct drm_device *dev = &dev_priv->dev; 190 + struct pci_dev *pdev = to_pci_dev(dev->dev); 191 + struct psb_gtt *pg = &dev_priv->gtt; 192 + resource_size_t gtt_phys_start, mmu_gatt_start, gtt_start, gtt_pages, 193 + gatt_start, gatt_pages; 194 + struct resource *gtt_mem; 180 195 181 196 /* The root resource we allocate address space from */ 182 - dev_priv->gtt_initialized = 1; 183 - 184 - pg->gtt_phys_start = dev_priv->pge_ctl & PAGE_MASK; 197 + gtt_phys_start = dev_priv->pge_ctl & PAGE_MASK; 185 198 186 199 /* 187 - * The video mmu has a hw bug when accessing 0x0D0000000. 188 - * Make gatt start at 0x0e000,0000. This doesn't actually 189 - * matter for us but may do if the video acceleration ever 190 - * gets opened up. 200 + * The video MMU has a HW bug when accessing 0x0d0000000. Make 201 + * GATT start at 0x0e0000000. This doesn't actually matter for 202 + * us now, but maybe will if the video acceleration ever gets 203 + * opened up. 191 204 */ 192 - pg->mmu_gatt_start = 0xE0000000; 205 + mmu_gatt_start = 0xe0000000; 193 206 194 - pg->gtt_start = pci_resource_start(pdev, PSB_GTT_RESOURCE); 195 - gtt_pages = pci_resource_len(pdev, PSB_GTT_RESOURCE) 196 - >> PAGE_SHIFT; 207 + gtt_start = pci_resource_start(pdev, PSB_GTT_RESOURCE); 208 + gtt_pages = pci_resource_len(pdev, PSB_GTT_RESOURCE) >> PAGE_SHIFT; 209 + 197 210 /* CDV doesn't report this. 
In which case the system has 64 gtt pages */ 198 - if (pg->gtt_start == 0 || gtt_pages == 0) { 211 + if (!gtt_start || !gtt_pages) { 199 212 dev_dbg(dev->dev, "GTT PCI BAR not initialized.\n"); 200 213 gtt_pages = 64; 201 - pg->gtt_start = dev_priv->pge_ctl; 214 + gtt_start = dev_priv->pge_ctl; 202 215 } 203 216 204 - pg->gatt_start = pci_resource_start(pdev, PSB_GATT_RESOURCE); 205 - pg->gatt_pages = pci_resource_len(pdev, PSB_GATT_RESOURCE) 206 - >> PAGE_SHIFT; 207 - dev_priv->gtt_mem = &pdev->resource[PSB_GATT_RESOURCE]; 217 + gatt_start = pci_resource_start(pdev, PSB_GATT_RESOURCE); 218 + gatt_pages = pci_resource_len(pdev, PSB_GATT_RESOURCE) >> PAGE_SHIFT; 208 219 209 - if (pg->gatt_pages == 0 || pg->gatt_start == 0) { 220 + if (!gatt_pages || !gatt_start) { 210 221 static struct resource fudge; /* Preferably peppermint */ 211 - /* This can occur on CDV systems. Fudge it in this case. 212 - We really don't care what imaginary space is being allocated 213 - at this point */ 222 + 223 + /* 224 + * This can occur on CDV systems. Fudge it in this case. We 225 + * really don't care what imaginary space is being allocated 226 + * at this point. 227 + */ 214 228 dev_dbg(dev->dev, "GATT PCI BAR not initialized.\n"); 215 - pg->gatt_start = 0x40000000; 216 - pg->gatt_pages = (128 * 1024 * 1024) >> PAGE_SHIFT; 217 - /* This is a little confusing but in fact the GTT is providing 218 - a view from the GPU into memory and not vice versa. As such 219 - this is really allocating space that is not the same as the 220 - CPU address space on CDV */ 229 + gatt_start = 0x40000000; 230 + gatt_pages = (128 * 1024 * 1024) >> PAGE_SHIFT; 231 + 232 + /* 233 + * This is a little confusing but in fact the GTT is providing 234 + * a view from the GPU into memory and not vice versa. As such 235 + * this is really allocating space that is not the same as the 236 + * CPU address space on CDV. 
237 + */ 221 238 fudge.start = 0x40000000; 222 239 fudge.end = 0x40000000 + 128 * 1024 * 1024 - 1; 223 240 fudge.name = "fudge"; 224 241 fudge.flags = IORESOURCE_MEM; 225 - dev_priv->gtt_mem = &fudge; 242 + 243 + gtt_mem = &fudge; 244 + } else { 245 + gtt_mem = &pdev->resource[PSB_GATT_RESOURCE]; 226 246 } 227 247 228 - pci_read_config_dword(pdev, PSB_BSM, &dev_priv->stolen_base); 229 - vram_stolen_size = pg->gtt_phys_start - dev_priv->stolen_base 230 - - PAGE_SIZE; 231 - 232 - stolen_size = vram_stolen_size; 233 - 234 - dev_dbg(dev->dev, "Stolen memory base 0x%x, size %luK\n", 235 - dev_priv->stolen_base, vram_stolen_size / 1024); 236 - 237 - if (resume && (gtt_pages != pg->gtt_pages) && 238 - (stolen_size != pg->stolen_size)) { 239 - dev_err(dev->dev, "GTT resume error.\n"); 240 - ret = -EINVAL; 241 - goto out_err; 242 - } 243 - 248 + pg->gtt_phys_start = gtt_phys_start; 249 + pg->mmu_gatt_start = mmu_gatt_start; 250 + pg->gtt_start = gtt_start; 244 251 pg->gtt_pages = gtt_pages; 245 - pg->stolen_size = stolen_size; 246 - dev_priv->vram_stolen_size = vram_stolen_size; 252 + pg->gatt_start = gatt_start; 253 + pg->gatt_pages = gatt_pages; 254 + dev_priv->gtt_mem = gtt_mem; 255 + } 247 256 248 - /* 249 - * Map the GTT and the stolen memory area 250 - */ 251 - if (!resume) 252 - dev_priv->gtt_map = ioremap(pg->gtt_phys_start, 253 - gtt_pages << PAGE_SHIFT); 257 + int psb_gtt_init(struct drm_device *dev) 258 + { 259 + struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 260 + struct psb_gtt *pg = &dev_priv->gtt; 261 + int ret; 262 + 263 + mutex_init(&dev_priv->gtt_mutex); 264 + 265 + ret = psb_gtt_enable(dev_priv); 266 + if (ret) 267 + goto err_mutex_destroy; 268 + 269 + psb_gtt_init_ranges(dev_priv); 270 + 271 + dev_priv->gtt_map = ioremap(pg->gtt_phys_start, pg->gtt_pages << PAGE_SHIFT); 254 272 if (!dev_priv->gtt_map) { 255 273 dev_err(dev->dev, "Failure to map gtt.\n"); 256 274 ret = -ENOMEM; 257 - goto out_err; 275 + goto err_psb_gtt_disable; 258 276 } 259 
277 260 - if (!resume) 261 - dev_priv->vram_addr = ioremap_wc(dev_priv->stolen_base, 262 - stolen_size); 278 + psb_gtt_clear(dev_priv); 263 279 264 - if (!dev_priv->vram_addr) { 265 - dev_err(dev->dev, "Failure to map stolen base.\n"); 266 - ret = -ENOMEM; 267 - goto out_err; 268 - } 269 - 270 - /* 271 - * Insert vram stolen pages into the GTT 272 - */ 273 - 274 - pfn_base = dev_priv->stolen_base >> PAGE_SHIFT; 275 - num_pages = vram_stolen_size >> PAGE_SHIFT; 276 - dev_dbg(dev->dev, "Set up %d stolen pages starting at 0x%08x, GTT offset %dK\n", 277 - num_pages, pfn_base << PAGE_SHIFT, 0); 278 - for (i = 0; i < num_pages; ++i) { 279 - pte = psb_gtt_mask_pte(pfn_base + i, PSB_MMU_CACHED_MEMORY); 280 - iowrite32(pte, dev_priv->gtt_map + i); 281 - } 282 - 283 - /* 284 - * Init rest of GTT to the scratch page to avoid accidents or scribbles 285 - */ 286 - 287 - pfn_base = page_to_pfn(dev_priv->scratch_page); 288 - pte = psb_gtt_mask_pte(pfn_base, PSB_MMU_CACHED_MEMORY); 289 - for (; i < gtt_pages; ++i) 290 - iowrite32(pte, dev_priv->gtt_map + i); 291 - 292 - (void) ioread32(dev_priv->gtt_map + i - 1); 293 280 return 0; 294 281 295 - out_err: 296 - psb_gtt_takedown(dev); 282 + err_psb_gtt_disable: 283 + psb_gtt_disable(dev_priv); 284 + err_mutex_destroy: 285 + mutex_destroy(&dev_priv->gtt_mutex); 297 286 return ret; 298 287 } 299 288 300 - int psb_gtt_restore(struct drm_device *dev) 289 + int psb_gtt_resume(struct drm_device *dev) 301 290 { 302 291 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 303 - struct resource *r = dev_priv->gtt_mem->child; 304 - struct psb_gem_object *pobj; 305 - unsigned int restored = 0, total = 0, size = 0; 292 + struct psb_gtt *pg = &dev_priv->gtt; 293 + unsigned int old_gtt_pages = pg->gtt_pages; 294 + int ret; 306 295 307 - /* On resume, the gtt_mutex is already initialized */ 308 - mutex_lock(&dev_priv->gtt_mutex); 309 - psb_gtt_init(dev, 1); 296 + /* Enable the GTT */ 297 + ret = psb_gtt_enable(dev_priv); 298 + if (ret) 299 + 
return ret; 310 300 311 - while (r != NULL) { 312 - /* 313 - * TODO: GTT restoration needs a refactoring, so that we don't have to touch 314 - * struct psb_gem_object here. The type represents a GEM object and is 315 - * not related to the GTT itself. 316 - */ 317 - pobj = container_of(r, struct psb_gem_object, resource); 318 - if (pobj->pages) { 319 - psb_gtt_insert_pages(dev_priv, &pobj->resource, pobj->pages); 320 - size += pobj->resource.end - pobj->resource.start; 321 - restored++; 322 - } 323 - r = r->sibling; 324 - total++; 301 + psb_gtt_init_ranges(dev_priv); 302 + 303 + if (old_gtt_pages != pg->gtt_pages) { 304 + dev_err(dev->dev, "GTT resume error.\n"); 305 + ret = -ENODEV; 306 + goto err_psb_gtt_disable; 325 307 } 326 - mutex_unlock(&dev_priv->gtt_mutex); 327 - DRM_DEBUG_DRIVER("Restored %u of %u gtt ranges (%u KB)", restored, 328 - total, (size / 1024)); 329 308 330 - return 0; 309 + psb_gtt_clear(dev_priv); 310 + 311 + err_psb_gtt_disable: 312 + psb_gtt_disable(dev_priv); 313 + return ret; 331 314 }
+4 -4
drivers/gpu/drm/gma500/gtt.h
··· 22 22 unsigned gatt_pages; 23 23 unsigned long stolen_size; 24 24 unsigned long vram_stolen_size; 25 - struct rw_semaphore sem; 26 25 }; 27 26 28 27 /* Exported functions */ 29 - extern int psb_gtt_init(struct drm_device *dev, int resume); 30 - extern void psb_gtt_takedown(struct drm_device *dev); 31 - extern int psb_gtt_restore(struct drm_device *dev); 28 + int psb_gtt_init(struct drm_device *dev); 29 + void psb_gtt_fini(struct drm_device *dev); 30 + int psb_gtt_resume(struct drm_device *dev); 32 31 33 32 int psb_gtt_allocate_resource(struct drm_psb_private *pdev, struct resource *res, 34 33 const char *name, resource_size_t size, resource_size_t align, 35 34 bool stolen, u32 *offset); 36 35 36 + uint32_t psb_gtt_mask_pte(uint32_t pfn, int type); 37 37 void psb_gtt_insert_pages(struct drm_psb_private *pdev, const struct resource *res, 38 38 struct page **pages); 39 39 void psb_gtt_remove_pages(struct drm_psb_private *pdev, const struct resource *res);
+13 -12
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 372 372 bool ok, is_sdvo = false; 373 373 bool is_lvds = false; 374 374 bool is_mipi = false; 375 - struct drm_mode_config *mode_config = &dev->mode_config; 376 375 struct gma_encoder *gma_encoder = NULL; 377 376 uint64_t scalingType = DRM_MODE_SCALE_FULLSCREEN; 377 + struct drm_connector_list_iter conn_iter; 378 378 struct drm_connector *connector; 379 379 int i; 380 380 int need_aux = gma_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ? 1 : 0; ··· 385 385 if (!gma_power_begin(dev, true)) 386 386 return 0; 387 387 388 - memcpy(&gma_crtc->saved_mode, 389 - mode, 390 - sizeof(struct drm_display_mode)); 391 - memcpy(&gma_crtc->saved_adjusted_mode, 392 - adjusted_mode, 393 - sizeof(struct drm_display_mode)); 388 + drm_mode_copy(&gma_crtc->saved_mode, mode); 389 + drm_mode_copy(&gma_crtc->saved_adjusted_mode, adjusted_mode); 394 390 395 - list_for_each_entry(connector, &mode_config->connector_list, head) { 391 + drm_connector_list_iter_begin(dev, &conn_iter); 392 + drm_for_each_connector_iter(connector, &conn_iter) { 396 393 if (!connector->encoder || connector->encoder->crtc != crtc) 397 394 continue; 398 395 ··· 406 409 is_mipi = true; 407 410 break; 408 411 } 412 + 413 + break; 409 414 } 415 + 416 + if (gma_encoder) 417 + drm_object_property_get_value(&connector->base, 418 + dev->mode_config.scaling_mode_property, &scalingType); 419 + 420 + drm_connector_list_iter_end(&conn_iter); 410 421 411 422 /* Disable the VGA plane that we never use */ 412 423 for (i = 0; i <= need_aux; i++) ··· 428 423 REG_WRITE_WITH_AUX(map->src, ((mode->crtc_hdisplay - 1) << 16) | 429 424 (mode->crtc_vdisplay - 1), i); 430 425 } 431 - 432 - if (gma_encoder) 433 - drm_object_property_get_value(&connector->base, 434 - dev->mode_config.scaling_mode_property, &scalingType); 435 426 436 427 if (scalingType == DRM_MODE_SCALE_NO_SCALE) { 437 428 /* Moorestown doesn't have register support for centering so
-1
drivers/gpu/drm/gma500/oaktrail_device.c
··· 545 545 .chip_setup = oaktrail_chip_setup, 546 546 .chip_teardown = oaktrail_teardown, 547 547 .crtc_helper = &oaktrail_helper_funcs, 548 - .crtc_funcs = &gma_intel_crtc_funcs, 549 548 550 549 .output_init = oaktrail_output_init, 551 550
-1
drivers/gpu/drm/gma500/oaktrail_hdmi.c
··· 654 654 connector->display_info.subpixel_order = SubPixelHorizontalRGB; 655 655 connector->interlace_allowed = false; 656 656 connector->doublescan_allowed = false; 657 - drm_connector_register(connector); 658 657 dev_info(dev->dev, "HDMI initialised.\n"); 659 658 660 659 return;
+8 -8
drivers/gpu/drm/gma500/oaktrail_lvds.c
··· 85 85 struct drm_device *dev = encoder->dev; 86 86 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 87 87 struct psb_intel_mode_device *mode_dev = &dev_priv->mode_dev; 88 - struct drm_mode_config *mode_config = &dev->mode_config; 88 + struct drm_connector_list_iter conn_iter; 89 89 struct drm_connector *connector = NULL; 90 90 struct drm_crtc *crtc = encoder->crtc; 91 91 u32 lvds_port; ··· 112 112 REG_WRITE(LVDS, lvds_port); 113 113 114 114 /* Find the connector we're trying to set up */ 115 - list_for_each_entry(connector, &mode_config->connector_list, head) { 115 + drm_connector_list_iter_begin(dev, &conn_iter); 116 + drm_for_each_connector_iter(connector, &conn_iter) { 116 117 if (connector->encoder && connector->encoder->crtc == crtc) 117 118 break; 118 119 } 119 120 120 - if (list_entry_is_head(connector, &mode_config->connector_list, head)) { 121 + if (!connector) { 122 + drm_connector_list_iter_end(&conn_iter); 121 123 DRM_ERROR("Couldn't find connector when setting mode"); 122 124 gma_power_end(dev); 123 125 return; 124 126 } 125 127 126 - drm_object_property_get_value( 127 - &connector->base, 128 - dev->mode_config.scaling_mode_property, 129 - &v); 128 + drm_object_property_get_value( &connector->base, 129 + dev->mode_config.scaling_mode_property, &v); 130 + drm_connector_list_iter_end(&conn_iter); 130 131 131 132 if (v == DRM_MODE_SCALE_NO_SCALE) 132 133 REG_WRITE(PFIT_CONTROL, 0); ··· 401 400 out: 402 401 mutex_unlock(&dev->mode_config.mutex); 403 402 404 - drm_connector_register(connector); 405 403 return; 406 404 407 405 failed_find:
+3 -2
drivers/gpu/drm/gma500/opregion.c
··· 23 23 */ 24 24 #include <linux/acpi.h> 25 25 #include "psb_drv.h" 26 + #include "psb_irq.h" 26 27 #include "psb_intel_reg.h" 27 28 28 29 #define PCI_ASLE 0xe4 ··· 218 217 if (asle && system_opregion ) { 219 218 /* Don't do this on Medfield or other non PC like devices, they 220 219 use the bit for something different altogether */ 221 - psb_enable_pipestat(dev_priv, 0, PIPE_LEGACY_BLC_EVENT_ENABLE); 222 - psb_enable_pipestat(dev_priv, 1, PIPE_LEGACY_BLC_EVENT_ENABLE); 220 + gma_enable_pipestat(dev_priv, 0, PIPE_LEGACY_BLC_EVENT_ENABLE); 221 + gma_enable_pipestat(dev_priv, 1, PIPE_LEGACY_BLC_EVENT_ENABLE); 223 222 224 223 asle->tche = ASLE_ALS_EN | ASLE_BLC_EN | ASLE_PFIT_EN 225 224 | ASLE_PFMB_EN;
+9 -6
drivers/gpu/drm/gma500/power.c
··· 28 28 * Alan Cox <alan@linux.intel.com> 29 29 */ 30 30 31 + #include "gem.h" 31 32 #include "power.h" 32 33 #include "psb_drv.h" 33 34 #include "psb_reg.h" ··· 113 112 pci_write_config_word(pdev, PSB_GMCH_CTRL, 114 113 dev_priv->gmch_ctrl | _PSB_GMCH_ENABLED); 115 114 116 - psb_gtt_restore(dev); /* Rebuild our GTT mappings */ 115 + /* Rebuild our GTT mappings */ 116 + psb_gtt_resume(dev); 117 + psb_gem_mm_resume(dev); 117 118 dev_priv->ops->restore_regs(dev); 118 119 } 119 120 ··· 201 198 dev_err(dev->dev, "GPU hardware busy, cannot suspend\n"); 202 199 return -EBUSY; 203 200 } 204 - psb_irq_uninstall(dev); 201 + gma_irq_uninstall(dev); 205 202 gma_suspend_display(dev); 206 203 gma_suspend_pci(pdev); 207 204 } ··· 223 220 mutex_lock(&power_mutex); 224 221 gma_resume_pci(pdev); 225 222 gma_resume_display(pdev); 226 - psb_irq_preinstall(dev); 227 - psb_irq_postinstall(dev); 223 + gma_irq_preinstall(dev); 224 + gma_irq_postinstall(dev); 228 225 mutex_unlock(&power_mutex); 229 226 return 0; 230 227 } ··· 270 267 /* Ok power up needed */ 271 268 ret = gma_resume_pci(pdev); 272 269 if (ret == 0) { 273 - psb_irq_preinstall(dev); 274 - psb_irq_postinstall(dev); 270 + gma_irq_preinstall(dev); 271 + gma_irq_postinstall(dev); 275 272 pm_runtime_get(dev->dev); 276 273 dev_priv->display_count++; 277 274 spin_unlock_irqrestore(&power_ctrl_lock, flags);
+20 -9
drivers/gpu/drm/gma500/psb_device.c
··· 168 168 static int psb_save_display_registers(struct drm_device *dev) 169 169 { 170 170 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 171 + struct gma_connector *gma_connector; 171 172 struct drm_crtc *crtc; 172 - struct gma_connector *connector; 173 + struct drm_connector_list_iter conn_iter; 174 + struct drm_connector *connector; 173 175 struct psb_state *regs = &dev_priv->regs.psb; 174 176 175 177 /* Display arbitration control + watermarks */ ··· 191 189 dev_priv->ops->save_crtc(crtc); 192 190 } 193 191 194 - list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) 195 - if (connector->save) 196 - connector->save(&connector->base); 192 + drm_connector_list_iter_begin(dev, &conn_iter); 193 + drm_for_each_connector_iter(connector, &conn_iter) { 194 + gma_connector = to_gma_connector(connector); 195 + if (gma_connector->save) 196 + gma_connector->save(connector); 197 + } 198 + drm_connector_list_iter_end(&conn_iter); 197 199 198 200 drm_modeset_unlock_all(dev); 199 201 return 0; ··· 212 206 static int psb_restore_display_registers(struct drm_device *dev) 213 207 { 214 208 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 209 + struct gma_connector *gma_connector; 215 210 struct drm_crtc *crtc; 216 - struct gma_connector *connector; 211 + struct drm_connector_list_iter conn_iter; 212 + struct drm_connector *connector; 217 213 struct psb_state *regs = &dev_priv->regs.psb; 218 214 219 215 /* Display arbitration + watermarks */ ··· 236 228 if (drm_helper_crtc_in_use(crtc)) 237 229 dev_priv->ops->restore_crtc(crtc); 238 230 239 - list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) 240 - if (connector->restore) 241 - connector->restore(&connector->base); 231 + drm_connector_list_iter_begin(dev, &conn_iter); 232 + drm_for_each_connector_iter(connector, &conn_iter) { 233 + gma_connector = to_gma_connector(connector); 234 + if (gma_connector->restore) 235 + gma_connector->restore(connector); 236 + } 237 
+ drm_connector_list_iter_end(&conn_iter); 242 238 243 239 drm_modeset_unlock_all(dev); 244 240 return 0; ··· 341 329 .chip_teardown = psb_chip_teardown, 342 330 343 331 .crtc_helper = &psb_intel_helper_funcs, 344 - .crtc_funcs = &gma_intel_crtc_funcs, 345 332 .clock_funcs = &psb_clock_funcs, 346 333 347 334 .output_init = psb_output_init,
+17 -12
drivers/gpu/drm/gma500/psb_drv.c
··· 28 28 #include <drm/drm_vblank.h> 29 29 30 30 #include "framebuffer.h" 31 + #include "gem.h" 31 32 #include "intel_bios.h" 32 33 #include "mid_bios.h" 33 34 #include "power.h" ··· 100 99 * 101 100 * Soft reset the graphics engine and then reload the necessary registers. 102 101 */ 103 - void psb_spank(struct drm_psb_private *dev_priv) 102 + static void psb_spank(struct drm_psb_private *dev_priv) 104 103 { 105 104 PSB_WSGX32(_PSB_CS_RESET_BIF_RESET | _PSB_CS_RESET_DPM_RESET | 106 105 _PSB_CS_RESET_TA_RESET | _PSB_CS_RESET_USE_RESET | ··· 173 172 gma_backlight_exit(dev); 174 173 psb_modeset_cleanup(dev); 175 174 175 + gma_irq_uninstall(dev); 176 + 176 177 if (dev_priv->ops->chip_teardown) 177 178 dev_priv->ops->chip_teardown(dev); 178 179 ··· 187 184 if (dev_priv->mmu) { 188 185 struct psb_gtt *pg = &dev_priv->gtt; 189 186 190 - down_read(&pg->sem); 191 187 psb_mmu_remove_pfn_sequence( 192 188 psb_mmu_get_default_pd 193 189 (dev_priv->mmu), 194 190 pg->mmu_gatt_start, 195 191 dev_priv->vram_stolen_size >> PAGE_SHIFT); 196 - up_read(&pg->sem); 197 192 psb_mmu_driver_takedown(dev_priv->mmu); 198 193 dev_priv->mmu = NULL; 199 194 } 200 - psb_gtt_takedown(dev); 195 + psb_gem_mm_fini(dev); 196 + psb_gtt_fini(dev); 201 197 if (dev_priv->scratch_page) { 202 198 set_pages_wb(dev_priv->scratch_page, 1); 203 199 __free_page(dev_priv->scratch_page); ··· 236 234 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 237 235 unsigned long resource_start, resource_len; 238 236 unsigned long irqflags; 239 - int ret = -ENOMEM; 237 + struct drm_connector_list_iter conn_iter; 240 238 struct drm_connector *connector; 241 239 struct gma_encoder *gma_encoder; 242 240 struct psb_gtt *pg; 241 + int ret = -ENOMEM; 243 242 244 243 /* initializing driver private data */ 245 244 ··· 329 326 330 327 set_pages_uc(dev_priv->scratch_page, 1); 331 328 332 - ret = psb_gtt_init(dev, 0); 329 + ret = psb_gtt_init(dev); 330 + if (ret) 331 + goto out_err; 332 + ret = psb_gem_mm_init(dev); 333 
333 if (ret) 334 334 goto out_err; 335 335 ··· 351 345 return ret; 352 346 353 347 /* Add stolen memory to SGX MMU */ 354 - down_read(&pg->sem); 355 348 ret = psb_mmu_insert_pfn_sequence(psb_mmu_get_default_pd(dev_priv->mmu), 356 349 dev_priv->stolen_base >> PAGE_SHIFT, 357 350 pg->gatt_start, 358 351 pg->stolen_size >> PAGE_SHIFT, 0); 359 - up_read(&pg->sem); 360 352 361 353 psb_mmu_set_pd_context(psb_mmu_get_default_pd(dev_priv->mmu), 0); 362 354 psb_mmu_set_pd_context(dev_priv->pf_pd, 1); ··· 383 379 PSB_WVDC32(0xFFFFFFFF, PSB_INT_MASK_R); 384 380 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 385 381 386 - psb_irq_install(dev, pdev->irq); 382 + gma_irq_install(dev, pdev->irq); 387 383 388 384 dev->max_vblank_count = 0xffffff; /* only 24 bits of frame count */ 389 385 ··· 391 387 psb_fbdev_init(dev); 392 388 drm_kms_helper_poll_init(dev); 393 389 394 - /* Only add backlight support if we have LVDS output */ 395 - list_for_each_entry(connector, &dev->mode_config.connector_list, 396 - head) { 390 + /* Only add backlight support if we have LVDS or MIPI output */ 391 + drm_connector_list_iter_begin(dev, &conn_iter); 392 + drm_for_each_connector_iter(connector, &conn_iter) { 397 393 gma_encoder = gma_attached_encoder(connector); 398 394 399 395 switch (gma_encoder->type) { ··· 403 399 break; 404 400 } 405 401 } 402 + drm_connector_list_iter_end(&conn_iter); 406 403 407 404 if (ret) 408 405 return ret;
+1 -89
drivers/gpu/drm/gma500/psb_drv.h
··· 13 13 14 14 #include <drm/drm_device.h> 15 15 16 - #include "gma_display.h" 17 16 #include "gtt.h" 18 17 #include "intel_bios.h" 19 18 #include "mmu.h" ··· 34 35 35 36 /* Append new drm mode definition here, align with libdrm definition */ 36 37 #define DRM_MODE_SCALE_NO_SCALE 2 37 - 38 - enum { 39 - CHIP_PSB_8108 = 0, /* Poulsbo */ 40 - CHIP_PSB_8109 = 1, /* Poulsbo */ 41 - CHIP_MRST_4100 = 2, /* Moorestown/Oaktrail */ 42 - }; 43 38 44 39 #define IS_PSB(drm) ((to_pci_dev((drm)->dev)->device & 0xfffe) == 0x8108) 45 40 #define IS_MRST(drm) ((to_pci_dev((drm)->dev)->device & 0xfff0) == 0x4100) ··· 401 408 uint32_t stolen_base; 402 409 u8 __iomem *vram_addr; 403 410 unsigned long vram_stolen_size; 404 - int gtt_initialized; 405 411 u16 gmch_ctrl; /* Saved GTT setup */ 406 412 u32 pge_ctl; 407 413 ··· 578 586 579 587 /* Sub functions */ 580 588 struct drm_crtc_helper_funcs const *crtc_helper; 581 - struct drm_crtc_funcs const *crtc_funcs; 582 589 const struct gma_clock_funcs *clock_funcs; 583 590 584 591 /* Setup hooks */ ··· 609 618 int i2c_bus; /* I2C bus identifier for Moorestown */ 610 619 }; 611 620 612 - 613 - 614 - extern int drm_crtc_probe_output_modes(struct drm_device *dev, int, int); 615 - extern int drm_pick_crtcs(struct drm_device *dev); 616 - 617 - /* psb_irq.c */ 618 - extern void psb_irq_uninstall_islands(struct drm_device *dev, int hw_islands); 619 - extern int psb_vblank_wait2(struct drm_device *dev, unsigned int *sequence); 620 - extern int psb_vblank_wait(struct drm_device *dev, unsigned int *sequence); 621 - extern int psb_enable_vblank(struct drm_crtc *crtc); 622 - extern void psb_disable_vblank(struct drm_crtc *crtc); 623 - void 624 - psb_enable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask); 625 - 626 - void 627 - psb_disable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask); 628 - 629 - extern u32 psb_get_vblank_counter(struct drm_crtc *crtc); 630 - 631 - /* framebuffer.c */ 632 - extern int psbfb_probed(struct 
drm_device *dev); 633 - extern int psbfb_remove(struct drm_device *dev, 634 - struct drm_framebuffer *fb); 635 - /* psb_drv.c */ 636 - extern void psb_spank(struct drm_psb_private *dev_priv); 637 - 638 - /* psb_reset.c */ 621 + /* psb_lid.c */ 639 622 extern void psb_lid_timer_init(struct drm_psb_private *dev_priv); 640 623 extern void psb_lid_timer_takedown(struct drm_psb_private *dev_priv); 641 - extern void psb_print_pagefault(struct drm_psb_private *dev_priv); 642 624 643 625 /* modesetting */ 644 626 extern void psb_modeset_init(struct drm_device *dev); ··· 634 670 635 671 /* psb_intel_display.c */ 636 672 extern const struct drm_crtc_helper_funcs psb_intel_helper_funcs; 637 - extern const struct drm_crtc_funcs gma_intel_crtc_funcs; 638 673 639 674 /* psb_intel_lvds.c */ 640 675 extern const struct drm_connector_helper_funcs ··· 653 690 /* cdv_device.c */ 654 691 extern const struct psb_ops cdv_chip_ops; 655 692 656 - /* Debug print bits setting */ 657 - #define PSB_D_GENERAL (1 << 0) 658 - #define PSB_D_INIT (1 << 1) 659 - #define PSB_D_IRQ (1 << 2) 660 - #define PSB_D_ENTRY (1 << 3) 661 - /* debug the get H/V BP/FP count */ 662 - #define PSB_D_HV (1 << 4) 663 - #define PSB_D_DBI_BF (1 << 5) 664 - #define PSB_D_PM (1 << 6) 665 - #define PSB_D_RENDER (1 << 7) 666 - #define PSB_D_REG (1 << 8) 667 - #define PSB_D_MSVDX (1 << 9) 668 - #define PSB_D_TOPAZ (1 << 10) 669 - 670 - extern int drm_idle_check_interval; 671 - 672 693 /* Utilities */ 673 - static inline u32 MRST_MSG_READ32(int domain, uint port, uint offset) 674 - { 675 - int mcr = (0xD0<<24) | (port << 16) | (offset << 8); 676 - uint32_t ret_val = 0; 677 - struct pci_dev *pci_root = pci_get_domain_bus_and_slot(domain, 0, 0); 678 - pci_write_config_dword(pci_root, 0xD0, mcr); 679 - pci_read_config_dword(pci_root, 0xD4, &ret_val); 680 - pci_dev_put(pci_root); 681 - return ret_val; 682 - } 683 - static inline void MRST_MSG_WRITE32(int domain, uint port, uint offset, 684 - u32 value) 685 - { 686 - int mcr = 
(0xE0<<24) | (port << 16) | (offset << 8) | 0xF0; 687 - struct pci_dev *pci_root = pci_get_domain_bus_and_slot(domain, 0, 0); 688 - pci_write_config_dword(pci_root, 0xD4, value); 689 - pci_write_config_dword(pci_root, 0xD0, mcr); 690 - pci_dev_put(pci_root); 691 - } 692 - 693 694 static inline uint32_t REGISTER_READ(struct drm_device *dev, uint32_t reg) 694 695 { 695 696 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); ··· 734 807 #define PSB_WVDC32(_val, _offs) iowrite32(_val, dev_priv->vdc_reg + (_offs)) 735 808 #define PSB_RVDC32(_offs) ioread32(dev_priv->vdc_reg + (_offs)) 736 809 737 - /* #define TRAP_SGX_PM_FAULT 1 */ 738 - #ifdef TRAP_SGX_PM_FAULT 739 - #define PSB_RSGX32(_offs) \ 740 - ({ \ 741 - if (inl(dev_priv->apm_base + PSB_APM_STS) & 0x3) { \ 742 - pr_err("access sgx when it's off!! (READ) %s, %d\n", \ 743 - __FILE__, __LINE__); \ 744 - melay(1000); \ 745 - } \ 746 - ioread32(dev_priv->sgx_reg + (_offs)); \ 747 - }) 748 - #else 749 810 #define PSB_RSGX32(_offs) ioread32(dev_priv->sgx_reg + (_offs)) 750 - #endif 751 811 #define PSB_WSGX32(_val, _offs) iowrite32(_val, dev_priv->sgx_reg + (_offs)) 752 - 753 - #define MSVDX_REG_DUMP 0 754 812 755 813 #define PSB_WMSVDX32(_val, _offs) iowrite32(_val, dev_priv->msvdx_reg + (_offs)) 756 814 #define PSB_RMSVDX32(_offs) ioread32(dev_priv->msvdx_reg + (_offs))
+17 -22
drivers/gpu/drm/gma500/psb_intel_display.c
··· 106 106 u32 dpll = 0, fp = 0, dspcntr, pipeconf; 107 107 bool ok, is_sdvo = false; 108 108 bool is_lvds = false, is_tv = false; 109 - struct drm_mode_config *mode_config = &dev->mode_config; 109 + struct drm_connector_list_iter conn_iter; 110 110 struct drm_connector *connector; 111 111 const struct gma_limit_t *limit; 112 112 ··· 116 116 return 0; 117 117 } 118 118 119 - list_for_each_entry(connector, &mode_config->connector_list, head) { 119 + drm_connector_list_iter_begin(dev, &conn_iter); 120 + drm_for_each_connector_iter(connector, &conn_iter) { 120 121 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 121 122 122 123 if (!connector->encoder ··· 135 134 is_tv = true; 136 135 break; 137 136 } 137 + 138 + break; 138 139 } 140 + drm_connector_list_iter_end(&conn_iter); 139 141 140 142 refclk = 96000; 141 143 ··· 431 427 .disable = gma_crtc_disable, 432 428 }; 433 429 434 - const struct drm_crtc_funcs gma_intel_crtc_funcs = { 435 - .cursor_set = gma_crtc_cursor_set, 436 - .cursor_move = gma_crtc_cursor_move, 437 - .gamma_set = gma_crtc_gamma_set, 438 - .set_config = gma_crtc_set_config, 439 - .destroy = gma_crtc_destroy, 440 - .page_flip = gma_crtc_page_flip, 441 - .enable_vblank = psb_enable_vblank, 442 - .disable_vblank = psb_disable_vblank, 443 - .get_vblank_counter = psb_get_vblank_counter, 444 - }; 445 - 446 430 const struct gma_clock_funcs psb_clock_funcs = { 447 431 .clock = psb_intel_clock, 448 432 .limit = psb_intel_limit, ··· 492 500 return; 493 501 } 494 502 495 - /* Set the CRTC operations from the chip specific data */ 496 - drm_crtc_init(dev, &gma_crtc->base, dev_priv->ops->crtc_funcs); 503 + drm_crtc_init(dev, &gma_crtc->base, &gma_crtc_funcs); 497 504 498 505 /* Set the CRTC clock functions from chip specific data */ 499 506 gma_crtc->clock_funcs = dev_priv->ops->clock_funcs; ··· 526 535 527 536 struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe) 528 537 { 529 - struct drm_crtc *crtc = NULL; 538 + 
struct drm_crtc *crtc; 530 539 531 540 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 532 541 struct gma_crtc *gma_crtc = to_gma_crtc(crtc); 542 + 533 543 if (gma_crtc->pipe == pipe) 534 - break; 544 + return crtc; 535 545 } 536 - return crtc; 546 + return NULL; 537 547 } 538 548 539 549 int gma_connector_clones(struct drm_device *dev, int type_mask) 540 550 { 541 - int index_mask = 0; 551 + struct drm_connector_list_iter conn_iter; 542 552 struct drm_connector *connector; 553 + int index_mask = 0; 543 554 int entry = 0; 544 555 545 - list_for_each_entry(connector, &dev->mode_config.connector_list, 546 - head) { 556 + drm_connector_list_iter_begin(dev, &conn_iter); 557 + drm_for_each_connector_iter(connector, &conn_iter) { 547 558 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 548 559 if (type_mask & (1 << gma_encoder->type)) 549 560 index_mask |= (1 << entry); 550 561 entry++; 551 562 } 563 + drm_connector_list_iter_end(&conn_iter); 564 + 552 565 return index_mask; 553 566 }
+2 -3
drivers/gpu/drm/gma500/psb_intel_lvds.c
··· 521 521 */ 522 522 void psb_intel_lvds_destroy(struct drm_connector *connector) 523 523 { 524 + struct gma_connector *gma_connector = to_gma_connector(connector); 524 525 struct gma_encoder *gma_encoder = gma_attached_encoder(connector); 525 526 struct psb_intel_lvds_priv *lvds_priv = gma_encoder->dev_priv; 526 527 527 528 psb_intel_i2c_destroy(lvds_priv->ddc_bus); 528 - drm_connector_unregister(connector); 529 529 drm_connector_cleanup(connector); 530 - kfree(connector); 530 + kfree(gma_connector); 531 531 } 532 532 533 533 int psb_intel_lvds_set_property(struct drm_connector *connector, ··· 782 782 */ 783 783 out: 784 784 mutex_unlock(&dev->mode_config.mutex); 785 - drm_connector_register(connector); 786 785 return; 787 786 788 787 failed_find:
+3 -3
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 1542 1542 1543 1543 static void psb_intel_sdvo_destroy(struct drm_connector *connector) 1544 1544 { 1545 - drm_connector_unregister(connector); 1545 + struct gma_connector *gma_connector = to_gma_connector(connector); 1546 + 1546 1547 drm_connector_cleanup(connector); 1547 - kfree(connector); 1548 + kfree(gma_connector); 1548 1549 } 1549 1550 1550 1551 static bool psb_intel_sdvo_detect_hdmi_audio(struct drm_connector *connector) ··· 1933 1932 connector->base.restore = psb_intel_sdvo_restore; 1934 1933 1935 1934 gma_connector_attach_encoder(&connector->base, &encoder->base); 1936 - drm_connector_register(&connector->base.base); 1937 1935 } 1938 1936 1939 1937 static void
+35 -59
drivers/gpu/drm/gma500/psb_irq.c
··· 21 21 * inline functions 22 22 */ 23 23 24 - static inline u32 25 - psb_pipestat(int pipe) 24 + static inline u32 gma_pipestat(int pipe) 26 25 { 27 26 if (pipe == 0) 28 27 return PIPEASTAT; ··· 32 33 BUG(); 33 34 } 34 35 35 - static inline u32 36 - mid_pipe_event(int pipe) 36 + static inline u32 gma_pipe_event(int pipe) 37 37 { 38 38 if (pipe == 0) 39 39 return _PSB_PIPEA_EVENT_FLAG; ··· 43 45 BUG(); 44 46 } 45 47 46 - static inline u32 47 - mid_pipe_vsync(int pipe) 48 - { 49 - if (pipe == 0) 50 - return _PSB_VSYNC_PIPEA_FLAG; 51 - if (pipe == 1) 52 - return _PSB_VSYNC_PIPEB_FLAG; 53 - if (pipe == 2) 54 - return _MDFLD_PIPEC_VBLANK_FLAG; 55 - BUG(); 56 - } 57 - 58 - static inline u32 59 - mid_pipeconf(int pipe) 48 + static inline u32 gma_pipeconf(int pipe) 60 49 { 61 50 if (pipe == 0) 62 51 return PIPEACONF; ··· 54 69 BUG(); 55 70 } 56 71 57 - void 58 - psb_enable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask) 72 + void gma_enable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask) 59 73 { 60 74 if ((dev_priv->pipestat[pipe] & mask) != mask) { 61 - u32 reg = psb_pipestat(pipe); 75 + u32 reg = gma_pipestat(pipe); 62 76 dev_priv->pipestat[pipe] |= mask; 63 77 /* Enable the interrupt, clear any pending status */ 64 78 if (gma_power_begin(&dev_priv->dev, false)) { ··· 70 86 } 71 87 } 72 88 73 - void 74 - psb_disable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask) 89 + void gma_disable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask) 75 90 { 76 91 if ((dev_priv->pipestat[pipe] & mask) != 0) { 77 - u32 reg = psb_pipestat(pipe); 92 + u32 reg = gma_pipestat(pipe); 78 93 dev_priv->pipestat[pipe] &= ~mask; 79 94 if (gma_power_begin(&dev_priv->dev, false)) { 80 95 u32 writeVal = PSB_RVDC32(reg); ··· 88 105 /* 89 106 * Display controller interrupt handler for pipe event. 
90 107 */ 91 - static void mid_pipe_event_handler(struct drm_device *dev, int pipe) 108 + static void gma_pipe_event_handler(struct drm_device *dev, int pipe) 92 109 { 93 110 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 94 111 95 112 uint32_t pipe_stat_val = 0; 96 - uint32_t pipe_stat_reg = psb_pipestat(pipe); 113 + uint32_t pipe_stat_reg = gma_pipestat(pipe); 97 114 uint32_t pipe_enable = dev_priv->pipestat[pipe]; 98 115 uint32_t pipe_status = dev_priv->pipestat[pipe] >> 16; 99 116 uint32_t pipe_clear; ··· 143 160 /* 144 161 * Display controller interrupt handler. 145 162 */ 146 - static void psb_vdc_interrupt(struct drm_device *dev, uint32_t vdc_stat) 163 + static void gma_vdc_interrupt(struct drm_device *dev, uint32_t vdc_stat) 147 164 { 148 165 if (vdc_stat & _PSB_IRQ_ASLE) 149 166 psb_intel_opregion_asle_intr(dev); 150 167 151 168 if (vdc_stat & _PSB_VSYNC_PIPEA_FLAG) 152 - mid_pipe_event_handler(dev, 0); 169 + gma_pipe_event_handler(dev, 0); 153 170 154 171 if (vdc_stat & _PSB_VSYNC_PIPEB_FLAG) 155 - mid_pipe_event_handler(dev, 1); 172 + gma_pipe_event_handler(dev, 1); 156 173 } 157 174 158 175 /* 159 176 * SGX interrupt handler 160 177 */ 161 - static void psb_sgx_interrupt(struct drm_device *dev, u32 stat_1, u32 stat_2) 178 + static void gma_sgx_interrupt(struct drm_device *dev, u32 stat_1, u32 stat_2) 162 179 { 163 180 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 164 181 u32 val, addr; ··· 205 222 PSB_RSGX32(PSB_CR_EVENT_HOST_CLEAR2); 206 223 } 207 224 208 - static irqreturn_t psb_irq_handler(int irq, void *arg) 225 + static irqreturn_t gma_irq_handler(int irq, void *arg) 209 226 { 210 227 struct drm_device *dev = arg; 211 228 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); ··· 229 246 spin_unlock(&dev_priv->irqmask_lock); 230 247 231 248 if (dsp_int && gma_power_is_on(dev)) { 232 - psb_vdc_interrupt(dev, vdc_stat); 249 + gma_vdc_interrupt(dev, vdc_stat); 233 250 handled = 1; 234 251 } 235 252 236 253 if (sgx_int) 
{ 237 254 sgx_stat_1 = PSB_RSGX32(PSB_CR_EVENT_STATUS); 238 255 sgx_stat_2 = PSB_RSGX32(PSB_CR_EVENT_STATUS2); 239 - psb_sgx_interrupt(dev, sgx_stat_1, sgx_stat_2); 256 + gma_sgx_interrupt(dev, sgx_stat_1, sgx_stat_2); 240 257 handled = 1; 241 258 } 242 259 ··· 257 274 return IRQ_HANDLED; 258 275 } 259 276 260 - void psb_irq_preinstall(struct drm_device *dev) 277 + void gma_irq_preinstall(struct drm_device *dev) 261 278 { 262 279 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 263 280 unsigned long irqflags; ··· 286 303 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 287 304 } 288 305 289 - void psb_irq_postinstall(struct drm_device *dev) 306 + void gma_irq_postinstall(struct drm_device *dev) 290 307 { 291 308 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 292 309 unsigned long irqflags; ··· 305 322 306 323 for (i = 0; i < dev->num_crtcs; ++i) { 307 324 if (dev->vblank[i].enabled) 308 - psb_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 325 + gma_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 309 326 else 310 - psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 327 + gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 311 328 } 312 329 313 330 if (dev_priv->ops->hotplug_enable) ··· 316 333 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 317 334 } 318 335 319 - int psb_irq_install(struct drm_device *dev, unsigned int irq) 336 + int gma_irq_install(struct drm_device *dev, unsigned int irq) 320 337 { 321 338 int ret; 322 339 323 340 if (irq == IRQ_NOTCONNECTED) 324 341 return -ENOTCONN; 325 342 326 - psb_irq_preinstall(dev); 343 + gma_irq_preinstall(dev); 327 344 328 345 /* PCI devices require shared interrupts. 
*/ 329 - ret = request_irq(irq, psb_irq_handler, IRQF_SHARED, dev->driver->name, dev); 346 + ret = request_irq(irq, gma_irq_handler, IRQF_SHARED, dev->driver->name, dev); 330 347 if (ret) 331 348 return ret; 332 349 333 - psb_irq_postinstall(dev); 350 + gma_irq_postinstall(dev); 334 351 335 352 return 0; 336 353 } 337 354 338 - void psb_irq_uninstall(struct drm_device *dev) 355 + void gma_irq_uninstall(struct drm_device *dev) 339 356 { 340 357 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 341 358 struct pci_dev *pdev = to_pci_dev(dev->dev); ··· 351 368 352 369 for (i = 0; i < dev->num_crtcs; ++i) { 353 370 if (dev->vblank[i].enabled) 354 - psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 371 + gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); 355 372 } 356 373 357 374 dev_priv->vdc_irq_mask &= _PSB_IRQ_SGX_FLAG | ··· 371 388 free_irq(pdev->irq, dev); 372 389 } 373 390 374 - /* 375 - * It is used to enable VBLANK interrupt 376 - */ 377 - int psb_enable_vblank(struct drm_crtc *crtc) 391 + int gma_crtc_enable_vblank(struct drm_crtc *crtc) 378 392 { 379 393 struct drm_device *dev = crtc->dev; 380 394 unsigned int pipe = crtc->index; 381 395 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 382 396 unsigned long irqflags; 383 397 uint32_t reg_val = 0; 384 - uint32_t pipeconf_reg = mid_pipeconf(pipe); 398 + uint32_t pipeconf_reg = gma_pipeconf(pipe); 385 399 386 400 if (gma_power_begin(dev, false)) { 387 401 reg_val = REG_READ(pipeconf_reg); ··· 397 417 398 418 PSB_WVDC32(~dev_priv->vdc_irq_mask, PSB_INT_MASK_R); 399 419 PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R); 400 - psb_enable_pipestat(dev_priv, pipe, PIPE_VBLANK_INTERRUPT_ENABLE); 420 + gma_enable_pipestat(dev_priv, pipe, PIPE_VBLANK_INTERRUPT_ENABLE); 401 421 402 422 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 403 423 404 424 return 0; 405 425 } 406 426 407 - /* 408 - * It is used to disable VBLANK interrupt 409 - */ 410 - void 
psb_disable_vblank(struct drm_crtc *crtc) 427 + void gma_crtc_disable_vblank(struct drm_crtc *crtc) 411 428 { 412 429 struct drm_device *dev = crtc->dev; 413 430 unsigned int pipe = crtc->index; ··· 420 443 421 444 PSB_WVDC32(~dev_priv->vdc_irq_mask, PSB_INT_MASK_R); 422 445 PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R); 423 - psb_disable_pipestat(dev_priv, pipe, PIPE_VBLANK_INTERRUPT_ENABLE); 446 + gma_disable_pipestat(dev_priv, pipe, PIPE_VBLANK_INTERRUPT_ENABLE); 424 447 425 448 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags); 426 449 } ··· 428 451 /* Called from drm generic code, passed a 'crtc', which 429 452 * we use as a pipe index 430 453 */ 431 - u32 psb_get_vblank_counter(struct drm_crtc *crtc) 454 + u32 gma_crtc_get_vblank_counter(struct drm_crtc *crtc) 432 455 { 433 456 struct drm_device *dev = crtc->dev; 434 457 unsigned int pipe = crtc->index; ··· 463 486 464 487 if (!(reg_val & PIPEACONF_ENABLE)) { 465 488 dev_err(dev->dev, "trying to get vblank count for disabled pipe %u\n", 466 - pipe); 467 - goto psb_get_vblank_counter_exit; 489 + pipe); 490 + goto err_gma_power_end; 468 491 } 469 492 470 493 /* ··· 483 506 484 507 count = (high1 << 8) | low; 485 508 486 - psb_get_vblank_counter_exit: 487 - 509 + err_gma_power_end: 488 510 gma_power_end(dev); 489 511 490 512 return count;
+9 -10
drivers/gpu/drm/gma500/psb_irq.h
··· 15 15 struct drm_crtc; 16 16 struct drm_device; 17 17 18 - bool sysirq_init(struct drm_device *dev); 19 - void sysirq_uninit(struct drm_device *dev); 18 + void gma_irq_preinstall(struct drm_device *dev); 19 + void gma_irq_postinstall(struct drm_device *dev); 20 + int gma_irq_install(struct drm_device *dev, unsigned int irq); 21 + void gma_irq_uninstall(struct drm_device *dev); 20 22 21 - void psb_irq_preinstall(struct drm_device *dev); 22 - void psb_irq_postinstall(struct drm_device *dev); 23 - int psb_irq_install(struct drm_device *dev, unsigned int irq); 24 - void psb_irq_uninstall(struct drm_device *dev); 25 - 26 - int psb_enable_vblank(struct drm_crtc *crtc); 27 - void psb_disable_vblank(struct drm_crtc *crtc); 28 - u32 psb_get_vblank_counter(struct drm_crtc *crtc); 23 + int gma_crtc_enable_vblank(struct drm_crtc *crtc); 24 + void gma_crtc_disable_vblank(struct drm_crtc *crtc); 25 + u32 gma_crtc_get_vblank_counter(struct drm_crtc *crtc); 26 + void gma_enable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask); 27 + void gma_disable_pipestat(struct drm_psb_private *dev_priv, int pipe, u32 mask); 29 28 30 29 #endif /* _PSB_IRQ_H_ */
+2 -1
drivers/gpu/drm/i915/gem/i915_gem_clflush.c
··· 108 108 trace_i915_gem_object_clflush(obj); 109 109 110 110 clflush = NULL; 111 - if (!(flags & I915_CLFLUSH_SYNC)) 111 + if (!(flags & I915_CLFLUSH_SYNC) && 112 + dma_resv_reserve_fences(obj->base.resv, 1) == 0) 112 113 clflush = clflush_work_create(obj); 113 114 if (clflush) { 114 115 i915_sw_fence_await_reservation(&clflush->base.chain,
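The clflush hunk above shows the pattern repeated across this merge: dma_resv_reserve_shared() is replaced by dma_resv_reserve_fences(), and a slot must now be reserved before any fence, exclusive or shared, is added. A minimal userspace sketch of that contract (toy types and names, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model (illustrative only, not kernel code) of the new dma_resv
 * contract: every fence consumes a slot that must have been reserved
 * beforehand, so the step that can fail with -ENOMEM sits at a point
 * where the caller can still unwind cleanly. */
struct toy_resv {
	size_t max_fences;	/* slots reserved so far */
	size_t num_fences;	/* fences actually added */
};

static int toy_resv_reserve_fences(struct toy_resv *r, size_t n)
{
	r->max_fences += n;	/* the real helper may reallocate and fail */
	return 0;
}

/* Adding never allocates; it only consumes a previously reserved slot. */
static void toy_resv_add_fence(struct toy_resv *r)
{
	assert(r->num_fences < r->max_fences);
	r->num_fences++;
}
```

This is why, in the i915 and msm hunks below, the reserve call moves ahead of the write/read distinction: with a single fence list, readers and writers alike need a slot.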
+4 -6
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 998 998 } 999 999 } 1000 1000 1001 - if (!(ev->flags & EXEC_OBJECT_WRITE)) { 1002 - err = dma_resv_reserve_shared(vma->obj->base.resv, 1); 1003 - if (err) 1004 - return err; 1005 - } 1001 + err = dma_resv_reserve_fences(vma->obj->base.resv, 1); 1002 + if (err) 1003 + return err; 1006 1004 1007 1005 GEM_BUG_ON(drm_mm_node_allocated(&vma->node) && 1008 1006 eb_vma_misplaced(&eb->exec[i], vma, ev->flags)); ··· 2301 2303 if (IS_ERR(batch)) 2302 2304 return PTR_ERR(batch); 2303 2305 2304 - err = dma_resv_reserve_shared(shadow->obj->base.resv, 1); 2306 + err = dma_resv_reserve_fences(shadow->obj->base.resv, 1); 2305 2307 if (err) 2306 2308 return err; 2307 2309
+2 -2
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
··· 283 283 i915_tt->is_shmem = true; 284 284 } 285 285 286 - ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching); 286 + ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, 0); 287 287 if (ret) 288 288 goto err_free; 289 289 ··· 936 936 bo->priority = I915_TTM_PRIO_HAS_PAGES; 937 937 } 938 938 939 - ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); 939 + ttm_bo_move_to_lru_tail(bo); 940 940 spin_unlock(&bo->bdev->lru_lock); 941 941 } 942 942
+5 -1
drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
··· 611 611 assert_object_held(src); 612 612 i915_deps_init(&deps, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN); 613 613 614 - ret = dma_resv_reserve_shared(src_bo->base.resv, 1); 614 + ret = dma_resv_reserve_fences(src_bo->base.resv, 1); 615 + if (ret) 616 + return ret; 617 + 618 + ret = dma_resv_reserve_fences(dst_bo->base.resv, 1); 615 619 if (ret) 616 620 return ret; 617 621
+4 -1
drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
··· 216 216 i915_gem_object_is_lmem(obj), 217 217 0xdeadbeaf, &rq); 218 218 if (rq) { 219 - dma_resv_add_excl_fence(obj->base.resv, &rq->fence); 219 + err = dma_resv_reserve_fences(obj->base.resv, 1); 220 + if (!err) 221 + dma_resv_add_excl_fence(obj->base.resv, 222 + &rq->fence); 220 223 i915_gem_object_set_moving_fence(obj, &rq->fence); 221 224 i915_request_put(rq); 222 225 }
+8 -2
drivers/gpu/drm/i915/i915_vma.c
··· 1819 1819 intel_frontbuffer_put(front); 1820 1820 } 1821 1821 1822 + if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { 1823 + err = dma_resv_reserve_fences(vma->obj->base.resv, 1); 1824 + if (unlikely(err)) 1825 + return err; 1826 + } 1827 + 1822 1828 if (fence) { 1823 1829 dma_resv_add_excl_fence(vma->obj->base.resv, fence); 1824 1830 obj->write_domain = I915_GEM_DOMAIN_RENDER; ··· 1832 1826 } 1833 1827 } else { 1834 1828 if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { 1835 - err = dma_resv_reserve_shared(vma->obj->base.resv, 1); 1829 + err = dma_resv_reserve_fences(vma->obj->base.resv, 1); 1836 1830 if (unlikely(err)) 1837 1831 return err; 1838 1832 } ··· 2050 2044 if (!obj->mm.rsgt) 2051 2045 return -EBUSY; 2052 2046 2053 - err = dma_resv_reserve_shared(obj->base.resv, 1); 2047 + err = dma_resv_reserve_fences(obj->base.resv, 1); 2054 2048 if (err) 2055 2049 return -EBUSY; 2056 2050
+7
drivers/gpu/drm/i915/selftests/intel_memory_region.c
··· 1043 1043 } 1044 1044 1045 1045 i915_gem_object_lock(obj, NULL); 1046 + 1047 + err = dma_resv_reserve_fences(obj->base.resv, 1); 1048 + if (err) { 1049 + i915_gem_object_unlock(obj); 1050 + goto out_put; 1051 + } 1052 + 1046 1053 /* Put the pages into a known state -- from the gpu for added fun */ 1047 1054 intel_engine_pm_get(engine); 1048 1055 err = intel_context_migrate_clear(engine->gt->migrate.context, NULL,
+1 -2
drivers/gpu/drm/imx/imx-ldb.c
··· 150 150 if (imx_ldb_ch->mode_valid) { 151 151 struct drm_display_mode *mode; 152 152 153 - mode = drm_mode_create(connector->dev); 153 + mode = drm_mode_duplicate(connector->dev, &imx_ldb_ch->mode); 154 154 if (!mode) 155 155 return -EINVAL; 156 - drm_mode_copy(mode, &imx_ldb_ch->mode); 157 156 mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 158 157 drm_mode_probed_add(connector, mode); 159 158 num_modes++;
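The imx-ldb hunk is one of many conversions in this merge from a drm_mode_create() plus drm_mode_copy() pair to a single drm_mode_duplicate() call, which allocates and copies in one step so the copy cannot be forgotten. A toy illustration of that helper shape (hypothetical struct, not the real drm_display_mode):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for drm_display_mode, for illustration only. */
struct toy_mode {
	int hdisplay;
	int vdisplay;
};

/* Allocate and copy in one helper, mirroring drm_mode_duplicate():
 * callers get either a fully initialized duplicate or NULL. */
static struct toy_mode *toy_mode_duplicate(const struct toy_mode *src)
{
	struct toy_mode *m = malloc(sizeof(*m));

	if (!m)
		return NULL;
	memcpy(m, src, sizeof(*m));
	return m;
}
```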
+23 -11
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
··· 226 226 } 227 227 } 228 228 229 + static void ingenic_drm_bridge_atomic_enable(struct drm_bridge *bridge, 230 + struct drm_bridge_state *old_bridge_state) 231 + { 232 + struct ingenic_drm *priv = drm_device_get_priv(bridge->dev); 233 + 234 + regmap_write(priv->map, JZ_REG_LCD_STATE, 0); 235 + 236 + regmap_update_bits(priv->map, JZ_REG_LCD_CTRL, 237 + JZ_LCD_CTRL_ENABLE | JZ_LCD_CTRL_DISABLE, 238 + JZ_LCD_CTRL_ENABLE); 239 + } 240 + 229 241 static void ingenic_drm_crtc_atomic_enable(struct drm_crtc *crtc, 230 242 struct drm_atomic_state *state) 231 243 { ··· 249 237 if (WARN_ON(IS_ERR(priv_state))) 250 238 return; 251 239 252 - regmap_write(priv->map, JZ_REG_LCD_STATE, 0); 253 - 254 240 /* Set addresses of our DMA descriptor chains */ 255 241 next_id = priv_state->use_palette ? HWDESC_PALETTE : 0; 256 242 regmap_write(priv->map, JZ_REG_LCD_DA0, dma_hwdesc_addr(priv, next_id)); 257 243 regmap_write(priv->map, JZ_REG_LCD_DA1, dma_hwdesc_addr(priv, 1)); 258 244 259 - regmap_update_bits(priv->map, JZ_REG_LCD_CTRL, 260 - JZ_LCD_CTRL_ENABLE | JZ_LCD_CTRL_DISABLE, 261 - JZ_LCD_CTRL_ENABLE); 262 - 263 245 drm_crtc_vblank_on(crtc); 264 246 } 265 247 266 - static void ingenic_drm_crtc_atomic_disable(struct drm_crtc *crtc, 267 - struct drm_atomic_state *state) 248 + static void ingenic_drm_bridge_atomic_disable(struct drm_bridge *bridge, 249 + struct drm_bridge_state *old_bridge_state) 268 250 { 269 - struct ingenic_drm *priv = drm_crtc_get_priv(crtc); 251 + struct ingenic_drm *priv = drm_device_get_priv(bridge->dev); 270 252 unsigned int var; 271 - 272 - drm_crtc_vblank_off(crtc); 273 253 274 254 regmap_update_bits(priv->map, JZ_REG_LCD_CTRL, 275 255 JZ_LCD_CTRL_DISABLE, JZ_LCD_CTRL_DISABLE); ··· 269 265 regmap_read_poll_timeout(priv->map, JZ_REG_LCD_STATE, var, 270 266 var & JZ_LCD_STATE_DISABLED, 271 267 1000, 0); 268 + } 269 + 270 + static void ingenic_drm_crtc_atomic_disable(struct drm_crtc *crtc, 271 + struct drm_atomic_state *state) 272 + { 273 + drm_crtc_vblank_off(crtc); 272 274 } 273 275 274 276 static void ingenic_drm_crtc_update_timings(struct ingenic_drm *priv, ··· 978 968 979 969 static const struct drm_bridge_funcs ingenic_drm_bridge_funcs = { 980 970 .attach = ingenic_drm_bridge_attach, 971 + .atomic_enable = ingenic_drm_bridge_atomic_enable, 972 + .atomic_disable = ingenic_drm_bridge_atomic_disable, 981 973 .atomic_check = ingenic_drm_bridge_atomic_check, 982 974 .atomic_reset = drm_atomic_helper_bridge_reset, 983 975 .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+4 -6
drivers/gpu/drm/lima/lima_gem.c
··· 257 257 static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, 258 258 bool write, bool explicit) 259 259 { 260 - int err = 0; 260 + int err; 261 261 262 - if (!write) { 263 - err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); 264 - if (err) 265 - return err; 266 - } 262 + err = dma_resv_reserve_fences(lima_bo_resv(bo), 1); 263 + if (err) 264 + return err; 267 265 268 266 /* explicit sync use user passed dep fence */ 269 267 if (explicit)
+5 -38
drivers/gpu/drm/mcde/mcde_dsi.c
··· 19 19 #include <drm/drm_mipi_dsi.h> 20 20 #include <drm/drm_modeset_helper_vtables.h> 21 21 #include <drm/drm_of.h> 22 - #include <drm/drm_panel.h> 23 22 #include <drm/drm_print.h> 24 23 #include <drm/drm_probe_helper.h> 25 24 ··· 38 39 struct device *dev; 39 40 struct mcde *mcde; 40 41 struct drm_bridge bridge; 41 - struct drm_panel *panel; 42 42 struct drm_bridge *bridge_out; 43 43 struct mipi_dsi_host dsi_host; 44 44 struct mipi_dsi_device *mdsi; ··· 1071 1073 struct drm_device *drm = data; 1072 1074 struct mcde *mcde = to_mcde(drm); 1073 1075 struct mcde_dsi *d = dev_get_drvdata(dev); 1074 - struct device_node *child; 1075 - struct drm_panel *panel = NULL; 1076 - struct drm_bridge *bridge = NULL; 1076 + struct drm_bridge *bridge; 1077 1077 1078 1078 if (!of_get_available_child_count(dev->of_node)) { 1079 1079 dev_info(dev, "unused DSI interface\n"); ··· 1096 1100 return PTR_ERR(d->lp_clk); 1097 1101 } 1098 1102 1099 - /* Look for a panel as a child to this node */ 1100 - for_each_available_child_of_node(dev->of_node, child) { 1101 - panel = of_drm_find_panel(child); 1102 - if (IS_ERR(panel)) { 1103 - dev_err(dev, "failed to find panel try bridge (%ld)\n", 1104 - PTR_ERR(panel)); 1105 - panel = NULL; 1106 - 1107 - bridge = of_drm_find_bridge(child); 1108 - if (!bridge) { 1109 - dev_err(dev, "failed to find bridge\n"); 1110 - return -EINVAL; 1111 - } 1112 - } 1113 - } 1114 - if (panel) { 1115 - bridge = drm_panel_bridge_add_typed(panel, 1116 - DRM_MODE_CONNECTOR_DSI); 1117 - if (IS_ERR(bridge)) { 1118 - dev_err(dev, "error adding panel bridge\n"); 1119 - return PTR_ERR(bridge); 1120 - } 1121 - dev_info(dev, "connected to panel\n"); 1122 - d->panel = panel; 1123 - } else if (bridge) { 1124 - /* TODO: AV8100 HDMI encoder goes here for example */ 1125 - dev_info(dev, "connected to non-panel bridge (unsupported)\n"); 1126 - return -ENODEV; 1127 - } else { 1128 - dev_err(dev, "no panel or bridge\n"); 1129 - return -ENODEV; 1103 + bridge = devm_drm_of_get_bridge(dev, dev->of_node, 0, 0); 1104 + if (IS_ERR(bridge)) { 1105 + dev_err(dev, "error to get bridge\n"); 1106 + return PTR_ERR(bridge); 1130 1107 } 1131 1108 1132 1109 d->bridge_out = bridge; ··· 1122 1153 { 1123 1154 struct mcde_dsi *d = dev_get_drvdata(dev); 1124 1155 1125 - if (d->panel) 1126 - drm_panel_bridge_remove(d->bridge_out); 1127 1156 regmap_update_bits(d->prcmu, PRCM_DSI_SW_RESET, 1128 1157 PRCM_DSI_SW_RESET_DSI0_SW_RESETN, 0); 1129 1158 }
+12
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 500 500 DRM_WARN("HFP + HBP less than d-phy, FPS will under 60Hz\n"); 501 501 } 502 502 503 + if ((dsi->mode_flags & MIPI_DSI_HS_PKT_END_ALIGNED) && 504 + (dsi->lanes == 4)) { 505 + horizontal_sync_active_byte = 506 + roundup(horizontal_sync_active_byte, dsi->lanes) - 2; 507 + horizontal_frontporch_byte = 508 + roundup(horizontal_frontporch_byte, dsi->lanes) - 2; 509 + horizontal_backporch_byte = 510 + roundup(horizontal_backporch_byte, dsi->lanes) - 2; 511 + horizontal_backporch_byte -= 512 + (vm->hactive * dsi_tmp_buf_bpp + 2) % dsi->lanes; 513 + } 514 + 503 515 writel(horizontal_sync_active_byte, dsi->regs + DSI_HSA_WC); 504 516 writel(horizontal_backporch_byte, dsi->regs + DSI_HBP_WC); 505 517 writel(horizontal_frontporch_byte, dsi->regs + DSI_HFP_WC);
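With MIPI_DSI_HS_PKT_END_ALIGNED on a 4-lane link, the mtk_dsi hunk above first rounds each word count up to a multiple of the lane count, then subtracts the fixed packet overhead. A standalone sketch of the roundup() arithmetic involved (the test values are illustrative, not derived from hardware timings):

```c
#include <assert.h>

/* Userspace equivalent of the kernel's roundup() for positive
 * multiples: align x up to the next multiple, so that high-speed
 * packets can end on a lane boundary before overhead is deducted. */
static unsigned int roundup_to(unsigned int x, unsigned int multiple)
{
	return ((x + multiple - 1) / multiple) * multiple;
}
```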
+1 -1
drivers/gpu/drm/meson/meson_drv.c
··· 168 168 }, 169 169 .attrs = (const struct soc_device_attribute []) { 170 170 { .soc_id = "GXL (S805*)", }, 171 - { /* sentinel */ }, 171 + { /* sentinel */ } 172 172 } 173 173 }, 174 174 };
+8 -10
drivers/gpu/drm/msm/msm_gem_submit.c
··· 320 320 struct drm_gem_object *obj = &submit->bos[i].obj->base; 321 321 bool write = submit->bos[i].flags & MSM_SUBMIT_BO_WRITE; 322 322 323 - if (!write) { 324 - /* NOTE: _reserve_shared() must happen before 325 - * _add_shared_fence(), which makes this a slightly 326 - * strange place to call it. OTOH this is a 327 - * convenient can-fail point to hook it in. 328 - */ 329 - ret = dma_resv_reserve_shared(obj->resv, 1); 330 - if (ret) 331 - return ret; 332 - } 323 + /* NOTE: _reserve_shared() must happen before 324 + * _add_shared_fence(), which makes this a slightly 325 + * strange place to call it. OTOH this is a 326 + * convenient can-fail point to hook it in. 327 + */ 328 + ret = dma_resv_reserve_fences(obj->resv, 1); 329 + if (ret) 330 + return ret; 333 331 334 332 /* exclusive fences must be ordered */ 335 333 if (no_implicit && !write)
+3 -3
drivers/gpu/drm/nouveau/dispnv50/atom.h
··· 160 160 static inline struct drm_encoder * 161 161 nv50_head_atom_get_encoder(struct nv50_head_atom *atom) 162 162 { 163 - struct drm_encoder *encoder = NULL; 163 + struct drm_encoder *encoder; 164 164 165 165 /* We only ever have a single encoder */ 166 166 drm_for_each_encoder_mask(encoder, atom->state.crtc->dev, 167 167 atom->state.encoder_mask) 168 - break; 168 + return encoder; 169 169 170 - return encoder; 170 + return NULL; 171 171 } 172 172 173 173 #define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state)
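The atom.h hunk fixes an invalid-iterator-after-loop bug: when a list_for_each_entry-style walk (here drm_for_each_encoder_mask) finds no match, the iterator ends up pointing at the list head's container rather than staying NULL, so it must not be returned after the loop. The corrected shape, returning from inside the loop and NULL past it, can be sketched outside the kernel on toy data (hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch (not the DRM code) of the pattern fixed above: return
 * the match while the iterator is known valid, and NULL only when
 * the walk completes without one. */
static const int *find_first_ge(const int *arr, size_t n, int min)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (arr[i] >= min)
			return &arr[i];	/* match: still a valid element */
	}
	return NULL;	/* no match: caller sees NULL, never a stale pointer */
}
```

The NULL checks added in the crc.c hunk below are the caller-side half of the same fix.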
+22 -5
drivers/gpu/drm/nouveau/dispnv50/crc.c
··· 390 390 struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state); 391 391 struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state); 392 392 struct nv50_outp_atom *outp_atom; 393 - struct nouveau_encoder *outp = 394 - nv50_real_outp(nv50_head_atom_get_encoder(armh)); 395 - struct drm_encoder *encoder = &outp->base.base; 393 + struct nouveau_encoder *outp; 394 + struct drm_encoder *encoder, *enc; 395 + 396 + enc = nv50_head_atom_get_encoder(armh); 397 + if (!enc) 398 + continue; 399 + 400 + outp = nv50_real_outp(enc); 401 + if (!outp) 402 + continue; 403 + 404 + encoder = &outp->base.base; 396 405 397 406 if (!asyh->clr.crc) 398 407 continue; ··· 452 443 struct drm_device *dev = crtc->dev; 453 444 struct nv50_crc *crc = &head->crc; 454 445 const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc; 455 - struct nouveau_encoder *outp = 456 - nv50_real_outp(nv50_head_atom_get_encoder(asyh)); 446 + struct nouveau_encoder *outp; 447 + struct drm_encoder *encoder; 448 + 449 + encoder = nv50_head_atom_get_encoder(asyh); 450 + if (!encoder) 451 + return; 452 + 453 + outp = nv50_real_outp(encoder); 454 + if (!outp) 455 + return; 457 456 458 457 func->set_src(head, outp->or, nv50_crc_source_type(outp, asyh->crc.src), 459 458 &crc->ctx[crc->ctx_idx]);
+5 -9
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 536 536 struct nouveau_bo *nvbo; 537 537 struct nv50_head_atom *asyh; 538 538 struct nv50_wndw_ctxdma *ctxdma; 539 - struct dma_resv_iter cursor; 540 - struct dma_fence *fence; 541 539 int ret; 542 540 543 541 NV_ATOMIC(drm, "%s prepare: %p\n", plane->name, fb); ··· 558 560 asyw->image.handle[0] = ctxdma->object.handle; 559 561 } 560 562 561 - dma_resv_iter_begin(&cursor, nvbo->bo.base.resv, false); 562 - dma_resv_for_each_fence_unlocked(&cursor, fence) { 563 - /* TODO: We only use the first writer here */ 564 - asyw->state.fence = dma_fence_get(fence); 565 - break; 566 - } 567 - dma_resv_iter_end(&cursor); 563 + ret = dma_resv_get_singleton(nvbo->bo.base.resv, false, 564 + &asyw->state.fence); 565 + if (ret) 566 + return ret; 567 + 568 568 asyw->image.offset[0] = nvbo->offset; 569 569 570 570 if (wndw->func->prepare) {
+8 -1
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 959 959 { 960 960 struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 961 961 struct drm_device *dev = drm->dev; 962 - struct dma_fence *fence = dma_resv_excl_fence(bo->base.resv); 962 + struct dma_fence *fence; 963 + int ret; 964 + 965 + /* TODO: This is actually a memory management dependency */ 966 + ret = dma_resv_get_singleton(bo->base.resv, false, &fence); 967 + if (ret) 968 + dma_resv_wait_timeout(bo->base.resv, false, false, 969 + MAX_SCHEDULE_TIMEOUT); 963 970 964 971 nv10_bo_put_tile_region(dev, *old_tile, fence); 965 972 *old_tile = new_tile;
+3 -5
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 346 346 struct dma_resv *resv = nvbo->bo.base.resv; 347 347 int i, ret; 348 348 349 - if (!exclusive) { 350 - ret = dma_resv_reserve_shared(resv, 1); 351 - if (ret) 352 - return ret; 353 - } 349 + ret = dma_resv_reserve_fences(resv, 1); 350 + if (ret) 351 + return ret; 354 352 355 353 /* Waiting for the exclusive fence first causes performance regressions 356 354 * under some circumstances. So manually wait for the shared ones first.
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 2935 2935 /* switch mmio to cpu's native endianness */ 2936 2936 if (!nvkm_device_endianness(device)) { 2937 2937 nvdev_error(device, 2938 - "Couldn't switch GPU to CPUs endianess\n"); 2938 + "Couldn't switch GPU to CPUs endianness\n"); 2939 2939 ret = -ENOSYS; 2940 2940 goto done; 2941 2941 }
+4 -2
drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
··· 135 135 136 136 list_for_each_entry_from_reverse(cstate, &pstate->list, head) { 137 137 if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp)) 138 - break; 138 + return cstate; 139 139 } 140 140 141 - return cstate; 141 + return NULL; 142 142 } 143 143 144 144 static struct nvkm_cstate * ··· 169 169 if (!list_empty(&pstate->list)) { 170 170 cstate = nvkm_cstate_get(clk, pstate, cstatei); 171 171 cstate = nvkm_cstate_find_best(clk, pstate, cstate); 172 + if (!cstate) 173 + return -EINVAL; 172 174 } else { 173 175 cstate = &pstate->base; 174 176 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c
··· 313 313 struct nv50_instobj *iobj = nv50_instobj(memory); 314 314 struct nvkm_instmem *imem = &iobj->imem->base; 315 315 struct nvkm_vma *bar; 316 - void *map = map; 316 + void *map; 317 317 318 318 mutex_lock(&imem->mutex); 319 319 if (likely(iobj->lru.next))
+113 -85
drivers/gpu/drm/omapdrm/omap_gem.c
··· 38 38 /** roll applied when mapping to DMM */ 39 39 u32 roll; 40 40 41 - /** protects dma_addr_cnt, block, pages, dma_addrs and vaddr */ 41 + /** protects pin_cnt, block, pages, dma_addrs and vaddr */ 42 42 struct mutex lock; 43 43 44 44 /** ··· 50 50 * - buffers imported from dmabuf (with the OMAP_BO_MEM_DMABUF flag set) 51 51 * if they are physically contiguous (when sgt->orig_nents == 1) 52 52 * 53 - * - buffers mapped through the TILER when dma_addr_cnt is not zero, in 54 - * which case the DMA address points to the TILER aperture 53 + * - buffers mapped through the TILER when pin_cnt is not zero, in which 54 + * case the DMA address points to the TILER aperture 55 55 * 56 56 * Physically contiguous buffers have their DMA address equal to the 57 57 * physical address as we don't remap those buffers through the TILER. 58 58 * 59 59 * Buffers mapped to the TILER have their DMA address pointing to the 60 - * TILER aperture. As TILER mappings are refcounted (through 61 - * dma_addr_cnt) the DMA address must be accessed through omap_gem_pin() 62 - * to ensure that the mapping won't disappear unexpectedly. References 63 - * must be released with omap_gem_unpin(). 60 + * TILER aperture. As TILER mappings are refcounted (through pin_cnt) 61 + * the DMA address must be accessed through omap_gem_pin() to ensure 62 + * that the mapping won't disappear unexpectedly. References must be 63 + * released with omap_gem_unpin(). 
64 64 */ 65 65 dma_addr_t dma_addr; 66 66 67 67 /** 68 - * # of users of dma_addr 68 + * # of users 69 69 */ 70 - refcount_t dma_addr_cnt; 70 + refcount_t pin_cnt; 71 71 72 72 /** 73 73 * If the buffer has been imported from a dmabuf the OMAP_DB_DMABUF flag ··· 750 750 } 751 751 } 752 752 753 + static int omap_gem_pin_tiler(struct drm_gem_object *obj) 754 + { 755 + struct omap_gem_object *omap_obj = to_omap_bo(obj); 756 + u32 npages = obj->size >> PAGE_SHIFT; 757 + enum tiler_fmt fmt = gem2fmt(omap_obj->flags); 758 + struct tiler_block *block; 759 + int ret; 760 + 761 + BUG_ON(omap_obj->block); 762 + 763 + if (omap_obj->flags & OMAP_BO_TILED_MASK) { 764 + block = tiler_reserve_2d(fmt, omap_obj->width, omap_obj->height, 765 + PAGE_SIZE); 766 + } else { 767 + block = tiler_reserve_1d(obj->size); 768 + } 769 + 770 + if (IS_ERR(block)) { 771 + ret = PTR_ERR(block); 772 + dev_err(obj->dev->dev, "could not remap: %d (%d)\n", ret, fmt); 773 + goto fail; 774 + } 775 + 776 + /* TODO: enable async refill.. 
*/ 777 + ret = tiler_pin(block, omap_obj->pages, npages, omap_obj->roll, true); 778 + if (ret) { 779 + tiler_release(block); 780 + dev_err(obj->dev->dev, "could not pin: %d\n", ret); 781 + goto fail; 782 + } 783 + 784 + omap_obj->dma_addr = tiler_ssptr(block); 785 + omap_obj->block = block; 786 + 787 + DBG("got dma address: %pad", &omap_obj->dma_addr); 788 + 789 + fail: 790 + return ret; 791 + } 792 + 753 793 /** 754 794 * omap_gem_pin() - Pin a GEM object in memory 755 795 * @obj: the GEM object ··· 812 772 813 773 mutex_lock(&omap_obj->lock); 814 774 815 - if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) { 816 - if (refcount_read(&omap_obj->dma_addr_cnt) == 0) { 817 - u32 npages = obj->size >> PAGE_SHIFT; 818 - enum tiler_fmt fmt = gem2fmt(omap_obj->flags); 819 - struct tiler_block *block; 775 + if (!omap_gem_is_contiguous(omap_obj)) { 776 + if (refcount_read(&omap_obj->pin_cnt) == 0) { 820 777 821 - BUG_ON(omap_obj->block); 822 - 823 - refcount_set(&omap_obj->dma_addr_cnt, 1); 778 + refcount_set(&omap_obj->pin_cnt, 1); 824 779 825 780 ret = omap_gem_attach_pages(obj); 826 781 if (ret) 827 782 goto fail; 828 783 829 - if (omap_obj->flags & OMAP_BO_TILED_MASK) { 830 - block = tiler_reserve_2d(fmt, 831 - omap_obj->width, 832 - omap_obj->height, PAGE_SIZE); 833 - } else { 834 - block = tiler_reserve_1d(obj->size); 784 + if (omap_obj->flags & OMAP_BO_SCANOUT) { 785 + if (priv->has_dmm) { 786 + ret = omap_gem_pin_tiler(obj); 787 + if (ret) 788 + goto fail; 789 + } 835 790 } 836 - 837 - if (IS_ERR(block)) { 838 - ret = PTR_ERR(block); 839 - dev_err(obj->dev->dev, 840 - "could not remap: %d (%d)\n", ret, fmt); 841 - goto fail; 842 - } 843 - 844 - /* TODO: enable async refill.. 
*/ 845 - ret = tiler_pin(block, omap_obj->pages, npages, 846 - omap_obj->roll, true); 847 - if (ret) { 848 - tiler_release(block); 849 - dev_err(obj->dev->dev, 850 - "could not pin: %d\n", ret); 851 - goto fail; 852 - } 853 - 854 - omap_obj->dma_addr = tiler_ssptr(block); 855 - omap_obj->block = block; 856 - 857 - DBG("got dma address: %pad", &omap_obj->dma_addr); 858 791 } else { 859 - refcount_inc(&omap_obj->dma_addr_cnt); 792 + refcount_inc(&omap_obj->pin_cnt); 860 793 } 861 - 862 - if (dma_addr) 863 - *dma_addr = omap_obj->dma_addr; 864 - } else if (omap_gem_is_contiguous(omap_obj)) { 865 - if (dma_addr) 866 - *dma_addr = omap_obj->dma_addr; 867 - } else { 868 - ret = -EINVAL; 869 - goto fail; 870 794 } 795 + 796 + if (dma_addr) 797 + *dma_addr = omap_obj->dma_addr; 871 798 872 799 fail: 873 800 mutex_unlock(&omap_obj->lock); ··· 854 847 struct omap_gem_object *omap_obj = to_omap_bo(obj); 855 848 int ret; 856 849 857 - if (omap_gem_is_contiguous(omap_obj) || !priv->has_dmm) 850 + if (omap_gem_is_contiguous(omap_obj)) 858 851 return; 859 852 860 - if (refcount_dec_and_test(&omap_obj->dma_addr_cnt)) { 853 + if (refcount_dec_and_test(&omap_obj->pin_cnt)) { 861 854 if (omap_obj->sgt) { 862 855 sg_free_table(omap_obj->sgt); 863 856 kfree(omap_obj->sgt); 864 857 omap_obj->sgt = NULL; 865 858 } 866 - ret = tiler_unpin(omap_obj->block); 867 - if (ret) { 868 - dev_err(obj->dev->dev, 869 - "could not unpin pages: %d\n", ret); 859 + if (!(omap_obj->flags & OMAP_BO_SCANOUT)) 860 + return; 861 + if (priv->has_dmm) { 862 + ret = tiler_unpin(omap_obj->block); 863 + if (ret) { 864 + dev_err(obj->dev->dev, 865 + "could not unpin pages: %d\n", ret); 866 + } 867 + ret = tiler_release(omap_obj->block); 868 + if (ret) { 869 + dev_err(obj->dev->dev, 870 + "could not release unmap: %d\n", ret); 871 + } 872 + omap_obj->dma_addr = 0; 873 + omap_obj->block = NULL; 870 874 } 871 - ret = tiler_release(omap_obj->block); 872 - if (ret) { 873 - dev_err(obj->dev->dev, 874 - "could not release 
 unmap: %d\n", ret);
-	}
-	omap_obj->dma_addr = 0;
-	omap_obj->block = NULL;
 	}
 }

··· 911 900

 	mutex_lock(&omap_obj->lock);

-	if ((refcount_read(&omap_obj->dma_addr_cnt) > 0) && omap_obj->block &&
+	if ((refcount_read(&omap_obj->pin_cnt) > 0) && omap_obj->block &&
 	    (omap_obj->flags & OMAP_BO_TILED_MASK)) {
 		*dma_addr = tiler_tsptr(omap_obj->block, orient, x, y);
 		ret = 0;

··· 979 968
 	return 0;
 }

-struct sg_table *omap_gem_get_sg(struct drm_gem_object *obj)
+struct sg_table *omap_gem_get_sg(struct drm_gem_object *obj,
+		enum dma_data_direction dir)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	dma_addr_t addr;

··· 1005 993
 		goto err_unpin;
 	}

-	if (omap_obj->flags & OMAP_BO_TILED_MASK) {
-		enum tiler_fmt fmt = gem2fmt(omap_obj->flags);
+	if (addr) {
+		if (omap_obj->flags & OMAP_BO_TILED_MASK) {
+			enum tiler_fmt fmt = gem2fmt(omap_obj->flags);

-		len = omap_obj->width << (int)fmt;
-		count = omap_obj->height;
-		stride = tiler_stride(fmt, 0);
+			len = omap_obj->width << (int)fmt;
+			count = omap_obj->height;
+			stride = tiler_stride(fmt, 0);
+		} else {
+			len = obj->size;
+			count = 1;
+			stride = 0;
+		}
 	} else {
-		len = obj->size;
-		count = 1;
-		stride = 0;
+		count = obj->size >> PAGE_SHIFT;
 	}

 	ret = sg_alloc_table(sgt, count, GFP_KERNEL);
 	if (ret)
 		goto err_free;

-	for_each_sg(sgt->sgl, sg, count, i) {
-		sg_set_page(sg, phys_to_page(addr), len, offset_in_page(addr));
-		sg_dma_address(sg) = addr;
-		sg_dma_len(sg) = len;
+	/* this must be after omap_gem_pin() to ensure we have pages attached */
+	omap_gem_dma_sync_buffer(obj, dir);

-		addr += stride;
+	if (addr) {
+		for_each_sg(sgt->sgl, sg, count, i) {
+			sg_set_page(sg, phys_to_page(addr), len,
+				    offset_in_page(addr));
+			sg_dma_address(sg) = addr;
+			sg_dma_len(sg) = len;
+
+			addr += stride;
+		}
+	} else {
+		for_each_sg(sgt->sgl, sg, count, i) {
+			sg_set_page(sg, omap_obj->pages[i], PAGE_SIZE, 0);
+			sg_dma_address(sg) = omap_obj->dma_addrs[i];
+			sg_dma_len(sg) = PAGE_SIZE;
+		}
 	}

 	omap_obj->sgt = sgt;

··· 1152 1124
 	seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d",
 			omap_obj->flags, obj->name, kref_read(&obj->refcount),
 			off, &omap_obj->dma_addr,
-			refcount_read(&omap_obj->dma_addr_cnt),
+			refcount_read(&omap_obj->pin_cnt),
 			omap_obj->vaddr, omap_obj->roll);

 	if (omap_obj->flags & OMAP_BO_TILED_MASK) {

··· 1215 1187
 	mutex_lock(&omap_obj->lock);

 	/* The object should not be pinned. */
-	WARN_ON(refcount_read(&omap_obj->dma_addr_cnt) > 0);
+	WARN_ON(refcount_read(&omap_obj->pin_cnt) > 0);

 	if (omap_obj->pages) {
 		if (omap_obj->flags & OMAP_BO_MEM_DMABUF)
+2 -1
drivers/gpu/drm/omapdrm/omap_gem.h
··· 82 82
 int omap_gem_rotated_dma_addr(struct drm_gem_object *obj, u32 orient,
 		int x, int y, dma_addr_t *dma_addr);
 int omap_gem_tiled_stride(struct drm_gem_object *obj, u32 orient);
-struct sg_table *omap_gem_get_sg(struct drm_gem_object *obj);
+struct sg_table *omap_gem_get_sg(struct drm_gem_object *obj,
+		enum dma_data_direction dir);
 void omap_gem_put_sg(struct drm_gem_object *obj, struct sg_table *sgt);

 #endif /* __OMAPDRM_GEM_H__ */
+1 -4
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
··· 23 23
 {
 	struct drm_gem_object *obj = attachment->dmabuf->priv;
 	struct sg_table *sg;
-	sg = omap_gem_get_sg(obj);
+	sg = omap_gem_get_sg(obj, dir);
 	if (IS_ERR(sg))
 		return sg;
-
-	/* this must be after omap_gem_pin() to ensure we have pages attached */
-	omap_gem_dma_sync_buffer(obj, dir);

 	return sg;
 }
+1 -1
drivers/gpu/drm/omapdrm/omap_overlay.c
··· 86 86
 		r_ovl = omap_plane_find_free_overlay(s->dev, overlay_map,
 						     caps, fourcc);
 		if (!r_ovl) {
-			overlay_map[r_ovl->idx] = NULL;
+			overlay_map[ovl->idx] = NULL;
 			*overlay = NULL;
 			return -ENOMEM;
 		}
+9
drivers/gpu/drm/panel/Kconfig
··· 284 284
 	  panel (found on the Zoom2/3/3630 SDP boards). To compile this driver
 	  as a module, choose M here.

+config DRM_PANEL_NEWVISION_NV3052C
+	tristate "NewVision NV3052C RGB/SPI panel"
+	depends on OF && SPI
+	depends on BACKLIGHT_CLASS_DEVICE
+	select DRM_MIPI_DBI
+	help
+	  Say Y here if you want to enable support for the panels built
+	  around the NewVision NV3052C display controller.
+
 config DRM_PANEL_NOVATEK_NT35510
 	tristate "Novatek NT35510 RGB panel driver"
 	depends on OF
+1
drivers/gpu/drm/panel/Makefile
··· 26 26
 obj-$(CONFIG_DRM_PANEL_LG_LB035Q02) += panel-lg-lb035q02.o
 obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o
 obj-$(CONFIG_DRM_PANEL_NEC_NL8048HL11) += panel-nec-nl8048hl11.o
+obj-$(CONFIG_DRM_PANEL_NEWVISION_NV3052C) += panel-newvision-nv3052c.o
 obj-$(CONFIG_DRM_PANEL_NOVATEK_NT35510) += panel-novatek-nt35510.o
 obj-$(CONFIG_DRM_PANEL_NOVATEK_NT35560) += panel-novatek-nt35560.o
 obj-$(CONFIG_DRM_PANEL_NOVATEK_NT35950) += panel-novatek-nt35950.o
+27 -3
drivers/gpu/drm/panel/panel-abt-y030xx067a.c
··· 140 140
 	{ 0x03, REG03_VPOSITION(0x0a) },
 	{ 0x04, REG04_HPOSITION1(0xd2) },
 	{ 0x05, REG05_CLIP | REG05_NVM_VREFRESH | REG05_SLBRCHARGE(0x2) },
-	{ 0x06, REG06_XPSAVE | REG06_NT },
+	{ 0x06, REG06_NT },
 	{ 0x07, 0 },
 	{ 0x08, REG08_PANEL(0x1) | REG08_CLOCK_DIV(0x2) },
 	{ 0x09, REG09_SUB_BRIGHT_R(0x20) },

··· 183 183
 		goto err_disable_regulator;
 	}

-	msleep(120);
-
 	return 0;

 err_disable_regulator:

··· 196 198
 	gpiod_set_value_cansleep(priv->reset_gpio, 1);
 	regulator_disable(priv->supply);
+
+	return 0;
+}
+
+static int y030xx067a_enable(struct drm_panel *panel)
+{
+	struct y030xx067a *priv = to_y030xx067a(panel);
+
+	regmap_set_bits(priv->map, 0x06, REG06_XPSAVE);
+
+	if (panel->backlight) {
+		/* Wait for the picture to be ready before enabling backlight */
+		msleep(120);
+	}
+
+	return 0;
+}
+
+static int y030xx067a_disable(struct drm_panel *panel)
+{
+	struct y030xx067a *priv = to_y030xx067a(panel);
+
+	regmap_clear_bits(priv->map, 0x06, REG06_XPSAVE);

 	return 0;
 }

··· 260 239
 static const struct drm_panel_funcs y030xx067a_funcs = {
 	.prepare = y030xx067a_prepare,
 	.unprepare = y030xx067a_unprepare,
+	.enable = y030xx067a_enable,
+	.disable = y030xx067a_disable,
 	.get_modes = y030xx067a_get_modes,
 };

··· 269 246
 	.reg_bits = 8,
 	.val_bits = 8,
 	.max_register = 0x15,
+	.cache_type = REGCACHE_FLAT,
 };

 static int y030xx067a_probe(struct spi_device *spi)
+2
drivers/gpu/drm/panel/panel-edp.c
··· 1847 1847
 static const struct edp_panel_entry edp_panels[] = {
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"),
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),

 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0786, &delay_200_500_p2e80, "NV116WHM-T01"),
 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07d1, &boe_nv133fhm_n61.delay, "NV133FHM-N61"),

··· 1860 1859
 	EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"),
 	EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"),

+	EDP_PANEL_ENTRY('S', 'H', 'P', 0x1523, &sharp_lq140m1jw46.delay, "LQ140M1JW46"),
 	EDP_PANEL_ENTRY('S', 'H', 'P', 0x154c, &delay_200_500_p2e100, "LQ116M1JW10"),

 	EDP_PANEL_ENTRY('S', 'T', 'A', 0x0100, &delay_100_500_e200, "2081116HHD028001-51D"),
+27 -4
drivers/gpu/drm/panel/panel-innolux-ej030na.c
··· 80 80
 	{ 0x47, 0x08 },
 	{ 0x48, 0x0f },
 	{ 0x49, 0x0f },
-
-	{ 0x2b, 0x01 },
 };

 static int ej030na_prepare(struct drm_panel *panel)

··· 107 109
 		goto err_disable_regulator;
 	}

-	msleep(120);
-
 	return 0;

 err_disable_regulator:

··· 120 124
 	gpiod_set_value_cansleep(priv->reset_gpio, 1);
 	regulator_disable(priv->supply);
+
+	return 0;
+}
+
+static int ej030na_enable(struct drm_panel *panel)
+{
+	struct ej030na *priv = to_ej030na(panel);
+
+	/* standby off */
+	regmap_write(priv->map, 0x2b, 0x01);
+
+	if (panel->backlight) {
+		/* Wait for the picture to be ready before enabling backlight */
+		msleep(120);
+	}
+
+	return 0;
+}
+
+static int ej030na_disable(struct drm_panel *panel)
+{
+	struct ej030na *priv = to_ej030na(panel);
+
+	/* standby on */
+	regmap_write(priv->map, 0x2b, 0x00);

 	return 0;
 }

··· 186 165
 static const struct drm_panel_funcs ej030na_funcs = {
 	.prepare = ej030na_prepare,
 	.unprepare = ej030na_unprepare,
+	.enable = ej030na_enable,
+	.disable = ej030na_disable,
 	.get_modes = ej030na_get_modes,
 };
+482
drivers/gpu/drm/panel/panel-newvision-nv3052c.c
// SPDX-License-Identifier: GPL-2.0
/*
 * NewVision NV3052C IPS LCD panel driver
 *
 * Copyright (C) 2020, Paul Cercueil <paul@crapouillou.net>
 * Copyright (C) 2022, Christophe Branchereau <cbranchereau@gmail.com>
 */

#include <linux/delay.h>
#include <linux/device.h>
#include <linux/gpio/consumer.h>
#include <linux/media-bus-format.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/regulator/consumer.h>
#include <linux/spi/spi.h>
#include <video/mipi_display.h>
#include <drm/drm_mipi_dbi.h>
#include <drm/drm_modes.h>
#include <drm/drm_panel.h>

struct nv3052c_panel_info {
	const struct drm_display_mode *display_modes;
	unsigned int num_modes;
	u16 width_mm, height_mm;
	u32 bus_format, bus_flags;
};

struct nv3052c {
	struct device *dev;
	struct drm_panel panel;
	struct mipi_dbi dbi;
	const struct nv3052c_panel_info *panel_info;
	struct regulator *supply;
	struct gpio_desc *reset_gpio;
};

struct nv3052c_reg {
	u8 cmd;
	u8 val;
};

static const struct nv3052c_reg nv3052c_panel_regs[] = {
	{ 0xff, 0x30 },
	{ 0xff, 0x52 },
	{ 0xff, 0x01 },
	{ 0xe3, 0x00 },
	{ 0x40, 0x00 },
	{ 0x03, 0x40 },
	{ 0x04, 0x00 },
	{ 0x05, 0x03 },
	{ 0x08, 0x00 },
	{ 0x09, 0x07 },
	{ 0x0a, 0x01 },
	{ 0x0b, 0x32 },
	{ 0x0c, 0x32 },
	{ 0x0d, 0x0b },
	{ 0x0e, 0x00 },
	{ 0x23, 0xa0 },
	{ 0x24, 0x0c },
	{ 0x25, 0x06 },
	{ 0x26, 0x14 },
	{ 0x27, 0x14 },
	{ 0x38, 0xcc },
	{ 0x39, 0xd7 },
	{ 0x3a, 0x4a },
	{ 0x28, 0x40 },
	{ 0x29, 0x01 },
	{ 0x2a, 0xdf },
	{ 0x49, 0x3c },
	{ 0x91, 0x77 },
	{ 0x92, 0x77 },
	{ 0xa0, 0x55 },
	{ 0xa1, 0x50 },
	{ 0xa4, 0x9c },
	{ 0xa7, 0x02 },
	{ 0xa8, 0x01 },
	{ 0xa9, 0x01 },
	{ 0xaa, 0xfc },
	{ 0xab, 0x28 },
	{ 0xac, 0x06 },
	{ 0xad, 0x06 },
	{ 0xae, 0x06 },
	{ 0xaf, 0x03 },
	{ 0xb0, 0x08 },
	{ 0xb1, 0x26 },
	{ 0xb2, 0x28 },
	{ 0xb3, 0x28 },
	{ 0xb4, 0x33 },
	{ 0xb5, 0x08 },
	{ 0xb6, 0x26 },
	{ 0xb7, 0x08 },
	{ 0xb8, 0x26 },
	{ 0xf0, 0x00 },
	{ 0xf6, 0xc0 },
	{ 0xff, 0x30 },
	{ 0xff, 0x52 },
	{ 0xff, 0x02 },
	{ 0xb0, 0x0b },
	{ 0xb1, 0x16 },
	{ 0xb2, 0x17 },
	{ 0xb3, 0x2c },
	{ 0xb4, 0x32 },
	{ 0xb5, 0x3b },
	{ 0xb6, 0x29 },
	{ 0xb7, 0x40 },
	{ 0xb8, 0x0d },
	{ 0xb9, 0x05 },
	{ 0xba, 0x12 },
	{ 0xbb, 0x10 },
	{ 0xbc, 0x12 },
	{ 0xbd, 0x15 },
	{ 0xbe, 0x19 },
	{ 0xbf, 0x0e },
	{ 0xc0, 0x16 },
	{ 0xc1, 0x0a },
	{ 0xd0, 0x0c },
	{ 0xd1, 0x17 },
	{ 0xd2, 0x14 },
	{ 0xd3, 0x2e },
	{ 0xd4, 0x32 },
	{ 0xd5, 0x3c },
	{ 0xd6, 0x22 },
	{ 0xd7, 0x3d },
	{ 0xd8, 0x0d },
	{ 0xd9, 0x07 },
	{ 0xda, 0x13 },
	{ 0xdb, 0x13 },
	{ 0xdc, 0x11 },
	{ 0xdd, 0x15 },
	{ 0xde, 0x19 },
	{ 0xdf, 0x10 },
	{ 0xe0, 0x17 },
	{ 0xe1, 0x0a },
	{ 0xff, 0x30 },
	{ 0xff, 0x52 },
	{ 0xff, 0x03 },
	{ 0x00, 0x2a },
	{ 0x01, 0x2a },
	{ 0x02, 0x2a },
	{ 0x03, 0x2a },
	{ 0x04, 0x61 },
	{ 0x05, 0x80 },
	{ 0x06, 0xc7 },
	{ 0x07, 0x01 },
	{ 0x08, 0x03 },
	{ 0x09, 0x04 },
	{ 0x70, 0x22 },
	{ 0x71, 0x80 },
	{ 0x30, 0x2a },
	{ 0x31, 0x2a },
	{ 0x32, 0x2a },
	{ 0x33, 0x2a },
	{ 0x34, 0x61 },
	{ 0x35, 0xc5 },
	{ 0x36, 0x80 },
	{ 0x37, 0x23 },
	{ 0x40, 0x03 },
	{ 0x41, 0x04 },
	{ 0x42, 0x05 },
	{ 0x43, 0x06 },
	{ 0x44, 0x11 },
	{ 0x45, 0xe8 },
	{ 0x46, 0xe9 },
	{ 0x47, 0x11 },
	{ 0x48, 0xea },
	{ 0x49, 0xeb },
	{ 0x50, 0x07 },
	{ 0x51, 0x08 },
	{ 0x52, 0x09 },
	{ 0x53, 0x0a },
	{ 0x54, 0x11 },
	{ 0x55, 0xec },
	{ 0x56, 0xed },
	{ 0x57, 0x11 },
	{ 0x58, 0xef },
	{ 0x59, 0xf0 },
	{ 0xb1, 0x01 },
	{ 0xb4, 0x15 },
	{ 0xb5, 0x16 },
	{ 0xb6, 0x09 },
	{ 0xb7, 0x0f },
	{ 0xb8, 0x0d },
	{ 0xb9, 0x0b },
	{ 0xba, 0x00 },
	{ 0xc7, 0x02 },
	{ 0xca, 0x17 },
	{ 0xcb, 0x18 },
	{ 0xcc, 0x0a },
	{ 0xcd, 0x10 },
	{ 0xce, 0x0e },
	{ 0xcf, 0x0c },
	{ 0xd0, 0x00 },
	{ 0x81, 0x00 },
	{ 0x84, 0x15 },
	{ 0x85, 0x16 },
	{ 0x86, 0x10 },
	{ 0x87, 0x0a },
	{ 0x88, 0x0c },
	{ 0x89, 0x0e },
	{ 0x8a, 0x02 },
	{ 0x97, 0x00 },
	{ 0x9a, 0x17 },
	{ 0x9b, 0x18 },
	{ 0x9c, 0x0f },
	{ 0x9d, 0x09 },
	{ 0x9e, 0x0b },
	{ 0x9f, 0x0d },
	{ 0xa0, 0x01 },
	{ 0xff, 0x30 },
	{ 0xff, 0x52 },
	{ 0xff, 0x02 },
	{ 0x01, 0x01 },
	{ 0x02, 0xda },
	{ 0x03, 0xba },
	{ 0x04, 0xa8 },
	{ 0x05, 0x9a },
	{ 0x06, 0x70 },
	{ 0x07, 0xff },
	{ 0x08, 0x91 },
	{ 0x09, 0x90 },
	{ 0x0a, 0xff },
	{ 0x0b, 0x8f },
	{ 0x0c, 0x60 },
	{ 0x0d, 0x58 },
	{ 0x0e, 0x48 },
	{ 0x0f, 0x38 },
	{ 0x10, 0x2b },
	{ 0xff, 0x30 },
	{ 0xff, 0x52 },
	{ 0xff, 0x00 },
	{ 0x36, 0x0a },
};

static inline struct nv3052c *to_nv3052c(struct drm_panel *panel)
{
	return container_of(panel, struct nv3052c, panel);
}

static int nv3052c_prepare(struct drm_panel *panel)
{
	struct nv3052c *priv = to_nv3052c(panel);
	struct mipi_dbi *dbi = &priv->dbi;
	unsigned int i;
	int err;

	err = regulator_enable(priv->supply);
	if (err) {
		dev_err(priv->dev, "Failed to enable power supply: %d\n", err);
		return err;
	}

	/* Reset the chip */
	gpiod_set_value_cansleep(priv->reset_gpio, 1);
	usleep_range(10, 1000);
	gpiod_set_value_cansleep(priv->reset_gpio, 0);
	usleep_range(5000, 20000);

	for (i = 0; i < ARRAY_SIZE(nv3052c_panel_regs); i++) {
		err = mipi_dbi_command(dbi, nv3052c_panel_regs[i].cmd,
				       nv3052c_panel_regs[i].val);

		if (err) {
			dev_err(priv->dev, "Unable to set register: %d\n", err);
			goto err_disable_regulator;
		}
	}

	err = mipi_dbi_command(dbi, MIPI_DCS_EXIT_SLEEP_MODE);
	if (err) {
		dev_err(priv->dev, "Unable to exit sleep mode: %d\n", err);
		goto err_disable_regulator;
	}

	return 0;

err_disable_regulator:
	regulator_disable(priv->supply);
	return err;
}

static int nv3052c_unprepare(struct drm_panel *panel)
{
	struct nv3052c *priv = to_nv3052c(panel);
	struct mipi_dbi *dbi = &priv->dbi;
	int err;

	err = mipi_dbi_command(dbi, MIPI_DCS_ENTER_SLEEP_MODE);
	if (err)
		dev_err(priv->dev, "Unable to enter sleep mode: %d\n", err);

	gpiod_set_value_cansleep(priv->reset_gpio, 1);
	regulator_disable(priv->supply);

	return 0;
}

static int nv3052c_enable(struct drm_panel *panel)
{
	struct nv3052c *priv = to_nv3052c(panel);
	struct mipi_dbi *dbi = &priv->dbi;
	int err;

	err = mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_ON);
	if (err) {
		dev_err(priv->dev, "Unable to enable display: %d\n", err);
		return err;
	}

	if (panel->backlight) {
		/* Wait for the picture to be ready before enabling backlight */
		msleep(120);
	}

	return 0;
}

static int nv3052c_disable(struct drm_panel *panel)
{
	struct nv3052c *priv = to_nv3052c(panel);
	struct mipi_dbi *dbi = &priv->dbi;
	int err;

	err = mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_OFF);
	if (err) {
		dev_err(priv->dev, "Unable to disable display: %d\n", err);
		return err;
	}

	return 0;
}

static int nv3052c_get_modes(struct drm_panel *panel,
			     struct drm_connector *connector)
{
	struct nv3052c *priv = to_nv3052c(panel);
	const struct nv3052c_panel_info *panel_info = priv->panel_info;
	struct drm_display_mode *mode;
	unsigned int i;

	for (i = 0; i < panel_info->num_modes; i++) {
		mode = drm_mode_duplicate(connector->dev,
					  &panel_info->display_modes[i]);
		if (!mode)
			return -ENOMEM;

		drm_mode_set_name(mode);

		mode->type = DRM_MODE_TYPE_DRIVER;
		if (panel_info->num_modes == 1)
			mode->type |= DRM_MODE_TYPE_PREFERRED;

		drm_mode_probed_add(connector, mode);
	}

	connector->display_info.bpc = 8;
	connector->display_info.width_mm = panel_info->width_mm;
	connector->display_info.height_mm = panel_info->height_mm;

	drm_display_info_set_bus_formats(&connector->display_info,
					 &panel_info->bus_format, 1);
	connector->display_info.bus_flags = panel_info->bus_flags;

	return panel_info->num_modes;
}

static const struct drm_panel_funcs nv3052c_funcs = {
	.prepare = nv3052c_prepare,
	.unprepare = nv3052c_unprepare,
	.enable = nv3052c_enable,
	.disable = nv3052c_disable,
	.get_modes = nv3052c_get_modes,
};

static int nv3052c_probe(struct spi_device *spi)
{
	struct device *dev = &spi->dev;
	struct nv3052c *priv;
	int err;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->dev = dev;

	priv->panel_info = of_device_get_match_data(dev);
	if (!priv->panel_info)
		return -EINVAL;

	priv->supply = devm_regulator_get(dev, "power");
	if (IS_ERR(priv->supply))
		return dev_err_probe(dev, PTR_ERR(priv->supply), "Failed to get power supply\n");

	priv->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(priv->reset_gpio))
		return dev_err_probe(dev, PTR_ERR(priv->reset_gpio), "Failed to get reset GPIO\n");

	err = mipi_dbi_spi_init(spi, &priv->dbi, NULL);
	if (err)
		return dev_err_probe(dev, err, "MIPI DBI init failed\n");

	priv->dbi.read_commands = NULL;

	spi_set_drvdata(spi, priv);

	drm_panel_init(&priv->panel, dev, &nv3052c_funcs,
		       DRM_MODE_CONNECTOR_DPI);

	err = drm_panel_of_backlight(&priv->panel);
	if (err)
		return dev_err_probe(dev, err, "Failed to attach backlight\n");

	drm_panel_add(&priv->panel);

	return 0;
}

static void nv3052c_remove(struct spi_device *spi)
{
	struct nv3052c *priv = spi_get_drvdata(spi);

	drm_panel_remove(&priv->panel);
	drm_panel_disable(&priv->panel);
	drm_panel_unprepare(&priv->panel);
}

static const struct drm_display_mode ltk035c5444t_modes[] = {
	{ /* 60 Hz */
		.clock = 24000,
		.hdisplay = 640,
		.hsync_start = 640 + 96,
		.hsync_end = 640 + 96 + 16,
		.htotal = 640 + 96 + 16 + 48,
		.vdisplay = 480,
		.vsync_start = 480 + 5,
		.vsync_end = 480 + 5 + 2,
		.vtotal = 480 + 5 + 2 + 13,
		.flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
	},
	{ /* 50 Hz */
		.clock = 18000,
		.hdisplay = 640,
		.hsync_start = 640 + 39,
		.hsync_end = 640 + 39 + 2,
		.htotal = 640 + 39 + 2 + 39,
		.vdisplay = 480,
		.vsync_start = 480 + 5,
		.vsync_end = 480 + 5 + 2,
		.vtotal = 480 + 5 + 2 + 13,
		.flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
	},
};

static const struct nv3052c_panel_info ltk035c5444t_panel_info = {
	.display_modes = ltk035c5444t_modes,
	.num_modes = ARRAY_SIZE(ltk035c5444t_modes),
	.width_mm = 77,
	.height_mm = 64,
	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE,
};

static const struct of_device_id nv3052c_of_match[] = {
	{ .compatible = "leadtek,ltk035c5444t", .data = &ltk035c5444t_panel_info },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, nv3052c_of_match);

static struct spi_driver nv3052c_driver = {
	.driver = {
		.name = "nv3052c",
		.of_match_table = nv3052c_of_match,
	},
	.probe = nv3052c_probe,
	.remove = nv3052c_remove,
};
module_spi_driver(nv3052c_driver);

MODULE_AUTHOR("Paul Cercueil <paul@crapouillou.net>");
MODULE_AUTHOR("Christophe Branchereau <cbranchereau@gmail.com>");
MODULE_LICENSE("GPL v2");
+1 -2
drivers/gpu/drm/panel/panel-truly-nt35597.c
··· 446 446
 	const struct nt35597_config *config;

 	config = ctx->config;
-	mode = drm_mode_create(connector->dev);
+	mode = drm_mode_duplicate(connector->dev, config->dm);
 	if (!mode) {
 		dev_err(ctx->dev, "failed to create a new display mode\n");
 		return 0;

··· 454 454
 	connector->display_info.width_mm = config->width_mm;
 	connector->display_info.height_mm = config->height_mm;
-	drm_mode_copy(mode, config->dm);
 	mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
 	drm_mode_probed_add(connector, mode);
+2 -2
drivers/gpu/drm/panel/panel-visionox-rm69299.c
··· 168 168
 	struct visionox_rm69299 *ctx = panel_to_ctx(panel);
 	struct drm_display_mode *mode;

-	mode = drm_mode_create(connector->dev);
+	mode = drm_mode_duplicate(connector->dev,
+				  &visionox_rm69299_1080x2248_60hz);
 	if (!mode) {
 		dev_err(ctx->panel.dev, "failed to create a new display mode\n");
 		return 0;

··· 177 176
 	connector->display_info.width_mm = 74;
 	connector->display_info.height_mm = 131;
-	drm_mode_copy(mode, &visionox_rm69299_1080x2248_60hz);
 	mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
 	drm_mode_probed_add(connector, mode);
+4
drivers/gpu/drm/panfrost/panfrost_job.c
··· 247 247
 	int i, ret;

 	for (i = 0; i < bo_count; i++) {
+		ret = dma_resv_reserve_fences(bos[i]->resv, 1);
+		if (ret)
+			return ret;
+
 		/* panfrost always uses write mode in its current uapi */
 		ret = drm_sched_job_add_implicit_dependencies(job, bos[i],
 							      true);
+2 -2
drivers/gpu/drm/qxl/qxl_kms.c
··· 165 165
 		 (int)qdev->surfaceram_size / 1024,
 		 (sb == 4) ? "64bit" : "32bit");

-	qdev->rom = ioremap(qdev->rom_base, qdev->rom_size);
+	qdev->rom = ioremap_wc(qdev->rom_base, qdev->rom_size);
 	if (!qdev->rom) {
 		pr_err("Unable to ioremap ROM\n");
 		r = -ENOMEM;

··· 183 183
 		goto rom_unmap;
 	}

-	qdev->ram_header = ioremap(qdev->vram_base +
+	qdev->ram_header = ioremap_wc(qdev->vram_base +
 				   qdev->rom->ram_header_offset,
 				   sizeof(*qdev->ram_header));
 	if (!qdev->ram_header) {
+1 -1
drivers/gpu/drm/qxl/qxl_release.c
··· 200 200
 		return ret;
 	}

-	ret = dma_resv_reserve_shared(bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
 	if (ret)
 		return ret;
+3 -3
drivers/gpu/drm/qxl/qxl_ttm.c
··· 82 82
 	case TTM_PL_VRAM:
 		mem->bus.is_iomem = true;
 		mem->bus.offset = (mem->start << PAGE_SHIFT) + qdev->vram_base;
-		mem->bus.caching = ttm_cached;
+		mem->bus.caching = ttm_write_combined;
 		break;
 	case TTM_PL_PRIV:
 		mem->bus.is_iomem = true;
 		mem->bus.offset = (mem->start << PAGE_SHIFT) +
 			qdev->surfaceram_base;
-		mem->bus.caching = ttm_cached;
+		mem->bus.caching = ttm_write_combined;
 		break;
 	default:
 		return -EINVAL;

··· 113 113
 	ttm = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
 	if (ttm == NULL)
 		return NULL;
-	if (ttm_tt_init(ttm, bo, page_flags, ttm_cached)) {
+	if (ttm_tt_init(ttm, bo, page_flags, ttm_cached, 0)) {
 		kfree(ttm);
 		return NULL;
 	}
+4
drivers/gpu/drm/radeon/radeon_cs.c
··· 535 535
 			return r;

 		radeon_sync_fence(&p->ib.sync, bo_va->last_pt_update);
+
+		r = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
+		if (r)
+			return r;
 	}

 	return radeon_vm_clear_invalids(rdev, vm);
+6 -1
drivers/gpu/drm/radeon/radeon_display.c
··· 533 533
 		DRM_ERROR("failed to pin new rbo buffer before flip\n");
 		goto cleanup;
 	}
-	work->fence = dma_fence_get(dma_resv_excl_fence(new_rbo->tbo.base.resv));
+	r = dma_resv_get_singleton(new_rbo->tbo.base.resv, false, &work->fence);
+	if (r) {
+		radeon_bo_unreserve(new_rbo);
+		DRM_ERROR("failed to get new rbo buffer fences\n");
+		goto cleanup;
+	}
 	radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(new_rbo);
+8
drivers/gpu/drm/radeon/radeon_object.c
··· 782 782
 		     bool shared)
 {
 	struct dma_resv *resv = bo->tbo.base.resv;
+	int r;
+
+	r = dma_resv_reserve_fences(resv, 1);
+	if (r) {
+		/* As last resort on OOM we block for the fence */
+		dma_fence_wait(&fence->base, false);
+		return;
+	}

 	if (shared)
 		dma_resv_add_shared_fence(resv, &fence->base);
+1 -1
drivers/gpu/drm/radeon/radeon_vm.c
··· 831 831
 	int r;

 	radeon_sync_resv(rdev, &ib->sync, pt->tbo.base.resv, true);
-	r = dma_resv_reserve_shared(pt->tbo.base.resv, 1);
+	r = dma_resv_reserve_fences(pt->tbo.base.resv, 1);
 	if (r)
 		return r;
+2
drivers/gpu/drm/scheduler/sched_main.c
··· 703 703
 	struct dma_fence *fence;
 	int ret;

+	dma_resv_assert_held(obj->resv);
+
 	dma_resv_for_each_fence(&cursor, obj->resv, write, fence) {
 		/* Make sure to grab an additional ref on the added fence */
 		dma_fence_get(fence);
+6 -4
drivers/gpu/drm/selftests/test-drm_buddy.c
··· 488 488
 	}

 	order = drm_random_order(mm.max_order + 1, &prng);
-	if (!order)
+	if (!order) {
+		err = -ENOMEM;
 		goto out_fini;
+	}

 	for (i = 0; i <= mm.max_order; ++i) {
 		struct drm_buddy_block *block;

··· 904 902

 static int igt_buddy_alloc_limit(void *arg)
 {
-	u64 end, size = U64_MAX, start = 0;
+	u64 size = U64_MAX, start = 0;
 	struct drm_buddy_block *block;
 	unsigned long flags = 0;
 	LIST_HEAD(allocated);
 	struct drm_buddy mm;
 	int err;

-	size = end = round_down(size, 4096);
 	err = drm_buddy_init(&mm, size, PAGE_SIZE);
 	if (err)
 		return err;

··· 922 921
 		goto out_fini;
 	}

-	err = drm_buddy_alloc_blocks(&mm, start, end, size,
+	size = mm.chunk_size << mm.max_order;
+	err = drm_buddy_alloc_blocks(&mm, start, size, size,
 				     PAGE_SIZE, &allocated, flags);

 	if (unlikely(err))
+1 -1
drivers/gpu/drm/solomon/Kconfig
··· 1 1
 config DRM_SSD130X
 	tristate "DRM support for Solomon SSD130x OLED displays"
-	depends on DRM
+	depends on DRM && MMU
 	select BACKLIGHT_CLASS_DEVICE
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
+27 -15
drivers/gpu/drm/solomon/ssd130x.c
··· 48 48
 #define SSD130X_CONTRAST			0x81
 #define SSD130X_SET_LOOKUP_TABLE		0x91
 #define SSD130X_CHARGE_PUMP			0x8d
-#define SSD130X_SEG_REMAP_ON			0xa1
+#define SSD130X_SET_SEG_REMAP			0xa0
 #define SSD130X_DISPLAY_OFF			0xae
 #define SSD130X_SET_MULTIPLEX_RATIO		0xa8
 #define SSD130X_DISPLAY_ON			0xaf

··· 61 61
 #define SSD130X_SET_COM_PINS_CONFIG		0xda
 #define SSD130X_SET_VCOMH			0xdb

-#define SSD130X_SET_COM_SCAN_DIR_MASK		GENMASK(3, 2)
+#define SSD130X_SET_SEG_REMAP_MASK		GENMASK(0, 0)
+#define SSD130X_SET_SEG_REMAP_SET(val)		FIELD_PREP(SSD130X_SET_SEG_REMAP_MASK, (val))
+#define SSD130X_SET_COM_SCAN_DIR_MASK		GENMASK(3, 3)
 #define SSD130X_SET_COM_SCAN_DIR_SET(val)	FIELD_PREP(SSD130X_SET_COM_SCAN_DIR_MASK, (val))
 #define SSD130X_SET_CLOCK_DIV_MASK		GENMASK(3, 0)
 #define SSD130X_SET_CLOCK_DIV_SET(val)		FIELD_PREP(SSD130X_SET_CLOCK_DIV_MASK, (val))

··· 237 235

 static int ssd130x_init(struct ssd130x_device *ssd130x)
 {
-	u32 precharge, dclk, com_invdir, compins, chargepump;
+	u32 precharge, dclk, com_invdir, compins, chargepump, seg_remap;
 	int ret;

 	/* Set initial contrast */

··· 246 244
 		return ret;

 	/* Set segment re-map */
-	if (ssd130x->seg_remap) {
-		ret = ssd130x_write_cmd(ssd130x, 1, SSD130X_SEG_REMAP_ON);
-		if (ret < 0)
-			return ret;
-	}
+	seg_remap = (SSD130X_SET_SEG_REMAP |
+		     SSD130X_SET_SEG_REMAP_SET(ssd130x->seg_remap));
+	ret = ssd130x_write_cmd(ssd130x, 1, seg_remap);
+	if (ret < 0)
+		return ret;

 	/* Set COM direction */
 	com_invdir = (SSD130X_SET_COM_SCAN_DIR |

··· 355 353
 	unsigned int width = drm_rect_width(rect);
 	unsigned int height = drm_rect_height(rect);
 	unsigned int line_length = DIV_ROUND_UP(width, 8);
-	unsigned int pages = DIV_ROUND_UP(y % 8 + height, 8);
+	unsigned int pages = DIV_ROUND_UP(height, 8);
+	struct drm_device *drm = &ssd130x->drm;
 	u32 array_idx = 0;
 	int ret, i, j, k;
 	u8 *data_array = NULL;
+
+	drm_WARN_ONCE(drm, y % 8 != 0, "y must be aligned to screen page\n");

 	data_array = kcalloc(width, pages, GFP_KERNEL);
 	if (!data_array)

··· 404 399
 	if (ret < 0)
 		goto out_free;

-	for (i = y / 8; i < y / 8 + pages; i++) {
+	for (i = 0; i < pages; i++) {
 		int m = 8;

 		/* Last page may be partial */
-		if (8 * (i + 1) > ssd130x->height)
+		if (8 * (y / 8 + i + 1) > ssd130x->height)
 			m = ssd130x->height % 8;
-		for (j = x; j < x + width; j++) {
+		for (j = 0; j < width; j++) {
 			u8 data = 0;

 			for (k = 0; k < m; k++) {

··· 440 435
 		.y2 = ssd130x->height,
 	};

-	buf = kcalloc(ssd130x->width, ssd130x->height, GFP_KERNEL);
+	buf = kcalloc(DIV_ROUND_UP(ssd130x->width, 8), ssd130x->height,
+		      GFP_KERNEL);
 	if (!buf)
 		return;

··· 455 449
 {
 	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
 	void *vmap = map->vaddr; /* TODO: Use mapping abstraction properly */
+	unsigned int dst_pitch;
 	int ret = 0;
 	u8 *buf = NULL;

-	buf = kcalloc(fb->width, fb->height, GFP_KERNEL);
+	/* Align y to display page boundaries */
+	rect->y1 = round_down(rect->y1, 8);
+	rect->y2 = min_t(unsigned int, round_up(rect->y2, 8), ssd130x->height);
+
+	dst_pitch = DIV_ROUND_UP(drm_rect_width(rect), 8);
+	buf = kcalloc(dst_pitch, drm_rect_height(rect), GFP_KERNEL);
 	if (!buf)
 		return -ENOMEM;

-	drm_fb_xrgb8888_to_mono_reversed(buf, 0, vmap, fb, rect);
+	drm_fb_xrgb8888_to_mono(buf, dst_pitch, vmap, fb, rect);

 	ssd130x_update_rect(ssd130x, buf, rect);
+1 -1
drivers/gpu/drm/tilcdc/tilcdc_crtc.c
··· 433 433

 	set_scanout(crtc, fb);

-	crtc->hwmode = crtc->state->adjusted_mode;
+	drm_mode_copy(&crtc->hwmode, &crtc->state->adjusted_mode);

 	tilcdc_crtc->hvtotal_us =
 		tilcdc_mode_hvtotal(&crtc->hwmode);
+5 -3
drivers/gpu/drm/tilcdc/tilcdc_external.c
··· 60 60
 int tilcdc_add_component_encoder(struct drm_device *ddev)
 {
 	struct tilcdc_drm_private *priv = ddev->dev_private;
-	struct drm_encoder *encoder;
+	struct drm_encoder *encoder = NULL, *iter;

-	list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head)
-		if (encoder->possible_crtcs & (1 << priv->crtc->index))
+	list_for_each_entry(iter, &ddev->mode_config.encoder_list, head)
+		if (iter->possible_crtcs & (1 << priv->crtc->index)) {
+			encoder = iter;
 			break;
+		}

 	if (!encoder) {
 		dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__);
+1 -1
drivers/gpu/drm/tiny/repaper.c
··· 540 540
 	if (ret)
 		goto out_free;

-	drm_fb_xrgb8888_to_mono_reversed(buf, 0, cma_obj->vaddr, fb, &clip);
+	drm_fb_xrgb8888_to_mono(buf, 0, cma_obj->vaddr, fb, &clip);

 	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
+1 -1
drivers/gpu/drm/ttm/ttm_agp_backend.c
··· 134 134 agp_be->mem = NULL; 135 135 agp_be->bridge = bridge; 136 136 137 - if (ttm_tt_init(&agp_be->ttm, bo, page_flags, ttm_write_combined)) { 137 + if (ttm_tt_init(&agp_be->ttm, bo, page_flags, ttm_write_combined, 0)) { 138 138 kfree(agp_be); 139 139 return NULL; 140 140 }
+97 -126
drivers/gpu/drm/ttm/ttm_bo.c
··· 69 69 } 70 70 } 71 71 72 - static inline void ttm_bo_move_to_pinned(struct ttm_buffer_object *bo) 72 + /** 73 + * ttm_bo_move_to_lru_tail 74 + * 75 + * @bo: The buffer object. 76 + * 77 + * Move this BO to the tail of all lru lists used to lookup and reserve an 78 + * object. This function must be called with struct ttm_global::lru_lock 79 + * held, and is used to make a BO less likely to be considered for eviction. 80 + */ 81 + void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo) 73 82 { 74 - struct ttm_device *bdev = bo->bdev; 83 + dma_resv_assert_held(bo->base.resv); 75 84 76 - list_move_tail(&bo->lru, &bdev->pinned); 77 - 78 - if (bdev->funcs->del_from_lru_notify) 79 - bdev->funcs->del_from_lru_notify(bo); 80 - } 81 - 82 - static inline void ttm_bo_del_from_lru(struct ttm_buffer_object *bo) 83 - { 84 - struct ttm_device *bdev = bo->bdev; 85 - 86 - list_del_init(&bo->lru); 87 - 88 - if (bdev->funcs->del_from_lru_notify) 89 - bdev->funcs->del_from_lru_notify(bo); 90 - } 91 - 92 - static void ttm_bo_bulk_move_set_pos(struct ttm_lru_bulk_move_pos *pos, 93 - struct ttm_buffer_object *bo) 94 - { 95 - if (!pos->first) 96 - pos->first = bo; 97 - pos->last = bo; 98 - } 99 - 100 - void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo, 101 - struct ttm_resource *mem, 102 - struct ttm_lru_bulk_move *bulk) 103 - { 104 - struct ttm_device *bdev = bo->bdev; 105 - struct ttm_resource_manager *man; 106 - 107 - if (!bo->deleted) 108 - dma_resv_assert_held(bo->base.resv); 109 - 110 - if (bo->pin_count) { 111 - ttm_bo_move_to_pinned(bo); 112 - return; 113 - } 114 - 115 - if (!mem) 116 - return; 117 - 118 - man = ttm_manager_type(bdev, mem->mem_type); 119 - list_move_tail(&bo->lru, &man->lru[bo->priority]); 120 - 121 - if (bdev->funcs->del_from_lru_notify) 122 - bdev->funcs->del_from_lru_notify(bo); 123 - 124 - if (bulk && !bo->pin_count) { 125 - switch (bo->resource->mem_type) { 126 - case TTM_PL_TT: 127 - ttm_bo_bulk_move_set_pos(&bulk->tt[bo->priority], bo); 128 - 
break; 129 - 130 - case TTM_PL_VRAM: 131 - ttm_bo_bulk_move_set_pos(&bulk->vram[bo->priority], bo); 132 - break; 133 - } 134 - } 85 + if (bo->resource) 86 + ttm_resource_move_to_lru_tail(bo->resource); 135 87 } 136 88 EXPORT_SYMBOL(ttm_bo_move_to_lru_tail); 137 89 138 - void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk) 90 + /** 91 + * ttm_bo_set_bulk_move - update BOs bulk move object 92 + * 93 + * @bo: The buffer object. 94 + * 95 + * Update the BOs bulk move object, making sure that resources are added/removed 96 + * as well. A bulk move allows to move many resource on the LRU at once, 97 + * resulting in much less overhead of maintaining the LRU. 98 + * The only requirement is that the resources stay together on the LRU and are 99 + * never separated. This is enforces by setting the bulk_move structure on a BO. 100 + * ttm_lru_bulk_move_tail() should be used to move all resources to the tail of 101 + * their LRU list. 102 + */ 103 + void ttm_bo_set_bulk_move(struct ttm_buffer_object *bo, 104 + struct ttm_lru_bulk_move *bulk) 139 105 { 140 - unsigned i; 106 + dma_resv_assert_held(bo->base.resv); 141 107 142 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { 143 - struct ttm_lru_bulk_move_pos *pos = &bulk->tt[i]; 144 - struct ttm_resource_manager *man; 108 + if (bo->bulk_move == bulk) 109 + return; 145 110 146 - if (!pos->first) 147 - continue; 148 - 149 - dma_resv_assert_held(pos->first->base.resv); 150 - dma_resv_assert_held(pos->last->base.resv); 151 - 152 - man = ttm_manager_type(pos->first->bdev, TTM_PL_TT); 153 - list_bulk_move_tail(&man->lru[i], &pos->first->lru, 154 - &pos->last->lru); 155 - } 156 - 157 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { 158 - struct ttm_lru_bulk_move_pos *pos = &bulk->vram[i]; 159 - struct ttm_resource_manager *man; 160 - 161 - if (!pos->first) 162 - continue; 163 - 164 - dma_resv_assert_held(pos->first->base.resv); 165 - dma_resv_assert_held(pos->last->base.resv); 166 - 167 - man = ttm_manager_type(pos->first->bdev, 
TTM_PL_VRAM); 168 - list_bulk_move_tail(&man->lru[i], &pos->first->lru, 169 - &pos->last->lru); 170 - } 111 + spin_lock(&bo->bdev->lru_lock); 112 + if (bo->bulk_move && bo->resource) 113 + ttm_lru_bulk_move_del(bo->bulk_move, bo->resource); 114 + bo->bulk_move = bulk; 115 + if (bo->bulk_move && bo->resource) 116 + ttm_lru_bulk_move_add(bo->bulk_move, bo->resource); 117 + spin_unlock(&bo->bdev->lru_lock); 171 118 } 172 - EXPORT_SYMBOL(ttm_bo_bulk_move_lru_tail); 119 + EXPORT_SYMBOL(ttm_bo_set_bulk_move); 173 120 174 121 static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo, 175 122 struct ttm_resource *mem, bool evict, ··· 150 203 goto out_err; 151 204 } 152 205 } 206 + 207 + ret = dma_resv_reserve_fences(bo->base.resv, 1); 208 + if (ret) 209 + goto out_err; 153 210 154 211 ret = bdev->funcs->move(bo, evict, ctx, mem, hop); 155 212 if (ret) { ··· 295 344 return ret; 296 345 } 297 346 298 - ttm_bo_move_to_pinned(bo); 299 347 list_del_init(&bo->ddestroy); 300 348 spin_unlock(&bo->bdev->lru_lock); 301 349 ttm_bo_cleanup_memtype_use(bo); ··· 359 409 int ret; 360 410 361 411 WARN_ON_ONCE(bo->pin_count); 412 + WARN_ON_ONCE(bo->bulk_move); 362 413 363 414 if (!bo->deleted) { 364 415 ret = ttm_bo_individualize_resv(bo); ··· 396 445 */ 397 446 if (bo->pin_count) { 398 447 bo->pin_count = 0; 399 - ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); 448 + ttm_resource_move_to_lru_tail(bo->resource); 400 449 } 401 450 402 451 kref_init(&bo->kref); ··· 409 458 } 410 459 411 460 spin_lock(&bo->bdev->lru_lock); 412 - ttm_bo_del_from_lru(bo); 413 461 list_del(&bo->ddestroy); 414 462 spin_unlock(&bo->bdev->lru_lock); 415 463 ··· 623 673 struct ww_acquire_ctx *ticket) 624 674 { 625 675 struct ttm_buffer_object *bo = NULL, *busy_bo = NULL; 676 + struct ttm_resource_cursor cursor; 677 + struct ttm_resource *res; 626 678 bool locked = false; 627 - unsigned i; 628 679 int ret; 629 680 630 681 spin_lock(&bdev->lru_lock); 631 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { 632 - 
list_for_each_entry(bo, &man->lru[i], lru) { 633 - bool busy; 682 + ttm_resource_manager_for_each_res(man, &cursor, res) { 683 + bool busy; 634 684 635 - if (!ttm_bo_evict_swapout_allowable(bo, ctx, place, 636 - &locked, &busy)) { 637 - if (busy && !busy_bo && ticket != 638 - dma_resv_locking_ctx(bo->base.resv)) 639 - busy_bo = bo; 640 - continue; 641 - } 642 - 643 - if (!ttm_bo_get_unless_zero(bo)) { 644 - if (locked) 645 - dma_resv_unlock(bo->base.resv); 646 - continue; 647 - } 648 - break; 685 + if (!ttm_bo_evict_swapout_allowable(res->bo, ctx, place, 686 + &locked, &busy)) { 687 + if (busy && !busy_bo && ticket != 688 + dma_resv_locking_ctx(res->bo->base.resv)) 689 + busy_bo = res->bo; 690 + continue; 649 691 } 650 692 651 - /* If the inner loop terminated early, we have our candidate */ 652 - if (&bo->lru != &man->lru[i]) 693 + if (ttm_bo_get_unless_zero(res->bo)) { 694 + bo = res->bo; 653 695 break; 654 - 655 - bo = NULL; 696 + } 697 + if (locked) 698 + dma_resv_unlock(res->bo->base.resv); 656 699 } 657 700 658 701 if (!bo) { ··· 677 734 return ret; 678 735 } 679 736 737 + /** 738 + * ttm_bo_pin - Pin the buffer object. 739 + * @bo: The buffer object to pin 740 + * 741 + * Make sure the buffer is not evicted any more during memory pressure. 742 + * @bo must be unpinned again by calling ttm_bo_unpin(). 743 + */ 744 + void ttm_bo_pin(struct ttm_buffer_object *bo) 745 + { 746 + dma_resv_assert_held(bo->base.resv); 747 + WARN_ON_ONCE(!kref_read(&bo->kref)); 748 + if (!(bo->pin_count++) && bo->bulk_move && bo->resource) 749 + ttm_lru_bulk_move_del(bo->bulk_move, bo->resource); 750 + } 751 + EXPORT_SYMBOL(ttm_bo_pin); 752 + 753 + /** 754 + * ttm_bo_unpin - Unpin the buffer object. 755 + * @bo: The buffer object to unpin 756 + * 757 + * Allows the buffer object to be evicted again during memory pressure. 
758 + */ 759 + void ttm_bo_unpin(struct ttm_buffer_object *bo) 760 + { 761 + dma_resv_assert_held(bo->base.resv); 762 + WARN_ON_ONCE(!kref_read(&bo->kref)); 763 + if (WARN_ON_ONCE(!bo->pin_count)) 764 + return; 765 + 766 + if (!(--bo->pin_count) && bo->bulk_move && bo->resource) 767 + ttm_lru_bulk_move_add(bo->bulk_move, bo->resource); 768 + } 769 + EXPORT_SYMBOL(ttm_bo_unpin); 770 + 680 771 /* 681 772 * Add the last move fence to the BO and reserve a new shared slot. We only use 682 773 * a shared slot to avoid unecessary sync and rely on the subsequent bo move to ··· 739 762 740 763 dma_resv_add_shared_fence(bo->base.resv, fence); 741 764 742 - ret = dma_resv_reserve_shared(bo->base.resv, 1); 765 + ret = dma_resv_reserve_fences(bo->base.resv, 1); 743 766 if (unlikely(ret)) { 744 767 dma_fence_put(fence); 745 768 return ret; ··· 798 821 bool type_found = false; 799 822 int i, ret; 800 823 801 - ret = dma_resv_reserve_shared(bo->base.resv, 1); 824 + ret = dma_resv_reserve_fences(bo->base.resv, 1); 802 825 if (unlikely(ret)) 803 826 return ret; 804 827 ··· 852 875 } 853 876 854 877 error: 855 - if (bo->resource->mem_type == TTM_PL_SYSTEM && !bo->pin_count) 856 - ttm_bo_move_to_lru_tail_unlocked(bo); 857 - 858 878 return ret; 859 879 } 860 880 EXPORT_SYMBOL(ttm_bo_mem_space); ··· 945 971 bo->destroy = destroy ? destroy : ttm_bo_default_destroy; 946 972 947 973 kref_init(&bo->kref); 948 - INIT_LIST_HEAD(&bo->lru); 949 974 INIT_LIST_HEAD(&bo->ddestroy); 950 975 bo->bdev = bdev; 951 976 bo->type = type; ··· 952 979 bo->moving = NULL; 953 980 bo->pin_count = 0; 954 981 bo->sg = sg; 982 + bo->bulk_move = NULL; 955 983 if (resv) { 956 984 bo->base.resv = resv; 957 985 dma_resv_assert_held(bo->base.resv); ··· 994 1020 ttm_bo_put(bo); 995 1021 return ret; 996 1022 } 997 - 998 - ttm_bo_move_to_lru_tail_unlocked(bo); 999 1023 1000 1024 return ret; 1001 1025 } ··· 1095 1123 return ret == -EBUSY ? 
-ENOSPC : ret; 1096 1124 } 1097 1125 1098 - ttm_bo_move_to_pinned(bo); 1099 1126 /* TODO: Cleanup the locking */ 1100 1127 spin_unlock(&bo->bdev->lru_lock); 1101 1128
+9 -4
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 221 221 222 222 fbo->base = *bo; 223 223 224 - ttm_bo_get(bo); 225 - fbo->bo = bo; 226 - 227 224 /** 228 225 * Fix up members that we shouldn't copy directly: 229 226 * TODO: Explicit member copy would probably be better here. ··· 228 231 229 232 atomic_inc(&ttm_glob.bo_count); 230 233 INIT_LIST_HEAD(&fbo->base.ddestroy); 231 - INIT_LIST_HEAD(&fbo->base.lru); 232 234 fbo->base.moving = NULL; 233 235 drm_vma_node_reset(&fbo->base.base.vma_node); 234 236 ··· 246 250 fbo->base.base.dev = NULL; 247 251 ret = dma_resv_trylock(&fbo->base.base._resv); 248 252 WARN_ON(!ret); 253 + 254 + ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1); 255 + if (ret) { 256 + kfree(fbo); 257 + return ret; 258 + } 259 + 260 + ttm_bo_get(bo); 261 + fbo->bo = bo; 249 262 250 263 ttm_bo_move_to_lru_tail_unlocked(&fbo->base); 251 264
+40 -44
drivers/gpu/drm/ttm/ttm_device.c
··· 142 142 int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, 143 143 gfp_t gfp_flags) 144 144 { 145 + struct ttm_resource_cursor cursor; 145 146 struct ttm_resource_manager *man; 146 - struct ttm_buffer_object *bo; 147 - unsigned i, j; 147 + struct ttm_resource *res; 148 + unsigned i; 148 149 int ret; 149 150 150 151 spin_lock(&bdev->lru_lock); ··· 154 153 if (!man || !man->use_tt) 155 154 continue; 156 155 157 - for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { 158 - list_for_each_entry(bo, &man->lru[j], lru) { 159 - uint32_t num_pages = PFN_UP(bo->base.size); 156 + ttm_resource_manager_for_each_res(man, &cursor, res) { 157 + struct ttm_buffer_object *bo = res->bo; 158 + uint32_t num_pages = PFN_UP(bo->base.size); 160 159 161 - ret = ttm_bo_swapout(bo, ctx, gfp_flags); 162 - /* ttm_bo_swapout has dropped the lru_lock */ 163 - if (!ret) 164 - return num_pages; 165 - if (ret != -EBUSY) 166 - return ret; 167 - } 160 + ret = ttm_bo_swapout(bo, ctx, gfp_flags); 161 + /* ttm_bo_swapout has dropped the lru_lock */ 162 + if (!ret) 163 + return num_pages; 164 + if (ret != -EBUSY) 165 + return ret; 168 166 } 169 167 } 170 168 spin_unlock(&bdev->lru_lock); ··· 259 259 } 260 260 EXPORT_SYMBOL(ttm_device_fini); 261 261 262 + static void ttm_device_clear_lru_dma_mappings(struct ttm_device *bdev, 263 + struct list_head *list) 264 + { 265 + struct ttm_resource *res; 266 + 267 + spin_lock(&bdev->lru_lock); 268 + while ((res = list_first_entry_or_null(list, typeof(*res), lru))) { 269 + struct ttm_buffer_object *bo = res->bo; 270 + 271 + /* Take ref against racing releases once lru_lock is unlocked */ 272 + if (!ttm_bo_get_unless_zero(bo)) 273 + continue; 274 + 275 + list_del_init(&res->lru); 276 + spin_unlock(&bdev->lru_lock); 277 + 278 + if (bo->ttm) 279 + ttm_tt_unpopulate(bo->bdev, bo->ttm); 280 + 281 + ttm_bo_put(bo); 282 + spin_lock(&bdev->lru_lock); 283 + } 284 + spin_unlock(&bdev->lru_lock); 285 + } 286 + 262 287 void 
ttm_device_clear_dma_mappings(struct ttm_device *bdev) 263 288 { 264 289 struct ttm_resource_manager *man; 265 - struct ttm_buffer_object *bo; 266 290 unsigned int i, j; 267 291 268 - spin_lock(&bdev->lru_lock); 269 - while (!list_empty(&bdev->pinned)) { 270 - bo = list_first_entry(&bdev->pinned, struct ttm_buffer_object, lru); 271 - /* Take ref against racing releases once lru_lock is unlocked */ 272 - if (ttm_bo_get_unless_zero(bo)) { 273 - list_del_init(&bo->lru); 274 - spin_unlock(&bdev->lru_lock); 275 - 276 - if (bo->ttm) 277 - ttm_tt_unpopulate(bo->bdev, bo->ttm); 278 - 279 - ttm_bo_put(bo); 280 - spin_lock(&bdev->lru_lock); 281 - } 282 - } 292 + ttm_device_clear_lru_dma_mappings(bdev, &bdev->pinned); 283 293 284 294 for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) { 285 295 man = ttm_manager_type(bdev, i); 286 296 if (!man || !man->use_tt) 287 297 continue; 288 298 289 - for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { 290 - while (!list_empty(&man->lru[j])) { 291 - bo = list_first_entry(&man->lru[j], struct ttm_buffer_object, lru); 292 - if (ttm_bo_get_unless_zero(bo)) { 293 - list_del_init(&bo->lru); 294 - spin_unlock(&bdev->lru_lock); 295 - 296 - if (bo->ttm) 297 - ttm_tt_unpopulate(bo->bdev, bo->ttm); 298 - 299 - ttm_bo_put(bo); 300 - spin_lock(&bdev->lru_lock); 301 - } 302 - } 303 - } 299 + for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) 300 + ttm_device_clear_lru_dma_mappings(bdev, &man->lru[j]); 304 301 } 305 - spin_unlock(&bdev->lru_lock); 306 302 } 307 303 EXPORT_SYMBOL(ttm_device_clear_dma_mappings);
+7 -8
drivers/gpu/drm/ttm/ttm_execbuf_util.c
··· 90 90 91 91 list_for_each_entry(entry, list, head) { 92 92 struct ttm_buffer_object *bo = entry->bo; 93 + unsigned int num_fences; 93 94 94 95 ret = ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); 95 96 if (ret == -EALREADY && dups) { ··· 101 100 continue; 102 101 } 103 102 103 + num_fences = min(entry->num_shared, 1u); 104 104 if (!ret) { 105 - if (!entry->num_shared) 106 - continue; 107 - 108 - ret = dma_resv_reserve_shared(bo->base.resv, 109 - entry->num_shared); 105 + ret = dma_resv_reserve_fences(bo->base.resv, 106 + num_fences); 110 107 if (!ret) 111 108 continue; 112 109 } ··· 119 120 ret = ttm_bo_reserve_slowpath(bo, intr, ticket); 120 121 } 121 122 122 - if (!ret && entry->num_shared) 123 - ret = dma_resv_reserve_shared(bo->base.resv, 124 - entry->num_shared); 123 + if (!ret) 124 + ret = dma_resv_reserve_fences(bo->base.resv, 125 + num_fences); 125 126 126 127 if (unlikely(ret != 0)) { 127 128 if (ticket) {
+190 -7
drivers/gpu/drm/ttm/ttm_resource.c
··· 30 30 #include <drm/ttm/ttm_bo_driver.h> 31 31 32 32 /** 33 + * ttm_lru_bulk_move_init - initialize a bulk move structure 34 + * @bulk: the structure to init 35 + * 36 + * For now just memset the structure to zero. 37 + */ 38 + void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk) 39 + { 40 + memset(bulk, 0, sizeof(*bulk)); 41 + } 42 + EXPORT_SYMBOL(ttm_lru_bulk_move_init); 43 + 44 + /** 45 + * ttm_lru_bulk_move_tail - bulk move range of resources to the LRU tail. 46 + * 47 + * @bulk: bulk move structure 48 + * 49 + * Bulk move BOs to the LRU tail, only valid to use when driver makes sure that 50 + * resource order never changes. Should be called with &ttm_device.lru_lock held. 51 + */ 52 + void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk) 53 + { 54 + unsigned i, j; 55 + 56 + for (i = 0; i < TTM_NUM_MEM_TYPES; ++i) { 57 + for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { 58 + struct ttm_lru_bulk_move_pos *pos = &bulk->pos[i][j]; 59 + struct ttm_resource_manager *man; 60 + 61 + if (!pos->first) 62 + continue; 63 + 64 + lockdep_assert_held(&pos->first->bo->bdev->lru_lock); 65 + dma_resv_assert_held(pos->first->bo->base.resv); 66 + dma_resv_assert_held(pos->last->bo->base.resv); 67 + 68 + man = ttm_manager_type(pos->first->bo->bdev, i); 69 + list_bulk_move_tail(&man->lru[j], &pos->first->lru, 70 + &pos->last->lru); 71 + } 72 + } 73 + } 74 + EXPORT_SYMBOL(ttm_lru_bulk_move_tail); 75 + 76 + /* Return the bulk move pos object for this resource */ 77 + static struct ttm_lru_bulk_move_pos * 78 + ttm_lru_bulk_move_pos(struct ttm_lru_bulk_move *bulk, struct ttm_resource *res) 79 + { 80 + return &bulk->pos[res->mem_type][res->bo->priority]; 81 + } 82 + 83 + /* Move the resource to the tail of the bulk move range */ 84 + static void ttm_lru_bulk_move_pos_tail(struct ttm_lru_bulk_move_pos *pos, 85 + struct ttm_resource *res) 86 + { 87 + if (pos->last != res) { 88 + list_move(&res->lru, &pos->last->lru); 89 + pos->last = res; 90 + } 91 + } 92 + 93 + /* Add the 
resource to a bulk_move cursor */ 94 + void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk, 95 + struct ttm_resource *res) 96 + { 97 + struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res); 98 + 99 + if (!pos->first) { 100 + pos->first = res; 101 + pos->last = res; 102 + } else { 103 + ttm_lru_bulk_move_pos_tail(pos, res); 104 + } 105 + } 106 + 107 + /* Remove the resource from a bulk_move range */ 108 + void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk, 109 + struct ttm_resource *res) 110 + { 111 + struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res); 112 + 113 + if (unlikely(pos->first == res && pos->last == res)) { 114 + pos->first = NULL; 115 + pos->last = NULL; 116 + } else if (pos->first == res) { 117 + pos->first = list_next_entry(res, lru); 118 + } else if (pos->last == res) { 119 + pos->last = list_prev_entry(res, lru); 120 + } else { 121 + list_move(&res->lru, &pos->last->lru); 122 + } 123 + } 124 + 125 + /* Move a resource to the LRU or bulk tail */ 126 + void ttm_resource_move_to_lru_tail(struct ttm_resource *res) 127 + { 128 + struct ttm_buffer_object *bo = res->bo; 129 + struct ttm_device *bdev = bo->bdev; 130 + 131 + lockdep_assert_held(&bo->bdev->lru_lock); 132 + 133 + if (bo->pin_count) { 134 + list_move_tail(&res->lru, &bdev->pinned); 135 + 136 + } else if (bo->bulk_move) { 137 + struct ttm_lru_bulk_move_pos *pos = 138 + ttm_lru_bulk_move_pos(bo->bulk_move, res); 139 + 140 + ttm_lru_bulk_move_pos_tail(pos, res); 141 + } else { 142 + struct ttm_resource_manager *man; 143 + 144 + man = ttm_manager_type(bdev, res->mem_type); 145 + list_move_tail(&res->lru, &man->lru[bo->priority]); 146 + } 147 + } 148 + 149 + /** 33 150 * ttm_resource_init - resource object constructure 34 151 * @bo: buffer object this resources is allocated for 35 152 * @place: placement of the resource 36 153 * @res: the resource object to inistilize 37 154 * 38 - * Initialize a new resource object. Counterpart of &ttm_resource_fini. 
155 + * Initialize a new resource object. Counterpart of ttm_resource_fini(). 39 156 */ 40 157 void ttm_resource_init(struct ttm_buffer_object *bo, 41 158 const struct ttm_place *place, ··· 169 52 res->bus.is_iomem = false; 170 53 res->bus.caching = ttm_cached; 171 54 res->bo = bo; 55 + INIT_LIST_HEAD(&res->lru); 172 56 173 57 man = ttm_manager_type(bo->bdev, place->mem_type); 174 58 spin_lock(&bo->bdev->lru_lock); 175 - man->usage += bo->base.size; 59 + man->usage += res->num_pages << PAGE_SHIFT; 60 + if (bo->bulk_move) 61 + ttm_lru_bulk_move_add(bo->bulk_move, res); 62 + else 63 + ttm_resource_move_to_lru_tail(res); 176 64 spin_unlock(&bo->bdev->lru_lock); 177 65 } 178 66 EXPORT_SYMBOL(ttm_resource_init); ··· 188 66 * @res: the resource to clean up 189 67 * 190 68 * Should be used by resource manager backends to clean up the TTM resource 191 - * objects before freeing the underlying structure. Counterpart of 192 - * &ttm_resource_init 69 + * objects before freeing the underlying structure. Makes sure the resource is 70 + * removed from the LRU before destruction. 71 + * Counterpart of ttm_resource_init(). 
193 72 */ 194 73 void ttm_resource_fini(struct ttm_resource_manager *man, 195 74 struct ttm_resource *res) 196 75 { 197 - spin_lock(&man->bdev->lru_lock); 198 - man->usage -= res->bo->base.size; 199 - spin_unlock(&man->bdev->lru_lock); 76 + struct ttm_device *bdev = man->bdev; 77 + 78 + spin_lock(&bdev->lru_lock); 79 + list_del_init(&res->lru); 80 + man->usage -= res->num_pages << PAGE_SHIFT; 81 + spin_unlock(&bdev->lru_lock); 200 82 } 201 83 EXPORT_SYMBOL(ttm_resource_fini); 202 84 ··· 220 94 221 95 if (!*res) 222 96 return; 97 + 98 + if (bo->bulk_move) { 99 + spin_lock(&bo->bdev->lru_lock); 100 + ttm_lru_bulk_move_del(bo->bulk_move, *res); 101 + spin_unlock(&bo->bdev->lru_lock); 102 + } 223 103 224 104 man = ttm_manager_type(bo->bdev, (*res)->mem_type); 225 105 man->func->free(man, *res); ··· 404 272 man->func->debug(man, p); 405 273 } 406 274 EXPORT_SYMBOL(ttm_resource_manager_debug); 275 + 276 + /** 277 + * ttm_resource_manager_first 278 + * 279 + * @man: resource manager to iterate over 280 + * @cursor: cursor to record the position 281 + * 282 + * Returns the first resource from the resource manager. 283 + */ 284 + struct ttm_resource * 285 + ttm_resource_manager_first(struct ttm_resource_manager *man, 286 + struct ttm_resource_cursor *cursor) 287 + { 288 + struct ttm_resource *res; 289 + 290 + lockdep_assert_held(&man->bdev->lru_lock); 291 + 292 + for (cursor->priority = 0; cursor->priority < TTM_MAX_BO_PRIORITY; 293 + ++cursor->priority) 294 + list_for_each_entry(res, &man->lru[cursor->priority], lru) 295 + return res; 296 + 297 + return NULL; 298 + } 299 + 300 + /** 301 + * ttm_resource_manager_next 302 + * 303 + * @man: resource manager to iterate over 304 + * @cursor: cursor to record the position 305 + * @res: the current resource pointer 306 + * 307 + * Returns the next resource from the resource manager. 
308 + */ 309 + struct ttm_resource * 310 + ttm_resource_manager_next(struct ttm_resource_manager *man, 311 + struct ttm_resource_cursor *cursor, 312 + struct ttm_resource *res) 313 + { 314 + lockdep_assert_held(&man->bdev->lru_lock); 315 + 316 + list_for_each_entry_continue(res, &man->lru[cursor->priority], lru) 317 + return res; 318 + 319 + for (++cursor->priority; cursor->priority < TTM_MAX_BO_PRIORITY; 320 + ++cursor->priority) 321 + list_for_each_entry(res, &man->lru[cursor->priority], lru) 322 + return res; 323 + 324 + return NULL; 325 + } 407 326 408 327 static void ttm_kmap_iter_iomap_map_local(struct ttm_kmap_iter *iter, 409 328 struct iosys_map *dmap,
+7 -5
drivers/gpu/drm/ttm/ttm_tt.c
··· 134 134 static void ttm_tt_init_fields(struct ttm_tt *ttm, 135 135 struct ttm_buffer_object *bo, 136 136 uint32_t page_flags, 137 - enum ttm_caching caching) 137 + enum ttm_caching caching, 138 + unsigned long extra_pages) 138 139 { 139 - ttm->num_pages = PAGE_ALIGN(bo->base.size) >> PAGE_SHIFT; 140 + ttm->num_pages = (PAGE_ALIGN(bo->base.size) >> PAGE_SHIFT) + extra_pages; 140 141 ttm->caching = ttm_cached; 141 142 ttm->page_flags = page_flags; 142 143 ttm->dma_address = NULL; ··· 147 146 } 148 147 149 148 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, 150 - uint32_t page_flags, enum ttm_caching caching) 149 + uint32_t page_flags, enum ttm_caching caching, 150 + unsigned long extra_pages) 151 151 { 152 - ttm_tt_init_fields(ttm, bo, page_flags, caching); 152 + ttm_tt_init_fields(ttm, bo, page_flags, caching, extra_pages); 153 153 154 154 if (ttm_tt_alloc_page_directory(ttm)) { 155 155 pr_err("Failed allocating page table\n"); ··· 182 180 { 183 181 int ret; 184 182 185 - ttm_tt_init_fields(ttm, bo, page_flags, caching); 183 + ttm_tt_init_fields(ttm, bo, page_flags, caching, 0); 186 184 187 185 if (page_flags & TTM_TT_FLAG_EXTERNAL) 188 186 ret = ttm_sg_tt_alloc_page_directory(ttm);
+10 -5
drivers/gpu/drm/v3d/v3d_gem.c
··· 259 259 return ret; 260 260 261 261 for (i = 0; i < job->bo_count; i++) { 262 + ret = dma_resv_reserve_fences(job->bo[i]->resv, 1); 263 + if (ret) 264 + goto fail; 265 + 262 266 ret = drm_sched_job_add_implicit_dependencies(&job->base, 263 267 job->bo[i], true); 264 - if (ret) { 265 - drm_gem_unlock_reservations(job->bo, job->bo_count, 266 - acquire_ctx); 267 - return ret; 268 - } 268 + if (ret) 269 + goto fail; 269 270 } 270 271 271 272 return 0; 273 + 274 + fail: 275 + drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); 276 + return ret; 272 277 } 273 278 274 279 /**
+10 -4
drivers/gpu/drm/vc4/vc4_crtc.c
··· 70 70 static unsigned int 71 71 vc4_crtc_get_cob_allocation(struct vc4_dev *vc4, unsigned int channel) 72 72 { 73 + struct vc4_hvs *hvs = vc4->hvs; 73 74 u32 dispbase = HVS_READ(SCALER_DISPBASEX(channel)); 74 75 /* Top/base are supposed to be 4-pixel aligned, but the 75 76 * Raspberry Pi firmware fills the low bits (which are ··· 90 89 { 91 90 struct drm_device *dev = crtc->dev; 92 91 struct vc4_dev *vc4 = to_vc4_dev(dev); 92 + struct vc4_hvs *hvs = vc4->hvs; 93 93 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 94 94 struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state); 95 95 unsigned int cob_size; ··· 125 123 *vpos /= 2; 126 124 127 125 /* Use hpos to correct for field offset in interlaced mode. */ 128 - if (VC4_GET_FIELD(val, SCALER_DISPSTATX_FRAME_COUNT) % 2) 126 + if (vc4_hvs_get_fifo_frame_count(hvs, vc4_crtc_state->assigned_channel) % 2) 129 127 *hpos += mode->crtc_htotal / 2; 130 128 } 131 129 ··· 415 413 static void require_hvs_enabled(struct drm_device *dev) 416 414 { 417 415 struct vc4_dev *vc4 = to_vc4_dev(dev); 416 + struct vc4_hvs *hvs = vc4->hvs; 418 417 419 418 WARN_ON_ONCE((HVS_READ(SCALER_DISPCTRL) & SCALER_DISPCTRL_ENABLE) != 420 419 SCALER_DISPCTRL_ENABLE); ··· 429 426 struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder); 430 427 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 431 428 struct drm_device *dev = crtc->dev; 429 + struct vc4_dev *vc4 = to_vc4_dev(dev); 432 430 int ret; 433 431 434 432 CRTC_WRITE(PV_V_CONTROL, ··· 459 455 vc4_encoder->post_crtc_disable(encoder, state); 460 456 461 457 vc4_crtc_pixelvalve_reset(crtc); 462 - vc4_hvs_stop_channel(dev, channel); 458 + vc4_hvs_stop_channel(vc4->hvs, channel); 463 459 464 460 if (vc4_encoder && vc4_encoder->post_crtc_powerdown) 465 461 vc4_encoder->post_crtc_powerdown(encoder, state); ··· 485 481 int vc4_crtc_disable_at_boot(struct drm_crtc *crtc) 486 482 { 487 483 struct drm_device *drm = crtc->dev; 484 + struct vc4_dev *vc4 = to_vc4_dev(drm); 488 485 struct 
vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 489 486 enum vc4_encoder_type encoder_type; 490 487 const struct vc4_pv_data *pv_data; ··· 507 502 if (!(CRTC_READ(PV_V_CONTROL) & PV_VCONTROL_VIDEN)) 508 503 return 0; 509 504 510 - channel = vc4_hvs_get_fifo_from_output(drm, vc4_crtc->data->hvs_output); 505 + channel = vc4_hvs_get_fifo_from_output(vc4->hvs, vc4_crtc->data->hvs_output); 511 506 if (channel < 0) 512 507 return 0; 513 508 ··· 722 717 struct drm_crtc *crtc = &vc4_crtc->base; 723 718 struct drm_device *dev = crtc->dev; 724 719 struct vc4_dev *vc4 = to_vc4_dev(dev); 720 + struct vc4_hvs *hvs = vc4->hvs; 725 721 u32 chan = vc4_crtc->current_hvs_channel; 726 722 unsigned long flags; 727 723 ··· 741 735 * the CRTC and encoder already reconfigured, leading to 742 736 * underruns. This can be seen when reconfiguring the CRTC. 743 737 */ 744 - vc4_hvs_unmask_underrun(dev, chan); 738 + vc4_hvs_unmask_underrun(hvs, chan); 745 739 } 746 740 spin_unlock(&vc4_crtc->irq_lock); 747 741 spin_unlock_irqrestore(&dev->event_lock, flags);
+8 -7
drivers/gpu/drm/vc4/vc4_drv.h
··· 574 574 575 575 #define V3D_READ(offset) readl(vc4->v3d->regs + offset) 576 576 #define V3D_WRITE(offset, val) writel(val, vc4->v3d->regs + offset) 577 - #define HVS_READ(offset) readl(vc4->hvs->regs + offset) 578 - #define HVS_WRITE(offset, val) writel(val, vc4->hvs->regs + offset) 577 + #define HVS_READ(offset) readl(hvs->regs + offset) 578 + #define HVS_WRITE(offset, val) writel(val, hvs->regs + offset) 579 579 580 580 #define VC4_REG32(reg) { .name = #reg, .offset = reg } 581 581 ··· 933 933 934 934 /* vc4_hvs.c */ 935 935 extern struct platform_driver vc4_hvs_driver; 936 - void vc4_hvs_stop_channel(struct drm_device *dev, unsigned int output); 937 - int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output); 936 + void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int output); 937 + int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output); 938 + u8 vc4_hvs_get_fifo_frame_count(struct vc4_hvs *hvs, unsigned int fifo); 938 939 int vc4_hvs_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state); 939 940 void vc4_hvs_atomic_begin(struct drm_crtc *crtc, struct drm_atomic_state *state); 940 941 void vc4_hvs_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state); 941 942 void vc4_hvs_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state); 942 943 void vc4_hvs_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *state); 943 - void vc4_hvs_dump_state(struct drm_device *dev); 944 - void vc4_hvs_unmask_underrun(struct drm_device *dev, int channel); 945 - void vc4_hvs_mask_underrun(struct drm_device *dev, int channel); 944 + void vc4_hvs_dump_state(struct vc4_hvs *hvs); 945 + void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel); 946 + void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel); 946 947 947 948 /* vc4_kms.c */ 948 949 int vc4_kms_load(struct drm_device *dev);
+8 -1
drivers/gpu/drm/vc4/vc4_gem.c
··· 485 485 * immediately move it to the to-be-rendered queue. 486 486 */ 487 487 if (exec->ct0ca != exec->ct0ea) { 488 + trace_vc4_submit_cl(dev, false, exec->seqno, exec->ct0ca, 489 + exec->ct0ea); 488 490 submit_cl(dev, 0, exec->ct0ca, exec->ct0ea); 489 491 } else { 490 492 struct vc4_exec_info *next; ··· 521 519 */ 522 520 vc4_flush_texture_caches(dev); 523 521 522 + trace_vc4_submit_cl(dev, true, exec->seqno, exec->ct1ca, exec->ct1ea); 524 523 submit_cl(dev, 1, exec->ct1ca, exec->ct1ea); 525 524 } 526 525 ··· 644 641 for (i = 0; i < exec->bo_count; i++) { 645 642 bo = &exec->bo[i]->base; 646 643 647 - ret = dma_resv_reserve_shared(bo->resv, 1); 644 + ret = dma_resv_reserve_fences(bo->resv, 1); 648 645 if (ret) { 649 646 vc4_unlock_bo_reservations(dev, exec, acquire_ctx); 650 647 return ret; ··· 1137 1134 struct ww_acquire_ctx acquire_ctx; 1138 1135 struct dma_fence *in_fence; 1139 1136 int ret = 0; 1137 + 1138 + trace_vc4_submit_cl_ioctl(dev, args->bin_cl_size, 1139 + args->shader_rec_size, 1140 + args->bo_handle_count); 1140 1141 1141 1142 if (!vc4->v3d) { 1142 1143 DRM_DEBUG("VC4_SUBMIT_CL with no VC4 V3D probed\n");
+382 -60
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 99 99 100 100 #define HDMI_14_MAX_TMDS_CLK (340 * 1000 * 1000) 101 101 102 - static bool vc4_hdmi_mode_needs_scrambling(const struct drm_display_mode *mode) 102 + static const char * const output_format_str[] = { 103 + [VC4_HDMI_OUTPUT_RGB] = "RGB", 104 + [VC4_HDMI_OUTPUT_YUV420] = "YUV 4:2:0", 105 + [VC4_HDMI_OUTPUT_YUV422] = "YUV 4:2:2", 106 + [VC4_HDMI_OUTPUT_YUV444] = "YUV 4:4:4", 107 + }; 108 + 109 + static const char *vc4_hdmi_output_fmt_str(enum vc4_hdmi_output_format fmt) 103 110 { 104 - return (mode->clock * 1000) > HDMI_14_MAX_TMDS_CLK; 111 + if (fmt >= ARRAY_SIZE(output_format_str)) 112 + return "invalid"; 113 + 114 + return output_format_str[fmt]; 115 + } 116 + 117 + static unsigned long long 118 + vc4_hdmi_encoder_compute_mode_clock(const struct drm_display_mode *mode, 119 + unsigned int bpc, enum vc4_hdmi_output_format fmt); 120 + 121 + static bool vc4_hdmi_mode_needs_scrambling(const struct drm_display_mode *mode, 122 + unsigned int bpc, 123 + enum vc4_hdmi_output_format fmt) 124 + { 125 + unsigned long long clock = vc4_hdmi_encoder_compute_mode_clock(mode, bpc, fmt); 126 + 127 + return clock > HDMI_14_MAX_TMDS_CLK; 105 128 } 106 129 107 130 static bool vc4_hdmi_is_full_range_rgb(struct vc4_hdmi *vc4_hdmi, ··· 289 266 struct drm_display_mode *mode; 290 267 291 268 list_for_each_entry(mode, &connector->probed_modes, head) { 292 - if (vc4_hdmi_mode_needs_scrambling(mode)) { 269 + if (vc4_hdmi_mode_needs_scrambling(mode, 8, VC4_HDMI_OUTPUT_RGB)) { 293 270 drm_warn_once(drm, "The core clock cannot reach frequencies high enough to support 4k @ 60Hz."); 294 271 drm_warn_once(drm, "Please change your config.txt file to add hdmi_enable_4kp60."); 295 272 } ··· 346 323 347 324 new_state->base.max_bpc = 8; 348 325 new_state->base.max_requested_bpc = 8; 326 + new_state->output_format = VC4_HDMI_OUTPUT_RGB; 349 327 drm_atomic_helper_connector_tv_reset(connector); 350 328 } 351 329 ··· 361 337 if (!new_state) 362 338 return NULL; 363 339 364 - 
new_state->pixel_rate = vc4_state->pixel_rate; 340 + new_state->tmds_char_rate = vc4_state->tmds_char_rate; 341 + new_state->output_bpc = vc4_state->output_bpc; 342 + new_state->output_format = vc4_state->output_format; 365 343 __drm_atomic_helper_connector_duplicate_state(connector, &new_state->base); 366 344 367 345 return &new_state->base; ··· 507 481 DRM_ERROR("Failed to wait for infoframe to start: %d\n", ret); 508 482 } 509 483 484 + static void vc4_hdmi_avi_infoframe_colorspace(struct hdmi_avi_infoframe *frame, 485 + enum vc4_hdmi_output_format fmt) 486 + { 487 + switch (fmt) { 488 + case VC4_HDMI_OUTPUT_RGB: 489 + frame->colorspace = HDMI_COLORSPACE_RGB; 490 + break; 491 + 492 + case VC4_HDMI_OUTPUT_YUV420: 493 + frame->colorspace = HDMI_COLORSPACE_YUV420; 494 + break; 495 + 496 + case VC4_HDMI_OUTPUT_YUV422: 497 + frame->colorspace = HDMI_COLORSPACE_YUV422; 498 + break; 499 + 500 + case VC4_HDMI_OUTPUT_YUV444: 501 + frame->colorspace = HDMI_COLORSPACE_YUV444; 502 + break; 503 + 504 + default: 505 + break; 506 + } 507 + } 508 + 510 509 static void vc4_hdmi_set_avi_infoframe(struct drm_encoder *encoder) 511 510 { 512 511 struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder); 513 512 struct drm_connector *connector = &vc4_hdmi->connector; 514 513 struct drm_connector_state *cstate = connector->state; 514 + struct vc4_hdmi_connector_state *vc4_state = 515 + conn_state_to_vc4_hdmi_conn_state(cstate); 515 516 const struct drm_display_mode *mode = &vc4_hdmi->saved_adjusted_mode; 516 517 union hdmi_infoframe frame; 517 518 int ret; ··· 558 505 HDMI_QUANTIZATION_RANGE_FULL : 559 506 HDMI_QUANTIZATION_RANGE_LIMITED); 560 507 drm_hdmi_avi_infoframe_colorimetry(&frame.avi, cstate); 508 + vc4_hdmi_avi_infoframe_colorspace(&frame.avi, vc4_state->output_format); 561 509 drm_hdmi_avi_infoframe_bars(&frame.avi, cstate); 562 510 563 511 vc4_hdmi_write_infoframe(encoder, &frame); ··· 661 607 if (!vc4_hdmi_supports_scrambling(encoder, mode)) 662 608 return; 663 609 664 - 
if (!vc4_hdmi_mode_needs_scrambling(mode)) 610 + if (!vc4_hdmi_mode_needs_scrambling(mode, 611 + vc4_hdmi->output_bpc, 612 + vc4_hdmi->output_format)) 665 613 return; 666 614 667 615 drm_scdc_set_high_tmds_clock_ratio(vc4_hdmi->ddc, true); ··· 858 802 { 0x0000, 0x0000, 0x1b80, 0x0400 }, 859 803 }; 860 804 805 + /* 806 + * Conversion between Full Range RGB and Full Range YUV422 using the 807 + * BT.709 Colorspace 808 + * 809 + * 810 + * [ 0.181906 0.611804 0.061758 16 ] 811 + * [ -0.100268 -0.337232 0.437500 128 ] 812 + * [ 0.437500 -0.397386 -0.040114 128 ] 813 + * 814 + * Matrix is signed 2p13 fixed point, with signed 9p6 offsets 815 + */ 816 + static const u16 vc5_hdmi_csc_full_rgb_to_limited_yuv422_bt709[3][4] = { 817 + { 0x05d2, 0x1394, 0x01fa, 0x0400 }, 818 + { 0xfccc, 0xf536, 0x0e00, 0x2000 }, 819 + { 0x0e00, 0xf34a, 0xfeb8, 0x2000 }, 820 + }; 821 + 822 + /* 823 + * Conversion between Full Range RGB and Full Range YUV444 using the 824 + * BT.709 Colorspace 825 + * 826 + * [ -0.100268 -0.337232 0.437500 128 ] 827 + * [ 0.437500 -0.397386 -0.040114 128 ] 828 + * [ 0.181906 0.611804 0.061758 16 ] 829 + * 830 + * Matrix is signed 2p13 fixed point, with signed 9p6 offsets 831 + */ 832 + static const u16 vc5_hdmi_csc_full_rgb_to_limited_yuv444_bt709[3][4] = { 833 + { 0xfccc, 0xf536, 0x0e00, 0x2000 }, 834 + { 0x0e00, 0xf34a, 0xfeb8, 0x2000 }, 835 + { 0x05d2, 0x1394, 0x01fa, 0x0400 }, 836 + }; 837 + 861 838 static void vc5_hdmi_set_csc_coeffs(struct vc4_hdmi *vc4_hdmi, 862 839 const u16 coeffs[3][4]) 863 840 { ··· 908 819 struct drm_connector_state *state, 909 820 const struct drm_display_mode *mode) 910 821 { 822 + struct vc4_hdmi_connector_state *vc4_state = 823 + conn_state_to_vc4_hdmi_conn_state(state); 911 824 unsigned long flags; 825 + u32 if_cfg = 0; 826 + u32 if_xbar = 0x543210; 827 + u32 csc_chan_ctl = 0; 912 828 u32 csc_ctl = VC5_MT_CP_CSC_CTL_ENABLE | VC4_SET_FIELD(VC4_HD_CSC_CTL_MODE_CUSTOM, 913 829 VC5_MT_CP_CSC_CTL_MODE); 914 830 915 831 
spin_lock_irqsave(&vc4_hdmi->hw_lock, flags); 916 832 917 - HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, 0x354021); 833 + switch (vc4_state->output_format) { 834 + case VC4_HDMI_OUTPUT_YUV444: 835 + vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_to_limited_yuv444_bt709); 836 + break; 918 837 919 - if (!vc4_hdmi_is_full_range_rgb(vc4_hdmi, mode)) 920 - vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_to_limited_rgb); 921 - else 922 - vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_unity); 838 + case VC4_HDMI_OUTPUT_YUV422: 839 + csc_ctl |= VC4_SET_FIELD(VC5_MT_CP_CSC_CTL_FILTER_MODE_444_TO_422_STANDARD, 840 + VC5_MT_CP_CSC_CTL_FILTER_MODE_444_TO_422) | 841 + VC5_MT_CP_CSC_CTL_USE_444_TO_422 | 842 + VC5_MT_CP_CSC_CTL_USE_RNG_SUPPRESSION; 923 843 844 + csc_chan_ctl |= VC4_SET_FIELD(VC5_MT_CP_CHANNEL_CTL_OUTPUT_REMAP_LEGACY_STYLE, 845 + VC5_MT_CP_CHANNEL_CTL_OUTPUT_REMAP); 846 + 847 + if_cfg |= VC4_SET_FIELD(VC5_DVP_HT_VEC_INTERFACE_CFG_SEL_422_FORMAT_422_LEGACY, 848 + VC5_DVP_HT_VEC_INTERFACE_CFG_SEL_422); 849 + 850 + vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_to_limited_yuv422_bt709); 851 + break; 852 + 853 + case VC4_HDMI_OUTPUT_RGB: 854 + if_xbar = 0x354021; 855 + 856 + if (!vc4_hdmi_is_full_range_rgb(vc4_hdmi, mode)) 857 + vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_to_limited_rgb); 858 + else 859 + vc5_hdmi_set_csc_coeffs(vc4_hdmi, vc5_hdmi_csc_full_rgb_unity); 860 + break; 861 + 862 + default: 863 + break; 864 + } 865 + 866 + HDMI_WRITE(HDMI_VEC_INTERFACE_CFG, if_cfg); 867 + HDMI_WRITE(HDMI_VEC_INTERFACE_XBAR, if_xbar); 868 + HDMI_WRITE(HDMI_CSC_CHANNEL_CTL, csc_chan_ctl); 924 869 HDMI_WRITE(HDMI_CSC_CTL, csc_ctl); 925 870 926 871 spin_unlock_irqrestore(&vc4_hdmi->hw_lock, flags); ··· 1015 892 struct drm_connector_state *state, 1016 893 struct drm_display_mode *mode) 1017 894 { 895 + const struct vc4_hdmi_connector_state *vc4_state = 896 + conn_state_to_vc4_hdmi_conn_state(state); 1018 897 bool hsync_pos = mode->flags & 
DRM_MODE_FLAG_PHSYNC; 1019 898 bool vsync_pos = mode->flags & DRM_MODE_FLAG_PVSYNC; 1020 899 bool interlaced = mode->flags & DRM_MODE_FLAG_INTERLACE; ··· 1064 939 HDMI_WRITE(HDMI_VERTB0, vertb_even); 1065 940 HDMI_WRITE(HDMI_VERTB1, vertb); 1066 941 1067 - switch (state->max_bpc) { 942 + switch (vc4_state->output_bpc) { 1068 943 case 12: 1069 944 gcp = 6; 1070 945 gcp_en = true; ··· 1078 953 gcp = 4; 1079 954 gcp_en = false; 1080 955 break; 956 + } 957 + 958 + /* 959 + * YCC422 is always 36-bit and not considered deep colour so 960 + * doesn't signal in GCP. 961 + */ 962 + if (vc4_state->output_format == VC4_HDMI_OUTPUT_YUV422) { 963 + gcp = 4; 964 + gcp_en = false; 1081 965 } 1082 966 1083 967 reg = HDMI_READ(HDMI_DEEP_COLOR_CONFIG_1); ··· 1156 1022 struct vc4_hdmi_connector_state *vc4_conn_state = 1157 1023 conn_state_to_vc4_hdmi_conn_state(conn_state); 1158 1024 struct drm_display_mode *mode = &vc4_hdmi->saved_adjusted_mode; 1159 - unsigned long pixel_rate = vc4_conn_state->pixel_rate; 1025 + unsigned long tmds_char_rate = vc4_conn_state->tmds_char_rate; 1160 1026 unsigned long bvb_rate, hsm_rate; 1161 1027 unsigned long flags; 1162 1028 int ret; ··· 1179 1045 * Additionally, the AXI clock needs to be at least 25% of 1180 1046 * pixel clock, but HSM ends up being the limiting factor. 
1181 1047 */ 1182 - hsm_rate = max_t(unsigned long, 120000000, (pixel_rate / 100) * 101); 1048 + hsm_rate = max_t(unsigned long, 120000000, (tmds_char_rate / 100) * 101); 1183 1049 ret = clk_set_min_rate(vc4_hdmi->hsm_clock, hsm_rate); 1184 1050 if (ret) { 1185 1051 DRM_ERROR("Failed to set HSM clock rate: %d\n", ret); ··· 1192 1058 goto out; 1193 1059 } 1194 1060 1195 - ret = clk_set_rate(vc4_hdmi->pixel_clock, pixel_rate); 1061 + ret = clk_set_rate(vc4_hdmi->pixel_clock, tmds_char_rate); 1196 1062 if (ret) { 1197 1063 DRM_ERROR("Failed to set pixel clock rate: %d\n", ret); 1198 1064 goto err_put_runtime_pm; ··· 1207 1073 1208 1074 vc4_hdmi_cec_update_clk_div(vc4_hdmi); 1209 1075 1210 - if (pixel_rate > 297000000) 1076 + if (tmds_char_rate > 297000000) 1211 1077 bvb_rate = 300000000; 1212 - else if (pixel_rate > 148500000) 1078 + else if (tmds_char_rate > 148500000) 1213 1079 bvb_rate = 150000000; 1214 1080 else 1215 1081 bvb_rate = 75000000; ··· 1366 1232 struct drm_connector_state *conn_state) 1367 1233 { 1368 1234 struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder); 1235 + struct vc4_hdmi_connector_state *vc4_state = 1236 + conn_state_to_vc4_hdmi_conn_state(conn_state); 1369 1237 1370 1238 mutex_lock(&vc4_hdmi->mutex); 1371 1239 drm_mode_copy(&vc4_hdmi->saved_adjusted_mode, 1372 1240 &crtc_state->adjusted_mode); 1241 + vc4_hdmi->output_bpc = vc4_state->output_bpc; 1242 + vc4_hdmi->output_format = vc4_state->output_format; 1373 1243 mutex_unlock(&vc4_hdmi->mutex); 1244 + } 1245 + 1246 + static bool 1247 + vc4_hdmi_sink_supports_format_bpc(const struct vc4_hdmi *vc4_hdmi, 1248 + const struct drm_display_info *info, 1249 + const struct drm_display_mode *mode, 1250 + unsigned int format, unsigned int bpc) 1251 + { 1252 + struct drm_device *dev = vc4_hdmi->connector.dev; 1253 + u8 vic = drm_match_cea_mode(mode); 1254 + 1255 + if (vic == 1 && bpc != 8) { 1256 + drm_dbg(dev, "VIC1 requires a bpc of 8, got %u\n", bpc); 1257 + return false; 1258 + } 1259 + 1260 + 
if (!info->is_hdmi && 1261 + (format != VC4_HDMI_OUTPUT_RGB || bpc != 8)) { 1262 + drm_dbg(dev, "DVI Monitors require an RGB output at 8 bpc\n"); 1263 + return false; 1264 + } 1265 + 1266 + switch (format) { 1267 + case VC4_HDMI_OUTPUT_RGB: 1268 + drm_dbg(dev, "RGB Format, checking the constraints.\n"); 1269 + 1270 + if (!(info->color_formats & DRM_COLOR_FORMAT_RGB444)) 1271 + return false; 1272 + 1273 + if (bpc == 10 && !(info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30)) { 1274 + drm_dbg(dev, "10 BPC but sink doesn't support Deep Color 30.\n"); 1275 + return false; 1276 + } 1277 + 1278 + if (bpc == 12 && !(info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_36)) { 1279 + drm_dbg(dev, "12 BPC but sink doesn't support Deep Color 36.\n"); 1280 + return false; 1281 + } 1282 + 1283 + drm_dbg(dev, "RGB format supported in that configuration.\n"); 1284 + 1285 + return true; 1286 + 1287 + case VC4_HDMI_OUTPUT_YUV422: 1288 + drm_dbg(dev, "YUV422 format, checking the constraints.\n"); 1289 + 1290 + if (!(info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) { 1291 + drm_dbg(dev, "Sink doesn't support YUV422.\n"); 1292 + return false; 1293 + } 1294 + 1295 + if (bpc != 12) { 1296 + drm_dbg(dev, "YUV422 only supports 12 bpc.\n"); 1297 + return false; 1298 + } 1299 + 1300 + drm_dbg(dev, "YUV422 format supported in that configuration.\n"); 1301 + 1302 + return true; 1303 + 1304 + case VC4_HDMI_OUTPUT_YUV444: 1305 + drm_dbg(dev, "YUV444 format, checking the constraints.\n"); 1306 + 1307 + if (!(info->color_formats & DRM_COLOR_FORMAT_YCBCR444)) { 1308 + drm_dbg(dev, "Sink doesn't support YUV444.\n"); 1309 + return false; 1310 + } 1311 + 1312 + if (bpc == 10 && !(info->edid_hdmi_ycbcr444_dc_modes & DRM_EDID_HDMI_DC_30)) { 1313 + drm_dbg(dev, "10 BPC but sink doesn't support Deep Color 30.\n"); 1314 + return false; 1315 + } 1316 + 1317 + if (bpc == 12 && !(info->edid_hdmi_ycbcr444_dc_modes & DRM_EDID_HDMI_DC_36)) { 1318 + drm_dbg(dev, "12 BPC but sink doesn't support Deep Color 
36.\n"); 1319 + return false; 1320 + } 1321 + 1322 + drm_dbg(dev, "YUV444 format supported in that configuration.\n"); 1323 + 1324 + return true; 1325 + } 1326 + 1327 + return false; 1328 + } 1329 + 1330 + static enum drm_mode_status 1331 + vc4_hdmi_encoder_clock_valid(const struct vc4_hdmi *vc4_hdmi, 1332 + unsigned long long clock) 1333 + { 1334 + const struct drm_connector *connector = &vc4_hdmi->connector; 1335 + const struct drm_display_info *info = &connector->display_info; 1336 + 1337 + if (clock > vc4_hdmi->variant->max_pixel_clock) 1338 + return MODE_CLOCK_HIGH; 1339 + 1340 + if (vc4_hdmi->disable_4kp60 && clock > HDMI_14_MAX_TMDS_CLK) 1341 + return MODE_CLOCK_HIGH; 1342 + 1343 + if (info->max_tmds_clock && clock > (info->max_tmds_clock * 1000)) 1344 + return MODE_CLOCK_HIGH; 1345 + 1346 + return MODE_OK; 1347 + } 1348 + 1349 + static unsigned long long 1350 + vc4_hdmi_encoder_compute_mode_clock(const struct drm_display_mode *mode, 1351 + unsigned int bpc, 1352 + enum vc4_hdmi_output_format fmt) 1353 + { 1354 + unsigned long long clock = mode->clock * 1000; 1355 + 1356 + if (mode->flags & DRM_MODE_FLAG_DBLCLK) 1357 + clock = clock * 2; 1358 + 1359 + if (fmt == VC4_HDMI_OUTPUT_YUV422) 1360 + bpc = 8; 1361 + 1362 + clock = clock * bpc; 1363 + do_div(clock, 8); 1364 + 1365 + return clock; 1366 + } 1367 + 1368 + static int 1369 + vc4_hdmi_encoder_compute_clock(const struct vc4_hdmi *vc4_hdmi, 1370 + struct vc4_hdmi_connector_state *vc4_state, 1371 + const struct drm_display_mode *mode, 1372 + unsigned int bpc, unsigned int fmt) 1373 + { 1374 + unsigned long long clock; 1375 + 1376 + clock = vc4_hdmi_encoder_compute_mode_clock(mode, bpc, fmt); 1377 + if (vc4_hdmi_encoder_clock_valid(vc4_hdmi, clock) != MODE_OK) 1378 + return -EINVAL; 1379 + 1380 + vc4_state->tmds_char_rate = clock; 1381 + 1382 + return 0; 1383 + } 1384 + 1385 + static int 1386 + vc4_hdmi_encoder_compute_format(const struct vc4_hdmi *vc4_hdmi, 1387 + struct vc4_hdmi_connector_state *vc4_state, 
1388 + const struct drm_display_mode *mode, 1389 + unsigned int bpc) 1390 + { 1391 + struct drm_device *dev = vc4_hdmi->connector.dev; 1392 + const struct drm_connector *connector = &vc4_hdmi->connector; 1393 + const struct drm_display_info *info = &connector->display_info; 1394 + unsigned int format; 1395 + 1396 + drm_dbg(dev, "Trying with an RGB output\n"); 1397 + 1398 + format = VC4_HDMI_OUTPUT_RGB; 1399 + if (vc4_hdmi_sink_supports_format_bpc(vc4_hdmi, info, mode, format, bpc)) { 1400 + int ret; 1401 + 1402 + ret = vc4_hdmi_encoder_compute_clock(vc4_hdmi, vc4_state, 1403 + mode, bpc, format); 1404 + if (!ret) { 1405 + vc4_state->output_format = format; 1406 + return 0; 1407 + } 1408 + } 1409 + 1410 + drm_dbg(dev, "Failed, Trying with an YUV422 output\n"); 1411 + 1412 + format = VC4_HDMI_OUTPUT_YUV422; 1413 + if (vc4_hdmi_sink_supports_format_bpc(vc4_hdmi, info, mode, format, bpc)) { 1414 + int ret; 1415 + 1416 + ret = vc4_hdmi_encoder_compute_clock(vc4_hdmi, vc4_state, 1417 + mode, bpc, format); 1418 + if (!ret) { 1419 + vc4_state->output_format = format; 1420 + return 0; 1421 + } 1422 + } 1423 + 1424 + drm_dbg(dev, "Failed. 
No Format Supported for that bpc count.\n"); 1425 + 1426 + return -EINVAL; 1427 + } 1428 + 1429 + static int 1430 + vc4_hdmi_encoder_compute_config(const struct vc4_hdmi *vc4_hdmi, 1431 + struct vc4_hdmi_connector_state *vc4_state, 1432 + const struct drm_display_mode *mode) 1433 + { 1434 + struct drm_device *dev = vc4_hdmi->connector.dev; 1435 + struct drm_connector_state *conn_state = &vc4_state->base; 1436 + unsigned int max_bpc = clamp_t(unsigned int, conn_state->max_bpc, 8, 12); 1437 + unsigned int bpc; 1438 + int ret; 1439 + 1440 + for (bpc = max_bpc; bpc >= 8; bpc -= 2) { 1441 + drm_dbg(dev, "Trying with a %d bpc output\n", bpc); 1442 + 1443 + ret = vc4_hdmi_encoder_compute_format(vc4_hdmi, vc4_state, 1444 + mode, bpc); 1445 + if (ret) 1446 + continue; 1447 + 1448 + vc4_state->output_bpc = bpc; 1449 + 1450 + drm_dbg(dev, 1451 + "Mode %ux%u @ %uHz: Found configuration: bpc: %u, fmt: %s, clock: %llu\n", 1452 + mode->hdisplay, mode->vdisplay, drm_mode_vrefresh(mode), 1453 + vc4_state->output_bpc, 1454 + vc4_hdmi_output_fmt_str(vc4_state->output_format), 1455 + vc4_state->tmds_char_rate); 1456 + 1457 + break; 1458 + } 1459 + 1460 + return ret; 1374 1461 } 1375 1462 1376 1463 #define WIFI_2_4GHz_CH1_MIN_FREQ 2400000000ULL ··· 1604 1249 struct vc4_hdmi_connector_state *vc4_state = conn_state_to_vc4_hdmi_conn_state(conn_state); 1605 1250 struct drm_display_mode *mode = &crtc_state->adjusted_mode; 1606 1251 struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder); 1607 - unsigned long long pixel_rate = mode->clock * 1000; 1608 - unsigned long long tmds_rate; 1252 + unsigned long long tmds_char_rate = mode->clock * 1000; 1253 + unsigned long long tmds_bit_rate; 1254 + int ret; 1609 1255 1610 1256 if (vc4_hdmi->variant->unsupported_odd_h_timings && 1611 1257 !(mode->flags & DRM_MODE_FLAG_DBLCLK) && ··· 1620 1264 * bandwidth). Slightly lower the frequency to bring it out of 1621 1265 * the WiFi range. 
1622 1266 */ 1623 - tmds_rate = pixel_rate * 10; 1267 + tmds_bit_rate = tmds_char_rate * 10; 1624 1268 if (vc4_hdmi->disable_wifi_frequencies && 1625 - (tmds_rate >= WIFI_2_4GHz_CH1_MIN_FREQ && 1626 - tmds_rate <= WIFI_2_4GHz_CH1_MAX_FREQ)) { 1269 + (tmds_bit_rate >= WIFI_2_4GHz_CH1_MIN_FREQ && 1270 + tmds_bit_rate <= WIFI_2_4GHz_CH1_MAX_FREQ)) { 1627 1271 mode->clock = 238560; 1628 - pixel_rate = mode->clock * 1000; 1272 + tmds_char_rate = mode->clock * 1000; 1629 1273 } 1630 1274 1631 - if (conn_state->max_bpc == 12) { 1632 - pixel_rate = pixel_rate * 150; 1633 - do_div(pixel_rate, 100); 1634 - } else if (conn_state->max_bpc == 10) { 1635 - pixel_rate = pixel_rate * 125; 1636 - do_div(pixel_rate, 100); 1637 - } 1638 - 1639 - if (mode->flags & DRM_MODE_FLAG_DBLCLK) 1640 - pixel_rate = pixel_rate * 2; 1641 - 1642 - if (pixel_rate > vc4_hdmi->variant->max_pixel_clock) 1643 - return -EINVAL; 1644 - 1645 - if (vc4_hdmi->disable_4kp60 && (pixel_rate > HDMI_14_MAX_TMDS_CLK)) 1646 - return -EINVAL; 1647 - 1648 - vc4_state->pixel_rate = pixel_rate; 1275 + ret = vc4_hdmi_encoder_compute_config(vc4_hdmi, vc4_state, mode); 1276 + if (ret) 1277 + return ret; 1649 1278 1650 1279 return 0; 1651 1280 } ··· 1647 1306 (mode->hsync_end % 2) || (mode->htotal % 2))) 1648 1307 return MODE_H_ILLEGAL; 1649 1308 1650 - if ((mode->clock * 1000) > vc4_hdmi->variant->max_pixel_clock) 1651 - return MODE_CLOCK_HIGH; 1652 - 1653 - if (vc4_hdmi->disable_4kp60 && vc4_hdmi_mode_needs_scrambling(mode)) 1654 - return MODE_CLOCK_HIGH; 1655 - 1656 - return MODE_OK; 1309 + return vc4_hdmi_encoder_clock_valid(vc4_hdmi, mode->clock * 1000); 1657 1310 } 1658 1311 1659 1312 static const struct drm_encoder_helper_funcs vc4_hdmi_encoder_helper_funcs = { ··· 2901 2566 if (max_rate < 550000000) 2902 2567 vc4_hdmi->disable_4kp60 = true; 2903 2568 } 2904 - 2905 - /* 2906 - * If we boot without any cable connected to the HDMI connector, 2907 - * the firmware will skip the HSM initialization and leave it 2908 - * 
with a rate of 0, resulting in a bus lockup when we're 2909 - * accessing the registers even if it's enabled. 2910 - * 2911 - * Let's put a sensible default at runtime_resume so that we 2912 - * don't end up in this situation. 2913 - */ 2914 - ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ); 2915 - if (ret) 2916 - goto err_put_ddc; 2917 2569 2918 2570 /* 2919 2571 * We need to have the device powered up at this point to call
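The core of this patch's clock handling is `vc4_hdmi_encoder_compute_mode_clock()`: the TMDS character rate is the (possibly doubled) mode clock scaled by bpc/8, with YUV 4:2:2 always treated as 8 bits per character, and scrambling is needed above the HDMI 1.4 limit of 340 MHz. A minimal user-space sketch of that arithmetic (names hypothetical, mirroring the kernel logic above):

```c
#include <stdbool.h>
#include <stdint.h>

#define HDMI_14_MAX_TMDS_CLK (340ULL * 1000 * 1000)

enum hdmi_output_format { FMT_RGB, FMT_YUV422, FMT_YUV444, FMT_YUV420 };

/*
 * Mirrors vc4_hdmi_encoder_compute_mode_clock() from the diff: the
 * TMDS character rate scales with the per-channel bpc, except YCC422,
 * which always packs into 8-bit TMDS characters.
 */
static uint64_t tmds_char_rate(uint64_t mode_clock_khz, unsigned int bpc,
			       enum hdmi_output_format fmt, bool dblclk)
{
	uint64_t clock = mode_clock_khz * 1000;

	if (dblclk)
		clock *= 2;

	if (fmt == FMT_YUV422)
		bpc = 8;	/* YCC422 is not deep colour on the wire */

	return clock * bpc / 8;
}

static bool needs_scrambling(uint64_t rate)
{
	return rate > HDMI_14_MAX_TMDS_CLK;
}
```

For a 4k60 mode (594 MHz pixel clock), 8 bpc RGB lands exactly at 594 MHz and already needs scrambling, while 12 bpc RGB would push the character rate to 891 MHz; switching to YUV 4:2:2 brings it back down to 594 MHz, which is why the negotiation below falls back to it.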
+22 -1
drivers/gpu/drm/vc4/vc4_hdmi.h
··· 121 121 bool streaming; 122 122 }; 123 123 124 + enum vc4_hdmi_output_format { 125 + VC4_HDMI_OUTPUT_RGB, 126 + VC4_HDMI_OUTPUT_YUV422, 127 + VC4_HDMI_OUTPUT_YUV444, 128 + VC4_HDMI_OUTPUT_YUV420, 129 + }; 130 + 124 131 /* General HDMI hardware state. */ 125 132 struct vc4_hdmi { 126 133 struct vc4_hdmi_audio audio; ··· 227 220 * the scrambler on? Protected by @mutex. 228 221 */ 229 222 bool scdc_enabled; 223 + 224 + /** 225 + * @output_bpc: Copy of @vc4_connector_state.output_bpc for use 226 + * outside of KMS hooks. Protected by @mutex. 227 + */ 228 + unsigned int output_bpc; 229 + 230 + /** 231 + * @output_format: Copy of @vc4_connector_state.output_format 232 + * for use outside of KMS hooks. Protected by @mutex. 233 + */ 234 + enum vc4_hdmi_output_format output_format; 230 235 }; 231 236 232 237 static inline struct vc4_hdmi * ··· 257 238 258 239 struct vc4_hdmi_connector_state { 259 240 struct drm_connector_state base; 260 - unsigned long long pixel_rate; 241 + unsigned long long tmds_char_rate; 242 + unsigned int output_bpc; 243 + enum vc4_hdmi_output_format output_format; 261 244 }; 262 245 263 246 static inline struct vc4_hdmi_connector_state *
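The `output_bpc` and `output_format` fields added to `vc4_hdmi_connector_state` above are filled by the negotiation loop in `vc4_hdmi_encoder_compute_config()`: try the highest bpc first, and for each bpc prefer RGB, falling back to YUV 4:2:2 when the resulting TMDS character rate is too high. A simplified sketch of that fallback order (hypothetical names; the sink capability checks from `vc4_hdmi_sink_supports_format_bpc()` are omitted):

```c
#include <stdbool.h>
#include <stdint.h>

enum out_fmt { OUT_RGB, OUT_YUV422 };

struct out_cfg {
	unsigned int bpc;
	enum out_fmt fmt;
	uint64_t rate;
};

static uint64_t char_rate(uint64_t clk_hz, unsigned int bpc, enum out_fmt fmt)
{
	/* YCC422 always uses 8-bit TMDS characters */
	return clk_hz * (fmt == OUT_YUV422 ? 8 : bpc) / 8;
}

/* Returns true and fills *cfg with the first combination that fits. */
static bool compute_config(uint64_t clk_hz, unsigned int max_bpc,
			   uint64_t max_rate, struct out_cfg *cfg)
{
	static const enum out_fmt fmts[] = { OUT_RGB, OUT_YUV422 };
	unsigned int bpc, i;

	if (max_bpc > 12)
		max_bpc = 12;

	for (bpc = max_bpc; bpc >= 8; bpc -= 2) {
		for (i = 0; i < 2; i++) {
			uint64_t rate = char_rate(clk_hz, bpc, fmts[i]);

			if (rate <= max_rate) {
				*cfg = (struct out_cfg){ bpc, fmts[i], rate };
				return true;
			}
		}
	}

	return false;
}
```

With a 594 MHz clock and a 600 MHz limit this picks 12 bpc YUV 4:2:2 (RGB at 12 bpc would need 891 MHz), matching the 4k60 deep-colour fallback the patch enables.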
+1 -1
drivers/gpu/drm/vc4/vc4_hdmi_phy.c
··· 365 365 { 366 366 const struct phy_lane_settings *chan0_settings, *chan1_settings, *chan2_settings, *clock_settings; 367 367 const struct vc4_hdmi_variant *variant = vc4_hdmi->variant; 368 - unsigned long long pixel_freq = conn_state->pixel_rate; 368 + unsigned long long pixel_freq = conn_state->tmds_char_rate; 369 369 unsigned long long vco_freq; 370 370 unsigned char word_sel; 371 371 unsigned long flags;
+6
drivers/gpu/drm/vc4/vc4_hdmi_regs.h
··· 54 54 HDMI_CSC_24_23, 55 55 HDMI_CSC_32_31, 56 56 HDMI_CSC_34_33, 57 + HDMI_CSC_CHANNEL_CTL, 57 58 HDMI_CSC_CTL, 58 59 59 60 /* ··· 120 119 HDMI_TX_PHY_POWERDOWN_CTL, 121 120 HDMI_TX_PHY_RESET_CTL, 122 121 HDMI_TX_PHY_TMDS_CLK_WORD_SEL, 122 + HDMI_VEC_INTERFACE_CFG, 123 123 HDMI_VEC_INTERFACE_XBAR, 124 124 HDMI_VERTA0, 125 125 HDMI_VERTA1, ··· 246 244 VC4_HDMI_REG(HDMI_SCRAMBLER_CTL, 0x1c4), 247 245 248 246 VC5_DVP_REG(HDMI_CLOCK_STOP, 0x0bc), 247 + VC5_DVP_REG(HDMI_VEC_INTERFACE_CFG, 0x0ec), 249 248 VC5_DVP_REG(HDMI_VEC_INTERFACE_XBAR, 0x0f0), 250 249 251 250 VC5_PHY_REG(HDMI_TX_PHY_RESET_CTL, 0x000), ··· 292 289 VC5_CSC_REG(HDMI_CSC_24_23, 0x010), 293 290 VC5_CSC_REG(HDMI_CSC_32_31, 0x014), 294 291 VC5_CSC_REG(HDMI_CSC_34_33, 0x018), 292 + VC5_CSC_REG(HDMI_CSC_CHANNEL_CTL, 0x02c), 295 293 }; 296 294 297 295 static const struct vc4_hdmi_register __maybe_unused vc5_hdmi_hdmi1_fields[] = { ··· 328 324 VC4_HDMI_REG(HDMI_SCRAMBLER_CTL, 0x1c4), 329 325 330 326 VC5_DVP_REG(HDMI_CLOCK_STOP, 0x0bc), 327 + VC5_DVP_REG(HDMI_VEC_INTERFACE_CFG, 0x0ec), 331 328 VC5_DVP_REG(HDMI_VEC_INTERFACE_XBAR, 0x0f0), 332 329 333 330 VC5_PHY_REG(HDMI_TX_PHY_RESET_CTL, 0x000), ··· 374 369 VC5_CSC_REG(HDMI_CSC_24_23, 0x010), 375 370 VC5_CSC_REG(HDMI_CSC_32_31, 0x014), 376 371 VC5_CSC_REG(HDMI_CSC_34_33, 0x018), 372 + VC5_CSC_REG(HDMI_CSC_CHANNEL_CTL, 0x02c), 377 373 }; 378 374 379 375 static inline
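The `HDMI_CSC_*` registers mapped above hold the BT.709 conversion matrices added in vc4_hdmi.c, which the comments there describe as signed 2p13 fixed point (13 fractional bits) with signed 9p6 offsets (6 fractional bits), stored two's-complement in 16-bit fields. A hypothetical encoding helper showing the format (note the decimal values in the matrix comments are approximations, so rounding of individual coefficients may differ by one LSB):

```c
#include <stdint.h>

/* Signed 2p13: multiply by 2^13 and keep the two's-complement bits. */
static uint16_t csc_coeff_2p13(double v)
{
	return (uint16_t)(int16_t)(v * 8192.0);	/* truncates toward zero */
}

/* Signed 9p6: multiply by 2^6. */
static uint16_t csc_offset_9p6(double v)
{
	return (uint16_t)(int16_t)(v * 64.0);
}
```

This is how the exact entries in the tables decode: the 0.4375 coefficient is `0x0e00`, and the 128 and 16 offsets are `0x2000` and `0x0400` respectively.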
+78 -49
drivers/gpu/drm/vc4/vc4_hvs.c
··· 64 64 VC4_REG32(SCALER_OLEDCOEF2), 65 65 }; 66 66 67 - void vc4_hvs_dump_state(struct drm_device *dev) 67 + void vc4_hvs_dump_state(struct vc4_hvs *hvs) 68 68 { 69 - struct vc4_dev *vc4 = to_vc4_dev(dev); 70 - struct drm_printer p = drm_info_printer(&vc4->hvs->pdev->dev); 69 + struct drm_printer p = drm_info_printer(&hvs->pdev->dev); 71 70 int i; 72 71 73 - drm_print_regset32(&p, &vc4->hvs->regset); 72 + drm_print_regset32(&p, &hvs->regset); 74 73 75 74 DRM_INFO("HVS ctx:\n"); 76 75 for (i = 0; i < 64; i += 4) { 77 76 DRM_INFO("0x%08x (%s): 0x%08x 0x%08x 0x%08x 0x%08x\n", 78 77 i * 4, i < HVS_BOOTLOADER_DLIST_END ? "B" : "D", 79 - readl((u32 __iomem *)vc4->hvs->dlist + i + 0), 80 - readl((u32 __iomem *)vc4->hvs->dlist + i + 1), 81 - readl((u32 __iomem *)vc4->hvs->dlist + i + 2), 82 - readl((u32 __iomem *)vc4->hvs->dlist + i + 3)); 78 + readl((u32 __iomem *)hvs->dlist + i + 0), 79 + readl((u32 __iomem *)hvs->dlist + i + 1), 80 + readl((u32 __iomem *)hvs->dlist + i + 2), 81 + readl((u32 __iomem *)hvs->dlist + i + 3)); 83 82 } 84 83 } 85 84 ··· 156 157 return 0; 157 158 } 158 159 159 - static void vc4_hvs_lut_load(struct drm_crtc *crtc) 160 + static void vc4_hvs_lut_load(struct vc4_hvs *hvs, 161 + struct vc4_crtc *vc4_crtc) 160 162 { 161 - struct drm_device *dev = crtc->dev; 162 - struct vc4_dev *vc4 = to_vc4_dev(dev); 163 - struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 163 + struct drm_crtc *crtc = &vc4_crtc->base; 164 164 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state); 165 165 u32 i; 166 166 ··· 179 181 HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]); 180 182 } 181 183 182 - static void vc4_hvs_update_gamma_lut(struct drm_crtc *crtc) 184 + static void vc4_hvs_update_gamma_lut(struct vc4_hvs *hvs, 185 + struct vc4_crtc *vc4_crtc) 183 186 { 184 - struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 185 - struct drm_color_lut *lut = crtc->state->gamma_lut->data; 186 - u32 length = drm_color_lut_size(crtc->state->gamma_lut); 187 + struct drm_crtc_state 
*crtc_state = vc4_crtc->base.state; 188 + struct drm_color_lut *lut = crtc_state->gamma_lut->data; 189 + u32 length = drm_color_lut_size(crtc_state->gamma_lut); 187 190 u32 i; 188 191 189 192 for (i = 0; i < length; i++) { ··· 193 194 vc4_crtc->lut_b[i] = drm_color_lut_extract(lut[i].blue, 8); 194 195 } 195 196 196 - vc4_hvs_lut_load(crtc); 197 + vc4_hvs_lut_load(hvs, vc4_crtc); 197 198 } 198 199 199 - int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output) 200 + u8 vc4_hvs_get_fifo_frame_count(struct vc4_hvs *hvs, unsigned int fifo) 200 201 { 201 - struct vc4_dev *vc4 = to_vc4_dev(dev); 202 + u8 field = 0; 203 + 204 + switch (fifo) { 205 + case 0: 206 + field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1), 207 + SCALER_DISPSTAT1_FRCNT0); 208 + break; 209 + case 1: 210 + field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1), 211 + SCALER_DISPSTAT1_FRCNT1); 212 + break; 213 + case 2: 214 + field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT2), 215 + SCALER_DISPSTAT2_FRCNT2); 216 + break; 217 + } 218 + 219 + return field; 220 + } 221 + 222 + int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output) 223 + { 202 224 u32 reg; 203 225 int ret; 204 226 205 - if (!vc4->hvs->hvs5) 227 + if (!hvs->hvs5) 206 228 return output; 207 229 208 230 switch (output) { ··· 270 250 } 271 251 } 272 252 273 - static int vc4_hvs_init_channel(struct vc4_dev *vc4, struct drm_crtc *crtc, 253 + static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc, 274 254 struct drm_display_mode *mode, bool oneshot) 275 255 { 256 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 276 257 struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state); 277 258 unsigned int chan = vc4_crtc_state->assigned_channel; 278 259 bool interlace = mode->flags & DRM_MODE_FLAG_INTERLACE; ··· 291 270 */ 292 271 dispctrl = SCALER_DISPCTRLX_ENABLE; 293 272 294 - if (!vc4->hvs->hvs5) 273 + if (!hvs->hvs5) 295 274 dispctrl |= VC4_SET_FIELD(mode->hdisplay, 296 275 
SCALER_DISPCTRLX_WIDTH) | 297 276 VC4_SET_FIELD(mode->vdisplay, ··· 312 291 313 292 HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx | 314 293 SCALER_DISPBKGND_AUTOHS | 315 - ((!vc4->hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) | 294 + ((!hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) | 316 295 (interlace ? SCALER_DISPBKGND_INTERLACE : 0)); 317 296 318 297 /* Reload the LUT, since the SRAMs would have been disabled if 319 298 * all CRTCs had SCALER_DISPBKGND_GAMMA unset at once. 320 299 */ 321 - vc4_hvs_lut_load(crtc); 300 + vc4_hvs_lut_load(hvs, vc4_crtc); 322 301 323 302 return 0; 324 303 } 325 304 326 - void vc4_hvs_stop_channel(struct drm_device *dev, unsigned int chan) 305 + void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan) 327 306 { 328 - struct vc4_dev *vc4 = to_vc4_dev(dev); 329 - 330 307 if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE) 331 308 return; 332 309 ··· 378 359 return 0; 379 360 } 380 361 381 - static void vc4_hvs_update_dlist(struct drm_crtc *crtc) 362 + static void vc4_hvs_install_dlist(struct drm_crtc *crtc) 382 363 { 383 364 struct drm_device *dev = crtc->dev; 384 365 struct vc4_dev *vc4 = to_vc4_dev(dev); 366 + struct vc4_hvs *hvs = vc4->hvs; 367 + struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state); 368 + 369 + HVS_WRITE(SCALER_DISPLISTX(vc4_state->assigned_channel), 370 + vc4_state->mm.start); 371 + } 372 + 373 + static void vc4_hvs_update_dlist(struct drm_crtc *crtc) 374 + { 375 + struct drm_device *dev = crtc->dev; 385 376 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 386 377 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state); 387 378 unsigned long flags; ··· 408 379 crtc->state->event = NULL; 409 380 } 410 381 411 - HVS_WRITE(SCALER_DISPLISTX(vc4_state->assigned_channel), 412 - vc4_state->mm.start); 413 - 414 382 spin_unlock_irqrestore(&dev->event_lock, flags); 415 - } else { 416 - HVS_WRITE(SCALER_DISPLISTX(vc4_state->assigned_channel), 417 - vc4_state->mm.start); 418 383 } 419 384 420 
385 spin_lock_irqsave(&vc4_crtc->irq_lock, flags); ··· 437 414 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 438 415 bool oneshot = vc4_crtc->feeds_txp; 439 416 417 + vc4_hvs_install_dlist(crtc); 440 418 vc4_hvs_update_dlist(crtc); 441 - vc4_hvs_init_channel(vc4, crtc, mode, oneshot); 419 + vc4_hvs_init_channel(vc4->hvs, crtc, mode, oneshot); 442 420 } 443 421 444 422 void vc4_hvs_atomic_disable(struct drm_crtc *crtc, 445 423 struct drm_atomic_state *state) 446 424 { 447 425 struct drm_device *dev = crtc->dev; 426 + struct vc4_dev *vc4 = to_vc4_dev(dev); 448 427 struct drm_crtc_state *old_state = drm_atomic_get_old_crtc_state(state, crtc); 449 428 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(old_state); 450 429 unsigned int chan = vc4_state->assigned_channel; 451 430 452 - vc4_hvs_stop_channel(dev, chan); 431 + vc4_hvs_stop_channel(vc4->hvs, chan); 453 432 } 454 433 455 434 void vc4_hvs_atomic_flush(struct drm_crtc *crtc, ··· 461 436 crtc); 462 437 struct drm_device *dev = crtc->dev; 463 438 struct vc4_dev *vc4 = to_vc4_dev(dev); 439 + struct vc4_hvs *hvs = vc4->hvs; 440 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 464 441 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc->state); 442 + unsigned int channel = vc4_state->assigned_channel; 465 443 struct drm_plane *plane; 466 444 struct vc4_plane_state *vc4_plane_state; 467 445 bool debug_dump_regs = false; ··· 474 446 475 447 if (debug_dump_regs) { 476 448 DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc)); 477 - vc4_hvs_dump_state(dev); 449 + vc4_hvs_dump_state(hvs); 478 450 } 479 451 480 452 /* Copy all the active planes' dlist contents to the hardware dlist. */ ··· 505 477 /* This sets a black background color fill, as is the case 506 478 * with other DRM drivers. 
507 479 */ 508 - HVS_WRITE(SCALER_DISPBKGNDX(vc4_state->assigned_channel), 509 - HVS_READ(SCALER_DISPBKGNDX(vc4_state->assigned_channel)) | 480 + HVS_WRITE(SCALER_DISPBKGNDX(channel), 481 + HVS_READ(SCALER_DISPBKGNDX(channel)) | 510 482 SCALER_DISPBKGND_FILL); 511 483 512 484 /* Only update DISPLIST if the CRTC was already running and is not ··· 516 488 * If the CRTC is being disabled, there's no point in updating this 517 489 * information. 518 490 */ 519 - if (crtc->state->active && old_state->active) 491 + if (crtc->state->active && old_state->active) { 492 + vc4_hvs_install_dlist(crtc); 520 493 vc4_hvs_update_dlist(crtc); 494 + } 521 495 522 496 if (crtc->state->color_mgmt_changed) { 523 - u32 dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(vc4_state->assigned_channel)); 497 + u32 dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(channel)); 524 498 525 499 if (crtc->state->gamma_lut) { 526 - vc4_hvs_update_gamma_lut(crtc); 500 + vc4_hvs_update_gamma_lut(hvs, vc4_crtc); 527 501 dispbkgndx |= SCALER_DISPBKGND_GAMMA; 528 502 } else { 529 503 /* Unsetting DISPBKGND_GAMMA skips the gamma lut step ··· 534 504 */ 535 505 dispbkgndx &= ~SCALER_DISPBKGND_GAMMA; 536 506 } 537 - HVS_WRITE(SCALER_DISPBKGNDX(vc4_state->assigned_channel), dispbkgndx); 507 + HVS_WRITE(SCALER_DISPBKGNDX(channel), dispbkgndx); 538 508 } 539 509 540 510 if (debug_dump_regs) { 541 511 DRM_INFO("CRTC %d HVS after:\n", drm_crtc_index(crtc)); 542 - vc4_hvs_dump_state(dev); 512 + vc4_hvs_dump_state(hvs); 543 513 } 544 514 } 545 515 546 - void vc4_hvs_mask_underrun(struct drm_device *dev, int channel) 516 + void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel) 547 517 { 548 - struct vc4_dev *vc4 = to_vc4_dev(dev); 549 518 u32 dispctrl = HVS_READ(SCALER_DISPCTRL); 550 519 551 520 dispctrl &= ~SCALER_DISPCTRL_DSPEISLUR(channel); ··· 552 523 HVS_WRITE(SCALER_DISPCTRL, dispctrl); 553 524 } 554 525 555 - void vc4_hvs_unmask_underrun(struct drm_device *dev, int channel) 526 + void vc4_hvs_unmask_underrun(struct 
vc4_hvs *hvs, int channel) 556 527 { 557 - struct vc4_dev *vc4 = to_vc4_dev(dev); 558 528 u32 dispctrl = HVS_READ(SCALER_DISPCTRL); 559 529 560 530 dispctrl |= SCALER_DISPCTRL_DSPEISLUR(channel); ··· 575 547 { 576 548 struct drm_device *dev = data; 577 549 struct vc4_dev *vc4 = to_vc4_dev(dev); 550 + struct vc4_hvs *hvs = vc4->hvs; 578 551 irqreturn_t irqret = IRQ_NONE; 579 552 int channel; 580 553 u32 control; ··· 588 559 /* Interrupt masking is not always honored, so check it here. */ 589 560 if (status & SCALER_DISPSTAT_EUFLOW(channel) && 590 561 control & SCALER_DISPCTRL_DSPEISLUR(channel)) { 591 - vc4_hvs_mask_underrun(dev, channel); 562 + vc4_hvs_mask_underrun(hvs, channel); 592 563 vc4_hvs_report_underrun(dev); 593 564 594 565 irqret = IRQ_HANDLED;
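The new `vc4_hvs_get_fifo_frame_count()` above pulls the per-FIFO frame counters out of `SCALER_DISPSTAT1/2` with `VC4_GET_FIELD`, which is plain mask-and-shift extraction. A self-contained sketch of that pattern (the mask and shift values below are hypothetical, not the real `FRCNTx` layout):

```c
#include <stdint.h>

/* VC4_GET_FIELD-style extraction: mask the register, shift down. */
static uint8_t get_field(uint32_t reg, uint32_t mask, unsigned int shift)
{
	return (uint8_t)((reg & mask) >> shift);
}
```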
+5
drivers/gpu/drm/vc4/vc4_irq.c
···
51 51
52 52 #include "vc4_drv.h"
53 53 #include "vc4_regs.h"
54 + #include "vc4_trace.h"
54 55
55 56 #define V3D_DRIVER_IRQS (V3D_INT_OUTOMEM | \
56 57 V3D_INT_FLDONE | \
···
124 123 if (!exec)
125 124 return;
126 125
126 + trace_vc4_bcl_end_irq(dev, exec->seqno);
127 +
127 128 vc4_move_job_to_render(dev, exec);
128 129 next = vc4_first_bin_job(vc4);
129 130
···
163 160
164 161 if (!exec)
165 162 return;
163 +
164 + trace_vc4_rcl_end_irq(dev, exec->seqno);
166 165
167 166 vc4->finished_seqno++;
168 167 list_move_tail(&exec->head, &vc4->job_done_list);
+41 -7
drivers/gpu/drm/vc4/vc4_kms.c
···
32 32 int fifo;
33 33 };
34 34
35 - static struct vc4_ctm_state *to_vc4_ctm_state(struct drm_private_state *priv)
35 + static struct vc4_ctm_state *
36 + to_vc4_ctm_state(const struct drm_private_state *priv)
36 37 {
37 38 return container_of(priv, struct vc4_ctm_state, base);
38 39 }
···
50 49 };
51 50
52 51 static struct vc4_hvs_state *
53 - to_vc4_hvs_state(struct drm_private_state *priv)
52 + to_vc4_hvs_state(const struct drm_private_state *priv)
54 53 {
55 54 return container_of(priv, struct vc4_hvs_state, base);
56 55 }
···
62 61 };
63 62
64 63 static struct vc4_load_tracker_state *
65 - to_vc4_load_tracker_state(struct drm_private_state *priv)
64 + to_vc4_load_tracker_state(const struct drm_private_state *priv)
66 65 {
67 66 return container_of(priv, struct vc4_load_tracker_state, base);
68 67 }
···
158 157 static void
159 158 vc4_ctm_commit(struct vc4_dev *vc4, struct drm_atomic_state *state)
160 159 {
160 + struct vc4_hvs *hvs = vc4->hvs;
161 161 struct vc4_ctm_state *ctm_state = to_vc4_ctm_state(vc4->ctm_manager.state);
162 162 struct drm_color_ctm *ctm = ctm_state->ctm;
163 163
···
232 230 static void vc4_hvs_pv_muxing_commit(struct vc4_dev *vc4,
233 231 struct drm_atomic_state *state)
234 232 {
233 + struct vc4_hvs *hvs = vc4->hvs;
235 234 struct drm_crtc_state *crtc_state;
236 235 struct drm_crtc *crtc;
237 236 unsigned int i;
···
273 270 static void vc5_hvs_pv_muxing_commit(struct vc4_dev *vc4,
274 271 struct drm_atomic_state *state)
275 272 {
273 + struct vc4_hvs *hvs = vc4->hvs;
276 274 struct drm_crtc_state *crtc_state;
277 275 struct drm_crtc *crtc;
278 276 unsigned char mux;
···
366 362 continue;
367 363
368 364 vc4_crtc_state = to_vc4_crtc_state(new_crtc_state);
369 - vc4_hvs_mask_underrun(dev, vc4_crtc_state->assigned_channel);
365 + vc4_hvs_mask_underrun(hvs, vc4_crtc_state->assigned_channel);
370 366 }
371 367
372 368 for (channel = 0; channel < HVS_NUM_CHANNELS; channel++) {
···
389 385 }
390 386
391 387 if (vc4->hvs->hvs5) {
388 + unsigned long state_rate = max(old_hvs_state->core_clock_rate,
389 + new_hvs_state->core_clock_rate);
392 390 unsigned long core_rate = max_t(unsigned long,
393 - 500000000,
394 - new_hvs_state->core_clock_rate);
391 + 500000000, state_rate);
395 392
393 + drm_dbg(dev, "Raising the core clock at %lu Hz\n", core_rate);
394 +
395 + /*
396 + * Do a temporary request on the core clock during the
397 + * modeset.
398 + */
396 399 clk_set_min_rate(hvs->core_clk, core_rate);
397 400 }
401 +
398 402 drm_atomic_helper_commit_modeset_disables(dev, state);
399 403
400 404 vc4_ctm_commit(vc4, state);
···
412 400 else
413 401 vc4_hvs_pv_muxing_commit(vc4, state);
414 402
415 - drm_atomic_helper_commit_planes(dev, state, 0);
403 + drm_atomic_helper_commit_planes(dev, state,
404 + DRM_PLANE_COMMIT_ACTIVE_ONLY);
416 405
417 406 drm_atomic_helper_commit_modeset_enables(dev, state);
418 407
···
429 416 drm_dbg(dev, "Running the core clock at %lu Hz\n",
430 417 new_hvs_state->core_clock_rate);
431 418
419 + /*
420 + * Request a clock rate based on the current HVS
421 + * requirements.
422 + */
432 423 clk_set_min_rate(hvs->core_clk, new_hvs_state->core_clock_rate);
433 424 }
434 425 }
···
717 700 kfree(hvs_state);
718 701 }
719 702
703 + static void vc4_hvs_channels_print_state(struct drm_printer *p,
704 + const struct drm_private_state *state)
705 + {
706 + struct vc4_hvs_state *hvs_state = to_vc4_hvs_state(state);
707 + unsigned int i;
708 +
709 + drm_printf(p, "HVS State\n");
710 + drm_printf(p, "\tCore Clock Rate: %lu\n", hvs_state->core_clock_rate);
711 +
712 + for (i = 0; i < HVS_NUM_CHANNELS; i++) {
713 + drm_printf(p, "\tChannel %d\n", i);
714 + drm_printf(p, "\t\tin use=%d\n", hvs_state->fifo_state[i].in_use);
715 + drm_printf(p, "\t\tload=%lu\n", hvs_state->fifo_state[i].fifo_load);
716 + }
717 + }
718 +
720 719 static const struct drm_private_state_funcs vc4_hvs_state_funcs = {
721 720 .atomic_duplicate_state = vc4_hvs_channels_duplicate_state,
722 721 .atomic_destroy_state = vc4_hvs_channels_destroy_state,
722 + .atomic_print_state = vc4_hvs_channels_print_state,
723 723 };
724 724
725 725 static void vc4_hvs_channels_obj_fini(struct drm_device *dev, void *unused)
+26 -2
drivers/gpu/drm/vc4/vc4_regs.h
···
379 379 # define SCALER_DISPSTATX_MODE_EOF 3
380 380 # define SCALER_DISPSTATX_FULL BIT(29)
381 381 # define SCALER_DISPSTATX_EMPTY BIT(28)
382 - # define SCALER_DISPSTATX_FRAME_COUNT_MASK VC4_MASK(17, 12)
383 - # define SCALER_DISPSTATX_FRAME_COUNT_SHIFT 12
384 382 # define SCALER_DISPSTATX_LINE_MASK VC4_MASK(11, 0)
385 383 # define SCALER_DISPSTATX_LINE_SHIFT 0
386 384
···
401 403 (x) * (SCALER_DISPBKGND1 - \
402 404 SCALER_DISPBKGND0))
403 405 #define SCALER_DISPSTAT1 0x00000058
406 + # define SCALER_DISPSTAT1_FRCNT0_MASK VC4_MASK(23, 18)
407 + # define SCALER_DISPSTAT1_FRCNT0_SHIFT 18
408 + # define SCALER_DISPSTAT1_FRCNT1_MASK VC4_MASK(17, 12)
409 + # define SCALER_DISPSTAT1_FRCNT1_SHIFT 12
410 +
404 411 #define SCALER_DISPSTATX(x) (SCALER_DISPSTAT0 + \
405 412 (x) * (SCALER_DISPSTAT1 - \
406 413 SCALER_DISPSTAT0))
414 +
407 415 #define SCALER_DISPBASE1 0x0000005c
408 416 #define SCALER_DISPBASEX(x) (SCALER_DISPBASE0 + \
409 417 (x) * (SCALER_DISPBASE1 - \
···
419 415 (x) * (SCALER_DISPCTRL1 - \
420 416 SCALER_DISPCTRL0))
421 417 #define SCALER_DISPBKGND2 0x00000064
418 +
422 419 #define SCALER_DISPSTAT2 0x00000068
420 + # define SCALER_DISPSTAT2_FRCNT2_MASK VC4_MASK(17, 12)
421 + # define SCALER_DISPSTAT2_FRCNT2_SHIFT 12
422 +
423 423 #define SCALER_DISPBASE2 0x0000006c
424 424 #define SCALER_DISPALPHA2 0x00000070
425 425 #define SCALER_GAMADDR 0x00000078
···
782 774 # define VC4_HD_CSC_CTL_RGB2YCC BIT(1)
783 775 # define VC4_HD_CSC_CTL_ENABLE BIT(0)
784 776
777 + # define VC5_MT_CP_CSC_CTL_USE_444_TO_422 BIT(6)
778 + # define VC5_MT_CP_CSC_CTL_FILTER_MODE_444_TO_422_MASK \
779 + VC4_MASK(5, 4)
780 + # define VC5_MT_CP_CSC_CTL_FILTER_MODE_444_TO_422_STANDARD \
781 + 3
782 + # define VC5_MT_CP_CSC_CTL_USE_RNG_SUPPRESSION BIT(3)
785 783 # define VC5_MT_CP_CSC_CTL_ENABLE BIT(2)
786 784 # define VC5_MT_CP_CSC_CTL_MODE_MASK VC4_MASK(1, 0)
787 785
786 + # define VC5_MT_CP_CHANNEL_CTL_OUTPUT_REMAP_MASK \
787 + VC4_MASK(7, 6)
788 + # define VC5_MT_CP_CHANNEL_CTL_OUTPUT_REMAP_LEGACY_STYLE \
789 + 2
790 +
788 791 # define VC4_DVP_HT_CLOCK_STOP_PIXEL BIT(1)
792 +
793 + # define VC5_DVP_HT_VEC_INTERFACE_CFG_SEL_422_MASK \
794 + VC4_MASK(3, 2)
795 + # define VC5_DVP_HT_VEC_INTERFACE_CFG_SEL_422_FORMAT_422_LEGACY \
796 + 2
789 797
790 798 /* HVS display list information. */
791 799 #define HVS_BOOTLOADER_DLIST_END 32
+95
drivers/gpu/drm/vc4/vc4_trace.h
···
52 52 __entry->dev, __entry->seqno)
53 53 );
54 54
55 + TRACE_EVENT(vc4_submit_cl_ioctl,
56 + TP_PROTO(struct drm_device *dev, u32 bin_cl_size, u32 shader_rec_size, u32 bo_count),
57 + TP_ARGS(dev, bin_cl_size, shader_rec_size, bo_count),
58 +
59 + TP_STRUCT__entry(
60 + __field(u32, dev)
61 + __field(u32, bin_cl_size)
62 + __field(u32, shader_rec_size)
63 + __field(u32, bo_count)
64 + ),
65 +
66 + TP_fast_assign(
67 + __entry->dev = dev->primary->index;
68 + __entry->bin_cl_size = bin_cl_size;
69 + __entry->shader_rec_size = shader_rec_size;
70 + __entry->bo_count = bo_count;
71 + ),
72 +
73 + TP_printk("dev=%u, bin_cl_size=%u, shader_rec_size=%u, bo_count=%u",
74 + __entry->dev,
75 + __entry->bin_cl_size,
76 + __entry->shader_rec_size,
77 + __entry->bo_count)
78 + );
79 +
80 + TRACE_EVENT(vc4_submit_cl,
81 + TP_PROTO(struct drm_device *dev, bool is_render,
82 + uint64_t seqno,
83 + u32 ctnqba, u32 ctnqea),
84 + TP_ARGS(dev, is_render, seqno, ctnqba, ctnqea),
85 +
86 + TP_STRUCT__entry(
87 + __field(u32, dev)
88 + __field(bool, is_render)
89 + __field(u64, seqno)
90 + __field(u32, ctnqba)
91 + __field(u32, ctnqea)
92 + ),
93 +
94 + TP_fast_assign(
95 + __entry->dev = dev->primary->index;
96 + __entry->is_render = is_render;
97 + __entry->seqno = seqno;
98 + __entry->ctnqba = ctnqba;
99 + __entry->ctnqea = ctnqea;
100 + ),
101 +
102 + TP_printk("dev=%u, %s, seqno=%llu, 0x%08x..0x%08x",
103 + __entry->dev,
104 + __entry->is_render ? "RCL" : "BCL",
105 + __entry->seqno,
106 + __entry->ctnqba,
107 + __entry->ctnqea)
108 + );
109 +
110 + TRACE_EVENT(vc4_bcl_end_irq,
111 + TP_PROTO(struct drm_device *dev,
112 + uint64_t seqno),
113 + TP_ARGS(dev, seqno),
114 +
115 + TP_STRUCT__entry(
116 + __field(u32, dev)
117 + __field(u64, seqno)
118 + ),
119 +
120 + TP_fast_assign(
121 + __entry->dev = dev->primary->index;
122 + __entry->seqno = seqno;
123 + ),
124 +
125 + TP_printk("dev=%u, seqno=%llu",
126 + __entry->dev,
127 + __entry->seqno)
128 + );
129 +
130 + TRACE_EVENT(vc4_rcl_end_irq,
131 + TP_PROTO(struct drm_device *dev,
132 + uint64_t seqno),
133 + TP_ARGS(dev, seqno),
134 +
135 + TP_STRUCT__entry(
136 + __field(u32, dev)
137 + __field(u64, seqno)
138 + ),
139 +
140 + TP_fast_assign(
141 + __entry->dev = dev->primary->index;
142 + __entry->seqno = seqno;
143 + ),
144 +
145 + TP_printk("dev=%u, seqno=%llu",
146 + __entry->dev,
147 + __entry->seqno)
148 + );
149 +
55 150 #endif /* _VC4_TRACE_H_ */
56 151
57 152 /* This part must be outside protection */
+7 -5
drivers/gpu/drm/vgem/vgem_fence.c
···
157 157 }
158 158
159 159 /* Expose the fence via the dma-buf */
160 - ret = 0;
161 160 dma_resv_lock(resv, NULL);
162 - if (arg->flags & VGEM_FENCE_WRITE)
163 - dma_resv_add_excl_fence(resv, fence);
164 - else if ((ret = dma_resv_reserve_shared(resv, 1)) == 0)
165 - dma_resv_add_shared_fence(resv, fence);
161 + ret = dma_resv_reserve_fences(resv, 1);
162 + if (!ret) {
163 + if (arg->flags & VGEM_FENCE_WRITE)
164 + dma_resv_add_excl_fence(resv, fence);
165 + else
166 + dma_resv_add_shared_fence(resv, fence);
167 + }
166 168 dma_resv_unlock(resv);
167 169
168 170 /* Record the fence in our idr for later signaling */
+2
drivers/gpu/drm/virtio/virtgpu_display.c
···
179 179 DRM_DEBUG("add mode: %dx%d\n", width, height);
180 180 mode = drm_cvt_mode(connector->dev, width, height, 60,
181 181 false, false, false);
182 + if (!mode)
183 + return count;
182 184 mode->type |= DRM_MODE_TYPE_PREFERRED;
183 185 drm_mode_probed_add(connector, mode);
184 186 count++;
+9
drivers/gpu/drm/virtio/virtgpu_gem.c
···
214 214
215 215 int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
216 216 {
217 + unsigned int i;
217 218 int ret;
218 219
219 220 if (objs->nents == 1) {
···
222 221 } else {
223 222 ret = drm_gem_lock_reservations(objs->objs, objs->nents,
224 223 &objs->ticket);
224 + }
225 + if (ret)
226 + return ret;
227 +
228 + for (i = 0; i < objs->nents; ++i) {
229 + ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
230 + if (ret)
231 + return ret;
225 232 }
226 233 return ret;
227 234 }
+1 -2
drivers/gpu/drm/virtio/virtgpu_ioctl.c
···
609 609 if (!vgdev->has_resource_blob)
610 610 return -EINVAL;
611 611
612 - if ((rc_blob->blob_flags & ~VIRTGPU_BLOB_FLAG_USE_MASK) ||
613 - !rc_blob->blob_flags)
612 + if (rc_blob->blob_flags & ~VIRTGPU_BLOB_FLAG_USE_MASK)
614 613 return -EINVAL;
615 614
616 615 if (rc_blob->blob_flags & VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE) {
+11 -5
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
···
747 747 struct vmw_fence_obj *fence)
748 748 {
749 749 struct ttm_device *bdev = bo->bdev;
750 -
751 750 struct vmw_private *dev_priv =
752 751 container_of(bdev, struct vmw_private, bdev);
752 + int ret;
753 753
754 - if (fence == NULL) {
754 + if (fence == NULL)
755 755 vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL);
756 + else
757 + dma_fence_get(&fence->base);
758 +
759 + ret = dma_resv_reserve_fences(bo->base.resv, 1);
760 + if (!ret)
756 761 dma_resv_add_excl_fence(bo->base.resv, &fence->base);
757 - dma_fence_put(&fence->base);
758 - } else
759 - dma_resv_add_excl_fence(bo->base.resv, &fence->base);
762 + else
763 + /* Last resort fallback when we are OOM */
764 + dma_fence_wait(&fence->base, false);
765 + dma_fence_put(&fence->base);
760 766 }
761 767
762 768
+8 -5
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
···
528 528 *seqno = atomic_add_return(1, &dev_priv->marker_seq);
529 529 } while (*seqno == 0);
530 530
531 - if (!(vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_FENCE)) {
531 + if (!vmw_has_fences(dev_priv)) {
532 532
533 533 /*
534 534 * Don't request hardware to send a fence. The
···
675 675 */
676 676 bool vmw_cmd_supported(struct vmw_private *vmw)
677 677 {
678 - if ((vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
679 - SVGA_CAP_CMD_BUFFERS_2)) != 0)
680 - return true;
678 + bool has_cmdbufs =
679 + (vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
680 + SVGA_CAP_CMD_BUFFERS_2)) != 0;
681 + if (vmw_is_svga_v3(vmw))
682 + return (has_cmdbufs &&
683 + (vmw->capabilities & SVGA_CAP_GBOBJECTS) != 0);
681 684 /*
682 685 * We have FIFO cmd's
683 686 */
684 - return vmw->fifo_mem != NULL;
687 + return has_cmdbufs || vmw->fifo_mem != NULL;
685 688 }
+8 -12
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
···
1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT
2 2 /**************************************************************************
3 3 *
4 - * Copyright 2009-2016 VMware, Inc., Palo Alto, CA., USA
4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
5 5 *
6 6 * Permission is hereby granted, free of charge, to any person obtaining a
7 7 * copy of this software and associated documentation files (the
···
848 848
849 849
850 850 dev_priv->capabilities = vmw_read(dev_priv, SVGA_REG_CAPABILITIES);
851 -
851 + vmw_print_bitmap(&dev_priv->drm, "Capabilities",
852 + dev_priv->capabilities,
853 + cap1_names, ARRAY_SIZE(cap1_names));
852 854 if (dev_priv->capabilities & SVGA_CAP_CAP2_REGISTER) {
853 855 dev_priv->capabilities2 = vmw_read(dev_priv, SVGA_REG_CAP2);
856 + vmw_print_bitmap(&dev_priv->drm, "Capabilities2",
857 + dev_priv->capabilities2,
858 + cap2_names, ARRAY_SIZE(cap2_names));
854 859 }
855 -
856 860
857 861 ret = vmw_dma_select_mode(dev_priv);
858 862 if (unlikely(ret != 0)) {
···
943 939 "MOB limits: max mob size = %u kB, max mob pages = %u\n",
944 940 dev_priv->max_mob_size / 1024, dev_priv->max_mob_pages);
945 941
946 - vmw_print_bitmap(&dev_priv->drm, "Capabilities",
947 - dev_priv->capabilities,
948 - cap1_names, ARRAY_SIZE(cap1_names));
949 - if (dev_priv->capabilities & SVGA_CAP_CAP2_REGISTER)
950 - vmw_print_bitmap(&dev_priv->drm, "Capabilities2",
951 - dev_priv->capabilities2,
952 - cap2_names, ARRAY_SIZE(cap2_names));
953 -
954 942 ret = vmw_dma_masks(dev_priv);
955 943 if (unlikely(ret != 0))
956 944 goto out_err0;
···
980 984 }
981 985
982 986 if (dev_priv->capabilities & SVGA_CAP_IRQMASK) {
983 - ret = vmw_irq_install(&dev_priv->drm, pdev->irq);
987 + ret = vmw_irq_install(dev_priv);
984 988 if (ret != 0) {
985 989 drm_err(&dev_priv->drm,
986 990 "Failed installing irq: %d\n", ret);
+21 -2
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
···
1 1 /* SPDX-License-Identifier: GPL-2.0 OR MIT */
2 2 /**************************************************************************
3 3 *
4 - * Copyright 2009-2021 VMware, Inc., Palo Alto, CA., USA
4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
5 5 *
6 6 * Permission is hereby granted, free of charge, to any person obtaining a
7 7 * copy of this software and associated documentation files (the
···
66 66 #define VMWGFX_PCI_ID_SVGA3 0x0406
67 67
68 68 /*
69 + * This has to match get_count_order(SVGA_IRQFLAG_MAX)
70 + */
71 + #define VMWGFX_MAX_NUM_IRQS 6
72 +
73 + /*
69 74 * Perhaps we should have sysfs entries for these.
70 75 */
71 76 #define VMWGFX_NUM_GB_CONTEXT 256
···
106 101 * struct vmw_buffer_object - TTM buffer object with vmwgfx additions
107 102 * @base: The TTM buffer object
108 103 * @res_tree: RB tree of resources using this buffer object as a backing MOB
104 + * @base_mapped_count: ttm BO mapping count; used by KMS atomic helpers.
109 105 * @cpu_writers: Number of synccpu write grabs. Protected by reservation when
110 106 * increased. May be decreased without reservation.
111 107 * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB
···
117 111 struct vmw_buffer_object {
118 112 struct ttm_buffer_object base;
119 113 struct rb_root res_tree;
114 + /* For KMS atomic helpers: ttm bo mapping count */
115 + atomic_t base_mapped_count;
116 +
120 117 atomic_t cpu_writers;
121 118 /* Not ref-counted. Protected by binding_mutex */
122 119 struct vmw_resource *dx_query_ctx;
···
537 528 bool has_mob;
538 529 spinlock_t hw_lock;
539 530 bool assume_16bpp;
531 + u32 irqs[VMWGFX_MAX_NUM_IRQS];
532 + u32 num_irq_vectors;
540 533
541 534 enum vmw_sm_type sm_type;
542 535
···
1165 1154 * IRQs and wating - vmwgfx_irq.c
1166 1155 */
1167 1156
1168 - extern int vmw_irq_install(struct drm_device *dev, int irq);
1157 + extern int vmw_irq_install(struct vmw_private *dev_priv);
1169 1158 extern void vmw_irq_uninstall(struct drm_device *dev);
1170 1159 extern bool vmw_seqno_passed(struct vmw_private *dev_priv,
1171 1160 uint32_t seqno);
···
1688 1677 vmw_write(vmw, SVGA_REG_IRQ_STATUS, status);
1689 1678 else
1690 1679 outl(status, vmw->io_start + SVGA_IRQSTATUS_PORT);
1680 + }
1681 +
1682 + static inline bool vmw_has_fences(struct vmw_private *vmw)
1683 + {
1684 + if ((vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
1685 + SVGA_CAP_CMD_BUFFERS_2)) != 0)
1686 + return true;
1687 + return (vmw_fifo_caps(vmw) & SVGA_FIFO_CAP_FENCE) != 0;
1691 1688 }
1692 1689
1693 1690 #endif
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
···
483 483
484 484 static int vmw_fb_kms_framebuffer(struct fb_info *info)
485 485 {
486 - struct drm_mode_fb_cmd2 mode_cmd;
486 + struct drm_mode_fb_cmd2 mode_cmd = {0};
487 487 struct vmw_fb_par *par = info->par;
488 488 struct fb_var_screeninfo *var = &info->var;
489 489 struct drm_framebuffer *cur_fb;
+21 -7
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
···
82 82 return container_of(fence->base.lock, struct vmw_fence_manager, lock);
83 83 }
84 84
85 + static u32 vmw_fence_goal_read(struct vmw_private *vmw)
86 + {
87 + if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
88 + return vmw_read(vmw, SVGA_REG_FENCE_GOAL);
89 + else
90 + return vmw_fifo_mem_read(vmw, SVGA_FIFO_FENCE_GOAL);
91 + }
92 +
93 + static void vmw_fence_goal_write(struct vmw_private *vmw, u32 value)
94 + {
95 + if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
96 + vmw_write(vmw, SVGA_REG_FENCE_GOAL, value);
97 + else
98 + vmw_fifo_mem_write(vmw, SVGA_FIFO_FENCE_GOAL, value);
99 + }
100 +
85 101 /*
86 102 * Note on fencing subsystem usage of irqs:
87 103 * Typically the vmw_fences_update function is called
···
408 392 if (likely(!fman->seqno_valid))
409 393 return false;
410 394
411 - goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL);
395 + goal_seqno = vmw_fence_goal_read(fman->dev_priv);
412 396 if (likely(passed_seqno - goal_seqno >= VMW_FENCE_WRAP))
413 397 return false;
414 398
···
416 400 list_for_each_entry(fence, &fman->fence_list, head) {
417 401 if (!list_empty(&fence->seq_passed_actions)) {
418 402 fman->seqno_valid = true;
419 - vmw_fifo_mem_write(fman->dev_priv,
420 - SVGA_FIFO_FENCE_GOAL,
421 - fence->base.seqno);
403 + vmw_fence_goal_write(fman->dev_priv,
404 + fence->base.seqno);
422 405 break;
423 406 }
424 407 }
···
449 434 if (dma_fence_is_signaled_locked(&fence->base))
450 435 return false;
451 436
452 - goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL);
437 + goal_seqno = vmw_fence_goal_read(fman->dev_priv);
453 438 if (likely(fman->seqno_valid &&
454 439 goal_seqno - fence->base.seqno < VMW_FENCE_WRAP))
455 440 return false;
456 441
457 - vmw_fifo_mem_write(fman->dev_priv, SVGA_FIFO_FENCE_GOAL,
458 - fence->base.seqno);
442 + vmw_fence_goal_write(fman->dev_priv, fence->base.seqno);
459 443 fman->seqno_valid = true;
460 444
461 445 return true;
+15 -12
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
···
1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT
2 2 /**************************************************************************
3 3 *
4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA
4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
5 5 *
6 6 * Permission is hereby granted, free of charge, to any person obtaining a
7 7 * copy of this software and associated documentation files (the
···
27 27
28 28 #include "vmwgfx_drv.h"
29 29 #include "vmwgfx_devcaps.h"
30 - #include <drm/vmwgfx_drm.h>
31 30 #include "vmwgfx_kms.h"
31 +
32 + #include <drm/vmwgfx_drm.h>
33 + #include <linux/pci.h>
32 34
33 35 int vmw_getparam_ioctl(struct drm_device *dev, void *data,
34 36 struct drm_file *file_priv)
···
64 62 break;
65 63 case DRM_VMW_PARAM_FIFO_HW_VERSION:
66 64 {
67 - if ((dev_priv->capabilities & SVGA_CAP_GBOBJECTS)) {
65 + if ((dev_priv->capabilities & SVGA_CAP_GBOBJECTS))
68 66 param->value = SVGA3D_HWVERSION_WS8_B1;
69 - break;
70 - }
71 -
72 - param->value =
73 - vmw_fifo_mem_read(dev_priv,
74 - ((vmw_fifo_caps(dev_priv) &
75 - SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ?
76 - SVGA_FIFO_3D_HWVERSION_REVISED :
77 - SVGA_FIFO_3D_HWVERSION));
67 + else
68 + param->value = vmw_fifo_mem_read(
69 + dev_priv,
70 + ((vmw_fifo_caps(dev_priv) &
71 + SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ?
72 + SVGA_FIFO_3D_HWVERSION_REVISED :
73 + SVGA_FIFO_3D_HWVERSION));
78 74 break;
79 75 }
80 76 case DRM_VMW_PARAM_MAX_SURF_MEMORY:
···
107 107 break;
108 108 case DRM_VMW_PARAM_GL43:
109 109 param->value = has_gl43_context(dev_priv);
110 + break;
111 + case DRM_VMW_PARAM_DEVICE_ID:
112 + param->value = to_pci_dev(dev_priv->drm.dev)->device;
110 113 break;
111 114 default:
112 115 return -EINVAL;
+67 -14
drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
···
32 32
33 33 #define VMW_FENCE_WRAP (1 << 24)
34 34
35 + static u32 vmw_irqflag_fence_goal(struct vmw_private *vmw)
36 + {
37 + if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
38 + return SVGA_IRQFLAG_REG_FENCE_GOAL;
39 + else
40 + return SVGA_IRQFLAG_FENCE_GOAL;
41 + }
42 +
35 43 /**
36 44 * vmw_thread_fn - Deferred (process context) irq handler
37 45 *
···
104 96 wake_up_all(&dev_priv->fifo_queue);
105 97
106 98 if ((masked_status & (SVGA_IRQFLAG_ANY_FENCE |
107 - SVGA_IRQFLAG_FENCE_GOAL)) &&
99 + vmw_irqflag_fence_goal(dev_priv))) &&
108 100 !test_and_set_bit(VMW_IRQTHREAD_FENCE, dev_priv->irqthread_pending))
109 101 ret = IRQ_WAKE_THREAD;
110 102
···
145 137 if (likely(dev_priv->last_read_seqno - seqno < VMW_FENCE_WRAP))
146 138 return true;
147 139
148 - if (!(vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_FENCE) &&
149 - vmw_fifo_idle(dev_priv, seqno))
140 + if (!vmw_has_fences(dev_priv) && vmw_fifo_idle(dev_priv, seqno))
150 141 return true;
151 142
152 143 /**
···
167 160 unsigned long timeout)
168 161 {
169 162 struct vmw_fifo_state *fifo_state = dev_priv->fifo;
163 + bool fifo_down = false;
170 164
171 165 uint32_t count = 0;
172 166 uint32_t signal_seq;
···
184 176 */
185 177
186 178 if (fifo_idle) {
187 - down_read(&fifo_state->rwsem);
188 179 if (dev_priv->cman) {
189 180 ret = vmw_cmdbuf_idle(dev_priv->cman, interruptible,
190 181 10*HZ);
191 182 if (ret)
192 183 goto out_err;
184 + } else if (fifo_state) {
185 + down_read(&fifo_state->rwsem);
186 + fifo_down = true;
193 187 }
194 188 }
195 189
···
228 218 }
229 219 }
230 220 finish_wait(&dev_priv->fence_queue, &__wait);
231 - if (ret == 0 && fifo_idle)
221 + if (ret == 0 && fifo_idle && fifo_state)
232 222 vmw_fence_write(dev_priv, signal_seq);
233 223
234 224 wake_up_all(&dev_priv->fence_queue);
235 225 out_err:
236 - if (fifo_idle)
226 + if (fifo_down)
237 227 up_read(&fifo_state->rwsem);
238 228
239 229 return ret;
···
276 266
277 267 void vmw_goal_waiter_add(struct vmw_private *dev_priv)
278 268 {
279 - vmw_generic_waiter_add(dev_priv, SVGA_IRQFLAG_FENCE_GOAL,
269 + vmw_generic_waiter_add(dev_priv, vmw_irqflag_fence_goal(dev_priv),
280 270 &dev_priv->goal_queue_waiters);
281 271 }
282 272
283 273 void vmw_goal_waiter_remove(struct vmw_private *dev_priv)
284 274 {
285 - vmw_generic_waiter_remove(dev_priv, SVGA_IRQFLAG_FENCE_GOAL,
275 + vmw_generic_waiter_remove(dev_priv, vmw_irqflag_fence_goal(dev_priv),
286 276 &dev_priv->goal_queue_waiters);
287 277 }
288 278
···
300 290 struct vmw_private *dev_priv = vmw_priv(dev);
301 291 struct pci_dev *pdev = to_pci_dev(dev->dev);
302 292 uint32_t status;
293 + u32 i;
303 294
304 295 if (!(dev_priv->capabilities & SVGA_CAP_IRQMASK))
305 296 return;
···
310 299 status = vmw_irq_status_read(dev_priv);
311 300 vmw_irq_status_write(dev_priv, status);
312 301
313 - free_irq(pdev->irq, dev);
302 + for (i = 0; i < dev_priv->num_irq_vectors; ++i)
303 + free_irq(dev_priv->irqs[i], dev);
304 +
305 + pci_free_irq_vectors(pdev);
306 + dev_priv->num_irq_vectors = 0;
314 307 }
315 308
316 309 /**
317 310 * vmw_irq_install - Install the irq handlers
318 311 *
319 - * @dev: Pointer to the drm device.
320 - * @irq: The irq number.
312 + * @dev_priv: Pointer to the vmw_private device.
321 313 * Return: Zero if successful. Negative number otherwise.
322 314 */
323 - int vmw_irq_install(struct drm_device *dev, int irq)
315 + int vmw_irq_install(struct vmw_private *dev_priv)
324 316 {
317 + struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
318 + struct drm_device *dev = &dev_priv->drm;
319 + int ret;
320 + int nvec;
321 + int i = 0;
322 +
323 + BUILD_BUG_ON((SVGA_IRQFLAG_MAX >> VMWGFX_MAX_NUM_IRQS) != 1);
324 + BUG_ON(VMWGFX_MAX_NUM_IRQS != get_count_order(SVGA_IRQFLAG_MAX));
325 +
326 + nvec = pci_alloc_irq_vectors(pdev, 1, VMWGFX_MAX_NUM_IRQS,
327 + PCI_IRQ_ALL_TYPES);
328 +
329 + if (nvec <= 0) {
330 + drm_err(&dev_priv->drm,
331 + "IRQ's are unavailable, nvec: %d\n", nvec);
332 + ret = nvec;
333 + goto done;
334 + }
335 +
325 336 vmw_irq_preinstall(dev);
326 337
327 - return request_threaded_irq(irq, vmw_irq_handler, vmw_thread_fn,
328 - IRQF_SHARED, VMWGFX_DRIVER_NAME, dev);
338 + for (i = 0; i < nvec; ++i) {
339 + ret = pci_irq_vector(pdev, i);
340 + if (ret < 0) {
341 + drm_err(&dev_priv->drm,
342 + "failed getting irq vector: %d\n", ret);
343 + goto done;
344 + }
345 + dev_priv->irqs[i] = ret;
346 +
347 + ret = request_threaded_irq(dev_priv->irqs[i], vmw_irq_handler, vmw_thread_fn,
348 + IRQF_SHARED, VMWGFX_DRIVER_NAME, dev);
349 + if (ret != 0) {
350 + drm_err(&dev_priv->drm,
351 + "Failed installing irq(%d): %d\n",
352 + dev_priv->irqs[i], ret);
353 + goto done;
354 + }
355 + }
356 +
357 + done:
358 + dev_priv->num_irq_vectors = i;
359 + return ret;
329 360 }
+353 -92
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 41 41 struct vmw_private *dev_priv = vmw_priv(du->primary.dev); 42 42 drm_plane_cleanup(&du->primary); 43 43 if (vmw_cmd_supported(dev_priv)) 44 - drm_plane_cleanup(&du->cursor); 44 + drm_plane_cleanup(&du->cursor.base); 45 45 46 46 drm_connector_unregister(&du->connector); 47 47 drm_crtc_cleanup(&du->crtc); ··· 53 53 * Display Unit Cursor functions 54 54 */ 55 55 56 - static int vmw_cursor_update_image(struct vmw_private *dev_priv, 57 - u32 *image, u32 width, u32 height, 58 - u32 hotspotX, u32 hotspotY) 56 + static void vmw_cursor_update_mob(struct vmw_private *dev_priv, 57 + struct ttm_buffer_object *bo, 58 + struct ttm_bo_kmap_obj *map, 59 + u32 *image, u32 width, u32 height, 60 + u32 hotspotX, u32 hotspotY); 61 + 62 + struct vmw_svga_fifo_cmd_define_cursor { 63 + u32 cmd; 64 + SVGAFifoCmdDefineAlphaCursor cursor; 65 + }; 66 + 67 + static void vmw_cursor_update_image(struct vmw_private *dev_priv, 68 + struct ttm_buffer_object *cm_bo, 69 + struct ttm_bo_kmap_obj *cm_map, 70 + u32 *image, u32 width, u32 height, 71 + u32 hotspotX, u32 hotspotY) 59 72 { 60 - struct { 61 - u32 cmd; 62 - SVGAFifoCmdDefineAlphaCursor cursor; 63 - } *cmd; 64 - u32 image_size = width * height * 4; 65 - u32 cmd_size = sizeof(*cmd) + image_size; 73 + struct vmw_svga_fifo_cmd_define_cursor *cmd; 74 + const u32 image_size = width * height * sizeof(*image); 75 + const u32 cmd_size = sizeof(*cmd) + image_size; 66 76 67 - if (!image) 68 - return -EINVAL; 77 + if (cm_bo != NULL) { 78 + vmw_cursor_update_mob(dev_priv, cm_bo, cm_map, image, 79 + width, height, 80 + hotspotX, hotspotY); 81 + 
return; 82 + } 69 83 84 + /* Try to reserve fifocmd space and swallow any failures; 85 + such reservations cannot be left unconsumed for long 86 + under the risk of clogging other fifocmd users, so 87 + we treat reservations separtely from the way we treat 88 + other fallible KMS-atomic resources at prepare_fb */ 70 89 cmd = VMW_CMD_RESERVE(dev_priv, cmd_size); 90 + 71 91 if (unlikely(cmd == NULL)) 72 - return -ENOMEM; 92 + return; 73 93 74 94 memset(cmd, 0, sizeof(*cmd)); 75 95 ··· 103 83 cmd->cursor.hotspotY = hotspotY; 104 84 105 85 vmw_cmd_commit_flush(dev_priv, cmd_size); 106 - 107 - return 0; 108 86 } 109 87 110 - static int vmw_cursor_update_bo(struct vmw_private *dev_priv, 111 - struct vmw_buffer_object *bo, 112 - u32 width, u32 height, 113 - u32 hotspotX, u32 hotspotY) 88 + /** 89 + * vmw_cursor_update_mob - Update cursor vis CursorMob mechanism 90 + * 91 + * @dev_priv: device to work with 92 + * @bo: BO for the MOB 93 + * @map: kmap obj for the BO 94 + * @image: cursor source data to fill the MOB with 95 + * @width: source data width 96 + * @height: source data height 97 + * @hotspotX: cursor hotspot x 98 + * @hotspotY: cursor hotspot Y 99 + */ 100 + static void vmw_cursor_update_mob(struct vmw_private *dev_priv, 101 + struct ttm_buffer_object *bo, 102 + struct ttm_bo_kmap_obj *map, 103 + u32 *image, u32 width, u32 height, 104 + u32 hotspotX, u32 hotspotY) 114 105 { 115 - struct ttm_bo_kmap_obj map; 116 - unsigned long kmap_offset; 117 - unsigned long kmap_num; 118 - void *virtual; 106 + SVGAGBCursorHeader *header; 107 + SVGAGBAlphaCursorHeader *alpha_header; 108 + const u32 image_size = width * height * sizeof(*image); 119 109 bool dummy; 120 - int ret; 121 110 122 - kmap_offset = 0; 123 - kmap_num = PFN_UP(width*height*4); 111 + BUG_ON(!image); 124 112 125 - ret = ttm_bo_reserve(&bo->base, true, false, NULL); 126 - if (unlikely(ret != 0)) { 127 - DRM_ERROR("reserve failed\n"); 128 - return -EINVAL; 113 + header = (SVGAGBCursorHeader 
*)ttm_kmap_obj_virtual(map, &dummy); 114 + alpha_header = &header->header.alphaHeader; 115 + 116 + header->type = SVGA_ALPHA_CURSOR; 117 + header->sizeInBytes = image_size; 118 + 119 + alpha_header->hotspotX = hotspotX; 120 + alpha_header->hotspotY = hotspotY; 121 + alpha_header->width = width; 122 + alpha_header->height = height; 123 + 124 + memcpy(header + 1, image, image_size); 125 + 126 + vmw_write(dev_priv, SVGA_REG_CURSOR_MOBID, bo->resource->start); 127 + } 128 + 129 + void vmw_du_destroy_cursor_mob_array(struct vmw_cursor_plane *vcp) 130 + { 131 + size_t i; 132 + 133 + for (i = 0; i < ARRAY_SIZE(vcp->cursor_mob); i++) { 134 + if (vcp->cursor_mob[i] != NULL) { 135 + ttm_bo_unpin(vcp->cursor_mob[i]); 136 + ttm_bo_put(vcp->cursor_mob[i]); 137 + kfree(vcp->cursor_mob[i]); 138 + vcp->cursor_mob[i] = NULL; 139 + } 140 + } 141 + } 142 + 143 + #define CURSOR_MOB_SIZE(dimension) \ 144 + ((dimension) * (dimension) * sizeof(u32) + sizeof(SVGAGBCursorHeader)) 145 + 146 + int vmw_du_create_cursor_mob_array(struct vmw_cursor_plane *cursor) 147 + { 148 + struct vmw_private *dev_priv = cursor->base.dev->dev_private; 149 + uint32_t cursor_max_dim, mob_max_size; 150 + int ret = 0; 151 + size_t i; 152 + 153 + if (!dev_priv->has_mob || (dev_priv->capabilities2 & SVGA_CAP2_CURSOR_MOB) == 0) 154 + return -ENOSYS; 155 + 156 + mob_max_size = vmw_read(dev_priv, SVGA_REG_MOB_MAX_SIZE); 157 + cursor_max_dim = vmw_read(dev_priv, SVGA_REG_CURSOR_MAX_DIMENSION); 158 + 159 + if (CURSOR_MOB_SIZE(cursor_max_dim) > mob_max_size) 160 + cursor_max_dim = 64; /* Mandatorily-supported cursor dimension */ 161 + 162 + for (i = 0; i < ARRAY_SIZE(cursor->cursor_mob); i++) { 163 + struct ttm_buffer_object **const bo = &cursor->cursor_mob[i]; 164 + 165 + ret = vmw_bo_create_kernel(dev_priv, 166 + CURSOR_MOB_SIZE(cursor_max_dim), 167 + &vmw_mob_placement, bo); 168 + 169 + if (ret != 0) 170 + goto teardown; 171 + 172 + if ((*bo)->resource->mem_type != VMW_PL_MOB) { 173 + DRM_ERROR("Obtained buffer 
object is not a MOB.\n"); 174 + ret = -ENOSYS; 175 + goto teardown; 176 + } 177 + 178 + /* Fence the mob creation so we are guaranteed to have the mob */ 179 + ret = ttm_bo_reserve(*bo, false, false, NULL); 180 + 181 + if (ret != 0) 182 + goto teardown; 183 + 184 + vmw_bo_fence_single(*bo, NULL); 185 + 186 + ttm_bo_unreserve(*bo); 187 + 188 + drm_info(&dev_priv->drm, "Using CursorMob mobid %lu, max dimension %u\n", 189 + (*bo)->resource->start, cursor_max_dim); 129 190 } 130 191 131 - ret = ttm_bo_kmap(&bo->base, kmap_offset, kmap_num, &map); 132 - if (unlikely(ret != 0)) 133 - goto err_unreserve; 192 + return 0; 134 193 135 - virtual = ttm_kmap_obj_virtual(&map, &dummy); 136 - ret = vmw_cursor_update_image(dev_priv, virtual, width, height, 137 - hotspotX, hotspotY); 138 - 139 - ttm_bo_kunmap(&map); 140 - err_unreserve: 141 - ttm_bo_unreserve(&bo->base); 194 + teardown: 195 + vmw_du_destroy_cursor_mob_array(cursor); 142 196 143 197 return ret; 198 + } 199 + 200 + #undef CURSOR_MOB_SIZE 201 + 202 + static void vmw_cursor_update_bo(struct vmw_private *dev_priv, 203 + struct ttm_buffer_object *cm_bo, 204 + struct ttm_bo_kmap_obj *cm_map, 205 + struct vmw_buffer_object *bo, 206 + u32 width, u32 height, 207 + u32 hotspotX, u32 hotspotY) 208 + { 209 + void *virtual; 210 + bool dummy; 211 + 212 + virtual = ttm_kmap_obj_virtual(&bo->map, &dummy); 213 + if (virtual) { 214 + vmw_cursor_update_image(dev_priv, cm_bo, cm_map, virtual, 215 + width, height, 216 + hotspotX, hotspotY); 217 + atomic_dec(&bo->base_mapped_count); 218 + } 144 219 } 145 220 146 221 147 222 static void vmw_cursor_update_position(struct vmw_private *dev_priv, 148 223 bool show, int x, int y) 149 224 { 225 + const uint32_t svga_cursor_on = show ? SVGA_CURSOR_ON_SHOW 226 + : SVGA_CURSOR_ON_HIDE; 150 227 uint32_t count; 151 228 152 229 spin_lock(&dev_priv->cursor_lock); 153 - if (vmw_is_cursor_bypass3_enabled(dev_priv)) { 154 - vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_ON, show ?
1 : 0); 230 + if (dev_priv->capabilities2 & SVGA_CAP2_EXTRA_REGS) { 231 + vmw_write(dev_priv, SVGA_REG_CURSOR4_X, x); 232 + vmw_write(dev_priv, SVGA_REG_CURSOR4_Y, y); 233 + vmw_write(dev_priv, SVGA_REG_CURSOR4_SCREEN_ID, SVGA3D_INVALID_ID); 234 + vmw_write(dev_priv, SVGA_REG_CURSOR4_ON, svga_cursor_on); 235 + vmw_write(dev_priv, SVGA_REG_CURSOR4_SUBMIT, TRUE); 236 + } else if (vmw_is_cursor_bypass3_enabled(dev_priv)) { 237 + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_ON, svga_cursor_on); 155 238 vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_X, x); 156 239 vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_Y, y); 157 240 count = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_CURSOR_COUNT); ··· 262 139 } else { 263 140 vmw_write(dev_priv, SVGA_REG_CURSOR_X, x); 264 141 vmw_write(dev_priv, SVGA_REG_CURSOR_Y, y); 265 - vmw_write(dev_priv, SVGA_REG_CURSOR_ON, show ? 1 : 0); 142 + vmw_write(dev_priv, SVGA_REG_CURSOR_ON, svga_cursor_on); 266 143 } 267 144 spin_unlock(&dev_priv->cursor_lock); 268 145 } ··· 392 269 continue; 393 270 394 271 du->cursor_age = du->cursor_surface->snooper.age; 395 - vmw_cursor_update_image(dev_priv, 272 + vmw_cursor_update_image(dev_priv, NULL, NULL, 396 273 du->cursor_surface->snooper.image, 397 274 64, 64, 398 275 du->hotspot_x + du->core_hotspot_x, ··· 406 283 void vmw_du_cursor_plane_destroy(struct drm_plane *plane) 407 284 { 408 285 vmw_cursor_update_position(plane->dev->dev_private, false, 0, 0); 409 - 286 + vmw_du_destroy_cursor_mob_array(vmw_plane_to_vcp(plane)); 410 287 drm_plane_cleanup(plane); 411 288 } 412 289 ··· 444 321 445 322 446 323 /** 447 - * vmw_du_plane_cleanup_fb - Unpins the cursor 324 + * vmw_du_plane_cleanup_fb - Unpins the plane surface 448 325 * 449 326 * @plane: display plane 450 327 * @old_state: Contains the FB to clean up ··· 464 341 465 342 466 343 /** 344 + * vmw_du_cursor_plane_cleanup_fb - Unpins the plane surface 345 + * 346 + * @plane: cursor plane 347 + * @old_state: contains the state to clean up 348 + * 349 + * 
Unmaps all cursor bo mappings and unpins the cursor surface 350 + * 351 + * Returns nothing 352 + */ 353 + void 354 + vmw_du_cursor_plane_cleanup_fb(struct drm_plane *plane, 355 + struct drm_plane_state *old_state) 356 + { 357 + struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state); 358 + bool dummy; 359 + 360 + if (vps->bo != NULL && ttm_kmap_obj_virtual(&vps->bo->map, &dummy) != NULL) { 361 + const int ret = ttm_bo_reserve(&vps->bo->base, true, false, NULL); 362 + 363 + if (likely(ret == 0)) { 364 + if (atomic_read(&vps->bo->base_mapped_count) == 0) 365 + ttm_bo_kunmap(&vps->bo->map); 366 + ttm_bo_unreserve(&vps->bo->base); 367 + } 368 + } 369 + 370 + if (vps->cm_bo != NULL && ttm_kmap_obj_virtual(&vps->cm_map, &dummy) != NULL) { 371 + const int ret = ttm_bo_reserve(vps->cm_bo, true, false, NULL); 372 + 373 + if (likely(ret == 0)) { 374 + ttm_bo_kunmap(&vps->cm_map); 375 + ttm_bo_unreserve(vps->cm_bo); 376 + } 377 + } 378 + 379 + vmw_du_plane_unpin_surf(vps, false); 380 + 381 + if (vps->surf) { 382 + vmw_surface_unreference(&vps->surf); 383 + vps->surf = NULL; 384 + } 385 + 386 + if (vps->bo) { 387 + vmw_bo_unreference(&vps->bo); 388 + vps->bo = NULL; 389 + } 390 + } 391 + 392 + /** 467 393 * vmw_du_cursor_plane_prepare_fb - Readies the cursor by referencing it 468 394 * 469 395 * @plane: display plane ··· 525 353 struct drm_plane_state *new_state) 526 354 { 527 355 struct drm_framebuffer *fb = new_state->fb; 356 + struct vmw_cursor_plane *vcp = vmw_plane_to_vcp(plane); 528 357 struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state); 358 + struct ttm_buffer_object *cm_bo = NULL; 359 + bool dummy; 360 + int ret = 0; 529 361 530 - 531 - if (vps->surf) 362 + if (vps->surf) { 532 363 vmw_surface_unreference(&vps->surf); 364 + vps->surf = NULL; 365 + } 533 366 534 - if (vps->bo) 367 + if (vps->bo) { 535 368 vmw_bo_unreference(&vps->bo); 369 + vps->bo = NULL; 370 + } 536 371 537 372 if (fb) { 538 373 if (vmw_framebuffer_to_vfb(fb)->bo) { ··· 551
372 } 552 373 } 553 374 375 + vps->cm_bo = NULL; 376 + 377 + if (vps->surf == NULL && vps->bo != NULL) { 378 + const u32 size = new_state->crtc_w * new_state->crtc_h * sizeof(u32); 379 + 380 + /* Not using vmw_bo_map_and_cache() helper here as we need to reserve 381 + the ttm_buffer_object first which vmw_bo_map_and_cache() omits. */ 382 + ret = ttm_bo_reserve(&vps->bo->base, true, false, NULL); 383 + 384 + if (unlikely(ret != 0)) 385 + return -ENOMEM; 386 + 387 + ret = ttm_bo_kmap(&vps->bo->base, 0, PFN_UP(size), &vps->bo->map); 388 + 389 + if (likely(ret == 0)) 390 + atomic_inc(&vps->bo->base_mapped_count); 391 + 392 + ttm_bo_unreserve(&vps->bo->base); 393 + 394 + if (unlikely(ret != 0)) 395 + return -ENOMEM; 396 + } 397 + 398 + if (vps->surf || vps->bo) { 399 + unsigned cursor_mob_idx = vps->cursor_mob_idx; 400 + 401 + /* Lazily set up cursor MOBs just once -- no reattempts. */ 402 + if (cursor_mob_idx == 0 && vcp->cursor_mob[0] == NULL) 403 + if (vmw_du_create_cursor_mob_array(vcp) != 0) 404 + vps->cursor_mob_idx = cursor_mob_idx = -1U; 405 + 406 + if (cursor_mob_idx < ARRAY_SIZE(vcp->cursor_mob)) { 407 + const u32 size = sizeof(SVGAGBCursorHeader) + 408 + new_state->crtc_w * new_state->crtc_h * sizeof(u32); 409 + 410 + cm_bo = vcp->cursor_mob[cursor_mob_idx]; 411 + 412 + if (cm_bo->resource->num_pages * PAGE_SIZE < size) { 413 + ret = -EINVAL; 414 + goto error_bo_unmap; 415 + } 416 + 417 + ret = ttm_bo_reserve(cm_bo, false, false, NULL); 418 + 419 + if (unlikely(ret != 0)) { 420 + ret = -ENOMEM; 421 + goto error_bo_unmap; 422 + } 423 + 424 + ret = ttm_bo_kmap(cm_bo, 0, PFN_UP(size), &vps->cm_map); 425 + 426 + /* 427 + * We just want to try to get mob bind to finish 428 + * so that the first write to SVGA_REG_CURSOR_MOBID 429 + * is done with a buffer that the device has already 430 + * seen 431 + */ 432 + (void) ttm_bo_wait(cm_bo, false, false); 433 + 434 + ttm_bo_unreserve(cm_bo); 435 + 436 + if (unlikely(ret != 0)) { 437 + ret = -ENOMEM; 438 + goto
error_bo_unmap; 439 + } 440 + 441 + vps->cursor_mob_idx = cursor_mob_idx ^ 1; 442 + vps->cm_bo = cm_bo; 443 + } 444 + } 445 + 554 446 return 0; 447 + 448 + error_bo_unmap: 449 + if (vps->bo != NULL && ttm_kmap_obj_virtual(&vps->bo->map, &dummy) != NULL) { 450 + const int ret = ttm_bo_reserve(&vps->bo->base, true, false, NULL); 451 + if (likely(ret == 0)) { 452 + atomic_dec(&vps->bo->base_mapped_count); 453 + ttm_bo_kunmap(&vps->bo->map); 454 + ttm_bo_unreserve(&vps->bo->base); 455 + } 456 + } 457 + 458 + return ret; 555 459 } 556 460 557 461 ··· 651 389 struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 652 390 struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state); 653 391 s32 hotspot_x, hotspot_y; 654 - int ret = 0; 655 - 656 392 657 393 hotspot_x = du->hotspot_x; 658 394 hotspot_y = du->hotspot_y; ··· 666 406 if (vps->surf) { 667 407 du->cursor_age = du->cursor_surface->snooper.age; 668 408 669 - ret = vmw_cursor_update_image(dev_priv, 670 - vps->surf->snooper.image, 671 - 64, 64, hotspot_x, 672 - hotspot_y); 409 + vmw_cursor_update_image(dev_priv, vps->cm_bo, &vps->cm_map, 410 + vps->surf->snooper.image, 411 + new_state->crtc_w, 412 + new_state->crtc_h, 413 + hotspot_x, hotspot_y); 673 414 } else if (vps->bo) { 674 - ret = vmw_cursor_update_bo(dev_priv, vps->bo, 675 - new_state->crtc_w, 676 - new_state->crtc_h, 677 - hotspot_x, hotspot_y); 415 + vmw_cursor_update_bo(dev_priv, vps->cm_bo, &vps->cm_map, 416 + vps->bo, 417 + new_state->crtc_w, 418 + new_state->crtc_h, 419 + hotspot_x, hotspot_y); 678 420 } else { 679 421 vmw_cursor_update_position(dev_priv, false, 0, 0); 680 422 return; 681 423 } 682 424 683 - if (!ret) { 684 - du->cursor_x = new_state->crtc_x + du->set_gui_x; 685 - du->cursor_y = new_state->crtc_y + du->set_gui_y; 425 + du->cursor_x = new_state->crtc_x + du->set_gui_x; 426 + du->cursor_y = new_state->crtc_y + du->set_gui_y; 686 427 687 - vmw_cursor_update_position(dev_priv, true, 688 - du->cursor_x + hotspot_x, 689 - du->cursor_y + 
hotspot_y); 428 + vmw_cursor_update_position(dev_priv, true, 429 + du->cursor_x + hotspot_x, 430 + du->cursor_y + hotspot_y); 690 431 691 - du->core_hotspot_x = hotspot_x - du->hotspot_x; 692 - du->core_hotspot_y = hotspot_y - du->hotspot_y; 693 - } else { 694 - DRM_ERROR("Failed to update cursor image\n"); 695 - } 432 + du->core_hotspot_x = hotspot_x - du->hotspot_x; 433 + du->core_hotspot_y = hotspot_y - du->hotspot_y; 696 434 } 697 435 698 436 ··· 776 518 if (new_state->crtc_w != 64 || new_state->crtc_h != 64) { 777 519 DRM_ERROR("Invalid cursor dimensions (%d, %d)\n", 778 520 new_state->crtc_w, new_state->crtc_h); 779 - ret = -EINVAL; 521 + return -EINVAL; 780 522 } 781 523 782 524 if (!vmw_framebuffer_to_vfb(fb)->bo) ··· 784 526 785 527 if (surface && !surface->snooper.image) { 786 528 DRM_ERROR("surface not suitable for cursor\n"); 787 - ret = -EINVAL; 529 + return -EINVAL; 788 530 } 789 531 790 - return ret; 532 + return 0; 791 533 } 792 534 793 535 ··· 969 711 void vmw_du_plane_reset(struct drm_plane *plane) 970 712 { 971 713 struct vmw_plane_state *vps; 972 - 973 714 974 715 if (plane->state) 975 716 vmw_du_plane_destroy_state(plane, plane->state); ··· 1170 913 /* 1171 914 * Sanity checks. 1172 915 */ 916 + 917 + if (!drm_any_plane_has_format(&dev_priv->drm, 918 + mode_cmd->pixel_format, 919 + mode_cmd->modifier[0])) { 920 + drm_dbg(&dev_priv->drm, 921 + "unsupported pixel format %p4cc / modifier 0x%llx\n", 922 + &mode_cmd->pixel_format, mode_cmd->modifier[0]); 923 + return -EINVAL; 924 + } 1173 925 1174 926 /* Surface must be marked as a scanout. 
*/ 1175 927 if (unlikely(!surface->metadata.scanout)) ··· 1502 1236 return -EINVAL; 1503 1237 } 1504 1238 1505 - /* Limited framebuffer color depth support for screen objects */ 1506 - if (dev_priv->active_display_unit == vmw_du_screen_object) { 1507 - switch (mode_cmd->pixel_format) { 1508 - case DRM_FORMAT_XRGB8888: 1509 - case DRM_FORMAT_ARGB8888: 1510 - break; 1511 - case DRM_FORMAT_XRGB1555: 1512 - case DRM_FORMAT_RGB565: 1513 - break; 1514 - default: 1515 - DRM_ERROR("Invalid pixel format: %p4cc\n", 1516 - &mode_cmd->pixel_format); 1517 - return -EINVAL; 1518 - } 1239 + if (!drm_any_plane_has_format(&dev_priv->drm, 1240 + mode_cmd->pixel_format, 1241 + mode_cmd->modifier[0])) { 1242 + drm_dbg(&dev_priv->drm, 1243 + "unsupported pixel format %p4cc / modifier 0x%llx\n", 1244 + &mode_cmd->pixel_format, mode_cmd->modifier[0]); 1245 + return -EINVAL; 1519 1246 } 1520 1247 1521 1248 vfbd = kzalloc(sizeof(*vfbd), GFP_KERNEL); ··· 1603 1344 ret = vmw_kms_new_framebuffer_surface(dev_priv, surface, &vfb, 1604 1345 mode_cmd, 1605 1346 is_bo_proxy); 1606 - 1607 1347 /* 1608 1348 * vmw_create_bo_proxy() adds a reference that is no longer 1609 1349 * needed ··· 1643 1385 ret = vmw_user_lookup_handle(dev_priv, file_priv, 1644 1386 mode_cmd->handles[0], 1645 1387 &surface, &bo); 1646 - if (ret) 1388 + if (ret) { 1389 + DRM_ERROR("Invalid buffer object handle %u (0x%x).\n", 1390 + mode_cmd->handles[0], mode_cmd->handles[0]); 1647 1391 goto err_out; 1392 + } 1648 1393 1649 1394 1650 1395 if (!bo && 1651 1396 !vmw_kms_srf_ok(dev_priv, mode_cmd->width, mode_cmd->height)) { 1652 - DRM_ERROR("Surface size cannot exceed %dx%d", 1397 + DRM_ERROR("Surface size cannot exceed %dx%d\n", 1653 1398 dev_priv->texture_max_width, 1654 1399 dev_priv->texture_max_height); 1655 1400 goto err_out;
+26 -3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 247 247 static const uint32_t __maybe_unused vmw_primary_plane_formats[] = { 248 248 DRM_FORMAT_XRGB1555, 249 249 DRM_FORMAT_RGB565, 250 - DRM_FORMAT_RGB888, 251 250 DRM_FORMAT_XRGB8888, 252 251 DRM_FORMAT_ARGB8888, 253 252 }; ··· 260 261 #define vmw_plane_state_to_vps(x) container_of(x, struct vmw_plane_state, base) 261 262 #define vmw_connector_state_to_vcs(x) \ 262 263 container_of(x, struct vmw_connector_state, base) 264 + #define vmw_plane_to_vcp(x) container_of(x, struct vmw_cursor_plane, base) 263 265 264 266 /** 265 267 * Derived class for crtc state object ··· 293 293 294 294 /* For CPU Blit */ 295 295 unsigned int cpp; 296 + 297 + /* CursorMob flipping index; -1 if cursor mobs not used */ 298 + unsigned int cursor_mob_idx; 299 + /* Currently-active CursorMob */ 300 + struct ttm_buffer_object *cm_bo; 301 + /* CursorMob kmap_obj; expected valid at cursor_plane_atomic_update 302 + IFF currently-active CursorMob above is valid */ 303 + struct ttm_bo_kmap_obj cm_map; 296 304 }; 297 305 298 306 ··· 334 326 }; 335 327 336 328 /** 329 + * Derived class for cursor plane object 330 + * 331 + * @base: DRM plane object 332 + * @cursor_mob: array of two MOBs for CursorMob flipping 333 + */ 334 + struct vmw_cursor_plane { 335 + struct drm_plane base; 336 + struct ttm_buffer_object *cursor_mob[2]; 337 + }; 338 + 339 + /** 337 340 * Base class display unit.
338 341 * 339 342 * Since the SVGA hw doesn't have a concept of a crtc, encoder or connector ··· 356 337 struct drm_encoder encoder; 357 338 struct drm_connector connector; 358 339 struct drm_plane primary; 359 - struct drm_plane cursor; 340 + struct vmw_cursor_plane cursor; 360 341 361 342 struct vmw_surface *cursor_surface; 362 343 struct vmw_buffer_object *cursor_bo; ··· 471 452 /* Universal Plane Helpers */ 472 453 void vmw_du_primary_plane_destroy(struct drm_plane *plane); 473 454 void vmw_du_cursor_plane_destroy(struct drm_plane *plane); 455 + int vmw_du_create_cursor_mob_array(struct vmw_cursor_plane *vcp); 456 + void vmw_du_destroy_cursor_mob_array(struct vmw_cursor_plane *vcp); 474 457 475 458 /* Atomic Helpers */ 476 459 int vmw_du_primary_plane_atomic_check(struct drm_plane *plane, ··· 483 462 struct drm_atomic_state *state); 484 463 int vmw_du_cursor_plane_prepare_fb(struct drm_plane *plane, 485 464 struct drm_plane_state *new_state); 465 + void vmw_du_cursor_plane_cleanup_fb(struct drm_plane *plane, 466 + struct drm_plane_state *old_state); 486 467 void vmw_du_plane_cleanup_fb(struct drm_plane *plane, 487 468 struct drm_plane_state *old_state); 488 469 void vmw_du_plane_reset(struct drm_plane *plane);
+17 -19
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 338 338 .atomic_check = vmw_du_cursor_plane_atomic_check, 339 339 .atomic_update = vmw_du_cursor_plane_atomic_update, 340 340 .prepare_fb = vmw_du_cursor_plane_prepare_fb, 341 - .cleanup_fb = vmw_du_plane_cleanup_fb, 341 + .cleanup_fb = vmw_du_cursor_plane_cleanup_fb, 342 342 }; 343 343 344 344 static const struct ··· 363 363 struct drm_device *dev = &dev_priv->drm; 364 364 struct drm_connector *connector; 365 365 struct drm_encoder *encoder; 366 - struct drm_plane *primary, *cursor; 366 + struct drm_plane *primary; 367 + struct vmw_cursor_plane *cursor; 367 368 struct drm_crtc *crtc; 368 369 int ret; 369 370 ··· 393 392 ldu->base.is_implicit = true; 394 393 395 394 /* Initialize primary plane */ 396 - ret = drm_universal_plane_init(dev, &ldu->base.primary, 395 + ret = drm_universal_plane_init(dev, primary, 397 396 0, &vmw_ldu_plane_funcs, 398 397 vmw_primary_plane_formats, 399 398 ARRAY_SIZE(vmw_primary_plane_formats), ··· 410 409 */ 411 410 if (vmw_cmd_supported(dev_priv)) { 412 411 /* Initialize cursor plane */ 413 - ret = drm_universal_plane_init(dev, &ldu->base.cursor, 412 + ret = drm_universal_plane_init(dev, &cursor->base, 414 413 0, &vmw_ldu_cursor_funcs, 415 414 vmw_cursor_plane_formats, 416 415 ARRAY_SIZE(vmw_cursor_plane_formats), ··· 421 420 goto err_free; 422 421 } 423 422 424 - drm_plane_helper_add(cursor, &vmw_ldu_cursor_plane_helper_funcs); 423 + drm_plane_helper_add(&cursor->base, &vmw_ldu_cursor_plane_helper_funcs); 425 424 } 426 425 427 426 ret = drm_connector_init(dev, connector, &vmw_legacy_connector_funcs, ··· 451 450 goto 
err_free_encoder; 452 451 } 453 452 454 - ret = drm_crtc_init_with_planes( 455 - dev, crtc, &ldu->base.primary, 456 - vmw_cmd_supported(dev_priv) ? &ldu->base.cursor : NULL, 453 + ret = drm_crtc_init_with_planes(dev, crtc, primary, 454 + vmw_cmd_supported(dev_priv) ? &cursor->base : NULL, 457 455 &vmw_legacy_crtc_funcs, NULL); 458 456 if (ret) { 459 457 DRM_ERROR("Failed to initialize CRTC\n"); ··· 492 492 { 493 493 struct drm_device *dev = &dev_priv->drm; 494 494 int i, ret; 495 + int num_display_units = (dev_priv->capabilities & SVGA_CAP_MULTIMON) ? 496 + VMWGFX_NUM_DISPLAY_UNITS : 1; 495 497 496 498 if (unlikely(dev_priv->ldu_priv)) { 497 499 return -EINVAL; ··· 508 506 dev_priv->ldu_priv->last_num_active = 0; 509 507 dev_priv->ldu_priv->fb = NULL; 510 508 511 - /* for old hardware without multimon only enable one display */ 512 - if (dev_priv->capabilities & SVGA_CAP_MULTIMON) 513 - ret = drm_vblank_init(dev, VMWGFX_NUM_DISPLAY_UNITS); 514 - else 515 - ret = drm_vblank_init(dev, 1); 509 + ret = drm_vblank_init(dev, num_display_units); 516 510 if (ret != 0) 517 511 goto err_free; 518 512 519 513 vmw_kms_create_implicit_placement_property(dev_priv); 520 514 521 - if (dev_priv->capabilities & SVGA_CAP_MULTIMON) 522 - for (i = 0; i < VMWGFX_NUM_DISPLAY_UNITS; ++i) 523 - vmw_ldu_init(dev_priv, i); 524 - else 525 - vmw_ldu_init(dev_priv, 0); 515 + for (i = 0; i < num_display_units; ++i) { 516 + ret = vmw_ldu_init(dev_priv, i); 517 + if (ret != 0) 518 + goto err_free; 519 + } 526 520 527 521 dev_priv->active_display_unit = vmw_du_legacy; 528 522
+9 -10
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 859 859 struct ttm_device *bdev = bo->bdev; 860 860 struct vmw_private *dev_priv; 861 861 862 - 863 862 dev_priv = container_of(bdev, struct vmw_private, bdev); 864 863 865 864 mutex_lock(&dev_priv->binding_mutex); 866 - 867 - dx_query_mob = container_of(bo, struct vmw_buffer_object, base); 868 - if (!dx_query_mob || !dx_query_mob->dx_query_ctx) { 869 - mutex_unlock(&dev_priv->binding_mutex); 870 - return; 871 - } 872 865 873 866 /* If BO is being moved from MOB to system memory */ 874 867 if (new_mem->mem_type == TTM_PL_SYSTEM && 875 868 old_mem->mem_type == VMW_PL_MOB) { 876 869 struct vmw_fence_obj *fence; 870 + 871 + dx_query_mob = container_of(bo, struct vmw_buffer_object, base); 872 + if (!dx_query_mob || !dx_query_mob->dx_query_ctx) { 873 + mutex_unlock(&dev_priv->binding_mutex); 874 + return; 875 + } 877 876 878 877 (void) vmw_query_readback_all(dx_query_mob); 879 878 mutex_unlock(&dev_priv->binding_mutex); ··· 887 888 (void) ttm_bo_wait(bo, false, false); 888 889 } else 889 890 mutex_unlock(&dev_priv->binding_mutex); 890 - 891 891 } 892 892 893 893 /** ··· 1163 1165 vmw_bo_fence_single(bo, NULL); 1164 1166 if (bo->moving) 1165 1167 dma_fence_put(bo->moving); 1166 - bo->moving = dma_fence_get 1167 - (dma_resv_excl_fence(bo->base.resv)); 1168 + 1169 + return dma_resv_get_singleton(bo->base.resv, false, 1170 + &bo->moving); 1168 1171 } 1169 1172 1170 1173 return 0;
+9 -8
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2011-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2011-2022 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 804 804 .atomic_check = vmw_du_cursor_plane_atomic_check, 805 805 .atomic_update = vmw_du_cursor_plane_atomic_update, 806 806 .prepare_fb = vmw_du_cursor_plane_prepare_fb, 807 - .cleanup_fb = vmw_du_plane_cleanup_fb, 807 + .cleanup_fb = vmw_du_cursor_plane_cleanup_fb, 808 808 }; 809 809 810 810 static const struct ··· 832 832 struct drm_device *dev = &dev_priv->drm; 833 833 struct drm_connector *connector; 834 834 struct drm_encoder *encoder; 835 - struct drm_plane *primary, *cursor; 835 + struct drm_plane *primary; 836 + struct vmw_cursor_plane *cursor; 836 837 struct drm_crtc *crtc; 837 838 int ret; 838 839 ··· 860 859 sou->base.is_implicit = false; 861 860 862 861 /* Initialize primary plane */ 863 - ret = drm_universal_plane_init(dev, &sou->base.primary, 862 + ret = drm_universal_plane_init(dev, primary, 864 863 0, &vmw_sou_plane_funcs, 865 864 vmw_primary_plane_formats, 866 865 ARRAY_SIZE(vmw_primary_plane_formats), ··· 874 873 drm_plane_enable_fb_damage_clips(primary); 875 874 876 875 /* Initialize cursor plane */ 877 - ret = drm_universal_plane_init(dev, &sou->base.cursor, 876 + ret = drm_universal_plane_init(dev, &cursor->base, 878 877 0, &vmw_sou_cursor_funcs, 879 878 vmw_cursor_plane_formats, 880 879 ARRAY_SIZE(vmw_cursor_plane_formats), ··· 885 884 goto err_free; 886 885 } 887 886 888 - drm_plane_helper_add(cursor, &vmw_sou_cursor_plane_helper_funcs); 887 + drm_plane_helper_add(&cursor->base, &vmw_sou_cursor_plane_helper_funcs); 889 888 890 889 ret = drm_connector_init(dev, connector, &vmw_sou_connector_funcs, 891 890 DRM_MODE_CONNECTOR_VIRTUAL); 
··· 914 913 goto err_free_encoder; 915 914 } 916 915 917 - ret = drm_crtc_init_with_planes(dev, crtc, &sou->base.primary, 918 - &sou->base.cursor, 916 + ret = drm_crtc_init_with_planes(dev, crtc, primary, 917 + &cursor->base, 919 918 &vmw_screen_object_crtc_funcs, NULL); 920 919 if (ret) { 921 920 DRM_ERROR("Failed to initialize CRTC\n");
+16 -11
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /****************************************************************************** 3 3 * 4 - * COPYRIGHT (C) 2014-2015 VMware, Inc., Palo Alto, CA., USA 4 + * COPYRIGHT (C) 2014-2022 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 137 137 /****************************************************************************** 138 138 * Screen Target Display Unit CRTC Functions 139 139 *****************************************************************************/ 140 + 141 + static bool vmw_stdu_use_cpu_blit(const struct vmw_private *vmw) 142 + { 143 + return !(vmw->capabilities & SVGA_CAP_3D) || vmw->vram_size < (32 * 1024 * 1024); 144 + } 140 145 141 146 142 147 /** ··· 694 689 container_of(vfb, struct vmw_framebuffer_bo, base)->buffer; 695 690 struct vmw_stdu_dirty ddirty; 696 691 int ret; 697 - bool cpu_blit = !(dev_priv->capabilities & SVGA_CAP_3D); 692 + bool cpu_blit = vmw_stdu_use_cpu_blit(dev_priv); 698 693 DECLARE_VAL_CONTEXT(val_ctx, NULL, 0); 699 694 700 695 /* ··· 1169 1164 * so cache these mappings 1170 1165 */ 1171 1166 if (vps->content_fb_type == SEPARATE_BO && 1172 - !(dev_priv->capabilities & SVGA_CAP_3D)) 1167 + vmw_stdu_use_cpu_blit(dev_priv)) 1173 1168 vps->cpp = new_fb->pitches[0] / new_fb->width; 1174 1169 1175 1170 return 0; ··· 1373 1368 bo_update.base.vfb = vfb; 1374 1369 bo_update.base.out_fence = out_fence; 1375 1370 bo_update.base.mutex = NULL; 1376 - bo_update.base.cpu_blit = !(dev_priv->capabilities & SVGA_CAP_3D); 1371 + bo_update.base.cpu_blit = vmw_stdu_use_cpu_blit(dev_priv); 1377 1372 bo_update.base.intr = false; 1378 1373 1379 1374 /* ··· 1690 1685 .atomic_check = vmw_du_cursor_plane_atomic_check, 1691 1686 .atomic_update = vmw_du_cursor_plane_atomic_update, 1692 1687 .prepare_fb = vmw_du_cursor_plane_prepare_fb, 1693 - .cleanup_fb = 
vmw_du_plane_cleanup_fb, 1688 + .cleanup_fb = vmw_du_cursor_plane_cleanup_fb, 1694 1689 }; 1695 1690 1696 1691 static const struct ··· 1728 1723 struct drm_device *dev = &dev_priv->drm; 1729 1724 struct drm_connector *connector; 1730 1725 struct drm_encoder *encoder; 1731 - struct drm_plane *primary, *cursor; 1726 + struct drm_plane *primary; 1727 + struct vmw_cursor_plane *cursor; 1732 1728 struct drm_crtc *crtc; 1733 1729 int ret; 1734 - 1735 1730 1736 1731 stdu = kzalloc(sizeof(*stdu), GFP_KERNEL); 1737 1732 if (!stdu) ··· 1764 1759 drm_plane_enable_fb_damage_clips(primary); 1765 1760 1766 1761 /* Initialize cursor plane */ 1767 - ret = drm_universal_plane_init(dev, cursor, 1762 + ret = drm_universal_plane_init(dev, &cursor->base, 1768 1763 0, &vmw_stdu_cursor_funcs, 1769 1764 vmw_cursor_plane_formats, 1770 1765 ARRAY_SIZE(vmw_cursor_plane_formats), ··· 1775 1770 goto err_free; 1776 1771 } 1777 1772 1778 - drm_plane_helper_add(cursor, &vmw_stdu_cursor_plane_helper_funcs); 1773 + drm_plane_helper_add(&cursor->base, &vmw_stdu_cursor_plane_helper_funcs); 1779 1774 1780 1775 ret = drm_connector_init(dev, connector, &vmw_stdu_connector_funcs, 1781 1776 DRM_MODE_CONNECTOR_VIRTUAL); ··· 1804 1799 goto err_free_encoder; 1805 1800 } 1806 1801 1807 - ret = drm_crtc_init_with_planes(dev, crtc, &stdu->base.primary, 1808 - &stdu->base.cursor, 1802 + ret = drm_crtc_init_with_planes(dev, crtc, primary, 1803 + &cursor->base, 1809 1804 &vmw_stdu_crtc_funcs, NULL); 1810 1805 if (ret) { 1811 1806 DRM_ERROR("Failed to initialize CRTC\n");
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
··· 517 517 ttm_cached); 518 518 else 519 519 ret = ttm_tt_init(&vmw_be->dma_ttm, bo, page_flags, 520 - ttm_cached); 520 + ttm_cached, 0); 521 521 if (unlikely(ret != 0)) 522 522 goto out_no_init; 523 523
+2 -6
drivers/infiniband/core/umem_dmabuf.c
··· 16 16 { 17 17 struct sg_table *sgt; 18 18 struct scatterlist *sg; 19 - struct dma_fence *fence; 20 19 unsigned long start, end, cur = 0; 21 20 unsigned int nmap = 0; 22 21 int i; ··· 67 68 * may be not up-to-date. Wait for the exporter to finish 68 69 * the migration. 69 70 */ 70 - fence = dma_resv_excl_fence(umem_dmabuf->attach->dmabuf->resv); 71 - if (fence) 72 - return dma_fence_wait(fence, false); 73 - 74 - return 0; 71 + return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv, false, 72 + false, MAX_SCHEDULE_TIMEOUT); 75 73 } 76 74 EXPORT_SYMBOL(ib_umem_dmabuf_map_pages); 77 75
+4 -10
drivers/video/fbdev/Kconfig
··· 66 66 select I2C_ALGOBIT 67 67 select I2C 68 68 69 - config FB_BOOT_VESA_SUPPORT 70 - bool 71 - depends on FB 72 - help 73 - If true, at least one selected framebuffer driver can take advantage 74 - of VESA video modes set at an early boot stage via the vga= parameter. 75 - 76 69 config FB_CFB_FILLRECT 77 70 tristate 78 71 depends on FB ··· 620 627 select FB_CFB_FILLRECT 621 628 select FB_CFB_COPYAREA 622 629 select FB_CFB_IMAGEBLIT 623 - select FB_BOOT_VESA_SUPPORT 630 + select SYSFB 624 631 help 625 632 This is the frame buffer device driver for generic VESA 2.0 626 633 compliant graphic cards. The older VESA 1.2 cards are not supported. ··· 634 641 select FB_CFB_FILLRECT 635 642 select FB_CFB_COPYAREA 636 643 select FB_CFB_IMAGEBLIT 644 + select SYSFB 637 645 help 638 646 This is the EFI frame buffer device driver. If the firmware on 639 647 your platform is EFI 1.10 or UEFI 2.0, select Y to add support for ··· 1045 1051 select FB_CFB_FILLRECT 1046 1052 select FB_CFB_COPYAREA 1047 1053 select FB_CFB_IMAGEBLIT 1048 - select FB_BOOT_VESA_SUPPORT if FB_INTEL = y 1054 + select BOOT_VESA_SUPPORT if FB_INTEL = y 1049 1055 depends on !DRM_I915 1050 1056 help 1051 1057 This driver supports the on-board graphics built in to the Intel ··· 1372 1378 select FB_CFB_FILLRECT 1373 1379 select FB_CFB_COPYAREA 1374 1380 select FB_CFB_IMAGEBLIT 1375 - select FB_BOOT_VESA_SUPPORT if FB_SIS = y 1381 + select BOOT_VESA_SUPPORT if FB_SIS = y 1376 1382 select FB_SIS_300 if !FB_SIS_315 1377 1383 help 1378 1384 This is the frame buffer device driver for the SiS 300, 315, 330
+8 -1
drivers/video/fbdev/core/fb_defio.c
··· 59 59 printk(KERN_ERR "no mapping available\n"); 60 60 61 61 BUG_ON(!page->mapping); 62 - INIT_LIST_HEAD(&page->lru); 63 62 page->index = vmf->pgoff; 64 63 65 64 vmf->page = page; ··· 212 213 void fb_deferred_io_init(struct fb_info *info) 213 214 { 214 215 struct fb_deferred_io *fbdefio = info->fbdefio; 216 + struct page *page; 217 + unsigned int i; 215 218 216 219 BUG_ON(!fbdefio); 217 220 mutex_init(&fbdefio->lock); ··· 221 220 INIT_LIST_HEAD(&fbdefio->pagelist); 222 221 if (fbdefio->delay == 0) /* set a default of 1 s */ 223 222 fbdefio->delay = HZ; 223 + 224 + /* initialize all the page lists one time */ 225 + for (i = 0; i < info->fix.smem_len; i += PAGE_SIZE) { 226 + page = fb_deferred_io_page(info, i); 227 + INIT_LIST_HEAD(&page->lru); 228 + } 224 229 } 225 230 EXPORT_SYMBOL_GPL(fb_deferred_io_init); 226 231
+23 -4
include/drm/drm_atomic.h
··· 227 227 */ 228 228 void (*atomic_destroy_state)(struct drm_private_obj *obj, 229 229 struct drm_private_state *state); 230 + 231 + /** 232 + * @atomic_print_state: 233 + * 234 + * If driver subclasses &struct drm_private_state, it should implement 235 + * this optional hook for printing additional driver specific state. 236 + * 237 + * Do not call this directly, use drm_atomic_private_obj_print_state() 238 + * instead. 239 + */ 240 + void (*atomic_print_state)(struct drm_printer *p, 241 + const struct drm_private_state *state); 230 242 }; 231 243 232 244 /** ··· 323 311 324 312 /** 325 313 * struct drm_private_state - base struct for driver private object state 326 - * @state: backpointer to global drm_atomic_state 327 314 * 328 - * Currently only contains a backpointer to the overall atomic update, but in 329 - * the future also might hold synchronization information similar to e.g. 330 - * &drm_crtc.commit. 315 + * Currently only contains a backpointer to the overall atomic update, 316 + * and the relevant private object but in the future also might hold 317 + * synchronization information similar to e.g. &drm_crtc.commit. 331 318 */ 332 319 struct drm_private_state { 320 + /** 321 + * @state: backpointer to global drm_atomic_state 322 + */ 333 323 struct drm_atomic_state *state; 324 + 325 + /** 326 + * @obj: backpointer to the private object 327 + */ 328 + struct drm_private_obj *obj; 334 329 }; 335 330 336 331 struct __drm_private_objs_state {
+6 -6
include/drm/drm_edid.h
··· 372 372 struct drm_connector_state; 373 373 struct drm_display_mode; 374 374 375 - int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads); 376 - int drm_edid_to_speaker_allocation(struct edid *edid, u8 **sadb); 375 + int drm_edid_to_sad(const struct edid *edid, struct cea_sad **sads); 376 + int drm_edid_to_speaker_allocation(const struct edid *edid, u8 **sadb); 377 377 int drm_av_sync_delay(struct drm_connector *connector, 378 378 const struct drm_display_mode *mode); 379 379 ··· 569 569 int drm_add_override_edid_modes(struct drm_connector *connector); 570 570 571 571 u8 drm_match_cea_mode(const struct drm_display_mode *to_match); 572 - bool drm_detect_hdmi_monitor(struct edid *edid); 573 - bool drm_detect_monitor_audio(struct edid *edid); 572 + bool drm_detect_hdmi_monitor(const struct edid *edid); 573 + bool drm_detect_monitor_audio(const struct edid *edid); 574 574 enum hdmi_quantization_range 575 575 drm_default_rgb_quant_range(const struct drm_display_mode *mode); 576 576 int drm_add_modes_noedid(struct drm_connector *connector, ··· 578 578 void drm_set_preferred_mode(struct drm_connector *connector, 579 579 int hpref, int vpref); 580 580 581 - int drm_edid_header_is_valid(const u8 *raw_edid); 581 + int drm_edid_header_is_valid(const void *edid); 582 582 bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid, 583 583 bool *edid_corrupt); 584 584 bool drm_edid_is_valid(struct edid *edid); 585 - void drm_edid_get_monitor_name(struct edid *edid, char *name, 585 + void drm_edid_get_monitor_name(const struct edid *edid, char *name, 586 586 int buflen); 587 587 struct drm_display_mode *drm_mode_find_dmt(struct drm_device *dev, 588 588 int hsize, int vsize, int fresh,
+1 -1
include/drm/drm_file.h
··· 248 248 */ 249 249 struct drm_master *master; 250 250 251 - /** @master_lock: Serializes @master. */ 251 + /** @master_lookup_lock: Serializes @master. */ 252 252 spinlock_t master_lookup_lock; 253 253 254 254 /** @pid: Process that opened this file. */
+2 -3
include/drm/drm_format_helper.h
··· 43 43 const void *vmap, const struct drm_framebuffer *fb, 44 44 const struct drm_rect *rect); 45 45 46 - void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const void *src, 47 - const struct drm_framebuffer *fb, 48 - const struct drm_rect *clip); 46 + void drm_fb_xrgb8888_to_mono(void *dst, unsigned int dst_pitch, const void *src, 47 + const struct drm_framebuffer *fb, const struct drm_rect *clip); 49 48 50 49 #endif /* __LINUX_DRM_FORMAT_HELPER_H */
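The renamed helper packs XRGB8888 pixels into a one-bit-per-pixel monochrome buffer. The exact luma conversion, threshold, and bit order used by the DRM helper are not visible in this hunk; the sketch below is only an illustrative model of the general shape (grayscale, threshold, pack eight pixels per byte, LSB first):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Rough model of an XRGB8888-to-monochrome pack. Hypothetical name and
 * coefficients; the real drm_fb_xrgb8888_to_mono() may differ in
 * thresholding and bit order. */
static void xrgb8888_to_mono(uint8_t *dst, const uint32_t *src,
			     unsigned int npixels)
{
	memset(dst, 0, (npixels + 7) / 8);
	for (unsigned int i = 0; i < npixels; i++) {
		uint32_t px = src[i];
		unsigned int r = (px >> 16) & 0xff;
		unsigned int g = (px >> 8) & 0xff;
		unsigned int b = px & 0xff;
		/* crude integer luma approximation */
		unsigned int gray = (3 * r + 6 * g + b) / 10;

		if (gray >= 128)
			dst[i / 8] |= 1 << (i % 8);
	}
}
```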
-5
include/drm/drm_gem.h
··· 407 407 struct ww_acquire_ctx *acquire_ctx); 408 408 void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, 409 409 struct ww_acquire_ctx *acquire_ctx); 410 - int drm_gem_fence_array_add(struct xarray *fence_array, 411 - struct dma_fence *fence); 412 - int drm_gem_fence_array_add_implicit(struct xarray *fence_array, 413 - struct drm_gem_object *obj, 414 - bool write); 415 410 int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, 416 411 u32 handle, u64 *offset); 417 412
+2
include/drm/drm_mipi_dsi.h
··· 137 137 #define MIPI_DSI_CLOCK_NON_CONTINUOUS BIT(10) 138 138 /* transmit data in low power */ 139 139 #define MIPI_DSI_MODE_LPM BIT(11) 140 + /* transmit data ending at the same time for all lanes within one hsync */ 141 + #define MIPI_DSI_HS_PKT_END_ALIGNED BIT(12) 140 142 141 143 enum mipi_dsi_pixel_format { 142 144 MIPI_DSI_FMT_RGB888,
+2
include/drm/drm_modes.h
··· 492 492 int adjust_flags); 493 493 void drm_mode_copy(struct drm_display_mode *dst, 494 494 const struct drm_display_mode *src); 495 + void drm_mode_init(struct drm_display_mode *dst, 496 + const struct drm_display_mode *src); 495 497 struct drm_display_mode *drm_mode_duplicate(struct drm_device *dev, 496 498 const struct drm_display_mode *mode); 497 499 bool drm_mode_match(const struct drm_display_mode *mode1,
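drm_mode_copy() exists because struct drm_display_mode embeds a list head linking it into a mode list; a plain struct assignment or memcpy() would clobber that linkage, which is why this series switches drivers over to it. The new drm_mode_init() additionally starts from a zeroed destination. A simplified model with hypothetical types:

```c
#include <assert.h>
#include <string.h>

struct list_head { struct list_head *next, *prev; };

/* Stand-in for struct drm_display_mode: timing fields plus an embedded
 * list head that links the mode into a connector's mode list. */
struct fake_mode {
	struct list_head head;
	int hdisplay, vdisplay, clock;
};

/* Modelled on drm_mode_copy(): copy everything except dst's list linkage. */
static void mode_copy(struct fake_mode *dst, const struct fake_mode *src)
{
	struct list_head saved = dst->head;

	*dst = *src;
	dst->head = saved;
}

/* Modelled on the new drm_mode_init(): copy into a zero-initialized mode,
 * so the destination's linkage starts out cleared rather than preserved. */
static void mode_init(struct fake_mode *dst, const struct fake_mode *src)
{
	memset(dst, 0, sizeof(*dst));
	mode_copy(dst, src);
}
```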
+1 -1
include/drm/drm_modeset_helper_vtables.h
··· 1384 1384 * starting to commit the update to the hardware. 1385 1385 * 1386 1386 * After the atomic update is committed to the hardware this hook needs 1387 - * to call drm_atomic_helper_commit_hw_done(). Then wait for the upate 1387 + * to call drm_atomic_helper_commit_hw_done(). Then wait for the update 1388 1388 * to be executed by the hardware, for example using 1389 1389 * drm_atomic_helper_wait_for_vblanks() or 1390 1390 * drm_atomic_helper_wait_for_flip_done(), and then clean up the old
+1
include/drm/gpu_scheduler.h
··· 270 270 * @sched: the scheduler instance on which this job is scheduled. 271 271 * @s_fence: contains the fences for the scheduling of job. 272 272 * @finish_cb: the callback for the finished fence. 273 + * @work: Helper to reschedule job kill to different context. 273 274 * @id: a unique id assigned to each job scheduled on the scheduler. 274 275 * @karma: increment on every hang caused by this job. If this exceeds the hang 275 276 * limit of the scheduler then the job is marked guilty and will not
+6 -56
include/drm/ttm/ttm_bo_api.h
··· 55 55 56 56 struct ttm_place; 57 57 58 - struct ttm_lru_bulk_move; 59 - 60 58 /** 61 59 * enum ttm_bo_type 62 60 * ··· 92 94 * @ttm: TTM structure holding system pages. 93 95 * @evicted: Whether the object was evicted without user-space knowing. 94 96 * @deleted: True if the object is only a zombie and already deleted. 95 - * @lru: List head for the lru list. 96 97 * @ddestroy: List head for the delayed destroy list. 97 98 * @swap: List head for swap LRU list. 98 99 * @moving: Fence set when BO is moving ··· 135 138 struct ttm_resource *resource; 136 139 struct ttm_tt *ttm; 137 140 bool deleted; 141 + struct ttm_lru_bulk_move *bulk_move; 138 142 139 143 /** 140 144 * Members protected by the bdev::lru_lock. 141 145 */ 142 146 143 - struct list_head lru; 144 147 struct list_head ddestroy; 145 148 146 149 /** ··· 288 291 */ 289 292 void ttm_bo_put(struct ttm_buffer_object *bo); 290 293 291 - /** 292 - * ttm_bo_move_to_lru_tail 293 - * 294 - * @bo: The buffer object. 295 - * @mem: Resource object. 296 - * @bulk: optional bulk move structure to remember BO positions 297 - * 298 - * Move this BO to the tail of all lru lists used to lookup and reserve an 299 - * object. This function must be called with struct ttm_global::lru_lock 300 - * held, and is used to make a BO less likely to be considered for eviction. 301 - */ 302 - void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo, 303 - struct ttm_resource *mem, 304 - struct ttm_lru_bulk_move *bulk); 305 - 306 - /** 307 - * ttm_bo_bulk_move_lru_tail 308 - * 309 - * @bulk: bulk move structure 310 - * 311 - * Bulk move BOs to the LRU tail, only valid to use when driver makes sure that 312 - * BO order never changes. Should be called with ttm_global::lru_lock held. 
313 - */ 314 - void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk); 294 + void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo); 295 + void ttm_bo_set_bulk_move(struct ttm_buffer_object *bo, 296 + struct ttm_lru_bulk_move *bulk); 315 297 316 298 /** 317 299 * ttm_bo_lock_delayed_workqueue ··· 516 540 int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, 517 541 gfp_t gfp_flags); 518 542 519 - /** 520 - * ttm_bo_pin - Pin the buffer object. 521 - * @bo: The buffer object to pin 522 - * 523 - * Make sure the buffer is not evicted any more during memory pressure. 524 - */ 525 - static inline void ttm_bo_pin(struct ttm_buffer_object *bo) 526 - { 527 - dma_resv_assert_held(bo->base.resv); 528 - WARN_ON_ONCE(!kref_read(&bo->kref)); 529 - ++bo->pin_count; 530 - } 531 - 532 - /** 533 - * ttm_bo_unpin - Unpin the buffer object. 534 - * @bo: The buffer object to unpin 535 - * 536 - * Allows the buffer object to be evicted again during memory pressure. 537 - */ 538 - static inline void ttm_bo_unpin(struct ttm_buffer_object *bo) 539 - { 540 - dma_resv_assert_held(bo->base.resv); 541 - WARN_ON_ONCE(!kref_read(&bo->kref)); 542 - if (bo->pin_count) 543 - --bo->pin_count; 544 - else 545 - WARN_ON_ONCE(true); 546 - } 543 + void ttm_bo_pin(struct ttm_buffer_object *bo); 544 + void ttm_bo_unpin(struct ttm_buffer_object *bo); 547 545 548 546 int ttm_mem_evict_first(struct ttm_device *bdev, 549 547 struct ttm_resource_manager *man,
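The bulk-move rework replaces the driver-visible first/last BO bookkeeping with state that drivers initialize once (ttm_lru_bulk_move_init()) and attach to BOs via ttm_bo_set_bulk_move(); LRU maintenance then happens inside TTM. The underlying idea, remembering a contiguous LRU range so the whole group can be re-tailed in O(1), can be modelled in userspace as follows (single memory type and priority, hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

struct res { struct list_head lru; int id; };

/* Modelled on struct ttm_lru_bulk_move: remember the first and last
 * resource of a contiguous LRU range. (The kernel keeps one such pos
 * per memory type and priority.) */
struct bulk_move { struct res *first, *last; };

static void bulk_init(struct bulk_move *b) { b->first = b->last = NULL; }

/* Modelled on ttm_lru_bulk_move_add(): grow the tracked range. */
static void bulk_add(struct bulk_move *b, struct res *r, struct list_head *lru)
{
	list_add_tail(&r->lru, lru);
	if (!b->first)
		b->first = r;
	b->last = r;
}

/* Modelled on ttm_lru_bulk_move_tail(): splice the whole [first, last]
 * range to the LRU tail in one constant-time operation. */
static void bulk_move_tail(struct bulk_move *b, struct list_head *lru)
{
	struct list_head *first, *last;

	if (!b->first)
		return;
	first = &b->first->lru;
	last = &b->last->lru;

	/* unlink the range */
	first->prev->next = last->next;
	last->next->prev = first->prev;
	/* splice it back in at the tail */
	first->prev = lru->prev;
	last->next = lru;
	lru->prev->next = first;
	lru->prev = last;
}

static int nth_id(struct list_head *lru, int n)
{
	struct list_head *p = lru->next;

	while (n--)
		p = p->next;
	return ((struct res *)p)->id;	/* lru is the first member */
}
```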
+1 -28
include/drm/ttm/ttm_bo_driver.h
··· 45 45 #include "ttm_tt.h" 46 46 #include "ttm_pool.h" 47 47 48 - /** 49 - * struct ttm_lru_bulk_move_pos 50 - * 51 - * @first: first BO in the bulk move range 52 - * @last: last BO in the bulk move range 53 - * 54 - * Positions for a lru bulk move. 55 - */ 56 - struct ttm_lru_bulk_move_pos { 57 - struct ttm_buffer_object *first; 58 - struct ttm_buffer_object *last; 59 - }; 60 - 61 - /** 62 - * struct ttm_lru_bulk_move 63 - * 64 - * @tt: first/last lru entry for BOs in the TT domain 65 - * @vram: first/last lru entry for BOs in the VRAM domain 66 - * @swap: first/last lru entry for BOs on the swap list 67 - * 68 - * Helper structure for bulk moves on the LRU list. 69 - */ 70 - struct ttm_lru_bulk_move { 71 - struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY]; 72 - struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY]; 73 - }; 74 - 75 48 /* 76 49 * ttm_bo.c 77 50 */ ··· 155 182 ttm_bo_move_to_lru_tail_unlocked(struct ttm_buffer_object *bo) 156 183 { 157 184 spin_lock(&bo->bdev->lru_lock); 158 - ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); 185 + ttm_bo_move_to_lru_tail(bo); 159 186 spin_unlock(&bo->bdev->lru_lock); 160 187 } 161 188
-11
include/drm/ttm/ttm_device.h
··· 30 30 #include <drm/ttm/ttm_resource.h> 31 31 #include <drm/ttm/ttm_pool.h> 32 32 33 - #define TTM_NUM_MEM_TYPES 8 34 - 35 33 struct ttm_device; 36 34 struct ttm_placement; 37 35 struct ttm_buffer_object; ··· 197 199 */ 198 200 int (*access_memory)(struct ttm_buffer_object *bo, unsigned long offset, 199 201 void *buf, int len, int write); 200 - 201 - /** 202 - * struct ttm_bo_driver member del_from_lru_notify 203 - * 204 - * @bo: the buffer object deleted from lru 205 - * 206 - * notify driver that a BO was deleted from LRU. 207 - */ 208 - void (*del_from_lru_notify)(struct ttm_buffer_object *bo); 209 202 210 203 /** 211 204 * Notify the driver that we're about to release a BO
+74
include/drm/ttm/ttm_resource.h
··· 26 26 #define _TTM_RESOURCE_H_ 27 27 28 28 #include <linux/types.h> 29 + #include <linux/list.h> 29 30 #include <linux/mutex.h> 30 31 #include <linux/iosys-map.h> 31 32 #include <linux/dma-fence.h> 33 + 32 34 #include <drm/drm_print.h> 33 35 #include <drm/ttm/ttm_caching.h> 34 36 #include <drm/ttm/ttm_kmap_iter.h> 35 37 36 38 #define TTM_MAX_BO_PRIORITY 4U 39 + #define TTM_NUM_MEM_TYPES 8 37 40 38 41 struct ttm_device; 39 42 struct ttm_resource_manager; ··· 181 178 uint32_t placement; 182 179 struct ttm_bus_placement bus; 183 180 struct ttm_buffer_object *bo; 181 + 182 + /** 183 + * @lru: Least recently used list, see &ttm_resource_manager.lru 184 + */ 185 + struct list_head lru; 186 + }; 187 + 188 + /** 189 + * struct ttm_resource_cursor 190 + * 191 + * @priority: the current priority 192 + * 193 + * Cursor to iterate over the resources in a manager. 194 + */ 195 + struct ttm_resource_cursor { 196 + unsigned int priority; 197 + }; 198 + 199 + /** 200 + * struct ttm_lru_bulk_move_pos 201 + * 202 + * @first: first res in the bulk move range 203 + * @last: last res in the bulk move range 204 + * 205 + * Range of resources for a lru bulk move. 206 + */ 207 + struct ttm_lru_bulk_move_pos { 208 + struct ttm_resource *first; 209 + struct ttm_resource *last; 210 + }; 211 + 212 + /** 213 + * struct ttm_lru_bulk_move 214 + * 215 + * @tt: first/last lru entry for resources in the TT domain 216 + * @vram: first/last lru entry for resources in the VRAM domain 217 + * 218 + * Container for the current bulk move state. Should be used with 219 + * ttm_lru_bulk_move_init() and ttm_bo_set_bulk_move(). 
220 + */ 221 + struct ttm_lru_bulk_move { 222 + struct ttm_lru_bulk_move_pos pos[TTM_NUM_MEM_TYPES][TTM_MAX_BO_PRIORITY]; 184 223 }; 185 224 186 225 /** ··· 311 266 man->move = NULL; 312 267 } 313 268 269 + void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk); 270 + void ttm_lru_bulk_move_add(struct ttm_lru_bulk_move *bulk, 271 + struct ttm_resource *res); 272 + void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk, 273 + struct ttm_resource *res); 274 + void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk); 275 + 276 + void ttm_resource_move_to_lru_tail(struct ttm_resource *res); 277 + 314 278 void ttm_resource_init(struct ttm_buffer_object *bo, 315 279 const struct ttm_place *place, 316 280 struct ttm_resource *res); ··· 345 291 uint64_t ttm_resource_manager_usage(struct ttm_resource_manager *man); 346 292 void ttm_resource_manager_debug(struct ttm_resource_manager *man, 347 293 struct drm_printer *p); 294 + 295 + struct ttm_resource * 296 + ttm_resource_manager_first(struct ttm_resource_manager *man, 297 + struct ttm_resource_cursor *cursor); 298 + struct ttm_resource * 299 + ttm_resource_manager_next(struct ttm_resource_manager *man, 300 + struct ttm_resource_cursor *cursor, 301 + struct ttm_resource *res); 302 + 303 + /** 304 + * ttm_resource_manager_for_each_res - iterate over all resources 305 + * @man: the resource manager 306 + * @cursor: struct ttm_resource_cursor for the current position 307 + * @res: the current resource 308 + * 309 + * Iterate over all the evictable resources in a resource manager. 310 + */ 311 + #define ttm_resource_manager_for_each_res(man, cursor, res) \ 312 + for (res = ttm_resource_manager_first(man, cursor); res; \ 313 + res = ttm_resource_manager_next(man, cursor, res)) 348 314 349 315 struct ttm_kmap_iter * 350 316 ttm_kmap_iter_iomap_init(struct ttm_kmap_iter_iomap *iter_io,
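ttm_resource_manager_first()/_next() plus the cursor give a resumable walk across the manager's per-priority LRU lists, and the new macro wraps them in a for loop. A self-contained model of the same iterator shape (hypothetical names, plain singly linked lists in place of the kernel's LRU):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PRIORITY 4

struct node { struct node *next; int id; };

/* Modelled on struct ttm_resource_cursor: remembers which priority
 * level the walk is on so iteration can resume across lists. */
struct cursor { unsigned int priority; };

struct manager { struct node *lru[MAX_PRIORITY]; };

/* Modelled on ttm_resource_manager_first(): find the first non-empty
 * priority list. */
static struct node *mgr_first(struct manager *m, struct cursor *c)
{
	for (c->priority = 0; c->priority < MAX_PRIORITY; c->priority++)
		if (m->lru[c->priority])
			return m->lru[c->priority];
	return NULL;
}

/* Modelled on ttm_resource_manager_next(): continue on the current list,
 * then fall through to the next non-empty priority. */
static struct node *mgr_next(struct manager *m, struct cursor *c,
			     struct node *n)
{
	if (n->next)
		return n->next;
	for (c->priority++; c->priority < MAX_PRIORITY; c->priority++)
		if (m->lru[c->priority])
			return m->lru[c->priority];
	return NULL;
}

/* Same shape as the new ttm_resource_manager_for_each_res() macro. */
#define mgr_for_each(m, c, n) \
	for (n = mgr_first(m, c); n; n = mgr_next(m, c, n))
```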
+3 -1
include/drm/ttm/ttm_tt.h
··· 140 140 * @bo: The buffer object we create the ttm for. 141 141 * @page_flags: Page flags as identified by TTM_TT_FLAG_XX flags. 142 142 * @caching: the desired caching state of the pages 143 + * @extra_pages: Extra pages needed for the driver. 143 144 * 144 145 * Create a struct ttm_tt to back data with system memory pages. 145 146 * No pages are actually allocated. ··· 148 147 * NULL: Out of memory. 149 148 */ 150 149 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, 151 - uint32_t page_flags, enum ttm_caching caching); 150 + uint32_t page_flags, enum ttm_caching caching, 151 + unsigned long extra_pages); 152 152 int ttm_sg_tt_init(struct ttm_tt *ttm_dma, struct ttm_buffer_object *bo, 153 153 uint32_t page_flags, enum ttm_caching caching); 154 154
+1 -3
include/linux/dma-buf.h
··· 424 424 * IMPORTANT: 425 425 * 426 426 * All drivers must obey the struct dma_resv rules, specifically the 427 - * rules for updating fences, see &dma_resv.fence_excl and 428 - * &dma_resv.fence. If these dependency rules are broken access tracking 429 - * can be lost resulting in use after free issues. 427 + * rules for updating and obeying fences. 430 428 */ 431 429 struct dma_resv *resv; 432 430
+12 -61
include/linux/dma-resv.h
··· 47 47 48 48 extern struct ww_class reservation_ww_class; 49 49 50 - /** 51 - * struct dma_resv_list - a list of shared fences 52 - * @rcu: for internal use 53 - * @shared_count: table of shared fences 54 - * @shared_max: for growing shared fence table 55 - * @shared: shared fence table 56 - */ 57 - struct dma_resv_list { 58 - struct rcu_head rcu; 59 - u32 shared_count, shared_max; 60 - struct dma_fence __rcu *shared[]; 61 - }; 50 + struct dma_resv_list; 62 51 63 52 /** 64 53 * struct dma_resv - a reservation object manages fences for a buffer ··· 93 104 * 94 105 * The exclusive fence, if there is one currently. 95 106 * 96 - * There are two ways to update this fence: 97 - * 98 - * - First by calling dma_resv_add_excl_fence(), which replaces all 99 - * fences attached to the reservation object. To guarantee that no 100 - * fences are lost, this new fence must signal only after all previous 101 - * fences, both shared and exclusive, have signalled. In some cases it 102 - * is convenient to achieve that by attaching a struct dma_fence_array 103 - * with all the new and old fences. 104 - * 105 - * - Alternatively the fence can be set directly, which leaves the 106 - * shared fences unchanged. To guarantee that no fences are lost, this 107 - * new fence must signal only after the previous exclusive fence has 108 - * signalled. Since the shared fences are staying intact, it is not 109 - * necessary to maintain any ordering against those. If semantically 110 - * only a new access is added without actually treating the previous 111 - * one as a dependency the exclusive fences can be strung together 112 - * using struct dma_fence_chain. 107 + * To guarantee that no fences are lost, this new fence must signal 108 + * only after the previous exclusive fence has signalled. If 109 + * semantically only a new access is added without actually treating the 110 + * previous one as a dependency the exclusive fences can be strung 111 + * together using struct dma_fence_chain. 
113 112 * 114 113 * Note that actual semantics of what an exclusive or shared fence mean 115 114 * is defined by the user, for reservation objects shared across drivers ··· 117 140 * A new fence is added by calling dma_resv_add_shared_fence(). Since 118 141 * this often needs to be done past the point of no return in command 119 142 * submission it cannot fail, and therefore sufficient slots need to be 120 - * reserved by calling dma_resv_reserve_shared(). 143 + * reserved by calling dma_resv_reserve_fences(). 121 144 * 122 145 * Note that actual semantics of what an exclusive or shared fence mean 123 146 * is defined by the user, for reservation objects shared across drivers ··· 411 434 ww_mutex_unlock(&obj->lock); 412 435 } 413 436 414 - /** 415 - * dma_resv_excl_fence - return the object's exclusive fence 416 - * @obj: the reservation object 417 - * 418 - * Returns the exclusive fence (if any). Caller must either hold the objects 419 - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), 420 - * or one of the variants of each 421 - * 422 - * RETURNS 423 - * The exclusive fence or NULL 424 - */ 425 - static inline struct dma_fence * 426 - dma_resv_excl_fence(struct dma_resv *obj) 427 - { 428 - return rcu_dereference_check(obj->fence_excl, dma_resv_held(obj)); 429 - } 430 - 431 - /** 432 - * dma_resv_shared_list - get the reservation object's shared fence list 433 - * @obj: the reservation object 434 - * 435 - * Returns the shared fence list. 
Caller must either hold the objects 436 - * through dma_resv_lock() or the RCU read side lock through rcu_read_lock(), 437 - * or one of the variants of each 438 - */ 439 - static inline struct dma_resv_list *dma_resv_shared_list(struct dma_resv *obj) 440 - { 441 - return rcu_dereference_check(obj->fence, dma_resv_held(obj)); 442 - } 443 - 444 437 void dma_resv_init(struct dma_resv *obj); 445 438 void dma_resv_fini(struct dma_resv *obj); 446 - int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences); 439 + int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences); 447 440 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence); 441 + void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context, 442 + struct dma_fence *fence); 448 443 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence); 449 444 int dma_resv_get_fences(struct dma_resv *obj, bool write, 450 445 unsigned int *num_fences, struct dma_fence ***fences); 446 + int dma_resv_get_singleton(struct dma_resv *obj, bool write, 447 + struct dma_fence **fence); 451 448 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); 452 449 long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr, 453 450 unsigned long timeout);
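dma_resv_reserve_fences() (the renamed dma_resv_reserve_shared()) exists because fences are often added past the point of no return in command submission, where an allocation failure could no longer be handled: all slots are reserved up front, and the add itself cannot fail. A userspace model of that reserve-then-add contract (hypothetical names; plain ints stand in for struct dma_fence pointers):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal model of the fence-slot reservation contract behind
 * dma_resv_reserve_fences(). */
struct fence_list {
	unsigned int count, max;
	int *fences;	/* stand-in for struct dma_fence pointers */
};

/* All allocation happens here, where failure is still recoverable. */
static int resv_reserve_fences(struct fence_list *l, unsigned int num)
{
	int *f;

	if (l->count + num <= l->max)
		return 0;
	f = realloc(l->fences, (l->count + num) * sizeof(*f));
	if (!f)
		return -1;	/* -ENOMEM in the kernel */
	l->fences = f;
	l->max = l->count + num;
	return 0;
}

/* Like dma_resv_add_shared_fence(): may not allocate, may not fail. */
static void resv_add_fence(struct fence_list *l, int fence)
{
	assert(l->count < l->max);	/* caller must have reserved a slot */
	l->fences[l->count++] = fence;
}
```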
-4
include/linux/efi.h
··· 1329 1329 } 1330 1330 #endif 1331 1331 1332 - #ifdef CONFIG_SYSFB 1333 1332 extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt); 1334 - #else 1335 - static inline void efifb_setup_from_dmi(struct screen_info *si, const char *opt) { } 1336 - #endif 1337 1333 1338 1334 #endif /* _LINUX_EFI_H */
+8 -1
include/uapi/drm/vmwgfx_drm.h
··· 1 1 /************************************************************************** 2 2 * 3 - * Copyright © 2009-2015 VMware, Inc., Palo Alto, CA., USA 3 + * Copyright © 2009-2022 VMware, Inc., Palo Alto, CA., USA 4 4 * All Rights Reserved. 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a ··· 92 92 * 93 93 * DRM_VMW_PARAM_SM5 94 94 * SM5 support is enabled. 95 + * 96 + * DRM_VMW_PARAM_GL43 97 + * SM5.1+GL4.3 support is enabled. 98 + * 99 + * DRM_VMW_PARAM_DEVICE_ID 100 + * PCI ID of the underlying SVGA device. 95 101 */ 96 102 97 103 #define DRM_VMW_PARAM_NUM_STREAMS 0 ··· 117 111 #define DRM_VMW_PARAM_SM4_1 14 118 112 #define DRM_VMW_PARAM_SM5 15 119 113 #define DRM_VMW_PARAM_GL43 16 114 + #define DRM_VMW_PARAM_DEVICE_ID 17 120 115 121 116 /** 122 117 * enum drm_vmw_handle_type - handle type for ref ioctls