
Merge tag 'drm-misc-next-2019-10-09-2' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.5:

UAPI Changes:
-Colorspace: Expose different prop values for DP vs. HDMI (Gwan-gyeong Mun)
-fourcc: Add DRM_FORMAT_MOD_ARM_16X16_BLOCK_U_INTERLEAVED (Raymond)
-not_actually: s/ENOTSUPP/EOPNOTSUPP/ in drm_edid and drm_mipi_dbi. This should
not reach userspace, but adding here to specifically call that out (Daniel)
-i810: Prevent underflow in dispatch ioctls (Dan)
-komeda: Add ACLK sysfs attribute (Mihail)
-v3d: Allow userspace to clean up after render jobs (Iago)

Cross-subsystem Changes:
-MAINTAINERS:
-Add Alyssa & Steven as panfrost reviewers (Rob)
-Add Jernej as DE2 reviewer (Maxime)
-Add Chen-Yu as Allwinner maintainer (Maxime)
-staging: Make some stack arrays static const (Colin)

Core Changes:
-ttm: Allow drivers to specify their vma manager (to use gem mgr) (Gerd)
-docs: Various fixes in connector/encoder/bridge docs (Daniel, Lyude, Laurent)
-connector: Allow more than 3 possible encoders for a connector (José)
-dp_cec: Allow a connector to be associated with a cec device (Dariusz)
-various: Fix some compile/sparse warnings (Ville)
-mm: Ensure mm node removals are properly serialised (Chris)
-panel: Specify the type of panel for drm_panels for later use (Laurent)
-panel: Use drm_panel_init to init device and funcs (Laurent)
-mst: Refactors and cleanups in anticipation of suspend/resume support (Lyude)
-vram:
-Add lazy unmapping for gem bo's (Thomas)
-Unify and rationalize vram mm and gem vram (Thomas)
-Expose vmap and vunmap for gem vram objects (Thomas)
-Allow objects to be pinned at the top of vram to avoid fragmentation (Thomas)

Driver Changes:
-various: Include drm_bridge.h instead of relying on drm_crtc.h (Boris)
-ast/mgag200: Refactor show_cursor(), move cursor to top of video mem (Thomas)
-komeda:
-Add error event printing (behind CONFIG) and reg dump support (Lowry)
-Add suspend/resume support (Lowry)
-Workaround D71 shadow registers not flushing on disable (Lowry)
-meson: Add suspend/resume support (Neil)
-omap: Miscellaneous refactors and improvements (Tomi/Jyri)
-panfrost/shmem: Silence lockdep by using mutex_trylock (Rob)
-panfrost: Miscellaneous small fixes (Rob/Steven)
-sti: Fix warnings (Benjamin/Linus)
-sun4i:
-Add vcc-dsi regulator to sun6i_mipi_dsi (Jagan)
-A few patches to figure out the DRQ/start delay calc on dsi (Jagan/Icenowy)
-virtio:
-Add module param to switch resource reuse workaround on/off (Gerd)
-Avoid calling vmexit while holding spinlock (Gerd)
-Use gem shmem helpers instead of ttm (Gerd)
-Accommodate command buffer allocations too big for cma (David)

Cc: Rob Herring <robh@kernel.org>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Dariusz Marcinkiewicz <darekm@google.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Raymond Smith <raymond.smith@arm.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mihail Atanassov <Mihail.Atanassov@arm.com>
Cc: Lowry Li <Lowry.Li@arm.com>
Cc: Neil Armstrong <narmstrong@baylibre.com>
Cc: Jyri Sarha <jsarha@ti.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Benjamin Gaignard <benjamin.gaignard@st.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jagan Teki <jagan@amarulasolutions.com>
Cc: Icenowy Zheng <icenowy@aosc.io>
Cc: Iago Toral Quiroga <itoral@igalia.com>
Cc: David Riley <davidriley@chromium.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Thu 10 Oct 2019 01:00:47 AM AEST
# gpg: using RSA key 732C002572DCAF79
# gpg: Can't check signature: public key not found

# Conflicts:
# drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
# drivers/gpu/drm/i915/i915_drv.c
# drivers/gpu/drm/i915/i915_gem.c
# drivers/gpu/drm/i915/i915_gem_gtt.c
# drivers/gpu/drm/i915/i915_vma.c
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009150825.GA227673@art_vandelay

+4661 -2998
+5
Documentation/devicetree/bindings/display/allwinner,sun6i-a31-mipi-dsi.yaml
@@ -36 +36 @@
   resets:
     maxItems: 1
 
+  vcc-dsi-supply:
+    description: VCC-DSI power supply of the DSI encoder
+
   phys:
     maxItems: 1
@@ -67 +64 @@
   - phys
   - phy-names
   - resets
+  - vcc-dsi-supply
   - port
 
 additionalProperties: false
@@ -83 +79 @@
         resets = <&ccu 4>;
         phys = <&dphy0>;
         phy-names = "dphy";
+        vcc-dsi-supply = <&reg_dcdc1>;
         #address-cells = <1>;
         #size-cells = <0>;
+5 -1
Documentation/devicetree/bindings/display/bridge/anx7814.txt
@@ -6 +6 @@
 
 Required properties:
 
- - compatible : "analogix,anx7814"
+ - compatible : Must be one of:
+                  "analogix,anx7808"
+                  "analogix,anx7812"
+                  "analogix,anx7814"
+                  "analogix,anx7818"
 - reg : I2C address of the device
 - interrupts : Should contain the INTP interrupt
 - hpd-gpios : Which GPIO to use for hpd
+4 -7
Documentation/gpu/drm-mm.rst
@@ -400 +400 @@
 .. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
    :export:
 
-VRAM MM Helper Functions Reference
-----------------------------------
+GEM TTM Helper Functions Reference
+-----------------------------------
 
-.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
    :doc: overview
 
-.. kernel-doc:: include/drm/drm_vram_mm_helper.h
-   :internal:
-
-.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
    :export:
 
 VMA Offset Manager
+1 -1
Documentation/gpu/mcde.rst
@@ -5 +5 @@
 =======================================================
 
 .. kernel-doc:: drivers/gpu/drm/mcde/mcde_drv.c
-   :doc: ST-Ericsson MCDE DRM Driver
+   :doc: ST-Ericsson MCDE Driver
+12
Documentation/gpu/todo.rst
@@ -284 +284 @@
 removed: drm_fb_helper_single_add_all_connectors(),
 drm_fb_helper_add_one_connector() and drm_fb_helper_remove_one_connector().
 
+connector register/unregister fixes
+-----------------------------------
+
+- For most connectors it's a no-op to call drm_connector_register/unregister
+  directly from driver code, drm_dev_register/unregister take care of this
+  already. We can remove all of them.
+
+- For dp drivers it's a bit more a mess, since we need the connector to be
+  registered when calling drm_dp_aux_register. Fix this by instead calling
+  drm_dp_aux_init, and moving the actual registering into a late_register
+  callback as recommended in the kerneldoc.
+
 Core refactorings
 =================
+12
MAINTAINERS
@@ -1272 +1272 @@
 ARM MALI PANFROST DRM DRIVER
 M:	Rob Herring <robh@kernel.org>
 M:	Tomeu Vizoso <tomeu.vizoso@collabora.com>
+R:	Steven Price <steven.price@arm.com>
+R:	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 T:	git git://anongit.freedesktop.org/drm/drm-misc
@@ -5378 +5376 @@
 
 DRM DRIVERS FOR ALLWINNER A10
 M:	Maxime Ripard <mripard@kernel.org>
+M:	Chen-Yu Tsai <wens@csie.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 F:	drivers/gpu/drm/sun4i/
 F:	Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
+DRM DRIVER FOR ALLWINNER DE2 AND DE3 ENGINE
+M:	Maxime Ripard <mripard@kernel.org>
+M:	Chen-Yu Tsai <wens@csie.org>
+R:	Jernej Skrabec <jernej.skrabec@siol.net>
+L:	dri-devel@lists.freedesktop.org
+S:	Supported
+F:	drivers/gpu/drm/sun4i/sun8i*
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
 DRM DRIVERS FOR AMLOGIC SOCS
+35 -43
drivers/dma-buf/dma-fence.c
@@ -273 +273 @@
 }
 EXPORT_SYMBOL(dma_fence_free);
 
+static bool __dma_fence_enable_signaling(struct dma_fence *fence)
+{
+	bool was_set;
+
+	lockdep_assert_held(fence->lock);
+
+	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+				   &fence->flags);
+
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return false;
+
+	if (!was_set && fence->ops->enable_signaling) {
+		trace_dma_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			dma_fence_signal_locked(fence);
+			return false;
+		}
+	}
+
+	return true;
+}
+
 /**
  * dma_fence_enable_sw_signaling - enable signaling on fence
  * @fence: the fence to enable
@@ -309 +285 @@
 {
 	unsigned long flags;
 
-	if (!test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-			      &fence->flags) &&
-	    !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) &&
-	    fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return;
 
-		spin_lock_irqsave(fence->lock, flags);
-
-		if (!fence->ops->enable_signaling(fence))
-			dma_fence_signal_locked(fence);
-
-		spin_unlock_irqrestore(fence->lock, flags);
-	}
+	spin_lock_irqsave(fence->lock, flags);
+	__dma_fence_enable_signaling(fence);
+	spin_unlock_irqrestore(fence->lock, flags);
 }
 EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
@@ -348 +331 @@
 {
 	unsigned long flags;
 	int ret = 0;
-	bool was_set;
 
 	if (WARN_ON(!fence || !func))
 		return -EINVAL;
@@ -359 +343 @@
 
 	spin_lock_irqsave(fence->lock, flags);
 
-	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-				   &fence->flags);
-
-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
-		ret = -ENOENT;
-	else if (!was_set && fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
-
-		if (!fence->ops->enable_signaling(fence)) {
-			dma_fence_signal_locked(fence);
-			ret = -ENOENT;
-		}
-	}
-
-	if (!ret) {
+	if (__dma_fence_enable_signaling(fence)) {
 		cb->func = func;
 		list_add_tail(&cb->node, &fence->cb_list);
-	} else
+	} else {
 		INIT_LIST_HEAD(&cb->node);
+		ret = -ENOENT;
+	}
+
 	spin_unlock_irqrestore(fence->lock, flags);
 
 	return ret;
@@ -466 +461 @@
 	struct default_wait_cb cb;
 	unsigned long flags;
 	signed long ret = timeout ? timeout : 1;
-	bool was_set;
 
 	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 		return ret;
@@ -477 +473 @@
 		goto out;
 	}
 
-	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-				   &fence->flags);
-
-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+	if (!__dma_fence_enable_signaling(fence))
 		goto out;
-
-	if (!was_set && fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
-
-		if (!fence->ops->enable_signaling(fence)) {
-			dma_fence_signal_locked(fence);
-			goto out;
-		}
-	}
 
 	if (!timeout) {
 		ret = 0;
+1 -1
drivers/dma-buf/dma-resv.c
@@ -471 +471 @@
 	if (pfence_excl)
 		*pfence_excl = fence_excl;
 	else if (fence_excl)
-		shared[++shared_count] = fence_excl;
+		shared[shared_count++] = fence_excl;
 
 	if (!shared_count) {
 		kfree(shared);
+7 -1
drivers/gpu/drm/Kconfig
@@ -168 +168 @@
 config DRM_VRAM_HELPER
 	tristate
 	depends on DRM
-	select DRM_TTM
 	help
 	  Helpers for VRAM memory management
+
+config DRM_TTM_HELPER
+	tristate
+	depends on DRM
+	select DRM_TTM
+	help
+	  Helpers for ttm-based gem objects
 
 config DRM_GEM_CMA_HELPER
 	bool
+4 -2
drivers/gpu/drm/Makefile
@@ -33 +33 @@
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
 
 drm_vram_helper-y := drm_gem_vram_helper.o \
-		     drm_vram_helper_common.o \
-		     drm_vram_mm_helper.o
+		     drm_vram_helper_common.o
 obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
+
+drm_ttm_helper-y := drm_gem_ttm_helper.o
+obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
 
 drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \
 		drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
+8 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -217 +217 @@
 	struct drm_encoder *encoder;
 	const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
 	bool connected;
-	int i;
 
 	best_encoder = connector_funcs->best_encoder(connector);
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		if ((encoder == best_encoder) && (status == connector_status_connected))
 			connected = true;
 		else
@@ -235 +236 @@
 					       int encoder_type)
 {
 	struct drm_encoder *encoder;
-	int i;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		if (encoder->encoder_type == encoder_type)
 			return encoder;
 	}
@@ -345 +347 @@
 amdgpu_connector_best_single_encoder(struct drm_connector *connector)
 {
 	struct drm_encoder *encoder;
-	int i;
 
 	/* pick the first one */
-	drm_connector_for_each_possible_encoder(connector, encoder, i)
+	drm_connector_for_each_possible_encoder(connector, encoder)
 		return encoder;
 
 	return NULL;
@@ -1062 +1065 @@
 	/* find analog encoder */
 	if (amdgpu_connector->dac_load_detect) {
 		struct drm_encoder *encoder;
-		int i;
 
-		drm_connector_for_each_possible_encoder(connector, encoder, i) {
+		drm_connector_for_each_possible_encoder(connector, encoder) {
 			if (encoder->encoder_type != DRM_MODE_ENCODER_DAC &&
 			    encoder->encoder_type != DRM_MODE_ENCODER_TVDAC)
 				continue;
@@ -1113 +1117 @@
 {
 	struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
 	struct drm_encoder *encoder;
-	int i;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		if (amdgpu_connector->use_digital == true) {
 			if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS)
 				return encoder;
@@ -1129 +1134 @@
 
 	/* then check use digitial */
 	/* pick the first one */
-	drm_connector_for_each_possible_encoder(connector, encoder, i)
+	drm_connector_for_each_possible_encoder(connector, encoder)
 		return encoder;
 
 	return NULL;
@@ -1266 +1271 @@
 {
 	struct drm_encoder *encoder;
 	struct amdgpu_encoder *amdgpu_encoder;
-	int i;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		amdgpu_encoder = to_amdgpu_encoder(encoder);
 
 		switch (amdgpu_encoder->encoder_id) {
@@ -1286 +1292 @@
 {
 	struct drm_encoder *encoder;
 	struct amdgpu_encoder *amdgpu_encoder;
-	int i;
 	bool found = false;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		amdgpu_encoder = to_amdgpu_encoder(encoder);
 		if (amdgpu_encoder->caps & ATOM_ENCODER_CAP_RECORD_HBR2)
 			found = true;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1049 +1049 @@
 	}
 
 	/* Get rid of things like offb */
-	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb");
+	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
 	if (ret)
 		return ret;
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1731 +1731 @@
 	r = ttm_bo_device_init(&adev->mman.bdev,
 			       &amdgpu_bo_driver,
 			       adev->ddev->anon_inode->i_mapping,
+			       adev->ddev->vma_offset_manager,
 			       dma_addressing_limited(adev->dev));
 	if (r) {
 		DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
+2 -3
drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -260 +260 @@
 dce_virtual_encoder(struct drm_connector *connector)
 {
 	struct drm_encoder *encoder;
-	int i;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL)
 			return encoder;
 	}
 
 	/* pick the first one */
-	drm_connector_for_each_possible_encoder(connector, encoder, i)
+	drm_connector_for_each_possible_encoder(connector, encoder)
 		return encoder;
 
 	return NULL;
+7 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4837 +4837 @@
 
 static struct drm_encoder *amdgpu_dm_connector_to_encoder(struct drm_connector *connector)
 {
-	return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
+	struct drm_encoder *encoder;
+
+	/* There is only one encoder per connector */
+	drm_connector_for_each_possible_encoder(connector, encoder)
+		return encoder;
+
+	return NULL;
 }
 
 static void amdgpu_dm_get_native_mode(struct drm_connector *connector)
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -416 +416 @@
 
 	drm_dp_aux_register(&aconnector->dm_dp_aux.aux);
 	drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
-				      aconnector->base.name, dm->adev->dev);
+				      &aconnector->base);
 	aconnector->mst_mgr.cbs = &dm_mst_cbs;
 	drm_dp_mst_topology_mgr_init(
 		&aconnector->mst_mgr,
+1
drivers/gpu/drm/arc/arcpgu_hdmi.c
@@ -5 +5 @@
  * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
  */
 
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_device.h>
+6
drivers/gpu/drm/arm/display/Kconfig
@@ -12 +12 @@
 	  Processor driver. It supports the D71 variants of the hardware.
 
 	  If compiled as a module it will be called komeda.
+
+config DRM_KOMEDA_ERROR_PRINT
+	bool "Enable komeda error print"
+	depends on DRM_KOMEDA
+	help
+	  Choose this option to enable error printing.
+2
drivers/gpu/drm/arm/display/komeda/Makefile
@@ -22 +22 @@
 	d71/d71_dev.o \
 	d71/d71_component.o
 
+komeda-$(CONFIG_DRM_KOMEDA_ERROR_PRINT) += komeda_event.o
+
 obj-$(CONFIG_DRM_KOMEDA) += komeda.o
+85 -1
drivers/gpu/drm/arm/display/komeda/d71/d71_component.c
@@ -1218 +1218 @@
 	return err;
 }
 
+static void d71_gcu_dump(struct d71_dev *d71, struct seq_file *sf)
+{
+	u32 v[5];
+
+	seq_puts(sf, "\n------ GCU ------\n");
+
+	get_values_from_reg(d71->gcu_addr, 0, 3, v);
+	seq_printf(sf, "GLB_ARCH_ID:\t\t0x%X\n", v[0]);
+	seq_printf(sf, "GLB_CORE_ID:\t\t0x%X\n", v[1]);
+	seq_printf(sf, "GLB_CORE_INFO:\t\t0x%X\n", v[2]);
+
+	get_values_from_reg(d71->gcu_addr, 0x10, 1, v);
+	seq_printf(sf, "GLB_IRQ_STATUS:\t\t0x%X\n", v[0]);
+
+	get_values_from_reg(d71->gcu_addr, 0xA0, 5, v);
+	seq_printf(sf, "GCU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
+	seq_printf(sf, "GCU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
+	seq_printf(sf, "GCU_IRQ_MASK:\t\t0x%X\n", v[2]);
+	seq_printf(sf, "GCU_IRQ_STATUS:\t\t0x%X\n", v[3]);
+	seq_printf(sf, "GCU_STATUS:\t\t0x%X\n", v[4]);
+
+	get_values_from_reg(d71->gcu_addr, 0xD0, 3, v);
+	seq_printf(sf, "GCU_CONTROL:\t\t0x%X\n", v[0]);
+	seq_printf(sf, "GCU_CONFIG_VALID0:\t0x%X\n", v[1]);
+	seq_printf(sf, "GCU_CONFIG_VALID1:\t0x%X\n", v[2]);
+}
+
+static void d71_lpu_dump(struct d71_pipeline *pipe, struct seq_file *sf)
+{
+	u32 v[6];
+
+	seq_printf(sf, "\n------ LPU%d ------\n", pipe->base.id);
+
+	dump_block_header(sf, pipe->lpu_addr);
+
+	get_values_from_reg(pipe->lpu_addr, 0xA0, 6, v);
+	seq_printf(sf, "LPU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
+	seq_printf(sf, "LPU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
+	seq_printf(sf, "LPU_IRQ_MASK:\t\t0x%X\n", v[2]);
+	seq_printf(sf, "LPU_IRQ_STATUS:\t\t0x%X\n", v[3]);
+	seq_printf(sf, "LPU_STATUS:\t\t0x%X\n", v[4]);
+	seq_printf(sf, "LPU_TBU_STATUS:\t\t0x%X\n", v[5]);
+
+	get_values_from_reg(pipe->lpu_addr, 0xC0, 1, v);
+	seq_printf(sf, "LPU_INFO:\t\t0x%X\n", v[0]);
+
+	get_values_from_reg(pipe->lpu_addr, 0xD0, 3, v);
+	seq_printf(sf, "LPU_RAXI_CONTROL:\t0x%X\n", v[0]);
+	seq_printf(sf, "LPU_WAXI_CONTROL:\t0x%X\n", v[1]);
+	seq_printf(sf, "LPU_TBU_CONTROL:\t0x%X\n", v[2]);
+}
+
+static void d71_dou_dump(struct d71_pipeline *pipe, struct seq_file *sf)
+{
+	u32 v[5];
+
+	seq_printf(sf, "\n------ DOU%d ------\n", pipe->base.id);
+
+	dump_block_header(sf, pipe->dou_addr);
+
+	get_values_from_reg(pipe->dou_addr, 0xA0, 5, v);
+	seq_printf(sf, "DOU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
+	seq_printf(sf, "DOU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
+	seq_printf(sf, "DOU_IRQ_MASK:\t\t0x%X\n", v[2]);
+	seq_printf(sf, "DOU_IRQ_STATUS:\t\t0x%X\n", v[3]);
+	seq_printf(sf, "DOU_STATUS:\t\t0x%X\n", v[4]);
+}
+
+static void d71_pipeline_dump(struct komeda_pipeline *pipe, struct seq_file *sf)
+{
+	struct d71_pipeline *d71_pipe = to_d71_pipeline(pipe);
+
+	d71_lpu_dump(d71_pipe, sf);
+	d71_dou_dump(d71_pipe, sf);
+}
+
 const struct komeda_pipeline_funcs d71_pipeline_funcs = {
-	.downscaling_clk_check = d71_downscaling_clk_check,
+	.downscaling_clk_check	= d71_downscaling_clk_check,
+	.dump_register		= d71_pipeline_dump,
 };
+
+void d71_dump(struct komeda_dev *mdev, struct seq_file *sf)
+{
+	struct d71_dev *d71 = mdev->chip_data;
+
+	d71_gcu_dump(d71, sf);
+}
+29 -12
drivers/gpu/drm/arm/display/komeda/d71/d71_dev.c
@@ -195 +195 @@
 	if (gcu_status & GLB_IRQ_STATUS_PIPE1)
 		evts->pipes[1] |= get_pipeline_event(d71->pipes[1], gcu_status);
 
-	return gcu_status ? IRQ_HANDLED : IRQ_NONE;
+	return IRQ_RETVAL(gcu_status);
 }
 
 #define ENABLED_GCU_IRQS	(GCU_IRQ_CVAL0 | GCU_IRQ_CVAL1 | \
@@ -395 +395 @@
 			err = PTR_ERR(pipe);
 			goto err_cleanup;
 		}
+
+		/* D71 HW doesn't update shadow registers when display output
+		 * is turning off, so when we disable all pipeline components
+		 * together with display output disable by one flush or one
+		 * operation, the disable operation updated registers will not
+		 * be flush to or valid in HW, which may leads problem.
+		 * To workaround this problem, introduce a two phase disable.
+		 * Phase1: Disabling components with display is on to make sure
+		 *	   the disable can be flushed to HW.
+		 * Phase2: Only turn-off display output.
+		 */
+		value = KOMEDA_PIPELINE_IMPROCS |
+			BIT(KOMEDA_COMPONENT_TIMING_CTRLR);
+
+		pipe->standalone_disabled_comps = value;
+
 		d71->pipes[i] = to_d71_pipeline(pipe);
 	}
@@ -577 +561 @@
 }
 
 static const struct komeda_dev_funcs d71_chip_funcs = {
-	.init_format_table = d71_init_fmt_tbl,
-	.enum_resources	= d71_enum_resources,
-	.cleanup	= d71_cleanup,
-	.irq_handler	= d71_irq_handler,
-	.enable_irq	= d71_enable_irq,
-	.disable_irq	= d71_disable_irq,
-	.on_off_vblank	= d71_on_off_vblank,
-	.change_opmode	= d71_change_opmode,
-	.flush		= d71_flush,
-	.connect_iommu	= d71_connect_iommu,
-	.disconnect_iommu = d71_disconnect_iommu,
+	.init_format_table	= d71_init_fmt_tbl,
+	.enum_resources		= d71_enum_resources,
+	.cleanup		= d71_cleanup,
+	.irq_handler		= d71_irq_handler,
+	.enable_irq		= d71_enable_irq,
+	.disable_irq		= d71_disable_irq,
+	.on_off_vblank		= d71_on_off_vblank,
+	.change_opmode		= d71_change_opmode,
+	.flush			= d71_flush,
+	.connect_iommu		= d71_connect_iommu,
+	.disconnect_iommu	= d71_disconnect_iommu,
+	.dump_register		= d71_dump,
 };
 
 const struct komeda_dev_funcs *
+2
drivers/gpu/drm/arm/display/komeda/d71/d71_dev.h
@@ -49 +49 @@
 			  struct block_header *blk, u32 __iomem *reg);
 void d71_read_block_header(u32 __iomem *reg, struct block_header *blk);
 
+void d71_dump(struct komeda_dev *mdev, struct seq_file *sf);
+
 #endif /* !_D71_DEV_H_ */
+53 -29
drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
@@ -5 +5 @@
  *
  */
 #include <linux/clk.h>
-#include <linux/pm_runtime.h>
 #include <linux/spinlock.h>
 
 #include <drm/drm_atomic.h>
@@ -249 +250 @@
 {
 	komeda_crtc_prepare(to_kcrtc(crtc));
 	drm_crtc_vblank_on(crtc);
+	WARN_ON(drm_crtc_vblank_get(crtc));
 	komeda_crtc_do_flush(crtc, old);
+}
+
+static void
+komeda_crtc_flush_and_wait_for_flip_done(struct komeda_crtc *kcrtc,
+					 struct completion *input_flip_done)
+{
+	struct drm_device *drm = kcrtc->base.dev;
+	struct komeda_dev *mdev = kcrtc->master->mdev;
+	struct completion *flip_done;
+	struct completion temp;
+	int timeout;
+
+	/* if caller doesn't send a flip_done, use a private flip_done */
+	if (input_flip_done) {
+		flip_done = input_flip_done;
+	} else {
+		init_completion(&temp);
+		kcrtc->disable_done = &temp;
+		flip_done = &temp;
+	}
+
+	mdev->funcs->flush(mdev, kcrtc->master->id, 0);
+
+	/* wait the flip take affect.*/
+	timeout = wait_for_completion_timeout(flip_done, HZ);
+	if (timeout == 0) {
+		DRM_ERROR("wait pipe%d flip done timeout\n", kcrtc->master->id);
+		if (!input_flip_done) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&drm->event_lock, flags);
+			kcrtc->disable_done = NULL;
+			spin_unlock_irqrestore(&drm->event_lock, flags);
+		}
+	}
 }
 
 static void
@@ -294 +259 @@
 {
 	struct komeda_crtc *kcrtc = to_kcrtc(crtc);
 	struct komeda_crtc_state *old_st = to_kcrtc_st(old);
-	struct komeda_dev *mdev = crtc->dev->dev_private;
 	struct komeda_pipeline *master = kcrtc->master;
 	struct komeda_pipeline *slave = kcrtc->slave;
 	struct completion *disable_done = &crtc->state->commit->flip_done;
-	struct completion temp;
-	int timeout;
+	bool needs_phase2 = false;
 
-	DRM_DEBUG_ATOMIC("CRTC%d_DISABLE: active_pipes: 0x%x, affected: 0x%x.\n",
+	DRM_DEBUG_ATOMIC("CRTC%d_DISABLE: active_pipes: 0x%x, affected: 0x%x\n",
 			 drm_crtc_index(crtc),
 			 old_st->active_pipes, old_st->affected_pipes);
@@ -307 +274 @@
 		komeda_pipeline_disable(slave, old->state);
 
 	if (has_bit(master->id, old_st->active_pipes))
-		komeda_pipeline_disable(master, old->state);
+		needs_phase2 = komeda_pipeline_disable(master, old->state);
 
 	/* crtc_disable has two scenarios according to the state->active switch.
 	 * 1. active -> inactive
@@ -326 +293 @@
 	 * That's also the reason why skip modeset commit in
 	 * komeda_crtc_atomic_flush()
 	 */
-	if (crtc->state->active) {
-		struct komeda_pipeline_state *pipe_st;
-		/* clear the old active_comps to zero */
-		pipe_st = komeda_pipeline_get_old_state(master, old->state);
-		pipe_st->active_comps = 0;
+	disable_done = (needs_phase2 || crtc->state->active) ?
+		       NULL : &crtc->state->commit->flip_done;
 
-		init_completion(&temp);
-		kcrtc->disable_done = &temp;
-		disable_done = &temp;
+	/* wait phase 1 disable done */
+	komeda_crtc_flush_and_wait_for_flip_done(kcrtc, disable_done);
+
+	/* phase 2 */
+	if (needs_phase2) {
+		komeda_pipeline_disable(kcrtc->master, old->state);
+
+		disable_done = crtc->state->active ?
+			       NULL : &crtc->state->commit->flip_done;
+
+		komeda_crtc_flush_and_wait_for_flip_done(kcrtc, disable_done);
 	}
 
-	mdev->funcs->flush(mdev, master->id, 0);
-
-	/* wait the disable take affect.*/
-	timeout = wait_for_completion_timeout(disable_done, HZ);
-	if (timeout == 0) {
-		DRM_ERROR("disable pipeline%d timeout.\n", kcrtc->master->id);
-		if (crtc->state->active) {
-			unsigned long flags;
-
-			spin_lock_irqsave(&crtc->dev->event_lock, flags);
-			kcrtc->disable_done = NULL;
-			spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
-		}
-	}
-
+	drm_crtc_vblank_put(crtc);
 	drm_crtc_vblank_off(crtc);
 	komeda_crtc_unprepare(kcrtc);
 }
+71 -6
drivers/gpu/drm/arm/display/komeda/komeda_dev.c
@@ -25 +25 @@
 	struct komeda_dev *mdev = sf->private;
 	int i;
 
+	seq_puts(sf, "\n====== Komeda register dump =========\n");
+
 	if (mdev->funcs->dump_register)
 		mdev->funcs->dump_register(mdev, sf);
@@ -93 +91 @@
 }
 static DEVICE_ATTR_RO(config_id);
 
+static ssize_t
+aclk_hz_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct komeda_dev *mdev = dev_to_mdev(dev);
+
+	return snprintf(buf, PAGE_SIZE, "%lu\n", clk_get_rate(mdev->aclk));
+}
+static DEVICE_ATTR_RO(aclk_hz);
+
 static struct attribute *komeda_sysfs_entries[] = {
 	&dev_attr_core_id.attr,
 	&dev_attr_config_id.attr,
+	&dev_attr_aclk_hz.attr,
 	NULL,
 };
@@ -228 +216 @@
 			  product->product_id,
 			  MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id));
 		err = -ENODEV;
-		goto err_cleanup;
+		goto disable_clk;
 	}
 
 	DRM_INFO("Found ARM Mali-D%x version r%dp%d\n",
@@ -241 +229 @@
 	err = mdev->funcs->enum_resources(mdev);
 	if (err) {
 		DRM_ERROR("enumerate display resource failed.\n");
-		goto err_cleanup;
+		goto disable_clk;
 	}
 
 	err = komeda_parse_dt(dev, mdev);
 	if (err) {
 		DRM_ERROR("parse device tree failed.\n");
-		goto err_cleanup;
+		goto disable_clk;
 	}
 
 	err = komeda_assemble_pipelines(mdev);
 	if (err) {
 		DRM_ERROR("assemble display pipelines failed.\n");
-		goto err_cleanup;
+		goto disable_clk;
 	}
 
 	dev->dma_parms = &mdev->dma_parms;
@@ -266 +254 @@
 	if (mdev->iommu && mdev->funcs->connect_iommu) {
 		err = mdev->funcs->connect_iommu(mdev);
 		if (err) {
+			DRM_ERROR("connect iommu failed.\n");
 			mdev->iommu = NULL;
-			goto err_cleanup;
+			goto disable_clk;
 		}
 	}
+
+	clk_disable_unprepare(mdev->aclk);
 
 	err = sysfs_create_group(&dev->kobj, &komeda_sysfs_attr_group);
 	if (err) {
@@ -286 +271 @@
 
 	return mdev;
 
+disable_clk:
+	clk_disable_unprepare(mdev->aclk);
 err_cleanup:
 	komeda_dev_destroy(mdev);
 	return ERR_PTR(err);
@@ -305 +288 @@
 	debugfs_remove_recursive(mdev->debugfs_root);
 #endif
 
+	if (mdev->aclk)
+		clk_prepare_enable(mdev->aclk);
+
 	if (mdev->iommu && mdev->funcs->disconnect_iommu)
-		mdev->funcs->disconnect_iommu(mdev);
+		if (mdev->funcs->disconnect_iommu(mdev))
+			DRM_ERROR("disconnect iommu failed.\n");
 	mdev->iommu = NULL;
 
 	for (i = 0; i < mdev->n_pipelines; i++) {
@@ -337 +316 @@
 	}
 
 	devm_kfree(dev, mdev);
+}
+
+int komeda_dev_resume(struct komeda_dev *mdev)
+{
+	int ret = 0;
+
+	clk_prepare_enable(mdev->aclk);
+
+	if (mdev->iommu && mdev->funcs->connect_iommu) {
+		ret = mdev->funcs->connect_iommu(mdev);
+		if (ret < 0) {
+			DRM_ERROR("connect iommu failed.\n");
+			goto disable_clk;
+		}
+	}
+
+	ret = mdev->funcs->enable_irq(mdev);
+
+disable_clk:
+	clk_disable_unprepare(mdev->aclk);
+
+	return ret;
+}
+
+int komeda_dev_suspend(struct komeda_dev *mdev)
+{
+	int ret = 0;
+
+	clk_prepare_enable(mdev->aclk);
+
+	if (mdev->iommu && mdev->funcs->disconnect_iommu) {
+		ret = mdev->funcs->disconnect_iommu(mdev);
+		if (ret < 0) {
+			DRM_ERROR("disconnect iommu failed.\n");
+			goto disable_clk;
+		}
+	}
+
+	ret = mdev->funcs->disable_irq(mdev);
+
+disable_clk:
+	clk_disable_unprepare(mdev->aclk);
+
+	return ret;
 }
+20
drivers/gpu/drm/arm/display/komeda/komeda_dev.h
··· 40 40 #define KOMEDA_ERR_TTNG BIT_ULL(30) 41 41 #define KOMEDA_ERR_TTF BIT_ULL(31) 42 42 43 + #define KOMEDA_ERR_EVENTS \ 44 + (KOMEDA_EVENT_URUN | KOMEDA_EVENT_IBSY | KOMEDA_EVENT_OVR |\ 45 + KOMEDA_ERR_TETO | KOMEDA_ERR_TEMR | KOMEDA_ERR_TITR |\ 46 + KOMEDA_ERR_CPE | KOMEDA_ERR_CFGE | KOMEDA_ERR_AXIE |\ 47 + KOMEDA_ERR_ACE0 | KOMEDA_ERR_ACE1 | KOMEDA_ERR_ACE2 |\ 48 + KOMEDA_ERR_ACE3 | KOMEDA_ERR_DRIFTTO | KOMEDA_ERR_FRAMETO |\ 49 + KOMEDA_ERR_ZME | KOMEDA_ERR_MERR | KOMEDA_ERR_TCF |\ 50 + KOMEDA_ERR_TTNG | KOMEDA_ERR_TTF) 51 + 52 + #define KOMEDA_WARN_EVENTS KOMEDA_ERR_CSCE 53 + 43 54 /* malidp device id */ 44 55 enum { 45 56 MALI_D71 = 0, ··· 217 206 void komeda_dev_destroy(struct komeda_dev *mdev); 218 207 219 208 struct komeda_dev *dev_to_mdev(struct device *dev); 209 + 210 + #ifdef CONFIG_DRM_KOMEDA_ERROR_PRINT 211 + void komeda_print_events(struct komeda_events *evts); 212 + #else 213 + static inline void komeda_print_events(struct komeda_events *evts) {} 214 + #endif 215 + 216 + int komeda_dev_resume(struct komeda_dev *mdev); 217 + int komeda_dev_suspend(struct komeda_dev *mdev); 220 218 221 219 #endif /*_KOMEDA_DEV_H_*/
+29 -1
drivers/gpu/drm/arm/display/komeda/komeda_drv.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/platform_device.h> 10 10 #include <linux/component.h> 11 + #include <linux/pm_runtime.h> 11 12 #include <drm/drm_of.h> 12 13 #include "komeda_dev.h" 13 14 #include "komeda_kms.h" ··· 137 136 138 137 MODULE_DEVICE_TABLE(of, komeda_of_match); 139 138 139 + static int __maybe_unused komeda_pm_suspend(struct device *dev) 140 + { 141 + struct komeda_drv *mdrv = dev_get_drvdata(dev); 142 + struct drm_device *drm = &mdrv->kms->base; 143 + int res; 144 + 145 + res = drm_mode_config_helper_suspend(drm); 146 + 147 + komeda_dev_suspend(mdrv->mdev); 148 + 149 + return res; 150 + } 151 + 152 + static int __maybe_unused komeda_pm_resume(struct device *dev) 153 + { 154 + struct komeda_drv *mdrv = dev_get_drvdata(dev); 155 + struct drm_device *drm = &mdrv->kms->base; 156 + 157 + komeda_dev_resume(mdrv->mdev); 158 + 159 + return drm_mode_config_helper_resume(drm); 160 + } 161 + 162 + static const struct dev_pm_ops komeda_pm_ops = { 163 + SET_SYSTEM_SLEEP_PM_OPS(komeda_pm_suspend, komeda_pm_resume) 164 + }; 165 + 140 166 static struct platform_driver komeda_platform_driver = { 141 167 .probe = komeda_platform_probe, 142 168 .remove = komeda_platform_remove, 143 169 .driver = { 144 170 .name = "komeda", 145 171 .of_match_table = komeda_of_match, 146 - .pm = NULL, 172 + .pm = &komeda_pm_ops, 147 173 }, 148 174 }; 149 175
+140
drivers/gpu/drm/arm/display/komeda/komeda_event.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * (C) COPYRIGHT 2019 ARM Limited. All rights reserved. 4 + * Author: James.Qian.Wang <james.qian.wang@arm.com> 5 + * 6 + */ 7 + #include <drm/drm_print.h> 8 + 9 + #include "komeda_dev.h" 10 + 11 + struct komeda_str { 12 + char *str; 13 + u32 sz; 14 + u32 len; 15 + }; 16 + 17 + /* return 0 on success, < 0 on no space. 18 + */ 19 + static int komeda_sprintf(struct komeda_str *str, const char *fmt, ...) 20 + { 21 + va_list args; 22 + int num, free_sz; 23 + int err; 24 + 25 + free_sz = str->sz - str->len - 1; 26 + if (free_sz <= 0) 27 + return -ENOSPC; 28 + 29 + va_start(args, fmt); 30 + 31 + num = vsnprintf(str->str + str->len, free_sz, fmt, args); 32 + 33 + va_end(args); 34 + 35 + if (num < free_sz) { 36 + str->len += num; 37 + err = 0; 38 + } else { 39 + str->len = str->sz - 1; 40 + err = -ENOSPC; 41 + } 42 + 43 + return err; 44 + } 45 + 46 + static void evt_sprintf(struct komeda_str *str, u64 evt, const char *msg) 47 + { 48 + if (evt) 49 + komeda_sprintf(str, msg); 50 + } 51 + 52 + static void evt_str(struct komeda_str *str, u64 events) 53 + { 54 + if (events == 0ULL) { 55 + komeda_sprintf(str, "None"); 56 + return; 57 + } 58 + 59 + evt_sprintf(str, events & KOMEDA_EVENT_VSYNC, "VSYNC|"); 60 + evt_sprintf(str, events & KOMEDA_EVENT_FLIP, "FLIP|"); 61 + evt_sprintf(str, events & KOMEDA_EVENT_EOW, "EOW|"); 62 + evt_sprintf(str, events & KOMEDA_EVENT_MODE, "OP-MODE|"); 63 + 64 + evt_sprintf(str, events & KOMEDA_EVENT_URUN, "UNDERRUN|"); 65 + evt_sprintf(str, events & KOMEDA_EVENT_OVR, "OVERRUN|"); 66 + 67 + /* GLB error */ 68 + evt_sprintf(str, events & KOMEDA_ERR_MERR, "MERR|"); 69 + evt_sprintf(str, events & KOMEDA_ERR_FRAMETO, "FRAMETO|"); 70 + 71 + /* DOU error */ 72 + evt_sprintf(str, events & KOMEDA_ERR_DRIFTTO, "DRIFTTO|"); 73 + evt_sprintf(str, events & KOMEDA_ERR_FRAMETO, "FRAMETO|"); 74 + evt_sprintf(str, events & KOMEDA_ERR_TETO, "TETO|"); 75 + evt_sprintf(str, events & KOMEDA_ERR_CSCE, "CSCE|"); 76 + 
77 + /* LPU errors or events */ 78 + evt_sprintf(str, events & KOMEDA_EVENT_IBSY, "IBSY|"); 79 + evt_sprintf(str, events & KOMEDA_ERR_AXIE, "AXIE|"); 80 + evt_sprintf(str, events & KOMEDA_ERR_ACE0, "ACE0|"); 81 + evt_sprintf(str, events & KOMEDA_ERR_ACE1, "ACE1|"); 82 + evt_sprintf(str, events & KOMEDA_ERR_ACE2, "ACE2|"); 83 + evt_sprintf(str, events & KOMEDA_ERR_ACE3, "ACE3|"); 84 + 85 + /* LPU TBU errors*/ 86 + evt_sprintf(str, events & KOMEDA_ERR_TCF, "TCF|"); 87 + evt_sprintf(str, events & KOMEDA_ERR_TTNG, "TTNG|"); 88 + evt_sprintf(str, events & KOMEDA_ERR_TITR, "TITR|"); 89 + evt_sprintf(str, events & KOMEDA_ERR_TEMR, "TEMR|"); 90 + evt_sprintf(str, events & KOMEDA_ERR_TTF, "TTF|"); 91 + 92 + /* CU errors*/ 93 + evt_sprintf(str, events & KOMEDA_ERR_CPE, "COPROC|"); 94 + evt_sprintf(str, events & KOMEDA_ERR_ZME, "ZME|"); 95 + evt_sprintf(str, events & KOMEDA_ERR_CFGE, "CFGE|"); 96 + evt_sprintf(str, events & KOMEDA_ERR_TEMR, "TEMR|"); 97 + 98 + if (str->len > 0 && (str->str[str->len - 1] == '|')) { 99 + str->str[str->len - 1] = 0; 100 + str->len--; 101 + } 102 + } 103 + 104 + static bool is_new_frame(struct komeda_events *a) 105 + { 106 + return (a->pipes[0] | a->pipes[1]) & 107 + (KOMEDA_EVENT_FLIP | KOMEDA_EVENT_EOW); 108 + } 109 + 110 + void komeda_print_events(struct komeda_events *evts) 111 + { 112 + u64 print_evts = KOMEDA_ERR_EVENTS; 113 + static bool en_print = true; 114 + 115 + /* reduce the same msg print, only print the first evt for one frame */ 116 + if (evts->global || is_new_frame(evts)) 117 + en_print = true; 118 + if (!en_print) 119 + return; 120 + 121 + if ((evts->global | evts->pipes[0] | evts->pipes[1]) & print_evts) { 122 + char msg[256]; 123 + struct komeda_str str; 124 + 125 + str.str = msg; 126 + str.sz = sizeof(msg); 127 + str.len = 0; 128 + 129 + komeda_sprintf(&str, "gcu: "); 130 + evt_str(&str, evts->global); 131 + komeda_sprintf(&str, ", pipes[0]: "); 132 + evt_str(&str, evts->pipes[0]); 133 + komeda_sprintf(&str, ", pipes[1]: "); 
134 + evt_str(&str, evts->pipes[1]); 135 + 136 + DRM_ERROR("err detect: %s\n", msg); 137 + 138 + en_print = false; 139 + } 140 + }
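The komeda_sprintf() helper above appends formatted text into a fixed-size buffer and reports -ENOSPC once truncation occurs, which is what lets evt_str() chain appends unconditionally. A minimal userspace re-creation of that pattern (the `bounded_str`/`bounded_sprintf` names are illustrative, not from the patch; the kernel version returns -ENOSPC where this sketch returns -1):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Userspace sketch of komeda_sprintf()'s bounded-append pattern. */
struct bounded_str {
	char *str;
	unsigned int sz;	/* total buffer size */
	unsigned int len;	/* characters written so far */
};

/* Append formatted text; return 0 on success, -1 when the buffer is
 * already full or the output was truncated. */
static int bounded_sprintf(struct bounded_str *s, const char *fmt, ...)
{
	va_list args;
	int num, free_sz;

	free_sz = (int)s->sz - (int)s->len - 1;
	if (free_sz <= 0)
		return -1;

	va_start(args, fmt);
	num = vsnprintf(s->str + s->len, free_sz, fmt, args);
	va_end(args);

	if (num < free_sz) {
		s->len += num;
		return 0;
	}
	s->len = s->sz - 1;	/* clamp: buffer is now considered full */
	return -1;
}
```

As in evt_str(), callers never need to check the return value between appends: once the buffer fills, further calls are cheap no-ops.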
+2
drivers/gpu/drm/arm/display/komeda/komeda_kms.c
··· 48 48 memset(&evts, 0, sizeof(evts)); 49 49 status = mdev->funcs->irq_handler(mdev, &evts); 50 50 51 + komeda_print_events(&evts); 52 + 51 53 /* Notify the crtc to handle the events */ 52 54 for (i = 0; i < kms->n_crtcs; i++) 53 55 komeda_crtc_handle_event(&kms->crtcs[i], &evts);
+13 -1
drivers/gpu/drm/arm/display/komeda/komeda_pipeline.h
··· 389 389 int id; 390 390 /** @avail_comps: available components mask of pipeline */ 391 391 u32 avail_comps; 392 + /** 393 + * @standalone_disabled_comps: 394 + * 395 + * When disabling the pipeline, some components can not be disabled 396 + * together with the others, but need a separate, standalone disable. 397 + * The standalone_disabled_comps are the components which need to be 398 + * disabled on their own, and this introduces the concept of a 399 + * two-phase disable. 400 + * phase 1: for disabling the common components. 401 + * phase 2: for disabling the standalone_disabled_comps. 402 + */ 403 + u32 standalone_disabled_comps; 392 404 /** @n_layers: the number of layer on @layers */ 393 405 int n_layers; 394 406 /** @layers: the pipeline layers */ ··· 547 535 struct komeda_pipeline_state * 548 536 komeda_pipeline_get_old_state(struct komeda_pipeline *pipe, 549 537 struct drm_atomic_state *state); 550 - void komeda_pipeline_disable(struct komeda_pipeline *pipe, 538 + bool komeda_pipeline_disable(struct komeda_pipeline *pipe, 551 539 struct drm_atomic_state *old_state); 552 540 void komeda_pipeline_update(struct komeda_pipeline *pipe, 553 541 struct drm_atomic_state *old_state);
+26 -4
drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
··· 1218 1218 return 0; 1219 1219 } 1220 1220 1221 - void komeda_pipeline_disable(struct komeda_pipeline *pipe, 1221 + /* Since standalone disabled components must be disabled separately and 1222 + * last, a complete disable operation may need to call pipeline_disable 1223 + * twice (two-phase disabling). 1224 + * Phase 1: disable the common components, flush it. 1225 + * Phase 2: disable the standalone disabled components, flush it. 1226 + * 1227 + * RETURNS: 1228 + * true: disable is not complete, needs a phase 2 disable. 1229 + * false: disable is complete. 1230 + */ 1231 + bool komeda_pipeline_disable(struct komeda_pipeline *pipe, 1222 1232 struct drm_atomic_state *old_state) 1223 1233 { 1224 1234 struct komeda_pipeline_state *old; ··· 1238 1228 1239 1229 old = komeda_pipeline_get_old_state(pipe, old_state); 1240 1230 1241 - disabling_comps = old->active_comps; 1242 - DRM_DEBUG_ATOMIC("PIPE%d: disabling_comps: 0x%x.\n", 1243 - pipe->id, disabling_comps); 1231 + disabling_comps = old->active_comps & 1232 + (~pipe->standalone_disabled_comps); 1233 + if (!disabling_comps) 1234 + disabling_comps = old->active_comps & 1235 + pipe->standalone_disabled_comps; 1236 + 1237 + DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%x.\n", 1238 + pipe->id, old->active_comps, disabling_comps); 1244 1239 1245 1240 dp_for_each_set_bit(id, disabling_comps) { 1246 1241 c = komeda_pipeline_get_component(pipe, id); ··· 1263 1248 1264 1249 c->funcs->disable(c); 1265 1250 } 1251 + 1252 + /* Update the pipeline state; if there are components that are still 1253 + * active, return true to request the phase 2 disable. 1254 + */ 1255 + old->active_comps &= ~disabling_comps; 1256 + 1257 + return old->active_comps ? true : false; 1266 1258 } 1267 1259 1268 1260 void komeda_pipeline_update(struct komeda_pipeline *pipe,
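The return-value contract of the two-phase disable can be exercised with a tiny model: a bitmask of active components where the standalone ones are skipped on the first pass. This is an illustrative sketch (struct and function names are mine), not the driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of komeda_pipeline_disable()'s component selection:
 * phase 1 takes every active component except the standalone ones;
 * phase 2 (if needed) takes whatever is left. */
struct pipe_model {
	uint32_t active_comps;
	uint32_t standalone_disabled_comps;
};

/* Returns true when a second disable pass is still required. */
static bool pipeline_disable_model(struct pipe_model *p)
{
	uint32_t disabling = p->active_comps & ~p->standalone_disabled_comps;

	if (!disabling)	/* nothing common left: take the standalone set */
		disabling = p->active_comps & p->standalone_disabled_comps;

	p->active_comps &= ~disabling;

	return p->active_comps != 0;
}
```

A CRTC disable path would call this, flush the hardware, and call it again whenever it returns true; with no standalone components active, a single pass suffices.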
+2
drivers/gpu/drm/ast/Kconfig
··· 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 6 select DRM_VRAM_HELPER 7 + select DRM_TTM 8 + select DRM_TTM_HELPER 7 9 help 8 10 Say yes for experimental AST GPU driver. Do not enable 9 11 this driver without having a working -modesetting,
-1
drivers/gpu/drm/ast/ast_drv.c
··· 35 35 #include <drm/drm_gem_vram_helper.h> 36 36 #include <drm/drm_pci.h> 37 37 #include <drm/drm_probe_helper.h> 38 - #include <drm/drm_vram_mm_helper.h> 39 38 40 39 #include "ast_drv.h" 41 40
+24 -19
drivers/gpu/drm/ast/ast_drv.h
··· 82 82 #define AST_DRAM_4Gx16 7 83 83 #define AST_DRAM_8Gx16 8 84 84 85 + 86 + #define AST_MAX_HWC_WIDTH 64 87 + #define AST_MAX_HWC_HEIGHT 64 88 + 89 + #define AST_HWC_SIZE (AST_MAX_HWC_WIDTH * AST_MAX_HWC_HEIGHT * 2) 90 + #define AST_HWC_SIGNATURE_SIZE 32 91 + 92 + #define AST_DEFAULT_HWC_NUM 2 93 + 94 + /* define for signature structure */ 95 + #define AST_HWC_SIGNATURE_CHECKSUM 0x00 96 + #define AST_HWC_SIGNATURE_SizeX 0x04 97 + #define AST_HWC_SIGNATURE_SizeY 0x08 98 + #define AST_HWC_SIGNATURE_X 0x0C 99 + #define AST_HWC_SIGNATURE_Y 0x10 100 + #define AST_HWC_SIGNATURE_HOTSPOTX 0x14 101 + #define AST_HWC_SIGNATURE_HOTSPOTY 0x18 102 + 103 + 85 104 struct ast_private { 86 105 struct drm_device *dev; 87 106 ··· 116 97 117 98 int fb_mtrr; 118 99 119 - struct drm_gem_object *cursor_cache; 120 - int next_cursor; 100 + struct { 101 + struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; 102 + unsigned int next_index; 103 + } cursor; 104 + 121 105 bool support_wide_screen; 122 106 enum { 123 107 ast_use_p2a, ··· 220 198 #define AST_VIDMEM_SIZE_128M 0x08000000 221 199 222 200 #define AST_VIDMEM_DEFAULT_SIZE AST_VIDMEM_SIZE_8M 223 - 224 - #define AST_MAX_HWC_WIDTH 64 225 - #define AST_MAX_HWC_HEIGHT 64 226 - 227 - #define AST_HWC_SIZE (AST_MAX_HWC_WIDTH*AST_MAX_HWC_HEIGHT*2) 228 - #define AST_HWC_SIGNATURE_SIZE 32 229 - 230 - #define AST_DEFAULT_HWC_NUM 2 231 - /* define for signature structure */ 232 - #define AST_HWC_SIGNATURE_CHECKSUM 0x00 233 - #define AST_HWC_SIGNATURE_SizeX 0x04 234 - #define AST_HWC_SIGNATURE_SizeY 0x08 235 - #define AST_HWC_SIGNATURE_X 0x0C 236 - #define AST_HWC_SIGNATURE_Y 0x10 237 - #define AST_HWC_SIGNATURE_HOTSPOTX 0x14 238 - #define AST_HWC_SIGNATURE_HOTSPOTY 0x18 239 - 240 201 241 202 struct ast_i2c_chan { 242 203 struct i2c_adapter adapter;
-1
drivers/gpu/drm/ast/ast_main.c
··· 33 33 #include <drm/drm_gem.h> 34 34 #include <drm/drm_gem_framebuffer_helper.h> 35 35 #include <drm/drm_gem_vram_helper.h> 36 - #include <drm/drm_vram_mm_helper.h> 37 36 38 37 #include "ast_drv.h" 39 38
+141 -127
drivers/gpu/drm/ast/ast_mode.c
··· 687 687 kfree(encoder); 688 688 } 689 689 690 - 691 - static struct drm_encoder *ast_best_single_encoder(struct drm_connector *connector) 692 - { 693 - int enc_id = connector->encoder_ids[0]; 694 - /* pick the encoder ids */ 695 - if (enc_id) 696 - return drm_encoder_find(connector->dev, NULL, enc_id); 697 - return NULL; 698 - } 699 - 700 - 701 690 static const struct drm_encoder_funcs ast_enc_funcs = { 702 691 .destroy = ast_encoder_destroy, 703 692 }; ··· 836 847 static const struct drm_connector_helper_funcs ast_connector_helper_funcs = { 837 848 .mode_valid = ast_mode_valid, 838 849 .get_modes = ast_get_modes, 839 - .best_encoder = ast_best_single_encoder, 840 850 }; 841 851 842 852 static const struct drm_connector_funcs ast_connector_funcs = { ··· 883 895 static int ast_cursor_init(struct drm_device *dev) 884 896 { 885 897 struct ast_private *ast = dev->dev_private; 886 - int size; 887 - int ret; 888 - struct drm_gem_object *obj; 898 + size_t size, i; 889 899 struct drm_gem_vram_object *gbo; 890 - s64 gpu_addr; 891 - void *base; 900 + int ret; 892 901 893 - size = (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE) * AST_DEFAULT_HWC_NUM; 902 + size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); 894 903 895 - ret = ast_gem_create(dev, size, true, &obj); 896 - if (ret) 897 - return ret; 898 - gbo = drm_gem_vram_of_gem(obj); 899 - ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 900 - if (ret) 901 - goto fail; 902 - gpu_addr = drm_gem_vram_offset(gbo); 903 - if (gpu_addr < 0) { 904 - drm_gem_vram_unpin(gbo); 905 - ret = (int)gpu_addr; 906 - goto fail; 904 + for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { 905 + gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, 906 + size, 0, false); 907 + if (IS_ERR(gbo)) { 908 + ret = PTR_ERR(gbo); 909 + goto err_drm_gem_vram_put; 910 + } 911 + ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM | 912 + DRM_GEM_VRAM_PL_FLAG_TOPDOWN); 913 + if (ret) { 914 + drm_gem_vram_put(gbo); 915 + goto 
err_drm_gem_vram_put; 916 + } 917 + 918 + ast->cursor.gbo[i] = gbo; 907 919 } 908 920 909 - /* kmap the object */ 910 - base = drm_gem_vram_kmap(gbo, true, NULL); 911 - if (IS_ERR(base)) { 912 - ret = PTR_ERR(base); 913 - goto fail; 914 - } 915 - 916 - ast->cursor_cache = obj; 917 921 return 0; 918 - fail: 922 + 923 + err_drm_gem_vram_put: 924 + while (i) { 925 + --i; 926 + gbo = ast->cursor.gbo[i]; 927 + drm_gem_vram_unpin(gbo); 928 + drm_gem_vram_put(gbo); 929 + ast->cursor.gbo[i] = NULL; 930 + } 919 931 return ret; 920 932 } 921 933 922 934 static void ast_cursor_fini(struct drm_device *dev) 923 935 { 924 936 struct ast_private *ast = dev->dev_private; 925 - struct drm_gem_vram_object *gbo = 926 - drm_gem_vram_of_gem(ast->cursor_cache); 927 - drm_gem_vram_kunmap(gbo); 928 - drm_gem_vram_unpin(gbo); 929 - drm_gem_object_put_unlocked(ast->cursor_cache); 937 + size_t i; 938 + struct drm_gem_vram_object *gbo; 939 + 940 + for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { 941 + gbo = ast->cursor.gbo[i]; 942 + drm_gem_vram_unpin(gbo); 943 + drm_gem_vram_put(gbo); 944 + } 930 945 } 931 946 932 947 int ast_mode_init(struct drm_device *dev) ··· 1067 1076 kfree(i2c); 1068 1077 } 1069 1078 1070 - static void ast_show_cursor(struct drm_crtc *crtc) 1071 - { 1072 - struct ast_private *ast = crtc->dev->dev_private; 1073 - u8 jreg; 1074 - 1075 - jreg = 0x2; 1076 - /* enable ARGB cursor */ 1077 - jreg |= 1; 1078 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg); 1079 - } 1080 - 1081 - static void ast_hide_cursor(struct drm_crtc *crtc) 1082 - { 1083 - struct ast_private *ast = crtc->dev->dev_private; 1084 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00); 1085 - } 1086 - 1087 1079 static u32 copy_cursor_image(u8 *src, u8 *dst, int width, int height) 1088 1080 { 1089 1081 union { ··· 1123 1149 return csum; 1124 1150 } 1125 1151 1152 + static int ast_cursor_update(void *dst, void *src, unsigned int width, 1153 + unsigned int height) 1154 + { 1155 + 
u32 csum; 1156 + 1157 + /* do data transfer to cursor cache */ 1158 + csum = copy_cursor_image(src, dst, width, height); 1159 + 1160 + /* write checksum + signature */ 1161 + dst += AST_HWC_SIZE; 1162 + writel(csum, dst); 1163 + writel(width, dst + AST_HWC_SIGNATURE_SizeX); 1164 + writel(height, dst + AST_HWC_SIGNATURE_SizeY); 1165 + writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX); 1166 + writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY); 1167 + 1168 + return 0; 1169 + } 1170 + 1171 + static void ast_cursor_set_base(struct ast_private *ast, u64 address) 1172 + { 1173 + u8 addr0 = (address >> 3) & 0xff; 1174 + u8 addr1 = (address >> 11) & 0xff; 1175 + u8 addr2 = (address >> 19) & 0xff; 1176 + 1177 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, addr0); 1178 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, addr1); 1179 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, addr2); 1180 + } 1181 + 1182 + static int ast_show_cursor(struct drm_crtc *crtc, void *src, 1183 + unsigned int width, unsigned int height) 1184 + { 1185 + struct ast_private *ast = crtc->dev->dev_private; 1186 + struct ast_crtc *ast_crtc = to_ast_crtc(crtc); 1187 + struct drm_gem_vram_object *gbo; 1188 + void *dst; 1189 + s64 off; 1190 + int ret; 1191 + u8 jreg; 1192 + 1193 + gbo = ast->cursor.gbo[ast->cursor.next_index]; 1194 + dst = drm_gem_vram_vmap(gbo); 1195 + if (IS_ERR(dst)) 1196 + return PTR_ERR(dst); 1197 + off = drm_gem_vram_offset(gbo); 1198 + if (off < 0) { 1199 + ret = (int)off; 1200 + goto err_drm_gem_vram_vunmap; 1201 + } 1202 + 1203 + ret = ast_cursor_update(dst, src, width, height); 1204 + if (ret) 1205 + goto err_drm_gem_vram_vunmap; 1206 + ast_cursor_set_base(ast, off); 1207 + 1208 + ast_crtc->offset_x = AST_MAX_HWC_WIDTH - width; 1209 + ast_crtc->offset_y = AST_MAX_HWC_WIDTH - height; 1210 + 1211 + jreg = 0x2; 1212 + /* enable ARGB cursor */ 1213 + jreg |= 1; 1214 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg); 1215 + 1216 + ++ast->cursor.next_index; 1217 + 
ast->cursor.next_index %= ARRAY_SIZE(ast->cursor.gbo); 1218 + 1219 + drm_gem_vram_vunmap(gbo, dst); 1220 + 1221 + return 0; 1222 + 1223 + err_drm_gem_vram_vunmap: 1224 + drm_gem_vram_vunmap(gbo, dst); 1225 + return ret; 1226 + } 1227 + 1228 + static void ast_hide_cursor(struct drm_crtc *crtc) 1229 + { 1230 + struct ast_private *ast = crtc->dev->dev_private; 1231 + 1232 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00); 1233 + } 1234 + 1126 1235 static int ast_cursor_set(struct drm_crtc *crtc, 1127 1236 struct drm_file *file_priv, 1128 1237 uint32_t handle, 1129 1238 uint32_t width, 1130 1239 uint32_t height) 1131 1240 { 1132 - struct ast_private *ast = crtc->dev->dev_private; 1133 - struct ast_crtc *ast_crtc = to_ast_crtc(crtc); 1134 1241 struct drm_gem_object *obj; 1135 1242 struct drm_gem_vram_object *gbo; 1136 - s64 dst_gpu; 1137 - u64 gpu_addr; 1138 - u32 csum; 1243 + u8 *src; 1139 1244 int ret; 1140 - u8 *src, *dst; 1141 1245 1142 1246 if (!handle) { 1143 1247 ast_hide_cursor(crtc); ··· 1231 1179 return -ENOENT; 1232 1180 } 1233 1181 gbo = drm_gem_vram_of_gem(obj); 1234 - 1235 - ret = drm_gem_vram_pin(gbo, 0); 1236 - if (ret) 1237 - goto err_drm_gem_object_put_unlocked; 1238 - src = drm_gem_vram_kmap(gbo, true, NULL); 1182 + src = drm_gem_vram_vmap(gbo); 1239 1183 if (IS_ERR(src)) { 1240 1184 ret = PTR_ERR(src); 1241 - goto err_drm_gem_vram_unpin; 1185 + goto err_drm_gem_object_put_unlocked; 1242 1186 } 1243 1187 1244 - dst = drm_gem_vram_kmap(drm_gem_vram_of_gem(ast->cursor_cache), 1245 - false, NULL); 1246 - if (IS_ERR(dst)) { 1247 - ret = PTR_ERR(dst); 1248 - goto err_drm_gem_vram_kunmap; 1249 - } 1250 - dst_gpu = drm_gem_vram_offset(drm_gem_vram_of_gem(ast->cursor_cache)); 1251 - if (dst_gpu < 0) { 1252 - ret = (int)dst_gpu; 1253 - goto err_drm_gem_vram_kunmap; 1254 - } 1188 + ret = ast_show_cursor(crtc, src, width, height); 1189 + if (ret) 1190 + goto err_drm_gem_vram_vunmap; 1255 1191 1256 - dst += (AST_HWC_SIZE + 
AST_HWC_SIGNATURE_SIZE)*ast->next_cursor; 1257 - 1258 - /* do data transfer to cursor cache */ 1259 - csum = copy_cursor_image(src, dst, width, height); 1260 - 1261 - /* write checksum + signature */ 1262 - { 1263 - struct drm_gem_vram_object *dst_gbo = 1264 - drm_gem_vram_of_gem(ast->cursor_cache); 1265 - u8 *dst = drm_gem_vram_kmap(dst_gbo, false, NULL); 1266 - dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1267 - writel(csum, dst); 1268 - writel(width, dst + AST_HWC_SIGNATURE_SizeX); 1269 - writel(height, dst + AST_HWC_SIGNATURE_SizeY); 1270 - writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX); 1271 - writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY); 1272 - 1273 - /* set pattern offset */ 1274 - gpu_addr = (u64)dst_gpu; 1275 - gpu_addr += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor; 1276 - gpu_addr >>= 3; 1277 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, gpu_addr & 0xff); 1278 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, (gpu_addr >> 8) & 0xff); 1279 - ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, (gpu_addr >> 16) & 0xff); 1280 - } 1281 - ast_crtc->offset_x = AST_MAX_HWC_WIDTH - width; 1282 - ast_crtc->offset_y = AST_MAX_HWC_WIDTH - height; 1283 - 1284 - ast->next_cursor = (ast->next_cursor + 1) % AST_DEFAULT_HWC_NUM; 1285 - 1286 - ast_show_cursor(crtc); 1287 - 1288 - drm_gem_vram_kunmap(gbo); 1289 - drm_gem_vram_unpin(gbo); 1192 + drm_gem_vram_vunmap(gbo, src); 1290 1193 drm_gem_object_put_unlocked(obj); 1291 1194 1292 1195 return 0; 1293 1196 1294 - err_drm_gem_vram_kunmap: 1295 - drm_gem_vram_kunmap(gbo); 1296 - err_drm_gem_vram_unpin: 1297 - drm_gem_vram_unpin(gbo); 1197 + err_drm_gem_vram_vunmap: 1198 + drm_gem_vram_vunmap(gbo, src); 1298 1199 err_drm_gem_object_put_unlocked: 1299 1200 drm_gem_object_put_unlocked(obj); 1300 1201 return ret; ··· 1258 1253 { 1259 1254 struct ast_crtc *ast_crtc = to_ast_crtc(crtc); 1260 1255 struct ast_private *ast = crtc->dev->dev_private; 1256 + struct drm_gem_vram_object *gbo; 1261 
1257 int x_offset, y_offset; 1262 - u8 *sig; 1258 + u8 *dst, *sig; 1259 + u8 jreg; 1263 1260 1264 - sig = drm_gem_vram_kmap(drm_gem_vram_of_gem(ast->cursor_cache), 1265 - false, NULL); 1266 - sig += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE; 1261 + gbo = ast->cursor.gbo[ast->cursor.next_index]; 1262 + dst = drm_gem_vram_vmap(gbo); 1263 + if (IS_ERR(dst)) 1264 + return PTR_ERR(dst); 1265 + 1266 + sig = dst + AST_HWC_SIZE; 1267 1267 writel(x, sig + AST_HWC_SIGNATURE_X); 1268 1268 writel(y, sig + AST_HWC_SIGNATURE_Y); 1269 1269 ··· 1291 1281 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07)); 1292 1282 1293 1283 /* dummy write to fire HWC */ 1294 - ast_show_cursor(crtc); 1284 + jreg = 0x02 | 1285 + 0x01; /* enable ARGB4444 cursor */ 1286 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg); 1287 + 1288 + drm_gem_vram_vunmap(gbo, dst); 1295 1289 1296 1290 return 0; 1297 1291 }
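ast_cursor_set_base() above scatters the cursor's VRAM offset across three CRTC index registers (0xc8..0xca), dropping the low 3 bits since the buffer is 8-byte aligned. A quick round-trip sketch of that encoding (helper names are mine; the shifts and register layout follow the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Split an 8-byte-aligned VRAM offset the way ast_cursor_set_base()
 * feeds it to CRTC index registers 0xc8/0xc9/0xca. */
static void cursor_base_split(uint64_t addr, uint8_t out[3])
{
	out[0] = (addr >> 3) & 0xff;	/* bits 3..10  */
	out[1] = (addr >> 11) & 0xff;	/* bits 11..18 */
	out[2] = (addr >> 19) & 0xff;	/* bits 19..26 */
}

/* Reassemble the offset the hardware would latch. */
static uint64_t cursor_base_join(const uint8_t in[3])
{
	return ((uint64_t)in[0] << 3) |
	       ((uint64_t)in[1] << 11) |
	       ((uint64_t)in[2] << 19);
}
```

The encoding round-trips exactly for 8-byte-aligned offsets below 1 << 27 (128 MiB), which the page-aligned, VRAM-pinned cursor BOs satisfy.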
+1 -2
drivers/gpu/drm/ast/ast_ttm.c
··· 30 30 31 31 #include <drm/drm_print.h> 32 32 #include <drm/drm_gem_vram_helper.h> 33 - #include <drm/drm_vram_mm_helper.h> 34 33 35 34 #include "ast_drv.h" 36 35 ··· 41 42 42 43 vmm = drm_vram_helper_alloc_mm( 43 44 dev, pci_resource_start(dev->pdev, 0), 44 - ast->vram_size, &drm_gem_vram_mm_funcs); 45 + ast->vram_size); 45 46 if (IS_ERR(vmm)) { 46 47 ret = PTR_ERR(vmm); 47 48 DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
+2 -1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
··· 107 107 output->encoder.possible_crtcs = 0x1; 108 108 109 109 if (panel) { 110 - bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_Unknown); 110 + bridge = drm_panel_bridge_add_typed(panel, 111 + DRM_MODE_CONNECTOR_Unknown); 111 112 if (IS_ERR(bridge)) 112 113 return PTR_ERR(bridge); 113 114 }
+2
drivers/gpu/drm/bochs/Kconfig
··· 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 6 select DRM_VRAM_HELPER 7 + select DRM_TTM 8 + select DRM_TTM_HELPER 7 9 help 8 10 Choose this option for qemu. 9 11 If M is selected the module will be called bochs-drm.
-1
drivers/gpu/drm/bochs/bochs.h
··· 10 10 #include <drm/drm_gem.h> 11 11 #include <drm/drm_gem_vram_helper.h> 12 12 #include <drm/drm_simple_kms_helper.h> 13 - #include <drm/drm_vram_mm_helper.h> 14 13 15 14 /* ---------------------------------------------------------------------- */ 16 15
+1 -1
drivers/gpu/drm/bochs/bochs_drv.c
··· 114 114 return -ENOMEM; 115 115 } 116 116 117 - ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "bochsdrmfb"); 117 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "bochsdrmfb"); 118 118 if (ret) 119 119 return ret; 120 120
+1 -2
drivers/gpu/drm/bochs/bochs_mm.c
··· 11 11 struct drm_vram_mm *vmm; 12 12 13 13 vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base, 14 - bochs->fb_size, 15 - &drm_gem_vram_mm_funcs); 14 + bochs->fb_size); 16 15 return PTR_ERR_OR_ZERO(vmm); 17 16 } 18 17
+20 -8
drivers/gpu/drm/bridge/analogix-anx78xx.c
··· 19 19 #include <linux/types.h> 20 20 21 21 #include <drm/drm_atomic_helper.h> 22 + #include <drm/drm_bridge.h> 22 23 #include <drm/drm_crtc.h> 23 24 #include <drm/drm_dp_helper.h> 24 25 #include <drm/drm_edid.h> ··· 716 715 /* 1.0V digital core power regulator */ 717 716 pdata->dvdd10 = devm_regulator_get(dev, "dvdd10"); 718 717 if (IS_ERR(pdata->dvdd10)) { 719 - DRM_ERROR("DVDD10 regulator not found\n"); 718 + if (PTR_ERR(pdata->dvdd10) != -EPROBE_DEFER) 719 + DRM_ERROR("DVDD10 regulator not found\n"); 720 + 720 721 return PTR_ERR(pdata->dvdd10); 721 722 } 722 723 ··· 1304 1301 }; 1305 1302 1306 1303 static const u16 anx78xx_chipid_list[] = { 1304 + 0x7808, 1307 1305 0x7812, 1308 1306 0x7814, 1309 1307 0x7818, ··· 1336 1332 1337 1333 err = anx78xx_init_pdata(anx78xx); 1338 1334 if (err) { 1339 - DRM_ERROR("Failed to initialize pdata: %d\n", err); 1335 + if (err != -EPROBE_DEFER) 1336 + DRM_ERROR("Failed to initialize pdata: %d\n", err); 1337 + 1340 1338 return err; 1341 1339 } 1342 1340 ··· 1356 1350 1357 1351 /* Map slave addresses of ANX7814 */ 1358 1352 for (i = 0; i < I2C_NUM_ADDRESSES; i++) { 1359 - anx78xx->i2c_dummy[i] = i2c_new_dummy(client->adapter, 1360 - anx78xx_i2c_addresses[i] >> 1); 1361 - if (!anx78xx->i2c_dummy[i]) { 1362 - err = -ENOMEM; 1363 - DRM_ERROR("Failed to reserve I2C bus %02x\n", 1364 - anx78xx_i2c_addresses[i]); 1353 + struct i2c_client *i2c_dummy; 1354 + 1355 + i2c_dummy = i2c_new_dummy_device(client->adapter, 1356 + anx78xx_i2c_addresses[i] >> 1); 1357 + if (IS_ERR(i2c_dummy)) { 1358 + err = PTR_ERR(i2c_dummy); 1359 + DRM_ERROR("Failed to reserve I2C bus %02x: %d\n", 1360 + anx78xx_i2c_addresses[i], err); 1365 1361 goto err_unregister_i2c; 1366 1362 } 1367 1363 1364 + anx78xx->i2c_dummy[i] = i2c_dummy; 1368 1365 anx78xx->map[i] = devm_regmap_init_i2c(anx78xx->i2c_dummy[i], 1369 1366 &anx78xx_regmap_config); 1370 1367 if (IS_ERR(anx78xx->map[i])) { ··· 1472 1463 1473 1464 #if IS_ENABLED(CONFIG_OF) 1474 1465 static const struct 
of_device_id anx78xx_match_table[] = { 1466 + { .compatible = "analogix,anx7808", }, 1467 + { .compatible = "analogix,anx7812", }, 1475 1468 { .compatible = "analogix,anx7814", }, 1469 + { .compatible = "analogix,anx7818", }, 1476 1470 { /* sentinel */ }, 1477 1471 }; 1478 1472 MODULE_DEVICE_TABLE(of, anx78xx_match_table);
+1
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 21 21 #include <drm/bridge/analogix_dp.h> 22 22 #include <drm/drm_atomic.h> 23 23 #include <drm/drm_atomic_helper.h> 24 + #include <drm/drm_bridge.h> 24 25 #include <drm/drm_crtc.h> 25 26 #include <drm/drm_device.h> 26 27 #include <drm/drm_panel.h>
+2 -1
drivers/gpu/drm/bridge/cdns-dsi.c
··· 956 956 957 957 panel = of_drm_find_panel(np); 958 958 if (!IS_ERR(panel)) { 959 - bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DSI); 959 + bridge = drm_panel_bridge_add_typed(panel, 960 + DRM_MODE_CONNECTOR_DSI); 960 961 } else { 961 962 bridge = of_drm_find_bridge(dev->dev.of_node); 962 963 if (!bridge)
+1
drivers/gpu/drm/bridge/dumb-vga-dac.c
··· 12 12 #include <linux/regulator/consumer.h> 13 13 14 14 #include <drm/drm_atomic_helper.h> 15 + #include <drm/drm_bridge.h> 15 16 #include <drm/drm_crtc.h> 16 17 #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h>
+2 -1
drivers/gpu/drm/bridge/lvds-encoder.c
··· 106 106 } 107 107 108 108 lvds_encoder->panel_bridge = 109 - devm_drm_panel_bridge_add(dev, panel, DRM_MODE_CONNECTOR_LVDS); 109 + devm_drm_panel_bridge_add_typed(dev, panel, 110 + DRM_MODE_CONNECTOR_LVDS); 110 111 if (IS_ERR(lvds_encoder->panel_bridge)) 111 112 return PTR_ERR(lvds_encoder->panel_bridge); 112 113
+1
drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
··· 25 25 26 26 #include <drm/drm_atomic.h> 27 27 #include <drm/drm_atomic_helper.h> 28 + #include <drm/drm_bridge.h> 28 29 #include <drm/drm_edid.h> 29 30 #include <drm/drm_print.h> 30 31 #include <drm/drm_probe_helper.h>
+1
drivers/gpu/drm/bridge/nxp-ptn3460.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/of.h> 13 13 #include <drm/drm_atomic_helper.h> 14 + #include <drm/drm_bridge.h> 14 15 #include <drm/drm_crtc.h> 15 16 #include <drm/drm_edid.h> 16 17 #include <drm/drm_of.h>
+59 -11
drivers/gpu/drm/bridge/panel.c
··· 5 5 */ 6 6 7 7 #include <drm/drm_atomic_helper.h> 8 + #include <drm/drm_bridge.h> 8 9 #include <drm/drm_connector.h> 9 10 #include <drm/drm_encoder.h> 10 11 #include <drm/drm_modeset_helper_vtables.h> ··· 134 133 * just calls the appropriate functions from &drm_panel. 135 134 * 136 135 * @panel: The drm_panel being wrapped. Must be non-NULL. 137 - * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be 138 - * created. 139 136 * 140 137 * For drivers converting from directly using drm_panel: The expected 141 138 * usage pattern is that during either encoder module probe or DSI ··· 147 148 * drm_mode_config_cleanup() if the bridge has already been attached), then 148 149 * drm_panel_bridge_remove() to free it. 149 150 * 151 + * The connector type is set to @panel->connector_type, which must be set to a 152 + * known type. Calling this function with a panel whose connector type is 153 + * DRM_MODE_CONNECTOR_Unknown will return NULL. 154 + * 150 155 * See devm_drm_panel_bridge_add() for an automatically managed version of this 151 156 * function. 152 157 */ 153 - struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel, 154 - u32 connector_type) 158 + struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel) 159 + { 160 + if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown)) 161 + return NULL; 162 + 163 + return drm_panel_bridge_add_typed(panel, panel->connector_type); 164 + } 165 + EXPORT_SYMBOL(drm_panel_bridge_add); 166 + 167 + /** 168 + * drm_panel_bridge_add_typed - Creates a &drm_bridge and &drm_connector with 169 + * an explicit connector type. 170 + * @panel: The drm_panel being wrapped. Must be non-NULL. 171 + * @connector_type: The connector type (DRM_MODE_CONNECTOR_*) 172 + * 173 + * This is just like drm_panel_bridge_add(), but forces the connector type to 174 + * @connector_type instead of inferring it from the panel. 175 + * 176 + * This function is deprecated and should not be used in new drivers. 
Use 177 + * drm_panel_bridge_add() instead, and fix panel drivers as necessary if they 178 + * don't report a connector type. 179 + */ 180 + struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel, 181 + u32 connector_type) 155 182 { 156 183 struct panel_bridge *panel_bridge; 157 184 ··· 201 176 202 177 return &panel_bridge->bridge; 203 178 } 204 - EXPORT_SYMBOL(drm_panel_bridge_add); 179 + EXPORT_SYMBOL(drm_panel_bridge_add_typed); 205 180 206 181 /** 207 182 * drm_panel_bridge_remove - Unregisters and frees a drm_bridge ··· 238 213 * that just calls the appropriate functions from &drm_panel. 239 214 * @dev: device to tie the bridge lifetime to 240 215 * @panel: The drm_panel being wrapped. Must be non-NULL. 241 - * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be 242 - * created. 243 216 * 244 217 * This is the managed version of drm_panel_bridge_add() which automatically 245 218 * calls drm_panel_bridge_remove() when @dev is unbound. 246 219 */ 247 220 struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev, 248 - struct drm_panel *panel, 249 - u32 connector_type) 221 + struct drm_panel *panel) 222 + { 223 + if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown)) 224 + return NULL; 225 + 226 + return devm_drm_panel_bridge_add_typed(dev, panel, 227 + panel->connector_type); 228 + } 229 + EXPORT_SYMBOL(devm_drm_panel_bridge_add); 230 + 231 + /** 232 + * devm_drm_panel_bridge_add_typed - Creates a managed &drm_bridge and 233 + * &drm_connector with an explicit connector type. 234 + * @dev: device to tie the bridge lifetime to 235 + * @panel: The drm_panel being wrapped. Must be non-NULL. 236 + * @connector_type: The connector type (DRM_MODE_CONNECTOR_*) 237 + * 238 + * This is just like devm_drm_panel_bridge_add(), but forces the connector type 239 + * to @connector_type instead of infering it from the panel. 240 + * 241 + * This function is deprecated and should not be used in new drivers. 
Use 242 + * devm_drm_panel_bridge_add() instead, and fix panel drivers as necessary if 243 + * they don't report a connector type. 244 + */ 245 + struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev, 246 + struct drm_panel *panel, 247 + u32 connector_type) 250 248 { 251 249 struct drm_bridge **ptr, *bridge; 252 250 ··· 278 230 if (!ptr) 279 231 return ERR_PTR(-ENOMEM); 280 232 281 - bridge = drm_panel_bridge_add(panel, connector_type); 233 + bridge = drm_panel_bridge_add_typed(panel, connector_type); 282 234 if (!IS_ERR(bridge)) { 283 235 *ptr = bridge; 284 236 devres_add(dev, ptr); ··· 288 240 289 241 return bridge; 290 242 } 291 - EXPORT_SYMBOL(devm_drm_panel_bridge_add); 243 + EXPORT_SYMBOL(devm_drm_panel_bridge_add_typed);
+1
drivers/gpu/drm/bridge/parade-ps8622.c
··· 17 17 #include <linux/regulator/consumer.h> 18 18 19 19 #include <drm/drm_atomic_helper.h> 20 + #include <drm/drm_bridge.h> 20 21 #include <drm/drm_crtc.h> 21 22 #include <drm/drm_of.h> 22 23 #include <drm/drm_panel.h>
+1
drivers/gpu/drm/bridge/sii902x.c
··· 20 20 #include <linux/clk.h> 21 21 22 22 #include <drm/drm_atomic_helper.h> 23 + #include <drm/drm_bridge.h> 23 24 #include <drm/drm_drv.h> 24 25 #include <drm/drm_edid.h> 25 26 #include <drm/drm_print.h>
+1
drivers/gpu/drm/bridge/sii9234.c
··· 13 13 * Dharam Kumar <dharam.kr@samsung.com> 14 14 */ 15 15 #include <drm/bridge/mhl.h> 16 + #include <drm/drm_bridge.h> 16 17 #include <drm/drm_crtc.h> 17 18 #include <drm/drm_edid.h> 18 19
+1
drivers/gpu/drm/bridge/sil-sii8620.c
··· 9 9 #include <asm/unaligned.h> 10 10 11 11 #include <drm/bridge/mhl.h> 12 + #include <drm/drm_bridge.h> 12 13 #include <drm/drm_crtc.h> 13 14 #include <drm/drm_edid.h> 14 15 #include <drm/drm_encoder.h>
+2 -2
drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c
··· 285 285 286 286 ret = cec_register_adapter(cec->adap, pdev->dev.parent); 287 287 if (ret < 0) { 288 - cec_notifier_cec_adap_unregister(cec->notify); 288 + cec_notifier_cec_adap_unregister(cec->notify, cec->adap); 289 289 return ret; 290 290 } 291 291 ··· 302 302 { 303 303 struct dw_hdmi_cec *cec = platform_get_drvdata(pdev); 304 304 305 - cec_notifier_cec_adap_unregister(cec->notify); 305 + cec_notifier_cec_adap_unregister(cec->notify, cec->adap); 306 306 cec_unregister_adapter(cec->adap); 307 307 308 308 return 0;
+10
drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
··· 102 102 } 103 103 104 104 dw_hdmi_set_sample_rate(hdmi, hparms->sample_rate); 105 + dw_hdmi_set_channel_status(hdmi, hparms->iec.status); 105 106 dw_hdmi_set_channel_count(hdmi, hparms->channels); 106 107 dw_hdmi_set_channel_allocation(hdmi, hparms->cea.channel_allocation); 107 108 108 109 hdmi_write(audio, inputclkfs, HDMI_AUD_INPUTCLKFS); 109 110 hdmi_write(audio, conf0, HDMI_AUD_CONF0); 110 111 hdmi_write(audio, conf1, HDMI_AUD_CONF1); 112 + 113 + return 0; 114 + } 115 + 116 + static int dw_hdmi_i2s_audio_startup(struct device *dev, void *data) 117 + { 118 + struct dw_hdmi_i2s_audio_data *audio = data; 119 + struct dw_hdmi *hdmi = audio->hdmi; 111 120 112 121 dw_hdmi_audio_enable(hdmi); 113 122 ··· 162 153 163 154 static struct hdmi_codec_ops dw_hdmi_i2s_ops = { 164 155 .hw_params = dw_hdmi_i2s_hw_params, 156 + .audio_startup = dw_hdmi_i2s_audio_startup, 165 157 .audio_shutdown = dw_hdmi_i2s_audio_shutdown, 166 158 .get_eld = dw_hdmi_i2s_get_eld, 167 159 .get_dai_id = dw_hdmi_i2s_get_dai_id,
+31
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 26 26 27 27 #include <drm/bridge/dw_hdmi.h> 28 28 #include <drm/drm_atomic_helper.h> 29 + #include <drm/drm_bridge.h> 29 30 #include <drm/drm_edid.h> 30 31 #include <drm/drm_of.h> 31 32 #include <drm/drm_print.h> ··· 37 36 #include "dw-hdmi-cec.h" 38 37 #include "dw-hdmi.h" 39 38 39 + #define DDC_CI_ADDR 0x37 40 40 #define DDC_SEGMENT_ADDR 0x30 41 41 42 42 #define HDMI_EDID_LEN 512 ··· 400 398 u8 addr = msgs[0].addr; 401 399 int i, ret = 0; 402 400 401 + if (addr == DDC_CI_ADDR) 402 + /* 403 + * The internal I2C controller does not support the multi-byte 404 + * read and write operations needed for DDC/CI. 405 + * TOFIX: Blacklist the DDC/CI address until we filter out 406 + * unsupported I2C operations. 407 + */ 408 + return -EOPNOTSUPP; 409 + 403 410 dev_dbg(hdmi->dev, "xfer: num: %d, addr: %#x\n", num, addr); 404 411 405 412 for (i = 0; i < num; i++) { ··· 590 579 591 580 return n; 592 581 } 582 + 583 + /* 584 + * When transmitting IEC60958 linear PCM audio, these registers allow to 585 + * configure the channel status information of all the channel status 586 + * bits in the IEC60958 frame. For the moment this configuration is only 587 + * used when the I2S audio interface, General Purpose Audio (GPA), 588 + * or AHB audio DMA (AHBAUDDMA) interface is active 589 + * (for S/PDIF interface this information comes from the stream). 590 + */ 591 + void dw_hdmi_set_channel_status(struct dw_hdmi *hdmi, 592 + u8 *channel_status) 593 + { 594 + /* 595 + * Set channel status register for frequency and word length. 596 + * Use default values for other registers. 597 + */ 598 + hdmi_writeb(hdmi, channel_status[3], HDMI_FC_AUDSCHNLS7); 599 + hdmi_writeb(hdmi, channel_status[4], HDMI_FC_AUDSCHNLS8); 600 + } 601 + EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_status); 593 602 594 603 static void hdmi_set_clk_regenerator(struct dw_hdmi *hdmi, 595 604 unsigned long pixel_clk, unsigned int sample_rate)
+2
drivers/gpu/drm/bridge/synopsys/dw-hdmi.h
··· 158 158 #define HDMI_FC_SPDDEVICEINF 0x1062 159 159 #define HDMI_FC_AUDSCONF 0x1063 160 160 #define HDMI_FC_AUDSSTAT 0x1064 161 + #define HDMI_FC_AUDSCHNLS7 0x106e 162 + #define HDMI_FC_AUDSCHNLS8 0x106f 161 163 #define HDMI_FC_DATACH0FILL 0x1070 162 164 #define HDMI_FC_DATACH1FILL 0x1071 163 165 #define HDMI_FC_DATACH2FILL 0x1072
+3 -7
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
··· 316 316 return ret; 317 317 318 318 if (panel) { 319 - bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DSI); 319 + bridge = drm_panel_bridge_add_typed(panel, 320 + DRM_MODE_CONNECTOR_DSI); 320 321 if (IS_ERR(bridge)) 321 322 return PTR_ERR(bridge); 322 323 } ··· 982 981 struct device *dev = &pdev->dev; 983 982 struct reset_control *apb_rst; 984 983 struct dw_mipi_dsi *dsi; 985 - struct resource *res; 986 984 int ret; 987 985 988 986 dsi = devm_kzalloc(dev, sizeof(*dsi), GFP_KERNEL); ··· 997 997 } 998 998 999 999 if (!plat_data->base) { 1000 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1001 - if (!res) 1002 - return ERR_PTR(-ENODEV); 1003 - 1004 - dsi->base = devm_ioremap_resource(dev, res); 1000 + dsi->base = devm_platform_ioremap_resource(pdev, 0); 1005 1001 if (IS_ERR(dsi->base)) 1006 1002 return ERR_PTR(-ENODEV); 1007 1003
+1
drivers/gpu/drm/bridge/tc358764.c
··· 16 16 #include <video/mipi_display.h> 17 17 18 18 #include <drm/drm_atomic_helper.h> 19 + #include <drm/drm_bridge.h> 19 20 #include <drm/drm_crtc.h> 20 21 #include <drm/drm_fb_helper.h> 21 22 #include <drm/drm_mipi_dsi.h>
+1
drivers/gpu/drm/bridge/tc358767.c
··· 26 26 #include <linux/slab.h> 27 27 28 28 #include <drm/drm_atomic_helper.h> 29 + #include <drm/drm_bridge.h> 29 30 #include <drm/drm_dp_helper.h> 30 31 #include <drm/drm_edid.h> 31 32 #include <drm/drm_of.h>
+1
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 17 17 18 18 #include <drm/drm_atomic.h> 19 19 #include <drm/drm_atomic_helper.h> 20 + #include <drm/drm_bridge.h> 20 21 #include <drm/drm_dp_helper.h> 21 22 #include <drm/drm_mipi_dsi.h> 22 23 #include <drm/drm_of.h>
+1
drivers/gpu/drm/bridge/ti-tfp410.c
··· 14 14 #include <linux/platform_device.h> 15 15 16 16 #include <drm/drm_atomic_helper.h> 17 + #include <drm/drm_bridge.h> 17 18 #include <drm/drm_crtc.h> 18 19 #include <drm/drm_print.h> 19 20 #include <drm/drm_probe_helper.h>
+1 -1
drivers/gpu/drm/cirrus/cirrus.c
··· 532 532 struct cirrus_device *cirrus; 533 533 int ret; 534 534 535 - ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "cirrusdrmfb"); 535 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "cirrusdrmfb"); 536 536 if (ret) 537 537 return ret; 538 538
+4 -14
drivers/gpu/drm/drm_atomic_helper.c
··· 31 31 #include <drm/drm_atomic.h> 32 32 #include <drm/drm_atomic_helper.h> 33 33 #include <drm/drm_atomic_uapi.h> 34 + #include <drm/drm_bridge.h> 34 35 #include <drm/drm_damage_helper.h> 35 36 #include <drm/drm_device.h> 36 37 #include <drm/drm_plane_helper.h> ··· 98 97 } 99 98 } 100 99 101 - /* 102 - * For connectors that support multiple encoders, either the 103 - * .atomic_best_encoder() or .best_encoder() operation must be implemented. 104 - */ 105 - static struct drm_encoder * 106 - pick_single_encoder_for_connector(struct drm_connector *connector) 107 - { 108 - WARN_ON(connector->encoder_ids[1]); 109 - return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]); 110 - } 111 - 112 100 static int handle_conflicting_encoders(struct drm_atomic_state *state, 113 101 bool disable_conflicting_encoders) 114 102 { ··· 125 135 else if (funcs->best_encoder) 126 136 new_encoder = funcs->best_encoder(connector); 127 137 else 128 - new_encoder = pick_single_encoder_for_connector(connector); 138 + new_encoder = drm_connector_get_single_encoder(connector); 129 139 130 140 if (new_encoder) { 131 141 if (encoder_mask & drm_encoder_mask(new_encoder)) { ··· 349 359 else if (funcs->best_encoder) 350 360 new_encoder = funcs->best_encoder(connector); 351 361 else 352 - new_encoder = pick_single_encoder_for_connector(connector); 362 + new_encoder = drm_connector_get_single_encoder(connector); 353 363 354 364 if (!new_encoder) { 355 365 DRM_DEBUG_ATOMIC("No suitable encoder found for [CONNECTOR:%d:%s]\n", ··· 472 482 continue; 473 483 474 484 funcs = crtc->helper_private; 475 - if (!funcs->mode_fixup) 485 + if (!funcs || !funcs->mode_fixup) 476 486 continue; 477 487 478 488 ret = funcs->mode_fixup(crtc, &new_crtc_state->mode,
+1 -1
drivers/gpu/drm/drm_atomic_uapi.c
··· 1405 1405 } else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) { 1406 1406 ret = drm_atomic_nonblocking_commit(state); 1407 1407 } else { 1408 - if (unlikely(drm_debug & DRM_UT_STATE)) 1408 + if (drm_debug_enabled(DRM_UT_STATE)) 1409 1409 drm_atomic_print_state(state); 1410 1410 1411 1411 ret = drm_atomic_commit(state);
+6 -1
drivers/gpu/drm/drm_blend.c
··· 130 130 * Z position is set up with drm_plane_create_zpos_immutable_property() and 131 131 * drm_plane_create_zpos_property(). It controls the visibility of overlapping 132 132 * planes. Without this property the primary plane is always below the cursor 133 - * plane, and ordering between all other planes is undefined. 133 + * plane, and ordering between all other planes is undefined. The positive 134 + * Z axis points towards the user, i.e. planes with lower Z position values 135 + * are underneath planes with higher Z position values. Note that the Z 136 + * position value can also be immutable, to inform userspace about the 137 + * hard-coded stacking of overlay planes, see 138 + * drm_plane_create_zpos_immutable_property(). 134 139 * 135 140 * pixel blend mode: 136 141 * Pixel blend mode is set up with drm_plane_create_blend_mode_property().
+1 -2
drivers/gpu/drm/drm_client_modeset.c
··· 415 415 struct drm_crtc *crtc) 416 416 { 417 417 struct drm_encoder *encoder; 418 - int i; 419 418 420 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 419 + drm_connector_for_each_possible_encoder(connector, encoder) { 421 420 if (encoder->possible_crtcs & drm_crtc_mask(crtc)) 422 421 return true; 423 422 }
+101 -43
drivers/gpu/drm/drm_connector.c
··· 365 365 int drm_connector_attach_encoder(struct drm_connector *connector, 366 366 struct drm_encoder *encoder) 367 367 { 368 - int i; 369 - 370 368 /* 371 369 * In the past, drivers have attempted to model the static association 372 370 * of connector to encoder in simple connector/encoder devices using a ··· 379 381 if (WARN_ON(connector->encoder)) 380 382 return -EINVAL; 381 383 382 - for (i = 0; i < ARRAY_SIZE(connector->encoder_ids); i++) { 383 - if (connector->encoder_ids[i] == 0) { 384 - connector->encoder_ids[i] = encoder->base.id; 385 - return 0; 386 - } 387 - } 388 - return -ENOMEM; 384 + connector->possible_encoders |= drm_encoder_mask(encoder); 385 + 386 + return 0; 389 387 } 390 388 EXPORT_SYMBOL(drm_connector_attach_encoder); 391 389 392 390 /** 393 - * drm_connector_has_possible_encoder - check if the connector and encoder are assosicated with each other 391 + * drm_connector_has_possible_encoder - check if the connector and encoder are 392 + * associated with each other 394 393 * @connector: the connector 395 394 * @encoder: the encoder 396 395 * ··· 397 402 bool drm_connector_has_possible_encoder(struct drm_connector *connector, 398 403 struct drm_encoder *encoder) 399 404 { 400 - struct drm_encoder *enc; 401 - int i; 402 - 403 - drm_connector_for_each_possible_encoder(connector, enc, i) { 404 - if (enc == encoder) 405 - return true; 406 - } 407 - 408 - return false; 405 + return connector->possible_encoders & drm_encoder_mask(encoder); 409 406 } 410 407 EXPORT_SYMBOL(drm_connector_has_possible_encoder); 411 408 ··· 467 480 * drm_connector_register - register a connector 468 481 * @connector: the connector to register 469 482 * 470 - * Register userspace interfaces for a connector 483 + * Register userspace interfaces for a connector. Only call this for connectors 484 + * which can be hotplugged after drm_dev_register() has been called already, 485 + * e.g. DP MST connectors. 
All other connectors will be registered automatically 486 + * when calling drm_dev_register(). 471 487 * 472 488 * Returns: 473 489 * Zero on success, error code on failure. ··· 516 526 * drm_connector_unregister - unregister a connector 517 527 * @connector: the connector to unregister 518 528 * 519 - * Unregister userspace interfaces for a connector 529 + * Unregister userspace interfaces for a connector. Only call this for 530 + * connectors which have been registered explicitly by calling 531 + * drm_connector_register(), since connectors are unregistered automatically 532 + * when drm_dev_unregister() is called. 520 533 */ 521 534 void drm_connector_unregister(struct drm_connector *connector) 522 535 { ··· 873 880 /* Added as part of Additional Colorimetry Extension in 861.G */ 874 881 { DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65, "DCI-P3_RGB_D65" }, 875 882 { DRM_MODE_COLORIMETRY_DCI_P3_RGB_THEATER, "DCI-P3_RGB_Theater" }, 883 + }; 884 + 885 + /* 886 + * As per DP 1.4a spec, 2.2.5.7.5 VSC SDP Payload for Pixel Encoding/Colorimetry 887 + * Format Table 2-120 888 + */ 889 + static const struct drm_prop_enum_list dp_colorspaces[] = { 890 + /* For Default case, driver will set the colorspace */ 891 + { DRM_MODE_COLORIMETRY_DEFAULT, "Default" }, 892 + { DRM_MODE_COLORIMETRY_RGB_WIDE_FIXED, "RGB_Wide_Gamut_Fixed_Point" }, 893 + /* Colorimetry based on scRGB (IEC 61966-2-2) */ 894 + { DRM_MODE_COLORIMETRY_RGB_WIDE_FLOAT, "RGB_Wide_Gamut_Floating_Point" }, 895 + /* Colorimetry based on IEC 61966-2-5 */ 896 + { DRM_MODE_COLORIMETRY_OPRGB, "opRGB" }, 897 + /* Colorimetry based on SMPTE RP 431-2 */ 898 + { DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65, "DCI-P3_RGB_D65" }, 899 + /* Colorimetry based on ITU-R BT.2020 */ 900 + { DRM_MODE_COLORIMETRY_BT2020_RGB, "BT2020_RGB" }, 901 + { DRM_MODE_COLORIMETRY_BT601_YCC, "BT601_YCC" }, 902 + { DRM_MODE_COLORIMETRY_BT709_YCC, "BT709_YCC" }, 903 + /* Standard Definition Colorimetry based on IEC 61966-2-4 */ 904 + { DRM_MODE_COLORIMETRY_XVYCC_601, "XVYCC_601" }, 905 + /* High Definition Colorimetry based on IEC 61966-2-4 */ 906 + { DRM_MODE_COLORIMETRY_XVYCC_709, "XVYCC_709" }, 907 + /* Colorimetry based on IEC 61966-2-1/Amendment 1 */ 908 + { DRM_MODE_COLORIMETRY_SYCC_601, "SYCC_601" }, 909 + /* Colorimetry based on IEC 61966-2-5 [33] */ 910 + { DRM_MODE_COLORIMETRY_OPYCC_601, "opYCC_601" }, 911 + /* Colorimetry based on ITU-R BT.2020 */ 912 + { DRM_MODE_COLORIMETRY_BT2020_CYCC, "BT2020_CYCC" }, 913 + /* Colorimetry based on ITU-R BT.2020 */ 914 + { DRM_MODE_COLORIMETRY_BT2020_YCC, "BT2020_YCC" }, 876 915 }; 877 916 878 917 /** ··· 1699 1674 * DOC: standard connector properties 1700 1675 * 1701 1676 * Colorspace: 1702 - * drm_mode_create_colorspace_property - create colorspace property 1703 1677 * This property helps select a suitable colorspace based on the sink 1704 1678 * capability. Modern sink devices support wider gamut like BT2020. 1705 1679 * This helps switch to BT2020 mode if the BT2020 encoded video stream ··· 1718 1694 * - This property is just to inform sink what colorspace 1719 1695 * source is trying to drive. 1720 1696 * 1721 - * Called by a driver the first time it's needed, must be attached to desired 1722 - * connectors. 1697 + * Because HDMI and DP have different colorspaces, 1698 + * drm_mode_create_hdmi_colorspace_property() is used for HDMI connectors and 1699 + * drm_mode_create_dp_colorspace_property() is used for DP connectors. 1723 1700 */ 1724 - int drm_mode_create_colorspace_property(struct drm_connector *connector) 1701 + 1702 + /** 1703 + * drm_mode_create_hdmi_colorspace_property - create hdmi colorspace property 1704 + * @connector: connector to create the Colorspace property on. 1705 + * 1706 + * Called by a driver the first time it's needed, must be attached to desired 1707 + * HDMI connectors. 1708 + * 1709 + * Returns: 1710 + * Zero on success, negative errno on failure. 1711 + */ 1712 + int drm_mode_create_hdmi_colorspace_property(struct drm_connector *connector) 1725 1713 { 1726 1714 struct drm_device *dev = connector->dev; 1727 - struct drm_property *prop; 1728 1715 1729 - if (connector->connector_type == DRM_MODE_CONNECTOR_HDMIA || 1730 - connector->connector_type == DRM_MODE_CONNECTOR_HDMIB) { 1731 - prop = drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, 1732 - "Colorspace", 1733 - hdmi_colorspaces, 1734 - ARRAY_SIZE(hdmi_colorspaces)); 1735 - if (!prop) 1736 - return -ENOMEM; 1737 - } else { 1738 - DRM_DEBUG_KMS("Colorspace property not supported\n"); 1716 + if (connector->colorspace_property) 1739 1717 return 0; 1740 - } 1741 1718 1742 - connector->colorspace_property = prop; 1719 + connector->colorspace_property = 1720 + drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, "Colorspace", 1721 + hdmi_colorspaces, 1722 + ARRAY_SIZE(hdmi_colorspaces)); 1723 + 1724 + if (!connector->colorspace_property) 1725 + return -ENOMEM; 1743 1726 1744 1727 return 0; 1745 1728 } 1746 - EXPORT_SYMBOL(drm_mode_create_colorspace_property); 1729 + EXPORT_SYMBOL(drm_mode_create_hdmi_colorspace_property); 1730 + 1731 + /** 1732 + * drm_mode_create_dp_colorspace_property - create dp colorspace property 1733 + * @connector: connector to create the Colorspace property on. 1734 + * 1735 + * Called by a driver the first time it's needed, must be attached to desired 1736 + * DP connectors. 1737 + * 1738 + * Returns: 1739 + * Zero on success, negative errno on failure. 1740 + */ 1741 + int drm_mode_create_dp_colorspace_property(struct drm_connector *connector) 1742 + { 1743 + struct drm_device *dev = connector->dev; 1744 + 1745 + if (connector->colorspace_property) 1746 + return 0; 1747 + 1748 + connector->colorspace_property = 1749 + drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, "Colorspace", 1750 + dp_colorspaces, 1751 + ARRAY_SIZE(dp_colorspaces)); 1752 + 1753 + if (!connector->colorspace_property) 1754 + return -ENOMEM; 1755 + 1756 + return 0; 1757 + } 1758 + EXPORT_SYMBOL(drm_mode_create_dp_colorspace_property); 1747 1759 1748 1760 /** 1749 1761 * drm_mode_create_content_type_property - create content type property ··· 2181 2121 int encoders_count = 0; 2182 2122 int ret = 0; 2183 2123 int copied = 0; 2184 - int i; 2185 2124 struct drm_mode_modeinfo u_mode; 2186 2125 struct drm_mode_modeinfo __user *mode_ptr; 2187 2126 uint32_t __user *encoder_ptr; ··· 2195 2136 if (!connector) 2196 2137 return -ENOENT; 2197 2138 2198 - drm_connector_for_each_possible_encoder(connector, encoder, i) 2199 - encoders_count++; 2139 + encoders_count = hweight32(connector->possible_encoders); 2200 2140 2201 2141 if ((out_resp->count_encoders >= encoders_count) && encoders_count) { 2202 2142 copied = 0; 2203 2143 encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr); 2204 2144 2205 2145 drm_connector_for_each_possible_encoder(connector, encoder) { 2206 2146 if (put_user(encoder->base.id, encoder_ptr + copied)) { 2207 2147 ret = -EFAULT; 2208 2148 goto out;
+22 -1
drivers/gpu/drm/drm_crtc_helper.c
··· 36 36 #include <drm/drm_atomic.h> 37 37 #include <drm/drm_atomic_helper.h> 38 38 #include <drm/drm_atomic_uapi.h> 39 + #include <drm/drm_bridge.h> 39 40 #include <drm/drm_crtc.h> 40 41 #include <drm/drm_crtc_helper.h> 41 42 #include <drm/drm_drv.h> ··· 460 459 __drm_helper_disable_unused_functions(dev); 461 460 } 462 461 462 + /* 463 + * For connectors that support multiple encoders, either the 464 + * .atomic_best_encoder() or .best_encoder() operation must be implemented. 465 + */ 466 + struct drm_encoder * 467 + drm_connector_get_single_encoder(struct drm_connector *connector) 468 + { 469 + struct drm_encoder *encoder; 470 + 471 + WARN_ON(hweight32(connector->possible_encoders) > 1); 472 + drm_connector_for_each_possible_encoder(connector, encoder) 473 + return encoder; 474 + 475 + return NULL; 476 + } 477 + 463 478 /** 464 479 * drm_crtc_helper_set_config - set a new config from userspace 465 480 * @set: mode set configuration ··· 641 624 new_encoder = connector->encoder; 642 625 for (ro = 0; ro < set->num_connectors; ro++) { 643 626 if (set->connectors[ro] == connector) { 644 - new_encoder = connector_funcs->best_encoder(connector); 627 + if (connector_funcs->best_encoder) 628 + new_encoder = connector_funcs->best_encoder(connector); 629 + else 630 + new_encoder = drm_connector_get_single_encoder(connector); 631 + 645 632 /* if we can't get an encoder for a connector 646 633 we are setting now - then fail */ 647 634 if (new_encoder == NULL)
+3
drivers/gpu/drm/drm_crtc_helper_internal.h
··· 75 75 const struct drm_display_mode *mode); 76 76 enum drm_mode_status drm_connector_mode_valid(struct drm_connector *connector, 77 77 struct drm_display_mode *mode); 78 + 79 + struct drm_encoder * 80 + drm_connector_get_single_encoder(struct drm_connector *connector);
+7 -1
drivers/gpu/drm/drm_damage_helper.c
··· 212 212 drm_for_each_plane(plane, fb->dev) { 213 213 struct drm_plane_state *plane_state; 214 214 215 - if (plane->state->fb != fb) 215 + ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx); 216 + if (ret) 217 + goto out; 218 + 219 + if (plane->state->fb != fb) { 220 + drm_modeset_unlock(&plane->mutex); 216 221 continue; 222 + } 217 223 218 224 plane_state = drm_atomic_get_plane_state(state, plane); 219 225 if (IS_ERR(plane_state)) {
+3 -5
drivers/gpu/drm/drm_debugfs_crc.c
··· 334 334 return LINE_LEN(crc->values_cnt); 335 335 } 336 336 337 - static unsigned int crtc_crc_poll(struct file *file, poll_table *wait) 337 + static __poll_t crtc_crc_poll(struct file *file, poll_table *wait) 338 338 { 339 339 struct drm_crtc *crtc = file->f_inode->i_private; 340 340 struct drm_crtc_crc *crc = &crtc->crc; 341 - unsigned ret; 341 + __poll_t ret = 0; 342 342 343 343 poll_wait(file, &crc->wq, wait); 344 344 345 345 spin_lock_irq(&crc->lock); 346 346 if (crc->source && crtc_crc_data_count(crc)) 347 - ret = POLLIN | POLLRDNORM; 348 - else 349 - ret = 0; 347 + ret |= EPOLLIN | EPOLLRDNORM; 350 348 spin_unlock_irq(&crc->lock); 351 349 352 350 return ret;
+16 -9
drivers/gpu/drm/drm_dp_cec.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/module.h> 10 10 #include <linux/slab.h> 11 + #include <drm/drm_connector.h> 11 12 #include <drm/drm_dp_helper.h> 13 + #include <drm/drmP.h> 12 14 #include <media/cec.h> 13 15 14 16 /* ··· 297 295 */ 298 296 void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid) 299 297 { 300 - u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD; 298 + struct drm_connector *connector = aux->cec.connector; 299 + u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD | 300 + CEC_CAP_CONNECTOR_INFO; 301 + struct cec_connector_info conn_info; 301 302 unsigned int num_las = 1; 302 303 u8 cap; 303 304 ··· 349 344 350 345 /* Create a new adapter */ 351 346 aux->cec.adap = cec_allocate_adapter(&drm_dp_cec_adap_ops, 352 - aux, aux->cec.name, cec_caps, 347 + aux, connector->name, cec_caps, 353 348 num_las); 354 349 if (IS_ERR(aux->cec.adap)) { 355 350 aux->cec.adap = NULL; 356 351 goto unlock; 357 352 } 358 - if (cec_register_adapter(aux->cec.adap, aux->cec.parent)) { 353 + 354 + cec_fill_conn_info_from_drm(&conn_info, connector); 355 + cec_s_conn_info(aux->cec.adap, &conn_info); 356 + 357 + if (cec_register_adapter(aux->cec.adap, connector->dev->dev)) { 359 358 cec_delete_adapter(aux->cec.adap); 360 359 aux->cec.adap = NULL; 361 360 } else { ··· 415 406 /** 416 407 * drm_dp_cec_register_connector() - register a new connector 417 408 * @aux: DisplayPort AUX channel 418 - * @name: name of the CEC device 419 - * @parent: parent device 409 + * @connector: drm connector 420 410 * 421 411 * A new connector was registered with associated CEC adapter name and 422 412 * CEC adapter parent device. After registering the name and parent 423 413 * drm_dp_cec_set_edid() is called to check if the connector supports 424 414 * CEC and to register a CEC adapter if that is the case. 425 415 */ 426 - void drm_dp_cec_register_connector(struct drm_dp_aux *aux, const char *name, 427 - struct device *parent) 416 + void drm_dp_cec_register_connector(struct drm_dp_aux *aux, 417 + struct drm_connector *connector) 428 418 { 429 419 WARN_ON(aux->cec.adap); 430 420 if (WARN_ON(!aux->transfer)) 431 421 return; 432 - aux->cec.name = name; 433 - aux->cec.parent = parent; 422 + aux->cec.connector = connector; 434 423 INIT_DELAYED_WORK(&aux->cec.unregister_work, 435 424 drm_dp_cec_unregister_work); 436 425 }
+8
drivers/gpu/drm/drm_dp_helper.c
··· 1109 1109 * @aux: DisplayPort AUX channel 1110 1110 * 1111 1111 * Automatically calls drm_dp_aux_init() if this hasn't been done yet. 1112 + * This should only be called when the underlying &struct drm_connector is 1113 + * initialized already. Therefore the best place to call this is from 1114 + * &drm_connector_funcs.late_register. Note that drivers which don't follow this 1115 + * will Oops when CONFIG_DRM_DP_AUX_CHARDEV is enabled. 1116 + * 1117 + * Drivers which need to use the aux channel before that point (e.g. at driver 1118 + * load time, before drm_dev_register() has been called) need to call 1119 + * drm_dp_aux_init(). 1112 1120 * 1113 1121 * Returns 0 on success or a negative error code on failure.
+510 -243
drivers/gpu/drm/drm_dp_mst_topology.c
··· 32 32 #include <drm/drm_atomic_helper.h> 33 33 #include <drm/drm_dp_mst_helper.h> 34 34 #include <drm/drm_drv.h> 35 - #include <drm/drm_fixed.h> 36 35 #include <drm/drm_print.h> 37 36 #include <drm/drm_probe_helper.h> 38 37 39 38 #include "drm_crtc_helper_internal.h" 39 + #include "drm_dp_mst_topology_internal.h" 40 40 41 41 /** 42 42 * DOC: dp mst helper ··· 47 47 */ 48 48 static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr, 49 49 char *buf); 50 - static int test_calc_pbn_mode(void); 51 50 52 51 static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port); 53 52 ··· 72 73 static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux); 73 74 static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux); 74 75 static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr); 76 + 77 + #define DBG_PREFIX "[dp_mst]" 75 78 76 79 #define DP_STR(x) [DP_ ## x] = #x 77 80 ··· 131 130 } 132 131 133 132 #undef DP_STR 133 + #define DP_STR(x) [DRM_DP_SIDEBAND_TX_ ## x] = #x 134 + 135 + static const char *drm_dp_mst_sideband_tx_state_str(int state) 136 + { 137 + static const char * const sideband_reason_str[] = { 138 + DP_STR(QUEUED), 139 + DP_STR(START_SEND), 140 + DP_STR(SENT), 141 + DP_STR(RX), 142 + DP_STR(TIMEOUT), 143 + }; 144 + 145 + if (state >= ARRAY_SIZE(sideband_reason_str) || 146 + !sideband_reason_str[state]) 147 + return "unknown"; 148 + 149 + return sideband_reason_str[state]; 150 + } 151 + 152 + static int 153 + drm_dp_mst_rad_to_str(const u8 rad[8], u8 lct, char *out, size_t len) 154 + { 155 + int i; 156 + u8 unpacked_rad[16]; 157 + 158 + for (i = 0; i < lct; i++) { 159 + if (i % 2) 160 + unpacked_rad[i] = rad[i / 2] >> 4; 161 + else 162 + unpacked_rad[i] = rad[i / 2] & BIT_MASK(4); 163 + } 164 + 165 + /* TODO: Eventually add something to printk so we can format the rad 166 + * like this: 1.2.3 167 + */ 168 + return snprintf(out, len, "%*phC", lct, unpacked_rad); 169 + } 134 170 135 171 /* sideband msg handling */ 136 172 
static u8 drm_dp_msg_header_crc4(const uint8_t *data, size_t num_nibbles) ··· 300 262 return true; 301 263 } 302 264 303 - static void drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req, 304 - struct drm_dp_sideband_msg_tx *raw) 265 + void 266 + drm_dp_encode_sideband_req(const struct drm_dp_sideband_msg_req_body *req, 267 + struct drm_dp_sideband_msg_tx *raw) 305 268 { 306 269 int idx = 0; 307 270 int i; ··· 311 272 312 273 switch (req->req_type) { 313 274 case DP_ENUM_PATH_RESOURCES: 275 + case DP_POWER_DOWN_PHY: 276 + case DP_POWER_UP_PHY: 314 277 buf[idx] = (req->u.port_num.port_number & 0xf) << 4; 315 278 idx++; 316 279 break; ··· 400 359 memcpy(&buf[idx], req->u.i2c_write.bytes, req->u.i2c_write.num_bytes); 401 360 idx += req->u.i2c_write.num_bytes; 402 361 break; 403 - 404 - case DP_POWER_DOWN_PHY: 405 - case DP_POWER_UP_PHY: 406 - buf[idx] = (req->u.port_num.port_number & 0xf) << 4; 407 - idx++; 408 - break; 409 362 } 410 363 raw->cur_len = idx; 364 + } 365 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_encode_sideband_req); 366 + 367 + /* Decode a sideband request we've encoded, mainly used for debugging */ 368 + int 369 + drm_dp_decode_sideband_req(const struct drm_dp_sideband_msg_tx *raw, 370 + struct drm_dp_sideband_msg_req_body *req) 371 + { 372 + const u8 *buf = raw->msg; 373 + int i, idx = 0; 374 + 375 + req->req_type = buf[idx++] & 0x7f; 376 + switch (req->req_type) { 377 + case DP_ENUM_PATH_RESOURCES: 378 + case DP_POWER_DOWN_PHY: 379 + case DP_POWER_UP_PHY: 380 + req->u.port_num.port_number = (buf[idx] >> 4) & 0xf; 381 + break; 382 + case DP_ALLOCATE_PAYLOAD: 383 + { 384 + struct drm_dp_allocate_payload *a = 385 + &req->u.allocate_payload; 386 + 387 + a->number_sdp_streams = buf[idx] & 0xf; 388 + a->port_number = (buf[idx] >> 4) & 0xf; 389 + 390 + WARN_ON(buf[++idx] & 0x80); 391 + a->vcpi = buf[idx] & 0x7f; 392 + 393 + a->pbn = buf[++idx] << 8; 394 + a->pbn |= buf[++idx]; 395 + 396 + idx++; 397 + for (i = 0; i < a->number_sdp_streams; i++) 
{ 398 + a->sdp_stream_sink[i] = 399 + (buf[idx + (i / 2)] >> ((i % 2) ? 0 : 4)) & 0xf; 400 + } 401 + } 402 + break; 403 + case DP_QUERY_PAYLOAD: 404 + req->u.query_payload.port_number = (buf[idx] >> 4) & 0xf; 405 + WARN_ON(buf[++idx] & 0x80); 406 + req->u.query_payload.vcpi = buf[idx] & 0x7f; 407 + break; 408 + case DP_REMOTE_DPCD_READ: 409 + { 410 + struct drm_dp_remote_dpcd_read *r = &req->u.dpcd_read; 411 + 412 + r->port_number = (buf[idx] >> 4) & 0xf; 413 + 414 + r->dpcd_address = (buf[idx] << 16) & 0xf0000; 415 + r->dpcd_address |= (buf[++idx] << 8) & 0xff00; 416 + r->dpcd_address |= buf[++idx] & 0xff; 417 + 418 + r->num_bytes = buf[++idx]; 419 + } 420 + break; 421 + case DP_REMOTE_DPCD_WRITE: 422 + { 423 + struct drm_dp_remote_dpcd_write *w = 424 + &req->u.dpcd_write; 425 + 426 + w->port_number = (buf[idx] >> 4) & 0xf; 427 + 428 + w->dpcd_address = (buf[idx] << 16) & 0xf0000; 429 + w->dpcd_address |= (buf[++idx] << 8) & 0xff00; 430 + w->dpcd_address |= buf[++idx] & 0xff; 431 + 432 + w->num_bytes = buf[++idx]; 433 + 434 + w->bytes = kmemdup(&buf[++idx], w->num_bytes, 435 + GFP_KERNEL); 436 + if (!w->bytes) 437 + return -ENOMEM; 438 + } 439 + break; 440 + case DP_REMOTE_I2C_READ: 441 + { 442 + struct drm_dp_remote_i2c_read *r = &req->u.i2c_read; 443 + struct drm_dp_remote_i2c_read_tx *tx; 444 + bool failed = false; 445 + 446 + r->num_transactions = buf[idx] & 0x3; 447 + r->port_number = (buf[idx] >> 4) & 0xf; 448 + for (i = 0; i < r->num_transactions; i++) { 449 + tx = &r->transactions[i]; 450 + 451 + tx->i2c_dev_id = buf[++idx] & 0x7f; 452 + tx->num_bytes = buf[++idx]; 453 + tx->bytes = kmemdup(&buf[++idx], 454 + tx->num_bytes, 455 + GFP_KERNEL); 456 + if (!tx->bytes) { 457 + failed = true; 458 + break; 459 + } 460 + idx += tx->num_bytes; 461 + tx->no_stop_bit = (buf[idx] >> 5) & 0x1; 462 + tx->i2c_transaction_delay = buf[idx] & 0xf; 463 + } 464 + 465 + if (failed) { 466 + for (i = 0; i < r->num_transactions; i++) 467 + kfree(tx->bytes); 468 + return -ENOMEM; 
469 + } 470 + 471 + r->read_i2c_device_id = buf[++idx] & 0x7f; 472 + r->num_bytes_read = buf[++idx]; 473 + } 474 + break; 475 + case DP_REMOTE_I2C_WRITE: 476 + { 477 + struct drm_dp_remote_i2c_write *w = &req->u.i2c_write; 478 + 479 + w->port_number = (buf[idx] >> 4) & 0xf; 480 + w->write_i2c_device_id = buf[++idx] & 0x7f; 481 + w->num_bytes = buf[++idx]; 482 + w->bytes = kmemdup(&buf[++idx], w->num_bytes, 483 + GFP_KERNEL); 484 + if (!w->bytes) 485 + return -ENOMEM; 486 + } 487 + break; 488 + } 489 + 490 + return 0; 491 + } 492 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_decode_sideband_req); 493 + 494 + void 495 + drm_dp_dump_sideband_msg_req_body(const struct drm_dp_sideband_msg_req_body *req, 496 + int indent, struct drm_printer *printer) 497 + { 498 + int i; 499 + 500 + #define P(f, ...) drm_printf_indent(printer, indent, f, ##__VA_ARGS__) 501 + if (req->req_type == DP_LINK_ADDRESS) { 502 + /* No contents to print */ 503 + P("type=%s\n", drm_dp_mst_req_type_str(req->req_type)); 504 + return; 505 + } 506 + 507 + P("type=%s contents:\n", drm_dp_mst_req_type_str(req->req_type)); 508 + indent++; 509 + 510 + switch (req->req_type) { 511 + case DP_ENUM_PATH_RESOURCES: 512 + case DP_POWER_DOWN_PHY: 513 + case DP_POWER_UP_PHY: 514 + P("port=%d\n", req->u.port_num.port_number); 515 + break; 516 + case DP_ALLOCATE_PAYLOAD: 517 + P("port=%d vcpi=%d pbn=%d sdp_streams=%d %*ph\n", 518 + req->u.allocate_payload.port_number, 519 + req->u.allocate_payload.vcpi, req->u.allocate_payload.pbn, 520 + req->u.allocate_payload.number_sdp_streams, 521 + req->u.allocate_payload.number_sdp_streams, 522 + req->u.allocate_payload.sdp_stream_sink); 523 + break; 524 + case DP_QUERY_PAYLOAD: 525 + P("port=%d vcpi=%d\n", 526 + req->u.query_payload.port_number, 527 + req->u.query_payload.vcpi); 528 + break; 529 + case DP_REMOTE_DPCD_READ: 530 + P("port=%d dpcd_addr=%05x len=%d\n", 531 + req->u.dpcd_read.port_number, req->u.dpcd_read.dpcd_address, 532 + req->u.dpcd_read.num_bytes); 533 + break; 534 + 
case DP_REMOTE_DPCD_WRITE: 535 + P("port=%d addr=%05x len=%d: %*ph\n", 536 + req->u.dpcd_write.port_number, 537 + req->u.dpcd_write.dpcd_address, 538 + req->u.dpcd_write.num_bytes, req->u.dpcd_write.num_bytes, 539 + req->u.dpcd_write.bytes); 540 + break; 541 + case DP_REMOTE_I2C_READ: 542 + P("port=%d num_tx=%d id=%d size=%d:\n", 543 + req->u.i2c_read.port_number, 544 + req->u.i2c_read.num_transactions, 545 + req->u.i2c_read.read_i2c_device_id, 546 + req->u.i2c_read.num_bytes_read); 547 + 548 + indent++; 549 + for (i = 0; i < req->u.i2c_read.num_transactions; i++) { 550 + const struct drm_dp_remote_i2c_read_tx *rtx = 551 + &req->u.i2c_read.transactions[i]; 552 + 553 + P("%d: id=%03d size=%03d no_stop_bit=%d tx_delay=%03d: %*ph\n", 554 + i, rtx->i2c_dev_id, rtx->num_bytes, 555 + rtx->no_stop_bit, rtx->i2c_transaction_delay, 556 + rtx->num_bytes, rtx->bytes); 557 + } 558 + break; 559 + case DP_REMOTE_I2C_WRITE: 560 + P("port=%d id=%d size=%d: %*ph\n", 561 + req->u.i2c_write.port_number, 562 + req->u.i2c_write.write_i2c_device_id, 563 + req->u.i2c_write.num_bytes, req->u.i2c_write.num_bytes, 564 + req->u.i2c_write.bytes); 565 + break; 566 + default: 567 + P("???\n"); 568 + break; 569 + } 570 + #undef P 571 + } 572 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_dump_sideband_msg_req_body); 573 + 574 + static inline void 575 + drm_dp_mst_dump_sideband_msg_tx(struct drm_printer *p, 576 + const struct drm_dp_sideband_msg_tx *txmsg) 577 + { 578 + struct drm_dp_sideband_msg_req_body req; 579 + char buf[64]; 580 + int ret; 581 + int i; 582 + 583 + drm_dp_mst_rad_to_str(txmsg->dst->rad, txmsg->dst->lct, buf, 584 + sizeof(buf)); 585 + drm_printf(p, "txmsg cur_offset=%x cur_len=%x seqno=%x state=%s path_msg=%d dst=%s\n", 586 + txmsg->cur_offset, txmsg->cur_len, txmsg->seqno, 587 + drm_dp_mst_sideband_tx_state_str(txmsg->state), 588 + txmsg->path_msg, buf); 589 + 590 + ret = drm_dp_decode_sideband_req(txmsg, &req); 591 + if (ret) { 592 + drm_printf(p, "<failed to decode sideband req: 
%d>\n", ret); 593 + return; 594 + } 595 + drm_dp_dump_sideband_msg_req_body(&req, 1, p); 596 + 597 + switch (req.req_type) { 598 + case DP_REMOTE_DPCD_WRITE: 599 + kfree(req.u.dpcd_write.bytes); 600 + break; 601 + case DP_REMOTE_I2C_READ: 602 + for (i = 0; i < req.u.i2c_read.num_transactions; i++) 603 + kfree(req.u.i2c_read.transactions[i].bytes); 604 + break; 605 + case DP_REMOTE_I2C_WRITE: 606 + kfree(req.u.i2c_write.bytes); 607 + break; 608 + } 411 609 } 412 610 413 611 static void drm_dp_crc_sideband_chunk_req(u8 *msg, u8 len) ··· 1122 842 clear_bit(vcpi - 1, &mgr->vcpi_mask); 1123 843 1124 844 for (i = 0; i < mgr->max_payloads; i++) { 1125 - if (mgr->proposed_vcpis[i]) 1126 - if (mgr->proposed_vcpis[i]->vcpi == vcpi) { 1127 - mgr->proposed_vcpis[i] = NULL; 1128 - clear_bit(i + 1, &mgr->payload_mask); 1129 - } 845 + if (mgr->proposed_vcpis[i] && 846 + mgr->proposed_vcpis[i]->vcpi == vcpi) { 847 + mgr->proposed_vcpis[i] = NULL; 848 + clear_bit(i + 1, &mgr->payload_mask); 849 + } 1130 850 } 1131 851 mutex_unlock(&mgr->payload_lock); 1132 852 } ··· 1179 899 } 1180 900 } 1181 901 out: 902 + if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) { 903 + struct drm_printer p = drm_debug_printer(DBG_PREFIX); 904 + 905 + drm_dp_mst_dump_sideband_msg_tx(&p, txmsg); 906 + } 1182 907 mutex_unlock(&mgr->qlock); 1183 908 1184 909 return ret; ··· 1902 1617 } 1903 1618 EXPORT_SYMBOL(drm_dp_mst_connector_early_unregister); 1904 1619 1905 - static void drm_dp_add_port(struct drm_dp_mst_branch *mstb, 1906 - struct drm_device *dev, 1907 - struct drm_dp_link_addr_reply_port *port_msg) 1620 + static void 1621 + drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb, 1622 + struct drm_device *dev, 1623 + struct drm_dp_link_addr_reply_port *port_msg) 1908 1624 { 1909 1625 struct drm_dp_mst_port *port; 1910 1626 bool ret; ··· 2008 1722 drm_dp_mst_topology_put_port(port); 2009 1723 } 2010 1724 2011 - static void drm_dp_update_port(struct drm_dp_mst_branch *mstb, 2012 - 
struct drm_dp_connection_status_notify *conn_stat) 1725 + static void 1726 + drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb, 1727 + struct drm_dp_connection_status_notify *conn_stat) 2013 1728 { 2014 1729 struct drm_dp_mst_port *port; 2015 1730 int old_pdt; ··· 2087 1800 2088 1801 static struct drm_dp_mst_branch *get_mst_branch_device_by_guid_helper( 2089 1802 struct drm_dp_mst_branch *mstb, 2090 - uint8_t *guid) 1803 + const uint8_t *guid) 2091 1804 { 2092 1805 struct drm_dp_mst_branch *found_mstb; 2093 1806 struct drm_dp_mst_port *port; ··· 2111 1824 2112 1825 static struct drm_dp_mst_branch * 2113 1826 drm_dp_get_mst_branch_device_by_guid(struct drm_dp_mst_topology_mgr *mgr, 2114 - uint8_t *guid) 1827 + const uint8_t *guid) 2115 1828 { 2116 1829 struct drm_dp_mst_branch *mstb; 2117 1830 int ret; ··· 2322 2035 idx += tosend + 1; 2323 2036 2324 2037 ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx); 2325 - if (ret) { 2326 - DRM_DEBUG_KMS("sideband msg failed to send\n"); 2038 + if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)) { 2039 + struct drm_printer p = drm_debug_printer(DBG_PREFIX); 2040 + 2041 + drm_printf(&p, "sideband msg failed to send\n"); 2042 + drm_dp_mst_dump_sideband_msg_tx(&p, txmsg); 2327 2043 return ret; 2328 2044 } 2329 2045 ··· 2388 2098 { 2389 2099 mutex_lock(&mgr->qlock); 2390 2100 list_add_tail(&txmsg->next, &mgr->tx_msg_downq); 2101 + 2102 + if (drm_debug_enabled(DRM_UT_DP)) { 2103 + struct drm_printer p = drm_debug_printer(DBG_PREFIX); 2104 + 2105 + drm_dp_mst_dump_sideband_msg_tx(&p, txmsg); 2106 + } 2107 + 2391 2108 if (list_is_singular(&mgr->tx_msg_downq)) 2392 2109 process_single_down_tx_qlock(mgr); 2393 2110 mutex_unlock(&mgr->qlock); 2394 2111 } 2395 2112 2113 + static void 2114 + drm_dp_dump_link_address(struct drm_dp_link_address_ack_reply *reply) 2115 + { 2116 + struct drm_dp_link_addr_reply_port *port_reply; 2117 + int i; 2118 + 2119 + for (i = 0; i < reply->nports; i++) { 2120 + port_reply = &reply->ports[i]; 2121 
+ DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n", 2122 + i, 2123 + port_reply->input_port, 2124 + port_reply->peer_device_type, 2125 + port_reply->port_number, 2126 + port_reply->dpcd_revision, 2127 + port_reply->mcs, 2128 + port_reply->ddps, 2129 + port_reply->legacy_device_plug_status, 2130 + port_reply->num_sdp_streams, 2131 + port_reply->num_sdp_stream_sinks); 2132 + } 2133 + } 2134 + 2396 2135 static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr, 2397 2136 struct drm_dp_mst_branch *mstb) 2398 2137 { 2399 - int len; 2400 2138 struct drm_dp_sideband_msg_tx *txmsg; 2401 - int ret; 2139 + struct drm_dp_link_address_ack_reply *reply; 2140 + int i, len, ret; 2402 2141 2403 2142 txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL); 2404 2143 if (!txmsg) ··· 2439 2120 mstb->link_address_sent = true; 2440 2121 drm_dp_queue_down_tx(mgr, txmsg); 2441 2122 2123 + /* FIXME: Actually do some real error handling here */ 2442 2124 ret = drm_dp_mst_wait_tx_reply(mstb, txmsg); 2443 - if (ret > 0) { 2444 - int i; 2445 - 2446 - if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) { 2447 - DRM_DEBUG_KMS("link address nak received\n"); 2448 - } else { 2449 - DRM_DEBUG_KMS("link address reply: %d\n", txmsg->reply.u.link_addr.nports); 2450 - for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) { 2451 - DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n", i, 2452 - txmsg->reply.u.link_addr.ports[i].input_port, 2453 - txmsg->reply.u.link_addr.ports[i].peer_device_type, 2454 - txmsg->reply.u.link_addr.ports[i].port_number, 2455 - txmsg->reply.u.link_addr.ports[i].dpcd_revision, 2456 - txmsg->reply.u.link_addr.ports[i].mcs, 2457 - txmsg->reply.u.link_addr.ports[i].ddps, 2458 - txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status, 2459 - txmsg->reply.u.link_addr.ports[i].num_sdp_streams, 2460 - txmsg->reply.u.link_addr.ports[i].num_sdp_stream_sinks); 
2461 - } 2462 - 2463 - drm_dp_check_mstb_guid(mstb, txmsg->reply.u.link_addr.guid); 2464 - 2465 - for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) { 2466 - drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]); 2467 - } 2468 - drm_kms_helper_hotplug_event(mgr->dev); 2469 - } 2470 - } else { 2471 - mstb->link_address_sent = false; 2472 - DRM_DEBUG_KMS("link address failed %d\n", ret); 2125 + if (ret <= 0) { 2126 + DRM_ERROR("Sending link address failed with %d\n", ret); 2127 + goto out; 2128 + } 2129 + if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) { 2130 + DRM_ERROR("link address NAK received\n"); 2131 + ret = -EIO; 2132 + goto out; 2473 2133 } 2474 2134 2135 + reply = &txmsg->reply.u.link_addr; 2136 + DRM_DEBUG_KMS("link address reply: %d\n", reply->nports); 2137 + drm_dp_dump_link_address(reply); 2138 + 2139 + drm_dp_check_mstb_guid(mstb, reply->guid); 2140 + 2141 + for (i = 0; i < reply->nports; i++) 2142 + drm_dp_mst_handle_link_address_port(mstb, mgr->dev, 2143 + &reply->ports[i]); 2144 + 2145 + drm_kms_helper_hotplug_event(mgr->dev); 2146 + 2147 + out: 2148 + if (ret <= 0) 2149 + mstb->link_address_sent = false; 2475 2150 kfree(txmsg); 2476 2151 } 2477 2152 2478 - static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr, 2479 - struct drm_dp_mst_branch *mstb, 2480 - struct drm_dp_mst_port *port) 2153 + static int 2154 + drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr, 2155 + struct drm_dp_mst_branch *mstb, 2156 + struct drm_dp_mst_port *port) 2481 2157 { 2482 - int len; 2158 + struct drm_dp_enum_path_resources_ack_reply *path_res; 2483 2159 struct drm_dp_sideband_msg_tx *txmsg; 2160 + int len; 2484 2161 int ret; 2485 2162 2486 2163 txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL); ··· 2490 2175 2491 2176 ret = drm_dp_mst_wait_tx_reply(mstb, txmsg); 2492 2177 if (ret > 0) { 2178 + path_res = &txmsg->reply.u.path_resources; 2179 + 2493 2180 if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) { 
2494 2181 DRM_DEBUG_KMS("enum path resources nak received\n"); 2495 2182 } else { 2496 - if (port->port_num != txmsg->reply.u.path_resources.port_number) 2183 + if (port->port_num != path_res->port_number) 2497 2184 DRM_ERROR("got incorrect port in response\n"); 2498 - DRM_DEBUG_KMS("enum path resources %d: %d %d\n", txmsg->reply.u.path_resources.port_number, txmsg->reply.u.path_resources.full_payload_bw_number, 2499 - txmsg->reply.u.path_resources.avail_payload_bw_number); 2500 - port->available_pbn = txmsg->reply.u.path_resources.avail_payload_bw_number; 2185 + 2186 + DRM_DEBUG_KMS("enum path resources %d: %d %d\n", 2187 + path_res->port_number, 2188 + path_res->full_payload_bw_number, 2189 + path_res->avail_payload_bw_number); 2190 + port->available_pbn = 2191 + path_res->avail_payload_bw_number; 2501 2192 } 2502 2193 } 2503 2194 ··· 2976 2655 return 0; 2977 2656 } 2978 2657 2979 - static bool drm_dp_get_vc_payload_bw(int dp_link_bw, 2980 - int dp_link_count, 2981 - int *out) 2658 + static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count) 2982 2659 { 2983 - switch (dp_link_bw) { 2984 - default: 2660 + if (dp_link_bw == 0 || dp_link_count == 0) 2985 2661 DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n", 2986 2662 dp_link_bw, dp_link_count); 2987 - return false; 2988 2663 2989 - case DP_LINK_BW_1_62: 2990 - *out = 3 * dp_link_count; 2991 - break; 2992 - case DP_LINK_BW_2_7: 2993 - *out = 5 * dp_link_count; 2994 - break; 2995 - case DP_LINK_BW_5_4: 2996 - *out = 10 * dp_link_count; 2997 - break; 2998 - case DP_LINK_BW_8_1: 2999 - *out = 15 * dp_link_count; 3000 - break; 3001 - } 3002 - return true; 2664 + return dp_link_bw * dp_link_count / 2; 3003 2665 } 3004 2666 3005 2667 /** ··· 3014 2710 goto out_unlock; 3015 2711 } 3016 2712 3017 - if (!drm_dp_get_vc_payload_bw(mgr->dpcd[1], 3018 - mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK, 3019 - &mgr->pbn_div)) { 2713 + mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1], 2714 + mgr->dpcd[2] & 
DP_MAX_LANE_COUNT_MASK); 2715 + if (mgr->pbn_div == 0) { 3020 2716 ret = -EINVAL; 3021 2717 goto out_unlock; 3022 2718 } ··· 3194 2890 3195 2891 static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr) 3196 2892 { 3197 - int ret = 0; 2893 + struct drm_dp_sideband_msg_tx *txmsg; 2894 + struct drm_dp_mst_branch *mstb; 2895 + struct drm_dp_sideband_msg_hdr *hdr = &mgr->down_rep_recv.initial_hdr; 2896 + int slot = -1; 3198 2897 3199 - if (!drm_dp_get_one_sb_msg(mgr, false)) { 3200 - memset(&mgr->down_rep_recv, 0, 3201 - sizeof(struct drm_dp_sideband_msg_rx)); 2898 + if (!drm_dp_get_one_sb_msg(mgr, false)) 2899 + goto clear_down_rep_recv; 2900 + 2901 + if (!mgr->down_rep_recv.have_eomt) 3202 2902 return 0; 2903 + 2904 + mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad); 2905 + if (!mstb) { 2906 + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", 2907 + hdr->lct); 2908 + goto clear_down_rep_recv; 3203 2909 } 3204 2910 3205 - if (mgr->down_rep_recv.have_eomt) { 3206 - struct drm_dp_sideband_msg_tx *txmsg; 3207 - struct drm_dp_mst_branch *mstb; 3208 - int slot = -1; 3209 - mstb = drm_dp_get_mst_branch_device(mgr, 3210 - mgr->down_rep_recv.initial_hdr.lct, 3211 - mgr->down_rep_recv.initial_hdr.rad); 2911 + /* find the message */ 2912 + slot = hdr->seqno; 2913 + mutex_lock(&mgr->qlock); 2914 + txmsg = mstb->tx_slots[slot]; 2915 + /* remove from slots */ 2916 + mutex_unlock(&mgr->qlock); 3212 2917 3213 - if (!mstb) { 3214 - DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->down_rep_recv.initial_hdr.lct); 3215 - memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3216 - return 0; 3217 - } 3218 - 3219 - /* find the message */ 3220 - slot = mgr->down_rep_recv.initial_hdr.seqno; 3221 - mutex_lock(&mgr->qlock); 3222 - txmsg = mstb->tx_slots[slot]; 3223 - /* remove from slots */ 3224 - mutex_unlock(&mgr->qlock); 3225 - 3226 - if (!txmsg) { 3227 - DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n", 3228 - 
mstb, 3229 - mgr->down_rep_recv.initial_hdr.seqno, 3230 - mgr->down_rep_recv.initial_hdr.lct, 3231 - mgr->down_rep_recv.initial_hdr.rad[0], 3232 - mgr->down_rep_recv.msg[0]); 3233 - drm_dp_mst_topology_put_mstb(mstb); 3234 - memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3235 - return 0; 3236 - } 3237 - 3238 - drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply); 3239 - 3240 - if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) 3241 - DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n", 3242 - txmsg->reply.req_type, 3243 - drm_dp_mst_req_type_str(txmsg->reply.req_type), 3244 - txmsg->reply.u.nak.reason, 3245 - drm_dp_mst_nak_reason_str(txmsg->reply.u.nak.reason), 3246 - txmsg->reply.u.nak.nak_data); 3247 - 3248 - memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3249 - drm_dp_mst_topology_put_mstb(mstb); 3250 - 3251 - mutex_lock(&mgr->qlock); 3252 - txmsg->state = DRM_DP_SIDEBAND_TX_RX; 3253 - mstb->tx_slots[slot] = NULL; 3254 - mutex_unlock(&mgr->qlock); 3255 - 3256 - wake_up_all(&mgr->tx_waitq); 2918 + if (!txmsg) { 2919 + DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n", 2920 + mstb, hdr->seqno, hdr->lct, hdr->rad[0], 2921 + mgr->down_rep_recv.msg[0]); 2922 + goto no_msg; 3257 2923 } 3258 - return ret; 2924 + 2925 + drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply); 2926 + 2927 + if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) 2928 + DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n", 2929 + txmsg->reply.req_type, 2930 + drm_dp_mst_req_type_str(txmsg->reply.req_type), 2931 + txmsg->reply.u.nak.reason, 2932 + drm_dp_mst_nak_reason_str(txmsg->reply.u.nak.reason), 2933 + txmsg->reply.u.nak.nak_data); 2934 + 2935 + memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 2936 + drm_dp_mst_topology_put_mstb(mstb); 2937 + 2938 + mutex_lock(&mgr->qlock); 2939 + txmsg->state = 
DRM_DP_SIDEBAND_TX_RX; 2940 + mstb->tx_slots[slot] = NULL; 2941 + mutex_unlock(&mgr->qlock); 2942 + 2943 + wake_up_all(&mgr->tx_waitq); 2944 + 2945 + return 0; 2946 + 2947 + no_msg: 2948 + drm_dp_mst_topology_put_mstb(mstb); 2949 + clear_down_rep_recv: 2950 + memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 2951 + 2952 + return 0; 3259 2953 } 3260 2954 3261 2955 static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) 3262 2956 { 3263 - int ret = 0; 2957 + struct drm_dp_sideband_msg_req_body msg; 2958 + struct drm_dp_sideband_msg_hdr *hdr = &mgr->up_req_recv.initial_hdr; 2959 + struct drm_dp_mst_branch *mstb = NULL; 2960 + const u8 *guid; 2961 + bool seqno; 3264 2962 3265 - if (!drm_dp_get_one_sb_msg(mgr, true)) { 3266 - memset(&mgr->up_req_recv, 0, 3267 - sizeof(struct drm_dp_sideband_msg_rx)); 2963 + if (!drm_dp_get_one_sb_msg(mgr, true)) 2964 + goto out; 2965 + 2966 + if (!mgr->up_req_recv.have_eomt) 3268 2967 return 0; 2968 + 2969 + if (!hdr->broadcast) { 2970 + mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad); 2971 + if (!mstb) { 2972 + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", 2973 + hdr->lct); 2974 + goto out; 2975 + } 3269 2976 } 3270 2977 3271 - if (mgr->up_req_recv.have_eomt) { 3272 - struct drm_dp_sideband_msg_req_body msg; 3273 - struct drm_dp_mst_branch *mstb = NULL; 3274 - bool seqno; 2978 + seqno = hdr->seqno; 2979 + drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg); 3275 2980 3276 - if (!mgr->up_req_recv.initial_hdr.broadcast) { 3277 - mstb = drm_dp_get_mst_branch_device(mgr, 3278 - mgr->up_req_recv.initial_hdr.lct, 3279 - mgr->up_req_recv.initial_hdr.rad); 3280 - if (!mstb) { 3281 - DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct); 3282 - memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3283 - return 0; 3284 - } 2981 + if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) 2982 + guid = msg.u.conn_stat.guid; 2983 + else if 
(msg.req_type == DP_RESOURCE_STATUS_NOTIFY) 2984 + guid = msg.u.resource_stat.guid; 2985 + else 2986 + goto out; 2987 + 2988 + drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, 2989 + false); 2990 + 2991 + if (!mstb) { 2992 + mstb = drm_dp_get_mst_branch_device_by_guid(mgr, guid); 2993 + if (!mstb) { 2994 + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", 2995 + hdr->lct); 2996 + goto out; 3285 2997 } 3286 - 3287 - seqno = mgr->up_req_recv.initial_hdr.seqno; 3288 - drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg); 3289 - 3290 - if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { 3291 - drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false); 3292 - 3293 - if (!mstb) 3294 - mstb = drm_dp_get_mst_branch_device_by_guid(mgr, msg.u.conn_stat.guid); 3295 - 3296 - if (!mstb) { 3297 - DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct); 3298 - memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3299 - return 0; 3300 - } 3301 - 3302 - drm_dp_update_port(mstb, &msg.u.conn_stat); 3303 - 3304 - DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, msg.u.conn_stat.legacy_device_plug_status, msg.u.conn_stat.displayport_device_plug_status, msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, msg.u.conn_stat.peer_device_type); 3305 - drm_kms_helper_hotplug_event(mgr->dev); 3306 - 3307 - } else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { 3308 - drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false); 3309 - if (!mstb) 3310 - mstb = drm_dp_get_mst_branch_device_by_guid(mgr, msg.u.resource_stat.guid); 3311 - 3312 - if (!mstb) { 3313 - DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct); 3314 - memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3315 - return 0; 3316 - } 3317 - 3318 - DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n", 
msg.u.resource_stat.port_number, msg.u.resource_stat.available_pbn); 3319 - } 3320 - 3321 - if (mstb) 3322 - drm_dp_mst_topology_put_mstb(mstb); 3323 - 3324 - memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3325 2998 } 3326 - return ret; 2999 + 3000 + if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { 3001 + drm_dp_mst_handle_conn_stat(mstb, &msg.u.conn_stat); 3002 + 3003 + DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", 3004 + msg.u.conn_stat.port_number, 3005 + msg.u.conn_stat.legacy_device_plug_status, 3006 + msg.u.conn_stat.displayport_device_plug_status, 3007 + msg.u.conn_stat.message_capability_status, 3008 + msg.u.conn_stat.input_port, 3009 + msg.u.conn_stat.peer_device_type); 3010 + 3011 + drm_kms_helper_hotplug_event(mgr->dev); 3012 + } else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { 3013 + DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n", 3014 + msg.u.resource_stat.port_number, 3015 + msg.u.resource_stat.available_pbn); 3016 + } 3017 + 3018 + drm_dp_mst_topology_put_mstb(mstb); 3019 + out: 3020 + memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3021 + return 0; 3327 3022 } 3328 3023 3329 3024 /** ··· 3842 3539 */ 3843 3540 int drm_dp_calc_pbn_mode(int clock, int bpp) 3844 3541 { 3845 - u64 kbps; 3846 - s64 peak_kbps; 3847 - u32 numerator; 3848 - u32 denominator; 3849 - 3850 - kbps = clock * bpp; 3851 - 3852 3542 /* 3853 3543 * margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006 3854 3544 * The unit of 54/64Mbytes/sec is an arbitrary unit chosen based on ··· 3852 3556 * peak_kbps *= (64/54) 3853 3557 * peak_kbps *= 8 convert to bytes 3854 3558 */ 3855 - 3856 - numerator = 64 * 1006; 3857 - denominator = 54 * 8 * 1000 * 1000; 3858 - 3859 - kbps *= numerator; 3860 - peak_kbps = drm_fixp_from_fraction(kbps, denominator); 3861 - 3862 - return drm_fixp2int_ceil(peak_kbps); 3559 + return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006), 3560 + 8 * 54 * 1000 * 1000); 3863 3561 } 
3864 3562 EXPORT_SYMBOL(drm_dp_calc_pbn_mode); 3865 - 3866 - static int test_calc_pbn_mode(void) 3867 - { 3868 - int ret; 3869 - ret = drm_dp_calc_pbn_mode(154000, 30); 3870 - if (ret != 689) { 3871 - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n", 3872 - 154000, 30, 689, ret); 3873 - return -EINVAL; 3874 - } 3875 - ret = drm_dp_calc_pbn_mode(234000, 30); 3876 - if (ret != 1047) { 3877 - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n", 3878 - 234000, 30, 1047, ret); 3879 - return -EINVAL; 3880 - } 3881 - ret = drm_dp_calc_pbn_mode(297000, 24); 3882 - if (ret != 1063) { 3883 - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n", 3884 - 297000, 24, 1063, ret); 3885 - return -EINVAL; 3886 - } 3887 - return 0; 3888 - } 3889 3563 3890 3564 /* we want to kick the TX after we've ack the up/down IRQs. */ 3891 3565 static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr) ··· 4014 3748 } 4015 3749 list_del(&port->next); 4016 3750 mutex_unlock(&mgr->destroy_connector_lock); 4017 - 4018 - INIT_LIST_HEAD(&port->next); 4019 3751 4020 3752 mgr->cbs->destroy_connector(mgr, port->connector); 4021 3753 ··· 4234 3970 if (!mgr->proposed_vcpis) 4235 3971 return -ENOMEM; 4236 3972 set_bit(0, &mgr->payload_mask); 4237 - if (test_calc_pbn_mode() < 0) 4238 - DRM_ERROR("MST PBN self-test failed\n"); 4239 3973 4240 3974 mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL); 4241 3975 if (mst_state == NULL) ··· 4269 4007 mgr->aux = NULL; 4270 4008 drm_atomic_private_obj_fini(&mgr->base); 4271 4009 mgr->funcs = NULL; 4010 + 4011 + mutex_destroy(&mgr->destroy_connector_lock); 4012 + mutex_destroy(&mgr->payload_lock); 4013 + mutex_destroy(&mgr->qlock); 4014 + mutex_destroy(&mgr->lock); 4272 4015 } 4273 4016 EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy); 4274 4017
drivers/gpu/drm/drm_dp_mst_topology_internal.h (+24 lines)
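(See the per-file sketches accompanying the topology, DSC, and EDID hunks; this header only re-exports the selftest-only encode/decode/dump declarations and needs no example of its own.)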
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only 2 + * 3 + * Declarations for DP MST related functions which are only used in selftests 4 + * 5 + * Copyright © 2018 Red Hat 6 + * Authors: 7 + * Lyude Paul <lyude@redhat.com> 8 + */ 9 + 10 + #ifndef _DRM_DP_MST_HELPER_INTERNAL_H_ 11 + #define _DRM_DP_MST_HELPER_INTERNAL_H_ 12 + 13 + #include <drm/drm_dp_mst_helper.h> 14 + 15 + void 16 + drm_dp_encode_sideband_req(const struct drm_dp_sideband_msg_req_body *req, 17 + struct drm_dp_sideband_msg_tx *raw); 18 + int drm_dp_decode_sideband_req(const struct drm_dp_sideband_msg_tx *raw, 19 + struct drm_dp_sideband_msg_req_body *req); 20 + void 21 + drm_dp_dump_sideband_msg_req_body(const struct drm_dp_sideband_msg_req_body *req, 22 + int indent, struct drm_printer *printer); 23 + 24 + #endif /* !_DRM_DP_MST_HELPER_INTERNAL_H_ */
drivers/gpu/drm/drm_drv.c (-17 lines)
··· 46 46 #include "drm_internal.h" 47 47 #include "drm_legacy.h" 48 48 49 - /* 50 - * drm_debug: Enable debug output. 51 - * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details. 52 - */ 53 - unsigned int drm_debug = 0; 54 - EXPORT_SYMBOL(drm_debug); 55 - 56 49 MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl"); 57 50 MODULE_DESCRIPTION("DRM shared core routines"); 58 51 MODULE_LICENSE("GPL and additional rights"); 59 - MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n" 60 - "\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n" 61 - "\t\tBit 1 (0x02) will enable DRIVER messages (drm controller code)\n" 62 - "\t\tBit 2 (0x04) will enable KMS messages (modesetting code)\n" 63 - "\t\tBit 3 (0x08) will enable PRIME messages (prime code)\n" 64 - "\t\tBit 4 (0x10) will enable ATOMIC messages (atomic code)\n" 65 - "\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n" 66 - "\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n" 67 - "\t\tBit 8 (0x100) will enable DP messages (displayport code)"); 68 - module_param_named(debug, drm_debug, int, 0600); 69 52 70 53 static DEFINE_SPINLOCK(drm_minor_lock); 71 54 static struct idr drm_minors_idr;
drivers/gpu/drm/drm_dsc.c (+5, -18 lines)
··· 216 216 */ 217 217 for (i = 0; i < DSC_NUM_BUF_RANGES; i++) { 218 218 pps_payload->rc_range_parameters[i] = 219 - ((dsc_cfg->rc_range_params[i].range_min_qp << 220 - DSC_PPS_RC_RANGE_MINQP_SHIFT) | 221 - (dsc_cfg->rc_range_params[i].range_max_qp << 222 - DSC_PPS_RC_RANGE_MAXQP_SHIFT) | 223 - (dsc_cfg->rc_range_params[i].range_bpg_offset)); 224 - pps_payload->rc_range_parameters[i] = 225 - cpu_to_be16(pps_payload->rc_range_parameters[i]); 219 + cpu_to_be16((dsc_cfg->rc_range_params[i].range_min_qp << 220 + DSC_PPS_RC_RANGE_MINQP_SHIFT) | 221 + (dsc_cfg->rc_range_params[i].range_max_qp << 222 + DSC_PPS_RC_RANGE_MAXQP_SHIFT) | 223 + (dsc_cfg->rc_range_params[i].range_bpg_offset)); 226 224 } 227 225 228 226 /* PPS 88 */ ··· 334 336 else 335 337 vdsc_cfg->nfl_bpg_offset = 0; 336 338 337 - /* 2^16 - 1 */ 338 - if (vdsc_cfg->nfl_bpg_offset > 65535) { 339 - DRM_DEBUG_KMS("NflBpgOffset is too large for this slice height\n"); 340 - return -ERANGE; 341 - } 342 - 343 339 /* Number of groups used to code the entire slice */ 344 340 groups_total = groups_per_line * vdsc_cfg->slice_height; 345 341 ··· 361 369 * be used to disable the scale increment at the end of the slice 362 370 */ 363 371 vdsc_cfg->scale_increment_interval = 0; 364 - } 365 - 366 - if (vdsc_cfg->scale_increment_interval > 65535) { 367 - DRM_DEBUG_KMS("ScaleIncrementInterval is large for slice height\n"); 368 - return -ERANGE; 369 372 } 370 373 371 374 /*
drivers/gpu/drm/drm_edid.c (+104, -4 lines)
··· 1275 1275 4104, 4400, 0, 2160, 2168, 2178, 2250, 0, 1276 1276 DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1277 1277 .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1278 + /* 108 - 1280x720@48Hz 16:9 */ 1279 + { DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 90000, 1280, 2240, 1280 + 2280, 2500, 0, 720, 725, 730, 750, 0, 1281 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1282 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 1283 + /* 109 - 1280x720@48Hz 64:27 */ 1284 + { DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 90000, 1280, 2240, 1285 + 2280, 2500, 0, 720, 725, 730, 750, 0, 1286 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1287 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1288 + /* 110 - 1680x720@48Hz 64:27 */ 1289 + { DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 99000, 1680, 2490, 1290 + 2530, 2750, 0, 720, 725, 730, 750, 0, 1291 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1292 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1293 + /* 111 - 1920x1080@48Hz 16:9 */ 1294 + { DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2558, 1295 + 2602, 2750, 0, 1080, 1084, 1089, 1125, 0, 1296 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1297 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 1298 + /* 112 - 1920x1080@48Hz 64:27 */ 1299 + { DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2558, 1300 + 2602, 2750, 0, 1080, 1084, 1089, 1125, 0, 1301 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1302 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1303 + /* 113 - 2560x1080@48Hz 64:27 */ 1304 + { DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 198000, 2560, 3558, 1305 + 3602, 3750, 0, 1080, 1084, 1089, 1100, 0, 1306 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1307 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1308 + /* 114 - 3840x2160@48Hz 16:9 */ 1309 + { DRM_MODE("3840x2160", 
DRM_MODE_TYPE_DRIVER, 594000, 3840, 5116, 1310 + 5204, 5500, 0, 2160, 2168, 2178, 2250, 0, 1311 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1312 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 1313 + /* 115 - 4096x2160@48Hz 256:135 */ 1314 + { DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 5116, 1315 + 5204, 5500, 0, 2160, 2168, 2178, 2250, 0, 1316 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1317 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, }, 1318 + /* 116 - 3840x2160@48Hz 64:27 */ 1319 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 5116, 1320 + 5204, 5500, 0, 2160, 2168, 2178, 2250, 0, 1321 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1322 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1323 + /* 117 - 3840x2160@100Hz 16:9 */ 1324 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4896, 1325 + 4984, 5280, 0, 2160, 2168, 2178, 2250, 0, 1326 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1327 + .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 1328 + /* 118 - 3840x2160@120Hz 16:9 */ 1329 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4016, 1330 + 4104, 4400, 0, 2160, 2168, 2178, 2250, 0, 1331 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1332 + .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, }, 1333 + /* 119 - 3840x2160@100Hz 64:27 */ 1334 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4896, 1335 + 4984, 5280, 0, 2160, 2168, 2178, 2250, 0, 1336 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1337 + .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1338 + /* 120 - 3840x2160@120Hz 64:27 */ 1339 + { DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4016, 1340 + 4104, 4400, 0, 2160, 2168, 2178, 2250, 0, 1341 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1342 + .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1343 + /* 121 - 
5120x2160@24Hz 64:27 */ 1344 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 7116, 1345 + 7204, 7500, 0, 2160, 2168, 2178, 2200, 0, 1346 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1347 + .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1348 + /* 122 - 5120x2160@25Hz 64:27 */ 1349 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 6816, 1350 + 6904, 7200, 0, 2160, 2168, 2178, 2200, 0, 1351 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1352 + .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1353 + /* 123 - 5120x2160@30Hz 64:27 */ 1354 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 5784, 1355 + 5872, 6000, 0, 2160, 2168, 2178, 2200, 0, 1356 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1357 + .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1358 + /* 124 - 5120x2160@48Hz 64:27 */ 1359 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 5866, 1360 + 5954, 6250, 0, 2160, 2168, 2178, 2475, 0, 1361 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1362 + .vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1363 + /* 125 - 5120x2160@50Hz 64:27 */ 1364 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 6216, 1365 + 6304, 6600, 0, 2160, 2168, 2178, 2250, 0, 1366 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1367 + .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1368 + /* 126 - 5120x2160@60Hz 64:27 */ 1369 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 5284, 1370 + 5372, 5500, 0, 2160, 2168, 2178, 2250, 0, 1371 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1372 + .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, }, 1373 + /* 127 - 5120x2160@100Hz 64:27 */ 1374 + { DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 1485000, 5120, 6216, 1375 + 6304, 6600, 0, 2160, 2168, 2178, 2250, 0, 1376 + DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC), 1377 + .vrefresh = 100, .picture_aspect_ratio 
= HDMI_PICTURE_ASPECT_64_27, }, 1278 1378 }; 1279 1379 1280 1380 /* ··· 1651 1551 { 1652 1552 int i; 1653 1553 1654 - if (connector->bad_edid_counter++ && !(drm_debug & DRM_UT_KMS)) 1554 + if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS)) 1655 1555 return; 1656 1556 1657 1557 dev_warn(connector->dev->dev, ··· 3819 3719 if (*end < 4 || *end > 127) 3820 3720 return -ERANGE; 3821 3721 } else { 3822 - return -ENOTSUPP; 3722 + return -EOPNOTSUPP; 3823 3723 } 3824 3724 3825 3725 return 0; ··· 4288 4188 4289 4189 if (cea_revision(cea) < 3) { 4290 4190 DRM_DEBUG_KMS("SAD: wrong CEA revision\n"); 4291 - return -ENOTSUPP; 4191 + return -EOPNOTSUPP; 4292 4192 } 4293 4193 4294 4194 if (cea_db_offsets(cea, &start, &end)) { ··· 4349 4249 4350 4250 if (cea_revision(cea) < 3) { 4351 4251 DRM_DEBUG_KMS("SAD: wrong CEA revision\n"); 4352 - return -ENOTSUPP; 4252 + return -EOPNOTSUPP; 4353 4253 } 4354 4254 4355 4255 if (cea_db_offsets(cea, &start, &end)) {
+1 -1
drivers/gpu/drm/drm_edid_load.c
··· 175 175 u8 *edid; 176 176 int fwsize, builtin; 177 177 int i, valid_extensions = 0; 178 - bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & DRM_UT_KMS); 178 + bool print_bad_edid = !connector->bad_edid_counter || drm_debug_enabled(DRM_UT_KMS); 179 179 180 180 builtin = match_string(generic_edid_name, GENERIC_EDIDS, name); 181 181 if (builtin >= 0) {
+1
drivers/gpu/drm/drm_encoder.c
··· 22 22 23 23 #include <linux/export.h> 24 24 25 + #include <drm/drm_bridge.h> 25 26 #include <drm/drm_device.h> 26 27 #include <drm/drm_drv.h> 27 28 #include <drm/drm_encoder.h>
+1
drivers/gpu/drm/drm_fb_helper.c
··· 46 46 #include <drm/drm_print.h> 47 47 #include <drm/drm_vblank.h> 48 48 49 + #include "drm_crtc_helper_internal.h" 49 50 #include "drm_internal.h" 50 51 51 52 static bool drm_fbdev_emulation = true;
+56
drivers/gpu/drm/drm_gem_ttm_helper.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <linux/module.h> 4 + 5 + #include <drm/drm_gem_ttm_helper.h> 6 + 7 + /** 8 + * DOC: overview 9 + * 10 + * This library provides helper functions for gem objects backed by 11 + * ttm. 12 + */ 13 + 14 + /** 15 + * drm_gem_ttm_print_info() - Print &ttm_buffer_object info for debugfs 16 + * @p: DRM printer 17 + * @indent: Tab indentation level 18 + * @gem: GEM object 19 + * 20 + * This function can be used as &drm_gem_object_funcs.print_info 21 + * callback. 22 + */ 23 + void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, 24 + const struct drm_gem_object *gem) 25 + { 26 + static const char * const plname[] = { 27 + [ TTM_PL_SYSTEM ] = "system", 28 + [ TTM_PL_TT ] = "tt", 29 + [ TTM_PL_VRAM ] = "vram", 30 + [ TTM_PL_PRIV ] = "priv", 31 + 32 + [ 16 ] = "cached", 33 + [ 17 ] = "uncached", 34 + [ 18 ] = "wc", 35 + [ 19 ] = "contig", 36 + 37 + [ 21 ] = "pinned", /* NO_EVICT */ 38 + [ 22 ] = "topdown", 39 + }; 40 + const struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 41 + 42 + drm_printf_indent(p, indent, "placement="); 43 + drm_print_bits(p, bo->mem.placement, plname, ARRAY_SIZE(plname)); 44 + drm_printf(p, "\n"); 45 + 46 + if (bo->mem.bus.is_iomem) { 47 + drm_printf_indent(p, indent, "bus.base=%lx\n", 48 + (unsigned long)bo->mem.bus.base); 49 + drm_printf_indent(p, indent, "bus.offset=%lx\n", 50 + (unsigned long)bo->mem.bus.offset); 51 + } 52 + } 53 + EXPORT_SYMBOL(drm_gem_ttm_print_info); 54 + 55 + MODULE_DESCRIPTION("DRM gem ttm helpers"); 56 + MODULE_LICENSE("GPL");
+549 -125
drivers/gpu/drm/drm_gem_vram_helper.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 - #include <drm/drm_gem_vram_helper.h> 3 + #include <drm/drm_debugfs.h> 4 4 #include <drm/drm_device.h> 5 + #include <drm/drm_file.h> 6 + #include <drm/drm_gem_ttm_helper.h> 7 + #include <drm/drm_gem_vram_helper.h> 5 8 #include <drm/drm_mode.h> 6 9 #include <drm/drm_prime.h> 7 - #include <drm/drm_vram_mm_helper.h> 8 10 #include <drm/ttm/ttm_page_alloc.h> 9 11 10 12 static const struct drm_gem_object_funcs drm_gem_vram_object_funcs; ··· 16 14 * 17 15 * This library provides a GEM buffer object that is backed by video RAM 18 16 * (VRAM). It can be used for framebuffer devices with dedicated memory. 17 + * 18 + * The data structure &struct drm_vram_mm and its helpers implement a memory 19 + * manager for simple framebuffer devices with dedicated video memory. Buffer 20 + * objects are either placed in video RAM or evicted to system memory. The rsp. 21 + * buffer object is provided by &struct drm_gem_vram_object. 19 22 */ 20 23 21 24 /* ··· 33 26 * TTM buffer object in 'bo' has already been cleaned 34 27 * up; only release the GEM object. 
35 28 */ 29 + 30 + WARN_ON(gbo->kmap_use_count); 31 + WARN_ON(gbo->kmap.virtual); 32 + 36 33 drm_gem_object_release(&gbo->bo.base); 37 34 } 38 35 ··· 58 47 { 59 48 unsigned int i; 60 49 unsigned int c = 0; 50 + u32 invariant_flags = pl_flag & TTM_PL_FLAG_TOPDOWN; 61 51 62 52 gbo->placement.placement = gbo->placements; 63 53 gbo->placement.busy_placement = gbo->placements; ··· 66 54 if (pl_flag & TTM_PL_FLAG_VRAM) 67 55 gbo->placements[c++].flags = TTM_PL_FLAG_WC | 68 56 TTM_PL_FLAG_UNCACHED | 69 - TTM_PL_FLAG_VRAM; 57 + TTM_PL_FLAG_VRAM | 58 + invariant_flags; 70 59 71 60 if (pl_flag & TTM_PL_FLAG_SYSTEM) 72 61 gbo->placements[c++].flags = TTM_PL_MASK_CACHING | 73 - TTM_PL_FLAG_SYSTEM; 62 + TTM_PL_FLAG_SYSTEM | 63 + invariant_flags; 74 64 75 65 if (!c) 76 66 gbo->placements[c++].flags = TTM_PL_MASK_CACHING | 77 - TTM_PL_FLAG_SYSTEM; 67 + TTM_PL_FLAG_SYSTEM | 68 + invariant_flags; 78 69 79 70 gbo->placement.num_placement = c; 80 71 gbo->placement.num_busy_placement = c; ··· 97 82 int ret; 98 83 size_t acc_size; 99 84 100 - if (!gbo->bo.base.funcs) 101 - gbo->bo.base.funcs = &drm_gem_vram_object_funcs; 85 + gbo->bo.base.funcs = &drm_gem_vram_object_funcs; 102 86 103 87 ret = drm_gem_object_init(dev, &gbo->bo.base, size); 104 88 if (ret) ··· 206 192 } 207 193 EXPORT_SYMBOL(drm_gem_vram_offset); 208 194 209 - /** 210 - * drm_gem_vram_pin() - Pins a GEM VRAM object in a region. 211 - * @gbo: the GEM VRAM object 212 - * @pl_flag: a bitmask of possible memory regions 213 - * 214 - * Pinning a buffer object ensures that it is not evicted from 215 - * a memory region. A pinned buffer object has to be unpinned before 216 - * it can be pinned to another region. If the pl_flag argument is 0, 217 - * the buffer is pinned at its current location (video RAM or system 218 - * memory). 219 - * 220 - * Returns: 221 - * 0 on success, or 222 - * a negative error code otherwise. 
223 - */ 224 - int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag) 195 + static int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo, 196 + unsigned long pl_flag) 225 197 { 226 198 int i, ret; 227 199 struct ttm_operation_ctx ctx = { false, false }; 228 - 229 - ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 230 - if (ret < 0) 231 - return ret; 232 200 233 201 if (gbo->pin_count) 234 202 goto out; ··· 223 227 224 228 ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 225 229 if (ret < 0) 226 - goto err_ttm_bo_unreserve; 230 + return ret; 227 231 228 232 out: 229 233 ++gbo->pin_count; 230 - ttm_bo_unreserve(&gbo->bo); 231 234 232 235 return 0; 236 + } 233 237 234 - err_ttm_bo_unreserve: 238 + /** 239 + * drm_gem_vram_pin() - Pins a GEM VRAM object in a region. 240 + * @gbo: the GEM VRAM object 241 + * @pl_flag: a bitmask of possible memory regions 242 + * 243 + * Pinning a buffer object ensures that it is not evicted from 244 + * a memory region. A pinned buffer object has to be unpinned before 245 + * it can be pinned to another region. If the pl_flag argument is 0, 246 + * the buffer is pinned at its current location (video RAM or system 247 + * memory). 248 + * 249 + * Small buffer objects, such as cursor images, can lead to memory 250 + * fragmentation if they are pinned in the middle of video RAM. This 251 + * is especially a problem on devices with only a small amount of 252 + * video RAM. Fragmentation can prevent the primary framebuffer from 253 + * fitting in, even though there's enough memory overall. The modifier 254 + * DRM_GEM_VRAM_PL_FLAG_TOPDOWN marks the buffer object to be pinned 255 + * at the high end of the memory region to avoid fragmentation. 256 + * 257 + * Returns: 258 + * 0 on success, or 259 + * a negative error code otherwise. 
260 + */ 261 + int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag) 262 + { 263 + int ret; 264 + 265 + ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 266 + if (ret) 267 + return ret; 268 + ret = drm_gem_vram_pin_locked(gbo, pl_flag); 235 269 ttm_bo_unreserve(&gbo->bo); 270 + 236 271 return ret; 237 272 } 238 273 EXPORT_SYMBOL(drm_gem_vram_pin); 274 + 275 + static int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo) 276 + { 277 + int i, ret; 278 + struct ttm_operation_ctx ctx = { false, false }; 279 + 280 + if (WARN_ON_ONCE(!gbo->pin_count)) 281 + return 0; 282 + 283 + --gbo->pin_count; 284 + if (gbo->pin_count) 285 + return 0; 286 + 287 + for (i = 0; i < gbo->placement.num_placement ; ++i) 288 + gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 289 + 290 + ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 291 + if (ret < 0) 292 + return ret; 293 + 294 + return 0; 295 + } 239 296 240 297 /** 241 298 * drm_gem_vram_unpin() - Unpins a GEM VRAM object ··· 300 251 */ 301 252 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) 302 253 { 303 - int i, ret; 304 - struct ttm_operation_ctx ctx = { false, false }; 254 + int ret; 305 255 306 256 ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 307 - if (ret < 0) 257 + if (ret) 308 258 return ret; 309 - 310 - if (WARN_ON_ONCE(!gbo->pin_count)) 311 - goto out; 312 - 313 - --gbo->pin_count; 314 - if (gbo->pin_count) 315 - goto out; 316 - 317 - for (i = 0; i < gbo->placement.num_placement ; ++i) 318 - gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT; 319 - 320 - ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx); 321 - if (ret < 0) 322 - goto err_ttm_bo_unreserve; 323 - 324 - out: 259 + ret = drm_gem_vram_unpin_locked(gbo); 325 260 ttm_bo_unreserve(&gbo->bo); 326 261 327 - return 0; 328 - 329 - err_ttm_bo_unreserve: 330 - ttm_bo_unreserve(&gbo->bo); 331 262 return ret; 332 263 } 333 264 EXPORT_SYMBOL(drm_gem_vram_unpin); 265 + 266 + static void *drm_gem_vram_kmap_locked(struct 
drm_gem_vram_object *gbo, 267 + bool map, bool *is_iomem) 268 + { 269 + int ret; 270 + struct ttm_bo_kmap_obj *kmap = &gbo->kmap; 271 + 272 + if (gbo->kmap_use_count > 0) 273 + goto out; 274 + 275 + if (kmap->virtual || !map) 276 + goto out; 277 + 278 + ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); 279 + if (ret) 280 + return ERR_PTR(ret); 281 + 282 + out: 283 + if (!kmap->virtual) { 284 + if (is_iomem) 285 + *is_iomem = false; 286 + return NULL; /* not mapped; don't increment ref */ 287 + } 288 + ++gbo->kmap_use_count; 289 + if (is_iomem) 290 + return ttm_kmap_obj_virtual(kmap, is_iomem); 291 + return kmap->virtual; 292 + } 334 293 335 294 /** 336 295 * drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space ··· 361 304 bool *is_iomem) 362 305 { 363 306 int ret; 364 - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; 307 + void *virtual; 365 308 366 - if (kmap->virtual || !map) 367 - goto out; 368 - 369 - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); 309 + ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 370 310 if (ret) 371 311 return ERR_PTR(ret); 312 + virtual = drm_gem_vram_kmap_locked(gbo, map, is_iomem); 313 + ttm_bo_unreserve(&gbo->bo); 372 314 373 - out: 374 - if (!is_iomem) 375 - return kmap->virtual; 376 - if (!kmap->virtual) { 377 - *is_iomem = false; 378 - return NULL; 379 - } 380 - return ttm_kmap_obj_virtual(kmap, is_iomem); 315 + return virtual; 381 316 } 382 317 EXPORT_SYMBOL(drm_gem_vram_kmap); 318 + 319 + static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) 320 + { 321 + if (WARN_ON_ONCE(!gbo->kmap_use_count)) 322 + return; 323 + if (--gbo->kmap_use_count > 0) 324 + return; 325 + 326 + /* 327 + * Permanently mapping and unmapping buffers adds overhead from 328 + * updating the page tables and creates debugging output. Therefore, 329 + * we delay the actual unmap operation until the BO gets evicted 330 + * from memory. See drm_gem_vram_bo_driver_move_notify(). 
331 + */ 332 + } 383 333 384 334 /** 385 335 * drm_gem_vram_kunmap() - Unmaps a GEM VRAM object ··· 394 330 */ 395 331 void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo) 396 332 { 397 - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; 333 + int ret; 398 334 399 - if (!kmap->virtual) 335 + ret = ttm_bo_reserve(&gbo->bo, false, false, NULL); 336 + if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) 400 337 return; 401 - 402 - ttm_bo_kunmap(kmap); 403 - kmap->virtual = NULL; 338 + drm_gem_vram_kunmap_locked(gbo); 339 + ttm_bo_unreserve(&gbo->bo); 404 340 } 405 341 EXPORT_SYMBOL(drm_gem_vram_kunmap); 342 + 343 + /** 344 + * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address 345 + * space 346 + * @gbo: The GEM VRAM object to map 347 + * 348 + * The vmap function pins a GEM VRAM object to its current location, either 349 + * system or video memory, and maps its buffer into kernel address space. 350 + * As pinned object cannot be relocated, you should avoid pinning objects 351 + * permanently. Call drm_gem_vram_vunmap() with the returned address to 352 + * unmap and unpin the GEM VRAM object. 353 + * 354 + * If you have special requirements for the pinning or mapping operations, 355 + * call drm_gem_vram_pin() and drm_gem_vram_kmap() directly. 356 + * 357 + * Returns: 358 + * The buffer's virtual address on success, or 359 + * an ERR_PTR()-encoded error code otherwise. 
360 + */ 361 + void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) 362 + { 363 + int ret; 364 + void *base; 365 + 366 + ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); 367 + if (ret) 368 + return ERR_PTR(ret); 369 + 370 + ret = drm_gem_vram_pin_locked(gbo, 0); 371 + if (ret) 372 + goto err_ttm_bo_unreserve; 373 + base = drm_gem_vram_kmap_locked(gbo, true, NULL); 374 + if (IS_ERR(base)) { 375 + ret = PTR_ERR(base); 376 + goto err_drm_gem_vram_unpin_locked; 377 + } 378 + 379 + ttm_bo_unreserve(&gbo->bo); 380 + 381 + return base; 382 + 383 + err_drm_gem_vram_unpin_locked: 384 + drm_gem_vram_unpin_locked(gbo); 385 + err_ttm_bo_unreserve: 386 + ttm_bo_unreserve(&gbo->bo); 387 + return ERR_PTR(ret); 388 + } 389 + EXPORT_SYMBOL(drm_gem_vram_vmap); 390 + 391 + /** 392 + * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object 393 + * @gbo: The GEM VRAM object to unmap 394 + * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() 395 + * 396 + * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See 397 + * the documentation for drm_gem_vram_vmap() for more information. 398 + */ 399 + void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) 400 + { 401 + int ret; 402 + 403 + ret = ttm_bo_reserve(&gbo->bo, false, false, NULL); 404 + if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) 405 + return; 406 + 407 + drm_gem_vram_kunmap_locked(gbo); 408 + drm_gem_vram_unpin_locked(gbo); 409 + 410 + ttm_bo_unreserve(&gbo->bo); 411 + } 412 + EXPORT_SYMBOL(drm_gem_vram_vunmap); 406 413 407 414 /** 408 415 * drm_gem_vram_fill_create_dumb() - \ ··· 545 410 return (bo->destroy == ttm_buffer_object_destroy); 546 411 } 547 412 548 - /** 549 - * drm_gem_vram_bo_driver_evict_flags() - \ 550 - Implements &struct ttm_bo_driver.evict_flags 551 - * @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo 552 - * @pl: TTM placement information. 
553 - */ 554 - void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo, 555 - struct ttm_placement *pl) 413 + static void drm_gem_vram_bo_driver_evict_flags(struct drm_gem_vram_object *gbo, 414 + struct ttm_placement *pl) 556 415 { 557 - struct drm_gem_vram_object *gbo; 558 - 559 - /* TTM may pass BOs that are not GEM VRAM BOs. */ 560 - if (!drm_is_gem_vram(bo)) 561 - return; 562 - 563 - gbo = drm_gem_vram_of_bo(bo); 564 416 drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM); 565 417 *pl = gbo->placement; 566 418 } 567 - EXPORT_SYMBOL(drm_gem_vram_bo_driver_evict_flags); 568 419 569 - /** 570 - * drm_gem_vram_bo_driver_verify_access() - \ 571 - Implements &struct ttm_bo_driver.verify_access 572 - * @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo 573 - * @filp: File pointer. 574 - * 575 - * Returns: 576 - * 0 on success, or 577 - * a negative errno code otherwise. 578 - */ 579 - int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo, 580 - struct file *filp) 420 + static int drm_gem_vram_bo_driver_verify_access(struct drm_gem_vram_object *gbo, 421 + struct file *filp) 581 422 { 582 - struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo); 583 - 584 423 return drm_vma_node_verify_access(&gbo->bo.base.vma_node, 585 424 filp->private_data); 586 425 } 587 - EXPORT_SYMBOL(drm_gem_vram_bo_driver_verify_access); 588 426 589 - /* 590 - * drm_gem_vram_mm_funcs - Functions for &struct drm_vram_mm 591 - * 592 - * Most users of @struct drm_gem_vram_object will also use 593 - * @struct drm_vram_mm. This instance of &struct drm_vram_mm_funcs 594 - * can be used to connect both. 
595 - */ 596 - const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs = { 597 - .evict_flags = drm_gem_vram_bo_driver_evict_flags, 598 - .verify_access = drm_gem_vram_bo_driver_verify_access 599 - }; 600 - EXPORT_SYMBOL(drm_gem_vram_mm_funcs); 427 + static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, 428 + bool evict, 429 + struct ttm_mem_reg *new_mem) 430 + { 431 + struct ttm_bo_kmap_obj *kmap = &gbo->kmap; 432 + 433 + if (WARN_ON_ONCE(gbo->kmap_use_count)) 434 + return; 435 + 436 + if (!kmap->virtual) 437 + return; 438 + ttm_bo_kunmap(kmap); 439 + kmap->virtual = NULL; 440 + } 601 441 602 442 /* 603 443 * Helpers for struct drm_gem_object_funcs ··· 705 595 static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) 706 596 { 707 597 struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 708 - int ret; 709 598 void *base; 710 599 711 - ret = drm_gem_vram_pin(gbo, 0); 712 - if (ret) 600 + base = drm_gem_vram_vmap(gbo); 601 + if (IS_ERR(base)) 713 602 return NULL; 714 - base = drm_gem_vram_kmap(gbo, true, NULL); 715 - if (IS_ERR(base)) { 716 - drm_gem_vram_unpin(gbo); 717 - return NULL; 718 - } 719 603 return base; 720 604 } 721 605 ··· 724 620 { 725 621 struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 726 622 727 - drm_gem_vram_kunmap(gbo); 728 - drm_gem_vram_unpin(gbo); 623 + drm_gem_vram_vunmap(gbo, vaddr); 729 624 } 730 625 731 626 /* ··· 736 633 .pin = drm_gem_vram_object_pin, 737 634 .unpin = drm_gem_vram_object_unpin, 738 635 .vmap = drm_gem_vram_object_vmap, 739 - .vunmap = drm_gem_vram_object_vunmap 636 + .vunmap = drm_gem_vram_object_vunmap, 637 + .print_info = drm_gem_ttm_print_info, 740 638 }; 639 + 640 + /* 641 + * VRAM memory manager 642 + */ 643 + 644 + /* 645 + * TTM TT 646 + */ 647 + 648 + static void backend_func_destroy(struct ttm_tt *tt) 649 + { 650 + ttm_tt_fini(tt); 651 + kfree(tt); 652 + } 653 + 654 + static struct ttm_backend_func backend_func = { 655 + .destroy = backend_func_destroy 656 + }; 
657 + 658 + /* 659 + * TTM BO device 660 + */ 661 + 662 + static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo, 663 + uint32_t page_flags) 664 + { 665 + struct ttm_tt *tt; 666 + int ret; 667 + 668 + tt = kzalloc(sizeof(*tt), GFP_KERNEL); 669 + if (!tt) 670 + return NULL; 671 + 672 + tt->func = &backend_func; 673 + 674 + ret = ttm_tt_init(tt, bo, page_flags); 675 + if (ret < 0) 676 + goto err_ttm_tt_init; 677 + 678 + return tt; 679 + 680 + err_ttm_tt_init: 681 + kfree(tt); 682 + return NULL; 683 + } 684 + 685 + static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 686 + struct ttm_mem_type_manager *man) 687 + { 688 + switch (type) { 689 + case TTM_PL_SYSTEM: 690 + man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 691 + man->available_caching = TTM_PL_MASK_CACHING; 692 + man->default_caching = TTM_PL_FLAG_CACHED; 693 + break; 694 + case TTM_PL_VRAM: 695 + man->func = &ttm_bo_manager_func; 696 + man->flags = TTM_MEMTYPE_FLAG_FIXED | 697 + TTM_MEMTYPE_FLAG_MAPPABLE; 698 + man->available_caching = TTM_PL_FLAG_UNCACHED | 699 + TTM_PL_FLAG_WC; 700 + man->default_caching = TTM_PL_FLAG_WC; 701 + break; 702 + default: 703 + return -EINVAL; 704 + } 705 + return 0; 706 + } 707 + 708 + static void bo_driver_evict_flags(struct ttm_buffer_object *bo, 709 + struct ttm_placement *placement) 710 + { 711 + struct drm_gem_vram_object *gbo; 712 + 713 + /* TTM may pass BOs that are not GEM VRAM BOs. */ 714 + if (!drm_is_gem_vram(bo)) 715 + return; 716 + 717 + gbo = drm_gem_vram_of_bo(bo); 718 + 719 + drm_gem_vram_bo_driver_evict_flags(gbo, placement); 720 + } 721 + 722 + static int bo_driver_verify_access(struct ttm_buffer_object *bo, 723 + struct file *filp) 724 + { 725 + struct drm_gem_vram_object *gbo; 726 + 727 + /* TTM may pass BOs that are not GEM VRAM BOs. 
*/ 728 + if (!drm_is_gem_vram(bo)) 729 + return -EINVAL; 730 + 731 + gbo = drm_gem_vram_of_bo(bo); 732 + 733 + return drm_gem_vram_bo_driver_verify_access(gbo, filp); 734 + } 735 + 736 + static void bo_driver_move_notify(struct ttm_buffer_object *bo, 737 + bool evict, 738 + struct ttm_mem_reg *new_mem) 739 + { 740 + struct drm_gem_vram_object *gbo; 741 + 742 + /* TTM may pass BOs that are not GEM VRAM BOs. */ 743 + if (!drm_is_gem_vram(bo)) 744 + return; 745 + 746 + gbo = drm_gem_vram_of_bo(bo); 747 + 748 + drm_gem_vram_bo_driver_move_notify(gbo, evict, new_mem); 749 + } 750 + 751 + static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev, 752 + struct ttm_mem_reg *mem) 753 + { 754 + struct ttm_mem_type_manager *man = bdev->man + mem->mem_type; 755 + struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev); 756 + 757 + if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 758 + return -EINVAL; 759 + 760 + mem->bus.addr = NULL; 761 + mem->bus.size = mem->num_pages << PAGE_SHIFT; 762 + 763 + switch (mem->mem_type) { 764 + case TTM_PL_SYSTEM: /* nothing to do */ 765 + mem->bus.offset = 0; 766 + mem->bus.base = 0; 767 + mem->bus.is_iomem = false; 768 + break; 769 + case TTM_PL_VRAM: 770 + mem->bus.offset = mem->start << PAGE_SHIFT; 771 + mem->bus.base = vmm->vram_base; 772 + mem->bus.is_iomem = true; 773 + break; 774 + default: 775 + return -EINVAL; 776 + } 777 + 778 + return 0; 779 + } 780 + 781 + static void bo_driver_io_mem_free(struct ttm_bo_device *bdev, 782 + struct ttm_mem_reg *mem) 783 + { } 784 + 785 + static struct ttm_bo_driver bo_driver = { 786 + .ttm_tt_create = bo_driver_ttm_tt_create, 787 + .ttm_tt_populate = ttm_pool_populate, 788 + .ttm_tt_unpopulate = ttm_pool_unpopulate, 789 + .init_mem_type = bo_driver_init_mem_type, 790 + .eviction_valuable = ttm_bo_eviction_valuable, 791 + .evict_flags = bo_driver_evict_flags, 792 + .verify_access = bo_driver_verify_access, 793 + .move_notify = bo_driver_move_notify, 794 + .io_mem_reserve = bo_driver_io_mem_reserve, 795 
+ .io_mem_free = bo_driver_io_mem_free, 796 + }; 797 + 798 + /* 799 + * struct drm_vram_mm 800 + */ 801 + 802 + #if defined(CONFIG_DEBUG_FS) 803 + static int drm_vram_mm_debugfs(struct seq_file *m, void *data) 804 + { 805 + struct drm_info_node *node = (struct drm_info_node *) m->private; 806 + struct drm_vram_mm *vmm = node->minor->dev->vram_mm; 807 + struct drm_mm *mm = vmm->bdev.man[TTM_PL_VRAM].priv; 808 + struct ttm_bo_global *glob = vmm->bdev.glob; 809 + struct drm_printer p = drm_seq_file_printer(m); 810 + 811 + spin_lock(&glob->lru_lock); 812 + drm_mm_print(mm, &p); 813 + spin_unlock(&glob->lru_lock); 814 + return 0; 815 + } 816 + 817 + static const struct drm_info_list drm_vram_mm_debugfs_list[] = { 818 + { "vram-mm", drm_vram_mm_debugfs, 0, NULL }, 819 + }; 820 + #endif 821 + 822 + /** 823 + * drm_vram_mm_debugfs_init() - Register VRAM MM debugfs file. 824 + * 825 + * @minor: drm minor device. 826 + * 827 + * Returns: 828 + * 0 on success, or 829 + * a negative error code otherwise. 
830 + */ 831 + int drm_vram_mm_debugfs_init(struct drm_minor *minor) 832 + { 833 + int ret = 0; 834 + 835 + #if defined(CONFIG_DEBUG_FS) 836 + ret = drm_debugfs_create_files(drm_vram_mm_debugfs_list, 837 + ARRAY_SIZE(drm_vram_mm_debugfs_list), 838 + minor->debugfs_root, minor); 839 + #endif 840 + return ret; 841 + } 842 + EXPORT_SYMBOL(drm_vram_mm_debugfs_init); 843 + 844 + static int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev, 845 + uint64_t vram_base, size_t vram_size) 846 + { 847 + int ret; 848 + 849 + vmm->vram_base = vram_base; 850 + vmm->vram_size = vram_size; 851 + 852 + ret = ttm_bo_device_init(&vmm->bdev, &bo_driver, 853 + dev->anon_inode->i_mapping, 854 + dev->vma_offset_manager, 855 + true); 856 + if (ret) 857 + return ret; 858 + 859 + ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT); 860 + if (ret) 861 + return ret; 862 + 863 + return 0; 864 + } 865 + 866 + static void drm_vram_mm_cleanup(struct drm_vram_mm *vmm) 867 + { 868 + ttm_bo_device_release(&vmm->bdev); 869 + } 870 + 871 + static int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma, 872 + struct drm_vram_mm *vmm) 873 + { 874 + return ttm_bo_mmap(filp, vma, &vmm->bdev); 875 + } 876 + 877 + /* 878 + * Helpers for integration with struct drm_device 879 + */ 880 + 881 + /** 882 + * drm_vram_helper_alloc_mm - Allocates a device's instance of \ 883 + &struct drm_vram_mm 884 + * @dev: the DRM device 885 + * @vram_base: the base address of the video memory 886 + * @vram_size: the size of the video memory in bytes 887 + * 888 + * Returns: 889 + * The new instance of &struct drm_vram_mm on success, or 890 + * an ERR_PTR()-encoded errno code otherwise. 
891 + */ 892 + struct drm_vram_mm *drm_vram_helper_alloc_mm( 893 + struct drm_device *dev, uint64_t vram_base, size_t vram_size) 894 + { 895 + int ret; 896 + 897 + if (WARN_ON(dev->vram_mm)) 898 + return dev->vram_mm; 899 + 900 + dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL); 901 + if (!dev->vram_mm) 902 + return ERR_PTR(-ENOMEM); 903 + 904 + ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size); 905 + if (ret) 906 + goto err_kfree; 907 + 908 + return dev->vram_mm; 909 + 910 + err_kfree: 911 + kfree(dev->vram_mm); 912 + dev->vram_mm = NULL; 913 + return ERR_PTR(ret); 914 + } 915 + EXPORT_SYMBOL(drm_vram_helper_alloc_mm); 916 + 917 + /** 918 + * drm_vram_helper_release_mm - Releases a device's instance of \ 919 + &struct drm_vram_mm 920 + * @dev: the DRM device 921 + */ 922 + void drm_vram_helper_release_mm(struct drm_device *dev) 923 + { 924 + if (!dev->vram_mm) 925 + return; 926 + 927 + drm_vram_mm_cleanup(dev->vram_mm); 928 + kfree(dev->vram_mm); 929 + dev->vram_mm = NULL; 930 + } 931 + EXPORT_SYMBOL(drm_vram_helper_release_mm); 932 + 933 + /* 934 + * Helpers for &struct file_operations 935 + */ 936 + 937 + /** 938 + * drm_vram_mm_file_operations_mmap() - \ 939 + Implements &struct file_operations.mmap() 940 + * @filp: the mapping's file structure 941 + * @vma: the mapping's memory area 942 + * 943 + * Returns: 944 + * 0 on success, or 945 + * a negative error code otherwise. 946 + */ 947 + int drm_vram_mm_file_operations_mmap( 948 + struct file *filp, struct vm_area_struct *vma) 949 + { 950 + struct drm_file *file_priv = filp->private_data; 951 + struct drm_device *dev = file_priv->minor->dev; 952 + 953 + if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized")) 954 + return -EINVAL; 955 + 956 + return drm_vram_mm_mmap(filp, vma, dev->vram_mm); 957 + } 958 + EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
+1
drivers/gpu/drm/drm_memory.c
··· 40 40 #include <xen/xen.h> 41 41 42 42 #include <drm/drm_agpsupport.h> 43 + #include <drm/drm_cache.h> 43 44 #include <drm/drm_device.h> 44 45 45 46 #include "drm_legacy.h"
+4 -5
drivers/gpu/drm/drm_mipi_dbi.c
··· 783 783 int i, ret; 784 784 u8 *dst; 785 785 786 - if (drm_debug & DRM_UT_DRIVER) 786 + if (drm_debug_enabled(DRM_UT_DRIVER)) 787 787 pr_debug("[drm:%s] dc=%d, max_chunk=%zu, transfers:\n", 788 788 __func__, dc, max_chunk); 789 789 ··· 907 907 max_chunk = dbi->tx_buf9_len; 908 908 dst16 = dbi->tx_buf9; 909 909 910 - if (drm_debug & DRM_UT_DRIVER) 910 + if (drm_debug_enabled(DRM_UT_DRIVER)) 911 911 pr_debug("[drm:%s] dc=%d, max_chunk=%zu, transfers:\n", 912 912 __func__, dc, max_chunk); 913 913 ··· 955 955 int ret; 956 956 957 957 if (mipi_dbi_command_is_read(dbi, *cmd)) 958 - return -ENOTSUPP; 958 + return -EOPNOTSUPP; 959 959 960 960 MIPI_DBI_DEBUG_COMMAND(*cmd, parameters, num); 961 961 ··· 1187 1187 struct mipi_dbi_dev *dbidev = m->private; 1188 1188 u8 val, cmd = 0, parameters[64]; 1189 1189 char *buf, *pos, *token; 1190 - unsigned int i; 1191 - int ret, idx; 1190 + int i, ret, idx; 1192 1191 1193 1192 if (!drm_dev_enter(&dbidev->drm, &idx)) 1194 1193 return -ENODEV;
+21 -15
drivers/gpu/drm/drm_mm.c
··· 174 174 175 175 node->__subtree_last = LAST(node); 176 176 177 - if (hole_node->allocated) { 177 + if (drm_mm_node_allocated(hole_node)) { 178 178 rb = &hole_node->rb; 179 179 while (rb) { 180 180 parent = rb_entry(rb, struct drm_mm_node, rb); ··· 424 424 425 425 node->mm = mm; 426 426 427 + __set_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags); 427 428 list_add(&node->node_list, &hole->node_list); 428 429 drm_mm_interval_tree_add_node(hole, node); 429 - node->allocated = true; 430 430 node->hole_size = 0; 431 431 432 432 rm_hole(hole); ··· 543 543 node->color = color; 544 544 node->hole_size = 0; 545 545 546 + __set_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags); 546 547 list_add(&node->node_list, &hole->node_list); 547 548 drm_mm_interval_tree_add_node(hole, node); 548 - node->allocated = true; 549 549 550 550 rm_hole(hole); 551 551 if (adj_start > hole_start) ··· 561 561 } 562 562 EXPORT_SYMBOL(drm_mm_insert_node_in_range); 563 563 564 + static inline bool drm_mm_node_scanned_block(const struct drm_mm_node *node) 565 + { 566 + return test_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags); 567 + } 568 + 564 569 /** 565 570 * drm_mm_remove_node - Remove a memory node from the allocator. 
566 571 * @node: drm_mm_node to remove ··· 579 574 struct drm_mm *mm = node->mm; 580 575 struct drm_mm_node *prev_node; 581 576 582 - DRM_MM_BUG_ON(!node->allocated); 583 - DRM_MM_BUG_ON(node->scanned_block); 577 + DRM_MM_BUG_ON(!drm_mm_node_allocated(node)); 578 + DRM_MM_BUG_ON(drm_mm_node_scanned_block(node)); 584 579 585 580 prev_node = list_prev_entry(node, node_list); 586 581 ··· 589 584 590 585 drm_mm_interval_tree_remove(node, &mm->interval_tree); 591 586 list_del(&node->node_list); 592 - node->allocated = false; 593 587 594 588 if (drm_mm_hole_follows(prev_node)) 595 589 rm_hole(prev_node); 596 590 add_hole(prev_node); 591 + 592 + clear_bit_unlock(DRM_MM_NODE_ALLOCATED_BIT, &node->flags); 597 593 } 598 594 EXPORT_SYMBOL(drm_mm_remove_node); 599 595 ··· 611 605 { 612 606 struct drm_mm *mm = old->mm; 613 607 614 - DRM_MM_BUG_ON(!old->allocated); 608 + DRM_MM_BUG_ON(!drm_mm_node_allocated(old)); 615 609 616 610 *new = *old; 617 611 612 + __set_bit(DRM_MM_NODE_ALLOCATED_BIT, &new->flags); 618 613 list_replace(&old->node_list, &new->node_list); 619 614 rb_replace_node_cached(&old->rb, &new->rb, &mm->interval_tree); 620 615 ··· 629 622 &mm->holes_addr); 630 623 } 631 624 632 - old->allocated = false; 633 - new->allocated = true; 625 + clear_bit_unlock(DRM_MM_NODE_ALLOCATED_BIT, &old->flags); 634 626 } 635 627 EXPORT_SYMBOL(drm_mm_replace_node); 636 628 ··· 737 731 u64 adj_start, adj_end; 738 732 739 733 DRM_MM_BUG_ON(node->mm != mm); 740 - DRM_MM_BUG_ON(!node->allocated); 741 - DRM_MM_BUG_ON(node->scanned_block); 742 - node->scanned_block = true; 734 + DRM_MM_BUG_ON(!drm_mm_node_allocated(node)); 735 + DRM_MM_BUG_ON(drm_mm_node_scanned_block(node)); 736 + __set_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags); 743 737 mm->scan_active++; 744 738 745 739 /* Remove this block from the node_list so that we enlarge the hole ··· 824 818 struct drm_mm_node *prev_node; 825 819 826 820 DRM_MM_BUG_ON(node->mm != scan->mm); 827 - DRM_MM_BUG_ON(!node->scanned_block); 828 - 
node->scanned_block = false; 821 + DRM_MM_BUG_ON(!drm_mm_node_scanned_block(node)); 822 + __clear_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags); 829 823 830 824 DRM_MM_BUG_ON(!node->mm->scan_active); 831 825 node->mm->scan_active--; ··· 923 917 924 918 /* Clever trick to avoid a special case in the free hole tracking. */ 925 919 INIT_LIST_HEAD(&mm->head_node.node_list); 926 - mm->head_node.allocated = false; 920 + mm->head_node.flags = 0; 927 921 mm->head_node.mm = mm; 928 922 mm->head_node.start = start + size; 929 923 mm->head_node.size = -size;
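The drm_mm.c changes above fold the `allocated` and `scanned_block` bools into bits of a single `flags` word, accessed via `test_bit()`/`__set_bit()`/`clear_bit_unlock()`. A minimal userspace sketch of the same packing, using plain bit arithmetic rather than the kernel's atomic bitops (struct and bit names are illustrative):

```c
#include <stdbool.h>

/* drm_mm_node now keeps its bool state as bits in one flags word. */
enum { NODE_ALLOCATED_BIT = 0, NODE_SCANNED_BIT = 1 };

struct mm_node { unsigned long flags; };

static bool node_allocated(const struct mm_node *n)
{
	return n->flags & (1UL << NODE_ALLOCATED_BIT);
}

static void node_set_allocated(struct mm_node *n)
{
	n->flags |= 1UL << NODE_ALLOCATED_BIT;
}

static void node_clear_allocated(struct mm_node *n)
{
	n->flags &= ~(1UL << NODE_ALLOCATED_BIT);
}
```

In the kernel the clear uses `clear_bit_unlock()` so that the release ordering pairs with lock-free readers of the flag; that ordering concern is what motivates the flags-word form over independent bools.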
-5
drivers/gpu/drm/drm_of.c
··· 250 250 if (!remote) 251 251 return -ENODEV; 252 252 253 - if (!of_device_is_available(remote)) { 254 - of_node_put(remote); 255 - return -ENODEV; 256 - } 257 - 258 253 if (panel) { 259 254 *panel = of_drm_find_panel(remote); 260 255 if (!IS_ERR(*panel))
+11 -3
drivers/gpu/drm/drm_panel.c
··· 44 44 /** 45 45 * drm_panel_init - initialize a panel 46 46 * @panel: DRM panel 47 + * @dev: parent device of the panel 48 + * @funcs: panel operations 49 + * @connector_type: the connector type (DRM_MODE_CONNECTOR_*) corresponding to 50 + * the panel interface 47 51 * 48 - * Sets up internal fields of the panel so that it can subsequently be added 49 - * to the registry. 52 + * Initialize the panel structure for subsequent registration with 53 + * drm_panel_add(). 50 54 */ 51 - void drm_panel_init(struct drm_panel *panel) 55 + void drm_panel_init(struct drm_panel *panel, struct device *dev, 56 + const struct drm_panel_funcs *funcs, int connector_type) 52 57 { 53 58 INIT_LIST_HEAD(&panel->list); 59 + panel->dev = dev; 60 + panel->funcs = funcs; 61 + panel->connector_type = connector_type; 54 62 } 55 63 EXPORT_SYMBOL(drm_panel_init); 56 64
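The new `drm_panel_init()` signature above moves the `dev`/`funcs`/`connector_type` wiring out of every panel driver and into the init helper. A stripped-down sketch of that pattern (the struct layout here is illustrative; the real `struct drm_panel` also carries the registry list head):

```c
#include <stddef.h>

struct panel_funcs;	/* opaque ops table, standing in for drm_panel_funcs */

/* Illustrative panel struct with the three fields the new init fills in. */
struct panel {
	void *dev;
	const struct panel_funcs *funcs;
	int connector_type;
};

static void panel_init(struct panel *p, void *dev,
		       const struct panel_funcs *funcs, int connector_type)
{
	p->dev = dev;
	p->funcs = funcs;
	p->connector_type = connector_type;
}
```

Centralizing the assignments means a later field addition only touches the helper, not every caller, which is the point of the series-wide "Use drm_panel_init to init device and funcs" conversion noted in the pull description.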
+58 -2
drivers/gpu/drm/drm_print.c
··· 28 28 #include <stdarg.h> 29 29 30 30 #include <linux/io.h> 31 + #include <linux/moduleparam.h> 31 32 #include <linux/seq_file.h> 32 33 #include <linux/slab.h> 33 34 34 35 #include <drm/drm.h> 35 36 #include <drm/drm_drv.h> 36 37 #include <drm/drm_print.h> 38 + 39 + /* 40 + * drm_debug: Enable debug output. 41 + * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details. 42 + */ 43 + unsigned int drm_debug; 44 + EXPORT_SYMBOL(drm_debug); 45 + 46 + MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n" 47 + "\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n" 48 + "\t\tBit 1 (0x02) will enable DRIVER messages (drm controller code)\n" 49 + "\t\tBit 2 (0x04) will enable KMS messages (modesetting code)\n" 50 + "\t\tBit 3 (0x08) will enable PRIME messages (prime code)\n" 51 + "\t\tBit 4 (0x10) will enable ATOMIC messages (atomic code)\n" 52 + "\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n" 53 + "\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n" 54 + "\t\tBit 8 (0x100) will enable DP messages (displayport code)"); 55 + module_param_named(debug, drm_debug, int, 0600); 37 56 38 57 void __drm_puts_coredump(struct drm_printer *p, const char *str) 39 58 { ··· 166 147 } 167 148 EXPORT_SYMBOL(__drm_printfn_debug); 168 149 150 + void __drm_printfn_err(struct drm_printer *p, struct va_format *vaf) 151 + { 152 + pr_err("*ERROR* %s %pV", p->prefix, vaf); 153 + } 154 + EXPORT_SYMBOL(__drm_printfn_err); 155 + 169 156 /** 170 157 * drm_puts - print a const string to a &drm_printer stream 171 158 * @p: the &drm printer ··· 204 179 } 205 180 EXPORT_SYMBOL(drm_printf); 206 181 182 + /** 183 + * drm_print_bits - print bits to a &drm_printer stream 184 + * 185 + * Print bits (in flag fields for example) in human readable form. 186 + * 187 + * @p: the &drm_printer 188 + * @value: field value. 189 + * @bits: Array with bit names. 190 + * @nbits: Size of bit names array. 
191 + */ 192 + void drm_print_bits(struct drm_printer *p, unsigned long value, 193 + const char * const bits[], unsigned int nbits) 194 + { 195 + bool first = true; 196 + unsigned int i; 197 + 198 + if (WARN_ON_ONCE(nbits > BITS_PER_TYPE(value))) 199 + nbits = BITS_PER_TYPE(value); 200 + 201 + for_each_set_bit(i, &value, nbits) { 202 + if (WARN_ON_ONCE(!bits[i])) 203 + continue; 204 + drm_printf(p, "%s%s", first ? "" : ",", 205 + bits[i]); 206 + first = false; 207 + } 208 + if (first) 209 + drm_printf(p, "(none)"); 210 + } 211 + EXPORT_SYMBOL(drm_print_bits); 212 + 207 213 void drm_dev_printk(const struct device *dev, const char *level, 208 214 const char *format, ...) 209 215 { ··· 262 206 struct va_format vaf; 263 207 va_list args; 264 208 265 - if (!(drm_debug & category)) 209 + if (!drm_debug_enabled(category)) 266 210 return; 267 211 268 212 va_start(args, format); ··· 285 229 struct va_format vaf; 286 230 va_list args; 287 231 288 - if (!(drm_debug & category)) 232 + if (!drm_debug_enabled(category)) 289 233 return; 290 234 291 235 va_start(args, format);
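The new `drm_print_bits()` above walks a bitmask and prints the name of each set bit, comma-separated, falling back to `(none)`. A standalone sketch of the same loop, formatting into a caller-supplied buffer instead of a `drm_printer` (function name and signature are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Render the set bits of @value as comma-separated names, or "(none)". */
static void print_bits(char *buf, size_t len, unsigned long value,
		       const char *const bits[], unsigned int nbits)
{
	size_t off = 0;
	bool first = true;
	unsigned int i;

	buf[0] = '\0';
	for (i = 0; i < nbits; i++) {
		if (!(value & (1UL << i)) || !bits[i])
			continue;
		off += snprintf(buf + off, len - off, "%s%s",
				first ? "" : ",", bits[i]);
		first = false;
	}
	if (first)
		snprintf(buf, len, "(none)");
}
```

The kernel version additionally clamps `nbits` to `BITS_PER_TYPE(value)` and warns on a missing name rather than silently skipping, which matters when the bit-name array is shorter than the flag field.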
+2 -2
drivers/gpu/drm/drm_probe_helper.c
··· 32 32 #include <linux/export.h> 33 33 #include <linux/moduleparam.h> 34 34 35 + #include <drm/drm_bridge.h> 35 36 #include <drm/drm_client.h> 36 37 #include <drm/drm_crtc.h> 37 38 #include <drm/drm_edid.h> ··· 93 92 struct drm_device *dev = connector->dev; 94 93 enum drm_mode_status ret = MODE_OK; 95 94 struct drm_encoder *encoder; 96 - int i; 97 95 98 96 /* Step 1: Validate against connector */ 99 97 ret = drm_connector_mode_valid(connector, mode); ··· 100 100 return ret; 101 101 102 102 /* Step 2: Validate against encoders and crtcs */ 103 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 103 + drm_connector_for_each_possible_encoder(connector, encoder) { 104 104 struct drm_crtc *crtc; 105 105 106 106 ret = drm_encoder_mode_valid(encoder, mode);
+1
drivers/gpu/drm/drm_simple_kms_helper.c
··· 8 8 9 9 #include <drm/drm_atomic.h> 10 10 #include <drm/drm_atomic_helper.h> 11 + #include <drm/drm_bridge.h> 11 12 #include <drm/drm_plane_helper.h> 12 13 #include <drm/drm_probe_helper.h> 13 14 #include <drm/drm_simple_kms_helper.h>
+1
drivers/gpu/drm/drm_syncobj.c
··· 135 135 #include <drm/drm_gem.h> 136 136 #include <drm/drm_print.h> 137 137 #include <drm/drm_syncobj.h> 138 + #include <drm/drm_utils.h> 138 139 139 140 #include "drm_internal.h" 140 141
+10 -4
drivers/gpu/drm/drm_trace.h
··· 13 13 #define TRACE_INCLUDE_FILE drm_trace 14 14 15 15 TRACE_EVENT(drm_vblank_event, 16 - TP_PROTO(int crtc, unsigned int seq), 17 - TP_ARGS(crtc, seq), 16 + TP_PROTO(int crtc, unsigned int seq, ktime_t time, bool high_prec), 17 + TP_ARGS(crtc, seq, time, high_prec), 18 18 TP_STRUCT__entry( 19 19 __field(int, crtc) 20 20 __field(unsigned int, seq) 21 + __field(ktime_t, time) 22 + __field(bool, high_prec) 21 23 ), 22 24 TP_fast_assign( 23 25 __entry->crtc = crtc; 24 26 __entry->seq = seq; 25 - ), 26 - TP_printk("crtc=%d, seq=%u", __entry->crtc, __entry->seq) 27 + __entry->time = time; 28 + __entry->high_prec = high_prec; 29 + ), 30 + TP_printk("crtc=%d, seq=%u, time=%lld, high-prec=%s", 31 + __entry->crtc, __entry->seq, __entry->time, 32 + __entry->high_prec ? "true" : "false") 27 33 ); 28 34 29 35 TRACE_EVENT(drm_vblank_event_queued,
+46 -8
drivers/gpu/drm/drm_vblank.c
··· 106 106 107 107 write_seqlock(&vblank->seqlock); 108 108 vblank->time = t_vblank; 109 - vblank->count += vblank_count_inc; 109 + atomic64_add(vblank_count_inc, &vblank->count); 110 110 write_sequnlock(&vblank->seqlock); 111 111 } 112 112 ··· 272 272 273 273 DRM_DEBUG_VBL("updating vblank count on crtc %u:" 274 274 " current=%llu, diff=%u, hw=%u hw_last=%u\n", 275 - pipe, vblank->count, diff, cur_vblank, vblank->last); 275 + pipe, atomic64_read(&vblank->count), diff, 276 + cur_vblank, vblank->last); 276 277 277 278 if (diff == 0) { 278 279 WARN_ON_ONCE(cur_vblank != vblank->last); ··· 295 294 static u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe) 296 295 { 297 296 struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 297 + u64 count; 298 298 299 299 if (WARN_ON(pipe >= dev->num_crtcs)) 300 300 return 0; 301 301 302 - return vblank->count; 302 + count = atomic64_read(&vblank->count); 303 + 304 + /* 305 + * This read barrier corresponds to the implicit write barrier of the 306 + * write seqlock in store_vblank(). Note that this is the only place 307 + * where we need an explicit barrier, since all other access goes 308 + * through drm_vblank_count_and_time(), which already has the required 309 + * read barrier courtesy of the read seqlock. 
310 + */ 311 + smp_rmb(); 312 + 313 + return count; 303 314 } 304 315 305 316 /** ··· 332 319 u64 vblank; 333 320 unsigned long flags; 334 321 335 - WARN_ONCE(drm_debug & DRM_UT_VBL && !dev->driver->get_vblank_timestamp, 322 + WARN_ONCE(drm_debug_enabled(DRM_UT_VBL) && !dev->driver->get_vblank_timestamp, 336 323 "This function requires support for accurate vblank timestamps."); 337 324 338 325 spin_lock_irqsave(&dev->vblank_time_lock, flags); ··· 706 693 */ 707 694 *vblank_time = ktime_sub_ns(etime, delta_ns); 708 695 709 - if ((drm_debug & DRM_UT_VBL) == 0) 696 + if (!drm_debug_enabled(DRM_UT_VBL)) 710 697 return true; 711 698 712 699 ts_etime = ktime_to_timespec64(etime); ··· 776 763 * vblank interrupt (since it only reports the software vblank counter), see 777 764 * drm_crtc_accurate_vblank_count() for such use-cases. 778 765 * 766 + * Note that for a given vblank counter value drm_crtc_handle_vblank() 767 + * and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time() 768 + * provide a barrier: Any writes done before calling 769 + * drm_crtc_handle_vblank() will be visible to callers of the later 770 + * functions, iff the vblank count is the same or a later one. 771 + * 772 + * See also &drm_vblank_crtc.count. 773 + * 779 774 * Returns: 780 775 * The software vblank counter. 781 776 */ ··· 821 800 822 801 do { 823 802 seq = read_seqbegin(&vblank->seqlock); 824 - vblank_count = vblank->count; 803 + vblank_count = atomic64_read(&vblank->count); 825 804 *vblanktime = vblank->time; 826 805 } while (read_seqretry(&vblank->seqlock, seq)); 827 806 ··· 838 817 * vblank events since the system was booted, including lost events due to 839 818 * modesetting activity. Returns corresponding system timestamp of the time 840 819 * of the vblank interval that corresponds to the current vblank counter value. 
820 + * 821 + * Note that for a given vblank counter value drm_crtc_handle_vblank() 822 + * and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time() 823 + * provide a barrier: Any writes done before calling 824 + * drm_crtc_handle_vblank() will be visible to callers of the later 825 + * functions, iff the vblank count is the same or a later one. 826 + * 827 + * See also &drm_vblank_crtc.count. 841 828 */ 842 829 u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc, 843 830 ktime_t *vblanktime) ··· 1352 1323 assert_spin_locked(&dev->vblank_time_lock); 1353 1324 1354 1325 vblank = &dev->vblank[pipe]; 1355 - WARN_ONCE((drm_debug & DRM_UT_VBL) && !vblank->framedur_ns, 1326 + WARN_ONCE(drm_debug_enabled(DRM_UT_VBL) && !vblank->framedur_ns, 1356 1327 "Cannot compute missed vblanks without frame duration\n"); 1357 1328 framedur_ns = vblank->framedur_ns; 1358 1329 ··· 1760 1731 send_vblank_event(dev, e, seq, now); 1761 1732 } 1762 1733 1763 - trace_drm_vblank_event(pipe, seq); 1734 + trace_drm_vblank_event(pipe, seq, now, 1735 + dev->driver->get_vblank_timestamp != NULL); 1764 1736 } 1765 1737 1766 1738 /** ··· 1835 1805 * update the vblank counter and send any signals that may be pending. 1836 1806 * 1837 1807 * This is the native KMS version of drm_handle_vblank(). 1808 + * 1809 + * Note that for a given vblank counter value drm_crtc_handle_vblank() 1810 + * and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time() 1811 + * provide a barrier: Any writes done before calling 1812 + * drm_crtc_handle_vblank() will be visible to callers of the later 1813 + * functions, iff the vblank count is the same or a later one. 1814 + * 1815 + * See also &drm_vblank_crtc.count. 1838 1816 * 1839 1817 * Returns: 1840 1818 * True if the event was successfully handled, false on failure.
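The drm_vblank.c conversion above pairs an atomic64 counter with the existing seqlock so `drm_vblank_count_and_time()` returns a consistent {count, time} pair. A single-threaded toy of the seqlock read-retry loop (this sketch has none of the memory ordering that the kernel's `read_seqbegin()`/`read_seqretry()` provide; it only shows the retry shape):

```c
/* Toy seqlock: a writer would bump seq to odd before and even after. */
struct vbl {
	unsigned int seq;
	unsigned long long count;
	long long time_ns;
};

static unsigned int read_begin(const struct vbl *v)
{
	return v->seq & ~1u;	/* an in-flight (odd) write forces a retry */
}

static int read_retry(const struct vbl *v, unsigned int start)
{
	return v->seq != start;
}

/* Read a consistent {count, time} snapshot, retrying on writer overlap. */
static unsigned long long read_count(const struct vbl *v, long long *time_ns)
{
	unsigned int s;
	unsigned long long c;

	do {
		s = read_begin(v);
		c = v->count;
		*time_ns = v->time_ns;
	} while (read_retry(v, s));
	return c;
}
```

The plain `drm_vblank_count()` path bypasses the seqlock, which is exactly why the diff adds the explicit `smp_rmb()` there, paired with the seqlock's write barrier in `store_vblank()`.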
+3 -5
drivers/gpu/drm/drm_vram_helper_common.c
··· 7 7 * 8 8 * This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM 9 9 * buffer object that is backed by video RAM. It can be used for 10 - * framebuffer devices with dedicated memory. The video RAM can be 11 - * managed with &struct drm_vram_mm (VRAM MM). Both data structures are 12 - * supposed to be used together, but can also be used individually. 10 + * framebuffer devices with dedicated memory. The video RAM is managed 11 + * by &struct drm_vram_mm (VRAM MM). 13 12 * 14 13 * With the GEM interface userspace applications create, manage and destroy 15 14 * graphics buffers, such as an on-screen framebuffer. GEM does not provide ··· 49 50 * // setup device, vram base and size 50 51 * // ... 51 52 * 52 - * ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size, 53 - * &drm_gem_vram_mm_funcs); 53 + * ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size); 54 54 * if (ret) 55 55 * return ret; 56 56 * return 0;
-297
drivers/gpu/drm/drm_vram_mm_helper.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - 3 - #include <drm/drm_device.h> 4 - #include <drm/drm_file.h> 5 - #include <drm/drm_vram_mm_helper.h> 6 - 7 - #include <drm/ttm/ttm_page_alloc.h> 8 - 9 - /** 10 - * DOC: overview 11 - * 12 - * The data structure &struct drm_vram_mm and its helpers implement a memory 13 - * manager for simple framebuffer devices with dedicated video memory. Buffer 14 - * objects are either placed in video RAM or evicted to system memory. These 15 - * helper functions work well with &struct drm_gem_vram_object. 16 - */ 17 - 18 - /* 19 - * TTM TT 20 - */ 21 - 22 - static void backend_func_destroy(struct ttm_tt *tt) 23 - { 24 - ttm_tt_fini(tt); 25 - kfree(tt); 26 - } 27 - 28 - static struct ttm_backend_func backend_func = { 29 - .destroy = backend_func_destroy 30 - }; 31 - 32 - /* 33 - * TTM BO device 34 - */ 35 - 36 - static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo, 37 - uint32_t page_flags) 38 - { 39 - struct ttm_tt *tt; 40 - int ret; 41 - 42 - tt = kzalloc(sizeof(*tt), GFP_KERNEL); 43 - if (!tt) 44 - return NULL; 45 - 46 - tt->func = &backend_func; 47 - 48 - ret = ttm_tt_init(tt, bo, page_flags); 49 - if (ret < 0) 50 - goto err_ttm_tt_init; 51 - 52 - return tt; 53 - 54 - err_ttm_tt_init: 55 - kfree(tt); 56 - return NULL; 57 - } 58 - 59 - static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 60 - struct ttm_mem_type_manager *man) 61 - { 62 - switch (type) { 63 - case TTM_PL_SYSTEM: 64 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 65 - man->available_caching = TTM_PL_MASK_CACHING; 66 - man->default_caching = TTM_PL_FLAG_CACHED; 67 - break; 68 - case TTM_PL_VRAM: 69 - man->func = &ttm_bo_manager_func; 70 - man->flags = TTM_MEMTYPE_FLAG_FIXED | 71 - TTM_MEMTYPE_FLAG_MAPPABLE; 72 - man->available_caching = TTM_PL_FLAG_UNCACHED | 73 - TTM_PL_FLAG_WC; 74 - man->default_caching = TTM_PL_FLAG_WC; 75 - break; 76 - default: 77 - return -EINVAL; 78 - } 79 - return 0; 80 - } 81 - 82 - static 
void bo_driver_evict_flags(struct ttm_buffer_object *bo, 83 - struct ttm_placement *placement) 84 - { 85 - struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev); 86 - 87 - if (vmm->funcs && vmm->funcs->evict_flags) 88 - vmm->funcs->evict_flags(bo, placement); 89 - } 90 - 91 - static int bo_driver_verify_access(struct ttm_buffer_object *bo, 92 - struct file *filp) 93 - { 94 - struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev); 95 - 96 - if (!vmm->funcs || !vmm->funcs->verify_access) 97 - return 0; 98 - return vmm->funcs->verify_access(bo, filp); 99 - } 100 - 101 - static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev, 102 - struct ttm_mem_reg *mem) 103 - { 104 - struct ttm_mem_type_manager *man = bdev->man + mem->mem_type; 105 - struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev); 106 - 107 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 108 - return -EINVAL; 109 - 110 - mem->bus.addr = NULL; 111 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 112 - 113 - switch (mem->mem_type) { 114 - case TTM_PL_SYSTEM: /* nothing to do */ 115 - mem->bus.offset = 0; 116 - mem->bus.base = 0; 117 - mem->bus.is_iomem = false; 118 - break; 119 - case TTM_PL_VRAM: 120 - mem->bus.offset = mem->start << PAGE_SHIFT; 121 - mem->bus.base = vmm->vram_base; 122 - mem->bus.is_iomem = true; 123 - break; 124 - default: 125 - return -EINVAL; 126 - } 127 - 128 - return 0; 129 - } 130 - 131 - static void bo_driver_io_mem_free(struct ttm_bo_device *bdev, 132 - struct ttm_mem_reg *mem) 133 - { } 134 - 135 - static struct ttm_bo_driver bo_driver = { 136 - .ttm_tt_create = bo_driver_ttm_tt_create, 137 - .ttm_tt_populate = ttm_pool_populate, 138 - .ttm_tt_unpopulate = ttm_pool_unpopulate, 139 - .init_mem_type = bo_driver_init_mem_type, 140 - .eviction_valuable = ttm_bo_eviction_valuable, 141 - .evict_flags = bo_driver_evict_flags, 142 - .verify_access = bo_driver_verify_access, 143 - .io_mem_reserve = bo_driver_io_mem_reserve, 144 - .io_mem_free = bo_driver_io_mem_free, 145 - }; 146 - 
147 - /* 148 - * struct drm_vram_mm 149 - */ 150 - 151 - /** 152 - * drm_vram_mm_init() - Initialize an instance of VRAM MM. 153 - * @vmm: the VRAM MM instance to initialize 154 - * @dev: the DRM device 155 - * @vram_base: the base address of the video memory 156 - * @vram_size: the size of the video memory in bytes 157 - * @funcs: callback functions for buffer objects 158 - * 159 - * Returns: 160 - * 0 on success, or 161 - * a negative error code otherwise. 162 - */ 163 - int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev, 164 - uint64_t vram_base, size_t vram_size, 165 - const struct drm_vram_mm_funcs *funcs) 166 - { 167 - int ret; 168 - 169 - vmm->vram_base = vram_base; 170 - vmm->vram_size = vram_size; 171 - vmm->funcs = funcs; 172 - 173 - ret = ttm_bo_device_init(&vmm->bdev, &bo_driver, 174 - dev->anon_inode->i_mapping, 175 - true); 176 - if (ret) 177 - return ret; 178 - 179 - ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT); 180 - if (ret) 181 - return ret; 182 - 183 - return 0; 184 - } 185 - EXPORT_SYMBOL(drm_vram_mm_init); 186 - 187 - /** 188 - * drm_vram_mm_cleanup() - Cleans up an initialized instance of VRAM MM. 189 - * @vmm: the VRAM MM instance to clean up 190 - */ 191 - void drm_vram_mm_cleanup(struct drm_vram_mm *vmm) 192 - { 193 - ttm_bo_device_release(&vmm->bdev); 194 - } 195 - EXPORT_SYMBOL(drm_vram_mm_cleanup); 196 - 197 - /** 198 - * drm_vram_mm_mmap() - Helper for implementing &struct file_operations.mmap() 199 - * @filp: the mapping's file structure 200 - * @vma: the mapping's memory area 201 - * @vmm: the VRAM MM instance 202 - * 203 - * Returns: 204 - * 0 on success, or 205 - * a negative error code otherwise. 
206 - */ 207 - int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma, 208 - struct drm_vram_mm *vmm) 209 - { 210 - return ttm_bo_mmap(filp, vma, &vmm->bdev); 211 - } 212 - EXPORT_SYMBOL(drm_vram_mm_mmap); 213 - 214 - /* 215 - * Helpers for integration with struct drm_device 216 - */ 217 - 218 - /** 219 - * drm_vram_helper_alloc_mm - Allocates a device's instance of \ 220 - &struct drm_vram_mm 221 - * @dev: the DRM device 222 - * @vram_base: the base address of the video memory 223 - * @vram_size: the size of the video memory in bytes 224 - * @funcs: callback functions for buffer objects 225 - * 226 - * Returns: 227 - * The new instance of &struct drm_vram_mm on success, or 228 - * an ERR_PTR()-encoded errno code otherwise. 229 - */ 230 - struct drm_vram_mm *drm_vram_helper_alloc_mm( 231 - struct drm_device *dev, uint64_t vram_base, size_t vram_size, 232 - const struct drm_vram_mm_funcs *funcs) 233 - { 234 - int ret; 235 - 236 - if (WARN_ON(dev->vram_mm)) 237 - return dev->vram_mm; 238 - 239 - dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL); 240 - if (!dev->vram_mm) 241 - return ERR_PTR(-ENOMEM); 242 - 243 - ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size, funcs); 244 - if (ret) 245 - goto err_kfree; 246 - 247 - return dev->vram_mm; 248 - 249 - err_kfree: 250 - kfree(dev->vram_mm); 251 - dev->vram_mm = NULL; 252 - return ERR_PTR(ret); 253 - } 254 - EXPORT_SYMBOL(drm_vram_helper_alloc_mm); 255 - 256 - /** 257 - * drm_vram_helper_release_mm - Releases a device's instance of \ 258 - &struct drm_vram_mm 259 - * @dev: the DRM device 260 - */ 261 - void drm_vram_helper_release_mm(struct drm_device *dev) 262 - { 263 - if (!dev->vram_mm) 264 - return; 265 - 266 - drm_vram_mm_cleanup(dev->vram_mm); 267 - kfree(dev->vram_mm); 268 - dev->vram_mm = NULL; 269 - } 270 - EXPORT_SYMBOL(drm_vram_helper_release_mm); 271 - 272 - /* 273 - * Helpers for &struct file_operations 274 - */ 275 - 276 - /** 277 - * drm_vram_mm_file_operations_mmap() - \ 
278 - Implements &struct file_operations.mmap() 279 - * @filp: the mapping's file structure 280 - * @vma: the mapping's memory area 281 - * 282 - * Returns: 283 - * 0 on success, or 284 - * a negative error code otherwise. 285 - */ 286 - int drm_vram_mm_file_operations_mmap( 287 - struct file *filp, struct vm_area_struct *vma) 288 - { 289 - struct drm_file *file_priv = filp->private_data; 290 - struct drm_device *dev = file_priv->minor->dev; 291 - 292 - if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized")) 293 - return -EINVAL; 294 - 295 - return drm_vram_mm_mmap(filp, vma, dev->vram_mm); 296 - } 297 - EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
+4 -4
drivers/gpu/drm/etnaviv/etnaviv_buffer.c
··· 326 326 327 327 lockdep_assert_held(&gpu->lock); 328 328 329 - if (drm_debug & DRM_UT_DRIVER) 329 + if (drm_debug_enabled(DRM_UT_DRIVER)) 330 330 etnaviv_buffer_dump(gpu, buffer, 0, 0x50); 331 331 332 332 link_target = etnaviv_cmdbuf_get_va(cmdbuf, ··· 459 459 etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping) 460 460 + buffer->user_size - 4); 461 461 462 - if (drm_debug & DRM_UT_DRIVER) 462 + if (drm_debug_enabled(DRM_UT_DRIVER)) 463 463 pr_info("stream link to 0x%08x @ 0x%08x %p\n", 464 464 return_target, 465 465 etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu_context->cmdbuf_mapping), 466 466 cmdbuf->vaddr); 467 467 468 - if (drm_debug & DRM_UT_DRIVER) { 468 + if (drm_debug_enabled(DRM_UT_DRIVER)) { 469 469 print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, 470 470 cmdbuf->vaddr, cmdbuf->size, 0); 471 471 ··· 484 484 VIV_FE_LINK_HEADER_PREFETCH(link_dwords), 485 485 link_target); 486 486 487 - if (drm_debug & DRM_UT_DRIVER) 487 + if (drm_debug_enabled(DRM_UT_DRIVER)) 488 488 etnaviv_buffer_dump(gpu, buffer, 0, 0x50); 489 489 }
+1
drivers/gpu/drm/exynos/exynos_dp.c
··· 19 19 20 20 #include <drm/bridge/analogix_dp.h> 21 21 #include <drm/drm_atomic_helper.h> 22 + #include <drm/drm_bridge.h> 22 23 #include <drm/drm_crtc.h> 23 24 #include <drm/drm_of.h> 24 25 #include <drm/drm_panel.h>
+1
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 24 24 #include <video/videomode.h> 25 25 26 26 #include <drm/drm_atomic_helper.h> 27 + #include <drm/drm_bridge.h> 27 28 #include <drm/drm_fb_helper.h> 28 29 #include <drm/drm_mipi_dsi.h> 29 30 #include <drm/drm_panel.h>
+1
drivers/gpu/drm/exynos/exynos_drm_mic.c
··· 21 21 #include <video/of_videomode.h> 22 22 #include <video/videomode.h> 23 23 24 + #include <drm/drm_bridge.h> 24 25 #include <drm/drm_encoder.h> 25 26 #include <drm/drm_print.h> 26 27
+19 -13
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 34 34 #include <media/cec-notifier.h> 35 35 36 36 #include <drm/drm_atomic_helper.h> 37 + #include <drm/drm_bridge.h> 37 38 #include <drm/drm_edid.h> 38 39 #include <drm/drm_print.h> 39 40 #include <drm/drm_probe_helper.h> ··· 853 852 854 853 static void hdmi_connector_destroy(struct drm_connector *connector) 855 854 { 855 + struct hdmi_context *hdata = connector_to_hdmi(connector); 856 + 857 + cec_notifier_conn_unregister(hdata->notifier); 858 + 856 859 drm_connector_unregister(connector); 857 860 drm_connector_cleanup(connector); 858 861 } ··· 940 935 { 941 936 struct hdmi_context *hdata = encoder_to_hdmi(encoder); 942 937 struct drm_connector *connector = &hdata->connector; 938 + struct cec_connector_info conn_info; 943 939 int ret; 944 940 945 941 connector->interlace_allowed = true; ··· 961 955 ret = drm_bridge_attach(encoder, hdata->bridge, NULL); 962 956 if (ret) 963 957 DRM_DEV_ERROR(hdata->dev, "Failed to attach bridge\n"); 958 + } 959 + 960 + cec_fill_conn_info_from_drm(&conn_info, connector); 961 + 962 + hdata->notifier = cec_notifier_conn_register(hdata->dev, NULL, 963 + &conn_info); 964 + if (!hdata->notifier) { 965 + ret = -ENOMEM; 966 + DRM_DEV_ERROR(hdata->dev, "Failed to allocate CEC notifier\n"); 964 967 } 965 968 966 969 return ret; ··· 1543 1528 */ 1544 1529 mutex_unlock(&hdata->mutex); 1545 1530 cancel_delayed_work(&hdata->hotplug_work); 1546 - cec_notifier_set_phys_addr(hdata->notifier, 1547 - CEC_PHYS_ADDR_INVALID); 1531 + if (hdata->notifier) 1532 + cec_notifier_phys_addr_invalidate(hdata->notifier); 1548 1533 return; 1549 1534 } 1550 1535 ··· 2021 2006 } 2022 2007 } 2023 2008 2024 - hdata->notifier = cec_notifier_get(&pdev->dev); 2025 - if (hdata->notifier == NULL) { 2026 - ret = -ENOMEM; 2027 - goto err_hdmiphy; 2028 - } 2029 - 2030 2009 pm_runtime_enable(dev); 2031 2010 2032 2011 audio_infoframe = &hdata->audio.infoframe; ··· 2032 2023 2033 2024 ret = hdmi_register_audio_device(hdata); 2034 2025 if (ret) 2035 - goto err_notifier_put; 
2026 + goto err_rpm_disable; 2036 2027 2037 2028 ret = component_add(&pdev->dev, &hdmi_component_ops); 2038 2029 if (ret) ··· 2043 2034 err_unregister_audio: 2044 2035 platform_device_unregister(hdata->audio.pdev); 2045 2036 2046 - err_notifier_put: 2047 - cec_notifier_put(hdata->notifier); 2037 + err_rpm_disable: 2048 2038 pm_runtime_disable(dev); 2049 2039 2050 2040 err_hdmiphy: ··· 2062 2054 struct hdmi_context *hdata = platform_get_drvdata(pdev); 2063 2055 2064 2056 cancel_delayed_work_sync(&hdata->hotplug_work); 2065 - cec_notifier_set_phys_addr(hdata->notifier, CEC_PHYS_ADDR_INVALID); 2066 2057 2067 2058 component_del(&pdev->dev, &hdmi_component_ops); 2068 2059 platform_device_unregister(hdata->audio.pdev); 2069 2060 2070 - cec_notifier_put(hdata->notifier); 2071 2061 pm_runtime_disable(&pdev->dev); 2072 2062 2073 2063 if (!IS_ERR(hdata->reg_hdmi_en))
+1
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
··· 9 9 #include <linux/of_graph.h> 10 10 11 11 #include <drm/drm_atomic_helper.h> 12 + #include <drm/drm_bridge.h> 12 13 #include <drm/drm_of.h> 13 14 #include <drm/drm_panel.h> 14 15 #include <drm/drm_probe_helper.h>
+2 -1
drivers/gpu/drm/hisilicon/hibmc/Kconfig
··· 4 4 depends on DRM && PCI && MMU && ARM64 5 5 select DRM_KMS_HELPER 6 6 select DRM_VRAM_HELPER 7 - 7 + select DRM_TTM 8 + select DRM_TTM_HELPER 8 9 help 9 10 Choose this option if you have a Hisilicon Hibmc soc chipset. 10 11 If M is selected the module will be called hibmc-drm.
-1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 22 22 #include <drm/drm_print.h> 23 23 #include <drm/drm_probe_helper.h> 24 24 #include <drm/drm_vblank.h> 25 - #include <drm/drm_vram_mm_helper.h> 26 25 27 26 #include "hibmc_drm_drv.h" 28 27 #include "hibmc_drm_regs.h"
+1 -2
drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
··· 17 17 #include <drm/drm_gem.h> 18 18 #include <drm/drm_gem_vram_helper.h> 19 19 #include <drm/drm_print.h> 20 - #include <drm/drm_vram_mm_helper.h> 21 20 22 21 #include "hibmc_drm_drv.h" 23 22 ··· 28 29 29 30 vmm = drm_vram_helper_alloc_mm(dev, 30 31 pci_resource_start(dev->pdev, 0), 31 - hibmc->fb_size, &drm_gem_vram_mm_funcs); 32 + hibmc->fb_size); 32 33 if (IS_ERR(vmm)) { 33 34 ret = PTR_ERR(vmm); 34 35 DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
+1
drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
··· 18 18 #include <linux/platform_device.h> 19 19 20 20 #include <drm/drm_atomic_helper.h> 21 + #include <drm/drm_bridge.h> 21 22 #include <drm/drm_device.h> 22 23 #include <drm/drm_encoder_slave.h> 23 24 #include <drm/drm_mipi_dsi.h>
+1 -1
drivers/gpu/drm/i2c/sil164_drv.c
··· 44 44 ((struct sil164_priv *)to_encoder_slave(x)->slave_priv) 45 45 46 46 #define sil164_dbg(client, format, ...) do { \ 47 - if (drm_debug & DRM_UT_KMS) \ 47 + if (drm_debug_enabled(DRM_UT_KMS)) \ 48 48 dev_printk(KERN_DEBUG, &client->dev, \ 49 49 "%s: " format, __func__, ## __VA_ARGS__); \ 50 50 } while (0)
+6 -6
drivers/gpu/drm/i2c/tda9950.c
··· 420 420 priv->hdmi = glue->parent; 421 421 422 422 priv->adap = cec_allocate_adapter(&tda9950_cec_ops, priv, "tda9950", 423 - CEC_CAP_DEFAULTS, 423 + CEC_CAP_DEFAULTS | 424 + CEC_CAP_CONNECTOR_INFO, 424 425 CEC_MAX_LOG_ADDRS); 425 426 if (IS_ERR(priv->adap)) 426 427 return PTR_ERR(priv->adap); ··· 458 457 if (ret < 0) 459 458 return ret; 460 459 461 - priv->notify = cec_notifier_get(priv->hdmi); 460 + priv->notify = cec_notifier_cec_adap_register(priv->hdmi, NULL, 461 + priv->adap); 462 462 if (!priv->notify) 463 463 return -ENOMEM; 464 464 465 465 ret = cec_register_adapter(priv->adap, priv->hdmi); 466 466 if (ret < 0) { 467 - cec_notifier_put(priv->notify); 467 + cec_notifier_cec_adap_unregister(priv->notify, priv->adap); 468 468 return ret; 469 469 } 470 470 ··· 475 473 */ 476 474 devm_remove_action(dev, tda9950_cec_del, priv); 477 475 478 - cec_register_cec_notifier(priv->adap, priv->notify); 479 - 480 476 return 0; 481 477 } 482 478 ··· 482 482 { 483 483 struct tda9950_priv *priv = i2c_get_clientdata(client); 484 484 485 + cec_notifier_cec_adap_unregister(priv->notify, priv->adap); 485 486 cec_unregister_adapter(priv->adap); 486 - cec_notifier_put(priv->notify); 487 487 488 488 return 0; 489 489 }
+1
drivers/gpu/drm/i2c/tda998x_drv.c
···
 #include <sound/hdmi-codec.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_of.h>
 #include <drm/drm_print.h>
+2 -2
drivers/gpu/drm/i810/i810_dma.c
···
	if (nbox > I810_NR_SAREA_CLIPRECTS)
		nbox = I810_NR_SAREA_CLIPRECTS;
 
-	if (used > 4 * 1024)
+	if (used < 0 || used > 4 * 1024)
		used = 0;
 
	if (sarea_priv->dirty)
···
	if (u != I810_BUF_CLIENT)
		DRM_DEBUG("MC found buffer that isn't mine!\n");
 
-	if (used > 4 * 1024)
+	if (used < 0 || used > 4 * 1024)
		used = 0;
 
	sarea_priv->dirty = 0x7f;
+1 -1
drivers/gpu/drm/i915/display/intel_connector.c
···
 void
 intel_attach_colorspace_property(struct drm_connector *connector)
 {
-	if (!drm_mode_create_colorspace_property(connector))
+	if (!drm_mode_create_hdmi_colorspace_property(connector))
		drm_object_attach_property(&connector->base,
					   connector->colorspace_property, 0);
 }
+1 -3
drivers/gpu/drm/i915/display/intel_dp.c
···
 intel_dp_connector_register(struct drm_connector *connector)
 {
	struct intel_dp *intel_dp = intel_attached_dp(connector);
-	struct drm_device *dev = connector->dev;
	int ret;
 
	ret = intel_connector_register(connector);
···
	intel_dp->aux.dev = connector->kdev;
	ret = drm_dp_aux_register(&intel_dp->aux);
	if (!ret)
-		drm_dp_cec_register_connector(&intel_dp->aux,
-					      connector->name, dev->dev);
+		drm_dp_cec_register_connector(&intel_dp->aux, connector);
	return ret;
 }
+9 -4
drivers/gpu/drm/i915/display/intel_hdmi.c
···
 
 static void intel_hdmi_destroy(struct drm_connector *connector)
 {
-	if (intel_attached_hdmi(connector)->cec_notifier)
-		cec_notifier_put(intel_attached_hdmi(connector)->cec_notifier);
+	struct cec_notifier *n = intel_attached_hdmi(connector)->cec_notifier;
+
+	cec_notifier_conn_unregister(n);
 
	intel_connector_destroy(connector);
 }
···
	struct drm_device *dev = intel_encoder->base.dev;
	struct drm_i915_private *dev_priv = to_i915(dev);
	enum port port = intel_encoder->port;
+	struct cec_connector_info conn_info;
 
	DRM_DEBUG_KMS("Adding HDMI connector on [ENCODER:%d:%s]\n",
		      intel_encoder->base.base.id, intel_encoder->base.name);
···
		I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
	}
 
-	intel_hdmi->cec_notifier = cec_notifier_get_conn(dev->dev,
-							 port_identifier(port));
+	cec_fill_conn_info_from_drm(&conn_info, connector);
+
+	intel_hdmi->cec_notifier =
+		cec_notifier_conn_register(dev->dev, port_identifier(port),
+					   &conn_info);
	if (!intel_hdmi->cec_notifier)
		DRM_DEBUG_KMS("CEC notifier get failed\n");
 }
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···
	cache->use_64bit_reloc = HAS_64BIT_RELOC(i915);
	cache->has_fence = cache->gen < 4;
	cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
-	cache->node.allocated = false;
+	cache->node.flags = 0;
	cache->ce = NULL;
	cache->rq = NULL;
	cache->rq_size = 0;
+2 -39
drivers/gpu/drm/i915/i915_drv.c
···
		return ret;
 }
 
-static int i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
-{
-	struct apertures_struct *ap;
-	struct pci_dev *pdev = dev_priv->drm.pdev;
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	bool primary;
-	int ret;
-
-	ap = alloc_apertures(1);
-	if (!ap)
-		return -ENOMEM;
-
-	ap->ranges[0].base = ggtt->gmadr.start;
-	ap->ranges[0].size = ggtt->mappable_end;
-
-	primary =
-		pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-
-	ret = drm_fb_helper_remove_conflicting_framebuffers(ap, "inteldrmfb", primary);
-
-	kfree(ap);
-
-	return ret;
-}
-
 static void i915_driver_modeset_remove(struct drm_i915_private *i915)
 {
	intel_modeset_driver_remove(i915);
···
	if (ret)
		goto err_perf;
 
-	/*
-	 * WARNING: Apparently we must kick fbdev drivers before vgacon,
-	 * otherwise the vga fbdev driver falls over.
-	 */
-	ret = i915_kick_out_firmware_fb(dev_priv);
-	if (ret) {
-		DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
+	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "inteldrmfb");
+	if (ret)
		goto err_ggtt;
-	}
-
-	ret = vga_remove_vgacon(pdev);
-	if (ret) {
-		DRM_ERROR("failed to remove conflicting VGA console\n");
-		goto err_ggtt;
-	}
 
	ret = i915_ggtt_init_hw(dev_priv);
	if (ret)
+2 -2
drivers/gpu/drm/i915/i915_gem.c
···
				       PIN_NOEVICT);
	if (!IS_ERR(vma)) {
		node.start = i915_ggtt_offset(vma);
-		node.allocated = false;
+		node.flags = 0;
	} else {
		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
		if (ret)
···
				       PIN_NOEVICT);
	if (!IS_ERR(vma)) {
		node.start = i915_ggtt_offset(vma);
-		node.allocated = false;
+		node.flags = 0;
	} else {
		ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
		if (ret)
+1
drivers/gpu/drm/imx/imx-ldb.c
···
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_panel.h>
+1
drivers/gpu/drm/imx/parallel-display.c
···
 #include <video/of_display_timing.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_panel.h>
+3 -2
drivers/gpu/drm/ingenic/ingenic-drm.c
···
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_drv.h>
···
	}
 
	if (panel)
-		bridge = devm_drm_panel_bridge_add(dev, panel,
-						   DRM_MODE_CONNECTOR_DPI);
+		bridge = devm_drm_panel_bridge_add_typed(dev, panel,
+							 DRM_MODE_CONNECTOR_DPI);
 
	priv->dma_hwdesc = dma_alloc_coherent(dev, sizeof(*priv->dma_hwdesc),
					      &priv->dma_hwdesc_phys,
+2 -1
drivers/gpu/drm/lima/lima_device.c
···
	if (err)
		goto error_out0;
 
-	dev->reset = devm_reset_control_get_optional(dev->dev, NULL);
+	dev->reset = devm_reset_control_array_get_optional_shared(dev->dev);
+
	if (IS_ERR(dev->reset)) {
		err = PTR_ERR(dev->reset);
		if (err != -EPROBE_DEFER)
+2 -1
drivers/gpu/drm/mcde/mcde_drv.c
···
	}
	if (!match) {
		dev_err(dev, "no matching components\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto clk_disable;
	}
	if (IS_ERR(match)) {
		dev_err(dev, "could not create component match\n");
+2 -2
drivers/gpu/drm/mcde/mcde_dsi.c
···
		}
	}
	if (panel) {
-		bridge = drm_panel_bridge_add(panel,
-					      DRM_MODE_CONNECTOR_DSI);
+		bridge = drm_panel_bridge_add_typed(panel,
+						    DRM_MODE_CONNECTOR_DSI);
		if (IS_ERR(bridge)) {
			dev_err(dev, "error adding panel bridge\n");
			return PTR_ERR(bridge);
+1
drivers/gpu/drm/mediatek/mtk_dpi.c
···
 #include <video/videomode.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_of.h>
 
+1
drivers/gpu/drm/mediatek/mtk_dsi.c
···
 #include <video/videomode.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_of.h>
 #include <drm/drm_panel.h>
+1
drivers/gpu/drm/mediatek/mtk_hdmi.c
···
 #include <sound/hdmi-codec.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_print.h>
+32
drivers/gpu/drm/meson/meson_drv.c
···
	.unbind	= meson_drv_unbind,
 };
 
+static int __maybe_unused meson_drv_pm_suspend(struct device *dev)
+{
+	struct meson_drm *priv = dev_get_drvdata(dev);
+
+	if (!priv)
+		return 0;
+
+	return drm_mode_config_helper_suspend(priv->drm);
+}
+
+static int __maybe_unused meson_drv_pm_resume(struct device *dev)
+{
+	struct meson_drm *priv = dev_get_drvdata(dev);
+
+	if (!priv)
+		return 0;
+
+	meson_vpu_init(priv);
+	meson_venc_init(priv);
+	meson_vpp_init(priv);
+	meson_viu_init(priv);
+
+	drm_mode_config_helper_resume(priv->drm);
+
+	return 0;
+}
+
 static int compare_of(struct device *dev, void *data)
 {
	DRM_DEBUG_DRIVER("Comparing of node %pOF with %pOF\n",
···
 };
 MODULE_DEVICE_TABLE(of, dt_match);
 
+static const struct dev_pm_ops meson_drv_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(meson_drv_pm_suspend, meson_drv_pm_resume)
+};
+
 static struct platform_driver meson_drm_platform_driver = {
	.probe      = meson_drv_probe,
	.driver     = {
		.name	= "meson-drm",
		.of_match_table = dt_match,
+		.pm = &meson_drv_pm_ops,
	},
 };
 
+76 -34
drivers/gpu/drm/meson/meson_dw_hdmi.c
···
	return false;
 }
 
+static void meson_dw_hdmi_init(struct meson_dw_hdmi *meson_dw_hdmi)
+{
+	struct meson_drm *priv = meson_dw_hdmi->priv;
+
+	/* Enable clocks */
+	regmap_update_bits(priv->hhi, HHI_HDMI_CLK_CNTL, 0xffff, 0x100);
+
+	/* Bring HDMITX MEM output of power down */
+	regmap_update_bits(priv->hhi, HHI_MEM_PD_REG0, 0xff << 8, 0);
+
+	/* Reset HDMITX APB & TX & PHY */
+	reset_control_reset(meson_dw_hdmi->hdmitx_apb);
+	reset_control_reset(meson_dw_hdmi->hdmitx_ctrl);
+	reset_control_reset(meson_dw_hdmi->hdmitx_phy);
+
+	/* Enable APB3 fail on error */
+	if (!meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+		writel_bits_relaxed(BIT(15), BIT(15),
+				    meson_dw_hdmi->hdmitx + HDMITX_TOP_CTRL_REG);
+		writel_bits_relaxed(BIT(15), BIT(15),
+				    meson_dw_hdmi->hdmitx + HDMITX_DWC_CTRL_REG);
+	}
+
+	/* Bring out of reset */
+	meson_dw_hdmi->data->top_write(meson_dw_hdmi,
+				       HDMITX_TOP_SW_RESET, 0);
+
+	msleep(20);
+
+	meson_dw_hdmi->data->top_write(meson_dw_hdmi,
+				       HDMITX_TOP_CLK_CNTL, 0xff);
+
+	/* Enable HDMI-TX Interrupt */
+	meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_INTR_STAT_CLR,
+				       HDMITX_TOP_INTR_CORE);
+
+	meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_INTR_MASKN,
+				       HDMITX_TOP_INTR_CORE);
+
+}
+
 static int meson_dw_hdmi_bind(struct device *dev, struct device *master,
			      void *data)
 {
···
 
	DRM_DEBUG_DRIVER("encoder initialized\n");
 
-	/* Enable clocks */
-	regmap_update_bits(priv->hhi, HHI_HDMI_CLK_CNTL, 0xffff, 0x100);
-
-	/* Bring HDMITX MEM output of power down */
-	regmap_update_bits(priv->hhi, HHI_MEM_PD_REG0, 0xff << 8, 0);
-
-	/* Reset HDMITX APB & TX & PHY */
-	reset_control_reset(meson_dw_hdmi->hdmitx_apb);
-	reset_control_reset(meson_dw_hdmi->hdmitx_ctrl);
-	reset_control_reset(meson_dw_hdmi->hdmitx_phy);
-
-	/* Enable APB3 fail on error */
-	if (!meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
-		writel_bits_relaxed(BIT(15), BIT(15),
-				    meson_dw_hdmi->hdmitx + HDMITX_TOP_CTRL_REG);
-		writel_bits_relaxed(BIT(15), BIT(15),
-				    meson_dw_hdmi->hdmitx + HDMITX_DWC_CTRL_REG);
-	}
-
-	/* Bring out of reset */
-	meson_dw_hdmi->data->top_write(meson_dw_hdmi,
-				       HDMITX_TOP_SW_RESET, 0);
-
-	msleep(20);
-
-	meson_dw_hdmi->data->top_write(meson_dw_hdmi,
-				       HDMITX_TOP_CLK_CNTL, 0xff);
-
-	/* Enable HDMI-TX Interrupt */
-	meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_INTR_STAT_CLR,
-				       HDMITX_TOP_INTR_CORE);
-
-	meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_INTR_MASKN,
-				       HDMITX_TOP_INTR_CORE);
+	meson_dw_hdmi_init(meson_dw_hdmi);
 
	/* Bridge / Connector */
 
···
	.unbind	= meson_dw_hdmi_unbind,
 };
 
+static int __maybe_unused meson_dw_hdmi_pm_suspend(struct device *dev)
+{
+	struct meson_dw_hdmi *meson_dw_hdmi = dev_get_drvdata(dev);
+
+	if (!meson_dw_hdmi)
+		return 0;
+
+	/* Reset TOP */
+	meson_dw_hdmi->data->top_write(meson_dw_hdmi,
+				       HDMITX_TOP_SW_RESET, 0);
+
+	return 0;
+}
+
+static int __maybe_unused meson_dw_hdmi_pm_resume(struct device *dev)
+{
+	struct meson_dw_hdmi *meson_dw_hdmi = dev_get_drvdata(dev);
+
+	if (!meson_dw_hdmi)
+		return 0;
+
+	meson_dw_hdmi_init(meson_dw_hdmi);
+
+	dw_hdmi_resume(meson_dw_hdmi->hdmi);
+
+	return 0;
+}
+
 static int meson_dw_hdmi_probe(struct platform_device *pdev)
 {
	return component_add(&pdev->dev, &meson_dw_hdmi_ops);
···
 
	return 0;
 }
+
+static const struct dev_pm_ops meson_dw_hdmi_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(meson_dw_hdmi_pm_suspend,
+				meson_dw_hdmi_pm_resume)
+};
 
 static const struct of_device_id meson_dw_hdmi_of_table[] = {
	{ .compatible = "amlogic,meson-gxbb-dw-hdmi",
···
	.driver = {
		.name		= DRIVER_NAME,
		.of_match_table	= meson_dw_hdmi_of_table,
+		.pm		= &meson_dw_hdmi_pm_ops,
	},
 };
 module_platform_driver(meson_dw_hdmi_platform_driver);
+7 -2
drivers/gpu/drm/meson/meson_vclk.c
···
		if (frac >= HDMI_FRAC_MAX_GXBB)
			return false;
	} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||
-		   meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL) ||
-		   meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+		   meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) {
		/* Empiric supported min/max dividers */
		if (m < 106 || m > 247)
			return false;
		if (frac >= HDMI_FRAC_MAX_GXL)
+			return false;
+	} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+		/* Empiric supported min/max dividers */
+		if (m < 106 || m > 247)
+			return false;
+		if (frac >= HDMI_FRAC_MAX_G12A)
			return false;
	}
 
+2
drivers/gpu/drm/mgag200/Kconfig
···
	depends on DRM && PCI && MMU
	select DRM_KMS_HELPER
	select DRM_VRAM_HELPER
+	select DRM_TTM
+	select DRM_TTM_HELPER
	help
	  This is a KMS driver for the MGA G200 server chips, it
	  does not support the original MGA G200 or any of the desktop
+198 -131
drivers/gpu/drm/mgag200/mgag200_cursor.c
···
 static bool warn_transparent = true;
 static bool warn_palette = true;
 
-/*
-  Hide the cursor off screen. We can't disable the cursor hardware because it
-  takes too long to re-activate and causes momentary corruption
-*/
-static void mga_hide_cursor(struct mga_device *mdev)
+static int mgag200_cursor_update(struct mga_device *mdev, void *dst, void *src,
+				 unsigned int width, unsigned int height)
 {
-	WREG8(MGA_CURPOSXL, 0);
-	WREG8(MGA_CURPOSXH, 0);
-	if (mdev->cursor.pixels_current)
-		drm_gem_vram_unpin(mdev->cursor.pixels_current);
-	mdev->cursor.pixels_current = NULL;
-}
-
-int mga_crtc_cursor_set(struct drm_crtc *crtc,
-			struct drm_file *file_priv,
-			uint32_t handle,
-			uint32_t width,
-			uint32_t height)
-{
-	struct drm_device *dev = crtc->dev;
-	struct mga_device *mdev = (struct mga_device *)dev->dev_private;
-	struct drm_gem_vram_object *pixels_1 = mdev->cursor.pixels_1;
-	struct drm_gem_vram_object *pixels_2 = mdev->cursor.pixels_2;
-	struct drm_gem_vram_object *pixels_current = mdev->cursor.pixels_current;
-	struct drm_gem_vram_object *pixels_next;
-	struct drm_gem_object *obj;
-	struct drm_gem_vram_object *gbo = NULL;
-	int ret = 0;
-	u8 *src, *dst;
+	struct drm_device *dev = mdev->dev;
	unsigned int i, row, col;
	uint32_t colour_set[16];
	uint32_t *next_space = &colour_set[0];
···
	uint32_t this_colour;
	bool found = false;
	int colour_count = 0;
-	s64 gpu_addr;
-	u64 dst_gpu;
	u8 reg_index;
	u8 this_row[48];
-
-	if (!pixels_1 || !pixels_2) {
-		WREG8(MGA_CURPOSXL, 0);
-		WREG8(MGA_CURPOSXH, 0);
-		return -ENOTSUPP; /* Didn't allocate space for cursors */
-	}
-
-	if (WARN_ON(pixels_current &&
-		    pixels_1 != pixels_current &&
-		    pixels_2 != pixels_current)) {
-		return -ENOTSUPP; /* inconsistent state */
-	}
-
-	if (!handle || !file_priv) {
-		mga_hide_cursor(mdev);
-		return 0;
-	}
-
-	if (width != 64 || height != 64) {
-		WREG8(MGA_CURPOSXL, 0);
-		WREG8(MGA_CURPOSXH, 0);
-		return -EINVAL;
-	}
-
-	if (pixels_current == pixels_1)
-		pixels_next = pixels_2;
-	else
-		pixels_next = pixels_1;
-
-	obj = drm_gem_object_lookup(file_priv, handle);
-	if (!obj)
-		return -ENOENT;
-	gbo = drm_gem_vram_of_gem(obj);
-	ret = drm_gem_vram_pin(gbo, 0);
-	if (ret) {
-		dev_err(&dev->pdev->dev, "failed to lock user bo\n");
-		goto err_drm_gem_object_put_unlocked;
-	}
-	src = drm_gem_vram_kmap(gbo, true, NULL);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
-		dev_err(&dev->pdev->dev,
-			"failed to kmap user buffer updates\n");
-		goto err_drm_gem_vram_unpin_src;
-	}
-
-	/* Pin and map up-coming buffer to write colour indices */
-	ret = drm_gem_vram_pin(pixels_next, DRM_GEM_VRAM_PL_FLAG_VRAM);
-	if (ret) {
-		dev_err(&dev->pdev->dev,
-			"failed to pin cursor buffer: %d\n", ret);
-		goto err_drm_gem_vram_kunmap_src;
-	}
-	dst = drm_gem_vram_kmap(pixels_next, true, NULL);
-	if (IS_ERR(dst)) {
-		ret = PTR_ERR(dst);
-		dev_err(&dev->pdev->dev,
-			"failed to kmap cursor updates: %d\n", ret);
-		goto err_drm_gem_vram_unpin_dst;
-	}
-	gpu_addr = drm_gem_vram_offset(pixels_next);
-	if (gpu_addr < 0) {
-		ret = (int)gpu_addr;
-		dev_err(&dev->pdev->dev,
-			"failed to get cursor scanout address: %d\n", ret);
-		goto err_drm_gem_vram_kunmap_dst;
-	}
-	dst_gpu = (u64)gpu_addr;
 
	memset(&colour_set[0], 0, sizeof(uint32_t)*16);
	/* width*height*4 = 16384 */
···
			dev_info(&dev->pdev->dev, "Not enabling hardware cursor.\n");
			warn_transparent = false; /* Only tell the user once. */
		}
-		ret = -EINVAL;
-		goto err_drm_gem_vram_kunmap_dst;
+		return -EINVAL;
	}
	/* Don't need to store transparent pixels as colours */
	if (this_colour>>24 == 0x0)
···
			dev_info(&dev->pdev->dev, "Not enabling hardware cursor.\n");
			warn_palette = false; /* Only tell the user once. */
		}
-		ret = -EINVAL;
-		goto err_drm_gem_vram_kunmap_dst;
+		return -EINVAL;
	}
	*next_space = this_colour;
	next_space++;
···
		memcpy_toio(dst + row*48, &this_row[0], 48);
	}
 
+	return 0;
+}
+
+static void mgag200_cursor_set_base(struct mga_device *mdev, u64 address)
+{
+	u8 addrl = (address >> 10) & 0xff;
+	u8 addrh = (address >> 18) & 0x3f;
+
	/* Program gpu address of cursor buffer */
-	WREG_DAC(MGA1064_CURSOR_BASE_ADR_LOW, (u8)((dst_gpu>>10) & 0xff));
-	WREG_DAC(MGA1064_CURSOR_BASE_ADR_HI, (u8)((dst_gpu>>18) & 0x3f));
+	WREG_DAC(MGA1064_CURSOR_BASE_ADR_LOW, addrl);
+	WREG_DAC(MGA1064_CURSOR_BASE_ADR_HI, addrh);
+}
+
+static int mgag200_show_cursor(struct mga_device *mdev, void *src,
+			       unsigned int width, unsigned int height)
+{
+	struct drm_device *dev = mdev->dev;
+	struct drm_gem_vram_object *gbo;
+	void *dst;
+	s64 off;
+	int ret;
+
+	gbo = mdev->cursor.gbo[mdev->cursor.next_index];
+	if (!gbo) {
+		WREG8(MGA_CURPOSXL, 0);
+		WREG8(MGA_CURPOSXH, 0);
+		return -ENOTSUPP; /* Didn't allocate space for cursors */
+	}
+	dst = drm_gem_vram_vmap(gbo);
+	if (IS_ERR(dst)) {
+		ret = PTR_ERR(dst);
+		dev_err(&dev->pdev->dev,
+			"failed to map cursor updates: %d\n", ret);
+		return ret;
+	}
+	off = drm_gem_vram_offset(gbo);
+	if (off < 0) {
+		ret = (int)off;
+		dev_err(&dev->pdev->dev,
+			"failed to get cursor scanout address: %d\n", ret);
+		goto err_drm_gem_vram_vunmap;
+	}
+
+	ret = mgag200_cursor_update(mdev, dst, src, width, height);
+	if (ret)
+		goto err_drm_gem_vram_vunmap;
+	mgag200_cursor_set_base(mdev, off);
 
	/* Adjust cursor control register to turn on the cursor */
	WREG_DAC(MGA1064_CURSOR_CTL, 4); /* 16-colour palletized cursor mode */
 
-	/* Now update internal buffer pointers */
-	if (pixels_current)
-		drm_gem_vram_unpin(pixels_current);
-	mdev->cursor.pixels_current = pixels_next;
+	drm_gem_vram_vunmap(gbo, dst);
 
-	drm_gem_vram_kunmap(pixels_next);
-	drm_gem_vram_kunmap(gbo);
-	drm_gem_vram_unpin(gbo);
-	drm_gem_object_put_unlocked(obj);
+	++mdev->cursor.next_index;
+	mdev->cursor.next_index %= ARRAY_SIZE(mdev->cursor.gbo);
 
	return 0;
 
-err_drm_gem_vram_kunmap_dst:
-	drm_gem_vram_kunmap(pixels_next);
-err_drm_gem_vram_unpin_dst:
-	drm_gem_vram_unpin(pixels_next);
-err_drm_gem_vram_kunmap_src:
-	drm_gem_vram_kunmap(gbo);
-err_drm_gem_vram_unpin_src:
-	drm_gem_vram_unpin(gbo);
-err_drm_gem_object_put_unlocked:
-	drm_gem_object_put_unlocked(obj);
+err_drm_gem_vram_vunmap:
+	drm_gem_vram_vunmap(gbo, dst);
	return ret;
 }
 
-int mga_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
+/*
+ * Hide the cursor off screen. We can't disable the cursor hardware because
+ * it takes too long to re-activate and causes momentary corruption.
+ */
+static void mgag200_hide_cursor(struct mga_device *mdev)
 {
-	struct mga_device *mdev = (struct mga_device *)crtc->dev->dev_private;
-	/* Our origin is at (64,64) */
-	x += 64;
-	y += 64;
+	WREG8(MGA_CURPOSXL, 0);
+	WREG8(MGA_CURPOSXH, 0);
+}
 
-	BUG_ON(x <= 0);
-	BUG_ON(y <= 0);
-	BUG_ON(x & ~0xffff);
-	BUG_ON(y & ~0xffff);
+static void mgag200_move_cursor(struct mga_device *mdev, int x, int y)
+{
+	if (WARN_ON(x <= 0))
+		return;
+	if (WARN_ON(y <= 0))
+		return;
+	if (WARN_ON(x & ~0xffff))
+		return;
+	if (WARN_ON(y & ~0xffff))
+		return;
 
	WREG8(MGA_CURPOSXL, x & 0xff);
	WREG8(MGA_CURPOSXH, (x>>8) & 0xff);
 
	WREG8(MGA_CURPOSYL, y & 0xff);
	WREG8(MGA_CURPOSYH, (y>>8) & 0xff);
+}
+
+int mgag200_cursor_init(struct mga_device *mdev)
+{
+	struct drm_device *dev = mdev->dev;
+	size_t ncursors = ARRAY_SIZE(mdev->cursor.gbo);
+	size_t size;
+	int ret;
+	size_t i;
+	struct drm_gem_vram_object *gbo;
+
+	size = roundup(64 * 48, PAGE_SIZE);
+	if (size * ncursors > mdev->vram_fb_available)
+		return -ENOMEM;
+
+	for (i = 0; i < ncursors; ++i) {
+		gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
+					  size, 0, false);
+		if (IS_ERR(gbo)) {
+			ret = PTR_ERR(gbo);
+			goto err_drm_gem_vram_put;
+		}
+		ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM |
+					    DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
+		if (ret) {
+			drm_gem_vram_put(gbo);
+			goto err_drm_gem_vram_put;
+		}
+
+		mdev->cursor.gbo[i] = gbo;
+	}
+
+	/*
+	 * At the high end of video memory, we reserve space for
+	 * buffer objects. The cursor plane uses this memory to store
+	 * a double-buffered image of the current cursor. Hence, it's
+	 * not available for framebuffers.
+	 */
+	mdev->vram_fb_available -= ncursors * size;
+
+	return 0;
+
+err_drm_gem_vram_put:
+	while (i) {
+		--i;
+		gbo = mdev->cursor.gbo[i];
+		drm_gem_vram_unpin(gbo);
+		drm_gem_vram_put(gbo);
+		mdev->cursor.gbo[i] = NULL;
+	}
+	return ret;
+}
+
+void mgag200_cursor_fini(struct mga_device *mdev)
+{
+	size_t i;
+	struct drm_gem_vram_object *gbo;
+
+	for (i = 0; i < ARRAY_SIZE(mdev->cursor.gbo); ++i) {
+		gbo = mdev->cursor.gbo[i];
+		drm_gem_vram_unpin(gbo);
+		drm_gem_vram_put(gbo);
+	}
+}
+
+int mgag200_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
+			    uint32_t handle, uint32_t width, uint32_t height)
+{
+	struct drm_device *dev = crtc->dev;
+	struct mga_device *mdev = (struct mga_device *)dev->dev_private;
+	struct drm_gem_object *obj;
+	struct drm_gem_vram_object *gbo = NULL;
+	int ret;
+	u8 *src;
+
+	if (!handle || !file_priv) {
+		mgag200_hide_cursor(mdev);
+		return 0;
+	}
+
+	if (width != 64 || height != 64) {
+		WREG8(MGA_CURPOSXL, 0);
+		WREG8(MGA_CURPOSXH, 0);
+		return -EINVAL;
+	}
+
+	obj = drm_gem_object_lookup(file_priv, handle);
+	if (!obj)
+		return -ENOENT;
+	gbo = drm_gem_vram_of_gem(obj);
+	src = drm_gem_vram_vmap(gbo);
+	if (IS_ERR(src)) {
+		ret = PTR_ERR(src);
+		dev_err(&dev->pdev->dev,
+			"failed to map user buffer updates\n");
+		goto err_drm_gem_object_put_unlocked;
+	}
+
+	ret = mgag200_show_cursor(mdev, src, width, height);
+	if (ret)
+		goto err_drm_gem_vram_vunmap;
+
+	/* Now update internal buffer pointers */
+	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_object_put_unlocked(obj);
+
+	return 0;
+err_drm_gem_vram_vunmap:
+	drm_gem_vram_vunmap(gbo, src);
+err_drm_gem_object_put_unlocked:
+	drm_gem_object_put_unlocked(obj);
+	return ret;
+}
+
+int mgag200_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
+{
+	struct mga_device *mdev = (struct mga_device *)crtc->dev->dev_private;
+
+	/* Our origin is at (64,64) */
+	x += 64;
+	y += 64;
+
+	mgag200_move_cursor(mdev, x, y);
+
	return 0;
 }
+1 -1
drivers/gpu/drm/mgag200/mgag200_drv.c
···
 
 static int mga_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
-	drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "mgag200drmfb");
+	drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "mgag200drmfb");
 
	return drm_get_pci_dev(pdev, ent, &driver);
 }
+9 -14
drivers/gpu/drm/mgag200/mgag200_drv.h
···
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem.h>
 #include <drm/drm_gem_vram_helper.h>
-#include <drm/drm_vram_mm_helper.h>
 
 #include "mgag200_reg.h"
 
···
 };
 
 struct mga_cursor {
-	/*
-	   We have to have 2 buffers for the cursor to avoid occasional
-	   corruption while switching cursor icons.
-	   If either of these is NULL, then don't do hardware cursors, and
-	   fall back to software.
-	 */
-	struct drm_gem_vram_object *pixels_1;
-	struct drm_gem_vram_object *pixels_2;
-	/* The currently displayed icon, this points to one of pixels_1, or pixels_2 */
-	struct drm_gem_vram_object *pixels_current;
+	struct drm_gem_vram_object *gbo[2];
+	unsigned int next_index;
 };
 
 struct mga_mc {
···
 
	struct mga_cursor cursor;
 
+	size_t vram_fb_available;
+
	bool suspended;
	int num_crtc;
	enum mga_type type;
···
 void mgag200_mm_fini(struct mga_device *mdev);
 int mgag200_mmap(struct file *filp, struct vm_area_struct *vma);
 
-int mga_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
-			uint32_t handle, uint32_t width, uint32_t height);
-int mga_crtc_cursor_move(struct drm_crtc *crtc, int x, int y);
+int mgag200_cursor_init(struct mga_device *mdev);
+void mgag200_cursor_fini(struct mga_device *mdev);
+int mgag200_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
+			    uint32_t handle, uint32_t width, uint32_t height);
+int mgag200_crtc_cursor_move(struct drm_crtc *crtc, int x, int y);
 
 #endif /* __MGAG200_DRV_H__ */
+6 -14
drivers/gpu/drm/mgag200/mgag200_main.c
···
 
	drm_mode_config_init(dev);
	dev->mode_config.funcs = (void *)&mga_mode_funcs;
-	if (IS_G200_SE(mdev) && mdev->mc.vram_size < (2048*1024))
+	if (IS_G200_SE(mdev) && mdev->vram_fb_available < (2048*1024))
		dev->mode_config.preferred_depth = 16;
	else
		dev->mode_config.preferred_depth = 32;
···
		goto err_modeset;
	}
 
-	/* Make small buffers to store a hardware cursor (double buffered icon updates) */
-	mdev->cursor.pixels_1 = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
-						    roundup(48*64, PAGE_SIZE),
-						    0, 0);
-	mdev->cursor.pixels_2 = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
-						    roundup(48*64, PAGE_SIZE),
-						    0, 0);
-	if (IS_ERR(mdev->cursor.pixels_2) || IS_ERR(mdev->cursor.pixels_1)) {
-		mdev->cursor.pixels_1 = NULL;
-		mdev->cursor.pixels_2 = NULL;
+	r = mgag200_cursor_init(mdev);
+	if (r)
		dev_warn(&dev->pdev->dev,
-			"Could not allocate space for cursors. Not doing hardware cursors.\n");
-	}
-	mdev->cursor.pixels_current = NULL;
+			 "Could not initialize cursors. Not doing hardware cursors.\n");
 
	r = drm_fbdev_generic_setup(mdev->dev, 0);
	if (r)
···
 
 err_modeset:
	drm_mode_config_cleanup(dev);
+	mgag200_cursor_fini(mdev);
	mgag200_mm_fini(mdev);
 err_mm:
	dev->dev_private = NULL;
···
		return;
	mgag200_modeset_fini(mdev);
	drm_mode_config_cleanup(dev);
+	mgag200_cursor_fini(mdev);
	mgag200_mm_fini(mdev);
	dev->dev_private = NULL;
 }
+3 -14
drivers/gpu/drm/mgag200/mgag200_mode.c
···
 
 /* These provide the minimum set of functions required to handle a CRTC */
 static const struct drm_crtc_funcs mga_crtc_funcs = {
-	.cursor_set = mga_crtc_cursor_set,
-	.cursor_move = mga_crtc_cursor_move,
+	.cursor_set = mgag200_crtc_cursor_set,
+	.cursor_move = mgag200_crtc_cursor_move,
	.gamma_set = mga_crtc_gamma_set,
	.set_config = drm_crtc_helper_set_config,
	.destroy = mga_crtc_destroy,
···
		bpp = connector->cmdline_mode.bpp;
	}
 
-	if ((mode->hdisplay * mode->vdisplay * (bpp/8)) > mdev->mc.vram_size) {
+	if ((mode->hdisplay * mode->vdisplay * (bpp/8)) > mdev->vram_fb_available) {
		if (connector->cmdline_mode.specified)
			connector->cmdline_mode.specified = false;
		return MODE_BAD;
	}
 
	return MODE_OK;
-}
-
-static struct drm_encoder *mga_connector_best_encoder(struct drm_connector
-						      *connector)
-{
-	int enc_id = connector->encoder_ids[0];
-	/* pick the encoder ids */
-	if (enc_id)
-		return drm_encoder_find(connector->dev, NULL, enc_id);
-	return NULL;
 }
 
 static void mga_connector_destroy(struct drm_connector *connector)
···
 static const struct drm_connector_helper_funcs mga_vga_connector_helper_funcs = {
	.get_modes = mga_vga_get_modes,
	.mode_valid = mga_vga_mode_valid,
-	.best_encoder = mga_connector_best_encoder,
 };
 
 static const struct drm_connector_funcs mga_vga_connector_funcs = {
+5 -2
drivers/gpu/drm/mgag200/mgag200_ttm.c
···
 	struct drm_device *dev = mdev->dev;
 
 	vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
-				       mdev->mc.vram_size,
-				       &drm_gem_vram_mm_funcs);
+				       mdev->mc.vram_size);
 	if (IS_ERR(vmm)) {
 		ret = PTR_ERR(vmm);
 		DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
···
 	mdev->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
 					 pci_resource_len(dev->pdev, 0));
 
+	mdev->vram_fb_available = mdev->mc.vram_size;
+
 	return 0;
 }
 
 void mgag200_mm_fini(struct mga_device *mdev)
 {
 	struct drm_device *dev = mdev->dev;
+
+	mdev->vram_fb_available = 0;
 
 	drm_vram_helper_release_mm(dev);
 
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
···
  */
 #define DPU_DEBUG(fmt, ...) \
 	do { \
-		if (unlikely(drm_debug & DRM_UT_KMS)) \
+		if (drm_debug_enabled(DRM_UT_KMS)) \
 			DRM_DEBUG(fmt, ##__VA_ARGS__); \
 		else \
 			pr_debug(fmt, ##__VA_ARGS__); \
···
  */
 #define DPU_DEBUG_DRIVER(fmt, ...) \
 	do { \
-		if (unlikely(drm_debug & DRM_UT_DRIVER)) \
+		if (drm_debug_enabled(DRM_UT_DRIVER)) \
 			DRM_ERROR(fmt, ##__VA_ARGS__); \
 		else \
 			pr_debug(fmt, ##__VA_ARGS__); \
+1
drivers/gpu/drm/msm/dsi/dsi.h
···
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_panel.h>
+3 -1
drivers/gpu/drm/msm/edp/edp.c
···
 		goto fail;
 	}
 
-	encoder->bridge = edp->bridge;
+	ret = drm_bridge_attach(encoder, edp->bridge, NULL);
+	if (ret)
+		goto fail;
 
 	priv->bridges[priv->num_bridges++] = edp->bridge;
 	priv->connectors[priv->num_connectors++] = edp->connector;
+1
drivers/gpu/drm/msm/edp/edp.h
···
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/platform_device.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_dp_helper.h>
 
+3 -1
drivers/gpu/drm/msm/hdmi/hdmi.c
···
 		goto fail;
 	}
 
-	encoder->bridge = hdmi->bridge;
+	ret = drm_bridge_attach(encoder, hdmi->bridge, NULL);
+	if (ret)
+		goto fail;
 
 	priv->bridges[priv->num_bridges++] = hdmi->bridge;
 	priv->connectors[priv->num_connectors++] = hdmi->connector;
+2
drivers/gpu/drm/msm/hdmi/hdmi.h
···
 #include <linux/gpio/consumer.h>
 #include <linux/hdmi.h>
 
+#include <drm/drm_bridge.h>
+
 #include "msm_drv.h"
 #include "hdmi.xml.h"
 
+1 -1
drivers/gpu/drm/nouveau/dispnv04/disp.c
···
 
 	list_for_each_entry_safe(connector, ct,
 				 &dev->mode_config.connector_list, head) {
-		if (!connector->encoder_ids[0]) {
+		if (!connector->possible_encoders) {
 			NV_WARN(drm, "%s has no encoders, removing\n",
 				connector->name);
 			connector->funcs->destroy(connector);
+1 -1
drivers/gpu/drm/nouveau/dispnv50/disp.c
···
 
 	/* cull any connectors we created that don't have an encoder */
 	list_for_each_entry_safe(connector, tmp, &dev->mode_config.connector_list, head) {
-		if (connector->encoder_ids[0])
+		if (connector->possible_encoders)
 			continue;
 
 		NV_WARN(drm, "%s has no encoders, removing\n",
+4 -6
drivers/gpu/drm/nouveau/nouveau_connector.c
···
 {
 	struct nouveau_encoder *nv_encoder;
 	struct drm_encoder *enc;
-	int i;
 
-	drm_connector_for_each_possible_encoder(connector, enc, i) {
+	drm_connector_for_each_possible_encoder(connector, enc) {
 		nv_encoder = nouveau_encoder(enc);
 
 		if (type == DCB_OUTPUT_ANY ||
···
 	struct drm_device *dev = connector->dev;
 	struct nouveau_encoder *nv_encoder = NULL, *found = NULL;
 	struct drm_encoder *encoder;
-	int i, ret;
+	int ret;
 	bool switcheroo_ddc = false;
 
-	drm_connector_for_each_possible_encoder(connector, encoder, i) {
+	drm_connector_for_each_possible_encoder(connector, encoder) {
 		nv_encoder = nouveau_encoder(encoder);
 
 		switch (nv_encoder->dcb->type) {
···
 	switch (type) {
 	case DRM_MODE_CONNECTOR_DisplayPort:
 	case DRM_MODE_CONNECTOR_eDP:
-		drm_dp_cec_register_connector(&nv_connector->aux,
-					      connector->name, dev->dev);
+		drm_dp_cec_register_connector(&nv_connector->aux, connector);
 		break;
 	}
 
+1
drivers/gpu/drm/nouveau/nouveau_ttm.c
···
 	ret = ttm_bo_device_init(&drm->ttm.bdev,
 				 &nouveau_bo_driver,
 				 dev->anon_inode->i_mapping,
+				 dev->vma_offset_manager,
 				 drm->client.mmu.dmabits <= 32 ? true : false);
 	if (ret) {
 		NV_ERROR(drm, "error initialising bo driver, %d\n", ret);
+1 -1
drivers/gpu/drm/omapdrm/dss/Makefile
···
 
 obj-$(CONFIG_OMAP2_DSS) += omapdss.o
 # Core DSS files
-omapdss-y := core.o dss.o dispc.o dispc_coefs.o \
+omapdss-y := dss.o dispc.o dispc_coefs.o \
 	pll.o video-pll.o
 omapdss-$(CONFIG_OMAP2_DSS_DPI) += dpi.o
 omapdss-$(CONFIG_OMAP2_DSS_VENC) += venc.o
-55
drivers/gpu/drm/omapdrm/dss/core.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2009 Nokia Corporation
- * Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
- *
- * Some code and ideas taken from drivers/video/omap/ driver
- * by Imre Deak.
- */
-
-#define DSS_SUBSYS_NAME "CORE"
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-
-#include "omapdss.h"
-#include "dss.h"
-
-/* INIT */
-static struct platform_driver * const omap_dss_drivers[] = {
-	&omap_dsshw_driver,
-	&omap_dispchw_driver,
-#ifdef CONFIG_OMAP2_DSS_DSI
-	&omap_dsihw_driver,
-#endif
-#ifdef CONFIG_OMAP2_DSS_VENC
-	&omap_venchw_driver,
-#endif
-#ifdef CONFIG_OMAP4_DSS_HDMI
-	&omapdss_hdmi4hw_driver,
-#endif
-#ifdef CONFIG_OMAP5_DSS_HDMI
-	&omapdss_hdmi5hw_driver,
-#endif
-};
-
-static int __init omap_dss_init(void)
-{
-	return platform_register_drivers(omap_dss_drivers,
-					 ARRAY_SIZE(omap_dss_drivers));
-}
-
-static void __exit omap_dss_exit(void)
-{
-	platform_unregister_drivers(omap_dss_drivers,
-				    ARRAY_SIZE(omap_dss_drivers));
-}
-
-module_init(omap_dss_init);
-module_exit(omap_dss_exit);
-
-MODULE_AUTHOR("Tomi Valkeinen <tomi.valkeinen@ti.com>");
-MODULE_DESCRIPTION("OMAP2/3 Display Subsystem");
-MODULE_LICENSE("GPL v2");
+29 -17
drivers/gpu/drm/omapdrm/dss/dispc.c
···
 	const unsigned int num_reg_fields;
 	const enum omap_overlay_caps *overlay_caps;
 	const u32 **supported_color_modes;
+	const u32 *supported_scaler_color_modes;
 	unsigned int num_mgrs;
 	unsigned int num_ovls;
 	unsigned int buffer_size_unit;
···
 
 	struct regmap *syscon_pol;
 	u32 syscon_pol_offset;
-
-	/* DISPC_CONTROL & DISPC_CONFIG lock*/
-	spinlock_t control_lock;
 };
 
 enum omap_color_component {
···
 static u32 mgr_fld_read(struct dispc_device *dispc, enum omap_channel channel,
 			enum mgr_reg_fields regfld)
 {
-	const struct dispc_reg_field rfld = mgr_desc[channel].reg_desc[regfld];
+	const struct dispc_reg_field *rfld = &mgr_desc[channel].reg_desc[regfld];
 
-	return REG_GET(dispc, rfld.reg, rfld.high, rfld.low);
+	return REG_GET(dispc, rfld->reg, rfld->high, rfld->low);
 }
 
 static void mgr_fld_write(struct dispc_device *dispc, enum omap_channel channel,
 			  enum mgr_reg_fields regfld, int val)
 {
-	const struct dispc_reg_field rfld = mgr_desc[channel].reg_desc[regfld];
-	const bool need_lock = rfld.reg == DISPC_CONTROL || rfld.reg == DISPC_CONFIG;
-	unsigned long flags;
+	const struct dispc_reg_field *rfld = &mgr_desc[channel].reg_desc[regfld];
 
-	if (need_lock) {
-		spin_lock_irqsave(&dispc->control_lock, flags);
-		REG_FLD_MOD(dispc, rfld.reg, val, rfld.high, rfld.low);
-		spin_unlock_irqrestore(&dispc->control_lock, flags);
-	} else {
-		REG_FLD_MOD(dispc, rfld.reg, val, rfld.high, rfld.low);
-	}
+	REG_FLD_MOD(dispc, rfld->reg, val, rfld->high, rfld->low);
 }
 
 static int dispc_get_num_ovls(struct dispc_device *dispc)
···
 	if (width == out_width && height == out_height)
 		return 0;
 
+	if (dispc->feat->supported_scaler_color_modes) {
+		const u32 *modes = dispc->feat->supported_scaler_color_modes;
+		unsigned int i;
+
+		for (i = 0; modes[i]; ++i) {
+			if (modes[i] == fourcc)
+				break;
+		}
+
+		if (modes[i] == 0)
+			return -EINVAL;
+	}
+
 	if (plane == OMAP_DSS_WB) {
 		switch (fourcc) {
 		case DRM_FORMAT_NV12:
···
 		DRM_FORMAT_RGBX8888),
 };
 
+static const u32 omap3_dispc_supported_scaler_color_modes[] = {
+	DRM_FORMAT_XRGB8888, DRM_FORMAT_RGB565, DRM_FORMAT_YUYV,
+	DRM_FORMAT_UYVY,
+	0,
+};
+
 static const struct dispc_features omap24xx_dispc_feats = {
 	.sw_start = 5,
 	.fp_start = 15,
···
 	.num_reg_fields = ARRAY_SIZE(omap2_dispc_reg_fields),
 	.overlay_caps = omap2_dispc_overlay_caps,
 	.supported_color_modes = omap2_dispc_supported_color_modes,
+	.supported_scaler_color_modes = COLOR_ARRAY(DRM_FORMAT_XRGB8888),
 	.num_mgrs = 2,
 	.num_ovls = 3,
 	.buffer_size_unit = 1,
···
 	.num_reg_fields = ARRAY_SIZE(omap3_dispc_reg_fields),
 	.overlay_caps = omap3430_dispc_overlay_caps,
 	.supported_color_modes = omap3_dispc_supported_color_modes,
+	.supported_scaler_color_modes = omap3_dispc_supported_scaler_color_modes,
 	.num_mgrs = 2,
 	.num_ovls = 3,
 	.buffer_size_unit = 1,
···
 	.num_reg_fields = ARRAY_SIZE(omap3_dispc_reg_fields),
 	.overlay_caps = omap3430_dispc_overlay_caps,
 	.supported_color_modes = omap3_dispc_supported_color_modes,
+	.supported_scaler_color_modes = omap3_dispc_supported_scaler_color_modes,
 	.num_mgrs = 2,
 	.num_ovls = 3,
 	.buffer_size_unit = 1,
···
 	.num_reg_fields = ARRAY_SIZE(omap3_dispc_reg_fields),
 	.overlay_caps = omap3630_dispc_overlay_caps,
 	.supported_color_modes = omap3_dispc_supported_color_modes,
+	.supported_scaler_color_modes = omap3_dispc_supported_scaler_color_modes,
 	.num_mgrs = 2,
 	.num_ovls = 3,
 	.buffer_size_unit = 1,
···
 	.num_reg_fields = ARRAY_SIZE(omap3_dispc_reg_fields),
 	.overlay_caps = omap3430_dispc_overlay_caps,
 	.supported_color_modes = omap3_dispc_supported_color_modes,
+	.supported_scaler_color_modes = omap3_dispc_supported_scaler_color_modes,
 	.num_mgrs = 1,
 	.num_ovls = 3,
 	.buffer_size_unit = 1,
···
 	dispc->pdev = pdev;
 	platform_set_drvdata(pdev, dispc);
 	dispc->dss = dss;
-
-	spin_lock_init(&dispc->control_lock);
 
 	/*
 	 * The OMAP3-based models can't be told apart using the compatible
+37
drivers/gpu/drm/omapdrm/dss/dss.c
···
 		.suppress_bind_attrs = true,
 	},
 };
+
+/* INIT */
+static struct platform_driver * const omap_dss_drivers[] = {
+	&omap_dsshw_driver,
+	&omap_dispchw_driver,
+#ifdef CONFIG_OMAP2_DSS_DSI
+	&omap_dsihw_driver,
+#endif
+#ifdef CONFIG_OMAP2_DSS_VENC
+	&omap_venchw_driver,
+#endif
+#ifdef CONFIG_OMAP4_DSS_HDMI
+	&omapdss_hdmi4hw_driver,
+#endif
+#ifdef CONFIG_OMAP5_DSS_HDMI
+	&omapdss_hdmi5hw_driver,
+#endif
+};
+
+static int __init omap_dss_init(void)
+{
+	return platform_register_drivers(omap_dss_drivers,
+					 ARRAY_SIZE(omap_dss_drivers));
+}
+
+static void __exit omap_dss_exit(void)
+{
+	platform_unregister_drivers(omap_dss_drivers,
+				    ARRAY_SIZE(omap_dss_drivers));
+}
+
+module_init(omap_dss_init);
+module_exit(omap_dss_exit);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkeinen@ti.com>");
+MODULE_DESCRIPTION("OMAP2/3/4/5 Display Subsystem");
+MODULE_LICENSE("GPL v2");
+3 -2
drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
···
 	}
 
 	/* Set ACR clock divisor */
-	REG_FLD_MOD(av_base,
-		    HDMI_CORE_AV_FREQ_SVAL, cfg->mclk_mode, 2, 0);
+	if (cfg->use_mclk)
+		REG_FLD_MOD(av_base, HDMI_CORE_AV_FREQ_SVAL,
+			    cfg->mclk_mode, 2, 0);
 
 	r = hdmi_read_reg(av_base, HDMI_CORE_AV_ACR_CTRL);
 	/*
+66 -57
drivers/gpu/drm/omapdrm/dss/hdmi5_core.c
···
 
 #include "hdmi5_core.h"
 
-/* only 24 bit color depth used for now */
-static const struct csc_table csc_table_deepcolor[] = {
-	/* HDMI_DEEP_COLOR_24BIT */
-	[0] = { 7036, 0, 0, 32, 0, 7036, 0, 32, 0, 0, 7036, 32, },
-	/* HDMI_DEEP_COLOR_30BIT */
-	[1] = { 7015, 0, 0, 128, 0, 7015, 0, 128, 0, 0, 7015, 128, },
-	/* HDMI_DEEP_COLOR_36BIT */
-	[2] = { 7010, 0, 0, 512, 0, 7010, 0, 512, 0, 0, 7010, 512, },
-	/* FULL RANGE */
-	[3] = { 8192, 0, 0, 0, 0, 8192, 0, 0, 0, 0, 8192, 0, },
-};
-
 static void hdmi_core_ddc_init(struct hdmi_core_data *core)
 {
 	void __iomem *base = core->base;
 	const unsigned long long iclk = 266000000;	/* DSS L3 ICLK */
-	const unsigned int ss_scl_high = 4600;		/* ns */
-	const unsigned int ss_scl_low = 5400;		/* ns */
+	const unsigned int ss_scl_high = 4700;		/* ns */
+	const unsigned int ss_scl_low = 5500;		/* ns */
 	const unsigned int fs_scl_high = 600;		/* ns */
 	const unsigned int fs_scl_low = 1300;		/* ns */
 	const unsigned int sda_hold = 1000;		/* ns */
···
 	REG_FLD_MOD(base, HDMI_CORE_VP_CONF, clr_depth ? 0 : 2, 1, 0);
 }
 
-static void hdmi_core_config_csc(struct hdmi_core_data *core)
-{
-	int clr_depth = 0;	/* 24 bit color depth */
-
-	/* CSC_COLORDEPTH */
-	REG_FLD_MOD(core->base, HDMI_CORE_CSC_SCALE, clr_depth, 7, 4);
-}
-
 static void hdmi_core_config_video_sampler(struct hdmi_core_data *core)
 {
 	int video_mapping = 1;	/* for 24 bit color depth */
···
 	REG_FLD_MOD(base, HDMI_CORE_FC_PRCONF, pr, 3, 0);
 }
 
-static void hdmi_core_csc_config(struct hdmi_core_data *core,
-				 struct csc_table csc_coeff)
+static void hdmi_core_write_csc(struct hdmi_core_data *core,
+				const struct csc_table *csc_coeff)
 {
 	void __iomem *base = core->base;
 
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A1_MSB, csc_coeff.a1 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A1_LSB, csc_coeff.a1, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A2_MSB, csc_coeff.a2 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A2_LSB, csc_coeff.a2, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A3_MSB, csc_coeff.a3 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A3_LSB, csc_coeff.a3, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A4_MSB, csc_coeff.a4 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A4_LSB, csc_coeff.a4, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B1_MSB, csc_coeff.b1 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B1_LSB, csc_coeff.b1, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B2_MSB, csc_coeff.b2 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B2_LSB, csc_coeff.b2, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B3_MSB, csc_coeff.b3 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B3_LSB, csc_coeff.b3, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B4_MSB, csc_coeff.b4 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B4_LSB, csc_coeff.b4, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C1_MSB, csc_coeff.c1 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C1_LSB, csc_coeff.c1, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C2_MSB, csc_coeff.c2 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C2_LSB, csc_coeff.c2, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C3_MSB, csc_coeff.c3 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C3_LSB, csc_coeff.c3, 7, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C4_MSB, csc_coeff.c4 >> 8, 6, 0);
-	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C4_LSB, csc_coeff.c4, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A1_MSB, csc_coeff->a1 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A1_LSB, csc_coeff->a1, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A2_MSB, csc_coeff->a2 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A2_LSB, csc_coeff->a2, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A3_MSB, csc_coeff->a3 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A3_LSB, csc_coeff->a3, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A4_MSB, csc_coeff->a4 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_A4_LSB, csc_coeff->a4, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B1_MSB, csc_coeff->b1 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B1_LSB, csc_coeff->b1, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B2_MSB, csc_coeff->b2 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B2_LSB, csc_coeff->b2, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B3_MSB, csc_coeff->b3 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B3_LSB, csc_coeff->b3, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B4_MSB, csc_coeff->b4 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_B4_LSB, csc_coeff->b4, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C1_MSB, csc_coeff->c1 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C1_LSB, csc_coeff->c1, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C2_MSB, csc_coeff->c2 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C2_LSB, csc_coeff->c2, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C3_MSB, csc_coeff->c3 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C3_LSB, csc_coeff->c3, 7, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C4_MSB, csc_coeff->c4 >> 8, 6, 0);
+	REG_FLD_MOD(base, HDMI_CORE_CSC_COEF_C4_LSB, csc_coeff->c4, 7, 0);
 
+	/* enable CSC */
 	REG_FLD_MOD(base, HDMI_CORE_MC_FLOWCTRL, 0x1, 0, 0);
 }
 
-static void hdmi_core_configure_range(struct hdmi_core_data *core)
+static void hdmi_core_configure_range(struct hdmi_core_data *core,
+				      enum hdmi_quantization_range range)
 {
-	struct csc_table csc_coeff = { 0 };
+	static const struct csc_table csc_limited_range = {
+		7036, 0, 0, 32, 0, 7036, 0, 32, 0, 0, 7036, 32
+	};
+	static const struct csc_table csc_full_range = {
+		8192, 0, 0, 0, 0, 8192, 0, 0, 0, 0, 8192, 0
+	};
+	const struct csc_table *csc_coeff;
 
-	/* support limited range with 24 bit color depth for now */
-	csc_coeff = csc_table_deepcolor[0];
+	/* CSC_COLORDEPTH = 24 bits*/
+	REG_FLD_MOD(core->base, HDMI_CORE_CSC_SCALE, 0, 7, 4);
 
-	hdmi_core_csc_config(core, csc_coeff);
+	switch (range) {
+	case HDMI_QUANTIZATION_RANGE_FULL:
+		csc_coeff = &csc_full_range;
+		break;
+
+	case HDMI_QUANTIZATION_RANGE_DEFAULT:
+	case HDMI_QUANTIZATION_RANGE_LIMITED:
+	default:
+		csc_coeff = &csc_limited_range;
+		break;
+	}
+
+	hdmi_core_write_csc(core, csc_coeff);
 }
 
 static void hdmi_core_enable_video_path(struct hdmi_core_data *core)
···
 	struct videomode vm;
 	struct hdmi_video_format video_format;
 	struct hdmi_core_vid_config v_core_cfg;
+	enum hdmi_quantization_range range;
 
 	hdmi_core_mask_interrupts(core);
+
+	if (cfg->hdmi_dvi_mode == HDMI_HDMI) {
+		char vic = cfg->infoframe.video_code;
+
+		/* All CEA modes other than VIC 1 use limited quantization range. */
+		range = vic > 1 ? HDMI_QUANTIZATION_RANGE_LIMITED :
+				  HDMI_QUANTIZATION_RANGE_FULL;
+	} else {
+		range = HDMI_QUANTIZATION_RANGE_FULL;
+	}
 
 	hdmi_core_init(&v_core_cfg, cfg);
 
···
 
 	hdmi_wp_video_config_interface(wp, &vm);
 
-	/* support limited range with 24 bit color depth for now */
-	hdmi_core_configure_range(core);
-	cfg->infoframe.quantization_range = HDMI_QUANTIZATION_RANGE_LIMITED;
+	hdmi_core_configure_range(core, range);
+	cfg->infoframe.quantization_range = range;
 
 	/*
 	 * configure core video part, set software reset in the core
···
 	hdmi_core_video_config(core, &v_core_cfg);
 
 	hdmi_core_config_video_packetizer(core);
-	hdmi_core_config_csc(core);
 	hdmi_core_config_video_sampler(core);
 
 	if (cfg->hdmi_dvi_mode == HDMI_HDMI)
+1
drivers/gpu/drm/omapdrm/dss/output.c
···
 #include <linux/of.h>
 #include <linux/of_graph.h>
 
+#include <drm/drm_bridge.h>
 #include <drm/drm_panel.h>
 
 #include "dss.h"
+1
drivers/gpu/drm/omapdrm/omap_drv.c
··· 11 11 12 12 #include <drm/drm_atomic.h> 13 13 #include <drm/drm_atomic_helper.h> 14 + #include <drm/drm_bridge.h> 14 15 #include <drm/drm_drv.h> 15 16 #include <drm/drm_fb_helper.h> 16 17 #include <drm/drm_file.h>
+1
drivers/gpu/drm/omapdrm/omap_encoder.c
··· 6 6 7 7 #include <linux/list.h> 8 8 9 + #include <drm/drm_bridge.h> 9 10 #include <drm/drm_crtc.h> 10 11 #include <drm/drm_modeset_helper_vtables.h> 11 12 #include <drm/drm_edid.h>
+2 -3
drivers/gpu/drm/panel/panel-arm-versatile.c
···
 		dev_info(dev, "panel mounted on IB2 daughterboard\n");
 	}
 
-	drm_panel_init(&vpanel->panel);
-	vpanel->panel.dev = dev;
-	vpanel->panel.funcs = &versatile_panel_drm_funcs;
+	drm_panel_init(&vpanel->panel, dev, &versatile_panel_drm_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&vpanel->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
···
 	mipi_dsi_set_drvdata(dsi, ctx);
 	ctx->dsi = dsi;
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = &dsi->dev;
-	ctx->panel.funcs = &feiyang_funcs;
+	drm_panel_init(&ctx->panel, &dsi->dev, &feiyang_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	ctx->dvdd = devm_regulator_get(&dsi->dev, "dvdd");
 	if (IS_ERR(ctx->dvdd)) {
+2 -3
drivers/gpu/drm/panel/panel-ilitek-ili9322.c
···
 		ili->input = ili->conf->input;
 	}
 
-	drm_panel_init(&ili->panel);
-	ili->panel.dev = dev;
-	ili->panel.funcs = &ili9322_drm_funcs;
+	drm_panel_init(&ili->panel, dev, &ili9322_drm_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&ili->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
···
 	mipi_dsi_set_drvdata(dsi, ctx);
 	ctx->dsi = dsi;
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = &dsi->dev;
-	ctx->panel.funcs = &ili9881c_funcs;
+	drm_panel_init(&ctx->panel, &dsi->dev, &ili9881c_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	ctx->power = devm_regulator_get(&dsi->dev, "power");
 	if (IS_ERR(ctx->power)) {
+2 -3
drivers/gpu/drm/panel/panel-innolux-p079zca.c
···
 	if (IS_ERR(innolux->backlight))
 		return PTR_ERR(innolux->backlight);
 
-	drm_panel_init(&innolux->base);
-	innolux->base.funcs = &innolux_panel_funcs;
-	innolux->base.dev = dev;
+	drm_panel_init(&innolux->base, dev, &innolux_panel_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	err = drm_panel_add(&innolux->base);
 	if (err < 0)
+2 -3
drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
···
 		return ret;
 	}
 
-	drm_panel_init(&jdi->base);
-	jdi->base.funcs = &jdi_panel_funcs;
-	jdi->base.dev = &jdi->dsi->dev;
+	drm_panel_init(&jdi->base, &jdi->dsi->dev, &jdi_panel_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	ret = drm_panel_add(&jdi->base);
 
+2 -3
drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
···
 	if (IS_ERR(kingdisplay->backlight))
 		return PTR_ERR(kingdisplay->backlight);
 
-	drm_panel_init(&kingdisplay->base);
-	kingdisplay->base.funcs = &kingdisplay_panel_funcs;
-	kingdisplay->base.dev = &kingdisplay->link->dev;
+	drm_panel_init(&kingdisplay->base, &kingdisplay->link->dev,
+		       &kingdisplay_panel_funcs, DRM_MODE_CONNECTOR_DSI);
 
 	return drm_panel_add(&kingdisplay->base);
 }
+2 -3
drivers/gpu/drm/panel/panel-lg-lb035q02.c
···
 	if (ret < 0)
 		return ret;
 
-	drm_panel_init(&lcd->panel);
-	lcd->panel.dev = &lcd->spi->dev;
-	lcd->panel.funcs = &lb035q02_funcs;
+	drm_panel_init(&lcd->panel, &lcd->spi->dev, &lb035q02_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&lcd->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-lg-lg4573.c
···
 		return ret;
 	}
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = &spi->dev;
-	ctx->panel.funcs = &lg4573_drm_funcs;
+	drm_panel_init(&ctx->panel, &spi->dev, &lg4573_drm_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&ctx->panel);
 }
+6 -20
drivers/gpu/drm/panel/panel-lvds.c
···
 static int panel_lvds_probe(struct platform_device *pdev)
 {
 	struct panel_lvds *lvds;
-	struct device_node *np;
 	int ret;
 
 	lvds = devm_kzalloc(&pdev->dev, sizeof(*lvds), GFP_KERNEL);
···
 		return ret;
 	}
 
-	np = of_parse_phandle(lvds->dev->of_node, "backlight", 0);
-	if (np) {
-		lvds->backlight = of_find_backlight_by_node(np);
-		of_node_put(np);
-
-		if (!lvds->backlight)
-			return -EPROBE_DEFER;
-	}
+	lvds->backlight = devm_of_find_backlight(lvds->dev);
+	if (IS_ERR(lvds->backlight))
+		return PTR_ERR(lvds->backlight);
 
 	/*
 	 * TODO: Handle all power supplies specified in the DT node in a generic
···
 	 */
 
 	/* Register the panel. */
-	drm_panel_init(&lvds->panel);
-	lvds->panel.dev = lvds->dev;
-	lvds->panel.funcs = &panel_lvds_funcs;
+	drm_panel_init(&lvds->panel, lvds->dev, &panel_lvds_funcs,
+		       DRM_MODE_CONNECTOR_LVDS);
 
 	ret = drm_panel_add(&lvds->panel);
 	if (ret < 0)
-		goto error;
+		return ret;
 
 	dev_set_drvdata(lvds->dev, lvds);
 	return 0;
-
-error:
-	put_device(&lvds->backlight->dev);
-	return ret;
 }
 
 static int panel_lvds_remove(struct platform_device *pdev)
···
 	drm_panel_remove(&lvds->panel);
 
 	panel_lvds_disable(&lvds->panel);
-
-	if (lvds->backlight)
-		put_device(&lvds->backlight->dev);
 
 	return 0;
 }
+2 -3
drivers/gpu/drm/panel/panel-nec-nl8048hl11.c
···
 	if (ret < 0)
 		return ret;
 
-	drm_panel_init(&lcd->panel);
-	lcd->panel.dev = &lcd->spi->dev;
-	lcd->panel.funcs = &nl8048_funcs;
+	drm_panel_init(&lcd->panel, &lcd->spi->dev, &nl8048_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&lcd->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-novatek-nt39016.c
···
 		return err;
 	}
 
-	drm_panel_init(&panel->drm_panel);
-	panel->drm_panel.dev = dev;
-	panel->drm_panel.funcs = &nt39016_funcs;
+	drm_panel_init(&panel->drm_panel, dev, &nt39016_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	err = drm_panel_add(&panel->drm_panel);
 	if (err < 0) {
+2 -3
drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c
···
 	if (IS_ERR(lcd->backlight))
 		return PTR_ERR(lcd->backlight);
 
-	drm_panel_init(&lcd->panel);
-	lcd->panel.dev = dev;
-	lcd->panel.funcs = &lcd_olinuxino_funcs;
+	drm_panel_init(&lcd->panel, dev, &lcd_olinuxino_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&lcd->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
···
 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
 			  MIPI_DSI_MODE_LPM;
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = dev;
-	ctx->panel.funcs = &otm8009a_drm_funcs;
+	drm_panel_init(&ctx->panel, dev, &otm8009a_drm_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
 						     dsi->host->dev, ctx,
+2 -3
drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c
···
 	if (IS_ERR(osd101t2587->backlight))
 		return PTR_ERR(osd101t2587->backlight);
 
-	drm_panel_init(&osd101t2587->base);
-	osd101t2587->base.funcs = &osd101t2587_panel_funcs;
-	osd101t2587->base.dev = &osd101t2587->dsi->dev;
+	drm_panel_init(&osd101t2587->base, &osd101t2587->dsi->dev,
+		       &osd101t2587_panel_funcs, DRM_MODE_CONNECTOR_DSI);
 
 	return drm_panel_add(&osd101t2587->base);
 }
+2 -3
drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
···
 		return -EPROBE_DEFER;
 	}
 
-	drm_panel_init(&wuxga_nt->base);
-	wuxga_nt->base.funcs = &wuxga_nt_panel_funcs;
-	wuxga_nt->base.dev = &wuxga_nt->dsi->dev;
+	drm_panel_init(&wuxga_nt->base, &wuxga_nt->dsi->dev,
+		       &wuxga_nt_panel_funcs, DRM_MODE_CONNECTOR_DSI);
 
 	ret = drm_panel_add(&wuxga_nt->base);
 	if (ret < 0)
+2 -2
drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
···
 		return PTR_ERR(ts->dsi);
 	}
 
-	ts->base.dev = dev;
-	ts->base.funcs = &rpi_touchscreen_funcs;
+	drm_panel_init(&ts->base, dev, &rpi_touchscreen_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	/* This appears last, as it's what will unblock the DSI host
 	 * driver's component bind function.
+2 -3
drivers/gpu/drm/panel/panel-raydium-rm67191.c
···
 	if (ret)
 		return ret;
 
-	drm_panel_init(&panel->panel);
-	panel->panel.funcs = &rad_panel_funcs;
-	panel->panel.dev = dev;
+	drm_panel_init(&panel->panel, dev, &rad_panel_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 	dev_set_drvdata(dev, panel);
 
 	ret = drm_panel_add(&panel->panel);
+2 -3
drivers/gpu/drm/panel/panel-raydium-rm68200.c
···
 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
 			  MIPI_DSI_MODE_LPM;
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = dev;
-	ctx->panel.funcs = &rm68200_drm_funcs;
+	drm_panel_init(&ctx->panel, dev, &rm68200_drm_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	drm_panel_add(&ctx->panel);
 
+2 -3
drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c
···
 		return ret;
 	}
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = dev;
-	ctx->panel.funcs = &jh057n_drm_funcs;
+	drm_panel_init(&ctx->panel, dev, &jh057n_drm_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	drm_panel_add(&ctx->panel);
 
+2 -3
drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
···
 	mipi_dsi_set_drvdata(dsi, ctx);
 	ctx->dsi = dsi;
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = &dsi->dev;
-	ctx->panel.funcs = &rb070d30_panel_funcs;
+	drm_panel_init(&ctx->panel, &dsi->dev, &rb070d30_panel_funcs,
+		       DRM_MODE_CONNECTOR_DSI);
 
 	ctx->gpios.reset = devm_gpiod_get(&dsi->dev, "reset", GPIOD_OUT_LOW);
 	if (IS_ERR(ctx->gpios.reset)) {
+2 -3
drivers/gpu/drm/panel/panel-samsung-ld9040.c
···
 		return ret;
 	}
 
-	drm_panel_init(&ctx->panel);
-	ctx->panel.dev = dev;
-	ctx->panel.funcs = &ld9040_drm_funcs;
+	drm_panel_init(&ctx->panel, dev, &ld9040_drm_funcs,
+		       DRM_MODE_CONNECTOR_DPI);
 
 	return drm_panel_add(&ctx->panel);
 }
+2 -3
drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
··· 215 215 return ret; 216 216 } 217 217 218 - drm_panel_init(&s6->panel); 219 - s6->panel.dev = dev; 220 - s6->panel.funcs = &s6d16d0_drm_funcs; 218 + drm_panel_init(&s6->panel, dev, &s6d16d0_drm_funcs, 219 + DRM_MODE_CONNECTOR_DSI); 221 220 222 221 ret = drm_panel_add(&s6->panel); 223 222 if (ret < 0)
+2 -3
drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
··· 732 732 ctx->bl_dev->props.brightness = S6E3HA2_DEFAULT_BRIGHTNESS; 733 733 ctx->bl_dev->props.power = FB_BLANK_POWERDOWN; 734 734 735 - drm_panel_init(&ctx->panel); 736 - ctx->panel.dev = dev; 737 - ctx->panel.funcs = &s6e3ha2_drm_funcs; 735 + drm_panel_init(&ctx->panel, dev, &s6e3ha2_drm_funcs, 736 + DRM_MODE_CONNECTOR_DSI); 738 737 739 738 ret = drm_panel_add(&ctx->panel); 740 739 if (ret < 0)
+2 -3
drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
··· 466 466 return PTR_ERR(ctx->reset_gpio); 467 467 } 468 468 469 - drm_panel_init(&ctx->panel); 470 - ctx->panel.dev = dev; 471 - ctx->panel.funcs = &s6e63j0x03_funcs; 469 + drm_panel_init(&ctx->panel, dev, &s6e63j0x03_funcs, 470 + DRM_MODE_CONNECTOR_DSI); 472 471 473 472 ctx->bl_dev = backlight_device_register("s6e63j0x03", dev, ctx, 474 473 &s6e63j0x03_bl_ops, NULL);
+2 -3
drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
··· 473 473 return ret; 474 474 } 475 475 476 - drm_panel_init(&ctx->panel); 477 - ctx->panel.dev = dev; 478 - ctx->panel.funcs = &s6e63m0_drm_funcs; 476 + drm_panel_init(&ctx->panel, dev, &s6e63m0_drm_funcs, 477 + DRM_MODE_CONNECTOR_DPI); 479 478 480 479 ret = s6e63m0_backlight_register(ctx); 481 480 if (ret < 0)
+2 -3
drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
··· 1017 1017 1018 1018 ctx->brightness = GAMMA_LEVEL_NUM - 1; 1019 1019 1020 - drm_panel_init(&ctx->panel); 1021 - ctx->panel.dev = dev; 1022 - ctx->panel.funcs = &s6e8aa0_drm_funcs; 1020 + drm_panel_init(&ctx->panel, dev, &s6e8aa0_drm_funcs, 1021 + DRM_MODE_CONNECTOR_DSI); 1023 1022 1024 1023 ret = drm_panel_add(&ctx->panel); 1025 1024 if (ret < 0)
+2 -3
drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
··· 274 274 return -EPROBE_DEFER; 275 275 } 276 276 277 - drm_panel_init(&panel->base); 278 - panel->base.dev = dev; 279 - panel->base.funcs = &seiko_panel_funcs; 277 + drm_panel_init(&panel->base, dev, &seiko_panel_funcs, 278 + DRM_MODE_CONNECTOR_DPI); 280 279 281 280 err = drm_panel_add(&panel->base); 282 281 if (err < 0)
+2 -3
drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c
··· 329 329 if (IS_ERR(sharp->backlight)) 330 330 return PTR_ERR(sharp->backlight); 331 331 332 - drm_panel_init(&sharp->base); 333 - sharp->base.funcs = &sharp_panel_funcs; 334 - sharp->base.dev = &sharp->link1->dev; 332 + drm_panel_init(&sharp->base, &sharp->link1->dev, &sharp_panel_funcs, 333 + DRM_MODE_CONNECTOR_DSI); 335 334 336 335 return drm_panel_add(&sharp->base); 337 336 }
+2 -3
drivers/gpu/drm/panel/panel-sharp-ls037v7dw01.c
··· 185 185 return PTR_ERR(lcd->ud_gpio); 186 186 } 187 187 188 - drm_panel_init(&lcd->panel); 189 - lcd->panel.dev = &pdev->dev; 190 - lcd->panel.funcs = &ls037v7dw01_funcs; 188 + drm_panel_init(&lcd->panel, &pdev->dev, &ls037v7dw01_funcs, 189 + DRM_MODE_CONNECTOR_DPI); 191 190 192 191 return drm_panel_add(&lcd->panel); 193 192 }
+2 -3
drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
··· 264 264 if (IS_ERR(sharp_nt->backlight)) 265 265 return PTR_ERR(sharp_nt->backlight); 266 266 267 - drm_panel_init(&sharp_nt->base); 268 - sharp_nt->base.funcs = &sharp_nt_panel_funcs; 269 - sharp_nt->base.dev = &sharp_nt->dsi->dev; 267 + drm_panel_init(&sharp_nt->base, &sharp_nt->dsi->dev, 268 + &sharp_nt_panel_funcs, DRM_MODE_CONNECTOR_DSI); 270 269 271 270 return drm_panel_add(&sharp_nt->base); 272 271 }
+26 -3
drivers/gpu/drm/panel/panel-simple.c
··· 94 94 95 95 u32 bus_format; 96 96 u32 bus_flags; 97 + int connector_type; 97 98 }; 98 99 99 100 struct panel_simple { ··· 465 464 if (!of_get_display_timing(dev->of_node, "panel-timing", &dt)) 466 465 panel_simple_parse_panel_timing_node(dev, panel, &dt); 467 466 468 - drm_panel_init(&panel->base); 469 - panel->base.dev = dev; 470 - panel->base.funcs = &panel_simple_funcs; 467 + drm_panel_init(&panel->base, dev, &panel_simple_funcs, 468 + desc->connector_type); 471 469 472 470 err = drm_panel_add(&panel->base); 473 471 if (err < 0) ··· 833 833 .unprepare = 1000, 834 834 }, 835 835 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA, 836 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 836 837 }; 837 838 838 839 static const struct display_timing auo_g185han01_timings = { ··· 863 862 .unprepare = 1000, 864 863 }, 865 864 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 865 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 866 866 }; 867 867 868 868 static const struct display_timing auo_p320hvn03_timings = { ··· 892 890 .unprepare = 500, 893 891 }, 894 892 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 893 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 895 894 }; 896 895 897 896 static const struct drm_display_mode auo_t215hvn01_mode = { ··· 1208 1205 .disable = 200, 1209 1206 }, 1210 1207 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 1208 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1211 1209 }; 1212 1210 1213 1211 static const struct display_timing dlc_dlc1010gig_timing = { ··· 1239 1235 .unprepare = 60, 1240 1236 }, 1241 1237 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1238 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1242 1239 }; 1243 1240 1244 1241 static const struct drm_display_mode edt_et035012dm6_mode = { ··· 1506 1501 .height = 94, 1507 1502 }, 1508 1503 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 1504 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1509 1505 }; 1510 1506 1511 1507 static const struct display_timing hannstar_hsd100pxn1_timing = { ··· 1531 1525 
.height = 152, 1532 1526 }, 1533 1527 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 1528 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1534 1529 }; 1535 1530 1536 1531 static const struct drm_display_mode hitachi_tx23d38vm0caa_mode = { ··· 1638 1631 .unprepare = 800, 1639 1632 }, 1640 1633 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1634 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1641 1635 }; 1642 1636 1643 1637 static const struct display_timing innolux_g101ice_l01_timing = { ··· 1667 1659 .disable = 200, 1668 1660 }, 1669 1661 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1662 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1670 1663 }; 1671 1664 1672 1665 static const struct display_timing innolux_g121i1_l01_timing = { ··· 1695 1686 .disable = 20, 1696 1687 }, 1697 1688 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1689 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1698 1690 }; 1699 1691 1700 1692 static const struct drm_display_mode innolux_g121x1_l03_mode = { ··· 1879 1869 .height = 109, 1880 1870 }, 1881 1871 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 1872 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1882 1873 }; 1883 1874 1884 1875 static const struct display_timing kyo_tcg121xglp_timing = { ··· 1904 1893 .height = 184, 1905 1894 }, 1906 1895 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1896 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1907 1897 }; 1908 1898 1909 1899 static const struct drm_display_mode lemaker_bl035_rgb_002_mode = { ··· 1953 1941 .height = 91, 1954 1942 }, 1955 1943 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 1944 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 1956 1945 }; 1957 1946 1958 1947 static const struct drm_display_mode lg_lp079qx1_sp0v_mode = { ··· 2076 2063 .disable = 400, 2077 2064 }, 2078 2065 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2066 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2079 2067 .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2080 2068 }; 2081 2069 ··· 2105 2091 .disable = 50, 2106 2092 }, 2107 2093 
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2094 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2108 2095 }; 2109 2096 2110 2097 static const struct drm_display_mode nec_nl4827hc19_05b_mode = { ··· 2208 2193 .unprepare = 500, 2209 2194 }, 2210 2195 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2196 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2211 2197 }; 2212 2198 2213 2199 static const struct drm_display_mode nvd_9128_mode = { ··· 2232 2216 .height = 88, 2233 2217 }, 2234 2218 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2219 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2235 2220 }; 2236 2221 2237 2222 static const struct display_timing okaya_rs800480t_7x0gp_timing = { ··· 2398 2381 }, 2399 2382 .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2400 2383 .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE, 2384 + .connector_type = DRM_MODE_CONNECTOR_DPI, 2401 2385 }; 2402 2386 2403 2387 static const struct drm_display_mode pda_91_00156_a0_mode = { ··· 2646 2628 .height = 136, 2647 2629 }, 2648 2630 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA, 2631 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2649 2632 }; 2650 2633 2651 2634 static const struct display_timing sharp_lq123p1jx31_timing = { ··· 2826 2807 .height = 95, 2827 2808 }, 2828 2809 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2810 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2829 2811 }; 2830 2812 2831 2813 static const struct display_timing tianma_tm070rvhg71_timing = { ··· 2851 2831 .height = 86, 2852 2832 }, 2853 2833 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2834 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2854 2835 }; 2855 2836 2856 2837 static const struct drm_display_mode ti_nspire_cx_lcd_mode[] = { ··· 2934 2913 }, 2935 2914 .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2936 2915 .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE, 2916 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2937 2917 }; 2938 2918 2939 2919 static const struct drm_display_mode 
tpk_f07a_0102_mode = { ··· 3005 2983 .height = 91, 3006 2984 }, 3007 2985 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 2986 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 3008 2987 }; 3009 2988 3010 2989 static const struct panel_desc urt_umsh_8596md_parallel = {
+2 -3
drivers/gpu/drm/panel/panel-sitronix-st7701.c
··· 369 369 if (IS_ERR(st7701->backlight)) 370 370 return PTR_ERR(st7701->backlight); 371 371 372 - drm_panel_init(&st7701->panel); 372 + drm_panel_init(&st7701->panel, &dsi->dev, &st7701_funcs, 373 + DRM_MODE_CONNECTOR_DSI); 373 374 374 375 /** 375 376 * Once sleep out has been issued, ST7701 IC required to wait 120ms ··· 382 381 * ts8550b and there is no valid documentation for that. 383 382 */ 384 383 st7701->sleep_delay = 120 + desc->panel_sleep_delay; 385 - st7701->panel.funcs = &st7701_funcs; 386 - st7701->panel.dev = &dsi->dev; 387 384 388 385 ret = drm_panel_add(&st7701->panel); 389 386 if (ret < 0)
+2 -2
drivers/gpu/drm/panel/panel-sitronix-st7789v.c
··· 381 381 spi_set_drvdata(spi, ctx); 382 382 ctx->spi = spi; 383 383 384 - ctx->panel.dev = &spi->dev; 385 - ctx->panel.funcs = &st7789v_drm_funcs; 384 + drm_panel_init(&ctx->panel, &spi->dev, &st7789v_drm_funcs, 385 + DRM_MODE_CONNECTOR_DPI); 386 386 387 387 ctx->power = devm_regulator_get(&spi->dev, "power"); 388 388 if (IS_ERR(ctx->power))
+2 -3
drivers/gpu/drm/panel/panel-sony-acx565akm.c
··· 648 648 return ret; 649 649 } 650 650 651 - drm_panel_init(&lcd->panel); 652 - lcd->panel.dev = &lcd->spi->dev; 653 - lcd->panel.funcs = &acx565akm_funcs; 651 + drm_panel_init(&lcd->panel, &lcd->spi->dev, &acx565akm_funcs, 652 + DRM_MODE_CONNECTOR_DPI); 654 653 655 654 ret = drm_panel_add(&lcd->panel); 656 655 if (ret < 0) {
+2 -3
drivers/gpu/drm/panel/panel-tpo-td028ttec1.c
··· 347 347 return ret; 348 348 } 349 349 350 - drm_panel_init(&lcd->panel); 351 - lcd->panel.dev = &lcd->spi->dev; 352 - lcd->panel.funcs = &td028ttec1_funcs; 350 + drm_panel_init(&lcd->panel, &lcd->spi->dev, &td028ttec1_funcs, 351 + DRM_MODE_CONNECTOR_DPI); 353 352 354 353 return drm_panel_add(&lcd->panel); 355 354 }
+2 -3
drivers/gpu/drm/panel/panel-tpo-td043mtea1.c
··· 458 458 return ret; 459 459 } 460 460 461 - drm_panel_init(&lcd->panel); 462 - lcd->panel.dev = &lcd->spi->dev; 463 - lcd->panel.funcs = &td043mtea1_funcs; 461 + drm_panel_init(&lcd->panel, &lcd->spi->dev, &td043mtea1_funcs, 462 + DRM_MODE_CONNECTOR_DPI); 464 463 465 464 ret = drm_panel_add(&lcd->panel); 466 465 if (ret < 0) {
+2 -3
drivers/gpu/drm/panel/panel-tpo-tpg110.c
··· 457 457 if (ret) 458 458 return ret; 459 459 460 - drm_panel_init(&tpg->panel); 461 - tpg->panel.dev = dev; 462 - tpg->panel.funcs = &tpg110_drm_funcs; 460 + drm_panel_init(&tpg->panel, dev, &tpg110_drm_funcs, 461 + DRM_MODE_CONNECTOR_DPI); 463 462 spi_set_drvdata(spi, tpg); 464 463 465 464 return drm_panel_add(&tpg->panel);
+2 -3
drivers/gpu/drm/panel/panel-truly-nt35597.c
··· 518 518 /* dual port */ 519 519 gpiod_set_value(ctx->mode_gpio, 0); 520 520 521 - drm_panel_init(&ctx->panel); 522 - ctx->panel.dev = dev; 523 - ctx->panel.funcs = &truly_nt35597_drm_funcs; 521 + drm_panel_init(&ctx->panel, dev, &truly_nt35597_drm_funcs, 522 + DRM_MODE_CONNECTOR_DSI); 524 523 drm_panel_add(&ctx->panel); 525 524 526 525 return 0;
+4 -2
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 53 53 if (err) { 54 54 dev_err(dev, "Cannot set frequency %lu (%d)\n", target_rate, 55 55 err); 56 - regulator_set_voltage(pfdev->regulator, pfdev->devfreq.cur_volt, 57 - pfdev->devfreq.cur_volt); 56 + if (pfdev->regulator) 57 + regulator_set_voltage(pfdev->regulator, 58 + pfdev->devfreq.cur_volt, 59 + pfdev->devfreq.cur_volt); 58 60 return err; 59 61 } 60 62
+81
drivers/gpu/drm/panfrost/panfrost_issues.h
··· 13 13 * to care about. 14 14 */ 15 15 enum panfrost_hw_issue { 16 + /* Need way to guarantee that all previously-translated memory accesses 17 + * are committed */ 16 18 HW_ISSUE_6367, 19 + 20 + /* On job complete with non-done the cache is not flushed */ 17 21 HW_ISSUE_6787, 22 + 23 + /* Write of PRFCNT_CONFIG_MODE_MANUAL to PRFCNT_CONFIG causes an 24 + * instrumentation dump if PRFCNT_TILER_EN is enabled */ 18 25 HW_ISSUE_8186, 26 + 27 + /* TIB: Reports faults from a vtile which has not yet been allocated */ 19 28 HW_ISSUE_8245, 29 + 30 + /* uTLB deadlock could occur when writing to an invalid page at the 31 + * same time as access to a valid page in the same uTLB cache line ( == 32 + * 4 PTEs == 16K block of mapping) */ 20 33 HW_ISSUE_8316, 34 + 35 + /* HT: TERMINATE for RUN command ignored if previous LOAD_DESCRIPTOR is 36 + * still executing */ 21 37 HW_ISSUE_8394, 38 + 39 + /* CSE: Sends a TERMINATED response for a task that should not be 40 + * terminated */ 22 41 HW_ISSUE_8401, 42 + 43 + /* Repeatedly Soft-stopping a job chain consisting of (Vertex Shader, 44 + * Cache Flush, Tiler) jobs causes DATA_INVALID_FAULT on tiler job. */ 23 45 HW_ISSUE_8408, 46 + 47 + /* Disable the Pause Buffer in the LS pipe. 
*/ 24 48 HW_ISSUE_8443, 49 + 50 + /* Change in RMUs in use causes problems related with the core's SDC */ 25 51 HW_ISSUE_8987, 52 + 53 + /* Compute endpoint has a 4-deep queue of tasks, meaning a soft stop 54 + * won't complete until all 4 tasks have completed */ 26 55 HW_ISSUE_9435, 56 + 57 + /* HT: Tiler returns TERMINATED for non-terminated command */ 27 58 HW_ISSUE_9510, 59 + 60 + /* Occasionally the GPU will issue multiple page faults for the same 61 + * address before the MMU page table has been read by the GPU */ 28 62 HW_ISSUE_9630, 63 + 64 + /* RA DCD load request to SDC returns invalid load ignore causing 65 + * colour buffer mismatch */ 29 66 HW_ISSUE_10327, 67 + 68 + /* MMU TLB invalidation hazards */ 30 69 HW_ISSUE_10649, 70 + 71 + /* Missing cache flush in multi core-group configuration */ 31 72 HW_ISSUE_10676, 73 + 74 + /* Chicken bit on T72X for a hardware workaround in compiler */ 32 75 HW_ISSUE_10797, 76 + 77 + /* Soft-stopping fragment jobs might fail with TILE_RANGE_FAULT */ 33 78 HW_ISSUE_10817, 79 + 80 + /* Intermittent missing interrupt on job completion */ 34 81 HW_ISSUE_10883, 82 + 83 + /* Soft-stopping fragment jobs might fail with TILE_RANGE_ERROR 84 + * (similar to issue 10817) and can use #10817 workaround */ 35 85 HW_ISSUE_10959, 86 + 87 + /* Soft-stopped fragment shader job can restart with out-of-bound 88 + * restart index */ 36 89 HW_ISSUE_10969, 90 + 91 + /* Race condition can cause tile list corruption */ 37 92 HW_ISSUE_11020, 93 + 94 + /* Write buffer can cause tile list corruption */ 38 95 HW_ISSUE_11024, 96 + 97 + /* Pause buffer can cause a fragment job hang */ 39 98 HW_ISSUE_11035, 99 + 100 + /* Dynamic Core Scaling not supported due to errata */ 40 101 HW_ISSUE_11056, 102 + 103 + /* Clear encoder state for a hard stopped fragment job which is AFBC 104 + * encoded by soft resetting the GPU. 
Only for T76X r0p0, r0p1 and 105 + * r0p1_50rel0 */ 41 106 HW_ISSUE_T76X_3542, 107 + 108 + /* Keep tiler module clock on to prevent GPU stall */ 42 109 HW_ISSUE_T76X_3953, 110 + 111 + /* Must ensure L2 is not transitioning when we reset. Workaround with a 112 + * busy wait until L2 completes transition; ensure there is a maximum 113 + * loop count as she may never complete her transition. (On chips 114 + * without this errata, it's totally okay if L2 transitions.) */ 43 115 HW_ISSUE_TMIX_8463, 116 + 117 + /* Don't set SC_LS_ATTR_CHECK_DISABLE/SC_LS_ALLOW_ATTR_TYPES */ 44 118 GPUCORE_1619, 119 + 120 + /* When a hard-stop follows close after a soft-stop, the completion 121 + * code for the terminated job may be incorrectly set to STOPPED */ 45 122 HW_ISSUE_TMIX_8438, 123 + 124 + /* "Protected mode" is buggy on Mali-G31 some Bifrost chips, so the 125 + * kernel must fiddle with L2 caches to prevent data leakage */ 46 126 HW_ISSUE_TGOX_R1_1234, 127 + 47 128 HW_ISSUE_END 48 129 }; 49 130
+2 -2
drivers/gpu/drm/pl111/pl111_drv.c
··· 150 150 return -EPROBE_DEFER; 151 151 152 152 if (panel) { 153 - bridge = drm_panel_bridge_add(panel, 154 - DRM_MODE_CONNECTOR_Unknown); 153 + bridge = drm_panel_bridge_add_typed(panel, 154 + DRM_MODE_CONNECTOR_Unknown); 155 155 if (IS_ERR(bridge)) { 156 156 ret = PTR_ERR(bridge); 157 157 goto out_config;
+1 -9
drivers/gpu/drm/qxl/qxl_drv.c
··· 88 88 if (ret) 89 89 goto free_dev; 90 90 91 - ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "qxl"); 91 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "qxl"); 92 92 if (ret) 93 93 goto disable_pci; 94 94 ··· 276 276 #endif 277 277 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 278 278 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 279 - .gem_prime_pin = qxl_gem_prime_pin, 280 - .gem_prime_unpin = qxl_gem_prime_unpin, 281 - .gem_prime_get_sg_table = qxl_gem_prime_get_sg_table, 282 279 .gem_prime_import_sg_table = qxl_gem_prime_import_sg_table, 283 - .gem_prime_vmap = qxl_gem_prime_vmap, 284 - .gem_prime_vunmap = qxl_gem_prime_vunmap, 285 280 .gem_prime_mmap = qxl_gem_prime_mmap, 286 - .gem_free_object_unlocked = qxl_gem_object_free, 287 - .gem_open_object = qxl_gem_object_open, 288 - .gem_close_object = qxl_gem_object_close, 289 281 .fops = &qxl_fops, 290 282 .ioctls = qxl_ioctls, 291 283 .irq_handler = qxl_irq_handler,
+1
drivers/gpu/drm/qxl/qxl_drv.h
··· 38 38 #include <drm/drm_crtc.h> 39 39 #include <drm/drm_encoder.h> 40 40 #include <drm/drm_fb_helper.h> 41 + #include <drm/drm_gem_ttm_helper.h> 41 42 #include <drm/drm_ioctl.h> 42 43 #include <drm/drm_gem.h> 43 44 #include <drm/qxl_drm.h>
+13
drivers/gpu/drm/qxl/qxl_object.c
··· 77 77 } 78 78 } 79 79 80 + static const struct drm_gem_object_funcs qxl_object_funcs = { 81 + .free = qxl_gem_object_free, 82 + .open = qxl_gem_object_open, 83 + .close = qxl_gem_object_close, 84 + .pin = qxl_gem_prime_pin, 85 + .unpin = qxl_gem_prime_unpin, 86 + .get_sg_table = qxl_gem_prime_get_sg_table, 87 + .vmap = qxl_gem_prime_vmap, 88 + .vunmap = qxl_gem_prime_vunmap, 89 + .print_info = drm_gem_ttm_print_info, 90 + }; 91 + 80 92 int qxl_bo_create(struct qxl_device *qdev, 81 93 unsigned long size, bool kernel, bool pinned, u32 domain, 82 94 struct qxl_surface *surf, ··· 112 100 kfree(bo); 113 101 return r; 114 102 } 103 + bo->tbo.base.funcs = &qxl_object_funcs; 115 104 bo->type = domain; 116 105 bo->pin_count = pinned ? 1 : 0; 117 106 bo->surface_id = 0;
+1
drivers/gpu/drm/qxl/qxl_ttm.c
··· 325 325 r = ttm_bo_device_init(&qdev->mman.bdev, 326 326 &qxl_bo_driver, 327 327 qdev->ddev.anon_inode->i_mapping, 328 + qdev->ddev.vma_offset_manager, 328 329 false); 329 330 if (r) { 330 331 DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
+9 -18
drivers/gpu/drm/radeon/radeon_connectors.c
··· 249 249 struct drm_encoder *encoder; 250 250 const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private; 251 251 bool connected; 252 - int i; 253 252 254 253 best_encoder = connector_funcs->best_encoder(connector); 255 254 256 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 255 + drm_connector_for_each_possible_encoder(connector, encoder) { 257 256 if ((encoder == best_encoder) && (status == connector_status_connected)) 258 257 connected = true; 259 258 else ··· 268 269 static struct drm_encoder *radeon_find_encoder(struct drm_connector *connector, int encoder_type) 269 270 { 270 271 struct drm_encoder *encoder; 271 - int i; 272 272 273 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 273 + drm_connector_for_each_possible_encoder(connector, encoder) { 274 274 if (encoder->encoder_type == encoder_type) 275 275 return encoder; 276 276 } ··· 378 380 static struct drm_encoder *radeon_best_single_encoder(struct drm_connector *connector) 379 381 { 380 382 struct drm_encoder *encoder; 381 - int i; 382 383 383 384 /* pick the first one */ 384 - drm_connector_for_each_possible_encoder(connector, encoder, i) 385 + drm_connector_for_each_possible_encoder(connector, encoder) 385 386 return encoder; 386 387 387 388 return NULL; ··· 425 428 426 429 list_for_each_entry(conflict, &dev->mode_config.connector_list, head) { 427 430 struct drm_encoder *enc; 428 - int i; 429 431 430 432 if (conflict == connector) 431 433 continue; 432 434 433 435 radeon_conflict = to_radeon_connector(conflict); 434 436 435 - drm_connector_for_each_possible_encoder(conflict, enc, i) { 437 + drm_connector_for_each_possible_encoder(conflict, enc) { 436 438 /* if the IDs match */ 437 439 if (enc == encoder) { 438 440 if (conflict->status != connector_status_connected) ··· 1359 1363 1360 1364 /* find analog encoder */ 1361 1365 if (radeon_connector->dac_load_detect) { 1362 - int i; 1363 - 1364 - 
drm_connector_for_each_possible_encoder(connector, encoder, i) { 1366 + drm_connector_for_each_possible_encoder(connector, encoder) { 1365 1367 if (encoder->encoder_type != DRM_MODE_ENCODER_DAC && 1366 1368 encoder->encoder_type != DRM_MODE_ENCODER_TVDAC) 1367 1369 continue; ··· 1437 1443 { 1438 1444 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 1439 1445 struct drm_encoder *encoder; 1440 - int i; 1441 1446 1442 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 1447 + drm_connector_for_each_possible_encoder(connector, encoder) { 1443 1448 if (radeon_connector->use_digital == true) { 1444 1449 if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS) 1445 1450 return encoder; ··· 1453 1460 1454 1461 /* then check use digitial */ 1455 1462 /* pick the first one */ 1456 - drm_connector_for_each_possible_encoder(connector, encoder, i) 1463 + drm_connector_for_each_possible_encoder(connector, encoder) 1457 1464 return encoder; 1458 1465 1459 1466 return NULL; ··· 1596 1603 { 1597 1604 struct drm_encoder *encoder; 1598 1605 struct radeon_encoder *radeon_encoder; 1599 - int i; 1600 1606 1601 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 1607 + drm_connector_for_each_possible_encoder(connector, encoder) { 1602 1608 radeon_encoder = to_radeon_encoder(encoder); 1603 1609 1604 1610 switch (radeon_encoder->encoder_id) { ··· 1616 1624 { 1617 1625 struct drm_encoder *encoder; 1618 1626 struct radeon_encoder *radeon_encoder; 1619 - int i; 1620 1627 bool found = false; 1621 1628 1622 - drm_connector_for_each_possible_encoder(connector, encoder, i) { 1629 + drm_connector_for_each_possible_encoder(connector, encoder) { 1623 1630 radeon_encoder = to_radeon_encoder(encoder); 1624 1631 if (radeon_encoder->caps & ATOM_ENCODER_CAP_RECORD_HBR2) 1625 1632 found = true;
+1 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 361 361 return -EPROBE_DEFER; 362 362 363 363 /* Get rid of things like offb */ 364 - ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "radeondrmfb"); 364 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "radeondrmfb"); 365 365 if (ret) 366 366 return ret; 367 367
+1
drivers/gpu/drm/radeon/radeon_ttm.c
··· 794 794 r = ttm_bo_device_init(&rdev->mman.bdev, 795 795 &radeon_bo_driver, 796 796 rdev->ddev->anon_inode->i_mapping, 797 + rdev->ddev->vma_offset_manager, 797 798 dma_addressing_limited(&rdev->pdev->dev)); 798 799 if (r) { 799 800 DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
+3 -2
drivers/gpu/drm/rcar-du/rcar_du_encoder.c
··· 9 9 10 10 #include <linux/export.h> 11 11 12 + #include <drm/drm_bridge.h> 12 13 #include <drm/drm_crtc.h> 13 14 #include <drm/drm_modeset_helper_vtables.h> 14 15 #include <drm/drm_panel.h> ··· 85 84 goto done; 86 85 } 87 86 88 - bridge = devm_drm_panel_bridge_add(rcdu->dev, panel, 89 - DRM_MODE_CONNECTOR_DPI); 87 + bridge = devm_drm_panel_bridge_add_typed(rcdu->dev, panel, 88 + DRM_MODE_CONNECTOR_DPI); 90 89 if (IS_ERR(bridge)) { 91 90 ret = PTR_ERR(bridge); 92 91 goto done;
+1
drivers/gpu/drm/rockchip/rockchip_lvds.c
··· 16 16 #include <linux/regmap.h> 17 17 #include <linux/reset.h> 18 18 #include <drm/drm_atomic_helper.h> 19 + #include <drm/drm_bridge.h> 19 20 20 21 #include <drm/drm_dp_helper.h> 21 22 #include <drm/drm_of.h>
+3 -1
drivers/gpu/drm/rockchip/rockchip_rgb.c
··· 9 9 #include <linux/of_graph.h> 10 10 11 11 #include <drm/drm_atomic_helper.h> 12 + #include <drm/drm_bridge.h> 12 13 #include <drm/drm_dp_helper.h> 13 14 #include <drm/drm_of.h> 14 15 #include <drm/drm_panel.h> ··· 136 135 drm_encoder_helper_add(encoder, &rockchip_rgb_encoder_helper_funcs); 137 136 138 137 if (panel) { 139 - bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_LVDS); 138 + bridge = drm_panel_bridge_add_typed(panel, 139 + DRM_MODE_CONNECTOR_LVDS); 140 140 if (IS_ERR(bridge)) 141 141 return ERR_CAST(bridge); 142 142 }
+1 -1
drivers/gpu/drm/selftests/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 test-drm_modeset-y := test-drm_modeset_common.o test-drm_plane_helper.o \ 3 3 test-drm_format.o test-drm_framebuffer.o \ 4 - test-drm_damage_helper.o 4 + test-drm_damage_helper.o test-drm_dp_mst_helper.o 5 5 6 6 obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o test-drm_cmdline_parser.o
+2
drivers/gpu/drm/selftests/drm_modeset_selftests.h
··· 32 32 selftest(damage_iter_damage_one_outside, igt_damage_iter_damage_one_outside) 33 33 selftest(damage_iter_damage_src_moved, igt_damage_iter_damage_src_moved) 34 34 selftest(damage_iter_damage_not_visible, igt_damage_iter_damage_not_visible) 35 + selftest(dp_mst_calc_pbn_mode, igt_dp_mst_calc_pbn_mode) 36 + selftest(dp_mst_sideband_msg_req_decode, igt_dp_mst_sideband_msg_req_decode)
+238
drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Test cases for the DRM DP MST helpers 4 + */ 5 + 6 + #define PREFIX_STR "[drm_dp_mst_helper]" 7 + 8 + #include <drm/drm_dp_mst_helper.h> 9 + #include <drm/drm_print.h> 10 + 11 + #include "../drm_dp_mst_topology_internal.h" 12 + #include "test-drm_modeset_common.h" 13 + 14 + int igt_dp_mst_calc_pbn_mode(void *ignored) 15 + { 16 + int pbn, i; 17 + const struct { 18 + int rate; 19 + int bpp; 20 + int expected; 21 + } test_params[] = { 22 + { 154000, 30, 689 }, 23 + { 234000, 30, 1047 }, 24 + { 297000, 24, 1063 }, 25 + }; 26 + 27 + for (i = 0; i < ARRAY_SIZE(test_params); i++) { 28 + pbn = drm_dp_calc_pbn_mode(test_params[i].rate, 29 + test_params[i].bpp); 30 + FAIL(pbn != test_params[i].expected, 31 + "Expected PBN %d for clock %d bpp %d, got %d\n", 32 + test_params[i].expected, test_params[i].rate, 33 + test_params[i].bpp, pbn); 34 + } 35 + 36 + return 0; 37 + } 38 + 39 + static bool 40 + sideband_msg_req_equal(const struct drm_dp_sideband_msg_req_body *in, 41 + const struct drm_dp_sideband_msg_req_body *out) 42 + { 43 + const struct drm_dp_remote_i2c_read_tx *txin, *txout; 44 + int i; 45 + 46 + if (in->req_type != out->req_type) 47 + return false; 48 + 49 + switch (in->req_type) { 50 + /* 51 + * Compare struct members manually for request types which can't be 52 + * compared simply using memcmp(). 
This is because said request types
+	 * contain pointers to other allocated structs
+	 */
+	case DP_REMOTE_I2C_READ:
+#define IN in->u.i2c_read
+#define OUT out->u.i2c_read
+		if (IN.num_bytes_read != OUT.num_bytes_read ||
+		    IN.num_transactions != OUT.num_transactions ||
+		    IN.port_number != OUT.port_number ||
+		    IN.read_i2c_device_id != OUT.read_i2c_device_id)
+			return false;
+
+		for (i = 0; i < IN.num_transactions; i++) {
+			txin = &IN.transactions[i];
+			txout = &OUT.transactions[i];
+
+			if (txin->i2c_dev_id != txout->i2c_dev_id ||
+			    txin->no_stop_bit != txout->no_stop_bit ||
+			    txin->num_bytes != txout->num_bytes ||
+			    txin->i2c_transaction_delay !=
+			    txout->i2c_transaction_delay)
+				return false;
+
+			if (memcmp(txin->bytes, txout->bytes,
+				   txin->num_bytes) != 0)
+				return false;
+		}
+		break;
+#undef IN
+#undef OUT
+
+	case DP_REMOTE_DPCD_WRITE:
+#define IN in->u.dpcd_write
+#define OUT out->u.dpcd_write
+		if (IN.dpcd_address != OUT.dpcd_address ||
+		    IN.num_bytes != OUT.num_bytes ||
+		    IN.port_number != OUT.port_number)
+			return false;
+
+		return memcmp(IN.bytes, OUT.bytes, IN.num_bytes) == 0;
+#undef IN
+#undef OUT
+
+	case DP_REMOTE_I2C_WRITE:
+#define IN in->u.i2c_write
+#define OUT out->u.i2c_write
+		if (IN.port_number != OUT.port_number ||
+		    IN.write_i2c_device_id != OUT.write_i2c_device_id ||
+		    IN.num_bytes != OUT.num_bytes)
+			return false;
+
+		return memcmp(IN.bytes, OUT.bytes, IN.num_bytes) == 0;
+#undef IN
+#undef OUT
+
+	default:
+		return memcmp(in, out, sizeof(*in)) == 0;
+	}
+
+	return true;
+}
+
+static bool
+sideband_msg_req_encode_decode(struct drm_dp_sideband_msg_req_body *in)
+{
+	struct drm_dp_sideband_msg_req_body out = {0};
+	struct drm_printer p = drm_err_printer(PREFIX_STR);
+	struct drm_dp_sideband_msg_tx txmsg;
+	int i, ret;
+
+	drm_dp_encode_sideband_req(in, &txmsg);
+	ret = drm_dp_decode_sideband_req(&txmsg, &out);
+	if (ret < 0) {
+		drm_printf(&p, "Failed to decode sideband request: %d\n",
+			   ret);
+		return false;
+	}
+
+	if (!sideband_msg_req_equal(in, &out)) {
+		drm_printf(&p, "Encode/decode failed, expected:\n");
+		drm_dp_dump_sideband_msg_req_body(in, 1, &p);
+		drm_printf(&p, "Got:\n");
+		drm_dp_dump_sideband_msg_req_body(&out, 1, &p);
+		return false;
+	}
+
+	switch (in->req_type) {
+	case DP_REMOTE_DPCD_WRITE:
+		kfree(out.u.dpcd_write.bytes);
+		break;
+	case DP_REMOTE_I2C_READ:
+		for (i = 0; i < out.u.i2c_read.num_transactions; i++)
+			kfree(out.u.i2c_read.transactions[i].bytes);
+		break;
+	case DP_REMOTE_I2C_WRITE:
+		kfree(out.u.i2c_write.bytes);
+		break;
+	}
+
+	/* Clear everything but the req_type for the input */
+	memset(&in->u, 0, sizeof(in->u));
+
+	return true;
+}
+
+int igt_dp_mst_sideband_msg_req_decode(void *unused)
+{
+	struct drm_dp_sideband_msg_req_body in = { 0 };
+	u8 data[] = { 0xff, 0x0, 0xdd };
+	int i;
+
+#define DO_TEST() FAIL_ON(!sideband_msg_req_encode_decode(&in))
+
+	in.req_type = DP_ENUM_PATH_RESOURCES;
+	in.u.port_num.port_number = 5;
+	DO_TEST();
+
+	in.req_type = DP_POWER_UP_PHY;
+	in.u.port_num.port_number = 5;
+	DO_TEST();
+
+	in.req_type = DP_POWER_DOWN_PHY;
+	in.u.port_num.port_number = 5;
+	DO_TEST();
+
+	in.req_type = DP_ALLOCATE_PAYLOAD;
+	in.u.allocate_payload.number_sdp_streams = 3;
+	for (i = 0; i < in.u.allocate_payload.number_sdp_streams; i++)
+		in.u.allocate_payload.sdp_stream_sink[i] = i + 1;
+	DO_TEST();
+	in.u.allocate_payload.port_number = 0xf;
+	DO_TEST();
+	in.u.allocate_payload.vcpi = 0x7f;
+	DO_TEST();
+	in.u.allocate_payload.pbn = U16_MAX;
+	DO_TEST();
+
+	in.req_type = DP_QUERY_PAYLOAD;
+	in.u.query_payload.port_number = 0xf;
+	DO_TEST();
+	in.u.query_payload.vcpi = 0x7f;
+	DO_TEST();
+
+	in.req_type = DP_REMOTE_DPCD_READ;
+	in.u.dpcd_read.port_number = 0xf;
+	DO_TEST();
+	in.u.dpcd_read.dpcd_address = 0xfedcb;
+	DO_TEST();
+	in.u.dpcd_read.num_bytes = U8_MAX;
+	DO_TEST();
+
+	in.req_type = DP_REMOTE_DPCD_WRITE;
+	in.u.dpcd_write.port_number = 0xf;
+	DO_TEST();
+	in.u.dpcd_write.dpcd_address = 0xfedcb;
+	DO_TEST();
+	in.u.dpcd_write.num_bytes = ARRAY_SIZE(data);
+	in.u.dpcd_write.bytes = data;
+	DO_TEST();
+
+	in.req_type = DP_REMOTE_I2C_READ;
+	in.u.i2c_read.port_number = 0xf;
+	DO_TEST();
+	in.u.i2c_read.read_i2c_device_id = 0x7f;
+	DO_TEST();
+	in.u.i2c_read.num_transactions = 3;
+	in.u.i2c_read.num_bytes_read = ARRAY_SIZE(data) * 3;
+	for (i = 0; i < in.u.i2c_read.num_transactions; i++) {
+		in.u.i2c_read.transactions[i].bytes = data;
+		in.u.i2c_read.transactions[i].num_bytes = ARRAY_SIZE(data);
+		in.u.i2c_read.transactions[i].i2c_dev_id = 0x7f & ~i;
+		in.u.i2c_read.transactions[i].i2c_transaction_delay = 0xf & ~i;
+	}
+	DO_TEST();
+
+	in.req_type = DP_REMOTE_I2C_WRITE;
+	in.u.i2c_write.port_number = 0xf;
+	DO_TEST();
+	in.u.i2c_write.write_i2c_device_id = 0x7f;
+	DO_TEST();
+	in.u.i2c_write.num_bytes = ARRAY_SIZE(data);
+	in.u.i2c_write.bytes = data;
+	DO_TEST();
+
+#undef DO_TEST
+	return 0;
+}
+1 -1
drivers/gpu/drm/selftests/test-drm_framebuffer.c
···
 			.handles = { 1, 1, 0 }, .pitches = { MAX_WIDTH, MAX_WIDTH - 1, 0 },
 		}
 	},
-	{ .buffer_created = 0, .name = "NV12 Invalid modifier/misssing DRM_MODE_FB_MODIFIERS flag",
+	{ .buffer_created = 0, .name = "NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag",
 	  .cmd = { .width = MAX_WIDTH, .height = MAX_HEIGHT, .pixel_format = DRM_FORMAT_NV12,
 		   .handles = { 1, 1, 0 }, .modifier = { DRM_FORMAT_MOD_SAMSUNG_64_32_TILE, 0, 0 },
 		   .pitches = { MAX_WIDTH, MAX_WIDTH, 0 },
+7 -7
drivers/gpu/drm/selftests/test-drm_mm.c
··· 854 854 855 855 if (start > 0) { 856 856 node = __drm_mm_interval_first(mm, 0, start - 1); 857 - if (node->allocated) { 857 + if (drm_mm_node_allocated(node)) { 858 858 pr_err("node before start: node=%llx+%llu, start=%llx\n", 859 859 node->start, node->size, start); 860 860 return false; ··· 863 863 864 864 if (end < U64_MAX) { 865 865 node = __drm_mm_interval_first(mm, end, U64_MAX); 866 - if (node->allocated) { 866 + if (drm_mm_node_allocated(node)) { 867 867 pr_err("node after end: node=%llx+%llu, end=%llx\n", 868 868 node->start, node->size, end); 869 869 return false; ··· 1156 1156 struct drm_mm_node *next = list_next_entry(hole, node_list); 1157 1157 const char *node1 = NULL, *node2 = NULL; 1158 1158 1159 - if (hole->allocated) 1159 + if (drm_mm_node_allocated(hole)) 1160 1160 node1 = kasprintf(GFP_KERNEL, 1161 1161 "[%llx + %lld, color=%ld], ", 1162 1162 hole->start, hole->size, hole->color); 1163 1163 1164 - if (next->allocated) 1164 + if (drm_mm_node_allocated(next)) 1165 1165 node2 = kasprintf(GFP_KERNEL, 1166 1166 ", [%llx + %lld, color=%ld]", 1167 1167 next->start, next->size, next->color); ··· 1900 1900 u64 *start, 1901 1901 u64 *end) 1902 1902 { 1903 - if (node->allocated && node->color != color) 1903 + if (drm_mm_node_allocated(node) && node->color != color) 1904 1904 ++*start; 1905 1905 1906 1906 node = list_next_entry(node, node_list); 1907 - if (node->allocated && node->color != color) 1907 + if (drm_mm_node_allocated(node) && node->color != color) 1908 1908 --*end; 1909 1909 } 1910 1910 1911 1911 static bool colors_abutt(const struct drm_mm_node *node) 1912 1912 { 1913 1913 if (!drm_mm_hole_follows(node) && 1914 - list_next_entry(node, node_list)->allocated) { 1914 + drm_mm_node_allocated(list_next_entry(node, node_list))) { 1915 1915 pr_err("colors abutt; %ld [%llx + %llx] is next to %ld [%llx + %llx]!\n", 1916 1916 node->color, node->start, node->size, 1917 1917 list_next_entry(node, node_list)->color,
+2
drivers/gpu/drm/selftests/test-drm_modeset_common.h
···
 int igt_damage_iter_damage_one_outside(void *ignored);
 int igt_damage_iter_damage_src_moved(void *ignored);
 int igt_damage_iter_damage_not_visible(void *ignored);
+int igt_dp_mst_calc_pbn_mode(void *ignored);
+int igt_dp_mst_sideband_msg_req_decode(void *ignored);
 
 #endif
+1 -1
drivers/gpu/drm/sti/sti_cursor.c
···
 	void *base;
 };
 
-/**
+/*
  * STI Cursor structure
  *
  * @sti_plane: sti_plane structure
+2 -1
drivers/gpu/drm/sti/sti_dvo.c
··· 12 12 #include <linux/platform_device.h> 13 13 14 14 #include <drm/drm_atomic_helper.h> 15 + #include <drm/drm_bridge.h> 15 16 #include <drm/drm_device.h> 16 17 #include <drm/drm_panel.h> 17 18 #include <drm/drm_print.h> ··· 66 65 .awg_fwgen_fct = sti_awg_generate_code_data_enable_mode, 67 66 }; 68 67 69 - /** 68 + /* 70 69 * STI digital video output structure 71 70 * 72 71 * @dev: driver device
+1 -1
drivers/gpu/drm/sti/sti_gdp.c
··· 103 103 dma_addr_t btm_field_paddr; 104 104 }; 105 105 106 - /** 106 + /* 107 107 * STI GDP structure 108 108 * 109 109 * @sti_plane: sti_plane structure
+2 -1
drivers/gpu/drm/sti/sti_hda.c
··· 12 12 #include <linux/seq_file.h> 13 13 14 14 #include <drm/drm_atomic_helper.h> 15 + #include <drm/drm_bridge.h> 15 16 #include <drm/drm_debugfs.h> 16 17 #include <drm/drm_device.h> 17 18 #include <drm/drm_file.h> ··· 231 230 AWGi_720x480p_60, NN_720x480p_60, VID_ED} 232 231 }; 233 232 234 - /** 233 + /* 235 234 * STI hd analog structure 236 235 * 237 236 * @dev: driver device
+16 -10
drivers/gpu/drm/sti/sti_hdmi.c
···
 #include <linux/debugfs.h>
 #include <linux/hdmi.h>
 #include <linux/module.h>
-#include <linux/of_gpio.h>
+#include <linux/io.h>
 #include <linux/platform_device.h>
 #include <linux/reset.h>
 
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
 #include <drm/drm_debugfs.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_edid.h>
···
  * Helper to concatenate infoframe in 32 bits word
  *
  * @ptr: pointer on the hdmi internal structure
- * @data: infoframe to write
  * @size: size to write
  */
 static inline unsigned int hdmi_infoframe_subpack(const u8 *ptr, size_t size)
···
 	return 0;
 }
 
+#define HDMI_TIMEOUT_SWRESET 100 /*milliseconds */
+
 /**
  * Software reset of the hdmi subsystem
  *
  * @hdmi: pointer on the hdmi internal structure
  *
  */
-#define HDMI_TIMEOUT_SWRESET 100 /*milliseconds */
 static void hdmi_swreset(struct sti_hdmi *hdmi)
 {
 	u32 val;
···
 	struct drm_device *drm_dev = data;
 	struct drm_encoder *encoder;
 	struct sti_hdmi_connector *connector;
+	struct cec_connector_info conn_info;
 	struct drm_connector *drm_connector;
 	struct drm_bridge *bridge;
 	int err;
···
 		goto err_sysfs;
 	}
 
+	cec_fill_conn_info_from_drm(&conn_info, drm_connector);
+	hdmi->notifier = cec_notifier_conn_register(&hdmi->dev, NULL,
+						    &conn_info);
+	if (!hdmi->notifier) {
+		hdmi->drm_connector = NULL;
+		return -ENOMEM;
+	}
+
 	/* Enable default interrupts */
 	hdmi_write(hdmi, HDMI_DEFAULT_INT, HDMI_INT_EN);
···
 static void sti_hdmi_unbind(struct device *dev,
 			    struct device *master, void *data)
 {
+	struct sti_hdmi *hdmi = dev_get_drvdata(dev);
+
+	cec_notifier_conn_unregister(hdmi->notifier);
 }
 
 static const struct component_ops sti_hdmi_ops = {
···
 		goto release_adapter;
 	}
 
-	hdmi->notifier = cec_notifier_get(&pdev->dev);
-	if (!hdmi->notifier)
-		goto release_adapter;
-
 	hdmi->reset = devm_reset_control_get(dev, "hdmi");
 	/* Take hdmi out of reset */
 	if (!IS_ERR(hdmi->reset))
···
 {
 	struct sti_hdmi *hdmi = dev_get_drvdata(&pdev->dev);
 
-	cec_notifier_set_phys_addr(hdmi->notifier, CEC_PHYS_ADDR_INVALID);
-
 	i2c_put_adapter(hdmi->ddc_adapt);
 	if (hdmi->audio_pdev)
 		platform_device_unregister(hdmi->audio_pdev);
 	component_del(&pdev->dev, &sti_hdmi_ops);
 
-	cec_notifier_put(hdmi->notifier);
 	return 0;
 }
+5 -5
drivers/gpu/drm/sti/sti_tvout.c
··· 157 157 * 158 158 * @tvout: tvout structure 159 159 * @reg: register to set 160 - * @cr_r: 161 - * @y_g: 162 - * @cb_b: 160 + * @cr_r: red chroma or red order 161 + * @y_g: y or green order 162 + * @cb_b: blue chroma or blue order 163 163 */ 164 164 static void tvout_vip_set_color_order(struct sti_tvout *tvout, int reg, 165 165 u32 cr_r, u32 y_g, u32 cb_b) ··· 214 214 * @tvout: tvout structure 215 215 * @reg: register to set 216 216 * @main_path: main or auxiliary path 217 - * @sel_input: selected_input (main/aux + conv) 217 + * @video_out: selected_input (main/aux + conv) 218 218 */ 219 219 static void tvout_vip_set_sel_input(struct sti_tvout *tvout, 220 220 int reg, ··· 251 251 * 252 252 * @tvout: tvout structure 253 253 * @reg: register to set 254 - * @in_vid_signed: used video input format 254 + * @in_vid_fmt: used video input format 255 255 */ 256 256 static void tvout_vip_set_in_vid_fmt(struct sti_tvout *tvout, 257 257 int reg, u32 in_vid_fmt)
+1 -1
drivers/gpu/drm/sti/sti_vtg.c
··· 121 121 u32 vsync_off_bot; 122 122 }; 123 123 124 - /** 124 + /* 125 125 * STI VTG structure 126 126 * 127 127 * @regs: register mapping
+4 -1
drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
··· 260 260 /* Compute requested pll out */ 261 261 bpp = mipi_dsi_pixel_format_to_bpp(format); 262 262 pll_out_khz = mode->clock * bpp / lanes; 263 + 263 264 /* Add 20% to pll out to be higher than pixel bw (burst mode only) */ 264 - pll_out_khz = (pll_out_khz * 12) / 10; 265 + if (mode_flags & MIPI_DSI_MODE_VIDEO_BURST) 266 + pll_out_khz = (pll_out_khz * 12) / 10; 267 + 265 268 if (pll_out_khz > dsi->lane_max_kbps) { 266 269 pll_out_khz = dsi->lane_max_kbps; 267 270 DRM_WARN("Warning max phy mbps is used\n");
+37 -2
drivers/gpu/drm/stm/ltdc.c
···
 #include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/of_graph.h>
+#include <linux/pinctrl/consumer.h>
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/reset.h>
···
 	.destroy = drm_encoder_cleanup,
 };
 
+static void ltdc_encoder_disable(struct drm_encoder *encoder)
+{
+	struct drm_device *ddev = encoder->dev;
+
+	DRM_DEBUG_DRIVER("\n");
+
+	/* Set to sleep state the pinctrl whatever type of encoder */
+	pinctrl_pm_select_sleep_state(ddev->dev);
+}
+
+static void ltdc_encoder_enable(struct drm_encoder *encoder)
+{
+	struct drm_device *ddev = encoder->dev;
+
+	DRM_DEBUG_DRIVER("\n");
+
+	/*
+	 * Set to default state the pinctrl only with DPI type.
+	 * Others types like DSI, don't need pinctrl due to
+	 * internal bridge (the signals do not come out of the chipset).
+	 */
+	if (encoder->encoder_type == DRM_MODE_ENCODER_DPI)
+		pinctrl_pm_select_default_state(ddev->dev);
+}
+
+static const struct drm_encoder_helper_funcs ltdc_encoder_helper_funcs = {
+	.disable = ltdc_encoder_disable,
+	.enable = ltdc_encoder_enable,
+};
+
 static int ltdc_encoder_init(struct drm_device *ddev, struct drm_bridge *bridge)
 {
 	struct drm_encoder *encoder;
···
 	drm_encoder_init(ddev, encoder, &ltdc_encoder_funcs,
 			 DRM_MODE_ENCODER_DPI, NULL);
+
+	drm_encoder_helper_add(encoder, &ltdc_encoder_helper_funcs);
 
 	ret = drm_bridge_attach(encoder, bridge, NULL);
 	if (ret) {
···
 	/* Add endpoints panels or bridges if any */
 	for (i = 0; i < MAX_ENDPOINTS; i++) {
 		if (panel[i]) {
-			bridge[i] = drm_panel_bridge_add(panel[i],
-							 DRM_MODE_CONNECTOR_DPI);
+			bridge[i] = drm_panel_bridge_add_typed(panel[i],
+							       DRM_MODE_CONNECTOR_DPI);
 			if (IS_ERR(bridge[i])) {
 				DRM_ERROR("panel-bridge endpoint %d\n", i);
 				ret = PTR_ERR(bridge[i]);
···
 	ddev->irq_enabled = 1;
 
 	clk_disable_unprepare(ldev->pixel_clk);
+
+	pinctrl_pm_select_sleep_state(ddev->dev);
 
 	pm_runtime_enable(ddev->dev);
+4 -2
drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
··· 490 490 { 491 491 struct platform_device *pdev = to_platform_device(dev); 492 492 struct drm_device *drm = data; 493 + struct cec_connector_info conn_info; 493 494 struct sun4i_drv *drv = drm->dev_private; 494 495 struct sun4i_hdmi *hdmi; 495 496 struct resource *res; ··· 630 629 631 630 #ifdef CONFIG_DRM_SUN4I_HDMI_CEC 632 631 hdmi->cec_adap = cec_pin_allocate_adapter(&sun4i_hdmi_cec_pin_ops, 633 - hdmi, "sun4i", CEC_CAP_TRANSMIT | CEC_CAP_LOG_ADDRS | 634 - CEC_CAP_PASSTHROUGH | CEC_CAP_RC); 632 + hdmi, "sun4i", CEC_CAP_DEFAULTS | CEC_CAP_CONNECTOR_INFO); 635 633 ret = PTR_ERR_OR_ZERO(hdmi->cec_adap); 636 634 if (ret < 0) 637 635 goto err_cleanup_connector; ··· 649 649 "Couldn't initialise the HDMI connector\n"); 650 650 goto err_cleanup_connector; 651 651 } 652 + cec_fill_conn_info_from_drm(&conn_info, &hdmi->connector); 653 + cec_s_conn_info(hdmi->cec_adap, &conn_info); 652 654 653 655 /* There is no HPD interrupt, so we need to poll the controller */ 654 656 hdmi->connector.polled = DRM_CONNECTOR_POLL_CONNECT |
+1
drivers/gpu/drm/sun4i/sun4i_lvds.c
··· 7 7 #include <linux/clk.h> 8 8 9 9 #include <drm/drm_atomic_helper.h> 10 + #include <drm/drm_bridge.h> 10 11 #include <drm/drm_of.h> 11 12 #include <drm/drm_panel.h> 12 13 #include <drm/drm_print.h>
+1
drivers/gpu/drm/sun4i/sun4i_rgb.c
··· 9 9 #include <linux/clk.h> 10 10 11 11 #include <drm/drm_atomic_helper.h> 12 + #include <drm/drm_bridge.h> 12 13 #include <drm/drm_of.h> 13 14 #include <drm/drm_panel.h> 14 15 #include <drm/drm_print.h>
+1
drivers/gpu/drm/sun4i/sun4i_tcon.c
··· 16 16 #include <linux/reset.h> 17 17 18 18 #include <drm/drm_atomic_helper.h> 19 + #include <drm/drm_bridge.h> 19 20 #include <drm/drm_connector.h> 20 21 #include <drm/drm_crtc.h> 21 22 #include <drm/drm_encoder.h>
+25 -10
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
···
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/regmap.h>
+#include <linux/regulator/consumer.h>
 #include <linux/reset.h>
 #include <linux/slab.h>
···
 static u16 sun6i_dsi_get_video_start_delay(struct sun6i_dsi *dsi,
 					   struct drm_display_mode *mode)
 {
-	u16 start = clamp(mode->vtotal - mode->vdisplay - 10, 8, 100);
-	u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + start;
+	u16 delay = mode->vtotal - (mode->vsync_start - mode->vdisplay) + 1;
 
 	if (delay > mode->vtotal)
 		delay = delay % mode->vtotal;
···
 			       SUN6I_DSI_BURST_LINE_SYNC_POINT(SUN6I_DSI_SYNC_POINT));
 
 		val = SUN6I_DSI_TCON_DRQ_ENABLE_MODE;
-	} else if ((mode->hsync_end - mode->hdisplay) > 20) {
+	} else if ((mode->hsync_start - mode->hdisplay) > 20) {
 		/* Maaaaaagic */
-		u16 drq = (mode->hsync_end - mode->hdisplay) - 20;
+		u16 drq = (mode->hsync_start - mode->hdisplay) - 20;
 
 		drq *= mipi_dsi_pixel_format_to_bpp(device->format);
 		drq /= 32;
···
 		  (mode->htotal - mode->hsync_end) * Bpp - HBP_PACKET_OVERHEAD);
 
 	/*
-	 * The frontporch is set using a blanking packet (4
-	 * bytes + payload + 2 bytes). Its minimal size is
-	 * therefore 6 bytes
+	 * The frontporch is set using a sync event (4 bytes)
+	 * and two blanking packets (each one is 4 bytes +
+	 * payload + 2 bytes). Its minimal size is therefore
+	 * 16 bytes
 	 */
-#define HFP_PACKET_OVERHEAD 6
+#define HFP_PACKET_OVERHEAD 16
 	hfp = max((unsigned int)HFP_PACKET_OVERHEAD,
 		  (mode->hsync_start - mode->hdisplay) * Bpp - HFP_PACKET_OVERHEAD);
···
 	u32 pkt = msg->type;
 
 	if (msg->type == MIPI_DSI_DCS_LONG_WRITE) {
-		pkt |= ((msg->tx_len + 1) & 0xffff) << 8;
-		pkt |= (((msg->tx_len + 1) >> 8) & 0xffff) << 16;
+		pkt |= ((msg->tx_len) & 0xffff) << 8;
+		pkt |= (((msg->tx_len) >> 8) & 0xffff) << 16;
 	} else {
 		pkt |= (((u8 *)msg->tx_buf)[0] << 8);
 		if (msg->tx_len > 1)
···
 		return PTR_ERR(base);
 	}
 
+	dsi->regulator = devm_regulator_get(dev, "vcc-dsi");
+	if (IS_ERR(dsi->regulator)) {
+		dev_err(dev, "Couldn't get VCC-DSI supply\n");
+		return PTR_ERR(dsi->regulator);
+	}
+
 	dsi->regs = devm_regmap_init_mmio_clk(dev, "bus", base,
 					      &sun6i_dsi_regmap_config);
 	if (IS_ERR(dsi->regs)) {
···
 static int __maybe_unused sun6i_dsi_runtime_resume(struct device *dev)
 {
 	struct sun6i_dsi *dsi = dev_get_drvdata(dev);
+	int err;
+
+	err = regulator_enable(dsi->regulator);
+	if (err) {
+		dev_err(dsi->dev, "failed to enable VCC-DSI supply: %d\n", err);
+		return err;
+	}
 
 	reset_control_deassert(dsi->reset);
 	clk_prepare_enable(dsi->mod_clk);
···
 	clk_disable_unprepare(dsi->mod_clk);
 	reset_control_assert(dsi->reset);
+	regulator_disable(dsi->regulator);
 
 	return 0;
 }
+1
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.h
··· 23 23 struct clk *bus_clk; 24 24 struct clk *mod_clk; 25 25 struct regmap *regs; 26 + struct regulator *regulator; 26 27 struct reset_control *reset; 27 28 struct phy *dphy; 28 29
+3 -2
drivers/gpu/drm/tilcdc/tilcdc_external.c
··· 8 8 #include <linux/of_graph.h> 9 9 10 10 #include <drm/drm_atomic_helper.h> 11 + #include <drm/drm_bridge.h> 11 12 #include <drm/drm_of.h> 12 13 13 14 #include "tilcdc_drv.h" ··· 140 139 } 141 140 142 141 if (panel) { 143 - bridge = devm_drm_panel_bridge_add(ddev->dev, panel, 144 - DRM_MODE_CONNECTOR_DPI); 142 + bridge = devm_drm_panel_bridge_add_typed(ddev->dev, panel, 143 + DRM_MODE_CONNECTOR_DPI); 145 144 if (IS_ERR(bridge)) { 146 145 ret = PTR_ERR(bridge); 147 146 goto err_encoder_cleanup;
+1 -1
drivers/gpu/drm/tilcdc/tilcdc_plane.c
···
 
 #include "tilcdc_drv.h"
 
-static struct drm_plane_funcs tilcdc_plane_funcs = {
+static const struct drm_plane_funcs tilcdc_plane_funcs = {
 	.update_plane	= drm_atomic_helper_update_plane,
 	.disable_plane	= drm_atomic_helper_disable_plane,
 	.destroy	= drm_plane_cleanup,
-1
drivers/gpu/drm/tiny/Kconfig
···
 	depends on DRM && SPI
 	select DRM_KMS_HELPER
 	select DRM_KMS_CMA_HELPER
-	depends on THERMAL || !THERMAL
 	help
 	  DRM driver for the following Pervasive Displays panels:
 	  1.44" TFT EPD Panel (E1144CS021)
+7 -7
drivers/gpu/drm/ttm/ttm_bo.c
··· 675 675 if (bo->bdev->driver->release_notify) 676 676 bo->bdev->driver->release_notify(bo); 677 677 678 - drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); 678 + drm_vma_offset_remove(bdev->vma_manager, &bo->base.vma_node); 679 679 ttm_mem_io_lock(man, false); 680 680 ttm_mem_io_free_vm(bo); 681 681 ttm_mem_io_unlock(man); ··· 1356 1356 */ 1357 1357 if (bo->type == ttm_bo_type_device || 1358 1358 bo->type == ttm_bo_type_sg) 1359 - ret = drm_vma_offset_add(&bdev->vma_manager, &bo->base.vma_node, 1359 + ret = drm_vma_offset_add(bdev->vma_manager, &bo->base.vma_node, 1360 1360 bo->mem.num_pages); 1361 1361 1362 1362 /* passed reservation objects should already be locked, ··· 1707 1707 pr_debug("Swap list %d was clean\n", i); 1708 1708 spin_unlock(&glob->lru_lock); 1709 1709 1710 - drm_vma_offset_manager_destroy(&bdev->vma_manager); 1711 - 1712 1710 if (!ret) 1713 1711 ttm_bo_global_release(); 1714 1712 ··· 1717 1719 int ttm_bo_device_init(struct ttm_bo_device *bdev, 1718 1720 struct ttm_bo_driver *driver, 1719 1721 struct address_space *mapping, 1722 + struct drm_vma_offset_manager *vma_manager, 1720 1723 bool need_dma32) 1721 1724 { 1722 1725 struct ttm_bo_global *glob = &ttm_bo_glob; 1723 1726 int ret; 1727 + 1728 + if (WARN_ON(vma_manager == NULL)) 1729 + return -EINVAL; 1724 1730 1725 1731 ret = ttm_bo_global_init(); 1726 1732 if (ret) ··· 1742 1740 if (unlikely(ret != 0)) 1743 1741 goto out_no_sys; 1744 1742 1745 - drm_vma_offset_manager_init(&bdev->vma_manager, 1746 - DRM_FILE_PAGE_OFFSET_START, 1747 - DRM_FILE_PAGE_OFFSET_SIZE); 1743 + bdev->vma_manager = vma_manager; 1748 1744 INIT_DELAYED_WORK(&bdev->wq, ttm_bo_delayed_workqueue); 1749 1745 INIT_LIST_HEAD(&bdev->ddestroy); 1750 1746 bdev->dev_mapping = mapping;
+3 -3
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 409 409 struct drm_vma_offset_node *node; 410 410 struct ttm_buffer_object *bo = NULL; 411 411 412 - drm_vma_offset_lock_lookup(&bdev->vma_manager); 412 + drm_vma_offset_lock_lookup(bdev->vma_manager); 413 413 414 - node = drm_vma_offset_lookup_locked(&bdev->vma_manager, offset, pages); 414 + node = drm_vma_offset_lookup_locked(bdev->vma_manager, offset, pages); 415 415 if (likely(node)) { 416 416 bo = container_of(node, struct ttm_buffer_object, 417 417 base.vma_node); 418 418 bo = ttm_bo_get_unless_zero(bo); 419 419 } 420 420 421 - drm_vma_offset_unlock_lookup(&bdev->vma_manager); 421 + drm_vma_offset_unlock_lookup(bdev->vma_manager); 422 422 423 423 if (!bo) 424 424 pr_err("Could not find buffer object to map\n");
+2 -2
drivers/gpu/drm/tve200/tve200_drv.c
··· 80 80 if (ret && ret != -ENODEV) 81 81 return ret; 82 82 if (panel) { 83 - bridge = drm_panel_bridge_add(panel, 84 - DRM_MODE_CONNECTOR_Unknown); 83 + bridge = drm_panel_bridge_add_typed(panel, 84 + DRM_MODE_CONNECTOR_Unknown); 85 85 if (IS_ERR(bridge)) { 86 86 ret = PTR_ERR(bridge); 87 87 goto out_bridge;
-8
drivers/gpu/drm/udl/udl_connector.c
··· 90 90 return connector_status_connected; 91 91 } 92 92 93 - static struct drm_encoder* 94 - udl_best_single_encoder(struct drm_connector *connector) 95 - { 96 - int enc_id = connector->encoder_ids[0]; 97 - return drm_encoder_find(connector->dev, NULL, enc_id); 98 - } 99 - 100 93 static int udl_connector_set_property(struct drm_connector *connector, 101 94 struct drm_property *property, 102 95 uint64_t val) ··· 113 120 static const struct drm_connector_helper_funcs udl_connector_helper_funcs = { 114 121 .get_modes = udl_get_modes, 115 122 .mode_valid = udl_mode_valid, 116 - .best_encoder = udl_best_single_encoder, 117 123 }; 118 124 119 125 static const struct drm_connector_funcs udl_connector_funcs = {
+3
drivers/gpu/drm/v3d/v3d_drv.c
··· 126 126 case DRM_V3D_PARAM_SUPPORTS_CSD: 127 127 args->value = v3d_has_csd(v3d); 128 128 return 0; 129 + case DRM_V3D_PARAM_SUPPORTS_CACHE_FLUSH: 130 + args->value = 1; 131 + return 0; 129 132 default: 130 133 DRM_DEBUG("Unknown parameter %d\n", args->param); 131 134 return -EINVAL;
+47 -8
drivers/gpu/drm/v3d/v3d_gem.c
···
 	struct drm_v3d_submit_cl *args = data;
 	struct v3d_bin_job *bin = NULL;
 	struct v3d_render_job *render;
+	struct v3d_job *clean_job = NULL;
+	struct v3d_job *last_job;
 	struct ww_acquire_ctx acquire_ctx;
 	int ret = 0;
 
 	trace_v3d_submit_cl_ioctl(&v3d->drm, args->rcl_start, args->rcl_end);
 
-	if (args->pad != 0) {
-		DRM_INFO("pad must be zero: %d\n", args->pad);
+	if (args->flags != 0 &&
+	    args->flags != DRM_V3D_SUBMIT_CL_FLUSH_CACHE) {
+		DRM_INFO("invalid flags: %d\n", args->flags);
 		return -EINVAL;
 	}
···
 	ret = v3d_job_init(v3d, file_priv, &bin->base,
 			   v3d_job_free, args->in_sync_bcl);
 	if (ret) {
+		kfree(bin);
 		v3d_job_put(&render->base);
 		return ret;
 	}
···
 		bin->render = render;
 	}
 
-	ret = v3d_lookup_bos(dev, file_priv, &render->base,
+	if (args->flags & DRM_V3D_SUBMIT_CL_FLUSH_CACHE) {
+		clean_job = kcalloc(1, sizeof(*clean_job), GFP_KERNEL);
+		if (!clean_job) {
+			ret = -ENOMEM;
+			goto fail;
+		}
+
+		ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0);
+		if (ret) {
+			kfree(clean_job);
+			clean_job = NULL;
+			goto fail;
+		}
+
+		last_job = clean_job;
+	} else {
+		last_job = &render->base;
+	}
+
+	ret = v3d_lookup_bos(dev, file_priv, last_job,
 			     args->bo_handles, args->bo_handle_count);
 	if (ret)
 		goto fail;
 
-	ret = v3d_lock_bo_reservations(&render->base, &acquire_ctx);
+	ret = v3d_lock_bo_reservations(last_job, &acquire_ctx);
 	if (ret)
 		goto fail;
···
 	ret = v3d_push_job(v3d_priv, &render->base, V3D_RENDER);
 	if (ret)
 		goto fail_unreserve;
+
+	if (clean_job) {
+		struct dma_fence *render_fence =
+			dma_fence_get(render->base.done_fence);
+		ret = drm_gem_fence_array_add(&clean_job->deps, render_fence);
+		if (ret)
+			goto fail_unreserve;
+		ret = v3d_push_job(v3d_priv, clean_job, V3D_CACHE_CLEAN);
+		if (ret)
+			goto fail_unreserve;
+	}
+
 	mutex_unlock(&v3d->sched_lock);
 
 	v3d_attach_fences_and_unlock_reservation(file_priv,
-						 &render->base,
+						 last_job,
 						 &acquire_ctx,
 						 args->out_sync,
-						 render->base.done_fence);
+						 last_job->done_fence);
 
 	if (bin)
 		v3d_job_put(&bin->base);
 	v3d_job_put(&render->base);
+	if (clean_job)
+		v3d_job_put(clean_job);
 
 	return 0;
 
 fail_unreserve:
 	mutex_unlock(&v3d->sched_lock);
-	drm_gem_unlock_reservations(render->base.bo,
-				    render->base.bo_count, &acquire_ctx);
+	drm_gem_unlock_reservations(last_job->bo,
+				    last_job->bo_count, &acquire_ctx);
 fail:
 	if (bin)
 		v3d_job_put(&bin->base);
 	v3d_job_put(&render->base);
+	if (clean_job)
+		v3d_job_put(clean_job);
 
 	return ret;
 }
+2
drivers/gpu/drm/vboxvideo/Kconfig
··· 4 4 depends on DRM && X86 && PCI 5 5 select DRM_KMS_HELPER 6 6 select DRM_VRAM_HELPER 7 + select DRM_TTM 8 + select DRM_TTM_HELPER 7 9 select GENERIC_ALLOCATOR 8 10 help 9 11 This is a KMS driver for the virtual Graphics Card used in
-2
drivers/gpu/drm/vboxvideo/vbox_drv.h
··· 20 20 #include <drm/drm_gem.h> 21 21 #include <drm/drm_gem_vram_helper.h> 22 22 23 - #include <drm/drm_vram_mm_helper.h> 24 - 25 23 #include "vboxvideo_guest.h" 26 24 #include "vboxvideo_vbe.h" 27 25 #include "hgsmi_ch_setup.h"
+1 -2
drivers/gpu/drm/vboxvideo/vbox_ttm.c
··· 17 17 struct drm_device *dev = &vbox->ddev; 18 18 19 19 vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0), 20 - vbox->available_vram_size, 21 - &drm_gem_vram_mm_funcs); 20 + vbox->available_vram_size); 22 21 if (IS_ERR(vmm)) { 23 22 ret = PTR_ERR(vmm); 24 23 DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
+1 -1
drivers/gpu/drm/vc4/vc4_crtc.c
··· 994 994 struct vc4_dev *vc4 = to_vc4_dev(crtc->dev); 995 995 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(state); 996 996 997 - if (vc4_state->mm.allocated) { 997 + if (drm_mm_node_allocated(&vc4_state->mm)) { 998 998 unsigned long flags; 999 999 1000 1000 spin_lock_irqsave(&vc4->hvs->mm_lock, flags);
+2 -1
drivers/gpu/drm/vc4/vc4_dpi.c
··· 249 249 } 250 250 251 251 if (panel) 252 - bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DPI); 252 + bridge = drm_panel_bridge_add_typed(panel, 253 + DRM_MODE_CONNECTOR_DPI); 253 254 254 255 return drm_bridge_attach(dpi->encoder, bridge, NULL); 255 256 }
+3 -2
drivers/gpu/drm/vc4/vc4_dsi.c
··· 31 31 #include <linux/pm_runtime.h> 32 32 33 33 #include <drm/drm_atomic_helper.h> 34 + #include <drm/drm_bridge.h> 34 35 #include <drm/drm_edid.h> 35 36 #include <drm/drm_mipi_dsi.h> 36 37 #include <drm/drm_of.h> ··· 1576 1575 } 1577 1576 1578 1577 if (panel) { 1579 - dsi->bridge = devm_drm_panel_bridge_add(dev, panel, 1580 - DRM_MODE_CONNECTOR_DSI); 1578 + dsi->bridge = devm_drm_panel_bridge_add_typed(dev, panel, 1579 + DRM_MODE_CONNECTOR_DSI); 1581 1580 if (IS_ERR(dsi->bridge)) 1582 1581 return PTR_ERR(dsi->bridge); 1583 1582 }
+9 -4
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 1285 1285 1286 1286 static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data) 1287 1287 { 1288 + #ifdef CONFIG_DRM_VC4_HDMI_CEC 1289 + struct cec_connector_info conn_info; 1290 + #endif 1288 1291 struct platform_device *pdev = to_platform_device(dev); 1289 1292 struct drm_device *drm = dev_get_drvdata(master); 1290 1293 struct vc4_dev *vc4 = drm->dev_private; ··· 1406 1403 #ifdef CONFIG_DRM_VC4_HDMI_CEC 1407 1404 hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops, 1408 1405 vc4, "vc4", 1409 - CEC_CAP_TRANSMIT | 1410 - CEC_CAP_LOG_ADDRS | 1411 - CEC_CAP_PASSTHROUGH | 1412 - CEC_CAP_RC, 1); 1406 + CEC_CAP_DEFAULTS | 1407 + CEC_CAP_CONNECTOR_INFO, 1); 1413 1408 ret = PTR_ERR_OR_ZERO(hdmi->cec_adap); 1414 1409 if (ret < 0) 1415 1410 goto err_destroy_conn; 1411 + 1412 + cec_fill_conn_info_from_drm(&conn_info, hdmi->connector); 1413 + cec_s_conn_info(hdmi->cec_adap, &conn_info); 1414 + 1416 1415 HDMI_WRITE(VC4_HDMI_CPU_MASK_SET, 0xffffffff); 1417 1416 value = HDMI_READ(VC4_HDMI_CEC_CNTRL_1); 1418 1417 value &= ~VC4_HDMI_CEC_DIV_CLK_CNT_MASK;
+1 -1
drivers/gpu/drm/vc4/vc4_hvs.c
··· 315 315 struct drm_device *drm = dev_get_drvdata(master); 316 316 struct vc4_dev *vc4 = drm->dev_private; 317 317 318 - if (vc4->hvs->mitchell_netravali_filter.allocated) 318 + if (drm_mm_node_allocated(&vc4->hvs->mitchell_netravali_filter)) 319 319 drm_mm_remove_node(&vc4->hvs->mitchell_netravali_filter); 320 320 321 321 drm_mm_takedown(&vc4->hvs->dlist_mm);
+2 -2
drivers/gpu/drm/vc4/vc4_plane.c
··· 178 178 struct vc4_dev *vc4 = to_vc4_dev(plane->dev); 179 179 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 180 180 181 - if (vc4_state->lbm.allocated) { 181 + if (drm_mm_node_allocated(&vc4_state->lbm)) { 182 182 unsigned long irqflags; 183 183 184 184 spin_lock_irqsave(&vc4->hvs->mm_lock, irqflags); ··· 557 557 /* Allocate the LBM memory that the HVS will use for temporary 558 558 * storage due to our scaling/format conversion. 559 559 */ 560 - if (!vc4_state->lbm.allocated) { 560 + if (!drm_mm_node_allocated(&vc4_state->lbm)) { 561 561 int ret; 562 562 563 563 spin_lock_irqsave(&vc4->hvs->mm_lock, irqflags);
+1 -1
drivers/gpu/drm/virtio/Kconfig
···
 	tristate "Virtio GPU driver"
 	depends on DRM && VIRTIO && MMU
 	select DRM_KMS_HELPER
-	select DRM_TTM
+	select DRM_GEM_SHMEM_HELPER
 	help
 	  This is the virtual GPU driver for virtio. It can be used with
 	  QEMU based VMMs (like KVM or Xen).
+1 -1
drivers/gpu/drm/virtio/Makefile
··· 4 4 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher. 5 5 6 6 virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \ 7 - virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \ 7 + virtgpu_display.o virtgpu_vq.o \ 8 8 virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \ 9 9 virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o 10 10
+3 -19
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 56 56 dev->pdev = pdev; 57 57 if (vga) 58 58 drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 59 - 0, 60 59 "virtiodrmfb"); 61 60 62 61 /* ··· 184 185 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>"); 185 186 MODULE_AUTHOR("Alon Levy"); 186 187 187 - static const struct file_operations virtio_gpu_driver_fops = { 188 - .owner = THIS_MODULE, 189 - .open = drm_open, 190 - .mmap = virtio_gpu_mmap, 191 - .poll = drm_poll, 192 - .read = drm_read, 193 - .unlocked_ioctl = drm_ioctl, 194 - .release = drm_release, 195 - .compat_ioctl = drm_compat_ioctl, 196 - .llseek = noop_llseek, 197 - }; 188 + DEFINE_DRM_GEM_SHMEM_FOPS(virtio_gpu_driver_fops); 198 189 199 190 static struct drm_driver driver = { 200 191 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC, ··· 199 210 #endif 200 211 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 201 212 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 202 - .gem_prime_get_sg_table = virtgpu_gem_prime_get_sg_table, 213 + .gem_prime_mmap = drm_gem_prime_mmap, 203 214 .gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table, 204 - .gem_prime_vmap = virtgpu_gem_prime_vmap, 205 - .gem_prime_vunmap = virtgpu_gem_prime_vunmap, 206 - .gem_prime_mmap = virtgpu_gem_prime_mmap, 207 215 208 - .gem_free_object_unlocked = virtio_gpu_gem_free_object, 209 - .gem_open_object = virtio_gpu_gem_object_open, 210 - .gem_close_object = virtio_gpu_gem_object_close, 216 + .gem_create_object = virtio_gpu_create_object, 211 217 .fops = &virtio_gpu_driver_fops, 212 218 213 219 .ioctls = virtio_gpu_ioctls,
+43 -88
drivers/gpu/drm/virtio/virtgpu_drv.h
···
 #include <drm/drm_encoder.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_shmem_helper.h>
 #include <drm/drm_ioctl.h>
 #include <drm/drm_probe_helper.h>
-#include <drm/ttm/ttm_bo_api.h>
-#include <drm/ttm/ttm_bo_driver.h>
-#include <drm/ttm/ttm_module.h>
-#include <drm/ttm/ttm_placement.h>

 #define DRIVER_NAME "virtio_gpu"
 #define DRIVER_DESC "virtio GPU"
···
 };

 struct virtio_gpu_object {
-	struct drm_gem_object gem_base;
+	struct drm_gem_shmem_object base;
 	uint32_t hw_res_handle;

 	struct sg_table *pages;
 	uint32_t mapped;
-	void *vmap;
 	bool dumb;
-	struct ttm_place placement_code;
-	struct ttm_placement placement;
-	struct ttm_buffer_object tbo;
-	struct ttm_bo_kmap_obj kmap;
 	bool created;
 };
 #define gem_to_virtio_gpu_obj(gobj) \
-	container_of((gobj), struct virtio_gpu_object, gem_base)
+	container_of((gobj), struct virtio_gpu_object, base.base)
+
+struct virtio_gpu_object_array {
+	struct ww_acquire_ctx ticket;
+	struct list_head next;
+	u32 nents, total;
+	struct drm_gem_object *objs[];
+};

 struct virtio_gpu_vbuffer;
 struct virtio_gpu_device;
···
 	char *resp_buf;
 	int resp_size;
-
 	virtio_gpu_resp_cb resp_cb;

+	struct virtio_gpu_object_array *objs;
 	struct list_head list;
 };
···
 #define to_virtio_gpu_framebuffer(x) \
 	container_of(x, struct virtio_gpu_framebuffer, base)

-struct virtio_gpu_mman {
-	struct ttm_bo_device bdev;
-};
-
 struct virtio_gpu_queue {
 	struct virtqueue *vq;
 	spinlock_t qlock;
···
 	struct virtio_device *vdev;

-	struct virtio_gpu_mman mman;
-
 	struct virtio_gpu_output outputs[VIRTIO_GPU_MAX_SCANOUTS];
 	uint32_t num_scanouts;
···
 	struct work_struct config_changed_work;

+	struct work_struct obj_free_work;
+	spinlock_t obj_free_lock;
+	struct list_head obj_free_list;
+
 	struct virtio_gpu_drv_capset *capsets;
 	uint32_t num_capsets;
 	struct list_head cap_cache;
···
 /* virtio_ioctl.c */
 #define DRM_VIRTIO_NUM_IOCTLS 10
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
-int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
-				    struct list_head *head);
-void virtio_gpu_unref_list(struct list_head *head);

 /* virtio_kms.c */
 int virtio_gpu_init(struct drm_device *dev);
···
 			       struct drm_file *file);
 void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 				 struct drm_file *file);
-struct virtio_gpu_object*
-virtio_gpu_alloc_object(struct drm_device *dev,
-			struct virtio_gpu_object_params *params,
-			struct virtio_gpu_fence *fence);
 int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 				struct drm_device *dev,
 				struct drm_mode_create_dumb *args);
···
 			      struct drm_device *dev,
 			      uint32_t handle, uint64_t *offset_p);

+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+			      struct drm_gem_object *obj);
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+				struct dma_fence *fence);
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
+				       struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_put_free_work(struct work_struct *work);
+
 /* virtio vg */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object *bo,
 				    struct virtio_gpu_object_params *params,
+				    struct virtio_gpu_object_array *objs,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
-					struct virtio_gpu_object *bo,
 					uint64_t offset,
 					__le32 width, __le32 height,
 					__le32 x, __le32 y,
+					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id,
···
 				    uint32_t id);
 void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id);
+					    struct virtio_gpu_object_array *objs);
 void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
-					    uint32_t resource_id);
+					    struct virtio_gpu_object_array *objs);
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
-			   uint32_t ctx_id, struct virtio_gpu_fence *fence);
+			   uint32_t ctx_id,
+			   struct virtio_gpu_object_array *objs,
+			   struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
-					  uint32_t resource_id, uint32_t ctx_id,
+					  uint32_t ctx_id,
 					  uint64_t offset, uint32_t level,
 					  struct virtio_gpu_box *box,
+					  struct virtio_gpu_object_array *objs,
 					  struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
-					struct virtio_gpu_object *bo,
 					uint32_t ctx_id,
 					uint64_t offset, uint32_t level,
 					struct virtio_gpu_box *box,
+					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence);
 void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 				  struct virtio_gpu_object *bo,
 				  struct virtio_gpu_object_params *params,
+				  struct virtio_gpu_object_array *objs,
 				  struct virtio_gpu_fence *fence);
 void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
···
 			    enum drm_plane_type type,
 			    int index);

-/* virtio_gpu_ttm.c */
-int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev);
-void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
-int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);
-
 /* virtio_gpu_fence.c */
 bool virtio_fence_signaled(struct dma_fence *f);
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(
···
 			  u64 last_seq);

 /* virtio_gpu_object */
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size);
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence);
-void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
-int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
-int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
-				   struct virtio_gpu_object *bo);
-void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo);
-int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait);

 /* virtgpu_prime.c */
-struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
-int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma);
-
-static inline struct virtio_gpu_object*
-virtio_gpu_object_ref(struct virtio_gpu_object *bo)
-{
-	ttm_bo_get(&bo->tbo);
-	return bo;
-}
-
-static inline void virtio_gpu_object_unref(struct virtio_gpu_object **bo)
-{
-	struct ttm_buffer_object *tbo;
-
-	if ((*bo) == NULL)
-		return;
-	tbo = &((*bo)->tbo);
-	ttm_bo_put(tbo);
-	*bo = NULL;
-}

 static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 {
-	return drm_vma_node_offset_addr(&bo->tbo.base.vma_node);
-}
-
-static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo,
-					    bool no_wait)
-{
-	int r;
-
-	r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS) {
-			struct virtio_gpu_device *qdev =
-				bo->gem_base.dev->dev_private;
-			dev_err(qdev->dev, "%p reserve failed\n", bo);
-		}
-		return r;
-	}
-	return 0;
-}
-
-static inline void virtio_gpu_object_unreserve(struct virtio_gpu_object *bo)
-{
-	ttm_bo_unreserve(&bo->tbo);
+	return drm_vma_node_offset_addr(&bo->base.base.vma_node);
 }

 /* virgl debufs */
+4
drivers/gpu/drm/virtio/virtgpu_fence.c
···
 {
 	struct virtio_gpu_fence *fence = to_virtio_fence(f);

+	if (WARN_ON_ONCE(fence->f.seqno == 0))
+		/* leaked fence outside driver before completing
+		 * initialization with virtio_gpu_fence_emit */
+		return false;
 	if (atomic64_read(&fence->drv->last_seq) >= fence->f.seqno)
 		return true;
 	return false;
+139 -44
drivers/gpu/drm/virtio/virtgpu_gem.c
···

 #include "virtgpu_drv.h"

-void virtio_gpu_gem_free_object(struct drm_gem_object *gem_obj)
-{
-	struct virtio_gpu_object *obj = gem_to_virtio_gpu_obj(gem_obj);
-
-	if (obj)
-		virtio_gpu_object_unref(&obj);
-}
-
-struct virtio_gpu_object*
-virtio_gpu_alloc_object(struct drm_device *dev,
-			struct virtio_gpu_object_params *params,
-			struct virtio_gpu_fence *fence)
-{
-	struct virtio_gpu_device *vgdev = dev->dev_private;
-	struct virtio_gpu_object *obj;
-	int ret;
-
-	ret = virtio_gpu_object_create(vgdev, params, &obj, fence);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return obj;
-}
-
 int virtio_gpu_gem_create(struct drm_file *file,
 			  struct drm_device *dev,
 			  struct virtio_gpu_object_params *params,
 			  struct drm_gem_object **obj_p,
 			  uint32_t *handle_p)
 {
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_object *obj;
 	int ret;
 	u32 handle;

-	obj = virtio_gpu_alloc_object(dev, params, NULL);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
+	ret = virtio_gpu_object_create(vgdev, params, &obj, NULL);
+	if (ret < 0)
+		return ret;

-	ret = drm_gem_handle_create(file, &obj->gem_base, &handle);
+	ret = drm_gem_handle_create(file, &obj->base.base, &handle);
 	if (ret) {
-		drm_gem_object_release(&obj->gem_base);
+		drm_gem_object_release(&obj->base.base);
 		return ret;
 	}

-	*obj_p = &obj->gem_base;
+	*obj_p = &obj->base.base;

 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_put_unlocked(&obj->gem_base);
+	drm_gem_object_put_unlocked(&obj->base.base);

 	*handle_p = handle;
 	return 0;
···
 {
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
-	int r;
+	struct virtio_gpu_object_array *objs;

 	if (!vgdev->has_virgl_3d)
 		return 0;

-	r = virtio_gpu_object_reserve(qobj, false);
-	if (r)
-		return r;
+	objs = virtio_gpu_array_alloc(1);
+	if (!objs)
+		return -ENOMEM;
+	virtio_gpu_array_add_obj(objs, obj);

 	virtio_gpu_cmd_context_attach_resource(vgdev, vfpriv->ctx_id,
-					       qobj->hw_res_handle);
-	virtio_gpu_object_unreserve(qobj);
+					       objs);
 	return 0;
 }
···
 {
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_object *qobj = gem_to_virtio_gpu_obj(obj);
-	int r;
+	struct virtio_gpu_object_array *objs;

 	if (!vgdev->has_virgl_3d)
 		return;

-	r = virtio_gpu_object_reserve(qobj, false);
-	if (r)
+	objs = virtio_gpu_array_alloc(1);
+	if (!objs)
 		return;
+	virtio_gpu_array_add_obj(objs, obj);

 	virtio_gpu_cmd_context_detach_resource(vgdev, vfpriv->ctx_id,
-					       qobj->hw_res_handle);
-	virtio_gpu_object_unreserve(qobj);
+					       objs);
+}
+
+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
+{
+	struct virtio_gpu_object_array *objs;
+	size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents;
+
+	objs = kmalloc(size, GFP_KERNEL);
+	if (!objs)
+		return NULL;
+
+	objs->nents = 0;
+	objs->total = nents;
+	return objs;
+}
+
+static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs)
+{
+	kfree(objs);
+}
+
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
+{
+	struct virtio_gpu_object_array *objs;
+	u32 i;
+
+	objs = virtio_gpu_array_alloc(nents);
+	if (!objs)
+		return NULL;
+
+	for (i = 0; i < nents; i++) {
+		objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
+		if (!objs->objs[i]) {
+			objs->nents = i;
+			virtio_gpu_array_put_free(objs);
+			return NULL;
+		}
+	}
+	objs->nents = i;
+	return objs;
+}
+
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+			      struct drm_gem_object *obj)
+{
+	if (WARN_ON_ONCE(objs->nents == objs->total))
+		return;
+
+	drm_gem_object_get(obj);
+	objs->objs[objs->nents] = obj;
+	objs->nents++;
+}
+
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
+{
+	int ret;
+
+	if (objs->nents == 1) {
+		ret = dma_resv_lock_interruptible(objs->objs[0]->resv, NULL);
+	} else {
+		ret = drm_gem_lock_reservations(objs->objs, objs->nents,
+						&objs->ticket);
+	}
+	return ret;
+}
+
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
+{
+	if (objs->nents == 1) {
+		dma_resv_unlock(objs->objs[0]->resv);
+	} else {
+		drm_gem_unlock_reservations(objs->objs, objs->nents,
+					    &objs->ticket);
+	}
+}
+
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+				struct dma_fence *fence)
+{
+	int i;
+
+	for (i = 0; i < objs->nents; i++)
+		dma_resv_add_excl_fence(objs->objs[i]->resv, fence);
+}
+
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
+{
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++)
+		drm_gem_object_put_unlocked(objs->objs[i]);
+	virtio_gpu_array_free(objs);
+}
+
+void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
+				       struct virtio_gpu_object_array *objs)
+{
+	spin_lock(&vgdev->obj_free_lock);
+	list_add_tail(&objs->next, &vgdev->obj_free_list);
+	spin_unlock(&vgdev->obj_free_lock);
+	schedule_work(&vgdev->obj_free_work);
+}
+
+void virtio_gpu_array_put_free_work(struct work_struct *work)
+{
+	struct virtio_gpu_device *vgdev =
+		container_of(work, struct virtio_gpu_device, obj_free_work);
+	struct virtio_gpu_object_array *objs;
+
+	spin_lock(&vgdev->obj_free_lock);
+	while (!list_empty(&vgdev->obj_free_list)) {
+		objs = list_first_entry(&vgdev->obj_free_list,
+					struct virtio_gpu_object_array, next);
+		list_del(&objs->next);
+		spin_unlock(&vgdev->obj_free_lock);
+		virtio_gpu_array_put_free(objs);
+		spin_lock(&vgdev->obj_free_lock);
+	}
+	spin_unlock(&vgdev->obj_free_lock);
 }
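The virtio_gpu_array_* helpers above replace the old TTM validate-list machinery with a single flexible-array struct that collects object references and drops them all in one call. A minimal userspace sketch of that pattern (not kernel code: the array_* names are stand-ins for the driver's helpers, with plain malloc/free and an integer refcount in place of GEM objects and dma_resv locks):

```c
#include <assert.h>
#include <stdlib.h>

struct obj {
	int refcount;		/* stand-in for the GEM object refcount */
};

struct obj_array {
	unsigned int nents, total;
	struct obj *objs[];	/* flexible array member, as in the patch */
};

/* cf. virtio_gpu_array_alloc(): one allocation sized for nents entries */
static struct obj_array *array_alloc(unsigned int nents)
{
	struct obj_array *a = malloc(sizeof(*a) + nents * sizeof(a->objs[0]));

	if (!a)
		return NULL;
	a->nents = 0;
	a->total = nents;
	return a;
}

/* cf. virtio_gpu_array_add_obj(): take a reference, refuse overflow */
static void array_add_obj(struct obj_array *a, struct obj *o)
{
	if (a->nents == a->total)	/* mirrors the WARN_ON_ONCE check */
		return;
	o->refcount++;			/* drm_gem_object_get() stand-in */
	a->objs[a->nents++] = o;
}

/* cf. virtio_gpu_array_put_free(): drop every reference, free the array */
static void array_put_free(struct obj_array *a)
{
	unsigned int i;

	for (i = 0; i < a->nents; i++)
		a->objs[i]->refcount--;	/* drm_gem_object_put() stand-in */
	free(a);
}
```

The point of the design is that the array itself holds the references, so whichever context finishes last (ioctl path or virtqueue completion) can release everything with one put_free call.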
+77 -151
drivers/gpu/drm/virtio/virtgpu_ioctl.c
···
 #include <linux/sync_file.h>

 #include <drm/drm_file.h>
-#include <drm/ttm/ttm_execbuf_util.h>
 #include <drm/virtgpu_drm.h>

 #include "virtgpu_drv.h"
···
 				 &virtio_gpu_map->offset);
 }

-int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
-				    struct list_head *head)
-{
-	struct ttm_operation_ctx ctx = { false, false };
-	struct ttm_validate_buffer *buf;
-	struct ttm_buffer_object *bo;
-	struct virtio_gpu_object *qobj;
-	int ret;
-
-	ret = ttm_eu_reserve_buffers(ticket, head, true, NULL, true);
-	if (ret != 0)
-		return ret;
-
-	list_for_each_entry(buf, head, head) {
-		bo = buf->bo;
-		qobj = container_of(bo, struct virtio_gpu_object, tbo);
-		ret = ttm_bo_validate(bo, &qobj->placement, &ctx);
-		if (ret) {
-			ttm_eu_backoff_reservation(ticket, head);
-			return ret;
-		}
-	}
-	return 0;
-}
-
-void virtio_gpu_unref_list(struct list_head *head)
-{
-	struct ttm_validate_buffer *buf;
-	struct ttm_buffer_object *bo;
-	struct virtio_gpu_object *qobj;
-
-	list_for_each_entry(buf, head, head) {
-		bo = buf->bo;
-		qobj = container_of(bo, struct virtio_gpu_object, tbo);
-
-		drm_gem_object_put_unlocked(&qobj->gem_base);
-	}
-}
-
 /*
  * Usage of execbuffer:
  * Relocations need to take into account the full VIRTIO_GPUDrawable size.
···
 	struct drm_virtgpu_execbuffer *exbuf = data;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
-	struct drm_gem_object *gobj;
 	struct virtio_gpu_fence *out_fence;
-	struct virtio_gpu_object *qobj;
 	int ret;
 	uint32_t *bo_handles = NULL;
 	void __user *user_bo_handles = NULL;
-	struct list_head validate_list;
-	struct ttm_validate_buffer *buflist = NULL;
-	int i;
-	struct ww_acquire_ctx ticket;
+	struct virtio_gpu_object_array *buflist = NULL;
 	struct sync_file *sync_file;
 	int in_fence_fd = exbuf->fence_fd;
 	int out_fence_fd = -1;
···
 		return out_fence_fd;
 	}

-	INIT_LIST_HEAD(&validate_list);
 	if (exbuf->num_bo_handles) {
-
 		bo_handles = kvmalloc_array(exbuf->num_bo_handles,
-					   sizeof(uint32_t), GFP_KERNEL);
-		buflist = kvmalloc_array(exbuf->num_bo_handles,
-					 sizeof(struct ttm_validate_buffer),
-					 GFP_KERNEL | __GFP_ZERO);
-		if (!bo_handles || !buflist) {
+					    sizeof(uint32_t), GFP_KERNEL);
+		if (!bo_handles) {
 			ret = -ENOMEM;
 			goto out_unused_fd;
 		}
···
 			goto out_unused_fd;
 		}

-		for (i = 0; i < exbuf->num_bo_handles; i++) {
-			gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
-			if (!gobj) {
-				ret = -ENOENT;
-				goto out_unused_fd;
-			}
-
-			qobj = gem_to_virtio_gpu_obj(gobj);
-			buflist[i].bo = &qobj->tbo;
-
-			list_add(&buflist[i].head, &validate_list);
+		buflist = virtio_gpu_array_from_handles(drm_file, bo_handles,
+							exbuf->num_bo_handles);
+		if (!buflist) {
+			ret = -ENOENT;
+			goto out_unused_fd;
 		}
 		kvfree(bo_handles);
 		bo_handles = NULL;
 	}

-	ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-	if (ret)
-		goto out_free;
+	if (buflist) {
+		ret = virtio_gpu_array_lock_resv(buflist);
+		if (ret)
+			goto out_unused_fd;
+	}

-	buf = memdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
+	buf = vmemdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
 		goto out_unresv;
···
 	}

 	virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
-			      vfpriv->ctx_id, out_fence);
-
-	ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
-
-	/* fence the command bo */
-	virtio_gpu_unref_list(&validate_list);
-	kvfree(buflist);
+			      vfpriv->ctx_id, buflist, out_fence);
 	return 0;

 out_memdup:
-	kfree(buf);
+	kvfree(buf);
 out_unresv:
-	ttm_eu_backoff_reservation(&ticket, &validate_list);
-out_free:
-	virtio_gpu_unref_list(&validate_list);
+	if (buflist)
+		virtio_gpu_array_unlock_resv(buflist);
 out_unused_fd:
 	kvfree(bo_handles);
-	kvfree(buflist);
+	if (buflist)
+		virtio_gpu_array_put_free(buflist);

 	if (out_fence_fd >= 0)
 		put_unused_fd(out_fence_fd);
···
 	fence = virtio_gpu_fence_alloc(vgdev);
 	if (!fence)
 		return -ENOMEM;
-	qobj = virtio_gpu_alloc_object(dev, &params, fence);
+	ret = virtio_gpu_object_create(vgdev, &params, &qobj, fence);
 	dma_fence_put(&fence->f);
-	if (IS_ERR(qobj))
-		return PTR_ERR(qobj);
-	obj = &qobj->gem_base;
+	if (ret < 0)
+		return ret;
+	obj = &qobj->base.base;

 	ret = drm_gem_handle_create(file_priv, obj, &handle);
 	if (ret) {
···
 	qobj = gem_to_virtio_gpu_obj(gobj);

-	ri->size = qobj->gem_base.size;
+	ri->size = qobj->base.base.size;
 	ri->res_handle = qobj->hw_res_handle;
 	drm_gem_object_put_unlocked(gobj);
 	return 0;
···
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_from_host *args = data;
-	struct ttm_operation_ctx ctx = { true, false };
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct virtio_gpu_object_array *objs;
 	struct virtio_gpu_fence *fence;
 	int ret;
 	u32 offset = args->offset;
···
 	if (vgdev->has_virgl_3d == false)
 		return -ENOSYS;

-	gobj = drm_gem_object_lookup(file, args->bo_handle);
-	if (gobj == NULL)
+	objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
+	if (objs == NULL)
 		return -ENOENT;

-	qobj = gem_to_virtio_gpu_obj(gobj);
-
-	ret = virtio_gpu_object_reserve(qobj, false);
-	if (ret)
-		goto out;
-
-	ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
-	if (unlikely(ret))
-		goto out_unres;
+	ret = virtio_gpu_array_lock_resv(objs);
+	if (ret != 0)
+		goto err_put_free;

 	convert_to_hw_box(&box, &args->box);

 	fence = virtio_gpu_fence_alloc(vgdev);
 	if (!fence) {
 		ret = -ENOMEM;
-		goto out_unres;
+		goto err_unlock;
 	}
 	virtio_gpu_cmd_transfer_from_host_3d
-		(vgdev, qobj->hw_res_handle,
-		 vfpriv->ctx_id, offset, args->level,
-		 &box, fence);
-	dma_resv_add_excl_fence(qobj->tbo.base.resv,
-				&fence->f);
-
+		(vgdev, vfpriv->ctx_id, offset, args->level,
+		 &box, objs, fence);
 	dma_fence_put(&fence->f);
-out_unres:
-	virtio_gpu_object_unreserve(qobj);
-out:
-	drm_gem_object_put_unlocked(gobj);
+	return 0;
+
+err_unlock:
+	virtio_gpu_array_unlock_resv(objs);
+err_put_free:
+	virtio_gpu_array_put_free(objs);
 	return ret;
 }
···
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct drm_virtgpu_3d_transfer_to_host *args = data;
-	struct ttm_operation_ctx ctx = { true, false };
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct virtio_gpu_object_array *objs;
 	struct virtio_gpu_fence *fence;
 	struct virtio_gpu_box box;
 	int ret;
 	u32 offset = args->offset;

-	gobj = drm_gem_object_lookup(file, args->bo_handle);
-	if (gobj == NULL)
+	objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1);
+	if (objs == NULL)
 		return -ENOENT;
-
-	qobj = gem_to_virtio_gpu_obj(gobj);
-
-	ret = virtio_gpu_object_reserve(qobj, false);
-	if (ret)
-		goto out;
-
-	ret = ttm_bo_validate(&qobj->tbo, &qobj->placement, &ctx);
-	if (unlikely(ret))
-		goto out_unres;

 	convert_to_hw_box(&box, &args->box);
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
-			(vgdev, qobj, offset,
-			 box.w, box.h, box.x, box.y, NULL);
+			(vgdev, offset,
+			 box.w, box.h, box.x, box.y,
+			 objs, NULL);
 	} else {
+		ret = virtio_gpu_array_lock_resv(objs);
+		if (ret != 0)
+			goto err_put_free;
+
+		ret = -ENOMEM;
 		fence = virtio_gpu_fence_alloc(vgdev);
-		if (!fence) {
-			ret = -ENOMEM;
-			goto out_unres;
-		}
+		if (!fence)
+			goto err_unlock;
+
 		virtio_gpu_cmd_transfer_to_host_3d
-			(vgdev, qobj,
+			(vgdev,
 			 vfpriv ? vfpriv->ctx_id : 0, offset,
-			 args->level, &box, fence);
-		dma_resv_add_excl_fence(qobj->tbo.base.resv,
-					&fence->f);
+			 args->level, &box, objs, fence);
 		dma_fence_put(&fence->f);
 	}
+	return 0;

-out_unres:
-	virtio_gpu_object_unreserve(qobj);
-out:
-	drm_gem_object_put_unlocked(gobj);
+err_unlock:
+	virtio_gpu_array_unlock_resv(objs);
+err_put_free:
+	virtio_gpu_array_put_free(objs);
 	return ret;
 }

 static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
-			    struct drm_file *file)
+				 struct drm_file *file)
 {
 	struct drm_virtgpu_3d_wait *args = data;
-	struct drm_gem_object *gobj = NULL;
-	struct virtio_gpu_object *qobj = NULL;
+	struct drm_gem_object *obj;
+	long timeout = 15 * HZ;
 	int ret;
-	bool nowait = false;

-	gobj = drm_gem_object_lookup(file, args->handle);
-	if (gobj == NULL)
+	obj = drm_gem_object_lookup(file, args->handle);
+	if (obj == NULL)
 		return -ENOENT;

-	qobj = gem_to_virtio_gpu_obj(gobj);
+	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
+		ret = dma_resv_test_signaled_rcu(obj->resv, true);
+	} else {
+		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
+						timeout);
+	}
+	if (ret == 0)
+		ret = -EBUSY;
+	else if (ret > 0)
+		ret = 0;

-	if (args->flags & VIRTGPU_WAIT_NOWAIT)
-		nowait = true;
-	ret = virtio_gpu_object_wait(qobj, nowait);
-
-	drm_gem_object_put_unlocked(gobj);
+	drm_gem_object_put_unlocked(obj);
 	return ret;
 }
+6 -9
drivers/gpu/drm/virtio/virtgpu_kms.c
···
 	INIT_WORK(&vgdev->config_changed_work,
 		  virtio_gpu_config_changed_work_func);

+	INIT_WORK(&vgdev->obj_free_work,
+		  virtio_gpu_array_put_free_work);
+	INIT_LIST_HEAD(&vgdev->obj_free_list);
+	spin_lock_init(&vgdev->obj_free_lock);
+
 #ifdef __LITTLE_ENDIAN
 	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_VIRGL))
 		vgdev->has_virgl_3d = true;
···
 	if (ret) {
 		DRM_ERROR("failed to alloc vbufs\n");
 		goto err_vbufs;
-	}
-
-	ret = virtio_gpu_ttm_init(vgdev);
-	if (ret) {
-		DRM_ERROR("failed to init ttm %d\n", ret);
-		goto err_ttm;
 	}

 	/* get display info */
···
 	return 0;

 err_scanouts:
-	virtio_gpu_ttm_fini(vgdev);
-err_ttm:
 	virtio_gpu_free_vbufs(vgdev);
 err_vbufs:
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
···
 {
 	struct virtio_gpu_device *vgdev = dev->dev_private;

+	flush_work(&vgdev->obj_free_work);
 	vgdev->vqs_ready = false;
 	flush_work(&vgdev->ctrlq.dequeue_work);
 	flush_work(&vgdev->cursorq.dequeue_work);
···
 	vgdev->vdev->config->del_vqs(vgdev->vdev);

 	virtio_gpu_modeset_fini(vgdev);
-	virtio_gpu_ttm_fini(vgdev);
 	virtio_gpu_free_vbufs(vgdev);
 	virtio_gpu_cleanup_cap_cache(vgdev);
 	kfree(vgdev->capsets);
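The obj_free_work/obj_free_list additions above let completion paths defer the final object put to a worker instead of dropping references in completion context. A userspace sketch of that lock/unlock dance (not kernel code: a pthread mutex stands in for obj_free_lock, and the caller invokes the drain function directly where the kernel would schedule_work()):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
	void (*release)(void *payload);
	void *payload;
};

struct free_list {
	pthread_mutex_t lock;	/* stand-in for obj_free_lock */
	struct node *head;	/* stand-in for obj_free_list */
};

static int released_count;	/* instrumentation for the example */

static void count_release(void *payload)
{
	(void)payload;
	released_count++;
}

/* Completion path: defer, don't free (cf. virtio_gpu_array_put_free_delayed) */
static void put_free_delayed(struct free_list *fl, struct node *n)
{
	pthread_mutex_lock(&fl->lock);
	n->next = fl->head;
	fl->head = n;
	pthread_mutex_unlock(&fl->lock);
	/* kernel side would schedule_work() here */
}

/* Worker: pop under the lock, release outside it, exactly the pattern in
 * virtio_gpu_array_put_free_work().  Returns how many entries it freed. */
static int put_free_work(struct free_list *fl)
{
	int freed = 0;

	pthread_mutex_lock(&fl->lock);
	while (fl->head) {
		struct node *n = fl->head;

		fl->head = n->next;
		pthread_mutex_unlock(&fl->lock);
		n->release(n->payload);	/* may sleep / take other locks */
		free(n);
		freed++;
		pthread_mutex_lock(&fl->lock);
	}
	pthread_mutex_unlock(&fl->lock);
	return freed;
}
```

Dropping the lock around each release is the whole point: the final put can take sleeping locks that must never be held from the completion path.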
+94 -182
drivers/gpu/drm/virtio/virtgpu_object.c
···
  * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */

-#include <drm/ttm/ttm_execbuf_util.h>
+#include <linux/moduleparam.h>

 #include "virtgpu_drv.h"
+
+static int virtio_gpu_virglrenderer_workaround = 1;
+module_param_named(virglhack, virtio_gpu_virglrenderer_workaround, int, 0400);

 static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
 				      uint32_t *resid)
 {
-#if 0
-	int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
-
-	if (handle < 0)
-		return handle;
-#else
-	static int handle;
-
-	/*
-	 * FIXME: dirty hack to avoid re-using IDs, virglrenderer
-	 * can't deal with that.  Needs fixing in virglrenderer, also
-	 * should figure a better way to handle that in the guest.
-	 */
-	handle++;
-#endif
-
-	*resid = handle + 1;
+	if (virtio_gpu_virglrenderer_workaround) {
+		/*
+		 * Hack to avoid re-using resource IDs.
+		 *
+		 * virglrenderer versions up to (and including) 0.7.0
+		 * can't deal with that.  virglrenderer commit
+		 * "f91a9dd35715 Fix unlinking resources from hash
+		 * table." (Feb 2019) fixes the bug.
+		 */
+		static int handle;
+		handle++;
+		*resid = handle + 1;
+	} else {
+		int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
+		if (handle < 0)
+			return handle;
+		*resid = handle + 1;
+	}
 	return 0;
 }

 static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
 {
-#if 0
-	ida_free(&vgdev->resource_ida, id - 1);
-#endif
+	if (!virtio_gpu_virglrenderer_workaround) {
+		ida_free(&vgdev->resource_ida, id - 1);
+	}
 }

-static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+static void virtio_gpu_free_object(struct drm_gem_object *obj)
 {
-	struct virtio_gpu_object *bo;
-	struct virtio_gpu_device *vgdev;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;

-	bo = container_of(tbo, struct virtio_gpu_object, tbo);
-	vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;
-
+	if (bo->pages)
+		virtio_gpu_object_detach(vgdev, bo);
 	if (bo->created)
 		virtio_gpu_cmd_unref_resource(vgdev, bo->hw_res_handle);
-	if (bo->pages)
-		virtio_gpu_object_free_sg_table(bo);
-	if (bo->vmap)
-		virtio_gpu_object_kunmap(bo);
-	drm_gem_object_release(&bo->gem_base);
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
-	kfree(bo);
+
+	drm_gem_shmem_free_object(obj);
 }

-static void virtio_gpu_init_ttm_placement(struct virtio_gpu_object *vgbo)
+static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
+	.free = virtio_gpu_free_object,
+	.open = virtio_gpu_gem_object_open,
+	.close = virtio_gpu_gem_object_close,
+
+	.print_info = drm_gem_shmem_print_info,
+	.pin = drm_gem_shmem_pin,
+	.unpin = drm_gem_shmem_unpin,
+	.get_sg_table = drm_gem_shmem_get_sg_table,
+	.vmap = drm_gem_shmem_vmap,
+	.vunmap = drm_gem_shmem_vunmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size)
 {
-	u32 c = 1;
+	struct virtio_gpu_object *bo;

-	vgbo->placement.placement = &vgbo->placement_code;
-	vgbo->placement.busy_placement = &vgbo->placement_code;
-	vgbo->placement_code.fpfn = 0;
-	vgbo->placement_code.lpfn = 0;
-	vgbo->placement_code.flags =
-		TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT |
-		TTM_PL_FLAG_NO_EVICT;
-	vgbo->placement.num_placement = c;
-	vgbo->placement.num_busy_placement = c;
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return NULL;

+	bo->base.base.funcs = &virtio_gpu_gem_funcs;
+	return &bo->base.base;
 }

 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
···
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence)
 {
+	struct virtio_gpu_object_array *objs = NULL;
+	struct drm_gem_shmem_object *shmem_obj;
 	struct virtio_gpu_object *bo;
-	size_t acc_size;
 	int ret;

 	*bo_ptr = NULL;

-	acc_size = ttm_bo_dma_acc_size(&vgdev->mman.bdev, params->size,
-				       sizeof(struct virtio_gpu_object));
-
-	bo = kzalloc(sizeof(struct virtio_gpu_object), GFP_KERNEL);
-	if (bo == NULL)
-		return -ENOMEM;
-	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
-	if (ret < 0) {
-		kfree(bo);
-		return ret;
-	}
 	params->size = roundup(params->size, PAGE_SIZE);
-	ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, params->size);
-	if (ret != 0) {
-		virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
-		kfree(bo);
-		return ret;
-	}
+	shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size);
+	if (IS_ERR(shmem_obj))
+		return PTR_ERR(shmem_obj);
+	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
+
+	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
+	if (ret < 0)
+		goto err_free_gem;
+
 	bo->dumb = params->dumb;

-	if (params->virgl) {
-		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, fence);
-	} else {
-		virtio_gpu_cmd_create_resource(vgdev, bo, params, fence);
+	if (fence) {
+		ret = -ENOMEM;
+		objs = virtio_gpu_array_alloc(1);
+		if (!objs)
+			goto err_put_id;
+		virtio_gpu_array_add_obj(objs, &bo->base.base);
+
+		ret = virtio_gpu_array_lock_resv(objs);
+		if (ret != 0)
+			goto err_put_objs;
 	}

-	virtio_gpu_init_ttm_placement(bo);
-	ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
-			  ttm_bo_type_device, &bo->placement, 0,
-			  true, acc_size, NULL, NULL,
-			  &virtio_gpu_ttm_bo_destroy);
-	/* ttm_bo_init failure will call the destroy */
-	if (ret != 0)
+	if (params->virgl) {
+		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
+						  objs, fence);
+	} else {
+		virtio_gpu_cmd_create_resource(vgdev, bo, params,
+					       objs, fence);
+	}
+
+	ret = virtio_gpu_object_attach(vgdev, bo, NULL);
+	if (ret != 0) {
+		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;
-
-	if (fence) {
-		struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
-		struct list_head validate_list;
-		struct ttm_validate_buffer mainbuf;
-		struct ww_acquire_ctx ticket;
-		unsigned long irq_flags;
-		bool signaled;
-
-		INIT_LIST_HEAD(&validate_list);
-		memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
-
-		/* use a gem reference since unref list undoes them */
-		drm_gem_object_get(&bo->gem_base);
-		mainbuf.bo = &bo->tbo;
-		list_add(&mainbuf.head, &validate_list);
-
-		ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-		if (ret == 0) {
-			spin_lock_irqsave(&drv->lock, irq_flags);
-			signaled = virtio_fence_signaled(&fence->f);
-			if (!signaled)
-				/*
virtio create command still in flight */ 173 - ttm_eu_fence_buffer_objects(&ticket, &validate_list, 174 - &fence->f); 175 - spin_unlock_irqrestore(&drv->lock, irq_flags); 176 - if (signaled) 177 - /* virtio create command finished */ 178 - ttm_eu_backoff_reservation(&ticket, &validate_list); 179 - } 180 - virtio_gpu_unref_list(&validate_list); 181 143 } 182 144 183 145 *bo_ptr = bo; 184 146 return 0; 147 + 148 + err_put_objs: 149 + virtio_gpu_array_put_free(objs); 150 + err_put_id: 151 + virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); 152 + err_free_gem: 153 + drm_gem_shmem_free_object(&shmem_obj->base); 154 + return ret; 185 155 } 186 - 187 - void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo) 188 - { 189 - bo->vmap = NULL; 190 - ttm_bo_kunmap(&bo->kmap); 191 - } 192 - 193 - int virtio_gpu_object_kmap(struct virtio_gpu_object *bo) 194 - { 195 - bool is_iomem; 196 - int r; 197 - 198 - WARN_ON(bo->vmap); 199 - 200 - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); 201 - if (r) 202 - return r; 203 - bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); 204 - return 0; 205 - } 206 - 207 - int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev, 208 - struct virtio_gpu_object *bo) 209 - { 210 - int ret; 211 - struct page **pages = bo->tbo.ttm->pages; 212 - int nr_pages = bo->tbo.num_pages; 213 - struct ttm_operation_ctx ctx = { 214 - .interruptible = false, 215 - .no_wait_gpu = false 216 - }; 217 - size_t max_segment; 218 - 219 - /* wtf swapping */ 220 - if (bo->pages) 221 - return 0; 222 - 223 - if (bo->tbo.ttm->state == tt_unpopulated) 224 - bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx); 225 - bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL); 226 - if (!bo->pages) 227 - goto out; 228 - 229 - max_segment = virtio_max_dma_size(qdev->vdev); 230 - max_segment &= PAGE_MASK; 231 - if (max_segment > SCATTERLIST_MAX_SEGMENT) 232 - max_segment = SCATTERLIST_MAX_SEGMENT; 233 - ret = 
__sg_alloc_table_from_pages(bo->pages, pages, nr_pages, 0, 234 - nr_pages << PAGE_SHIFT, 235 - max_segment, GFP_KERNEL); 236 - if (ret) 237 - goto out; 238 - return 0; 239 - out: 240 - kfree(bo->pages); 241 - bo->pages = NULL; 242 - return -ENOMEM; 243 - } 244 - 245 - void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo) 246 - { 247 - sg_free_table(bo->pages); 248 - kfree(bo->pages); 249 - bo->pages = NULL; 250 - } 251 - 252 - int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait) 253 - { 254 - int r; 255 - 256 - r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); 257 - if (unlikely(r != 0)) 258 - return r; 259 - r = ttm_bo_wait(&bo->tbo, true, no_wait); 260 - ttm_bo_unreserve(&bo->tbo); 261 - return r; 262 - } 263 -
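The ID-allocation hunk above replaces the old `#if 0` hack with a runtime module parameter: `virglhack=1` (the default) keeps the never-reuse counter needed by virglrenderer up to 0.7.0, while `virglhack=0` switches to proper `ida_alloc()`/`ida_free()` recycling. A userspace model of the two behaviours, with hypothetical names (the bitmap is only a stand-in for the kernel's ida, not its API):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of virtio_gpu_resource_id_get()/_put(). With the
 * virglrenderer workaround enabled, handles grow monotonically and a
 * put is a no-op; otherwise the lowest free ID is recycled. */
static bool workaround = true;
static int monotonic_handle;     /* mirrors the static int in the driver */
static unsigned long ida_bitmap; /* bit n set => id n+1 is in use */

static int resource_id_get(unsigned int *resid)
{
	if (workaround) {
		monotonic_handle++;
		*resid = monotonic_handle + 1;
	} else {
		int handle;

		for (handle = 0; handle < 64; handle++)
			if (!(ida_bitmap & (1UL << handle)))
				break;
		if (handle == 64)
			return -1;       /* no free IDs left */
		ida_bitmap |= 1UL << handle;
		*resid = handle + 1; /* resource id 0 is reserved */
	}
	return 0;
}

static void resource_id_put(unsigned int id)
{
	if (!workaround)
		ida_bitmap &= ~(1UL << (id - 1));
}
```

With the workaround on, a put followed by a get yields a fresh handle; with it off, the freed ID comes straight back, which is exactly the reuse that old virglrenderer could not cope with.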
+36 -15
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 84 84 static int virtio_gpu_plane_atomic_check(struct drm_plane *plane, 85 85 struct drm_plane_state *state) 86 86 { 87 - return 0; 87 + bool is_cursor = plane->type == DRM_PLANE_TYPE_CURSOR; 88 + struct drm_crtc_state *crtc_state; 89 + int ret; 90 + 91 + if (!state->fb || !state->crtc) 92 + return 0; 93 + 94 + crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 95 + if (IS_ERR(crtc_state)) 96 + return PTR_ERR(crtc_state); 97 + 98 + ret = drm_atomic_helper_check_plane_state(state, crtc_state, 99 + DRM_PLANE_HELPER_NO_SCALING, 100 + DRM_PLANE_HELPER_NO_SCALING, 101 + is_cursor, true); 102 + return ret; 88 103 } 89 104 90 105 static void virtio_gpu_primary_plane_update(struct drm_plane *plane, ··· 124 109 bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); 125 110 handle = bo->hw_res_handle; 126 111 if (bo->dumb) { 112 + struct virtio_gpu_object_array *objs; 113 + 114 + objs = virtio_gpu_array_alloc(1); 115 + if (!objs) 116 + return; 117 + virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]); 127 118 virtio_gpu_cmd_transfer_to_host_2d 128 - (vgdev, bo, 0, 119 + (vgdev, 0, 129 120 cpu_to_le32(plane->state->src_w >> 16), 130 121 cpu_to_le32(plane->state->src_h >> 16), 131 122 cpu_to_le32(plane->state->src_x >> 16), 132 - cpu_to_le32(plane->state->src_y >> 16), NULL); 123 + cpu_to_le32(plane->state->src_y >> 16), 124 + objs, NULL); 133 125 } 134 126 } else { 135 127 handle = 0; ··· 208 186 struct virtio_gpu_framebuffer *vgfb; 209 187 struct virtio_gpu_object *bo = NULL; 210 188 uint32_t handle; 211 - int ret = 0; 212 189 213 190 if (plane->state->crtc) 214 191 output = drm_crtc_to_virtio_gpu_output(plane->state->crtc); ··· 226 205 227 206 if (bo && bo->dumb && (plane->state->fb != old_state->fb)) { 228 207 /* new cursor -- update & wait */ 208 + struct virtio_gpu_object_array *objs; 209 + 210 + objs = virtio_gpu_array_alloc(1); 211 + if (!objs) 212 + return; 213 + virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]); 229 214 virtio_gpu_cmd_transfer_to_host_2d 
230 - (vgdev, bo, 0, 215 + (vgdev, 0, 231 216 cpu_to_le32(plane->state->crtc_w), 232 217 cpu_to_le32(plane->state->crtc_h), 233 - 0, 0, vgfb->fence); 234 - ret = virtio_gpu_object_reserve(bo, false); 235 - if (!ret) { 236 - dma_resv_add_excl_fence(bo->tbo.base.resv, 237 - &vgfb->fence->f); 238 - dma_fence_put(&vgfb->fence->f); 239 - vgfb->fence = NULL; 240 - virtio_gpu_object_unreserve(bo); 241 - virtio_gpu_object_wait(bo, false); 242 - } 218 + 0, 0, objs, vgfb->fence); 219 + dma_fence_wait(&vgfb->fence->f, true); 220 + dma_fence_put(&vgfb->fence->f); 221 + vgfb->fence = NULL; 243 222 } 244 223 245 224 if (plane->state->fb != old_state->fb) {
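In the plane hunks, `src_w >> 16` and friends appear because `drm_plane_state` expresses source coordinates in 16.16 fixed point, and the new `atomic_check` passes `DRM_PLANE_HELPER_NO_SCALING` for both scale bounds so no fractional scaling can survive the check. A tiny sketch of the conversion:

```c
#include <stdint.h>

/* DRM plane source rectangles (src_x/y/w/h) use 16.16 fixed point;
 * the update hooks above shift right by 16 to get whole pixels
 * before handing sizes to the host. */
static uint32_t fp1616_to_int(uint32_t v)
{
	return v >> 16;  /* drop the fractional 16 bits */
}

static uint32_t int_to_fp1616(uint32_t v)
{
	return v << 16;  /* whole pixels to 16.16 fixed point */
}
```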
-34
drivers/gpu/drm/virtio/virtgpu_prime.c
··· 30 30 * device that might share buffers with virtgpu 31 31 */ 32 32 33 - struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj) 34 - { 35 - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); 36 - 37 - if (!bo->tbo.ttm->pages || !bo->tbo.ttm->num_pages) 38 - /* should not happen */ 39 - return ERR_PTR(-EINVAL); 40 - 41 - return drm_prime_pages_to_sg(bo->tbo.ttm->pages, 42 - bo->tbo.ttm->num_pages); 43 - } 44 - 45 33 struct drm_gem_object *virtgpu_gem_prime_import_sg_table( 46 34 struct drm_device *dev, struct dma_buf_attachment *attach, 47 35 struct sg_table *table) 48 36 { 49 37 return ERR_PTR(-ENODEV); 50 - } 51 - 52 - void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj) 53 - { 54 - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); 55 - int ret; 56 - 57 - ret = virtio_gpu_object_kmap(bo); 58 - if (ret) 59 - return NULL; 60 - return bo->vmap; 61 - } 62 - 63 - void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) 64 - { 65 - virtio_gpu_object_kunmap(gem_to_virtio_gpu_obj(obj)); 66 - } 67 - 68 - int virtgpu_gem_prime_mmap(struct drm_gem_object *obj, 69 - struct vm_area_struct *vma) 70 - { 71 - return drm_gem_prime_mmap(obj, vma); 72 38 }
-305
drivers/gpu/drm/virtio/virtgpu_ttm.c
··· 1 - /* 2 - * Copyright (C) 2015 Red Hat, Inc. 3 - * All Rights Reserved. 4 - * 5 - * Authors: 6 - * Dave Airlie 7 - * Alon Levy 8 - * 9 - * Permission is hereby granted, free of charge, to any person obtaining a 10 - * copy of this software and associated documentation files (the "Software"), 11 - * to deal in the Software without restriction, including without limitation 12 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 13 - * and/or sell copies of the Software, and to permit persons to whom the 14 - * Software is furnished to do so, subject to the following conditions: 15 - * 16 - * The above copyright notice and this permission notice shall be included in 17 - * all copies or substantial portions of the Software. 18 - * 19 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 22 - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 23 - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 24 - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 25 - * OTHER DEALINGS IN THE SOFTWARE. 
26 - */ 27 - 28 - #include <linux/delay.h> 29 - 30 - #include <drm/drm.h> 31 - #include <drm/drm_file.h> 32 - #include <drm/ttm/ttm_bo_api.h> 33 - #include <drm/ttm/ttm_bo_driver.h> 34 - #include <drm/ttm/ttm_module.h> 35 - #include <drm/ttm/ttm_page_alloc.h> 36 - #include <drm/ttm/ttm_placement.h> 37 - #include <drm/virtgpu_drm.h> 38 - 39 - #include "virtgpu_drv.h" 40 - 41 - static struct 42 - virtio_gpu_device *virtio_gpu_get_vgdev(struct ttm_bo_device *bdev) 43 - { 44 - struct virtio_gpu_mman *mman; 45 - struct virtio_gpu_device *vgdev; 46 - 47 - mman = container_of(bdev, struct virtio_gpu_mman, bdev); 48 - vgdev = container_of(mman, struct virtio_gpu_device, mman); 49 - return vgdev; 50 - } 51 - 52 - int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma) 53 - { 54 - struct drm_file *file_priv; 55 - struct virtio_gpu_device *vgdev; 56 - int r; 57 - 58 - file_priv = filp->private_data; 59 - vgdev = file_priv->minor->dev->dev_private; 60 - if (vgdev == NULL) { 61 - DRM_ERROR( 62 - "filp->private_data->minor->dev->dev_private == NULL\n"); 63 - return -EINVAL; 64 - } 65 - r = ttm_bo_mmap(filp, vma, &vgdev->mman.bdev); 66 - 67 - return r; 68 - } 69 - 70 - static int virtio_gpu_invalidate_caches(struct ttm_bo_device *bdev, 71 - uint32_t flags) 72 - { 73 - return 0; 74 - } 75 - 76 - static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man, 77 - struct ttm_buffer_object *bo, 78 - const struct ttm_place *place, 79 - struct ttm_mem_reg *mem) 80 - { 81 - mem->mm_node = (void *)1; 82 - return 0; 83 - } 84 - 85 - static void ttm_bo_man_put_node(struct ttm_mem_type_manager *man, 86 - struct ttm_mem_reg *mem) 87 - { 88 - mem->mm_node = (void *)NULL; 89 - } 90 - 91 - static int ttm_bo_man_init(struct ttm_mem_type_manager *man, 92 - unsigned long p_size) 93 - { 94 - return 0; 95 - } 96 - 97 - static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man) 98 - { 99 - return 0; 100 - } 101 - 102 - static void ttm_bo_man_debug(struct ttm_mem_type_manager *man, 
103 - struct drm_printer *printer) 104 - { 105 - } 106 - 107 - static const struct ttm_mem_type_manager_func virtio_gpu_bo_manager_func = { 108 - .init = ttm_bo_man_init, 109 - .takedown = ttm_bo_man_takedown, 110 - .get_node = ttm_bo_man_get_node, 111 - .put_node = ttm_bo_man_put_node, 112 - .debug = ttm_bo_man_debug 113 - }; 114 - 115 - static int virtio_gpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, 116 - struct ttm_mem_type_manager *man) 117 - { 118 - switch (type) { 119 - case TTM_PL_SYSTEM: 120 - /* System memory */ 121 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 122 - man->available_caching = TTM_PL_MASK_CACHING; 123 - man->default_caching = TTM_PL_FLAG_CACHED; 124 - break; 125 - case TTM_PL_TT: 126 - man->func = &virtio_gpu_bo_manager_func; 127 - man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; 128 - man->available_caching = TTM_PL_MASK_CACHING; 129 - man->default_caching = TTM_PL_FLAG_CACHED; 130 - break; 131 - default: 132 - DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type); 133 - return -EINVAL; 134 - } 135 - return 0; 136 - } 137 - 138 - static void virtio_gpu_evict_flags(struct ttm_buffer_object *bo, 139 - struct ttm_placement *placement) 140 - { 141 - static const struct ttm_place placements = { 142 - .fpfn = 0, 143 - .lpfn = 0, 144 - .flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM, 145 - }; 146 - 147 - placement->placement = &placements; 148 - placement->busy_placement = &placements; 149 - placement->num_placement = 1; 150 - placement->num_busy_placement = 1; 151 - } 152 - 153 - static int virtio_gpu_verify_access(struct ttm_buffer_object *bo, 154 - struct file *filp) 155 - { 156 - return 0; 157 - } 158 - 159 - static int virtio_gpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 160 - struct ttm_mem_reg *mem) 161 - { 162 - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; 163 - 164 - mem->bus.addr = NULL; 165 - mem->bus.offset = 0; 166 - mem->bus.size = mem->num_pages << PAGE_SHIFT; 167 - mem->bus.base = 0; 168 - 
mem->bus.is_iomem = false; 169 - if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) 170 - return -EINVAL; 171 - switch (mem->mem_type) { 172 - case TTM_PL_SYSTEM: 173 - case TTM_PL_TT: 174 - /* system memory */ 175 - return 0; 176 - default: 177 - return -EINVAL; 178 - } 179 - return 0; 180 - } 181 - 182 - static void virtio_gpu_ttm_io_mem_free(struct ttm_bo_device *bdev, 183 - struct ttm_mem_reg *mem) 184 - { 185 - } 186 - 187 - /* 188 - * TTM backend functions. 189 - */ 190 - struct virtio_gpu_ttm_tt { 191 - struct ttm_dma_tt ttm; 192 - struct virtio_gpu_object *obj; 193 - }; 194 - 195 - static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm, 196 - struct ttm_mem_reg *bo_mem) 197 - { 198 - struct virtio_gpu_ttm_tt *gtt = 199 - container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm); 200 - struct virtio_gpu_device *vgdev = 201 - virtio_gpu_get_vgdev(gtt->obj->tbo.bdev); 202 - 203 - virtio_gpu_object_attach(vgdev, gtt->obj, NULL); 204 - return 0; 205 - } 206 - 207 - static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm) 208 - { 209 - struct virtio_gpu_ttm_tt *gtt = 210 - container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm); 211 - struct virtio_gpu_device *vgdev = 212 - virtio_gpu_get_vgdev(gtt->obj->tbo.bdev); 213 - 214 - virtio_gpu_object_detach(vgdev, gtt->obj); 215 - return 0; 216 - } 217 - 218 - static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm) 219 - { 220 - struct virtio_gpu_ttm_tt *gtt = 221 - container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm); 222 - 223 - ttm_dma_tt_fini(&gtt->ttm); 224 - kfree(gtt); 225 - } 226 - 227 - static struct ttm_backend_func virtio_gpu_tt_func = { 228 - .bind = &virtio_gpu_ttm_tt_bind, 229 - .unbind = &virtio_gpu_ttm_tt_unbind, 230 - .destroy = &virtio_gpu_ttm_tt_destroy, 231 - }; 232 - 233 - static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo, 234 - uint32_t page_flags) 235 - { 236 - struct virtio_gpu_device *vgdev; 237 - struct virtio_gpu_ttm_tt *gtt; 238 - 239 - vgdev = virtio_gpu_get_vgdev(bo->bdev); 
240 - gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL); 241 - if (gtt == NULL) 242 - return NULL; 243 - gtt->ttm.ttm.func = &virtio_gpu_tt_func; 244 - gtt->obj = container_of(bo, struct virtio_gpu_object, tbo); 245 - if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) { 246 - kfree(gtt); 247 - return NULL; 248 - } 249 - return &gtt->ttm.ttm; 250 - } 251 - 252 - static void virtio_gpu_bo_swap_notify(struct ttm_buffer_object *tbo) 253 - { 254 - struct virtio_gpu_object *bo; 255 - 256 - bo = container_of(tbo, struct virtio_gpu_object, tbo); 257 - 258 - if (bo->pages) 259 - virtio_gpu_object_free_sg_table(bo); 260 - } 261 - 262 - static struct ttm_bo_driver virtio_gpu_bo_driver = { 263 - .ttm_tt_create = &virtio_gpu_ttm_tt_create, 264 - .invalidate_caches = &virtio_gpu_invalidate_caches, 265 - .init_mem_type = &virtio_gpu_init_mem_type, 266 - .eviction_valuable = ttm_bo_eviction_valuable, 267 - .evict_flags = &virtio_gpu_evict_flags, 268 - .verify_access = &virtio_gpu_verify_access, 269 - .io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve, 270 - .io_mem_free = &virtio_gpu_ttm_io_mem_free, 271 - .swap_notify = &virtio_gpu_bo_swap_notify, 272 - }; 273 - 274 - int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev) 275 - { 276 - int r; 277 - 278 - /* No others user of address space so set it to 0 */ 279 - r = ttm_bo_device_init(&vgdev->mman.bdev, 280 - &virtio_gpu_bo_driver, 281 - vgdev->ddev->anon_inode->i_mapping, 282 - false); 283 - if (r) { 284 - DRM_ERROR("failed initializing buffer object driver(%d).\n", r); 285 - goto err_dev_init; 286 - } 287 - 288 - r = ttm_bo_init_mm(&vgdev->mman.bdev, TTM_PL_TT, 0); 289 - if (r) { 290 - DRM_ERROR("Failed initializing GTT heap.\n"); 291 - goto err_mm_init; 292 - } 293 - return 0; 294 - 295 - err_mm_init: 296 - ttm_bo_device_release(&vgdev->mman.bdev); 297 - err_dev_init: 298 - return r; 299 - } 300 - 301 - void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev) 302 - { 303 - ttm_bo_device_release(&vgdev->mman.bdev); 304 
- DRM_INFO("virtio_gpu: ttm finalized\n"); 305 - }
+156 -59
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 155 155 { 156 156 if (vbuf->resp_size > MAX_INLINE_RESP_SIZE) 157 157 kfree(vbuf->resp_buf); 158 - kfree(vbuf->data_buf); 158 + kvfree(vbuf->data_buf); 159 159 kmem_cache_free(vgdev->vbufs, vbuf); 160 160 } 161 161 ··· 192 192 } while (!virtqueue_enable_cb(vgdev->ctrlq.vq)); 193 193 spin_unlock(&vgdev->ctrlq.qlock); 194 194 195 - list_for_each_entry_safe(entry, tmp, &reclaim_list, list) { 195 + list_for_each_entry(entry, &reclaim_list, list) { 196 196 resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf; 197 197 198 198 trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp); ··· 219 219 } 220 220 if (entry->resp_cb) 221 221 entry->resp_cb(vgdev, entry); 222 - 223 - list_del(&entry->list); 224 - free_vbuf(vgdev, entry); 225 222 } 226 223 wake_up(&vgdev->ctrlq.ack_queue); 227 224 228 225 if (fence_id) 229 226 virtio_gpu_fence_event_process(vgdev, fence_id); 227 + 228 + list_for_each_entry_safe(entry, tmp, &reclaim_list, list) { 229 + if (entry->objs) 230 + virtio_gpu_array_put_free_delayed(vgdev, entry->objs); 231 + list_del(&entry->list); 232 + free_vbuf(vgdev, entry); 233 + } 230 234 } 231 235 232 236 void virtio_gpu_dequeue_cursor_func(struct work_struct *work) ··· 256 252 wake_up(&vgdev->cursorq.ack_queue); 257 253 } 258 254 259 - static int virtio_gpu_queue_ctrl_buffer_locked(struct virtio_gpu_device *vgdev, 260 - struct virtio_gpu_vbuffer *vbuf) 255 + /* Create sg_table from a vmalloc'd buffer. 
*/ 256 + static struct sg_table *vmalloc_to_sgt(char *data, uint32_t size, int *sg_ents) 257 + { 258 + int ret, s, i; 259 + struct sg_table *sgt; 260 + struct scatterlist *sg; 261 + struct page *pg; 262 + 263 + if (WARN_ON(!PAGE_ALIGNED(data))) 264 + return NULL; 265 + 266 + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); 267 + if (!sgt) 268 + return NULL; 269 + 270 + *sg_ents = DIV_ROUND_UP(size, PAGE_SIZE); 271 + ret = sg_alloc_table(sgt, *sg_ents, GFP_KERNEL); 272 + if (ret) { 273 + kfree(sgt); 274 + return NULL; 275 + } 276 + 277 + for_each_sg(sgt->sgl, sg, *sg_ents, i) { 278 + pg = vmalloc_to_page(data); 279 + if (!pg) { 280 + sg_free_table(sgt); 281 + kfree(sgt); 282 + return NULL; 283 + } 284 + 285 + s = min_t(int, PAGE_SIZE, size); 286 + sg_set_page(sg, pg, s, 0); 287 + 288 + size -= s; 289 + data += s; 290 + } 291 + 292 + return sgt; 293 + } 294 + 295 + static bool virtio_gpu_queue_ctrl_buffer_locked(struct virtio_gpu_device *vgdev, 296 + struct virtio_gpu_vbuffer *vbuf, 297 + struct scatterlist *vout) 261 298 __releases(&vgdev->ctrlq.qlock) 262 299 __acquires(&vgdev->ctrlq.qlock) 263 300 { 264 301 struct virtqueue *vq = vgdev->ctrlq.vq; 265 - struct scatterlist *sgs[3], vcmd, vout, vresp; 302 + struct scatterlist *sgs[3], vcmd, vresp; 266 303 int outcnt = 0, incnt = 0; 304 + bool notify = false; 267 305 int ret; 268 306 269 307 if (!vgdev->vqs_ready) 270 - return -ENODEV; 308 + return notify; 271 309 272 310 sg_init_one(&vcmd, vbuf->buf, vbuf->size); 273 311 sgs[outcnt + incnt] = &vcmd; 274 312 outcnt++; 275 313 276 - if (vbuf->data_size) { 277 - sg_init_one(&vout, vbuf->data_buf, vbuf->data_size); 278 - sgs[outcnt + incnt] = &vout; 314 + if (vout) { 315 + sgs[outcnt + incnt] = vout; 279 316 outcnt++; 280 317 } 281 318 ··· 337 292 trace_virtio_gpu_cmd_queue(vq, 338 293 (struct virtio_gpu_ctrl_hdr *)vbuf->buf); 339 294 340 - virtqueue_kick(vq); 295 + notify = virtqueue_kick_prepare(vq); 341 296 } 342 - 343 - if (!ret) 344 - ret = vq->num_free; 345 - return ret; 
297 + return notify; 346 298 } 347 299 348 - static int virtio_gpu_queue_ctrl_buffer(struct virtio_gpu_device *vgdev, 349 - struct virtio_gpu_vbuffer *vbuf) 350 - { 351 - int rc; 352 - 353 - spin_lock(&vgdev->ctrlq.qlock); 354 - rc = virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf); 355 - spin_unlock(&vgdev->ctrlq.qlock); 356 - return rc; 357 - } 358 - 359 - static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev, 360 - struct virtio_gpu_vbuffer *vbuf, 361 - struct virtio_gpu_ctrl_hdr *hdr, 362 - struct virtio_gpu_fence *fence) 300 + static void virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev, 301 + struct virtio_gpu_vbuffer *vbuf, 302 + struct virtio_gpu_ctrl_hdr *hdr, 303 + struct virtio_gpu_fence *fence) 363 304 { 364 305 struct virtqueue *vq = vgdev->ctrlq.vq; 365 - int rc; 306 + struct scatterlist *vout = NULL, sg; 307 + struct sg_table *sgt = NULL; 308 + bool notify; 309 + int outcnt = 0; 310 + 311 + if (vbuf->data_size) { 312 + if (is_vmalloc_addr(vbuf->data_buf)) { 313 + sgt = vmalloc_to_sgt(vbuf->data_buf, vbuf->data_size, 314 + &outcnt); 315 + if (!sgt) 316 + return; 317 + vout = sgt->sgl; 318 + } else { 319 + sg_init_one(&sg, vbuf->data_buf, vbuf->data_size); 320 + vout = &sg; 321 + outcnt = 1; 322 + } 323 + } 366 324 367 325 again: 368 326 spin_lock(&vgdev->ctrlq.qlock); ··· 378 330 * to wait for free space, which can result in fence ids being 379 331 * submitted out-of-order. 
380 332 */ 381 - if (vq->num_free < 3) { 333 + if (vq->num_free < 2 + outcnt) { 382 334 spin_unlock(&vgdev->ctrlq.qlock); 383 335 wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= 3); 384 336 goto again; 385 337 } 386 338 387 - if (fence) 339 + if (hdr && fence) { 388 340 virtio_gpu_fence_emit(vgdev, hdr, fence); 389 - rc = virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf); 341 + if (vbuf->objs) { 342 + virtio_gpu_array_add_fence(vbuf->objs, &fence->f); 343 + virtio_gpu_array_unlock_resv(vbuf->objs); 344 + } 345 + } 346 + notify = virtio_gpu_queue_ctrl_buffer_locked(vgdev, vbuf, vout); 390 347 spin_unlock(&vgdev->ctrlq.qlock); 391 - return rc; 348 + if (notify) 349 + virtqueue_notify(vgdev->ctrlq.vq); 350 + 351 + if (sgt) { 352 + sg_free_table(sgt); 353 + kfree(sgt); 354 + } 392 355 } 393 356 394 - static int virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev, 395 - struct virtio_gpu_vbuffer *vbuf) 357 + static void virtio_gpu_queue_ctrl_buffer(struct virtio_gpu_device *vgdev, 358 + struct virtio_gpu_vbuffer *vbuf) 359 + { 360 + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, NULL, NULL); 361 + } 362 + 363 + static void virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev, 364 + struct virtio_gpu_vbuffer *vbuf) 396 365 { 397 366 struct virtqueue *vq = vgdev->cursorq.vq; 398 367 struct scatterlist *sgs[1], ccmd; 368 + bool notify; 399 369 int ret; 400 370 int outcnt; 401 371 402 372 if (!vgdev->vqs_ready) 403 - return -ENODEV; 373 + return; 404 374 405 375 sg_init_one(&ccmd, vbuf->buf, vbuf->size); 406 376 sgs[0] = &ccmd; ··· 436 370 trace_virtio_gpu_cmd_queue(vq, 437 371 (struct virtio_gpu_ctrl_hdr *)vbuf->buf); 438 372 439 - virtqueue_kick(vq); 373 + notify = virtqueue_kick_prepare(vq); 440 374 } 441 375 442 376 spin_unlock(&vgdev->cursorq.qlock); 443 377 444 - if (!ret) 445 - ret = vq->num_free; 446 - return ret; 378 + if (notify) 379 + virtqueue_notify(vq); 447 380 } 448 381 449 382 /* just create gem objects for userspace and long lived objects, ··· 
453 388 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, 454 389 struct virtio_gpu_object *bo, 455 390 struct virtio_gpu_object_params *params, 391 + struct virtio_gpu_object_array *objs, 456 392 struct virtio_gpu_fence *fence) 457 393 { 458 394 struct virtio_gpu_resource_create_2d *cmd_p; ··· 461 395 462 396 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 463 397 memset(cmd_p, 0, sizeof(*cmd_p)); 398 + vbuf->objs = objs; 464 399 465 400 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D); 466 401 cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); ··· 548 481 } 549 482 550 483 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, 551 - struct virtio_gpu_object *bo, 552 484 uint64_t offset, 553 485 __le32 width, __le32 height, 554 486 __le32 x, __le32 y, 487 + struct virtio_gpu_object_array *objs, 555 488 struct virtio_gpu_fence *fence) 556 489 { 490 + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); 557 491 struct virtio_gpu_transfer_to_host_2d *cmd_p; 558 492 struct virtio_gpu_vbuffer *vbuf; 559 493 bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); ··· 566 498 567 499 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 568 500 memset(cmd_p, 0, sizeof(*cmd_p)); 501 + vbuf->objs = objs; 569 502 570 503 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D); 571 504 cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); ··· 895 826 896 827 void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev, 897 828 uint32_t ctx_id, 898 - uint32_t resource_id) 829 + struct virtio_gpu_object_array *objs) 899 830 { 831 + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); 900 832 struct virtio_gpu_ctx_resource *cmd_p; 901 833 struct virtio_gpu_vbuffer *vbuf; 902 834 903 835 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 904 836 memset(cmd_p, 0, sizeof(*cmd_p)); 837 + vbuf->objs = objs; 905 838 906 839 cmd_p->hdr.type = 
cpu_to_le32(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE); 907 840 cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); 908 - cmd_p->resource_id = cpu_to_le32(resource_id); 841 + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); 909 842 virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); 910 843 911 844 } 912 845 913 846 void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev, 914 847 uint32_t ctx_id, 915 - uint32_t resource_id) 848 + struct virtio_gpu_object_array *objs) 916 849 { 850 + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); 917 851 struct virtio_gpu_ctx_resource *cmd_p; 918 852 struct virtio_gpu_vbuffer *vbuf; 919 853 920 854 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 921 855 memset(cmd_p, 0, sizeof(*cmd_p)); 856 + vbuf->objs = objs; 922 857 923 858 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE); 924 859 cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); 925 - cmd_p->resource_id = cpu_to_le32(resource_id); 860 + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); 926 861 virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); 927 862 } 928 863 ··· 934 861 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, 935 862 struct virtio_gpu_object *bo, 936 863 struct virtio_gpu_object_params *params, 864 + struct virtio_gpu_object_array *objs, 937 865 struct virtio_gpu_fence *fence) 938 866 { 939 867 struct virtio_gpu_resource_create_3d *cmd_p; ··· 942 868 943 869 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 944 870 memset(cmd_p, 0, sizeof(*cmd_p)); 871 + vbuf->objs = objs; 945 872 946 873 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D); 947 874 cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); ··· 963 888 } 964 889 965 890 void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, 966 - struct virtio_gpu_object *bo, 967 891 uint32_t ctx_id, 968 892 uint64_t offset, uint32_t level, 969 893 struct virtio_gpu_box *box, 894 + struct virtio_gpu_object_array *objs, 970 895 
struct virtio_gpu_fence *fence) 971 896 { 897 + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); 972 898 struct virtio_gpu_transfer_host_3d *cmd_p; 973 899 struct virtio_gpu_vbuffer *vbuf; 974 900 bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); ··· 982 906 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 983 907 memset(cmd_p, 0, sizeof(*cmd_p)); 984 908 909 + vbuf->objs = objs; 910 + 985 911 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D); 986 912 cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); 987 913 cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); ··· 995 917 } 996 918 997 919 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev, 998 - uint32_t resource_id, uint32_t ctx_id, 920 + uint32_t ctx_id, 999 921 uint64_t offset, uint32_t level, 1000 922 struct virtio_gpu_box *box, 923 + struct virtio_gpu_object_array *objs, 1001 924 struct virtio_gpu_fence *fence) 1002 925 { 926 + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); 1003 927 struct virtio_gpu_transfer_host_3d *cmd_p; 1004 928 struct virtio_gpu_vbuffer *vbuf; 1005 929 1006 930 cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); 1007 931 memset(cmd_p, 0, sizeof(*cmd_p)); 1008 932 933 + vbuf->objs = objs; 934 + 1009 935 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D); 1010 936 cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); 1011 - cmd_p->resource_id = cpu_to_le32(resource_id); 937 + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); 1012 938 cmd_p->box = *box; 1013 939 cmd_p->offset = cpu_to_le64(offset); 1014 940 cmd_p->level = cpu_to_le32(level); ··· 1022 940 1023 941 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev, 1024 942 void *data, uint32_t data_size, 1025 - uint32_t ctx_id, struct virtio_gpu_fence *fence) 943 + uint32_t ctx_id, 944 + struct virtio_gpu_object_array *objs, 945 + struct virtio_gpu_fence *fence) 1026 946 { 1027 947 struct virtio_gpu_cmd_submit *cmd_p; 1028 948 
struct virtio_gpu_vbuffer *vbuf; ··· 1034 950 1035 951 vbuf->data_buf = data; 1036 952 vbuf->data_size = data_size; 953 + vbuf->objs = objs; 1037 954 1038 955 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_SUBMIT_3D); 1039 956 cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); ··· 1050 965 bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); 1051 966 struct virtio_gpu_mem_entry *ents; 1052 967 struct scatterlist *sg; 1053 - int si, nents; 968 + int si, nents, ret; 1054 969 1055 970 if (WARN_ON_ONCE(!obj->created)) 1056 971 return -EINVAL; 972 + if (WARN_ON_ONCE(obj->pages)) 973 + return -EINVAL; 1057 974 1058 - if (!obj->pages) { 1059 - int ret; 975 + ret = drm_gem_shmem_pin(&obj->base.base); 976 + if (ret < 0) 977 + return -EINVAL; 1060 978 1061 - ret = virtio_gpu_object_get_sg_table(vgdev, obj); 1062 - if (ret) 1063 - return ret; 979 + obj->pages = drm_gem_shmem_get_sg_table(&obj->base.base); 980 + if (obj->pages == NULL) { 981 + drm_gem_shmem_unpin(&obj->base.base); 982 + return -EINVAL; 1064 983 } 1065 984 1066 985 if (use_dma_api) { ··· 1103 1014 { 1104 1015 bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); 1105 1016 1017 + if (WARN_ON_ONCE(!obj->pages)) 1018 + return; 1019 + 1106 1020 if (use_dma_api && obj->mapped) { 1107 1021 struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev); 1108 1022 /* detach backing and wait for the host process it ... */ ··· 1121 1029 } else { 1122 1030 virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL); 1123 1031 } 1032 + 1033 + sg_free_table(obj->pages); 1034 + obj->pages = NULL; 1035 + 1036 + drm_gem_shmem_unpin(&obj->base.base); 1124 1037 } 1125 1038 1126 1039 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
+4 -5
drivers/gpu/drm/vkms/vkms_crtc.c
··· 16 16 u64 ret_overrun; 17 17 bool ret; 18 18 19 - spin_lock(&output->lock); 20 - 21 19 ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer, 22 20 output->period_ns); 23 21 WARN_ON(ret_overrun != 1); 24 22 23 + spin_lock(&output->lock); 25 24 ret = drm_crtc_handle_vblank(crtc); 26 25 if (!ret) 27 26 DRM_ERROR("vkms failure on handling vblank"); 28 27 29 28 state = output->composer_state; 29 + spin_unlock(&output->lock); 30 + 30 31 if (state && output->composer_enabled) { 31 32 u64 frame = drm_crtc_accurate_vblank_count(crtc); 32 33 ··· 48 47 if (!ret) 49 48 DRM_DEBUG_DRIVER("Composer worker already queued\n"); 50 49 } 51 - 52 - spin_unlock(&output->lock); 53 50 54 51 return HRTIMER_RESTART; 55 52 } ··· 84 85 struct vkms_output *output = &vkmsdev->output; 85 86 struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 86 87 87 - *vblank_time = output->vblank_hrtimer.node.expires; 88 + *vblank_time = READ_ONCE(output->vblank_hrtimer.node.expires); 88 89 89 90 if (WARN_ON(*vblank_time == vblank->time)) 90 91 return true;
+1 -1
drivers/gpu/drm/vkms/vkms_drv.c
··· 83 83 84 84 drm_atomic_helper_commit_hw_done(old_state); 85 85 86 - drm_atomic_helper_wait_for_vblanks(dev, old_state); 86 + drm_atomic_helper_wait_for_flip_done(dev, old_state); 87 87 88 88 for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) { 89 89 struct vkms_crtc_state *vkms_state =
+5
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 827 827 goto out_no_fman; 828 828 } 829 829 830 + drm_vma_offset_manager_init(&dev_priv->vma_manager, 831 + DRM_FILE_PAGE_OFFSET_START, 832 + DRM_FILE_PAGE_OFFSET_SIZE); 830 833 ret = ttm_bo_device_init(&dev_priv->bdev, 831 834 &vmw_bo_driver, 832 835 dev->anon_inode->i_mapping, 836 + &dev_priv->vma_manager, 833 837 false); 834 838 if (unlikely(ret != 0)) { 835 839 DRM_ERROR("Failed initializing TTM buffer object driver.\n"); ··· 990 986 if (dev_priv->has_mob) 991 987 (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB); 992 988 (void) ttm_bo_device_release(&dev_priv->bdev); 989 + drm_vma_offset_manager_destroy(&dev_priv->vma_manager); 993 990 vmw_release_device_late(dev_priv); 994 991 vmw_fence_manager_takedown(dev_priv->fman); 995 992 if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
+1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 438 438 struct vmw_fifo_state fifo; 439 439 440 440 struct drm_device *dev; 441 + struct drm_vma_offset_manager vma_manager; 441 442 unsigned long vmw_chipset; 442 443 unsigned int io_start; 443 444 uint32_t vram_start;
+3 -2
drivers/media/cec/cec-notifier.c
··· 153 153 } 154 154 EXPORT_SYMBOL_GPL(cec_notifier_cec_adap_register); 155 155 156 - void cec_notifier_cec_adap_unregister(struct cec_notifier *n) 156 + void cec_notifier_cec_adap_unregister(struct cec_notifier *n, 157 + struct cec_adapter *adap) 157 158 { 158 159 if (!n) 159 160 return; 160 161 161 162 mutex_lock(&n->lock); 162 - n->cec_adap->notifier = NULL; 163 + adap->notifier = NULL; 163 164 n->cec_adap = NULL; 164 165 n->callback = NULL; 165 166 mutex_unlock(&n->lock);
+4 -2
drivers/media/platform/cros-ec-cec/cros-ec-cec.c
··· 314 314 return 0; 315 315 316 316 out_probe_notify: 317 - cec_notifier_cec_adap_unregister(cros_ec_cec->notify); 317 + cec_notifier_cec_adap_unregister(cros_ec_cec->notify, 318 + cros_ec_cec->adap); 318 319 out_probe_adapter: 319 320 cec_delete_adapter(cros_ec_cec->adap); 320 321 return ret; ··· 336 335 return ret; 337 336 } 338 337 339 - cec_notifier_cec_adap_unregister(cros_ec_cec->notify); 338 + cec_notifier_cec_adap_unregister(cros_ec_cec->notify, 339 + cros_ec_cec->adap); 340 340 cec_unregister_adapter(cros_ec_cec->adap); 341 341 342 342 return 0;
+2 -2
drivers/media/platform/meson/ao-cec-g12a.c
··· 736 736 clk_disable_unprepare(ao_cec->core); 737 737 738 738 out_probe_notify: 739 - cec_notifier_cec_adap_unregister(ao_cec->notify); 739 + cec_notifier_cec_adap_unregister(ao_cec->notify, ao_cec->adap); 740 740 741 741 out_probe_adapter: 742 742 cec_delete_adapter(ao_cec->adap); ··· 752 752 753 753 clk_disable_unprepare(ao_cec->core); 754 754 755 - cec_notifier_cec_adap_unregister(ao_cec->notify); 755 + cec_notifier_cec_adap_unregister(ao_cec->notify, ao_cec->adap); 756 756 757 757 cec_unregister_adapter(ao_cec->adap); 758 758
+2 -2
drivers/media/platform/meson/ao-cec.c
··· 688 688 clk_disable_unprepare(ao_cec->core); 689 689 690 690 out_probe_notify: 691 - cec_notifier_cec_adap_unregister(ao_cec->notify); 691 + cec_notifier_cec_adap_unregister(ao_cec->notify, ao_cec->adap); 692 692 693 693 out_probe_adapter: 694 694 cec_delete_adapter(ao_cec->adap); ··· 704 704 705 705 clk_disable_unprepare(ao_cec->core); 706 706 707 - cec_notifier_cec_adap_unregister(ao_cec->notify); 707 + cec_notifier_cec_adap_unregister(ao_cec->notify, ao_cec->adap); 708 708 cec_unregister_adapter(ao_cec->adap); 709 709 710 710 return 0;
+2 -2
drivers/media/platform/s5p-cec/s5p_cec.c
··· 239 239 return 0; 240 240 241 241 err_notifier: 242 - cec_notifier_cec_adap_unregister(cec->notifier); 242 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 243 243 244 244 err_delete_adapter: 245 245 cec_delete_adapter(cec->adap); ··· 250 250 { 251 251 struct s5p_cec_dev *cec = platform_get_drvdata(pdev); 252 252 253 - cec_notifier_cec_adap_unregister(cec->notifier); 253 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 254 254 cec_unregister_adapter(cec->adap); 255 255 pm_runtime_disable(&pdev->dev); 256 256 return 0;
+2 -2
drivers/media/platform/seco-cec/seco-cec.c
··· 671 671 return ret; 672 672 673 673 err_notifier: 674 - cec_notifier_cec_adap_unregister(secocec->notifier); 674 + cec_notifier_cec_adap_unregister(secocec->notifier, secocec->cec_adap); 675 675 err_delete_adapter: 676 676 cec_delete_adapter(secocec->cec_adap); 677 677 err: ··· 692 692 693 693 dev_dbg(&pdev->dev, "IR disabled"); 694 694 } 695 - cec_notifier_cec_adap_unregister(secocec->notifier); 695 + cec_notifier_cec_adap_unregister(secocec->notifier, secocec->cec_adap); 696 696 cec_unregister_adapter(secocec->cec_adap); 697 697 698 698 release_region(BRA_SMB_BASE_ADDR, 7);
+2 -2
drivers/media/platform/sti/cec/stih-cec.c
··· 359 359 return 0; 360 360 361 361 err_notifier: 362 - cec_notifier_cec_adap_unregister(cec->notifier); 362 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 363 363 364 364 err_delete_adapter: 365 365 cec_delete_adapter(cec->adap); ··· 370 370 { 371 371 struct stih_cec *cec = platform_get_drvdata(pdev); 372 372 373 - cec_notifier_cec_adap_unregister(cec->notifier); 373 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 374 374 cec_unregister_adapter(cec->adap); 375 375 376 376 return 0;
+2 -2
drivers/media/platform/tegra-cec/tegra_cec.c
··· 409 409 return 0; 410 410 411 411 err_notifier: 412 - cec_notifier_cec_adap_unregister(cec->notifier); 412 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 413 413 err_adapter: 414 414 cec_delete_adapter(cec->adap); 415 415 err_clk: ··· 423 423 424 424 clk_disable_unprepare(cec->clk); 425 425 426 - cec_notifier_cec_adap_unregister(cec->notifier); 426 + cec_notifier_cec_adap_unregister(cec->notifier, cec->adap); 427 427 cec_unregister_adapter(cec->adap); 428 428 429 429 return 0;
+5 -12
drivers/video/fbdev/core/fbmem.c
··· 1758 1758 /** 1759 1759 * remove_conflicting_pci_framebuffers - remove firmware-configured framebuffers for PCI devices 1760 1760 * @pdev: PCI device 1761 - * @res_id: index of PCI BAR configuring framebuffer memory 1762 1761 * @name: requesting driver name 1763 1762 * 1764 1763 * This function removes framebuffer devices (eg. initialized by firmware) 1765 - * using memory range configured for @pdev's BAR @res_id. 1764 + * using memory range configured for any of @pdev's memory bars. 1766 1765 * 1767 1766 * The function assumes that PCI device with shadowed ROM drives a primary 1768 1767 * display and so kicks out vga16fb. 1769 1768 */ 1770 - int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id, const char *name) 1769 + int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, const char *name) 1771 1770 { 1772 1771 struct apertures_struct *ap; 1773 1772 bool primary = false; 1774 1773 int err, idx, bar; 1775 - bool res_id_found = false; 1776 1774 1777 1775 for (idx = 0, bar = 0; bar < PCI_ROM_RESOURCE; bar++) { 1778 1776 if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) ··· 1787 1789 continue; 1788 1790 ap->ranges[idx].base = pci_resource_start(pdev, bar); 1789 1791 ap->ranges[idx].size = pci_resource_len(pdev, bar); 1790 - pci_info(pdev, "%s: bar %d: 0x%lx -> 0x%lx\n", __func__, bar, 1791 - (unsigned long)pci_resource_start(pdev, bar), 1792 - (unsigned long)pci_resource_end(pdev, bar)); 1792 + pci_dbg(pdev, "%s: bar %d: 0x%lx -> 0x%lx\n", __func__, bar, 1793 + (unsigned long)pci_resource_start(pdev, bar), 1794 + (unsigned long)pci_resource_end(pdev, bar)); 1793 1795 idx++; 1794 - if (res_id == bar) 1795 - res_id_found = true; 1796 1796 } 1797 - if (!res_id_found) 1798 - pci_warn(pdev, "%s: passed res_id (%d) is not a memory bar\n", 1799 - __func__, res_id); 1800 1797 1801 1798 #ifdef CONFIG_X86 1802 1799 primary = pdev->resource[PCI_ROM_RESOURCE].flags &
-13
drivers/video/fbdev/sa1100fb.c
··· 968 968 969 969 #ifdef CONFIG_CPU_FREQ 970 970 /* 971 - * Calculate the minimum DMA period over all displays that we own. 972 - * This, together with the SDRAM bandwidth defines the slowest CPU 973 - * frequency that can be selected. 974 - */ 975 - static unsigned int sa1100fb_min_dma_period(struct sa1100fb_info *fbi) 976 - { 977 - /* 978 - * FIXME: we need to verify _all_ consoles. 979 - */ 980 - return sa1100fb_display_dma_period(&fbi->fb.var); 981 - } 982 - 983 - /* 984 971 * CPU clock speed change handler. We need to adjust the LCD timing 985 972 * parameters when the CPU clock is adjusted by the power management 986 973 * subsystem.
+4 -4
drivers/video/hdmi.c
··· 1576 1576 if (ptr[0] & 0x10) 1577 1577 frame->active_aspect = ptr[1] & 0xf; 1578 1578 if (ptr[0] & 0x8) { 1579 - frame->top_bar = (ptr[5] << 8) + ptr[6]; 1580 - frame->bottom_bar = (ptr[7] << 8) + ptr[8]; 1579 + frame->top_bar = (ptr[6] << 8) | ptr[5]; 1580 + frame->bottom_bar = (ptr[8] << 8) | ptr[7]; 1581 1581 } 1582 1582 if (ptr[0] & 0x4) { 1583 - frame->left_bar = (ptr[9] << 8) + ptr[10]; 1584 - frame->right_bar = (ptr[11] << 8) + ptr[12]; 1583 + frame->left_bar = (ptr[10] << 8) | ptr[9]; 1584 + frame->right_bar = (ptr[12] << 8) | ptr[11]; 1585 1585 } 1586 1586 frame->scan_mode = ptr[0] & 0x3; 1587 1587
+1
include/drm/bridge/dw_hdmi.h
··· 156 156 157 157 void dw_hdmi_set_sample_rate(struct dw_hdmi *hdmi, unsigned int rate); 158 158 void dw_hdmi_set_channel_count(struct dw_hdmi *hdmi, unsigned int cnt); 159 + void dw_hdmi_set_channel_status(struct dw_hdmi *hdmi, u8 *channel_status); 159 160 void dw_hdmi_set_channel_allocation(struct dw_hdmi *hdmi, unsigned int ca); 160 161 void dw_hdmi_audio_enable(struct dw_hdmi *hdmi); 161 162 void dw_hdmi_audio_disable(struct dw_hdmi *hdmi);
+18 -15
include/drm/drm_bridge.h
··· 42 42 * This callback is invoked whenever our bridge is being attached to a 43 43 * &drm_encoder. 44 44 * 45 - * The attach callback is optional. 45 + * The @attach callback is optional. 46 46 * 47 47 * RETURNS: 48 48 * ··· 56 56 * This callback is invoked whenever our bridge is being detached from a 57 57 * &drm_encoder. 58 58 * 59 - * The detach callback is optional. 59 + * The @detach callback is optional. 60 60 */ 61 61 void (*detach)(struct drm_bridge *bridge); 62 62 ··· 76 76 * atomic helpers to validate modes supplied by userspace in 77 77 * drm_atomic_helper_check_modeset(). 78 78 * 79 - * This function is optional. 79 + * The @mode_valid callback is optional. 80 80 * 81 81 * NOTE: 82 82 * ··· 108 108 * this function passes all other callbacks must succeed for this 109 109 * configuration. 110 110 * 111 - * The mode_fixup callback is optional. 111 + * The @mode_fixup callback is optional. 112 112 * 113 113 * NOTE: 114 114 * ··· 146 146 * The bridge can assume that the display pipe (i.e. clocks and timing 147 147 * signals) feeding it is still running when this callback is called. 148 148 * 149 - * The disable callback is optional. 149 + * The @disable callback is optional. 150 150 */ 151 151 void (*disable)(struct drm_bridge *bridge); 152 152 ··· 165 165 * singals) feeding it is no longer running when this callback is 166 166 * called. 167 167 * 168 - * The post_disable callback is optional. 168 + * The @post_disable callback is optional. 169 169 */ 170 170 void (*post_disable)(struct drm_bridge *bridge); 171 171 ··· 214 214 * not enable the display link feeding the next bridge in the chain (if 215 215 * there is one) when this callback is called. 216 216 * 217 - * The pre_enable callback is optional. 217 + * The @pre_enable callback is optional. 218 218 */ 219 219 void (*pre_enable)(struct drm_bridge *bridge); 220 220 ··· 234 234 * callback must enable the display link feeding the next bridge in the 235 235 * chain if there is one. 
236 236 * 237 - * The enable callback is optional. 237 + * The @enable callback is optional. 238 238 */ 239 239 void (*enable)(struct drm_bridge *bridge); 240 240 ··· 283 283 * would be prudent to also provide an implementation of @enable if 284 284 * you are expecting driver calls into &drm_bridge_enable. 285 285 * 286 - * The enable callback is optional. 286 + * The @atomic_enable callback is optional. 287 287 */ 288 288 void (*atomic_enable)(struct drm_bridge *bridge, 289 289 struct drm_atomic_state *state); ··· 305 305 * would be prudent to also provide an implementation of @disable if 306 306 * you are expecting driver calls into &drm_bridge_disable. 307 307 * 308 - * The disable callback is optional. 308 + * The @atomic_disable callback is optional. 309 309 */ 310 310 void (*atomic_disable)(struct drm_bridge *bridge, 311 311 struct drm_atomic_state *state); ··· 330 330 * @post_disable if you are expecting driver calls into 331 331 * &drm_bridge_post_disable. 332 332 * 333 - * The post_disable callback is optional. 333 + * The @atomic_post_disable callback is optional. 334 334 */ 335 335 void (*atomic_post_disable)(struct drm_bridge *bridge, 336 336 struct drm_atomic_state *state); ··· 429 429 struct drm_atomic_state *state); 430 430 431 431 #ifdef CONFIG_DRM_PANEL_BRIDGE 432 - struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel, 433 - u32 connector_type); 432 + struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel); 433 + struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel, 434 + u32 connector_type); 434 435 void drm_panel_bridge_remove(struct drm_bridge *bridge); 435 436 struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev, 436 - struct drm_panel *panel, 437 - u32 connector_type); 437 + struct drm_panel *panel); 438 + struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev, 439 + struct drm_panel *panel, 440 + u32 connector_type); 438 441 #endif 439 442 440 443 #endif
+13 -12
include/drm/drm_connector.h
··· 281 281 /* Additional Colorimetry extension added as part of CTA 861.G */ 282 282 #define DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65 11 283 283 #define DRM_MODE_COLORIMETRY_DCI_P3_RGB_THEATER 12 284 + /* Additional Colorimetry Options added for DP 1.4a VSC Colorimetry Format */ 285 + #define DRM_MODE_COLORIMETRY_RGB_WIDE_FIXED 13 286 + #define DRM_MODE_COLORIMETRY_RGB_WIDE_FLOAT 14 287 + #define DRM_MODE_COLORIMETRY_BT601_YCC 15 284 288 285 289 /** 286 290 * enum drm_bus_flags - bus_flags info for &drm_display_info ··· 1292 1288 /** @override_edid: has the EDID been overwritten through debugfs for testing? */ 1293 1289 bool override_edid; 1294 1290 1295 - #define DRM_CONNECTOR_MAX_ENCODER 3 1296 1291 /** 1297 - * @encoder_ids: Valid encoders for this connector. Please only use 1298 - * drm_connector_for_each_possible_encoder() to enumerate these. 1292 + * @possible_encoders: Bit mask of encoders that can drive this 1293 + * connector, drm_encoder_index() determines the index into the bitfield 1294 + * and the bits are set with drm_connector_attach_encoder(). 1299 1295 */ 1300 - uint32_t encoder_ids[DRM_CONNECTOR_MAX_ENCODER]; 1296 + u32 possible_encoders; 1301 1297 1302 1298 /** 1303 1299 * @encoder: Currently bound encoder driving this connector, if any. 
··· 1527 1523 int drm_connector_attach_vrr_capable_property( 1528 1524 struct drm_connector *connector); 1529 1525 int drm_mode_create_aspect_ratio_property(struct drm_device *dev); 1530 - int drm_mode_create_colorspace_property(struct drm_connector *connector); 1526 + int drm_mode_create_hdmi_colorspace_property(struct drm_connector *connector); 1527 + int drm_mode_create_dp_colorspace_property(struct drm_connector *connector); 1531 1528 int drm_mode_create_content_type_property(struct drm_device *dev); 1532 1529 void drm_hdmi_avi_infoframe_content_type(struct hdmi_avi_infoframe *frame, 1533 1530 const struct drm_connector_state *conn_state); ··· 1613 1608 * drm_connector_for_each_possible_encoder - iterate connector's possible encoders 1614 1609 * @connector: &struct drm_connector pointer 1615 1610 * @encoder: &struct drm_encoder pointer used as cursor 1616 - * @__i: int iteration cursor, for macro-internal use 1617 1611 */ 1618 - #define drm_connector_for_each_possible_encoder(connector, encoder, __i) \ 1619 - for ((__i) = 0; (__i) < ARRAY_SIZE((connector)->encoder_ids) && \ 1620 - (connector)->encoder_ids[(__i)] != 0; (__i)++) \ 1621 - for_each_if((encoder) = \ 1622 - drm_encoder_find((connector)->dev, NULL, \ 1623 - (connector)->encoder_ids[(__i)])) \ 1612 + #define drm_connector_for_each_possible_encoder(connector, encoder) \ 1613 + drm_for_each_encoder_mask(encoder, (connector)->dev, \ 1614 + (connector)->possible_encoders) 1624 1615 1625 1616 #endif
-1
include/drm/drm_crtc.h
··· 41 41 #include <drm/drm_connector.h> 42 42 #include <drm/drm_device.h> 43 43 #include <drm/drm_property.h> 44 - #include <drm/drm_bridge.h> 45 44 #include <drm/drm_edid.h> 46 45 #include <drm/drm_plane.h> 47 46 #include <drm/drm_blend.h>
+50 -9
include/drm/drm_dp_helper.h
··· 42 42 * 1.2 formally includes both eDP and DPI definitions. 43 43 */ 44 44 45 + /* MSA (Main Stream Attribute) MISC bits (as MISC1<<8|MISC0) */ 46 + #define DP_MSA_MISC_SYNC_CLOCK (1 << 0) 47 + #define DP_MSA_MISC_INTERLACE_VTOTAL_EVEN (1 << 8) 48 + #define DP_MSA_MISC_STEREO_NO_3D (0 << 9) 49 + #define DP_MSA_MISC_STEREO_PROG_RIGHT_EYE (1 << 9) 50 + #define DP_MSA_MISC_STEREO_PROG_LEFT_EYE (3 << 9) 51 + /* bits per component for non-RAW */ 52 + #define DP_MSA_MISC_6_BPC (0 << 5) 53 + #define DP_MSA_MISC_8_BPC (1 << 5) 54 + #define DP_MSA_MISC_10_BPC (2 << 5) 55 + #define DP_MSA_MISC_12_BPC (3 << 5) 56 + #define DP_MSA_MISC_16_BPC (4 << 5) 57 + /* bits per component for RAW */ 58 + #define DP_MSA_MISC_RAW_6_BPC (1 << 5) 59 + #define DP_MSA_MISC_RAW_7_BPC (2 << 5) 60 + #define DP_MSA_MISC_RAW_8_BPC (3 << 5) 61 + #define DP_MSA_MISC_RAW_10_BPC (4 << 5) 62 + #define DP_MSA_MISC_RAW_12_BPC (5 << 5) 63 + #define DP_MSA_MISC_RAW_14_BPC (6 << 5) 64 + #define DP_MSA_MISC_RAW_16_BPC (7 << 5) 65 + /* pixel encoding/colorimetry format */ 66 + #define _DP_MSA_MISC_COLOR(misc1_7, misc0_21, misc0_3, misc0_4) \ 67 + ((misc1_7) << 15 | (misc0_4) << 4 | (misc0_3) << 3 | ((misc0_21) << 1)) 68 + #define DP_MSA_MISC_COLOR_RGB _DP_MSA_MISC_COLOR(0, 0, 0, 0) 69 + #define DP_MSA_MISC_COLOR_CEA_RGB _DP_MSA_MISC_COLOR(0, 0, 1, 0) 70 + #define DP_MSA_MISC_COLOR_RGB_WIDE_FIXED _DP_MSA_MISC_COLOR(0, 3, 0, 0) 71 + #define DP_MSA_MISC_COLOR_RGB_WIDE_FLOAT _DP_MSA_MISC_COLOR(0, 3, 0, 1) 72 + #define DP_MSA_MISC_COLOR_Y_ONLY _DP_MSA_MISC_COLOR(1, 0, 0, 0) 73 + #define DP_MSA_MISC_COLOR_RAW _DP_MSA_MISC_COLOR(1, 1, 0, 0) 74 + #define DP_MSA_MISC_COLOR_YCBCR_422_BT601 _DP_MSA_MISC_COLOR(0, 1, 1, 0) 75 + #define DP_MSA_MISC_COLOR_YCBCR_422_BT709 _DP_MSA_MISC_COLOR(0, 1, 1, 1) 76 + #define DP_MSA_MISC_COLOR_YCBCR_444_BT601 _DP_MSA_MISC_COLOR(0, 2, 1, 0) 77 + #define DP_MSA_MISC_COLOR_YCBCR_444_BT709 _DP_MSA_MISC_COLOR(0, 2, 1, 1) 78 + #define DP_MSA_MISC_COLOR_XVYCC_422_BT601 
_DP_MSA_MISC_COLOR(0, 1, 0, 0) 79 + #define DP_MSA_MISC_COLOR_XVYCC_422_BT709 _DP_MSA_MISC_COLOR(0, 1, 0, 1) 80 + #define DP_MSA_MISC_COLOR_XVYCC_444_BT601 _DP_MSA_MISC_COLOR(0, 2, 0, 0) 81 + #define DP_MSA_MISC_COLOR_XVYCC_444_BT709 _DP_MSA_MISC_COLOR(0, 2, 0, 1) 82 + #define DP_MSA_MISC_COLOR_OPRGB _DP_MSA_MISC_COLOR(0, 0, 1, 1) 83 + #define DP_MSA_MISC_COLOR_DCI_P3 _DP_MSA_MISC_COLOR(0, 3, 1, 0) 84 + #define DP_MSA_MISC_COLOR_COLOR_PROFILE _DP_MSA_MISC_COLOR(0, 3, 1, 1) 85 + #define DP_MSA_MISC_COLOR_VSC_SDP (1 << 14) 86 + 45 87 #define DP_AUX_MAX_PAYLOAD_BYTES 16 46 88 47 89 #define DP_AUX_I2C_WRITE 0x0 ··· 1272 1230 1273 1231 struct cec_adapter; 1274 1232 struct edid; 1233 + struct drm_connector; 1275 1234 1276 1235 /** 1277 1236 * struct drm_dp_aux_cec - DisplayPort CEC-Tunneling-over-AUX 1278 1237 * @lock: mutex protecting this struct 1279 1238 * @adap: the CEC adapter for CEC-Tunneling-over-AUX support. 1280 - * @name: name of the CEC adapter 1281 - * @parent: parent device of the CEC adapter 1239 + * @connector: the connector this CEC adapter is associated with 1282 1240 * @unregister_work: unregister the CEC adapter 1283 1241 */ 1284 1242 struct drm_dp_aux_cec { 1285 1243 struct mutex lock; 1286 1244 struct cec_adapter *adap; 1287 - const char *name; 1288 - struct device *parent; 1245 + struct drm_connector *connector; 1289 1246 struct delayed_work unregister_work; 1290 1247 }; 1291 1248 ··· 1492 1451 1493 1452 #ifdef CONFIG_DRM_DP_CEC 1494 1453 void drm_dp_cec_irq(struct drm_dp_aux *aux); 1495 - void drm_dp_cec_register_connector(struct drm_dp_aux *aux, const char *name, 1496 - struct device *parent); 1454 + void drm_dp_cec_register_connector(struct drm_dp_aux *aux, 1455 + struct drm_connector *connector); 1497 1456 void drm_dp_cec_unregister_connector(struct drm_dp_aux *aux); 1498 1457 void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid); 1499 1458 void drm_dp_cec_unset_edid(struct drm_dp_aux *aux); ··· 1502 1461 { 1503 1462 } 1504 
1463 1505 - static inline void drm_dp_cec_register_connector(struct drm_dp_aux *aux, 1506 - const char *name, 1507 - struct device *parent) 1464 + static inline void 1465 + drm_dp_cec_register_connector(struct drm_dp_aux *aux, 1466 + struct drm_connector *connector) 1508 1467 { 1509 1468 } 1510 1469
+4 -8
include/drm/drm_dp_mst_helper.h
··· 287 287 struct drm_dp_remote_i2c_read { 288 288 u8 num_transactions; 289 289 u8 port_number; 290 - struct { 290 + struct drm_dp_remote_i2c_read_tx { 291 291 u8 i2c_dev_id; 292 292 u8 num_bytes; 293 293 u8 *bytes; ··· 334 334 335 335 struct drm_dp_query_payload_ack_reply { 336 336 u8 port_number; 337 - u8 allocated_pbn; 337 + u16 allocated_pbn; 338 338 }; 339 339 340 340 struct drm_dp_sideband_msg_req_body { ··· 481 481 int conn_base_id; 482 482 483 483 /** 484 - * @down_rep_recv: Message receiver state for down replies. This and 485 - * @up_req_recv are only ever access from the work item, which is 486 - * serialised. 484 + * @down_rep_recv: Message receiver state for down replies. 487 485 */ 488 486 struct drm_dp_sideband_msg_rx down_rep_recv; 489 487 /** 490 - * @up_req_recv: Message receiver state for up requests. This and 491 - * @down_rep_recv are only ever access from the work item, which is 492 - * serialised. 488 + * @up_req_recv: Message receiver state for up requests. 493 489 */ 494 490 struct drm_dp_sideband_msg_rx up_req_recv; 495 491
-2
include/drm/drm_drv.h
··· 778 778 int dev_priv_size; 779 779 }; 780 780 781 - extern unsigned int drm_debug; 782 - 783 781 int drm_dev_init(struct drm_device *dev, 784 782 struct drm_driver *driver, 785 783 struct device *parent);
+3 -3
include/drm/drm_encoder.h
··· 140 140 * @possible_crtcs: Bitmask of potential CRTC bindings, using 141 141 * drm_crtc_index() as the index into the bitfield. The driver must set 142 142 * the bits for all &drm_crtc objects this encoder can be connected to 143 - * before calling drm_encoder_init(). 143 + * before calling drm_dev_register(). 144 144 * 145 145 * In reality almost every driver gets this wrong. 146 146 * ··· 154 154 * using drm_encoder_index() as the index into the bitfield. The driver 155 155 * must set the bits for all &drm_encoder objects which can clone a 156 156 * &drm_crtc together with this encoder before calling 157 - * drm_encoder_init(). Drivers should set the bit representing the 157 + * drm_dev_register(). Drivers should set the bit representing the 158 158 * encoder itself, too. Cloning bits should be set such that when two 159 159 * encoders can be used in a cloned configuration, they both should have 160 160 * each another bits set. ··· 198 198 } 199 199 200 200 /** 201 - * drm_encoder_mask - find the mask of a registered ENCODER 201 + * drm_encoder_mask - find the mask of a registered encoder 202 202 * @encoder: encoder to find mask for 203 203 * 204 204 * Given a registered encoder, return the mask bit of that encoder for an
+2 -4
include/drm/drm_fb_helper.h
··· 539 539 /** 540 540 * drm_fb_helper_remove_conflicting_pci_framebuffers - remove firmware-configured framebuffers for PCI devices 541 541 * @pdev: PCI device 542 - * @resource_id: index of PCI BAR configuring framebuffer memory 543 542 * @name: requesting driver name 544 543 * 545 544 * This function removes framebuffer devices (eg. initialized by firmware) 546 - * using memory range configured for @pdev's BAR @resource_id. 545 + * using memory range configured for any of @pdev's memory bars. 547 546 * 548 547 * The function assumes that PCI device with shadowed ROM drives a primary 549 548 * display and so kicks out vga16fb. 550 549 */ 551 550 static inline int 552 551 drm_fb_helper_remove_conflicting_pci_framebuffers(struct pci_dev *pdev, 553 - int resource_id, 554 552 const char *name) 555 553 { 556 554 int ret = 0; ··· 558 560 * otherwise the vga fbdev driver falls over. 559 561 */ 560 562 #if IS_REACHABLE(CONFIG_FB) 561 - ret = remove_conflicting_pci_framebuffers(pdev, resource_id, name); 563 + ret = remove_conflicting_pci_framebuffers(pdev, name); 562 564 #endif 563 565 if (ret == 0) 564 566 ret = vga_remove_vgacon(pdev);
+19
include/drm/drm_gem_ttm_helper.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + 3 + #ifndef DRM_GEM_TTM_HELPER_H 4 + #define DRM_GEM_TTM_HELPER_H 5 + 6 + #include <linux/kernel.h> 7 + 8 + #include <drm/drm_gem.h> 9 + #include <drm/drm_device.h> 10 + #include <drm/ttm/ttm_bo_api.h> 11 + #include <drm/ttm/ttm_bo_driver.h> 12 + 13 + #define drm_gem_ttm_of_gem(gem_obj) \ 14 + container_of(gem_obj, struct ttm_buffer_object, base) 15 + 16 + void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, 17 + const struct drm_gem_object *gem); 18 + 19 + #endif
+95 -12
include/drm/drm_gem_vram_helper.h
··· 3 3 #ifndef DRM_GEM_VRAM_HELPER_H 4 4 #define DRM_GEM_VRAM_HELPER_H 5 5 6 + #include <drm/drm_file.h> 6 7 #include <drm/drm_gem.h> 8 + #include <drm/drm_ioctl.h> 7 9 #include <drm/ttm/ttm_bo_api.h> 10 + #include <drm/ttm/ttm_bo_driver.h> 8 11 #include <drm/ttm/ttm_placement.h> 12 + 9 13 #include <linux/kernel.h> /* for container_of() */ 10 14 11 15 struct drm_mode_create_dumb; ··· 19 15 20 16 #define DRM_GEM_VRAM_PL_FLAG_VRAM TTM_PL_FLAG_VRAM 21 17 #define DRM_GEM_VRAM_PL_FLAG_SYSTEM TTM_PL_FLAG_SYSTEM 18 + #define DRM_GEM_VRAM_PL_FLAG_TOPDOWN TTM_PL_FLAG_TOPDOWN 22 19 23 20 /* 24 21 * Buffer-object helpers ··· 39 34 * backed by VRAM. It can be used for simple framebuffer devices with 40 35 * dedicated memory. The buffer object can be evicted to system memory if 41 36 * video memory becomes scarce. 37 + * 38 + * GEM VRAM objects perform reference counting for pin and mapping 39 + * operations. So a buffer object that has been pinned N times with 40 + * drm_gem_vram_pin() must be unpinned N times with 41 + * drm_gem_vram_unpin(). The same applies to pairs of 42 + * drm_gem_vram_kmap() and drm_gem_vram_kunmap(), as well as pairs of 43 + * drm_gem_vram_vmap() and drm_gem_vram_vunmap(). 42 44 */ 43 45 struct drm_gem_vram_object { 44 46 struct ttm_buffer_object bo; 45 47 struct ttm_bo_kmap_obj kmap; 48 + 49 + /** 50 + * @kmap_use_count: 51 + * 52 + * Reference count on the virtual address. 53 + * The address are un-mapped when the count reaches zero. 
54 + */ 55 + unsigned int kmap_use_count; 46 56 47 57 /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */ 48 58 struct ttm_placement placement; ··· 103 83 void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map, 104 84 bool *is_iomem); 105 85 void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo); 86 + void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo); 87 + void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr); 106 88 107 89 int drm_gem_vram_fill_create_dumb(struct drm_file *file, 108 90 struct drm_device *dev, ··· 112 90 unsigned long pg_align, 113 91 bool interruptible, 114 92 struct drm_mode_create_dumb *args); 115 - 116 - /* 117 - * Helpers for struct ttm_bo_driver 118 - */ 119 - 120 - void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo, 121 - struct ttm_placement *pl); 122 - 123 - int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo, 124 - struct file *filp); 125 - 126 - extern const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs; 127 93 128 94 /* 129 95 * Helpers for struct drm_driver ··· 132 122 * &struct drm_driver with default functions. 133 123 */ 134 124 #define DRM_GEM_VRAM_DRIVER \ 125 + .debugfs_init = drm_vram_mm_debugfs_init, \ 135 126 .dumb_create = drm_gem_vram_driver_dumb_create, \ 136 127 .dumb_map_offset = drm_gem_vram_driver_dumb_mmap_offset, \ 137 128 .gem_prime_mmap = drm_gem_prime_mmap 129 + 130 + /* 131 + * VRAM memory manager 132 + */ 133 + 134 + /** 135 + * struct drm_vram_mm - An instance of VRAM MM 136 + * @vram_base: Base address of the managed video memory 137 + * @vram_size: Size of the managed video memory in bytes 138 + * @bdev: The TTM BO device. 139 + * @funcs: TTM BO functions 140 + * 141 + * The fields &struct drm_vram_mm.vram_base and 142 + * &struct drm_vram_mm.vrm_size are managed by VRAM MM, but are 143 + * available for public read access. Use the field 144 + * &struct drm_vram_mm.bdev to access the TTM BO device. 
145 + */ 146 + struct drm_vram_mm { 147 + uint64_t vram_base; 148 + size_t vram_size; 149 + 150 + struct ttm_bo_device bdev; 151 + }; 152 + 153 + /** 154 + * drm_vram_mm_of_bdev() - \ 155 + Returns the container of type &struct ttm_bo_device for field bdev. 156 + * @bdev: the TTM BO device 157 + * 158 + * Returns: 159 + * The containing instance of &struct drm_vram_mm 160 + */ 161 + static inline struct drm_vram_mm *drm_vram_mm_of_bdev( 162 + struct ttm_bo_device *bdev) 163 + { 164 + return container_of(bdev, struct drm_vram_mm, bdev); 165 + } 166 + 167 + int drm_vram_mm_debugfs_init(struct drm_minor *minor); 168 + 169 + /* 170 + * Helpers for integration with struct drm_device 171 + */ 172 + 173 + struct drm_vram_mm *drm_vram_helper_alloc_mm( 174 + struct drm_device *dev, uint64_t vram_base, size_t vram_size); 175 + void drm_vram_helper_release_mm(struct drm_device *dev); 176 + 177 + /* 178 + * Helpers for &struct file_operations 179 + */ 180 + 181 + int drm_vram_mm_file_operations_mmap( 182 + struct file *filp, struct vm_area_struct *vma); 183 + 184 + /** 185 + * define DRM_VRAM_MM_FILE_OPERATIONS - default callback functions for \ 186 + &struct file_operations 187 + * 188 + * Drivers that use VRAM MM can use this macro to initialize 189 + * &struct file_operations with default functions. 190 + */ 191 + #define DRM_VRAM_MM_FILE_OPERATIONS \ 192 + .llseek = no_llseek, \ 193 + .read = drm_read, \ 194 + .poll = drm_poll, \ 195 + .unlocked_ioctl = drm_ioctl, \ 196 + .compat_ioctl = drm_compat_ioctl, \ 197 + .mmap = drm_vram_mm_file_operations_mmap, \ 198 + .open = drm_open, \ 199 + .release = drm_release \ 200 + 138 201 139 202 #endif
+4 -3
include/drm/drm_mm.h
···
168 168 struct rb_node rb_hole_addr;
169 169 u64 __subtree_last;
170 170 u64 hole_size;
171 - bool allocated : 1;
172 - bool scanned_block : 1;
171 + unsigned long flags;
172 + #define DRM_MM_NODE_ALLOCATED_BIT 0
173 + #define DRM_MM_NODE_SCANNED_BIT 1
173 174 #ifdef CONFIG_DRM_DEBUG_MM
174 175 depot_stack_handle_t stack;
175 176 #endif
···
254 253 */
255 254 static inline bool drm_mm_node_allocated(const struct drm_mm_node *node)
256 255 {
257 - return node->allocated;
256 + return test_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags);
258 257 }
259 258
260 259 /**
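This change replaces two single-bit bitfields with one flags word queried through test_bit(), so node state can be updated with the kernel's atomic bit operations (part of the "mm node removals properly serialised" work noted in the changelog). A stand-alone sketch of the same idea, using plain C bit operations as stand-ins for the kernel's set_bit()/clear_bit()/test_bit() (the struct here is a stub; names mirror the diff):

```c
#include <assert.h>
#include <stdbool.h>

#define DRM_MM_NODE_ALLOCATED_BIT 0
#define DRM_MM_NODE_SCANNED_BIT   1

struct drm_mm_node {
	unsigned long flags;	/* replaces the bool bitfields */
};

/* Plain-C stand-ins for the kernel's bitops (which are atomic). */
static void set_bit(int nr, unsigned long *addr)   { *addr |= 1UL << nr; }
static void clear_bit(int nr, unsigned long *addr) { *addr &= ~(1UL << nr); }
static bool test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

static bool drm_mm_node_allocated(const struct drm_mm_node *node)
{
	return test_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags);
}
```

Adjacent bitfields cannot be written concurrently without a lock; independent bits in one word can, which is the point of the conversion.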
+3 -4
include/drm/drm_modeset_helper_vtables.h
···
955 955 * @atomic_best_encoder.
956 956 *
957 957 * You can leave this function to NULL if the connector is only
958 - * attached to a single encoder and you are using the atomic helpers.
959 - * In this case, the core will call drm_atomic_helper_best_encoder()
960 - * for you.
958 + * attached to a single encoder. In this case, the core will call
959 + * drm_connector_get_single_encoder() for you.
961 960 *
962 961 * RETURNS:
963 962 *
···
976 977 *
977 978 * This function is used by drm_atomic_helper_check_modeset().
978 979 * If it is not implemented, the core will fallback to @best_encoder
979 - * (or drm_atomic_helper_best_encoder() if @best_encoder is NULL).
980 + * (or drm_connector_get_single_encoder() if @best_encoder is NULL).
980 981 *
981 982 * NOTE:
982 983 *
+9
include/drm/drm_modeset_lock.h
···
114 114 return ww_mutex_is_locked(&lock->mutex);
115 115 }
116 116
117 + /**
118 + * drm_modeset_lock_assert_held - equivalent to lockdep_assert_held()
119 + * @lock: lock to check
120 + */
121 + static inline void drm_modeset_lock_assert_held(struct drm_modeset_lock *lock)
122 + {
123 + lockdep_assert_held(&lock->mutex.base);
124 + }
125 +
117 126 int drm_modeset_lock(struct drm_modeset_lock *lock,
118 127 struct drm_modeset_acquire_ctx *ctx);
119 128 int __must_check drm_modeset_lock_single_interruptible(struct drm_modeset_lock *lock);
+12 -1
include/drm/drm_panel.h
···
140 140 const struct drm_panel_funcs *funcs;
141 141
142 142 /**
143 + * @connector_type:
144 + *
145 + * Type of the panel as a DRM_MODE_CONNECTOR_* value. This is used to
146 + * initialise the drm_connector corresponding to the panel with the
147 + * correct connector type.
148 + */
149 + int connector_type;
150 +
151 + /**
143 152 * @list:
144 153 *
145 154 * Panel entry in registry.
···
156 147 struct list_head list;
157 148 };
158 149
159 - void drm_panel_init(struct drm_panel *panel);
150 + void drm_panel_init(struct drm_panel *panel, struct device *dev,
151 + const struct drm_panel_funcs *funcs,
152 + int connector_type);
160 153
161 154 int drm_panel_add(struct drm_panel *panel);
162 155 void drm_panel_remove(struct drm_panel *panel);
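With the extended signature, panel drivers pass the device, funcs, and connector type in a single drm_panel_init() call instead of filling those fields by hand after init. A hedged sketch of the call pattern with stub stand-in types (the DRM_MODE_CONNECTOR_DSI value is illustrative of the DRM_MODE_CONNECTOR_* constants; funcs is left NULL for brevity):

```c
#include <assert.h>
#include <stddef.h>

/* Stub stand-ins for the kernel types named in the new signature. */
struct device { const char *name; };
struct drm_panel_funcs { int (*enable)(void *panel); };

#define DRM_MODE_CONNECTOR_DSI 16	/* illustrative; see drm_mode.h */

struct drm_panel {
	struct device *dev;
	const struct drm_panel_funcs *funcs;
	int connector_type;
};

/* Sketch of the extended initializer from the diff above. */
static void drm_panel_init(struct drm_panel *panel, struct device *dev,
			   const struct drm_panel_funcs *funcs,
			   int connector_type)
{
	panel->dev = dev;
	panel->funcs = funcs;
	panel->connector_type = connector_type;
}
```

Recording the connector type at init time is what lets the helpers create the drm_connector with the right type later, without driver-specific plumbing.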
-2
include/drm/drm_prime.h
···
61 61 struct drm_gem_object;
62 62 struct drm_file;
63 63
64 - struct device;
65 -
66 64 /* core prime functions */
67 65 struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
68 66 struct dma_buf_export_info *exp_info);
+26
include/drm/drm_print.h
···
34 34
35 35 #include <drm/drm.h>
36 36
37 + extern unsigned int drm_debug;
38 +
37 39 /**
38 40 * DOC: print
39 41 *
···
85 83 void __drm_puts_seq_file(struct drm_printer *p, const char *str);
86 84 void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf);
87 85 void __drm_printfn_debug(struct drm_printer *p, struct va_format *vaf);
86 + void __drm_printfn_err(struct drm_printer *p, struct va_format *vaf);
88 87
89 88 __printf(2, 3)
90 89 void drm_printf(struct drm_printer *p, const char *f, ...);
91 90 void drm_puts(struct drm_printer *p, const char *str);
92 91 void drm_print_regset32(struct drm_printer *p, struct debugfs_regset32 *regset);
92 + void drm_print_bits(struct drm_printer *p, unsigned long value,
93 + const char * const bits[], unsigned int nbits);
93 94
94 95 __printf(2, 0)
95 96 /**
···
232 227 return p;
233 228 }
234 229
230 + /**
231 + * drm_err_printer - construct a &drm_printer that outputs to pr_err()
232 + * @prefix: debug output prefix
233 + *
234 + * RETURNS:
235 + * The &drm_printer object
236 + */
237 + static inline struct drm_printer drm_err_printer(const char *prefix)
238 + {
239 + struct drm_printer p = {
240 + .printfn = __drm_printfn_err,
241 + .prefix = prefix
242 + };
243 + return p;
244 + }
245 +
235 246 /*
236 247 * The following categories are defined:
237 248 *
···
292 271 #define DRM_UT_STATE 0x40
293 272 #define DRM_UT_LEASE 0x80
294 273 #define DRM_UT_DP 0x100
274 +
275 + static inline bool drm_debug_enabled(unsigned int category)
276 + {
277 + return unlikely(drm_debug & category);
278 + }
295 279
296 280 __printf(3, 4)
297 281 void drm_dev_printk(const struct device *dev, const char *level,
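drm_debug_enabled() centralises the category-mask test that call sites previously open-coded as `drm_debug & category`. A self-contained replica of the check (category values copied from the diff; the kernel's `unlikely()` branch-prediction hint is dropped since it does not change the result):

```c
#include <assert.h>
#include <stdbool.h>

/* Debug categories, as in the diff above. */
#define DRM_UT_STATE 0x40
#define DRM_UT_LEASE 0x80
#define DRM_UT_DP    0x100

/* A module parameter in the kernel; a plain global here. */
static unsigned int drm_debug;

static bool drm_debug_enabled(unsigned int category)
{
	/* True when any requested category bit is set in the mask. */
	return (drm_debug & category) != 0;
}
```

Routing every check through one helper lets the implementation change later (e.g. to a tracepoint or per-device mask) without touching call sites.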
+31
include/drm/drm_rect.h
···
70 70 (r)->y1 >> 16, (((r)->y1 & 0xffff) * 15625) >> 10
71 71
72 72 /**
73 + * drm_rect_init - initialize the rectangle from x/y/w/h
74 + * @r: rectangle
75 + * @x: x coordinate
76 + * @y: y coordinate
77 + * @width: width
78 + * @height: height
79 + */
80 + static inline void drm_rect_init(struct drm_rect *r, int x, int y,
81 + int width, int height)
82 + {
83 + r->x1 = x;
84 + r->y1 = y;
85 + r->x2 = x + width;
86 + r->y2 = y + height;
87 + }
88 +
89 + /**
73 90 * drm_rect_adjust_size - adjust the size of the rectangle
74 91 * @r: rectangle to be adjusted
75 92 * @dw: horizontal adjustment
···
121 104 r->y1 += dy;
122 105 r->x2 += dx;
123 106 r->y2 += dy;
107 + }
108 +
109 + /**
110 + * drm_rect_translate_to - translate the rectangle to an absolute position
111 + * @r: rectangle to be translated
112 + * @x: horizontal position
113 + * @y: vertical position
114 + *
115 + * Move rectangle @r to @x in the horizontal direction,
116 + * and to @y in the vertical direction.
117 + */
118 + static inline void drm_rect_translate_to(struct drm_rect *r, int x, int y)
119 + {
120 + drm_rect_translate(r, x - r->x1, y - r->y1);
124 121 }
125 122
126 123 /**
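The two new helpers compose naturally: drm_rect_init() builds a rect from x/y/w/h, and drm_rect_translate_to() expresses an absolute move through the existing relative drm_rect_translate(). A self-contained copy of the three inline functions with a stand-in struct drm_rect:

```c
#include <assert.h>

struct drm_rect {
	int x1, y1, x2, y2;	/* two corners, as in drm_rect.h */
};

static void drm_rect_init(struct drm_rect *r, int x, int y,
			  int width, int height)
{
	r->x1 = x;
	r->y1 = y;
	r->x2 = x + width;
	r->y2 = y + height;
}

/* Pre-existing relative move, shown for context. */
static void drm_rect_translate(struct drm_rect *r, int dx, int dy)
{
	r->x1 += dx;
	r->y1 += dy;
	r->x2 += dx;
	r->y2 += dy;
}

/* Absolute move expressed through the relative helper, as in the diff. */
static void drm_rect_translate_to(struct drm_rect *r, int x, int y)
{
	drm_rect_translate(r, x - r->x1, y - r->y1);
}
```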
+13 -2
include/drm/drm_vblank.h
···
109 109 seqlock_t seqlock;
110 110
111 111 /**
112 - * @count: Current software vblank counter.
112 + * @count:
113 + *
114 + * Current software vblank counter.
115 + *
116 + * Note that for a given vblank counter value drm_crtc_handle_vblank()
117 + * and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time()
118 + * provide a barrier: Any writes done before calling
119 + * drm_crtc_handle_vblank() will be visible to callers of the later
120 + * functions, iff the vblank count is the same or a later one.
121 + *
122 + * IMPORTANT: This guarantee requires barriers, therefore never access
123 + * this field directly. Use drm_crtc_vblank_count() instead.
113 124 */
114 - u64 count;
125 + atomic64_t count;
115 126 /**
116 127 * @time: Vblank timestamp corresponding to @count.
117 128 */
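Switching @count from a plain u64 to atomic64_t forces readers through an accessor with the documented ordering. A userspace sketch of that accessor pattern using C11 atomics as a stand-in for the kernel's atomic64_t (function names are illustrative simplifications of the drm_crtc_* pairs named in the comment; the acquire/release pair mirrors the barrier guarantee described above):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

struct drm_vblank_crtc {
	_Atomic uint64_t count;	/* stand-in for the kernel's atomic64_t */
};

/* Writer side: publish a new vblank with release ordering, so writes
 * done before the increment are visible to readers that see it. */
static void handle_vblank(struct drm_vblank_crtc *vblank)
{
	atomic_fetch_add_explicit(&vblank->count, 1, memory_order_release);
}

/* Reader side: never touch the field directly; load with acquire. */
static uint64_t vblank_count(struct drm_vblank_crtc *vblank)
{
	return atomic_load_explicit(&vblank->count, memory_order_acquire);
}
```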
-104
include/drm/drm_vram_mm_helper.h
···
1 - /* SPDX-License-Identifier: GPL-2.0-or-later */
2 -
3 - #ifndef DRM_VRAM_MM_HELPER_H
4 - #define DRM_VRAM_MM_HELPER_H
5 -
6 - #include <drm/drm_file.h>
7 - #include <drm/drm_ioctl.h>
8 - #include <drm/ttm/ttm_bo_driver.h>
9 -
10 - struct drm_device;
11 -
12 - /**
13 - * struct drm_vram_mm_funcs - Callback functions for &struct drm_vram_mm
14 - * @evict_flags: Provides an implementation for struct \
15 - &ttm_bo_driver.evict_flags
16 - * @verify_access: Provides an implementation for \
17 - struct &ttm_bo_driver.verify_access
18 - *
19 - * These callback function integrate VRAM MM with TTM buffer objects. New
20 - * functions can be added if necessary.
21 - */
22 - struct drm_vram_mm_funcs {
23 - void (*evict_flags)(struct ttm_buffer_object *bo,
24 - struct ttm_placement *placement);
25 - int (*verify_access)(struct ttm_buffer_object *bo, struct file *filp);
26 - };
27 -
28 - /**
29 - * struct drm_vram_mm - An instance of VRAM MM
30 - * @vram_base: Base address of the managed video memory
31 - * @vram_size: Size of the managed video memory in bytes
32 - * @bdev: The TTM BO device.
33 - * @funcs: TTM BO functions
34 - *
35 - * The fields &struct drm_vram_mm.vram_base and
36 - * &struct drm_vram_mm.vrm_size are managed by VRAM MM, but are
37 - * available for public read access. Use the field
38 - * &struct drm_vram_mm.bdev to access the TTM BO device.
39 - */
40 - struct drm_vram_mm {
41 - uint64_t vram_base;
42 - size_t vram_size;
43 -
44 - struct ttm_bo_device bdev;
45 -
46 - const struct drm_vram_mm_funcs *funcs;
47 - };
48 -
49 - /**
50 - * drm_vram_mm_of_bdev() - \
51 - Returns the container of type &struct ttm_bo_device for field bdev.
52 - * @bdev: the TTM BO device
53 - *
54 - * Returns:
55 - * The containing instance of &struct drm_vram_mm
56 - */
57 - static inline struct drm_vram_mm *drm_vram_mm_of_bdev(
58 - struct ttm_bo_device *bdev)
59 - {
60 - return container_of(bdev, struct drm_vram_mm, bdev);
61 - }
62 -
63 - int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
64 - uint64_t vram_base, size_t vram_size,
65 - const struct drm_vram_mm_funcs *funcs);
66 - void drm_vram_mm_cleanup(struct drm_vram_mm *vmm);
67 -
68 - int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
69 - struct drm_vram_mm *vmm);
70 -
71 - /*
72 - * Helpers for integration with struct drm_device
73 - */
74 -
75 - struct drm_vram_mm *drm_vram_helper_alloc_mm(
76 - struct drm_device *dev, uint64_t vram_base, size_t vram_size,
77 - const struct drm_vram_mm_funcs *funcs);
78 - void drm_vram_helper_release_mm(struct drm_device *dev);
79 -
80 - /*
81 - * Helpers for &struct file_operations
82 - */
83 -
84 - int drm_vram_mm_file_operations_mmap(
85 - struct file *filp, struct vm_area_struct *vma);
86 -
87 - /**
88 - * define DRM_VRAM_MM_FILE_OPERATIONS - default callback functions for \
89 - &struct file_operations
90 - *
91 - * Drivers that use VRAM MM can use this macro to initialize
92 - * &struct file_operations with default functions.
93 - */
94 - #define DRM_VRAM_MM_FILE_OPERATIONS \
95 - .llseek = no_llseek, \
96 - .read = drm_read, \
97 - .poll = drm_poll, \
98 - .unlocked_ioctl = drm_ioctl, \
99 - .compat_ioctl = drm_compat_ioctl, \
100 - .mmap = drm_vram_mm_file_operations_mmap, \
101 - .open = drm_open, \
102 - .release = drm_release \
103 -
104 - #endif
+4 -2
include/drm/ttm/ttm_bo_driver.h
···
451 451 *
452 452 * @driver: Pointer to a struct ttm_bo_driver struct setup by the driver.
453 453 * @man: An array of mem_type_managers.
454 - * @vma_manager: Address space manager
454 + * @vma_manager: Address space manager (pointer)
455 455 * lru_lock: Spinlock that protects the buffer+device lru lists and
456 456 * ddestroy lists.
457 457 * @dev_mapping: A pointer to the struct address_space representing the
···
474 474 /*
475 475 * Protected by internal locks.
476 476 */
477 - struct drm_vma_offset_manager vma_manager;
477 + struct drm_vma_offset_manager *vma_manager;
478 478
479 479 /*
480 480 * Protected by the global:lru lock.
···
595 595 * @glob: A pointer to an initialized struct ttm_bo_global.
596 596 * @driver: A pointer to a struct ttm_bo_driver set up by the caller.
597 597 * @mapping: The address space to use for this bo.
598 + * @vma_manager: A pointer to a vma manager.
598 599 * @file_page_offset: Offset into the device address space that is available
599 600 * for buffer data. This ensures compatibility with other users of the
600 601 * address space.
···
607 606 int ttm_bo_device_init(struct ttm_bo_device *bdev,
608 607 struct ttm_bo_driver *driver,
609 608 struct address_space *mapping,
609 + struct drm_vma_offset_manager *vma_manager,
610 610 bool need_dma32);
611 611
612 612 /**
+1 -1
include/linux/fb.h
···
607 607 extern int register_framebuffer(struct fb_info *fb_info);
608 608 extern void unregister_framebuffer(struct fb_info *fb_info);
609 609 extern void unlink_framebuffer(struct fb_info *fb_info);
610 - extern int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id,
610 + extern int remove_conflicting_pci_framebuffers(struct pci_dev *pdev,
611 611 const char *name);
612 612 extern int remove_conflicting_framebuffers(struct apertures_struct *a,
613 613 const char *name, bool primary);
+5 -2
include/media/cec-notifier.h
···
93 93 * cec_notifier_cec_adap_unregister - decrease refcount and delete when the
94 94 * refcount reaches 0.
95 95 * @n: notifier. If NULL, then this function does nothing.
96 + * @adap: the cec adapter that registered this notifier.
96 97 */
97 - void cec_notifier_cec_adap_unregister(struct cec_notifier *n);
98 + void cec_notifier_cec_adap_unregister(struct cec_notifier *n,
99 + struct cec_adapter *adap);
98 100
99 101 /**
100 102 * cec_notifier_set_phys_addr - set a new physical address.
···
162 160 return (struct cec_notifier *)0xdeadfeed;
163 161 }
164 162
165 - static inline void cec_notifier_cec_adap_unregister(struct cec_notifier *n)
163 + static inline void cec_notifier_cec_adap_unregister(struct cec_notifier *n,
164 + struct cec_adapter *adap)
166 165 {
167 166 }
168 167
+25 -1
include/uapi/drm/drm_fourcc.h
···
648 648 * Further information on the use of AFBC modifiers can be found in
649 649 * Documentation/gpu/afbc.rst
650 650 */
651 - #define DRM_FORMAT_MOD_ARM_AFBC(__afbc_mode) fourcc_mod_code(ARM, __afbc_mode)
651 +
652 + /*
653 + * The top 4 bits (out of the 56 bits allotted for specifying vendor specific
654 + * modifiers) denote the category for modifiers. Currently we have only two
655 + * categories of modifiers, i.e. AFBC and MISC. We can have a maximum of sixteen
656 + * different categories.
657 + */
658 + #define DRM_FORMAT_MOD_ARM_CODE(__type, __val) \
659 + fourcc_mod_code(ARM, ((__u64)(__type) << 52) | ((__val) & 0x000fffffffffffffULL))
660 +
661 + #define DRM_FORMAT_MOD_ARM_TYPE_AFBC 0x00
662 + #define DRM_FORMAT_MOD_ARM_TYPE_MISC 0x01
663 +
664 + #define DRM_FORMAT_MOD_ARM_AFBC(__afbc_mode) \
665 + DRM_FORMAT_MOD_ARM_CODE(DRM_FORMAT_MOD_ARM_TYPE_AFBC, __afbc_mode)
652 666
653 667 /*
654 668 * AFBC superblock size
···
755 741 * Indicates that the buffer includes per-superblock content hints.
756 742 */
757 743 #define AFBC_FORMAT_MOD_BCH (1ULL << 11)
744 +
745 + /*
746 + * Arm 16x16 Block U-Interleaved modifier
747 + *
748 + * This is used by Arm Mali Utgard and Midgard GPUs. It divides the image
749 + * into 16x16 pixel blocks. Blocks are stored linearly in order, but pixels
750 + * in the block are reordered.
751 + */
752 + #define DRM_FORMAT_MOD_ARM_16X16_BLOCK_U_INTERLEAVED \
753 + DRM_FORMAT_MOD_ARM_CODE(DRM_FORMAT_MOD_ARM_TYPE_MISC, 1ULL)
758 754
759 755 /*
760 756 * Allwinner tiled modifier
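The new DRM_FORMAT_MOD_ARM_CODE() layout carves a 4-bit type field out of the top of the 56 vendor bits, leaving 52 bits of per-type payload. A self-contained replica of the bit arithmetic, including fourcc_mod_code() with the ARM vendor code from drm_fourcc.h, plus decode helpers to show where each field lands (macro argument names are simplified from the diff's `__type`/`__val`):

```c
#include <assert.h>
#include <stdint.h>

#define DRM_FORMAT_MOD_VENDOR_ARM 0x08	/* as in drm_fourcc.h */

/* Vendor in bits 63:56, 56-bit payload below. */
#define fourcc_mod_code(vendor, val) \
	(((uint64_t)DRM_FORMAT_MOD_VENDOR_##vendor << 56) | \
	 ((val) & 0x00ffffffffffffffULL))

/* Top 4 of the 56 vendor bits (bits 55:52) select the category. */
#define DRM_FORMAT_MOD_ARM_CODE(type, val) \
	fourcc_mod_code(ARM, ((uint64_t)(type) << 52) | \
			     ((val) & 0x000fffffffffffffULL))

#define DRM_FORMAT_MOD_ARM_TYPE_AFBC 0x00
#define DRM_FORMAT_MOD_ARM_TYPE_MISC 0x01

#define DRM_FORMAT_MOD_ARM_16X16_BLOCK_U_INTERLEAVED \
	DRM_FORMAT_MOD_ARM_CODE(DRM_FORMAT_MOD_ARM_TYPE_MISC, 1ULL)

/* Decode helpers, for illustration only. */
static unsigned int arm_mod_vendor(uint64_t mod) { return mod >> 56; }
static unsigned int arm_mod_type(uint64_t mod) { return (mod >> 52) & 0xf; }
static uint64_t arm_mod_value(uint64_t mod)
{
	return mod & 0x000fffffffffffffULL;
}
```

Because DRM_FORMAT_MOD_ARM_TYPE_AFBC is 0, every pre-existing AFBC modifier keeps its old numeric value, which is what makes the change backwards compatible.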
+5 -3
include/uapi/drm/v3d_drm.h
···
48 48 #define DRM_IOCTL_V3D_SUBMIT_TFU DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_TFU, struct drm_v3d_submit_tfu)
49 49 #define DRM_IOCTL_V3D_SUBMIT_CSD DRM_IOW(DRM_COMMAND_BASE + DRM_V3D_SUBMIT_CSD, struct drm_v3d_submit_csd)
50 50
51 + #define DRM_V3D_SUBMIT_CL_FLUSH_CACHE 0x01
52 +
51 53 /**
52 54 * struct drm_v3d_submit_cl - ioctl argument for submitting commands to the 3D
53 55 * engine.
···
63 61 * flushed by the time the render done IRQ happens, which is the
64 62 * trigger for out_sync. Any dirtying of cachelines by the job (only
65 63 * possible using TMU writes) must be flushed by the caller using the
66 - * CL's cache flush commands.
64 + * DRM_V3D_SUBMIT_CL_FLUSH_CACHE flag.
67 65 */
68 66 struct drm_v3d_submit_cl {
69 67 /* Pointer to the binner command list.
···
126 124 /* Number of BO handles passed in (size is that times 4). */
127 125 __u32 bo_handle_count;
128 126
129 - /* Pad, must be zero-filled. */
130 - __u32 pad;
127 + __u32 flags;
131 128 };
132 129
133 130 /**
···
194 193 DRM_V3D_PARAM_V3D_CORE0_IDENT2,
195 194 DRM_V3D_PARAM_SUPPORTS_TFU,
196 195 DRM_V3D_PARAM_SUPPORTS_CSD,
196 + DRM_V3D_PARAM_SUPPORTS_CACHE_FLUSH,
197 197 };
198 198
199 199 struct drm_v3d_get_param {
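Repurposing the zero-filled pad as a flags word is the standard uapi extension trick: old userspace already wrote 0 there, so zero flags preserves the old behaviour, and new userspace opts into the cache flush by setting the bit (after checking DRM_V3D_PARAM_SUPPORTS_CACHE_FLUSH). A trimmed stand-in sketch of the kernel-side check (the helper name is hypothetical; only the new field is modelled):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DRM_V3D_SUBMIT_CL_FLUSH_CACHE 0x01

/* Trimmed stand-in for struct drm_v3d_submit_cl: only the fields
 * relevant to the change. */
struct drm_v3d_submit_cl {
	uint32_t bo_handle_count;
	uint32_t flags;	/* was "Pad, must be zero-filled" */
};

/* Old userspace zero-filled the pad, so zero flags means old behaviour;
 * the helper name here is illustrative, not the kernel's. */
static bool v3d_job_wants_cache_flush(const struct drm_v3d_submit_cl *args)
{
	return args->flags & DRM_V3D_SUBMIT_CL_FLUSH_CACHE;
}
```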