Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2024-09-20' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.12:

UAPI Changes:
- Add panthor/DEV_QUERY_TIMESTAMP_INFO query.

Cross-subsystem Changes:
- Update dt bindings.
- Add documentation explaining default errnos for fences.
- Mark dma-buf heaps creation functions as __init.

Core Changes:
- Split DSC helpers from DP helpers.
- Clang build fixes for drm/mm test.
- Remove simple pipeline support from gem-vram; no users remain
after converting bochs.
- Add errno to drm_sched_start to distinguish between GPU and queue
reset.
- Add drm_framebuffer testcases.
- Fix uninitialized spinlock acquisition with CONFIG_DRM_PANIC=n.
- Use read_trylock instead of read_lock in dma_fence_begin_signalling to
quiet lockdep.

Driver Changes:
- Assorted small fixes and updates for tegra, host1x, imagination,
nouveau, panfrost, panthor, panel/ili9341, mali, exynos,
panel/samsung-s6e3fa7, ast, bridge/ti-sn65dsi86, panel/himax-hx83112a,
bridge/tc358767, bridge/imx8mp-hdmi-tx, panel/khadas-ts050,
panel/nt36523, panel/sony-acx565akm, kmb, accel/qaic, omap, v3d.
- Add bridge/TI TDP158.
- Assorted documentation updates.
- Convert bochs from simple drm to gem shmem, and check modes
against available memory.
- Many VC4 fixes, most related to scaling and YUV support.
- Convert some drivers to use SYSTEM_SLEEP_PM_OPS and RUNTIME_PM_OPS.
- Rockchip 4k@60 support.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/445713a6-2427-4c53-8ec2-3a894ec62405@linux.intel.com

+2764 -2130
+57
Documentation/devicetree/bindings/display/bridge/ti,tdp158.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/bridge/ti,tdp158.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: TI TDP158 HDMI to TMDS Redriver 8 + 9 + maintainers: 10 + - Arnaud Vrac <avrac@freebox.fr> 11 + - Pierre-Hugues Husson <phhusson@freebox.fr> 12 + 13 + properties: 14 + compatible: 15 + const: ti,tdp158 16 + 17 + # The reg property is required if and only if the device is connected 18 + # to an I2C bus. In pin strap mode, reg must not be specified. 19 + reg: 20 + description: I2C address of the device 21 + 22 + # Pin 36 = Operation Enable / Reset Pin 23 + # OE = L: Power Down Mode 24 + # OE = H: Normal Operation 25 + # Internal weak pullup - device resets on H to L transitions 26 + enable-gpios: 27 + description: GPIO controlling bridge enable 28 + 29 + vcc-supply: 30 + description: Power supply 3.3V 31 + 32 + vdd-supply: 33 + description: Power supply 1.1V 34 + 35 + ports: 36 + $ref: /schemas/graph.yaml#/properties/ports 37 + 38 + properties: 39 + port@0: 40 + $ref: /schemas/graph.yaml#/properties/port 41 + description: Bridge input 42 + 43 + port@1: 44 + $ref: /schemas/graph.yaml#/properties/port 45 + description: Bridge output 46 + 47 + required: 48 + - port@0 49 + - port@1 50 + 51 + required: 52 + - compatible 53 + - vcc-supply 54 + - vdd-supply 55 + - ports 56 + 57 + additionalProperties: false
-2
Documentation/devicetree/bindings/display/imx/fsl-imx-drm.txt
··· 119 119 - interface-pix-fmt: How this display is connected to the 120 120 display interface. Currently supported types: "rgb24", "rgb565", "bgr666" 121 121 and "lvds666". 122 - - edid: verbatim EDID data block describing attached display. 123 122 - ddc: phandle describing the i2c bus handling the display data 124 123 channel 125 124 - port@[0-1]: Port nodes with endpoint definitions as defined in ··· 130 131 131 132 disp0 { 132 133 compatible = "fsl,imx-parallel-display"; 133 - edid = [edid-data]; 134 134 interface-pix-fmt = "rgb24"; 135 135 136 136 port@0 {
-1
Documentation/devicetree/bindings/display/imx/ldb.txt
··· 62 62 display-timings are used instead. 63 63 64 64 Optional properties (required if display-timings are used): 65 - - ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing 66 65 - display-timings : A node that describes the display timings as defined in 67 66 Documentation/devicetree/bindings/display/panel/display-timing.txt. 68 67 - fsl,data-mapping : should be "spwg" or "jeida"
+1
Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
··· 26 26 - renesas,r9a07g054-mali 27 27 - rockchip,px30-mali 28 28 - rockchip,rk3568-mali 29 + - rockchip,rk3576-mali 29 30 - const: arm,mali-bifrost # Mali Bifrost GPU model/revision is fully discoverable 30 31 - items: 31 32 - enum:
+20 -7
Documentation/gpu/drm-uapi.rst
··· 305 305 ------------------ 306 306 307 307 The KMD is responsible for checking if the device needs a reset, and to perform 308 - it as needed. Usually a hang is detected when a job gets stuck executing. KMD 309 - should keep track of resets, because userspace can query any time about the 310 - reset status for a specific context. This is needed to propagate to the rest of 311 - the stack that a reset has happened. Currently, this is implemented by each 312 - driver separately, with no common DRM interface. Ideally this should be properly 313 - integrated at DRM scheduler to provide a common ground for all drivers. After a 314 - reset, KMD should reject new command submissions for affected contexts. 308 + it as needed. Usually a hang is detected when a job gets stuck executing. 309 + 310 + Propagation of errors to userspace has proven to be tricky since it goes in 311 + the opposite direction of the usual flow of commands. Because of this, vendor- 312 + independent error handling was added to the &dma_fence object; this way drivers 313 + can add an error code to their fences before signaling them. See the function 314 + dma_fence_set_error() on how to do this and for examples of error codes to use. 315 + 316 + The DRM scheduler also allows setting error codes on all pending fences when 317 + hardware submissions are restarted after a reset. Error codes are also 318 + forwarded from the hardware fence to the scheduler fence to bubble up errors 319 + to the higher levels of the stack and eventually userspace. 320 + 321 + Fence errors can be queried by userspace through the generic SYNC_IOC_FILE_INFO 322 + IOCTL as well as through driver-specific interfaces. 323 + 324 + In addition to setting fence errors, drivers should also keep track of resets per 325 + context; the DRM scheduler provides the drm_sched_entity_error() function as 326 + a helper for this use case. After a reset, KMD should reject new command 327 + submissions for affected contexts. 
315 328 316 329 User Mode Driver 317 330 ----------------
+16
Documentation/gpu/todo.rst
··· 834 834 835 835 Level: Advanced 836 836 837 + Querying errors from drm_syncobj 838 + ================================ 839 + 840 + The drm_syncobj container can be used by driver-independent code to signal 841 + completion of submissions. 842 + 843 + One minor feature still missing is a generic DRM IOCTL to query the error 844 + status of binary and timeline drm_syncobj. 845 + 846 + This should probably be improved by implementing the necessary kernel interface 847 + and adding support for that in the userspace stack. 848 + 849 + Contact: Christian König 850 + 851 + Level: Starter 852 + 837 853 Outside DRM 838 854 =========== 839 855 
+5 -38
drivers/accel/qaic/qaic_debugfs.c
··· 64 64 return 0; 65 65 } 66 66 67 - static int bootlog_fops_open(struct inode *inode, struct file *file) 68 - { 69 - return single_open(file, bootlog_show, inode->i_private); 70 - } 67 + DEFINE_SHOW_ATTRIBUTE(bootlog); 71 68 72 - static const struct file_operations bootlog_fops = { 73 - .owner = THIS_MODULE, 74 - .open = bootlog_fops_open, 75 - .read = seq_read, 76 - .llseek = seq_lseek, 77 - .release = single_release, 78 - }; 79 - 80 - static int read_dbc_fifo_size(struct seq_file *s, void *unused) 69 + static int fifo_size_show(struct seq_file *s, void *unused) 81 70 { 82 71 struct dma_bridge_chan *dbc = s->private; 83 72 ··· 74 85 return 0; 75 86 } 76 87 77 - static int fifo_size_open(struct inode *inode, struct file *file) 78 - { 79 - return single_open(file, read_dbc_fifo_size, inode->i_private); 80 - } 88 + DEFINE_SHOW_ATTRIBUTE(fifo_size); 81 89 82 - static const struct file_operations fifo_size_fops = { 83 - .owner = THIS_MODULE, 84 - .open = fifo_size_open, 85 - .read = seq_read, 86 - .llseek = seq_lseek, 87 - .release = single_release, 88 - }; 89 - 90 - static int read_dbc_queued(struct seq_file *s, void *unused) 90 + static int queued_show(struct seq_file *s, void *unused) 91 91 { 92 92 struct dma_bridge_chan *dbc = s->private; 93 93 u32 tail = 0, head = 0; ··· 93 115 return 0; 94 116 } 95 117 96 - static int queued_open(struct inode *inode, struct file *file) 97 - { 98 - return single_open(file, read_dbc_queued, inode->i_private); 99 - } 100 - 101 - static const struct file_operations queued_fops = { 102 - .owner = THIS_MODULE, 103 - .open = queued_open, 104 - .read = seq_read, 105 - .llseek = seq_lseek, 106 - .release = single_release, 107 - }; 118 + DEFINE_SHOW_ATTRIBUTE(queued); 108 119 109 120 void qaic_debugfs_init(struct qaic_drm_device *qddev) 110 121 {
+3 -3
drivers/dma-buf/dma-fence.c
··· 309 309 if (in_atomic()) 310 310 return true; 311 311 312 - /* ... and non-recursive readlock */ 313 - lock_acquire(&dma_fence_lockdep_map, 0, 0, 1, 1, NULL, _RET_IP_); 312 + /* ... and non-recursive successful read_trylock */ 313 + lock_acquire(&dma_fence_lockdep_map, 0, 1, 1, 1, NULL, _RET_IP_); 314 314 315 315 return false; 316 316 } ··· 341 341 lock_map_acquire(&dma_fence_lockdep_map); 342 342 lock_map_release(&dma_fence_lockdep_map); 343 343 if (tmp) 344 - lock_acquire(&dma_fence_lockdep_map, 0, 0, 1, 1, NULL, _THIS_IP_); 344 + lock_acquire(&dma_fence_lockdep_map, 0, 1, 1, 1, NULL, _THIS_IP_); 345 345 } 346 346 #endif 347 347
+2 -2
drivers/dma-buf/heaps/cma_heap.c
··· 366 366 .allocate = cma_heap_allocate, 367 367 }; 368 368 369 - static int __add_cma_heap(struct cma *cma, void *data) 369 + static int __init __add_cma_heap(struct cma *cma, void *data) 370 370 { 371 371 struct cma_heap *cma_heap; 372 372 struct dma_heap_export_info exp_info; ··· 391 391 return 0; 392 392 } 393 393 394 - static int add_default_cma_heap(void) 394 + static int __init add_default_cma_heap(void) 395 395 { 396 396 struct cma *default_cma = dev_get_cma_area(NULL); 397 397 int ret = 0;
+1 -1
drivers/dma-buf/heaps/system_heap.c
··· 421 421 .allocate = system_heap_allocate, 422 422 }; 423 423 424 - static int system_heap_create(void) 424 + static int __init system_heap_create(void) 425 425 { 426 426 struct dma_heap_export_info exp_info; 427 427
+1
drivers/gpu/drm/amd/amdgpu/Kconfig
··· 6 6 depends on !UML 7 7 select FW_LOADER 8 8 select DRM_DISPLAY_DP_HELPER 9 + select DRM_DISPLAY_DSC_HELPER 9 10 select DRM_DISPLAY_HDMI_HELPER 10 11 select DRM_DISPLAY_HDCP_HELPER 11 12 select DRM_DISPLAY_HELPER
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
··· 299 299 if (r) 300 300 goto out; 301 301 } else { 302 - drm_sched_start(&ring->sched); 302 + drm_sched_start(&ring->sched, 0); 303 303 } 304 304 } 305 305
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5824 5824 if (!amdgpu_ring_sched_ready(ring)) 5825 5825 continue; 5826 5826 5827 - drm_sched_start(&ring->sched); 5827 + drm_sched_start(&ring->sched, 0); 5828 5828 } 5829 5829 5830 5830 if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) ··· 6331 6331 if (!amdgpu_ring_sched_ready(ring)) 6332 6332 continue; 6333 6333 6334 - drm_sched_start(&ring->sched); 6334 + drm_sched_start(&ring->sched, 0); 6335 6335 } 6336 6336 6337 6337 amdgpu_device_unset_mp1_state(adev);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 149 149 atomic_inc(&ring->adev->gpu_reset_counter); 150 150 amdgpu_fence_driver_force_completion(ring); 151 151 if (amdgpu_ring_sched_ready(ring)) 152 - drm_sched_start(&ring->sched); 152 + drm_sched_start(&ring->sched, 0); 153 153 goto exit; 154 154 } 155 155 }
+72 -65
drivers/gpu/drm/ast/ast_dp.c
··· 149 149 return 0; 150 150 } 151 151 152 - static bool ast_dp_power_is_on(struct ast_device *ast) 152 + static bool ast_dp_get_phy_sleep(struct ast_device *ast) 153 153 { 154 - u8 vgacre3; 154 + u8 vgacre3 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xe3); 155 155 156 - vgacre3 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xe3); 157 - 158 - return !(vgacre3 & AST_DP_PHY_SLEEP); 156 + return (vgacre3 & AST_IO_VGACRE3_DP_PHY_SLEEP); 159 157 } 160 158 161 - static void ast_dp_power_on_off(struct drm_device *dev, bool on) 159 + static void ast_dp_set_phy_sleep(struct ast_device *ast, bool sleep) 162 160 { 163 - struct ast_device *ast = to_ast_device(dev); 164 - // Read and Turn off DP PHY sleep 165 - u8 bE3 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xE3, AST_DP_VIDEO_ENABLE); 161 + u8 vgacre3 = 0x00; 166 162 167 - // Turn on DP PHY sleep 168 - if (!on) 169 - bE3 |= AST_DP_PHY_SLEEP; 163 + if (sleep) 164 + vgacre3 |= AST_IO_VGACRE3_DP_PHY_SLEEP; 170 165 171 - // DP Power on/off 172 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE3, (u8) ~AST_DP_PHY_SLEEP, bE3); 173 - 166 + ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xe3, (u8)~AST_IO_VGACRE3_DP_PHY_SLEEP, 167 + vgacre3); 174 168 msleep(50); 175 169 } 176 170 ··· 186 192 drm_err(dev, "Link training failed\n"); 187 193 } 188 194 189 - static void ast_dp_set_on_off(struct drm_device *dev, bool on) 195 + static bool __ast_dp_wait_enable(struct ast_device *ast, bool enabled) 190 196 { 191 - struct ast_device *ast = to_ast_device(dev); 192 - u8 video_on_off = on; 193 - u32 i = 0; 197 + u8 vgacrdf_test = 0x00; 198 + u8 vgacrdf; 199 + unsigned int i; 194 200 195 - // Video On/Off 196 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE3, (u8) ~AST_DP_VIDEO_ENABLE, on); 201 + if (enabled) 202 + vgacrdf_test |= AST_IO_VGACRDF_DP_VIDEO_ENABLE; 197 203 198 - video_on_off <<= 4; 199 - while (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, 200 - ASTDP_MIRROR_VIDEO_ENABLE) != video_on_off) { 201 - // wait 1 ms 202 - mdelay(1); 203 - if (++i > 200)
204 - break; 204 + for (i = 0; i < 200; ++i) { 205 + if (i) 206 + mdelay(1); 207 + vgacrdf = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xdf, 208 + AST_IO_VGACRDF_DP_VIDEO_ENABLE); 209 + if (vgacrdf == vgacrdf_test) 210 + return true; 205 211 } 212 + 213 + return false; 214 + } 215 + 216 + static void ast_dp_set_enable(struct ast_device *ast, bool enabled) 217 + { 218 + struct drm_device *dev = &ast->base; 219 + u8 vgacre3 = 0x00; 220 + 221 + if (enabled) 222 + vgacre3 |= AST_IO_VGACRE3_DP_VIDEO_ENABLE; 223 + 224 + ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xe3, (u8)~AST_IO_VGACRE3_DP_VIDEO_ENABLE, 225 + vgacre3); 226 + 227 + drm_WARN_ON(dev, !__ast_dp_wait_enable(ast, enabled)); 206 228 } 207 229 208 230 static void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode) ··· 327 317 static void ast_astdp_encoder_helper_atomic_enable(struct drm_encoder *encoder, 328 318 struct drm_atomic_state *state) 329 319 { 330 - struct drm_device *dev = encoder->dev; 331 - struct ast_device *ast = to_ast_device(dev); 320 + struct ast_device *ast = to_ast_device(encoder->dev); 332 321 struct ast_connector *ast_connector = &ast->output.astdp.connector; 333 322 334 323 if (ast_connector->physical_status == connector_status_connected) { 335 - ast_dp_power_on_off(dev, AST_DP_POWER_ON); 324 + ast_dp_set_phy_sleep(ast, false); 336 325 ast_dp_link_training(ast); 337 326 338 327 ast_wait_for_vretrace(ast); 339 - ast_dp_set_on_off(dev, 1); 328 + ast_dp_set_enable(ast, true); 340 329 } 341 330 } 342 331 343 332 static void ast_astdp_encoder_helper_atomic_disable(struct drm_encoder *encoder, 344 333 struct drm_atomic_state *state) 345 334 { 346 - struct drm_device *dev = encoder->dev; 335 + struct ast_device *ast = to_ast_device(encoder->dev); 347 336 348 - ast_dp_set_on_off(dev, 0); 349 - ast_dp_power_on_off(dev, AST_DP_POWER_OFF); 337 + ast_dp_set_enable(ast, false); 338 + ast_dp_set_phy_sleep(ast, true); 350 339 } 351 340 352 341 static const struct drm_encoder_helper_funcs ast_astdp_encoder_helper_funcs = {
··· 392 383 bool force) 393 384 { 394 385 struct ast_connector *ast_connector = to_ast_connector(connector); 395 - struct drm_device *dev = connector->dev; 396 386 struct ast_device *ast = to_ast_device(connector->dev); 397 387 enum drm_connector_status status = connector_status_disconnected; 398 - bool power_is_on; 388 + bool phy_sleep; 399 389 400 390 mutex_lock(&ast->modeset_lock); 401 391 402 - power_is_on = ast_dp_power_is_on(ast); 403 - if (!power_is_on) 404 - ast_dp_power_on_off(dev, true); 392 + phy_sleep = ast_dp_get_phy_sleep(ast); 393 + if (phy_sleep) 394 + ast_dp_set_phy_sleep(ast, false); 405 395 406 396 if (ast_astdp_is_connected(ast)) 407 397 status = connector_status_connected; 408 398 409 - if (!power_is_on && status == connector_status_disconnected) 410 - ast_dp_power_on_off(dev, false); 399 + if (phy_sleep && status == connector_status_disconnected) 400 + ast_dp_set_phy_sleep(ast, true); 411 401 412 402 mutex_unlock(&ast->modeset_lock); 413 403 ··· 422 414 .detect_ctx = ast_astdp_connector_helper_detect_ctx, 423 415 }; 424 416 417 + /* 418 + * Output 419 + */ 420 + 425 421 static const struct drm_connector_funcs ast_astdp_connector_funcs = { 426 422 .reset = drm_atomic_helper_connector_reset, 427 423 .fill_modes = drm_helper_probe_single_connector_modes, ··· 434 422 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 435 423 }; 436 424 437 - static int ast_astdp_connector_init(struct drm_device *dev, struct drm_connector *connector) 438 - { 439 - int ret; 440 - 441 - ret = drm_connector_init(dev, connector, &ast_astdp_connector_funcs, 442 - DRM_MODE_CONNECTOR_DisplayPort); 443 - if (ret) 444 - return ret; 445 - 446 - drm_connector_helper_add(connector, &ast_astdp_connector_helper_funcs); 447 - 448 - connector->interlace_allowed = 0; 449 - connector->doublescan_allowed = 0; 450 - 451 - connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 452 -
453 - return 0; 454 - } 455 - 456 425 int ast_astdp_output_init(struct ast_device *ast) 457 426 { 458 427 struct drm_device *dev = &ast->base; 459 428 struct drm_crtc *crtc = &ast->crtc; 460 - struct drm_encoder *encoder = &ast->output.astdp.encoder; 461 - struct ast_connector *ast_connector = &ast->output.astdp.connector; 462 - struct drm_connector *connector = &ast_connector->base; 429 + struct drm_encoder *encoder; 430 + struct ast_connector *ast_connector; 431 + struct drm_connector *connector; 463 432 int ret; 464 433 434 + /* encoder */ 435 + 436 + encoder = &ast->output.astdp.encoder; 465 437 ret = drm_encoder_init(dev, encoder, &ast_astdp_encoder_funcs, 466 438 DRM_MODE_ENCODER_TMDS, NULL); 467 439 if (ret) ··· 454 458 455 459 encoder->possible_crtcs = drm_crtc_mask(crtc); 456 460 457 - ret = ast_astdp_connector_init(dev, connector); 461 + /* connector */ 462 + 463 + ast_connector = &ast->output.astdp.connector; 464 + connector = &ast_connector->base; 465 + ret = drm_connector_init(dev, connector, &ast_astdp_connector_funcs, 466 + DRM_MODE_CONNECTOR_DisplayPort); 458 467 if (ret) 459 468 return ret; 469 + drm_connector_helper_add(connector, &ast_astdp_connector_helper_funcs); 470 + 471 + connector->interlace_allowed = 0; 472 + connector->doublescan_allowed = 0; 473 + connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 474 + 460 475 ast_connector->physical_status = connector->status; 461 476 462 477 ret = drm_connector_attach_encoder(connector, encoder);
+52 -59
drivers/gpu/drm/ast/ast_dp501.c
··· 21 21 ast->dp501_fw = NULL; 22 22 } 23 23 24 - static int ast_load_dp501_microcode(struct drm_device *dev) 24 + static int ast_load_dp501_microcode(struct ast_device *ast) 25 25 { 26 - struct ast_device *ast = to_ast_device(dev); 26 + struct drm_device *dev = &ast->base; 27 27 int ret; 28 28 29 29 ret = request_firmware(&ast->dp501_fw, "ast_dp501_fw.bin", dev->dev); ··· 109 109 } 110 110 #endif 111 111 112 - static bool ast_write_cmd(struct drm_device *dev, u8 data) 112 + static bool ast_write_cmd(struct ast_device *ast, u8 data) 113 113 { 114 - struct ast_device *ast = to_ast_device(dev); 115 114 int retry = 0; 115 + 116 116 if (wait_nack(ast)) { 117 117 send_nack(ast); 118 118 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x9a, 0x00, data); ··· 131 131 return false; 132 132 } 133 133 134 - static bool ast_write_data(struct drm_device *dev, u8 data) 134 + static bool ast_write_data(struct ast_device *ast, u8 data) 135 135 { 136 - struct ast_device *ast = to_ast_device(dev); 137 - 138 136 if (wait_nack(ast)) { 139 137 send_nack(ast); 140 138 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x9a, 0x00, data); ··· 173 175 } 174 176 #endif 175 177 176 - static void ast_set_dp501_video_output(struct drm_device *dev, u8 mode) 178 + static void ast_set_dp501_video_output(struct ast_device *ast, u8 mode) 177 179 { 178 - ast_write_cmd(dev, 0x40); 179 - ast_write_data(dev, mode); 180 + ast_write_cmd(ast, 0x40); 181 + ast_write_data(ast, mode); 180 182 181 183 msleep(10); 182 184 } ··· 186 188 return ast_mindwm(ast, 0x1e6e2104) & 0x7fffffff; 187 189 } 188 190 189 - bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size) 191 + bool ast_backup_fw(struct ast_device *ast, u8 *addr, u32 size) 190 192 { 191 - struct ast_device *ast = to_ast_device(dev); 192 193 u32 i, data; 193 194 u32 boot_address; 194 195 ··· 204 207 return false; 205 208 } 206 209 207 - static bool ast_launch_m68k(struct drm_device *dev) 210 + static bool ast_launch_m68k(struct ast_device *ast) 208 211 {
209 - struct ast_device *ast = to_ast_device(dev); 210 212 u32 i, data, len = 0; 211 213 u32 boot_address; 212 214 u8 *fw_addr = NULL; ··· 222 226 len = 32*1024; 223 227 } else { 224 228 if (!ast->dp501_fw && 225 - ast_load_dp501_microcode(dev) < 0) 229 + ast_load_dp501_microcode(ast) < 0) 226 230 return false; 227 231 228 232 fw_addr = (u8 *)ast->dp501_fw->data; ··· 344 348 return true; 345 349 } 346 350 347 - static bool ast_init_dvo(struct drm_device *dev) 351 + static bool ast_init_dvo(struct ast_device *ast) 348 352 { 349 - struct ast_device *ast = to_ast_device(dev); 350 353 u8 jreg; 351 354 u32 data; 352 355 ast_write32(ast, 0xf004, 0x1e6e0000); ··· 416 421 } 417 422 418 423 419 - static void ast_init_analog(struct drm_device *dev) 424 + static void ast_init_analog(struct ast_device *ast) 420 425 { 421 - struct ast_device *ast = to_ast_device(dev); 422 426 u32 data; 423 427 424 428 /* ··· 442 448 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x00); 443 449 } 444 450 445 - void ast_init_3rdtx(struct drm_device *dev) 451 + void ast_init_3rdtx(struct ast_device *ast) 446 452 { 447 - struct ast_device *ast = to_ast_device(dev); 448 - u8 jreg; 453 + u8 vgacrd1; 449 454 450 455 if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast)) { 451 - jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, 0xff); 452 - switch (jreg & 0x0e) { 453 - case 0x04: 454 - ast_init_dvo(dev); 456 + vgacrd1 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, 457 + AST_IO_VGACRD1_TX_TYPE_MASK); 458 + switch (vgacrd1) { 459 + case AST_IO_VGACRD1_TX_SIL164_VBIOS: 460 + ast_init_dvo(ast); 455 461 break; 456 - case 0x08: 457 - ast_launch_m68k(dev); 462 + case AST_IO_VGACRD1_TX_DP501_VBIOS: 463 + ast_launch_m68k(ast); 458 464 break; 459 - case 0x0c: 460 - ast_init_dvo(dev); 465 + case AST_IO_VGACRD1_TX_FW_EMBEDDED_FW: 466 + ast_init_dvo(ast); 461 467 break; 462 468 default: 463 - if (ast->tx_chip_types & BIT(AST_TX_SIL164)) 464 - ast_init_dvo(dev); 469 + if (ast->tx_chip == AST_TX_SIL164) 470 + ast_init_dvo(ast);
465 471 else 466 - ast_init_analog(dev); 472 + ast_init_analog(ast); 467 473 } 468 474 } 469 475 ··· 479 485 static void ast_dp501_encoder_helper_atomic_enable(struct drm_encoder *encoder, 480 486 struct drm_atomic_state *state) 481 487 { 482 - struct drm_device *dev = encoder->dev; 488 + struct ast_device *ast = to_ast_device(encoder->dev); 483 489 484 - ast_set_dp501_video_output(dev, 1); 490 + ast_set_dp501_video_output(ast, 1); 485 491 } 486 492 487 493 static void ast_dp501_encoder_helper_atomic_disable(struct drm_encoder *encoder, 488 494 struct drm_atomic_state *state) 489 495 { 490 - struct drm_device *dev = encoder->dev; 496 + struct ast_device *ast = to_ast_device(encoder->dev); 491 497 492 - ast_set_dp501_video_output(dev, 0); 498 + ast_set_dp501_video_output(ast, 0); 493 499 } 494 500 495 501 static const struct drm_encoder_helper_funcs ast_dp501_encoder_helper_funcs = { ··· 561 567 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 562 568 }; 563 569 564 - static int ast_dp501_connector_init(struct drm_device *dev, struct drm_connector *connector) 565 - { 566 - int ret; 567 - 568 - ret = drm_connector_init(dev, connector, &ast_dp501_connector_funcs, 569 - DRM_MODE_CONNECTOR_DisplayPort); 570 - if (ret) 571 - return ret; 572 - 573 - drm_connector_helper_add(connector, &ast_dp501_connector_helper_funcs); 574 - 575 - connector->interlace_allowed = 0; 576 - connector->doublescan_allowed = 0; 577 - 578 - connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 579 - 580 - return 0; 581 - } 570 + /* 571 + * Output 572 + */ 582 573 583 574 int ast_dp501_output_init(struct ast_device *ast) 584 575 { 585 576 struct drm_device *dev = &ast->base; 586 577 struct drm_crtc *crtc = &ast->crtc; 587 - struct drm_encoder *encoder = &ast->output.dp501.encoder; 588 - struct ast_connector *ast_connector = &ast->output.dp501.connector; 589 - struct drm_connector *connector = &ast_connector->base; 578 + struct drm_encoder *encoder;
579 + struct ast_connector *ast_connector; 580 + struct drm_connector *connector; 590 581 int ret; 591 582 583 + /* encoder */ 584 + 585 + encoder = &ast->output.dp501.encoder; 592 586 ret = drm_encoder_init(dev, encoder, &ast_dp501_encoder_funcs, 593 587 DRM_MODE_ENCODER_TMDS, NULL); 594 588 if (ret) ··· 585 603 586 604 encoder->possible_crtcs = drm_crtc_mask(crtc); 587 605 588 - ret = ast_dp501_connector_init(dev, connector); 606 + /* connector */ 607 + 608 + ast_connector = &ast->output.dp501.connector; 609 + connector = &ast_connector->base; 610 + ret = drm_connector_init(dev, connector, &ast_dp501_connector_funcs, 611 + DRM_MODE_CONNECTOR_DisplayPort); 589 612 if (ret) 590 613 return ret; 614 + drm_connector_helper_add(connector, &ast_dp501_connector_helper_funcs); 615 + 616 + connector->interlace_allowed = 0; 617 + connector->doublescan_allowed = 0; 618 + connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 619 + 591 620 ast_connector->physical_status = connector->status; 592 621 593 622 ret = drm_connector_attach_encoder(connector, encoder);
+1 -1
drivers/gpu/drm/ast/ast_drv.c
··· 396 396 ast_enable_vga(ast->ioregs); 397 397 ast_open_key(ast->ioregs); 398 398 ast_enable_mmio(dev->dev, ast->ioregs); 399 - ast_post_gpu(dev); 399 + ast_post_gpu(ast); 400 400 401 401 return drm_mode_config_helper_resume(dev); 402 402 }
+6 -13
drivers/gpu/drm/ast/ast_drv.h
··· 91 91 AST_TX_ASTDP, 92 92 }; 93 93 94 - #define AST_TX_NONE_BIT BIT(AST_TX_NONE) 95 - #define AST_TX_SIL164_BIT BIT(AST_TX_SIL164) 96 - #define AST_TX_DP501_BIT BIT(AST_TX_DP501) 97 - #define AST_TX_ASTDP_BIT BIT(AST_TX_ASTDP) 98 - 99 94 enum ast_config_mode { 100 95 ast_use_p2a, 101 96 ast_use_dt, ··· 182 187 183 188 struct mutex modeset_lock; /* Protects access to modeset I/O registers in ioregs */ 184 189 190 + enum ast_tx_chip tx_chip; 191 + 185 192 struct ast_plane primary_plane; 186 193 struct ast_plane cursor_plane; 187 194 struct drm_crtc crtc; 188 - struct { 195 + union { 189 196 struct { 190 197 struct drm_encoder encoder; 191 198 struct ast_connector connector; ··· 208 211 209 212 bool support_wide_screen; 210 213 211 - unsigned long tx_chip_types; /* bitfield of enum ast_chip_type */ 212 214 u8 *dp501_fw_addr; 213 215 const struct firmware *dp501_fw; /* dp501 fw */ 214 216 }; ··· 403 407 #define AST_DP501_LINKRATE 0xf014 404 408 #define AST_DP501_EDID_DATA 0xf020 405 409 406 - #define AST_DP_POWER_ON true 407 - #define AST_DP_POWER_OFF false 408 - 409 410 /* 410 411 * ASTDP resoultion table: 411 412 * EX: ASTDP_A_B_C: ··· 446 453 int ast_mm_init(struct ast_device *ast); 447 454 448 455 /* ast post */ 449 - void ast_post_gpu(struct drm_device *dev); 456 + void ast_post_gpu(struct ast_device *ast); 450 457 u32 ast_mindwm(struct ast_device *ast, u32 r); 451 458 void ast_moutdwm(struct ast_device *ast, u32 r, u32 v); 452 459 void ast_patch_ahb_2500(void __iomem *regs); ··· 455 462 int ast_sil164_output_init(struct ast_device *ast); 456 463 457 464 /* ast dp501 */ 458 - bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size); 459 - void ast_init_3rdtx(struct drm_device *dev); 465 + bool ast_backup_fw(struct ast_device *ast, u8 *addr, u32 size); 466 + void ast_init_3rdtx(struct ast_device *ast); 460 467 int ast_dp501_output_init(struct ast_device *ast); 461 468 462 469 /* aspeed DP */
+41 -26
drivers/gpu/drm/ast/ast_main.c
··· 68 68 69 69 static void ast_detect_tx_chip(struct ast_device *ast, bool need_post) 70 70 { 71 + static const char * const info_str[] = { 72 + "analog VGA", 73 + "Sil164 TMDS transmitter", 74 + "DP501 DisplayPort transmitter", 75 + "ASPEED DisplayPort transmitter", 76 + }; 77 + 71 78 struct drm_device *dev = &ast->base; 72 - u8 jreg; 79 + u8 jreg, vgacrd1; 80 + 81 + /* 82 + * Several of the listed TX chips are not explicitly supported 83 + * by the ast driver. If these exist in real-world devices, they 84 + * are most likely reported as VGA or SIL164 outputs. We warn here 85 + * to get bug reports for these devices. If none come in for some 86 + * time, we can begin to fail device probing on these values. 87 + */ 88 + vgacrd1 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, AST_IO_VGACRD1_TX_TYPE_MASK); 89 + drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_ITE66121_VBIOS, 90 + "ITE IT66121 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast)); 91 + drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_CH7003_VBIOS, 92 + "Chrontel CH7003 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast)); 93 + drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_ANX9807_VBIOS, 94 + "Analogix ANX9807 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast)); 73 95 74 96 /* Check 3rd Tx option (digital output afaik) */ 75 97 ast->tx_chip = AST_TX_NONE; 76 98 77 99 /* 78 100 * VGACRA3 Enhanced Color Mode Register, check if DVO is already ··· 107 85 if (!need_post) { 108 86 jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xff); 109 87 if (jreg & 0x80) 110 - ast->tx_chip_types = AST_TX_SIL164_BIT; 88 + ast->tx_chip = AST_TX_SIL164; 111 89 } 112 90 113 91 if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast)) { ··· 116 94 * the SOC scratch register #1 bits 11:8 (interestingly marked 117 95 * as "reserved" in the spec) 118 96 */ 119 97 jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, 98 + AST_IO_VGACRD1_TX_TYPE_MASK);
120 99 switch (jreg) { 121 - case 0x04: 122 - ast->tx_chip_types = AST_TX_SIL164_BIT; 100 + case AST_IO_VGACRD1_TX_SIL164_VBIOS: 101 + ast->tx_chip = AST_TX_SIL164; 123 102 break; 124 - case 0x08: 103 + case AST_IO_VGACRD1_TX_DP501_VBIOS: 125 104 ast->dp501_fw_addr = drmm_kzalloc(dev, 32*1024, GFP_KERNEL); 126 105 if (ast->dp501_fw_addr) { 127 106 /* backup firmware */ 128 - if (ast_backup_fw(dev, ast->dp501_fw_addr, 32*1024)) { 107 + if (ast_backup_fw(ast, ast->dp501_fw_addr, 32*1024)) { 129 108 drmm_kfree(dev, ast->dp501_fw_addr); 130 109 ast->dp501_fw_addr = NULL; 131 110 } 132 111 } 133 112 fallthrough; 134 - case 0x0c: 135 - ast->tx_chip_types = AST_TX_DP501_BIT; 113 + case AST_IO_VGACRD1_TX_FW_EMBEDDED_FW: 114 + ast->tx_chip = AST_TX_DP501; 136 115 } 137 116 } else if (IS_AST_GEN7(ast)) { 138 - if (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, TX_TYPE_MASK) == 139 - ASTDP_DPMCU_TX) { 117 + if (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, AST_IO_VGACRD1_TX_TYPE_MASK) == 118 + AST_IO_VGACRD1_TX_ASTDP) { 140 119 int ret = ast_dp_launch(ast); 141 120 142 121 if (!ret) 143 - ast->tx_chip_types = AST_TX_ASTDP_BIT; 122 + ast->tx_chip = AST_TX_ASTDP; 144 123 } 145 124 } 146 125 147 - /* Print stuff for diagnostic purposes */ 148 - if (ast->tx_chip_types & AST_TX_NONE_BIT) 149 - drm_info(dev, "Using analog VGA\n"); 150 - if (ast->tx_chip_types & AST_TX_SIL164_BIT) 151 - drm_info(dev, "Using Sil164 TMDS transmitter\n"); 152 - if (ast->tx_chip_types & AST_TX_DP501_BIT) 153 - drm_info(dev, "Using DP501 DisplayPort transmitter\n"); 154 - if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 155 - drm_info(dev, "Using ASPEED DisplayPort transmitter\n"); 126 + drm_info(dev, "Using %s\n", info_str[ast->tx_chip]); 156 127 } 157 128 158 - static int ast_get_dram_info(struct drm_device *dev) 129 + static int ast_get_dram_info(struct ast_device *ast) 159 130 { 131 + struct drm_device *dev = &ast->base; 160 132 struct device_node *np = dev->dev->of_node;
161 - struct ast_device *ast = to_ast_device(dev); 162 133 uint32_t mcr_cfg, mcr_scu_mpll, mcr_scu_strap; 163 134 uint32_t denum, num, div, ref_pll, dsel; 164 135 ··· 293 278 ast_detect_widescreen(ast); 294 279 ast_detect_tx_chip(ast, need_post); 295 280 296 - ret = ast_get_dram_info(dev); 281 + ret = ast_get_dram_info(ast); 297 282 if (ret) 298 283 return ERR_PTR(ret); 299 284 ··· 301 286 ast->mclk, ast->dram_type, ast->dram_bus_width); 302 287 303 288 if (need_post) 304 - ast_post_gpu(dev); 289 + ast_post_gpu(ast); 305 290 306 291 ret = ast_mm_init(ast); 307 292 if (ret)
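The removed per-bit `drm_info()` calls in `ast_detect_tx_chip()` collapse into one enum-indexed lookup, `info_str[ast->tx_chip]`. A minimal user-space sketch of that table — the enum members and the message strings come from the diff; the rest is a stand-in:

```c
#include <assert.h>
#include <string.h>

/* Transmitter types, matching the diff's single-valued ast->tx_chip */
enum ast_tx_chip { AST_TX_NONE, AST_TX_SIL164, AST_TX_DP501, AST_TX_ASTDP };

/* Enum-indexed message table replacing the four conditional drm_info()
 * calls; the strings are the ones the removed calls printed. */
static const char * const info_str[] = {
        [AST_TX_NONE]   = "analog VGA",
        [AST_TX_SIL164] = "Sil164 TMDS transmitter",
        [AST_TX_DP501]  = "DP501 DisplayPort transmitter",
        [AST_TX_ASTDP]  = "ASPEED DisplayPort transmitter",
};
```

Since exactly one transmitter is active per device, a plain enum plus table lookup replaces the old bitmask scan.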
+16 -18
drivers/gpu/drm/ast/ast_mode.c
··· 1287 1287 .atomic_destroy_state = ast_crtc_atomic_destroy_state, 1288 1288 }; 1289 1289 1290 - static int ast_crtc_init(struct drm_device *dev) 1290 + static int ast_crtc_init(struct ast_device *ast) 1291 1291 { 1292 - struct ast_device *ast = to_ast_device(dev); 1292 + struct drm_device *dev = &ast->base; 1293 1293 struct drm_crtc *crtc = &ast->crtc; 1294 1294 int ret; 1295 1295 ··· 1396 1396 if (ret) 1397 1397 return ret; 1398 1398 1399 - ast_crtc_init(dev); 1399 + ret = ast_crtc_init(ast); 1400 + if (ret) 1401 + return ret; 1400 1402 1401 - if (ast->tx_chip_types & AST_TX_NONE_BIT) { 1403 + switch (ast->tx_chip) { 1404 + case AST_TX_NONE: 1402 1405 ret = ast_vga_output_init(ast); 1403 - if (ret) 1404 - return ret; 1405 - } 1406 - if (ast->tx_chip_types & AST_TX_SIL164_BIT) { 1406 + break; 1407 + case AST_TX_SIL164: 1407 1408 ret = ast_sil164_output_init(ast); 1408 - if (ret) 1409 - return ret; 1410 - } 1411 - if (ast->tx_chip_types & AST_TX_DP501_BIT) { 1409 + break; 1410 + case AST_TX_DP501: 1412 1411 ret = ast_dp501_output_init(ast); 1413 - if (ret) 1414 - return ret; 1415 - } 1416 - if (ast->tx_chip_types & AST_TX_ASTDP_BIT) { 1412 + break; 1413 + case AST_TX_ASTDP: 1417 1414 ret = ast_astdp_output_init(ast); 1418 - if (ret) 1419 - return ret; 1415 + break; 1420 1416 } 1417 + if (ret) 1418 + return ret; 1421 1419 1422 1420 drm_mode_config_reset(dev); 1423 1421
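The conversion above trades four independent `if (tx_chip_types & ...)` blocks, each with its own error check, for one switch followed by a single check. A user-space sketch of the same shape (the `init_*` helper names are hypothetical stand-ins for the `ast_*_output_init()` functions):

```c
#include <assert.h>

enum tx { TX_NONE, TX_SIL164, TX_DP501, TX_ASTDP };

/* Stand-ins for the per-output init helpers (0 on success) */
static int init_vga(void)    { return 0; }
static int init_sil164(void) { return 0; }
static int init_dp501(void)  { return 0; }
static int init_astdp(void)  { return -1; } /* pretend this one fails */

/* Mirrors the reworked dispatch: exactly one output is initialized,
 * and a single error check follows the switch. */
static int init_output(enum tx chip)
{
        int ret = 0;

        switch (chip) {
        case TX_NONE:   ret = init_vga();    break;
        case TX_SIL164: ret = init_sil164(); break;
        case TX_DP501:  ret = init_dp501();  break;
        case TX_ASTDP:  ret = init_astdp();  break;
        }
        return ret;
}
```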
+15 -21
drivers/gpu/drm/ast/ast_post.c
··· 34 34 #include "ast_dram_tables.h" 35 35 #include "ast_drv.h" 36 36 37 - static void ast_post_chip_2300(struct drm_device *dev); 38 - static void ast_post_chip_2500(struct drm_device *dev); 37 + static void ast_post_chip_2300(struct ast_device *ast); 38 + static void ast_post_chip_2500(struct ast_device *ast); 39 39 40 40 static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff }; 41 41 static const u8 extreginfo_ast2300[] = { 0x0f, 0x04, 0x1f, 0xff }; 42 42 43 - static void 44 - ast_set_def_ext_reg(struct drm_device *dev) 43 + static void ast_set_def_ext_reg(struct ast_device *ast) 45 44 { 46 - struct ast_device *ast = to_ast_device(dev); 47 45 u8 i, index, reg; 48 46 const u8 *ext_reg_info; 49 47 ··· 250 252 251 253 252 254 253 - static void ast_init_dram_reg(struct drm_device *dev) 255 + static void ast_init_dram_reg(struct ast_device *ast) 254 256 { 255 - struct ast_device *ast = to_ast_device(dev); 256 257 u8 j; 257 258 u32 data, temp, i; 258 259 const struct ast_dramstruct *dram_reg_info; ··· 340 343 } while ((j & 0x40) == 0); 341 344 } 342 345 343 - void ast_post_gpu(struct drm_device *dev) 346 + void ast_post_gpu(struct ast_device *ast) 344 347 { 345 - struct ast_device *ast = to_ast_device(dev); 346 - 347 - ast_set_def_ext_reg(dev); 348 + ast_set_def_ext_reg(ast); 348 349 349 350 if (IS_AST_GEN7(ast)) { 350 - if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 351 + if (ast->tx_chip == AST_TX_ASTDP) 351 352 ast_dp_launch(ast); 352 353 } else if (ast->config_mode == ast_use_p2a) { 353 354 if (IS_AST_GEN6(ast)) 354 - ast_post_chip_2500(dev); 355 + ast_post_chip_2500(ast); 355 356 else if (IS_AST_GEN5(ast) || IS_AST_GEN4(ast)) 356 - ast_post_chip_2300(dev); 357 + ast_post_chip_2300(ast); 357 358 else 358 - ast_init_dram_reg(dev); 359 + ast_init_dram_reg(ast); 359 360 360 - ast_init_3rdtx(dev); 361 + ast_init_3rdtx(ast); 361 362 } else { 362 - if (ast->tx_chip_types & AST_TX_SIL164_BIT) 363 + if (ast->tx_chip == AST_TX_SIL164) 363 364 ast_set_index_reg_mask(ast, 
AST_IO_VGACRI, 0xa3, 0xcf, 0x80); /* Enable DVO */ 364 365 } 365 366 } ··· 1564 1569 1565 1570 } 1566 1571 1567 - static void ast_post_chip_2300(struct drm_device *dev) 1572 + static void ast_post_chip_2300(struct ast_device *ast) 1568 1573 { 1569 - struct ast_device *ast = to_ast_device(dev); 1570 1574 struct ast2300_dram_param param; 1571 1575 u32 temp; 1572 1576 u8 reg; ··· 2032 2038 __ast_moutdwm(regs, 0x1e6e207c, 0x08000000); /* clear fast reset */ 2033 2039 } 2034 2040 2035 - void ast_post_chip_2500(struct drm_device *dev) 2041 + void ast_post_chip_2500(struct ast_device *ast) 2036 2042 { 2037 - struct ast_device *ast = to_ast_device(dev); 2043 + struct drm_device *dev = &ast->base; 2038 2044 u32 temp; 2039 2045 u8 reg; 2040 2046
+15 -26
drivers/gpu/drm/ast/ast_reg.h
··· 37 37 #define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */ 38 38 #define AST_IO_VGACRCB_HWC_ENABLED BIT(1) 39 39 40 - #define AST_IO_VGACRD1_MCU_FW_EXECUTING BIT(5) 40 + #define AST_IO_VGACRD1_MCU_FW_EXECUTING BIT(5) 41 + /* Display Transmitter Type */ 42 + #define AST_IO_VGACRD1_TX_TYPE_MASK GENMASK(3, 1) 43 + #define AST_IO_VGACRD1_NO_TX 0x00 44 + #define AST_IO_VGACRD1_TX_ITE66121_VBIOS 0x02 45 + #define AST_IO_VGACRD1_TX_SIL164_VBIOS 0x04 46 + #define AST_IO_VGACRD1_TX_CH7003_VBIOS 0x06 47 + #define AST_IO_VGACRD1_TX_DP501_VBIOS 0x08 48 + #define AST_IO_VGACRD1_TX_ANX9807_VBIOS 0x0a 49 + #define AST_IO_VGACRD1_TX_FW_EMBEDDED_FW 0x0c /* special case of DP501 */ 50 + #define AST_IO_VGACRD1_TX_ASTDP 0x0e 51 + 41 52 #define AST_IO_VGACRD7_EDID_VALID_FLAG BIT(0) 42 53 #define AST_IO_VGACRDC_LINK_SUCCESS BIT(0) 43 54 #define AST_IO_VGACRDF_HPD BIT(0) 55 + #define AST_IO_VGACRDF_DP_VIDEO_ENABLE BIT(4) /* mirrors AST_IO_VGACRE3_DP_VIDEO_ENABLE */ 56 + #define AST_IO_VGACRE3_DP_VIDEO_ENABLE BIT(0) 57 + #define AST_IO_VGACRE3_DP_PHY_SLEEP BIT(4) 44 58 #define AST_IO_VGACRE5_EDID_READ_DONE BIT(0) 45 59 46 60 #define AST_IO_VGAIR1_R (0x5A) 47 61 #define AST_IO_VGAIR1_VREFRESH BIT(3) 48 62 49 - /* 50 - * Display Transmitter Type 51 - */ 52 - 53 - #define TX_TYPE_MASK GENMASK(3, 1) 54 - #define NO_TX (0 << 1) 55 - #define ITE66121_VBIOS_TX (1 << 1) 56 - #define SI164_VBIOS_TX (2 << 1) 57 - #define CH7003_VBIOS_TX (3 << 1) 58 - #define DP501_VBIOS_TX (4 << 1) 59 - #define ANX9807_VBIOS_TX (5 << 1) 60 - #define TX_FW_EMBEDDED_FW_TX (6 << 1) 61 - #define ASTDP_DPMCU_TX (7 << 1) 62 63 63 64 #define AST_VRAM_INIT_STATUS_MASK GENMASK(7, 6) 64 65 //#define AST_VRAM_INIT_BY_BMC BIT(7) ··· 68 67 /* 69 68 * AST DisplayPort 70 69 */ 71 - 72 - /* Define for Soc scratched reg used on ASTDP */ 73 - #define AST_DP_PHY_SLEEP BIT(4) 74 - #define AST_DP_VIDEO_ENABLE BIT(0) 75 - 76 - /* 77 - * CRDF[b4]: Mirror of AST_DP_VIDEO_ENABLE 78 - * Precondition: A. 
~AST_DP_PHY_SLEEP && 79 - * B. DP_HPD && 80 - * C. DP_LINK_SUCCESS 81 - */ 82 - #define ASTDP_MIRROR_VIDEO_ENABLE BIT(4) 83 70 84 71 /* 85 72 * ASTDP setmode registers:
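The renamed VGACRD1 defines are absolute field values: `GENMASK(3, 1)` is 0x0e, and each old shifted form (`2 << 1`, `4 << 1`, ...) equals the corresponding new constant (0x04, 0x08, ...). A sketch of the decode, with the mask hand-expanded since `GENMASK()` is kernel-only:

```c
#include <assert.h>
#include <stdint.h>

/* GENMASK(3, 1) expands to bits 3..1 set: 0x0e */
#define TX_TYPE_MASK 0x0e

/* Field values copied from the diff; already shifted into bits 3..1 */
#define TX_SIL164_VBIOS 0x04
#define TX_DP501_VBIOS  0x08
#define TX_ASTDP        0x0e

/* Extract the transmitter-type field from the raw VGACRD1 byte,
 * discarding unrelated bits such as MCU_FW_EXECUTING (bit 5). */
static uint8_t tx_type(uint8_t vgacrd1)
{
        return vgacrd1 & TX_TYPE_MASK;
}
```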
+28 -31
drivers/gpu/drm/ast/ast_sil164.c
··· 71 71 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 72 72 }; 73 73 74 - static int ast_sil164_connector_init(struct drm_device *dev, struct drm_connector *connector) 75 - { 76 - struct ast_device *ast = to_ast_device(dev); 77 - struct i2c_adapter *ddc; 78 - int ret; 79 - 80 - ddc = ast_ddc_create(ast); 81 - if (IS_ERR(ddc)) { 82 - ret = PTR_ERR(ddc); 83 - drm_err(dev, "failed to add DDC bus for connector; ret=%d\n", ret); 84 - return ret; 85 - } 86 - 87 - ret = drm_connector_init_with_ddc(dev, connector, &ast_sil164_connector_funcs, 88 - DRM_MODE_CONNECTOR_DVII, ddc); 89 - if (ret) 90 - return ret; 91 - 92 - drm_connector_helper_add(connector, &ast_sil164_connector_helper_funcs); 93 - 94 - connector->interlace_allowed = 0; 95 - connector->doublescan_allowed = 0; 96 - 97 - connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 98 - 99 - return 0; 100 - } 74 + /* 75 + * Output 76 + */ 101 77 102 78 int ast_sil164_output_init(struct ast_device *ast) 103 79 { 104 80 struct drm_device *dev = &ast->base; 105 81 struct drm_crtc *crtc = &ast->crtc; 106 - struct drm_encoder *encoder = &ast->output.sil164.encoder; 107 - struct ast_connector *ast_connector = &ast->output.sil164.connector; 108 - struct drm_connector *connector = &ast_connector->base; 82 + struct i2c_adapter *ddc; 83 + struct drm_encoder *encoder; 84 + struct ast_connector *ast_connector; 85 + struct drm_connector *connector; 109 86 int ret; 110 87 88 + /* DDC */ 89 + 90 + ddc = ast_ddc_create(ast); 91 + if (IS_ERR(ddc)) 92 + return PTR_ERR(ddc); 93 + 94 + /* encoder */ 95 + 96 + encoder = &ast->output.sil164.encoder; 111 97 ret = drm_encoder_init(dev, encoder, &ast_sil164_encoder_funcs, 112 98 DRM_MODE_ENCODER_TMDS, NULL); 113 99 if (ret) 114 100 return ret; 115 101 encoder->possible_crtcs = drm_crtc_mask(crtc); 116 102 117 - ret = ast_sil164_connector_init(dev, connector); 103 + /* connector */ 104 + 105 + ast_connector = &ast->output.sil164.connector; 106 + 
connector = &ast_connector->base; 107 + ret = drm_connector_init_with_ddc(dev, connector, &ast_sil164_connector_funcs, 108 + DRM_MODE_CONNECTOR_DVII, ddc); 118 109 if (ret) 119 110 return ret; 111 + drm_connector_helper_add(connector, &ast_sil164_connector_helper_funcs); 112 + 113 + connector->interlace_allowed = 0; 114 + connector->doublescan_allowed = 0; 115 + connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 116 + 120 117 ast_connector->physical_status = connector->status; 121 118 122 119 ret = drm_connector_attach_encoder(connector, encoder);
+28 -31
drivers/gpu/drm/ast/ast_vga.c
··· 71 71 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 72 72 }; 73 73 74 - static int ast_vga_connector_init(struct drm_device *dev, struct drm_connector *connector) 75 - { 76 - struct ast_device *ast = to_ast_device(dev); 77 - struct i2c_adapter *ddc; 78 - int ret; 79 - 80 - ddc = ast_ddc_create(ast); 81 - if (IS_ERR(ddc)) { 82 - ret = PTR_ERR(ddc); 83 - drm_err(dev, "failed to add DDC bus for connector; ret=%d\n", ret); 84 - return ret; 85 - } 86 - 87 - ret = drm_connector_init_with_ddc(dev, connector, &ast_vga_connector_funcs, 88 - DRM_MODE_CONNECTOR_VGA, ddc); 89 - if (ret) 90 - return ret; 91 - 92 - drm_connector_helper_add(connector, &ast_vga_connector_helper_funcs); 93 - 94 - connector->interlace_allowed = 0; 95 - connector->doublescan_allowed = 0; 96 - 97 - connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 98 - 99 - return 0; 100 - } 74 + /* 75 + * Output 76 + */ 101 77 102 78 int ast_vga_output_init(struct ast_device *ast) 103 79 { 104 80 struct drm_device *dev = &ast->base; 105 81 struct drm_crtc *crtc = &ast->crtc; 106 - struct drm_encoder *encoder = &ast->output.vga.encoder; 107 - struct ast_connector *ast_connector = &ast->output.vga.connector; 108 - struct drm_connector *connector = &ast_connector->base; 82 + struct i2c_adapter *ddc; 83 + struct drm_encoder *encoder; 84 + struct ast_connector *ast_connector; 85 + struct drm_connector *connector; 109 86 int ret; 110 87 88 + /* DDC */ 89 + 90 + ddc = ast_ddc_create(ast); 91 + if (IS_ERR(ddc)) 92 + return PTR_ERR(ddc); 93 + 94 + /* encoder */ 95 + 96 + encoder = &ast->output.vga.encoder; 111 97 ret = drm_encoder_init(dev, encoder, &ast_vga_encoder_funcs, 112 98 DRM_MODE_ENCODER_DAC, NULL); 113 99 if (ret) 114 100 return ret; 115 101 encoder->possible_crtcs = drm_crtc_mask(crtc); 116 102 117 - ret = ast_vga_connector_init(dev, connector); 103 + /* connector */ 104 + 105 + ast_connector = &ast->output.vga.connector; 106 + connector = &ast_connector->base; 
107 + ret = drm_connector_init_with_ddc(dev, connector, &ast_vga_connector_funcs, 108 + DRM_MODE_CONNECTOR_VGA, ddc); 118 109 if (ret) 119 110 return ret; 111 + drm_connector_helper_add(connector, &ast_vga_connector_helper_funcs); 112 + 113 + connector->interlace_allowed = 0; 114 + connector->doublescan_allowed = 0; 115 + connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 116 + 120 117 ast_connector->physical_status = connector->status; 121 118 122 119 ret = drm_connector_attach_encoder(connector, encoder);
+7
drivers/gpu/drm/bridge/Kconfig
··· 368 368 It supports up to 720p resolution with 60 and 120 Hz refresh 369 369 rates. 370 370 371 + config DRM_TI_TDP158 372 + tristate "TI TDP158 HDMI/TMDS bridge" 373 + depends on OF 374 + select DRM_PANEL_BRIDGE 375 + help 376 + Texas Instruments TDP158 HDMI/TMDS Bridge driver 377 + 371 378 config DRM_TI_TFP410 372 379 tristate "TI TFP410 DVI/HDMI bridge" 373 380 depends on OF
+1
drivers/gpu/drm/bridge/Makefile
··· 32 32 obj-$(CONFIG_DRM_TI_DLPC3433) += ti-dlpc3433.o 33 33 obj-$(CONFIG_DRM_TI_SN65DSI83) += ti-sn65dsi83.o 34 34 obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o 35 + obj-$(CONFIG_DRM_TI_TDP158) += ti-tdp158.o 35 36 obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o 36 37 obj-$(CONFIG_DRM_TI_TPD12S015) += ti-tpd12s015.o 37 38 obj-$(CONFIG_DRM_NWL_MIPI_DSI) += nwl-dsi.o
+10
drivers/gpu/drm/bridge/imx/Kconfig
··· 3 3 config DRM_IMX_LDB_HELPER 4 4 tristate 5 5 6 + config DRM_IMX_LEGACY_BRIDGE 7 + tristate 8 + depends on DRM_IMX 9 + help 10 + This is a DRM bridge implementation for the DRM i.MX IPUv3 driver, 11 + which uses of_get_drm_display_mode to acquire the display mode. 12 + 13 + Newer designs should not use this bridge and should use a proper panel 14 + driver instead. 15 + 6 16 config DRM_IMX8MP_DW_HDMI_BRIDGE 7 17 tristate "Freescale i.MX8MP HDMI-TX bridge support" 8 18 depends on OF
+1
drivers/gpu/drm/bridge/imx/Makefile
··· 1 1 obj-$(CONFIG_DRM_IMX_LDB_HELPER) += imx-ldb-helper.o 2 + obj-$(CONFIG_DRM_IMX_LEGACY_BRIDGE) += imx-legacy-bridge.o 2 3 obj-$(CONFIG_DRM_IMX8MP_DW_HDMI_BRIDGE) += imx8mp-hdmi-tx.o 3 4 obj-$(CONFIG_DRM_IMX8MP_HDMI_PVI) += imx8mp-hdmi-pvi.o 4 5 obj-$(CONFIG_DRM_IMX8QM_LDB) += imx8qm-ldb.o
+87
drivers/gpu/drm/bridge/imx/imx-legacy-bridge.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Freescale i.MX drm driver 4 + * 5 + * bridge driver for legacy DT bindings, utilizing display-timings node 6 + */ 7 + 8 + #include <drm/drm_bridge.h> 9 + #include <drm/drm_modes.h> 10 + #include <drm/drm_probe_helper.h> 11 + #include <drm/bridge/imx.h> 12 + 13 + #include <video/of_display_timing.h> 14 + #include <video/of_videomode.h> 15 + 16 + struct imx_legacy_bridge { 17 + struct drm_bridge base; 18 + 19 + struct drm_display_mode mode; 20 + u32 bus_flags; 21 + }; 22 + 23 + #define to_imx_legacy_bridge(bridge) container_of(bridge, struct imx_legacy_bridge, base) 24 + 25 + static int imx_legacy_bridge_attach(struct drm_bridge *bridge, 26 + enum drm_bridge_attach_flags flags) 27 + { 28 + if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) 29 + return -EINVAL; 30 + 31 + return 0; 32 + } 33 + 34 + static int imx_legacy_bridge_get_modes(struct drm_bridge *bridge, 35 + struct drm_connector *connector) 36 + { 37 + struct imx_legacy_bridge *imx_bridge = to_imx_legacy_bridge(bridge); 38 + int ret; 39 + 40 + ret = drm_connector_helper_get_modes_fixed(connector, &imx_bridge->mode); 41 + if (ret) 42 + return ret; 43 + 44 + connector->display_info.bus_flags = imx_bridge->bus_flags; 45 + 46 + return 0; 47 + } 48 + 49 + struct drm_bridge_funcs imx_legacy_bridge_funcs = { 50 + .attach = imx_legacy_bridge_attach, 51 + .get_modes = imx_legacy_bridge_get_modes, 52 + }; 53 + 54 + struct drm_bridge *devm_imx_drm_legacy_bridge(struct device *dev, 55 + struct device_node *np, 56 + int type) 57 + { 58 + struct imx_legacy_bridge *imx_bridge; 59 + int ret; 60 + 61 + imx_bridge = devm_kzalloc(dev, sizeof(*imx_bridge), GFP_KERNEL); 62 + if (!imx_bridge) 63 + return ERR_PTR(-ENOMEM); 64 + 65 + ret = of_get_drm_display_mode(np, 66 + &imx_bridge->mode, 67 + &imx_bridge->bus_flags, 68 + OF_USE_NATIVE_MODE); 69 + if (ret) 70 + return ERR_PTR(ret); 71 + 72 + imx_bridge->mode.type |= DRM_MODE_TYPE_DRIVER; 73 + 74 + imx_bridge->base.funcs = 
&imx_legacy_bridge_funcs; 75 + imx_bridge->base.of_node = np; 76 + imx_bridge->base.ops = DRM_BRIDGE_OP_MODES; 77 + imx_bridge->base.type = type; 78 + 79 + ret = devm_drm_bridge_add(dev, &imx_bridge->base); 80 + if (ret) 81 + return ERR_PTR(ret); 82 + 83 + return &imx_bridge->base; 84 + } 85 + EXPORT_SYMBOL_GPL(devm_imx_drm_legacy_bridge); 86 + 87 + MODULE_LICENSE("GPL");
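The new bridge recovers its wrapper struct from the embedded `drm_bridge` via `to_imx_legacy_bridge()`, i.e. the kernel's `container_of()`. A user-space sketch of that pattern — struct layouts simplified, `container_of` hand-rolled from `offsetof`:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the structs in the diff */
struct drm_bridge { int dummy; };

struct imx_legacy_bridge {
        struct drm_bridge base;   /* embedded, not a pointer */
        unsigned int bus_flags;
};

/* Same idea as the kernel's container_of(): subtract the member offset
 * from the member's address to get back to the enclosing struct. */
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static struct imx_legacy_bridge example = { .bus_flags = 42 };

/* What a callback receiving only the drm_bridge pointer can do */
static unsigned int flags_via_base(struct drm_bridge *base)
{
        return container_of(base, struct imx_legacy_bridge, base)->bus_flags;
}
```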
+13 -7
drivers/gpu/drm/bridge/imx/imx8mp-hdmi-tx.c
··· 23 23 const struct drm_display_mode *mode) 24 24 { 25 25 struct imx8mp_hdmi *hdmi = (struct imx8mp_hdmi *)data; 26 + long round_rate; 26 27 27 28 if (mode->clock < 13500) 28 29 return MODE_CLOCK_LOW; ··· 31 30 if (mode->clock > 297000) 32 31 return MODE_CLOCK_HIGH; 33 32 34 - if (clk_round_rate(hdmi->pixclk, mode->clock * 1000) != 35 - mode->clock * 1000) 33 + round_rate = clk_round_rate(hdmi->pixclk, mode->clock * 1000); 34 + /* imx8mp's pixel clock generator (fsl-samsung-hdmi) cannot generate 35 + * all possible frequencies, so allow some tolerance to support more 36 + * modes. 37 + * Allow 0.5% difference allowed in various standards (VESA, CEA861) 38 + * 0.5% = 5/1000 tolerance (mode->clock is 1/1000) 39 + */ 40 + if (abs(round_rate - mode->clock * 1000) > mode->clock * 5) 36 41 return MODE_CLOCK_RANGE; 37 42 38 43 /* We don't support double-clocked and Interlaced modes */ ··· 118 111 dw_hdmi_remove(hdmi->dw_hdmi); 119 112 } 120 113 121 - static int __maybe_unused imx8mp_dw_hdmi_pm_suspend(struct device *dev) 114 + static int imx8mp_dw_hdmi_pm_suspend(struct device *dev) 122 115 { 123 116 return 0; 124 117 } 125 118 126 - static int __maybe_unused imx8mp_dw_hdmi_pm_resume(struct device *dev) 119 + static int imx8mp_dw_hdmi_pm_resume(struct device *dev) 127 120 { 128 121 struct imx8mp_hdmi *hdmi = dev_get_drvdata(dev); 129 122 ··· 133 126 } 134 127 135 128 static const struct dev_pm_ops imx8mp_dw_hdmi_pm_ops = { 136 - SET_SYSTEM_SLEEP_PM_OPS(imx8mp_dw_hdmi_pm_suspend, 137 - imx8mp_dw_hdmi_pm_resume) 129 + SYSTEM_SLEEP_PM_OPS(imx8mp_dw_hdmi_pm_suspend, imx8mp_dw_hdmi_pm_resume) 138 130 }; 139 131 140 132 static const struct of_device_id imx8mp_dw_hdmi_of_table[] = { ··· 148 142 .driver = { 149 143 .name = "imx8mp-dw-hdmi-tx", 150 144 .of_match_table = imx8mp_dw_hdmi_of_table, 151 - .pm = &imx8mp_dw_hdmi_pm_ops, 145 + .pm = pm_ptr(&imx8mp_dw_hdmi_pm_ops), 152 146 }, 153 147 }; 154 148
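The new mode_valid check replaces exact clock matching with a 0.5% tolerance: `mode->clock` is in kHz, so 0.5% of `mode->clock * 1000` Hz is `mode->clock * 5` Hz. A standalone sketch of the arithmetic (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* clock_khz: requested pixel clock in kHz (the drm_display_mode unit);
 * rate_hz: the rate clk_round_rate() said the clock can generate.
 * Nonzero when rate_hz is within the 0.5% tolerance the diff allows. */
static int rate_within_tolerance(long rate_hz, int clock_khz)
{
        return labs(rate_hz - (long)clock_khz * 1000) <= (long)clock_khz * 5;
}
```

For a 148500 kHz mode the window is ±742500 Hz, so a generator that can only hit 149 MHz now passes instead of being rejected outright.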
+4 -5
drivers/gpu/drm/bridge/imx/imx8qm-ldb.c
··· 542 542 pm_runtime_disable(&pdev->dev); 543 543 } 544 544 545 - static int __maybe_unused imx8qm_ldb_runtime_suspend(struct device *dev) 545 + static int imx8qm_ldb_runtime_suspend(struct device *dev) 546 546 { 547 547 return 0; 548 548 } 549 549 550 - static int __maybe_unused imx8qm_ldb_runtime_resume(struct device *dev) 550 + static int imx8qm_ldb_runtime_resume(struct device *dev) 551 551 { 552 552 struct imx8qm_ldb *imx8qm_ldb = dev_get_drvdata(dev); 553 553 struct ldb *ldb = &imx8qm_ldb->base; ··· 559 559 } 560 560 561 561 static const struct dev_pm_ops imx8qm_ldb_pm_ops = { 562 - SET_RUNTIME_PM_OPS(imx8qm_ldb_runtime_suspend, 563 - imx8qm_ldb_runtime_resume, NULL) 562 + RUNTIME_PM_OPS(imx8qm_ldb_runtime_suspend, imx8qm_ldb_runtime_resume, NULL) 564 563 }; 565 564 566 565 static const struct of_device_id imx8qm_ldb_dt_ids[] = { ··· 572 573 .probe = imx8qm_ldb_probe, 573 574 .remove_new = imx8qm_ldb_remove, 574 575 .driver = { 575 - .pm = &imx8qm_ldb_pm_ops, 576 + .pm = pm_ptr(&imx8qm_ldb_pm_ops), 576 577 .name = DRIVER_NAME, 577 578 .of_match_table = imx8qm_ldb_dt_ids, 578 579 },
+4 -5
drivers/gpu/drm/bridge/imx/imx8qxp-ldb.c
··· 678 678 pm_runtime_disable(&pdev->dev); 679 679 } 680 680 681 - static int __maybe_unused imx8qxp_ldb_runtime_suspend(struct device *dev) 681 + static int imx8qxp_ldb_runtime_suspend(struct device *dev) 682 682 { 683 683 return 0; 684 684 } 685 685 686 - static int __maybe_unused imx8qxp_ldb_runtime_resume(struct device *dev) 686 + static int imx8qxp_ldb_runtime_resume(struct device *dev) 687 687 { 688 688 struct imx8qxp_ldb *imx8qxp_ldb = dev_get_drvdata(dev); 689 689 struct ldb *ldb = &imx8qxp_ldb->base; ··· 695 695 } 696 696 697 697 static const struct dev_pm_ops imx8qxp_ldb_pm_ops = { 698 - SET_RUNTIME_PM_OPS(imx8qxp_ldb_runtime_suspend, 699 - imx8qxp_ldb_runtime_resume, NULL) 698 + RUNTIME_PM_OPS(imx8qxp_ldb_runtime_suspend, imx8qxp_ldb_runtime_resume, NULL) 700 699 }; 701 700 702 701 static const struct of_device_id imx8qxp_ldb_dt_ids[] = { ··· 708 709 .probe = imx8qxp_ldb_probe, 709 710 .remove_new = imx8qxp_ldb_remove, 710 711 .driver = { 711 - .pm = &imx8qxp_ldb_pm_ops, 712 + .pm = pm_ptr(&imx8qxp_ldb_pm_ops), 712 713 .name = DRIVER_NAME, 713 714 .of_match_table = imx8qxp_ldb_dt_ids, 714 715 },
+4 -5
drivers/gpu/drm/bridge/imx/imx8qxp-pixel-combiner.c
··· 371 371 pm_runtime_disable(&pdev->dev); 372 372 } 373 373 374 - static int __maybe_unused imx8qxp_pc_runtime_suspend(struct device *dev) 374 + static int imx8qxp_pc_runtime_suspend(struct device *dev) 375 375 { 376 376 struct platform_device *pdev = to_platform_device(dev); 377 377 struct imx8qxp_pc *pc = platform_get_drvdata(pdev); ··· 393 393 return ret; 394 394 } 395 395 396 - static int __maybe_unused imx8qxp_pc_runtime_resume(struct device *dev) 396 + static int imx8qxp_pc_runtime_resume(struct device *dev) 397 397 { 398 398 struct platform_device *pdev = to_platform_device(dev); 399 399 struct imx8qxp_pc *pc = platform_get_drvdata(pdev); ··· 415 415 } 416 416 417 417 static const struct dev_pm_ops imx8qxp_pc_pm_ops = { 418 - SET_RUNTIME_PM_OPS(imx8qxp_pc_runtime_suspend, 419 - imx8qxp_pc_runtime_resume, NULL) 418 + RUNTIME_PM_OPS(imx8qxp_pc_runtime_suspend, imx8qxp_pc_runtime_resume, NULL) 420 419 }; 421 420 422 421 static const struct of_device_id imx8qxp_pc_dt_ids[] = { ··· 429 430 .probe = imx8qxp_pc_bridge_probe, 430 431 .remove_new = imx8qxp_pc_bridge_remove, 431 432 .driver = { 432 - .pm = &imx8qxp_pc_pm_ops, 433 + .pm = pm_ptr(&imx8qxp_pc_pm_ops), 433 434 .name = DRIVER_NAME, 434 435 .of_match_table = imx8qxp_pc_dt_ids, 435 436 },
+4 -4
drivers/gpu/drm/bridge/samsung-dsim.c
··· 2043 2043 } 2044 2044 EXPORT_SYMBOL_GPL(samsung_dsim_remove); 2045 2045 2046 - static int __maybe_unused samsung_dsim_suspend(struct device *dev) 2046 + static int samsung_dsim_suspend(struct device *dev) 2047 2047 { 2048 2048 struct samsung_dsim *dsi = dev_get_drvdata(dev); 2049 2049 const struct samsung_dsim_driver_data *driver_data = dsi->driver_data; ··· 2073 2073 return 0; 2074 2074 } 2075 2075 2076 - static int __maybe_unused samsung_dsim_resume(struct device *dev) 2076 + static int samsung_dsim_resume(struct device *dev) 2077 2077 { 2078 2078 struct samsung_dsim *dsi = dev_get_drvdata(dev); 2079 2079 const struct samsung_dsim_driver_data *driver_data = dsi->driver_data; ··· 2108 2108 } 2109 2109 2110 2110 const struct dev_pm_ops samsung_dsim_pm_ops = { 2111 - SET_RUNTIME_PM_OPS(samsung_dsim_suspend, samsung_dsim_resume, NULL) 2111 + RUNTIME_PM_OPS(samsung_dsim_suspend, samsung_dsim_resume, NULL) 2112 2112 SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 2113 2113 pm_runtime_force_resume) 2114 2114 }; ··· 2142 2142 .remove_new = samsung_dsim_remove, 2143 2143 .driver = { 2144 2144 .name = "samsung-dsim", 2145 - .pm = &samsung_dsim_pm_ops, 2145 + .pm = pm_ptr(&samsung_dsim_pm_ops), 2146 2146 .of_match_table = samsung_dsim_of_match, 2147 2147 }, 2148 2148 };
+4 -4
drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c
··· 312 312 cec_unregister_adapter(cec->adap); 313 313 } 314 314 315 - static int __maybe_unused dw_hdmi_cec_resume(struct device *dev) 315 + static int dw_hdmi_cec_resume(struct device *dev) 316 316 { 317 317 struct dw_hdmi_cec *cec = dev_get_drvdata(dev); 318 318 ··· 328 328 return 0; 329 329 } 330 330 331 - static int __maybe_unused dw_hdmi_cec_suspend(struct device *dev) 331 + static int dw_hdmi_cec_suspend(struct device *dev) 332 332 { 333 333 struct dw_hdmi_cec *cec = dev_get_drvdata(dev); 334 334 ··· 341 341 } 342 342 343 343 static const struct dev_pm_ops dw_hdmi_cec_pm = { 344 - SET_SYSTEM_SLEEP_PM_OPS(dw_hdmi_cec_suspend, dw_hdmi_cec_resume) 344 + SYSTEM_SLEEP_PM_OPS(dw_hdmi_cec_suspend, dw_hdmi_cec_resume) 345 345 }; 346 346 347 347 static struct platform_driver dw_hdmi_cec_driver = { ··· 349 349 .remove_new = dw_hdmi_cec_remove, 350 350 .driver = { 351 351 .name = "dw-hdmi-cec", 352 - .pm = &dw_hdmi_cec_pm, 352 + .pm = pm_ptr(&dw_hdmi_cec_pm), 353 353 }, 354 354 }; 355 355 module_platform_driver(dw_hdmi_cec_driver);
+35 -21
drivers/gpu/drm/bridge/tc358767.c
··· 2169 2169 .n_yes_ranges = ARRAY_SIZE(tc_precious_ranges), 2170 2170 }; 2171 2171 2172 - static const struct regmap_range tc_non_writeable_ranges[] = { 2173 - regmap_reg_range(PPI_BUSYPPI, PPI_BUSYPPI), 2174 - regmap_reg_range(DSI_BUSYDSI, DSI_BUSYDSI), 2175 - regmap_reg_range(DSI_LANESTATUS0, DSI_INTSTATUS), 2176 - regmap_reg_range(TC_IDREG, SYSSTAT), 2177 - regmap_reg_range(GPIOI, GPIOI), 2178 - regmap_reg_range(DP0_LTSTAT, DP0_SNKLTCHGREQ), 2179 - }; 2180 - 2181 - static const struct regmap_access_table tc_writeable_table = { 2182 - .no_ranges = tc_non_writeable_ranges, 2183 - .n_no_ranges = ARRAY_SIZE(tc_non_writeable_ranges), 2184 - }; 2172 + static bool tc_writeable_reg(struct device *dev, unsigned int reg) 2173 + { 2174 + /* RO reg */ 2175 + switch (reg) { 2176 + case PPI_BUSYPPI: 2177 + case DSI_BUSYDSI: 2178 + case DSI_LANESTATUS0: 2179 + case DSI_LANESTATUS1: 2180 + case DSI_INTSTATUS: 2181 + case TC_IDREG: 2182 + case SYSBOOT: 2183 + case SYSSTAT: 2184 + case GPIOI: 2185 + case DP0_LTSTAT: 2186 + case DP0_SNKLTCHGREQ: 2187 + return false; 2188 + } 2189 + /* WO reg */ 2190 + switch (reg) { 2191 + case DSI_STARTDSI: 2192 + case DSI_INTCLR: 2193 + return true; 2194 + } 2195 + return tc_readable_reg(dev, reg); 2196 + } 2185 2197 2186 2198 static const struct regmap_config tc_regmap_config = { 2187 2199 .name = "tc358767", ··· 2203 2191 .max_register = PLL_DBG, 2204 2192 .cache_type = REGCACHE_MAPLE, 2205 2193 .readable_reg = tc_readable_reg, 2194 + .writeable_reg = tc_writeable_reg, 2206 2195 .volatile_table = &tc_volatile_table, 2207 2196 .precious_table = &tc_precious_table, 2208 - .wr_table = &tc_writeable_table, 2209 2197 .reg_format_endian = REGMAP_ENDIAN_BIG, 2210 2198 .val_format_endian = REGMAP_ENDIAN_LITTLE, 2211 2199 }; ··· 2241 2229 bool h = val & INT_GPIO_H(tc->hpd_pin); 2242 2230 bool lc = val & INT_GPIO_LC(tc->hpd_pin); 2243 2231 2244 - dev_dbg(tc->dev, "GPIO%d: %s %s\n", tc->hpd_pin, 2245 - h ? "H" : "", lc ? 
"LC" : ""); 2246 - 2247 - if (h || lc) 2232 + if (h || lc) { 2233 + dev_dbg(tc->dev, "GPIO%d: %s %s\n", tc->hpd_pin, 2234 + h ? "H" : "", lc ? "LC" : ""); 2248 2235 drm_kms_helper_hotplug_event(tc->bridge.dev); 2236 + } 2249 2237 } 2250 2238 2251 2239 regmap_write(tc->regmap, INTSTS_G, val); ··· 2310 2298 /* port@1 is the DPI input/output port */ 2311 2299 ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, &bridge); 2312 2300 if (ret && ret != -ENODEV) 2313 - return ret; 2301 + return dev_err_probe(dev, ret, 2302 + "Could not find DPI panel or bridge\n"); 2314 2303 2315 2304 if (panel) { 2316 2305 bridge = devm_drm_panel_bridge_add(dev, panel); ··· 2339 2326 /* port@2 is the output port */ 2340 2327 ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, NULL); 2341 2328 if (ret && ret != -ENODEV) 2342 - return ret; 2329 + return dev_err_probe(dev, ret, 2330 + "Could not find DSI panel or bridge\n"); 2343 2331 2344 2332 if (panel) { 2345 2333 struct drm_bridge *panel_bridge; ··· 2564 2550 ret = tc_mipi_dsi_host_attach(tc); 2565 2551 if (ret) { 2566 2552 drm_bridge_remove(&tc->bridge); 2567 - return ret; 2553 + return dev_err_probe(dev, ret, "Failed to attach DSI host\n"); 2568 2554 } 2569 2555 } 2570 2556
+2 -2
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 1635 1635 } 1636 1636 1637 1637 #else 1638 - static inline int ti_sn_pwm_pin_request(struct ti_sn65dsi86 *pdata) { return 0; } 1639 - static inline void ti_sn_pwm_pin_release(struct ti_sn65dsi86 *pdata) {} 1638 + static inline int __maybe_unused ti_sn_pwm_pin_request(struct ti_sn65dsi86 *pdata) { return 0; } 1639 + static inline void __maybe_unused ti_sn_pwm_pin_release(struct ti_sn65dsi86 *pdata) {} 1640 1640 1641 1641 static inline int ti_sn_pwm_register(void) { return 0; } 1642 1642 static inline void ti_sn_pwm_unregister(void) {}
+111
drivers/gpu/drm/bridge/ti-tdp158.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright 2024 Freebox SAS 4 + */ 5 + 6 + #include <linux/gpio/consumer.h> 7 + #include <linux/i2c.h> 8 + 9 + #include <drm/drm_atomic_helper.h> 10 + #include <drm/drm_bridge.h> 11 + 12 + struct tdp158 { 13 + struct drm_bridge bridge; 14 + struct drm_bridge *next; 15 + struct gpio_desc *enable; // Operation Enable - pin 36 16 + struct regulator *vcc; // 3.3V 17 + struct regulator *vdd; // 1.1V 18 + struct device *dev; 19 + }; 20 + 21 + static void tdp158_enable(struct drm_bridge *bridge, struct drm_bridge_state *prev) 22 + { 23 + int err; 24 + struct tdp158 *tdp158 = bridge->driver_private; 25 + 26 + err = regulator_enable(tdp158->vcc); 27 + if (err) 28 + dev_err(tdp158->dev, "failed to enable vcc: %d", err); 29 + 30 + err = regulator_enable(tdp158->vdd); 31 + if (err) 32 + dev_err(tdp158->dev, "failed to enable vdd: %d", err); 33 + 34 + gpiod_set_value_cansleep(tdp158->enable, 1); 35 + } 36 + 37 + static void tdp158_disable(struct drm_bridge *bridge, struct drm_bridge_state *prev) 38 + { 39 + struct tdp158 *tdp158 = bridge->driver_private; 40 + 41 + gpiod_set_value_cansleep(tdp158->enable, 0); 42 + regulator_disable(tdp158->vdd); 43 + regulator_disable(tdp158->vcc); 44 + } 45 + 46 + static int tdp158_attach(struct drm_bridge *bridge, enum drm_bridge_attach_flags flags) 47 + { 48 + struct tdp158 *tdp158 = bridge->driver_private; 49 + 50 + return drm_bridge_attach(bridge->encoder, tdp158->next, bridge, flags); 51 + } 52 + 53 + static const struct drm_bridge_funcs tdp158_bridge_funcs = { 54 + .attach = tdp158_attach, 55 + .atomic_enable = tdp158_enable, 56 + .atomic_disable = tdp158_disable, 57 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 58 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 59 + .atomic_reset = drm_atomic_helper_bridge_reset, 60 + }; 61 + 62 + static int tdp158_probe(struct i2c_client *client) 63 + { 64 + struct tdp158 *tdp158; 65 + struct device 
*dev = &client->dev; 66 + 67 + tdp158 = devm_kzalloc(dev, sizeof(*tdp158), GFP_KERNEL); 68 + if (!tdp158) 69 + return -ENOMEM; 70 + 71 + tdp158->next = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0); 72 + if (IS_ERR(tdp158->next)) 73 + return dev_err_probe(dev, PTR_ERR(tdp158->next), "missing bridge"); 74 + 75 + tdp158->vcc = devm_regulator_get(dev, "vcc"); 76 + if (IS_ERR(tdp158->vcc)) 77 + return dev_err_probe(dev, PTR_ERR(tdp158->vcc), "vcc"); 78 + 79 + tdp158->vdd = devm_regulator_get(dev, "vdd"); 80 + if (IS_ERR(tdp158->vdd)) 81 + return dev_err_probe(dev, PTR_ERR(tdp158->vdd), "vdd"); 82 + 83 + tdp158->enable = devm_gpiod_get_optional(dev, "enable", GPIOD_OUT_LOW); 84 + if (IS_ERR(tdp158->enable)) 85 + return dev_err_probe(dev, PTR_ERR(tdp158->enable), "enable"); 86 + 87 + tdp158->bridge.of_node = dev->of_node; 88 + tdp158->bridge.funcs = &tdp158_bridge_funcs; 89 + tdp158->bridge.driver_private = tdp158; 90 + tdp158->dev = dev; 91 + 92 + return devm_drm_bridge_add(dev, &tdp158->bridge); 93 + } 94 + 95 + static const struct of_device_id tdp158_match_table[] = { 96 + { .compatible = "ti,tdp158" }, 97 + { } 98 + }; 99 + MODULE_DEVICE_TABLE(of, tdp158_match_table); 100 + 101 + static struct i2c_driver tdp158_driver = { 102 + .probe = tdp158_probe, 103 + .driver = { 104 + .name = "tdp158", 105 + .of_match_table = tdp158_match_table, 106 + }, 107 + }; 108 + module_i2c_driver(tdp158_driver); 109 + 110 + MODULE_DESCRIPTION("TI TDP158 driver"); 111 + MODULE_LICENSE("GPL");
+6
drivers/gpu/drm/display/Kconfig
··· 64 64 65 65 If in doubt, say "N". 66 66 67 + config DRM_DISPLAY_DSC_HELPER 68 + bool 69 + depends on DRM_DISPLAY_HELPER 70 + help 71 + DRM display helpers for VESA DSC (used by DSI and DisplayPort). 72 + 67 73 config DRM_DISPLAY_HDCP_HELPER 68 74 bool 69 75 help
+3 -2
drivers/gpu/drm/display/Makefile
··· 8 8 drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \ 9 9 drm_dp_dual_mode_helper.o \ 10 10 drm_dp_helper.o \ 11 - drm_dp_mst_topology.o \ 12 - drm_dsc_helper.o 11 + drm_dp_mst_topology.o 13 12 drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \ 14 13 drm_dp_tunnel.o 14 + drm_display_helper-$(CONFIG_DRM_DISPLAY_DSC_HELPER) += \ 15 + drm_dsc_helper.o 15 16 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o 16 17 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \ 17 18 drm_hdmi_helper.o \
+1 -1
drivers/gpu/drm/drm_atomic_helper.c
··· 3015 3015 bool stall) 3016 3016 { 3017 3017 int i, ret; 3018 - unsigned long flags; 3018 + unsigned long flags = 0; 3019 3019 struct drm_connector *connector; 3020 3020 struct drm_connector_state *old_conn_state, *new_conn_state; 3021 3021 struct drm_crtc *crtc;
+2
drivers/gpu/drm/drm_framebuffer.c
··· 99 99 100 100 return 0; 101 101 } 102 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_framebuffer_check_src_coords); 102 103 103 104 /** 104 105 * drm_mode_addfb - add an FB to the graphics configuration ··· 839 838 840 839 fb->funcs->destroy(fb); 841 840 } 841 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_framebuffer_free); 842 842 843 843 /** 844 844 * drm_framebuffer_init - initialize a framebuffer
-45
drivers/gpu/drm/drm_gem_vram_helper.c
··· 16 16 #include <drm/drm_mode.h> 17 17 #include <drm/drm_plane.h> 18 18 #include <drm/drm_prime.h> 19 - #include <drm/drm_simple_kms_helper.h> 20 19 21 20 #include <drm/ttm/ttm_range_manager.h> 22 21 #include <drm/ttm/ttm_tt.h> ··· 684 685 __drm_gem_vram_plane_helper_cleanup_fb(plane, old_state, fb->format->num_planes); 685 686 } 686 687 EXPORT_SYMBOL(drm_gem_vram_plane_helper_cleanup_fb); 687 - 688 - /* 689 - * Helpers for struct drm_simple_display_pipe_funcs 690 - */ 691 - 692 - /** 693 - * drm_gem_vram_simple_display_pipe_prepare_fb() - Implements &struct 694 - * drm_simple_display_pipe_funcs.prepare_fb 695 - * @pipe: a simple display pipe 696 - * @new_state: the plane's new state 697 - * 698 - * During plane updates, this function pins the GEM VRAM 699 - * objects of the plane's new framebuffer to VRAM. Call 700 - * drm_gem_vram_simple_display_pipe_cleanup_fb() to unpin them. 701 - * 702 - * Returns: 703 - * 0 on success, or 704 - * a negative errno code otherwise. 705 - */ 706 - int drm_gem_vram_simple_display_pipe_prepare_fb( 707 - struct drm_simple_display_pipe *pipe, 708 - struct drm_plane_state *new_state) 709 - { 710 - return drm_gem_vram_plane_helper_prepare_fb(&pipe->plane, new_state); 711 - } 712 - EXPORT_SYMBOL(drm_gem_vram_simple_display_pipe_prepare_fb); 713 - 714 - /** 715 - * drm_gem_vram_simple_display_pipe_cleanup_fb() - Implements &struct 716 - * drm_simple_display_pipe_funcs.cleanup_fb 717 - * @pipe: a simple display pipe 718 - * @old_state: the plane's old state 719 - * 720 - * During plane updates, this function unpins the GEM VRAM 721 - * objects of the plane's old framebuffer from VRAM. Complements 722 - * drm_gem_vram_simple_display_pipe_prepare_fb(). 
723 - */ 724 - void drm_gem_vram_simple_display_pipe_cleanup_fb( 725 - struct drm_simple_display_pipe *pipe, 726 - struct drm_plane_state *old_state) 727 - { 728 - drm_gem_vram_plane_helper_cleanup_fb(&pipe->plane, old_state); 729 - } 730 - EXPORT_SYMBOL(drm_gem_vram_simple_display_pipe_cleanup_fb); 731 688 732 689 /* 733 690 * PRIME helpers
+2 -2
drivers/gpu/drm/drm_mm.c
··· 151 151 152 152 INTERVAL_TREE_DEFINE(struct drm_mm_node, rb, 153 153 u64, __subtree_last, 154 - START, LAST, static inline, drm_mm_interval_tree) 154 + START, LAST, static inline __maybe_unused, drm_mm_interval_tree) 155 155 156 156 struct drm_mm_node * 157 157 __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last) ··· 611 611 } 612 612 EXPORT_SYMBOL(drm_mm_insert_node_in_range); 613 613 614 - static inline bool drm_mm_node_scanned_block(const struct drm_mm_node *node) 614 + static inline __maybe_unused bool drm_mm_node_scanned_block(const struct drm_mm_node *node) 615 615 { 616 616 return test_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags); 617 617 }
+1
drivers/gpu/drm/drm_mode_object.c
··· 81 81 { 82 82 return __drm_mode_object_add(dev, obj, obj_type, true, NULL); 83 83 } 84 + EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_mode_object_add); 84 85 85 86 void drm_mode_object_register(struct drm_device *dev, 86 87 struct drm_mode_object *obj)
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 72 72 73 73 drm_sched_resubmit_jobs(&gpu->sched); 74 74 75 - drm_sched_start(&gpu->sched); 75 + drm_sched_start(&gpu->sched, 0); 76 76 return DRM_GPU_SCHED_STAT_NOMINAL; 77 77 78 78 out_no_timeout:
+15 -10
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 883 883 static int hdmi_get_modes(struct drm_connector *connector) 884 884 { 885 885 struct hdmi_context *hdata = connector_to_hdmi(connector); 886 - struct edid *edid; 886 + const struct drm_display_info *info = &connector->display_info; 887 + const struct drm_edid *drm_edid; 887 888 int ret; 888 889 889 890 if (!hdata->ddc_adpt) 890 891 goto no_edid; 891 892 892 - edid = drm_get_edid(connector, hdata->ddc_adpt); 893 - if (!edid) 893 + drm_edid = drm_edid_read_ddc(connector, hdata->ddc_adpt); 894 + 895 + ret = drm_edid_connector_update(connector, drm_edid); 896 + if (ret) 897 + return 0; 898 + 899 + cec_notifier_set_phys_addr(hdata->notifier, info->source_physical_address); 900 + 901 + if (!drm_edid) 894 902 goto no_edid; 895 903 896 - hdata->dvi_mode = !connector->display_info.is_hdmi; 904 + hdata->dvi_mode = !info->is_hdmi; 897 905 DRM_DEV_DEBUG_KMS(hdata->dev, "%s : width[%d] x height[%d]\n", 898 906 (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"), 899 - edid->width_cm, edid->height_cm); 907 + info->width_mm / 10, info->height_mm / 10); 900 908 901 - drm_connector_update_edid_property(connector, edid); 902 - cec_notifier_set_phys_addr_from_edid(hdata->notifier, edid); 909 + ret = drm_edid_connector_add_modes(connector); 903 910 904 - ret = drm_add_edid_modes(connector, edid); 905 - 906 - kfree(edid); 911 + drm_edid_free(drm_edid); 907 912 908 913 return ret; 909 914
+1
drivers/gpu/drm/i915/Kconfig
··· 11 11 select SHMEM 12 12 select TMPFS 13 13 select DRM_DISPLAY_DP_HELPER 14 + select DRM_DISPLAY_DSC_HELPER 14 15 select DRM_DISPLAY_HDCP_HELPER 15 16 select DRM_DISPLAY_HDMI_HELPER 16 17 select DRM_DISPLAY_HELPER
+1 -1
drivers/gpu/drm/imagination/pvr_ccb.c
··· 321 321 bool reserved = false; 322 322 u32 retries = 0; 323 323 324 - while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT || 324 + while (time_before(jiffies, start_timestamp + RESERVE_SLOT_TIMEOUT) || 325 325 retries < RESERVE_SLOT_MIN_RETRIES) { 326 326 reserved = pvr_kccb_try_reserve_slot(pvr_dev); 327 327 if (reserved)
+3 -15
drivers/gpu/drm/imagination/pvr_context.c
··· 69 69 void *stream; 70 70 int err; 71 71 72 - stream = kzalloc(stream_size, GFP_KERNEL); 73 - if (!stream) 74 - return -ENOMEM; 75 - 76 - if (copy_from_user(stream, u64_to_user_ptr(stream_user_ptr), stream_size)) { 77 - err = -EFAULT; 78 - goto err_free; 79 - } 72 + stream = memdup_user(u64_to_user_ptr(stream_user_ptr), stream_size); 73 + if (IS_ERR(stream)) 74 + return PTR_ERR(stream); 80 75 81 76 err = pvr_stream_process(pvr_dev, cmd_defs, stream, stream_size, dest); 82 - if (err) 83 - goto err_free; 84 77 85 - kfree(stream); 86 - 87 - return 0; 88 - 89 - err_free: 90 78 kfree(stream); 91 79 92 80 return err;
+1 -1
drivers/gpu/drm/imagination/pvr_drv.c
··· 220 220 return ret; 221 221 } 222 222 223 - static __always_inline u64 223 + static __always_inline __maybe_unused u64 224 224 pvr_fw_version_packed(u32 major, u32 minor) 225 225 { 226 226 return ((u64)major << 32) | minor;
+3 -10
drivers/gpu/drm/imagination/pvr_job.c
··· 90 90 void *stream; 91 91 int err; 92 92 93 - stream = kzalloc(stream_len, GFP_KERNEL); 94 - if (!stream) 95 - return -ENOMEM; 96 - 97 - if (copy_from_user(stream, u64_to_user_ptr(stream_userptr), stream_len)) { 98 - err = -EFAULT; 99 - goto err_free_stream; 100 - } 93 + stream = memdup_user(u64_to_user_ptr(stream_userptr), stream_len); 94 + if (IS_ERR(stream)) 95 + return PTR_ERR(stream); 101 96 102 97 err = pvr_job_process_stream(pvr_dev, stream_def, stream, stream_len, job); 103 98 104 - err_free_stream: 105 99 kfree(stream); 106 - 107 100 return err; 108 101 } 109 102
+2 -2
drivers/gpu/drm/imagination/pvr_queue.c
··· 782 782 } 783 783 } 784 784 785 - drm_sched_start(&queue->scheduler); 785 + drm_sched_start(&queue->scheduler, 0); 786 786 } 787 787 788 788 /** ··· 842 842 } 843 843 mutex_unlock(&pvr_dev->queues.lock); 844 844 845 - drm_sched_start(sched); 845 + drm_sched_start(sched, 0); 846 846 847 847 return DRM_GPU_SCHED_STAT_NOMINAL; 848 848 }
+1 -3
drivers/gpu/drm/imagination/pvr_vm.c
··· 640 640 641 641 xa_lock(&pvr_file->vm_ctx_handles); 642 642 vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle); 643 - if (vm_ctx) 644 - kref_get(&vm_ctx->ref_count); 645 - 643 + pvr_vm_context_get(vm_ctx); 646 644 xa_unlock(&pvr_file->vm_ctx_handles); 647 645 648 646 return vm_ctx;
+7 -3
drivers/gpu/drm/imx/ipuv3/Kconfig
··· 11 11 12 12 config DRM_IMX_PARALLEL_DISPLAY 13 13 tristate "Support for parallel displays" 14 - select DRM_PANEL 15 14 depends on DRM_IMX 15 + select DRM_BRIDGE 16 + select DRM_PANEL_BRIDGE 16 17 select VIDEOMODE_HELPERS 17 18 18 19 config DRM_IMX_TVE ··· 27 26 28 27 config DRM_IMX_LDB 29 28 tristate "Support for LVDS displays" 30 - depends on DRM_IMX && MFD_SYSCON 29 + depends on DRM_IMX 31 30 depends on COMMON_CLK 32 - select DRM_PANEL 31 + select MFD_SYSCON 32 + select DRM_BRIDGE 33 + select DRM_PANEL_BRIDGE 34 + select DRM_IMX_LEGACY_BRIDGE 33 35 help 34 36 Choose this to enable the internal LVDS Display Bridge (LDB) 35 37 found on i.MX53 and i.MX6 processors.
-7
drivers/gpu/drm/imx/ipuv3/imx-drm-core.c
··· 34 34 35 35 DEFINE_DRM_GEM_DMA_FOPS(imx_drm_driver_fops); 36 36 37 - void imx_drm_connector_destroy(struct drm_connector *connector) 38 - { 39 - drm_connector_unregister(connector); 40 - drm_connector_cleanup(connector); 41 - } 42 - EXPORT_SYMBOL_GPL(imx_drm_connector_destroy); 43 - 44 37 static int imx_drm_atomic_check(struct drm_device *dev, 45 38 struct drm_atomic_state *state) 46 39 {
-14
drivers/gpu/drm/imx/ipuv3/imx-drm.h
··· 3 3 #define _IMX_DRM_H_ 4 4 5 5 struct device_node; 6 - struct drm_crtc; 7 6 struct drm_connector; 8 7 struct drm_device; 9 - struct drm_display_mode; 10 8 struct drm_encoder; 11 - struct drm_framebuffer; 12 - struct drm_plane; 13 - struct platform_device; 14 9 15 10 struct imx_crtc_state { 16 11 struct drm_crtc_state base; ··· 19 24 { 20 25 return container_of(s, struct imx_crtc_state, base); 21 26 } 22 - int imx_drm_init_drm(struct platform_device *pdev, 23 - int preferred_bpp); 24 - int imx_drm_exit_drm(void); 25 27 26 28 extern struct platform_driver ipu_drm_driver; 27 29 28 - void imx_drm_mode_config_init(struct drm_device *drm); 29 - 30 - struct drm_gem_dma_object *imx_drm_fb_get_obj(struct drm_framebuffer *fb); 31 - 32 30 int imx_drm_encoder_parse_of(struct drm_device *drm, 33 31 struct drm_encoder *encoder, struct device_node *np); 34 - 35 - void imx_drm_connector_destroy(struct drm_connector *connector); 36 32 37 33 int ipu_planes_assign_pre(struct drm_device *dev, 38 34 struct drm_atomic_state *state);
+41 -162
drivers/gpu/drm/imx/ipuv3/imx-ldb.c
··· 19 19 #include <linux/regmap.h> 20 20 #include <linux/videodev2.h> 21 21 22 - #include <video/of_display_timing.h> 23 - #include <video/of_videomode.h> 24 - 25 22 #include <drm/drm_atomic.h> 26 23 #include <drm/drm_atomic_helper.h> 27 24 #include <drm/drm_bridge.h> 28 - #include <drm/drm_edid.h> 25 + #include <drm/drm_bridge_connector.h> 29 26 #include <drm/drm_managed.h> 30 27 #include <drm/drm_of.h> 31 - #include <drm/drm_panel.h> 32 28 #include <drm/drm_print.h> 33 29 #include <drm/drm_probe_helper.h> 34 30 #include <drm/drm_simple_kms_helper.h> 31 + #include <drm/bridge/imx.h> 35 32 36 33 #include "imx-drm.h" 37 34 ··· 52 55 struct imx_ldb_channel; 53 56 54 57 struct imx_ldb_encoder { 55 - struct drm_connector connector; 56 58 struct drm_encoder encoder; 57 59 struct imx_ldb_channel *channel; 58 60 }; ··· 61 65 struct imx_ldb_channel { 62 66 struct imx_ldb *ldb; 63 67 64 - /* Defines what is connected to the ldb, only one at a time */ 65 - struct drm_panel *panel; 66 68 struct drm_bridge *bridge; 67 69 68 70 struct device_node *child; 69 - struct i2c_adapter *ddc; 70 71 int chno; 71 - const struct drm_edid *drm_edid; 72 - struct drm_display_mode mode; 73 - int mode_valid; 74 72 u32 bus_format; 75 - u32 bus_flags; 76 73 }; 77 - 78 - static inline struct imx_ldb_channel *con_to_imx_ldb_ch(struct drm_connector *c) 79 - { 80 - return container_of(c, struct imx_ldb_encoder, connector)->channel; 81 - } 82 74 83 75 static inline struct imx_ldb_channel *enc_to_imx_ldb_ch(struct drm_encoder *e) 84 76 { ··· 117 133 } 118 134 } 119 135 120 - static int imx_ldb_connector_get_modes(struct drm_connector *connector) 121 - { 122 - struct imx_ldb_channel *imx_ldb_ch = con_to_imx_ldb_ch(connector); 123 - int num_modes; 124 - 125 - num_modes = drm_panel_get_modes(imx_ldb_ch->panel, connector); 126 - if (num_modes > 0) 127 - return num_modes; 128 - 129 - if (!imx_ldb_ch->drm_edid && imx_ldb_ch->ddc) { 130 - imx_ldb_ch->drm_edid = drm_edid_read_ddc(connector, 131 - 
imx_ldb_ch->ddc); 132 - drm_edid_connector_update(connector, imx_ldb_ch->drm_edid); 133 - } 134 - 135 - if (imx_ldb_ch->drm_edid) 136 - num_modes = drm_edid_connector_add_modes(connector); 137 - 138 - if (imx_ldb_ch->mode_valid) { 139 - struct drm_display_mode *mode; 140 - 141 - mode = drm_mode_duplicate(connector->dev, &imx_ldb_ch->mode); 142 - if (!mode) 143 - return -EINVAL; 144 - mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 145 - drm_mode_probed_add(connector, mode); 146 - num_modes++; 147 - } 148 - 149 - return num_modes; 150 - } 151 - 152 136 static void imx_ldb_set_clock(struct imx_ldb *ldb, int mux, int chno, 153 137 unsigned long serial_clk, unsigned long di_clk) 154 138 { ··· 157 205 return; 158 206 } 159 207 160 - drm_panel_prepare(imx_ldb_ch->panel); 161 - 162 208 if (dual) { 163 209 clk_set_parent(ldb->clk_sel[mux], ldb->clk[0]); 164 210 clk_set_parent(ldb->clk_sel[mux], ldb->clk[1]); ··· 195 245 } 196 246 197 247 regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl); 198 - 199 - drm_panel_enable(imx_ldb_ch->panel); 200 248 } 201 249 202 250 static void ··· 271 323 int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN; 272 324 int mux, ret; 273 325 274 - drm_panel_disable(imx_ldb_ch->panel); 275 - 276 326 if (imx_ldb_ch == &ldb->channel[0] || dual) 277 327 ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK; 278 328 if (imx_ldb_ch == &ldb->channel[1] || dual) ··· 304 358 dev_err(ldb->dev, 305 359 "unable to set di%d parent clock to original parent\n", 306 360 mux); 307 - 308 - drm_panel_unprepare(imx_ldb_ch->panel); 309 361 } 310 362 311 363 static int imx_ldb_encoder_atomic_check(struct drm_encoder *encoder, ··· 318 374 /* Bus format description in DT overrides connector display info. 
*/ 319 375 if (!bus_format && di->num_bus_formats) { 320 376 bus_format = di->bus_formats[0]; 321 - imx_crtc_state->bus_flags = di->bus_flags; 322 377 } else { 323 378 bus_format = imx_ldb_ch->bus_format; 324 - imx_crtc_state->bus_flags = imx_ldb_ch->bus_flags; 325 379 } 380 + 381 + imx_crtc_state->bus_flags = di->bus_flags; 382 + 326 383 switch (bus_format) { 327 384 case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG: 328 385 imx_crtc_state->bus_format = MEDIA_BUS_FMT_RGB666_1X18; ··· 342 397 return 0; 343 398 } 344 399 345 - 346 - static const struct drm_connector_funcs imx_ldb_connector_funcs = { 347 - .fill_modes = drm_helper_probe_single_connector_modes, 348 - .destroy = imx_drm_connector_destroy, 349 - .reset = drm_atomic_helper_connector_reset, 350 - .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 351 - .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 352 - }; 353 - 354 - static const struct drm_connector_helper_funcs imx_ldb_connector_helper_funcs = { 355 - .get_modes = imx_ldb_connector_get_modes, 356 - }; 357 400 358 401 static const struct drm_encoder_helper_funcs imx_ldb_encoder_helper_funcs = { 359 402 .atomic_mode_set = imx_ldb_encoder_atomic_mode_set, ··· 380 447 return PTR_ERR(ldb_encoder); 381 448 382 449 ldb_encoder->channel = imx_ldb_ch; 383 - connector = &ldb_encoder->connector; 384 450 encoder = &ldb_encoder->encoder; 385 451 386 452 ret = imx_drm_encoder_parse_of(drm, encoder, imx_ldb_ch->child); ··· 398 466 399 467 drm_encoder_helper_add(encoder, &imx_ldb_encoder_helper_funcs); 400 468 401 - if (imx_ldb_ch->bridge) { 402 - ret = drm_bridge_attach(encoder, imx_ldb_ch->bridge, NULL, 0); 403 - if (ret) 404 - return ret; 405 - } else { 406 - /* 407 - * We want to add the connector whenever there is no bridge 408 - * that brings its own, not only when there is a panel. For 409 - * historical reasons, the ldb driver can also work without 410 - * a panel. 
411 - */ 412 - drm_connector_helper_add(connector, 413 - &imx_ldb_connector_helper_funcs); 414 - drm_connector_init_with_ddc(drm, connector, 415 - &imx_ldb_connector_funcs, 416 - DRM_MODE_CONNECTOR_LVDS, 417 - imx_ldb_ch->ddc); 418 - drm_connector_attach_encoder(connector, encoder); 419 - } 469 + ret = drm_bridge_attach(encoder, imx_ldb_ch->bridge, NULL, 470 + DRM_BRIDGE_ATTACH_NO_CONNECTOR); 471 + if (ret) 472 + return ret; 473 + 474 + connector = drm_bridge_connector_init(drm, encoder); 475 + if (IS_ERR(connector)) 476 + return PTR_ERR(connector); 477 + 478 + drm_connector_attach_encoder(connector, encoder); 420 479 421 480 return 0; 422 481 } ··· 471 548 { } 472 549 }; 473 550 MODULE_DEVICE_TABLE(of, imx_ldb_dt_ids); 474 - 475 - static int imx_ldb_panel_ddc(struct device *dev, 476 - struct imx_ldb_channel *channel, struct device_node *child) 477 - { 478 - struct device_node *ddc_node; 479 - int ret; 480 - 481 - ddc_node = of_parse_phandle(child, "ddc-i2c-bus", 0); 482 - if (ddc_node) { 483 - channel->ddc = of_find_i2c_adapter_by_node(ddc_node); 484 - of_node_put(ddc_node); 485 - if (!channel->ddc) { 486 - dev_warn(dev, "failed to get ddc i2c adapter\n"); 487 - return -EPROBE_DEFER; 488 - } 489 - } 490 - 491 - if (!channel->ddc) { 492 - const void *edidp; 493 - int edid_len; 494 - 495 - /* if no DDC available, fallback to hardcoded EDID */ 496 - dev_dbg(dev, "no ddc available\n"); 497 - 498 - edidp = of_get_property(child, "edid", &edid_len); 499 - if (edidp) { 500 - channel->drm_edid = drm_edid_alloc(edidp, edid_len); 501 - if (!channel->drm_edid) 502 - return -ENOMEM; 503 - } else if (!channel->panel) { 504 - /* fallback to display-timings node */ 505 - ret = of_get_drm_display_mode(child, 506 - &channel->mode, 507 - &channel->bus_flags, 508 - OF_USE_NATIVE_MODE); 509 - if (!ret) 510 - channel->mode_valid = 1; 511 - } 512 - } 513 - return 0; 514 - } 515 551 516 552 static int imx_ldb_bind(struct device *dev, struct device *master, void *data) 517 553 { ··· 576 
694 * The output port is port@4 with an external 4-port mux or 577 695 * port@2 with the internal 2-port mux. 578 696 */ 579 - ret = drm_of_find_panel_or_bridge(child, 580 - imx_ldb->lvds_mux ? 4 : 2, 0, 581 - &channel->panel, &channel->bridge); 582 - if (ret && ret != -ENODEV) 583 - goto free_child; 584 - 585 - /* panel ddc only if there is no bridge */ 586 - if (!channel->bridge) { 587 - ret = imx_ldb_panel_ddc(dev, channel, child); 588 - if (ret) 697 + channel->bridge = devm_drm_of_get_bridge(dev, child, 698 + imx_ldb->lvds_mux ? 4 : 2, 0); 699 + if (IS_ERR(channel->bridge)) { 700 + ret = PTR_ERR(channel->bridge); 701 + if (ret != -ENODEV) 589 702 goto free_child; 703 + channel->bridge = NULL; 590 704 } 591 705 592 706 bus_format = of_get_bus_format(dev, child); 593 - if (bus_format == -EINVAL) { 594 - /* 595 - * If no bus format was specified in the device tree, 596 - * we can still get it from the connected panel later. 597 - */ 598 - if (channel->panel && channel->panel->funcs && 599 - channel->panel->funcs->get_modes) 600 - bus_format = 0; 601 - } 707 + /* 708 + * If no bus format was specified in the device tree, 709 + * we can still get it from the connected panel later. 710 + */ 711 + if (bus_format == -EINVAL && channel->bridge) 712 + bus_format = 0; 602 713 if (bus_format < 0) { 603 714 dev_err(dev, "could not determine data mapping: %d\n", 604 715 bus_format); ··· 599 724 goto free_child; 600 725 } 601 726 channel->bus_format = bus_format; 727 + 728 + /* 729 + * legacy bridge doesn't handle bus_format, so create it after 730 + * checking the bus_format property. 
731 + */ 732 + if (!channel->bridge) { 733 + channel->bridge = devm_imx_drm_legacy_bridge(dev, child, 734 + DRM_MODE_CONNECTOR_LVDS); 735 + if (IS_ERR(channel->bridge)) { 736 + ret = PTR_ERR(channel->bridge); 737 + goto free_child; 738 + } 739 + } 740 + 602 741 channel->child = child; 603 742 } 604 743 ··· 627 738 628 739 static void imx_ldb_remove(struct platform_device *pdev) 629 740 { 630 - struct imx_ldb *imx_ldb = platform_get_drvdata(pdev); 631 - int i; 632 - 633 - for (i = 0; i < 2; i++) { 634 - struct imx_ldb_channel *channel = &imx_ldb->channel[i]; 635 - 636 - drm_edid_free(channel->drm_edid); 637 - i2c_put_adapter(channel->ddc); 638 - } 639 - 640 741 component_del(&pdev->dev, &imx_ldb_ops); 641 742 } 642 743
+7 -1
drivers/gpu/drm/imx/ipuv3/imx-tve.c
··· 305 305 return 0; 306 306 } 307 307 308 + static void imx_tve_connector_destroy(struct drm_connector *connector) 309 + { 310 + drm_connector_unregister(connector); 311 + drm_connector_cleanup(connector); 312 + } 313 + 308 314 static const struct drm_connector_funcs imx_tve_connector_funcs = { 309 315 .fill_modes = drm_helper_probe_single_connector_modes, 310 - .destroy = imx_drm_connector_destroy, 316 + .destroy = imx_tve_connector_destroy, 311 317 .reset = drm_atomic_helper_connector_reset, 312 318 .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 313 319 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+21 -118
drivers/gpu/drm/imx/ipuv3/parallel-display.c
··· 12 12 #include <linux/platform_device.h> 13 13 #include <linux/videodev2.h> 14 14 15 - #include <video/of_display_timing.h> 16 - 17 15 #include <drm/drm_atomic_helper.h> 18 16 #include <drm/drm_bridge.h> 19 - #include <drm/drm_edid.h> 17 + #include <drm/drm_bridge_connector.h> 20 18 #include <drm/drm_managed.h> 21 19 #include <drm/drm_of.h> 22 - #include <drm/drm_panel.h> 23 20 #include <drm/drm_probe_helper.h> 24 21 #include <drm/drm_simple_kms_helper.h> 22 + #include <drm/bridge/imx.h> 25 23 26 24 #include "imx-drm.h" 27 25 28 26 struct imx_parallel_display_encoder { 29 - struct drm_connector connector; 30 27 struct drm_encoder encoder; 31 28 struct drm_bridge bridge; 32 29 struct imx_parallel_display *pd; ··· 31 34 32 35 struct imx_parallel_display { 33 36 struct device *dev; 34 - const struct drm_edid *drm_edid; 35 37 u32 bus_format; 36 - u32 bus_flags; 37 - struct drm_display_mode mode; 38 - struct drm_panel *panel; 39 38 struct drm_bridge *next_bridge; 40 39 }; 41 - 42 - static inline struct imx_parallel_display *con_to_imxpd(struct drm_connector *c) 43 - { 44 - return container_of(c, struct imx_parallel_display_encoder, connector)->pd; 45 - } 46 40 47 41 static inline struct imx_parallel_display *bridge_to_imxpd(struct drm_bridge *b) 48 42 { 49 43 return container_of(b, struct imx_parallel_display_encoder, bridge)->pd; 50 - } 51 - 52 - static int imx_pd_connector_get_modes(struct drm_connector *connector) 53 - { 54 - struct imx_parallel_display *imxpd = con_to_imxpd(connector); 55 - struct device_node *np = imxpd->dev->of_node; 56 - int num_modes; 57 - 58 - num_modes = drm_panel_get_modes(imxpd->panel, connector); 59 - if (num_modes > 0) 60 - return num_modes; 61 - 62 - if (imxpd->drm_edid) { 63 - drm_edid_connector_update(connector, imxpd->drm_edid); 64 - num_modes = drm_edid_connector_add_modes(connector); 65 - } 66 - 67 - if (np) { 68 - struct drm_display_mode *mode = drm_mode_create(connector->dev); 69 - int ret; 70 - 71 - if (!mode) 72 - return 0; 
73 - 74 - ret = of_get_drm_display_mode(np, &imxpd->mode, 75 - &imxpd->bus_flags, 76 - OF_USE_NATIVE_MODE); 77 - if (ret) { 78 - drm_mode_destroy(connector->dev, mode); 79 - return 0; 80 - } 81 - 82 - drm_mode_copy(mode, &imxpd->mode); 83 - mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 84 - drm_mode_probed_add(connector, mode); 85 - num_modes++; 86 - } 87 - 88 - return num_modes; 89 - } 90 - 91 - static void imx_pd_bridge_enable(struct drm_bridge *bridge) 92 - { 93 - struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge); 94 - 95 - drm_panel_prepare(imxpd->panel); 96 - drm_panel_enable(imxpd->panel); 97 - } 98 - 99 - static void imx_pd_bridge_disable(struct drm_bridge *bridge) 100 - { 101 - struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge); 102 - 103 - drm_panel_disable(imxpd->panel); 104 - drm_panel_unprepare(imxpd->panel); 105 44 } 106 45 107 46 static const u32 imx_pd_bus_fmts[] = { ··· 133 200 { 134 201 struct imx_crtc_state *imx_crtc_state = to_imx_crtc_state(crtc_state); 135 202 struct drm_display_info *di = &conn_state->connector->display_info; 136 - struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge); 137 203 struct drm_bridge_state *next_bridge_state = NULL; 138 204 struct drm_bridge *next_bridge; 139 205 u32 bus_flags, bus_fmt; ··· 144 212 145 213 if (next_bridge_state) 146 214 bus_flags = next_bridge_state->input_bus_cfg.flags; 147 - else if (di->num_bus_formats) 148 - bus_flags = di->bus_flags; 149 215 else 150 - bus_flags = imxpd->bus_flags; 216 + bus_flags = di->bus_flags; 151 217 152 218 bus_fmt = bridge_state->input_bus_cfg.format; 153 219 if (!imx_pd_format_supported(bus_fmt)) ··· 161 231 return 0; 162 232 } 163 233 164 - static const struct drm_connector_funcs imx_pd_connector_funcs = { 165 - .fill_modes = drm_helper_probe_single_connector_modes, 166 - .destroy = imx_drm_connector_destroy, 167 - .reset = drm_atomic_helper_connector_reset, 168 - .atomic_duplicate_state = 
drm_atomic_helper_connector_duplicate_state, 169 - .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 170 - }; 234 + static int imx_pd_bridge_attach(struct drm_bridge *bridge, 235 + enum drm_bridge_attach_flags flags) 236 + { 237 + struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge); 171 238 172 - static const struct drm_connector_helper_funcs imx_pd_connector_helper_funcs = { 173 - .get_modes = imx_pd_connector_get_modes, 174 - }; 239 + return drm_bridge_attach(bridge->encoder, imxpd->next_bridge, bridge, flags); 240 + } 175 241 176 242 static const struct drm_bridge_funcs imx_pd_bridge_funcs = { 177 - .enable = imx_pd_bridge_enable, 178 - .disable = imx_pd_bridge_disable, 243 + .attach = imx_pd_bridge_attach, 179 244 .atomic_reset = drm_atomic_helper_bridge_reset, 180 245 .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 181 246 .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, ··· 195 270 return PTR_ERR(imxpd_encoder); 196 271 197 272 imxpd_encoder->pd = imxpd; 198 - connector = &imxpd_encoder->connector; 199 273 encoder = &imxpd_encoder->encoder; 200 274 bridge = &imxpd_encoder->bridge; 201 275 ··· 202 278 if (ret) 203 279 return ret; 204 280 205 - /* set the connector's dpms to OFF so that 206 - * drm_helper_connector_dpms() won't return 207 - * immediately since the current state is ON 208 - * at this point. 
209 - */ 210 - connector->dpms = DRM_MODE_DPMS_OFF; 211 - 212 281 bridge->funcs = &imx_pd_bridge_funcs; 213 - drm_bridge_attach(encoder, bridge, NULL, 0); 282 + drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR); 214 283 215 - if (imxpd->next_bridge) { 216 - ret = drm_bridge_attach(encoder, imxpd->next_bridge, bridge, 0); 217 - if (ret < 0) 218 - return ret; 219 - } else { 220 - drm_connector_helper_add(connector, 221 - &imx_pd_connector_helper_funcs); 222 - drm_connector_init(drm, connector, &imx_pd_connector_funcs, 223 - DRM_MODE_CONNECTOR_DPI); 284 + connector = drm_bridge_connector_init(drm, encoder); 285 + if (IS_ERR(connector)) 286 + return PTR_ERR(connector); 224 287 225 - drm_connector_attach_encoder(connector, encoder); 226 - } 288 + drm_connector_attach_encoder(connector, encoder); 227 289 228 290 return 0; 229 291 } ··· 222 312 { 223 313 struct device *dev = &pdev->dev; 224 314 struct device_node *np = dev->of_node; 225 - const u8 *edidp; 226 315 struct imx_parallel_display *imxpd; 227 - int edid_len; 228 316 int ret; 229 317 u32 bus_format = 0; 230 318 const char *fmt; ··· 232 324 return -ENOMEM; 233 325 234 326 /* port@1 is the output port */ 235 - ret = drm_of_find_panel_or_bridge(np, 1, 0, &imxpd->panel, 236 - &imxpd->next_bridge); 237 - if (ret && ret != -ENODEV) 327 + imxpd->next_bridge = devm_drm_of_get_bridge(dev, np, 1, 0); 328 + if (imxpd->next_bridge == ERR_PTR(-ENODEV)) 329 + imxpd->next_bridge = devm_imx_drm_legacy_bridge(dev, np, DRM_MODE_CONNECTOR_DPI); 330 + if (IS_ERR(imxpd->next_bridge)) { 331 + ret = PTR_ERR(imxpd->next_bridge); 238 332 return ret; 239 - 240 - edidp = of_get_property(np, "edid", &edid_len); 241 - if (edidp) 242 - imxpd->drm_edid = drm_edid_alloc(edidp, edid_len); 333 + } 243 334 244 335 ret = of_property_read_string(np, "interface-pix-fmt", &fmt); 245 336 if (!ret) { ··· 262 355 263 356 static void imx_pd_remove(struct platform_device *pdev) 264 357 { 265 - struct imx_parallel_display *imxpd = 
platform_get_drvdata(pdev); 266 - 267 358 component_del(&pdev->dev, &imx_pd_ops); 268 - 269 - drm_edid_free(imxpd->drm_edid); 270 359 } 271 360 272 361 static const struct of_device_id imx_pd_dt_ids[] = {
+2 -2
drivers/gpu/drm/kmb/kmb_dsi.c
··· 818 818 } 819 819 } 820 820 821 - static inline void 821 + static inline __maybe_unused void 822 822 set_test_mode_src_osc_freq_target_low_bits(struct kmb_dsi *kmb_dsi, 823 823 u32 dphy_no, 824 824 u32 freq) ··· 830 830 (freq & 0x7f)); 831 831 } 832 832 833 - static inline void 833 + static inline __maybe_unused void 834 834 set_test_mode_src_osc_freq_target_hi_bits(struct kmb_dsi *kmb_dsi, 835 835 u32 dphy_no, 836 836 u32 freq)
+1 -1
drivers/gpu/drm/lima/lima_sched.c
··· 463 463 lima_pm_idle(ldev); 464 464 465 465 drm_sched_resubmit_jobs(&pipe->base); 466 - drm_sched_start(&pipe->base); 466 + drm_sched_start(&pipe->base, 0); 467 467 468 468 return DRM_GPU_SCHED_STAT_NOMINAL; 469 469 }
+2
drivers/gpu/drm/msm/Kconfig
··· 92 92 bool "Enable DPU support in MSM DRM driver" 93 93 depends on DRM_MSM 94 94 select DRM_MSM_MDSS 95 + select DRM_DISPLAY_DSC_HELPER 95 96 default y 96 97 help 97 98 Compile in support for the Display Processing Unit in ··· 114 113 depends on DRM_MSM 115 114 select DRM_PANEL 116 115 select DRM_MIPI_DSI 116 + select DRM_DISPLAY_DSC_HELPER 117 117 default y 118 118 help 119 119 Choose this option if you have a need for MIPI DSI connector
+2 -3
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 477 477 struct nouveau_connector *nv_connector = nouveau_connector(connector); 478 478 struct nouveau_encoder *nv_encoder; 479 479 struct pci_dev *pdev = to_pci_dev(dev->dev); 480 - struct device_node *cn, *dn = pci_device_to_OF_node(pdev); 480 + struct device_node *dn = pci_device_to_OF_node(pdev); 481 481 482 482 if (!dn || 483 483 !((nv_encoder = find_encoder(connector, DCB_OUTPUT_TMDS)) || 484 484 (nv_encoder = find_encoder(connector, DCB_OUTPUT_ANALOG)))) 485 485 return NULL; 486 486 487 - for_each_child_of_node(dn, cn) { 487 + for_each_child_of_node_scoped(dn, cn) { 488 488 const char *name = of_get_property(cn, "name", NULL); 489 489 const void *edid = of_get_property(cn, "EDID", NULL); 490 490 int idx = name ? name[strlen(name) - 1] - 'A' : 0; ··· 492 492 if (nv_encoder->dcb->i2c_index == idx && edid) { 493 493 nv_connector->edid = 494 494 kmemdup(edid, EDID_LENGTH, GFP_KERNEL); 495 - of_node_put(cn); 496 495 return nv_encoder; 497 496 } 498 497 }
+1 -1
drivers/gpu/drm/nouveau/nouveau_sched.c
··· 379 379 else 380 380 NV_PRINTK(warn, job->cli, "Generic job timeout.\n"); 381 381 382 - drm_sched_start(sched); 382 + drm_sched_start(sched, 0); 383 383 384 384 return stat; 385 385 }
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
··· 120 120 mutex_init(&tdev->iommu.mutex); 121 121 122 122 if (device_iommu_mapped(dev)) { 123 - tdev->iommu.domain = iommu_domain_alloc(&platform_bus_type); 124 - if (!tdev->iommu.domain) 123 + tdev->iommu.domain = iommu_paging_domain_alloc(dev); 124 + if (IS_ERR(tdev->iommu.domain)) 125 125 goto error; 126 126 127 127 /*
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/volt/base.c
··· 142 142 return -ENODEV; 143 143 } 144 144 145 - result = min(max(result, (s64)info.min), (s64)info.max); 145 + result = clamp(result, (s64)info.min, (s64)info.max); 146 146 147 147 if (info.link != 0xff) { 148 148 int ret = nvkm_volt_map(volt, info.link, temp);
+6 -19
drivers/gpu/drm/omapdrm/dss/base.c
··· 139 139 } 140 140 141 141 int omapdss_device_connect(struct dss_device *dss, 142 - struct omap_dss_device *src, 143 142 struct omap_dss_device *dst) 144 143 { 145 - dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n", 146 - src ? dev_name(src->dev) : "NULL", 144 + dev_dbg(&dss->pdev->dev, "connect(%s)\n", 147 145 dst ? dev_name(dst->dev) : "NULL"); 148 146 149 - if (!dst) { 150 - /* 151 - * The destination is NULL when the source is connected to a 152 - * bridge instead of a DSS device. Stop here, we will attach 153 - * the bridge later when we will have a DRM encoder. 154 - */ 155 - return src && src->bridge ? 0 : -EINVAL; 156 - } 147 + if (!dst) 148 + return -EINVAL; 157 149 158 150 if (omapdss_device_is_connected(dst)) 159 151 return -EBUSY; ··· 155 163 return 0; 156 164 } 157 165 158 - void omapdss_device_disconnect(struct omap_dss_device *src, 166 + void omapdss_device_disconnect(struct dss_device *dss, 159 167 struct omap_dss_device *dst) 160 168 { 161 - struct dss_device *dss = src ? src->dss : dst->dss; 162 - 163 - dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n", 164 - src ? dev_name(src->dev) : "NULL", 169 + dev_dbg(&dss->pdev->dev, "disconnect(%s)\n", 165 170 dst ? dev_name(dst->dev) : "NULL"); 166 171 167 - if (!dst) { 168 - WARN_ON(!src->bridge); 172 + if (WARN_ON(!dst)) 169 173 return; 170 - } 171 174 172 175 if (!dst->id && !omapdss_device_is_connected(dst)) { 173 176 WARN_ON(1);
+1 -2
drivers/gpu/drm/omapdrm/dss/omapdss.h
··· 242 242 void omapdss_device_put(struct omap_dss_device *dssdev); 243 243 struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node); 244 244 int omapdss_device_connect(struct dss_device *dss, 245 - struct omap_dss_device *src, 246 245 struct omap_dss_device *dst); 247 - void omapdss_device_disconnect(struct omap_dss_device *src, 246 + void omapdss_device_disconnect(struct dss_device *dss, 248 247 struct omap_dss_device *dst); 249 248 250 249 int omap_dss_get_num_overlay_managers(void);
+3 -3
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
··· 119 119 * earlier than the DMA finished writing the value to memory. 120 120 */ 121 121 rmb(); 122 - return readl(dmm->wa_dma_data); 122 + return readl((__iomem void *)dmm->wa_dma_data); 123 123 } 124 124 125 125 static void dmm_write_wa(struct dmm *dmm, u32 val, u32 reg) ··· 127 127 dma_addr_t src, dst; 128 128 int r; 129 129 130 - writel(val, dmm->wa_dma_data); 130 + writel(val, (__iomem void *)dmm->wa_dma_data); 131 131 /* 132 132 * As per i878 workaround, the DMA is used to access the DMM registers. 133 133 * Make sure that the writel is not moved by the compiler or the CPU, so ··· 411 411 */ 412 412 413 413 /* read back to ensure the data is in RAM */ 414 - readl(&txn->last_pat->next_pa); 414 + readl((__iomem void *)&txn->last_pat->next_pa); 415 415 416 416 /* write to PAT_DESCR to clear out any pending transaction */ 417 417 dmm_write(dmm, 0x0, reg[PAT_DESCR][engine->id]);
+2 -2
drivers/gpu/drm/omapdrm/omap_drv.c
··· 307 307 for (i = 0; i < priv->num_pipes; i++) { 308 308 struct omap_drm_pipeline *pipe = &priv->pipes[i]; 309 309 310 - omapdss_device_disconnect(NULL, pipe->output); 310 + omapdss_device_disconnect(priv->dss, pipe->output); 311 311 312 312 omapdss_device_put(pipe->output); 313 313 pipe->output = NULL; ··· 325 325 int r; 326 326 327 327 for_each_dss_output(output) { 328 - r = omapdss_device_connect(priv->dss, NULL, output); 328 + r = omapdss_device_connect(priv->dss, output); 329 329 if (r == -EPROBE_DEFER) { 330 330 omapdss_device_put(output); 331 331 return r;
+2 -8
drivers/gpu/drm/omapdrm/omap_gem.c
··· 1402 1402 1403 1403 omap_obj = to_omap_bo(obj); 1404 1404 1405 - mutex_lock(&omap_obj->lock); 1406 - 1407 1405 omap_obj->sgt = sgt; 1408 1406 1409 1407 if (omap_gem_sgt_is_contiguous(sgt, size)) { ··· 1416 1418 pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL); 1417 1419 if (!pages) { 1418 1420 omap_gem_free_object(obj); 1419 - obj = ERR_PTR(-ENOMEM); 1420 - goto done; 1421 + return ERR_PTR(-ENOMEM); 1421 1422 } 1422 1423 1423 1424 omap_obj->pages = pages; 1424 1425 ret = drm_prime_sg_to_page_array(sgt, pages, npages); 1425 1426 if (ret) { 1426 1427 omap_gem_free_object(obj); 1427 - obj = ERR_PTR(-ENOMEM); 1428 - goto done; 1428 + return ERR_PTR(-ENOMEM); 1429 1429 } 1430 1430 } 1431 1431 1432 - done: 1433 - mutex_unlock(&omap_obj->lock); 1434 1432 return obj; 1435 1433 } 1436 1434
+3 -3
drivers/gpu/drm/panel/Kconfig
··· 378 378 depends on OF 379 379 depends on DRM_MIPI_DSI 380 380 depends on BACKLIGHT_CLASS_DEVICE 381 - select DRM_DISPLAY_DP_HELPER 381 + select DRM_DISPLAY_DSC_HELPER 382 382 select DRM_DISPLAY_HELPER 383 383 help 384 384 Say Y here if you want to enable support for LG sw43408 panel. ··· 587 587 depends on OF 588 588 depends on DRM_MIPI_DSI 589 589 depends on BACKLIGHT_CLASS_DEVICE 590 - select DRM_DISPLAY_DP_HELPER 590 + select DRM_DISPLAY_DSC_HELPER 591 591 select DRM_DISPLAY_HELPER 592 592 help 593 593 Say Y here if you want to enable support for Raydium RM692E5-based ··· 946 946 depends on OF 947 947 depends on DRM_MIPI_DSI 948 948 depends on BACKLIGHT_CLASS_DEVICE 949 - select DRM_DISPLAY_DP_HELPER 949 + select DRM_DISPLAY_DSC_HELPER 950 950 select DRM_DISPLAY_HELPER 951 951 help 952 952 Say Y here if you want to enable support for Visionox
+133 -158
drivers/gpu/drm/panel/panel-himax-hx83112a.c
··· 56 56 msleep(50); 57 57 } 58 58 59 - static int hx83112a_on(struct hx83112a_panel *ctx) 59 + static int hx83112a_on(struct mipi_dsi_device *dsi) 60 60 { 61 - struct mipi_dsi_device *dsi = ctx->dsi; 62 - struct device *dev = &dsi->dev; 63 - int ret; 61 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi }; 64 62 65 63 dsi->mode_flags |= MIPI_DSI_MODE_LPM; 66 64 67 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETEXTC, 0x83, 0x11, 0x2a); 68 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETPOWER1, 69 - 0x08, 0x28, 0x28, 0x83, 0x83, 0x4c, 0x4f, 0x33); 70 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDISP, 71 - 0x00, 0x02, 0x00, 0x90, 0x24, 0x00, 0x08, 0x19, 72 - 0xea, 0x11, 0x11, 0x00, 0x11, 0xa3); 73 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDRV, 74 - 0x58, 0x68, 0x58, 0x68, 0x0f, 0xef, 0x0b, 0xc0, 75 - 0x0b, 0xc0, 0x0b, 0xc0, 0x00, 0xff, 0x00, 0xff, 76 - 0x00, 0x00, 0x14, 0x15, 0x00, 0x29, 0x11, 0x07, 77 - 0x12, 0x00, 0x29); 78 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x02); 79 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDRV, 80 - 0x00, 0x12, 0x12, 0x11, 0x88, 0x12, 0x12, 0x00, 81 - 0x53); 82 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x00); 83 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x03); 84 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDGCLUT, 85 - 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 86 - 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 87 - 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 88 - 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 89 - 0x41, 0x39, 0x31, 0x29, 0x21, 0x19, 0x12, 0x0a, 90 - 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 91 - 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 92 - 0x40); 93 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x02); 94 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDGCLUT, 95 - 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 96 - 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 97 - 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 98 - 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 99 - 0x41, 0x39, 0x31, 
0x29, 0x21, 0x19, 0x12, 0x0a, 100 - 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 101 - 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 102 - 0x40); 103 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x01); 104 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDGCLUT, 105 - 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 106 - 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 107 - 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 108 - 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 109 - 0x41, 0x39, 0x31, 0x29, 0x21, 0x19, 0x12, 0x0a, 110 - 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 111 - 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 112 - 0x40); 113 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x00); 114 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETDGCLUT, 0x01); 115 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETTCON, 116 - 0x70, 0x00, 0x04, 0xe0, 0x33, 0x00); 117 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETPANEL, 0x08); 118 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETPOWER2, 0x2b, 0x2b); 119 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP0, 120 - 0x80, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x08, 121 - 0x08, 0x03, 0x03, 0x22, 0x18, 0x07, 0x07, 0x07, 122 - 0x07, 0x32, 0x10, 0x06, 0x00, 0x06, 0x32, 0x10, 123 - 0x07, 0x00, 0x07, 0x32, 0x19, 0x31, 0x09, 0x31, 124 - 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x08, 125 - 0x09, 0x30, 0x00, 0x00, 0x00, 0x06, 0x0d, 0x00, 126 - 0x0f); 127 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x01); 128 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP0, 129 - 0x00, 0x00, 0x19, 0x10, 0x00, 0x0a, 0x00, 0x81); 130 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x00); 131 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP1, 132 - 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 133 - 0xc0, 0xc0, 0x18, 0x18, 0x19, 0x19, 0x18, 0x18, 134 - 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x3f, 0x3f, 135 - 0x28, 0x28, 0x24, 0x24, 0x02, 0x03, 0x02, 0x03, 136 - 0x00, 0x01, 0x00, 0x01, 0x31, 0x31, 0x31, 0x31, 137 - 0x30, 0x30, 0x30, 0x30, 0x2f, 0x2f, 0x2f, 0x2f); 138 - 
mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP2, 139 - 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 140 - 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x19, 0x19, 141 - 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x3f, 0x3f, 142 - 0x24, 0x24, 0x28, 0x28, 0x01, 0x00, 0x01, 0x00, 143 - 0x03, 0x02, 0x03, 0x02, 0x31, 0x31, 0x31, 0x31, 144 - 0x30, 0x30, 0x30, 0x30, 0x2f, 0x2f, 0x2f, 0x2f); 145 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP3, 146 - 0xaa, 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xea, 147 - 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xea, 0xab, 0xaa, 148 - 0xaa, 0xaa, 0xaa, 0xea, 0xab, 0xaa, 0xaa, 0xaa); 149 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x01); 150 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP3, 151 - 0xaa, 0x2e, 0x28, 0x00, 0x00, 0x00, 0xaa, 0x2e, 152 - 0x28, 0x00, 0x00, 0x00, 0xaa, 0xee, 0xaa, 0xaa, 153 - 0xaa, 0xaa, 0xaa, 0xee, 0xaa, 0xaa, 0xaa, 0xaa); 154 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x02); 155 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP3, 156 - 0xaa, 0xff, 0xff, 0xff, 0xff, 0xff, 0xaa, 0xff, 157 - 0xff, 0xff, 0xff, 0xff); 158 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x03); 159 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETGIP3, 160 - 0xaa, 0xaa, 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 161 - 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xff, 0xff, 0xff, 162 - 0xff, 0xff, 0xaa, 0xff, 0xff, 0xff, 0xff, 0xff); 163 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x00); 164 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETTP1, 165 - 0x0e, 0x0e, 0x1e, 0x65, 0x1c, 0x65, 0x00, 0x50, 166 - 0x20, 0x20, 0x00, 0x00, 0x02, 0x02, 0x02, 0x05, 167 - 0x14, 0x14, 0x32, 0xb9, 0x23, 0xb9, 0x08); 168 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x01); 169 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETTP1, 170 - 0x02, 0x00, 0xa8, 0x01, 0xa8, 0x0d, 0xa4, 0x0e); 171 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x02); 172 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETTP1, 173 - 0x00, 0x00, 0x08, 0x00, 0x01, 0x00, 0x00, 0x00, 174 - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 175 - 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 176 - 0x00, 0x00, 0x00, 0x02, 0x00); 177 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETBANK, 0x00); 178 - mipi_dsi_dcs_write_seq(dsi, HX83112A_UNKNOWN1, 0xc3); 179 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETCLOCK, 0xd1, 0xd6); 180 - mipi_dsi_dcs_write_seq(dsi, HX83112A_UNKNOWN1, 0x3f); 181 - mipi_dsi_dcs_write_seq(dsi, HX83112A_UNKNOWN1, 0xc6); 182 - mipi_dsi_dcs_write_seq(dsi, HX83112A_SETPTBA, 0x37); 183 - mipi_dsi_dcs_write_seq(dsi, HX83112A_UNKNOWN1, 0x3f); 65 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETEXTC, 0x83, 0x11, 0x2a); 66 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETPOWER1, 67 + 0x08, 0x28, 0x28, 0x83, 0x83, 0x4c, 0x4f, 0x33); 68 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDISP, 69 + 0x00, 0x02, 0x00, 0x90, 0x24, 0x00, 0x08, 0x19, 70 + 0xea, 0x11, 0x11, 0x00, 0x11, 0xa3); 71 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDRV, 72 + 0x58, 0x68, 0x58, 0x68, 0x0f, 0xef, 0x0b, 0xc0, 73 + 0x0b, 0xc0, 0x0b, 0xc0, 0x00, 0xff, 0x00, 0xff, 74 + 0x00, 0x00, 0x14, 0x15, 0x00, 0x29, 0x11, 0x07, 75 + 0x12, 0x00, 0x29); 76 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x02); 77 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDRV, 78 + 0x00, 0x12, 0x12, 0x11, 0x88, 0x12, 0x12, 0x00, 79 + 0x53); 80 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x00); 81 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x03); 82 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDGCLUT, 83 + 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 84 + 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 85 + 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 86 + 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 87 + 0x41, 0x39, 0x31, 0x29, 0x21, 0x19, 0x12, 0x0a, 88 + 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 89 + 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 90 + 0x40); 91 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x02); 92 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 
HX83112A_SETDGCLUT, 93 + 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 94 + 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 95 + 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 96 + 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 97 + 0x41, 0x39, 0x31, 0x29, 0x21, 0x19, 0x12, 0x0a, 98 + 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 99 + 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 100 + 0x40); 101 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x01); 102 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDGCLUT, 103 + 0xff, 0xfe, 0xfb, 0xf8, 0xf4, 0xf1, 0xed, 0xe6, 104 + 0xe2, 0xde, 0xdb, 0xd6, 0xd3, 0xcf, 0xca, 0xc6, 105 + 0xc2, 0xbe, 0xb9, 0xb0, 0xa7, 0x9e, 0x96, 0x8d, 106 + 0x84, 0x7c, 0x74, 0x6b, 0x62, 0x5a, 0x51, 0x49, 107 + 0x41, 0x39, 0x31, 0x29, 0x21, 0x19, 0x12, 0x0a, 108 + 0x06, 0x05, 0x02, 0x01, 0x00, 0x00, 0xc9, 0xb3, 109 + 0x08, 0x0e, 0xf2, 0xe1, 0x59, 0xf4, 0x22, 0xad, 110 + 0x40); 111 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x00); 112 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETDGCLUT, 0x01); 113 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETTCON, 114 + 0x70, 0x00, 0x04, 0xe0, 0x33, 0x00); 115 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETPANEL, 0x08); 116 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETPOWER2, 0x2b, 0x2b); 117 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP0, 118 + 0x80, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x08, 119 + 0x08, 0x03, 0x03, 0x22, 0x18, 0x07, 0x07, 0x07, 120 + 0x07, 0x32, 0x10, 0x06, 0x00, 0x06, 0x32, 0x10, 121 + 0x07, 0x00, 0x07, 0x32, 0x19, 0x31, 0x09, 0x31, 122 + 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x08, 123 + 0x09, 0x30, 0x00, 0x00, 0x00, 0x06, 0x0d, 0x00, 124 + 0x0f); 125 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x01); 126 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP0, 127 + 0x00, 0x00, 0x19, 0x10, 0x00, 0x0a, 0x00, 0x81); 128 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x00); 129 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP1, 130 + 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 131 + 0xc0, 0xc0, 0x18, 0x18, 0x19, 0x19, 0x18, 0x18, 132 + 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x3f, 0x3f, 133 + 0x28, 0x28, 0x24, 0x24, 0x02, 0x03, 0x02, 0x03, 134 + 0x00, 0x01, 0x00, 0x01, 0x31, 0x31, 0x31, 0x31, 135 + 0x30, 0x30, 0x30, 0x30, 0x2f, 0x2f, 0x2f, 0x2f); 136 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP2, 137 + 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 138 + 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x19, 0x19, 139 + 0x40, 0x40, 0x18, 0x18, 0x18, 0x18, 0x3f, 0x3f, 140 + 0x24, 0x24, 0x28, 0x28, 0x01, 0x00, 0x01, 0x00, 141 + 0x03, 0x02, 0x03, 0x02, 0x31, 0x31, 0x31, 0x31, 142 + 0x30, 0x30, 0x30, 0x30, 0x2f, 0x2f, 0x2f, 0x2f); 143 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP3, 144 + 0xaa, 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xea, 145 + 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xea, 0xab, 0xaa, 146 + 0xaa, 0xaa, 0xaa, 0xea, 0xab, 0xaa, 0xaa, 0xaa); 147 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x01); 148 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP3, 149 + 0xaa, 0x2e, 0x28, 0x00, 0x00, 0x00, 0xaa, 0x2e, 150 + 0x28, 0x00, 0x00, 0x00, 0xaa, 0xee, 0xaa, 0xaa, 151 + 0xaa, 0xaa, 0xaa, 0xee, 0xaa, 0xaa, 0xaa, 0xaa); 152 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x02); 153 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP3, 154 + 0xaa, 0xff, 0xff, 0xff, 0xff, 0xff, 0xaa, 0xff, 155 + 0xff, 0xff, 0xff, 0xff); 156 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x03); 157 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETGIP3, 158 + 0xaa, 0xaa, 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 159 + 0xea, 0xaa, 0xaa, 0xaa, 0xaa, 0xff, 0xff, 0xff, 160 + 0xff, 0xff, 0xaa, 0xff, 0xff, 0xff, 0xff, 0xff); 161 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x00); 162 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETTP1, 163 + 0x0e, 0x0e, 0x1e, 0x65, 0x1c, 0x65, 0x00, 0x50, 164 + 
0x20, 0x20, 0x00, 0x00, 0x02, 0x02, 0x02, 0x05, 165 + 0x14, 0x14, 0x32, 0xb9, 0x23, 0xb9, 0x08); 166 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x01); 167 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETTP1, 168 + 0x02, 0x00, 0xa8, 0x01, 0xa8, 0x0d, 0xa4, 0x0e); 169 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x02); 170 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETTP1, 171 + 0x00, 0x00, 0x08, 0x00, 0x01, 0x00, 0x00, 0x00, 172 + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 173 + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 174 + 0x00, 0x00, 0x00, 0x02, 0x00); 175 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETBANK, 0x00); 176 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_UNKNOWN1, 0xc3); 177 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETCLOCK, 0xd1, 0xd6); 178 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_UNKNOWN1, 0x3f); 179 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_UNKNOWN1, 0xc6); 180 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_SETPTBA, 0x37); 181 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83112A_UNKNOWN1, 0x3f); 184 182 185 - ret = mipi_dsi_dcs_exit_sleep_mode(dsi); 186 - if (ret < 0) { 187 - dev_err(dev, "Failed to exit sleep mode: %d\n", ret); 188 - return ret; 189 - } 190 - msleep(150); 183 + mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx); 184 + mipi_dsi_msleep(&dsi_ctx, 150); 191 185 192 - ret = mipi_dsi_dcs_set_display_on(dsi); 193 - if (ret < 0) { 194 - dev_err(dev, "Failed to set display on: %d\n", ret); 195 - return ret; 196 - } 197 - msleep(50); 186 + mipi_dsi_dcs_set_display_on_multi(&dsi_ctx); 187 + mipi_dsi_msleep(&dsi_ctx, 50); 198 188 199 - return 0; 189 + return dsi_ctx.accum_err; 200 190 } 201 191 202 192 static int hx83112a_disable(struct drm_panel *panel) 203 193 { 204 194 struct hx83112a_panel *ctx = to_hx83112a_panel(panel); 205 195 struct mipi_dsi_device *dsi = ctx->dsi; 206 - struct device *dev = &dsi->dev; 207 - int ret; 196 + struct mipi_dsi_multi_context 
dsi_ctx = { .dsi = dsi }; 208 197 209 198 dsi->mode_flags &= ~MIPI_DSI_MODE_LPM; 210 199 211 - ret = mipi_dsi_dcs_set_display_off(dsi); 212 - if (ret < 0) { 213 - dev_err(dev, "Failed to set display off: %d\n", ret); 214 - return ret; 215 - } 216 - msleep(20); 200 + mipi_dsi_dcs_set_display_off_multi(&dsi_ctx); 201 + mipi_dsi_msleep(&dsi_ctx, 20); 202 + mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx); 203 + mipi_dsi_msleep(&dsi_ctx, 120); 217 204 218 - ret = mipi_dsi_dcs_enter_sleep_mode(dsi); 219 - if (ret < 0) { 220 - dev_err(dev, "Failed to enter sleep mode: %d\n", ret); 221 - return ret; 222 - } 223 - msleep(120); 224 - 225 - return 0; 205 + return dsi_ctx.accum_err; 226 206 } 227 207 228 208 static int hx83112a_prepare(struct drm_panel *panel) 229 209 { 230 210 struct hx83112a_panel *ctx = to_hx83112a_panel(panel); 231 - struct device *dev = &ctx->dsi->dev; 232 211 int ret; 233 212 234 213 ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 235 - if (ret < 0) { 236 - dev_err(dev, "Failed to enable regulators: %d\n", ret); 214 + if (ret < 0) 237 215 return ret; 238 - } 239 216 240 217 hx83112a_reset(ctx); 241 218 242 - ret = hx83112a_on(ctx); 219 + ret = hx83112a_on(ctx->dsi); 243 220 if (ret < 0) { 244 - dev_err(dev, "Failed to initialize panel: %d\n", ret); 245 221 gpiod_set_value_cansleep(ctx->reset_gpio, 1); 246 222 regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 247 - return ret; 248 223 } 249 224 250 - return 0; 225 + return ret; 251 226 } 252 227 253 228 static int hx83112a_unprepare(struct drm_panel *panel)
+3 -207
drivers/gpu/drm/panel/panel-ilitek-ili9341.c
··· 13 13 * Derived from drivers/drm/gpu/panel/panel-ilitek-ili9322.c 14 14 * the reuse of DBI abstraction part referred from Linus's patch 15 15 * "drm/panel: s6e63m0: Switch to DBI abstraction for SPI" 16 - * 17 - * For only-dbi part, copy from David's code (drm/tiny/ili9341.c) 18 - * Copyright 2018 David Lechner <david@lechnology.com> 19 16 */ 20 17 21 18 #include <linux/backlight.h> ··· 483 486 .get_modes = ili9341_dpi_get_modes, 484 487 }; 485 488 486 - static void ili9341_dbi_enable(struct drm_simple_display_pipe *pipe, 487 - struct drm_crtc_state *crtc_state, 488 - struct drm_plane_state *plane_state) 489 - { 490 - struct mipi_dbi_dev *dbidev = drm_to_mipi_dbi_dev(pipe->crtc.dev); 491 - struct mipi_dbi *dbi = &dbidev->dbi; 492 - u8 addr_mode; 493 - int ret, idx; 494 - 495 - if (!drm_dev_enter(pipe->crtc.dev, &idx)) 496 - return; 497 - 498 - ret = mipi_dbi_poweron_conditional_reset(dbidev); 499 - if (ret < 0) 500 - goto out_exit; 501 - if (ret == 1) 502 - goto out_enable; 503 - 504 - mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_OFF); 505 - 506 - mipi_dbi_command(dbi, ILI9341_POWERB, 0x00, 0xc1, 0x30); 507 - mipi_dbi_command(dbi, ILI9341_POWER_SEQ, 0x64, 0x03, 0x12, 0x81); 508 - mipi_dbi_command(dbi, ILI9341_DTCA, 0x85, 0x00, 0x78); 509 - mipi_dbi_command(dbi, ILI9341_POWERA, 0x39, 0x2c, 0x00, 0x34, 0x02); 510 - mipi_dbi_command(dbi, ILI9341_PRC, ILI9341_DBI_PRC_NORMAL); 511 - mipi_dbi_command(dbi, ILI9341_DTCB, 0x00, 0x00); 512 - 513 - /* Power Control */ 514 - mipi_dbi_command(dbi, ILI9341_POWER1, ILI9341_DBI_VCOMH_4P6V); 515 - mipi_dbi_command(dbi, ILI9341_POWER2, ILI9341_DBI_PWR_2_DEFAULT); 516 - /* VCOM */ 517 - mipi_dbi_command(dbi, ILI9341_VCOM1, ILI9341_DBI_VCOM_1_VMH_4P25V, 518 - ILI9341_DBI_VCOM_1_VML_1P5V); 519 - mipi_dbi_command(dbi, ILI9341_VCOM2, ILI9341_DBI_VCOM_2_DEC_58); 520 - 521 - /* Memory Access Control */ 522 - mipi_dbi_command(dbi, MIPI_DCS_SET_PIXEL_FORMAT, 523 - MIPI_DCS_PIXEL_FMT_16BIT); 524 - 525 - /* Frame Rate */ 526 - 
mipi_dbi_command(dbi, ILI9341_FRC, ILI9341_DBI_FRC_DIVA & 0x03, 527 - ILI9341_DBI_FRC_RTNA & 0x1f); 528 - 529 - /* Gamma */ 530 - mipi_dbi_command(dbi, ILI9341_3GAMMA_EN, 0x00); 531 - mipi_dbi_command(dbi, MIPI_DCS_SET_GAMMA_CURVE, ILI9341_GAMMA_CURVE_1); 532 - mipi_dbi_command(dbi, ILI9341_PGAMMA, 533 - 0x0f, 0x31, 0x2b, 0x0c, 0x0e, 0x08, 0x4e, 0xf1, 534 - 0x37, 0x07, 0x10, 0x03, 0x0e, 0x09, 0x00); 535 - mipi_dbi_command(dbi, ILI9341_NGAMMA, 536 - 0x00, 0x0e, 0x14, 0x03, 0x11, 0x07, 0x31, 0xc1, 537 - 0x48, 0x08, 0x0f, 0x0c, 0x31, 0x36, 0x0f); 538 - 539 - /* DDRAM */ 540 - mipi_dbi_command(dbi, ILI9341_ETMOD, ILI9341_DBI_EMS_GAS | 541 - ILI9341_DBI_EMS_DTS | 542 - ILI9341_DBI_EMS_GON); 543 - 544 - /* Display */ 545 - mipi_dbi_command(dbi, ILI9341_DFC, 0x08, 0x82, 0x27, 0x00); 546 - mipi_dbi_command(dbi, MIPI_DCS_EXIT_SLEEP_MODE); 547 - msleep(100); 548 - 549 - mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_ON); 550 - msleep(100); 551 - 552 - out_enable: 553 - switch (dbidev->rotation) { 554 - default: 555 - addr_mode = ILI9341_MADCTL_MX; 556 - break; 557 - case 90: 558 - addr_mode = ILI9341_MADCTL_MV; 559 - break; 560 - case 180: 561 - addr_mode = ILI9341_MADCTL_MY; 562 - break; 563 - case 270: 564 - addr_mode = ILI9341_MADCTL_MV | ILI9341_MADCTL_MY | 565 - ILI9341_MADCTL_MX; 566 - break; 567 - } 568 - 569 - addr_mode |= ILI9341_MADCTL_BGR; 570 - mipi_dbi_command(dbi, MIPI_DCS_SET_ADDRESS_MODE, addr_mode); 571 - mipi_dbi_enable_flush(dbidev, crtc_state, plane_state); 572 - drm_info(&dbidev->drm, "Initialized display serial interface\n"); 573 - out_exit: 574 - drm_dev_exit(idx); 575 - } 576 - 577 - static const struct drm_simple_display_pipe_funcs ili9341_dbi_funcs = { 578 - DRM_MIPI_DBI_SIMPLE_DISPLAY_PIPE_FUNCS(ili9341_dbi_enable), 579 - }; 580 - 581 - static const struct drm_display_mode ili9341_dbi_mode = { 582 - DRM_SIMPLE_MODE(240, 320, 37, 49), 583 - }; 584 - 585 - DEFINE_DRM_GEM_DMA_FOPS(ili9341_dbi_fops); 586 - 587 - static struct drm_driver ili9341_dbi_driver 
= { 588 - .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC, 589 - .fops = &ili9341_dbi_fops, 590 - DRM_GEM_DMA_DRIVER_OPS_VMAP, 591 - .debugfs_init = mipi_dbi_debugfs_init, 592 - .name = "ili9341", 593 - .desc = "Ilitek ILI9341", 594 - .date = "20210716", 595 - .major = 1, 596 - .minor = 0, 597 - }; 598 - 599 - static int ili9341_dbi_probe(struct spi_device *spi, struct gpio_desc *dc, 600 - struct gpio_desc *reset) 601 - { 602 - struct device *dev = &spi->dev; 603 - struct mipi_dbi_dev *dbidev; 604 - struct mipi_dbi *dbi; 605 - struct drm_device *drm; 606 - struct regulator *vcc; 607 - u32 rotation = 0; 608 - int ret; 609 - 610 - vcc = devm_regulator_get_optional(dev, "vcc"); 611 - if (IS_ERR(vcc)) { 612 - dev_err(dev, "get optional vcc failed\n"); 613 - vcc = NULL; 614 - } 615 - 616 - dbidev = devm_drm_dev_alloc(dev, &ili9341_dbi_driver, 617 - struct mipi_dbi_dev, drm); 618 - if (IS_ERR(dbidev)) 619 - return PTR_ERR(dbidev); 620 - 621 - dbi = &dbidev->dbi; 622 - drm = &dbidev->drm; 623 - dbi->reset = reset; 624 - dbidev->regulator = vcc; 625 - 626 - drm_mode_config_init(drm); 627 - 628 - dbidev->backlight = devm_of_find_backlight(dev); 629 - if (IS_ERR(dbidev->backlight)) 630 - return PTR_ERR(dbidev->backlight); 631 - 632 - device_property_read_u32(dev, "rotation", &rotation); 633 - 634 - ret = mipi_dbi_spi_init(spi, dbi, dc); 635 - if (ret) 636 - return ret; 637 - 638 - ret = mipi_dbi_dev_init(dbidev, &ili9341_dbi_funcs, 639 - &ili9341_dbi_mode, rotation); 640 - if (ret) 641 - return ret; 642 - 643 - drm_mode_config_reset(drm); 644 - 645 - ret = drm_dev_register(drm, 0); 646 - if (ret) 647 - return ret; 648 - 649 - spi_set_drvdata(spi, drm); 650 - 651 - drm_fbdev_dma_setup(drm, 0); 652 - 653 - return 0; 654 - } 655 - 656 489 static int ili9341_dpi_probe(struct spi_device *spi, struct gpio_desc *dc, 657 490 struct gpio_desc *reset) 658 491 { ··· 538 711 struct device *dev = &spi->dev; 539 712 struct gpio_desc *dc; 540 713 struct gpio_desc *reset; 541 
- const struct spi_device_id *id = spi_get_device_id(spi); 542 714 543 715 reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 544 716 if (IS_ERR(reset)) ··· 547 721 if (IS_ERR(dc)) 548 722 return dev_err_probe(dev, PTR_ERR(dc), "Failed to get gpio 'dc'\n"); 549 723 550 - if (!strcmp(id->name, "sf-tc240t-9370-t")) 551 - return ili9341_dpi_probe(spi, dc, reset); 552 - 553 - if (!strcmp(id->name, "yx240qv29")) 554 - return ili9341_dbi_probe(spi, dc, reset); 555 - 556 - return -ENODEV; 724 + return ili9341_dpi_probe(spi, dc, reset); 557 725 } 558 726 559 727 static void ili9341_remove(struct spi_device *spi) 560 728 { 561 - const struct spi_device_id *id = spi_get_device_id(spi); 562 729 struct ili9341 *ili = spi_get_drvdata(spi); 563 - struct drm_device *drm = spi_get_drvdata(spi); 564 730 565 - if (!strcmp(id->name, "sf-tc240t-9370-t")) { 566 - ili9341_dpi_power_off(ili); 567 - drm_panel_remove(&ili->panel); 568 - } else if (!strcmp(id->name, "yx240qv29")) { 569 - drm_dev_unplug(drm); 570 - drm_atomic_helper_shutdown(drm); 571 - } 572 - } 573 - 574 - static void ili9341_shutdown(struct spi_device *spi) 575 - { 576 - const struct spi_device_id *id = spi_get_device_id(spi); 577 - 578 - if (!strcmp(id->name, "yx240qv29")) 579 - drm_atomic_helper_shutdown(spi_get_drvdata(spi)); 731 + ili9341_dpi_power_off(ili); 732 + drm_panel_remove(&ili->panel); 580 733 } 581 734 582 735 static const struct of_device_id ili9341_of_match[] = { ··· 563 758 .compatible = "st,sf-tc240t-9370-t", 564 759 .data = &ili9341_stm32f429_disco_data, 565 760 }, 566 - { 567 - /* porting from tiny/ili9341.c 568 - * for original mipi dbi compitable 569 - */ 570 - .compatible = "adafruit,yx240qv29", 571 - .data = NULL, 572 - }, 573 761 { } 574 762 }; 575 763 MODULE_DEVICE_TABLE(of, ili9341_of_match); 576 764 577 765 static const struct spi_device_id ili9341_id[] = { 578 - { "yx240qv29", 0 }, 579 766 { "sf-tc240t-9370-t", 0 }, 580 767 { } 581 768 }; ··· 576 779 static struct spi_driver 
ili9341_driver = { 577 780 .probe = ili9341_probe, 578 781 .remove = ili9341_remove, 579 - .shutdown = ili9341_shutdown, 580 782 .id_table = ili9341_id, 581 783 .driver = { 582 784 .name = "panel-ilitek-ili9341",
+2 -2
drivers/gpu/drm/panel/panel-khadas-ts050.c
··· 617 617 {0xd4, {0x04}, 0x01}, /* RGBMIPICTRL: VSYNC front porch = 4 */ 618 618 }; 619 619 620 - struct khadas_ts050_panel_data ts050_panel_data = { 620 + static struct khadas_ts050_panel_data ts050_panel_data = { 621 621 .init_code = (struct khadas_ts050_panel_cmd *)ts050_init_code, 622 622 .len = ARRAY_SIZE(ts050_init_code) 623 623 }; 624 624 625 - struct khadas_ts050_panel_data ts050v2_panel_data = { 625 + static struct khadas_ts050_panel_data ts050v2_panel_data = { 626 626 .init_code = (struct khadas_ts050_panel_cmd *)ts050v2_init_code, 627 627 .len = ARRAY_SIZE(ts050v2_init_code) 628 628 };
+2 -14
drivers/gpu/drm/panel/panel-novatek-nt36523.c
··· 1095 1095 static void nt36523_remove(struct mipi_dsi_device *dsi) 1096 1096 { 1097 1097 struct panel_info *pinfo = mipi_dsi_get_drvdata(dsi); 1098 - int ret; 1099 - 1100 - ret = mipi_dsi_detach(pinfo->dsi[0]); 1101 - if (ret < 0) 1102 - dev_err(&dsi->dev, "failed to detach from DSI0 host: %d\n", ret); 1103 - 1104 - if (pinfo->desc->is_dual_dsi) { 1105 - ret = mipi_dsi_detach(pinfo->dsi[1]); 1106 - if (ret < 0) 1107 - dev_err(&pinfo->dsi[1]->dev, "failed to detach from DSI1 host: %d\n", ret); 1108 - mipi_dsi_device_unregister(pinfo->dsi[1]); 1109 - } 1110 1098 1111 1099 drm_panel_remove(&pinfo->panel); 1112 1100 } ··· 1239 1251 if (!dsi1_host) 1240 1252 return dev_err_probe(dev, -EPROBE_DEFER, "cannot get secondary DSI host\n"); 1241 1253 1242 - pinfo->dsi[1] = mipi_dsi_device_register_full(dsi1_host, info); 1254 + pinfo->dsi[1] = devm_mipi_dsi_device_register_full(dev, dsi1_host, info); 1243 1255 if (IS_ERR(pinfo->dsi[1])) { 1244 1256 dev_err(dev, "cannot get secondary DSI device\n"); 1245 1257 return PTR_ERR(pinfo->dsi[1]); ··· 1276 1288 pinfo->dsi[i]->format = pinfo->desc->format; 1277 1289 pinfo->dsi[i]->mode_flags = pinfo->desc->mode_flags; 1278 1290 1279 - ret = mipi_dsi_attach(pinfo->dsi[i]); 1291 + ret = devm_mipi_dsi_attach(dev, pinfo->dsi[i]); 1280 1292 if (ret < 0) 1281 1293 return dev_err_probe(dev, ret, "cannot attach to DSI%d host.\n", i); 1282 1294 }
+26 -61
drivers/gpu/drm/panel/panel-raydium-rm69380.c
··· 46 46 static int rm69380_on(struct rm69380_panel *ctx) 47 47 { 48 48 struct mipi_dsi_device *dsi = ctx->dsi[0]; 49 - struct device *dev = &dsi->dev; 50 - int ret; 49 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi }; 51 50 52 51 dsi->mode_flags |= MIPI_DSI_MODE_LPM; 53 52 if (ctx->dsi[1]) 54 53 ctx->dsi[1]->mode_flags |= MIPI_DSI_MODE_LPM; 55 54 56 - mipi_dsi_dcs_write_seq(dsi, 0xfe, 0xd4); 57 - mipi_dsi_dcs_write_seq(dsi, 0x00, 0x80); 58 - mipi_dsi_dcs_write_seq(dsi, 0xfe, 0xd0); 59 - mipi_dsi_dcs_write_seq(dsi, 0x48, 0x00); 60 - mipi_dsi_dcs_write_seq(dsi, 0xfe, 0x26); 61 - mipi_dsi_dcs_write_seq(dsi, 0x75, 0x3f); 62 - mipi_dsi_dcs_write_seq(dsi, 0x1d, 0x1a); 63 - mipi_dsi_dcs_write_seq(dsi, 0xfe, 0x00); 64 - mipi_dsi_dcs_write_seq(dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x28); 65 - mipi_dsi_dcs_write_seq(dsi, 0xc2, 0x08); 55 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xfe, 0xd4); 56 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x80); 57 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xfe, 0xd0); 58 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x48, 0x00); 59 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xfe, 0x26); 60 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x75, 0x3f); 61 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1d, 0x1a); 62 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xfe, 0x00); 63 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x28); 64 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc2, 0x08); 66 65 67 - ret = mipi_dsi_dcs_set_tear_on(dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 68 - if (ret < 0) { 69 - dev_err(dev, "Failed to set tear on: %d\n", ret); 70 - return ret; 71 - } 66 + mipi_dsi_dcs_set_tear_on_multi(&dsi_ctx, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 67 + mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx); 68 + mipi_dsi_msleep(&dsi_ctx, 20); 72 69 73 - ret = mipi_dsi_dcs_exit_sleep_mode(dsi); 74 - if (ret < 0) { 75 - dev_err(dev, "Failed to exit sleep mode: %d\n", ret); 76 - return ret; 77 - } 78 - msleep(20); 70 + mipi_dsi_dcs_set_display_on_multi(&dsi_ctx); 71 + mipi_dsi_msleep(&dsi_ctx, 36); 79 72 80 - ret = mipi_dsi_dcs_set_display_on(dsi); 81 - if (ret < 0) { 82 - dev_err(dev, "Failed to set display on: %d\n", ret); 83 - return ret; 84 - } 85 - msleep(36); 86 - 87 - return 0; 73 + return dsi_ctx.accum_err; 88 74 } 89 75 90 - static int rm69380_off(struct rm69380_panel *ctx) 76 + static void rm69380_off(struct rm69380_panel *ctx) 91 77 { 92 78 struct mipi_dsi_device *dsi = ctx->dsi[0]; 93 - struct device *dev = &dsi->dev; 94 - int ret; 79 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi }; 95 80 96 81 dsi->mode_flags &= ~MIPI_DSI_MODE_LPM; 97 82 if (ctx->dsi[1]) 98 83 ctx->dsi[1]->mode_flags &= ~MIPI_DSI_MODE_LPM; 99 84 100 - ret = mipi_dsi_dcs_set_display_off(dsi); 101 - if (ret < 0) { 102 - dev_err(dev, "Failed to set display off: %d\n", ret); 103 - return ret; 104 - } 105 - msleep(35); 106 - 107 - ret = mipi_dsi_dcs_enter_sleep_mode(dsi); 108 - if (ret < 0) { 109 - dev_err(dev, "Failed to enter sleep mode: %d\n", ret); 110 - return ret; 111 - } 112 - msleep(20); 113 - 114 - return 0; 85 + mipi_dsi_dcs_set_display_off_multi(&dsi_ctx); 86 + mipi_dsi_msleep(&dsi_ctx, 35); 87 + mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx); 88 + mipi_dsi_msleep(&dsi_ctx, 20); 115 89 } 116 90 117 91 static int rm69380_prepare(struct drm_panel *panel) 118 92 { 119 93 struct rm69380_panel *ctx = to_rm69380_panel(panel); 120 - struct device *dev = &ctx->dsi[0]->dev; 121 94 int ret; 122 95 123 96 ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 124 - if (ret < 0) { 125 - dev_err(dev, "Failed to enable regulators: %d\n", ret); 97 + if (ret < 0) 126 98 return ret; 127 - } 128 99 129 100 rm69380_reset(ctx); 130 101 131 102 ret = rm69380_on(ctx); 132 103 if (ret < 0) { 133 - dev_err(dev, "Failed to initialize panel: %d\n", ret); 134 104 gpiod_set_value_cansleep(ctx->reset_gpio, 1); 135 105 regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 136 - return ret; 137 106 } 138 107 139 - return 0; 108 + return ret; 140 109 } 141 110 142 111 static int rm69380_unprepare(struct drm_panel *panel) 143 112 { 144 113 struct rm69380_panel *ctx = to_rm69380_panel(panel); 145 - struct device *dev = &ctx->dsi[0]->dev; 146 - int ret; 147 114 148 - ret = rm69380_off(ctx); 149 - if (ret < 0) 150 - dev_err(dev, "Failed to un-initialize panel: %d\n", ret); 115 + rm69380_off(ctx); 151 116 152 117 gpiod_set_value_cansleep(ctx->reset_gpio, 1); 153 118 regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
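The rm69380 conversion (like the s6e3fa7 one below) replaces per-call `ret`/`dev_err` checks with `mipi_dsi_multi_context`, where each `*_multi()` helper becomes a no-op after the first failure and the caller inspects a single accumulated error at the end. A standalone sketch of that error-accumulation idiom (toy names, not the actual DRM API):

```c
#include <assert.h>

/* Toy version of the accumulated-error context idiom behind
 * mipi_dsi_multi_context: once accum_err is non-zero, later
 * operations are skipped, so call sites need no per-call checks. */

struct multi_ctx {
	int accum_err; /* first error seen, 0 if none */
	int ops_done;  /* how many operations actually ran */
};

static void op_multi(struct multi_ctx *ctx, int err)
{
	if (ctx->accum_err)
		return; /* an earlier call failed: skip silently */
	if (err) {
		ctx->accum_err = err;
		return;
	}
	ctx->ops_done++;
}
```

The payoff is visible in the diff: long ladders of `if (ret < 0) { dev_err(...); return ret; }` collapse into straight-line sequences ending in `return dsi_ctx.accum_err;`.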
+21 -50
drivers/gpu/drm/panel/panel-samsung-s6e3fa7.c
··· 38 38 usleep_range(10000, 11000); 39 39 } 40 40 41 - static int s6e3fa7_panel_on(struct s6e3fa7_panel *ctx) 41 + static int s6e3fa7_panel_on(struct mipi_dsi_device *dsi) 42 42 { 43 - struct mipi_dsi_device *dsi = ctx->dsi; 44 - struct device *dev = &dsi->dev; 45 - int ret; 43 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi }; 46 44 47 - ret = mipi_dsi_dcs_exit_sleep_mode(dsi); 48 - if (ret < 0) { 49 - dev_err(dev, "Failed to exit sleep mode: %d\n", ret); 50 - return ret; 51 - } 52 - msleep(120); 45 + mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx); 46 + mipi_dsi_msleep(&dsi_ctx, 120); 47 + mipi_dsi_dcs_set_tear_on_multi(&dsi_ctx, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 53 48 54 - ret = mipi_dsi_dcs_set_tear_on(dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 55 - if (ret < 0) { 56 - dev_err(dev, "Failed to set tear on: %d\n", ret); 57 - return ret; 58 - } 49 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf0, 0x5a, 0x5a); 50 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf4, 51 + 0xbb, 0x23, 0x19, 0x3a, 0x9f, 0x0f, 0x09, 0xc0, 52 + 0x00, 0xb4, 0x37, 0x70, 0x79, 0x69); 53 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf0, 0xa5, 0xa5); 54 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x20); 59 55 60 - mipi_dsi_dcs_write_seq(dsi, 0xf0, 0x5a, 0x5a); 61 - mipi_dsi_dcs_write_seq(dsi, 0xf4, 62 - 0xbb, 0x23, 0x19, 0x3a, 0x9f, 0x0f, 0x09, 0xc0, 63 - 0x00, 0xb4, 0x37, 0x70, 0x79, 0x69); 64 - mipi_dsi_dcs_write_seq(dsi, 0xf0, 0xa5, 0xa5); 65 - mipi_dsi_dcs_write_seq(dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x20); 56 + mipi_dsi_dcs_set_display_on_multi(&dsi_ctx); 66 57 67 - ret = mipi_dsi_dcs_set_display_on(dsi); 68 - if (ret < 0) { 69 - dev_err(dev, "Failed to set display on: %d\n", ret); 70 - return ret; 71 - } 72 - 73 - return 0; 58 + return dsi_ctx.accum_err; 74 59 } 75 60 76 61 static int s6e3fa7_panel_prepare(struct drm_panel *panel) 77 62 { 78 63 struct s6e3fa7_panel *ctx = to_s6e3fa7_panel(panel); 79 - struct device *dev = &ctx->dsi->dev; 80 64 int ret; 81 65 82 66 s6e3fa7_panel_reset(ctx); 83 67 84 - ret = s6e3fa7_panel_on(ctx); 85 - if (ret < 0) { 86 - dev_err(dev, "Failed to initialize panel: %d\n", ret); 68 + ret = s6e3fa7_panel_on(ctx->dsi); 69 + if (ret < 0) 87 70 gpiod_set_value_cansleep(ctx->reset_gpio, 1); 88 - return ret; 89 - } 90 71 91 - return 0; 72 + return ret; 92 73 } 93 74 94 75 static int s6e3fa7_panel_unprepare(struct drm_panel *panel) ··· 85 104 { 86 105 struct s6e3fa7_panel *ctx = to_s6e3fa7_panel(panel); 87 106 struct mipi_dsi_device *dsi = ctx->dsi; 88 - struct device *dev = &dsi->dev; 89 - int ret; 107 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi }; 90 108 91 - ret = mipi_dsi_dcs_set_display_off(dsi); 92 - if (ret < 0) { 93 - dev_err(dev, "Failed to set display off: %d\n", ret); 94 - return ret; 95 - } 109 + mipi_dsi_dcs_set_display_off_multi(&dsi_ctx); 110 + mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx); 111 + mipi_dsi_msleep(&dsi_ctx, 120); 96 112 97 - ret = mipi_dsi_dcs_enter_sleep_mode(dsi); 98 - if (ret < 0) { 99 - dev_err(dev, "Failed to enter sleep mode: %d\n", ret); 100 - return ret; 101 - } 102 - msleep(120); 103 - 104 - return 0; 113 + return dsi_ctx.accum_err; 105 114 } 106 115 107 116 static const struct drm_display_mode s6e3fa7_panel_mode = {
+1 -2
drivers/gpu/drm/panel/panel-sony-acx565akm.c
··· 562 562 lcd->enabled ? "enabled" : "disabled ", status); 563 563 564 564 acx565akm_read(lcd, MIPI_DCS_GET_DISPLAY_ID, lcd->display_id, 3); 565 - dev_dbg(&lcd->spi->dev, "MIPI display ID: %02x%02x%02x\n", 566 - lcd->display_id[0], lcd->display_id[1], lcd->display_id[2]); 565 + dev_dbg(&lcd->spi->dev, "MIPI display ID: %3phN\n", lcd->display_id); 567 566 568 567 switch (lcd->display_id[0]) { 569 568 case 0x10:
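The acx565akm change folds three `%02x` conversions into the kernel printk extension `%3phN`, which prints N bytes as contiguous hex with no separators. That specifier exists only in the kernel's vsprintf; a userspace stand-in for the same formatting might look like:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Userspace stand-in for printk's "%*phN": format len bytes from buf
 * as lowercase hex with no separators. Caller supplies out[2*len+1]. */
static void hex_no_sep(char *out, const unsigned char *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		sprintf(out + 2 * i, "%02x", buf[i]);
}
```

So a 3-byte display ID `{0x10, 0xab, 0x03}` renders as `10ab03`, matching what the old three-specifier format string produced.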
+43 -2
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 3 3 /* Copyright 2019 Linaro, Ltd., Rob Herring <robh@kernel.org> */ 4 4 /* Copyright 2019 Collabora ltd. */ 5 5 6 + #ifdef CONFIG_ARM_ARCH_TIMER 7 + #include <asm/arch_timer.h> 8 + #endif 9 + 6 10 #include <linux/module.h> 7 11 #include <linux/of.h> 8 12 #include <linux/pagemap.h> ··· 25 21 #include "panfrost_gpu.h" 26 22 #include "panfrost_perfcnt.h" 27 23 24 + #define JOB_REQUIREMENTS (PANFROST_JD_REQ_FS | PANFROST_JD_REQ_CYCLE_COUNT) 25 + 28 26 static bool unstable_ioctls; 29 27 module_param_unsafe(unstable_ioctls, bool, 0600); 28 + 29 + static int panfrost_ioctl_query_timestamp(struct panfrost_device *pfdev, 30 + u64 *arg) 31 + { 32 + int ret; 33 + 34 + ret = pm_runtime_resume_and_get(pfdev->dev); 35 + if (ret) 36 + return ret; 37 + 38 + panfrost_cycle_counter_get(pfdev); 39 + *arg = panfrost_timestamp_read(pfdev); 40 + panfrost_cycle_counter_put(pfdev); 41 + 42 + pm_runtime_put(pfdev->dev); 43 + return 0; 44 + } 30 45 31 46 static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct drm_file *file) 32 47 { 33 48 struct drm_panfrost_get_param *param = data; 34 49 struct panfrost_device *pfdev = ddev->dev_private; 50 + int ret; 35 51 36 52 if (param->pad != 0) 37 53 return -EINVAL; ··· 93 69 PANFROST_FEATURE_ARRAY(JS_FEATURES, js_features, 15); 94 70 PANFROST_FEATURE(NR_CORE_GROUPS, nr_core_groups); 95 71 PANFROST_FEATURE(THREAD_TLS_ALLOC, thread_tls_alloc); 72 + 73 + case DRM_PANFROST_PARAM_SYSTEM_TIMESTAMP: 74 + ret = panfrost_ioctl_query_timestamp(pfdev, &param->value); 75 + if (ret) 76 + return ret; 77 + break; 78 + 79 + case DRM_PANFROST_PARAM_SYSTEM_TIMESTAMP_FREQUENCY: 80 + #ifdef CONFIG_ARM_ARCH_TIMER 81 + param->value = arch_timer_get_cntfrq(); 82 + #else 83 + param->value = 0; 84 + #endif 85 + break; 86 + 96 87 default: 97 88 return -EINVAL; 98 89 } ··· 284 245 if (!args->jc) 285 246 return -EINVAL; 286 247 287 - if (args->requirements && args->requirements != PANFROST_JD_REQ_FS) 248 + if (args->requirements & ~JOB_REQUIREMENTS) 288 249 return -EINVAL; 289 250 290 251 if (args->out_sync > 0) { ··· 623 584 * - 1.0 - initial interface 624 585 * - 1.1 - adds HEAP and NOEXEC flags for CREATE_BO 625 586 * - 1.2 - adds AFBC_FEATURES query 587 + * - 1.3 - adds JD_REQ_CYCLE_COUNT job requirement for SUBMIT 588 + * - adds SYSTEM_TIMESTAMP and SYSTEM_TIMESTAMP_FREQUENCY queries 626 589 */ 627 590 static const struct drm_driver panfrost_drm_driver = { 628 591 .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ, ··· 638 597 .desc = "panfrost DRM", 639 598 .date = "20180908", 640 599 .major = 1, 641 - .minor = 2, 600 + .minor = 3, 642 601 643 602 .gem_create_object = panfrost_gem_create_object, 644 603 .gem_prime_import_sg_table = panfrost_gem_prime_import_sg_table,
+12
drivers/gpu/drm/panfrost/panfrost_gpu.c
··· 380 380 return ((u64)hi << 32) | lo; 381 381 } 382 382 383 + unsigned long long panfrost_timestamp_read(struct panfrost_device *pfdev) 384 + { 385 + u32 hi, lo; 386 + 387 + do { 388 + hi = gpu_read(pfdev, GPU_TIMESTAMP_HI); 389 + lo = gpu_read(pfdev, GPU_TIMESTAMP_LO); 390 + } while (hi != gpu_read(pfdev, GPU_TIMESTAMP_HI)); 391 + 392 + return ((u64)hi << 32) | lo; 393 + } 394 + 383 395 static u64 panfrost_get_core_mask(struct panfrost_device *pfdev) 384 396 { 385 397 u64 core_mask;
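`panfrost_timestamp_read()` (and the matching panthor helper below) uses the standard trick for sampling a free-running 64-bit counter exposed as two 32-bit registers: read HI, then LO, then re-read HI and retry if it changed, which guards against LO wrapping between the two reads. The same loop against a simulated register pair (the simulation is mine, only the loop shape comes from the diff):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 64-bit counter exposed as two 32-bit registers. Each
 * read advances the counter by `step` to model time passing between
 * register accesses, which is exactly when a torn read can happen. */
static uint64_t counter;
static uint64_t step;

static uint32_t gpu_read_lo(void)
{
	uint32_t v = (uint32_t)counter;

	counter += step;
	return v;
}

static uint32_t gpu_read_hi(void)
{
	uint32_t v = (uint32_t)(counter >> 32);

	counter += step;
	return v;
}

/* Same shape as panfrost_timestamp_read(): retry until HI is stable,
 * so HI and LO are guaranteed to come from one consistent snapshot. */
static uint64_t read_counter64(void)
{
	uint32_t hi, lo;

	do {
		hi = gpu_read_hi();
		lo = gpu_read_lo();
	} while (hi != gpu_read_hi());

	return ((uint64_t)hi << 32) | lo;
}
```

Without the retry, a reader that samples HI just before LO wraps (e.g. counter at 0xffffffff) would combine the old HI with the wrapped LO and be off by 2^32.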
+1
drivers/gpu/drm/panfrost/panfrost_gpu.h
··· 20 20 void panfrost_cycle_counter_get(struct panfrost_device *pfdev); 21 21 void panfrost_cycle_counter_put(struct panfrost_device *pfdev); 22 22 unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev); 23 + unsigned long long panfrost_timestamp_read(struct panfrost_device *pfdev); 23 24 24 25 void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev); 25 26
+18 -12
drivers/gpu/drm/panfrost/panfrost_job.c
··· 159 159 struct panfrost_job *job = pfdev->jobs[slot][0]; 160 160 161 161 WARN_ON(!job); 162 - if (job->is_profiled) { 163 - if (job->engine_usage) { 164 - job->engine_usage->elapsed_ns[slot] += 165 - ktime_to_ns(ktime_sub(ktime_get(), job->start_time)); 166 - job->engine_usage->cycles[slot] += 167 - panfrost_cycle_counter_read(pfdev) - job->start_cycles; 168 - } 169 - panfrost_cycle_counter_put(job->pfdev); 162 + 163 + if (job->is_profiled && job->engine_usage) { 164 + job->engine_usage->elapsed_ns[slot] += 165 + ktime_to_ns(ktime_sub(ktime_get(), job->start_time)); 166 + job->engine_usage->cycles[slot] += 167 + panfrost_cycle_counter_read(pfdev) - job->start_cycles; 170 168 } 169 + 170 + if (job->requirements & PANFROST_JD_REQ_CYCLE_COUNT || job->is_profiled) 171 + panfrost_cycle_counter_put(pfdev); 171 172 172 173 pfdev->jobs[slot][0] = pfdev->jobs[slot][1]; 173 174 pfdev->jobs[slot][1] = NULL; ··· 244 243 subslot = panfrost_enqueue_job(pfdev, js, job); 245 244 /* Don't queue the job if a reset is in progress */ 246 245 if (!atomic_read(&pfdev->reset.pending)) { 247 - if (pfdev->profile_mode) { 246 + job->is_profiled = pfdev->profile_mode; 247 + 248 + if (job->requirements & PANFROST_JD_REQ_CYCLE_COUNT || 249 + job->is_profiled) 248 250 panfrost_cycle_counter_get(pfdev); 249 - job->is_profiled = true; 251 + 252 + if (job->is_profiled) { 250 253 job->start_time = ktime_get(); 251 254 job->start_cycles = panfrost_cycle_counter_read(pfdev); 252 255 } ··· 698 693 spin_lock(&pfdev->js->job_lock); 699 694 for (i = 0; i < NUM_JOB_SLOTS; i++) { 700 695 for (j = 0; j < ARRAY_SIZE(pfdev->jobs[0]) && pfdev->jobs[i][j]; j++) { 701 - if (pfdev->jobs[i][j]->is_profiled) 696 + if (pfdev->jobs[i][j]->requirements & PANFROST_JD_REQ_CYCLE_COUNT || 697 + pfdev->jobs[i][j]->is_profiled) 702 698 panfrost_cycle_counter_put(pfdev->jobs[i][j]->pfdev); 703 699 pm_runtime_put_noidle(pfdev->dev); 704 700 panfrost_devfreq_record_idle(&pfdev->pfdevfreq); ··· 733 727 734 728 /* Restart the schedulers */ 735 729 for (i = 0; i < NUM_JOB_SLOTS; i++) 736 - drm_sched_start(&pfdev->js->queue[i].sched, 0); 737 731 738 732 /* Re-enable job interrupts now that everything has been restarted. */ 739 733 job_write(pfdev, JOB_INT_MASK,
+2
drivers/gpu/drm/panfrost/panfrost_regs.h
··· 78 78 79 79 #define GPU_CYCLE_COUNT_LO 0x90 80 80 #define GPU_CYCLE_COUNT_HI 0x94 81 + #define GPU_TIMESTAMP_LO 0x98 82 + #define GPU_TIMESTAMP_HI 0x9C 81 83 82 84 #define GPU_THREAD_MAX_THREADS 0x0A0 /* (RO) Maximum number of threads per core */ 83 85 #define GPU_THREAD_MAX_WORKGROUP_SIZE 0x0A4 /* (RO) Maximum workgroup size */
+42 -1
drivers/gpu/drm/panthor/panthor_drv.c
··· 3 3 /* Copyright 2019 Linaro, Ltd., Rob Herring <robh@kernel.org> */ 4 4 /* Copyright 2019 Collabora ltd. */ 5 5 6 + #ifdef CONFIG_ARM_ARCH_TIMER 7 + #include <asm/arch_timer.h> 8 + #endif 9 + 6 10 #include <linux/list.h> 7 11 #include <linux/module.h> 8 12 #include <linux/of_platform.h> ··· 169 165 _Generic(_obj_name, \ 170 166 PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \ 171 167 PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \ 168 + PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \ 172 169 PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \ 173 170 PANTHOR_UOBJ_DECL(struct drm_panthor_queue_submit, syncs), \ 174 171 PANTHOR_UOBJ_DECL(struct drm_panthor_queue_create, ringbuf_size), \ ··· 756 751 kvfree(ctx->jobs); 757 752 } 758 753 754 + static int panthor_query_timestamp_info(struct panthor_device *ptdev, 755 + struct drm_panthor_timestamp_info *arg) 756 + { 757 + int ret; 758 + 759 + ret = pm_runtime_resume_and_get(ptdev->base.dev); 760 + if (ret) 761 + return ret; 762 + 763 + #ifdef CONFIG_ARM_ARCH_TIMER 764 + arg->timestamp_frequency = arch_timer_get_cntfrq(); 765 + #else 766 + arg->timestamp_frequency = 0; 767 + #endif 768 + arg->current_timestamp = panthor_gpu_read_timestamp(ptdev); 769 + arg->timestamp_offset = panthor_gpu_read_timestamp_offset(ptdev); 770 + 771 + pm_runtime_put(ptdev->base.dev); 772 + return 0; 773 + } 774 + 759 775 static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct drm_file *file) 760 776 { 761 777 struct panthor_device *ptdev = container_of(ddev, struct panthor_device, base); 762 778 struct drm_panthor_dev_query *args = data; 779 + struct drm_panthor_timestamp_info timestamp_info; 780 + int ret; 763 781 764 782 if (!args->pointer) { 765 783 switch (args->type) { ··· 792 764 793 765 case DRM_PANTHOR_DEV_QUERY_CSIF_INFO: 794 766 args->size = sizeof(ptdev->csif_info); 767 + return 0; 768 + 769 + case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: 770 + args->size = sizeof(timestamp_info); 795 771 return 0; 796 772 797 773 default: ··· 809 777 810 778 case DRM_PANTHOR_DEV_QUERY_CSIF_INFO: 811 779 return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->csif_info); 780 + 781 + case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: 782 + ret = panthor_query_timestamp_info(ptdev, &timestamp_info); 783 + 784 + if (ret) 785 + return ret; 786 + 787 + return PANTHOR_UOBJ_SET(args->pointer, args->size, timestamp_info); 812 788 813 789 default: 814 790 return -EINVAL; ··· 1436 1396 /* 1437 1397 * PanCSF driver version: 1438 1398 * - 1.0 - initial interface 1399 + * - 1.1 - adds DEV_QUERY_TIMESTAMP_INFO query 1439 1400 */ 1440 1401 static const struct drm_driver panthor_drm_driver = { 1441 1402 .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ | ··· 1450 1409 .desc = "Panthor DRM driver", 1451 1410 .date = "20230801", 1452 1411 .major = 1, 1453 - .minor = 0, 1412 + .minor = 1, 1454 1413 1455 1414 .gem_create_object = panthor_gem_create_object, 1456 1415 .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
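`panthor_ioctl_dev_query()` keeps the common two-call uAPI shape for the new TIMESTAMP_INFO query: calling with a NULL pointer only reports the struct size, so userspace can allocate and call again. A simplified sketch of that handshake (plain `memcpy` standing in for the copy-to-user helper, and without the size-mismatch zero-extension the real macro does):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Two-phase query: pointer == NULL -> report required size only;
 * otherwise copy the struct out if the caller's buffer is big enough.
 * Simplified model of the DEV_QUERY size-discovery pattern. */

struct timestamp_info {
	unsigned long long frequency;
	unsigned long long timestamp;
};

static int dev_query(void *pointer, size_t *size,
		     const struct timestamp_info *info)
{
	if (!pointer) {
		*size = sizeof(*info); /* phase 1: size discovery */
		return 0;
	}
	if (*size < sizeof(*info))
		return -1; /* caller's buffer too small */
	memcpy(pointer, info, sizeof(*info)); /* phase 2: copy out */
	return 0;
}
```

The size-discovery phase is what lets the uAPI grow structs in later driver versions without breaking old userspace.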
+56 -1
drivers/gpu/drm/panthor/panthor_fw.c
··· 78 78 79 79 /** @CSF_FW_BINARY_ENTRY_TYPE_TIMELINE_METADATA: Timeline metadata interface. */ 80 80 CSF_FW_BINARY_ENTRY_TYPE_TIMELINE_METADATA = 4, 81 + 82 + /** 83 + * @CSF_FW_BINARY_ENTRY_TYPE_BUILD_INFO_METADATA: Metadata about how 84 + * the FW binary was built. 85 + */ 86 + CSF_FW_BINARY_ENTRY_TYPE_BUILD_INFO_METADATA = 6 81 87 }; 82 88 83 89 #define CSF_FW_BINARY_ENTRY_TYPE(ehdr) ((ehdr) & 0xff) ··· 136 130 /** @end: End offset in the FW binary. */ 137 131 u32 end; 138 132 } data; 133 + }; 134 + 135 + struct panthor_fw_build_info_hdr { 136 + /** @meta_start: Offset of the build info data in the FW binary */ 137 + u32 meta_start; 138 + /** @meta_size: Size of the build info data in the FW binary */ 139 + u32 meta_size; 139 140 }; 140 141 141 142 /** ··· 641 628 return 0; 642 629 } 643 630 631 + static int panthor_fw_read_build_info(struct panthor_device *ptdev, 632 + const struct firmware *fw, 633 + struct panthor_fw_binary_iter *iter, 634 + u32 ehdr) 635 + { 636 + struct panthor_fw_build_info_hdr hdr; 637 + char header[9]; 638 + const char git_sha_header[sizeof(header)] = "git_sha: "; 639 + int ret; 640 + 641 + ret = panthor_fw_binary_iter_read(ptdev, iter, &hdr, sizeof(hdr)); 642 + if (ret) 643 + return ret; 644 + 645 + if (hdr.meta_start > fw->size || 646 + hdr.meta_start + hdr.meta_size > fw->size) { 647 + drm_err(&ptdev->base, "Firmware build info corrupt\n"); 648 + /* We don't need the build info, so continue */ 649 + return 0; 650 + } 651 + 652 + if (memcmp(git_sha_header, fw->data + hdr.meta_start, 653 + sizeof(git_sha_header))) { 654 + /* Not the expected header, this isn't metadata we understand */ 655 + return 0; 656 + } 657 + 658 + /* Check that the git SHA is NULL terminated as expected */ 659 + if (fw->data[hdr.meta_start + hdr.meta_size - 1] != '\0') { 660 + drm_warn(&ptdev->base, "Firmware's git sha is not NULL terminated\n"); 661 + /* Don't treat as fatal */ 662 + return 0; 663 + } 664 + 665 + drm_info(&ptdev->base, "Firmware git sha: %s\n", 666 + fw->data + hdr.meta_start + sizeof(git_sha_header)); 667 + 668 + return 0; 669 + } 670 + 644 671 static void 645 672 panthor_reload_fw_sections(struct panthor_device *ptdev, bool full_reload) 646 673 { ··· 725 672 switch (CSF_FW_BINARY_ENTRY_TYPE(ehdr)) { 726 673 case CSF_FW_BINARY_ENTRY_TYPE_IFACE: 727 674 return panthor_fw_load_section_entry(ptdev, fw, &eiter, ehdr); 675 + case CSF_FW_BINARY_ENTRY_TYPE_BUILD_INFO_METADATA: 676 + return panthor_fw_read_build_info(ptdev, fw, &eiter, ehdr); 728 677 729 678 /* FIXME: handle those entry types? */ 730 679 case CSF_FW_BINARY_ENTRY_TYPE_CONFIG: ··· 976 921 return ret; 977 922 } 978 923 979 - drm_info(&ptdev->base, "CSF FW v%d.%d.%d, Features %#x Instrumentation features %#x", 924 + drm_info(&ptdev->base, "CSF FW using interface v%d.%d.%d, Features %#x Instrumentation features %#x", 980 925 CSF_IFACE_VERSION_MAJOR(glb_iface->control->version), 981 926 CSF_IFACE_VERSION_MINOR(glb_iface->control->version), 982 927 CSF_IFACE_VERSION_PATCH(glb_iface->control->version),
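`panthor_fw_read_build_info()` validates the header offsets against the firmware blob size and insists the embedded string is NUL-terminated before logging it. The same validation steps in isolation (hypothetical helper, not the kernel function; the range check is written in the rearranged `size > blob_size - start` form, which avoids the integer overflow a naive `start + size > blob_size` can hit with attacker-controlled values):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Return a pointer to a string region inside an untrusted blob, or
 * NULL if the region is out of bounds or not NUL-terminated. */
static const char *get_fw_string(const unsigned char *blob, size_t blob_size,
				 size_t start, size_t size)
{
	if (start > blob_size || size > blob_size - start)
		return NULL; /* region escapes the blob */
	if (size == 0 || blob[start + size - 1] != '\0')
		return NULL; /* not NUL-terminated, unsafe to print */
	return (const char *)blob + start;
}
```

Both failure modes are treated as "ignore the metadata" rather than fatal errors, mirroring the diff's decision to keep loading firmware whose build info is merely malformed.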
+47
drivers/gpu/drm/panthor/panthor_gpu.c
··· 480 480 panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK); 481 481 panthor_gpu_l2_power_on(ptdev); 482 482 } 483 + 484 + /** 485 + * panthor_gpu_read_64bit_counter() - Read a 64-bit counter at a given offset. 486 + * @ptdev: Device. 487 + * @reg: The offset of the register to read. 488 + * 489 + * Return: The counter value. 490 + */ 491 + static u64 492 + panthor_gpu_read_64bit_counter(struct panthor_device *ptdev, u32 reg) 493 + { 494 + u32 hi, lo; 495 + 496 + do { 497 + hi = gpu_read(ptdev, reg + 0x4); 498 + lo = gpu_read(ptdev, reg); 499 + } while (hi != gpu_read(ptdev, reg + 0x4)); 500 + 501 + return ((u64)hi << 32) | lo; 502 + } 503 + 504 + /** 505 + * panthor_gpu_read_timestamp() - Read the timestamp register. 506 + * @ptdev: Device. 507 + * 508 + * Return: The GPU timestamp value. 509 + */ 510 + u64 panthor_gpu_read_timestamp(struct panthor_device *ptdev) 511 + { 512 + return panthor_gpu_read_64bit_counter(ptdev, GPU_TIMESTAMP_LO); 513 + } 514 + 515 + /** 516 + * panthor_gpu_read_timestamp_offset() - Read the timestamp offset register. 517 + * @ptdev: Device. 518 + * 519 + * Return: The GPU timestamp offset value. 520 + */ 521 + u64 panthor_gpu_read_timestamp_offset(struct panthor_device *ptdev) 522 + { 523 + u32 hi, lo; 524 + 525 + hi = gpu_read(ptdev, GPU_TIMESTAMP_OFFSET_HI); 526 + lo = gpu_read(ptdev, GPU_TIMESTAMP_OFFSET_LO); 527 + 528 + return ((u64)hi << 32) | lo; 529 + }
+4
drivers/gpu/drm/panthor/panthor_gpu.h
··· 5 5 #ifndef __PANTHOR_GPU_H__ 6 6 #define __PANTHOR_GPU_H__ 7 7 8 + #include <linux/types.h> 9 + 8 10 struct panthor_device; 9 11 10 12 int panthor_gpu_init(struct panthor_device *ptdev); ··· 50 48 int panthor_gpu_flush_caches(struct panthor_device *ptdev, 51 49 u32 l2, u32 lsc, u32 other); 52 50 int panthor_gpu_soft_reset(struct panthor_device *ptdev); 51 + u64 panthor_gpu_read_timestamp(struct panthor_device *ptdev); 52 + u64 panthor_gpu_read_timestamp_offset(struct panthor_device *ptdev); 53 53 54 54 #endif
+3 -3
drivers/gpu/drm/panthor/panthor_mmu.c
··· 833 833 834 834 static void panthor_vm_start(struct panthor_vm *vm) 835 835 { 836 - drm_sched_start(&vm->sched); 836 + drm_sched_start(&vm->sched, 0); 837 837 } 838 838 839 839 /** ··· 2716 2716 * which passes iova as an unsigned long. Patch the mmu_features to reflect this 2717 2717 * limitation. 2718 2718 */ 2719 - if (sizeof(unsigned long) * 8 < va_bits) { 2719 + if (va_bits > BITS_PER_LONG) { 2720 2720 ptdev->gpu_info.mmu_features &= ~GENMASK(7, 0); 2721 - ptdev->gpu_info.mmu_features |= sizeof(unsigned long) * 8; 2721 + ptdev->gpu_info.mmu_features |= BITS_PER_LONG; 2722 2722 } 2723 2723 2724 2724 return drmm_add_action_or_reset(&ptdev->base, panthor_mmu_release_wq, mmu->vm.wq);
+1 -1
drivers/gpu/drm/panthor/panthor_sched.c
··· 2545 2545 list_for_each_entry(job, &queue->scheduler.pending_list, base.list) 2546 2546 job->base.s_fence->parent = dma_fence_get(job->done_fence); 2547 2547 2548 - drm_sched_start(&queue->scheduler); 2548 + drm_sched_start(&queue->scheduler, 0); 2549 2549 } 2550 2550 2551 2551 static void panthor_group_stop(struct panthor_group *group)
+1 -1
drivers/gpu/drm/rockchip/cdn-dp-reg.h
··· 77 77 #define SOURCE_PIF_PKT_ALLOC_WR_EN 0x30830 78 78 #define SOURCE_PIF_SW_RESET 0x30834 79 79 80 - /* bellow registers need access by mailbox */ 80 + /* below registers need access by mailbox */ 81 81 /* source car addr */ 82 82 #define SOURCE_HDTX_CAR 0x0900 83 83 #define SOURCE_DPTX_CAR 0x0904
+75 -87
drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
··· 76 76 struct rockchip_encoder encoder; 77 77 const struct rockchip_hdmi_chip_data *chip_data; 78 78 const struct dw_hdmi_plat_data *plat_data; 79 + struct clk *hdmiphy_clk; 79 80 struct clk *ref_clk; 80 81 struct clk *grf_clk; 81 82 struct dw_hdmi *hdmi; ··· 92 91 93 92 static const struct dw_hdmi_mpll_config rockchip_mpll_cfg[] = { 94 93 { 95 - 27000000, { 96 - { 0x00b3, 0x0000}, 97 - { 0x2153, 0x0000}, 98 - { 0x40f3, 0x0000} 94 + 30666000, { 95 + { 0x00b3, 0x0000 }, 96 + { 0x2153, 0x0000 }, 97 + { 0x40f3, 0x0000 }, 99 98 }, 100 99 }, { 101 - 36000000, { 102 - { 0x00b3, 0x0000}, 103 - { 0x2153, 0x0000}, 104 - { 0x40f3, 0x0000} 100 + 36800000, { 101 + { 0x00b3, 0x0000 }, 102 + { 0x2153, 0x0000 }, 103 + { 0x40a2, 0x0001 }, 105 104 }, 106 105 }, { 107 - 40000000, { 108 - { 0x00b3, 0x0000}, 109 - { 0x2153, 0x0000}, 110 - { 0x40f3, 0x0000} 106 + 46000000, { 107 + { 0x00b3, 0x0000 }, 108 + { 0x2142, 0x0001 }, 109 + { 0x40a2, 0x0001 }, 111 110 }, 112 111 }, { 113 - 54000000, { 114 - { 0x0072, 0x0001}, 115 - { 0x2142, 0x0001}, 116 - { 0x40a2, 0x0001}, 112 + 61333000, { 113 + { 0x0072, 0x0001 }, 114 + { 0x2142, 0x0001 }, 115 + { 0x40a2, 0x0001 }, 117 116 }, 118 117 }, { 119 - 65000000, { 120 - { 0x0072, 0x0001}, 121 - { 0x2142, 0x0001}, 122 - { 0x40a2, 0x0001}, 118 + 73600000, { 119 + { 0x0072, 0x0001 }, 120 + { 0x2142, 0x0001 }, 121 + { 0x4061, 0x0002 }, 123 122 }, 124 123 }, { 125 - 66000000, { 126 - { 0x013e, 0x0003}, 127 - { 0x217e, 0x0002}, 128 - { 0x4061, 0x0002} 124 + 92000000, { 125 + { 0x0072, 0x0001 }, 126 + { 0x2145, 0x0002 }, 127 + { 0x4061, 0x0002 }, 129 128 }, 130 129 }, { 131 - 74250000, { 132 - { 0x0072, 0x0001}, 133 - { 0x2145, 0x0002}, 134 - { 0x4061, 0x0002} 130 + 122666000, { 131 + { 0x0051, 0x0002 }, 132 + { 0x2145, 0x0002 }, 133 + { 0x4061, 0x0002 }, 135 134 }, 136 135 }, { 137 - 83500000, { 138 - { 0x0072, 0x0001}, 136 + 147200000, { 137 + { 0x0051, 0x0002 }, 138 + { 0x2145, 0x0002 }, 139 + { 0x4064, 0x0003 }, 139 140 }, 140 141 }, { 141 - 108000000, { 142 - { 0x0051, 0x0002}, 143 - { 0x2145, 0x0002}, 144 - { 0x4061, 0x0002} 142 + 184000000, { 143 + { 0x0051, 0x0002 }, 144 + { 0x214c, 0x0003 }, 145 + { 0x4064, 0x0003 }, 145 146 }, 146 147 }, { 147 - 106500000, { 148 - { 0x0051, 0x0002}, 149 - { 0x2145, 0x0002}, 150 - { 0x4061, 0x0002} 148 + 226666000, { 149 + { 0x0040, 0x0003 }, 150 + { 0x214c, 0x0003 }, 151 + { 0x4064, 0x0003 }, 151 152 }, 152 153 }, { 153 - 146250000, { 154 - { 0x0051, 0x0002}, 155 - { 0x2145, 0x0002}, 156 - { 0x4061, 0x0002} 157 - }, 158 - }, { 159 - 148500000, { 160 - { 0x0051, 0x0003}, 161 - { 0x214c, 0x0003}, 162 - { 0x4064, 0x0003} 154 + 272000000, { 155 + { 0x0040, 0x0003 }, 156 + { 0x214c, 0x0003 }, 157 + { 0x5a64, 0x0003 }, 163 158 }, 164 159 }, { 165 160 340000000, { ··· 164 167 { 0x5a64, 0x0003 }, 165 168 }, 166 169 }, { 170 + 600000000, { 171 + { 0x1a40, 0x0003 }, 172 + { 0x3b4c, 0x0003 }, 173 + { 0x5a64, 0x0003 }, 174 + }, 175 + }, { 167 176 ~0UL, { 168 - { 0x00a0, 0x000a }, 169 - { 0x2001, 0x000f }, 170 - { 0x4002, 0x000f }, 177 + { 0x0000, 0x0000 }, 178 + { 0x0000, 0x0000 }, 179 + { 0x0000, 0x0000 }, 171 180 }, 172 181 } 173 182 }; ··· 181 178 static const struct dw_hdmi_curr_ctrl rockchip_cur_ctr[] = { 182 179 /* pixelclk bpp8 bpp10 bpp12 */ 183 180 { 184 - 40000000, { 0x0018, 0x0018, 0x0018 }, 185 - }, { 186 - 65000000, { 0x0028, 0x0028, 0x0028 }, 187 - }, { 188 - 66000000, { 0x0038, 0x0038, 0x0038 }, 189 - }, { 190 - 74250000, { 0x0028, 0x0038, 0x0038 }, 191 - }, { 192 - 83500000, { 0x0028, 0x0038, 0x0038 }, 193 - }, { 194 - 146250000, { 0x0038, 0x0038, 0x0038 }, 195 - }, { 196 - 148500000, { 0x0000, 0x0038, 0x0038 }, 197 - }, { 198 181 600000000, { 0x0000, 0x0000, 0x0000 }, 199 182 }, { 200 - ~0UL, { 0x0000, 0x0000, 0x0000}, 183 + ~0UL, { 0x0000, 0x0000, 0x0000 }, 201 184 } 202 185 }; 203 186 204 187 static const struct dw_hdmi_phy_config rockchip_phy_config[] = { 205 188 /*pixelclk symbol term vlev*/ 206 189 { 74250000, 0x8009, 0x0004, 0x0272}, 207 - { 148500000, 0x802b, 0x0004, 0x028d}, 190 + { 165000000, 0x802b, 0x0004, 0x0209}, 208 191 { 297000000, 0x8039, 0x0005, 0x028d}, 192 + { 594000000, 0x8039, 0x0000, 0x019d}, 209 193 { ~0UL, 0x0000, 0x0000, 0x0000} 210 194 }; ··· 241 251 const struct drm_display_mode *mode) 242 252 { 243 253 struct rockchip_hdmi *hdmi = data; 244 - const struct dw_hdmi_mpll_config *mpll_cfg = rockchip_mpll_cfg; 245 254 int pclk = mode->clock * 1000; 246 - bool exact_match = hdmi->plat_data->phy_force_vendor; 247 - int i; 248 255 249 256 if (hdmi->chip_data->max_tmds_clock && 250 257 mode->clock > hdmi->chip_data->max_tmds_clock) ··· 250 263 if (hdmi->ref_clk) { 251 264 int rpclk = clk_round_rate(hdmi->ref_clk, pclk); 252 265 253 - if (abs(rpclk - pclk) > pclk / 1000) 266 + if (rpclk < 0 || abs(rpclk - pclk) > pclk / 1000) 254 267 return MODE_NOCLOCK; 255 268 } 256 269 257 - for (i = 0; mpll_cfg[i].mpixelclock != (~0UL); i++) { 258 - /* 259 - * For vendor specific phys force an exact match of the pixelclock 260 - * to preserve the original behaviour of the driver. 261 - */ 262 - if (exact_match && pclk == mpll_cfg[i].mpixelclock) 263 - return MODE_OK; 264 - /* 265 - * The Synopsys phy can work with pixelclocks up to the value given 266 - * in the corresponding mpll_cfg entry. 
267 - */ 268 - if (!exact_match && pclk <= mpll_cfg[i].mpixelclock) 269 - return MODE_OK; 270 + if (hdmi->hdmiphy_clk) { 271 + int rpclk = clk_round_rate(hdmi->hdmiphy_clk, pclk); 272 + 273 + if (rpclk < 0 || abs(rpclk - pclk) > pclk / 1000) 274 + return MODE_NOCLOCK; 270 275 } 271 276 272 - return MODE_BAD; 277 + return MODE_OK; 273 278 } 274 279 275 280 static void dw_hdmi_rockchip_encoder_disable(struct drm_encoder *encoder) ··· 481 502 .lcdsel_grf_reg = RK3399_GRF_SOC_CON20, 482 503 .lcdsel_big = HIWORD_UPDATE(0, RK3399_HDMI_LCDC_SEL), 483 504 .lcdsel_lit = HIWORD_UPDATE(RK3399_HDMI_LCDC_SEL, RK3399_HDMI_LCDC_SEL), 484 - .max_tmds_clock = 340000, 505 + .max_tmds_clock = 594000, 485 506 }; 486 507 487 508 static const struct dw_hdmi_plat_data rk3399_hdmi_drv_data = { ··· 495 516 496 517 static struct rockchip_hdmi_chip_data rk3568_chip_data = { 497 518 .lcdsel_grf_reg = -1, 498 - .max_tmds_clock = 340000, 519 + .max_tmds_clock = 594000, 499 520 }; 500 521 501 522 static const struct dw_hdmi_plat_data rk3568_hdmi_drv_data = { ··· 584 605 if (ret != -EPROBE_DEFER) 585 606 drm_err(hdmi, "failed to get phy\n"); 586 607 return ret; 608 + } 609 + 610 + if (hdmi->phy) { 611 + struct of_phandle_args clkspec; 612 + 613 + clkspec.np = hdmi->phy->dev.of_node; 614 + hdmi->hdmiphy_clk = of_clk_get_from_provider(&clkspec); 615 + if (IS_ERR(hdmi->hdmiphy_clk)) 616 + hdmi->hdmiphy_clk = NULL; 587 617 } 588 618 589 619 if (hdmi->chip_data == &rk3568_chip_data) {
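The reworked `mode_valid` path stops scanning the mpll table and instead asks each clock, via `clk_round_rate()`, whether it can hit the requested pixel clock within 0.1% (`pclk / 1000`), rejecting the mode otherwise. That tolerance test in isolation (helper name is mine):

```c
#include <assert.h>
#include <stdlib.h>

/* Mirrors the mode_valid clock check: a negative rounded rate means
 * the clock can't express the request at all; otherwise require the
 * achievable rate to be within 0.1% (pclk / 1000) of the target. */
static int clock_rate_ok(long rounded, long pclk)
{
	if (rounded < 0)
		return 0;
	return labs(rounded - pclk) <= pclk / 1000;
}
```

The added `rpclk < 0` guard matters because `clk_round_rate()` can return a negative errno; without it, `abs(rpclk - pclk)` would compare garbage against the tolerance.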
+23
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 358 358 device_link_del(link); 359 359 } 360 360 361 + /* list of preferred vop devices */ 362 + static const char *const rockchip_drm_match_preferred[] = { 363 + "rockchip,rk3399-vop-big", 364 + NULL, 365 + }; 366 + 361 367 static struct component_match *rockchip_drm_match_add(struct device *dev) 362 368 { 363 369 struct component_match *match = NULL; 370 + struct device_node *port; 364 371 int i; 372 + 373 + /* add preferred vop device match before adding driver device matches */ 374 + for (i = 0; ; i++) { 375 + port = of_parse_phandle(dev->of_node, "ports", i); 376 + if (!port) 377 + break; 378 + 379 + if (of_device_is_available(port->parent) && 380 + of_device_compatible_match(port->parent, 381 + rockchip_drm_match_preferred)) 382 + drm_of_component_match_add(dev, &match, 383 + component_compare_of, 384 + port->parent); 385 + 386 + of_node_put(port); 387 + } 365 388 366 389 for (i = 0; i < num_rockchip_sub_drivers; i++) { 367 390 struct platform_driver *drv = rockchip_sub_drivers[i];
+4 -3
drivers/gpu/drm/scheduler/sched_main.c
··· 674 674 * drm_sched_start - recover jobs after a reset 675 675 * 676 676 * @sched: scheduler instance 677 + * @errno: error to set on the pending fences 677 678 * 678 679 */ 679 - void drm_sched_start(struct drm_gpu_scheduler *sched) 680 + void drm_sched_start(struct drm_gpu_scheduler *sched, int errno) 680 681 { 681 682 struct drm_sched_job *s_job, *tmp; 682 683 ··· 692 691 atomic_add(s_job->credits, &sched->credit_count); 693 692 694 693 if (!fence) { 695 - drm_sched_job_done(s_job, -ECANCELED); 694 + drm_sched_job_done(s_job, errno ?: -ECANCELED); 696 695 continue; 697 696 } 698 697 699 698 if (dma_fence_add_callback(fence, &s_job->cb, 700 699 drm_sched_job_done_cb)) 701 - drm_sched_job_done(s_job, fence->error); 700 + drm_sched_job_done(s_job, fence->error ?: errno); 702 701 } 703 702 704 703 drm_sched_start_timeout_unlocked(sched);
+37 -28
drivers/gpu/drm/tegra/gem.c
··· 76 76 /* 77 77 * Imported buffers need special treatment to satisfy the semantics of DMA-BUF. 78 78 */ 79 - if (gem->import_attach) { 80 - struct dma_buf *buf = gem->import_attach->dmabuf; 79 + if (obj->dma_buf) { 80 + struct dma_buf *buf = obj->dma_buf; 81 81 82 82 map->attach = dma_buf_attach(buf, dev); 83 83 if (IS_ERR(map->attach)) { ··· 184 184 if (obj->vaddr) 185 185 return obj->vaddr; 186 186 187 - if (obj->gem.import_attach) { 188 - ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map); 187 + if (obj->dma_buf) { 188 + ret = dma_buf_vmap_unlocked(obj->dma_buf, &map); 189 189 if (ret < 0) 190 190 return ERR_PTR(ret); 191 191 ··· 208 208 if (obj->vaddr) 209 209 return; 210 210 211 - if (obj->gem.import_attach) 212 - return dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map); 211 + if (obj->dma_buf) 212 + return dma_buf_vunmap_unlocked(obj->dma_buf, &map); 213 213 214 214 vunmap(addr); 215 215 } ··· 465 465 if (IS_ERR(bo)) 466 466 return bo; 467 467 468 - attach = dma_buf_attach(buf, drm->dev); 469 - if (IS_ERR(attach)) { 470 - err = PTR_ERR(attach); 471 - goto free; 472 - } 473 - 474 - get_dma_buf(buf); 475 - 476 - bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); 477 - if (IS_ERR(bo->sgt)) { 478 - err = PTR_ERR(bo->sgt); 479 - goto detach; 480 - } 481 - 468 + /* 469 + * If we need to use IOMMU API to map the dma-buf into the internally managed 470 + * domain, map it first to the DRM device to get an sgt. 
471 + */ 482 472 if (tegra->domain) { 473 + attach = dma_buf_attach(buf, drm->dev); 474 + if (IS_ERR(attach)) { 475 + err = PTR_ERR(attach); 476 + goto free; 477 + } 478 + 479 + bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); 480 + if (IS_ERR(bo->sgt)) { 481 + err = PTR_ERR(bo->sgt); 482 + goto detach; 483 + } 484 + 483 485 err = tegra_bo_iommu_map(tegra, bo); 484 486 if (err < 0) 485 487 goto detach; 488 + 489 + bo->gem.import_attach = attach; 486 490 } 487 491 488 - bo->gem.import_attach = attach; 492 + get_dma_buf(buf); 493 + bo->dma_buf = buf; 489 494 490 495 return bo; 491 496 ··· 521 516 dev_name(mapping->dev)); 522 517 } 523 518 524 - if (tegra->domain) 519 + if (tegra->domain) { 525 520 tegra_bo_iommu_unmap(tegra, bo); 526 521 527 - if (gem->import_attach) { 528 - dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, 529 - DMA_TO_DEVICE); 530 - drm_prime_gem_destroy(gem, NULL); 531 - } else { 532 - tegra_bo_free(gem->dev, bo); 522 + if (gem->import_attach) { 523 + dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, 524 + DMA_TO_DEVICE); 525 + dma_buf_detach(gem->import_attach->dmabuf, gem->import_attach); 526 + } 533 527 } 528 + 529 + tegra_bo_free(gem->dev, bo); 530 + 531 + if (bo->dma_buf) 532 + dma_buf_put(bo->dma_buf); 534 533 535 534 drm_gem_object_release(gem); 536 535 kfree(bo);
+21
drivers/gpu/drm/tegra/gem.h
··· 32 32 enum tegra_bo_sector_layout sector_layout; 33 33 }; 34 34 35 + /* 36 + * How memory is referenced within a tegra_bo: 37 + * 38 + * Buffer source | Mapping API(*) | Fields 39 + * ---------------+-----------------+--------------- 40 + * Allocated here | DMA API | iova (IOVA mapped to drm->dev), vaddr (CPU VA) 41 + * 42 + * Allocated here | IOMMU API | pages/num_pages (Phys. memory), sgt (Mapped to drm->dev), 43 + * | iova/size (Mapped to domain) 44 + * 45 + * Imported | DMA API | dma_buf (Imported dma_buf) 46 + * 47 + * Imported | IOMMU API | dma_buf (Imported dma_buf), 48 + * | gem->import_attach (Attachment on drm->dev), 49 + * | sgt (Mapped to drm->dev) 50 + * | iova/size (Mapped to domain) 51 + * 52 + * (*) If tegra->domain is set, i.e. TegraDRM IOMMU domain is directly managed through IOMMU API, 53 + * this is IOMMU API. Otherwise DMA API. 54 + */ 35 55 struct tegra_bo { 36 56 struct drm_gem_object gem; 37 57 struct host1x_bo base; ··· 59 39 struct sg_table *sgt; 60 40 dma_addr_t iova; 61 41 void *vaddr; 42 + struct dma_buf *dma_buf; 62 43 63 44 struct drm_mm_node *mm; 64 45 unsigned long num_pages;
+33 -13
drivers/gpu/drm/tegra/gr3d.c
··· 46 46 unsigned int nclocks; 47 47 struct reset_control_bulk_data resets[RST_GR3D_MAX]; 48 48 unsigned int nresets; 49 - struct dev_pm_domain_list *pd_list; 50 49 51 50 DECLARE_BITMAP(addr_regs, GR3D_NUM_REGS); 52 51 }; ··· 369 370 return 0; 370 371 } 371 372 373 + static void gr3d_del_link(void *link) 374 + { 375 + device_link_del(link); 376 + } 377 + 372 378 static int gr3d_init_power(struct device *dev, struct gr3d *gr3d) 373 379 { 374 - struct dev_pm_domain_attach_data pd_data = { 375 - .pd_names = (const char *[]) { "3d0", "3d1" }, 376 - .num_pd_names = 2, 377 - }; 380 + static const char * const opp_genpd_names[] = { "3d0", "3d1", NULL }; 381 + const u32 link_flags = DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME; 382 + struct device **opp_virt_devs, *pd_dev; 383 + struct device_link *link; 384 + unsigned int i; 378 385 int err; 379 386 380 387 err = of_count_phandle_with_args(dev->of_node, "power-domains", ··· 414 409 if (dev->pm_domain) 415 410 return 0; 416 411 417 - err = dev_pm_domain_attach_list(dev, &pd_data, &gr3d->pd_list); 418 - if (err < 0) 412 + err = devm_pm_opp_attach_genpd(dev, opp_genpd_names, &opp_virt_devs); 413 + if (err) 419 414 return err; 415 + 416 + for (i = 0; opp_genpd_names[i]; i++) { 417 + pd_dev = opp_virt_devs[i]; 418 + if (!pd_dev) { 419 + dev_err(dev, "failed to get %s power domain\n", 420 + opp_genpd_names[i]); 421 + return -EINVAL; 422 + } 423 + 424 + link = device_link_add(dev, pd_dev, link_flags); 425 + if (!link) { 426 + dev_err(dev, "failed to link to %s\n", dev_name(pd_dev)); 427 + return -EINVAL; 428 + } 429 + 430 + err = devm_add_action_or_reset(dev, gr3d_del_link, link); 431 + if (err) 432 + return err; 433 + } 420 434 421 435 return 0; 422 436 } ··· 527 503 528 504 err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev); 529 505 if (err) 530 - goto err; 506 + return err; 531 507 532 508 err = host1x_client_register(&gr3d->client.base); 533 509 if (err < 0) { 534 510 dev_err(&pdev->dev, "failed to register host1x 
client: %d\n", 535 511 err); 536 - goto err; 512 + return err; 537 513 } 538 514 539 515 /* initialize address register map */ ··· 541 517 set_bit(gr3d_addr_regs[i], gr3d->addr_regs); 542 518 543 519 return 0; 544 - err: 545 - dev_pm_domain_detach_list(gr3d->pd_list); 546 - return err; 547 520 } 548 521 549 522 static void gr3d_remove(struct platform_device *pdev) ··· 549 528 550 529 pm_runtime_disable(&pdev->dev); 551 530 host1x_client_unregister(&gr3d->client.base); 552 - dev_pm_domain_detach_list(gr3d->pd_list); 553 531 } 554 532 555 533 static int __maybe_unused gr3d_runtime_suspend(struct device *dev)
+1 -1
drivers/gpu/drm/tegra/hdmi.c
··· 434 434 435 435 static void tegra_hdmi_setup_audio_fs_tables(struct tegra_hdmi *hdmi) 436 436 { 437 - const unsigned int freqs[] = { 437 + static const unsigned int freqs[] = { 438 438 32000, 44100, 48000, 88200, 96000, 176400, 192000 439 439 }; 440 440 unsigned int i;
+358 -17
drivers/gpu/drm/tests/drm_framebuffer_test.c
··· 5 5 * Copyright (c) 2022 Maíra Canal <mairacanal@riseup.net> 6 6 */ 7 7 8 + #include <kunit/device.h> 8 9 #include <kunit/test.h> 9 10 10 11 #include <drm/drm_device.h> 12 + #include <drm/drm_drv.h> 11 13 #include <drm/drm_mode.h> 14 + #include <drm/drm_framebuffer.h> 12 15 #include <drm/drm_fourcc.h> 16 + #include <drm/drm_kunit_helpers.h> 13 17 #include <drm/drm_print.h> 14 18 15 19 #include "../drm_crtc_internal.h" ··· 22 18 #define MAX_WIDTH 4096 23 19 #define MIN_HEIGHT 4 24 20 #define MAX_HEIGHT 4096 21 + 22 + #define DRM_MODE_FB_INVALID BIT(2) 25 23 26 24 struct drm_framebuffer_test { 27 25 int buffer_created; ··· 87 81 .cmd = { .width = MAX_WIDTH, .height = MAX_HEIGHT, .pixel_format = DRM_FORMAT_ABGR8888, 88 82 .handles = { 1, 0, 0 }, .offsets = { UINT_MAX / 2, 0, 0 }, 89 83 .pitches = { 4 * MAX_WIDTH, 0, 0 }, 84 + } 85 + }, 86 + 87 + /* 88 + * All entries in members that represents per-plane values (@modifier, @handles, 89 + * @pitches and @offsets) must be zero when unused. 
90 + */ 91 + { .buffer_created = 0, .name = "ABGR8888 Buffer offset for inexistent plane", 92 + .cmd = { .width = MAX_WIDTH, .height = MAX_HEIGHT, .pixel_format = DRM_FORMAT_ABGR8888, 93 + .handles = { 1, 0, 0 }, .offsets = { UINT_MAX / 2, UINT_MAX / 2, 0 }, 94 + .pitches = { 4 * MAX_WIDTH, 0, 0 }, .flags = DRM_MODE_FB_MODIFIERS, 95 + } 96 + }, 97 + 98 + { .buffer_created = 0, .name = "ABGR8888 Invalid flag", 99 + .cmd = { .width = MAX_WIDTH, .height = MAX_HEIGHT, .pixel_format = DRM_FORMAT_ABGR8888, 100 + .handles = { 1, 0, 0 }, .offsets = { UINT_MAX / 2, 0, 0 }, 101 + .pitches = { 4 * MAX_WIDTH, 0, 0 }, .flags = DRM_MODE_FB_INVALID, 90 102 } 91 103 }, 92 104 { .buffer_created = 1, .name = "ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers", ··· 286 262 .pitches = { MAX_WIDTH, DIV_ROUND_UP(MAX_WIDTH, 2), DIV_ROUND_UP(MAX_WIDTH, 2) }, 287 263 } 288 264 }, 265 + { .buffer_created = 0, .name = "YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)", 266 + .cmd = { .width = MAX_WIDTH, .height = MAX_HEIGHT, .pixel_format = DRM_FORMAT_YUV420_10BIT, 267 + .handles = { 1, 0, 0 }, .flags = DRM_MODE_FB_MODIFIERS, 268 + .modifier = { DRM_FORMAT_MOD_LINEAR, 0, 0 }, 269 + .pitches = { MAX_WIDTH, 0, 0 }, 270 + } 271 + }, 289 272 { .buffer_created = 1, .name = "X0L2 Normal sizes", 290 273 .cmd = { .width = 600, .height = 600, .pixel_format = DRM_FORMAT_X0L2, 291 274 .handles = { 1, 0, 0 }, .pitches = { 1200, 0, 0 } ··· 348 317 }, 349 318 }; 350 319 320 + /* 321 + * This struct is intended to provide a way to mocked functions communicate 322 + * with the outer test when it can't be achieved by using its return value. In 323 + * this way, the functions that receive the mocked drm_device, for example, can 324 + * grab a reference to this and actually return something to be used on some 325 + * expectation. 
326 + */ 327 + struct drm_framebuffer_test_priv { 328 + struct drm_device dev; 329 + bool buffer_created; 330 + bool buffer_freed; 331 + }; 332 + 351 333 static struct drm_framebuffer *fb_create_mock(struct drm_device *dev, 352 334 struct drm_file *file_priv, 353 335 const struct drm_mode_fb_cmd2 *mode_cmd) 354 336 { 355 - int *buffer_created = dev->dev_private; 356 - *buffer_created = 1; 337 + struct drm_framebuffer_test_priv *priv = container_of(dev, typeof(*priv), dev); 338 + 339 + priv->buffer_created = true; 357 340 return ERR_PTR(-EINVAL); 358 341 } 359 342 ··· 377 332 378 333 static int drm_framebuffer_test_init(struct kunit *test) 379 334 { 380 - struct drm_device *mock; 335 + struct device *parent; 336 + struct drm_framebuffer_test_priv *priv; 337 + struct drm_device *dev; 381 338 382 - mock = kunit_kzalloc(test, sizeof(*mock), GFP_KERNEL); 383 - KUNIT_ASSERT_NOT_ERR_OR_NULL(test, mock); 339 + parent = drm_kunit_helper_alloc_device(test); 340 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent); 384 341 385 - mock->mode_config.min_width = MIN_WIDTH; 386 - mock->mode_config.max_width = MAX_WIDTH; 387 - mock->mode_config.min_height = MIN_HEIGHT; 388 - mock->mode_config.max_height = MAX_HEIGHT; 389 - mock->mode_config.funcs = &mock_config_funcs; 342 + priv = drm_kunit_helper_alloc_drm_device(test, parent, typeof(*priv), 343 + dev, 0); 344 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv); 345 + dev = &priv->dev; 390 346 391 - test->priv = mock; 347 + dev->mode_config.min_width = MIN_WIDTH; 348 + dev->mode_config.max_width = MAX_WIDTH; 349 + dev->mode_config.min_height = MIN_HEIGHT; 350 + dev->mode_config.max_height = MAX_HEIGHT; 351 + dev->mode_config.funcs = &mock_config_funcs; 352 + 353 + test->priv = priv; 392 354 return 0; 393 355 } 394 356 395 357 static void drm_test_framebuffer_create(struct kunit *test) 396 358 { 397 359 const struct drm_framebuffer_test *params = test->param_value; 398 - struct drm_device *mock = test->priv; 399 - int buffer_created = 0; 360 + 
struct drm_framebuffer_test_priv *priv = test->priv; 361 + struct drm_device *dev = &priv->dev; 400 362 401 - mock->dev_private = &buffer_created; 402 - drm_internal_framebuffer_create(mock, &params->cmd, NULL); 403 - KUNIT_EXPECT_EQ(test, params->buffer_created, buffer_created); 363 + priv->buffer_created = false; 364 + drm_internal_framebuffer_create(dev, &params->cmd, NULL); 365 + KUNIT_EXPECT_EQ(test, params->buffer_created, priv->buffer_created); 404 366 } 405 367 406 368 static void drm_framebuffer_test_to_desc(const struct drm_framebuffer_test *t, char *desc) 407 369 { 408 - strcpy(desc, t->name); 370 + strscpy(desc, t->name, KUNIT_PARAM_DESC_SIZE); 409 371 } 410 372 411 373 KUNIT_ARRAY_PARAM(drm_framebuffer_create, drm_framebuffer_create_cases, 412 374 drm_framebuffer_test_to_desc); 413 375 376 + /* Tries to create a framebuffer with modifiers without drm_device supporting it */ 377 + static void drm_test_framebuffer_modifiers_not_supported(struct kunit *test) 378 + { 379 + struct drm_framebuffer_test_priv *priv = test->priv; 380 + struct drm_device *dev = &priv->dev; 381 + struct drm_framebuffer *fb; 382 + 383 + /* A valid cmd with modifier */ 384 + struct drm_mode_fb_cmd2 cmd = { 385 + .width = MAX_WIDTH, .height = MAX_HEIGHT, 386 + .pixel_format = DRM_FORMAT_ABGR8888, .handles = { 1, 0, 0 }, 387 + .offsets = { UINT_MAX / 2, 0, 0 }, .pitches = { 4 * MAX_WIDTH, 0, 0 }, 388 + .flags = DRM_MODE_FB_MODIFIERS, 389 + }; 390 + 391 + priv->buffer_created = false; 392 + dev->mode_config.fb_modifiers_not_supported = 1; 393 + 394 + fb = drm_internal_framebuffer_create(dev, &cmd, NULL); 395 + KUNIT_EXPECT_EQ(test, priv->buffer_created, false); 396 + KUNIT_EXPECT_EQ(test, PTR_ERR(fb), -EINVAL); 397 + } 398 + 399 + /* Parameters for testing drm_framebuffer_check_src_coords function */ 400 + struct drm_framebuffer_check_src_coords_case { 401 + const char *name; 402 + const int expect; 403 + const unsigned int fb_size; 404 + const uint32_t src_x; 405 + const uint32_t 
src_y; 406 + 407 + /* Deltas to be applied on source */ 408 + const uint32_t dsrc_w; 409 + const uint32_t dsrc_h; 410 + }; 411 + 412 + static const struct drm_framebuffer_check_src_coords_case 413 + drm_framebuffer_check_src_coords_cases[] = { 414 + { .name = "Success: source fits into fb", 415 + .expect = 0, 416 + }, 417 + { .name = "Fail: overflowing fb with x-axis coordinate", 418 + .expect = -ENOSPC, .src_x = 1, .fb_size = UINT_MAX, 419 + }, 420 + { .name = "Fail: overflowing fb with y-axis coordinate", 421 + .expect = -ENOSPC, .src_y = 1, .fb_size = UINT_MAX, 422 + }, 423 + { .name = "Fail: overflowing fb with source width", 424 + .expect = -ENOSPC, .dsrc_w = 1, .fb_size = UINT_MAX - 1, 425 + }, 426 + { .name = "Fail: overflowing fb with source height", 427 + .expect = -ENOSPC, .dsrc_h = 1, .fb_size = UINT_MAX - 1, 428 + }, 429 + }; 430 + 431 + static void drm_test_framebuffer_check_src_coords(struct kunit *test) 432 + { 433 + const struct drm_framebuffer_check_src_coords_case *params = test->param_value; 434 + const uint32_t src_x = params->src_x; 435 + const uint32_t src_y = params->src_y; 436 + const uint32_t src_w = (params->fb_size << 16) + params->dsrc_w; 437 + const uint32_t src_h = (params->fb_size << 16) + params->dsrc_h; 438 + const struct drm_framebuffer fb = { 439 + .width = params->fb_size, 440 + .height = params->fb_size 441 + }; 442 + int ret; 443 + 444 + ret = drm_framebuffer_check_src_coords(src_x, src_y, src_w, src_h, &fb); 445 + KUNIT_EXPECT_EQ(test, ret, params->expect); 446 + } 447 + 448 + static void 449 + check_src_coords_test_to_desc(const struct drm_framebuffer_check_src_coords_case *t, 450 + char *desc) 451 + { 452 + strscpy(desc, t->name, KUNIT_PARAM_DESC_SIZE); 453 + } 454 + 455 + KUNIT_ARRAY_PARAM(check_src_coords, drm_framebuffer_check_src_coords_cases, 456 + check_src_coords_test_to_desc); 457 + 458 + /* 459 + * Test if drm_framebuffer_cleanup() really pops out the framebuffer object 460 + * from device's fb_list and decrement 
the number of framebuffers for that 461 + * device, which is the only thing it does. 462 + */ 463 + static void drm_test_framebuffer_cleanup(struct kunit *test) 464 + { 465 + struct drm_framebuffer_test_priv *priv = test->priv; 466 + struct drm_device *dev = &priv->dev; 467 + struct list_head *fb_list = &dev->mode_config.fb_list; 468 + struct drm_format_info format = { }; 469 + struct drm_framebuffer fb1 = { .dev = dev, .format = &format }; 470 + struct drm_framebuffer fb2 = { .dev = dev, .format = &format }; 471 + 472 + /* This will result in [fb_list] -> fb2 -> fb1 */ 473 + drm_framebuffer_init(dev, &fb1, NULL); 474 + drm_framebuffer_init(dev, &fb2, NULL); 475 + 476 + drm_framebuffer_cleanup(&fb1); 477 + 478 + /* Now fb2 is the only element on fb_list */ 479 + KUNIT_ASSERT_TRUE(test, list_is_singular(&fb2.head)); 480 + KUNIT_ASSERT_EQ(test, dev->mode_config.num_fb, 1); 481 + 482 + drm_framebuffer_cleanup(&fb2); 483 + 484 + /* Now fb_list is empty */ 485 + KUNIT_ASSERT_TRUE(test, list_empty(fb_list)); 486 + KUNIT_ASSERT_EQ(test, dev->mode_config.num_fb, 0); 487 + } 488 + 489 + /* 490 + * Initialize a framebuffer, look up its id and test if the returned reference 491 + * matches. 
492 + */ 493 + static void drm_test_framebuffer_lookup(struct kunit *test) 494 + { 495 + struct drm_framebuffer_test_priv *priv = test->priv; 496 + struct drm_device *dev = &priv->dev; 497 + struct drm_format_info format = { }; 498 + struct drm_framebuffer expected_fb = { .dev = dev, .format = &format }; 499 + struct drm_framebuffer *returned_fb; 500 + uint32_t id = 0; 501 + int ret; 502 + 503 + ret = drm_framebuffer_init(dev, &expected_fb, NULL); 504 + KUNIT_ASSERT_EQ(test, ret, 0); 505 + id = expected_fb.base.id; 506 + 507 + /* Looking for expected_fb */ 508 + returned_fb = drm_framebuffer_lookup(dev, NULL, id); 509 + KUNIT_EXPECT_PTR_EQ(test, returned_fb, &expected_fb); 510 + drm_framebuffer_put(returned_fb); 511 + 512 + drm_framebuffer_cleanup(&expected_fb); 513 + } 514 + 515 + /* Try to lookup an id that is not linked to a framebuffer */ 516 + static void drm_test_framebuffer_lookup_inexistent(struct kunit *test) 517 + { 518 + struct drm_framebuffer_test_priv *priv = test->priv; 519 + struct drm_device *dev = &priv->dev; 520 + struct drm_framebuffer *fb; 521 + uint32_t id = 0; 522 + 523 + /* Looking for an inexistent framebuffer */ 524 + fb = drm_framebuffer_lookup(dev, NULL, id); 525 + KUNIT_EXPECT_NULL(test, fb); 526 + } 527 + 528 + /* 529 + * Test if drm_framebuffer_init initializes the framebuffer successfully, 530 + * asserting that its modeset object struct and its refcount are correctly 531 + * set and that strictly one framebuffer is initialized. 
532 + */ 533 + static void drm_test_framebuffer_init(struct kunit *test) 534 + { 535 + struct drm_framebuffer_test_priv *priv = test->priv; 536 + struct drm_device *dev = &priv->dev; 537 + struct drm_format_info format = { }; 538 + struct drm_framebuffer fb1 = { .dev = dev, .format = &format }; 539 + struct drm_framebuffer_funcs funcs = { }; 540 + int ret; 541 + 542 + ret = drm_framebuffer_init(dev, &fb1, &funcs); 543 + KUNIT_ASSERT_EQ(test, ret, 0); 544 + 545 + /* Check if fb->funcs is actually set to the drm_framebuffer_funcs passed on */ 546 + KUNIT_EXPECT_PTR_EQ(test, fb1.funcs, &funcs); 547 + 548 + /* The fb->comm must be set to the current running process */ 549 + KUNIT_EXPECT_STREQ(test, fb1.comm, current->comm); 550 + 551 + /* The fb->base must be successfully initialized */ 552 + KUNIT_EXPECT_NE(test, fb1.base.id, 0); 553 + KUNIT_EXPECT_EQ(test, fb1.base.type, DRM_MODE_OBJECT_FB); 554 + KUNIT_EXPECT_EQ(test, kref_read(&fb1.base.refcount), 1); 555 + KUNIT_EXPECT_PTR_EQ(test, fb1.base.free_cb, &drm_framebuffer_free); 556 + 557 + /* There must be just that one fb initialized */ 558 + KUNIT_EXPECT_EQ(test, dev->mode_config.num_fb, 1); 559 + KUNIT_EXPECT_PTR_EQ(test, dev->mode_config.fb_list.prev, &fb1.head); 560 + KUNIT_EXPECT_PTR_EQ(test, dev->mode_config.fb_list.next, &fb1.head); 561 + 562 + drm_framebuffer_cleanup(&fb1); 563 + } 564 + 565 + /* Try to init a framebuffer without setting its format */ 566 + static void drm_test_framebuffer_init_bad_format(struct kunit *test) 567 + { 568 + struct drm_framebuffer_test_priv *priv = test->priv; 569 + struct drm_device *dev = &priv->dev; 570 + struct drm_framebuffer fb1 = { .dev = dev, .format = NULL }; 571 + struct drm_framebuffer_funcs funcs = { }; 572 + int ret; 573 + 574 + /* Fails if fb.format isn't set */ 575 + ret = drm_framebuffer_init(dev, &fb1, &funcs); 576 + KUNIT_EXPECT_EQ(test, ret, -EINVAL); 577 + } 578 + 579 + /* 580 + * Test calling drm_framebuffer_init() passing a framebuffer linked to a 581 + * 
different drm_device parent from the one passed on the first argument, which 582 + must fail. 583 + */ 584 + static void drm_test_framebuffer_init_dev_mismatch(struct kunit *test) 585 + { 586 + struct drm_framebuffer_test_priv *priv = test->priv; 587 + struct drm_device *right_dev = &priv->dev; 588 + struct drm_device *wrong_dev; 589 + struct device *wrong_dev_parent; 590 + struct drm_format_info format = { }; 591 + struct drm_framebuffer fb1 = { .dev = right_dev, .format = &format }; 592 + struct drm_framebuffer_funcs funcs = { }; 593 + int ret; 594 + 595 + wrong_dev_parent = kunit_device_register(test, "drm-kunit-wrong-device-mock"); 596 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, wrong_dev_parent); 597 + 598 + wrong_dev = __drm_kunit_helper_alloc_drm_device(test, wrong_dev_parent, 599 + sizeof(struct drm_device), 600 + 0, 0); 601 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, wrong_dev); 602 + 603 + /* Fails if fb->dev doesn't point to the drm_device passed on first arg */ 604 + ret = drm_framebuffer_init(wrong_dev, &fb1, &funcs); 605 + KUNIT_EXPECT_EQ(test, ret, -EINVAL); 606 + } 607 + 608 + static void destroy_free_mock(struct drm_framebuffer *fb) 609 + { 610 + struct drm_framebuffer_test_priv *priv = container_of(fb->dev, typeof(*priv), dev); 611 + 612 + priv->buffer_freed = true; 613 + } 614 + 615 + static struct drm_framebuffer_funcs framebuffer_funcs_free_mock = { 616 + .destroy = destroy_free_mock, 617 + }; 618 + 619 + /* 620 + * In summary, the drm_framebuffer_free() function must implicitly call 621 + * fb->funcs->destroy() and guarantee that the framebuffer object is unregistered 622 + * from the drm_device idr pool. 
623 + */ 624 + static void drm_test_framebuffer_free(struct kunit *test) 625 + { 626 + struct drm_framebuffer_test_priv *priv = test->priv; 627 + struct drm_device *dev = &priv->dev; 628 + struct drm_mode_object *obj; 629 + struct drm_framebuffer fb = { 630 + .dev = dev, 631 + .funcs = &framebuffer_funcs_free_mock, 632 + }; 633 + int id, ret; 634 + 635 + priv->buffer_freed = false; 636 + 637 + /* 638 + * Mock a framebuffer that was not unregistered at the moment of the 639 + * drm_framebuffer_free() call. 640 + */ 641 + ret = drm_mode_object_add(dev, &fb.base, DRM_MODE_OBJECT_FB); 642 + KUNIT_ASSERT_EQ(test, ret, 0); 643 + id = fb.base.id; 644 + 645 + drm_framebuffer_free(&fb.base.refcount); 646 + 647 + /* The framebuffer object must be unregistered */ 648 + obj = drm_mode_object_find(dev, NULL, id, DRM_MODE_OBJECT_FB); 649 + KUNIT_EXPECT_PTR_EQ(test, obj, NULL); 650 + KUNIT_EXPECT_EQ(test, fb.base.id, 0); 651 + 652 + /* Test if fb->funcs->destroy() was called */ 653 + KUNIT_EXPECT_EQ(test, priv->buffer_freed, true); 654 + } 655 + 414 656 static struct kunit_case drm_framebuffer_tests[] = { 657 + KUNIT_CASE_PARAM(drm_test_framebuffer_check_src_coords, check_src_coords_gen_params), 658 + KUNIT_CASE(drm_test_framebuffer_cleanup), 415 659 KUNIT_CASE_PARAM(drm_test_framebuffer_create, drm_framebuffer_create_gen_params), 660 + KUNIT_CASE(drm_test_framebuffer_free), 661 + KUNIT_CASE(drm_test_framebuffer_init), 662 + KUNIT_CASE(drm_test_framebuffer_init_bad_format), 663 + KUNIT_CASE(drm_test_framebuffer_init_dev_mismatch), 664 + KUNIT_CASE(drm_test_framebuffer_lookup), 665 + KUNIT_CASE(drm_test_framebuffer_lookup_inexistent), 666 + KUNIT_CASE(drm_test_framebuffer_modifiers_not_supported), 416 667 { } 417 668 }; 418 669
+1 -3
drivers/gpu/drm/tiny/Kconfig
··· 13 13 config DRM_BOCHS 14 14 tristate "DRM Support for bochs dispi vga interface (qemu stdvga)" 15 15 depends on DRM && PCI && MMU 16 + select DRM_GEM_SHMEM_HELPER 16 17 select DRM_KMS_HELPER 17 - select DRM_VRAM_HELPER 18 - select DRM_TTM 19 - select DRM_TTM_HELPER 20 18 help 21 19 This is a KMS driver for qemu's stdvga output. Choose this option 22 20 for qemu.
+226 -164
drivers/gpu/drm/tiny/bochs.c
··· 4 4 #include <linux/pci.h> 5 5 6 6 #include <drm/drm_aperture.h> 7 + #include <drm/drm_atomic.h> 7 8 #include <drm/drm_atomic_helper.h> 9 + #include <drm/drm_damage_helper.h> 8 10 #include <drm/drm_drv.h> 9 11 #include <drm/drm_edid.h> 10 - #include <drm/drm_fbdev_ttm.h> 12 + #include <drm/drm_fbdev_shmem.h> 11 13 #include <drm/drm_fourcc.h> 12 14 #include <drm/drm_framebuffer.h> 15 + #include <drm/drm_gem_atomic_helper.h> 13 16 #include <drm/drm_gem_framebuffer_helper.h> 14 - #include <drm/drm_gem_vram_helper.h> 17 + #include <drm/drm_gem_shmem_helper.h> 15 18 #include <drm/drm_managed.h> 16 19 #include <drm/drm_module.h> 20 + #include <drm/drm_plane_helper.h> 17 21 #include <drm/drm_probe_helper.h> 18 - #include <drm/drm_simple_kms_helper.h> 19 22 20 23 #include <video/vga.h> 21 24 ··· 74 71 }; 75 72 76 73 struct bochs_device { 74 + struct drm_device dev; 75 + 77 76 /* hw */ 78 77 void __iomem *mmio; 79 78 int ioports; ··· 90 85 u16 yres_virtual; 91 86 u32 stride; 92 87 u32 bpp; 93 - const struct drm_edid *drm_edid; 94 88 95 89 /* drm */ 96 - struct drm_device *dev; 97 - struct drm_simple_display_pipe pipe; 90 + struct drm_plane primary_plane; 91 + struct drm_crtc crtc; 92 + struct drm_encoder encoder; 98 93 struct drm_connector connector; 99 94 }; 95 + 96 + static struct bochs_device *to_bochs_device(const struct drm_device *dev) 97 + { 98 + return container_of(dev, struct bochs_device, dev); 99 + } 100 100 101 101 /* ---------------------------------------------------------------------- */ 102 102 ··· 182 172 #define bochs_hw_set_native_endian(_b) bochs_hw_set_little_endian(_b) 183 173 #endif 184 174 185 - static int bochs_get_edid_block(void *data, u8 *buf, 186 - unsigned int block, size_t len) 175 + static int bochs_get_edid_block(void *data, u8 *buf, unsigned int block, size_t len) 187 176 { 188 177 struct bochs_device *bochs = data; 189 178 size_t i, start = block * EDID_LENGTH; 179 + 180 + if (!bochs->mmio) 181 + return -1; 190 182 191 183 if (start + 
len > 0x400 /* vga register offset */) 192 184 return -1; ··· 199 187 return 0; 200 188 } 201 189 202 - static int bochs_hw_load_edid(struct bochs_device *bochs) 190 + static const struct drm_edid *bochs_hw_read_edid(struct drm_connector *connector) 203 191 { 192 + struct drm_device *dev = connector->dev; 193 + struct bochs_device *bochs = to_bochs_device(dev); 204 194 u8 header[8]; 205 - 206 - if (!bochs->mmio) 207 - return -1; 208 195 209 196 /* check header to detect whenever edid support is enabled in qemu */ 210 197 bochs_get_edid_block(bochs, header, 0, ARRAY_SIZE(header)); 211 198 if (drm_edid_header_is_valid(header) != 8) 212 - return -1; 199 + return NULL; 213 200 214 - drm_edid_free(bochs->drm_edid); 215 - bochs->drm_edid = drm_edid_read_custom(&bochs->connector, 216 - bochs_get_edid_block, bochs); 217 - if (!bochs->drm_edid) 218 - return -1; 201 + drm_dbg(dev, "Found EDID data blob.\n"); 219 202 220 - return 0; 203 + return drm_edid_read_custom(connector, bochs_get_edid_block, bochs); 221 204 } 222 205 223 - static int bochs_hw_init(struct drm_device *dev) 206 + static int bochs_hw_init(struct bochs_device *bochs) 224 207 { 225 - struct bochs_device *bochs = dev->dev_private; 208 + struct drm_device *dev = &bochs->dev; 226 209 struct pci_dev *pdev = to_pci_dev(dev->dev); 227 210 unsigned long addr, size, mem, ioaddr, iosize; 228 211 u16 id; 229 212 230 213 if (pdev->resource[2].flags & IORESOURCE_MEM) { 214 + ioaddr = pci_resource_start(pdev, 2); 215 + iosize = pci_resource_len(pdev, 2); 231 216 /* mmio bar with vga and bochs registers present */ 232 - if (pci_request_region(pdev, 2, "bochs-drm") != 0) { 217 + if (!devm_request_mem_region(&pdev->dev, ioaddr, iosize, "bochs-drm")) { 233 218 DRM_ERROR("Cannot request mmio region\n"); 234 219 return -EBUSY; 235 220 } 236 - ioaddr = pci_resource_start(pdev, 2); 237 - iosize = pci_resource_len(pdev, 2); 238 - bochs->mmio = ioremap(ioaddr, iosize); 221 + bochs->mmio = devm_ioremap(&pdev->dev, ioaddr, iosize); 
239 222 if (bochs->mmio == NULL) { 240 223 DRM_ERROR("Cannot map mmio region\n"); 241 224 return -ENOMEM; ··· 238 231 } else { 239 232 ioaddr = VBE_DISPI_IOPORT_INDEX; 240 233 iosize = 2; 241 - if (!request_region(ioaddr, iosize, "bochs-drm")) { 234 + if (!devm_request_region(&pdev->dev, ioaddr, iosize, "bochs-drm")) { 242 235 DRM_ERROR("Cannot request ioports\n"); 243 236 return -EBUSY; 244 237 } ··· 265 258 size = min(size, mem); 266 259 } 267 260 268 - if (pci_request_region(pdev, 0, "bochs-drm") != 0) 261 + if (!devm_request_mem_region(&pdev->dev, addr, size, "bochs-drm")) 269 262 DRM_WARN("Cannot request framebuffer, boot fb still active?\n"); 270 263 271 - bochs->fb_map = ioremap(addr, size); 264 + bochs->fb_map = devm_ioremap_wc(&pdev->dev, addr, size); 272 265 if (bochs->fb_map == NULL) { 273 266 DRM_ERROR("Cannot map framebuffer\n"); 274 267 return -ENOMEM; ··· 297 290 return 0; 298 291 } 299 292 300 - static void bochs_hw_fini(struct drm_device *dev) 301 - { 302 - struct bochs_device *bochs = dev->dev_private; 303 - 304 - /* TODO: shot down existing vram mappings */ 305 - 306 - if (bochs->mmio) 307 - iounmap(bochs->mmio); 308 - if (bochs->ioports) 309 - release_region(VBE_DISPI_IOPORT_INDEX, 2); 310 - if (bochs->fb_map) 311 - iounmap(bochs->fb_map); 312 - pci_release_regions(to_pci_dev(dev->dev)); 313 - drm_edid_free(bochs->drm_edid); 314 - } 315 - 316 293 static void bochs_hw_blank(struct bochs_device *bochs, bool blank) 317 294 { 318 295 DRM_DEBUG_DRIVER("hw_blank %d\n", blank); ··· 312 321 { 313 322 int idx; 314 323 315 - if (!drm_dev_enter(bochs->dev, &idx)) 324 + if (!drm_dev_enter(&bochs->dev, &idx)) 316 325 return; 317 326 318 327 bochs->xres = mode->hdisplay; ··· 348 357 { 349 358 int idx; 350 359 351 - if (!drm_dev_enter(bochs->dev, &idx)) 360 + if (!drm_dev_enter(&bochs->dev, &idx)) 352 361 return; 353 362 354 363 DRM_DEBUG_DRIVER("format %c%c%c%c\n", ··· 379 388 unsigned long offset; 380 389 unsigned int vx, vy, vwidth, idx; 381 390 382 - if 
(!drm_dev_enter(bochs->dev, &idx)) 391 + if (!drm_dev_enter(&bochs->dev, &idx)) 383 392 return; 384 393 385 394 bochs->stride = stride; ··· 401 410 402 411 /* ---------------------------------------------------------------------- */ 403 412 404 - static const uint32_t bochs_formats[] = { 413 + static const uint32_t bochs_primary_plane_formats[] = { 405 414 DRM_FORMAT_XRGB8888, 406 415 DRM_FORMAT_BGRX8888, 407 416 }; 408 417 409 - static void bochs_plane_update(struct bochs_device *bochs, struct drm_plane_state *state) 418 + static int bochs_primary_plane_helper_atomic_check(struct drm_plane *plane, 419 + struct drm_atomic_state *state) 410 420 { 411 - struct drm_gem_vram_object *gbo; 412 - s64 gpu_addr; 421 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, plane); 422 + struct drm_crtc *new_crtc = new_plane_state->crtc; 423 + struct drm_crtc_state *new_crtc_state = NULL; 424 + int ret; 413 425 414 - if (!state->fb || !bochs->stride) 426 + if (new_crtc) 427 + new_crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc); 428 + 429 + ret = drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state, 430 + DRM_PLANE_NO_SCALING, 431 + DRM_PLANE_NO_SCALING, 432 + false, false); 433 + if (ret) 434 + return ret; 435 + else if (!new_plane_state->visible) 436 + return 0; 437 + 438 + return 0; 439 + } 440 + 441 + static void bochs_primary_plane_helper_atomic_update(struct drm_plane *plane, 442 + struct drm_atomic_state *state) 443 + { 444 + struct drm_device *dev = plane->dev; 445 + struct bochs_device *bochs = to_bochs_device(dev); 446 + struct drm_plane_state *plane_state = plane->state; 447 + struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 448 + struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 449 + struct drm_framebuffer *fb = plane_state->fb; 450 + struct drm_atomic_helper_damage_iter iter; 451 + struct drm_rect damage; 452 + 453 + if (!fb || 
!bochs->stride) 415 454 return; 416 455 417 - gbo = drm_gem_vram_of_gem(state->fb->obj[0]); 418 - gpu_addr = drm_gem_vram_offset(gbo); 419 - if (WARN_ON_ONCE(gpu_addr < 0)) 420 - return; /* Bug: we didn't pin the BO to VRAM in prepare_fb. */ 456 + drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state); 457 + drm_atomic_for_each_plane_damage(&iter, &damage) { 458 + struct iosys_map dst = IOSYS_MAP_INIT_VADDR_IOMEM(bochs->fb_map); 421 459 460 + iosys_map_incr(&dst, drm_fb_clip_offset(fb->pitches[0], fb->format, &damage)); 461 + drm_fb_memcpy(&dst, fb->pitches, shadow_plane_state->data, fb, &damage); 462 + } 463 + 464 + /* Always scanout image at VRAM offset 0 */ 422 465 bochs_hw_setbase(bochs, 423 - state->crtc_x, 424 - state->crtc_y, 425 - state->fb->pitches[0], 426 - state->fb->offsets[0] + gpu_addr); 427 - bochs_hw_setformat(bochs, state->fb->format); 466 + plane_state->crtc_x, 467 + plane_state->crtc_y, 468 + fb->pitches[0], 469 + 0); 470 + bochs_hw_setformat(bochs, fb->format); 428 471 } 429 472 430 - static void bochs_pipe_enable(struct drm_simple_display_pipe *pipe, 431 - struct drm_crtc_state *crtc_state, 432 - struct drm_plane_state *plane_state) 473 + static const struct drm_plane_helper_funcs bochs_primary_plane_helper_funcs = { 474 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 475 + .atomic_check = bochs_primary_plane_helper_atomic_check, 476 + .atomic_update = bochs_primary_plane_helper_atomic_update, 477 + }; 478 + 479 + static const struct drm_plane_funcs bochs_primary_plane_funcs = { 480 + .update_plane = drm_atomic_helper_update_plane, 481 + .disable_plane = drm_atomic_helper_disable_plane, 482 + .destroy = drm_plane_cleanup, 483 + DRM_GEM_SHADOW_PLANE_FUNCS 484 + }; 485 + 486 + static void bochs_crtc_helper_mode_set_nofb(struct drm_crtc *crtc) 433 487 { 434 - struct bochs_device *bochs = pipe->crtc.dev->dev_private; 488 + struct bochs_device *bochs = to_bochs_device(crtc->dev); 489 + struct drm_crtc_state *crtc_state = crtc->state; 435 490 
436 491 bochs_hw_setmode(bochs, &crtc_state->mode); 437 - bochs_plane_update(bochs, plane_state); 438 492 } 439 493 440 - static void bochs_pipe_disable(struct drm_simple_display_pipe *pipe) 494 + static int bochs_crtc_helper_atomic_check(struct drm_crtc *crtc, 495 + struct drm_atomic_state *state) 441 496 { 442 - struct bochs_device *bochs = pipe->crtc.dev->dev_private; 497 + struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 498 + 499 + if (!crtc_state->enable) 500 + return 0; 501 + 502 + return drm_atomic_helper_check_crtc_primary_plane(crtc_state); 503 + } 504 + 505 + static void bochs_crtc_helper_atomic_enable(struct drm_crtc *crtc, 506 + struct drm_atomic_state *state) 507 + { 508 + } 509 + 510 + static void bochs_crtc_helper_atomic_disable(struct drm_crtc *crtc, 511 + struct drm_atomic_state *crtc_state) 512 + { 513 + struct bochs_device *bochs = to_bochs_device(crtc->dev); 443 514 444 515 bochs_hw_blank(bochs, true); 445 516 } 446 517 447 - static void bochs_pipe_update(struct drm_simple_display_pipe *pipe, 448 - struct drm_plane_state *old_state) 449 - { 450 - struct bochs_device *bochs = pipe->crtc.dev->dev_private; 451 - 452 - bochs_plane_update(bochs, pipe->plane.state); 453 - } 454 - 455 - static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = { 456 - .enable = bochs_pipe_enable, 457 - .disable = bochs_pipe_disable, 458 - .update = bochs_pipe_update, 459 - .prepare_fb = drm_gem_vram_simple_display_pipe_prepare_fb, 460 - .cleanup_fb = drm_gem_vram_simple_display_pipe_cleanup_fb, 518 + static const struct drm_crtc_helper_funcs bochs_crtc_helper_funcs = { 519 + .mode_set_nofb = bochs_crtc_helper_mode_set_nofb, 520 + .atomic_check = bochs_crtc_helper_atomic_check, 521 + .atomic_enable = bochs_crtc_helper_atomic_enable, 522 + .atomic_disable = bochs_crtc_helper_atomic_disable, 461 523 }; 462 524 463 - static int bochs_connector_get_modes(struct drm_connector *connector) 525 + static const struct drm_crtc_funcs 
bochs_crtc_funcs = { 526 + .reset = drm_atomic_helper_crtc_reset, 527 + .destroy = drm_crtc_cleanup, 528 + .set_config = drm_atomic_helper_set_config, 529 + .page_flip = drm_atomic_helper_page_flip, 530 + .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 531 + .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 532 + }; 533 + 534 + static const struct drm_encoder_funcs bochs_encoder_funcs = { 535 + .destroy = drm_encoder_cleanup, 536 + }; 537 + 538 + static int bochs_connector_helper_get_modes(struct drm_connector *connector) 464 539 { 540 + const struct drm_edid *edid; 465 541 int count; 466 542 467 - count = drm_edid_connector_add_modes(connector); 543 + edid = bochs_hw_read_edid(connector); 468 544 469 - if (!count) { 545 + if (edid) { 546 + drm_edid_connector_update(connector, edid); 547 + count = drm_edid_connector_add_modes(connector); 548 + drm_edid_free(edid); 549 + } else { 550 + drm_edid_connector_update(connector, NULL); 470 551 count = drm_add_modes_noedid(connector, 8192, 8192); 471 552 drm_set_preferred_mode(connector, defx, defy); 472 553 } 554 + 473 555 return count; 474 556 } 475 557 476 - static const struct drm_connector_helper_funcs bochs_connector_connector_helper_funcs = { 477 - .get_modes = bochs_connector_get_modes, 558 + static const struct drm_connector_helper_funcs bochs_connector_helper_funcs = { 559 + .get_modes = bochs_connector_helper_get_modes, 478 560 }; 479 561 480 - static const struct drm_connector_funcs bochs_connector_connector_funcs = { 562 + static const struct drm_connector_funcs bochs_connector_funcs = { 481 563 .fill_modes = drm_helper_probe_single_connector_modes, 482 564 .destroy = drm_connector_cleanup, 483 565 .reset = drm_atomic_helper_connector_reset, ··· 558 494 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 559 495 }; 560 496 561 - static void bochs_connector_init(struct drm_device *dev) 497 + static enum drm_mode_status bochs_mode_config_mode_valid(struct drm_device 
*dev, 498 + const struct drm_display_mode *mode) 562 499 { 563 - struct bochs_device *bochs = dev->dev_private; 564 - struct drm_connector *connector = &bochs->connector; 500 + struct bochs_device *bochs = to_bochs_device(dev); 501 + const struct drm_format_info *format = drm_format_info(DRM_FORMAT_XRGB8888); 502 + u64 pitch; 565 503 566 - drm_connector_init(dev, connector, &bochs_connector_connector_funcs, 567 - DRM_MODE_CONNECTOR_VIRTUAL); 568 - drm_connector_helper_add(connector, &bochs_connector_connector_helper_funcs); 504 + if (drm_WARN_ON(dev, !format)) 505 + return MODE_ERROR; 569 506 570 - bochs_hw_load_edid(bochs); 571 - if (bochs->drm_edid) { 572 - DRM_INFO("Found EDID data blob.\n"); 573 - drm_connector_attach_edid_property(connector); 574 - drm_edid_connector_update(&bochs->connector, bochs->drm_edid); 575 - } 507 + pitch = drm_format_info_min_pitch(format, 0, mode->hdisplay); 508 + if (!pitch) 509 + return MODE_BAD_WIDTH; 510 + if (mode->vdisplay > DIV_ROUND_DOWN_ULL(bochs->fb_size, pitch)) 511 + return MODE_MEM; 512 + 513 + return MODE_OK; 576 514 } 577 515 578 - static struct drm_framebuffer * 579 - bochs_gem_fb_create(struct drm_device *dev, struct drm_file *file, 580 - const struct drm_mode_fb_cmd2 *mode_cmd) 581 - { 582 - if (mode_cmd->pixel_format != DRM_FORMAT_XRGB8888 && 583 - mode_cmd->pixel_format != DRM_FORMAT_BGRX8888) 584 - return ERR_PTR(-EINVAL); 585 - 586 - return drm_gem_fb_create(dev, file, mode_cmd); 587 - } 588 - 589 - static const struct drm_mode_config_funcs bochs_mode_funcs = { 590 - .fb_create = bochs_gem_fb_create, 591 - .mode_valid = drm_vram_helper_mode_valid, 516 + static const struct drm_mode_config_funcs bochs_mode_config_funcs = { 517 + .fb_create = drm_gem_fb_create_with_dirty, 518 + .mode_valid = bochs_mode_config_mode_valid, 592 519 .atomic_check = drm_atomic_helper_check, 593 520 .atomic_commit = drm_atomic_helper_commit, 594 521 }; 595 522 596 523 static int bochs_kms_init(struct bochs_device *bochs) 597 524 { 525 + 
struct drm_device *dev = &bochs->dev; 526 + struct drm_plane *primary_plane; 527 + struct drm_crtc *crtc; 528 + struct drm_connector *connector; 529 + struct drm_encoder *encoder; 598 530 int ret; 599 531 600 - ret = drmm_mode_config_init(bochs->dev); 532 + ret = drmm_mode_config_init(dev); 601 533 if (ret) 602 534 return ret; 603 535 604 - bochs->dev->mode_config.max_width = 8192; 605 - bochs->dev->mode_config.max_height = 8192; 536 + dev->mode_config.max_width = 8192; 537 + dev->mode_config.max_height = 8192; 606 538 607 - bochs->dev->mode_config.preferred_depth = 24; 608 - bochs->dev->mode_config.prefer_shadow = 0; 609 - bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; 539 + dev->mode_config.preferred_depth = 24; 540 + dev->mode_config.quirk_addfb_prefer_host_byte_order = true; 610 541 611 - bochs->dev->mode_config.funcs = &bochs_mode_funcs; 542 + dev->mode_config.funcs = &bochs_mode_config_funcs; 612 543 613 - bochs_connector_init(bochs->dev); 614 - drm_simple_display_pipe_init(bochs->dev, 615 - &bochs->pipe, 616 - &bochs_pipe_funcs, 617 - bochs_formats, 618 - ARRAY_SIZE(bochs_formats), 619 - NULL, 620 - &bochs->connector); 544 + primary_plane = &bochs->primary_plane; 545 + ret = drm_universal_plane_init(dev, primary_plane, 0, 546 + &bochs_primary_plane_funcs, 547 + bochs_primary_plane_formats, 548 + ARRAY_SIZE(bochs_primary_plane_formats), 549 + NULL, 550 + DRM_PLANE_TYPE_PRIMARY, NULL); 551 + if (ret) 552 + return ret; 553 + drm_plane_helper_add(primary_plane, &bochs_primary_plane_helper_funcs); 554 + drm_plane_enable_fb_damage_clips(primary_plane); 621 555 622 - drm_mode_config_reset(bochs->dev); 556 + crtc = &bochs->crtc; 557 + ret = drm_crtc_init_with_planes(dev, crtc, primary_plane, NULL, 558 + &bochs_crtc_funcs, NULL); 559 + if (ret) 560 + return ret; 561 + drm_crtc_helper_add(crtc, &bochs_crtc_helper_funcs); 562 + 563 + encoder = &bochs->encoder; 564 + ret = drm_encoder_init(dev, encoder, &bochs_encoder_funcs, 565 + 
DRM_MODE_ENCODER_VIRTUAL, NULL); 566 + if (ret) 567 + return ret; 568 + encoder->possible_crtcs = drm_crtc_mask(crtc); 569 + 570 + connector = &bochs->connector; 571 + ret = drm_connector_init(dev, connector, &bochs_connector_funcs, 572 + DRM_MODE_CONNECTOR_VIRTUAL); 573 + if (ret) 574 + return ret; 575 + drm_connector_helper_add(connector, &bochs_connector_helper_funcs); 576 + drm_connector_attach_edid_property(connector); 577 + drm_connector_attach_encoder(connector, encoder); 578 + 579 + drm_mode_config_reset(dev); 623 580 624 581 return 0; 625 582 } ··· 648 563 /* ---------------------------------------------------------------------- */ 649 564 /* drm interface */ 650 565 651 - static int bochs_load(struct drm_device *dev) 566 + static int bochs_load(struct bochs_device *bochs) 652 567 { 653 - struct bochs_device *bochs; 654 568 int ret; 655 569 656 - bochs = drmm_kzalloc(dev, sizeof(*bochs), GFP_KERNEL); 657 - if (bochs == NULL) 658 - return -ENOMEM; 659 - dev->dev_private = bochs; 660 - bochs->dev = dev; 661 - 662 - ret = bochs_hw_init(dev); 570 + ret = bochs_hw_init(bochs); 663 571 if (ret) 664 572 return ret; 665 573 666 - ret = drmm_vram_helper_init(dev, bochs->fb_base, bochs->fb_size); 667 - if (ret) 668 - goto err_hw_fini; 669 - 670 574 ret = bochs_kms_init(bochs); 671 575 if (ret) 672 - goto err_hw_fini; 576 + return ret; 673 577 674 578 return 0; 675 - 676 - err_hw_fini: 677 - bochs_hw_fini(dev); 678 - return ret; 679 579 } 680 580 681 581 DEFINE_DRM_GEM_FOPS(bochs_fops); ··· 673 603 .date = "20130925", 674 604 .major = 1, 675 605 .minor = 0, 676 - DRM_GEM_VRAM_DRIVER, 606 + DRM_GEM_SHMEM_DRIVER_OPS, 677 607 }; 678 608 679 609 /* ---------------------------------------------------------------------- */ ··· 705 635 706 636 static int bochs_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 707 637 { 638 + struct bochs_device *bochs; 708 639 struct drm_device *dev; 709 - unsigned long fbsize; 710 640 int ret; 711 - 712 - fbsize = 
pci_resource_len(pdev, 0); 713 - if (fbsize < 4 * 1024 * 1024) { 714 - DRM_ERROR("less than 4 MB video memory, ignoring device\n"); 715 - return -ENOMEM; 716 - } 717 641 718 642 ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &bochs_driver); 719 643 if (ret) 720 644 return ret; 721 645 722 - dev = drm_dev_alloc(&bochs_driver, &pdev->dev); 723 - if (IS_ERR(dev)) 646 + bochs = devm_drm_dev_alloc(&pdev->dev, &bochs_driver, struct bochs_device, dev); 647 + if (IS_ERR(bochs)) 724 648 return PTR_ERR(dev); 649 + dev = &bochs->dev; 725 650 726 651 ret = pcim_enable_device(pdev); 727 652 if (ret) ··· 724 659 725 660 pci_set_drvdata(pdev, dev); 726 661 727 - ret = bochs_load(dev); 662 + ret = bochs_load(bochs); 728 663 if (ret) 729 664 goto err_free_dev; 730 665 731 666 ret = drm_dev_register(dev, 0); 732 667 if (ret) 733 - goto err_hw_fini; 668 + goto err_free_dev; 734 669 735 - drm_fbdev_ttm_setup(dev, 32); 670 + drm_fbdev_shmem_setup(dev, 32); 736 671 return ret; 737 672 738 - err_hw_fini: 739 - bochs_hw_fini(dev); 740 673 err_free_dev: 741 674 drm_dev_put(dev); 742 675 return ret; ··· 746 683 747 684 drm_dev_unplug(dev); 748 685 drm_atomic_helper_shutdown(dev); 749 - bochs_hw_fini(dev); 750 686 drm_dev_put(dev); 751 687 } 752 688
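The bochs plane update above iterates the atomic damage clips and copies only the dirty region from the shadow plane into VRAM. Below is a minimal userspace sketch of that per-clip row copy; the struct and helper names are illustrative only, while the real kernel path uses drm_atomic_helper_damage_iter_init() and drm_fb_memcpy():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative clip rectangle, loosely modeled on struct drm_rect. */
struct clip_rect {
	int x1, y1;	/* inclusive */
	int x2, y2;	/* exclusive */
};

/*
 * Copy only the damaged rectangle from the shadow buffer to the
 * scanout buffer, one row at a time -- the same idea the new
 * atomic_update applies per damage clip.
 */
static void copy_damage(uint8_t *dst, const uint8_t *src, size_t pitch,
			size_t cpp, const struct clip_rect *clip)
{
	for (int y = clip->y1; y < clip->y2; y++) {
		size_t off = (size_t)y * pitch + (size_t)clip->x1 * cpp;

		memcpy(dst + off, src + off,
		       (size_t)(clip->x2 - clip->x1) * cpp);
	}
}
```

For full-screen repaints this degenerates to one memcpy per row, but for small cursor-sized updates it avoids touching the bulk of the (slow, uncached) framebuffer BAR.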
+41 -5
drivers/gpu/drm/v3d/v3d_sched.c
··· 135 135 struct v3d_stats *global_stats = &v3d->queue[queue].stats; 136 136 struct v3d_stats *local_stats = &file->stats[queue]; 137 137 u64 now = local_clock(); 138 + unsigned long flags; 138 139 139 - preempt_disable(); 140 + /* 141 + * We only need to disable local interrupts to appease lockdep who 142 + * otherwise would think v3d_job_start_stats vs v3d_stats_update has an 143 + * unsafe in-irq vs no-irq-off usage problem. This is a false positive 144 + * because all the locks are per queue and stats type, and all jobs are 145 + * completely one at a time serialised. More specifically: 146 + * 147 + * 1. Locks for GPU queues are updated from interrupt handlers under a 148 + * spin lock and started here with preemption disabled. 149 + * 150 + * 2. Locks for CPU queues are updated from the worker with preemption 151 + * disabled and equally started here with preemption disabled. 152 + * 153 + * Therefore both are consistent. 154 + * 155 + * 3. Because next job can only be queued after the previous one has 156 + * been signaled, and locks are per queue, there is also no scope for 157 + * the start part to race with the update part. 
158 + */ 159 + if (IS_ENABLED(CONFIG_LOCKDEP)) 160 + local_irq_save(flags); 161 + else 162 + preempt_disable(); 140 163 141 164 write_seqcount_begin(&local_stats->lock); 142 165 local_stats->start_ns = now; ··· 169 146 global_stats->start_ns = now; 170 147 write_seqcount_end(&global_stats->lock); 171 148 172 - preempt_enable(); 149 + if (IS_ENABLED(CONFIG_LOCKDEP)) 150 + local_irq_restore(flags); 151 + else 152 + preempt_enable(); 173 153 } 174 154 175 155 static void ··· 193 167 struct v3d_stats *global_stats = &v3d->queue[queue].stats; 194 168 struct v3d_stats *local_stats = &file->stats[queue]; 195 169 u64 now = local_clock(); 170 + unsigned long flags; 196 171 197 - preempt_disable(); 172 + /* See comment in v3d_job_start_stats() */ 173 + if (IS_ENABLED(CONFIG_LOCKDEP)) 174 + local_irq_save(flags); 175 + else 176 + preempt_disable(); 177 + 198 178 v3d_stats_update(local_stats, now); 199 179 v3d_stats_update(global_stats, now); 200 - preempt_enable(); 180 + 181 + if (IS_ENABLED(CONFIG_LOCKDEP)) 182 + local_irq_restore(flags); 183 + else 184 + preempt_enable(); 201 185 } 202 186 203 187 static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job) ··· 703 667 704 668 /* Unblock schedulers and restart their jobs. */ 705 669 for (q = 0; q < V3D_MAX_QUEUES; q++) { 706 - drm_sched_start(&v3d->queue[q].sched); 670 + drm_sched_start(&v3d->queue[q].sched, 0); 707 671 } 708 672 709 673 mutex_unlock(&v3d->reset_lock);
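The long comment above argues that the v3d stats writers are fully serialised, so the seqcount only has to protect readers from torn values while lockdep is appeased with IRQ-off sections. A simplified, single-threaded userspace analogue of that begin/retry protocol (all names invented; the kernel uses a seqcount_t via write_seqcount_begin()/write_seqcount_end(), with preemption or local interrupts disabled around the write as in the hunk above):

```c
#include <assert.h>
#include <stdint.h>

struct seq_stats {
	unsigned int seq;	/* odd while a writer is mid-update */
	uint64_t start_ns;
	uint64_t enabled_ns;
};

static void stats_update(struct seq_stats *s, uint64_t now)
{
	s->seq++;				/* write_seqcount_begin() */
	s->enabled_ns += now - s->start_ns;
	s->start_ns = now;
	s->seq++;				/* write_seqcount_end() */
}

static uint64_t stats_read_enabled(const struct seq_stats *s)
{
	unsigned int seq;
	uint64_t val;

	do {
		seq = s->seq;			/* read_seqcount_begin() */
		val = s->enabled_ns;
	} while ((seq & 1) || seq != s->seq);	/* retry torn reads */

	return val;
}
```

A reader that observes an odd counter, or a counter that changed across the read, simply retries; in this single-threaded sketch the loop never actually spins.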
+7 -7
drivers/gpu/drm/vc4/tests/vc4_mock.c
··· 155 155 drm_dev_unregister, 156 156 struct drm_device *); 157 157 158 - static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5) 158 + static struct vc4_dev *__mock_device(struct kunit *test, enum vc4_gen gen) 159 159 { 160 160 struct drm_device *drm; 161 - const struct drm_driver *drv = is_vc5 ? &vc5_drm_driver : &vc4_drm_driver; 162 - const struct vc4_mock_desc *desc = is_vc5 ? &vc5_mock : &vc4_mock; 161 + const struct drm_driver *drv = (gen == VC4_GEN_5) ? &vc5_drm_driver : &vc4_drm_driver; 162 + const struct vc4_mock_desc *desc = (gen == VC4_GEN_5) ? &vc5_mock : &vc4_mock; 163 163 struct vc4_dev *vc4; 164 164 struct device *dev; 165 165 int ret; ··· 173 173 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4); 174 174 175 175 vc4->dev = dev; 176 - vc4->is_vc5 = is_vc5; 176 + vc4->gen = gen; 177 177 178 - vc4->hvs = __vc4_hvs_alloc(vc4, NULL); 178 + vc4->hvs = __vc4_hvs_alloc(vc4, NULL, NULL); 179 179 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4->hvs); 180 180 181 181 drm = &vc4->base; ··· 198 198 199 199 struct vc4_dev *vc4_mock_device(struct kunit *test) 200 200 { 201 - return __mock_device(test, false); 201 + return __mock_device(test, VC4_GEN_4); 202 202 } 203 203 204 204 struct vc4_dev *vc5_mock_device(struct kunit *test) 205 205 { 206 - return __mock_device(test, true); 206 + return __mock_device(test, VC4_GEN_5); 207 207 }
+14 -14
drivers/gpu/drm/vc4/vc4_bo.c
··· 251 251 { 252 252 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 253 253 254 - if (WARN_ON_ONCE(vc4->is_vc5)) 254 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 255 255 return; 256 256 257 257 mutex_lock(&vc4->purgeable.lock); ··· 265 265 { 266 266 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 267 267 268 - if (WARN_ON_ONCE(vc4->is_vc5)) 268 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 269 269 return; 270 270 271 271 /* list_del_init() is used here because the caller might release ··· 396 396 struct vc4_dev *vc4 = to_vc4_dev(dev); 397 397 struct vc4_bo *bo; 398 398 399 - if (WARN_ON_ONCE(vc4->is_vc5)) 399 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 400 400 return ERR_PTR(-ENODEV); 401 401 402 402 bo = kzalloc(sizeof(*bo), GFP_KERNEL); ··· 427 427 struct drm_gem_dma_object *dma_obj; 428 428 struct vc4_bo *bo; 429 429 430 - if (WARN_ON_ONCE(vc4->is_vc5)) 430 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 431 431 return ERR_PTR(-ENODEV); 432 432 433 433 if (size == 0) ··· 496 496 struct vc4_bo *bo = NULL; 497 497 int ret; 498 498 499 - if (WARN_ON_ONCE(vc4->is_vc5)) 499 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 500 500 return -ENODEV; 501 501 502 502 ret = vc4_dumb_fixup_args(args); ··· 622 622 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 623 623 int ret; 624 624 625 - if (WARN_ON_ONCE(vc4->is_vc5)) 625 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 626 626 return -ENODEV; 627 627 628 628 /* Fast path: if the BO is already retained by someone, no need to ··· 661 661 { 662 662 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 663 663 664 - if (WARN_ON_ONCE(vc4->is_vc5)) 664 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 665 665 return; 666 666 667 667 /* Fast path: if the BO is still retained by someone, no need to test ··· 783 783 struct vc4_bo *bo = NULL; 784 784 int ret; 785 785 786 - if (WARN_ON_ONCE(vc4->is_vc5)) 786 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 787 787 return -ENODEV; 788 788 789 789 ret = vc4_grab_bin_bo(vc4, vc4file); ··· 813 813 struct 
drm_vc4_mmap_bo *args = data; 814 814 struct drm_gem_object *gem_obj; 815 815 816 - if (WARN_ON_ONCE(vc4->is_vc5)) 816 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 817 817 return -ENODEV; 818 818 819 819 gem_obj = drm_gem_object_lookup(file_priv, args->handle); ··· 839 839 struct vc4_bo *bo = NULL; 840 840 int ret; 841 841 842 - if (WARN_ON_ONCE(vc4->is_vc5)) 842 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 843 843 return -ENODEV; 844 844 845 845 if (args->size == 0) ··· 918 918 struct vc4_bo *bo; 919 919 bool t_format; 920 920 921 - if (WARN_ON_ONCE(vc4->is_vc5)) 921 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 922 922 return -ENODEV; 923 923 924 924 if (args->flags != 0) ··· 964 964 struct drm_gem_object *gem_obj; 965 965 struct vc4_bo *bo; 966 966 967 - if (WARN_ON_ONCE(vc4->is_vc5)) 967 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 968 968 return -ENODEV; 969 969 970 970 if (args->flags != 0 || args->modifier != 0) ··· 1007 1007 int ret; 1008 1008 int i; 1009 1009 1010 - if (WARN_ON_ONCE(vc4->is_vc5)) 1010 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1011 1011 return -ENODEV; 1012 1012 1013 1013 /* Create the initial set of BO labels that the kernel will ··· 1071 1071 struct drm_gem_object *gem_obj; 1072 1072 int ret = 0, label; 1073 1073 1074 - if (WARN_ON_ONCE(vc4->is_vc5)) 1074 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1075 1075 return -ENODEV; 1076 1076 1077 1077 if (!args->len)
+21 -14
drivers/gpu/drm/vc4/vc4_crtc.c
··· 105 105 struct vc4_hvs *hvs = vc4->hvs; 106 106 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 107 107 struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state); 108 + unsigned int channel = vc4_crtc_state->assigned_channel; 108 109 unsigned int cob_size; 109 110 u32 val; 110 111 int fifo_lines; ··· 122 121 * Read vertical scanline which is currently composed for our 123 122 * pixelvalve by the HVS, and also the scaler status. 124 123 */ 125 - val = HVS_READ(SCALER_DISPSTATX(vc4_crtc_state->assigned_channel)); 124 + val = HVS_READ(SCALER_DISPSTATX(channel)); 126 125 127 126 /* Get optional system timestamp after query. */ 128 127 if (etime) ··· 138 137 *vpos /= 2; 139 138 140 139 /* Use hpos to correct for field offset in interlaced mode. */ 141 - if (vc4_hvs_get_fifo_frame_count(hvs, vc4_crtc_state->assigned_channel) % 2) 140 + if (vc4_hvs_get_fifo_frame_count(hvs, channel) % 2) 142 141 *hpos += mode->crtc_htotal / 2; 143 142 } 144 143 145 - cob_size = vc4_crtc_get_cob_allocation(vc4, vc4_crtc_state->assigned_channel); 144 + cob_size = vc4_crtc_get_cob_allocation(vc4, channel); 146 145 /* This is the offset we need for translating hvs -> pv scanout pos. */ 147 146 fifo_lines = cob_size / mode->crtc_hdisplay; 148 147 ··· 264 263 * Removing 1 from the FIFO full level however 265 264 * seems to completely remove that issue. 
266 265 */ 267 - if (!vc4->is_vc5) 266 + if (vc4->gen == VC4_GEN_4) 268 267 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1; 269 268 270 269 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX; ··· 429 428 if (is_dsi) 430 429 CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep); 431 430 432 - if (vc4->is_vc5) 431 + if (vc4->gen == VC4_GEN_5) 433 432 CRTC_WRITE(PV_MUX_CFG, 434 433 VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP, 435 434 PV_MUX_CFG_RGB_PIXEL_MUX_MODE)); ··· 736 735 if (conn_state->crtc != crtc) 737 736 continue; 738 737 739 - vc4_state->margins.left = conn_state->tv.margins.left; 740 - vc4_state->margins.right = conn_state->tv.margins.right; 741 - vc4_state->margins.top = conn_state->tv.margins.top; 742 - vc4_state->margins.bottom = conn_state->tv.margins.bottom; 738 + if (memcmp(&vc4_state->margins, &conn_state->tv.margins, 739 + sizeof(vc4_state->margins))) { 740 + memcpy(&vc4_state->margins, &conn_state->tv.margins, 741 + sizeof(vc4_state->margins)); 742 + 743 + /* 744 + * Need to force the dlist entries for all planes to be 745 + * updated so that the dest rectangles are changed. 
746 + */ 747 + crtc_state->zpos_changed = true; 748 + } 743 749 break; 744 750 } 745 751 ··· 921 913 struct dma_fence *fence; 922 914 int ret; 923 915 924 - if (!vc4->is_vc5) { 916 + if (vc4->gen == VC4_GEN_4) { 925 917 struct vc4_bo *bo = to_vc4_bo(&dma_bo->base); 926 918 927 919 return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno, ··· 1008 1000 struct vc4_bo *bo = to_vc4_bo(&dma_bo->base); 1009 1001 int ret; 1010 1002 1011 - if (WARN_ON_ONCE(vc4->is_vc5)) 1003 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1012 1004 return -ENODEV; 1013 1005 1014 1006 /* ··· 1051 1043 struct drm_device *dev = crtc->dev; 1052 1044 struct vc4_dev *vc4 = to_vc4_dev(dev); 1053 1045 1054 - if (vc4->is_vc5) 1046 + if (vc4->gen > VC4_GEN_4) 1055 1047 return vc5_async_page_flip(crtc, fb, event, flags); 1056 1048 else 1057 1049 return vc4_async_page_flip(crtc, fb, event, flags); ··· 1346 1338 1347 1339 drm_crtc_helper_add(crtc, crtc_helper_funcs); 1348 1340 1349 - if (!vc4->is_vc5) { 1341 + if (vc4->gen == VC4_GEN_4) { 1350 1342 drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r)); 1351 - 1352 1343 drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size); 1353 1344 1354 1345 /* We support CTM, but only for one CRTC at a time. It's therefore
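The margins hunk above replaces four field-by-field assignments with a memcmp()-guarded memcpy(), so the plane dlists are only forced to regenerate when the margins actually changed. A sketch of that change-detection pattern (the struct here is illustrative; being all unsigned int it has no padding, which is what makes a whole-struct memcmp() well defined):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative margins struct, loosely modeled on
 * struct drm_connector_tv_margins. */
struct tv_margins {
	unsigned int left, right, top, bottom;
};

/*
 * Adopt the new margins only when they differ, and tell the caller
 * whether dependent state (the plane dlists in vc4) must be rebuilt.
 */
static bool update_margins(struct tv_margins *cur,
			   const struct tv_margins *next)
{
	if (!memcmp(cur, next, sizeof(*cur)))
		return false;	/* unchanged: skip the dlist rewrite */

	memcpy(cur, next, sizeof(*cur));
	return true;		/* changed: e.g. set zpos_changed */
}
```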
+13 -9
drivers/gpu/drm/vc4/vc4_drv.c
··· 98 98 if (args->pad != 0) 99 99 return -EINVAL; 100 100 101 - if (WARN_ON_ONCE(vc4->is_vc5)) 101 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 102 102 return -ENODEV; 103 103 104 104 if (!vc4->v3d) ··· 147 147 struct vc4_dev *vc4 = to_vc4_dev(dev); 148 148 struct vc4_file *vc4file; 149 149 150 - if (WARN_ON_ONCE(vc4->is_vc5)) 150 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 151 151 return -ENODEV; 152 152 153 153 vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL); ··· 165 165 struct vc4_dev *vc4 = to_vc4_dev(dev); 166 166 struct vc4_file *vc4file = file->driver_priv; 167 167 168 - if (WARN_ON_ONCE(vc4->is_vc5)) 168 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 169 169 return; 170 170 171 171 if (vc4file->bin_bo_used) ··· 291 291 struct vc4_dev *vc4; 292 292 struct device_node *node; 293 293 struct drm_crtc *crtc; 294 - bool is_vc5; 294 + enum vc4_gen gen; 295 295 int ret = 0; 296 296 297 297 dev->coherent_dma_mask = DMA_BIT_MASK(32); 298 298 299 - is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"); 300 - if (is_vc5) 299 + if (of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5")) 300 + gen = VC4_GEN_5; 301 + else 302 + gen = VC4_GEN_4; 303 + 304 + if (gen > VC4_GEN_4) 301 305 driver = &vc5_drm_driver; 302 306 else 303 307 driver = &vc4_drm_driver; ··· 319 315 vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base); 320 316 if (IS_ERR(vc4)) 321 317 return PTR_ERR(vc4); 322 - vc4->is_vc5 = is_vc5; 318 + vc4->gen = gen; 323 319 vc4->dev = dev; 324 320 325 321 drm = &vc4->base; 326 322 platform_set_drvdata(pdev, drm); 327 323 328 - if (!is_vc5) { 324 + if (gen == VC4_GEN_4) { 329 325 ret = drmm_mutex_init(drm, &vc4->bin_bo_lock); 330 326 if (ret) 331 327 goto err; ··· 339 335 if (ret) 340 336 goto err; 341 337 342 - if (!is_vc5) { 338 + if (gen == VC4_GEN_4) { 343 339 ret = vc4_gem_init(drm); 344 340 if (ret) 345 341 goto err;
+14 -15
drivers/gpu/drm/vc4/vc4_drv.h
··· 15 15 #include <drm/drm_debugfs.h> 16 16 #include <drm/drm_device.h> 17 17 #include <drm/drm_encoder.h> 18 + #include <drm/drm_fourcc.h> 18 19 #include <drm/drm_gem_dma_helper.h> 19 20 #include <drm/drm_managed.h> 20 21 #include <drm/drm_mm.h> ··· 81 80 u64 counters[] __counted_by(ncounters); 82 81 }; 83 82 83 + enum vc4_gen { 84 + VC4_GEN_4, 85 + VC4_GEN_5, 86 + }; 87 + 84 88 struct vc4_dev { 85 89 struct drm_device base; 86 90 struct device *dev; 87 91 88 - bool is_vc5; 92 + enum vc4_gen gen; 89 93 90 94 unsigned int irq; 91 95 ··· 321 315 struct platform_device *pdev; 322 316 void __iomem *regs; 323 317 u32 __iomem *dlist; 318 + unsigned int dlist_mem_size; 324 319 325 320 struct clk *core_clk; 326 321 ··· 401 394 */ 402 395 u32 pos0_offset; 403 396 u32 pos2_offset; 404 - u32 ptr0_offset; 397 + u32 ptr0_offset[DRM_FORMAT_MAX_PLANES]; 405 398 u32 lbm_offset; 406 399 407 400 /* Offset where the plane's dlist was last stored in the ··· 411 404 412 405 /* Clipped coordinates of the plane on the display. */ 413 406 int crtc_x, crtc_y, crtc_w, crtc_h; 414 - /* Clipped area being scanned from in the FB. */ 407 + /* Clipped area being scanned from in the FB in u16.16 format */ 415 408 u32 src_x, src_y; 416 409 417 410 u32 src_w[2], src_h[2]; ··· 420 413 enum vc4_scaling_mode x_scaling[2], y_scaling[2]; 421 414 bool is_unity; 422 415 bool is_yuv; 423 - 424 - /* Offset to start scanning out from the start of the plane's 425 - * BO. 426 - */ 427 - u32 offsets[3]; 428 416 429 417 /* Our allocation in LBM for temporary storage during scaling. 
*/ 430 418 struct drm_mm_node lbm; ··· 600 598 bool txp_armed; 601 599 unsigned int assigned_channel; 602 600 603 - struct { 604 - unsigned int left; 605 - unsigned int right; 606 - unsigned int top; 607 - unsigned int bottom; 608 - } margins; 601 + struct drm_connector_tv_margins margins; 609 602 610 603 unsigned long hvs_load; 611 604 ··· 999 1002 1000 1003 /* vc4_hvs.c */ 1001 1004 extern struct platform_driver vc4_hvs_driver; 1002 - struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pdev); 1005 + struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, 1006 + void __iomem *regs, 1007 + struct platform_device *pdev); 1003 1008 void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int output); 1004 1009 int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output); 1005 1010 u8 vc4_hvs_get_fifo_frame_count(struct vc4_hvs *hvs, unsigned int fifo);
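vc4_drv.h now models the hardware generation as an ordered enum instead of a bool is_vc5, which is why the call sites throughout this series compare with `gen == VC4_GEN_4` or `gen > VC4_GEN_4`. A small sketch of why the ordered enum scales better than a boolean flag (the helper names are invented for illustration; the driver open-codes these comparisons at each call site):

```c
#include <assert.h>
#include <stdbool.h>

/* Ordered generation enum, as introduced in vc4_drv.h. */
enum vc4_gen {
	VC4_GEN_4,
	VC4_GEN_5,
};

/* The V3D GEM ioctls exist only on the original VC4 generation. */
static bool has_v3d_gem(enum vc4_gen gen)
{
	return gen == VC4_GEN_4;
}

/*
 * "Anything newer than VC4" checks stay correct if a VC4_GEN_6 is
 * ever appended to the enum -- a bool is_vc5 could not express that
 * without auditing every caller.
 */
static bool is_post_vc4(enum vc4_gen gen)
{
	return gen > VC4_GEN_4;
}
```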
+12 -12
drivers/gpu/drm/vc4/vc4_gem.c
··· 76 76 u32 i; 77 77 int ret = 0; 78 78 79 - if (WARN_ON_ONCE(vc4->is_vc5)) 79 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 80 80 return -ENODEV; 81 81 82 82 if (!vc4->v3d) { ··· 389 389 unsigned long timeout_expire; 390 390 DEFINE_WAIT(wait); 391 391 392 - if (WARN_ON_ONCE(vc4->is_vc5)) 392 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 393 393 return -ENODEV; 394 394 395 395 if (vc4->finished_seqno >= seqno) ··· 474 474 struct vc4_dev *vc4 = to_vc4_dev(dev); 475 475 struct vc4_exec_info *exec; 476 476 477 - if (WARN_ON_ONCE(vc4->is_vc5)) 477 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 478 478 return; 479 479 480 480 again: ··· 522 522 if (!exec) 523 523 return; 524 524 525 - if (WARN_ON_ONCE(vc4->is_vc5)) 525 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 526 526 return; 527 527 528 528 /* A previous RCL may have written to one of our textures, and ··· 543 543 struct vc4_dev *vc4 = to_vc4_dev(dev); 544 544 bool was_empty = list_empty(&vc4->render_job_list); 545 545 546 - if (WARN_ON_ONCE(vc4->is_vc5)) 546 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 547 547 return; 548 548 549 549 list_move_tail(&exec->head, &vc4->render_job_list); ··· 970 970 unsigned long irqflags; 971 971 struct vc4_seqno_cb *cb, *cb_temp; 972 972 973 - if (WARN_ON_ONCE(vc4->is_vc5)) 973 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 974 974 return; 975 975 976 976 spin_lock_irqsave(&vc4->job_lock, irqflags); ··· 1009 1009 struct vc4_dev *vc4 = to_vc4_dev(dev); 1010 1010 unsigned long irqflags; 1011 1011 1012 - if (WARN_ON_ONCE(vc4->is_vc5)) 1012 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1013 1013 return -ENODEV; 1014 1014 1015 1015 cb->func = func; ··· 1065 1065 struct vc4_dev *vc4 = to_vc4_dev(dev); 1066 1066 struct drm_vc4_wait_seqno *args = data; 1067 1067 1068 - if (WARN_ON_ONCE(vc4->is_vc5)) 1068 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1069 1069 return -ENODEV; 1070 1070 1071 1071 return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno, ··· 1082 1082 struct drm_gem_object *gem_obj; 1083 1083 struct vc4_bo 
*bo; 1084 1084 1085 - if (WARN_ON_ONCE(vc4->is_vc5)) 1085 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1086 1086 return -ENODEV; 1087 1087 1088 1088 if (args->pad != 0) ··· 1131 1131 args->shader_rec_size, 1132 1132 args->bo_handle_count); 1133 1133 1134 - if (WARN_ON_ONCE(vc4->is_vc5)) 1134 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1135 1135 return -ENODEV; 1136 1136 1137 1137 if (!vc4->v3d) { ··· 1267 1267 struct vc4_dev *vc4 = to_vc4_dev(dev); 1268 1268 int ret; 1269 1269 1270 - if (WARN_ON_ONCE(vc4->is_vc5)) 1270 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1271 1271 return -ENODEV; 1272 1272 1273 1273 vc4->dma_fence_context = dma_fence_context_alloc(1); ··· 1326 1326 struct vc4_bo *bo; 1327 1327 int ret; 1328 1328 1329 - if (WARN_ON_ONCE(vc4->is_vc5)) 1329 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 1330 1330 return -ENODEV; 1331 1331 1332 1332 switch (args->madv) {
+19 -6
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 147 147 if (!drm_dev_enter(drm, &idx)) 148 148 return -ENODEV; 149 149 150 + WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev)); 151 + 150 152 drm_print_regset32(&p, &vc4_hdmi->hdmi_regset); 151 153 drm_print_regset32(&p, &vc4_hdmi->hd_regset); 152 154 drm_print_regset32(&p, &vc4_hdmi->cec_regset); ··· 157 155 drm_print_regset32(&p, &vc4_hdmi->phy_regset); 158 156 drm_print_regset32(&p, &vc4_hdmi->ram_regset); 159 157 drm_print_regset32(&p, &vc4_hdmi->rm_regset); 158 + 159 + pm_runtime_put(&vc4_hdmi->pdev->dev); 160 160 161 161 drm_dev_exit(idx); 162 162 ··· 1598 1594 VC4_HD_VID_CTL_CLRRGB | 1599 1595 VC4_HD_VID_CTL_UNDERFLOW_ENABLE | 1600 1596 VC4_HD_VID_CTL_FRAME_COUNTER_RESET | 1597 + VC4_HD_VID_CTL_BLANK_INSERT_EN | 1601 1598 (vsync_pos ? 0 : VC4_HD_VID_CTL_VSYNC_LOW) | 1602 1599 (hsync_pos ? 0 : VC4_HD_VID_CTL_HSYNC_LOW)); 1603 1600 ··· 1925 1920 } 1926 1921 1927 1922 if (!vc4_hdmi_audio_can_stream(vc4_hdmi)) { 1928 - ret = -ENODEV; 1923 + ret = -ENOTSUPP; 1929 1924 goto out_dev_exit; 1930 1925 } 1931 1926 ··· 2052 2047 struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev); 2053 2048 struct drm_device *drm = vc4_hdmi->connector.dev; 2054 2049 struct drm_connector *connector = &vc4_hdmi->connector; 2050 + struct vc4_dev *vc4 = to_vc4_dev(drm); 2055 2051 unsigned int sample_rate = params->sample_rate; 2056 2052 unsigned int channels = params->channels; 2057 2053 unsigned long flags; ··· 2110 2104 VC4_HDMI_AUDIO_PACKET_CEA_MASK); 2111 2105 2112 2106 /* Set the MAI threshold */ 2113 - HDMI_WRITE(HDMI_MAI_THR, 2114 - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) | 2115 - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) | 2116 - VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) | 2117 - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW)); 2107 + if (vc4->gen >= VC4_GEN_5) 2108 + HDMI_WRITE(HDMI_MAI_THR, 2109 + VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) | 2110 + VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) | 2111 + VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQHIGH) | 2112 + 
VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQLOW)); 2113 + else 2114 + HDMI_WRITE(HDMI_MAI_THR, 2115 + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICHIGH) | 2116 + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICLOW) | 2117 + VC4_SET_FIELD(0x6, VC4_HD_MAI_THR_DREQHIGH) | 2118 + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_DREQLOW)); 2118 2119 2119 2120 HDMI_WRITE(HDMI_MAI_CONFIG, 2120 2121 VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
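The vc4_hdmi.c hunk above makes the MAI FIFO thresholds generation-dependent: BCM2711 (VC4_GEN_5) gets deeper PANIC/DREQ levels than the original VC4. A minimal sketch of that dispatch, with the register packing elided (the kernel packs these via VC4_SET_FIELD() into HDMI_MAI_THR; the `struct mai_thr` holder below is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

enum vc4_gen { VC4_GEN_4, VC4_GEN_5 };

/* Hypothetical unpacked view of the HDMI_MAI_THR fields. */
struct mai_thr {
	uint32_t panic_high, panic_low, dreq_high, dreq_low;
};

/* Threshold values taken from the diff above; gen 5 uses deeper
 * FIFO thresholds than the original VC4. */
static struct mai_thr vc4_hdmi_mai_thr(enum vc4_gen gen)
{
	if (gen >= VC4_GEN_5)
		return (struct mai_thr){ 0x10, 0x10, 0x1c, 0x1c };
	return (struct mai_thr){ 0x08, 0x08, 0x06, 0x08 };
}
```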
+4 -1
drivers/gpu/drm/vc4/vc4_hdmi_regs.h
··· 498 498 499 499 field = &variant->registers[reg]; 500 500 base = __vc4_hdmi_get_field_base(hdmi, field->reg); 501 - if (!base) 501 + if (!base) { 502 + dev_warn(&hdmi->pdev->dev, 503 + "Unknown register ID %u\n", reg); 502 504 return; 505 + } 503 506 504 507 writel(value, base + field->offset); 505 508 }
+218 -141
drivers/gpu/drm/vc4/vc4_hvs.c
··· 33 33 #include "vc4_drv.h" 34 34 #include "vc4_regs.h" 35 35 36 - static const struct debugfs_reg32 hvs_regs[] = { 36 + static const struct debugfs_reg32 vc4_hvs_regs[] = { 37 37 VC4_REG32(SCALER_DISPCTRL), 38 38 VC4_REG32(SCALER_DISPSTAT), 39 39 VC4_REG32(SCALER_DISPID), ··· 110 110 struct vc4_dev *vc4 = to_vc4_dev(dev); 111 111 struct vc4_hvs *hvs = vc4->hvs; 112 112 struct drm_printer p = drm_seq_file_printer(m); 113 - unsigned int next_entry_start = 0; 113 + unsigned int dlist_mem_size = hvs->dlist_mem_size; 114 + unsigned int next_entry_start; 114 115 unsigned int i, j; 115 116 u32 dlist_word, dispstat; 116 117 ··· 125 124 } 126 125 127 126 drm_printf(&p, "HVS chan %u:\n", i); 127 + next_entry_start = 0; 128 128 129 - for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) { 129 + for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) { 130 130 dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j); 131 131 drm_printf(&p, "dlist: %02d: 0x%08x\n", j, 132 132 dlist_word); ··· 224 222 if (!drm_dev_enter(drm, &idx)) 225 223 return; 226 224 225 + if (hvs->vc4->gen == VC4_GEN_4) 226 + return; 227 + 227 228 /* The LUT memory is laid out with each HVS channel in order, 228 229 * each of which takes 256 writes for R, 256 for G, then 256 229 230 * for B. ··· 296 291 u32 reg; 297 292 int ret; 298 293 299 - if (!vc4->is_vc5) 294 + switch (vc4->gen) { 295 + case VC4_GEN_4: 300 296 return output; 301 297 302 - /* 303 - * NOTE: We should probably use drm_dev_enter()/drm_dev_exit() 304 - * here, but this function is only used during the DRM device 305 - * initialization, so we should be fine. 306 - */ 298 + case VC4_GEN_5: 299 + /* 300 + * NOTE: We should probably use 301 + * drm_dev_enter()/drm_dev_exit() here, but this 302 + * function is only used during the DRM device 303 + * initialization, so we should be fine. 
304 + */ 307 305 308 - switch (output) { 309 - case 0: 310 - return 0; 306 + switch (output) { 307 + case 0: 308 + return 0; 311 309 312 - case 1: 313 - return 1; 310 + case 1: 311 + return 1; 314 312 315 - case 2: 316 - reg = HVS_READ(SCALER_DISPECTRL); 317 - ret = FIELD_GET(SCALER_DISPECTRL_DSP2_MUX_MASK, reg); 318 - if (ret == 0) 319 - return 2; 313 + case 2: 314 + reg = HVS_READ(SCALER_DISPECTRL); 315 + ret = FIELD_GET(SCALER_DISPECTRL_DSP2_MUX_MASK, reg); 316 + if (ret == 0) 317 + return 2; 320 318 321 - return 0; 319 + return 0; 322 320 323 - case 3: 324 - reg = HVS_READ(SCALER_DISPCTRL); 325 - ret = FIELD_GET(SCALER_DISPCTRL_DSP3_MUX_MASK, reg); 326 - if (ret == 3) 321 + case 3: 322 + reg = HVS_READ(SCALER_DISPCTRL); 323 + ret = FIELD_GET(SCALER_DISPCTRL_DSP3_MUX_MASK, reg); 324 + if (ret == 3) 325 + return -EPIPE; 326 + 327 + return ret; 328 + 329 + case 4: 330 + reg = HVS_READ(SCALER_DISPEOLN); 331 + ret = FIELD_GET(SCALER_DISPEOLN_DSP4_MUX_MASK, reg); 332 + if (ret == 3) 333 + return -EPIPE; 334 + 335 + return ret; 336 + 337 + case 5: 338 + reg = HVS_READ(SCALER_DISPDITHER); 339 + ret = FIELD_GET(SCALER_DISPDITHER_DSP5_MUX_MASK, reg); 340 + if (ret == 3) 341 + return -EPIPE; 342 + 343 + return ret; 344 + 345 + default: 327 346 return -EPIPE; 328 - 329 - return ret; 330 - 331 - case 4: 332 - reg = HVS_READ(SCALER_DISPEOLN); 333 - ret = FIELD_GET(SCALER_DISPEOLN_DSP4_MUX_MASK, reg); 334 - if (ret == 3) 335 - return -EPIPE; 336 - 337 - return ret; 338 - 339 - case 5: 340 - reg = HVS_READ(SCALER_DISPDITHER); 341 - ret = FIELD_GET(SCALER_DISPDITHER_DSP5_MUX_MASK, reg); 342 - if (ret == 3) 343 - return -EPIPE; 344 - 345 - return ret; 347 + } 346 348 347 349 default: 348 350 return -EPIPE; ··· 384 372 dispctrl = SCALER_DISPCTRLX_ENABLE; 385 373 dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan)); 386 374 387 - if (!vc4->is_vc5) { 375 + if (vc4->gen == VC4_GEN_4) { 388 376 dispctrl |= VC4_SET_FIELD(mode->hdisplay, 389 377 SCALER_DISPCTRLX_WIDTH) | 390 378 
VC4_SET_FIELD(mode->vdisplay, ··· 406 394 dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE; 407 395 408 396 HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx | 409 - ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) | 397 + ((vc4->gen == VC4_GEN_4) ? SCALER_DISPBKGND_GAMMA : 0) | 410 398 (interlace ? SCALER_DISPBKGND_INTERLACE : 0)); 411 399 412 400 /* Reload the LUT, since the SRAMs would have been disabled if ··· 427 415 if (!drm_dev_enter(drm, &idx)) 428 416 return; 429 417 430 - if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE) 418 + if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)) 431 419 goto out; 432 420 433 - HVS_WRITE(SCALER_DISPCTRLX(chan), 434 - HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET); 435 - HVS_WRITE(SCALER_DISPCTRLX(chan), 436 - HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE); 421 + HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET); 422 + HVS_WRITE(SCALER_DISPCTRLX(chan), 0); 437 423 438 424 /* Once we leave, the scaler should be disabled and its fifo empty. */ 439 425 WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET); ··· 466 456 if (hweight32(crtc_state->connector_mask) > 1) 467 457 return -EINVAL; 468 458 469 - drm_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) 470 - dlist_count += vc4_plane_dlist_size(plane_state); 459 + drm_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) { 460 + u32 plane_dlist_count = vc4_plane_dlist_size(plane_state); 461 + 462 + drm_dbg_driver(dev, "[CRTC:%d:%s] Found [PLANE:%d:%s] with DLIST size: %u\n", 463 + crtc->base.id, crtc->name, 464 + plane->base.id, plane->name, 465 + plane_dlist_count); 466 + 467 + dlist_count += plane_dlist_count; 468 + } 471 469 472 470 dlist_count++; /* Account for SCALER_CTL0_END. 
*/ 473 471 472 + drm_dbg_driver(dev, "[CRTC:%d:%s] Allocating DLIST block with size: %u\n", 473 + crtc->base.id, crtc->name, dlist_count); 474 474 spin_lock_irqsave(&vc4->hvs->mm_lock, flags); 475 475 ret = drm_mm_insert_node(&vc4->hvs->dlist_mm, &vc4_state->mm, 476 476 dlist_count); 477 477 spin_unlock_irqrestore(&vc4->hvs->mm_lock, flags); 478 - if (ret) 478 + if (ret) { 479 + drm_err(dev, "Failed to allocate DLIST entry: %d\n", ret); 479 480 return ret; 481 + } 480 482 481 483 return 0; 482 484 } ··· 690 668 691 669 void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel) 692 670 { 693 - struct drm_device *drm = &hvs->vc4->base; 671 + struct vc4_dev *vc4 = hvs->vc4; 672 + struct drm_device *drm = &vc4->base; 694 673 u32 dispctrl; 695 674 int idx; 696 675 ··· 699 676 return; 700 677 701 678 dispctrl = HVS_READ(SCALER_DISPCTRL); 702 - dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : 703 - SCALER_DISPCTRL_DSPEISLUR(channel)); 679 + dispctrl &= ~((vc4->gen == VC4_GEN_5) ? 680 + SCALER5_DISPCTRL_DSPEISLUR(channel) : 681 + SCALER_DISPCTRL_DSPEISLUR(channel)); 704 682 705 683 HVS_WRITE(SCALER_DISPCTRL, dispctrl); 706 684 ··· 710 686 711 687 void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel) 712 688 { 713 - struct drm_device *drm = &hvs->vc4->base; 689 + struct vc4_dev *vc4 = hvs->vc4; 690 + struct drm_device *drm = &vc4->base; 714 691 u32 dispctrl; 715 692 int idx; 716 693 ··· 719 694 return; 720 695 721 696 dispctrl = HVS_READ(SCALER_DISPCTRL); 722 - dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : 723 - SCALER_DISPCTRL_DSPEISLUR(channel)); 697 + dispctrl |= ((vc4->gen == VC4_GEN_5) ? 
698 + SCALER5_DISPCTRL_DSPEISLUR(channel) : 699 + SCALER_DISPCTRL_DSPEISLUR(channel)); 724 700 725 701 HVS_WRITE(SCALER_DISPSTAT, 726 702 SCALER_DISPSTAT_EUFLOW(channel)); ··· 764 738 control = HVS_READ(SCALER_DISPCTRL); 765 739 766 740 for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) { 767 - dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : 768 - SCALER_DISPCTRL_DSPEISLUR(channel); 741 + dspeislur = (vc4->gen == VC4_GEN_5) ? 742 + SCALER5_DISPCTRL_DSPEISLUR(channel) : 743 + SCALER_DISPCTRL_DSPEISLUR(channel); 744 + 769 745 /* Interrupt masking is not always honored, so check it here. */ 770 746 if (status & SCALER_DISPSTAT_EUFLOW(channel) && 771 747 control & dspeislur) { ··· 795 767 if (!vc4->hvs) 796 768 return -ENODEV; 797 769 798 - if (!vc4->is_vc5) 770 + if (vc4->gen == VC4_GEN_4) 799 771 debugfs_create_bool("hvs_load_tracker", S_IRUGO | S_IWUSR, 800 772 minor->debugfs_root, 801 773 &vc4->load_tracker_enabled); ··· 809 781 return 0; 810 782 } 811 783 812 - struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pdev) 784 + struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, 785 + void __iomem *regs, 786 + struct platform_device *pdev) 813 787 { 814 788 struct drm_device *drm = &vc4->base; 815 789 struct vc4_hvs *hvs; ··· 821 791 return ERR_PTR(-ENOMEM); 822 792 823 793 hvs->vc4 = vc4; 794 + hvs->regs = regs; 824 795 hvs->pdev = pdev; 825 796 826 797 spin_lock_init(&hvs->mm_lock); ··· 831 800 * our 16K), since we don't want to scramble the screen when 832 801 * transitioning from the firmware's boot setup to runtime. 833 802 */ 803 + hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END; 834 804 drm_mm_init(&hvs->dlist_mm, 835 805 HVS_BOOTLOADER_DLIST_END, 836 - (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END); 806 + hvs->dlist_mem_size); 837 807 838 808 /* Set up the HVS LBM memory manager. 
We could have some more 839 809 * complicated data structure that allowed reuse of LBM areas 840 810 * between planes when they don't overlap on the screen, but 841 811 * for now we just allocate globally. 842 812 */ 843 - if (!vc4->is_vc5) 813 + if (vc4->gen == VC4_GEN_4) 844 814 /* 48k words of 2x12-bit pixels */ 845 815 drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024); 846 816 else ··· 853 821 return hvs; 854 822 } 855 823 856 - static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) 824 + static int vc4_hvs_hw_init(struct vc4_hvs *hvs) 857 825 { 858 - struct platform_device *pdev = to_platform_device(dev); 859 - struct drm_device *drm = dev_get_drvdata(master); 860 - struct vc4_dev *vc4 = to_vc4_dev(drm); 861 - struct vc4_hvs *hvs = NULL; 862 - int ret; 863 - u32 dispctrl; 864 - u32 reg, top; 826 + struct vc4_dev *vc4 = hvs->vc4; 827 + u32 dispctrl, reg; 865 828 866 - hvs = __vc4_hvs_alloc(vc4, NULL); 867 - if (IS_ERR(hvs)) 868 - return PTR_ERR(hvs); 869 - 870 - hvs->regs = vc4_ioremap_regs(pdev, 0); 871 - if (IS_ERR(hvs->regs)) 872 - return PTR_ERR(hvs->regs); 873 - 874 - hvs->regset.base = hvs->regs; 875 - hvs->regset.regs = hvs_regs; 876 - hvs->regset.nregs = ARRAY_SIZE(hvs_regs); 877 - 878 - if (vc4->is_vc5) { 879 - struct rpi_firmware *firmware; 880 - struct device_node *node; 881 - unsigned int max_rate; 882 - 883 - node = rpi_firmware_find_node(); 884 - if (!node) 885 - return -EINVAL; 886 - 887 - firmware = rpi_firmware_get(node); 888 - of_node_put(node); 889 - if (!firmware) 890 - return -EPROBE_DEFER; 891 - 892 - hvs->core_clk = devm_clk_get(&pdev->dev, NULL); 893 - if (IS_ERR(hvs->core_clk)) { 894 - dev_err(&pdev->dev, "Couldn't get core clock\n"); 895 - return PTR_ERR(hvs->core_clk); 896 - } 897 - 898 - max_rate = rpi_firmware_clk_get_max_rate(firmware, 899 - RPI_FIRMWARE_CORE_CLK_ID); 900 - rpi_firmware_put(firmware); 901 - if (max_rate >= 550000000) 902 - hvs->vc5_hdmi_enable_hdmi_20 = true; 903 - 904 - if (max_rate >= 600000000) 905 
- hvs->vc5_hdmi_enable_4096by2160 = true; 906 - 907 - hvs->max_core_rate = max_rate; 908 - 909 - ret = clk_prepare_enable(hvs->core_clk); 910 - if (ret) { 911 - dev_err(&pdev->dev, "Couldn't enable the core clock\n"); 912 - return ret; 913 - } 914 - } 915 - 916 - if (!vc4->is_vc5) 917 - hvs->dlist = hvs->regs + SCALER_DLIST_START; 918 - else 919 - hvs->dlist = hvs->regs + SCALER5_DLIST_START; 920 - 921 - /* Upload filter kernels. We only have the one for now, so we 922 - * keep it around for the lifetime of the driver. 923 - */ 924 - ret = vc4_hvs_upload_linear_kernel(hvs, 925 - &hvs->mitchell_netravali_filter, 926 - mitchell_netravali_1_3_1_3_kernel); 927 - if (ret) 928 - return ret; 829 + dispctrl = HVS_READ(SCALER_DISPCTRL); 830 + dispctrl |= SCALER_DISPCTRL_ENABLE; 831 + HVS_WRITE(SCALER_DISPCTRL, dispctrl); 929 832 930 833 reg = HVS_READ(SCALER_DISPECTRL); 931 834 reg &= ~SCALER_DISPECTRL_DSP2_MUX_MASK; ··· 883 916 reg | VC4_SET_FIELD(3, SCALER_DISPDITHER_DSP5_MUX)); 884 917 885 918 dispctrl = HVS_READ(SCALER_DISPCTRL); 886 - 887 - dispctrl |= SCALER_DISPCTRL_ENABLE; 888 919 dispctrl |= SCALER_DISPCTRL_DISPEIRQ(0) | 889 920 SCALER_DISPCTRL_DISPEIRQ(1) | 890 921 SCALER_DISPCTRL_DISPEIRQ(2); 891 922 892 - if (!vc4->is_vc5) 923 + if (vc4->gen == VC4_GEN_4) 893 924 dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ | 894 925 SCALER_DISPCTRL_SLVWREIRQ | 895 926 SCALER_DISPCTRL_SLVRDEIRQ | ··· 927 962 dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1); 928 963 dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2); 929 964 965 + /* Set AXI panic mode. 966 + * VC4 panics when < 2 lines in FIFO. 967 + * VC5 panics when less than 1 line in the FIFO. 
968 + */ 969 + dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK | 970 + SCALER_DISPCTRL_PANIC1_MASK | 971 + SCALER_DISPCTRL_PANIC2_MASK); 972 + dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0); 973 + dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1); 974 + dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2); 975 + 930 976 HVS_WRITE(SCALER_DISPCTRL, dispctrl); 931 977 932 - /* Recompute Composite Output Buffer (COB) allocations for the displays 978 + return 0; 979 + } 980 + 981 + static int vc4_hvs_cob_init(struct vc4_hvs *hvs) 982 + { 983 + struct vc4_dev *vc4 = hvs->vc4; 984 + u32 reg, top; 985 + 986 + /* 987 + * Recompute Composite Output Buffer (COB) allocations for the 988 + * displays 933 989 */ 934 - if (!vc4->is_vc5) { 990 + switch (vc4->gen) { 991 + case VC4_GEN_4: 935 992 /* The COB is 20736 pixels, or just over 10 lines at 2048 wide. 936 993 * The bottom 2048 pixels are full 32bpp RGBA (intended for the 937 994 * TXP composing RGBA to memory), whilst the remainder are only ··· 977 990 top = VC4_COB_SIZE; 978 991 reg |= (top - 1) << 16; 979 992 HVS_WRITE(SCALER_DISPBASE0, reg); 980 - } else { 993 + break; 994 + 995 + case VC4_GEN_5: 981 996 /* The COB is 44416 pixels, or 10.8 lines at 4096 wide. 
982 997 * The bottom 4096 pixels are full RGBA (intended for the TXP 983 998 * composing RGBA to memory), whilst the remainder are only ··· 1005 1016 top = VC5_COB_SIZE; 1006 1017 reg |= top << 16; 1007 1018 HVS_WRITE(SCALER_DISPBASE0, reg); 1019 + break; 1020 + 1021 + default: 1022 + return -EINVAL; 1008 1023 } 1024 + 1025 + return 0; 1026 + } 1027 + 1028 + static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) 1029 + { 1030 + struct platform_device *pdev = to_platform_device(dev); 1031 + struct drm_device *drm = dev_get_drvdata(master); 1032 + struct vc4_dev *vc4 = to_vc4_dev(drm); 1033 + struct vc4_hvs *hvs = NULL; 1034 + void __iomem *regs; 1035 + int ret; 1036 + 1037 + regs = vc4_ioremap_regs(pdev, 0); 1038 + if (IS_ERR(regs)) 1039 + return PTR_ERR(regs); 1040 + 1041 + hvs = __vc4_hvs_alloc(vc4, regs, pdev); 1042 + if (IS_ERR(hvs)) 1043 + return PTR_ERR(hvs); 1044 + 1045 + hvs->regset.base = hvs->regs; 1046 + hvs->regset.regs = vc4_hvs_regs; 1047 + hvs->regset.nregs = ARRAY_SIZE(vc4_hvs_regs); 1048 + 1049 + if (vc4->gen == VC4_GEN_5) { 1050 + struct rpi_firmware *firmware; 1051 + struct device_node *node; 1052 + unsigned int max_rate; 1053 + 1054 + node = rpi_firmware_find_node(); 1055 + if (!node) 1056 + return -EINVAL; 1057 + 1058 + firmware = rpi_firmware_get(node); 1059 + of_node_put(node); 1060 + if (!firmware) 1061 + return -EPROBE_DEFER; 1062 + 1063 + hvs->core_clk = devm_clk_get(&pdev->dev, NULL); 1064 + if (IS_ERR(hvs->core_clk)) { 1065 + dev_err(&pdev->dev, "Couldn't get core clock\n"); 1066 + return PTR_ERR(hvs->core_clk); 1067 + } 1068 + 1069 + max_rate = rpi_firmware_clk_get_max_rate(firmware, 1070 + RPI_FIRMWARE_CORE_CLK_ID); 1071 + rpi_firmware_put(firmware); 1072 + if (max_rate >= 550000000) 1073 + hvs->vc5_hdmi_enable_hdmi_20 = true; 1074 + 1075 + if (max_rate >= 600000000) 1076 + hvs->vc5_hdmi_enable_4096by2160 = true; 1077 + 1078 + hvs->max_core_rate = max_rate; 1079 + 1080 + ret = clk_prepare_enable(hvs->core_clk); 
1081 + if (ret) { 1082 + dev_err(&pdev->dev, "Couldn't enable the core clock\n"); 1083 + return ret; 1084 + } 1085 + } 1086 + 1087 + if (vc4->gen == VC4_GEN_4) 1088 + hvs->dlist = hvs->regs + SCALER_DLIST_START; 1089 + else 1090 + hvs->dlist = hvs->regs + SCALER5_DLIST_START; 1091 + 1092 + ret = vc4_hvs_hw_init(hvs); 1093 + if (ret) 1094 + return ret; 1095 + 1096 + /* Upload filter kernels. We only have the one for now, so we 1097 + * keep it around for the lifetime of the driver. 1098 + */ 1099 + ret = vc4_hvs_upload_linear_kernel(hvs, 1100 + &hvs->mitchell_netravali_filter, 1101 + mitchell_netravali_1_3_1_3_kernel); 1102 + if (ret) 1103 + return ret; 1104 + 1105 + ret = vc4_hvs_cob_init(hvs); 1106 + if (ret) 1107 + return ret; 1009 1108 1010 1109 ret = devm_request_irq(dev, platform_get_irq(pdev, 0), 1011 1110 vc4_hvs_irq_handler, 0, "vc4 hvs", drm);
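The vc4_hvs.c changes add per-plane debug logging in vc4_hvs_atomic_check() before reserving a display-list block; the reserved size is the sum of each plane's dlist words plus one terminating SCALER_CTL0_END word. A sketch of that accounting, with vc4_plane_dlist_size() replaced by a plain array purely for illustration:

```c
#include <assert.h>

/* Sum per-plane display-list contributions, plus one word for the
 * SCALER_CTL0_END terminator, as vc4_hvs_atomic_check() does. */
static unsigned int vc4_dlist_total(const unsigned int *plane_words,
				    int nplanes)
{
	unsigned int total = 0;

	for (int i = 0; i < nplanes; i++)
		total += plane_words[i];

	return total + 1; /* account for SCALER_CTL0_END */
}
```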
+5 -5
drivers/gpu/drm/vc4/vc4_irq.c
··· 263 263 { 264 264 struct vc4_dev *vc4 = to_vc4_dev(dev); 265 265 266 - if (WARN_ON_ONCE(vc4->is_vc5)) 266 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 267 267 return; 268 268 269 269 if (!vc4->v3d) ··· 280 280 { 281 281 struct vc4_dev *vc4 = to_vc4_dev(dev); 282 282 283 - if (WARN_ON_ONCE(vc4->is_vc5)) 283 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 284 284 return; 285 285 286 286 if (!vc4->v3d) ··· 303 303 struct vc4_dev *vc4 = to_vc4_dev(dev); 304 304 int ret; 305 305 306 - if (WARN_ON_ONCE(vc4->is_vc5)) 306 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 307 307 return -ENODEV; 308 308 309 309 if (irq == IRQ_NOTCONNECTED) ··· 324 324 { 325 325 struct vc4_dev *vc4 = to_vc4_dev(dev); 326 326 327 - if (WARN_ON_ONCE(vc4->is_vc5)) 327 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 328 328 return; 329 329 330 330 vc4_irq_disable(dev); ··· 337 337 struct vc4_dev *vc4 = to_vc4_dev(dev); 338 338 unsigned long irqflags; 339 339 340 - if (WARN_ON_ONCE(vc4->is_vc5)) 340 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 341 341 return; 342 342 343 343 /* Acknowledge any stale IRQs. */
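Throughout vc4_irq.c (and the kms/perfmon files below), the old `vc4->is_vc5` boolean becomes an ordered generation enum, so guards written as `gen > VC4_GEN_4` stay correct when later generations are added. A sketch of the guard pattern, with a hypothetical VC4_GEN_6 included only to demonstrate the ordering property:

```c
#include <assert.h>

/* Ordered generations: comparisons like gen > VC4_GEN_4 cover every
 * later generation, unlike the old is_vc5 boolean. VC4_GEN_6 is
 * hypothetical here. */
enum vc4_gen { VC4_GEN_4, VC4_GEN_5, VC4_GEN_6 };

/* The V3D IRQ paths only exist on the original VC4 generation;
 * mirrors the WARN_ON_ONCE(vc4->gen > VC4_GEN_4) checks above. */
static int vc4_irq_supported(enum vc4_gen gen)
{
	return !(gen > VC4_GEN_4);
}
```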
+7 -7
drivers/gpu/drm/vc4/vc4_kms.c
··· 369 369 old_hvs_state->fifo_state[channel].pending_commit = NULL; 370 370 } 371 371 372 - if (vc4->is_vc5) { 372 + if (vc4->gen == VC4_GEN_5) { 373 373 unsigned long state_rate = max(old_hvs_state->core_clock_rate, 374 374 new_hvs_state->core_clock_rate); 375 375 unsigned long core_rate = clamp_t(unsigned long, state_rate, ··· 388 388 389 389 vc4_ctm_commit(vc4, state); 390 390 391 - if (vc4->is_vc5) 391 + if (vc4->gen == VC4_GEN_5) 392 392 vc5_hvs_pv_muxing_commit(vc4, state); 393 393 else 394 394 vc4_hvs_pv_muxing_commit(vc4, state); ··· 406 406 407 407 drm_atomic_helper_cleanup_planes(dev, state); 408 408 409 - if (vc4->is_vc5) { 409 + if (vc4->gen == VC4_GEN_5) { 410 410 unsigned long core_rate = min_t(unsigned long, 411 411 hvs->max_core_rate, 412 412 new_hvs_state->core_clock_rate); ··· 461 461 struct vc4_dev *vc4 = to_vc4_dev(dev); 462 462 struct drm_mode_fb_cmd2 mode_cmd_local; 463 463 464 - if (WARN_ON_ONCE(vc4->is_vc5)) 464 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 465 465 return ERR_PTR(-ENODEV); 466 466 467 467 /* If the user didn't specify a modifier, use the ··· 1040 1040 * the BCM2711, but the load tracker computations are used for 1041 1041 * the core clock rate calculation. 1042 1042 */ 1043 - if (!vc4->is_vc5) { 1043 + if (vc4->gen == VC4_GEN_4) { 1044 1044 /* Start with the load tracker enabled. Can be 1045 1045 * disabled through the debugfs load_tracker file. 1046 1046 */ ··· 1056 1056 return ret; 1057 1057 } 1058 1058 1059 - if (vc4->is_vc5) { 1059 + if (vc4->gen == VC4_GEN_5) { 1060 1060 dev->mode_config.max_width = 7680; 1061 1061 dev->mode_config.max_height = 7680; 1062 1062 } else { ··· 1064 1064 dev->mode_config.max_height = 2048; 1065 1065 } 1066 1066 1067 - dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs; 1067 + dev->mode_config.funcs = (vc4->gen > VC4_GEN_4) ? 
&vc5_mode_funcs : &vc4_mode_funcs; 1068 1068 dev->mode_config.helper_private = &vc4_mode_config_helpers; 1069 1069 dev->mode_config.preferred_depth = 24; 1070 1070 dev->mode_config.async_page_flip = true;
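vc4_kms.c now selects the mode_config limits by generation as well: 7680x7680 on VC4_GEN_5, 2048x2048 on the original VC4. A sketch of that selection (the `struct mode_limits` holder is an assumption for illustration; the kernel writes the fields on `dev->mode_config` directly):

```c
#include <assert.h>

enum vc4_gen { VC4_GEN_4, VC4_GEN_5 };

struct mode_limits { int max_width, max_height; };

/* Values from the vc4_kms.c hunk above. */
static struct mode_limits vc4_mode_limits(enum vc4_gen gen)
{
	if (gen == VC4_GEN_5)
		return (struct mode_limits){ 7680, 7680 };
	return (struct mode_limits){ 2048, 2048 };
}
```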
+10 -10
drivers/gpu/drm/vc4/vc4_perfmon.c
··· 23 23 return; 24 24 25 25 vc4 = perfmon->dev; 26 - if (WARN_ON_ONCE(vc4->is_vc5)) 26 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 27 27 return; 28 28 29 29 refcount_inc(&perfmon->refcnt); ··· 37 37 return; 38 38 39 39 vc4 = perfmon->dev; 40 - if (WARN_ON_ONCE(vc4->is_vc5)) 40 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 41 41 return; 42 42 43 43 if (refcount_dec_and_test(&perfmon->refcnt)) ··· 49 49 unsigned int i; 50 50 u32 mask; 51 51 52 - if (WARN_ON_ONCE(vc4->is_vc5)) 52 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 53 53 return; 54 54 55 55 if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon)) ··· 69 69 { 70 70 unsigned int i; 71 71 72 - if (WARN_ON_ONCE(vc4->is_vc5)) 72 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 73 73 return; 74 74 75 75 if (WARN_ON_ONCE(!vc4->active_perfmon || ··· 90 90 struct vc4_dev *vc4 = vc4file->dev; 91 91 struct vc4_perfmon *perfmon; 92 92 93 - if (WARN_ON_ONCE(vc4->is_vc5)) 93 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 94 94 return NULL; 95 95 96 96 mutex_lock(&vc4file->perfmon.lock); ··· 105 105 { 106 106 struct vc4_dev *vc4 = vc4file->dev; 107 107 108 - if (WARN_ON_ONCE(vc4->is_vc5)) 108 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 109 109 return; 110 110 111 111 mutex_init(&vc4file->perfmon.lock); ··· 126 126 { 127 127 struct vc4_dev *vc4 = vc4file->dev; 128 128 129 - if (WARN_ON_ONCE(vc4->is_vc5)) 129 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 130 130 return; 131 131 132 132 mutex_lock(&vc4file->perfmon.lock); ··· 146 146 unsigned int i; 147 147 int ret; 148 148 149 - if (WARN_ON_ONCE(vc4->is_vc5)) 149 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 150 150 return -ENODEV; 151 151 152 152 if (!vc4->v3d) { ··· 200 200 struct drm_vc4_perfmon_destroy *req = data; 201 201 struct vc4_perfmon *perfmon; 202 202 203 - if (WARN_ON_ONCE(vc4->is_vc5)) 203 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 204 204 return -ENODEV; 205 205 206 206 if (!vc4->v3d) { ··· 228 228 struct vc4_perfmon *perfmon; 229 229 int ret; 230 230 231 - if (WARN_ON_ONCE(vc4->is_vc5)) 231 + if 
(WARN_ON_ONCE(vc4->gen > VC4_GEN_4)) 232 232 return -ENODEV; 233 233 234 234 if (!vc4->v3d) {
+186 -95
drivers/gpu/drm/vc4/vc4_plane.c
··· 110 110 .pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCRCB, 111 111 }, 112 112 { 113 + .drm = DRM_FORMAT_YUV444, 114 + .hvs = HVS_PIXEL_FORMAT_YCBCR_YUV422_3PLANE, 115 + .pixel_order = HVS_PIXEL_ORDER_XYCBCR, 116 + .pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCBCR, 117 + }, 118 + { 119 + .drm = DRM_FORMAT_YVU444, 120 + .hvs = HVS_PIXEL_FORMAT_YCBCR_YUV422_3PLANE, 121 + .pixel_order = HVS_PIXEL_ORDER_XYCRCB, 122 + .pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCRCB, 123 + }, 124 + { 113 125 .drm = DRM_FORMAT_YUV420, 114 126 .hvs = HVS_PIXEL_FORMAT_YCBCR_YUV420_3PLANE, 115 127 .pixel_order = HVS_PIXEL_ORDER_XYCBCR, ··· 263 251 264 252 static enum vc4_scaling_mode vc4_get_scaling_mode(u32 src, u32 dst) 265 253 { 266 - if (dst == src) 254 + if (dst == src >> 16) 267 255 return VC4_SCALING_NONE; 268 - if (3 * dst >= 2 * src) 256 + if (3 * dst >= 2 * (src >> 16)) 269 257 return VC4_SCALING_PPF; 270 258 else 271 259 return VC4_SCALING_TPZ; ··· 450 438 { 451 439 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 452 440 struct drm_framebuffer *fb = state->fb; 453 - struct drm_gem_dma_object *bo; 454 441 int num_planes = fb->format->num_planes; 455 442 struct drm_crtc_state *crtc_state; 456 443 u32 h_subsample = fb->format->hsub; 457 444 u32 v_subsample = fb->format->vsub; 458 - int i, ret; 445 + int ret; 459 446 460 447 crtc_state = drm_atomic_get_existing_crtc_state(state->state, 461 448 state->crtc); ··· 468 457 if (ret) 469 458 return ret; 470 459 471 - for (i = 0; i < num_planes; i++) { 472 - bo = drm_fb_dma_get_gem_obj(fb, i); 473 - vc4_state->offsets[i] = bo->dma_addr + fb->offsets[i]; 474 - } 475 - 476 - /* 477 - * We don't support subpixel source positioning for scaling, 478 - * but fractional coordinates can be generated by clipping 479 - * so just round for now 480 - */ 481 - vc4_state->src_x = DIV_ROUND_CLOSEST(state->src.x1, 1 << 16); 482 - vc4_state->src_y = DIV_ROUND_CLOSEST(state->src.y1, 1 << 16); 483 - vc4_state->src_w[0] = DIV_ROUND_CLOSEST(state->src.x2, 1 << 
16) - vc4_state->src_x; 484 - vc4_state->src_h[0] = DIV_ROUND_CLOSEST(state->src.y2, 1 << 16) - vc4_state->src_y; 460 + vc4_state->src_x = state->src.x1; 461 + vc4_state->src_y = state->src.y1; 462 + vc4_state->src_w[0] = state->src.x2 - vc4_state->src_x; 463 + vc4_state->src_h[0] = state->src.y2 - vc4_state->src_y; 485 464 486 465 vc4_state->crtc_x = state->dst.x1; 487 466 vc4_state->crtc_y = state->dst.y1; ··· 511 510 */ 512 511 if (vc4_state->x_scaling[1] == VC4_SCALING_NONE) 513 512 vc4_state->x_scaling[1] = VC4_SCALING_PPF; 513 + 514 + /* Similarly UV needs vertical scaling to be enabled. 515 + * Without this a 1:1 scaled YUV422 plane isn't rendered. 516 + */ 517 + if (vc4_state->y_scaling[1] == VC4_SCALING_NONE) 518 + vc4_state->y_scaling[1] = VC4_SCALING_PPF; 514 519 } else { 515 520 vc4_state->is_yuv = false; 516 521 vc4_state->x_scaling[1] = VC4_SCALING_NONE; ··· 530 523 { 531 524 u32 scale, recip; 532 525 533 - scale = (1 << 16) * src / dst; 526 + scale = src / dst; 534 527 535 528 /* The specs note that while the reciprocal would be defined 536 529 * as (1<<32)/scale, ~0 is close enough. ··· 544 537 VC4_SET_FIELD(recip, SCALER_TPZ1_RECIP)); 545 538 } 546 539 547 - static void vc4_write_ppf(struct vc4_plane_state *vc4_state, u32 src, u32 dst) 540 + /* phase magnitude bits */ 541 + #define PHASE_BITS 6 542 + 543 + static void vc4_write_ppf(struct vc4_plane_state *vc4_state, u32 src, u32 dst, 544 + u32 xy, int channel) 548 545 { 549 - u32 scale = (1 << 16) * src / dst; 546 + u32 scale = src / dst; 547 + s32 offset, offset2; 548 + s32 phase; 549 + 550 + /* 551 + * Start the phase at 1/2 pixel from the 1st pixel at src_x. 552 + * 1/4 pixel for YUV. 
553 + */ 554 + if (channel) { 555 + /* 556 + * The phase is relative to scale_src->x, so shift it for 557 + * display list's x value 558 + */ 559 + offset = (xy & 0x1ffff) >> (16 - PHASE_BITS) >> 1; 560 + offset += -(1 << PHASE_BITS >> 2); 561 + } else { 562 + /* 563 + * The phase is relative to scale_src->x, so shift it for 564 + * display list's x value 565 + */ 566 + offset = (xy & 0xffff) >> (16 - PHASE_BITS); 567 + offset += -(1 << PHASE_BITS >> 1); 568 + 569 + /* 570 + * This is a kludge to make sure the scaling factors are 571 + * consistent with YUV's luma scaling. We lose 1-bit precision 572 + * because of this. 573 + */ 574 + scale &= ~1; 575 + } 576 + 577 + /* 578 + * There may be a also small error introduced by precision of scale. 579 + * Add half of that as a compromise 580 + */ 581 + offset2 = src - dst * scale; 582 + offset2 >>= 16 - PHASE_BITS; 583 + phase = offset + (offset2 >> 1); 584 + 585 + /* Ensure +ve values don't touch the sign bit, then truncate negative values */ 586 + if (phase >= 1 << PHASE_BITS) 587 + phase = (1 << PHASE_BITS) - 1; 588 + 589 + phase &= SCALER_PPF_IPHASE_MASK; 550 590 551 591 vc4_dlist_write(vc4_state, 552 592 SCALER_PPF_AGC | 553 593 VC4_SET_FIELD(scale, SCALER_PPF_SCALE) | 554 - VC4_SET_FIELD(0, SCALER_PPF_IPHASE)); 594 + VC4_SET_FIELD(phase, SCALER_PPF_IPHASE)); 555 595 } 556 596 557 597 static u32 vc4_lbm_size(struct drm_plane_state *state) ··· 623 569 if (vc4_state->x_scaling[0] == VC4_SCALING_TPZ) 624 570 pix_per_line = vc4_state->crtc_w; 625 571 else 626 - pix_per_line = vc4_state->src_w[0]; 572 + pix_per_line = vc4_state->src_w[0] >> 16; 627 573 628 574 if (!vc4_state->is_yuv) { 629 575 if (vc4_state->y_scaling[0] == VC4_SCALING_TPZ) ··· 641 587 } 642 588 643 589 /* Align it to 64 or 128 (hvs5) bytes */ 644 - lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64); 590 + lbm = roundup(lbm, vc4->gen == VC4_GEN_5 ? 
128 : 64);
645 591
646 592 /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
647 - lbm /= vc4->is_vc5 ? 4 : 2;
593 + lbm /= vc4->gen == VC4_GEN_5 ? 4 : 2;
648 594
649 595 return lbm;
650 596 }
···
656 602
657 603 /* Ch0 H-PPF Word 0: Scaling Parameters */
658 604 if (vc4_state->x_scaling[channel] == VC4_SCALING_PPF) {
659 - vc4_write_ppf(vc4_state,
660 - vc4_state->src_w[channel], vc4_state->crtc_w);
605 + vc4_write_ppf(vc4_state, vc4_state->src_w[channel],
606 + vc4_state->crtc_w, vc4_state->src_x, channel);
661 607 }
662 608
663 609 /* Ch0 V-PPF Words 0-1: Scaling Parameters, Context */
664 610 if (vc4_state->y_scaling[channel] == VC4_SCALING_PPF) {
665 - vc4_write_ppf(vc4_state,
666 - vc4_state->src_h[channel], vc4_state->crtc_h);
611 + vc4_write_ppf(vc4_state, vc4_state->src_h[channel],
612 + vc4_state->crtc_h, vc4_state->src_y, channel);
667 613 vc4_dlist_write(vc4_state, 0xc0c0c0c0);
668 614 }
669 615
670 616 /* Ch0 H-TPZ Words 0-1: Scaling Parameters, Recip */
671 617 if (vc4_state->x_scaling[channel] == VC4_SCALING_TPZ) {
672 - vc4_write_tpz(vc4_state,
673 - vc4_state->src_w[channel], vc4_state->crtc_w);
618 + vc4_write_tpz(vc4_state, vc4_state->src_w[channel],
619 + vc4_state->crtc_w);
674 620 }
675 621
676 622 /* Ch0 V-TPZ Words 0-2: Scaling Parameters, Recip, Context */
677 623 if (vc4_state->y_scaling[channel] == VC4_SCALING_TPZ) {
678 - vc4_write_tpz(vc4_state,
679 - vc4_state->src_h[channel], vc4_state->crtc_h);
624 + vc4_write_tpz(vc4_state, vc4_state->src_h[channel],
625 + vc4_state->crtc_h);
680 626 vc4_dlist_write(vc4_state, 0xc0c0c0c0);
681 627 }
682 628 }
···
714 660 for (i = 0; i < fb->format->num_planes; i++) {
715 661 /* Even if the bandwidth/plane required for a single frame is
716 662 *
717 - * vc4_state->src_w[i] * vc4_state->src_h[i] * cpp * vrefresh
663 + * (vc4_state->src_w[i] >> 16) * (vc4_state->src_h[i] >> 16) *
664 + * cpp * vrefresh
718 665 *
719 666 * when downscaling, we have to read more pixels per line in
720 667 * the time frame reserved for a single line, so the bandwidth
···
724 669 * load by this number. We're likely over-estimating the read
725 670 * demand, but that's better than under-estimating it.
726 671 */
727 - vscale_factor = DIV_ROUND_UP(vc4_state->src_h[i],
672 + vscale_factor = DIV_ROUND_UP(vc4_state->src_h[i] >> 16,
728 673 vc4_state->crtc_h);
729 - vc4_state->membus_load += vc4_state->src_w[i] *
730 - vc4_state->src_h[i] * vscale_factor *
731 - fb->format->cpp[i];
674 + vc4_state->membus_load += (vc4_state->src_w[i] >> 16) *
675 + (vc4_state->src_h[i] >> 16) *
676 + vscale_factor * fb->format->cpp[i];
732 677 vc4_state->hvs_load += vc4_state->crtc_h * vc4_state->crtc_w;
733 678 }
734 679
···
739 684
740 685 static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
741 686 {
742 - struct vc4_dev *vc4 = to_vc4_dev(state->plane->dev);
687 + struct drm_device *drm = state->plane->dev;
688 + struct vc4_dev *vc4 = to_vc4_dev(drm);
689 + struct drm_plane *plane = state->plane;
743 690 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state);
744 691 unsigned long irqflags;
745 692 u32 lbm_size;
···
749 692 lbm_size = vc4_lbm_size(state);
750 693 if (!lbm_size)
751 694 return 0;
695 +
696 + if (vc4->gen == VC4_GEN_5)
697 + lbm_size = ALIGN(lbm_size, 64);
698 + else if (vc4->gen == VC4_GEN_4)
699 + lbm_size = ALIGN(lbm_size, 32);
700 +
701 + drm_dbg_driver(drm, "[PLANE:%d:%s] LBM Allocation Size: %u\n",
702 + plane->base.id, plane->name, lbm_size);
752 703
753 704 if (WARN_ON(!vc4_state->lbm_offset))
754 705 return -EINVAL;
···
770 705 spin_lock_irqsave(&vc4->hvs->mm_lock, irqflags);
771 706 ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
772 707 &vc4_state->lbm,
773 - lbm_size,
774 - vc4->is_vc5 ? 64 : 32,
708 + lbm_size, 1,
775 709 0, 0);
776 710 spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
777 711
778 - if (ret)
712 + if (ret) {
713 + drm_err(drm, "Failed to allocate LBM entry: %d\n", ret);
779 714 return ret;
715 + }
780 716 } else {
781 717 WARN_ON_ONCE(lbm_size != vc4_state->lbm.size);
782 718 }
···
892 826 bool mix_plane_alpha;
893 827 bool covers_screen;
894 828 u32 scl0, scl1, pitch0;
895 - u32 tiling, src_y;
829 + u32 tiling, src_x, src_y;
830 + u32 width, height;
896 831 u32 hvs_format = format->hvs;
897 832 unsigned int rotation;
833 + u32 offsets[3] = { 0 };
898 834 int ret, i;
899 835
900 836 if (vc4_state->dlist_initialized)
···
905 837 ret = vc4_plane_setup_clipping_and_scaling(state);
906 838 if (ret)
907 839 return ret;
840 +
841 + width = vc4_state->src_w[0] >> 16;
842 + height = vc4_state->src_h[0] >> 16;
908 843
909 844 /* SCL1 is used for Cb/Cr scaling of planar formats. For RGB
910 845 * and 4:4:4, scl1 should be set to scl0 so both channels of
···
929 858 DRM_MODE_REFLECT_Y);
930 859
931 860 /* We must point to the last line when Y reflection is enabled. */
932 - src_y = vc4_state->src_y;
861 + src_y = vc4_state->src_y >> 16;
933 862 if (rotation & DRM_MODE_REFLECT_Y)
934 - src_y += vc4_state->src_h[0] - 1;
863 + src_y += height - 1;
864 +
865 + src_x = vc4_state->src_x >> 16;
935 866
936 867 switch (base_format_mod) {
937 868 case DRM_FORMAT_MOD_LINEAR:
···
944 871 * out.
945 872 */
946 873 for (i = 0; i < num_planes; i++) {
947 - vc4_state->offsets[i] += src_y /
948 - (i ? v_subsample : 1) *
949 - fb->pitches[i];
950 -
951 - vc4_state->offsets[i] += vc4_state->src_x /
952 - (i ? h_subsample : 1) *
953 - fb->format->cpp[i];
874 + offsets[i] += src_y / (i ? v_subsample : 1) * fb->pitches[i];
875 + offsets[i] += src_x / (i ? h_subsample : 1) * fb->format->cpp[i];
954 876 }
955 877
956 878 break;
···
966 898 * pitch * tile_h == tile_size * tiles_per_row
967 899 */
968 900 u32 tiles_w = fb->pitches[0] >> (tile_size_shift - tile_h_shift);
969 - u32 tiles_l = vc4_state->src_x >> tile_w_shift;
901 + u32 tiles_l = src_x >> tile_w_shift;
970 902 u32 tiles_r = tiles_w - tiles_l;
971 903 u32 tiles_t = src_y >> tile_h_shift;
972 904 /* Intra-tile offsets, which modify the base address (the
···
976 908 u32 tile_y = (src_y >> 4) & 1;
977 909 u32 subtile_y = (src_y >> 2) & 3;
978 910 u32 utile_y = src_y & 3;
979 - u32 x_off = vc4_state->src_x & tile_w_mask;
911 + u32 x_off = src_x & tile_w_mask;
980 912 u32 y_off = src_y & tile_h_mask;
981 913
982 914 /* When Y reflection is requested we must set the
···
1000 932 VC4_SET_FIELD(y_off, SCALER_PITCH0_TILE_Y_OFFSET) |
1001 933 VC4_SET_FIELD(tiles_l, SCALER_PITCH0_TILE_WIDTH_L) |
1002 934 VC4_SET_FIELD(tiles_r, SCALER_PITCH0_TILE_WIDTH_R));
1003 - vc4_state->offsets[0] += tiles_t * (tiles_w << tile_size_shift);
1004 - vc4_state->offsets[0] += subtile_y << 8;
1005 - vc4_state->offsets[0] += utile_y << 4;
935 + offsets[0] += tiles_t * (tiles_w << tile_size_shift);
936 + offsets[0] += subtile_y << 8;
937 + offsets[0] += utile_y << 4;
1006 938
1007 939 /* Rows of tiles alternate left-to-right and right-to-left. */
1008 940 if (tiles_t & 1) {
1009 941 pitch0 |= SCALER_PITCH0_TILE_INITIAL_LINE_DIR;
1010 - vc4_state->offsets[0] += (tiles_w - tiles_l) <<
1011 - tile_size_shift;
1012 - vc4_state->offsets[0] -= (1 + !tile_y) << 10;
942 + offsets[0] += (tiles_w - tiles_l) << tile_size_shift;
943 + offsets[0] -= (1 + !tile_y) << 10;
1013 944 } else {
1014 - vc4_state->offsets[0] += tiles_l << tile_size_shift;
1015 - vc4_state->offsets[0] += tile_y << 10;
945 + offsets[0] += tiles_l << tile_size_shift;
946 + offsets[0] += tile_y << 10;
1016 947 }
1017 948
1018 949 break;
···
1071 1004 * of the 12-pixels in that 128-bit word is the
1072 1005 * first pixel to be used
1073 1006 */
1074 - u32 remaining_pixels = vc4_state->src_x % 96;
1007 + u32 remaining_pixels = src_x % 96;
1075 1008 u32 aligned = remaining_pixels / 12;
1076 1009 u32 last_bits = remaining_pixels % 12;
···
1093 1026 return -EINVAL;
1094 1027 }
1095 1028 pix_per_tile = tile_w / fb->format->cpp[0];
1096 - x_off = (vc4_state->src_x % pix_per_tile) /
1029 + x_off = (src_x % pix_per_tile) /
1097 1030 (i ? h_subsample : 1) *
1098 1031 fb->format->cpp[i];
1099 1032 }
1100 1033
1101 - tile = vc4_state->src_x / pix_per_tile;
1034 + tile = src_x / pix_per_tile;
1102 1035
1103 - vc4_state->offsets[i] += param * tile_w * tile;
1104 - vc4_state->offsets[i] += src_y /
1105 - (i ? v_subsample : 1) *
1106 - tile_w;
1107 - vc4_state->offsets[i] += x_off & ~(i ? 1 : 0);
1036 + offsets[i] += param * tile_w * tile;
1037 + offsets[i] += src_y / (i ? v_subsample : 1) * tile_w;
1038 + offsets[i] += x_off & ~(i ? 1 : 0);
1108 1039 }
1109 1040
1110 1041 pitch0 = VC4_SET_FIELD(param, SCALER_TILE_HEIGHT);
···
1115 1050 return -EINVAL;
1116 1051 }
1117 1052
1053 + /* fetch an extra pixel if we don't actually line up with the left edge. */
1054 + if ((vc4_state->src_x & 0xffff) && vc4_state->src_x < (state->fb->width << 16))
1055 + width++;
1056 +
1057 + /* same for the right side */
1058 + if (((vc4_state->src_x + vc4_state->src_w[0]) & 0xffff) &&
1059 + vc4_state->src_x + vc4_state->src_w[0] < (state->fb->width << 16))
1060 + width++;
1061 +
1062 + /* now for the top */
1063 + if ((vc4_state->src_y & 0xffff) && vc4_state->src_y < (state->fb->height << 16))
1064 + height++;
1065 +
1066 + /* and the bottom */
1067 + if (((vc4_state->src_y + vc4_state->src_h[0]) & 0xffff) &&
1068 + vc4_state->src_y + vc4_state->src_h[0] < (state->fb->height << 16))
1069 + height++;
1070 +
1071 + /* For YUV444 the hardware wants double the width, otherwise it doesn't
1072 + * fetch full width of chroma
1073 + */
1074 + if (format->drm == DRM_FORMAT_YUV444 || format->drm == DRM_FORMAT_YVU444)
1075 + width <<= 1;
1076 +
1118 1077 /* Don't waste cycles mixing with plane alpha if the set alpha
1119 1078 * is opaque or there is no per-pixel alpha information.
1120 1079 * In any case we use the alpha property value as the fixed alpha.
···
1146 1057 mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
1147 1058 fb->format->has_alpha;
1148 1059
1149 - if (!vc4->is_vc5) {
1060 + if (vc4->gen == VC4_GEN_4) {
1150 1061 /* Control word */
1151 1062 vc4_dlist_write(vc4_state,
1152 1063 SCALER_CTL0_VALID |
···
1181 1092 vc4_dlist_write(vc4_state,
1182 1093 (mix_plane_alpha ? SCALER_POS2_ALPHA_MIX : 0) |
1183 1094 vc4_hvs4_get_alpha_blend_mode(state) |
1184 - VC4_SET_FIELD(vc4_state->src_w[0],
1185 - SCALER_POS2_WIDTH) |
1186 - VC4_SET_FIELD(vc4_state->src_h[0],
1187 - SCALER_POS2_HEIGHT));
1095 + VC4_SET_FIELD(width, SCALER_POS2_WIDTH) |
1096 + VC4_SET_FIELD(height, SCALER_POS2_HEIGHT));
1188 1097
1189 1098 /* Position Word 3: Context. Written by the HVS. */
1190 1099 vc4_dlist_write(vc4_state, 0xc0c0c0c0);
···
1235 1148 /* Position Word 2: Source Image Size */
1236 1149 vc4_state->pos2_offset = vc4_state->dlist_count;
1237 1150 vc4_dlist_write(vc4_state,
1238 - VC4_SET_FIELD(vc4_state->src_w[0],
1239 - SCALER5_POS2_WIDTH) |
1240 - VC4_SET_FIELD(vc4_state->src_h[0],
1241 - SCALER5_POS2_HEIGHT));
1151 + VC4_SET_FIELD(width, SCALER5_POS2_WIDTH) |
1152 + VC4_SET_FIELD(height, SCALER5_POS2_HEIGHT));
1242 1153
1243 1154 /* Position Word 3: Context. Written by the HVS. */
1244 1155 vc4_dlist_write(vc4_state, 0xc0c0c0c0);
···
1247 1162 *
1248 1163 * The pointers may be any byte address.
1249 1164 */
1250 - vc4_state->ptr0_offset = vc4_state->dlist_count;
1251 - for (i = 0; i < num_planes; i++)
1252 - vc4_dlist_write(vc4_state, vc4_state->offsets[i]);
1165 + vc4_state->ptr0_offset[0] = vc4_state->dlist_count;
1166 +
1167 + for (i = 0; i < num_planes; i++) {
1168 + struct drm_gem_dma_object *bo = drm_fb_dma_get_gem_obj(fb, i);
1169 +
1170 + vc4_dlist_write(vc4_state, bo->dma_addr + fb->offsets[i] + offsets[i]);
1171 + }
1253 1172
1254 1173 /* Pointer Context Word 0/1/2: Written by the HVS */
1255 1174 for (i = 0; i < num_planes; i++)
···
1387 1298 if (ret)
1388 1299 return ret;
1389 1300
1390 - return vc4_plane_allocate_lbm(new_plane_state);
1301 + ret = vc4_plane_allocate_lbm(new_plane_state);
1302 + if (ret)
1303 + return ret;
1304 +
1305 + return 0;
1391 1306 }
1392 1307
1393 1308 static void vc4_plane_atomic_update(struct drm_plane *plane,
···
1455 1362 * scanout will start from this address as soon as the FIFO
1456 1363 * needs to refill with pixels.
1457 1364 */
1458 - writel(addr, &vc4_state->hw_dlist[vc4_state->ptr0_offset]);
1365 + writel(addr, &vc4_state->hw_dlist[vc4_state->ptr0_offset[0]]);
1459 1366
1460 1367 /* Also update the CPU-side dlist copy, so that any later
1461 1368 * atomic updates that don't do a new modeset on our plane
1462 1369 * also use our updated address.
1463 1370 */
1464 - vc4_state->dlist[vc4_state->ptr0_offset] = addr;
1371 + vc4_state->dlist[vc4_state->ptr0_offset[0]] = addr;
1465 1372
1466 1373 drm_dev_exit(idx);
1467 1374 }
···
1516 1423 sizeof(vc4_state->y_scaling));
1517 1424 vc4_state->is_unity = new_vc4_state->is_unity;
1518 1425 vc4_state->is_yuv = new_vc4_state->is_yuv;
1519 - memcpy(vc4_state->offsets, new_vc4_state->offsets,
1520 - sizeof(vc4_state->offsets));
1521 1426 vc4_state->needs_bg_fill = new_vc4_state->needs_bg_fill;
1522 1427
1523 1428 /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */
···
1523 1432 new_vc4_state->dlist[vc4_state->pos0_offset];
1524 1433 vc4_state->dlist[vc4_state->pos2_offset] =
1525 1434 new_vc4_state->dlist[vc4_state->pos2_offset];
1526 - vc4_state->dlist[vc4_state->ptr0_offset] =
1527 - new_vc4_state->dlist[vc4_state->ptr0_offset];
1435 + vc4_state->dlist[vc4_state->ptr0_offset[0]] =
1436 + new_vc4_state->dlist[vc4_state->ptr0_offset[0]];
1528 1437
1529 1438 /* Note that we can't just call vc4_plane_write_dlist()
1530 1439 * because that would smash the context data that the HVS is
···
1534 1443 &vc4_state->hw_dlist[vc4_state->pos0_offset]);
1535 1444 writel(vc4_state->dlist[vc4_state->pos2_offset],
1536 1445 &vc4_state->hw_dlist[vc4_state->pos2_offset]);
1537 - writel(vc4_state->dlist[vc4_state->ptr0_offset],
1538 - &vc4_state->hw_dlist[vc4_state->ptr0_offset]);
1446 + writel(vc4_state->dlist[vc4_state->ptr0_offset[0]],
1447 + &vc4_state->hw_dlist[vc4_state->ptr0_offset[0]]);
1539 1448
1540 1449 drm_dev_exit(idx);
1541 1450 }
···
1562 1471 if (old_vc4_state->dlist_count != new_vc4_state->dlist_count ||
1563 1472 old_vc4_state->pos0_offset != new_vc4_state->pos0_offset ||
1564 1473 old_vc4_state->pos2_offset != new_vc4_state->pos2_offset ||
1565 - old_vc4_state->ptr0_offset != new_vc4_state->ptr0_offset ||
1474 + old_vc4_state->ptr0_offset[0] != new_vc4_state->ptr0_offset[0] ||
1566 1475 vc4_lbm_size(plane->state) != vc4_lbm_size(new_plane_state))
1567 1476 return -EINVAL;
···
1572 1481 for (i = 0; i < new_vc4_state->dlist_count; i++) {
1573 1482 if (i == new_vc4_state->pos0_offset ||
1574 1483 i == new_vc4_state->pos2_offset ||
1575 - i == new_vc4_state->ptr0_offset ||
1484 + i == new_vc4_state->ptr0_offset[0] ||
1576 1485 (new_vc4_state->lbm_offset &&
1577 1486 i == new_vc4_state->lbm_offset))
1578 1487 continue;
···
1723 1632 };
1724 1633
1725 1634 for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
1726 - if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
1635 + if (!hvs_formats[i].hvs5_only || vc4->gen == VC4_GEN_5) {
1727 1636 formats[num_formats] = hvs_formats[i].drm;
1728 1637 num_formats++;
1729 1638 }
···
1738 1647 return ERR_CAST(vc4_plane);
1739 1648 plane = &vc4_plane->base;
1740 1649
1741 - if (vc4->is_vc5)
1650 + if (vc4->gen == VC4_GEN_5)
1742 1651 drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
1743 1652 else
1744 1653 drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
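The vc4_plane.c hunks above hinge on DRM's 16.16 fixed-point source coordinates: src_x/src_y/src_w/src_h carry a 16-bit fractional part, so the integer pixel count is the value shifted right by 16, and a fractional edge means one extra pixel must be fetched on that side for scaling. A standalone sketch of the horizontal widening logic (helper names are illustrative, not from the driver; the driver applies the same rule vertically and doubles the width for YUV444):

```c
#include <assert.h>
#include <stdint.h>

/* DRM plane source coordinates are 16.16 fixed point: the top 16 bits
 * are the integer pixel position, the low 16 bits the fraction. */
static uint32_t fp_int(uint32_t v) { return v >> 16; }
static int has_fraction(uint32_t v) { return (v & 0xffff) != 0; }

/* Re-statement of the width widening in vc4_plane_mode_set(): when the
 * left or right edge falls between pixels (and is still inside the
 * framebuffer), one extra pixel is fetched on that side so the scaler
 * has the neighbouring sample available. */
static uint32_t fetch_width(uint32_t src_x, uint32_t src_w, uint32_t fb_w)
{
	uint32_t width = fp_int(src_w);

	if (has_fraction(src_x) && src_x < (fb_w << 16))
		width++;	/* left edge between pixels */
	if (has_fraction(src_x + src_w) && src_x + src_w < (fb_w << 16))
		width++;	/* right edge between pixels */

	return width;
}
```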
+1
drivers/gpu/drm/vc4/vc4_regs.h
··· 777 777 # define VC4_HD_VID_CTL_CLRSYNC BIT(24)
778 778 # define VC4_HD_VID_CTL_CLRRGB BIT(23)
779 779 # define VC4_HD_VID_CTL_BLANKPIX BIT(18)
780 + # define VC4_HD_VID_CTL_BLANK_INSERT_EN BIT(16)
780 781
781 782 # define VC4_HD_CSC_CTL_ORDER_MASK VC4_MASK(7, 5)
782 783 # define VC4_HD_CSC_CTL_ORDER_SHIFT 5
+1 -1
drivers/gpu/drm/vc4/vc4_render_cl.c
··· 599 599 bool has_bin = args->bin_cl_size != 0;
600 600 int ret;
601 601
602 - if (WARN_ON_ONCE(vc4->is_vc5))
602 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
603 603 return -ENODEV;
604 604
605 605 if (args->min_x_tile > args->max_x_tile ||
+5 -5
drivers/gpu/drm/vc4/vc4_v3d.c
··· 127 127 int
128 128 vc4_v3d_pm_get(struct vc4_dev *vc4)
129 129 {
130 - if (WARN_ON_ONCE(vc4->is_vc5))
130 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
131 131 return -ENODEV;
132 132
133 133 mutex_lock(&vc4->power_lock);
···
148 148 void
149 149 vc4_v3d_pm_put(struct vc4_dev *vc4)
150 150 {
151 - if (WARN_ON_ONCE(vc4->is_vc5))
151 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
152 152 return;
153 153
154 154 mutex_lock(&vc4->power_lock);
···
178 178 uint64_t seqno = 0;
179 179 struct vc4_exec_info *exec;
180 180
181 - if (WARN_ON_ONCE(vc4->is_vc5))
181 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
182 182 return -ENODEV;
183 183
184 184 try_again:
···
325 325 {
326 326 int ret = 0;
327 327
328 - if (WARN_ON_ONCE(vc4->is_vc5))
328 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
329 329 return -ENODEV;
330 330
331 331 mutex_lock(&vc4->bin_bo_lock);
···
360 360
361 361 void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
362 362 {
363 - if (WARN_ON_ONCE(vc4->is_vc5))
363 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
364 364 return;
365 365
366 366 mutex_lock(&vc4->bin_bo_lock);
+4 -4
drivers/gpu/drm/vc4/vc4_validate.c
··· 109 109 struct drm_gem_dma_object *obj;
110 110 struct vc4_bo *bo;
111 111
112 - if (WARN_ON_ONCE(vc4->is_vc5))
112 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
113 113 return NULL;
114 114
115 115 if (hindex >= exec->bo_count) {
···
169 169 uint32_t utile_w = utile_width(cpp);
170 170 uint32_t utile_h = utile_height(cpp);
171 171
172 - if (WARN_ON_ONCE(vc4->is_vc5))
172 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
173 173 return false;
174 174
175 175 /* The shaded vertex format stores signed 12.4 fixed point
···
495 495 uint32_t dst_offset = 0;
496 496 uint32_t src_offset = 0;
497 497
498 - if (WARN_ON_ONCE(vc4->is_vc5))
498 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
499 499 return -ENODEV;
500 500
501 501 while (src_offset < len) {
···
942 942 uint32_t i;
943 943 int ret = 0;
944 944
945 - if (WARN_ON_ONCE(vc4->is_vc5))
945 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
946 946 return -ENODEV;
947 947
948 948 for (i = 0; i < exec->shader_state_count; i++) {
+1 -1
drivers/gpu/drm/vc4/vc4_validate_shaders.c
··· 786 786 struct vc4_validated_shader_info *validated_shader = NULL;
787 787 struct vc4_shader_validation_state validation_state;
788 788
789 - if (WARN_ON_ONCE(vc4->is_vc5))
789 + if (WARN_ON_ONCE(vc4->gen > VC4_GEN_4))
790 790 return NULL;
791 791
792 792 memset(&validation_state, 0, sizeof(validation_state));
+1
drivers/gpu/drm/xe/Kconfig
··· 14 14 select DRM_PANEL
15 15 select DRM_SUBALLOC_HELPER
16 16 select DRM_DISPLAY_DP_HELPER
17 + select DRM_DISPLAY_DSC_HELPER
17 18 select DRM_DISPLAY_HDCP_HELPER
18 19 select DRM_DISPLAY_HDMI_HELPER
19 20 select DRM_DISPLAY_HELPER
+1 -1
drivers/gpu/host1x/context_bus.c
··· 6 6 #include <linux/device.h>
7 7 #include <linux/of.h>
8 8
9 - struct bus_type host1x_context_device_bus_type = {
9 + const struct bus_type host1x_context_device_bus_type = {
10 10 .name = "host1x-context",
11 11 };
12 12 EXPORT_SYMBOL_GPL(host1x_context_device_bus_type);
+72 -78
drivers/gpu/host1x/dev.c
··· 142 142 };
143 143
144 144 static const struct host1x_sid_entry tegra186_sid_table[] = {
145 - {
146 - /* VIC */
147 - .base = 0x1af0,
148 - .offset = 0x30,
149 - .limit = 0x34
150 - },
151 - {
152 - /* NVDEC */
153 - .base = 0x1b00,
154 - .offset = 0x30,
155 - .limit = 0x34
156 - },
145 + { /* SE1 */ .base = 0x1ac8, .offset = 0x90, .limit = 0x90 },
146 + { /* SE2 */ .base = 0x1ad0, .offset = 0x90, .limit = 0x90 },
147 + { /* SE3 */ .base = 0x1ad8, .offset = 0x90, .limit = 0x90 },
148 + { /* SE4 */ .base = 0x1ae0, .offset = 0x90, .limit = 0x90 },
149 + { /* ISP */ .base = 0x1ae8, .offset = 0x50, .limit = 0x50 },
150 + { /* VIC */ .base = 0x1af0, .offset = 0x30, .limit = 0x34 },
151 + { /* NVENC */ .base = 0x1af8, .offset = 0x30, .limit = 0x34 },
152 + { /* NVDEC */ .base = 0x1b00, .offset = 0x30, .limit = 0x34 },
153 + { /* NVJPG */ .base = 0x1b08, .offset = 0x30, .limit = 0x34 },
154 + { /* TSEC */ .base = 0x1b10, .offset = 0x30, .limit = 0x34 },
155 + { /* TSECB */ .base = 0x1b18, .offset = 0x30, .limit = 0x34 },
156 + { /* VI 0 */ .base = 0x1b80, .offset = 0x10000, .limit = 0x10000 },
157 + { /* VI 1 */ .base = 0x1b88, .offset = 0x20000, .limit = 0x20000 },
158 + { /* VI 2 */ .base = 0x1b90, .offset = 0x30000, .limit = 0x30000 },
159 + { /* VI 3 */ .base = 0x1b98, .offset = 0x40000, .limit = 0x40000 },
160 + { /* VI 4 */ .base = 0x1ba0, .offset = 0x50000, .limit = 0x50000 },
161 + { /* VI 5 */ .base = 0x1ba8, .offset = 0x60000, .limit = 0x60000 },
162 + { /* VI 6 */ .base = 0x1bb0, .offset = 0x70000, .limit = 0x70000 },
163 + { /* VI 7 */ .base = 0x1bb8, .offset = 0x80000, .limit = 0x80000 },
164 + { /* VI 8 */ .base = 0x1bc0, .offset = 0x90000, .limit = 0x90000 },
165 + { /* VI 9 */ .base = 0x1bc8, .offset = 0xa0000, .limit = 0xa0000 },
166 + { /* VI 10 */ .base = 0x1bd0, .offset = 0xb0000, .limit = 0xb0000 },
167 + { /* VI 11 */ .base = 0x1bd8, .offset = 0xc0000, .limit = 0xc0000 },
157 168 };
158 169
159 170 static const struct host1x_info host1x06_info = {
···
184 173 };
185 174
186 175 static const struct host1x_sid_entry tegra194_sid_table[] = {
187 - {
188 - /* VIC */
189 - .base = 0x1af0,
190 - .offset = 0x30,
191 - .limit = 0x34
192 - },
193 - {
194 - /* NVDEC */
195 - .base = 0x1b00,
196 - .offset = 0x30,
197 - .limit = 0x34
198 - },
199 - {
200 - /* NVDEC1 */
201 - .base = 0x1bc0,
202 - .offset = 0x30,
203 - .limit = 0x34
204 - },
176 + { /* SE1 */ .base = 0x1ac8, .offset = 0x90, .limit = 0x90 },
177 + { /* SE2 */ .base = 0x1ad0, .offset = 0x90, .limit = 0x90 },
178 + { /* SE3 */ .base = 0x1ad8, .offset = 0x90, .limit = 0x90 },
179 + { /* SE4 */ .base = 0x1ae0, .offset = 0x90, .limit = 0x90 },
180 + { /* ISP */ .base = 0x1ae8, .offset = 0x800, .limit = 0x800 },
181 + { /* VIC */ .base = 0x1af0, .offset = 0x30, .limit = 0x34 },
182 + { /* NVENC */ .base = 0x1af8, .offset = 0x30, .limit = 0x34 },
183 + { /* NVDEC */ .base = 0x1b00, .offset = 0x30, .limit = 0x34 },
184 + { /* NVJPG */ .base = 0x1b08, .offset = 0x30, .limit = 0x34 },
185 + { /* TSEC */ .base = 0x1b10, .offset = 0x30, .limit = 0x34 },
186 + { /* TSECB */ .base = 0x1b18, .offset = 0x30, .limit = 0x34 },
187 + { /* VI */ .base = 0x1b80, .offset = 0x800, .limit = 0x800 },
188 + { /* VI_THI */ .base = 0x1b88, .offset = 0x30, .limit = 0x34 },
189 + { /* ISP_THI */ .base = 0x1b90, .offset = 0x30, .limit = 0x34 },
190 + { /* PVA0_CLUSTER */ .base = 0x1b98, .offset = 0x0, .limit = 0x0 },
191 + { /* PVA0_CLUSTER */ .base = 0x1ba0, .offset = 0x0, .limit = 0x0 },
192 + { /* NVDLA0 */ .base = 0x1ba8, .offset = 0x30, .limit = 0x34 },
193 + { /* NVDLA1 */ .base = 0x1bb0, .offset = 0x30, .limit = 0x34 },
194 + { /* NVENC1 */ .base = 0x1bb8, .offset = 0x30, .limit = 0x34 },
195 + { /* NVDEC1 */ .base = 0x1bc0, .offset = 0x30, .limit = 0x34 },
205 196 };
206 197
207 198 static const struct host1x_info host1x07_info = {
···
228 215 * and firmware stream ID in the MMIO path table.
229 216 */
230 217 static const struct host1x_sid_entry tegra234_sid_table[] = {
231 - {
232 - /* SE2 MMIO */
233 - .base = 0x1658,
234 - .offset = 0x90,
235 - .limit = 0x90
236 - },
237 - {
238 - /* SE4 MMIO */
239 - .base = 0x1660,
240 - .offset = 0x90,
241 - .limit = 0x90
242 - },
243 - {
244 - /* SE2 channel */
245 - .base = 0x1738,
246 - .offset = 0x90,
247 - .limit = 0x90
248 - },
249 - {
250 - /* SE4 channel */
251 - .base = 0x1740,
252 - .offset = 0x90,
253 - .limit = 0x90
254 - },
255 - {
256 - /* VIC channel */
257 - .base = 0x17b8,
258 - .offset = 0x30,
259 - .limit = 0x30
260 - },
261 - {
262 - /* VIC MMIO */
263 - .base = 0x1688,
264 - .offset = 0x34,
265 - .limit = 0x34
266 - },
267 - {
268 - /* NVDEC channel */
269 - .base = 0x17c8,
270 - .offset = 0x30,
271 - .limit = 0x30,
272 - },
273 - {
274 - /* NVDEC MMIO */
275 - .base = 0x1698,
276 - .offset = 0x34,
277 - .limit = 0x34,
278 - },
218 + { /* SE1 MMIO */ .base = 0x1650, .offset = 0x90, .limit = 0x90 },
219 + { /* SE1 ch */ .base = 0x1730, .offset = 0x90, .limit = 0x90 },
220 + { /* SE2 MMIO */ .base = 0x1658, .offset = 0x90, .limit = 0x90 },
221 + { /* SE2 ch */ .base = 0x1738, .offset = 0x90, .limit = 0x90 },
222 + { /* SE4 MMIO */ .base = 0x1660, .offset = 0x90, .limit = 0x90 },
223 + { /* SE4 ch */ .base = 0x1740, .offset = 0x90, .limit = 0x90 },
224 + { /* ISP MMIO */ .base = 0x1680, .offset = 0x800, .limit = 0x800 },
225 + { /* VIC MMIO */ .base = 0x1688, .offset = 0x34, .limit = 0x34 },
226 + { /* VIC ch */ .base = 0x17b8, .offset = 0x30, .limit = 0x30 },
227 + { /* NVENC MMIO */ .base = 0x1690, .offset = 0x34, .limit = 0x34 },
228 + { /* NVENC ch */ .base = 0x17c0, .offset = 0x30, .limit = 0x30 },
229 + { /* NVDEC MMIO */ .base = 0x1698, .offset = 0x34, .limit = 0x34 },
230 + { /* NVDEC ch */ .base = 0x17c8, .offset = 0x30, .limit = 0x30 },
231 + { /* NVJPG MMIO */ .base = 0x16a0, .offset = 0x34, .limit = 0x34 },
232 + { /* NVJPG ch */ .base = 0x17d0, .offset = 0x30, .limit = 0x30 },
233 + { /* TSEC MMIO */ .base = 0x16a8, .offset = 0x30, .limit = 0x34 },
234 + { /* NVJPG1 MMIO */ .base = 0x16b0, .offset = 0x34, .limit = 0x34 },
235 + { /* NVJPG1 ch */ .base = 0x17a8, .offset = 0x30, .limit = 0x30 },
236 + { /* VI MMIO */ .base = 0x16b8, .offset = 0x800, .limit = 0x800 },
237 + { /* VI_THI MMIO */ .base = 0x16c0, .offset = 0x30, .limit = 0x34 },
238 + { /* ISP_THI MMIO */ .base = 0x16c8, .offset = 0x30, .limit = 0x34 },
239 + { /* NVDLA MMIO */ .base = 0x16d8, .offset = 0x30, .limit = 0x34 },
240 + { /* NVDLA ch */ .base = 0x17e0, .offset = 0x30, .limit = 0x34 },
241 + { /* NVDLA1 MMIO */ .base = 0x16e0, .offset = 0x30, .limit = 0x34 },
242 + { /* NVDLA1 ch */ .base = 0x17e8, .offset = 0x30, .limit = 0x34 },
243 + { /* OFA MMIO */ .base = 0x16e8, .offset = 0x34, .limit = 0x34 },
244 + { /* OFA ch */ .base = 0x1768, .offset = 0x30, .limit = 0x30 },
245 + { /* VI2 MMIO */ .base = 0x16f0, .offset = 0x800, .limit = 0x800 },
246 + { /* VI2_THI MMIO */ .base = 0x16f8, .offset = 0x30, .limit = 0x34 },
279 247 };
280 248
281 249 static const struct host1x_info host1x08_info = {
+3 -3
drivers/gpu/host1x/dev.h
··· 175 175 };
176 176
177 177 void host1x_common_writel(struct host1x *host1x, u32 v, u32 r);
178 - void host1x_hypervisor_writel(struct host1x *host1x, u32 r, u32 v);
178 + void host1x_hypervisor_writel(struct host1x *host1x, u32 v, u32 r);
179 179 u32 host1x_hypervisor_readl(struct host1x *host1x, u32 r);
180 - void host1x_sync_writel(struct host1x *host1x, u32 r, u32 v);
180 + void host1x_sync_writel(struct host1x *host1x, u32 v, u32 r);
181 181 u32 host1x_sync_readl(struct host1x *host1x, u32 r);
182 - void host1x_ch_writel(struct host1x_channel *ch, u32 r, u32 v);
182 + void host1x_ch_writel(struct host1x_channel *ch, u32 v, u32 r);
183 183 u32 host1x_ch_readl(struct host1x_channel *ch, u32 r);
184 184
185 185 static inline void host1x_hw_syncpt_restore(struct host1x *host,
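The dev.h hunk above swaps the declared parameter order from (r, v) to (v, r) so the prototypes match the kernel's writel(value, register) convention used by the definitions. Because both parameters are u32, a mismatched header compiles cleanly and silently swaps arguments at every call site. A userspace toy showing the corrected convention (names invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Toy register file; stands in for memory-mapped I/O. */
static uint32_t regs[16];

/* Follows the kernel writel() convention: value first, register
 * offset second -- exactly what the fixed host1x prototypes declare.
 * With a (reg, value) prototype, callers would write regs[0xdeadbeef]
 * conceptually, with no compiler diagnostic to catch it. */
static void toy_ch_writel(uint32_t v, uint32_t r)
{
	regs[r] = v;
}

static uint32_t toy_ch_readl(uint32_t r)
{
	return regs[r];
}
```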
+12
drivers/gpu/host1x/hw/cdma_hw.c
··· 254 254 u32 offset;
255 255
256 256 switch (ch->client->class) {
257 + case HOST1X_CLASS_NVJPG1:
258 + offset = HOST1X_COMMON_NVJPG1_MLOCK;
259 + break;
260 + case HOST1X_CLASS_NVENC:
261 + offset = HOST1X_COMMON_NVENC_MLOCK;
262 + break;
257 263 case HOST1X_CLASS_VIC:
258 264 offset = HOST1X_COMMON_VIC_MLOCK;
259 265 break;
266 + case HOST1X_CLASS_NVJPG:
267 + offset = HOST1X_COMMON_NVJPG_MLOCK;
268 + break;
260 269 case HOST1X_CLASS_NVDEC:
261 270 offset = HOST1X_COMMON_NVDEC_MLOCK;
271 + break;
272 + case HOST1X_CLASS_OFA:
273 + offset = HOST1X_COMMON_OFA_MLOCK;
262 274 break;
263 275 default:
264 276 WARN(1, "%s was not updated for class %u", __func__, ch->client->class);
+12 -3
drivers/gpu/host1x/hw/debug_hw.c
··· 177 177
178 178 for (i = 0; i < words; i++) {
179 179 dma_addr_t addr = phys_addr + i * 4;
180 - u32 val = *(map_addr + offset / 4 + i);
180 + u32 voffset = offset + i * 4;
181 + u32 val;
182 +
183 + /* If we reach the RESTART opcode, continue at the beginning of pushbuffer */
184 + if (cdma && voffset >= cdma->push_buffer.size) {
185 + addr -= cdma->push_buffer.size;
186 + voffset -= cdma->push_buffer.size;
187 + }
188 +
189 + val = *(map_addr + voffset / 4);
181 190
182 191 if (!data_count) {
183 192 host1x_debug_output(o, " %pad: %08x: ", &addr, val);
···
212 203 job->num_slots, job->num_unpins);
213 204
214 205 show_gather(o, pb->dma + job->first_get, job->num_slots * 2, cdma,
215 - pb->dma + job->first_get, pb->mapped + job->first_get);
206 + pb->dma, pb->mapped);
216 207
217 208 for (i = 0; i < job->num_cmds; i++) {
218 209 struct host1x_job_gather *g;
···
236 227 host1x_debug_output(o, " GATHER at %pad+%#x, %d words\n",
237 228 &g->base, g->offset, g->words);
238 229
239 - show_gather(o, g->base + g->offset, g->words, cdma,
230 + show_gather(o, g->base + g->offset, g->words, NULL,
240 231 g->base, mapped);
241 232
242 233 if (!job->gather_copy_mapped)
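The debug_hw.c fix above accounts for the push buffer being a ring: once the dump offset walks past the end, the read must continue from the start instead of running off the mapping. The wrap itself reduces to the following (a simplified re-statement; the in-kernel branch also adjusts the printed DMA address, and gathers pass a NULL cdma so they are never wrapped):

```c
#include <assert.h>
#include <stdint.h>

/* Wrap a byte offset into a ring buffer of pb_size bytes, mirroring the
 * "continue at the beginning of pushbuffer" branch in show_gather().
 * The dump only ever overshoots by less than one buffer length, so a
 * single subtraction suffices; a general helper would use %. */
static uint32_t pb_wrap(uint32_t voffset, uint32_t pb_size)
{
	return voffset >= pb_size ? voffset - pb_size : voffset;
}
```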
+13
include/drm/bridge/imx.h
··· 1 + // SPDX-License-Identifier: GPL-2.0+
2 + /*
3 + * Copyright (C) 2012 Sascha Hauer, Pengutronix
4 + */
5 +
6 + #ifndef DRM_IMX_BRIDGE_H
7 + #define DRM_IMX_BRIDGE_H
8 +
9 + struct drm_bridge *devm_imx_drm_legacy_bridge(struct device *dev,
10 + struct device_node *np,
11 + int type);
12 +
13 + #endif
-13
include/drm/drm_gem_vram_helper.h
··· 17 17 struct drm_mode_create_dumb;
18 18 struct drm_plane;
19 19 struct drm_plane_state;
20 - struct drm_simple_display_pipe;
21 20 struct filp;
22 21 struct vm_area_struct;
23 22
···
135 136 #define DRM_GEM_VRAM_PLANE_HELPER_FUNCS \
136 137 .prepare_fb = drm_gem_vram_plane_helper_prepare_fb, \
137 138 .cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb
138 -
139 - /*
140 - * Helpers for struct drm_simple_display_pipe_funcs
141 - */
142 -
143 - int drm_gem_vram_simple_display_pipe_prepare_fb(
144 - struct drm_simple_display_pipe *pipe,
145 - struct drm_plane_state *new_state);
146 -
147 - void drm_gem_vram_simple_display_pipe_cleanup_fb(
148 - struct drm_simple_display_pipe *pipe,
149 - struct drm_plane_state *old_state);
150 139
151 140 /**
152 141 * define DRM_GEM_VRAM_DRIVER - default callback functions for
+14
include/drm/drm_panic.h
··· 64 64
65 65 };
66 66
67 + #ifdef CONFIG_DRM_PANIC
68 +
67 69 /**
68 70 * drm_panic_trylock - try to enter the panic printing critical section
69 71 * @dev: struct drm_device
···
150 148 */
151 149 #define drm_panic_unlock(dev, flags) \
152 150 raw_spin_unlock_irqrestore(&(dev)->mode_config.panic_lock, flags)
151 +
152 + #else
153 +
154 + static inline bool drm_panic_trylock(struct drm_device *dev, unsigned long flags)
155 + {
156 + return true;
157 + }
158 +
159 + static inline void drm_panic_lock(struct drm_device *dev, unsigned long flags) {}
160 + static inline void drm_panic_unlock(struct drm_device *dev, unsigned long flags) {}
161 +
162 + #endif
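The drm_panic.h hunk above (fixing the uninitialized-spinlock issue with CONFIG_DRM_PANIC=n noted in the core changes) uses the standard kernel pattern for optional features: real helpers under #ifdef, static inline no-ops otherwise, so call sites compile unchanged either way. Note that the disabled trylock stub returns true, meaning "nothing to contend with". A minimal userspace illustration of the pattern (names invented):

```c
#include <assert.h>
#include <stdbool.h>

/* Compile-time switch standing in for CONFIG_DRM_PANIC.
 * Left undefined here, so the stub branch below is compiled. */
/* #define CONFIG_TOY_PANIC 1 */

#ifdef CONFIG_TOY_PANIC
/* Real implementation would take a lock protecting the panic path. */
static bool toy_panic_trylock(void) { /* real locking here */ return true; }
static void toy_panic_unlock(void)  { /* real unlocking here */ }
#else
/* Feature disabled: no lock exists, so never touch one. trylock
 * reports success so guarded sections still run. */
static inline bool toy_panic_trylock(void) { return true; }
static inline void toy_panic_unlock(void)  { }
#endif
```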
+1 -1
include/drm/gpu_scheduler.h
··· 579 579 void drm_sched_wqueue_stop(struct drm_gpu_scheduler *sched);
580 580 void drm_sched_wqueue_start(struct drm_gpu_scheduler *sched);
581 581 void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad);
582 - void drm_sched_start(struct drm_gpu_scheduler *sched);
582 + void drm_sched_start(struct drm_gpu_scheduler *sched, int errno);
583 583 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
584 584 void drm_sched_increase_karma(struct drm_sched_job *bad);
585 585 void drm_sched_reset_karma(struct drm_sched_job *bad);
+6
include/linux/dma-fence.h
··· 574 574 * rather than success. This must be set before signaling (so that the value
575 575 * is visible before any waiters on the signal callback are woken). This
576 576 * helper exists to help catching erroneous setting of #dma_fence.error.
577 + *
578 + * Examples of error codes which drivers should use:
579 + *
580 + * * %-ENODATA This operation produced no data, no other operation affected.
581 + * * %-ECANCELED All operations from the same context have been canceled.
582 + * * %-ETIME Operation caused a timeout and potentially device reset.
577 583 */
578 584 static inline void dma_fence_set_error(struct dma_fence *fence,
579 585 int error)
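The dma-fence.h hunk above documents the default errnos for fences mentioned in the cross-subsystem changes. A userspace model of the two rules it encodes (the error must be set before the fence signals, and should be one of the suggested codes) might look like this; it is an illustration only, not the kernel helper:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy stand-in for struct dma_fence: just the fields the rules touch. */
struct toy_fence {
	bool signaled;
	int error;
};

/* Models dma_fence_set_error()'s contract: reject a late set (waiters
 * may already be awake) and anything outside the documented codes. */
static int toy_fence_set_error(struct toy_fence *f, int error)
{
	if (f->signaled)
		return -1;	/* too late: must be set before signaling */

	switch (error) {
	case -ENODATA:		/* no data produced, others unaffected */
	case -ECANCELED:	/* whole context canceled */
	case -ETIME:		/* timeout, potentially device reset */
		f->error = error;
		return 0;
	default:
		return -1;	/* not one of the suggested codes */
	}
}
```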
+5
include/linux/host1x.h
··· 14 14
15 15 enum host1x_class {
16 16 HOST1X_CLASS_HOST1X = 0x1,
17 + HOST1X_CLASS_NVJPG1 = 0x7,
18 + HOST1X_CLASS_NVENC = 0x21,
19 + HOST1X_CLASS_NVENC1 = 0x22,
17 20 HOST1X_CLASS_GR2D = 0x51,
18 21 HOST1X_CLASS_GR2D_SB = 0x52,
19 22 HOST1X_CLASS_VIC = 0x5D,
20 23 HOST1X_CLASS_GR3D = 0x60,
24 + HOST1X_CLASS_NVJPG = 0xC0,
21 25 HOST1X_CLASS_NVDEC = 0xF0,
22 26 HOST1X_CLASS_NVDEC1 = 0xF5,
27 + HOST1X_CLASS_OFA = 0xF8,
23 28 };
24 29
25 30 struct host1x;
+1 -1
include/linux/host1x_context_bus.h
··· 9 9 #include <linux/device.h>
10 10
11 11 #ifdef CONFIG_TEGRA_HOST1X_CONTEXT_BUS
12 - extern struct bus_type host1x_context_device_bus_type;
12 + extern const struct bus_type host1x_context_device_bus_type;
13 13 #endif
14 14
15 15 #endif
+3
include/uapi/drm/panfrost_drm.h
··· 40 40 #define DRM_IOCTL_PANFROST_PERFCNT_DUMP DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_PERFCNT_DUMP, struct drm_panfrost_perfcnt_dump)
41 41
42 42 #define PANFROST_JD_REQ_FS (1 << 0)
43 + #define PANFROST_JD_REQ_CYCLE_COUNT (1 << 1)
43 44 /**
44 45 * struct drm_panfrost_submit - ioctl argument for submitting commands to the 3D
45 46 * engine.
···
173 172 DRM_PANFROST_PARAM_NR_CORE_GROUPS,
174 173 DRM_PANFROST_PARAM_THREAD_TLS_ALLOC,
175 174 DRM_PANFROST_PARAM_AFBC_FEATURES,
175 + DRM_PANFROST_PARAM_SYSTEM_TIMESTAMP,
176 + DRM_PANFROST_PARAM_SYSTEM_TIMESTAMP_FREQUENCY,
176 177 };
177 178
178 179 struct drm_panfrost_get_param {
+22
include/uapi/drm/panthor_drm.h
··· 260 260
261 261 /** @DRM_PANTHOR_DEV_QUERY_CSIF_INFO: Query command-stream interface information. */
262 262 DRM_PANTHOR_DEV_QUERY_CSIF_INFO,
263 +
264 + /** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
265 + DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
263 266 };
264 267
265 268 /**
···
378 375 * @pad: Padding field, set to zero.
379 376 */
380 377 __u32 pad;
378 + };
379 +
380 + /**
381 + * struct drm_panthor_timestamp_info - Timestamp information
382 + *
383 + * Structure grouping all queryable information relating to the GPU timestamp.
384 + */
385 + struct drm_panthor_timestamp_info {
386 + /**
387 + * @timestamp_frequency: The frequency of the timestamp timer or 0 if
388 + * unknown.
389 + */
390 + __u64 timestamp_frequency;
391 +
392 + /** @current_timestamp: The current timestamp. */
393 + __u64 current_timestamp;
394 +
395 + /** @timestamp_offset: The offset of the timestamp timer. */
396 + __u64 timestamp_offset;
381 397 };
382 398
383 399 /**
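A userspace consumer of the new DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO query would typically convert raw timestamp ticks into wall-clock durations using the reported frequency. A hedged sketch (the helper name is invented; only the struct fields and the "frequency may be 0 if unknown" behavior come from the UAPI above):

```c
#include <assert.h>
#include <stdint.h>

/* Convert raw GPU timestamp ticks into nanoseconds using the frequency
 * reported in drm_panthor_timestamp_info. Returns 0 when the frequency
 * is unknown (documented as timestamp_frequency == 0). Note that
 * ticks * 1e9 can overflow u64 for very large tick counts; production
 * code would use a 128-bit intermediate or mul_u64_u64_div_u64(). */
static uint64_t panthor_ticks_to_ns(uint64_t ticks, uint64_t freq_hz)
{
	if (!freq_hz)
		return 0;	/* frequency unknown, cannot convert */

	return ticks * 1000000000ull / freq_hz;
}
```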