Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2024-04-19' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.10-rc1:

UAPI Changes:
- Add SIZE_HINTS property for cursor planes.

Cross-subsystem Changes:

Core Changes:
- Document the requirements and expectations of adding new
driver-specific properties.
- Assorted small fixes to ttm.
- More Kconfig fixes.
- Add struct drm_edid_product_id and helpers.
- Use drm device based logging in more drm functions.
- Fixes for drm-panic, and option to test it.
- Assorted small fixes and updates to edid.
- Add drm_crtc_vblank_crtc and use it in vkms, nouveau.

Driver Changes:
- Assorted small fixes and improvements to bridge/imx8mp-hdmi-tx, nouveau, ast, qaic, lima, vc4, bridge/anx7625, mipi-dsi.
- Add drm panic to simpledrm, mgag200, imx, ast.
- Use dev_err_probe in bridge/panel drivers.
- Add Innolux G121X1-L03, LG sw43408 panels.
- Use struct drm_edid in i915 bios parsing.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/2dc1b7c6-1743-4ddd-ad42-36f700234fbe@linux.intel.com

+3481 -519
+62
Documentation/devicetree/bindings/display/panel/lg,sw43408.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/lg,sw43408.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: LG SW43408 1080x2160 DSI panel 8 + 9 + maintainers: 10 + - Caleb Connolly <caleb.connolly@linaro.org> 11 + 12 + description: 13 + This panel is used on the Pixel 3, it is a 60hz OLED panel which 14 + required DSC (Display Stream Compression) and has rounded corners. 15 + 16 + allOf: 17 + - $ref: panel-common.yaml# 18 + 19 + properties: 20 + compatible: 21 + items: 22 + - const: lg,sw43408 23 + 24 + reg: true 25 + port: true 26 + vddi-supply: true 27 + vpnl-supply: true 28 + reset-gpios: true 29 + 30 + required: 31 + - compatible 32 + - vddi-supply 33 + - vpnl-supply 34 + - reset-gpios 35 + 36 + additionalProperties: false 37 + 38 + examples: 39 + - | 40 + #include <dt-bindings/gpio/gpio.h> 41 + 42 + dsi { 43 + #address-cells = <1>; 44 + #size-cells = <0>; 45 + 46 + panel@0 { 47 + compatible = "lg,sw43408"; 48 + reg = <0>; 49 + 50 + vddi-supply = <&vreg_l14a_1p88>; 51 + vpnl-supply = <&vreg_l28a_3p0>; 52 + 53 + reset-gpios = <&tlmm 6 GPIO_ACTIVE_LOW>; 54 + 55 + port { 56 + endpoint { 57 + remote-endpoint = <&mdss_dsi0_out>; 58 + }; 59 + }; 60 + }; 61 + }; 62 + ...
+2
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
··· 190 190 - innolux,g121i1-l01 191 191 # Innolux Corporation 12.1" G121X1-L03 XGA (1024x768) TFT LCD panel 192 192 - innolux,g121x1-l03 193 + # Innolux Corporation 12.1" G121XCE-L01 XGA (1024x768) TFT LCD panel 194 + - innolux,g121xce-l01 193 195 # Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel 194 196 - innolux,n116bca-ea1 195 197 # Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
+1 -1
Documentation/driver-api/dma-buf.rst
··· 77 77 the usual size discover pattern size = SEEK_END(0); SEEK_SET(0). Every other 78 78 llseek operation will report -EINVAL. 79 79 80 - If llseek on dma-buf FDs isn't support the kernel will report -ESPIPE for all 80 + If llseek on dma-buf FDs isn't supported the kernel will report -ESPIPE for all 81 81 cases. Userspace can use this to detect support for discovering the dma-buf 82 82 size using llseek. 83 83
+22
Documentation/gpu/drm-kms.rst
··· 398 398 .. kernel-doc:: include/drm/drm_damage_helper.h 399 399 :internal: 400 400 401 + Plane Panic Feature 402 + ------------------- 403 + 404 + .. kernel-doc:: drivers/gpu/drm/drm_panic.c 405 + :doc: overview 406 + 407 + Plane Panic Functions Reference 408 + ------------------------------- 409 + 410 + .. kernel-doc:: include/drm/drm_panic.h 411 + :internal: 412 + 413 + .. kernel-doc:: drivers/gpu/drm/drm_panic.c 414 + :export: 415 + 401 416 Display Modes Function Reference 402 417 ================================ 403 418 ··· 510 495 system in during boot. 511 496 512 497 * An IGT test must be submitted where reasonable. 498 + 499 + For historical reasons, non-standard, driver-specific properties exist. If a KMS 500 + driver wants to add support for one of those properties, the requirements for 501 + new properties apply where possible. Additionally, the documented behavior must 502 + match the de facto semantics of the existing property to ensure compatibility. 503 + Developers of the driver that first added the property should help with those 504 + tasks and must ACK the documented behavior if possible. 513 505 514 506 Property Types and Blob Property Support 515 507 ----------------------------------------
+8
MAINTAINERS
··· 6764 6764 F: Documentation/devicetree/bindings/display/panel/jadard,jd9365da-h3.yaml 6765 6765 F: drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c 6766 6766 6767 + DRM DRIVER FOR LG SW43408 PANELS 6768 + M: Sumit Semwal <sumit.semwal@linaro.org> 6769 + M: Caleb Connolly <caleb.connolly@linaro.org> 6770 + S: Maintained 6771 + T: git git://anongit.freedesktop.org/drm/drm-misc 6772 + F: Documentation/devicetree/bindings/display/panel/lg,sw43408.yaml 6773 + F: drivers/gpu/drm/panel/panel-lg-sw43408.c 6774 + 6767 6775 DRM DRIVER FOR LOGICVC DISPLAY CONTROLLER 6768 6776 M: Paul Kocialkowski <paul.kocialkowski@bootlin.com> 6769 6777 S: Supported
+2 -1
drivers/accel/qaic/Makefile
··· 10 10 qaic_control.o \ 11 11 qaic_data.o \ 12 12 qaic_drv.o \ 13 - qaic_timesync.o 13 + qaic_timesync.o \ 14 + sahara.o 14 15 15 16 qaic-$(CONFIG_DEBUG_FS) += qaic_debugfs.o
+3 -3
drivers/accel/qaic/qaic_debugfs.h
··· 13 13 void qaic_bootlog_unregister(void); 14 14 void qaic_debugfs_init(struct qaic_drm_device *qddev); 15 15 #else 16 - int qaic_bootlog_register(void) { return 0; } 17 - void qaic_bootlog_unregister(void) {} 18 - void qaic_debugfs_init(struct qaic_drm_device *qddev) {} 16 + static inline int qaic_bootlog_register(void) { return 0; } 17 + static inline void qaic_bootlog_unregister(void) {} 18 + static inline void qaic_debugfs_init(struct qaic_drm_device *qddev) {} 19 19 #endif /* CONFIG_DEBUG_FS */ 20 20 #endif /* __QAIC_DEBUGFS_H__ */
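The qaic_debugfs.h hunk fixes a classic header pitfall: without `static inline`, each `CONFIG_DEBUG_FS=n` stub is an external definition emitted by every translation unit that includes the header, and linking two such units fails with multiple-definition errors. A minimal userspace sketch of the pattern (names are illustrative, not the qaic API):

```c
#include <assert.h>

/* When the feature is compiled out, header stubs must be `static inline`:
 * a plain definition would be emitted by every .c file that includes this
 * header, and the final link would fail with "multiple definition" errors. */
#ifdef CONFIG_FEATURE
int feature_register(void);
void feature_unregister(void);
#else
static inline int feature_register(void) { return 0; }
static inline void feature_unregister(void) { }
#endif
```

With the feature compiled out, the stubs are no-ops that the compiler can fold away entirely at each call site.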
+10
drivers/accel/qaic/qaic_drv.c
··· 30 30 #include "qaic.h" 31 31 #include "qaic_debugfs.h" 32 32 #include "qaic_timesync.h" 33 + #include "sahara.h" 33 34 34 35 MODULE_IMPORT_NS(DMA_BUF); 35 36 ··· 645 644 goto free_pci; 646 645 } 647 646 647 + ret = sahara_register(); 648 + if (ret) { 649 + pr_debug("qaic: sahara_register failed %d\n", ret); 650 + goto free_mhi; 651 + } 652 + 648 653 ret = qaic_timesync_init(); 649 654 if (ret) 650 655 pr_debug("qaic: qaic_timesync_init failed %d\n", ret); ··· 661 654 662 655 return 0; 663 656 657 + free_mhi: 658 + mhi_driver_unregister(&qaic_mhi_driver); 664 659 free_pci: 665 660 pci_unregister_driver(&qaic_pci_driver); 666 661 return ret; ··· 688 679 link_up = true; 689 680 qaic_bootlog_unregister(); 690 681 qaic_timesync_deinit(); 682 + sahara_unregister(); 691 683 mhi_driver_unregister(&qaic_mhi_driver); 692 684 pci_unregister_driver(&qaic_pci_driver); 693 685 }
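The new `free_mhi` label keeps the init path's unwind ordering intact: each successfully registered component is torn down in reverse order when a later step fails. A small userspace sketch of that goto-unwind pattern (illustrative names, not the qaic functions):

```c
#include <assert.h>

/* Illustrative sketch: a chain of registrations where each failure
 * unwinds exactly the steps that already succeeded, in reverse order --
 * the same shape as the free_mhi/free_pci labels in qaic_drv.c. */

static int registered_a, registered_b;

static int register_a(void) { registered_a = 1; return 0; }
static void unregister_a(void) { registered_a = 0; }
static int register_b_fail(void) { return -1; } /* simulated failure */

static int init_chain(void)
{
	int ret;

	ret = register_a();
	if (ret)
		goto out;

	ret = register_b_fail();
	if (ret)
		goto unwind_a;	/* only undo what already succeeded */

	registered_b = 1;
	return 0;

unwind_a:
	unregister_a();
out:
	return ret;
}
```

After a mid-chain failure, the function reports the error and leaves nothing registered.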
+449
drivers/accel/qaic/sahara.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + /* Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved. */ 4 + 5 + #include <linux/firmware.h> 6 + #include <linux/limits.h> 7 + #include <linux/mhi.h> 8 + #include <linux/minmax.h> 9 + #include <linux/mod_devicetable.h> 10 + #include <linux/overflow.h> 11 + #include <linux/types.h> 12 + #include <linux/workqueue.h> 13 + 14 + #include "sahara.h" 15 + 16 + #define SAHARA_HELLO_CMD 0x1 /* Min protocol version 1.0 */ 17 + #define SAHARA_HELLO_RESP_CMD 0x2 /* Min protocol version 1.0 */ 18 + #define SAHARA_READ_DATA_CMD 0x3 /* Min protocol version 1.0 */ 19 + #define SAHARA_END_OF_IMAGE_CMD 0x4 /* Min protocol version 1.0 */ 20 + #define SAHARA_DONE_CMD 0x5 /* Min protocol version 1.0 */ 21 + #define SAHARA_DONE_RESP_CMD 0x6 /* Min protocol version 1.0 */ 22 + #define SAHARA_RESET_CMD 0x7 /* Min protocol version 1.0 */ 23 + #define SAHARA_RESET_RESP_CMD 0x8 /* Min protocol version 1.0 */ 24 + #define SAHARA_MEM_DEBUG_CMD 0x9 /* Min protocol version 2.0 */ 25 + #define SAHARA_MEM_READ_CMD 0xa /* Min protocol version 2.0 */ 26 + #define SAHARA_CMD_READY_CMD 0xb /* Min protocol version 2.1 */ 27 + #define SAHARA_SWITCH_MODE_CMD 0xc /* Min protocol version 2.1 */ 28 + #define SAHARA_EXECUTE_CMD 0xd /* Min protocol version 2.1 */ 29 + #define SAHARA_EXECUTE_RESP_CMD 0xe /* Min protocol version 2.1 */ 30 + #define SAHARA_EXECUTE_DATA_CMD 0xf /* Min protocol version 2.1 */ 31 + #define SAHARA_MEM_DEBUG64_CMD 0x10 /* Min protocol version 2.5 */ 32 + #define SAHARA_MEM_READ64_CMD 0x11 /* Min protocol version 2.5 */ 33 + #define SAHARA_READ_DATA64_CMD 0x12 /* Min protocol version 2.8 */ 34 + #define SAHARA_RESET_STATE_CMD 0x13 /* Min protocol version 2.9 */ 35 + #define SAHARA_WRITE_DATA_CMD 0x14 /* Min protocol version 3.0 */ 36 + 37 + #define SAHARA_PACKET_MAX_SIZE 0xffffU /* MHI_MAX_MTU */ 38 + #define SAHARA_TRANSFER_MAX_SIZE 0x80000 39 + #define SAHARA_NUM_TX_BUF 
DIV_ROUND_UP(SAHARA_TRANSFER_MAX_SIZE,\ 40 + SAHARA_PACKET_MAX_SIZE) 41 + #define SAHARA_IMAGE_ID_NONE U32_MAX 42 + 43 + #define SAHARA_VERSION 2 44 + #define SAHARA_SUCCESS 0 45 + 46 + #define SAHARA_MODE_IMAGE_TX_PENDING 0x0 47 + #define SAHARA_MODE_IMAGE_TX_COMPLETE 0x1 48 + #define SAHARA_MODE_MEMORY_DEBUG 0x2 49 + #define SAHARA_MODE_COMMAND 0x3 50 + 51 + #define SAHARA_HELLO_LENGTH 0x30 52 + #define SAHARA_READ_DATA_LENGTH 0x14 53 + #define SAHARA_END_OF_IMAGE_LENGTH 0x10 54 + #define SAHARA_DONE_LENGTH 0x8 55 + #define SAHARA_RESET_LENGTH 0x8 56 + 57 + struct sahara_packet { 58 + __le32 cmd; 59 + __le32 length; 60 + 61 + union { 62 + struct { 63 + __le32 version; 64 + __le32 version_compat; 65 + __le32 max_length; 66 + __le32 mode; 67 + } hello; 68 + struct { 69 + __le32 version; 70 + __le32 version_compat; 71 + __le32 status; 72 + __le32 mode; 73 + } hello_resp; 74 + struct { 75 + __le32 image; 76 + __le32 offset; 77 + __le32 length; 78 + } read_data; 79 + struct { 80 + __le32 image; 81 + __le32 status; 82 + } end_of_image; 83 + }; 84 + }; 85 + 86 + struct sahara_context { 87 + struct sahara_packet *tx[SAHARA_NUM_TX_BUF]; 88 + struct sahara_packet *rx; 89 + struct work_struct work; 90 + struct mhi_device *mhi_dev; 91 + const char **image_table; 92 + u32 table_size; 93 + u32 active_image_id; 94 + const struct firmware *firmware; 95 + }; 96 + 97 + static const char *aic100_image_table[] = { 98 + [1] = "qcom/aic100/fw1.bin", 99 + [2] = "qcom/aic100/fw2.bin", 100 + [4] = "qcom/aic100/fw4.bin", 101 + [5] = "qcom/aic100/fw5.bin", 102 + [6] = "qcom/aic100/fw6.bin", 103 + [8] = "qcom/aic100/fw8.bin", 104 + [9] = "qcom/aic100/fw9.bin", 105 + [10] = "qcom/aic100/fw10.bin", 106 + }; 107 + 108 + static int sahara_find_image(struct sahara_context *context, u32 image_id) 109 + { 110 + int ret; 111 + 112 + if (image_id == context->active_image_id) 113 + return 0; 114 + 115 + if (context->active_image_id != SAHARA_IMAGE_ID_NONE) { 116 + dev_err(&context->mhi_dev->dev, 
"image id %d is not valid as %d is active\n", 117 + image_id, context->active_image_id); 118 + return -EINVAL; 119 + } 120 + 121 + if (image_id >= context->table_size || !context->image_table[image_id]) { 122 + dev_err(&context->mhi_dev->dev, "request for unknown image: %d\n", image_id); 123 + return -EINVAL; 124 + } 125 + 126 + /* 127 + * This image might be optional. The device may continue without it. 128 + * Only the device knows. Suppress error messages that could suggest an 129 + * a problem when we were actually able to continue. 130 + */ 131 + ret = firmware_request_nowarn(&context->firmware, 132 + context->image_table[image_id], 133 + &context->mhi_dev->dev); 134 + if (ret) { 135 + dev_dbg(&context->mhi_dev->dev, "request for image id %d / file %s failed %d\n", 136 + image_id, context->image_table[image_id], ret); 137 + return ret; 138 + } 139 + 140 + context->active_image_id = image_id; 141 + 142 + return 0; 143 + } 144 + 145 + static void sahara_release_image(struct sahara_context *context) 146 + { 147 + if (context->active_image_id != SAHARA_IMAGE_ID_NONE) 148 + release_firmware(context->firmware); 149 + context->active_image_id = SAHARA_IMAGE_ID_NONE; 150 + } 151 + 152 + static void sahara_send_reset(struct sahara_context *context) 153 + { 154 + int ret; 155 + 156 + context->tx[0]->cmd = cpu_to_le32(SAHARA_RESET_CMD); 157 + context->tx[0]->length = cpu_to_le32(SAHARA_RESET_LENGTH); 158 + 159 + ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, context->tx[0], 160 + SAHARA_RESET_LENGTH, MHI_EOT); 161 + if (ret) 162 + dev_err(&context->mhi_dev->dev, "Unable to send reset response %d\n", ret); 163 + } 164 + 165 + static void sahara_hello(struct sahara_context *context) 166 + { 167 + int ret; 168 + 169 + dev_dbg(&context->mhi_dev->dev, 170 + "HELLO cmd received. 
length:%d version:%d version_compat:%d max_length:%d mode:%d\n", 171 + le32_to_cpu(context->rx->length), 172 + le32_to_cpu(context->rx->hello.version), 173 + le32_to_cpu(context->rx->hello.version_compat), 174 + le32_to_cpu(context->rx->hello.max_length), 175 + le32_to_cpu(context->rx->hello.mode)); 176 + 177 + if (le32_to_cpu(context->rx->length) != SAHARA_HELLO_LENGTH) { 178 + dev_err(&context->mhi_dev->dev, "Malformed hello packet - length %d\n", 179 + le32_to_cpu(context->rx->length)); 180 + return; 181 + } 182 + if (le32_to_cpu(context->rx->hello.version) != SAHARA_VERSION) { 183 + dev_err(&context->mhi_dev->dev, "Unsupported hello packet - version %d\n", 184 + le32_to_cpu(context->rx->hello.version)); 185 + return; 186 + } 187 + 188 + if (le32_to_cpu(context->rx->hello.mode) != SAHARA_MODE_IMAGE_TX_PENDING && 189 + le32_to_cpu(context->rx->hello.mode) != SAHARA_MODE_IMAGE_TX_COMPLETE) { 190 + dev_err(&context->mhi_dev->dev, "Unsupported hello packet - mode %d\n", 191 + le32_to_cpu(context->rx->hello.mode)); 192 + return; 193 + } 194 + 195 + context->tx[0]->cmd = cpu_to_le32(SAHARA_HELLO_RESP_CMD); 196 + context->tx[0]->length = cpu_to_le32(SAHARA_HELLO_LENGTH); 197 + context->tx[0]->hello_resp.version = cpu_to_le32(SAHARA_VERSION); 198 + context->tx[0]->hello_resp.version_compat = cpu_to_le32(SAHARA_VERSION); 199 + context->tx[0]->hello_resp.status = cpu_to_le32(SAHARA_SUCCESS); 200 + context->tx[0]->hello_resp.mode = context->rx->hello_resp.mode; 201 + 202 + ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, context->tx[0], 203 + SAHARA_HELLO_LENGTH, MHI_EOT); 204 + if (ret) 205 + dev_err(&context->mhi_dev->dev, "Unable to send hello response %d\n", ret); 206 + } 207 + 208 + static void sahara_read_data(struct sahara_context *context) 209 + { 210 + u32 image_id, data_offset, data_len, pkt_data_len; 211 + int ret; 212 + int i; 213 + 214 + dev_dbg(&context->mhi_dev->dev, 215 + "READ_DATA cmd received. 
length:%d image:%d offset:%d data_length:%d\n", 216 + le32_to_cpu(context->rx->length), 217 + le32_to_cpu(context->rx->read_data.image), 218 + le32_to_cpu(context->rx->read_data.offset), 219 + le32_to_cpu(context->rx->read_data.length)); 220 + 221 + if (le32_to_cpu(context->rx->length) != SAHARA_READ_DATA_LENGTH) { 222 + dev_err(&context->mhi_dev->dev, "Malformed read_data packet - length %d\n", 223 + le32_to_cpu(context->rx->length)); 224 + return; 225 + } 226 + 227 + image_id = le32_to_cpu(context->rx->read_data.image); 228 + data_offset = le32_to_cpu(context->rx->read_data.offset); 229 + data_len = le32_to_cpu(context->rx->read_data.length); 230 + 231 + ret = sahara_find_image(context, image_id); 232 + if (ret) { 233 + sahara_send_reset(context); 234 + return; 235 + } 236 + 237 + /* 238 + * Image is released when the device is done with it via 239 + * SAHARA_END_OF_IMAGE_CMD. sahara_send_reset() will either cause the 240 + * device to retry the operation with a modification, or decide to be 241 + * done with the image and trigger SAHARA_END_OF_IMAGE_CMD. 242 + * release_image() is called from SAHARA_END_OF_IMAGE_CMD. processing 243 + * and is not needed here on error. 
244 + */ 245 + 246 + if (data_len > SAHARA_TRANSFER_MAX_SIZE) { 247 + dev_err(&context->mhi_dev->dev, "Malformed read_data packet - data len %d exceeds max xfer size %d\n", 248 + data_len, SAHARA_TRANSFER_MAX_SIZE); 249 + sahara_send_reset(context); 250 + return; 251 + } 252 + 253 + if (data_offset >= context->firmware->size) { 254 + dev_err(&context->mhi_dev->dev, "Malformed read_data packet - data offset %d exceeds file size %zu\n", 255 + data_offset, context->firmware->size); 256 + sahara_send_reset(context); 257 + return; 258 + } 259 + 260 + if (size_add(data_offset, data_len) > context->firmware->size) { 261 + dev_err(&context->mhi_dev->dev, "Malformed read_data packet - data offset %d and length %d exceeds file size %zu\n", 262 + data_offset, data_len, context->firmware->size); 263 + sahara_send_reset(context); 264 + return; 265 + } 266 + 267 + for (i = 0; i < SAHARA_NUM_TX_BUF && data_len; ++i) { 268 + pkt_data_len = min(data_len, SAHARA_PACKET_MAX_SIZE); 269 + 270 + memcpy(context->tx[i], &context->firmware->data[data_offset], pkt_data_len); 271 + 272 + data_offset += pkt_data_len; 273 + data_len -= pkt_data_len; 274 + 275 + ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, 276 + context->tx[i], pkt_data_len, 277 + !data_len ? MHI_EOT : MHI_CHAIN); 278 + if (ret) { 279 + dev_err(&context->mhi_dev->dev, "Unable to send read_data response %d\n", 280 + ret); 281 + return; 282 + } 283 + } 284 + } 285 + 286 + static void sahara_end_of_image(struct sahara_context *context) 287 + { 288 + int ret; 289 + 290 + dev_dbg(&context->mhi_dev->dev, 291 + "END_OF_IMAGE cmd received. 
length:%d image:%d status:%d\n", 292 + le32_to_cpu(context->rx->length), 293 + le32_to_cpu(context->rx->end_of_image.image), 294 + le32_to_cpu(context->rx->end_of_image.status)); 295 + 296 + if (le32_to_cpu(context->rx->length) != SAHARA_END_OF_IMAGE_LENGTH) { 297 + dev_err(&context->mhi_dev->dev, "Malformed end_of_image packet - length %d\n", 298 + le32_to_cpu(context->rx->length)); 299 + return; 300 + } 301 + 302 + if (context->active_image_id != SAHARA_IMAGE_ID_NONE && 303 + le32_to_cpu(context->rx->end_of_image.image) != context->active_image_id) { 304 + dev_err(&context->mhi_dev->dev, "Malformed end_of_image packet - image %d is not the active image\n", 305 + le32_to_cpu(context->rx->end_of_image.image)); 306 + return; 307 + } 308 + 309 + sahara_release_image(context); 310 + 311 + if (le32_to_cpu(context->rx->end_of_image.status)) 312 + return; 313 + 314 + context->tx[0]->cmd = cpu_to_le32(SAHARA_DONE_CMD); 315 + context->tx[0]->length = cpu_to_le32(SAHARA_DONE_LENGTH); 316 + 317 + ret = mhi_queue_buf(context->mhi_dev, DMA_TO_DEVICE, context->tx[0], 318 + SAHARA_DONE_LENGTH, MHI_EOT); 319 + if (ret) 320 + dev_dbg(&context->mhi_dev->dev, "Unable to send done response %d\n", ret); 321 + } 322 + 323 + static void sahara_processing(struct work_struct *work) 324 + { 325 + struct sahara_context *context = container_of(work, struct sahara_context, work); 326 + int ret; 327 + 328 + switch (le32_to_cpu(context->rx->cmd)) { 329 + case SAHARA_HELLO_CMD: 330 + sahara_hello(context); 331 + break; 332 + case SAHARA_READ_DATA_CMD: 333 + sahara_read_data(context); 334 + break; 335 + case SAHARA_END_OF_IMAGE_CMD: 336 + sahara_end_of_image(context); 337 + break; 338 + case SAHARA_DONE_RESP_CMD: 339 + /* Intentional do nothing as we don't need to exit an app */ 340 + break; 341 + default: 342 + dev_err(&context->mhi_dev->dev, "Unknown command %d\n", 343 + le32_to_cpu(context->rx->cmd)); 344 + break; 345 + } 346 + 347 + ret = mhi_queue_buf(context->mhi_dev, DMA_FROM_DEVICE, 
context->rx, 348 + SAHARA_PACKET_MAX_SIZE, MHI_EOT); 349 + if (ret) 350 + dev_err(&context->mhi_dev->dev, "Unable to requeue rx buf %d\n", ret); 351 + } 352 + 353 + static int sahara_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id) 354 + { 355 + struct sahara_context *context; 356 + int ret; 357 + int i; 358 + 359 + context = devm_kzalloc(&mhi_dev->dev, sizeof(*context), GFP_KERNEL); 360 + if (!context) 361 + return -ENOMEM; 362 + 363 + context->rx = devm_kzalloc(&mhi_dev->dev, SAHARA_PACKET_MAX_SIZE, GFP_KERNEL); 364 + if (!context->rx) 365 + return -ENOMEM; 366 + 367 + /* 368 + * AIC100 defines SAHARA_TRANSFER_MAX_SIZE as the largest value it 369 + * will request for READ_DATA. This is larger than 370 + * SAHARA_PACKET_MAX_SIZE, and we need 9x SAHARA_PACKET_MAX_SIZE to 371 + * cover SAHARA_TRANSFER_MAX_SIZE. When the remote side issues a 372 + * READ_DATA, it requires a transfer of the exact size requested. We 373 + * can use MHI_CHAIN to link multiple buffers into a single transfer 374 + * but the remote side will not consume the buffers until it sees an 375 + * EOT, thus we need to allocate enough buffers to put in the tx fifo 376 + * to cover an entire READ_DATA request of the max size. 
377 + */ 378 + for (i = 0; i < SAHARA_NUM_TX_BUF; ++i) { 379 + context->tx[i] = devm_kzalloc(&mhi_dev->dev, SAHARA_PACKET_MAX_SIZE, GFP_KERNEL); 380 + if (!context->tx[i]) 381 + return -ENOMEM; 382 + } 383 + 384 + context->mhi_dev = mhi_dev; 385 + INIT_WORK(&context->work, sahara_processing); 386 + context->image_table = aic100_image_table; 387 + context->table_size = ARRAY_SIZE(aic100_image_table); 388 + context->active_image_id = SAHARA_IMAGE_ID_NONE; 389 + dev_set_drvdata(&mhi_dev->dev, context); 390 + 391 + ret = mhi_prepare_for_transfer(mhi_dev); 392 + if (ret) 393 + return ret; 394 + 395 + ret = mhi_queue_buf(mhi_dev, DMA_FROM_DEVICE, context->rx, SAHARA_PACKET_MAX_SIZE, MHI_EOT); 396 + if (ret) { 397 + mhi_unprepare_from_transfer(mhi_dev); 398 + return ret; 399 + } 400 + 401 + return 0; 402 + } 403 + 404 + static void sahara_mhi_remove(struct mhi_device *mhi_dev) 405 + { 406 + struct sahara_context *context = dev_get_drvdata(&mhi_dev->dev); 407 + 408 + cancel_work_sync(&context->work); 409 + sahara_release_image(context); 410 + mhi_unprepare_from_transfer(mhi_dev); 411 + } 412 + 413 + static void sahara_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) 414 + { 415 + } 416 + 417 + static void sahara_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) 418 + { 419 + struct sahara_context *context = dev_get_drvdata(&mhi_dev->dev); 420 + 421 + if (!mhi_result->transaction_status) 422 + schedule_work(&context->work); 423 + } 424 + 425 + static const struct mhi_device_id sahara_mhi_match_table[] = { 426 + { .chan = "QAIC_SAHARA", }, 427 + {}, 428 + }; 429 + 430 + static struct mhi_driver sahara_mhi_driver = { 431 + .id_table = sahara_mhi_match_table, 432 + .remove = sahara_mhi_remove, 433 + .probe = sahara_mhi_probe, 434 + .ul_xfer_cb = sahara_mhi_ul_xfer_cb, 435 + .dl_xfer_cb = sahara_mhi_dl_xfer_cb, 436 + .driver = { 437 + .name = "sahara", 438 + }, 439 + }; 440 + 441 + int sahara_register(void) 442 + { 443 + return 
mhi_driver_register(&sahara_mhi_driver); 444 + } 445 + 446 + void sahara_unregister(void) 447 + { 448 + mhi_driver_unregister(&sahara_mhi_driver); 449 + }
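sahara.c validates every packet the same way: convert the little-endian wire fields with `le32_to_cpu()`, then reject packets whose declared length differs from the command's fixed size. A userspace sketch of that check (the byte-swizzling helper here stands in for the kernel's `le32_to_cpu()`):

```c
#include <assert.h>
#include <stdint.h>

#define HELLO_LENGTH 0x30u	/* fixed size of the HELLO command */

/* Decode a little-endian 32-bit wire field regardless of host endianness. */
static uint32_t le32_to_host(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Reject malformed packets before trusting any other field. */
static int hello_length_ok(const uint8_t *length_field)
{
	return le32_to_host(length_field) == HELLO_LENGTH;
}
```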
+10
drivers/accel/qaic/sahara.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + /* Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved. */ 4 + 5 + #ifndef __SAHARA_H__ 6 + #define __SAHARA_H__ 7 + 8 + int sahara_register(void); 9 + void sahara_unregister(void); 10 + #endif /* __SAHARA_H__ */
+34 -22
drivers/dma-buf/dma-buf.c
··· 35 35 36 36 static inline int is_dma_buf_file(struct file *); 37 37 38 - struct dma_buf_list { 39 - struct list_head head; 40 - struct mutex lock; 41 - }; 38 + #if IS_ENABLED(CONFIG_DEBUG_FS) 39 + static DEFINE_MUTEX(debugfs_list_mutex); 40 + static LIST_HEAD(debugfs_list); 42 41 43 - static struct dma_buf_list db_list; 42 + static void __dma_buf_debugfs_list_add(struct dma_buf *dmabuf) 43 + { 44 + mutex_lock(&debugfs_list_mutex); 45 + list_add(&dmabuf->list_node, &debugfs_list); 46 + mutex_unlock(&debugfs_list_mutex); 47 + } 48 + 49 + static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf) 50 + { 51 + if (!dmabuf) 52 + return; 53 + 54 + mutex_lock(&debugfs_list_mutex); 55 + list_del(&dmabuf->list_node); 56 + mutex_unlock(&debugfs_list_mutex); 57 + } 58 + #else 59 + static void __dma_buf_debugfs_list_add(struct dma_buf *dmabuf) 60 + { 61 + } 62 + 63 + static void __dma_buf_debugfs_list_del(struct file *file) 64 + { 65 + } 66 + #endif 44 67 45 68 static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen) 46 69 { ··· 112 89 113 90 static int dma_buf_file_release(struct inode *inode, struct file *file) 114 91 { 115 - struct dma_buf *dmabuf; 116 - 117 92 if (!is_dma_buf_file(file)) 118 93 return -EINVAL; 119 94 120 - dmabuf = file->private_data; 121 - if (dmabuf) { 122 - mutex_lock(&db_list.lock); 123 - list_del(&dmabuf->list_node); 124 - mutex_unlock(&db_list.lock); 125 - } 95 + __dma_buf_debugfs_list_del(file->private_data); 126 96 127 97 return 0; 128 98 } ··· 688 672 file->f_path.dentry->d_fsdata = dmabuf; 689 673 dmabuf->file = file; 690 674 691 - mutex_lock(&db_list.lock); 692 - list_add(&dmabuf->list_node, &db_list.head); 693 - mutex_unlock(&db_list.lock); 675 + __dma_buf_debugfs_list_add(dmabuf); 694 676 695 677 return dmabuf; 696 678 ··· 1625 1611 size_t size = 0; 1626 1612 int ret; 1627 1613 1628 - ret = mutex_lock_interruptible(&db_list.lock); 1614 + ret = mutex_lock_interruptible(&debugfs_list_mutex); 1629 1615 1630 1616 if 
(ret) 1631 1617 return ret; ··· 1634 1620 seq_printf(s, "%-8s\t%-8s\t%-8s\t%-8s\texp_name\t%-8s\tname\n", 1635 1621 "size", "flags", "mode", "count", "ino"); 1636 1622 1637 - list_for_each_entry(buf_obj, &db_list.head, list_node) { 1623 + list_for_each_entry(buf_obj, &debugfs_list, list_node) { 1638 1624 1639 1625 ret = dma_resv_lock_interruptible(buf_obj->resv, NULL); 1640 1626 if (ret) ··· 1671 1657 1672 1658 seq_printf(s, "\nTotal %d objects, %zu bytes\n", count, size); 1673 1659 1674 - mutex_unlock(&db_list.lock); 1660 + mutex_unlock(&debugfs_list_mutex); 1675 1661 return 0; 1676 1662 1677 1663 error_unlock: 1678 - mutex_unlock(&db_list.lock); 1664 + mutex_unlock(&debugfs_list_mutex); 1679 1665 return ret; 1680 1666 } 1681 1667 ··· 1732 1718 if (IS_ERR(dma_buf_mnt)) 1733 1719 return PTR_ERR(dma_buf_mnt); 1734 1720 1735 - mutex_init(&db_list.lock); 1736 - INIT_LIST_HEAD(&db_list.head); 1737 1721 dma_buf_init_debugfs(); 1738 1722 return 0; 1739 1723 }
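Besides compiling the list out when debugfs is disabled, the dma-buf refactor replaces runtime `mutex_init()`/`INIT_LIST_HEAD()` calls with the statically initialized `DEFINE_MUTEX()`/`LIST_HEAD()` forms, which are valid before any init code runs. A userspace sketch of a statically initialized intrusive list head (simplified from the kernel's list.h):

```c
#include <assert.h>

/* Minimal circular intrusive list, self-initialized at link time so no
 * runtime init call is needed -- the idea behind dropping the
 * mutex_init()/INIT_LIST_HEAD() calls from dma_buf_init(). */

struct list_node { struct list_node *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define LIST_HEAD(name) struct list_node name = LIST_HEAD_INIT(name)

static LIST_HEAD(buf_list);	/* usable before any init function runs */

static void list_add_front(struct list_node *n, struct list_node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static int list_empty(const struct list_node *head)
{
	return head->next == head;
}
```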
+32
drivers/gpu/drm/Kconfig
··· 104 104 help 105 105 CRTC helpers for KMS drivers. 106 106 107 + config DRM_PANIC 108 + bool "Display a user-friendly message when a kernel panic occurs" 109 + depends on DRM && !FRAMEBUFFER_CONSOLE 110 + select DRM_KMS_HELPER 111 + select FONT_SUPPORT 112 + help 113 + Enable a drm panic handler, which will display a user-friendly message 114 + when a kernel panic occurs. It's useful when using a user-space 115 + console instead of fbcon. 116 + It will only work if your graphic driver supports this feature. 117 + To support Hi-DPI Display, you can enable bigger fonts like 118 + FONT_TER16x32 119 + 120 + config DRM_PANIC_FOREGROUND_COLOR 121 + hex "Drm panic screen foreground color, in RGB" 122 + depends on DRM_PANIC 123 + default 0xffffff 124 + 125 + config DRM_PANIC_BACKGROUND_COLOR 126 + hex "Drm panic screen background color, in RGB" 127 + depends on DRM_PANIC 128 + default 0x000000 129 + 130 + config DRM_PANIC_DEBUG 131 + bool "Add a debug fs entry to trigger drm_panic" 132 + depends on DRM_PANIC && DEBUG_FS 133 + help 134 + Add dri/[device]/drm_panic_plane_x in the kernel debugfs, to force the 135 + panic handler to write the panic message to this plane scanout buffer. 136 + This is unsafe and should not be enabled on a production build. 137 + If in doubt, say "N". 138 + 107 139 config DRM_DEBUG_DP_MST_TOPOLOGY_REFS 108 140 bool "Enable refcount backtrace history in the DP MST helpers" 109 141 depends on STACKTRACE_SUPPORT
+1
drivers/gpu/drm/Makefile
··· 88 88 drm_privacy_screen.o \ 89 89 drm_privacy_screen_x86.o 90 90 drm-$(CONFIG_DRM_ACCEL) += ../../accel/drm_accel.o 91 + drm-$(CONFIG_DRM_PANIC) += drm_panic.o 91 92 obj-$(CONFIG_DRM) += drm.o 92 93 93 94 obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
+4 -1
drivers/gpu/drm/arm/malidp_mw.c
··· 72 72 __drm_atomic_helper_connector_destroy_state(connector->state); 73 73 74 74 kfree(connector->state); 75 - __drm_atomic_helper_connector_reset(connector, &mw_state->base); 75 + connector->state = NULL; 76 + 77 + if (mw_state) 78 + __drm_atomic_helper_connector_reset(connector, &mw_state->base); 76 79 } 77 80 78 81 static enum drm_connector_status
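The malidp_mw fix closes a use-after-free window: if the `mw_state` allocation failed, `connector->state` previously kept pointing at freed memory. The corrected ordering, sketched with illustrative userspace types:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the reset-ordering fix: clear the stale pointer right after
 * freeing, then only install the replacement if its allocation succeeded,
 * so a later reader never dereferences freed memory. */

struct state { int id; };

struct connector { struct state *state; };

static void connector_reset(struct connector *c, struct state *new_state)
{
	free(c->state);
	c->state = NULL;		/* never leave a dangling pointer */

	if (new_state)
		c->state = new_state;	/* install only on successful alloc */
}
```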
+20 -10
drivers/gpu/drm/ast/ast_ddc.c
··· 21 21 * of the Software. 22 22 */ 23 23 24 + #include <linux/i2c-algo-bit.h> 25 + #include <linux/i2c.h> 26 + 24 27 #include <drm/drm_managed.h> 25 28 #include <drm/drm_print.h> 26 29 27 30 #include "ast_ddc.h" 28 31 #include "ast_drv.h" 32 + 33 + struct ast_ddc { 34 + struct ast_device *ast; 35 + 36 + struct i2c_algo_bit_data bit; 37 + struct i2c_adapter adapter; 38 + }; 29 39 30 40 static void ast_ddc_algo_bit_data_setsda(void *data, int state) 31 41 { ··· 142 132 i2c_del_adapter(&ddc->adapter); 143 133 } 144 134 145 - struct ast_ddc *ast_ddc_create(struct ast_device *ast) 135 + struct i2c_adapter *ast_ddc_create(struct ast_device *ast) 146 136 { 147 137 struct drm_device *dev = &ast->base; 148 138 struct ast_ddc *ddc; ··· 155 145 return ERR_PTR(-ENOMEM); 156 146 ddc->ast = ast; 157 147 158 - adapter = &ddc->adapter; 159 - adapter->owner = THIS_MODULE; 160 - adapter->dev.parent = dev->dev; 161 - i2c_set_adapdata(adapter, ddc); 162 - snprintf(adapter->name, sizeof(adapter->name), "AST DDC bus"); 163 - 164 148 bit = &ddc->bit; 165 - bit->udelay = 20; 166 - bit->timeout = 2; 167 149 bit->data = ddc; 168 150 bit->setsda = ast_ddc_algo_bit_data_setsda; 169 151 bit->setscl = ast_ddc_algo_bit_data_setscl; ··· 163 161 bit->getscl = ast_ddc_algo_bit_data_getscl; 164 162 bit->pre_xfer = ast_ddc_algo_bit_data_pre_xfer; 165 163 bit->post_xfer = ast_ddc_algo_bit_data_post_xfer; 164 + bit->udelay = 20; 165 + bit->timeout = usecs_to_jiffies(2200); 166 166 167 + adapter = &ddc->adapter; 168 + adapter->owner = THIS_MODULE; 167 169 adapter->algo_data = bit; 170 + adapter->dev.parent = dev->dev; 171 + snprintf(adapter->name, sizeof(adapter->name), "AST DDC bus"); 172 + i2c_set_adapdata(adapter, ddc); 173 + 168 174 ret = i2c_bit_add_bus(adapter); 169 175 if (ret) { 170 176 drm_err(dev, "Failed to register bit i2c\n"); ··· 183 173 if (ret) 184 174 return ERR_PTR(ret); 185 175 186 - return ddc; 176 + return &ddc->adapter; 187 177 }
+2 -11
drivers/gpu/drm/ast/ast_ddc.h
··· 3 3 #ifndef __AST_DDC_H__ 4 4 #define __AST_DDC_H__ 5 5 6 - #include <linux/i2c.h> 7 - #include <linux/i2c-algo-bit.h> 8 - 9 6 struct ast_device; 7 + struct i2c_adapter; 10 8 11 - struct ast_ddc { 12 - struct ast_device *ast; 13 - 14 - struct i2c_adapter adapter; 15 - struct i2c_algo_bit_data bit; 16 - }; 17 - 18 - struct ast_ddc *ast_ddc_create(struct ast_device *ast); 9 + struct i2c_adapter *ast_ddc_create(struct ast_device *ast); 19 10 20 11 #endif
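The ast_ddc.h change applies the opaque-type idiom: `struct ast_ddc` becomes private to ast_ddc.c, callers only ever see the embedded `i2c_adapter`, and the header sheds its `<linux/i2c*.h>` includes in favor of forward declarations. The shape of the idiom in plain C (all names illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* The "public" object that callers interact with. */
struct adapter { int id; };

/* Private: the layout is known only to this file, so it can change
 * without recompiling users that hold an `struct adapter *`. */
struct ddc {
	int internal_state;
	struct adapter adapter;
};

/* Constructor hands back only the embedded public member. */
static struct adapter *ddc_create(int id)
{
	struct ddc *ddc = calloc(1, sizeof(*ddc));

	if (!ddc)
		return NULL;
	ddc->adapter.id = id;
	return &ddc->adapter;
}
```

The payoff is visible in ast_mode.c above: the connector code now needs no knowledge of `struct ast_ddc` at all.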
+22 -4
drivers/gpu/drm/ast/ast_mode.c
··· 43 43 #include <drm/drm_gem_framebuffer_helper.h> 44 44 #include <drm/drm_gem_shmem_helper.h> 45 45 #include <drm/drm_managed.h> 46 + #include <drm/drm_panic.h> 46 47 #include <drm/drm_probe_helper.h> 47 48 #include <drm/drm_simple_kms_helper.h> 48 49 ··· 702 701 ast_set_index_reg_mask(ast, AST_IO_VGASRI, 0x1, 0xdf, 0x20); 703 702 } 704 703 704 + static int ast_primary_plane_helper_get_scanout_buffer(struct drm_plane *plane, 705 + struct drm_scanout_buffer *sb) 706 + { 707 + struct ast_plane *ast_plane = to_ast_plane(plane); 708 + 709 + if (plane->state && plane->state->fb && ast_plane->vaddr) { 710 + sb->format = plane->state->fb->format; 711 + sb->width = plane->state->fb->width; 712 + sb->height = plane->state->fb->height; 713 + sb->pitch[0] = plane->state->fb->pitches[0]; 714 + iosys_map_set_vaddr_iomem(&sb->map[0], ast_plane->vaddr); 715 + return 0; 716 + } 717 + return -ENODEV; 718 + } 719 + 705 720 static const struct drm_plane_helper_funcs ast_primary_plane_helper_funcs = { 706 721 DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 707 722 .atomic_check = ast_primary_plane_helper_atomic_check, 708 723 .atomic_update = ast_primary_plane_helper_atomic_update, 709 724 .atomic_enable = ast_primary_plane_helper_atomic_enable, 710 725 .atomic_disable = ast_primary_plane_helper_atomic_disable, 726 + .get_scanout_buffer = ast_primary_plane_helper_get_scanout_buffer, 711 727 }; 712 728 713 729 static const struct drm_plane_funcs ast_primary_plane_funcs = { ··· 1378 1360 static int ast_vga_connector_init(struct drm_device *dev, struct drm_connector *connector) 1379 1361 { 1380 1362 struct ast_device *ast = to_ast_device(dev); 1381 - struct ast_ddc *ddc; 1363 + struct i2c_adapter *ddc; 1382 1364 int ret; 1383 1365 1384 1366 ddc = ast_ddc_create(ast); ··· 1389 1371 } 1390 1372 1391 1373 ret = drm_connector_init_with_ddc(dev, connector, &ast_vga_connector_funcs, 1392 - DRM_MODE_CONNECTOR_VGA, &ddc->adapter); 1374 + DRM_MODE_CONNECTOR_VGA, ddc); 1393 1375 if (ret) 1394 1376 return 
ret; 1395 1377 ··· 1447 1429 static int ast_sil164_connector_init(struct drm_device *dev, struct drm_connector *connector) 1448 1430 { 1449 1431 struct ast_device *ast = to_ast_device(dev); 1450 - struct ast_ddc *ddc; 1432 + struct i2c_adapter *ddc; 1451 1433 int ret; 1452 1434 1453 1435 ddc = ast_ddc_create(ast); ··· 1458 1440 } 1459 1441 1460 1442 ret = drm_connector_init_with_ddc(dev, connector, &ast_sil164_connector_funcs, 1461 - DRM_MODE_CONNECTOR_DVII, &ddc->adapter); 1443 + DRM_MODE_CONNECTOR_DVII, ddc); 1462 1444 if (ret) 1463 1445 return ret; 1464 1446
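The ast hunk above wires a `get_scanout_buffer` hook into the primary plane so drm_panic can draw directly into the shadow buffer. Below is a minimal userspace sketch of just the guard logic: the panic path only receives a buffer when plane state, framebuffer and mapped vaddr all exist, otherwise `-ENODEV` tells it to try another plane. All struct and function names here are illustrative stand-ins for the kernel types, not the real API.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-ins for drm_plane / drm_scanout_buffer. */
struct fake_fb { unsigned int width, height, pitch0; };
struct fake_plane_state { struct fake_fb *fb; };
struct fake_plane { struct fake_plane_state *state; void *vaddr; };
struct fake_scanout_buffer { unsigned int width, height, pitch0; void *vaddr; };

/* Mirrors the guard in ast_primary_plane_helper_get_scanout_buffer():
 * hand out a scanout buffer only when state, fb and vaddr are all
 * present; -ENODEV means "no buffer available on this plane". */
static int fake_get_scanout_buffer(struct fake_plane *p,
				   struct fake_scanout_buffer *sb)
{
	if (p->state && p->state->fb && p->vaddr) {
		sb->width = p->state->fb->width;
		sb->height = p->state->fb->height;
		sb->pitch0 = p->state->fb->pitch0;
		sb->vaddr = p->vaddr;
		return 0;
	}
	return -ENODEV;
}
```

The real helper additionally distinguishes I/O memory via `iosys_map_set_vaddr_iomem()`, which the sketch leaves out.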
+1 -1
drivers/gpu/drm/bridge/analogix/Kconfig
··· 28 28 29 29 config DRM_ANALOGIX_DP 30 30 tristate 31 - depends on DRM 31 + depends on DRM_DISPLAY_HELPER 32 32 33 33 config DRM_ANALOGIX_ANX7625 34 34 tristate "Analogix Anx7625 MIPI to DP interface support"
+10 -5
drivers/gpu/drm/bridge/analogix/anx7625.c
··· 2066 2066 }; 2067 2067 2068 2068 host = of_find_mipi_dsi_host_by_node(ctx->pdata.mipi_host_node); 2069 - if (!host) { 2070 - DRM_DEV_ERROR(dev, "fail to find dsi host.\n"); 2071 - return -EPROBE_DEFER; 2072 - } 2069 + if (!host) 2070 + return dev_err_probe(dev, -EPROBE_DEFER, "fail to find dsi host.\n"); 2073 2071 2074 2072 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); 2075 2073 if (IS_ERR(dsi)) { ··· 2469 2471 mutex_unlock(&ctx->aux_lock); 2470 2472 } 2471 2473 2474 + static void 2475 + anx7625_audio_update_connector_status(struct anx7625_data *ctx, 2476 + enum drm_connector_status status); 2477 + 2472 2478 static enum drm_connector_status 2473 2479 anx7625_bridge_detect(struct drm_bridge *bridge) 2474 2480 { 2475 2481 struct anx7625_data *ctx = bridge_to_anx7625(bridge); 2476 2482 struct device *dev = ctx->dev; 2483 + enum drm_connector_status status; 2477 2484 2478 2485 DRM_DEV_DEBUG_DRIVER(dev, "drm bridge detect\n"); 2479 2486 2480 - return anx7625_sink_detect(ctx); 2487 + status = anx7625_sink_detect(ctx); 2488 + anx7625_audio_update_connector_status(ctx, status); 2489 + return status; 2481 2490 } 2482 2491 2483 2492 static const struct drm_edid *anx7625_bridge_edid_read(struct drm_bridge *bridge,
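Several bridge hunks in this merge collapse the `dev_err()` + `return -EPROBE_DEFER` pair into a single `dev_err_probe()` call. The property the conversions rely on is that `dev_err_probe()` returns the error it was given, and routes `-EPROBE_DEFER` to the deferred-probe record instead of dmesg. A userspace model of that contract (names prefixed `mock_` are hypothetical):

```c
#include <assert.h>
#include <stdio.h>

#define MOCK_EPROBE_DEFER 517	/* kernel value of EPROBE_DEFER */

/* Models the dev_err_probe() contract: the error code is passed through
 * unchanged, so "log + return err" becomes one return statement. A
 * deferred probe is recorded for later inspection, not printed. */
static int mock_dev_err_probe(int err, const char *msg)
{
	if (err != -MOCK_EPROBE_DEFER)
		fprintf(stderr, "error %d: %s", err, msg);
	return err;
}

/* Shape of the converted call sites: one line instead of a braced pair. */
static int mock_attach_dsi(int host_found)
{
	if (!host_found)
		return mock_dev_err_probe(-MOCK_EPROBE_DEFER,
					  "fail to find dsi host.\n");
	return 0;
}
```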
+2 -4
drivers/gpu/drm/bridge/chipone-icn6211.c
··· 563 563 564 564 host = of_find_mipi_dsi_host_by_node(host_node); 565 565 of_node_put(host_node); 566 - if (!host) { 567 - dev_err(dev, "failed to find dsi host\n"); 568 - return -EPROBE_DEFER; 569 - } 566 + if (!host) 567 + return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n"); 570 568 571 569 dsi = mipi_dsi_device_register_full(host, &info); 572 570 if (IS_ERR(dsi)) {
+2 -4
drivers/gpu/drm/bridge/imx/imx8mp-hdmi-tx.c
··· 104 104 return 0; 105 105 } 106 106 107 - static int imx8mp_dw_hdmi_remove(struct platform_device *pdev) 107 + static void imx8mp_dw_hdmi_remove(struct platform_device *pdev) 108 108 { 109 109 struct imx8mp_hdmi *hdmi = platform_get_drvdata(pdev); 110 110 111 111 dw_hdmi_remove(hdmi->dw_hdmi); 112 - 113 - return 0; 114 112 } 115 113 116 114 static int __maybe_unused imx8mp_dw_hdmi_pm_suspend(struct device *dev) ··· 138 140 139 141 static struct platform_driver imx8mp_dw_hdmi_platform_driver = { 140 142 .probe = imx8mp_dw_hdmi_probe, 141 - .remove = imx8mp_dw_hdmi_remove, 143 + .remove_new = imx8mp_dw_hdmi_remove, 142 144 .driver = { 143 145 .name = "imx8mp-dw-hdmi-tx", 144 146 .of_match_table = imx8mp_dw_hdmi_of_table,
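The imx8mp-hdmi-tx hunk switches from `.remove` to `.remove_new`, the void-returning remove callback: the platform core always ignored the `int` that remove returned, so the `return 0` was dead weight. A small sketch of the dispatch the core performs, with illustrative names:

```c
#include <assert.h>
#include <stddef.h>

/* Models the .remove vs .remove_new split in struct platform_driver;
 * the struct and names here are illustrative, not the kernel's. */
struct mock_platform_driver {
	int (*remove)(void *pdev);	/* legacy: return value ignored */
	void (*remove_new)(void *pdev);	/* preferred void variant */
};

static int mock_removed;

static void mock_hdmi_remove(void *pdev)
{
	(void)pdev;
	mock_removed = 1;	/* stands in for dw_hdmi_remove() */
}

static void mock_unbind(const struct mock_platform_driver *drv, void *pdev)
{
	if (drv->remove_new)
		drv->remove_new(pdev);
	else if (drv->remove)
		drv->remove(pdev);	/* result dropped, as in the core */
}
```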
+2 -4
drivers/gpu/drm/bridge/lontium-lt8912b.c
··· 494 494 }; 495 495 496 496 host = of_find_mipi_dsi_host_by_node(lt->host_node); 497 - if (!host) { 498 - dev_err(dev, "failed to find dsi host\n"); 499 - return -EPROBE_DEFER; 500 - } 497 + if (!host) 498 + return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n"); 501 499 502 500 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); 503 501 if (IS_ERR(dsi)) {
+2 -4
drivers/gpu/drm/bridge/lontium-lt9611.c
··· 761 761 int ret; 762 762 763 763 host = of_find_mipi_dsi_host_by_node(dsi_node); 764 - if (!host) { 765 - dev_err(lt9611->dev, "failed to find dsi host\n"); 766 - return ERR_PTR(-EPROBE_DEFER); 767 - } 764 + if (!host) 765 + return ERR_PTR(dev_err_probe(lt9611->dev, -EPROBE_DEFER, "failed to find dsi host\n")); 768 766 769 767 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); 770 768 if (IS_ERR(dsi)) {
+2 -4
drivers/gpu/drm/bridge/lontium-lt9611uxc.c
··· 266 266 int ret; 267 267 268 268 host = of_find_mipi_dsi_host_by_node(dsi_node); 269 - if (!host) { 270 - dev_err(dev, "failed to find dsi host\n"); 271 - return ERR_PTR(-EPROBE_DEFER); 272 - } 269 + if (!host) 270 + return ERR_PTR(dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n")); 273 271 274 272 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); 275 273 if (IS_ERR(dsi)) {
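The lt9611/lt9611uxc variants return a pointer, so the conversion becomes `return ERR_PTR(dev_err_probe(...))`, leaning on the kernel's ERR_PTR convention: a small negative errno is folded into the top page of the address range so a pointer can carry an error code. A userspace sketch of that encoding (`mock_` names are hypothetical):

```c
#include <assert.h>

/* Sketch of the ERR_PTR()/PTR_ERR()/IS_ERR() encoding: values in the
 * last MOCK_MAX_ERRNO bytes of the address space are error codes. */
#define MOCK_MAX_ERRNO 4095

static inline void *mock_err_ptr(long err) { return (void *)err; }
static inline long mock_ptr_err(const void *p) { return (long)p; }
static inline int mock_is_err(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MOCK_MAX_ERRNO;
}
```

This is why chaining works: `dev_err_probe()` hands back the negative errno, `ERR_PTR()` wraps it, and the caller checks `IS_ERR()` as before.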
+2 -4
drivers/gpu/drm/bridge/tc358775.c
··· 610 610 }; 611 611 612 612 host = of_find_mipi_dsi_host_by_node(tc->host_node); 613 - if (!host) { 614 - dev_err(dev, "failed to find dsi host\n"); 615 - return -EPROBE_DEFER; 616 - } 613 + if (!host) 614 + return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n"); 617 615 618 616 dsi = devm_mipi_dsi_device_register_full(dev, host, &info); 619 617 if (IS_ERR(dsi)) {
+9 -8
drivers/gpu/drm/bridge/ti-dlpc3433.c
··· 319 319 .channel = 0, 320 320 .node = NULL, 321 321 }; 322 + int ret; 322 323 323 324 host = of_find_mipi_dsi_host_by_node(dlpc->host_node); 324 - if (!host) { 325 - DRM_DEV_ERROR(dev, "failed to find dsi host\n"); 326 - return -EPROBE_DEFER; 327 - } 325 + if (!host) 326 + return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n"); 328 327 329 328 dlpc->dsi = mipi_dsi_device_register_full(host, &info); 330 329 if (IS_ERR(dlpc->dsi)) { ··· 335 336 dlpc->dsi->format = MIPI_DSI_FMT_RGB565; 336 337 dlpc->dsi->lanes = dlpc->dsi_lanes; 337 338 338 - return devm_mipi_dsi_attach(dev, dlpc->dsi); 339 + ret = devm_mipi_dsi_attach(dev, dlpc->dsi); 340 + if (ret) 341 + DRM_DEV_ERROR(dev, "failed to attach dsi host\n"); 342 + 343 + return ret; 339 344 } 340 345 341 346 static int dlpc3433_probe(struct i2c_client *client) ··· 370 367 drm_bridge_add(&dlpc->bridge); 371 368 372 369 ret = dlpc_host_attach(dlpc); 373 - if (ret) { 374 - DRM_DEV_ERROR(dev, "failed to attach dsi host\n"); 370 + if (ret) 375 371 goto err_remove_bridge; 376 - } 377 372 378 373 return 0; 379 374
+4
drivers/gpu/drm/drm_atomic_helper.c
··· 38 38 #include <drm/drm_drv.h> 39 39 #include <drm/drm_framebuffer.h> 40 40 #include <drm/drm_gem_atomic_helper.h> 41 + #include <drm/drm_panic.h> 41 42 #include <drm/drm_print.h> 42 43 #include <drm/drm_self_refresh_helper.h> 43 44 #include <drm/drm_vblank.h> ··· 3017 3016 bool stall) 3018 3017 { 3019 3018 int i, ret; 3019 + unsigned long flags; 3020 3020 struct drm_connector *connector; 3021 3021 struct drm_connector_state *old_conn_state, *new_conn_state; 3022 3022 struct drm_crtc *crtc; ··· 3101 3099 } 3102 3100 } 3103 3101 3102 + drm_panic_lock(state->dev, flags); 3104 3103 for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) { 3105 3104 WARN_ON(plane->state != old_plane_state); 3106 3105 ··· 3111 3108 state->planes[i].state = old_plane_state; 3112 3109 plane->state = new_plane_state; 3113 3110 } 3111 + drm_panic_unlock(state->dev, flags); 3114 3112 3115 3113 for_each_oldnew_private_obj_in_state(state, obj, old_obj_state, new_obj_state, i) { 3116 3114 WARN_ON(obj->state != old_obj_state);
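The drm_atomic_helper hunk brackets the plane-state pointer swap with `drm_panic_lock()`/`drm_panic_unlock()`: the panic handler walks `plane->state` from atomic context, so the old/new exchange must never be observable half-done. A userspace sketch of the shape of that critical section, with a plain flag standing in for the raw spinlock added to `drm_mode_config` (names are illustrative):

```c
#include <assert.h>

static int mock_panic_lock_held;

struct mock_plane { void *state; };

/* Models the swap loop in drm_atomic_helper_swap_state(): the pointer
 * exchange happens entirely inside the panic lock, so a concurrent
 * panic printer sees either the old or the new state, never a mix. */
static void mock_swap_plane_state(struct mock_plane *plane, void **staged)
{
	void *old;

	mock_panic_lock_held = 1;	/* drm_panic_lock(dev, flags) */
	old = plane->state;
	plane->state = *staged;
	*staged = old;
	mock_panic_lock_held = 0;	/* drm_panic_unlock(dev, flags) */
}
```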
+3 -3
drivers/gpu/drm/drm_atomic_uapi.c
··· 145 145 &state->mode, blob->data); 146 146 if (ret) { 147 147 drm_dbg_atomic(crtc->dev, 148 - "[CRTC:%d:%s] invalid mode (ret=%d, status=%s):\n", 148 + "[CRTC:%d:%s] invalid mode (%s, %pe): " DRM_MODE_FMT "\n", 149 149 crtc->base.id, crtc->name, 150 - ret, drm_get_mode_status_name(state->mode.status)); 151 - drm_mode_debug_printmodeline(&state->mode); 150 + drm_get_mode_status_name(state->mode.status), 151 + ERR_PTR(ret), DRM_MODE_ARG(&state->mode)); 152 152 return -EINVAL; 153 153 } 154 154
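The drm_atomic_uapi hunk replaces the bare `ret=%d` with `%pe` on `ERR_PTR(ret)`, so the log shows the errno symbolically (e.g. `-EINVAL`) rather than a raw number. A minimal userspace analogue of that errno-to-name lookup; the table below is illustrative, the kernel derives the names from its errno definitions:

```c
#include <assert.h>
#include <string.h>

/* Tiny stand-in for what %pe prints for an errno-carrying ERR_PTR. */
static const char *mock_errname(int err)
{
	switch (err) {
	case -22: return "-EINVAL";
	case -12: return "-ENOMEM";
	case -19: return "-ENODEV";
	default:  return "-E?";
	}
}
```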
+70 -59
drivers/gpu/drm/drm_client_modeset.c
··· 242 242 for (i = 0; i < connector_count; i++) { 243 243 connector = connectors[i]; 244 244 enabled[i] = drm_connector_enabled(connector, true); 245 - DRM_DEBUG_KMS("connector %d enabled? %s\n", connector->base.id, 246 - connector->display_info.non_desktop ? "non desktop" : str_yes_no(enabled[i])); 245 + drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] enabled? %s\n", 246 + connector->base.id, connector->name, 247 + connector->display_info.non_desktop ? 248 + "non desktop" : str_yes_no(enabled[i])); 247 249 248 250 any_enabled |= enabled[i]; 249 251 } ··· 305 303 } 306 304 307 305 if (can_clone) { 308 - DRM_DEBUG_KMS("can clone using command line\n"); 306 + drm_dbg_kms(dev, "can clone using command line\n"); 309 307 return true; 310 308 } 311 309 ··· 334 332 kfree(dmt_mode); 335 333 336 334 if (can_clone) { 337 - DRM_DEBUG_KMS("can clone using 1024x768\n"); 335 + drm_dbg_kms(dev, "can clone using 1024x768\n"); 338 336 return true; 339 337 } 340 338 fail: 341 - DRM_INFO("kms: can't enable cloning when we probably wanted to.\n"); 339 + drm_info(dev, "kms: can't enable cloning when we probably wanted to.\n"); 342 340 return false; 343 341 } 344 342 345 - static int drm_client_get_tile_offsets(struct drm_connector **connectors, 343 + static int drm_client_get_tile_offsets(struct drm_device *dev, 344 + struct drm_connector **connectors, 346 345 unsigned int connector_count, 347 346 struct drm_display_mode **modes, 348 347 struct drm_client_offset *offsets, ··· 360 357 continue; 361 358 362 359 if (!modes[i] && (h_idx || v_idx)) { 363 - DRM_DEBUG_KMS("no modes for connector tiled %d %d\n", i, 364 - connector->base.id); 360 + drm_dbg_kms(dev, 361 + "[CONNECTOR:%d:%s] no modes for connector tiled %d\n", 362 + connector->base.id, connector->name, i); 365 363 continue; 366 364 } 367 365 if (connector->tile_h_loc < h_idx) ··· 373 369 } 374 370 offsets[idx].x = hoffset; 375 371 offsets[idx].y = voffset; 376 - DRM_DEBUG_KMS("returned %d %d for %d %d\n", hoffset, voffset, 
h_idx, v_idx); 372 + drm_dbg_kms(dev, "returned %d %d for %d %d\n", hoffset, voffset, h_idx, v_idx); 377 373 return 0; 378 374 } 379 375 380 - static bool drm_client_target_preferred(struct drm_connector **connectors, 376 + static bool drm_client_target_preferred(struct drm_device *dev, 377 + struct drm_connector **connectors, 381 378 unsigned int connector_count, 382 379 struct drm_display_mode **modes, 383 380 struct drm_client_offset *offsets, ··· 428 423 * find the tile offsets for this pass - need to find 429 424 * all tiles left and above 430 425 */ 431 - drm_client_get_tile_offsets(connectors, connector_count, modes, offsets, i, 426 + drm_client_get_tile_offsets(dev, connectors, connector_count, 427 + modes, offsets, i, 432 428 connector->tile_h_loc, connector->tile_v_loc); 433 429 } 434 - DRM_DEBUG_KMS("looking for cmdline mode on connector %d\n", 435 - connector->base.id); 430 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] looking for cmdline mode\n", 431 + connector->base.id, connector->name); 436 432 437 433 /* got for command line mode first */ 438 434 modes[i] = drm_connector_pick_cmdline_mode(connector); 439 435 if (!modes[i]) { 440 - DRM_DEBUG_KMS("looking for preferred mode on connector %d %d\n", 441 - connector->base.id, connector->tile_group ? connector->tile_group->id : 0); 436 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] looking for preferred mode, tile %d\n", 437 + connector->base.id, connector->name, 438 + connector->tile_group ? 
connector->tile_group->id : 0); 442 439 modes[i] = drm_connector_has_preferred_mode(connector, width, height); 443 440 } 444 441 /* No preferred modes, pick one off the list */ ··· 462 455 (connector->tile_h_loc == 0 && 463 456 connector->tile_v_loc == 0 && 464 457 !drm_connector_get_tiled_mode(connector))) { 465 - DRM_DEBUG_KMS("Falling back to non tiled mode on Connector %d\n", 466 - connector->base.id); 458 + drm_dbg_kms(dev, 459 + "[CONNECTOR:%d:%s] Falling back to non-tiled mode\n", 460 + connector->base.id, connector->name); 467 461 modes[i] = drm_connector_fallback_non_tiled_mode(connector); 468 462 } else { 469 463 modes[i] = drm_connector_get_tiled_mode(connector); 470 464 } 471 465 } 472 466 473 - DRM_DEBUG_KMS("found mode %s\n", modes[i] ? modes[i]->name : 474 - "none"); 467 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Found mode %s\n", 468 + connector->base.id, connector->name, 469 + modes[i] ? modes[i]->name : "none"); 475 470 conn_configured |= BIT_ULL(i); 476 471 } 477 472 ··· 594 585 if (!drm_drv_uses_atomic_modeset(dev)) 595 586 return false; 596 587 597 - if (WARN_ON(count <= 0)) 588 + if (drm_WARN_ON(dev, count <= 0)) 598 589 return false; 599 590 600 591 save_enabled = kcalloc(count, sizeof(bool), GFP_KERNEL); ··· 633 624 num_connectors_detected++; 634 625 635 626 if (!enabled[i]) { 636 - DRM_DEBUG_KMS("connector %s not enabled, skipping\n", 637 - connector->name); 627 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] not enabled, skipping\n", 628 + connector->base.id, connector->name); 638 629 conn_configured |= BIT(i); 639 630 continue; 640 631 } 641 632 642 633 if (connector->force == DRM_FORCE_OFF) { 643 - DRM_DEBUG_KMS("connector %s is disabled by user, skipping\n", 644 - connector->name); 634 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] disabled by user, skipping\n", 635 + connector->base.id, connector->name); 645 636 enabled[i] = false; 646 637 continue; 647 638 } 648 639 649 640 encoder = connector->state->best_encoder; 650 - if (!encoder || 
WARN_ON(!connector->state->crtc)) { 641 + if (!encoder || drm_WARN_ON(dev, !connector->state->crtc)) { 651 642 if (connector->force > DRM_FORCE_OFF) 652 643 goto bail; 653 644 654 - DRM_DEBUG_KMS("connector %s has no encoder or crtc, skipping\n", 655 - connector->name); 645 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] has no encoder or crtc, skipping\n", 646 + connector->base.id, connector->name); 656 647 enabled[i] = false; 657 648 conn_configured |= BIT(i); 658 649 continue; ··· 669 660 */ 670 661 for (j = 0; j < count; j++) { 671 662 if (crtcs[j] == new_crtc) { 672 - DRM_DEBUG_KMS("fallback: cloned configuration\n"); 663 + drm_dbg_kms(dev, "fallback: cloned configuration\n"); 673 664 goto bail; 674 665 } 675 666 } 676 667 677 - DRM_DEBUG_KMS("looking for cmdline mode on connector %s\n", 678 - connector->name); 668 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] looking for cmdline mode\n", 669 + connector->base.id, connector->name); 679 670 680 671 /* go for command line mode first */ 681 672 modes[i] = drm_connector_pick_cmdline_mode(connector); 682 673 683 674 /* try for preferred next */ 684 675 if (!modes[i]) { 685 - DRM_DEBUG_KMS("looking for preferred mode on connector %s %d\n", 686 - connector->name, connector->has_tile); 676 + drm_dbg_kms(dev, 677 + "[CONNECTOR:%d:%s] looking for preferred mode, has tile: %s\n", 678 + connector->base.id, connector->name, 679 + str_yes_no(connector->has_tile)); 687 680 modes[i] = drm_connector_has_preferred_mode(connector, width, height); 688 681 } 689 682 690 683 /* No preferred mode marked by the EDID? Are there any modes? 
*/ 691 684 if (!modes[i] && !list_empty(&connector->modes)) { 692 - DRM_DEBUG_KMS("using first mode listed on connector %s\n", 693 - connector->name); 685 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] using first listed mode\n", 686 + connector->base.id, connector->name); 694 687 modes[i] = list_first_entry(&connector->modes, 695 688 struct drm_display_mode, 696 689 head); ··· 711 700 * This is crtc->mode and not crtc->state->mode for the 712 701 * fastboot check to work correctly. 713 702 */ 714 - DRM_DEBUG_KMS("looking for current mode on connector %s\n", 715 - connector->name); 703 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] looking for current mode\n", 704 + connector->base.id, connector->name); 716 705 modes[i] = &connector->state->crtc->mode; 717 706 } 718 707 /* ··· 721 710 */ 722 711 if (connector->has_tile && 723 712 num_tiled_conns < connector->num_h_tile * connector->num_v_tile) { 724 - DRM_DEBUG_KMS("Falling back to non tiled mode on Connector %d\n", 725 - connector->base.id); 713 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Falling back to non-tiled mode\n", 714 + connector->base.id, connector->name); 726 715 modes[i] = drm_connector_fallback_non_tiled_mode(connector); 727 716 } 728 717 crtcs[i] = new_crtc; 729 718 730 - DRM_DEBUG_KMS("connector %s on [CRTC:%d:%s]: %dx%d%s\n", 731 - connector->name, 732 - connector->state->crtc->base.id, 733 - connector->state->crtc->name, 734 - modes[i]->hdisplay, modes[i]->vdisplay, 735 - modes[i]->flags & DRM_MODE_FLAG_INTERLACE ? "i" : ""); 719 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] on [CRTC:%d:%s]: %dx%d%s\n", 720 + connector->base.id, connector->name, 721 + connector->state->crtc->base.id, 722 + connector->state->crtc->name, 723 + modes[i]->hdisplay, modes[i]->vdisplay, 724 + modes[i]->flags & DRM_MODE_FLAG_INTERLACE ? 
"i" : ""); 736 725 737 726 fallback = false; 738 727 conn_configured |= BIT(i); ··· 748 737 */ 749 738 if (num_connectors_enabled != num_connectors_detected && 750 739 num_connectors_enabled < dev->mode_config.num_crtc) { 751 - DRM_DEBUG_KMS("fallback: Not all outputs enabled\n"); 752 - DRM_DEBUG_KMS("Enabled: %i, detected: %i\n", num_connectors_enabled, 753 - num_connectors_detected); 740 + drm_dbg_kms(dev, "fallback: Not all outputs enabled\n"); 741 + drm_dbg_kms(dev, "Enabled: %i, detected: %i\n", 742 + num_connectors_enabled, num_connectors_detected); 754 743 fallback = true; 755 744 } 756 745 757 746 if (fallback) { 758 747 bail: 759 - DRM_DEBUG_KMS("Not using firmware configuration\n"); 748 + drm_dbg_kms(dev, "Not using firmware configuration\n"); 760 749 memcpy(enabled, save_enabled, count); 761 750 ret = false; 762 751 } ··· 793 782 int i, ret = 0; 794 783 bool *enabled; 795 784 796 - DRM_DEBUG_KMS("\n"); 785 + drm_dbg_kms(dev, "\n"); 797 786 798 787 if (!width) 799 788 width = dev->mode_config.max_width; ··· 824 813 offsets = kcalloc(connector_count, sizeof(*offsets), GFP_KERNEL); 825 814 enabled = kcalloc(connector_count, sizeof(bool), GFP_KERNEL); 826 815 if (!crtcs || !modes || !enabled || !offsets) { 827 - DRM_ERROR("Memory allocation failed\n"); 828 816 ret = -ENOMEM; 829 817 goto out; 830 818 } ··· 834 824 for (i = 0; i < connector_count; i++) 835 825 total_modes_count += connectors[i]->funcs->fill_modes(connectors[i], width, height); 836 826 if (!total_modes_count) 837 - DRM_DEBUG_KMS("No connectors reported connected with modes\n"); 827 + drm_dbg_kms(dev, "No connectors reported connected with modes\n"); 838 828 drm_client_connectors_enabled(connectors, connector_count, enabled); 839 829 840 830 if (!drm_client_firmware_config(client, connectors, connector_count, crtcs, ··· 845 835 846 836 if (!drm_client_target_cloned(dev, connectors, connector_count, modes, 847 837 offsets, enabled, width, height) && 848 - !drm_client_target_preferred(connectors, 
connector_count, modes, 838 + !drm_client_target_preferred(dev, connectors, connector_count, modes, 849 839 offsets, enabled, width, height)) 850 - DRM_ERROR("Unable to find initial modes\n"); 840 + drm_err(dev, "Unable to find initial modes\n"); 851 841 852 - DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n", 853 - width, height); 842 + drm_dbg_kms(dev, "picking CRTCs for %dx%d config\n", 843 + width, height); 854 844 855 845 drm_client_pick_crtcs(client, connectors, connector_count, 856 846 crtcs, modes, 0, width, height); ··· 868 858 struct drm_mode_set *modeset = drm_client_find_modeset(client, crtc); 869 859 struct drm_connector *connector = connectors[i]; 870 860 871 - DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n", 872 - mode->name, crtc->base.id, offset->x, offset->y); 861 + drm_dbg_kms(dev, "[CRTC:%d:%s] desired mode %s set (%d,%d)\n", 862 + crtc->base.id, crtc->name, 863 + mode->name, offset->x, offset->y); 873 864 874 - if (WARN_ON_ONCE(modeset->num_connectors == DRM_CLIENT_MAX_CLONED_CONNECTORS || 875 - (dev->mode_config.num_crtc > 1 && modeset->num_connectors == 1))) { 865 + if (drm_WARN_ON_ONCE(dev, modeset->num_connectors == DRM_CLIENT_MAX_CLONED_CONNECTORS || 866 + (dev->mode_config.num_crtc > 1 && modeset->num_connectors == 1))) { 876 867 ret = -EINVAL; 877 868 break; 878 869 }
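The drm_client_modeset conversion above swaps `DRM_DEBUG_KMS()` for the device-based `drm_dbg_kms()`, whose point is that every message is prefixed with the drm device it concerns, so logs on multi-GPU systems attribute each line correctly. A sketch of that prefixing behaviour (struct and function names are illustrative, not the kernel implementation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct mock_drm_device { const char *unique; };

/* Models the visible difference between the subsystem-wide macro and
 * the device-based one: the latter stamps the device identity onto
 * each message before it reaches the log buffer. */
static void mock_drm_dbg_kms(const struct mock_drm_device *dev,
			     const char *msg, char *out, size_t len)
{
	snprintf(out, len, "[drm:%s] %s", dev ? dev->unique : "?", msg);
}
```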
+18 -20
drivers/gpu/drm/drm_crtc.c
··· 716 716 717 717 crtc = drm_crtc_find(dev, file_priv, crtc_req->crtc_id); 718 718 if (!crtc) { 719 - DRM_DEBUG_KMS("Unknown CRTC ID %d\n", crtc_req->crtc_id); 719 + drm_dbg_kms(dev, "Unknown CRTC ID %d\n", crtc_req->crtc_id); 720 720 return -ENOENT; 721 721 } 722 - DRM_DEBUG_KMS("[CRTC:%d:%s]\n", crtc->base.id, crtc->name); 722 + drm_dbg_kms(dev, "[CRTC:%d:%s]\n", crtc->base.id, crtc->name); 723 723 724 724 plane = crtc->primary; 725 725 ··· 742 742 old_fb = plane->fb; 743 743 744 744 if (!old_fb) { 745 - DRM_DEBUG_KMS("CRTC doesn't have current FB\n"); 745 + drm_dbg_kms(dev, "CRTC doesn't have current FB\n"); 746 746 ret = -EINVAL; 747 747 goto out; 748 748 } ··· 753 753 } else { 754 754 fb = drm_framebuffer_lookup(dev, file_priv, crtc_req->fb_id); 755 755 if (!fb) { 756 - DRM_DEBUG_KMS("Unknown FB ID%d\n", 757 - crtc_req->fb_id); 756 + drm_dbg_kms(dev, "Unknown FB ID%d\n", 757 + crtc_req->fb_id); 758 758 ret = -ENOENT; 759 759 goto out; 760 760 } ··· 767 767 } 768 768 if (!file_priv->aspect_ratio_allowed && 769 769 (crtc_req->mode.flags & DRM_MODE_FLAG_PIC_AR_MASK) != DRM_MODE_FLAG_PIC_AR_NONE) { 770 - DRM_DEBUG_KMS("Unexpected aspect-ratio flag bits\n"); 770 + drm_dbg_kms(dev, "Unexpected aspect-ratio flag bits\n"); 771 771 ret = -EINVAL; 772 772 goto out; 773 773 } ··· 775 775 776 776 ret = drm_mode_convert_umode(dev, mode, &crtc_req->mode); 777 777 if (ret) { 778 - DRM_DEBUG_KMS("Invalid mode (ret=%d, status=%s)\n", 779 - ret, drm_get_mode_status_name(mode->status)); 780 - drm_mode_debug_printmodeline(mode); 778 + drm_dbg_kms(dev, "Invalid mode (%s, %pe): " DRM_MODE_FMT "\n", 779 + drm_get_mode_status_name(mode->status), 780 + ERR_PTR(ret), DRM_MODE_ARG(mode)); 781 781 goto out; 782 782 } 783 783 ··· 793 793 fb->format->format, 794 794 fb->modifier); 795 795 if (ret) { 796 - DRM_DEBUG_KMS("Invalid pixel format %p4cc, modifier 0x%llx\n", 797 - &fb->format->format, 798 - fb->modifier); 796 + drm_dbg_kms(dev, "Invalid pixel format %p4cc, modifier 0x%llx\n", 
797 + &fb->format->format, fb->modifier); 799 798 goto out; 800 799 } 801 800 } ··· 807 808 } 808 809 809 810 if (crtc_req->count_connectors == 0 && mode) { 810 - DRM_DEBUG_KMS("Count connectors is 0 but mode set\n"); 811 + drm_dbg_kms(dev, "Count connectors is 0 but mode set\n"); 811 812 ret = -EINVAL; 812 813 goto out; 813 814 } 814 815 815 816 if (crtc_req->count_connectors > 0 && (!mode || !fb)) { 816 - DRM_DEBUG_KMS("Count connectors is %d but no mode or fb set\n", 817 - crtc_req->count_connectors); 817 + drm_dbg_kms(dev, "Count connectors is %d but no mode or fb set\n", 818 + crtc_req->count_connectors); 818 819 ret = -EINVAL; 819 820 goto out; 820 821 } ··· 846 847 847 848 connector = drm_connector_lookup(dev, file_priv, out_id); 848 849 if (!connector) { 849 - DRM_DEBUG_KMS("Connector id %d unknown\n", 850 - out_id); 850 + drm_dbg_kms(dev, "Connector id %d unknown\n", 851 + out_id); 851 852 ret = -ENOENT; 852 853 goto out; 853 854 } 854 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", 855 - connector->base.id, 856 - connector->name); 855 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s]\n", 856 + connector->base.id, connector->name); 857 857 858 858 connector_set[i] = connector; 859 859 num_connectors++;
+54 -46
drivers/gpu/drm/drm_crtc_helper.c
··· 110 110 struct drm_connector_list_iter conn_iter; 111 111 struct drm_device *dev = encoder->dev; 112 112 113 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 113 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 114 114 115 115 /* 116 116 * We can expect this mutex to be locked if we are not panicking. 117 117 * Locking is currently fubar in the panic handler. 118 118 */ 119 119 if (!oops_in_progress) { 120 - WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); 121 - WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex)); 120 + drm_WARN_ON(dev, !mutex_is_locked(&dev->mode_config.mutex)); 121 + drm_WARN_ON(dev, !drm_modeset_is_locked(&dev->mode_config.connection_mutex)); 122 122 } 123 123 124 124 ··· 150 150 struct drm_encoder *encoder; 151 151 struct drm_device *dev = crtc->dev; 152 152 153 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 153 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 154 154 155 155 /* 156 156 * We can expect this mutex to be locked if we are not panicking. 157 157 * Locking is currently fubar in the panic handler. 
158 158 */ 159 159 if (!oops_in_progress) 160 - WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); 160 + drm_WARN_ON(dev, !mutex_is_locked(&dev->mode_config.mutex)); 161 161 162 162 drm_for_each_encoder(encoder, dev) 163 163 if (encoder->crtc == crtc && drm_helper_encoder_in_use(encoder)) ··· 230 230 */ 231 231 void drm_helper_disable_unused_functions(struct drm_device *dev) 232 232 { 233 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 233 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 234 234 235 235 drm_modeset_lock_all(dev); 236 236 __drm_helper_disable_unused_functions(dev); ··· 294 294 struct drm_encoder *encoder; 295 295 bool ret = true; 296 296 297 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 297 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 298 298 299 299 drm_warn_on_modeset_not_all_locked(dev); 300 300 ··· 338 338 if (encoder_funcs->mode_fixup) { 339 339 if (!(ret = encoder_funcs->mode_fixup(encoder, mode, 340 340 adjusted_mode))) { 341 - DRM_DEBUG_KMS("Encoder fixup failed\n"); 341 + drm_dbg_kms(dev, "[ENCODER:%d:%s] mode fixup failed\n", 342 + encoder->base.id, encoder->name); 342 343 goto done; 343 344 } 344 345 } ··· 348 347 if (crtc_funcs->mode_fixup) { 349 348 if (!(ret = crtc_funcs->mode_fixup(crtc, mode, 350 349 adjusted_mode))) { 351 - DRM_DEBUG_KMS("CRTC fixup failed\n"); 350 + drm_dbg_kms(dev, "[CRTC:%d:%s] mode fixup failed\n", 351 + crtc->base.id, crtc->name); 352 352 goto done; 353 353 } 354 354 } 355 - DRM_DEBUG_KMS("[CRTC:%d:%s]\n", crtc->base.id, crtc->name); 355 + drm_dbg_kms(dev, "[CRTC:%d:%s]\n", crtc->base.id, crtc->name); 356 356 357 357 drm_mode_copy(&crtc->hwmode, adjusted_mode); 358 358 ··· 392 390 if (!encoder_funcs) 393 391 continue; 394 392 395 - DRM_DEBUG_KMS("[ENCODER:%d:%s] set [MODE:%s]\n", 396 - encoder->base.id, encoder->name, mode->name); 393 + drm_dbg_kms(dev, "[ENCODER:%d:%s] set [MODE:%s]\n", 394 + encoder->base.id, encoder->name, mode->name); 397 395 if (encoder_funcs->mode_set) 398 396 
encoder_funcs->mode_set(encoder, mode, adjusted_mode); 399 397 } ··· 505 503 { 506 504 struct drm_encoder *encoder; 507 505 508 - WARN_ON(hweight32(connector->possible_encoders) > 1); 506 + drm_WARN_ON(connector->dev, hweight32(connector->possible_encoders) > 1); 509 507 drm_connector_for_each_possible_encoder(connector, encoder) 510 508 return encoder; 511 509 ··· 566 564 int ret; 567 565 int i; 568 566 569 - DRM_DEBUG_KMS("\n"); 570 - 571 567 BUG_ON(!set); 572 568 BUG_ON(!set->crtc); 573 569 BUG_ON(!set->crtc->helper_private); ··· 577 577 crtc_funcs = set->crtc->helper_private; 578 578 579 579 dev = set->crtc->dev; 580 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 580 + 581 + drm_dbg_kms(dev, "\n"); 582 + 583 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 581 584 582 585 if (!set->mode) 583 586 set->fb = NULL; 584 587 585 588 if (set->fb) { 586 - DRM_DEBUG_KMS("[CRTC:%d:%s] [FB:%d] #connectors=%d (x y) (%i %i)\n", 587 - set->crtc->base.id, set->crtc->name, 588 - set->fb->base.id, 589 - (int)set->num_connectors, set->x, set->y); 589 + drm_dbg_kms(dev, "[CRTC:%d:%s] [FB:%d] #connectors=%d (x y) (%i %i)\n", 590 + set->crtc->base.id, set->crtc->name, 591 + set->fb->base.id, 592 + (int)set->num_connectors, set->x, set->y); 590 593 } else { 591 - DRM_DEBUG_KMS("[CRTC:%d:%s] [NOFB]\n", 592 - set->crtc->base.id, set->crtc->name); 594 + drm_dbg_kms(dev, "[CRTC:%d:%s] [NOFB]\n", 595 + set->crtc->base.id, set->crtc->name); 593 596 drm_crtc_helper_disable(set->crtc); 594 597 return 0; 595 598 } ··· 642 639 if (set->crtc->primary->fb != set->fb) { 643 640 /* If we have no fb then treat it as a full mode set */ 644 641 if (set->crtc->primary->fb == NULL) { 645 - DRM_DEBUG_KMS("crtc has no fb, full mode set\n"); 642 + drm_dbg_kms(dev, "[CRTC:%d:%s] no fb, full mode set\n", 643 + set->crtc->base.id, set->crtc->name); 646 644 mode_changed = true; 647 645 } else if (set->fb->format != set->crtc->primary->fb->format) { 648 646 mode_changed = true; ··· 655 651 fb_changed = 
true; 656 652 657 653 if (!drm_mode_equal(set->mode, &set->crtc->mode)) { 658 - DRM_DEBUG_KMS("modes are different, full mode set\n"); 659 - drm_mode_debug_printmodeline(&set->crtc->mode); 660 - drm_mode_debug_printmodeline(set->mode); 654 + drm_dbg_kms(dev, "[CRTC:%d:%s] modes are different, full mode set:\n", 655 + set->crtc->base.id, set->crtc->name); 656 + drm_dbg_kms(dev, DRM_MODE_FMT "\n", DRM_MODE_ARG(&set->crtc->mode)); 657 + drm_dbg_kms(dev, DRM_MODE_FMT "\n", DRM_MODE_ARG(set->mode)); 661 658 mode_changed = true; 662 659 } 663 660 ··· 692 687 fail = 1; 693 688 694 689 if (connector->dpms != DRM_MODE_DPMS_ON) { 695 - DRM_DEBUG_KMS("connector dpms not on, full mode switch\n"); 690 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] DPMS not on, full mode switch\n", 691 + connector->base.id, connector->name); 696 692 mode_changed = true; 697 693 } 698 694 ··· 702 696 } 703 697 704 698 if (new_encoder != connector->encoder) { 705 - DRM_DEBUG_KMS("encoder changed, full mode switch\n"); 699 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] encoder changed, full mode switch\n", 700 + connector->base.id, connector->name); 706 701 mode_changed = true; 707 702 /* If the encoder is reused for another connector, then 708 703 * the appropriate crtc will be set later. 
··· 744 737 goto fail; 745 738 } 746 739 if (new_crtc != connector->encoder->crtc) { 747 - DRM_DEBUG_KMS("crtc changed, full mode switch\n"); 740 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] CRTC changed, full mode switch\n", 741 + connector->base.id, connector->name); 748 742 mode_changed = true; 749 743 connector->encoder->crtc = new_crtc; 750 744 } 751 745 if (new_crtc) { 752 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [CRTC:%d:%s]\n", 753 - connector->base.id, connector->name, 754 - new_crtc->base.id, new_crtc->name); 746 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] to [CRTC:%d:%s]\n", 747 + connector->base.id, connector->name, 748 + new_crtc->base.id, new_crtc->name); 755 749 } else { 756 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [NOCRTC]\n", 757 - connector->base.id, connector->name); 750 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] to [NOCRTC]\n", 751 + connector->base.id, connector->name); 758 752 } 759 753 } 760 754 drm_connector_list_iter_end(&conn_iter); ··· 766 758 767 759 if (mode_changed) { 768 760 if (drm_helper_crtc_in_use(set->crtc)) { 769 - DRM_DEBUG_KMS("attempting to set mode from" 770 - " userspace\n"); 771 - drm_mode_debug_printmodeline(set->mode); 761 + drm_dbg_kms(dev, "[CRTC:%d:%s] attempting to set mode from userspace: " DRM_MODE_FMT "\n", 762 + set->crtc->base.id, set->crtc->name, DRM_MODE_ARG(set->mode)); 772 763 set->crtc->primary->fb = set->fb; 773 764 if (!drm_crtc_helper_set_mode(set->crtc, set->mode, 774 765 set->x, set->y, 775 766 save_set.fb)) { 776 - DRM_ERROR("failed to set mode on [CRTC:%d:%s]\n", 777 - set->crtc->base.id, set->crtc->name); 767 + drm_err(dev, "[CRTC:%d:%s] failed to set mode\n", 768 + set->crtc->base.id, set->crtc->name); 778 769 set->crtc->primary->fb = save_set.fb; 779 770 ret = -EINVAL; 780 771 goto fail; 781 772 } 782 - DRM_DEBUG_KMS("Setting connector DPMS state to on\n"); 773 + drm_dbg_kms(dev, "[CRTC:%d:%s] Setting connector DPMS state to on\n", 774 + set->crtc->base.id, set->crtc->name); 783 775 for (i = 0; i < set->num_connectors; 
i++) { 784 - DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id, 785 - set->connectors[i]->name); 776 + drm_dbg_kms(dev, "\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id, 777 + set->connectors[i]->name); 786 778 set->connectors[i]->funcs->dpms(set->connectors[i], DRM_MODE_DPMS_ON); 787 779 } 788 780 } ··· 831 823 if (mode_changed && 832 824 !drm_crtc_helper_set_mode(save_set.crtc, save_set.mode, save_set.x, 833 825 save_set.y, save_set.fb)) 834 - DRM_ERROR("failed to restore config after modeset failure\n"); 826 + drm_err(dev, "failed to restore config after modeset failure\n"); 835 827 836 828 kfree(save_connector_encoders); 837 829 kfree(save_encoder_crtcs); ··· 913 905 struct drm_crtc *crtc = encoder ? encoder->crtc : NULL; 914 906 int old_dpms, encoder_dpms = DRM_MODE_DPMS_OFF; 915 907 916 - WARN_ON(drm_drv_uses_atomic_modeset(connector->dev)); 908 + drm_WARN_ON(connector->dev, drm_drv_uses_atomic_modeset(connector->dev)); 917 909 918 910 if (mode == connector->dpms) 919 911 return 0; ··· 988 980 int encoder_dpms; 989 981 bool ret; 990 982 991 - WARN_ON(drm_drv_uses_atomic_modeset(dev)); 983 + drm_WARN_ON(dev, drm_drv_uses_atomic_modeset(dev)); 992 984 993 985 drm_modeset_lock_all(dev); 994 986 drm_for_each_crtc(crtc, dev) { ··· 1001 993 1002 994 /* Restoring the old config should never fail! */ 1003 995 if (ret == false) 1004 - DRM_ERROR("failed to set mode on crtc %p\n", crtc); 996 + drm_err(dev, "failed to set mode on crtc %p\n", crtc); 1005 997 1006 998 /* Turn off outputs that were already powered off */ 1007 999 if (drm_helper_choose_crtc_dpms(crtc)) {
+6
drivers/gpu/drm/drm_crtc_internal.h
··· 43 43 enum drm_connector_force; 44 44 enum drm_mode_status; 45 45 46 + struct cea_sad; 46 47 struct drm_atomic_state; 47 48 struct drm_bridge; 48 49 struct drm_connector; 49 50 struct drm_crtc; 50 51 struct drm_device; 51 52 struct drm_display_mode; 53 + struct drm_edid; 52 54 struct drm_file; 53 55 struct drm_framebuffer; 54 56 struct drm_mode_create_dumb; ··· 299 297 int drm_edid_override_show(struct drm_connector *connector, struct seq_file *m); 300 298 int drm_edid_override_set(struct drm_connector *connector, const void *edid, size_t size); 301 299 int drm_edid_override_reset(struct drm_connector *connector); 300 + const u8 *drm_edid_find_extension(const struct drm_edid *drm_edid, 301 + int ext_id, int *ext_index); 302 + void drm_edid_cta_sad_get(const struct cea_sad *cta_sad, u8 *sad); 303 + void drm_edid_cta_sad_set(struct cea_sad *cta_sad, const u8 *sad); 302 304 303 305 /* drm_edid_load.c */ 304 306 #ifdef CONFIG_DRM_LOAD_EDID_FIRMWARE
+5 -2
drivers/gpu/drm/drm_displayid.c
··· 3 3 * Copyright © 2021 Intel Corporation 4 4 */ 5 5 6 - #include <drm/drm_displayid.h> 7 6 #include <drm/drm_edid.h> 8 7 #include <drm/drm_print.h> 8 + 9 + #include "drm_crtc_internal.h" 10 + #include "drm_displayid_internal.h" 9 11 10 12 static const struct displayid_header * 11 13 displayid_get_header(const u8 *displayid, int length, int index) ··· 55 53 int *length, int *idx, 56 54 int *ext_index) 57 55 { 58 - const u8 *displayid = drm_find_edid_extension(drm_edid, DISPLAYID_EXT, ext_index); 59 56 const struct displayid_header *base; 57 + const u8 *displayid; 60 58 59 + displayid = drm_edid_find_extension(drm_edid, DISPLAYID_EXT, ext_index); 61 60 if (!displayid) 62 61 return NULL; 63 62
+5
drivers/gpu/drm/drm_drv.c
··· 43 43 #include <drm/drm_file.h> 44 44 #include <drm/drm_managed.h> 45 45 #include <drm/drm_mode_object.h> 46 + #include <drm/drm_panic.h> 46 47 #include <drm/drm_print.h> 47 48 #include <drm/drm_privacy_screen_machine.h> 48 49 ··· 639 638 mutex_init(&dev->filelist_mutex); 640 639 mutex_init(&dev->clientlist_mutex); 641 640 mutex_init(&dev->master_mutex); 641 + raw_spin_lock_init(&dev->mode_config.panic_lock); 642 642 643 643 ret = drmm_add_action_or_reset(dev, drm_dev_init_release, NULL); 644 644 if (ret) ··· 945 943 if (ret) 946 944 goto err_unload; 947 945 } 946 + drm_panic_register(dev); 948 947 949 948 DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n", 950 949 driver->name, driver->major, driver->minor, ··· 989 986 void drm_dev_unregister(struct drm_device *dev) 990 987 { 991 988 dev->registered = false; 989 + 990 + drm_panic_unregister(dev); 992 991 993 992 drm_client_dev_unregister(dev); 994 993
+87 -30
drivers/gpu/drm/drm_edid.c
··· 29 29 */ 30 30 31 31 #include <linux/bitfield.h> 32 + #include <linux/byteorder/generic.h> 32 33 #include <linux/cec.h> 33 34 #include <linux/hdmi.h> 34 35 #include <linux/i2c.h> 35 36 #include <linux/kernel.h> 36 37 #include <linux/module.h> 37 38 #include <linux/pci.h> 39 + #include <linux/seq_buf.h> 38 40 #include <linux/slab.h> 39 41 #include <linux/vga_switcheroo.h> 40 42 41 - #include <drm/drm_displayid.h> 42 43 #include <drm/drm_drv.h> 43 44 #include <drm/drm_edid.h> 44 45 #include <drm/drm_eld.h> ··· 47 46 #include <drm/drm_print.h> 48 47 49 48 #include "drm_crtc_internal.h" 49 + #include "drm_displayid_internal.h" 50 50 #include "drm_internal.h" 51 51 52 52 static int oui(u8 first, u8 second, u8 third) ··· 1820 1818 return !memchr_inv(edid, 0, EDID_LENGTH); 1821 1819 } 1822 1820 1823 - /** 1824 - * drm_edid_are_equal - compare two edid blobs. 1825 - * @edid1: pointer to first blob 1826 - * @edid2: pointer to second blob 1827 - * This helper can be used during probing to determine if 1828 - * edid had changed. 
1829 - */ 1830 - bool drm_edid_are_equal(const struct edid *edid1, const struct edid *edid2) 1821 + static bool drm_edid_eq(const struct drm_edid *drm_edid, 1822 + const void *raw_edid, size_t raw_edid_size) 1831 1823 { 1832 - int edid1_len, edid2_len; 1833 - bool edid1_present = edid1 != NULL; 1834 - bool edid2_present = edid2 != NULL; 1824 + bool edid1_present = drm_edid && drm_edid->edid && drm_edid->size; 1825 + bool edid2_present = raw_edid && raw_edid_size; 1835 1826 1836 1827 if (edid1_present != edid2_present) 1837 1828 return false; 1838 1829 1839 - if (edid1) { 1840 - edid1_len = edid_size(edid1); 1841 - edid2_len = edid_size(edid2); 1842 - 1843 - if (edid1_len != edid2_len) 1830 + if (edid1_present) { 1831 + if (drm_edid->size != raw_edid_size) 1844 1832 return false; 1845 1833 1846 - if (memcmp(edid1, edid2, edid1_len)) 1834 + if (memcmp(drm_edid->edid, raw_edid, drm_edid->size)) 1847 1835 return false; 1848 1836 } 1849 1837 1850 1838 return true; 1851 1839 } 1852 - EXPORT_SYMBOL(drm_edid_are_equal); 1853 1840 1854 1841 enum edid_block_status { 1855 1842 EDID_BLOCK_OK = 0, ··· 2746 2755 return drm_edid_read_ddc(connector, connector->ddc); 2747 2756 } 2748 2757 EXPORT_SYMBOL(drm_edid_read); 2758 + 2759 + /** 2760 + * drm_edid_get_product_id - Get the vendor and product identification 2761 + * @drm_edid: EDID 2762 + * @id: Where to place the product id 2763 + */ 2764 + void drm_edid_get_product_id(const struct drm_edid *drm_edid, 2765 + struct drm_edid_product_id *id) 2766 + { 2767 + if (drm_edid && drm_edid->edid && drm_edid->size >= EDID_LENGTH) 2768 + memcpy(id, &drm_edid->edid->product_id, sizeof(*id)); 2769 + else 2770 + memset(id, 0, sizeof(*id)); 2771 + } 2772 + EXPORT_SYMBOL(drm_edid_get_product_id); 2773 + 2774 + static void decode_date(struct seq_buf *s, const struct drm_edid_product_id *id) 2775 + { 2776 + int week = id->week_of_manufacture; 2777 + int year = id->year_of_manufacture + 1990; 2778 + 2779 + if (week == 0xff) 2780 + 
seq_buf_printf(s, "model year: %d", year); 2781 + else if (!week) 2782 + seq_buf_printf(s, "year of manufacture: %d", year); 2783 + else 2784 + seq_buf_printf(s, "week/year of manufacture: %d/%d", week, year); 2785 + } 2786 + 2787 + /** 2788 + * drm_edid_print_product_id - Print decoded product id to printer 2789 + * @p: drm printer 2790 + * @id: EDID product id 2791 + * @raw: If true, also print the raw hex 2792 + * 2793 + * See VESA E-EDID 1.4 section 3.4. 2794 + */ 2795 + void drm_edid_print_product_id(struct drm_printer *p, 2796 + const struct drm_edid_product_id *id, bool raw) 2797 + { 2798 + DECLARE_SEQ_BUF(date, 40); 2799 + char vend[4]; 2800 + 2801 + drm_edid_decode_mfg_id(be16_to_cpu(id->manufacturer_name), vend); 2802 + 2803 + decode_date(&date, id); 2804 + 2805 + drm_printf(p, "manufacturer name: %s, product code: %u, serial number: %u, %s\n", 2806 + vend, le16_to_cpu(id->product_code), 2807 + le32_to_cpu(id->serial_number), seq_buf_str(&date)); 2808 + 2809 + if (raw) 2810 + drm_printf(p, "raw product id: %*ph\n", (int)sizeof(*id), id); 2811 + 2812 + WARN_ON(seq_buf_has_overflowed(&date)); 2813 + } 2814 + EXPORT_SYMBOL(drm_edid_print_product_id); 2749 2815 2750 2816 /** 2751 2817 * drm_edid_get_panel_id - Get a panel's ID from EDID ··· 4189 4141 * 4190 4142 * FIXME: Prefer not returning pointers to raw EDID data. 
4191 4143 */ 4192 - const u8 *drm_find_edid_extension(const struct drm_edid *drm_edid, 4144 + const u8 *drm_edid_find_extension(const struct drm_edid *drm_edid, 4193 4145 int ext_id, int *ext_index) 4194 4146 { 4195 4147 const u8 *edid_ext = NULL; ··· 4219 4171 { 4220 4172 const struct displayid_block *block; 4221 4173 struct displayid_iter iter; 4222 - int ext_index = 0; 4174 + struct drm_edid_iter edid_iter; 4175 + const u8 *ext; 4223 4176 bool found = false; 4224 4177 4225 4178 /* Look for a top level CEA extension block */ 4226 - if (drm_find_edid_extension(drm_edid, CEA_EXT, &ext_index)) 4179 + drm_edid_iter_begin(drm_edid, &edid_iter); 4180 + drm_edid_iter_for_each(ext, &edid_iter) { 4181 + if (ext[0] == CEA_EXT) { 4182 + found = true; 4183 + break; 4184 + } 4185 + } 4186 + drm_edid_iter_end(&edid_iter); 4187 + 4188 + if (found) 4227 4189 return true; 4228 4190 4229 4191 /* CEA blocks can also be found embedded in a DisplayID block */ ··· 6926 6868 int ret; 6927 6869 6928 6870 if (connector->edid_blob_ptr) { 6929 - const struct edid *old_edid = connector->edid_blob_ptr->data; 6871 + const void *old_edid = connector->edid_blob_ptr->data; 6872 + size_t old_edid_size = connector->edid_blob_ptr->length; 6930 6873 6931 - if (old_edid) { 6932 - if (!drm_edid_are_equal(drm_edid ? drm_edid->edid : NULL, old_edid)) { 6933 - connector->epoch_counter++; 6934 - drm_dbg_kms(dev, "[CONNECTOR:%d:%s] EDID changed, epoch counter %llu\n", 6935 - connector->base.id, connector->name, 6936 - connector->epoch_counter); 6937 - } 6874 + if (old_edid && !drm_edid_eq(drm_edid, old_edid, old_edid_size)) { 6875 + connector->epoch_counter++; 6876 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] EDID changed, epoch counter %llu\n", 6877 + connector->base.id, connector->name, 6878 + connector->epoch_counter); 6938 6879 } 6939 6880 } 6940 6881
+3 -1
drivers/gpu/drm/drm_eld.c
··· 3 3 * Copyright © 2023 Intel Corporation 4 4 */ 5 5 6 + #include <linux/export.h> 7 + 6 8 #include <drm/drm_edid.h> 7 9 #include <drm/drm_eld.h> 8 10 9 - #include "drm_internal.h" 11 + #include "drm_crtc_internal.h" 10 12 11 13 /** 12 14 * drm_eld_sad_get - get SAD from ELD to struct cea_sad
+42
drivers/gpu/drm/drm_fb_dma_helper.c
··· 15 15 #include <drm/drm_framebuffer.h> 16 16 #include <drm/drm_gem_dma_helper.h> 17 17 #include <drm/drm_gem_framebuffer_helper.h> 18 + #include <drm/drm_panic.h> 18 19 #include <drm/drm_plane.h> 19 20 #include <linux/dma-mapping.h> 20 21 #include <linux/module.h> ··· 149 148 } 150 149 } 151 150 EXPORT_SYMBOL_GPL(drm_fb_dma_sync_non_coherent); 151 + 152 + /** 153 + * drm_fb_dma_get_scanout_buffer - Provide a scanout buffer in case of panic 154 + * @plane: DRM primary plane 155 + * @sb: scanout buffer for the panic handler 156 + * Returns: 0 or negative error code 157 + * 158 + * Generic get_scanout_buffer() implementation, for drivers that uses the 159 + * drm_fb_dma_helper. It won't call vmap in the panic context, so the driver 160 + * should make sure the primary plane is vmapped, otherwise the panic screen 161 + * won't get displayed. 162 + */ 163 + int drm_fb_dma_get_scanout_buffer(struct drm_plane *plane, 164 + struct drm_scanout_buffer *sb) 165 + { 166 + struct drm_gem_dma_object *dma_obj; 167 + struct drm_framebuffer *fb; 168 + 169 + fb = plane->state->fb; 170 + /* Only support linear modifier */ 171 + if (fb->modifier != DRM_FORMAT_MOD_LINEAR) 172 + return -ENODEV; 173 + 174 + dma_obj = drm_fb_dma_get_gem_obj(fb, 0); 175 + 176 + /* Buffer should be accessible from the CPU */ 177 + if (dma_obj->base.import_attach) 178 + return -ENODEV; 179 + 180 + /* Buffer should be already mapped to CPU */ 181 + if (!dma_obj->vaddr) 182 + return -ENODEV; 183 + 184 + iosys_map_set_vaddr(&sb->map[0], dma_obj->vaddr); 185 + sb->format = fb->format; 186 + sb->height = fb->height; 187 + sb->width = fb->width; 188 + sb->pitch[0] = fb->pitches[0]; 189 + return 0; 190 + } 191 + EXPORT_SYMBOL(drm_fb_dma_get_scanout_buffer);
-5
drivers/gpu/drm/drm_internal.h
··· 35 35 36 36 #define DRM_IF_VERSION(maj, min) (maj << 16 | min) 37 37 38 - struct cea_sad; 39 38 struct dentry; 40 39 struct dma_buf; 41 40 struct iosys_map; ··· 276 277 void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent, 277 278 const struct drm_framebuffer *fb); 278 279 void drm_framebuffer_debugfs_init(struct drm_device *dev); 279 - 280 - /* drm_edid.c */ 281 - void drm_edid_cta_sad_get(const struct cea_sad *cta_sad, u8 *sad); 282 - void drm_edid_cta_sad_set(struct cea_sad *cta_sad, const u8 *sad); 283 280 284 281 #endif /* __DRM_INTERNAL_H__ */
+41 -14
drivers/gpu/drm/drm_mipi_dsi.c
··· 645 645 EXPORT_SYMBOL(mipi_dsi_set_maximum_return_packet_size); 646 646 647 647 /** 648 + * mipi_dsi_compression_mode_ext() - enable/disable DSC on the peripheral 649 + * @dsi: DSI peripheral device 650 + * @enable: Whether to enable or disable the DSC 651 + * @algo: Selected compression algorithm 652 + * @pps_selector: Select PPS from the table of pre-stored or uploaded PPS entries 653 + * 654 + * Enable or disable Display Stream Compression on the peripheral. 655 + * 656 + * Return: 0 on success or a negative error code on failure. 657 + */ 658 + int mipi_dsi_compression_mode_ext(struct mipi_dsi_device *dsi, bool enable, 659 + enum mipi_dsi_compression_algo algo, 660 + unsigned int pps_selector) 661 + { 662 + u8 tx[2] = { }; 663 + struct mipi_dsi_msg msg = { 664 + .channel = dsi->channel, 665 + .type = MIPI_DSI_COMPRESSION_MODE, 666 + .tx_len = sizeof(tx), 667 + .tx_buf = tx, 668 + }; 669 + int ret; 670 + 671 + if (algo > 3 || pps_selector > 3) 672 + return -EINVAL; 673 + 674 + tx[0] = (enable << 0) | 675 + (algo << 1) | 676 + (pps_selector << 4); 677 + 678 + ret = mipi_dsi_device_transfer(dsi, &msg); 679 + 680 + return (ret < 0) ? ret : 0; 681 + } 682 + EXPORT_SYMBOL(mipi_dsi_compression_mode_ext); 683 + 684 + /** 648 685 * mipi_dsi_compression_mode() - enable/disable DSC on the peripheral 649 686 * @dsi: DSI peripheral device 650 687 * @enable: Whether to enable or disable the DSC ··· 691 654 * 692 655 * Return: 0 on success or a negative error code on failure. 
693 656 */ 694 - ssize_t mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable) 657 + int mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable) 695 658 { 696 - /* Note: Needs updating for non-default PPS or algorithm */ 697 - u8 tx[2] = { enable << 0, 0 }; 698 - struct mipi_dsi_msg msg = { 699 - .channel = dsi->channel, 700 - .type = MIPI_DSI_COMPRESSION_MODE, 701 - .tx_len = sizeof(tx), 702 - .tx_buf = tx, 703 - }; 704 - int ret = mipi_dsi_device_transfer(dsi, &msg); 705 - 706 - return (ret < 0) ? ret : 0; 659 + return mipi_dsi_compression_mode_ext(dsi, enable, MIPI_DSI_COMPRESSION_DSC, 0); 707 660 } 708 661 EXPORT_SYMBOL(mipi_dsi_compression_mode); 709 662 ··· 706 679 * 707 680 * Return: 0 on success or a negative error code on failure. 708 681 */ 709 - ssize_t mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi, 710 - const struct drm_dsc_picture_parameter_set *pps) 682 + int mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi, 683 + const struct drm_dsc_picture_parameter_set *pps) 711 684 { 712 685 struct mipi_dsi_msg msg = { 713 686 .channel = dsi->channel,
+7
drivers/gpu/drm/drm_mode_config.c
··· 372 372 return -ENOMEM; 373 373 dev->mode_config.modifiers_property = prop; 374 374 375 + prop = drm_property_create(dev, 376 + DRM_MODE_PROP_IMMUTABLE | DRM_MODE_PROP_BLOB, 377 + "SIZE_HINTS", 0); 378 + if (!prop) 379 + return -ENOMEM; 380 + dev->mode_config.size_hints_property = prop; 381 + 375 382 return 0; 376 383 } 377 384
+19 -21
drivers/gpu/drm/drm_modes.c
··· 373 373 if (!bt601 && 374 374 (hact_duration_ns < params->hact_ns.min || 375 375 hact_duration_ns > params->hact_ns.max)) { 376 - DRM_ERROR("Invalid horizontal active area duration: %uns (min: %u, max %u)\n", 377 - hact_duration_ns, params->hact_ns.min, params->hact_ns.max); 376 + drm_err(dev, "Invalid horizontal active area duration: %uns (min: %u, max %u)\n", 377 + hact_duration_ns, params->hact_ns.min, params->hact_ns.max); 378 378 return -EINVAL; 379 379 } 380 380 ··· 385 385 if (!bt601 && 386 386 (hblk_duration_ns < params->hblk_ns.min || 387 387 hblk_duration_ns > params->hblk_ns.max)) { 388 - DRM_ERROR("Invalid horizontal blanking duration: %uns (min: %u, max %u)\n", 389 - hblk_duration_ns, params->hblk_ns.min, params->hblk_ns.max); 388 + drm_err(dev, "Invalid horizontal blanking duration: %uns (min: %u, max %u)\n", 389 + hblk_duration_ns, params->hblk_ns.min, params->hblk_ns.max); 390 390 return -EINVAL; 391 391 } 392 392 ··· 397 397 if (!bt601 && 398 398 (hslen_duration_ns < params->hslen_ns.min || 399 399 hslen_duration_ns > params->hslen_ns.max)) { 400 - DRM_ERROR("Invalid horizontal sync duration: %uns (min: %u, max %u)\n", 401 - hslen_duration_ns, params->hslen_ns.min, params->hslen_ns.max); 400 + drm_err(dev, "Invalid horizontal sync duration: %uns (min: %u, max %u)\n", 401 + hslen_duration_ns, params->hslen_ns.min, params->hslen_ns.max); 402 402 return -EINVAL; 403 403 } 404 404 ··· 409 409 if (!bt601 && 410 410 (porches_duration_ns > (params->hfp_ns.max + params->hbp_ns.max) || 411 411 porches_duration_ns < (params->hfp_ns.min + params->hbp_ns.min))) { 412 - DRM_ERROR("Invalid horizontal porches duration: %uns\n", porches_duration_ns); 412 + drm_err(dev, "Invalid horizontal porches duration: %uns\n", 413 + porches_duration_ns); 413 414 return -EINVAL; 414 415 } 415 416 ··· 432 431 if (!bt601 && 433 432 (hfp_duration_ns < params->hfp_ns.min || 434 433 hfp_duration_ns > params->hfp_ns.max)) { 435 - DRM_ERROR("Invalid horizontal front porch 
duration: %uns (min: %u, max %u)\n", 436 - hfp_duration_ns, params->hfp_ns.min, params->hfp_ns.max); 434 + drm_err(dev, "Invalid horizontal front porch duration: %uns (min: %u, max %u)\n", 435 + hfp_duration_ns, params->hfp_ns.min, params->hfp_ns.max); 437 436 return -EINVAL; 438 437 } 439 438 ··· 444 443 if (!bt601 && 445 444 (hbp_duration_ns < params->hbp_ns.min || 446 445 hbp_duration_ns > params->hbp_ns.max)) { 447 - DRM_ERROR("Invalid horizontal back porch duration: %uns (min: %u, max %u)\n", 448 - hbp_duration_ns, params->hbp_ns.min, params->hbp_ns.max); 446 + drm_err(dev, "Invalid horizontal back porch duration: %uns (min: %u, max %u)\n", 447 + hbp_duration_ns, params->hbp_ns.min, params->hbp_ns.max); 449 448 return -EINVAL; 450 449 } 451 450 ··· 496 495 497 496 vtotal = vactive + vfp + vslen + vbp; 498 497 if (params->num_lines != vtotal) { 499 - DRM_ERROR("Invalid vertical total: %upx (expected %upx)\n", 500 - vtotal, params->num_lines); 498 + drm_err(dev, "Invalid vertical total: %upx (expected %upx)\n", 499 + vtotal, params->num_lines); 501 500 return -EINVAL; 502 501 } 503 502 ··· 1201 1200 if (bus_flags) 1202 1201 drm_bus_flags_from_videomode(&vm, bus_flags); 1203 1202 1204 - pr_debug("%pOF: got %dx%d display mode\n", 1205 - np, vm.hactive, vm.vactive); 1206 - drm_mode_debug_printmodeline(dmode); 1203 + pr_debug("%pOF: got %dx%d display mode: " DRM_MODE_FMT "\n", 1204 + np, vm.hactive, vm.vactive, DRM_MODE_ARG(dmode)); 1207 1205 1208 1206 return 0; 1209 1207 } ··· 1250 1250 dmode->width_mm = width_mm; 1251 1251 dmode->height_mm = height_mm; 1252 1252 1253 - drm_mode_debug_printmodeline(dmode); 1253 + pr_debug(DRM_MODE_FMT "\n", DRM_MODE_ARG(dmode)); 1254 1254 1255 1255 return 0; 1256 1256 } ··· 1812 1812 DRM_MODE_FMT "\n", DRM_MODE_ARG(mode)); 1813 1813 } 1814 1814 if (verbose) { 1815 - drm_mode_debug_printmodeline(mode); 1816 - DRM_DEBUG_KMS("Not using %s mode: %s\n", 1817 - mode->name, 1818 - drm_get_mode_status_name(mode->status)); 1815 + 
drm_dbg_kms(dev, "Rejected mode: " DRM_MODE_FMT " (%s)\n", 1816 + DRM_MODE_ARG(mode), drm_get_mode_status_name(mode->status)); 1819 1817 } 1820 1818 drm_mode_destroy(dev, mode); 1821 1819 }
+585
drivers/gpu/drm/drm_panic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 or MIT 2 + /* 3 + * Copyright (c) 2023 Red Hat. 4 + * Author: Jocelyn Falempe <jfalempe@redhat.com> 5 + * inspired by the drm_log driver from David Herrmann <dh.herrmann@gmail.com> 6 + * Tux Ascii art taken from cowsay written by Tony Monroe 7 + */ 8 + 9 + #include <linux/font.h> 10 + #include <linux/iosys-map.h> 11 + #include <linux/kdebug.h> 12 + #include <linux/kmsg_dump.h> 13 + #include <linux/list.h> 14 + #include <linux/module.h> 15 + #include <linux/types.h> 16 + 17 + #include <drm/drm_drv.h> 18 + #include <drm/drm_format_helper.h> 19 + #include <drm/drm_fourcc.h> 20 + #include <drm/drm_framebuffer.h> 21 + #include <drm/drm_modeset_helper_vtables.h> 22 + #include <drm/drm_panic.h> 23 + #include <drm/drm_plane.h> 24 + #include <drm/drm_print.h> 25 + 26 + MODULE_AUTHOR("Jocelyn Falempe"); 27 + MODULE_DESCRIPTION("DRM panic handler"); 28 + MODULE_LICENSE("GPL"); 29 + 30 + /** 31 + * DOC: overview 32 + * 33 + * To enable DRM panic for a driver, the primary plane must implement a 34 + * &drm_plane_helper_funcs.get_scanout_buffer helper function. It is then 35 + * automatically registered to the drm panic handler. 36 + * When a panic occurs, the &drm_plane_helper_funcs.get_scanout_buffer will be 37 + * called, and the driver can provide a framebuffer so the panic handler can 38 + * draw the panic screen on it. Currently only linear buffer and a few color 39 + * formats are supported. 40 + * Optionally the driver can also provide a &drm_plane_helper_funcs.panic_flush 41 + * callback, that will be called after that, to send additional commands to the 42 + * hardware to make the scanout buffer visible. 43 + */ 44 + 45 + /* 46 + * This module displays a user friendly message on screen when a kernel panic 47 + * occurs. This is conflicting with fbcon, so you can only enable it when fbcon 48 + * is disabled. 49 + * It's intended for end-user, so have minimal technical/debug information. 
50 + * 51 + * Implementation details: 52 + * 53 + * It is a panic handler, so it can't take lock, allocate memory, run tasks/irq, 54 + * or attempt to sleep. It's a best effort, and it may not be able to display 55 + * the message in all situations (like if the panic occurs in the middle of a 56 + * modesetting). 57 + * It will display only one static frame, so performance optimizations are low 58 + * priority as the machine is already in an unusable state. 59 + */ 60 + 61 + struct drm_panic_line { 62 + u32 len; 63 + const char *txt; 64 + }; 65 + 66 + #define PANIC_LINE(s) {.len = sizeof(s) - 1, .txt = s} 67 + 68 + static struct drm_panic_line panic_msg[] = { 69 + PANIC_LINE("KERNEL PANIC !"), 70 + PANIC_LINE(""), 71 + PANIC_LINE("Please reboot your computer."), 72 + }; 73 + 74 + static const struct drm_panic_line logo[] = { 75 + PANIC_LINE(" .--. _"), 76 + PANIC_LINE(" |o_o | | |"), 77 + PANIC_LINE(" |:_/ | | |"), 78 + PANIC_LINE(" // \\ \\ |_|"), 79 + PANIC_LINE(" (| | ) _"), 80 + PANIC_LINE(" /'\\_ _/`\\ (_)"), 81 + PANIC_LINE(" \\___)=(___/"), 82 + }; 83 + 84 + /* 85 + * Color conversion 86 + */ 87 + 88 + static u16 convert_xrgb8888_to_rgb565(u32 pix) 89 + { 90 + return ((pix & 0x00F80000) >> 8) | 91 + ((pix & 0x0000FC00) >> 5) | 92 + ((pix & 0x000000F8) >> 3); 93 + } 94 + 95 + static u16 convert_xrgb8888_to_rgba5551(u32 pix) 96 + { 97 + return ((pix & 0x00f80000) >> 8) | 98 + ((pix & 0x0000f800) >> 5) | 99 + ((pix & 0x000000f8) >> 2) | 100 + BIT(0); /* set alpha bit */ 101 + } 102 + 103 + static u16 convert_xrgb8888_to_xrgb1555(u32 pix) 104 + { 105 + return ((pix & 0x00f80000) >> 9) | 106 + ((pix & 0x0000f800) >> 6) | 107 + ((pix & 0x000000f8) >> 3); 108 + } 109 + 110 + static u16 convert_xrgb8888_to_argb1555(u32 pix) 111 + { 112 + return BIT(15) | /* set alpha bit */ 113 + ((pix & 0x00f80000) >> 9) | 114 + ((pix & 0x0000f800) >> 6) | 115 + ((pix & 0x000000f8) >> 3); 116 + } 117 + 118 + static u32 convert_xrgb8888_to_argb8888(u32 pix) 119 + { 120 + return pix 
| GENMASK(31, 24); /* fill alpha bits */ 121 + } 122 + 123 + static u32 convert_xrgb8888_to_xbgr8888(u32 pix) 124 + { 125 + return ((pix & 0x00ff0000) >> 16) << 0 | 126 + ((pix & 0x0000ff00) >> 8) << 8 | 127 + ((pix & 0x000000ff) >> 0) << 16 | 128 + ((pix & 0xff000000) >> 24) << 24; 129 + } 130 + 131 + static u32 convert_xrgb8888_to_abgr8888(u32 pix) 132 + { 133 + return ((pix & 0x00ff0000) >> 16) << 0 | 134 + ((pix & 0x0000ff00) >> 8) << 8 | 135 + ((pix & 0x000000ff) >> 0) << 16 | 136 + GENMASK(31, 24); /* fill alpha bits */ 137 + } 138 + 139 + static u32 convert_xrgb8888_to_xrgb2101010(u32 pix) 140 + { 141 + pix = ((pix & 0x000000FF) << 2) | 142 + ((pix & 0x0000FF00) << 4) | 143 + ((pix & 0x00FF0000) << 6); 144 + return pix | ((pix >> 8) & 0x00300C03); 145 + } 146 + 147 + static u32 convert_xrgb8888_to_argb2101010(u32 pix) 148 + { 149 + pix = ((pix & 0x000000FF) << 2) | 150 + ((pix & 0x0000FF00) << 4) | 151 + ((pix & 0x00FF0000) << 6); 152 + return GENMASK(31, 30) /* set alpha bits */ | pix | ((pix >> 8) & 0x00300C03); 153 + } 154 + 155 + /* 156 + * convert_from_xrgb8888 - convert one pixel from xrgb8888 to the desired format 157 + * @color: input color, in xrgb8888 format 158 + * @format: output format 159 + * 160 + * Returns: 161 + * Color in the format specified, casted to u32. 162 + * Or 0 if the format is not supported. 
163 + */ 164 + static u32 convert_from_xrgb8888(u32 color, u32 format) 165 + { 166 + switch (format) { 167 + case DRM_FORMAT_RGB565: 168 + return convert_xrgb8888_to_rgb565(color); 169 + case DRM_FORMAT_RGBA5551: 170 + return convert_xrgb8888_to_rgba5551(color); 171 + case DRM_FORMAT_XRGB1555: 172 + return convert_xrgb8888_to_xrgb1555(color); 173 + case DRM_FORMAT_ARGB1555: 174 + return convert_xrgb8888_to_argb1555(color); 175 + case DRM_FORMAT_RGB888: 176 + case DRM_FORMAT_XRGB8888: 177 + return color; 178 + case DRM_FORMAT_ARGB8888: 179 + return convert_xrgb8888_to_argb8888(color); 180 + case DRM_FORMAT_XBGR8888: 181 + return convert_xrgb8888_to_xbgr8888(color); 182 + case DRM_FORMAT_ABGR8888: 183 + return convert_xrgb8888_to_abgr8888(color); 184 + case DRM_FORMAT_XRGB2101010: 185 + return convert_xrgb8888_to_xrgb2101010(color); 186 + case DRM_FORMAT_ARGB2101010: 187 + return convert_xrgb8888_to_argb2101010(color); 188 + default: 189 + WARN_ONCE(1, "Can't convert to %p4cc\n", &format); 190 + return 0; 191 + } 192 + } 193 + 194 + /* 195 + * Blit & Fill 196 + */ 197 + static void drm_panic_blit16(struct iosys_map *dmap, unsigned int dpitch, 198 + const u8 *sbuf8, unsigned int spitch, 199 + unsigned int height, unsigned int width, 200 + u16 fg16, u16 bg16) 201 + { 202 + unsigned int y, x; 203 + u16 val16; 204 + 205 + for (y = 0; y < height; y++) { 206 + for (x = 0; x < width; x++) { 207 + val16 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? 
fg16 : bg16; 208 + iosys_map_wr(dmap, y * dpitch + x * sizeof(u16), u16, val16); 209 + } 210 + } 211 + } 212 + 213 + static void drm_panic_blit24(struct iosys_map *dmap, unsigned int dpitch, 214 + const u8 *sbuf8, unsigned int spitch, 215 + unsigned int height, unsigned int width, 216 + u32 fg32, u32 bg32) 217 + { 218 + unsigned int y, x; 219 + u32 val32; 220 + 221 + for (y = 0; y < height; y++) { 222 + for (x = 0; x < width; x++) { 223 + u32 off = y * dpitch + x * 3; 224 + 225 + val32 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? fg32 : bg32; 226 + 227 + /* write blue-green-red to output in little endianness */ 228 + iosys_map_wr(dmap, off, u8, (val32 & 0x000000FF) >> 0); 229 + iosys_map_wr(dmap, off + 1, u8, (val32 & 0x0000FF00) >> 8); 230 + iosys_map_wr(dmap, off + 2, u8, (val32 & 0x00FF0000) >> 16); 231 + } 232 + } 233 + } 234 + 235 + static void drm_panic_blit32(struct iosys_map *dmap, unsigned int dpitch, 236 + const u8 *sbuf8, unsigned int spitch, 237 + unsigned int height, unsigned int width, 238 + u32 fg32, u32 bg32) 239 + { 240 + unsigned int y, x; 241 + u32 val32; 242 + 243 + for (y = 0; y < height; y++) { 244 + for (x = 0; x < width; x++) { 245 + val32 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? fg32 : bg32; 246 + iosys_map_wr(dmap, y * dpitch + x * sizeof(u32), u32, val32); 247 + } 248 + } 249 + } 250 + 251 + /* 252 + * drm_panic_blit - convert a monochrome image to a linear framebuffer 253 + * @dmap: destination iosys_map 254 + * @dpitch: destination pitch in bytes 255 + * @sbuf8: source buffer, in monochrome format, 8 pixels per byte. 256 + * @spitch: source pitch in bytes 257 + * @height: height of the image to copy, in pixels 258 + * @width: width of the image to copy, in pixels 259 + * @fg_color: foreground color, in destination format 260 + * @bg_color: background color, in destination format 261 + * @pixel_width: pixel width in bytes. 
262 + * 263 + * This can be used to draw a font character, which is a monochrome image, to a 264 + * framebuffer in other supported format. 265 + */ 266 + static void drm_panic_blit(struct iosys_map *dmap, unsigned int dpitch, 267 + const u8 *sbuf8, unsigned int spitch, 268 + unsigned int height, unsigned int width, 269 + u32 fg_color, u32 bg_color, 270 + unsigned int pixel_width) 271 + { 272 + switch (pixel_width) { 273 + case 2: 274 + drm_panic_blit16(dmap, dpitch, sbuf8, spitch, 275 + height, width, fg_color, bg_color); 276 + break; 277 + case 3: 278 + drm_panic_blit24(dmap, dpitch, sbuf8, spitch, 279 + height, width, fg_color, bg_color); 280 + break; 281 + case 4: 282 + drm_panic_blit32(dmap, dpitch, sbuf8, spitch, 283 + height, width, fg_color, bg_color); 284 + break; 285 + default: 286 + WARN_ONCE(1, "Can't blit with pixel width %d\n", pixel_width); 287 + } 288 + } 289 + 290 + static void drm_panic_fill16(struct iosys_map *dmap, unsigned int dpitch, 291 + unsigned int height, unsigned int width, 292 + u16 color) 293 + { 294 + unsigned int y, x; 295 + 296 + for (y = 0; y < height; y++) 297 + for (x = 0; x < width; x++) 298 + iosys_map_wr(dmap, y * dpitch + x * sizeof(u16), u16, color); 299 + } 300 + 301 + static void drm_panic_fill24(struct iosys_map *dmap, unsigned int dpitch, 302 + unsigned int height, unsigned int width, 303 + u32 color) 304 + { 305 + unsigned int y, x; 306 + 307 + for (y = 0; y < height; y++) { 308 + for (x = 0; x < width; x++) { 309 + unsigned int off = y * dpitch + x * 3; 310 + 311 + /* write blue-green-red to output in little endianness */ 312 + iosys_map_wr(dmap, off, u8, (color & 0x000000FF) >> 0); 313 + iosys_map_wr(dmap, off + 1, u8, (color & 0x0000FF00) >> 8); 314 + iosys_map_wr(dmap, off + 2, u8, (color & 0x00FF0000) >> 16); 315 + } 316 + } 317 + } 318 + 319 + static void drm_panic_fill32(struct iosys_map *dmap, unsigned int dpitch, 320 + unsigned int height, unsigned int width, 321 + u32 color) 322 + { 323 + unsigned int y, x; 
324 + 325 + for (y = 0; y < height; y++) 326 + for (x = 0; x < width; x++) 327 + iosys_map_wr(dmap, y * dpitch + x * sizeof(u32), u32, color); 328 + } 329 + 330 + /* 331 + * drm_panic_fill - Fill a rectangle with a color 332 + * @dmap: destination iosys_map, pointing to the top left corner of the rectangle 333 + * @dpitch: destination pitch in bytes 334 + * @height: height of the rectangle, in pixels 335 + * @width: width of the rectangle, in pixels 336 + * @color: color to fill the rectangle. 337 + * @pixel_width: pixel width in bytes 338 + * 339 + * Fill a rectangle with a color, in a linear framebuffer. 340 + */ 341 + static void drm_panic_fill(struct iosys_map *dmap, unsigned int dpitch, 342 + unsigned int height, unsigned int width, 343 + u32 color, unsigned int pixel_width) 344 + { 345 + switch (pixel_width) { 346 + case 2: 347 + drm_panic_fill16(dmap, dpitch, height, width, color); 348 + break; 349 + case 3: 350 + drm_panic_fill24(dmap, dpitch, height, width, color); 351 + break; 352 + case 4: 353 + drm_panic_fill32(dmap, dpitch, height, width, color); 354 + break; 355 + default: 356 + WARN_ONCE(1, "Can't fill with pixel width %d\n", pixel_width); 357 + } 358 + } 359 + 360 + static const u8 *get_char_bitmap(const struct font_desc *font, char c, size_t font_pitch) 361 + { 362 + return font->data + (c * font->height) * font_pitch; 363 + } 364 + 365 + static unsigned int get_max_line_len(const struct drm_panic_line *lines, int len) 366 + { 367 + int i; 368 + unsigned int max = 0; 369 + 370 + for (i = 0; i < len; i++) 371 + max = max(lines[i].len, max); 372 + return max; 373 + } 374 + 375 + /* 376 + * Draw a text in a rectangle on a framebuffer. 
The text is truncated if it overflows the rectangle 377 + */ 378 + static void draw_txt_rectangle(struct drm_scanout_buffer *sb, 379 + const struct font_desc *font, 380 + const struct drm_panic_line *msg, 381 + unsigned int msg_lines, 382 + bool centered, 383 + struct drm_rect *clip, 384 + u32 fg_color, 385 + u32 bg_color) 386 + { 387 + int i, j; 388 + const u8 *src; 389 + size_t font_pitch = DIV_ROUND_UP(font->width, 8); 390 + struct iosys_map dst; 391 + unsigned int px_width = sb->format->cpp[0]; 392 + int left = 0; 393 + 394 + msg_lines = min(msg_lines, drm_rect_height(clip) / font->height); 395 + for (i = 0; i < msg_lines; i++) { 396 + size_t line_len = min(msg[i].len, drm_rect_width(clip) / font->width); 397 + 398 + if (centered) 399 + left = (drm_rect_width(clip) - (line_len * font->width)) / 2; 400 + 401 + dst = sb->map[0]; 402 + iosys_map_incr(&dst, (clip->y1 + i * font->height) * sb->pitch[0] + 403 + (clip->x1 + left) * px_width); 404 + for (j = 0; j < line_len; j++) { 405 + src = get_char_bitmap(font, msg[i].txt[j], font_pitch); 406 + drm_panic_blit(&dst, sb->pitch[0], src, font_pitch, 407 + font->height, font->width, 408 + fg_color, bg_color, px_width); 409 + iosys_map_incr(&dst, font->width * px_width); 410 + } 411 + } 412 + } 413 + 414 + /* 415 + * Draw the panic message at the center of the screen 416 + */ 417 + static void draw_panic_static(struct drm_scanout_buffer *sb) 418 + { 419 + size_t msg_lines = ARRAY_SIZE(panic_msg); 420 + size_t logo_lines = ARRAY_SIZE(logo); 421 + u32 fg_color = CONFIG_DRM_PANIC_FOREGROUND_COLOR; 422 + u32 bg_color = CONFIG_DRM_PANIC_BACKGROUND_COLOR; 423 + const struct font_desc *font = get_default_font(sb->width, sb->height, NULL, NULL); 424 + struct drm_rect r_logo, r_msg; 425 + 426 + if (!font) 427 + return; 428 + 429 + fg_color = convert_from_xrgb8888(fg_color, sb->format->format); 430 + bg_color = convert_from_xrgb8888(bg_color, sb->format->format); 431 + 432 + r_logo = DRM_RECT_INIT(0, 0, 433 + 
get_max_line_len(logo, logo_lines) * font->width, 434 + logo_lines * font->height); 435 + r_msg = DRM_RECT_INIT(0, 0, 436 + min(get_max_line_len(panic_msg, msg_lines) * font->width, sb->width), 437 + min(msg_lines * font->height, sb->height)); 438 + 439 + /* Center the panic message */ 440 + drm_rect_translate(&r_msg, (sb->width - r_msg.x2) / 2, (sb->height - r_msg.y2) / 2); 441 + 442 + /* Fill with the background color, and draw text on top */ 443 + drm_panic_fill(&sb->map[0], sb->pitch[0], sb->height, sb->width, 444 + bg_color, sb->format->cpp[0]); 445 + 446 + if ((r_msg.x1 >= drm_rect_width(&r_logo) || r_msg.y1 >= drm_rect_height(&r_logo)) && 447 + drm_rect_width(&r_logo) < sb->width && drm_rect_height(&r_logo) < sb->height) { 448 + draw_txt_rectangle(sb, font, logo, logo_lines, false, &r_logo, fg_color, bg_color); 449 + } 450 + draw_txt_rectangle(sb, font, panic_msg, msg_lines, true, &r_msg, fg_color, bg_color); 451 + } 452 + 453 + /* 454 + * drm_panic_is_format_supported() 455 + * @format: a fourcc color code 456 + * Returns: true if supported, false otherwise. 457 + * 458 + * Check if drm_panic will be able to use this color format. 
459 + */ 460 + static bool drm_panic_is_format_supported(const struct drm_format_info *format) 461 + { 462 + if (format->num_planes != 1) 463 + return false; 464 + return convert_from_xrgb8888(0xffffff, format->format) != 0; 465 + } 466 + 467 + static void draw_panic_plane(struct drm_plane *plane) 468 + { 469 + struct drm_scanout_buffer sb; 470 + int ret; 471 + unsigned long flags; 472 + 473 + if (!drm_panic_trylock(plane->dev, flags)) 474 + return; 475 + 476 + ret = plane->helper_private->get_scanout_buffer(plane, &sb); 477 + 478 + if (!ret && drm_panic_is_format_supported(sb.format)) { 479 + draw_panic_static(&sb); 480 + if (plane->helper_private->panic_flush) 481 + plane->helper_private->panic_flush(plane); 482 + } 483 + drm_panic_unlock(plane->dev, flags); 484 + } 485 + 486 + static struct drm_plane *to_drm_plane(struct kmsg_dumper *kd) 487 + { 488 + return container_of(kd, struct drm_plane, kmsg_panic); 489 + } 490 + 491 + static void drm_panic(struct kmsg_dumper *dumper, enum kmsg_dump_reason reason) 492 + { 493 + struct drm_plane *plane = to_drm_plane(dumper); 494 + 495 + if (reason == KMSG_DUMP_PANIC) 496 + draw_panic_plane(plane); 497 + } 498 + 499 + 500 + /* 501 + * DEBUG FS, This is currently unsafe. 502 + * Create one file per plane, so it's possible to debug one plane at a time. 503 + * TODO: It would be better to emulate an NMI context. 
504 + */ 505 + #ifdef CONFIG_DRM_PANIC_DEBUG 506 + #include <linux/debugfs.h> 507 + 508 + static ssize_t debugfs_trigger_write(struct file *file, const char __user *user_buf, 509 + size_t count, loff_t *ppos) 510 + { 511 + bool run; 512 + 513 + if (kstrtobool_from_user(user_buf, count, &run) == 0 && run) { 514 + struct drm_plane *plane = file->private_data; 515 + 516 + draw_panic_plane(plane); 517 + } 518 + return count; 519 + } 520 + 521 + static const struct file_operations dbg_drm_panic_ops = { 522 + .owner = THIS_MODULE, 523 + .write = debugfs_trigger_write, 524 + .open = simple_open, 525 + }; 526 + 527 + static void debugfs_register_plane(struct drm_plane *plane, int index) 528 + { 529 + char fname[32]; 530 + 531 + snprintf(fname, 32, "drm_panic_plane_%d", index); 532 + debugfs_create_file(fname, 0200, plane->dev->debugfs_root, 533 + plane, &dbg_drm_panic_ops); 534 + } 535 + #else 536 + static void debugfs_register_plane(struct drm_plane *plane, int index) {} 537 + #endif /* CONFIG_DRM_PANIC_DEBUG */ 538 + 539 + /** 540 + * drm_panic_register() - Initialize DRM panic for a device 541 + * @dev: the drm device on which the panic screen will be displayed. 
542 + */ 543 + void drm_panic_register(struct drm_device *dev) 544 + { 545 + struct drm_plane *plane; 546 + int registered_plane = 0; 547 + 548 + if (!dev->mode_config.num_total_plane) 549 + return; 550 + 551 + drm_for_each_plane(plane, dev) { 552 + if (!plane->helper_private || !plane->helper_private->get_scanout_buffer) 553 + continue; 554 + plane->kmsg_panic.dump = drm_panic; 555 + plane->kmsg_panic.max_reason = KMSG_DUMP_PANIC; 556 + if (kmsg_dump_register(&plane->kmsg_panic)) 557 + drm_warn(dev, "Failed to register panic handler\n"); 558 + else { 559 + debugfs_register_plane(plane, registered_plane); 560 + registered_plane++; 561 + } 562 + } 563 + if (registered_plane) 564 + drm_info(dev, "Registered %d planes with drm panic\n", registered_plane); 565 + } 566 + EXPORT_SYMBOL(drm_panic_register); 567 + 568 + /** 569 + * drm_panic_unregister() 570 + * @dev: the drm device previously registered. 571 + */ 572 + void drm_panic_unregister(struct drm_device *dev) 573 + { 574 + struct drm_plane *plane; 575 + 576 + if (!dev->mode_config.num_total_plane) 577 + return; 578 + 579 + drm_for_each_plane(plane, dev) { 580 + if (!plane->helper_private || !plane->helper_private->get_scanout_buffer) 581 + continue; 582 + kmsg_dump_unregister(&plane->kmsg_panic); 583 + } 584 + } 585 + EXPORT_SYMBOL(drm_panic_unregister);
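The panic screen above centers the message by translating a rectangle anchored at (0,0) via drm_rect_translate(). A minimal userspace sketch of that arithmetic — `struct rect` and `center_rect` are illustrative stand-ins, not kernel API:

```c
#include <assert.h>

/* Illustrative stand-in for struct drm_rect; not the kernel type. */
struct rect { int x1, y1, x2, y2; };

/*
 * Same arithmetic draw_panic_static() feeds to drm_rect_translate():
 * a rect anchored at (0,0) is shifted so it sits centered in a
 * w x h screen.
 */
static void center_rect(struct rect *r, int w, int h)
{
	int dx = (w - r->x2) / 2;
	int dy = (h - r->y2) / 2;

	r->x1 += dx; r->x2 += dx;
	r->y1 += dy; r->y2 += dy;
}
```

Because the rect starts at the origin, x2/y2 are its width/height, so the shift is simply half the leftover space on each axis.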
+56
drivers/gpu/drm/drm_plane.c
··· 140 140 * DRM_FORMAT_MOD_LINEAR. Before linux kernel release v5.1 there have been 141 141 * various bugs in this area with inconsistencies between the capability 142 142 * flag and per-plane properties. 143 + * 144 + * SIZE_HINTS: 145 + * Blob property which contains the set of recommended plane sizes 146 + * which can be used for simple "cursor like" use cases (eg. no scaling). 147 + * Using these hints frees userspace from extensive probing of 148 + * supported plane sizes through atomic/setcursor ioctls. 149 + * 150 + * The blob contains an array of struct drm_plane_size_hint, in 151 + * order of preference. For optimal usage userspace should pick 152 + * the first size that satisfies its own requirements. 153 + * 154 + * Drivers should only attach this property to planes that 155 + * support a very limited set of sizes. 156 + * 157 + * Note that property value 0 (ie. no blob) is reserved for potential 158 + * future use. Current userspace is expected to ignore the property 159 + * if the value is 0, and fall back to some other means (eg. 160 + * &DRM_CAP_CURSOR_WIDTH and &DRM_CAP_CURSOR_HEIGHT) to determine 161 + * the appropriate plane size to use. 143 162 */ 144 163 145 164 static unsigned int drm_num_planes(struct drm_device *dev) ··· 1748 1729 return 0; 1749 1730 } 1750 1731 EXPORT_SYMBOL(drm_plane_create_scaling_filter_property); 1732 + 1733 + /** 1734 + * drm_plane_add_size_hints_property - create a size hints property 1735 + * 1736 + * @plane: drm plane 1737 + * @hints: size hints 1738 + * @num_hints: number of size hints 1739 + * 1740 + * Create a size hints property for the plane.
1741 + * 1742 + * RETURNS: 1743 + * Zero for success or -errno 1744 + */ 1745 + int drm_plane_add_size_hints_property(struct drm_plane *plane, 1746 + const struct drm_plane_size_hint *hints, 1747 + int num_hints) 1748 + { 1749 + struct drm_device *dev = plane->dev; 1750 + struct drm_mode_config *config = &dev->mode_config; 1751 + struct drm_property_blob *blob; 1752 + 1753 + /* extending to other plane types needs actual thought */ 1754 + if (drm_WARN_ON(dev, plane->type != DRM_PLANE_TYPE_CURSOR)) 1755 + return -EINVAL; 1756 + 1757 + blob = drm_property_create_blob(dev, 1758 + array_size(sizeof(hints[0]), num_hints), 1759 + hints); 1760 + if (IS_ERR(blob)) 1761 + return PTR_ERR(blob); 1762 + 1763 + drm_object_attach_property(&plane->base, config->size_hints_property, 1764 + blob->base.id); 1765 + 1766 + return 0; 1767 + } 1768 + EXPORT_SYMBOL(drm_plane_add_size_hints_property);
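Per the SIZE_HINTS documentation above, userspace walks the blob in order of preference and takes the first hint that satisfies its requirements. A hedged userspace sketch — the struct mirrors the UAPI layout, and `pick_size_hint` is a hypothetical helper, not a library call:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors the UAPI blob layout; the real struct lives in drm_mode.h. */
struct drm_plane_size_hint {
	uint16_t width;
	uint16_t height;
};

/*
 * Take the first hint that fits. -1 means no hint fits and the caller
 * should fall back to some other means, e.g. DRM_CAP_CURSOR_WIDTH and
 * DRM_CAP_CURSOR_HEIGHT, as the property documentation suggests.
 */
static int pick_size_hint(const struct drm_plane_size_hint *hints,
			  size_t num_hints, int need_w, int need_h)
{
	for (size_t i = 0; i < num_hints; i++)
		if (hints[i].width >= need_w && hints[i].height >= need_h)
			return (int)i;
	return -1;
}
```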
+19 -20
drivers/gpu/drm/drm_probe_helper.c
··· 567 567 568 568 drm_modeset_acquire_init(&ctx, 0); 569 569 570 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id, 571 - connector->name); 570 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s]\n", connector->base.id, 571 + connector->name); 572 572 573 573 retry: 574 574 ret = drm_modeset_lock(&dev->mode_config.connection_mutex, &ctx); ··· 611 611 * check here, and if anything changed start the hotplug code. 612 612 */ 613 613 if (old_status != connector->status) { 614 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %s to %s\n", 615 - connector->base.id, 616 - connector->name, 617 - drm_get_connector_status_name(old_status), 618 - drm_get_connector_status_name(connector->status)); 614 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] status updated from %s to %s\n", 615 + connector->base.id, connector->name, 616 + drm_get_connector_status_name(old_status), 617 + drm_get_connector_status_name(connector->status)); 619 618 620 619 /* 621 620 * The hotplug event code might call into the fb ··· 637 638 drm_kms_helper_poll_enable(dev); 638 639 639 640 if (connector->status == connector_status_disconnected) { 640 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] disconnected\n", 641 - connector->base.id, connector->name); 641 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] disconnected\n", 642 + connector->base.id, connector->name); 642 643 drm_connector_update_edid_property(connector, NULL); 643 644 drm_mode_prune_invalid(dev, &connector->modes, false); 644 645 goto exit; ··· 696 697 697 698 drm_mode_sort(&connector->modes); 698 699 699 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] probed modes :\n", connector->base.id, 700 - connector->name); 700 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] probed modes:\n", 701 + connector->base.id, connector->name); 702 + 701 703 list_for_each_entry(mode, &connector->modes, head) { 702 704 drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V); 703 - drm_mode_debug_printmodeline(mode); 705 + drm_dbg_kms(dev, "Probed mode: " DRM_MODE_FMT "\n", 706 + DRM_MODE_ARG(mode)); 704 707 } 705 
708 706 709 return count; ··· 835 834 old = drm_get_connector_status_name(old_status); 836 835 new = drm_get_connector_status_name(connector->status); 837 836 838 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] " 839 - "status updated from %s to %s\n", 840 - connector->base.id, 841 - connector->name, 842 - old, new); 843 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] epoch counter %llu -> %llu\n", 844 - connector->base.id, connector->name, 845 - old_epoch_counter, connector->epoch_counter); 837 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] status updated from %s to %s\n", 838 + connector->base.id, connector->name, 839 + old, new); 840 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] epoch counter %llu -> %llu\n", 841 + connector->base.id, connector->name, 842 + old_epoch_counter, connector->epoch_counter); 846 843 847 844 changed = true; 848 845 }
+10 -10
drivers/gpu/drm/drm_sysfs.c
··· 209 209 ret = -EINVAL; 210 210 211 211 if (old_force != connector->force || !connector->force) { 212 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] force updated from %d to %d or reprobing\n", 213 - connector->base.id, 214 - connector->name, 215 - old_force, connector->force); 212 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] force updated from %d to %d or reprobing\n", 213 + connector->base.id, connector->name, 214 + old_force, connector->force); 216 215 217 216 connector->funcs->fill_modes(connector, 218 217 dev->mode_config.max_width, ··· 382 383 if (r) 383 384 goto err_free; 384 385 385 - DRM_DEBUG("adding \"%s\" to sysfs\n", 386 - connector->name); 386 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] adding connector to sysfs\n", 387 + connector->base.id, connector->name); 387 388 388 389 r = device_add(kdev); 389 390 if (r) { ··· 429 430 if (dev_fwnode(connector->kdev)) 430 431 component_del(connector->kdev, &typec_connector_ops); 431 432 432 - DRM_DEBUG("removing \"%s\" from sysfs\n", 433 - connector->name); 433 + drm_dbg_kms(connector->dev, 434 + "[CONNECTOR:%d:%s] removing connector from sysfs\n", 435 + connector->base.id, connector->name); 434 436 435 437 device_unregister(connector->kdev); 436 438 connector->kdev = NULL; ··· 442 442 char *event_string = "LEASE=1"; 443 443 char *envp[] = { event_string, NULL }; 444 444 445 - DRM_DEBUG("generating lease event\n"); 445 + drm_dbg_lease(dev, "generating lease event\n"); 446 446 447 447 kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp); 448 448 } ··· 463 463 char *event_string = "HOTPLUG=1"; 464 464 char *envp[] = { event_string, NULL }; 465 465 466 - DRM_DEBUG("generating hotplug event\n"); 466 + drm_dbg_kms(dev, "generating hotplug event\n"); 467 467 468 468 kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp); 469 469 }
+34 -24
drivers/gpu/drm/drm_vblank.c
··· 166 166 MODULE_PARM_DESC(vblankoffdelay, "Delay until vblank irq auto-disable [msecs] (0: never disable, <0: disable immediately)"); 167 167 MODULE_PARM_DESC(timestamp_precision_usec, "Max. error on timestamps [usecs]"); 168 168 169 + static struct drm_vblank_crtc * 170 + drm_vblank_crtc(struct drm_device *dev, unsigned int pipe) 171 + { 172 + return &dev->vblank[pipe]; 173 + } 174 + 175 + struct drm_vblank_crtc * 176 + drm_crtc_vblank_crtc(struct drm_crtc *crtc) 177 + { 178 + return drm_vblank_crtc(crtc->dev, drm_crtc_index(crtc)); 179 + } 180 + EXPORT_SYMBOL(drm_crtc_vblank_crtc); 181 + 169 182 static void store_vblank(struct drm_device *dev, unsigned int pipe, 170 183 u32 vblank_count_inc, 171 184 ktime_t t_vblank, u32 last) 172 185 { 173 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 186 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 174 187 175 188 assert_spin_locked(&dev->vblank_time_lock); 176 189 ··· 197 184 198 185 static u32 drm_max_vblank_count(struct drm_device *dev, unsigned int pipe) 199 186 { 200 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 187 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 201 188 202 189 return vblank->max_vblank_count ?: dev->max_vblank_count; 203 190 } ··· 286 273 static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe, 287 274 bool in_vblank_irq) 288 275 { 289 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 276 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 290 277 u32 cur_vblank, diff; 291 278 bool rc; 292 279 ktime_t t_vblank; ··· 377 364 378 365 u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe) 379 366 { 380 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 367 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 381 368 u64 count; 382 369 383 370 if (drm_WARN_ON(dev, pipe >= dev->num_crtcs)) ··· 451 438 */ 452 439 void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe) 453 440 { 
454 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 441 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 455 442 unsigned long irqflags; 456 443 457 444 assert_spin_locked(&dev->vbl_lock); ··· 613 600 { 614 601 struct drm_device *dev = crtc->dev; 615 602 unsigned int pipe = drm_crtc_index(crtc); 616 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 603 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 617 604 int linedur_ns = 0, framedur_ns = 0; 618 605 int dotclock = mode->crtc_clock; 619 606 ··· 943 930 static u64 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe, 944 931 ktime_t *vblanktime) 945 932 { 946 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 933 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 947 934 u64 vblank_count; 948 935 unsigned int seq; 949 936 ··· 998 985 */ 999 986 int drm_crtc_next_vblank_start(struct drm_crtc *crtc, ktime_t *vblanktime) 1000 987 { 1001 - unsigned int pipe = drm_crtc_index(crtc); 1002 988 struct drm_vblank_crtc *vblank; 1003 989 struct drm_display_mode *mode; 1004 990 u64 vblank_start; ··· 1005 993 if (!drm_dev_has_vblank(crtc->dev)) 1006 994 return -EINVAL; 1007 995 1008 - vblank = &crtc->dev->vblank[pipe]; 996 + vblank = drm_crtc_vblank_crtc(crtc); 1009 997 mode = &vblank->hwmode; 1010 998 1011 999 if (!vblank->framedur_ns || !vblank->linedur_ns) ··· 1159 1147 1160 1148 static int drm_vblank_enable(struct drm_device *dev, unsigned int pipe) 1161 1149 { 1162 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1150 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1163 1151 int ret = 0; 1164 1152 1165 1153 assert_spin_locked(&dev->vbl_lock); ··· 1197 1185 1198 1186 int drm_vblank_get(struct drm_device *dev, unsigned int pipe) 1199 1187 { 1200 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1188 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1201 1189 unsigned long irqflags; 1202 1190 int ret = 0; 1203 1191 
··· 1240 1228 1241 1229 void drm_vblank_put(struct drm_device *dev, unsigned int pipe) 1242 1230 { 1243 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1231 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1244 1232 1245 1233 if (drm_WARN_ON(dev, pipe >= dev->num_crtcs)) 1246 1234 return; ··· 1286 1274 */ 1287 1275 void drm_wait_one_vblank(struct drm_device *dev, unsigned int pipe) 1288 1276 { 1289 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1277 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1290 1278 int ret; 1291 1279 u64 last; 1292 1280 ··· 1339 1327 { 1340 1328 struct drm_device *dev = crtc->dev; 1341 1329 unsigned int pipe = drm_crtc_index(crtc); 1342 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1330 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 1343 1331 struct drm_pending_vblank_event *e, *t; 1344 1332 ktime_t now; 1345 1333 u64 seq; ··· 1417 1405 void drm_crtc_vblank_reset(struct drm_crtc *crtc) 1418 1406 { 1419 1407 struct drm_device *dev = crtc->dev; 1420 - unsigned int pipe = drm_crtc_index(crtc); 1421 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1408 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 1422 1409 1423 1410 spin_lock_irq(&dev->vbl_lock); 1424 1411 /* ··· 1456 1445 u32 max_vblank_count) 1457 1446 { 1458 1447 struct drm_device *dev = crtc->dev; 1459 - unsigned int pipe = drm_crtc_index(crtc); 1460 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1448 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 1461 1449 1462 1450 drm_WARN_ON(dev, dev->max_vblank_count); 1463 1451 drm_WARN_ON(dev, !READ_ONCE(vblank->inmodeset)); ··· 1479 1469 { 1480 1470 struct drm_device *dev = crtc->dev; 1481 1471 unsigned int pipe = drm_crtc_index(crtc); 1482 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1472 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 1483 1473 1484 1474 if (drm_WARN_ON(dev, pipe >= dev->num_crtcs)) 1485 
1475 return; ··· 1522 1512 assert_spin_locked(&dev->vbl_lock); 1523 1513 assert_spin_locked(&dev->vblank_time_lock); 1524 1514 1525 - vblank = &dev->vblank[pipe]; 1515 + vblank = drm_vblank_crtc(dev, pipe); 1526 1516 drm_WARN_ONCE(dev, 1527 1517 drm_debug_enabled(DRM_UT_VBL) && !vblank->framedur_ns, 1528 1518 "Cannot compute missed vblanks without frame duration\n"); ··· 1574 1564 union drm_wait_vblank *vblwait, 1575 1565 struct drm_file *file_priv) 1576 1566 { 1577 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1567 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1578 1568 struct drm_pending_vblank_event *e; 1579 1569 ktime_t now; 1580 1570 u64 seq; ··· 1882 1872 */ 1883 1873 bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe) 1884 1874 { 1885 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 1875 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1886 1876 unsigned long irqflags; 1887 1877 bool disable_irq; 1888 1878 ··· 1991 1981 1992 1982 pipe = drm_crtc_index(crtc); 1993 1983 1994 - vblank = &dev->vblank[pipe]; 1984 + vblank = drm_crtc_vblank_crtc(crtc); 1995 1985 vblank_enabled = dev->vblank_disable_immediate && READ_ONCE(vblank->enabled); 1996 1986 1997 1987 if (!vblank_enabled) { ··· 2056 2046 2057 2047 pipe = drm_crtc_index(crtc); 2058 2048 2059 - vblank = &dev->vblank[pipe]; 2049 + vblank = drm_crtc_vblank_crtc(crtc); 2060 2050 2061 2051 e = kzalloc(sizeof(*e), GFP_KERNEL); 2062 2052 if (e == NULL)
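The drm_vblank.c changes funnel every open-coded `&dev->vblank[pipe]` lookup through one accessor, so the storage of per-CRTC vblank state can later change in a single place. A toy sketch of the pattern — the types here are illustrative stand-ins for struct drm_device and struct drm_vblank_crtc:

```c
#include <assert.h>

/* Toy stand-ins; not the kernel types. */
struct vbl { unsigned long count; };
struct dev { struct vbl vblank[4]; };

/* One accessor instead of open-coded &d->vblank[pipe] at every call site. */
static struct vbl *dev_vblank(struct dev *d, unsigned int pipe)
{
	return &d->vblank[pipe];
}
```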
+1 -1
drivers/gpu/drm/drm_vblank_work.c
··· 245 245 { 246 246 kthread_init_work(&work->base, func); 247 247 INIT_LIST_HEAD(&work->node); 248 - work->vblank = &crtc->dev->vblank[drm_crtc_index(crtc)]; 248 + work->vblank = drm_crtc_vblank_crtc(crtc); 249 249 } 250 250 EXPORT_SYMBOL(drm_vblank_work_init); 251 251
+21 -28
drivers/gpu/drm/i915/display/intel_bios.c
··· 595 595 return (const void *)data + ptrs->ptr[index].fp_timing.offset; 596 596 } 597 597 598 - static const struct lvds_pnp_id * 598 + static const struct drm_edid_product_id * 599 599 get_lvds_pnp_id(const struct bdb_lvds_lfp_data *data, 600 600 const struct bdb_lvds_lfp_data_ptrs *ptrs, 601 601 int index) 602 602 { 603 + /* These two are supposed to have the same layout in memory. */ 604 + BUILD_BUG_ON(sizeof(struct lvds_pnp_id) != sizeof(struct drm_edid_product_id)); 605 + 603 606 return (const void *)data + ptrs->ptr[index].panel_pnp_id.offset; 604 607 } 605 608 ··· 614 611 return (const void *)data + ptrs->panel_name.offset; 615 612 else 616 613 return NULL; 617 - } 618 - 619 - static void dump_pnp_id(struct drm_i915_private *i915, 620 - const struct lvds_pnp_id *pnp_id, 621 - const char *name) 622 - { 623 - u16 mfg_name = be16_to_cpu((__force __be16)pnp_id->mfg_name); 624 - char vend[4]; 625 - 626 - drm_dbg_kms(&i915->drm, "%s PNPID mfg: %s (0x%x), prod: %u, serial: %u, week: %d, year: %d\n", 627 - name, drm_edid_decode_mfg_id(mfg_name, vend), 628 - pnp_id->mfg_name, pnp_id->product_code, pnp_id->serial, 629 - pnp_id->mfg_week, pnp_id->mfg_year + 1990); 630 614 } 631 615 632 616 static int opregion_get_panel_type(struct drm_i915_private *i915, ··· 654 664 { 655 665 const struct bdb_lvds_lfp_data *data; 656 666 const struct bdb_lvds_lfp_data_ptrs *ptrs; 657 - const struct lvds_pnp_id *edid_id; 658 - struct lvds_pnp_id edid_id_nodate; 659 - const struct edid *edid = drm_edid_raw(drm_edid); /* FIXME */ 667 + struct drm_edid_product_id product_id, product_id_nodate; 668 + struct drm_printer p; 660 669 int i, best = -1; 661 670 662 - if (!edid) 671 + if (!drm_edid) 663 672 return -1; 664 673 665 - edid_id = (const void *)&edid->mfg_id[0]; 674 + drm_edid_get_product_id(drm_edid, &product_id); 666 675 667 - edid_id_nodate = *edid_id; 668 - edid_id_nodate.mfg_week = 0; 669 - edid_id_nodate.mfg_year = 0; 676 + product_id_nodate = product_id; 677 + 
product_id_nodate.week_of_manufacture = 0; 678 + product_id_nodate.year_of_manufacture = 0; 670 679 671 - dump_pnp_id(i915, edid_id, "EDID"); 680 + p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, "EDID"); 681 + drm_edid_print_product_id(&p, &product_id, true); 672 682 673 683 ptrs = bdb_find_section(i915, BDB_LVDS_LFP_DATA_PTRS); 674 684 if (!ptrs) ··· 679 689 return -1; 680 690 681 691 for (i = 0; i < 16; i++) { 682 - const struct lvds_pnp_id *vbt_id = 692 + const struct drm_edid_product_id *vbt_id = 683 693 get_lvds_pnp_id(data, ptrs, i); 684 694 685 695 /* full match? */ 686 - if (!memcmp(vbt_id, edid_id, sizeof(*vbt_id))) 696 + if (!memcmp(vbt_id, &product_id, sizeof(*vbt_id))) 687 697 return i; 688 698 689 699 /* ··· 691 701 * and the VBT entry does not specify a date. 692 702 */ 693 703 if (best < 0 && 694 - !memcmp(vbt_id, &edid_id_nodate, sizeof(*vbt_id))) 704 + !memcmp(vbt_id, &product_id_nodate, sizeof(*vbt_id))) 695 705 best = i; 696 706 } 697 707 ··· 877 887 const struct bdb_lvds_lfp_data *data; 878 888 const struct bdb_lvds_lfp_data_tail *tail; 879 889 const struct bdb_lvds_lfp_data_ptrs *ptrs; 880 - const struct lvds_pnp_id *pnp_id; 890 + const struct drm_edid_product_id *pnp_id; 891 + struct drm_printer p; 881 892 int panel_type = panel->vbt.panel_type; 882 893 883 894 ptrs = bdb_find_section(i915, BDB_LVDS_LFP_DATA_PTRS); ··· 893 902 parse_lfp_panel_dtd(i915, panel, data, ptrs); 894 903 895 904 pnp_id = get_lvds_pnp_id(data, ptrs, panel_type); 896 - dump_pnp_id(i915, pnp_id, "Panel"); 905 + 906 + p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, "Panel"); 907 + drm_edid_print_product_id(&p, pnp_id, false); 897 908 898 909 tail = get_lfp_data_tail(data, ptrs); 899 910 if (!tail)
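The panel-type lookup above prefers a full EDID/VBT product-id match but also accepts a VBT entry whose date fields are zero. A userspace sketch of that policy — the kernel memcmp()s two identically laid out structs, while this sketch compares fields to avoid padding pitfalls; `struct product_id` and `match_panel` are illustrative stand-ins:

```c
#include <assert.h>

/* Field names follow struct drm_edid_product_id; illustrative type. */
struct product_id {
	unsigned short manufacturer_name;
	unsigned short product_code;
	unsigned int serial_number;
	unsigned char week_of_manufacture;
	unsigned char year_of_manufacture;
};

static int ids_equal(const struct product_id *a, const struct product_id *b)
{
	return a->manufacturer_name == b->manufacturer_name &&
	       a->product_code == b->product_code &&
	       a->serial_number == b->serial_number &&
	       a->week_of_manufacture == b->week_of_manufacture &&
	       a->year_of_manufacture == b->year_of_manufacture;
}

/*
 * A full match wins immediately; otherwise the first VBT entry that
 * matches with the date fields zeroed is remembered as a fallback.
 */
static int match_panel(const struct product_id *vbt, int n,
		       const struct product_id *edid)
{
	struct product_id nodate = *edid;
	int best = -1;

	nodate.week_of_manufacture = 0;
	nodate.year_of_manufacture = 0;

	for (int i = 0; i < n; i++) {
		if (ids_equal(&vbt[i], edid))
			return i;
		if (best < 0 && ids_equal(&vbt[i], &nodate))
			best = i;
	}
	return best;
}
```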
+24
drivers/gpu/drm/i915/display/intel_cursor.c
··· 843 843 .format_mod_supported = intel_cursor_format_mod_supported, 844 844 }; 845 845 846 + static void intel_cursor_add_size_hints_property(struct intel_plane *plane) 847 + { 848 + struct drm_i915_private *i915 = to_i915(plane->base.dev); 849 + const struct drm_mode_config *config = &i915->drm.mode_config; 850 + struct drm_plane_size_hint hints[4]; 851 + int size, max_size, num_hints = 0; 852 + 853 + max_size = min(config->cursor_width, config->cursor_height); 854 + 855 + /* for simplicity only enumerate the supported square+POT sizes */ 856 + for (size = 64; size <= max_size; size *= 2) { 857 + if (drm_WARN_ON(&i915->drm, num_hints >= ARRAY_SIZE(hints))) 858 + break; 859 + 860 + hints[num_hints].width = size; 861 + hints[num_hints].height = size; 862 + num_hints++; 863 + } 864 + 865 + drm_plane_add_size_hints_property(&plane->base, hints, num_hints); 866 + } 867 + 846 868 struct intel_plane * 847 869 intel_cursor_plane_create(struct drm_i915_private *dev_priv, 848 870 enum pipe pipe) ··· 922 900 DRM_MODE_ROTATE_0, 923 901 DRM_MODE_ROTATE_0 | 924 902 DRM_MODE_ROTATE_180); 903 + 904 + intel_cursor_add_size_hints_property(cursor); 925 905 926 906 zpos = DISPLAY_RUNTIME_INFO(dev_priv)->num_sprites[pipe] + 1; 927 907 drm_plane_create_zpos_immutable_property(&cursor->base, zpos);
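intel_cursor_add_size_hints_property() above enumerates only square power-of-two sizes from 64 up to the smaller of the maximum cursor width and height. The same loop as a standalone sketch — `enum_square_pot` is a hypothetical helper returning how many sizes were written:

```c
#include <assert.h>

/*
 * Square power-of-two sizes from 64 up to min(max_w, max_h), capped
 * at the caller's array size, mirroring the loop in the hunk above.
 */
static int enum_square_pot(int max_w, int max_h, int *out, int cap)
{
	int max_size = max_w < max_h ? max_w : max_h;
	int n = 0;

	for (int size = 64; size <= max_size && n < cap; size *= 2)
		out[n++] = size;
	return n;
}
```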
+11 -1
drivers/gpu/drm/imx/ipuv3/ipuv3-plane.c
··· 773 773 .atomic_update = ipu_plane_atomic_update, 774 774 }; 775 775 776 + static const struct drm_plane_helper_funcs ipu_primary_plane_helper_funcs = { 777 + .atomic_check = ipu_plane_atomic_check, 778 + .atomic_disable = ipu_plane_atomic_disable, 779 + .atomic_update = ipu_plane_atomic_update, 780 + .get_scanout_buffer = drm_fb_dma_get_scanout_buffer, 781 + }; 782 + 776 783 bool ipu_plane_atomic_update_pending(struct drm_plane *plane) 777 784 { 778 785 struct ipu_plane *ipu_plane = to_ipu_plane(plane); ··· 923 916 ipu_plane->dma = dma; 924 917 ipu_plane->dp_flow = dp; 925 918 926 - drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs); 919 + if (type == DRM_PLANE_TYPE_PRIMARY) 920 + drm_plane_helper_add(&ipu_plane->base, &ipu_primary_plane_helper_funcs); 921 + else 922 + drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs); 927 923 928 924 if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG) 929 925 ret = drm_plane_create_zpos_property(&ipu_plane->base, zpos, 0,
+12
drivers/gpu/drm/lima/lima_bcast.c
··· 43 43 44 44 } 45 45 46 + int lima_bcast_mask_irq(struct lima_ip *ip) 47 + { 48 + bcast_write(LIMA_BCAST_BROADCAST_MASK, 0); 49 + bcast_write(LIMA_BCAST_INTERRUPT_MASK, 0); 50 + return 0; 51 + } 52 + 53 + int lima_bcast_reset(struct lima_ip *ip) 54 + { 55 + return lima_bcast_hw_init(ip); 56 + } 57 + 46 58 int lima_bcast_init(struct lima_ip *ip) 47 59 { 48 60 int i;
+3
drivers/gpu/drm/lima/lima_bcast.h
··· 13 13 14 14 void lima_bcast_enable(struct lima_device *dev, int num_pp); 15 15 16 + int lima_bcast_mask_irq(struct lima_ip *ip); 17 + int lima_bcast_reset(struct lima_ip *ip); 18 + 16 19 #endif
+18 -3
drivers/gpu/drm/lima/lima_drv.c
··· 371 371 { 372 372 struct lima_device *ldev; 373 373 struct drm_device *ddev; 374 + const struct lima_compatible *comp; 374 375 int err; 375 376 376 377 err = lima_sched_slab_init(); ··· 385 384 } 386 385 387 386 ldev->dev = &pdev->dev; 388 - ldev->id = (enum lima_gpu_id)of_device_get_match_data(&pdev->dev); 387 + comp = of_device_get_match_data(&pdev->dev); 388 + if (!comp) { 389 + err = -ENODEV; 390 + goto err_out0; 391 + } 392 + 393 + ldev->id = comp->id; 389 394 390 395 platform_set_drvdata(pdev, ldev); 391 396 ··· 466 459 lima_sched_slab_fini(); 467 460 } 468 461 462 + static const struct lima_compatible lima_mali400_data = { 463 + .id = lima_gpu_mali400, 464 + }; 465 + 466 + static const struct lima_compatible lima_mali450_data = { 467 + .id = lima_gpu_mali450, 468 + }; 469 + 469 470 static const struct of_device_id dt_match[] = { 470 - { .compatible = "arm,mali-400", .data = (void *)lima_gpu_mali400 }, 471 - { .compatible = "arm,mali-450", .data = (void *)lima_gpu_mali450 }, 471 + { .compatible = "arm,mali-400", .data = &lima_mali400_data }, 472 + { .compatible = "arm,mali-450", .data = &lima_mali450_data }, 472 473 {} 473 474 }; 474 475 MODULE_DEVICE_TABLE(of, dt_match);
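The lima_drv.c change stops casting an enum through the OF match-data pointer and points it at a real struct instead, which also makes the missing-data case explicit. A userspace sketch of the pattern — the types and `get_match_data` stand in for struct lima_compatible and of_device_get_match_data():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative mirror of struct lima_compatible and the match table. */
enum gpu_id { gpu_mali400, gpu_mali450 };

struct compatible { enum gpu_id id; };

static const struct compatible mali400_data = { .id = gpu_mali400 };
static const struct compatible mali450_data = { .id = gpu_mali450 };

struct match { const char *compatible; const void *data; };

static const struct match dt_match[] = {
	{ "arm,mali-400", &mali400_data },
	{ "arm,mali-450", &mali450_data },
};

/* Stand-in for of_device_get_match_data(): NULL on no match, which
 * the probe path above now rejects with -ENODEV. */
static const void *get_match_data(const char *compat)
{
	for (size_t i = 0; i < sizeof(dt_match) / sizeof(dt_match[0]); i++)
		if (!strcmp(dt_match[i].compatible, compat))
			return dt_match[i].data;
	return NULL;
}
```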
+5
drivers/gpu/drm/lima/lima_drv.h
··· 7 7 #include <drm/drm_file.h> 8 8 9 9 #include "lima_ctx.h" 10 + #include "lima_device.h" 10 11 11 12 extern int lima_sched_timeout_ms; 12 13 extern uint lima_heap_init_nr_pages; ··· 38 37 u32 out_sync; 39 38 40 39 struct lima_sched_task *task; 40 + }; 41 + 42 + struct lima_compatible { 43 + enum lima_gpu_id id; 41 44 }; 42 45 43 46 static inline struct lima_drm_priv *
+10
drivers/gpu/drm/lima/lima_gp.c
··· 233 233 lima_sched_pipe_task_done(pipe); 234 234 } 235 235 236 + static void lima_gp_task_mask_irq(struct lima_sched_pipe *pipe) 237 + { 238 + struct lima_ip *ip = pipe->processor[0]; 239 + 240 + gp_write(LIMA_GP_INT_MASK, 0); 241 + } 242 + 236 243 static int lima_gp_task_recover(struct lima_sched_pipe *pipe) 237 244 { 238 245 struct lima_ip *ip = pipe->processor[0]; ··· 345 338 346 339 void lima_gp_fini(struct lima_ip *ip) 347 340 { 341 + struct lima_device *dev = ip->dev; 348 342 343 + devm_free_irq(dev->dev, ip->irq, ip); 349 344 } 350 345 351 346 int lima_gp_pipe_init(struct lima_device *dev) ··· 374 365 pipe->task_error = lima_gp_task_error; 375 366 pipe->task_mmu_error = lima_gp_task_mmu_error; 376 367 pipe->task_recover = lima_gp_task_recover; 368 + pipe->task_mask_irq = lima_gp_task_mask_irq; 377 369 378 370 return 0; 379 371 }
+5
drivers/gpu/drm/lima/lima_mmu.c
··· 118 118 119 119 void lima_mmu_fini(struct lima_ip *ip) 120 120 { 121 + struct lima_device *dev = ip->dev; 121 122 123 + if (ip->id == lima_ip_ppmmu_bcast) 124 + return; 125 + 126 + devm_free_irq(dev->dev, ip->irq, ip); 122 127 } 123 128 124 129 void lima_mmu_flush_tlb(struct lima_ip *ip)
+22
drivers/gpu/drm/lima/lima_pp.c
··· 286 286 287 287 void lima_pp_fini(struct lima_ip *ip) 288 288 { 289 + struct lima_device *dev = ip->dev; 289 290 291 + devm_free_irq(dev->dev, ip->irq, ip); 290 292 } 291 293 292 294 int lima_pp_bcast_resume(struct lima_ip *ip) ··· 321 319 322 320 void lima_pp_bcast_fini(struct lima_ip *ip) 323 321 { 322 + struct lima_device *dev = ip->dev; 324 323 324 + devm_free_irq(dev->dev, ip->irq, ip); 325 325 } 326 326 327 327 static int lima_pp_task_validate(struct lima_sched_pipe *pipe, ··· 433 429 434 430 lima_pp_hard_reset(ip); 435 431 } 432 + 433 + if (pipe->bcast_processor) 434 + lima_bcast_reset(pipe->bcast_processor); 436 435 } 437 436 438 437 static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe) 439 438 { 440 439 if (atomic_dec_and_test(&pipe->task)) 441 440 lima_sched_pipe_task_done(pipe); 441 + } 442 + 443 + static void lima_pp_task_mask_irq(struct lima_sched_pipe *pipe) 444 + { 445 + int i; 446 + 447 + for (i = 0; i < pipe->num_processor; i++) { 448 + struct lima_ip *ip = pipe->processor[i]; 449 + 450 + pp_write(LIMA_PP_INT_MASK, 0); 451 + } 452 + 453 + if (pipe->bcast_processor) 454 + lima_bcast_mask_irq(pipe->bcast_processor); 442 455 } 443 456 444 457 static struct kmem_cache *lima_pp_task_slab; ··· 489 468 pipe->task_fini = lima_pp_task_fini; 490 469 pipe->task_error = lima_pp_task_error; 491 470 pipe->task_mmu_error = lima_pp_task_mmu_error; 471 + pipe->task_mask_irq = lima_pp_task_mask_irq; 492 472 493 473 return 0; 494 474 }
+9
drivers/gpu/drm/lima/lima_sched.c
··· 422 422 */ 423 423 for (i = 0; i < pipe->num_processor; i++) 424 424 synchronize_irq(pipe->processor[i]->irq); 425 + if (pipe->bcast_processor) 426 + synchronize_irq(pipe->bcast_processor->irq); 425 427 426 428 if (dma_fence_is_signaled(task->fence)) { 427 429 DRM_WARN("%s unexpectedly high interrupt latency\n", lima_ip_name(ip)); 428 430 return DRM_GPU_SCHED_STAT_NOMINAL; 429 431 } 432 + 433 + /* 434 + * The task might still finish while this timeout handler runs. 435 + * To prevent a race condition on its completion, mask all irqs 436 + * on the running core until the next hard reset completes. 437 + */ 438 + pipe->task_mask_irq(pipe); 430 439 431 440 if (!pipe->error) 432 441 DRM_ERROR("%s job timeout\n", lima_ip_name(ip));
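The comment added above explains why the timeout handler masks irqs: the task might still finish concurrently, and a late task-done irq racing recovery must be a no-op. A toy single-threaded model of that state machine — `struct core` and both functions are illustrative, not kernel API:

```c
#include <assert.h>

/* Toy model: once the irq is masked, a late task-done irq does nothing. */
struct core { int irq_masked; int task_done; };

static void irq_task_done(struct core *c)
{
	if (!c->irq_masked)
		c->task_done = 1;
}

/* Corresponds to pipe->task_mask_irq(pipe) in the timeout handler. */
static void timeout_mask_irq(struct core *c)
{
	c->irq_masked = 1;
}
```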
+1
drivers/gpu/drm/lima/lima_sched.h
··· 80 80 void (*task_error)(struct lima_sched_pipe *pipe); 81 81 void (*task_mmu_error)(struct lima_sched_pipe *pipe); 82 82 int (*task_recover)(struct lima_sched_pipe *pipe); 83 + void (*task_mask_irq)(struct lima_sched_pipe *pipe); 83 84 84 85 struct work_struct recover_work; 85 86 };
+6 -1
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 366 366 struct drm_display_mode; 367 367 struct drm_plane; 368 368 struct drm_atomic_state; 369 + struct drm_scanout_buffer; 369 370 370 371 extern const uint32_t mgag200_primary_plane_formats[]; 371 372 extern const size_t mgag200_primary_plane_formats_size; ··· 380 379 struct drm_atomic_state *state); 381 380 void mgag200_primary_plane_helper_atomic_disable(struct drm_plane *plane, 382 381 struct drm_atomic_state *old_state); 382 + int mgag200_primary_plane_helper_get_scanout_buffer(struct drm_plane *plane, 383 + struct drm_scanout_buffer *sb); 384 + 383 385 #define MGAG200_PRIMARY_PLANE_HELPER_FUNCS \ 384 386 DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, \ 385 387 .atomic_check = mgag200_primary_plane_helper_atomic_check, \ 386 388 .atomic_update = mgag200_primary_plane_helper_atomic_update, \ 387 389 .atomic_enable = mgag200_primary_plane_helper_atomic_enable, \ 388 - .atomic_disable = mgag200_primary_plane_helper_atomic_disable 390 + .atomic_disable = mgag200_primary_plane_helper_atomic_disable, \ 391 + .get_scanout_buffer = mgag200_primary_plane_helper_get_scanout_buffer 389 392 390 393 #define MGAG200_PRIMARY_PLANE_FUNCS \ 391 394 .update_plane = drm_atomic_helper_update_plane, \
+18
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 21 21 #include <drm/drm_framebuffer.h> 22 22 #include <drm/drm_gem_atomic_helper.h> 23 23 #include <drm/drm_gem_framebuffer_helper.h> 24 + #include <drm/drm_panic.h> 24 25 #include <drm/drm_print.h> 25 26 26 27 #include "mgag200_drv.h" ··· 545 544 seq1 |= MGAREG_SEQ1_SCROFF; 546 545 WREG_SEQ(0x01, seq1); 547 546 msleep(20); 547 + } 548 + 549 + int mgag200_primary_plane_helper_get_scanout_buffer(struct drm_plane *plane, 550 + struct drm_scanout_buffer *sb) 551 + { 552 + struct mga_device *mdev = to_mga_device(plane->dev); 553 + struct iosys_map map = IOSYS_MAP_INIT_VADDR_IOMEM(mdev->vram); 554 + 555 + if (plane->state && plane->state->fb) { 556 + sb->format = plane->state->fb->format; 557 + sb->width = plane->state->fb->width; 558 + sb->height = plane->state->fb->height; 559 + sb->pitch[0] = plane->state->fb->pitches[0]; 560 + sb->map[0] = map; 561 + return 0; 562 + } 563 + return -ENODEV; 548 564 } 549 565 550 566 /*
+1 -1
drivers/gpu/drm/nouveau/nouveau_display.c
··· 83 83 nouveau_display_scanoutpos_head(struct drm_crtc *crtc, int *vpos, int *hpos, 84 84 ktime_t *stime, ktime_t *etime) 85 85 { 86 - struct drm_vblank_crtc *vblank = &crtc->dev->vblank[drm_crtc_index(crtc)]; 86 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 87 87 struct nvif_head *head = &nouveau_crtc(crtc)->head; 88 88 struct nvif_head_scanoutpos_v0 args; 89 89 int retry = 20;
+18 -5
drivers/gpu/drm/nouveau/nouveau_dp.c
··· 225 225 u8 *dpcd = nv_encoder->dp.dpcd;
226 226 int ret = NOUVEAU_DP_NONE, hpd;
227 227
228 - /* If we've already read the DPCD on an eDP device, we don't need to
229 - * reread it as it won't change
228 + /* eDP ports don't support hotplugging, so there's no point in probing an eDP
229 + * port more than once.
230 230 */
231 - if (connector->connector_type == DRM_MODE_CONNECTOR_eDP &&
232 - dpcd[DP_DPCD_REV] != 0)
233 - return NOUVEAU_DP_SST;
231 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
232 + if (connector->status == connector_status_connected)
233 + return NOUVEAU_DP_SST;
234 + else if (connector->status == connector_status_disconnected)
235 + return NOUVEAU_DP_NONE;
236 + }
237 +
238 + // Ensure that the aux bus is enabled for probing
239 + drm_dp_dpcd_set_powered(&nv_connector->aux, true);
234 240
235 241 mutex_lock(&nv_encoder->dp.hpd_irq_lock);
236 242 if (mstm) {
··· 298 292 out:
299 293 if (mstm && !mstm->suspended && ret != NOUVEAU_DP_MST)
300 294 nv50_mstm_remove(mstm);
295 +
296 + /* GSP doesn't like when we try to do aux transactions on a port it considers disconnected,
297 + * and since we don't really have a use case for that anyway, just disable the aux bus here
298 + * if we've decided the connector is disconnected
299 + */
300 + if (ret == NOUVEAU_DP_NONE)
301 + drm_dp_dpcd_set_powered(&nv_connector->aux, false);
301 302
302 303 mutex_unlock(&nv_encoder->dp.hpd_irq_lock);
303 304 return ret;
+11
drivers/gpu/drm/panel/Kconfig
··· 335 335 Say Y here if you want to enable support for LG4573 RGB panel. 336 336 To compile this driver as a module, choose M here. 337 337 338 + config DRM_PANEL_LG_SW43408 339 + tristate "LG SW43408 panel" 340 + depends on OF 341 + depends on DRM_MIPI_DSI 342 + depends on BACKLIGHT_CLASS_DEVICE 343 + help 344 + Say Y here if you want to enable support for LG sw43408 panel. 345 + The panel has a 1080x2160@60Hz resolution and uses 24 bit RGB per 346 + pixel. It provides a MIPI DSI interface to the host and has a 347 + built-in LED backlight. 348 + 338 349 config DRM_PANEL_MAGNACHIP_D53E6EA8966 339 350 tristate "Magnachip D53E6EA8966 DSI panel" 340 351 depends on OF && SPI
+1
drivers/gpu/drm/panel/Makefile
··· 34 34 obj-$(CONFIG_DRM_PANEL_LEADTEK_LTK500HD1829) += panel-leadtek-ltk500hd1829.o 35 35 obj-$(CONFIG_DRM_PANEL_LG_LB035Q02) += panel-lg-lb035q02.o 36 36 obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o 37 + obj-$(CONFIG_DRM_PANEL_LG_SW43408) += panel-lg-sw43408.o 37 38 obj-$(CONFIG_DRM_PANEL_MAGNACHIP_D53E6EA8966) += panel-magnachip-d53e6ea8966.o 38 39 obj-$(CONFIG_DRM_PANEL_NEC_NL8048HL11) += panel-nec-nl8048hl11.o 39 40 obj-$(CONFIG_DRM_PANEL_NEWVISION_NV3051D) += panel-newvision-nv3051d.o
+320
drivers/gpu/drm/panel/panel-lg-sw43408.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright (C) 2019-2024 Linaro Ltd 4 + * Author: Sumit Semwal <sumit.semwal@linaro.org> 5 + * Dmitry Baryshkov <dmitry.baryshkov@linaro.org> 6 + */ 7 + 8 + #include <linux/backlight.h> 9 + #include <linux/delay.h> 10 + #include <linux/gpio/consumer.h> 11 + #include <linux/module.h> 12 + #include <linux/of.h> 13 + #include <linux/regulator/consumer.h> 14 + 15 + #include <video/mipi_display.h> 16 + 17 + #include <drm/drm_mipi_dsi.h> 18 + #include <drm/drm_panel.h> 19 + #include <drm/drm_probe_helper.h> 20 + #include <drm/display/drm_dsc.h> 21 + #include <drm/display/drm_dsc_helper.h> 22 + 23 + #define NUM_SUPPLIES 2 24 + 25 + struct sw43408_panel { 26 + struct drm_panel base; 27 + struct mipi_dsi_device *link; 28 + 29 + struct regulator_bulk_data supplies[NUM_SUPPLIES]; 30 + 31 + struct gpio_desc *reset_gpio; 32 + 33 + struct drm_dsc_config dsc; 34 + }; 35 + 36 + static inline struct sw43408_panel *to_panel_info(struct drm_panel *panel) 37 + { 38 + return container_of(panel, struct sw43408_panel, base); 39 + } 40 + 41 + static int sw43408_unprepare(struct drm_panel *panel) 42 + { 43 + struct sw43408_panel *ctx = to_panel_info(panel); 44 + int ret; 45 + 46 + ret = mipi_dsi_dcs_set_display_off(ctx->link); 47 + if (ret < 0) 48 + dev_err(panel->dev, "set_display_off cmd failed ret = %d\n", ret); 49 + 50 + ret = mipi_dsi_dcs_enter_sleep_mode(ctx->link); 51 + if (ret < 0) 52 + dev_err(panel->dev, "enter_sleep cmd failed ret = %d\n", ret); 53 + 54 + msleep(100); 55 + 56 + gpiod_set_value(ctx->reset_gpio, 1); 57 + 58 + return regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 59 + } 60 + 61 + static int sw43408_program(struct drm_panel *panel) 62 + { 63 + struct sw43408_panel *ctx = to_panel_info(panel); 64 + struct drm_dsc_picture_parameter_set pps; 65 + 66 + mipi_dsi_dcs_write_seq(ctx->link, MIPI_DCS_SET_GAMMA_CURVE, 0x02); 67 + 68 + mipi_dsi_dcs_set_tear_on(ctx->link, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 
69 + 70 + mipi_dsi_dcs_write_seq(ctx->link, 0x53, 0x0c, 0x30); 71 + mipi_dsi_dcs_write_seq(ctx->link, 0x55, 0x00, 0x70, 0xdf, 0x00, 0x70, 0xdf); 72 + mipi_dsi_dcs_write_seq(ctx->link, 0xf7, 0x01, 0x49, 0x0c); 73 + 74 + mipi_dsi_dcs_exit_sleep_mode(ctx->link); 75 + 76 + msleep(135); 77 + 78 + /* COMPRESSION_MODE moved after setting the PPS */ 79 + 80 + mipi_dsi_dcs_write_seq(ctx->link, 0xb0, 0xac); 81 + mipi_dsi_dcs_write_seq(ctx->link, 0xe5, 82 + 0x00, 0x3a, 0x00, 0x3a, 0x00, 0x0e, 0x10); 83 + mipi_dsi_dcs_write_seq(ctx->link, 0xb5, 84 + 0x75, 0x60, 0x2d, 0x5d, 0x80, 0x00, 0x0a, 0x0b, 85 + 0x00, 0x05, 0x0b, 0x00, 0x80, 0x0d, 0x0e, 0x40, 86 + 0x00, 0x0c, 0x00, 0x16, 0x00, 0xb8, 0x00, 0x80, 87 + 0x0d, 0x0e, 0x40, 0x00, 0x0c, 0x00, 0x16, 0x00, 88 + 0xb8, 0x00, 0x81, 0x00, 0x03, 0x03, 0x03, 0x01, 89 + 0x01); 90 + msleep(85); 91 + mipi_dsi_dcs_write_seq(ctx->link, 0xcd, 92 + 0x00, 0x00, 0x00, 0x19, 0x19, 0x19, 0x19, 0x19, 93 + 0x19, 0x19, 0x19, 0x19, 0x19, 0x19, 0x19, 0x19, 94 + 0x16, 0x16); 95 + mipi_dsi_dcs_write_seq(ctx->link, 0xcb, 0x80, 0x5c, 0x07, 0x03, 0x28); 96 + mipi_dsi_dcs_write_seq(ctx->link, 0xc0, 0x02, 0x02, 0x0f); 97 + mipi_dsi_dcs_write_seq(ctx->link, 0x55, 0x04, 0x61, 0xdb, 0x04, 0x70, 0xdb); 98 + mipi_dsi_dcs_write_seq(ctx->link, 0xb0, 0xca); 99 + 100 + mipi_dsi_dcs_set_display_on(ctx->link); 101 + 102 + msleep(50); 103 + 104 + ctx->link->mode_flags &= ~MIPI_DSI_MODE_LPM; 105 + 106 + drm_dsc_pps_payload_pack(&pps, ctx->link->dsc); 107 + mipi_dsi_picture_parameter_set(ctx->link, &pps); 108 + 109 + ctx->link->mode_flags |= MIPI_DSI_MODE_LPM; 110 + 111 + /* 112 + * This panel uses PPS selectors with offset: 113 + * PPS 1 if pps_identifier is 0 114 + * PPS 2 if pps_identifier is 1 115 + */ 116 + mipi_dsi_compression_mode_ext(ctx->link, true, 117 + MIPI_DSI_COMPRESSION_DSC, 1); 118 + 119 + return 0; 120 + } 121 + 122 + static int sw43408_prepare(struct drm_panel *panel) 123 + { 124 + struct sw43408_panel *ctx = to_panel_info(panel); 125 + int ret; 126 + 127 
+ ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
128 + if (ret < 0)
129 + return ret;
130 +
131 + usleep_range(5000, 6000);
132 +
133 + gpiod_set_value(ctx->reset_gpio, 0);
134 + usleep_range(9000, 10000);
135 + gpiod_set_value(ctx->reset_gpio, 1);
136 + usleep_range(1000, 2000);
137 + gpiod_set_value(ctx->reset_gpio, 0);
138 + usleep_range(9000, 10000);
139 +
140 + ret = sw43408_program(panel);
141 + if (ret)
142 + goto poweroff;
143 +
144 + return 0;
145 +
146 + poweroff:
147 + gpiod_set_value(ctx->reset_gpio, 1);
148 + regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
149 + return ret;
150 + }
151 +
152 + static const struct drm_display_mode sw43408_mode = {
153 + .clock = (1080 + 20 + 32 + 20) * (2160 + 20 + 4 + 20) * 60 / 1000,
154 +
155 + .hdisplay = 1080,
156 + .hsync_start = 1080 + 20,
157 + .hsync_end = 1080 + 20 + 32,
158 + .htotal = 1080 + 20 + 32 + 20,
159 +
160 + .vdisplay = 2160,
161 + .vsync_start = 2160 + 20,
162 + .vsync_end = 2160 + 20 + 4,
163 + .vtotal = 2160 + 20 + 4 + 20,
164 +
165 + .width_mm = 62,
166 + .height_mm = 124,
167 +
168 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
169 + };
170 +
171 + static int sw43408_get_modes(struct drm_panel *panel,
172 + struct drm_connector *connector)
173 + {
174 + return drm_connector_helper_get_modes_fixed(connector, &sw43408_mode);
175 + }
176 +
177 + static int sw43408_backlight_update_status(struct backlight_device *bl)
178 + {
179 + struct mipi_dsi_device *dsi = bl_get_data(bl);
180 + u16 brightness = backlight_get_brightness(bl);
181 +
182 + return mipi_dsi_dcs_set_display_brightness_large(dsi, brightness);
183 + }
184 +
185 + static const struct backlight_ops sw43408_backlight_ops = {
186 + .update_status = sw43408_backlight_update_status,
187 + };
188 +
189 + static int sw43408_backlight_init(struct sw43408_panel *ctx)
190 + {
191 + struct device *dev = &ctx->link->dev;
192 + const struct backlight_properties props = {
193 + .type = BACKLIGHT_PLATFORM, 
194 + .brightness = 255, 195 + .max_brightness = 255, 196 + }; 197 + 198 + ctx->base.backlight = devm_backlight_device_register(dev, dev_name(dev), dev, 199 + ctx->link, 200 + &sw43408_backlight_ops, 201 + &props); 202 + 203 + if (IS_ERR(ctx->base.backlight)) 204 + return dev_err_probe(dev, PTR_ERR(ctx->base.backlight), 205 + "Failed to create backlight\n"); 206 + 207 + return 0; 208 + } 209 + 210 + static const struct drm_panel_funcs sw43408_funcs = { 211 + .unprepare = sw43408_unprepare, 212 + .prepare = sw43408_prepare, 213 + .get_modes = sw43408_get_modes, 214 + }; 215 + 216 + static const struct of_device_id sw43408_of_match[] = { 217 + { .compatible = "lg,sw43408", }, 218 + { /* sentinel */ } 219 + }; 220 + MODULE_DEVICE_TABLE(of, sw43408_of_match); 221 + 222 + static int sw43408_add(struct sw43408_panel *ctx) 223 + { 224 + struct device *dev = &ctx->link->dev; 225 + int ret; 226 + 227 + ctx->supplies[0].supply = "vddi"; /* 1.88 V */ 228 + ctx->supplies[0].init_load_uA = 62000; 229 + ctx->supplies[1].supply = "vpnl"; /* 3.0 V */ 230 + ctx->supplies[1].init_load_uA = 857000; 231 + 232 + ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ctx->supplies), 233 + ctx->supplies); 234 + if (ret < 0) 235 + return ret; 236 + 237 + ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 238 + if (IS_ERR(ctx->reset_gpio)) { 239 + ret = PTR_ERR(ctx->reset_gpio); 240 + return dev_err_probe(dev, ret, "cannot get reset gpio\n"); 241 + } 242 + 243 + ret = sw43408_backlight_init(ctx); 244 + if (ret < 0) 245 + return ret; 246 + 247 + ctx->base.prepare_prev_first = true; 248 + 249 + drm_panel_init(&ctx->base, dev, &sw43408_funcs, DRM_MODE_CONNECTOR_DSI); 250 + 251 + drm_panel_add(&ctx->base); 252 + return ret; 253 + } 254 + 255 + static int sw43408_probe(struct mipi_dsi_device *dsi) 256 + { 257 + struct sw43408_panel *ctx; 258 + int ret; 259 + 260 + ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL); 261 + if (!ctx) 262 + return -ENOMEM; 263 + 264 + dsi->mode_flags = 
MIPI_DSI_MODE_LPM;
265 + dsi->format = MIPI_DSI_FMT_RGB888;
266 + dsi->lanes = 4;
267 +
268 + ctx->link = dsi;
269 + mipi_dsi_set_drvdata(dsi, ctx);
270 +
271 + ret = sw43408_add(ctx);
272 + if (ret < 0)
273 + return ret;
274 +
275 + /* The panel works only in the DSC mode. Set DSC params. */
276 + ctx->dsc.dsc_version_major = 0x1;
277 + ctx->dsc.dsc_version_minor = 0x1;
278 +
279 + /* slice_count * slice_width == width */
280 + ctx->dsc.slice_height = 16;
281 + ctx->dsc.slice_width = 540;
282 + ctx->dsc.slice_count = 2;
283 + ctx->dsc.bits_per_component = 8;
284 + ctx->dsc.bits_per_pixel = 8 << 4;
285 + ctx->dsc.block_pred_enable = true;
286 +
287 + dsi->dsc = &ctx->dsc;
288 +
289 + return mipi_dsi_attach(dsi);
290 + }
291 +
292 + static void sw43408_remove(struct mipi_dsi_device *dsi)
293 + {
294 + struct sw43408_panel *ctx = mipi_dsi_get_drvdata(dsi);
295 + int ret;
296 +
297 + ret = sw43408_unprepare(&ctx->base);
298 + if (ret < 0)
299 + dev_err(&dsi->dev, "failed to unprepare panel: %d\n", ret);
300 +
301 + ret = mipi_dsi_detach(dsi);
302 + if (ret < 0)
303 + dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret);
304 +
305 + drm_panel_remove(&ctx->base);
306 + }
307 +
308 + static struct mipi_dsi_driver sw43408_driver = {
309 + .driver = {
310 + .name = "panel-lg-sw43408",
311 + .of_match_table = sw43408_of_match,
312 + },
313 + .probe = sw43408_probe,
314 + .remove = sw43408_remove,
315 + };
316 + module_mipi_dsi_driver(sw43408_driver);
317 +
318 + MODULE_AUTHOR("Sumit Semwal <sumit.semwal@linaro.org>");
319 + MODULE_DESCRIPTION("LG SW43408 MIPI-DSI LED panel");
320 + MODULE_LICENSE("GPL");
+2 -4
drivers/gpu/drm/panel/panel-novatek-nt35950.c
··· 556 556 } 557 557 dsi_r_host = of_find_mipi_dsi_host_by_node(dsi_r); 558 558 of_node_put(dsi_r); 559 - if (!dsi_r_host) { 560 - dev_err(dev, "Cannot get secondary DSI host\n"); 561 - return -EPROBE_DEFER; 562 - } 559 + if (!dsi_r_host) 560 + return dev_err_probe(dev, -EPROBE_DEFER, "Cannot get secondary DSI host\n"); 563 561 564 562 nt->dsi[1] = mipi_dsi_device_register_full(dsi_r_host, info); 565 563 if (!nt->dsi[1]) {
+37 -13
drivers/gpu/drm/panel/panel-simple.c
··· 2591 2591 .connector_type = DRM_MODE_CONNECTOR_LVDS, 2592 2592 }; 2593 2593 2594 - static const struct drm_display_mode innolux_g121x1_l03_mode = { 2595 - .clock = 65000, 2596 - .hdisplay = 1024, 2597 - .hsync_start = 1024 + 0, 2598 - .hsync_end = 1024 + 1, 2599 - .htotal = 1024 + 0 + 1 + 320, 2600 - .vdisplay = 768, 2601 - .vsync_start = 768 + 38, 2602 - .vsync_end = 768 + 38 + 1, 2603 - .vtotal = 768 + 38 + 1 + 0, 2604 - .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 2594 + static const struct display_timing innolux_g121x1_l03_timings = { 2595 + .pixelclock = { 57500000, 64900000, 74400000 }, 2596 + .hactive = { 1024, 1024, 1024 }, 2597 + .hfront_porch = { 90, 140, 190 }, 2598 + .hback_porch = { 90, 140, 190 }, 2599 + .hsync_len = { 36, 40, 60 }, 2600 + .vactive = { 768, 768, 768 }, 2601 + .vfront_porch = { 2, 15, 30 }, 2602 + .vback_porch = { 2, 15, 30 }, 2603 + .vsync_len = { 2, 8, 20 }, 2604 + .flags = DISPLAY_FLAGS_HSYNC_LOW | DISPLAY_FLAGS_VSYNC_LOW, 2605 2605 }; 2606 2606 2607 2607 static const struct panel_desc innolux_g121x1_l03 = { 2608 - .modes = &innolux_g121x1_l03_mode, 2609 - .num_modes = 1, 2608 + .timings = &innolux_g121x1_l03_timings, 2609 + .num_timings = 1, 2610 2610 .bpc = 6, 2611 2611 .size = { 2612 2612 .width = 246, ··· 2617 2617 .unprepare = 200, 2618 2618 .disable = 400, 2619 2619 }, 2620 + .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 2621 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2622 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2623 + }; 2624 + 2625 + static const struct panel_desc innolux_g121xce_l01 = { 2626 + .timings = &innolux_g121x1_l03_timings, 2627 + .num_timings = 1, 2628 + .bpc = 8, 2629 + .size = { 2630 + .width = 246, 2631 + .height = 185, 2632 + }, 2633 + .delay = { 2634 + .enable = 200, 2635 + .unprepare = 200, 2636 + .disable = 400, 2637 + }, 2638 + .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2639 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2640 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 2620 2641 }; 2621 2642 2622 
2643 static const struct display_timing innolux_g156hce_l01_timings = { ··· 4613 4592 }, { 4614 4593 .compatible = "innolux,g121x1-l03", 4615 4594 .data = &innolux_g121x1_l03, 4595 + }, { 4596 + .compatible = "innolux,g121xce-l01", 4597 + .data = &innolux_g121xce_l01, 4616 4598 }, { 4617 4599 .compatible = "innolux,g156hce-l01", 4618 4600 .data = &innolux_g156hce_l01,
+2 -4
drivers/gpu/drm/panel/panel-truly-nt35597.c
··· 550 550 551 551 dsi1_host = of_find_mipi_dsi_host_by_node(dsi1); 552 552 of_node_put(dsi1); 553 - if (!dsi1_host) { 554 - dev_err(dev, "failed to find dsi host\n"); 555 - return -EPROBE_DEFER; 556 - } 553 + if (!dsi1_host) 554 + return dev_err_probe(dev, -EPROBE_DEFER, "failed to find dsi host\n"); 557 555 558 556 /* register the second DSI device */ 559 557 dsi1_device = mipi_dsi_device_register_full(dsi1_host, &info);
+1 -1
drivers/gpu/drm/rockchip/Kconfig
··· 36 36 config ROCKCHIP_ANALOGIX_DP 37 37 bool "Rockchip specific extensions for Analogix DP driver" 38 38 depends on DRM_DISPLAY_DP_HELPER 39 - depends on DRM_DISPLAY_HELPER 39 + depends on DRM_DISPLAY_HELPER=y || (DRM_DISPLAY_HELPER=m && DRM_ROCKCHIP=m) 40 40 depends on ROCKCHIP_VOP 41 41 help 42 42 This selects support for Rockchip SoC specific extensions
+16
drivers/gpu/drm/tiny/simpledrm.c
··· 25 25 #include <drm/drm_gem_shmem_helper.h> 26 26 #include <drm/drm_managed.h> 27 27 #include <drm/drm_modeset_helper_vtables.h> 28 + #include <drm/drm_panic.h> 28 29 #include <drm/drm_probe_helper.h> 29 30 30 31 #define DRIVER_NAME "simpledrm" ··· 672 671 drm_dev_exit(idx); 673 672 } 674 673 674 + static int simpledrm_primary_plane_helper_get_scanout_buffer(struct drm_plane *plane, 675 + struct drm_scanout_buffer *sb) 676 + { 677 + struct simpledrm_device *sdev = simpledrm_device_of_dev(plane->dev); 678 + 679 + sb->width = sdev->mode.hdisplay; 680 + sb->height = sdev->mode.vdisplay; 681 + sb->format = sdev->format; 682 + sb->pitch[0] = sdev->pitch; 683 + sb->map[0] = sdev->screen_base; 684 + 685 + return 0; 686 + } 687 + 675 688 static const struct drm_plane_helper_funcs simpledrm_primary_plane_helper_funcs = { 676 689 DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 677 690 .atomic_check = simpledrm_primary_plane_helper_atomic_check, 678 691 .atomic_update = simpledrm_primary_plane_helper_atomic_update, 679 692 .atomic_disable = simpledrm_primary_plane_helper_atomic_disable, 693 + .get_scanout_buffer = simpledrm_primary_plane_helper_get_scanout_buffer, 680 694 }; 681 695 682 696 static const struct drm_plane_funcs simpledrm_primary_plane_funcs = {
+3 -5
drivers/gpu/drm/ttm/ttm_bo.c
··· 402 402 EXPORT_SYMBOL(ttm_bo_put); 403 403 404 404 static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo, 405 - struct ttm_resource **mem, 406 405 struct ttm_operation_ctx *ctx, 407 406 struct ttm_place *hop) 408 407 { ··· 468 469 if (ret != -EMULTIHOP) 469 470 break; 470 471 471 - ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop); 472 + ret = ttm_bo_bounce_temp_buffer(bo, ctx, &hop); 472 473 } while (!ret); 473 474 474 475 if (ret) { ··· 697 698 */ 698 699 static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, 699 700 struct ttm_resource_manager *man, 700 - struct ttm_resource *mem, 701 701 bool no_wait_gpu) 702 702 { 703 703 struct dma_fence *fence; ··· 785 787 if (ret) 786 788 continue; 787 789 788 - ret = ttm_bo_add_move_fence(bo, man, *res, ctx->no_wait_gpu); 790 + ret = ttm_bo_add_move_fence(bo, man, ctx->no_wait_gpu); 789 791 if (unlikely(ret)) { 790 792 ttm_resource_free(bo, res); 791 793 if (ret == -EBUSY) ··· 892 894 bounce: 893 895 ret = ttm_bo_handle_move_mem(bo, res, false, ctx, &hop); 894 896 if (ret == -EMULTIHOP) { 895 - ret = ttm_bo_bounce_temp_buffer(bo, &res, ctx, &hop); 897 + ret = ttm_bo_bounce_temp_buffer(bo, ctx, &hop); 896 898 /* try and move to final place now. */ 897 899 if (!ret) 898 900 goto bounce;
+2
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 2740 2740 index = 1; 2741 2741 2742 2742 addr = of_get_address(dev->of_node, index, NULL, NULL); 2743 + if (!addr) 2744 + return -EINVAL; 2743 2745 2744 2746 vc4_hdmi->audio.dma_data.addr = be32_to_cpup(addr) + mai_data->offset; 2745 2747 vc4_hdmi->audio.dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+2 -5
drivers/gpu/drm/vkms/vkms_crtc.c
··· 61 61 62 62 static int vkms_enable_vblank(struct drm_crtc *crtc) 63 63 { 64 - struct drm_device *dev = crtc->dev; 65 - unsigned int pipe = drm_crtc_index(crtc); 66 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 64 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 67 65 struct vkms_output *out = drm_crtc_to_vkms_output(crtc); 68 66 69 67 drm_calc_timestamping_constants(crtc, &crtc->mode); ··· 86 88 bool in_vblank_irq) 87 89 { 88 90 struct drm_device *dev = crtc->dev; 89 - unsigned int pipe = crtc->index; 90 91 struct vkms_device *vkmsdev = drm_device_to_vkms_device(dev); 91 92 struct vkms_output *output = &vkmsdev->output; 92 - struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 93 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 93 94 94 95 if (!READ_ONCE(vblank->enabled)) { 95 96 *vblank_time = ktime_get();
+1 -1
drivers/gpu/drm/vmwgfx/Makefile
··· 10 10 vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \ 11 11 vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \ 12 12 vmwgfx_devcaps.o ttm_object.o vmwgfx_system_manager.o \ 13 - vmwgfx_gem.o 13 + vmwgfx_gem.o vmwgfx_vkms.o 14 14 15 15 obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
+4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 32 32 #include "vmwgfx_binding.h" 33 33 #include "vmwgfx_devcaps.h" 34 34 #include "vmwgfx_mksstat.h" 35 + #include "vmwgfx_vkms.h" 35 36 #include "ttm_object.h" 36 37 37 38 #include <drm/drm_aperture.h> ··· 911 910 "Please switch to a supported graphics device to avoid problems."); 912 911 } 913 912 913 + vmw_vkms_init(dev_priv); 914 + 914 915 ret = vmw_dma_select_mode(dev_priv); 915 916 if (unlikely(ret != 0)) { 916 917 drm_info(&dev_priv->drm, ··· 1198 1195 1199 1196 vmw_svga_disable(dev_priv); 1200 1197 1198 + vmw_vkms_cleanup(dev_priv); 1201 1199 vmw_kms_close(dev_priv); 1202 1200 vmw_overlay_close(dev_priv); 1203 1201
+4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 615 615 616 616 uint32 *devcaps; 617 617 618 + bool vkms_enabled; 619 + struct workqueue_struct *crc_workq; 620 + 618 621 /* 619 622 * mksGuestStat instance-descriptor and pid arrays 620 623 */ ··· 812 809 void vmw_resource_mob_detach(struct vmw_resource *res); 813 810 void vmw_resource_dirty_update(struct vmw_resource *res, pgoff_t start, 814 811 pgoff_t end); 812 + int vmw_resource_clean(struct vmw_resource *res); 815 813 int vmw_resources_clean(struct vmw_bo *vbo, pgoff_t start, 816 814 pgoff_t end, pgoff_t *num_prefault); 817 815
+33 -7
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 27 27 #include "vmwgfx_kms.h" 28 28 29 29 #include "vmwgfx_bo.h" 30 + #include "vmwgfx_vkms.h" 30 31 #include "vmw_surface_cache.h" 31 32 32 33 #include <drm/drm_atomic.h> ··· 38 37 #include <drm/drm_sysfs.h> 39 38 #include <drm/drm_edid.h> 40 39 40 + void vmw_du_init(struct vmw_display_unit *du) 41 + { 42 + vmw_vkms_crtc_init(&du->crtc); 43 + } 44 + 41 45 void vmw_du_cleanup(struct vmw_display_unit *du) 42 46 { 43 47 struct vmw_private *dev_priv = vmw_priv(du->primary.dev); 48 + 49 + vmw_vkms_crtc_cleanup(&du->crtc); 44 50 drm_plane_cleanup(&du->primary); 45 51 if (vmw_cmd_supported(dev_priv)) 46 52 drm_plane_cleanup(&du->cursor.base); ··· 963 955 void vmw_du_crtc_atomic_begin(struct drm_crtc *crtc, 964 956 struct drm_atomic_state *state) 965 957 { 958 + vmw_vkms_crtc_atomic_begin(crtc, state); 966 959 } 967 - 968 - 969 - void vmw_du_crtc_atomic_flush(struct drm_crtc *crtc, 970 - struct drm_atomic_state *state) 971 - { 972 - } 973 - 974 960 975 961 /** 976 962 * vmw_du_crtc_duplicate_state - duplicate crtc state ··· 2030 2028 "hotplug_mode_update", 0, 1); 2031 2029 } 2032 2030 2031 + static void 2032 + vmw_atomic_commit_tail(struct drm_atomic_state *old_state) 2033 + { 2034 + struct vmw_private *vmw = vmw_priv(old_state->dev); 2035 + struct drm_crtc *crtc; 2036 + struct drm_crtc_state *old_crtc_state; 2037 + int i; 2038 + 2039 + drm_atomic_helper_commit_tail(old_state); 2040 + 2041 + if (vmw->vkms_enabled) { 2042 + for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) { 2043 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 2044 + (void)old_crtc_state; 2045 + flush_work(&du->vkms.crc_generator_work); 2046 + } 2047 + } 2048 + } 2049 + 2050 + static const struct drm_mode_config_helper_funcs vmw_mode_config_helpers = { 2051 + .atomic_commit_tail = vmw_atomic_commit_tail, 2052 + }; 2053 + 2033 2054 int vmw_kms_init(struct vmw_private *dev_priv) 2034 2055 { 2035 2056 struct drm_device *dev = &dev_priv->drm; ··· 2072 2047 dev->mode_config.max_width = 
dev_priv->texture_max_width; 2073 2048 dev->mode_config.max_height = dev_priv->texture_max_height; 2074 2049 dev->mode_config.preferred_depth = dev_priv->assume_16bpp ? 16 : 32; 2050 + dev->mode_config.helper_private = &vmw_mode_config_helpers; 2075 2051 2076 2052 drm_mode_create_suggested_offset_properties(dev); 2077 2053 vmw_kms_create_hotplug_mode_update_property(dev_priv);
+20 -2
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 376 376 bool is_implicit; 377 377 int set_gui_x; 378 378 int set_gui_y; 379 + 380 + struct { 381 + struct work_struct crc_generator_work; 382 + struct hrtimer timer; 383 + ktime_t period_ns; 384 + 385 + /* protects concurrent access to the vblank handler */ 386 + atomic_t atomic_lock; 387 + /* protected by @atomic_lock */ 388 + bool crc_enabled; 389 + struct vmw_surface *surface; 390 + 391 + /* protects concurrent access to the crc worker */ 392 + spinlock_t crc_state_lock; 393 + /* protected by @crc_state_lock */ 394 + bool crc_pending; 395 + u64 frame_start; 396 + u64 frame_end; 397 + } vkms; 379 398 }; 380 399 381 400 #define vmw_crtc_to_du(x) \ ··· 406 387 /* 407 388 * Shared display unit functions - vmwgfx_kms.c 408 389 */ 390 + void vmw_du_init(struct vmw_display_unit *du); 409 391 void vmw_du_cleanup(struct vmw_display_unit *du); 410 392 void vmw_du_crtc_save(struct drm_crtc *crtc); 411 393 void vmw_du_crtc_restore(struct drm_crtc *crtc); ··· 492 472 int vmw_du_crtc_atomic_check(struct drm_crtc *crtc, 493 473 struct drm_atomic_state *state); 494 474 void vmw_du_crtc_atomic_begin(struct drm_crtc *crtc, 495 - struct drm_atomic_state *state); 496 - void vmw_du_crtc_atomic_flush(struct drm_crtc *crtc, 497 475 struct drm_atomic_state *state); 498 476 void vmw_du_crtc_reset(struct drm_crtc *crtc); 499 477 struct drm_crtc_state *vmw_du_crtc_duplicate_state(struct drm_crtc *crtc);
+9 -30
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
··· 27 27 28 28 #include "vmwgfx_bo.h" 29 29 #include "vmwgfx_kms.h" 30 + #include "vmwgfx_vkms.h" 30 31 31 32 #include <drm/drm_atomic.h> 32 33 #include <drm/drm_atomic_helper.h> ··· 242 241 { 243 242 } 244 243 245 - /** 246 - * vmw_ldu_crtc_atomic_enable - Noop 247 - * 248 - * @crtc: CRTC associated with the new screen 249 - * @state: Unused 250 - * 251 - * This is called after a mode set has been completed. Here's 252 - * usually a good place to call vmw_ldu_add_active/vmw_ldu_del_active 253 - * but since for LDU the display plane is closely tied to the 254 - * CRTC, it makes more sense to do those at plane update time. 255 - */ 256 - static void vmw_ldu_crtc_atomic_enable(struct drm_crtc *crtc, 257 - struct drm_atomic_state *state) 258 - { 259 - } 260 - 261 - /** 262 - * vmw_ldu_crtc_atomic_disable - Turns off CRTC 263 - * 264 - * @crtc: CRTC to be turned off 265 - * @state: Unused 266 - */ 267 - static void vmw_ldu_crtc_atomic_disable(struct drm_crtc *crtc, 268 - struct drm_atomic_state *state) 269 - { 270 - } 271 - 272 244 static const struct drm_crtc_funcs vmw_legacy_crtc_funcs = { 273 245 .gamma_set = vmw_du_crtc_gamma_set, 274 246 .destroy = vmw_ldu_crtc_destroy, ··· 250 276 .atomic_destroy_state = vmw_du_crtc_destroy_state, 251 277 .set_config = drm_atomic_helper_set_config, 252 278 .page_flip = drm_atomic_helper_page_flip, 279 + .enable_vblank = vmw_vkms_enable_vblank, 280 + .disable_vblank = vmw_vkms_disable_vblank, 281 + .get_vblank_timestamp = vmw_vkms_get_vblank_timestamp, 253 282 }; 254 283 255 284 ··· 395 418 .mode_set_nofb = vmw_ldu_crtc_mode_set_nofb, 396 419 .atomic_check = vmw_du_crtc_atomic_check, 397 420 .atomic_begin = vmw_du_crtc_atomic_begin, 398 - .atomic_flush = vmw_du_crtc_atomic_flush, 399 - .atomic_enable = vmw_ldu_crtc_atomic_enable, 400 - .atomic_disable = vmw_ldu_crtc_atomic_disable, 421 + .atomic_flush = vmw_vkms_crtc_atomic_flush, 422 + .atomic_enable = vmw_vkms_crtc_atomic_enable, 423 + .atomic_disable = 
vmw_vkms_crtc_atomic_disable, 401 424 }; 402 425 403 426 ··· 517 540 (&connector->base, 518 541 dev_priv->implicit_placement_property, 519 542 1); 543 + 544 + vmw_du_init(&ldu->base); 520 545 521 546 return 0; 522 547
+20 -12
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 1064 1064 end << PAGE_SHIFT); 1065 1065 } 1066 1066 1067 + int vmw_resource_clean(struct vmw_resource *res) 1068 + { 1069 + int ret = 0; 1070 + 1071 + if (res->res_dirty) { 1072 + if (!res->func->clean) 1073 + return -EINVAL; 1074 + 1075 + ret = res->func->clean(res); 1076 + if (ret) 1077 + return ret; 1078 + res->res_dirty = false; 1079 + } 1080 + return ret; 1081 + } 1082 + 1067 1083 /** 1068 1084 * vmw_resources_clean - Clean resources intersecting a mob range 1069 1085 * @vbo: The mob buffer object ··· 1096 1080 unsigned long res_start = start << PAGE_SHIFT; 1097 1081 unsigned long res_end = end << PAGE_SHIFT; 1098 1082 unsigned long last_cleaned = 0; 1083 + int ret; 1099 1084 1100 1085 /* 1101 1086 * Find the resource with lowest backup_offset that intersects the ··· 1123 1106 * intersecting the range. 1124 1107 */ 1125 1108 while (found) { 1126 - if (found->res_dirty) { 1127 - int ret; 1128 - 1129 - if (!found->func->clean) 1130 - return -EINVAL; 1131 - 1132 - ret = found->func->clean(found); 1133 - if (ret) 1134 - return ret; 1135 - 1136 - found->res_dirty = false; 1137 - } 1109 + ret = vmw_resource_clean(found); 1110 + if (ret) 1111 + return ret; 1138 1112 last_cleaned = found->guest_memory_offset + found->guest_memory_size; 1139 1113 cur = rb_next(&found->mob_node); 1140 1114 if (!cur)
+13 -15
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
··· 27 27 28 28 #include "vmwgfx_bo.h" 29 29 #include "vmwgfx_kms.h" 30 + #include "vmwgfx_vkms.h" 30 31 31 32 #include <drm/drm_atomic.h> 32 33 #include <drm/drm_atomic_helper.h> 33 34 #include <drm/drm_damage_helper.h> 34 35 #include <drm/drm_fourcc.h> 36 + #include <drm/drm_vblank.h> 35 37 36 38 #define vmw_crtc_to_sou(x) \ 37 39 container_of(x, struct vmw_screen_object_unit, base.crtc) ··· 270 268 } 271 269 272 270 /** 273 - * vmw_sou_crtc_atomic_enable - Noop 274 - * 275 - * @crtc: CRTC associated with the new screen 276 - * @state: Unused 277 - * 278 - * This is called after a mode set has been completed. 279 - */ 280 - static void vmw_sou_crtc_atomic_enable(struct drm_crtc *crtc, 281 - struct drm_atomic_state *state) 282 - { 283 - } 284 - 285 - /** 286 271 * vmw_sou_crtc_atomic_disable - Turns off CRTC 287 272 * 288 273 * @crtc: CRTC to be turned off ··· 291 302 sou = vmw_crtc_to_sou(crtc); 292 303 dev_priv = vmw_priv(crtc->dev); 293 304 305 + if (dev_priv->vkms_enabled) 306 + drm_crtc_vblank_off(crtc); 307 + 294 308 if (sou->defined) { 295 309 ret = vmw_sou_fifo_destroy(dev_priv, sou); 296 310 if (ret) ··· 309 317 .atomic_destroy_state = vmw_du_crtc_destroy_state, 310 318 .set_config = drm_atomic_helper_set_config, 311 319 .page_flip = drm_atomic_helper_page_flip, 320 + .enable_vblank = vmw_vkms_enable_vblank, 321 + .disable_vblank = vmw_vkms_disable_vblank, 322 + .get_vblank_timestamp = vmw_vkms_get_vblank_timestamp, 312 323 }; 313 324 314 325 /* ··· 789 794 .mode_set_nofb = vmw_sou_crtc_mode_set_nofb, 790 795 .atomic_check = vmw_du_crtc_atomic_check, 791 796 .atomic_begin = vmw_du_crtc_atomic_begin, 792 - .atomic_flush = vmw_du_crtc_atomic_flush, 793 - .atomic_enable = vmw_sou_crtc_atomic_enable, 797 + .atomic_flush = vmw_vkms_crtc_atomic_flush, 798 + .atomic_enable = vmw_vkms_crtc_atomic_enable, 794 799 .atomic_disable = vmw_sou_crtc_atomic_disable, 795 800 }; 796 801 ··· 900 905 dev->mode_config.suggested_x_property, 0); 901 906 
drm_object_attach_property(&connector->base, 902 907 dev->mode_config.suggested_y_property, 0); 908 + 909 + vmw_du_init(&sou->base); 910 + 903 911 return 0; 904 912 905 913 err_free_unregister:
+27 -15
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 27 27 28 28 #include "vmwgfx_bo.h" 29 29 #include "vmwgfx_kms.h" 30 + #include "vmwgfx_vkms.h" 30 31 #include "vmw_surface_cache.h" 31 32 32 33 #include <drm/drm_atomic.h> 33 34 #include <drm/drm_atomic_helper.h> 34 35 #include <drm/drm_damage_helper.h> 35 36 #include <drm/drm_fourcc.h> 37 + #include <drm/drm_vblank.h> 36 38 37 39 #define vmw_crtc_to_stdu(x) \ 38 40 container_of(x, struct vmw_screen_target_display_unit, base.crtc) ··· 409 407 crtc->x, crtc->y); 410 408 } 411 409 412 - 413 - static void vmw_stdu_crtc_helper_prepare(struct drm_crtc *crtc) 414 - { 415 - } 416 - 417 - static void vmw_stdu_crtc_atomic_enable(struct drm_crtc *crtc, 418 - struct drm_atomic_state *state) 419 - { 420 - } 421 - 422 410 static void vmw_stdu_crtc_atomic_disable(struct drm_crtc *crtc, 423 411 struct drm_atomic_state *state) 424 412 { 425 413 struct vmw_private *dev_priv; 426 414 struct vmw_screen_target_display_unit *stdu; 427 415 int ret; 428 - 429 416 430 417 if (!crtc) { 431 418 DRM_ERROR("CRTC is NULL\n"); ··· 423 432 424 433 stdu = vmw_crtc_to_stdu(crtc); 425 434 dev_priv = vmw_priv(crtc->dev); 435 + 436 + if (dev_priv->vkms_enabled) 437 + drm_crtc_vblank_off(crtc); 426 438 427 439 if (stdu->defined) { 428 440 ret = vmw_stdu_bind_st(dev_priv, stdu, NULL); ··· 764 770 return ret; 765 771 } 766 772 767 - 768 773 /* 769 774 * Screen Target CRTC dispatch table 770 775 */ ··· 775 782 .atomic_destroy_state = vmw_du_crtc_destroy_state, 776 783 .set_config = drm_atomic_helper_set_config, 777 784 .page_flip = drm_atomic_helper_page_flip, 785 + .enable_vblank = vmw_vkms_enable_vblank, 786 + .disable_vblank = vmw_vkms_disable_vblank, 787 + .get_vblank_timestamp = vmw_vkms_get_vblank_timestamp, 788 + .get_crc_sources = vmw_vkms_get_crc_sources, 789 + .set_crc_source = vmw_vkms_set_crc_source, 790 + .verify_crc_source = vmw_vkms_verify_crc_source, 778 791 }; 779 792 780 793 ··· 1412 1413 vmw_fence_obj_unreference(&fence); 1413 1414 } 1414 1415 1416 + static void 1417 + 
vmw_stdu_crtc_atomic_flush(struct drm_crtc *crtc, 1418 + struct drm_atomic_state *state) 1419 + { 1420 + struct vmw_private *vmw = vmw_priv(crtc->dev); 1421 + struct vmw_screen_target_display_unit *stdu = vmw_crtc_to_stdu(crtc); 1422 + 1423 + if (vmw->vkms_enabled) 1424 + vmw_vkms_set_crc_surface(crtc, stdu->display_srf); 1425 + vmw_vkms_crtc_atomic_flush(crtc, state); 1426 + } 1415 1427 1416 1428 static const struct drm_plane_funcs vmw_stdu_plane_funcs = { 1417 1429 .update_plane = drm_atomic_helper_update_plane, ··· 1463 1453 }; 1464 1454 1465 1455 static const struct drm_crtc_helper_funcs vmw_stdu_crtc_helper_funcs = { 1466 - .prepare = vmw_stdu_crtc_helper_prepare, 1467 1456 .mode_set_nofb = vmw_stdu_crtc_mode_set_nofb, 1468 1457 .atomic_check = vmw_du_crtc_atomic_check, 1469 1458 .atomic_begin = vmw_du_crtc_atomic_begin, 1470 - .atomic_flush = vmw_du_crtc_atomic_flush, 1471 - .atomic_enable = vmw_stdu_crtc_atomic_enable, 1459 + .atomic_flush = vmw_stdu_crtc_atomic_flush, 1460 + .atomic_enable = vmw_vkms_crtc_atomic_enable, 1472 1461 .atomic_disable = vmw_stdu_crtc_atomic_disable, 1473 1462 }; 1474 1463 ··· 1584 1575 dev->mode_config.suggested_x_property, 0); 1585 1576 drm_object_attach_property(&connector->base, 1586 1577 dev->mode_config.suggested_y_property, 0); 1578 + 1579 + vmw_du_init(&stdu->base); 1580 + 1587 1581 return 0; 1588 1582 1589 1583 err_free_unregister:
+632
drivers/gpu/drm/vmwgfx/vmwgfx_vkms.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR MIT 2 + /************************************************************************** 3 + * 4 + * Copyright (c) 2024 Broadcom. All Rights Reserved. The term 5 + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. 6 + * 7 + * Permission is hereby granted, free of charge, to any person obtaining a 8 + * copy of this software and associated documentation files (the 9 + * "Software"), to deal in the Software without restriction, including 10 + * without limitation the rights to use, copy, modify, merge, publish, 11 + * distribute, sub license, and/or sell copies of the Software, and to 12 + * permit persons to whom the Software is furnished to do so, subject to 13 + * the following conditions: 14 + * 15 + * The above copyright notice and this permission notice (including the 16 + * next paragraph) shall be included in all copies or substantial portions 17 + * of the Software. 18 + * 19 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 22 + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 23 + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 24 + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 25 + * USE OR OTHER DEALINGS IN THE SOFTWARE. 
26 + * 27 + **************************************************************************/ 28 + 29 + #include "vmwgfx_vkms.h" 30 + 31 + #include "vmwgfx_bo.h" 32 + #include "vmwgfx_drv.h" 33 + #include "vmwgfx_kms.h" 34 + #include "vmwgfx_vkms.h" 35 + 36 + #include "vmw_surface_cache.h" 37 + 38 + #include <drm/drm_crtc.h> 39 + #include <drm/drm_debugfs_crc.h> 40 + #include <drm/drm_print.h> 41 + #include <drm/drm_vblank.h> 42 + 43 + #include <linux/crc32.h> 44 + #include <linux/delay.h> 45 + 46 + #define GUESTINFO_VBLANK "guestinfo.vmwgfx.vkms_enable" 47 + 48 + static int 49 + vmw_surface_sync(struct vmw_private *vmw, 50 + struct vmw_surface *surf) 51 + { 52 + int ret; 53 + struct vmw_fence_obj *fence = NULL; 54 + struct vmw_bo *bo = surf->res.guest_memory_bo; 55 + 56 + vmw_resource_clean(&surf->res); 57 + 58 + ret = ttm_bo_reserve(&bo->tbo, false, false, NULL); 59 + if (ret != 0) { 60 + drm_warn(&vmw->drm, "%s: failed reserve\n", __func__); 61 + goto done; 62 + } 63 + 64 + ret = vmw_execbuf_fence_commands(NULL, vmw, &fence, NULL); 65 + if (ret != 0) { 66 + drm_warn(&vmw->drm, "%s: failed execbuf\n", __func__); 67 + ttm_bo_unreserve(&bo->tbo); 68 + goto done; 69 + } 70 + 71 + dma_fence_wait(&fence->base, false); 72 + dma_fence_put(&fence->base); 73 + 74 + ttm_bo_unreserve(&bo->tbo); 75 + done: 76 + return ret; 77 + } 78 + 79 + static int 80 + compute_crc(struct drm_crtc *crtc, 81 + struct vmw_surface *surf, 82 + u32 *crc) 83 + { 84 + u8 *mapped_surface; 85 + struct vmw_bo *bo = surf->res.guest_memory_bo; 86 + const struct SVGA3dSurfaceDesc *desc = 87 + vmw_surface_get_desc(surf->metadata.format); 88 + u32 row_pitch_bytes; 89 + SVGA3dSize blocks; 90 + u32 y; 91 + 92 + *crc = 0; 93 + 94 + vmw_surface_get_size_in_blocks(desc, &surf->metadata.base_size, &blocks); 95 + row_pitch_bytes = blocks.width * desc->pitchBytesPerBlock; 96 + WARN_ON(!bo); 97 + mapped_surface = vmw_bo_map_and_cache(bo); 98 + 99 + for (y = 0; y < blocks.height; y++) { 100 + *crc = crc32_le(*crc, 
mapped_surface, row_pitch_bytes); 101 + mapped_surface += row_pitch_bytes; 102 + } 103 + 104 + vmw_bo_unmap(bo); 105 + 106 + return 0; 107 + } 108 + 109 + static void 110 + crc_generate_worker(struct work_struct *work) 111 + { 112 + struct vmw_display_unit *du = 113 + container_of(work, struct vmw_display_unit, vkms.crc_generator_work); 114 + struct drm_crtc *crtc = &du->crtc; 115 + struct vmw_private *vmw = vmw_priv(crtc->dev); 116 + bool crc_pending; 117 + u64 frame_start, frame_end; 118 + u32 crc32 = 0; 119 + struct vmw_surface *surf = 0; 120 + int ret; 121 + 122 + spin_lock_irq(&du->vkms.crc_state_lock); 123 + crc_pending = du->vkms.crc_pending; 124 + spin_unlock_irq(&du->vkms.crc_state_lock); 125 + 126 + /* 127 + * We raced with the vblank hrtimer and previous work already computed 128 + * the crc, nothing to do. 129 + */ 130 + if (!crc_pending) 131 + return; 132 + 133 + spin_lock_irq(&du->vkms.crc_state_lock); 134 + surf = du->vkms.surface; 135 + spin_unlock_irq(&du->vkms.crc_state_lock); 136 + 137 + if (vmw_surface_sync(vmw, surf)) { 138 + drm_warn(crtc->dev, "CRC worker wasn't able to sync the crc surface!\n"); 139 + return; 140 + } 141 + 142 + ret = compute_crc(crtc, surf, &crc32); 143 + if (ret) 144 + return; 145 + 146 + spin_lock_irq(&du->vkms.crc_state_lock); 147 + frame_start = du->vkms.frame_start; 148 + frame_end = du->vkms.frame_end; 149 + crc_pending = du->vkms.crc_pending; 150 + du->vkms.frame_start = 0; 151 + du->vkms.frame_end = 0; 152 + du->vkms.crc_pending = false; 153 + spin_unlock_irq(&du->vkms.crc_state_lock); 154 + 155 + /* 156 + * The worker can fall behind the vblank hrtimer, make sure we catch up. 
157 + */ 158 + while (frame_start <= frame_end) 159 + drm_crtc_add_crc_entry(crtc, true, frame_start++, &crc32); 160 + } 161 + 162 + static enum hrtimer_restart 163 + vmw_vkms_vblank_simulate(struct hrtimer *timer) 164 + { 165 + struct vmw_display_unit *du = container_of(timer, struct vmw_display_unit, vkms.timer); 166 + struct drm_crtc *crtc = &du->crtc; 167 + struct vmw_private *vmw = vmw_priv(crtc->dev); 168 + struct vmw_surface *surf = NULL; 169 + u64 ret_overrun; 170 + bool locked, ret; 171 + 172 + ret_overrun = hrtimer_forward_now(&du->vkms.timer, 173 + du->vkms.period_ns); 174 + if (ret_overrun != 1) 175 + drm_dbg_driver(crtc->dev, "vblank timer missed %lld frames.\n", 176 + ret_overrun - 1); 177 + 178 + locked = vmw_vkms_vblank_trylock(crtc); 179 + ret = drm_crtc_handle_vblank(crtc); 180 + WARN_ON(!ret); 181 + if (!locked) 182 + return HRTIMER_RESTART; 183 + surf = du->vkms.surface; 184 + vmw_vkms_unlock(crtc); 185 + 186 + if (du->vkms.crc_enabled && surf) { 187 + u64 frame = drm_crtc_accurate_vblank_count(crtc); 188 + 189 + spin_lock(&du->vkms.crc_state_lock); 190 + if (!du->vkms.crc_pending) 191 + du->vkms.frame_start = frame; 192 + else 193 + drm_dbg_driver(crtc->dev, 194 + "crc worker falling behind, frame_start: %llu, frame_end: %llu\n", 195 + du->vkms.frame_start, frame); 196 + du->vkms.frame_end = frame; 197 + du->vkms.crc_pending = true; 198 + spin_unlock(&du->vkms.crc_state_lock); 199 + 200 + ret = queue_work(vmw->crc_workq, &du->vkms.crc_generator_work); 201 + if (!ret) 202 + drm_dbg_driver(crtc->dev, "Composer worker already queued\n"); 203 + } 204 + 205 + return HRTIMER_RESTART; 206 + } 207 + 208 + void 209 + vmw_vkms_init(struct vmw_private *vmw) 210 + { 211 + char buffer[64]; 212 + const size_t max_buf_len = sizeof(buffer) - 1; 213 + size_t buf_len = max_buf_len; 214 + int ret; 215 + 216 + vmw->vkms_enabled = false; 217 + 218 + ret = vmw_host_get_guestinfo(GUESTINFO_VBLANK, buffer, &buf_len); 219 + if (ret || buf_len > max_buf_len) 220 + 
return; 221 + buffer[buf_len] = '\0'; 222 + 223 + ret = kstrtobool(buffer, &vmw->vkms_enabled); 224 + if (!ret && vmw->vkms_enabled) { 225 + ret = drm_vblank_init(&vmw->drm, VMWGFX_NUM_DISPLAY_UNITS); 226 + vmw->vkms_enabled = (ret == 0); 227 + } 228 + 229 + vmw->crc_workq = alloc_ordered_workqueue("vmwgfx_crc_generator", 0); 230 + if (!vmw->crc_workq) { 231 + drm_warn(&vmw->drm, "crc workqueue allocation failed. Disabling vkms."); 232 + vmw->vkms_enabled = false; 233 + } 234 + if (vmw->vkms_enabled) 235 + drm_info(&vmw->drm, "VKMS enabled\n"); 236 + } 237 + 238 + void 239 + vmw_vkms_cleanup(struct vmw_private *vmw) 240 + { 241 + destroy_workqueue(vmw->crc_workq); 242 + } 243 + 244 + bool 245 + vmw_vkms_get_vblank_timestamp(struct drm_crtc *crtc, 246 + int *max_error, 247 + ktime_t *vblank_time, 248 + bool in_vblank_irq) 249 + { 250 + struct drm_device *dev = crtc->dev; 251 + struct vmw_private *vmw = vmw_priv(dev); 252 + unsigned int pipe = crtc->index; 253 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 254 + struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 255 + 256 + if (!vmw->vkms_enabled) 257 + return false; 258 + 259 + if (!READ_ONCE(vblank->enabled)) { 260 + *vblank_time = ktime_get(); 261 + return true; 262 + } 263 + 264 + *vblank_time = READ_ONCE(du->vkms.timer.node.expires); 265 + 266 + if (WARN_ON(*vblank_time == vblank->time)) 267 + return true; 268 + 269 + /* 270 + * To prevent races we roll the hrtimer forward before we do any 271 + * interrupt processing - this is how real hw works (the interrupt is 272 + * only generated after all the vblank registers are updated) and what 273 + * the vblank core expects. Therefore we need to always correct the 274 + * timestamp by one frame. 
275 + */ 276 + *vblank_time -= du->vkms.period_ns; 277 + 278 + return true; 279 + } 280 + 281 + int 282 + vmw_vkms_enable_vblank(struct drm_crtc *crtc) 283 + { 284 + struct drm_device *dev = crtc->dev; 285 + struct vmw_private *vmw = vmw_priv(dev); 286 + unsigned int pipe = drm_crtc_index(crtc); 287 + struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; 288 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 289 + 290 + if (!vmw->vkms_enabled) 291 + return -EINVAL; 292 + 293 + drm_calc_timestamping_constants(crtc, &crtc->mode); 294 + 295 + hrtimer_init(&du->vkms.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 296 + du->vkms.timer.function = &vmw_vkms_vblank_simulate; 297 + du->vkms.period_ns = ktime_set(0, vblank->framedur_ns); 298 + hrtimer_start(&du->vkms.timer, du->vkms.period_ns, HRTIMER_MODE_REL); 299 + 300 + return 0; 301 + } 302 + 303 + void 304 + vmw_vkms_disable_vblank(struct drm_crtc *crtc) 305 + { 306 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 307 + struct vmw_private *vmw = vmw_priv(crtc->dev); 308 + 309 + if (!vmw->vkms_enabled) 310 + return; 311 + 312 + hrtimer_cancel(&du->vkms.timer); 313 + du->vkms.surface = NULL; 314 + du->vkms.period_ns = ktime_set(0, 0); 315 + } 316 + 317 + enum vmw_vkms_lock_state { 318 + VMW_VKMS_LOCK_UNLOCKED = 0, 319 + VMW_VKMS_LOCK_MODESET = 1, 320 + VMW_VKMS_LOCK_VBLANK = 2 321 + }; 322 + 323 + void 324 + vmw_vkms_crtc_init(struct drm_crtc *crtc) 325 + { 326 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 327 + 328 + atomic_set(&du->vkms.atomic_lock, VMW_VKMS_LOCK_UNLOCKED); 329 + spin_lock_init(&du->vkms.crc_state_lock); 330 + 331 + INIT_WORK(&du->vkms.crc_generator_work, crc_generate_worker); 332 + du->vkms.surface = NULL; 333 + } 334 + 335 + void 336 + vmw_vkms_crtc_cleanup(struct drm_crtc *crtc) 337 + { 338 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 339 + 340 + WARN_ON(work_pending(&du->vkms.crc_generator_work)); 341 + hrtimer_cancel(&du->vkms.timer); 342 + } 343 + 344 + void 345 + 
vmw_vkms_crtc_atomic_begin(struct drm_crtc *crtc, 346 + struct drm_atomic_state *state) 347 + { 348 + struct vmw_private *vmw = vmw_priv(crtc->dev); 349 + 350 + if (vmw->vkms_enabled) 351 + vmw_vkms_modeset_lock(crtc); 352 + } 353 + 354 + void 355 + vmw_vkms_crtc_atomic_flush(struct drm_crtc *crtc, 356 + struct drm_atomic_state *state) 357 + { 358 + unsigned long flags; 359 + struct vmw_private *vmw = vmw_priv(crtc->dev); 360 + 361 + if (!vmw->vkms_enabled) 362 + return; 363 + 364 + if (crtc->state->event) { 365 + spin_lock_irqsave(&crtc->dev->event_lock, flags); 366 + 367 + if (drm_crtc_vblank_get(crtc) != 0) 368 + drm_crtc_send_vblank_event(crtc, crtc->state->event); 369 + else 370 + drm_crtc_arm_vblank_event(crtc, crtc->state->event); 371 + 372 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 373 + 374 + crtc->state->event = NULL; 375 + } 376 + 377 + vmw_vkms_unlock(crtc); 378 + } 379 + 380 + void 381 + vmw_vkms_crtc_atomic_enable(struct drm_crtc *crtc, 382 + struct drm_atomic_state *state) 383 + { 384 + struct vmw_private *vmw = vmw_priv(crtc->dev); 385 + 386 + if (vmw->vkms_enabled) 387 + drm_crtc_vblank_on(crtc); 388 + } 389 + 390 + void 391 + vmw_vkms_crtc_atomic_disable(struct drm_crtc *crtc, 392 + struct drm_atomic_state *state) 393 + { 394 + struct vmw_private *vmw = vmw_priv(crtc->dev); 395 + 396 + if (vmw->vkms_enabled) 397 + drm_crtc_vblank_off(crtc); 398 + } 399 + 400 + static bool 401 + is_crc_supported(struct drm_crtc *crtc) 402 + { 403 + struct vmw_private *vmw = vmw_priv(crtc->dev); 404 + 405 + if (!vmw->vkms_enabled) 406 + return false; 407 + 408 + if (vmw->active_display_unit != vmw_du_screen_target) 409 + return false; 410 + 411 + return true; 412 + } 413 + 414 + static const char * const pipe_crc_sources[] = {"auto"}; 415 + 416 + static int 417 + crc_parse_source(const char *src_name, 418 + bool *enabled) 419 + { 420 + int ret = 0; 421 + 422 + if (!src_name) { 423 + *enabled = false; 424 + } else if (strcmp(src_name, "auto") == 0) { 
425 + *enabled = true; 426 + } else { 427 + *enabled = false; 428 + ret = -EINVAL; 429 + } 430 + 431 + return ret; 432 + } 433 + 434 + const char *const * 435 + vmw_vkms_get_crc_sources(struct drm_crtc *crtc, 436 + size_t *count) 437 + { 438 + *count = 0; 439 + if (!is_crc_supported(crtc)) 440 + return NULL; 441 + 442 + *count = ARRAY_SIZE(pipe_crc_sources); 443 + return pipe_crc_sources; 444 + } 445 + 446 + int 447 + vmw_vkms_verify_crc_source(struct drm_crtc *crtc, 448 + const char *src_name, 449 + size_t *values_cnt) 450 + { 451 + bool enabled; 452 + 453 + if (!is_crc_supported(crtc)) 454 + return -EINVAL; 455 + 456 + if (crc_parse_source(src_name, &enabled) < 0) { 457 + drm_dbg_driver(crtc->dev, "unknown source '%s'\n", src_name); 458 + return -EINVAL; 459 + } 460 + 461 + *values_cnt = 1; 462 + 463 + return 0; 464 + } 465 + 466 + int 467 + vmw_vkms_set_crc_source(struct drm_crtc *crtc, 468 + const char *src_name) 469 + { 470 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 471 + bool enabled, prev_enabled, locked; 472 + int ret; 473 + 474 + if (!is_crc_supported(crtc)) 475 + return -EINVAL; 476 + 477 + ret = crc_parse_source(src_name, &enabled); 478 + 479 + if (enabled) 480 + drm_crtc_vblank_get(crtc); 481 + 482 + locked = vmw_vkms_modeset_lock_relaxed(crtc); 483 + prev_enabled = du->vkms.crc_enabled; 484 + du->vkms.crc_enabled = enabled; 485 + if (locked) 486 + vmw_vkms_unlock(crtc); 487 + 488 + if (prev_enabled) 489 + drm_crtc_vblank_put(crtc); 490 + 491 + return ret; 492 + } 493 + 494 + void 495 + vmw_vkms_set_crc_surface(struct drm_crtc *crtc, 496 + struct vmw_surface *surf) 497 + { 498 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 499 + struct vmw_private *vmw = vmw_priv(crtc->dev); 500 + 501 + if (vmw->vkms_enabled) { 502 + WARN_ON(atomic_read(&du->vkms.atomic_lock) != VMW_VKMS_LOCK_MODESET); 503 + du->vkms.surface = surf; 504 + } 505 + } 506 + 507 + /** 508 + * vmw_vkms_lock_max_wait_ns - Return the max wait for the vkms lock 509 + * @du: 
The vmw_display_unit from which to grab the vblank timings 510 + * 511 + * Returns the maximum wait time used to acquire the vkms lock. By 512 + * default uses the duration of a single frame, or 1/60th of a second 513 + * if vblank was not initialized for the display unit. 514 + */ 515 + static inline u64 516 + vmw_vkms_lock_max_wait_ns(struct vmw_display_unit *du) 517 + { 518 + s64 nsecs = ktime_to_ns(du->vkms.period_ns); 519 + 520 + return (nsecs > 0) ? nsecs : 16666666; 521 + } 522 + 523 + /** 524 + * vmw_vkms_modeset_lock - Protects access to crtc during modeset 525 + * @crtc: The crtc to lock for vkms 526 + * 527 + * This function prevents the VKMS timers/callbacks from being called 528 + * while a modeset operation is in progress. We don't want the callbacks, 529 + * e.g. the vblank simulator, to be trying to access incomplete state, 530 + * so we need to make sure they execute only when the modeset has 531 + * finished. 532 + * 533 + * Normally this would have been done with a spinlock, but locking the 534 + * entire atomic modeset with vmwgfx is impossible because kms prepare 535 + * executes non-atomic ops (e.g. vmw_validation_prepare holds a mutex to 536 + * guard various bits of state), which means that we need to synchronize 537 + * atomic context (the vblank handler) with the non-atomic entirety 538 + * of kms - so use an atomic_t to track which part of vkms has access 539 + * to the basic vkms state. 
540 + */ 541 + void 542 + vmw_vkms_modeset_lock(struct drm_crtc *crtc) 543 + { 544 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 545 + const u64 nsecs_delay = 10; 546 + const u64 MAX_NSECS_DELAY = vmw_vkms_lock_max_wait_ns(du); 547 + u64 total_delay = 0; 548 + int ret; 549 + 550 + do { 551 + ret = atomic_cmpxchg(&du->vkms.atomic_lock, 552 + VMW_VKMS_LOCK_UNLOCKED, 553 + VMW_VKMS_LOCK_MODESET); 554 + if (ret == VMW_VKMS_LOCK_UNLOCKED || total_delay >= MAX_NSECS_DELAY) 555 + break; 556 + ndelay(nsecs_delay); 557 + total_delay += nsecs_delay; 558 + } while (1); 559 + 560 + if (total_delay >= MAX_NSECS_DELAY) { 561 + drm_warn(crtc->dev, "VKMS lock expired! total_delay = %lld, ret = %d, cur = %d\n", 562 + total_delay, ret, atomic_read(&du->vkms.atomic_lock)); 563 + } 564 + } 565 + 566 + /** 567 + * vmw_vkms_modeset_lock_relaxed - Protects access to crtc during modeset 568 + * @crtc: The crtc to lock for vkms 569 + * 570 + * Much like vmw_vkms_modeset_lock except that when the crtc is currently 571 + * in a modeset it will return immediately. 572 + * 573 + * Returns true if actually locked vkms to modeset or false otherwise. 
574 + */ 575 + bool 576 + vmw_vkms_modeset_lock_relaxed(struct drm_crtc *crtc) 577 + { 578 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 579 + const u64 nsecs_delay = 10; 580 + const u64 MAX_NSECS_DELAY = vmw_vkms_lock_max_wait_ns(du); 581 + u64 total_delay = 0; 582 + int ret; 583 + 584 + do { 585 + ret = atomic_cmpxchg(&du->vkms.atomic_lock, 586 + VMW_VKMS_LOCK_UNLOCKED, 587 + VMW_VKMS_LOCK_MODESET); 588 + if (ret == VMW_VKMS_LOCK_UNLOCKED || 589 + ret == VMW_VKMS_LOCK_MODESET || 590 + total_delay >= MAX_NSECS_DELAY) 591 + break; 592 + ndelay(nsecs_delay); 593 + total_delay += nsecs_delay; 594 + } while (1); 595 + 596 + if (total_delay >= MAX_NSECS_DELAY) { 597 + drm_warn(crtc->dev, "VKMS relaxed lock expired!\n"); 598 + return false; 599 + } 600 + 601 + return ret == VMW_VKMS_LOCK_UNLOCKED; 602 + } 603 + 604 + /** 605 + * vmw_vkms_vblank_trylock - Protects access to crtc during vblank 606 + * @crtc: The crtc to lock for vkms 607 + * 608 + * Tries to lock vkms for vblank, returns immediately. 609 + * 610 + * Returns true if locked vkms to vblank or false otherwise. 611 + */ 612 + bool 613 + vmw_vkms_vblank_trylock(struct drm_crtc *crtc) 614 + { 615 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 616 + u32 ret; 617 + 618 + ret = atomic_cmpxchg(&du->vkms.atomic_lock, 619 + VMW_VKMS_LOCK_UNLOCKED, 620 + VMW_VKMS_LOCK_VBLANK); 621 + 622 + return ret == VMW_VKMS_LOCK_UNLOCKED; 623 + } 624 + 625 + void 626 + vmw_vkms_unlock(struct drm_crtc *crtc) 627 + { 628 + struct vmw_display_unit *du = vmw_crtc_to_du(crtc); 629 + 630 + /* Release flag; mark it as unlocked. */ 631 + atomic_set(&du->vkms.atomic_lock, VMW_VKMS_LOCK_UNLOCKED); 632 + }
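The vmw_vkms_modeset_lock()/vmw_vkms_vblank_trylock()/vmw_vkms_unlock() trio above arbitrates between atomic context (the vblank hrtimer) and the non-atomic modeset path with a single tri-state atomic word and cmpxchg. A userspace sketch of that scheme using C11 atomics (the kernel's atomic_cmpxchg() maps onto atomic_compare_exchange_strong() here; the bounded ndelay() spin of the real lock is deliberately omitted):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Same three states as the enum in vmwgfx_vkms.c. */
enum lock_state {
	LOCK_UNLOCKED = 0,
	LOCK_MODESET  = 1,
	LOCK_VBLANK   = 2,
};

/* One lock word per display unit, like du->vkms.atomic_lock. */
static atomic_int lock_word = LOCK_UNLOCKED;

/* Vblank side: never blocks, mirrors vmw_vkms_vblank_trylock().
 * Returns true only if we moved UNLOCKED -> VBLANK ourselves. */
static bool vblank_trylock(void)
{
	int expected = LOCK_UNLOCKED;

	return atomic_compare_exchange_strong(&lock_word, &expected,
					      LOCK_VBLANK);
}

/* Modeset side, relaxed variant: like vmw_vkms_modeset_lock_relaxed()
 * it returns true only when this caller actually took ownership; if the
 * word was already LOCK_MODESET (or held by vblank, since the spin is
 * omitted from this sketch) it returns false without waiting. */
static bool modeset_lock_relaxed(void)
{
	int expected = LOCK_UNLOCKED;

	atomic_compare_exchange_strong(&lock_word, &expected, LOCK_MODESET);
	return expected == LOCK_UNLOCKED; /* true iff the CAS won */
}

/* Mirrors vmw_vkms_unlock(): release by storing UNLOCKED. */
static void unlock(void)
{
	atomic_store(&lock_word, LOCK_UNLOCKED);
}
```

The point of the tri-state word is that the vblank handler can only ever trylock: if a modeset holds the word, the timer skips touching the surface pointer for that tick instead of sleeping in atomic context.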
+75
drivers/gpu/drm/vmwgfx/vmwgfx_vkms.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + /************************************************************************** 3 + * 4 + * Copyright (c) 2024 Broadcom. All Rights Reserved. The term 5 + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. 6 + * 7 + * Permission is hereby granted, free of charge, to any person obtaining a 8 + * copy of this software and associated documentation files (the 9 + * "Software"), to deal in the Software without restriction, including 10 + * without limitation the rights to use, copy, modify, merge, publish, 11 + * distribute, sub license, and/or sell copies of the Software, and to 12 + * permit persons to whom the Software is furnished to do so, subject to 13 + * the following conditions: 14 + * 15 + * The above copyright notice and this permission notice (including the 16 + * next paragraph) shall be included in all copies or substantial portions 17 + * of the Software. 18 + * 19 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 22 + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 23 + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 24 + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 25 + * USE OR OTHER DEALINGS IN THE SOFTWARE. 
26 + * 27 + **************************************************************************/ 28 + 29 + #ifndef VMWGFX_VKMS_H_ 30 + #define VMWGFX_VKMS_H_ 31 + 32 + #include <linux/hrtimer_types.h> 33 + #include <linux/types.h> 34 + 35 + struct drm_atomic_state; 36 + struct drm_crtc; 37 + struct vmw_private; 38 + struct vmw_surface; 39 + 40 + void vmw_vkms_init(struct vmw_private *vmw); 41 + void vmw_vkms_cleanup(struct vmw_private *vmw); 42 + 43 + void vmw_vkms_modeset_lock(struct drm_crtc *crtc); 44 + bool vmw_vkms_modeset_lock_relaxed(struct drm_crtc *crtc); 45 + bool vmw_vkms_vblank_trylock(struct drm_crtc *crtc); 46 + void vmw_vkms_unlock(struct drm_crtc *crtc); 47 + 48 + bool vmw_vkms_get_vblank_timestamp(struct drm_crtc *crtc, 49 + int *max_error, 50 + ktime_t *vblank_time, 51 + bool in_vblank_irq); 52 + int vmw_vkms_enable_vblank(struct drm_crtc *crtc); 53 + void vmw_vkms_disable_vblank(struct drm_crtc *crtc); 54 + 55 + void vmw_vkms_crtc_init(struct drm_crtc *crtc); 56 + void vmw_vkms_crtc_cleanup(struct drm_crtc *crtc); 57 + void vmw_vkms_crtc_atomic_begin(struct drm_crtc *crtc, 58 + struct drm_atomic_state *state); 59 + void vmw_vkms_crtc_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *state); 60 + void vmw_vkms_crtc_atomic_enable(struct drm_crtc *crtc, 61 + struct drm_atomic_state *state); 62 + void vmw_vkms_crtc_atomic_disable(struct drm_crtc *crtc, 63 + struct drm_atomic_state *state); 64 + 65 + const char *const *vmw_vkms_get_crc_sources(struct drm_crtc *crtc, 66 + size_t *count); 67 + int vmw_vkms_verify_crc_source(struct drm_crtc *crtc, 68 + const char *src_name, 69 + size_t *values_cnt); 70 + int vmw_vkms_set_crc_source(struct drm_crtc *crtc, 71 + const char *src_name); 72 + void vmw_vkms_set_crc_surface(struct drm_crtc *crtc, 73 + struct vmw_surface *surf); 74 + 75 + #endif
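compute_crc() in vmwgfx_vkms.c above feeds the mapped surface to crc32_le() one row pitch at a time, seeded with 0. That loop can be sketched in userspace with a bitwise equivalent of the kernel's crc32_le() (reflected polynomial 0xedb88320, caller-supplied seed, no final xor; the kernel's table-driven version computes the same function):

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise equivalent of the kernel's crc32_le(): reflected polynomial
 * 0xedb88320, seed passed in by the caller, no final xor applied. */
static uint32_t crc32_le_update(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1u));
	}
	return crc;
}

/* Row loop as in compute_crc(): walk a mapped surface one pitch at a
 * time, accumulating the CRC across rows. blocks_height and
 * row_pitch_bytes stand in for the SVGA3d block math in the driver. */
static uint32_t surface_crc(const uint8_t *mapped, uint32_t blocks_height,
			    uint32_t row_pitch_bytes)
{
	uint32_t crc = 0; /* the vkms code seeds with 0, as above */

	for (uint32_t y = 0; y < blocks_height; y++) {
		crc = crc32_le_update(crc, mapped, row_pitch_bytes);
		mapped += row_pitch_bytes;
	}
	return crc;
}
```

Because the CRC is accumulated row by row from the same seed, a surface whose rows are stored contiguously hashes identically to one pass over the whole mapping; the per-row walk only matters when the pitch exceeds the visible width.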
+3 -2
include/drm/drm_displayid.h drivers/gpu/drm/drm_displayid_internal.h
··· 19 19 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 20 * OTHER DEALINGS IN THE SOFTWARE. 21 21 */ 22 - #ifndef DRM_DISPLAYID_H 23 - #define DRM_DISPLAYID_H 22 + 23 + #ifndef __DRM_DISPLAYID_INTERNAL_H__ 24 + #define __DRM_DISPLAYID_INTERNAL_H__ 24 25 25 26 #include <linux/types.h> 26 27 #include <linux/bits.h>
+27 -14
include/drm/drm_edid.h
··· 30 30 struct drm_device; 31 31 struct drm_display_mode; 32 32 struct drm_edid; 33 + struct drm_printer; 33 34 struct hdmi_avi_infoframe; 34 35 struct hdmi_vendor_infoframe; 35 36 struct i2c_adapter; ··· 273 272 #define DRM_EDID_DSC_MAX_SLICES 0xf 274 273 #define DRM_EDID_DSC_TOTAL_CHUNK_KBYTES 0x3f 275 274 275 + struct drm_edid_product_id { 276 + __be16 manufacturer_name; 277 + __le16 product_code; 278 + __le32 serial_number; 279 + u8 week_of_manufacture; 280 + u8 year_of_manufacture; 281 + } __packed; 282 + 276 283 struct edid { 277 284 u8 header[8]; 278 285 /* Vendor & product info */ 279 - u8 mfg_id[2]; 280 - u8 prod_code[2]; 281 - u32 serial; /* FIXME: byte order */ 282 - u8 mfg_week; 283 - u8 mfg_year; 286 + union { 287 + struct drm_edid_product_id product_id; 288 + struct { 289 + u8 mfg_id[2]; 290 + u8 prod_code[2]; 291 + u32 serial; /* FIXME: byte order */ 292 + u8 mfg_week; 293 + u8 mfg_year; 294 + } __packed; 295 + } __packed; 284 296 /* EDID version */ 285 297 u8 version; 286 298 u8 revision; ··· 347 333 int drm_edid_to_speaker_allocation(const struct edid *edid, u8 **sadb); 348 334 int drm_av_sync_delay(struct drm_connector *connector, 349 335 const struct drm_display_mode *mode); 350 - 351 - bool drm_edid_are_equal(const struct edid *edid1, const struct edid *edid2); 352 336 353 337 int 354 338 drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame, ··· 429 417 void *data); 430 418 struct edid *drm_get_edid(struct drm_connector *connector, 431 419 struct i2c_adapter *adapter); 432 - const struct drm_edid *drm_edid_read_base_block(struct i2c_adapter *adapter); 433 - u32 drm_edid_get_panel_id(const struct drm_edid *drm_edid); 434 - bool drm_edid_match(const struct drm_edid *drm_edid, 435 - const struct drm_edid_ident *ident); 436 420 struct edid *drm_get_edid_switcheroo(struct drm_connector *connector, 437 421 struct i2c_adapter *adapter); 438 422 struct edid *drm_edid_duplicate(const struct edid *edid); ··· 468 460 const struct 
drm_edid *drm_edid_read_custom(struct drm_connector *connector, 469 461 int (*read_block)(void *context, u8 *buf, unsigned int block, size_t len), 470 462 void *context); 463 + const struct drm_edid *drm_edid_read_base_block(struct i2c_adapter *adapter); 471 464 const struct drm_edid *drm_edid_read_switcheroo(struct drm_connector *connector, 472 465 struct i2c_adapter *adapter); 473 466 int drm_edid_connector_update(struct drm_connector *connector, 474 467 const struct drm_edid *edid); 475 468 int drm_edid_connector_add_modes(struct drm_connector *connector); 476 469 bool drm_edid_is_digital(const struct drm_edid *drm_edid); 477 - 478 - const u8 *drm_find_edid_extension(const struct drm_edid *drm_edid, 479 - int ext_id, int *ext_index); 470 + void drm_edid_get_product_id(const struct drm_edid *drm_edid, 471 + struct drm_edid_product_id *id); 472 + void drm_edid_print_product_id(struct drm_printer *p, 473 + const struct drm_edid_product_id *id, bool raw); 474 + u32 drm_edid_get_panel_id(const struct drm_edid *drm_edid); 475 + bool drm_edid_match(const struct drm_edid *drm_edid, 476 + const struct drm_edid_ident *ident); 480 477 481 478 #endif /* __DRM_EDID_H__ */
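The new struct drm_edid_product_id above makes the on-wire byte order explicit: the PNP manufacturer name is big-endian (__be16) while product code and serial number are little-endian. The manufacturer field packs three 5-bit letters with 'A' encoded as 1 into bits 14..0 (bit 15 is reserved). A decoding sketch, assuming the value has already been converted to host order with be16_to_cpu() (the helper name here is ours, not a drm_edid.h API):

```c
#include <stdint.h>

/* Unpack a host-order EDID manufacturer value into a 3-letter PNP
 * vendor ID: three 5-bit fields, each 1..26 mapping to 'A'..'Z'
 * ('@' is the character before 'A', so '@' + 1 == 'A'). */
static void edid_decode_mfg_id(uint16_t mfg, char vendor[4])
{
	vendor[0] = '@' + ((mfg >> 10) & 0x1f);
	vendor[1] = '@' + ((mfg >> 5) & 0x1f);
	vendor[2] = '@' + (mfg & 0x1f);
	vendor[3] = '\0';
}
```

For example, the well-known value 0x22F0 unpacks to the fields 8, 23, 16, i.e. the vendor ID "HWP". Keeping the struct fields typed __be16/__le16 lets sparse catch code that forgets these conversions.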
+5
include/drm/drm_fb_dma_helper.h
··· 6 6 7 7 struct drm_device; 8 8 struct drm_framebuffer; 9 + struct drm_plane; 9 10 struct drm_plane_state; 11 + struct drm_scanout_buffer; 10 12 11 13 struct drm_gem_dma_object *drm_fb_dma_get_gem_obj(struct drm_framebuffer *fb, 12 14 unsigned int plane); ··· 20 18 void drm_fb_dma_sync_non_coherent(struct drm_device *drm, 21 19 struct drm_plane_state *old_state, 22 20 struct drm_plane_state *state); 21 + 22 + int drm_fb_dma_get_scanout_buffer(struct drm_plane *plane, 23 + struct drm_scanout_buffer *sb); 23 24 24 25 #endif 25 26
+12 -3
include/drm/drm_mipi_dsi.h
··· 226 226 return -EINVAL; 227 227 } 228 228 229 + enum mipi_dsi_compression_algo { 230 + MIPI_DSI_COMPRESSION_DSC = 0, 231 + MIPI_DSI_COMPRESSION_VENDOR = 3, 232 + /* other two values are reserved, DSI 1.3 */ 233 + }; 234 + 229 235 struct mipi_dsi_device * 230 236 mipi_dsi_device_register_full(struct mipi_dsi_host *host, 231 237 const struct mipi_dsi_device_info *info); ··· 247 241 int mipi_dsi_turn_on_peripheral(struct mipi_dsi_device *dsi); 248 242 int mipi_dsi_set_maximum_return_packet_size(struct mipi_dsi_device *dsi, 249 243 u16 value); 250 - ssize_t mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable); 251 - ssize_t mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi, 252 - const struct drm_dsc_picture_parameter_set *pps); 244 + int mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable); 245 + int mipi_dsi_compression_mode_ext(struct mipi_dsi_device *dsi, bool enable, 246 + enum mipi_dsi_compression_algo algo, 247 + unsigned int pps_selector); 248 + int mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi, 249 + const struct drm_dsc_picture_parameter_set *pps); 253 250 254 251 ssize_t mipi_dsi_generic_write(struct mipi_dsi_device *dsi, const void *payload, 255 252 size_t size);
+15
include/drm/drm_mode_config.h
··· 506 506 struct list_head plane_list; 507 507 508 508 /** 509 + * @panic_lock: 510 + * 511 + * Raw spinlock used to protect critical sections of code that access 512 + * the display hardware or modeset software state, which the panic 513 + * printing code must be protected against. See drm_panic_trylock(), 514 + * drm_panic_lock() and drm_panic_unlock(). 515 + */ 516 + struct raw_spinlock panic_lock; 517 + 518 + /** 509 519 * @num_crtc: 510 520 * 511 521 * Number of CRTCs on this device linked with &drm_crtc.head. This is invariant over the lifetime ··· 951 941 * combination. 952 942 */ 953 943 struct drm_property *modifiers_property; 944 + 945 + /** 946 + * @size_hints_property: Plane SIZE_HINTS property. 947 + */ 948 + struct drm_property *size_hints_property; 954 949 955 950 /* cursor size */ 956 951 uint32_t cursor_width, cursor_height;
+39
include/drm/drm_modeset_helper_vtables.h
··· 48 48 * To make this clear all the helper vtables are pulled together in this location here. 49 49 */ 50 50 51 + struct drm_scanout_buffer; 51 52 struct drm_writeback_connector; 52 53 struct drm_writeback_job; 53 54 ··· 1444 1443 */ 1445 1444 void (*atomic_async_update)(struct drm_plane *plane, 1446 1445 struct drm_atomic_state *state); 1446 + 1447 + /** 1448 + * @get_scanout_buffer: 1449 + * 1450 + * Get the current scanout buffer, to display a message with drm_panic. 1451 + * The driver should make only the minimal changes needed to provide a 1452 + * buffer that can be used to display the panic screen. Currently only 1453 + * linear buffers are supported. Non-linear buffer support is on the TODO 1454 + * list. The device &dev.mode_config.panic_lock is taken before calling 1455 + * this function, so you can safely access the &plane.state. 1456 + * It is called from a panic callback, and must follow its restrictions. 1457 + * Please see the documentation of drm_panic_trylock() for an in-depth 1458 + * discussion of what is safe and what is not allowed. 1459 + * This is a best-effort mode, so it is expected that in some complex 1460 + * cases the panic screen won't be displayed. 1461 + * The returned &drm_scanout_buffer.map must be valid if no error code is 1462 + * returned. 1463 + * 1464 + * Return: 1465 + * %0 on success, negative errno on failure. 1466 + */ 1467 + int (*get_scanout_buffer)(struct drm_plane *plane, 1468 + struct drm_scanout_buffer *sb); 1469 + 1470 + /** 1471 + * @panic_flush: 1472 + * 1473 + * It is used by drm_panic, and is called after the panic screen is 1474 + * drawn to the scanout buffer. In this function, the driver 1475 + * can send additional commands to the hardware, to make the scanout 1476 + * buffer visible. 1477 + * It is only called if get_scanout_buffer() returned successfully, and 1478 + * the &dev.mode_config.panic_lock is held during the entire sequence. 1479 + * It is called from a panic callback, and must follow its restrictions. 
1480 + * Please see the documentation of drm_panic_trylock() for an in-depth 1481 + * discussion of what is safe and what is not allowed. 1482 + */ 1483 + void (*panic_flush)(struct drm_plane *plane); 1447 1484 }; 1448 1485 1449 1486 /**
+152
include/drm/drm_panic.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 or MIT */ 2 + #ifndef __DRM_PANIC_H__ 3 + #define __DRM_PANIC_H__ 4 + 5 + #include <linux/module.h> 6 + #include <linux/types.h> 7 + #include <linux/iosys-map.h> 8 + 9 + #include <drm/drm_device.h> 10 + #include <drm/drm_fourcc.h> 11 + /* 12 + * Copyright (c) 2024 Intel 13 + */ 14 + 15 + /** 16 + * struct drm_scanout_buffer - DRM scanout buffer 17 + * 18 + * This structure holds the information necessary for drm_panic to draw the 19 + * panic screen, and display it. 20 + */ 21 + struct drm_scanout_buffer { 22 + /** 23 + * @format: 24 + * 25 + * drm format of the scanout buffer. 26 + */ 27 + const struct drm_format_info *format; 28 + 29 + /** 30 + * @map: 31 + * 32 + * Virtual address of the scanout buffer, either in memory or iomem. 33 + * The scanout buffer should be in linear format, and can be directly 34 + * sent to the display hardware. Tearing is not an issue for the panic 35 + * screen. 36 + */ 37 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 38 + 39 + /** 40 + * @width: Width of the scanout buffer, in pixels. 41 + */ 42 + unsigned int width; 43 + 44 + /** 45 + * @height: Height of the scanout buffer, in pixels. 46 + */ 47 + unsigned int height; 48 + 49 + /** 50 + * @pitch: Length in bytes between the start of two consecutive lines. 51 + */ 52 + unsigned int pitch[DRM_FORMAT_MAX_PLANES]; 53 + }; 54 + 55 + /** 56 + * drm_panic_trylock - try to enter the panic printing critical section 57 + * @dev: struct drm_device 58 + * @flags: unsigned long irq flags you need to pass to the unlock() counterpart 59 + * 60 + * This function must be called by any panic printing code. The panic printing 61 + * attempt must be aborted if the trylock fails. 62 + * 63 + * Panic printing code can make the following assumptions while holding the 64 + * panic lock: 65 + * 66 + * - Anything protected by drm_panic_lock() and drm_panic_unlock() pairs is safe 67 + * to access. 
68 + 69 + * - Furthermore the panic printing code only registers in drm_dev_register() 70 + * and gets removed in drm_dev_unregister(). This allows the panic code to 71 + * safely access any state which is invariant in between these two function 72 + * calls, like the list of planes &drm_mode_config.plane_list or most of the 73 + * struct drm_plane structure. 74 + * 75 + * Specifically thanks to the protection around plane updates in 76 + * drm_atomic_helper_swap_state() the following additional guarantees hold: 77 + * 78 + * - It is safe to dereference the drm_plane.state pointer. 79 + * 80 + * - Anything in struct drm_plane_state or the driver's subclass thereof which 81 + * stays invariant after the atomic check code has finished is safe to access. 82 + * Specifically this includes the reference counted pointers to framebuffer 83 + * and buffer objects. 84 + * 85 + * - Anything set up by &drm_plane_helper_funcs.prepare_fb and cleaned up by 86 + * &drm_plane_helper_funcs.cleanup_fb is safe to access, as long as it stays 87 + * invariant between these two calls. This also means that for drivers using 88 + * dynamic buffer management the framebuffer is pinned, and therefore all 89 + * relevant data structures can be accessed without taking any further locks 90 + * (which would be impossible in panic context anyway). 91 + * 92 + * - Importantly, software and hardware state set up by 93 + * &drm_plane_helper_funcs.begin_fb_access and 94 + * &drm_plane_helper_funcs.end_fb_access is not safe to access. 95 + * 96 + * Drivers must not make any assumptions about the actual state of the hardware, 97 + * unless they explicitly protect these hardware accesses with drm_panic_lock() 98 + * and drm_panic_unlock(). 99 + * 100 + * Return: 101 + * %0 when failing to acquire the raw spinlock, nonzero on success. 
102 + */ 103 + #define drm_panic_trylock(dev, flags) \ 104 + raw_spin_trylock_irqsave(&(dev)->mode_config.panic_lock, flags) 105 + 106 + /** 107 + * drm_panic_lock - protect panic printing relevant state 108 + * @dev: struct drm_device 109 + * @flags: unsigned long irq flags you need to pass to the unlock() counterpart 110 + * 111 + * This function must be called to protect software and hardware state that the 112 + * panic printing code must be able to rely on. The protected sections must be 113 + * as small as possible. It uses the irqsave/irqrestore variant, and can be 114 + * called from an irq handler. Examples include: 115 + * 116 + * - Access to peek/poke or other similar registers, if that is the way the 117 + * driver prints the pixels into the scanout buffer at panic time. 118 + * 119 + * - Updates to pointers like &drm_plane.state, allowing the panic handler to 120 + * safely dereference these. This is done in drm_atomic_helper_swap_state(). 121 + * 122 + * - Any state that isn't invariant and that the driver must be able to access 123 + * during panic printing. 124 + */ 125 + 126 + #define drm_panic_lock(dev, flags) \ 127 + raw_spin_lock_irqsave(&(dev)->mode_config.panic_lock, flags) 128 + 129 + /** 130 + * drm_panic_unlock - end of the panic printing critical section 131 + * @dev: struct drm_device 132 + * @flags: irq flags that were returned when acquiring the lock 133 + * 134 + * Unlocks the raw spinlock acquired by either drm_panic_lock() or 135 + * drm_panic_trylock(). 
136 + */ 137 + #define drm_panic_unlock(dev, flags) \ 138 + raw_spin_unlock_irqrestore(&(dev)->mode_config.panic_lock, flags) 139 + 140 + #ifdef CONFIG_DRM_PANIC 141 + 142 + void drm_panic_register(struct drm_device *dev); 143 + void drm_panic_unregister(struct drm_device *dev); 144 + 145 + #else 146 + 147 + static inline void drm_panic_register(struct drm_device *dev) {} 148 + static inline void drm_panic_unregister(struct drm_device *dev) {} 149 + 150 + #endif 151 + 152 + #endif /* __DRM_PANIC_H__ */
+10
include/drm/drm_plane.h
··· 25 25 26 26 #include <linux/list.h> 27 27 #include <linux/ctype.h> 28 + #include <linux/kmsg_dump.h> 28 29 #include <drm/drm_mode_object.h> 29 30 #include <drm/drm_color_mgmt.h> 30 31 #include <drm/drm_rect.h> ··· 33 32 #include <drm/drm_util.h> 34 33 35 34 struct drm_crtc; 35 + struct drm_plane_size_hint; 36 36 struct drm_printer; 37 37 struct drm_modeset_acquire_ctx; 38 38 ··· 781 779 * @hotspot_y_property: property to set mouse hotspot y offset. 782 780 */ 783 781 struct drm_property *hotspot_y_property; 782 + 783 + /** 784 + * @kmsg_panic: Used to register a panic notifier for this plane 785 + */ 786 + struct kmsg_dumper kmsg_panic; 784 787 }; 785 788 786 789 #define obj_to_plane(x) container_of(x, struct drm_plane, base) ··· 983 976 984 977 int drm_plane_create_scaling_filter_property(struct drm_plane *plane, 985 978 unsigned int supported_filters); 979 + int drm_plane_add_size_hints_property(struct drm_plane *plane, 980 + const struct drm_plane_size_hint *hints, 981 + int num_hints); 986 982 987 983 #endif
+1
include/drm/drm_vblank.h
··· 225 225 wait_queue_head_t work_wait_queue; 226 226 }; 227 227 228 + struct drm_vblank_crtc *drm_crtc_vblank_crtc(struct drm_crtc *crtc); 228 229 int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs); 229 230 bool drm_dev_has_vblank(const struct drm_device *dev); 230 231 u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
+2
include/linux/dma-buf.h
··· 370 370 */ 371 371 struct module *owner; 372 372 373 + #if IS_ENABLED(CONFIG_DEBUG_FS) 373 374 /** @list_node: node for dma_buf accounting and debugging. */ 374 375 struct list_head list_node; 376 + #endif 375 377 376 378 /** @priv: exporter specific private data for this buffer object. */ 377 379 void *priv;
+11
include/uapi/drm/drm_mode.h
··· 866 866 }; 867 867 868 868 /** 869 + * struct drm_plane_size_hint - Plane size hints 870 + * 871 + * The plane SIZE_HINTS property blob contains an 872 + * array of struct drm_plane_size_hint. 873 + */ 874 + struct drm_plane_size_hint { 875 + __u16 width; 876 + __u16 height; 877 + }; 878 + 879 + /** 869 880 * struct hdr_metadata_infoframe - HDR Metadata Infoframe Data. 870 881 * 871 882 * HDR Metadata Infoframe as per CTA 861.G spec. This is expected