
Merge tag 'drm-misc-next-2022-09-09' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.1-rc1:

[airlied - fix sun4i_tv build]

UAPI Changes:
- Hide unregistered connectors from GETCONNECTOR ioctl.
- drm/virtio no longer advertises LINEAR modifier, as it doesn't work.

Cross-subsystem Changes:
- Fix GPF in udmabuf failure path.

Core Changes:
- Rework TTM placement to use intersect/compatible functions (a sketch of the
  new callbacks follows this list).
- Drop legacy DP-MST support.
- More DP-MST related fixes, and move all state into atomic.
- Make DRM_MIPI_DBI select DRM_KMS_HELPER.
- Add audio_infoframe packing for DP.
- Add logging when some atomic check functions fail.
- Assorted documentation updates and fixes.
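
As a reference for the TTM rework above: placement checks move into two new
callbacks in struct ttm_resource_manager_func, matching the amdgpu hunks later
in this diff. A minimal sketch of the shape of such a manager (the my_mgr_*
names are hypothetical, not from the series):

	static bool my_mgr_intersects(struct ttm_resource_manager *man,
				      struct ttm_resource *res,
				      const struct ttm_place *place,
				      size_t size)
	{
		/* Does the existing resource overlap the requested place? */
		return true;	/* conservative: always report an overlap */
	}

	static bool my_mgr_compatible(struct ttm_resource_manager *man,
				      struct ttm_resource *res,
				      const struct ttm_place *place,
				      size_t size)
	{
		/* Is the existing resource already a valid placement? */
		return true;
	}

	static const struct ttm_resource_manager_func my_mgr_func = {
		.alloc      = my_mgr_new,		/* hypothetical */
		.free       = my_mgr_del,		/* hypothetical */
		.intersects = my_mgr_intersects,	/* new in this series */
		.compatible = my_mgr_compatible,	/* new in this series */
	};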

Driver Changes:
- Assorted cleanups and fixes in msm, lcdif, nouveau, virtio,
panel/ilitek, bridge/icn6211, tve200, gma500, bridge/*, panfrost, via,
bochs, qxl, sun4i.
- Add AUO B133UAN02.1, IVO M133NW4J-R3, Innolux N120ACA-EA1 eDP panels.
- Improve DP-MST modeset state handling in amdgpu, nouveau, i915.
- Drop DP-MST support from the radeon driver; it was broken and was the
  only user of legacy DP-MST.
- Handle unplugging better in vc4.
- Simplify drm cmdparser tests.
- Add DP support to ti-sn65dsi86.
- Add MT8195 DP support to mediatek.
- Support RGB565, XRGB64, and ARGB64 formats in vkms.
- Convert sun4i TV support to atomic.
- Refactor vc4/vec TV modesetting, and fix timings.
- Use atomic helpers instead of simple display helpers in ssd130x.

Maintainer Changes:
- Add Douglas Anderson as reviewer for panel-edp.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/a489485b-3ebc-c734-0f80-aed963d89efe@linux.intel.com

+6065 -3091
+9
Documentation/devicetree/bindings/display/bridge/chipone,icn6211.yaml
···
     maxItems: 1
     description: virtual channel number of a DSI peripheral
 
+  clock-names:
+    const: refclk
+
+  clocks:
+    maxItems: 1
+    description: |
+      Optional external clock connected to REF_CLK input.
+      The clock rate must be in 10..154 MHz range.
+
   enable-gpios:
     description: Bridge EN pin, chip is reset when EN is low.
 
+13
Documentation/devicetree/bindings/display/bridge/chrontel,ch7033.yaml
···
   compatible:
     const: chrontel,ch7033
 
+  chrontel,byteswap:
+    $ref: /schemas/types.yaml#/definitions/uint8
+    enum:
+      - 0 # BYTE_SWAP_RGB
+      - 1 # BYTE_SWAP_RBG
+      - 2 # BYTE_SWAP_GRB
+      - 3 # BYTE_SWAP_GBR
+      - 4 # BYTE_SWAP_BRG
+      - 5 # BYTE_SWAP_BGR
+    description: |
+      Set the byteswap value of the bridge. This is optional and if not
+      set value of BYTE_SWAP_BGR is used.
+
   reg:
     maxItems: 1
     description: I2C address of the device
+116
Documentation/devicetree/bindings/display/mediatek/mediatek,dp.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/mediatek/mediatek,dp.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek Display Port Controller
+
+maintainers:
+  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
+  - Jitao shi <jitao.shi@mediatek.com>
+
+description: |
+  MediaTek DP and eDP are different hardwares and there are some features
+  which are not supported for eDP. For example, audio is not supported for
+  eDP. Therefore, we need to use two different compatibles to describe them.
+  In addition, We just need to enable the power domain of DP, so the clock
+  of DP is generated by itself and we are not using other PLL to generate
+  clocks.
+
+properties:
+  compatible:
+    enum:
+      - mediatek,mt8195-dp-tx
+      - mediatek,mt8195-edp-tx
+
+  reg:
+    maxItems: 1
+
+  nvmem-cells:
+    maxItems: 1
+    description: efuse data for display port calibration
+
+  nvmem-cell-names:
+    const: dp_calibration_data
+
+  power-domains:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  ports:
+    $ref: /schemas/graph.yaml#/properties/ports
+    properties:
+      port@0:
+        $ref: /schemas/graph.yaml#/properties/port
+        description: Input endpoint of the controller, usually dp_intf
+
+      port@1:
+        $ref: /schemas/graph.yaml#/$defs/port-base
+        unevaluatedProperties: false
+        description: Output endpoint of the controller
+        properties:
+          endpoint:
+            $ref: /schemas/media/video-interfaces.yaml#
+            unevaluatedProperties: false
+            properties:
+              data-lanes:
+                description: |
+                  number of lanes supported by the hardware.
+                  The possible values:
+                  0       - For 1 lane enabled in IP.
+                  0 1     - For 2 lanes enabled in IP.
+                  0 1 2 3 - For 4 lanes enabled in IP.
+                minItems: 1
+                maxItems: 4
+            required:
+              - data-lanes
+
+    required:
+      - port@0
+      - port@1
+
+  max-linkrate-mhz:
+    enum: [ 1620, 2700, 5400, 8100 ]
+    description: maximum link rate supported by the hardware.
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - ports
+  - max-linkrate-mhz
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/power/mt8195-power.h>
+    dptx@1c600000 {
+        compatible = "mediatek,mt8195-dp-tx";
+        reg = <0x1c600000 0x8000>;
+        power-domains = <&spm MT8195_POWER_DOMAIN_DP_TX>;
+        interrupts = <GIC_SPI 458 IRQ_TYPE_LEVEL_HIGH 0>;
+        max-linkrate-mhz = <8100>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+                dptx_in: endpoint {
+                    remote-endpoint = <&dp_intf0_out>;
+                };
+            };
+            port@1 {
+                reg = <1>;
+                dptx_out: endpoint {
+                    data-lanes = <0 1 2 3>;
+                };
+            };
+        };
+    };
+1 -6
Documentation/gpu/vkms.rst
···
 
 There's lots of plane features we could add support for:
 
-- Clearing primary plane: clear primary plane before plane composition (at the
-  start) for correctness of pixel blend ops. It also guarantees alpha channel
-  is cleared in the target buffer for stable crc. [Good to get started]
-
 - ARGB format on primary plane: blend the primary plane into background with
   translucent alpha.
 
-- Support when the primary plane isn't exactly matching the output size: blend
-  the primary plane into the black background.
+- Add background color KMS property[Good to get started].
 
 - Full alpha blending on all planes.
 
+5
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/display/panel/feiyang,fy07024di26a30d.yaml
 F:	drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
 
+DRM DRIVER FOR GENERIC EDP PANELS
+R:	Douglas Anderson <dianders@chromium.org>
+F:	Documentation/devicetree/bindings/display/panel/panel-edp.yaml
+F:	drivers/gpu/drm/panel/panel-edp.c
+
 DRM DRIVER FOR GENERIC USB DISPLAY
 M:	Noralf Trønnes <noralf@tronnes.org>
 S:	Maintained
+6 -3
drivers/dma-buf/udmabuf.c
···
 {
 	struct udmabuf *ubuf = buf->priv;
 	struct device *dev = ubuf->device->this_device;
+	int ret = 0;
 
 	if (!ubuf->sg) {
 		ubuf->sg = get_sg_table(dev, buf, direction);
-		if (IS_ERR(ubuf->sg))
-			return PTR_ERR(ubuf->sg);
+		if (IS_ERR(ubuf->sg)) {
+			ret = PTR_ERR(ubuf->sg);
+			ubuf->sg = NULL;
+		}
 	} else {
 		dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
 				    direction);
 	}
 
-	return 0;
+	return ret;
 }
 
 static int end_cpu_udmabuf(struct dma_buf *buf,
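
Why this fixes the GPF: begin_cpu_access caches the get_sg_table() result, so
before the change a failed mapping left an ERR_PTR() in ubuf->sg and the next
call dereferenced it. An illustration of the failing sequence (hand-written,
not from the kernel source):

	/* 1st call: mapping fails */
	ubuf->sg = get_sg_table(dev, buf, direction);	/* ERR_PTR(-ENOMEM) */
	/* ubuf->sg is now non-NULL but invalid */

	/* 2nd call: !ubuf->sg is false, so the sync path runs */
	dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
	/* dereferencing the ERR_PTR -> general protection fault */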
+1
drivers/gpu/drm/Kconfig
···
 config DRM_MIPI_DBI
 	tristate
 	depends on DRM
+	select DRM_KMS_HELPER
 
 config DRM_MIPI_DSI
 	bool
+38
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
···
 }
 
 /**
+ * amdgpu_gtt_mgr_intersects - test for intersection
+ *
+ * @man: Our manager object
+ * @res: The resource to test
+ * @place: The place for the new allocation
+ * @size: The size of the new allocation
+ *
+ * Simplified intersection test, only interesting if we need GART or not.
+ */
+static bool amdgpu_gtt_mgr_intersects(struct ttm_resource_manager *man,
+				      struct ttm_resource *res,
+				      const struct ttm_place *place,
+				      size_t size)
+{
+	return !place->lpfn || amdgpu_gtt_mgr_has_gart_addr(res);
+}
+
+/**
+ * amdgpu_gtt_mgr_compatible - test for compatibility
+ *
+ * @man: Our manager object
+ * @res: The resource to test
+ * @place: The place for the new allocation
+ * @size: The size of the new allocation
+ *
+ * Simplified compatibility test.
+ */
+static bool amdgpu_gtt_mgr_compatible(struct ttm_resource_manager *man,
+				      struct ttm_resource *res,
+				      const struct ttm_place *place,
+				      size_t size)
+{
+	return !place->lpfn || amdgpu_gtt_mgr_has_gart_addr(res);
+}
+
+/**
  * amdgpu_gtt_mgr_debug - dump VRAM table
  *
  * @man: TTM memory type manager
···
 static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
 	.alloc = amdgpu_gtt_mgr_new,
 	.free = amdgpu_gtt_mgr_del,
+	.intersects = amdgpu_gtt_mgr_intersects,
+	.compatible = amdgpu_gtt_mgr_compatible,
 	.debug = amdgpu_gtt_mgr_debug
 };
 
+14 -33
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···
 static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 					    const struct ttm_place *place)
 {
-	unsigned long num_pages = bo->resource->num_pages;
 	struct dma_resv_iter resv_cursor;
-	struct amdgpu_res_cursor cursor;
 	struct dma_fence *f;
+
+	if (!amdgpu_bo_is_amdgpu_bo(bo))
+		return ttm_bo_eviction_valuable(bo, place);
 
 	/* Swapout? */
 	if (bo->resource->mem_type == TTM_PL_SYSTEM)
···
 			return false;
 	}
 
-	switch (bo->resource->mem_type) {
-	case AMDGPU_PL_PREEMPT:
-		/* Preemptible BOs don't own system resources managed by the
-		 * driver (pages, VRAM, GART space). They point to resources
-		 * owned by someone else (e.g. pageable memory in user mode
-		 * or a DMABuf). They are used in a preemptible context so we
-		 * can guarantee no deadlocks and good QoS in case of MMU
-		 * notifiers or DMABuf move notifiers from the resource owner.
-		 */
-		return false;
-	case TTM_PL_TT:
-		if (amdgpu_bo_is_amdgpu_bo(bo) &&
-		    amdgpu_bo_encrypted(ttm_to_amdgpu_bo(bo)))
-			return false;
-		return true;
-
-	case TTM_PL_VRAM:
-		/* Check each drm MM node individually */
-		amdgpu_res_first(bo->resource, 0, (u64)num_pages << PAGE_SHIFT,
-				 &cursor);
-		while (cursor.remaining) {
-			if (place->fpfn < PFN_DOWN(cursor.start + cursor.size)
-			    && !(place->lpfn &&
-				 place->lpfn <= PFN_DOWN(cursor.start)))
-				return true;
-
-			amdgpu_res_next(&cursor, cursor.size);
-		}
+	/* Preemptible BOs don't own system resources managed by the
+	 * driver (pages, VRAM, GART space). They point to resources
+	 * owned by someone else (e.g. pageable memory in user mode
+	 * or a DMABuf). They are used in a preemptible context so we
+	 * can guarantee no deadlocks and good QoS in case of MMU
+	 * notifiers or DMABuf move notifiers from the resource owner.
+	 */
+	if (bo->resource->mem_type == AMDGPU_PL_PREEMPT)
 		return false;
 
-	default:
-		break;
-	}
+	if (bo->resource->mem_type == TTM_PL_TT &&
+	    amdgpu_bo_encrypted(ttm_to_amdgpu_bo(bo)))
+		return false;
 
 	return ttm_bo_eviction_valuable(bo, place);
 }
+68
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
···
 }
 
 /**
+ * amdgpu_vram_mgr_intersects - test each drm buddy block for intersection
+ *
+ * @man: TTM memory type manager
+ * @res: The resource to test
+ * @place: The place to test against
+ * @size: Size of the new allocation
+ *
+ * Test each drm buddy block for intersection for eviction decision.
+ */
+static bool amdgpu_vram_mgr_intersects(struct ttm_resource_manager *man,
+				       struct ttm_resource *res,
+				       const struct ttm_place *place,
+				       size_t size)
+{
+	struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
+	struct drm_buddy_block *block;
+
+	/* Check each drm buddy block individually */
+	list_for_each_entry(block, &mgr->blocks, link) {
+		unsigned long fpfn =
+			amdgpu_vram_mgr_block_start(block) >> PAGE_SHIFT;
+		unsigned long lpfn = fpfn +
+			(amdgpu_vram_mgr_block_size(block) >> PAGE_SHIFT);
+
+		if (place->fpfn < lpfn &&
+		    (place->lpfn && place->lpfn > fpfn))
+			return true;
+	}
+
+	return false;
+}
+
+/**
+ * amdgpu_vram_mgr_compatible - test each drm buddy block for compatibility
+ *
+ * @man: TTM memory type manager
+ * @res: The resource to test
+ * @place: The place to test against
+ * @size: Size of the new allocation
+ *
+ * Test each drm buddy block for placement compatibility.
+ */
+static bool amdgpu_vram_mgr_compatible(struct ttm_resource_manager *man,
+				       struct ttm_resource *res,
+				       const struct ttm_place *place,
+				       size_t size)
+{
+	struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
+	struct drm_buddy_block *block;
+
+	/* Check each drm buddy block individually */
+	list_for_each_entry(block, &mgr->blocks, link) {
+		unsigned long fpfn =
+			amdgpu_vram_mgr_block_start(block) >> PAGE_SHIFT;
+		unsigned long lpfn = fpfn +
+			(amdgpu_vram_mgr_block_size(block) >> PAGE_SHIFT);
+
+		if (fpfn < place->fpfn ||
+		    (place->lpfn && lpfn > place->lpfn))
+			return false;
+	}
+
+	return true;
+}
+
+/**
  * amdgpu_vram_mgr_debug - dump VRAM table
  *
  * @man: TTM memory type manager
···
 static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
 	.alloc	= amdgpu_vram_mgr_new,
 	.free	= amdgpu_vram_mgr_del,
+	.intersects = amdgpu_vram_mgr_intersects,
+	.compatible = amdgpu_vram_mgr_compatible,
 	.debug	= amdgpu_vram_mgr_debug
 };
 
+24 -44
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 };
 
 static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {
-	.atomic_commit_tail = amdgpu_dm_atomic_commit_tail
+	.atomic_commit_tail = amdgpu_dm_atomic_commit_tail,
+	.atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
 };
 
 static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
···
 		drm_atomic_get_old_connector_state(state, conn);
 	struct drm_crtc *crtc = new_con_state->crtc;
 	struct drm_crtc_state *new_crtc_state;
+	struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn);
 	int ret;
 
 	trace_amdgpu_dm_connector_atomic_check(new_con_state);
+
+	if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+		ret = drm_dp_mst_root_conn_atomic_check(new_con_state, &aconn->mst_mgr);
+		if (ret < 0)
+			return ret;
+	}
 
 	if (!crtc)
 		return 0;
···
 	const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
 	struct drm_dp_mst_port *mst_port;
+	struct drm_dp_mst_topology_state *mst_state;
 	enum dc_color_depth color_depth;
 	int clock, bpp = 0;
 	bool is_y420 = false;
···
 	if (!crtc_state->connectors_changed && !crtc_state->mode_changed)
 		return 0;
 
+	mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr);
+	if (IS_ERR(mst_state))
+		return PTR_ERR(mst_state);
+
+	if (!mst_state->pbn_div)
+		mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
+
 	if (!state->duplicated) {
 		int max_bpc = conn_state->max_requested_bpc;
 		is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
···
 		clock = adjusted_mode->clock;
 		dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
 	}
-	dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_vcpi_slots(state,
-									   mst_mgr,
-									   mst_port,
-									   dm_new_connector_state->pbn,
-									   dm_mst_get_pbn_divider(aconnector->dc_link));
+
+	dm_new_connector_state->vcpi_slots =
+		drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port,
+					      dm_new_connector_state->pbn);
 	if (dm_new_connector_state->vcpi_slots < 0) {
 		DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots);
 		return dm_new_connector_state->vcpi_slots;
···
 			dm_conn_state->pbn = pbn;
 			dm_conn_state->vcpi_slots = slot_num;
 
-			drm_dp_mst_atomic_enable_dsc(state,
-						     aconnector->port,
-						     dm_conn_state->pbn,
-						     0,
+			drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn,
 						     false);
 			continue;
 		}
 
-		vcpi = drm_dp_mst_atomic_enable_dsc(state,
-						    aconnector->port,
-						    pbn, pbn_div,
-						    true);
+		vcpi = drm_dp_mst_atomic_enable_dsc(state, aconnector->port, pbn, true);
 		if (vcpi < 0)
 			return vcpi;
 
···
 		DRM_ERROR("Waiting for fences timed out!");
 
 	drm_atomic_helper_update_legacy_modeset_state(dev, state);
+	drm_dp_mst_atomic_wait_for_dependencies(state);
 
 	dm_state = dm_atomic_get_new_state(state);
 	if (dm_state && dm_state->context) {
···
 	if (dc_state_temp)
 		dc_release_state(dc_state_temp);
 }
-
 
 static int dm_force_atomic_commit(struct drm_connector *connector)
 {
···
 	struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
 #if defined(CONFIG_DRM_AMD_DC_DCN)
 	struct dsc_mst_fairness_vars vars[MAX_PIPES];
-	struct drm_dp_mst_topology_state *mst_state;
-	struct drm_dp_mst_topology_mgr *mgr;
 #endif
 
 	trace_amdgpu_dm_atomic_check_begin(state);
···
 		lock_and_validation_needed = true;
 	}
 
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-	/* set the slot info for each mst_state based on the link encoding format */
-	for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
-		struct amdgpu_dm_connector *aconnector;
-		struct drm_connector *connector;
-		struct drm_connector_list_iter iter;
-		u8 link_coding_cap;
-
-		if (!mgr->mst_state )
-			continue;
-
-		drm_connector_list_iter_begin(dev, &iter);
-		drm_for_each_connector_iter(connector, &iter) {
-			int id = connector->index;
-
-			if (id == mst_state->mgr->conn_base_id) {
-				aconnector = to_amdgpu_dm_connector(connector);
-				link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-				drm_dp_mst_update_slots(mst_state, link_coding_cap);
-
-				break;
-			}
-		}
-		drm_connector_list_iter_end(&iter);
-
-	}
-#endif
 	/**
 	 * Streams and planes are reset when there are changes that affect
 	 * bandwidth. Anything that affects bandwidth needs to go through
+33 -71
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
···
 #include <linux/acpi.h>
 #include <linux/i2c.h>
 
+#include <drm/drm_atomic.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_edid.h>
···
 	return result;
 }
 
-static void get_payload_table(
-		struct amdgpu_dm_connector *aconnector,
-		struct dp_mst_stream_allocation_table *proposed_table)
+static void
+fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state,
+				   struct amdgpu_dm_connector *aconnector,
+				   struct dc_dp_mst_stream_allocation_table *table)
 {
-	int i;
-	struct drm_dp_mst_topology_mgr *mst_mgr =
-			&aconnector->mst_port->mst_mgr;
+	struct dc_dp_mst_stream_allocation_table new_table = { 0 };
+	struct dc_dp_mst_stream_allocation *sa;
+	struct drm_dp_mst_atomic_payload *payload;
 
-	mutex_lock(&mst_mgr->payload_lock);
+	/* Fill payload info*/
+	list_for_each_entry(payload, &mst_state->payloads, next) {
+		if (payload->delete)
+			continue;
 
-	proposed_table->stream_count = 0;
-
-	/* number of active streams */
-	for (i = 0; i < mst_mgr->max_payloads; i++) {
-		if (mst_mgr->payloads[i].num_slots == 0)
-			break; /* end of vcp_id table */
-
-		ASSERT(mst_mgr->payloads[i].payload_state !=
-				DP_PAYLOAD_DELETE_LOCAL);
-
-		if (mst_mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL ||
-			mst_mgr->payloads[i].payload_state ==
-					DP_PAYLOAD_REMOTE) {
-
-			struct dp_mst_stream_allocation *sa =
-					&proposed_table->stream_allocations[
-						proposed_table->stream_count];
-
-			sa->slot_count = mst_mgr->payloads[i].num_slots;
-			sa->vcp_id = mst_mgr->proposed_vcpis[i]->vcpi;
-			proposed_table->stream_count++;
-		}
+		sa = &new_table.stream_allocations[new_table.stream_count];
+		sa->slot_count = payload->time_slots;
+		sa->vcp_id = payload->vcpi;
+		new_table.stream_count++;
 	}
 
-	mutex_unlock(&mst_mgr->payload_lock);
+	/* Overwrite the old table */
+	*table = new_table;
 }
 
 void dm_helpers_dp_update_branch_info(
···
 bool dm_helpers_dp_mst_write_payload_allocation_table(
 		struct dc_context *ctx,
 		const struct dc_stream_state *stream,
-		struct dp_mst_stream_allocation_table *proposed_table,
+		struct dc_dp_mst_stream_allocation_table *proposed_table,
 		bool enable)
 {
 	struct amdgpu_dm_connector *aconnector;
-	struct dm_connector_state *dm_conn_state;
+	struct drm_dp_mst_topology_state *mst_state;
+	struct drm_dp_mst_atomic_payload *payload;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
-	struct drm_dp_mst_port *mst_port;
-	bool ret;
-	u8 link_coding_cap = DP_8b_10b_ENCODING;
 
 	aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
 	/* Accessing the connector state is required for vcpi_slots allocation
···
 	if (!aconnector || !aconnector->mst_port)
 		return false;
 
-	dm_conn_state = to_dm_connector_state(aconnector->base.state);
-
 	mst_mgr = &aconnector->mst_port->mst_mgr;
-
-	if (!mst_mgr->mst_state)
-		return false;
-
-	mst_port = aconnector->port;
-
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-	link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-#endif
-
-	if (enable) {
-
-		ret = drm_dp_mst_allocate_vcpi(mst_mgr, mst_port,
-					       dm_conn_state->pbn,
-					       dm_conn_state->vcpi_slots);
-		if (!ret)
-			return false;
-
-	} else {
-		drm_dp_mst_reset_vcpi_slots(mst_mgr, mst_port);
-	}
+	mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
 
 	/* It's OK for this to fail */
-	drm_dp_update_payload_part1(mst_mgr, (link_coding_cap == DP_CAP_ANSI_128B132B) ? 0:1);
+	payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
+	if (enable)
+		drm_dp_add_payload_part1(mst_mgr, mst_state, payload);
+	else
+		drm_dp_remove_payload(mst_mgr, mst_state, payload);
 
 	/* mst_mgr->->payloads are VC payload notify MST branch using DPCD or
 	 * AUX message. The sequence is slot 1-63 allocated sequence for each
 	 * stream. AMD ASIC stream slot allocation should follow the same
 	 * sequence. copy DRM MST allocation to dc */
-
-	get_payload_table(aconnector, proposed_table);
+	fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table);
 
 	return true;
 }
···
 		bool enable)
 {
 	struct amdgpu_dm_connector *aconnector;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
-	struct drm_dp_mst_port *mst_port;
+	struct drm_dp_mst_atomic_payload *payload;
 	enum mst_progress_status set_flag = MST_ALLOCATE_NEW_PAYLOAD;
 	enum mst_progress_status clr_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
···
 	if (!aconnector || !aconnector->mst_port)
 		return false;
 
-	mst_port = aconnector->port;
-
 	mst_mgr = &aconnector->mst_port->mst_mgr;
+	mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
 
-	if (!mst_mgr->mst_state)
-		return false;
-
+	payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
 	if (!enable) {
 		set_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
 		clr_flag = MST_ALLOCATE_NEW_PAYLOAD;
 	}
 
-	if (drm_dp_update_payload_part2(mst_mgr)) {
+	if (enable && drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload)) {
 		amdgpu_dm_set_mst_status(&aconnector->mst_status,
 					 set_flag, false);
 	} else {
···
 		amdgpu_dm_set_mst_status(&aconnector->mst_status,
 					 clr_flag, false);
 	}
-
 
-	if (!enable)
-		drm_dp_mst_deallocate_vcpi(mst_mgr, mst_port);
 	return true;
 }
+45 -80
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
···
 }
 
 static int dm_dp_mst_atomic_check(struct drm_connector *connector,
-				struct drm_atomic_state *state)
+				  struct drm_atomic_state *state)
 {
-	struct drm_connector_state *new_conn_state =
-			drm_atomic_get_new_connector_state(state, connector);
-	struct drm_connector_state *old_conn_state =
-			drm_atomic_get_old_connector_state(state, connector);
 	struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
-	struct drm_crtc_state *new_crtc_state;
-	struct drm_dp_mst_topology_mgr *mst_mgr;
-	struct drm_dp_mst_port *mst_port;
+	struct drm_dp_mst_topology_mgr *mst_mgr = &aconnector->mst_port->mst_mgr;
+	struct drm_dp_mst_port *mst_port = aconnector->port;
 
-	mst_port = aconnector->port;
-	mst_mgr = &aconnector->mst_port->mst_mgr;
-
-	if (!old_conn_state->crtc)
-		return 0;
-
-	if (new_conn_state->crtc) {
-		new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
-		if (!new_crtc_state ||
-		    !drm_atomic_crtc_needs_modeset(new_crtc_state) ||
-		    new_crtc_state->enable)
-			return 0;
-	}
-
-	return drm_dp_atomic_release_vcpi_slots(state,
-						mst_mgr,
-						mst_port);
+	return drm_dp_atomic_release_time_slots(state, mst_mgr, mst_port);
 }
 
 static const struct drm_connector_helper_funcs dm_dp_mst_connector_helper_funcs = {
···
 
 	dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
 	aconnector->mst_mgr.cbs = &dm_mst_cbs;
-	drm_dp_mst_topology_mgr_init(
-		&aconnector->mst_mgr,
-		adev_to_drm(dm->adev),
-		&aconnector->dm_dp_aux.aux,
-		16,
-		4,
-		max_link_enc_cap.lane_count,
-		drm_dp_bw_code_to_link_rate(max_link_enc_cap.link_rate),
-		aconnector->connector_id);
+	drm_dp_mst_topology_mgr_init(&aconnector->mst_mgr, adev_to_drm(dm->adev),
+				     &aconnector->dm_dp_aux.aux, 16, 4, aconnector->connector_id);
 
 	drm_connector_attach_dp_subconnector_property(&aconnector->base);
 }
···
 }
 
 static bool increase_dsc_bpp(struct drm_atomic_state *state,
+			     struct drm_dp_mst_topology_state *mst_state,
 			     struct dc_link *dc_link,
 			     struct dsc_mst_fairness_params *params,
 			     struct dsc_mst_fairness_vars *vars,
···
 	int min_initial_slack;
 	int next_index;
 	int remaining_to_increase = 0;
-	int pbn_per_timeslot;
 	int link_timeslots_used;
 	int fair_pbn_alloc;
-
-	pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link);
 
 	for (i = 0; i < count; i++) {
 		if (vars[i + k].dsc_enabled) {
···
 		link_timeslots_used = 0;
 
 		for (i = 0; i < count; i++)
-			link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot);
+			link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, mst_state->pbn_div);
 
-		fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;
+		fair_pbn_alloc =
+			(63 - link_timeslots_used) / remaining_to_increase * mst_state->pbn_div;
 
 		if (initial_slack[next_index] > fair_pbn_alloc) {
 			vars[next_index].pbn += fair_pbn_alloc;
-			if (drm_dp_atomic_find_vcpi_slots(state,
+			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  pbn_per_timeslot) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 			if (!drm_dp_mst_atomic_check(state)) {
 				vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
 			} else {
 				vars[next_index].pbn -= fair_pbn_alloc;
-				if (drm_dp_atomic_find_vcpi_slots(state,
+				if (drm_dp_atomic_find_time_slots(state,
 								  params[next_index].port->mgr,
 								  params[next_index].port,
-								  vars[next_index].pbn,
-								  pbn_per_timeslot) < 0)
+								  vars[next_index].pbn) < 0)
 					return false;
 			}
 		} else {
 			vars[next_index].pbn += initial_slack[next_index];
-			if (drm_dp_atomic_find_vcpi_slots(state,
+			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  pbn_per_timeslot) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 			if (!drm_dp_mst_atomic_check(state)) {
 				vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
 			} else {
 				vars[next_index].pbn -= initial_slack[next_index];
-				if (drm_dp_atomic_find_vcpi_slots(state,
+				if (drm_dp_atomic_find_time_slots(state,
 								  params[next_index].port->mgr,
 								  params[next_index].port,
-								  vars[next_index].pbn,
-								  pbn_per_timeslot) < 0)
+								  vars[next_index].pbn) < 0)
 					return false;
 			}
 		}
···
 			break;
 
 		vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps);
-		if (drm_dp_atomic_find_vcpi_slots(state,
+		if (drm_dp_atomic_find_time_slots(state,
 						  params[next_index].port->mgr,
 						  params[next_index].port,
-						  vars[next_index].pbn,
-						  dm_mst_get_pbn_divider(dc_link)) < 0)
+						  vars[next_index].pbn) < 0)
 			return false;
 
 		if (!drm_dp_mst_atomic_check(state)) {
···
 			vars[next_index].bpp_x16 = 0;
 		} else {
 			vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps);
-			if (drm_dp_atomic_find_vcpi_slots(state,
+			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 		}
 
···
 		struct dc_state *dc_state,
 		struct dc_link *dc_link,
 		struct dsc_mst_fairness_vars *vars,
+		struct drm_dp_mst_topology_mgr *mgr,
 		int *link_vars_start_index)
 {
-	int i, k;
 	struct dc_stream_state *stream;
 	struct dsc_mst_fairness_params params[MAX_PIPES];
 	struct amdgpu_dm_connector *aconnector;
+	struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr);
 	int count = 0;
+	int i, k;
 	bool debugfs_overwrite = false;
 
 	memset(params, 0, sizeof(params));
+
+	if (IS_ERR(mst_state))
+		return false;
+
+	mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link));
+#endif
 
 	/* Set up params */
 	for (i = 0; i < dc_state->stream_count; i++) {
···
 		vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
 		vars[i + k].dsc_enabled = false;
 		vars[i + k].bpp_x16 = 0;
-		if (drm_dp_atomic_find_vcpi_slots(state,
-						  params[i].port->mgr,
-						  params[i].port,
-						  vars[i + k].pbn,
-						  dm_mst_get_pbn_divider(dc_link)) < 0)
+		if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
+						  vars[i + k].pbn) < 0)
 			return false;
 	}
 	if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
···
 			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
 			vars[i + k].dsc_enabled = true;
 			vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
-			if (drm_dp_atomic_find_vcpi_slots(state,
-							  params[i].port->mgr,
-							  params[i].port,
-							  vars[i + k].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+							  params[i].port, vars[i + k].pbn) < 0)
 				return false;
 		} else {
 			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
 			vars[i + k].dsc_enabled = false;
 			vars[i + k].bpp_x16 = 0;
-			if (drm_dp_atomic_find_vcpi_slots(state,
-							  params[i].port->mgr,
-							  params[i].port,
-							  vars[i + k].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+							  params[i].port, vars[i + k].pbn) < 0)
 				return false;
 		}
 	}
···
 		return false;
 
 	/* Optimize degree of compression */
-	if (!increase_dsc_bpp(state, dc_link, params, vars, count, k))
+	if (!increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k))
 		return false;
 
 	if (!try_disable_dsc(state, dc_link, params, vars, count, k))
···
 			continue;
 
 		mutex_lock(&aconnector->mst_mgr.lock);
-		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link,
-						      vars, &link_vars_start_index)) {
+		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+						      &aconnector->mst_mgr,
+						      &link_vars_start_index)) {
 			mutex_unlock(&aconnector->mst_mgr.lock);
 			return false;
 		}
···
 			continue;
 
 		mutex_lock(&aconnector->mst_mgr.lock);
-		if (!compute_mst_dsc_configs_for_link(state,
-						      dc_state,
-						      stream->link,
-						      vars,
+		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+						      &aconnector->mst_mgr,
 						      &link_vars_start_index)) {
 			mutex_unlock(&aconnector->mst_mgr.lock);
 			return false;
+5 -5
drivers/gpu/drm/amd/display/dc/core/dc_link.c
···
 		struct dc_link *link,
 		struct stream_encoder *stream_enc,
 		struct hpo_dp_stream_encoder *hpo_dp_stream_enc, // TODO: Rename stream_enc to dio_stream_enc?
-		const struct dp_mst_stream_allocation_table *proposed_table)
+		const struct dc_dp_mst_stream_allocation_table *proposed_table)
 {
 	struct link_mst_stream_allocation work_table[MAX_CONTROLLER_NUM] = { 0 };
 	struct link_mst_stream_allocation *dc_alloc;
···
 {
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
-	struct dp_mst_stream_allocation_table proposed_table = {0};
+	struct dc_dp_mst_stream_allocation_table proposed_table = {0};
 	struct fixed31_32 avg_time_slots_per_mtp;
 	struct fixed31_32 pbn;
 	struct fixed31_32 pbn_per_slot;
···
 	struct fixed31_32 avg_time_slots_per_mtp;
 	struct fixed31_32 pbn;
 	struct fixed31_32 pbn_per_slot;
-	struct dp_mst_stream_allocation_table proposed_table = {0};
+	struct dc_dp_mst_stream_allocation_table proposed_table = {0};
 	uint8_t i;
 	const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
 	DC_LOGGER_INIT(link->ctx->logger);
···
 	struct fixed31_32 avg_time_slots_per_mtp;
 	struct fixed31_32 pbn;
 	struct fixed31_32 pbn_per_slot;
-	struct dp_mst_stream_allocation_table proposed_table = {0};
+	struct dc_dp_mst_stream_allocation_table proposed_table = {0};
 	uint8_t i;
 	enum act_return_status ret;
 	const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
···
 {
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
-	struct dp_mst_stream_allocation_table proposed_table = {0};
+	struct dc_dp_mst_stream_allocation_table proposed_table = {0};
 	struct fixed31_32 avg_time_slots_per_mtp = dc_fixpt_from_int(0);
 	int i;
 	bool mst_mode = (link->type == dc_connection_mst_branch);
+2 -2
drivers/gpu/drm/amd/display/dc/dm_helpers.h
···
 #include "dc_types.h"
 #include "dc.h"
 
-struct dp_mst_stream_allocation_table;
+struct dc_dp_mst_stream_allocation_table;
 struct aux_payload;
 enum aux_return_code_type;
 
···
 bool dm_helpers_dp_mst_write_payload_allocation_table(
 		struct dc_context *ctx,
 		const struct dc_stream_state *stream,
-		struct dp_mst_stream_allocation_table *proposed_table,
+		struct dc_dp_mst_stream_allocation_table *proposed_table,
 		bool enable);
 
 /*
+31 -6
drivers/gpu/drm/bridge/analogix/anx7625.c
···
 
 static int anx7625_read_hpd_status_p0(struct anx7625_data *ctx)
 {
+	int ret;
+
+	/* Set irq detect window to 2ms */
+	ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+				HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
+	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+				 HPD_DET_TIMER_BIT8_15,
+				 (HPD_TIME >> 8) & 0xFF);
+	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+				 HPD_DET_TIMER_BIT16_23,
+				 (HPD_TIME >> 16) & 0xFF);
+	if (ret < 0)
+		return ret;
+
 	return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, SYSTEM_STSTUS);
 }
 
···
 	int wl, ch, rate;
 	int ret = 0;
 
-	if (fmt->fmt != HDMI_DSP_A) {
-		DRM_DEV_ERROR(dev, "only supports DSP_A\n");
+	if (anx7625_sink_detect(ctx) == connector_status_disconnected) {
+		DRM_DEV_DEBUG_DRIVER(dev, "DP not connected\n");
+		return 0;
+	}
+
+	if (fmt->fmt != HDMI_DSP_A && fmt->fmt != HDMI_I2S) {
+		DRM_DEV_ERROR(dev, "only supports DSP_A & I2S\n");
 		return -EINVAL;
 	}
 
···
 		   params->sample_rate, params->sample_width,
 		   params->cea.channels);
 
-	ret |= anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
-				    AUDIO_CHANNEL_STATUS_6,
-				    ~I2S_SLAVE_MODE,
-				    TDM_SLAVE_MODE);
+	if (fmt->fmt == HDMI_DSP_A)
+		ret = anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
+					   AUDIO_CHANNEL_STATUS_6,
+					   ~I2S_SLAVE_MODE,
+					   TDM_SLAVE_MODE);
+	else
+		ret = anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
+					   AUDIO_CHANNEL_STATUS_6,
+					   ~TDM_SLAVE_MODE,
+					   I2S_SLAVE_MODE);
 
 	/* Word length */
 	switch (params->sample_width) {
+6
drivers/gpu/drm/bridge/analogix/anx7625.h
···
 #define I2S_SLAVE_MODE		0x08
 #define AUDIO_LAYOUT		0x01
 
+#define HPD_DET_TIMER_BIT0_7	0xea
+#define HPD_DET_TIMER_BIT8_15	0xeb
+#define HPD_DET_TIMER_BIT16_23	0xec
+/* HPD debounce time 2ms for 27M clock */
+#define HPD_TIME		54000
+
 #define AUDIO_CONTROL_REGISTER	0xe6
 #define TDM_TIMING_MODE		0x08
 
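
The constant checks out: 27 MHz x 2 ms = 54,000 clock ticks, split across the
three 8-bit timer registers above. A standalone sanity check (userspace C,
not kernel code):

	#include <stdio.h>

	#define HPD_TIME 54000	/* 27,000,000 Hz * 0.002 s */

	int main(void)
	{
		printf("bits 0-7:   0x%02x\n", HPD_TIME & 0xff);		/* 0xf0 */
		printf("bits 8-15:  0x%02x\n", (HPD_TIME >> 8) & 0xff);	/* 0xd2 */
		printf("bits 16-23: 0x%02x\n", (HPD_TIME >> 16) & 0xff);	/* 0x00 */
		return 0;
	}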
+2 -1
drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
···
 	pm_runtime_disable(&pdev->dev);
 
 	cancel_work_sync(&mhdp->modeset_retry_work);
-	flush_scheduled_work();
+	flush_work(&mhdp->hpd_work);
+	/* Ignoring mhdp->hdcp.check_work and mhdp->hdcp.prop_work here. */
 
 	clk_disable_unprepare(mhdp->clk);
 
+37 -7
drivers/gpu/drm/bridge/chipone-icn6211.c
···
 
 #include <linux/bitfield.h>
 #include <linux/bits.h>
+#include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
···
 	struct regulator *vdd1;
 	struct regulator *vdd2;
 	struct regulator *vdd3;
+	struct clk *refclk;
+	unsigned long refclk_rate;
 	bool interface_i2c;
 };
 
···
 
 	/*
 	 * DSI byte clock frequency (input into PLL) is calculated as:
-	 *  DSI_CLK = mode clock * bpp / dsi_data_lanes / 8
+	 *  DSI_CLK = HS clock / 4
 	 *
 	 * DPI pixel clock frequency (output from PLL) is mode clock.
 	 *
···
 	 * It seems the PLL input clock after applying P pre-divider have
 	 * to be lower than 20 MHz.
 	 */
-	fin = mode_clock * mipi_dsi_pixel_format_to_bpp(icn->dsi->format) /
-	      icn->dsi->lanes / 8; /* in Hz */
+	if (icn->refclk)
+		fin = icn->refclk_rate;
+	else
+		fin = icn->dsi->hs_rate / 4; /* in Hz */
 
 	/* Minimum value of P predivider for PLL input in 5..20 MHz */
 	p_min = clamp(DIV_ROUND_UP(fin, 20000000), 1U, 31U);
···
 	best_p_pot = !(best_p & 1);
 
 	dev_dbg(icn->dev,
-		"PLL: P[3:0]=%d P[4]=2*%d M=%d S[7:5]=2^%d delta=%d => DSI f_in=%d Hz ; DPI f_out=%d Hz\n",
+		"PLL: P[3:0]=%d P[4]=2*%d M=%d S[7:5]=2^%d delta=%d => DSI f_in(%s)=%d Hz ; DPI f_out=%d Hz\n",
 		best_p >> best_p_pot, best_p_pot, best_m, best_s + 1,
-		min_delta, fin, (fin * best_m) / (best_p << (best_s + 1)));
+		min_delta, icn->refclk ? "EXT" : "DSI", fin,
+		(fin * best_m) / (best_p << (best_s + 1)));
 
 	ref_div = PLL_REF_DIV_P(best_p >> best_p_pot) | PLL_REF_DIV_S(best_s);
 	if (best_p_pot) /* Prefer /2 pre-divider */
 		ref_div |= PLL_REF_DIV_Pe;
 
-	/* Clock source selection fixed to MIPI DSI clock lane */
-	chipone_writeb(icn, PLL_CTRL(6), PLL_CTRL_6_MIPI_CLK);
+	/* Clock source selection either external clock or MIPI DSI clock lane */
+	chipone_writeb(icn, PLL_CTRL(6),
+		       icn->refclk ? PLL_CTRL_6_EXTERNAL : PLL_CTRL_6_MIPI_CLK);
 	chipone_writeb(icn, PLL_REF_DIV, ref_div);
 	chipone_writeb(icn, PLL_INT(0), best_m);
 }
···
 			"failed to enable VDD3 regulator: %d\n", ret);
 	}
 
+	ret = clk_prepare_enable(icn->refclk);
+	if (ret)
+		DRM_DEV_ERROR(icn->dev,
+			      "failed to enable RECLK clock: %d\n", ret);
+
 	gpiod_set_value(icn->enable_gpio, 1);
 
 	usleep_range(10000, 11000);
···
 		struct drm_bridge_state *old_bridge_state)
 {
 	struct chipone *icn = bridge_to_chipone(bridge);
+
+	clk_disable_unprepare(icn->refclk);
 
 	if (icn->vdd1)
 		regulator_disable(icn->vdd1);
···
 	dsi->format = MIPI_DSI_FMT_RGB888;
 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
 			  MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
+	dsi->hs_rate = 500000000;
+	dsi->lp_rate = 16000000;
 
 	ret = mipi_dsi_attach(dsi);
 	if (ret < 0)
···
 {
 	struct device *dev = icn->dev;
 	int ret;
+
+	icn->refclk = devm_clk_get_optional(dev, "refclk");
+	if (IS_ERR(icn->refclk)) {
+		ret = PTR_ERR(icn->refclk);
+		DRM_DEV_ERROR(dev, "failed to get REFCLK clock: %d\n", ret);
+		return ret;
+	} else if (icn->refclk) {
+		icn->refclk_rate = clk_get_rate(icn->refclk);
+		if (icn->refclk_rate < 10000000 || icn->refclk_rate > 154000000) {
+			DRM_DEV_ERROR(dev, "REFCLK out of range: %ld Hz\n",
+				      icn->refclk_rate);
+			return -EINVAL;
+		}
+	}
 
 	icn->vdd1 = devm_regulator_get_optional(dev, "vdd1");
 	if (IS_ERR(icn->vdd1)) {
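
The PLL relation the hunk relies on is f_out = f_in * M / (P << (S + 1)), as
seen in the dev_dbg() above. A standalone sketch of that arithmetic with
made-up divider values (only the formula comes from the driver):

	#include <stdio.h>

	int main(void)
	{
		unsigned int fin = 13500000;	/* e.g. a 13.5 MHz REF_CLK */
		unsigned int m = 16, p = 2, s = 1;

		/* 13.5 MHz * 16 / (2 << 2) = 27 MHz DPI pixel clock */
		printf("DPI f_out = %u Hz\n", (fin * m) / (p << (s + 1)));
		return 0;
	}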
+13 -2
drivers/gpu/drm/bridge/chrontel-ch7033.c
···
 	BYTE_SWAP_GBR = 3,
 	BYTE_SWAP_BRG = 4,
 	BYTE_SWAP_BGR = 5,
+	BYTE_SWAP_MAX = 6,
 };
 
 /* Page 0, Register 0x19 */
···
 	int hsynclen = mode->hsync_end - mode->hsync_start;
 	int vbporch = mode->vsync_start - mode->vdisplay;
 	int vsynclen = mode->vsync_end - mode->vsync_start;
+	u8 byte_swap;
+	int ret;
 
 	/*
 	 * Page 4
···
 	regmap_write(priv->regmap, 0x15, vbporch);
 	regmap_write(priv->regmap, 0x16, vsynclen);
 
-	/* Input color swap. */
-	regmap_update_bits(priv->regmap, 0x18, SWAP, BYTE_SWAP_BGR);
+	/* Input color swap. Byte order is optional and will default to
+	 * BYTE_SWAP_BGR to preserve backwards compatibility with existing
+	 * driver.
+	 */
+	ret = of_property_read_u8(priv->bridge.of_node, "chrontel,byteswap",
+				  &byte_swap);
+	if (!ret && byte_swap < BYTE_SWAP_MAX)
+		regmap_update_bits(priv->regmap, 0x18, SWAP, byte_swap);
+	else
+		regmap_update_bits(priv->regmap, 0x18, SWAP, BYTE_SWAP_BGR);
 
 	/* Input clock and sync polarity. */
 	regmap_update_bits(priv->regmap, 0x19, 0x1, mode->clock >> 16);
+4 -4
drivers/gpu/drm/bridge/ite-it6505.c
···
 	if (ret)
 		dev_err(dev, "Failed to setup AVI infoframe: %d", ret);
 
-	it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
-				     DP_SET_POWER_D0);
-
 	it6505_update_video_parameter(it6505, mode);
 
 	ret = it6505_send_video_infoframe(it6505, &frame);
···
 
 	it6505_int_mask_enable(it6505);
 	it6505_video_reset(it6505);
+
+	it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
+				     DP_SET_POWER_D0);
 }
 
 static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
···
 	DRM_DEV_DEBUG_DRIVER(dev, "start");
 
 	if (it6505->powered) {
-		it6505_video_disable(it6505);
 		it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
 					     DP_SET_POWER_D3);
+		it6505_video_disable(it6505);
 	}
 }
 
+3 -1
drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
···
 	 * This check is to avoid both the drivers
 	 * removing the bridge in their remove() function
 	 */
-	if (!ge_b850v3_lvds_ptr)
+	if (!ge_b850v3_lvds_ptr ||
+	    !ge_b850v3_lvds_ptr->stdp2690_i2c ||
+	    !ge_b850v3_lvds_ptr->stdp4028_i2c)
 		goto out;
 
 	drm_bridge_remove(&ge_b850v3_lvds_ptr->bridge);
+5
drivers/gpu/drm/bridge/parade-ps8640.c
···
 	gpiod_set_value(ps_bridge->gpio_reset, 1);
 	usleep_range(2000, 2500);
 	gpiod_set_value(ps_bridge->gpio_reset, 0);
+	/* Double reset for T4 and T5 */
+	msleep(50);
+	gpiod_set_value(ps_bridge->gpio_reset, 1);
+	msleep(50);
+	gpiod_set_value(ps_bridge->gpio_reset, 0);
 
 	/*
 	 * Mystery 200 ms delay for the "MCU to be ready". It's unclear if
+8 -5
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
···
 {
 	struct dw_hdmi *hdmi = dev_id;
 	u8 intr_stat, phy_int_pol, phy_pol_mask, phy_stat;
+	enum drm_connector_status status = connector_status_unknown;
 
 	intr_stat = hdmi_readb(hdmi, HDMI_IH_PHY_STAT0);
 	phy_int_pol = hdmi_readb(hdmi, HDMI_PHY_POL0);
···
 			cec_notifier_phys_addr_invalidate(hdmi->cec_notifier);
 			mutex_unlock(&hdmi->cec_notifier_mutex);
 		}
+
+		if (phy_stat & HDMI_PHY_HPD)
+			status = connector_status_connected;
+
+		if (!(phy_stat & (HDMI_PHY_HPD | HDMI_PHY_RX_SENSE)))
+			status = connector_status_disconnected;
 	}
 
-	if (intr_stat & HDMI_IH_PHY_STAT0_HPD) {
-		enum drm_connector_status status = phy_int_pol & HDMI_PHY_HPD
-						 ? connector_status_connected
-						 : connector_status_disconnected;
-
+	if (status != connector_status_unknown) {
 		dev_dbg(hdmi->dev, "EVENT=%s\n",
 			status == connector_status_connected ?
 			"plugin" : "plugout");
+8 -7
drivers/gpu/drm/bridge/tc358767.c
···
 static int tc_probe_dpi_bridge_endpoint(struct tc_data *tc)
 {
 	struct device *dev = tc->dev;
+	struct drm_bridge *bridge;
 	struct drm_panel *panel;
 	int ret;
 
 	/* port@1 is the DPI input/output port */
-	ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, NULL);
+	ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, &bridge);
 	if (ret && ret != -ENODEV)
 		return ret;
 
 	if (panel) {
-		struct drm_bridge *panel_bridge;
+		bridge = devm_drm_panel_bridge_add(dev, panel);
+		if (IS_ERR(bridge))
+			return PTR_ERR(bridge);
+	}
 
-		panel_bridge = devm_drm_panel_bridge_add(dev, panel);
-		if (IS_ERR(panel_bridge))
-			return PTR_ERR(panel_bridge);
-
-		tc->panel_bridge = panel_bridge;
+	if (bridge) {
+		tc->panel_bridge = bridge;
 		tc->bridge.type = DRM_MODE_CONNECTOR_DPI;
 		tc->bridge.funcs = &tc_dpi_bridge_funcs;
 
+69 -3
drivers/gpu/drm/bridge/ti-sn65dsi86.c
···
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_bridge.h>
 #include <drm/drm_bridge_connector.h>
+#include <drm/drm_edid.h>
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_of.h>
 #include <drm/drm_panel.h>
···
 #define  BPP_18_RGB			BIT(0)
 #define SN_HPD_DISABLE_REG		0x5C
 #define  HPD_DISABLE			BIT(0)
+#define  HPD_DEBOUNCED_STATE		BIT(4)
 #define SN_GPIO_IO_REG			0x5E
 #define  SN_GPIO_INPUT_SHIFT		4
 #define  SN_GPIO_OUTPUT_SHIFT		0
···
 #define SN_DATARATE_CONFIG_REG		0x94
 #define  DP_DATARATE_MASK		GENMASK(7, 5)
 #define  DP_DATARATE(x)			((x) << 5)
+#define SN_TRAINING_SETTING_REG		0x95
+#define  SCRAMBLE_DISABLE		BIT(4)
 #define SN_ML_TX_MODE_REG		0x96
 #define  ML_TX_MAIN_LINK_OFF		0
 #define  ML_TX_NORMAL_MODE		BIT(0)
···
 	if (mode->clock > 594000)
 		return MODE_CLOCK_HIGH;
 
+	/*
+	 * The front and back porch registers are 8 bits, and pulse width
+	 * registers are 15 bits, so reject any modes with larger periods.
+	 */
+
+	if ((mode->hsync_start - mode->hdisplay) > 0xff)
+		return MODE_HBLANK_WIDE;
+
+	if ((mode->vsync_start - mode->vdisplay) > 0xff)
+		return MODE_VBLANK_WIDE;
+
+	if ((mode->hsync_end - mode->hsync_start) > 0x7fff)
+		return MODE_HSYNC_WIDE;
+
+	if ((mode->vsync_end - mode->vsync_start) > 0x7fff)
+		return MODE_VSYNC_WIDE;
+
+	if ((mode->htotal - mode->hsync_end) > 0xff)
+		return MODE_HBLANK_WIDE;
+
+	if ((mode->vtotal - mode->vsync_end) > 0xff)
+		return MODE_VBLANK_WIDE;
+
 	return MODE_OK;
 }
 
···
 
 	/*
 	 * The SN65DSI86 only supports ASSR Display Authentication method and
-	 * this method is enabled by default. An eDP panel must support this
+	 * this method is enabled for eDP panels. An eDP panel must support this
 	 * authentication method. We need to enable this method in the eDP panel
 	 * at DisplayPort address 0x0010A prior to link training.
+	 *
+	 * As only ASSR is supported by SN65DSI86, for full DisplayPort displays
+	 * we need to disable the scrambler.
 	 */
-	drm_dp_dpcd_writeb(&pdata->aux, DP_EDP_CONFIGURATION_SET,
-			   DP_ALTERNATE_SCRAMBLER_RESET_ENABLE);
+	if (pdata->bridge.type == DRM_MODE_CONNECTOR_eDP) {
+		drm_dp_dpcd_writeb(&pdata->aux, DP_EDP_CONFIGURATION_SET,
+				   DP_ALTERNATE_SCRAMBLER_RESET_ENABLE);
+
+		regmap_update_bits(pdata->regmap, SN_TRAINING_SETTING_REG,
+				   SCRAMBLE_DISABLE, 0);
+	} else {
+		regmap_update_bits(pdata->regmap, SN_TRAINING_SETTING_REG,
+				   SCRAMBLE_DISABLE, SCRAMBLE_DISABLE);
+	}
 
 	bpp = ti_sn_bridge_get_bpp(connector);
 	/* Set the DP output format (18 bpp or 24 bpp) */
···
 	pm_runtime_put_sync(pdata->dev);
 }
 
+static enum drm_connector_status ti_sn_bridge_detect(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+	int val = 0;
+
+	pm_runtime_get_sync(pdata->dev);
+	regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
+	pm_runtime_put_autosuspend(pdata->dev);
+
+	return val & HPD_DEBOUNCED_STATE ? connector_status_connected
+					 : connector_status_disconnected;
+}
+
+static struct edid *ti_sn_bridge_get_edid(struct drm_bridge *bridge,
+					  struct drm_connector *connector)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	return drm_get_edid(connector, &pdata->aux.ddc);
+}
+
 static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
 	.attach = ti_sn_bridge_attach,
 	.detach = ti_sn_bridge_detach,
 	.mode_valid = ti_sn_bridge_mode_valid,
+	.get_edid = ti_sn_bridge_get_edid,
+	.detect = ti_sn_bridge_detect,
 	.atomic_pre_enable = ti_sn_bridge_atomic_pre_enable,
 	.atomic_enable = ti_sn_bridge_atomic_enable,
 	.atomic_disable = ti_sn_bridge_atomic_disable,
···
 
 	pdata->bridge.funcs = &ti_sn_bridge_funcs;
 	pdata->bridge.of_node = np;
+	pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
+			   ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
+
+	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
+		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
 
 	drm_bridge_add(&pdata->bridge);
 
+32
drivers/gpu/drm/display/drm_dp_helper.c
···
 }
 EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay);
 
+/**
+ * drm_dp_phy_name() - Get the name of the given DP PHY
+ * @dp_phy: The DP PHY identifier
+ *
+ * Given the @dp_phy, get a user friendly name of the DP PHY, either "DPRX" or
+ * "LTTPR <N>", or "<INVALID DP PHY>" on errors. The returned string is always
+ * non-NULL and valid.
+ *
+ * Returns: Name of the DP PHY.
+ */
+const char *drm_dp_phy_name(enum drm_dp_phy dp_phy)
+{
+	static const char * const phy_names[] = {
+		[DP_PHY_DPRX] = "DPRX",
+		[DP_PHY_LTTPR1] = "LTTPR 1",
+		[DP_PHY_LTTPR2] = "LTTPR 2",
+		[DP_PHY_LTTPR3] = "LTTPR 3",
+		[DP_PHY_LTTPR4] = "LTTPR 4",
+		[DP_PHY_LTTPR5] = "LTTPR 5",
+		[DP_PHY_LTTPR6] = "LTTPR 6",
+		[DP_PHY_LTTPR7] = "LTTPR 7",
+		[DP_PHY_LTTPR8] = "LTTPR 8",
+	};
+
+	if (dp_phy < 0 || dp_phy >= ARRAY_SIZE(phy_names) ||
+	    WARN_ON(!phy_names[dp_phy]))
+		return "<INVALID DP PHY>";
+
+	return phy_names[dp_phy];
+}
+EXPORT_SYMBOL(drm_dp_phy_name);
+
 void drm_dp_lttpr_link_train_clock_recovery_delay(void)
 {
 	usleep_range(100, 200);
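
The typical consumer of this helper is a link-training log message. A
hypothetical caller (the drm pointer here is illustrative, not from the
patch):

	struct drm_device *drm = ...;	/* some driver's drm_device */

	drm_dbg_kms(drm, "clock recovery done on %s\n",
		    drm_dp_phy_name(DP_PHY_LTTPR1));	/* logs "LTTPR 1" */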
+526 -641
drivers/gpu/drm/display/drm_dp_mst_topology.c
···
 static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port);
 
 static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,
-				     int id,
-				     struct drm_dp_payload *payload);
+				     int id, u8 start_slot, u8 num_slots);
 
 static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr,
 				 struct drm_dp_mst_port *port,
···
 	return 0;
 }
 
-static int drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr,
-					struct drm_dp_vcpi *vcpi)
-{
-	int ret, vcpi_ret;
-
-	mutex_lock(&mgr->payload_lock);
-	ret = find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads + 1);
-	if (ret > mgr->max_payloads) {
-		ret = -EINVAL;
-		drm_dbg_kms(mgr->dev, "out of payload ids %d\n", ret);
-		goto out_unlock;
-	}
-
-	vcpi_ret = find_first_zero_bit(&mgr->vcpi_mask, mgr->max_payloads + 1);
-	if (vcpi_ret > mgr->max_payloads) {
-		ret = -EINVAL;
-		drm_dbg_kms(mgr->dev, "out of vcpi ids %d\n", ret);
-		goto out_unlock;
-	}
-
-	set_bit(ret, &mgr->payload_mask);
-	set_bit(vcpi_ret, &mgr->vcpi_mask);
-	vcpi->vcpi = vcpi_ret + 1;
-	mgr->proposed_vcpis[ret - 1] = vcpi;
-out_unlock:
-	mutex_unlock(&mgr->payload_lock);
-	return ret;
-}
-
-static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr,
-				      int vcpi)
-{
-	int i;
-
-	if (vcpi == 0)
-		return;
-
-	mutex_lock(&mgr->payload_lock);
-	drm_dbg_kms(mgr->dev, "putting payload %d\n", vcpi);
-	clear_bit(vcpi - 1, &mgr->vcpi_mask);
-
-	for (i = 0; i < mgr->max_payloads; i++) {
-		if (mgr->proposed_vcpis[i] &&
-		    mgr->proposed_vcpis[i]->vcpi == vcpi) {
-			mgr->proposed_vcpis[i] = NULL;
-			clear_bit(i + 1, &mgr->payload_mask);
-		}
-	}
-	mutex_unlock(&mgr->payload_lock);
-}
-
 static bool check_txmsg_state(struct drm_dp_mst_topology_mgr *mgr,
 			      struct drm_dp_sideband_msg_tx *txmsg)
 {
···
 #define save_mstb_topology_ref(mstb, type)
 #define save_port_topology_ref(port, type)
 #endif
+
+struct drm_dp_mst_atomic_payload *
+drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state,
+				 struct drm_dp_mst_port *port)
+{
+	struct drm_dp_mst_atomic_payload *payload;
+
+	list_for_each_entry(payload, &state->payloads, next)
+		if (payload->port == port)
+			return payload;
+
+	return NULL;
+}
+EXPORT_SYMBOL(drm_atomic_get_mst_payload_state);
 
 static void drm_dp_destroy_mst_branch_device(struct kref *kref)
 {
···
 	return ret;
 }
 
-static void
+static int
 drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
 			    struct drm_dp_connection_status_notify *conn_stat)
 {
···
 
 	port = drm_dp_get_port(mstb, conn_stat->port_number);
 	if (!port)
-		return;
+		return 0;
 
 	if (port->connector) {
 		if (!port->input && conn_stat->input_port) {
···
 
 out:
 	drm_dp_mst_topology_put_port(port);
-	if (dowork)
-		queue_work(system_long_wq, &mstb->mgr->work);
+	return dowork;
 }
 
 static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_topology_mgr *mgr,
···
 			      struct drm_dp_mst_port *port,
 			      struct drm_dp_query_stream_enc_status_ack_reply *status)
 {
+	struct drm_dp_mst_topology_state *state;
+	struct drm_dp_mst_atomic_payload *payload;
 	struct drm_dp_sideband_msg_tx *txmsg;
 	u8 nonce[7];
 	int ret;
···
 
 	get_random_bytes(nonce, sizeof(nonce));
 
+	drm_modeset_lock(&mgr->base.lock, NULL);
+	state = to_drm_dp_mst_topology_state(mgr->base.state);
+	payload = drm_atomic_get_mst_payload_state(state, port);
+
 	/*
 	 * "Source device targets the QUERY_STREAM_ENCRYPTION_STATUS message
 	 *  transaction at the MST Branch device directly connected to the
···
 	 */
 	txmsg->dst = mgr->mst_primary;
 
-	build_query_stream_enc_status(txmsg, port->vcpi.vcpi, nonce);
+	build_query_stream_enc_status(txmsg, payload->vcpi, nonce);
 
 	drm_dp_queue_down_tx(mgr, txmsg);
 
···
 	memcpy(status, &txmsg->reply.u.enc_status, sizeof(*status));
 
 out:
+	drm_modeset_unlock(&mgr->base.lock);
 	drm_dp_mst_topology_put_port(port);
 out_get_port:
 	kfree(txmsg);
···
 EXPORT_SYMBOL(drm_dp_send_query_stream_enc_status);
 
 static int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
-				       int id,
-				       struct drm_dp_payload *payload)
+				       struct drm_dp_mst_atomic_payload *payload)
 {
-	int ret;
-
-	ret = drm_dp_dpcd_write_payload(mgr, id, payload);
-	if (ret < 0) {
-		payload->payload_state = 0;
-		return ret;
-	}
-	payload->payload_state = DP_PAYLOAD_LOCAL;
-	return 0;
+	return drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot,
+					 payload->time_slots);
 }
 
 static int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr,
-				       struct drm_dp_mst_port *port,
-				       int id,
-				       struct drm_dp_payload *payload)
+				       struct drm_dp_mst_atomic_payload *payload)
 {
 	int ret;
+	struct drm_dp_mst_port *port = drm_dp_mst_topology_get_port_validated(mgr, payload->port);
 
-	ret = drm_dp_payload_send_msg(mgr, port, id, port->vcpi.pbn);
-	if (ret < 0)
-		return ret;
-	payload->payload_state = DP_PAYLOAD_REMOTE;
+	if (!port)
+		return -EIO;
+
+	ret = drm_dp_payload_send_msg(mgr, port, payload->vcpi, payload->pbn);
+	drm_dp_mst_topology_put_port(port);
 	return ret;
 }
 
 static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
-					struct drm_dp_mst_port *port,
-					int id,
-					struct drm_dp_payload *payload)
+					struct drm_dp_mst_topology_state *mst_state,
+					struct drm_dp_mst_atomic_payload *payload)
 {
 	drm_dbg_kms(mgr->dev, "\n");
+
 	/* it's okay for these to fail */
-	if (port) {
-		drm_dp_payload_send_msg(mgr, port, id, 0);
-	}
+	drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0);
+	drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 0);
 
-	drm_dp_dpcd_write_payload(mgr, id, payload);
-	payload->payload_state = DP_PAYLOAD_DELETE_LOCAL;
-	return 0;
-}
-
-static int drm_dp_destroy_payload_step2(struct drm_dp_mst_topology_mgr *mgr,
-					int id,
-					struct drm_dp_payload *payload)
-{
-	payload->payload_state = 0;
3321 return 0; 3309 3322 } 3310 3323 3311 3324 /** 3312 - * drm_dp_update_payload_part1() - Execute payload update part 1 3313 - * @mgr: manager to use. 3314 - * @start_slot: this is the cur slot 3325 + * drm_dp_add_payload_part1() - Execute payload update part 1 3326 + * @mgr: Manager to use. 3327 + * @mst_state: The MST atomic state 3328 + * @payload: The payload to write 3315 3329 * 3316 - * NOTE: start_slot is a temporary workaround for non-atomic drivers, 3317 - * this will be removed when non-atomic mst helpers are moved out of the helper 3330 + * Determines the starting time slot for the given payload, and programs the VCPI for this payload 3331 + * into hardware. After calling this, the driver should generate ACT and payload packets. 3318 3332 * 3319 - * This iterates over all proposed virtual channels, and tries to 3320 - * allocate space in the link for them. For 0->slots transitions, 3321 - * this step just writes the VCPI to the MST device. For slots->0 3322 - * transitions, this writes the updated VCPIs and removes the 3323 - * remote VC payloads. 3324 - * 3325 - * after calling this the driver should generate ACT and payload 3326 - * packets. 3333 + * Returns: 0 on success, error code on failure. In the event that this fails, 3334 + * @payload.vc_start_slot will also be set to -1. 3327 3335 */ 3328 - int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot) 3336 + int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr, 3337 + struct drm_dp_mst_topology_state *mst_state, 3338 + struct drm_dp_mst_atomic_payload *payload) 3329 3339 { 3330 - struct drm_dp_payload req_payload; 3331 3340 struct drm_dp_mst_port *port; 3332 - int i, j; 3333 - int cur_slots = start_slot; 3334 - bool skip; 3341 + int ret; 3335 3342 3336 - mutex_lock(&mgr->payload_lock); 3337 - for (i = 0; i < mgr->max_payloads; i++) { 3338 - struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i]; 3339 - struct drm_dp_payload *payload = &mgr->payloads[i]; 3340 - bool put_port = false; 3343 + port = drm_dp_mst_topology_get_port_validated(mgr, payload->port); 3344 + if (!port) 3345 + return 0; 3341 3346 3342 - /* solve the current payloads - compare to the hw ones 3343 - - update the hw view */ 3344 - req_payload.start_slot = cur_slots; 3345 - if (vcpi) { 3346 - port = container_of(vcpi, struct drm_dp_mst_port, 3347 - vcpi); 3347 + if (mgr->payload_count == 0) 3348 + mgr->next_start_slot = mst_state->start_slot; 3348 3349 3349 - mutex_lock(&mgr->lock); 3350 - skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); 3351 - mutex_unlock(&mgr->lock); 3350 + payload->vc_start_slot = mgr->next_start_slot; 3352 3351 3353 - if (skip) { 3354 - drm_dbg_kms(mgr->dev, 3355 - "Virtual channel %d is not in current topology\n", 3356 - i); 3357 - continue; 3358 - } 3359 - /* Validated ports don't matter if we're releasing 3360 - * VCPI 3361 - */ 3362 - if (vcpi->num_slots) { 3363 - port = drm_dp_mst_topology_get_port_validated( 3364 - mgr, port); 3365 - if (!port) { 3366 - if (vcpi->num_slots == payload->num_slots) { 3367 - cur_slots += vcpi->num_slots; 3368 - payload->start_slot = req_payload.start_slot; 3369 - continue; 3370 - } else { 3371 - drm_dbg_kms(mgr->dev, 3372 - "Fail:set payload to invalid sink"); 3373 - mutex_unlock(&mgr->payload_lock); 3374 - return -EINVAL; 3375 - } 3376 - } 3377 - put_port = true; 3378 - } 3379 - 3380 - req_payload.num_slots = vcpi->num_slots; 3381 - req_payload.vcpi = vcpi->vcpi; 3382 - } else { 3383 - port = NULL; 3384 - req_payload.num_slots = 0; 3385 - } 
3386 -
3387 - payload->start_slot = req_payload.start_slot;
3388 - /* work out what is required to happen with this payload */
3389 - if (payload->num_slots != req_payload.num_slots) {
3390 -
3391 - /* need to push an update for this payload */
3392 - if (req_payload.num_slots) {
3393 - drm_dp_create_payload_step1(mgr, vcpi->vcpi,
3394 - &req_payload);
3395 - payload->num_slots = req_payload.num_slots;
3396 - payload->vcpi = req_payload.vcpi;
3397 -
3398 - } else if (payload->num_slots) {
3399 - payload->num_slots = 0;
3400 - drm_dp_destroy_payload_step1(mgr, port,
3401 - payload->vcpi,
3402 - payload);
3403 - req_payload.payload_state =
3404 - payload->payload_state;
3405 - payload->start_slot = 0;
3406 - }
3407 - payload->payload_state = req_payload.payload_state;
3408 - }
3409 - cur_slots += req_payload.num_slots;
3410 -
3411 - if (put_port)
3412 - drm_dp_mst_topology_put_port(port);
3352 + ret = drm_dp_create_payload_step1(mgr, payload);
3353 + drm_dp_mst_topology_put_port(port);
3354 + if (ret < 0) {
3355 + drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n",
3356 + payload->port, ret);
3357 + payload->vc_start_slot = -1;
3358 + return ret;
3413 3359 }
3414 3360
3415 - for (i = 0; i < mgr->max_payloads; /* do nothing */) {
3416 - if (mgr->payloads[i].payload_state != DP_PAYLOAD_DELETE_LOCAL) {
3417 - i++;
3418 - continue;
3419 - }
3420 -
3421 - drm_dbg_kms(mgr->dev, "removing payload %d\n", i);
3422 - for (j = i; j < mgr->max_payloads - 1; j++) {
3423 - mgr->payloads[j] = mgr->payloads[j + 1];
3424 - mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1];
3425 -
3426 - if (mgr->proposed_vcpis[j] &&
3427 - mgr->proposed_vcpis[j]->num_slots) {
3428 - set_bit(j + 1, &mgr->payload_mask);
3429 - } else {
3430 - clear_bit(j + 1, &mgr->payload_mask);
3431 - }
3432 - }
3433 -
3434 - memset(&mgr->payloads[mgr->max_payloads - 1], 0,
3435 - sizeof(struct drm_dp_payload));
3436 - mgr->proposed_vcpis[mgr->max_payloads - 1] = NULL;
3437 - clear_bit(mgr->max_payloads, &mgr->payload_mask);
3438 - }
3439 - mutex_unlock(&mgr->payload_lock);
3361 + mgr->payload_count++;
3362 + mgr->next_start_slot += payload->time_slots;
3440 3363
3441 3364 return 0;
3442 3365 }
3443 - EXPORT_SYMBOL(drm_dp_update_payload_part1);
3366 + EXPORT_SYMBOL(drm_dp_add_payload_part1);
3444 3367
3445 3368 /**
3446 - * drm_dp_update_payload_part2() - Execute payload update part 2
3447 - * @mgr: manager to use.
3369 + * drm_dp_remove_payload() - Remove an MST payload
3370 + * @mgr: Manager to use.
3371 + * @mst_state: The MST atomic state
3372 + * @payload: The payload to remove
3448 3373 *
3449 - * This iterates over all proposed virtual channels, and tries to
3450 - * allocate space in the link for them. For 0->slots transitions,
3451 - * this step writes the remote VC payload commands. For slots->0
3452 - * this just resets some internal state.
3374 + * Removes a payload from an MST topology if it was successfully assigned a start slot. Also updates
3375 + * the starting time slots of all other payloads which would have been shifted towards the start of
3376 + * the VC table as a result. After calling this, the driver should generate ACT and payload packets.
3453 3377 */ 3454 - int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr) 3378 + void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr, 3379 + struct drm_dp_mst_topology_state *mst_state, 3380 + struct drm_dp_mst_atomic_payload *payload) 3455 3381 { 3456 - struct drm_dp_mst_port *port; 3457 - int i; 3382 + struct drm_dp_mst_atomic_payload *pos; 3383 + bool send_remove = false; 3384 + 3385 + /* We failed to make the payload, so nothing to do */ 3386 + if (payload->vc_start_slot == -1) 3387 + return; 3388 + 3389 + mutex_lock(&mgr->lock); 3390 + send_remove = drm_dp_mst_port_downstream_of_branch(payload->port, mgr->mst_primary); 3391 + mutex_unlock(&mgr->lock); 3392 + 3393 + if (send_remove) 3394 + drm_dp_destroy_payload_step1(mgr, mst_state, payload); 3395 + else 3396 + drm_dbg_kms(mgr->dev, "Payload for VCPI %d not in topology, not sending remove\n", 3397 + payload->vcpi); 3398 + 3399 + list_for_each_entry(pos, &mst_state->payloads, next) { 3400 + if (pos != payload && pos->vc_start_slot > payload->vc_start_slot) 3401 + pos->vc_start_slot -= payload->time_slots; 3402 + } 3403 + payload->vc_start_slot = -1; 3404 + 3405 + mgr->payload_count--; 3406 + mgr->next_start_slot -= payload->time_slots; 3407 + } 3408 + EXPORT_SYMBOL(drm_dp_remove_payload); 3409 + 3410 + /** 3411 + * drm_dp_add_payload_part2() - Execute payload update part 2 3412 + * @mgr: Manager to use. 3413 + * @state: The global atomic state 3414 + * @payload: The payload to update 3415 + * 3416 + * If @payload was successfully assigned a starting time slot by drm_dp_add_payload_part1(), this 3417 + * function will send the sideband messages to finish allocating this payload. 3418 + * 3419 + * Returns: 0 on success, negative error code on failure. 3420 + */ 3421 + int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr, 3422 + struct drm_atomic_state *state, 3423 + struct drm_dp_mst_atomic_payload *payload) 3424 + { 3458 3425 int ret = 0; 3459 - bool skip; 3460 3426 3461 - mutex_lock(&mgr->payload_lock); 3462 - for (i = 0; i < mgr->max_payloads; i++) { 3463 - 3464 - if (!mgr->proposed_vcpis[i]) 3465 - continue; 3466 - 3467 - port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); 3468 - 3469 - mutex_lock(&mgr->lock); 3470 - skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); 3471 - mutex_unlock(&mgr->lock); 3472 - 3473 - if (skip) 3474 - continue; 3475 - 3476 - drm_dbg_kms(mgr->dev, "payload %d %d\n", i, mgr->payloads[i].payload_state); 3477 - if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) { 3478 - ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]); 3479 - } else if (mgr->payloads[i].payload_state == DP_PAYLOAD_DELETE_LOCAL) { 3480 - ret = drm_dp_destroy_payload_step2(mgr, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]); 3481 - } 3482 - if (ret) { 3483 - mutex_unlock(&mgr->payload_lock); 3484 - return ret; 3485 - } 3427 + /* Skip failed payloads */ 3428 + if (payload->vc_start_slot == -1) { 3429 + drm_dbg_kms(state->dev, "Part 1 of payload creation for %s failed, skipping part 2\n", 3430 + payload->port->connector->name); 3431 + return -EIO; 3486 3432 } 3487 - mutex_unlock(&mgr->payload_lock); 3488 - return 0; 3433 + 3434 + ret = drm_dp_create_payload_step2(mgr, payload); 3435 + if (ret < 0) { 3436 + if (!payload->delete) 3437 + drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n", 3438 + payload->port, ret); 3439 + else 3440 + drm_dbg_kms(mgr->dev, "Step 2 of removing MST payload for %p 
failed: %d\n", 3441 + payload->port, ret); 3442 + } 3443 + 3444 + return ret; 3489 3445 } 3490 - EXPORT_SYMBOL(drm_dp_update_payload_part2); 3446 + EXPORT_SYMBOL(drm_dp_add_payload_part2); 3491 3447 3492 3448 static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr, 3493 3449 struct drm_dp_mst_port *port, ··· 3591 3699 int ret = 0; 3592 3700 struct drm_dp_mst_branch *mstb = NULL; 3593 3701 3594 - mutex_lock(&mgr->payload_lock); 3595 3702 mutex_lock(&mgr->lock); 3596 3703 if (mst_state == mgr->mst_state) 3597 3704 goto out_unlock; ··· 3598 3707 mgr->mst_state = mst_state; 3599 3708 /* set the device into MST mode */ 3600 3709 if (mst_state) { 3601 - struct drm_dp_payload reset_pay; 3602 - int lane_count; 3603 - int link_rate; 3604 - 3605 3710 WARN_ON(mgr->mst_primary); 3606 3711 3607 3712 /* get dpcd info */ ··· 3605 3718 if (ret < 0) { 3606 3719 drm_dbg_kms(mgr->dev, "%s: failed to read DPCD, ret %d\n", 3607 3720 mgr->aux->name, ret); 3608 - goto out_unlock; 3609 - } 3610 - 3611 - lane_count = min_t(int, mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK, mgr->max_lane_count); 3612 - link_rate = min_t(int, drm_dp_bw_code_to_link_rate(mgr->dpcd[1]), mgr->max_link_rate); 3613 - mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr, 3614 - link_rate, 3615 - lane_count); 3616 - if (mgr->pbn_div == 0) { 3617 - ret = -EINVAL; 3618 3721 goto out_unlock; 3619 3722 } 3620 3723 ··· 3627 3750 if (ret < 0) 3628 3751 goto out_unlock; 3629 3752 3630 - reset_pay.start_slot = 0; 3631 - reset_pay.num_slots = 0x3f; 3632 - drm_dp_dpcd_write_payload(mgr, 0, &reset_pay); 3753 + /* Write reset payload */ 3754 + drm_dp_dpcd_write_payload(mgr, 0, 0, 0x3f); 3633 3755 3634 3756 queue_work(system_long_wq, &mgr->work); 3635 3757 ··· 3640 3764 /* this can fail if the device is gone */ 3641 3765 drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); 3642 3766 ret = 0; 3643 - memset(mgr->payloads, 0, 3644 - mgr->max_payloads * sizeof(mgr->payloads[0])); 3645 - memset(mgr->proposed_vcpis, 0, 3646 - mgr->max_payloads * sizeof(mgr->proposed_vcpis[0])); 3647 - mgr->payload_mask = 0; 3648 - set_bit(0, &mgr->payload_mask); 3649 - mgr->vcpi_mask = 0; 3650 3767 mgr->payload_id_table_cleared = false; 3651 3768 } 3652 3769 3653 3770 out_unlock: 3654 3771 mutex_unlock(&mgr->lock); 3655 - mutex_unlock(&mgr->payload_lock); 3656 3772 if (mstb) 3657 3773 drm_dp_mst_topology_put_mstb(mstb); 3658 3774 return ret; ··· 3915 4047 struct drm_dp_mst_branch *mstb = NULL; 3916 4048 struct drm_dp_sideband_msg_req_body *msg = &up_req->msg; 3917 4049 struct drm_dp_sideband_msg_hdr *hdr = &up_req->hdr; 3918 - bool hotplug = false; 4050 + bool hotplug = false, dowork = false; 3919 4051 3920 4052 if (hdr->broadcast) { 3921 4053 const u8 *guid = NULL; ··· 3938 4070 3939 4071 /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */ 3940 4072 if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) { 3941 - drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 4073 + dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 3942 4074 hotplug = true; 3943 4075 } 3944 4076 3945 4077 drm_dp_mst_topology_put_mstb(mstb); 4078 + 4079 + if (dowork) 4080 + queue_work(system_long_wq, &mgr->work); 3946 4081 return hotplug; 3947 4082 } 3948 4083 ··· 4164 4293 EXPORT_SYMBOL(drm_dp_mst_get_edid); 4165 4294 4166 4295 /** 4167 - * drm_dp_find_vcpi_slots() - Find VCPI slots for this PBN value 4168 - * @mgr: manager to use 4169 - * @pbn: payload bandwidth to convert into slots. 
4170 - * 4171 - * Calculate the number of VCPI slots that will be required for the given PBN 4172 - * value. This function is deprecated, and should not be used in atomic 4173 - * drivers. 4174 - * 4175 - * RETURNS: 4176 - * The total slots required for this port, or error. 4177 - */ 4178 - int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, 4179 - int pbn) 4180 - { 4181 - int num_slots; 4182 - 4183 - num_slots = DIV_ROUND_UP(pbn, mgr->pbn_div); 4184 - 4185 - /* max. time slots - one slot for MTP header */ 4186 - if (num_slots > 63) 4187 - return -ENOSPC; 4188 - return num_slots; 4189 - } 4190 - EXPORT_SYMBOL(drm_dp_find_vcpi_slots); 4191 - 4192 - static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, 4193 - struct drm_dp_vcpi *vcpi, int pbn, int slots) 4194 - { 4195 - int ret; 4196 - 4197 - vcpi->pbn = pbn; 4198 - vcpi->aligned_pbn = slots * mgr->pbn_div; 4199 - vcpi->num_slots = slots; 4200 - 4201 - ret = drm_dp_mst_assign_payload_id(mgr, vcpi); 4202 - if (ret < 0) 4203 - return ret; 4204 - return 0; 4205 - } 4206 - 4207 - /** 4208 - * drm_dp_atomic_find_vcpi_slots() - Find and add VCPI slots to the state 4296 + * drm_dp_atomic_find_time_slots() - Find and add time slots to the state 4209 4297 * @state: global atomic state 4210 4298 * @mgr: MST topology manager for the port 4211 - * @port: port to find vcpi slots for 4299 + * @port: port to find time slots for 4212 4300 * @pbn: bandwidth required for the mode in PBN 4213 - * @pbn_div: divider for DSC mode that takes FEC into account 4214 4301 * 4215 - * Allocates VCPI slots to @port, replacing any previous VCPI allocations it 4216 - * may have had. Any atomic drivers which support MST must call this function 4217 - * in their &drm_encoder_helper_funcs.atomic_check() callback to change the 4218 - * current VCPI allocation for the new state, but only when 4219 - * &drm_crtc_state.mode_changed or &drm_crtc_state.connectors_changed is set 4220 - * to ensure compatibility with userspace applications that still use the 4221 - * legacy modesetting UAPI. 4302 + * Allocates time slots to @port, replacing any previous time slot allocations it may 4303 + * have had. Any atomic drivers which support MST must call this function in 4304 + * their &drm_encoder_helper_funcs.atomic_check() callback unconditionally to 4305 + * change the current time slot allocation for the new state, and ensure the MST 4306 + * atomic state is added whenever the state of payloads in the topology changes. 4222 4307 * 4223 4308 * Allocations set by this function are not checked against the bandwidth 4224 4309 * restraints of @mgr until the driver calls drm_dp_mst_atomic_check(). 4225 4310 * 4226 4311 * Additionally, it is OK to call this function multiple times on the same 4227 4312 * @port as needed. It is not OK however, to call this function and 4228 - * drm_dp_atomic_release_vcpi_slots() in the same atomic check phase. 4313 + * drm_dp_atomic_release_time_slots() in the same atomic check phase. 
4229 4314 * 4230 4315 * See also: 4231 - * drm_dp_atomic_release_vcpi_slots() 4316 + * drm_dp_atomic_release_time_slots() 4232 4317 * drm_dp_mst_atomic_check() 4233 4318 * 4234 4319 * Returns: 4235 4320 * Total slots in the atomic state assigned for this port, or a negative error 4236 4321 * code if the port no longer exists 4237 4322 */ 4238 - int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, 4323 + int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, 4239 4324 struct drm_dp_mst_topology_mgr *mgr, 4240 - struct drm_dp_mst_port *port, int pbn, 4241 - int pbn_div) 4325 + struct drm_dp_mst_port *port, int pbn) 4242 4326 { 4243 4327 struct drm_dp_mst_topology_state *topology_state; 4244 - struct drm_dp_vcpi_allocation *pos, *vcpi = NULL; 4245 - int prev_slots, prev_bw, req_slots; 4328 + struct drm_dp_mst_atomic_payload *payload = NULL; 4329 + struct drm_connector_state *conn_state; 4330 + int prev_slots = 0, prev_bw = 0, req_slots; 4246 4331 4247 4332 topology_state = drm_atomic_get_mst_topology_state(state, mgr); 4248 4333 if (IS_ERR(topology_state)) 4249 4334 return PTR_ERR(topology_state); 4250 4335 4336 + conn_state = drm_atomic_get_new_connector_state(state, port->connector); 4337 + topology_state->pending_crtc_mask |= drm_crtc_mask(conn_state->crtc); 4338 + 4251 4339 /* Find the current allocation for this port, if any */ 4252 - list_for_each_entry(pos, &topology_state->vcpis, next) { 4253 - if (pos->port == port) { 4254 - vcpi = pos; 4255 - prev_slots = vcpi->vcpi; 4256 - prev_bw = vcpi->pbn; 4340 + payload = drm_atomic_get_mst_payload_state(topology_state, port); 4341 + if (payload) { 4342 + prev_slots = payload->time_slots; 4343 + prev_bw = payload->pbn; 4257 4344 4258 - /* 4259 - * This should never happen, unless the driver tries 4260 - * releasing and allocating the same VCPI allocation, 4261 - * which is an error 4262 - */ 4263 - if (WARN_ON(!prev_slots)) { 4264 - drm_err(mgr->dev, 4265 - "cannot allocate and release VCPI on [MST PORT:%p] in the same state\n", 4266 - port); 4267 - return -EINVAL; 4268 - } 4269 - 4270 - break; 4345 + /* 4346 + * This should never happen, unless the driver tries 4347 + * releasing and allocating the same timeslot allocation, 4348 + * which is an error 4349 + */ 4350 + if (drm_WARN_ON(mgr->dev, payload->delete)) { 4351 + drm_err(mgr->dev, 4352 + "cannot allocate and release time slots on [MST PORT:%p] in the same state\n", 4353 + port); 4354 + return -EINVAL; 4271 4355 } 4272 4356 } 4273 - if (!vcpi) { 4274 - prev_slots = 0; 4275 - prev_bw = 0; 4276 - } 4277 4357 4278 - if (pbn_div <= 0) 4279 - pbn_div = mgr->pbn_div; 4358 + req_slots = DIV_ROUND_UP(pbn, topology_state->pbn_div); 4280 4359 4281 - req_slots = DIV_ROUND_UP(pbn, pbn_div); 4282 - 4283 - drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n", 4360 + drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] TU %d -> %d\n", 4284 4361 port->connector->base.id, port->connector->name, 4285 4362 port, prev_slots, req_slots); 4286 4363 drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] PBN %d -> %d\n", 4287 4364 port->connector->base.id, port->connector->name, 4288 4365 port, prev_bw, pbn); 4289 4366 4290 - /* Add the new allocation to the state */ 4291 - if (!vcpi) { 4292 - vcpi = kzalloc(sizeof(*vcpi), GFP_KERNEL); 4293 - if (!vcpi) 4367 + /* Add the new allocation to the state, note the VCPI isn't assigned until the end */ 4368 + if (!payload) { 4369 + payload = kzalloc(sizeof(*payload), GFP_KERNEL); 4370 + if (!payload) 4294 4371 
return -ENOMEM; 4295 4372 4296 4373 drm_dp_mst_get_port_malloc(port); 4297 - vcpi->port = port; 4298 - list_add(&vcpi->next, &topology_state->vcpis); 4374 + payload->port = port; 4375 + payload->vc_start_slot = -1; 4376 + list_add(&payload->next, &topology_state->payloads); 4299 4377 } 4300 - vcpi->vcpi = req_slots; 4301 - vcpi->pbn = pbn; 4378 + payload->time_slots = req_slots; 4379 + payload->pbn = pbn; 4302 4380 4303 4381 return req_slots; 4304 4382 } 4305 - EXPORT_SYMBOL(drm_dp_atomic_find_vcpi_slots); 4383 + EXPORT_SYMBOL(drm_dp_atomic_find_time_slots); 4306 4384 4307 4385 /** 4308 - * drm_dp_atomic_release_vcpi_slots() - Release allocated vcpi slots 4386 + * drm_dp_atomic_release_time_slots() - Release allocated time slots 4309 4387 * @state: global atomic state 4310 4388 * @mgr: MST topology manager for the port 4311 - * @port: The port to release the VCPI slots from 4389 + * @port: The port to release the time slots from 4312 4390 * 4313 - * Releases any VCPI slots that have been allocated to a port in the atomic 4314 - * state. Any atomic drivers which support MST must call this function in 4315 - * their &drm_connector_helper_funcs.atomic_check() callback when the 4316 - * connector will no longer have VCPI allocated (e.g. because its CRTC was 4317 - * removed) when it had VCPI allocated in the previous atomic state. 4391 + * Releases any time slots that have been allocated to a port in the atomic 4392 + * state. Any atomic drivers which support MST must call this function 4393 + * unconditionally in their &drm_connector_helper_funcs.atomic_check() callback. 4394 + * This helper will check whether time slots would be released by the new state and 4395 + * respond accordingly, along with ensuring the MST state is always added to the 4396 + * atomic state whenever a new state would modify the state of payloads on the 4397 + * topology. 4318 4398 * 4319 4399 * It is OK to call this even if @port has been removed from the system. 4320 4400 * Additionally, it is OK to call this function multiple times on the same 4321 4401 * @port as needed. It is not OK however, to call this function and 4322 - * drm_dp_atomic_find_vcpi_slots() on the same @port in a single atomic check 4402 + * drm_dp_atomic_find_time_slots() on the same @port in a single atomic check 4323 4403 * phase. 
4324 4404 *
4325 4405 * See also:
4326 - * drm_dp_atomic_find_vcpi_slots()
4406 + * drm_dp_atomic_find_time_slots()
4327 4407 * drm_dp_mst_atomic_check()
4328 4408 *
4329 4409 * Returns:
4330 - * 0 if all slots for this port were added back to
4331 - * &drm_dp_mst_topology_state.avail_slots or negative error code
4410 + * 0 on success, negative error code otherwise
4332 4411 */
4333 - int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
4412 + int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state,
4334 4413 struct drm_dp_mst_topology_mgr *mgr,
4335 4414 struct drm_dp_mst_port *port)
4336 4415 {
4337 4416 struct drm_dp_mst_topology_state *topology_state;
4338 - struct drm_dp_vcpi_allocation *pos;
4339 - bool found = false;
4417 + struct drm_dp_mst_atomic_payload *payload;
4418 + struct drm_connector_state *old_conn_state, *new_conn_state;
4419 + bool update_payload = true;
4420 +
4421 + old_conn_state = drm_atomic_get_old_connector_state(state, port->connector);
4422 + if (!old_conn_state->crtc)
4423 + return 0;
4424 +
4425 + /* If the CRTC isn't disabled by this state, don't release its payload */
4426 + new_conn_state = drm_atomic_get_new_connector_state(state, port->connector);
4427 + if (new_conn_state->crtc) {
4428 + struct drm_crtc_state *crtc_state =
4429 + drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
4430 +
4431 + /* No modeset means no payload changes, so it's safe to not pull in the MST state */
4432 + if (!crtc_state || !drm_atomic_crtc_needs_modeset(crtc_state))
4433 + return 0;
4434 +
4435 + if (!crtc_state->mode_changed && !crtc_state->connectors_changed)
4436 + update_payload = false;
4437 + }
4340 4438
4341 4439 topology_state = drm_atomic_get_mst_topology_state(state, mgr);
4342 4440 if (IS_ERR(topology_state))
4343 4441 return PTR_ERR(topology_state);
4344 4442
4345 - list_for_each_entry(pos, &topology_state->vcpis, next) {
4346 - if (pos->port == port) {
4347 - found = true;
4348 - break;
4349 - }
4350 - }
4351 - if (WARN_ON(!found)) {
4352 - drm_err(mgr->dev, "no VCPI for [MST PORT:%p] found in mst state %p\n",
4443 + topology_state->pending_crtc_mask |= drm_crtc_mask(old_conn_state->crtc);
4444 + if (!update_payload)
4445 + return 0;
4446 +
4447 + payload = drm_atomic_get_mst_payload_state(topology_state, port);
4448 + if (WARN_ON(!payload)) {
4449 + drm_err(mgr->dev, "No payload for [MST PORT:%p] found in mst state %p\n",
4353 4450 port, &topology_state->base);
4354 4451 return -EINVAL;
4355 4452 }
4356 4453
4357 - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] VCPI %d -> 0\n", port, pos->vcpi);
4358 - if (pos->vcpi) {
4454 + if (new_conn_state->crtc)
4455 + return 0;
4456 +
4457 + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, payload->time_slots);
4458 + if (!payload->delete) {
4359 4459 drm_dp_mst_put_port_malloc(port);
4360 - pos->vcpi = 0;
4361 - pos->pbn = 0;
4460 + payload->pbn = 0;
4461 + payload->delete = true;
4462 + topology_state->payload_mask &= ~BIT(payload->vcpi - 1);
4362 4463 }
4363 4464
4364 4465 return 0;
4365 4466 }
4366 - EXPORT_SYMBOL(drm_dp_atomic_release_vcpi_slots);
4467 + EXPORT_SYMBOL(drm_dp_atomic_release_time_slots);
4468 +
4469 + /**
4470 + * drm_dp_mst_atomic_setup_commit() - setup_commit hook for MST helpers
4471 + * @state: global atomic state
4472 + *
4473 + * This function saves all of the &drm_crtc_commit structs in an atomic state that touch any CRTCs
4474 + * currently assigned to an MST topology.
Drivers must call this hook from their
4475 + * &drm_mode_config_helper_funcs.atomic_commit_setup hook.
4476 + *
4477 + * Returns:
4478 + * 0 if all CRTC commits were retrieved successfully, negative error code otherwise
4479 + */
4480 + int drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state)
4481 + {
4482 + struct drm_dp_mst_topology_mgr *mgr;
4483 + struct drm_dp_mst_topology_state *mst_state;
4484 + struct drm_crtc *crtc;
4485 + struct drm_crtc_state *crtc_state;
4486 + int i, j, commit_idx, num_commit_deps;
4487 +
4488 + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
4489 + if (!mst_state->pending_crtc_mask)
4490 + continue;
4491 +
4492 + num_commit_deps = hweight32(mst_state->pending_crtc_mask);
4493 + mst_state->commit_deps = kmalloc_array(num_commit_deps,
4494 + sizeof(*mst_state->commit_deps), GFP_KERNEL);
4495 + if (!mst_state->commit_deps)
4496 + return -ENOMEM;
4497 + mst_state->num_commit_deps = num_commit_deps;
4498 +
4499 + commit_idx = 0;
4500 + for_each_new_crtc_in_state(state, crtc, crtc_state, j) {
4501 + if (mst_state->pending_crtc_mask & drm_crtc_mask(crtc)) {
4502 + mst_state->commit_deps[commit_idx++] =
4503 + drm_crtc_commit_get(crtc_state->commit);
4504 + }
4505 + }
4506 + }
4507 +
4508 + return 0;
4509 + }
4510 + EXPORT_SYMBOL(drm_dp_mst_atomic_setup_commit);
4511 +
4512 + /**
4513 + * drm_dp_mst_atomic_wait_for_dependencies() - Wait for all pending commits on MST topologies,
4514 + * prepare new MST state for commit
4515 + * @state: global atomic state
4516 + *
4517 + * Goes through any MST topologies in this atomic state, and waits for any pending commits which
4518 + * touched CRTCs that were/are on an MST topology to be programmed to hardware and flipped to before
4519 + * returning. This is to prevent multiple non-blocking commits affecting an MST topology from racing
4520 + * with each other by forcing them to be executed sequentially in situations where the only resource
4521 + * the modeset objects in these commits share is an MST topology.
4522 + *
4523 + * This function also prepares the new MST state for commit by performing some state preparation
4524 + * which can't be done until this point, such as reading back the final VC start slots (which are
4525 + * determined at commit-time) from the previous state.
4526 + *
4527 + * All MST drivers must call this function after calling drm_atomic_helper_wait_for_dependencies(),
4528 + * or whatever their equivalent of that is.
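 *
 * A minimal commit_tail ordering sketch (hypothetical driver code; apart from
 * drm_dp_mst_atomic_wait_for_dependencies() these are the usual atomic helpers):
 *
 *	drm_atomic_helper_wait_for_dependencies(state);
 *	drm_dp_mst_atomic_wait_for_dependencies(state);
 *	drm_atomic_helper_commit_modeset_disables(dev, state);
 *	...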
4529 + */ 4530 + void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state) 4531 + { 4532 + struct drm_dp_mst_topology_state *old_mst_state, *new_mst_state; 4533 + struct drm_dp_mst_topology_mgr *mgr; 4534 + struct drm_dp_mst_atomic_payload *old_payload, *new_payload; 4535 + int i, j, ret; 4536 + 4537 + for_each_oldnew_mst_mgr_in_state(state, mgr, old_mst_state, new_mst_state, i) { 4538 + for (j = 0; j < old_mst_state->num_commit_deps; j++) { 4539 + ret = drm_crtc_commit_wait(old_mst_state->commit_deps[j]); 4540 + if (ret < 0) 4541 + drm_err(state->dev, "Failed to wait for %s: %d\n", 4542 + old_mst_state->commit_deps[j]->crtc->name, ret); 4543 + } 4544 + 4545 + /* Now that previous state is committed, it's safe to copy over the start slot 4546 + * assignments 4547 + */ 4548 + list_for_each_entry(old_payload, &old_mst_state->payloads, next) { 4549 + if (old_payload->delete) 4550 + continue; 4551 + 4552 + new_payload = drm_atomic_get_mst_payload_state(new_mst_state, 4553 + old_payload->port); 4554 + new_payload->vc_start_slot = old_payload->vc_start_slot; 4555 + } 4556 + } 4557 + } 4558 + EXPORT_SYMBOL(drm_dp_mst_atomic_wait_for_dependencies); 4559 + 4560 + /** 4561 + * drm_dp_mst_root_conn_atomic_check() - Serialize CRTC commits on MST-capable connectors operating 4562 + * in SST mode 4563 + * @new_conn_state: The new connector state of the &drm_connector 4564 + * @mgr: The MST topology manager for the &drm_connector 4565 + * 4566 + * Since MST uses fake &drm_encoder structs, the generic atomic modesetting code isn't able to 4567 + * serialize non-blocking commits happening on the real DP connector of an MST topology switching 4568 + * into/away from MST mode - as the CRTC on the real DP connector and the CRTCs on the connector's 4569 + * MST topology will never share the same &drm_encoder. 4570 + * 4571 + * This function takes care of this serialization issue, by checking a root MST connector's atomic 4572 + * state to determine if it is about to have a modeset - and then pulling in the MST topology state 4573 + * if so, along with adding any relevant CRTCs to &drm_dp_mst_topology_state.pending_crtc_mask. 4574 + * 4575 + * Drivers implementing MST must call this function from the 4576 + * &drm_connector_helper_funcs.atomic_check hook of any physical DP &drm_connector capable of 4577 + * driving MST sinks. 
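 *
 * A minimal sketch of such a hook (hypothetical connector code; my_dp,
 * connector_to_my_dp() and the mst_mgr field are illustrative only):
 *
 *	static int my_dp_connector_atomic_check(struct drm_connector *connector,
 *						struct drm_atomic_state *state)
 *	{
 *		struct my_dp *dp = connector_to_my_dp(connector);
 *		struct drm_connector_state *conn_state =
 *			drm_atomic_get_new_connector_state(state, connector);
 *
 *		return drm_dp_mst_root_conn_atomic_check(conn_state, &dp->mst_mgr);
 *	}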
4578 + * 4579 + * Returns: 4580 + * 0 on success, negative error code otherwise 4581 + */ 4582 + int drm_dp_mst_root_conn_atomic_check(struct drm_connector_state *new_conn_state, 4583 + struct drm_dp_mst_topology_mgr *mgr) 4584 + { 4585 + struct drm_atomic_state *state = new_conn_state->state; 4586 + struct drm_connector_state *old_conn_state = 4587 + drm_atomic_get_old_connector_state(state, new_conn_state->connector); 4588 + struct drm_crtc_state *crtc_state; 4589 + struct drm_dp_mst_topology_state *mst_state = NULL; 4590 + 4591 + if (new_conn_state->crtc) { 4592 + crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); 4593 + if (crtc_state && drm_atomic_crtc_needs_modeset(crtc_state)) { 4594 + mst_state = drm_atomic_get_mst_topology_state(state, mgr); 4595 + if (IS_ERR(mst_state)) 4596 + return PTR_ERR(mst_state); 4597 + 4598 + mst_state->pending_crtc_mask |= drm_crtc_mask(new_conn_state->crtc); 4599 + } 4600 + } 4601 + 4602 + if (old_conn_state->crtc) { 4603 + crtc_state = drm_atomic_get_new_crtc_state(state, old_conn_state->crtc); 4604 + if (crtc_state && drm_atomic_crtc_needs_modeset(crtc_state)) { 4605 + if (!mst_state) { 4606 + mst_state = drm_atomic_get_mst_topology_state(state, mgr); 4607 + if (IS_ERR(mst_state)) 4608 + return PTR_ERR(mst_state); 4609 + } 4610 + 4611 + mst_state->pending_crtc_mask |= drm_crtc_mask(old_conn_state->crtc); 4612 + } 4613 + } 4614 + 4615 + return 0; 4616 + } 4617 + EXPORT_SYMBOL(drm_dp_mst_root_conn_atomic_check); 4367 4618 4368 4619 /** 4369 4620 * drm_dp_mst_update_slots() - updates the slot info depending on the DP ecoding format ··· 4508 4515 } 4509 4516 EXPORT_SYMBOL(drm_dp_mst_update_slots); 4510 4517 4511 - /** 4512 - * drm_dp_mst_allocate_vcpi() - Allocate a virtual channel 4513 - * @mgr: manager for this port 4514 - * @port: port to allocate a virtual channel for. 4515 - * @pbn: payload bandwidth number to request 4516 - * @slots: returned number of slots for this PBN. 
4517 - */ 4518 - bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, 4519 - struct drm_dp_mst_port *port, int pbn, int slots) 4520 - { 4521 - int ret; 4522 - 4523 - if (slots < 0) 4524 - return false; 4525 - 4526 - port = drm_dp_mst_topology_get_port_validated(mgr, port); 4527 - if (!port) 4528 - return false; 4529 - 4530 - if (port->vcpi.vcpi > 0) { 4531 - drm_dbg_kms(mgr->dev, 4532 - "payload: vcpi %d already allocated for pbn %d - requested pbn %d\n", 4533 - port->vcpi.vcpi, port->vcpi.pbn, pbn); 4534 - if (pbn == port->vcpi.pbn) { 4535 - drm_dp_mst_topology_put_port(port); 4536 - return true; 4537 - } 4538 - } 4539 - 4540 - ret = drm_dp_init_vcpi(mgr, &port->vcpi, pbn, slots); 4541 - if (ret) { 4542 - drm_dbg_kms(mgr->dev, "failed to init vcpi slots=%d ret=%d\n", 4543 - DIV_ROUND_UP(pbn, mgr->pbn_div), ret); 4544 - drm_dp_mst_topology_put_port(port); 4545 - goto out; 4546 - } 4547 - drm_dbg_kms(mgr->dev, "initing vcpi for pbn=%d slots=%d\n", pbn, port->vcpi.num_slots); 4548 - 4549 - /* Keep port allocated until its payload has been removed */ 4550 - drm_dp_mst_get_port_malloc(port); 4551 - drm_dp_mst_topology_put_port(port); 4552 - return true; 4553 - out: 4554 - return false; 4555 - } 4556 - EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi); 4557 - 4558 - int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) 4559 - { 4560 - int slots = 0; 4561 - 4562 - port = drm_dp_mst_topology_get_port_validated(mgr, port); 4563 - if (!port) 4564 - return slots; 4565 - 4566 - slots = port->vcpi.num_slots; 4567 - drm_dp_mst_topology_put_port(port); 4568 - return slots; 4569 - } 4570 - EXPORT_SYMBOL(drm_dp_mst_get_vcpi_slots); 4571 - 4572 - /** 4573 - * drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI 4574 - * @mgr: manager for this port 4575 - * @port: unverified pointer to a port. 4576 - * 4577 - * This just resets the number of slots for the ports VCPI for later programming. 4578 - */ 4579 - void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) 4580 - { 4581 - /* 4582 - * A port with VCPI will remain allocated until its VCPI is 4583 - * released, no verified ref needed 4584 - */ 4585 - 4586 - port->vcpi.num_slots = 0; 4587 - } 4588 - EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots); 4589 - 4590 - /** 4591 - * drm_dp_mst_deallocate_vcpi() - deallocate a VCPI 4592 - * @mgr: manager for this port 4593 - * @port: port to deallocate vcpi for 4594 - * 4595 - * This can be called unconditionally, regardless of whether 4596 - * drm_dp_mst_allocate_vcpi() succeeded or not. 
4597 - */ 4598 - void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, 4599 - struct drm_dp_mst_port *port) 4600 - { 4601 - bool skip; 4602 - 4603 - if (!port->vcpi.vcpi) 4604 - return; 4605 - 4606 - mutex_lock(&mgr->lock); 4607 - skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); 4608 - mutex_unlock(&mgr->lock); 4609 - 4610 - if (skip) 4611 - return; 4612 - 4613 - drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); 4614 - port->vcpi.num_slots = 0; 4615 - port->vcpi.pbn = 0; 4616 - port->vcpi.aligned_pbn = 0; 4617 - port->vcpi.vcpi = 0; 4618 - drm_dp_mst_put_port_malloc(port); 4619 - } 4620 - EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi); 4621 - 4622 4518 static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr, 4623 - int id, struct drm_dp_payload *payload) 4519 + int id, u8 start_slot, u8 num_slots) 4624 4520 { 4625 4521 u8 payload_alloc[3], status; 4626 4522 int ret; ··· 4519 4637 DP_PAYLOAD_TABLE_UPDATED); 4520 4638 4521 4639 payload_alloc[0] = id; 4522 - payload_alloc[1] = payload->start_slot; 4523 - payload_alloc[2] = payload->num_slots; 4640 + payload_alloc[1] = start_slot; 4641 + payload_alloc[2] = num_slots; 4524 4642 4525 4643 ret = drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET, payload_alloc, 3); 4526 4644 if (ret != 3) { ··· 4735 4853 void drm_dp_mst_dump_topology(struct seq_file *m, 4736 4854 struct drm_dp_mst_topology_mgr *mgr) 4737 4855 { 4738 - int i; 4739 - struct drm_dp_mst_port *port; 4856 + struct drm_dp_mst_topology_state *state; 4857 + struct drm_dp_mst_atomic_payload *payload; 4858 + int i, ret; 4740 4859 4741 4860 mutex_lock(&mgr->lock); 4742 4861 if (mgr->mst_primary) ··· 4746 4863 /* dump VCPIs */ 4747 4864 mutex_unlock(&mgr->lock); 4748 4865 4749 - mutex_lock(&mgr->payload_lock); 4750 - seq_printf(m, "\n*** VCPI Info ***\n"); 4751 - seq_printf(m, "payload_mask: %lx, vcpi_mask: %lx, max_payloads: %d\n", mgr->payload_mask, mgr->vcpi_mask, mgr->max_payloads); 4866 + ret = drm_modeset_lock_single_interruptible(&mgr->base.lock); 4867 + if (ret < 0) 4868 + return; 4752 4869 4753 - seq_printf(m, "\n| idx | port # | vcp_id | # slots | sink name |\n"); 4870 + state = to_drm_dp_mst_topology_state(mgr->base.state); 4871 + seq_printf(m, "\n*** Atomic state info ***\n"); 4872 + seq_printf(m, "payload_mask: %x, max_payloads: %d, start_slot: %u, pbn_div: %d\n", 4873 + state->payload_mask, mgr->max_payloads, state->start_slot, state->pbn_div); 4874 + 4875 + seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | sink name |\n"); 4754 4876 for (i = 0; i < mgr->max_payloads; i++) { 4755 - if (mgr->proposed_vcpis[i]) { 4877 + list_for_each_entry(payload, &state->payloads, next) { 4756 4878 char name[14]; 4757 4879 4758 - port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); 4759 - fetch_monitor_name(mgr, port, name, sizeof(name)); 4760 - seq_printf(m, "%10d%10d%10d%10d%20s\n", 4880 + if (payload->vcpi != i || payload->delete) 4881 + continue; 4882 + 4883 + fetch_monitor_name(mgr, payload->port, name, sizeof(name)); 4884 + seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %19s\n", 4761 4885 i, 4762 - port->port_num, 4763 - port->vcpi.vcpi, 4764 - port->vcpi.num_slots, 4886 + payload->port->port_num, 4887 + payload->vcpi, 4888 + payload->vc_start_slot, 4889 + payload->vc_start_slot + payload->time_slots - 1, 4890 + payload->pbn, 4891 + payload->dsc_enabled ? "Y" : "N", 4765 4892 (*name != 0) ? 
name : "Unknown"); 4766 - } else 4767 - seq_printf(m, "%6d - Unused\n", i); 4893 + } 4768 4894 } 4769 - seq_printf(m, "\n*** Payload Info ***\n"); 4770 - seq_printf(m, "| idx | state | start slot | # slots |\n"); 4771 - for (i = 0; i < mgr->max_payloads; i++) { 4772 - seq_printf(m, "%10d%10d%15d%10d\n", 4773 - i, 4774 - mgr->payloads[i].payload_state, 4775 - mgr->payloads[i].start_slot, 4776 - mgr->payloads[i].num_slots); 4777 - } 4778 - mutex_unlock(&mgr->payload_lock); 4779 4895 4780 4896 seq_printf(m, "\n*** DPCD Info ***\n"); 4781 4897 mutex_lock(&mgr->lock); ··· 4820 4938 4821 4939 out: 4822 4940 mutex_unlock(&mgr->lock); 4823 - 4941 + drm_modeset_unlock(&mgr->base.lock); 4824 4942 } 4825 4943 EXPORT_SYMBOL(drm_dp_mst_dump_topology); 4826 4944 ··· 4942 5060 { 4943 5061 struct drm_dp_mst_topology_state *state, *old_state = 4944 5062 to_dp_mst_topology_state(obj->state); 4945 - struct drm_dp_vcpi_allocation *pos, *vcpi; 5063 + struct drm_dp_mst_atomic_payload *pos, *payload; 4946 5064 4947 5065 state = kmemdup(old_state, sizeof(*state), GFP_KERNEL); 4948 5066 if (!state) ··· 4950 5068 4951 5069 __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base); 4952 5070 4953 - INIT_LIST_HEAD(&state->vcpis); 5071 + INIT_LIST_HEAD(&state->payloads); 5072 + state->commit_deps = NULL; 5073 + state->num_commit_deps = 0; 5074 + state->pending_crtc_mask = 0; 4954 5075 4955 - list_for_each_entry(pos, &old_state->vcpis, next) { 4956 - /* Prune leftover freed VCPI allocations */ 4957 - if (!pos->vcpi) 5076 + list_for_each_entry(pos, &old_state->payloads, next) { 5077 + /* Prune leftover freed timeslot allocations */ 5078 + if (pos->delete) 4958 5079 continue; 4959 5080 4960 - vcpi = kmemdup(pos, sizeof(*vcpi), GFP_KERNEL); 4961 - if (!vcpi) 5081 + payload = kmemdup(pos, sizeof(*payload), GFP_KERNEL); 5082 + if (!payload) 4962 5083 goto fail; 4963 5084 4964 - drm_dp_mst_get_port_malloc(vcpi->port); 4965 - list_add(&vcpi->next, &state->vcpis); 5085 + drm_dp_mst_get_port_malloc(payload->port); 5086 + list_add(&payload->next, &state->payloads); 4966 5087 } 4967 5088 4968 5089 return &state->base; 4969 5090 4970 5091 fail: 4971 - list_for_each_entry_safe(pos, vcpi, &state->vcpis, next) { 5092 + list_for_each_entry_safe(pos, payload, &state->payloads, next) { 4972 5093 drm_dp_mst_put_port_malloc(pos->port); 4973 5094 kfree(pos); 4974 5095 } ··· 4985 5100 { 4986 5101 struct drm_dp_mst_topology_state *mst_state = 4987 5102 to_dp_mst_topology_state(state); 4988 - struct drm_dp_vcpi_allocation *pos, *tmp; 5103 + struct drm_dp_mst_atomic_payload *pos, *tmp; 5104 + int i; 4989 5105 4990 - list_for_each_entry_safe(pos, tmp, &mst_state->vcpis, next) { 4991 - /* We only keep references to ports with non-zero VCPIs */ 4992 - if (pos->vcpi) 5106 + list_for_each_entry_safe(pos, tmp, &mst_state->payloads, next) { 5107 + /* We only keep references to ports with active payloads */ 5108 + if (!pos->delete) 4993 5109 drm_dp_mst_put_port_malloc(pos->port); 4994 5110 kfree(pos); 4995 5111 } 4996 5112 5113 + for (i = 0; i < mst_state->num_commit_deps; i++) 5114 + drm_crtc_commit_put(mst_state->commit_deps[i]); 5115 + 5116 + kfree(mst_state->commit_deps); 4997 5117 kfree(mst_state); 4998 5118 } 4999 5119 ··· 5025 5135 drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb, 5026 5136 struct drm_dp_mst_topology_state *state) 5027 5137 { 5028 - struct drm_dp_vcpi_allocation *vcpi; 5138 + struct drm_dp_mst_atomic_payload *payload; 5029 5139 struct drm_dp_mst_port *port; 5030 5140 int pbn_used = 0, ret; 5031 5141 
bool found = false; ··· 5033 5143 /* Check that we have at least one port in our state that's downstream 5034 5144 * of this branch, otherwise we can skip this branch 5035 5145 */ 5036 - list_for_each_entry(vcpi, &state->vcpis, next) { 5037 - if (!vcpi->pbn || 5038 - !drm_dp_mst_port_downstream_of_branch(vcpi->port, mstb)) 5146 + list_for_each_entry(payload, &state->payloads, next) { 5147 + if (!payload->pbn || 5148 + !drm_dp_mst_port_downstream_of_branch(payload->port, mstb)) 5039 5149 continue; 5040 5150 5041 5151 found = true; ··· 5066 5176 drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, 5067 5177 struct drm_dp_mst_topology_state *state) 5068 5178 { 5069 - struct drm_dp_vcpi_allocation *vcpi; 5179 + struct drm_dp_mst_atomic_payload *payload; 5070 5180 int pbn_used = 0; 5071 5181 5072 5182 if (port->pdt == DP_PEER_DEVICE_NONE) 5073 5183 return 0; 5074 5184 5075 5185 if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { 5076 - bool found = false; 5077 - 5078 - list_for_each_entry(vcpi, &state->vcpis, next) { 5079 - if (vcpi->port != port) 5080 - continue; 5081 - if (!vcpi->pbn) 5082 - return 0; 5083 - 5084 - found = true; 5085 - break; 5086 - } 5087 - if (!found) 5186 + payload = drm_atomic_get_mst_payload_state(state, port); 5187 + if (!payload) 5088 5188 return 0; 5089 5189 5090 5190 /* ··· 5088 5208 return -EINVAL; 5089 5209 } 5090 5210 5091 - pbn_used = vcpi->pbn; 5211 + pbn_used = payload->pbn; 5092 5212 } else { 5093 5213 pbn_used = drm_dp_mst_atomic_check_mstb_bw_limit(port->mstb, 5094 5214 state); ··· 5110 5230 } 5111 5231 5112 5232 static inline int 5113 - drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr, 5114 - struct drm_dp_mst_topology_state *mst_state) 5233 + drm_dp_mst_atomic_check_payload_alloc_limits(struct drm_dp_mst_topology_mgr *mgr, 5234 + struct drm_dp_mst_topology_state *mst_state) 5115 5235 { 5116 - struct drm_dp_vcpi_allocation *vcpi; 5236 + struct drm_dp_mst_atomic_payload *payload; 5117 5237 int avail_slots = mst_state->total_avail_slots, payload_count = 0; 5118 5238 5119 - list_for_each_entry(vcpi, &mst_state->vcpis, next) { 5120 - /* Releasing VCPI is always OK-even if the port is gone */ 5121 - if (!vcpi->vcpi) { 5122 - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all VCPI slots\n", 5123 - vcpi->port); 5239 + list_for_each_entry(payload, &mst_state->payloads, next) { 5240 + /* Releasing payloads is always OK-even if the port is gone */ 5241 + if (payload->delete) { 5242 + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all time slots\n", 5243 + payload->port); 5124 5244 continue; 5125 5245 } 5126 5246 5127 - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d vcpi slots\n", 5128 - vcpi->port, vcpi->vcpi); 5247 + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d time slots\n", 5248 + payload->port, payload->time_slots); 5129 5249 5130 - avail_slots -= vcpi->vcpi; 5250 + avail_slots -= payload->time_slots; 5131 5251 if (avail_slots < 0) { 5132 5252 drm_dbg_atomic(mgr->dev, 5133 - "[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n", 5134 - vcpi->port, mst_state, avail_slots + vcpi->vcpi); 5253 + "[MST PORT:%p] not enough time slots in mst state %p (avail=%d)\n", 5254 + payload->port, mst_state, avail_slots + payload->time_slots); 5135 5255 return -ENOSPC; 5136 5256 } 5137 5257 ··· 5141 5261 mgr, mst_state, mgr->max_payloads); 5142 5262 return -EINVAL; 5143 5263 } 5264 + 5265 + /* Assign a VCPI */ 5266 + if (!payload->vcpi) { 5267 + payload->vcpi = ffz(mst_state->payload_mask) + 1; 5268 + 
drm_dbg_atomic(mgr->dev, "[MST PORT:%p] assigned VCPI #%d\n", 5269 + payload->port, payload->vcpi); 5270 + mst_state->payload_mask |= BIT(payload->vcpi - 1); 5271 + } 5144 5272 } 5145 - drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p VCPI avail=%d used=%d\n", 5146 - mgr, mst_state, avail_slots, mst_state->total_avail_slots - avail_slots); 5273 + 5274 + if (!payload_count) 5275 + mst_state->pbn_div = 0; 5276 + 5277 + drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p TU pbn_div=%d avail=%d used=%d\n", 5278 + mgr, mst_state, mst_state->pbn_div, avail_slots, 5279 + mst_state->total_avail_slots - avail_slots); 5147 5280 5148 5281 return 0; 5149 5282 } ··· 5177 5284 int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr) 5178 5285 { 5179 5286 struct drm_dp_mst_topology_state *mst_state; 5180 - struct drm_dp_vcpi_allocation *pos; 5287 + struct drm_dp_mst_atomic_payload *pos; 5181 5288 struct drm_connector *connector; 5182 5289 struct drm_connector_state *conn_state; 5183 5290 struct drm_crtc *crtc; ··· 5188 5295 if (IS_ERR(mst_state)) 5189 5296 return -EINVAL; 5190 5297 5191 - list_for_each_entry(pos, &mst_state->vcpis, next) { 5298 + list_for_each_entry(pos, &mst_state->payloads, next) { 5192 5299 5193 5300 connector = pos->port->connector; 5194 5301 ··· 5227 5334 * @state: Pointer to the new drm_atomic_state 5228 5335 * @port: Pointer to the affected MST Port 5229 5336 * @pbn: Newly recalculated bw required for link with DSC enabled 5230 - * @pbn_div: Divider to calculate correct number of pbn per slot 5231 5337 * @enable: Boolean flag to enable or disable DSC on the port 5232 5338 * 5233 5339 * This function enables DSC on the given Port ··· 5237 5345 */ 5238 5346 int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, 5239 5347 struct drm_dp_mst_port *port, 5240 - int pbn, int pbn_div, 5241 - bool enable) 5348 + int pbn, bool enable) 5242 5349 { 5243 5350 struct drm_dp_mst_topology_state *mst_state; 5244 - struct drm_dp_vcpi_allocation *pos; 5245 - bool found = false; 5246 - int vcpi = 0; 5351 + struct drm_dp_mst_atomic_payload *payload; 5352 + int time_slots = 0; 5247 5353 5248 5354 mst_state = drm_atomic_get_mst_topology_state(state, port->mgr); 5249 - 5250 5355 if (IS_ERR(mst_state)) 5251 5356 return PTR_ERR(mst_state); 5252 5357 5253 - list_for_each_entry(pos, &mst_state->vcpis, next) { 5254 - if (pos->port == port) { 5255 - found = true; 5256 - break; 5257 - } 5258 - } 5259 - 5260 - if (!found) { 5358 + payload = drm_atomic_get_mst_payload_state(mst_state, port); 5359 + if (!payload) { 5261 5360 drm_dbg_atomic(state->dev, 5262 - "[MST PORT:%p] Couldn't find VCPI allocation in mst state %p\n", 5361 + "[MST PORT:%p] Couldn't find payload in mst state %p\n", 5263 5362 port, mst_state); 5264 5363 return -EINVAL; 5265 5364 } 5266 5365 5267 - if (pos->dsc_enabled == enable) { 5366 + if (payload->dsc_enabled == enable) { 5268 5367 drm_dbg_atomic(state->dev, 5269 - "[MST PORT:%p] DSC flag is already set to %d, returning %d VCPI slots\n", 5270 - port, enable, pos->vcpi); 5271 - vcpi = pos->vcpi; 5368 + "[MST PORT:%p] DSC flag is already set to %d, returning %d time slots\n", 5369 + port, enable, payload->time_slots); 5370 + time_slots = payload->time_slots; 5272 5371 } 5273 5372 5274 5373 if (enable) { 5275 - vcpi = drm_dp_atomic_find_vcpi_slots(state, port->mgr, port, pbn, pbn_div); 5374 + time_slots = drm_dp_atomic_find_time_slots(state, port->mgr, port, pbn); 5276 5375 drm_dbg_atomic(state->dev, 5277 - "[MST PORT:%p] Enabling 
DSC flag, reallocating %d VCPI slots on the port\n", 5278 - port, vcpi); 5279 - if (vcpi < 0) 5376 + "[MST PORT:%p] Enabling DSC flag, reallocating %d time slots on the port\n", 5377 + port, time_slots); 5378 + if (time_slots < 0) 5280 5379 return -EINVAL; 5281 5380 } 5282 5381 5283 - pos->dsc_enabled = enable; 5382 + payload->dsc_enabled = enable; 5284 5383 5285 - return vcpi; 5384 + return time_slots; 5286 5385 } 5287 5386 EXPORT_SYMBOL(drm_dp_mst_atomic_enable_dsc); 5387 + 5288 5388 /** 5289 5389 * drm_dp_mst_atomic_check - Check that the new state of an MST topology in an 5290 5390 * atomic update is valid ··· 5284 5400 * 5285 5401 * Checks the given topology state for an atomic update to ensure that it's 5286 5402 * valid. This includes checking whether there's enough bandwidth to support 5287 - * the new VCPI allocations in the atomic update. 5403 + * the new timeslot allocations in the atomic update. 5288 5404 * 5289 5405 * Any atomic drivers supporting DP MST must make sure to call this after 5290 5406 * checking the rest of their state in their 5291 5407 * &drm_mode_config_funcs.atomic_check() callback. 5292 5408 * 5293 5409 * See also: 5294 - * drm_dp_atomic_find_vcpi_slots() 5295 - * drm_dp_atomic_release_vcpi_slots() 5410 + * drm_dp_atomic_find_time_slots() 5411 + * drm_dp_atomic_release_time_slots() 5296 5412 * 5297 5413 * Returns: 5298 5414 * ··· 5308 5424 if (!mgr->mst_state) 5309 5425 continue; 5310 5426 5311 - ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr, mst_state); 5427 + ret = drm_dp_mst_atomic_check_payload_alloc_limits(mgr, mst_state); 5312 5428 if (ret) 5313 5429 break; 5314 5430 ··· 5334 5450 5335 5451 /** 5336 5452 * drm_atomic_get_mst_topology_state: get MST topology state 5337 - * 5338 5453 * @state: global atomic state 5339 5454 * @mgr: MST topology manager, also the private object in this case 5340 5455 * ··· 5353 5470 EXPORT_SYMBOL(drm_atomic_get_mst_topology_state); 5354 5471 5355 5472 /** 5473 + * drm_atomic_get_new_mst_topology_state: get new MST topology state in atomic state, if any 5474 + * @state: global atomic state 5475 + * @mgr: MST topology manager, also the private object in this case 5476 + * 5477 + * This function wraps drm_atomic_get_priv_obj_state() passing in the MST atomic 5478 + * state vtable so that the private object state returned is that of a MST 5479 + * topology object. 5480 + * 5481 + * Returns: 5482 + * 5483 + * The MST topology state, or NULL if there's no topology state for this MST mgr 5484 + * in the global atomic state 5485 + */ 5486 + struct drm_dp_mst_topology_state * 5487 + drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state, 5488 + struct drm_dp_mst_topology_mgr *mgr) 5489 + { 5490 + struct drm_private_state *priv_state = 5491 + drm_atomic_get_new_private_obj_state(state, &mgr->base); 5492 + 5493 + return priv_state ? to_dp_mst_topology_state(priv_state) : NULL; 5494 + } 5495 + EXPORT_SYMBOL(drm_atomic_get_new_mst_topology_state); 5496 + 5497 + /** 5356 5498 * drm_dp_mst_topology_mgr_init - initialise a topology manager 5357 5499 * @mgr: manager struct to initialise 5358 5500 * @dev: device providing this structure - for i2c addition. 
5359 5501 * @aux: DP helper aux channel to talk to this device 5360 5502 * @max_dpcd_transaction_bytes: hw specific DPCD transaction limit 5361 5503 * @max_payloads: maximum number of payloads this GPU can source 5362 - * @max_lane_count: maximum number of lanes this GPU supports 5363 - * @max_link_rate: maximum link rate per lane this GPU supports in kHz 5364 5504 * @conn_base_id: the connector object ID the MST device is connected to. 5365 5505 * 5366 5506 * Return 0 for success, or negative error code on failure ··· 5391 5485 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, 5392 5486 struct drm_device *dev, struct drm_dp_aux *aux, 5393 5487 int max_dpcd_transaction_bytes, int max_payloads, 5394 - int max_lane_count, int max_link_rate, 5395 5488 int conn_base_id) 5396 5489 { 5397 5490 struct drm_dp_mst_topology_state *mst_state; 5398 5491 5399 5492 mutex_init(&mgr->lock); 5400 5493 mutex_init(&mgr->qlock); 5401 - mutex_init(&mgr->payload_lock); 5402 5494 mutex_init(&mgr->delayed_destroy_lock); 5403 5495 mutex_init(&mgr->up_req_lock); 5404 5496 mutex_init(&mgr->probe_lock); ··· 5426 5522 mgr->aux = aux; 5427 5523 mgr->max_dpcd_transaction_bytes = max_dpcd_transaction_bytes; 5428 5524 mgr->max_payloads = max_payloads; 5429 - mgr->max_lane_count = max_lane_count; 5430 - mgr->max_link_rate = max_link_rate; 5431 5525 mgr->conn_base_id = conn_base_id; 5432 - if (max_payloads + 1 > sizeof(mgr->payload_mask) * 8 || 5433 - max_payloads + 1 > sizeof(mgr->vcpi_mask) * 8) 5434 - return -EINVAL; 5435 - mgr->payloads = kcalloc(max_payloads, sizeof(struct drm_dp_payload), GFP_KERNEL); 5436 - if (!mgr->payloads) 5437 - return -ENOMEM; 5438 - mgr->proposed_vcpis = kcalloc(max_payloads, sizeof(struct drm_dp_vcpi *), GFP_KERNEL); 5439 - if (!mgr->proposed_vcpis) 5440 - return -ENOMEM; 5441 - set_bit(0, &mgr->payload_mask); 5442 5526 5443 5527 mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL); 5444 5528 if (mst_state == NULL) ··· 5436 5544 mst_state->start_slot = 1; 5437 5545 5438 5546 mst_state->mgr = mgr; 5439 - INIT_LIST_HEAD(&mst_state->vcpis); 5547 + INIT_LIST_HEAD(&mst_state->payloads); 5440 5548 5441 5549 drm_atomic_private_obj_init(dev, &mgr->base, 5442 5550 &mst_state->base, ··· 5459 5567 destroy_workqueue(mgr->delayed_destroy_wq); 5460 5568 mgr->delayed_destroy_wq = NULL; 5461 5569 } 5462 - mutex_lock(&mgr->payload_lock); 5463 - kfree(mgr->payloads); 5464 - mgr->payloads = NULL; 5465 - kfree(mgr->proposed_vcpis); 5466 - mgr->proposed_vcpis = NULL; 5467 - mutex_unlock(&mgr->payload_lock); 5468 5570 mgr->dev = NULL; 5469 5571 mgr->aux = NULL; 5470 5572 drm_atomic_private_obj_fini(&mgr->base); 5471 5573 mgr->funcs = NULL; 5472 5574 5473 5575 mutex_destroy(&mgr->delayed_destroy_lock); 5474 - mutex_destroy(&mgr->payload_lock); 5475 5576 mutex_destroy(&mgr->qlock); 5476 5577 mutex_destroy(&mgr->lock); 5477 5578 mutex_destroy(&mgr->up_req_lock);
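
Taken together, the helpers above replace the legacy VCPI flow with a check-time allocation (drm_dp_atomic_find_time_slots() / drm_dp_atomic_release_time_slots(), validated by drm_dp_mst_atomic_check()) plus a two-part commit-time programming sequence. A minimal sketch of the commit-time half, assuming a hypothetical driver: my_encoder and its mst_mgr/port fields are illustrative only, the ACT handling between the two parts is driver-specific, and the drm_dp_* calls are the helpers introduced in this diff:

#include <drm/display/drm_dp_mst_helper.h>

struct my_encoder {				/* illustrative only */
	struct drm_dp_mst_topology_mgr *mst_mgr;
	struct drm_dp_mst_port *port;
};

static void my_mst_enable_stream(struct my_encoder *enc,
				 struct drm_atomic_state *state)
{
	/* Assumes the MST state was pulled into this commit at check time */
	struct drm_dp_mst_topology_state *mst_state =
		drm_atomic_get_new_mst_topology_state(state, enc->mst_mgr);
	struct drm_dp_mst_atomic_payload *payload =
		drm_atomic_get_mst_payload_state(mst_state, enc->port);

	/* Part 1: assign a start slot and write this payload's VCPI into the
	 * DPCD payload table. On failure payload->vc_start_slot is set to -1,
	 * which makes part 2 skip the payload.
	 */
	drm_dp_add_payload_part1(enc->mst_mgr, mst_state, payload);

	/* ...driver triggers ACT and waits for it to complete here... */

	/* Part 2: send the ALLOCATE_PAYLOAD sideband message down the topology */
	drm_dp_add_payload_part2(enc->mst_mgr, state, payload);
}

static void my_mst_disable_stream(struct my_encoder *enc,
				  struct drm_atomic_state *state)
{
	struct drm_dp_mst_topology_state *mst_state =
		drm_atomic_get_new_mst_topology_state(state, enc->mst_mgr);

	/* Tears the payload down (if part 1 ever succeeded for it) and shifts
	 * the start slots of the payloads behind it towards the front of the
	 * VC table.
	 */
	drm_dp_remove_payload(enc->mst_mgr, mst_state,
			      drm_atomic_get_mst_payload_state(mst_state, enc->port));
}

Because the final start slots are only known at commit time, such a driver must also wire up drm_dp_mst_atomic_setup_commit() and drm_dp_mst_atomic_wait_for_dependencies() as documented above, so that commits touching the same topology execute in order.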
+75 -8
drivers/gpu/drm/drm_atomic_helper.c
··· 702 702 703 703 if (funcs->atomic_check) 704 704 ret = funcs->atomic_check(connector, state); 705 - if (ret) 705 + if (ret) { 706 + drm_dbg_atomic(dev, 707 + "[CONNECTOR:%d:%s] driver check failed\n", 708 + connector->base.id, connector->name); 706 709 return ret; 710 + } 707 711 708 712 connectors_mask |= BIT(i); 709 713 } ··· 749 745 750 746 if (funcs->atomic_check) 751 747 ret = funcs->atomic_check(connector, state); 752 - if (ret) 748 + if (ret) { 749 + drm_dbg_atomic(dev, 750 + "[CONNECTOR:%d:%s] driver check failed\n", 751 + connector->base.id, connector->name); 753 752 return ret; 753 + } 754 754 } 755 755 756 756 /* ··· 784 776 return mode_fixup(state); 785 777 } 786 778 EXPORT_SYMBOL(drm_atomic_helper_check_modeset); 779 + 780 + /** 781 + * drm_atomic_helper_check_wb_encoder_state() - Check writeback encoder state 782 + * @encoder: encoder to check 783 + * @conn_state: connector state to check 784 + * 785 + * Checks if the writeback connector state is valid, and returns an error if it 786 + * isn't. 787 + * 788 + * RETURNS: 789 + * Zero for success or -errno 790 + */ 791 + int 792 + drm_atomic_helper_check_wb_encoder_state(struct drm_encoder *encoder, 793 + struct drm_connector_state *conn_state) 794 + { 795 + struct drm_writeback_job *wb_job = conn_state->writeback_job; 796 + struct drm_property_blob *pixel_format_blob; 797 + struct drm_framebuffer *fb; 798 + size_t i, nformats; 799 + u32 *formats; 800 + 801 + if (!wb_job || !wb_job->fb) 802 + return 0; 803 + 804 + pixel_format_blob = wb_job->connector->pixel_formats_blob_ptr; 805 + nformats = pixel_format_blob->length / sizeof(u32); 806 + formats = pixel_format_blob->data; 807 + fb = wb_job->fb; 808 + 809 + for (i = 0; i < nformats; i++) 810 + if (fb->format->format == formats[i]) 811 + return 0; 812 + 813 + drm_dbg_kms(encoder->dev, "Invalid pixel format %p4cc\n", &fb->format->format); 814 + 815 + return -EINVAL; 816 + } 817 + EXPORT_SYMBOL(drm_atomic_helper_check_wb_encoder_state); 787 818 788 819 /** 789 820 * drm_atomic_helper_check_plane_state() - Check plane state for validity ··· 1835 1788 struct drm_plane_state *old_plane_state = NULL; 1836 1789 struct drm_plane_state *new_plane_state = NULL; 1837 1790 const struct drm_plane_helper_funcs *funcs; 1838 - int i, n_planes = 0; 1791 + int i, ret, n_planes = 0; 1839 1792 1840 1793 for_each_new_crtc_in_state(state, crtc, crtc_state, i) { 1841 1794 if (drm_atomic_crtc_needs_modeset(crtc_state)) ··· 1846 1799 n_planes++; 1847 1800 1848 1801 /* FIXME: we support only single plane updates for now */ 1849 - if (n_planes != 1) 1802 + if (n_planes != 1) { 1803 + drm_dbg_atomic(dev, 1804 + "only single plane async updates are supported\n"); 1850 1805 return -EINVAL; 1806 + } 1851 1807 1852 1808 if (!new_plane_state->crtc || 1853 - old_plane_state->crtc != new_plane_state->crtc) 1809 + old_plane_state->crtc != new_plane_state->crtc) { 1810 + drm_dbg_atomic(dev, 1811 + "[PLANE:%d:%s] async update cannot change CRTC\n", 1812 + plane->base.id, plane->name); 1854 1813 return -EINVAL; 1814 + } 1855 1815 1856 1816 funcs = plane->helper_private; 1857 - if (!funcs->atomic_async_update) 1817 + if (!funcs->atomic_async_update) { 1818 + drm_dbg_atomic(dev, 1819 + "[PLANE:%d:%s] driver does not support async updates\n", 1820 + plane->base.id, plane->name); 1858 1821 return -EINVAL; 1822 + } 1859 1823 1860 - if (new_plane_state->fence) 1824 + if (new_plane_state->fence) { 1825 + drm_dbg_atomic(dev, 1826 + "[PLANE:%d:%s] async update with a fence is not supported\n", 1827 + plane->base.id,
plane->name); 1861 1828 return -EINVAL; 1829 + } 1862 1830 1863 1831 /* 1864 1832 * Don't do an async update if there is an outstanding commit modifying ··· 1888 1826 return -EBUSY; 1889 1827 } 1890 1828 1891 - return funcs->atomic_async_check(plane, state); 1829 + ret = funcs->atomic_async_check(plane, state); 1830 + if (ret != 0) 1831 + drm_dbg_atomic(dev, 1832 + "[PLANE:%d:%s] driver async check failed\n", 1833 + plane->base.id, plane->name); 1834 + return ret; 1892 1835 } 1893 1836 EXPORT_SYMBOL(drm_atomic_helper_async_check); 1894 1837
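The new writeback helper is meant to be called from an encoder helper's atomic_check. A sketch under assumed names (my_wb_encoder_atomic_check is invented; the helper itself is the one added above):

static int my_wb_encoder_atomic_check(struct drm_encoder *encoder,
				      struct drm_crtc_state *crtc_state,
				      struct drm_connector_state *conn_state)
{
	/*
	 * Rejects a queued writeback job whose framebuffer format is not
	 * listed in the connector's pixel-format blob.
	 */
	return drm_atomic_helper_check_wb_encoder_state(encoder, conn_state);
}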
+3
drivers/gpu/drm/drm_mode_config.c
··· 151 151 count = 0; 152 152 connector_id = u64_to_user_ptr(card_res->connector_id_ptr); 153 153 drm_for_each_connector_iter(connector, &conn_iter) { 154 + if (connector->registration_state != DRM_CONNECTOR_REGISTERED) 155 + continue; 156 + 154 157 /* only expose writeback connectors if userspace understands them */ 155 158 if (!file_priv->writeback_connectors && 156 159 (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK))
+1 -1
drivers/gpu/drm/gma500/cdv_intel_dp.c
··· 115 115 116 116 /* 117 117 * Write a single byte to the current I2C address, the 118 - * the I2C link must be running or this returns -EIO 118 + * I2C link must be running or this returns -EIO 119 119 */ 120 120 static int 121 121 i2c_algo_dp_aux_put_byte(struct i2c_adapter *adapter, u8 byte)
+1 -1
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 310 310 temp & ~PIPEACONF_ENABLE, i); 311 311 REG_READ_WITH_AUX(map->conf, i); 312 312 } 313 - /* Wait for for the pipe disable to take effect. */ 313 + /* Wait for the pipe disable to take effect. */ 314 314 gma_wait_for_vblank(dev); 315 315 316 316 temp = REG_READ_WITH_AUX(map->dpll, i);
+41 -21
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 400 400 #define IS_SDVOB(reg) (reg == SDVOB) 401 401 #define SDVO_NAME(svdo) (IS_SDVOB((svdo)->sdvo_reg) ? "SDVOB" : "SDVOC") 402 402 403 - static void psb_intel_sdvo_debug_write(struct psb_intel_sdvo *psb_intel_sdvo, u8 cmd, 404 - const void *args, int args_len) 403 + static void psb_intel_sdvo_debug_write(struct psb_intel_sdvo *psb_intel_sdvo, 404 + u8 cmd, const void *args, int args_len) 405 405 { 406 - int i; 406 + struct drm_device *dev = psb_intel_sdvo->base.base.dev; 407 + int i, pos = 0; 408 + char buffer[73]; 407 409 408 - DRM_DEBUG_KMS("%s: W: %02X ", 409 - SDVO_NAME(psb_intel_sdvo), cmd); 410 - for (i = 0; i < args_len; i++) 411 - DRM_DEBUG_KMS("%02X ", ((u8 *)args)[i]); 412 - for (; i < 8; i++) 413 - DRM_DEBUG_KMS(" "); 410 + #define BUF_PRINT(args...) \ 411 + pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args) 412 + 413 + for (i = 0; i < args_len; i++) { 414 + BUF_PRINT("%02X ", ((u8 *)args)[i]); 415 + } 416 + 417 + for (; i < 8; i++) { 418 + BUF_PRINT(" "); 419 + } 420 + 414 421 for (i = 0; i < ARRAY_SIZE(sdvo_cmd_names); i++) { 415 422 if (cmd == sdvo_cmd_names[i].cmd) { 416 - DRM_DEBUG_KMS("(%s)", sdvo_cmd_names[i].name); 423 + BUF_PRINT("(%s)", sdvo_cmd_names[i].name); 417 424 break; 418 425 } 419 426 } 427 + 420 428 if (i == ARRAY_SIZE(sdvo_cmd_names)) 421 - DRM_DEBUG_KMS("(%02X)", cmd); 422 - DRM_DEBUG_KMS("\n"); 429 + BUF_PRINT("(%02X)", cmd); 430 + 431 + drm_WARN_ON(dev, pos >= sizeof(buffer) - 1); 432 + #undef BUF_PRINT 433 + 434 + DRM_DEBUG_KMS("%s: W: %02X %s\n", SDVO_NAME(psb_intel_sdvo), cmd, buffer); 423 435 } 424 436 425 437 static const char *cmd_status_names[] = { ··· 502 490 } 503 491 504 492 static bool psb_intel_sdvo_read_response(struct psb_intel_sdvo *psb_intel_sdvo, 505 - void *response, int response_len) 493 + void *response, int response_len) 506 494 { 495 + struct drm_device *dev = psb_intel_sdvo->base.base.dev; 496 + char buffer[73]; 497 + int i, pos = 0; 507 498 u8 retry = 5; 508 499 u8 status; 509 - int i; 510 - 511 - DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(psb_intel_sdvo)); 512 500 513 501 /* 514 502 * The documentation states that all commands will be ··· 532 520 goto log_fail; 533 521 } 534 522 523 + #define BUF_PRINT(args...) \ 524 + pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args) 525 + 535 526 if (status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP) 536 - DRM_DEBUG_KMS("(%s)", cmd_status_names[status]); 527 + BUF_PRINT("(%s)", cmd_status_names[status]); 537 528 else 538 - DRM_DEBUG_KMS("(??? %d)", status); 529 + BUF_PRINT("(??? %d)", status); 539 530 540 531 if (status != SDVO_CMD_STATUS_SUCCESS) 541 532 goto log_fail; ··· 549 534 SDVO_I2C_RETURN_0 + i, 550 535 &((u8 *)response)[i])) 551 536 goto log_fail; 552 - DRM_DEBUG_KMS(" %02X", ((u8 *)response)[i]); 537 + BUF_PRINT(" %02X", ((u8 *)response)[i]); 553 538 } 554 - DRM_DEBUG_KMS("\n"); 539 + 540 + drm_WARN_ON(dev, pos >= sizeof(buffer) - 1); 541 + #undef BUF_PRINT 542 + 543 + DRM_DEBUG_KMS("%s: R: %s\n", SDVO_NAME(psb_intel_sdvo), buffer); 555 544 return true; 556 545 557 546 log_fail: 558 - DRM_DEBUG_KMS("... failed\n"); 547 + DRM_DEBUG_KMS("%s: R: ... failed %s\n", 548 + SDVO_NAME(psb_intel_sdvo), buffer); 559 549 return false; 560 550 } 561 551
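The gma500 change collects what used to be many separate DRM_DEBUG_KMS() calls, each of which emits its own log line, into one buffer printed once. The accumulation idiom in isolation, as a sketch with invented names:

static void my_debug_hexdump(struct drm_device *dev, const char *prefix,
			     const u8 *bytes, int len)
{
	char buffer[73]; /* same fixed size the patch uses */
	int i, pos = 0;

	for (i = 0; i < len; i++)
		pos += snprintf(buffer + pos,
				max_t(int, sizeof(buffer) - pos, 0),
				"%02X ", bytes[i]);

	/* Clamping the remaining size to >= 0 keeps snprintf() safe when full. */
	drm_WARN_ON(dev, pos >= sizeof(buffer) - 1);
	DRM_DEBUG_KMS("%s: %s\n", prefix, buffer);
}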
+6
drivers/gpu/drm/i915/display/intel_display.c
··· 7531 7531 intel_atomic_commit_fence_wait(state); 7532 7532 7533 7533 drm_atomic_helper_wait_for_dependencies(&state->base); 7534 + drm_dp_mst_atomic_wait_for_dependencies(&state->base); 7534 7535 7535 7536 if (state->modeset) 7536 7537 wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_MODESET); ··· 8600 8599 return ret; 8601 8600 } 8602 8601 8602 + static const struct drm_mode_config_helper_funcs intel_mode_config_funcs = { 8603 + .atomic_commit_setup = drm_dp_mst_atomic_setup_commit, 8604 + }; 8605 + 8603 8606 static void intel_mode_config_init(struct drm_i915_private *i915) 8604 8607 { 8605 8608 struct drm_mode_config *mode_config = &i915->drm.mode_config; ··· 8618 8613 mode_config->prefer_shadow = 1; 8619 8614 8620 8615 mode_config->funcs = &intel_mode_funcs; 8616 + mode_config->helper_private = &intel_mode_config_funcs; 8621 8617 8622 8618 mode_config->async_page_flip = HAS_ASYNC_FLIPS(i915); 8623 8619
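The two hooks added here are generic glue rather than i915-specific code; any atomic driver using the new MST payload helpers needs the same wiring. A condensed sketch (my_mode_config_helpers and my_commit_tail are hypothetical):

static const struct drm_mode_config_helper_funcs my_mode_config_helpers = {
	/* Lets the MST code attach payload bookkeeping to each commit. */
	.atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
};

static void my_commit_tail(struct drm_atomic_state *state)
{
	drm_atomic_helper_wait_for_dependencies(state);
	/* Order this commit after the MST payload updates it depends on. */
	drm_dp_mst_atomic_wait_for_dependencies(state);

	/* ... program the hardware ... */
}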
+9
drivers/gpu/drm/i915/display/intel_dp.c
··· 4992 4992 { 4993 4993 struct drm_i915_private *dev_priv = to_i915(conn->dev); 4994 4994 struct intel_atomic_state *state = to_intel_atomic_state(_state); 4995 + struct drm_connector_state *conn_state = drm_atomic_get_new_connector_state(_state, conn); 4996 + struct intel_connector *intel_conn = to_intel_connector(conn); 4997 + struct intel_dp *intel_dp = enc_to_intel_dp(intel_conn->encoder); 4995 4998 int ret; 4996 4999 4997 5000 ret = intel_digital_connector_atomic_check(conn, &state->base); 4998 5001 if (ret) 4999 5002 return ret; 5003 + 5004 + if (intel_dp_mst_source_support(intel_dp)) { 5005 + ret = drm_dp_mst_root_conn_atomic_check(conn_state, &intel_dp->mst_mgr); 5006 + if (ret) 5007 + return ret; 5008 + } 5000 5009 5001 5010 /* 5002 5011 * We don't enable port sync on BDW due to missing w/as and
+34 -63
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 52 52 struct drm_atomic_state *state = crtc_state->uapi.state; 53 53 struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder); 54 54 struct intel_dp *intel_dp = &intel_mst->primary->dp; 55 + struct drm_dp_mst_topology_state *mst_state; 55 56 struct intel_connector *connector = 56 57 to_intel_connector(conn_state->connector); 57 58 struct drm_i915_private *i915 = to_i915(connector->base.dev); ··· 61 60 bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N); 62 61 int bpp, slots = -EINVAL; 63 62 63 + mst_state = drm_atomic_get_mst_topology_state(state, &intel_dp->mst_mgr); 64 + if (IS_ERR(mst_state)) 65 + return PTR_ERR(mst_state); 66 + 64 67 crtc_state->lane_count = limits->max_lane_count; 65 68 crtc_state->port_clock = limits->max_rate; 69 + 70 + // TODO: Handle pbn_div changes by adding a new MST helper 71 + if (!mst_state->pbn_div) { 72 + mst_state->pbn_div = drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr, 73 + limits->max_rate, 74 + limits->max_lane_count); 75 + } 66 76 67 77 for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) { 68 78 crtc_state->pipe_bpp = bpp; ··· 81 69 crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock, 82 70 crtc_state->pipe_bpp, 83 71 false); 84 - 85 - slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr, 86 - connector->port, 87 - crtc_state->pbn, 88 - drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr, 89 - crtc_state->port_clock, 90 - crtc_state->lane_count)); 72 + slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr, 73 + connector->port, crtc_state->pbn); 91 74 if (slots == -EDEADLK) 92 75 return slots; 93 76 if (slots >= 0) ··· 315 308 struct drm_atomic_state *_state) 316 309 { 317 310 struct intel_atomic_state *state = to_intel_atomic_state(_state); 318 - struct drm_connector_state *new_conn_state = 319 - drm_atomic_get_new_connector_state(&state->base, connector); 320 - struct drm_connector_state *old_conn_state = 321 - drm_atomic_get_old_connector_state(&state->base, connector); 322 311 struct intel_connector *intel_connector = 323 312 to_intel_connector(connector); 324 - struct drm_crtc *new_crtc = new_conn_state->crtc; 325 - struct drm_dp_mst_topology_mgr *mgr; 326 313 int ret; 327 314 328 315 ret = intel_digital_connector_atomic_check(connector, &state->base); ··· 327 326 if (ret) 328 327 return ret; 329 328 330 - if (!old_conn_state->crtc) 331 - return 0; 332 - 333 - /* We only want to free VCPI if this state disables the CRTC on this 334 - * connector 335 - */ 336 - if (new_crtc) { 337 - struct intel_crtc *crtc = to_intel_crtc(new_crtc); 338 - struct intel_crtc_state *crtc_state = 339 - intel_atomic_get_new_crtc_state(state, crtc); 340 - 341 - if (!crtc_state || 342 - !drm_atomic_crtc_needs_modeset(&crtc_state->uapi) || 343 - crtc_state->uapi.enable) 344 - return 0; 345 - } 346 - 347 - mgr = &enc_to_mst(to_intel_encoder(old_conn_state->best_encoder))->primary->dp.mst_mgr; 348 - ret = drm_dp_atomic_release_vcpi_slots(&state->base, mgr, 349 - intel_connector->port); 350 - 351 - return ret; 329 + return drm_dp_atomic_release_time_slots(&state->base, 330 + &intel_connector->mst_port->mst_mgr, 331 + intel_connector->port); 352 332 } 353 333 354 334 static void clear_act_sent(struct intel_encoder *encoder, ··· 365 383 struct intel_dp *intel_dp = &dig_port->dp; 366 384 struct intel_connector *connector = 367 385 to_intel_connector(old_conn_state->connector); 386 + struct drm_dp_mst_topology_state *mst_state = 387 + drm_atomic_get_mst_topology_state(&state->base, &intel_dp->mst_mgr); 368 
388 struct drm_i915_private *i915 = to_i915(connector->base.dev); 369 - int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1; 370 - int ret; 371 389 372 390 drm_dbg_kms(&i915->drm, "active links %d\n", 373 391 intel_dp->active_mst_links); 374 392 375 393 intel_hdcp_disable(intel_mst->connector); 376 394 377 - drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port); 378 - 379 - ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot); 380 - if (ret) { 381 - drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret); 382 - } 395 + drm_dp_remove_payload(&intel_dp->mst_mgr, mst_state, 396 + drm_atomic_get_mst_payload_state(mst_state, connector->port)); 383 397 384 398 intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state); 385 399 } ··· 403 425 404 426 intel_disable_transcoder(old_crtc_state); 405 427 406 - drm_dp_update_payload_part2(&intel_dp->mst_mgr); 407 - 408 428 clear_act_sent(encoder, old_crtc_state); 409 429 410 430 intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder), 411 431 TRANS_DDI_DP_VC_PAYLOAD_ALLOC, 0); 412 432 413 433 wait_for_act_sent(encoder, old_crtc_state); 414 - 415 - drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port); 416 434 417 435 intel_ddi_disable_transcoder_func(old_crtc_state); 418 436 ··· 476 502 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 477 503 struct intel_connector *connector = 478 504 to_intel_connector(conn_state->connector); 479 - int start_slot = intel_dp_is_uhbr(pipe_config) ? 0 : 1; 505 + struct drm_dp_mst_topology_state *mst_state = 506 + drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); 480 507 int ret; 481 508 bool first_mst_stream; 482 509 ··· 503 528 dig_port->base.pre_enable(state, &dig_port->base, 504 529 pipe_config, NULL); 505 530 506 - ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr, 507 - connector->port, 508 - pipe_config->pbn, 509 - pipe_config->dp_m_n.tu); 510 - if (!ret) 511 - drm_err(&dev_priv->drm, "failed to allocate vcpi\n"); 512 - 513 531 intel_dp->active_mst_links++; 514 532 515 - ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot); 533 + ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state, 534 + drm_atomic_get_mst_payload_state(mst_state, connector->port)); 535 + if (ret < 0) 536 + drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n", 537 + connector->base.name, ret); 516 538 517 539 /* 518 540 * Before Gen 12 this is not done as part of ··· 532 560 struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder); 533 561 struct intel_digital_port *dig_port = intel_mst->primary; 534 562 struct intel_dp *intel_dp = &dig_port->dp; 563 + struct intel_connector *connector = to_intel_connector(conn_state->connector); 535 564 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 565 + struct drm_dp_mst_topology_state *mst_state = 566 + drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); 536 567 enum transcoder trans = pipe_config->cpu_transcoder; 537 568 538 569 drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder); ··· 563 588 564 589 wait_for_act_sent(encoder, pipe_config); 565 590 566 - drm_dp_update_payload_part2(&intel_dp->mst_mgr); 591 + drm_dp_add_payload_part2(&intel_dp->mst_mgr, &state->base, 592 + drm_atomic_get_mst_payload_state(mst_state, connector->port)); 567 593 568 594 if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable) 569 595 intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0, ··· 948 972 struct intel_dp *intel_dp = &dig_port->dp; 
949 973 enum port port = dig_port->base.port; 950 974 int ret; 951 - int max_source_rate = 952 - intel_dp->source_rates[intel_dp->num_source_rates - 1]; 953 975 954 976 if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp)) 955 977 return 0; ··· 963 989 /* create encoders */ 964 990 intel_dp_create_fake_mst_encoders(dig_port); 965 991 ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm, 966 - &intel_dp->aux, 16, 3, 967 - dig_port->max_lanes, 968 - max_source_rate, 969 - conn_base_id); 992 + &intel_dp->aux, 16, 3, conn_base_id); 970 993 if (ret) { 971 994 intel_dp->mst_mgr.cbs = NULL; 972 995 return ret;
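Condensed from the enable path above: the new helpers split payload programming into two parts around the ACT handshake, with the payload coming from the atomic state rather than per-port VCPI bookkeeping. A sketch with an invented function name:

static int my_mst_enable_payload(struct drm_dp_mst_topology_mgr *mgr,
				 struct drm_atomic_state *state,
				 struct drm_dp_mst_port *port)
{
	/* The topology state was added during atomic check, so it is here. */
	struct drm_dp_mst_topology_state *mst_state =
		drm_atomic_get_new_mst_topology_state(state, mgr);
	struct drm_dp_mst_atomic_payload *payload =
		drm_atomic_get_mst_payload_state(mst_state, port);
	int ret;

	/* Part 1: write the payload table before enabling the stream. */
	ret = drm_dp_add_payload_part1(mgr, mst_state, payload);
	if (ret < 0)
		return ret;

	/* ... enable the transcoder and wait for the ACT handshake ... */

	/* Part 2: complete the allocation once ACT has been sent. */
	return drm_dp_add_payload_part2(mgr, state, payload);
}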
+23 -1
drivers/gpu/drm/i915/display/intel_hdcp.c
··· 30 30 31 31 static int intel_conn_to_vcpi(struct intel_connector *connector) 32 32 { 33 + struct drm_dp_mst_topology_mgr *mgr; 34 + struct drm_dp_mst_atomic_payload *payload; 35 + struct drm_dp_mst_topology_state *mst_state; 36 + int vcpi = 0; 37 + 33 38 /* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */ 34 - return connector->port ? connector->port->vcpi.vcpi : 0; 39 + if (!connector->port) 40 + return 0; 41 + mgr = connector->port->mgr; 42 + 43 + drm_modeset_lock(&mgr->base.lock, NULL); 44 + mst_state = to_drm_dp_mst_topology_state(mgr->base.state); 45 + payload = drm_atomic_get_mst_payload_state(mst_state, connector->port); 46 + if (drm_WARN_ON(mgr->dev, !payload)) 47 + goto out; 48 + 49 + vcpi = payload->vcpi; 50 + if (drm_WARN_ON(mgr->dev, vcpi < 0)) { 51 + vcpi = 0; 52 + goto out; 53 + } 54 + out: 55 + drm_modeset_unlock(&mgr->base.lock); 56 + return vcpi; 35 57 } 36 58 37 59 /*
+1 -40
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
··· 361 361 const struct ttm_place *place) 362 362 { 363 363 struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); 364 - struct ttm_resource *res = bo->resource; 365 364 366 365 if (!obj) 367 366 return false; ··· 377 378 if (!i915_gem_object_evictable(obj)) 378 379 return false; 379 380 380 - switch (res->mem_type) { 381 - case I915_PL_LMEM0: { 382 - struct ttm_resource_manager *man = 383 - ttm_manager_type(bo->bdev, res->mem_type); 384 - struct i915_ttm_buddy_resource *bman_res = 385 - to_ttm_buddy_resource(res); 386 - struct drm_buddy *mm = bman_res->mm; 387 - struct drm_buddy_block *block; 388 - 389 - if (!place->fpfn && !place->lpfn) 390 - return true; 391 - 392 - GEM_BUG_ON(!place->lpfn); 393 - 394 - /* 395 - * If we just want something mappable then we can quickly check 396 - * if the current victim resource is using any of the CPU 397 - * visible portion. 398 - */ 399 - if (!place->fpfn && 400 - place->lpfn == i915_ttm_buddy_man_visible_size(man)) 401 - return bman_res->used_visible_size > 0; 402 - 403 - /* Real range allocation */ 404 - list_for_each_entry(block, &bman_res->blocks, link) { 405 - unsigned long fpfn = 406 - drm_buddy_block_offset(block) >> PAGE_SHIFT; 407 - unsigned long lpfn = fpfn + 408 - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); 409 - 410 - if (place->fpfn < lpfn && place->lpfn > fpfn) 411 - return true; 412 - } 413 - return false; 414 - } default: 415 - break; 416 - } 417 - 418 - return true; 381 + return ttm_bo_eviction_valuable(bo, place); 419 382 } 420 383 421 384 static void i915_ttm_evict_flags(struct ttm_buffer_object *bo,
+73
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
··· 173 173 kfree(bman_res); 174 174 } 175 175 176 + static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man, 177 + struct ttm_resource *res, 178 + const struct ttm_place *place, 179 + size_t size) 180 + { 181 + struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); 182 + struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 183 + struct drm_buddy *mm = &bman->mm; 184 + struct drm_buddy_block *block; 185 + 186 + if (!place->fpfn && !place->lpfn) 187 + return true; 188 + 189 + GEM_BUG_ON(!place->lpfn); 190 + 191 + /* 192 + * If we just want something mappable then we can quickly check 193 + * if the current victim resource is using any of the CPU 194 + * visible portion. 195 + */ 196 + if (!place->fpfn && 197 + place->lpfn == i915_ttm_buddy_man_visible_size(man)) 198 + return bman_res->used_visible_size > 0; 199 + 200 + /* Check each drm buddy block individually */ 201 + list_for_each_entry(block, &bman_res->blocks, link) { 202 + unsigned long fpfn = 203 + drm_buddy_block_offset(block) >> PAGE_SHIFT; 204 + unsigned long lpfn = fpfn + 205 + (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); 206 + 207 + if (place->fpfn < lpfn && place->lpfn > fpfn) 208 + return true; 209 + } 210 + 211 + return false; 212 + } 213 + 214 + static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man, 215 + struct ttm_resource *res, 216 + const struct ttm_place *place, 217 + size_t size) 218 + { 219 + struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); 220 + struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); 221 + struct drm_buddy *mm = &bman->mm; 222 + struct drm_buddy_block *block; 223 + 224 + if (!place->fpfn && !place->lpfn) 225 + return true; 226 + 227 + GEM_BUG_ON(!place->lpfn); 228 + 229 + if (!place->fpfn && 230 + place->lpfn == i915_ttm_buddy_man_visible_size(man)) 231 + return bman_res->used_visible_size == res->num_pages; 232 + 233 + /* Check each drm buddy block individually */ 234 + list_for_each_entry(block, &bman_res->blocks, link) { 235 + unsigned long fpfn = 236 + drm_buddy_block_offset(block) >> PAGE_SHIFT; 237 + unsigned long lpfn = fpfn + 238 + (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); 239 + 240 + if (fpfn < place->fpfn || lpfn > place->lpfn) 241 + return false; 242 + } 243 + 244 + return true; 245 + } 246 + 176 247 static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man, 177 248 struct drm_printer *printer) 178 249 { ··· 271 200 static const struct ttm_resource_manager_func i915_ttm_buddy_manager_func = { 272 201 .alloc = i915_ttm_buddy_man_alloc, 273 202 .free = i915_ttm_buddy_man_free, 203 + .intersects = i915_ttm_buddy_man_intersects, 204 + .compatible = i915_ttm_buddy_man_compatible, 274 205 .debug = i915_ttm_buddy_man_debug, 275 206 }; 276 207
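These hooks fill in the intersect/compatible interface from the TTM placement rework in this series. For a manager whose resources are simple contiguous ranges, the same contract reduces to a pair of range checks, sketched here with hypothetical my_-prefixed names:

static bool my_man_intersects(struct ttm_resource_manager *man,
			      struct ttm_resource *res,
			      const struct ttm_place *place,
			      size_t size)
{
	unsigned long fpfn = res->start;
	unsigned long lpfn = fpfn + res->num_pages;
	/* place->lpfn == 0 means "no upper bound" in TTM placement. */
	unsigned long place_lpfn = place->lpfn ? place->lpfn : ULONG_MAX;

	/* True if the resource overlaps the requested range at all. */
	return place->fpfn < lpfn && place_lpfn > fpfn;
}

static bool my_man_compatible(struct ttm_resource_manager *man,
			      struct ttm_resource *res,
			      const struct ttm_place *place,
			      size_t size)
{
	unsigned long fpfn = res->start;
	unsigned long lpfn = fpfn + res->num_pages;
	unsigned long place_lpfn = place->lpfn ? place->lpfn : ULONG_MAX;

	/* Compatible means the resource lies entirely inside the place. */
	return fpfn >= place->fpfn && lpfn <= place_lpfn;
}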
+9
drivers/gpu/drm/mediatek/Kconfig
··· 21 21 This driver provides kernel mode setting and 22 22 buffer management to userspace. 23 23 24 + config DRM_MEDIATEK_DP 25 + tristate "DRM DPTX Support for MediaTek SoCs" 26 + depends on DRM_MEDIATEK 27 + select PHY_MTK_DP 28 + select DRM_DISPLAY_HELPER 29 + select DRM_DISPLAY_DP_HELPER 30 + help 31 + DRM/KMS Display Port driver for MediaTek SoCs. 32 + 24 33 config DRM_MEDIATEK_HDMI 25 34 tristate "DRM HDMI Support for Mediatek SoCs" 26 35 depends on DRM_MEDIATEK
+2
drivers/gpu/drm/mediatek/Makefile
··· 23 23 mtk_hdmi_ddc.o 24 24 25 25 obj-$(CONFIG_DRM_MEDIATEK_HDMI) += mediatek-drm-hdmi.o 26 + 27 + obj-$(CONFIG_DRM_MEDIATEK_DP) += mtk_dp.o
+2661
drivers/gpu/drm/mediatek/mtk_dp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2019-2022 MediaTek Inc. 4 + * Copyright (c) 2022 BayLibre 5 + */ 6 + 7 + #include <drm/display/drm_dp.h> 8 + #include <drm/display/drm_dp_helper.h> 9 + #include <drm/drm_atomic_helper.h> 10 + #include <drm/drm_bridge.h> 11 + #include <drm/drm_crtc.h> 12 + #include <drm/drm_edid.h> 13 + #include <drm/drm_of.h> 14 + #include <drm/drm_panel.h> 15 + #include <drm/drm_print.h> 16 + #include <drm/drm_probe_helper.h> 17 + #include <linux/arm-smccc.h> 18 + #include <linux/clk.h> 19 + #include <linux/delay.h> 20 + #include <linux/errno.h> 21 + #include <linux/kernel.h> 22 + #include <linux/media-bus-format.h> 23 + #include <linux/nvmem-consumer.h> 24 + #include <linux/of.h> 25 + #include <linux/of_irq.h> 26 + #include <linux/of_platform.h> 27 + #include <linux/phy/phy.h> 28 + #include <linux/platform_device.h> 29 + #include <linux/pm_runtime.h> 30 + #include <linux/regmap.h> 31 + #include <linux/soc/mediatek/mtk_sip_svc.h> 32 + #include <sound/hdmi-codec.h> 33 + #include <video/videomode.h> 34 + 35 + #include "mtk_dp_reg.h" 36 + 37 + #define MTK_DP_SIP_CONTROL_AARCH32 MTK_SIP_SMC_CMD(0x523) 38 + #define MTK_DP_SIP_ATF_EDP_VIDEO_UNMUTE (BIT(0) | BIT(5)) 39 + #define MTK_DP_SIP_ATF_VIDEO_UNMUTE BIT(5) 40 + 41 + #define MTK_DP_THREAD_CABLE_STATE_CHG BIT(0) 42 + #define MTK_DP_THREAD_HPD_EVENT BIT(1) 43 + 44 + #define MTK_DP_4P1T 4 45 + #define MTK_DP_HDE 2 46 + #define MTK_DP_PIX_PER_ADDR 2 47 + #define MTK_DP_AUX_WAIT_REPLY_COUNT 20 48 + #define MTK_DP_TBC_BUF_READ_START_ADDR 0x8 49 + #define MTK_DP_TRAIN_VOLTAGE_LEVEL_RETRY 5 50 + #define MTK_DP_TRAIN_DOWNSCALE_RETRY 10 51 + #define MTK_DP_VERSION 0x11 52 + #define MTK_DP_SDP_AUI 0x4 53 + 54 + enum { 55 + MTK_DP_CAL_GLB_BIAS_TRIM = 0, 56 + MTK_DP_CAL_CLKTX_IMPSE, 57 + MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0, 58 + MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1, 59 + MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2, 60 + MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3, 61 + MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0, 62 + MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1, 63 + MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2, 64 + MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3, 65 + MTK_DP_CAL_MAX, 66 + }; 67 + 68 + struct mtk_dp_train_info { 69 + bool sink_ssc; 70 + bool cable_plugged_in; 71 + /* link_rate is in multiple of 0.27Gbps */ 72 + int link_rate; 73 + int lane_count; 74 + unsigned int channel_eq_pattern; 75 + }; 76 + 77 + struct mtk_dp_audio_cfg { 78 + bool detect_monitor; 79 + int sad_count; 80 + int sample_rate; 81 + int word_length_bits; 82 + int channels; 83 + }; 84 + 85 + struct mtk_dp_info { 86 + enum dp_pixelformat format; 87 + struct videomode vm; 88 + struct mtk_dp_audio_cfg audio_cur_cfg; 89 + }; 90 + 91 + struct mtk_dp_efuse_fmt { 92 + unsigned short idx; 93 + unsigned short shift; 94 + unsigned short mask; 95 + unsigned short min_val; 96 + unsigned short max_val; 97 + unsigned short default_val; 98 + }; 99 + 100 + struct mtk_dp { 101 + bool enabled; 102 + bool need_debounce; 103 + u8 max_lanes; 104 + u8 max_linkrate; 105 + u8 rx_cap[DP_RECEIVER_CAP_SIZE]; 106 + u32 cal_data[MTK_DP_CAL_MAX]; 107 + u32 irq_thread_handle; 108 + /* irq_thread_lock is used to protect irq_thread_handle */ 109 + spinlock_t irq_thread_lock; 110 + 111 + struct device *dev; 112 + struct drm_bridge bridge; 113 + struct drm_bridge *next_bridge; 114 + struct drm_connector *conn; 115 + struct drm_device *drm_dev; 116 + struct drm_dp_aux aux; 117 + 118 + const struct mtk_dp_data *data; 119 + struct mtk_dp_info info; 120 + struct mtk_dp_train_info train_info; 121 + 122 + struct platform_device *phy_dev; 123 + 
struct phy *phy; 124 + struct regmap *regs; 125 + struct timer_list debounce_timer; 126 + 127 + /* For audio */ 128 + bool audio_enable; 129 + hdmi_codec_plugged_cb plugged_cb; 130 + struct platform_device *audio_pdev; 131 + 132 + struct device *codec_dev; 133 + /* protect the plugged_cb as it's used in both bridge ops and audio */ 134 + struct mutex update_plugged_status_lock; 135 + }; 136 + 137 + struct mtk_dp_data { 138 + int bridge_type; 139 + unsigned int smc_cmd; 140 + const struct mtk_dp_efuse_fmt *efuse_fmt; 141 + bool audio_supported; 142 + }; 143 + 144 + static const struct mtk_dp_efuse_fmt mt8195_edp_efuse_fmt[MTK_DP_CAL_MAX] = { 145 + [MTK_DP_CAL_GLB_BIAS_TRIM] = { 146 + .idx = 3, 147 + .shift = 27, 148 + .mask = 0x1f, 149 + .min_val = 1, 150 + .max_val = 0x1e, 151 + .default_val = 0xf, 152 + }, 153 + [MTK_DP_CAL_CLKTX_IMPSE] = { 154 + .idx = 0, 155 + .shift = 9, 156 + .mask = 0xf, 157 + .min_val = 1, 158 + .max_val = 0xe, 159 + .default_val = 0x8, 160 + }, 161 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0] = { 162 + .idx = 2, 163 + .shift = 28, 164 + .mask = 0xf, 165 + .min_val = 1, 166 + .max_val = 0xe, 167 + .default_val = 0x8, 168 + }, 169 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1] = { 170 + .idx = 2, 171 + .shift = 20, 172 + .mask = 0xf, 173 + .min_val = 1, 174 + .max_val = 0xe, 175 + .default_val = 0x8, 176 + }, 177 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2] = { 178 + .idx = 2, 179 + .shift = 12, 180 + .mask = 0xf, 181 + .min_val = 1, 182 + .max_val = 0xe, 183 + .default_val = 0x8, 184 + }, 185 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3] = { 186 + .idx = 2, 187 + .shift = 4, 188 + .mask = 0xf, 189 + .min_val = 1, 190 + .max_val = 0xe, 191 + .default_val = 0x8, 192 + }, 193 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0] = { 194 + .idx = 2, 195 + .shift = 24, 196 + .mask = 0xf, 197 + .min_val = 1, 198 + .max_val = 0xe, 199 + .default_val = 0x8, 200 + }, 201 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1] = { 202 + .idx = 2, 203 + .shift = 16, 204 + .mask = 0xf, 205 + .min_val = 1, 206 + .max_val = 0xe, 207 + .default_val = 0x8, 208 + }, 209 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2] = { 210 + .idx = 2, 211 + .shift = 8, 212 + .mask = 0xf, 213 + .min_val = 1, 214 + .max_val = 0xe, 215 + .default_val = 0x8, 216 + }, 217 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3] = { 218 + .idx = 2, 219 + .shift = 0, 220 + .mask = 0xf, 221 + .min_val = 1, 222 + .max_val = 0xe, 223 + .default_val = 0x8, 224 + }, 225 + }; 226 + 227 + static const struct mtk_dp_efuse_fmt mt8195_dp_efuse_fmt[MTK_DP_CAL_MAX] = { 228 + [MTK_DP_CAL_GLB_BIAS_TRIM] = { 229 + .idx = 0, 230 + .shift = 27, 231 + .mask = 0x1f, 232 + .min_val = 1, 233 + .max_val = 0x1e, 234 + .default_val = 0xf, 235 + }, 236 + [MTK_DP_CAL_CLKTX_IMPSE] = { 237 + .idx = 0, 238 + .shift = 13, 239 + .mask = 0xf, 240 + .min_val = 1, 241 + .max_val = 0xe, 242 + .default_val = 0x8, 243 + }, 244 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0] = { 245 + .idx = 1, 246 + .shift = 28, 247 + .mask = 0xf, 248 + .min_val = 1, 249 + .max_val = 0xe, 250 + .default_val = 0x8, 251 + }, 252 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1] = { 253 + .idx = 1, 254 + .shift = 20, 255 + .mask = 0xf, 256 + .min_val = 1, 257 + .max_val = 0xe, 258 + .default_val = 0x8, 259 + }, 260 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2] = { 261 + .idx = 1, 262 + .shift = 12, 263 + .mask = 0xf, 264 + .min_val = 1, 265 + .max_val = 0xe, 266 + .default_val = 0x8, 267 + }, 268 + [MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3] = { 269 + .idx = 1, 270 + .shift = 4, 271 + .mask = 0xf, 272 + .min_val = 1, 273 + .max_val = 0xe, 274 + .default_val = 0x8, 275 + }, 276 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0] = { 277 + 
.idx = 1, 278 + .shift = 24, 279 + .mask = 0xf, 280 + .min_val = 1, 281 + .max_val = 0xe, 282 + .default_val = 0x8, 283 + }, 284 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1] = { 285 + .idx = 1, 286 + .shift = 16, 287 + .mask = 0xf, 288 + .min_val = 1, 289 + .max_val = 0xe, 290 + .default_val = 0x8, 291 + }, 292 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2] = { 293 + .idx = 1, 294 + .shift = 8, 295 + .mask = 0xf, 296 + .min_val = 1, 297 + .max_val = 0xe, 298 + .default_val = 0x8, 299 + }, 300 + [MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3] = { 301 + .idx = 1, 302 + .shift = 0, 303 + .mask = 0xf, 304 + .min_val = 1, 305 + .max_val = 0xe, 306 + .default_val = 0x8, 307 + }, 308 + }; 309 + 310 + static struct regmap_config mtk_dp_regmap_config = { 311 + .reg_bits = 32, 312 + .val_bits = 32, 313 + .reg_stride = 4, 314 + .max_register = SEC_OFFSET + 0x90, 315 + .name = "mtk-dp-registers", 316 + }; 317 + 318 + static struct mtk_dp *mtk_dp_from_bridge(struct drm_bridge *b) 319 + { 320 + return container_of(b, struct mtk_dp, bridge); 321 + } 322 + 323 + static u32 mtk_dp_read(struct mtk_dp *mtk_dp, u32 offset) 324 + { 325 + u32 read_val; 326 + int ret; 327 + 328 + ret = regmap_read(mtk_dp->regs, offset, &read_val); 329 + if (ret) { 330 + dev_err(mtk_dp->dev, "Failed to read register 0x%x: %d\n", 331 + offset, ret); 332 + return 0; 333 + } 334 + 335 + return read_val; 336 + } 337 + 338 + static int mtk_dp_write(struct mtk_dp *mtk_dp, u32 offset, u32 val) 339 + { 340 + int ret = regmap_write(mtk_dp->regs, offset, val); 341 + 342 + if (ret) 343 + dev_err(mtk_dp->dev, 344 + "Failed to write register 0x%x with value 0x%x\n", 345 + offset, val); 346 + return ret; 347 + } 348 + 349 + static int mtk_dp_update_bits(struct mtk_dp *mtk_dp, u32 offset, 350 + u32 val, u32 mask) 351 + { 352 + int ret = regmap_update_bits(mtk_dp->regs, offset, mask, val); 353 + 354 + if (ret) 355 + dev_err(mtk_dp->dev, 356 + "Failed to update register 0x%x with value 0x%x, mask 0x%x\n", 357 + offset, val, mask); 358 + return ret; 359 + } 360 + 361 + static void mtk_dp_bulk_16bit_write(struct mtk_dp *mtk_dp, u32 offset, u8 *buf, 362 + size_t length) 363 + { 364 + int i; 365 + 366 + /* 2 bytes per register */ 367 + for (i = 0; i < length; i += 2) { 368 + u32 val = buf[i] | (i + 1 < length ? buf[i + 1] << 8 : 0); 369 + 370 + if (mtk_dp_write(mtk_dp, offset + i * 2, val)) 371 + return; 372 + } 373 + } 374 + 375 + static void mtk_dp_msa_bypass_enable(struct mtk_dp *mtk_dp, bool enable) 376 + { 377 + u32 mask = HTOTAL_SEL_DP_ENC0_P0 | VTOTAL_SEL_DP_ENC0_P0 | 378 + HSTART_SEL_DP_ENC0_P0 | VSTART_SEL_DP_ENC0_P0 | 379 + HWIDTH_SEL_DP_ENC0_P0 | VHEIGHT_SEL_DP_ENC0_P0 | 380 + HSP_SEL_DP_ENC0_P0 | HSW_SEL_DP_ENC0_P0 | 381 + VSP_SEL_DP_ENC0_P0 | VSW_SEL_DP_ENC0_P0; 382 + 383 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3030, enable ? 
0 : mask, mask); 384 + } 385 + 386 + static void mtk_dp_set_msa(struct mtk_dp *mtk_dp) 387 + { 388 + struct drm_display_mode mode; 389 + struct videomode *vm = &mtk_dp->info.vm; 390 + 391 + drm_display_mode_from_videomode(vm, &mode); 392 + 393 + /* horizontal */ 394 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3010, 395 + mode.htotal, HTOTAL_SW_DP_ENC0_P0_MASK); 396 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3018, 397 + vm->hsync_len + vm->hback_porch, 398 + HSTART_SW_DP_ENC0_P0_MASK); 399 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3028, 400 + vm->hsync_len, HSW_SW_DP_ENC0_P0_MASK); 401 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3028, 402 + 0, HSP_SW_DP_ENC0_P0_MASK); 403 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3020, 404 + vm->hactive, HWIDTH_SW_DP_ENC0_P0_MASK); 405 + 406 + /* vertical */ 407 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3014, 408 + mode.vtotal, VTOTAL_SW_DP_ENC0_P0_MASK); 409 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_301C, 410 + vm->vsync_len + vm->vback_porch, 411 + VSTART_SW_DP_ENC0_P0_MASK); 412 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_302C, 413 + vm->vsync_len, VSW_SW_DP_ENC0_P0_MASK); 414 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_302C, 415 + 0, VSP_SW_DP_ENC0_P0_MASK); 416 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3024, 417 + vm->vactive, VHEIGHT_SW_DP_ENC0_P0_MASK); 418 + 419 + /* horizontal */ 420 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3064, 421 + vm->hactive, HDE_NUM_LAST_DP_ENC0_P0_MASK); 422 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3154, 423 + mode.htotal, PGEN_HTOTAL_DP_ENC0_P0_MASK); 424 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3158, 425 + vm->hfront_porch, 426 + PGEN_HSYNC_RISING_DP_ENC0_P0_MASK); 427 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_315C, 428 + vm->hsync_len, 429 + PGEN_HSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK); 430 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3160, 431 + vm->hback_porch + vm->hsync_len, 432 + PGEN_HFDE_START_DP_ENC0_P0_MASK); 433 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3164, 434 + vm->hactive, 435 + PGEN_HFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK); 436 + 437 + /* vertical */ 438 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3168, 439 + mode.vtotal, 440 + PGEN_VTOTAL_DP_ENC0_P0_MASK); 441 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_316C, 442 + vm->vfront_porch, 443 + PGEN_VSYNC_RISING_DP_ENC0_P0_MASK); 444 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3170, 445 + vm->vsync_len, 446 + PGEN_VSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK); 447 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3174, 448 + vm->vback_porch + vm->vsync_len, 449 + PGEN_VFDE_START_DP_ENC0_P0_MASK); 450 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3178, 451 + vm->vactive, 452 + PGEN_VFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK); 453 + } 454 + 455 + static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp, 456 + enum dp_pixelformat color_format) 457 + { 458 + u32 val; 459 + 460 + /* update MISC0 */ 461 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 462 + color_format << DP_TEST_COLOR_FORMAT_SHIFT, 463 + DP_TEST_COLOR_FORMAT_MASK); 464 + 465 + switch (color_format) { 466 + case DP_PIXELFORMAT_YUV422: 467 + val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422; 468 + break; 469 + case DP_PIXELFORMAT_RGB: 470 + val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB; 471 + break; 472 + default: 473 + drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n", 474 + color_format); 475 + return -EINVAL; 476 + } 477 + 478 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 479 + val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK); 480 + return 0; 481 + } 482 + 483 + static void mtk_dp_set_color_depth(struct mtk_dp 
*mtk_dp) 484 + { 485 + /* Only support 8 bits currently */ 486 + /* Update MISC0 */ 487 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 488 + DP_MSA_MISC_8_BPC, DP_TEST_BIT_DEPTH_MASK); 489 + 490 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 491 + VIDEO_COLOR_DEPTH_DP_ENC0_P0_8BIT, 492 + VIDEO_COLOR_DEPTH_DP_ENC0_P0_MASK); 493 + } 494 + 495 + static void mtk_dp_config_mn_mode(struct mtk_dp *mtk_dp) 496 + { 497 + /* 0: hw mode, 1: sw mode */ 498 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3004, 499 + 0, VIDEO_M_CODE_SEL_DP_ENC0_P0_MASK); 500 + } 501 + 502 + static void mtk_dp_set_sram_read_start(struct mtk_dp *mtk_dp, u32 val) 503 + { 504 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 505 + val, SRAM_START_READ_THRD_DP_ENC0_P0_MASK); 506 + } 507 + 508 + static void mtk_dp_setup_encoder(struct mtk_dp *mtk_dp) 509 + { 510 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 511 + VIDEO_MN_GEN_EN_DP_ENC0_P0, 512 + VIDEO_MN_GEN_EN_DP_ENC0_P0); 513 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3040, 514 + SDP_DOWN_CNT_DP_ENC0_P0_VAL, 515 + SDP_DOWN_CNT_INIT_DP_ENC0_P0_MASK); 516 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3364, 517 + SDP_DOWN_CNT_IN_HBLANK_DP_ENC1_P0_VAL, 518 + SDP_DOWN_CNT_INIT_IN_HBLANK_DP_ENC1_P0_MASK); 519 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3300, 520 + VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_VAL << 8, 521 + VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_MASK); 522 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3364, 523 + FIFO_READ_START_POINT_DP_ENC1_P0_VAL << 12, 524 + FIFO_READ_START_POINT_DP_ENC1_P0_MASK); 525 + mtk_dp_write(mtk_dp, MTK_DP_ENC1_P0_3368, DP_ENC1_P0_3368_VAL); 526 + } 527 + 528 + static void mtk_dp_pg_enable(struct mtk_dp *mtk_dp, bool enable) 529 + { 530 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3038, 531 + enable ? VIDEO_SOURCE_SEL_DP_ENC0_P0_MASK : 0, 532 + VIDEO_SOURCE_SEL_DP_ENC0_P0_MASK); 533 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_31B0, 534 + PGEN_PATTERN_SEL_VAL << 4, PGEN_PATTERN_SEL_MASK); 535 + } 536 + 537 + static void mtk_dp_audio_setup_channels(struct mtk_dp *mtk_dp, 538 + struct mtk_dp_audio_cfg *cfg) 539 + { 540 + u32 channel_enable_bits; 541 + 542 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3324, 543 + AUDIO_SOURCE_MUX_DP_ENC1_P0_DPRX, 544 + AUDIO_SOURCE_MUX_DP_ENC1_P0_MASK); 545 + 546 + /* audio channel count change reset */ 547 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_33F4, 548 + DP_ENC_DUMMY_RW_1, DP_ENC_DUMMY_RW_1); 549 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3304, 550 + AU_PRTY_REGEN_DP_ENC1_P0_MASK | 551 + AU_CH_STS_REGEN_DP_ENC1_P0_MASK | 552 + AUDIO_SAMPLE_PRSENT_REGEN_DP_ENC1_P0_MASK, 553 + AU_PRTY_REGEN_DP_ENC1_P0_MASK | 554 + AU_CH_STS_REGEN_DP_ENC1_P0_MASK | 555 + AUDIO_SAMPLE_PRSENT_REGEN_DP_ENC1_P0_MASK); 556 + 557 + switch (cfg->channels) { 558 + case 2: 559 + channel_enable_bits = AUDIO_2CH_SEL_DP_ENC0_P0_MASK | 560 + AUDIO_2CH_EN_DP_ENC0_P0_MASK; 561 + break; 562 + case 8: 563 + default: 564 + channel_enable_bits = AUDIO_8CH_SEL_DP_ENC0_P0_MASK | 565 + AUDIO_8CH_EN_DP_ENC0_P0_MASK; 566 + break; 567 + } 568 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3088, 569 + channel_enable_bits | AU_EN_DP_ENC0_P0, 570 + AUDIO_2CH_SEL_DP_ENC0_P0_MASK | 571 + AUDIO_2CH_EN_DP_ENC0_P0_MASK | 572 + AUDIO_8CH_SEL_DP_ENC0_P0_MASK | 573 + AUDIO_8CH_EN_DP_ENC0_P0_MASK | 574 + AU_EN_DP_ENC0_P0); 575 + 576 + /* audio channel count change reset */ 577 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_33F4, 0, DP_ENC_DUMMY_RW_1); 578 + 579 + /* enable audio reset */ 580 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_33F4, 581 + DP_ENC_DUMMY_RW_1_AUDIO_RST_EN, 
582 + DP_ENC_DUMMY_RW_1_AUDIO_RST_EN); 583 + } 584 + 585 + static void mtk_dp_audio_channel_status_set(struct mtk_dp *mtk_dp, 586 + struct mtk_dp_audio_cfg *cfg) 587 + { 588 + struct snd_aes_iec958 iec = { 0 }; 589 + 590 + switch (cfg->sample_rate) { 591 + case 32000: 592 + iec.status[3] = IEC958_AES3_CON_FS_32000; 593 + break; 594 + case 44100: 595 + iec.status[3] = IEC958_AES3_CON_FS_44100; 596 + break; 597 + case 48000: 598 + iec.status[3] = IEC958_AES3_CON_FS_48000; 599 + break; 600 + case 88200: 601 + iec.status[3] = IEC958_AES3_CON_FS_88200; 602 + break; 603 + case 96000: 604 + iec.status[3] = IEC958_AES3_CON_FS_96000; 605 + break; 606 + case 192000: 607 + iec.status[3] = IEC958_AES3_CON_FS_192000; 608 + break; 609 + default: 610 + iec.status[3] = IEC958_AES3_CON_FS_NOTID; 611 + break; 612 + } 613 + 614 + switch (cfg->word_length_bits) { 615 + case 16: 616 + iec.status[4] = IEC958_AES4_CON_WORDLEN_20_16; 617 + break; 618 + case 20: 619 + iec.status[4] = IEC958_AES4_CON_WORDLEN_20_16 | 620 + IEC958_AES4_CON_MAX_WORDLEN_24; 621 + break; 622 + case 24: 623 + iec.status[4] = IEC958_AES4_CON_WORDLEN_24_20 | 624 + IEC958_AES4_CON_MAX_WORDLEN_24; 625 + break; 626 + default: 627 + iec.status[4] = IEC958_AES4_CON_WORDLEN_NOTID; 628 + } 629 + 630 + /* IEC 60958 consumer channel status bits */ 631 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_308C, 632 + 0, CH_STATUS_0_DP_ENC0_P0_MASK); 633 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3090, 634 + iec.status[3] << 8, CH_STATUS_1_DP_ENC0_P0_MASK); 635 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3094, 636 + iec.status[4], CH_STATUS_2_DP_ENC0_P0_MASK); 637 + } 638 + 639 + static void mtk_dp_audio_sdp_asp_set_channels(struct mtk_dp *mtk_dp, 640 + int channels) 641 + { 642 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_312C, 643 + (min(8, channels) - 1) << 8, 644 + ASP_HB2_DP_ENC0_P0_MASK | ASP_HB3_DP_ENC0_P0_MASK); 645 + } 646 + 647 + static void mtk_dp_audio_set_divider(struct mtk_dp *mtk_dp) 648 + { 649 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_30BC, 650 + AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_2, 651 + AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MASK); 652 + } 653 + 654 + static void mtk_dp_sdp_trigger_aui(struct mtk_dp *mtk_dp) 655 + { 656 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3280, 657 + MTK_DP_SDP_AUI, SDP_PACKET_TYPE_DP_ENC1_P0_MASK); 658 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3280, 659 + SDP_PACKET_W_DP_ENC1_P0, SDP_PACKET_W_DP_ENC1_P0); 660 + } 661 + 662 + static void mtk_dp_sdp_set_data(struct mtk_dp *mtk_dp, u8 *data_bytes) 663 + { 664 + mtk_dp_bulk_16bit_write(mtk_dp, MTK_DP_ENC1_P0_3200, 665 + data_bytes, 0x10); 666 + } 667 + 668 + static void mtk_dp_sdp_set_header_aui(struct mtk_dp *mtk_dp, 669 + struct dp_sdp_header *header) 670 + { 671 + u32 db_addr = MTK_DP_ENC0_P0_30D8 + (MTK_DP_SDP_AUI - 1) * 8; 672 + 673 + mtk_dp_bulk_16bit_write(mtk_dp, db_addr, (u8 *)header, 4); 674 + } 675 + 676 + static void mtk_dp_disable_sdp_aui(struct mtk_dp *mtk_dp) 677 + { 678 + /* Disable periodic send */ 679 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_30A8 & 0xfffc, 0, 680 + 0xff << ((MTK_DP_ENC0_P0_30A8 & 3) * 8)); 681 + } 682 + 683 + static void mtk_dp_setup_sdp_aui(struct mtk_dp *mtk_dp, 684 + struct dp_sdp *sdp) 685 + { 686 + u32 shift; 687 + 688 + mtk_dp_sdp_set_data(mtk_dp, sdp->db); 689 + mtk_dp_sdp_set_header_aui(mtk_dp, &sdp->sdp_header); 690 + mtk_dp_disable_sdp_aui(mtk_dp); 691 + 692 + shift = (MTK_DP_ENC0_P0_30A8 & 3) * 8; 693 + 694 + mtk_dp_sdp_trigger_aui(mtk_dp); 695 + /* Enable periodic sending */ 696 + mtk_dp_update_bits(mtk_dp, 
MTK_DP_ENC0_P0_30A8 & 0xfffc, 697 + 0x05 << shift, 0xff << shift); 698 + } 699 + 700 + static void mtk_dp_aux_irq_clear(struct mtk_dp *mtk_dp) 701 + { 702 + mtk_dp_write(mtk_dp, MTK_DP_AUX_P0_3640, DP_AUX_P0_3640_VAL); 703 + } 704 + 705 + static void mtk_dp_aux_set_cmd(struct mtk_dp *mtk_dp, u8 cmd, u32 addr) 706 + { 707 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3644, 708 + cmd, MCU_REQUEST_COMMAND_AUX_TX_P0_MASK); 709 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3648, 710 + addr, MCU_REQUEST_ADDRESS_LSB_AUX_TX_P0_MASK); 711 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_364C, 712 + addr >> 16, MCU_REQUEST_ADDRESS_MSB_AUX_TX_P0_MASK); 713 + } 714 + 715 + static void mtk_dp_aux_clear_fifo(struct mtk_dp *mtk_dp) 716 + { 717 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3650, 718 + MCU_ACK_TRAN_COMPLETE_AUX_TX_P0, 719 + MCU_ACK_TRAN_COMPLETE_AUX_TX_P0 | 720 + PHY_FIFO_RST_AUX_TX_P0_MASK | 721 + MCU_REQ_DATA_NUM_AUX_TX_P0_MASK); 722 + } 723 + 724 + static void mtk_dp_aux_request_ready(struct mtk_dp *mtk_dp) 725 + { 726 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3630, 727 + AUX_TX_REQUEST_READY_AUX_TX_P0, 728 + AUX_TX_REQUEST_READY_AUX_TX_P0); 729 + } 730 + 731 + static void mtk_dp_aux_fill_write_fifo(struct mtk_dp *mtk_dp, u8 *buf, 732 + size_t length) 733 + { 734 + mtk_dp_bulk_16bit_write(mtk_dp, MTK_DP_AUX_P0_3708, buf, length); 735 + } 736 + 737 + static void mtk_dp_aux_read_rx_fifo(struct mtk_dp *mtk_dp, u8 *buf, 738 + size_t length, int read_delay) 739 + { 740 + int read_pos; 741 + 742 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3620, 743 + 0, AUX_RD_MODE_AUX_TX_P0_MASK); 744 + 745 + for (read_pos = 0; read_pos < length; read_pos++) { 746 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3620, 747 + AUX_RX_FIFO_READ_PULSE_TX_P0, 748 + AUX_RX_FIFO_READ_PULSE_TX_P0); 749 + 750 + /* Hardware needs time to update the data */ 751 + usleep_range(read_delay, read_delay * 2); 752 + buf[read_pos] = (u8)(mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3620) & 753 + AUX_RX_FIFO_READ_DATA_AUX_TX_P0_MASK); 754 + } 755 + } 756 + 757 + static void mtk_dp_aux_set_length(struct mtk_dp *mtk_dp, size_t length) 758 + { 759 + if (length > 0) { 760 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3650, 761 + (length - 1) << 12, 762 + MCU_REQ_DATA_NUM_AUX_TX_P0_MASK); 763 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_362C, 764 + 0, 765 + AUX_NO_LENGTH_AUX_TX_P0 | 766 + AUX_TX_AUXTX_OV_EN_AUX_TX_P0_MASK | 767 + AUX_RESERVED_RW_0_AUX_TX_P0_MASK); 768 + } else { 769 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_362C, 770 + AUX_NO_LENGTH_AUX_TX_P0, 771 + AUX_NO_LENGTH_AUX_TX_P0 | 772 + AUX_TX_AUXTX_OV_EN_AUX_TX_P0_MASK | 773 + AUX_RESERVED_RW_0_AUX_TX_P0_MASK); 774 + } 775 + } 776 + 777 + static int mtk_dp_aux_wait_for_completion(struct mtk_dp *mtk_dp, bool is_read) 778 + { 779 + int wait_reply = MTK_DP_AUX_WAIT_REPLY_COUNT; 780 + 781 + while (--wait_reply) { 782 + u32 aux_irq_status; 783 + 784 + if (is_read) { 785 + u32 fifo_status = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3618); 786 + 787 + if (fifo_status & 788 + (AUX_RX_FIFO_WRITE_POINTER_AUX_TX_P0_MASK | 789 + AUX_RX_FIFO_FULL_AUX_TX_P0_MASK)) { 790 + return 0; 791 + } 792 + } 793 + 794 + aux_irq_status = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3640); 795 + if (aux_irq_status & AUX_RX_AUX_RECV_COMPLETE_IRQ_AUX_TX_P0) 796 + return 0; 797 + 798 + if (aux_irq_status & AUX_400US_TIMEOUT_IRQ_AUX_TX_P0) 799 + return -ETIMEDOUT; 800 + 801 + /* Give the hardware a chance to reach completion before retrying */ 802 + usleep_range(100, 500); 803 + } 804 + 805 + return -ETIMEDOUT; 806 + } 807 + 808 + static int 
mtk_dp_aux_do_transfer(struct mtk_dp *mtk_dp, bool is_read, u8 cmd, 809 + u32 addr, u8 *buf, size_t length) 810 + { 811 + int ret; 812 + u32 reply_cmd; 813 + 814 + if (is_read && (length > DP_AUX_MAX_PAYLOAD_BYTES || 815 + (cmd == DP_AUX_NATIVE_READ && !length))) 816 + return -EINVAL; 817 + 818 + if (!is_read) 819 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3704, 820 + AUX_TX_FIFO_NEW_MODE_EN_AUX_TX_P0, 821 + AUX_TX_FIFO_NEW_MODE_EN_AUX_TX_P0); 822 + 823 + /* We need to clear fifo and irq before sending commands to the sink device. */ 824 + mtk_dp_aux_clear_fifo(mtk_dp); 825 + mtk_dp_aux_irq_clear(mtk_dp); 826 + 827 + mtk_dp_aux_set_cmd(mtk_dp, cmd, addr); 828 + mtk_dp_aux_set_length(mtk_dp, length); 829 + 830 + if (!is_read) { 831 + if (length) 832 + mtk_dp_aux_fill_write_fifo(mtk_dp, buf, length); 833 + 834 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3704, 835 + AUX_TX_FIFO_WDATA_NEW_MODE_T_AUX_TX_P0_MASK, 836 + AUX_TX_FIFO_WDATA_NEW_MODE_T_AUX_TX_P0_MASK); 837 + } 838 + 839 + mtk_dp_aux_request_ready(mtk_dp); 840 + 841 + /* Wait for feedback from sink device. */ 842 + ret = mtk_dp_aux_wait_for_completion(mtk_dp, is_read); 843 + 844 + reply_cmd = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3624) & 845 + AUX_RX_REPLY_COMMAND_AUX_TX_P0_MASK; 846 + 847 + if (ret || reply_cmd) { 848 + u32 phy_status = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3628) & 849 + AUX_RX_PHY_STATE_AUX_TX_P0_MASK; 850 + if (phy_status != AUX_RX_PHY_STATE_AUX_TX_P0_RX_IDLE) { 851 + drm_err(mtk_dp->drm_dev, 852 + "AUX Rx Aux hang, need SW reset\n"); 853 + return -EIO; 854 + } 855 + 856 + return -ETIMEDOUT; 857 + } 858 + 859 + if (!length) { 860 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_362C, 861 + 0, 862 + AUX_NO_LENGTH_AUX_TX_P0 | 863 + AUX_TX_AUXTX_OV_EN_AUX_TX_P0_MASK | 864 + AUX_RESERVED_RW_0_AUX_TX_P0_MASK); 865 + } else if (is_read) { 866 + int read_delay; 867 + 868 + if (cmd == (DP_AUX_I2C_READ | DP_AUX_I2C_MOT) || 869 + cmd == DP_AUX_I2C_READ) 870 + read_delay = 500; 871 + else 872 + read_delay = 100; 873 + 874 + mtk_dp_aux_read_rx_fifo(mtk_dp, buf, length, read_delay); 875 + } 876 + 877 + return 0; 878 + } 879 + 880 + static void mtk_dp_set_swing_pre_emphasis(struct mtk_dp *mtk_dp, int lane_num, 881 + int swing_val, int preemphasis) 882 + { 883 + u32 lane_shift = lane_num * DP_TX1_VOLT_SWING_SHIFT; 884 + 885 + dev_dbg(mtk_dp->dev, 886 + "link training: swing_val = 0x%x, pre-emphasis = 0x%x\n", 887 + swing_val, preemphasis); 888 + 889 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_SWING_EMP, 890 + swing_val << (DP_TX0_VOLT_SWING_SHIFT + lane_shift), 891 + DP_TX0_VOLT_SWING_MASK << lane_shift); 892 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_SWING_EMP, 893 + preemphasis << (DP_TX0_PRE_EMPH_SHIFT + lane_shift), 894 + DP_TX0_PRE_EMPH_MASK << lane_shift); 895 + } 896 + 897 + static void mtk_dp_reset_swing_pre_emphasis(struct mtk_dp *mtk_dp) 898 + { 899 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_SWING_EMP, 900 + 0, 901 + DP_TX0_VOLT_SWING_MASK | 902 + DP_TX1_VOLT_SWING_MASK | 903 + DP_TX2_VOLT_SWING_MASK | 904 + DP_TX3_VOLT_SWING_MASK | 905 + DP_TX0_PRE_EMPH_MASK | 906 + DP_TX1_PRE_EMPH_MASK | 907 + DP_TX2_PRE_EMPH_MASK | 908 + DP_TX3_PRE_EMPH_MASK); 909 + } 910 + 911 + static u32 mtk_dp_swirq_get_clear(struct mtk_dp *mtk_dp) 912 + { 913 + u32 irq_status = mtk_dp_read(mtk_dp, MTK_DP_TRANS_P0_35D0) & 914 + SW_IRQ_FINAL_STATUS_DP_TRANS_P0_MASK; 915 + 916 + if (irq_status) { 917 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_35C8, 918 + irq_status, SW_IRQ_CLR_DP_TRANS_P0_MASK); 919 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_35C8, 920 + 0, 
SW_IRQ_CLR_DP_TRANS_P0_MASK); 921 + } 922 + 923 + return irq_status; 924 + } 925 + 926 + static u32 mtk_dp_hwirq_get_clear(struct mtk_dp *mtk_dp) 927 + { 928 + u32 irq_status = (mtk_dp_read(mtk_dp, MTK_DP_TRANS_P0_3418) & 929 + IRQ_STATUS_DP_TRANS_P0_MASK) >> 12; 930 + 931 + if (irq_status) { 932 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3418, 933 + irq_status, IRQ_CLR_DP_TRANS_P0_MASK); 934 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3418, 935 + 0, IRQ_CLR_DP_TRANS_P0_MASK); 936 + } 937 + 938 + return irq_status; 939 + } 940 + 941 + static void mtk_dp_hwirq_enable(struct mtk_dp *mtk_dp, bool enable) 942 + { 943 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3418, 944 + enable ? 0 : 945 + IRQ_MASK_DP_TRANS_P0_DISC_IRQ | 946 + IRQ_MASK_DP_TRANS_P0_CONN_IRQ | 947 + IRQ_MASK_DP_TRANS_P0_INT_IRQ, 948 + IRQ_MASK_DP_TRANS_P0_MASK); 949 + } 950 + 951 + static void mtk_dp_initialize_settings(struct mtk_dp *mtk_dp) 952 + { 953 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_342C, 954 + XTAL_FREQ_DP_TRANS_P0_DEFAULT, 955 + XTAL_FREQ_DP_TRANS_P0_MASK); 956 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3540, 957 + FEC_CLOCK_EN_MODE_DP_TRANS_P0, 958 + FEC_CLOCK_EN_MODE_DP_TRANS_P0); 959 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_31EC, 960 + AUDIO_CH_SRC_SEL_DP_ENC0_P0, 961 + AUDIO_CH_SRC_SEL_DP_ENC0_P0); 962 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_304C, 963 + 0, SDP_VSYNC_RISING_MASK_DP_ENC0_P0_MASK); 964 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_IRQ_MASK, 965 + IRQ_MASK_AUX_TOP_IRQ, IRQ_MASK_AUX_TOP_IRQ); 966 + } 967 + 968 + static void mtk_dp_initialize_hpd_detect_settings(struct mtk_dp *mtk_dp) 969 + { 970 + u32 val; 971 + /* Debounce threshold */ 972 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3410, 973 + 8, HPD_DEB_THD_DP_TRANS_P0_MASK); 974 + 975 + val = (HPD_INT_THD_DP_TRANS_P0_LOWER_500US | 976 + HPD_INT_THD_DP_TRANS_P0_UPPER_1100US) << 4; 977 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3410, 978 + val, HPD_INT_THD_DP_TRANS_P0_MASK); 979 + 980 + /* 981 + * Connect threshold 1.5ms + 5 x 0.1ms = 2ms 982 + * Disconnect threshold 1.5ms + 5 x 0.1ms = 2ms 983 + */ 984 + val = (5 << 8) | (5 << 12); 985 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3410, 986 + val, 987 + HPD_DISC_THD_DP_TRANS_P0_MASK | 988 + HPD_CONN_THD_DP_TRANS_P0_MASK); 989 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3430, 990 + HPD_INT_THD_ECO_DP_TRANS_P0_HIGH_BOUND_EXT, 991 + HPD_INT_THD_ECO_DP_TRANS_P0_MASK); 992 + } 993 + 994 + static void mtk_dp_initialize_aux_settings(struct mtk_dp *mtk_dp) 995 + { 996 + /* modify timeout threshold = 0x1595 */ 997 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_360C, 998 + AUX_TIMEOUT_THR_AUX_TX_P0_VAL, 999 + AUX_TIMEOUT_THR_AUX_TX_P0_MASK); 1000 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3658, 1001 + 0, AUX_TX_OV_EN_AUX_TX_P0_MASK); 1002 + /* 25 for 26M */ 1003 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3634, 1004 + AUX_TX_OVER_SAMPLE_RATE_FOR_26M << 8, 1005 + AUX_TX_OVER_SAMPLE_RATE_AUX_TX_P0_MASK); 1006 + /* 13 for 26M */ 1007 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3614, 1008 + AUX_RX_UI_CNT_THR_AUX_FOR_26M, 1009 + AUX_RX_UI_CNT_THR_AUX_TX_P0_MASK); 1010 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_37C8, 1011 + MTK_ATOP_EN_AUX_TX_P0, 1012 + MTK_ATOP_EN_AUX_TX_P0); 1013 + } 1014 + 1015 + static void mtk_dp_initialize_digital_settings(struct mtk_dp *mtk_dp) 1016 + { 1017 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_304C, 1018 + 0, VBID_VIDEO_MUTE_DP_ENC0_P0_MASK); 1019 + 1020 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3368, 1021 + BS2BS_MODE_DP_ENC1_P0_VAL << 12, 1022 + 
BS2BS_MODE_DP_ENC1_P0_MASK); 1023 + 1024 + /* dp tx encoder reset all sw */ 1025 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3004, 1026 + DP_TX_ENCODER_4P_RESET_SW_DP_ENC0_P0, 1027 + DP_TX_ENCODER_4P_RESET_SW_DP_ENC0_P0); 1028 + 1029 + /* Wait for sw reset to complete */ 1030 + usleep_range(1000, 5000); 1031 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3004, 1032 + 0, DP_TX_ENCODER_4P_RESET_SW_DP_ENC0_P0); 1033 + } 1034 + 1035 + static void mtk_dp_digital_sw_reset(struct mtk_dp *mtk_dp) 1036 + { 1037 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_340C, 1038 + DP_TX_TRANSMITTER_4P_RESET_SW_DP_TRANS_P0, 1039 + DP_TX_TRANSMITTER_4P_RESET_SW_DP_TRANS_P0); 1040 + 1041 + /* Wait for sw reset to complete */ 1042 + usleep_range(1000, 5000); 1043 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_340C, 1044 + 0, DP_TX_TRANSMITTER_4P_RESET_SW_DP_TRANS_P0); 1045 + } 1046 + 1047 + static void mtk_dp_set_lanes(struct mtk_dp *mtk_dp, int lanes) 1048 + { 1049 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_35F0, 1050 + lanes == 0 ? 0 : DP_TRANS_DUMMY_RW_0, 1051 + DP_TRANS_DUMMY_RW_0_MASK); 1052 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3000, 1053 + lanes, LANE_NUM_DP_ENC0_P0_MASK); 1054 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_34A4, 1055 + lanes << 2, LANE_NUM_DP_TRANS_P0_MASK); 1056 + } 1057 + 1058 + static void mtk_dp_get_calibration_data(struct mtk_dp *mtk_dp) 1059 + { 1060 + const struct mtk_dp_efuse_fmt *fmt; 1061 + struct device *dev = mtk_dp->dev; 1062 + struct nvmem_cell *cell; 1063 + u32 *cal_data = mtk_dp->cal_data; 1064 + u32 *buf; 1065 + int i; 1066 + size_t len; 1067 + 1068 + cell = nvmem_cell_get(dev, "dp_calibration_data"); 1069 + if (IS_ERR(cell)) { 1070 + dev_warn(dev, "Failed to get nvmem cell dp_calibration_data\n"); 1071 + goto use_default_val; 1072 + } 1073 + 1074 + buf = (u32 *)nvmem_cell_read(cell, &len); 1075 + nvmem_cell_put(cell); 1076 + 1077 + if (IS_ERR(buf) || ((len / sizeof(u32)) != 4)) { 1078 + dev_warn(dev, "Failed to read nvmem_cell_read\n"); 1079 + 1080 + if (!IS_ERR(buf)) 1081 + kfree(buf); 1082 + 1083 + goto use_default_val; 1084 + } 1085 + 1086 + for (i = 0; i < MTK_DP_CAL_MAX; i++) { 1087 + fmt = &mtk_dp->data->efuse_fmt[i]; 1088 + cal_data[i] = (buf[fmt->idx] >> fmt->shift) & fmt->mask; 1089 + 1090 + if (cal_data[i] < fmt->min_val || cal_data[i] > fmt->max_val) { 1091 + dev_warn(mtk_dp->dev, "Invalid efuse data, idx = %d\n", i); 1092 + kfree(buf); 1093 + goto use_default_val; 1094 + } 1095 + } 1096 + kfree(buf); 1097 + 1098 + return; 1099 + 1100 + use_default_val: 1101 + dev_warn(mtk_dp->dev, "Use default calibration data\n"); 1102 + for (i = 0; i < MTK_DP_CAL_MAX; i++) 1103 + cal_data[i] = mtk_dp->data->efuse_fmt[i].default_val; 1104 + } 1105 + 1106 + static void mtk_dp_set_calibration_data(struct mtk_dp *mtk_dp) 1107 + { 1108 + u32 *cal_data = mtk_dp->cal_data; 1109 + 1110 + mtk_dp_update_bits(mtk_dp, DP_PHY_GLB_DPAUX_TX, 1111 + cal_data[MTK_DP_CAL_CLKTX_IMPSE] << 20, 1112 + RG_CKM_PT0_CKTX_IMPSEL); 1113 + mtk_dp_update_bits(mtk_dp, DP_PHY_GLB_BIAS_GEN_00, 1114 + cal_data[MTK_DP_CAL_GLB_BIAS_TRIM] << 16, 1115 + RG_XTP_GLB_BIAS_INTR_CTRL); 1116 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_0, 1117 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0] << 12, 1118 + RG_XTP_LN0_TX_IMPSEL_PMOS); 1119 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_0, 1120 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0] << 16, 1121 + RG_XTP_LN0_TX_IMPSEL_NMOS); 1122 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_1, 1123 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1] << 12, 1124 + RG_XTP_LN1_TX_IMPSEL_PMOS); 1125 + 
mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_1, 1126 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1] << 16, 1127 + RG_XTP_LN1_TX_IMPSEL_NMOS); 1128 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_2, 1129 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2] << 12, 1130 + RG_XTP_LN2_TX_IMPSEL_PMOS); 1131 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_2, 1132 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2] << 16, 1133 + RG_XTP_LN2_TX_IMPSEL_NMOS); 1134 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_3, 1135 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3] << 12, 1136 + RG_XTP_LN3_TX_IMPSEL_PMOS); 1137 + mtk_dp_update_bits(mtk_dp, DP_PHY_LANE_TX_3, 1138 + cal_data[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3] << 16, 1139 + RG_XTP_LN3_TX_IMPSEL_NMOS); 1140 + } 1141 + 1142 + static int mtk_dp_phy_configure(struct mtk_dp *mtk_dp, 1143 + u32 link_rate, int lane_count) 1144 + { 1145 + int ret; 1146 + union phy_configure_opts phy_opts = { 1147 + .dp = { 1148 + .link_rate = drm_dp_bw_code_to_link_rate(link_rate) / 100, 1149 + .set_rate = 1, 1150 + .lanes = lane_count, 1151 + .set_lanes = 1, 1152 + .ssc = mtk_dp->train_info.sink_ssc, 1153 + } 1154 + }; 1155 + 1156 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, DP_PWR_STATE_BANDGAP, 1157 + DP_PWR_STATE_MASK); 1158 + 1159 + ret = phy_configure(mtk_dp->phy, &phy_opts); 1160 + if (ret) 1161 + return ret; 1162 + 1163 + mtk_dp_set_calibration_data(mtk_dp); 1164 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 1165 + DP_PWR_STATE_BANDGAP_TPLL_LANE, DP_PWR_STATE_MASK); 1166 + 1167 + return 0; 1168 + } 1169 + 1170 + static void mtk_dp_set_idle_pattern(struct mtk_dp *mtk_dp, bool enable) 1171 + { 1172 + u32 val = POST_MISC_DATA_LANE0_OV_DP_TRANS_P0_MASK | 1173 + POST_MISC_DATA_LANE1_OV_DP_TRANS_P0_MASK | 1174 + POST_MISC_DATA_LANE2_OV_DP_TRANS_P0_MASK | 1175 + POST_MISC_DATA_LANE3_OV_DP_TRANS_P0_MASK; 1176 + 1177 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3580, 1178 + enable ? val : 0, val); 1179 + } 1180 + 1181 + static void mtk_dp_train_set_pattern(struct mtk_dp *mtk_dp, int pattern) 1182 + { 1183 + /* TPS1 */ 1184 + if (pattern == 1) 1185 + mtk_dp_set_idle_pattern(mtk_dp, false); 1186 + 1187 + mtk_dp_update_bits(mtk_dp, 1188 + MTK_DP_TRANS_P0_3400, 1189 + pattern ? BIT(pattern - 1) << 12 : 0, 1190 + PATTERN1_EN_DP_TRANS_P0_MASK | 1191 + PATTERN2_EN_DP_TRANS_P0_MASK | 1192 + PATTERN3_EN_DP_TRANS_P0_MASK | 1193 + PATTERN4_EN_DP_TRANS_P0_MASK); 1194 + } 1195 + 1196 + static void mtk_dp_set_enhanced_frame_mode(struct mtk_dp *mtk_dp) 1197 + { 1198 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3000, 1199 + ENHANCED_FRAME_EN_DP_ENC0_P0, 1200 + ENHANCED_FRAME_EN_DP_ENC0_P0); 1201 + } 1202 + 1203 + static void mtk_dp_training_set_scramble(struct mtk_dp *mtk_dp, bool enable) 1204 + { 1205 + mtk_dp_update_bits(mtk_dp, MTK_DP_TRANS_P0_3404, 1206 + enable ? DP_SCR_EN_DP_TRANS_P0_MASK : 0, 1207 + DP_SCR_EN_DP_TRANS_P0_MASK); 1208 + } 1209 + 1210 + static void mtk_dp_video_mute(struct mtk_dp *mtk_dp, bool enable) 1211 + { 1212 + struct arm_smccc_res res; 1213 + u32 val = VIDEO_MUTE_SEL_DP_ENC0_P0 | 1214 + (enable ? 
VIDEO_MUTE_SW_DP_ENC0_P0 : 0); 1215 + 1216 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3000, 1217 + val, 1218 + VIDEO_MUTE_SEL_DP_ENC0_P0 | 1219 + VIDEO_MUTE_SW_DP_ENC0_P0); 1220 + 1221 + arm_smccc_smc(MTK_DP_SIP_CONTROL_AARCH32, 1222 + mtk_dp->data->smc_cmd, enable, 1223 + 0, 0, 0, 0, 0, &res); 1224 + 1225 + dev_dbg(mtk_dp->dev, "smc cmd: 0x%x, p1: 0x%x, ret: 0x%lx-0x%lx\n", 1226 + mtk_dp->data->smc_cmd, enable, res.a0, res.a1); 1227 + } 1228 + 1229 + static void mtk_dp_audio_mute(struct mtk_dp *mtk_dp, bool mute) 1230 + { 1231 + u32 val[3]; 1232 + 1233 + if (mute) { 1234 + val[0] = VBID_AUDIO_MUTE_FLAG_SW_DP_ENC0_P0 | 1235 + VBID_AUDIO_MUTE_FLAG_SEL_DP_ENC0_P0; 1236 + val[1] = 0; 1237 + val[2] = 0; 1238 + } else { 1239 + val[0] = 0; 1240 + val[1] = AU_EN_DP_ENC0_P0; 1241 + /* Send one every two frames */ 1242 + val[2] = 0x0F; 1243 + } 1244 + 1245 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3030, 1246 + val[0], 1247 + VBID_AUDIO_MUTE_FLAG_SW_DP_ENC0_P0 | 1248 + VBID_AUDIO_MUTE_FLAG_SEL_DP_ENC0_P0); 1249 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3088, 1250 + val[1], AU_EN_DP_ENC0_P0); 1251 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_30A4, 1252 + val[2], AU_TS_CFG_DP_ENC0_P0_MASK); 1253 + } 1254 + 1255 + static void mtk_dp_power_enable(struct mtk_dp *mtk_dp) 1256 + { 1257 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_RESET_AND_PROBE, 1258 + 0, SW_RST_B_PHYD); 1259 + 1260 + /* Wait for power enable */ 1261 + usleep_range(10, 200); 1262 + 1263 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_RESET_AND_PROBE, 1264 + SW_RST_B_PHYD, SW_RST_B_PHYD); 1265 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 1266 + DP_PWR_STATE_BANDGAP_TPLL, DP_PWR_STATE_MASK); 1267 + mtk_dp_write(mtk_dp, MTK_DP_1040, 1268 + RG_DPAUX_RX_VALID_DEGLITCH_EN | RG_XTP_GLB_CKDET_EN | 1269 + RG_DPAUX_RX_EN); 1270 + mtk_dp_update_bits(mtk_dp, MTK_DP_0034, 0, DA_CKM_CKTX0_EN_FORCE_EN); 1271 + } 1272 + 1273 + static void mtk_dp_power_disable(struct mtk_dp *mtk_dp) 1274 + { 1275 + mtk_dp_write(mtk_dp, MTK_DP_TOP_PWR_STATE, 0); 1276 + 1277 + mtk_dp_update_bits(mtk_dp, MTK_DP_0034, 1278 + DA_CKM_CKTX0_EN_FORCE_EN, DA_CKM_CKTX0_EN_FORCE_EN); 1279 + 1280 + /* Disable RX */ 1281 + mtk_dp_write(mtk_dp, MTK_DP_1040, 0); 1282 + mtk_dp_write(mtk_dp, MTK_DP_TOP_MEM_PD, 1283 + 0x550 | FUSE_SEL | MEM_ISO_EN); 1284 + } 1285 + 1286 + static void mtk_dp_initialize_priv_data(struct mtk_dp *mtk_dp) 1287 + { 1288 + mtk_dp->train_info.link_rate = DP_LINK_BW_5_4; 1289 + mtk_dp->train_info.lane_count = mtk_dp->max_lanes; 1290 + mtk_dp->train_info.cable_plugged_in = false; 1291 + 1292 + mtk_dp->info.format = DP_PIXELFORMAT_RGB; 1293 + memset(&mtk_dp->info.vm, 0, sizeof(struct videomode)); 1294 + mtk_dp->audio_enable = false; 1295 + } 1296 + 1297 + static void mtk_dp_sdp_set_down_cnt_init(struct mtk_dp *mtk_dp, 1298 + u32 sram_read_start) 1299 + { 1300 + u32 sdp_down_cnt_init = 0; 1301 + struct drm_display_mode mode; 1302 + struct videomode *vm = &mtk_dp->info.vm; 1303 + 1304 + drm_display_mode_from_videomode(vm, &mode); 1305 + 1306 + if (mode.clock > 0) 1307 + sdp_down_cnt_init = sram_read_start * 1308 + mtk_dp->train_info.link_rate * 2700 * 8 / 1309 + (mode.clock * 4); 1310 + 1311 + switch (mtk_dp->train_info.lane_count) { 1312 + case 1: 1313 + sdp_down_cnt_init = max_t(u32, sdp_down_cnt_init, 0x1A); 1314 + break; 1315 + case 2: 1316 + /* case for LowResolution && High Audio Sample Rate */ 1317 + sdp_down_cnt_init = max_t(u32, sdp_down_cnt_init, 0x10); 1318 + sdp_down_cnt_init += mode.vtotal <= 525 ? 
4 : 0; 1319 + break; 1320 + case 4: 1321 + default: 1322 + sdp_down_cnt_init = max_t(u32, sdp_down_cnt_init, 6); 1323 + break; 1324 + } 1325 + 1326 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3040, 1327 + sdp_down_cnt_init, 1328 + SDP_DOWN_CNT_INIT_DP_ENC0_P0_MASK); 1329 + } 1330 + 1331 + static void mtk_dp_sdp_set_down_cnt_init_in_hblank(struct mtk_dp *mtk_dp) 1332 + { 1333 + int pix_clk_mhz; 1334 + u32 dc_offset; 1335 + u32 spd_down_cnt_init = 0; 1336 + struct drm_display_mode mode; 1337 + struct videomode *vm = &mtk_dp->info.vm; 1338 + 1339 + drm_display_mode_from_videomode(vm, &mode); 1340 + 1341 + pix_clk_mhz = mtk_dp->info.format == DP_PIXELFORMAT_YUV420 ? 1342 + mode.clock / 2000 : mode.clock / 1000; 1343 + 1344 + switch (mtk_dp->train_info.lane_count) { 1345 + case 1: 1346 + spd_down_cnt_init = 0x20; 1347 + break; 1348 + case 2: 1349 + dc_offset = (mode.vtotal <= 525) ? 0x14 : 0x00; 1350 + spd_down_cnt_init = 0x18 + dc_offset; 1351 + break; 1352 + case 4: 1353 + default: 1354 + dc_offset = (mode.vtotal <= 525) ? 0x08 : 0x00; 1355 + if (pix_clk_mhz > mtk_dp->train_info.link_rate * 27) 1356 + spd_down_cnt_init = 0x8; 1357 + else 1358 + spd_down_cnt_init = 0x10 + dc_offset; 1359 + break; 1360 + } 1361 + 1362 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC1_P0_3364, spd_down_cnt_init, 1363 + SDP_DOWN_CNT_INIT_IN_HBLANK_DP_ENC1_P0_MASK); 1364 + } 1365 + 1366 + static void mtk_dp_setup_tu(struct mtk_dp *mtk_dp) 1367 + { 1368 + u32 sram_read_start = min_t(u32, MTK_DP_TBC_BUF_READ_START_ADDR, 1369 + mtk_dp->info.vm.hactive / 1370 + mtk_dp->train_info.lane_count / 1371 + MTK_DP_4P1T / MTK_DP_HDE / 1372 + MTK_DP_PIX_PER_ADDR); 1373 + mtk_dp_set_sram_read_start(mtk_dp, sram_read_start); 1374 + mtk_dp_setup_encoder(mtk_dp); 1375 + mtk_dp_sdp_set_down_cnt_init_in_hblank(mtk_dp); 1376 + mtk_dp_sdp_set_down_cnt_init(mtk_dp, sram_read_start); 1377 + } 1378 + 1379 + static void mtk_dp_set_tx_out(struct mtk_dp *mtk_dp) 1380 + { 1381 + mtk_dp_setup_tu(mtk_dp); 1382 + } 1383 + 1384 + static void mtk_dp_train_update_swing_pre(struct mtk_dp *mtk_dp, int lanes, 1385 + u8 dpcd_adjust_req[2]) 1386 + { 1387 + int lane; 1388 + 1389 + for (lane = 0; lane < lanes; ++lane) { 1390 + u8 val; 1391 + u8 swing; 1392 + u8 preemphasis; 1393 + int index = lane / 2; 1394 + int shift = lane % 2 ? 
DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT : 0; 1395 + 1396 + swing = (dpcd_adjust_req[index] >> shift) & 1397 + DP_ADJUST_VOLTAGE_SWING_LANE0_MASK; 1398 + preemphasis = ((dpcd_adjust_req[index] >> shift) & 1399 + DP_ADJUST_PRE_EMPHASIS_LANE0_MASK) >> 1400 + DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT; 1401 + val = swing << DP_TRAIN_VOLTAGE_SWING_SHIFT | 1402 + preemphasis << DP_TRAIN_PRE_EMPHASIS_SHIFT; 1403 + 1404 + if (swing == DP_TRAIN_VOLTAGE_SWING_LEVEL_3) 1405 + val |= DP_TRAIN_MAX_SWING_REACHED; 1406 + if (preemphasis == 3) 1407 + val |= DP_TRAIN_MAX_PRE_EMPHASIS_REACHED; 1408 + 1409 + mtk_dp_set_swing_pre_emphasis(mtk_dp, lane, swing, preemphasis); 1410 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_TRAINING_LANE0_SET + lane, 1411 + val); 1412 + } 1413 + } 1414 + 1415 + static void mtk_dp_pattern(struct mtk_dp *mtk_dp, bool is_tps1) 1416 + { 1417 + int pattern; 1418 + unsigned int aux_offset; 1419 + 1420 + if (is_tps1) { 1421 + pattern = 1; 1422 + aux_offset = DP_LINK_SCRAMBLING_DISABLE | DP_TRAINING_PATTERN_1; 1423 + } else { 1424 + aux_offset = mtk_dp->train_info.channel_eq_pattern; 1425 + 1426 + switch (mtk_dp->train_info.channel_eq_pattern) { 1427 + case DP_TRAINING_PATTERN_4: 1428 + pattern = 4; 1429 + break; 1430 + case DP_TRAINING_PATTERN_3: 1431 + pattern = 3; 1432 + aux_offset |= DP_LINK_SCRAMBLING_DISABLE; 1433 + break; 1434 + case DP_TRAINING_PATTERN_2: 1435 + default: 1436 + pattern = 2; 1437 + aux_offset |= DP_LINK_SCRAMBLING_DISABLE; 1438 + break; 1439 + } 1440 + } 1441 + 1442 + mtk_dp_train_set_pattern(mtk_dp, pattern); 1443 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_TRAINING_PATTERN_SET, aux_offset); 1444 + } 1445 + 1446 + static int mtk_dp_train_setting(struct mtk_dp *mtk_dp, u8 target_link_rate, 1447 + u8 target_lane_count) 1448 + { 1449 + int ret; 1450 + 1451 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_LINK_BW_SET, target_link_rate); 1452 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_LANE_COUNT_SET, 1453 + target_lane_count | DP_LANE_COUNT_ENHANCED_FRAME_EN); 1454 + 1455 + if (mtk_dp->train_info.sink_ssc) 1456 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_DOWNSPREAD_CTRL, 1457 + DP_SPREAD_AMP_0_5); 1458 + 1459 + mtk_dp_set_lanes(mtk_dp, target_lane_count / 2); 1460 + ret = mtk_dp_phy_configure(mtk_dp, target_link_rate, target_lane_count); 1461 + if (ret) 1462 + return ret; 1463 + 1464 + dev_dbg(mtk_dp->dev, 1465 + "Link train target_link_rate = 0x%x, target_lane_count = 0x%x\n", 1466 + target_link_rate, target_lane_count); 1467 + 1468 + return 0; 1469 + } 1470 + 1471 + static int mtk_dp_train_cr(struct mtk_dp *mtk_dp, u8 target_lane_count) 1472 + { 1473 + u8 lane_adjust[2] = {}; 1474 + u8 link_status[DP_LINK_STATUS_SIZE] = {}; 1475 + u8 prev_lane_adjust = 0xff; 1476 + int train_retries = 0; 1477 + int voltage_retries = 0; 1478 + 1479 + mtk_dp_pattern(mtk_dp, true); 1480 + 1481 + /* In DP spec 1.4, the retry count of CR is defined as 10. 
*/ 1482 + do { 1483 + train_retries++; 1484 + if (!mtk_dp->train_info.cable_plugged_in) { 1485 + mtk_dp_train_set_pattern(mtk_dp, 0); 1486 + return -ENODEV; 1487 + } 1488 + 1489 + drm_dp_dpcd_read(&mtk_dp->aux, DP_ADJUST_REQUEST_LANE0_1, 1490 + lane_adjust, sizeof(lane_adjust)); 1491 + mtk_dp_train_update_swing_pre(mtk_dp, target_lane_count, 1492 + lane_adjust); 1493 + 1494 + drm_dp_link_train_clock_recovery_delay(&mtk_dp->aux, 1495 + mtk_dp->rx_cap); 1496 + 1497 + /* check link status from sink device */ 1498 + drm_dp_dpcd_read_link_status(&mtk_dp->aux, link_status); 1499 + if (drm_dp_clock_recovery_ok(link_status, 1500 + target_lane_count)) { 1501 + dev_dbg(mtk_dp->dev, "Link train CR pass\n"); 1502 + return 0; 1503 + } 1504 + 1505 + /* 1506 + * In DP spec 1.4, if current voltage level is the same 1507 + * with previous voltage level, we need to retry 5 times. 1508 + */ 1509 + if (prev_lane_adjust == link_status[4]) { 1510 + voltage_retries++; 1511 + /* 1512 + * Condition of CR fail: 1513 + * 1. Failed to pass CR using the same voltage 1514 + * level over five times. 1515 + * 2. Failed to pass CR when the current voltage 1516 + * level is the same with previous voltage 1517 + * level and reach max voltage level (3). 1518 + */ 1519 + if (voltage_retries > MTK_DP_TRAIN_VOLTAGE_LEVEL_RETRY || 1520 + (prev_lane_adjust & DP_ADJUST_VOLTAGE_SWING_LANE0_MASK) == 3) { 1521 + dev_dbg(mtk_dp->dev, "Link train CR fail\n"); 1522 + break; 1523 + } 1524 + } else { 1525 + /* 1526 + * If the voltage level is changed, we need to 1527 + * re-calculate this retry count. 1528 + */ 1529 + voltage_retries = 0; 1530 + } 1531 + prev_lane_adjust = link_status[4]; 1532 + } while (train_retries < MTK_DP_TRAIN_DOWNSCALE_RETRY); 1533 + 1534 + /* Failed to train CR, and disable pattern. */ 1535 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_TRAINING_PATTERN_SET, 1536 + DP_TRAINING_PATTERN_DISABLE); 1537 + mtk_dp_train_set_pattern(mtk_dp, 0); 1538 + 1539 + return -ETIMEDOUT; 1540 + } 1541 + 1542 + static int mtk_dp_train_eq(struct mtk_dp *mtk_dp, u8 target_lane_count) 1543 + { 1544 + u8 lane_adjust[2] = {}; 1545 + u8 link_status[DP_LINK_STATUS_SIZE] = {}; 1546 + int train_retries = 0; 1547 + 1548 + mtk_dp_pattern(mtk_dp, false); 1549 + 1550 + do { 1551 + train_retries++; 1552 + if (!mtk_dp->train_info.cable_plugged_in) { 1553 + mtk_dp_train_set_pattern(mtk_dp, 0); 1554 + return -ENODEV; 1555 + } 1556 + 1557 + drm_dp_dpcd_read(&mtk_dp->aux, DP_ADJUST_REQUEST_LANE0_1, 1558 + lane_adjust, sizeof(lane_adjust)); 1559 + mtk_dp_train_update_swing_pre(mtk_dp, target_lane_count, 1560 + lane_adjust); 1561 + 1562 + drm_dp_link_train_channel_eq_delay(&mtk_dp->aux, 1563 + mtk_dp->rx_cap); 1564 + 1565 + /* check link status from sink device */ 1566 + drm_dp_dpcd_read_link_status(&mtk_dp->aux, link_status); 1567 + if (drm_dp_channel_eq_ok(link_status, target_lane_count)) { 1568 + dev_dbg(mtk_dp->dev, "Link train EQ pass\n"); 1569 + 1570 + /* Training done, and disable pattern. */ 1571 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_TRAINING_PATTERN_SET, 1572 + DP_TRAINING_PATTERN_DISABLE); 1573 + mtk_dp_train_set_pattern(mtk_dp, 0); 1574 + return 0; 1575 + } 1576 + dev_dbg(mtk_dp->dev, "Link train EQ fail\n"); 1577 + } while (train_retries < MTK_DP_TRAIN_DOWNSCALE_RETRY); 1578 + 1579 + /* Failed to train EQ, and disable pattern. 
*/ 1580 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_TRAINING_PATTERN_SET, 1581 + DP_TRAINING_PATTERN_DISABLE); 1582 + mtk_dp_train_set_pattern(mtk_dp, 0); 1583 + 1584 + return -ETIMEDOUT; 1585 + } 1586 + 1587 + static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp) 1588 + { 1589 + u8 val; 1590 + ssize_t ret; 1591 + 1592 + drm_dp_read_dpcd_caps(&mtk_dp->aux, mtk_dp->rx_cap); 1593 + 1594 + if (drm_dp_tps4_supported(mtk_dp->rx_cap)) 1595 + mtk_dp->train_info.channel_eq_pattern = DP_TRAINING_PATTERN_4; 1596 + else if (drm_dp_tps3_supported(mtk_dp->rx_cap)) 1597 + mtk_dp->train_info.channel_eq_pattern = DP_TRAINING_PATTERN_3; 1598 + else 1599 + mtk_dp->train_info.channel_eq_pattern = DP_TRAINING_PATTERN_2; 1600 + 1601 + mtk_dp->train_info.sink_ssc = drm_dp_max_downspread(mtk_dp->rx_cap); 1602 + 1603 + ret = drm_dp_dpcd_readb(&mtk_dp->aux, DP_MSTM_CAP, &val); 1604 + if (ret < 1) { 1605 + drm_err(mtk_dp->drm_dev, "Read mstm cap failed\n"); 1606 + return ret == 0 ? -EIO : ret; 1607 + } 1608 + 1609 + if (val & DP_MST_CAP) { 1610 + /* Clear DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0 */ 1611 + ret = drm_dp_dpcd_readb(&mtk_dp->aux, 1612 + DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0, 1613 + &val); 1614 + if (ret < 1) { 1615 + drm_err(mtk_dp->drm_dev, "Read irq vector failed\n"); 1616 + return ret == 0 ? -EIO : ret; 1617 + } 1618 + 1619 + if (val) 1620 + drm_dp_dpcd_writeb(&mtk_dp->aux, 1621 + DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0, 1622 + val); 1623 + } 1624 + 1625 + return 0; 1626 + } 1627 + 1628 + static bool mtk_dp_edid_parse_audio_capabilities(struct mtk_dp *mtk_dp, 1629 + struct mtk_dp_audio_cfg *cfg) 1630 + { 1631 + if (!mtk_dp->data->audio_supported) 1632 + return false; 1633 + 1634 + if (mtk_dp->info.audio_cur_cfg.sad_count <= 0) { 1635 + drm_info(mtk_dp->drm_dev, "The SADs is NULL\n"); 1636 + return false; 1637 + } 1638 + 1639 + return true; 1640 + } 1641 + 1642 + static void mtk_dp_train_change_mode(struct mtk_dp *mtk_dp) 1643 + { 1644 + phy_reset(mtk_dp->phy); 1645 + mtk_dp_reset_swing_pre_emphasis(mtk_dp); 1646 + } 1647 + 1648 + static int mtk_dp_training(struct mtk_dp *mtk_dp) 1649 + { 1650 + int ret; 1651 + u8 lane_count, link_rate, train_limit, max_link_rate; 1652 + 1653 + link_rate = min_t(u8, mtk_dp->max_linkrate, 1654 + mtk_dp->rx_cap[DP_MAX_LINK_RATE]); 1655 + max_link_rate = link_rate; 1656 + lane_count = min_t(u8, mtk_dp->max_lanes, 1657 + drm_dp_max_lane_count(mtk_dp->rx_cap)); 1658 + 1659 + /* 1660 + * TPS are generated by the hardware pattern generator. From the 1661 + * hardware setting we need to disable this scramble setting before 1662 + * use the TPS pattern generator. 
1663 + */ 1664 + mtk_dp_training_set_scramble(mtk_dp, false); 1665 + 1666 + for (train_limit = 6; train_limit > 0; train_limit--) { 1667 + mtk_dp_train_change_mode(mtk_dp); 1668 + 1669 + ret = mtk_dp_train_setting(mtk_dp, link_rate, lane_count); 1670 + if (ret) 1671 + return ret; 1672 + 1673 + ret = mtk_dp_train_cr(mtk_dp, lane_count); 1674 + if (ret == -ENODEV) { 1675 + return ret; 1676 + } else if (ret) { 1677 + /* reduce link rate */ 1678 + switch (link_rate) { 1679 + case DP_LINK_BW_1_62: 1680 + lane_count = lane_count / 2; 1681 + link_rate = max_link_rate; 1682 + if (lane_count == 0) 1683 + return -EIO; 1684 + break; 1685 + case DP_LINK_BW_2_7: 1686 + link_rate = DP_LINK_BW_1_62; 1687 + break; 1688 + case DP_LINK_BW_5_4: 1689 + link_rate = DP_LINK_BW_2_7; 1690 + break; 1691 + case DP_LINK_BW_8_1: 1692 + link_rate = DP_LINK_BW_5_4; 1693 + break; 1694 + default: 1695 + return -EINVAL; 1696 + }; 1697 + continue; 1698 + } 1699 + 1700 + ret = mtk_dp_train_eq(mtk_dp, lane_count); 1701 + if (ret == -ENODEV) { 1702 + return ret; 1703 + } else if (ret) { 1704 + /* reduce lane count */ 1705 + if (lane_count == 0) 1706 + return -EIO; 1707 + lane_count /= 2; 1708 + continue; 1709 + } 1710 + 1711 + /* if we can run to this, training is done. */ 1712 + break; 1713 + } 1714 + 1715 + if (train_limit == 0) 1716 + return -ETIMEDOUT; 1717 + 1718 + mtk_dp->train_info.link_rate = link_rate; 1719 + mtk_dp->train_info.lane_count = lane_count; 1720 + 1721 + /* 1722 + * After training done, we need to output normal stream instead of TPS, 1723 + * so we need to enable scramble. 1724 + */ 1725 + mtk_dp_training_set_scramble(mtk_dp, true); 1726 + mtk_dp_set_enhanced_frame_mode(mtk_dp); 1727 + 1728 + return 0; 1729 + } 1730 + 1731 + static void mtk_dp_video_enable(struct mtk_dp *mtk_dp, bool enable) 1732 + { 1733 + /* the mute sequence is different between enable and disable */ 1734 + if (enable) { 1735 + mtk_dp_msa_bypass_enable(mtk_dp, false); 1736 + mtk_dp_pg_enable(mtk_dp, false); 1737 + mtk_dp_set_tx_out(mtk_dp); 1738 + mtk_dp_video_mute(mtk_dp, false); 1739 + } else { 1740 + mtk_dp_video_mute(mtk_dp, true); 1741 + mtk_dp_pg_enable(mtk_dp, true); 1742 + mtk_dp_msa_bypass_enable(mtk_dp, true); 1743 + } 1744 + } 1745 + 1746 + static void mtk_dp_audio_sdp_setup(struct mtk_dp *mtk_dp, 1747 + struct mtk_dp_audio_cfg *cfg) 1748 + { 1749 + struct dp_sdp sdp; 1750 + struct hdmi_audio_infoframe frame; 1751 + 1752 + hdmi_audio_infoframe_init(&frame); 1753 + frame.coding_type = HDMI_AUDIO_CODING_TYPE_PCM; 1754 + frame.channels = cfg->channels; 1755 + frame.sample_frequency = cfg->sample_rate; 1756 + 1757 + switch (cfg->word_length_bits) { 1758 + case 16: 1759 + frame.sample_size = HDMI_AUDIO_SAMPLE_SIZE_16; 1760 + break; 1761 + case 20: 1762 + frame.sample_size = HDMI_AUDIO_SAMPLE_SIZE_20; 1763 + break; 1764 + case 24: 1765 + default: 1766 + frame.sample_size = HDMI_AUDIO_SAMPLE_SIZE_24; 1767 + break; 1768 + } 1769 + 1770 + hdmi_audio_infoframe_pack_for_dp(&frame, &sdp, MTK_DP_VERSION); 1771 + 1772 + mtk_dp_audio_sdp_asp_set_channels(mtk_dp, cfg->channels); 1773 + mtk_dp_setup_sdp_aui(mtk_dp, &sdp); 1774 + } 1775 + 1776 + static void mtk_dp_audio_setup(struct mtk_dp *mtk_dp, 1777 + struct mtk_dp_audio_cfg *cfg) 1778 + { 1779 + mtk_dp_audio_sdp_setup(mtk_dp, cfg); 1780 + mtk_dp_audio_channel_status_set(mtk_dp, cfg); 1781 + 1782 + mtk_dp_audio_setup_channels(mtk_dp, cfg); 1783 + mtk_dp_audio_set_divider(mtk_dp); 1784 + } 1785 + 1786 + static int mtk_dp_video_config(struct mtk_dp *mtk_dp) 1787 + { 1788 + 
mtk_dp_config_mn_mode(mtk_dp); 1789 + mtk_dp_set_msa(mtk_dp); 1790 + mtk_dp_set_color_depth(mtk_dp); 1791 + return mtk_dp_set_color_format(mtk_dp, mtk_dp->info.format); 1792 + } 1793 + 1794 + static void mtk_dp_init_port(struct mtk_dp *mtk_dp) 1795 + { 1796 + mtk_dp_set_idle_pattern(mtk_dp, true); 1797 + mtk_dp_initialize_priv_data(mtk_dp); 1798 + 1799 + mtk_dp_initialize_settings(mtk_dp); 1800 + mtk_dp_initialize_aux_settings(mtk_dp); 1801 + mtk_dp_initialize_digital_settings(mtk_dp); 1802 + 1803 + mtk_dp_update_bits(mtk_dp, MTK_DP_AUX_P0_3690, 1804 + RX_REPLY_COMPLETE_MODE_AUX_TX_P0, 1805 + RX_REPLY_COMPLETE_MODE_AUX_TX_P0); 1806 + mtk_dp_initialize_hpd_detect_settings(mtk_dp); 1807 + 1808 + mtk_dp_digital_sw_reset(mtk_dp); 1809 + } 1810 + 1811 + static irqreturn_t mtk_dp_hpd_event_thread(int hpd, void *dev) 1812 + { 1813 + struct mtk_dp *mtk_dp = dev; 1814 + unsigned long flags; 1815 + u32 status; 1816 + 1817 + if (mtk_dp->need_debounce && mtk_dp->train_info.cable_plugged_in) 1818 + msleep(100); 1819 + 1820 + spin_lock_irqsave(&mtk_dp->irq_thread_lock, flags); 1821 + status = mtk_dp->irq_thread_handle; 1822 + mtk_dp->irq_thread_handle = 0; 1823 + spin_unlock_irqrestore(&mtk_dp->irq_thread_lock, flags); 1824 + 1825 + if (status & MTK_DP_THREAD_CABLE_STATE_CHG) { 1826 + drm_helper_hpd_irq_event(mtk_dp->bridge.dev); 1827 + 1828 + if (!mtk_dp->train_info.cable_plugged_in) { 1829 + mtk_dp_disable_sdp_aui(mtk_dp); 1830 + memset(&mtk_dp->info.audio_cur_cfg, 0, 1831 + sizeof(mtk_dp->info.audio_cur_cfg)); 1832 + 1833 + mtk_dp->need_debounce = false; 1834 + mod_timer(&mtk_dp->debounce_timer, 1835 + jiffies + msecs_to_jiffies(100) - 1); 1836 + } 1837 + } 1838 + 1839 + if (status & MTK_DP_THREAD_HPD_EVENT) 1840 + dev_dbg(mtk_dp->dev, "Receive IRQ from sink devices\n"); 1841 + 1842 + return IRQ_HANDLED; 1843 + } 1844 + 1845 + static irqreturn_t mtk_dp_hpd_event(int hpd, void *dev) 1846 + { 1847 + struct mtk_dp *mtk_dp = dev; 1848 + bool cable_sta_chg = false; 1849 + unsigned long flags; 1850 + u32 irq_status = mtk_dp_swirq_get_clear(mtk_dp) | 1851 + mtk_dp_hwirq_get_clear(mtk_dp); 1852 + 1853 + if (!irq_status) 1854 + return IRQ_HANDLED; 1855 + 1856 + spin_lock_irqsave(&mtk_dp->irq_thread_lock, flags); 1857 + 1858 + if (irq_status & MTK_DP_HPD_INTERRUPT) 1859 + mtk_dp->irq_thread_handle |= MTK_DP_THREAD_HPD_EVENT; 1860 + 1861 + /* Cable state is changed. 
*/ 1862 + if (irq_status != MTK_DP_HPD_INTERRUPT) { 1863 + mtk_dp->irq_thread_handle |= MTK_DP_THREAD_CABLE_STATE_CHG; 1864 + cable_sta_chg = true; 1865 + } 1866 + 1867 + spin_unlock_irqrestore(&mtk_dp->irq_thread_lock, flags); 1868 + 1869 + if (cable_sta_chg) { 1870 + if (!!(mtk_dp_read(mtk_dp, MTK_DP_TRANS_P0_3414) & 1871 + HPD_DB_DP_TRANS_P0_MASK)) 1872 + mtk_dp->train_info.cable_plugged_in = true; 1873 + else 1874 + mtk_dp->train_info.cable_plugged_in = false; 1875 + } 1876 + 1877 + return IRQ_WAKE_THREAD; 1878 + } 1879 + 1880 + static int mtk_dp_dt_parse(struct mtk_dp *mtk_dp, 1881 + struct platform_device *pdev) 1882 + { 1883 + struct device_node *endpoint; 1884 + struct device *dev = &pdev->dev; 1885 + int ret; 1886 + void __iomem *base; 1887 + u32 linkrate; 1888 + int len; 1889 + 1890 + base = devm_platform_ioremap_resource(pdev, 0); 1891 + if (IS_ERR(base)) 1892 + return PTR_ERR(base); 1893 + 1894 + mtk_dp->regs = devm_regmap_init_mmio(dev, base, &mtk_dp_regmap_config); 1895 + if (IS_ERR(mtk_dp->regs)) 1896 + return PTR_ERR(mtk_dp->regs); 1897 + 1898 + endpoint = of_graph_get_endpoint_by_regs(pdev->dev.of_node, 1, -1); 1899 + len = of_property_count_elems_of_size(endpoint, 1900 + "data-lanes", sizeof(u32)); 1901 + if (len < 0 || len > 4 || len == 3) { 1902 + dev_err(dev, "invalid data lane size: %d\n", len); 1903 + return -EINVAL; 1904 + } 1905 + 1906 + mtk_dp->max_lanes = len; 1907 + 1908 + ret = device_property_read_u32(dev, "max-linkrate-mhz", &linkrate); 1909 + if (ret) { 1910 + dev_err(dev, "failed to read max linkrate: %d\n", ret); 1911 + return ret; 1912 + } 1913 + 1914 + mtk_dp->max_linkrate = drm_dp_link_rate_to_bw_code(linkrate * 100); 1915 + 1916 + return 0; 1917 + } 1918 + 1919 + static void mtk_dp_update_plugged_status(struct mtk_dp *mtk_dp) 1920 + { 1921 + mutex_lock(&mtk_dp->update_plugged_status_lock); 1922 + if (mtk_dp->plugged_cb && mtk_dp->codec_dev) 1923 + mtk_dp->plugged_cb(mtk_dp->codec_dev, 1924 + mtk_dp->enabled & 1925 + mtk_dp->info.audio_cur_cfg.detect_monitor); 1926 + mutex_unlock(&mtk_dp->update_plugged_status_lock); 1927 + } 1928 + 1929 + static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge) 1930 + { 1931 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 1932 + enum drm_connector_status ret = connector_status_disconnected; 1933 + bool enabled = mtk_dp->enabled; 1934 + u8 sink_count = 0; 1935 + 1936 + if (mtk_dp->train_info.cable_plugged_in) { 1937 + if (!enabled) { 1938 + /* power on aux */ 1939 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 1940 + DP_PWR_STATE_BANDGAP_TPLL_LANE, 1941 + DP_PWR_STATE_MASK); 1942 + 1943 + /* power on panel */ 1944 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D0); 1945 + usleep_range(2000, 5000); 1946 + } 1947 + /* 1948 + * Some dongles still source HPD when they do not connect to any 1949 + * sink device. To avoid this, we need to read the sink count 1950 + * to make sure we do connect to sink devices. After this detect 1951 + * function, we just need to check the HPD connection to check 1952 + * whether we connect to a sink device. 
1953 + */ 1954 + drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count); 1955 + if (DP_GET_SINK_COUNT(sink_count)) 1956 + ret = connector_status_connected; 1957 + 1958 + if (!enabled) { 1959 + /* power off panel */ 1960 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D3); 1961 + usleep_range(2000, 3000); 1962 + 1963 + /* power off aux */ 1964 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 1965 + DP_PWR_STATE_BANDGAP_TPLL, 1966 + DP_PWR_STATE_MASK); 1967 + } 1968 + } 1969 + return ret; 1970 + } 1971 + 1972 + static struct edid *mtk_dp_get_edid(struct drm_bridge *bridge, 1973 + struct drm_connector *connector) 1974 + { 1975 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 1976 + bool enabled = mtk_dp->enabled; 1977 + struct edid *new_edid = NULL; 1978 + struct mtk_dp_audio_cfg *audio_caps = &mtk_dp->info.audio_cur_cfg; 1979 + struct cea_sad *sads; 1980 + 1981 + if (!enabled) { 1982 + drm_bridge_chain_pre_enable(bridge); 1983 + 1984 + /* power on aux */ 1985 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 1986 + DP_PWR_STATE_BANDGAP_TPLL_LANE, 1987 + DP_PWR_STATE_MASK); 1988 + 1989 + /* power on panel */ 1990 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D0); 1991 + usleep_range(2000, 5000); 1992 + } 1993 + 1994 + new_edid = drm_get_edid(connector, &mtk_dp->aux.ddc); 1995 + 1996 + /* 1997 + * Parse capability here to let atomic_get_input_bus_fmts and 1998 + * mode_valid use the capability to calculate sink bitrates. 1999 + */ 2000 + if (mtk_dp_parse_capabilities(mtk_dp)) { 2001 + drm_err(mtk_dp->drm_dev, "Can't parse capabilities\n"); 2002 + new_edid = NULL; 2003 + } 2004 + 2005 + if (new_edid) { 2006 + audio_caps->sad_count = drm_edid_to_sad(new_edid, &sads); 2007 + audio_caps->detect_monitor = drm_detect_monitor_audio(new_edid); 2008 + } 2009 + 2010 + if (!enabled) { 2011 + /* power off panel */ 2012 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D3); 2013 + usleep_range(2000, 3000); 2014 + 2015 + /* power off aux */ 2016 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 2017 + DP_PWR_STATE_BANDGAP_TPLL, 2018 + DP_PWR_STATE_MASK); 2019 + 2020 + drm_bridge_chain_post_disable(bridge); 2021 + } 2022 + 2023 + return new_edid; 2024 + } 2025 + 2026 + static ssize_t mtk_dp_aux_transfer(struct drm_dp_aux *mtk_aux, 2027 + struct drm_dp_aux_msg *msg) 2028 + { 2029 + struct mtk_dp *mtk_dp; 2030 + bool is_read; 2031 + u8 request; 2032 + size_t accessed_bytes = 0; 2033 + int ret; 2034 + 2035 + mtk_dp = container_of(mtk_aux, struct mtk_dp, aux); 2036 + 2037 + if (!mtk_dp->train_info.cable_plugged_in) { 2038 + ret = -EAGAIN; 2039 + goto err; 2040 + } 2041 + 2042 + switch (msg->request) { 2043 + case DP_AUX_I2C_MOT: 2044 + case DP_AUX_I2C_WRITE: 2045 + case DP_AUX_NATIVE_WRITE: 2046 + case DP_AUX_I2C_WRITE_STATUS_UPDATE: 2047 + case DP_AUX_I2C_WRITE_STATUS_UPDATE | DP_AUX_I2C_MOT: 2048 + request = msg->request & ~DP_AUX_I2C_WRITE_STATUS_UPDATE; 2049 + is_read = false; 2050 + break; 2051 + case DP_AUX_I2C_READ: 2052 + case DP_AUX_NATIVE_READ: 2053 + case DP_AUX_I2C_READ | DP_AUX_I2C_MOT: 2054 + request = msg->request; 2055 + is_read = true; 2056 + break; 2057 + default: 2058 + drm_err(mtk_aux->drm_dev, "invalid aux cmd = %d\n", 2059 + msg->request); 2060 + ret = -EINVAL; 2061 + goto err; 2062 + } 2063 + 2064 + do { 2065 + size_t to_access = min_t(size_t, DP_AUX_MAX_PAYLOAD_BYTES, 2066 + msg->size - accessed_bytes); 2067 + 2068 + ret = mtk_dp_aux_do_transfer(mtk_dp, is_read, request, 2069 + msg->address + accessed_bytes, 2070 + msg->buffer + 
accessed_bytes, 2071 + to_access); 2072 + 2073 + if (ret) { 2074 + drm_info(mtk_dp->drm_dev, 2075 + "Failed to do AUX transfer: %d\n", ret); 2076 + goto err; 2077 + } 2078 + accessed_bytes += to_access; 2079 + } while (accessed_bytes < msg->size); 2080 + 2081 + msg->reply = DP_AUX_NATIVE_REPLY_ACK | DP_AUX_I2C_REPLY_ACK; 2082 + return msg->size; 2083 + err: 2084 + msg->reply = DP_AUX_NATIVE_REPLY_NACK | DP_AUX_I2C_REPLY_NACK; 2085 + return ret; 2086 + } 2087 + 2088 + static int mtk_dp_poweron(struct mtk_dp *mtk_dp) 2089 + { 2090 + int ret; 2091 + 2092 + ret = phy_init(mtk_dp->phy); 2093 + if (ret) { 2094 + dev_err(mtk_dp->dev, "Failed to initialize phy: %d\n", ret); 2095 + return ret; 2096 + } 2097 + 2098 + mtk_dp_init_port(mtk_dp); 2099 + mtk_dp_power_enable(mtk_dp); 2100 + 2101 + return 0; 2102 + } 2103 + 2104 + static void mtk_dp_poweroff(struct mtk_dp *mtk_dp) 2105 + { 2106 + mtk_dp_power_disable(mtk_dp); 2107 + phy_exit(mtk_dp->phy); 2108 + } 2109 + 2110 + static int mtk_dp_bridge_attach(struct drm_bridge *bridge, 2111 + enum drm_bridge_attach_flags flags) 2112 + { 2113 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2114 + int ret; 2115 + 2116 + if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) { 2117 + dev_err(mtk_dp->dev, "Driver does not provide a connector!"); 2118 + return -EINVAL; 2119 + } 2120 + 2121 + mtk_dp->aux.drm_dev = bridge->dev; 2122 + ret = drm_dp_aux_register(&mtk_dp->aux); 2123 + if (ret) { 2124 + dev_err(mtk_dp->dev, 2125 + "failed to register DP AUX channel: %d\n", ret); 2126 + return ret; 2127 + } 2128 + 2129 + ret = mtk_dp_poweron(mtk_dp); 2130 + if (ret) 2131 + goto err_aux_register; 2132 + 2133 + if (mtk_dp->next_bridge) { 2134 + ret = drm_bridge_attach(bridge->encoder, mtk_dp->next_bridge, 2135 + &mtk_dp->bridge, flags); 2136 + if (ret) { 2137 + drm_warn(mtk_dp->drm_dev, 2138 + "Failed to attach external bridge: %d\n", ret); 2139 + goto err_bridge_attach; 2140 + } 2141 + } 2142 + 2143 + mtk_dp->drm_dev = bridge->dev; 2144 + 2145 + mtk_dp_hwirq_enable(mtk_dp, true); 2146 + 2147 + return 0; 2148 + 2149 + err_bridge_attach: 2150 + mtk_dp_poweroff(mtk_dp); 2151 + err_aux_register: 2152 + drm_dp_aux_unregister(&mtk_dp->aux); 2153 + return ret; 2154 + } 2155 + 2156 + static void mtk_dp_bridge_detach(struct drm_bridge *bridge) 2157 + { 2158 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2159 + 2160 + mtk_dp_hwirq_enable(mtk_dp, false); 2161 + mtk_dp->drm_dev = NULL; 2162 + mtk_dp_poweroff(mtk_dp); 2163 + drm_dp_aux_unregister(&mtk_dp->aux); 2164 + } 2165 + 2166 + static void mtk_dp_bridge_atomic_enable(struct drm_bridge *bridge, 2167 + struct drm_bridge_state *old_state) 2168 + { 2169 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2170 + int ret; 2171 + 2172 + mtk_dp->conn = drm_atomic_get_new_connector_for_encoder(old_state->base.state, 2173 + bridge->encoder); 2174 + if (!mtk_dp->conn) { 2175 + drm_err(mtk_dp->drm_dev, 2176 + "Can't enable bridge as connector is missing\n"); 2177 + return; 2178 + } 2179 + 2180 + /* power on aux */ 2181 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 2182 + DP_PWR_STATE_BANDGAP_TPLL_LANE, 2183 + DP_PWR_STATE_MASK); 2184 + 2185 + if (mtk_dp->train_info.cable_plugged_in) { 2186 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D0); 2187 + usleep_range(2000, 5000); 2188 + } 2189 + 2190 + /* Training */ 2191 + ret = mtk_dp_training(mtk_dp); 2192 + if (ret) { 2193 + drm_err(mtk_dp->drm_dev, "Training failed, %d\n", ret); 2194 + goto power_off_aux; 2195 + } 2196 + 2197 + ret = mtk_dp_video_config(mtk_dp); 
2198 + if (ret) 2199 + goto power_off_aux; 2200 + 2201 + mtk_dp_video_enable(mtk_dp, true); 2202 + 2203 + mtk_dp->audio_enable = 2204 + mtk_dp_edid_parse_audio_capabilities(mtk_dp, 2205 + &mtk_dp->info.audio_cur_cfg); 2206 + if (mtk_dp->audio_enable) { 2207 + mtk_dp_audio_setup(mtk_dp, &mtk_dp->info.audio_cur_cfg); 2208 + mtk_dp_audio_mute(mtk_dp, false); 2209 + } else { 2210 + memset(&mtk_dp->info.audio_cur_cfg, 0, 2211 + sizeof(mtk_dp->info.audio_cur_cfg)); 2212 + } 2213 + 2214 + mtk_dp->enabled = true; 2215 + mtk_dp_update_plugged_status(mtk_dp); 2216 + 2217 + return; 2218 + power_off_aux: 2219 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 2220 + DP_PWR_STATE_BANDGAP_TPLL, 2221 + DP_PWR_STATE_MASK); 2222 + } 2223 + 2224 + static void mtk_dp_bridge_atomic_disable(struct drm_bridge *bridge, 2225 + struct drm_bridge_state *old_state) 2226 + { 2227 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2228 + 2229 + mtk_dp->enabled = false; 2230 + mtk_dp_update_plugged_status(mtk_dp); 2231 + mtk_dp_video_enable(mtk_dp, false); 2232 + mtk_dp_audio_mute(mtk_dp, true); 2233 + 2234 + if (mtk_dp->train_info.cable_plugged_in) { 2235 + drm_dp_dpcd_writeb(&mtk_dp->aux, DP_SET_POWER, DP_SET_POWER_D3); 2236 + usleep_range(2000, 3000); 2237 + } 2238 + 2239 + /* power off aux */ 2240 + mtk_dp_update_bits(mtk_dp, MTK_DP_TOP_PWR_STATE, 2241 + DP_PWR_STATE_BANDGAP_TPLL, 2242 + DP_PWR_STATE_MASK); 2243 + 2244 + /* Ensure the sink is muted */ 2245 + msleep(20); 2246 + } 2247 + 2248 + static enum drm_mode_status 2249 + mtk_dp_bridge_mode_valid(struct drm_bridge *bridge, 2250 + const struct drm_display_info *info, 2251 + const struct drm_display_mode *mode) 2252 + { 2253 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2254 + u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 
16 : 24; 2255 + u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2256 + drm_dp_max_lane_count(mtk_dp->rx_cap), 2257 + drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2258 + mtk_dp->max_lanes); 2259 + 2260 + if (rate < mode->clock * bpp / 8) 2261 + return MODE_CLOCK_HIGH; 2262 + 2263 + return MODE_OK; 2264 + } 2265 + 2266 + static u32 *mtk_dp_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge, 2267 + struct drm_bridge_state *bridge_state, 2268 + struct drm_crtc_state *crtc_state, 2269 + struct drm_connector_state *conn_state, 2270 + unsigned int *num_output_fmts) 2271 + { 2272 + u32 *output_fmts; 2273 + 2274 + *num_output_fmts = 0; 2275 + output_fmts = kmalloc(sizeof(*output_fmts), GFP_KERNEL); 2276 + if (!output_fmts) 2277 + return NULL; 2278 + *num_output_fmts = 1; 2279 + output_fmts[0] = MEDIA_BUS_FMT_FIXED; 2280 + return output_fmts; 2281 + } 2282 + 2283 + static const u32 mt8195_input_fmts[] = { 2284 + MEDIA_BUS_FMT_RGB888_1X24, 2285 + MEDIA_BUS_FMT_YUV8_1X24, 2286 + MEDIA_BUS_FMT_YUYV8_1X16, 2287 + }; 2288 + 2289 + static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge, 2290 + struct drm_bridge_state *bridge_state, 2291 + struct drm_crtc_state *crtc_state, 2292 + struct drm_connector_state *conn_state, 2293 + u32 output_fmt, 2294 + unsigned int *num_input_fmts) 2295 + { 2296 + u32 *input_fmts; 2297 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2298 + struct drm_display_mode *mode = &crtc_state->adjusted_mode; 2299 + struct drm_display_info *display_info = 2300 + &conn_state->connector->display_info; 2301 + u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2302 + drm_dp_max_lane_count(mtk_dp->rx_cap), 2303 + drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2304 + mtk_dp->max_lanes); 2305 + 2306 + *num_input_fmts = 0; 2307 + 2308 + /* 2309 + * If the linkrate is smaller than datarate of RGB888, larger than 2310 + * datarate of YUV422 and sink device supports YUV422, we output YUV422 2311 + * format. Use this condition, we can support more resolution. 
2312 + */ 2313 + if ((rate < (mode->clock * 24 / 8)) && 2314 + (rate > (mode->clock * 16 / 8)) && 2315 + (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) { 2316 + input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL); 2317 + if (!input_fmts) 2318 + return NULL; 2319 + *num_input_fmts = 1; 2320 + input_fmts[0] = MEDIA_BUS_FMT_YUYV8_1X16; 2321 + } else { 2322 + input_fmts = kcalloc(ARRAY_SIZE(mt8195_input_fmts), 2323 + sizeof(*input_fmts), 2324 + GFP_KERNEL); 2325 + if (!input_fmts) 2326 + return NULL; 2327 + 2328 + *num_input_fmts = ARRAY_SIZE(mt8195_input_fmts); 2329 + memcpy(input_fmts, mt8195_input_fmts, sizeof(mt8195_input_fmts)); 2330 + } 2331 + 2332 + return input_fmts; 2333 + } 2334 + 2335 + static int mtk_dp_bridge_atomic_check(struct drm_bridge *bridge, 2336 + struct drm_bridge_state *bridge_state, 2337 + struct drm_crtc_state *crtc_state, 2338 + struct drm_connector_state *conn_state) 2339 + { 2340 + struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2341 + struct drm_crtc *crtc = conn_state->crtc; 2342 + unsigned int input_bus_format; 2343 + 2344 + input_bus_format = bridge_state->input_bus_cfg.format; 2345 + 2346 + dev_dbg(mtk_dp->dev, "input format 0x%04x, output format 0x%04x\n", 2347 + bridge_state->input_bus_cfg.format, 2348 + bridge_state->output_bus_cfg.format); 2349 + 2350 + if (input_bus_format == MEDIA_BUS_FMT_YUYV8_1X16) 2351 + mtk_dp->info.format = DP_PIXELFORMAT_YUV422; 2352 + else 2353 + mtk_dp->info.format = DP_PIXELFORMAT_RGB; 2354 + 2355 + if (!crtc) { 2356 + drm_err(mtk_dp->drm_dev, 2357 + "Can't enable bridge as connector state doesn't have a crtc\n"); 2358 + return -EINVAL; 2359 + } 2360 + 2361 + drm_display_mode_to_videomode(&crtc_state->adjusted_mode, &mtk_dp->info.vm); 2362 + 2363 + return 0; 2364 + } 2365 + 2366 + static const struct drm_bridge_funcs mtk_dp_bridge_funcs = { 2367 + .atomic_check = mtk_dp_bridge_atomic_check, 2368 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 2369 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 2370 + .atomic_get_output_bus_fmts = mtk_dp_bridge_atomic_get_output_bus_fmts, 2371 + .atomic_get_input_bus_fmts = mtk_dp_bridge_atomic_get_input_bus_fmts, 2372 + .atomic_reset = drm_atomic_helper_bridge_reset, 2373 + .attach = mtk_dp_bridge_attach, 2374 + .detach = mtk_dp_bridge_detach, 2375 + .atomic_enable = mtk_dp_bridge_atomic_enable, 2376 + .atomic_disable = mtk_dp_bridge_atomic_disable, 2377 + .mode_valid = mtk_dp_bridge_mode_valid, 2378 + .get_edid = mtk_dp_get_edid, 2379 + .detect = mtk_dp_bdg_detect, 2380 + }; 2381 + 2382 + static void mtk_dp_debounce_timer(struct timer_list *t) 2383 + { 2384 + struct mtk_dp *mtk_dp = from_timer(mtk_dp, t, debounce_timer); 2385 + 2386 + mtk_dp->need_debounce = true; 2387 + } 2388 + 2389 + /* 2390 + * HDMI audio codec callbacks 2391 + */ 2392 + static int mtk_dp_audio_hw_params(struct device *dev, void *data, 2393 + struct hdmi_codec_daifmt *daifmt, 2394 + struct hdmi_codec_params *params) 2395 + { 2396 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2397 + 2398 + if (!mtk_dp->enabled) { 2399 + dev_err(mtk_dp->dev, "%s, DP is not ready!\n", __func__); 2400 + return -ENODEV; 2401 + } 2402 + 2403 + mtk_dp->info.audio_cur_cfg.channels = params->cea.channels; 2404 + mtk_dp->info.audio_cur_cfg.sample_rate = params->sample_rate; 2405 + 2406 + mtk_dp_audio_setup(mtk_dp, &mtk_dp->info.audio_cur_cfg); 2407 + 2408 + return 0; 2409 + } 2410 + 2411 + static int mtk_dp_audio_startup(struct device *dev, void *data) 2412 + { 2413 + struct mtk_dp 
*mtk_dp = dev_get_drvdata(dev); 2414 + 2415 + mtk_dp_audio_mute(mtk_dp, false); 2416 + 2417 + return 0; 2418 + } 2419 + 2420 + static void mtk_dp_audio_shutdown(struct device *dev, void *data) 2421 + { 2422 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2423 + 2424 + mtk_dp_audio_mute(mtk_dp, true); 2425 + } 2426 + 2427 + static int mtk_dp_audio_get_eld(struct device *dev, void *data, uint8_t *buf, 2428 + size_t len) 2429 + { 2430 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2431 + 2432 + if (mtk_dp->enabled) 2433 + memcpy(buf, mtk_dp->conn->eld, len); 2434 + else 2435 + memset(buf, 0, len); 2436 + 2437 + return 0; 2438 + } 2439 + 2440 + static int mtk_dp_audio_hook_plugged_cb(struct device *dev, void *data, 2441 + hdmi_codec_plugged_cb fn, 2442 + struct device *codec_dev) 2443 + { 2444 + struct mtk_dp *mtk_dp = data; 2445 + 2446 + mutex_lock(&mtk_dp->update_plugged_status_lock); 2447 + mtk_dp->plugged_cb = fn; 2448 + mtk_dp->codec_dev = codec_dev; 2449 + mutex_unlock(&mtk_dp->update_plugged_status_lock); 2450 + 2451 + mtk_dp_update_plugged_status(mtk_dp); 2452 + 2453 + return 0; 2454 + } 2455 + 2456 + static const struct hdmi_codec_ops mtk_dp_audio_codec_ops = { 2457 + .hw_params = mtk_dp_audio_hw_params, 2458 + .audio_startup = mtk_dp_audio_startup, 2459 + .audio_shutdown = mtk_dp_audio_shutdown, 2460 + .get_eld = mtk_dp_audio_get_eld, 2461 + .hook_plugged_cb = mtk_dp_audio_hook_plugged_cb, 2462 + .no_capture_mute = 1, 2463 + }; 2464 + 2465 + static int mtk_dp_register_audio_driver(struct device *dev) 2466 + { 2467 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2468 + struct hdmi_codec_pdata codec_data = { 2469 + .ops = &mtk_dp_audio_codec_ops, 2470 + .max_i2s_channels = 8, 2471 + .i2s = 1, 2472 + .data = mtk_dp, 2473 + }; 2474 + 2475 + mtk_dp->audio_pdev = platform_device_register_data(dev, 2476 + HDMI_CODEC_DRV_NAME, 2477 + PLATFORM_DEVID_AUTO, 2478 + &codec_data, 2479 + sizeof(codec_data)); 2480 + return PTR_ERR_OR_ZERO(mtk_dp->audio_pdev); 2481 + } 2482 + 2483 + static int mtk_dp_probe(struct platform_device *pdev) 2484 + { 2485 + struct mtk_dp *mtk_dp; 2486 + struct device *dev = &pdev->dev; 2487 + int ret, irq_num; 2488 + 2489 + mtk_dp = devm_kzalloc(dev, sizeof(*mtk_dp), GFP_KERNEL); 2490 + if (!mtk_dp) 2491 + return -ENOMEM; 2492 + 2493 + mtk_dp->dev = dev; 2494 + mtk_dp->data = (struct mtk_dp_data *)of_device_get_match_data(dev); 2495 + 2496 + irq_num = platform_get_irq(pdev, 0); 2497 + if (irq_num < 0) 2498 + return dev_err_probe(dev, irq_num, 2499 + "failed to request dp irq resource\n"); 2500 + 2501 + mtk_dp->next_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0); 2502 + if (IS_ERR(mtk_dp->next_bridge) && 2503 + PTR_ERR(mtk_dp->next_bridge) == -ENODEV) 2504 + mtk_dp->next_bridge = NULL; 2505 + else if (IS_ERR(mtk_dp->next_bridge)) 2506 + return dev_err_probe(dev, PTR_ERR(mtk_dp->next_bridge), 2507 + "Failed to get bridge\n"); 2508 + 2509 + ret = mtk_dp_dt_parse(mtk_dp, pdev); 2510 + if (ret) 2511 + return dev_err_probe(dev, ret, "Failed to parse dt\n"); 2512 + 2513 + drm_dp_aux_init(&mtk_dp->aux); 2514 + mtk_dp->aux.name = "aux_mtk_dp"; 2515 + mtk_dp->aux.transfer = mtk_dp_aux_transfer; 2516 + 2517 + spin_lock_init(&mtk_dp->irq_thread_lock); 2518 + 2519 + ret = devm_request_threaded_irq(dev, irq_num, mtk_dp_hpd_event, 2520 + mtk_dp_hpd_event_thread, 2521 + IRQ_TYPE_LEVEL_HIGH, dev_name(dev), 2522 + mtk_dp); 2523 + if (ret) 2524 + return dev_err_probe(dev, ret, 2525 + "failed to request mediatek dptx irq\n"); 2526 + 2527 + 
mutex_init(&mtk_dp->update_plugged_status_lock); 2528 + 2529 + platform_set_drvdata(pdev, mtk_dp); 2530 + 2531 + if (mtk_dp->data->audio_supported) { 2532 + ret = mtk_dp_register_audio_driver(dev); 2533 + if (ret) { 2534 + dev_err(dev, "Failed to register audio driver: %d\n", 2535 + ret); 2536 + return ret; 2537 + } 2538 + } 2539 + 2540 + mtk_dp->phy_dev = platform_device_register_data(dev, "mediatek-dp-phy", 2541 + PLATFORM_DEVID_AUTO, 2542 + &mtk_dp->regs, 2543 + sizeof(struct regmap *)); 2544 + if (IS_ERR(mtk_dp->phy_dev)) 2545 + return dev_err_probe(dev, PTR_ERR(mtk_dp->phy_dev), 2546 + "Failed to create device mediatek-dp-phy\n"); 2547 + 2548 + mtk_dp_get_calibration_data(mtk_dp); 2549 + 2550 + mtk_dp->phy = devm_phy_get(&mtk_dp->phy_dev->dev, "dp"); 2551 + 2552 + if (IS_ERR(mtk_dp->phy)) { 2553 + platform_device_unregister(mtk_dp->phy_dev); 2554 + return dev_err_probe(dev, PTR_ERR(mtk_dp->phy), 2555 + "Failed to get phy\n"); 2556 + } 2557 + 2558 + mtk_dp->bridge.funcs = &mtk_dp_bridge_funcs; 2559 + mtk_dp->bridge.of_node = dev->of_node; 2560 + 2561 + mtk_dp->bridge.ops = 2562 + DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_HPD; 2563 + mtk_dp->bridge.type = mtk_dp->data->bridge_type; 2564 + 2565 + drm_bridge_add(&mtk_dp->bridge); 2566 + 2567 + mtk_dp->need_debounce = true; 2568 + timer_setup(&mtk_dp->debounce_timer, mtk_dp_debounce_timer, 0); 2569 + 2570 + pm_runtime_enable(dev); 2571 + pm_runtime_get_sync(dev); 2572 + 2573 + return 0; 2574 + } 2575 + 2576 + static int mtk_dp_remove(struct platform_device *pdev) 2577 + { 2578 + struct mtk_dp *mtk_dp = platform_get_drvdata(pdev); 2579 + 2580 + pm_runtime_put(&pdev->dev); 2581 + pm_runtime_disable(&pdev->dev); 2582 + del_timer_sync(&mtk_dp->debounce_timer); 2583 + drm_bridge_remove(&mtk_dp->bridge); 2584 + platform_device_unregister(mtk_dp->phy_dev); 2585 + if (mtk_dp->audio_pdev) 2586 + platform_device_unregister(mtk_dp->audio_pdev); 2587 + 2588 + return 0; 2589 + } 2590 + 2591 + #ifdef CONFIG_PM_SLEEP 2592 + static int mtk_dp_suspend(struct device *dev) 2593 + { 2594 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2595 + 2596 + mtk_dp_power_disable(mtk_dp); 2597 + mtk_dp_hwirq_enable(mtk_dp, false); 2598 + pm_runtime_put_sync(dev); 2599 + 2600 + return 0; 2601 + } 2602 + 2603 + static int mtk_dp_resume(struct device *dev) 2604 + { 2605 + struct mtk_dp *mtk_dp = dev_get_drvdata(dev); 2606 + 2607 + pm_runtime_get_sync(dev); 2608 + mtk_dp_init_port(mtk_dp); 2609 + mtk_dp_hwirq_enable(mtk_dp, true); 2610 + mtk_dp_power_enable(mtk_dp); 2611 + 2612 + return 0; 2613 + } 2614 + #endif 2615 + 2616 + static SIMPLE_DEV_PM_OPS(mtk_dp_pm_ops, mtk_dp_suspend, mtk_dp_resume); 2617 + 2618 + static const struct mtk_dp_data mt8195_edp_data = { 2619 + .bridge_type = DRM_MODE_CONNECTOR_eDP, 2620 + .smc_cmd = MTK_DP_SIP_ATF_EDP_VIDEO_UNMUTE, 2621 + .efuse_fmt = mt8195_edp_efuse_fmt, 2622 + .audio_supported = false, 2623 + }; 2624 + 2625 + static const struct mtk_dp_data mt8195_dp_data = { 2626 + .bridge_type = DRM_MODE_CONNECTOR_DisplayPort, 2627 + .smc_cmd = MTK_DP_SIP_ATF_VIDEO_UNMUTE, 2628 + .efuse_fmt = mt8195_dp_efuse_fmt, 2629 + .audio_supported = true, 2630 + }; 2631 + 2632 + static const struct of_device_id mtk_dp_of_match[] = { 2633 + { 2634 + .compatible = "mediatek,mt8195-edp-tx", 2635 + .data = &mt8195_edp_data, 2636 + }, 2637 + { 2638 + .compatible = "mediatek,mt8195-dp-tx", 2639 + .data = &mt8195_dp_data, 2640 + }, 2641 + {}, 2642 + }; 2643 + MODULE_DEVICE_TABLE(of, mtk_dp_of_match); 2644 + 2645 + struct platform_driver 
mtk_dp_driver = { 2646 + .probe = mtk_dp_probe, 2647 + .remove = mtk_dp_remove, 2648 + .driver = { 2649 + .name = "mediatek-drm-dp", 2650 + .of_match_table = mtk_dp_of_match, 2651 + .pm = &mtk_dp_pm_ops, 2652 + }, 2653 + }; 2654 + 2655 + module_platform_driver(mtk_dp_driver); 2656 + 2657 + MODULE_AUTHOR("Jitao Shi <jitao.shi@mediatek.com>"); 2658 + MODULE_AUTHOR("Markus Schneider-Pargmann <msp@baylibre.com>"); 2659 + MODULE_AUTHOR("Bo-Chen Chen <rex-bc.chen@mediatek.com>"); 2660 + MODULE_DESCRIPTION("MediaTek DisplayPort Driver"); 2661 + MODULE_LICENSE("GPL");
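A few of the mechanisms in this new driver are worth spelling out. mtk_dp_training() falls back in a specific order after a clock-recovery failure: the link rate steps down 8.1 -> 5.4 -> 2.7 -> 1.62 Gbps first, and only once the lowest rate has also failed is the lane count halved and the rate reset to the maximum (a channel-equalization failure halves the lane count directly, and train_limit caps the whole process at six attempts). Below is a stand-alone sketch of just that fallback walk, under the assumption that modelling it as a pure function is faithful to the switch statement in the diff; the function and variable names are illustrative, only the DPCD bandwidth codes are the driver's.

        #include <stdio.h>

        /* DPCD link-bandwidth codes as used by the driver */
        #define DP_LINK_BW_1_62 0x06
        #define DP_LINK_BW_2_7  0x0a
        #define DP_LINK_BW_5_4  0x14
        #define DP_LINK_BW_8_1  0x1e

        /* One fallback step: drop the link rate first; only when the
         * lowest rate has failed halve the lane count and restart at
         * the maximum rate. Returns -1 when nothing is left to try. */
        static int fallback(unsigned int *rate, unsigned int *lanes,
                            unsigned int max_rate)
        {
                switch (*rate) {
                case DP_LINK_BW_8_1:
                        *rate = DP_LINK_BW_5_4;
                        return 0;
                case DP_LINK_BW_5_4:
                        *rate = DP_LINK_BW_2_7;
                        return 0;
                case DP_LINK_BW_2_7:
                        *rate = DP_LINK_BW_1_62;
                        return 0;
                case DP_LINK_BW_1_62:
                        *lanes /= 2;
                        *rate = max_rate;
                        return *lanes ? 0 : -1;
                default:
                        return -1;
                }
        }

        int main(void)
        {
                unsigned int rate = DP_LINK_BW_8_1, lanes = 4;

                /* walk every configuration the CR fallback could reach */
                do {
                        printf("try rate=0x%02x lanes=%u\n", rate, lanes);
                } while (!fallback(&rate, &lanes, DP_LINK_BW_8_1));

                return 0;
        }

Run as-is this prints the twelve rate/lane combinations in the order the driver would attempt them, ending at 1.62 Gbps on a single lane.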
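The HPD handling is split in the usual two-stage way: mtk_dp_hpd_event() runs in hard-IRQ context, reads and clears the hardware status, and latches work bits into irq_thread_handle under irq_thread_lock; mtk_dp_hpd_event_thread() then takes-and-clears that word and does the slow work (debounce msleep, drm_helper_hpd_irq_event) outside the critical path. A minimal user-space model of that latch-and-clear handoff follows; the driver uses a spinlock, and C11 atomics are a stand-in giving the same semantics here. The bit values match MTK_DP_HPD_INTERRUPT/MTK_DP_HPD_CONNECT from the header below; everything else is illustrative.

        #include <stdatomic.h>
        #include <stdio.h>

        #define MTK_DP_HPD_CONNECT      (1u << 2)
        #define MTK_DP_HPD_INTERRUPT    (1u << 3)

        #define THREAD_HPD_EVENT        (1u << 0)
        #define THREAD_CABLE_STATE_CHG  (1u << 1)

        /* Stands in for mtk_dp->irq_thread_handle, guarded by
         * irq_thread_lock in the driver. */
        static _Atomic unsigned int pending;

        /* Hard-IRQ half: classify the status and latch work bits. */
        static void hard_irq(unsigned int irq_status)
        {
                if (irq_status & MTK_DP_HPD_INTERRUPT)
                        atomic_fetch_or(&pending, THREAD_HPD_EVENT);
                /* anything beyond a bare HPD IRQ means a cable change */
                if (irq_status != MTK_DP_HPD_INTERRUPT)
                        atomic_fetch_or(&pending, THREAD_CABLE_STATE_CHG);
        }

        /* Threaded half: take-and-clear, then act at leisure. */
        static void thread_irq(void)
        {
                unsigned int status = atomic_exchange(&pending, 0);

                if (status & THREAD_CABLE_STATE_CHG)
                        printf("report HPD to DRM, handle (dis)connect\n");
                if (status & THREAD_HPD_EVENT)
                        printf("sink raised an IRQ over HPD\n");
        }

        int main(void)
        {
                hard_irq(MTK_DP_HPD_INTERRUPT | MTK_DP_HPD_CONNECT);
                thread_irq();
                return 0;
        }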
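mtk_dp_aux_transfer() caps each hardware transaction at DP_AUX_MAX_PAYLOAD_BYTES (16 bytes) and loops until the caller's request is fully covered, advancing the DPCD address and the buffer offset in step. A minimal stand-alone model of that loop is below; do_transfer() is a hypothetical stand-in for mtk_dp_aux_do_transfer(), which programs the real AUX FIFO, and the other names are illustrative.

        #include <stddef.h>
        #include <stdio.h>

        #define DP_AUX_MAX_PAYLOAD_BYTES 16

        /* Hypothetical stand-in for mtk_dp_aux_do_transfer(); it just
         * logs the sub-transaction it would perform. */
        static int do_transfer(unsigned int addr, unsigned char *buf,
                               size_t len)
        {
                printf("aux xfer: addr=0x%05x len=%zu\n", addr, len);
                return 0; /* 0 on success, negative errno in the driver */
        }

        /* Split one logical request into <= 16-byte AUX transactions,
         * as the do/while loop in mtk_dp_aux_transfer() does. */
        static int transfer_chunked(unsigned int addr, unsigned char *buf,
                                    size_t size)
        {
                size_t accessed_bytes = 0;

                do {
                        size_t to_access = size - accessed_bytes;

                        if (to_access > DP_AUX_MAX_PAYLOAD_BYTES)
                                to_access = DP_AUX_MAX_PAYLOAD_BYTES;

                        if (do_transfer(addr + accessed_bytes,
                                        buf + accessed_bytes, to_access))
                                return -1;

                        accessed_bytes += to_access;
                } while (accessed_bytes < size);

                return 0;
        }

        int main(void)
        {
                unsigned char buf[40] = { 0 };

                /* a 40-byte read becomes 16 + 16 + 8 byte transactions */
                return transfer_chunked(0x00050, buf, sizeof(buf));
        }

Note the do/while: a zero-length request still issues one (address-only) transaction, matching the driver.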
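The bandwidth test in mtk_dp_bridge_mode_valid() works because the units line up: drm_dp_max_link_rate()-style values such as 540000 count kilo-symbols per second per lane, each symbol carries one byte, and mode->clock is the pixel clock in kHz, so rate * lanes and clock * bpp / 8 are both effectively kilobytes per second. A worked check under that reading (the 594 MHz clock is just a convenient 4K60 example figure; the driver picks bpp = 16 when the sink supports YCbCr 4:2:2 and 24 otherwise, and like the driver this ignores FEC and other link overheads):

        #include <stdio.h>

        /* Mirror of the inequality in mtk_dp_bridge_mode_valid(). */
        static int mode_fits(unsigned int link_rate_khz, unsigned int lanes,
                             unsigned int pixel_clock_khz, unsigned int bpp)
        {
                unsigned long long rate =
                        (unsigned long long)link_rate_khz * lanes;
                unsigned long long need =
                        (unsigned long long)pixel_clock_khz * bpp / 8;

                return rate >= need;
        }

        int main(void)
        {
                /* 3840x2160@60, 594 MHz pixel clock, 24 bpp RGB */
                printf("HBR2 x4: %s\n",
                       mode_fits(540000, 4, 594000, 24) ? "ok" : "too fast");
                printf("HBR  x4: %s\n",
                       mode_fits(270000, 4, 594000, 24) ? "ok" : "too fast");
                return 0;
        }

Here HBR2 x4 offers 2160000 kB/s against a demand of 1782000 kB/s and passes, while HBR x4 offers only 1080000 kB/s and would return MODE_CLOCK_HIGH.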
+356
drivers/gpu/drm/mediatek/mtk_dp_reg.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2019-2022 MediaTek Inc. 4 + * Copyright (c) 2022 BayLibre 5 + */ 6 + #ifndef _MTK_DP_REG_H_ 7 + #define _MTK_DP_REG_H_ 8 + 9 + #define SEC_OFFSET 0x4000 10 + 11 + #define MTK_DP_HPD_DISCONNECT BIT(1) 12 + #define MTK_DP_HPD_CONNECT BIT(2) 13 + #define MTK_DP_HPD_INTERRUPT BIT(3) 14 + 15 + /* offset: 0x0 */ 16 + #define DP_PHY_GLB_BIAS_GEN_00 0x0 17 + #define RG_XTP_GLB_BIAS_INTR_CTRL GENMASK(20, 16) 18 + #define DP_PHY_GLB_DPAUX_TX 0x8 19 + #define RG_CKM_PT0_CKTX_IMPSEL GENMASK(23, 20) 20 + #define MTK_DP_0034 0x34 21 + #define DA_XTP_GLB_CKDET_EN_FORCE_VAL BIT(15) 22 + #define DA_XTP_GLB_CKDET_EN_FORCE_EN BIT(14) 23 + #define DA_CKM_INTCKTX_EN_FORCE_VAL BIT(13) 24 + #define DA_CKM_INTCKTX_EN_FORCE_EN BIT(12) 25 + #define DA_CKM_CKTX0_EN_FORCE_VAL BIT(11) 26 + #define DA_CKM_CKTX0_EN_FORCE_EN BIT(10) 27 + #define DA_CKM_XTAL_CK_FORCE_VAL BIT(9) 28 + #define DA_CKM_XTAL_CK_FORCE_EN BIT(8) 29 + #define DA_CKM_BIAS_LPF_EN_FORCE_VAL BIT(7) 30 + #define DA_CKM_BIAS_LPF_EN_FORCE_EN BIT(6) 31 + #define DA_CKM_BIAS_EN_FORCE_VAL BIT(5) 32 + #define DA_CKM_BIAS_EN_FORCE_EN BIT(4) 33 + #define DA_XTP_GLB_AVD10_ON_FORCE_VAL BIT(3) 34 + #define DA_XTP_GLB_AVD10_ON_FORCE BIT(2) 35 + #define DA_XTP_GLB_LDO_EN_FORCE_VAL BIT(1) 36 + #define DA_XTP_GLB_LDO_EN_FORCE_EN BIT(0) 37 + #define DP_PHY_LANE_TX_0 0x104 38 + #define RG_XTP_LN0_TX_IMPSEL_PMOS GENMASK(15, 12) 39 + #define RG_XTP_LN0_TX_IMPSEL_NMOS GENMASK(19, 16) 40 + #define DP_PHY_LANE_TX_1 0x204 41 + #define RG_XTP_LN1_TX_IMPSEL_PMOS GENMASK(15, 12) 42 + #define RG_XTP_LN1_TX_IMPSEL_NMOS GENMASK(19, 16) 43 + #define DP_PHY_LANE_TX_2 0x304 44 + #define RG_XTP_LN2_TX_IMPSEL_PMOS GENMASK(15, 12) 45 + #define RG_XTP_LN2_TX_IMPSEL_NMOS GENMASK(19, 16) 46 + #define DP_PHY_LANE_TX_3 0x404 47 + #define RG_XTP_LN3_TX_IMPSEL_PMOS GENMASK(15, 12) 48 + #define RG_XTP_LN3_TX_IMPSEL_NMOS GENMASK(19, 16) 49 + #define MTK_DP_1040 0x1040 50 + #define RG_DPAUX_RX_VALID_DEGLITCH_EN BIT(2) 51 + #define RG_XTP_GLB_CKDET_EN BIT(1) 52 + #define RG_DPAUX_RX_EN BIT(0) 53 + 54 + /* offset: TOP_OFFSET (0x2000) */ 55 + #define MTK_DP_TOP_PWR_STATE 0x2000 56 + #define DP_PWR_STATE_MASK GENMASK(1, 0) 57 + #define DP_PWR_STATE_BANDGAP BIT(0) 58 + #define DP_PWR_STATE_BANDGAP_TPLL BIT(1) 59 + #define DP_PWR_STATE_BANDGAP_TPLL_LANE GENMASK(1, 0) 60 + #define MTK_DP_TOP_SWING_EMP 0x2004 61 + #define DP_TX0_VOLT_SWING_MASK GENMASK(1, 0) 62 + #define DP_TX0_VOLT_SWING_SHIFT 0 63 + #define DP_TX0_PRE_EMPH_MASK GENMASK(3, 2) 64 + #define DP_TX0_PRE_EMPH_SHIFT 2 65 + #define DP_TX1_VOLT_SWING_MASK GENMASK(9, 8) 66 + #define DP_TX1_VOLT_SWING_SHIFT 8 67 + #define DP_TX1_PRE_EMPH_MASK GENMASK(11, 10) 68 + #define DP_TX2_VOLT_SWING_MASK GENMASK(17, 16) 69 + #define DP_TX2_PRE_EMPH_MASK GENMASK(19, 18) 70 + #define DP_TX3_VOLT_SWING_MASK GENMASK(25, 24) 71 + #define DP_TX3_PRE_EMPH_MASK GENMASK(27, 26) 72 + #define MTK_DP_TOP_RESET_AND_PROBE 0x2020 73 + #define SW_RST_B_PHYD BIT(4) 74 + #define MTK_DP_TOP_IRQ_MASK 0x202c 75 + #define IRQ_MASK_AUX_TOP_IRQ BIT(2) 76 + #define MTK_DP_TOP_MEM_PD 0x2038 77 + #define MEM_ISO_EN BIT(0) 78 + #define FUSE_SEL BIT(2) 79 + 80 + /* offset: ENC0_OFFSET (0x3000) */ 81 + #define MTK_DP_ENC0_P0_3000 0x3000 82 + #define LANE_NUM_DP_ENC0_P0_MASK GENMASK(1, 0) 83 + #define VIDEO_MUTE_SW_DP_ENC0_P0 BIT(2) 84 + #define VIDEO_MUTE_SEL_DP_ENC0_P0 BIT(3) 85 + #define ENHANCED_FRAME_EN_DP_ENC0_P0 BIT(4) 86 + #define MTK_DP_ENC0_P0_3004 0x3004 87 + #define VIDEO_M_CODE_SEL_DP_ENC0_P0_MASK 
BIT(8) 88 + #define DP_TX_ENCODER_4P_RESET_SW_DP_ENC0_P0 BIT(9) 89 + #define MTK_DP_ENC0_P0_3010 0x3010 90 + #define HTOTAL_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 91 + #define MTK_DP_ENC0_P0_3014 0x3014 92 + #define VTOTAL_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 93 + #define MTK_DP_ENC0_P0_3018 0x3018 94 + #define HSTART_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 95 + #define MTK_DP_ENC0_P0_301C 0x301c 96 + #define VSTART_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 97 + #define MTK_DP_ENC0_P0_3020 0x3020 98 + #define HWIDTH_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 99 + #define MTK_DP_ENC0_P0_3024 0x3024 100 + #define VHEIGHT_SW_DP_ENC0_P0_MASK GENMASK(15, 0) 101 + #define MTK_DP_ENC0_P0_3028 0x3028 102 + #define HSW_SW_DP_ENC0_P0_MASK GENMASK(14, 0) 103 + #define HSP_SW_DP_ENC0_P0_MASK BIT(15) 104 + #define MTK_DP_ENC0_P0_302C 0x302c 105 + #define VSW_SW_DP_ENC0_P0_MASK GENMASK(14, 0) 106 + #define VSP_SW_DP_ENC0_P0_MASK BIT(15) 107 + #define MTK_DP_ENC0_P0_3030 0x3030 108 + #define HTOTAL_SEL_DP_ENC0_P0 BIT(0) 109 + #define VTOTAL_SEL_DP_ENC0_P0 BIT(1) 110 + #define HSTART_SEL_DP_ENC0_P0 BIT(2) 111 + #define VSTART_SEL_DP_ENC0_P0 BIT(3) 112 + #define HWIDTH_SEL_DP_ENC0_P0 BIT(4) 113 + #define VHEIGHT_SEL_DP_ENC0_P0 BIT(5) 114 + #define HSP_SEL_DP_ENC0_P0 BIT(6) 115 + #define HSW_SEL_DP_ENC0_P0 BIT(7) 116 + #define VSP_SEL_DP_ENC0_P0 BIT(8) 117 + #define VSW_SEL_DP_ENC0_P0 BIT(9) 118 + #define VBID_AUDIO_MUTE_FLAG_SW_DP_ENC0_P0 BIT(11) 119 + #define VBID_AUDIO_MUTE_FLAG_SEL_DP_ENC0_P0 BIT(12) 120 + #define MTK_DP_ENC0_P0_3034 0x3034 121 + #define MTK_DP_ENC0_P0_3038 0x3038 122 + #define VIDEO_SOURCE_SEL_DP_ENC0_P0_MASK BIT(11) 123 + #define MTK_DP_ENC0_P0_303C 0x303c 124 + #define SRAM_START_READ_THRD_DP_ENC0_P0_MASK GENMASK(5, 0) 125 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_MASK GENMASK(10, 8) 126 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_16BIT (0 << 8) 127 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_12BIT (1 << 8) 128 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_10BIT (2 << 8) 129 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_8BIT (3 << 8) 130 + #define VIDEO_COLOR_DEPTH_DP_ENC0_P0_6BIT (4 << 8) 131 + #define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK GENMASK(14, 12) 132 + #define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB (0 << 12) 133 + #define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422 (1 << 12) 134 + #define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR420 (2 << 12) 135 + #define VIDEO_MN_GEN_EN_DP_ENC0_P0 BIT(15) 136 + #define MTK_DP_ENC0_P0_3040 0x3040 137 + #define SDP_DOWN_CNT_DP_ENC0_P0_VAL 0x20 138 + #define SDP_DOWN_CNT_INIT_DP_ENC0_P0_MASK GENMASK(11, 0) 139 + #define MTK_DP_ENC0_P0_304C 0x304c 140 + #define VBID_VIDEO_MUTE_DP_ENC0_P0_MASK BIT(2) 141 + #define SDP_VSYNC_RISING_MASK_DP_ENC0_P0_MASK BIT(8) 142 + #define MTK_DP_ENC0_P0_3064 0x3064 143 + #define HDE_NUM_LAST_DP_ENC0_P0_MASK GENMASK(15, 0) 144 + #define MTK_DP_ENC0_P0_3088 0x3088 145 + #define AU_EN_DP_ENC0_P0 BIT(6) 146 + #define AUDIO_8CH_EN_DP_ENC0_P0_MASK BIT(7) 147 + #define AUDIO_8CH_SEL_DP_ENC0_P0_MASK BIT(8) 148 + #define AUDIO_2CH_EN_DP_ENC0_P0_MASK BIT(14) 149 + #define AUDIO_2CH_SEL_DP_ENC0_P0_MASK BIT(15) 150 + #define MTK_DP_ENC0_P0_308C 0x308c 151 + #define CH_STATUS_0_DP_ENC0_P0_MASK GENMASK(15, 0) 152 + #define MTK_DP_ENC0_P0_3090 0x3090 153 + #define CH_STATUS_1_DP_ENC0_P0_MASK GENMASK(15, 0) 154 + #define MTK_DP_ENC0_P0_3094 0x3094 155 + #define CH_STATUS_2_DP_ENC0_P0_MASK GENMASK(7, 0) 156 + #define MTK_DP_ENC0_P0_30A0 0x30a0 157 + #define DP_ENC0_30A0_MASK (BIT(7) | BIT(8) | BIT(12)) 158 + #define MTK_DP_ENC0_P0_30A4 0x30a4 159 + #define AU_TS_CFG_DP_ENC0_P0_MASK GENMASK(7, 0) 
160 + #define MTK_DP_ENC0_P0_30A8 0x30a8 161 + #define MTK_DP_ENC0_P0_30BC 0x30bc 162 + #define ISRC_CONT_DP_ENC0_P0 BIT(0) 163 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MASK GENMASK(10, 8) 164 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_2 (1 << 8) 165 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_4 (2 << 8) 166 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_8 (3 << 8) 167 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_2 (5 << 8) 168 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_4 (6 << 8) 169 + #define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_8 (7 << 8) 170 + #define MTK_DP_ENC0_P0_30D8 0x30d8 171 + #define MTK_DP_ENC0_P0_312C 0x312c 172 + #define ASP_HB2_DP_ENC0_P0_MASK GENMASK(7, 0) 173 + #define ASP_HB3_DP_ENC0_P0_MASK GENMASK(15, 8) 174 + #define MTK_DP_ENC0_P0_3130 0x3130 175 + #define MTK_DP_ENC0_P0_3138 0x3138 176 + #define MTK_DP_ENC0_P0_3154 0x3154 177 + #define PGEN_HTOTAL_DP_ENC0_P0_MASK GENMASK(13, 0) 178 + #define MTK_DP_ENC0_P0_3158 0x3158 179 + #define PGEN_HSYNC_RISING_DP_ENC0_P0_MASK GENMASK(13, 0) 180 + #define MTK_DP_ENC0_P0_315C 0x315c 181 + #define PGEN_HSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK GENMASK(13, 0) 182 + #define MTK_DP_ENC0_P0_3160 0x3160 183 + #define PGEN_HFDE_START_DP_ENC0_P0_MASK GENMASK(13, 0) 184 + #define MTK_DP_ENC0_P0_3164 0x3164 185 + #define PGEN_HFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK GENMASK(13, 0) 186 + #define MTK_DP_ENC0_P0_3168 0x3168 187 + #define PGEN_VTOTAL_DP_ENC0_P0_MASK GENMASK(12, 0) 188 + #define MTK_DP_ENC0_P0_316C 0x316c 189 + #define PGEN_VSYNC_RISING_DP_ENC0_P0_MASK GENMASK(12, 0) 190 + #define MTK_DP_ENC0_P0_3170 0x3170 191 + #define PGEN_VSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK GENMASK(12, 0) 192 + #define MTK_DP_ENC0_P0_3174 0x3174 193 + #define PGEN_VFDE_START_DP_ENC0_P0_MASK GENMASK(12, 0) 194 + #define MTK_DP_ENC0_P0_3178 0x3178 195 + #define PGEN_VFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK GENMASK(12, 0) 196 + #define MTK_DP_ENC0_P0_31B0 0x31b0 197 + #define PGEN_PATTERN_SEL_VAL 4 198 + #define PGEN_PATTERN_SEL_MASK GENMASK(6, 4) 199 + #define MTK_DP_ENC0_P0_31EC 0x31ec 200 + #define AUDIO_CH_SRC_SEL_DP_ENC0_P0 BIT(4) 201 + #define ISRC1_HB3_DP_ENC0_P0_MASK GENMASK(15, 8) 202 + 203 + /* offset: ENC1_OFFSET (0x3200) */ 204 + #define MTK_DP_ENC1_P0_3200 0x3200 205 + #define MTK_DP_ENC1_P0_3280 0x3280 206 + #define SDP_PACKET_TYPE_DP_ENC1_P0_MASK GENMASK(4, 0) 207 + #define SDP_PACKET_W_DP_ENC1_P0 BIT(5) 208 + #define SDP_PACKET_W_DP_ENC1_P0_MASK BIT(5) 209 + #define MTK_DP_ENC1_P0_328C 0x328c 210 + #define VSC_DATA_RDY_VESA_DP_ENC1_P0_MASK BIT(7) 211 + #define MTK_DP_ENC1_P0_3300 0x3300 212 + #define VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_VAL 2 213 + #define VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_MASK GENMASK(9, 8) 214 + #define MTK_DP_ENC1_P0_3304 0x3304 215 + #define AU_PRTY_REGEN_DP_ENC1_P0_MASK BIT(8) 216 + #define AU_CH_STS_REGEN_DP_ENC1_P0_MASK BIT(9) 217 + #define AUDIO_SAMPLE_PRSENT_REGEN_DP_ENC1_P0_MASK BIT(12) 218 + #define MTK_DP_ENC1_P0_3324 0x3324 219 + #define AUDIO_SOURCE_MUX_DP_ENC1_P0_MASK GENMASK(9, 8) 220 + #define AUDIO_SOURCE_MUX_DP_ENC1_P0_DPRX 0 221 + #define MTK_DP_ENC1_P0_3364 0x3364 222 + #define SDP_DOWN_CNT_IN_HBLANK_DP_ENC1_P0_VAL 0x20 223 + #define SDP_DOWN_CNT_INIT_IN_HBLANK_DP_ENC1_P0_MASK GENMASK(11, 0) 224 + #define FIFO_READ_START_POINT_DP_ENC1_P0_VAL 4 225 + #define FIFO_READ_START_POINT_DP_ENC1_P0_MASK GENMASK(15, 12) 226 + #define MTK_DP_ENC1_P0_3368 0x3368 227 + #define VIDEO_SRAM_FIFO_CNT_RESET_SEL_DP_ENC1_P0 BIT(0) 228 + #define VIDEO_STABLE_CNT_THRD_DP_ENC1_P0 BIT(4)
229 + #define SDP_DP13_EN_DP_ENC1_P0 BIT(8) 230 + #define BS2BS_MODE_DP_ENC1_P0 BIT(12) 231 + #define BS2BS_MODE_DP_ENC1_P0_MASK GENMASK(13, 12) 232 + #define BS2BS_MODE_DP_ENC1_P0_VAL 1 233 + #define DP_ENC1_P0_3368_VAL (VIDEO_SRAM_FIFO_CNT_RESET_SEL_DP_ENC1_P0 | \ 234 + VIDEO_STABLE_CNT_THRD_DP_ENC1_P0 | \ 235 + SDP_DP13_EN_DP_ENC1_P0 | \ 236 + BS2BS_MODE_DP_ENC1_P0) 237 + #define MTK_DP_ENC1_P0_33F4 0x33f4 238 + #define DP_ENC_DUMMY_RW_1_AUDIO_RST_EN BIT(0) 239 + #define DP_ENC_DUMMY_RW_1 BIT(9) 240 + 241 + /* offset: TRANS_OFFSET (0x3400) */ 242 + #define MTK_DP_TRANS_P0_3400 0x3400 243 + #define PATTERN1_EN_DP_TRANS_P0_MASK BIT(12) 244 + #define PATTERN2_EN_DP_TRANS_P0_MASK BIT(13) 245 + #define PATTERN3_EN_DP_TRANS_P0_MASK BIT(14) 246 + #define PATTERN4_EN_DP_TRANS_P0_MASK BIT(15) 247 + #define MTK_DP_TRANS_P0_3404 0x3404 248 + #define DP_SCR_EN_DP_TRANS_P0_MASK BIT(0) 249 + #define MTK_DP_TRANS_P0_340C 0x340c 250 + #define DP_TX_TRANSMITTER_4P_RESET_SW_DP_TRANS_P0 BIT(13) 251 + #define MTK_DP_TRANS_P0_3410 0x3410 252 + #define HPD_DEB_THD_DP_TRANS_P0_MASK GENMASK(3, 0) 253 + #define HPD_INT_THD_DP_TRANS_P0_MASK GENMASK(7, 4) 254 + #define HPD_INT_THD_DP_TRANS_P0_LOWER_500US (2 << 4) 255 + #define HPD_INT_THD_DP_TRANS_P0_UPPER_1100US (2 << 6) 256 + #define HPD_DISC_THD_DP_TRANS_P0_MASK GENMASK(11, 8) 257 + #define HPD_CONN_THD_DP_TRANS_P0_MASK GENMASK(15, 12) 258 + #define MTK_DP_TRANS_P0_3414 0x3414 259 + #define HPD_DB_DP_TRANS_P0_MASK BIT(2) 260 + #define MTK_DP_TRANS_P0_3418 0x3418 261 + #define IRQ_CLR_DP_TRANS_P0_MASK GENMASK(3, 0) 262 + #define IRQ_MASK_DP_TRANS_P0_MASK GENMASK(7, 4) 263 + #define IRQ_MASK_DP_TRANS_P0_DISC_IRQ (BIT(1) << 4) 264 + #define IRQ_MASK_DP_TRANS_P0_CONN_IRQ (BIT(2) << 4) 265 + #define IRQ_MASK_DP_TRANS_P0_INT_IRQ (BIT(3) << 4) 266 + #define IRQ_STATUS_DP_TRANS_P0_MASK GENMASK(15, 12) 267 + #define MTK_DP_TRANS_P0_342C 0x342c 268 + #define XTAL_FREQ_DP_TRANS_P0_DEFAULT (BIT(0) | BIT(3) | BIT(5) | BIT(6)) 269 + #define XTAL_FREQ_DP_TRANS_P0_MASK GENMASK(7, 0) 270 + #define MTK_DP_TRANS_P0_3430 0x3430 271 + #define HPD_INT_THD_ECO_DP_TRANS_P0_MASK GENMASK(1, 0) 272 + #define HPD_INT_THD_ECO_DP_TRANS_P0_HIGH_BOUND_EXT BIT(1) 273 + #define MTK_DP_TRANS_P0_34A4 0x34a4 274 + #define LANE_NUM_DP_TRANS_P0_MASK GENMASK(3, 2) 275 + #define MTK_DP_TRANS_P0_3540 0x3540 276 + #define FEC_EN_DP_TRANS_P0_MASK BIT(0) 277 + #define FEC_CLOCK_EN_MODE_DP_TRANS_P0 BIT(3) 278 + #define MTK_DP_TRANS_P0_3580 0x3580 279 + #define POST_MISC_DATA_LANE0_OV_DP_TRANS_P0_MASK BIT(8) 280 + #define POST_MISC_DATA_LANE1_OV_DP_TRANS_P0_MASK BIT(9) 281 + #define POST_MISC_DATA_LANE2_OV_DP_TRANS_P0_MASK BIT(10) 282 + #define POST_MISC_DATA_LANE3_OV_DP_TRANS_P0_MASK BIT(11) 283 + #define MTK_DP_TRANS_P0_35C8 0x35c8 284 + #define SW_IRQ_CLR_DP_TRANS_P0_MASK GENMASK(15, 0) 285 + #define SW_IRQ_STATUS_DP_TRANS_P0_MASK GENMASK(15, 0) 286 + #define MTK_DP_TRANS_P0_35D0 0x35d0 287 + #define SW_IRQ_FINAL_STATUS_DP_TRANS_P0_MASK GENMASK(15, 0) 288 + #define MTK_DP_TRANS_P0_35F0 0x35f0 289 + #define DP_TRANS_DUMMY_RW_0 BIT(3) 290 + #define DP_TRANS_DUMMY_RW_0_MASK GENMASK(3, 2) 291 + 292 + /* offset: AUX_OFFSET (0x3600) */ 293 + #define MTK_DP_AUX_P0_360C 0x360c 294 + #define AUX_TIMEOUT_THR_AUX_TX_P0_MASK GENMASK(12, 0) 295 + #define AUX_TIMEOUT_THR_AUX_TX_P0_VAL 0x1595 296 + #define MTK_DP_AUX_P0_3614 0x3614 297 + #define AUX_RX_UI_CNT_THR_AUX_TX_P0_MASK GENMASK(6, 0) 298 + #define AUX_RX_UI_CNT_THR_AUX_FOR_26M 13 299 + #define MTK_DP_AUX_P0_3618 0x3618 300 + #define AUX_RX_FIFO_FULL_AUX_TX_P0_MASK BIT(9)
301 + #define AUX_RX_FIFO_WRITE_POINTER_AUX_TX_P0_MASK GENMASK(3, 0) 302 + #define MTK_DP_AUX_P0_3620 0x3620 303 + #define AUX_RD_MODE_AUX_TX_P0_MASK BIT(9) 304 + #define AUX_RX_FIFO_READ_PULSE_TX_P0 BIT(8) 305 + #define AUX_RX_FIFO_READ_DATA_AUX_TX_P0_MASK GENMASK(7, 0) 306 + #define MTK_DP_AUX_P0_3624 0x3624 307 + #define AUX_RX_REPLY_COMMAND_AUX_TX_P0_MASK GENMASK(3, 0) 308 + #define MTK_DP_AUX_P0_3628 0x3628 309 + #define AUX_RX_PHY_STATE_AUX_TX_P0_MASK GENMASK(9, 0) 310 + #define AUX_RX_PHY_STATE_AUX_TX_P0_RX_IDLE BIT(0) 311 + #define MTK_DP_AUX_P0_362C 0x362c 312 + #define AUX_NO_LENGTH_AUX_TX_P0 BIT(0) 313 + #define AUX_TX_AUXTX_OV_EN_AUX_TX_P0_MASK BIT(1) 314 + #define AUX_RESERVED_RW_0_AUX_TX_P0_MASK GENMASK(15, 2) 315 + #define MTK_DP_AUX_P0_3630 0x3630 316 + #define AUX_TX_REQUEST_READY_AUX_TX_P0 BIT(3) 317 + #define MTK_DP_AUX_P0_3634 0x3634 318 + #define AUX_TX_OVER_SAMPLE_RATE_AUX_TX_P0_MASK GENMASK(15, 8) 319 + #define AUX_TX_OVER_SAMPLE_RATE_FOR_26M 25 320 + #define MTK_DP_AUX_P0_3640 0x3640 321 + #define AUX_RX_AUX_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(6) 322 + #define AUX_RX_EDID_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(5) 323 + #define AUX_RX_MCCS_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(4) 324 + #define AUX_RX_CMD_RECV_IRQ_AUX_TX_P0 BIT(3) 325 + #define AUX_RX_ADDR_RECV_IRQ_AUX_TX_P0 BIT(2) 326 + #define AUX_RX_DATA_RECV_IRQ_AUX_TX_P0 BIT(1) 327 + #define AUX_400US_TIMEOUT_IRQ_AUX_TX_P0 BIT(0) 328 + #define DP_AUX_P0_3640_VAL (AUX_400US_TIMEOUT_IRQ_AUX_TX_P0 | \ 329 + AUX_RX_DATA_RECV_IRQ_AUX_TX_P0 | \ 330 + AUX_RX_ADDR_RECV_IRQ_AUX_TX_P0 | \ 331 + AUX_RX_CMD_RECV_IRQ_AUX_TX_P0 | \ 332 + AUX_RX_MCCS_RECV_COMPLETE_IRQ_AUX_TX_P0 | \ 333 + AUX_RX_EDID_RECV_COMPLETE_IRQ_AUX_TX_P0 | \ 334 + AUX_RX_AUX_RECV_COMPLETE_IRQ_AUX_TX_P0) 335 + #define MTK_DP_AUX_P0_3644 0x3644 336 + #define MCU_REQUEST_COMMAND_AUX_TX_P0_MASK GENMASK(3, 0) 337 + #define MTK_DP_AUX_P0_3648 0x3648 338 + #define MCU_REQUEST_ADDRESS_LSB_AUX_TX_P0_MASK GENMASK(15, 0) 339 + #define MTK_DP_AUX_P0_364C 0x364c 340 + #define MCU_REQUEST_ADDRESS_MSB_AUX_TX_P0_MASK GENMASK(3, 0) 341 + #define MTK_DP_AUX_P0_3650 0x3650 342 + #define MCU_REQ_DATA_NUM_AUX_TX_P0_MASK GENMASK(15, 12) 343 + #define PHY_FIFO_RST_AUX_TX_P0_MASK BIT(9) 344 + #define MCU_ACK_TRAN_COMPLETE_AUX_TX_P0 BIT(8) 345 + #define MTK_DP_AUX_P0_3658 0x3658 346 + #define AUX_TX_OV_EN_AUX_TX_P0_MASK BIT(0) 347 + #define MTK_DP_AUX_P0_3690 0x3690 348 + #define RX_REPLY_COMPLETE_MODE_AUX_TX_P0 BIT(8) 349 + #define MTK_DP_AUX_P0_3704 0x3704 350 + #define AUX_TX_FIFO_WDATA_NEW_MODE_T_AUX_TX_P0_MASK BIT(1) 351 + #define AUX_TX_FIFO_NEW_MODE_EN_AUX_TX_P0 BIT(2) 352 + #define MTK_DP_AUX_P0_3708 0x3708 353 + #define MTK_DP_AUX_P0_37C8 0x37c8 354 + #define MTK_ATOP_EN_AUX_TX_P0 BIT(0) 355 + 356 + #endif /*_MTK_DP_REG_H_*/
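The field values in this header are pre-shifted to line up with their GENMASK()/BIT() masks, so programming a field boils down to a read-modify-write of the register word. A minimal sketch of that pattern, for orientation only (the helper name and raw MMIO accessors here are illustrative assumptions; the driver's own register helpers may differ):

#include <linux/io.h>
#include <linux/types.h>

/* Illustrative read-modify-write helper for the masks defined above. */
static void mtk_dp_rmw(void __iomem *base, u32 offset, u32 val, u32 mask)
{
        u32 tmp = readl(base + offset);

        writel((tmp & ~mask) | (val & mask), base + offset);
}

/* Example: select 8 bpc RGB output via MTK_DP_ENC0_P0_303C. */
static void mtk_dp_set_8bpc_rgb(void __iomem *base)
{
        mtk_dp_rmw(base, MTK_DP_ENC0_P0_303C,
                   VIDEO_COLOR_DEPTH_DP_ENC0_P0_8BIT,
                   VIDEO_COLOR_DEPTH_DP_ENC0_P0_MASK);
        mtk_dp_rmw(base, MTK_DP_ENC0_P0_303C,
                   PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB,
                   PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK);
}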
+9 -4
drivers/gpu/drm/msm/msm_drv.c
··· 1242 1242 struct msm_drm_private *priv = platform_get_drvdata(pdev); 1243 1243 struct drm_device *drm = priv ? priv->dev : NULL; 1244 1244 1245 - if (!priv || !priv->kms) 1246 - return; 1247 - 1248 - drm_atomic_helper_shutdown(drm); 1245 + /* 1246 + * Shut down the hw if we're far enough along that things might be on. 1247 + * If we run this too early, we'll end up panicking in any of a variety 1248 + * of places. Since we don't register the drm device until late in 1249 + * msm_drm_init, drm_dev->registered is used as an indicator that the 1250 + * shutdown will be successful. 1251 + */ 1252 + if (drm && drm->registered) 1253 + drm_atomic_helper_shutdown(drm); 1249 1254 } 1250 1255 1251 1256 static struct platform_driver msm_platform_driver = {
+3 -18
drivers/gpu/drm/mxsfb/lcdif_drv.c
··· 8 8 #include <linux/clk.h> 9 9 #include <linux/dma-mapping.h> 10 10 #include <linux/io.h> 11 - #include <linux/iopoll.h> 12 11 #include <linux/module.h> 13 12 #include <linux/of_device.h> 14 13 #include <linux/platform_device.h> ··· 15 16 16 17 #include <drm/drm_atomic_helper.h> 17 18 #include <drm/drm_bridge.h> 18 - #include <drm/drm_connector.h> 19 19 #include <drm/drm_drv.h> 20 20 #include <drm/drm_fb_helper.h> 21 - #include <drm/drm_fourcc.h> 22 21 #include <drm/drm_gem_dma_helper.h> 23 22 #include <drm/drm_gem_framebuffer_helper.h> 24 23 #include <drm/drm_mode_config.h> ··· 42 45 { 43 46 struct drm_device *drm = lcdif->drm; 44 47 struct drm_bridge *bridge; 45 - struct drm_panel *panel; 46 48 int ret; 47 49 48 - ret = drm_of_find_panel_or_bridge(drm->dev->of_node, 0, 0, &panel, 49 - &bridge); 50 - if (ret) 51 - return ret; 52 - 53 - if (panel) { 54 - bridge = devm_drm_panel_bridge_add_typed(drm->dev, panel, 55 - DRM_MODE_CONNECTOR_DPI); 56 - if (IS_ERR(bridge)) 57 - return PTR_ERR(bridge); 58 - } 59 - 60 - if (!bridge) 61 - return -ENODEV; 50 + bridge = devm_drm_of_get_bridge(drm->dev, drm->dev->of_node, 0, 0); 51 + if (IS_ERR(bridge)) 52 + return PTR_ERR(bridge); 62 53 63 54 ret = drm_bridge_attach(&lcdif->encoder, bridge, NULL, 0); 64 55 if (ret)
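devm_drm_of_get_bridge() folds the panel lookup and the panel-to-bridge wrapping that the removed lines open-coded into one call. Roughly, the helper in drm_bridge.c does the following (simplified, shown for reference):

struct drm_bridge *devm_drm_of_get_bridge(struct device *dev,
                                          struct device_node *np,
                                          u32 port, u32 endpoint)
{
        struct drm_bridge *bridge;
        struct drm_panel *panel;
        int ret;

        /* -ENODEV when the endpoint describes neither a panel nor a bridge */
        ret = drm_of_find_panel_or_bridge(np, port, endpoint, &panel, &bridge);
        if (ret)
                return ERR_PTR(ret);

        if (panel)
                bridge = devm_drm_panel_bridge_add(dev, panel);

        return bridge;
}

One subtlety: the helper uses devm_drm_panel_bridge_add(), so a wrapped panel's connector type now comes from the panel itself rather than being forced to DRM_MODE_CONNECTOR_DPI as the removed code did.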
+1
drivers/gpu/drm/mxsfb/lcdif_drv.h
··· 8 8 #ifndef __LCDIF_DRV_H__ 9 9 #define __LCDIF_DRV_H__ 10 10 11 + #include <drm/drm_bridge.h> 11 12 #include <drm/drm_crtc.h> 12 13 #include <drm/drm_device.h> 13 14 #include <drm/drm_encoder.h>
+6 -6
drivers/gpu/drm/mxsfb/lcdif_kms.c
··· 17 17 #include <drm/drm_bridge.h> 18 18 #include <drm/drm_crtc.h> 19 19 #include <drm/drm_encoder.h> 20 - #include <drm/drm_framebuffer.h> 21 20 #include <drm/drm_fb_dma_helper.h> 22 21 #include <drm/drm_fourcc.h> 22 + #include <drm/drm_framebuffer.h> 23 23 #include <drm/drm_gem_atomic_helper.h> 24 24 #include <drm/drm_gem_dma_helper.h> 25 25 #include <drm/drm_plane.h> ··· 122 122 123 123 writel(ctrl, lcdif->base + LCDC_V8_CTRL); 124 124 125 - writel(DISP_SIZE_DELTA_Y(m->crtc_vdisplay) | 126 - DISP_SIZE_DELTA_X(m->crtc_hdisplay), 125 + writel(DISP_SIZE_DELTA_Y(m->vdisplay) | 126 + DISP_SIZE_DELTA_X(m->hdisplay), 127 127 lcdif->base + LCDC_V8_DISP_SIZE); 128 128 129 129 writel(HSYN_PARA_BP_H(m->htotal - m->hsync_end) | ··· 138 138 VSYN_HSYN_WIDTH_PW_H(m->hsync_end - m->hsync_start), 139 139 lcdif->base + LCDC_V8_VSYN_HSYN_WIDTH); 140 140 141 - writel(CTRLDESCL0_1_HEIGHT(m->crtc_vdisplay) | 142 - CTRLDESCL0_1_WIDTH(m->crtc_hdisplay), 141 + writel(CTRLDESCL0_1_HEIGHT(m->vdisplay) | 142 + CTRLDESCL0_1_WIDTH(m->hdisplay), 143 143 lcdif->base + LCDC_V8_CTRLDESCL0_1); 144 144 145 145 writel(CTRLDESCL0_3_PITCH(lcdif->crtc.primary->state->fb->pitches[0]), ··· 203 203 DRM_DEV_DEBUG_DRIVER(drm->dev, "Pixel clock: %dkHz (actual: %dkHz)\n", 204 204 m->crtc_clock, 205 205 (int)(clk_get_rate(lcdif->clk) / 1000)); 206 - DRM_DEV_DEBUG_DRIVER(drm->dev, "Connector bus_flags: 0x%08X\n", 206 + DRM_DEV_DEBUG_DRIVER(drm->dev, "Bridge bus_flags: 0x%08X\n", 207 207 bus_flags); 208 208 DRM_DEV_DEBUG_DRIVER(drm->dev, "Mode flags: 0x%08X\n", m->flags); 209 209
+91 -106
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 932 932 struct nv50_head *head; 933 933 struct nv50_mstc *mstc; 934 934 bool disabled; 935 + bool enabled; 935 936 }; 936 937 937 938 struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder) ··· 948 947 return msto->mstc->mstm->outp; 949 948 } 950 949 951 - static struct drm_dp_payload * 952 - nv50_msto_payload(struct nv50_msto *msto) 953 - { 954 - struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 955 - struct nv50_mstc *mstc = msto->mstc; 956 - struct nv50_mstm *mstm = mstc->mstm; 957 - int vcpi = mstc->port->vcpi.vcpi, i; 958 - 959 - WARN_ON(!mutex_is_locked(&mstm->mgr.payload_lock)); 960 - 961 - NV_ATOMIC(drm, "%s: vcpi %d\n", msto->encoder.name, vcpi); 962 - for (i = 0; i < mstm->mgr.max_payloads; i++) { 963 - struct drm_dp_payload *payload = &mstm->mgr.payloads[i]; 964 - NV_ATOMIC(drm, "%s: %d: vcpi %d start 0x%02x slots 0x%02x\n", 965 - mstm->outp->base.base.name, i, payload->vcpi, 966 - payload->start_slot, payload->num_slots); 967 - } 968 - 969 - for (i = 0; i < mstm->mgr.max_payloads; i++) { 970 - struct drm_dp_payload *payload = &mstm->mgr.payloads[i]; 971 - if (payload->vcpi == vcpi) 972 - return payload; 973 - } 974 - 975 - return NULL; 976 - } 977 - 978 950 static void 979 - nv50_msto_cleanup(struct nv50_msto *msto) 951 + nv50_msto_cleanup(struct drm_atomic_state *state, 952 + struct drm_dp_mst_topology_state *mst_state, 953 + struct drm_dp_mst_topology_mgr *mgr, 954 + struct nv50_msto *msto) 980 955 { 981 956 struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 982 - struct nv50_mstc *mstc = msto->mstc; 983 - struct nv50_mstm *mstm = mstc->mstm; 984 - 985 - if (!msto->disabled) 986 - return; 957 + struct drm_dp_mst_atomic_payload *payload = 958 + drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port); 987 959 988 960 NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name); 989 961 990 - drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port); 991 - 992 - msto->mstc = NULL; 993 - msto->disabled = false; 962 + if (msto->disabled) { 963 + msto->mstc = NULL; 964 + msto->disabled = false; 965 + } else if (msto->enabled) { 966 + drm_dp_add_payload_part2(mgr, state, payload); 967 + msto->enabled = false; 968 + } 994 969 } 995 970 996 971 static void 997 - nv50_msto_prepare(struct nv50_msto *msto) 972 + nv50_msto_prepare(struct drm_atomic_state *state, 973 + struct drm_dp_mst_topology_state *mst_state, 974 + struct drm_dp_mst_topology_mgr *mgr, 975 + struct nv50_msto *msto) 998 976 { 999 977 struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 1000 978 struct nv50_mstc *mstc = msto->mstc; 1001 979 struct nv50_mstm *mstm = mstc->mstm; 980 + struct drm_dp_mst_atomic_payload *payload; 1002 981 struct { 1003 982 struct nv50_disp_mthd_v1 base; 1004 983 struct nv50_disp_sor_dp_mst_vcpi_v0 vcpi; ··· 990 1009 (0x0100 << msto->head->base.index), 991 1010 }; 992 1011 993 - mutex_lock(&mstm->mgr.payload_lock); 994 - 995 1012 NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name); 996 - if (mstc->port->vcpi.vcpi > 0) { 997 - struct drm_dp_payload *payload = nv50_msto_payload(msto); 998 - if (payload) { 999 - args.vcpi.start_slot = payload->start_slot; 1000 - args.vcpi.num_slots = payload->num_slots; 1001 - args.vcpi.pbn = mstc->port->vcpi.pbn; 1002 - args.vcpi.aligned_pbn = mstc->port->vcpi.aligned_pbn; 1003 - } 1013 + 1014 + payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port); 1015 + 1016 + // TODO: Figure out if we want to do a better job of handling VCPI allocation failures here? 
1017 + if (msto->disabled) { 1018 + drm_dp_remove_payload(mgr, mst_state, payload); 1019 + } else { 1020 + if (msto->enabled) 1021 + drm_dp_add_payload_part1(mgr, mst_state, payload); 1022 + 1023 + args.vcpi.start_slot = payload->vc_start_slot; 1024 + args.vcpi.num_slots = payload->time_slots; 1025 + args.vcpi.pbn = payload->pbn; 1026 + args.vcpi.aligned_pbn = payload->time_slots * mst_state->pbn_div; 1004 1027 } 1005 1028 1006 1029 NV_ATOMIC(drm, "%s: %s: %02x %02x %04x %04x\n", ··· 1013 1028 args.vcpi.pbn, args.vcpi.aligned_pbn); 1014 1029 1015 1030 nvif_mthd(&drm->display->disp.object, 0, &args, sizeof(args)); 1016 - mutex_unlock(&mstm->mgr.payload_lock); 1017 1031 } 1018 1032 1019 1033 static int ··· 1022 1038 { 1023 1039 struct drm_atomic_state *state = crtc_state->state; 1024 1040 struct drm_connector *connector = conn_state->connector; 1041 + struct drm_dp_mst_topology_state *mst_state; 1025 1042 struct nv50_mstc *mstc = nv50_mstc(connector); 1026 1043 struct nv50_mstm *mstm = mstc->mstm; 1027 1044 struct nv50_head_atom *asyh = nv50_head_atom(crtc_state); ··· 1034 1049 if (ret) 1035 1050 return ret; 1036 1051 1037 - if (!crtc_state->mode_changed && !crtc_state->connectors_changed) 1052 + if (!drm_atomic_crtc_needs_modeset(crtc_state)) 1038 1053 return 0; 1039 1054 1040 1055 /* ··· 1050 1065 false); 1051 1066 } 1052 1067 1053 - slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr, mstc->port, 1054 - asyh->dp.pbn, 0); 1068 + mst_state = drm_atomic_get_mst_topology_state(state, &mstm->mgr); 1069 + if (IS_ERR(mst_state)) 1070 + return PTR_ERR(mst_state); 1071 + 1072 + if (!mst_state->pbn_div) { 1073 + struct nouveau_encoder *outp = mstc->mstm->outp; 1074 + 1075 + mst_state->pbn_div = drm_dp_get_vc_payload_bw(&mstm->mgr, 1076 + outp->dp.link_bw, outp->dp.link_nr); 1077 + } 1078 + 1079 + slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, asyh->dp.pbn); 1055 1080 if (slots < 0) 1056 1081 return slots; 1057 1082 ··· 1093 1098 struct drm_connector *connector; 1094 1099 struct drm_connector_list_iter conn_iter; 1095 1100 u8 proto; 1096 - bool r; 1097 1101 1098 1102 drm_connector_list_iter_begin(encoder->dev, &conn_iter); 1099 1103 drm_for_each_connector_iter(connector, &conn_iter) { ··· 1107 1113 if (WARN_ON(!mstc)) 1108 1114 return; 1109 1115 1110 - r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, asyh->dp.pbn, asyh->dp.tu); 1111 - if (!r) 1112 - DRM_DEBUG_KMS("Failed to allocate VCPI\n"); 1113 - 1114 1116 if (!mstm->links++) 1115 1117 nv50_outp_acquire(mstm->outp, false /*XXX: MST audio.*/); 1116 1118 ··· 1119 1129 nv50_dp_bpc_to_depth(asyh->or.bpc)); 1120 1130 1121 1131 msto->mstc = mstc; 1132 + msto->enabled = true; 1122 1133 mstm->modified = true; 1123 1134 } 1124 1135 ··· 1129 1138 struct nv50_msto *msto = nv50_msto(encoder); 1130 1139 struct nv50_mstc *mstc = msto->mstc; 1131 1140 struct nv50_mstm *mstm = mstc->mstm; 1132 - 1133 - drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port); 1134 1141 1135 1142 mstm->outp->update(mstm->outp, msto->head->base.index, NULL, 0, 0); 1136 1143 mstm->modified = true; ··· 1244 1255 { 1245 1256 struct nv50_mstc *mstc = nv50_mstc(connector); 1246 1257 struct drm_dp_mst_topology_mgr *mgr = &mstc->mstm->mgr; 1247 - struct drm_connector_state *new_conn_state = 1248 - drm_atomic_get_new_connector_state(state, connector); 1249 - struct drm_connector_state *old_conn_state = 1250 - drm_atomic_get_old_connector_state(state, connector); 1251 - struct drm_crtc_state *crtc_state; 1252 - struct drm_crtc *new_crtc = new_conn_state->crtc; 1253 
1258 1254 - if (!old_conn_state->crtc) 1255 - return 0; 1256 - 1257 - /* We only want to free VCPI if this state disables the CRTC on this 1258 - * connector 1259 - */ 1260 - if (new_crtc) { 1261 - crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc); 1262 - 1263 - if (!crtc_state || 1264 - !drm_atomic_crtc_needs_modeset(crtc_state) || 1265 - crtc_state->enable) 1266 - return 0; 1267 - } 1268 - 1269 - return drm_dp_atomic_release_vcpi_slots(state, mgr, mstc->port); 1259 + return drm_dp_atomic_release_time_slots(state, mgr, mstc->port); 1270 1260 } 1271 1261 1272 1262 static int ··· 1349 1381 } 1350 1382 1351 1383 static void 1352 - nv50_mstm_cleanup(struct nv50_mstm *mstm) 1384 + nv50_mstm_cleanup(struct drm_atomic_state *state, 1385 + struct drm_dp_mst_topology_state *mst_state, 1386 + struct nv50_mstm *mstm) 1353 1387 { 1354 1388 struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev); 1355 1389 struct drm_encoder *encoder; ··· 1359 1389 NV_ATOMIC(drm, "%s: mstm cleanup\n", mstm->outp->base.base.name); 1360 1390 drm_dp_check_act_status(&mstm->mgr); 1361 1391 1362 - drm_dp_update_payload_part2(&mstm->mgr); 1363 - 1364 1392 drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { 1365 1393 if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { 1366 1394 struct nv50_msto *msto = nv50_msto(encoder); 1367 1395 struct nv50_mstc *mstc = msto->mstc; 1368 1396 if (mstc && mstc->mstm == mstm) 1369 - nv50_msto_cleanup(msto); 1397 + nv50_msto_cleanup(state, mst_state, &mstm->mgr, msto); 1370 1398 } 1371 1399 } 1372 1400 ··· 1372 1404 } 1373 1405 1374 1406 static void 1375 - nv50_mstm_prepare(struct nv50_mstm *mstm) 1407 + nv50_mstm_prepare(struct drm_atomic_state *state, 1408 + struct drm_dp_mst_topology_state *mst_state, 1409 + struct nv50_mstm *mstm) 1376 1410 { 1377 1411 struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev); 1378 1412 struct drm_encoder *encoder; 1379 1413 1380 1414 NV_ATOMIC(drm, "%s: mstm prepare\n", mstm->outp->base.base.name); 1381 - drm_dp_update_payload_part1(&mstm->mgr, 1); 1382 1415 1416 + /* Disable payloads first */ 1383 1417 drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { 1384 1418 if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { 1385 1419 struct nv50_msto *msto = nv50_msto(encoder); 1386 1420 struct nv50_mstc *mstc = msto->mstc; 1387 - if (mstc && mstc->mstm == mstm) 1388 - nv50_msto_prepare(msto); 1421 + if (mstc && mstc->mstm == mstm && msto->disabled) 1422 + nv50_msto_prepare(state, mst_state, &mstm->mgr, msto); 1423 + } 1424 + } 1425 + 1426 + /* Add payloads for new heads, while also updating the start slots of any unmodified (but 1427 + * active) heads that may have had their VC slots shifted left after the previous step 1428 + */ 1429 + drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { 1430 + if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { 1431 + struct nv50_msto *msto = nv50_msto(encoder); 1432 + struct nv50_mstc *mstc = msto->mstc; 1433 + if (mstc && mstc->mstm == mstm && !msto->disabled) 1434 + nv50_msto_prepare(state, mst_state, &mstm->mgr, msto); 1389 1435 } 1390 1436 } 1391 1437 ··· 1596 1614 mstm->mgr.cbs = &nv50_mstm; 1597 1615 1598 1616 ret = drm_dp_mst_topology_mgr_init(&mstm->mgr, dev, aux, aux_max, 1599 - max_payloads, outp->dcb->dpconf.link_nr, 1600 - drm_dp_bw_code_to_link_rate(outp->dcb->dpconf.link_bw), 1601 - conn_base_id); 1617 + max_payloads, conn_base_id); 1602 1618 if (ret) 1603 1619 return ret; 1604 1620 ··· 1814 1834 .destroy = nv50_sor_destroy, 1815 1835 }; 1816 1836 1817 - 
static bool nv50_has_mst(struct nouveau_drm *drm) 1837 + bool nv50_has_mst(struct nouveau_drm *drm) 1818 1838 { 1819 1839 struct nvkm_bios *bios = nvxx_bios(&drm->client.device); 1820 1840 u32 data; ··· 2048 2068 static void 2049 2069 nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock) 2050 2070 { 2071 + struct drm_dp_mst_topology_mgr *mgr; 2072 + struct drm_dp_mst_topology_state *mst_state; 2051 2073 struct nouveau_drm *drm = nouveau_drm(state->dev); 2052 2074 struct nv50_disp *disp = nv50_disp(drm->dev); 2053 2075 struct nv50_core *core = disp->core; 2054 2076 struct nv50_mstm *mstm; 2055 - struct drm_encoder *encoder; 2077 + int i; 2056 2078 2057 2079 NV_ATOMIC(drm, "commit core %08x\n", interlock[NV50_DISP_INTERLOCK_BASE]); 2058 2080 2059 - drm_for_each_encoder(encoder, drm->dev) { 2060 - if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) { 2061 - mstm = nouveau_encoder(encoder)->dp.mstm; 2062 - if (mstm && mstm->modified) 2063 - nv50_mstm_prepare(mstm); 2064 - } 2081 + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { 2082 + mstm = nv50_mstm(mgr); 2083 + if (mstm->modified) 2084 + nv50_mstm_prepare(state, mst_state, mstm); 2065 2085 } 2066 2086 2067 2087 core->func->ntfy_init(disp->sync, NV50_DISP_CORE_NTFY); ··· 2070 2090 disp->core->chan.base.device)) 2071 2091 NV_ERROR(drm, "core notifier timeout\n"); 2072 2092 2073 - drm_for_each_encoder(encoder, drm->dev) { 2074 - if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) { 2075 - mstm = nouveau_encoder(encoder)->dp.mstm; 2076 - if (mstm && mstm->modified) 2077 - nv50_mstm_cleanup(mstm); 2078 - } 2093 + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { 2094 + mstm = nv50_mstm(mgr); 2095 + if (mstm->modified) 2096 + nv50_mstm_cleanup(state, mst_state, mstm); 2079 2097 } 2080 2098 } 2081 2099 ··· 2114 2136 nv50_crc_atomic_stop_reporting(state); 2115 2137 drm_atomic_helper_wait_for_fences(dev, state, false); 2116 2138 drm_atomic_helper_wait_for_dependencies(state); 2139 + drm_dp_mst_atomic_wait_for_dependencies(state); 2117 2140 drm_atomic_helper_update_legacy_modeset_state(dev, state); 2118 2141 drm_atomic_helper_calc_timestamping_constants(state); 2119 2142 ··· 2595 2616 .atomic_state_free = nv50_disp_atomic_state_free, 2596 2617 }; 2597 2618 2619 + static const struct drm_mode_config_helper_funcs 2620 + nv50_disp_helper_func = { 2621 + .atomic_commit_setup = drm_dp_mst_atomic_setup_commit, 2622 + }; 2623 + 2598 2624 /****************************************************************************** 2599 2625 * Init 2600 2626 *****************************************************************************/ ··· 2683 2699 nouveau_display(dev)->fini = nv50_display_fini; 2684 2700 disp->disp = &nouveau_display(dev)->disp; 2685 2701 dev->mode_config.funcs = &nv50_disp_func; 2702 + dev->mode_config.helper_private = &nv50_disp_helper_func; 2686 2703 dev->mode_config.quirk_addfb_prefer_xbgr_30bpp = true; 2687 2704 dev->mode_config.normalize_zpos = true; 2688 2705
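Taken together, the disp.c changes follow the generic recipe for the new MST atomic payload API: install drm_dp_mst_atomic_setup_commit() as the mode-config atomic_commit_setup hook, wait on MST dependencies next to the usual commit dependencies, and drive payload updates per topology manager from the commit tail. A condensed sketch of that recipe (the foo_* driver-side names are placeholders, not nouveau's):

#include <drm/display/drm_dp_mst_helper.h>
#include <drm/drm_atomic_helper.h>

static const struct drm_mode_config_helper_funcs foo_mode_config_helpers = {
        .atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
};

static void foo_atomic_commit_tail(struct drm_atomic_state *state)
{
        struct drm_dp_mst_topology_state *mst_state;
        struct drm_dp_mst_topology_mgr *mgr;
        int i;

        drm_atomic_helper_wait_for_dependencies(state);
        drm_dp_mst_atomic_wait_for_dependencies(state);

        for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
                /* drm_dp_remove_payload() for streams going away, then
                 * drm_dp_add_payload_part1() for newly enabled ones */
        }

        /* ... program the hardware and wait for ACT ... */

        for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
                /* drm_dp_add_payload_part2() once each stream is live */
        }
}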
+2
drivers/gpu/drm/nouveau/dispnv50/disp.h
··· 106 106 */ 107 107 struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder); 108 108 109 + bool nv50_has_mst(struct nouveau_drm *drm); 110 + 109 111 u32 *evo_wait(struct nv50_dmac *, int nr); 110 112 void evo_kick(u32 *, struct nv50_dmac *); 111 113
+17 -1
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 1106 1106 return NULL; 1107 1107 } 1108 1108 1109 + static int 1110 + nouveau_connector_atomic_check(struct drm_connector *connector, struct drm_atomic_state *state) 1111 + { 1112 + struct nouveau_connector *nv_conn = nouveau_connector(connector); 1113 + struct drm_connector_state *conn_state = 1114 + drm_atomic_get_new_connector_state(state, connector); 1115 + 1116 + if (!nv_conn->dp_encoder || !nv50_has_mst(nouveau_drm(connector->dev))) 1117 + return 0; 1118 + 1119 + return drm_dp_mst_root_conn_atomic_check(conn_state, &nv_conn->dp_encoder->dp.mstm->mgr); 1120 + } 1121 + 1109 1122 static const struct drm_connector_helper_funcs 1110 1123 nouveau_connector_helper_funcs = { 1111 1124 .get_modes = nouveau_connector_get_modes, 1112 1125 .mode_valid = nouveau_connector_mode_valid, 1113 1126 .best_encoder = nouveau_connector_best_encoder, 1127 + .atomic_check = nouveau_connector_atomic_check, 1114 1128 }; 1115 1129 1116 1130 static const struct drm_connector_funcs ··· 1382 1368 return ERR_PTR(-ENOMEM); 1383 1369 } 1384 1370 drm_dp_aux_init(&nv_connector->aux); 1385 - fallthrough; 1371 + break; 1386 1372 default: 1387 1373 funcs = &nouveau_connector_funcs; 1388 1374 break; ··· 1445 1431 1446 1432 switch (type) { 1447 1433 case DRM_MODE_CONNECTOR_DisplayPort: 1434 + nv_connector->dp_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP); 1435 + fallthrough; 1448 1436 case DRM_MODE_CONNECTOR_eDP: 1449 1437 drm_dp_cec_register_connector(&nv_connector->aux, connector); 1450 1438 break;
+3
drivers/gpu/drm/nouveau/nouveau_connector.h
··· 128 128 129 129 struct drm_dp_aux aux; 130 130 131 + /* The fixed DP encoder for this connector, if there is one */ 132 + struct nouveau_encoder *dp_encoder; 133 + 131 134 int dithering_mode; 132 135 int scaling_mode; 133 136
+17 -68
drivers/gpu/drm/nouveau/nouveau_hwmon.c
··· 211 211 212 212 #define N_ATTR_GROUPS 3 213 213 214 - static const u32 nouveau_config_chip[] = { 215 - HWMON_C_UPDATE_INTERVAL, 216 - 0 217 - }; 218 - 219 - static const u32 nouveau_config_in[] = { 220 - HWMON_I_INPUT | HWMON_I_MIN | HWMON_I_MAX | HWMON_I_LABEL, 221 - 0 222 - }; 223 - 224 - static const u32 nouveau_config_temp[] = { 225 - HWMON_T_INPUT | HWMON_T_MAX | HWMON_T_MAX_HYST | 226 - HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_EMERGENCY | 227 - HWMON_T_EMERGENCY_HYST, 228 - 0 229 - }; 230 - 231 - static const u32 nouveau_config_fan[] = { 232 - HWMON_F_INPUT, 233 - 0 234 - }; 235 - 236 - static const u32 nouveau_config_pwm[] = { 237 - HWMON_PWM_INPUT | HWMON_PWM_ENABLE, 238 - 0 239 - }; 240 - 241 - static const u32 nouveau_config_power[] = { 242 - HWMON_P_INPUT | HWMON_P_CAP_MAX | HWMON_P_CRIT, 243 - 0 244 - }; 245 - 246 - static const struct hwmon_channel_info nouveau_chip = { 247 - .type = hwmon_chip, 248 - .config = nouveau_config_chip, 249 - }; 250 - 251 - static const struct hwmon_channel_info nouveau_temp = { 252 - .type = hwmon_temp, 253 - .config = nouveau_config_temp, 254 - }; 255 - 256 - static const struct hwmon_channel_info nouveau_fan = { 257 - .type = hwmon_fan, 258 - .config = nouveau_config_fan, 259 - }; 260 - 261 - static const struct hwmon_channel_info nouveau_in = { 262 - .type = hwmon_in, 263 - .config = nouveau_config_in, 264 - }; 265 - 266 - static const struct hwmon_channel_info nouveau_pwm = { 267 - .type = hwmon_pwm, 268 - .config = nouveau_config_pwm, 269 - }; 270 - 271 - static const struct hwmon_channel_info nouveau_power = { 272 - .type = hwmon_power, 273 - .config = nouveau_config_power, 274 - }; 275 - 276 214 static const struct hwmon_channel_info *nouveau_info[] = { 277 - &nouveau_chip, 278 - &nouveau_temp, 279 - &nouveau_fan, 280 - &nouveau_in, 281 - &nouveau_pwm, 282 - &nouveau_power, 215 + HWMON_CHANNEL_INFO(chip, 216 + HWMON_C_UPDATE_INTERVAL), 217 + HWMON_CHANNEL_INFO(temp, 218 + HWMON_T_INPUT | 219 + HWMON_T_MAX | HWMON_T_MAX_HYST | 220 + HWMON_T_CRIT | HWMON_T_CRIT_HYST | 221 + HWMON_T_EMERGENCY | HWMON_T_EMERGENCY_HYST), 222 + HWMON_CHANNEL_INFO(fan, 223 + HWMON_F_INPUT), 224 + HWMON_CHANNEL_INFO(in, 225 + HWMON_I_INPUT | 226 + HWMON_I_MIN | HWMON_I_MAX | 227 + HWMON_I_LABEL), 228 + HWMON_CHANNEL_INFO(pwm, 229 + HWMON_PWM_INPUT | HWMON_PWM_ENABLE), 230 + HWMON_CHANNEL_INFO(power, 231 + HWMON_P_INPUT | HWMON_P_CAP_MAX | HWMON_P_CRIT), 283 232 NULL 284 233 }; 285 234
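The conversion works because HWMON_CHANNEL_INFO() builds the same zero-terminated config array and channel-info struct as anonymous compound literals, which is what lets the six named nouveau_config_* arrays and nouveau_* structs above collapse; its definition in <linux/hwmon.h> is:

#define HWMON_CHANNEL_INFO(stype, ...)          \
        (&(struct hwmon_channel_info) {         \
                .type = hwmon_##stype,          \
                .config = (u32 []) {            \
                        __VA_ARGS__, 0          \
                }                               \
        })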
+29
drivers/gpu/drm/nouveau/nouveau_mem.c
··· 187 187 *res = &mem->base; 188 188 return 0; 189 189 } 190 + 191 + bool 192 + nouveau_mem_intersects(struct ttm_resource *res, 193 + const struct ttm_place *place, 194 + size_t size) 195 + { 196 + u32 num_pages = PFN_UP(size); 197 + 198 + /* Don't evict BOs outside of the requested placement range */ 199 + if (place->fpfn >= (res->start + num_pages) || 200 + (place->lpfn && place->lpfn <= res->start)) 201 + return false; 202 + 203 + return true; 204 + } 205 + 206 + bool 207 + nouveau_mem_compatible(struct ttm_resource *res, 208 + const struct ttm_place *place, 209 + size_t size) 210 + { 211 + u32 num_pages = PFN_UP(size); 212 + 213 + if (res->start < place->fpfn || 214 + (place->lpfn && (res->start + num_pages) > place->lpfn)) 215 + return false; 216 + 217 + return true; 218 + }
+6
drivers/gpu/drm/nouveau/nouveau_mem.h
··· 25 25 struct ttm_resource **); 26 26 void nouveau_mem_del(struct ttm_resource_manager *man, 27 27 struct ttm_resource *); 28 + bool nouveau_mem_intersects(struct ttm_resource *res, 29 + const struct ttm_place *place, 30 + size_t size); 31 + bool nouveau_mem_compatible(struct ttm_resource *res, 32 + const struct ttm_place *place, 33 + size_t size); 28 34 int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page); 29 35 int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *); 30 36 void nouveau_mem_fini(struct nouveau_mem *);
+24
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 42 42 nouveau_mem_del(man, reg); 43 43 } 44 44 45 + static bool 46 + nouveau_manager_intersects(struct ttm_resource_manager *man, 47 + struct ttm_resource *res, 48 + const struct ttm_place *place, 49 + size_t size) 50 + { 51 + return nouveau_mem_intersects(res, place, size); 52 + } 53 + 54 + static bool 55 + nouveau_manager_compatible(struct ttm_resource_manager *man, 56 + struct ttm_resource *res, 57 + const struct ttm_place *place, 58 + size_t size) 59 + { 60 + return nouveau_mem_compatible(res, place, size); 61 + } 62 + 45 63 static int 46 64 nouveau_vram_manager_new(struct ttm_resource_manager *man, 47 65 struct ttm_buffer_object *bo, ··· 91 73 const struct ttm_resource_manager_func nouveau_vram_manager = { 92 74 .alloc = nouveau_vram_manager_new, 93 75 .free = nouveau_manager_del, 76 + .intersects = nouveau_manager_intersects, 77 + .compatible = nouveau_manager_compatible, 94 78 }; 95 79 96 80 static int ··· 117 97 const struct ttm_resource_manager_func nouveau_gart_manager = { 118 98 .alloc = nouveau_gart_manager_new, 119 99 .free = nouveau_manager_del, 100 + .intersects = nouveau_manager_intersects, 101 + .compatible = nouveau_manager_compatible, 120 102 }; 121 103 122 104 static int ··· 152 130 const struct ttm_resource_manager_func nv04_gart_manager = { 153 131 .alloc = nv04_gart_manager_new, 154 132 .free = nouveau_manager_del, 133 + .intersects = nouveau_manager_intersects, 134 + .compatible = nouveau_manager_compatible, 155 135 }; 156 136 157 137 static int
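On the TTM core side these hooks are optional: when a manager leaves them unset, the helpers conservatively treat the resource as intersecting and compatible. A simplified shape of the core-side check (the real helpers in ttm_resource.c take a ttm_device and look the manager up from the resource's mem_type):

static bool resource_compatible(struct ttm_resource_manager *man,
                                struct ttm_resource *res,
                                const struct ttm_place *place,
                                size_t size)
{
        /* No placement or no hook: assume the resource is compatible. */
        if (!place || !man->func->compatible)
                return true;

        return man->func->compatible(man, res, place, size);
}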
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/clk/gm20b.c
··· 581 581 582 582 /* 583 583 * Interim step for changing DVFS detection settings: low enough 584 - * frequency to be safe at at DVFS coeff = 0. 584 + * frequency to be safe at DVFS coeff = 0. 585 585 * 586 586 * 1. If voltage is increasing: 587 587 * - safe frequency target matches the lowest - old - frequency
+2 -2
drivers/gpu/drm/panel/Kconfig
··· 165 165 config DRM_PANEL_ILITEK_ILI9341 166 166 tristate "Ilitek ILI9341 240x320 QVGA panels" 167 167 depends on OF && SPI 168 - depends on DRM_KMS_HELPER 169 - depends on DRM_GEM_DMA_HELPER 168 + select DRM_KMS_HELPER 169 + select DRM_GEM_DMA_HELPER 170 170 depends on BACKLIGHT_CLASS_DEVICE 171 171 select DRM_MIPI_DBI 172 172 help
+4 -1
drivers/gpu/drm/panel/panel-edp.c
··· 53 53 * before the HPD signal is reliable. Ideally this is 0 but some panels, 54 54 * board designs, or bad pulldown configs can cause a glitch here. 55 55 * 56 - * NOTE: on some old panel data this number appers to be much too big. 56 + * NOTE: on some old panel data this number appears to be much too big. 57 57 * Presumably some old panels simply didn't have HPD hooked up and put 58 58 * the hpd_absent here because this field predates the 59 59 * hpd_absent. While that works, it's non-ideal. ··· 1877 1877 */ 1878 1878 static const struct edp_panel_entry edp_panels[] = { 1879 1879 EDP_PANEL_ENTRY('A', 'U', 'O', 0x1062, &delay_200_500_e50, "B120XAN01.0"), 1880 + EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"), 1880 1881 EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01"), 1881 1882 EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"), 1882 1883 EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"), ··· 1889 1888 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"), 1890 1889 1891 1890 EDP_PANEL_ENTRY('C', 'M', 'N', 0x114c, &innolux_n116bca_ea1.delay, "N116BCA-EA1"), 1891 + EDP_PANEL_ENTRY('C', 'M', 'N', 0x1247, &delay_200_500_e80_d50, "N120ACA-EA1"), 1892 1892 1893 1893 EDP_PANEL_ENTRY('I', 'V', 'O', 0x057d, &delay_200_500_e200, "R140NWF5 RH"), 1894 + EDP_PANEL_ENTRY('I', 'V', 'O', 0x854b, &delay_200_500_p2e100, "M133NW4J-R3"), 1894 1895 1895 1896 EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"), 1896 1897 EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"),
+25 -15
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 248 248 mmu_write(pfdev, MMU_INT_MASK, ~0); 249 249 } 250 250 251 - static size_t get_pgsize(u64 addr, size_t size) 251 + static size_t get_pgsize(u64 addr, size_t size, size_t *count) 252 252 { 253 - if (addr & (SZ_2M - 1) || size < SZ_2M) 254 - return SZ_4K; 253 + size_t blk_offset = -addr % SZ_2M; 255 254 255 + if (blk_offset || size < SZ_2M) { 256 + *count = min_not_zero(blk_offset, size) / SZ_4K; 257 + return SZ_4K; 258 + } 259 + *count = size / SZ_2M; 256 260 return SZ_2M; 257 261 } 258 262 ··· 291 287 dev_dbg(pfdev->dev, "map: as=%d, iova=%llx, paddr=%lx, len=%zx", mmu->as, iova, paddr, len); 292 288 293 289 while (len) { 294 - size_t pgsize = get_pgsize(iova | paddr, len); 290 + size_t pgcount, mapped = 0; 291 + size_t pgsize = get_pgsize(iova | paddr, len, &pgcount); 295 292 296 - ops->map(ops, iova, paddr, pgsize, prot, GFP_KERNEL); 297 - iova += pgsize; 298 - paddr += pgsize; 299 - len -= pgsize; 293 + ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, 294 + GFP_KERNEL, &mapped); 295 + /* Don't get stuck if things have gone wrong */ 296 + mapped = max(mapped, pgsize); 297 + iova += mapped; 298 + paddr += mapped; 299 + len -= mapped; 300 300 } 301 301 } 302 302 ··· 352 344 mapping->mmu->as, iova, len); 353 345 354 346 while (unmapped_len < len) { 355 - size_t unmapped_page; 356 - size_t pgsize = get_pgsize(iova, len - unmapped_len); 347 + size_t unmapped_page, pgcount; 348 + size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount); 357 349 358 - if (ops->iova_to_phys(ops, iova)) { 359 - unmapped_page = ops->unmap(ops, iova, pgsize, NULL); 360 - WARN_ON(unmapped_page != pgsize); 350 + if (bo->is_heap) 351 + pgcount = 1; 352 + if (!bo->is_heap || ops->iova_to_phys(ops, iova)) { 353 + unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL); 354 + WARN_ON(unmapped_page != pgsize * pgcount); 361 355 } 362 - iova += pgsize; 363 - unmapped_len += pgsize; 356 + iova += pgsize * pgcount; 357 + unmapped_len += pgsize * pgcount; 364 358 } 365 359 366 360 panfrost_mmu_flush_range(pfdev, mapping->mmu,
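A worked example of the new get_pgsize() splitting, with values chosen purely for illustration: mapping len = 0x3ff000 at iova = 0x201000.

/*
 * pass 1: blk_offset = -0x201000 % SZ_2M = 0x1ff000, which is non-zero,
 *         so get_pgsize() returns SZ_4K with *count = 0x1ff000 / SZ_4K
 *         = 511 pages; mapping them advances iova to the 2 MiB boundary
 *         at 0x400000 and leaves len = 0x200000.
 * pass 2: blk_offset = 0 and size == SZ_2M, so get_pgsize() returns
 *         SZ_2M with *count = 1, and the remainder maps as one block.
 */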
+2 -1
drivers/gpu/drm/qxl/qxl_drv.c
··· 194 194 qdev->ram_header->int_mask = QXL_INTERRUPT_MASK; 195 195 if (!thaw) { 196 196 qxl_reinit_memslots(qdev); 197 - qxl_ring_init_hdr(qdev->release_ring); 198 197 } 199 198 200 199 qxl_create_monitors_object(qdev); ··· 219 220 { 220 221 struct pci_dev *pdev = to_pci_dev(dev); 221 222 struct drm_device *drm_dev = pci_get_drvdata(pdev); 223 + struct qxl_device *qdev = to_qxl(drm_dev); 222 224 223 225 pci_set_power_state(pdev, PCI_D0); 224 226 pci_restore_state(pdev); ··· 227 227 return -EIO; 228 228 } 229 229 230 + qxl_io_reset(qdev); 230 231 return qxl_drm_resume(drm_dev, false); 231 232 } 232 233
+1 -1
drivers/gpu/drm/radeon/Makefile
··· 49 49 rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \ 50 50 trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \ 51 51 ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \ 52 - radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o 52 + radeon_sync.o radeon_audio.o radeon_dp_auxch.o 53 53 54 54 radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o 55 55
+1 -10
drivers/gpu/drm/radeon/atombios_crtc.c
··· 617 617 } 618 618 } 619 619 620 - if (radeon_encoder->is_mst_encoder) { 621 - struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv; 622 - struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv; 623 - 624 - dp_clock = dig_connector->dp_clock; 625 - } 626 - 627 620 /* use recommended ref_div for ss */ 628 621 if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { 629 622 if (radeon_crtc->ss_enabled) { ··· 965 972 radeon_crtc->bpc = 8; 966 973 radeon_crtc->ss_enabled = false; 967 974 968 - if (radeon_encoder->is_mst_encoder) { 969 - radeon_dp_mst_prepare_pll(crtc, mode); 970 - } else if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || 975 + if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || 971 976 (radeon_encoder_get_dp_bridge_encoder_id(radeon_crtc->encoder) != ENCODER_OBJECT_ID_NONE)) { 972 977 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 973 978 struct drm_connector *connector =
-59
drivers/gpu/drm/radeon/atombios_encoders.c
··· 667 667 struct drm_connector *connector; 668 668 struct radeon_connector *radeon_connector; 669 669 struct radeon_connector_atom_dig *dig_connector; 670 - struct radeon_encoder_atom_dig *dig_enc; 671 670 672 - if (radeon_encoder_is_digital(encoder)) { 673 - dig_enc = radeon_encoder->enc_priv; 674 - if (dig_enc->active_mst_links) 675 - return ATOM_ENCODER_MODE_DP_MST; 676 - } 677 - if (radeon_encoder->is_mst_encoder || radeon_encoder->offset) 678 - return ATOM_ENCODER_MODE_DP_MST; 679 671 /* dp bridges are always DP */ 680 672 if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE) 681 673 return ATOM_ENCODER_MODE_DP; ··· 1715 1723 case DRM_MODE_DPMS_SUSPEND: 1716 1724 case DRM_MODE_DPMS_OFF: 1717 1725 1718 - /* don't power off encoders with active MST links */ 1719 - if (dig->active_mst_links) 1720 - return; 1721 - 1722 1726 if (ASIC_IS_DCE4(rdev)) { 1723 1727 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) 1724 1728 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); ··· 1978 1990 1979 1991 /* update scratch regs with new routing */ 1980 1992 radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id); 1981 - } 1982 - 1983 - void 1984 - atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, int fe) 1985 - { 1986 - struct drm_device *dev = encoder->dev; 1987 - struct radeon_device *rdev = dev->dev_private; 1988 - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); 1989 - int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source); 1990 - uint8_t frev, crev; 1991 - union crtc_source_param args; 1992 - 1993 - memset(&args, 0, sizeof(args)); 1994 - 1995 - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 1996 - return; 1997 - 1998 - if (frev != 1 && crev != 2) 1999 - DRM_ERROR("Unknown table for MST %d, %d\n", frev, crev); 2000 - 2001 - args.v2.ucCRTC = radeon_crtc->crtc_id; 2002 - args.v2.ucEncodeMode = ATOM_ENCODER_MODE_DP_MST; 2003 - 2004 - switch (fe) { 2005 - case 0: 2006 - args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID; 2007 - break; 2008 - case 1: 2009 - args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID; 2010 - break; 2011 - case 2: 2012 - args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID; 2013 - break; 2014 - case 3: 2015 - args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID; 2016 - break; 2017 - case 4: 2018 - args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID; 2019 - break; 2020 - case 5: 2021 - args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID; 2022 - break; 2023 - case 6: 2024 - args.v2.ucEncoderID = ASIC_INT_DIG7_ENCODER_ID; 2025 - break; 2026 - } 2027 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 2028 1993 } 2029 1994 2030 1995 static void
-2
drivers/gpu/drm/radeon/radeon_atombios.c
··· 826 826 } 827 827 828 828 radeon_link_encoder_connector(dev); 829 - 830 - radeon_setup_mst_connector(dev); 831 829 return true; 832 830 } 833 831
+3 -58
drivers/gpu/drm/radeon/radeon_connectors.c
··· 37 37 #include <linux/pm_runtime.h> 38 38 #include <linux/vga_switcheroo.h> 39 39 40 - static int radeon_dp_handle_hpd(struct drm_connector *connector) 41 - { 42 - struct radeon_connector *radeon_connector = to_radeon_connector(connector); 43 - int ret; 44 - 45 - ret = radeon_dp_mst_check_status(radeon_connector); 46 - if (ret == -EINVAL) 47 - return 1; 48 - return 0; 49 - } 50 40 void radeon_connector_hotplug(struct drm_connector *connector) 51 41 { 52 42 struct drm_device *dev = connector->dev; 53 43 struct radeon_device *rdev = dev->dev_private; 54 44 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 55 45 56 - if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) { 57 - struct radeon_connector_atom_dig *dig_connector = 58 - radeon_connector->con_priv; 59 - 60 - if (radeon_connector->is_mst_connector) 61 - return; 62 - if (dig_connector->is_mst) { 63 - radeon_dp_handle_hpd(connector); 64 - return; 65 - } 66 - } 67 46 /* bail if the connector does not have hpd pin, e.g., 68 47 * VGA, TV, etc. 69 48 */ ··· 1643 1664 struct drm_encoder *encoder = radeon_best_single_encoder(connector); 1644 1665 int r; 1645 1666 1646 - if (radeon_dig_connector->is_mst) 1647 - return connector_status_disconnected; 1648 - 1649 1667 if (!drm_kms_helper_is_poll_worker()) { 1650 1668 r = pm_runtime_get_sync(connector->dev->dev); 1651 1669 if (r < 0) { ··· 1705 1729 radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector); 1706 1730 if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) { 1707 1731 ret = connector_status_connected; 1708 - if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) { 1732 + if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) 1709 1733 radeon_dp_getdpcd(radeon_connector); 1710 - r = radeon_dp_mst_probe(radeon_connector); 1711 - if (r == 1) 1712 - ret = connector_status_disconnected; 1713 - } 1714 1734 } else { 1715 1735 if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) { 1716 - if (radeon_dp_getdpcd(radeon_connector)) { 1717 - r = radeon_dp_mst_probe(radeon_connector); 1718 - if (r == 1) 1719 - ret = connector_status_disconnected; 1720 - else 1721 - ret = connector_status_connected; 1722 - } 1736 + if (radeon_dp_getdpcd(radeon_connector)) 1737 + ret = connector_status_connected; 1723 1738 } else { 1724 1739 /* try non-aux ddc (DP to DVI/HDMI/etc. adapter) */ 1725 1740 if (radeon_ddc_probe(radeon_connector, false)) ··· 2527 2560 2528 2561 connector->display_info.subpixel_order = subpixel_order; 2529 2562 drm_connector_register(connector); 2530 - } 2531 - 2532 - void radeon_setup_mst_connector(struct drm_device *dev) 2533 - { 2534 - struct radeon_device *rdev = dev->dev_private; 2535 - struct drm_connector *connector; 2536 - struct radeon_connector *radeon_connector; 2537 - 2538 - if (!ASIC_IS_DCE5(rdev)) 2539 - return; 2540 - 2541 - if (radeon_mst == 0) 2542 - return; 2543 - 2544 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 2545 - radeon_connector = to_radeon_connector(connector); 2546 - 2547 - if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort) 2548 - continue; 2549 - 2550 - radeon_dp_mst_init(radeon_connector); 2551 - } 2552 2563 }
-1
drivers/gpu/drm/radeon/radeon_device.c
··· 1438 1438 goto failed; 1439 1439 1440 1440 radeon_gem_debugfs_init(rdev); 1441 - radeon_mst_debugfs_init(rdev); 1442 1441 1443 1442 if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) { 1444 1443 /* Acceleration not working on AGP card try again
-778
drivers/gpu/drm/radeon/radeon_dp_mst.c
··· 1 - // SPDX-License-Identifier: MIT 2 - 3 - #include <drm/display/drm_dp_mst_helper.h> 4 - #include <drm/drm_fb_helper.h> 5 - #include <drm/drm_file.h> 6 - #include <drm/drm_probe_helper.h> 7 - 8 - #include "atom.h" 9 - #include "ni_reg.h" 10 - #include "radeon.h" 11 - 12 - static struct radeon_encoder *radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector); 13 - 14 - static int radeon_atom_set_enc_offset(int id) 15 - { 16 - static const int offsets[] = { EVERGREEN_CRTC0_REGISTER_OFFSET, 17 - EVERGREEN_CRTC1_REGISTER_OFFSET, 18 - EVERGREEN_CRTC2_REGISTER_OFFSET, 19 - EVERGREEN_CRTC3_REGISTER_OFFSET, 20 - EVERGREEN_CRTC4_REGISTER_OFFSET, 21 - EVERGREEN_CRTC5_REGISTER_OFFSET, 22 - 0x13830 - 0x7030 }; 23 - 24 - return offsets[id]; 25 - } 26 - 27 - static int radeon_dp_mst_set_be_cntl(struct radeon_encoder *primary, 28 - struct radeon_encoder_mst *mst_enc, 29 - enum radeon_hpd_id hpd, bool enable) 30 - { 31 - struct drm_device *dev = primary->base.dev; 32 - struct radeon_device *rdev = dev->dev_private; 33 - uint32_t reg; 34 - int retries = 0; 35 - uint32_t temp; 36 - 37 - reg = RREG32(NI_DIG_BE_CNTL + primary->offset); 38 - 39 - /* set MST mode */ 40 - reg &= ~NI_DIG_FE_DIG_MODE(7); 41 - reg |= NI_DIG_FE_DIG_MODE(NI_DIG_MODE_DP_MST); 42 - 43 - if (enable) 44 - reg |= NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe); 45 - else 46 - reg &= ~NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe); 47 - 48 - reg |= NI_DIG_HPD_SELECT(hpd); 49 - DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DIG_BE_CNTL + primary->offset, reg); 50 - WREG32(NI_DIG_BE_CNTL + primary->offset, reg); 51 - 52 - if (enable) { 53 - uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe); 54 - 55 - do { 56 - temp = RREG32(NI_DIG_FE_CNTL + offset); 57 - } while ((temp & NI_DIG_SYMCLK_FE_ON) && retries++ < 10000); 58 - if (retries == 10000) 59 - DRM_ERROR("timed out waiting for FE %d %d\n", primary->offset, mst_enc->fe); 60 - } 61 - return 0; 62 - } 63 - 64 - static int radeon_dp_mst_set_stream_attrib(struct radeon_encoder *primary, 65 - int stream_number, 66 - int fe, 67 - int slots) 68 - { 69 - struct drm_device *dev = primary->base.dev; 70 - struct radeon_device *rdev = dev->dev_private; 71 - u32 temp, val; 72 - int retries = 0; 73 - int satreg, satidx; 74 - 75 - satreg = stream_number >> 1; 76 - satidx = stream_number & 1; 77 - 78 - temp = RREG32(NI_DP_MSE_SAT0 + satreg + primary->offset); 79 - 80 - val = NI_DP_MSE_SAT_SLOT_COUNT0(slots) | NI_DP_MSE_SAT_SRC0(fe); 81 - 82 - val <<= (16 * satidx); 83 - 84 - temp &= ~(0xffff << (16 * satidx)); 85 - 86 - temp |= val; 87 - 88 - DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DP_MSE_SAT0 + satreg + primary->offset, temp); 89 - WREG32(NI_DP_MSE_SAT0 + satreg + primary->offset, temp); 90 - 91 - WREG32(NI_DP_MSE_SAT_UPDATE + primary->offset, 1); 92 - 93 - do { 94 - unsigned value1, value2; 95 - udelay(10); 96 - temp = RREG32(NI_DP_MSE_SAT_UPDATE + primary->offset); 97 - 98 - value1 = temp & NI_DP_MSE_SAT_UPDATE_MASK; 99 - value2 = temp & NI_DP_MSE_16_MTP_KEEPOUT; 100 - 101 - if (!value1 && !value2) 102 - break; 103 - } while (retries++ < 50); 104 - 105 - if (retries == 10000) 106 - DRM_ERROR("timed out waitin for SAT update %d\n", primary->offset); 107 - 108 - /* MTP 16 ? 
*/ 109 - return 0; 110 - } 111 - 112 - static int radeon_dp_mst_update_stream_attribs(struct radeon_connector *mst_conn, 113 - struct radeon_encoder *primary) 114 - { 115 - struct drm_device *dev = mst_conn->base.dev; 116 - struct stream_attribs new_attribs[6]; 117 - int i; 118 - int idx = 0; 119 - struct radeon_connector *radeon_connector; 120 - struct drm_connector *connector; 121 - 122 - memset(new_attribs, 0, sizeof(new_attribs)); 123 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 124 - struct radeon_encoder *subenc; 125 - struct radeon_encoder_mst *mst_enc; 126 - 127 - radeon_connector = to_radeon_connector(connector); 128 - if (!radeon_connector->is_mst_connector) 129 - continue; 130 - 131 - if (radeon_connector->mst_port != mst_conn) 132 - continue; 133 - 134 - subenc = radeon_connector->mst_encoder; 135 - mst_enc = subenc->enc_priv; 136 - 137 - if (!mst_enc->enc_active) 138 - continue; 139 - 140 - new_attribs[idx].fe = mst_enc->fe; 141 - new_attribs[idx].slots = drm_dp_mst_get_vcpi_slots(&mst_conn->mst_mgr, mst_enc->port); 142 - idx++; 143 - } 144 - 145 - for (i = 0; i < idx; i++) { 146 - if (new_attribs[i].fe != mst_conn->cur_stream_attribs[i].fe || 147 - new_attribs[i].slots != mst_conn->cur_stream_attribs[i].slots) { 148 - radeon_dp_mst_set_stream_attrib(primary, i, new_attribs[i].fe, new_attribs[i].slots); 149 - mst_conn->cur_stream_attribs[i].fe = new_attribs[i].fe; 150 - mst_conn->cur_stream_attribs[i].slots = new_attribs[i].slots; 151 - } 152 - } 153 - 154 - for (i = idx; i < mst_conn->enabled_attribs; i++) { 155 - radeon_dp_mst_set_stream_attrib(primary, i, 0, 0); 156 - mst_conn->cur_stream_attribs[i].fe = 0; 157 - mst_conn->cur_stream_attribs[i].slots = 0; 158 - } 159 - mst_conn->enabled_attribs = idx; 160 - return 0; 161 - } 162 - 163 - static int radeon_dp_mst_set_vcp_size(struct radeon_encoder *mst, s64 avg_time_slots_per_mtp) 164 - { 165 - struct drm_device *dev = mst->base.dev; 166 - struct radeon_device *rdev = dev->dev_private; 167 - struct radeon_encoder_mst *mst_enc = mst->enc_priv; 168 - uint32_t val, temp; 169 - uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe); 170 - int retries = 0; 171 - uint32_t x = drm_fixp2int(avg_time_slots_per_mtp); 172 - uint32_t y = drm_fixp2int_ceil((avg_time_slots_per_mtp - x) << 26); 173 - 174 - val = NI_DP_MSE_RATE_X(x) | NI_DP_MSE_RATE_Y(y); 175 - 176 - WREG32(NI_DP_MSE_RATE_CNTL + offset, val); 177 - 178 - do { 179 - temp = RREG32(NI_DP_MSE_RATE_UPDATE + offset); 180 - udelay(10); 181 - } while ((temp & 0x1) && (retries++ < 10000)); 182 - 183 - if (retries >= 10000) 184 - DRM_ERROR("timed out wait for rate cntl %d\n", mst_enc->fe); 185 - return 0; 186 - } 187 - 188 - static int radeon_dp_mst_get_ddc_modes(struct drm_connector *connector) 189 - { 190 - struct radeon_connector *radeon_connector = to_radeon_connector(connector); 191 - struct radeon_connector *master = radeon_connector->mst_port; 192 - struct edid *edid; 193 - int ret = 0; 194 - 195 - edid = drm_dp_mst_get_edid(connector, &master->mst_mgr, radeon_connector->port); 196 - radeon_connector->edid = edid; 197 - DRM_DEBUG_KMS("edid retrieved %p\n", edid); 198 - if (radeon_connector->edid) { 199 - drm_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); 200 - ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); 201 - return ret; 202 - } 203 - drm_connector_update_edid_property(&radeon_connector->base, NULL); 204 - 205 - return ret; 206 - } 207 - 208 - static int 
radeon_dp_mst_get_modes(struct drm_connector *connector) 209 - { 210 - return radeon_dp_mst_get_ddc_modes(connector); 211 - } 212 - 213 - static enum drm_mode_status 214 - radeon_dp_mst_mode_valid(struct drm_connector *connector, 215 - struct drm_display_mode *mode) 216 - { 217 - /* TODO - validate mode against available PBN for link */ 218 - if (mode->clock < 10000) 219 - return MODE_CLOCK_LOW; 220 - 221 - if (mode->flags & DRM_MODE_FLAG_DBLCLK) 222 - return MODE_H_ILLEGAL; 223 - 224 - return MODE_OK; 225 - } 226 - 227 - static struct 228 - drm_encoder *radeon_mst_best_encoder(struct drm_connector *connector) 229 - { 230 - struct radeon_connector *radeon_connector = to_radeon_connector(connector); 231 - 232 - return &radeon_connector->mst_encoder->base; 233 - } 234 - 235 - static int 236 - radeon_dp_mst_detect(struct drm_connector *connector, 237 - struct drm_modeset_acquire_ctx *ctx, 238 - bool force) 239 - { 240 - struct radeon_connector *radeon_connector = 241 - to_radeon_connector(connector); 242 - struct radeon_connector *master = radeon_connector->mst_port; 243 - 244 - if (drm_connector_is_unregistered(connector)) 245 - return connector_status_disconnected; 246 - 247 - return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr, 248 - radeon_connector->port); 249 - } 250 - 251 - static const struct drm_connector_helper_funcs radeon_dp_mst_connector_helper_funcs = { 252 - .get_modes = radeon_dp_mst_get_modes, 253 - .mode_valid = radeon_dp_mst_mode_valid, 254 - .best_encoder = radeon_mst_best_encoder, 255 - .detect_ctx = radeon_dp_mst_detect, 256 - }; 257 - 258 - static void 259 - radeon_dp_mst_connector_destroy(struct drm_connector *connector) 260 - { 261 - struct radeon_connector *radeon_connector = to_radeon_connector(connector); 262 - struct radeon_encoder *radeon_encoder = radeon_connector->mst_encoder; 263 - 264 - drm_encoder_cleanup(&radeon_encoder->base); 265 - kfree(radeon_encoder); 266 - drm_connector_cleanup(connector); 267 - kfree(radeon_connector); 268 - } 269 - 270 - static const struct drm_connector_funcs radeon_dp_mst_connector_funcs = { 271 - .dpms = drm_helper_connector_dpms, 272 - .fill_modes = drm_helper_probe_single_connector_modes, 273 - .destroy = radeon_dp_mst_connector_destroy, 274 - }; 275 - 276 - static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr, 277 - struct drm_dp_mst_port *port, 278 - const char *pathprop) 279 - { 280 - struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr); 281 - struct drm_device *dev = master->base.dev; 282 - struct radeon_connector *radeon_connector; 283 - struct drm_connector *connector; 284 - 285 - radeon_connector = kzalloc(sizeof(*radeon_connector), GFP_KERNEL); 286 - if (!radeon_connector) 287 - return NULL; 288 - 289 - radeon_connector->is_mst_connector = true; 290 - connector = &radeon_connector->base; 291 - radeon_connector->port = port; 292 - radeon_connector->mst_port = master; 293 - DRM_DEBUG_KMS("\n"); 294 - 295 - drm_connector_init(dev, connector, &radeon_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort); 296 - drm_connector_helper_add(connector, &radeon_dp_mst_connector_helper_funcs); 297 - radeon_connector->mst_encoder = radeon_dp_create_fake_mst_encoder(master); 298 - 299 - drm_object_attach_property(&connector->base, dev->mode_config.path_property, 0); 300 - drm_object_attach_property(&connector->base, dev->mode_config.tile_property, 0); 301 - drm_connector_set_path_property(connector, pathprop); 302 - 303 - return connector; 304 - } 
305 - 306 - static const struct drm_dp_mst_topology_cbs mst_cbs = { 307 - .add_connector = radeon_dp_add_mst_connector, 308 - }; 309 - 310 - static struct 311 - radeon_connector *radeon_mst_find_connector(struct drm_encoder *encoder) 312 - { 313 - struct drm_device *dev = encoder->dev; 314 - struct drm_connector *connector; 315 - 316 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 317 - struct radeon_connector *radeon_connector = to_radeon_connector(connector); 318 - if (!connector->encoder) 319 - continue; 320 - if (!radeon_connector->is_mst_connector) 321 - continue; 322 - 323 - DRM_DEBUG_KMS("checking %p vs %p\n", connector->encoder, encoder); 324 - if (connector->encoder == encoder) 325 - return radeon_connector; 326 - } 327 - return NULL; 328 - } 329 - 330 - void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode) 331 - { 332 - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 333 - struct drm_device *dev = crtc->dev; 334 - struct radeon_device *rdev = dev->dev_private; 335 - struct radeon_encoder *radeon_encoder = to_radeon_encoder(radeon_crtc->encoder); 336 - struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv; 337 - struct radeon_connector *radeon_connector = radeon_mst_find_connector(&radeon_encoder->base); 338 - int dp_clock; 339 - struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv; 340 - 341 - if (radeon_connector) { 342 - radeon_connector->pixelclock_for_modeset = mode->clock; 343 - if (radeon_connector->base.display_info.bpc) 344 - radeon_crtc->bpc = radeon_connector->base.display_info.bpc; 345 - else 346 - radeon_crtc->bpc = 8; 347 - } 348 - 349 - DRM_DEBUG_KMS("dp_clock %p %d\n", dig_connector, dig_connector->dp_clock); 350 - dp_clock = dig_connector->dp_clock; 351 - radeon_crtc->ss_enabled = 352 - radeon_atombios_get_asic_ss_info(rdev, &radeon_crtc->ss, 353 - ASIC_INTERNAL_SS_ON_DP, 354 - dp_clock); 355 - } 356 - 357 - static void 358 - radeon_mst_encoder_dpms(struct drm_encoder *encoder, int mode) 359 - { 360 - struct drm_device *dev = encoder->dev; 361 - struct radeon_device *rdev = dev->dev_private; 362 - struct radeon_encoder *radeon_encoder, *primary; 363 - struct radeon_encoder_mst *mst_enc; 364 - struct radeon_encoder_atom_dig *dig_enc; 365 - struct radeon_connector *radeon_connector; 366 - struct drm_crtc *crtc; 367 - struct radeon_crtc *radeon_crtc; 368 - int slots; 369 - s64 fixed_pbn, fixed_pbn_per_slot, avg_time_slots_per_mtp; 370 - if (!ASIC_IS_DCE5(rdev)) { 371 - DRM_ERROR("got mst dpms on non-DCE5\n"); 372 - return; 373 - } 374 - 375 - radeon_connector = radeon_mst_find_connector(encoder); 376 - if (!radeon_connector) 377 - return; 378 - 379 - radeon_encoder = to_radeon_encoder(encoder); 380 - 381 - mst_enc = radeon_encoder->enc_priv; 382 - 383 - primary = mst_enc->primary; 384 - 385 - dig_enc = primary->enc_priv; 386 - 387 - crtc = encoder->crtc; 388 - DRM_DEBUG_KMS("got connector %d\n", dig_enc->active_mst_links); 389 - 390 - switch (mode) { 391 - case DRM_MODE_DPMS_ON: 392 - dig_enc->active_mst_links++; 393 - 394 - radeon_crtc = to_radeon_crtc(crtc); 395 - 396 - if (dig_enc->active_mst_links == 1) { 397 - mst_enc->fe = dig_enc->dig_encoder; 398 - mst_enc->fe_from_be = true; 399 - atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe); 400 - 401 - atombios_dig_encoder_setup(&primary->base, ATOM_ENCODER_CMD_SETUP, 0); 402 - atombios_dig_transmitter_setup2(&primary->base, ATOM_TRANSMITTER_ACTION_ENABLE, 403 - 0, 0, dig_enc->dig_encoder); 404 - 405 - if 
(radeon_dp_needs_link_train(mst_enc->connector) || 406 - dig_enc->active_mst_links == 1) { 407 - radeon_dp_link_train(&primary->base, &mst_enc->connector->base); 408 - } 409 - 410 - } else { 411 - mst_enc->fe = radeon_atom_pick_dig_encoder(encoder, radeon_crtc->crtc_id); 412 - if (mst_enc->fe == -1) 413 - DRM_ERROR("failed to get frontend for dig encoder\n"); 414 - mst_enc->fe_from_be = false; 415 - atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe); 416 - } 417 - 418 - DRM_DEBUG_KMS("dig encoder is %d %d %d\n", dig_enc->dig_encoder, 419 - dig_enc->linkb, radeon_crtc->crtc_id); 420 - 421 - slots = drm_dp_find_vcpi_slots(&radeon_connector->mst_port->mst_mgr, 422 - mst_enc->pbn); 423 - drm_dp_mst_allocate_vcpi(&radeon_connector->mst_port->mst_mgr, 424 - radeon_connector->port, 425 - mst_enc->pbn, slots); 426 - drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1); 427 - 428 - radeon_dp_mst_set_be_cntl(primary, mst_enc, 429 - radeon_connector->mst_port->hpd.hpd, true); 430 - 431 - mst_enc->enc_active = true; 432 - radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary); 433 - 434 - fixed_pbn = drm_int2fixp(mst_enc->pbn); 435 - fixed_pbn_per_slot = drm_int2fixp(radeon_connector->mst_port->mst_mgr.pbn_div); 436 - avg_time_slots_per_mtp = drm_fixp_div(fixed_pbn, fixed_pbn_per_slot); 437 - radeon_dp_mst_set_vcp_size(radeon_encoder, avg_time_slots_per_mtp); 438 - 439 - atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0, 440 - mst_enc->fe); 441 - drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr); 442 - 443 - drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr); 444 - 445 - break; 446 - case DRM_MODE_DPMS_STANDBY: 447 - case DRM_MODE_DPMS_SUSPEND: 448 - case DRM_MODE_DPMS_OFF: 449 - DRM_ERROR("DPMS OFF %d\n", dig_enc->active_mst_links); 450 - 451 - if (!mst_enc->enc_active) 452 - return; 453 - 454 - drm_dp_mst_reset_vcpi_slots(&radeon_connector->mst_port->mst_mgr, mst_enc->port); 455 - drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1); 456 - 457 - drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr); 458 - /* and this can also fail */ 459 - drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr); 460 - 461 - drm_dp_mst_deallocate_vcpi(&radeon_connector->mst_port->mst_mgr, mst_enc->port); 462 - 463 - mst_enc->enc_active = false; 464 - radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary); 465 - 466 - radeon_dp_mst_set_be_cntl(primary, mst_enc, 467 - radeon_connector->mst_port->hpd.hpd, false); 468 - atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0, 469 - mst_enc->fe); 470 - 471 - if (!mst_enc->fe_from_be) 472 - radeon_atom_release_dig_encoder(rdev, mst_enc->fe); 473 - 474 - mst_enc->fe_from_be = false; 475 - dig_enc->active_mst_links--; 476 - if (dig_enc->active_mst_links == 0) { 477 - /* drop link */ 478 - } 479 - 480 - break; 481 - } 482 - 483 - } 484 - 485 - static bool radeon_mst_mode_fixup(struct drm_encoder *encoder, 486 - const struct drm_display_mode *mode, 487 - struct drm_display_mode *adjusted_mode) 488 - { 489 - struct radeon_encoder_mst *mst_enc; 490 - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 491 - struct radeon_connector_atom_dig *dig_connector; 492 - int bpp = 24; 493 - 494 - mst_enc = radeon_encoder->enc_priv; 495 - 496 - mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp, false); 497 - 498 - mst_enc->primary->active_device = mst_enc->primary->devices & 
mst_enc->connector->devices; 499 - DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n", 500 - mst_enc->primary->active_device, mst_enc->primary->devices, 501 - mst_enc->connector->devices, mst_enc->primary->base.encoder_type); 502 - 503 - 504 - drm_mode_set_crtcinfo(adjusted_mode, 0); 505 - dig_connector = mst_enc->connector->con_priv; 506 - dig_connector->dp_lane_count = drm_dp_max_lane_count(dig_connector->dpcd); 507 - dig_connector->dp_clock = drm_dp_max_link_rate(dig_connector->dpcd); 508 - DRM_DEBUG_KMS("dig clock %p %d %d\n", dig_connector, 509 - dig_connector->dp_lane_count, dig_connector->dp_clock); 510 - return true; 511 - } 512 - 513 - static void radeon_mst_encoder_prepare(struct drm_encoder *encoder) 514 - { 515 - struct radeon_connector *radeon_connector; 516 - struct radeon_encoder *radeon_encoder, *primary; 517 - struct radeon_encoder_mst *mst_enc; 518 - struct radeon_encoder_atom_dig *dig_enc; 519 - 520 - radeon_connector = radeon_mst_find_connector(encoder); 521 - if (!radeon_connector) { 522 - DRM_DEBUG_KMS("failed to find connector %p\n", encoder); 523 - return; 524 - } 525 - radeon_encoder = to_radeon_encoder(encoder); 526 - 527 - radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); 528 - 529 - mst_enc = radeon_encoder->enc_priv; 530 - 531 - primary = mst_enc->primary; 532 - 533 - dig_enc = primary->enc_priv; 534 - 535 - mst_enc->port = radeon_connector->port; 536 - 537 - if (dig_enc->dig_encoder == -1) { 538 - dig_enc->dig_encoder = radeon_atom_pick_dig_encoder(&primary->base, -1); 539 - primary->offset = radeon_atom_set_enc_offset(dig_enc->dig_encoder); 540 - atombios_set_mst_encoder_crtc_source(encoder, dig_enc->dig_encoder); 541 - 542 - 543 - } 544 - DRM_DEBUG_KMS("%d %d\n", dig_enc->dig_encoder, primary->offset); 545 - } 546 - 547 - static void 548 - radeon_mst_encoder_mode_set(struct drm_encoder *encoder, 549 - struct drm_display_mode *mode, 550 - struct drm_display_mode *adjusted_mode) 551 - { 552 - DRM_DEBUG_KMS("\n"); 553 - } 554 - 555 - static void radeon_mst_encoder_commit(struct drm_encoder *encoder) 556 - { 557 - radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_ON); 558 - DRM_DEBUG_KMS("\n"); 559 - } 560 - 561 - static const struct drm_encoder_helper_funcs radeon_mst_helper_funcs = { 562 - .dpms = radeon_mst_encoder_dpms, 563 - .mode_fixup = radeon_mst_mode_fixup, 564 - .prepare = radeon_mst_encoder_prepare, 565 - .mode_set = radeon_mst_encoder_mode_set, 566 - .commit = radeon_mst_encoder_commit, 567 - }; 568 - 569 - static void radeon_dp_mst_encoder_destroy(struct drm_encoder *encoder) 570 - { 571 - drm_encoder_cleanup(encoder); 572 - kfree(encoder); 573 - } 574 - 575 - static const struct drm_encoder_funcs radeon_dp_mst_enc_funcs = { 576 - .destroy = radeon_dp_mst_encoder_destroy, 577 - }; 578 - 579 - static struct radeon_encoder * 580 - radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector) 581 - { 582 - struct drm_device *dev = connector->base.dev; 583 - struct radeon_device *rdev = dev->dev_private; 584 - struct radeon_encoder *radeon_encoder; 585 - struct radeon_encoder_mst *mst_enc; 586 - struct drm_encoder *encoder; 587 - const struct drm_connector_helper_funcs *connector_funcs = connector->base.helper_private; 588 - struct drm_encoder *enc_master = connector_funcs->best_encoder(&connector->base); 589 - 590 - DRM_DEBUG_KMS("enc master is %p\n", enc_master); 591 - radeon_encoder = kzalloc(sizeof(*radeon_encoder), GFP_KERNEL); 592 - if (!radeon_encoder) 593 - return NULL; 594 - 595 - radeon_encoder->enc_priv = 
kzalloc(sizeof(*mst_enc), GFP_KERNEL); 596 - if (!radeon_encoder->enc_priv) { 597 - kfree(radeon_encoder); 598 - return NULL; 599 - } 600 - encoder = &radeon_encoder->base; 601 - switch (rdev->num_crtc) { 602 - case 1: 603 - encoder->possible_crtcs = 0x1; 604 - break; 605 - case 2: 606 - default: 607 - encoder->possible_crtcs = 0x3; 608 - break; 609 - case 4: 610 - encoder->possible_crtcs = 0xf; 611 - break; 612 - case 6: 613 - encoder->possible_crtcs = 0x3f; 614 - break; 615 - } 616 - 617 - drm_encoder_init(dev, &radeon_encoder->base, &radeon_dp_mst_enc_funcs, 618 - DRM_MODE_ENCODER_DPMST, NULL); 619 - drm_encoder_helper_add(encoder, &radeon_mst_helper_funcs); 620 - 621 - mst_enc = radeon_encoder->enc_priv; 622 - mst_enc->connector = connector; 623 - mst_enc->primary = to_radeon_encoder(enc_master); 624 - radeon_encoder->is_mst_encoder = true; 625 - return radeon_encoder; 626 - } 627 - 628 - int 629 - radeon_dp_mst_init(struct radeon_connector *radeon_connector) 630 - { 631 - struct drm_device *dev = radeon_connector->base.dev; 632 - int max_link_rate; 633 - 634 - if (!radeon_connector->ddc_bus->has_aux) 635 - return 0; 636 - 637 - if (radeon_connector_is_dp12_capable(&radeon_connector->base)) 638 - max_link_rate = 0x14; 639 - else 640 - max_link_rate = 0x0a; 641 - 642 - radeon_connector->mst_mgr.cbs = &mst_cbs; 643 - return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev, 644 - &radeon_connector->ddc_bus->aux, 16, 6, 645 - 4, drm_dp_bw_code_to_link_rate(max_link_rate), 646 - radeon_connector->base.base.id); 647 - } 648 - 649 - int 650 - radeon_dp_mst_probe(struct radeon_connector *radeon_connector) 651 - { 652 - struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; 653 - struct drm_device *dev = radeon_connector->base.dev; 654 - struct radeon_device *rdev = dev->dev_private; 655 - int ret; 656 - u8 msg[1]; 657 - 658 - if (!radeon_mst) 659 - return 0; 660 - 661 - if (!ASIC_IS_DCE5(rdev)) 662 - return 0; 663 - 664 - if (dig_connector->dpcd[DP_DPCD_REV] < 0x12) 665 - return 0; 666 - 667 - ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_MSTM_CAP, msg, 668 - 1); 669 - if (ret) { 670 - if (msg[0] & DP_MST_CAP) { 671 - DRM_DEBUG_KMS("Sink is MST capable\n"); 672 - dig_connector->is_mst = true; 673 - } else { 674 - DRM_DEBUG_KMS("Sink is not MST capable\n"); 675 - dig_connector->is_mst = false; 676 - } 677 - 678 - } 679 - drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr, 680 - dig_connector->is_mst); 681 - return dig_connector->is_mst; 682 - } 683 - 684 - int 685 - radeon_dp_mst_check_status(struct radeon_connector *radeon_connector) 686 - { 687 - struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; 688 - int retry; 689 - 690 - if (dig_connector->is_mst) { 691 - u8 esi[16] = { 0 }; 692 - int dret; 693 - int ret = 0; 694 - bool handled; 695 - 696 - dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, 697 - DP_SINK_COUNT_ESI, esi, 8); 698 - go_again: 699 - if (dret == 8) { 700 - DRM_DEBUG_KMS("got esi %3ph\n", esi); 701 - ret = drm_dp_mst_hpd_irq(&radeon_connector->mst_mgr, esi, &handled); 702 - 703 - if (handled) { 704 - for (retry = 0; retry < 3; retry++) { 705 - int wret; 706 - wret = drm_dp_dpcd_write(&radeon_connector->ddc_bus->aux, 707 - DP_SINK_COUNT_ESI + 1, &esi[1], 3); 708 - if (wret == 3) 709 - break; 710 - } 711 - 712 - dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, 713 - DP_SINK_COUNT_ESI, esi, 8); 714 - if (dret == 8) { 715 - DRM_DEBUG_KMS("got esi2 %3ph\n", esi); 716 - goto go_again; 717 
- } 718 - } else 719 - ret = 0; 720 - 721 - return ret; 722 - } else { 723 - DRM_DEBUG_KMS("failed to get ESI - device may have failed %d\n", ret); 724 - dig_connector->is_mst = false; 725 - drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr, 726 - dig_connector->is_mst); 727 - /* send a hotplug event */ 728 - } 729 - } 730 - return -EINVAL; 731 - } 732 - 733 - #if defined(CONFIG_DEBUG_FS) 734 - 735 - static int radeon_debugfs_mst_info_show(struct seq_file *m, void *unused) 736 - { 737 - struct radeon_device *rdev = (struct radeon_device *)m->private; 738 - struct drm_device *dev = rdev->ddev; 739 - struct drm_connector *connector; 740 - struct radeon_connector *radeon_connector; 741 - struct radeon_connector_atom_dig *dig_connector; 742 - int i; 743 - 744 - drm_modeset_lock_all(dev); 745 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 746 - if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort) 747 - continue; 748 - 749 - radeon_connector = to_radeon_connector(connector); 750 - dig_connector = radeon_connector->con_priv; 751 - if (radeon_connector->is_mst_connector) 752 - continue; 753 - if (!dig_connector->is_mst) 754 - continue; 755 - drm_dp_mst_dump_topology(m, &radeon_connector->mst_mgr); 756 - 757 - for (i = 0; i < radeon_connector->enabled_attribs; i++) 758 - seq_printf(m, "attrib %d: %d %d\n", i, 759 - radeon_connector->cur_stream_attribs[i].fe, 760 - radeon_connector->cur_stream_attribs[i].slots); 761 - } 762 - drm_modeset_unlock_all(dev); 763 - return 0; 764 - } 765 - 766 - DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_mst_info); 767 - #endif 768 - 769 - void radeon_mst_debugfs_init(struct radeon_device *rdev) 770 - { 771 - #if defined(CONFIG_DEBUG_FS) 772 - struct dentry *root = rdev->ddev->primary->debugfs_root; 773 - 774 - debugfs_create_file("radeon_mst_info", 0444, root, rdev, 775 - &radeon_debugfs_mst_info_fops); 776 - 777 - #endif 778 - }
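For reference, the capability probe at the heart of the removed radeon_dp_mst_probe() is a standard DPCD read over the AUX channel. A minimal sketch, not taken from this series; sketch_sink_is_mst() is a hypothetical helper and assumes the receiver caps were already cached:

#include <drm/display/drm_dp_helper.h>

static bool sketch_sink_is_mst(struct drm_dp_aux *aux,
                               const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
        u8 mstm_cap;

        /* MST requires DPCD revision 1.2 or newer. */
        if (dpcd[DP_DPCD_REV] < DP_DPCD_REV_12)
                return false;

        /* drm_dp_dpcd_readb() returns 1 when the single byte was read. */
        if (drm_dp_dpcd_readb(aux, DP_MSTM_CAP, &mstm_cap) != 1)
                return false;

        return mstm_cap & DP_MST_CAP;
}

Note that the removed code checked "if (ret)", i.e. it treated the positive byte count returned by drm_dp_dpcd_read() as its success condition; the sketch makes that explicit.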
-4
drivers/gpu/drm/radeon/radeon_drv.c
··· 172 172 int radeon_bapm = -1; 173 173 int radeon_backlight = -1; 174 174 int radeon_auxch = -1; 175 - int radeon_mst = 0; 176 175 int radeon_uvd = 1; 177 176 int radeon_vce = 1; 178 177 ··· 261 262 262 263 MODULE_PARM_DESC(auxch, "Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto)"); 263 264 module_param_named(auxch, radeon_auxch, int, 0444); 264 - 265 - MODULE_PARM_DESC(mst, "DisplayPort MST experimental support (1 = enable, 0 = disable)"); 266 - module_param_named(mst, radeon_mst, int, 0444); 267 265 268 266 MODULE_PARM_DESC(uvd, "uvd enable/disable uvd support (1 = enable, 0 = disable)"); 269 267 module_param_named(uvd, radeon_uvd, int, 0444);
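With the MST code gone, its module parameter goes too. For context, radeon's knobs all follow the same declaration pattern; a generic sketch with a placeholder name (sketchknob), not a real parameter:

#include <linux/module.h>

static int sketch_knob = -1;

/* 0444: readable via /sys/module/<mod>/parameters/, not writable. */
MODULE_PARM_DESC(sketchknob, "example knob (1 = enable, 0 = disable, -1 = auto)");
module_param_named(sketchknob, sketch_knob, int, 0444);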
+1 -13
drivers/gpu/drm/radeon/radeon_encoders.c
··· 244 244 245 245 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 246 246 radeon_connector = to_radeon_connector(connector); 247 - if (radeon_encoder->is_mst_encoder) { 248 - struct radeon_encoder_mst *mst_enc; 249 - 250 - if (!radeon_connector->is_mst_connector) 251 - continue; 252 - 253 - mst_enc = radeon_encoder->enc_priv; 254 - if (mst_enc->connector == radeon_connector->mst_port) 255 - return connector; 256 - } else if (radeon_encoder->active_device & radeon_connector->devices) 247 + if (radeon_encoder->active_device & radeon_connector->devices) 257 248 return connector; 258 249 } 259 250 return NULL; ··· 390 399 case DRM_MODE_CONNECTOR_DVID: 391 400 case DRM_MODE_CONNECTOR_HDMIA: 392 401 case DRM_MODE_CONNECTOR_DisplayPort: 393 - if (radeon_connector->is_mst_connector) 394 - return false; 395 - 396 402 dig_connector = radeon_connector->con_priv; 397 403 if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || 398 404 (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
+1 -9
drivers/gpu/drm/radeon/radeon_irq_kms.c
··· 100 100 101 101 static void radeon_dp_work_func(struct work_struct *work) 102 102 { 103 - struct radeon_device *rdev = container_of(work, struct radeon_device, 104 - dp_work); 105 - struct drm_device *dev = rdev->ddev; 106 - struct drm_mode_config *mode_config = &dev->mode_config; 107 - struct drm_connector *connector; 108 - 109 - /* this should take a mutex */ 110 - list_for_each_entry(connector, &mode_config->connector_list, head) 111 - radeon_connector_hotplug(connector); 112 103 } 104 + 113 105 /** 114 106 * radeon_driver_irq_preinstall_kms - drm irq preinstall callback 115 107 *
-40
drivers/gpu/drm/radeon/radeon_mode.h
··· 31 31 #define RADEON_MODE_H 32 32 33 33 #include <drm/display/drm_dp_helper.h> 34 - #include <drm/display/drm_dp_mst_helper.h> 35 34 #include <drm/drm_crtc.h> 36 35 #include <drm/drm_edid.h> 37 36 #include <drm/drm_encoder.h> ··· 435 436 int panel_mode; 436 437 struct radeon_afmt *afmt; 437 438 struct r600_audio_pin *pin; 438 - int active_mst_links; 439 439 }; 440 440 441 441 struct radeon_encoder_atom_dac { 442 442 enum radeon_tv_std tv_std; 443 - }; 444 - 445 - struct radeon_encoder_mst { 446 - int crtc; 447 - struct radeon_encoder *primary; 448 - struct radeon_connector *connector; 449 - struct drm_dp_mst_port *port; 450 - int pbn; 451 - int fe; 452 - bool fe_from_be; 453 - bool enc_active; 454 443 }; 455 444 456 445 struct radeon_encoder { ··· 462 475 enum radeon_output_csc output_csc; 463 476 bool can_mst; 464 477 uint32_t offset; 465 - bool is_mst_encoder; 466 - /* front end for this mst encoder */ 467 478 }; 468 479 469 480 struct radeon_connector_atom_dig { ··· 472 487 int dp_clock; 473 488 int dp_lane_count; 474 489 bool edp_on; 475 - bool is_mst; 476 490 }; 477 491 478 492 struct radeon_gpio_rec { ··· 515 531 RADEON_FMT_DITHER_ENABLE = 1, 516 532 }; 517 533 518 - struct stream_attribs { 519 - uint16_t fe; 520 - uint16_t slots; 521 - }; 522 - 523 534 struct radeon_connector { 524 535 struct drm_connector base; 525 536 uint32_t connector_id; ··· 537 558 enum radeon_connector_audio audio; 538 559 enum radeon_connector_dither dither; 539 560 int pixelclock_for_modeset; 540 - bool is_mst_connector; 541 - struct radeon_connector *mst_port; 542 - struct drm_dp_mst_port *port; 543 - struct drm_dp_mst_topology_mgr mst_mgr; 544 - 545 - struct radeon_encoder *mst_encoder; 546 - struct stream_attribs cur_stream_attribs[6]; 547 - int enabled_attribs; 548 561 }; 549 562 550 563 #define ENCODER_MODE_IS_DP(em) (((em) == ATOM_ENCODER_MODE_DP) || \ ··· 738 767 extern void atombios_dig_transmitter_setup2(struct drm_encoder *encoder, 739 768 int action, uint8_t lane_num, 740 769 uint8_t lane_set, int fe); 741 - extern void atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, 742 - int fe); 743 770 extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder); 744 771 extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder); 745 772 void radeon_atom_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le); ··· 954 985 void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id); 955 986 956 987 int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled); 957 - 958 - /* mst */ 959 - int radeon_dp_mst_init(struct radeon_connector *radeon_connector); 960 - int radeon_dp_mst_probe(struct radeon_connector *radeon_connector); 961 - int radeon_dp_mst_check_status(struct radeon_connector *radeon_connector); 962 - void radeon_mst_debugfs_init(struct radeon_device *rdev); 963 - void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode); 964 - 965 - void radeon_setup_mst_connector(struct drm_device *dev); 966 988 967 989 int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx); 968 990 void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx);
+2 -1
drivers/gpu/drm/scheduler/sched_main.c
··· 198 198 } 199 199 200 200 /** 201 - * drm_sched_dependency_optimized 201 + * drm_sched_dependency_optimized - test if the dependency can be optimized 202 202 * 203 203 * @fence: the dependency fence 204 204 * @entity: the entity which depends on the above fence ··· 993 993 * used 994 994 * @score: optional score atomic shared with other schedulers 995 995 * @name: name used for debugging 996 + * @dev: target &struct device 996 997 * 997 998 * Return 0 on success, otherwise error code. 998 999 */
+198 -94
drivers/gpu/drm/solomon/ssd130x.c
··· 18 18 #include <linux/pwm.h> 19 19 #include <linux/regulator/consumer.h> 20 20 21 + #include <drm/drm_atomic.h> 21 22 #include <drm/drm_atomic_helper.h> 22 23 #include <drm/drm_damage_helper.h> 23 24 #include <drm/drm_edid.h> ··· 565 564 return ret; 566 565 } 567 566 568 - static int ssd130x_display_pipe_mode_valid(struct drm_simple_display_pipe *pipe, 569 - const struct drm_display_mode *mode) 567 + static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane, 568 + struct drm_atomic_state *new_state) 570 569 { 571 - struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev); 570 + struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(new_state, plane); 571 + struct drm_crtc *new_crtc = new_plane_state->crtc; 572 + struct drm_crtc_state *new_crtc_state = NULL; 572 573 573 - if (mode->hdisplay != ssd130x->mode.hdisplay && 574 - mode->vdisplay != ssd130x->mode.vdisplay) 575 - return MODE_ONE_SIZE; 574 + if (new_crtc) 575 + new_crtc_state = drm_atomic_get_new_crtc_state(new_state, new_crtc); 576 576 577 - if (mode->hdisplay != ssd130x->mode.hdisplay) 578 - return MODE_ONE_WIDTH; 579 - 580 - if (mode->vdisplay != ssd130x->mode.vdisplay) 581 - return MODE_ONE_HEIGHT; 582 - 583 - return MODE_OK; 577 + return drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state, 578 + DRM_PLANE_NO_SCALING, 579 + DRM_PLANE_NO_SCALING, 580 + false, false); 584 581 } 585 582 586 - static void ssd130x_display_pipe_enable(struct drm_simple_display_pipe *pipe, 587 - struct drm_crtc_state *crtc_state, 588 - struct drm_plane_state *plane_state) 583 + static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane, 584 + struct drm_atomic_state *old_state) 589 585 { 590 - struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev); 586 + struct drm_plane_state *plane_state = plane->state; 587 + struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(old_state, plane); 591 588 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 592 - struct drm_device *drm = &ssd130x->drm; 593 - int idx, ret; 594 - 595 - ret = ssd130x_power_on(ssd130x); 596 - if (ret) 597 - return; 598 - 599 - ret = ssd130x_init(ssd130x); 600 - if (ret) 601 - goto out_power_off; 602 - 603 - if (!drm_dev_enter(drm, &idx)) 604 - goto out_power_off; 605 - 606 - ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &plane_state->dst); 607 - 608 - ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON); 609 - 610 - backlight_enable(ssd130x->bl_dev); 611 - 612 - drm_dev_exit(idx); 613 - 614 - return; 615 - out_power_off: 616 - ssd130x_power_off(ssd130x); 617 - } 618 - 619 - static void ssd130x_display_pipe_disable(struct drm_simple_display_pipe *pipe) 620 - { 621 - struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev); 622 - struct drm_device *drm = &ssd130x->drm; 623 - int idx; 624 - 625 - if (!drm_dev_enter(drm, &idx)) 626 - return; 627 - 628 - ssd130x_clear_screen(ssd130x); 629 - 630 - backlight_disable(ssd130x->bl_dev); 631 - 632 - ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF); 633 - 634 - ssd130x_power_off(ssd130x); 635 - 636 - drm_dev_exit(idx); 637 - } 638 - 639 - static void ssd130x_display_pipe_update(struct drm_simple_display_pipe *pipe, 640 - struct drm_plane_state *old_plane_state) 641 - { 642 - struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev); 643 - struct drm_plane_state *plane_state = pipe->plane.state; 644 - struct drm_shadow_plane_state *shadow_plane_state = 
to_drm_shadow_plane_state(plane_state); 645 - struct drm_framebuffer *fb = plane_state->fb; 646 - struct drm_device *drm = &ssd130x->drm; 589 + struct drm_device *drm = plane->dev; 647 590 struct drm_rect src_clip, dst_clip; 648 591 int idx; 649 - 650 - if (!fb) 651 - return; 652 - 653 - if (!pipe->crtc.state->active) 654 - return; 655 592 656 593 if (!drm_atomic_helper_damage_merged(old_plane_state, plane_state, &src_clip)) 657 594 return; ··· 606 667 drm_dev_exit(idx); 607 668 } 608 669 609 - static const struct drm_simple_display_pipe_funcs ssd130x_pipe_funcs = { 610 - .mode_valid = ssd130x_display_pipe_mode_valid, 611 - .enable = ssd130x_display_pipe_enable, 612 - .disable = ssd130x_display_pipe_disable, 613 - .update = ssd130x_display_pipe_update, 614 - DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS, 670 + static void ssd130x_primary_plane_helper_atomic_disable(struct drm_plane *plane, 671 + struct drm_atomic_state *old_state) 672 + { 673 + struct drm_device *drm = plane->dev; 674 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 675 + int idx; 676 + 677 + if (!drm_dev_enter(drm, &idx)) 678 + return; 679 + 680 + ssd130x_clear_screen(ssd130x); 681 + 682 + drm_dev_exit(idx); 683 + } 684 + 685 + static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = { 686 + DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 687 + .atomic_check = ssd130x_primary_plane_helper_atomic_check, 688 + .atomic_update = ssd130x_primary_plane_helper_atomic_update, 689 + .atomic_disable = ssd130x_primary_plane_helper_atomic_disable, 615 690 }; 616 691 617 - static int ssd130x_connector_get_modes(struct drm_connector *connector) 692 + static const struct drm_plane_funcs ssd130x_primary_plane_funcs = { 693 + .update_plane = drm_atomic_helper_update_plane, 694 + .disable_plane = drm_atomic_helper_disable_plane, 695 + .destroy = drm_plane_cleanup, 696 + DRM_GEM_SHADOW_PLANE_FUNCS, 697 + }; 698 + 699 + static enum drm_mode_status ssd130x_crtc_helper_mode_valid(struct drm_crtc *crtc, 700 + const struct drm_display_mode *mode) 701 + { 702 + struct ssd130x_device *ssd130x = drm_to_ssd130x(crtc->dev); 703 + 704 + if (mode->hdisplay != ssd130x->mode.hdisplay && 705 + mode->vdisplay != ssd130x->mode.vdisplay) 706 + return MODE_ONE_SIZE; 707 + else if (mode->hdisplay != ssd130x->mode.hdisplay) 708 + return MODE_ONE_WIDTH; 709 + else if (mode->vdisplay != ssd130x->mode.vdisplay) 710 + return MODE_ONE_HEIGHT; 711 + 712 + return MODE_OK; 713 + } 714 + 715 + static int ssd130x_crtc_helper_atomic_check(struct drm_crtc *crtc, 716 + struct drm_atomic_state *new_state) 717 + { 718 + struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc); 719 + int ret; 720 + 721 + ret = drm_atomic_helper_check_crtc_state(new_crtc_state, false); 722 + if (ret) 723 + return ret; 724 + 725 + return drm_atomic_add_affected_planes(new_state, crtc); 726 + } 727 + 728 + /* 729 + * The CRTC is always enabled. Screen updates are performed by 730 + * the primary plane's atomic_update function. Disabling clears 731 + * the screen in the primary plane's atomic_disable function. 
732 + */ 733 + static const struct drm_crtc_helper_funcs ssd130x_crtc_helper_funcs = { 734 + .mode_valid = ssd130x_crtc_helper_mode_valid, 735 + .atomic_check = ssd130x_crtc_helper_atomic_check, 736 + }; 737 + 738 + static void ssd130x_crtc_reset(struct drm_crtc *crtc) 739 + { 740 + struct drm_device *drm = crtc->dev; 741 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 742 + 743 + ssd130x_init(ssd130x); 744 + 745 + drm_atomic_helper_crtc_reset(crtc); 746 + } 747 + 748 + static const struct drm_crtc_funcs ssd130x_crtc_funcs = { 749 + .reset = ssd130x_crtc_reset, 750 + .destroy = drm_crtc_cleanup, 751 + .set_config = drm_atomic_helper_set_config, 752 + .page_flip = drm_atomic_helper_page_flip, 753 + .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 754 + .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 755 + }; 756 + 757 + static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder, 758 + struct drm_atomic_state *state) 759 + { 760 + struct drm_device *drm = encoder->dev; 761 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 762 + int ret; 763 + 764 + ret = ssd130x_power_on(ssd130x); 765 + if (ret) 766 + return; 767 + 768 + ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON); 769 + 770 + backlight_enable(ssd130x->bl_dev); 771 + } 772 + 773 + static void ssd130x_encoder_helper_atomic_disable(struct drm_encoder *encoder, 774 + struct drm_atomic_state *state) 775 + { 776 + struct drm_device *drm = encoder->dev; 777 + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 778 + 779 + backlight_disable(ssd130x->bl_dev); 780 + 781 + ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF); 782 + 783 + ssd130x_power_off(ssd130x); 784 + } 785 + 786 + static const struct drm_encoder_helper_funcs ssd130x_encoder_helper_funcs = { 787 + .atomic_enable = ssd130x_encoder_helper_atomic_enable, 788 + .atomic_disable = ssd130x_encoder_helper_atomic_disable, 789 + }; 790 + 791 + static const struct drm_encoder_funcs ssd130x_encoder_funcs = { 792 + .destroy = drm_encoder_cleanup, 793 + }; 794 + 795 + static int ssd130x_connector_helper_get_modes(struct drm_connector *connector) 618 796 { 619 797 struct ssd130x_device *ssd130x = drm_to_ssd130x(connector->dev); 620 798 struct drm_display_mode *mode; ··· 751 695 } 752 696 753 697 static const struct drm_connector_helper_funcs ssd130x_connector_helper_funcs = { 754 - .get_modes = ssd130x_connector_get_modes, 698 + .get_modes = ssd130x_connector_helper_get_modes, 755 699 }; 756 700 757 701 static const struct drm_connector_funcs ssd130x_connector_funcs = { ··· 862 806 struct device *dev = ssd130x->dev; 863 807 struct drm_device *drm = &ssd130x->drm; 864 808 unsigned long max_width, max_height; 809 + struct drm_plane *primary_plane; 810 + struct drm_crtc *crtc; 811 + struct drm_encoder *encoder; 812 + struct drm_connector *connector; 865 813 int ret; 814 + 815 + /* 816 + * Modesetting 817 + */ 866 818 867 819 ret = drmm_mode_config_init(drm); 868 820 if (ret) { ··· 897 833 drm->mode_config.preferred_depth = 32; 898 834 drm->mode_config.funcs = &ssd130x_mode_config_funcs; 899 835 900 - ret = drm_connector_init(drm, &ssd130x->connector, &ssd130x_connector_funcs, 836 + /* Primary plane */ 837 + 838 + primary_plane = &ssd130x->primary_plane; 839 + ret = drm_universal_plane_init(drm, primary_plane, 0, &ssd130x_primary_plane_funcs, 840 + ssd130x_formats, ARRAY_SIZE(ssd130x_formats), 841 + NULL, DRM_PLANE_TYPE_PRIMARY, NULL); 842 + if (ret) { 843 + dev_err(dev, "DRM primary plane init failed: %d\n", ret); 844 + return ret; 
845 + } 846 + 847 + drm_plane_helper_add(primary_plane, &ssd130x_primary_plane_helper_funcs); 848 + 849 + drm_plane_enable_fb_damage_clips(primary_plane); 850 + 851 + /* CRTC */ 852 + 853 + crtc = &ssd130x->crtc; 854 + ret = drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL, 855 + &ssd130x_crtc_funcs, NULL); 856 + if (ret) { 857 + dev_err(dev, "DRM crtc init failed: %d\n", ret); 858 + return ret; 859 + } 860 + 861 + drm_crtc_helper_add(crtc, &ssd130x_crtc_helper_funcs); 862 + 863 + /* Encoder */ 864 + 865 + encoder = &ssd130x->encoder; 866 + ret = drm_encoder_init(drm, encoder, &ssd130x_encoder_funcs, 867 + DRM_MODE_ENCODER_NONE, NULL); 868 + if (ret) { 869 + dev_err(dev, "DRM encoder init failed: %d\n", ret); 870 + return ret; 871 + } 872 + 873 + drm_encoder_helper_add(encoder, &ssd130x_encoder_helper_funcs); 874 + 875 + encoder->possible_crtcs = drm_crtc_mask(crtc); 876 + 877 + /* Connector */ 878 + 879 + connector = &ssd130x->connector; 880 + ret = drm_connector_init(drm, connector, &ssd130x_connector_funcs, 901 881 DRM_MODE_CONNECTOR_Unknown); 902 882 if (ret) { 903 883 dev_err(dev, "DRM connector init failed: %d\n", ret); 904 884 return ret; 905 885 } 906 886 907 - drm_connector_helper_add(&ssd130x->connector, &ssd130x_connector_helper_funcs); 887 + drm_connector_helper_add(connector, &ssd130x_connector_helper_funcs); 908 888 909 - ret = drm_simple_display_pipe_init(drm, &ssd130x->pipe, &ssd130x_pipe_funcs, 910 - ssd130x_formats, ARRAY_SIZE(ssd130x_formats), 911 - NULL, &ssd130x->connector); 889 + ret = drm_connector_attach_encoder(connector, encoder); 912 890 if (ret) { 913 - dev_err(dev, "DRM simple display pipeline init failed: %d\n", ret); 891 + dev_err(dev, "DRM attach connector to encoder failed: %d\n", ret); 914 892 return ret; 915 893 } 916 - 917 - drm_plane_enable_fb_damage_clips(&ssd130x->pipe.plane); 918 894 919 895 drm_mode_config_reset(drm); 920 896
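Two helper families carry most of the weight in this conversion: DRM_GEM_SHADOW_PLANE_FUNCS wires the plane-state hooks to allocate a struct drm_shadow_plane_state, and DRM_GEM_SHADOW_PLANE_HELPER_FUNCS adds begin_fb_access/end_fb_access hooks that vmap the framebuffer's GEM object around each commit. A minimal annotated sketch of how the vmapped pixels and the merged damage come together in atomic_update; placeholder names (sketch_*), not driver code:

#include <linux/iosys-map.h>

#include <drm/drm_atomic.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_rect.h>

static void sketch_blit(const struct iosys_map *src, const struct drm_rect *clip)
{
        /* device-specific transfer elided */
}

static void sketch_plane_atomic_update(struct drm_plane *plane,
                                       struct drm_atomic_state *state)
{
        struct drm_plane_state *old_plane_state =
                drm_atomic_get_old_plane_state(state, plane);
        struct drm_shadow_plane_state *shadow =
                to_drm_shadow_plane_state(plane->state);
        struct drm_rect damage;

        /* Old- and new-state damage clips merged into one rectangle. */
        if (!drm_atomic_helper_damage_merged(old_plane_state, plane->state,
                                             &damage))
                return;

        /* shadow->data[0] holds the vmapped pixels of format plane 0. */
        sketch_blit(&shadow->data[0], &damage);
}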
+7 -2
drivers/gpu/drm/solomon/ssd130x.h
··· 13 13 #ifndef __SSD1307X_H__ 14 14 #define __SSD1307X_H__ 15 15 16 + #include <drm/drm_connector.h> 17 + #include <drm/drm_crtc.h> 16 18 #include <drm/drm_drv.h> 17 - #include <drm/drm_simple_kms_helper.h> 19 + #include <drm/drm_encoder.h> 20 + #include <drm/drm_plane_helper.h> 18 21 19 22 #include <linux/regmap.h> 20 23 ··· 45 42 struct ssd130x_device { 46 43 struct drm_device drm; 47 44 struct device *dev; 48 - struct drm_simple_display_pipe pipe; 49 45 struct drm_display_mode mode; 46 + struct drm_plane primary_plane; 47 + struct drm_crtc crtc; 48 + struct drm_encoder encoder; 50 49 struct drm_connector connector; 51 50 struct i2c_client *client; 52 51
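The replacement for the drm_simple_display_pipe member is to embed each mode-setting object directly in the device structure, so every hook can recover the driver state with container_of(). A generic sketch of that pattern, with sketch_device standing in for the driver's own struct:

#include <linux/container_of.h>

#include <drm/drm_device.h>
#include <drm/drm_plane.h>

struct sketch_device {
        struct drm_device drm;
        struct drm_plane primary_plane;
        /* crtc, encoder and connector are embedded the same way */
};

static inline struct sketch_device *plane_to_sketch(struct drm_plane *plane)
{
        return container_of(plane, struct sketch_device, primary_plane);
}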
+21 -43
drivers/gpu/drm/sun4i/sun4i_tv.c
··· 14 14 #include <linux/regmap.h> 15 15 #include <linux/reset.h> 16 16 17 + #include <drm/drm_atomic.h> 17 18 #include <drm/drm_atomic_helper.h> 18 19 #include <drm/drm_of.h> 19 20 #include <drm/drm_panel.h> ··· 276 275 encoder); 277 276 } 278 277 279 - static inline struct sun4i_tv * 280 - drm_connector_to_sun4i_tv(struct drm_connector *connector) 281 - { 282 - return container_of(connector, struct sun4i_tv, 283 - connector); 284 - } 285 - 286 278 /* 287 279 * FIXME: If only the drm_display_mode private field was usable, this 288 280 * could go away... ··· 333 339 mode->vtotal = mode->vsync_end + tv_mode->vback_porch; 334 340 } 335 341 336 - static void sun4i_tv_disable(struct drm_encoder *encoder) 342 + static void sun4i_tv_disable(struct drm_encoder *encoder, 343 + struct drm_atomic_state *state) 337 344 { 338 345 struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder); 339 346 struct sun4i_crtc *crtc = drm_crtc_to_sun4i_crtc(encoder->crtc); ··· 348 353 sunxi_engine_disable_color_correction(crtc->engine); 349 354 } 350 355 351 - static void sun4i_tv_enable(struct drm_encoder *encoder) 356 + static void sun4i_tv_enable(struct drm_encoder *encoder, 357 + struct drm_atomic_state *state) 352 358 { 353 359 struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder); 354 360 struct sun4i_crtc *crtc = drm_crtc_to_sun4i_crtc(encoder->crtc); 361 + struct drm_crtc_state *crtc_state = 362 + drm_atomic_get_new_crtc_state(state, encoder->crtc); 363 + struct drm_display_mode *mode = &crtc_state->mode; 364 + const struct tv_mode *tv_mode = sun4i_tv_find_tv_by_mode(mode); 355 365 356 366 DRM_DEBUG_DRIVER("Enabling the TV Output\n"); 357 - 358 - sunxi_engine_apply_color_correction(crtc->engine); 359 - 360 - regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG, 361 - SUN4I_TVE_EN_ENABLE, 362 - SUN4I_TVE_EN_ENABLE); 363 - } 364 - 365 - static void sun4i_tv_mode_set(struct drm_encoder *encoder, 366 - struct drm_display_mode *mode, 367 - struct drm_display_mode *adjusted_mode) 368 - { 369 - struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder); 370 - const struct tv_mode *tv_mode = sun4i_tv_find_tv_by_mode(mode); 371 367 372 368 /* Enable and map the DAC to the output */ 373 369 regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG, ··· 452 466 SUN4I_TVE_RESYNC_FIELD : 0)); 453 467 454 468 regmap_write(tv->regs, SUN4I_TVE_SLAVE_REG, 0); 469 + 470 + sunxi_engine_apply_color_correction(crtc->engine); 471 + 472 + regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG, 473 + SUN4I_TVE_EN_ENABLE, 474 + SUN4I_TVE_EN_ENABLE); 455 475 } 456 476 457 477 static const struct drm_encoder_helper_funcs sun4i_tv_helper_funcs = { 458 - .disable = sun4i_tv_disable, 459 - .enable = sun4i_tv_enable, 460 - .mode_set = sun4i_tv_mode_set, 478 + .atomic_disable = sun4i_tv_disable, 479 + .atomic_enable = sun4i_tv_enable, 461 480 }; 462 481 463 482 static int sun4i_tv_comp_get_modes(struct drm_connector *connector) ··· 488 497 return i; 489 498 } 490 499 491 - static int sun4i_tv_comp_mode_valid(struct drm_connector *connector, 492 - struct drm_display_mode *mode) 493 - { 494 - /* TODO */ 495 - return MODE_OK; 496 - } 497 - 498 500 static const struct drm_connector_helper_funcs sun4i_tv_comp_connector_helper_funcs = { 499 501 .get_modes = sun4i_tv_comp_get_modes, 500 - .mode_valid = sun4i_tv_comp_mode_valid, 501 502 }; 502 - 503 - static void 504 - sun4i_tv_comp_connector_destroy(struct drm_connector *connector) 505 - { 506 - drm_connector_cleanup(connector); 507 - } 508 503 509 504 static const struct drm_connector_funcs sun4i_tv_comp_connector_funcs = { 
510 505 .fill_modes = drm_helper_probe_single_connector_modes, 511 - .destroy = sun4i_tv_comp_connector_destroy, 506 + .destroy = drm_connector_cleanup, 512 507 .reset = drm_atomic_helper_connector_reset, 513 508 .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 514 509 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, ··· 581 604 if (ret) { 582 605 dev_err(dev, 583 606 "Couldn't initialise the Composite connector\n"); 584 - goto err_cleanup_connector; 607 + goto err_cleanup_encoder; 585 608 } 586 609 tv->connector.interlace_allowed = true; 587 610 ··· 589 612 590 613 return 0; 591 614 592 - err_cleanup_connector: 615 + err_cleanup_encoder: 593 616 drm_encoder_cleanup(&tv->encoder); 594 617 err_disable_clk: 595 618 clk_disable_unprepare(tv->clk); ··· 606 629 drm_connector_cleanup(&tv->connector); 607 630 drm_encoder_cleanup(&tv->encoder); 608 631 clk_disable_unprepare(tv->clk); 632 + reset_control_assert(tv->reset); 609 633 } 610 634 611 635 static const struct component_ops sun4i_tv_ops = {
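The encoder side of this conversion follows the standard atomic recipe: the legacy .mode_set hook disappears and .atomic_enable reads the committed mode from the new CRTC state, so timing programming and enabling happen in one place. A bare sketch under those assumptions; sketch_encoder_enable() is a placeholder, not driver code:

#include <linux/printk.h>

#include <drm/drm_atomic.h>
#include <drm/drm_encoder.h>

static void sketch_encoder_enable(struct drm_encoder *encoder,
                                  struct drm_atomic_state *state)
{
        struct drm_crtc_state *crtc_state =
                drm_atomic_get_new_crtc_state(state, encoder->crtc);
        const struct drm_display_mode *mode = &crtc_state->mode;

        pr_debug("enabling %dx%d output\n", mode->hdisplay, mode->vdisplay);

        /* ... program timings from "mode", then turn the output on ... */
}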
+143 -230
drivers/gpu/drm/tests/drm_cmdline_parser_test.c
··· 16 16 struct drm_cmdline_mode mode = { }; 17 17 const char *cmdline = "e"; 18 18 19 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 19 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 20 20 &no_connector, &mode)); 21 21 KUNIT_EXPECT_FALSE(test, mode.specified); 22 22 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 34 34 struct drm_cmdline_mode mode = { }; 35 35 const char *cmdline = "D"; 36 36 37 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 37 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 38 38 &no_connector, &mode)); 39 39 KUNIT_EXPECT_FALSE(test, mode.specified); 40 40 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 56 56 struct drm_cmdline_mode mode = { }; 57 57 const char *cmdline = "D"; 58 58 59 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 59 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 60 60 &connector_hdmi, &mode)); 61 61 KUNIT_EXPECT_FALSE(test, mode.specified); 62 62 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 78 78 struct drm_cmdline_mode mode = { }; 79 79 const char *cmdline = "D"; 80 80 81 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 81 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 82 82 &connector_dvi, &mode)); 83 83 KUNIT_EXPECT_FALSE(test, mode.specified); 84 84 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 96 96 struct drm_cmdline_mode mode = { }; 97 97 const char *cmdline = "d"; 98 98 99 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 99 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 100 100 &no_connector, &mode)); 101 101 KUNIT_EXPECT_FALSE(test, mode.specified); 102 102 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 109 109 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_OFF); 110 110 } 111 111 112 - static void drm_cmdline_test_margin_only(struct kunit *test) 113 - { 114 - struct drm_cmdline_mode mode = { }; 115 - const char *cmdline = "m"; 116 - 117 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 118 - &no_connector, &mode)); 119 - } 120 - 121 - static void drm_cmdline_test_interlace_only(struct kunit *test) 122 - { 123 - struct drm_cmdline_mode mode = { }; 124 - const char *cmdline = "i"; 125 - 126 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 127 - &no_connector, &mode)); 128 - } 129 - 130 112 static void drm_cmdline_test_res(struct kunit *test) 131 113 { 132 114 struct drm_cmdline_mode mode = { }; 133 115 const char *cmdline = "720x480"; 134 116 135 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 117 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 136 118 &no_connector, &mode)); 137 119 KUNIT_EXPECT_TRUE(test, mode.specified); 138 120 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 131 149 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 132 150 } 133 151 134 - static void drm_cmdline_test_res_missing_x(struct kunit *test) 135 - { 136 - struct drm_cmdline_mode mode = { }; 137 - const char *cmdline = "x480"; 138 - 139 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 140 - &no_connector, &mode)); 141 - } 142 - 143 - static void drm_cmdline_test_res_missing_y(struct kunit *test) 144 - { 145 - struct drm_cmdline_mode mode = { }; 146 - const char *cmdline = "1024x"; 147 - 148 - 
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 149 - &no_connector, &mode)); 150 - } 151 - 152 - static void drm_cmdline_test_res_bad_y(struct kunit *test) 153 - { 154 - struct drm_cmdline_mode mode = { }; 155 - const char *cmdline = "1024xtest"; 156 - 157 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 158 - &no_connector, &mode)); 159 - } 160 - 161 - static void drm_cmdline_test_res_missing_y_bpp(struct kunit *test) 162 - { 163 - struct drm_cmdline_mode mode = { }; 164 - const char *cmdline = "1024x-24"; 165 - 166 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 167 - &no_connector, &mode)); 168 - } 169 - 170 152 static void drm_cmdline_test_res_vesa(struct kunit *test) 171 153 { 172 154 struct drm_cmdline_mode mode = { }; 173 155 const char *cmdline = "720x480M"; 174 156 175 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 157 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 176 158 &no_connector, &mode)); 177 159 KUNIT_EXPECT_TRUE(test, mode.specified); 178 160 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 158 212 struct drm_cmdline_mode mode = { }; 159 213 const char *cmdline = "720x480MR"; 160 214 161 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 215 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 162 216 &no_connector, &mode)); 163 217 KUNIT_EXPECT_TRUE(test, mode.specified); 164 218 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 180 234 struct drm_cmdline_mode mode = { }; 181 235 const char *cmdline = "720x480R"; 182 236 183 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 237 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 184 238 &no_connector, &mode)); 185 239 KUNIT_EXPECT_TRUE(test, mode.specified); 186 240 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 202 256 struct drm_cmdline_mode mode = { }; 203 257 const char *cmdline = "720x480-24"; 204 258 205 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 259 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 206 260 &no_connector, &mode)); 207 261 KUNIT_EXPECT_TRUE(test, mode.specified); 208 262 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 220 274 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 221 275 } 222 276 223 - static void drm_cmdline_test_res_bad_bpp(struct kunit *test) 224 - { 225 - struct drm_cmdline_mode mode = { }; 226 - const char *cmdline = "720x480-test"; 227 - 228 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 229 - &no_connector, &mode)); 230 - } 231 - 232 277 static void drm_cmdline_test_res_refresh(struct kunit *test) 233 278 { 234 279 struct drm_cmdline_mode mode = { }; 235 280 const char *cmdline = "720x480@60"; 236 281 237 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 282 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 238 283 &no_connector, &mode)); 239 284 KUNIT_EXPECT_TRUE(test, mode.specified); 240 285 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 243 306 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 244 307 } 245 308 246 - static void drm_cmdline_test_res_bad_refresh(struct kunit *test) 247 - { 248 - struct drm_cmdline_mode mode = { }; 249 - const char *cmdline = "720x480@refresh"; 250 - 251 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 252 - &no_connector, &mode)); 253 
- } 254 - 255 309 static void drm_cmdline_test_res_bpp_refresh(struct kunit *test) 256 310 { 257 311 struct drm_cmdline_mode mode = { }; 258 312 const char *cmdline = "720x480-24@60"; 259 313 260 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 314 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 261 315 &no_connector, &mode)); 262 316 KUNIT_EXPECT_TRUE(test, mode.specified); 263 317 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 272 344 struct drm_cmdline_mode mode = { }; 273 345 const char *cmdline = "720x480-24@60i"; 274 346 275 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 347 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 276 348 &no_connector, &mode)); 277 349 KUNIT_EXPECT_TRUE(test, mode.specified); 278 350 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 294 366 static void drm_cmdline_test_res_bpp_refresh_margins(struct kunit *test) 295 367 { 296 368 struct drm_cmdline_mode mode = { }; 297 - const char *cmdline = "720x480-24@60m"; 369 + const char *cmdline = "720x480-24@60m"; 298 370 299 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 371 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 300 372 &no_connector, &mode)); 301 373 KUNIT_EXPECT_TRUE(test, mode.specified); 302 374 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 318 390 static void drm_cmdline_test_res_bpp_refresh_force_off(struct kunit *test) 319 391 { 320 392 struct drm_cmdline_mode mode = { }; 321 - const char *cmdline = "720x480-24@60d"; 393 + const char *cmdline = "720x480-24@60d"; 322 394 323 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 395 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 324 396 &no_connector, &mode)); 325 397 KUNIT_EXPECT_TRUE(test, mode.specified); 326 398 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 339 411 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_OFF); 340 412 } 341 413 342 - static void drm_cmdline_test_res_bpp_refresh_force_on_off(struct kunit *test) 343 - { 344 - struct drm_cmdline_mode mode = { }; 345 - const char *cmdline = "720x480-24@60de"; 346 - 347 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 348 - &no_connector, &mode)); 349 - } 350 - 351 414 static void drm_cmdline_test_res_bpp_refresh_force_on(struct kunit *test) 352 415 { 353 416 struct drm_cmdline_mode mode = { }; 354 - const char *cmdline = "720x480-24@60e"; 417 + const char *cmdline = "720x480-24@60e"; 355 418 356 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 419 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 357 420 &no_connector, &mode)); 358 421 KUNIT_EXPECT_TRUE(test, mode.specified); 359 422 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 368 449 struct drm_cmdline_mode mode = { }; 369 450 const char *cmdline = "720x480-24@60D"; 370 451 371 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 452 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 372 453 &no_connector, &mode)); 373 454 KUNIT_EXPECT_TRUE(test, mode.specified); 374 455 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 395 476 }; 396 477 const char *cmdline = "720x480-24@60D"; 397 478 398 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 479 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 399 480 &connector, &mode)); 400 481 KUNIT_EXPECT_TRUE(test, 
mode.specified); 401 482 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 443 524 struct drm_cmdline_mode mode = { }; 444 525 const char *cmdline = "720x480me"; 445 526 446 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 527 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 447 528 &no_connector, &mode)); 448 529 KUNIT_EXPECT_TRUE(test, mode.specified); 449 530 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 465 546 struct drm_cmdline_mode mode = { }; 466 547 const char *cmdline = "720x480Mm"; 467 548 468 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 549 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 469 550 &no_connector, &mode)); 470 551 KUNIT_EXPECT_TRUE(test, mode.specified); 471 552 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 482 563 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 483 564 } 484 565 485 - static void drm_cmdline_test_res_invalid_mode(struct kunit *test) 486 - { 487 - struct drm_cmdline_mode mode = { }; 488 - const char *cmdline = "720x480f"; 489 - 490 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 491 - &no_connector, &mode)); 492 - } 493 - 494 - static void drm_cmdline_test_res_bpp_wrong_place_mode(struct kunit *test) 495 - { 496 - struct drm_cmdline_mode mode = { }; 497 - const char *cmdline = "720x480e-24"; 498 - 499 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 500 - &no_connector, &mode)); 501 - } 502 - 503 566 static void drm_cmdline_test_name(struct kunit *test) 504 567 { 505 568 struct drm_cmdline_mode mode = { }; 506 569 const char *cmdline = "NTSC"; 507 570 508 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 571 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 509 572 &no_connector, &mode)); 510 573 KUNIT_EXPECT_STREQ(test, mode.name, "NTSC"); 511 574 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 499 598 struct drm_cmdline_mode mode = { }; 500 599 const char *cmdline = "NTSC-24"; 501 600 502 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 601 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 503 602 &no_connector, &mode)); 504 603 KUNIT_EXPECT_STREQ(test, mode.name, "NTSC"); 505 604 ··· 509 608 KUNIT_EXPECT_EQ(test, mode.bpp, 24); 510 609 } 511 610 512 - static void drm_cmdline_test_name_bpp_refresh(struct kunit *test) 513 - { 514 - struct drm_cmdline_mode mode = { }; 515 - const char *cmdline = "NTSC-24@60"; 516 - 517 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 518 - &no_connector, &mode)); 519 - } 520 - 521 - static void drm_cmdline_test_name_refresh(struct kunit *test) 522 - { 523 - struct drm_cmdline_mode mode = { }; 524 - const char *cmdline = "NTSC@60"; 525 - 526 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 527 - &no_connector, &mode)); 528 - } 529 - 530 - static void drm_cmdline_test_name_refresh_wrong_mode(struct kunit *test) 531 - { 532 - struct drm_cmdline_mode mode = { }; 533 - const char *cmdline = "NTSC@60m"; 534 - 535 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 536 - &no_connector, &mode)); 537 - } 538 - 539 - static void drm_cmdline_test_name_refresh_invalid_mode(struct kunit *test) 540 - { 541 - struct drm_cmdline_mode mode = { }; 542 - const char *cmdline = "NTSC@60f"; 543 - 544 - KUNIT_EXPECT_FALSE(test, 
drm_mode_parse_command_line_for_connector(cmdline, 545 - &no_connector, &mode)); 546 - } 547 - 548 611 static void drm_cmdline_test_name_option(struct kunit *test) 549 612 { 550 613 struct drm_cmdline_mode mode = { }; 551 614 const char *cmdline = "NTSC,rotate=180"; 552 615 553 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 616 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 554 617 &no_connector, &mode)); 555 618 KUNIT_EXPECT_TRUE(test, mode.specified); 556 619 KUNIT_EXPECT_STREQ(test, mode.name, "NTSC"); ··· 526 661 struct drm_cmdline_mode mode = { }; 527 662 const char *cmdline = "NTSC-24,rotate=180"; 528 663 529 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 664 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 530 665 &no_connector, &mode)); 531 666 KUNIT_EXPECT_TRUE(test, mode.specified); 532 667 KUNIT_EXPECT_STREQ(test, mode.name, "NTSC"); ··· 540 675 struct drm_cmdline_mode mode = { }; 541 676 const char *cmdline = "720x480,rotate=0"; 542 677 543 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 678 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 544 679 &no_connector, &mode)); 545 680 KUNIT_EXPECT_TRUE(test, mode.specified); 546 681 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 563 698 struct drm_cmdline_mode mode = { }; 564 699 const char *cmdline = "720x480,rotate=90"; 565 700 566 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 701 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 567 702 &no_connector, &mode)); 568 703 KUNIT_EXPECT_TRUE(test, mode.specified); 569 704 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 586 721 struct drm_cmdline_mode mode = { }; 587 722 const char *cmdline = "720x480,rotate=180"; 588 723 589 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 724 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 590 725 &no_connector, &mode)); 591 726 KUNIT_EXPECT_TRUE(test, mode.specified); 592 727 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 609 744 struct drm_cmdline_mode mode = { }; 610 745 const char *cmdline = "720x480,rotate=270"; 611 746 612 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 747 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 613 748 &no_connector, &mode)); 614 749 KUNIT_EXPECT_TRUE(test, mode.specified); 615 750 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 627 762 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 628 763 } 629 764 630 - static void drm_cmdline_test_rotate_multiple(struct kunit *test) 631 - { 632 - struct drm_cmdline_mode mode = { }; 633 - const char *cmdline = "720x480,rotate=0,rotate=90"; 634 - 635 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 636 - &no_connector, &mode)); 637 - } 638 - 639 - static void drm_cmdline_test_rotate_invalid_val(struct kunit *test) 640 - { 641 - struct drm_cmdline_mode mode = { }; 642 - const char *cmdline = "720x480,rotate=42"; 643 - 644 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 645 - &no_connector, &mode)); 646 - } 647 - 648 - static void drm_cmdline_test_rotate_truncated(struct kunit *test) 649 - { 650 - struct drm_cmdline_mode mode = { }; 651 - const char *cmdline = "720x480,rotate="; 652 - 653 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 654 - &no_connector, &mode)); 
655 - } 656 - 657 765 static void drm_cmdline_test_hmirror(struct kunit *test) 658 766 { 659 767 struct drm_cmdline_mode mode = { }; 660 768 const char *cmdline = "720x480,reflect_x"; 661 769 662 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 770 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 663 771 &no_connector, &mode)); 664 772 KUNIT_EXPECT_TRUE(test, mode.specified); 665 773 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 655 817 struct drm_cmdline_mode mode = { }; 656 818 const char *cmdline = "720x480,reflect_y"; 657 819 658 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 820 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 659 821 &no_connector, &mode)); 660 822 KUNIT_EXPECT_TRUE(test, mode.specified); 661 823 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 679 841 const char *cmdline = 680 842 "720x480,margin_right=14,margin_left=24,margin_bottom=36,margin_top=42"; 681 843 682 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 844 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 683 845 &no_connector, &mode)); 684 846 KUNIT_EXPECT_TRUE(test, mode.specified); 685 847 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 705 867 struct drm_cmdline_mode mode = { }; 706 868 const char *cmdline = "720x480,rotate=270,reflect_x"; 707 869 708 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 870 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 709 871 &no_connector, &mode)); 710 872 KUNIT_EXPECT_TRUE(test, mode.specified); 711 873 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 723 885 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 724 886 } 725 887 726 - static void drm_cmdline_test_invalid_option(struct kunit *test) 727 - { 728 - struct drm_cmdline_mode mode = { }; 729 - const char *cmdline = "720x480,test=42"; 730 - 731 - KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline, 732 - &no_connector, &mode)); 733 - } 734 - 735 888 static void drm_cmdline_test_bpp_extra_and_option(struct kunit *test) 736 889 { 737 890 struct drm_cmdline_mode mode = { }; 738 891 const char *cmdline = "720x480-24e,rotate=180"; 739 892 740 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 893 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 741 894 &no_connector, &mode)); 742 895 KUNIT_EXPECT_TRUE(test, mode.specified); 743 896 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 752 923 struct drm_cmdline_mode mode = { }; 753 924 const char *cmdline = "720x480e,rotate=180"; 754 925 755 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 926 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 756 927 &no_connector, &mode)); 757 928 KUNIT_EXPECT_TRUE(test, mode.specified); 758 929 KUNIT_EXPECT_EQ(test, mode.xres, 720); ··· 774 945 struct drm_cmdline_mode mode = { }; 775 946 const char *cmdline = "margin_right=14,margin_left=24,margin_bottom=36,margin_top=42"; 776 947 777 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 948 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 778 949 &no_connector, &mode)); 779 950 KUNIT_EXPECT_FALSE(test, mode.specified); 780 951 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 797 968 struct drm_cmdline_mode mode = { }; 798 969 const char *cmdline = 
"e,margin_right=14,margin_left=24,margin_bottom=36,margin_top=42"; 799 970 800 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 971 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 801 972 &no_connector, &mode)); 802 973 KUNIT_EXPECT_FALSE(test, mode.specified); 803 974 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 820 991 struct drm_cmdline_mode mode = { }; 821 992 const char *cmdline = "panel_orientation=upside_down"; 822 993 823 - KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 994 + KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline, 824 995 &no_connector, &mode)); 825 996 KUNIT_EXPECT_FALSE(test, mode.specified); 826 997 KUNIT_EXPECT_FALSE(test, mode.refresh_specified); ··· 835 1006 KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED); 836 1007 } 837 1008 1009 + struct drm_cmdline_invalid_test { 1010 + const char *name; 1011 + const char *cmdline; 1012 + }; 1013 + 1014 + static void drm_cmdline_test_invalid(struct kunit *test) 1015 + { 1016 + const struct drm_cmdline_invalid_test *params = test->param_value; 1017 + struct drm_cmdline_mode mode = { }; 1018 + 1019 + KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(params->cmdline, 1020 + &no_connector, 1021 + &mode)); 1022 + } 1023 + 1024 + static const struct drm_cmdline_invalid_test drm_cmdline_invalid_tests[] = { 1025 + { 1026 + .name = "margin_only", 1027 + .cmdline = "m", 1028 + }, 1029 + { 1030 + .name = "interlace_only", 1031 + .cmdline = "i", 1032 + }, 1033 + { 1034 + .name = "res_missing_x", 1035 + .cmdline = "x480", 1036 + }, 1037 + { 1038 + .name = "res_missing_y", 1039 + .cmdline = "1024x", 1040 + }, 1041 + { 1042 + .name = "res_bad_y", 1043 + .cmdline = "1024xtest", 1044 + }, 1045 + { 1046 + .name = "res_missing_y_bpp", 1047 + .cmdline = "1024x-24", 1048 + }, 1049 + { 1050 + .name = "res_bad_bpp", 1051 + .cmdline = "720x480-test", 1052 + }, 1053 + { 1054 + .name = "res_bad_refresh", 1055 + .cmdline = "720x480@refresh", 1056 + }, 1057 + { 1058 + .name = "res_bpp_refresh_force_on_off", 1059 + .cmdline = "720x480-24@60de", 1060 + }, 1061 + { 1062 + .name = "res_invalid_mode", 1063 + .cmdline = "720x480f", 1064 + }, 1065 + { 1066 + .name = "res_bpp_wrong_place_mode", 1067 + .cmdline = "720x480e-24", 1068 + }, 1069 + { 1070 + .name = "name_bpp_refresh", 1071 + .cmdline = "NTSC-24@60", 1072 + }, 1073 + { 1074 + .name = "name_refresh", 1075 + .cmdline = "NTSC@60", 1076 + }, 1077 + { 1078 + .name = "name_refresh_wrong_mode", 1079 + .cmdline = "NTSC@60m", 1080 + }, 1081 + { 1082 + .name = "name_refresh_invalid_mode", 1083 + .cmdline = "NTSC@60f", 1084 + }, 1085 + { 1086 + .name = "rotate_multiple", 1087 + .cmdline = "720x480,rotate=0,rotate=90", 1088 + }, 1089 + { 1090 + .name = "rotate_invalid_val", 1091 + .cmdline = "720x480,rotate=42", 1092 + }, 1093 + { 1094 + .name = "rotate_truncated", 1095 + .cmdline = "720x480,rotate=", 1096 + }, 1097 + { 1098 + .name = "invalid_option", 1099 + .cmdline = "720x480,test=42", 1100 + }, 1101 + }; 1102 + 1103 + static void drm_cmdline_invalid_desc(const struct drm_cmdline_invalid_test *t, 1104 + char *desc) 1105 + { 1106 + sprintf(desc, "%s", t->name); 1107 + } 1108 + 1109 + KUNIT_ARRAY_PARAM(drm_cmdline_invalid, drm_cmdline_invalid_tests, drm_cmdline_invalid_desc); 1110 + 838 1111 static struct kunit_case drm_cmdline_parser_tests[] = { 839 1112 KUNIT_CASE(drm_cmdline_test_force_d_only), 840 1113 KUNIT_CASE(drm_cmdline_test_force_D_only_dvi), 841 1114 
KUNIT_CASE(drm_cmdline_test_force_D_only_hdmi), 842 1115 KUNIT_CASE(drm_cmdline_test_force_D_only_not_digital), 843 1116 KUNIT_CASE(drm_cmdline_test_force_e_only), 844 - KUNIT_CASE(drm_cmdline_test_margin_only), 845 - KUNIT_CASE(drm_cmdline_test_interlace_only), 846 1117 KUNIT_CASE(drm_cmdline_test_res), 847 - KUNIT_CASE(drm_cmdline_test_res_missing_x), 848 - KUNIT_CASE(drm_cmdline_test_res_missing_y), 849 - KUNIT_CASE(drm_cmdline_test_res_bad_y), 850 - KUNIT_CASE(drm_cmdline_test_res_missing_y_bpp), 851 1118 KUNIT_CASE(drm_cmdline_test_res_vesa), 852 1119 KUNIT_CASE(drm_cmdline_test_res_vesa_rblank), 853 1120 KUNIT_CASE(drm_cmdline_test_res_rblank), 854 1121 KUNIT_CASE(drm_cmdline_test_res_bpp), 855 - KUNIT_CASE(drm_cmdline_test_res_bad_bpp), 856 1122 KUNIT_CASE(drm_cmdline_test_res_refresh), 857 - KUNIT_CASE(drm_cmdline_test_res_bad_refresh), 858 1123 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh), 859 1124 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_interlaced), 860 1125 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_margins), 861 1126 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_off), 862 - KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_off), 863 1127 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on), 864 1128 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_analog), 865 1129 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_digital), 866 1130 KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_interlaced_margins_force_on), 867 1131 KUNIT_CASE(drm_cmdline_test_res_margins_force_on), 868 1132 KUNIT_CASE(drm_cmdline_test_res_vesa_margins), 869 - KUNIT_CASE(drm_cmdline_test_res_invalid_mode), 870 - KUNIT_CASE(drm_cmdline_test_res_bpp_wrong_place_mode), 871 1133 KUNIT_CASE(drm_cmdline_test_name), 872 1134 KUNIT_CASE(drm_cmdline_test_name_bpp), 873 - KUNIT_CASE(drm_cmdline_test_name_refresh), 874 - KUNIT_CASE(drm_cmdline_test_name_bpp_refresh), 875 - KUNIT_CASE(drm_cmdline_test_name_refresh_wrong_mode), 876 - KUNIT_CASE(drm_cmdline_test_name_refresh_invalid_mode), 877 1135 KUNIT_CASE(drm_cmdline_test_name_option), 878 1136 KUNIT_CASE(drm_cmdline_test_name_bpp_option), 879 1137 KUNIT_CASE(drm_cmdline_test_rotate_0), 880 1138 KUNIT_CASE(drm_cmdline_test_rotate_90), 881 1139 KUNIT_CASE(drm_cmdline_test_rotate_180), 882 1140 KUNIT_CASE(drm_cmdline_test_rotate_270), 883 - KUNIT_CASE(drm_cmdline_test_rotate_multiple), 884 - KUNIT_CASE(drm_cmdline_test_rotate_invalid_val), 885 - KUNIT_CASE(drm_cmdline_test_rotate_truncated), 886 1141 KUNIT_CASE(drm_cmdline_test_hmirror), 887 1142 KUNIT_CASE(drm_cmdline_test_vmirror), 888 1143 KUNIT_CASE(drm_cmdline_test_margin_options), 889 1144 KUNIT_CASE(drm_cmdline_test_multiple_options), 890 - KUNIT_CASE(drm_cmdline_test_invalid_option), 891 1145 KUNIT_CASE(drm_cmdline_test_bpp_extra_and_option), 892 1146 KUNIT_CASE(drm_cmdline_test_extra_and_option), 893 1147 KUNIT_CASE(drm_cmdline_test_freestanding_options), 894 1148 KUNIT_CASE(drm_cmdline_test_freestanding_force_e_and_options), 895 1149 KUNIT_CASE(drm_cmdline_test_panel_orientation), 1150 + KUNIT_CASE_PARAM(drm_cmdline_test_invalid, drm_cmdline_invalid_gen_params), 896 1151 {} 897 1152 }; 898 1153
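The table-driven rewrite above collapses nineteen near-identical negative tests into one parameterized KUnit case. The same pattern in miniature, as a sketch with hypothetical names (my_parse() stands in for the function under test; only KUNIT_ARRAY_PARAM, KUNIT_CASE_PARAM, and KUNIT_PARAM_DESC_SIZE are real KUnit API):

#include <kunit/test.h>

/* Hypothetical function under test; stands in for
 * drm_mode_parse_command_line_for_connector().
 */
extern bool my_parse(const char *cmdline);

struct my_invalid_case {
        const char *name;
        const char *cmdline;
};

static const struct my_invalid_case my_invalid_cases[] = {
        { .name = "empty", .cmdline = "" },
        { .name = "bad_option", .cmdline = "720x480,nope=1" },
};

static void my_invalid_desc(const struct my_invalid_case *c, char *desc)
{
        strscpy(desc, c->name, KUNIT_PARAM_DESC_SIZE);
}

/* Generates my_invalid_gen_params(), the iterator used below. */
KUNIT_ARRAY_PARAM(my_invalid, my_invalid_cases, my_invalid_desc);

static void my_test_invalid(struct kunit *test)
{
        const struct my_invalid_case *c = test->param_value;

        KUNIT_EXPECT_FALSE(test, my_parse(c->cmdline));
}

static struct kunit_case my_cases[] = {
        KUNIT_CASE_PARAM(my_test_invalid, my_invalid_gen_params),
        {}
};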
+2
drivers/gpu/drm/tiny/bochs.c
··· 309 309 static void bochs_hw_blank(struct bochs_device *bochs, bool blank) 310 310 { 311 311 DRM_DEBUG_DRIVER("hw_blank %d\n", blank); 312 + /* enable color bit (so VGA_IS1_RC access works) */ 313 + bochs_vga_writeb(bochs, VGA_MIS_W, VGA_MIS_COLOR); 312 314 /* discard ar_flip_flop */ 313 315 (void)bochs_vga_readb(bochs, VGA_IS1_RC); 314 316 /* blank or unblank; we need only update index and set 0x20 */
+4 -5
drivers/gpu/drm/ttm/ttm_bo.c
··· 518 518 bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, 519 519 const struct ttm_place *place) 520 520 { 521 + struct ttm_resource *res = bo->resource; 522 + struct ttm_device *bdev = bo->bdev; 523 + 521 524 dma_resv_assert_held(bo->base.resv); 522 525 if (bo->resource->mem_type == TTM_PL_SYSTEM) 523 526 return true; ··· 528 525 /* Don't evict this BO if it's outside of the 529 526 * requested placement range 530 527 */ 531 - if (place->fpfn >= (bo->resource->start + bo->resource->num_pages) || 532 - (place->lpfn && place->lpfn <= bo->resource->start)) 533 - return false; 534 - 535 - return true; 528 + return ttm_resource_intersects(bdev, res, place, bo->base.size); 536 529 } 537 530 EXPORT_SYMBOL(ttm_bo_eviction_valuable); 538 531
+33
drivers/gpu/drm/ttm/ttm_range_manager.c
··· 113 113 kfree(node); 114 114 } 115 115 116 + static bool ttm_range_man_intersects(struct ttm_resource_manager *man, 117 + struct ttm_resource *res, 118 + const struct ttm_place *place, 119 + size_t size) 120 + { 121 + struct drm_mm_node *node = &to_ttm_range_mgr_node(res)->mm_nodes[0]; 122 + u32 num_pages = PFN_UP(size); 123 + 124 + /* Don't evict BOs outside of the requested placement range */ 125 + if (place->fpfn >= (node->start + num_pages) || 126 + (place->lpfn && place->lpfn <= node->start)) 127 + return false; 128 + 129 + return true; 130 + } 131 + 132 + static bool ttm_range_man_compatible(struct ttm_resource_manager *man, 133 + struct ttm_resource *res, 134 + const struct ttm_place *place, 135 + size_t size) 136 + { 137 + struct drm_mm_node *node = &to_ttm_range_mgr_node(res)->mm_nodes[0]; 138 + u32 num_pages = PFN_UP(size); 139 + 140 + if (node->start < place->fpfn || 141 + (place->lpfn && (node->start + num_pages) > place->lpfn)) 142 + return false; 143 + 144 + return true; 145 + } 146 + 116 147 static void ttm_range_man_debug(struct ttm_resource_manager *man, 117 148 struct drm_printer *printer) 118 149 { ··· 157 126 static const struct ttm_resource_manager_func ttm_range_manager_func = { 158 127 .alloc = ttm_range_man_alloc, 159 128 .free = ttm_range_man_free, 129 + .intersects = ttm_range_man_intersects, 130 + .compatible = ttm_range_man_compatible, 160 131 .debug = ttm_range_man_debug 161 132 }; 162 133
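Both new hooks reduce to interval tests on page frame numbers: a node occupies [start, start + num_pages) and a place confines allocations to [fpfn, lpfn), where lpfn == 0 means no upper bound. A standalone userspace sketch of the same predicates (names are illustrative; the logic mirrors the two functions above):

#include <assert.h>
#include <stdbool.h>

/* A node covers page frames [start, start + num_pages); a place limits
 * it to [fpfn, lpfn), with lpfn == 0 meaning "no upper bound".
 */
static bool intersects(unsigned long start, unsigned long num_pages,
                       unsigned long fpfn, unsigned long lpfn)
{
        return !(fpfn >= start + num_pages || (lpfn && lpfn <= start));
}

static bool compatible(unsigned long start, unsigned long num_pages,
                       unsigned long fpfn, unsigned long lpfn)
{
        return start >= fpfn && (!lpfn || start + num_pages <= lpfn);
}

int main(void)
{
        /* Pages [100, 110) against a place of [105, 200): the node
         * overlaps the range but is not fully contained in it, so it
         * is worth evicting yet not compatible in place.
         */
        assert(intersects(100, 10, 105, 200) && !compatible(100, 10, 105, 200));
        /* Fully inside [50, 200): both predicates hold. */
        assert(intersects(100, 10, 50, 200) && compatible(100, 10, 50, 200));
        /* Entirely below [200, unbounded): neither holds. */
        assert(!intersects(100, 10, 200, 0) && !compatible(100, 10, 200, 0));
        return 0;
}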
+62 -2
drivers/gpu/drm/ttm/ttm_resource.c
··· 253 253 } 254 254 EXPORT_SYMBOL(ttm_resource_free); 255 255 256 + /** 257 + * ttm_resource_intersects - test for intersection 258 + * 259 + * @bdev: TTM device structure 260 + * @res: The resource to test 261 + * @place: The placement to test 262 + * @size: How many bytes the new allocation needs. 263 + * 264 + * Test if @res intersects with @place and @size. Used for testing if evictions 265 + * are valuable or not. 266 + * 267 + * Returns true if the res placement intersects with @place and @size. 268 + */ 269 + bool ttm_resource_intersects(struct ttm_device *bdev, 270 + struct ttm_resource *res, 271 + const struct ttm_place *place, 272 + size_t size) 273 + { 274 + struct ttm_resource_manager *man; 275 + 276 + if (!res) 277 + return false; 278 + 279 + man = ttm_manager_type(bdev, res->mem_type); 280 + if (!place || !man->func->intersects) 281 + return true; 282 + 283 + return man->func->intersects(man, res, place, size); 284 + } 285 + 286 + /** 287 + * ttm_resource_compatible - test for compatibility 288 + * 289 + * @bdev: TTM device structure 290 + * @res: The resource to test 291 + * @place: The placement to test 292 + * @size: How many bytes the new allocation needs. 293 + * 294 + * Test if @res is compatible with @place and @size. 295 + * 296 + * Returns true if the res placement is compatible with @place and @size. 297 + */ 298 + bool ttm_resource_compatible(struct ttm_device *bdev, 299 + struct ttm_resource *res, 300 + const struct ttm_place *place, 301 + size_t size) 302 + { 303 + struct ttm_resource_manager *man; 304 + 305 + if (!res || !place) 306 + return false; 307 + 308 + man = ttm_manager_type(bdev, res->mem_type); 309 + if (!man->func->compatible) 310 + return true; 311 + 312 + return man->func->compatible(man, res, place, size); 313 + } 314 + 256 315 static bool ttm_resource_places_compat(struct ttm_resource *res, 257 316 const struct ttm_place *places, 258 317 unsigned num_placement) 259 318 { 319 + struct ttm_buffer_object *bo = res->bo; 320 + struct ttm_device *bdev = bo->bdev; 260 321 unsigned i; 261 322 262 323 if (res->placement & TTM_PL_FLAG_TEMPORARY) ··· 326 265 for (i = 0; i < num_placement; i++) { 327 266 const struct ttm_place *heap = &places[i]; 328 267 329 - if (res->start < heap->fpfn || (heap->lpfn && 330 - (res->start + res->num_pages) > heap->lpfn)) 268 + if (!ttm_resource_compatible(bdev, res, heap, bo->base.size)) 331 269 continue; 332 270 333 271 if ((res->mem_type == heap->mem_type) &&
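The wrappers deliberately default to permissive answers, so managers that never install the hooks keep their pre-refactor behaviour: a missing intersects hook means "assume it intersects", and a missing compatible hook means "assume it is compatible" whenever both resource and place exist. A hedged sketch of a caller, modeled on the ttm_bo_eviction_valuable() hunk above (the function name here is made up):

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_resource.h>

/* Sketch only: how an eviction decision consults the new wrapper.
 * A manager without an intersects hook makes this return true, which
 * matches the old "always consider it" behaviour.
 */
static bool sketch_eviction_valuable(struct ttm_buffer_object *bo,
                                     const struct ttm_place *place)
{
        /* System-domain BOs are always cheap to evict; everything else
         * is only worth evicting if it overlaps the requested range.
         */
        if (bo->resource->mem_type == TTM_PL_SYSTEM)
                return true;

        return ttm_resource_intersects(bo->bdev, bo->resource, place,
                                       bo->base.size);
}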
+2 -1
drivers/gpu/drm/tve200/tve200_drv.c
··· 64 64 struct tve200_drm_dev_private *priv = dev->dev_private; 65 65 struct drm_panel *panel; 66 66 struct drm_bridge *bridge; 67 - int ret = 0; 67 + int ret; 68 68 69 69 drm_mode_config_init(dev); 70 70 mode_config = &dev->mode_config; ··· 92 92 * method to get the connector out of the bridge. 93 93 */ 94 94 dev_err(dev->dev, "the bridge is not a panel\n"); 95 + ret = -EINVAL; 95 96 goto out_bridge; 96 97 } 97 98
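The tve200 hunk fixes an error path that jumped to the cleanup label with ret still zero, so a probe failure was silently reported as success. The corrected shape in a generic sketch (sketch_init() and sketch_find_panel() are hypothetical):

#include <linux/device.h>

extern bool sketch_find_panel(struct device *dev);

/* Sketch of the fixed error path: a failure branch must set a real
 * errno before jumping to the cleanup label, or the function returns
 * whatever happened to be in ret.
 */
static int sketch_init(struct device *dev)
{
        int ret;

        if (!sketch_find_panel(dev)) {
                dev_err(dev, "the bridge is not a panel\n");
                ret = -EINVAL;  /* this assignment was the missing piece */
                goto out;
        }

        ret = 0;
out:
        return ret;
}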
+40 -1
drivers/gpu/drm/vc4/vc4_crtc.c
··· 39 39 #include <drm/drm_atomic_uapi.h> 40 40 #include <drm/drm_fb_dma_helper.h> 41 41 #include <drm/drm_framebuffer.h> 42 + #include <drm/drm_drv.h> 42 43 #include <drm/drm_print.h> 43 44 #include <drm/drm_probe_helper.h> 44 45 #include <drm/drm_vblank.h> ··· 296 295 static void vc4_crtc_pixelvalve_reset(struct drm_crtc *crtc) 297 296 { 298 297 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 298 + struct drm_device *dev = crtc->dev; 299 + int idx; 300 + 301 + if (!drm_dev_enter(dev, &idx)) 302 + return; 299 303 300 304 /* The PV needs to be disabled before it can be flushed */ 301 305 CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) & ~PV_CONTROL_EN); 302 306 CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_FIFO_CLR); 307 + 308 + drm_dev_exit(idx); 303 309 } 304 310 305 311 static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encoder, ··· 329 321 u32 format = is_dsi1 ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24; 330 322 u8 ppc = pv_data->pixels_per_clock; 331 323 bool debug_dump_regs = false; 324 + int idx; 325 + 326 + if (!drm_dev_enter(dev, &idx)) 327 + return; 332 328 333 329 if (debug_dump_regs) { 334 330 struct drm_printer p = drm_info_printer(&vc4_crtc->pdev->dev); ··· 422 410 drm_crtc_index(crtc)); 423 411 drm_print_regset32(&p, &vc4_crtc->regset); 424 412 } 413 + 414 + drm_dev_exit(idx); 425 415 } 426 416 427 417 static void require_hvs_enabled(struct drm_device *dev) ··· 444 430 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 445 431 struct drm_device *dev = crtc->dev; 446 432 struct vc4_dev *vc4 = to_vc4_dev(dev); 447 - int ret; 433 + int idx, ret; 434 + 435 + if (!drm_dev_enter(dev, &idx)) 436 + return -ENODEV; 448 437 449 438 CRTC_WRITE(PV_V_CONTROL, 450 439 CRTC_READ(PV_V_CONTROL) & ~PV_VCONTROL_VIDEN); ··· 480 463 481 464 if (vc4_encoder && vc4_encoder->post_crtc_powerdown) 482 465 vc4_encoder->post_crtc_powerdown(encoder, state); 466 + 467 + drm_dev_exit(idx); 483 468 484 469 return 0; 485 470 } ··· 607 588 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 608 589 struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, new_state); 609 590 struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder); 591 + int idx; 610 592 611 593 drm_dbg(dev, "Enabling CRTC %s (%u) connected to Encoder %s (%u)", 612 594 crtc->name, crtc->base.id, encoder->name, encoder->base.id); 595 + 596 + if (!drm_dev_enter(dev, &idx)) 597 + return; 613 598 614 599 require_hvs_enabled(dev); 615 600 ··· 642 619 643 620 if (vc4_encoder->post_crtc_enable) 644 621 vc4_encoder->post_crtc_enable(encoder, state); 622 + 623 + drm_dev_exit(idx); 645 624 } 646 625 647 626 static enum drm_mode_status vc4_crtc_mode_valid(struct drm_crtc *crtc, ··· 736 711 static int vc4_enable_vblank(struct drm_crtc *crtc) 737 712 { 738 713 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 714 + struct drm_device *dev = crtc->dev; 715 + int idx; 716 + 717 + if (!drm_dev_enter(dev, &idx)) 718 + return -ENODEV; 739 719 740 720 CRTC_WRITE(PV_INTEN, PV_INT_VFP_START); 721 + 722 + drm_dev_exit(idx); 741 723 742 724 return 0; 743 725 } ··· 752 720 static void vc4_disable_vblank(struct drm_crtc *crtc) 753 721 { 754 722 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 723 + struct drm_device *dev = crtc->dev; 724 + int idx; 725 + 726 + if (!drm_dev_enter(dev, &idx)) 727 + return; 755 728 756 729 CRTC_WRITE(PV_INTEN, 0); 730 + 731 + drm_dev_exit(idx); 757 732 } 758 733 759 734 static void vc4_crtc_handle_page_flip(struct vc4_crtc *vc4_crtc)
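All of the vc4 CRTC register paths are now bracketed by drm_dev_enter()/drm_dev_exit(), so a concurrent unplug can no longer race them into touching an unmapped device. The bare pattern, as a sketch around a placeholder register access:

#include <drm/drm_drv.h>

static void sketch_write_reg(struct drm_device *dev)
{
        int idx;

        /* Fails (returns false) once drm_dev_unplug() has been called,
         * so the hardware access below is skipped on a gone device.
         */
        if (!drm_dev_enter(dev, &idx))
                return;

        /* ... MMIO access protected against device removal ... */

        drm_dev_exit(idx);
}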
+5 -2
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 1425 1425 mutex_lock(&vc4_hdmi->mutex); 1426 1426 1427 1427 if (!drm_dev_enter(drm, &idx)) 1428 - return; 1428 + goto out; 1429 1429 1430 1430 if (vc4_hdmi->variant->csc_setup) 1431 1431 vc4_hdmi->variant->csc_setup(vc4_hdmi, conn_state, mode); ··· 1436 1436 1437 1437 drm_dev_exit(idx); 1438 1438 1439 + out: 1439 1440 mutex_unlock(&vc4_hdmi->mutex); 1440 1441 } 1441 1442 ··· 1456 1455 mutex_lock(&vc4_hdmi->mutex); 1457 1456 1458 1457 if (!drm_dev_enter(drm, &idx)) 1459 - return; 1458 + goto out; 1460 1459 1461 1460 spin_lock_irqsave(&vc4_hdmi->hw_lock, flags); 1462 1461 ··· 1517 1516 vc4_hdmi_enable_scrambling(encoder); 1518 1517 1519 1518 drm_dev_exit(idx); 1519 + 1520 + out: 1520 1521 mutex_unlock(&vc4_hdmi->mutex); 1521 1522 } 1522 1523
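The vc4_hdmi change is a lock-leak fix: these functions took vc4_hdmi->mutex and then returned directly when drm_dev_enter() failed, leaving the mutex held forever. Sketched generically, the fixed control flow is:

#include <linux/mutex.h>
#include <drm/drm_drv.h>

static void sketch_locked_hw_access(struct drm_device *drm, struct mutex *lock)
{
        int idx;

        mutex_lock(lock);

        /* Returning here directly would leak the mutex; jump past the
         * hardware access instead so the unlock below always runs.
         */
        if (!drm_dev_enter(drm, &idx))
                goto out;

        /* ... register writes protected against unplug ... */

        drm_dev_exit(idx);
out:
        mutex_unlock(lock);
}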
+2 -2
drivers/gpu/drm/vc4/vc4_hvs.c
··· 71 71 struct drm_printer p = drm_info_printer(&hvs->pdev->dev); 72 72 int idx, i; 73 73 74 - drm_print_regset32(&p, &hvs->regset); 75 - 76 74 if (!drm_dev_enter(drm, &idx)) 77 75 return; 76 + 77 + drm_print_regset32(&p, &hvs->regset); 78 78 79 79 DRM_INFO("HVS ctx:\n"); 80 80 for (i = 0; i < 64; i += 4) {
+20
drivers/gpu/drm/vc4/vc4_plane.c
··· 19 19 #include <drm/drm_atomic_helper.h> 20 20 #include <drm/drm_atomic_uapi.h> 21 21 #include <drm/drm_blend.h> 22 + #include <drm/drm_drv.h> 22 23 #include <drm/drm_fb_dma_helper.h> 23 24 #include <drm/drm_fourcc.h> 24 25 #include <drm/drm_framebuffer.h> ··· 1220 1219 { 1221 1220 struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state); 1222 1221 int i; 1222 + int idx; 1223 + 1224 + if (!drm_dev_enter(plane->dev, &idx)) 1225 + goto out; 1223 1226 1224 1227 vc4_state->hw_dlist = dlist; 1225 1228 ··· 1231 1226 for (i = 0; i < vc4_state->dlist_count; i++) 1232 1227 writel(vc4_state->dlist[i], &dlist[i]); 1233 1228 1229 + drm_dev_exit(idx); 1230 + 1231 + out: 1234 1232 return vc4_state->dlist_count; 1235 1233 } 1236 1234 ··· 1253 1245 struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state); 1254 1246 struct drm_gem_dma_object *bo = drm_fb_dma_get_gem_obj(fb, 0); 1255 1247 uint32_t addr; 1248 + int idx; 1249 + 1250 + if (!drm_dev_enter(plane->dev, &idx)) 1251 + return; 1256 1252 1257 1253 /* We're skipping the address adjustment for negative origin, 1258 1254 * because this is only called on the primary plane. ··· 1275 1263 * also use our updated address. 1276 1264 */ 1277 1265 vc4_state->dlist[vc4_state->ptr0_offset] = addr; 1266 + 1267 + drm_dev_exit(idx); 1278 1268 } 1279 1269 1280 1270 static void vc4_plane_atomic_async_update(struct drm_plane *plane, ··· 1285 1271 struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1286 1272 plane); 1287 1273 struct vc4_plane_state *vc4_state, *new_vc4_state; 1274 + int idx; 1275 + 1276 + if (!drm_dev_enter(plane->dev, &idx)) 1277 + return; 1288 1278 1289 1279 swap(plane->state->fb, new_plane_state->fb); 1290 1280 plane->state->crtc_x = new_plane_state->crtc_x; ··· 1351 1333 &vc4_state->hw_dlist[vc4_state->pos2_offset]); 1352 1334 writel(vc4_state->dlist[vc4_state->ptr0_offset], 1353 1335 &vc4_state->hw_dlist[vc4_state->ptr0_offset]); 1336 + 1337 + drm_dev_exit(idx); 1354 1338 } 1355 1339 1356 1340 static int vc4_plane_atomic_async_check(struct drm_plane *plane,
+34 -93
drivers/gpu/drm/vc4/vc4_vec.c
··· 171 171 172 172 struct clk *clock; 173 173 174 - const struct vc4_vec_tv_mode *tv_mode; 175 - 176 174 struct debugfs_regset32 regset; 177 175 }; 178 176 ··· 192 194 193 195 struct vc4_vec_tv_mode { 194 196 const struct drm_display_mode *mode; 195 - void (*mode_set)(struct vc4_vec *vec); 197 + u32 config0; 198 + u32 config1; 199 + u32 custom_freq; 196 200 }; 197 201 198 202 static const struct debugfs_reg32 vec_regs[] = { ··· 224 224 VC4_REG32(VEC_DAC_MISC), 225 225 }; 226 226 227 - static void vc4_vec_ntsc_mode_set(struct vc4_vec *vec) 228 - { 229 - struct drm_device *drm = vec->connector.dev; 230 - int idx; 231 - 232 - if (!drm_dev_enter(drm, &idx)) 233 - return; 234 - 235 - VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_NTSC_STD | VEC_CONFIG0_PDEN); 236 - VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS); 237 - 238 - drm_dev_exit(idx); 239 - } 240 - 241 - static void vc4_vec_ntsc_j_mode_set(struct vc4_vec *vec) 242 - { 243 - struct drm_device *drm = vec->connector.dev; 244 - int idx; 245 - 246 - if (!drm_dev_enter(drm, &idx)) 247 - return; 248 - 249 - VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_NTSC_STD); 250 - VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS); 251 - 252 - drm_dev_exit(idx); 253 - } 254 - 255 227 static const struct drm_display_mode ntsc_mode = { 256 228 DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 13500, 257 229 720, 720 + 14, 720 + 14 + 64, 720 + 14 + 64 + 60, 0, 258 - 480, 480 + 3, 480 + 3 + 3, 480 + 3 + 3 + 16, 0, 230 + 480, 480 + 7, 480 + 7 + 6, 525, 0, 259 231 DRM_MODE_FLAG_INTERLACE) 260 232 }; 261 - 262 - static void vc4_vec_pal_mode_set(struct vc4_vec *vec) 263 - { 264 - struct drm_device *drm = vec->connector.dev; 265 - int idx; 266 - 267 - if (!drm_dev_enter(drm, &idx)) 268 - return; 269 - 270 - VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_PAL_BDGHI_STD); 271 - VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS); 272 - 273 - drm_dev_exit(idx); 274 - } 275 - 276 - static void vc4_vec_pal_m_mode_set(struct vc4_vec *vec) 277 - { 278 - struct drm_device *drm = vec->connector.dev; 279 - int idx; 280 - 281 - if (!drm_dev_enter(drm, &idx)) 282 - return; 283 - 284 - VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_PAL_BDGHI_STD); 285 - VEC_WRITE(VEC_CONFIG1, 286 - VEC_CONFIG1_C_CVBS_CVBS | VEC_CONFIG1_CUSTOM_FREQ); 287 - VEC_WRITE(VEC_FREQ3_2, 0x223b); 288 - VEC_WRITE(VEC_FREQ1_0, 0x61d1); 289 - 290 - drm_dev_exit(idx); 291 - } 292 233 293 234 static const struct drm_display_mode pal_mode = { 294 235 DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 13500, 295 236 720, 720 + 20, 720 + 20 + 64, 720 + 20 + 64 + 60, 0, 296 - 576, 576 + 2, 576 + 2 + 3, 576 + 2 + 3 + 20, 0, 237 + 576, 576 + 4, 576 + 4 + 6, 625, 0, 297 238 DRM_MODE_FLAG_INTERLACE) 298 239 }; 299 240 300 241 static const struct vc4_vec_tv_mode vc4_vec_tv_modes[] = { 301 242 [VC4_VEC_TV_MODE_NTSC] = { 302 243 .mode = &ntsc_mode, 303 - .mode_set = vc4_vec_ntsc_mode_set, 244 + .config0 = VEC_CONFIG0_NTSC_STD | VEC_CONFIG0_PDEN, 245 + .config1 = VEC_CONFIG1_C_CVBS_CVBS, 304 246 }, 305 247 [VC4_VEC_TV_MODE_NTSC_J] = { 306 248 .mode = &ntsc_mode, 307 - .mode_set = vc4_vec_ntsc_j_mode_set, 249 + .config0 = VEC_CONFIG0_NTSC_STD, 250 + .config1 = VEC_CONFIG1_C_CVBS_CVBS, 308 251 }, 309 252 [VC4_VEC_TV_MODE_PAL] = { 310 253 .mode = &pal_mode, 311 - .mode_set = vc4_vec_pal_mode_set, 254 + .config0 = VEC_CONFIG0_PAL_BDGHI_STD, 255 + .config1 = VEC_CONFIG1_C_CVBS_CVBS, 312 256 }, 313 257 [VC4_VEC_TV_MODE_PAL_M] = { 314 258 .mode = &pal_mode, 315 - .mode_set = vc4_vec_pal_m_mode_set, 259 + .config0 = VEC_CONFIG0_PAL_BDGHI_STD, 260 + .config1 = VEC_CONFIG1_C_CVBS_CVBS | 
VEC_CONFIG1_CUSTOM_FREQ, 261 + .custom_freq = 0x223b61d1, 316 262 }, 317 263 }; 318 264 ··· 314 368 drm_object_attach_property(&connector->base, 315 369 dev->mode_config.tv_mode_property, 316 370 VC4_VEC_TV_MODE_NTSC); 317 - vec->tv_mode = &vc4_vec_tv_modes[VC4_VEC_TV_MODE_NTSC]; 318 371 319 372 drm_connector_attach_encoder(connector, &vec->encoder.base); 320 373 321 374 return 0; 322 375 } 323 376 324 - static void vc4_vec_encoder_disable(struct drm_encoder *encoder) 377 + static void vc4_vec_encoder_disable(struct drm_encoder *encoder, 378 + struct drm_atomic_state *state) 325 379 { 326 380 struct drm_device *drm = encoder->dev; 327 381 struct vc4_vec *vec = encoder_to_vc4_vec(encoder); ··· 352 406 drm_dev_exit(idx); 353 407 } 354 408 355 - static void vc4_vec_encoder_enable(struct drm_encoder *encoder) 409 + static void vc4_vec_encoder_enable(struct drm_encoder *encoder, 410 + struct drm_atomic_state *state) 356 411 { 357 412 struct drm_device *drm = encoder->dev; 358 413 struct vc4_vec *vec = encoder_to_vc4_vec(encoder); 414 + struct drm_connector *connector = &vec->connector; 415 + struct drm_connector_state *conn_state = 416 + drm_atomic_get_new_connector_state(state, connector); 417 + const struct vc4_vec_tv_mode *tv_mode = 418 + &vc4_vec_tv_modes[conn_state->tv.mode]; 359 419 int idx, ret; 360 420 361 421 if (!drm_dev_enter(drm, &idx)) ··· 420 468 /* Mask all interrupts. */ 421 469 VEC_WRITE(VEC_MASK0, 0); 422 470 423 - vec->tv_mode->mode_set(vec); 471 + VEC_WRITE(VEC_CONFIG0, tv_mode->config0); 472 + VEC_WRITE(VEC_CONFIG1, tv_mode->config1); 473 + 474 + if (tv_mode->custom_freq) { 475 + VEC_WRITE(VEC_FREQ3_2, 476 + (tv_mode->custom_freq >> 16) & 0xffff); 477 + VEC_WRITE(VEC_FREQ1_0, 478 + tv_mode->custom_freq & 0xffff); 479 + } 424 480 425 481 VEC_WRITE(VEC_DAC_MISC, 426 482 VEC_DAC_MISC_VID_ACT | VEC_DAC_MISC_DAC_RST_N); ··· 441 481 pm_runtime_put(&vec->pdev->dev); 442 482 err_dev_exit: 443 483 drm_dev_exit(idx); 444 - } 445 - 446 - 447 - static bool vc4_vec_encoder_mode_fixup(struct drm_encoder *encoder, 448 - const struct drm_display_mode *mode, 449 - struct drm_display_mode *adjusted_mode) 450 - { 451 - return true; 452 - } 453 - 454 - static void vc4_vec_encoder_atomic_mode_set(struct drm_encoder *encoder, 455 - struct drm_crtc_state *crtc_state, 456 - struct drm_connector_state *conn_state) 457 - { 458 - struct vc4_vec *vec = encoder_to_vc4_vec(encoder); 459 - 460 - vec->tv_mode = &vc4_vec_tv_modes[conn_state->tv.mode]; 461 484 } 462 485 463 486 static int vc4_vec_encoder_atomic_check(struct drm_encoder *encoder, ··· 459 516 } 460 517 461 518 static const struct drm_encoder_helper_funcs vc4_vec_encoder_helper_funcs = { 462 - .disable = vc4_vec_encoder_disable, 463 - .enable = vc4_vec_encoder_enable, 464 - .mode_fixup = vc4_vec_encoder_mode_fixup, 465 519 .atomic_check = vc4_vec_encoder_atomic_check, 466 - .atomic_mode_set = vc4_vec_encoder_atomic_mode_set, 520 + .atomic_disable = vc4_vec_encoder_disable, 521 + .atomic_enable = vc4_vec_encoder_enable, 467 522 }; 468 523 469 524 static int vc4_vec_late_register(struct drm_encoder *encoder)
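With the table-driven conversion, the PAL-M chroma frequency lives in one 32-bit constant that is split across the two 16-bit VEC registers at enable time. The split, checked standalone (plain userspace C; the register names appear only in the comment):

#include <assert.h>
#include <stdint.h>

int main(void)
{
        uint32_t custom_freq = 0x223b61d1;

        /* High and low halves, as written to VEC_FREQ3_2 / VEC_FREQ1_0. */
        assert(((custom_freq >> 16) & 0xffff) == 0x223b);
        assert((custom_freq & 0xffff) == 0x61d1);
        return 0;
}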
+1 -1
drivers/gpu/drm/via/via_dri1.c
··· 2961 2961 drm_via_private_t *dev_priv = 2962 2962 (drm_via_private_t *) dev->dev_private; 2963 2963 2964 - if (dev_priv->ring.virtual_start) { 2964 + if (dev_priv->ring.virtual_start && dev_priv->mmio) { 2965 2965 via_cmdbuf_reset(dev_priv); 2966 2966 2967 2967 drm_legacy_ioremapfree(&dev_priv->ring.map, dev);
+2
drivers/gpu/drm/virtio/virtgpu_display.c
··· 349 349 vgdev->ddev->mode_config.max_width = XRES_MAX; 350 350 vgdev->ddev->mode_config.max_height = YRES_MAX; 351 351 352 + vgdev->ddev->mode_config.fb_modifiers_not_supported = true; 353 + 352 354 for (i = 0 ; i < vgdev->num_scanouts; ++i) 353 355 vgdev_output_init(vgdev, i); 354 356
+1 -1
drivers/gpu/drm/virtio/virtgpu_ioctl.c
··· 168 168 * array contains any fence from a foreign context. 169 169 */ 170 170 ret = 0; 171 - if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context)) 171 + if (!dma_fence_match_context(in_fence, fence_ctx + ring_idx)) 172 172 ret = dma_fence_wait(in_fence, true); 173 173 174 174 dma_fence_put(in_fence);
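The virtgpu fix compares the incoming fence against the per-ring context (fence_ctx + ring_idx) instead of the device-global one, so fences from the submitting ring itself, which the ring already orders, are no longer waited on. The shape of the check, as a sketch with hypothetical surroundings:

#include <linux/dma-fence.h>

/* fence_ctx is the base context of the driver's fence timelines and
 * ring_idx selects the ring the submission targets; only a
 * foreign-context fence needs an explicit wait.
 */
static int sketch_maybe_wait(struct dma_fence *in_fence, u64 fence_ctx,
                             u32 ring_idx)
{
        int ret = 0;

        if (!dma_fence_match_context(in_fence, fence_ctx + ring_idx))
                ret = dma_fence_wait(in_fence, true);

        return ret;
}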
+1
drivers/gpu/drm/vkms/Makefile
··· 3 3 vkms_drv.o \ 4 4 vkms_plane.o \ 5 5 vkms_output.o \ 6 + vkms_formats.o \ 6 7 vkms_crtc.o \ 7 8 vkms_composer.o \ 8 9 vkms_writeback.o
+155 -191
drivers/gpu/drm/vkms/vkms_composer.c
··· 7 7 #include <drm/drm_fourcc.h> 8 8 #include <drm/drm_gem_framebuffer_helper.h> 9 9 #include <drm/drm_vblank.h> 10 + #include <linux/minmax.h> 10 11 11 12 #include "vkms_drv.h" 12 13 13 - static u32 get_pixel_from_buffer(int x, int y, const u8 *buffer, 14 - const struct vkms_composer *composer) 14 + static u16 pre_mul_blend_channel(u16 src, u16 dst, u16 alpha) 15 15 { 16 - u32 pixel; 17 - int src_offset = composer->offset + (y * composer->pitch) 18 - + (x * composer->cpp); 16 + u32 new_color; 19 17 20 - pixel = *(u32 *)&buffer[src_offset]; 18 + new_color = (src * 0xffff + dst * (0xffff - alpha)); 21 19 22 - return pixel; 20 + return DIV_ROUND_CLOSEST(new_color, 0xffff); 23 21 } 24 22 25 23 /** 26 - * compute_crc - Compute CRC value on output frame 24 + * pre_mul_alpha_blend - alpha blending equation 25 + * @frame_info: source framebuffer's metadata 26 + * @stage_buffer: The line with the pixels from src_plane 27 + * @output_buffer: A line buffer that receives all the blends output 27 28 * 28 - * @vaddr: address to final framebuffer 29 - * @composer: framebuffer's metadata 29 + * Using the information from the `frame_info`, this blends only the 30 + * necessary pixels from the `stage_buffer` to the `output_buffer` 31 + * using the premultiplied blend formula. 30 32 * 31 - * returns CRC value computed using crc32 on the visible portion of 32 - * the final framebuffer at vaddr_out 33 + * The current DRM assumption is that pixel color values have already been 34 + * pre-multiplied with the alpha channel values. See 35 + * drm_plane_create_blend_mode_property(). Also, this formula assumes a 36 + * completely opaque background. 33 37 */ 34 - static uint32_t compute_crc(const u8 *vaddr, 35 - const struct vkms_composer *composer) 38 + static void pre_mul_alpha_blend(struct vkms_frame_info *frame_info, 39 + struct line_buffer *stage_buffer, 40 + struct line_buffer *output_buffer) 36 41 { 37 - int x, y; 38 - u32 crc = 0, pixel = 0; 39 - int x_src = composer->src.x1 >> 16; 40 - int y_src = composer->src.y1 >> 16; 41 - int h_src = drm_rect_height(&composer->src) >> 16; 42 - int w_src = drm_rect_width(&composer->src) >> 16; 42 + int x_dst = frame_info->dst.x1; 43 + struct pixel_argb_u16 *out = output_buffer->pixels + x_dst; 44 + struct pixel_argb_u16 *in = stage_buffer->pixels; 45 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 46 + stage_buffer->n_pixels); 43 47 44 - for (y = y_src; y < y_src + h_src; ++y) { 45 - for (x = x_src; x < x_src + w_src; ++x) { 46 - pixel = get_pixel_from_buffer(x, y, vaddr, composer); 47 - crc = crc32_le(crc, (void *)&pixel, sizeof(u32)); 48 - } 49 - } 50 - 51 - return crc; 52 - } 53 - 54 - static u8 blend_channel(u8 src, u8 dst, u8 alpha) 55 - { 56 - u32 pre_blend; 57 - u8 new_color; 58 - 59 - pre_blend = (src * 255 + dst * (255 - alpha)); 60 - 61 - /* Faster div by 255 */ 62 - new_color = ((pre_blend + ((pre_blend + 257) >> 8)) >> 8); 63 - 64 - return new_color; 65 - } 66 - 67 - /** 68 - * alpha_blend - alpha blending equation 69 - * @argb_src: src pixel on premultiplied alpha mode 70 - * @argb_dst: dst pixel completely opaque 71 - * 72 - * blend pixels using premultiplied blend formula. The current DRM assumption 73 - * is that pixel color values have been already pre-multiplied with the alpha 74 - * channel values. See more drm_plane_create_blend_mode_property(). Also, this 75 - * formula assumes a completely opaque background.
76 - */ 77 - static void alpha_blend(const u8 *argb_src, u8 *argb_dst) 78 - { 79 - u8 alpha; 80 - 81 - alpha = argb_src[3]; 82 - argb_dst[0] = blend_channel(argb_src[0], argb_dst[0], alpha); 83 - argb_dst[1] = blend_channel(argb_src[1], argb_dst[1], alpha); 84 - argb_dst[2] = blend_channel(argb_src[2], argb_dst[2], alpha); 85 - } 86 - 87 - /** 88 - * x_blend - blending equation that ignores the pixel alpha 89 - * 90 - * overwrites RGB color value from src pixel to dst pixel. 91 - */ 92 - static void x_blend(const u8 *xrgb_src, u8 *xrgb_dst) 93 - { 94 - memcpy(xrgb_dst, xrgb_src, sizeof(u8) * 3); 95 - } 96 - 97 - /** 98 - * blend - blend value at vaddr_src with value at vaddr_dst 99 - * @vaddr_dst: destination address 100 - * @vaddr_src: source address 101 - * @dst_composer: destination framebuffer's metadata 102 - * @src_composer: source framebuffer's metadata 103 - * @pixel_blend: blending equation based on plane format 104 - * 105 - * Blend the vaddr_src value with the vaddr_dst value using a pixel blend 106 - * equation according to the supported plane formats DRM_FORMAT_(A/XRGB8888) 107 - * and clearing alpha channel to an completely opaque background. This function 108 - * uses buffer's metadata to locate the new composite values at vaddr_dst. 109 - * 110 - * TODO: completely clear the primary plane (a = 0xff) before starting to blend 111 - * pixel color values 112 - */ 113 - static void blend(void *vaddr_dst, void *vaddr_src, 114 - struct vkms_composer *dst_composer, 115 - struct vkms_composer *src_composer, 116 - void (*pixel_blend)(const u8 *, u8 *)) 117 - { 118 - int i, j, j_dst, i_dst; 119 - int offset_src, offset_dst; 120 - u8 *pixel_dst, *pixel_src; 121 - 122 - int x_src = src_composer->src.x1 >> 16; 123 - int y_src = src_composer->src.y1 >> 16; 124 - 125 - int x_dst = src_composer->dst.x1; 126 - int y_dst = src_composer->dst.y1; 127 - int h_dst = drm_rect_height(&src_composer->dst); 128 - int w_dst = drm_rect_width(&src_composer->dst); 129 - 130 - int y_limit = y_src + h_dst; 131 - int x_limit = x_src + w_dst; 132 - 133 - for (i = y_src, i_dst = y_dst; i < y_limit; ++i) { 134 - for (j = x_src, j_dst = x_dst; j < x_limit; ++j) { 135 - offset_dst = dst_composer->offset 136 - + (i_dst * dst_composer->pitch) 137 - + (j_dst++ * dst_composer->cpp); 138 - offset_src = src_composer->offset 139 - + (i * src_composer->pitch) 140 - + (j * src_composer->cpp); 141 - 142 - pixel_src = (u8 *)(vaddr_src + offset_src); 143 - pixel_dst = (u8 *)(vaddr_dst + offset_dst); 144 - pixel_blend(pixel_src, pixel_dst); 145 - /* clearing alpha channel (0xff)*/ 146 - pixel_dst[3] = 0xff; 147 - } 148 - i_dst++; 48 + for (int x = 0; x < x_limit; x++) { 49 + out[x].a = (u16)0xffff; 50 + out[x].r = pre_mul_blend_channel(in[x].r, out[x].r, in[x].a); 51 + out[x].g = pre_mul_blend_channel(in[x].g, out[x].g, in[x].a); 52 + out[x].b = pre_mul_blend_channel(in[x].b, out[x].b, in[x].a); 149 53 } 150 54 } 151 55 152 - static void compose_plane(struct vkms_composer *primary_composer, 153 - struct vkms_composer *plane_composer, 154 - void *vaddr_out) 56 + static bool check_y_limit(struct vkms_frame_info *frame_info, int y) 155 57 { 156 - struct drm_framebuffer *fb = &plane_composer->fb; 157 - void *vaddr; 158 - void (*pixel_blend)(const u8 *p_src, u8 *p_dst); 58 + if (y >= frame_info->dst.y1 && y < frame_info->dst.y2) 59 + return true; 159 60 160 - if (WARN_ON(iosys_map_is_null(&plane_composer->map[0]))) 161 - return; 162 - 163 - vaddr = plane_composer->map[0].vaddr; 164 - 165 - if (fb->format->format == 
DRM_FORMAT_ARGB8888) 166 - pixel_blend = &alpha_blend; 167 - else 168 - pixel_blend = &x_blend; 169 - 170 - blend(vaddr_out, vaddr, primary_composer, plane_composer, pixel_blend); 61 + return false; 171 - } 62 + } 172 - 63 + 173 - static int compose_active_planes(void **vaddr_out, 174 - struct vkms_composer *primary_composer, 175 - struct vkms_crtc_state *crtc_state) 64 + static void fill_background(const struct pixel_argb_u16 *background_color, 65 + struct line_buffer *output_buffer) 176 66 { 177 - struct drm_framebuffer *fb = &primary_composer->fb; 178 - struct drm_gem_object *gem_obj = drm_gem_fb_get_obj(fb, 0); 179 - const void *vaddr; 180 - int i; 67 + for (size_t i = 0; i < output_buffer->n_pixels; i++) 68 + output_buffer->pixels[i] = *background_color; 69 + } 181 70 182 - if (!*vaddr_out) { 183 - *vaddr_out = kvzalloc(gem_obj->size, GFP_KERNEL); 184 - if (!*vaddr_out) { 185 - DRM_ERROR("Cannot allocate memory for output frame."); 186 - return -ENOMEM; 71 + /** 72 + * @wb: The writeback job containing the frame buffer metadata 73 + * @crtc_state: The crtc state 74 + * @crc32: The crc output of the final frame 75 + * @output_buffer: A buffer of a row that will receive the result of the blend(s) 76 + * @stage_buffer: The line with the pixels from the plane being blended into the output 77 + * 78 + * This function blends the pixels (using `pre_mul_alpha_blend`) 79 + * from all planes, calculates the crc32 of the output from the former step, 80 + * and, if necessary, converts and stores the output to the writeback buffer. 81 + */ 82 + static void blend(struct vkms_writeback_job *wb, 83 + struct vkms_crtc_state *crtc_state, 84 + u32 *crc32, struct line_buffer *stage_buffer, 85 + struct line_buffer *output_buffer, size_t row_size) 86 + { 87 + struct vkms_plane_state **plane = crtc_state->active_planes; 88 + u32 n_active_planes = crtc_state->num_active_planes; 89 + 90 + const struct pixel_argb_u16 background_color = { .a = 0xffff }; 91 + 92 + size_t crtc_y_limit = crtc_state->base.crtc->mode.vdisplay; 93 + 94 + for (size_t y = 0; y < crtc_y_limit; y++) { 95 + fill_background(&background_color, output_buffer); 96 + 97 + /* The active planes are composed associatively in z-order. 
*/ 98 + for (size_t i = 0; i < n_active_planes; i++) { 99 + if (!check_y_limit(plane[i]->frame_info, y)) 100 + continue; 101 + 102 + plane[i]->plane_read(stage_buffer, plane[i]->frame_info, y); 103 + pre_mul_alpha_blend(plane[i]->frame_info, stage_buffer, 104 + output_buffer); 187 105 } 106 + 107 + *crc32 = crc32_le(*crc32, (void *)output_buffer->pixels, row_size); 108 + 109 + if (wb) 110 + wb->wb_write(&wb->wb_frame_info, output_buffer, y); 188 111 } 112 + } 189 113 190 - if (WARN_ON(iosys_map_is_null(&primary_composer->map[0]))) 191 - return -EINVAL; 114 + static int check_format_funcs(struct vkms_crtc_state *crtc_state, 115 + struct vkms_writeback_job *active_wb) 116 + { 117 + struct vkms_plane_state **planes = crtc_state->active_planes; 118 + u32 n_active_planes = crtc_state->num_active_planes; 192 119 193 - vaddr = primary_composer->map[0].vaddr; 120 + for (size_t i = 0; i < n_active_planes; i++) 121 + if (!planes[i]->plane_read) 122 + return -1; 194 123 195 - memcpy(*vaddr_out, vaddr, gem_obj->size); 196 - 197 - /* If there are other planes besides primary, we consider the active 198 - * planes should be in z-order and compose them associatively: 199 - * ((primary <- overlay) <- cursor) 200 - */ 201 - for (i = 1; i < crtc_state->num_active_planes; i++) 202 - compose_plane(primary_composer, 203 - crtc_state->active_planes[i]->composer, 204 - *vaddr_out); 124 + if (active_wb && !active_wb->wb_write) 125 + return -1; 205 126 206 127 return 0; 128 + } 129 + 130 + static int check_iosys_map(struct vkms_crtc_state *crtc_state) 131 + { 132 + struct vkms_plane_state **plane_state = crtc_state->active_planes; 133 + u32 n_active_planes = crtc_state->num_active_planes; 134 + 135 + for (size_t i = 0; i < n_active_planes; i++) 136 + if (iosys_map_is_null(&plane_state[i]->frame_info->map[0])) 137 + return -1; 138 + 139 + return 0; 140 + } 141 + 142 + static int compose_active_planes(struct vkms_writeback_job *active_wb, 143 + struct vkms_crtc_state *crtc_state, 144 + u32 *crc32) 145 + { 146 + size_t line_width, pixel_size = sizeof(struct pixel_argb_u16); 147 + struct line_buffer output_buffer, stage_buffer; 148 + int ret = 0; 149 + 150 + /* 151 + * This check exists so we can call `crc32_le` for the entire line 152 + * instead of doing it for each channel of each pixel, in case 153 + * `struct pixel_argb_u16` has any gap added by the compiler 154 + * between the struct fields. 
155 + */ 156 + static_assert(sizeof(struct pixel_argb_u16) == 8); 157 + 158 + if (WARN_ON(check_iosys_map(crtc_state))) 159 + return -EINVAL; 160 + 161 + if (WARN_ON(check_format_funcs(crtc_state, active_wb))) 162 + return -EINVAL; 163 + 164 + line_width = crtc_state->base.crtc->mode.hdisplay; 165 + stage_buffer.n_pixels = line_width; 166 + output_buffer.n_pixels = line_width; 167 + 168 + stage_buffer.pixels = kvmalloc(line_width * pixel_size, GFP_KERNEL); 169 + if (!stage_buffer.pixels) { 170 + DRM_ERROR("Cannot allocate memory for the intermediate line buffer"); 171 + return -ENOMEM; 172 + } 173 + 174 + output_buffer.pixels = kvmalloc(line_width * pixel_size, GFP_KERNEL); 175 + if (!output_buffer.pixels) { 176 + DRM_ERROR("Cannot allocate memory for the output line buffer"); 177 + ret = -ENOMEM; 178 + goto free_stage_buffer; 179 + } 180 + 181 + blend(active_wb, crtc_state, crc32, &stage_buffer, 182 + &output_buffer, line_width * pixel_size); 183 + 184 + kvfree(output_buffer.pixels); 185 + free_stage_buffer: 186 + kvfree(stage_buffer.pixels); 187 + 188 + return ret; 207 189 } 208 190 209 191 /** ··· 203 221 struct vkms_crtc_state, 204 222 composer_work); 205 223 struct drm_crtc *crtc = crtc_state->base.crtc; 224 + struct vkms_writeback_job *active_wb = crtc_state->active_writeback; 206 225 struct vkms_output *out = drm_crtc_to_vkms_output(crtc); 207 - struct vkms_composer *primary_composer = NULL; 208 - struct vkms_plane_state *act_plane = NULL; 209 226 bool crc_pending, wb_pending; 210 - void *vaddr_out = NULL; 211 - u32 crc32 = 0; 212 227 u64 frame_start, frame_end; 228 + u32 crc32 = 0; 213 229 int ret; 214 230 215 231 spin_lock_irq(&out->composer_lock); ··· 227 247 if (!crc_pending) 228 248 return; 229 249 230 - if (crtc_state->num_active_planes >= 1) { 231 - act_plane = crtc_state->active_planes[0]; 232 - if (act_plane->base.base.plane->type == DRM_PLANE_TYPE_PRIMARY) 233 - primary_composer = act_plane->composer; 234 - } 235 - 236 - if (!primary_composer) 237 - return; 238 - 239 250 if (wb_pending) 240 - vaddr_out = crtc_state->active_writeback->data[0].vaddr; 251 + ret = compose_active_planes(active_wb, crtc_state, &crc32); 252 + else 253 + ret = compose_active_planes(NULL, crtc_state, &crc32); 241 254 242 - ret = compose_active_planes(&vaddr_out, primary_composer, 243 - crtc_state); 244 - if (ret) { 245 - if (ret == -EINVAL && !wb_pending) 246 - kvfree(vaddr_out); 255 + if (ret) 247 256 return; 248 - } 249 - 250 - crc32 = compute_crc(vaddr_out, primary_composer); 251 257 252 258 if (wb_pending) { 253 259 drm_writeback_signal_completion(&out->wb_connector, 0); 254 260 spin_lock_irq(&out->composer_lock); 255 261 crtc_state->wb_pending = false; 256 262 spin_unlock_irq(&out->composer_lock); 257 - } else { 258 - kvfree(vaddr_out); 259 263 } 260 264 261 265 /*
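At the heart of the new per-line composer is the 16-bit premultiplied blend. A standalone restatement with a worked check (userspace C; the 64-bit intermediate just makes the headroom explicit — the in-kernel u32 is sufficient because a premultiplied src never exceeds alpha):

#include <assert.h>
#include <stdint.h>

/* out = src + dst * (1 - alpha), with every channel in [0, 0xffff]
 * and the source already multiplied by its alpha.
 */
static uint16_t blend_channel(uint16_t src, uint16_t dst, uint16_t alpha)
{
        uint64_t v = (uint64_t)src * 0xffff + (uint64_t)dst * (0xffff - alpha);

        return (uint16_t)((v + 0xffff / 2) / 0xffff); /* DIV_ROUND_CLOSEST */
}

int main(void)
{
        /* An opaque source replaces the destination... */
        assert(blend_channel(0x8000, 0x1234, 0xffff) == 0x8000);
        /* ...and a fully transparent one leaves it untouched. */
        assert(blend_channel(0x0000, 0xabcd, 0x0000) == 0xabcd);
        /* 0.25 premultiplied source over a 0.5 destination at 50%
         * alpha: 0.25 + 0.5 * 0.5 = 0.5.
         */
        assert(blend_channel(0x4000, 0x8000, 0x8000) == 0x8000);
        return 0;
}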
+23 -10
drivers/gpu/drm/vkms/vkms_drv.h
··· 23 23 24 24 #define NUM_OVERLAY_PLANES 8 25 25 26 - struct vkms_writeback_job { 27 - struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 28 - struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 29 - }; 30 - 31 - struct vkms_composer { 32 - struct drm_framebuffer fb; 26 + struct vkms_frame_info { 27 + struct drm_framebuffer *fb; 33 28 struct drm_rect src, dst; 34 - struct iosys_map map[4]; 29 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 35 30 unsigned int offset; 36 31 unsigned int pitch; 37 32 unsigned int cpp; 38 33 }; 39 34 35 + struct pixel_argb_u16 { 36 + u16 a, r, g, b; 37 + }; 38 + 39 + struct line_buffer { 40 + size_t n_pixels; 41 + struct pixel_argb_u16 *pixels; 42 + }; 43 + 44 + struct vkms_writeback_job { 45 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 46 + struct vkms_frame_info wb_frame_info; 47 + void (*wb_write)(struct vkms_frame_info *frame_info, 48 + const struct line_buffer *buffer, int y); 49 + }; 50 + 40 51 /** 41 52 * vkms_plane_state - Driver specific plane state 42 53 * @base: base plane state 43 - * @composer: data required for composing computation 54 + * @frame_info: data required for composing computation 44 55 */ 45 56 struct vkms_plane_state { 46 57 struct drm_shadow_plane_state base; 47 - struct vkms_composer *composer; 58 + struct vkms_frame_info *frame_info; 59 + void (*plane_read)(struct line_buffer *buffer, 60 + const struct vkms_frame_info *frame_info, int y); 48 61 }; 49 62 50 63 struct vkms_plane {
+301
drivers/gpu/drm/vkms/vkms_formats.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + #include <drm/drm_rect.h> 4 + #include <linux/minmax.h> 5 + 6 + #include "vkms_formats.h" 7 + 8 + /* The following macros help with fixed-point arithmetic. */ 9 + /* 10 + * With Fixed-Point scale 15 we have 17 and 15 bits of integer and fractional 11 + * parts respectively. 12 + * | 0000 0000 0000 0000 0.000 0000 0000 0000 | 13 + * 31 0 14 + */ 15 + #define SHIFT 15 16 + 17 + #define INT_TO_FIXED(a) ((a) << SHIFT) 18 + #define FIXED_MUL(a, b) ((s32)(((s64)(a) * (b)) >> SHIFT)) 19 + #define FIXED_DIV(a, b) ((s32)(((s64)(a) << SHIFT) / (b))) 20 + /* This macro converts a fixed-point number to int, rounding half up */ 21 + #define FIXED_TO_INT_ROUND(a) (((a) + (1 << (SHIFT - 1))) >> SHIFT) 22 + #define INT_TO_FIXED_DIV(a, b) (FIXED_DIV(INT_TO_FIXED(a), INT_TO_FIXED(b))) 24 + 25 + static size_t pixel_offset(const struct vkms_frame_info *frame_info, int x, int y) 26 + { 27 + return frame_info->offset + (y * frame_info->pitch) 28 + + (x * frame_info->cpp); 29 + } 30 + 31 + /* 32 + * packed_pixels_addr - Get the pointer to the pixel at a given pair of coordinates 33 + * 34 + * @frame_info: Buffer metadata 35 + * @x: The x (width) coordinate of the 2D buffer 36 + * @y: The y (height) coordinate of the 2D buffer 37 + * 38 + * Takes the information stored in the frame_info, a pair of coordinates, and 39 + * returns the address of the first color channel. 40 + * This function assumes the channels are packed together, i.e. a color channel 41 + * comes immediately after another in memory. Therefore, this function 42 + * doesn't work for YUV with chroma subsampling (e.g. YUV420 and NV21). 43 + */ 44 + static void *packed_pixels_addr(const struct vkms_frame_info *frame_info, 45 + int x, int y) 46 + { 47 + size_t offset = pixel_offset(frame_info, x, y); 48 + 49 + return (u8 *)frame_info->map[0].vaddr + offset; 50 + } 51 + 52 + static void *get_packed_src_addr(const struct vkms_frame_info *frame_info, int y) 53 + { 54 + int x_src = frame_info->src.x1 >> 16; 55 + int y_src = y - frame_info->dst.y1 + (frame_info->src.y1 >> 16); 56 + 57 + return packed_pixels_addr(frame_info, x_src, y_src); 58 + } 59 + 60 + static void ARGB8888_to_argb_u16(struct line_buffer *stage_buffer, 61 + const struct vkms_frame_info *frame_info, int y) 62 + { 63 + struct pixel_argb_u16 *out_pixels = stage_buffer->pixels; 64 + u8 *src_pixels = get_packed_src_addr(frame_info, y); 65 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 66 + stage_buffer->n_pixels); 67 + 68 + for (size_t x = 0; x < x_limit; x++, src_pixels += 4) { 69 + /* 70 + * The 257 is the "conversion ratio". This number is obtained by the 71 + * (2^16 - 1) / (2^8 - 1) division, which, in this case, tries to get 72 + * the best color value in a pixel format with more possibilities. 73 + * A similar idea applies to other RGB color conversions. 
74 + */ 75 + out_pixels[x].a = (u16)src_pixels[3] * 257; 76 + out_pixels[x].r = (u16)src_pixels[2] * 257; 77 + out_pixels[x].g = (u16)src_pixels[1] * 257; 78 + out_pixels[x].b = (u16)src_pixels[0] * 257; 79 + } 80 + } 81 + 82 + static void XRGB8888_to_argb_u16(struct line_buffer *stage_buffer, 83 + const struct vkms_frame_info *frame_info, int y) 84 + { 85 + struct pixel_argb_u16 *out_pixels = stage_buffer->pixels; 86 + u8 *src_pixels = get_packed_src_addr(frame_info, y); 87 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 88 + stage_buffer->n_pixels); 89 + 90 + for (size_t x = 0; x < x_limit; x++, src_pixels += 4) { 91 + out_pixels[x].a = (u16)0xffff; 92 + out_pixels[x].r = (u16)src_pixels[2] * 257; 93 + out_pixels[x].g = (u16)src_pixels[1] * 257; 94 + out_pixels[x].b = (u16)src_pixels[0] * 257; 95 + } 96 + } 97 + 98 + static void ARGB16161616_to_argb_u16(struct line_buffer *stage_buffer, 99 + const struct vkms_frame_info *frame_info, 100 + int y) 101 + { 102 + struct pixel_argb_u16 *out_pixels = stage_buffer->pixels; 103 + u16 *src_pixels = get_packed_src_addr(frame_info, y); 104 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 105 + stage_buffer->n_pixels); 106 + 107 + for (size_t x = 0; x < x_limit; x++, src_pixels += 4) { 108 + out_pixels[x].a = le16_to_cpu(src_pixels[3]); 109 + out_pixels[x].r = le16_to_cpu(src_pixels[2]); 110 + out_pixels[x].g = le16_to_cpu(src_pixels[1]); 111 + out_pixels[x].b = le16_to_cpu(src_pixels[0]); 112 + } 113 + } 114 + 115 + static void XRGB16161616_to_argb_u16(struct line_buffer *stage_buffer, 116 + const struct vkms_frame_info *frame_info, 117 + int y) 118 + { 119 + struct pixel_argb_u16 *out_pixels = stage_buffer->pixels; 120 + u16 *src_pixels = get_packed_src_addr(frame_info, y); 121 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 122 + stage_buffer->n_pixels); 123 + 124 + for (size_t x = 0; x < x_limit; x++, src_pixels += 4) { 125 + out_pixels[x].a = (u16)0xffff; 126 + out_pixels[x].r = le16_to_cpu(src_pixels[2]); 127 + out_pixels[x].g = le16_to_cpu(src_pixels[1]); 128 + out_pixels[x].b = le16_to_cpu(src_pixels[0]); 129 + } 130 + } 131 + 132 + static void RGB565_to_argb_u16(struct line_buffer *stage_buffer, 133 + const struct vkms_frame_info *frame_info, int y) 134 + { 135 + struct pixel_argb_u16 *out_pixels = stage_buffer->pixels; 136 + u16 *src_pixels = get_packed_src_addr(frame_info, y); 137 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 138 + stage_buffer->n_pixels); 139 + 140 + s32 fp_rb_ratio = INT_TO_FIXED_DIV(65535, 31); 141 + s32 fp_g_ratio = INT_TO_FIXED_DIV(65535, 63); 142 + 143 + for (size_t x = 0; x < x_limit; x++, src_pixels++) { 144 + u16 rgb_565 = le16_to_cpu(*src_pixels); 145 + s32 fp_r = INT_TO_FIXED((rgb_565 >> 11) & 0x1f); 146 + s32 fp_g = INT_TO_FIXED((rgb_565 >> 5) & 0x3f); 147 + s32 fp_b = INT_TO_FIXED(rgb_565 & 0x1f); 148 + 149 + out_pixels[x].a = (u16)0xffff; 150 + out_pixels[x].r = FIXED_TO_INT_ROUND(FIXED_MUL(fp_r, fp_rb_ratio)); 151 + out_pixels[x].g = FIXED_TO_INT_ROUND(FIXED_MUL(fp_g, fp_g_ratio)); 152 + out_pixels[x].b = FIXED_TO_INT_ROUND(FIXED_MUL(fp_b, fp_rb_ratio)); 153 + } 154 + } 155 + 156 + /* 157 + * The following functions take a line of argb_u16 pixels from the 158 + * src_buffer, convert them to a specific format, and store them in the 159 + * destination. 160 + * 161 + * They are used in `compose_active_planes()` to convert and store a line 162 + * from the src_buffer to the writeback buffer. 
163 + */ 164 + static void argb_u16_to_ARGB8888(struct vkms_frame_info *frame_info, 165 + const struct line_buffer *src_buffer, int y) 166 + { 167 + int x_dst = frame_info->dst.x1; 168 + u8 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y); 169 + struct pixel_argb_u16 *in_pixels = src_buffer->pixels; 170 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 171 + src_buffer->n_pixels); 172 + 173 + for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) { 174 + /* 175 + * The sequence below is important because the format's byte order is 176 + * little-endian. In the case of ARGB8888 the memory is 177 + * organized this way: 178 + * 179 + * | Addr | = blue channel 180 + * | Addr + 1 | = green channel 181 + * | Addr + 2 | = red channel 182 + * | Addr + 3 | = alpha channel 183 + */ 184 + dst_pixels[3] = DIV_ROUND_CLOSEST(in_pixels[x].a, 257); 185 + dst_pixels[2] = DIV_ROUND_CLOSEST(in_pixels[x].r, 257); 186 + dst_pixels[1] = DIV_ROUND_CLOSEST(in_pixels[x].g, 257); 187 + dst_pixels[0] = DIV_ROUND_CLOSEST(in_pixels[x].b, 257); 188 + } 189 + } 190 + 191 + static void argb_u16_to_XRGB8888(struct vkms_frame_info *frame_info, 192 + const struct line_buffer *src_buffer, int y) 193 + { 194 + int x_dst = frame_info->dst.x1; 195 + u8 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y); 196 + struct pixel_argb_u16 *in_pixels = src_buffer->pixels; 197 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 198 + src_buffer->n_pixels); 199 + 200 + for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) { 201 + dst_pixels[3] = 0xff; 202 + dst_pixels[2] = DIV_ROUND_CLOSEST(in_pixels[x].r, 257); 203 + dst_pixels[1] = DIV_ROUND_CLOSEST(in_pixels[x].g, 257); 204 + dst_pixels[0] = DIV_ROUND_CLOSEST(in_pixels[x].b, 257); 205 + } 206 + } 207 + 208 + static void argb_u16_to_ARGB16161616(struct vkms_frame_info *frame_info, 209 + const struct line_buffer *src_buffer, int y) 210 + { 211 + int x_dst = frame_info->dst.x1; 212 + u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y); 213 + struct pixel_argb_u16 *in_pixels = src_buffer->pixels; 214 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 215 + src_buffer->n_pixels); 216 + 217 + for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) { 218 + dst_pixels[3] = cpu_to_le16(in_pixels[x].a); 219 + dst_pixels[2] = cpu_to_le16(in_pixels[x].r); 220 + dst_pixels[1] = cpu_to_le16(in_pixels[x].g); 221 + dst_pixels[0] = cpu_to_le16(in_pixels[x].b); 222 + } 223 + } 224 + 225 + static void argb_u16_to_XRGB16161616(struct vkms_frame_info *frame_info, 226 + const struct line_buffer *src_buffer, int y) 227 + { 228 + int x_dst = frame_info->dst.x1; 229 + u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y); 230 + struct pixel_argb_u16 *in_pixels = src_buffer->pixels; 231 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 232 + src_buffer->n_pixels); 233 + 234 + for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) { 235 + dst_pixels[3] = 0xffff; 236 + dst_pixels[2] = cpu_to_le16(in_pixels[x].r); 237 + dst_pixels[1] = cpu_to_le16(in_pixels[x].g); 238 + dst_pixels[0] = cpu_to_le16(in_pixels[x].b); 239 + } 240 + } 241 + 242 + static void argb_u16_to_RGB565(struct vkms_frame_info *frame_info, 243 + const struct line_buffer *src_buffer, int y) 244 + { 245 + int x_dst = frame_info->dst.x1; 246 + u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y); 247 + struct pixel_argb_u16 *in_pixels = src_buffer->pixels; 248 + int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst), 249 + src_buffer->n_pixels); 250 + 251 + 
s32 fp_rb_ratio = INT_TO_FIXED_DIV(65535, 31); 252 + s32 fp_g_ratio = INT_TO_FIXED_DIV(65535, 63); 253 + 254 + for (size_t x = 0; x < x_limit; x++, dst_pixels++) { 255 + s32 fp_r = INT_TO_FIXED(in_pixels[x].r); 256 + s32 fp_g = INT_TO_FIXED(in_pixels[x].g); 257 + s32 fp_b = INT_TO_FIXED(in_pixels[x].b); 258 + 259 + u16 r = FIXED_TO_INT_ROUND(FIXED_DIV(fp_r, fp_rb_ratio)); 260 + u16 g = FIXED_TO_INT_ROUND(FIXED_DIV(fp_g, fp_g_ratio)); 261 + u16 b = FIXED_TO_INT_ROUND(FIXED_DIV(fp_b, fp_rb_ratio)); 262 + 263 + *dst_pixels = cpu_to_le16(r << 11 | g << 5 | b); 264 + } 265 + } 266 + 267 + void *get_frame_to_line_function(u32 format) 268 + { 269 + switch (format) { 270 + case DRM_FORMAT_ARGB8888: 271 + return &ARGB8888_to_argb_u16; 272 + case DRM_FORMAT_XRGB8888: 273 + return &XRGB8888_to_argb_u16; 274 + case DRM_FORMAT_ARGB16161616: 275 + return &ARGB16161616_to_argb_u16; 276 + case DRM_FORMAT_XRGB16161616: 277 + return &XRGB16161616_to_argb_u16; 278 + case DRM_FORMAT_RGB565: 279 + return &RGB565_to_argb_u16; 280 + default: 281 + return NULL; 282 + } 283 + } 284 + 285 + void *get_line_to_frame_function(u32 format) 286 + { 287 + switch (format) { 288 + case DRM_FORMAT_ARGB8888: 289 + return &argb_u16_to_ARGB8888; 290 + case DRM_FORMAT_XRGB8888: 291 + return &argb_u16_to_XRGB8888; 292 + case DRM_FORMAT_ARGB16161616: 293 + return &argb_u16_to_ARGB16161616; 294 + case DRM_FORMAT_XRGB16161616: 295 + return &argb_u16_to_XRGB16161616; 296 + case DRM_FORMAT_RGB565: 297 + return &argb_u16_to_RGB565; 298 + default: 299 + return NULL; 300 + } 301 + }
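The S17.15 helpers exist mainly so the RGB565 paths can scale 5- and 6-bit channels to 16 bits without floating point. A standalone userspace check of the round trip (macro bodies copied from the hunk above, with C stdint types; 15 maps to 15 * 65535 / 31 = 31710.48..., which rounds to 31710):

#include <assert.h>
#include <stdint.h>

#define SHIFT 15
#define INT_TO_FIXED(a)        ((a) << SHIFT)
#define FIXED_MUL(a, b)        ((int32_t)(((int64_t)(a) * (b)) >> SHIFT))
#define FIXED_DIV(a, b)        ((int32_t)(((int64_t)(a) << SHIFT) / (b)))
#define FIXED_TO_INT_ROUND(a)  (((a) + (1 << (SHIFT - 1))) >> SHIFT)
#define INT_TO_FIXED_DIV(a, b) (FIXED_DIV(INT_TO_FIXED(a), INT_TO_FIXED(b)))

int main(void)
{
        /* Expansion ratio for a 5-bit RGB565 channel: 65535 / 31. */
        int32_t ratio = INT_TO_FIXED_DIV(65535, 31);

        /* The endpoints map exactly... */
        assert(FIXED_TO_INT_ROUND(FIXED_MUL(INT_TO_FIXED(0), ratio)) == 0);
        assert(FIXED_TO_INT_ROUND(FIXED_MUL(INT_TO_FIXED(31), ratio)) == 65535);
        /* ...and 15/31 of full scale rounds from 31710.48 to 31710. */
        assert(FIXED_TO_INT_ROUND(FIXED_MUL(INT_TO_FIXED(15), ratio)) == 31710);
        return 0;
}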
+12
drivers/gpu/drm/vkms/vkms_formats.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + 3 + #ifndef _VKMS_FORMATS_H_ 4 + #define _VKMS_FORMATS_H_ 5 + 6 + #include "vkms_drv.h" 7 + 8 + void *get_frame_to_line_function(u32 format); 9 + 10 + void *get_line_to_frame_function(u32 format); 11 + 12 + #endif /* _VKMS_FORMATS_H_ */
+29 -21
drivers/gpu/drm/vkms/vkms_plane.c
··· 9 9 #include <drm/drm_gem_framebuffer_helper.h> 10 10 11 11 #include "vkms_drv.h" 12 + #include "vkms_formats.h" 12 13 13 14 static const u32 vkms_formats[] = { 14 15 DRM_FORMAT_XRGB8888, 16 + DRM_FORMAT_XRGB16161616, 17 + DRM_FORMAT_RGB565 15 18 }; 16 19 17 20 static const u32 vkms_plane_formats[] = { 18 21 DRM_FORMAT_ARGB8888, 19 - DRM_FORMAT_XRGB8888 22 + DRM_FORMAT_XRGB8888, 23 + DRM_FORMAT_XRGB16161616, 24 + DRM_FORMAT_ARGB16161616, 25 + DRM_FORMAT_RGB565 20 26 }; 21 27 22 28 static struct drm_plane_state * 23 29 vkms_plane_duplicate_state(struct drm_plane *plane) 24 30 { 25 31 struct vkms_plane_state *vkms_state; 26 - struct vkms_composer *composer; 32 + struct vkms_frame_info *frame_info; 27 33 28 34 vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL); 29 35 if (!vkms_state) 30 36 return NULL; 31 37 32 - composer = kzalloc(sizeof(*composer), GFP_KERNEL); 33 - if (!composer) { 34 - DRM_DEBUG_KMS("Couldn't allocate composer\n"); 38 + frame_info = kzalloc(sizeof(*frame_info), GFP_KERNEL); 39 + if (!frame_info) { 40 + DRM_DEBUG_KMS("Couldn't allocate frame_info\n"); 35 41 kfree(vkms_state); 36 42 return NULL; 37 43 } 38 44 39 - vkms_state->composer = composer; 45 + vkms_state->frame_info = frame_info; 40 46 41 47 __drm_gem_duplicate_shadow_plane_state(plane, &vkms_state->base); 42 48 ··· 55 49 struct vkms_plane_state *vkms_state = to_vkms_plane_state(old_state); 56 50 struct drm_crtc *crtc = vkms_state->base.base.crtc; 57 51 58 - if (crtc) { 52 + if (crtc && vkms_state->frame_info->fb) { 59 53 /* dropping the reference we acquired in 60 54 * vkms_primary_plane_update() 61 55 */ 62 - if (drm_framebuffer_read_refcount(&vkms_state->composer->fb)) 63 - drm_framebuffer_put(&vkms_state->composer->fb); 56 + if (drm_framebuffer_read_refcount(vkms_state->frame_info->fb)) 57 + drm_framebuffer_put(vkms_state->frame_info->fb); 64 58 } 65 59 66 - kfree(vkms_state->composer); 67 - vkms_state->composer = NULL; 60 + kfree(vkms_state->frame_info); 61 + vkms_state->frame_info = NULL; 68 62 69 63 __drm_gem_destroy_shadow_plane_state(&vkms_state->base); 70 64 kfree(vkms_state); ··· 104 98 struct vkms_plane_state *vkms_plane_state; 105 99 struct drm_shadow_plane_state *shadow_plane_state; 106 100 struct drm_framebuffer *fb = new_state->fb; 107 - struct vkms_composer *composer; 101 + struct vkms_frame_info *frame_info; 102 + u32 fmt = fb->format->format; 108 103 109 104 if (!new_state->crtc || !fb) 110 105 return; ··· 113 106 vkms_plane_state = to_vkms_plane_state(new_state); 114 107 shadow_plane_state = &vkms_plane_state->base; 115 108 116 - composer = vkms_plane_state->composer; 117 - memcpy(&composer->src, &new_state->src, sizeof(struct drm_rect)); 118 - memcpy(&composer->dst, &new_state->dst, sizeof(struct drm_rect)); 119 - memcpy(&composer->fb, fb, sizeof(struct drm_framebuffer)); 120 - memcpy(&composer->map, &shadow_plane_state->data, sizeof(composer->map)); 121 - drm_framebuffer_get(&composer->fb); 122 - composer->offset = fb->offsets[0]; 123 - composer->pitch = fb->pitches[0]; 124 - composer->cpp = fb->format->cpp[0]; 109 + frame_info = vkms_plane_state->frame_info; 110 + memcpy(&frame_info->src, &new_state->src, sizeof(struct drm_rect)); 111 + memcpy(&frame_info->dst, &new_state->dst, sizeof(struct drm_rect)); 112 + frame_info->fb = fb; 113 + memcpy(&frame_info->map, &shadow_plane_state->data, sizeof(frame_info->map)); 114 + drm_framebuffer_get(frame_info->fb); 115 + frame_info->offset = fb->offsets[0]; 116 + frame_info->pitch = fb->pitches[0]; 117 + frame_info->cpp = fb->format->cpp[0]; 118 + 
vkms_plane_state->plane_read = get_frame_to_line_function(fb->format->format); 125 119 } 126 120 127 121 static int vkms_plane_atomic_check(struct drm_plane *plane,
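Note on the hunk above: as originally posted, the format was read at declaration time (u32 fmt = fb->format->format;) before the !fb check, a potential NULL dereference when the plane is disabled; the format lookup is moved after the check here. The interesting change is plane_read: vkms now resolves a per-format line-conversion callback once per plane update via get_frame_to_line_function(), instead of branching on the pixel format for every pixel during composition. A minimal sketch of how such a lookup can be structured; the callback signature and handler names are illustrative assumptions, only get_frame_to_line_function() itself appears in this series:

    #include <drm/drm_fourcc.h>

    struct vkms_frame_info;
    struct pixel_argb_u16; /* one pixel of the 16-bit-per-channel working format */

    /* Reads line y of the source framebuffer and converts it to the
     * internal ARGB16161616 working format used for blending. */
    typedef void (*frame_to_line_t)(const struct vkms_frame_info *frame_info,
                                    struct pixel_argb_u16 *out, int y);

    /* One reader per entry in vkms_plane_formats[]; definitions elided. */
    static void XRGB8888_to_argb_u16(const struct vkms_frame_info *fi,
                                     struct pixel_argb_u16 *out, int y);
    static void XRGB16161616_to_argb_u16(const struct vkms_frame_info *fi,
                                         struct pixel_argb_u16 *out, int y);
    static void RGB565_to_argb_u16(const struct vkms_frame_info *fi,
                                   struct pixel_argb_u16 *out, int y);

    static frame_to_line_t get_frame_to_line_function(u32 format)
    {
            switch (format) {
            case DRM_FORMAT_XRGB8888:
                    return &XRGB8888_to_argb_u16;
            case DRM_FORMAT_XRGB16161616:
                    return &XRGB16161616_to_argb_u16;
            case DRM_FORMAT_RGB565:
                    return &RGB565_to_argb_u16;
            default:
                    return NULL; /* formats were validated at atomic_check time */
            }
    }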
+31 -8
drivers/gpu/drm/vkms/vkms_writeback.c
··· 12 12 #include <drm/drm_gem_shmem_helper.h> 13 13 14 14 #include "vkms_drv.h" 15 + #include "vkms_formats.h" 15 16 16 17 static const u32 vkms_wb_formats[] = { 17 18 DRM_FORMAT_XRGB8888, 19 + DRM_FORMAT_XRGB16161616, 20 + DRM_FORMAT_ARGB16161616, 21 + DRM_FORMAT_RGB565 18 22 }; 19 23 20 24 static const struct drm_connector_funcs vkms_wb_connector_funcs = { ··· 35 31 { 36 32 struct drm_framebuffer *fb; 37 33 const struct drm_display_mode *mode = &crtc_state->mode; 34 + int ret; 38 35 39 36 if (!conn_state->writeback_job || !conn_state->writeback_job->fb) 40 37 return 0; ··· 47 42 return -EINVAL; 48 43 } 49 44 50 - if (fb->format->format != vkms_wb_formats[0]) { 51 - DRM_DEBUG_KMS("Invalid pixel format %p4cc\n", 52 - &fb->format->format); 53 - return -EINVAL; 54 - } 45 + ret = drm_atomic_helper_check_wb_encoder_state(encoder, conn_state); 46 + if (ret < 0) 47 + return ret; 55 48 56 49 return 0; 57 50 } ··· 79 76 if (!vkmsjob) 80 77 return -ENOMEM; 81 78 82 - ret = drm_gem_fb_vmap(job->fb, vkmsjob->map, vkmsjob->data); 79 + ret = drm_gem_fb_vmap(job->fb, vkmsjob->wb_frame_info.map, vkmsjob->data); 83 80 if (ret) { 84 81 DRM_ERROR("vmap failed: %d\n", ret); 85 82 goto err_kfree; 86 83 } 84 + 85 + vkmsjob->wb_frame_info.fb = job->fb; 86 + drm_framebuffer_get(vkmsjob->wb_frame_info.fb); 87 87 88 88 job->priv = vkmsjob; 89 89 ··· 106 100 if (!job->fb) 107 101 return; 108 102 109 - drm_gem_fb_vunmap(job->fb, vkmsjob->map); 103 + drm_gem_fb_vunmap(job->fb, vkmsjob->wb_frame_info.map); 104 + 105 + drm_framebuffer_put(vkmsjob->wb_frame_info.fb); 110 106 111 107 vkmsdev = drm_device_to_vkms_device(job->fb->dev); 112 108 vkms_set_composer(&vkmsdev->output, false); ··· 125 117 struct drm_writeback_connector *wb_conn = &output->wb_connector; 126 118 struct drm_connector_state *conn_state = wb_conn->base.state; 127 119 struct vkms_crtc_state *crtc_state = output->composer_state; 120 + struct drm_framebuffer *fb = connector_state->writeback_job->fb; 121 + u16 crtc_height = crtc_state->base.crtc->mode.vdisplay; 122 + u16 crtc_width = crtc_state->base.crtc->mode.hdisplay; 123 + struct vkms_writeback_job *active_wb; 124 + struct vkms_frame_info *wb_frame_info; 125 + u32 wb_format = fb->format->format; 128 126 129 127 if (!conn_state) 130 128 return; 131 129 132 130 vkms_set_composer(&vkmsdev->output, true); 133 131 132 + active_wb = conn_state->writeback_job->priv; 133 + wb_frame_info = &active_wb->wb_frame_info; 134 + 134 135 spin_lock_irq(&output->composer_lock); 135 - crtc_state->active_writeback = conn_state->writeback_job->priv; 136 + crtc_state->active_writeback = active_wb; 137 + wb_frame_info->offset = fb->offsets[0]; 138 + wb_frame_info->pitch = fb->pitches[0]; 139 + wb_frame_info->cpp = fb->format->cpp[0]; 136 140 crtc_state->wb_pending = true; 137 141 spin_unlock_irq(&output->composer_lock); 138 142 drm_writeback_queue_job(wb_conn, connector_state); 143 + active_wb->wb_write = get_line_to_frame_function(wb_format); 144 + drm_rect_init(&wb_frame_info->src, 0, 0, crtc_width, crtc_height); 145 + drm_rect_init(&wb_frame_info->dst, 0, 0, crtc_width, crtc_height); 139 146 } 140 147 141 148 static const struct drm_connector_helper_funcs vkms_wb_conn_helper_funcs = {
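The hand-rolled test against vkms_wb_formats[0] is replaced by the new drm_atomic_helper_check_wb_encoder_state() helper (declared in drm_atomic_helper.h further down), which validates the writeback framebuffer against every format the writeback connector advertised. In rough terms the semantics are the following sketch; this is an approximation of the behavior, not the helper's exact body:

    #include <drm/drm_framebuffer.h>
    #include <drm/drm_print.h>
    #include <drm/drm_writeback.h>

    /* Accept the writeback state only if the job's framebuffer format is
     * one of the formats the connector was initialized with. */
    static int check_wb_format(struct drm_writeback_connector *wb_conn,
                               struct drm_framebuffer *fb)
    {
            const struct drm_property_blob *blob = wb_conn->pixel_formats_blob_ptr;
            const u32 *formats = blob->data;
            size_t i, count = blob->length / sizeof(*formats);

            for (i = 0; i < count; i++)
                    if (formats[i] == fb->format->format)
                            return 0;

            DRM_DEBUG_KMS("Invalid pixel format %p4cc\n", &fb->format->format);
            return -EINVAL;
    }

This also explains the new XRGB16161616/ARGB16161616/RGB565 entries in vkms_wb_formats[]: the writeback path now converts through the same per-format helpers as the planes.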
+63 -19
drivers/video/hdmi.c
··· 21 21 * DEALINGS IN THE SOFTWARE. 22 22 */ 23 23 24 + #include <drm/display/drm_dp.h> 24 25 #include <linux/bitops.h> 25 26 #include <linux/bug.h> 26 27 #include <linux/errno.h> ··· 382 381 * 383 382 * Returns 0 on success or a negative error code on failure. 384 383 */ 385 - int hdmi_audio_infoframe_check(struct hdmi_audio_infoframe *frame) 384 + int hdmi_audio_infoframe_check(const struct hdmi_audio_infoframe *frame) 386 385 { 387 386 return hdmi_audio_infoframe_check_only(frame); 388 387 } 389 388 EXPORT_SYMBOL(hdmi_audio_infoframe_check); 389 + 390 + static void 391 + hdmi_audio_infoframe_pack_payload(const struct hdmi_audio_infoframe *frame, 392 + u8 *buffer) 393 + { 394 + u8 channels; 395 + 396 + if (frame->channels >= 2) 397 + channels = frame->channels - 1; 398 + else 399 + channels = 0; 400 + 401 + buffer[0] = ((frame->coding_type & 0xf) << 4) | (channels & 0x7); 402 + buffer[1] = ((frame->sample_frequency & 0x7) << 2) | 403 + (frame->sample_size & 0x3); 404 + buffer[2] = frame->coding_type_ext & 0x1f; 405 + buffer[3] = frame->channel_allocation; 406 + buffer[4] = (frame->level_shift_value & 0xf) << 3; 407 + 408 + if (frame->downmix_inhibit) 409 + buffer[4] |= BIT(7); 410 + } 390 411 391 412 /** 392 413 * hdmi_audio_infoframe_pack_only() - write HDMI audio infoframe to binary buffer ··· 427 404 ssize_t hdmi_audio_infoframe_pack_only(const struct hdmi_audio_infoframe *frame, 428 405 void *buffer, size_t size) 429 406 { 430 - unsigned char channels; 431 407 u8 *ptr = buffer; 432 408 size_t length; 433 409 int ret; ··· 442 420 443 421 memset(buffer, 0, size); 444 422 445 - if (frame->channels >= 2) 446 - channels = frame->channels - 1; 447 - else 448 - channels = 0; 449 - 450 423 ptr[0] = frame->type; 451 424 ptr[1] = frame->version; 452 425 ptr[2] = frame->length; 453 426 ptr[3] = 0; /* checksum */ 454 427 455 - /* start infoframe payload */ 456 - ptr += HDMI_INFOFRAME_HEADER_SIZE; 457 - 458 - ptr[0] = ((frame->coding_type & 0xf) << 4) | (channels & 0x7); 459 - ptr[1] = ((frame->sample_frequency & 0x7) << 2) | 460 - (frame->sample_size & 0x3); 461 - ptr[2] = frame->coding_type_ext & 0x1f; 462 - ptr[3] = frame->channel_allocation; 463 - ptr[4] = (frame->level_shift_value & 0xf) << 3; 464 - 465 - if (frame->downmix_inhibit) 466 - ptr[4] |= BIT(7); 428 + hdmi_audio_infoframe_pack_payload(frame, 429 + ptr + HDMI_INFOFRAME_HEADER_SIZE); 467 430 468 431 hdmi_infoframe_set_checksum(buffer, length); 469 432 ··· 485 478 return hdmi_audio_infoframe_pack_only(frame, buffer, size); 486 479 } 487 480 EXPORT_SYMBOL(hdmi_audio_infoframe_pack); 481 + 482 + /** 483 + * hdmi_audio_infoframe_pack_for_dp - Pack an HDMI Audio infoframe for DisplayPort 484 + * 485 + * @frame: HDMI Audio infoframe 486 + * @sdp: Secondary data packet for DisplayPort. 487 + * @dp_version: DisplayPort version to be encoded in the header 488 + * 489 + * Packs an HDMI Audio Infoframe to be sent over DisplayPort. This function 490 + * fills the secondary data packet to be used for DisplayPort. 491 + * 492 + * Return: Total number of bytes written, or a negative errno on failure. 
493 + */ 494 + ssize_t 495 + hdmi_audio_infoframe_pack_for_dp(const struct hdmi_audio_infoframe *frame, 496 + struct dp_sdp *sdp, u8 dp_version) 497 + { 498 + int ret; 499 + 500 + ret = hdmi_audio_infoframe_check(frame); 501 + if (ret) 502 + return ret; 503 + 504 + memset(sdp->db, 0, sizeof(sdp->db)); 505 + 506 + /* Secondary-data packet header */ 507 + sdp->sdp_header.HB0 = 0; 508 + sdp->sdp_header.HB1 = frame->type; 509 + sdp->sdp_header.HB2 = DP_SDP_AUDIO_INFOFRAME_HB2; 510 + sdp->sdp_header.HB3 = (dp_version & 0x3f) << 2; 511 + 512 + hdmi_audio_infoframe_pack_payload(frame, sdp->db); 513 + 514 + /* Return size = frame length + the four sdp_header bytes (HB0-HB3) */ 515 + return frame->length + 4; 516 + } 517 + EXPORT_SYMBOL(hdmi_audio_infoframe_pack_for_dp); 488 518 489 519 /** 490 520 * hdmi_vendor_infoframe_init() - initialize an HDMI vendor infoframe
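From a DP driver, the new packing helper is used roughly as follows; the wrapper function and the 0x13 (DP 1.3) version value are assumptions for illustration:

    #include <linux/hdmi.h>
    #include <drm/display/drm_dp.h>

    static ssize_t fill_audio_sdp(struct dp_sdp *sdp)
    {
            struct hdmi_audio_infoframe frame;

            hdmi_audio_infoframe_init(&frame);
            frame.channels = 2;
            frame.coding_type = HDMI_AUDIO_CODING_TYPE_STREAM;

            /* 0x13 = DP 1.3 (assumed); ends up in HB3 bits 7:2 */
            return hdmi_audio_infoframe_pack_for_dp(&frame, sdp, 0x13);
    }

Factoring the payload packing into hdmi_audio_infoframe_pack_payload() lets both sinks share it: the HDMI path prepends the CEA-861 header and checksum, while the DP path wraps the same payload bytes in an SDP header with no CEA checksum byte.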
+2
include/drm/display/drm_dp.h
··· 1536 1536 #define DP_SDP_VSC_EXT_CEA 0x21 /* DP 1.4 */ 1537 1537 /* 0x80+ CEA-861 infoframe types */ 1538 1538 1539 + #define DP_SDP_AUDIO_INFOFRAME_HB2 0x1b 1540 + 1539 1541 /** 1540 1542 * struct dp_sdp_header - DP secondary data packet header 1541 1543 * @HB0: Secondary Data Packet ID
+2
include/drm/display/drm_dp_helper.h
··· 69 69 u8 drm_dp_link_rate_to_bw_code(int link_rate); 70 70 int drm_dp_bw_code_to_link_rate(u8 link_bw); 71 71 72 + const char *drm_dp_phy_name(enum drm_dp_phy dp_phy); 73 + 72 74 /** 73 75 * struct drm_dp_vsc_sdp - drm DP VSC SDP 74 76 *
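drm_dp_phy_name() is a small new helper that maps an enum drm_dp_phy value to a printable name ("DPRX", "LTTPR 1", ...), so callers no longer keep private name tables for link-training logs. A hypothetical call site:

    /* aux and dp_phy come from the surrounding link-training code */
    drm_dbg_dp(aux->drm_dev, "%s: channel EQ failed\n",
               drm_dp_phy_name(dp_phy));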
+133 -101
include/drm/display/drm_dp_mst_helper.h
··· 49 49 struct drm_dp_mst_branch; 50 50 51 51 /** 52 - * struct drm_dp_vcpi - Virtual Channel Payload Identifier 53 - * @vcpi: Virtual channel ID. 54 - * @pbn: Payload Bandwidth Number for this channel 55 - * @aligned_pbn: PBN aligned with slot size 56 - * @num_slots: number of slots for this PBN 57 - */ 58 - struct drm_dp_vcpi { 59 - int vcpi; 60 - int pbn; 61 - int aligned_pbn; 62 - int num_slots; 63 - }; 64 - 65 - /** 66 52 * struct drm_dp_mst_port - MST port 67 53 * @port_num: port number 68 54 * @input: if this port is an input port. Protected by ··· 128 142 struct drm_dp_aux aux; /* i2c bus for this port? */ 129 143 struct drm_dp_mst_branch *parent; 130 144 131 - struct drm_dp_vcpi vcpi; 132 145 struct drm_connector *connector; 133 146 struct drm_dp_mst_topology_mgr *mgr; 134 147 ··· 512 527 void (*poll_hpd_irq)(struct drm_dp_mst_topology_mgr *mgr); 513 528 }; 514 529 515 - #define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8) 516 - 517 - #define DP_PAYLOAD_LOCAL 1 518 - #define DP_PAYLOAD_REMOTE 2 519 - #define DP_PAYLOAD_DELETE_LOCAL 3 520 - 521 - struct drm_dp_payload { 522 - int payload_state; 523 - int start_slot; 524 - int num_slots; 525 - int vcpi; 526 - }; 527 - 528 530 #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base) 529 531 530 - struct drm_dp_vcpi_allocation { 532 + /** 533 + * struct drm_dp_mst_atomic_payload - Atomic state struct for an MST payload 534 + * 535 + * The primary atomic state structure for a given MST payload. Stores information like current 536 + * bandwidth allocation, intended action for this payload, etc. 537 + */ 538 + struct drm_dp_mst_atomic_payload { 539 + /** @port: The MST port assigned to this payload */ 531 540 struct drm_dp_mst_port *port; 532 - int vcpi; 541 + 542 + /** 543 + * @vc_start_slot: The time slot that this payload starts on. Because payload start slots 544 + * can't be determined ahead of time, the contents of this value are UNDEFINED at atomic 545 + * check time. This shouldn't usually matter, as the start slot should never be relevant for 546 + * atomic state computations. 547 + * 548 + * Since this value is determined at commit time instead of check time, this value is 549 + * protected by the MST helpers ensuring that async commits operating on the given topology 550 + * never run in parallel. In the event that a driver does need to read this value (e.g. to 551 + * inform hardware of the starting timeslot for a payload), the driver may either: 552 + * 553 + * * Read this field during the atomic commit after 554 + * drm_dp_mst_atomic_wait_for_dependencies() has been called, which will ensure the 555 + * previous MST state's payload start slots have been copied over to the new state. Note 556 + * that a new start slot won't be assigned/removed from this payload until 557 + * drm_dp_add_payload_part1()/drm_dp_remove_payload() have been called. 558 + * * Acquire the MST modesetting lock, and then wait for any pending MST-related commits to 559 + * get committed to hardware by calling drm_crtc_commit_wait() on each of the 560 + * &drm_crtc_commit structs in &drm_dp_mst_topology_state.commit_deps. 561 + * 562 + * If neither of the two above solutions suffices (e.g. the driver needs to read the start 563 + * slot in the middle of an atomic commit without waiting for some reason), then drivers 564 + * should cache this value themselves after changing payloads. 
565 + */ 566 + s8 vc_start_slot; 567 + 568 + /** @vcpi: The Virtual Channel Payload Identifier */ 569 + u8 vcpi; 570 + /** 571 + * @time_slots: 572 + * The number of timeslots allocated to this payload from the source DP Tx to 573 + * the immediate downstream DP Rx 574 + */ 575 + int time_slots; 576 + /** @pbn: The payload bandwidth for this payload */ 533 577 int pbn; 534 - bool dsc_enabled; 578 + 579 + /** @delete: Whether or not we intend to delete this payload during this atomic commit */ 580 + bool delete : 1; 581 + /** @dsc_enabled: Whether or not this payload has DSC enabled */ 582 + bool dsc_enabled : 1; 583 + 584 + /** @next: The list node for this payload */ 535 585 struct list_head next; 536 586 }; 537 587 588 + /** 589 + * struct drm_dp_mst_topology_state - DisplayPort MST topology atomic state 590 + * 591 + * This struct represents the atomic state of the toplevel DisplayPort MST manager 592 + */ 538 593 struct drm_dp_mst_topology_state { 594 + /** @base: Base private state for atomic */ 539 595 struct drm_private_state base; 540 - struct list_head vcpis; 596 + 597 + /** @mgr: The topology manager */ 541 598 struct drm_dp_mst_topology_mgr *mgr; 599 + 600 + /** 601 + * @pending_crtc_mask: A bitmask of all CRTCs this topology state touches, drivers may 602 + * modify this to add additional dependencies if needed. 603 + */ 604 + u32 pending_crtc_mask; 605 + /** 606 + * @commit_deps: A list of all CRTC commits affecting this topology, this field isn't 607 + * populated until drm_dp_mst_atomic_wait_for_dependencies() is called. 608 + */ 609 + struct drm_crtc_commit **commit_deps; 610 + /** @num_commit_deps: The number of CRTC commits in @commit_deps */ 611 + size_t num_commit_deps; 612 + 613 + /** @payload_mask: A bitmask of allocated VCPIs, used for VCPI assignments */ 614 + u32 payload_mask; 615 + /** @payloads: The list of payloads being created/destroyed in this state */ 616 + struct list_head payloads; 617 + 618 + /** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */ 542 619 u8 total_avail_slots; 620 + /** @start_slot: The first usable time slot in this topology (1 or 0) */ 543 621 u8 start_slot; 622 + 623 + /** 624 + * @pbn_div: The current PBN divisor for this topology. The driver is expected to fill this 625 + * out itself. 626 + */ 627 + int pbn_div; 544 628 }; 545 629 546 630 #define to_dp_mst_topology_mgr(x) container_of(x, struct drm_dp_mst_topology_mgr, base) ··· 649 595 * @max_payloads: maximum number of payloads the GPU can generate. 650 596 */ 651 597 int max_payloads; 652 - /** 653 - * @max_lane_count: maximum number of lanes the GPU can drive. 654 - */ 655 - int max_lane_count; 656 - /** 657 - * @max_link_rate: maximum link rate per lane GPU can output, in kHz. 658 - */ 659 - int max_link_rate; 660 598 /** 661 599 * @conn_base_id: DRM connector ID this mgr is connected to. Only used 662 600 * to build the MST connector path value. ··· 692 646 bool payload_id_table_cleared : 1; 693 647 694 648 /** 649 + * @payload_count: The number of currently active payloads in hardware. This value is only 650 + * intended to be used internally by MST helpers for payload tracking, and is only safe to 651 + * read/write from the atomic commit (not check) context. 652 + */ 653 + u8 payload_count; 654 + 655 + /** 656 + * @next_start_slot: The starting timeslot to use for new VC payloads. This value is used 657 + * internally by MST helpers for payload tracking, and is only safe to read/write from the 658 + * atomic commit (not check) context. 
659 + */ 660 + u8 next_start_slot; 661 + 662 + /** 695 663 * @mst_primary: Pointer to the primary/first branch device. 696 664 */ 697 665 struct drm_dp_mst_branch *mst_primary; ··· 718 658 * @sink_count: Sink count from DEVICE_SERVICE_IRQ_VECTOR_ESI0. 719 659 */ 720 660 u8 sink_count; 721 - /** 722 - * @pbn_div: PBN to slots divisor. 723 - */ 724 - int pbn_div; 725 661 726 662 /** 727 663 * @funcs: Atomic helper callbacks ··· 733 677 * @tx_msg_downq: List of pending down requests 734 678 */ 735 679 struct list_head tx_msg_downq; 736 - 737 - /** 738 - * @payload_lock: Protect payload information. 739 - */ 740 - struct mutex payload_lock; 741 - /** 742 - * @proposed_vcpis: Array of pointers for the new VCPI allocation. The 743 - * VCPI structure itself is &drm_dp_mst_port.vcpi, and the size of 744 - * this array is determined by @max_payloads. 745 - */ 746 - struct drm_dp_vcpi **proposed_vcpis; 747 - /** 748 - * @payloads: Array of payloads. The size of this array is determined 749 - * by @max_payloads. 750 - */ 751 - struct drm_dp_payload *payloads; 752 - /** 753 - * @payload_mask: Elements of @payloads actually in use. Since 754 - * reallocation of active outputs isn't possible gaps can be created by 755 - * disabling outputs out of order compared to how they've been enabled. 756 - */ 757 - unsigned long payload_mask; 758 - /** 759 - * @vcpi_mask: Similar to @payload_mask, but for @proposed_vcpis. 760 - */ 761 - unsigned long vcpi_mask; 762 680 763 681 /** 764 682 * @tx_waitq: Wait to queue stall for the tx worker. ··· 805 775 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, 806 776 struct drm_device *dev, struct drm_dp_aux *aux, 807 777 int max_dpcd_transaction_bytes, 808 - int max_payloads, 809 - int max_lane_count, int max_link_rate, 810 - int conn_base_id); 778 + int max_payloads, int conn_base_id); 811 779 812 780 void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr); 813 781 ··· 828 800 829 801 int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc); 830 802 831 - bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, 832 - struct drm_dp_mst_port *port, int pbn, int slots); 833 - 834 - int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); 835 - 836 - 837 - void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); 838 - 839 803 void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_t link_encoding_cap); 840 804 841 - void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, 842 - struct drm_dp_mst_port *port); 843 - 844 - 845 - int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, 846 - int pbn); 847 - 848 - 849 - int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot); 850 - 851 - 852 - int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr); 805 + int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr, 806 + struct drm_dp_mst_topology_state *mst_state, 807 + struct drm_dp_mst_atomic_payload *payload); 808 + int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr, 809 + struct drm_atomic_state *state, 810 + struct drm_dp_mst_atomic_payload *payload); 811 + void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr, 812 + struct drm_dp_mst_topology_state *mst_state, 813 + struct drm_dp_mst_atomic_payload *payload); 853 814 854 815 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr); 855 816 ··· 860 843 void 
drm_dp_mst_connector_early_unregister(struct drm_connector *connector, 861 844 struct drm_dp_mst_port *port); 862 845 863 - struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state, 864 - struct drm_dp_mst_topology_mgr *mgr); 846 + struct drm_dp_mst_topology_state * 847 + drm_atomic_get_mst_topology_state(struct drm_atomic_state *state, 848 + struct drm_dp_mst_topology_mgr *mgr); 849 + struct drm_dp_mst_topology_state * 850 + drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state, 851 + struct drm_dp_mst_topology_mgr *mgr); 852 + struct drm_dp_mst_atomic_payload * 853 + drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state, 854 + struct drm_dp_mst_port *port); 865 855 int __must_check 866 - drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, 856 + drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, 867 857 struct drm_dp_mst_topology_mgr *mgr, 868 - struct drm_dp_mst_port *port, int pbn, 869 - int pbn_div); 858 + struct drm_dp_mst_port *port, int pbn); 870 859 int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, 871 860 struct drm_dp_mst_port *port, 872 - int pbn, int pbn_div, 873 - bool enable); 861 + int pbn, bool enable); 874 862 int __must_check 875 863 drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, 876 864 struct drm_dp_mst_topology_mgr *mgr); 877 865 int __must_check 878 - drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state, 866 + drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, 879 867 struct drm_dp_mst_topology_mgr *mgr, 880 868 struct drm_dp_mst_port *port); 869 + void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state); 870 + int __must_check drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state); 881 871 int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr, 882 872 struct drm_dp_mst_port *port, bool power_up); 883 873 int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, 884 874 struct drm_dp_mst_port *port, 885 875 struct drm_dp_query_stream_enc_status_ack_reply *status); 886 876 int __must_check drm_dp_mst_atomic_check(struct drm_atomic_state *state); 877 + int __must_check drm_dp_mst_root_conn_atomic_check(struct drm_connector_state *new_conn_state, 878 + struct drm_dp_mst_topology_mgr *mgr); 887 879 888 880 void drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port); 889 881 void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port); 890 882 891 883 struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port); 884 + 885 + static inline struct drm_dp_mst_topology_state * 886 + to_drm_dp_mst_topology_state(struct drm_private_state *state) 887 + { 888 + return container_of(state, struct drm_dp_mst_topology_state, base); 889 + } 892 890 893 891 extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs; 894 892
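Putting the reworked MST API together: time-slot accounting happens against drm_dp_mst_topology_state at check time, and payload programming happens at commit time from the same state. A condensed sketch of the expected driver flow; the my_* names are hypothetical placeholders and error handling is trimmed:

    #include <linux/err.h>
    #include <drm/display/drm_dp_mst_helper.h>
    #include <drm/drm_modeset_helper_vtables.h>

    /* hypothetical: derived from the negotiated link rate and lane count */
    static int my_get_pbn_div(struct drm_dp_mst_topology_mgr *mgr);

    /* atomic_check: reserve time slots for the stream */
    static int my_mst_atomic_check(struct drm_atomic_state *state,
                                   struct drm_dp_mst_topology_mgr *mgr,
                                   struct drm_dp_mst_port *port, int pbn)
    {
            struct drm_dp_mst_topology_state *mst_state;
            int slots;

            mst_state = drm_atomic_get_mst_topology_state(state, mgr);
            if (IS_ERR(mst_state))
                    return PTR_ERR(mst_state);

            /* pbn_div now lives in the atomic state; the driver fills it in */
            if (!mst_state->pbn_div)
                    mst_state->pbn_div = my_get_pbn_div(mgr);

            slots = drm_dp_atomic_find_time_slots(state, mgr, port, pbn);
            if (slots < 0)
                    return slots;

            return drm_dp_mst_atomic_check(state);
    }

    /* commit serialization is wired up through the commit_setup hook */
    static const struct drm_mode_config_helper_funcs my_mode_config_helpers = {
            .atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
    };

    /* commit: program the payload once earlier topology commits are done */
    static void my_mst_enable(struct drm_atomic_state *state,
                              struct drm_dp_mst_topology_mgr *mgr,
                              struct drm_dp_mst_port *port)
    {
            struct drm_dp_mst_topology_state *mst_state;
            struct drm_dp_mst_atomic_payload *payload;

            drm_dp_mst_atomic_wait_for_dependencies(state);

            mst_state = drm_atomic_get_new_mst_topology_state(state, mgr);
            payload = drm_atomic_get_mst_payload_state(mst_state, port);

            drm_dp_add_payload_part1(mgr, mst_state, payload);
            /* ...enable the stream, wait for the ACT... */
            drm_dp_add_payload_part2(mgr, state, payload);
    }

Disabling mirrors this: release the allocation with drm_dp_atomic_release_time_slots() at check time and call drm_dp_remove_payload() at commit time.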
+3
include/drm/drm_atomic_helper.h
··· 49 49 50 50 int drm_atomic_helper_check_modeset(struct drm_device *dev, 51 51 struct drm_atomic_state *state); 52 + int 53 + drm_atomic_helper_check_wb_encoder_state(struct drm_encoder *encoder, 54 + struct drm_connector_state *conn_state); 52 55 int drm_atomic_helper_check_plane_state(struct drm_plane_state *plane_state, 53 56 const struct drm_crtc_state *crtc_state, 54 57 int min_scale,
+5 -4
include/drm/gpu_scheduler.h
··· 329 329 }; 330 330 331 331 /** 332 - * struct drm_sched_backend_ops 332 + * struct drm_sched_backend_ops - Define the backend operations 333 + * called by the scheduler 333 334 * 334 - * Define the backend operations called by the scheduler, 335 - * these functions should be implemented in driver side. 335 + * These functions should be implemented on the driver side. 336 336 */ 337 337 struct drm_sched_backend_ops { 338 338 /** ··· 409 409 }; 410 410 411 411 /** 412 - * struct drm_gpu_scheduler 412 + * struct drm_gpu_scheduler - scheduler instance-specific data 413 413 * 414 414 * @ops: backend operations provided by the driver. 415 415 * @hw_submission_limit: the max size of the hardware queue. ··· 435 435 * @_score: score used when the driver doesn't provide one 436 436 * @ready: marks if the underlying HW is ready to work 437 437 * @free_guilty: A hint to the timeout handler to free the guilty job. 438 + * @dev: system &struct device 438 439 * 439 440 * One scheduler is implemented for each hardware ring. 440 441 */
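For orientation, a skeletal driver-side implementation of these backend operations; all names and bodies below are placeholders rather than any in-tree driver:

    #include <drm/gpu_scheduler.h>

    static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
    {
            /* push the job to the hardware ring; return the HW fence that
             * signals when the job completes */
            return NULL; /* placeholder */
    }

    static enum drm_gpu_sched_stat my_timedout_job(struct drm_sched_job *sched_job)
    {
            /* reset the ring, mark the offending context guilty */
            return DRM_GPU_SCHED_STAT_NOMINAL;
    }

    static void my_free_job(struct drm_sched_job *sched_job)
    {
            /* drop the driver's references once the scheduler is done */
    }

    static const struct drm_sched_backend_ops my_sched_ops = {
            .run_job      = my_run_job,
            .timedout_job = my_timedout_job,
            .free_job     = my_free_job,
    };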
+1 -1
include/drm/ttm/ttm_bo_driver.h
··· 106 106 bool interruptible, bool no_wait, 107 107 struct ww_acquire_ctx *ticket) 108 108 { 109 - int ret = 0; 109 + int ret; 110 110 111 111 if (no_wait) { 112 112 bool success;
+40
include/drm/ttm/ttm_resource.h
··· 89 89 struct ttm_resource *res); 90 90 91 91 /** 92 + * struct ttm_resource_manager_func member intersects 93 + * 94 + * @man: Pointer to a memory type manager. 95 + * @res: Pointer to a struct ttm_resource to be checked. 96 + * @place: Placement to check against. 97 + * @size: Size of the check. 98 + * 99 + * Test if @res intersects with @place + @size. Used to judge if 100 + * evictions are valuable or not. 101 + */ 102 + bool (*intersects)(struct ttm_resource_manager *man, 103 + struct ttm_resource *res, 104 + const struct ttm_place *place, 105 + size_t size); 106 + 107 + /** 108 + * struct ttm_resource_manager_func member compatible 109 + * 110 + * @man: Pointer to a memory type manager. 111 + * @res: Pointer to a struct ttm_resource to be checked. 112 + * @place: Placement to check against. 113 + * @size: Size of the check. 114 + * 115 + * Test if @res is compatible with @place + @size. Used to check whether 116 + * the backing store needs to be moved. 117 + */ 118 + bool (*compatible)(struct ttm_resource_manager *man, 119 + struct ttm_resource *res, 120 + const struct ttm_place *place, 121 + size_t size); 122 + 123 + /** 92 124 * struct ttm_resource_manager_func member debug 93 125 * 94 126 * @man: Pointer to a memory type manager. ··· 361 329 const struct ttm_place *place, 362 330 struct ttm_resource **res); 363 331 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res); 332 + bool ttm_resource_intersects(struct ttm_device *bdev, 333 + struct ttm_resource *res, 334 + const struct ttm_place *place, 335 + size_t size); 336 + bool ttm_resource_compatible(struct ttm_device *bdev, 337 + struct ttm_resource *res, 338 + const struct ttm_place *place, 339 + size_t size); 364 340 bool ttm_resource_compat(struct ttm_resource *res, 365 341 struct ttm_placement *placement); 366 342 void ttm_resource_set_bo(struct ttm_resource *res,
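To make the contract concrete, this is roughly how a linear range manager can back the two new hooks (modeled on the range-manager conversion in this series; the my_range_* names are placeholders). PFN_UP() converts the byte size of the prospective allocation to pages:

    #include <linux/pfn.h>
    #include <drm/ttm/ttm_placement.h>
    #include <drm/ttm/ttm_resource.h>

    /* Evicting @res can only help if it overlaps the requested window. */
    static bool my_range_intersects(struct ttm_resource_manager *man,
                                    struct ttm_resource *res,
                                    const struct ttm_place *place,
                                    size_t size)
    {
            u32 num_pages = PFN_UP(size);

            if (place->fpfn >= res->start + num_pages ||
                (place->lpfn && place->lpfn <= res->start))
                    return false;
            return true;
    }

    /* The existing backing store is fine if it already sits inside the window. */
    static bool my_range_compatible(struct ttm_resource_manager *man,
                                    struct ttm_resource *res,
                                    const struct ttm_place *place,
                                    size_t size)
    {
            u32 num_pages = PFN_UP(size);

            if (res->start < place->fpfn ||
                (place->lpfn && res->start + num_pages > place->lpfn))
                    return false;
            return true;
    }

The new ttm_resource_intersects()/ttm_resource_compatible() entry points dispatch to these per-manager hooks and fall back to a permissive default when a manager does not implement them.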
+6 -1
include/linux/hdmi.h
··· 336 336 void *buffer, size_t size); 337 337 ssize_t hdmi_audio_infoframe_pack_only(const struct hdmi_audio_infoframe *frame, 338 338 void *buffer, size_t size); 339 - int hdmi_audio_infoframe_check(struct hdmi_audio_infoframe *frame); 339 + int hdmi_audio_infoframe_check(const struct hdmi_audio_infoframe *frame); 340 + 341 + struct dp_sdp; 342 + ssize_t 343 + hdmi_audio_infoframe_pack_for_dp(const struct hdmi_audio_infoframe *frame, 344 + struct dp_sdp *sdp, u8 dp_version); 340 345 341 346 enum hdmi_3d_structure { 342 347 HDMI_3D_STRUCTURE_INVALID = -1,