···
 - compatible : Shall contain one of
   - "renesas,r8a7743-lvds" for R8A7743 (RZ/G1M) compatible LVDS encoders
+  - "renesas,r8a774c0-lvds" for R8A774C0 (RZ/G2E) compatible LVDS encoders
   - "renesas,r8a7790-lvds" for R8A7790 (R-Car H2) compatible LVDS encoders
   - "renesas,r8a7791-lvds" for R8A7791 (R-Car M2-W) compatible LVDS encoders
   - "renesas,r8a7793-lvds" for R8A7793 (R-Car M2-N) compatible LVDS encoders
···
 - clock-names: Name of the clocks. This property is model-dependent.
   - The functional clock, which mandatory for all models, shall be listed
     first, and shall be named "fck".
-  - On R8A77990 and R8A77995, the LVDS encoder can use the EXTAL or
+  - On R8A77990, R8A77995 and R8A774C0, the LVDS encoder can use the EXTAL or
     DU_DOTCLKINx clocks. Those clocks are optional. When supplied they must be
     named "extal" and "dclkin.x" respectively, with "x" being the DU_DOTCLKIN
     numerical index.
···
 .. kernel-doc:: drivers/gpu/drm/drm_fb_cma_helper.c
    :export:
 
-.. _drm_bridges:
-
 Framebuffer GEM Helper Reference
 ================================
···
 
 .. kernel-doc:: drivers/gpu/drm/drm_gem_framebuffer_helper.c
    :export:
+
+.. _drm_bridges:
 
 Bridges
 =======
···
 .. kernel-doc:: drivers/gpu/drm/drm_dp_dual_mode_helper.c
    :export:
 
-Display Port MST Helper Functions Reference
-===========================================
+Display Port MST Helpers
+========================
+
+Overview
+--------
 
 .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
    :doc: dp mst helper
+
+.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
+   :doc: Branch device and port refcounting
+
+Functions Reference
+-------------------
 
 .. kernel-doc:: include/drm/drm_dp_mst_helper.h
    :internal:
 
 .. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
    :export:
+
+Topology Lifetime Internals
+---------------------------
+
+These functions aren't exported to drivers, but are documented here to help make
+the MST topology helpers easier to understand
+
+.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
+   :functions: drm_dp_mst_topology_try_get_mstb drm_dp_mst_topology_get_mstb
+               drm_dp_mst_topology_put_mstb
+               drm_dp_mst_topology_try_get_port drm_dp_mst_topology_get_port
+               drm_dp_mst_topology_put_port
+               drm_dp_mst_get_mstb_malloc drm_dp_mst_put_mstb_malloc
 
 MIPI DSI Helper Functions Reference
 ===================================
Documentation/gpu/todo.rst (+30 -3)

···
 
 Contact: Daniel Vetter
 
+Generic fbdev defio support
+---------------------------
+
+The defio support code in the fbdev core has some very specific requirements,
+which means drivers need to have a special framebuffer for fbdev. Which prevents
+us from using the generic fbdev emulation code everywhere. The main issue is
+that it uses some fields in struct page itself, which breaks shmem gem objects
+(and other things).
+
+Possible solution would be to write our own defio mmap code in the drm fbdev
+emulation. It would need to fully wrap the existing mmap ops, forwarding
+everything after it has done the write-protect/mkwrite trickery:
+
+- In the drm_fbdev_fb_mmap helper, if we need defio, change the
+  default page prots to write-protected with something like this::
+
+      vma->vm_page_prot = pgprot_wrprotect(vma->vm_page_prot);
+
+- Set the mkwrite and fsync callbacks with similar implementations to the core
+  fbdev defio stuff. These should all work on plain ptes, they don't actually
+  require a struct page.
+
+- Track the dirty pages in a separate structure (bitfield with one bit per page
+  should work) to avoid clobbering struct page.
+
+Might be good to also have some igt testcases for this.
+
+Contact: Daniel Vetter, Noralf Tronnes
+
 Put a reservation_object into drm_gem_object
 --------------------------------------------
···
 ------------
 
 Some of these date from the very introduction of KMS in 2008 ...
-
-- drm_mode_config.crtc_idr is misnamed, since it contains all KMS object. Should
-  be renamed to drm_mode_config.object_idr.
 
 - drm_display_mode doesn't need to be derived from drm_mode_object. That's
   leftovers from older (never merged into upstream) KMS designs where modes
MAINTAINERS (+18 -1)

···
 M:	Dave Airlie <airlied@redhat.com>
 M:	Gerd Hoffmann <kraxel@redhat.com>
 L:	virtualization@lists.linux-foundation.org
+L:	spice-devel@lists.freedesktop.org
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 S:	Maintained
 F:	drivers/gpu/drm/qxl/
···
 S:	Orphan / Obsolete
 F:	drivers/gpu/drm/tdfx/
 
+DRM DRIVER FOR TPO TPG110 PANELS
+M:	Linus Walleij <linus.walleij@linaro.org>
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+S:	Maintained
+F:	drivers/gpu/drm/panel/panel-tpo-tpg110.c
+F:	Documentation/devicetree/bindings/display/panel/tpo,tpg110.txt
+
 DRM DRIVER FOR USB DISPLAYLINK VIDEO ADAPTERS
 M:	Dave Airlie <airlied@redhat.com>
 R:	Sean Paul <sean@poorly.run>
···
 S:	Odd Fixes
 F:	drivers/gpu/drm/udl/
 T:	git git://anongit.freedesktop.org/drm/drm-misc
+
+DRM DRIVER FOR VIRTUAL KERNEL MODESETTING (VKMS)
+M:	Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
+R:	Haneen Mohammed <hamohammed.sa@gmail.com>
+R:	Daniel Vetter <daniel@ffwll.ch>
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+S:	Maintained
+L:	dri-devel@lists.freedesktop.org
+F:	drivers/gpu/drm/vkms/
+F:	Documentation/gpu/vkms.rst
 
 DRM DRIVER FOR VMWARE VIRTUAL GPU
 M:	"VMware Graphics" <linux-graphics-maintainer@vmware.com>
···
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
 DRM DRIVERS FOR BRIDGE CHIPS
-M:	Archit Taneja <architt@codeaurora.org>
 M:	Andrzej Hajda <a.hajda@samsung.com>
 R:	Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
 S:	Maintained
···
 	unsigned long pages;
 	u32 *pci_gart = NULL, page_base, gart_idx;
 	dma_addr_t bus_address = 0;
-	int i, j, ret = 0;
+	int i, j, ret = -ENOMEM;
 	int max_ati_pages, max_real_pages;
 
 	if (!entry) {
···
 	if (pci_set_dma_mask(dev->pdev, gart_info->table_mask)) {
 		DRM_ERROR("fail to set dma mask to 0x%Lx\n",
 			  (unsigned long long)gart_info->table_mask);
-		ret = 1;
+		ret = -EFAULT;
 		goto done;
 	}
···
 			drm_ati_pcigart_cleanup(dev, gart_info);
 			address = NULL;
 			bus_address = 0;
+			ret = -ENOMEM;
 			goto done;
 		}
 		page_base = (u32) entry->busaddr[i];
···
 			page_base += ATI_PCIGART_PAGE_SIZE;
 		}
 	}
-	ret = 1;
+	ret = 0;
 
 #if defined(__i386__) || defined(__x86_64__)
 	wbinvd();
···
 };
 
 /**
- * drm_panel_bridge_add - Creates a drm_bridge and drm_connector that
- * just calls the appropriate functions from drm_panel.
+ * drm_panel_bridge_add - Creates a &drm_bridge and &drm_connector that
+ * just calls the appropriate functions from &drm_panel.
  *
  * @panel: The drm_panel being wrapped. Must be non-NULL.
  * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be
···
  * passed to drm_bridge_attach(). The drm_panel_prepare() and related
  * functions can be dropped from the encoder driver (they're now
  * called by the KMS helpers before calling into the encoder), along
- * with connector creation. When done with the bridge,
- * drm_bridge_detach() should be called as normal, then
+ * with connector creation. When done with the bridge (after
+ * drm_mode_config_cleanup() if the bridge has already been attached), then
  * drm_panel_bridge_remove() to free it.
+ *
+ * See devm_drm_panel_bridge_add() for an automatically managed version of this
+ * function.
  */
 struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
 					u32 connector_type)
···
 	drm_panel_bridge_remove(*bridge);
 }
 
+/**
+ * devm_drm_panel_bridge_add - Creates a managed &drm_bridge and &drm_connector
+ * that just calls the appropriate functions from &drm_panel.
+ * @dev: device to tie the bridge lifetime to
+ * @panel: The drm_panel being wrapped. Must be non-NULL.
+ * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be
+ * created.
+ *
+ * This is the managed version of drm_panel_bridge_add() which automatically
+ * calls drm_panel_bridge_remove() when @dev is unbound.
+ */
 struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,
 					     struct drm_panel *panel,
 					     u32 connector_type)
drivers/gpu/drm/bridge/sii902x.c (+4 -3)

···
 }
 
 static void sii902x_bridge_mode_set(struct drm_bridge *bridge,
-				    struct drm_display_mode *mode,
-				    struct drm_display_mode *adj)
+				    const struct drm_display_mode *mode,
+				    const struct drm_display_mode *adj)
 {
 	struct sii902x *sii902x = bridge_to_sii902x(bridge);
 	struct regmap *regmap = sii902x->regmap;
···
 	if (ret)
 		return;
 
-	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, adj, false);
+	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame,
+						       &sii902x->connector, adj);
 	if (ret < 0) {
 		DRM_ERROR("couldn't fill AVI infoframe\n");
 		return;
drivers/gpu/drm/bridge/sil-sii8620.c (+1 -2)

···
 	int ret;
 
 	ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi,
-						       mode,
-						       true);
+						       NULL, mode);
 	if (ctx->use_packed_pixel)
 		frm.avi.colorspace = HDMI_COLORSPACE_YUV422;
 
···
+// SPDX-License-Identifier: GPL-2.0
 /*
  * dw-hdmi-i2s-audio.c
  *
  * Copyright (c) 2017 Renesas Solutions Corp.
  * Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */
+
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+
 #include <drm/bridge/dw_hdmi.h>
 
 #include <sound/hdmi-codec.h>
···
 
 /**
  * drm_atomic_private_obj_init - initialize private object
+ * @dev: DRM device this object will be attached to
  * @obj: private object
  * @state: initial private object state
  * @funcs: pointer to the struct of function pointers that identify the object
···
  * driver private object that needs its own atomic state.
  */
 void
-drm_atomic_private_obj_init(struct drm_private_obj *obj,
+drm_atomic_private_obj_init(struct drm_device *dev,
+			    struct drm_private_obj *obj,
 			    struct drm_private_state *state,
 			    const struct drm_private_state_funcs *funcs)
 {
 	memset(obj, 0, sizeof(*obj));
 
+	drm_modeset_lock_init(&obj->lock);
+
 	obj->state = state;
 	obj->funcs = funcs;
+	list_add_tail(&obj->head, &dev->mode_config.privobj_list);
 }
 EXPORT_SYMBOL(drm_atomic_private_obj_init);
···
 void
 drm_atomic_private_obj_fini(struct drm_private_obj *obj)
 {
+	list_del(&obj->head);
 	obj->funcs->atomic_destroy_state(obj, obj->state);
+	drm_modeset_lock_fini(&obj->lock);
 }
 EXPORT_SYMBOL(drm_atomic_private_obj_fini);
···
  * @obj: private object to get the state for
  *
  * This function returns the private object state for the given private object,
- * allocating the state if needed. It does not grab any locks as the caller is
- * expected to care of any required locking.
+ * allocating the state if needed. It will also grab the relevant private
+ * object lock to make sure that the state is consistent.
  *
  * RETURNS:
  *
···
 drm_atomic_get_private_obj_state(struct drm_atomic_state *state,
 				 struct drm_private_obj *obj)
 {
-	int index, num_objs, i;
+	int index, num_objs, i, ret;
 	size_t size;
 	struct __drm_private_objs_state *arr;
 	struct drm_private_state *obj_state;
···
 	for (i = 0; i < state->num_private_objs; i++)
 		if (obj == state->private_objs[i].ptr)
 			return state->private_objs[i].state;
+
+	ret = drm_modeset_lock(&obj->lock, state->acquire_ctx);
+	if (ret)
+		return ERR_PTR(ret);
 
 	num_objs = state->num_private_objs + 1;
 	size = sizeof(*state->private_objs) * num_objs;
drivers/gpu/drm/drm_bridge.c (+2 -2)

···
  * Note: the bridge passed should be the one closest to the encoder
  */
 void drm_bridge_mode_set(struct drm_bridge *bridge,
-			 struct drm_display_mode *mode,
-			 struct drm_display_mode *adjusted_mode)
+			 const struct drm_display_mode *mode,
+			 const struct drm_display_mode *adjusted_mode)
 {
 	if (!bridge)
 		return;
drivers/gpu/drm/drm_connector.c (+69 -22)

···
 EXPORT_SYMBOL(drm_hdmi_avi_infoframe_content_type);
 
 /**
- * drm_create_tv_properties - create TV specific connector properties
+ * drm_mode_attach_tv_margin_properties - attach TV connector margin properties
+ * @connector: DRM connector
+ *
+ * Called by a driver when it needs to attach TV margin props to a connector.
+ * Typically used on SDTV and HDMI connectors.
+ */
+void drm_connector_attach_tv_margin_properties(struct drm_connector *connector)
+{
+	struct drm_device *dev = connector->dev;
+
+	drm_object_attach_property(&connector->base,
+				   dev->mode_config.tv_left_margin_property,
+				   0);
+	drm_object_attach_property(&connector->base,
+				   dev->mode_config.tv_right_margin_property,
+				   0);
+	drm_object_attach_property(&connector->base,
+				   dev->mode_config.tv_top_margin_property,
+				   0);
+	drm_object_attach_property(&connector->base,
+				   dev->mode_config.tv_bottom_margin_property,
+				   0);
+}
+EXPORT_SYMBOL(drm_connector_attach_tv_margin_properties);
+
+/**
+ * drm_mode_create_tv_margin_properties - create TV connector margin properties
+ * @dev: DRM device
+ *
+ * Called by a driver's HDMI connector initialization routine, this function
+ * creates the TV margin properties for a given device. No need to call this
+ * function for an SDTV connector, it's already called from
+ * drm_mode_create_tv_properties().
+ */
+int drm_mode_create_tv_margin_properties(struct drm_device *dev)
+{
+	if (dev->mode_config.tv_left_margin_property)
+		return 0;
+
+	dev->mode_config.tv_left_margin_property =
+		drm_property_create_range(dev, 0, "left margin", 0, 100);
+	if (!dev->mode_config.tv_left_margin_property)
+		return -ENOMEM;
+
+	dev->mode_config.tv_right_margin_property =
+		drm_property_create_range(dev, 0, "right margin", 0, 100);
+	if (!dev->mode_config.tv_right_margin_property)
+		return -ENOMEM;
+
+	dev->mode_config.tv_top_margin_property =
+		drm_property_create_range(dev, 0, "top margin", 0, 100);
+	if (!dev->mode_config.tv_top_margin_property)
+		return -ENOMEM;
+
+	dev->mode_config.tv_bottom_margin_property =
+		drm_property_create_range(dev, 0, "bottom margin", 0, 100);
+	if (!dev->mode_config.tv_bottom_margin_property)
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_mode_create_tv_margin_properties);
+
+/**
+ * drm_mode_create_tv_properties - create TV specific connector properties
  * @dev: DRM device
  * @num_modes: number of different TV formats (modes) supported
  * @modes: array of pointers to strings containing name of each format
···
 	/*
 	 * Other, TV specific properties: margins & TV modes.
 	 */
-	dev->mode_config.tv_left_margin_property =
-		drm_property_create_range(dev, 0, "left margin", 0, 100);
-	if (!dev->mode_config.tv_left_margin_property)
-		goto nomem;
-
-	dev->mode_config.tv_right_margin_property =
-		drm_property_create_range(dev, 0, "right margin", 0, 100);
-	if (!dev->mode_config.tv_right_margin_property)
-		goto nomem;
-
-	dev->mode_config.tv_top_margin_property =
-		drm_property_create_range(dev, 0, "top margin", 0, 100);
-	if (!dev->mode_config.tv_top_margin_property)
-		goto nomem;
-
-	dev->mode_config.tv_bottom_margin_property =
-		drm_property_create_range(dev, 0, "bottom margin", 0, 100);
-	if (!dev->mode_config.tv_bottom_margin_property)
+	if (drm_mode_create_tv_margin_properties(dev))
 		goto nomem;
 
 	dev->mode_config.tv_mode_property =
···
  * identifier for the tile group.
  *
  * RETURNS:
- * new tile group or error.
+ * new tile group or NULL.
  */
 struct drm_tile_group *drm_mode_create_tile_group(struct drm_device *dev,
 						  char topology[8])
···
 
 	tg = kzalloc(sizeof(*tg), GFP_KERNEL);
 	if (!tg)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
 	kref_init(&tg->refcount);
 	memcpy(tg->group_data, topology, 8);
···
 		tg->id = ret;
 	} else {
 		kfree(tg);
-		tg = ERR_PTR(ret);
+		tg = NULL;
 	}
 
 	mutex_unlock(&dev->mode_config.idr_mutex);
drivers/gpu/drm/drm_context.c (+9 -6)

···
 {
 	struct drm_ctx_list *ctx_entry;
 	struct drm_ctx *ctx = data;
+	int tmp_handle;
 
 	if (!drm_core_check_feature(dev, DRIVER_KMS_LEGACY_CONTEXT) &&
 	    !drm_core_check_feature(dev, DRIVER_LEGACY))
 		return -EOPNOTSUPP;
 
-	ctx->handle = drm_legacy_ctxbitmap_next(dev);
-	if (ctx->handle == DRM_KERNEL_CONTEXT) {
+	tmp_handle = drm_legacy_ctxbitmap_next(dev);
+	if (tmp_handle == DRM_KERNEL_CONTEXT) {
 		/* Skip kernel's context and get a new one. */
-		ctx->handle = drm_legacy_ctxbitmap_next(dev);
+		tmp_handle = drm_legacy_ctxbitmap_next(dev);
 	}
-	DRM_DEBUG("%d\n", ctx->handle);
-	if (ctx->handle < 0) {
+	DRM_DEBUG("%d\n", tmp_handle);
+	if (tmp_handle < 0) {
 		DRM_DEBUG("Not enough free contexts.\n");
 		/* Should this return -EBUSY instead? */
-		return -ENOMEM;
+		return tmp_handle;
 	}
+
+	ctx->handle = tmp_handle;
 
 	ctx_entry = kmalloc(sizeof(*ctx_entry), GFP_KERNEL);
 	if (!ctx_entry) {
drivers/gpu/drm/drm_crtc.c (-41)

···
 }
 EXPORT_SYMBOL(drm_crtc_from_index);
 
-/**
- * drm_crtc_force_disable - Forcibly turn off a CRTC
- * @crtc: CRTC to turn off
- *
- * Note: This should only be used by non-atomic legacy drivers.
- *
- * Returns:
- * Zero on success, error code on failure.
- */
 int drm_crtc_force_disable(struct drm_crtc *crtc)
 {
 	struct drm_mode_set set = {
···
 
 	return drm_mode_set_config_internal(&set);
 }
-EXPORT_SYMBOL(drm_crtc_force_disable);
-
-/**
- * drm_crtc_force_disable_all - Forcibly turn off all enabled CRTCs
- * @dev: DRM device whose CRTCs to turn off
- *
- * Drivers may want to call this on unload to ensure that all displays are
- * unlit and the GPU is in a consistent, low power state. Takes modeset locks.
- *
- * Note: This should only be used by non-atomic legacy drivers. For an atomic
- * version look at drm_atomic_helper_shutdown().
- *
- * Returns:
- * Zero on success, error code on failure.
- */
-int drm_crtc_force_disable_all(struct drm_device *dev)
-{
-	struct drm_crtc *crtc;
-	int ret = 0;
-
-	drm_modeset_lock_all(dev);
-	drm_for_each_crtc(crtc, dev)
-		if (crtc->enabled) {
-			ret = drm_crtc_force_disable(crtc);
-			if (ret)
-				goto out;
-		}
-out:
-	drm_modeset_unlock_all(dev);
-	return ret;
-}
-EXPORT_SYMBOL(drm_crtc_force_disable_all);
 
 static unsigned int drm_num_crtcs(struct drm_device *dev)
 {
drivers/gpu/drm/drm_crtc_helper.c (+51 -7)

···
 	struct drm_connector_list_iter conn_iter;
 	struct drm_device *dev = encoder->dev;
 
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
+
 	/*
 	 * We can expect this mutex to be locked if we are not panicking.
 	 * Locking is currently fubar in the panic handler.
···
 {
 	struct drm_encoder *encoder;
 	struct drm_device *dev = crtc->dev;
+
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
 
 	/*
 	 * We can expect this mutex to be locked if we are not panicking.
···
  */
 void drm_helper_disable_unused_functions(struct drm_device *dev)
 {
-	if (drm_core_check_feature(dev, DRIVER_ATOMIC))
-		DRM_ERROR("Called for atomic driver, this is not what you want.\n");
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
 
 	drm_modeset_lock_all(dev);
 	__drm_helper_disable_unused_functions(dev);
···
 	bool saved_enabled;
 	struct drm_encoder *encoder;
 	bool ret = true;
+
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
 
 	drm_warn_on_modeset_not_all_locked(dev);
 
···
 		if (!encoder_funcs)
 			continue;
 
-		DRM_DEBUG_KMS("[ENCODER:%d:%s] set [MODE:%d:%s]\n",
-			encoder->base.id, encoder->name,
-			mode->base.id, mode->name);
+		DRM_DEBUG_KMS("[ENCODER:%d:%s] set [MODE:%s]\n",
+			encoder->base.id, encoder->name, mode->name);
 		if (encoder_funcs->mode_set)
 			encoder_funcs->mode_set(encoder, mode, adjusted_mode);
···
 
 	crtc_funcs = set->crtc->helper_private;
 
+	dev = set->crtc->dev;
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
+
 	if (!set->mode)
 		set->fb = NULL;
···
 		drm_crtc_helper_disable(set->crtc);
 		return 0;
 	}
-
-	dev = set->crtc->dev;
 
 	drm_warn_on_modeset_not_all_locked(dev);
···
 	struct drm_crtc *crtc = encoder ? encoder->crtc : NULL;
 	int old_dpms, encoder_dpms = DRM_MODE_DPMS_OFF;
 
+	WARN_ON(drm_drv_uses_atomic_modeset(connector->dev));
+
 	if (mode == connector->dpms)
 		return 0;
···
 	int encoder_dpms;
 	bool ret;
 
+	WARN_ON(drm_drv_uses_atomic_modeset(dev));
+
 	drm_modeset_lock_all(dev);
 	drm_for_each_crtc(crtc, dev) {
···
 	drm_modeset_unlock_all(dev);
 }
 EXPORT_SYMBOL(drm_helper_resume_force_mode);
+
+/**
+ * drm_helper_force_disable_all - Forcibly turn off all enabled CRTCs
+ * @dev: DRM device whose CRTCs to turn off
+ *
+ * Drivers may want to call this on unload to ensure that all displays are
+ * unlit and the GPU is in a consistent, low power state. Takes modeset locks.
+ *
+ * Note: This should only be used by non-atomic legacy drivers. For an atomic
+ * version look at drm_atomic_helper_shutdown().
+ *
+ * Returns:
+ * Zero on success, error code on failure.
+ */
+int drm_helper_force_disable_all(struct drm_device *dev)
+{
+	struct drm_crtc *crtc;
+	int ret = 0;
+
+	drm_modeset_lock_all(dev);
+	drm_for_each_crtc(crtc, dev)
+		if (crtc->enabled) {
+			struct drm_mode_set set = {
+				.crtc = crtc,
+			};
+
+			ret = drm_mode_set_config_internal(&set);
+			if (ret)
+				goto out;
+		}
+out:
+	drm_modeset_unlock_all(dev);
+	return ret;
+}
+EXPORT_SYMBOL(drm_helper_force_disable_all);
···
 	default:
 		WARN(1, "unknown DP link rate %d, using %x\n", link_rate,
 		     DP_LINK_BW_1_62);
+		/* fall through */
 	case 162000:
 		return DP_LINK_BW_1_62;
 	case 270000:
···
 	switch (link_bw) {
 	default:
 		WARN(1, "unknown DP link BW code %x, using 162000\n", link_bw);
+		/* fall through */
 	case DP_LINK_BW_1_62:
 		return 162000;
 	case DP_LINK_BW_2_7:
···
 		case DP_DS_16BPC:
 			return 16;
 		}
+		/* fall through */
 	default:
 		return 0;
 	}
+824-235
drivers/gpu/drm/drm_dp_mst_topology.c
···3333#include <drm/drm_fixed.h>3434#include <drm/drm_atomic.h>3535#include <drm/drm_atomic_helper.h>3636+#include <drm/drm_crtc_helper.h>36373738/**3839 * DOC: dp mst helper···4645 char *buf);4746static int test_calc_pbn_mode(void);48474949-static void drm_dp_put_port(struct drm_dp_mst_port *port);4848+static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port);50495150static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,5251 int id,···850849 if (lct > 1)851850 memcpy(mstb->rad, rad, lct / 2);852851 INIT_LIST_HEAD(&mstb->ports);853853- kref_init(&mstb->kref);852852+ kref_init(&mstb->topology_kref);853853+ kref_init(&mstb->malloc_kref);854854 return mstb;855855}856856857857-static void drm_dp_free_mst_port(struct kref *kref);858858-859857static void drm_dp_free_mst_branch_device(struct kref *kref)860858{861861- struct drm_dp_mst_branch *mstb = container_of(kref, struct drm_dp_mst_branch, kref);862862- if (mstb->port_parent) {863863- if (list_empty(&mstb->port_parent->next))864864- kref_put(&mstb->port_parent->kref, drm_dp_free_mst_port);865865- }859859+ struct drm_dp_mst_branch *mstb =860860+ container_of(kref, struct drm_dp_mst_branch, malloc_kref);861861+862862+ if (mstb->port_parent)863863+ drm_dp_mst_put_port_malloc(mstb->port_parent);864864+866865 kfree(mstb);867866}868867868868+/**869869+ * DOC: Branch device and port refcounting870870+ *871871+ * Topology refcount overview872872+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~873873+ *874874+ * The refcounting schemes for &struct drm_dp_mst_branch and &struct875875+ * drm_dp_mst_port are somewhat unusual. Both ports and branch devices have876876+ * two different kinds of refcounts: topology refcounts, and malloc refcounts.877877+ *878878+ * Topology refcounts are not exposed to drivers, and are handled internally879879+ * by the DP MST helpers. 
The helpers use them in order to prevent the880880+ * in-memory topology state from being changed in the middle of critical881881+ * operations like changing the internal state of payload allocations. This882882+ * means each branch and port will be considered to be connected to the rest883883+ * of the topology until it's topology refcount reaches zero. Additionally,884884+ * for ports this means that their associated &struct drm_connector will stay885885+ * registered with userspace until the port's refcount reaches 0.886886+ *887887+ * Malloc refcount overview888888+ * ~~~~~~~~~~~~~~~~~~~~~~~~889889+ *890890+ * Malloc references are used to keep a &struct drm_dp_mst_port or &struct891891+ * drm_dp_mst_branch allocated even after all of its topology references have892892+ * been dropped, so that the driver or MST helpers can safely access each893893+ * branch's last known state before it was disconnected from the topology.894894+ * When the malloc refcount of a port or branch reaches 0, the memory895895+ * allocation containing the &struct drm_dp_mst_branch or &struct896896+ * drm_dp_mst_port respectively will be freed.897897+ *898898+ * For &struct drm_dp_mst_branch, malloc refcounts are not currently exposed899899+ * to drivers. As of writing this documentation, there are no drivers that900900+ * have a usecase for accessing &struct drm_dp_mst_branch outside of the MST901901+ * helpers. Exposing this API to drivers in a race-free manner would take more902902+ * tweaking of the refcounting scheme, however patches are welcome provided903903+ * there is a legitimate driver usecase for this.904904+ *905905+ * Refcount relationships in a topology906906+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~907907+ *908908+ * Let's take a look at why the relationship between topology and malloc909909+ * refcounts is designed the way it is.910910+ *911911+ * .. 
kernel-figure:: dp-mst/topology-figure-1.dot912912+ *913913+ * An example of topology and malloc refs in a DP MST topology with two914914+ * active payloads. Topology refcount increments are indicated by solid915915+ * lines, and malloc refcount increments are indicated by dashed lines.916916+ * Each starts from the branch which incremented the refcount, and ends at917917+ * the branch to which the refcount belongs to, i.e. the arrow points the918918+ * same way as the C pointers used to reference a structure.919919+ *920920+ * As you can see in the above figure, every branch increments the topology921921+ * refcount of it's children, and increments the malloc refcount of it's922922+ * parent. Additionally, every payload increments the malloc refcount of it's923923+ * assigned port by 1.924924+ *925925+ * So, what would happen if MSTB #3 from the above figure was unplugged from926926+ * the system, but the driver hadn't yet removed payload #2 from port #3? The927927+ * topology would start to look like the figure below.928928+ *929929+ * .. kernel-figure:: dp-mst/topology-figure-2.dot930930+ *931931+ * Ports and branch devices which have been released from memory are932932+ * colored grey, and references which have been removed are colored red.933933+ *934934+ * Whenever a port or branch device's topology refcount reaches zero, it will935935+ * decrement the topology refcounts of all its children, the malloc refcount936936+ * of its parent, and finally its own malloc refcount. For MSTB #4 and port937937+ * #4, this means they both have been disconnected from the topology and freed938938+ * from memory. But, because payload #2 is still holding a reference to port939939+ * #3, port #3 is removed from the topology but it's &struct drm_dp_mst_port940940+ * is still accessible from memory. 
This also means port #3 has not yet941941+ * decremented the malloc refcount of MSTB #3, so it's &struct942942+ * drm_dp_mst_branch will also stay allocated in memory until port #3's943943+ * malloc refcount reaches 0.944944+ *945945+ * This relationship is necessary because in order to release payload #2, we946946+ * need to be able to figure out the last relative of port #3 that's still947947+ * connected to the topology. In this case, we would travel up the topology as948948+ * shown below.949949+ *950950+ * .. kernel-figure:: dp-mst/topology-figure-3.dot951951+ *952952+ * And finally, remove payload #2 by communicating with port #2 through953953+ * sideband transactions.954954+ */955955+956956+/**957957+ * drm_dp_mst_get_mstb_malloc() - Increment the malloc refcount of a branch958958+ * device959959+ * @mstb: The &struct drm_dp_mst_branch to increment the malloc refcount of960960+ *961961+ * Increments &drm_dp_mst_branch.malloc_kref. When962962+ * &drm_dp_mst_branch.malloc_kref reaches 0, the memory allocation for @mstb963963+ * will be released and @mstb may no longer be used.964964+ *965965+ * See also: drm_dp_mst_put_mstb_malloc()966966+ */967967+static void968968+drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *mstb)969969+{970970+ kref_get(&mstb->malloc_kref);971971+ DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref));972972+}973973+974974+/**975975+ * drm_dp_mst_put_mstb_malloc() - Decrement the malloc refcount of a branch976976+ * device977977+ * @mstb: The &struct drm_dp_mst_branch to decrement the malloc refcount of978978+ *979979+ * Decrements &drm_dp_mst_branch.malloc_kref. 
+ * When &drm_dp_mst_branch.malloc_kref reaches 0, the memory allocation for
+ * @mstb will be released and @mstb may no longer be used.
+ *
+ * See also: drm_dp_mst_get_mstb_malloc()
+ */
+static void
+drm_dp_mst_put_mstb_malloc(struct drm_dp_mst_branch *mstb)
+{
+	DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref) - 1);
+	kref_put(&mstb->malloc_kref, drm_dp_free_mst_branch_device);
+}
+
+static void drm_dp_free_mst_port(struct kref *kref)
+{
+	struct drm_dp_mst_port *port =
+		container_of(kref, struct drm_dp_mst_port, malloc_kref);
+
+	drm_dp_mst_put_mstb_malloc(port->parent);
+	kfree(port);
+}
+
+/**
+ * drm_dp_mst_get_port_malloc() - Increment the malloc refcount of an MST port
+ * @port: The &struct drm_dp_mst_port to increment the malloc refcount of
+ *
+ * Increments &drm_dp_mst_port.malloc_kref. When &drm_dp_mst_port.malloc_kref
+ * reaches 0, the memory allocation for @port will be released and @port may
+ * no longer be used.
+ *
+ * Because @port could potentially be freed at any time by the DP MST helpers
+ * if &drm_dp_mst_port.malloc_kref reaches 0, including during a call to this
+ * function, drivers that wish to make use of &struct drm_dp_mst_port should
+ * ensure that they grab at least one main malloc reference to their MST ports
+ * in &drm_dp_mst_topology_cbs.add_connector. This callback is called before
+ * there is any chance for &drm_dp_mst_port.malloc_kref to reach 0.
+ *
+ * See also: drm_dp_mst_put_port_malloc()
+ */
+void
+drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port)
+{
+	kref_get(&port->malloc_kref);
+	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref));
+}
+EXPORT_SYMBOL(drm_dp_mst_get_port_malloc);
+
+/**
+ * drm_dp_mst_put_port_malloc() - Decrement the malloc refcount of an MST port
+ * @port: The &struct drm_dp_mst_port to decrement the malloc refcount of
+ *
+ * Decrements &drm_dp_mst_port.malloc_kref. When &drm_dp_mst_port.malloc_kref
+ * reaches 0, the memory allocation for @port will be released and @port may
+ * no longer be used.
+ *
+ * See also: drm_dp_mst_get_port_malloc()
+ */
+void
+drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port)
+{
+	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref) - 1);
+	kref_put(&port->malloc_kref, drm_dp_free_mst_port);
+}
+EXPORT_SYMBOL(drm_dp_mst_put_port_malloc);
+
 static void drm_dp_destroy_mst_branch_device(struct kref *kref)
 {
-	struct drm_dp_mst_branch *mstb = container_of(kref, struct drm_dp_mst_branch, kref);
+	struct drm_dp_mst_branch *mstb =
+		container_of(kref, struct drm_dp_mst_branch, topology_kref);
+	struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
 	struct drm_dp_mst_port *port, *tmp;
 	bool wake_tx = false;

-	/*
-	 * init kref again to be used by ports to remove mst branch when it is
-	 * not needed anymore
-	 */
-	kref_init(kref);
-
-	if (mstb->port_parent && list_empty(&mstb->port_parent->next))
-		kref_get(&mstb->port_parent->kref);
-
-	/*
-	 * destroy all ports - don't need lock
-	 * as there are no more references to the mst branch
-	 * device at this point.
-	 */
+	mutex_lock(&mgr->lock);
 	list_for_each_entry_safe(port, tmp, &mstb->ports, next) {
 		list_del(&port->next);
-		drm_dp_put_port(port);
+		drm_dp_mst_topology_put_port(port);
 	}
+	mutex_unlock(&mgr->lock);

 	/* drop any tx slots msg */
 	mutex_lock(&mstb->mgr->qlock);
···
 	if (wake_tx)
 		wake_up_all(&mstb->mgr->tx_waitq);

-	kref_put(kref, drm_dp_free_mst_branch_device);
+	drm_dp_mst_put_mstb_malloc(mstb);
 }

-static void drm_dp_put_mst_branch_device(struct drm_dp_mst_branch *mstb)
+/**
+ * drm_dp_mst_topology_try_get_mstb() - Increment the topology refcount of a
+ * branch device unless it's zero
+ * @mstb: &struct drm_dp_mst_branch to increment the topology refcount of
+ *
+ * Attempts to grab a topology reference to @mstb, if it hasn't yet been
+ * removed from the topology (i.e. &drm_dp_mst_branch.topology_kref has
+ * reached 0). Holding a topology reference implies that a malloc reference
+ * will be held to @mstb as long as the user holds the topology reference.
+ *
+ * Care should be taken to ensure that the user has at least one malloc
+ * reference to @mstb.
+ * If you already have a topology reference to @mstb, you should use
+ * drm_dp_mst_topology_get_mstb() instead.
+ *
+ * See also:
+ * drm_dp_mst_topology_get_mstb()
+ * drm_dp_mst_topology_put_mstb()
+ *
+ * Returns:
+ * * 1: A topology reference was grabbed successfully
+ * * 0: @mstb is no longer in the topology, no reference was grabbed
+ */
+static int __must_check
+drm_dp_mst_topology_try_get_mstb(struct drm_dp_mst_branch *mstb)
 {
-	kref_put(&mstb->kref, drm_dp_destroy_mst_branch_device);
+	int ret = kref_get_unless_zero(&mstb->topology_kref);
+
+	if (ret)
+		DRM_DEBUG("mstb %p (%d)\n", mstb,
+			  kref_read(&mstb->topology_kref));
+
+	return ret;
 }

+/**
+ * drm_dp_mst_topology_get_mstb() - Increment the topology refcount of a
+ * branch device
+ * @mstb: The &struct drm_dp_mst_branch to increment the topology refcount of
+ *
+ * Increments &drm_dp_mst_branch.topology_kref without checking whether or
+ * not it's already reached 0. This is only valid to use in scenarios where
+ * you are already guaranteed to have at least one active topology reference
+ * to @mstb. Otherwise, drm_dp_mst_topology_try_get_mstb() must be used.
+ *
+ * See also:
+ * drm_dp_mst_topology_try_get_mstb()
+ * drm_dp_mst_topology_put_mstb()
+ */
+static void drm_dp_mst_topology_get_mstb(struct drm_dp_mst_branch *mstb)
+{
+	WARN_ON(kref_read(&mstb->topology_kref) == 0);
+	kref_get(&mstb->topology_kref);
+	DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->topology_kref));
+}
+
+/**
+ * drm_dp_mst_topology_put_mstb() - release a topology reference to a branch
+ * device
+ * @mstb: The &struct drm_dp_mst_branch to release the topology reference from
+ *
+ * Releases a topology reference from @mstb by decrementing
+ * &drm_dp_mst_branch.topology_kref.
+ *
+ * See also:
+ * drm_dp_mst_topology_try_get_mstb()
+ * drm_dp_mst_topology_get_mstb()
+ */
+static void
+drm_dp_mst_topology_put_mstb(struct drm_dp_mst_branch *mstb)
+{
+	DRM_DEBUG("mstb %p (%d)\n",
+		  mstb, kref_read(&mstb->topology_kref) - 1);
+	kref_put(&mstb->topology_kref, drm_dp_destroy_mst_branch_device);
+}

 static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int old_pdt)
 {
···
 	case DP_PEER_DEVICE_MST_BRANCHING:
 		mstb = port->mstb;
 		port->mstb = NULL;
-		drm_dp_put_mst_branch_device(mstb);
+		drm_dp_mst_topology_put_mstb(mstb);
 		break;
 	}
 }

 static void drm_dp_destroy_port(struct kref *kref)
 {
-	struct drm_dp_mst_port *port = container_of(kref, struct drm_dp_mst_port, kref);
+	struct drm_dp_mst_port *port =
+		container_of(kref, struct drm_dp_mst_port, topology_kref);
 	struct drm_dp_mst_topology_mgr *mgr = port->mgr;

 	if (!port->input) {
-		port->vcpi.num_slots = 0;
-
 		kfree(port->cached_edid);

 		/*
···
 		 * from an EDID retrieval */

 		mutex_lock(&mgr->destroy_connector_lock);
-		kref_get(&port->parent->kref);
 		list_add(&port->next, &mgr->destroy_connector_list);
 		mutex_unlock(&mgr->destroy_connector_lock);
 		schedule_work(&mgr->destroy_connector_work);
···
 		drm_dp_port_teardown_pdt(port, port->pdt);
 		port->pdt = DP_PEER_DEVICE_NONE;
 	}
-	kfree(port);
+	drm_dp_mst_put_port_malloc(port);
 }

-static void drm_dp_put_port(struct drm_dp_mst_port *port)
+/**
+ * drm_dp_mst_topology_try_get_port() - Increment the topology refcount of a
+ * port unless it's zero
+ * @port: &struct drm_dp_mst_port to increment the topology refcount of
+ *
+ * Attempts to grab a topology reference to @port, if it hasn't yet been
+ * removed from the topology (i.e. &drm_dp_mst_port.topology_kref has reached
+ * 0). Holding a topology reference implies that a malloc reference will be
+ * held to @port as long as the user holds the topology reference.
+ *
+ * Care should be taken to ensure that the user has at least one malloc
+ * reference to @port. If you already have a topology reference to @port, you
+ * should use drm_dp_mst_topology_get_port() instead.
+ *
+ * See also:
+ * drm_dp_mst_topology_get_port()
+ * drm_dp_mst_topology_put_port()
+ *
+ * Returns:
+ * * 1: A topology reference was grabbed successfully
+ * * 0: @port is no longer in the topology, no reference was grabbed
+ */
+static int __must_check
+drm_dp_mst_topology_try_get_port(struct drm_dp_mst_port *port)
 {
-	kref_put(&port->kref, drm_dp_destroy_port);
+	int ret = kref_get_unless_zero(&port->topology_kref);
+
+	if (ret)
+		DRM_DEBUG("port %p (%d)\n", port,
+			  kref_read(&port->topology_kref));
+
+	return ret;
 }

-static struct drm_dp_mst_branch *drm_dp_mst_get_validated_mstb_ref_locked(struct drm_dp_mst_branch *mstb, struct drm_dp_mst_branch *to_find)
+/**
+ * drm_dp_mst_topology_get_port() - Increment the topology refcount of a port
+ * @port: The &struct drm_dp_mst_port to increment the topology refcount of
+ *
+ * Increments &drm_dp_mst_port.topology_kref without checking whether or
+ * not it's already reached 0. This is only valid to use in scenarios where
+ * you are already guaranteed to have at least one active topology reference
+ * to @port.
+ * Otherwise, drm_dp_mst_topology_try_get_port() must be used.
+ *
+ * See also:
+ * drm_dp_mst_topology_try_get_port()
+ * drm_dp_mst_topology_put_port()
+ */
+static void drm_dp_mst_topology_get_port(struct drm_dp_mst_port *port)
+{
+	WARN_ON(kref_read(&port->topology_kref) == 0);
+	kref_get(&port->topology_kref);
+	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->topology_kref));
+}
+
+/**
+ * drm_dp_mst_topology_put_port() - release a topology reference to a port
+ * @port: The &struct drm_dp_mst_port to release the topology reference from
+ *
+ * Releases a topology reference from @port by decrementing
+ * &drm_dp_mst_port.topology_kref.
+ *
+ * See also:
+ * drm_dp_mst_topology_try_get_port()
+ * drm_dp_mst_topology_get_port()
+ */
+static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port)
+{
+	DRM_DEBUG("port %p (%d)\n",
+		  port, kref_read(&port->topology_kref) - 1);
+	kref_put(&port->topology_kref, drm_dp_destroy_port);
+}
+
+static struct drm_dp_mst_branch *
+drm_dp_mst_topology_get_mstb_validated_locked(struct drm_dp_mst_branch *mstb,
+					      struct drm_dp_mst_branch *to_find)
 {
 	struct drm_dp_mst_port *port;
 	struct drm_dp_mst_branch *rmstb;
-	if (to_find == mstb) {
-		kref_get(&mstb->kref);
+
+	if (to_find == mstb)
 		return mstb;
-	}
+
 	list_for_each_entry(port, &mstb->ports, next) {
 		if (port->mstb) {
-			rmstb = drm_dp_mst_get_validated_mstb_ref_locked(port->mstb, to_find);
+			rmstb = drm_dp_mst_topology_get_mstb_validated_locked(
+			    port->mstb, to_find);
 			if (rmstb)
 				return rmstb;
 		}
···
 	return NULL;
 }

-static struct drm_dp_mst_branch *drm_dp_get_validated_mstb_ref(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb)
+static struct drm_dp_mst_branch *
+drm_dp_mst_topology_get_mstb_validated(struct drm_dp_mst_topology_mgr *mgr,
+				       struct drm_dp_mst_branch *mstb)
 {
 	struct drm_dp_mst_branch *rmstb = NULL;
+
 	mutex_lock(&mgr->lock);
-	if (mgr->mst_primary)
-		rmstb = drm_dp_mst_get_validated_mstb_ref_locked(mgr->mst_primary, mstb);
+	if (mgr->mst_primary) {
+		rmstb = drm_dp_mst_topology_get_mstb_validated_locked(
+		    mgr->mst_primary, mstb);
+
+		if (rmstb && !drm_dp_mst_topology_try_get_mstb(rmstb))
+			rmstb = NULL;
+	}
 	mutex_unlock(&mgr->lock);
 	return rmstb;
 }

-static struct drm_dp_mst_port *drm_dp_mst_get_port_ref_locked(struct drm_dp_mst_branch *mstb, struct drm_dp_mst_port *to_find)
+static struct drm_dp_mst_port *
+drm_dp_mst_topology_get_port_validated_locked(struct drm_dp_mst_branch *mstb,
+					      struct drm_dp_mst_port *to_find)
 {
 	struct drm_dp_mst_port *port, *mport;

 	list_for_each_entry(port, &mstb->ports, next) {
-		if (port == to_find) {
-			kref_get(&port->kref);
+		if (port == to_find)
 			return port;
-		}
+
 		if (port->mstb) {
-			mport = drm_dp_mst_get_port_ref_locked(port->mstb, to_find);
+			mport = drm_dp_mst_topology_get_port_validated_locked(
+			    port->mstb, to_find);
 			if (mport)
 				return mport;
 		}
···
 	return NULL;
 }

-static struct drm_dp_mst_port *drm_dp_get_validated_port_ref(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
+static struct drm_dp_mst_port *
+drm_dp_mst_topology_get_port_validated(struct drm_dp_mst_topology_mgr *mgr,
+				       struct drm_dp_mst_port *port)
 {
 	struct drm_dp_mst_port *rport = NULL;
+
 	mutex_lock(&mgr->lock);
-	if (mgr->mst_primary)
-		rport = drm_dp_mst_get_port_ref_locked(mgr->mst_primary, port);
+	if (mgr->mst_primary) {
+		rport = drm_dp_mst_topology_get_port_validated_locked(
+		    mgr->mst_primary, port);
+
+		if (rport && !drm_dp_mst_topology_try_get_port(rport))
+			rport = NULL;
+	}
 	mutex_unlock(&mgr->lock);
 	return rport;
 }
···
 static struct drm_dp_mst_port *drm_dp_get_port(struct drm_dp_mst_branch *mstb, u8 port_num)
 {
 	struct drm_dp_mst_port *port;
+	int ret;

 	list_for_each_entry(port, &mstb->ports, next) {
 		if (port->port_num == port_num) {
-			kref_get(&port->kref);
-			return port;
+			ret = drm_dp_mst_topology_try_get_port(port);
+			return ret ? port : NULL;
 		}
 	}
···
 	if (port->mstb) {
 		port->mstb->mgr = port->mgr;
 		port->mstb->port_parent = port;
+		/*
+		 * Make sure this port's memory allocation stays
+		 * around until its child MSTB releases it
+		 */
+		drm_dp_mst_get_port_malloc(port);

 		send_link = true;
 	}
···
 	bool created = false;
 	int old_pdt = 0;
 	int old_ddps = 0;
+
 	port = drm_dp_get_port(mstb, port_msg->port_number);
 	if (!port) {
 		port = kzalloc(sizeof(*port), GFP_KERNEL);
 		if (!port)
 			return;
-		kref_init(&port->kref);
+		kref_init(&port->topology_kref);
+		kref_init(&port->malloc_kref);
 		port->parent = mstb;
 		port->port_num = port_msg->port_number;
 		port->mgr = mstb->mgr;
 		port->aux.name = "DPMST";
 		port->aux.dev = dev->dev;
+
+		/*
+		 * Make sure the memory allocation for our parent branch stays
+		 * around until our own memory allocation is released
+		 */
+		drm_dp_mst_get_mstb_malloc(mstb);
+
 		created = true;
 	} else {
 		old_pdt = port->pdt;
···
 	   for this list */
 	if (created) {
 		mutex_lock(&mstb->mgr->lock);
-		kref_get(&port->kref);
+		drm_dp_mst_topology_get_port(port);
 		list_add(&port->next, &mstb->ports);
 		mutex_unlock(&mstb->mgr->lock);
 	}

 	if (old_ddps != port->ddps) {
 		if (port->ddps) {
-			if (!port->input)
-				drm_dp_send_enum_path_resources(mstb->mgr, mstb, port);
+			if (!port->input) {
+				drm_dp_send_enum_path_resources(mstb->mgr,
+								mstb, port);
+			}
 		} else {
 			port->available_pbn = 0;
-			}
+		}
 	}

 	if (old_pdt != port->pdt && !port->input) {
···
 	if (created && !port->input) {
 		char proppath[255];

-		build_mst_prop_path(mstb, port->port_num, proppath, sizeof(proppath));
-		port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);
+		build_mst_prop_path(mstb, port->port_num, proppath,
+				    sizeof(proppath));
+		port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr,
+								   port,
+								   proppath);
 		if (!port->connector) {
 			/* remove it from the port list */
 			mutex_lock(&mstb->mgr->lock);
 			list_del(&port->next);
 			mutex_unlock(&mstb->mgr->lock);
 			/* drop port list reference */
-			drm_dp_put_port(port);
+			drm_dp_mst_topology_put_port(port);
 			goto out;
 		}
 		if ((port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
 		     port->pdt == DP_PEER_DEVICE_SST_SINK) &&
 		    port->port_num >= DP_MST_LOGICAL_PORT_0) {
-			port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);
+			port->cached_edid = drm_get_edid(port->connector,
+							 &port->aux.ddc);
 			drm_connector_set_tile_property(port->connector);
 		}
 		(*mstb->mgr->cbs->register_connector)(port->connector);
···

 out:
 	/* put reference to this port */
-	drm_dp_put_port(port);
+	drm_dp_mst_topology_put_port(port);
 }

 static void drm_dp_update_port(struct drm_dp_mst_branch *mstb,
···
 		dowork = true;
 	}

-	drm_dp_put_port(port);
+	drm_dp_mst_topology_put_port(port);
 	if (dowork)
 		queue_work(system_long_wq, &mstb->mgr->work);
···
 {
 	struct drm_dp_mst_branch *mstb;
 	struct drm_dp_mst_port *port;
-	int i;
+	int i, ret;
 	/* find the port by iterating down */

 	mutex_lock(&mgr->lock);
···
 		}
 	}
-	kref_get(&mstb->kref);
+	ret = drm_dp_mst_topology_try_get_mstb(mstb);
+	if (!ret)
+		mstb = NULL;
 out:
 	mutex_unlock(&mgr->lock);
 	return mstb;
···
 	return NULL;
 }

-static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device_by_guid(
-	struct drm_dp_mst_topology_mgr *mgr,
-	uint8_t *guid)
+static struct drm_dp_mst_branch *
+drm_dp_get_mst_branch_device_by_guid(struct drm_dp_mst_topology_mgr *mgr,
+				     uint8_t *guid)
 {
 	struct drm_dp_mst_branch *mstb;
+	int ret;

 	/* find the port by iterating down */
 	mutex_lock(&mgr->lock);

 	mstb = get_mst_branch_device_by_guid_helper(mgr->mst_primary, guid);
-
-	if (mstb)
-		kref_get(&mstb->kref);
+	if (mstb) {
+		ret = drm_dp_mst_topology_try_get_mstb(mstb);
+		if (!ret)
+			mstb = NULL;
+	}

 	mutex_unlock(&mgr->lock);
 	return mstb;
···
 	drm_dp_send_enum_path_resources(mgr, mstb, port);

 	if (port->mstb) {
-		mstb_child = drm_dp_get_validated_mstb_ref(mgr, port->mstb);
+		mstb_child = drm_dp_mst_topology_get_mstb_validated(
+		    mgr, port->mstb);
 		if (mstb_child) {
 			drm_dp_check_and_send_link_address(mgr, mstb_child);
-			drm_dp_put_mst_branch_device(mstb_child);
+			drm_dp_mst_topology_put_mstb(mstb_child);
 		}
 	}
 }
···
 {
 	struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, work);
 	struct drm_dp_mst_branch *mstb;
+	int ret;

 	mutex_lock(&mgr->lock);
 	mstb = mgr->mst_primary;
 	if (mstb) {
-		kref_get(&mstb->kref);
+		ret = drm_dp_mst_topology_try_get_mstb(mstb);
+		if (!ret)
+			mstb = NULL;
 	}
 	mutex_unlock(&mgr->lock);
 	if (mstb) {
 		drm_dp_check_and_send_link_address(mgr, mstb);
-		drm_dp_put_mst_branch_device(mstb);
+		drm_dp_mst_topology_put_mstb(mstb);
 	}
 }
···
 			for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
 				drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]);
 			}
-			(*mgr->cbs->hotplug)(mgr);
+			drm_kms_helper_hotplug_event(mgr->dev);
 		}
 	} else {
 		mstb->link_address_sent = false;
···
 	return drm_dp_get_last_connected_port_to_mstb(mstb->port_parent->parent);
 }

-static struct drm_dp_mst_branch *drm_dp_get_last_connected_port_and_mstb(struct drm_dp_mst_topology_mgr *mgr,
-							 struct drm_dp_mst_branch *mstb,
-							 int *port_num)
+/*
+ * Searches upwards in the topology starting from mstb to try to find the
+ * closest available parent of mstb that's still connected to the rest of the
+ * topology. This can be used in order to perform operations like releasing
+ * payloads, where the branch device which owned the payload may no longer be
+ * around and thus would require that the payload on the last living relative
+ * be freed instead.
+ */
+static struct drm_dp_mst_branch *
+drm_dp_get_last_connected_port_and_mstb(struct drm_dp_mst_topology_mgr *mgr,
+					struct drm_dp_mst_branch *mstb,
+					int *port_num)
 {
 	struct drm_dp_mst_branch *rmstb = NULL;
 	struct drm_dp_mst_port *found_port;
-	mutex_lock(&mgr->lock);
-	if (mgr->mst_primary) {
-		found_port = drm_dp_get_last_connected_port_to_mstb(mstb);

-		if (found_port) {
+	mutex_lock(&mgr->lock);
+	if (!mgr->mst_primary)
+		goto out;
+
+	do {
+		found_port = drm_dp_get_last_connected_port_to_mstb(mstb);
+		if (!found_port)
+			break;
+
+		if (drm_dp_mst_topology_try_get_mstb(found_port->parent)) {
 			rmstb = found_port->parent;
-			kref_get(&rmstb->kref);
 			*port_num = found_port->port_num;
+		} else {
+			/* Search again, starting from this parent */
+			mstb = found_port->parent;
 		}
-	}
+	} while (!rmstb);
+out:
 	mutex_unlock(&mgr->lock);
 	return rmstb;
 }
···
 	u8 sinks[DRM_DP_MAX_SDP_STREAMS];
 	int i;

-	port = drm_dp_get_validated_port_ref(mgr, port);
-	if (!port)
-		return -EINVAL;
-
 	port_num = port->port_num;
-	mstb = drm_dp_get_validated_mstb_ref(mgr, port->parent);
+	mstb = drm_dp_mst_topology_get_mstb_validated(mgr, port->parent);
 	if (!mstb) {
-		mstb = drm_dp_get_last_connected_port_and_mstb(mgr, port->parent, &port_num);
+		mstb = drm_dp_get_last_connected_port_and_mstb(mgr,
+							       port->parent,
+							       &port_num);

-		if (!mstb) {
-			drm_dp_put_port(port);
+		if (!mstb)
 			return -EINVAL;
-		}
 	}

 	txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
···
 	drm_dp_queue_down_tx(mgr, txmsg);

+	/*
+	 * FIXME: there is a small chance that between getting the last
+	 * connected mstb and sending the payload message, the last connected
+	 * mstb could also be removed from the topology. In the future, this
+	 * needs to be fixed by restarting the
+	 * drm_dp_get_last_connected_port_and_mstb() search in the event of a
+	 * timeout if the topology is still connected to the system.
+	 */
 	ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
 	if (ret > 0) {
-		if (txmsg->reply.reply_type == 1) {
+		if (txmsg->reply.reply_type == 1)
 			ret = -EINVAL;
-		} else
+		else
 			ret = 0;
 	}
 	kfree(txmsg);
 fail_put:
-	drm_dp_put_mst_branch_device(mstb);
-	drm_dp_put_port(port);
+	drm_dp_mst_topology_put_mstb(mstb);
 	return ret;
 }
···
 	struct drm_dp_sideband_msg_tx *txmsg;
 	int len, ret;

-	port = drm_dp_get_validated_port_ref(mgr, port);
+	port = drm_dp_mst_topology_get_port_validated(mgr, port);
 	if (!port)
 		return -EINVAL;

 	txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
 	if (!txmsg) {
-		drm_dp_put_port(port);
+		drm_dp_mst_topology_put_port(port);
 		return -ENOMEM;
 	}
···
 		ret = 0;
 	}
 	kfree(txmsg);
-	drm_dp_put_port(port);
+	drm_dp_mst_topology_put_port(port);

 	return ret;
 }
···
  */
 int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
 {
-	int i, j;
-	int cur_slots = 1;
 	struct drm_dp_payload req_payload;
 	struct drm_dp_mst_port *port;
+	int i, j;
+	int cur_slots = 1;

 	mutex_lock(&mgr->payload_lock);
 	for (i = 0; i < mgr->max_payloads; i++) {
+		struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
+		struct drm_dp_payload *payload = &mgr->payloads[i];
+		bool put_port = false;
+
 		/* solve the current payloads - compare to the hw ones
 		   - update the hw view */
 		req_payload.start_slot = cur_slots;
-		if (mgr->proposed_vcpis[i]) {
-			port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
-			port = drm_dp_get_validated_port_ref(mgr, port);
-			if (!port) {
-				mutex_unlock(&mgr->payload_lock);
-				return -EINVAL;
+		if (vcpi) {
+			port = container_of(vcpi, struct drm_dp_mst_port,
+					    vcpi);
+
+			/* Validated ports don't matter if we're releasing
+			 * VCPI
+			 */
+			if (vcpi->num_slots) {
+				port = drm_dp_mst_topology_get_port_validated(
+				    mgr, port);
+				if (!port) {
+					mutex_unlock(&mgr->payload_lock);
+					return -EINVAL;
+				}
+				put_port = true;
 			}
-			req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;
-			req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi;
+
+			req_payload.num_slots = vcpi->num_slots;
+			req_payload.vcpi = vcpi->vcpi;
 		} else {
 			port = NULL;
 			req_payload.num_slots = 0;
 		}

-		if (mgr->payloads[i].start_slot != req_payload.start_slot) {
-			mgr->payloads[i].start_slot = req_payload.start_slot;
-		}
+		payload->start_slot = req_payload.start_slot;
 		/* work out what is required to happen with this payload */
-		if (mgr->payloads[i].num_slots != req_payload.num_slots) {
+		if (payload->num_slots != req_payload.num_slots) {

 			/* need to push an update for this payload */
 			if (req_payload.num_slots) {
-				drm_dp_create_payload_step1(mgr, mgr->proposed_vcpis[i]->vcpi, &req_payload);
-				mgr->payloads[i].num_slots = req_payload.num_slots;
-				mgr->payloads[i].vcpi = req_payload.vcpi;
-			} else if (mgr->payloads[i].num_slots) {
-				mgr->payloads[i].num_slots = 0;
-				drm_dp_destroy_payload_step1(mgr, port, mgr->payloads[i].vcpi, &mgr->payloads[i]);
-				req_payload.payload_state = mgr->payloads[i].payload_state;
-				mgr->payloads[i].start_slot = 0;
+				drm_dp_create_payload_step1(mgr, vcpi->vcpi,
+							    &req_payload);
+				payload->num_slots = req_payload.num_slots;
+				payload->vcpi = req_payload.vcpi;
+
+			} else if (payload->num_slots) {
+				payload->num_slots = 0;
+				drm_dp_destroy_payload_step1(mgr, port,
+							     payload->vcpi,
+							     payload);
+				req_payload.payload_state =
+					payload->payload_state;
+				payload->start_slot = 0;
 			}
-			mgr->payloads[i].payload_state = req_payload.payload_state;
+			payload->payload_state = req_payload.payload_state;
 		}
 		cur_slots += req_payload.num_slots;

-		if (port)
-			drm_dp_put_port(port);
+		if (put_port)
+			drm_dp_mst_topology_put_port(port);
 	}

 	for (i = 0; i < mgr->max_payloads; i++) {
-		if (mgr->payloads[i].payload_state == DP_PAYLOAD_DELETE_LOCAL) {
-			DRM_DEBUG_KMS("removing payload %d\n", i);
-			for (j = i; j < mgr->max_payloads - 1; j++) {
-				memcpy(&mgr->payloads[j], &mgr->payloads[j + 1], sizeof(struct drm_dp_payload));
-				mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1];
-				if (mgr->proposed_vcpis[j] && mgr->proposed_vcpis[j]->num_slots) {
-					set_bit(j + 1, &mgr->payload_mask);
-				} else {
-					clear_bit(j + 1, &mgr->payload_mask);
-				}
-			}
-			memset(&mgr->payloads[mgr->max_payloads - 1], 0, sizeof(struct drm_dp_payload));
-			mgr->proposed_vcpis[mgr->max_payloads - 1] = NULL;
-			clear_bit(mgr->max_payloads, &mgr->payload_mask);
+		if (mgr->payloads[i].payload_state != DP_PAYLOAD_DELETE_LOCAL)
+			continue;

+		DRM_DEBUG_KMS("removing payload %d\n", i);
+		for (j = i; j < mgr->max_payloads - 1; j++) {
+			mgr->payloads[j] = mgr->payloads[j + 1];
+			mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1];
+
+			if (mgr->proposed_vcpis[j] &&
+			    mgr->proposed_vcpis[j]->num_slots) {
+				set_bit(j + 1, &mgr->payload_mask);
+			} else {
+				clear_bit(j + 1, &mgr->payload_mask);
+			}
 		}
+
+		memset(&mgr->payloads[mgr->max_payloads - 1], 0,
+		       sizeof(struct drm_dp_payload));
+		mgr->proposed_vcpis[mgr->max_payloads - 1] = NULL;
+		clear_bit(mgr->max_payloads, &mgr->payload_mask);
 	}
 	mutex_unlock(&mgr->payload_lock);
···
 	struct drm_dp_sideband_msg_tx *txmsg;
 	struct drm_dp_mst_branch *mstb;

-	mstb = drm_dp_get_validated_mstb_ref(mgr, port->parent);
+	mstb = drm_dp_mst_topology_get_mstb_validated(mgr, port->parent);
 	if (!mstb)
 		return -EINVAL;
···
 	}
 	kfree(txmsg);
 fail_put:
-	drm_dp_put_mst_branch_device(mstb);
+	drm_dp_mst_topology_put_mstb(mstb);
 	return ret;
 }
···
 	/* give this the main reference */
 	mgr->mst_primary = mstb;
-	kref_get(&mgr->mst_primary->kref);
+	drm_dp_mst_topology_get_mstb(mgr->mst_primary);

 	ret = drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
 				 DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);
···
 out_unlock:
 	mutex_unlock(&mgr->lock);
 	if (mstb)
-		drm_dp_put_mst_branch_device(mstb);
+		drm_dp_mst_topology_put_mstb(mstb);
 	return ret;

 }
···
 			      mgr->down_rep_recv.initial_hdr.lct,
 			      mgr->down_rep_recv.initial_hdr.rad[0],
 			      mgr->down_rep_recv.msg[0]);
drm_dp_put_mst_branch_device(mstb);23482348+ drm_dp_mst_topology_put_mstb(mstb);27432349 memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));27442350 return 0;27452351 }···27502356 }2751235727522358 memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));27532753- drm_dp_put_mst_branch_device(mstb);23592359+ drm_dp_mst_topology_put_mstb(mstb);2754236027552361 mutex_lock(&mgr->qlock);27562362 txmsg->state = DRM_DP_SIDEBAND_TX_RX;···28062412 drm_dp_update_port(mstb, &msg.u.conn_stat);2807241328082414 DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, msg.u.conn_stat.legacy_device_plug_status, msg.u.conn_stat.displayport_device_plug_status, msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, msg.u.conn_stat.peer_device_type);28092809- (*mgr->cbs->hotplug)(mgr);24152415+ drm_kms_helper_hotplug_event(mgr->dev);2810241628112417 } else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {28122418 drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false);···28232429 }2824243028252431 if (mstb)28262826- drm_dp_put_mst_branch_device(mstb);24322432+ drm_dp_mst_topology_put_mstb(mstb);2827243328282434 memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));28292435 }···28832489 enum drm_connector_status status = connector_status_disconnected;2884249028852491 /* we need to search for the port in the mgr in case its gone */28862886- port = drm_dp_get_validated_port_ref(mgr, port);24922492+ port = drm_dp_mst_topology_get_port_validated(mgr, port);28872493 if (!port)28882494 return connector_status_disconnected;28892495···29082514 break;29092515 }29102516out:29112911- drm_dp_put_port(port);25172517+ drm_dp_mst_topology_put_port(port);29122518 return status;29132519}29142520EXPORT_SYMBOL(drm_dp_mst_detect_port);···29252531{29262532 bool ret = false;2927253329282928- port = drm_dp_get_validated_port_ref(mgr, port);25342534+ port = 
drm_dp_mst_topology_get_port_validated(mgr, port);29292535 if (!port)29302536 return ret;29312537 ret = port->has_audio;29322932- drm_dp_put_port(port);25382538+ drm_dp_mst_topology_put_port(port);29332539 return ret;29342540}29352541EXPORT_SYMBOL(drm_dp_mst_port_has_audio);···29492555 struct edid *edid = NULL;2950255629512557 /* we need to search for the port in the mgr in case its gone */29522952- port = drm_dp_get_validated_port_ref(mgr, port);25582558+ port = drm_dp_mst_topology_get_port_validated(mgr, port);29532559 if (!port)29542560 return NULL;29552561···29602566 drm_connector_set_tile_property(connector);29612567 }29622568 port->has_audio = drm_detect_monitor_audio(edid);29632963- drm_dp_put_port(port);25692569+ drm_dp_mst_topology_put_port(port);29642570 return edid;29652571}29662572EXPORT_SYMBOL(drm_dp_mst_get_edid);···30112617}3012261830132619/**30143014- * drm_dp_atomic_find_vcpi_slots() - Find and add vcpi slots to the state26202620+ * drm_dp_atomic_find_vcpi_slots() - Find and add VCPI slots to the state30152621 * @state: global atomic state30162622 * @mgr: MST topology manager for the port30172623 * @port: port to find vcpi slots for30182624 * @pbn: bandwidth required for the mode in PBN30192625 *30203020- * RETURNS:30213021- * Total slots in the atomic state assigned for this port or error26262626+ * Allocates VCPI slots to @port, replacing any previous VCPI allocations it26272627+ * may have had. 
Any atomic drivers which support MST must call this function26282628+ * in their &drm_encoder_helper_funcs.atomic_check() callback to change the26292629+ * current VCPI allocation for the new state, but only when26302630+ * &drm_crtc_state.mode_changed or &drm_crtc_state.connectors_changed is set26312631+ * to ensure compatibility with userspace applications that still use the26322632+ * legacy modesetting UAPI.26332633+ *26342634+ * Allocations set by this function are not checked against the bandwidth26352635+ * restraints of @mgr until the driver calls drm_dp_mst_atomic_check().26362636+ *26372637+ * Additionally, it is OK to call this function multiple times on the same26382638+ * @port as needed. It is not OK however, to call this function and26392639+ * drm_dp_atomic_release_vcpi_slots() in the same atomic check phase.26402640+ *26412641+ * See also:26422642+ * drm_dp_atomic_release_vcpi_slots()26432643+ * drm_dp_mst_atomic_check()26442644+ *26452645+ * Returns:26462646+ * Total slots in the atomic state assigned for this port, or a negative error26472647+ * code if the port no longer exists30222648 */30232649int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,30242650 struct drm_dp_mst_topology_mgr *mgr,30252651 struct drm_dp_mst_port *port, int pbn)30262652{30272653 struct drm_dp_mst_topology_state *topology_state;30283028- int req_slots;26542654+ struct drm_dp_vcpi_allocation *pos, *vcpi = NULL;26552655+ int prev_slots, req_slots, ret;3029265630302657 topology_state = drm_atomic_get_mst_topology_state(state, mgr);30312658 if (IS_ERR(topology_state))30322659 return PTR_ERR(topology_state);3033266030343034- port = drm_dp_get_validated_port_ref(mgr, port);26612661+ port = drm_dp_mst_topology_get_port_validated(mgr, port);30352662 if (port == NULL)30362663 return -EINVAL;30373037- req_slots = DIV_ROUND_UP(pbn, mgr->pbn_div);30383038- DRM_DEBUG_KMS("vcpi slots req=%d, avail=%d\n",30393039- req_slots, topology_state->avail_slots);3040266430413041- 
if (req_slots > topology_state->avail_slots) {30423042- drm_dp_put_port(port);30433043- return -ENOSPC;26652665+ /* Find the current allocation for this port, if any */26662666+ list_for_each_entry(pos, &topology_state->vcpis, next) {26672667+ if (pos->port == port) {26682668+ vcpi = pos;26692669+ prev_slots = vcpi->vcpi;26702670+26712671+ /*26722672+ * This should never happen, unless the driver tries26732673+ * releasing and allocating the same VCPI allocation,26742674+ * which is an error26752675+ */26762676+ if (WARN_ON(!prev_slots)) {26772677+ DRM_ERROR("cannot allocate and release VCPI on [MST PORT:%p] in the same state\n",26782678+ port);26792679+ return -EINVAL;26802680+ }26812681+26822682+ break;26832683+ }30442684 }26852685+ if (!vcpi)26862686+ prev_slots = 0;3045268730463046- topology_state->avail_slots -= req_slots;30473047- DRM_DEBUG_KMS("vcpi slots avail=%d", topology_state->avail_slots);26882688+ req_slots = DIV_ROUND_UP(pbn, mgr->pbn_div);3048268930493049- drm_dp_put_port(port);30503050- return req_slots;26902690+ DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n",26912691+ port->connector->base.id, port->connector->name,26922692+ port, prev_slots, req_slots);26932693+26942694+ /* Add the new allocation to the state */26952695+ if (!vcpi) {26962696+ vcpi = kzalloc(sizeof(*vcpi), GFP_KERNEL);26972697+ if (!vcpi) {26982698+ ret = -ENOMEM;26992699+ goto out;27002700+ }27012701+27022702+ drm_dp_mst_get_port_malloc(port);27032703+ vcpi->port = port;27042704+ list_add(&vcpi->next, &topology_state->vcpis);27052705+ }27062706+ vcpi->vcpi = req_slots;27072707+27082708+ ret = req_slots;27092709+out:27102710+ drm_dp_mst_topology_put_port(port);27112711+ return ret;30512712}30522713EXPORT_SYMBOL(drm_dp_atomic_find_vcpi_slots);30532714···31102661 * drm_dp_atomic_release_vcpi_slots() - Release allocated vcpi slots31112662 * @state: global atomic state31122663 * @mgr: MST topology manager for the port31133113- * @slots: number of vcpi slots to 
release26642664+ * @port: The port to release the VCPI slots from31142665 *31153115- * RETURNS:31163116- * 0 if @slots were added back to &drm_dp_mst_topology_state->avail_slots or31173117- * negative error code26662666+ * Releases any VCPI slots that have been allocated to a port in the atomic26672667+ * state. Any atomic drivers which support MST must call this function in26682668+ * their &drm_connector_helper_funcs.atomic_check() callback when the26692669+ * connector will no longer have VCPI allocated (e.g. because it's CRTC was26702670+ * removed) when it had VCPI allocated in the previous atomic state.26712671+ *26722672+ * It is OK to call this even if @port has been removed from the system.26732673+ * Additionally, it is OK to call this function multiple times on the same26742674+ * @port as needed. It is not OK however, to call this function and26752675+ * drm_dp_atomic_find_vcpi_slots() on the same @port in a single atomic check26762676+ * phase.26772677+ *26782678+ * See also:26792679+ * drm_dp_atomic_find_vcpi_slots()26802680+ * drm_dp_mst_atomic_check()26812681+ *26822682+ * Returns:26832683+ * 0 if all slots for this port were added back to26842684+ * &drm_dp_mst_topology_state.avail_slots or negative error code31182685 */31192686int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,31202687 struct drm_dp_mst_topology_mgr *mgr,31213121- int slots)26882688+ struct drm_dp_mst_port *port)31222689{31232690 struct drm_dp_mst_topology_state *topology_state;26912691+ struct drm_dp_vcpi_allocation *pos;26922692+ bool found = false;3124269331252694 topology_state = drm_atomic_get_mst_topology_state(state, mgr);31262695 if (IS_ERR(topology_state))31272696 return PTR_ERR(topology_state);3128269731293129- /* We cannot rely on port->vcpi.num_slots to update31303130- * topology_state->avail_slots as the port may not exist if the parent31313131- * branch device was unplugged. 
This should be fixed by tracking31323132- * per-port slot allocation in drm_dp_mst_topology_state instead of31333133- * depending on the caller to tell us how many slots to release.31343134- */31353135- topology_state->avail_slots += slots;31363136- DRM_DEBUG_KMS("vcpi slots released=%d, avail=%d\n",31373137- slots, topology_state->avail_slots);26982698+ list_for_each_entry(pos, &topology_state->vcpis, next) {26992699+ if (pos->port == port) {27002700+ found = true;27012701+ break;27022702+ }27032703+ }27042704+ if (WARN_ON(!found)) {27052705+ DRM_ERROR("no VCPI for [MST PORT:%p] found in mst state %p\n",27062706+ port, &topology_state->base);27072707+ return -EINVAL;27082708+ }27092709+27102710+ DRM_DEBUG_ATOMIC("[MST PORT:%p] VCPI %d -> 0\n", port, pos->vcpi);27112711+ if (pos->vcpi) {27122712+ drm_dp_mst_put_port_malloc(port);27132713+ pos->vcpi = 0;27142714+ }3138271531392716 return 0;31402717}···31782703{31792704 int ret;3180270531813181- port = drm_dp_get_validated_port_ref(mgr, port);27062706+ port = drm_dp_mst_topology_get_port_validated(mgr, port);31822707 if (!port)31832708 return false;31842709···31862711 return false;3187271231882713 if (port->vcpi.vcpi > 0) {31893189- DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d - requested pbn %d\n", port->vcpi.vcpi, port->vcpi.pbn, pbn);27142714+ DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d - requested pbn %d\n",27152715+ port->vcpi.vcpi, port->vcpi.pbn, pbn);31902716 if (pbn == port->vcpi.pbn) {31913191- drm_dp_put_port(port);27172717+ drm_dp_mst_topology_put_port(port);31922718 return true;31932719 }31942720 }···31972721 ret = drm_dp_init_vcpi(mgr, &port->vcpi, pbn, slots);31982722 if (ret) {31992723 DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n",32003200- DIV_ROUND_UP(pbn, mgr->pbn_div), ret);27242724+ DIV_ROUND_UP(pbn, mgr->pbn_div), ret);32012725 goto out;32022726 }32032727 DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n",32043204- pbn, 
port->vcpi.num_slots);27282728+ pbn, port->vcpi.num_slots);3205272932063206- drm_dp_put_port(port);27302730+ /* Keep port allocated until it's payload has been removed */27312731+ drm_dp_mst_get_port_malloc(port);27322732+ drm_dp_mst_topology_put_port(port);32072733 return true;32082734out:32092735 return false;···32152737int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)32162738{32172739 int slots = 0;32183218- port = drm_dp_get_validated_port_ref(mgr, port);27402740+ port = drm_dp_mst_topology_get_port_validated(mgr, port);32192741 if (!port)32202742 return slots;3221274332222744 slots = port->vcpi.num_slots;32233223- drm_dp_put_port(port);27452745+ drm_dp_mst_topology_put_port(port);32242746 return slots;32252747}32262748EXPORT_SYMBOL(drm_dp_mst_get_vcpi_slots);···32342756 */32352757void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)32362758{32373237- port = drm_dp_get_validated_port_ref(mgr, port);32383238- if (!port)32393239- return;27592759+ /*27602760+ * A port with VCPI will remain allocated until it's VCPI is27612761+ * released, no verified ref needed27622762+ */27632763+32402764 port->vcpi.num_slots = 0;32413241- drm_dp_put_port(port);32422765}32432766EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots);32442767···32482769 * @mgr: manager for this port32492770 * @port: unverified port to deallocate vcpi for32502771 */32513251-void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)27722772+void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,27732773+ struct drm_dp_mst_port *port)32522774{32533253- port = drm_dp_get_validated_port_ref(mgr, port);32543254- if (!port)32553255- return;27752775+ /*27762776+ * A port with VCPI will remain allocated until it's VCPI is27772777+ * released, no verified ref needed27782778+ */3256277932572780 drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);32582781 port->vcpi.num_slots = 
0;32592782 port->vcpi.pbn = 0;32602783 port->vcpi.aligned_pbn = 0;32612784 port->vcpi.vcpi = 0;32623262- drm_dp_put_port(port);27852785+ drm_dp_mst_put_port_malloc(port);32632786}32642787EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi);32652788···35453064 mutex_unlock(&mgr->qlock);35463065}3547306635483548-static void drm_dp_free_mst_port(struct kref *kref)35493549-{35503550- struct drm_dp_mst_port *port = container_of(kref, struct drm_dp_mst_port, kref);35513551- kref_put(&port->parent->kref, drm_dp_free_mst_branch_device);35523552- kfree(port);35533553-}35543554-35553067static void drm_dp_destroy_connector_work(struct work_struct *work)35563068{35573069 struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work);···35653091 list_del(&port->next);35663092 mutex_unlock(&mgr->destroy_connector_lock);3567309335683568- kref_init(&port->kref);35693094 INIT_LIST_HEAD(&port->next);3570309535713096 mgr->cbs->destroy_connector(mgr, port->connector);···35723099 drm_dp_port_teardown_pdt(port, port->pdt);35733100 port->pdt = DP_PEER_DEVICE_NONE;3574310135753575- if (!port->input && port->vcpi.vcpi > 0) {35763576- drm_dp_mst_reset_vcpi_slots(mgr, port);35773577- drm_dp_update_payload_part1(mgr);35783578- drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);35793579- }35803580-35813581- kref_put(&port->kref, drm_dp_free_mst_port);31023102+ drm_dp_mst_put_port_malloc(port);35823103 send_hotplug = true;35833104 }35843105 if (send_hotplug)35853585- (*mgr->cbs->hotplug)(mgr);31063106+ drm_kms_helper_hotplug_event(mgr->dev);35863107}3587310835883109static struct drm_private_state *35893110drm_dp_mst_duplicate_state(struct drm_private_obj *obj)35903111{35913591- struct drm_dp_mst_topology_state *state;31123112+ struct drm_dp_mst_topology_state *state, *old_state =31133113+ to_dp_mst_topology_state(obj->state);31143114+ struct drm_dp_vcpi_allocation *pos, *vcpi;3592311535933593- state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);31163116+ 
state = kmemdup(old_state, sizeof(*state), GFP_KERNEL);35943117 if (!state)35953118 return NULL;3596311935973120 __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);3598312131223122+ INIT_LIST_HEAD(&state->vcpis);31233123+31243124+ list_for_each_entry(pos, &old_state->vcpis, next) {31253125+ /* Prune leftover freed VCPI allocations */31263126+ if (!pos->vcpi)31273127+ continue;31283128+31293129+ vcpi = kmemdup(pos, sizeof(*vcpi), GFP_KERNEL);31303130+ if (!vcpi)31313131+ goto fail;31323132+31333133+ drm_dp_mst_get_port_malloc(vcpi->port);31343134+ list_add(&vcpi->next, &state->vcpis);31353135+ }31363136+35993137 return &state->base;31383138+31393139+fail:31403140+ list_for_each_entry_safe(pos, vcpi, &state->vcpis, next) {31413141+ drm_dp_mst_put_port_malloc(pos->port);31423142+ kfree(pos);31433143+ }31443144+ kfree(state);31453145+31463146+ return NULL;36003147}3601314836023149static void drm_dp_mst_destroy_state(struct drm_private_obj *obj,···36243131{36253132 struct drm_dp_mst_topology_state *mst_state =36263133 to_dp_mst_topology_state(state);31343134+ struct drm_dp_vcpi_allocation *pos, *tmp;31353135+31363136+ list_for_each_entry_safe(pos, tmp, &mst_state->vcpis, next) {31373137+ /* We only keep references to ports with non-zero VCPIs */31383138+ if (pos->vcpi)31393139+ drm_dp_mst_put_port_malloc(pos->port);31403140+ kfree(pos);31413141+ }3627314236283143 kfree(mst_state);36293144}3630314536313631-static const struct drm_private_state_funcs mst_state_funcs = {31463146+static inline int31473147+drm_dp_mst_atomic_check_topology_state(struct drm_dp_mst_topology_mgr *mgr,31483148+ struct drm_dp_mst_topology_state *mst_state)31493149+{31503150+ struct drm_dp_vcpi_allocation *vcpi;31513151+ int avail_slots = 63, payload_count = 0;31523152+31533153+ list_for_each_entry(vcpi, &mst_state->vcpis, next) {31543154+ /* Releasing VCPI is always OK-even if the port is gone */31553155+ if (!vcpi->vcpi) {31563156+ DRM_DEBUG_ATOMIC("[MST PORT:%p] releases all VCPI 
slots\n",31573157+ vcpi->port);31583158+ continue;31593159+ }31603160+31613161+ DRM_DEBUG_ATOMIC("[MST PORT:%p] requires %d vcpi slots\n",31623162+ vcpi->port, vcpi->vcpi);31633163+31643164+ avail_slots -= vcpi->vcpi;31653165+ if (avail_slots < 0) {31663166+ DRM_DEBUG_ATOMIC("[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n",31673167+ vcpi->port, mst_state,31683168+ avail_slots + vcpi->vcpi);31693169+ return -ENOSPC;31703170+ }31713171+31723172+ if (++payload_count > mgr->max_payloads) {31733173+ DRM_DEBUG_ATOMIC("[MST MGR:%p] state %p has too many payloads (max=%d)\n",31743174+ mgr, mst_state, mgr->max_payloads);31753175+ return -EINVAL;31763176+ }31773177+ }31783178+ DRM_DEBUG_ATOMIC("[MST MGR:%p] mst state %p VCPI avail=%d used=%d\n",31793179+ mgr, mst_state, avail_slots,31803180+ 63 - avail_slots);31813181+31823182+ return 0;31833183+}31843184+31853185+/**31863186+ * drm_dp_mst_atomic_check - Check that the new state of an MST topology in an31873187+ * atomic update is valid31883188+ * @state: Pointer to the new &struct drm_dp_mst_topology_state31893189+ *31903190+ * Checks the given topology state for an atomic update to ensure that it's31913191+ * valid. 
This includes checking whether there's enough bandwidth to support31923192+ * the new VCPI allocations in the atomic update.31933193+ *31943194+ * Any atomic drivers supporting DP MST must make sure to call this after31953195+ * checking the rest of their state in their31963196+ * &drm_mode_config_funcs.atomic_check() callback.31973197+ *31983198+ * See also:31993199+ * drm_dp_atomic_find_vcpi_slots()32003200+ * drm_dp_atomic_release_vcpi_slots()32013201+ *32023202+ * Returns:32033203+ *32043204+ * 0 if the new state is valid, negative error code otherwise.32053205+ */32063206+int drm_dp_mst_atomic_check(struct drm_atomic_state *state)32073207+{32083208+ struct drm_dp_mst_topology_mgr *mgr;32093209+ struct drm_dp_mst_topology_state *mst_state;32103210+ int i, ret = 0;32113211+32123212+ for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {32133213+ ret = drm_dp_mst_atomic_check_topology_state(mgr, mst_state);32143214+ if (ret)32153215+ break;32163216+ }32173217+32183218+ return ret;32193219+}32203220+EXPORT_SYMBOL(drm_dp_mst_atomic_check);32213221+32223222+const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs = {36323223 .atomic_duplicate_state = drm_dp_mst_duplicate_state,36333224 .atomic_destroy_state = drm_dp_mst_destroy_state,36343225};32263226+EXPORT_SYMBOL(drm_dp_mst_topology_state_funcs);3635322736363228/**36373229 * drm_atomic_get_mst_topology_state: get MST topology state···37943216 return -ENOMEM;3795321737963218 mst_state->mgr = mgr;32193219+ INIT_LIST_HEAD(&mst_state->vcpis);3797322037983798- /* max. 
time slots - one slot for MTP header */37993799- mst_state->avail_slots = 63;38003800-38013801- drm_atomic_private_obj_init(&mgr->base,32213221+ drm_atomic_private_obj_init(dev, &mgr->base,38023222 &mst_state->base,38033803- &mst_state_funcs);32233223+ &drm_dp_mst_topology_state_funcs);3804322438053225 return 0;38063226}···38103234 */38113235void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)38123236{32373237+ drm_dp_mst_topology_mgr_set_mst(mgr, false);38133238 flush_work(&mgr->work);38143239 flush_work(&mgr->destroy_connector_work);38153240 mutex_lock(&mgr->payload_lock);···38263249}38273250EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);3828325132523252+static bool remote_i2c_read_ok(const struct i2c_msg msgs[], int num)32533253+{32543254+ int i;32553255+32563256+ if (num - 1 > DP_REMOTE_I2C_READ_MAX_TRANSACTIONS)32573257+ return false;32583258+32593259+ for (i = 0; i < num - 1; i++) {32603260+ if (msgs[i].flags & I2C_M_RD ||32613261+ msgs[i].len > 0xff)32623262+ return false;32633263+ }32643264+32653265+ return msgs[num - 1].flags & I2C_M_RD &&32663266+ msgs[num - 1].len <= 0xff;32673267+}32683268+38293269/* I2C device */38303270static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,38313271 int num)···38523258 struct drm_dp_mst_branch *mstb;38533259 struct drm_dp_mst_topology_mgr *mgr = port->mgr;38543260 unsigned int i;38553855- bool reading = false;38563261 struct drm_dp_sideband_msg_req_body msg;38573262 struct drm_dp_sideband_msg_tx *txmsg = NULL;38583263 int ret;3859326438603860- mstb = drm_dp_get_validated_mstb_ref(mgr, port->parent);32653265+ mstb = drm_dp_mst_topology_get_mstb_validated(mgr, port->parent);38613266 if (!mstb)38623267 return -EREMOTEIO;3863326838643864- /* construct i2c msg */38653865- /* see if last msg is a read */38663866- if (msgs[num - 1].flags & I2C_M_RD)38673867- reading = true;38683868-38693869- if (!reading || (num - 1 > DP_REMOTE_I2C_READ_MAX_TRANSACTIONS)) {32693269+ if 
(!remote_i2c_read_ok(msgs, num)) {38703270 DRM_DEBUG_KMS("Unsupported I2C transaction for MST device\n");38713271 ret = -EIO;38723272 goto out;···38743286 msg.u.i2c_read.transactions[i].i2c_dev_id = msgs[i].addr;38753287 msg.u.i2c_read.transactions[i].num_bytes = msgs[i].len;38763288 msg.u.i2c_read.transactions[i].bytes = msgs[i].buf;32893289+ msg.u.i2c_read.transactions[i].no_stop_bit = !(msgs[i].flags & I2C_M_STOP);38773290 }38783291 msg.u.i2c_read.read_i2c_device_id = msgs[num - 1].addr;38793292 msg.u.i2c_read.num_bytes_read = msgs[num - 1].len;···39063317 }39073318out:39083319 kfree(txmsg);39093909- drm_dp_put_mst_branch_device(mstb);33203320+ drm_dp_mst_topology_put_mstb(mstb);39103321 return ret;39113322}39123323
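The bandwidth check added in drm_dp_mst_atomic_check_topology_state() boils down to simple slot accounting: each port's PBN requirement is converted to timeslots with DIV_ROUND_UP(pbn, pbn_div), and the sum across all ports in the state must fit in the 63 slots left after the MTP header. The arithmetic can be sketched standalone (this is illustrative userspace code, not the kernel helpers; the pbn/pbn_div values are made up, and -28 stands in for -ENOSPC):

```c
#include <assert.h>

#define MTP_AVAIL_SLOTS 63	/* 64 timeslots minus one for the MTP header */
#define ERR_NOSPC (-28)		/* stand-in for -ENOSPC */

/* Round-up division, as the kernel's DIV_ROUND_UP() does. */
static int div_round_up(int n, int d)
{
	return (n + d - 1) / d;
}

/* Slots needed for a given PBN load, as in drm_dp_atomic_find_vcpi_slots(). */
static int pbn_to_slots(int pbn, int pbn_div)
{
	return div_round_up(pbn, pbn_div);
}

/*
 * Sum the per-port allocations and fail once the available slots are
 * exhausted, mirroring the check's -ENOSPC path.
 */
static int check_vcpi_allocations(const int *vcpi_slots, int num_ports)
{
	int avail = MTP_AVAIL_SLOTS;
	int i;

	for (i = 0; i < num_ports; i++) {
		avail -= vcpi_slots[i];
		if (avail < 0)
			return ERR_NOSPC;
	}
	return 0;
}
```

With a hypothetical pbn_div of 60, a 1000 PBN stream needs 17 slots; two such streams fit, but allocations summing past 63 slots are rejected.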
drivers/gpu/drm/drm_drv.c (+11, -13)
···
 #include "drm_crtc_internal.h"
 #include "drm_legacy.h"
 #include "drm_internal.h"
-#include "drm_crtc_internal.h"

/*
 * drm_debug: Enable debug output.
···
 * DOC: driver instance overview
 *
 * A device instance for a drm driver is represented by &struct drm_device. This
- * is allocated with drm_dev_alloc(), usually from bus-specific ->probe()
+ * is initialized with drm_dev_init(), usually from bus-specific ->probe()
 * callbacks implemented by the driver. The driver then needs to initialize all
 * the various subsystems for the drm device like memory management, vblank
 * handling, modesetting support and intial output configuration plus obviously
- * initialize all the corresponding hardware bits. An important part of this is
- * also calling drm_dev_set_unique() to set the userspace-visible unique name of
- * this device instance. Finally when everything is up and running and ready for
- * userspace the device instance can be published using drm_dev_register().
+ * initialize all the corresponding hardware bits. Finally when everything is up
+ * and running and ready for userspace the device instance can be published
+ * using drm_dev_register().
 *
 * There is also deprecated support for initalizing device instances using
 * bus-specific helpers and the &drm_driver.load callback. But due to
···
 * Note that the lifetime rules for &drm_device instance has still a lot of
 * historical baggage. Hence use the reference counting provided by
 * drm_dev_get() and drm_dev_put() only carefully.
- *
- * It is recommended that drivers embed &struct drm_device into their own device
- * structure, which is supported through drm_dev_init().
 */

/**
···
 *
 * The initial ref-count of the object is 1. Use drm_dev_get() and
 * drm_dev_put() to take and drop further ref-counts.
+ *
+ * It is recommended that drivers embed &struct drm_device into their own device
+ * structure.
 *
 * Drivers that do not want to allocate their own device struct
 * embedding &struct drm_device can call drm_dev_alloc() instead. For drivers
···
 * @flags: Flags passed to the driver's .load() function
 *
 * Register the DRM device @dev with the system, advertise device to user-space
- * and start normal device operation. @dev must be allocated via drm_dev_alloc()
+ * and start normal device operation. @dev must be initialized via drm_dev_init()
 * previously.
 *
 * Never call this twice on any device!
···
 * @dev: device of which to set the unique name
 * @name: unique name
 *
- * Sets the unique name of a DRM device using the specified string. Drivers
- * can use this at driver probe time if the unique name of the devices they
- * drive is static.
+ * Sets the unique name of a DRM device using the specified string. This is
+ * already done by drm_dev_init(), drivers should only override the default
+ * unique name for backwards compatibility reasons.
 *
 * Return: 0 on success or a negative error code on failure.
 */
drivers/gpu/drm/drm_edid.c (+51, -50)
···
 	return oui == HDMI_FORUM_IEEE_OUI;
}

+static bool cea_db_is_vcdb(const u8 *db)
+{
+	if (cea_db_tag(db) != USE_EXTENDED_TAG)
+		return false;
+
+	if (cea_db_payload_len(db) != 2)
+		return false;
+
+	if (cea_db_extended_tag(db) != EXT_VIDEO_CAPABILITY_BLOCK)
+		return false;
+
+	return true;
+}
+
static bool cea_db_is_y420cmdb(const u8 *db)
{
 	if (cea_db_tag(db) != USE_EXTENDED_TAG)
···
}
EXPORT_SYMBOL(drm_detect_monitor_audio);

-/**
- * drm_rgb_quant_range_selectable - is RGB quantization range selectable?
- * @edid: EDID block to scan
- *
- * Check whether the monitor reports the RGB quantization range selection
- * as supported. The AVI infoframe can then be used to inform the monitor
- * which quantization range (full or limited) is used.
- *
- * Return: True if the RGB quantization range is selectable, false otherwise.
- */
-bool drm_rgb_quant_range_selectable(struct edid *edid)
-{
-	u8 *edid_ext;
-	int i, start, end;
-
-	edid_ext = drm_find_cea_extension(edid);
-	if (!edid_ext)
-		return false;
-
-	if (cea_db_offsets(edid_ext, &start, &end))
-		return false;
-
-	for_each_cea_db(edid_ext, i, start, end) {
-		if (cea_db_tag(&edid_ext[i]) == USE_EXTENDED_TAG &&
-		    cea_db_payload_len(&edid_ext[i]) == 2 &&
-		    cea_db_extended_tag(&edid_ext[i]) ==
-			EXT_VIDEO_CAPABILITY_BLOCK) {
-			DRM_DEBUG_KMS("CEA VCDB 0x%02x\n", edid_ext[i + 2]);
-			return edid_ext[i + 2] & EDID_CEA_VCDB_QS;
-		}
-	}
-
-	return false;
-}
-EXPORT_SYMBOL(drm_rgb_quant_range_selectable);

/**
 * drm_default_rgb_quant_range - default RGB quantization range
···
 		HDMI_QUANTIZATION_RANGE_FULL;
}
EXPORT_SYMBOL(drm_default_rgb_quant_range);
+
+static void drm_parse_vcdb(struct drm_connector *connector, const u8 *db)
+{
+	struct drm_display_info *info = &connector->display_info;
+
+	DRM_DEBUG_KMS("CEA VCDB 0x%02x\n", db[2]);
+
+	if (db[2] & EDID_CEA_VCDB_QS)
+		info->rgb_quant_range_selectable = true;
+}

static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
					       const u8 *db)
···
 		drm_parse_hdmi_forum_vsdb(connector, db);
 	if (cea_db_is_y420cmdb(db))
 		drm_parse_y420cmdb_bitmap(connector, db);
+	if (cea_db_is_vcdb(db))
+		drm_parse_vcdb(connector, db);
 	}
}
···
 	info->max_tmds_clock = 0;
 	info->dvi_dual = false;
 	info->has_hdmi_infoframe = false;
+	info->rgb_quant_range_selectable = false;
 	memset(&info->hdmi, 0, sizeof(info->hdmi));

 	info->non_desktop = 0;
···
}
EXPORT_SYMBOL(drm_set_preferred_mode);

+static bool is_hdmi2_sink(struct drm_connector *connector)
+{
+	/*
+	 * FIXME: sil-sii8620 doesn't have a connector around when
+	 * we need one, so we have to be prepared for a NULL connector.
+	 */
+	if (!connector)
+		return true;
+
+	return connector->display_info.hdmi.scdc.supported ||
+	       connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB420;
+}
+
/**
 * drm_hdmi_avi_infoframe_from_display_mode() - fill an HDMI AVI infoframe with
 * data from a DRM display mode
 * @frame: HDMI AVI infoframe
+ * @connector: the connector
 * @mode: DRM display mode
- * @is_hdmi2_sink: Sink is HDMI 2.0 compliant
 *
 * Return: 0 on success or a negative error code on failure.
 */
int
drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
-					 const struct drm_display_mode *mode,
-					 bool is_hdmi2_sink)
+					 struct drm_connector *connector,
+					 const struct drm_display_mode *mode)
{
 	enum hdmi_picture_aspect picture_aspect;
 	int err;
···
 	 * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we
 	 * have to make sure we dont break HDMI 1.4 sinks.
 	 */
-	if (!is_hdmi2_sink && frame->video_code > 64)
+	if (!is_hdmi2_sink(connector) && frame->video_code > 64)
 		frame->video_code = 0;

 	/*
···
 * drm_hdmi_avi_infoframe_quant_range() - fill the HDMI AVI infoframe
 * quantization range information
 * @frame: HDMI AVI infoframe
+ * @connector: the connector
 * @mode: DRM display mode
 * @rgb_quant_range: RGB quantization range (Q)
- * @rgb_quant_range_selectable: Sink support selectable RGB quantization range (QS)
- * @is_hdmi2_sink: HDMI 2.0 sink, which has different default recommendations
- *
- * Note that @is_hdmi2_sink can be derived by looking at the
- * &drm_scdc.supported flag stored in &drm_hdmi_info.scdc,
- * &drm_display_info.hdmi, which can be found in &drm_connector.display_info.
 */
void
drm_hdmi_avi_infoframe_quant_range(struct hdmi_avi_infoframe *frame,
+				   struct drm_connector *connector,
				   const struct drm_display_mode *mode,
-				   enum hdmi_quantization_range rgb_quant_range,
-				   bool rgb_quant_range_selectable,
-				   bool is_hdmi2_sink)
+				   enum hdmi_quantization_range rgb_quant_range)
{
+	const struct drm_display_info *info = &connector->display_info;
+
 	/*
 	 * CEA-861:
 	 * "A Source shall not send a non-zero Q value that does not correspond
···
 	 * HDMI 2.0 recommends sending non-zero Q when it does match the
 	 * default RGB quantization range for the mode, even when QS=0.
 	 */
-	if (rgb_quant_range_selectable ||
+	if (info->rgb_quant_range_selectable ||
 	    rgb_quant_range == drm_default_rgb_quant_range(mode))
 		frame->quantization_range = rgb_quant_range;
 	else
···
 	 * we limit non-zero YQ to HDMI 2.0 sinks only as HDMI 2.0 is based
 	 * on on CEA-861-F.
 	 */
-	if (!is_hdmi2_sink ||
+	if (!is_hdmi2_sink(connector) ||
 	    rgb_quant_range == HDMI_QUANTIZATION_RANGE_LIMITED)
 		frame->ycc_quantization_range =
 			HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
+113 -42
drivers/gpu/drm/drm_fb_helper.c
···
	int i;
	struct drm_fb_helper_surface_size sizes;
	int gamma_size = 0;
+	int best_depth = 0;

	memset(&sizes, 0, sizeof(struct drm_fb_helper_surface_size));
	sizes.surface_depth = 24;
···
	sizes.fb_width = (u32)-1;
	sizes.fb_height = (u32)-1;

-	/* if driver picks 8 or 16 by default use that for both depth/bpp */
+	/*
+	 * If driver picks 8 or 16 by default use that for both depth/bpp
+	 * to begin with
+	 */
	if (preferred_bpp != sizes.surface_bpp)
		sizes.surface_depth = sizes.surface_bpp = preferred_bpp;
···
			}
			break;
		}
+	}
+
+	/*
+	 * If we run into a situation where, for example, the primary plane
+	 * supports RGBA5551 (16 bpp, depth 15) but not RGB565 (16 bpp, depth
+	 * 16) we need to scale down the depth of the sizes we request.
+	 */
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
+		struct drm_crtc *crtc = mode_set->crtc;
+		struct drm_plane *plane = crtc->primary;
+		int j;
+
+		DRM_DEBUG("test CRTC %d primary plane\n", i);
+
+		for (j = 0; j < plane->format_count; j++) {
+			const struct drm_format_info *fmt;
+
+			fmt = drm_format_info(plane->format_types[j]);
+
+			/*
+			 * Do not consider YUV or other complicated formats
+			 * for framebuffers. This means only legacy formats
+			 * are supported (fmt->depth is a legacy field) but
+			 * the framebuffer emulation can only deal with such
+			 * formats, specifically RGB/BGA formats.
+			 */
+			if (fmt->depth == 0)
+				continue;
+
+			/* We found a perfect fit, great */
+			if (fmt->depth == sizes.surface_depth) {
+				best_depth = fmt->depth;
+				break;
+			}
+
+			/* Skip depths above what we're looking for */
+			if (fmt->depth > sizes.surface_depth)
+				continue;
+
+			/* Best depth found so far */
+			if (fmt->depth > best_depth)
+				best_depth = fmt->depth;
+		}
+	}
+	if (sizes.surface_depth != best_depth) {
+		DRM_INFO("requested bpp %d, scaled depth down to %d",
+			 sizes.surface_bpp, best_depth);
+		sizes.surface_depth = best_depth;
	}

	crtc_count = 0;
···
	return 0;

err_drm_fb_helper_fini:
-	drm_fb_helper_fini(fb_helper);
+	drm_fb_helper_fbdev_teardown(dev);

	return ret;
}
···
	return 0;
}

-/*
- * fb_ops.fb_destroy is called by the last put_fb_info() call at the end of
- * unregister_framebuffer() or fb_release().
- */
-static void drm_fbdev_fb_destroy(struct fb_info *info)
+static void drm_fbdev_cleanup(struct drm_fb_helper *fb_helper)
{
-	struct drm_fb_helper *fb_helper = info->par;
	struct fb_info *fbi = fb_helper->fbdev;
	struct fb_ops *fbops = NULL;
	void *shadow = NULL;

-	if (fbi->fbdefio) {
+	if (!fb_helper->dev)
+		return;
+
+	if (fbi && fbi->fbdefio) {
		fb_deferred_io_cleanup(fbi);
		shadow = fbi->screen_buffer;
		fbops = fbi->fbops;
···
	}

	drm_client_framebuffer_delete(fb_helper->buffer);
+}
+
+static void drm_fbdev_release(struct drm_fb_helper *fb_helper)
+{
+	drm_fbdev_cleanup(fb_helper);
+
	/*
	 * FIXME:
	 * Remove conditional when all CMA drivers have been moved over to using
···
		drm_client_release(&fb_helper->client);
		kfree(fb_helper);
	}
+}
+
+/*
+ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end of
+ * unregister_framebuffer() or fb_release().
+ */
+static void drm_fbdev_fb_destroy(struct fb_info *info)
+{
+	drm_fbdev_release(info->par);
}

static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
···
	struct drm_framebuffer *fb;
	struct fb_info *fbi;
	u32 format;
-	int ret;

	DRM_DEBUG_KMS("surface width(%d), height(%d) and bpp(%d)\n",
		      sizes->surface_width, sizes->surface_height,
···
	fb = buffer->fb;

	fbi = drm_fb_helper_alloc_fbi(fb_helper);
-	if (IS_ERR(fbi)) {
-		ret = PTR_ERR(fbi);
-		goto err_free_buffer;
-	}
+	if (IS_ERR(fbi))
+		return PTR_ERR(fbi);

	fbi->par = fb_helper;
	fbi->fbops = &drm_fbdev_fb_ops;
···
	if (!fbops || !shadow) {
		kfree(fbops);
		vfree(shadow);
-		ret = -ENOMEM;
-		goto err_fb_info_destroy;
+		return -ENOMEM;
	}

	*fbops = *fbi->fbops;
···
	}

	return 0;
-
-err_fb_info_destroy:
-	drm_fb_helper_fini(fb_helper);
-err_free_buffer:
-	drm_client_framebuffer_delete(buffer);
-
-	return ret;
}
EXPORT_SYMBOL(drm_fb_helper_generic_probe);
···
{
	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);

-	if (fb_helper->fbdev) {
-		drm_fb_helper_unregister_fbi(fb_helper);
+	if (fb_helper->fbdev)
		/* drm_fbdev_fb_destroy() takes care of cleanup */
-		return;
-	}
-
-	/* Did drm_fb_helper_fbdev_setup() run? */
-	if (fb_helper->dev)
-		drm_fb_helper_fini(fb_helper);
-
-	drm_client_release(client);
-	kfree(fb_helper);
+		drm_fb_helper_unregister_fbi(fb_helper);
+	else
+		drm_fbdev_release(fb_helper);
}

static int drm_fbdev_client_restore(struct drm_client_dev *client)
···
	struct drm_device *dev = client->dev;
	int ret;

-	/* If drm_fb_helper_fbdev_setup() failed, we only try once */
+	/* Setup is not retried if it has failed */
	if (!fb_helper->dev && fb_helper->funcs)
		return 0;
···
		return 0;
	}

-	ret = drm_fb_helper_fbdev_setup(dev, fb_helper, &drm_fb_helper_generic_funcs,
-					fb_helper->preferred_bpp, 0);
-	if (ret) {
-		fb_helper->dev = NULL;
-		fb_helper->fbdev = NULL;
-		return ret;
-	}
+	drm_fb_helper_prepare(dev, fb_helper, &drm_fb_helper_generic_funcs);
+
+	ret = drm_fb_helper_init(dev, fb_helper, dev->mode_config.num_connector);
+	if (ret)
+		goto err;
+
+	ret = drm_fb_helper_single_add_all_connectors(fb_helper);
+	if (ret)
+		goto err_cleanup;
+
+	if (!drm_drv_uses_atomic_modeset(dev))
+		drm_helper_disable_unused_functions(dev);
+
+	ret = drm_fb_helper_initial_config(fb_helper, fb_helper->preferred_bpp);
+	if (ret)
+		goto err_cleanup;

	return 0;
+
+err_cleanup:
+	drm_fbdev_cleanup(fb_helper);
+err:
+	fb_helper->dev = NULL;
+	fb_helper->fbdev = NULL;
+
+	DRM_DEV_ERROR(dev->dev, "fbdev: Failed to setup generic emulation (ret=%d)\n", ret);
+
+	return ret;
}

static const struct drm_client_funcs drm_fbdev_client_funcs = {
···
	drm_client_add(&fb_helper->client);

+	if (!preferred_bpp)
+		preferred_bpp = dev->mode_config.preferred_depth;
+	if (!preferred_bpp)
+		preferred_bpp = 32;
	fb_helper->preferred_bpp = preferred_bpp;

	ret = drm_fbdev_client_hotplug(&fb_helper->client);
···
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
#include <linux/mem_encrypt.h>
+#include <linux/pagevec.h>
#include <drm/drmP.h>
#include <drm/drm_vma_manager.h>
#include <drm/drm_gem.h>
···
}
EXPORT_SYMBOL(drm_gem_create_mmap_offset);

+/*
+ * Move pages to appropriate lru and release the pagevec, decrementing the
+ * ref count of those pages.
+ */
+static void drm_gem_check_release_pagevec(struct pagevec *pvec)
+{
+	check_move_unevictable_pages(pvec);
+	__pagevec_release(pvec);
+	cond_resched();
+}
+
/**
 * drm_gem_get_pages - helper to allocate backing pages for a GEM object
 * from shmem
···
{
	struct address_space *mapping;
	struct page *p, **pages;
+	struct pagevec pvec;
	int i, npages;

	/* This is the shared memory object that backs the GEM resource */
···
	pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
	if (pages == NULL)
		return ERR_PTR(-ENOMEM);
+
+	mapping_set_unevictable(mapping);

	for (i = 0; i < npages; i++) {
		p = shmem_read_mapping_page(mapping, i);
···
	return pages;

fail:
-	while (i--)
-		put_page(pages[i]);
+	mapping_clear_unevictable(mapping);
+	pagevec_init(&pvec);
+	while (i--) {
+		if (!pagevec_add(&pvec, pages[i]))
+			drm_gem_check_release_pagevec(&pvec);
+	}
+	if (pagevec_count(&pvec))
+		drm_gem_check_release_pagevec(&pvec);

	kvfree(pages);
	return ERR_CAST(p);
···
		bool dirty, bool accessed)
{
	int i, npages;
+	struct address_space *mapping;
+	struct pagevec pvec;
+
+	mapping = file_inode(obj->filp)->i_mapping;
+	mapping_clear_unevictable(mapping);

	/* We already BUG_ON() for non-page-aligned sizes in
	 * drm_gem_object_init(), so we should never hit this unless
···
	npages = obj->size >> PAGE_SHIFT;

+	pagevec_init(&pvec);
	for (i = 0; i < npages; i++) {
		if (dirty)
			set_page_dirty(pages[i]);
···
			mark_page_accessed(pages[i]);

		/* Undo the reference we took when populating the table */
-		put_page(pages[i]);
+		if (!pagevec_add(&pvec, pages[i]))
+			drm_gem_check_release_pagevec(&pvec);
	}
+	if (pagevec_count(&pvec))
+		drm_gem_check_release_pagevec(&pvec);

	kvfree(pages);
}
···
	idr_for_each_entry(leases, entry, object) {
		error = 0;
-		if (!idr_find(&dev->mode_config.crtc_idr, object))
+		if (!idr_find(&dev->mode_config.object_idr, object))
			error = -ENOENT;
		else if (!_drm_lease_held_master(lessor, object))
			error = -EACCES;
···
	/*
	 * We're using an IDR to hold the set of leased
	 * objects, but we don't need to point at the object's
-	 * data structure from the lease as the main crtc_idr
+	 * data structure from the lease as the main object_idr
	 * will be used to actually find that. Instead, all we
	 * really want is a 'leased/not-leased' result, for
	 * which any non-NULL pointer will work fine.
···

	if (lessee->lessor == NULL)
		/* owner can use all objects */
-		object_idr = &lessee->dev->mode_config.crtc_idr;
+		object_idr = &lessee->dev->mode_config.object_idr;
	else
		/* lessee can only use allowed object */
		object_idr = &lessee->leases;
···
	int ret;

	mutex_lock(&dev->mode_config.idr_mutex);
-	ret = idr_alloc(&dev->mode_config.crtc_idr, register_obj ? obj : NULL,
+	ret = idr_alloc(&dev->mode_config.object_idr, register_obj ? obj : NULL,
			1, 0, GFP_KERNEL);
	if (ret >= 0) {
		/*
···
			  struct drm_mode_object *obj)
{
	mutex_lock(&dev->mode_config.idr_mutex);
-	idr_replace(&dev->mode_config.crtc_idr, obj, obj->id);
+	idr_replace(&dev->mode_config.object_idr, obj, obj->id);
	mutex_unlock(&dev->mode_config.idr_mutex);
}
···
{
	mutex_lock(&dev->mode_config.idr_mutex);
	if (object->id) {
-		idr_remove(&dev->mode_config.crtc_idr, object->id);
+		idr_remove(&dev->mode_config.object_idr, object->id);
		object->id = 0;
	}
	mutex_unlock(&dev->mode_config.idr_mutex);
···
	struct drm_mode_object *obj = NULL;

	mutex_lock(&dev->mode_config.idr_mutex);
-	obj = idr_find(&dev->mode_config.crtc_idr, id);
+	obj = idr_find(&dev->mode_config.object_idr, id);
	if (obj && type != DRM_MODE_OBJECT_ANY && obj->type != type)
		obj = NULL;
	if (obj && obj->id != id)
···
	struct drm_modeset_acquire_ctx ctx;
	int ret;

-	drm_modeset_acquire_init(&ctx, 0);
-
	state = drm_atomic_state_alloc(dev);
	if (!state)
		return -ENOMEM;
+
+	drm_modeset_acquire_init(&ctx, 0);
	state->acquire_ctx = &ctx;
+
retry:
	if (prop == state->dev->mode_config.dpms_property) {
		if (obj->type != DRM_MODE_OBJECT_CONNECTOR) {
-9
drivers/gpu/drm/drm_modes.c
···
	if (!nmode)
		return NULL;

-	if (drm_mode_object_add(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
-		kfree(nmode);
-		return NULL;
-	}
-
	return nmode;
}
EXPORT_SYMBOL(drm_mode_create);
···
{
	if (!mode)
		return;
-
-	drm_mode_object_unregister(dev, &mode->base);

	kfree(mode);
}
···
 */
void drm_mode_copy(struct drm_display_mode *dst, const struct drm_display_mode *src)
{
-	int id = dst->base.id;
	struct list_head head = dst->head;

	*dst = *src;
-	dst->base.id = id;
	dst->head = head;
}
EXPORT_SYMBOL(drm_mode_copy);
+8
drivers/gpu/drm/drm_modeset_lock.c
···
 */

#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <drm/drm_modeset_lock.h>

···
int drm_modeset_lock_all_ctx(struct drm_device *dev,
			     struct drm_modeset_acquire_ctx *ctx)
{
+	struct drm_private_obj *privobj;
	struct drm_crtc *crtc;
	struct drm_plane *plane;
	int ret;
···

	drm_for_each_plane(plane, dev) {
		ret = drm_modeset_lock(&plane->mutex, ctx);
+		if (ret)
+			return ret;
+	}
+
+	drm_for_each_privobj(privobj, dev) {
+		ret = drm_modeset_lock(&privobj->lock, ctx);
		if (ret)
			return ret;
	}
+3 -1
drivers/gpu/drm/drm_of.c
···
}
EXPORT_SYMBOL_GPL(drm_of_encoder_active_endpoint);

-/*
+/**
 * drm_of_find_panel_or_bridge - return connected panel or bridge device
 * @np: device tree node containing encoder output ports
+ * @port: port in the device tree node
+ * @endpoint: endpoint in the device tree node
 * @panel: pointer to hold returned drm_panel
 * @bridge: pointer to hold returned drm_bridge
 *
+3
drivers/gpu/drm/drm_panel.c
···
 * The DRM panel helpers allow drivers to register panel objects with a
 * central registry and provide functions to retrieve those panels in display
 * drivers.
+ *
+ * For easy integration into drivers using the &drm_bridge infrastructure please
+ * take look at drm_panel_bridge_add() and devm_drm_panel_bridge_add().
 */

/**
···
#include "drm_internal.h"
#include <drm/drm_syncobj.h>

+struct syncobj_wait_entry {
+	struct list_head node;
+	struct task_struct *task;
+	struct dma_fence *fence;
+	struct dma_fence_cb fence_cb;
+};
+
+static void syncobj_wait_syncobj_func(struct drm_syncobj *syncobj,
+				      struct syncobj_wait_entry *wait);
+
/**
 * drm_syncobj_find - lookup and reference a sync object.
 * @file_private: drm file private pointer
···
}
EXPORT_SYMBOL(drm_syncobj_find);

-static void drm_syncobj_add_callback_locked(struct drm_syncobj *syncobj,
-					    struct drm_syncobj_cb *cb,
-					    drm_syncobj_func_t func)
+static void drm_syncobj_fence_add_wait(struct drm_syncobj *syncobj,
+				       struct syncobj_wait_entry *wait)
{
-	cb->func = func;
-	list_add_tail(&cb->node, &syncobj->cb_list);
-}
-
-static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
-						 struct dma_fence **fence,
-						 struct drm_syncobj_cb *cb,
-						 drm_syncobj_func_t func)
-{
-	int ret;
-
-	*fence = drm_syncobj_fence_get(syncobj);
-	if (*fence)
-		return 1;
+	if (wait->fence)
+		return;

	spin_lock(&syncobj->lock);
	/* We've already tried once to get a fence and failed. Now that we
	 * have the lock, try one more time just to be sure we don't add a
	 * callback when a fence has already been set.
	 */
-	if (syncobj->fence) {
-		*fence = dma_fence_get(rcu_dereference_protected(syncobj->fence,
-								 lockdep_is_held(&syncobj->lock)));
-		ret = 1;
-	} else {
-		*fence = NULL;
-		drm_syncobj_add_callback_locked(syncobj, cb, func);
-		ret = 0;
-	}
-	spin_unlock(&syncobj->lock);
-
-	return ret;
-}
-
-void drm_syncobj_add_callback(struct drm_syncobj *syncobj,
-			      struct drm_syncobj_cb *cb,
-			      drm_syncobj_func_t func)
-{
-	spin_lock(&syncobj->lock);
-	drm_syncobj_add_callback_locked(syncobj, cb, func);
+	if (syncobj->fence)
+		wait->fence = dma_fence_get(
+			rcu_dereference_protected(syncobj->fence, 1));
+	else
+		list_add_tail(&wait->node, &syncobj->cb_list);
	spin_unlock(&syncobj->lock);
}

-void drm_syncobj_remove_callback(struct drm_syncobj *syncobj,
-				 struct drm_syncobj_cb *cb)
+static void drm_syncobj_remove_wait(struct drm_syncobj *syncobj,
+				    struct syncobj_wait_entry *wait)
{
+	if (!wait->node.next)
+		return;
+
	spin_lock(&syncobj->lock);
-	list_del_init(&cb->node);
+	list_del_init(&wait->node);
	spin_unlock(&syncobj->lock);
}
···
		       struct dma_fence *fence)
{
	struct dma_fence *old_fence;
-	struct drm_syncobj_cb *cur, *tmp;
+	struct syncobj_wait_entry *cur, *tmp;

	if (fence)
		dma_fence_get(fence);
···
	if (fence != old_fence) {
		list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node) {
			list_del_init(&cur->node);
-			cur->func(syncobj, cur);
+			syncobj_wait_syncobj_func(syncobj, cur);
		}
	}
···
			     &args->handle);
}

-struct syncobj_wait_entry {
-	struct task_struct *task;
-	struct dma_fence *fence;
-	struct dma_fence_cb fence_cb;
-	struct drm_syncobj_cb syncobj_cb;
-};
-
static void syncobj_wait_fence_func(struct dma_fence *fence,
				    struct dma_fence_cb *cb)
{
···
}

static void syncobj_wait_syncobj_func(struct drm_syncobj *syncobj,
-				      struct drm_syncobj_cb *cb)
+				      struct syncobj_wait_entry *wait)
{
-	struct syncobj_wait_entry *wait =
-		container_of(cb, struct syncobj_wait_entry, syncobj_cb);
-
	/* This happens inside the syncobj lock */
	wait->fence = dma_fence_get(rcu_dereference_protected(syncobj->fence,
							      lockdep_is_held(&syncobj->lock)));
···
	 */

	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
-		for (i = 0; i < count; ++i) {
-			drm_syncobj_fence_get_or_add_callback(syncobjs[i],
-							      &entries[i].fence,
-							      &entries[i].syncobj_cb,
-							      syncobj_wait_syncobj_func);
-		}
+		for (i = 0; i < count; ++i)
+			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
	}

	do {
···

cleanup_entries:
	for (i = 0; i < count; ++i) {
-		if (entries[i].syncobj_cb.func)
-			drm_syncobj_remove_callback(syncobjs[i],
-						    &entries[i].syncobj_cb);
+		drm_syncobj_remove_wait(syncobjs[i], &entries[i]);
		if (entries[i].fence_cb.func)
			dma_fence_remove_callback(entries[i].fence,
						  &entries[i].fence_cb);
+42 -3
drivers/gpu/drm/drm_vblank.c
···
	write_sequnlock(&vblank->seqlock);
}

+static u32 drm_max_vblank_count(struct drm_device *dev, unsigned int pipe)
+{
+	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
+
+	return vblank->max_vblank_count ?: dev->max_vblank_count;
+}
+
/*
 * "No hw counter" fallback implementation of .get_vblank_counter() hook,
 * if there is no useable hardware frame counter available.
 */
static u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
{
-	WARN_ON_ONCE(dev->max_vblank_count != 0);
+	WARN_ON_ONCE(drm_max_vblank_count(dev, pipe) != 0);
	return 0;
}
···
	ktime_t t_vblank;
	int count = DRM_TIMESTAMP_MAXRETRIES;
	int framedur_ns = vblank->framedur_ns;
+	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);

	/*
	 * Interrupts were disabled prior to this call, so deal with counter
···
		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, in_vblank_irq);
	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);

-	if (dev->max_vblank_count != 0) {
+	if (max_vblank_count) {
		/* trust the hw counter when it's around */
-		diff = (cur_vblank - vblank->last) & dev->max_vblank_count;
+		diff = (cur_vblank - vblank->last) & max_vblank_count;
	} else if (rc && framedur_ns) {
		u64 diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
···
	WARN_ON(!list_empty(&dev->vblank_event_list));
}
EXPORT_SYMBOL(drm_crtc_vblank_reset);
+
+/**
+ * drm_crtc_set_max_vblank_count - configure the hw max vblank counter value
+ * @crtc: CRTC in question
+ * @max_vblank_count: max hardware vblank counter value
+ *
+ * Update the maximum hardware vblank counter value for @crtc
+ * at runtime. Useful for hardware where the operation of the
+ * hardware vblank counter depends on the currently active
+ * display configuration.
+ *
+ * For example, if the hardware vblank counter does not work
+ * when a specific connector is active the maximum can be set
+ * to zero. And when that specific connector isn't active the
+ * maximum can again be set to the appropriate non-zero value.
+ *
+ * If used, must be called before drm_vblank_on().
+ */
+void drm_crtc_set_max_vblank_count(struct drm_crtc *crtc,
+				   u32 max_vblank_count)
+{
+	struct drm_device *dev = crtc->dev;
+	unsigned int pipe = drm_crtc_index(crtc);
+	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
+
+	WARN_ON(dev->max_vblank_count);
+	WARN_ON(!READ_ONCE(vblank->inmodeset));
+
+	vblank->max_vblank_count = max_vblank_count;
+}
+EXPORT_SYMBOL(drm_crtc_set_max_vblank_count);

/**
 * drm_crtc_vblank_on - enable vblank events on a CRTC
···
		return;
	}

-	ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi, m, false);
+	ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi,
+						       &hdata->connector, m);
	if (!ret)
		ret = hdmi_avi_infoframe_pack(&frm.avi, buf, sizeof(buf));
	if (ret > 0) {
···
	if (modes_changed) {
		drm_helper_probe_single_connector_modes(connector, 0, 0);

-		/* Disable the crtc to ensure a full modeset is
-		 * performed whenever it's turned on again. */
		if (crtc)
-			drm_crtc_force_disable(crtc);
+			drm_crtc_helper_set_mode(crtc, &crtc->mode,
+						 crtc->x, crtc->y,
+						 crtc->primary->fb);
	}

	return 0;
···
	if (!fence)
		return;

-	pr_notice("Asynchronous wait on fence %s:%s:%x timed out (hint:%pS)\n",
+	pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%pS)\n",
		  cb->dma->ops->get_driver_name(cb->dma),
		  cb->dma->ops->get_timeline_name(cb->dma),
		  cb->dma->seqno,
+4
drivers/gpu/drm/i915/intel_connector.c
···
	intel_panel_fini(&intel_connector->panel);

	drm_connector_cleanup(connector);
+
+	if (intel_connector->port)
+		drm_dp_mst_put_port_malloc(intel_connector->port);
+
	kfree(connector);
}
···
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

+#include <drm/drm_util.h>

#include "mdp5_kms.h"
#include "mdp5_smp.h"
···
-		/* Disable the crtc to ensure a full modeset is
-		 * performed whenever it's turned on again. */
		if (crtc)
-			drm_crtc_force_disable(crtc);
+			drm_crtc_helper_set_mode(crtc, &crtc->mode,
+						 crtc->x, crtc->y,
+						 crtc->primary->fb);
	}

	return 0;
+74 -37
drivers/gpu/drm/nouveau/dispnv50/disp.c
···561561 u32 max_ac_packet;562562 union hdmi_infoframe avi_frame;563563 union hdmi_infoframe vendor_frame;564564- bool scdc_supported, high_tmds_clock_ratio = false, scrambling = false;564564+ bool high_tmds_clock_ratio = false, scrambling = false;565565 u8 config;566566 int ret;567567 int size;···571571 return;572572573573 hdmi = &nv_connector->base.display_info.hdmi;574574- scdc_supported = hdmi->scdc.supported;575574576576- ret = drm_hdmi_avi_infoframe_from_display_mode(&avi_frame.avi, mode,577577- scdc_supported);575575+ ret = drm_hdmi_avi_infoframe_from_display_mode(&avi_frame.avi,576576+ &nv_connector->base, mode);578577 if (!ret) {579578 /* We have an AVI InfoFrame, populate it to the display */580579 args.pwr.avi_infoframe_length···679680 struct nv50_mstm *mstm = mstc->mstm;680681 int vcpi = mstc->port->vcpi.vcpi, i;681682683683+ WARN_ON(!mutex_is_locked(&mstm->mgr.payload_lock));684684+682685 NV_ATOMIC(drm, "%s: vcpi %d\n", msto->encoder.name, vcpi);683686 for (i = 0; i < mstm->mgr.max_payloads; i++) {684687 struct drm_dp_payload *payload = &mstm->mgr.payloads[i];···705704 struct nv50_mstc *mstc = msto->mstc;706705 struct nv50_mstm *mstm = mstc->mstm;707706707707+ if (!msto->disabled)708708+ return;709709+708710 NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name);709709- if (mstc->port && mstc->port->vcpi.vcpi > 0 && !nv50_msto_payload(msto))710710- drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port);711711- if (msto->disabled) {712712- msto->mstc = NULL;713713- msto->head = NULL;714714- msto->disabled = false;715715- }711711+712712+ drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port);713713+714714+ msto->mstc = NULL;715715+ msto->head = NULL;716716+ msto->disabled = false;716717}717718718719static void···734731 (0x0100 << msto->head->base.index),735732 };736733734734+ mutex_lock(&mstm->mgr.payload_lock);735735+737736 NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name);738738- if (mstc->port && mstc->port->vcpi.vcpi > 0) {737737+ if 
(mstc->port->vcpi.vcpi > 0) {739738 struct drm_dp_payload *payload = nv50_msto_payload(msto);740739 if (payload) {741740 args.vcpi.start_slot = payload->start_slot;···751746 msto->encoder.name, msto->head->base.base.name,752747 args.vcpi.start_slot, args.vcpi.num_slots,753748 args.vcpi.pbn, args.vcpi.aligned_pbn);749749+754750 nvif_mthd(&drm->display->disp.object, 0, &args, sizeof(args));751751+ mutex_unlock(&mstm->mgr.payload_lock);755752}756753757754static int···761754 struct drm_crtc_state *crtc_state,762755 struct drm_connector_state *conn_state)763756{764764- struct nv50_mstc *mstc = nv50_mstc(conn_state->connector);757757+ struct drm_atomic_state *state = crtc_state->state;758758+ struct drm_connector *connector = conn_state->connector;759759+ struct nv50_mstc *mstc = nv50_mstc(connector);765760 struct nv50_mstm *mstm = mstc->mstm;766766- int bpp = conn_state->connector->display_info.bpc * 3;761761+ int bpp = connector->display_info.bpc * 3;767762 int slots;768763769769- mstc->pbn = drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock, bpp);764764+ mstc->pbn = drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock,765765+ bpp);770766771771- slots = drm_dp_find_vcpi_slots(&mstm->mgr, mstc->pbn);772772- if (slots < 0)773773- return slots;767767+ if (drm_atomic_crtc_needs_modeset(crtc_state) &&768768+ !drm_connector_is_unregistered(connector)) {769769+ slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr,770770+ mstc->port, mstc->pbn);771771+ if (slots < 0)772772+ return slots;773773+ }774774775775 return nv50_outp_atomic_check_view(encoder, crtc_state, conn_state,776776 mstc->native);···843829 struct nv50_mstc *mstc = msto->mstc;844830 struct nv50_mstm *mstm = mstc->mstm;845831846846- if (mstc->port)847847- drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port);832832+ drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port);848833849834 mstm->outp->update(mstm->outp, msto->head->base.index, NULL, 0, 0);850835 mstm->modified = true;···940927 return 
ret;941928}942929930930+static int931931+nv50_mstc_atomic_check(struct drm_connector *connector,932932+ struct drm_connector_state *new_conn_state)933933+{934934+ struct drm_atomic_state *state = new_conn_state->state;935935+ struct nv50_mstc *mstc = nv50_mstc(connector);936936+ struct drm_dp_mst_topology_mgr *mgr = &mstc->mstm->mgr;937937+ struct drm_connector_state *old_conn_state =938938+ drm_atomic_get_old_connector_state(state, connector);939939+ struct drm_crtc_state *crtc_state;940940+ struct drm_crtc *new_crtc = new_conn_state->crtc;941941+942942+ if (!old_conn_state->crtc)943943+ return 0;944944+945945+ /* We only want to free VCPI if this state disables the CRTC on this946946+ * connector947947+ */948948+ if (new_crtc) {949949+ crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc);950950+951951+ if (!crtc_state ||952952+ !drm_atomic_crtc_needs_modeset(crtc_state) ||953953+ crtc_state->enable)954954+ return 0;955955+ }956956+957957+ return drm_dp_atomic_release_vcpi_slots(state, mgr, mstc->port);958958+}959959+943960static const struct drm_connector_helper_funcs944961nv50_mstc_help = {945962 .get_modes = nv50_mstc_get_modes,946963 .mode_valid = nv50_mstc_mode_valid,947964 .best_encoder = nv50_mstc_best_encoder,948965 .atomic_best_encoder = nv50_mstc_atomic_best_encoder,966966+ .atomic_check = nv50_mstc_atomic_check,949967};950968951969static enum drm_connector_status···986942 enum drm_connector_status conn_status;987943 int ret;988944989989- if (!mstc->port)945945+ if (drm_connector_is_unregistered(connector))990946 return connector_status_disconnected;991947992948 ret = pm_runtime_get_sync(connector->dev->dev);···1005961nv50_mstc_destroy(struct drm_connector *connector)1006962{1007963 struct nv50_mstc *mstc = nv50_mstc(connector);964964+1008965 drm_connector_cleanup(&mstc->connector);966966+ drm_dp_mst_put_port_malloc(mstc->port);967967+1009968 kfree(mstc);1010969}1011970···10561009 drm_object_attach_property(&mstc->connector.base, 
dev->mode_config.path_property, 0);10571010 drm_object_attach_property(&mstc->connector.base, dev->mode_config.tile_property, 0);10581011 drm_connector_set_path_property(&mstc->connector, path);10121012+ drm_dp_mst_get_port_malloc(port);10591013 return 0;10601014}10611015···11111063}1112106411131065static void11141114-nv50_mstm_hotplug(struct drm_dp_mst_topology_mgr *mgr)11151115-{11161116- struct nv50_mstm *mstm = nv50_mstm(mgr);11171117- drm_kms_helper_hotplug_event(mstm->outp->base.base.dev);11181118-}11191119-11201120-static void11211066nv50_mstm_destroy_connector(struct drm_dp_mst_topology_mgr *mgr,11221067 struct drm_connector *connector)11231068{···11201079 drm_connector_unregister(&mstc->connector);1121108011221081 drm_fb_helper_remove_one_connector(&drm->fbcon->helper, &mstc->connector);11231123-11241124- drm_modeset_lock(&drm->dev->mode_config.connection_mutex, NULL);11251125- mstc->port = NULL;11261126- drm_modeset_unlock(&drm->dev->mode_config.connection_mutex);1127108211281083 drm_connector_put(&mstc->connector);11291084}···11431106 int ret;1144110711451108 ret = nv50_mstc_new(mstm, port, path, &mstc);11461146- if (ret) {11471147- if (mstc)11481148- mstc->connector.funcs->destroy(&mstc->connector);11091109+ if (ret)11491110 return NULL;11501150- }1151111111521112 return &mstc->connector;11531113}···11541120 .add_connector = nv50_mstm_add_connector,11551121 .register_connector = nv50_mstm_register_connector,11561122 .destroy_connector = nv50_mstm_destroy_connector,11571157- .hotplug = nv50_mstm_hotplug,11581123};1159112411601125void···21572124 if (ret)21582125 return ret;21592126 }21272127+21282128+ ret = drm_dp_mst_atomic_check(state);21292129+ if (ret)21302130+ return ret;2160213121612132 return 0;21622133}
···
 	  Say Y here if you want to enable support for the Sitronix
 	  ST7789V controller for 240x320 LCD panels
 
+config DRM_PANEL_TPO_TPG110
+	tristate "TPO TPG 800x400 panel"
+	depends on OF && SPI && GPIOLIB
+	depends on BACKLIGHT_CLASS_DEVICE
+	help
+	  Say Y here if you want to enable support for TPO TPG110
+	  400CH LTPS TFT LCD Single Chip Digital Driver for up to
+	  800x400 LCD panels.
+
 config DRM_PANEL_TRULY_NT35597_WQXGA
 	tristate "Truly WQXGA"
 	depends on OF
···
 
 	rcar_du_group_setup_pins(rgrp);
 
-	if (rcar_du_has(rgrp->dev, RCAR_DU_FEATURE_EXT_CTRL_REGS)) {
+	if (rcdu->info->gen >= 2) {
 		rcar_du_group_setup_defr8(rgrp);
 		rcar_du_group_setup_didsr(rgrp);
 	}
···
 	unsigned int index;
 	int ret;
 
-	if (!rcar_du_has(rcdu, RCAR_DU_FEATURE_EXT_CTRL_REGS))
+	if (rcdu->info->gen < 2)
 		return 0;
 
 	/*
···
 	return 0;
 }
 
+static void rcar_du_group_set_dpad_levels(struct rcar_du_group *rgrp)
+{
+	static const u32 doflr_values[2] = {
+		DOFLR_HSYCFL0 | DOFLR_VSYCFL0 | DOFLR_ODDFL0 |
+		DOFLR_DISPFL0 | DOFLR_CDEFL0 | DOFLR_RGBFL0,
+		DOFLR_HSYCFL1 | DOFLR_VSYCFL1 | DOFLR_ODDFL1 |
+		DOFLR_DISPFL1 | DOFLR_CDEFL1 | DOFLR_RGBFL1,
+	};
+	static const u32 dpad_mask = BIT(RCAR_DU_OUTPUT_DPAD1)
+				   | BIT(RCAR_DU_OUTPUT_DPAD0);
+	struct rcar_du_device *rcdu = rgrp->dev;
+	u32 doflr = DOFLR_CODE;
+	unsigned int i;
+
+	if (rcdu->info->gen < 2)
+		return;
+
+	/*
+	 * The DPAD outputs can't be controlled directly. However, the parallel
+	 * output of the DU channels routed to DPAD can be set to fixed levels
+	 * through the DOFLR group register. Use this to turn the DPAD on or off
+	 * by driving fixed low-level signals at the output of any DU channel
+	 * not routed to a DPAD output. This doesn't affect the DU output
+	 * signals going to other outputs, such as the internal LVDS and HDMI
+	 * encoders.
+	 */
+
+	for (i = 0; i < rgrp->num_crtcs; ++i) {
+		struct rcar_du_crtc_state *rstate;
+		struct rcar_du_crtc *rcrtc;
+
+		rcrtc = &rcdu->crtcs[rgrp->index * 2 + i];
+		rstate = to_rcar_crtc_state(rcrtc->crtc.state);
+
+		if (!(rstate->outputs & dpad_mask))
+			doflr |= doflr_values[i];
+	}
+
+	rcar_du_group_write(rgrp, DOFLR, doflr);
+}
+
 int rcar_du_group_set_routing(struct rcar_du_group *rgrp)
 {
-	struct rcar_du_crtc *crtc0 = &rgrp->dev->crtcs[rgrp->index * 2];
+	struct rcar_du_device *rcdu = rgrp->dev;
 	u32 dorcr = rcar_du_group_read(rgrp, DORCR);
 
 	dorcr &= ~(DORCR_PG2T | DORCR_DK2S | DORCR_PG2D_MASK);
···
 	 * CRTC 1 in all other cases to avoid cloning CRTC 0 to DPAD0 and DPAD1
 	 * by default.
 	 */
-	if (crtc0->outputs & BIT(RCAR_DU_OUTPUT_DPAD1))
+	if (rcdu->dpad1_source == rgrp->index * 2)
 		dorcr |= DORCR_PG2D_DS1;
 	else
 		dorcr |= DORCR_PG2T | DORCR_DK2S | DORCR_PG2D_DS2;
 
 	rcar_du_group_write(rgrp, DORCR, dorcr);
+
+	rcar_du_group_set_dpad_levels(rgrp);
 
 	return rcar_du_set_dpad0_vsp1_routing(rgrp->dev);
 }
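The DOFLR value above is built by a simple mask-accumulating loop: start from the magic code, then OR in the fixed-level bits for every CRTC in the group that does not drive a DPAD output. A minimal userspace sketch of that computation (the `DOFLR_*` and DPAD bit values below are made-up stand-ins for illustration, not the rcar-du register definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in bit values for illustration only -- the real DOFLR_* and
 * RCAR_DU_OUTPUT_* definitions live in the rcar-du driver headers. */
#define DOFLR_CODE	0x7790u
#define DPAD0_BIT	(1u << 0)
#define DPAD1_BIT	(1u << 1)

/* One fixed-level mask per DU channel in the group. */
static const uint32_t doflr_values[2] = { 0x003fu, 0x3f00u };

/*
 * Mirrors the loop in rcar_du_group_set_dpad_levels(): any CRTC in the
 * group whose outputs don't include a DPAD gets its parallel output
 * forced to fixed low levels.
 */
static uint32_t compute_doflr(const uint32_t *crtc_outputs,
			      unsigned int num_crtcs)
{
	const uint32_t dpad_mask = DPAD0_BIT | DPAD1_BIT;
	uint32_t doflr = DOFLR_CODE;
	unsigned int i;

	for (i = 0; i < num_crtcs; ++i) {
		if (!(crtc_outputs[i] & dpad_mask))
			doflr |= doflr_values[i];
	}
	return doflr;
}
```

The register write itself then happens once per group, after the loop has folded in every channel.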
drivers/gpu/drm/rcar-du/rcar_du_kms.c | +22 -1
···
  * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
  */
 
-#include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
···
 static void rcar_du_atomic_commit_tail(struct drm_atomic_state *old_state)
 {
 	struct drm_device *dev = old_state->dev;
+	struct rcar_du_device *rcdu = dev->dev_private;
+	struct drm_crtc_state *crtc_state;
+	struct drm_crtc *crtc;
+	unsigned int i;
+
+	/*
+	 * Store RGB routing to DPAD0 and DPAD1, the hardware will be configured
+	 * when starting the CRTCs.
+	 */
+	rcdu->dpad1_source = -1;
+
+	for_each_new_crtc_in_state(old_state, crtc, crtc_state, i) {
+		struct rcar_du_crtc_state *rcrtc_state =
+			to_rcar_crtc_state(crtc_state);
+		struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+		if (rcrtc_state->outputs & BIT(RCAR_DU_OUTPUT_DPAD0))
+			rcdu->dpad0_source = rcrtc->index;
+
+		if (rcrtc_state->outputs & BIT(RCAR_DU_OUTPUT_DPAD1))
+			rcdu->dpad1_source = rcrtc->index;
+	}
 
 	/* Apply the atomic update. */
 	drm_atomic_helper_commit_modeset_disables(dev, old_state);
···
 	u8 buffer[17];
 	int i, ret;
 
-	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
+	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame,
+						       &hdmi->connector, mode);
 	if (ret < 0) {
 		DRM_ERROR("Failed to get infoframes from mode\n");
 		return ret;
drivers/gpu/drm/tegra/hdmi.c | +2 -1
···
 	u8 buffer[17];
 	ssize_t err;
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame,
+						       &hdmi->output.connector, mode);
 	if (err < 0) {
 		dev_err(hdmi->dev, "failed to setup AVI infoframe: %zd\n", err);
 		return;
···
 	}
 }
 
-/* Invalidates the (read-only) L2 cache. */
+/* Invalidates the (read-only) L2C cache. This was the L2 cache for
+ * uniforms and instructions on V3D 3.2.
+ */
 static void
-v3d_invalidate_l2(struct v3d_dev *v3d, int core)
+v3d_invalidate_l2c(struct v3d_dev *v3d, int core)
 {
+	if (v3d->ver > 32)
+		return;
+
 	V3D_CORE_WRITE(core, V3D_CTL_L2CACTL,
 		       V3D_L2CACTL_L2CCLR |
 		       V3D_L2CACTL_L2CENA);
-}
-
-static void
-v3d_invalidate_l1td(struct v3d_dev *v3d, int core)
-{
-	V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, V3D_L2TCACTL_TMUWCF);
-	if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
-		       V3D_L2TCACTL_L2TFLS), 100)) {
-		DRM_ERROR("Timeout waiting for L1T write combiner flush\n");
-	}
 }
 
 /* Invalidates texture L2 cachelines */
 static void
 v3d_flush_l2t(struct v3d_dev *v3d, int core)
 {
-	v3d_invalidate_l1td(v3d, core);
-
+	/* While there is a busy bit (V3D_L2TCACTL_L2TFLS), we don't
+	 * need to wait for completion before dispatching the job --
+	 * L2T accesses will be stalled until the flush has completed.
+	 */
 	V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL,
 		       V3D_L2TCACTL_L2TFLS |
 		       V3D_SET_FIELD(V3D_L2TCACTL_FLM_FLUSH, V3D_L2TCACTL_FLM));
-	if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
-		       V3D_L2TCACTL_L2TFLS), 100)) {
-		DRM_ERROR("Timeout waiting for L2T flush\n");
-	}
 }
 
 /* Invalidates the slice caches. These are read-only caches. */
···
 		       V3D_SET_FIELD(0xf, V3D_SLCACTL_ICC));
 }
 
-/* Invalidates texture L2 cachelines */
-static void
-v3d_invalidate_l2t(struct v3d_dev *v3d, int core)
-{
-	V3D_CORE_WRITE(core,
-		       V3D_CTL_L2TCACTL,
-		       V3D_L2TCACTL_L2TFLS |
-		       V3D_SET_FIELD(V3D_L2TCACTL_FLM_CLEAR, V3D_L2TCACTL_FLM));
-	if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
-		       V3D_L2TCACTL_L2TFLS), 100)) {
-		DRM_ERROR("Timeout waiting for L2T invalidate\n");
-	}
-}
-
 void
 v3d_invalidate_caches(struct v3d_dev *v3d)
 {
+	/* Invalidate the caches from the outside in. That way if
+	 * another CL's concurrent use of nearby memory were to pull
+	 * an invalidated cacheline back in, we wouldn't leave stale
+	 * data in the inner cache.
+	 */
 	v3d_flush_l3(v3d);
-
-	v3d_invalidate_l2(v3d, 0);
-	v3d_invalidate_slices(v3d, 0);
+	v3d_invalidate_l2c(v3d, 0);
 	v3d_flush_l2t(v3d, 0);
-}
-
-void
-v3d_flush_caches(struct v3d_dev *v3d)
-{
-	v3d_invalidate_l1td(v3d, 0);
-	v3d_invalidate_l2t(v3d, 0);
+	v3d_invalidate_slices(v3d, 0);
 }
 
 static void
drivers/gpu/drm/vc4/vc4_crtc.c | +43
···
 	struct drm_mm_node mm;
 	bool feed_txp;
 	bool txp_armed;
+
+	struct {
+		unsigned int left;
+		unsigned int right;
+		unsigned int top;
+		unsigned int bottom;
+	} margins;
 };
 
 static inline struct vc4_crtc_state *
···
 	return MODE_OK;
 }
 
+void vc4_crtc_get_margins(struct drm_crtc_state *state,
+			  unsigned int *left, unsigned int *right,
+			  unsigned int *top, unsigned int *bottom)
+{
+	struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(state);
+	struct drm_connector_state *conn_state;
+	struct drm_connector *conn;
+	int i;
+
+	*left = vc4_state->margins.left;
+	*right = vc4_state->margins.right;
+	*top = vc4_state->margins.top;
+	*bottom = vc4_state->margins.bottom;
+
+	/* We have to iterate over all new connector states because
+	 * vc4_crtc_get_margins() might be called before
+	 * vc4_crtc_atomic_check() which means margins info in vc4_crtc_state
+	 * might be outdated.
+	 */
+	for_each_new_connector_in_state(state->state, conn, conn_state, i) {
+		if (conn_state->crtc != state->crtc)
+			continue;
+
+		*left = conn_state->tv.margins.left;
+		*right = conn_state->tv.margins.right;
+		*top = conn_state->tv.margins.top;
+		*bottom = conn_state->tv.margins.bottom;
+		break;
+	}
+}
+
 static int vc4_crtc_atomic_check(struct drm_crtc *crtc,
 				 struct drm_crtc_state *state)
 {
···
 		vc4_state->feed_txp = false;
 	}
 
+	vc4_state->margins.left = conn_state->tv.margins.left;
+	vc4_state->margins.right = conn_state->tv.margins.right;
+	vc4_state->margins.top = conn_state->tv.margins.top;
+	vc4_state->margins.bottom = conn_state->tv.margins.bottom;
 	break;
 }
···
 	old_vc4_state = to_vc4_crtc_state(crtc->state);
 	vc4_state->feed_txp = old_vc4_state->feed_txp;
+	vc4_state->margins = old_vc4_state->margins;
 
 	__drm_atomic_helper_crtc_duplicate_state(crtc, &vc4_state->base);
 	return &vc4_state->base;
drivers/gpu/drm/vc4/vc4_drv.h | +4
···
 #include <linux/mm_types.h>
 #include <linux/reservation.h>
 #include <drm/drmP.h>
+#include <drm/drm_util.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_atomic.h>
···
 			     const struct drm_display_mode *mode);
 void vc4_crtc_handle_vblank(struct vc4_crtc *crtc);
 void vc4_crtc_txp_armed(struct drm_crtc_state *state);
+void vc4_crtc_get_margins(struct drm_crtc_state *state,
+			  unsigned int *right, unsigned int *left,
+			  unsigned int *top, unsigned int *bottom);
 
 /* vc4_debugfs.c */
 int vc4_debugfs_init(struct drm_minor *minor);
···
-/*
- * Copyright (C) 2015 Red Hat, Inc.
- * All Rights Reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files (the
- * "Software"), to deal in the Software without restriction, including
- * without limitation the rights to use, copy, modify, merge, publish,
- * distribute, sublicense, and/or sell copies of the Software, and to
- * permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- *
- * The above copyright notice and this permission notice (including the
- * next paragraph) shall be included in all copies or substantial
- * portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE
- * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
- * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-#include <linux/pci.h>
-#include <drm/drm_fb_helper.h>
-
-#include "virtgpu_drv.h"
-
-int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
-{
-	struct drm_device *dev;
-	int ret;
-
-	dev = drm_dev_alloc(driver, &vdev->dev);
-	if (IS_ERR(dev))
-		return PTR_ERR(dev);
-	vdev->priv = dev;
-
-	if (strcmp(vdev->dev.parent->bus->name, "pci") == 0) {
-		struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
-		const char *pname = dev_name(&pdev->dev);
-		bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-		char unique[20];
-
-		DRM_INFO("pci: %s detected at %s\n",
-			 vga ? "virtio-vga" : "virtio-gpu-pci",
-			 pname);
-		dev->pdev = pdev;
-		if (vga)
-			drm_fb_helper_remove_conflicting_pci_framebuffers(pdev,
-									  0,
-									  "virtiodrmfb");
-
-		/*
-		 * Normally the drm_dev_set_unique() call is done by core DRM.
-		 * The following comment covers, why virtio cannot rely on it.
-		 *
-		 * Unlike the other virtual GPU drivers, virtio abstracts the
-		 * underlying bus type by using struct virtio_device.
-		 *
-		 * Hence the dev_is_pci() check, used in core DRM, will fail
-		 * and the unique returned will be the virtio_device "virtio0",
-		 * while a "pci:..." one is required.
-		 *
-		 * A few other ideas were considered:
-		 * - Extend the dev_is_pci() check [in drm_set_busid] to
-		 *   consider virtio.
-		 *   Seems like a bigger hack than what we have already.
-		 *
-		 * - Point drm_device::dev to the parent of the virtio_device
-		 *   Semantic changes:
-		 *   * Using the wrong device for i2c, framebuffer_alloc and
-		 *     prime import.
-		 *   Visual changes:
-		 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-		 *     will print the wrong information.
-		 *
-		 * We could address the latter issues, by introducing
-		 * drm_device::bus_dev, ... which would be used solely for this.
-		 *
-		 * So for the moment keep things as-is, with a bulky comment
-		 * for the next person who feels like removing this
-		 * drm_dev_set_unique() quirk.
-		 */
-		snprintf(unique, sizeof(unique), "pci:%s", pname);
-		ret = drm_dev_set_unique(dev, unique);
-		if (ret)
-			goto err_free;
-
-	}
-
-	ret = drm_dev_register(dev, 0);
-	if (ret)
-		goto err_free;
-
-	return 0;
-
-err_free:
-	drm_dev_put(dev);
-	return ret;
-}
drivers/gpu/drm/virtio/virtgpu_drv.c | +81 -3
···
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
+static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+{
+	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	const char *pname = dev_name(&pdev->dev);
+	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
+	char unique[20];
+
+	DRM_INFO("pci: %s detected at %s\n",
+		 vga ? "virtio-vga" : "virtio-gpu-pci",
+		 pname);
+	dev->pdev = pdev;
+	if (vga)
+		drm_fb_helper_remove_conflicting_pci_framebuffers(pdev,
+								  0,
+								  "virtiodrmfb");
+
+	/*
+	 * Normally the drm_dev_set_unique() call is done by core DRM.
+	 * The following comment covers, why virtio cannot rely on it.
+	 *
+	 * Unlike the other virtual GPU drivers, virtio abstracts the
+	 * underlying bus type by using struct virtio_device.
+	 *
+	 * Hence the dev_is_pci() check, used in core DRM, will fail
+	 * and the unique returned will be the virtio_device "virtio0",
+	 * while a "pci:..." one is required.
+	 *
+	 * A few other ideas were considered:
+	 * - Extend the dev_is_pci() check [in drm_set_busid] to
+	 *   consider virtio.
+	 *   Seems like a bigger hack than what we have already.
+	 *
+	 * - Point drm_device::dev to the parent of the virtio_device
+	 *   Semantic changes:
+	 *   * Using the wrong device for i2c, framebuffer_alloc and
+	 *     prime import.
+	 *   Visual changes:
+	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
+	 *     will print the wrong information.
+	 *
+	 * We could address the latter issues, by introducing
+	 * drm_device::bus_dev, ... which would be used solely for this.
+	 *
+	 * So for the moment keep things as-is, with a bulky comment
+	 * for the next person who feels like removing this
+	 * drm_dev_set_unique() quirk.
+	 */
+	snprintf(unique, sizeof(unique), "pci:%s", pname);
+	return drm_dev_set_unique(dev, unique);
+}
+
 static int virtio_gpu_probe(struct virtio_device *vdev)
 {
+	struct drm_device *dev;
+	int ret;
+
 	if (vgacon_text_force() && virtio_gpu_modeset == -1)
 		return -EINVAL;
 
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	return drm_virtio_init(&driver, vdev);
+	dev = drm_dev_alloc(&driver, &vdev->dev);
+	if (IS_ERR(dev))
+		return PTR_ERR(dev);
+	vdev->priv = dev;
+
+	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
+		ret = virtio_gpu_pci_quirk(dev, vdev);
+		if (ret)
+			goto err_free;
+	}
+
+	ret = virtio_gpu_init(dev);
+	if (ret)
+		goto err_free;
+
+	ret = drm_dev_register(dev, 0);
+	if (ret)
+		goto err_free;
+
+	drm_fbdev_generic_setup(vdev->priv, 32);
+	return 0;
+
+err_free:
+	drm_dev_put(dev);
+	return ret;
 }
 
 static void virtio_gpu_remove(struct virtio_device *vdev)
 {
 	struct drm_device *dev = vdev->priv;
 
+	drm_dev_unregister(dev);
+	virtio_gpu_deinit(dev);
 	drm_put_dev(dev);
 }
···
 static struct drm_driver driver = {
 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME | DRIVER_RENDER | DRIVER_ATOMIC,
-	.load = virtio_gpu_driver_load,
-	.unload = virtio_gpu_driver_unload,
 	.open = virtio_gpu_driver_open,
 	.postclose = virtio_gpu_driver_postclose,
···
 	union hdmi_infoframe frame;
 	int ret;
 
-	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame.avi, mode, false);
+	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame.avi,
+						       &hdmi->connector,
+						       mode);
 	if (ret) {
 		DRM_DEV_ERROR(hdmi->dev, "failed to get avi infoframe: %d\n",
 			      ret);
drivers/staging/vboxvideo/vbox_fb.c | -5
···
 
 	strcpy(info->fix.id, "vboxdrmfb");
 
-	/*
-	 * The last flag forces a mode set on VT switches even if the kernel
-	 * does not think it is needed.
-	 */
-	info->flags = FBINFO_DEFAULT | FBINFO_MISC_ALWAYS_SETPAR;
 	info->fbops = &vboxfb_ops;
 
 	/*
···
 
 struct dw_mipi_dsi_phy_ops {
 	int (*init)(void *priv_data);
-	int (*get_lane_mbps)(void *priv_data, struct drm_display_mode *mode,
+	int (*get_lane_mbps)(void *priv_data,
+			     const struct drm_display_mode *mode,
 			     unsigned long mode_flags, u32 lanes, u32 format,
 			     unsigned int *lane_mbps);
 };
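The `get_lane_mbps` callback now takes the display mode as `const`, since it only reads timing information to derive the per-lane link rate. As a rough, self-contained sketch of the kind of arithmetic such a callback performs (this is an assumed typical formula for illustration, not the dw-mipi-dsi glue code, and `get_lane_mbps_sketch` is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: total video bandwidth (pixel clock times bits per pixel)
 * divided across the configured number of DSI lanes, in Mbps per lane.
 * Real implementations also add blanking/protocol overhead margins.
 */
static int get_lane_mbps_sketch(unsigned long pixelclock_khz,
				uint32_t bpp, uint32_t lanes,
				unsigned int *lane_mbps)
{
	if (!lanes || !bpp)
		return -1;	/* invalid configuration */

	/* kHz * bits-per-pixel / lanes / 1000 -> Mbps per lane */
	*lane_mbps = (unsigned int)(pixelclock_khz * bpp / lanes / 1000);
	return 0;
}
```

Because the callback only performs reads like this, the `const` qualifier on the mode pointer is the natural signature.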
include/drm/drmP.h | +6 -20
···
 struct pci_dev;
 struct pci_controller;
 
-#define DRM_IF_VERSION(maj, min) (maj << 16 | min)
-
-#define DRM_SWITCH_POWER_ON 0
-#define DRM_SWITCH_POWER_OFF 1
-#define DRM_SWITCH_POWER_CHANGING 2
-#define DRM_SWITCH_POWER_DYNAMIC_OFF 3
-
-/* returns true if currently okay to sleep */
-static inline bool drm_can_sleep(void)
-{
-	if (in_atomic() || in_dbg_master() || irqs_disabled())
-		return false;
-	return true;
-}
-
-#if defined(CONFIG_DRM_DEBUG_SELFTEST_MODULE)
-#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x) EXPORT_SYMBOL(x)
-#else
-#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x)
-#endif
+/*
+ * NOTE: drmP.h is obsolete - do NOT add anything to this file
+ *
+ * Do not include drmP.h in new files.
+ * Work is ongoing to remove drmP.h includes from existing files
+ */
 
 #endif
include/drm/drm_atomic.h | +39 -4
···
 	/**
 	 * @abort_completion:
 	 *
-	 * A flag that's set after drm_atomic_helper_setup_commit takes a second
-	 * reference for the completion of $drm_crtc_state.event. It's used by
-	 * the free code to remove the second reference if commit fails.
+	 * A flag that's set after drm_atomic_helper_setup_commit() takes a
+	 * second reference for the completion of $drm_crtc_state.event. It's
+	 * used by the free code to remove the second reference if commit fails.
 	 */
 	bool abort_completion;
 };
···
  * Currently only tracks the state update functions and the opaque driver
  * private state itself, but in the future might also track which
  * &drm_modeset_lock is required to duplicate and update this object's state.
+ *
+ * All private objects must be initialized before the DRM device they are
+ * attached to is registered to the DRM subsystem (call to drm_dev_register())
+ * and should stay around until this DRM device is unregistered (call to
+ * drm_dev_unregister()). In other words, private objects lifetime is tied
+ * to the DRM device lifetime. This implies that:
+ *
+ * 1/ all calls to drm_atomic_private_obj_init() must be done before calling
+ *    drm_dev_register()
+ * 2/ all calls to drm_atomic_private_obj_fini() must be done after calling
+ *    drm_dev_unregister()
  */
 struct drm_private_obj {
+	/**
+	 * @head: List entry used to attach a private object to a &drm_device
+	 * (queued to &drm_mode_config.privobj_list).
+	 */
+	struct list_head head;
+
+	/**
+	 * @lock: Modeset lock to protect the state object.
+	 */
+	struct drm_modeset_lock lock;
+
 	/**
 	 * @state: Current atomic state for this driver private object.
 	 */
···
 	 */
 	const struct drm_private_state_funcs *funcs;
 };
+
+/**
+ * drm_for_each_privobj() - private object iterator
+ *
+ * @privobj: pointer to the current private object. Updated after each
+ *	     iteration
+ * @dev: the DRM device we want get private objects from
+ *
+ * Allows one to iterate over all private objects attached to @dev
+ */
+#define drm_for_each_privobj(privobj, dev) \
+	list_for_each_entry(privobj, &(dev)->mode_config.privobj_list, head)
 
 /**
  * struct drm_private_state - base struct for driver private object state
···
 drm_atomic_get_connector_state(struct drm_atomic_state *state,
 			       struct drm_connector *connector);
 
-void drm_atomic_private_obj_init(struct drm_private_obj *obj,
+void drm_atomic_private_obj_init(struct drm_device *dev,
+				 struct drm_private_obj *obj,
 				 struct drm_private_state *state,
 				 const struct drm_private_state_funcs *funcs);
 void drm_atomic_private_obj_fini(struct drm_private_obj *obj);
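The new `drm_for_each_privobj()` macro is a thin wrapper around intrusive-list iteration over the `head` entries queued on `privobj_list`. A minimal userspace model of that pattern (using hand-rolled stand-ins for the kernel's `list.h` and `container_of`; the struct names here are hypothetical, not the DRM types):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel's struct list_head machinery. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical models of drm_device / drm_private_obj. */
struct mode_config { struct list_head privobj_list; };
struct device_model { struct mode_config mode_config; };
struct private_obj { struct list_head head; int id; };

/* Mirrors drm_for_each_privobj(): walk every object queued on the list. */
#define for_each_privobj(pos, dev)                                        \
	for (pos = container_of((dev)->mode_config.privobj_list.next,     \
				struct private_obj, head);                \
	     &pos->head != &(dev)->mode_config.privobj_list;              \
	     pos = container_of(pos->head.next, struct private_obj, head))

static int sum_privobj_ids(struct device_model *dev)
{
	struct private_obj *obj;
	int sum = 0;

	for_each_privobj(obj, dev)
		sum += obj->id;
	return sum;
}
```

This also illustrates why init must happen before `drm_dev_register()`: iteration assumes every queued `head` entry stays valid for the device's whole registered lifetime.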
···
 struct pci_dev;
 struct pci_controller;
 
+
 /**
- * DRM device structure. This structure represent a complete card that
+ * enum drm_switch_power - power state of drm device
+ */
+
+enum switch_power_state {
+	/** @DRM_SWITCH_POWER_ON: Power state is ON */
+	DRM_SWITCH_POWER_ON = 0,
+
+	/** @DRM_SWITCH_POWER_OFF: Power state is OFF */
+	DRM_SWITCH_POWER_OFF = 1,
+
+	/** @DRM_SWITCH_POWER_CHANGING: Power state is changing */
+	DRM_SWITCH_POWER_CHANGING = 2,
+
+	/** @DRM_SWITCH_POWER_DYNAMIC_OFF: Suspended */
+	DRM_SWITCH_POWER_DYNAMIC_OFF = 3,
+};
+
+/**
+ * struct drm_device - DRM device structure
+ *
+ * This structure represent a complete card that
  * may contain multiple heads.
  */
 struct drm_device {
-	struct list_head legacy_dev_list;/**< list of devices per driver for stealth attach cleanup */
-	int if_version;			/**< Highest interface version set */
+	/**
+	 * @legacy_dev_list:
+	 *
+	 * List of devices per driver for stealth attach cleanup
+	 */
+	struct list_head legacy_dev_list;
 
-	/** \name Lifetime Management */
-	/*@{ */
-	struct kref ref;		/**< Object ref-count */
-	struct device *dev;		/**< Device structure of bus-device */
-	struct drm_driver *driver;	/**< DRM driver managing the device */
-	void *dev_private;		/**< DRM driver private data */
-	struct drm_minor *primary;	/**< Primary node */
-	struct drm_minor *render;	/**< Render node */
+	/** @if_version: Highest interface version set */
+	int if_version;
+
+	/** @ref: Object ref-count */
+	struct kref ref;
+
+	/** @dev: Device structure of bus-device */
+	struct device *dev;
+
+	/** @driver: DRM driver managing the device */
+	struct drm_driver *driver;
+
+	/**
+	 * @dev_private:
+	 *
+	 * DRM driver private data. Instead of using this pointer it is
+	 * recommended that drivers use drm_dev_init() and embed struct
+	 * &drm_device in their larger per-device structure.
+	 */
+	void *dev_private;
+
+	/** @primary: Primary node */
+	struct drm_minor *primary;
+
+	/** @render: Render node */
+	struct drm_minor *render;
+
+	/**
+	 * @registered:
+	 *
+	 * Internally used by drm_dev_register() and drm_connector_register().
+	 */
 	bool registered;
 
-	/* currently active master for this device. Protected by master_mutex */
+	/**
+	 * @master:
+	 *
+	 * Currently active master for this device.
+	 * Protected by &master_mutex
+	 */
 	struct drm_master *master;
 
 	/**
···
 	 */
 	bool unplugged;
 
-	struct inode *anon_inode;		/**< inode for private address-space */
-	char *unique;				/**< unique name of the device */
-	/*@} */
+	/** @anon_inode: inode for private address-space */
+	struct inode *anon_inode;
 
-	/** \name Locks */
-	/*@{ */
-	struct mutex struct_mutex;	/**< For others */
-	struct mutex master_mutex;	/**< For drm_minor::master and drm_file::is_master */
-	/*@} */
+	/** @unique: Unique name of the device */
+	char *unique;
 
-	/** \name Usage Counters */
-	/*@{ */
-	int open_count;			/**< Outstanding files open, protected by drm_global_mutex. */
-	spinlock_t buf_lock;		/**< For drm_device::buf_use and a few other things. */
-	int buf_use;			/**< Buffers in use -- cannot alloc */
-	atomic_t buf_alloc;		/**< Buffer allocation in progress */
-	/*@} */
+	/**
+	 * @struct_mutex:
+	 *
+	 * Lock for others (not &drm_minor.master and &drm_file.is_master)
+	 */
+	struct mutex struct_mutex;
 
+	/**
+	 * @master_mutex:
+	 *
+	 * Lock for &drm_minor.master and &drm_file.is_master
+	 */
+	struct mutex master_mutex;
+
+	/**
+	 * @open_count:
+	 *
+	 * Usage counter for outstanding files open,
+	 * protected by drm_global_mutex
+	 */
+	int open_count;
+
+	/** @filelist_mutex: Protects @filelist. */
 	struct mutex filelist_mutex;
+	/**
+	 * @filelist:
+	 *
+	 * List of userspace clients, linked through &drm_file.lhead.
+	 */
 	struct list_head filelist;
 
 	/**
 	 * @filelist_internal:
 	 *
-	 * List of open DRM files for in-kernel clients. Protected by @filelist_mutex.
+	 * List of open DRM files for in-kernel clients.
+	 * Protected by &filelist_mutex.
 	 */
 	struct list_head filelist_internal;
 
 	/**
 	 * @clientlist_mutex:
 	 *
-	 * Protects @clientlist access.
+	 * Protects &clientlist access.
 	 */
 	struct mutex clientlist_mutex;
 
 	/**
 	 * @clientlist:
 	 *
-	 * List of in-kernel clients. Protected by @clientlist_mutex.
+	 * List of in-kernel clients. Protected by &clientlist_mutex.
 	 */
 	struct list_head clientlist;
-
-	/** \name Memory management */
-	/*@{ */
-	struct list_head maplist;	/**< Linked list of regions */
-	struct drm_open_hash map_hash;	/**< User token hash table for maps */
-
-	/** \name Context handle management */
-	/*@{ */
-	struct list_head ctxlist;	/**< Linked list of context handles */
-	struct mutex ctxlist_mutex;	/**< For ctxlist */
-
-	struct idr ctx_idr;
-
-	struct list_head vmalist;	/**< List of vmas (for debugging) */
-
-	/*@} */
-
-	/** \name DMA support */
-	/*@{ */
-	struct drm_device_dma *dma;	/**< Optional pointer for DMA support */
-	/*@} */
-
-	/** \name Context support */
-	/*@{ */
-
-	__volatile__ long context_flag;	/**< Context swapping flag */
-	int last_context;		/**< Last current context */
-	/*@} */
 
 	/**
 	 * @irq_enabled:
···
 	 * to true manually.
 	 */
 	bool irq_enabled;
+
+	/**
+	 * @irq: Used by the drm_irq_install() and drm_irq_uninstall() helpers.
+	 */
 	int irq;
 
 	/**
···
 	 */
 	struct drm_vblank_crtc *vblank;
 
-	spinlock_t vblank_time_lock;	/**< Protects vblank count and time updates during vblank enable/disable */
+	/**
+	 * @vblank_time_lock:
+	 *
+	 * Protects vblank count and time updates during vblank enable/disable
+	 */
+	spinlock_t vblank_time_lock;
+	/**
+	 * @vbl_lock: Top-level vblank references lock, wraps the low-level
+	 * @vblank_time_lock.
+	 */
 	spinlock_t vbl_lock;
 
 	/**
···
 	 * races and imprecision over longer time periods, hence exposing a
 	 * hardware vblank counter is always recommended.
 	 *
-	 * If non-zeor, &drm_crtc_funcs.get_vblank_counter must be set.
+	 * This is the statically configured device wide maximum. The driver
+	 * can instead choose to use a runtime configurable per-crtc value
+	 * &drm_vblank_crtc.max_vblank_count, in which case @max_vblank_count
+	 * must be left at zero. See drm_crtc_set_max_vblank_count() on how
+	 * to use the per-crtc value.
+	 *
+	 * If non-zero, &drm_crtc_funcs.get_vblank_counter must be set.
 	 */
-	u32 max_vblank_count;		/**< size of vblank counter register */
+	u32 max_vblank_count;
+
+	/** @vblank_event_list: List of vblank events */
+	struct list_head vblank_event_list;
 
 	/**
-	 * List of events
+	 * @event_lock:
+	 *
+	 * Protects @vblank_event_list and event delivery in
+	 * general. See drm_send_event() and drm_send_event_locked().
 	 */
-	struct list_head vblank_event_list;
 	spinlock_t event_lock;
 
-	/*@} */
+	/** @agp: AGP data */
+	struct drm_agp_head *agp;
 
-	struct drm_agp_head *agp;	/**< AGP data */
+	/** @pdev: PCI device structure */
+	struct pci_dev *pdev;
 
-	struct pci_dev *pdev;		/**< PCI device structure */
 #ifdef __alpha__
+	/** @hose: PCI hose, only used on ALPHA platforms. */
 	struct pci_controller *hose;
 #endif
+	/** @num_crtcs: Number of CRTCs on this device */
+	unsigned int num_crtcs;
 
-	struct drm_sg_mem *sg;	/**< Scatter gather memory */
-	unsigned int num_crtcs;	/**< Number of CRTCs on this device */
+	/** @mode_config: Current mode config */
+	struct drm_mode_config mode_config;
+
+	/** @object_name_lock: GEM information */
+	struct mutex object_name_lock;
+
+	/** @object_name_idr: GEM information */
+	struct idr object_name_idr;
+
+	/** @vma_offset_manager: GEM information */
+	struct drm_vma_offset_manager *vma_offset_manager;
+
+	/**
+	 * @switch_power_state:
+	 *
+	 * Power state of the client.
+	 * Used by drivers supporting the switcheroo driver.
+	 * The state is maintained in the
+	 * &vga_switcheroo_client_ops.set_gpu_state callback
+	 */
+	enum switch_power_state switch_power_state;
+
+	/**
+	 * @fb_helper:
+	 *
+	 * Pointer to the fbdev emulation structure.
+	 * Set by drm_fb_helper_init() and cleared by drm_fb_helper_fini().
+	 */
+	struct drm_fb_helper *fb_helper;
+
+	/* Everything below here is for legacy driver, never use! */
+	/* private: */
+
+	/* Context handle management - linked list of context handles */
+	struct list_head ctxlist;
+
+	/* Context handle management - mutex for &ctxlist */
+	struct mutex ctxlist_mutex;
+
+	/* Context handle management */
+	struct idr ctx_idr;
+
+	/* Memory management - linked list of regions */
+	struct list_head maplist;
+
+	/* Memory management - user token hash table for maps */
+	struct drm_open_hash map_hash;
+
+	/* Context handle management - list of vmas (for debugging) */
+	struct list_head vmalist;
+
+	/* Optional pointer for DMA support */
+	struct drm_device_dma *dma;
+
+	/* Context swapping flag */
+	__volatile__ long context_flag;
+
+	/* Last current context */
+	int last_context;
+
+	/* Lock for &buf_use and a few other things. */
+	spinlock_t buf_lock;
+
+	/* Usage counter for buffers in use -- cannot alloc */
+	int buf_use;
+
+	/* Buffer allocation in progress */
+	atomic_t buf_alloc;
 
 	struct {
 		int context;
···
 	struct drm_local_map *agp_buffer_map;
 	unsigned int agp_buffer_token;
 
-	struct drm_mode_config mode_config;	/**< Current mode config */
-
-	/** \name GEM information */
-	/*@{ */
-	struct mutex object_name_lock;
-	struct idr object_name_idr;
-	struct drm_vma_offset_manager *vma_offset_manager;
-	/*@} */
-	int switch_power_state;
-
-	/**
-	 * @fb_helper:
-	 *
-	 * Pointer to the fbdev emulation structure.
-	 * Set by drm_fb_helper_init() and cleared by drm_fb_helper_fini().
-	 */
-	struct drm_fb_helper *fb_helper;
+	/* Scatter gather memory */
+	struct drm_sg_mem *sg;
 };
 
 #endif
+140 -13
include/drm/drm_dp_mst_helper.h

···
 
 /**
  * struct drm_dp_mst_port - MST port
- * @kref: reference count for this port.
  * @port_num: port number
  * @input: if this port is an input port.
  * @mcs: message capability status - DP 1.2 spec.
···
  * in the MST topology.
  */
 struct drm_dp_mst_port {
-	struct kref kref;
+	/**
+	 * @topology_kref: refcount for this port's lifetime in the topology,
+	 * only the DP MST helpers should need to touch this
+	 */
+	struct kref topology_kref;
+
+	/**
+	 * @malloc_kref: refcount for the memory allocation containing this
+	 * structure. See drm_dp_mst_get_port_malloc() and
+	 * drm_dp_mst_put_port_malloc().
+	 */
+	struct kref malloc_kref;
 
 	u8 port_num;
 	bool input;
···
 
 /**
  * struct drm_dp_mst_branch - MST branch device.
- * @kref: reference count for this port.
  * @rad: Relative Address to talk to this branch device.
  * @lct: Link count total to talk to this branch device.
  * @num_ports: number of ports on the branch.
···
  * to downstream port of parent branches.
  */
 struct drm_dp_mst_branch {
-	struct kref kref;
+	/**
+	 * @topology_kref: refcount for this branch device's lifetime in the
+	 * topology, only the DP MST helpers should need to touch this
+	 */
+	struct kref topology_kref;
+
+	/**
+	 * @malloc_kref: refcount for the memory allocation containing this
+	 * structure. See drm_dp_mst_get_mstb_malloc() and
+	 * drm_dp_mst_put_mstb_malloc().
+	 */
+	struct kref malloc_kref;
+
 	u8 rad[8];
 	u8 lct;
 	int num_ports;
···
 	void (*register_connector)(struct drm_connector *connector);
 	void (*destroy_connector)(struct drm_dp_mst_topology_mgr *mgr,
 				  struct drm_connector *connector);
-	void (*hotplug)(struct drm_dp_mst_topology_mgr *mgr);
-
 };
 
 #define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8)
···
 
 #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base)
 
+struct drm_dp_vcpi_allocation {
+	struct drm_dp_mst_port *port;
+	int vcpi;
+	struct list_head next;
+};
+
 struct drm_dp_mst_topology_state {
 	struct drm_private_state base;
-	int avail_slots;
+	struct list_head vcpis;
 	struct drm_dp_mst_topology_mgr *mgr;
 };
···
 int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr);
 struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
 								    struct drm_dp_mst_topology_mgr *mgr);
-int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
-				  struct drm_dp_mst_topology_mgr *mgr,
-				  struct drm_dp_mst_port *port, int pbn);
-int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
-				     struct drm_dp_mst_topology_mgr *mgr,
-				     int slots);
+int __must_check
+drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
+			      struct drm_dp_mst_topology_mgr *mgr,
+			      struct drm_dp_mst_port *port, int pbn);
+int __must_check
+drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
+				 struct drm_dp_mst_topology_mgr *mgr,
+				 struct drm_dp_mst_port *port);
 int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr,
 				 struct drm_dp_mst_port *port, bool power_up);
+int __must_check drm_dp_mst_atomic_check(struct drm_atomic_state *state);
+
+void drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port);
+void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port);
+
+extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs;
+
+/**
+ * __drm_dp_mst_state_iter_get - private atomic state iterator function for
+ * macro-internal use
+ * @state: &struct drm_atomic_state pointer
+ * @mgr: pointer to the &struct drm_dp_mst_topology_mgr iteration cursor
+ * @old_state: optional pointer to the old &struct drm_dp_mst_topology_state
+ * iteration cursor
+ * @new_state: optional pointer to the new &struct drm_dp_mst_topology_state
+ * iteration cursor
+ * @i: int iteration cursor, for macro-internal use
+ *
+ * Used by for_each_oldnew_mst_mgr_in_state(),
+ * for_each_old_mst_mgr_in_state(), and for_each_new_mst_mgr_in_state(). Don't
+ * call this directly.
+ *
+ * Returns:
+ * True if the current &struct drm_private_obj is a &struct
+ * drm_dp_mst_topology_mgr, false otherwise.
+ */
+static inline bool
+__drm_dp_mst_state_iter_get(struct drm_atomic_state *state,
+			    struct drm_dp_mst_topology_mgr **mgr,
+			    struct drm_dp_mst_topology_state **old_state,
+			    struct drm_dp_mst_topology_state **new_state,
+			    int i)
+{
+	struct __drm_private_objs_state *objs_state = &state->private_objs[i];
+
+	if (objs_state->ptr->funcs != &drm_dp_mst_topology_state_funcs)
+		return false;
+
+	*mgr = to_dp_mst_topology_mgr(objs_state->ptr);
+	if (old_state)
+		*old_state = to_dp_mst_topology_state(objs_state->old_state);
+	if (new_state)
+		*new_state = to_dp_mst_topology_state(objs_state->new_state);
+
+	return true;
+}
+
+/**
+ * for_each_oldnew_mst_mgr_in_state - iterate over all DP MST topology
+ * managers in an atomic update
+ * @__state: &struct drm_atomic_state pointer
+ * @mgr: &struct drm_dp_mst_topology_mgr iteration cursor
+ * @old_state: &struct drm_dp_mst_topology_state iteration cursor for the old
+ * state
+ * @new_state: &struct drm_dp_mst_topology_state iteration cursor for the new
+ * state
+ * @__i: int iteration cursor, for macro-internal use
+ *
+ * This iterates over all DRM DP MST topology managers in an atomic update,
+ * tracking both old and new state. This is useful in places where the state
+ * delta needs to be considered, for example in atomic check functions.
+ */
+#define for_each_oldnew_mst_mgr_in_state(__state, mgr, old_state, new_state, __i) \
+	for ((__i) = 0; (__i) < (__state)->num_private_objs; (__i)++) \
+		for_each_if(__drm_dp_mst_state_iter_get((__state), &(mgr), &(old_state), &(new_state), (__i)))
+
+/**
+ * for_each_old_mst_mgr_in_state - iterate over all DP MST topology managers
+ * in an atomic update
+ * @__state: &struct drm_atomic_state pointer
+ * @mgr: &struct drm_dp_mst_topology_mgr iteration cursor
+ * @old_state: &struct drm_dp_mst_topology_state iteration cursor for the old
+ * state
+ * @__i: int iteration cursor, for macro-internal use
+ *
+ * This iterates over all DRM DP MST topology managers in an atomic update,
+ * tracking only the old state. This is useful in disable functions, where we
+ * need the old state the hardware is still in.
+ */
+#define for_each_old_mst_mgr_in_state(__state, mgr, old_state, __i) \
+	for ((__i) = 0; (__i) < (__state)->num_private_objs; (__i)++) \
+		for_each_if(__drm_dp_mst_state_iter_get((__state), &(mgr), &(old_state), NULL, (__i)))
+
+/**
+ * for_each_new_mst_mgr_in_state - iterate over all DP MST topology managers
+ * in an atomic update
+ * @__state: &struct drm_atomic_state pointer
+ * @mgr: &struct drm_dp_mst_topology_mgr iteration cursor
+ * @new_state: &struct drm_dp_mst_topology_state iteration cursor for the new
+ * state
+ * @__i: int iteration cursor, for macro-internal use
+ *
+ * This iterates over all DRM DP MST topology managers in an atomic update,
+ * tracking only the new state. This is useful in enable functions, where we
+ * need the new state the hardware should be in when the atomic commit
+ * operation has completed.
+ */
+#define for_each_new_mst_mgr_in_state(__state, mgr, new_state, __i) \
+	for ((__i) = 0; (__i) < (__state)->num_private_objs; (__i)++) \
+		for_each_if(__drm_dp_mst_state_iter_get((__state), &(mgr), NULL, &(new_state), (__i)))
 
 #endif
include/drm/drm_mode_config.h

···
 	/**
 	 * @idr_mutex:
 	 *
-	 * Mutex for KMS ID allocation and management. Protects both @crtc_idr
+	 * Mutex for KMS ID allocation and management. Protects both @object_idr
 	 * and @tile_idr.
 	 */
 	struct mutex idr_mutex;
 
 	/**
-	 * @crtc_idr:
+	 * @object_idr:
 	 *
 	 * Main KMS ID tracking object. Use this idr for all IDs, fb, crtc,
 	 * connector, modes - just makes life easier to have only one.
 	 */
-	struct idr crtc_idr;
+	struct idr object_idr;
 
 	/**
 	 * @tile_idr:
···
 	 * locks.
 	 */
 	struct list_head property_list;
+
+	/**
+	 * @privobj_list:
+	 *
+	 * List of private objects linked with &drm_private_obj.head. This is
+	 * invariant over the lifetime of a device and hence doesn't need any
+	 * locks.
+	 */
+	struct list_head privobj_list;
 
 	int min_width, min_height;
 	int max_width, max_height;
···
 	struct drm_property *tv_mode_property;
 	/**
 	 * @tv_left_margin_property: Optional TV property to set the left
-	 * margin.
+	 * margin (expressed in pixels).
 	 */
 	struct drm_property *tv_left_margin_property;
 	/**
 	 * @tv_right_margin_property: Optional TV property to set the right
-	 * margin.
+	 * margin (expressed in pixels).
 	 */
 	struct drm_property *tv_right_margin_property;
 	/**
 	 * @tv_top_margin_property: Optional TV property to set the right
-	 * margin.
+	 * margin (expressed in pixels).
 	 */
 	struct drm_property *tv_top_margin_property;
 	/**
 	 * @tv_bottom_margin_property: Optional TV property to set the right
-	 * margin.
+	 * margin (expressed in pixels).
 	 */
 	struct drm_property *tv_bottom_margin_property;
 	/**
include/drm/drm_syncobj.h

···
 
 struct drm_file;
 
-struct drm_syncobj_cb;
-
 /**
  * struct drm_syncobj - sync object.
  *
···
 	 * @file: A file backing for this syncobj.
 	 */
 	struct file *file;
-};
-
-typedef void (*drm_syncobj_func_t)(struct drm_syncobj *syncobj,
-				   struct drm_syncobj_cb *cb);
-
-/**
- * struct drm_syncobj_cb - callback for drm_syncobj_add_callback
- * @node: used by drm_syncob_add_callback to append this struct to
- *        &drm_syncobj.cb_list
- * @func: drm_syncobj_func_t to call
- *
- * This struct will be initialized by drm_syncobj_add_callback, additional
- * data can be passed along by embedding drm_syncobj_cb in another struct.
- * The callback will get called the next time drm_syncobj_replace_fence is
- * called.
- */
-struct drm_syncobj_cb {
-	struct list_head node;
-	drm_syncobj_func_t func;
 };
 
 void drm_syncobj_free(struct kref *kref);
+52 -1
include/drm/drm_util.h

···
 #ifndef _DRM_UTIL_H_
 #define _DRM_UTIL_H_
 
-/* helper for handling conditionals in various for_each macros */
+/**
+ * DOC: drm utils
+ *
+ * Macros and inline functions that do not naturally belong in other places
+ */
+
+#include <linux/interrupt.h>
+#include <linux/kgdb.h>
+#include <linux/preempt.h>
+#include <linux/smp.h>
+
+/*
+ * Use EXPORT_SYMBOL_FOR_TESTS_ONLY() for functions that shall
+ * only be visible for drmselftests.
+ */
+#if defined(CONFIG_DRM_DEBUG_SELFTEST_MODULE)
+#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x) EXPORT_SYMBOL(x)
+#else
+#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x)
+#endif
+
+/**
+ * for_each_if - helper for handling conditionals in various for_each macros
+ * @condition: The condition to check
+ *
+ * Typical use::
+ *
+ *	#define for_each_foo_bar(x, y) \'
+ *		list_for_each_entry(x, y->list, head) \'
+ *		for_each_if(x->something == SOMETHING)
+ *
+ * The for_each_if() macro makes the use of for_each_foo_bar() less error
+ * prone.
+ */
 #define for_each_if(condition) if (!(condition)) {} else
+
+/**
+ * drm_can_sleep - returns true if currently okay to sleep
+ *
+ * This function shall not be used in new code.
+ * The check for running in atomic context may not work - see linux/preempt.h.
+ *
+ * FIXME: All users of drm_can_sleep should be removed (see todo.rst)
+ *
+ * Returns:
+ * False if kgdb is active, we are in an atomic context, or irqs are
+ * disabled; true otherwise.
+ */
+static inline bool drm_can_sleep(void)
+{
+	if (in_atomic() || in_dbg_master() || irqs_disabled())
+		return false;
+	return true;
+}
 
 #endif
+22
include/drm/drm_vblank.h

···
 	 */
 	u32 last;
 	/**
+	 * @max_vblank_count:
+	 *
+	 * Maximum value of the vblank registers for this crtc. This value +1
+	 * will result in a wrap-around of the vblank register. It is used
+	 * by the vblank core to handle wrap-arounds.
+	 *
+	 * If set to zero the vblank core will try to guess the elapsed vblanks
+	 * between times when the vblank interrupt is disabled through
+	 * high-precision timestamps. That approach suffers from small races
+	 * and imprecision over longer time periods, hence exposing a
+	 * hardware vblank counter is always recommended.
+	 *
+	 * This is the runtime configurable per-crtc maximum set through
+	 * drm_crtc_set_max_vblank_count(). If this is used the driver
+	 * must leave the device wide &drm_device.max_vblank_count at zero.
+	 *
+	 * If non-zero, &drm_crtc_funcs.get_vblank_counter must be set.
+	 */
+	u32 max_vblank_count;
+	/**
 	 * @inmodeset: Tracks whether the vblank is disabled due to a modeset.
 	 * For legacy driver bit 2 additionally tracks whether an additional
 	 * temporary vblank reference has been acquired to paper over the
···
 void drm_calc_timestamping_constants(struct drm_crtc *crtc,
 				     const struct drm_display_mode *mode);
 wait_queue_head_t *drm_crtc_vblank_waitqueue(struct drm_crtc *crtc);
+void drm_crtc_set_max_vblank_count(struct drm_crtc *crtc,
+				   u32 max_vblank_count);
 #endif
+15 -7
include/linux/dma-fence.h

···
 	struct list_head cb_list;
 	spinlock_t *lock;
 	u64 context;
-	unsigned seqno;
+	u64 seqno;
 	unsigned long flags;
 	ktime_t timestamp;
 	int error;
···
 };
 
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
-		    spinlock_t *lock, u64 context, unsigned seqno);
+		    spinlock_t *lock, u64 context, u64 seqno);
 
 void dma_fence_release(struct kref *kref);
 void dma_fence_free(struct dma_fence *fence);
···
  * Returns true if f1 is chronologically later than f2. Both fences must be
  * from the same context, since a seqno is not common across contexts.
  */
-static inline bool __dma_fence_is_later(u32 f1, u32 f2)
+static inline bool __dma_fence_is_later(u64 f1, u64 f2)
 {
-	return (int)(f1 - f2) > 0;
+	/* This is for backward compatibility with drivers which can only
+	 * handle 32bit sequence numbers. Use a 64bit compare when any of the
+	 * higher bits are non-zero, otherwise use a 32bit compare with wrap
+	 * around handling.
+	 */
+	if (upper_32_bits(f1) || upper_32_bits(f2))
+		return f1 > f2;
+
+	return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
 }
 
 /**
···
 	do {								\
 		struct dma_fence *__ff = (f);				\
 		if (IS_ENABLED(CONFIG_DMA_FENCE_TRACE))			\
-			pr_info("f %llu#%u: " fmt,			\
+			pr_info("f %llu#%llu: " fmt,			\
 				__ff->context, __ff->seqno, ##args);	\
 	} while (0)
 
 #define DMA_FENCE_WARN(f, fmt, args...)					\
 	do {								\
 		struct dma_fence *__ff = (f);				\
-		pr_warn("f %llu#%u: " fmt, __ff->context, __ff->seqno,	\
+		pr_warn("f %llu#%llu: " fmt, __ff->context, __ff->seqno,\
 			##args);					\
 	} while (0)
 
 #define DMA_FENCE_ERR(f, fmt, args...)					\
 	do {								\
 		struct dma_fence *__ff = (f);				\
-		pr_err("f %llu#%u: " fmt, __ff->context, __ff->seqno,	\
+		pr_err("f %llu#%llu: " fmt, __ff->context, __ff->seqno,	\
 			##args);					\
 	} while (0)
+23
include/uapi/drm/drm_fourcc.h

···
  * Indicates the superblock size(s) used for the AFBC buffer. The buffer
  * size (in pixels) must be aligned to a multiple of the superblock size.
  * Four lowest significant bits(LSBs) are reserved for block size.
+ *
+ * Where one superblock size is specified, it applies to all planes of the
+ * buffer (e.g. 16x16, 32x8). When multiple superblock sizes are specified,
+ * the first applies to the Luma plane and the second applies to the Chroma
+ * plane(s). e.g. (32x8_64x4 means 32x8 Luma, with 64x4 Chroma).
+ * Multiple superblock sizes are only valid for multi-plane YCbCr formats.
  */
 #define AFBC_FORMAT_MOD_BLOCK_SIZE_MASK      0xf
 #define AFBC_FORMAT_MOD_BLOCK_SIZE_16x16     (1ULL)
 #define AFBC_FORMAT_MOD_BLOCK_SIZE_32x8      (2ULL)
+#define AFBC_FORMAT_MOD_BLOCK_SIZE_64x4      (3ULL)
+#define AFBC_FORMAT_MOD_BLOCK_SIZE_32x8_64x4 (4ULL)
 
 /*
  * AFBC lossless colorspace transform
···
  * can be reduced if a whole superblock is a single color.
  */
 #define AFBC_FORMAT_MOD_SC      (1ULL <<  9)
+
+/*
+ * AFBC double-buffer
+ *
+ * Indicates that the buffer is allocated in a layout safe for front-buffer
+ * rendering.
+ */
+#define AFBC_FORMAT_MOD_DB      (1ULL << 10)
+
+/*
+ * AFBC buffer content hints
+ *
+ * Indicates that the buffer includes per-superblock content hints.
+ */
+#define AFBC_FORMAT_MOD_BCH     (1ULL << 11)
 
 #if defined(__cplusplus)
 }
+8
include/uapi/drm/v3d_drm.h

···
  *
  * This asks the kernel to have the GPU execute an optional binner
  * command list, and a render command list.
+ *
+ * The L1T, slice, L2C, L2T, and GCA caches will be flushed before
+ * each CL executes.  The VCD cache should be flushed (if necessary)
+ * by the submitted CLs.  The TLB writes are guaranteed to have been
+ * flushed by the time the render done IRQ happens, which is the
+ * trigger for out_sync.  Any dirtying of cachelines by the job (only
+ * possible using TMU writes) must be flushed by the caller using the
+ * CL's cache flush commands.
  */
 struct drm_v3d_submit_cl {
 	/* Pointer to the binner command list.