
Merge tag 'drm-misc-next-2020-05-07' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.8:

UAPI Changes:

Cross-subsystem Changes:

* MAINTAINERS: restore alphabetical order; update cirrus driver
* Documentation: document visionox, chrontel, ite vendor prefixes; update
documentation for Chrontel CH7033, IT6505, IVO, BOE,
Panasonic, Chunghwa, AUO bindings; convert dw_mipi_dsi.txt
to YAML; remove todo item for drm_display_mode.hsync removal;

Core Changes:

* drm: add devm_drm_dev_alloc() for managed allocations of drm_device;
use DRM_MODESET_LOCK_ALL_*() in mode-object code; remove
drm_display_mode.hsync; small cleanups of unused variables,
compiler warnings and static functions
* drm/client: dual-licensing: GPL-2.0 or MIT
* drm/mm: optimize tree searches in rb_hole_addr()

Driver Changes:

* drm/{many}: use devm_drm_dev_alloc(); don't use drm_device.dev_private
* drm/ast: don't double-assign to drm_crtc_funcs.set_config; drop
drm_connector_register()
* drm/bochs: drop drm_connector_register()
* drm/bridge: add support for Chrontel ch7033; fix stack usage with
old gccs; return error pointer in drm_panel_bridge_add()
* drm/cirrus: Move to tiny
* drm/dp_mst: don't use 2nd sideband tx slot; revert "Remove single tx
msg restriction"
* drm/lima: support runtime PM;
* drm/meson: limit modes wrt chipset
* drm/panel: add support for Visionox rm69299; fix clock on
boe-tv101wum-n16; fix panel type for AUO G101EVN10;
               add support for Ivo M133NWF4 R0; add support for BOE
NV133FHM-N61; add support for AUO G121EAN01.4, G156XTN01.0,
G190EAN01
* drm/pl111: improve vexpress init; fix module auto-loading
* drm/stm: read number of endpoints from device tree
* drm/vboxvideo: use managed PCI functions; drop DRM_MTRR_WC
* drm/vkms: fix use-after-free in vkms_gem_create(); enable cursor
support by default
* fbdev: use boolean values in several drivers
* fbdev/controlfb: fix COMPILE_TEST
* fbdev/w100fb: fix double-free bug

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20200507072503.GA10979@linux-uq9g

+2653 -1217
+77
Documentation/devicetree/bindings/display/bridge/chrontel,ch7033.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2019,2020 Lubomir Rintel <lkundrak@v3.sk> 3 + %YAML 1.2 4 + --- 5 + $id: http://devicetree.org/schemas/display/bridge/chrontel,ch7033.yaml# 6 + $schema: http://devicetree.org/meta-schemas/core.yaml# 7 + 8 + title: Chrontel CH7033 Video Encoder Device Tree Bindings 9 + 10 + maintainers: 11 + - Lubomir Rintel <lkundrak@v3.sk> 12 + 13 + properties: 14 + compatible: 15 + const: chrontel,ch7033 16 + 17 + reg: 18 + maxItems: 1 19 + description: I2C address of the device 20 + 21 + ports: 22 + type: object 23 + 24 + properties: 25 + port@0: 26 + type: object 27 + description: | 28 + Video port for RGB input. 29 + 30 + port@1: 31 + type: object 32 + description: | 33 + DVI port, should be connected to a node compatible with the 34 + dvi-connector binding. 35 + 36 + required: 37 + - port@0 38 + - port@1 39 + 40 + required: 41 + - compatible 42 + - reg 43 + - ports 44 + 45 + additionalProperties: false 46 + 47 + examples: 48 + - | 49 + i2c { 50 + #address-cells = <1>; 51 + #size-cells = <0>; 52 + 53 + vga-dvi-encoder@76 { 54 + compatible = "chrontel,ch7033"; 55 + reg = <0x76>; 56 + 57 + ports { 58 + #address-cells = <1>; 59 + #size-cells = <0>; 60 + 61 + port@0 { 62 + reg = <0>; 63 + endpoint { 64 + remote-endpoint = <&lcd0_rgb_out>; 65 + }; 66 + }; 67 + 68 + port@1 { 69 + reg = <1>; 70 + endpoint { 71 + remote-endpoint = <&dvi_in>; 72 + }; 73 + }; 74 + 75 + }; 76 + }; 77 + };
-32
Documentation/devicetree/bindings/display/bridge/dw_mipi_dsi.txt
··· 1 - Synopsys DesignWare MIPI DSI host controller 2 - ============================================ 3 - 4 - This document defines device tree properties for the Synopsys DesignWare MIPI 5 - DSI host controller. It doesn't constitue a device tree binding specification 6 - by itself but is meant to be referenced by platform-specific device tree 7 - bindings. 8 - 9 - When referenced from platform device tree bindings the properties defined in 10 - this document are defined as follows. The platform device tree bindings are 11 - responsible for defining whether each optional property is used or not. 12 - 13 - - reg: Memory mapped base address and length of the DesignWare MIPI DSI 14 - host controller registers. (mandatory) 15 - 16 - - clocks: References to all the clocks specified in the clock-names property 17 - as specified in [1]. (mandatory) 18 - 19 - - clock-names: 20 - - "pclk" is the peripheral clock for either AHB and APB. (mandatory) 21 - - "px_clk" is the pixel clock for the DPI/RGB input. (optional) 22 - 23 - - resets: References to all the resets specified in the reset-names property 24 - as specified in [2]. (optional) 25 - 26 - - reset-names: string reset name, must be "apb" if used. (optional) 27 - 28 - - panel or bridge node: see [3]. (mandatory) 29 - 30 - [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 31 - [2] Documentation/devicetree/bindings/reset/reset.txt 32 - [3] Documentation/devicetree/bindings/display/mipi-dsi-bus.txt
+91
Documentation/devicetree/bindings/display/bridge/ite,it6505.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/bridge/ite,it6505.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: ITE it6505 Device Tree Bindings 8 + 9 + maintainers: 10 + - Allen Chen <allen.chen@ite.com.tw> 11 + 12 + description: | 13 + The IT6505 is a high-performance DisplayPort 1.1a transmitter, 14 + fully compliant with DisplayPort 1.1a, HDCP 1.3 specifications. 15 + The IT6505 supports color depth of up to 36 bits (12 bits/color) 16 + and ensures robust transmission of high-quality uncompressed video 17 + content, along with uncompressed and compressed digital audio content. 18 + 19 + Aside from the various video output formats supported, the IT6505 20 + also encodes and transmits up to 8 channels of I2S digital audio, 21 + with sampling rate up to 192kHz and sample size up to 24 bits. 22 + In addition, an S/PDIF input port takes in compressed audio of up to 23 + 192kHz frame rate. 24 + 25 + Each IT6505 chip comes preprogrammed with an unique HDCP key, 26 + in compliance with the HDCP 1.3 standard so as to provide secure 27 + transmission of high-definition content. Users of the IT6505 need not 28 + purchase any HDCP keys or ROMs. 
29 + 30 + properties: 31 + compatible: 32 + const: ite,it6505 33 + 34 + ovdd-supply: 35 + maxItems: 1 36 + description: I/O voltage 37 + 38 + pwr18-supply: 39 + maxItems: 1 40 + description: core voltage 41 + 42 + interrupts: 43 + maxItems: 1 44 + description: interrupt specifier of INT pin 45 + 46 + reset-gpios: 47 + maxItems: 1 48 + description: gpio specifier of RESET pin 49 + 50 + extcon: 51 + maxItems: 1 52 + description: extcon specifier for the Power Delivery 53 + 54 + port: 55 + type: object 56 + description: A port node pointing to DPI host port node 57 + 58 + required: 59 + - compatible 60 + - ovdd-supply 61 + - pwr18-supply 62 + - interrupts 63 + - reset-gpios 64 + - extcon 65 + 66 + examples: 67 + - | 68 + #include <dt-bindings/interrupt-controller/irq.h> 69 + 70 + i2c { 71 + #address-cells = <1>; 72 + #size-cells = <0>; 73 + 74 + dp-bridge@5c { 75 + compatible = "ite,it6505"; 76 + interrupts = <152 IRQ_TYPE_EDGE_FALLING 152 0>; 77 + reg = <0x5c>; 78 + pinctrl-names = "default"; 79 + pinctrl-0 = <&it6505_pins>; 80 + ovdd-supply = <&mt6358_vsim1_reg>; 81 + pwr18-supply = <&it6505_pp18_reg>; 82 + reset-gpios = <&pio 179 1>; 83 + extcon = <&usbc_extcon>; 84 + 85 + port { 86 + it6505_in: endpoint { 87 + remote-endpoint = <&dpi_out>; 88 + }; 89 + }; 90 + }; 91 + };
+68
Documentation/devicetree/bindings/display/bridge/snps,dw-mipi-dsi.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/bridge/snps,dw-mipi-dsi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Synopsys DesignWare MIPI DSI host controller 8 + 9 + maintainers: 10 + - Philippe CORNU <philippe.cornu@st.com> 11 + 12 + description: | 13 + This document defines device tree properties for the Synopsys DesignWare MIPI 14 + DSI host controller. It doesn't constitue a device tree binding specification 15 + by itself but is meant to be referenced by platform-specific device tree 16 + bindings. 17 + 18 + When referenced from platform device tree bindings the properties defined in 19 + this document are defined as follows. The platform device tree bindings are 20 + responsible for defining whether each property is required or optional. 21 + 22 + allOf: 23 + - $ref: ../dsi-controller.yaml# 24 + 25 + properties: 26 + reg: 27 + maxItems: 1 28 + 29 + clocks: 30 + items: 31 + - description: Module clock 32 + - description: DSI bus clock for either AHB and APB 33 + - description: Pixel clock for the DPI/RGB input 34 + minItems: 2 35 + 36 + clock-names: 37 + items: 38 + - const: ref 39 + - const: pclk 40 + - const: px_clk 41 + minItems: 2 42 + 43 + resets: 44 + maxItems: 1 45 + 46 + reset-names: 47 + const: apb 48 + 49 + ports: 50 + type: object 51 + 52 + properties: 53 + port@0: 54 + type: object 55 + description: Input node to receive pixel data. 56 + port@1: 57 + type: object 58 + description: DSI output node to panel. 59 + 60 + required: 61 + - port@0 62 + - port@1 63 + 64 + required: 65 + - clock-names 66 + - clocks 67 + - ports 68 + - reg
+2
Documentation/devicetree/bindings/display/panel/panel-simple-dsi.yaml
··· 42 42 # One Stop Displays OSD101T2587-53TS 10.1" 1920x1200 panel 43 43 - osddisplays,osd101t2587-53ts 44 44 # Panasonic 10" WUXGA TFT LCD panel 45 + - panasonic,vvx10f004b00 46 + # Panasonic 10" WUXGA TFT LCD panel 45 47 - panasonic,vvx10f034n00 46 48 47 49 reg:
+12
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
··· 53 53 - auo,g101evn010 54 54 # AU Optronics Corporation 10.4" (800x600) color TFT LCD panel 55 55 - auo,g104sn02 56 + # AU Optronics Corporation 12.1" (1280x800) TFT LCD panel 57 + - auo,g121ean01 56 58 # AU Optronics Corporation 13.3" FHD (1920x1080) TFT LCD panel 57 59 - auo,g133han01 60 + # AU Optronics Corporation 15.6" (1366x768) TFT LCD panel 61 + - auo,g156xtn01 58 62 # AU Optronics Corporation 18.5" FHD (1920x1080) TFT LCD panel 59 63 - auo,g185han01 64 + # AU Optronics Corporation 19.0" (1280x1024) TFT LCD panel 65 + - auo,g190ean01 60 66 # AU Optronics Corporation 31.5" FHD (1920x1080) TFT LCD panel 61 67 - auo,p320hvn03 62 68 # AU Optronics Corporation 21.5" FHD (1920x1080) color TFT LCD panel ··· 73 67 - boe,hv070wsa-100 74 68 # BOE OPTOELECTRONICS TECHNOLOGY 10.1" WXGA TFT LCD panel 75 69 - boe,nv101wxmn51 70 + # BOE NV133FHM-N61 13.3" FHD (1920x1080) TFT LCD Panel 71 + - boe,nv133fhm-n61 76 72 # BOE NV140FHM-N49 14.0" FHD a-Si FT panel 77 73 - boe,nv140fhmn49 78 74 # CDTech(H.K.) Electronics Limited 4.3" 480x272 color TFT-LCD panel ··· 85 77 - chunghwa,claa070wp03xg 86 78 # Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel 87 79 - chunghwa,claa101wa01a 80 + # Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel 81 + - chunghwa,claa101wb01 88 82 # Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel 89 83 - chunghwa,claa101wb03 90 84 # DataImage, Inc. 7" WVGA (800x480) TFT LCD panel with 24-bit parallel interface. ··· 133 123 - hannstar,hsd100pxn1 134 124 # Hitachi Ltd. Corporation 9" WVGA (800x480) TFT LCD panel 135 125 - hit,tx23d38vm0caa 126 + # InfoVision Optoelectronics M133NWF4 R0 13.3" FHD (1920x1080) TFT LCD panel 127 + - ivo,m133nwf4-r0 136 128 # Innolux AT043TN24 4.3" WQVGA TFT LCD panel 137 129 - innolux,at043tn24 138 130 # Innolux AT070TN92 7.0" WQVGA TFT LCD panel
+7 -1
Documentation/devicetree/bindings/vendor-prefixes.yaml
··· 187 187 description: ChipOne 188 188 "^chipspark,.*": 189 189 description: ChipSPARK 190 + "^chrontel,.*": 191 + description: Chrontel, Inc. 190 192 "^chrp,.*": 191 193 description: Common Hardware Reference Platform 192 194 "^chunghwa,.*": ··· 465 463 description: Infineon Technologies 466 464 "^inforce,.*": 467 465 description: Inforce Computing 466 + "^ivo,.*": 467 + description: InfoVision Optoelectronics Kunshan Co. Ltd. 468 468 "^ingenic,.*": 469 469 description: Ingenic Semiconductor 470 470 "^innolux,.*": ··· 492 488 "^issi,.*": 493 489 description: Integrated Silicon Solutions Inc. 494 490 "^ite,.*": 495 - description: ITE Tech, Inc. 491 + description: ITE Tech. Inc. 496 492 "^itead,.*": 497 493 description: ITEAD Intelligent Systems Co.Ltd 498 494 "^iwave,.*": ··· 1043 1039 description: Tronsmart 1044 1040 "^truly,.*": 1045 1041 description: Truly Semiconductors Limited 1042 + "^visionox,.*": 1043 + description: Visionox 1046 1044 "^tsd,.*": 1047 1045 description: Theobroma Systems Design und Consulting GmbH 1048 1046 "^tyan,.*":
-12
Documentation/gpu/todo.rst
··· 347 347 348 348 Level: Starter 349 349 350 - Remove drm_display_mode.hsync 351 - ----------------------------- 352 - 353 - We have drm_mode_hsync() to calculate this from hsync_start/end, since drivers 354 - shouldn't/don't use this, remove this member to avoid any temptations to use it 355 - in the future. If there is any debug code using drm_display_mode.hsync, convert 356 - it to use drm_mode_hsync() instead. 357 - 358 - Contact: Sean Paul 359 - 360 - Level: Starter 361 - 362 350 connector register/unregister fixes 363 351 ----------------------------------- 364 352
+1 -1
MAINTAINERS
··· 5400 5400 S: Obsolete 5401 5401 W: https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 5402 5402 T: git git://anongit.freedesktop.org/drm/drm-misc 5403 - F: drivers/gpu/drm/cirrus/ 5403 + F: drivers/gpu/drm/tiny/cirrus.c 5404 5404 5405 5405 DRM DRIVER FOR QXL VIRTUAL GPU 5406 5406 M: Dave Airlie <airlied@redhat.com>
-2
drivers/gpu/drm/Kconfig
··· 310 310 311 311 source "drivers/gpu/drm/mgag200/Kconfig" 312 312 313 - source "drivers/gpu/drm/cirrus/Kconfig" 314 - 315 313 source "drivers/gpu/drm/armada/Kconfig" 316 314 317 315 source "drivers/gpu/drm/atmel-hlcdc/Kconfig"
-1
drivers/gpu/drm/Makefile
··· 74 74 obj-$(CONFIG_DRM_MGAG200) += mgag200/ 75 75 obj-$(CONFIG_DRM_V3D) += v3d/ 76 76 obj-$(CONFIG_DRM_VC4) += vc4/ 77 - obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus/ 78 77 obj-$(CONFIG_DRM_SIS) += sis/ 79 78 obj-$(CONFIG_DRM_SAVAGE)+= savage/ 80 79 obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
+1 -1
drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
··· 129 129 static bool dsc_throughput_from_dpcd(int dpcd_throughput, int *throughput) 130 130 { 131 131 switch (dpcd_throughput) { 132 - case DP_DSC_THROUGHPUT_MODE_0_UPSUPPORTED: 132 + case DP_DSC_THROUGHPUT_MODE_0_UNSUPPORTED: 133 133 *throughput = 0; 134 134 break; 135 135 case DP_DSC_THROUGHPUT_MODE_0_170:
+5 -11
drivers/gpu/drm/arm/display/komeda/komeda_kms.c
··· 261 261 262 262 struct komeda_kms_dev *komeda_kms_attach(struct komeda_dev *mdev) 263 263 { 264 - struct komeda_kms_dev *kms = kzalloc(sizeof(*kms), GFP_KERNEL); 264 + struct komeda_kms_dev *kms; 265 265 struct drm_device *drm; 266 266 int err; 267 267 268 - if (!kms) 269 - return ERR_PTR(-ENOMEM); 268 + kms = devm_drm_dev_alloc(mdev->dev, &komeda_kms_driver, 269 + struct komeda_kms_dev, base); 270 + if (IS_ERR(kms)) 271 + return kms; 270 272 271 273 drm = &kms->base; 272 - err = drm_dev_init(drm, &komeda_kms_driver, mdev->dev); 273 - if (err) 274 - goto free_kms; 275 - drmm_add_final_kfree(drm, kms); 276 274 277 275 drm->dev_private = mdev; 278 276 ··· 327 329 drm_mode_config_cleanup(drm); 328 330 komeda_kms_cleanup_private_objs(kms); 329 331 drm->dev_private = NULL; 330 - drm_dev_put(drm); 331 - free_kms: 332 - kfree(kms); 333 332 return ERR_PTR(err); 334 333 } 335 334 ··· 343 348 drm_mode_config_cleanup(drm); 344 349 komeda_kms_cleanup_private_objs(kms); 345 350 drm->dev_private = NULL; 346 - drm_dev_put(drm); 347 351 }
+2 -1
drivers/gpu/drm/aspeed/aspeed_gfx.h
··· 5 5 #include <drm/drm_simple_kms_helper.h> 6 6 7 7 struct aspeed_gfx { 8 + struct drm_device drm; 8 9 void __iomem *base; 9 10 struct clk *clk; 10 11 struct reset_control *rst; ··· 13 12 14 13 struct drm_simple_display_pipe pipe; 15 14 struct drm_connector connector; 16 - struct drm_fbdev_cma *fbdev; 17 15 }; 16 + #define to_aspeed_gfx(x) container_of(x, struct aspeed_gfx, drm) 18 17 19 18 int aspeed_gfx_create_pipe(struct drm_device *drm); 20 19 int aspeed_gfx_create_output(struct drm_device *drm);
+1 -1
drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
··· 231 231 232 232 int aspeed_gfx_create_pipe(struct drm_device *drm) 233 233 { 234 - struct aspeed_gfx *priv = drm->dev_private; 234 + struct aspeed_gfx *priv = to_aspeed_gfx(drm); 235 235 236 236 return drm_simple_display_pipe_init(drm, &priv->pipe, &aspeed_gfx_funcs, 237 237 aspeed_gfx_formats,
+11 -20
drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
··· 77 77 static irqreturn_t aspeed_gfx_irq_handler(int irq, void *data) 78 78 { 79 79 struct drm_device *drm = data; 80 - struct aspeed_gfx *priv = drm->dev_private; 80 + struct aspeed_gfx *priv = to_aspeed_gfx(drm); 81 81 u32 reg; 82 82 83 83 reg = readl(priv->base + CRT_CTRL1); ··· 96 96 static int aspeed_gfx_load(struct drm_device *drm) 97 97 { 98 98 struct platform_device *pdev = to_platform_device(drm->dev); 99 - struct aspeed_gfx *priv; 99 + struct aspeed_gfx *priv = to_aspeed_gfx(drm); 100 100 struct resource *res; 101 101 int ret; 102 - 103 - priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 104 - if (!priv) 105 - return -ENOMEM; 106 - drm->dev_private = priv; 107 102 108 103 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 109 104 priv->base = devm_ioremap_resource(drm->dev, res); ··· 182 187 { 183 188 drm_kms_helper_poll_fini(drm); 184 189 drm_mode_config_cleanup(drm); 185 - 186 - drm->dev_private = NULL; 187 190 } 188 191 189 192 DEFINE_DRM_GEM_CMA_FOPS(fops); ··· 209 216 210 217 static int aspeed_gfx_probe(struct platform_device *pdev) 211 218 { 212 - struct drm_device *drm; 219 + struct aspeed_gfx *priv; 213 220 int ret; 214 221 215 - drm = drm_dev_alloc(&aspeed_gfx_driver, &pdev->dev); 216 - if (IS_ERR(drm)) 217 - return PTR_ERR(drm); 222 + priv = devm_drm_dev_alloc(&pdev->dev, &aspeed_gfx_driver, 223 + struct aspeed_gfx, drm); 224 + if (IS_ERR(priv)) 225 + return PTR_ERR(priv); 218 226 219 - ret = aspeed_gfx_load(drm); 227 + ret = aspeed_gfx_load(&priv->drm); 220 228 if (ret) 221 - goto err_free; 229 + return ret; 222 230 223 - ret = drm_dev_register(drm, 0); 231 + ret = drm_dev_register(&priv->drm, 0); 224 232 if (ret) 225 233 goto err_unload; 226 234 227 235 return 0; 228 236 229 237 err_unload: 230 - aspeed_gfx_unload(drm); 231 - err_free: 232 - drm_dev_put(drm); 238 + aspeed_gfx_unload(&priv->drm); 233 239 234 240 return ret; 235 241 } ··· 239 247 240 248 drm_dev_unregister(drm); 241 249 aspeed_gfx_unload(drm); 242 - 
drm_dev_put(drm); 243 250 244 251 return 0; 245 252 }
+1 -1
drivers/gpu/drm/aspeed/aspeed_gfx_out.c
··· 28 28 29 29 int aspeed_gfx_create_output(struct drm_device *drm) 30 30 { 31 - struct aspeed_gfx *priv = drm->dev_private; 31 + struct aspeed_gfx *priv = to_aspeed_gfx(drm); 32 32 int ret; 33 33 34 34 priv->connector.dpms = DRM_MODE_DPMS_OFF;
-4
drivers/gpu/drm/ast/ast_mode.c
··· 931 931 932 932 static const struct drm_crtc_funcs ast_crtc_funcs = { 933 933 .reset = ast_crtc_reset, 934 - .set_config = drm_crtc_helper_set_config, 935 934 .gamma_set = drm_atomic_helper_legacy_gamma_set, 936 935 .destroy = ast_crtc_destroy, 937 936 .set_config = drm_atomic_helper_set_config, ··· 1079 1080 { 1080 1081 struct ast_connector *ast_connector = to_ast_connector(connector); 1081 1082 ast_i2c_destroy(ast_connector->i2c); 1082 - drm_connector_unregister(connector); 1083 1083 drm_connector_cleanup(connector); 1084 1084 kfree(connector); 1085 1085 } ··· 1120 1122 1121 1123 connector->interlace_allowed = 0; 1122 1124 connector->doublescan_allowed = 0; 1123 - 1124 - drm_connector_register(connector); 1125 1125 1126 1126 connector->polled = DRM_CONNECTOR_POLL_CONNECT; 1127 1127
-1
drivers/gpu/drm/bochs/bochs_kms.c
··· 104 104 DRM_MODE_CONNECTOR_VIRTUAL); 105 105 drm_connector_helper_add(connector, 106 106 &bochs_connector_connector_helper_funcs); 107 - drm_connector_register(connector); 108 107 109 108 bochs_hw_load_edid(bochs); 110 109 if (bochs->edid) {
+10
drivers/gpu/drm/bridge/Kconfig
··· 27 27 Support Cadence DPI to DSI bridge. This is an internal 28 28 bridge and is meant to be directly embedded in a SoC. 29 29 30 + config DRM_CHRONTEL_CH7033 31 + tristate "Chrontel CH7033 Video Encoder" 32 + depends on OF 33 + select DRM_KMS_HELPER 34 + help 35 + Enable support for the Chrontel CH7033 VGA/DVI/HDMI Encoder, as 36 + found in the Dell Wyse 3020 thin client. 37 + 38 + If in doubt, say "N". 39 + 30 40 config DRM_DISPLAY_CONNECTOR 31 41 tristate "Display connector support" 32 42 depends on OF
+1
drivers/gpu/drm/bridge/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-$(CONFIG_DRM_CDNS_DSI) += cdns-dsi.o 3 + obj-$(CONFIG_DRM_CHRONTEL_CH7033) += chrontel-ch7033.o 3 4 obj-$(CONFIG_DRM_DISPLAY_CONNECTOR) += display-connector.o 4 5 obj-$(CONFIG_DRM_LVDS_CODEC) += lvds-codec.o 5 6 obj-$(CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW) += megachips-stdpxxxx-ge-b850v3-fw.o
+620
drivers/gpu/drm/bridge/chrontel-ch7033.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Chrontel CH7033 Video Encoder Driver 4 + * 5 + * Copyright (C) 2019,2020 Lubomir Rintel 6 + */ 7 + 8 + #include <linux/gpio/consumer.h> 9 + #include <linux/module.h> 10 + #include <linux/regmap.h> 11 + 12 + #include <drm/drm_atomic_helper.h> 13 + #include <drm/drm_bridge.h> 14 + #include <drm/drm_edid.h> 15 + #include <drm/drm_of.h> 16 + #include <drm/drm_print.h> 17 + #include <drm/drm_probe_helper.h> 18 + 19 + /* Page 0, Register 0x07 */ 20 + enum { 21 + DRI_PD = BIT(3), 22 + IO_PD = BIT(5), 23 + }; 24 + 25 + /* Page 0, Register 0x08 */ 26 + enum { 27 + DRI_PDDRI = GENMASK(7, 4), 28 + PDDAC = GENMASK(3, 1), 29 + PANEN = BIT(0), 30 + }; 31 + 32 + /* Page 0, Register 0x09 */ 33 + enum { 34 + DPD = BIT(7), 35 + GCKOFF = BIT(6), 36 + TV_BP = BIT(5), 37 + SCLPD = BIT(4), 38 + SDPD = BIT(3), 39 + VGA_PD = BIT(2), 40 + HDBKPD = BIT(1), 41 + HDMI_PD = BIT(0), 42 + }; 43 + 44 + /* Page 0, Register 0x0a */ 45 + enum { 46 + MEMINIT = BIT(7), 47 + MEMIDLE = BIT(6), 48 + MEMPD = BIT(5), 49 + STOP = BIT(4), 50 + LVDS_PD = BIT(3), 51 + HD_DVIB = BIT(2), 52 + HDCP_PD = BIT(1), 53 + MCU_PD = BIT(0), 54 + }; 55 + 56 + /* Page 0, Register 0x18 */ 57 + enum { 58 + IDF = GENMASK(7, 4), 59 + INTEN = BIT(3), 60 + SWAP = GENMASK(2, 0), 61 + }; 62 + 63 + enum { 64 + BYTE_SWAP_RGB = 0, 65 + BYTE_SWAP_RBG = 1, 66 + BYTE_SWAP_GRB = 2, 67 + BYTE_SWAP_GBR = 3, 68 + BYTE_SWAP_BRG = 4, 69 + BYTE_SWAP_BGR = 5, 70 + }; 71 + 72 + /* Page 0, Register 0x19 */ 73 + enum { 74 + HPO_I = BIT(5), 75 + VPO_I = BIT(4), 76 + DEPO_I = BIT(3), 77 + CRYS_EN = BIT(2), 78 + GCLKFREQ = GENMASK(2, 0), 79 + }; 80 + 81 + /* Page 0, Register 0x2e */ 82 + enum { 83 + HFLIP = BIT(7), 84 + VFLIP = BIT(6), 85 + DEPO_O = BIT(5), 86 + HPO_O = BIT(4), 87 + VPO_O = BIT(3), 88 + TE = GENMASK(2, 0), 89 + }; 90 + 91 + /* Page 0, Register 0x2b */ 92 + enum { 93 + SWAPS = GENMASK(7, 4), 94 + VFMT = GENMASK(3, 0), 95 + }; 96 + 97 + /* Page 0, Register 0x54 */ 98 + enum 
{ 99 + COMP_BP = BIT(7), 100 + DAC_EN_T = BIT(6), 101 + HWO_HDMI_HI = GENMASK(5, 3), 102 + HOO_HDMI_HI = GENMASK(2, 0), 103 + }; 104 + 105 + /* Page 0, Register 0x57 */ 106 + enum { 107 + FLDSEN = BIT(7), 108 + VWO_HDMI_HI = GENMASK(5, 3), 109 + VOO_HDMI_HI = GENMASK(2, 0), 110 + }; 111 + 112 + /* Page 0, Register 0x7e */ 113 + enum { 114 + HDMI_LVDS_SEL = BIT(7), 115 + DE_GEN = BIT(6), 116 + PWM_INDEX_HI = BIT(5), 117 + USE_DE = BIT(4), 118 + R_INT = GENMASK(3, 0), 119 + }; 120 + 121 + /* Page 1, Register 0x07 */ 122 + enum { 123 + BPCKSEL = BIT(7), 124 + DRI_CMFB_EN = BIT(6), 125 + CEC_PUEN = BIT(5), 126 + CEC_T = BIT(3), 127 + CKINV = BIT(2), 128 + CK_TVINV = BIT(1), 129 + DRI_CKS2 = BIT(0), 130 + }; 131 + 132 + /* Page 1, Register 0x08 */ 133 + enum { 134 + DACG = BIT(6), 135 + DACKTST = BIT(5), 136 + DEDGEB = BIT(4), 137 + SYO = BIT(3), 138 + DRI_IT_LVDS = GENMASK(2, 1), 139 + DISPON = BIT(0), 140 + }; 141 + 142 + /* Page 1, Register 0x0c */ 143 + enum { 144 + DRI_PLL_CP = GENMASK(7, 6), 145 + DRI_PLL_DIVSEL = BIT(5), 146 + DRI_PLL_N1_1 = BIT(4), 147 + DRI_PLL_N1_0 = BIT(3), 148 + DRI_PLL_N3_1 = BIT(2), 149 + DRI_PLL_N3_0 = BIT(1), 150 + DRI_PLL_CKTSTEN = BIT(0), 151 + }; 152 + 153 + /* Page 1, Register 0x6b */ 154 + enum { 155 + VCO3CS = GENMASK(7, 6), 156 + ICPGBK2_0 = GENMASK(5, 3), 157 + DRI_VCO357SC = BIT(2), 158 + PDPLL2 = BIT(1), 159 + DRI_PD_SER = BIT(0), 160 + }; 161 + 162 + /* Page 1, Register 0x6c */ 163 + enum { 164 + PLL2N11 = GENMASK(7, 4), 165 + PLL2N5_4 = BIT(3), 166 + PLL2N5_TOP = BIT(2), 167 + DRI_PLL_PD = BIT(1), 168 + PD_I2CM = BIT(0), 169 + }; 170 + 171 + /* Page 3, Register 0x28 */ 172 + enum { 173 + DIFF_EN = GENMASK(7, 6), 174 + CORREC_EN = GENMASK(5, 4), 175 + VGACLK_BP = BIT(3), 176 + HM_LV_SEL = BIT(2), 177 + HD_VGA_SEL = BIT(1), 178 + }; 179 + 180 + /* Page 3, Register 0x2a */ 181 + enum { 182 + LVDSCLK_BP = BIT(7), 183 + HDTVCLK_BP = BIT(6), 184 + HDMICLK_BP = BIT(5), 185 + HDTV_BP = BIT(4), 186 + HDMI_BP = BIT(3), 187 + THRWL = 
GENMASK(2, 0), 188 + }; 189 + 190 + /* Page 4, Register 0x52 */ 191 + enum { 192 + PGM_ARSTB = BIT(7), 193 + MCU_ARSTB = BIT(6), 194 + MCU_RETB = BIT(2), 195 + RESETIB = BIT(1), 196 + RESETDB = BIT(0), 197 + }; 198 + 199 + struct ch7033_priv { 200 + struct regmap *regmap; 201 + struct drm_bridge *next_bridge; 202 + struct drm_bridge bridge; 203 + struct drm_connector connector; 204 + }; 205 + 206 + #define conn_to_ch7033_priv(x) \ 207 + container_of(x, struct ch7033_priv, connector) 208 + #define bridge_to_ch7033_priv(x) \ 209 + container_of(x, struct ch7033_priv, bridge) 210 + 211 + 212 + static enum drm_connector_status ch7033_connector_detect( 213 + struct drm_connector *connector, bool force) 214 + { 215 + struct ch7033_priv *priv = conn_to_ch7033_priv(connector); 216 + 217 + return drm_bridge_detect(priv->next_bridge); 218 + } 219 + 220 + static const struct drm_connector_funcs ch7033_connector_funcs = { 221 + .reset = drm_atomic_helper_connector_reset, 222 + .fill_modes = drm_helper_probe_single_connector_modes, 223 + .detect = ch7033_connector_detect, 224 + .destroy = drm_connector_cleanup, 225 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 226 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 227 + }; 228 + 229 + static int ch7033_connector_get_modes(struct drm_connector *connector) 230 + { 231 + struct ch7033_priv *priv = conn_to_ch7033_priv(connector); 232 + struct edid *edid; 233 + int ret; 234 + 235 + edid = drm_bridge_get_edid(priv->next_bridge, connector); 236 + drm_connector_update_edid_property(connector, edid); 237 + if (edid) { 238 + ret = drm_add_edid_modes(connector, edid); 239 + kfree(edid); 240 + } else { 241 + ret = drm_add_modes_noedid(connector, 1920, 1080); 242 + drm_set_preferred_mode(connector, 1024, 768); 243 + } 244 + 245 + return ret; 246 + } 247 + 248 + static struct drm_encoder *ch7033_connector_best_encoder( 249 + struct drm_connector *connector) 250 + { 251 + struct ch7033_priv *priv = 
conn_to_ch7033_priv(connector); 252 + 253 + return priv->bridge.encoder; 254 + } 255 + 256 + static const struct drm_connector_helper_funcs ch7033_connector_helper_funcs = { 257 + .get_modes = ch7033_connector_get_modes, 258 + .best_encoder = ch7033_connector_best_encoder, 259 + }; 260 + 261 + static void ch7033_hpd_event(void *arg, enum drm_connector_status status) 262 + { 263 + struct ch7033_priv *priv = arg; 264 + 265 + if (priv->bridge.dev) 266 + drm_helper_hpd_irq_event(priv->connector.dev); 267 + } 268 + 269 + static int ch7033_bridge_attach(struct drm_bridge *bridge, 270 + enum drm_bridge_attach_flags flags) 271 + { 272 + struct ch7033_priv *priv = bridge_to_ch7033_priv(bridge); 273 + struct drm_connector *connector = &priv->connector; 274 + int ret; 275 + 276 + ret = drm_bridge_attach(bridge->encoder, priv->next_bridge, bridge, 277 + DRM_BRIDGE_ATTACH_NO_CONNECTOR); 278 + if (ret) 279 + return ret; 280 + 281 + if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) 282 + return 0; 283 + 284 + if (priv->next_bridge->ops & DRM_BRIDGE_OP_DETECT) { 285 + connector->polled = DRM_CONNECTOR_POLL_HPD; 286 + } else { 287 + connector->polled = DRM_CONNECTOR_POLL_CONNECT | 288 + DRM_CONNECTOR_POLL_DISCONNECT; 289 + } 290 + 291 + if (priv->next_bridge->ops & DRM_BRIDGE_OP_HPD) { 292 + drm_bridge_hpd_enable(priv->next_bridge, ch7033_hpd_event, 293 + priv); 294 + } 295 + 296 + drm_connector_helper_add(connector, 297 + &ch7033_connector_helper_funcs); 298 + ret = drm_connector_init_with_ddc(bridge->dev, &priv->connector, 299 + &ch7033_connector_funcs, 300 + priv->next_bridge->type, 301 + priv->next_bridge->ddc); 302 + if (ret) { 303 + DRM_ERROR("Failed to initialize connector\n"); 304 + return ret; 305 + } 306 + 307 + return drm_connector_attach_encoder(&priv->connector, bridge->encoder); 308 + } 309 + 310 + static void ch7033_bridge_detach(struct drm_bridge *bridge) 311 + { 312 + struct ch7033_priv *priv = bridge_to_ch7033_priv(bridge); 313 + 314 + if (priv->next_bridge->ops & 
DRM_BRIDGE_OP_HPD) 315 + drm_bridge_hpd_disable(priv->next_bridge); 316 + drm_connector_cleanup(&priv->connector); 317 + } 318 + 319 + static enum drm_mode_status ch7033_bridge_mode_valid(struct drm_bridge *bridge, 320 + const struct drm_display_mode *mode) 321 + { 322 + if (mode->clock > 165000) 323 + return MODE_CLOCK_HIGH; 324 + if (mode->hdisplay >= 1920) 325 + return MODE_BAD_HVALUE; 326 + if (mode->vdisplay >= 1080) 327 + return MODE_BAD_VVALUE; 328 + return MODE_OK; 329 + } 330 + 331 + static void ch7033_bridge_disable(struct drm_bridge *bridge) 332 + { 333 + struct ch7033_priv *priv = bridge_to_ch7033_priv(bridge); 334 + 335 + regmap_write(priv->regmap, 0x03, 0x04); 336 + regmap_update_bits(priv->regmap, 0x52, RESETDB, 0x00); 337 + } 338 + 339 + static void ch7033_bridge_enable(struct drm_bridge *bridge) 340 + { 341 + struct ch7033_priv *priv = bridge_to_ch7033_priv(bridge); 342 + 343 + regmap_write(priv->regmap, 0x03, 0x04); 344 + regmap_update_bits(priv->regmap, 0x52, RESETDB, RESETDB); 345 + } 346 + 347 + static void ch7033_bridge_mode_set(struct drm_bridge *bridge, 348 + const struct drm_display_mode *mode, 349 + const struct drm_display_mode *adjusted_mode) 350 + { 351 + struct ch7033_priv *priv = bridge_to_ch7033_priv(bridge); 352 + int hbporch = mode->hsync_start - mode->hdisplay; 353 + int hsynclen = mode->hsync_end - mode->hsync_start; 354 + int vbporch = mode->vsync_start - mode->vdisplay; 355 + int vsynclen = mode->vsync_end - mode->vsync_start; 356 + 357 + /* 358 + * Page 4 359 + */ 360 + regmap_write(priv->regmap, 0x03, 0x04); 361 + 362 + /* Turn everything off to set all the registers to their defaults. */ 363 + regmap_write(priv->regmap, 0x52, 0x00); 364 + /* Bring I/O block up. */ 365 + regmap_write(priv->regmap, 0x52, RESETIB); 366 + 367 + /* 368 + * Page 0 369 + */ 370 + regmap_write(priv->regmap, 0x03, 0x00); 371 + 372 + /* Bring up parts we need from the power down. 
*/ 373 + regmap_update_bits(priv->regmap, 0x07, DRI_PD | IO_PD, 0); 374 + regmap_update_bits(priv->regmap, 0x08, DRI_PDDRI | PDDAC | PANEN, 0); 375 + regmap_update_bits(priv->regmap, 0x09, DPD | GCKOFF | 376 + HDMI_PD | VGA_PD, 0); 377 + regmap_update_bits(priv->regmap, 0x0a, HD_DVIB, 0); 378 + 379 + /* Horizontal input timing. */ 380 + regmap_write(priv->regmap, 0x0b, (mode->htotal >> 8) << 3 | 381 + (mode->hdisplay >> 8)); 382 + regmap_write(priv->regmap, 0x0c, mode->hdisplay); 383 + regmap_write(priv->regmap, 0x0d, mode->htotal); 384 + regmap_write(priv->regmap, 0x0e, (hsynclen >> 8) << 3 | 385 + (hbporch >> 8)); 386 + regmap_write(priv->regmap, 0x0f, hbporch); 387 + regmap_write(priv->regmap, 0x10, hsynclen); 388 + 389 + /* Vertical input timing. */ 390 + regmap_write(priv->regmap, 0x11, (mode->vtotal >> 8) << 3 | 391 + (mode->vdisplay >> 8)); 392 + regmap_write(priv->regmap, 0x12, mode->vdisplay); 393 + regmap_write(priv->regmap, 0x13, mode->vtotal); 394 + regmap_write(priv->regmap, 0x14, ((vsynclen >> 8) << 3) | 395 + (vbporch >> 8)); 396 + regmap_write(priv->regmap, 0x15, vbporch); 397 + regmap_write(priv->regmap, 0x16, vsynclen); 398 + 399 + /* Input color swap. */ 400 + regmap_update_bits(priv->regmap, 0x18, SWAP, BYTE_SWAP_BGR); 401 + 402 + /* Input clock and sync polarity. */ 403 + regmap_update_bits(priv->regmap, 0x19, 0x1, mode->clock >> 16); 404 + regmap_update_bits(priv->regmap, 0x19, HPO_I | VPO_I | GCLKFREQ, 405 + (mode->flags & DRM_MODE_FLAG_PHSYNC) ? HPO_I : 0 | 406 + (mode->flags & DRM_MODE_FLAG_PVSYNC) ? VPO_I : 0 | 407 + mode->clock >> 16); 408 + regmap_write(priv->regmap, 0x1a, mode->clock >> 8); 409 + regmap_write(priv->regmap, 0x1b, mode->clock); 410 + 411 + /* Horizontal output timing. */ 412 + regmap_write(priv->regmap, 0x1f, (mode->htotal >> 8) << 3 | 413 + (mode->hdisplay >> 8)); 414 + regmap_write(priv->regmap, 0x20, mode->hdisplay); 415 + regmap_write(priv->regmap, 0x21, mode->htotal); 416 + 417 + /* Vertical output timing. 
*/ 418 + regmap_write(priv->regmap, 0x25, (mode->vtotal >> 8) << 3 | 419 + (mode->vdisplay >> 8)); 420 + regmap_write(priv->regmap, 0x26, mode->vdisplay); 421 + regmap_write(priv->regmap, 0x27, mode->vtotal); 422 + 423 + /* VGA channel bypass */ 424 + regmap_update_bits(priv->regmap, 0x2b, VFMT, 9); 425 + 426 + /* Output sync polarity. */ 427 + regmap_update_bits(priv->regmap, 0x2e, HPO_O | VPO_O, 428 + (mode->flags & DRM_MODE_FLAG_PHSYNC) ? HPO_O : 0 | 429 + (mode->flags & DRM_MODE_FLAG_PVSYNC) ? VPO_O : 0); 430 + 431 + /* HDMI horizontal output timing. */ 432 + regmap_update_bits(priv->regmap, 0x54, HWO_HDMI_HI | HOO_HDMI_HI, 433 + (hsynclen >> 8) << 3 | 434 + (hbporch >> 8)); 435 + regmap_write(priv->regmap, 0x55, hbporch); 436 + regmap_write(priv->regmap, 0x56, hsynclen); 437 + 438 + /* HDMI vertical output timing. */ 439 + regmap_update_bits(priv->regmap, 0x57, VWO_HDMI_HI | VOO_HDMI_HI, 440 + (vsynclen >> 8) << 3 | 441 + (vbporch >> 8)); 442 + regmap_write(priv->regmap, 0x58, vbporch); 443 + regmap_write(priv->regmap, 0x59, vsynclen); 444 + 445 + /* Pick HDMI, not LVDS. */ 446 + regmap_update_bits(priv->regmap, 0x7e, HDMI_LVDS_SEL, HDMI_LVDS_SEL); 447 + 448 + /* 449 + * Page 1 450 + */ 451 + regmap_write(priv->regmap, 0x03, 0x01); 452 + 453 + /* No idea what these do, but VGA is wobbly and blinky without them. 
*/ 454 + regmap_update_bits(priv->regmap, 0x07, CKINV, CKINV); 455 + regmap_update_bits(priv->regmap, 0x08, DISPON, DISPON); 456 + 457 + /* DRI PLL */ 458 + regmap_update_bits(priv->regmap, 0x0c, DRI_PLL_DIVSEL, DRI_PLL_DIVSEL); 459 + if (mode->clock <= 40000) { 460 + regmap_update_bits(priv->regmap, 0x0c, DRI_PLL_N1_1 | 461 + DRI_PLL_N1_0 | 462 + DRI_PLL_N3_1 | 463 + DRI_PLL_N3_0, 464 + 0); 465 + } else if (mode->clock < 80000) { 466 + regmap_update_bits(priv->regmap, 0x0c, DRI_PLL_N1_1 | 467 + DRI_PLL_N1_0 | 468 + DRI_PLL_N3_1 | 469 + DRI_PLL_N3_0, 470 + DRI_PLL_N3_0 | 471 + DRI_PLL_N1_0); 472 + } else { 473 + regmap_update_bits(priv->regmap, 0x0c, DRI_PLL_N1_1 | 474 + DRI_PLL_N1_0 | 475 + DRI_PLL_N3_1 | 476 + DRI_PLL_N3_0, 477 + DRI_PLL_N3_1 | 478 + DRI_PLL_N1_1); 479 + } 480 + 481 + /* This seems to be color calibration for VGA. */ 482 + regmap_write(priv->regmap, 0x64, 0x29); /* LSB Blue */ 483 + regmap_write(priv->regmap, 0x65, 0x29); /* LSB Green */ 484 + regmap_write(priv->regmap, 0x66, 0x29); /* LSB Red */ 485 + regmap_write(priv->regmap, 0x67, 0x00); /* MSB Blue */ 486 + regmap_write(priv->regmap, 0x68, 0x00); /* MSB Green */ 487 + regmap_write(priv->regmap, 0x69, 0x00); /* MSB Red */ 488 + 489 + regmap_update_bits(priv->regmap, 0x6b, DRI_PD_SER, 0x00); 490 + regmap_update_bits(priv->regmap, 0x6c, DRI_PLL_PD, 0x00); 491 + 492 + /* 493 + * Page 3 494 + */ 495 + regmap_write(priv->regmap, 0x03, 0x03); 496 + 497 + /* More bypasses and apparently another HDMI/LVDS selector. */ 498 + regmap_update_bits(priv->regmap, 0x28, VGACLK_BP | HM_LV_SEL, 499 + VGACLK_BP | HM_LV_SEL); 500 + regmap_update_bits(priv->regmap, 0x2a, HDMICLK_BP | HDMI_BP, 501 + HDMICLK_BP | HDMI_BP); 502 + 503 + /* 504 + * Page 4 505 + */ 506 + regmap_write(priv->regmap, 0x03, 0x04); 507 + 508 + /* Output clock. 
*/ 509 + regmap_write(priv->regmap, 0x10, mode->clock >> 16); 510 + regmap_write(priv->regmap, 0x11, mode->clock >> 8); 511 + regmap_write(priv->regmap, 0x12, mode->clock); 512 + } 513 + 514 + static const struct drm_bridge_funcs ch7033_bridge_funcs = { 515 + .attach = ch7033_bridge_attach, 516 + .detach = ch7033_bridge_detach, 517 + .mode_valid = ch7033_bridge_mode_valid, 518 + .disable = ch7033_bridge_disable, 519 + .enable = ch7033_bridge_enable, 520 + .mode_set = ch7033_bridge_mode_set, 521 + }; 522 + 523 + static const struct regmap_config ch7033_regmap_config = { 524 + .reg_bits = 8, 525 + .val_bits = 8, 526 + .max_register = 0x7f, 527 + }; 528 + 529 + static int ch7033_probe(struct i2c_client *client, 530 + const struct i2c_device_id *id) 531 + { 532 + struct device *dev = &client->dev; 533 + struct ch7033_priv *priv; 534 + unsigned int val; 535 + int ret; 536 + 537 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 538 + if (!priv) 539 + return -ENOMEM; 540 + 541 + dev_set_drvdata(dev, priv); 542 + 543 + ret = drm_of_find_panel_or_bridge(dev->of_node, 1, -1, NULL, 544 + &priv->next_bridge); 545 + if (ret) 546 + return ret; 547 + 548 + priv->regmap = devm_regmap_init_i2c(client, &ch7033_regmap_config); 549 + if (IS_ERR(priv->regmap)) { 550 + dev_err(&client->dev, "regmap init failed\n"); 551 + return PTR_ERR(priv->regmap); 552 + } 553 + 554 + ret = regmap_read(priv->regmap, 0x00, &val); 555 + if (ret < 0) { 556 + dev_err(&client->dev, "error reading the model id: %d\n", ret); 557 + return ret; 558 + } 559 + if ((val & 0xf7) != 0x56) { 560 + dev_err(&client->dev, "the device is not a ch7033\n"); 561 + return -ENODEV; 562 + } 563 + 564 + regmap_write(priv->regmap, 0x03, 0x04); 565 + ret = regmap_read(priv->regmap, 0x51, &val); 566 + if (ret < 0) { 567 + dev_err(&client->dev, "error reading the model id: %d\n", ret); 568 + return ret; 569 + } 570 + if ((val & 0x0f) != 3) { 571 + dev_err(&client->dev, "unknown revision %u\n", val); 572 + return -ENODEV; 573 
+ } 574 + 575 + INIT_LIST_HEAD(&priv->bridge.list); 576 + priv->bridge.funcs = &ch7033_bridge_funcs; 577 + priv->bridge.of_node = dev->of_node; 578 + drm_bridge_add(&priv->bridge); 579 + 580 + dev_info(dev, "Chrontel CH7033 Video Encoder\n"); 581 + return 0; 582 + } 583 + 584 + static int ch7033_remove(struct i2c_client *client) 585 + { 586 + struct device *dev = &client->dev; 587 + struct ch7033_priv *priv = dev_get_drvdata(dev); 588 + 589 + drm_bridge_remove(&priv->bridge); 590 + 591 + return 0; 592 + } 593 + 594 + static const struct of_device_id ch7033_dt_ids[] = { 595 + { .compatible = "chrontel,ch7033", }, 596 + { } 597 + }; 598 + MODULE_DEVICE_TABLE(of, ch7033_dt_ids); 599 + 600 + static const struct i2c_device_id ch7033_ids[] = { 601 + { "ch7033", 0 }, 602 + { } 603 + }; 604 + MODULE_DEVICE_TABLE(i2c, ch7033_ids); 605 + 606 + static struct i2c_driver ch7033_driver = { 607 + .probe = ch7033_probe, 608 + .remove = ch7033_remove, 609 + .driver = { 610 + .name = "ch7033", 611 + .of_match_table = of_match_ptr(ch7033_dt_ids), 612 + }, 613 + .id_table = ch7033_ids, 614 + }; 615 + 616 + module_i2c_driver(ch7033_driver); 617 + 618 + MODULE_AUTHOR("Lubomir Rintel <lkundrak@v3.sk>"); 619 + MODULE_DESCRIPTION("Chrontel CH7033 Video Encoder Driver"); 620 + MODULE_LICENSE("GPL v2");
+3 -3
drivers/gpu/drm/bridge/panel.c
··· 166 166 * 167 167 * The connector type is set to @panel->connector_type, which must be set to a 168 168 * known type. Calling this function with a panel whose connector type is 169 - * DRM_MODE_CONNECTOR_Unknown will return NULL. 169 + * DRM_MODE_CONNECTOR_Unknown will return ERR_PTR(-EINVAL). 170 170 * 171 171 * See devm_drm_panel_bridge_add() for an automatically managed version of this 172 172 * function. ··· 174 174 struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel) 175 175 { 176 176 if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown)) 177 - return NULL; 177 + return ERR_PTR(-EINVAL); 178 178 179 179 return drm_panel_bridge_add_typed(panel, panel->connector_type); 180 180 } ··· 265 265 struct drm_panel *panel) 266 266 { 267 267 if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown)) 268 - return NULL; 268 + return ERR_PTR(-EINVAL); 269 269 270 270 return devm_drm_panel_bridge_add_typed(dev, panel, 271 271 panel->connector_type);
-2
drivers/gpu/drm/bridge/parade-ps8640.c
··· 268 268 if (!panel) 269 269 return -ENODEV; 270 270 271 - panel->connector_type = DRM_MODE_CONNECTOR_eDP; 272 - 273 271 ps_bridge->panel_bridge = devm_drm_panel_bridge_add(dev, panel); 274 272 if (IS_ERR(ps_bridge->panel_bridge)) 275 273 return PTR_ERR(ps_bridge->panel_bridge);
+3 -1
drivers/gpu/drm/bridge/tc358768.c
··· 178 178 179 179 static void tc358768_write(struct tc358768_priv *priv, u32 reg, u32 val) 180 180 { 181 + /* work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */ 182 + int tmpval = val; 181 183 size_t count = 2; 182 184 183 185 if (priv->error) ··· 189 187 if (reg < 0x100 || reg >= 0x600) 190 188 count = 1; 191 189 192 - priv->error = regmap_bulk_write(priv->regmap, reg, &val, count); 190 + priv->error = regmap_bulk_write(priv->regmap, reg, &tmpval, count); 193 191 } 194 192 195 193 static void tc358768_read(struct tc358768_priv *priv, u32 reg, u32 *val)
-19
drivers/gpu/drm/cirrus/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - config DRM_CIRRUS_QEMU 3 - tristate "Cirrus driver for QEMU emulated device" 4 - depends on DRM && PCI && MMU 5 - select DRM_KMS_HELPER 6 - select DRM_GEM_SHMEM_HELPER 7 - help 8 - This is a KMS driver for emulated cirrus device in qemu. 9 - It is *NOT* intended for real cirrus devices. This requires 10 - the modesetting userspace X.org driver. 11 - 12 - Cirrus is obsolete, the hardware was designed in the 90ies 13 - and can't keep up with todays needs. More background: 14 - https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 15 - 16 - Better alternatives are: 17 - - stdvga (DRM_BOCHS, qemu -vga std, default in qemu 2.2+) 18 - - qxl (DRM_QXL, qemu -vga qxl, works best with spice) 19 - - virtio (DRM_VIRTIO_GPU), qemu -vga virtio)
-2
drivers/gpu/drm/cirrus/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus.o
+9 -13
drivers/gpu/drm/cirrus/cirrus.c drivers/gpu/drm/tiny/cirrus.c
··· 59 59 void __iomem *mmio; 60 60 }; 61 61 62 + #define to_cirrus(_dev) container_of(_dev, struct cirrus_device, dev) 63 + 62 64 /* ------------------------------------------------------------------ */ 63 65 /* 64 66 * The meat of this driver. The core passes us a mode and we have to program ··· 313 311 static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, 314 312 struct drm_rect *rect) 315 313 { 316 - struct cirrus_device *cirrus = fb->dev->dev_private; 314 + struct cirrus_device *cirrus = to_cirrus(fb->dev); 317 315 void *vmap; 318 316 int idx, ret; 319 317 ··· 438 436 struct drm_crtc_state *crtc_state, 439 437 struct drm_plane_state *plane_state) 440 438 { 441 - struct cirrus_device *cirrus = pipe->crtc.dev->dev_private; 439 + struct cirrus_device *cirrus = to_cirrus(pipe->crtc.dev); 442 440 443 441 cirrus_mode_set(cirrus, &crtc_state->mode, plane_state->fb); 444 442 cirrus_fb_blit_fullscreen(plane_state->fb); ··· 447 445 static void cirrus_pipe_update(struct drm_simple_display_pipe *pipe, 448 446 struct drm_plane_state *old_state) 449 447 { 450 - struct cirrus_device *cirrus = pipe->crtc.dev->dev_private; 448 + struct cirrus_device *cirrus = to_cirrus(pipe->crtc.dev); 451 449 struct drm_plane_state *state = pipe->plane.state; 452 450 struct drm_crtc *crtc = &pipe->crtc; 453 451 struct drm_rect rect; ··· 569 567 return ret; 570 568 571 569 ret = -ENOMEM; 572 - cirrus = kzalloc(sizeof(*cirrus), GFP_KERNEL); 573 - if (cirrus == NULL) 574 - return ret; 570 + cirrus = devm_drm_dev_alloc(&pdev->dev, &cirrus_driver, 571 + struct cirrus_device, dev); 572 + if (IS_ERR(cirrus)) 573 + return PTR_ERR(cirrus); 575 574 576 575 dev = &cirrus->dev; 577 - ret = devm_drm_dev_init(&pdev->dev, dev, &cirrus_driver); 578 - if (ret) { 579 - kfree(cirrus); 580 - return ret; 581 - } 582 - dev->dev_private = cirrus; 583 - drmm_add_final_kfree(dev, cirrus); 584 576 585 577 cirrus->vram = devm_ioremap(&pdev->dev, pci_resource_start(pdev, 0), 586 578 pci_resource_len(pdev, 0));
+41 -96
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1197 1197 1198 1198 /* remove from q */ 1199 1199 if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED || 1200 - txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND) { 1200 + txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND) 1201 1201 list_del(&txmsg->next); 1202 - } 1203 - 1204 - if (txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND || 1205 - txmsg->state == DRM_DP_SIDEBAND_TX_SENT) { 1206 - mstb->tx_slots[txmsg->seqno] = NULL; 1207 - } 1208 1202 } 1209 1203 out: 1210 1204 if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) { ··· 1208 1214 } 1209 1215 mutex_unlock(&mgr->qlock); 1210 1216 1217 + drm_dp_mst_kick_tx(mgr); 1211 1218 return ret; 1212 1219 } 1213 1220 ··· 2677 2682 struct drm_dp_mst_branch *mstb = txmsg->dst; 2678 2683 u8 req_type; 2679 2684 2680 - /* both msg slots are full */ 2681 - if (txmsg->seqno == -1) { 2682 - if (mstb->tx_slots[0] && mstb->tx_slots[1]) { 2683 - DRM_DEBUG_KMS("%s: failed to find slot\n", __func__); 2684 - return -EAGAIN; 2685 - } 2686 - if (mstb->tx_slots[0] == NULL && mstb->tx_slots[1] == NULL) { 2687 - txmsg->seqno = mstb->last_seqno; 2688 - mstb->last_seqno ^= 1; 2689 - } else if (mstb->tx_slots[0] == NULL) 2690 - txmsg->seqno = 0; 2691 - else 2692 - txmsg->seqno = 1; 2693 - mstb->tx_slots[txmsg->seqno] = txmsg; 2694 - } 2695 - 2696 2685 req_type = txmsg->msg[0] & 0x7f; 2697 2686 if (req_type == DP_CONNECTION_STATUS_NOTIFY || 2698 2687 req_type == DP_RESOURCE_STATUS_NOTIFY) ··· 2688 2709 hdr->lcr = mstb->lct - 1; 2689 2710 if (mstb->lct > 1) 2690 2711 memcpy(hdr->rad, mstb->rad, mstb->lct / 2); 2691 - hdr->seqno = txmsg->seqno; 2712 + 2692 2713 return 0; 2693 2714 } 2694 2715 /* ··· 2703 2724 int len, space, idx, tosend; 2704 2725 int ret; 2705 2726 2727 + if (txmsg->state == DRM_DP_SIDEBAND_TX_SENT) 2728 + return 0; 2729 + 2706 2730 memset(&hdr, 0, sizeof(struct drm_dp_sideband_msg_hdr)); 2707 2731 2708 - if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED) { 2709 - txmsg->seqno = -1; 2732 + if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED) 
2710 2733 txmsg->state = DRM_DP_SIDEBAND_TX_START_SEND; 2711 - } 2712 2734 2713 - /* make hdr from dst mst - for replies use seqno 2714 - otherwise assign one */ 2735 + /* make hdr from dst mst */ 2715 2736 ret = set_hdr_from_dst_qlock(&hdr, txmsg); 2716 2737 if (ret < 0) 2717 2738 return ret; ··· 2764 2785 if (list_empty(&mgr->tx_msg_downq)) 2765 2786 return; 2766 2787 2767 - txmsg = list_first_entry(&mgr->tx_msg_downq, struct drm_dp_sideband_msg_tx, next); 2788 + txmsg = list_first_entry(&mgr->tx_msg_downq, 2789 + struct drm_dp_sideband_msg_tx, next); 2768 2790 ret = process_single_tx_qlock(mgr, txmsg, false); 2769 - if (ret == 1) { 2770 - /* txmsg is sent it should be in the slots now */ 2771 - list_del(&txmsg->next); 2772 - } else if (ret) { 2791 + if (ret < 0) { 2773 2792 DRM_DEBUG_KMS("failed to send msg in q %d\n", ret); 2774 2793 list_del(&txmsg->next); 2775 - if (txmsg->seqno != -1) 2776 - txmsg->dst->tx_slots[txmsg->seqno] = NULL; 2777 2794 txmsg->state = DRM_DP_SIDEBAND_TX_TIMEOUT; 2778 2795 wake_up_all(&mgr->tx_waitq); 2779 - } 2780 - } 2781 - 2782 - /* called holding qlock */ 2783 - static void process_single_up_tx_qlock(struct drm_dp_mst_topology_mgr *mgr, 2784 - struct drm_dp_sideband_msg_tx *txmsg) 2785 - { 2786 - int ret; 2787 - 2788 - /* construct a chunk from the first msg in the tx_msg queue */ 2789 - ret = process_single_tx_qlock(mgr, txmsg, true); 2790 - 2791 - if (ret != 1) 2792 - DRM_DEBUG_KMS("failed to send msg in q %d\n", ret); 2793 - 2794 - if (txmsg->seqno != -1) { 2795 - WARN_ON((unsigned int)txmsg->seqno > 2796 - ARRAY_SIZE(txmsg->dst->tx_slots)); 2797 - txmsg->dst->tx_slots[txmsg->seqno] = NULL; 2798 2796 } 2799 2797 } 2800 2798 ··· 3407 3451 3408 3452 static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr, 3409 3453 struct drm_dp_mst_branch *mstb, 3410 - int req_type, int seqno, bool broadcast) 3454 + int req_type, bool broadcast) 3411 3455 { 3412 3456 struct drm_dp_sideband_msg_tx *txmsg; 3413 3457 ··· 3416 3460 
return -ENOMEM; 3417 3461 3418 3462 txmsg->dst = mstb; 3419 - txmsg->seqno = seqno; 3420 3463 drm_dp_encode_up_ack_reply(txmsg, req_type); 3421 3464 3422 3465 mutex_lock(&mgr->qlock); 3423 - 3424 - process_single_up_tx_qlock(mgr, txmsg); 3425 - 3466 + /* construct a chunk from the first msg in the tx_msg queue */ 3467 + process_single_tx_qlock(mgr, txmsg, true); 3426 3468 mutex_unlock(&mgr->qlock); 3427 3469 3428 3470 kfree(txmsg); ··· 3645 3691 } 3646 3692 EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume); 3647 3693 3648 - static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up, 3649 - struct drm_dp_mst_branch **mstb, int *seqno) 3694 + static bool 3695 + drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up, 3696 + struct drm_dp_mst_branch **mstb) 3650 3697 { 3651 3698 int len; 3652 3699 u8 replyblock[32]; ··· 3655 3700 int ret; 3656 3701 u8 hdrlen; 3657 3702 struct drm_dp_sideband_msg_hdr hdr; 3658 - struct drm_dp_sideband_msg_rx *msg; 3703 + struct drm_dp_sideband_msg_rx *msg = 3704 + up ? &mgr->up_req_recv : &mgr->down_rep_recv; 3659 3705 int basereg = up ? 
DP_SIDEBAND_MSG_UP_REQ_BASE : 3660 3706 DP_SIDEBAND_MSG_DOWN_REP_BASE; 3661 3707 3662 3708 if (!up) 3663 3709 *mstb = NULL; 3664 - *seqno = -1; 3665 3710 3666 3711 len = min(mgr->max_dpcd_transaction_bytes, 16); 3667 3712 ret = drm_dp_dpcd_read(mgr->aux, basereg, replyblock, len); ··· 3678 3723 return false; 3679 3724 } 3680 3725 3681 - *seqno = hdr.seqno; 3682 - 3683 - if (up) { 3684 - msg = &mgr->up_req_recv; 3685 - } else { 3726 + if (!up) { 3686 3727 /* Caller is responsible for giving back this reference */ 3687 3728 *mstb = drm_dp_get_mst_branch_device(mgr, hdr.lct, hdr.rad); 3688 3729 if (!*mstb) { ··· 3686 3735 hdr.lct); 3687 3736 return false; 3688 3737 } 3689 - msg = &(*mstb)->down_rep_recv[hdr.seqno]; 3690 3738 } 3691 3739 3692 3740 if (!drm_dp_sideband_msg_set_header(msg, &hdr, hdrlen)) { ··· 3729 3779 { 3730 3780 struct drm_dp_sideband_msg_tx *txmsg; 3731 3781 struct drm_dp_mst_branch *mstb = NULL; 3732 - struct drm_dp_sideband_msg_rx *msg = NULL; 3733 - int seqno = -1; 3782 + struct drm_dp_sideband_msg_rx *msg = &mgr->down_rep_recv; 3734 3783 3735 - if (!drm_dp_get_one_sb_msg(mgr, false, &mstb, &seqno)) 3736 - goto out_clear_reply; 3737 - 3738 - msg = &mstb->down_rep_recv[seqno]; 3784 + if (!drm_dp_get_one_sb_msg(mgr, false, &mstb)) 3785 + goto out; 3739 3786 3740 3787 /* Multi-packet message transmission, don't clear the reply */ 3741 3788 if (!msg->have_eomt) ··· 3740 3793 3741 3794 /* find the message */ 3742 3795 mutex_lock(&mgr->qlock); 3743 - txmsg = mstb->tx_slots[seqno]; 3744 - /* remove from slots */ 3796 + txmsg = list_first_entry_or_null(&mgr->tx_msg_downq, 3797 + struct drm_dp_sideband_msg_tx, next); 3745 3798 mutex_unlock(&mgr->qlock); 3746 3799 3747 - if (!txmsg) { 3800 + /* Were we actually expecting a response, and from this mstb? 
*/ 3801 + if (!txmsg || txmsg->dst != mstb) { 3748 3802 struct drm_dp_sideband_msg_hdr *hdr; 3749 3803 hdr = &msg->initial_hdr; 3750 3804 DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n", ··· 3770 3822 3771 3823 mutex_lock(&mgr->qlock); 3772 3824 txmsg->state = DRM_DP_SIDEBAND_TX_RX; 3773 - mstb->tx_slots[seqno] = NULL; 3825 + list_del(&txmsg->next); 3774 3826 mutex_unlock(&mgr->qlock); 3775 3827 3776 3828 wake_up_all(&mgr->tx_waitq); ··· 3778 3830 return 0; 3779 3831 3780 3832 out_clear_reply: 3781 - if (msg) 3782 - memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3833 + memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx)); 3783 3834 out: 3784 3835 if (mstb) 3785 3836 drm_dp_mst_topology_put_mstb(mstb); ··· 3858 3911 static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) 3859 3912 { 3860 3913 struct drm_dp_pending_up_req *up_req; 3861 - int seqno; 3862 3914 3863 - if (!drm_dp_get_one_sb_msg(mgr, true, NULL, &seqno)) 3915 + if (!drm_dp_get_one_sb_msg(mgr, true, NULL)) 3864 3916 goto out; 3865 3917 3866 3918 if (!mgr->up_req_recv.have_eomt) ··· 3883 3937 } 3884 3938 3885 3939 drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, up_req->msg.req_type, 3886 - seqno, false); 3940 + false); 3887 3941 3888 3942 if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { 3889 3943 const struct drm_dp_connection_status_notify *conn_stat = ··· 4649 4703 drm_dp_delayed_destroy_mstb(struct drm_dp_mst_branch *mstb) 4650 4704 { 4651 4705 struct drm_dp_mst_topology_mgr *mgr = mstb->mgr; 4652 - struct drm_dp_mst_port *port, *tmp; 4706 + struct drm_dp_mst_port *port, *port_tmp; 4707 + struct drm_dp_sideband_msg_tx *txmsg, *txmsg_tmp; 4653 4708 bool wake_tx = false; 4654 4709 4655 4710 mutex_lock(&mgr->lock); 4656 - list_for_each_entry_safe(port, tmp, &mstb->ports, next) { 4711 + list_for_each_entry_safe(port, port_tmp, &mstb->ports, next) { 4657 4712 list_del(&port->next); 4658 4713 drm_dp_mst_topology_put_port(port); 4659 4714 } 4660 4715 
mutex_unlock(&mgr->lock); 4661 4716 4662 - /* drop any tx slots msg */ 4717 + /* drop any tx slot msg */ 4663 4718 mutex_lock(&mstb->mgr->qlock); 4664 - if (mstb->tx_slots[0]) { 4665 - mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT; 4666 - mstb->tx_slots[0] = NULL; 4667 - wake_tx = true; 4668 - } 4669 - if (mstb->tx_slots[1]) { 4670 - mstb->tx_slots[1]->state = DRM_DP_SIDEBAND_TX_TIMEOUT; 4671 - mstb->tx_slots[1] = NULL; 4719 + list_for_each_entry_safe(txmsg, txmsg_tmp, &mgr->tx_msg_downq, next) { 4720 + if (txmsg->dst != mstb) 4721 + continue; 4722 + 4723 + txmsg->state = DRM_DP_SIDEBAND_TX_TIMEOUT; 4724 + list_del(&txmsg->next); 4672 4725 wake_tx = true; 4673 4726 } 4674 4727 mutex_unlock(&mstb->mgr->qlock);
+23
drivers/gpu/drm/drm_drv.c
··· 739 739 } 740 740 EXPORT_SYMBOL(devm_drm_dev_init); 741 741 742 + void *__devm_drm_dev_alloc(struct device *parent, struct drm_driver *driver, 743 + size_t size, size_t offset) 744 + { 745 + void *container; 746 + struct drm_device *drm; 747 + int ret; 748 + 749 + container = kzalloc(size, GFP_KERNEL); 750 + if (!container) 751 + return ERR_PTR(-ENOMEM); 752 + 753 + drm = container + offset; 754 + ret = devm_drm_dev_init(parent, drm, driver); 755 + if (ret) { 756 + kfree(container); 757 + return ERR_PTR(ret); 758 + } 759 + drmm_add_final_kfree(drm, container); 760 + 761 + return container; 762 + } 763 + EXPORT_SYMBOL(__devm_drm_dev_alloc); 764 + 742 765 /** 743 766 * drm_dev_alloc - Allocate new DRM device 744 767 * @driver: DRM driver to allocate device for
+8
drivers/gpu/drm/drm_edid.c
··· 2380 2380 (a == 0x20 && b == 0x20); 2381 2381 } 2382 2382 2383 + static int drm_mode_hsync(const struct drm_display_mode *mode) 2384 + { 2385 + if (mode->htotal <= 0) 2386 + return 0; 2387 + 2388 + return DIV_ROUND_CLOSEST(mode->clock, mode->htotal); 2389 + } 2390 + 2383 2391 /** 2384 2392 * drm_mode_std - convert standard mode info (width, height, refresh) into mode 2385 2393 * @connector: connector for the EDID block
+4 -2
drivers/gpu/drm/drm_file.c
··· 613 613 file_priv->event_space -= length; 614 614 list_add(&e->link, &file_priv->event_list); 615 615 spin_unlock_irq(&dev->event_lock); 616 - wake_up_interruptible(&file_priv->event_wait); 616 + wake_up_interruptible_poll(&file_priv->event_wait, 617 + EPOLLIN | EPOLLRDNORM); 617 618 break; 618 619 } 619 620 ··· 810 809 list_del(&e->pending_link); 811 810 list_add_tail(&e->link, 812 811 &e->file_priv->event_list); 813 - wake_up_interruptible(&e->file_priv->event_wait); 812 + wake_up_interruptible_poll(&e->file_priv->event_wait, 813 + EPOLLIN | EPOLLRDNORM); 814 814 } 815 815 EXPORT_SYMBOL(drm_send_event_locked); 816 816
+114 -19
drivers/gpu/drm/drm_mm.c
··· 212 212 &drm_mm_interval_tree_augment); 213 213 } 214 214 215 - #define RB_INSERT(root, member, expr) do { \ 216 - struct rb_node **link = &root.rb_node, *rb = NULL; \ 217 - u64 x = expr(node); \ 218 - while (*link) { \ 219 - rb = *link; \ 220 - if (x < expr(rb_entry(rb, struct drm_mm_node, member))) \ 221 - link = &rb->rb_left; \ 222 - else \ 223 - link = &rb->rb_right; \ 224 - } \ 225 - rb_link_node(&node->member, rb, link); \ 226 - rb_insert_color(&node->member, &root); \ 227 - } while (0) 228 - 229 215 #define HOLE_SIZE(NODE) ((NODE)->hole_size) 230 216 #define HOLE_ADDR(NODE) (__drm_mm_hole_node_start(NODE)) 231 217 ··· 241 255 rb_insert_color_cached(&node->rb_hole_size, root, first); 242 256 } 243 257 258 + RB_DECLARE_CALLBACKS_MAX(static, augment_callbacks, 259 + struct drm_mm_node, rb_hole_addr, 260 + u64, subtree_max_hole, HOLE_SIZE) 261 + 262 + static void insert_hole_addr(struct rb_root *root, struct drm_mm_node *node) 263 + { 264 + struct rb_node **link = &root->rb_node, *rb_parent = NULL; 265 + u64 start = HOLE_ADDR(node), subtree_max_hole = node->subtree_max_hole; 266 + struct drm_mm_node *parent; 267 + 268 + while (*link) { 269 + rb_parent = *link; 270 + parent = rb_entry(rb_parent, struct drm_mm_node, rb_hole_addr); 271 + if (parent->subtree_max_hole < subtree_max_hole) 272 + parent->subtree_max_hole = subtree_max_hole; 273 + if (start < HOLE_ADDR(parent)) 274 + link = &parent->rb_hole_addr.rb_left; 275 + else 276 + link = &parent->rb_hole_addr.rb_right; 277 + } 278 + 279 + rb_link_node(&node->rb_hole_addr, rb_parent, link); 280 + rb_insert_augmented(&node->rb_hole_addr, root, &augment_callbacks); 281 + } 282 + 244 283 static void add_hole(struct drm_mm_node *node) 245 284 { 246 285 struct drm_mm *mm = node->mm; 247 286 248 287 node->hole_size = 249 288 __drm_mm_hole_node_end(node) - __drm_mm_hole_node_start(node); 289 + node->subtree_max_hole = node->hole_size; 250 290 DRM_MM_BUG_ON(!drm_mm_hole_follows(node)); 251 291 252 292 
insert_hole_size(&mm->holes_size, node); 253 - RB_INSERT(mm->holes_addr, rb_hole_addr, HOLE_ADDR); 293 + insert_hole_addr(&mm->holes_addr, node); 254 294 255 295 list_add(&node->hole_stack, &mm->hole_stack); 256 296 } ··· 287 275 288 276 list_del(&node->hole_stack); 289 277 rb_erase_cached(&node->rb_hole_size, &node->mm->holes_size); 290 - rb_erase(&node->rb_hole_addr, &node->mm->holes_addr); 278 + rb_erase_augmented(&node->rb_hole_addr, &node->mm->holes_addr, 279 + &augment_callbacks); 291 280 node->hole_size = 0; 281 + node->subtree_max_hole = 0; 292 282 293 283 DRM_MM_BUG_ON(drm_mm_hole_follows(node)); 294 284 } ··· 375 361 } 376 362 } 377 363 364 + /** 365 + * next_hole_high_addr - returns next hole for a DRM_MM_INSERT_HIGH mode request 366 + * @entry: previously selected drm_mm_node 367 + * @size: size of a hole needed for the request 368 + * 369 + * This function will verify whether left subtree of @entry has hole big enough 370 + * to fit the requested size. If so, it will return previous node of @entry or 371 + * else it will return parent node of @entry 372 + * 373 + * It will also skip the complete left subtree if subtree_max_hole of that 374 + * subtree is same as the subtree_max_hole of the @entry. 
375 + * 376 + * Returns: 377 + * previous node of @entry if left subtree of @entry can serve the request or 378 + * else return parent of @entry 379 + */ 380 + static struct drm_mm_node * 381 + next_hole_high_addr(struct drm_mm_node *entry, u64 size) 382 + { 383 + struct rb_node *rb_node, *left_rb_node, *parent_rb_node; 384 + struct drm_mm_node *left_node; 385 + 386 + if (!entry) 387 + return NULL; 388 + 389 + rb_node = &entry->rb_hole_addr; 390 + if (rb_node->rb_left) { 391 + left_rb_node = rb_node->rb_left; 392 + parent_rb_node = rb_parent(rb_node); 393 + left_node = rb_entry(left_rb_node, 394 + struct drm_mm_node, rb_hole_addr); 395 + if ((left_node->subtree_max_hole < size || 396 + entry->size == entry->subtree_max_hole) && 397 + parent_rb_node && parent_rb_node->rb_left != rb_node) 398 + return rb_hole_addr_to_node(parent_rb_node); 399 + } 400 + 401 + return rb_hole_addr_to_node(rb_prev(rb_node)); 402 + } 403 + 404 + /** 405 + * next_hole_low_addr - returns next hole for a DRM_MM_INSERT_LOW mode request 406 + * @entry: previously selected drm_mm_node 407 + * @size: size of a hole needed for the request 408 + * 409 + * This function will verify whether right subtree of @entry has hole big enough 410 + * to fit the requested size. If so, it will return next node of @entry or 411 + * else it will return parent node of @entry 412 + * 413 + * It will also skip the complete right subtree if subtree_max_hole of that 414 + * subtree is same as the subtree_max_hole of the @entry. 
415 + * 416 + * Returns: 417 + * next node of @entry if right subtree of @entry can serve the request or 418 + * else return parent of @entry 419 + */ 420 + static struct drm_mm_node * 421 + next_hole_low_addr(struct drm_mm_node *entry, u64 size) 422 + { 423 + struct rb_node *rb_node, *right_rb_node, *parent_rb_node; 424 + struct drm_mm_node *right_node; 425 + 426 + if (!entry) 427 + return NULL; 428 + 429 + rb_node = &entry->rb_hole_addr; 430 + if (rb_node->rb_right) { 431 + right_rb_node = rb_node->rb_right; 432 + parent_rb_node = rb_parent(rb_node); 433 + right_node = rb_entry(right_rb_node, 434 + struct drm_mm_node, rb_hole_addr); 435 + if ((right_node->subtree_max_hole < size || 436 + entry->size == entry->subtree_max_hole) && 437 + parent_rb_node && parent_rb_node->rb_right != rb_node) 438 + return rb_hole_addr_to_node(parent_rb_node); 439 + } 440 + 441 + return rb_hole_addr_to_node(rb_next(rb_node)); 442 + } 443 + 378 444 static struct drm_mm_node * 379 445 next_hole(struct drm_mm *mm, 380 446 struct drm_mm_node *node, 447 + u64 size, 381 448 enum drm_mm_insert_mode mode) 382 449 { 383 450 switch (mode) { ··· 467 372 return rb_hole_size_to_node(rb_prev(&node->rb_hole_size)); 468 373 469 374 case DRM_MM_INSERT_LOW: 470 - return rb_hole_addr_to_node(rb_next(&node->rb_hole_addr)); 375 + return next_hole_low_addr(node, size); 471 376 472 377 case DRM_MM_INSERT_HIGH: 473 - return rb_hole_addr_to_node(rb_prev(&node->rb_hole_addr)); 378 + return next_hole_high_addr(node, size); 474 379 475 380 case DRM_MM_INSERT_EVICT: 476 381 node = list_next_entry(node, hole_stack); ··· 584 489 remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0; 585 490 for (hole = first_hole(mm, range_start, range_end, size, mode); 586 491 hole; 587 - hole = once ? NULL : next_hole(mm, hole, mode)) { 492 + hole = once ? 
NULL : next_hole(mm, hole, size, mode)) { 588 493 u64 hole_start = __drm_mm_hole_node_start(hole); 589 494 u64 hole_end = hole_start + hole->hole_size; 590 495 u64 adj_start, adj_end;
+6 -4
drivers/gpu/drm/drm_mode_object.c
··· 402 402 { 403 403 struct drm_mode_obj_get_properties *arg = data; 404 404 struct drm_mode_object *obj; 405 + struct drm_modeset_acquire_ctx ctx; 405 406 int ret = 0; 406 407 407 408 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 408 409 return -EOPNOTSUPP; 409 410 410 - drm_modeset_lock_all(dev); 411 + DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret); 411 412 412 413 obj = drm_mode_object_find(dev, file_priv, arg->obj_id, arg->obj_type); 413 414 if (!obj) { ··· 428 427 out_unref: 429 428 drm_mode_object_put(obj); 430 429 out: 431 - drm_modeset_unlock_all(dev); 430 + DRM_MODESET_LOCK_ALL_END(ctx, ret); 432 431 return ret; 433 432 } 434 433 ··· 450 449 { 451 450 struct drm_device *dev = prop->dev; 452 451 struct drm_mode_object *ref; 452 + struct drm_modeset_acquire_ctx ctx; 453 453 int ret = -EINVAL; 454 454 455 455 if (!drm_property_change_valid_get(prop, prop_value, &ref)) 456 456 return -EINVAL; 457 457 458 - drm_modeset_lock_all(dev); 458 + DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret); 459 459 switch (obj->type) { 460 460 case DRM_MODE_OBJECT_CONNECTOR: 461 461 ret = drm_connector_set_obj_prop(obj, prop, prop_value); ··· 470 468 break; 471 469 } 472 470 drm_property_change_valid_put(prop, ref); 473 - drm_modeset_unlock_all(dev); 471 + DRM_MODESET_LOCK_ALL_END(ctx, ret); 474 472 475 473 return ret; 476 474 }
-26
drivers/gpu/drm/drm_modes.c
··· 748 748 EXPORT_SYMBOL(drm_mode_set_name); 749 749 750 750 /** 751 - * drm_mode_hsync - get the hsync of a mode 752 - * @mode: mode 753 - * 754 - * Returns: 755 - * @modes's hsync rate in kHz, rounded to the nearest integer. Calculates the 756 - * value first if it is not yet set. 757 - */ 758 - int drm_mode_hsync(const struct drm_display_mode *mode) 759 - { 760 - unsigned int calc_val; 761 - 762 - if (mode->hsync) 763 - return mode->hsync; 764 - 765 - if (mode->htotal <= 0) 766 - return 0; 767 - 768 - calc_val = (mode->clock * 1000) / mode->htotal; /* hsync in Hz */ 769 - calc_val += 500; /* round to 1000Hz */ 770 - calc_val /= 1000; /* truncate to kHz */ 771 - 772 - return calc_val; 773 - } 774 - EXPORT_SYMBOL(drm_mode_hsync); 775 - 776 - /** 777 751 * drm_mode_vrefresh - get the vrefresh of a mode 778 752 * @mode: mode 779 753 *
-1
drivers/gpu/drm/i915/display/intel_display.c
···

 	mode->clock = pipe_config->hw.adjusted_mode.crtc_clock;

-	mode->hsync = drm_mode_hsync(mode);
 	mode->vrefresh = drm_mode_vrefresh(mode);
 	drm_mode_set_name(mode);
 }
+4 -13
drivers/gpu/drm/i915/i915_drv.c
···
 		(struct intel_device_info *)ent->driver_data;
 	struct intel_device_info *device_info;
 	struct drm_i915_private *i915;
-	int err;

-	i915 = kzalloc(sizeof(*i915), GFP_KERNEL);
-	if (!i915)
-		return ERR_PTR(-ENOMEM);
-
-	err = drm_dev_init(&i915->drm, &driver, &pdev->dev);
-	if (err) {
-		kfree(i915);
-		return ERR_PTR(err);
-	}
-
-	drmm_add_final_kfree(&i915->drm, i915);
+	i915 = devm_drm_dev_alloc(&pdev->dev, &driver,
+				  struct drm_i915_private, drm);
+	if (IS_ERR(i915))
+		return i915;

 	i915->drm.pdev = pdev;
 	pci_set_drvdata(pdev, i915);
···
 	pci_disable_device(pdev);
 out_fini:
 	i915_probe_error(i915, "Device initialization failed (%d)\n", ret);
-	drm_dev_put(&i915->drm);
 	return ret;
 }

-2
drivers/gpu/drm/i915/i915_pci.c
···

 	i915_driver_remove(i915);
 	pci_set_drvdata(pdev, NULL);
-
-	drm_dev_put(&i915->drm);
 }

 /* is device_id present in comma separated list of ids */
+4 -11
drivers/gpu/drm/ingenic/ingenic-drm.c
···
 		return -EINVAL;
 	}

-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
-	if (!priv)
-		return -ENOMEM;
+	priv = devm_drm_dev_alloc(dev, &ingenic_drm_driver_data,
+				  struct ingenic_drm, drm);
+	if (IS_ERR(priv))
+		return PTR_ERR(priv);

 	priv->soc_info = soc_info;
 	priv->dev = dev;
 	drm = &priv->drm;
-	drm->dev_private = priv;

 	platform_set_drvdata(pdev, priv);
-
-	ret = devm_drm_dev_init(dev, drm, &ingenic_drm_driver_data);
-	if (ret) {
-		kfree(priv);
-		return ret;
-	}
-	drmm_add_final_kfree(drm, priv);

 	ret = drmm_mode_config_init(drm);
 	if (ret)
+20 -5
drivers/gpu/drm/lima/lima_bcast.c
···
 	bcast_write(LIMA_BCAST_BROADCAST_MASK, mask);
 }

+static int lima_bcast_hw_init(struct lima_ip *ip)
+{
+	bcast_write(LIMA_BCAST_BROADCAST_MASK, ip->data.mask << 16);
+	bcast_write(LIMA_BCAST_INTERRUPT_MASK, ip->data.mask);
+	return 0;
+}
+
+int lima_bcast_resume(struct lima_ip *ip)
+{
+	return lima_bcast_hw_init(ip);
+}
+
+void lima_bcast_suspend(struct lima_ip *ip)
+{
+
+}
+
 int lima_bcast_init(struct lima_ip *ip)
 {
-	int i, mask = 0;
+	int i;

 	for (i = lima_ip_pp0; i <= lima_ip_pp7; i++) {
 		if (ip->dev->ip[i].present)
-			mask |= 1 << (i - lima_ip_pp0);
+			ip->data.mask |= 1 << (i - lima_ip_pp0);
 	}

-	bcast_write(LIMA_BCAST_BROADCAST_MASK, mask << 16);
-	bcast_write(LIMA_BCAST_INTERRUPT_MASK, mask);
-	return 0;
+	return lima_bcast_hw_init(ip);
 }

 void lima_bcast_fini(struct lima_ip *ip)
+2
drivers/gpu/drm/lima/lima_bcast.h
···

 struct lima_ip;

+int lima_bcast_resume(struct lima_ip *ip);
+void lima_bcast_suspend(struct lima_ip *ip);
 int lima_bcast_init(struct lima_ip *ip);
 void lima_bcast_fini(struct lima_ip *ip);

+27 -4
drivers/gpu/drm/lima/lima_devfreq.c
···
 	}

 	if (devfreq->devfreq) {
-		devm_devfreq_remove_device(&ldev->pdev->dev,
-					   devfreq->devfreq);
+		devm_devfreq_remove_device(ldev->dev, devfreq->devfreq);
 		devfreq->devfreq = NULL;
 	}

 	if (devfreq->opp_of_table_added) {
-		dev_pm_opp_of_remove_table(&ldev->pdev->dev);
+		dev_pm_opp_of_remove_table(ldev->dev);
 		devfreq->opp_of_table_added = false;
 	}

···
 int lima_devfreq_init(struct lima_device *ldev)
 {
 	struct thermal_cooling_device *cooling;
-	struct device *dev = &ldev->pdev->dev;
+	struct device *dev = ldev->dev;
 	struct opp_table *opp_table;
 	struct devfreq *devfreq;
 	struct lima_devfreq *ldevfreq = &ldev->devfreq;
···
 	WARN_ON(--devfreq->busy_count < 0);

 	spin_unlock_irqrestore(&devfreq->lock, irqflags);
+}
+
+int lima_devfreq_resume(struct lima_devfreq *devfreq)
+{
+	unsigned long irqflags;
+
+	if (!devfreq->devfreq)
+		return 0;
+
+	spin_lock_irqsave(&devfreq->lock, irqflags);
+
+	lima_devfreq_reset(devfreq);
+
+	spin_unlock_irqrestore(&devfreq->lock, irqflags);
+
+	return devfreq_resume_device(devfreq->devfreq);
+}
+
+int lima_devfreq_suspend(struct lima_devfreq *devfreq)
+{
+	if (!devfreq->devfreq)
+		return 0;
+
+	return devfreq_suspend_device(devfreq->devfreq);
 }
+3
drivers/gpu/drm/lima/lima_devfreq.h
···
 void lima_devfreq_record_busy(struct lima_devfreq *devfreq);
 void lima_devfreq_record_idle(struct lima_devfreq *devfreq);

+int lima_devfreq_resume(struct lima_devfreq *devfreq);
+int lima_devfreq_suspend(struct lima_devfreq *devfreq);
+
 #endif
+169 -42
drivers/gpu/drm/lima/lima_device.c
···

 	int (*init)(struct lima_ip *ip);
 	void (*fini)(struct lima_ip *ip);
+	int (*resume)(struct lima_ip *ip);
+	void (*suspend)(struct lima_ip *ip);
 };

 #define LIMA_IP_DESC(ipname, mst0, mst1, off0, off1, func, irq) \
···
 	}, \
 	.init = lima_##func##_init, \
 	.fini = lima_##func##_fini, \
+	.resume = lima_##func##_resume, \
+	.suspend = lima_##func##_suspend, \
 }

 static struct lima_ip_desc lima_ip_desc[lima_ip_num] = {
···
 	return lima_ip_desc[ip->id].name;
 }

-static int lima_clk_init(struct lima_device *dev)
+static int lima_clk_enable(struct lima_device *dev)
 {
 	int err;
-
-	dev->clk_bus = devm_clk_get(dev->dev, "bus");
-	if (IS_ERR(dev->clk_bus)) {
-		err = PTR_ERR(dev->clk_bus);
-		if (err != -EPROBE_DEFER)
-			dev_err(dev->dev, "get bus clk failed %d\n", err);
-		return err;
-	}
-
-	dev->clk_gpu = devm_clk_get(dev->dev, "core");
-	if (IS_ERR(dev->clk_gpu)) {
-		err = PTR_ERR(dev->clk_gpu);
-		if (err != -EPROBE_DEFER)
-			dev_err(dev->dev, "get core clk failed %d\n", err);
-		return err;
-	}

 	err = clk_prepare_enable(dev->clk_bus);
 	if (err)
···
 	if (err)
 		goto error_out0;

-	dev->reset = devm_reset_control_array_get_optional_shared(dev->dev);
-
-	if (IS_ERR(dev->reset)) {
-		err = PTR_ERR(dev->reset);
-		if (err != -EPROBE_DEFER)
-			dev_err(dev->dev, "get reset controller failed %d\n",
-				err);
-		goto error_out1;
-	} else if (dev->reset != NULL) {
+	if (dev->reset) {
 		err = reset_control_deassert(dev->reset);
 		if (err) {
 			dev_err(dev->dev,
···
 	return err;
 }

-static void lima_clk_fini(struct lima_device *dev)
+static void lima_clk_disable(struct lima_device *dev)
 {
-	if (dev->reset != NULL)
+	if (dev->reset)
 		reset_control_assert(dev->reset);

 	clk_disable_unprepare(dev->clk_gpu);
 	clk_disable_unprepare(dev->clk_bus);
+}
+
+static int lima_clk_init(struct lima_device *dev)
+{
+	int err;
+
+	dev->clk_bus = devm_clk_get(dev->dev, "bus");
+	if (IS_ERR(dev->clk_bus)) {
+		err = PTR_ERR(dev->clk_bus);
+		if (err != -EPROBE_DEFER)
+			dev_err(dev->dev, "get bus clk failed %d\n", err);
+		dev->clk_bus = NULL;
+		return err;
+	}
+
+	dev->clk_gpu = devm_clk_get(dev->dev, "core");
+	if (IS_ERR(dev->clk_gpu)) {
+		err = PTR_ERR(dev->clk_gpu);
+		if (err != -EPROBE_DEFER)
+			dev_err(dev->dev, "get core clk failed %d\n", err);
+		dev->clk_gpu = NULL;
+		return err;
+	}
+
+	dev->reset = devm_reset_control_array_get_optional_shared(dev->dev);
+	if (IS_ERR(dev->reset)) {
+		err = PTR_ERR(dev->reset);
+		if (err != -EPROBE_DEFER)
+			dev_err(dev->dev, "get reset controller failed %d\n",
+				err);
+		dev->reset = NULL;
+		return err;
+	}
+
+	return lima_clk_enable(dev);
+}
+
+static void lima_clk_fini(struct lima_device *dev)
+{
+	lima_clk_disable(dev);
+}
+
+static int lima_regulator_enable(struct lima_device *dev)
+{
+	int ret;
+
+	if (!dev->regulator)
+		return 0;
+
+	ret = regulator_enable(dev->regulator);
+	if (ret < 0) {
+		dev_err(dev->dev, "failed to enable regulator: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void lima_regulator_disable(struct lima_device *dev)
+{
+	if (dev->regulator)
+		regulator_disable(dev->regulator);
 }

 static int lima_regulator_init(struct lima_device *dev)
···
 		return ret;
 	}

-	ret = regulator_enable(dev->regulator);
-	if (ret < 0) {
-		dev_err(dev->dev, "failed to enable regulator: %d\n", ret);
-		return ret;
-	}
-
-	return 0;
+	return lima_regulator_enable(dev);
 }

 static void lima_regulator_fini(struct lima_device *dev)
 {
-	if (dev->regulator)
-		regulator_disable(dev->regulator);
+	lima_regulator_disable(dev);
 }

 static int lima_init_ip(struct lima_device *dev, int index)
 {
+	struct platform_device *pdev = to_platform_device(dev->dev);
 	struct lima_ip_desc *desc = lima_ip_desc + index;
 	struct lima_ip *ip = dev->ip + index;
+	const char *irq_name = desc->irq_name;
 	int offset = desc->offset[dev->id];
 	bool must = desc->must_have[dev->id];
 	int err;
···
 	ip->dev = dev;
 	ip->id = index;
 	ip->iomem = dev->iomem + offset;
-	if (desc->irq_name) {
-		err = platform_get_irq_byname(dev->pdev, desc->irq_name);
+	if (irq_name) {
+		err = must ? platform_get_irq_byname(pdev, irq_name) :
+			     platform_get_irq_byname_optional(pdev, irq_name);
 		if (err < 0)
 			goto out;
 		ip->irq = err;
···

 	if (ip->present)
 		desc->fini(ip);
+}
+
+static int lima_resume_ip(struct lima_device *ldev, int index)
+{
+	struct lima_ip_desc *desc = lima_ip_desc + index;
+	struct lima_ip *ip = ldev->ip + index;
+	int ret = 0;
+
+	if (ip->present)
+		ret = desc->resume(ip);
+
+	return ret;
+}
+
+static void lima_suspend_ip(struct lima_device *ldev, int index)
+{
+	struct lima_ip_desc *desc = lima_ip_desc + index;
+	struct lima_ip *ip = ldev->ip + index;
+
+	if (ip->present)
+		desc->suspend(ip);
 }

 static int lima_init_gp_pipe(struct lima_device *dev)
···

 int lima_device_init(struct lima_device *ldev)
 {
+	struct platform_device *pdev = to_platform_device(ldev->dev);
 	int err, i;
-	struct resource *res;

 	dma_set_coherent_mask(ldev->dev, DMA_BIT_MASK(32));

···
 	} else
 		ldev->va_end = LIMA_VA_RESERVE_END;

-	res = platform_get_resource(ldev->pdev, IORESOURCE_MEM, 0);
-	ldev->iomem = devm_ioremap_resource(ldev->dev, res);
+	ldev->iomem = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ldev->iomem)) {
 		dev_err(ldev->dev, "fail to ioremap iomem\n");
 		err = PTR_ERR(ldev->iomem);
···
 	lima_regulator_fini(ldev);

 	lima_clk_fini(ldev);
+}
+
+int lima_device_resume(struct device *dev)
+{
+	struct lima_device *ldev = dev_get_drvdata(dev);
+	int i, err;
+
+	err = lima_clk_enable(ldev);
+	if (err) {
+		dev_err(dev, "resume clk fail %d\n", err);
+		return err;
+	}
+
+	err = lima_regulator_enable(ldev);
+	if (err) {
+		dev_err(dev, "resume regulator fail %d\n", err);
+		goto err_out0;
+	}
+
+	for (i = 0; i < lima_ip_num; i++) {
+		err = lima_resume_ip(ldev, i);
+		if (err) {
+			dev_err(dev, "resume ip %d fail\n", i);
+			goto err_out1;
+		}
+	}
+
+	err = lima_devfreq_resume(&ldev->devfreq);
+	if (err) {
+		dev_err(dev, "devfreq resume fail\n");
+		goto err_out1;
+	}
+
+	return 0;
+
+err_out1:
+	while (--i >= 0)
+		lima_suspend_ip(ldev, i);
+	lima_regulator_disable(ldev);
+err_out0:
+	lima_clk_disable(ldev);
+	return err;
+}
+
+int lima_device_suspend(struct device *dev)
+{
+	struct lima_device *ldev = dev_get_drvdata(dev);
+	int i, err;
+
+	/* check any task running */
+	for (i = 0; i < lima_pipe_num; i++) {
+		if (atomic_read(&ldev->pipe[i].base.hw_rq_count))
+			return -EBUSY;
+	}
+
+	err = lima_devfreq_suspend(&ldev->devfreq);
+	if (err) {
+		dev_err(dev, "devfreq suspend fail\n");
+		return err;
+	}
+
+	for (i = lima_ip_num - 1; i >= 0; i--)
+		lima_suspend_ip(ldev, i);
+
+	lima_regulator_disable(ldev);
+
+	lima_clk_disable(ldev);
+
+	return 0;
 }
+5 -1
drivers/gpu/drm/lima/lima_device.h
···
 		bool async_reset;
 		/* l2 cache */
 		spinlock_t lock;
+		/* pmu/bcast */
+		u32 mask;
 	} data;
 };

···
 struct lima_device {
 	struct device *dev;
 	struct drm_device *ddev;
-	struct platform_device *pdev;

 	enum lima_gpu_id id;
 	u32 gp_version;
···
 	}
 	return 0;
 }
+
+int lima_device_suspend(struct device *dev);
+int lima_device_resume(struct device *dev);

 #endif
+16 -1
drivers/gpu/drm/lima/lima_dlbu.c
···
 	dlbu_write(LIMA_DLBU_START_TILE_POS, reg[3]);
 }

-int lima_dlbu_init(struct lima_ip *ip)
+static int lima_dlbu_hw_init(struct lima_ip *ip)
 {
 	struct lima_device *dev = ip->dev;

···
 	dlbu_write(LIMA_DLBU_MASTER_TLLIST_VADDR, LIMA_VA_RESERVE_DLBU);

 	return 0;
+}
+
+int lima_dlbu_resume(struct lima_ip *ip)
+{
+	return lima_dlbu_hw_init(ip);
+}
+
+void lima_dlbu_suspend(struct lima_ip *ip)
+{
+
+}
+
+int lima_dlbu_init(struct lima_ip *ip)
+{
+	return lima_dlbu_hw_init(ip);
 }

 void lima_dlbu_fini(struct lima_ip *ip)
+2
drivers/gpu/drm/lima/lima_dlbu.h
···

 void lima_dlbu_set_reg(struct lima_ip *ip, u32 *reg);

+int lima_dlbu_resume(struct lima_ip *ip);
+void lima_dlbu_suspend(struct lima_ip *ip);
 int lima_dlbu_init(struct lima_ip *ip);
 void lima_dlbu_fini(struct lima_ip *ip);

+24 -17
drivers/gpu/drm/lima/lima_drv.c
···
 #include <linux/of_platform.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
+#include <linux/pm_runtime.h>
 #include <drm/drm_ioctl.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_prime.h>
···
 		goto err_out0;
 	}

-	ldev->pdev = pdev;
 	ldev->dev = &pdev->dev;
 	ldev->id = (enum lima_gpu_id)of_device_get_match_data(&pdev->dev);

···
 		goto err_out2;
 	}

+	pm_runtime_set_active(ldev->dev);
+	pm_runtime_mark_last_busy(ldev->dev);
+	pm_runtime_set_autosuspend_delay(ldev->dev, 200);
+	pm_runtime_use_autosuspend(ldev->dev);
+	pm_runtime_enable(ldev->dev);
+
 	/*
 	 * Register the DRM device with the core and the connectors with
 	 * sysfs.
···
 	if (err < 0)
 		goto err_out3;

-	platform_set_drvdata(pdev, ldev);
-
 	if (sysfs_create_bin_file(&ldev->dev->kobj, &lima_error_state_attr))
 		dev_warn(ldev->dev, "fail to create error state sysfs\n");

 	return 0;

 err_out3:
-	lima_device_fini(ldev);
-err_out2:
+	pm_runtime_disable(ldev->dev);
 	lima_devfreq_fini(ldev);
+err_out2:
+	lima_device_fini(ldev);
 err_out1:
 	drm_dev_put(ddev);
 err_out0:
···
 	struct drm_device *ddev = ldev->ddev;

 	sysfs_remove_bin_file(&ldev->dev->kobj, &lima_error_state_attr);
-	platform_set_drvdata(pdev, NULL);
+
 	drm_dev_unregister(ddev);
+
+	/* stop autosuspend to make sure device is in active state */
+	pm_runtime_set_autosuspend_delay(ldev->dev, -1);
+	pm_runtime_disable(ldev->dev);
+
 	lima_devfreq_fini(ldev);
 	lima_device_fini(ldev);
+
 	drm_dev_put(ddev);
 	lima_sched_slab_fini();
 	return 0;
···
 };
 MODULE_DEVICE_TABLE(of, dt_match);

+static const struct dev_pm_ops lima_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+	SET_RUNTIME_PM_OPS(lima_device_suspend, lima_device_resume, NULL)
+};
+
 static struct platform_driver lima_platform_driver = {
 	.probe      = lima_pdev_probe,
 	.remove     = lima_pdev_remove,
 	.driver     = {
 		.name   = "lima",
+		.pm     = &lima_pm_ops,
 		.of_match_table = dt_match,
 	},
 };

-static int __init lima_init(void)
-{
-	return platform_driver_register(&lima_platform_driver);
-}
-module_init(lima_init);
-
-static void __exit lima_exit(void)
-{
-	platform_driver_unregister(&lima_platform_driver);
-}
-module_exit(lima_exit);
+module_platform_driver(lima_platform_driver);

 MODULE_AUTHOR("Lima Project Developers");
 MODULE_DESCRIPTION("Lima DRM Driver");
+18 -3
drivers/gpu/drm/lima/lima_gp.c
···
 static struct kmem_cache *lima_gp_task_slab;
 static int lima_gp_task_slab_refcnt;

+static int lima_gp_hw_init(struct lima_ip *ip)
+{
+	ip->data.async_reset = false;
+	lima_gp_soft_reset_async(ip);
+	return lima_gp_soft_reset_async_wait(ip);
+}
+
+int lima_gp_resume(struct lima_ip *ip)
+{
+	return lima_gp_hw_init(ip);
+}
+
+void lima_gp_suspend(struct lima_ip *ip)
+{
+
+}
+
 int lima_gp_init(struct lima_ip *ip)
 {
 	struct lima_device *dev = ip->dev;
···

 	lima_gp_print_version(ip);

-	ip->data.async_reset = false;
-	lima_gp_soft_reset_async(ip);
-	err = lima_gp_soft_reset_async_wait(ip);
+	err = lima_gp_hw_init(ip);
 	if (err)
 		return err;

+2
drivers/gpu/drm/lima/lima_gp.h
···
 struct lima_ip;
 struct lima_device;

+int lima_gp_resume(struct lima_ip *ip);
+void lima_gp_suspend(struct lima_ip *ip);
 int lima_gp_init(struct lima_ip *ip);
 void lima_gp_fini(struct lima_ip *ip);

+28 -10
drivers/gpu/drm/lima/lima_l2_cache.c
···
 	return ret;
 }

+static int lima_l2_cache_hw_init(struct lima_ip *ip)
+{
+	int err;
+
+	err = lima_l2_cache_flush(ip);
+	if (err)
+		return err;
+
+	l2_cache_write(LIMA_L2_CACHE_ENABLE,
+		       LIMA_L2_CACHE_ENABLE_ACCESS |
+		       LIMA_L2_CACHE_ENABLE_READ_ALLOCATE);
+	l2_cache_write(LIMA_L2_CACHE_MAX_READS, 0x1c);
+
+	return 0;
+}
+
+int lima_l2_cache_resume(struct lima_ip *ip)
+{
+	return lima_l2_cache_hw_init(ip);
+}
+
+void lima_l2_cache_suspend(struct lima_ip *ip)
+{
+
+}
+
 int lima_l2_cache_init(struct lima_ip *ip)
 {
-	int i, err;
+	int i;
 	u32 size;
 	struct lima_device *dev = ip->dev;

···
 		 1 << (size & 0xff),
 		 1 << ((size >> 24) & 0xff));

-	err = lima_l2_cache_flush(ip);
-	if (err)
-		return err;
-
-	l2_cache_write(LIMA_L2_CACHE_ENABLE,
-		       LIMA_L2_CACHE_ENABLE_ACCESS|LIMA_L2_CACHE_ENABLE_READ_ALLOCATE);
-	l2_cache_write(LIMA_L2_CACHE_MAX_READS, 0x1c);
-
-	return 0;
+	return lima_l2_cache_hw_init(ip);
 }

 void lima_l2_cache_fini(struct lima_ip *ip)
+2
drivers/gpu/drm/lima/lima_l2_cache.h
···

 struct lima_ip;

+int lima_l2_cache_resume(struct lima_ip *ip);
+void lima_l2_cache_suspend(struct lima_ip *ip);
 int lima_l2_cache_init(struct lima_ip *ip);
 void lima_l2_cache_fini(struct lima_ip *ip);

+35 -14
drivers/gpu/drm/lima/lima_mmu.c
···
 	return IRQ_HANDLED;
 }

-int lima_mmu_init(struct lima_ip *ip)
+static int lima_mmu_hw_init(struct lima_ip *ip)
 {
 	struct lima_device *dev = ip->dev;
 	int err;
 	u32 v;
+
+	mmu_write(LIMA_MMU_COMMAND, LIMA_MMU_COMMAND_HARD_RESET);
+	err = lima_mmu_send_command(LIMA_MMU_COMMAND_HARD_RESET,
+				    LIMA_MMU_DTE_ADDR, v, v == 0);
+	if (err)
+		return err;
+
+	mmu_write(LIMA_MMU_INT_MASK,
+		  LIMA_MMU_INT_PAGE_FAULT | LIMA_MMU_INT_READ_BUS_ERROR);
+	mmu_write(LIMA_MMU_DTE_ADDR, dev->empty_vm->pd.dma);
+	return lima_mmu_send_command(LIMA_MMU_COMMAND_ENABLE_PAGING,
+				     LIMA_MMU_STATUS, v,
+				     v & LIMA_MMU_STATUS_PAGING_ENABLED);
+}
+
+int lima_mmu_resume(struct lima_ip *ip)
+{
+	if (ip->id == lima_ip_ppmmu_bcast)
+		return 0;
+
+	return lima_mmu_hw_init(ip);
+}
+
+void lima_mmu_suspend(struct lima_ip *ip)
+{
+
+}
+
+int lima_mmu_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int err;

 	if (ip->id == lima_ip_ppmmu_bcast)
 		return 0;
···
 		return -EIO;
 	}

-	mmu_write(LIMA_MMU_COMMAND, LIMA_MMU_COMMAND_HARD_RESET);
-	err = lima_mmu_send_command(LIMA_MMU_COMMAND_HARD_RESET,
-				    LIMA_MMU_DTE_ADDR, v, v == 0);
-	if (err)
-		return err;
-
 	err = devm_request_irq(dev->dev, ip->irq, lima_mmu_irq_handler,
 			       IRQF_SHARED, lima_ip_name(ip), ip);
 	if (err) {
···
 		return err;
 	}

-	mmu_write(LIMA_MMU_INT_MASK, LIMA_MMU_INT_PAGE_FAULT | LIMA_MMU_INT_READ_BUS_ERROR);
-	mmu_write(LIMA_MMU_DTE_ADDR, dev->empty_vm->pd.dma);
-	return lima_mmu_send_command(LIMA_MMU_COMMAND_ENABLE_PAGING,
-				     LIMA_MMU_STATUS, v,
-				     v & LIMA_MMU_STATUS_PAGING_ENABLED);
+	return lima_mmu_hw_init(ip);
 }

 void lima_mmu_fini(struct lima_ip *ip)
···
 			      LIMA_MMU_STATUS, v,
 			      v & LIMA_MMU_STATUS_STALL_ACTIVE);

-	if (vm)
-		mmu_write(LIMA_MMU_DTE_ADDR, vm->pd.dma);
+	mmu_write(LIMA_MMU_DTE_ADDR, vm->pd.dma);

 	/* flush the TLB */
 	mmu_write(LIMA_MMU_COMMAND, LIMA_MMU_COMMAND_ZAP_CACHE);
+2
drivers/gpu/drm/lima/lima_mmu.h
···
 struct lima_ip;
 struct lima_vm;

+int lima_mmu_resume(struct lima_ip *ip);
+void lima_mmu_suspend(struct lima_ip *ip);
 int lima_mmu_init(struct lima_ip *ip);
 void lima_mmu_fini(struct lima_ip *ip);

+74 -3
drivers/gpu/drm/lima/lima_pmu.c
···
 			 v, v & LIMA_PMU_INT_CMD_MASK,
 			 100, 100000);
 	if (err) {
-		dev_err(dev->dev, "timeout wait pmd cmd\n");
+		dev_err(dev->dev, "timeout wait pmu cmd\n");
 		return err;
 	}

···
 	return 0;
 }

-int lima_pmu_init(struct lima_ip *ip)
+static u32 lima_pmu_get_ip_mask(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	u32 ret = 0;
+	int i;
+
+	ret |= LIMA_PMU_POWER_GP0_MASK;
+
+	if (dev->id == lima_gpu_mali400) {
+		ret |= LIMA_PMU_POWER_L2_MASK;
+		for (i = 0; i < 4; i++) {
+			if (dev->ip[lima_ip_pp0 + i].present)
+				ret |= LIMA_PMU_POWER_PP_MASK(i);
+		}
+	} else {
+		if (dev->ip[lima_ip_pp0].present)
+			ret |= LIMA450_PMU_POWER_PP0_MASK;
+		for (i = lima_ip_pp1; i <= lima_ip_pp3; i++) {
+			if (dev->ip[i].present) {
+				ret |= LIMA450_PMU_POWER_PP13_MASK;
+				break;
+			}
+		}
+		for (i = lima_ip_pp4; i <= lima_ip_pp7; i++) {
+			if (dev->ip[i].present) {
+				ret |= LIMA450_PMU_POWER_PP47_MASK;
+				break;
+			}
+		}
+	}
+
+	return ret;
+}
+
+static int lima_pmu_hw_init(struct lima_ip *ip)
 {
 	int err;
 	u32 stat;
···
 	return 0;
 }

+static void lima_pmu_hw_fini(struct lima_ip *ip)
+{
+	u32 stat;
+
+	if (!ip->data.mask)
+		ip->data.mask = lima_pmu_get_ip_mask(ip);
+
+	stat = ~pmu_read(LIMA_PMU_STATUS) & ip->data.mask;
+	if (stat) {
+		pmu_write(LIMA_PMU_POWER_DOWN, stat);
+
+		/* Don't wait for interrupt on Mali400 if all domains are
+		 * powered off because the HW won't generate an interrupt
+		 * in this case.
+		 */
+		if (ip->dev->id == lima_gpu_mali400)
+			pmu_write(LIMA_PMU_INT_CLEAR, LIMA_PMU_INT_CMD_MASK);
+		else
+			lima_pmu_wait_cmd(ip);
+	}
+}
+
+int lima_pmu_resume(struct lima_ip *ip)
+{
+	return lima_pmu_hw_init(ip);
+}
+
+void lima_pmu_suspend(struct lima_ip *ip)
+{
+	lima_pmu_hw_fini(ip);
+}
+
+int lima_pmu_init(struct lima_ip *ip)
+{
+	return lima_pmu_hw_init(ip);
+}
+
 void lima_pmu_fini(struct lima_ip *ip)
 {
-
+	lima_pmu_hw_fini(ip);
 }
+2
drivers/gpu/drm/lima/lima_pmu.h
···

 struct lima_ip;

+int lima_pmu_resume(struct lima_ip *ip);
+void lima_pmu_suspend(struct lima_ip *ip);
 int lima_pmu_init(struct lima_ip *ip);
 void lima_pmu_fini(struct lima_ip *ip);

+28 -3
drivers/gpu/drm/lima/lima_pp.c
···
 		 lima_ip_name(ip), name, major, minor);
 }

+static int lima_pp_hw_init(struct lima_ip *ip)
+{
+	ip->data.async_reset = false;
+	lima_pp_soft_reset_async(ip);
+	return lima_pp_soft_reset_async_wait(ip);
+}
+
+int lima_pp_resume(struct lima_ip *ip)
+{
+	return lima_pp_hw_init(ip);
+}
+
+void lima_pp_suspend(struct lima_ip *ip)
+{
+
+}
+
 int lima_pp_init(struct lima_ip *ip)
 {
 	struct lima_device *dev = ip->dev;
···

 	lima_pp_print_version(ip);

-	ip->data.async_reset = false;
-	lima_pp_soft_reset_async(ip);
-	err = lima_pp_soft_reset_async_wait(ip);
+	err = lima_pp_hw_init(ip);
 	if (err)
 		return err;

···
 }

 void lima_pp_fini(struct lima_ip *ip)
+{
+
+}
+
+int lima_pp_bcast_resume(struct lima_ip *ip)
+{
+	return 0;
+}
+
+void lima_pp_bcast_suspend(struct lima_ip *ip)
 {

 }
+4
drivers/gpu/drm/lima/lima_pp.h
···
 struct lima_ip;
 struct lima_device;

+int lima_pp_resume(struct lima_ip *ip);
+void lima_pp_suspend(struct lima_ip *ip);
 int lima_pp_init(struct lima_ip *ip);
 void lima_pp_fini(struct lima_ip *ip);

+int lima_pp_bcast_resume(struct lima_ip *ip);
+void lima_pp_bcast_suspend(struct lima_ip *ip);
 int lima_pp_bcast_init(struct lima_ip *ip);
 void lima_pp_bcast_fini(struct lima_ip *ip);

+43 -20
drivers/gpu/drm/lima/lima_sched.c
···
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/pm_runtime.h>

 #include "lima_devfreq.h"
 #include "lima_drv.h"
···
 	return NULL;
 }

+static int lima_pm_busy(struct lima_device *ldev)
+{
+	int ret;
+
+	/* resume GPU if it has been suspended by runtime PM */
+	ret = pm_runtime_get_sync(ldev->dev);
+	if (ret < 0)
+		return ret;
+
+	lima_devfreq_record_busy(&ldev->devfreq);
+	return 0;
+}
+
+static void lima_pm_idle(struct lima_device *ldev)
+{
+	lima_devfreq_record_idle(&ldev->devfreq);
+
+	/* GPU can do auto runtime suspend */
+	pm_runtime_mark_last_busy(ldev->dev);
+	pm_runtime_put_autosuspend(ldev->dev);
+}
+
 static struct dma_fence *lima_sched_run_job(struct drm_sched_job *job)
 {
 	struct lima_sched_task *task = to_lima_task(job);
 	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
+	struct lima_device *ldev = pipe->ldev;
 	struct lima_fence *fence;
 	struct dma_fence *ret;
-	struct lima_vm *vm = NULL, *last_vm = NULL;
-	int i;
+	int i, err;

 	/* after GPU reset */
 	if (job->s_fence->finished.error < 0)
···
 	fence = lima_fence_create(pipe);
 	if (!fence)
 		return NULL;
+
+	err = lima_pm_busy(ldev);
+	if (err < 0) {
+		dma_fence_put(&fence->base);
+		return NULL;
+	}
+
 	task->fence = &fence->base;

 	/* for caller usage of the fence, otherwise irq handler
 	 * may consume the fence before caller use it
 	 */
 	ret = dma_fence_get(task->fence);
-
-	lima_devfreq_record_busy(&pipe->ldev->devfreq);

 	pipe->current_task = task;

···
 	for (i = 0; i < pipe->num_l2_cache; i++)
 		lima_l2_cache_flush(pipe->l2_cache[i]);

-	if (task->vm != pipe->current_vm) {
-		vm = lima_vm_get(task->vm);
-		last_vm = pipe->current_vm;
-		pipe->current_vm = task->vm;
-	}
+	lima_vm_put(pipe->current_vm);
+	pipe->current_vm = lima_vm_get(task->vm);

 	if (pipe->bcast_mmu)
-		lima_mmu_switch_vm(pipe->bcast_mmu, vm);
+		lima_mmu_switch_vm(pipe->bcast_mmu, pipe->current_vm);
 	else {
 		for (i = 0; i < pipe->num_mmu; i++)
-			lima_mmu_switch_vm(pipe->mmu[i], vm);
+			lima_mmu_switch_vm(pipe->mmu[i], pipe->current_vm);
 	}
-
-	if (last_vm)
-		lima_vm_put(last_vm);

 	trace_lima_task_run(task);

···
 	mutex_lock(&dev->error_task_list_lock);

 	if (dev->dump.num_tasks >= lima_max_error_tasks) {
-		dev_info(dev->dev, "fail to save task state: error task list is full\n");
+		dev_info(dev->dev, "fail to save task state from %s pid %d: "
+			 "error task list is full\n", ctx->pname, ctx->pid);
 		goto out;
 	}

···
 {
 	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
 	struct lima_sched_task *task = to_lima_task(job);
+	struct lima_device *ldev = pipe->ldev;

 	if (!pipe->error)
 		DRM_ERROR("lima job timeout\n");
···
 		lima_mmu_page_fault_resume(pipe->mmu[i]);
 	}

-	if (pipe->current_vm)
-		lima_vm_put(pipe->current_vm);
-
+	lima_vm_put(pipe->current_vm);
 	pipe->current_vm = NULL;
 	pipe->current_task = NULL;

-	lima_devfreq_record_idle(&pipe->ldev->devfreq);
+	lima_pm_idle(ldev);

 	drm_sched_resubmit_jobs(&pipe->base);
 	drm_sched_start(&pipe->base, true);
···
 void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe)
 {
 	struct lima_sched_task *task = pipe->current_task;
+	struct lima_device *ldev = pipe->ldev;

 	if (pipe->error) {
 		if (task && task->recoverable)
···
 		pipe->task_fini(pipe);
 		dma_fence_signal(task->fence);

-		lima_devfreq_record_idle(&pipe->ldev->devfreq);
+		lima_pm_idle(ldev);
 	}
 }
+2 -1
drivers/gpu/drm/lima/lima_vm.h
···

 static inline void lima_vm_put(struct lima_vm *vm)
 {
-	kref_put(&vm->refcount, lima_vm_release);
+	if (vm)
+		kref_put(&vm->refcount, lima_vm_release);
 }

 void lima_vm_print(struct lima_vm *vm);
+5 -5
drivers/gpu/drm/mcde/mcde_display.c
···
 {
 	struct drm_crtc *crtc = &pipe->crtc;
 	struct drm_device *drm = crtc->dev;
-	struct mcde *mcde = drm->dev_private;
+	struct mcde *mcde = to_mcde(drm);
 	struct drm_pending_vblank_event *event;

 	drm_crtc_vblank_off(crtc);
···
 {
 	struct drm_crtc *crtc = &pipe->crtc;
 	struct drm_device *drm = crtc->dev;
-	struct mcde *mcde = drm->dev_private;
+	struct mcde *mcde = to_mcde(drm);
 	struct drm_pending_vblank_event *event = crtc->state->event;
 	struct drm_plane *plane = &pipe->plane;
 	struct drm_plane_state *pstate = plane->state;
···
 {
 	struct drm_crtc *crtc = &pipe->crtc;
 	struct drm_device *drm = crtc->dev;
-	struct mcde *mcde = drm->dev_private;
+	struct mcde *mcde = to_mcde(drm);
 	u32 val;

 	/* Enable all VBLANK IRQs */
···
 {
 	struct drm_crtc *crtc = &pipe->crtc;
 	struct drm_device *drm = crtc->dev;
-	struct mcde *mcde = drm->dev_private;
+	struct mcde *mcde = to_mcde(drm);

 	/* Disable all VBLANK IRQs */
 	writel(0, mcde->regs + MCDE_IMSCPP);
···

 int mcde_display_init(struct drm_device *drm)
 {
-	struct mcde *mcde = drm->dev_private;
+	struct mcde *mcde = to_mcde(drm);
 	int ret;
 	static const u32 formats[] = {
 		DRM_FORMAT_ARGB8888,
+2
drivers/gpu/drm/mcde/mcde_drm.h
··· 34 34 struct regulator *vana; 35 35 }; 36 36 37 + #define to_mcde(dev) container_of(dev, struct mcde, drm) 38 + 37 39 bool mcde_dsi_irq(struct mipi_dsi_device *mdsi); 38 40 void mcde_dsi_te_request(struct mipi_dsi_device *mdsi); 39 41 extern struct platform_driver mcde_dsi_driver;
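The new `to_mcde()` macro is the standard container_of idiom: instead of stashing the driver structure in `drm_device.dev_private`, the `drm_device` is embedded in `struct mcde` and the wrapper is recovered by pointer arithmetic. A self-contained illustration with stand-in structs (names are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Recover the enclosing structure from a pointer to an embedded member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct drm_device_stub { int id; };	/* stand-in for struct drm_device */

struct mcde_stub {
	int magic;
	struct drm_device_stub drm;	/* embedded, as in struct mcde */
};

#define to_mcde_stub(dev) container_of(dev, struct mcde_stub, drm)
```

Unlike a `void *dev_private` pointer, this conversion is type-checked at the member level and costs a compile-time constant offset, which is why the series moves every driver it touches to this pattern.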
+6 -15
drivers/gpu/drm/mcde/mcde_drv.c
··· 164 164 static int mcde_modeset_init(struct drm_device *drm) 165 165 { 166 166 struct drm_mode_config *mode_config; 167 - struct mcde *mcde = drm->dev_private; 167 + struct mcde *mcde = to_mcde(drm); 168 168 int ret; 169 169 170 170 if (!mcde->bridge) { ··· 307 307 int ret; 308 308 int i; 309 309 310 - mcde = kzalloc(sizeof(*mcde), GFP_KERNEL); 311 - if (!mcde) 312 - return -ENOMEM; 313 - mcde->dev = dev; 314 - 315 - ret = devm_drm_dev_init(dev, &mcde->drm, &mcde_drm_driver); 316 - if (ret) { 317 - kfree(mcde); 318 - return ret; 319 - } 310 + mcde = devm_drm_dev_alloc(dev, &mcde_drm_driver, struct mcde, drm); 311 + if (IS_ERR(mcde)) 312 + return PTR_ERR(mcde); 320 313 drm = &mcde->drm; 321 - drm->dev_private = mcde; 322 - drmm_add_final_kfree(drm, mcde); 314 + mcde->dev = dev; 323 315 platform_set_drvdata(pdev, drm); 324 316 325 317 /* Enable continuous updates: this is what Linux' framebuffer expects */ 326 318 mcde->oneshot_mode = false; 327 - drm->dev_private = mcde; 328 319 329 320 /* First obtain and turn on the main power */ 330 321 mcde->epod = devm_regulator_get(dev, "epod"); ··· 485 494 static int mcde_remove(struct platform_device *pdev) 486 495 { 487 496 struct drm_device *drm = platform_get_drvdata(pdev); 488 - struct mcde *mcde = drm->dev_private; 497 + struct mcde *mcde = to_mcde(drm); 489 498 490 499 component_master_del(&pdev->dev, &mcde_drm_comp_ops); 491 500 clk_disable_unprepare(mcde->mcde_clk);
+1 -1
drivers/gpu/drm/mcde/mcde_dsi.c
··· 1020 1020 void *data) 1021 1021 { 1022 1022 struct drm_device *drm = data; 1023 - struct mcde *mcde = drm->dev_private; 1023 + struct mcde *mcde = to_mcde(drm); 1024 1024 struct mcde_dsi *d = dev_get_drvdata(dev); 1025 1025 struct device_node *child; 1026 1026 struct drm_panel *panel = NULL;
+28 -1
drivers/gpu/drm/meson/meson_drv.c
··· 11 11 #include <linux/component.h> 12 12 #include <linux/module.h> 13 13 #include <linux/of_graph.h> 14 + #include <linux/sys_soc.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/soc/amlogic/meson-canvas.h> 16 17 ··· 184 183 kfree(ap); 185 184 } 186 185 186 + struct meson_drm_soc_attr { 187 + struct meson_drm_soc_limits limits; 188 + const struct soc_device_attribute *attrs; 189 + }; 190 + 191 + static const struct meson_drm_soc_attr meson_drm_soc_attrs[] = { 192 + /* S805X/S805Y HDMI PLL won't lock for HDMI PHY freq > 1,65GHz */ 193 + { 194 + .limits = { 195 + .max_hdmi_phy_freq = 1650000, 196 + }, 197 + .attrs = (const struct soc_device_attribute []) { 198 + { .soc_id = "GXL (S805*)", }, 199 + { /* sentinel */ }, 200 + } 201 + }, 202 + }; 203 + 187 204 static int meson_drv_bind_master(struct device *dev, bool has_components) 188 205 { 189 206 struct platform_device *pdev = to_platform_device(dev); ··· 210 191 struct drm_device *drm; 211 192 struct resource *res; 212 193 void __iomem *regs; 213 - int ret; 194 + int ret, i; 214 195 215 196 /* Checks if an output connector is available */ 216 197 if (!meson_vpu_has_available_connectors(dev)) { ··· 299 280 ret = drm_vblank_init(drm, 1); 300 281 if (ret) 301 282 goto free_drm; 283 + 284 + /* Assign limits per soc revision/package */ 285 + for (i = 0 ; i < ARRAY_SIZE(meson_drm_soc_attrs) ; ++i) { 286 + if (soc_device_match(meson_drm_soc_attrs[i].attrs)) { 287 + priv->limits = &meson_drm_soc_attrs[i].limits; 288 + break; 289 + } 290 + } 302 291 303 292 /* Remove early framebuffers (ie. simplefb) */ 304 293 meson_remove_framebuffers();
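The meson bind hunk walks a table of per-SoC limits, each entry paired with a sentinel-terminated list of matching SoC identifiers, and keeps the first hit. A reduced sketch of that lookup shape; note the kernel's `soc_device_match()` does glob matching on strings like `"GXL (S805*)"`, while this sketch uses exact hypothetical IDs for simplicity:

```c
#include <stddef.h>
#include <string.h>

struct soc_limits { unsigned int max_hdmi_phy_freq; };	/* kHz */

struct soc_attr {
	struct soc_limits limits;
	const char *const *soc_ids;	/* NULL-terminated list */
};

/* Exact-match stand-ins for the "GXL (S805*)" glob in the driver. */
static const char *const s805_ids[] = { "S805X", "S805Y", NULL };

static const struct soc_attr soc_attrs[] = {
	{ .limits = { .max_hdmi_phy_freq = 1650000 }, .soc_ids = s805_ids },
};

static const struct soc_limits *match_limits(const char *soc_id)
{
	size_t i;
	const char *const *id;

	for (i = 0; i < sizeof(soc_attrs) / sizeof(soc_attrs[0]); i++)
		for (id = soc_attrs[i].soc_ids; *id; id++)
			if (strcmp(*id, soc_id) == 0)
				return &soc_attrs[i].limits;
	return NULL;	/* no per-SoC restriction applies */
}
```

Returning NULL for unmatched SoCs is deliberate: `priv->limits` stays unset on unrestricted chips, and the vclk code below treats an absent limits pointer as "no cap".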
+6
drivers/gpu/drm/meson/meson_drv.h
··· 30 30 struct meson_afbcd_ops *afbcd_ops; 31 31 }; 32 32 33 + struct meson_drm_soc_limits { 34 + unsigned int max_hdmi_phy_freq; 35 + }; 36 + 33 37 struct meson_drm { 34 38 struct device *dev; 35 39 enum vpu_compatible compat; ··· 51 47 struct drm_crtc *crtc; 52 48 struct drm_plane *primary_plane; 53 49 struct drm_plane *overlay_plane; 50 + 51 + const struct meson_drm_soc_limits *limits; 54 52 55 53 /* Components Data */ 56 54 struct {
+1 -1
drivers/gpu/drm/meson/meson_dw_hdmi.c
··· 695 695 dev_dbg(connector->dev->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n", 696 696 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq); 697 697 698 - return meson_vclk_vic_supported_freq(phy_freq, vclk_freq); 698 + return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq); 699 699 } 700 700 701 701 /* Encoder */
+1 -1
drivers/gpu/drm/meson/meson_plane.c
··· 223 223 priv->viu.osd1_blk0_cfg[0] |= OSD_BLK_MODE_16 | 224 224 OSD_COLOR_MATRIX_16_RGB565; 225 225 break; 226 - }; 226 + } 227 227 } 228 228 229 229 switch (fb->format->format) {
+15 -1
drivers/gpu/drm/meson/meson_vclk.c
··· 725 725 /* In DMT mode, path after PLL is always /10 */ 726 726 freq *= 10; 727 727 728 + /* Check against soc revision/package limits */ 729 + if (priv->limits) { 730 + if (priv->limits->max_hdmi_phy_freq && 731 + freq > priv->limits->max_hdmi_phy_freq) 732 + return MODE_CLOCK_HIGH; 733 + } 734 + 728 735 if (meson_hdmi_pll_find_params(priv, freq, &m, &frac, &od)) 729 736 return MODE_OK; 730 737 ··· 769 762 } 770 763 771 764 enum drm_mode_status 772 - meson_vclk_vic_supported_freq(unsigned int phy_freq, 765 + meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 773 766 unsigned int vclk_freq) 774 767 { 775 768 int i; 776 769 777 770 DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n", 778 771 phy_freq, vclk_freq); 772 + 773 + /* Check against soc revision/package limits */ 774 + if (priv->limits) { 775 + if (priv->limits->max_hdmi_phy_freq && 776 + phy_freq > priv->limits->max_hdmi_phy_freq) 777 + return MODE_CLOCK_HIGH; 778 + } 779 779 780 780 for (i = 0 ; params[i].pixel_freq ; ++i) { 781 781 DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n",
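Both new checks in meson_vclk.c reduce to the same guard: if a limits structure is present and carries a nonzero cap, reject frequencies above it, otherwise fall through to the normal parameter search. Distilled into a testable helper (enum values loosely mirror `drm_mode_status`):

```c
enum mode_status { MODE_OK, MODE_CLOCK_HIGH };

struct limits { unsigned int max_hdmi_phy_freq; };	/* kHz */

static enum mode_status check_phy_freq(const struct limits *limits,
				       unsigned int phy_freq)
{
	/* limits may be absent entirely (no per-SoC restriction) */
	if (limits && limits->max_hdmi_phy_freq &&
	    phy_freq > limits->max_hdmi_phy_freq)
		return MODE_CLOCK_HIGH;
	return MODE_OK;
}
```

With the S805X cap of 1650000 kHz, 1080p60 (PHY ~1.485 GHz) passes while 4k modes (PHY ~2.97 GHz) are filtered out, which matches the "limit modes wrt chipset" summary in the pull description.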
+2 -1
drivers/gpu/drm/meson/meson_vclk.h
··· 25 25 enum drm_mode_status 26 26 meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq); 27 27 enum drm_mode_status 28 - meson_vclk_vic_supported_freq(unsigned int phy_freq, unsigned int vclk_freq); 28 + meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 29 + unsigned int vclk_freq); 29 30 30 31 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 31 32 unsigned int phy_freq, unsigned int vclk_freq,
+6 -27
drivers/gpu/drm/omapdrm/dss/dispc.c
··· 3137 3137 dispc_write_reg(dispc, DISPC_TIMING_H(channel), timing_h); 3138 3138 dispc_write_reg(dispc, DISPC_TIMING_V(channel), timing_v); 3139 3139 3140 - if (vm->flags & DISPLAY_FLAGS_VSYNC_HIGH) 3141 - vs = false; 3142 - else 3143 - vs = true; 3144 - 3145 - if (vm->flags & DISPLAY_FLAGS_HSYNC_HIGH) 3146 - hs = false; 3147 - else 3148 - hs = true; 3149 - 3150 - if (vm->flags & DISPLAY_FLAGS_DE_HIGH) 3151 - de = false; 3152 - else 3153 - de = true; 3154 - 3155 - if (vm->flags & DISPLAY_FLAGS_PIXDATA_POSEDGE) 3156 - ipc = false; 3157 - else 3158 - ipc = true; 3159 - 3160 - /* always use the 'rf' setting */ 3161 - onoff = true; 3162 - 3163 - if (vm->flags & DISPLAY_FLAGS_SYNC_POSEDGE) 3164 - rf = true; 3165 - else 3166 - rf = false; 3140 + vs = !!(vm->flags & DISPLAY_FLAGS_VSYNC_LOW); 3141 + hs = !!(vm->flags & DISPLAY_FLAGS_HSYNC_LOW); 3142 + de = !!(vm->flags & DISPLAY_FLAGS_DE_LOW); 3143 + ipc = !!(vm->flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE); 3144 + onoff = true; /* always use the 'rf' setting */ 3145 + rf = !!(vm->flags & DISPLAY_FLAGS_SYNC_POSEDGE); 3167 3146 3168 3147 l = FLD_VAL(onoff, 17, 17) | 3169 3148 FLD_VAL(rf, 16, 16) |
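The dispc.c cleanup collapses six four-line if/else ladders into the `!!` idiom: a masked flag word is forced to a strict 0/1 boolean regardless of which bit position the mask occupies. A small demonstration with illustrative flag values:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the display_flags bits in the hunk. */
#define FLAG_VSYNC_LOW	(1u << 0)
#define FLAG_HSYNC_LOW	(1u << 1)
#define FLAG_DE_LOW	(1u << 2)

static bool flag_set(unsigned int flags, unsigned int mask)
{
	/* (flags & mask) may be any nonzero bit pattern; !! normalizes
	 * it to exactly 0 or 1, safe even for the sign bit. */
	return !!(flags & mask);
}
```

The hunk also inverts the sense of each test (checking `*_LOW`/`*_NEGEDGE` instead of branching on `*_HIGH`/`*_POSEDGE`), so each assignment becomes one line with no behavior change, assuming exactly one polarity flag of each pair is set, as video-mode flags conventionally are.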
-43
drivers/gpu/drm/omapdrm/dss/venc.c
··· 208 208 .gen_ctrl = 0x00F90000, 209 209 }; 210 210 211 - static const struct venc_config venc_config_pal_bdghi = { 212 - .f_control = 0, 213 - .vidout_ctrl = 0, 214 - .sync_ctrl = 0, 215 - .hfltr_ctrl = 0, 216 - .x_color = 0, 217 - .line21 = 0, 218 - .ln_sel = 21, 219 - .htrigger_vtrigger = 0, 220 - .tvdetgp_int_start_stop_x = 0x00140001, 221 - .tvdetgp_int_start_stop_y = 0x00010001, 222 - .gen_ctrl = 0x00FB0000, 223 - 224 - .llen = 864-1, 225 - .flens = 625-1, 226 - .cc_carr_wss_carr = 0x2F7625ED, 227 - .c_phase = 0xDF, 228 - .gain_u = 0x111, 229 - .gain_v = 0x181, 230 - .gain_y = 0x140, 231 - .black_level = 0x3e, 232 - .blank_level = 0x3e, 233 - .m_control = 0<<2 | 1<<1, 234 - .bstamp_wss_data = 0x42, 235 - .s_carr = 0x2a098acb, 236 - .l21__wc_ctl = 0<<13 | 0x16<<8 | 0<<0, 237 - .savid__eavid = 0x06A70108, 238 - .flen__fal = 23<<16 | 624<<0, 239 - .lal__phase_reset = 2<<17 | 310<<0, 240 - .hs_int_start_stop_x = 0x00920358, 241 - .hs_ext_start_stop_x = 0x000F035F, 242 - .vs_int_start_x = 0x1a7<<16, 243 - .vs_int_stop_x__vs_int_start_y = 0x000601A7, 244 - .vs_int_stop_y__vs_ext_start_x = 0x01AF0036, 245 - .vs_ext_stop_x__vs_ext_start_y = 0x27101af, 246 - .vs_ext_stop_y = 0x05, 247 - .avid_start_stop_x = 0x03530082, 248 - .avid_start_stop_y = 0x0270002E, 249 - .fid_int_start_x__fid_int_start_y = 0x0005008A, 250 - .fid_int_offset_y__fid_ext_start_x = 0x002E0138, 251 - .fid_ext_start_y__fid_ext_offset_y = 0x01380005, 252 - }; 253 - 254 211 enum venc_videomode { 255 212 VENC_MODE_UNKNOWN, 256 213 VENC_MODE_PAL,
+8
drivers/gpu/drm/panel/Kconfig
··· 444 444 Say Y here if you want to enable support for Truly NT35597 WQXGA Dual DSI 445 445 Video Mode panel 446 446 447 + config DRM_PANEL_VISIONOX_RM69299 448 + tristate "Visionox RM69299" 449 + depends on OF 450 + depends on DRM_MIPI_DSI 451 + help 452 + Say Y here if you want to enable support for Visionox 453 + RM69299 DSI Video Mode panel. 454 + 447 455 config DRM_PANEL_XINPENG_XPP055C272 448 456 tristate "Xinpeng XPP055C272 panel driver" 449 457 depends on OF
+1
drivers/gpu/drm/panel/Makefile
··· 47 47 obj-$(CONFIG_DRM_PANEL_TPO_TD043MTEA1) += panel-tpo-td043mtea1.o 48 48 obj-$(CONFIG_DRM_PANEL_TPO_TPG110) += panel-tpo-tpg110.o 49 49 obj-$(CONFIG_DRM_PANEL_TRULY_NT35597_WQXGA) += panel-truly-nt35597.o 50 + obj-$(CONFIG_DRM_PANEL_VISIONOX_RM69299) += panel-visionox-rm69299.o 50 51 obj-$(CONFIG_DRM_PANEL_XINPENG_XPP055C272) += panel-xinpeng-xpp055c272.o
+4 -4
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 697 697 }; 698 698 699 699 static const struct drm_display_mode boe_tv105wum_nw0_default_mode = { 700 - .clock = 159260, 700 + .clock = 159916, 701 701 .hdisplay = 1200, 702 702 .hsync_start = 1200 + 80, 703 703 .hsync_end = 1200 + 80 + 24, 704 704 .htotal = 1200 + 80 + 24 + 60, 705 705 .vdisplay = 1920, 706 - .vsync_start = 1920 + 10, 707 - .vsync_end = 1920 + 10 + 2, 708 - .vtotal = 1920 + 10 + 2 + 14, 706 + .vsync_start = 1920 + 20, 707 + .vsync_end = 1920 + 20 + 4, 708 + .vtotal = 1920 + 20 + 4 + 10, 709 709 .vrefresh = 60, 710 710 .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED, 711 711 };
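The boe_tv105wum_nw0 fix adjusts clock and vertical blanking together, and the two must stay consistent: a mode's effective refresh is `clock * 1000 / (htotal * vtotal)`. A helper to sanity-check such timing edits, rounding to nearest as the DRM core's `drm_mode_vrefresh()` does:

```c
struct mode {
	unsigned int clock;	/* kHz */
	unsigned int htotal;
	unsigned int vtotal;
};

static unsigned int mode_vrefresh(const struct mode *m)
{
	unsigned long long num = (unsigned long long)m->clock * 1000;
	unsigned long long den = (unsigned long long)m->htotal * m->vtotal;

	return (unsigned int)((num + den / 2) / den);	/* round nearest */
}
```

Plugging in the patched values, htotal = 1200 + 80 + 24 + 60 = 1364 and vtotal = 1920 + 20 + 4 + 10 = 1954, the new 159916 kHz clock lands on 60 Hz.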
+2 -2
drivers/gpu/drm/panel/panel-ilitek-ili9322.c
··· 379 379 "can't set up VCOM amplitude (%d)\n", ret); 380 380 return ret; 381 381 } 382 - }; 382 + } 383 383 384 384 if (ili->vcom_high != U8_MAX) { 385 385 ret = regmap_write(ili->regmap, ILI9322_VCOM_HIGH, ··· 388 388 dev_err(ili->dev, "can't set up VCOM high (%d)\n", ret); 389 389 return ret; 390 390 } 391 - }; 391 + } 392 392 393 393 /* Set up gamma correction */ 394 394 for (i = 0; i < ARRAY_SIZE(ili->gamma); i++) {
+159 -1
drivers/gpu/drm/panel/panel-simple.c
··· 836 836 .width = 216, 837 837 .height = 135, 838 838 }, 839 - .bus_format = MEDIA_BUS_FMT_RGB666_1X18, 839 + .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 840 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 840 841 }; 841 842 842 843 static const struct drm_display_mode auo_g104sn02_mode = { ··· 861 860 .width = 211, 862 861 .height = 158, 863 862 }, 863 + }; 864 + 865 + static const struct drm_display_mode auo_g121ean01_mode = { 866 + .clock = 66700, 867 + .hdisplay = 1280, 868 + .hsync_start = 1280 + 58, 869 + .hsync_end = 1280 + 58 + 8, 870 + .htotal = 1280 + 58 + 8 + 70, 871 + .vdisplay = 800, 872 + .vsync_start = 800 + 6, 873 + .vsync_end = 800 + 6 + 4, 874 + .vtotal = 800 + 6 + 4 + 10, 875 + .vrefresh = 60, 876 + }; 877 + 878 + static const struct panel_desc auo_g121ean01 = { 879 + .modes = &auo_g121ean01_mode, 880 + .num_modes = 1, 881 + .bpc = 8, 882 + .size = { 883 + .width = 261, 884 + .height = 163, 885 + }, 886 + .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 887 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 864 888 }; 865 889 866 890 static const struct display_timing auo_g133han01_timings = { ··· 918 892 .connector_type = DRM_MODE_CONNECTOR_LVDS, 919 893 }; 920 894 895 + static const struct drm_display_mode auo_g156xtn01_mode = { 896 + .clock = 76000, 897 + .hdisplay = 1366, 898 + .hsync_start = 1366 + 33, 899 + .hsync_end = 1366 + 33 + 67, 900 + .htotal = 1560, 901 + .vdisplay = 768, 902 + .vsync_start = 768 + 4, 903 + .vsync_end = 768 + 4 + 4, 904 + .vtotal = 806, 905 + .vrefresh = 60, 906 + }; 907 + 908 + static const struct panel_desc auo_g156xtn01 = { 909 + .modes = &auo_g156xtn01_mode, 910 + .num_modes = 1, 911 + .bpc = 8, 912 + .size = { 913 + .width = 344, 914 + .height = 194, 915 + }, 916 + .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 917 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 918 + }; 919 + 921 920 static const struct display_timing auo_g185han01_timings = { 922 921 .pixelclock = { 120000000, 144000000, 175000000 }, 923 922 .hactive 
= { 1920, 1920, 1920 }, ··· 962 911 .size = { 963 912 .width = 409, 964 913 .height = 230, 914 + }, 915 + .delay = { 916 + .prepare = 50, 917 + .enable = 200, 918 + .disable = 110, 919 + .unprepare = 1000, 920 + }, 921 + .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 922 + .connector_type = DRM_MODE_CONNECTOR_LVDS, 923 + }; 924 + 925 + static const struct display_timing auo_g190ean01_timings = { 926 + .pixelclock = { 90000000, 108000000, 135000000 }, 927 + .hactive = { 1280, 1280, 1280 }, 928 + .hfront_porch = { 126, 184, 1266 }, 929 + .hback_porch = { 84, 122, 844 }, 930 + .hsync_len = { 70, 102, 704 }, 931 + .vactive = { 1024, 1024, 1024 }, 932 + .vfront_porch = { 4, 26, 76 }, 933 + .vback_porch = { 2, 8, 25 }, 934 + .vsync_len = { 2, 8, 25 }, 935 + }; 936 + 937 + static const struct panel_desc auo_g190ean01 = { 938 + .timings = &auo_g190ean01_timings, 939 + .num_timings = 1, 940 + .bpc = 8, 941 + .size = { 942 + .width = 376, 943 + .height = 301, 965 944 }, 966 945 .delay = { 967 946 .prepare = 50, ··· 1171 1090 .enable = 50, 1172 1091 .unprepare = 160, 1173 1092 }, 1093 + }; 1094 + 1095 + static const struct drm_display_mode boe_nv133fhm_n61_modes = { 1096 + .clock = 147840, 1097 + .hdisplay = 1920, 1098 + .hsync_start = 1920 + 48, 1099 + .hsync_end = 1920 + 48 + 32, 1100 + .htotal = 1920 + 48 + 32 + 200, 1101 + .vdisplay = 1080, 1102 + .vsync_start = 1080 + 3, 1103 + .vsync_end = 1080 + 3 + 6, 1104 + .vtotal = 1080 + 3 + 6 + 31, 1105 + .vrefresh = 60, 1106 + }; 1107 + 1108 + static const struct panel_desc boe_nv133fhm_n61 = { 1109 + .modes = &boe_nv133fhm_n61_modes, 1110 + .num_modes = 1, 1111 + .bpc = 8, 1112 + .size = { 1113 + .width = 300, 1114 + .height = 187, 1115 + }, 1116 + .delay = { 1117 + .hpd_absent_delay = 200, 1118 + .unprepare = 500, 1119 + }, 1120 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 1121 + .bus_flags = DRM_BUS_FLAG_DATA_MSB_TO_LSB, 1122 + .connector_type = DRM_MODE_CONNECTOR_eDP, 1174 1123 }; 1175 1124 1176 1125 static const struct 
drm_display_mode boe_nv140fhmn49_modes[] = { ··· 2091 1980 }, 2092 1981 }; 2093 1982 1983 + static const struct drm_display_mode ivo_m133nwf4_r0_mode = { 1984 + .clock = 138778, 1985 + .hdisplay = 1920, 1986 + .hsync_start = 1920 + 24, 1987 + .hsync_end = 1920 + 24 + 48, 1988 + .htotal = 1920 + 24 + 48 + 88, 1989 + .vdisplay = 1080, 1990 + .vsync_start = 1080 + 3, 1991 + .vsync_end = 1080 + 3 + 12, 1992 + .vtotal = 1080 + 3 + 12 + 17, 1993 + .vrefresh = 60, 1994 + .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC, 1995 + }; 1996 + 1997 + static const struct panel_desc ivo_m133nwf4_r0 = { 1998 + .modes = &ivo_m133nwf4_r0_mode, 1999 + .num_modes = 1, 2000 + .bpc = 8, 2001 + .size = { 2002 + .width = 294, 2003 + .height = 165, 2004 + }, 2005 + .delay = { 2006 + .hpd_absent_delay = 200, 2007 + .unprepare = 500, 2008 + }, 2009 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2010 + .bus_flags = DRM_BUS_FLAG_DATA_MSB_TO_LSB, 2011 + .connector_type = DRM_MODE_CONNECTOR_eDP, 2012 + }; 2013 + 2094 2014 static const struct display_timing koe_tx14d24vm1bpa_timing = { 2095 2015 .pixelclock = { 5580000, 5850000, 6200000 }, 2096 2016 .hactive = { 320, 320, 320 }, ··· 2310 2168 .width = 267, 2311 2169 .height = 183, 2312 2170 }, 2171 + .connector_type = DRM_MODE_CONNECTOR_eDP, 2313 2172 }; 2314 2173 2315 2174 static const struct drm_display_mode lg_lp129qe_mode = { ··· 3624 3481 .compatible = "auo,g104sn02", 3625 3482 .data = &auo_g104sn02, 3626 3483 }, { 3484 + .compatible = "auo,g121ean01", 3485 + .data = &auo_g121ean01, 3486 + }, { 3627 3487 .compatible = "auo,g133han01", 3628 3488 .data = &auo_g133han01, 3629 3489 }, { 3490 + .compatible = "auo,g156xtn01", 3491 + .data = &auo_g156xtn01, 3492 + }, { 3630 3493 .compatible = "auo,g185han01", 3631 3494 .data = &auo_g185han01, 3495 + }, { 3496 + .compatible = "auo,g190ean01", 3497 + .data = &auo_g190ean01, 3632 3498 }, { 3633 3499 .compatible = "auo,p320hvn03", 3634 3500 .data = &auo_p320hvn03, ··· 3656 3504 }, { 3657 3505 
.compatible = "boe,nv101wxmn51", 3658 3506 .data = &boe_nv101wxmn51, 3507 + }, { 3508 + .compatible = "boe,nv133fhm-n61", 3509 + .data = &boe_nv133fhm_n61, 3659 3510 }, { 3660 3511 .compatible = "boe,nv140fhmn49", 3661 3512 .data = &boe_nv140fhmn49, ··· 3767 3612 }, { 3768 3613 .compatible = "innolux,zj070na-01p", 3769 3614 .data = &innolux_zj070na_01p, 3615 + }, { 3616 + .compatible = "ivo,m133nwf4-r0", 3617 + .data = &ivo_m133nwf4_r0, 3770 3618 }, { 3771 3619 .compatible = "koe,tx14d24vm1bpa", 3772 3620 .data = &koe_tx14d24vm1bpa,
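Every panel added to panel-simple.c hangs off the same sentinel-terminated compatible table that the last hunks extend. A reduced sketch of that lookup over stub entries (struct names and payloads here are illustrative, not the driver's `of_device_id`/`panel_desc` definitions):

```c
#include <stddef.h>
#include <string.h>

struct panel_desc_stub { unsigned int width_mm; };

struct of_match_stub {
	const char *compatible;
	const struct panel_desc_stub *data;
};

static const struct panel_desc_stub auo_g121ean01_stub = { .width_mm = 261 };
static const struct panel_desc_stub boe_nv133fhm_n61_stub = { .width_mm = 300 };

static const struct of_match_stub platform_of_match[] = {
	{ .compatible = "auo,g121ean01", .data = &auo_g121ean01_stub },
	{ .compatible = "boe,nv133fhm-n61", .data = &boe_nv133fhm_n61_stub },
	{ NULL, NULL },	/* sentinel */
};

static const struct panel_desc_stub *find_panel(const char *compatible)
{
	const struct of_match_stub *m;

	for (m = platform_of_match; m->compatible; m++)
		if (strcmp(m->compatible, compatible) == 0)
			return m->data;
	return NULL;
}
```

This is why supporting a fixed-timing panel is usually just a descriptor plus one table entry: the probe path stays entirely generic.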
-2
drivers/gpu/drm/panel/panel-truly-nt35597.c
··· 490 490 { 491 491 struct device *dev = ctx->dev; 492 492 int ret, i; 493 - const struct nt35597_config *config; 494 493 495 - config = ctx->config; 496 494 for (i = 0; i < ARRAY_SIZE(ctx->supplies); i++) 497 495 ctx->supplies[i].supply = regulator_names[i]; 498 496
+302
drivers/gpu/drm/panel/panel-visionox-rm69299.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2019, The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #include <linux/delay.h> 7 + #include <linux/module.h> 8 + #include <linux/of_device.h> 9 + #include <linux/gpio/consumer.h> 10 + #include <linux/regulator/consumer.h> 11 + 12 + #include <video/mipi_display.h> 13 + 14 + #include <drm/drm_mipi_dsi.h> 15 + #include <drm/drm_modes.h> 16 + #include <drm/drm_panel.h> 17 + #include <drm/drm_print.h> 18 + 19 + struct visionox_rm69299 { 20 + struct drm_panel panel; 21 + struct regulator_bulk_data supplies[2]; 22 + struct gpio_desc *reset_gpio; 23 + struct mipi_dsi_device *dsi; 24 + bool prepared; 25 + bool enabled; 26 + }; 27 + 28 + static inline struct visionox_rm69299 *panel_to_ctx(struct drm_panel *panel) 29 + { 30 + return container_of(panel, struct visionox_rm69299, panel); 31 + } 32 + 33 + static int visionox_rm69299_power_on(struct visionox_rm69299 *ctx) 34 + { 35 + int ret; 36 + 37 + ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 38 + if (ret < 0) 39 + return ret; 40 + 41 + /* 42 + * Reset sequence of visionox panel requires the panel to be 43 + * out of reset for 10ms, followed by being held in reset 44 + * for 10ms and then out again 45 + */ 46 + gpiod_set_value(ctx->reset_gpio, 1); 47 + usleep_range(10000, 20000); 48 + gpiod_set_value(ctx->reset_gpio, 0); 49 + usleep_range(10000, 20000); 50 + gpiod_set_value(ctx->reset_gpio, 1); 51 + usleep_range(10000, 20000); 52 + 53 + return 0; 54 + } 55 + 56 + static int visionox_rm69299_power_off(struct visionox_rm69299 *ctx) 57 + { 58 + gpiod_set_value(ctx->reset_gpio, 0); 59 + 60 + return regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 61 + } 62 + 63 + static int visionox_rm69299_unprepare(struct drm_panel *panel) 64 + { 65 + struct visionox_rm69299 *ctx = panel_to_ctx(panel); 66 + int ret; 67 + 68 + ctx->dsi->mode_flags = 0; 69 + 70 + ret = mipi_dsi_dcs_write(ctx->dsi, MIPI_DCS_SET_DISPLAY_OFF, 
NULL, 0); 71 + if (ret < 0) 72 + DRM_DEV_ERROR(ctx->panel.dev, 73 + "set_display_off cmd failed ret = %d\n", ret); 74 + 75 + /* 120ms delay required here as per DCS spec */ 76 + msleep(120); 77 + 78 + ret = mipi_dsi_dcs_write(ctx->dsi, MIPI_DCS_ENTER_SLEEP_MODE, NULL, 0); 79 + if (ret < 0) { 80 + DRM_DEV_ERROR(ctx->panel.dev, 81 + "enter_sleep cmd failed ret = %d\n", ret); 82 + } 83 + 84 + ret = visionox_rm69299_power_off(ctx); 85 + 86 + ctx->prepared = false; 87 + return ret; 88 + } 89 + 90 + static int visionox_rm69299_prepare(struct drm_panel *panel) 91 + { 92 + struct visionox_rm69299 *ctx = panel_to_ctx(panel); 93 + int ret; 94 + 95 + if (ctx->prepared) 96 + return 0; 97 + 98 + ret = visionox_rm69299_power_on(ctx); 99 + if (ret < 0) 100 + return ret; 101 + 102 + ctx->dsi->mode_flags |= MIPI_DSI_MODE_LPM; 103 + 104 + ret = mipi_dsi_dcs_write_buffer(ctx->dsi, (u8[]) { 0xfe, 0x00 }, 2); 105 + if (ret < 0) { 106 + DRM_DEV_ERROR(ctx->panel.dev, 107 + "cmd set tx 0 failed, ret = %d\n", ret); 108 + goto power_off; 109 + } 110 + 111 + ret = mipi_dsi_dcs_write_buffer(ctx->dsi, (u8[]) { 0xc2, 0x08 }, 2); 112 + if (ret < 0) { 113 + DRM_DEV_ERROR(ctx->panel.dev, 114 + "cmd set tx 1 failed, ret = %d\n", ret); 115 + goto power_off; 116 + } 117 + 118 + ret = mipi_dsi_dcs_write_buffer(ctx->dsi, (u8[]) { 0x35, 0x00 }, 2); 119 + if (ret < 0) { 120 + DRM_DEV_ERROR(ctx->panel.dev, 121 + "cmd set tx 2 failed, ret = %d\n", ret); 122 + goto power_off; 123 + } 124 + 125 + ret = mipi_dsi_dcs_write_buffer(ctx->dsi, (u8[]) { 0x51, 0xff }, 2); 126 + if (ret < 0) { 127 + DRM_DEV_ERROR(ctx->panel.dev, 128 + "cmd set tx 3 failed, ret = %d\n", ret); 129 + goto power_off; 130 + } 131 + 132 + ret = mipi_dsi_dcs_write(ctx->dsi, MIPI_DCS_EXIT_SLEEP_MODE, NULL, 0); 133 + if (ret < 0) { 134 + DRM_DEV_ERROR(ctx->panel.dev, 135 + "exit_sleep_mode cmd failed ret = %d\n", ret); 136 + goto power_off; 137 + } 138 + 139 + /* Per DSI spec wait 120ms after sending exit sleep DCS command */ 140 + 
msleep(120); 141 + 142 + ret = mipi_dsi_dcs_write(ctx->dsi, MIPI_DCS_SET_DISPLAY_ON, NULL, 0); 143 + if (ret < 0) { 144 + DRM_DEV_ERROR(ctx->panel.dev, 145 + "set_display_on cmd failed ret = %d\n", ret); 146 + goto power_off; 147 + } 148 + 149 + /* Per DSI spec wait 120ms after sending set_display_on DCS command */ 150 + msleep(120); 151 + 152 + ctx->prepared = true; 153 + 154 + return 0; 155 + 156 + power_off: 157 + return ret; 158 + } 159 + 160 + static const struct drm_display_mode visionox_rm69299_1080x2248_60hz = { 161 + .name = "1080x2248", 162 + .clock = 158695, 163 + .hdisplay = 1080, 164 + .hsync_start = 1080 + 26, 165 + .hsync_end = 1080 + 26 + 2, 166 + .htotal = 1080 + 26 + 2 + 36, 167 + .vdisplay = 2248, 168 + .vsync_start = 2248 + 56, 169 + .vsync_end = 2248 + 56 + 4, 170 + .vtotal = 2248 + 56 + 4 + 4, 171 + .vrefresh = 60, 172 + .flags = 0, 173 + }; 174 + 175 + static int visionox_rm69299_get_modes(struct drm_panel *panel, 176 + struct drm_connector *connector) 177 + { 178 + struct visionox_rm69299 *ctx = panel_to_ctx(panel); 179 + struct drm_display_mode *mode; 180 + 181 + mode = drm_mode_create(connector->dev); 182 + if (!mode) { 183 + DRM_DEV_ERROR(ctx->panel.dev, 184 + "failed to create a new display mode\n"); 185 + return 0; 186 + } 187 + 188 + connector->display_info.width_mm = 74; 189 + connector->display_info.height_mm = 131; 190 + drm_mode_copy(mode, &visionox_rm69299_1080x2248_60hz); 191 + mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 192 + drm_mode_probed_add(connector, mode); 193 + 194 + return 1; 195 + } 196 + 197 + static const struct drm_panel_funcs visionox_rm69299_drm_funcs = { 198 + .unprepare = visionox_rm69299_unprepare, 199 + .prepare = visionox_rm69299_prepare, 200 + .get_modes = visionox_rm69299_get_modes, 201 + }; 202 + 203 + static int visionox_rm69299_probe(struct mipi_dsi_device *dsi) 204 + { 205 + struct device *dev = &dsi->dev; 206 + struct visionox_rm69299 *ctx; 207 + int ret; 208 + 209 + ctx = 
devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); 210 + if (!ctx) 211 + return -ENOMEM; 212 + 213 + mipi_dsi_set_drvdata(dsi, ctx); 214 + 215 + ctx->panel.dev = dev; 216 + ctx->dsi = dsi; 217 + 218 + ctx->supplies[0].supply = "vdda"; 219 + ctx->supplies[1].supply = "vdd3p3"; 220 + 221 + ret = devm_regulator_bulk_get(ctx->panel.dev, ARRAY_SIZE(ctx->supplies), 222 + ctx->supplies); 223 + if (ret < 0) 224 + return ret; 225 + 226 + ctx->reset_gpio = devm_gpiod_get(ctx->panel.dev, 227 + "reset", GPIOD_OUT_LOW); 228 + if (IS_ERR(ctx->reset_gpio)) { 229 + DRM_DEV_ERROR(dev, "cannot get reset gpio %ld\n", 230 + PTR_ERR(ctx->reset_gpio)); 231 + return PTR_ERR(ctx->reset_gpio); 232 + } 233 + 234 + drm_panel_init(&ctx->panel, dev, &visionox_rm69299_drm_funcs, 235 + DRM_MODE_CONNECTOR_DSI); 236 + ctx->panel.dev = dev; 237 + ctx->panel.funcs = &visionox_rm69299_drm_funcs; 238 + drm_panel_add(&ctx->panel); 239 + 240 + dsi->lanes = 4; 241 + dsi->format = MIPI_DSI_FMT_RGB888; 242 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_LPM | 243 + MIPI_DSI_CLOCK_NON_CONTINUOUS; 244 + ret = mipi_dsi_attach(dsi); 245 + if (ret < 0) { 246 + DRM_DEV_ERROR(dev, "dsi attach failed ret = %d\n", ret); 247 + goto err_dsi_attach; 248 + } 249 + 250 + ret = regulator_set_load(ctx->supplies[0].consumer, 32000); 251 + if (ret) { 252 + DRM_DEV_ERROR(dev, 253 + "regulator set load failed for vdda supply ret = %d\n", 254 + ret); 255 + goto err_set_load; 256 + } 257 + 258 + ret = regulator_set_load(ctx->supplies[1].consumer, 13200); 259 + if (ret) { 260 + DRM_DEV_ERROR(dev, 261 + "regulator set load failed for vdd3p3 supply ret = %d\n", 262 + ret); 263 + goto err_set_load; 264 + } 265 + 266 + return 0; 267 + 268 + err_set_load: 269 + mipi_dsi_detach(dsi); 270 + err_dsi_attach: 271 + drm_panel_remove(&ctx->panel); 272 + return ret; 273 + } 274 + 275 + static int visionox_rm69299_remove(struct mipi_dsi_device *dsi) 276 + { 277 + struct visionox_rm69299 *ctx = mipi_dsi_get_drvdata(dsi); 278 + 279 + 
mipi_dsi_detach(ctx->dsi); 280 + mipi_dsi_device_unregister(ctx->dsi); 281 + 282 + drm_panel_remove(&ctx->panel); 283 + return 0; 284 + } 285 + 286 + static const struct of_device_id visionox_rm69299_of_match[] = { 287 + { .compatible = "visionox,rm69299-1080p-display", }, 288 + { /* sentinel */ } 289 + }; 290 + MODULE_DEVICE_TABLE(of, visionox_rm69299_of_match); 291 + 292 + static struct mipi_dsi_driver visionox_rm69299_driver = { 293 + .driver = { 294 + .name = "panel-visionox-rm69299", 295 + .of_match_table = visionox_rm69299_of_match, 296 + }, 297 + .probe = visionox_rm69299_probe, 298 + .remove = visionox_rm69299_remove, 299 + }; 300 + module_mipi_dsi_driver(visionox_rm69299_driver); 301 + 302 + MODULE_DESCRIPTION("Visionox RM69299 DSI Panel Driver");
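The new visionox driver guards `prepare()` with a `prepared` flag so repeated calls skip the power-on and DCS init sequence. A reduced sketch of that idempotence guard, with a counter standing in for the expensive sequence:

```c
#include <stdbool.h>

struct panel_state {
	bool prepared;
	int power_on_count;	/* stands in for power-on + DCS init */
};

static int panel_prepare(struct panel_state *p)
{
	if (p->prepared)
		return 0;	/* already powered and initialized */
	p->power_on_count++;
	p->prepared = true;
	return 0;
}

static void panel_unprepare(struct panel_state *p)
{
	p->prepared = false;	/* next prepare() runs the sequence again */
}
```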
-1
drivers/gpu/drm/pl111/Makefile
··· 3 3 pl111_versatile.o \ 4 4 pl111_drv.o 5 5 6 - pl111_drm-$(CONFIG_ARCH_VEXPRESS) += pl111_vexpress.o 7 6 pl111_drm-$(CONFIG_ARCH_NOMADIK) += pl111_nomadik.o 8 7 pl111_drm-$(CONFIG_DEBUG_FS) += pl111_debugfs.o 9 8
+1
drivers/gpu/drm/pl111/pl111_drv.c
··· 444 444 }, 445 445 {0, 0}, 446 446 }; 447 + MODULE_DEVICE_TABLE(amba, pl111_id_table); 447 448 448 449 static struct amba_driver pl111_amba_driver __maybe_unused = { 449 450 .drv = {
+109 -39
drivers/gpu/drm/pl111/pl111_versatile.c
··· 8 8 #include <linux/of.h> 9 9 #include <linux/of_platform.h> 10 10 #include <linux/regmap.h> 11 + #include <linux/vexpress.h> 11 12 12 13 #include "pl111_versatile.h" 13 - #include "pl111_vexpress.h" 14 14 #include "pl111_drm.h" 15 15 16 16 static struct regmap *versatile_syscon_map; ··· 361 361 .broken_clockdivider = true, 362 362 }; 363 363 364 + #define VEXPRESS_FPGAMUX_MOTHERBOARD 0x00 365 + #define VEXPRESS_FPGAMUX_DAUGHTERBOARD_1 0x01 366 + #define VEXPRESS_FPGAMUX_DAUGHTERBOARD_2 0x02 367 + 368 + static int pl111_vexpress_clcd_init(struct device *dev, struct device_node *np, 369 + struct pl111_drm_dev_private *priv) 370 + { 371 + struct platform_device *pdev; 372 + struct device_node *root; 373 + struct device_node *child; 374 + struct device_node *ct_clcd = NULL; 375 + struct regmap *map; 376 + bool has_coretile_clcd = false; 377 + bool has_coretile_hdlcd = false; 378 + bool mux_motherboard = true; 379 + u32 val; 380 + int ret; 381 + 382 + if (!IS_ENABLED(CONFIG_VEXPRESS_CONFIG)) 383 + return -ENODEV; 384 + 385 + /* 386 + * Check if we have a CLCD or HDLCD on the core tile by checking if a 387 + * CLCD or HDLCD is available in the root of the device tree. 388 + */ 389 + root = of_find_node_by_path("/"); 390 + if (!root) 391 + return -EINVAL; 392 + 393 + for_each_available_child_of_node(root, child) { 394 + if (of_device_is_compatible(child, "arm,pl111")) { 395 + has_coretile_clcd = true; 396 + ct_clcd = child; 397 + break; 398 + } 399 + if (of_device_is_compatible(child, "arm,hdlcd")) { 400 + has_coretile_hdlcd = true; 401 + of_node_put(child); 402 + break; 403 + } 404 + } 405 + 406 + of_node_put(root); 407 + 408 + /* 409 + * If there is a coretile HDLCD and it has a driver, 410 + * do not mux the CLCD on the motherboard to the DVI. 
411 + */ 412 + if (has_coretile_hdlcd && IS_ENABLED(CONFIG_DRM_HDLCD)) 413 + mux_motherboard = false; 414 + 415 + /* 416 + * On the Vexpress CA9 we let the CLCD on the coretile 417 + * take precedence, so also in this case do not mux the 418 + * motherboard to the DVI. 419 + */ 420 + if (has_coretile_clcd) 421 + mux_motherboard = false; 422 + 423 + if (mux_motherboard) { 424 + dev_info(dev, "DVI muxed to motherboard CLCD\n"); 425 + val = VEXPRESS_FPGAMUX_MOTHERBOARD; 426 + } else if (ct_clcd == dev->of_node) { 427 + dev_info(dev, 428 + "DVI muxed to daughterboard 1 (core tile) CLCD\n"); 429 + val = VEXPRESS_FPGAMUX_DAUGHTERBOARD_1; 430 + } else { 431 + dev_info(dev, "core tile graphics present\n"); 432 + dev_info(dev, "this device will be deactivated\n"); 433 + return -ENODEV; 434 + } 435 + 436 + /* Call into deep Vexpress configuration API */ 437 + pdev = of_find_device_by_node(np); 438 + if (!pdev) { 439 + dev_err(dev, "can't find the sysreg device, deferring\n"); 440 + return -EPROBE_DEFER; 441 + } 442 + 443 + map = devm_regmap_init_vexpress_config(&pdev->dev); 444 + if (IS_ERR(map)) { 445 + platform_device_put(pdev); 446 + return PTR_ERR(map); 447 + } 448 + 449 + ret = regmap_write(map, 0, val); 450 + platform_device_put(pdev); 451 + if (ret) { 452 + dev_err(dev, "error setting DVI muxmode\n"); 453 + return -ENODEV; 454 + } 455 + 456 + priv->variant = &pl111_vexpress; 457 + dev_info(dev, "initializing Versatile Express PL111\n"); 458 + 459 + return 0; 460 + } 461 + 364 462 int pl111_versatile_init(struct device *dev, struct pl111_drm_dev_private *priv) 365 463 { 366 464 const struct of_device_id *clcd_id; 367 465 enum versatile_clcd versatile_clcd_type; 368 466 struct device_node *np; 369 467 struct regmap *map; 370 - int ret; 371 468 372 469 np = of_find_matching_node_and_match(NULL, versatile_clcd_of_match, 373 470 &clcd_id); ··· 474 377 } 475 378 476 379 versatile_clcd_type = (enum versatile_clcd)clcd_id->data; 380 + 381 + /* Versatile Express special 
handling */ 382 + if (versatile_clcd_type == VEXPRESS_CLCD_V2M) { 383 + int ret = pl111_vexpress_clcd_init(dev, np, priv); 384 + of_node_put(np); 385 + if (ret) 386 + dev_err(dev, "Versatile Express init failed - %d", ret); 387 + return ret; 388 + } 477 389 478 390 /* 479 391 * On the Integrator, check if we should use the IM-PD1 instead, ··· 496 390 versatile_clcd_type = (enum versatile_clcd)clcd_id->data; 497 391 } 498 392 499 - /* Versatile Express special handling */ 500 - if (versatile_clcd_type == VEXPRESS_CLCD_V2M) { 501 - struct platform_device *pdev; 502 - 503 - /* Registers a driver for the muxfpga */ 504 - ret = vexpress_muxfpga_init(); 505 - if (ret) { 506 - dev_err(dev, "unable to initialize muxfpga driver\n"); 507 - of_node_put(np); 508 - return ret; 509 - } 510 - 511 - /* Call into deep Vexpress configuration API */ 512 - pdev = of_find_device_by_node(np); 513 - if (!pdev) { 514 - dev_err(dev, "can't find the sysreg device, deferring\n"); 515 - of_node_put(np); 516 - return -EPROBE_DEFER; 517 - } 518 - map = dev_get_drvdata(&pdev->dev); 519 - if (!map) { 520 - dev_err(dev, "sysreg has not yet probed\n"); 521 - platform_device_put(pdev); 522 - of_node_put(np); 523 - return -EPROBE_DEFER; 524 - } 525 - } else { 526 - map = syscon_node_to_regmap(np); 527 - } 393 + map = syscon_node_to_regmap(np); 528 394 of_node_put(np); 529 - 530 395 if (IS_ERR(map)) { 531 396 dev_err(dev, "no Versatile syscon regmap\n"); 532 397 return PTR_ERR(map); ··· 542 465 priv->variant_display_enable = pl111_realview_clcd_enable; 543 466 priv->variant_display_disable = pl111_realview_clcd_disable; 544 467 dev_info(dev, "set up callbacks for RealView PL111\n"); 545 - break; 546 - case VEXPRESS_CLCD_V2M: 547 - priv->variant = &pl111_vexpress; 548 - dev_info(dev, "initializing Versatile Express PL111\n"); 549 - ret = pl111_vexpress_clcd_init(dev, priv, map); 550 - if (ret) 551 - return ret; 552 468 break; 553 469 default: 554 470 dev_info(dev, "unknown Versatile system 
controller\n");
-138
drivers/gpu/drm/pl111/pl111_vexpress.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Versatile Express PL111 handling 4 - * Copyright (C) 2018 Linus Walleij 5 - * 6 - * This module binds to the "arm,vexpress-muxfpga" device on the 7 - * Versatile Express configuration bus and sets up which CLCD instance 8 - * gets muxed out on the DVI bridge. 9 - */ 10 - #include <linux/device.h> 11 - #include <linux/module.h> 12 - #include <linux/regmap.h> 13 - #include <linux/vexpress.h> 14 - #include <linux/platform_device.h> 15 - #include <linux/of.h> 16 - #include <linux/of_address.h> 17 - #include <linux/of_platform.h> 18 - #include "pl111_drm.h" 19 - #include "pl111_vexpress.h" 20 - 21 - #define VEXPRESS_FPGAMUX_MOTHERBOARD 0x00 22 - #define VEXPRESS_FPGAMUX_DAUGHTERBOARD_1 0x01 23 - #define VEXPRESS_FPGAMUX_DAUGHTERBOARD_2 0x02 24 - 25 - int pl111_vexpress_clcd_init(struct device *dev, 26 - struct pl111_drm_dev_private *priv, 27 - struct regmap *map) 28 - { 29 - struct device_node *root; 30 - struct device_node *child; 31 - struct device_node *ct_clcd = NULL; 32 - bool has_coretile_clcd = false; 33 - bool has_coretile_hdlcd = false; 34 - bool mux_motherboard = true; 35 - u32 val; 36 - int ret; 37 - 38 - /* 39 - * Check if we have a CLCD or HDLCD on the core tile by checking if a 40 - * CLCD or HDLCD is available in the root of the device tree. 41 - */ 42 - root = of_find_node_by_path("/"); 43 - if (!root) 44 - return -EINVAL; 45 - 46 - for_each_available_child_of_node(root, child) { 47 - if (of_device_is_compatible(child, "arm,pl111")) { 48 - has_coretile_clcd = true; 49 - ct_clcd = child; 50 - break; 51 - } 52 - if (of_device_is_compatible(child, "arm,hdlcd")) { 53 - has_coretile_hdlcd = true; 54 - of_node_put(child); 55 - break; 56 - } 57 - } 58 - 59 - of_node_put(root); 60 - 61 - /* 62 - * If there is a coretile HDLCD and it has a driver, 63 - * do not mux the CLCD on the motherboard to the DVI. 
64 - */ 65 - if (has_coretile_hdlcd && IS_ENABLED(CONFIG_DRM_HDLCD)) 66 - mux_motherboard = false; 67 - 68 - /* 69 - * On the Vexpress CA9 we let the CLCD on the coretile 70 - * take precedence, so also in this case do not mux the 71 - * motherboard to the DVI. 72 - */ 73 - if (has_coretile_clcd) 74 - mux_motherboard = false; 75 - 76 - if (mux_motherboard) { 77 - dev_info(dev, "DVI muxed to motherboard CLCD\n"); 78 - val = VEXPRESS_FPGAMUX_MOTHERBOARD; 79 - } else if (ct_clcd == dev->of_node) { 80 - dev_info(dev, 81 - "DVI muxed to daughterboard 1 (core tile) CLCD\n"); 82 - val = VEXPRESS_FPGAMUX_DAUGHTERBOARD_1; 83 - } else { 84 - dev_info(dev, "core tile graphics present\n"); 85 - dev_info(dev, "this device will be deactivated\n"); 86 - return -ENODEV; 87 - } 88 - 89 - ret = regmap_write(map, 0, val); 90 - if (ret) { 91 - dev_err(dev, "error setting DVI muxmode\n"); 92 - return -ENODEV; 93 - } 94 - 95 - return 0; 96 - } 97 - 98 - /* 99 - * This sets up the regmap pointer that will then be retrieved by 100 - * the detection code in pl111_versatile.c and passed in to the 101 - * pl111_vexpress_clcd_init() function above. 
102 - */ 103 - static int vexpress_muxfpga_probe(struct platform_device *pdev) 104 - { 105 - struct device *dev = &pdev->dev; 106 - struct regmap *map; 107 - 108 - map = devm_regmap_init_vexpress_config(&pdev->dev); 109 - if (IS_ERR(map)) 110 - return PTR_ERR(map); 111 - dev_set_drvdata(dev, map); 112 - 113 - return 0; 114 - } 115 - 116 - static const struct of_device_id vexpress_muxfpga_match[] = { 117 - { .compatible = "arm,vexpress-muxfpga", }, 118 - {} 119 - }; 120 - 121 - static struct platform_driver vexpress_muxfpga_driver = { 122 - .driver = { 123 - .name = "vexpress-muxfpga", 124 - .of_match_table = of_match_ptr(vexpress_muxfpga_match), 125 - }, 126 - .probe = vexpress_muxfpga_probe, 127 - }; 128 - 129 - int vexpress_muxfpga_init(void) 130 - { 131 - int ret; 132 - 133 - ret = platform_driver_register(&vexpress_muxfpga_driver); 134 - /* -EBUSY just means this driver is already registered */ 135 - if (ret == -EBUSY) 136 - ret = 0; 137 - return ret; 138 - }
-29
drivers/gpu/drm/pl111/pl111_vexpress.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - 3 - struct device; 4 - struct pl111_drm_dev_private; 5 - struct regmap; 6 - 7 - #ifdef CONFIG_ARCH_VEXPRESS 8 - 9 - int pl111_vexpress_clcd_init(struct device *dev, 10 - struct pl111_drm_dev_private *priv, 11 - struct regmap *map); 12 - 13 - int vexpress_muxfpga_init(void); 14 - 15 - #else 16 - 17 - static inline int pl111_vexpress_clcd_init(struct device *dev, 18 - struct pl111_drm_dev_private *priv, 19 - struct regmap *map) 20 - { 21 - return -ENODEV; 22 - } 23 - 24 - static inline int vexpress_muxfpga_init(void) 25 - { 26 - return 0; 27 - } 28 - 29 - #endif
+3 -4
drivers/gpu/drm/qxl/qxl_debugfs.c
··· 39 39 qxl_debugfs_irq_received(struct seq_file *m, void *data) 40 40 { 41 41 struct drm_info_node *node = (struct drm_info_node *) m->private; 42 - struct qxl_device *qdev = node->minor->dev->dev_private; 42 + struct qxl_device *qdev = to_qxl(node->minor->dev); 43 43 44 44 seq_printf(m, "%d\n", atomic_read(&qdev->irq_received)); 45 45 seq_printf(m, "%d\n", atomic_read(&qdev->irq_received_display)); ··· 53 53 qxl_debugfs_buffers_info(struct seq_file *m, void *data) 54 54 { 55 55 struct drm_info_node *node = (struct drm_info_node *) m->private; 56 - struct qxl_device *qdev = node->minor->dev->dev_private; 56 + struct qxl_device *qdev = to_qxl(node->minor->dev); 57 57 struct qxl_bo *bo; 58 58 59 59 list_for_each_entry(bo, &qdev->gem.objects, list) { ··· 83 83 qxl_debugfs_init(struct drm_minor *minor) 84 84 { 85 85 #if defined(CONFIG_DEBUG_FS) 86 - struct qxl_device *dev = 87 - (struct qxl_device *) minor->dev->dev_private; 86 + struct qxl_device *dev = to_qxl(minor->dev); 88 87 89 88 drm_debugfs_create_files(qxl_debugfs_list, QXL_DEBUGFS_ENTRIES, 90 89 minor->debugfs_root, minor);
+16 -16
drivers/gpu/drm/qxl/qxl_display.c
··· 221 221 bool preferred) 222 222 { 223 223 struct drm_device *dev = connector->dev; 224 - struct qxl_device *qdev = dev->dev_private; 224 + struct qxl_device *qdev = to_qxl(dev); 225 225 struct drm_display_mode *mode = NULL; 226 226 int rc; 227 227 ··· 242 242 static int qxl_add_monitors_config_modes(struct drm_connector *connector) 243 243 { 244 244 struct drm_device *dev = connector->dev; 245 - struct qxl_device *qdev = dev->dev_private; 245 + struct qxl_device *qdev = to_qxl(dev); 246 246 struct qxl_output *output = drm_connector_to_qxl_output(connector); 247 247 int h = output->index; 248 248 struct qxl_head *head; ··· 310 310 const char *reason) 311 311 { 312 312 struct drm_device *dev = crtc->dev; 313 - struct qxl_device *qdev = dev->dev_private; 313 + struct qxl_device *qdev = to_qxl(dev); 314 314 struct qxl_crtc *qcrtc = to_qxl_crtc(crtc); 315 315 struct qxl_head head; 316 316 int oldcount, i = qcrtc->index; ··· 400 400 unsigned int num_clips) 401 401 { 402 402 /* TODO: vmwgfx where this was cribbed from had locking. Why? 
*/ 403 - struct qxl_device *qdev = fb->dev->dev_private; 403 + struct qxl_device *qdev = to_qxl(fb->dev); 404 404 struct drm_clip_rect norect; 405 405 struct qxl_bo *qobj; 406 406 bool is_primary; ··· 462 462 static int qxl_primary_atomic_check(struct drm_plane *plane, 463 463 struct drm_plane_state *state) 464 464 { 465 - struct qxl_device *qdev = plane->dev->dev_private; 465 + struct qxl_device *qdev = to_qxl(plane->dev); 466 466 struct qxl_bo *bo; 467 467 468 468 if (!state->crtc || !state->fb) ··· 476 476 static int qxl_primary_apply_cursor(struct drm_plane *plane) 477 477 { 478 478 struct drm_device *dev = plane->dev; 479 - struct qxl_device *qdev = dev->dev_private; 479 + struct qxl_device *qdev = to_qxl(dev); 480 480 struct drm_framebuffer *fb = plane->state->fb; 481 481 struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc); 482 482 struct qxl_cursor_cmd *cmd; ··· 523 523 static void qxl_primary_atomic_update(struct drm_plane *plane, 524 524 struct drm_plane_state *old_state) 525 525 { 526 - struct qxl_device *qdev = plane->dev->dev_private; 526 + struct qxl_device *qdev = to_qxl(plane->dev); 527 527 struct qxl_bo *bo = gem_to_qxl_bo(plane->state->fb->obj[0]); 528 528 struct qxl_bo *primary; 529 529 struct drm_clip_rect norect = { ··· 554 554 static void qxl_primary_atomic_disable(struct drm_plane *plane, 555 555 struct drm_plane_state *old_state) 556 556 { 557 - struct qxl_device *qdev = plane->dev->dev_private; 557 + struct qxl_device *qdev = to_qxl(plane->dev); 558 558 559 559 if (old_state->fb) { 560 560 struct qxl_bo *bo = gem_to_qxl_bo(old_state->fb->obj[0]); ··· 570 570 struct drm_plane_state *old_state) 571 571 { 572 572 struct drm_device *dev = plane->dev; 573 - struct qxl_device *qdev = dev->dev_private; 573 + struct qxl_device *qdev = to_qxl(dev); 574 574 struct drm_framebuffer *fb = plane->state->fb; 575 575 struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc); 576 576 struct qxl_release *release; ··· 679 679 static void 
qxl_cursor_atomic_disable(struct drm_plane *plane, 680 680 struct drm_plane_state *old_state) 681 681 { 682 - struct qxl_device *qdev = plane->dev->dev_private; 682 + struct qxl_device *qdev = to_qxl(plane->dev); 683 683 struct qxl_release *release; 684 684 struct qxl_cursor_cmd *cmd; 685 685 int ret; ··· 762 762 static int qxl_plane_prepare_fb(struct drm_plane *plane, 763 763 struct drm_plane_state *new_state) 764 764 { 765 - struct qxl_device *qdev = plane->dev->dev_private; 765 + struct qxl_device *qdev = to_qxl(plane->dev); 766 766 struct drm_gem_object *obj; 767 767 struct qxl_bo *user_bo; 768 768 struct qxl_surface surf; ··· 923 923 { 924 924 struct qxl_crtc *qxl_crtc; 925 925 struct drm_plane *primary, *cursor; 926 - struct qxl_device *qdev = dev->dev_private; 926 + struct qxl_device *qdev = to_qxl(dev); 927 927 int r; 928 928 929 929 qxl_crtc = kzalloc(sizeof(struct qxl_crtc), GFP_KERNEL); ··· 965 965 static int qxl_conn_get_modes(struct drm_connector *connector) 966 966 { 967 967 struct drm_device *dev = connector->dev; 968 - struct qxl_device *qdev = dev->dev_private; 968 + struct qxl_device *qdev = to_qxl(dev); 969 969 struct qxl_output *output = drm_connector_to_qxl_output(connector); 970 970 unsigned int pwidth = 1024; 971 971 unsigned int pheight = 768; ··· 991 991 struct drm_display_mode *mode) 992 992 { 993 993 struct drm_device *ddev = connector->dev; 994 - struct qxl_device *qdev = ddev->dev_private; 994 + struct qxl_device *qdev = to_qxl(ddev); 995 995 996 996 if (qxl_check_mode(qdev, mode->hdisplay, mode->vdisplay) != 0) 997 997 return MODE_BAD; ··· 1021 1021 struct qxl_output *output = 1022 1022 drm_connector_to_qxl_output(connector); 1023 1023 struct drm_device *ddev = connector->dev; 1024 - struct qxl_device *qdev = ddev->dev_private; 1024 + struct qxl_device *qdev = to_qxl(ddev); 1025 1025 bool connected = false; 1026 1026 1027 1027 /* The first monitor is always connected */ ··· 1071 1071 1072 1072 static int qdev_output_init(struct 
drm_device *dev, int num_output) 1073 1073 { 1074 - struct qxl_device *qdev = dev->dev_private; 1074 + struct qxl_device *qdev = to_qxl(dev); 1075 1075 struct qxl_output *qxl_output; 1076 1076 struct drm_connector *connector; 1077 1077 struct drm_encoder *encoder;
+12 -11
drivers/gpu/drm/qxl/qxl_drv.c
··· 81 81 return -EINVAL; /* TODO: ENODEV ? */ 82 82 } 83 83 84 - qdev = kzalloc(sizeof(struct qxl_device), GFP_KERNEL); 85 - if (!qdev) 84 + qdev = devm_drm_dev_alloc(&pdev->dev, &qxl_driver, 85 + struct qxl_device, ddev); 86 + if (IS_ERR(qdev)) { 87 + pr_err("Unable to init drm dev"); 86 88 return -ENOMEM; 89 + } 87 90 88 91 ret = pci_enable_device(pdev); 89 92 if (ret) 90 - goto free_dev; 93 + return ret; 91 94 92 95 ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "qxl"); 93 96 if (ret) ··· 104 101 } 105 102 } 106 103 107 - ret = qxl_device_init(qdev, &qxl_driver, pdev); 104 + ret = qxl_device_init(qdev, pdev); 108 105 if (ret) 109 106 goto put_vga; 110 107 ··· 131 128 vga_put(pdev, VGA_RSRC_LEGACY_IO); 132 129 disable_pci: 133 130 pci_disable_device(pdev); 134 - free_dev: 135 - kfree(qdev); 131 + 136 132 return ret; 137 133 } 138 134 139 135 static void qxl_drm_release(struct drm_device *dev) 140 136 { 141 - struct qxl_device *qdev = dev->dev_private; 137 + struct qxl_device *qdev = to_qxl(dev); 142 138 143 139 /* 144 140 * TODO: qxl_device_fini() call should be in qxl_pci_remove(), ··· 157 155 drm_atomic_helper_shutdown(dev); 158 156 if (is_vga(pdev)) 159 157 vga_put(pdev, VGA_RSRC_LEGACY_IO); 160 - drm_dev_put(dev); 161 158 } 162 159 163 160 DEFINE_DRM_GEM_FOPS(qxl_fops); ··· 164 163 static int qxl_drm_freeze(struct drm_device *dev) 165 164 { 166 165 struct pci_dev *pdev = dev->pdev; 167 - struct qxl_device *qdev = dev->dev_private; 166 + struct qxl_device *qdev = to_qxl(dev); 168 167 int ret; 169 168 170 169 ret = drm_mode_config_helper_suspend(dev); ··· 186 185 187 186 static int qxl_drm_resume(struct drm_device *dev, bool thaw) 188 187 { 189 - struct qxl_device *qdev = dev->dev_private; 188 + struct qxl_device *qdev = to_qxl(dev); 190 189 191 190 qdev->ram_header->int_mask = QXL_INTERRUPT_MASK; 192 191 if (!thaw) { ··· 245 244 { 246 245 struct pci_dev *pdev = to_pci_dev(dev); 247 246 struct drm_device *drm_dev = pci_get_drvdata(pdev); 248 - 
struct qxl_device *qdev = drm_dev->dev_private; 247 + struct qxl_device *qdev = to_qxl(drm_dev); 249 248 250 249 qxl_io_reset(qdev); 251 250 return qxl_drm_resume(drm_dev, false);
+3 -4
drivers/gpu/drm/qxl/qxl_drv.h
··· 192 192 193 193 int qxl_debugfs_fence_init(struct qxl_device *rdev); 194 194 195 - struct qxl_device; 196 - 197 195 struct qxl_device { 198 196 struct drm_device ddev; 199 197 ··· 271 273 int monitors_config_height; 272 274 }; 273 275 276 + #define to_qxl(dev) container_of(dev, struct qxl_device, ddev) 277 + 274 278 extern const struct drm_ioctl_desc qxl_ioctls[]; 275 279 extern int qxl_max_ioctl; 276 280 277 - int qxl_device_init(struct qxl_device *qdev, struct drm_driver *drv, 278 - struct pci_dev *pdev); 281 + int qxl_device_init(struct qxl_device *qdev, struct pci_dev *pdev); 279 282 void qxl_device_fini(struct qxl_device *qdev); 280 283 281 284 int qxl_modeset_init(struct qxl_device *qdev);
+1 -1
drivers/gpu/drm/qxl/qxl_dumb.c
··· 32 32 struct drm_device *dev, 33 33 struct drm_mode_create_dumb *args) 34 34 { 35 - struct qxl_device *qdev = dev->dev_private; 35 + struct qxl_device *qdev = to_qxl(dev); 36 36 struct qxl_bo *qobj; 37 37 uint32_t handle; 38 38 int r;
+1 -1
drivers/gpu/drm/qxl/qxl_gem.c
··· 34 34 struct qxl_device *qdev; 35 35 struct ttm_buffer_object *tbo; 36 36 37 - qdev = (struct qxl_device *)gobj->dev->dev_private; 37 + qdev = to_qxl(gobj->dev); 38 38 39 39 qxl_surface_evict(qdev, qobj, false); 40 40
+7 -7
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 36 36 static int qxl_alloc_ioctl(struct drm_device *dev, void *data, 37 37 struct drm_file *file_priv) 38 38 { 39 - struct qxl_device *qdev = dev->dev_private; 39 + struct qxl_device *qdev = to_qxl(dev); 40 40 struct drm_qxl_alloc *qxl_alloc = data; 41 41 int ret; 42 42 struct qxl_bo *qobj; ··· 64 64 static int qxl_map_ioctl(struct drm_device *dev, void *data, 65 65 struct drm_file *file_priv) 66 66 { 67 - struct qxl_device *qdev = dev->dev_private; 67 + struct qxl_device *qdev = to_qxl(dev); 68 68 struct drm_qxl_map *qxl_map = data; 69 69 70 70 return qxl_mode_dumb_mmap(file_priv, &qdev->ddev, qxl_map->handle, ··· 279 279 static int qxl_execbuffer_ioctl(struct drm_device *dev, void *data, 280 280 struct drm_file *file_priv) 281 281 { 282 - struct qxl_device *qdev = dev->dev_private; 282 + struct qxl_device *qdev = to_qxl(dev); 283 283 struct drm_qxl_execbuffer *execbuffer = data; 284 284 struct drm_qxl_command user_cmd; 285 285 int cmd_num; ··· 304 304 static int qxl_update_area_ioctl(struct drm_device *dev, void *data, 305 305 struct drm_file *file) 306 306 { 307 - struct qxl_device *qdev = dev->dev_private; 307 + struct qxl_device *qdev = to_qxl(dev); 308 308 struct drm_qxl_update_area *update_area = data; 309 309 struct qxl_rect area = {.left = update_area->left, 310 310 .top = update_area->top, ··· 354 354 static int qxl_getparam_ioctl(struct drm_device *dev, void *data, 355 355 struct drm_file *file_priv) 356 356 { 357 - struct qxl_device *qdev = dev->dev_private; 357 + struct qxl_device *qdev = to_qxl(dev); 358 358 struct drm_qxl_getparam *param = data; 359 359 360 360 switch (param->param) { ··· 373 373 static int qxl_clientcap_ioctl(struct drm_device *dev, void *data, 374 374 struct drm_file *file_priv) 375 375 { 376 - struct qxl_device *qdev = dev->dev_private; 376 + struct qxl_device *qdev = to_qxl(dev); 377 377 struct drm_qxl_clientcap *param = data; 378 378 int byte, idx; 379 379 ··· 394 394 static int qxl_alloc_surf_ioctl(struct drm_device *dev, 
void *data, 395 395 struct drm_file *file) 396 396 { 397 - struct qxl_device *qdev = dev->dev_private; 397 + struct qxl_device *qdev = to_qxl(dev); 398 398 struct drm_qxl_alloc_surf *param = data; 399 399 struct qxl_bo *qobj; 400 400 int handle;
+1 -1
drivers/gpu/drm/qxl/qxl_irq.c
··· 32 32 irqreturn_t qxl_irq_handler(int irq, void *arg) 33 33 { 34 34 struct drm_device *dev = (struct drm_device *) arg; 35 - struct qxl_device *qdev = (struct qxl_device *)dev->dev_private; 35 + struct qxl_device *qdev = to_qxl(dev); 36 36 uint32_t pending; 37 37 38 38 pending = xchg(&qdev->ram_header->int_pending, 0);
+1 -12
drivers/gpu/drm/qxl/qxl_kms.c
··· 108 108 } 109 109 110 110 int qxl_device_init(struct qxl_device *qdev, 111 - struct drm_driver *drv, 112 111 struct pci_dev *pdev) 113 112 { 114 113 int r, sb; 115 114 116 - r = drm_dev_init(&qdev->ddev, drv, &pdev->dev); 117 - if (r) { 118 - pr_err("Unable to init drm dev"); 119 - goto error; 120 - } 121 - 122 115 qdev->ddev.pdev = pdev; 123 116 pci_set_drvdata(pdev, &qdev->ddev); 124 - qdev->ddev.dev_private = qdev; 125 - drmm_add_final_kfree(&qdev->ddev, qdev); 126 117 127 118 mutex_init(&qdev->gem.mutex); 128 119 mutex_init(&qdev->update_area_mutex); ··· 129 138 qdev->vram_mapping = io_mapping_create_wc(qdev->vram_base, pci_resource_len(pdev, 0)); 130 139 if (!qdev->vram_mapping) { 131 140 pr_err("Unable to create vram_mapping"); 132 - r = -ENOMEM; 133 - goto error; 141 + return -ENOMEM; 134 142 } 135 143 136 144 if (pci_resource_len(pdev, 4) > 0) { ··· 283 293 io_mapping_free(qdev->surface_mapping); 284 294 vram_mapping_free: 285 295 io_mapping_free(qdev->vram_mapping); 286 - error: 287 296 return r; 288 297 } 289 298
+1 -1
drivers/gpu/drm/qxl/qxl_object.c
··· 33 33 struct qxl_device *qdev; 34 34 35 35 bo = to_qxl_bo(tbo); 36 - qdev = (struct qxl_device *)bo->tbo.base.dev->dev_private; 36 + qdev = to_qxl(bo->tbo.base.dev); 37 37 38 38 qxl_surface_evict(qdev, bo, false); 39 39 WARN_ON_ONCE(bo->map_count > 0);
+1 -1
drivers/gpu/drm/qxl/qxl_release.c
··· 243 243 return ret; 244 244 245 245 /* allocate a surface for reserved + validated buffers */ 246 - ret = qxl_bo_check_id(bo->tbo.base.dev->dev_private, bo); 246 + ret = qxl_bo_check_id(to_qxl(bo->tbo.base.dev), bo); 247 247 if (ret) 248 248 return ret; 249 249 return 0;
+1 -1
drivers/gpu/drm/qxl/qxl_ttm.c
··· 243 243 if (!qxl_ttm_bo_is_qxl_bo(bo)) 244 244 return; 245 245 qbo = to_qxl_bo(bo); 246 - qdev = qbo->tbo.base.dev->dev_private; 246 + qdev = to_qxl(qbo->tbo.base.dev); 247 247 248 248 if (bo->mem.mem_type == TTM_PL_PRIV && qbo->surface_id) 249 249 qxl_surface_evict(qdev, qbo, new_mem ? true : false);
+2 -2
drivers/gpu/drm/rockchip/cdn-dp-core.c
··· 1106 1106 .unbind = cdn_dp_unbind, 1107 1107 }; 1108 1108 1109 - int cdn_dp_suspend(struct device *dev) 1109 + static int cdn_dp_suspend(struct device *dev) 1110 1110 { 1111 1111 struct cdn_dp_device *dp = dev_get_drvdata(dev); 1112 1112 int ret = 0; ··· 1120 1120 return ret; 1121 1121 } 1122 1122 1123 - int cdn_dp_resume(struct device *dev) 1123 + static int cdn_dp_resume(struct device *dev) 1124 1124 { 1125 1125 struct cdn_dp_device *dp = dev_get_drvdata(dev); 1126 1126
+3 -3
drivers/gpu/drm/rockchip/cdn-dp-reg.c
··· 601 601 case YCBCR_4_2_0: 602 602 val[0] = 5; 603 603 break; 604 - }; 604 + } 605 605 606 606 switch (video->color_depth) { 607 607 case 6: ··· 619 619 case 16: 620 620 val[1] = 4; 621 621 break; 622 - }; 622 + } 623 623 624 624 msa_misc = 2 * val[0] + 32 * val[1] + 625 625 ((video->color_fmt == Y_ONLY) ? (1 << 14) : 0); ··· 700 700 case 16: 701 701 val = BCS_16; 702 702 break; 703 - }; 703 + } 704 704 705 705 val += video->color_fmt << 8; 706 706 ret = cdn_dp_reg_write(dp, DP_FRAMER_PXL_REPR, val);
+52 -50
drivers/gpu/drm/stm/ltdc.c
··· 42 42 43 43 #define MAX_IRQ 4 44 44 45 - #define MAX_ENDPOINTS 2 46 - 47 45 #define HWVER_10200 0x010200 48 46 #define HWVER_10300 0x010300 49 47 #define HWVER_20101 0x020101 ··· 1199 1201 struct ltdc_device *ldev = ddev->dev_private; 1200 1202 struct device *dev = ddev->dev; 1201 1203 struct device_node *np = dev->of_node; 1202 - struct drm_bridge *bridge[MAX_ENDPOINTS] = {NULL}; 1203 - struct drm_panel *panel[MAX_ENDPOINTS] = {NULL}; 1204 + struct drm_bridge *bridge; 1205 + struct drm_panel *panel; 1204 1206 struct drm_crtc *crtc; 1205 1207 struct reset_control *rstc; 1206 1208 struct resource *res; 1207 - int irq, ret, i, endpoint_not_ready = -ENODEV; 1209 + int irq, i, nb_endpoints; 1210 + int ret = -ENODEV; 1208 1211 1209 1212 DRM_DEBUG_DRIVER("\n"); 1210 1213 1211 - /* Get endpoints if any */ 1212 - for (i = 0; i < MAX_ENDPOINTS; i++) { 1213 - ret = drm_of_find_panel_or_bridge(np, 0, i, &panel[i], 1214 - &bridge[i]); 1215 - 1216 - /* 1217 - * If at least one endpoint is -EPROBE_DEFER, defer probing, 1218 - * else if at least one endpoint is ready, continue probing. 
1219 - */ 1220 - if (ret == -EPROBE_DEFER) 1221 - return ret; 1222 - else if (!ret) 1223 - endpoint_not_ready = 0; 1224 - } 1225 - 1226 - if (endpoint_not_ready) 1227 - return endpoint_not_ready; 1228 - 1229 - rstc = devm_reset_control_get_exclusive(dev, NULL); 1230 - 1231 - mutex_init(&ldev->err_lock); 1214 + /* Get number of endpoints */ 1215 + nb_endpoints = of_graph_get_endpoint_count(np); 1216 + if (!nb_endpoints) 1217 + return -ENODEV; 1232 1218 1233 1219 ldev->pixel_clk = devm_clk_get(dev, "lcd"); 1234 1220 if (IS_ERR(ldev->pixel_clk)) { ··· 1225 1243 DRM_ERROR("Unable to prepare pixel clock\n"); 1226 1244 return -ENODEV; 1227 1245 } 1246 + 1247 + /* Get endpoints if any */ 1248 + for (i = 0; i < nb_endpoints; i++) { 1249 + ret = drm_of_find_panel_or_bridge(np, 0, i, &panel, &bridge); 1250 + 1251 + /* 1252 + * If at least one endpoint is -ENODEV, continue probing, 1253 + * else if at least one endpoint returned an error 1254 + * (ie -EPROBE_DEFER) then stop probing. 1255 + */ 1256 + if (ret == -ENODEV) 1257 + continue; 1258 + else if (ret) 1259 + goto err; 1260 + 1261 + if (panel) { 1262 + bridge = drm_panel_bridge_add_typed(panel, 1263 + DRM_MODE_CONNECTOR_DPI); 1264 + if (IS_ERR(bridge)) { 1265 + DRM_ERROR("panel-bridge endpoint %d\n", i); 1266 + ret = PTR_ERR(bridge); 1267 + goto err; 1268 + } 1269 + } 1270 + 1271 + if (bridge) { 1272 + ret = ltdc_encoder_init(ddev, bridge); 1273 + if (ret) { 1274 + DRM_ERROR("init encoder endpoint %d\n", i); 1275 + goto err; 1276 + } 1277 + } 1278 + } 1279 + 1280 + rstc = devm_reset_control_get_exclusive(dev, NULL); 1281 + 1282 + mutex_init(&ldev->err_lock); 1228 1283 1229 1284 if (!IS_ERR(rstc)) { 1230 1285 reset_control_assert(rstc); ··· 1304 1285 DRM_ERROR("Failed to register LTDC interrupt\n"); 1305 1286 goto err; 1306 1287 } 1307 - } 1308 1288 1309 - /* Add endpoints panels or bridges if any */ 1310 - for (i = 0; i < MAX_ENDPOINTS; i++) { 1311 - if (panel[i]) { 1312 - bridge[i] = drm_panel_bridge_add_typed(panel[i], 
1313 - DRM_MODE_CONNECTOR_DPI); 1314 - if (IS_ERR(bridge[i])) { 1315 - DRM_ERROR("panel-bridge endpoint %d\n", i); 1316 - ret = PTR_ERR(bridge[i]); 1317 - goto err; 1318 - } 1319 - } 1320 - 1321 - if (bridge[i]) { 1322 - ret = ltdc_encoder_init(ddev, bridge[i]); 1323 - if (ret) { 1324 - DRM_ERROR("init encoder endpoint %d\n", i); 1325 - goto err; 1326 - } 1327 - } 1328 1289 } 1329 1290 1330 1291 crtc = devm_kzalloc(dev, sizeof(*crtc), GFP_KERNEL); ··· 1339 1340 1340 1341 return 0; 1341 1342 err: 1342 - for (i = 0; i < MAX_ENDPOINTS; i++) 1343 - drm_panel_bridge_remove(bridge[i]); 1343 + for (i = 0; i < nb_endpoints; i++) 1344 + drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i); 1344 1345 1345 1346 clk_disable_unprepare(ldev->pixel_clk); 1346 1347 ··· 1349 1350 1350 1351 void ltdc_unload(struct drm_device *ddev) 1351 1352 { 1352 - int i; 1353 + struct device *dev = ddev->dev; 1354 + int nb_endpoints, i; 1353 1355 1354 1356 DRM_DEBUG_DRIVER("\n"); 1355 1357 1356 - for (i = 0; i < MAX_ENDPOINTS; i++) 1358 + nb_endpoints = of_graph_get_endpoint_count(dev->of_node); 1359 + 1360 + for (i = 0; i < nb_endpoints; i++) 1357 1361 drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i); 1358 1362 1359 1363 pm_runtime_disable(ddev->dev);
+8 -8
drivers/gpu/drm/tidss/tidss_crtc.c
··· 24 24 static void tidss_crtc_finish_page_flip(struct tidss_crtc *tcrtc) 25 25 { 26 26 struct drm_device *ddev = tcrtc->crtc.dev; 27 - struct tidss_device *tidss = ddev->dev_private; 27 + struct tidss_device *tidss = to_tidss(ddev); 28 28 struct drm_pending_vblank_event *event; 29 29 unsigned long flags; 30 30 bool busy; ··· 88 88 struct drm_crtc_state *state) 89 89 { 90 90 struct drm_device *ddev = crtc->dev; 91 - struct tidss_device *tidss = ddev->dev_private; 91 + struct tidss_device *tidss = to_tidss(ddev); 92 92 struct dispc_device *dispc = tidss->dispc; 93 93 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 94 94 u32 hw_videoport = tcrtc->hw_videoport; ··· 165 165 { 166 166 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 167 167 struct drm_device *ddev = crtc->dev; 168 - struct tidss_device *tidss = ddev->dev_private; 168 + struct tidss_device *tidss = to_tidss(ddev); 169 169 unsigned long flags; 170 170 171 171 dev_dbg(ddev->dev, ··· 216 216 { 217 217 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 218 218 struct drm_device *ddev = crtc->dev; 219 - struct tidss_device *tidss = ddev->dev_private; 219 + struct tidss_device *tidss = to_tidss(ddev); 220 220 const struct drm_display_mode *mode = &crtc->state->adjusted_mode; 221 221 unsigned long flags; 222 222 int r; ··· 259 259 { 260 260 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 261 261 struct drm_device *ddev = crtc->dev; 262 - struct tidss_device *tidss = ddev->dev_private; 262 + struct tidss_device *tidss = to_tidss(ddev); 263 263 unsigned long flags; 264 264 265 265 dev_dbg(ddev->dev, "%s, event %p\n", __func__, crtc->state->event); ··· 295 295 { 296 296 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 297 297 struct drm_device *ddev = crtc->dev; 298 - struct tidss_device *tidss = ddev->dev_private; 298 + struct tidss_device *tidss = to_tidss(ddev); 299 299 300 300 return dispc_vp_mode_valid(tidss->dispc, tcrtc->hw_videoport, mode); 301 301 } ··· 314 314 static int tidss_crtc_enable_vblank(struct 
drm_crtc *crtc) 315 315 { 316 316 struct drm_device *ddev = crtc->dev; 317 - struct tidss_device *tidss = ddev->dev_private; 317 + struct tidss_device *tidss = to_tidss(ddev); 318 318 319 319 dev_dbg(ddev->dev, "%s\n", __func__); 320 320 ··· 328 328 static void tidss_crtc_disable_vblank(struct drm_crtc *crtc) 329 329 { 330 330 struct drm_device *ddev = crtc->dev; 331 - struct tidss_device *tidss = ddev->dev_private; 331 + struct tidss_device *tidss = to_tidss(ddev); 332 332 333 333 dev_dbg(ddev->dev, "%s\n", __func__); 334 334
+2 -9
drivers/gpu/drm/tidss/tidss_dispc.c
··· 181 181 .vid_name = { "vid", "vidl1" }, 182 182 .vid_lite = { false, true, }, 183 183 .vid_order = { 1, 0 }, 184 - 185 - .errata = { 186 - .i2000 = true, 187 - }, 188 184 }; 189 185 190 186 static const u16 tidss_j721e_common_regs[DISPC_COMMON_REG_TABLE_LEN] = { ··· 2670 2674 return -ENOMEM; 2671 2675 2672 2676 num_fourccs = 0; 2673 - for (i = 0; i < ARRAY_SIZE(dispc_color_formats); ++i) { 2674 - if (feat->errata.i2000 && 2675 - dispc_fourcc_is_yuv(dispc_color_formats[i].fourcc)) 2676 - continue; 2677 + for (i = 0; i < ARRAY_SIZE(dispc_color_formats); ++i) 2677 2678 dispc->fourccs[num_fourccs++] = dispc_color_formats[i].fourcc; 2678 - } 2679 + 2679 2680 dispc->num_fourccs = num_fourccs; 2680 2681 dispc->tidss = tidss; 2681 2682 dispc->dev = dev;
-6
drivers/gpu/drm/tidss/tidss_dispc.h
··· 46 46 u32 xinc_max; 47 47 }; 48 48 49 - struct dispc_errata { 50 - bool i2000; /* DSS Does Not Support YUV Pixel Data Formats */ 51 - }; 52 - 53 49 enum dispc_vp_bus_type { 54 50 DISPC_VP_DPI, /* DPI output */ 55 51 DISPC_VP_OLDI, /* OLDI (LVDS) output */ ··· 79 83 const char *vid_name[TIDSS_MAX_PLANES]; /* Should match dt reg names */ 80 84 bool vid_lite[TIDSS_MAX_PLANES]; 81 85 u32 vid_order[TIDSS_MAX_PLANES]; 82 - 83 - struct dispc_errata errata; 84 86 }; 85 87 86 88 extern const struct dispc_features dispc_k2g_feats;
+4 -13
drivers/gpu/drm/tidss/tidss_drv.c
··· 135 135 136 136 dev_dbg(dev, "%s\n", __func__); 137 137 138 - /* Can't use devm_* since drm_device's lifetime may exceed dev's */ 139 - tidss = kzalloc(sizeof(*tidss), GFP_KERNEL); 140 - if (!tidss) 141 - return -ENOMEM; 138 + tidss = devm_drm_dev_alloc(&pdev->dev, &tidss_driver, 139 + struct tidss_device, ddev); 140 + if (IS_ERR(tidss)) 141 + return PTR_ERR(tidss); 142 142 143 143 ddev = &tidss->ddev; 144 - 145 - ret = devm_drm_dev_init(&pdev->dev, ddev, &tidss_driver); 146 - if (ret) { 147 - kfree(ddev); 148 - return ret; 149 - } 150 - drmm_add_final_kfree(ddev, tidss); 151 144 152 145 tidss->dev = dev; 153 146 tidss->feat = of_device_get_match_data(dev); 154 147 155 148 platform_set_drvdata(pdev, tidss); 156 - 157 - ddev->dev_private = tidss; 158 149 159 150 ret = dispc_init(tidss); 160 151 if (ret) {
+2 -2
drivers/gpu/drm/tidss/tidss_drv.h
··· 29 29 30 30 spinlock_t wait_lock; /* protects the irq masks */ 31 31 dispc_irq_t irq_mask; /* enabled irqs in addition to wait_list */ 32 - 33 - struct drm_atomic_state *saved_state; 34 32 }; 33 + 34 + #define to_tidss(__dev) container_of(__dev, struct tidss_device, ddev) 35 35 36 36 int tidss_runtime_get(struct tidss_device *tidss); 37 37 void tidss_runtime_put(struct tidss_device *tidss);
+6 -6
drivers/gpu/drm/tidss/tidss_irq.c
··· 23 23 void tidss_irq_enable_vblank(struct drm_crtc *crtc) 24 24 { 25 25 struct drm_device *ddev = crtc->dev; 26 - struct tidss_device *tidss = ddev->dev_private; 26 + struct tidss_device *tidss = to_tidss(ddev); 27 27 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 28 28 u32 hw_videoport = tcrtc->hw_videoport; 29 29 unsigned long flags; ··· 38 38 void tidss_irq_disable_vblank(struct drm_crtc *crtc) 39 39 { 40 40 struct drm_device *ddev = crtc->dev; 41 - struct tidss_device *tidss = ddev->dev_private; 41 + struct tidss_device *tidss = to_tidss(ddev); 42 42 struct tidss_crtc *tcrtc = to_tidss_crtc(crtc); 43 43 u32 hw_videoport = tcrtc->hw_videoport; 44 44 unsigned long flags; ··· 53 53 irqreturn_t tidss_irq_handler(int irq, void *arg) 54 54 { 55 55 struct drm_device *ddev = (struct drm_device *)arg; 56 - struct tidss_device *tidss = ddev->dev_private; 56 + struct tidss_device *tidss = to_tidss(ddev); 57 57 unsigned int id; 58 58 dispc_irq_t irqstatus; 59 59 ··· 95 95 96 96 void tidss_irq_preinstall(struct drm_device *ddev) 97 97 { 98 - struct tidss_device *tidss = ddev->dev_private; 98 + struct tidss_device *tidss = to_tidss(ddev); 99 99 100 100 spin_lock_init(&tidss->wait_lock); 101 101 ··· 109 109 110 110 int tidss_irq_postinstall(struct drm_device *ddev) 111 111 { 112 - struct tidss_device *tidss = ddev->dev_private; 112 + struct tidss_device *tidss = to_tidss(ddev); 113 113 unsigned long flags; 114 114 unsigned int i; 115 115 ··· 138 138 139 139 void tidss_irq_uninstall(struct drm_device *ddev) 140 140 { 141 - struct tidss_device *tidss = ddev->dev_private; 141 + struct tidss_device *tidss = to_tidss(ddev); 142 142 143 143 tidss_runtime_get(tidss); 144 144 dispc_set_irqenable(tidss->dispc, 0);
+1 -1
drivers/gpu/drm/tidss/tidss_kms.c
··· 25 25 static void tidss_atomic_commit_tail(struct drm_atomic_state *old_state) 26 26 { 27 27 struct drm_device *ddev = old_state->dev; 28 - struct tidss_device *tidss = ddev->dev_private; 28 + struct tidss_device *tidss = to_tidss(ddev); 29 29 30 30 dev_dbg(ddev->dev, "%s\n", __func__); 31 31
+3 -3
drivers/gpu/drm/tidss/tidss_plane.c
··· 22 22 struct drm_plane_state *state) 23 23 { 24 24 struct drm_device *ddev = plane->dev; 25 - struct tidss_device *tidss = ddev->dev_private; 25 + struct tidss_device *tidss = to_tidss(ddev); 26 26 struct tidss_plane *tplane = to_tidss_plane(plane); 27 27 const struct drm_format_info *finfo; 28 28 struct drm_crtc_state *crtc_state; ··· 101 101 struct drm_plane_state *old_state) 102 102 { 103 103 struct drm_device *ddev = plane->dev; 104 - struct tidss_device *tidss = ddev->dev_private; 104 + struct tidss_device *tidss = to_tidss(ddev); 105 105 struct tidss_plane *tplane = to_tidss_plane(plane); 106 106 struct drm_plane_state *state = plane->state; 107 107 u32 hw_videoport; ··· 133 133 struct drm_plane_state *old_state) 134 134 { 135 135 struct drm_device *ddev = plane->dev; 136 - struct tidss_device *tidss = ddev->dev_private; 136 + struct tidss_device *tidss = to_tidss(ddev); 137 137 struct tidss_plane *tplane = to_tidss_plane(plane); 138 138 139 139 dev_dbg(ddev->dev, "%s\n", __func__);
+19
drivers/gpu/drm/tiny/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + config DRM_CIRRUS_QEMU 4 + tristate "Cirrus driver for QEMU emulated device" 5 + depends on DRM && PCI && MMU 6 + select DRM_KMS_HELPER 7 + select DRM_GEM_SHMEM_HELPER 8 + help 9 + This is a KMS driver for emulated cirrus device in qemu. 10 + It is *NOT* intended for real cirrus devices. This requires 11 + the modesetting userspace X.org driver. 12 + 13 + Cirrus is obsolete, the hardware was designed in the 90ies 14 + and can't keep up with todays needs. More background: 15 + https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ 16 + 17 + Better alternatives are: 18 + - stdvga (DRM_BOCHS, qemu -vga std, default in qemu 2.2+) 19 + - qxl (DRM_QXL, qemu -vga qxl, works best with spice) 20 + - virtio (DRM_VIRTIO_GPU, qemu -vga virtio) 21 + 3 22 config DRM_GM12U320 4 23 tristate "GM12U320 driver for USB projectors" 5 24 depends on DRM && USB
+1
drivers/gpu/drm/tiny/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + obj-$(CONFIG_DRM_CIRRUS_QEMU) += cirrus.o 3 4 obj-$(CONFIG_DRM_GM12U320) += gm12u320.o 4 5 obj-$(CONFIG_TINYDRM_HX8357D) += hx8357d.o 5 6 obj-$(CONFIG_TINYDRM_ILI9225) += ili9225.o
+10 -14
drivers/gpu/drm/tiny/gm12u320.c
··· 98 98 } fb_update; 99 99 }; 100 100 101 + #define to_gm12u320(__dev) container_of(__dev, struct gm12u320_device, dev) 102 + 101 103 static const char cmd_data[CMD_SIZE] = { 102 104 0x55, 0x53, 0x42, 0x43, 0x00, 0x00, 0x00, 0x00, 103 105 0x68, 0xfc, 0x00, 0x00, 0x00, 0x00, 0x10, 0xff, ··· 410 408 static void gm12u320_fb_mark_dirty(struct drm_framebuffer *fb, 411 409 struct drm_rect *dirty) 412 410 { 413 - struct gm12u320_device *gm12u320 = fb->dev->dev_private; 411 + struct gm12u320_device *gm12u320 = to_gm12u320(fb->dev); 414 412 struct drm_framebuffer *old_fb = NULL; 415 413 bool wakeup = false; 416 414 ··· 560 558 struct drm_plane_state *plane_state) 561 559 { 562 560 struct drm_rect rect = { 0, 0, GM12U320_USER_WIDTH, GM12U320_HEIGHT }; 563 - struct gm12u320_device *gm12u320 = pipe->crtc.dev->dev_private; 561 + struct gm12u320_device *gm12u320 = to_gm12u320(pipe->crtc.dev); 564 562 565 563 gm12u320->fb_update.draw_status_timeout = FIRST_FRAME_TIMEOUT; 566 564 gm12u320_fb_mark_dirty(plane_state->fb, &rect); ··· 568 566 569 567 static void gm12u320_pipe_disable(struct drm_simple_display_pipe *pipe) 570 568 { 571 - struct gm12u320_device *gm12u320 = pipe->crtc.dev->dev_private; 569 + struct gm12u320_device *gm12u320 = to_gm12u320(pipe->crtc.dev); 572 570 573 571 gm12u320_stop_fb_update(gm12u320); 574 572 } ··· 633 631 if (interface->cur_altsetting->desc.bInterfaceNumber != 0) 634 632 return -ENODEV; 635 633 636 - gm12u320 = kzalloc(sizeof(*gm12u320), GFP_KERNEL); 637 - if (gm12u320 == NULL) 638 - return -ENOMEM; 634 + gm12u320 = devm_drm_dev_alloc(&interface->dev, &gm12u320_drm_driver, 635 + struct gm12u320_device, dev); 636 + if (IS_ERR(gm12u320)) 637 + return PTR_ERR(gm12u320); 639 638 640 639 gm12u320->udev = interface_to_usbdev(interface); 641 640 INIT_DELAYED_WORK(&gm12u320->fb_update.work, gm12u320_fb_update_work); 642 641 mutex_init(&gm12u320->fb_update.lock); 643 642 644 643 dev = &gm12u320->dev; 645 - ret = devm_drm_dev_init(&interface->dev, dev, &gm12u320_drm_driver); 646 - if (ret) { 647 - kfree(gm12u320); 648 - return ret; 649 - } 650 - dev->dev_private = gm12u320; 651 - drmm_add_final_kfree(dev, gm12u320); 652 644 653 645 ret = drmm_mode_config_init(dev); 654 646 if (ret) ··· 707 711 static __maybe_unused int gm12u320_resume(struct usb_interface *interface) 708 712 { 709 713 struct drm_device *dev = usb_get_intfdata(interface); 710 - struct gm12u320_device *gm12u320 = dev->dev_private; 714 + struct gm12u320_device *gm12u320 = to_gm12u320(dev); 711 715 712 716 gm12u320_set_ecomode(gm12u320); 713 717
+4 -9
drivers/gpu/drm/tiny/hx8357d.c
··· 226 226 u32 rotation = 0; 227 227 int ret; 228 228 229 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 230 - if (!dbidev) 231 - return -ENOMEM; 229 + dbidev = devm_drm_dev_alloc(dev, &hx8357d_driver, 230 + struct mipi_dbi_dev, drm); 231 + if (IS_ERR(dbidev)) 232 + return PTR_ERR(dbidev); 232 233 233 234 drm = &dbidev->drm; 234 - ret = devm_drm_dev_init(dev, drm, &hx8357d_driver); 235 - if (ret) { 236 - kfree(dbidev); 237 - return ret; 238 - } 239 - drmm_add_final_kfree(drm, dbidev); 240 235 241 236 dc = devm_gpiod_get(dev, "dc", GPIOD_OUT_LOW); 242 237 if (IS_ERR(dc)) {
+4 -9
drivers/gpu/drm/tiny/ili9225.c
··· 376 376 u32 rotation = 0; 377 377 int ret; 378 378 379 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 380 - if (!dbidev) 381 - return -ENOMEM; 379 + dbidev = devm_drm_dev_alloc(dev, &ili9225_driver, 380 + struct mipi_dbi_dev, drm); 381 + if (IS_ERR(dbidev)) 382 + return PTR_ERR(dbidev); 382 383 383 384 dbi = &dbidev->dbi; 384 385 drm = &dbidev->drm; 385 - ret = devm_drm_dev_init(dev, drm, &ili9225_driver); 386 - if (ret) { 387 - kfree(dbidev); 388 - return ret; 389 - } 390 - drmm_add_final_kfree(drm, dbidev); 391 386 392 387 dbi->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 393 388 if (IS_ERR(dbi->reset)) {
+4 -9
drivers/gpu/drm/tiny/ili9341.c
··· 183 183 u32 rotation = 0; 184 184 int ret; 185 185 186 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 187 - if (!dbidev) 188 - return -ENOMEM; 186 + dbidev = devm_drm_dev_alloc(dev, &ili9341_driver, 187 + struct mipi_dbi_dev, drm); 188 + if (IS_ERR(dbidev)) 189 + return PTR_ERR(dbidev); 189 190 190 191 dbi = &dbidev->dbi; 191 192 drm = &dbidev->drm; 192 - ret = devm_drm_dev_init(dev, drm, &ili9341_driver); 193 - if (ret) { 194 - kfree(dbidev); 195 - return ret; 196 - } 197 - drmm_add_final_kfree(drm, dbidev); 198 193 199 194 dbi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 200 195 if (IS_ERR(dbi->reset)) {
+4 -9
drivers/gpu/drm/tiny/ili9486.c
··· 197 197 u32 rotation = 0; 198 198 int ret; 199 199 200 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 201 - if (!dbidev) 202 - return -ENOMEM; 200 + dbidev = devm_drm_dev_alloc(dev, &ili9486_driver, 201 + struct mipi_dbi_dev, drm); 202 + if (IS_ERR(dbidev)) 203 + return PTR_ERR(dbidev); 203 204 204 205 dbi = &dbidev->dbi; 205 206 drm = &dbidev->drm; 206 - ret = devm_drm_dev_init(dev, drm, &ili9486_driver); 207 - if (ret) { 208 - kfree(dbidev); 209 - return ret; 210 - } 211 - drmm_add_final_kfree(drm, dbidev); 212 207 213 208 dbi->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 214 209 if (IS_ERR(dbi->reset)) {
+4 -9
drivers/gpu/drm/tiny/mi0283qt.c
··· 187 187 u32 rotation = 0; 188 188 int ret; 189 189 190 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 191 - if (!dbidev) 192 - return -ENOMEM; 190 + dbidev = devm_drm_dev_alloc(dev, &mi0283qt_driver, 191 + struct mipi_dbi_dev, drm); 192 + if (IS_ERR(dbidev)) 193 + return PTR_ERR(dbidev); 193 194 194 195 dbi = &dbidev->dbi; 195 196 drm = &dbidev->drm; 196 - ret = devm_drm_dev_init(dev, drm, &mi0283qt_driver); 197 - if (ret) { 198 - kfree(dbidev); 199 - return ret; 200 - } 201 - drmm_add_final_kfree(drm, dbidev); 202 197 203 198 dbi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 204 199 if (IS_ERR(dbi->reset)) {
+4 -10
drivers/gpu/drm/tiny/repaper.c
··· 1002 1002 } 1003 1003 } 1004 1004 1005 - epd = kzalloc(sizeof(*epd), GFP_KERNEL); 1006 - if (!epd) 1007 - return -ENOMEM; 1005 + epd = devm_drm_dev_alloc(dev, &repaper_driver, 1006 + struct repaper_epd, drm); 1007 + if (IS_ERR(epd)) 1008 + return PTR_ERR(epd); 1008 1009 1009 1010 drm = &epd->drm; 1010 - 1011 - ret = devm_drm_dev_init(dev, drm, &repaper_driver); 1012 - if (ret) { 1013 - kfree(epd); 1014 - return ret; 1015 - } 1016 - drmm_add_final_kfree(drm, epd); 1017 1011 1018 1012 ret = drmm_mode_config_init(drm); 1019 1013 if (ret)
+4 -9
drivers/gpu/drm/tiny/st7586.c
··· 317 317 size_t bufsize; 318 318 int ret; 319 319 320 - dbidev = kzalloc(sizeof(*dbidev), GFP_KERNEL); 321 - if (!dbidev) 322 - return -ENOMEM; 320 + dbidev = devm_drm_dev_alloc(dev, &st7586_driver, 321 + struct mipi_dbi_dev, drm); 322 + if (IS_ERR(dbidev)) 323 + return PTR_ERR(dbidev); 323 324 324 325 dbi = &dbidev->dbi; 325 326 drm = &dbidev->drm; 326 - ret = devm_drm_dev_init(dev, drm, &st7586_driver); 327 - if (ret) { 328 - kfree(dbidev); 329 - return ret; 330 - } 331 - drmm_add_final_kfree(drm, dbidev); 332 327 333 328 bufsize = (st7586_mode.vdisplay + 2) / 3 * st7586_mode.hdisplay; 334 329
+4 -9
drivers/gpu/drm/tiny/st7735r.c
··· 195 195 if (!cfg) 196 196 cfg = (void *)spi_get_device_id(spi)->driver_data; 197 197 198 - priv = kzalloc(sizeof(*priv), GFP_KERNEL); 199 - if (!priv) 200 - return -ENOMEM; 198 + priv = devm_drm_dev_alloc(dev, &st7735r_driver, 199 + struct st7735r_priv, dbidev.drm); 200 + if (IS_ERR(priv)) 201 + return PTR_ERR(priv); 201 202 202 203 dbidev = &priv->dbidev; 203 204 priv->cfg = cfg; 204 205 205 206 dbi = &dbidev->dbi; 206 207 drm = &dbidev->drm; 207 - ret = devm_drm_dev_init(dev, drm, &st7735r_driver); 208 - if (ret) { 209 - kfree(dbidev); 210 - return ret; 211 - } 212 - drmm_add_final_kfree(drm, dbidev); 213 208 214 209 dbi->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 215 210 if (IS_ERR(dbi->reset)) {
+2 -2
drivers/gpu/drm/udl/udl_connector.c
··· 59 59 static enum drm_mode_status udl_mode_valid(struct drm_connector *connector, 60 60 struct drm_display_mode *mode) 61 61 { 62 - struct udl_device *udl = connector->dev->dev_private; 62 + struct udl_device *udl = to_udl(connector->dev); 63 63 if (!udl->sku_pixel_limit) 64 64 return 0; 65 65 ··· 72 72 static enum drm_connector_status 73 73 udl_detect(struct drm_connector *connector, bool force) 74 74 { 75 - struct udl_device *udl = connector->dev->dev_private; 75 + struct udl_device *udl = to_udl(connector->dev); 76 76 struct udl_drm_connector *udl_connector = 77 77 container_of(connector, 78 78 struct udl_drm_connector,
+7 -20
drivers/gpu/drm/udl/udl_drv.c
··· 57 57 struct udl_device *udl; 58 58 int r; 59 59 60 - udl = kzalloc(sizeof(*udl), GFP_KERNEL); 61 - if (!udl) 62 - return ERR_PTR(-ENOMEM); 63 - 64 - r = drm_dev_init(&udl->drm, &driver, &interface->dev); 65 - if (r) { 66 - kfree(udl); 67 - return ERR_PTR(r); 68 - } 60 + udl = devm_drm_dev_alloc(&interface->dev, &driver, 61 + struct udl_device, drm); 62 + if (IS_ERR(udl)) 63 + return udl; 69 64 70 65 udl->udev = udev; 71 - udl->drm.dev_private = udl; 72 - drmm_add_final_kfree(&udl->drm, udl); 73 66 74 67 r = udl_init(udl); 75 - if (r) { 76 - drm_dev_put(&udl->drm); 68 + if (r) 77 69 return ERR_PTR(r); 78 - } 79 70 80 71 usb_set_intfdata(interface, udl); 72 + 81 73 return udl; 82 74 } 83 75 ··· 85 93 86 94 r = drm_dev_register(&udl->drm, 0); 87 95 if (r) 88 - goto err_free; 96 + return r; 89 97 90 98 DRM_INFO("Initialized udl on minor %d\n", udl->drm.primary->index); 91 99 92 100 drm_fbdev_generic_setup(&udl->drm, 0); 93 101 94 102 return 0; 95 - 96 - err_free: 97 - drm_dev_put(&udl->drm); 98 - return r; 99 103 } 100 104 101 105 static void udl_usb_disconnect(struct usb_interface *interface) ··· 101 113 drm_kms_helper_poll_fini(dev); 102 114 udl_drop_usb(dev); 103 115 drm_dev_unplug(dev); 104 - drm_dev_put(dev); 105 116 } 106 117 107 118 /*
+5 -5
drivers/gpu/drm/udl/udl_modeset.c
··· 215 215 static int udl_crtc_write_mode_to_hw(struct drm_crtc *crtc) 216 216 { 217 217 struct drm_device *dev = crtc->dev; 218 - struct udl_device *udl = dev->dev_private; 218 + struct udl_device *udl = to_udl(dev); 219 219 struct urb *urb; 220 220 char *buf; 221 221 int retval; ··· 266 266 return 0; 267 267 } 268 268 269 - int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, 270 - int width, int height) 269 + static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, 270 + int width, int height) 271 271 { 272 272 struct drm_device *dev = fb->dev; 273 273 struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach; ··· 369 369 struct drm_crtc *crtc = &pipe->crtc; 370 370 struct drm_device *dev = crtc->dev; 371 371 struct drm_framebuffer *fb = plane_state->fb; 372 - struct udl_device *udl = dev->dev_private; 372 + struct udl_device *udl = to_udl(dev); 373 373 struct drm_display_mode *mode = &crtc_state->mode; 374 374 char *buf; 375 375 char *wrptr; ··· 464 464 int udl_modeset_init(struct drm_device *dev) 465 465 { 466 466 size_t format_count = ARRAY_SIZE(udl_simple_display_pipe_formats); 467 - struct udl_device *udl = dev->dev_private; 467 + struct udl_device *udl = to_udl(dev); 468 468 struct drm_connector *connector; 469 469 int ret; 470 470
+6 -6
drivers/gpu/drm/v3d/v3d_debugfs.c
··· 132 132 u32 ident0, ident1, ident2, ident3, cores; 133 133 int ret, core; 134 134 135 - ret = pm_runtime_get_sync(v3d->dev); 135 + ret = pm_runtime_get_sync(v3d->drm.dev); 136 136 if (ret < 0) 137 137 return ret; 138 138 ··· 187 187 (misccfg & V3D_MISCCFG_OVRTMUOUT) != 0); 188 188 } 189 189 190 - pm_runtime_mark_last_busy(v3d->dev); 191 - pm_runtime_put_autosuspend(v3d->dev); 190 + pm_runtime_mark_last_busy(v3d->drm.dev); 191 + pm_runtime_put_autosuspend(v3d->drm.dev); 192 192 193 193 return 0; 194 194 } ··· 219 219 int measure_ms = 1000; 220 220 int ret; 221 221 222 - ret = pm_runtime_get_sync(v3d->dev); 222 + ret = pm_runtime_get_sync(v3d->drm.dev); 223 223 if (ret < 0) 224 224 return ret; 225 225 ··· 245 245 cycles / (measure_ms * 1000), 246 246 (cycles / (measure_ms * 100)) % 10); 247 247 248 - pm_runtime_mark_last_busy(v3d->dev); 249 - pm_runtime_put_autosuspend(v3d->dev); 248 + pm_runtime_mark_last_busy(v3d->drm.dev); 249 + pm_runtime_put_autosuspend(v3d->drm.dev); 250 250 251 251 return 0; 252 252 }
+17 -30
drivers/gpu/drm/v3d/v3d_drv.c
··· 105 105 if (args->value != 0) 106 106 return -EINVAL; 107 107 108 - ret = pm_runtime_get_sync(v3d->dev); 108 + ret = pm_runtime_get_sync(v3d->drm.dev); 109 109 if (ret < 0) 110 110 return ret; 111 111 if (args->param >= DRM_V3D_PARAM_V3D_CORE0_IDENT0 && ··· 114 114 } else { 115 115 args->value = V3D_READ(offset); 116 116 } 117 - pm_runtime_mark_last_busy(v3d->dev); 118 - pm_runtime_put_autosuspend(v3d->dev); 117 + pm_runtime_mark_last_busy(v3d->drm.dev); 118 + pm_runtime_put_autosuspend(v3d->drm.dev); 119 119 return 0; 120 120 } 121 121 ··· 235 235 map_regs(struct v3d_dev *v3d, void __iomem **regs, const char *name) 236 236 { 237 237 struct resource *res = 238 - platform_get_resource_byname(v3d->pdev, IORESOURCE_MEM, name); 238 + platform_get_resource_byname(v3d_to_pdev(v3d), IORESOURCE_MEM, name); 239 239 240 - *regs = devm_ioremap_resource(v3d->dev, res); 240 + *regs = devm_ioremap_resource(v3d->drm.dev, res); 241 241 return PTR_ERR_OR_ZERO(*regs); 242 242 } 243 243 ··· 251 251 u32 ident1; 252 252 253 253 254 - v3d = kzalloc(sizeof(*v3d), GFP_KERNEL); 255 - if (!v3d) 256 - return -ENOMEM; 257 - v3d->dev = dev; 258 - v3d->pdev = pdev; 254 + v3d = devm_drm_dev_alloc(dev, &v3d_drm_driver, struct v3d_dev, drm); 255 + if (IS_ERR(v3d)) 256 + return PTR_ERR(v3d); 257 + 259 258 drm = &v3d->drm; 260 259 261 - ret = drm_dev_init(&v3d->drm, &v3d_drm_driver, dev); 262 - if (ret) { 263 - kfree(v3d); 264 - return ret; 265 - } 266 - 267 260 platform_set_drvdata(pdev, drm); 268 - drm->dev_private = v3d; 269 - drmm_add_final_kfree(drm, v3d); 270 261 271 262 ret = map_regs(v3d, &v3d->hub_regs, "hub"); 272 263 if (ret) 273 - goto dev_destroy; 264 + return ret; 274 265 275 266 ret = map_regs(v3d, &v3d->core_regs[0], "core0"); 276 267 if (ret) 277 - goto dev_destroy; 268 + return ret; 278 269 279 270 mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO); 280 271 dev->coherent_dma_mask = ··· 283 292 ret = PTR_ERR(v3d->reset); 284 293 285 294 if (ret == -EPROBE_DEFER) 286 - goto dev_destroy; 295 + return ret; 287 296 288 297 v3d->reset = NULL; 289 298 ret = map_regs(v3d, &v3d->bridge_regs, "bridge"); 290 299 if (ret) { 291 300 dev_err(dev, 292 301 "Failed to get reset control or bridge regs\n"); 293 - goto dev_destroy; 302 + return ret; 294 303 } 295 304 } 296 305 297 306 if (v3d->ver < 41) { 298 307 ret = map_regs(v3d, &v3d->gca_regs, "gca"); 299 308 if (ret) 300 - goto dev_destroy; 309 + return ret; 301 310 } 302 311 303 312 v3d->mmu_scratch = dma_alloc_wc(dev, 4096, &v3d->mmu_scratch_paddr, 304 313 GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 305 314 if (!v3d->mmu_scratch) { 306 315 dev_err(dev, "Failed to allocate MMU scratch page\n"); 307 - ret = -ENOMEM; 308 - goto dev_destroy; 316 + return -ENOMEM; 309 317 } 310 318 311 319 pm_runtime_use_autosuspend(dev); ··· 331 341 v3d_gem_destroy(drm); 332 342 dma_free: 333 343 dma_free_wc(dev, 4096, v3d->mmu_scratch, v3d->mmu_scratch_paddr); 334 - dev_destroy: 335 - drm_dev_put(drm); 336 344 return ret; 337 345 } 338 346 ··· 343 355 344 356 v3d_gem_destroy(drm); 345 357 346 - drm_dev_put(drm); 347 - 348 - dma_free_wc(v3d->dev, 4096, v3d->mmu_scratch, v3d->mmu_scratch_paddr); 358 + dma_free_wc(v3d->drm.dev, 4096, v3d->mmu_scratch, 359 + v3d->mmu_scratch_paddr); 349 360 350 361 return 0; 351 362 }
+3 -4
drivers/gpu/drm/v3d/v3d_drv.h
··· 14 14 #include "uapi/drm/v3d_drm.h" 15 15 16 16 struct clk; 17 - struct device; 18 17 struct platform_device; 19 18 struct reset_control; 20 19 ··· 46 47 int ver; 47 48 bool single_irq_line; 48 49 49 - struct device *dev; 50 - struct platform_device *pdev; 51 50 void __iomem *hub_regs; 52 51 void __iomem *core_regs[3]; 53 52 void __iomem *bridge_regs; ··· 118 121 static inline struct v3d_dev * 119 122 to_v3d_dev(struct drm_device *dev) 120 123 { 121 - return (struct v3d_dev *)dev->dev_private; 124 + return container_of(dev, struct v3d_dev, drm); 122 125 } 123 126 124 127 static inline bool ··· 126 129 { 127 130 return v3d->ver >= 41; 128 131 } 132 + 133 + #define v3d_to_pdev(v3d) to_platform_device((v3d)->drm.dev) 129 134 130 135 /* The per-fd struct, which tracks the MMU mappings. */ 131 136 struct v3d_file_priv {
+9 -8
drivers/gpu/drm/v3d/v3d_gem.c
··· 370 370 dma_fence_put(job->irq_fence); 371 371 dma_fence_put(job->done_fence); 372 372 373 - pm_runtime_mark_last_busy(job->v3d->dev); 374 - pm_runtime_put_autosuspend(job->v3d->dev); 373 + pm_runtime_mark_last_busy(job->v3d->drm.dev); 374 + pm_runtime_put_autosuspend(job->v3d->drm.dev); 375 375 376 376 kfree(job); 377 377 } ··· 439 439 job->v3d = v3d; 440 440 job->free = free; 441 441 442 - ret = pm_runtime_get_sync(v3d->dev); 442 + ret = pm_runtime_get_sync(v3d->drm.dev); 443 443 if (ret < 0) 444 444 return ret; 445 445 ··· 458 458 return 0; 459 459 fail: 460 460 xa_destroy(&job->deps); 461 - pm_runtime_put_autosuspend(v3d->dev); 461 + pm_runtime_put_autosuspend(v3d->drm.dev); 462 462 return ret; 463 463 } 464 464 ··· 886 886 */ 887 887 drm_mm_init(&v3d->mm, 1, pt_size / sizeof(u32) - 1); 888 888 889 - v3d->pt = dma_alloc_wc(v3d->dev, pt_size, 889 + v3d->pt = dma_alloc_wc(v3d->drm.dev, pt_size, 890 890 &v3d->pt_paddr, 891 891 GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 892 892 if (!v3d->pt) { 893 893 drm_mm_takedown(&v3d->mm); 894 - dev_err(v3d->dev, 894 + dev_err(v3d->drm.dev, 895 895 "Failed to allocate page tables. " 896 896 "Please ensure you have CMA enabled.\n"); 897 897 return -ENOMEM; ··· 903 903 ret = v3d_sched_init(v3d); 904 904 if (ret) { 905 905 drm_mm_takedown(&v3d->mm); 906 - dma_free_coherent(v3d->dev, 4096 * 1024, (void *)v3d->pt, 906 + dma_free_coherent(v3d->drm.dev, 4096 * 1024, (void *)v3d->pt, 907 907 v3d->pt_paddr); 908 908 } 909 909 ··· 925 925 926 926 drm_mm_takedown(&v3d->mm); 927 927 928 - dma_free_coherent(v3d->dev, 4096 * 1024, (void *)v3d->pt, v3d->pt_paddr); 928 + dma_free_coherent(v3d->drm.dev, 4096 * 1024, (void *)v3d->pt, 929 + v3d->pt_paddr); 929 930 }
+9 -7
drivers/gpu/drm/v3d/v3d_irq.c
··· 128 128 * always-allowed mode. 129 129 */ 130 130 if (intsts & V3D_INT_GMPV) 131 - dev_err(v3d->dev, "GMP violation\n"); 131 + dev_err(v3d->drm.dev, "GMP violation\n"); 132 132 133 133 /* V3D 4.2 wires the hub and core IRQs together, so if we & 134 134 * didn't see the common one then check hub for MMU IRQs. ··· 189 189 client = v3d41_axi_ids[axi_id]; 190 190 } 191 191 192 - dev_err(v3d->dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n", 192 + dev_err(v3d->drm.dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n", 193 193 client, axi_id, (long long)vio_addr, 194 194 ((intsts & V3D_HUB_INT_MMU_WRV) ? 195 195 ", write violation" : ""), ··· 217 217 V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS); 218 218 V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS); 219 219 220 - irq1 = platform_get_irq(v3d->pdev, 1); 220 + irq1 = platform_get_irq(v3d_to_pdev(v3d), 1); 221 221 if (irq1 == -EPROBE_DEFER) 222 222 return irq1; 223 223 if (irq1 > 0) { 224 - ret = devm_request_irq(v3d->dev, irq1, 224 + ret = devm_request_irq(v3d->drm.dev, irq1, 225 225 v3d_irq, IRQF_SHARED, 226 226 "v3d_core0", v3d); 227 227 if (ret) 228 228 goto fail; 229 - ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 0), 229 + ret = devm_request_irq(v3d->drm.dev, 230 + platform_get_irq(v3d_to_pdev(v3d), 0), 230 231 v3d_hub_irq, IRQF_SHARED, 231 232 "v3d_hub", v3d); 232 233 if (ret) ··· 235 234 } else { 236 235 v3d->single_irq_line = true; 237 236 238 - ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 0), 237 + ret = devm_request_irq(v3d->drm.dev, 238 + platform_get_irq(v3d_to_pdev(v3d), 0), 239 239 v3d_irq, IRQF_SHARED, 240 240 "v3d", v3d); 241 241 if (ret) ··· 248 246 249 247 fail: 250 248 if (ret != -EPROBE_DEFER) 251 - dev_err(v3d->dev, "IRQ setup failed: %d\n", ret); 249 + dev_err(v3d->drm.dev, "IRQ setup failed: %d\n", ret); 252 250 return ret; 253 251 } 254 252
+5 -5
drivers/gpu/drm/v3d/v3d_mmu.c
··· 40 40 ret = wait_for(!(V3D_READ(V3D_MMU_CTL) & 41 41 V3D_MMU_CTL_TLB_CLEARING), 100); 42 42 if (ret) 43 - dev_err(v3d->dev, "TLB clear wait idle pre-wait failed\n"); 43 + dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n"); 44 44 45 45 V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) | 46 46 V3D_MMU_CTL_TLB_CLEAR); ··· 52 52 ret = wait_for(!(V3D_READ(V3D_MMU_CTL) & 53 53 V3D_MMU_CTL_TLB_CLEARING), 100); 54 54 if (ret) { 55 - dev_err(v3d->dev, "TLB clear wait idle failed\n"); 55 + dev_err(v3d->drm.dev, "TLB clear wait idle failed\n"); 56 56 return ret; 57 57 } 58 58 59 59 ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) & 60 60 V3D_MMUC_CONTROL_FLUSHING), 100); 61 61 if (ret) 62 - dev_err(v3d->dev, "MMUC flush wait idle failed\n"); 62 + dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n"); 63 63 64 64 return ret; 65 65 } ··· 109 109 shmem_obj->base.size >> V3D_MMU_PAGE_SHIFT); 110 110 111 111 if (v3d_mmu_flush_all(v3d)) 112 - dev_err(v3d->dev, "MMU flush timeout\n"); 112 + dev_err(v3d->drm.dev, "MMU flush timeout\n"); 113 113 } 114 114 115 115 void v3d_mmu_remove_ptes(struct v3d_bo *bo) ··· 122 122 v3d->pt[page] = 0; 123 123 124 124 if (v3d_mmu_flush_all(v3d)) 125 - dev_err(v3d->dev, "MMU flush timeout\n"); 125 + dev_err(v3d->drm.dev, "MMU flush timeout\n"); 126 126 }
+5 -5
drivers/gpu/drm/v3d/v3d_sched.c
··· 403 403 msecs_to_jiffies(hang_limit_ms), 404 404 "v3d_bin"); 405 405 if (ret) { 406 - dev_err(v3d->dev, "Failed to create bin scheduler: %d.", ret); 406 + dev_err(v3d->drm.dev, "Failed to create bin scheduler: %d.", ret); 407 407 return ret; 408 408 } 409 409 ··· 413 413 msecs_to_jiffies(hang_limit_ms), 414 414 "v3d_render"); 415 415 if (ret) { 416 - dev_err(v3d->dev, "Failed to create render scheduler: %d.", 416 + dev_err(v3d->drm.dev, "Failed to create render scheduler: %d.", 417 417 ret); 418 418 v3d_sched_fini(v3d); 419 419 return ret; ··· 425 425 msecs_to_jiffies(hang_limit_ms), 426 426 "v3d_tfu"); 427 427 if (ret) { 428 - dev_err(v3d->dev, "Failed to create TFU scheduler: %d.", 428 + dev_err(v3d->drm.dev, "Failed to create TFU scheduler: %d.", 429 429 ret); 430 430 v3d_sched_fini(v3d); 431 431 return ret; ··· 438 438 msecs_to_jiffies(hang_limit_ms), 439 439 "v3d_csd"); 440 440 if (ret) { 441 - dev_err(v3d->dev, "Failed to create CSD scheduler: %d.", 441 + dev_err(v3d->drm.dev, "Failed to create CSD scheduler: %d.", 442 442 ret); 443 443 v3d_sched_fini(v3d); 444 444 return ret; ··· 450 450 msecs_to_jiffies(hang_limit_ms), 451 451 "v3d_cache_clean"); 452 452 if (ret) { 453 - dev_err(v3d->dev, "Failed to create CACHE_CLEAN scheduler: %d.", 453 + dev_err(v3d->drm.dev, "Failed to create CACHE_CLEAN scheduler: %d.", 454 454 ret); 455 455 v3d_sched_fini(v3d); 456 456 return ret;
+7 -19
drivers/gpu/drm/vboxvideo/vbox_drv.c
··· 46 46 if (ret) 47 47 return ret; 48 48 49 - vbox = kzalloc(sizeof(*vbox), GFP_KERNEL); 50 - if (!vbox) 51 - return -ENOMEM; 52 - 53 - ret = drm_dev_init(&vbox->ddev, &driver, &pdev->dev); 54 - if (ret) { 55 - kfree(vbox); 56 - return ret; 57 - } 49 + vbox = devm_drm_dev_alloc(&pdev->dev, &driver, 50 + struct vbox_private, ddev); 51 + if (IS_ERR(vbox)) 52 + return PTR_ERR(vbox); 58 53 59 54 vbox->ddev.pdev = pdev; 60 - vbox->ddev.dev_private = vbox; 61 55 pci_set_drvdata(pdev, vbox); 62 - drmm_add_final_kfree(&vbox->ddev, vbox); 63 56 mutex_init(&vbox->hw_mutex); 64 57 65 - ret = pci_enable_device(pdev); 58 + ret = pcim_enable_device(pdev); 66 59 if (ret) 67 - goto err_dev_put; 60 + return ret; 68 61 69 62 ret = vbox_hw_init(vbox); 70 63 if (ret) 71 - goto err_pci_disable; 64 + return ret; 72 65 73 66 ret = vbox_mm_init(vbox); 74 67 if (ret) ··· 91 98 vbox_mm_fini(vbox); 92 99 err_hw_fini: 93 100 vbox_hw_fini(vbox); 94 - err_pci_disable: 95 - pci_disable_device(pdev); 96 - err_dev_put: 97 - drm_dev_put(&vbox->ddev); 98 101 return ret; 99 102 } 100 103 ··· 103 114 vbox_mode_fini(vbox); 104 115 vbox_mm_fini(vbox); 105 116 vbox_hw_fini(vbox); 106 - drm_dev_put(&vbox->ddev); 107 117 } 108 118 109 119 #ifdef CONFIG_PM_SLEEP
+1
drivers/gpu/drm/vboxvideo/vbox_drv.h
··· 127 127 #define to_vbox_crtc(x) container_of(x, struct vbox_crtc, base) 128 128 #define to_vbox_connector(x) container_of(x, struct vbox_connector, base) 129 129 #define to_vbox_encoder(x) container_of(x, struct vbox_encoder, base) 130 + #define to_vbox_dev(x) container_of(x, struct vbox_private, ddev) 130 131 131 132 bool vbox_check_supported(u16 id); 132 133 int vbox_hw_init(struct vbox_private *vbox);
+1 -1
drivers/gpu/drm/vboxvideo/vbox_irq.c
··· 34 34 irqreturn_t vbox_irq_handler(int irq, void *arg) 35 35 { 36 36 struct drm_device *dev = (struct drm_device *)arg; 37 - struct vbox_private *vbox = (struct vbox_private *)dev->dev_private; 37 + struct vbox_private *vbox = to_vbox_dev(dev); 38 38 u32 host_flags = vbox_get_flags(vbox); 39 39 40 40 if (!(host_flags & HGSMIHOSTFLAGS_IRQ))
+9 -20
drivers/gpu/drm/vboxvideo/vbox_main.c
··· 71 71 72 72 for (i = 0; i < vbox->num_crtcs; ++i) 73 73 vbva_disable(&vbox->vbva_info[i], vbox->guest_pool, i); 74 - 75 - pci_iounmap(vbox->ddev.pdev, vbox->vbva_buffers); 77 75 78 76 /* Do we support the 4.3 plus mode hint reporting interface? */ ··· 121 123 return -ENOMEM; 122 124 123 125 /* Create guest-heap mem-pool use 2^4 = 16 byte chunks */ 124 - vbox->guest_pool = gen_pool_create(4, -1); 126 + vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1, 127 + "vboxvideo-accel"); 125 128 if (!vbox->guest_pool) 126 - goto err_unmap_guest_heap; 129 + return -ENOMEM; 127 130 128 131 ret = gen_pool_add_virt(vbox->guest_pool, 129 132 (unsigned long)vbox->guest_heap, 130 133 GUEST_HEAP_OFFSET(vbox), 131 134 GUEST_HEAP_USABLE_SIZE, -1); 132 135 if (ret) 133 - goto err_destroy_guest_pool; 136 + return ret; 134 137 135 138 ret = hgsmi_test_query_conf(vbox->guest_pool); 136 139 if (ret) { 137 140 DRM_ERROR("vboxvideo: hgsmi_test_query_conf failed\n"); 138 - goto err_destroy_guest_pool; 141 + return ret; 139 142 } 140 143 141 144 /* Reduce available VRAM size to reflect the guest heap. */ ··· 148 149 149 150 if (!have_hgsmi_mode_hints(vbox)) { 150 151 ret = -ENOTSUPP; 151 - goto err_destroy_guest_pool; 152 + return ret; 152 153 } 153 154 154 155 vbox->last_mode_hints = devm_kcalloc(vbox->ddev.dev, vbox->num_crtcs, 155 156 sizeof(struct vbva_modehint), 156 157 GFP_KERNEL); 157 - if (!vbox->last_mode_hints) { 158 - ret = -ENOMEM; 159 - goto err_destroy_guest_pool; 160 - } 158 + if (!vbox->last_mode_hints) 159 + return -ENOMEM; 161 160 162 161 ret = vbox_accel_init(vbox); 163 162 if (ret) 164 - goto err_destroy_guest_pool; 163 + return ret; 165 164 166 165 return 0; 167 - 168 - err_destroy_guest_pool: 169 - gen_pool_destroy(vbox->guest_pool); 170 - err_unmap_guest_heap: 171 - pci_iounmap(vbox->ddev.pdev, vbox->guest_heap); 172 - return ret; 173 166 } 174 167 175 168 void vbox_hw_fini(struct vbox_private *vbox) 176 169 { 177 170 vbox_accel_fini(vbox); 178 - gen_pool_destroy(vbox->guest_pool); 179 - pci_iounmap(vbox->ddev.pdev, vbox->guest_heap); 180 172 }
+5 -5
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 36 36 u16 flags; 37 37 s32 x_offset, y_offset; 38 38 39 - vbox = crtc->dev->dev_private; 39 + vbox = to_vbox_dev(crtc->dev); 40 40 width = vbox_crtc->width ? vbox_crtc->width : 640; 41 41 height = vbox_crtc->height ? vbox_crtc->height : 480; 42 42 bpp = fb ? fb->format->cpp[0] * 8 : 32; ··· 77 77 static int vbox_set_view(struct drm_crtc *crtc) 78 78 { 79 79 struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc); 80 - struct vbox_private *vbox = crtc->dev->dev_private; 80 + struct vbox_private *vbox = to_vbox_dev(crtc->dev); 81 81 struct vbva_infoview *p; 82 82 83 83 /* ··· 174 174 int x, int y) 175 175 { 176 176 struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); 177 - struct vbox_private *vbox = crtc->dev->dev_private; 177 + struct vbox_private *vbox = to_vbox_dev(crtc->dev); 178 178 struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc); 179 179 bool needs_modeset = drm_atomic_crtc_needs_modeset(crtc->state); 180 180 ··· 272 272 { 273 273 struct drm_crtc *crtc = plane->state->crtc; 274 274 struct drm_framebuffer *fb = plane->state->fb; 275 - struct vbox_private *vbox = fb->dev->dev_private; 275 + struct vbox_private *vbox = to_vbox_dev(fb->dev); 276 276 struct drm_mode_rect *clips; 277 277 uint32_t num_clips, i; 278 278 ··· 704 704 int preferred_width, preferred_height; 705 705 706 706 vbox_connector = to_vbox_connector(connector); 707 - vbox = connector->dev->dev_private; 707 + vbox = to_vbox_dev(connector->dev); 708 708 709 709 hgsmi_report_flags_location(vbox->guest_pool, GUEST_HEAP_OFFSET(vbox) + 710 710 HOST_FLAGS_OFFSET);
-12
drivers/gpu/drm/vboxvideo/vbox_ttm.c
··· 24 24 return ret; 25 25 } 26 26 27 - #ifdef DRM_MTRR_WC 28 - vbox->fb_mtrr = drm_mtrr_add(pci_resource_start(dev->pdev, 0), 29 - pci_resource_len(dev->pdev, 0), 30 - DRM_MTRR_WC); 31 - #else 32 27 vbox->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0), 33 28 pci_resource_len(dev->pdev, 0)); 34 - #endif 35 29 return 0; 36 30 } 37 31 38 32 void vbox_mm_fini(struct vbox_private *vbox) 39 33 { 40 - #ifdef DRM_MTRR_WC 41 - drm_mtrr_del(vbox->fb_mtrr, 42 - pci_resource_start(vbox->ddev.pdev, 0), 43 - pci_resource_len(vbox->ddev.pdev, 0), DRM_MTRR_WC); 44 - #else 45 34 arch_phys_wc_del(vbox->fb_mtrr); 46 - #endif 47 35 drm_vram_helper_release_mm(&vbox->ddev); 48 36 }
+1 -1
drivers/gpu/drm/vkms/vkms_drv.c
··· 35 35 36 36 static struct vkms_device *vkms_device; 37 37 38 - bool enable_cursor; 38 + bool enable_cursor = true; 39 39 module_param_named(enable_cursor, enable_cursor, bool, 0444); 40 40 MODULE_PARM_DESC(enable_cursor, "Enable/Disable cursor support"); 41 41
-5
drivers/gpu/drm/vkms/vkms_drv.h
···
 			   enum drm_plane_type type, int index);

 /* Gem stuff */
-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
-				       struct drm_file *file,
-				       u32 *handle,
-				       u64 size);
-
 vm_fault_t vkms_gem_fault(struct vm_fault *vmf);

 int vkms_dumb_create(struct drm_file *file, struct drm_device *dev,
+6 -5
drivers/gpu/drm/vkms/vkms_gem.c
···
 	return ret;
 }

-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
-				       struct drm_file *file,
-				       u32 *handle,
-				       u64 size)
+static struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+					      struct drm_file *file,
+					      u32 *handle,
+					      u64 size)
 {
 	struct vkms_gem_object *obj;
 	int ret;
···
 		return ERR_CAST(obj);

 	ret = drm_gem_handle_create(file, &obj->gem, handle);
-	drm_gem_object_put_unlocked(&obj->gem);
 	if (ret)
 		return ERR_PTR(ret);
···

 	args->size = gem_obj->size;
 	args->pitch = pitch;
+
+	drm_gem_object_put_unlocked(gem_obj);

 	DRM_DEBUG_DRIVER("Created object of size %lld\n", size);

+2 -2
drivers/video/fbdev/aty/atyfb_base.c
···

 	while ((this_opt = strsep(&options, ",")) != NULL) {
 		if (!strncmp(this_opt, "noaccel", 7)) {
-			noaccel = 1;
+			noaccel = true;
 		} else if (!strncmp(this_opt, "nomtrr", 6)) {
-			nomtrr = 1;
+			nomtrr = true;
 		} else if (!strncmp(this_opt, "vram:", 5))
 			vram = simple_strtoul(this_opt + 5, NULL, 0);
 		else if (!strncmp(this_opt, "pll:", 4))
+1 -1
drivers/video/fbdev/controlfb.c
···
 #include "macmodes.h"
 #include "controlfb.h"

-#ifndef CONFIG_PPC_PMAC
+#if !defined(CONFIG_PPC_PMAC) || !defined(CONFIG_PPC32)
 #define invalid_vram_cache(addr)
 #undef in_8
 #undef out_8
+5 -5
drivers/video/fbdev/i810/i810_main.c
···

 	while ((this_opt = strsep(&options, ",")) != NULL) {
 		if (!strncmp(this_opt, "mtrr", 4))
-			mtrr = 1;
+			mtrr = true;
 		else if (!strncmp(this_opt, "accel", 5))
-			accel = 1;
+			accel = true;
 		else if (!strncmp(this_opt, "extvga", 6))
-			extvga = 1;
+			extvga = true;
 		else if (!strncmp(this_opt, "sync", 4))
-			sync = 1;
+			sync = true;
 		else if (!strncmp(this_opt, "vram:", 5))
 			vram = (simple_strtoul(this_opt+5, NULL, 0));
 		else if (!strncmp(this_opt, "voffset:", 8))
···
 		else if (!strncmp(this_opt, "vsync2:", 7))
 			vsync2 = simple_strtoul(this_opt+7, NULL, 0);
 		else if (!strncmp(this_opt, "dcolor", 6))
-			dcolor = 1;
+			dcolor = true;
 		else if (!strncmp(this_opt, "ddc3", 4))
 			ddc3 = true;
 		else
-18
drivers/video/fbdev/riva/riva_hw.c
···
 /*
  * Load fixed function state and pre-calculated/stored state.
  */
-#if 0
-#define LOAD_FIXED_STATE(tbl,dev)                                       \
-	for (i = 0; i < sizeof(tbl##Table##dev)/8; i++)                 \
-		chip->dev[tbl##Table##dev[i][0]] = tbl##Table##dev[i][1]
-#define LOAD_FIXED_STATE_8BPP(tbl,dev)                                  \
-	for (i = 0; i < sizeof(tbl##Table##dev##_8BPP)/8; i++)          \
-		chip->dev[tbl##Table##dev##_8BPP[i][0]] = tbl##Table##dev##_8BPP[i][1]
-#define LOAD_FIXED_STATE_15BPP(tbl,dev)                                 \
-	for (i = 0; i < sizeof(tbl##Table##dev##_15BPP)/8; i++)         \
-		chip->dev[tbl##Table##dev##_15BPP[i][0]] = tbl##Table##dev##_15BPP[i][1]
-#define LOAD_FIXED_STATE_16BPP(tbl,dev)                                 \
-	for (i = 0; i < sizeof(tbl##Table##dev##_16BPP)/8; i++)         \
-		chip->dev[tbl##Table##dev##_16BPP[i][0]] = tbl##Table##dev##_16BPP[i][1]
-#define LOAD_FIXED_STATE_32BPP(tbl,dev)                                 \
-	for (i = 0; i < sizeof(tbl##Table##dev##_32BPP)/8; i++)         \
-		chip->dev[tbl##Table##dev##_32BPP[i][0]] = tbl##Table##dev##_32BPP[i][1]
-#endif
-
 #define LOAD_FIXED_STATE(tbl,dev)                                       \
 	for (i = 0; i < sizeof(tbl##Table##dev)/8; i++)                 \
 		NV_WR32(&chip->dev[tbl##Table##dev[i][0]], 0, tbl##Table##dev[i][1])
+3 -3
drivers/video/fbdev/udlfb.c
···
 MODULE_DEVICE_TABLE(usb, id_table);

 /* module options */
-static bool console = 1; /* Allow fbcon to open framebuffer */
-static bool fb_defio = 1; /* Detect mmap writes using page faults */
-static bool shadow = 1; /* Optionally disable shadow framebuffer */
+static bool console = true; /* Allow fbcon to open framebuffer */
+static bool fb_defio = true; /* Detect mmap writes using page faults */
+static bool shadow = true; /* Optionally disable shadow framebuffer */
 static int pixel_limit; /* Optionally force a pixel resolution limit */

 struct dlfb_deferred_free {
+6 -6
drivers/video/fbdev/uvesafb.c
···
 };

 static int mtrr = 3;		/* enable mtrr by default */
-static bool blank = 1;		/* enable blanking by default */
+static bool blank = true;	/* enable blanking by default */
 static int ypan = 1;		/* 0: scroll, 1: ypan, 2: ywrap */
 static bool pmi_setpal = true;	/* use PMI for palette changes */
 static bool nocrtc;		/* ignore CRTC settings */
···
 		else if (!strcmp(this_opt, "ywrap"))
 			ypan = 2;
 		else if (!strcmp(this_opt, "vgapal"))
-			pmi_setpal = 0;
+			pmi_setpal = false;
 		else if (!strcmp(this_opt, "pmipal"))
-			pmi_setpal = 1;
+			pmi_setpal = true;
 		else if (!strncmp(this_opt, "mtrr:", 5))
 			mtrr = simple_strtoul(this_opt+5, NULL, 0);
 		else if (!strcmp(this_opt, "nomtrr"))
 			mtrr = 0;
 		else if (!strcmp(this_opt, "nocrtc"))
-			nocrtc = 1;
+			nocrtc = true;
 		else if (!strcmp(this_opt, "noedid"))
-			noedid = 1;
+			noedid = true;
 		else if (!strcmp(this_opt, "noblank"))
-			blank = 0;
+			blank = false;
 		else if (!strncmp(this_opt, "vtotal:", 7))
 			vram_total = simple_strtoul(this_opt + 7, NULL, 0);
 		else if (!strncmp(this_opt, "vremap:", 7))
+2 -2
drivers/video/fbdev/valkyriefb.c
···
 	struct resource r;

 	dp = of_find_node_by_name(NULL, "valkyrie");
-	if (dp == 0)
+	if (!dp)
 		return 0;

 	if (of_address_to_resource(dp, 0, &r)) {
···
 #endif /* ppc (!CONFIG_MAC) */

 	p = kzalloc(sizeof(*p), GFP_ATOMIC);
-	if (p == 0)
+	if (!p)
 		return -ENOMEM;

 	/* Map in frame buffer and registers */
+2
drivers/video/fbdev/w100fb.c
···
 		memsize=par->mach->mem->size;
 		memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_extmem, memsize);
 		vfree(par->saved_extmem);
+		par->saved_extmem = NULL;
 	}
 	if (par->saved_intmem) {
 		memsize=MEM_INT_SIZE;
···
 		else
 			memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_intmem, memsize);
 		vfree(par->saved_intmem);
+		par->saved_intmem = NULL;
 	}
 }

+1 -1
include/drm/drm_client.h
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0 or MIT */

 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
+2 -2
include/drm/drm_dp_helper.h
···
 #define DP_DSC_PEAK_THROUGHPUT              0x06B
 # define DP_DSC_THROUGHPUT_MODE_0_MASK      (0xf << 0)
 # define DP_DSC_THROUGHPUT_MODE_0_SHIFT     0
-# define DP_DSC_THROUGHPUT_MODE_0_UPSUPPORTED 0
+# define DP_DSC_THROUGHPUT_MODE_0_UNSUPPORTED 0
 # define DP_DSC_THROUGHPUT_MODE_0_340       (1 << 0)
 # define DP_DSC_THROUGHPUT_MODE_0_400       (2 << 0)
 # define DP_DSC_THROUGHPUT_MODE_0_450       (3 << 0)
···
 # define DP_DSC_THROUGHPUT_MODE_0_170       (15 << 0) /* 1.4a */
 # define DP_DSC_THROUGHPUT_MODE_1_MASK      (0xf << 4)
 # define DP_DSC_THROUGHPUT_MODE_1_SHIFT     4
-# define DP_DSC_THROUGHPUT_MODE_1_UPSUPPORTED 0
+# define DP_DSC_THROUGHPUT_MODE_1_UNSUPPORTED 0
 # define DP_DSC_THROUGHPUT_MODE_1_340       (1 << 4)
 # define DP_DSC_THROUGHPUT_MODE_1_400       (2 << 4)
 # define DP_DSC_THROUGHPUT_MODE_1_450       (3 << 4)
+8 -16
include/drm/drm_dp_mst_helper.h
···
  * @rad: Relative Address to talk to this branch device.
  * @lct: Link count total to talk to this branch device.
  * @num_ports: number of ports on the branch.
- * @msg_slots: one bit per transmitted msg slot.
  * @port_parent: pointer to the port parent, NULL if toplevel.
  * @mgr: topology manager for this branch device.
- * @tx_slots: transmission slots for this device.
- * @last_seqno: last sequence number used to talk to this.
  * @link_address_sent: if a link address message has been sent to this device yet.
  * @guid: guid for DP 1.2 branch device. port under this branch can be
  * identified by port #.
···
 	u8 lct;
 	int num_ports;

-	int msg_slots;
 	/**
 	 * @ports: the list of ports on this branch device. This should be
 	 * considered protected for reading by &drm_dp_mst_topology_mgr.lock.
···
 	 */
 	struct list_head ports;

-	/* list of tx ops queue for this port */
 	struct drm_dp_mst_port *port_parent;
 	struct drm_dp_mst_topology_mgr *mgr;

-	/* slots are protected by mstb->mgr->qlock */
-	struct drm_dp_sideband_msg_tx *tx_slots[2];
-	int last_seqno;
 	bool link_address_sent;
-
-	/**
-	 * @down_rep_recv: Message receiver state for down replies.
-	 */
-	struct drm_dp_sideband_msg_rx down_rep_recv[2];

 	/* global unique identifier to identify branch devices */
 	u8 guid[16];
···
 	struct drm_dp_sideband_msg_rx up_req_recv;

 	/**
+	 * @down_rep_recv: Message receiver state for replies to down
+	 * requests.
+	 */
+	struct drm_dp_sideband_msg_rx down_rep_recv;
+
+	/**
 	 * @lock: protects @mst_state, @mst_primary, @dpcd, and
 	 * @payload_id_table_cleared.
 	 */
···
 	const struct drm_private_state_funcs *funcs;

 	/**
-	 * @qlock: protects @tx_msg_downq, the &drm_dp_mst_branch.txslost and
-	 * &drm_dp_sideband_msg_tx.state once they are queued
+	 * @qlock: protects @tx_msg_downq and &drm_dp_sideband_msg_tx.state
 	 */
 	struct mutex qlock;

 	/**
-	 * @tx_msg_downq: List of pending down replies.
+	 * @tx_msg_downq: List of pending down requests
 	 */
 	struct list_head tx_msg_downq;
+33
include/drm/drm_drv.h
···
 			      struct drm_device *dev,
 			      struct drm_driver *driver);

+void *__devm_drm_dev_alloc(struct device *parent, struct drm_driver *driver,
+			   size_t size, size_t offset);
+
+/**
+ * devm_drm_dev_alloc - Resource managed allocation of a &drm_device instance
+ * @parent: Parent device object
+ * @driver: DRM driver
+ * @type: the type of the struct which contains struct &drm_device
+ * @member: the name of the &drm_device within @type
+ *
+ * This allocates and initializes a new DRM device. No device registration is
+ * done. Call drm_dev_register() to advertise the device to user space and
+ * register it with other core subsystems. This should be done last in the
+ * device initialization sequence to make sure userspace can't access an
+ * inconsistent state.
+ *
+ * The initial ref-count of the object is 1. Use drm_dev_get() and
+ * drm_dev_put() to take and drop further ref-counts.
+ *
+ * It is recommended that drivers embed &struct drm_device into their own device
+ * structure.
+ *
+ * Note that this manages the lifetime of the resulting &drm_device
+ * automatically using devres. The DRM device initialized with this function is
+ * automatically put on driver detach using drm_dev_put().
+ *
+ * RETURNS:
+ * Pointer to new DRM device, or ERR_PTR on failure.
+ */
+#define devm_drm_dev_alloc(parent, driver, type, member) \
+	((type *) __devm_drm_dev_alloc(parent, driver, sizeof(type), \
+				       offsetof(type, member)))
+
 struct drm_device *drm_dev_alloc(struct drm_driver *driver,
 				 struct device *parent);
 int drm_dev_register(struct drm_device *dev, unsigned long flags);
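The kerneldoc above describes the embed-and-allocate pattern this helper enables. A minimal sketch of driver-side usage, assuming a hypothetical platform driver (the names `foo_device`, `foo_drm_driver` and `foo_probe` are illustrative, not taken from this series):

```c
/* Hypothetical driver embedding struct drm_device in its own state.
 * devres drops the final drm_dev_put() on driver detach, so error
 * paths in probe need no explicit put. */
struct foo_device {
	struct drm_device drm;	/* the @member passed to the macro */
	void __iomem *regs;
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_device *foo;

	/* Allocates struct foo_device and initializes foo->drm in one
	 * step; returns an ERR_PTR on failure. */
	foo = devm_drm_dev_alloc(&pdev->dev, &foo_drm_driver,
				 struct foo_device, drm);
	if (IS_ERR(foo))
		return PTR_ERR(foo);

	/* ... hardware setup ... */

	/* Registration last, once the device state is consistent. */
	return drm_dev_register(&foo->drm, 0);
}
```

Many of the per-driver conversions in this pull (ast, bochs, vboxvideo, vkms, ...) follow this shape, which is also what lets them drop drm_device.dev_private in favor of container_of()-style upcasts.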
+1
include/drm/drm_mm.h
···
 	struct rb_node rb_hole_addr;
 	u64 __subtree_last;
 	u64 hole_size;
+	u64 subtree_max_hole;
 	unsigned long flags;
 #define DRM_MM_NODE_ALLOCATED_BIT	0
 #define DRM_MM_NODE_SCANNED_BIT		1
-11
include/drm/drm_modes.h
···
 	int vrefresh;

 	/**
-	 * @hsync:
-	 *
-	 * Horizontal refresh rate, for debug output in human readable form. Not
-	 * used in a functional way.
-	 *
-	 * This value is in kHz.
-	 */
-	int hsync;
-
-	/**
 	 * @picture_aspect_ratio:
 	 *
 	 * Field for setting the HDMI picture aspect ratio of a mode.
···
 			    int index);

 void drm_mode_set_name(struct drm_display_mode *mode);
-int drm_mode_hsync(const struct drm_display_mode *mode);
 int drm_mode_vrefresh(const struct drm_display_mode *mode);
 void drm_mode_get_hv_timing(const struct drm_display_mode *mode,
 			    int *hdisplay, int *vdisplay);
-1
include/drm/ttm/ttm_bo_driver.h
···
 /**
  * struct ttm_bo_global - Buffer object driver global data.
  *
- * @mem_glob: Pointer to a struct ttm_mem_global object for accounting.
  * @dummy_read_page: Pointer to a dummy page used for mapping requests
  * of unpopulated pages.
  * @shrink: A shrink callback object used for buffer object swap.