Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2018-09-05' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 4.20:

UAPI Changes:
- Add userspace dma-buf device to turn memfd regions into dma-bufs (Gerd)
- Add per-plane blend mode property (Lowry)
- Change in drm_fourcc.h is documentation only (Brian)

Cross-subsystem Changes:
- None

Core Changes:
- Remove user logspam and useless lock in vma_offset_mgr destroy (Chris)
- Add get/verify_crc_source for improved crc source selection (Mahesh)
- Add __drm_atomic_helper_plane_reset to reduce copypasta (Alexandru)

Driver Changes:
- various: Replace ref/unref calls with drm_dev_get/put (Thomas)
- bridge: Add driver for TI SN65DSI86 chip (Sandeep)
- rockchip: Add PX30 support (Sandy)
- sun4i: Add support for R40 TCON (Jernej)
- vkms: Continued building out vkms, added gem support (Haneen)

Driver Changes:
- various: fbdev: Wrap remove_conflicting_framebuffers with resource_len
accessors to remove a bunch of cargo-cult (Michał)
- rockchip: Add rgb output iface support + fixes (Sandy/Heiko)
- nouveau/amdgpu: Add cec-over-aux support (Hans)
- sun4i: Add support for Allwinner A64 (Jagan)

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Cc: Heiko Stuebner <heiko@sntech.de>
Cc: Sandy Huang <hjc@rock-chips.com>
Cc: Hans Verkuil <hans.verkuil@cisco.com>
Cc: Jagan Teki <jagan@amarulasolutions.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180905202210.GA95199@art_vandelay

+4529 -691
+23
Documentation/devicetree/bindings/display/atmel/hlcdc-dc.txt
··· 15 15 to external devices using the OF graph reprensentation (see ../graph.txt). 16 16 At least one port node is required. 17 17 18 + Optional properties in grandchild nodes: 19 + Any endpoint grandchild node may specify a desired video interface 20 + according to ../../media/video-interfaces.txt, specifically 21 + - bus-width: recognized values are <12>, <16>, <18> and <24>, and 22 + override any output mode selection heuristic, forcing "rgb444", 23 + "rgb565", "rgb666" and "rgb888" respectively. 24 + 18 25 Example: 19 26 20 27 hlcdc: hlcdc@f0030000 { ··· 55 48 pinctrl-names = "default"; 56 49 pinctrl-0 = <&pinctrl_lcd_pwm>; 57 50 #pwm-cells = <3>; 51 + }; 52 + }; 53 + 54 + Example 2: With a video interface override to force rgb565; as above 55 + but with these changes/additions: 56 + 57 + &hlcdc { 58 + hlcdc-display-controller { 59 + pinctrl-names = "default"; 60 + pinctrl-0 = <&pinctrl_lcd_base &pinctrl_lcd_rgb565>; 61 + 62 + port@0 { 63 + hlcdc_panel_output: endpoint@0 { 64 + bus-width = <16>; 65 + }; 66 + }; 58 67 }; 59 68 };
+7 -1
Documentation/devicetree/bindings/display/bridge/lvds-transmitter.txt
··· 22 22 23 23 Required properties: 24 24 25 - - compatible: Must be "lvds-encoder" 25 + - compatible: Must be one or more of the following 26 + - "ti,ds90c185" for the TI DS90C185 FPD-Link Serializer 27 + - "lvds-encoder" for a generic LVDS encoder device 28 + 29 + When compatible with the generic version, nodes must list the 30 + device-specific version corresponding to the device first 31 + followed by the generic version. 26 32 27 33 Required nodes: 28 34
+87
Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.txt
··· 1 + SN65DSI86 DSI to eDP bridge chip 2 + -------------------------------- 3 + 4 + This is the binding for Texas Instruments SN65DSI86 bridge. 5 + http://www.ti.com/general/docs/lit/getliterature.tsp?genericPartNumber=sn65dsi86&fileType=pdf 6 + 7 + Required properties: 8 + - compatible: Must be "ti,sn65dsi86" 9 + - reg: i2c address of the chip, 0x2d as per datasheet 10 + - enable-gpios: gpio specification for bridge_en pin (active high) 11 + 12 + - vccio-supply: A 1.8V supply that powers up the digital IOs. 13 + - vpll-supply: A 1.8V supply that powers up the displayport PLL. 14 + - vcca-supply: A 1.2V supply that powers up the analog circuits. 15 + - vcc-supply: A 1.2V supply that powers up the digital core. 16 + 17 + Optional properties: 18 + - interrupts-extended: Specifier for the SN65DSI86 interrupt line. 19 + 20 + - gpio-controller: Marks the device has a GPIO controller. 21 + - #gpio-cells : Should be two. The first cell is the pin number and 22 + the second cell is used to specify flags. 23 + See ../../gpio/gpio.txt for more information. 24 + - #pwm-cells : Should be one. See ../../pwm/pwm.txt for description of 25 + the cell formats. 26 + 27 + - clock-names: should be "refclk" 28 + - clocks: Specification for input reference clock. The reference 29 + clock rate must be 12 MHz, 19.2 MHz, 26 MHz, 27 MHz or 38.4 MHz. 30 + 31 + - data-lanes: See ../../media/video-interface.txt 32 + - lane-polarities: See ../../media/video-interface.txt 33 + 34 + - suspend-gpios: specification for GPIO1 pin on bridge (active low) 35 + 36 + Required nodes: 37 + This device has two video ports. Their connections are modelled using the 38 + OF graph bindings specified in Documentation/devicetree/bindings/graph.txt. 
39 + 40 + - Video port 0 for DSI input 41 + - Video port 1 for eDP output 42 + 43 + Example 44 + ------- 45 + 46 + edp-bridge@2d { 47 + compatible = "ti,sn65dsi86"; 48 + #address-cells = <1>; 49 + #size-cells = <0>; 50 + reg = <0x2d>; 51 + 52 + enable-gpios = <&msmgpio 33 GPIO_ACTIVE_HIGH>; 53 + suspend-gpios = <&msmgpio 34 GPIO_ACTIVE_LOW>; 54 + 55 + interrupts-extended = <&gpio3 4 IRQ_TYPE_EDGE_FALLING>; 56 + 57 + vccio-supply = <&pm8916_l17>; 58 + vcca-supply = <&pm8916_l6>; 59 + vpll-supply = <&pm8916_l17>; 60 + vcc-supply = <&pm8916_l6>; 61 + 62 + clock-names = "refclk"; 63 + clocks = <&input_refclk>; 64 + 65 + ports { 66 + #address-cells = <1>; 67 + #size-cells = <0>; 68 + 69 + port@0 { 70 + reg = <0>; 71 + 72 + edp_bridge_in: endpoint { 73 + remote-endpoint = <&dsi_out>; 74 + }; 75 + }; 76 + 77 + port@1 { 78 + reg = <1>; 79 + 80 + edp_bridge_out: endpoint { 81 + data-lanes = <2 1 3 0>; 82 + lane-polarities = <0 1 0 1>; 83 + remote-endpoint = <&edp_panel_in>; 84 + }; 85 + }; 86 + }; 87 + }
+35
Documentation/devicetree/bindings/display/bridge/toshiba,tc358764.txt
··· 1 + TC358764 MIPI-DSI to LVDS panel bridge 2 + 3 + Required properties: 4 + - compatible: "toshiba,tc358764" 5 + - reg: the virtual channel number of a DSI peripheral 6 + - vddc-supply: core voltage supply, 1.2V 7 + - vddio-supply: I/O voltage supply, 1.8V or 3.3V 8 + - vddlvds-supply: LVDS1/2 voltage supply, 3.3V 9 + - reset-gpios: a GPIO spec for the reset pin 10 + 11 + The device node can contain following 'port' child nodes, 12 + according to the OF graph bindings defined in [1]: 13 + 0: DSI Input, not required, if the bridge is DSI controlled 14 + 1: LVDS Output, mandatory 15 + 16 + [1]: Documentation/devicetree/bindings/media/video-interfaces.txt 17 + 18 + Example: 19 + 20 + bridge@0 { 21 + reg = <0>; 22 + compatible = "toshiba,tc358764"; 23 + vddc-supply = <&vcc_1v2_reg>; 24 + vddio-supply = <&vcc_1v8_reg>; 25 + vddlvds-supply = <&vcc_3v3_reg>; 26 + reset-gpios = <&gpd1 6 GPIO_ACTIVE_LOW>; 27 + #address-cells = <1>; 28 + #size-cells = <0>; 29 + port@1 { 30 + reg = <1>; 31 + lvds_ep: endpoint { 32 + remote-endpoint = <&panel_ep>; 33 + }; 34 + }; 35 + };
+145 -8
Documentation/devicetree/bindings/display/mipi-dsi-bus.txt
··· 16 16 host. Experience shows that this is true for the large majority of setups. 17 17 18 18 DSI host 19 - -------- 19 + ======== 20 20 21 21 In addition to the standard properties and those defined by the parent bus of 22 22 a DSI host, the following properties apply to a node representing a DSI host. ··· 29 29 - #size-cells: Should be 0. There are cases where it makes sense to use a 30 30 different value here. See below. 31 31 32 - DSI peripheral 33 - -------------- 32 + Optional properties: 33 + - clock-master: boolean. Should be enabled if the host is being used in 34 + conjunction with another DSI host to drive the same peripheral. Hardware 35 + supporting such a configuration generally requires the data on both the busses 36 + to be driven by the same clock. Only the DSI host instance controlling this 37 + clock should contain this property. 34 38 35 - Peripherals are represented as child nodes of the DSI host's node. Properties 36 - described here apply to all DSI peripherals, but individual bindings may want 37 - to define additional, device-specific properties. 39 + DSI peripheral 40 + ============== 41 + 42 + Peripherals with DSI as control bus, or no control bus 43 + ------------------------------------------------------ 44 + 45 + Peripherals with the DSI bus as the primary control bus, or peripherals with 46 + no control bus but use the DSI bus to transmit pixel data are represented 47 + as child nodes of the DSI host's node. Properties described here apply to all 48 + DSI peripherals, but individual bindings may want to define additional, 49 + device-specific properties. 38 50 39 51 Required properties: 40 52 - reg: The virtual channel number of a DSI peripheral. Must be in the range ··· 61 49 property is the number of the first virtual channel and the second cell is 62 50 the number of consecutive virtual channels. 
63 51 64 - Example 65 - ------- 52 + Peripherals with a different control bus 53 + ---------------------------------------- 66 54 55 + There are peripherals that have I2C/SPI (or some other non-DSI bus) as the 56 + primary control bus, but are also connected to a DSI bus (mostly for the data 57 + path). Connections between such peripherals and a DSI host can be represented 58 + using the graph bindings [1], [2]. 59 + 60 + Peripherals that support dual channel DSI 61 + ----------------------------------------- 62 + 63 + Peripherals with higher bandwidth requirements can be connected to 2 DSI 64 + busses. Each DSI bus/channel drives some portion of the pixel data (generally 65 + left/right half of each line of the display, or even/odd lines of the display). 66 + The graph bindings should be used to represent the multiple DSI busses that are 67 + connected to this peripheral. Each DSI host's output endpoint can be linked to 68 + an input endpoint of the DSI peripheral. 69 + 70 + [1] Documentation/devicetree/bindings/graph.txt 71 + [2] Documentation/devicetree/bindings/media/video-interfaces.txt 72 + 73 + Examples 74 + ======== 75 + - (1), (2) and (3) are examples of a DSI host and peripheral on the DSI bus 76 + with different virtual channel configurations. 77 + - (4) is an example of a peripheral on a I2C control bus connected to a 78 + DSI host using of-graph bindings. 79 + - (5) is an example of 2 DSI hosts driving a dual-channel DSI peripheral, 80 + which uses I2C as its primary control bus. 81 + 82 + 1) 67 83 dsi-host { 68 84 ... 69 85 ··· 107 67 ... 108 68 }; 109 69 70 + 2) 110 71 dsi-host { 111 72 ... 112 73 ··· 123 82 ... 124 83 }; 125 84 85 + 3) 126 86 dsi-host { 127 87 ... 128 88 ··· 137 95 }; 138 96 139 97 ... 98 + }; 99 + 100 + 4) 101 + i2c-host { 102 + ... 103 + 104 + dsi-bridge@35 { 105 + compatible = "..."; 106 + reg = <0x35>; 107 + 108 + ports { 109 + ... 
110 + 111 + port { 112 + bridge_mipi_in: endpoint { 113 + remote-endpoint = <&host_mipi_out>; 114 + }; 115 + }; 116 + }; 117 + }; 118 + }; 119 + 120 + dsi-host { 121 + ... 122 + 123 + ports { 124 + ... 125 + 126 + port { 127 + host_mipi_out: endpoint { 128 + remote-endpoint = <&bridge_mipi_in>; 129 + }; 130 + }; 131 + }; 132 + }; 133 + 134 + 5) 135 + i2c-host { 136 + dsi-bridge@35 { 137 + compatible = "..."; 138 + reg = <0x35>; 139 + 140 + ports { 141 + #address-cells = <1>; 142 + #size-cells = <0>; 143 + 144 + port@0 { 145 + reg = <0>; 146 + dsi0_in: endpoint { 147 + remote-endpoint = <&dsi0_out>; 148 + }; 149 + }; 150 + 151 + port@1 { 152 + reg = <1>; 153 + dsi1_in: endpoint { 154 + remote-endpoint = <&dsi1_out>; 155 + }; 156 + }; 157 + }; 158 + }; 159 + }; 160 + 161 + dsi0-host { 162 + ... 163 + 164 + /* 165 + * this DSI instance drives the clock for both the host 166 + * controllers 167 + */ 168 + clock-master; 169 + 170 + ports { 171 + ... 172 + 173 + port { 174 + dsi0_out: endpoint { 175 + remote-endpoint = <&dsi0_in>; 176 + }; 177 + }; 178 + }; 179 + }; 180 + 181 + dsi1-host { 182 + ... 183 + 184 + ports { 185 + ... 186 + 187 + port { 188 + dsi1_out: endpoint { 189 + remote-endpoint = <&dsi1_in>; 190 + }; 191 + }; 192 + }; 140 193 };
+3
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
··· 8 8 - compatible: value should be one of the following 9 9 "rockchip,rk3036-vop"; 10 10 "rockchip,rk3126-vop"; 11 + "rockchip,px30-vop-lit"; 12 + "rockchip,px30-vop-big"; 13 + "rockchip,rk3188-vop"; 11 14 "rockchip,rk3288-vop"; 12 15 "rockchip,rk3368-vop"; 13 16 "rockchip,rk3366-vop";
+9
Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt
··· 78 78 79 79 - compatible: value must be one of: 80 80 * "allwinner,sun8i-a83t-dw-hdmi" 81 + * "allwinner,sun50i-a64-dw-hdmi", "allwinner,sun8i-a83t-dw-hdmi" 81 82 - reg: base address and size of memory-mapped region 82 83 - reg-io-width: See dw_hdmi.txt. Shall be 1. 83 84 - interrupts: HDMI interrupt number ··· 96 95 Documentation/devicetree/bindings/media/video-interfaces.txt. The 97 96 first port should be the input endpoint. The second should be the 98 97 output, usually to an HDMI connector. 98 + 99 + Optional properties: 100 + - hvcc-supply: the VCC power supply of the controller 99 101 100 102 DWC HDMI PHY 101 103 ------------ ··· 155 151 * allwinner,sun8i-v3s-tcon 156 152 * allwinner,sun9i-a80-tcon-lcd 157 153 * allwinner,sun9i-a80-tcon-tv 154 + * "allwinner,sun50i-a64-tcon-lcd", "allwinner,sun8i-a83t-tcon-lcd" 155 + * "allwinner,sun50i-a64-tcon-tv", "allwinner,sun8i-a83t-tcon-tv" 158 156 - reg: base address and size of memory-mapped region 159 157 - interrupts: interrupt associated to this IP 160 158 - clocks: phandles to the clocks feeding the TCON. ··· 376 370 * allwinner,sun8i-a83t-de2-mixer-1 377 371 * allwinner,sun8i-h3-de2-mixer-0 378 372 * allwinner,sun8i-v3s-de2-mixer 373 + * allwinner,sun50i-a64-de2-mixer-0 374 + * allwinner,sun50i-a64-de2-mixer-1 379 375 - reg: base address and size of the memory-mapped region. 380 376 - clocks: phandles to the clocks feeding the mixer 381 377 * bus: the mixer interface clock ··· 411 403 * allwinner,sun8i-r40-display-engine 412 404 * allwinner,sun8i-v3s-display-engine 413 405 * allwinner,sun9i-a80-display-engine 406 + * allwinner,sun50i-a64-display-engine 414 407 415 408 - allwinner,pipelines: list of phandle to the display engine 416 409 frontends (DE 1.0) or mixers (DE 2.0) available.
+6
Documentation/gpu/drm-kms.rst
··· 323 323 DRM Format Handling 324 324 =================== 325 325 326 + .. kernel-doc:: include/uapi/drm/drm_fourcc.h 327 + :doc: overview 328 + 329 + Format Functions Reference 330 + -------------------------- 331 + 326 332 .. kernel-doc:: include/drm/drm_fourcc.h 327 333 :internal: 328 334
+1 -1
Documentation/gpu/drm-mm.rst
··· 297 297 struct vm_operations_struct { 298 298 void (*open)(struct vm_area_struct * area); 299 299 void (*close)(struct vm_area_struct * area); 300 - int (*fault)(struct vm_fault *vmf); 300 + vm_fault_t (*fault)(struct vm_fault *vmf); 301 301 }; 302 302 303 303
+1
Documentation/ioctl/ioctl-number.txt
··· 272 272 't' 90-91 linux/toshiba.h toshiba and toshiba_acpi SMM 273 273 'u' 00-1F linux/smb_fs.h gone 274 274 'u' 20-3F linux/uvcvideo.h USB video class host driver 275 + 'u' 40-4f linux/udmabuf.h userspace dma-buf misc device 275 276 'v' 00-1F linux/ext2_fs.h conflict! 276 277 'v' 00-1F linux/fs.h conflict! 277 278 'v' 00-0F linux/sonypi.h conflict!
+8
MAINTAINERS
··· 15343 15343 F: fs/hostfs/ 15344 15344 F: fs/hppfs/ 15345 15345 15346 + USERSPACE DMA BUFFER DRIVER 15347 + M: Gerd Hoffmann <kraxel@redhat.com> 15348 + S: Maintained 15349 + L: dri-devel@lists.freedesktop.org 15350 + F: drivers/dma-buf/udmabuf.c 15351 + F: include/uapi/linux/udmabuf.h 15352 + T: git git://anongit.freedesktop.org/drm/drm-misc 15353 + 15346 15354 USERSPACE I/O (UIO) 15347 15355 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 15348 15356 S: Maintained
+8
drivers/dma-buf/Kconfig
··· 30 30 WARNING: improper use of this can result in deadlocking kernel 31 31 drivers from userspace. Intended for test and debug only. 32 32 33 + config UDMABUF 34 + bool "userspace dmabuf misc driver" 35 + default n 36 + depends on DMA_SHARED_BUFFER 37 + help 38 + A driver to let userspace turn memfd regions into dma-bufs. 39 + Qemu can use this to create host dmabufs for guest framebuffers. 40 + 33 41 endmenu
+1
drivers/dma-buf/Makefile
··· 1 1 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o 2 2 obj-$(CONFIG_SYNC_FILE) += sync_file.o 3 3 obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o 4 + obj-$(CONFIG_UDMABUF) += udmabuf.o
-1
drivers/dma-buf/dma-buf.c
··· 405 405 || !exp_info->ops->map_dma_buf 406 406 || !exp_info->ops->unmap_dma_buf 407 407 || !exp_info->ops->release 408 - || !exp_info->ops->map 409 408 || !exp_info->ops->mmap)) { 410 409 return ERR_PTR(-EINVAL); 411 410 }
+288
drivers/dma-buf/udmabuf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/init.h> 3 + #include <linux/module.h> 4 + #include <linux/device.h> 5 + #include <linux/kernel.h> 6 + #include <linux/slab.h> 7 + #include <linux/miscdevice.h> 8 + #include <linux/dma-buf.h> 9 + #include <linux/highmem.h> 10 + #include <linux/cred.h> 11 + #include <linux/shmem_fs.h> 12 + #include <linux/memfd.h> 13 + 14 + #include <uapi/linux/udmabuf.h> 15 + 16 + struct udmabuf { 17 + u32 pagecount; 18 + struct page **pages; 19 + }; 20 + 21 + static int udmabuf_vm_fault(struct vm_fault *vmf) 22 + { 23 + struct vm_area_struct *vma = vmf->vma; 24 + struct udmabuf *ubuf = vma->vm_private_data; 25 + 26 + if (WARN_ON(vmf->pgoff >= ubuf->pagecount)) 27 + return VM_FAULT_SIGBUS; 28 + 29 + vmf->page = ubuf->pages[vmf->pgoff]; 30 + get_page(vmf->page); 31 + return 0; 32 + } 33 + 34 + static const struct vm_operations_struct udmabuf_vm_ops = { 35 + .fault = udmabuf_vm_fault, 36 + }; 37 + 38 + static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma) 39 + { 40 + struct udmabuf *ubuf = buf->priv; 41 + 42 + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0) 43 + return -EINVAL; 44 + 45 + vma->vm_ops = &udmabuf_vm_ops; 46 + vma->vm_private_data = ubuf; 47 + return 0; 48 + } 49 + 50 + static struct sg_table *map_udmabuf(struct dma_buf_attachment *at, 51 + enum dma_data_direction direction) 52 + { 53 + struct udmabuf *ubuf = at->dmabuf->priv; 54 + struct sg_table *sg; 55 + 56 + sg = kzalloc(sizeof(*sg), GFP_KERNEL); 57 + if (!sg) 58 + goto err1; 59 + if (sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount, 60 + 0, ubuf->pagecount << PAGE_SHIFT, 61 + GFP_KERNEL) < 0) 62 + goto err2; 63 + if (!dma_map_sg(at->dev, sg->sgl, sg->nents, direction)) 64 + goto err3; 65 + 66 + return sg; 67 + 68 + err3: 69 + sg_free_table(sg); 70 + err2: 71 + kfree(sg); 72 + err1: 73 + return ERR_PTR(-ENOMEM); 74 + } 75 + 76 + static void unmap_udmabuf(struct dma_buf_attachment *at, 77 + struct sg_table *sg, 78 + enum 
dma_data_direction direction) 79 + { 80 + sg_free_table(sg); 81 + kfree(sg); 82 + } 83 + 84 + static void release_udmabuf(struct dma_buf *buf) 85 + { 86 + struct udmabuf *ubuf = buf->priv; 87 + pgoff_t pg; 88 + 89 + for (pg = 0; pg < ubuf->pagecount; pg++) 90 + put_page(ubuf->pages[pg]); 91 + kfree(ubuf->pages); 92 + kfree(ubuf); 93 + } 94 + 95 + static void *kmap_udmabuf(struct dma_buf *buf, unsigned long page_num) 96 + { 97 + struct udmabuf *ubuf = buf->priv; 98 + struct page *page = ubuf->pages[page_num]; 99 + 100 + return kmap(page); 101 + } 102 + 103 + static void kunmap_udmabuf(struct dma_buf *buf, unsigned long page_num, 104 + void *vaddr) 105 + { 106 + kunmap(vaddr); 107 + } 108 + 109 + static struct dma_buf_ops udmabuf_ops = { 110 + .map_dma_buf = map_udmabuf, 111 + .unmap_dma_buf = unmap_udmabuf, 112 + .release = release_udmabuf, 113 + .map = kmap_udmabuf, 114 + .unmap = kunmap_udmabuf, 115 + .mmap = mmap_udmabuf, 116 + }; 117 + 118 + #define SEALS_WANTED (F_SEAL_SHRINK) 119 + #define SEALS_DENIED (F_SEAL_WRITE) 120 + 121 + static long udmabuf_create(struct udmabuf_create_list *head, 122 + struct udmabuf_create_item *list) 123 + { 124 + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 125 + struct file *memfd = NULL; 126 + struct udmabuf *ubuf; 127 + struct dma_buf *buf; 128 + pgoff_t pgoff, pgcnt, pgidx, pgbuf; 129 + struct page *page; 130 + int seals, ret = -EINVAL; 131 + u32 i, flags; 132 + 133 + ubuf = kzalloc(sizeof(struct udmabuf), GFP_KERNEL); 134 + if (!ubuf) 135 + return -ENOMEM; 136 + 137 + for (i = 0; i < head->count; i++) { 138 + if (!IS_ALIGNED(list[i].offset, PAGE_SIZE)) 139 + goto err_free_ubuf; 140 + if (!IS_ALIGNED(list[i].size, PAGE_SIZE)) 141 + goto err_free_ubuf; 142 + ubuf->pagecount += list[i].size >> PAGE_SHIFT; 143 + } 144 + ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(struct page *), 145 + GFP_KERNEL); 146 + if (!ubuf->pages) { 147 + ret = -ENOMEM; 148 + goto err_free_ubuf; 149 + } 150 + 151 + pgbuf = 0; 152 + for (i = 0; i < 
head->count; i++) { 153 + memfd = fget(list[i].memfd); 154 + if (!memfd) 155 + goto err_put_pages; 156 + if (!shmem_mapping(file_inode(memfd)->i_mapping)) 157 + goto err_put_pages; 158 + seals = memfd_fcntl(memfd, F_GET_SEALS, 0); 159 + if (seals == -EINVAL || 160 + (seals & SEALS_WANTED) != SEALS_WANTED || 161 + (seals & SEALS_DENIED) != 0) 162 + goto err_put_pages; 163 + pgoff = list[i].offset >> PAGE_SHIFT; 164 + pgcnt = list[i].size >> PAGE_SHIFT; 165 + for (pgidx = 0; pgidx < pgcnt; pgidx++) { 166 + page = shmem_read_mapping_page( 167 + file_inode(memfd)->i_mapping, pgoff + pgidx); 168 + if (IS_ERR(page)) { 169 + ret = PTR_ERR(page); 170 + goto err_put_pages; 171 + } 172 + ubuf->pages[pgbuf++] = page; 173 + } 174 + fput(memfd); 175 + } 176 + memfd = NULL; 177 + 178 + exp_info.ops = &udmabuf_ops; 179 + exp_info.size = ubuf->pagecount << PAGE_SHIFT; 180 + exp_info.priv = ubuf; 181 + 182 + buf = dma_buf_export(&exp_info); 183 + if (IS_ERR(buf)) { 184 + ret = PTR_ERR(buf); 185 + goto err_put_pages; 186 + } 187 + 188 + flags = 0; 189 + if (head->flags & UDMABUF_FLAGS_CLOEXEC) 190 + flags |= O_CLOEXEC; 191 + return dma_buf_fd(buf, flags); 192 + 193 + err_put_pages: 194 + while (pgbuf > 0) 195 + put_page(ubuf->pages[--pgbuf]); 196 + err_free_ubuf: 197 + if (memfd) 198 + fput(memfd); 199 + kfree(ubuf->pages); 200 + kfree(ubuf); 201 + return ret; 202 + } 203 + 204 + static long udmabuf_ioctl_create(struct file *filp, unsigned long arg) 205 + { 206 + struct udmabuf_create create; 207 + struct udmabuf_create_list head; 208 + struct udmabuf_create_item list; 209 + 210 + if (copy_from_user(&create, (void __user *)arg, 211 + sizeof(struct udmabuf_create))) 212 + return -EFAULT; 213 + 214 + head.flags = create.flags; 215 + head.count = 1; 216 + list.memfd = create.memfd; 217 + list.offset = create.offset; 218 + list.size = create.size; 219 + 220 + return udmabuf_create(&head, &list); 221 + } 222 + 223 + static long udmabuf_ioctl_create_list(struct file *filp, unsigned long 
arg) 224 + { 225 + struct udmabuf_create_list head; 226 + struct udmabuf_create_item *list; 227 + int ret = -EINVAL; 228 + u32 lsize; 229 + 230 + if (copy_from_user(&head, (void __user *)arg, sizeof(head))) 231 + return -EFAULT; 232 + if (head.count > 1024) 233 + return -EINVAL; 234 + lsize = sizeof(struct udmabuf_create_item) * head.count; 235 + list = memdup_user((void __user *)(arg + sizeof(head)), lsize); 236 + if (IS_ERR(list)) 237 + return PTR_ERR(list); 238 + 239 + ret = udmabuf_create(&head, list); 240 + kfree(list); 241 + return ret; 242 + } 243 + 244 + static long udmabuf_ioctl(struct file *filp, unsigned int ioctl, 245 + unsigned long arg) 246 + { 247 + long ret; 248 + 249 + switch (ioctl) { 250 + case UDMABUF_CREATE: 251 + ret = udmabuf_ioctl_create(filp, arg); 252 + break; 253 + case UDMABUF_CREATE_LIST: 254 + ret = udmabuf_ioctl_create_list(filp, arg); 255 + break; 256 + default: 257 + ret = -EINVAL; 258 + break; 259 + } 260 + return ret; 261 + } 262 + 263 + static const struct file_operations udmabuf_fops = { 264 + .owner = THIS_MODULE, 265 + .unlocked_ioctl = udmabuf_ioctl, 266 + }; 267 + 268 + static struct miscdevice udmabuf_misc = { 269 + .minor = MISC_DYNAMIC_MINOR, 270 + .name = "udmabuf", 271 + .fops = &udmabuf_fops, 272 + }; 273 + 274 + static int __init udmabuf_dev_init(void) 275 + { 276 + return misc_register(&udmabuf_misc); 277 + } 278 + 279 + static void __exit udmabuf_dev_exit(void) 280 + { 281 + misc_deregister(&udmabuf_misc); 282 + } 283 + 284 + module_init(udmabuf_dev_init) 285 + module_exit(udmabuf_dev_exit) 286 + 287 + MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>"); 288 + MODULE_LICENSE("GPL v2");
+1 -23
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 785 785 786 786 static struct drm_driver kms_driver; 787 787 788 - static int amdgpu_kick_out_firmware_fb(struct pci_dev *pdev) 789 - { 790 - struct apertures_struct *ap; 791 - bool primary = false; 792 - 793 - ap = alloc_apertures(1); 794 - if (!ap) 795 - return -ENOMEM; 796 - 797 - ap->ranges[0].base = pci_resource_start(pdev, 0); 798 - ap->ranges[0].size = pci_resource_len(pdev, 0); 799 - 800 - #ifdef CONFIG_X86 801 - primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 802 - #endif 803 - drm_fb_helper_remove_conflicting_framebuffers(ap, "amdgpudrmfb", primary); 804 - kfree(ap); 805 - 806 - return 0; 807 - } 808 - 809 - 810 788 static int amdgpu_pci_probe(struct pci_dev *pdev, 811 789 const struct pci_device_id *ent) 812 790 { ··· 812 834 return ret; 813 835 814 836 /* Get rid of things like offb */ 815 - ret = amdgpu_kick_out_firmware_fb(pdev); 837 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb"); 816 838 if (ret) 817 839 return ret; 818 840
+9 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 896 896 aconnector->dc_sink = sink; 897 897 if (sink->dc_edid.length == 0) { 898 898 aconnector->edid = NULL; 899 + drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux); 899 900 } else { 900 901 aconnector->edid = 901 902 (struct edid *) sink->dc_edid.raw_edid; ··· 904 903 905 904 drm_connector_update_edid_property(connector, 906 905 aconnector->edid); 906 + drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux, 907 + aconnector->edid); 907 908 } 908 909 amdgpu_dm_add_sink_to_freesync_module(connector, aconnector->edid); 909 910 910 911 } else { 912 + drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux); 911 913 amdgpu_dm_remove_sink_from_freesync_module(connector); 912 914 drm_connector_update_edid_property(connector, NULL); 913 915 aconnector->num_modes = 0; ··· 1065 1061 (dc_link->type == dc_connection_mst_branch)) 1066 1062 dm_handle_hpd_rx_irq(aconnector); 1067 1063 1068 - if (dc_link->type != dc_connection_mst_branch) 1064 + if (dc_link->type != dc_connection_mst_branch) { 1065 + drm_dp_cec_irq(&aconnector->dm_dp_aux.aux); 1069 1066 mutex_unlock(&aconnector->hpd_lock); 1067 + } 1070 1068 } 1071 1069 1072 1070 static void register_hpd_handlers(struct amdgpu_device *adev) ··· 2601 2595 .atomic_duplicate_state = dm_crtc_duplicate_state, 2602 2596 .atomic_destroy_state = dm_crtc_destroy_state, 2603 2597 .set_crc_source = amdgpu_dm_crtc_set_crc_source, 2598 + .verify_crc_source = amdgpu_dm_crtc_verify_crc_source, 2604 2599 .enable_vblank = dm_enable_vblank, 2605 2600 .disable_vblank = dm_disable_vblank, 2606 2601 }; ··· 2737 2730 dm->backlight_dev = NULL; 2738 2731 } 2739 2732 #endif 2733 + drm_dp_cec_unregister_connector(&aconnector->dm_dp_aux.aux); 2740 2734 drm_connector_unregister(connector); 2741 2735 drm_connector_cleanup(connector); 2742 2736 kfree(connector);
+5 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 258 258 259 259 /* amdgpu_dm_crc.c */ 260 260 #ifdef CONFIG_DEBUG_FS 261 - int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name, 262 - size_t *values_cnt); 261 + int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name); 262 + int amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc, 263 + const char *src_name, 264 + size_t *values_cnt); 263 265 void amdgpu_dm_crtc_handle_crc_irq(struct drm_crtc *crtc); 264 266 #else 265 267 #define amdgpu_dm_crtc_set_crc_source NULL 268 + #define amdgpu_dm_crtc_verify_crc_source NULL 266 269 #define amdgpu_dm_crtc_handle_crc_irq(x) 267 270 #endif 268 271
+17 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
··· 46 46 return AMDGPU_DM_PIPE_CRC_SOURCE_INVALID; 47 47 } 48 48 49 - int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name, 50 - size_t *values_cnt) 49 + int 50 + amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc, const char *src_name, 51 + size_t *values_cnt) 52 + { 53 + enum amdgpu_dm_pipe_crc_source source = dm_parse_crc_source(src_name); 54 + 55 + if (source < 0) { 56 + DRM_DEBUG_DRIVER("Unknown CRC source %s for CRTC%d\n", 57 + src_name, crtc->index); 58 + return -EINVAL; 59 + } 60 + 61 + *values_cnt = 3; 62 + return 0; 63 + } 64 + 65 + int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name) 51 66 { 52 67 struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state); 53 68 struct dc_stream_state *stream_state = crtc_state->stream; ··· 98 83 return -EINVAL; 99 84 } 100 85 101 - *values_cnt = 3; 102 86 /* Reset crc_skipped on dm state */ 103 87 crtc_state->crc_skip_count = 0; 104 88 return 0;
+2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 496 496 aconnector->dm_dp_aux.ddc_service = aconnector->dc_link->ddc; 497 497 498 498 drm_dp_aux_register(&aconnector->dm_dp_aux.aux); 499 + drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux, 500 + aconnector->base.name, dm->adev->dev); 499 501 aconnector->mst_mgr.cbs = &dm_mst_cbs; 500 502 drm_dp_mst_topology_mgr_init( 501 503 &aconnector->mst_mgr,
+2 -5
drivers/gpu/drm/arm/malidp_planes.c
··· 78 78 kfree(state); 79 79 plane->state = NULL; 80 80 state = kzalloc(sizeof(*state), GFP_KERNEL); 81 - if (state) { 82 - state->base.plane = plane; 83 - state->base.rotation = DRM_MODE_ROTATE_0; 84 - plane->state = &state->base; 85 - } 81 + if (state) 82 + __drm_atomic_helper_plane_reset(plane, &state->base); 86 83 } 87 84 88 85 static struct
+73 -27
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
··· 101 101 (adj->crtc_hdisplay - 1) | 102 102 ((adj->crtc_vdisplay - 1) << 16)); 103 103 104 - cfg = 0; 104 + cfg = ATMEL_HLCDC_CLKSEL; 105 105 106 - prate = clk_get_rate(crtc->dc->hlcdc->sys_clk); 106 + prate = 2 * clk_get_rate(crtc->dc->hlcdc->sys_clk); 107 107 mode_rate = adj->crtc_clock * 1000; 108 - if ((prate / 2) < mode_rate) { 109 - prate *= 2; 110 - cfg |= ATMEL_HLCDC_CLKSEL; 111 - } 112 108 113 109 div = DIV_ROUND_UP(prate, mode_rate); 114 - if (div < 2) 110 + if (div < 2) { 115 111 div = 2; 112 + } else if (ATMEL_HLCDC_CLKDIV(div) & ~ATMEL_HLCDC_CLKDIV_MASK) { 113 + /* The divider ended up too big, try a lower base rate. */ 114 + cfg &= ~ATMEL_HLCDC_CLKSEL; 115 + prate /= 2; 116 + div = DIV_ROUND_UP(prate, mode_rate); 117 + if (ATMEL_HLCDC_CLKDIV(div) & ~ATMEL_HLCDC_CLKDIV_MASK) 118 + div = ATMEL_HLCDC_CLKDIV_MASK; 119 + } else { 120 + int div_low = prate / mode_rate; 121 + 122 + if (div_low >= 2 && 123 + ((prate / div_low - mode_rate) < 124 + 10 * (mode_rate - prate / div))) 125 + /* 126 + * At least 10 times better when using a higher 127 + * frequency than requested, instead of a lower. 128 + * So, go with that. 
129 + */ 130 + div = div_low; 131 + } 116 132 117 133 cfg |= ATMEL_HLCDC_CLKDIV(div); 118 134 ··· 242 226 #define ATMEL_HLCDC_RGB888_OUTPUT BIT(3) 243 227 #define ATMEL_HLCDC_OUTPUT_MODE_MASK GENMASK(3, 0) 244 228 229 + static int atmel_hlcdc_connector_output_mode(struct drm_connector_state *state) 230 + { 231 + struct drm_connector *connector = state->connector; 232 + struct drm_display_info *info = &connector->display_info; 233 + struct drm_encoder *encoder; 234 + unsigned int supported_fmts = 0; 235 + int j; 236 + 237 + encoder = state->best_encoder; 238 + if (!encoder) 239 + encoder = connector->encoder; 240 + 241 + switch (atmel_hlcdc_encoder_get_bus_fmt(encoder)) { 242 + case 0: 243 + break; 244 + case MEDIA_BUS_FMT_RGB444_1X12: 245 + return ATMEL_HLCDC_RGB444_OUTPUT; 246 + case MEDIA_BUS_FMT_RGB565_1X16: 247 + return ATMEL_HLCDC_RGB565_OUTPUT; 248 + case MEDIA_BUS_FMT_RGB666_1X18: 249 + return ATMEL_HLCDC_RGB666_OUTPUT; 250 + case MEDIA_BUS_FMT_RGB888_1X24: 251 + return ATMEL_HLCDC_RGB888_OUTPUT; 252 + default: 253 + return -EINVAL; 254 + } 255 + 256 + for (j = 0; j < info->num_bus_formats; j++) { 257 + switch (info->bus_formats[j]) { 258 + case MEDIA_BUS_FMT_RGB444_1X12: 259 + supported_fmts |= ATMEL_HLCDC_RGB444_OUTPUT; 260 + break; 261 + case MEDIA_BUS_FMT_RGB565_1X16: 262 + supported_fmts |= ATMEL_HLCDC_RGB565_OUTPUT; 263 + break; 264 + case MEDIA_BUS_FMT_RGB666_1X18: 265 + supported_fmts |= ATMEL_HLCDC_RGB666_OUTPUT; 266 + break; 267 + case MEDIA_BUS_FMT_RGB888_1X24: 268 + supported_fmts |= ATMEL_HLCDC_RGB888_OUTPUT; 269 + break; 270 + default: 271 + break; 272 + } 273 + } 274 + 275 + return supported_fmts; 276 + } 277 + 245 278 static int atmel_hlcdc_crtc_select_output_mode(struct drm_crtc_state *state) 246 279 { 247 280 unsigned int output_fmts = ATMEL_HLCDC_OUTPUT_MODE_MASK; ··· 303 238 crtc = drm_crtc_to_atmel_hlcdc_crtc(state->crtc); 304 239 305 240 for_each_new_connector_in_state(state->state, connector, cstate, i) { 306 - struct drm_display_info 
*info = &connector->display_info; 307 241 unsigned int supported_fmts = 0; 308 - int j; 309 242 310 243 if (!cstate->crtc) 311 244 continue; 312 245 313 - for (j = 0; j < info->num_bus_formats; j++) { 314 - switch (info->bus_formats[j]) { 315 - case MEDIA_BUS_FMT_RGB444_1X12: 316 - supported_fmts |= ATMEL_HLCDC_RGB444_OUTPUT; 317 - break; 318 - case MEDIA_BUS_FMT_RGB565_1X16: 319 - supported_fmts |= ATMEL_HLCDC_RGB565_OUTPUT; 320 - break; 321 - case MEDIA_BUS_FMT_RGB666_1X18: 322 - supported_fmts |= ATMEL_HLCDC_RGB666_OUTPUT; 323 - break; 324 - case MEDIA_BUS_FMT_RGB888_1X24: 325 - supported_fmts |= ATMEL_HLCDC_RGB888_OUTPUT; 326 - break; 327 - default: 328 - break; 329 - } 330 - } 246 + supported_fmts = atmel_hlcdc_connector_output_mode(cstate); 331 247 332 248 if (crtc->dc->desc->conflicting_output_formats) 333 249 output_fmts &= supported_fmts;
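The refactor above moves the per-connector bus-format scan into atmel_hlcdc_connector_output_mode(), whose result the CRTC then intersects across all active connectors when the controller has conflicting output formats. A standalone sketch of that negotiation, with simplified types and bit names mirroring the ATMEL_HLCDC_*_OUTPUT defines:

```c
#include <assert.h>

/* Output-mode bits, mirroring the ATMEL_HLCDC_*_OUTPUT defines. */
#define RGB444_OUTPUT    (1u << 0)
#define RGB565_OUTPUT    (1u << 1)
#define RGB666_OUTPUT    (1u << 2)
#define RGB888_OUTPUT    (1u << 3)
#define OUTPUT_MODE_MASK 0xfu

/*
 * Intersect the format masks reported by each active connector, as
 * atmel_hlcdc_crtc_select_output_mode() does: only an output mode
 * that every connector supports may remain set.
 */
static unsigned int select_output_mode(const unsigned int *conn_fmts, int n)
{
	unsigned int output_fmts = OUTPUT_MODE_MASK;
	int i;

	for (i = 0; i < n; i++)
		output_fmts &= conn_fmts[i];

	return output_fmts;
}
```

With no connectors the full mask survives, matching the driver's starting value of ATMEL_HLCDC_OUTPUT_MODE_MASK.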
+1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
··· 441 441 int atmel_hlcdc_crtc_create(struct drm_device *dev); 442 442 443 443 int atmel_hlcdc_create_outputs(struct drm_device *dev); 444 + int atmel_hlcdc_encoder_get_bus_fmt(struct drm_encoder *encoder); 444 445 445 446 #endif /* DRM_ATMEL_HLCDC_H */
+82 -10
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
··· 27 27 28 28 #include "atmel_hlcdc_dc.h" 29 29 30 + struct atmel_hlcdc_rgb_output { 31 + struct drm_encoder encoder; 32 + int bus_fmt; 33 + }; 34 + 30 35 static const struct drm_encoder_funcs atmel_hlcdc_panel_encoder_funcs = { 31 36 .destroy = drm_encoder_cleanup, 32 37 }; 33 38 39 + static struct atmel_hlcdc_rgb_output * 40 + atmel_hlcdc_encoder_to_rgb_output(struct drm_encoder *encoder) 41 + { 42 + return container_of(encoder, struct atmel_hlcdc_rgb_output, encoder); 43 + } 44 + 45 + int atmel_hlcdc_encoder_get_bus_fmt(struct drm_encoder *encoder) 46 + { 47 + struct atmel_hlcdc_rgb_output *output; 48 + 49 + output = atmel_hlcdc_encoder_to_rgb_output(encoder); 50 + 51 + return output->bus_fmt; 52 + } 53 + 54 + static int atmel_hlcdc_of_bus_fmt(const struct device_node *ep) 55 + { 56 + u32 bus_width; 57 + int ret; 58 + 59 + ret = of_property_read_u32(ep, "bus-width", &bus_width); 60 + if (ret == -EINVAL) 61 + return 0; 62 + if (ret) 63 + return ret; 64 + 65 + switch (bus_width) { 66 + case 12: 67 + return MEDIA_BUS_FMT_RGB444_1X12; 68 + case 16: 69 + return MEDIA_BUS_FMT_RGB565_1X16; 70 + case 18: 71 + return MEDIA_BUS_FMT_RGB666_1X18; 72 + case 24: 73 + return MEDIA_BUS_FMT_RGB888_1X24; 74 + default: 75 + return -EINVAL; 76 + } 77 + } 78 + 34 79 static int atmel_hlcdc_attach_endpoint(struct drm_device *dev, int endpoint) 35 80 { 36 - struct drm_encoder *encoder; 81 + struct atmel_hlcdc_rgb_output *output; 82 + struct device_node *ep; 37 83 struct drm_panel *panel; 38 84 struct drm_bridge *bridge; 39 85 int ret; 40 86 87 + ep = of_graph_get_endpoint_by_regs(dev->dev->of_node, 0, endpoint); 88 + if (!ep) 89 + return -ENODEV; 90 + 41 91 ret = drm_of_find_panel_or_bridge(dev->dev->of_node, 0, endpoint, 42 92 &panel, &bridge); 43 - if (ret) 93 + if (ret) { 94 + of_node_put(ep); 44 95 return ret; 96 + } 45 97 46 - encoder = devm_kzalloc(dev->dev, sizeof(*encoder), GFP_KERNEL); 47 - if (!encoder) 98 + output = devm_kzalloc(dev->dev, sizeof(*output), GFP_KERNEL); 99 + 
if (!output) { 100 + of_node_put(ep); 101 + return -ENOMEM; 102 + } 103 + 104 + output->bus_fmt = atmel_hlcdc_of_bus_fmt(ep); 105 + of_node_put(ep); 106 + if (output->bus_fmt < 0) { 107 + dev_err(dev->dev, "endpoint %d: invalid bus width\n", endpoint); 48 108 return -EINVAL; 109 + } 49 110 50 - ret = drm_encoder_init(dev, encoder, 111 + ret = drm_encoder_init(dev, &output->encoder, 51 112 &atmel_hlcdc_panel_encoder_funcs, 52 113 DRM_MODE_ENCODER_NONE, NULL); 53 114 if (ret) 54 115 return ret; 55 116 56 - encoder->possible_crtcs = 0x1; 117 + output->encoder.possible_crtcs = 0x1; 57 118 58 119 if (panel) { 59 120 bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_Unknown); ··· 123 62 } 124 63 125 64 if (bridge) { 126 - ret = drm_bridge_attach(encoder, bridge, NULL); 65 + ret = drm_bridge_attach(&output->encoder, bridge, NULL); 127 66 if (!ret) 128 67 return 0; 129 68 ··· 131 70 drm_panel_bridge_remove(bridge); 132 71 } 133 72 134 - drm_encoder_cleanup(encoder); 73 + drm_encoder_cleanup(&output->encoder); 135 74 136 75 return ret; 137 76 } ··· 139 78 int atmel_hlcdc_create_outputs(struct drm_device *dev) 140 79 { 141 80 int endpoint, ret = 0; 81 + int attached = 0; 142 82 143 - for (endpoint = 0; !ret; endpoint++) 83 + /* 84 + * Always scan the first few endpoints even if we get -ENODEV, 85 + * but keep going after that as long as we keep getting hits. 86 + */ 87 + for (endpoint = 0; !ret || endpoint < 4; endpoint++) { 144 88 ret = atmel_hlcdc_attach_endpoint(dev, endpoint); 89 + if (ret == -ENODEV) 90 + continue; 91 + if (ret) 92 + break; 93 + attached++; 94 + } 145 95 146 96 /* At least one device was successfully attached.*/ 147 - if (ret == -ENODEV && endpoint) 97 + if (ret == -ENODEV && attached) 148 98 return 0; 149 99 150 100 return ret;
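The bus-width decode added in atmel_hlcdc_of_bus_fmt() can be sketched without the OF plumbing. The enum values below are illustrative placeholders for the real MEDIA_BUS_FMT_* codes; the return-value convention (0 for "no override", negative for an unsupported width) is the driver's:

```c
#include <assert.h>

/* Placeholder codes standing in for MEDIA_BUS_FMT_* (values illustrative). */
enum bus_fmt {
	BUS_FMT_NONE = 0,
	BUS_FMT_RGB444_1X12,
	BUS_FMT_RGB565_1X16,
	BUS_FMT_RGB666_1X18,
	BUS_FMT_RGB888_1X24,
};

/*
 * Decode an endpoint "bus-width" property the way atmel_hlcdc_of_bus_fmt()
 * does: 0 means the property was absent (keep the output-mode heuristic),
 * a negative value rejects an unsupported width, otherwise the matching
 * bus format is returned.
 */
static int bus_width_to_fmt(int bus_width)
{
	switch (bus_width) {
	case 0:
		return BUS_FMT_NONE;	/* property absent: keep heuristic */
	case 12:
		return BUS_FMT_RGB444_1X12;
	case 16:
		return BUS_FMT_RGB565_1X16;
	case 18:
		return BUS_FMT_RGB666_1X18;
	case 24:
		return BUS_FMT_RGB888_1X24;
	default:
		return -1;		/* -EINVAL in the driver */
	}
}
```

This is what the binding addition in hlcdc-dc.txt documents: bus-width values 12/16/18/24 force rgb444/rgb565/rgb666/rgb888 respectively.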
+1 -4
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 942 942 "Failed to allocate initial plane state\n"); 943 943 return; 944 944 } 945 - 946 - p->state = &state->base; 947 - p->state->alpha = DRM_BLEND_ALPHA_OPAQUE; 948 - p->state->plane = p; 945 + __drm_atomic_helper_plane_reset(p, &state->base); 949 946 } 950 947 } 951 948
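The plane change replaces three open-coded assignments with __drm_atomic_helper_plane_reset(). A minimal stand-in with simplified struct types showing what the helper consolidates (the real helper also initializes other default property values):

```c
#include <assert.h>
#include <stddef.h>

#define BLEND_ALPHA_OPAQUE 0xffff	/* mirrors DRM_BLEND_ALPHA_OPAQUE */

struct plane_state {
	struct plane *plane;
	unsigned int alpha;
};

struct plane {
	struct plane_state *state;
};

/*
 * Minimal stand-in for __drm_atomic_helper_plane_reset(): link the state
 * to its plane and apply default property values in one place, instead of
 * open-coding the same assignments in every driver's ->reset() hook.
 */
static void plane_reset(struct plane *p, struct plane_state *state)
{
	state->plane = p;
	state->alpha = BLEND_ALPHA_OPAQUE;
	p->state = state;
}
```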
+1 -17
drivers/gpu/drm/bochs/bochs_drv.c
··· 143 143 /* ---------------------------------------------------------------------- */ 144 144 /* pci interface */ 145 145 146 - static int bochs_kick_out_firmware_fb(struct pci_dev *pdev) 147 - { 148 - struct apertures_struct *ap; 149 - 150 - ap = alloc_apertures(1); 151 - if (!ap) 152 - return -ENOMEM; 153 - 154 - ap->ranges[0].base = pci_resource_start(pdev, 0); 155 - ap->ranges[0].size = pci_resource_len(pdev, 0); 156 - drm_fb_helper_remove_conflicting_framebuffers(ap, "bochsdrmfb", false); 157 - kfree(ap); 158 - 159 - return 0; 160 - } 161 - 162 146 static int bochs_pci_probe(struct pci_dev *pdev, 163 147 const struct pci_device_id *ent) 164 148 { ··· 155 171 return -ENOMEM; 156 172 } 157 173 158 - ret = bochs_kick_out_firmware_fb(pdev); 174 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "bochsdrmfb"); 159 175 if (ret) 160 176 return ret; 161 177
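bochs_kick_out_firmware_fb() existed only to describe PCI BAR 0 as a single aperture; the new drm_fb_helper_remove_conflicting_pci_framebuffers() call derives that range from the pci_dev itself. A sketch of the overlap test such an aperture implies (aperture_overlaps() is a hypothetical helper, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>

struct aperture {
	uint64_t base;	/* pci_resource_start(pdev, 0) in the old code */
	uint64_t size;	/* pci_resource_len(pdev, 0) */
};

/*
 * A firmware framebuffer at [addr, addr + len) conflicts with the device
 * if it overlaps the aperture [base, base + size); conflicting drivers
 * get kicked out before the native driver binds.
 */
static int aperture_overlaps(const struct aperture *ap,
			     uint64_t addr, uint64_t len)
{
	return addr < ap->base + ap->size && ap->base < addr + len;
}
```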
+1 -1
drivers/gpu/drm/bochs/bochs_mm.c
··· 430 430 return; 431 431 432 432 tbo = &((*bo)->bo); 433 - ttm_bo_unref(&tbo); 433 + ttm_bo_put(tbo); 434 434 *bo = NULL; 435 435 } 436 436
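The ttm_bo_unref() to ttm_bo_put() switch tracks a convention change: put-style calls take the object pointer directly, while the old unref-style took a pointer-to-pointer and cleared it for the caller (which is why the bochs code above now clears *bo explicitly). A toy illustration of the two shapes (obj_put/obj_unref are hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

struct obj {
	int refcount;
};

/* put-style: drop one reference on the object itself. */
static void obj_put(struct obj *o)
{
	o->refcount--;
}

/*
 * unref-style (the old ttm_bo_unref() shape): take a pointer to the
 * caller's pointer and clear it after dropping the reference, so a
 * stale handle cannot be reused by accident.
 */
static void obj_unref(struct obj **o)
{
	obj_put(*o);
	*o = NULL;
}
```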
+18
drivers/gpu/drm/bridge/Kconfig
··· 112 112 ---help--- 113 113 Thine THC63LVD1024 LVDS/parallel converter driver. 114 114 115 + config DRM_TOSHIBA_TC358764 116 + tristate "TC358764 DSI/LVDS bridge" 117 + depends on DRM && DRM_PANEL 118 + depends on OF 119 + select DRM_MIPI_DSI 120 + help 121 + Toshiba TC358764 DSI/LVDS bridge driver. 122 + 115 123 config DRM_TOSHIBA_TC358767 116 124 tristate "Toshiba TC358767 eDP bridge" 117 125 depends on OF ··· 135 127 select DRM_KMS_HELPER 136 128 ---help--- 137 129 Texas Instruments TFP410 DVI/HDMI Transmitter driver 130 + 131 + config DRM_TI_SN65DSI86 132 + tristate "TI SN65DSI86 DSI to eDP bridge" 133 + depends on OF 134 + select DRM_KMS_HELPER 135 + select REGMAP_I2C 136 + select DRM_PANEL 137 + select DRM_MIPI_DSI 138 + help 139 + Texas Instruments SN65DSI86 DSI to eDP Bridge driver 138 140 139 141 source "drivers/gpu/drm/bridge/analogix/Kconfig" 140 142
+2
drivers/gpu/drm/bridge/Makefile
··· 10 10 obj-$(CONFIG_DRM_SII902X) += sii902x.o 11 11 obj-$(CONFIG_DRM_SII9234) += sii9234.o 12 12 obj-$(CONFIG_DRM_THINE_THC63LVD1024) += thc63lvd1024.o 13 + obj-$(CONFIG_DRM_TOSHIBA_TC358764) += tc358764.o 13 14 obj-$(CONFIG_DRM_TOSHIBA_TC358767) += tc358767.o 14 15 obj-$(CONFIG_DRM_ANALOGIX_DP) += analogix/ 15 16 obj-$(CONFIG_DRM_I2C_ADV7511) += adv7511/ 17 + obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o 16 18 obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o 17 19 obj-y += synopsys/
-2
drivers/gpu/drm/bridge/synopsys/Makefile
··· 1 - #ccflags-y := -Iinclude/drm 2 - 3 1 obj-$(CONFIG_DRM_DW_HDMI) += dw-hdmi.o 4 2 obj-$(CONFIG_DRM_DW_HDMI_AHB_AUDIO) += dw-hdmi-ahb-audio.o 5 3 obj-$(CONFIG_DRM_DW_HDMI_I2S_AUDIO) += dw-hdmi-i2s-audio.o
+499
drivers/gpu/drm/bridge/tc358764.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018 Samsung Electronics Co., Ltd 4 + * 5 + * Authors: 6 + * Andrzej Hajda <a.hajda@samsung.com> 7 + * Maciej Purski <m.purski@samsung.com> 8 + */ 9 + 10 + #include <drm/drm_atomic_helper.h> 11 + #include <drm/drm_crtc.h> 12 + #include <drm/drm_crtc_helper.h> 13 + #include <drm/drm_fb_helper.h> 14 + #include <drm/drm_mipi_dsi.h> 15 + #include <drm/drm_of.h> 16 + #include <drm/drm_panel.h> 17 + #include <drm/drmP.h> 18 + #include <linux/gpio/consumer.h> 19 + #include <linux/of_graph.h> 20 + #include <linux/regulator/consumer.h> 21 + #include <video/mipi_display.h> 22 + 23 + #define FLD_MASK(start, end) (((1 << ((start) - (end) + 1)) - 1) << (end)) 24 + #define FLD_VAL(val, start, end) (((val) << (end)) & FLD_MASK(start, end)) 25 + 26 + /* PPI layer registers */ 27 + #define PPI_STARTPPI 0x0104 /* START control bit */ 28 + #define PPI_LPTXTIMECNT 0x0114 /* LPTX timing signal */ 29 + #define PPI_LANEENABLE 0x0134 /* Enables each lane */ 30 + #define PPI_TX_RX_TA 0x013C /* BTA timing parameters */ 31 + #define PPI_D0S_CLRSIPOCOUNT 0x0164 /* Assertion timer for Lane 0 */ 32 + #define PPI_D1S_CLRSIPOCOUNT 0x0168 /* Assertion timer for Lane 1 */ 33 + #define PPI_D2S_CLRSIPOCOUNT 0x016C /* Assertion timer for Lane 2 */ 34 + #define PPI_D3S_CLRSIPOCOUNT 0x0170 /* Assertion timer for Lane 3 */ 35 + #define PPI_START_FUNCTION 1 36 + 37 + /* DSI layer registers */ 38 + #define DSI_STARTDSI 0x0204 /* START control bit of DSI-TX */ 39 + #define DSI_LANEENABLE 0x0210 /* Enables each lane */ 40 + #define DSI_RX_START 1 41 + 42 + /* Video path registers */ 43 + #define VP_CTRL 0x0450 /* Video Path Control */ 44 + #define VP_CTRL_MSF(v) FLD_VAL(v, 0, 0) /* Magic square in RGB666 */ 45 + #define VP_CTRL_VTGEN(v) FLD_VAL(v, 4, 4) /* Use chip clock for timing */ 46 + #define VP_CTRL_EVTMODE(v) FLD_VAL(v, 5, 5) /* Event mode */ 47 + #define VP_CTRL_RGB888(v) FLD_VAL(v, 8, 8) /* RGB888 mode */ 48 + #define 
VP_CTRL_VSDELAY(v) FLD_VAL(v, 31, 20) /* VSYNC delay */ 49 + #define VP_CTRL_HSPOL BIT(17) /* Polarity of HSYNC signal */ 50 + #define VP_CTRL_DEPOL BIT(18) /* Polarity of DE signal */ 51 + #define VP_CTRL_VSPOL BIT(19) /* Polarity of VSYNC signal */ 52 + #define VP_HTIM1 0x0454 /* Horizontal Timing Control 1 */ 53 + #define VP_HTIM1_HBP(v) FLD_VAL(v, 24, 16) 54 + #define VP_HTIM1_HSYNC(v) FLD_VAL(v, 8, 0) 55 + #define VP_HTIM2 0x0458 /* Horizontal Timing Control 2 */ 56 + #define VP_HTIM2_HFP(v) FLD_VAL(v, 24, 16) 57 + #define VP_HTIM2_HACT(v) FLD_VAL(v, 10, 0) 58 + #define VP_VTIM1 0x045C /* Vertical Timing Control 1 */ 59 + #define VP_VTIM1_VBP(v) FLD_VAL(v, 23, 16) 60 + #define VP_VTIM1_VSYNC(v) FLD_VAL(v, 7, 0) 61 + #define VP_VTIM2 0x0460 /* Vertical Timing Control 2 */ 62 + #define VP_VTIM2_VFP(v) FLD_VAL(v, 23, 16) 63 + #define VP_VTIM2_VACT(v) FLD_VAL(v, 10, 0) 64 + #define VP_VFUEN 0x0464 /* Video Frame Timing Update Enable */ 65 + 66 + /* LVDS registers */ 67 + #define LV_MX0003 0x0480 /* Mux input bit 0 to 3 */ 68 + #define LV_MX0407 0x0484 /* Mux input bit 4 to 7 */ 69 + #define LV_MX0811 0x0488 /* Mux input bit 8 to 11 */ 70 + #define LV_MX1215 0x048C /* Mux input bit 12 to 15 */ 71 + #define LV_MX1619 0x0490 /* Mux input bit 16 to 19 */ 72 + #define LV_MX2023 0x0494 /* Mux input bit 20 to 23 */ 73 + #define LV_MX2427 0x0498 /* Mux input bit 24 to 27 */ 74 + #define LV_MX(b0, b1, b2, b3) (FLD_VAL(b0, 4, 0) | FLD_VAL(b1, 12, 8) | \ 75 + FLD_VAL(b2, 20, 16) | FLD_VAL(b3, 28, 24)) 76 + 77 + /* Input bit numbers used in mux registers */ 78 + enum { 79 + LVI_R0, 80 + LVI_R1, 81 + LVI_R2, 82 + LVI_R3, 83 + LVI_R4, 84 + LVI_R5, 85 + LVI_R6, 86 + LVI_R7, 87 + LVI_G0, 88 + LVI_G1, 89 + LVI_G2, 90 + LVI_G3, 91 + LVI_G4, 92 + LVI_G5, 93 + LVI_G6, 94 + LVI_G7, 95 + LVI_B0, 96 + LVI_B1, 97 + LVI_B2, 98 + LVI_B3, 99 + LVI_B4, 100 + LVI_B5, 101 + LVI_B6, 102 + LVI_B7, 103 + LVI_HS, 104 + LVI_VS, 105 + LVI_DE, 106 + LVI_L0 107 + }; 108 + 109 + #define LV_CFG 0x049C 
/* LVDS Configuration */ 110 + #define LV_PHY0 0x04A0 /* LVDS PHY 0 */ 111 + #define LV_PHY0_RST(v) FLD_VAL(v, 22, 22) /* PHY reset */ 112 + #define LV_PHY0_IS(v) FLD_VAL(v, 15, 14) 113 + #define LV_PHY0_ND(v) FLD_VAL(v, 4, 0) /* Frequency range select */ 114 + #define LV_PHY0_PRBS_ON(v) FLD_VAL(v, 20, 16) /* Clock/Data Flag pins */ 115 + 116 + /* System registers */ 117 + #define SYS_RST 0x0504 /* System Reset */ 118 + #define SYS_ID 0x0580 /* System ID */ 119 + 120 + #define SYS_RST_I2CS BIT(0) /* Reset I2C-Slave controller */ 121 + #define SYS_RST_I2CM BIT(1) /* Reset I2C-Master controller */ 122 + #define SYS_RST_LCD BIT(2) /* Reset LCD controller */ 123 + #define SYS_RST_BM BIT(3) /* Reset Bus Management controller */ 124 + #define SYS_RST_DSIRX BIT(4) /* Reset DSI-RX and App controller */ 125 + #define SYS_RST_REG BIT(5) /* Reset Register module */ 126 + 127 + #define LPX_PERIOD 2 128 + #define TTA_SURE 3 129 + #define TTA_GET 0x20000 130 + 131 + /* Lane enable PPI and DSI register bits */ 132 + #define LANEENABLE_CLEN BIT(0) 133 + #define LANEENABLE_L0EN BIT(1) 134 + #define LANEENABLE_L1EN BIT(2) 135 + #define LANEENABLE_L2EN BIT(3) 136 + #define LANEENABLE_L3EN BIT(4) 137 + 138 + /* LVCFG fields */ 139 + #define LV_CFG_LVEN BIT(0) 140 + #define LV_CFG_LVDLINK BIT(1) 141 + #define LV_CFG_CLKPOL1 BIT(2) 142 + #define LV_CFG_CLKPOL2 BIT(3) 143 + 144 + static const char * const tc358764_supplies[] = { 145 + "vddc", "vddio", "vddlvds" 146 + }; 147 + 148 + struct tc358764 { 149 + struct device *dev; 150 + struct drm_bridge bridge; 151 + struct drm_connector connector; 152 + struct regulator_bulk_data supplies[ARRAY_SIZE(tc358764_supplies)]; 153 + struct gpio_desc *gpio_reset; 154 + struct drm_panel *panel; 155 + int error; 156 + }; 157 + 158 + static int tc358764_clear_error(struct tc358764 *ctx) 159 + { 160 + int ret = ctx->error; 161 + 162 + ctx->error = 0; 163 + return ret; 164 + } 165 + 166 + static void tc358764_read(struct tc358764 *ctx, u16 addr, u32 
*val) 167 + { 168 + struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev); 169 + ssize_t ret; 170 + 171 + if (ctx->error) 172 + return; 173 + 174 + cpu_to_le16s(&addr); 175 + ret = mipi_dsi_generic_read(dsi, &addr, sizeof(addr), val, sizeof(*val)); 176 + if (ret >= 0) 177 + le32_to_cpus(val); 178 + 179 + dev_dbg(ctx->dev, "read: %d, addr: %d\n", addr, *val); 180 + } 181 + 182 + static void tc358764_write(struct tc358764 *ctx, u16 addr, u32 val) 183 + { 184 + struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev); 185 + ssize_t ret; 186 + u8 data[6]; 187 + 188 + if (ctx->error) 189 + return; 190 + 191 + data[0] = addr; 192 + data[1] = addr >> 8; 193 + data[2] = val; 194 + data[3] = val >> 8; 195 + data[4] = val >> 16; 196 + data[5] = val >> 24; 197 + 198 + ret = mipi_dsi_generic_write(dsi, data, sizeof(data)); 199 + if (ret < 0) 200 + ctx->error = ret; 201 + } 202 + 203 + static inline struct tc358764 *bridge_to_tc358764(struct drm_bridge *bridge) 204 + { 205 + return container_of(bridge, struct tc358764, bridge); 206 + } 207 + 208 + static inline 209 + struct tc358764 *connector_to_tc358764(struct drm_connector *connector) 210 + { 211 + return container_of(connector, struct tc358764, connector); 212 + } 213 + 214 + static int tc358764_init(struct tc358764 *ctx) 215 + { 216 + u32 v = 0; 217 + 218 + tc358764_read(ctx, SYS_ID, &v); 219 + if (ctx->error) 220 + return tc358764_clear_error(ctx); 221 + dev_info(ctx->dev, "ID: %#x\n", v); 222 + 223 + /* configure PPI counters */ 224 + tc358764_write(ctx, PPI_TX_RX_TA, TTA_GET | TTA_SURE); 225 + tc358764_write(ctx, PPI_LPTXTIMECNT, LPX_PERIOD); 226 + tc358764_write(ctx, PPI_D0S_CLRSIPOCOUNT, 5); 227 + tc358764_write(ctx, PPI_D1S_CLRSIPOCOUNT, 5); 228 + tc358764_write(ctx, PPI_D2S_CLRSIPOCOUNT, 5); 229 + tc358764_write(ctx, PPI_D3S_CLRSIPOCOUNT, 5); 230 + 231 + /* enable four data lanes and clock lane */ 232 + tc358764_write(ctx, PPI_LANEENABLE, LANEENABLE_L3EN | LANEENABLE_L2EN | 233 + LANEENABLE_L1EN | 
LANEENABLE_L0EN | LANEENABLE_CLEN); 234 + tc358764_write(ctx, DSI_LANEENABLE, LANEENABLE_L3EN | LANEENABLE_L2EN | 235 + LANEENABLE_L1EN | LANEENABLE_L0EN | LANEENABLE_CLEN); 236 + 237 + /* start */ 238 + tc358764_write(ctx, PPI_STARTPPI, PPI_START_FUNCTION); 239 + tc358764_write(ctx, DSI_STARTDSI, DSI_RX_START); 240 + 241 + /* configure video path */ 242 + tc358764_write(ctx, VP_CTRL, VP_CTRL_VSDELAY(15) | VP_CTRL_RGB888(1) | 243 + VP_CTRL_EVTMODE(1) | VP_CTRL_HSPOL | VP_CTRL_VSPOL); 244 + 245 + /* reset PHY */ 246 + tc358764_write(ctx, LV_PHY0, LV_PHY0_RST(1) | 247 + LV_PHY0_PRBS_ON(4) | LV_PHY0_IS(2) | LV_PHY0_ND(6)); 248 + tc358764_write(ctx, LV_PHY0, LV_PHY0_PRBS_ON(4) | LV_PHY0_IS(2) | 249 + LV_PHY0_ND(6)); 250 + 251 + /* reset bridge */ 252 + tc358764_write(ctx, SYS_RST, SYS_RST_LCD); 253 + 254 + /* set bit order */ 255 + tc358764_write(ctx, LV_MX0003, LV_MX(LVI_R0, LVI_R1, LVI_R2, LVI_R3)); 256 + tc358764_write(ctx, LV_MX0407, LV_MX(LVI_R4, LVI_R7, LVI_R5, LVI_G0)); 257 + tc358764_write(ctx, LV_MX0811, LV_MX(LVI_G1, LVI_G2, LVI_G6, LVI_G7)); 258 + tc358764_write(ctx, LV_MX1215, LV_MX(LVI_G3, LVI_G4, LVI_G5, LVI_B0)); 259 + tc358764_write(ctx, LV_MX1619, LV_MX(LVI_B6, LVI_B7, LVI_B1, LVI_B2)); 260 + tc358764_write(ctx, LV_MX2023, LV_MX(LVI_B3, LVI_B4, LVI_B5, LVI_L0)); 261 + tc358764_write(ctx, LV_MX2427, LV_MX(LVI_HS, LVI_VS, LVI_DE, LVI_R6)); 262 + tc358764_write(ctx, LV_CFG, LV_CFG_CLKPOL2 | LV_CFG_CLKPOL1 | 263 + LV_CFG_LVEN); 264 + 265 + return tc358764_clear_error(ctx); 266 + } 267 + 268 + static void tc358764_reset(struct tc358764 *ctx) 269 + { 270 + gpiod_set_value(ctx->gpio_reset, 1); 271 + usleep_range(1000, 2000); 272 + gpiod_set_value(ctx->gpio_reset, 0); 273 + usleep_range(1000, 2000); 274 + } 275 + 276 + static int tc358764_get_modes(struct drm_connector *connector) 277 + { 278 + struct tc358764 *ctx = connector_to_tc358764(connector); 279 + 280 + return drm_panel_get_modes(ctx->panel); 281 + } 282 + 283 + static const 284 + struct 
drm_connector_helper_funcs tc358764_connector_helper_funcs = { 285 + .get_modes = tc358764_get_modes, 286 + }; 287 + 288 + static const struct drm_connector_funcs tc358764_connector_funcs = { 289 + .fill_modes = drm_helper_probe_single_connector_modes, 290 + .destroy = drm_connector_cleanup, 291 + .reset = drm_atomic_helper_connector_reset, 292 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 293 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 294 + }; 295 + 296 + static void tc358764_disable(struct drm_bridge *bridge) 297 + { 298 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 299 + int ret = drm_panel_disable(bridge_to_tc358764(bridge)->panel); 300 + 301 + if (ret < 0) 302 + dev_err(ctx->dev, "error disabling panel (%d)\n", ret); 303 + } 304 + 305 + static void tc358764_post_disable(struct drm_bridge *bridge) 306 + { 307 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 308 + int ret; 309 + 310 + ret = drm_panel_unprepare(ctx->panel); 311 + if (ret < 0) 312 + dev_err(ctx->dev, "error unpreparing panel (%d)\n", ret); 313 + tc358764_reset(ctx); 314 + usleep_range(10000, 15000); 315 + ret = regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 316 + if (ret < 0) 317 + dev_err(ctx->dev, "error disabling regulators (%d)\n", ret); 318 + } 319 + 320 + static void tc358764_pre_enable(struct drm_bridge *bridge) 321 + { 322 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 323 + int ret; 324 + 325 + ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 326 + if (ret < 0) 327 + dev_err(ctx->dev, "error enabling regulators (%d)\n", ret); 328 + usleep_range(10000, 15000); 329 + tc358764_reset(ctx); 330 + ret = tc358764_init(ctx); 331 + if (ret < 0) 332 + dev_err(ctx->dev, "error initializing bridge (%d)\n", ret); 333 + ret = drm_panel_prepare(ctx->panel); 334 + if (ret < 0) 335 + dev_err(ctx->dev, "error preparing panel (%d)\n", ret); 336 + } 337 + 338 + static void tc358764_enable(struct 
drm_bridge *bridge) 339 + { 340 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 341 + int ret = drm_panel_enable(ctx->panel); 342 + 343 + if (ret < 0) 344 + dev_err(ctx->dev, "error enabling panel (%d)\n", ret); 345 + } 346 + 347 + static int tc358764_attach(struct drm_bridge *bridge) 348 + { 349 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 350 + struct drm_device *drm = bridge->dev; 351 + int ret; 352 + 353 + ctx->connector.polled = DRM_CONNECTOR_POLL_HPD; 354 + ret = drm_connector_init(drm, &ctx->connector, 355 + &tc358764_connector_funcs, 356 + DRM_MODE_CONNECTOR_LVDS); 357 + if (ret) { 358 + DRM_ERROR("Failed to initialize connector\n"); 359 + return ret; 360 + } 361 + 362 + drm_connector_helper_add(&ctx->connector, 363 + &tc358764_connector_helper_funcs); 364 + drm_connector_attach_encoder(&ctx->connector, bridge->encoder); 365 + drm_panel_attach(ctx->panel, &ctx->connector); 366 + ctx->connector.funcs->reset(&ctx->connector); 367 + drm_fb_helper_add_one_connector(drm->fb_helper, &ctx->connector); 368 + drm_connector_register(&ctx->connector); 369 + 370 + return 0; 371 + } 372 + 373 + static void tc358764_detach(struct drm_bridge *bridge) 374 + { 375 + struct tc358764 *ctx = bridge_to_tc358764(bridge); 376 + struct drm_device *drm = bridge->dev; 377 + 378 + drm_connector_unregister(&ctx->connector); 379 + drm_fb_helper_remove_one_connector(drm->fb_helper, &ctx->connector); 380 + drm_panel_detach(ctx->panel); 381 + ctx->panel = NULL; 382 + drm_connector_unreference(&ctx->connector); 383 + } 384 + 385 + static const struct drm_bridge_funcs tc358764_bridge_funcs = { 386 + .disable = tc358764_disable, 387 + .post_disable = tc358764_post_disable, 388 + .enable = tc358764_enable, 389 + .pre_enable = tc358764_pre_enable, 390 + .attach = tc358764_attach, 391 + .detach = tc358764_detach, 392 + }; 393 + 394 + static int tc358764_parse_dt(struct tc358764 *ctx) 395 + { 396 + struct device *dev = ctx->dev; 397 + int ret; 398 + 399 + ctx->gpio_reset = 
devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 400 + if (IS_ERR(ctx->gpio_reset)) { 401 + dev_err(dev, "no reset GPIO pin provided\n"); 402 + return PTR_ERR(ctx->gpio_reset); 403 + } 404 + 405 + ret = drm_of_find_panel_or_bridge(ctx->dev->of_node, 1, 0, &ctx->panel, 406 + NULL); 407 + if (ret && ret != -EPROBE_DEFER) 408 + dev_err(dev, "cannot find panel (%d)\n", ret); 409 + 410 + return ret; 411 + } 412 + 413 + static int tc358764_configure_regulators(struct tc358764 *ctx) 414 + { 415 + int i, ret; 416 + 417 + for (i = 0; i < ARRAY_SIZE(ctx->supplies); ++i) 418 + ctx->supplies[i].supply = tc358764_supplies[i]; 419 + 420 + ret = devm_regulator_bulk_get(ctx->dev, ARRAY_SIZE(ctx->supplies), 421 + ctx->supplies); 422 + if (ret < 0) 423 + dev_err(ctx->dev, "failed to get regulators: %d\n", ret); 424 + 425 + return ret; 426 + } 427 + 428 + static int tc358764_probe(struct mipi_dsi_device *dsi) 429 + { 430 + struct device *dev = &dsi->dev; 431 + struct tc358764 *ctx; 432 + int ret; 433 + 434 + ctx = devm_kzalloc(dev, sizeof(struct tc358764), GFP_KERNEL); 435 + if (!ctx) 436 + return -ENOMEM; 437 + 438 + mipi_dsi_set_drvdata(dsi, ctx); 439 + 440 + ctx->dev = dev; 441 + 442 + dsi->lanes = 4; 443 + dsi->format = MIPI_DSI_FMT_RGB888; 444 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST 445 + | MIPI_DSI_MODE_VIDEO_AUTO_VERT | MIPI_DSI_MODE_LPM; 446 + 447 + ret = tc358764_parse_dt(ctx); 448 + if (ret < 0) 449 + return ret; 450 + 451 + ret = tc358764_configure_regulators(ctx); 452 + if (ret < 0) 453 + return ret; 454 + 455 + ctx->bridge.funcs = &tc358764_bridge_funcs; 456 + ctx->bridge.of_node = dev->of_node; 457 + 458 + drm_bridge_add(&ctx->bridge); 459 + 460 + ret = mipi_dsi_attach(dsi); 461 + if (ret < 0) { 462 + drm_bridge_remove(&ctx->bridge); 463 + dev_err(dev, "failed to attach dsi\n"); 464 + } 465 + 466 + return ret; 467 + } 468 + 469 + static int tc358764_remove(struct mipi_dsi_device *dsi) 470 + { 471 + struct tc358764 *ctx = 
mipi_dsi_get_drvdata(dsi); 472 + 473 + mipi_dsi_detach(dsi); 474 + drm_bridge_remove(&ctx->bridge); 475 + 476 + return 0; 477 + } 478 + 479 + static const struct of_device_id tc358764_of_match[] = { 480 + { .compatible = "toshiba,tc358764" }, 481 + { } 482 + }; 483 + MODULE_DEVICE_TABLE(of, tc358764_of_match); 484 + 485 + static struct mipi_dsi_driver tc358764_driver = { 486 + .probe = tc358764_probe, 487 + .remove = tc358764_remove, 488 + .driver = { 489 + .name = "tc358764", 490 + .owner = THIS_MODULE, 491 + .of_match_table = tc358764_of_match, 492 + }, 493 + }; 494 + module_mipi_dsi_driver(tc358764_driver); 495 + 496 + MODULE_AUTHOR("Andrzej Hajda <a.hajda@samsung.com>"); 497 + MODULE_AUTHOR("Maciej Purski <m.purski@samsung.com>"); 498 + MODULE_DESCRIPTION("MIPI-DSI based Driver for TC358764 DSI/LVDS Bridge"); 499 + MODULE_LICENSE("GPL v2");
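The FLD_MASK()/FLD_VAL() helpers at the top of tc358764.c drive all the register layouts that follow: FLD_MASK(start, end) covers bits end..start inclusive, and FLD_VAL() shifts a value into that field while truncating any overflow. Copied verbatim here, together with the LV_MX() packing macro, for a quick sanity check:

```c
#include <assert.h>

/* Copied from the tc358764 driver above. */
#define FLD_MASK(start, end) (((1 << ((start) - (end) + 1)) - 1) << (end))
#define FLD_VAL(val, start, end) (((val) << (end)) & FLD_MASK(start, end))

/* LV_MX packs four 5-bit mux input selectors into one 32-bit register. */
#define LV_MX(b0, b1, b2, b3) (FLD_VAL(b0, 4, 0) | FLD_VAL(b1, 12, 8) | \
			       FLD_VAL(b2, 20, 16) | FLD_VAL(b3, 28, 24))
```

For example, FLD_MASK(4, 0) is 0x1f (a 5-bit field at bit 0), and a value wider than the field is masked off rather than spilling into neighboring bits.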
+779
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2018, The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #include <drm/drmP.h> 7 + #include <drm/drm_atomic.h> 8 + #include <drm/drm_atomic_helper.h> 9 + #include <drm/drm_crtc_helper.h> 10 + #include <drm/drm_dp_helper.h> 11 + #include <drm/drm_mipi_dsi.h> 12 + #include <drm/drm_of.h> 13 + #include <drm/drm_panel.h> 14 + #include <linux/clk.h> 15 + #include <linux/gpio/consumer.h> 16 + #include <linux/i2c.h> 17 + #include <linux/iopoll.h> 18 + #include <linux/of_graph.h> 19 + #include <linux/pm_runtime.h> 20 + #include <linux/regmap.h> 21 + #include <linux/regulator/consumer.h> 22 + 23 + #define SN_DEVICE_REV_REG 0x08 24 + #define SN_DPPLL_SRC_REG 0x0A 25 + #define DPPLL_CLK_SRC_DSICLK BIT(0) 26 + #define REFCLK_FREQ_MASK GENMASK(3, 1) 27 + #define REFCLK_FREQ(x) ((x) << 1) 28 + #define DPPLL_SRC_DP_PLL_LOCK BIT(7) 29 + #define SN_PLL_ENABLE_REG 0x0D 30 + #define SN_DSI_LANES_REG 0x10 31 + #define CHA_DSI_LANES_MASK GENMASK(4, 3) 32 + #define CHA_DSI_LANES(x) ((x) << 3) 33 + #define SN_DSIA_CLK_FREQ_REG 0x12 34 + #define SN_CHA_ACTIVE_LINE_LENGTH_LOW_REG 0x20 35 + #define SN_CHA_VERTICAL_DISPLAY_SIZE_LOW_REG 0x24 36 + #define SN_CHA_HSYNC_PULSE_WIDTH_LOW_REG 0x2C 37 + #define SN_CHA_HSYNC_PULSE_WIDTH_HIGH_REG 0x2D 38 + #define CHA_HSYNC_POLARITY BIT(7) 39 + #define SN_CHA_VSYNC_PULSE_WIDTH_LOW_REG 0x30 40 + #define SN_CHA_VSYNC_PULSE_WIDTH_HIGH_REG 0x31 41 + #define CHA_VSYNC_POLARITY BIT(7) 42 + #define SN_CHA_HORIZONTAL_BACK_PORCH_REG 0x34 43 + #define SN_CHA_VERTICAL_BACK_PORCH_REG 0x36 44 + #define SN_CHA_HORIZONTAL_FRONT_PORCH_REG 0x38 45 + #define SN_CHA_VERTICAL_FRONT_PORCH_REG 0x3A 46 + #define SN_ENH_FRAME_REG 0x5A 47 + #define VSTREAM_ENABLE BIT(3) 48 + #define SN_DATA_FORMAT_REG 0x5B 49 + #define SN_HPD_DISABLE_REG 0x5C 50 + #define HPD_DISABLE BIT(0) 51 + #define SN_AUX_WDATA_REG(x) (0x64 + (x)) 52 + #define SN_AUX_ADDR_19_16_REG 0x74 53 + #define SN_AUX_ADDR_15_8_REG 0x75 54 
+ #define SN_AUX_ADDR_7_0_REG 0x76 55 + #define SN_AUX_LENGTH_REG 0x77 56 + #define SN_AUX_CMD_REG 0x78 57 + #define AUX_CMD_SEND BIT(1) 58 + #define AUX_CMD_REQ(x) ((x) << 4) 59 + #define SN_AUX_RDATA_REG(x) (0x79 + (x)) 60 + #define SN_SSC_CONFIG_REG 0x93 61 + #define DP_NUM_LANES_MASK GENMASK(5, 4) 62 + #define DP_NUM_LANES(x) ((x) << 4) 63 + #define SN_DATARATE_CONFIG_REG 0x94 64 + #define DP_DATARATE_MASK GENMASK(7, 5) 65 + #define DP_DATARATE(x) ((x) << 5) 66 + #define SN_ML_TX_MODE_REG 0x96 67 + #define ML_TX_MAIN_LINK_OFF 0 68 + #define ML_TX_NORMAL_MODE BIT(0) 69 + #define SN_AUX_CMD_STATUS_REG 0xF4 70 + #define AUX_IRQ_STATUS_AUX_RPLY_TOUT BIT(3) 71 + #define AUX_IRQ_STATUS_AUX_SHORT BIT(5) 72 + #define AUX_IRQ_STATUS_NAT_I2C_FAIL BIT(6) 73 + 74 + #define MIN_DSI_CLK_FREQ_MHZ 40 75 + 76 + /* fudge factor required to account for 8b/10b encoding */ 77 + #define DP_CLK_FUDGE_NUM 10 78 + #define DP_CLK_FUDGE_DEN 8 79 + 80 + /* Matches DP_AUX_MAX_PAYLOAD_BYTES (for now) */ 81 + #define SN_AUX_MAX_PAYLOAD_BYTES 16 82 + 83 + #define SN_REGULATOR_SUPPLY_NUM 4 84 + 85 + struct ti_sn_bridge { 86 + struct device *dev; 87 + struct regmap *regmap; 88 + struct drm_dp_aux aux; 89 + struct drm_bridge bridge; 90 + struct drm_connector connector; 91 + struct device_node *host_node; 92 + struct mipi_dsi_device *dsi; 93 + struct clk *refclk; 94 + struct drm_panel *panel; 95 + struct gpio_desc *enable_gpio; 96 + struct regulator_bulk_data supplies[SN_REGULATOR_SUPPLY_NUM]; 97 + }; 98 + 99 + static const struct regmap_range ti_sn_bridge_volatile_ranges[] = { 100 + { .range_min = 0, .range_max = 0xFF }, 101 + }; 102 + 103 + static const struct regmap_access_table ti_sn_bridge_volatile_table = { 104 + .yes_ranges = ti_sn_bridge_volatile_ranges, 105 + .n_yes_ranges = ARRAY_SIZE(ti_sn_bridge_volatile_ranges), 106 + }; 107 + 108 + static const struct regmap_config ti_sn_bridge_regmap_config = { 109 + .reg_bits = 8, 110 + .val_bits = 8, 111 + .volatile_table = 
&ti_sn_bridge_volatile_table, 112 + .cache_type = REGCACHE_NONE, 113 + }; 114 + 115 + static void ti_sn_bridge_write_u16(struct ti_sn_bridge *pdata, 116 + unsigned int reg, u16 val) 117 + { 118 + regmap_write(pdata->regmap, reg, val & 0xFF); 119 + regmap_write(pdata->regmap, reg + 1, val >> 8); 120 + } 121 + 122 + static int __maybe_unused ti_sn_bridge_resume(struct device *dev) 123 + { 124 + struct ti_sn_bridge *pdata = dev_get_drvdata(dev); 125 + int ret; 126 + 127 + ret = regulator_bulk_enable(SN_REGULATOR_SUPPLY_NUM, pdata->supplies); 128 + if (ret) { 129 + DRM_ERROR("failed to enable supplies %d\n", ret); 130 + return ret; 131 + } 132 + 133 + gpiod_set_value(pdata->enable_gpio, 1); 134 + 135 + return ret; 136 + } 137 + 138 + static int __maybe_unused ti_sn_bridge_suspend(struct device *dev) 139 + { 140 + struct ti_sn_bridge *pdata = dev_get_drvdata(dev); 141 + int ret; 142 + 143 + gpiod_set_value(pdata->enable_gpio, 0); 144 + 145 + ret = regulator_bulk_disable(SN_REGULATOR_SUPPLY_NUM, pdata->supplies); 146 + if (ret) 147 + DRM_ERROR("failed to disable supplies %d\n", ret); 148 + 149 + return ret; 150 + } 151 + 152 + static const struct dev_pm_ops ti_sn_bridge_pm_ops = { 153 + SET_RUNTIME_PM_OPS(ti_sn_bridge_suspend, ti_sn_bridge_resume, NULL) 154 + }; 155 + 156 + /* Connector funcs */ 157 + static struct ti_sn_bridge * 158 + connector_to_ti_sn_bridge(struct drm_connector *connector) 159 + { 160 + return container_of(connector, struct ti_sn_bridge, connector); 161 + } 162 + 163 + static int ti_sn_bridge_connector_get_modes(struct drm_connector *connector) 164 + { 165 + struct ti_sn_bridge *pdata = connector_to_ti_sn_bridge(connector); 166 + 167 + return drm_panel_get_modes(pdata->panel); 168 + } 169 + 170 + static enum drm_mode_status 171 + ti_sn_bridge_connector_mode_valid(struct drm_connector *connector, 172 + struct drm_display_mode *mode) 173 + { 174 + /* maximum supported resolution is 4K at 60 fps */ 175 + if (mode->clock > 594000) 176 + return 
MODE_CLOCK_HIGH; 177 + 178 + return MODE_OK; 179 + } 180 + 181 + static struct drm_connector_helper_funcs ti_sn_bridge_connector_helper_funcs = { 182 + .get_modes = ti_sn_bridge_connector_get_modes, 183 + .mode_valid = ti_sn_bridge_connector_mode_valid, 184 + }; 185 + 186 + static enum drm_connector_status 187 + ti_sn_bridge_connector_detect(struct drm_connector *connector, bool force) 188 + { 189 + /** 190 + * TODO: Currently if drm_panel is present, then always 191 + * return the status as connected. Need to add support to detect 192 + * device state for hot pluggable scenarios. 193 + */ 194 + return connector_status_connected; 195 + } 196 + 197 + static const struct drm_connector_funcs ti_sn_bridge_connector_funcs = { 198 + .fill_modes = drm_helper_probe_single_connector_modes, 199 + .detect = ti_sn_bridge_connector_detect, 200 + .destroy = drm_connector_cleanup, 201 + .reset = drm_atomic_helper_connector_reset, 202 + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, 203 + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 204 + }; 205 + 206 + static struct ti_sn_bridge *bridge_to_ti_sn_bridge(struct drm_bridge *bridge) 207 + { 208 + return container_of(bridge, struct ti_sn_bridge, bridge); 209 + } 210 + 211 + static int ti_sn_bridge_parse_regulators(struct ti_sn_bridge *pdata) 212 + { 213 + unsigned int i; 214 + const char * const ti_sn_bridge_supply_names[] = { 215 + "vcca", "vcc", "vccio", "vpll", 216 + }; 217 + 218 + for (i = 0; i < SN_REGULATOR_SUPPLY_NUM; i++) 219 + pdata->supplies[i].supply = ti_sn_bridge_supply_names[i]; 220 + 221 + return devm_regulator_bulk_get(pdata->dev, SN_REGULATOR_SUPPLY_NUM, 222 + pdata->supplies); 223 + } 224 + 225 + static int ti_sn_bridge_attach(struct drm_bridge *bridge) 226 + { 227 + int ret, val; 228 + struct ti_sn_bridge *pdata = bridge_to_ti_sn_bridge(bridge); 229 + struct mipi_dsi_host *host; 230 + struct mipi_dsi_device *dsi; 231 + const struct mipi_dsi_device_info info = { .type = 
"ti_sn_bridge", 232 + .channel = 0, 233 + .node = NULL, 234 + }; 235 + 236 + ret = drm_connector_init(bridge->dev, &pdata->connector, 237 + &ti_sn_bridge_connector_funcs, 238 + DRM_MODE_CONNECTOR_eDP); 239 + if (ret) { 240 + DRM_ERROR("Failed to initialize connector with drm\n"); 241 + return ret; 242 + } 243 + 244 + drm_connector_helper_add(&pdata->connector, 245 + &ti_sn_bridge_connector_helper_funcs); 246 + drm_connector_attach_encoder(&pdata->connector, bridge->encoder); 247 + 248 + /* 249 + * TODO: ideally finding host resource and dsi dev registration needs 250 + * to be done in bridge probe. But some existing DSI host drivers will 251 + * wait for any of the drm_bridge/drm_panel to get added to the global 252 + * bridge/panel list, before completing their probe. So if we do the 253 + * dsi dev registration part in bridge probe, before populating in 254 + * the global bridge list, then it will cause deadlock as dsi host probe 255 + * will never complete, neither our bridge probe. So keeping it here 256 + * will satisfy most of the existing host drivers. Once the host driver 257 + * is fixed we can move the below code to bridge probe safely. 
258 + */ 259 + host = of_find_mipi_dsi_host_by_node(pdata->host_node); 260 + if (!host) { 261 + DRM_ERROR("failed to find dsi host\n"); 262 + ret = -ENODEV; 263 + goto err_dsi_host; 264 + } 265 + 266 + dsi = mipi_dsi_device_register_full(host, &info); 267 + if (IS_ERR(dsi)) { 268 + DRM_ERROR("failed to create dsi device\n"); 269 + ret = PTR_ERR(dsi); 270 + goto err_dsi_host; 271 + } 272 + 273 + /* TODO: setting to 4 lanes always for now */ 274 + dsi->lanes = 4; 275 + dsi->format = MIPI_DSI_FMT_RGB888; 276 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE | 277 + MIPI_DSI_MODE_EOT_PACKET | MIPI_DSI_MODE_VIDEO_HSE; 278 + 279 + /* check if continuous dsi clock is required or not */ 280 + pm_runtime_get_sync(pdata->dev); 281 + regmap_read(pdata->regmap, SN_DPPLL_SRC_REG, &val); 282 + pm_runtime_put(pdata->dev); 283 + if (!(val & DPPLL_CLK_SRC_DSICLK)) 284 + dsi->mode_flags |= MIPI_DSI_CLOCK_NON_CONTINUOUS; 285 + 286 + ret = mipi_dsi_attach(dsi); 287 + if (ret < 0) { 288 + DRM_ERROR("failed to attach dsi to host\n"); 289 + goto err_dsi_attach; 290 + } 291 + pdata->dsi = dsi; 292 + 293 + /* attach panel to bridge */ 294 + drm_panel_attach(pdata->panel, &pdata->connector); 295 + 296 + return 0; 297 + 298 + err_dsi_attach: 299 + mipi_dsi_device_unregister(dsi); 300 + err_dsi_host: 301 + drm_connector_cleanup(&pdata->connector); 302 + return ret; 303 + } 304 + 305 + static void ti_sn_bridge_disable(struct drm_bridge *bridge) 306 + { 307 + struct ti_sn_bridge *pdata = bridge_to_ti_sn_bridge(bridge); 308 + 309 + drm_panel_disable(pdata->panel); 310 + 311 + /* disable video stream */ 312 + regmap_update_bits(pdata->regmap, SN_ENH_FRAME_REG, VSTREAM_ENABLE, 0); 313 + /* semi auto link training mode OFF */ 314 + regmap_write(pdata->regmap, SN_ML_TX_MODE_REG, 0); 315 + /* disable DP PLL */ 316 + regmap_write(pdata->regmap, SN_PLL_ENABLE_REG, 0); 317 + 318 + drm_panel_unprepare(pdata->panel); 319 + } 320 + 321 + static u32 ti_sn_bridge_get_dsi_freq(struct 
ti_sn_bridge *pdata) 322 + { 323 + u32 bit_rate_khz, clk_freq_khz; 324 + struct drm_display_mode *mode = 325 + &pdata->bridge.encoder->crtc->state->adjusted_mode; 326 + 327 + bit_rate_khz = mode->clock * 328 + mipi_dsi_pixel_format_to_bpp(pdata->dsi->format); 329 + clk_freq_khz = bit_rate_khz / (pdata->dsi->lanes * 2); 330 + 331 + return clk_freq_khz; 332 + } 333 + 334 + /* clk frequencies supported by the bridge, in Hz, when derived from the REFCLK pin */ 335 + static const u32 ti_sn_bridge_refclk_lut[] = { 336 + 12000000, 337 + 19200000, 338 + 26000000, 339 + 27000000, 340 + 38400000, 341 + }; 342 + 343 + /* clk frequencies supported by the bridge, in Hz, when derived from the DACP/N pin */ 344 + static const u32 ti_sn_bridge_dsiclk_lut[] = { 345 + 468000000, 346 + 384000000, 347 + 416000000, 348 + 486000000, 349 + 460800000, 350 + }; 351 + 352 + static void ti_sn_bridge_set_refclk_freq(struct ti_sn_bridge *pdata) 353 + { 354 + int i; 355 + u32 refclk_rate; 356 + const u32 *refclk_lut; 357 + size_t refclk_lut_size; 358 + 359 + if (pdata->refclk) { 360 + refclk_rate = clk_get_rate(pdata->refclk); 361 + refclk_lut = ti_sn_bridge_refclk_lut; 362 + refclk_lut_size = ARRAY_SIZE(ti_sn_bridge_refclk_lut); 363 + clk_prepare_enable(pdata->refclk); 364 + } else { 365 + refclk_rate = ti_sn_bridge_get_dsi_freq(pdata) * 1000; 366 + refclk_lut = ti_sn_bridge_dsiclk_lut; 367 + refclk_lut_size = ARRAY_SIZE(ti_sn_bridge_dsiclk_lut); 368 + } 369 + 370 + /* i == refclk_lut_size selects the default frequency */ 371 + for (i = 0; i < refclk_lut_size; i++) 372 + if (refclk_lut[i] == refclk_rate) 373 + break; 374 + 375 + regmap_update_bits(pdata->regmap, SN_DPPLL_SRC_REG, REFCLK_FREQ_MASK, 376 + REFCLK_FREQ(i)); 377 + } 378 + 379 + /** 380 + * The LUT index corresponds to the register value and 381 + * the LUT values correspond to the DP data rates supported 382 + * by the bridge, in Mbps. 
383 + */ 384 + static const unsigned int ti_sn_bridge_dp_rate_lut[] = { 385 + 0, 1620, 2160, 2430, 2700, 3240, 4320, 5400 386 + }; 387 + 388 + static void ti_sn_bridge_set_dsi_dp_rate(struct ti_sn_bridge *pdata) 389 + { 390 + unsigned int bit_rate_mhz, clk_freq_mhz, dp_rate_mhz; 391 + unsigned int val, i; 392 + struct drm_display_mode *mode = 393 + &pdata->bridge.encoder->crtc->state->adjusted_mode; 394 + 395 + /* set DSIA clk frequency */ 396 + bit_rate_mhz = (mode->clock / 1000) * 397 + mipi_dsi_pixel_format_to_bpp(pdata->dsi->format); 398 + clk_freq_mhz = bit_rate_mhz / (pdata->dsi->lanes * 2); 399 + 400 + /* for each increment in val, frequency increases by 5MHz */ 401 + val = (MIN_DSI_CLK_FREQ_MHZ / 5) + 402 + (((clk_freq_mhz - MIN_DSI_CLK_FREQ_MHZ) / 5) & 0xFF); 403 + regmap_write(pdata->regmap, SN_DSIA_CLK_FREQ_REG, val); 404 + 405 + /* set DP data rate */ 406 + dp_rate_mhz = ((bit_rate_mhz / pdata->dsi->lanes) * DP_CLK_FUDGE_NUM) / 407 + DP_CLK_FUDGE_DEN; 408 + for (i = 0; i < ARRAY_SIZE(ti_sn_bridge_dp_rate_lut) - 1; i++) 409 + if (ti_sn_bridge_dp_rate_lut[i] > dp_rate_mhz) 410 + break; 411 + 412 + regmap_update_bits(pdata->regmap, SN_DATARATE_CONFIG_REG, 413 + DP_DATARATE_MASK, DP_DATARATE(i)); 414 + } 415 + 416 + static void ti_sn_bridge_set_video_timings(struct ti_sn_bridge *pdata) 417 + { 418 + struct drm_display_mode *mode = 419 + &pdata->bridge.encoder->crtc->state->adjusted_mode; 420 + u8 hsync_polarity = 0, vsync_polarity = 0; 421 + 422 + if (mode->flags & DRM_MODE_FLAG_PHSYNC) 423 + hsync_polarity = CHA_HSYNC_POLARITY; 424 + if (mode->flags & DRM_MODE_FLAG_PVSYNC) 425 + vsync_polarity = CHA_VSYNC_POLARITY; 426 + 427 + ti_sn_bridge_write_u16(pdata, SN_CHA_ACTIVE_LINE_LENGTH_LOW_REG, 428 + mode->hdisplay); 429 + ti_sn_bridge_write_u16(pdata, SN_CHA_VERTICAL_DISPLAY_SIZE_LOW_REG, 430 + mode->vdisplay); 431 + regmap_write(pdata->regmap, SN_CHA_HSYNC_PULSE_WIDTH_LOW_REG, 432 + (mode->hsync_end - mode->hsync_start) & 0xFF); 433 + 
regmap_write(pdata->regmap, SN_CHA_HSYNC_PULSE_WIDTH_HIGH_REG, 434 + (((mode->hsync_end - mode->hsync_start) >> 8) & 0x7F) | 435 + hsync_polarity); 436 + regmap_write(pdata->regmap, SN_CHA_VSYNC_PULSE_WIDTH_LOW_REG, 437 + (mode->vsync_end - mode->vsync_start) & 0xFF); 438 + regmap_write(pdata->regmap, SN_CHA_VSYNC_PULSE_WIDTH_HIGH_REG, 439 + (((mode->vsync_end - mode->vsync_start) >> 8) & 0x7F) | 440 + vsync_polarity); 441 + 442 + regmap_write(pdata->regmap, SN_CHA_HORIZONTAL_BACK_PORCH_REG, 443 + (mode->htotal - mode->hsync_end) & 0xFF); 444 + regmap_write(pdata->regmap, SN_CHA_VERTICAL_BACK_PORCH_REG, 445 + (mode->vtotal - mode->vsync_end) & 0xFF); 446 + 447 + regmap_write(pdata->regmap, SN_CHA_HORIZONTAL_FRONT_PORCH_REG, 448 + (mode->hsync_start - mode->hdisplay) & 0xFF); 449 + regmap_write(pdata->regmap, SN_CHA_VERTICAL_FRONT_PORCH_REG, 450 + (mode->vsync_start - mode->vdisplay) & 0xFF); 451 + 452 + usleep_range(10000, 10500); /* 10ms delay recommended by spec */ 453 + } 454 + 455 + static void ti_sn_bridge_enable(struct drm_bridge *bridge) 456 + { 457 + struct ti_sn_bridge *pdata = bridge_to_ti_sn_bridge(bridge); 458 + unsigned int val; 459 + int ret; 460 + 461 + /* 462 + * FIXME: 463 + * This 70ms was found necessary by experimentation. If it's not 464 + * present, link training fails. It seems like it can go anywhere from 465 + * pre_enable() up to semi-auto link training initiation below. 466 + * 467 + * Neither the datasheet for the bridge nor the panel tested mention a 468 + * delay of this magnitude in the timing requirements. So for now, add 469 + * the mystery delay until someone figures out a better fix. 
470 + */ 471 + msleep(70); 472 + 473 + /* DSI_A lane config */ 474 + val = CHA_DSI_LANES(4 - pdata->dsi->lanes); 475 + regmap_update_bits(pdata->regmap, SN_DSI_LANES_REG, 476 + CHA_DSI_LANES_MASK, val); 477 + 478 + /* DP lane config */ 479 + val = DP_NUM_LANES(pdata->dsi->lanes - 1); 480 + regmap_update_bits(pdata->regmap, SN_SSC_CONFIG_REG, DP_NUM_LANES_MASK, 481 + val); 482 + 483 + /* set dsi/dp clk frequency value */ 484 + ti_sn_bridge_set_dsi_dp_rate(pdata); 485 + 486 + /* enable DP PLL */ 487 + regmap_write(pdata->regmap, SN_PLL_ENABLE_REG, 1); 488 + 489 + ret = regmap_read_poll_timeout(pdata->regmap, SN_DPPLL_SRC_REG, val, 490 + val & DPPLL_SRC_DP_PLL_LOCK, 1000, 491 + 50 * 1000); 492 + if (ret) { 493 + DRM_ERROR("DP_PLL_LOCK polling failed (%d)\n", ret); 494 + return; 495 + } 496 + 497 + /** 498 + * The SN65DSI86 only supports ASSR Display Authentication method and 499 + * this method is enabled by default. An eDP panel must support this 500 + * authentication method. We need to enable this method in the eDP panel 501 + * at DisplayPort address 0x0010A prior to link training. 
502 + */ 503 + drm_dp_dpcd_writeb(&pdata->aux, DP_EDP_CONFIGURATION_SET, 504 + DP_ALTERNATE_SCRAMBLER_RESET_ENABLE); 505 + 506 + /* Semi auto link training mode */ 507 + regmap_write(pdata->regmap, SN_ML_TX_MODE_REG, 0x0A); 508 + ret = regmap_read_poll_timeout(pdata->regmap, SN_ML_TX_MODE_REG, val, 509 + val == ML_TX_MAIN_LINK_OFF || 510 + val == ML_TX_NORMAL_MODE, 1000, 511 + 500 * 1000); 512 + if (ret) { 513 + DRM_ERROR("Training complete polling failed (%d)\n", ret); 514 + return; 515 + } else if (val == ML_TX_MAIN_LINK_OFF) { 516 + DRM_ERROR("Link training failed, link is off\n"); 517 + return; 518 + } 519 + 520 + /* config video parameters */ 521 + ti_sn_bridge_set_video_timings(pdata); 522 + 523 + /* enable video stream */ 524 + regmap_update_bits(pdata->regmap, SN_ENH_FRAME_REG, VSTREAM_ENABLE, 525 + VSTREAM_ENABLE); 526 + 527 + drm_panel_enable(pdata->panel); 528 + } 529 + 530 + static void ti_sn_bridge_pre_enable(struct drm_bridge *bridge) 531 + { 532 + struct ti_sn_bridge *pdata = bridge_to_ti_sn_bridge(bridge); 533 + 534 + pm_runtime_get_sync(pdata->dev); 535 + 536 + /* configure bridge ref_clk */ 537 + ti_sn_bridge_set_refclk_freq(pdata); 538 + 539 + /* HPD is not supported when a drm_panel is connected */ 540 + regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE, 541 + HPD_DISABLE); 542 + 543 + drm_panel_prepare(pdata->panel); 544 + } 545 + 546 + static void ti_sn_bridge_post_disable(struct drm_bridge *bridge) 547 + { 548 + struct ti_sn_bridge *pdata = bridge_to_ti_sn_bridge(bridge); 549 + 550 + if (pdata->refclk) 551 + clk_disable_unprepare(pdata->refclk); 552 + 553 + pm_runtime_put_sync(pdata->dev); 554 + } 555 + 556 + static const struct drm_bridge_funcs ti_sn_bridge_funcs = { 557 + .attach = ti_sn_bridge_attach, 558 + .pre_enable = ti_sn_bridge_pre_enable, 559 + .enable = ti_sn_bridge_enable, 560 + .disable = ti_sn_bridge_disable, 561 + .post_disable = ti_sn_bridge_post_disable, 562 + }; 563 + 564 + static struct ti_sn_bridge 
*aux_to_ti_sn_bridge(struct drm_dp_aux *aux) 565 + { 566 + return container_of(aux, struct ti_sn_bridge, aux); 567 + } 568 + 569 + static ssize_t ti_sn_aux_transfer(struct drm_dp_aux *aux, 570 + struct drm_dp_aux_msg *msg) 571 + { 572 + struct ti_sn_bridge *pdata = aux_to_ti_sn_bridge(aux); 573 + u32 request = msg->request & ~DP_AUX_I2C_MOT; 574 + u32 request_val = AUX_CMD_REQ(msg->request); 575 + u8 *buf = (u8 *)msg->buffer; 576 + unsigned int val; 577 + int ret, i; 578 + 579 + if (msg->size > SN_AUX_MAX_PAYLOAD_BYTES) 580 + return -EINVAL; 581 + 582 + switch (request) { 583 + case DP_AUX_NATIVE_WRITE: 584 + case DP_AUX_I2C_WRITE: 585 + case DP_AUX_NATIVE_READ: 586 + case DP_AUX_I2C_READ: 587 + regmap_write(pdata->regmap, SN_AUX_CMD_REG, request_val); 588 + break; 589 + default: 590 + return -EINVAL; 591 + } 592 + 593 + regmap_write(pdata->regmap, SN_AUX_ADDR_19_16_REG, 594 + (msg->address >> 16) & 0xF); 595 + regmap_write(pdata->regmap, SN_AUX_ADDR_15_8_REG, 596 + (msg->address >> 8) & 0xFF); 597 + regmap_write(pdata->regmap, SN_AUX_ADDR_7_0_REG, msg->address & 0xFF); 598 + 599 + regmap_write(pdata->regmap, SN_AUX_LENGTH_REG, msg->size); 600 + 601 + if (request == DP_AUX_NATIVE_WRITE || request == DP_AUX_I2C_WRITE) { 602 + for (i = 0; i < msg->size; i++) 603 + regmap_write(pdata->regmap, SN_AUX_WDATA_REG(i), 604 + buf[i]); 605 + } 606 + 607 + regmap_write(pdata->regmap, SN_AUX_CMD_REG, request_val | AUX_CMD_SEND); 608 + 609 + ret = regmap_read_poll_timeout(pdata->regmap, SN_AUX_CMD_REG, val, 610 + !(val & AUX_CMD_SEND), 200, 611 + 50 * 1000); 612 + if (ret) 613 + return ret; 614 + 615 + ret = regmap_read(pdata->regmap, SN_AUX_CMD_STATUS_REG, &val); 616 + if (ret) 617 + return ret; 618 + else if ((val & AUX_IRQ_STATUS_NAT_I2C_FAIL) 619 + || (val & AUX_IRQ_STATUS_AUX_RPLY_TOUT) 620 + || (val & AUX_IRQ_STATUS_AUX_SHORT)) 621 + return -ENXIO; 622 + 623 + if (request == DP_AUX_NATIVE_WRITE || request == DP_AUX_I2C_WRITE) 624 + return msg->size; 625 + 626 + for (i = 0; 
i < msg->size; i++) { 627 + unsigned int val; 628 + ret = regmap_read(pdata->regmap, SN_AUX_RDATA_REG(i), 629 + &val); 630 + if (ret) 631 + return ret; 632 + 633 + WARN_ON(val & ~0xFF); 634 + buf[i] = (u8)(val & 0xFF); 635 + } 636 + 637 + return msg->size; 638 + } 639 + 640 + static int ti_sn_bridge_parse_dsi_host(struct ti_sn_bridge *pdata) 641 + { 642 + struct device_node *np = pdata->dev->of_node; 643 + 644 + pdata->host_node = of_graph_get_remote_node(np, 0, 0); 645 + 646 + if (!pdata->host_node) { 647 + DRM_ERROR("remote dsi host node not found\n"); 648 + return -ENODEV; 649 + } 650 + 651 + return 0; 652 + } 653 + 654 + static int ti_sn_bridge_probe(struct i2c_client *client, 655 + const struct i2c_device_id *id) 656 + { 657 + struct ti_sn_bridge *pdata; 658 + int ret; 659 + 660 + if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) { 661 + DRM_ERROR("device doesn't support I2C\n"); 662 + return -ENODEV; 663 + } 664 + 665 + pdata = devm_kzalloc(&client->dev, sizeof(struct ti_sn_bridge), 666 + GFP_KERNEL); 667 + if (!pdata) 668 + return -ENOMEM; 669 + 670 + pdata->regmap = devm_regmap_init_i2c(client, 671 + &ti_sn_bridge_regmap_config); 672 + if (IS_ERR(pdata->regmap)) { 673 + DRM_ERROR("regmap i2c init failed\n"); 674 + return PTR_ERR(pdata->regmap); 675 + } 676 + 677 + pdata->dev = &client->dev; 678 + 679 + ret = drm_of_find_panel_or_bridge(pdata->dev->of_node, 1, 0, 680 + &pdata->panel, NULL); 681 + if (ret) { 682 + DRM_ERROR("could not find any panel node\n"); 683 + return ret; 684 + } 685 + 686 + dev_set_drvdata(&client->dev, pdata); 687 + 688 + pdata->enable_gpio = devm_gpiod_get(pdata->dev, "enable", 689 + GPIOD_OUT_LOW); 690 + if (IS_ERR(pdata->enable_gpio)) { 691 + DRM_ERROR("failed to get enable gpio from DT\n"); 692 + ret = PTR_ERR(pdata->enable_gpio); 693 + return ret; 694 + } 695 + 696 + ret = ti_sn_bridge_parse_regulators(pdata); 697 + if (ret) { 698 + DRM_ERROR("failed to parse regulators\n"); 699 + return ret; 700 + } 701 + 702 + 
pdata->refclk = devm_clk_get(pdata->dev, "refclk"); 703 + if (IS_ERR(pdata->refclk)) { 704 + ret = PTR_ERR(pdata->refclk); 705 + if (ret == -EPROBE_DEFER) 706 + return ret; 707 + DRM_DEBUG_KMS("refclk not found\n"); 708 + pdata->refclk = NULL; 709 + } 710 + 711 + ret = ti_sn_bridge_parse_dsi_host(pdata); 712 + if (ret) 713 + return ret; 714 + 715 + pm_runtime_enable(pdata->dev); 716 + 717 + i2c_set_clientdata(client, pdata); 718 + 719 + pdata->aux.name = "ti-sn65dsi86-aux"; 720 + pdata->aux.dev = pdata->dev; 721 + pdata->aux.transfer = ti_sn_aux_transfer; 722 + drm_dp_aux_register(&pdata->aux); 723 + 724 + pdata->bridge.funcs = &ti_sn_bridge_funcs; 725 + pdata->bridge.of_node = client->dev.of_node; 726 + 727 + drm_bridge_add(&pdata->bridge); 728 + 729 + return 0; 730 + } 731 + 732 + static int ti_sn_bridge_remove(struct i2c_client *client) 733 + { 734 + struct ti_sn_bridge *pdata = i2c_get_clientdata(client); 735 + 736 + if (!pdata) 737 + return -EINVAL; 738 + 739 + of_node_put(pdata->host_node); 740 + 741 + pm_runtime_disable(pdata->dev); 742 + 743 + if (pdata->dsi) { 744 + mipi_dsi_detach(pdata->dsi); 745 + mipi_dsi_device_unregister(pdata->dsi); 746 + } 747 + 748 + drm_bridge_remove(&pdata->bridge); 749 + 750 + return 0; 751 + } 752 + 753 + static struct i2c_device_id ti_sn_bridge_id[] = { 754 + { "ti,sn65dsi86", 0}, 755 + {}, 756 + }; 757 + MODULE_DEVICE_TABLE(i2c, ti_sn_bridge_id); 758 + 759 + static const struct of_device_id ti_sn_bridge_match_table[] = { 760 + {.compatible = "ti,sn65dsi86"}, 761 + {}, 762 + }; 763 + MODULE_DEVICE_TABLE(of, ti_sn_bridge_match_table); 764 + 765 + static struct i2c_driver ti_sn_bridge_driver = { 766 + .driver = { 767 + .name = "ti_sn65dsi86", 768 + .of_match_table = ti_sn_bridge_match_table, 769 + .pm = &ti_sn_bridge_pm_ops, 770 + }, 771 + .probe = ti_sn_bridge_probe, 772 + .remove = ti_sn_bridge_remove, 773 + .id_table = ti_sn_bridge_id, 774 + }; 775 + module_i2c_driver(ti_sn_bridge_driver); 776 + 777 + MODULE_AUTHOR("Sandeep 
Panda <spanda@codeaurora.org>"); 778 + MODULE_DESCRIPTION("sn65dsi86 DSI to eDP bridge driver"); 779 + MODULE_LICENSE("GPL v2");
+3 -24
drivers/gpu/drm/cirrus/cirrus_drv.c
··· 16 16 #include "cirrus_drv.h" 17 17 18 18 int cirrus_modeset = -1; 19 - int cirrus_bpp = 24; 19 + int cirrus_bpp = 16; 20 20 21 21 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting"); 22 22 module_param_named(modeset, cirrus_modeset, int, 0400); 23 - MODULE_PARM_DESC(bpp, "Max bits-per-pixel (default:24)"); 23 + MODULE_PARM_DESC(bpp, "Max bits-per-pixel (default:16)"); 24 24 module_param_named(bpp, cirrus_bpp, int, 0400); 25 25 26 26 /* ··· 42 42 }; 43 43 44 44 45 - static int cirrus_kick_out_firmware_fb(struct pci_dev *pdev) 46 - { 47 - struct apertures_struct *ap; 48 - bool primary = false; 49 - 50 - ap = alloc_apertures(1); 51 - if (!ap) 52 - return -ENOMEM; 53 - 54 - ap->ranges[0].base = pci_resource_start(pdev, 0); 55 - ap->ranges[0].size = pci_resource_len(pdev, 0); 56 - 57 - #ifdef CONFIG_X86 58 - primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 59 - #endif 60 - drm_fb_helper_remove_conflicting_framebuffers(ap, "cirrusdrmfb", primary); 61 - kfree(ap); 62 - 63 - return 0; 64 - } 65 - 66 45 static int cirrus_pci_probe(struct pci_dev *pdev, 67 46 const struct pci_device_id *ent) 68 47 { 69 48 int ret; 70 49 71 - ret = cirrus_kick_out_firmware_fb(pdev); 50 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "cirrusdrmfb"); 72 51 if (ret) 73 52 return ret; 74 53
+1 -1
drivers/gpu/drm/cirrus/cirrus_drv.h
··· 146 146 147 147 struct cirrus_fbdev { 148 148 struct drm_fb_helper helper; 149 - struct drm_framebuffer gfb; 149 + struct drm_framebuffer *gfb; 150 150 void *sysram; 151 151 int size; 152 152 int x1, y1, x2, y2; /* dirty rect */
+28 -23
drivers/gpu/drm/cirrus/cirrus_fbdev.c
··· 22 22 struct drm_gem_object *obj; 23 23 struct cirrus_bo *bo; 24 24 int src_offset, dst_offset; 25 - int bpp = afbdev->gfb.format->cpp[0]; 25 + int bpp = afbdev->gfb->format->cpp[0]; 26 26 int ret = -EBUSY; 27 27 bool unmap = false; 28 28 bool store_for_later = false; 29 29 int x2, y2; 30 30 unsigned long flags; 31 31 32 - obj = afbdev->gfb.obj[0]; 32 + obj = afbdev->gfb->obj[0]; 33 33 bo = gem_to_cirrus_bo(obj); 34 34 35 35 /* ··· 82 82 } 83 83 for (i = y; i < y + height; i++) { 84 84 /* assume equal stride for now */ 85 - src_offset = dst_offset = i * afbdev->gfb.pitches[0] + (x * bpp); 85 + src_offset = dst_offset = i * afbdev->gfb->pitches[0] + (x * bpp); 86 86 memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, width * bpp); 87 87 88 88 } ··· 192 192 return -ENOMEM; 193 193 194 194 info = drm_fb_helper_alloc_fbi(helper); 195 - if (IS_ERR(info)) 196 - return PTR_ERR(info); 195 + if (IS_ERR(info)) { 196 + ret = PTR_ERR(info); 197 + goto err_vfree; 198 + } 197 199 198 200 info->par = gfbdev; 199 201 200 - ret = cirrus_framebuffer_init(cdev->dev, &gfbdev->gfb, &mode_cmd, gobj); 202 + fb = kzalloc(sizeof(*fb), GFP_KERNEL); 203 + if (!fb) { 204 + ret = -ENOMEM; 205 + goto err_drm_gem_object_put_unlocked; 206 + } 207 + 208 + ret = cirrus_framebuffer_init(cdev->dev, fb, &mode_cmd, gobj); 201 209 if (ret) 202 - return ret; 210 + goto err_kfree; 203 211 204 212 gfbdev->sysram = sysram; 205 213 gfbdev->size = size; 206 - 207 - fb = &gfbdev->gfb; 208 - if (!fb) { 209 - DRM_INFO("fb is NULL\n"); 210 - return -EINVAL; 211 - } 214 + gfbdev->gfb = fb; 212 215 213 216 /* setup helper */ 214 217 gfbdev->helper.fb = fb; ··· 244 241 DRM_INFO(" pitch is %d\n", fb->pitches[0]); 245 242 246 243 return 0; 244 + 245 + err_kfree: 246 + kfree(fb); 247 + err_drm_gem_object_put_unlocked: 248 + drm_gem_object_put_unlocked(gobj); 249 + err_vfree: 250 + vfree(sysram); 251 + return ret; 247 252 } 248 253 249 254 static int cirrus_fbdev_destroy(struct drm_device *dev, 
250 255 struct cirrus_fbdev *gfbdev) 251 256 { 252 - struct drm_framebuffer *gfb = &gfbdev->gfb; 257 + struct drm_framebuffer *gfb = gfbdev->gfb; 253 258 254 259 drm_fb_helper_unregister_fbi(&gfbdev->helper); 255 260 256 - if (gfb->obj[0]) { 257 - drm_gem_object_put_unlocked(gfb->obj[0]); 258 - gfb->obj[0] = NULL; 259 - } 260 - 261 261 vfree(gfbdev->sysram); 262 262 drm_fb_helper_fini(&gfbdev->helper); 263 - drm_framebuffer_unregister_private(gfb); 264 - drm_framebuffer_cleanup(gfb); 263 + if (gfb) 264 + drm_framebuffer_put(gfb); 265 265 266 266 return 0; 267 267 } ··· 277 271 { 278 272 struct cirrus_fbdev *gfbdev; 279 273 int ret; 280 - int bpp_sel = 24; 281 274 282 275 /*bpp_sel = 8;*/ 283 276 gfbdev = kzalloc(sizeof(struct cirrus_fbdev), GFP_KERNEL); ··· 301 296 /* disable all the possible outputs/crtcs before entering KMS mode */ 302 297 drm_helper_disable_unused_functions(cdev->dev); 303 298 304 - return drm_fb_helper_initial_config(&gfbdev->helper, bpp_sel); 299 + return drm_fb_helper_initial_config(&gfbdev->helper, cirrus_bpp); 305 300 } 306 301 307 302 void cirrus_fbdev_fini(struct cirrus_device *cdev)
+1 -1
drivers/gpu/drm/cirrus/cirrus_main.c
··· 269 269 return; 270 270 271 271 tbo = &((*bo)->bo); 272 - ttm_bo_unref(&tbo); 272 + ttm_bo_put(tbo); 273 273 *bo = NULL; 274 274 } 275 275
+2 -2
drivers/gpu/drm/cirrus/cirrus_mode.c
··· 127 127 return ret; 128 128 } 129 129 130 - if (&cdev->mode_info.gfbdev->gfb == crtc->primary->fb) { 130 + if (cdev->mode_info.gfbdev->gfb == crtc->primary->fb) { 131 131 /* if pushing console in kmap it */ 132 132 ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap); 133 133 if (ret) ··· 512 512 cdev->dev->mode_config.max_height = CIRRUS_MAX_FB_HEIGHT; 513 513 514 514 cdev->dev->mode_config.fb_base = cdev->mc.vram_base; 515 - cdev->dev->mode_config.preferred_depth = 24; 515 + cdev->dev->mode_config.preferred_depth = cirrus_bpp; 516 516 /* don't prefer a shadow on virt GPU */ 517 517 cdev->dev->mode_config.prefer_shadow = 0; 518 518
+4
drivers/gpu/drm/drm_atomic.c
··· 895 895 state->src_h = val; 896 896 } else if (property == plane->alpha_property) { 897 897 state->alpha = val; 898 + } else if (property == plane->blend_mode_property) { 899 + state->pixel_blend_mode = val; 898 900 } else if (property == plane->rotation_property) { 899 901 if (!is_power_of_2(val & DRM_MODE_ROTATE_MASK)) { 900 902 DRM_DEBUG_ATOMIC("[PLANE:%d:%s] bad rotation bitmask: 0x%llx\n", ··· 970 968 *val = state->src_h; 971 969 } else if (property == plane->alpha_property) { 972 970 *val = state->alpha; 971 + } else if (property == plane->blend_mode_property) { 972 + *val = state->pixel_blend_mode; 973 973 } else if (property == plane->rotation_property) { 974 974 *val = state->rotation; 975 975 } else if (property == plane->zpos_property) {
+25 -9
drivers/gpu/drm/drm_atomic_helper.c
··· 3555 3555 EXPORT_SYMBOL(drm_atomic_helper_crtc_destroy_state); 3556 3556 3557 3557 /** 3558 + * __drm_atomic_helper_plane_reset - resets planes state to default values 3559 + * @plane: plane object, must not be NULL 3560 + * @state: atomic plane state, must not be NULL 3561 + * 3562 + * Initializes plane state to default. This is useful for drivers that subclass 3563 + * the plane state. 3564 + */ 3565 + void __drm_atomic_helper_plane_reset(struct drm_plane *plane, 3566 + struct drm_plane_state *state) 3567 + { 3568 + state->plane = plane; 3569 + state->rotation = DRM_MODE_ROTATE_0; 3570 + 3571 + /* Reset the alpha value to fully opaque if it matters */ 3572 + if (plane->alpha_property) 3573 + state->alpha = plane->alpha_property->values[1]; 3574 + state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI; 3575 + 3576 + plane->state = state; 3577 + } 3578 + EXPORT_SYMBOL(__drm_atomic_helper_plane_reset); 3579 + 3580 + /** 3558 3581 * drm_atomic_helper_plane_reset - default &drm_plane_funcs.reset hook for planes 3559 3582 * @plane: drm plane 3560 3583 * ··· 3591 3568 3592 3569 kfree(plane->state); 3593 3570 plane->state = kzalloc(sizeof(*plane->state), GFP_KERNEL); 3594 - 3595 - if (plane->state) { 3596 - plane->state->plane = plane; 3597 - plane->state->rotation = DRM_MODE_ROTATE_0; 3598 - 3599 - /* Reset the alpha value to fully opaque if it matters */ 3600 - if (plane->alpha_property) 3601 - plane->state->alpha = plane->alpha_property->values[1]; 3602 - } 3571 + if (plane->state) 3572 + __drm_atomic_helper_plane_reset(plane, plane->state); 3603 3573 } 3604 3574 EXPORT_SYMBOL(drm_atomic_helper_plane_reset); 3605 3575
+123
drivers/gpu/drm/drm_blend.c
··· 107 107 * planes. Without this property the primary plane is always below the cursor 108 108 * plane, and ordering between all other planes is undefined. 109 109 * 110 + * pixel blend mode: 111 + * Pixel blend mode is set up with drm_plane_create_blend_mode_property(). 112 + * It adds a blend mode for alpha blending equation selection, describing 113 + * how the pixels from the current plane are composited with the 114 + * background. 115 + * 116 + * Three alpha blending equations are defined: 117 + * 118 + * "None": 119 + * Blend formula that ignores the pixel alpha:: 120 + * 121 + * out.rgb = plane_alpha * fg.rgb + 122 + * (1 - plane_alpha) * bg.rgb 123 + * 124 + * "Pre-multiplied": 125 + * Blend formula that assumes the pixel color values 126 + * have been already pre-multiplied with the alpha 127 + * channel values:: 128 + * 129 + * out.rgb = plane_alpha * fg.rgb + 130 + * (1 - (plane_alpha * fg.alpha)) * bg.rgb 131 + * 132 + * "Coverage": 133 + * Blend formula that assumes the pixel color values have not 134 + * been pre-multiplied and will do so when blending them to the 135 + * background color values:: 136 + * 137 + * out.rgb = plane_alpha * fg.alpha * fg.rgb + 138 + * (1 - (plane_alpha * fg.alpha)) * bg.rgb 139 + * 140 + * Using the following symbols: 141 + * 142 + * "fg.rgb": 143 + * Each of the RGB component values from the plane's pixel 144 + * "fg.alpha": 145 + * Alpha component value from the plane's pixel. If the plane's 146 + * pixel format has no alpha component, then this is assumed to be 147 + * 1.0. In these cases, this property has no effect, as all three 148 + * equations become equivalent. 149 + * "bg.rgb": 150 + * Each of the RGB component values from the background 151 + * "plane_alpha": 152 + * Plane alpha value set by the plane "alpha" property. 
If the 153 + * plane does not expose the "alpha" property, then this is 154 + * assumed to be 1.0 155 + * 110 156 * Note that all the property extensions described here apply either to the 111 157 * plane or the CRTC (e.g. for the background color, which currently is not 112 158 * exposed and assumed to be black). ··· 494 448 return 0; 495 449 } 496 450 EXPORT_SYMBOL(drm_atomic_normalize_zpos); 451 + 452 + /** 453 + * drm_plane_create_blend_mode_property - create a new blend mode property 454 + * @plane: drm plane 455 + * @supported_modes: bitmask of supported modes, must include 456 + * BIT(DRM_MODE_BLEND_PREMULTI). Current DRM assumption is 457 + * that alpha is premultiplied, and old userspace can break if 458 + * the property defaults to anything else. 459 + * 460 + * This creates a new property describing the blend mode. 461 + * 462 + * The property exposed to userspace is an enumeration property (see 463 + * drm_property_create_enum()) called "pixel blend mode" and has the 464 + * following enumeration values: 465 + * 466 + * "None": 467 + * Blend formula that ignores the pixel alpha. 468 + * 469 + * "Pre-multiplied": 470 + * Blend formula that assumes the pixel color values have been already 471 + * pre-multiplied with the alpha channel values. 472 + * 473 + * "Coverage": 474 + * Blend formula that assumes the pixel color values have not been 475 + * pre-multiplied and will do so when blending them to the background color 476 + * values. 
477 + * 478 + * RETURNS: 479 + * Zero for success or -errno 480 + */ 481 + int drm_plane_create_blend_mode_property(struct drm_plane *plane, 482 + unsigned int supported_modes) 483 + { 484 + struct drm_device *dev = plane->dev; 485 + struct drm_property *prop; 486 + static const struct drm_prop_enum_list props[] = { 487 + { DRM_MODE_BLEND_PIXEL_NONE, "None" }, 488 + { DRM_MODE_BLEND_PREMULTI, "Pre-multiplied" }, 489 + { DRM_MODE_BLEND_COVERAGE, "Coverage" }, 490 + }; 491 + unsigned int valid_mode_mask = BIT(DRM_MODE_BLEND_PIXEL_NONE) | 492 + BIT(DRM_MODE_BLEND_PREMULTI) | 493 + BIT(DRM_MODE_BLEND_COVERAGE); 494 + int i; 495 + 496 + if (WARN_ON((supported_modes & ~valid_mode_mask) || 497 + ((supported_modes & BIT(DRM_MODE_BLEND_PREMULTI)) == 0))) 498 + return -EINVAL; 499 + 500 + prop = drm_property_create(dev, DRM_MODE_PROP_ENUM, 501 + "pixel blend mode", 502 + hweight32(supported_modes)); 503 + if (!prop) 504 + return -ENOMEM; 505 + 506 + for (i = 0; i < ARRAY_SIZE(props); i++) { 507 + int ret; 508 + 509 + if (!(BIT(props[i].type) & supported_modes)) 510 + continue; 511 + 512 + ret = drm_property_add_enum(prop, props[i].type, 513 + props[i].name); 514 + 515 + if (ret) { 516 + drm_property_destroy(dev, prop); 517 + 518 + return ret; 519 + } 520 + } 521 + 522 + drm_object_attach_property(&plane->base, prop, DRM_MODE_BLEND_PREMULTI); 523 + plane->blend_mode_property = prop; 524 + 525 + return 0; 526 + } 527 + EXPORT_SYMBOL(drm_plane_create_blend_mode_property);
+57 -47
drivers/gpu/drm/drm_debugfs_crc.c
··· 68 68 { 69 69 struct drm_crtc *crtc = m->private; 70 70 71 - seq_printf(m, "%s\n", crtc->crc.source); 71 + if (crtc->funcs->get_crc_sources) { 72 + size_t count; 73 + const char *const *sources = crtc->funcs->get_crc_sources(crtc, 74 + &count); 75 + size_t values_cnt; 76 + int i; 72 77 78 + if (count == 0 || !sources) 79 + goto out; 80 + 81 + for (i = 0; i < count; i++) 82 + if (!crtc->funcs->verify_crc_source(crtc, sources[i], 83 + &values_cnt)) { 84 + if (strcmp(sources[i], crtc->crc.source)) 85 + seq_printf(m, "%s\n", sources[i]); 86 + else 87 + seq_printf(m, "%s*\n", sources[i]); 88 + } 89 + } 90 + return 0; 91 + 92 + out: 93 + seq_printf(m, "%s*\n", crtc->crc.source); 73 94 return 0; 74 95 } 75 96 ··· 108 87 struct drm_crtc *crtc = m->private; 109 88 struct drm_crtc_crc *crc = &crtc->crc; 110 89 char *source; 90 + size_t values_cnt; 91 + int ret; 111 92 112 93 if (len == 0) 113 94 return 0; ··· 126 103 127 104 if (source[len] == '\n') 128 105 source[len] = '\0'; 106 + 107 + ret = crtc->funcs->verify_crc_source(crtc, source, &values_cnt); 108 + if (ret) 109 + return ret; 129 110 130 111 spin_lock_irq(&crc->lock); 131 112 ··· 195 168 return ret; 196 169 } 197 170 198 - spin_lock_irq(&crc->lock); 199 - if (!crc->opened) 200 - crc->opened = true; 201 - else 202 - ret = -EBUSY; 203 - spin_unlock_irq(&crc->lock); 204 - 171 + ret = crtc->funcs->verify_crc_source(crtc, crc->source, &values_cnt); 205 172 if (ret) 206 173 return ret; 207 174 208 - ret = crtc->funcs->set_crc_source(crtc, crc->source, &values_cnt); 175 + if (WARN_ON(values_cnt > DRM_MAX_CRC_NR)) 176 + return -EINVAL; 177 + 178 + if (WARN_ON(values_cnt == 0)) 179 + return -EINVAL; 180 + 181 + entries = kcalloc(DRM_CRC_ENTRIES_NR, sizeof(*entries), GFP_KERNEL); 182 + if (!entries) 183 + return -ENOMEM; 184 + 185 + spin_lock_irq(&crc->lock); 186 + if (!crc->opened) { 187 + crc->opened = true; 188 + crc->entries = entries; 189 + crc->values_cnt = values_cnt; 190 + } else { 191 + ret = -EBUSY; 192 + } 193 
+ spin_unlock_irq(&crc->lock); 194 + 195 + if (ret) { 196 + kfree(entries); 197 + return ret; 198 + } 199 + 200 + ret = crtc->funcs->set_crc_source(crtc, crc->source); 209 201 if (ret) 210 202 goto err; 211 203 212 - if (WARN_ON(values_cnt > DRM_MAX_CRC_NR)) { 213 - ret = -EINVAL; 214 - goto err_disable; 215 - } 216 - 217 - if (WARN_ON(values_cnt == 0)) { 218 - ret = -EINVAL; 219 - goto err_disable; 220 - } 221 - 222 - entries = kcalloc(DRM_CRC_ENTRIES_NR, sizeof(*entries), GFP_KERNEL); 223 - if (!entries) { 224 - ret = -ENOMEM; 225 - goto err_disable; 226 - } 227 - 228 - spin_lock_irq(&crc->lock); 229 - crc->entries = entries; 230 - crc->values_cnt = values_cnt; 231 - 232 - /* 233 - * Only return once we got a first frame, so userspace doesn't have to 234 - * guess when this particular piece of HW will be ready to start 235 - * generating CRCs. 236 - */ 237 - ret = wait_event_interruptible_lock_irq(crc->wq, 238 - crtc_crc_data_count(crc), 239 - crc->lock); 240 - spin_unlock_irq(&crc->lock); 241 - 242 - if (ret) 243 - goto err_disable; 244 - 245 204 return 0; 246 205 247 - err_disable: 248 - crtc->funcs->set_crc_source(crtc, NULL, &values_cnt); 249 206 err: 250 207 spin_lock_irq(&crc->lock); 251 208 crtc_crc_cleanup(crc); ··· 241 230 { 242 231 struct drm_crtc *crtc = filep->f_inode->i_private; 243 232 struct drm_crtc_crc *crc = &crtc->crc; 244 - size_t values_cnt; 245 233 246 - crtc->funcs->set_crc_source(crtc, NULL, &values_cnt); 234 + crtc->funcs->set_crc_source(crtc, NULL); 247 235 248 236 spin_lock_irq(&crc->lock); 249 237 crtc_crc_cleanup(crc); ··· 348 338 { 349 339 struct dentry *crc_ent, *ent; 350 340 351 - if (!crtc->funcs->set_crc_source) 341 + if (!crtc->funcs->set_crc_source || !crtc->funcs->verify_crc_source) 352 342 return 0; 353 343 354 344 crc_ent = debugfs_create_dir("crc", crtc->debugfs_entry);
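The reworked debugfs control file now lists every source the CRTC reports via the new `get_crc_sources` hook and marks the active one with a trailing `*`. A simplified userspace model of that listing logic (function and parameter names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Print each reported CRC source on its own line, suffixing the currently
 * active source with '*', mirroring the show() path after this patch. */
static int format_crc_sources(const char *const *sources, size_t count,
                              const char *active, char *buf, size_t buflen)
{
    size_t off = 0;

    for (size_t i = 0; i < count; i++) {
        int n = snprintf(buf + off, buflen - off, "%s%s\n", sources[i],
                         strcmp(sources[i], active) == 0 ? "*" : "");
        if (n < 0 || (size_t)n >= buflen - off)
            return -1; /* output truncated */
        off += (size_t)n;
    }
    return 0;
}
```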
+17 -1
drivers/gpu/drm/drm_dp_cec.c
··· 16 16 * here. Quite a few active (mini-)DP-to-HDMI or USB-C-to-HDMI adapters 17 17 * have a converter chip that supports CEC-Tunneling-over-AUX (usually the 18 18 * Parade PS176), but they do not wire up the CEC pin, thus making CEC 19 - * useless. 19 + * useless. Note that MegaChips 2900-based adapters appear to have good 20 + * support for CEC tunneling. Those adapters that I have tested using 21 + * this chipset all have the CEC line connected. 20 22 * 21 23 * Sadly there is no way for this driver to know this. What happens is 22 24 * that a /dev/cecX device is created that is isolated and unable to see ··· 240 238 u8 cec_irq; 241 239 int ret; 242 240 241 + /* No transfer function was set, so not a DP connector */ 242 + if (!aux->transfer) 243 + return; 244 + 243 245 mutex_lock(&aux->cec.lock); 244 246 if (!aux->cec.adap) 245 247 goto unlock; ··· 298 292 u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD; 299 293 unsigned int num_las = 1; 300 294 u8 cap; 295 + 296 + /* No transfer function was set, so not a DP connector */ 297 + if (!aux->transfer) 298 + return; 301 299 302 300 #ifndef CONFIG_MEDIA_CEC_RC 303 301 /* ··· 371 361 */ 372 362 void drm_dp_cec_unset_edid(struct drm_dp_aux *aux) 373 363 { 364 + /* No transfer function was set, so not a DP connector */ 365 + if (!aux->transfer) 366 + return; 367 + 374 368 cancel_delayed_work_sync(&aux->cec.unregister_work); 375 369 376 370 mutex_lock(&aux->cec.lock); ··· 418 404 struct device *parent) 419 405 { 420 406 WARN_ON(aux->cec.adap); 407 + if (WARN_ON(!aux->transfer)) 408 + return; 421 409 aux->cec.name = name; 422 410 aux->cec.parent = parent; 423 411 INIT_DELAYED_WORK(&aux->cec.unregister_work,
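The recurring hunk above is a guard: an AUX channel whose `transfer` callback was never set cannot belong to a (e)DP connector, so every drm_dp_cec entry point now bails out early instead of touching uninitialized CEC state. A toy model of the pattern, with made-up structure and function names:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct drm_dp_aux: only the fields needed to show the guard. */
struct fake_dp_aux {
    int (*transfer)(struct fake_dp_aux *aux);
    int cec_irqs_handled;
};

static void fake_cec_irq(struct fake_dp_aux *aux)
{
    /* No transfer function was set, so not a DP connector: nothing to do. */
    if (!aux->transfer)
        return;
    aux->cec_irqs_handled++;
}
```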
+2 -1
drivers/gpu/drm/drm_dp_helper.c
··· 850 850 return ret; 851 851 852 852 case DP_AUX_I2C_REPLY_NACK: 853 - DRM_DEBUG_KMS("I2C nack (result=%d, size=%zu\n", ret, msg->size); 853 + DRM_DEBUG_KMS("I2C nack (result=%d, size=%zu)\n", 854 + ret, msg->size); 854 855 aux->i2c_nack_count++; 855 856 return -EREMOTEIO; 856 857
+1
drivers/gpu/drm/drm_dp_mst_topology.c
··· 439 439 if (idx > raw->curlen) 440 440 goto fail_len; 441 441 repmsg->u.remote_dpcd_read_ack.num_bytes = raw->msg[idx]; 442 + idx++; 442 443 if (idx > raw->curlen) 443 444 goto fail_len; 444 445
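The one-line MST fix adds a missing `idx++` after consuming the `num_bytes` field: without it, the following length check and payload access operate one byte too early. A standalone re-creation of the corrected parse step (not the kernel's sideband format, just the same cursor discipline):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Parse a length-prefixed payload: consume the length byte, advance the
 * cursor, then bounds-check before copying. Returns payload length or -1. */
static int parse_len_prefixed(const uint8_t *msg, size_t msglen,
                              uint8_t *out, size_t outlen)
{
    size_t idx = 0;

    if (idx >= msglen)
        return -1;
    uint8_t num_bytes = msg[idx];
    idx++;  /* the increment the original code was missing */
    if (idx + num_bytes > msglen || num_bytes > outlen)
        return -1;
    memcpy(out, &msg[idx], num_bytes);
    return num_bytes;
}
```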
+9 -17
drivers/gpu/drm/drm_fb_cma_helper.c
··· 86 86 { 87 87 struct drm_gem_cma_object *obj; 88 88 dma_addr_t paddr; 89 + u8 h_div = 1, v_div = 1; 89 90 90 91 obj = drm_fb_cma_get_gem_obj(fb, plane); 91 92 if (!obj) 92 93 return 0; 93 94 94 95 paddr = obj->paddr + fb->offsets[plane]; 95 - paddr += fb->format->cpp[plane] * (state->src_x >> 16); 96 - paddr += fb->pitches[plane] * (state->src_y >> 16); 96 + 97 + if (plane > 0) { 98 + h_div = fb->format->hsub; 99 + v_div = fb->format->vsub; 100 + } 101 + 102 + paddr += (fb->format->cpp[plane] * (state->src_x >> 16)) / h_div; 103 + paddr += (fb->pitches[plane] * (state->src_y >> 16)) / v_div; 97 104 98 105 return paddr; 99 106 } ··· 224 217 drm_fb_helper_hotplug_event(&fbdev_cma->fb_helper); 225 218 } 226 219 EXPORT_SYMBOL_GPL(drm_fbdev_cma_hotplug_event); 227 - 228 - /** 229 - * drm_fbdev_cma_set_suspend - wrapper around drm_fb_helper_set_suspend 230 - * @fbdev_cma: The drm_fbdev_cma struct, may be NULL 231 - * @state: desired state, zero to resume, non-zero to suspend 232 - * 233 - * Calls drm_fb_helper_set_suspend, which is a wrapper around 234 - * fb_set_suspend implemented by fbdev core. 235 - */ 236 - void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, bool state) 237 - { 238 - if (fbdev_cma) 239 - drm_fb_helper_set_suspend(&fbdev_cma->fb_helper, state); 240 - } 241 - EXPORT_SYMBOL(drm_fbdev_cma_set_suspend); 242 220 243 221 /** 244 222 * drm_fbdev_cma_set_suspend_unlocked - wrapper around
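The fb_cma change fixes plane address computation for subsampled formats: for chroma planes (`plane > 0`) the x/y offsets must be divided by the format's horizontal/vertical subsampling factors. A self-contained model of the fixed math, with cut-down stand-ins for the kernel structures:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal imitation of drm_framebuffer + drm_format_info fields. */
struct fake_fb {
    uint32_t offsets[3];
    uint32_t pitches[3];
    uint8_t cpp[3];     /* bytes per pixel, per plane */
    uint8_t hsub, vsub; /* chroma subsampling factors */
};

static uint64_t plane_paddr(uint64_t base, const struct fake_fb *fb,
                            int plane, uint32_t src_x, uint32_t src_y)
{
    uint8_t h_div = 1, v_div = 1;

    if (plane > 0) {            /* chroma planes are subsampled */
        h_div = fb->hsub;
        v_div = fb->vsub;
    }
    return base + fb->offsets[plane]
         + (uint64_t)fb->cpp[plane] * src_x / h_div
         + (uint64_t)fb->pitches[plane] * src_y / v_div;
}
```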
+2 -2
drivers/gpu/drm/drm_gem_cma_helper.c
··· 436 436 437 437 sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); 438 438 if (!sgt) 439 - return NULL; 439 + return ERR_PTR(-ENOMEM); 440 440 441 441 ret = dma_get_sgtable(obj->dev->dev, sgt, cma_obj->vaddr, 442 442 cma_obj->paddr, obj->size); ··· 447 447 448 448 out: 449 449 kfree(sgt); 450 - return NULL; 450 + return ERR_PTR(ret); 451 451 } 452 452 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table); 453 453
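The gem_cma hunk switches the failure paths from bare `NULL` to the kernel's `ERR_PTR` convention, so callers can tell `-ENOMEM` apart from other errors. A minimal userspace imitation of the encoding (the kernel's real helpers live in include/linux/err.h):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Encode a negative errno in a pointer, as the kernel's ERR_PTR() does:
 * the top 4095 values of the address space are reserved for error codes. */
static inline void *err_ptr(long err)     { return (void *)(intptr_t)err; }
static inline long ptr_err(const void *p) { return (long)(intptr_t)p; }
static inline int is_err(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-4095;
}
```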
+2
drivers/gpu/drm/drm_panel.c
··· 152 152 * 153 153 * Return: A pointer to the panel registered for the specified device tree 154 154 * node or an ERR_PTR() if no panel matching the device tree node can be found. 155 + * 155 156 * Possible error codes returned by this function: 157 + * 156 158 * - EPROBE_DEFER: the panel device has not been probed yet, and the caller 157 159 * should retry later 158 160 * - ENODEV: the device is not available (status != "okay" or "ok")
-15
drivers/gpu/drm/drm_syncobj.c
··· 120 120 return ret; 121 121 } 122 122 123 - /** 124 - * drm_syncobj_add_callback - adds a callback to syncobj::cb_list 125 - * @syncobj: Sync object to which to add the callback 126 - * @cb: Callback to add 127 - * @func: Func to use when initializing the drm_syncobj_cb struct 128 - * 129 - * This adds a callback to be called next time the fence is replaced 130 - */ 131 123 void drm_syncobj_add_callback(struct drm_syncobj *syncobj, 132 124 struct drm_syncobj_cb *cb, 133 125 drm_syncobj_func_t func) ··· 128 136 drm_syncobj_add_callback_locked(syncobj, cb, func); 129 137 spin_unlock(&syncobj->lock); 130 138 } 131 - EXPORT_SYMBOL(drm_syncobj_add_callback); 132 139 133 - /** 134 - * drm_syncobj_add_callback - removes a callback to syncobj::cb_list 135 - * @syncobj: Sync object from which to remove the callback 136 - * @cb: Callback to remove 137 - */ 138 140 void drm_syncobj_remove_callback(struct drm_syncobj *syncobj, 139 141 struct drm_syncobj_cb *cb) 140 142 { ··· 136 150 list_del_init(&cb->node); 137 151 spin_unlock(&syncobj->lock); 138 152 } 139 - EXPORT_SYMBOL(drm_syncobj_remove_callback); 140 153 141 154 /** 142 155 * drm_syncobj_replace_fence - replace fence in a sync object.
+3 -3
drivers/gpu/drm/drm_vblank.c
··· 873 873 * handler by calling drm_crtc_send_vblank_event() and make sure that there's no 874 874 * possible race with the hardware committing the atomic update. 875 875 * 876 - * Caller must hold a vblank reference for the event @e, which will be dropped 877 - * when the next vblank arrives. 876 + * Caller must hold a vblank reference for the event @e acquired by a 877 + * drm_crtc_vblank_get(), which will be dropped when the next vblank arrives. 878 878 */ 879 879 void drm_crtc_arm_vblank_event(struct drm_crtc *crtc, 880 880 struct drm_pending_vblank_event *e) ··· 1541 1541 if (vblwait->request.type & 1542 1542 ~(_DRM_VBLANK_TYPES_MASK | _DRM_VBLANK_FLAGS_MASK | 1543 1543 _DRM_VBLANK_HIGH_CRTC_MASK)) { 1544 - DRM_ERROR("Unsupported type value 0x%x, supported mask 0x%x\n", 1544 + DRM_DEBUG("Unsupported type value 0x%x, supported mask 0x%x\n", 1545 1545 vblwait->request.type, 1546 1546 (_DRM_VBLANK_TYPES_MASK | _DRM_VBLANK_FLAGS_MASK | 1547 1547 _DRM_VBLANK_HIGH_CRTC_MASK));
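The vblank hunk demotes a userspace-triggerable message from `DRM_ERROR` to `DRM_DEBUG`: an invalid `request.type` is an application bug, not a kernel one. The validity test itself is just a mask check; a sketch with made-up mask values standing in for the `_DRM_VBLANK_*` constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for _DRM_VBLANK_TYPES_MASK and friends. */
#define TYPES_MASK     0x0000000fu
#define FLAGS_MASK     0x0ff00000u
#define HIGH_CRTC_MASK 0x0000003eu

/* Accept the request only if no bit falls outside the supported masks. */
static int vblank_type_ok(uint32_t type)
{
    return (type & ~(TYPES_MASK | FLAGS_MASK | HIGH_CRTC_MASK)) == 0;
}
```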
-3
drivers/gpu/drm/drm_vma_manager.c
··· 103 103 */ 104 104 void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr) 105 105 { 106 - /* take the lock to protect against buggy drivers */ 107 - write_lock(&mgr->vm_lock); 108 106 drm_mm_takedown(&mgr->vm_addr_space_mm); 109 - write_unlock(&mgr->vm_lock); 110 107 } 111 108 EXPORT_SYMBOL(drm_vma_offset_manager_destroy); 112 109
-1
drivers/gpu/drm/gma500/psb_drv.h
··· 24 24 #include <linux/mm_types.h> 25 25 26 26 #include <drm/drmP.h> 27 - #include <drm/drm_global.h> 28 27 #include <drm/gma_drm.h> 29 28 #include "psb_reg.h" 30 29 #include "psb_intel_drv.h"
-7
drivers/gpu/drm/i915/i915_gem_clflush.c
··· 45 45 return "clflush"; 46 46 } 47 47 48 - static bool i915_clflush_enable_signaling(struct dma_fence *fence) 49 - { 50 - return true; 51 - } 52 - 53 48 static void i915_clflush_release(struct dma_fence *fence) 54 49 { 55 50 struct clflush *clflush = container_of(fence, typeof(*clflush), dma); ··· 58 63 static const struct dma_fence_ops i915_clflush_ops = { 59 64 .get_driver_name = i915_clflush_get_driver_name, 60 65 .get_timeline_name = i915_clflush_get_timeline_name, 61 - .enable_signaling = i915_clflush_enable_signaling, 62 - .wait = dma_fence_default_wait, 63 66 .release = i915_clflush_release, 64 67 }; 65 68
+2
drivers/gpu/drm/i915/intel_display.c
··· 12896 12896 .atomic_duplicate_state = intel_crtc_duplicate_state, 12897 12897 .atomic_destroy_state = intel_crtc_destroy_state, 12898 12898 .set_crc_source = intel_crtc_set_crc_source, 12899 + .verify_crc_source = intel_crtc_verify_crc_source, 12900 + .get_crc_sources = intel_crtc_get_crc_sources, 12899 12901 }; 12900 12902 12901 12903 struct wait_rps_boost {
+7 -2
drivers/gpu/drm/i915/intel_drv.h
··· 2172 2172 2173 2173 /* intel_pipe_crc.c */ 2174 2174 #ifdef CONFIG_DEBUG_FS 2175 - int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name, 2176 - size_t *values_cnt); 2175 + int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name); 2176 + int intel_crtc_verify_crc_source(struct drm_crtc *crtc, 2177 + const char *source_name, size_t *values_cnt); 2178 + const char *const *intel_crtc_get_crc_sources(struct drm_crtc *crtc, 2179 + size_t *count); 2177 2180 void intel_crtc_disable_pipe_crc(struct intel_crtc *crtc); 2178 2181 void intel_crtc_enable_pipe_crc(struct intel_crtc *crtc); 2179 2182 #else 2180 2183 #define intel_crtc_set_crc_source NULL 2184 + #define intel_crtc_verify_crc_source NULL 2185 + #define intel_crtc_get_crc_sources NULL 2181 2186 static inline void intel_crtc_disable_pipe_crc(struct intel_crtc *crtc) 2182 2187 { 2183 2188 }
+116 -3
drivers/gpu/drm/i915/intel_pipe_crc.c
··· 468 468 } 469 469 } 470 470 471 - int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name, 472 - size_t *values_cnt) 471 + static int i8xx_crc_source_valid(struct drm_i915_private *dev_priv, 472 + const enum intel_pipe_crc_source source) 473 + { 474 + switch (source) { 475 + case INTEL_PIPE_CRC_SOURCE_PIPE: 476 + case INTEL_PIPE_CRC_SOURCE_NONE: 477 + return 0; 478 + default: 479 + return -EINVAL; 480 + } 481 + } 482 + 483 + static int i9xx_crc_source_valid(struct drm_i915_private *dev_priv, 484 + const enum intel_pipe_crc_source source) 485 + { 486 + switch (source) { 487 + case INTEL_PIPE_CRC_SOURCE_PIPE: 488 + case INTEL_PIPE_CRC_SOURCE_TV: 489 + case INTEL_PIPE_CRC_SOURCE_DP_B: 490 + case INTEL_PIPE_CRC_SOURCE_DP_C: 491 + case INTEL_PIPE_CRC_SOURCE_DP_D: 492 + case INTEL_PIPE_CRC_SOURCE_NONE: 493 + return 0; 494 + default: 495 + return -EINVAL; 496 + } 497 + } 498 + 499 + static int vlv_crc_source_valid(struct drm_i915_private *dev_priv, 500 + const enum intel_pipe_crc_source source) 501 + { 502 + switch (source) { 503 + case INTEL_PIPE_CRC_SOURCE_PIPE: 504 + case INTEL_PIPE_CRC_SOURCE_DP_B: 505 + case INTEL_PIPE_CRC_SOURCE_DP_C: 506 + case INTEL_PIPE_CRC_SOURCE_DP_D: 507 + case INTEL_PIPE_CRC_SOURCE_NONE: 508 + return 0; 509 + default: 510 + return -EINVAL; 511 + } 512 + } 513 + 514 + static int ilk_crc_source_valid(struct drm_i915_private *dev_priv, 515 + const enum intel_pipe_crc_source source) 516 + { 517 + switch (source) { 518 + case INTEL_PIPE_CRC_SOURCE_PIPE: 519 + case INTEL_PIPE_CRC_SOURCE_PLANE1: 520 + case INTEL_PIPE_CRC_SOURCE_PLANE2: 521 + case INTEL_PIPE_CRC_SOURCE_NONE: 522 + return 0; 523 + default: 524 + return -EINVAL; 525 + } 526 + } 527 + 528 + static int ivb_crc_source_valid(struct drm_i915_private *dev_priv, 529 + const enum intel_pipe_crc_source source) 530 + { 531 + switch (source) { 532 + case INTEL_PIPE_CRC_SOURCE_PIPE: 533 + case INTEL_PIPE_CRC_SOURCE_PLANE1: 534 + case INTEL_PIPE_CRC_SOURCE_PLANE2: 535 + 
case INTEL_PIPE_CRC_SOURCE_PF: 536 + case INTEL_PIPE_CRC_SOURCE_NONE: 537 + return 0; 538 + default: 539 + return -EINVAL; 540 + } 541 + } 542 + 543 + static int 544 + intel_is_valid_crc_source(struct drm_i915_private *dev_priv, 545 + const enum intel_pipe_crc_source source) 546 + { 547 + if (IS_GEN2(dev_priv)) 548 + return i8xx_crc_source_valid(dev_priv, source); 549 + else if (INTEL_GEN(dev_priv) < 5) 550 + return i9xx_crc_source_valid(dev_priv, source); 551 + else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) 552 + return vlv_crc_source_valid(dev_priv, source); 553 + else if (IS_GEN5(dev_priv) || IS_GEN6(dev_priv)) 554 + return ilk_crc_source_valid(dev_priv, source); 555 + else 556 + return ivb_crc_source_valid(dev_priv, source); 557 + } 558 + 559 + const char *const *intel_crtc_get_crc_sources(struct drm_crtc *crtc, 560 + size_t *count) 561 + { 562 + *count = ARRAY_SIZE(pipe_crc_sources); 563 + return pipe_crc_sources; 564 + } 565 + 566 + int intel_crtc_verify_crc_source(struct drm_crtc *crtc, const char *source_name, 567 + size_t *values_cnt) 568 + { 569 + struct drm_i915_private *dev_priv = to_i915(crtc->dev); 570 + enum intel_pipe_crc_source source; 571 + 572 + if (display_crc_ctl_parse_source(source_name, &source) < 0) { 573 + DRM_DEBUG_DRIVER("unknown source %s\n", source_name); 574 + return -EINVAL; 575 + } 576 + 577 + if (source == INTEL_PIPE_CRC_SOURCE_AUTO || 578 + intel_is_valid_crc_source(dev_priv, source) == 0) { 579 + *values_cnt = 5; 580 + return 0; 581 + } 582 + 583 + return -EINVAL; 584 + } 585 + 586 + int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name) 473 587 { 474 588 struct drm_i915_private *dev_priv = to_i915(crtc->dev); 475 589 struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[crtc->index]; ··· 622 508 } 623 509 624 510 pipe_crc->skipped = 0; 625 - *values_cnt = 5; 626 511 627 512 out: 628 513 intel_display_power_put(dev_priv, power_domain);
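The i915 additions give each hardware generation its own switch-based whitelist of CRC sources, with one dispatcher picking the right validator. A compressed model of that dispatch shape (enum values and the generation check are illustrative stand-ins for the i915 ones, showing only two of the five validators):

```c
#include <assert.h>

enum crc_source { SRC_NONE, SRC_PIPE, SRC_PLANE1, SRC_PLANE2, SRC_PF };

/* Gen2 hardware: only whole-pipe CRC (or none). */
static int gen2_source_valid(enum crc_source s)
{
    switch (s) {
    case SRC_PIPE:
    case SRC_NONE:
        return 0;
    default:
        return -22; /* -EINVAL */
    }
}

/* IVB and later: plane and panel-fitter sources are also valid. */
static int ivb_source_valid(enum crc_source s)
{
    switch (s) {
    case SRC_PIPE:
    case SRC_PLANE1:
    case SRC_PLANE2:
    case SRC_PF:
    case SRC_NONE:
        return 0;
    default:
        return -22;
    }
}

/* Dispatcher: route to the per-generation whitelist. */
static int is_valid_crc_source(int gen, enum crc_source s)
{
    if (gen == 2)
        return gen2_source_valid(s);
    return ivb_source_valid(s);
}
```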
-8
drivers/gpu/drm/i915/selftests/i915_sw_fence.c
··· 611 611 return "mock"; 612 612 } 613 613 614 - static bool mock_enable_signaling(struct dma_fence *fence) 615 - { 616 - return true; 617 - } 618 - 619 614 static const struct dma_fence_ops mock_fence_ops = { 620 615 .get_driver_name = mock_name, 621 616 .get_timeline_name = mock_name, 622 - .enable_signaling = mock_enable_signaling, 623 - .wait = dma_fence_default_wait, 624 - .release = dma_fence_free, 625 617 }; 626 618 627 619 static DEFINE_SPINLOCK(mock_fence_lock);
+3 -6
drivers/gpu/drm/imx/ipuv3-plane.c
··· 281 281 ipu_state = to_ipu_plane_state(plane->state); 282 282 __drm_atomic_helper_plane_destroy_state(plane->state); 283 283 kfree(ipu_state); 284 + plane->state = NULL; 284 285 } 285 286 286 287 ipu_state = kzalloc(sizeof(*ipu_state), GFP_KERNEL); 287 288 288 - if (ipu_state) { 289 - ipu_state->base.plane = plane; 290 - ipu_state->base.rotation = DRM_MODE_ROTATE_0; 291 - } 292 - 293 - plane->state = &ipu_state->base; 289 + if (ipu_state) 290 + __drm_atomic_helper_plane_reset(plane, &ipu_state->base); 294 291 } 295 292 296 293 static struct drm_plane_state *
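The ipuv3 hunk replaces open-coded state initialization with the new `__drm_atomic_helper_plane_reset()`. What the helper centralizes is small but easy to get subtly wrong: link the fresh state to its plane, fill in defaults, and publish it. A toy version with cut-down structures:

```c
#include <assert.h>
#include <stddef.h>

struct toy_plane_state;
struct toy_plane { struct toy_plane_state *state; };
struct toy_plane_state { struct toy_plane *plane; unsigned int rotation; };

#define TOY_ROTATE_0 (1u << 0)

/* Link state and plane both ways and set default property values,
 * as __drm_atomic_helper_plane_reset() does for real drm_plane_state. */
static void toy_plane_reset(struct toy_plane *plane,
                            struct toy_plane_state *state)
{
    state->plane = plane;
    state->rotation = TOY_ROTATE_0;
    plane->state = state;
}
```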
+1 -20
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 42 42 43 43 MODULE_DEVICE_TABLE(pci, pciidlist); 44 44 45 - static void mgag200_kick_out_firmware_fb(struct pci_dev *pdev) 46 - { 47 - struct apertures_struct *ap; 48 - bool primary = false; 49 - 50 - ap = alloc_apertures(1); 51 - if (!ap) 52 - return; 53 - 54 - ap->ranges[0].base = pci_resource_start(pdev, 0); 55 - ap->ranges[0].size = pci_resource_len(pdev, 0); 56 - 57 - #ifdef CONFIG_X86 58 - primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 59 - #endif 60 - drm_fb_helper_remove_conflicting_framebuffers(ap, "mgag200drmfb", primary); 61 - kfree(ap); 62 - } 63 - 64 45 65 46 static int mga_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 66 47 { 67 - mgag200_kick_out_firmware_fb(pdev); 48 + drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "mgag200drmfb"); 68 49 69 50 return drm_get_pci_dev(pdev, ent, &driver); 70 51 }
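The mgag200 (and radeon, below) cleanups replace hand-rolled aperture construction with `drm_fb_helper_remove_conflicting_pci_framebuffers()`, which derives the range from the PCI BAR itself. Underneath, the question is still the same interval test the removed boilerplate answered: does a firmware framebuffer overlap the BAR this driver is about to claim? A sketch, with plain base/size pairs standing in for `pci_resource_start()`/`pci_resource_len()`:

```c
#include <assert.h>
#include <stdint.h>

/* Two half-open ranges [base, base + size) conflict iff they overlap. */
static int apertures_conflict(uint64_t fb_base, uint64_t fb_size,
                              uint64_t bar_base, uint64_t bar_size)
{
    return fb_base < bar_base + bar_size && bar_base < fb_base + fb_size;
}
```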
-9
drivers/gpu/drm/mgag200/mgag200_main.c
··· 124 124 static int mga_vram_init(struct mga_device *mdev) 125 125 { 126 126 void __iomem *mem; 127 - struct apertures_struct *aper = alloc_apertures(1); 128 - if (!aper) 129 - return -ENOMEM; 130 127 131 128 /* BAR 0 is VRAM */ 132 129 mdev->mc.vram_base = pci_resource_start(mdev->dev->pdev, 0); 133 130 mdev->mc.vram_window = pci_resource_len(mdev->dev->pdev, 0); 134 - 135 - aper->ranges[0].base = mdev->mc.vram_base; 136 - aper->ranges[0].size = mdev->mc.vram_window; 137 - 138 - drm_fb_helper_remove_conflicting_framebuffers(aper, "mgafb", true); 139 - kfree(aper); 140 131 141 132 if (!devm_request_mem_region(mdev->dev->dev, mdev->mc.vram_base, mdev->mc.vram_window, 142 133 "mgadrmfb_vram")) {
-8
drivers/gpu/drm/msm/msm_fence.c
··· 119 119 return f->fctx->name; 120 120 } 121 121 122 - static bool msm_fence_enable_signaling(struct dma_fence *fence) 123 - { 124 - return true; 125 - } 126 - 127 122 static bool msm_fence_signaled(struct dma_fence *fence) 128 123 { 129 124 struct msm_fence *f = to_msm_fence(fence); ··· 128 133 static const struct dma_fence_ops msm_fence_ops = { 129 134 .get_driver_name = msm_fence_get_driver_name, 130 135 .get_timeline_name = msm_fence_get_timeline_name, 131 - .enable_signaling = msm_fence_enable_signaling, 132 136 .signaled = msm_fence_signaled, 133 - .wait = dma_fence_default_wait, 134 - .release = dma_fence_free, 135 137 }; 136 138 137 139 struct dma_fence *
+15 -2
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 400 400 kfree(nv_connector->edid); 401 401 drm_connector_unregister(connector); 402 402 drm_connector_cleanup(connector); 403 - if (nv_connector->aux.transfer) 403 + if (nv_connector->aux.transfer) { 404 + drm_dp_cec_unregister_connector(&nv_connector->aux); 404 405 drm_dp_aux_unregister(&nv_connector->aux); 406 + } 405 407 kfree(connector); 406 408 } 407 409 ··· 610 608 611 609 nouveau_connector_set_encoder(connector, nv_encoder); 612 610 conn_status = connector_status_connected; 611 + drm_dp_cec_set_edid(&nv_connector->aux, nv_connector->edid); 613 612 goto out; 614 613 } 615 614 ··· 1111 1108 1112 1109 if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) { 1113 1110 NV_DEBUG(drm, "service %s\n", name); 1111 + drm_dp_cec_irq(&nv_connector->aux); 1114 1112 if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP))) 1115 1113 nv50_mstm_service(nv_encoder->dp.mstm); 1116 1114 } else { 1117 1115 bool plugged = (rep->mask != NVIF_NOTIFY_CONN_V0_UNPLUG); 1118 1116 1117 + if (!plugged) 1118 + drm_dp_cec_unset_edid(&nv_connector->aux); 1119 1119 NV_DEBUG(drm, "%splugged %s\n", plugged ? "" : "un", name); 1120 1120 if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP))) { 1121 1121 if (!plugged) ··· 1308 1302 kfree(nv_connector); 1309 1303 return ERR_PTR(ret); 1310 1304 } 1311 - 1312 1305 funcs = &nouveau_connector_funcs; 1313 1306 break; 1314 1307 default: ··· 1358 1353 break; 1359 1354 default: 1360 1355 nv_connector->dithering_mode = DITHERING_MODE_AUTO; 1356 + break; 1357 + } 1358 + 1359 + switch (type) { 1360 + case DRM_MODE_CONNECTOR_DisplayPort: 1361 + case DRM_MODE_CONNECTOR_eDP: 1362 + drm_dp_cec_register_connector(&nv_connector->aux, 1363 + connector->name, dev->dev); 1361 1364 break; 1362 1365 } 1363 1366
-1
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 526 526 .get_timeline_name = nouveau_fence_get_timeline_name, 527 527 .enable_signaling = nouveau_fence_enable_signaling, 528 528 .signaled = nouveau_fence_is_signaled, 529 - .wait = dma_fence_default_wait, 530 529 .release = nouveau_fence_release 531 530 };
+13 -3
drivers/gpu/drm/qxl/qxl_display.c
··· 37 37 return head->width && head->height; 38 38 } 39 39 40 - static void qxl_alloc_client_monitors_config(struct qxl_device *qdev, unsigned count) 40 + static int qxl_alloc_client_monitors_config(struct qxl_device *qdev, 41 + unsigned int count) 41 42 { 42 43 if (qdev->client_monitors_config && 43 44 count > qdev->client_monitors_config->count) { ··· 50 49 sizeof(struct qxl_monitors_config) + 51 50 sizeof(struct qxl_head) * count, GFP_KERNEL); 52 51 if (!qdev->client_monitors_config) 53 - return; 52 + return -ENOMEM; 54 53 } 55 54 qdev->client_monitors_config->count = count; 55 + return 0; 56 56 } 57 57 58 58 enum { 59 59 MONITORS_CONFIG_MODIFIED, 60 60 MONITORS_CONFIG_UNCHANGED, 61 61 MONITORS_CONFIG_BAD_CRC, 62 + MONITORS_CONFIG_ERROR, 62 63 }; 63 64 64 65 static int qxl_display_copy_rom_client_monitors_config(struct qxl_device *qdev) ··· 90 87 && (num_monitors != qdev->client_monitors_config->count)) { 91 88 status = MONITORS_CONFIG_MODIFIED; 92 89 } 93 - qxl_alloc_client_monitors_config(qdev, num_monitors); 90 + if (qxl_alloc_client_monitors_config(qdev, num_monitors)) { 91 + status = MONITORS_CONFIG_ERROR; 92 + return status; 93 + } 94 94 /* we copy max from the client but it isn't used */ 95 95 qdev->client_monitors_config->max_allowed = 96 96 qdev->monitors_config->max_allowed; ··· 166 160 if (status != MONITORS_CONFIG_BAD_CRC) 167 161 break; 168 162 udelay(5); 163 + } 164 + if (status == MONITORS_CONFIG_ERROR) { 165 + DRM_DEBUG_KMS("ignoring client monitors config: error"); 166 + return; 169 167 } 170 168 if (status == MONITORS_CONFIG_BAD_CRC) { 171 169 DRM_DEBUG_KMS("ignoring client monitors config: bad crc");
+6 -22
drivers/gpu/drm/qxl/qxl_drv.c
··· 119 119 120 120 dev->dev_private = NULL; 121 121 kfree(qdev); 122 - drm_dev_unref(dev); 122 + drm_dev_put(dev); 123 123 } 124 124 125 125 static const struct file_operations qxl_fops = { ··· 136 136 { 137 137 struct pci_dev *pdev = dev->pdev; 138 138 struct qxl_device *qdev = dev->dev_private; 139 - struct drm_crtc *crtc; 139 + int ret; 140 140 141 - drm_kms_helper_poll_disable(dev); 142 - 143 - console_lock(); 144 - qxl_fbdev_set_suspend(qdev, 1); 145 - console_unlock(); 146 - 147 - /* unpin the front buffers */ 148 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 149 - const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 150 - if (crtc->enabled) 151 - (*crtc_funcs->disable)(crtc); 152 - } 141 + ret = drm_mode_config_helper_suspend(dev); 142 + if (ret) 143 + return ret; 153 144 154 145 qxl_destroy_monitors_object(qdev); 155 146 qxl_surf_evict(qdev); ··· 166 175 } 167 176 168 177 qxl_create_monitors_object(qdev); 169 - drm_helper_resume_force_mode(dev); 170 - 171 - console_lock(); 172 - qxl_fbdev_set_suspend(qdev, 0); 173 - console_unlock(); 174 - 175 - drm_kms_helper_poll_enable(dev); 176 - return 0; 178 + return drm_mode_config_helper_resume(dev); 177 179 } 178 180 179 181 static int qxl_pm_suspend(struct device *dev)
+1 -1
drivers/gpu/drm/qxl/qxl_gem.c
··· 40 40 qxl_surface_evict(qdev, qobj, false); 41 41 42 42 tbo = &qobj->tbo; 43 - ttm_bo_unref(&tbo); 43 + ttm_bo_put(tbo); 44 44 } 45 45 46 46 int qxl_gem_object_create(struct qxl_device *qdev, int size,
+73 -7
drivers/gpu/drm/qxl/qxl_kms.c
··· 102 102 int r, sb; 103 103 104 104 r = drm_dev_init(&qdev->ddev, drv, &pdev->dev); 105 - if (r) 106 - return r; 105 + if (r) { 106 + pr_err("Unable to init drm dev"); 107 + goto error; 108 + } 107 109 108 110 qdev->ddev.pdev = pdev; 109 111 pci_set_drvdata(pdev, &qdev->ddev); ··· 123 121 qdev->io_base = pci_resource_start(pdev, 3); 124 122 125 123 qdev->vram_mapping = io_mapping_create_wc(qdev->vram_base, pci_resource_len(pdev, 0)); 124 + if (!qdev->vram_mapping) { 125 + pr_err("Unable to create vram_mapping"); 126 + r = -ENOMEM; 127 + goto error; 128 + } 126 129 127 130 if (pci_resource_len(pdev, 4) > 0) { 128 131 /* 64bit surface bar present */ ··· 146 139 qdev->surface_mapping = 147 140 io_mapping_create_wc(qdev->surfaceram_base, 148 141 qdev->surfaceram_size); 142 + if (!qdev->surface_mapping) { 143 + pr_err("Unable to create surface_mapping"); 144 + r = -ENOMEM; 145 + goto vram_mapping_free; 146 + } 149 147 } 150 148 151 149 DRM_DEBUG_KMS("qxl: vram %llx-%llx(%dM %dk), surface %llx-%llx(%dM %dk, %s)\n", ··· 167 155 qdev->rom = ioremap(qdev->rom_base, qdev->rom_size); 168 156 if (!qdev->rom) { 169 157 pr_err("Unable to ioremap ROM\n"); 170 - return -ENOMEM; 158 + r = -ENOMEM; 159 + goto surface_mapping_free; 171 160 } 172 161 173 - qxl_check_device(qdev); 162 + if (!qxl_check_device(qdev)) { 163 + r = -ENODEV; 164 + goto surface_mapping_free; 165 + } 174 166 175 167 r = qxl_bo_init(qdev); 176 168 if (r) { 177 169 DRM_ERROR("bo init failed %d\n", r); 178 - return r; 170 + goto rom_unmap; 179 171 } 180 172 181 173 qdev->ram_header = ioremap(qdev->vram_base + 182 174 qdev->rom->ram_header_offset, 183 175 sizeof(*qdev->ram_header)); 176 + if (!qdev->ram_header) { 177 + DRM_ERROR("Unable to ioremap RAM header\n"); 178 + r = -ENOMEM; 179 + goto bo_fini; 180 + } 184 181 185 182 qdev->command_ring = qxl_ring_create(&(qdev->ram_header->cmd_ring_hdr), 186 183 sizeof(struct qxl_command), ··· 197 176 qdev->io_base + QXL_IO_NOTIFY_CMD, 198 177 false, 199 178 
&qdev->display_event); 179 + if (!qdev->command_ring) { 180 + DRM_ERROR("Unable to create command ring\n"); 181 + r = -ENOMEM; 182 + goto ram_header_unmap; 183 + } 200 184 201 185 qdev->cursor_ring = qxl_ring_create( 202 186 &(qdev->ram_header->cursor_ring_hdr), ··· 211 185 false, 212 186 &qdev->cursor_event); 213 187 188 + if (!qdev->cursor_ring) { 189 + DRM_ERROR("Unable to create cursor ring\n"); 190 + r = -ENOMEM; 191 + goto command_ring_free; 192 + } 193 + 214 194 qdev->release_ring = qxl_ring_create( 215 195 &(qdev->ram_header->release_ring_hdr), 216 196 sizeof(uint64_t), 217 197 QXL_RELEASE_RING_SIZE, 0, true, 218 198 NULL); 219 199 200 + if (!qdev->release_ring) { 201 + DRM_ERROR("Unable to create release ring\n"); 202 + r = -ENOMEM; 203 + goto cursor_ring_free; 204 + } 220 205 /* TODO - slot initialization should happen on reset. where is our 221 206 * reset handler? */ 222 207 qdev->n_mem_slots = qdev->rom->slots_end; ··· 239 202 qdev->mem_slots = 240 203 kmalloc_array(qdev->n_mem_slots, sizeof(struct qxl_memslot), 241 204 GFP_KERNEL); 205 + 206 + if (!qdev->mem_slots) { 207 + DRM_ERROR("Unable to alloc mem slots\n"); 208 + r = -ENOMEM; 209 + goto release_ring_free; 210 + } 242 211 243 212 idr_init(&qdev->release_idr); 244 213 spin_lock_init(&qdev->release_idr_lock); ··· 261 218 262 219 /* must initialize irq before first async io - slot creation */ 263 220 r = qxl_irq_init(qdev); 264 - if (r) 265 - return r; 221 + if (r) { 222 + DRM_ERROR("Unable to init qxl irq\n"); 223 + goto mem_slots_free; 224 + } 266 225 267 226 /* 268 227 * Note that virtual is surface0. 
We rely on the single ioremap done ··· 288 243 INIT_WORK(&qdev->gc_work, qxl_gc_work); 289 244 290 245 return 0; 246 + 247 + mem_slots_free: 248 + kfree(qdev->mem_slots); 249 + release_ring_free: 250 + qxl_ring_free(qdev->release_ring); 251 + cursor_ring_free: 252 + qxl_ring_free(qdev->cursor_ring); 253 + command_ring_free: 254 + qxl_ring_free(qdev->command_ring); 255 + ram_header_unmap: 256 + iounmap(qdev->ram_header); 257 + bo_fini: 258 + qxl_bo_fini(qdev); 259 + rom_unmap: 260 + iounmap(qdev->rom); 261 + surface_mapping_free: 262 + io_mapping_free(qdev->surface_mapping); 263 + vram_mapping_free: 264 + io_mapping_free(qdev->vram_mapping); 265 + error: 266 + return r; 291 267 } 292 268 293 269 void qxl_device_fini(struct qxl_device *qdev)
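The qxl init rework above converts a leaky early-return style into the kernel's standard goto-unwind pattern: each acquisition gets a matching label, and a failure at step N unwinds steps N-1..1 in reverse order. A minimal illustration using plain allocations in place of the ioremaps and ring creations:

```c
#include <assert.h>
#include <stdlib.h>

/* Acquire three resources; on failure, release what was already
 * acquired, in reverse order, via fall-through labels. */
static int init_three(void **a, void **b, void **c)
{
    *a = malloc(16);
    if (!*a)
        goto err;
    *b = malloc(16);
    if (!*b)
        goto free_a;
    *c = malloc(16);
    if (!*c)
        goto free_b;
    return 0;

free_b:
    free(*b);
free_a:
    free(*a);
err:
    return -12; /* -ENOMEM */
}
```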
+1 -22
drivers/gpu/drm/radeon/radeon_drv.c
··· 316 316 317 317 bool radeon_device_is_virtual(void); 318 318 319 - static int radeon_kick_out_firmware_fb(struct pci_dev *pdev) 320 - { 321 - struct apertures_struct *ap; 322 - bool primary = false; 323 - 324 - ap = alloc_apertures(1); 325 - if (!ap) 326 - return -ENOMEM; 327 - 328 - ap->ranges[0].base = pci_resource_start(pdev, 0); 329 - ap->ranges[0].size = pci_resource_len(pdev, 0); 330 - 331 - #ifdef CONFIG_X86 332 - primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 333 - #endif 334 - drm_fb_helper_remove_conflicting_framebuffers(ap, "radeondrmfb", primary); 335 - kfree(ap); 336 - 337 - return 0; 338 - } 339 - 340 319 static int radeon_pci_probe(struct pci_dev *pdev, 341 320 const struct pci_device_id *ent) 342 321 { ··· 325 346 return -EPROBE_DEFER; 326 347 327 348 /* Get rid of things like offb */ 328 - ret = radeon_kick_out_firmware_fb(pdev); 349 + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "radeondrmfb"); 329 350 if (ret) 330 351 return ret; 331 352
+128 -21
drivers/gpu/drm/rcar-du/rcar_du_crtc.c
··· 691 691 .atomic_disable = rcar_du_crtc_atomic_disable, 692 692 }; 693 693 694 + static void rcar_du_crtc_crc_init(struct rcar_du_crtc *rcrtc) 695 + { 696 + struct rcar_du_device *rcdu = rcrtc->group->dev; 697 + const char **sources; 698 + unsigned int count; 699 + int i = -1; 700 + 701 + /* CRC available only on Gen3 HW. */ 702 + if (rcdu->info->gen < 3) 703 + return; 704 + 705 + /* Reserve 1 for "auto" source. */ 706 + count = rcrtc->vsp->num_planes + 1; 707 + 708 + sources = kmalloc_array(count, sizeof(*sources), GFP_KERNEL); 709 + if (!sources) 710 + return; 711 + 712 + sources[0] = kstrdup("auto", GFP_KERNEL); 713 + if (!sources[0]) 714 + goto error; 715 + 716 + for (i = 0; i < rcrtc->vsp->num_planes; ++i) { 717 + struct drm_plane *plane = &rcrtc->vsp->planes[i].plane; 718 + char name[16]; 719 + 720 + sprintf(name, "plane%u", plane->base.id); 721 + sources[i + 1] = kstrdup(name, GFP_KERNEL); 722 + if (!sources[i + 1]) 723 + goto error; 724 + } 725 + 726 + rcrtc->sources = sources; 727 + rcrtc->sources_count = count; 728 + return; 729 + 730 + error: 731 + while (i >= 0) { 732 + kfree(sources[i]); 733 + i--; 734 + } 735 + kfree(sources); 736 + } 737 + 738 + static void rcar_du_crtc_crc_cleanup(struct rcar_du_crtc *rcrtc) 739 + { 740 + unsigned int i; 741 + 742 + if (!rcrtc->sources) 743 + return; 744 + 745 + for (i = 0; i < rcrtc->sources_count; i++) 746 + kfree(rcrtc->sources[i]); 747 + kfree(rcrtc->sources); 748 + 749 + rcrtc->sources = NULL; 750 + rcrtc->sources_count = 0; 751 + } 752 + 694 753 static struct drm_crtc_state * 695 754 rcar_du_crtc_atomic_duplicate_state(struct drm_crtc *crtc) 696 755 { ··· 774 715 { 775 716 __drm_atomic_helper_crtc_destroy_state(state); 776 717 kfree(to_rcar_crtc_state(state)); 718 + } 719 + 720 + static void rcar_du_crtc_cleanup(struct drm_crtc *crtc) 721 + { 722 + struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc); 723 + 724 + rcar_du_crtc_crc_cleanup(rcrtc); 725 + 726 + return drm_crtc_cleanup(crtc); 777 727 } 778 728 779 
729 static void rcar_du_crtc_reset(struct drm_crtc *crtc) ··· 824 756 rcrtc->vblank_enable = false; 825 757 } 826 758 827 - static int rcar_du_crtc_set_crc_source(struct drm_crtc *crtc, 828 - const char *source_name, 829 - size_t *values_cnt) 759 + static int rcar_du_crtc_parse_crc_source(struct rcar_du_crtc *rcrtc, 760 + const char *source_name, 761 + enum vsp1_du_crc_source *source) 830 762 { 831 - struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc); 832 - struct drm_modeset_acquire_ctx ctx; 833 - struct drm_crtc_state *crtc_state; 834 - struct drm_atomic_state *state; 835 - enum vsp1_du_crc_source source; 836 - unsigned int index = 0; 837 - unsigned int i; 763 + unsigned int index; 838 764 int ret; 839 765 840 766 /* ··· 836 774 * CRC on an input plane (%u is the plane ID), and "auto" to compute the 837 775 * CRC on the composer (VSP) output. 838 776 */ 777 + 839 778 if (!source_name) { 840 - source = VSP1_DU_CRC_NONE; 779 + *source = VSP1_DU_CRC_NONE; 780 + return 0; 841 781 } else if (!strcmp(source_name, "auto")) { 842 - source = VSP1_DU_CRC_OUTPUT; 782 + *source = VSP1_DU_CRC_OUTPUT; 783 + return 0; 843 784 } else if (strstarts(source_name, "plane")) { 844 - source = VSP1_DU_CRC_PLANE; 785 + unsigned int i; 786 + 787 + *source = VSP1_DU_CRC_PLANE; 845 788 846 789 ret = kstrtouint(source_name + strlen("plane"), 10, &index); 847 790 if (ret < 0) 848 791 return ret; 849 792 850 793 for (i = 0; i < rcrtc->vsp->num_planes; ++i) { 851 - if (index == rcrtc->vsp->planes[i].plane.base.id) { 852 - index = i; 853 - break; 854 - } 794 + if (index == rcrtc->vsp->planes[i].plane.base.id) 795 + return i; 855 796 } 797 + } 856 798 857 - if (i >= rcrtc->vsp->num_planes) 858 - return -EINVAL; 859 - } else { 799 + return -EINVAL; 800 + } 801 + 802 + static int rcar_du_crtc_verify_crc_source(struct drm_crtc *crtc, 803 + const char *source_name, 804 + size_t *values_cnt) 805 + { 806 + struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc); 807 + enum vsp1_du_crc_source source; 808 + 809 + 
if (rcar_du_crtc_parse_crc_source(rcrtc, source_name, &source) < 0) { 810 + DRM_DEBUG_DRIVER("unknown source %s\n", source_name); 860 811 return -EINVAL; 861 812 } 862 813 863 814 *values_cnt = 1; 815 + return 0; 816 + } 817 + 818 + const char *const *rcar_du_crtc_get_crc_sources(struct drm_crtc *crtc, 819 + size_t *count) 820 + { 821 + struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc); 822 + 823 + *count = rcrtc->sources_count; 824 + return rcrtc->sources; 825 + } 826 + 827 + static int rcar_du_crtc_set_crc_source(struct drm_crtc *crtc, 828 + const char *source_name) 829 + { 830 + struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc); 831 + struct drm_modeset_acquire_ctx ctx; 832 + struct drm_crtc_state *crtc_state; 833 + struct drm_atomic_state *state; 834 + enum vsp1_du_crc_source source; 835 + unsigned int index; 836 + int ret; 837 + 838 + ret = rcar_du_crtc_parse_crc_source(rcrtc, source_name, &source); 839 + if (ret < 0) 840 + return ret; 841 + 842 + index = ret; 864 843 865 844 /* Perform an atomic commit to set the CRC source. 
*/ 866 845 drm_modeset_acquire_init(&ctx, 0); ··· 956 853 957 854 static const struct drm_crtc_funcs crtc_funcs_gen3 = { 958 855 .reset = rcar_du_crtc_reset, 959 - .destroy = drm_crtc_cleanup, 856 + .destroy = rcar_du_crtc_cleanup, 960 857 .set_config = drm_atomic_helper_set_config, 961 858 .page_flip = drm_atomic_helper_page_flip, 962 859 .atomic_duplicate_state = rcar_du_crtc_atomic_duplicate_state, ··· 964 861 .enable_vblank = rcar_du_crtc_enable_vblank, 965 862 .disable_vblank = rcar_du_crtc_disable_vblank, 966 863 .set_crc_source = rcar_du_crtc_set_crc_source, 864 + .verify_crc_source = rcar_du_crtc_verify_crc_source, 865 + .get_crc_sources = rcar_du_crtc_get_crc_sources, 967 866 }; 968 867 969 868 /* ----------------------------------------------------------------------------- ··· 1103 998 "failed to register IRQ for CRTC %u\n", swindex); 1104 999 return ret; 1105 1000 } 1001 + 1002 + rcar_du_crtc_crc_init(rcrtc); 1106 1003 1107 1004 return 0; 1108 1005 }
+3
drivers/gpu/drm/rcar-du/rcar_du_crtc.h
··· 67 67 struct rcar_du_group *group; 68 68 struct rcar_du_vsp *vsp; 69 69 unsigned int vsp_pipe; 70 + 71 + const char *const *sources; 72 + unsigned int sources_count; 70 73 }; 71 74 72 75 #define to_rcar_crtc(c) container_of(c, struct rcar_du_crtc, crtc)
+2 -4
drivers/gpu/drm/rcar-du/rcar_du_plane.c
··· 690 690 if (state == NULL) 691 691 return; 692 692 693 + __drm_atomic_helper_plane_reset(plane, &state->state); 694 + 693 695 state->hwindex = -1; 694 696 state->source = RCAR_DU_PLANE_MEMORY; 695 697 state->colorkey = RCAR_DU_COLORKEY_NONE; 696 698 state->state.zpos = plane->type == DRM_PLANE_TYPE_PRIMARY ? 0 : 1; 697 - 698 - plane->state = &state->state; 699 - plane->state->alpha = DRM_BLEND_ALPHA_OPAQUE; 700 - plane->state->plane = plane; 701 699 } 702 700 703 701 static int rcar_du_plane_atomic_set_property(struct drm_plane *plane,
+1 -4
drivers/gpu/drm/rcar-du/rcar_du_vsp.c
··· 346 346 if (state == NULL) 347 347 return; 348 348 349 - state->state.alpha = DRM_BLEND_ALPHA_OPAQUE; 349 + __drm_atomic_helper_plane_reset(plane, &state->state); 350 350 state->state.zpos = plane->type == DRM_PLANE_TYPE_PRIMARY ? 0 : 1; 351 - 352 - plane->state = &state->state; 353 - plane->state->plane = plane; 354 351 } 355 352 356 353 static const struct drm_plane_funcs rcar_du_vsp_plane_funcs = {
+18 -7
drivers/gpu/drm/rockchip/Kconfig
··· 8 8 select DRM_ANALOGIX_DP if ROCKCHIP_ANALOGIX_DP 9 9 select DRM_DW_HDMI if ROCKCHIP_DW_HDMI 10 10 select DRM_MIPI_DSI if ROCKCHIP_DW_MIPI_DSI 11 + select DRM_RGB if ROCKCHIP_RGB 11 12 select SND_SOC_HDMI_CODEC if ROCKCHIP_CDN_DP && SND_SOC 12 13 help 13 14 Choose this option if you have a Rockchip soc chipset. ··· 24 23 help 25 24 This selects support for Rockchip SoC specific extensions 26 25 for the Analogix Core DP driver. If you want to enable DP 27 - on RK3288 based SoC, you should selet this option. 26 + on RK3288 or RK3399 based SoC, you should select this option. 28 27 29 28 config ROCKCHIP_CDN_DP 30 29 bool "Rockchip cdn DP" ··· 40 39 help 41 40 This selects support for Rockchip SoC specific extensions 42 41 for the Synopsys DesignWare HDMI driver. If you want to 43 - enable HDMI on RK3288 based SoC, you should selet this 44 - option. 42 + enable HDMI on RK3288 or RK3399 based SoC, you should select 43 + this option. 45 44 46 45 config ROCKCHIP_DW_MIPI_DSI 47 46 bool "Rockchip specific extensions for Synopsys DW MIPI DSI" 48 47 help 49 - This selects support for Rockchip SoC specific extensions 50 - for the Synopsys DesignWare HDMI driver. If you want to 51 - enable MIPI DSI on RK3288 based SoC, you should selet this 52 - option. 48 + This selects support for Rockchip SoC specific extensions 49 + for the Synopsys DesignWare HDMI driver. If you want to 50 + enable MIPI DSI on RK3288 or RK3399 based SoC, you should 51 + select this option. 53 52 54 53 config ROCKCHIP_INNO_HDMI 55 54 bool "Rockchip specific extensions for Innosilicon HDMI" ··· 67 66 Rockchip rk3288 SoC has LVDS TX Controller can be used, and it 68 67 support LVDS, rgb, dual LVDS output mode. say Y to enable its 69 68 driver. 69 + 70 + config ROCKCHIP_RGB 71 + bool "Rockchip RGB support" 72 + depends on DRM_ROCKCHIP 73 + depends on PINCTRL 74 + help 75 + Choose this option to enable support for Rockchip RGB output. 
76 + Some Rockchip CRTCs, like rv1108, can directly output parallel 77 + and serial RGB format to panel or connect to a conversion chip. 78 + say Y to enable its driver. 70 79 endif
+1
drivers/gpu/drm/rockchip/Makefile
··· 14 14 rockchipdrm-$(CONFIG_ROCKCHIP_DW_MIPI_DSI) += dw-mipi-dsi.o 15 15 rockchipdrm-$(CONFIG_ROCKCHIP_INNO_HDMI) += inno_hdmi.o 16 16 rockchipdrm-$(CONFIG_ROCKCHIP_LVDS) += rockchip_lvds.o 17 + rockchipdrm-$(CONFIG_ROCKCHIP_RGB) += rockchip_rgb.o 17 18 18 19 obj-$(CONFIG_DRM_ROCKCHIP) += rockchipdrm.o
+52 -46
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 24 24 #include <linux/pm_runtime.h> 25 25 #include <linux/module.h> 26 26 #include <linux/of_graph.h> 27 + #include <linux/of_platform.h> 27 28 #include <linux/component.h> 28 29 #include <linux/console.h> 29 30 #include <linux/iommu.h> ··· 185 184 err_free: 186 185 drm_dev->dev_private = NULL; 187 186 dev_set_drvdata(dev, NULL); 188 - drm_dev_unref(drm_dev); 187 + drm_dev_put(drm_dev); 189 188 return ret; 190 189 } 191 190 ··· 205 204 206 205 drm_dev->dev_private = NULL; 207 206 dev_set_drvdata(dev, NULL); 208 - drm_dev_unref(drm_dev); 207 + drm_dev_put(drm_dev); 209 208 } 210 209 211 210 static const struct file_operations rockchip_drm_driver_fops = { ··· 244 243 }; 245 244 246 245 #ifdef CONFIG_PM_SLEEP 247 - static void rockchip_drm_fb_suspend(struct drm_device *drm) 248 - { 249 - struct rockchip_drm_private *priv = drm->dev_private; 250 - 251 - console_lock(); 252 - drm_fb_helper_set_suspend(&priv->fbdev_helper, 1); 253 - console_unlock(); 254 - } 255 - 256 - static void rockchip_drm_fb_resume(struct drm_device *drm) 257 - { 258 - struct rockchip_drm_private *priv = drm->dev_private; 259 - 260 - console_lock(); 261 - drm_fb_helper_set_suspend(&priv->fbdev_helper, 0); 262 - console_unlock(); 263 - } 264 - 265 246 static int rockchip_drm_sys_suspend(struct device *dev) 266 247 { 267 248 struct drm_device *drm = dev_get_drvdata(dev); 268 - struct rockchip_drm_private *priv; 269 249 270 - if (!drm) 271 - return 0; 272 - 273 - drm_kms_helper_poll_disable(drm); 274 - rockchip_drm_fb_suspend(drm); 275 - 276 - priv = drm->dev_private; 277 - priv->state = drm_atomic_helper_suspend(drm); 278 - if (IS_ERR(priv->state)) { 279 - rockchip_drm_fb_resume(drm); 280 - drm_kms_helper_poll_enable(drm); 281 - return PTR_ERR(priv->state); 282 - } 283 - 284 - return 0; 250 + return drm_mode_config_helper_suspend(drm); 285 251 } 286 252 287 253 static int rockchip_drm_sys_resume(struct device *dev) 288 254 { 289 255 struct drm_device *drm = dev_get_drvdata(dev); 290 - struct 
rockchip_drm_private *priv; 291 256 292 - if (!drm) 293 - return 0; 294 - 295 - priv = drm->dev_private; 296 - drm_atomic_helper_resume(drm, priv->state); 297 - rockchip_drm_fb_resume(drm); 298 - drm_kms_helper_poll_enable(drm); 299 - 300 - return 0; 257 + return drm_mode_config_helper_resume(drm); 301 258 } 302 259 #endif 303 260 ··· 267 308 #define MAX_ROCKCHIP_SUB_DRIVERS 16 268 309 static struct platform_driver *rockchip_sub_drivers[MAX_ROCKCHIP_SUB_DRIVERS]; 269 310 static int num_rockchip_sub_drivers; 311 + 312 + /* 313 + * Check if a vop endpoint is leading to a rockchip subdriver or bridge. 314 + * Should be called from the component bind stage of the drivers 315 + * to ensure that all subdrivers are probed. 316 + * 317 + * @ep: endpoint of a rockchip vop 318 + * 319 + * returns true if subdriver, false if external bridge and -ENODEV 320 + * if remote port does not contain a device. 321 + */ 322 + int rockchip_drm_endpoint_is_subdriver(struct device_node *ep) 323 + { 324 + struct device_node *node = of_graph_get_remote_port_parent(ep); 325 + struct platform_device *pdev; 326 + struct device_driver *drv; 327 + int i; 328 + 329 + if (!node) 330 + return -ENODEV; 331 + 332 + /* status disabled will prevent creation of platform-devices */ 333 + pdev = of_find_device_by_node(node); 334 + of_node_put(node); 335 + if (!pdev) 336 + return -ENODEV; 337 + 338 + /* 339 + * All rockchip subdrivers have probed at this point, so 340 + * any device not having a driver now is an external bridge. 341 + */ 342 + drv = pdev->dev.driver; 343 + if (!drv) { 344 + platform_device_put(pdev); 345 + return false; 346 + } 347 + 348 + for (i = 0; i < num_rockchip_sub_drivers; i++) { 349 + if (rockchip_sub_drivers[i] == to_platform_driver(drv)) { 350 + platform_device_put(pdev); 351 + return true; 352 + } 353 + } 354 + 355 + platform_device_put(pdev); 356 + return false; 357 + } 270 358 271 359 static int compare_dev(struct device *dev, void *data) 272 360 {
+1 -1
drivers/gpu/drm/rockchip/rockchip_drm_drv.h
··· 51 51 struct rockchip_drm_private { 52 52 struct drm_fb_helper fbdev_helper; 53 53 struct drm_gem_object *fbdev_bo; 54 - struct drm_atomic_state *state; 55 54 struct iommu_domain *domain; 56 55 struct mutex mm_lock; 57 56 struct drm_mm mm; ··· 64 65 struct device *dev); 65 66 int rockchip_drm_wait_vact_end(struct drm_crtc *crtc, unsigned int mstimeout); 66 67 68 + int rockchip_drm_endpoint_is_subdriver(struct device_node *ep); 67 69 extern struct platform_driver cdn_dp_driver; 68 70 extern struct platform_driver dw_hdmi_rockchip_pltfm_driver; 69 71 extern struct platform_driver dw_mipi_dsi_driver;
+41 -7
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 32 32 #include <linux/of_device.h> 33 33 #include <linux/pm_runtime.h> 34 34 #include <linux/component.h> 35 + #include <linux/overflow.h> 35 36 36 37 #include <linux/reset.h> 37 38 #include <linux/delay.h> ··· 42 41 #include "rockchip_drm_fb.h" 43 42 #include "rockchip_drm_psr.h" 44 43 #include "rockchip_drm_vop.h" 44 + #include "rockchip_rgb.h" 45 45 46 46 #define VOP_WIN_SET(x, win, name, v) \ 47 47 vop_reg_set(vop, &win->phy->name, win->base, ~0, v, #name) ··· 94 92 struct vop *vop; 95 93 }; 96 94 95 + struct rockchip_rgb; 97 96 struct vop { 98 97 struct drm_crtc crtc; 99 98 struct device *dev; ··· 137 134 138 135 /* vop dclk reset */ 139 136 struct reset_control *dclk_rst; 137 + 138 + /* optional internal rgb encoder */ 139 + struct rockchip_rgb *rgb; 140 140 141 141 struct vop_win win[]; 142 142 }; ··· 1117 1111 } 1118 1112 1119 1113 static int vop_crtc_set_crc_source(struct drm_crtc *crtc, 1120 - const char *source_name, size_t *values_cnt) 1114 + const char *source_name) 1121 1115 { 1122 1116 struct vop *vop = to_vop(crtc); 1123 1117 struct drm_connector *connector; ··· 1126 1120 connector = vop_get_edp_connector(vop); 1127 1121 if (!connector) 1128 1122 return -EINVAL; 1129 - 1130 - *values_cnt = 3; 1131 1123 1132 1124 if (source_name && strcmp(source_name, "auto") == 0) 1133 1125 ret = analogix_dp_start_crc(connector); ··· 1136 1132 1137 1133 return ret; 1138 1134 } 1135 + 1136 + static int 1137 + vop_crtc_verify_crc_source(struct drm_crtc *crtc, const char *source_name, 1138 + size_t *values_cnt) 1139 + { 1140 + if (source_name && strcmp(source_name, "auto") != 0) 1141 + return -EINVAL; 1142 + 1143 + *values_cnt = 3; 1144 + return 0; 1145 + } 1146 + 1139 1147 #else 1140 1148 static int vop_crtc_set_crc_source(struct drm_crtc *crtc, 1141 - const char *source_name, size_t *values_cnt) 1149 + const char *source_name) 1150 + { 1151 + return -ENODEV; 1152 + } 1153 + 1154 + static int 1155 + vop_crtc_verify_crc_source(struct drm_crtc *crtc, const char 
*source_name, 1156 + size_t *values_cnt) 1142 1157 { 1143 1158 return -ENODEV; 1144 1159 } ··· 1173 1150 .enable_vblank = vop_crtc_enable_vblank, 1174 1151 .disable_vblank = vop_crtc_disable_vblank, 1175 1152 .set_crc_source = vop_crtc_set_crc_source, 1153 + .verify_crc_source = vop_crtc_verify_crc_source, 1176 1154 }; 1177 1155 1178 1156 static void vop_fb_unref_worker(struct drm_flip_work *work, void *val) ··· 1585 1561 struct drm_device *drm_dev = data; 1586 1562 struct vop *vop; 1587 1563 struct resource *res; 1588 - size_t alloc_size; 1589 1564 int ret, irq; 1590 1565 1591 1566 vop_data = of_device_get_match_data(dev); ··· 1592 1569 return -ENODEV; 1593 1570 1594 1571 /* Allocate vop struct and its vop_win array */ 1595 - alloc_size = sizeof(*vop) + sizeof(*vop->win) * vop_data->win_size; 1596 - vop = devm_kzalloc(dev, alloc_size, GFP_KERNEL); 1572 + vop = devm_kzalloc(dev, struct_size(vop, win, vop_data->win_size), 1573 + GFP_KERNEL); 1597 1574 if (!vop) 1598 1575 return -ENOMEM; 1599 1576 ··· 1643 1620 if (ret) 1644 1621 goto err_disable_pm_runtime; 1645 1622 1623 + if (vop->data->feature & VOP_FEATURE_INTERNAL_RGB) { 1624 + vop->rgb = rockchip_rgb_init(dev, &vop->crtc, vop->drm_dev); 1625 + if (IS_ERR(vop->rgb)) { 1626 + ret = PTR_ERR(vop->rgb); 1627 + goto err_disable_pm_runtime; 1628 + } 1629 + } 1630 + 1646 1631 return 0; 1647 1632 1648 1633 err_disable_pm_runtime: ··· 1662 1631 static void vop_unbind(struct device *dev, struct device *master, void *data) 1663 1632 { 1664 1633 struct vop *vop = dev_get_drvdata(dev); 1634 + 1635 + if (vop->rgb) 1636 + rockchip_rgb_fini(vop->rgb); 1665 1637 1666 1638 pm_runtime_disable(dev); 1667 1639 vop_destroy_crtc(vop);
+1
drivers/gpu/drm/rockchip/rockchip_drm_vop.h
··· 162 162 unsigned int win_size; 163 163 164 164 #define VOP_FEATURE_OUTPUT_RGB10 BIT(0) 165 + #define VOP_FEATURE_INTERNAL_RGB BIT(1) 165 166 u64 feature; 166 167 }; 167 168
+173
drivers/gpu/drm/rockchip/rockchip_rgb.c
··· 1 + //SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright (C) Fuzhou Rockchip Electronics Co.Ltd 4 + * Author: 5 + * Sandy Huang <hjc@rock-chips.com> 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #include <drm/drmP.h> 18 + #include <drm/drm_atomic_helper.h> 19 + #include <drm/drm_crtc_helper.h> 20 + #include <drm/drm_dp_helper.h> 21 + #include <drm/drm_panel.h> 22 + #include <drm/drm_of.h> 23 + 24 + #include <linux/component.h> 25 + #include <linux/of_graph.h> 26 + 27 + #include "rockchip_drm_drv.h" 28 + #include "rockchip_drm_vop.h" 29 + 30 + #define encoder_to_rgb(c) container_of(c, struct rockchip_rgb, encoder) 31 + 32 + struct rockchip_rgb { 33 + struct device *dev; 34 + struct drm_device *drm_dev; 35 + struct drm_bridge *bridge; 36 + struct drm_encoder encoder; 37 + int output_mode; 38 + }; 39 + 40 + static int 41 + rockchip_rgb_encoder_atomic_check(struct drm_encoder *encoder, 42 + struct drm_crtc_state *crtc_state, 43 + struct drm_connector_state *conn_state) 44 + { 45 + struct rockchip_crtc_state *s = to_rockchip_crtc_state(crtc_state); 46 + struct drm_connector *connector = conn_state->connector; 47 + struct drm_display_info *info = &connector->display_info; 48 + u32 bus_format; 49 + 50 + if (info->num_bus_formats) 51 + bus_format = info->bus_formats[0]; 52 + else 53 + bus_format = MEDIA_BUS_FMT_RGB888_1X24; 54 + 55 + switch (bus_format) { 56 + case MEDIA_BUS_FMT_RGB666_1X18: 57 + s->output_mode = ROCKCHIP_OUT_MODE_P666; 58 + break; 59 + case MEDIA_BUS_FMT_RGB565_1X16: 60 + s->output_mode = 
ROCKCHIP_OUT_MODE_P565; 61 + break; 62 + case MEDIA_BUS_FMT_RGB888_1X24: 63 + case MEDIA_BUS_FMT_RGB666_1X24_CPADHI: 64 + default: 65 + s->output_mode = ROCKCHIP_OUT_MODE_P888; 66 + break; 67 + } 68 + 69 + s->output_type = DRM_MODE_CONNECTOR_LVDS; 70 + 71 + return 0; 72 + } 73 + 74 + static const 75 + struct drm_encoder_helper_funcs rockchip_rgb_encoder_helper_funcs = { 76 + .atomic_check = rockchip_rgb_encoder_atomic_check, 77 + }; 78 + 79 + static const struct drm_encoder_funcs rockchip_rgb_encoder_funcs = { 80 + .destroy = drm_encoder_cleanup, 81 + }; 82 + 83 + struct rockchip_rgb *rockchip_rgb_init(struct device *dev, 84 + struct drm_crtc *crtc, 85 + struct drm_device *drm_dev) 86 + { 87 + struct rockchip_rgb *rgb; 88 + struct drm_encoder *encoder; 89 + struct device_node *port, *endpoint; 90 + u32 endpoint_id; 91 + int ret = 0, child_count = 0; 92 + struct drm_panel *panel; 93 + struct drm_bridge *bridge; 94 + 95 + rgb = devm_kzalloc(dev, sizeof(*rgb), GFP_KERNEL); 96 + if (!rgb) 97 + return ERR_PTR(-ENOMEM); 98 + 99 + rgb->dev = dev; 100 + rgb->drm_dev = drm_dev; 101 + 102 + port = of_graph_get_port_by_id(dev->of_node, 0); 103 + if (!port) 104 + return ERR_PTR(-EINVAL); 105 + 106 + for_each_child_of_node(port, endpoint) { 107 + if (of_property_read_u32(endpoint, "reg", &endpoint_id)) 108 + endpoint_id = 0; 109 + 110 + if (rockchip_drm_endpoint_is_subdriver(endpoint) > 0) 111 + continue; 112 + 113 + child_count++; 114 + ret = drm_of_find_panel_or_bridge(dev->of_node, 0, endpoint_id, 115 + &panel, &bridge); 116 + if (!ret) 117 + break; 118 + } 119 + 120 + of_node_put(port); 121 + 122 + /* if the rgb output is not connected to anything, just return */ 123 + if (!child_count) 124 + return NULL; 125 + 126 + if (ret < 0) { 127 + if (ret != -EPROBE_DEFER) 128 + DRM_DEV_ERROR(dev, "failed to find panel or bridge %d\n", ret); 129 + return ERR_PTR(ret); 130 + } 131 + 132 + encoder = &rgb->encoder; 133 + encoder->possible_crtcs = drm_crtc_mask(crtc); 134 + 135 + ret = 
drm_encoder_init(drm_dev, encoder, &rockchip_rgb_encoder_funcs, 136 + DRM_MODE_ENCODER_NONE, NULL); 137 + if (ret < 0) { 138 + DRM_DEV_ERROR(drm_dev->dev, 139 + "failed to initialize encoder: %d\n", ret); 140 + return ERR_PTR(ret); 141 + } 142 + 143 + drm_encoder_helper_add(encoder, &rockchip_rgb_encoder_helper_funcs); 144 + 145 + if (panel) { 146 + bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_LVDS); 147 + if (IS_ERR(bridge)) 148 + return ERR_CAST(bridge); 149 + } 150 + 151 + rgb->bridge = bridge; 152 + 153 + ret = drm_bridge_attach(encoder, rgb->bridge, NULL); 154 + if (ret) { 155 + DRM_DEV_ERROR(drm_dev->dev, 156 + "failed to attach bridge: %d\n", ret); 157 + goto err_free_encoder; 158 + } 159 + 160 + return rgb; 161 + 162 + err_free_encoder: 163 + drm_encoder_cleanup(encoder); 164 + return ERR_PTR(ret); 165 + } 166 + EXPORT_SYMBOL_GPL(rockchip_rgb_init); 167 + 168 + void rockchip_rgb_fini(struct rockchip_rgb *rgb) 169 + { 170 + drm_panel_bridge_remove(rgb->bridge); 171 + drm_encoder_cleanup(&rgb->encoder); 172 + } 173 + EXPORT_SYMBOL_GPL(rockchip_rgb_fini);
+33
drivers/gpu/drm/rockchip/rockchip_rgb.h
··· 1 + //SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright (C) Fuzhou Rockchip Electronics Co.Ltd 4 + * Author: 5 + * Sandy Huang <hjc@rock-chips.com> 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #ifdef CONFIG_ROCKCHIP_RGB 18 + struct rockchip_rgb *rockchip_rgb_init(struct device *dev, 19 + struct drm_crtc *crtc, 20 + struct drm_device *drm_dev); 21 + void rockchip_rgb_fini(struct rockchip_rgb *rgb); 22 + #else 23 + static inline struct rockchip_rgb *rockchip_rgb_init(struct device *dev, 24 + struct drm_crtc *crtc, 25 + struct drm_device *drm_dev) 26 + { 27 + return NULL; 28 + } 29 + 30 + static inline void rockchip_rgb_fini(struct rockchip_rgb *rgb) 31 + { 32 + } 33 + #endif
+215
drivers/gpu/drm/rockchip/rockchip_vop_reg.c
··· 177 177 .win_size = ARRAY_SIZE(rk3126_vop_win_data), 178 178 }; 179 179 180 + static const int px30_vop_intrs[] = { 181 + FS_INTR, 182 + 0, 0, 183 + LINE_FLAG_INTR, 184 + 0, 185 + BUS_ERROR_INTR, 186 + 0, 0, 187 + DSP_HOLD_VALID_INTR, 188 + }; 189 + 190 + static const struct vop_intr px30_intr = { 191 + .intrs = px30_vop_intrs, 192 + .nintrs = ARRAY_SIZE(px30_vop_intrs), 193 + .line_flag_num[0] = VOP_REG(PX30_LINE_FLAG, 0xfff, 0), 194 + .status = VOP_REG_MASK_SYNC(PX30_INTR_STATUS, 0xffff, 0), 195 + .enable = VOP_REG_MASK_SYNC(PX30_INTR_EN, 0xffff, 0), 196 + .clear = VOP_REG_MASK_SYNC(PX30_INTR_CLEAR, 0xffff, 0), 197 + }; 198 + 199 + static const struct vop_common px30_common = { 200 + .standby = VOP_REG_SYNC(PX30_SYS_CTRL2, 0x1, 1), 201 + .out_mode = VOP_REG(PX30_DSP_CTRL2, 0xf, 16), 202 + .dsp_blank = VOP_REG(PX30_DSP_CTRL2, 0x1, 14), 203 + .cfg_done = VOP_REG_SYNC(PX30_REG_CFG_DONE, 0x1, 0), 204 + }; 205 + 206 + static const struct vop_modeset px30_modeset = { 207 + .htotal_pw = VOP_REG(PX30_DSP_HTOTAL_HS_END, 0x0fff0fff, 0), 208 + .hact_st_end = VOP_REG(PX30_DSP_HACT_ST_END, 0x0fff0fff, 0), 209 + .vtotal_pw = VOP_REG(PX30_DSP_VTOTAL_VS_END, 0x0fff0fff, 0), 210 + .vact_st_end = VOP_REG(PX30_DSP_VACT_ST_END, 0x0fff0fff, 0), 211 + }; 212 + 213 + static const struct vop_output px30_output = { 214 + .rgb_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0xf, 1), 215 + .mipi_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0xf, 25), 216 + .rgb_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 0), 217 + .mipi_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 24), 218 + }; 219 + 220 + static const struct vop_scl_regs px30_win_scl = { 221 + .scale_yrgb_x = VOP_REG(PX30_WIN0_SCL_FACTOR_YRGB, 0xffff, 0x0), 222 + .scale_yrgb_y = VOP_REG(PX30_WIN0_SCL_FACTOR_YRGB, 0xffff, 16), 223 + .scale_cbcr_x = VOP_REG(PX30_WIN0_SCL_FACTOR_CBR, 0xffff, 0x0), 224 + .scale_cbcr_y = VOP_REG(PX30_WIN0_SCL_FACTOR_CBR, 0xffff, 16), 225 + }; 226 + 227 + static const struct vop_win_phy px30_win0_data = { 228 + .scl = &px30_win_scl, 229 + .data_formats 
= formats_win_full, 230 + .nformats = ARRAY_SIZE(formats_win_full), 231 + .enable = VOP_REG(PX30_WIN0_CTRL0, 0x1, 0), 232 + .format = VOP_REG(PX30_WIN0_CTRL0, 0x7, 1), 233 + .rb_swap = VOP_REG(PX30_WIN0_CTRL0, 0x1, 12), 234 + .act_info = VOP_REG(PX30_WIN0_ACT_INFO, 0xffffffff, 0), 235 + .dsp_info = VOP_REG(PX30_WIN0_DSP_INFO, 0xffffffff, 0), 236 + .dsp_st = VOP_REG(PX30_WIN0_DSP_ST, 0xffffffff, 0), 237 + .yrgb_mst = VOP_REG(PX30_WIN0_YRGB_MST0, 0xffffffff, 0), 238 + .uv_mst = VOP_REG(PX30_WIN0_CBR_MST0, 0xffffffff, 0), 239 + .yrgb_vir = VOP_REG(PX30_WIN0_VIR, 0x1fff, 0), 240 + .uv_vir = VOP_REG(PX30_WIN0_VIR, 0x1fff, 16), 241 + }; 242 + 243 + static const struct vop_win_phy px30_win1_data = { 244 + .data_formats = formats_win_lite, 245 + .nformats = ARRAY_SIZE(formats_win_lite), 246 + .enable = VOP_REG(PX30_WIN1_CTRL0, 0x1, 0), 247 + .format = VOP_REG(PX30_WIN1_CTRL0, 0x7, 4), 248 + .rb_swap = VOP_REG(PX30_WIN1_CTRL0, 0x1, 12), 249 + .dsp_info = VOP_REG(PX30_WIN1_DSP_INFO, 0xffffffff, 0), 250 + .dsp_st = VOP_REG(PX30_WIN1_DSP_ST, 0xffffffff, 0), 251 + .yrgb_mst = VOP_REG(PX30_WIN1_MST, 0xffffffff, 0), 252 + .yrgb_vir = VOP_REG(PX30_WIN1_VIR, 0x1fff, 0), 253 + }; 254 + 255 + static const struct vop_win_phy px30_win2_data = { 256 + .data_formats = formats_win_lite, 257 + .nformats = ARRAY_SIZE(formats_win_lite), 258 + .gate = VOP_REG(PX30_WIN2_CTRL0, 0x1, 4), 259 + .enable = VOP_REG(PX30_WIN2_CTRL0, 0x1, 0), 260 + .format = VOP_REG(PX30_WIN2_CTRL0, 0x3, 5), 261 + .rb_swap = VOP_REG(PX30_WIN2_CTRL0, 0x1, 20), 262 + .dsp_info = VOP_REG(PX30_WIN2_DSP_INFO0, 0x0fff0fff, 0), 263 + .dsp_st = VOP_REG(PX30_WIN2_DSP_ST0, 0x1fff1fff, 0), 264 + .yrgb_mst = VOP_REG(PX30_WIN2_MST0, 0xffffffff, 0), 265 + .yrgb_vir = VOP_REG(PX30_WIN2_VIR0_1, 0x1fff, 0), 266 + }; 267 + 268 + static const struct vop_win_data px30_vop_big_win_data[] = { 269 + { .base = 0x00, .phy = &px30_win0_data, 270 + .type = DRM_PLANE_TYPE_PRIMARY }, 271 + { .base = 0x00, .phy = &px30_win1_data, 272 + .type = 
DRM_PLANE_TYPE_OVERLAY }, 273 + { .base = 0x00, .phy = &px30_win2_data, 274 + .type = DRM_PLANE_TYPE_CURSOR }, 275 + }; 276 + 277 + static const struct vop_data px30_vop_big = { 278 + .intr = &px30_intr, 279 + .feature = VOP_FEATURE_INTERNAL_RGB, 280 + .common = &px30_common, 281 + .modeset = &px30_modeset, 282 + .output = &px30_output, 283 + .win = px30_vop_big_win_data, 284 + .win_size = ARRAY_SIZE(px30_vop_big_win_data), 285 + }; 286 + 287 + static const struct vop_win_data px30_vop_lit_win_data[] = { 288 + { .base = 0x00, .phy = &px30_win1_data, 289 + .type = DRM_PLANE_TYPE_PRIMARY }, 290 + }; 291 + 292 + static const struct vop_data px30_vop_lit = { 293 + .intr = &px30_intr, 294 + .feature = VOP_FEATURE_INTERNAL_RGB, 295 + .common = &px30_common, 296 + .modeset = &px30_modeset, 297 + .output = &px30_output, 298 + .win = px30_vop_lit_win_data, 299 + .win_size = ARRAY_SIZE(px30_vop_lit_win_data), 300 + }; 301 + 302 + static const struct vop_scl_regs rk3188_win_scl = { 303 + .scale_yrgb_x = VOP_REG(RK3188_WIN0_SCL_FACTOR_YRGB, 0xffff, 0x0), 304 + .scale_yrgb_y = VOP_REG(RK3188_WIN0_SCL_FACTOR_YRGB, 0xffff, 16), 305 + .scale_cbcr_x = VOP_REG(RK3188_WIN0_SCL_FACTOR_CBR, 0xffff, 0x0), 306 + .scale_cbcr_y = VOP_REG(RK3188_WIN0_SCL_FACTOR_CBR, 0xffff, 16), 307 + }; 308 + 309 + static const struct vop_win_phy rk3188_win0_data = { 310 + .scl = &rk3188_win_scl, 311 + .data_formats = formats_win_full, 312 + .nformats = ARRAY_SIZE(formats_win_full), 313 + .enable = VOP_REG(RK3188_SYS_CTRL, 0x1, 0), 314 + .format = VOP_REG(RK3188_SYS_CTRL, 0x7, 3), 315 + .rb_swap = VOP_REG(RK3188_SYS_CTRL, 0x1, 15), 316 + .act_info = VOP_REG(RK3188_WIN0_ACT_INFO, 0x1fff1fff, 0), 317 + .dsp_info = VOP_REG(RK3188_WIN0_DSP_INFO, 0x0fff0fff, 0), 318 + .dsp_st = VOP_REG(RK3188_WIN0_DSP_ST, 0x1fff1fff, 0), 319 + .yrgb_mst = VOP_REG(RK3188_WIN0_YRGB_MST0, 0xffffffff, 0), 320 + .uv_mst = VOP_REG(RK3188_WIN0_CBR_MST0, 0xffffffff, 0), 321 + .yrgb_vir = VOP_REG(RK3188_WIN_VIR, 0x1fff, 0), 322 + }; 323 
+ 324 + static const struct vop_win_phy rk3188_win1_data = { 325 + .data_formats = formats_win_lite, 326 + .nformats = ARRAY_SIZE(formats_win_lite), 327 + .enable = VOP_REG(RK3188_SYS_CTRL, 0x1, 1), 328 + .format = VOP_REG(RK3188_SYS_CTRL, 0x7, 6), 329 + .rb_swap = VOP_REG(RK3188_SYS_CTRL, 0x1, 19), 330 + /* no act_info on window1 */ 331 + .dsp_info = VOP_REG(RK3188_WIN1_DSP_INFO, 0x07ff07ff, 0), 332 + .dsp_st = VOP_REG(RK3188_WIN1_DSP_ST, 0x0fff0fff, 0), 333 + .yrgb_mst = VOP_REG(RK3188_WIN1_MST, 0xffffffff, 0), 334 + .yrgb_vir = VOP_REG(RK3188_WIN_VIR, 0x1fff, 16), 335 + }; 336 + 337 + static const struct vop_modeset rk3188_modeset = { 338 + .htotal_pw = VOP_REG(RK3188_DSP_HTOTAL_HS_END, 0x0fff0fff, 0), 339 + .hact_st_end = VOP_REG(RK3188_DSP_HACT_ST_END, 0x0fff0fff, 0), 340 + .vtotal_pw = VOP_REG(RK3188_DSP_VTOTAL_VS_END, 0x0fff0fff, 0), 341 + .vact_st_end = VOP_REG(RK3188_DSP_VACT_ST_END, 0x0fff0fff, 0), 342 + }; 343 + 344 + static const struct vop_output rk3188_output = { 345 + .pin_pol = VOP_REG(RK3188_DSP_CTRL0, 0xf, 4), 346 + }; 347 + 348 + static const struct vop_common rk3188_common = { 349 + .gate_en = VOP_REG(RK3188_SYS_CTRL, 0x1, 31), 350 + .standby = VOP_REG(RK3188_SYS_CTRL, 0x1, 30), 351 + .out_mode = VOP_REG(RK3188_DSP_CTRL0, 0xf, 0), 352 + .cfg_done = VOP_REG(RK3188_REG_CFG_DONE, 0x1, 0), 353 + .dsp_blank = VOP_REG(RK3188_DSP_CTRL1, 0x3, 24), 354 + }; 355 + 356 + static const struct vop_win_data rk3188_vop_win_data[] = { 357 + { .base = 0x00, .phy = &rk3188_win0_data, 358 + .type = DRM_PLANE_TYPE_PRIMARY }, 359 + { .base = 0x00, .phy = &rk3188_win1_data, 360 + .type = DRM_PLANE_TYPE_CURSOR }, 361 + }; 362 + 363 + static const int rk3188_vop_intrs[] = { 364 + 0, 365 + FS_INTR, 366 + LINE_FLAG_INTR, 367 + BUS_ERROR_INTR, 368 + }; 369 + 370 + static const struct vop_intr rk3188_vop_intr = { 371 + .intrs = rk3188_vop_intrs, 372 + .nintrs = ARRAY_SIZE(rk3188_vop_intrs), 373 + .line_flag_num[0] = VOP_REG(RK3188_INT_STATUS, 0xfff, 12), 374 + .status = 
VOP_REG(RK3188_INT_STATUS, 0xf, 0), 375 + .enable = VOP_REG(RK3188_INT_STATUS, 0xf, 4), 376 + .clear = VOP_REG(RK3188_INT_STATUS, 0xf, 8), 377 + }; 378 + 379 + static const struct vop_data rk3188_vop = { 380 + .intr = &rk3188_vop_intr, 381 + .common = &rk3188_common, 382 + .modeset = &rk3188_modeset, 383 + .output = &rk3188_output, 384 + .win = rk3188_vop_win_data, 385 + .win_size = ARRAY_SIZE(rk3188_vop_win_data), 386 + .feature = VOP_FEATURE_INTERNAL_RGB, 387 + }; 388 + 180 389 static const struct vop_scl_extension rk3288_win_full_scl_ext = { 181 390 .cbcr_vsd_mode = VOP_REG(RK3288_WIN0_CTRL1, 0x1, 31), 182 391 .cbcr_vsu_mode = VOP_REG(RK3288_WIN0_CTRL1, 0x1, 30), ··· 750 541 .data = &rk3036_vop }, 751 542 { .compatible = "rockchip,rk3126-vop", 752 543 .data = &rk3126_vop }, 544 + { .compatible = "rockchip,px30-vop-big", 545 + .data = &px30_vop_big }, 546 + { .compatible = "rockchip,px30-vop-lit", 547 + .data = &px30_vop_lit }, 548 + { .compatible = "rockchip,rk3188-vop", 549 + .data = &rk3188_vop }, 753 550 { .compatible = "rockchip,rk3288-vop", 754 551 .data = &rk3288_vop }, 755 552 { .compatible = "rockchip,rk3368-vop",
+99
drivers/gpu/drm/rockchip/rockchip_vop_reg.h
··· 884 884 #define RK3126_WIN1_DSP_ST 0x54 885 885 /* rk3126 register definition end */ 886 886 887 + /* px30 register definition */ 888 + #define PX30_REG_CFG_DONE 0x00000 889 + #define PX30_VERSION 0x00004 890 + #define PX30_DSP_BG 0x00008 891 + #define PX30_MCU_CTRL 0x0000c 892 + #define PX30_SYS_CTRL0 0x00010 893 + #define PX30_SYS_CTRL1 0x00014 894 + #define PX30_SYS_CTRL2 0x00018 895 + #define PX30_DSP_CTRL0 0x00020 896 + #define PX30_DSP_CTRL2 0x00028 897 + #define PX30_VOP_STATUS 0x0002c 898 + #define PX30_LINE_FLAG 0x00030 899 + #define PX30_INTR_EN 0x00034 900 + #define PX30_INTR_CLEAR 0x00038 901 + #define PX30_INTR_STATUS 0x0003c 902 + #define PX30_WIN0_CTRL0 0x00050 903 + #define PX30_WIN0_CTRL1 0x00054 904 + #define PX30_WIN0_COLOR_KEY 0x00058 905 + #define PX30_WIN0_VIR 0x0005c 906 + #define PX30_WIN0_YRGB_MST0 0x00060 907 + #define PX30_WIN0_CBR_MST0 0x00064 908 + #define PX30_WIN0_ACT_INFO 0x00068 909 + #define PX30_WIN0_DSP_INFO 0x0006c 910 + #define PX30_WIN0_DSP_ST 0x00070 911 + #define PX30_WIN0_SCL_FACTOR_YRGB 0x00074 912 + #define PX30_WIN0_SCL_FACTOR_CBR 0x00078 913 + #define PX30_WIN0_SCL_OFFSET 0x0007c 914 + #define PX30_WIN0_ALPHA_CTRL 0x00080 915 + #define PX30_WIN1_CTRL0 0x00090 916 + #define PX30_WIN1_CTRL1 0x00094 917 + #define PX30_WIN1_VIR 0x00098 918 + #define PX30_WIN1_MST 0x000a0 919 + #define PX30_WIN1_DSP_INFO 0x000a4 920 + #define PX30_WIN1_DSP_ST 0x000a8 921 + #define PX30_WIN1_COLOR_KEY 0x000ac 922 + #define PX30_WIN1_ALPHA_CTRL 0x000bc 923 + #define PX30_HWC_CTRL0 0x000e0 924 + #define PX30_HWC_CTRL1 0x000e4 925 + #define PX30_HWC_MST 0x000e8 926 + #define PX30_HWC_DSP_ST 0x000ec 927 + #define PX30_HWC_ALPHA_CTRL 0x000f0 928 + #define PX30_DSP_HTOTAL_HS_END 0x00100 929 + #define PX30_DSP_HACT_ST_END 0x00104 930 + #define PX30_DSP_VTOTAL_VS_END 0x00108 931 + #define PX30_DSP_VACT_ST_END 0x0010c 932 + #define PX30_DSP_VS_ST_END_F1 0x00110 933 + #define PX30_DSP_VACT_ST_END_F1 0x00114 934 + #define PX30_BCSH_CTRL 0x00160 935 
+ #define PX30_BCSH_COL_BAR 0x00164 936 + #define PX30_BCSH_BCS 0x00168 937 + #define PX30_BCSH_H 0x0016c 938 + #define PX30_FRC_LOWER01_0 0x00170 939 + #define PX30_FRC_LOWER01_1 0x00174 940 + #define PX30_FRC_LOWER10_0 0x00178 941 + #define PX30_FRC_LOWER10_1 0x0017c 942 + #define PX30_FRC_LOWER11_0 0x00180 943 + #define PX30_FRC_LOWER11_1 0x00184 944 + #define PX30_MCU_RW_BYPASS_PORT 0x0018c 945 + #define PX30_WIN2_CTRL0 0x00190 946 + #define PX30_WIN2_CTRL1 0x00194 947 + #define PX30_WIN2_VIR0_1 0x00198 948 + #define PX30_WIN2_VIR2_3 0x0019c 949 + #define PX30_WIN2_MST0 0x001a0 950 + #define PX30_WIN2_DSP_INFO0 0x001a4 951 + #define PX30_WIN2_DSP_ST0 0x001a8 952 + #define PX30_WIN2_COLOR_KEY 0x001ac 953 + #define PX30_WIN2_ALPHA_CTRL 0x001bc 954 + #define PX30_BLANKING_VALUE 0x001f4 955 + #define PX30_FLAG_REG_FRM_VALID 0x001f8 956 + #define PX30_FLAG_REG 0x001fc 957 + #define PX30_HWC_LUT_ADDR 0x00600 958 + #define PX30_GAMMA_LUT_ADDR 0x00a00 959 + /* px30 register definition end */ 960 + 961 + /* rk3188 register definition */ 962 + #define RK3188_SYS_CTRL 0x00 963 + #define RK3188_DSP_CTRL0 0x04 964 + #define RK3188_DSP_CTRL1 0x08 965 + #define RK3188_INT_STATUS 0x10 966 + #define RK3188_WIN0_YRGB_MST0 0x20 967 + #define RK3188_WIN0_CBR_MST0 0x24 968 + #define RK3188_WIN0_YRGB_MST1 0x28 969 + #define RK3188_WIN0_CBR_MST1 0x2c 970 + #define RK3188_WIN_VIR 0x30 971 + #define RK3188_WIN0_ACT_INFO 0x34 972 + #define RK3188_WIN0_DSP_INFO 0x38 973 + #define RK3188_WIN0_DSP_ST 0x3c 974 + #define RK3188_WIN0_SCL_FACTOR_YRGB 0x40 975 + #define RK3188_WIN0_SCL_FACTOR_CBR 0x44 976 + #define RK3188_WIN1_MST 0x4c 977 + #define RK3188_WIN1_DSP_INFO 0x50 978 + #define RK3188_WIN1_DSP_ST 0x54 979 + #define RK3188_DSP_HTOTAL_HS_END 0x6c 980 + #define RK3188_DSP_HACT_ST_END 0x70 981 + #define RK3188_DSP_VTOTAL_VS_END 0x74 982 + #define RK3188_DSP_VACT_ST_END 0x78 983 + #define RK3188_REG_CFG_DONE 0x90 984 + /* rk3188 register definition end */ 985 + 887 986 #endif /* _ROCKCHIP_VOP_REG_H */
-1
drivers/gpu/drm/sti/sti_hda.c
··· 721 721 return 0; 722 722 723 723 err_sysfs: 724 - drm_bridge_remove(bridge); 725 724 return -EINVAL; 726 725 } 727 726
-1
drivers/gpu/drm/sti/sti_hdmi.c
··· 1315 1315 return 0; 1316 1316 1317 1317 err_sysfs: 1318 - drm_bridge_remove(bridge); 1319 1318 hdmi->drm_connector = NULL; 1320 1319 return -EINVAL; 1321 1320 }
+36 -45
drivers/gpu/drm/sun4i/sun4i_backend.c
··· 34 34 struct sun4i_backend_quirks { 35 35 /* backend <-> TCON muxing selection done in backend */ 36 36 bool needs_output_muxing; 37 + 38 + /* alpha at the lowest z position is not always supported */ 39 + bool supports_lowest_plane_alpha; 37 40 }; 38 41 39 42 static const u32 sunxi_rgb2yuv_coef[12] = { ··· 62 59 0x000004a7, 0x00000000, 0x00000662, 0x00003211, 63 60 0x000004a7, 0x00000812, 0x00000000, 0x00002eb1, 64 61 }; 65 - 66 - static inline bool sun4i_backend_format_is_planar_yuv(uint32_t format) 67 - { 68 - switch (format) { 69 - case DRM_FORMAT_YUV411: 70 - case DRM_FORMAT_YUV422: 71 - case DRM_FORMAT_YUV444: 72 - return true; 73 - default: 74 - return false; 75 - } 76 - } 77 - 78 - static inline bool sun4i_backend_format_is_packed_yuv422(uint32_t format) 79 - { 80 - switch (format) { 81 - case DRM_FORMAT_YUYV: 82 - case DRM_FORMAT_YVYU: 83 - case DRM_FORMAT_UYVY: 84 - case DRM_FORMAT_VYUY: 85 - return true; 86 - 87 - default: 88 - return false; 89 - } 90 - } 91 62 92 63 static void sun4i_backend_apply_color_correction(struct sunxi_engine *engine) 93 64 { ··· 192 215 { 193 216 struct drm_plane_state *state = plane->state; 194 217 struct drm_framebuffer *fb = state->fb; 195 - uint32_t format = fb->format->format; 218 + const struct drm_format_info *format = fb->format; 219 + const uint32_t fmt = format->format; 196 220 u32 val = SUN4I_BACKEND_IYUVCTL_EN; 197 221 int i; 198 222 ··· 211 233 SUN4I_BACKEND_ATTCTL_REG0_LAY_YUVEN); 212 234 213 235 /* TODO: Add support for the multi-planar YUV formats */ 214 - if (sun4i_backend_format_is_packed_yuv422(format)) 236 + if (format->num_planes == 1) 215 237 val |= SUN4I_BACKEND_IYUVCTL_FBFMT_PACKED_YUV422; 216 238 else 217 - DRM_DEBUG_DRIVER("Unsupported YUV format (0x%x)\n", format); 239 + DRM_DEBUG_DRIVER("Unsupported YUV format (0x%x)\n", fmt); 218 240 219 241 /* 220 242 * Allwinner seems to list the pixel sequence from right to left, while 221 243 * DRM lists it from left to right. 
222 244 */ 223 - switch (format) { 245 + switch (fmt) { 224 246 case DRM_FORMAT_YUYV: 225 247 val |= SUN4I_BACKEND_IYUVCTL_FBPS_VYUY; 226 248 break; ··· 235 257 break; 236 258 default: 237 259 DRM_DEBUG_DRIVER("Unsupported YUV pixel sequence (0x%x)\n", 238 - format); 260 + fmt); 239 261 } 240 262 241 263 regmap_write(backend->engine.regs, SUN4I_BACKEND_IYUVCTL_REG, val); ··· 435 457 struct drm_crtc_state *crtc_state) 436 458 { 437 459 struct drm_plane_state *plane_states[SUN4I_BACKEND_NUM_LAYERS] = { 0 }; 460 + struct sun4i_backend *backend = engine_to_sun4i_backend(engine); 438 461 struct drm_atomic_state *state = crtc_state->state; 439 462 struct drm_device *drm = state->dev; 440 463 struct drm_plane *plane; 441 464 unsigned int num_planes = 0; 442 465 unsigned int num_alpha_planes = 0; 443 466 unsigned int num_frontend_planes = 0; 467 + unsigned int num_alpha_planes_max = 1; 444 468 unsigned int num_yuv_planes = 0; 445 469 unsigned int current_pipe = 0; 446 470 unsigned int i; ··· 506 526 * the layer with the highest priority. 507 527 * 508 528 * The second step is the actual alpha blending, that takes 509 - * the two pipes as input, and uses the eventual alpha 529 + * the two pipes as input, and uses the potential alpha 510 530 * component to do the transparency between the two. 511 531 * 512 - * This two steps scenario makes us unable to guarantee a 532 + * This two-step scenario makes us unable to guarantee a 513 533 * robust alpha blending between the 4 layers in all 514 534 * situations, since this means that we need to have one layer 515 535 * with alpha at the lowest position of our two pipes. 516 536 * 517 - * However, we cannot even do that, since the hardware has a 518 - * bug where the lowest plane of the lowest pipe (pipe 0, 519 - * priority 0), if it has any alpha, will discard the pixel 520 - * entirely and just display the pixels in the background 521 - * color (black by default). 
537 + * However, we cannot even do that on every platform, since 538 + * the hardware has a bug where the lowest plane of the lowest 539 + * pipe (pipe 0, priority 0), if it has any alpha, will 540 + * discard the pixel data entirely and just display the pixels 541 + * in the background color (black by default). 522 542 * 523 - * This means that we effectively have only three valid 524 - * configurations with alpha, all of them with the alpha being 525 - * on pipe1 with the lowest position, which can be 1, 2 or 3 526 - * depending on the number of planes and their zpos. 543 + * This means that on the affected platforms, we effectively 544 + * have only three valid configurations with alpha, all of 545 + * them with the alpha being on pipe1 with the lowest 546 + * position, which can be 1, 2 or 3 depending on the number of 547 + * planes and their zpos. 527 548 */ 528 - if (num_alpha_planes > SUN4I_BACKEND_NUM_ALPHA_LAYERS) { 549 + 550 + /* For platforms that are not affected by the issue described above. */ 551 + if (backend->quirks->supports_lowest_plane_alpha) 552 + num_alpha_planes_max++; 553 + 554 + if (num_alpha_planes > num_alpha_planes_max) { 529 555 DRM_DEBUG_DRIVER("Too many planes with alpha, rejecting...\n"); 530 556 return -EINVAL; 531 557 } 532 558 533 559 /* We can't have an alpha plane at the lowest position */ 534 - if (plane_states[0]->fb->format->has_alpha || 535 - (plane_states[0]->alpha != DRM_BLEND_ALPHA_OPAQUE)) 560 + if (!backend->quirks->supports_lowest_plane_alpha && 561 + (plane_states[0]->fb->format->has_alpha || 562 + (plane_states[0]->alpha != DRM_BLEND_ALPHA_OPAQUE))) 536 563 return -EINVAL; 537 564 538 565 for (i = 1; i < num_planes; i++) { ··· 863 876 : SUN4I_BACKEND_MODCTL_OUT_LCD0)); 864 877 } 865 878 879 + backend->quirks = quirks; 880 + 866 881 return 0; 867 882 868 883 err_disable_ram_clk: ··· 924 935 925 936 static const struct sun4i_backend_quirks sun7i_backend_quirks = { 926 937 .needs_output_muxing = true, 938 + .supports_lowest_plane_alpha = true, 927 939 }; 928 940 929 941 static const struct sun4i_backend_quirks sun8i_a33_backend_quirks = { 942 + .supports_lowest_plane_alpha = true, 930 943 }; 931 944 932 945 static const struct sun4i_backend_quirks sun9i_backend_quirks = {
+2 -1
drivers/gpu/drm/sun4i/sun4i_backend.h
··· 167 167 #define SUN4I_BACKEND_PIPE_OFF(p) (0x5000 + (0x400 * (p))) 168 168 169 169 #define SUN4I_BACKEND_NUM_LAYERS 4 170 - #define SUN4I_BACKEND_NUM_ALPHA_LAYERS 1 171 170 #define SUN4I_BACKEND_NUM_FRONTEND_LAYERS 1 172 171 #define SUN4I_BACKEND_NUM_YUV_PLANES 1 173 172 ··· 186 187 /* Protects against races in the frontend teardown */ 187 188 spinlock_t frontend_lock; 188 189 bool frontend_teardown; 190 + 191 + const struct sun4i_backend_quirks *quirks; 189 192 }; 190 193 191 194 static inline struct sun4i_backend *
+2 -17
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 61 61 /* Frame Buffer Operations */ 62 62 }; 63 63 64 - static void sun4i_remove_framebuffers(void) 65 - { 66 - struct apertures_struct *ap; 67 - 68 - ap = alloc_apertures(1); 69 - if (!ap) 70 - return; 71 - 72 - /* The framebuffer can be located anywhere in RAM */ 73 - ap->ranges[0].base = 0; 74 - ap->ranges[0].size = ~0; 75 - 76 - drm_fb_helper_remove_conflicting_framebuffers(ap, "sun4i-drm-fb", false); 77 - kfree(ap); 78 - } 79 - 80 64 static int sun4i_drv_bind(struct device *dev) 81 65 { 82 66 struct drm_device *drm; ··· 103 119 drm->irq_enabled = true; 104 120 105 121 /* Remove early framebuffers (ie. simplefb) */ 106 - sun4i_remove_framebuffers(); 122 + drm_fb_helper_remove_conflicting_framebuffers(NULL, "sun4i-drm-fb", false); 107 123 108 124 /* Create our framebuffer */ 109 125 ret = sun4i_framebuffer_init(drm); ··· 405 421 { .compatible = "allwinner,sun8i-r40-display-engine" }, 406 422 { .compatible = "allwinner,sun8i-v3s-display-engine" }, 407 423 { .compatible = "allwinner,sun9i-a80-display-engine" }, 424 + { .compatible = "allwinner,sun50i-a64-display-engine" }, 408 425 { } 409 426 }; 410 427 MODULE_DEVICE_TABLE(of, sun4i_drv_of_table);
+1 -3
drivers/gpu/drm/sun4i/sun4i_layer.c
··· 35 35 36 36 state = kzalloc(sizeof(*state), GFP_KERNEL); 37 37 if (state) { 38 - plane->state = &state->state; 39 - plane->state->plane = plane; 40 - plane->state->alpha = DRM_BLEND_ALPHA_OPAQUE; 38 + __drm_atomic_helper_plane_reset(plane, &state->state); 41 39 plane->state->zpos = layer->id; 42 40 } 43 41 }
+117 -2
drivers/gpu/drm/sun4i/sun4i_tcon.c
··· 17 17 #include <drm/drm_encoder.h> 18 18 #include <drm/drm_modes.h> 19 19 #include <drm/drm_of.h> 20 + #include <drm/drm_panel.h> 20 21 21 22 #include <uapi/drm/drm_mode.h> 22 23 ··· 36 35 #include "sun4i_rgb.h" 37 36 #include "sun4i_tcon.h" 38 37 #include "sun6i_mipi_dsi.h" 38 + #include "sun8i_tcon_top.h" 39 39 #include "sunxi_engine.h" 40 40 41 41 static struct drm_connector *sun4i_tcon_get_connector(const struct drm_encoder *encoder) ··· 476 474 if (mode->flags & DRM_MODE_FLAG_PVSYNC) 477 475 val |= SUN4I_TCON0_IO_POL_VSYNC_POSITIVE; 478 476 477 + /* 478 + * On A20 and similar SoCs, the only way to achieve Positive Edge 479 + * (Rising Edge), is setting dclk clock phase to 2/3(240°). 480 + * By default TCON works in Negative Edge(Falling Edge), 481 + * this is why phase is set to 0 in that case. 482 + * Unfortunately there's no way to logically invert dclk through 483 + * IO_POL register. 484 + * The only acceptable way to work, triple checked with scope, 485 + * is using clock phase set to 0° for Negative Edge and set to 240° 486 + * for Positive Edge. 487 + * On A33 and similar SoCs there would be a 90° phase option, 488 + * but it divides also dclk by 2. 489 + * Following code is a way to avoid quirks all around TCON 490 + * and DOTCLOCK drivers. 
491 + */ 492 + if (!IS_ERR(tcon->panel)) { 493 + struct drm_panel *panel = tcon->panel; 494 + struct drm_connector *connector = panel->connector; 495 + struct drm_display_info display_info = connector->display_info; 496 + 497 + if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_POSEDGE) 498 + clk_set_phase(tcon->dclk, 240); 499 + 500 + if (display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_NEGEDGE) 501 + clk_set_phase(tcon->dclk, 0); 502 + } 503 + 479 504 regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG, 480 505 SUN4I_TCON0_IO_POL_HSYNC_POSITIVE | SUN4I_TCON0_IO_POL_VSYNC_POSITIVE, 481 506 val); ··· 909 880 return ERR_PTR(-EINVAL); 910 881 } 911 882 883 + static bool sun4i_tcon_connected_to_tcon_top(struct device_node *node) 884 + { 885 + struct device_node *remote; 886 + bool ret = false; 887 + 888 + remote = of_graph_get_remote_node(node, 0, -1); 889 + if (remote) { 890 + ret = !!of_match_node(sun8i_tcon_top_of_table, remote); 891 + of_node_put(remote); 892 + } 893 + 894 + return ret; 895 + } 896 + 897 + static int sun4i_tcon_get_index(struct sun4i_drv *drv) 898 + { 899 + struct list_head *pos; 900 + int size = 0; 901 + 902 + /* 903 + * Because TCON is added to the list at the end of the probe 904 + * (after this function is called), index of the current TCON 905 + * will be same as current TCON list size. 906 + */ 907 + list_for_each(pos, &drv->tcon_list) 908 + ++size; 909 + 910 + return size; 911 + } 912 + 912 913 /* 913 914 * On SoCs with the old display pipeline design (Display Engine 1.0), 914 915 * we assumed the TCON was always tied to just one backend. However ··· 987 928 * connections between the backend and TCON? 
988 929 */ 989 930 if (of_get_child_count(port) > 1) { 990 - /* Get our ID directly from an upstream endpoint */ 991 - int id = sun4i_tcon_of_get_id_from_port(port); 931 + int id; 932 + 933 + /* 934 + * When pipeline has the same number of TCONs and engines which 935 + * are represented by frontends/backends (DE1) or mixers (DE2), 936 + * we match them by their respective IDs. However, if pipeline 937 + * contains TCON TOP, chances are that there are either more 938 + * TCONs than engines (R40) or TCONs with non-consecutive ids. 939 + * (H6). In that case it's easier just use TCON index in list 940 + * as an id. That means that on R40, any 2 TCONs can be enabled 941 + * in DT out of 4 (there are 2 mixers). Due to the design of 942 + * TCON TOP, remaining 2 TCONs can't be connected to anything 943 + * anyway. 944 + */ 945 + if (sun4i_tcon_connected_to_tcon_top(node)) 946 + id = sun4i_tcon_get_index(drv); 947 + else 948 + id = sun4i_tcon_of_get_id_from_port(port); 992 949 993 950 /* Get our engine by matching our ID */ 994 951 engine = sun4i_tcon_get_engine_by_id(drv, id); ··· 1319 1244 return 0; 1320 1245 } 1321 1246 1247 + static int sun8i_r40_tcon_tv_set_mux(struct sun4i_tcon *tcon, 1248 + const struct drm_encoder *encoder) 1249 + { 1250 + struct device_node *port, *remote; 1251 + struct platform_device *pdev; 1252 + int id, ret; 1253 + 1254 + /* find TCON TOP platform device and TCON id */ 1255 + 1256 + port = of_graph_get_port_by_id(tcon->dev->of_node, 0); 1257 + if (!port) 1258 + return -EINVAL; 1259 + 1260 + id = sun4i_tcon_of_get_id_from_port(port); 1261 + of_node_put(port); 1262 + 1263 + remote = of_graph_get_remote_node(tcon->dev->of_node, 0, -1); 1264 + if (!remote) 1265 + return -EINVAL; 1266 + 1267 + pdev = of_find_device_by_node(remote); 1268 + of_node_put(remote); 1269 + if (!pdev) 1270 + return -EINVAL; 1271 + 1272 + if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS) { 1273 + ret = sun8i_tcon_top_set_hdmi_src(&pdev->dev, id); 1274 + if (ret) 1275 + 
return ret; 1276 + } 1277 + 1278 + return sun8i_tcon_top_de_config(&pdev->dev, tcon->id, id); 1279 + } 1280 + 1322 1281 static const struct sun4i_tcon_quirks sun4i_a10_quirks = { 1323 1282 .has_channel_0 = true, 1324 1283 .has_channel_1 = true, ··· 1400 1291 .has_channel_1 = true, 1401 1292 }; 1402 1293 1294 + static const struct sun4i_tcon_quirks sun8i_r40_tv_quirks = { 1295 + .has_channel_1 = true, 1296 + .set_mux = sun8i_r40_tcon_tv_set_mux, 1297 + }; 1298 + 1403 1299 static const struct sun4i_tcon_quirks sun8i_v3s_quirks = { 1404 1300 .has_channel_0 = true, 1405 1301 }; ··· 1429 1315 { .compatible = "allwinner,sun8i-a33-tcon", .data = &sun8i_a33_quirks }, 1430 1316 { .compatible = "allwinner,sun8i-a83t-tcon-lcd", .data = &sun8i_a83t_lcd_quirks }, 1431 1317 { .compatible = "allwinner,sun8i-a83t-tcon-tv", .data = &sun8i_a83t_tv_quirks }, 1318 + { .compatible = "allwinner,sun8i-r40-tcon-tv", .data = &sun8i_r40_tv_quirks }, 1432 1319 { .compatible = "allwinner,sun8i-v3s-tcon", .data = &sun8i_v3s_quirks }, 1433 1320 { .compatible = "allwinner,sun9i-a80-tcon-lcd", .data = &sun9i_a80_tcon_lcd_quirks }, 1434 1321 { .compatible = "allwinner,sun9i-a80-tcon-tv", .data = &sun9i_a80_tcon_tv_quirks },
+16 -1
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
··· 125 125 return PTR_ERR(hdmi->clk_tmds); 126 126 } 127 127 128 + hdmi->regulator = devm_regulator_get(dev, "hvcc"); 129 + if (IS_ERR(hdmi->regulator)) { 130 + dev_err(dev, "Couldn't get regulator\n"); 131 + return PTR_ERR(hdmi->regulator); 132 + } 133 + 134 + ret = regulator_enable(hdmi->regulator); 135 + if (ret) { 136 + dev_err(dev, "Failed to enable regulator\n"); 137 + return ret; 138 + } 139 + 128 140 ret = reset_control_deassert(hdmi->rst_ctrl); 129 141 if (ret) { 130 142 dev_err(dev, "Could not deassert ctrl reset control\n"); 131 - return ret; 143 + goto err_disable_regulator; 132 144 } 133 145 134 146 ret = clk_prepare_enable(hdmi->clk_tmds); ··· 195 183 clk_disable_unprepare(hdmi->clk_tmds); 196 184 err_assert_ctrl_reset: 197 185 reset_control_assert(hdmi->rst_ctrl); 186 + err_disable_regulator: 187 + regulator_disable(hdmi->regulator); 198 188 199 189 return ret; 200 190 } ··· 210 196 sun8i_hdmi_phy_remove(hdmi); 211 197 clk_disable_unprepare(hdmi->clk_tmds); 212 198 reset_control_assert(hdmi->rst_ctrl); 199 + regulator_disable(hdmi->regulator); 213 200 } 214 201 215 202 static const struct component_ops sun8i_dw_hdmi_ops = {
+2
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
··· 10 10 #include <drm/drm_encoder.h> 11 11 #include <linux/clk.h> 12 12 #include <linux/regmap.h> 13 + #include <linux/regulator/consumer.h> 13 14 #include <linux/reset.h> 14 15 15 16 #define SUN8I_HDMI_PHY_DBG_CTRL_REG 0x0000 ··· 177 176 struct drm_encoder encoder; 178 177 struct sun8i_hdmi_phy *phy; 179 178 struct dw_hdmi_plat_data plat_data; 179 + struct regulator *regulator; 180 180 struct reset_control *rst_ctrl; 181 181 }; 182 182
+24
drivers/gpu/drm/sun4i/sun8i_mixer.c
··· 569 569 .mod_rate = 150000000, 570 570 }; 571 571 572 + static const struct sun8i_mixer_cfg sun50i_a64_mixer0_cfg = { 573 + .ccsc = 0, 574 + .mod_rate = 297000000, 575 + .scaler_mask = 0xf, 576 + .ui_num = 3, 577 + .vi_num = 1, 578 + }; 579 + 580 + static const struct sun8i_mixer_cfg sun50i_a64_mixer1_cfg = { 581 + .ccsc = 1, 582 + .mod_rate = 297000000, 583 + .scaler_mask = 0x3, 584 + .ui_num = 1, 585 + .vi_num = 1, 586 + }; 587 + 572 588 static const struct of_device_id sun8i_mixer_of_table[] = { 573 589 { 574 590 .compatible = "allwinner,sun8i-a83t-de2-mixer-0", ··· 609 593 { 610 594 .compatible = "allwinner,sun8i-v3s-de2-mixer", 611 595 .data = &sun8i_v3s_mixer_cfg, 596 + }, 597 + { 598 + .compatible = "allwinner,sun50i-a64-de2-mixer-0", 599 + .data = &sun50i_a64_mixer0_cfg, 600 + }, 601 + { 602 + .compatible = "allwinner,sun50i-a64-de2-mixer-1", 603 + .data = &sun50i_a64_mixer1_cfg, 612 604 }, 613 605 { } 614 606 };
+1 -2
drivers/gpu/drm/sun4i/sun8i_tcon_top.c
··· 129 129 if (!tcon_top) 130 130 return -ENOMEM; 131 131 132 - clk_data = devm_kzalloc(dev, sizeof(*clk_data) + 133 - sizeof(*clk_data->hws) * CLK_NUM, 132 + clk_data = devm_kzalloc(dev, struct_size(clk_data, hws, CLK_NUM), 134 133 GFP_KERNEL); 135 134 if (!clk_data) 136 135 return -ENOMEM;
+4
drivers/gpu/drm/tegra/drm.c
··· 1187 1187 1188 1188 dev_set_drvdata(&dev->dev, drm); 1189 1189 1190 + err = drm_fb_helper_remove_conflicting_framebuffers(NULL, "tegradrmfb", false); 1191 + if (err < 0) 1192 + goto unref; 1193 + 1190 1194 err = drm_dev_register(drm, 0); 1191 1195 if (err < 0) 1192 1196 goto unref;
+3 -3
drivers/gpu/drm/tinydrm/core/tinydrm-core.c
··· 135 135 /* 136 136 * We don't embed drm_device, because that prevent us from using 137 137 * devm_kzalloc() to allocate tinydrm_device in the driver since 138 - * drm_dev_unref() frees the structure. The devm_ functions provide 138 + * drm_dev_put() frees the structure. The devm_ functions provide 139 139 * for easy error handling. 140 140 */ 141 141 drm = drm_dev_alloc(driver, parent); ··· 155 155 drm_mode_config_cleanup(tdev->drm); 156 156 mutex_destroy(&tdev->dirty_lock); 157 157 tdev->drm->dev_private = NULL; 158 - drm_dev_unref(tdev->drm); 158 + drm_dev_put(tdev->drm); 159 159 } 160 160 161 161 static void devm_tinydrm_release(void *data) ··· 172 172 * 173 173 * This function initializes @tdev, the underlying DRM device and it's 174 174 * mode_config. Resources will be automatically freed on driver detach (devres) 175 - * using drm_mode_config_cleanup() and drm_dev_unref(). 175 + * using drm_mode_config_cleanup() and drm_dev_put(). 176 176 * 177 177 * Returns: 178 178 * Zero on success, negative error code on failure.
+1 -19
drivers/gpu/drm/vc4/vc4_drv.c
··· 248 248 } 249 249 } 250 250 251 - static void vc4_kick_out_firmware_fb(void) 252 - { 253 - struct apertures_struct *ap; 254 - 255 - ap = alloc_apertures(1); 256 - if (!ap) 257 - return; 258 - 259 - /* Since VC4 is a UMA device, the simplefb node may have been 260 - * located anywhere in memory. 261 - */ 262 - ap->ranges[0].base = 0; 263 - ap->ranges[0].size = ~0; 264 - 265 - drm_fb_helper_remove_conflicting_framebuffers(ap, "vc4drmfb", false); 266 - kfree(ap); 267 - } 268 - 269 251 static int vc4_drm_bind(struct device *dev) 270 252 { 271 253 struct platform_device *pdev = to_platform_device(dev); ··· 280 298 if (ret) 281 299 goto gem_destroy; 282 300 283 - vc4_kick_out_firmware_fb(); 301 + drm_fb_helper_remove_conflicting_framebuffers(NULL, "vc4drmfb", false); 284 302 285 303 ret = drm_dev_register(drm, 0); 286 304 if (ret < 0)
+1 -3
drivers/gpu/drm/vc4/vc4_plane.c
··· 200 200 if (!vc4_state) 201 201 return; 202 202 203 - plane->state = &vc4_state->base; 204 - plane->state->alpha = DRM_BLEND_ALPHA_OPAQUE; 205 - vc4_state->base.plane = plane; 203 + __drm_atomic_helper_plane_reset(plane, &vc4_state->base); 206 204 } 207 205 208 206 static void vc4_dlist_write(struct vc4_plane_state *vc4_state, u32 val)
+1 -1
drivers/gpu/drm/vgem/vgem_drv.c
··· 504 504 static void __exit vgem_exit(void) 505 505 { 506 506 drm_dev_unregister(&vgem_device->drm); 507 - drm_dev_unref(&vgem_device->drm); 507 + drm_dev_put(&vgem_device->drm); 508 508 } 509 509 510 510 module_init(vgem_init);
-13
drivers/gpu/drm/vgem/vgem_fence.c
··· 43 43 return "unbound"; 44 44 } 45 45 46 - static bool vgem_fence_signaled(struct dma_fence *fence) 47 - { 48 - return false; 49 - } 50 - 51 - static bool vgem_fence_enable_signaling(struct dma_fence *fence) 52 - { 53 - return true; 54 - } 55 - 56 46 static void vgem_fence_release(struct dma_fence *base) 57 47 { 58 48 struct vgem_fence *fence = container_of(base, typeof(*fence), base); ··· 66 76 static const struct dma_fence_ops vgem_fence_ops = { 67 77 .get_driver_name = vgem_fence_get_driver_name, 68 78 .get_timeline_name = vgem_fence_get_timeline_name, 69 - .enable_signaling = vgem_fence_enable_signaling, 70 - .signaled = vgem_fence_signaled, 71 - .wait = dma_fence_default_wait, 72 79 .release = vgem_fence_release, 73 80 74 81 .fence_value_str = vgem_fence_value_str,
+4
drivers/gpu/drm/virtio/virtgpu_display.c
··· 109 109 static void virtio_gpu_crtc_atomic_enable(struct drm_crtc *crtc, 110 110 struct drm_crtc_state *old_state) 111 111 { 112 + struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc); 113 + 114 + output->enabled = true; 112 115 } 113 116 114 117 static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc, ··· 122 119 struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc); 123 120 124 121 virtio_gpu_cmd_set_scanout(vgdev, output->index, 0, 0, 0, 0, 0); 122 + output->enabled = false; 125 123 } 126 124 127 125 static int virtio_gpu_crtc_atomic_check(struct drm_crtc *crtc,
+4 -22
drivers/gpu/drm/virtio/virtgpu_drm_bus.c
··· 28 28 29 29 #include "virtgpu_drv.h" 30 30 31 - static void virtio_pci_kick_out_firmware_fb(struct pci_dev *pci_dev) 32 - { 33 - struct apertures_struct *ap; 34 - bool primary; 35 - 36 - ap = alloc_apertures(1); 37 - if (!ap) 38 - return; 39 - 40 - ap->ranges[0].base = pci_resource_start(pci_dev, 0); 41 - ap->ranges[0].size = pci_resource_len(pci_dev, 0); 42 - 43 - primary = pci_dev->resource[PCI_ROM_RESOURCE].flags 44 - & IORESOURCE_ROM_SHADOW; 45 - 46 - drm_fb_helper_remove_conflicting_framebuffers(ap, "virtiodrmfb", primary); 47 - 48 - kfree(ap); 49 - } 50 - 51 31 int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev) 52 32 { 53 33 struct drm_device *dev; ··· 49 69 pname); 50 70 dev->pdev = pdev; 51 71 if (vga) 52 - virtio_pci_kick_out_firmware_fb(pdev); 72 + drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 73 + 0, 74 + "virtiodrmfb"); 53 75 54 76 snprintf(unique, sizeof(unique), "pci:%s", pname); 55 77 ret = drm_dev_set_unique(dev, unique); ··· 67 85 return 0; 68 86 69 87 err_free: 70 - drm_dev_unref(dev); 88 + drm_dev_put(dev); 71 89 return ret; 72 90 }
+7 -6
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 57 57 uint32_t hw_res_handle; 58 58 59 59 struct sg_table *pages; 60 + uint32_t mapped; 60 61 void *vmap; 61 62 bool dumb; 62 63 struct ttm_place placement_code; ··· 115 114 struct virtio_gpu_update_cursor cursor; 116 115 int cur_x; 117 116 int cur_y; 117 + bool enabled; 118 118 }; 119 119 #define drm_crtc_to_virtio_gpu_output(x) \ 120 120 container_of(x, struct virtio_gpu_output, crtc) ··· 278 276 struct virtio_gpu_object *obj, 279 277 uint32_t resource_id, 280 278 struct virtio_gpu_fence **fence); 279 + void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, 280 + struct virtio_gpu_object *obj); 281 281 int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); 282 282 int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); 283 283 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, 284 284 struct virtio_gpu_output *output); 285 285 int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev); 286 - void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev, 287 - uint32_t resource_id); 288 286 int virtio_gpu_cmd_get_capset_info(struct virtio_gpu_device *vgdev, int idx); 289 287 int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev, 290 288 int idx, int version, ··· 374 372 static inline struct virtio_gpu_object* 375 373 virtio_gpu_object_ref(struct virtio_gpu_object *bo) 376 374 { 377 - ttm_bo_reference(&bo->tbo); 375 + ttm_bo_get(&bo->tbo); 378 376 return bo; 379 377 } 380 378 ··· 385 383 if ((*bo) == NULL) 386 384 return; 387 385 tbo = &((*bo)->tbo); 388 - ttm_bo_unref(&tbo); 389 - if (tbo == NULL) 390 - *bo = NULL; 386 + ttm_bo_put(tbo); 387 + *bo = NULL; 391 388 } 392 389 393 390 static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
+1 -1
drivers/gpu/drm/virtio/virtgpu_fb.c
··· 291 291 return 0; 292 292 293 293 err_fb_alloc: 294 - virtio_gpu_cmd_resource_inval_backing(vgdev, resid); 294 + virtio_gpu_object_detach(vgdev, obj); 295 295 err_obj_attach: 296 296 err_obj_vmap: 297 297 virtio_gpu_gem_free_object(&obj->gem_base);
+1 -1
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 152 152 if (WARN_ON(!output)) 153 153 return; 154 154 155 - if (plane->state->fb) { 155 + if (plane->state->fb && output->enabled) { 156 156 vgfb = to_virtio_gpu_framebuffer(plane->state->fb); 157 157 bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); 158 158 handle = bo->hw_res_handle;
+2 -37
drivers/gpu/drm/virtio/virtgpu_ttm.c
··· 106 106 } 107 107 } 108 108 109 - #if 0 110 - /* 111 - * Hmm, seems to not do anything useful. Leftover debug hack? 112 - * Something like printing pagefaults to kernel log? 113 - */ 114 - static struct vm_operations_struct virtio_gpu_ttm_vm_ops; 115 - static const struct vm_operations_struct *ttm_vm_ops; 116 - 117 - static int virtio_gpu_ttm_fault(struct vm_fault *vmf) 118 - { 119 - struct ttm_buffer_object *bo; 120 - struct virtio_gpu_device *vgdev; 121 - int r; 122 - 123 - bo = (struct ttm_buffer_object *)vmf->vma->vm_private_data; 124 - if (bo == NULL) 125 - return VM_FAULT_NOPAGE; 126 - vgdev = virtio_gpu_get_vgdev(bo->bdev); 127 - r = ttm_vm_ops->fault(vmf); 128 - return r; 129 - } 130 - #endif 131 - 132 109 int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma) 133 110 { 134 111 struct drm_file *file_priv; ··· 120 143 return -EINVAL; 121 144 } 122 145 r = ttm_bo_mmap(filp, vma, &vgdev->mman.bdev); 123 - #if 0 124 - if (unlikely(r != 0)) 125 - return r; 126 - if (unlikely(ttm_vm_ops == NULL)) { 127 - ttm_vm_ops = vma->vm_ops; 128 - virtio_gpu_ttm_vm_ops = *ttm_vm_ops; 129 - virtio_gpu_ttm_vm_ops.fault = &virtio_gpu_ttm_fault; 130 - } 131 - vma->vm_ops = &virtio_gpu_ttm_vm_ops; 132 - return 0; 133 - #else 146 + 134 147 return r; 135 - #endif 136 148 } 137 149 138 150 static int virtio_gpu_invalidate_caches(struct ttm_bo_device *bdev, ··· 343 377 344 378 if (!new_mem || (new_mem->placement & TTM_PL_FLAG_SYSTEM)) { 345 379 if (bo->hw_res_handle) 346 - virtio_gpu_cmd_resource_inval_backing(vgdev, 347 - bo->hw_res_handle); 380 + virtio_gpu_object_detach(vgdev, bo); 348 381 349 382 } else if (new_mem->placement & TTM_PL_FLAG_TT) { 350 383 if (bo->hw_res_handle) {
+46 -11
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 423 423 virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); 424 424 } 425 425 426 - void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev, 427 - uint32_t resource_id) 426 + static void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev, 427 + uint32_t resource_id, 428 + struct virtio_gpu_fence **fence) 428 429 { 429 430 struct virtio_gpu_resource_detach_backing *cmd_p; 430 431 struct virtio_gpu_vbuffer *vbuf; ··· 436 435 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); 437 436 cmd_p->resource_id = cpu_to_le32(resource_id); 438 437 439 - virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); 438 + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence); 440 439 } 441 440 442 441 void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, ··· 649 648 { 650 649 struct virtio_gpu_get_capset *cmd_p; 651 650 struct virtio_gpu_vbuffer *vbuf; 652 - int max_size = vgdev->capsets[idx].max_size; 651 + int max_size; 653 652 struct virtio_gpu_drv_cap_cache *cache_ent; 654 653 void *resp_buf; 655 654 656 - if (idx > vgdev->num_capsets) 655 + if (idx >= vgdev->num_capsets) 657 656 return -EINVAL; 658 657 659 658 if (version > vgdev->capsets[idx].max_version) ··· 663 662 if (!cache_ent) 664 663 return -ENOMEM; 665 664 665 + max_size = vgdev->capsets[idx].max_size; 666 666 cache_ent->caps_cache = kmalloc(max_size, GFP_KERNEL); 667 667 if (!cache_ent->caps_cache) { 668 668 kfree(cache_ent); ··· 850 848 uint32_t resource_id, 851 849 struct virtio_gpu_fence **fence) 852 850 { 851 + bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); 853 852 struct virtio_gpu_mem_entry *ents; 854 853 struct scatterlist *sg; 855 - int si; 854 + int si, nents; 856 855 857 856 if (!obj->pages) { 858 857 int ret; ··· 863 860 return ret; 864 861 } 865 862 863 + if (use_dma_api) { 864 + obj->mapped = dma_map_sg(vgdev->vdev->dev.parent, 865 + obj->pages->sgl, obj->pages->nents, 866 + DMA_TO_DEVICE); 867 + nents = obj->mapped; 868 + } else { 
869 + nents = obj->pages->nents; 870 + } 871 + 866 872 /* gets freed when the ring has consumed it */ 867 - ents = kmalloc_array(obj->pages->nents, 868 - sizeof(struct virtio_gpu_mem_entry), 873 + ents = kmalloc_array(nents, sizeof(struct virtio_gpu_mem_entry), 869 874 GFP_KERNEL); 870 875 if (!ents) { 871 876 DRM_ERROR("failed to allocate ent list\n"); 872 877 return -ENOMEM; 873 878 } 874 879 875 - for_each_sg(obj->pages->sgl, sg, obj->pages->nents, si) { 876 - ents[si].addr = cpu_to_le64(sg_phys(sg)); 880 + for_each_sg(obj->pages->sgl, sg, nents, si) { 881 + ents[si].addr = cpu_to_le64(use_dma_api 882 + ? sg_dma_address(sg) 883 + : sg_phys(sg)); 877 884 ents[si].length = cpu_to_le32(sg->length); 878 885 ents[si].padding = 0; 879 886 } 880 887 881 888 virtio_gpu_cmd_resource_attach_backing(vgdev, resource_id, 882 - ents, obj->pages->nents, 889 + ents, nents, 883 890 fence); 884 891 obj->hw_res_handle = resource_id; 885 892 return 0; 893 + } 894 + 895 + void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, 896 + struct virtio_gpu_object *obj) 897 + { 898 + bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); 899 + struct virtio_gpu_fence *fence; 900 + 901 + if (use_dma_api && obj->mapped) { 902 + /* detach backing and wait for the host process it ... */ 903 + virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, &fence); 904 + dma_fence_wait(&fence->f, true); 905 + dma_fence_put(&fence->f); 906 + 907 + /* ... then tear down iommu mappings */ 908 + dma_unmap_sg(vgdev->vdev->dev.parent, 909 + obj->pages->sgl, obj->mapped, 910 + DMA_TO_DEVICE); 911 + obj->mapped = 0; 912 + } else { 913 + virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL); 914 + } 886 915 } 887 916 888 917 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
+1 -1
drivers/gpu/drm/vkms/Makefile
··· 1 - vkms-y := vkms_drv.o vkms_plane.o vkms_output.o vkms_crtc.o vkms_gem.o 1 + vkms-y := vkms_drv.o vkms_plane.o vkms_output.o vkms_crtc.o vkms_gem.o vkms_crc.o 2 2 3 3 obj-$(CONFIG_DRM_VKMS) += vkms.o
+153
drivers/gpu/drm/vkms/vkms_crc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include "vkms_drv.h" 3 + #include <linux/crc32.h> 4 + #include <drm/drm_gem_framebuffer_helper.h> 5 + 6 + static uint32_t _vkms_get_crc(struct vkms_crc_data *crc_data) 7 + { 8 + struct drm_framebuffer *fb = &crc_data->fb; 9 + struct drm_gem_object *gem_obj = drm_gem_fb_get_obj(fb, 0); 10 + struct vkms_gem_object *vkms_obj = drm_gem_to_vkms_gem(gem_obj); 11 + u32 crc = 0; 12 + int i = 0; 13 + unsigned int x = crc_data->src.x1 >> 16; 14 + unsigned int y = crc_data->src.y1 >> 16; 15 + unsigned int height = drm_rect_height(&crc_data->src) >> 16; 16 + unsigned int width = drm_rect_width(&crc_data->src) >> 16; 17 + unsigned int cpp = fb->format->cpp[0]; 18 + unsigned int src_offset; 19 + unsigned int size_byte = width * cpp; 20 + void *vaddr; 21 + 22 + mutex_lock(&vkms_obj->pages_lock); 23 + vaddr = vkms_obj->vaddr; 24 + if (WARN_ON(!vaddr)) 25 + goto out; 26 + 27 + for (i = y; i < y + height; i++) { 28 + src_offset = fb->offsets[0] + (i * fb->pitches[0]) + (x * cpp); 29 + crc = crc32_le(crc, vaddr + src_offset, size_byte); 30 + } 31 + 32 + out: 33 + mutex_unlock(&vkms_obj->pages_lock); 34 + return crc; 35 + } 36 + 37 + /** 38 + * vkms_crc_work_handle - ordered work_struct to compute CRC 39 + * 40 + * @work: work_struct 41 + * 42 + * Work handler for computing CRCs. work_struct scheduled in 43 + * an ordered workqueue that's periodically scheduled to run by 44 + * _vblank_handle() and flushed at vkms_atomic_crtc_destroy_state(). 
45 + */ 46 + void vkms_crc_work_handle(struct work_struct *work) 47 + { 48 + struct vkms_crtc_state *crtc_state = container_of(work, 49 + struct vkms_crtc_state, 50 + crc_work); 51 + struct drm_crtc *crtc = crtc_state->base.crtc; 52 + struct vkms_output *out = drm_crtc_to_vkms_output(crtc); 53 + struct vkms_device *vdev = container_of(out, struct vkms_device, 54 + output); 55 + struct vkms_crc_data *primary_crc = NULL; 56 + struct drm_plane *plane; 57 + u32 crc32 = 0; 58 + u64 frame_start, frame_end; 59 + unsigned long flags; 60 + 61 + spin_lock_irqsave(&out->state_lock, flags); 62 + frame_start = crtc_state->frame_start; 63 + frame_end = crtc_state->frame_end; 64 + spin_unlock_irqrestore(&out->state_lock, flags); 65 + 66 + /* _vblank_handle() hasn't updated frame_start yet */ 67 + if (!frame_start || frame_start == frame_end) 68 + goto out; 69 + 70 + drm_for_each_plane(plane, &vdev->drm) { 71 + struct vkms_plane_state *vplane_state; 72 + struct vkms_crc_data *crc_data; 73 + 74 + vplane_state = to_vkms_plane_state(plane->state); 75 + crc_data = vplane_state->crc_data; 76 + 77 + if (drm_framebuffer_read_refcount(&crc_data->fb) == 0) 78 + continue; 79 + 80 + if (plane->type == DRM_PLANE_TYPE_PRIMARY) { 81 + primary_crc = crc_data; 82 + break; 83 + } 84 + } 85 + 86 + if (primary_crc) 87 + crc32 = _vkms_get_crc(primary_crc); 88 + 89 + frame_end = drm_crtc_accurate_vblank_count(crtc); 90 + 91 + /* queue_work can fail to schedule crc_work; add crc for 92 + * missing frames 93 + */ 94 + while (frame_start <= frame_end) 95 + drm_crtc_add_crc_entry(crtc, true, frame_start++, &crc32); 96 + 97 + out: 98 + /* to avoid using the same value for frame number again */ 99 + spin_lock_irqsave(&out->state_lock, flags); 100 + crtc_state->frame_end = frame_end; 101 + crtc_state->frame_start = 0; 102 + spin_unlock_irqrestore(&out->state_lock, flags); 103 + } 104 + 105 + static int vkms_crc_parse_source(const char *src_name, bool *enabled) 106 + { 107 + int ret = 0; 108 + 109 + if 
(!src_name) { 110 + *enabled = false; 111 + } else if (strcmp(src_name, "auto") == 0) { 112 + *enabled = true; 113 + } else { 114 + *enabled = false; 115 + ret = -EINVAL; 116 + } 117 + 118 + return ret; 119 + } 120 + 121 + int vkms_verify_crc_source(struct drm_crtc *crtc, const char *src_name, 122 + size_t *values_cnt) 123 + { 124 + bool enabled; 125 + 126 + if (vkms_crc_parse_source(src_name, &enabled) < 0) { 127 + DRM_DEBUG_DRIVER("unknown source %s\n", src_name); 128 + return -EINVAL; 129 + } 130 + 131 + *values_cnt = 1; 132 + 133 + return 0; 134 + } 135 + 136 + int vkms_set_crc_source(struct drm_crtc *crtc, const char *src_name) 137 + { 138 + struct vkms_output *out = drm_crtc_to_vkms_output(crtc); 139 + bool enabled = false; 140 + unsigned long flags; 141 + int ret = 0; 142 + 143 + ret = vkms_crc_parse_source(src_name, &enabled); 144 + 145 + /* make sure nothing is scheduled on crtc workq */ 146 + flush_workqueue(out->crc_workq); 147 + 148 + spin_lock_irqsave(&out->lock, flags); 149 + out->crc_enabled = enabled; 150 + spin_unlock_irqrestore(&out->lock, flags); 151 + 152 + return ret; 153 + }
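The `_vkms_get_crc()` helper above walks the visible rectangle one scanline at a time, feeding `width * cpp` bytes at each row's `src_offset` into `crc32_le()`. A minimal user-space sketch of the same walk (a bitwise CRC32 stands in for the kernel's `crc32_le()`, and the framebuffer parameters used by callers are made up for illustration):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC32 (reflected, poly 0xEDB88320) as a stand-in for the
 * kernel's crc32_le(); seed/finalization conventions may differ from
 * the kernel helper -- the row walk is the point here. */
static uint32_t crc32_update(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
	}
	return crc;
}

/* Hash only the cropped source rectangle of a linear framebuffer,
 * mirroring the src_offset arithmetic in _vkms_get_crc(). */
static uint32_t fb_crop_crc(const uint8_t *vaddr, unsigned int pitch,
			    unsigned int cpp, unsigned int x, unsigned int y,
			    unsigned int w, unsigned int h)
{
	uint32_t crc = 0;

	for (unsigned int i = y; i < y + h; i++) {
		size_t src_offset = (size_t)i * pitch + (size_t)x * cpp;

		crc = crc32_update(crc, vaddr + src_offset, (size_t)w * cpp);
	}
	return crc;
}
```

Hashing only the crop is what lets the debugfs CRC file track just the scanned-out region of the framebuffer.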
+108 -8
drivers/gpu/drm/vkms/vkms_crtc.c
··· 10 10 #include <drm/drm_atomic_helper.h> 11 11 #include <drm/drm_crtc_helper.h> 12 12 13 + static void _vblank_handle(struct vkms_output *output) 14 + { 15 + struct drm_crtc *crtc = &output->crtc; 16 + struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state); 17 + bool ret; 18 + 19 + spin_lock(&output->lock); 20 + ret = drm_crtc_handle_vblank(crtc); 21 + if (!ret) 22 + DRM_ERROR("vkms failure on handling vblank"); 23 + 24 + if (state && output->crc_enabled) { 25 + u64 frame = drm_crtc_accurate_vblank_count(crtc); 26 + 27 + /* update frame_start only if a queued vkms_crc_work_handle() 28 + * has read the data 29 + */ 30 + spin_lock(&output->state_lock); 31 + if (!state->frame_start) 32 + state->frame_start = frame; 33 + spin_unlock(&output->state_lock); 34 + 35 + ret = queue_work(output->crc_workq, &state->crc_work); 36 + if (!ret) 37 + DRM_WARN("failed to queue vkms_crc_work_handle"); 38 + } 39 + 40 + spin_unlock(&output->lock); 41 + } 42 + 13 43 static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer) 14 44 { 15 45 struct vkms_output *output = container_of(timer, struct vkms_output, 16 46 vblank_hrtimer); 17 - struct drm_crtc *crtc = &output->crtc; 18 47 int ret_overrun; 19 - bool ret; 20 48 21 - ret = drm_crtc_handle_vblank(crtc); 22 - if (!ret) 23 - DRM_ERROR("vkms failure on handling vblank"); 49 + _vblank_handle(output); 24 50 25 51 ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer, 26 52 output->period_ns); ··· 90 64 return true; 91 65 } 92 66 67 + static void vkms_atomic_crtc_reset(struct drm_crtc *crtc) 68 + { 69 + struct vkms_crtc_state *vkms_state = NULL; 70 + 71 + if (crtc->state) { 72 + vkms_state = to_vkms_crtc_state(crtc->state); 73 + __drm_atomic_helper_crtc_destroy_state(crtc->state); 74 + kfree(vkms_state); 75 + crtc->state = NULL; 76 + } 77 + 78 + vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL); 79 + if (!vkms_state) 80 + return; 81 + 82 + crtc->state = &vkms_state->base; 83 + crtc->state->crtc = crtc; 84 + 
} 85 + 86 + static struct drm_crtc_state * 87 + vkms_atomic_crtc_duplicate_state(struct drm_crtc *crtc) 88 + { 89 + struct vkms_crtc_state *vkms_state; 90 + 91 + if (WARN_ON(!crtc->state)) 92 + return NULL; 93 + 94 + vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL); 95 + if (!vkms_state) 96 + return NULL; 97 + 98 + __drm_atomic_helper_crtc_duplicate_state(crtc, &vkms_state->base); 99 + 100 + INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle); 101 + 102 + return &vkms_state->base; 103 + } 104 + 105 + static void vkms_atomic_crtc_destroy_state(struct drm_crtc *crtc, 106 + struct drm_crtc_state *state) 107 + { 108 + struct vkms_crtc_state *vkms_state = to_vkms_crtc_state(state); 109 + 110 + __drm_atomic_helper_crtc_destroy_state(state); 111 + 112 + if (vkms_state) { 113 + flush_work(&vkms_state->crc_work); 114 + kfree(vkms_state); 115 + } 116 + } 117 + 93 118 static const struct drm_crtc_funcs vkms_crtc_funcs = { 94 119 .set_config = drm_atomic_helper_set_config, 95 120 .destroy = drm_crtc_cleanup, 96 121 .page_flip = drm_atomic_helper_page_flip, 97 - .reset = drm_atomic_helper_crtc_reset, 98 - .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, 99 - .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 122 + .reset = vkms_atomic_crtc_reset, 123 + .atomic_duplicate_state = vkms_atomic_crtc_duplicate_state, 124 + .atomic_destroy_state = vkms_atomic_crtc_destroy_state, 100 125 .enable_vblank = vkms_enable_vblank, 101 126 .disable_vblank = vkms_disable_vblank, 127 + .set_crc_source = vkms_set_crc_source, 128 + .verify_crc_source = vkms_verify_crc_source, 102 129 }; 103 130 104 131 static void vkms_crtc_atomic_enable(struct drm_crtc *crtc, ··· 166 87 drm_crtc_vblank_off(crtc); 167 88 } 168 89 90 + static void vkms_crtc_atomic_begin(struct drm_crtc *crtc, 91 + struct drm_crtc_state *old_crtc_state) 92 + { 93 + struct vkms_output *vkms_output = drm_crtc_to_vkms_output(crtc); 94 + 95 + /* This lock is held across the atomic commit to block 
vblank timer 96 + * from scheduling vkms_crc_work_handle until the crc_data is updated 97 + */ 98 + spin_lock_irq(&vkms_output->lock); 99 + } 100 + 169 101 static void vkms_crtc_atomic_flush(struct drm_crtc *crtc, 170 102 struct drm_crtc_state *old_crtc_state) 171 103 { 104 + struct vkms_output *vkms_output = drm_crtc_to_vkms_output(crtc); 172 105 unsigned long flags; 173 106 174 107 if (crtc->state->event) { ··· 195 104 196 105 crtc->state->event = NULL; 197 106 } 107 + 108 + spin_unlock_irq(&vkms_output->lock); 198 109 } 199 110 200 111 static const struct drm_crtc_helper_funcs vkms_crtc_helper_funcs = { 112 + .atomic_begin = vkms_crtc_atomic_begin, 201 113 .atomic_flush = vkms_crtc_atomic_flush, 202 114 .atomic_enable = vkms_crtc_atomic_enable, 203 115 .atomic_disable = vkms_crtc_atomic_disable, ··· 209 115 int vkms_crtc_init(struct drm_device *dev, struct drm_crtc *crtc, 210 116 struct drm_plane *primary, struct drm_plane *cursor) 211 117 { 118 + struct vkms_output *vkms_out = drm_crtc_to_vkms_output(crtc); 212 119 int ret; 213 120 214 121 ret = drm_crtc_init_with_planes(dev, crtc, primary, cursor, ··· 220 125 } 221 126 222 127 drm_crtc_helper_add(crtc, &vkms_crtc_helper_funcs); 128 + 129 + spin_lock_init(&vkms_out->lock); 130 + spin_lock_init(&vkms_out->state_lock); 131 + 132 + vkms_out->crc_workq = alloc_ordered_workqueue("vkms_crc_workq", 0); 223 133 224 134 return ret; 225 135 }
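The `frame_start`/`frame_end` handshake between `_vblank_handle()` and `vkms_crc_work_handle()` is what guarantees one CRC entry per frame even when `queue_work()` fails to schedule the worker for some vblanks. A toy, single-threaded model of that handshake (the real code runs the two sides concurrently under `state_lock` across a workqueue; all names here are illustrative):

```c
#include <stdint.h>

/* Toy model of the vkms frame_start/frame_end handshake. */
struct crc_model {
	uint64_t frame_start;	/* 0 means "no unprocessed range" */
	uint64_t frame_end;	/* last frame a CRC entry was added for */
	uint64_t entries;	/* CRC entries emitted so far */
	uint64_t next_frame;	/* frame number the next entry must carry */
};

/* vblank side: remember the first frame the worker has not seen yet */
static void model_vblank(struct crc_model *m, uint64_t frame)
{
	if (!m->frame_start)
		m->frame_start = frame;
}

/* worker side: backfill one entry per frame in [frame_start, now],
 * covering vblanks where queue_work() failed to schedule the worker */
static void model_crc_work(struct crc_model *m, uint64_t now)
{
	uint64_t f = m->frame_start;

	if (!f || f == m->frame_end)
		return;	/* nothing new since the last run */

	while (f <= now) {
		/* stands in for drm_crtc_add_crc_entry() */
		m->entries++;
		m->next_frame = ++f;
	}
	m->frame_end = now;
	m->frame_start = 0;	/* avoid reusing the same frame number */
}
```

Even if the worker runs on only a few of the vblanks, every frame number still receives exactly one entry.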
+1
drivers/gpu/drm/vkms/vkms_drv.c
··· 47 47 drm_atomic_helper_shutdown(&vkms->drm); 48 48 drm_mode_config_cleanup(&vkms->drm); 49 49 drm_dev_fini(&vkms->drm); 50 + destroy_workqueue(vkms->output.crc_workq); 50 51 } 51 52 52 53 static struct drm_driver vkms_driver = {
+58 -1
drivers/gpu/drm/vkms/vkms_drv.h
··· 20 20 DRM_FORMAT_XRGB8888, 21 21 }; 22 22 23 + struct vkms_crc_data { 24 + struct drm_rect src; 25 + struct drm_framebuffer fb; 26 + }; 27 + 28 + /** 29 + * vkms_plane_state - Driver specific plane state 30 + * @base: base plane state 31 + * @crc_data: data required for CRC computation 32 + */ 33 + struct vkms_plane_state { 34 + struct drm_plane_state base; 35 + struct vkms_crc_data *crc_data; 36 + }; 37 + 38 + /** 39 + * vkms_crtc_state - Driver specific CRTC state 40 + * @base: base CRTC state 41 + * @crc_work: work struct to compute and add CRC entries 42 + * @n_frame_start: start frame number for computed CRC 43 + * @n_frame_end: end frame number for computed CRC 44 + */ 45 + struct vkms_crtc_state { 46 + struct drm_crtc_state base; 47 + struct work_struct crc_work; 48 + u64 frame_start; 49 + u64 frame_end; 50 + }; 51 + 23 52 struct vkms_output { 24 53 struct drm_crtc crtc; 25 54 struct drm_encoder encoder; ··· 56 27 struct hrtimer vblank_hrtimer; 57 28 ktime_t period_ns; 58 29 struct drm_pending_vblank_event *event; 30 + bool crc_enabled; 31 + /* ordered wq for crc_work */ 32 + struct workqueue_struct *crc_workq; 33 + /* protects concurrent access to crc_data */ 34 + spinlock_t lock; 35 + /* protects concurrent access to crtc_state */ 36 + spinlock_t state_lock; 59 37 }; 60 38 61 39 struct vkms_device { ··· 75 39 struct drm_gem_object gem; 76 40 struct mutex pages_lock; /* Page lock used in page fault handler */ 77 41 struct page **pages; 42 + unsigned int vmap_count; 43 + void *vaddr; 78 44 }; 79 45 80 46 #define drm_crtc_to_vkms_output(target) \ ··· 84 46 85 47 #define drm_device_to_vkms_device(target) \ 86 48 container_of(target, struct vkms_device, drm) 49 + 50 + #define drm_gem_to_vkms_gem(target)\ 51 + container_of(target, struct vkms_gem_object, gem) 52 + 53 + #define to_vkms_crtc_state(target)\ 54 + container_of(target, struct vkms_crtc_state, base) 55 + 56 + #define to_vkms_plane_state(target)\ 57 + container_of(target, struct vkms_plane_state, 
base) 87 58 88 59 /* CRTC */ 89 60 int vkms_crtc_init(struct drm_device *dev, struct drm_crtc *crtc, ··· 112 65 u32 *handle, 113 66 u64 size); 114 67 115 - int vkms_gem_fault(struct vm_fault *vmf); 68 + vm_fault_t vkms_gem_fault(struct vm_fault *vmf); 116 69 117 70 int vkms_dumb_create(struct drm_file *file, struct drm_device *dev, 118 71 struct drm_mode_create_dumb *args); ··· 121 74 u32 handle, u64 *offset); 122 75 123 76 void vkms_gem_free_object(struct drm_gem_object *obj); 77 + 78 + int vkms_gem_vmap(struct drm_gem_object *obj); 79 + 80 + void vkms_gem_vunmap(struct drm_gem_object *obj); 81 + 82 + /* CRC Support */ 83 + int vkms_set_crc_source(struct drm_crtc *crtc, const char *src_name); 84 + int vkms_verify_crc_source(struct drm_crtc *crtc, const char *source_name, 85 + size_t *values_cnt); 86 + void vkms_crc_work_handle(struct work_struct *work); 124 87 125 88 #endif /* _VKMS_DRV_H_ */
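The accessor macros above (`drm_crtc_to_vkms_output`, `to_vkms_crtc_state`, ...) are all `container_of()`: given a pointer to an embedded base struct, recover the enclosing driver-private struct. A user-space sketch with a simplified `container_of` (no type check, and purely illustrative struct names):

```c
#include <stddef.h>

/* Simplified container_of(), as used by the accessors in vkms_drv.h
 * (the kernel version adds a compile-time type check). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Illustrative stand-ins for drm_crtc_state / vkms_crtc_state */
struct base_state {
	int active;
};

struct sub_state {
	int extra;		/* driver-private data */
	struct base_state base;	/* embedded base; need not be first */
};

#define to_sub_state(s) container_of(s, struct sub_state, base)
```

The DRM core only ever hands the driver a `struct base_state *`; the macro subtracts the member offset to get the wrapper back.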
+79 -4
drivers/gpu/drm/vkms/vkms_gem.c
··· 37 37 struct vkms_gem_object *gem = container_of(obj, struct vkms_gem_object, 38 38 gem); 39 39 40 - kvfree(gem->pages); 40 + WARN_ON(gem->pages); 41 + WARN_ON(gem->vaddr); 42 + 41 43 mutex_destroy(&gem->pages_lock); 42 44 drm_gem_object_release(obj); 43 45 kfree(gem); 44 46 } 45 47 46 - int vkms_gem_fault(struct vm_fault *vmf) 48 + vm_fault_t vkms_gem_fault(struct vm_fault *vmf) 47 49 { 48 50 struct vm_area_struct *vma = vmf->vma; 49 51 struct vkms_gem_object *obj = vma->vm_private_data; 50 52 unsigned long vaddr = vmf->address; 51 53 pgoff_t page_offset; 52 54 loff_t num_pages; 53 - int ret; 55 + vm_fault_t ret = VM_FAULT_SIGBUS; 54 56 55 57 page_offset = (vaddr - vma->vm_start) >> PAGE_SHIFT; 56 58 num_pages = DIV_ROUND_UP(obj->gem.size, PAGE_SIZE); ··· 60 58 if (page_offset > num_pages) 61 59 return VM_FAULT_SIGBUS; 62 60 63 - ret = -ENOENT; 64 61 mutex_lock(&obj->pages_lock); 65 62 if (obj->pages) { 66 63 get_page(obj->pages[page_offset]); ··· 176 175 unref: 177 176 drm_gem_object_put_unlocked(obj); 178 177 178 + return ret; 179 + } 180 + 181 + static struct page **_get_pages(struct vkms_gem_object *vkms_obj) 182 + { 183 + struct drm_gem_object *gem_obj = &vkms_obj->gem; 184 + 185 + if (!vkms_obj->pages) { 186 + struct page **pages = drm_gem_get_pages(gem_obj); 187 + 188 + if (IS_ERR(pages)) 189 + return pages; 190 + 191 + if (cmpxchg(&vkms_obj->pages, NULL, pages)) 192 + drm_gem_put_pages(gem_obj, pages, false, true); 193 + } 194 + 195 + return vkms_obj->pages; 196 + } 197 + 198 + void vkms_gem_vunmap(struct drm_gem_object *obj) 199 + { 200 + struct vkms_gem_object *vkms_obj = drm_gem_to_vkms_gem(obj); 201 + 202 + mutex_lock(&vkms_obj->pages_lock); 203 + if (vkms_obj->vmap_count < 1) { 204 + WARN_ON(vkms_obj->vaddr); 205 + WARN_ON(vkms_obj->pages); 206 + mutex_unlock(&vkms_obj->pages_lock); 207 + return; 208 + } 209 + 210 + vkms_obj->vmap_count--; 211 + 212 + if (vkms_obj->vmap_count == 0) { 213 + vunmap(vkms_obj->vaddr); 214 + vkms_obj->vaddr = NULL; 215 
+ drm_gem_put_pages(obj, vkms_obj->pages, false, true); 216 + vkms_obj->pages = NULL; 217 + } 218 + 219 + mutex_unlock(&vkms_obj->pages_lock); 220 + } 221 + 222 + int vkms_gem_vmap(struct drm_gem_object *obj) 223 + { 224 + struct vkms_gem_object *vkms_obj = drm_gem_to_vkms_gem(obj); 225 + int ret = 0; 226 + 227 + mutex_lock(&vkms_obj->pages_lock); 228 + 229 + if (!vkms_obj->vaddr) { 230 + unsigned int n_pages = obj->size >> PAGE_SHIFT; 231 + struct page **pages = _get_pages(vkms_obj); 232 + 233 + if (IS_ERR(pages)) { 234 + ret = PTR_ERR(pages); 235 + goto out; 236 + } 237 + 238 + vkms_obj->vaddr = vmap(pages, n_pages, VM_MAP, PAGE_KERNEL); 239 + if (!vkms_obj->vaddr) 240 + goto err_vmap; 241 + } 242 + 243 + vkms_obj->vmap_count++; 244 + goto out; 245 + 246 + err_vmap: 247 + ret = -ENOMEM; 248 + drm_gem_put_pages(obj, vkms_obj->pages, false, true); 249 + vkms_obj->pages = NULL; 250 + out: 251 + mutex_unlock(&vkms_obj->pages_lock); 179 252 return ret; 180 253 }
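`vkms_gem_vmap()`/`vkms_gem_vunmap()` implement a simple refcounted mapping: the first map builds the mapping, later maps only bump `vmap_count`, and the mapping is torn down only when the count drops back to zero. A toy user-space model of that contract (`malloc()` stands in for `drm_gem_get_pages()` + `vmap()`, and the `pages_lock` serialization is omitted):

```c
#include <stdlib.h>

/* Toy model of the refcounted vmap contract in vkms_gem.c. */
struct toy_gem {
	unsigned int vmap_count;
	void *vaddr;
};

static int toy_vmap(struct toy_gem *o)
{
	if (!o->vaddr) {
		o->vaddr = malloc(4096);	/* first user builds the mapping */
		if (!o->vaddr)
			return -1;	/* -ENOMEM in the real code */
	}
	o->vmap_count++;
	return 0;
}

static void toy_vunmap(struct toy_gem *o)
{
	if (o->vmap_count < 1)
		return;	/* unbalanced unmap; the driver WARNs here */

	if (--o->vmap_count == 0) {
		free(o->vaddr);		/* last user tears it down */
		o->vaddr = NULL;
	}
}
```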
+137 -3
drivers/gpu/drm/vkms/vkms_plane.c
··· 8 8 9 9 #include "vkms_drv.h" 10 10 #include <drm/drm_plane_helper.h> 11 + #include <drm/drm_atomic.h> 11 12 #include <drm/drm_atomic_helper.h> 13 + #include <drm/drm_gem_framebuffer_helper.h> 14 + 15 + static struct drm_plane_state * 16 + vkms_plane_duplicate_state(struct drm_plane *plane) 17 + { 18 + struct vkms_plane_state *vkms_state; 19 + struct vkms_crc_data *crc_data; 20 + 21 + vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL); 22 + if (!vkms_state) 23 + return NULL; 24 + 25 + crc_data = kzalloc(sizeof(*crc_data), GFP_KERNEL); 26 + if (WARN_ON(!crc_data)) 27 + DRM_INFO("Couldn't allocate crc_data"); 28 + 29 + vkms_state->crc_data = crc_data; 30 + 31 + __drm_atomic_helper_plane_duplicate_state(plane, 32 + &vkms_state->base); 33 + 34 + return &vkms_state->base; 35 + } 36 + 37 + static void vkms_plane_destroy_state(struct drm_plane *plane, 38 + struct drm_plane_state *old_state) 39 + { 40 + struct vkms_plane_state *vkms_state = to_vkms_plane_state(old_state); 41 + struct drm_crtc *crtc = vkms_state->base.crtc; 42 + 43 + if (crtc) { 44 + /* dropping the reference we acquired in 45 + * vkms_primary_plane_update() 46 + */ 47 + if (drm_framebuffer_read_refcount(&vkms_state->crc_data->fb)) 48 + drm_framebuffer_put(&vkms_state->crc_data->fb); 49 + } 50 + 51 + kfree(vkms_state->crc_data); 52 + vkms_state->crc_data = NULL; 53 + 54 + __drm_atomic_helper_plane_destroy_state(old_state); 55 + kfree(vkms_state); 56 + } 57 + 58 + static void vkms_plane_reset(struct drm_plane *plane) 59 + { 60 + struct vkms_plane_state *vkms_state; 61 + 62 + if (plane->state) 63 + vkms_plane_destroy_state(plane, plane->state); 64 + 65 + vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL); 66 + if (!vkms_state) { 67 + DRM_ERROR("Cannot allocate vkms_plane_state\n"); 68 + return; 69 + } 70 + 71 + plane->state = &vkms_state->base; 72 + plane->state->plane = plane; 73 + } 12 74 13 75 static const struct drm_plane_funcs vkms_plane_funcs = { 14 76 .update_plane = 
drm_atomic_helper_update_plane, 15 77 .disable_plane = drm_atomic_helper_disable_plane, 16 78 .destroy = drm_plane_cleanup, 17 - .reset = drm_atomic_helper_plane_reset, 18 - .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state, 19 - .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, 79 + .reset = vkms_plane_reset, 80 + .atomic_duplicate_state = vkms_plane_duplicate_state, 81 + .atomic_destroy_state = vkms_plane_destroy_state, 20 82 }; 21 83 22 84 static void vkms_primary_plane_update(struct drm_plane *plane, 23 85 struct drm_plane_state *old_state) 24 86 { 87 + struct vkms_plane_state *vkms_plane_state; 88 + struct vkms_crc_data *crc_data; 89 + 90 + if (!plane->state->crtc || !plane->state->fb) 91 + return; 92 + 93 + vkms_plane_state = to_vkms_plane_state(plane->state); 94 + crc_data = vkms_plane_state->crc_data; 95 + memcpy(&crc_data->src, &plane->state->src, sizeof(struct drm_rect)); 96 + memcpy(&crc_data->fb, plane->state->fb, sizeof(struct drm_framebuffer)); 97 + drm_framebuffer_get(&crc_data->fb); 98 + } 99 + 100 + static int vkms_plane_atomic_check(struct drm_plane *plane, 101 + struct drm_plane_state *state) 102 + { 103 + struct drm_crtc_state *crtc_state; 104 + int ret; 105 + 106 + if (!state->fb || !state->crtc) 107 + return 0; 108 + 109 + crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 110 + if (IS_ERR(crtc_state)) 111 + return PTR_ERR(crtc_state); 112 + 113 + ret = drm_atomic_helper_check_plane_state(state, crtc_state, 114 + DRM_PLANE_HELPER_NO_SCALING, 115 + DRM_PLANE_HELPER_NO_SCALING, 116 + false, true); 117 + if (ret != 0) 118 + return ret; 119 + 120 + /* for now primary plane must be visible and full screen */ 121 + if (!state->visible) 122 + return -EINVAL; 123 + 124 + return 0; 125 + } 126 + 127 + static int vkms_prepare_fb(struct drm_plane *plane, 128 + struct drm_plane_state *state) 129 + { 130 + struct drm_gem_object *gem_obj; 131 + struct vkms_gem_object *vkms_obj; 132 + int ret; 133 + 134 + if 

(!state->fb) 135 + return 0; 136 + 137 + gem_obj = drm_gem_fb_get_obj(state->fb, 0); 138 + vkms_obj = drm_gem_to_vkms_gem(gem_obj); 139 + ret = vkms_gem_vmap(gem_obj); 140 + if (ret) 141 + DRM_ERROR("vmap failed: %d\n", ret); 142 + 143 + return drm_gem_fb_prepare_fb(plane, state); 144 + } 145 + 146 + static void vkms_cleanup_fb(struct drm_plane *plane, 147 + struct drm_plane_state *old_state) 148 + { 149 + struct drm_gem_object *gem_obj; 150 + 151 + if (!old_state->fb) 152 + return; 153 + 154 + gem_obj = drm_gem_fb_get_obj(old_state->fb, 0); 155 + vkms_gem_vunmap(gem_obj); 25 156 } 26 157 27 158 static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = { 28 159 .atomic_update = vkms_primary_plane_update, 160 + .atomic_check = vkms_plane_atomic_check, 161 + .prepare_fb = vkms_prepare_fb, 162 + .cleanup_fb = vkms_cleanup_fb, 29 163 }; 30 164 31 165 struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev)
+1 -3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 720 720 return; 721 721 } 722 722 723 - plane->state = &vps->base; 724 - plane->state->plane = plane; 725 - plane->state->rotation = DRM_MODE_ROTATE_0; 723 + __drm_atomic_helper_plane_reset(plane, &vps->base); 726 724 } 727 725 728 726
+1 -1
drivers/gpu/drm/xen/xen_drm_front_gem.c
··· 179 179 struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); 180 180 181 181 if (!xen_obj->pages) 182 - return NULL; 182 + return ERR_PTR(-ENOMEM); 183 183 184 184 return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages); 185 185 }
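The xen change above returns `ERR_PTR(-ENOMEM)` instead of NULL so that callers checking `IS_ERR()` see a real error code rather than a bare NULL. The kernel encodes small negative errnos in the top 4095 addresses, which no valid pointer can occupy; a simplified user-space version of the `err.h` helpers:

```c
#include <stddef.h>

/* Simplified versions of the kernel's err.h helpers: negative errno
 * values in [-MAX_ERRNO, -1] are folded into the top 4095 addresses. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```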
+61 -2
drivers/video/fbdev/core/fbmem.c
··· 34 34 #include <linux/fb.h> 35 35 #include <linux/fbcon.h> 36 36 #include <linux/mem_encrypt.h> 37 + #include <linux/pci.h> 37 38 38 39 #include <asm/fb.h> 39 40 ··· 1606 1605 (primary && gen_aper && gen_aper->count && 1607 1606 gen_aper->ranges[0].base == VGA_FB_PHYS)) { 1608 1607 1609 - printk(KERN_INFO "fb: switching to %s from %s\n", 1610 - name, registered_fb[i]->fix.id); 1608 + printk(KERN_INFO "fb%d: switching to %s from %s\n", 1609 + i, name, registered_fb[i]->fix.id); 1611 1610 ret = do_unregister_framebuffer(registered_fb[i]); 1612 1611 if (ret) 1613 1612 return ret; ··· 1794 1793 } 1795 1794 EXPORT_SYMBOL(unlink_framebuffer); 1796 1795 1796 + /** 1797 + * remove_conflicting_framebuffers - remove firmware-configured framebuffers 1798 + * @a: memory range, users of which are to be removed 1799 + * @name: requesting driver name 1800 + * @primary: also kick vga16fb if present 1801 + * 1802 + * This function removes framebuffer devices (initialized by firmware/bootloader) 1803 + * which use memory range described by @a. If @a is NULL all such devices are 1804 + * removed. 
1805 + */ 1797 1806 int remove_conflicting_framebuffers(struct apertures_struct *a, 1798 1807 const char *name, bool primary) 1799 1808 { 1800 1809 int ret; 1810 + bool do_free = false; 1811 + 1812 + if (!a) { 1813 + a = alloc_apertures(1); 1814 + if (!a) 1815 + return -ENOMEM; 1816 + 1817 + a->ranges[0].base = 0; 1818 + a->ranges[0].size = ~0; 1819 + do_free = true; 1820 + } 1801 1821 1802 1822 mutex_lock(&registration_lock); 1803 1823 ret = do_remove_conflicting_framebuffers(a, name, primary); 1804 1824 mutex_unlock(&registration_lock); 1805 1825 1826 + if (do_free) 1827 + kfree(a); 1828 + 1806 1829 return ret; 1807 1830 } 1808 1831 EXPORT_SYMBOL(remove_conflicting_framebuffers); 1832 + 1833 + /** 1834 + * remove_conflicting_pci_framebuffers - remove firmware-configured framebuffers for PCI devices 1835 + * @pdev: PCI device 1836 + * @resource_id: index of PCI BAR configuring framebuffer memory 1837 + * @name: requesting driver name 1838 + * 1839 + * This function removes framebuffer devices (eg. initialized by firmware) 1840 + * using memory range configured for @pdev's BAR @resource_id. 1841 + * 1842 + * The function assumes that PCI device with shadowed ROM drives a primary 1843 + * display and so kicks out vga16fb. 
1844 + */ 1845 + int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id, const char *name) 1846 + { 1847 + struct apertures_struct *ap; 1848 + bool primary = false; 1849 + int err; 1850 + 1851 + ap = alloc_apertures(1); 1852 + if (!ap) 1853 + return -ENOMEM; 1854 + 1855 + ap->ranges[0].base = pci_resource_start(pdev, res_id); 1856 + ap->ranges[0].size = pci_resource_len(pdev, res_id); 1857 + #ifdef CONFIG_X86 1858 + primary = pdev->resource[PCI_ROM_RESOURCE].flags & 1859 + IORESOURCE_ROM_SHADOW; 1860 + #endif 1861 + err = remove_conflicting_framebuffers(ap, name, primary); 1862 + kfree(ap); 1863 + return err; 1864 + } 1865 + EXPORT_SYMBOL(remove_conflicting_pci_framebuffers); 1809 1866 1810 1867 /** 1811 1868 * register_framebuffer - registers a frame buffer device
+2
include/drm/drm_atomic_helper.h
··· 156 156 void drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc, 157 157 struct drm_crtc_state *state); 158 158 159 + void __drm_atomic_helper_plane_reset(struct drm_plane *plane, 160 + struct drm_plane_state *state); 159 161 void drm_atomic_helper_plane_reset(struct drm_plane *plane); 160 162 void __drm_atomic_helper_plane_duplicate_state(struct drm_plane *plane, 161 163 struct drm_plane_state *state);
+6
include/drm/drm_blend.h
··· 27 27 #include <linux/ctype.h> 28 28 #include <drm/drm_mode.h> 29 29 30 + #define DRM_MODE_BLEND_PREMULTI 0 31 + #define DRM_MODE_BLEND_COVERAGE 1 32 + #define DRM_MODE_BLEND_PIXEL_NONE 2 33 + 30 34 struct drm_device; 31 35 struct drm_atomic_state; 32 36 struct drm_plane; ··· 56 52 unsigned int zpos); 57 53 int drm_atomic_normalize_zpos(struct drm_device *dev, 58 54 struct drm_atomic_state *state); 55 + int drm_plane_create_blend_mode_property(struct drm_plane *plane, 56 + unsigned int supported_modes); 59 57 #endif
+39 -2
include/drm/drm_crtc.h
··· 744 744 * 745 745 * 0 on success or a negative error code on failure. 746 746 */ 747 - int (*set_crc_source)(struct drm_crtc *crtc, const char *source, 748 - size_t *values_cnt); 747 + int (*set_crc_source)(struct drm_crtc *crtc, const char *source); 748 + /** 749 + * @verify_crc_source: 750 + * 751 + * Verifies the source of frame CRC checksums before the CRC source is 752 + * set and when the CRC file is opened. The source parameter may be 753 + * NULL when the CRC source is being disabled. 754 + * 755 + * This callback is optional if the driver does not support any CRC 756 + * generation functionality. 757 + * 758 + * RETURNS: 759 + * 760 + * 0 on success or a negative error code on failure. 761 + */ 762 + int (*verify_crc_source)(struct drm_crtc *crtc, const char *source, 763 + size_t *values_cnt); 764 + /** 765 + * @get_crc_sources: 766 + * 767 + * Driver callback for getting the list of all available sources for 768 + * CRC generation. This callback depends on @verify_crc_source, so 769 + * @verify_crc_source should be implemented before this one. The 770 + * driver may pass the full list of available CRC sources; this 771 + * callback verifies each source before passing it to 772 + * userspace. 773 + * 774 + * This callback is optional if the driver does not support exporting a 775 + * list of possible CRC sources. 776 + * 777 + * RETURNS: 778 + * 779 + * A constant character pointer to the list of all available CRC 780 + * sources; on failure the driver should return NULL. @count must be 781 + * updated with the number of sources in the list; if it is zero, no 782 + * source from the list is processed. 783 + */ 784 + const char *const *(*get_crc_sources)(struct drm_crtc *crtc, 785 + size_t *count); 749 786 750 787 /** 751 788 * @atomic_print_state:
+3 -2
include/drm/drm_dp_helper.h
··· 123 123 # define DP_FRAMING_CHANGE_CAP (1 << 1) 124 124 # define DP_DPCD_DISPLAY_CONTROL_CAPABLE (1 << 3) /* edp v1.2 or higher */ 125 125 126 - #define DP_TRAINING_AUX_RD_INTERVAL 0x00e /* XXX 1.2? */ 127 - # define DP_TRAINING_AUX_RD_MASK 0x7F /* XXX 1.2? */ 126 + #define DP_TRAINING_AUX_RD_INTERVAL 0x00e /* XXX 1.2? */ 127 + # define DP_TRAINING_AUX_RD_MASK 0x7F /* DP 1.3 */ 128 + # define DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT (1 << 7) /* DP 1.3 */ 128 129 129 130 #define DP_ADAPTER_CAP 0x00f /* 1.2 */ 130 131 # define DP_FORCE_LOAD_SENSE_CAP (1 << 0)
-1
include/drm/drm_fb_cma_helper.h
··· 26 26 27 27 void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma); 28 28 void drm_fbdev_cma_hotplug_event(struct drm_fbdev_cma *fbdev_cma); 29 - void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, bool state); 30 29 void drm_fbdev_cma_set_suspend_unlocked(struct drm_fbdev_cma *fbdev_cma, 31 30 bool state); 32 31
+12
include/drm/drm_fb_helper.h
··· 615 615 #endif 616 616 } 617 617 618 + static inline int 619 + drm_fb_helper_remove_conflicting_pci_framebuffers(struct pci_dev *pdev, 620 + int resource_id, 621 + const char *name) 622 + { 623 + #if IS_REACHABLE(CONFIG_FB) 624 + return remove_conflicting_pci_framebuffers(pdev, resource_id, name); 625 + #else 626 + return 0; 627 + #endif 628 + } 629 + 618 630 #endif
+1
include/drm/drm_panel.h
··· 82 82 * @drm: DRM device owning the panel 83 83 * @connector: DRM connector that the panel is attached to 84 84 * @dev: parent device of the panel 85 + * @link: link from panel device (supplier) to DRM device (consumer) 85 86 * @funcs: operations that can be performed on the panel 86 87 * @list: panel entry in registry 87 88 */
+16
include/drm/drm_plane.h
··· 119 119 u16 alpha; 120 120 121 121 /** 122 + * @pixel_blend_mode: 123 + * The alpha blending equation selection, describing how the pixels from 124 + * the current plane are composited with the background. Value can be 125 + * one of DRM_MODE_BLEND_* 126 + */ 127 + uint16_t pixel_blend_mode; 128 + 129 + /** 122 130 * @rotation: 123 131 * Rotation of the plane. See drm_plane_create_rotation_property() for 124 132 * more details. ··· 667 659 * drm_plane_create_rotation_property(). 668 660 */ 669 661 struct drm_property *rotation_property; 662 + /** 663 + * @blend_mode_property: 664 + * Optional "pixel blend mode" enum property for this plane. 665 + * Blend mode property represents the alpha blending equation selection, 666 + * describing how the pixels from the current plane are composited with 667 + * the background. 668 + */ 669 + struct drm_property *blend_mode_property; 670 670 671 671 /** 672 672 * @color_encoding_property:
+1 -1
include/drm/drm_print.h
··· 381 381 382 382 #define DRM_DEV_DEBUG_DP(dev, fmt, ...) \ 383 383 drm_dev_dbg(dev, DRM_UT_DP, fmt, ## __VA_ARGS__) 384 - #define DRM_DEBUG_DP(dev, fmt, ...) \ 384 + #define DRM_DEBUG_DP(fmt, ...) \ 385 385 drm_dbg(DRM_UT_DP, fmt, ## __VA_ARGS__) 386 386 387 387 #define _DRM_DEV_DEFINE_DEBUG_RATELIMITED(dev, category, fmt, ...) \
-5
include/drm/drm_syncobj.h
··· 131 131 132 132 struct drm_syncobj *drm_syncobj_find(struct drm_file *file_private, 133 133 u32 handle); 134 - void drm_syncobj_add_callback(struct drm_syncobj *syncobj, 135 - struct drm_syncobj_cb *cb, 136 - drm_syncobj_func_t func); 137 - void drm_syncobj_remove_callback(struct drm_syncobj *syncobj, 138 - struct drm_syncobj_cb *cb); 139 134 void drm_syncobj_replace_fence(struct drm_syncobj *syncobj, 140 135 struct dma_fence *fence); 141 136 int drm_syncobj_find_fence(struct drm_file *file_private,
+2
include/linux/fb.h
··· 632 632 extern int register_framebuffer(struct fb_info *fb_info); 633 633 extern int unregister_framebuffer(struct fb_info *fb_info); 634 634 extern int unlink_framebuffer(struct fb_info *fb_info); 635 + extern int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id, 636 + const char *name); 635 637 extern int remove_conflicting_framebuffers(struct apertures_struct *a, 636 638 const char *name, bool primary); 637 639 extern int fb_prepare_logo(struct fb_info *fb_info, int rotate);
+36
include/uapi/drm/drm_fourcc.h
··· 30 30 extern "C" { 31 31 #endif 32 32 33 + /** 34 + * DOC: overview 35 + * 36 + * In the DRM subsystem, framebuffer pixel formats are described using the 37 + * fourcc codes defined in `include/uapi/drm/drm_fourcc.h`. In addition to the 38 + * fourcc code, a Format Modifier may optionally be provided, in order to 39 + * further describe the buffer's format - for example tiling or compression. 40 + * 41 + * Format Modifiers 42 + * ---------------- 43 + * 44 + * Format modifiers are used in conjunction with a fourcc code, forming a 45 + * unique fourcc:modifier pair. This format:modifier pair must fully define the 46 + * format and data layout of the buffer, and should be the only way to describe 47 + * that particular buffer. 48 + * 49 + * Having multiple fourcc:modifier pairs which describe the same layout should 50 + * be avoided, as such aliases run the risk of different drivers exposing 51 + * different names for the same data format, forcing userspace to understand 52 + * that they are aliases. 53 + * 54 + * Format modifiers may change any property of the buffer, including the number 55 + * of planes and/or the required allocation size. Format modifiers are 56 + * vendor-namespaced, and as such the relationship between a fourcc code and a 57 + * modifier is specific to the modifier being used. For example, some modifiers 58 + * may preserve meaning - such as number of planes - from the fourcc code, 59 + * whereas others may not. 60 + * 61 + * Vendors should document their modifier usage in as much detail as 62 + * possible, to ensure maximum compatibility across devices, drivers and 63 + * applications. 64 + * 65 + * The authoritative list of format modifier codes is found in 66 + * `include/uapi/drm/drm_fourcc.h` 67 + */ 68 + 33 69 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \ 34 70 ((__u32)(c) << 16) | ((__u32)(d) << 24)) 35 71
+33
include/uapi/linux/udmabuf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef _UAPI_LINUX_UDMABUF_H 3 + #define _UAPI_LINUX_UDMABUF_H 4 + 5 + #include <linux/types.h> 6 + #include <linux/ioctl.h> 7 + 8 + #define UDMABUF_FLAGS_CLOEXEC 0x01 9 + 10 + struct udmabuf_create { 11 + __u32 memfd; 12 + __u32 flags; 13 + __u64 offset; 14 + __u64 size; 15 + }; 16 + 17 + struct udmabuf_create_item { 18 + __u32 memfd; 19 + __u32 __pad; 20 + __u64 offset; 21 + __u64 size; 22 + }; 23 + 24 + struct udmabuf_create_list { 25 + __u32 flags; 26 + __u32 count; 27 + struct udmabuf_create_item list[]; 28 + }; 29 + 30 + #define UDMABUF_CREATE _IOW('u', 0x42, struct udmabuf_create) 31 + #define UDMABUF_CREATE_LIST _IOW('u', 0x43, struct udmabuf_create_list) 32 + 33 + #endif /* _UAPI_LINUX_UDMABUF_H */
+5
tools/testing/selftests/drivers/dma-buf/Makefile
··· 1 + CFLAGS += -I../../../../../usr/include/ 2 + 3 + TEST_GEN_PROGS := udmabuf 4 + 5 + include ../../lib.mk
+96
tools/testing/selftests/drivers/dma-buf/udmabuf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <stdio.h> 3 + #include <stdlib.h> 4 + #include <unistd.h> 5 + #include <string.h> 6 + #include <errno.h> 7 + #include <fcntl.h> 8 + #include <malloc.h> 9 + 10 + #include <sys/ioctl.h> 11 + #include <sys/syscall.h> 12 + #include <linux/memfd.h> 13 + #include <linux/udmabuf.h> 14 + 15 + #define TEST_PREFIX "drivers/dma-buf/udmabuf" 16 + #define NUM_PAGES 4 17 + 18 + static int memfd_create(const char *name, unsigned int flags) 19 + { 20 + return syscall(__NR_memfd_create, name, flags); 21 + } 22 + 23 + int main(int argc, char *argv[]) 24 + { 25 + struct udmabuf_create create; 26 + int devfd, memfd, buf, ret; 27 + off_t size; 28 + void *mem; 29 + 30 + devfd = open("/dev/udmabuf", O_RDWR); 31 + if (devfd < 0) { 32 + printf("%s: [skip,no-udmabuf]\n", TEST_PREFIX); 33 + exit(77); 34 + } 35 + 36 + memfd = memfd_create("udmabuf-test", MFD_CLOEXEC); 37 + if (memfd < 0) { 38 + printf("%s: [skip,no-memfd]\n", TEST_PREFIX); 39 + exit(77); 40 + } 41 + 42 + size = getpagesize() * NUM_PAGES; 43 + ret = ftruncate(memfd, size); 44 + if (ret == -1) { 45 + printf("%s: [FAIL,memfd-truncate]\n", TEST_PREFIX); 46 + exit(1); 47 + } 48 + 49 + memset(&create, 0, sizeof(create)); 50 + 51 + /* should fail (offset not page aligned) */ 52 + create.memfd = memfd; 53 + create.offset = getpagesize()/2; 54 + create.size = getpagesize(); 55 + buf = ioctl(devfd, UDMABUF_CREATE, &create); 56 + if (buf >= 0) { 57 + printf("%s: [FAIL,test-1]\n", TEST_PREFIX); 58 + exit(1); 59 + } 60 + 61 + /* should fail (size not multiple of page) */ 62 + create.memfd = memfd; 63 + create.offset = 0; 64 + create.size = getpagesize()/2; 65 + buf = ioctl(devfd, UDMABUF_CREATE, &create); 66 + if (buf >= 0) { 67 + printf("%s: [FAIL,test-2]\n", TEST_PREFIX); 68 + exit(1); 69 + } 70 + 71 + /* should fail (not memfd) */ 72 + create.memfd = 0; /* stdin */ 73 + create.offset = 0; 74 + create.size = size; 75 + buf = ioctl(devfd, UDMABUF_CREATE, &create); 76 + if (buf >= 0) { 77 + printf("%s: [FAIL,test-3]\n", TEST_PREFIX); 78 + exit(1); 79 + } 80 + 81 + /* should work */ 82 + create.memfd = memfd; 83 + create.offset = 0; 84 + create.size = size; 85 + buf = ioctl(devfd, UDMABUF_CREATE, &create); 86 + if (buf < 0) { 87 + printf("%s: [FAIL,test-4]\n", TEST_PREFIX); 88 + exit(1); 89 + } 90 + 91 + fprintf(stderr, "%s: ok\n", TEST_PREFIX); 92 + close(buf); 93 + close(memfd); 94 + close(devfd); 95 + return 0; 96 + }